So says Eleonora Harwich, co-author of a new report published by think tank Reform, entitled "Thinking on its own: AI in the NHS".
Harwich, who is head of digital and technological innovation at Reform, said: "AI has been around in the health system for years and has shown very positive results. What is new are things like machine learning, which has the capacity to help make the NHS more efficient."
However, the use of AI in healthcare is controversial; last year, the Information Commissioner’s Office ruled that an AI trial between the Royal Free NHS Foundation Trust and Google DeepMind did not comply with the Data Protection Act.
The regulator concluded that the trust failed to comply when it provided the details of 1.6 million patients to test an alerting system for acute kidney injury, named Streams.
With this in mind, the report argues that “public safety and ethical concerns relating to the use of AI in the NHS” should be a central matter for healthcare bodies, including the National Institute for Health and Care Excellence.
In response to the report, NHS Digital's director of data, Professor Daniel Ray, said in a statement: "We need to make sure that the data provided for use in AI algorithms is designed with the best interests of patients at the forefront of all decision making. In specialist areas AI has great potential for success, and there are good examples of this starting to happen in the NHS, but we need to understand and evaluate this to move it forwards.
“We know that health data is personal and sensitive, so there are rightly strict rules in place about how and when it can be used or shared. We need to ensure that any new developments harness the power of data but that they do so responsibly and within the legal frameworks.”