With cardiovascular disease causing a third of deaths worldwide, according to a 2017 study, it is little wonder that Dr Amitava Banerjee, a specialist in heart failure and cardiac services, is interested in how artificial intelligence can help to bring this figure down. However, in the course of reviewing the research, he has come across some promising developments but also some issues of concern.
Banerjee referenced Dr Eric Topol, who has detailed the ways in which artificial intelligence is being used in healthcare. These include diagnosing or checking diagnoses of breast or lung cancers, examining retinopathy scans in ophthalmology, and reading echocardiograms in cardiology. However, Topol warned: “We are far from demonstrating very high reproducible machine accuracy, let alone clinical utility for most medical scans and images, in the real world clinical environment.”
For Banerjee there are several issues of concern with regard to widespread implementation of AI in healthcare. The first is what he calls the digital divide. “If now everybody has the ability to provide their data, then how are they going to be put into the algorithm for AI?” he asked. He pointed out that companies sometimes create symptom checkers for diseases that have only been piloted on patients who are younger and in better health than those he would see in his clinic. He also cited an alarming statistic from the field of genomics.
“In 2009 only 4% of the studies in genomics concerned patients who had non-European ancestry. In 2016 we were not yet up to 20%. So how can I benefit from an AI algorithm where my data was never in there?”
The second issue is the ownership of the data which he called “the most contentious issue.” Banerjee brought up the case of the Royal Free NHS Foundation Trust. The trust got into hot water with the Information Commissioner for failing to comply with the Data Protection Act when it provided patient details to Google DeepMind. He said that it needs to be clear who owns the data, whether it will be used for research purposes, and the distance between an arm’s length subsidiary that specialises in data and its parent company.
The third issue is data quality. Data of low quality, whether because of missing values or poor accuracy, cannot be used. A Cochrane Review of two studies of melanomas and skin lesions found that “current smartphone applications using automated analysis are observed to have a high chance of missing melanomas (false negatives).”
Banerjee said: “With AI, are we missing important things and telling people they are unwell when they are not?” Another issue related to data quality is inconsistent data. One of his specialisms is heart failure, the definition of which varies across different study designs, trials and observational studies. “How are you going to cater for all of that in your AI algorithm when the data is messy?” he said.
He also highlighted the issue of training amongst healthcare professionals. He said that while doctors train for five years at medical school and for 10 more years in their specialism, provision of training in big data methods, informatics and AI is currently very poor. “We are working with the Royal College and with medical schools to see how we can develop this training, because the doctors of today and tomorrow are going to have to use this technology and evaluate it,” he said.
Banerjee, a senior lecturer in clinical data science and honorary consultant in cardiology at the Farr Institute of Health Informatics, has valid misgivings about the application of AI in healthcare. Perhaps we are currently riding the crest of a hype wave surrounding AI, and it is time to come back down to earth and address the concerns raised. The intelligent use of information and technology to provide better care for patients has fantastic potential, but its advancement must proceed safely and in a way that includes all patients.
Dr Amitava Banerjee was speaking at a Lunch Hour Lecture at University College London.