Baw focused on the lack of overall guidelines for the use of AI in clinical settings, observing that with experts currently petitioning governments around the world for better regulation, “it’s very difficult to openly support embedding AI tools into the functioning of a system that’s so important to our well-being as the NHS”.
More time and research are necessary before the health service can understand the potential risks associated with clinical uses of AI, Baw said, noting that clinicians can be held responsible by the General Medical Council (GMC) for decisions they make as well as by civil and criminal courts. “We’re the ones who will eventually carry the can for decisions made using AI,” he added.
Baw also said he was concerned about the future use of proprietary data sets in AI algorithms.
“One of the things that really concerns me about AI is that while the actual machine-learning algorithms can be open source, unless the training data is also open source and the actual final model, once it has been trained, is also open source, then we risk entire swathes of medical practice becoming proprietary,” he added.
Proprietary AI could also expose future clinicians to legal risk if they criticise a treatment modality, he said.