Safe, effective, and equitable
AI has massive potential to help us live healthier, happier lives
AI can help us unlock improved & personalised healthcare. But it also has the potential to create or exacerbate harm for certain groups of people.
We focus on enabling AI which is safe, effective, and equitable, allowing everyone to benefit from digital transformation in healthcare whilst protecting minoritised groups from harm.
Like all medical interventions, AI medical devices have risks and benefits, and can sometimes cause harm to patients. Unfortunately, because these technologies are so new, such harms may go unrecognised. Our team has a track record of adapting existing analytical and evaluation techniques to ensure AI medical devices are safe for our patients.
Learn more about our work on the medical algorithmic audit, and reporting guidelines for studies involving AI medical devices.
For AI medical devices to help patients, they have to work as expected. We have collaborated with the National Institute for Health and Care Excellence (NICE) to update the evidence standards framework for digital health technologies, enabling commissioners and other key decision-makers to select the AI and digital health technologies which best serve the needs of patients.
Recent research has flagged biased performance in healthcare AI systems. This means that certain groups of patients are less able to benefit from these technologies, and are more likely to be harmed by them, than others. Often these biases disproportionately affect minoritised groups in society - for instance by unjustly withholding healthcare resources from Black patients, or by generating less accurate diagnoses for under-served patient groups.
Our research has shown that datasets used to develop healthcare AI systems are often biased, contributing to inequitable performance when these systems are used with patients.
Learn more about our STANDING Together project, which aims to set standards to encourage generalisability and transparency of healthcare datasets.