Artificial Intelligence Safety Monitoring

We advocate for collaborative, multi-stakeholder monitoring of clinical AI to ensure effectiveness, inclusivity, and patient safety.

Clinical AI technologies can improve the provision of healthcare in numerous ways. However, there is still limited understanding of how these technologies should be evaluated and monitored from a safety perspective.

To strengthen safety practices, our team published the Medical Algorithmic Audit: a framework for understanding the weaknesses of an artificial intelligence system and putting mechanisms in place to mitigate their impact.
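
A core activity within such an audit is subgroup testing: comparing error rates across patient groups to surface candidate failure modes. The sketch below illustrates the idea in Python; the file name, column names, and stratifying attribute are hypothetical examples, not prescribed by the published framework.

    # Illustrative sketch of subgroup error analysis, one element of an
    # algorithmic audit. The data file and its columns are hypothetical.
    import pandas as pd

    # One row per evaluation case: model prediction, ground-truth label,
    # and a patient attribute to stratify by.
    df = pd.read_csv("evaluation_cases.csv")  # columns: prediction, label, ethnicity
    df["error"] = df["prediction"] != df["label"]

    # Compare each subgroup's error rate with the overall rate; large gaps
    # flag candidate failure modes for deeper investigation.
    overall = df["error"].mean()
    by_group = df.groupby("ethnicity")["error"].agg(["mean", "count"])
    by_group["gap_vs_overall"] = by_group["mean"] - overall

    print(f"Overall error rate: {overall:.3f}")
    print(by_group.sort_values("gap_vs_overall", ascending=False))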

Overview of the Medical Algorithmic Audit

Figure: (A) Overview of the internal audit framework, reproduced from Raji and colleagues. Grey boxes represent processes; red boxes represent documents produced by the auditors; blue boxes represent documents produced by the engineering and product teams; and green boxes represent jointly developed documents. (B) Proposed modifications for the medical algorithmic audit. FMEA=failure modes and effects analysis.
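
The failure modes and effects analysis (FMEA) step referenced in the figure ranks failure modes by risk so that mitigation effort can be prioritised. Below is a minimal sketch of the conventional FMEA calculation; the failure modes and scores are invented for illustration and are not drawn from any real audit.

    # Minimal FMEA sketch: rank failure modes by the conventional risk
    # priority number, RPN = severity x occurrence x detection.
    # The failure modes and scores below are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class FailureMode:
        description: str
        severity: int    # 1 (negligible) to 10 (catastrophic)
        occurrence: int  # 1 (rare) to 10 (frequent)
        detection: int   # 1 (always caught) to 10 (never caught)

        @property
        def rpn(self) -> int:
            return self.severity * self.occurrence * self.detection

    modes = [
        FailureMode("Missed finding on low-quality scans", 9, 3, 6),
        FailureMode("Performance drop after scanner upgrade", 7, 4, 7),
        FailureMode("Excess false alerts overload clinicians", 4, 6, 2),
    ]

    # Highest-risk failure modes first, so mitigations target them first.
    for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
        print(f"RPN {m.rpn:4d}  {m.description}")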

We propose that safety monitoring and medical algorithmic auditing should be a joint responsibility between users and developers, and we encourage the use of feedback mechanisms between these groups to promote learning and maintain the safe deployment of clinical AI systems.

We have collaborated, and continue to collaborate, with:

  • The Medicines and Healthcare products Regulatory Agency (MHRA) to help develop guidance around AI as a Medical Device.

  • The National Institute for Health and Care Excellence (NICE) to develop the new Evidence Standards Framework.

  • Birmingham Health Partners (BHP) to identify opportunities and risks around future UK regulatory reform of medical devices.

  • The University Hospitals Birmingham Digital Transformation Team to ensure that AI deployments remain effective and safe for all patients.

Our team is currently embarking on a programme of research to improve the detection, analysis and reporting of errors and harms in the context of AI health technologies.

We are working with collaborators to:

  • Undertake pilot Medical Algorithmic Audits in collaboration with AI developers.

  • Undertake a co-design process to further develop the audit framework into an operationalised safety monitoring tool.

  • Develop best practice guidance for post-deployment monitoring of clinical AI (a sketch of one such check follows this list).
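
To illustrate the last point, one simple post-deployment check is to track a performance metric over time and flag degradation against a pre-agreed baseline. The sketch below shows such a check in Python; the file name, columns, baseline, and alert margin are all hypothetical, and a real monitoring plan would be agreed between users and developers.

    # Hypothetical post-deployment check: monthly accuracy against a
    # pre-agreed baseline, flagging windows that degrade beyond a margin.
    import pandas as pd

    # One row per deployed prediction; labels arrive after clinical review.
    df = pd.read_csv("deployment_log.csv", parse_dates=["timestamp"])
    df["correct"] = df["prediction"] == df["label"]

    # Accuracy per calendar month of deployment.
    monthly = df.set_index("timestamp")["correct"].resample("MS").mean()

    BASELINE_ACCURACY = 0.90  # hypothetical pre-deployment benchmark
    ALERT_MARGIN = 0.05       # hypothetical tolerated degradation

    for month, accuracy in monthly.items():
        flag = "ALERT" if accuracy < BASELINE_ACCURACY - ALERT_MARGIN else "ok"
        print(f"{month:%Y-%m}  accuracy={accuracy:.3f}  {flag}")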

Publications and collaborations:

  • Read about the Medical Algorithmic Audit, published in The Lancet Digital Health.

  • Read the Evidence Standards Framework on the NICE website.

  • Read the Software and AI as a Medical Device Change Programme Roadmap from the MHRA.