STANDING Together - Standards for data diversity, inclusivity, & generalisability

Artificial intelligence learns patterns in data, and uses these patterns to make predictions about the future.

It does this very reliably - but if there are problems with the data it learns from, there will be problems with the predictions it makes.

Most datasets of eye images contain images of patients from the USA and China.

Our systematic review of publicly available healthcare datasets containing images of eye disease showed that 172 countries, home to 45% of the global population, were not represented. This means that no images from these populations are available for use in AI development.

Skin tone and race/ethnicity are almost never reported in datasets of skin images.

Our systematic review of publicly available healthcare datasets containing skin images showed that demographic details of participants were inadequately reported. Fitzpatrick skin type (a system for classifying the tone of someone's skin) was almost never reported, and neither was ethnicity.

STANDING Together

STANdards for data Diversity, INclusivity, and Generalisability

STANDING Together's mission is to reinforce ethics and inclusivity in healthcare data, allowing developers to ensure that AI in healthcare works for everyone.

We're working with patients and members of the public, and an international group of AI, clinical, and regulation experts to write standards for how datasets are reported and used during healthcare AI development.

STANDING Together has three main phases:

1. Developing draft standard items

  • We have extracted learnings from a systematic review of existing best practices, guidance, standards, frameworks, and roadmaps.

  • We have gathered additional insights from a scoping survey of key international stakeholder organisations and individuals.

  • We have extracted themes from these sources and condensed them into a list of draft standard items.

  • A two-stage e-Delphi study* will allow public consultation on the draft standards list prior to testing.

2. Testing our draft standards

  • We are conducting systematic reviews of datasets used for AI development in Covid-19, breast cancer, and heart failure to:

    1. Describe how datasets are reported and used in these different disease areas

    2. Test whether the datasets meet our draft standard items

    3. Highlight standard items which may be challenging to implement

  • Additionally, we are interviewing 30 dataset curators to understand how our standards might be implemented, and any barriers that may need to be addressed.

3. Ratifying & publishing the standards

  • We plan to produce a report (green paper) in Q1 2023 detailing the draft standards & how they were developed. This will be open for public consultation and feedback.

  • We will invite an international interdisciplinary group of stakeholders to a 'consensus meeting' in June 2023 to process any further comments and finalise the standards.

  • The final standards will be published in Q4 2023.

* A Delphi study is a research method used to build consensus on a topic. Participants are asked to vote on a series of statements, and to suggest amendments where they disagree with them. The research team then amends the statements based on this feedback. After several rounds of voting, a meeting is held to decide the final list of standards.

Hearing the voice of patients and the public is vital to ensuring our standards meet the needs of minoritised groups in society.

  • We have three patient partners who provide consultation and guidance on all aspects of project delivery.

  • Our patient and public involvement and engagement (PPIE) subcommittee meets quarterly to help us consider the moral and ethical implications of our work, and its members share their lived experiences of healthcare discrimination to contextualise the problems we are trying to solve.

  • We plan to run wider-scale engagement activities throughout 2023 to allow public feedback on our draft standards, including via an open consultation on our green paper.