The World Health Organization (WHO) released guidelines on AI ethics and governance, focusing on the use of large multi-modal models in healthcare to ensure safe and ethical application.

The World Health Organization (WHO) has taken a significant step in shaping the future of artificial intelligence (AI) in healthcare by releasing new guidance on the ethics and governance of large multi-modal models (LMMs). This initiative aims to harness the potential of AI in healthcare while addressing the associated risks and ethical concerns.

Large multi-modal models are advanced AI technologies that can accept one or more types of data input, such as text, videos, and images, and generate diverse outputs. Their ability to mimic human communication and to carry out tasks they were not explicitly programmed to perform has made LMMs rapidly popular in consumer applications, with platforms such as ChatGPT, Bard, and BERT gaining widespread recognition in 2023.

Dr. Jeremy Farrar, WHO Chief Scientist, emphasizes the need for transparent information and policies to manage LMMs effectively, which is crucial to achieving better health outcomes and reducing health inequities. The new WHO guidance outlines more than 40 recommendations targeting governments, technology companies, and healthcare providers, focusing on the appropriate use of LMMs to promote and protect public health.

LMMs offer promising applications in diagnosis, clinical care, administrative tasks, medical education, and scientific research, including drug development.

However, LMMs also carry significant risks: they can produce false, inaccurate, or biased statements that could misguide health decisions, and they may be trained on poor-quality or biased data. The guidance also highlights challenges around the accessibility and affordability of LMMs, as well as cybersecurity risks.

To mitigate these risks, WHO recommends that stakeholders across the spectrum, including governments, technology companies, healthcare providers, patients, and civil society, engage at all stages of LMM development and deployment. Governments play a crucial role in setting standards for how LMMs are developed and deployed in healthcare. Key recommendations for governments include investing in not-for-profit infrastructure, enforcing laws and regulations so that LMMs meet ethical obligations, and tasking regulatory agencies with assessing and approving LMMs intended for healthcare use. Mandatory post-release auditing and impact assessments are also suggested.

Developers of LMMs should involve potential users and other stakeholders from the early stages of AI development, and should ensure that LMMs are designed to perform well-defined tasks with the necessary accuracy and reliability. Predicting and understanding potential secondary outcomes is also critical.

This guidance builds on WHO's previous publication on the ethics and governance of AI for health from June 2021, and represents a step forward in ensuring the ethical and safe application of AI in healthcare, in line with human rights standards and the interests of patients.