New Tool for Assessing AI Ethics in Medicine Developed at HSE University
A team of researchers at the HSE AI Research Centre has created an index to evaluate the ethical standards of artificial intelligence (AI) systems used in medicine. This tool is designed to minimise potential risks and promote safer development and implementation of AI technologies in medical practice.
The rapid expansion of AI technologies across many areas of life, including medicine, has created new risks that extend beyond information security and economic or social concerns into ethical challenges. Current standards and regulatory frameworks do not adequately address ethical considerations, making it essential to develop a specialised tool for evaluating AI systems from an ethical standpoint.
The team behind the 'Ethical Review in the Field of AI' project at the HSE AI Research Centre carried out the work in two phases: theoretical and practical. First, the researchers conducted a thorough review of numerous Russian and international documents to identify and define the key principles of professional medical ethics: autonomy, beneficence, justice, non-maleficence, and due care. They then conducted a qualitative field study based on in-depth semi-structured interviews with medical professionals and AI developers, which allowed the team to refine and update the initial principles and to add new ones.
Based on the study results, the researchers developed the Index of AI Systems Ethics in Medicine: a chatbot that allows for 24/7 self-assessment and provides instant feedback from the index developers. The assessment methodology is a test of closed-ended questions designed to evaluate how well medical AI developers and AI system operators understand the ethical risks associated with the development, implementation, and use of AI systems for medical purposes.
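The article does not publish the index's actual questions or scoring rules, but the general mechanics of a closed-ended self-assessment of this kind can be sketched as follows. The sample questions, answer options, and the simple proportional scoring below are purely illustrative assumptions, not the HSE team's methodology:

```python
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    options: list[str]
    correct: int  # index of the option reflecting the ethically sound choice

# Hypothetical sample items; the real index questions are not public.
QUESTIONS = [
    Question(
        "Who bears responsibility when an AI diagnostic tool errs?",
        ["The medical professional alone",
         "Responsibility is shared among all participants in the process"],
        correct=1,
    ),
    Question(
        "May patient data be used to train a model without oversight?",
        ["Yes, if the vendor anonymises it internally",
         "Only under approved consent and data-governance procedures"],
        correct=1,
    ),
]

def score(answers: list[int]) -> float:
    """Return the share of ethically sound answers, from 0.0 to 1.0."""
    hits = sum(1 for q, a in zip(QUESTIONS, answers) if a == q.correct)
    return hits / len(QUESTIONS)
```

A chatbot front end would present each `Question.text` with its `options`, collect the chosen indices, and report `score(...)` back to the respondent as immediate feedback.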
The new methodology has been piloted and endorsed by several leading IT companies specialising in AI solutions for medicine, including MeDiCase and Globus IT. It has also been approved by the Commission for the Implementation of the AI Ethics Code and the Moscow City Scientific Society of General Practitioners.
'The development of this index is a significant step toward ensuring the ethical use of AI in medicine. We hope that the solution we have developed will be valuable to the medical community, which, as our research shows, is concerned about the potential negative ethical consequences of the widespread integration of AI into medical practice,' said Anastasia Ugleva, Project Head, Professor, and Deputy Director of the Centre for Transfer and Management of Socio-Economic Information at HSE University.
The index is expected to be sought after by ethics committees, the forensic medical expert community, and other organisations responsible for evaluating and certifying AI. It will also support the shift from the principle of sole responsibility of a medical professional to a model of shared responsibility among all participants in the process. The introduction of this index will help make AI usage safer and more aligned with high ethical standards.
The methodology guidelines, 'The Index of AI Systems Ethics in Medicine', are registered at HSE University as intellectual property (IP), No. 8.0176-2023.