The International Coalition of Medicines Regulatory Authorities (ICMRA) has published a report setting out recommendations on how regulators should address the issues that the use of artificial intelligence (AI) poses to global medicines regulation.
To formulate the recommendations, the ICMRA Informal Network for Innovation working group conducted a horizon-scanning exercise on AI, with the work led by the European Medicines Agency (EMA). Two hypothetical case studies were developed and used to ‘stress test’ the regulatory systems of ICMRA members and identify where changes may be required.
Based on this work, the group identified key issues in regulating future therapies that use AI and developed recommendations to foster its uptake. Among the main findings: regulators may need to apply a risk-based approach when assessing and regulating AI; stakeholders should establish strengthened governance structures to oversee algorithms and AI deployments that are closely linked to the benefit/risk profile of a medicinal product; and regulatory guidelines should be developed in areas such as data provenance, reliability, and pharmacovigilance.
Details of the report were published in an Aug. 16, 2021 press release from the EMA, and the full report is available via ICMRA’s website. Implementation of the recommendations is due to be discussed by ICMRA members in the coming months.