Editor's Note
This article was published in Pharmaceutical Technology Europe’s November 2023 print issue.
The EU’s AI Act is set to become the world’s first comprehensive legal framework for artificial intelligence.
The use of artificial intelligence (AI) in the European Union (EU) will be governed by the AI Act, which is set to become the world’s first comprehensive legal framework for AI. The AI Act forms part of the EU’s digital strategy and was originally proposed by the European Commission (EC) in April 2021. The Council of the EU adopted its general approach in December 2022, and the European Parliament most recently adopted its negotiating position in mid-June 2023. Following these developments, the three bodies will now negotiate the final details before the policy can become law, in a process called “trilogue,” or a three-way negotiation (1).
The AI Act aims to establish harmonized rules for the development, placing on the market, and use of AI in the EU, and its principal aim is to turn the EU into a global hub for trustworthy AI. The scope of the AI Act is very wide, covering systems developed through an array of approaches listed in Annex I of the Act. These include machine learning approaches, including deep learning; logic- and knowledge-based approaches; and statistical approaches, Bayesian estimation, and search and optimization methods (2).
The European Parliament’s priority is “to make sure that AI systems used in the EU are safe, transparent, and traceable; prevent bias and discrimination; foster social and environmental responsibility; and ensure respect for fundamental rights” (3). Crucially, the European Parliament believes that AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes (4).
The parliament also aims to establish a technology-neutral, uniform definition for AI that could be applied to future AI systems. To this end, AI is defined as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with” (1). Interestingly, this definition of AI is focused on its outputs and objectives rather than its underlying technology or algorithms, because the regulation aims to establish a framework for the ethical and trustworthy development and use of AI systems in the EU (5).
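To see why an output-focused definition is technology-neutral, it can help to picture it as a software interface: any system that satisfies the interface falls within scope, regardless of how it works internally. The following Python sketch is purely illustrative; the names AISystem, objectives, and generate_outputs are hypothetical and do not appear in the Act.

```python
from typing import Iterable, Protocol


class AISystem(Protocol):
    """Hypothetical interface mirroring the Act's output-focused definition.

    Anything that takes human-defined objectives and generates outputs
    (content, predictions, recommendations, decisions) matches, whether
    it is a deep neural network or a hand-written rule engine.
    """

    objectives: list[str]  # the "human-defined objectives"

    def generate_outputs(self, inputs: Iterable[object]) -> list[object]:
        """Return content, predictions, recommendations, or decisions."""
        ...
```

Under this reading, the Act’s definition acts like a structural type: compliance obligations attach to what a system does, not to how it is built.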
As regards intellectual property (IP) rights, the European Parliament has also stressed the importance of addressing issues relating to patents and new creative processes, as well as resolving questions of ownership over works developed entirely by AI (3).
The new rules follow a risk-based approach so that AI systems can be effectively assessed. Under this methodology, the European Parliament establishes obligations for providers and users depending on the level of risk that AI can generate. Providers are those who “develop an AI system to place it on to the market or put it into service under their own name or trademark” (6). The Act splits AI into the following four bands of risk, based on the intended use of a system (a schematic sketch of this tiering follows the list):
Unacceptable risk. Under the new proposals, AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited. These include systems that deploy subliminal or purposefully manipulative techniques (such as cognitive behavioural manipulation) to exploit people’s vulnerabilities; systems that are used for social scoring (by classifying people based on their social behaviour, socio-economic status, and personal characteristics); and the use of real-time and remote biometric identification systems, such as facial recognition (7).
High risk. High-risk AI systems are subject to a detailed certification regime but are not deemed so fundamentally objectionable that they should be banned. These systems are divided into two categories:
AI systems that are used in products covered by the EU’s General Product Safety legislation (8), which includes toys, aviation, cars, medical devices, and lifts
AI systems that fall into eight specific areas (listed in Annex III of the Act), which will have to be registered in an EU database.
All systems deemed ‘high risk’ will be assessed before being placed on the market and throughout their lifecycle.
Limited risk. Limited-risk AI systems are required to comply with minimal transparency requirements that allow users to make informed decisions. This category includes AI systems such as chatbots, emotion recognition and biometric categorization systems, and systems generating ‘deepfake’ or synthetic content (6). The legislation stipulates that users should be made aware when they are interacting with AI and be given the choice of whether to continue using it.
Minimal risk. Minimal-risk applications include spam filters and AI-enabled video games, for which the Commission proposes regulation primarily through voluntary codes of conduct.
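Because the bands are keyed to a system’s intended use, the tiering can be pictured as a lookup from use case to risk level, as in the Python sketch below. The enum labels, the example mapping, and the classify helper are all hypothetical paraphrases of the Act’s prose; in practice, classification is a legal analysis rather than a table lookup.

```python
from enum import Enum


class RiskBand(Enum):
    """The four risk tiers proposed by the AI Act (paraphrased labels)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, registration, lifecycle monitoring"
    LIMITED = "transparency obligations"
    MINIMAL = "voluntary codes of conduct"


# Hypothetical mapping from intended uses to bands. The Act describes
# these categories in prose (and in its annexes), not as a lookup table.
INTENDED_USE_TO_BAND = {
    "social scoring": RiskBand.UNACCEPTABLE,
    "real-time remote biometric identification": RiskBand.UNACCEPTABLE,
    "AI-enabled medical device": RiskBand.HIGH,
    "customer service chatbot": RiskBand.LIMITED,
    "deepfake content generation": RiskBand.LIMITED,
    "spam filter": RiskBand.MINIMAL,
    "AI-enabled video game": RiskBand.MINIMAL,
}


def classify(intended_use: str) -> RiskBand:
    """Toy lookup; real classification is a legal analysis, not a table."""
    try:
        return INTENDED_USE_TO_BAND[intended_use]
    except KeyError:
        raise LookupError(f"{intended_use!r} requires case-by-case assessment")


print(classify("AI-enabled medical device").name)  # HIGH
```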
The AI Act will also establish a European Artificial Intelligence Board, which will be responsible for overseeing the implementation of the regulation and ensuring uniform application across the EU. The body will be tasked with putting forward recommendations on issues that arise as well as providing guidance to national authorities. According to the legislation, the board should reflect the various interests of the AI ecosystem and be composed of representatives of the EU member states (9).
According to Burges Salmon (10), the AI Act is intended to directly affect health technology companies whose AI systems are placed on the EU market and are subject to third-party conformity assessments (made by notified bodies), as stipulated by the EU’s Medical Devices Regulation (MDR) [Regulation (EU) 2017/745] (11) and In Vitro Diagnostic Regulation (IVDR) [Regulation (EU) 2017/746] (12). These systems include AI-enabled diagnostic tools, therapeutic devices, and implantable devices such as pacemakers. The AI Act will also affect MedTech companies whose AI systems do not qualify as medical devices, including AI-enabled general practitioner (GP) apps, patient chatbots, and fall detection systems (assuming that these systems are not already subject to the MDR or IVDR).
Health technology companies whose products fall under the ‘high risk’ category will have to meet significant obligations in relation to risk management, data quality and governance, technical documentation, record keeping, transparency, human oversight, and accuracy, robustness, and cybersecurity.
Although the safety risks specific to AI systems are meant to be covered by the AI Act, the overall safety of the product, and how the AI system is integrated, will be addressed by the conformity assessment under the MDR or IVDR regulations, given that the AI Act is intended to “be integrated into the existing sectoral safety legislation [including the MDR and IVDR] to ensure consistency, avoid duplications, and minimize additional burdens” (10).
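To picture how these layered obligations might be tracked in practice, the following sketch models a provider-side checklist for a high-risk system. Every name in it is hypothetical; the field names paraphrase the draft Act’s obligation areas rather than quoting them, and the checklist is a sketch, not a substitute for the MDR/IVDR conformity assessment that covers overall product safety.

```python
from dataclasses import dataclass


@dataclass
class HighRiskChecklist:
    """Hypothetical compliance tracker for a high-risk, AI-enabled device."""
    system_name: str
    risk_management_done: bool = False
    data_governance_done: bool = False
    technical_documentation_done: bool = False
    record_keeping_done: bool = False
    transparency_done: bool = False
    human_oversight_done: bool = False
    accuracy_robustness_done: bool = False
    registered_in_eu_database: bool = False

    def ready_for_market(self) -> bool:
        """True only once every obligation area has been addressed."""
        obligations = [
            value for name, value in vars(self).items() if name.endswith("_done")
        ]
        return all(obligations) and self.registered_in_eu_database


checklist = HighRiskChecklist(system_name="AI-enabled diagnostic tool")
print(checklist.ready_for_market())  # False until every field is ticked
```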
On 14 June 2023, Members of the European Parliament adopted a negotiating position on the AI Act, and talks are now taking place with EU countries in the Council regarding the final form of the law. The aim is to reach an agreement by the end of 2023, after which the new legislation can enter into force. The majority of the provisions will then apply 24 months after entry into force, during which time companies and organizations will have to ensure that their AI systems comply with the requirements and obligations set out in the regulation.
Bianca Piachaud-Moustakis is lead writer at PharmaVision, Pharmavision.co.uk.
Pharmaceutical Technology Europe
Vol. 35, No. 11
November 2023
Pages: 8–9
When referring to this article, please cite it as Piachaud-Moustakis, B. The EU AI Act. Pharmaceutical Technology Europe 2023 35 (11).