Navigating the EU AI Act: implications for regulated digital medical products


The EU AI Act’s overarching goals emphasize enhancing safety, transparency, and accountability for AI systems in regulated digital products. The Act’s comprehensive approach seeks to set a global benchmark for AI regulation, emphasizing ethical considerations and the responsible deployment of AI technologies. It also aims to establish a regulatory environment that promotes these objectives while encouraging innovation. But will it achieve this dual objective? What are the foreseeable risks of the EU AI Act? Striking a balance between fostering innovation that drives technological advancement and ensuring that such innovation does not compromise other regulatory goals is challenging. Yet getting this balance right is crucial for Europe’s competitiveness in the digital healthcare market15.

Ideally, the EU AI Act will encourage the development of advanced AI/ML medical devices and other regulated AI digital products that comply with stringent safety and effectiveness standards. However, there are also potential downsides.

First, developers and providers of AI/ML medical devices will have to comply with the regulatory requirements of both the EU MDR and the AI Act. While there is some overlap with EU MDR requirements that can be useful for EU AI Act compliance (e.g., risk assessment16, QMS, technical file, post-marketing surveillance), what similar requirements mean under the different regulations might cause confusion. Achieving specificity and alignment across requirements that appear similar and overlapping will likely be more burdensome for SMEs, which often have to prioritize their limited resources towards engineering, quality, and product development rather than maintaining a large regulatory team capable of navigating and handling the additional regulatory complexity. A recent study showed that medical device companies are already having significant difficulties implementing the EU MDR; the key challenges cited include additional workload for technical documentation, higher resource expenditure and cost increases, lack of clarity regarding regulatory requirements, and delays caused by a lack of availability of notified bodies. The findings reveal that the MDR is seen as a challenge for all businesses regardless of size, but especially for SMEs, which are often ‘overwhelmed by the necessary additional expenditure’ and for which ‘the increased requirements resulting from the MDR are so extensive that it is considered to be an existential threat’, resulting in a reduction of the product portfolio, an inability to bring new products to market in the EU, or the withdrawal of medical devices from the EU market17. Given that the EU AI Act adds significant regulatory requirements on top of the high compliance requirements already imposed by the EU MDR, new medical AI start-ups and small enterprises with limited resources might be disproportionately affected, despite provisions in the AI Act to support SMEs. Currently, European-headquartered corporations are among the top medical AI/ML patent owners, indicating a degree of European innovation leadership at the invention stage1. Could the EU AI Act undermine this leadership position? Future research should empirically evaluate these trends and compare this measure of innovation activity before and after the EU AI Act becomes operational18.

Second, the successful implementation of the EU AI Act requires concerted efforts from a broad spectrum of stakeholders, including policymakers, regulators, notified bodies, AI providers and AI deployers, industry, and the public sector. This presents potential risks, including bottlenecks and a lack of synchronization. Even if AI providers and manufacturers of regulated digital medical products are ready to comply with its stringent requirements, there may be challenges due to a lack of capacity or readiness among other stakeholders, such as the limited availability of notified bodies. As an example, it has been several years since the introduction of the EU MDR/IVDR, yet the lack of capacity of notified bodies to certify medical devices has hindered their implementation, resulting in compliance extensions and the corresponding legal uncertainty. The actual operationalization of the EU AI Act will require a significant expansion in the availability, capacity, and capability of notified bodies. Even if the current EU MDR notified bodies are deemed competent to carry out the performance assessment for medical AI systems (the best-case scenario), there will need to be a significant increase in capacity because all AI/ML-enabled medical devices are considered high-risk AI systems, which require a notified body. Currently, manufacturers of low-risk devices (i.e., Class I) are able to ‘self-certify’ the conformity assessment with no or limited involvement of a notified body. Yet even Class I (low-risk) medical devices will require a notified body to perform the EU AI Act conformity assessment if they incorporate AI as part of the product. Similarly, the Commission still has to establish the aforementioned ‘simplified technical documentation’ for SMEs, and the AI Office has yet to facilitate the creation of AI codes of practice. In sum, the Act’s ultimate success will largely depend on the capacity and readiness of multiple stakeholders.

Third, the recent developments in large multimodal models and their potential applications in health care may also bring regulatory challenges in the context of medical devices19. One potential challenge stems from the “intended use” issue, as some of these models may have multiple uses that may or may not be clearly determined ex ante. With the EU MDR’s applicability based on intended use, the interplay of the EU MDR and the EU AI Act might become difficult to navigate. This is especially the case in situations where regulated medical devices are combined with general-purpose AI models developed by a third party. For example, a medical AI device that uses classical ML to analyze medical images to diagnose a medical condition could be augmented with a separate general-purpose AI, such as a large language model (LLM), to enhance the “reasoning” of the medical AI at the front end as well as to extend its output capabilities (e.g., deliver the diagnosis results in natural language)20. The LLM started as a general AI (i.e., an AI with a general intended use), but it is now an AI subsystem within an overall medical system or product whose intended use is medical diagnosis.
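To make the kind of composition at issue more concrete, the following is a minimal, hypothetical sketch of such a pipeline: a classical ML image classifier (the regulated device component) produces a structured finding, which a separate, general-purpose LLM then reformulates as a natural-language report. The names used (classify_image, query_llm, Finding) and the prompt format are illustrative assumptions, not references to any real product, model, or API.

```python
# Hypothetical sketch of a regulated medical-imaging classifier whose
# structured output is handed to a general-purpose LLM for natural-language
# reporting. All names and behaviors are placeholders for illustration only.

from dataclasses import dataclass


@dataclass
class Finding:
    """Structured output of the classical ML classifier (the MDR-regulated component)."""
    condition: str
    probability: float


def classify_image(image_path: str) -> Finding:
    # Placeholder for the classical ML medical-imaging model with a defined
    # medical intended use; a real implementation would run inference here.
    return Finding(condition="suspected pneumonia", probability=0.87)


def query_llm(prompt: str) -> str:
    # Placeholder for a third-party general-purpose LLM. A real system would
    # call an external model; this stub keeps the sketch self-contained.
    return f"Draft clinical report based on: {prompt}"


def diagnose_and_report(image_path: str) -> str:
    finding = classify_image(image_path)
    prompt = (
        f"Summarize for a clinician: finding={finding.condition}, "
        f"probability={finding.probability:.2f}."
    )
    # The general-purpose LLM now operates inside a system whose overall
    # intended use is medical diagnosis.
    return query_llm(prompt)


if __name__ == "__main__":
    print(diagnose_and_report("chest_xray_001.png"))
```

Even in this toy composition, the general-purpose component ends up executing within a system whose overall intended use is medical diagnosis, which is precisely the classification question raised above.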

Fourth, the dynamic pace of AI innovation might suffer from a static regulatory approach. Such innovation underscores the necessity of flexible, agile, and adaptive regulatory frameworks capable of accommodating new AI advances, technologies, methodologies, and challenges that have yet to emerge. As an example, the initial version of the EU AI Act did not even contemplate the possibility of generative AI models; fortunately, the protracted negotiations for its approval created the opportunity to include generative AI just before its adoption. The recent introduction of LLMs underscores the importance of regulatory frameworks that remain aligned with technological advancements, societal expectations, and ethical considerations21. Indeed, the need for last-minute changes to the EU AI Act in 2023 to account for advances in generative AI, which earlier drafts had not contemplated at all, illustrates how difficult it is for legislation to adapt to rapid technological change.

All the above concerns are emblematic of the challenges that emerge when horizontal regulation of fast-changing enabling technologies, such as AI, is imposed on sectors with existing regulatory regimes, such as regulated digital medical products. For this reason, it will be important that these regulatory interoperability challenges are monitored and addressed in a timely manner before they cause chilling effects on innovation. These underlying concerns have led other jurisdictions, such as the UK, to take a different approach to AI regulation.

The UK has also been playing a leadership role in AI, including hosting the first global AI Safety Summit in November 2023, which brought together the leading AI nations, world leaders, researchers, technology companies, and civil society groups, resulting in 28 jurisdictions (including the EU and US) agreeing to The Bletchley Declaration on AI Safety22. The UK’s approach to AI regulation, as it diverges from the EU post-Brexit, emphasizes a flexible, pro-innovation framework that allows for sector-specific adaptations by existing regulators. Unlike the EU AI Act’s broad and prescriptive regulations, the UK adopts a principles-based approach focusing on safety and security, transparency, fairness, accountability and governance, and contestability23. This approach is underpinned by non-statutory guidance and a three-phased approach to issuing guidelines, whereby the various regulators are encouraged to ‘promote innovation and competition’ by developing ‘tools and guidance that promote knowledge and understanding […] in the context of their remit’ and by establishing ‘published policy material, in respect of AI, that is consistent with their respective regulatory objectives, setting out clearly and concisely the outcomes regulators expect, so that regulated firms can meet these expectations through their actions.’ Such a regulatory environment contrasts with the EU’s comprehensive legislative approach but aims equally to manage the multifaceted challenges and opportunities presented by AI technologies. This divergence highlights developments in the UK that could influence future AI regulation and its interaction with EU law. Contrary to the EU, the UK government is taking a ‘deliberately agile and iterative approach, recognising the speed at which these technologies are evolving’, aimed at building the evidence base to learn from experience and continuously adapt, so as to develop a regulatory regime that fosters innovation while ensuring regulatory coherence and addressing emerging AI risks. In fact, according to the 2023 UK Policy Paper “A pro-innovation approach to AI regulation”24, the stated rationale for this pragmatic approach is largely the aim of avoiding the risk of hindering AI innovation: “New rigid and onerous legislative requirements on businesses could hold back AI innovation and reduce our ability to respond quickly and in a proportionate way to future technological advances. Instead, the principles will be issued on a non-statutory basis and implemented by existing regulators. This approach makes use of regulators’ domain-specific expertise to tailor the implementation of the principles to the specific context in which AI is used.” For instance, the UK Medicines and Healthcare products Regulatory Agency has issued guidance for ‘AI as a medical device’25.
