As first reported by Osborne Clarke, the European Union has issued new regulatory guidance clarifying how the recently adopted AI Act intersects with existing medical device regulations. A joint Q&A document by the Medical Device Coordination Group (MDCG) and the Artificial Intelligence Board (AIB) provides a practical reference for companies developing or deploying medical device artificial intelligence (MDAI). The guidance outlines how high-risk AI systems fall within the scope of the AI Act while remaining subject to the Medical Devices Regulation (MDR) and the In Vitro Diagnostic Regulation (IVDR).
High-risk classification and research exemptions explained
The guidance confirms that most higher-risk devices that use AI, including those in MDR classes IIa, IIb, and III and IVDR classes B, C, and D, are now classified as “high-risk” under the AI Act. This classification brings additional compliance obligations but does not change the original MDR/IVDR risk categorization. The Q&A includes a classification table to help manufacturers determine applicability and outlines research exemptions for pre-market testing. Notably, while AI development and testing are generally outside the scope of the AI Act, testing in real-world conditions, such as clinical investigations or performance studies, falls under Article 60 of the AI Act and must also adhere to MDR/IVDR requirements.
Navigating dual conformity assessments and documentation
For high-risk MDAI, the AI Act and the MDR/IVDR frameworks apply in parallel. Manufacturers must prepare technical documentation that satisfies both regulatory regimes, covering areas such as risk management, data quality, human oversight, and transparency. The guidance encourages companies to consolidate AI-specific documentation within their existing MDR/IVDR conformity assessment documentation, using the flexibility allowed under Article 8 of the AI Act. This integration aims to streamline compliance while maintaining rigorous safety and quality standards.
Emphasis on robust clinical and data governance
Beyond conformity assessment, the guidance stresses clinical evidence and data quality as pillars of MDAI approval. The AI Act requires that training, validation, and testing datasets be representative, complete, and as error-free as possible. It also mandates bias mitigation strategies and documentation of data governance procedures. When combined with MDR and IVDR requirements for clinical and performance evaluations, these provisions place strong emphasis on ethical, transparent, and data-driven AI deployment in healthcare.
The EU’s joint guidance on MDAI regulation offers long-awaited clarity for medical technology developers, mapping out how the AI Act complements existing medical device rules. While further legislation and harmonised standards are expected, this publication provides a crucial foundation for navigating dual compliance, streamlining documentation, and balancing innovation with patient safety in an increasingly AI-driven healthcare environment.