Shaping the industry:
EU Artificial Intelligence Act (AI Act)
History and Overview
The EU Artificial Intelligence Act (AI Act) was first proposed by the European Commission in April 2021 and entered into force on 1 August 2024, with most provisions applying from August 2026. It is the world’s first comprehensive legal framework specifically regulating artificial intelligence. The Act is built on the principles of safety, transparency, human oversight, non-discrimination, and fundamental rights protection.
Much like the General Data Protection Regulation (GDPR), the AI Act is expected to exert a global influence, often referred to as the “Brussels Effect”. This means even organizations outside the EU will adapt their AI practices to align with EU requirements to maintain access to the European market. This is especially critical for the healthcare sector, where AI solutions are deeply tied to patient safety and data protection.
Why is it Relevant to AI in Healthcare?
Healthcare AI often involves high-risk applications, such as diagnostic tools, medical imaging systems, treatment planning,
or predictive analytics for patient outcomes. These systems directly impact patients’ health and safety, meaning compliance with the AI Act is not optional but legally required. Non-compliance could result in substantial penalties and reputational damage, mirroring the enforcement seen under GDPR.
The Four-Tier Risk Classification System
- Unacceptable Risk: AI systems banned outright. Examples include:
  – Government social scoring
  – AI toys encouraging dangerous behavior
  – Real-time biometric surveillance in public spaces (with narrow exceptions)
- High Risk: AI used in sensitive domains, especially healthcare and medical devices. Subject to strict requirements:
  – Risk management frameworks
  – Data governance and bias mitigation
  – Human oversight mechanisms
  – Transparency and explainability standards
  – Technical documentation and conformity assessments
- Limited Risk: Systems like chatbots or emotion recognition tools. Subject to transparency obligations (e.g., informing users that they are interacting with AI and how their information is handled).
- Minimal or No Risk: AI with little impact on rights or safety (e.g., spam filters, gaming AI). No additional obligations.
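To make the tiers concrete, here is a minimal Python sketch of how an organization might triage its own AI inventory against the four tiers. The use-case labels (such as medical_imaging and patient_chatbot) are hypothetical; actual classification is a legal determination under the Act’s annexes, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, highest to lowest."""
    UNACCEPTABLE = "unacceptable"   # banned outright
    HIGH = "high"                   # strict obligations apply
    LIMITED = "limited"             # transparency obligations only
    MINIMAL = "minimal"             # no additional obligations

# Hypothetical triage rules for a first-pass inventory review;
# a real classification must follow the Act's annexes and legal advice.
def triage_risk_tier(use_case: str) -> RiskTier:
    banned = {"social_scoring", "realtime_public_biometrics"}
    high_risk = {"diagnostic_support", "medical_imaging", "treatment_planning"}
    transparency_only = {"patient_chatbot", "emotion_recognition"}

    if use_case in banned:
        return RiskTier.UNACCEPTABLE
    if use_case in high_risk:
        return RiskTier.HIGH
    if use_case in transparency_only:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage_risk_tier("medical_imaging"))  # RiskTier.HIGH
```

A triage like this is useful only for scoping: it flags which systems need the full high-risk workstream described in the next section.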
Key Obligations for High-Risk AI Systems
For healthcare AI classified as high-risk, organizations must:
- Register systems in the official EU database of high-risk AI
- Conduct risk management and conformity assessments before deployment
- Maintain detailed technical documentation for accountability
- Ensure human-in-the-loop oversight in decision-making processes
- Implement post-market monitoring and incident reporting procedures
- Adopt robust data governance and bias prevention frameworks
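As one illustration of what human-in-the-loop oversight and post-market monitoring can look like in practice, below is a minimal Python sketch of an oversight audit record. The schema, field names, and the EU-DB-0000 registration ID are hypothetical assumptions for illustration, not formats mandated by the Act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OversightRecord:
    """One human-in-the-loop decision, retained for post-market monitoring."""
    system_id: str            # e.g., the system's EU database registration ID
    model_version: str
    ai_recommendation: str
    reviewer_id: str
    reviewer_decision: str    # "accepted", "overridden", or "escalated"
    rationale: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    @property
    def is_override(self) -> bool:
        # Overrides are a signal worth surfacing in incident reporting.
        return self.reviewer_decision == "overridden"

# A clinician overrides the AI's suggestion; the record can feed both the
# technical documentation and the incident-reporting workflow.
record = OversightRecord(
    system_id="EU-DB-0000",          # placeholder registration ID
    model_version="1.4.2",
    ai_recommendation="flag: suspected pneumonia",
    reviewer_id="clinician-17",
    reviewer_decision="overridden",
    rationale="Imaging artifact; no clinical correlation.",
)
print(record.is_override)  # True
```

Keeping every recommendation-and-review pair in an auditable structure like this supports several obligations at once: human oversight, technical documentation, and post-market monitoring.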
Impact on Innovation and Compliance
The AI Act is widely viewed as establishing a global gold standard for responsible AI use. Supporters argue that it fosters trust,
ensures alignment with the EU Charter of Fundamental Rights, and enhances patient safety in healthcare. Critics, however, caution that compliance costs may burden small and medium enterprises (SMEs) and startups, potentially slowing down innovation in AI-driven healthcare technologies.
To address this, the EU has committed to providing regulatory sandboxes and technical guidance to help organizations test AI systems under supervision while moving toward compliance.
The Road Ahead
The AI Act is only the beginning of AI regulation in Europe. Additional rules are under discussion, particularly concerning:
- Foundation Models (e.g., large language models like GPT)
- General-Purpose AI Systems (GPAI), including generative AI
- Clarification of obligations for adaptive AI systems that evolve after deployment
Enforcement phases in gradually: prohibitions on unacceptable-risk systems apply from February 2025, obligations for general-purpose AI from August 2025, and most remaining provisions from August 2026. Healthcare organizations should act now to assess, prepare, and implement compliance strategies rather than waiting until deadlines loom.
Alignment with Other Standards
Compliance with the AI Act should be pursued in tandem with other frameworks, including:
- GDPR – for data protection and patient privacy
- PHIPA (Ontario’s Personal Health Information Protection Act) – for health information handling in Ontario, Canada
- ISO/IEC 27001 – for information security management
- ISO/IEC 42001 – for AI governance
- OECD AI Principles – for fairness, transparency, and accountability
- NIST AI RMF – for risk management in trustworthy AI development
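A compliance program often needs to track which framework each internal control maps to. The following is a minimal, hypothetical Python mapping for illustration; the concern labels paraphrase the list above and are not official scope statements for these frameworks.

```python
# Hypothetical mapping used to tag internal compliance controls by framework.
CONTROL_FRAMEWORKS: dict[str, str] = {
    "GDPR": "data protection and patient privacy",
    "PHIPA": "health information handling (Ontario, Canada)",
    "ISO/IEC 27001": "information security management",
    "ISO/IEC 42001": "AI governance",
    "OECD AI Principles": "fairness, transparency, accountability",
    "NIST AI RMF": "risk management for trustworthy AI",
}

def frameworks_for(control_tags: set[str]) -> list[str]:
    """Return frameworks whose stated concern overlaps a control's tags."""
    return [name for name, concern in CONTROL_FRAMEWORKS.items()
            if any(tag in concern for tag in control_tags)]

print(frameworks_for({"privacy"}))  # ['GDPR']
```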