EU AI Act


Updated: September 26, 2025 

EU AI Act in General

The EU Artificial Intelligence Act (AI Act), first proposed in April 2021, entered into force on 1 August 2024. It is the world’s first comprehensive legal framework specifically regulating artificial intelligence, aiming to ensure that AI systems placed on the EU market are safe, respect fundamental rights, and align with European values. Its obligations apply in phases: 2 February 2025 for bans on prohibited AI systems, 2 August 2025 for general-purpose AI obligations, and 2 August 2026 for most high-risk AI obligations.

Much like the General Data Protection Regulation (GDPR), the AI Act is expected to exert a global influence — often referred to as the “Brussels Effect.” Organizations worldwide are likely to adapt their AI practices to EU requirements in order to maintain access to the European market. This is particularly significant for healthcare, where AI systems directly affect patient safety and data protection.

Scope of Application

The AI Act applies to:

  • Providers (developers, manufacturers) of AI systems placing products on the EU market.
  • Deployers (users) of AI systems within the EU, including healthcare institutions.
  • Organizations outside the EU whose AI systems are placed on the EU market or whose outputs are used within the EU.

How it Applies to AI in Healthcare

Healthcare AI systems are frequently categorized as ‘high-risk’ under the AI Act, particularly when used for diagnostic, treatment, or decision-support purposes in clinical settings. Given their direct impact on health and safety, these systems must comply with the Act’s legal requirements. Non-compliance may lead to significant fines under Article 99 of the AI Act. The most serious infringements, such as the use of prohibited AI systems, can result in administrative fines of up to €35 million or 7% of global annual turnover, whichever is higher.

The Four-Tier Risk Classification System

  • Unacceptable Risk: AI systems banned outright. Examples include government social scoring and AI toys encouraging dangerous behavior. Most uses of real-time biometric surveillance in public spaces are prohibited, with narrow exceptions for law enforcement (such as locating missing persons or preventing terrorist threats).
  • High Risk: AI used in sensitive domains, including healthcare and medical devices. Subject to strict requirements such as risk management, data governance, bias mitigation, human oversight, transparency standards, technical documentation, and conformity assessments.
  • Limited Risk: AI systems like chatbots or emotion recognition tools. These require transparency obligations, such as informing users that they are interacting with AI.
  • Minimal or No Risk: AI with little or no impact on rights or safety, such as spam filters or gaming AI. These face no additional obligations.
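The four tiers above can be sketched as a simple lookup. This is an illustrative sketch only: the tier names, example use cases, and one-line obligation summaries below are drawn from this document's examples and are not an official classification under the Act.

```python
from enum import Enum


class RiskTier(Enum):
    """The AI Act's four risk tiers (illustrative labels, not legal terms of art)."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict obligations apply
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no additional obligations


# Hypothetical mapping of the example use cases mentioned in this
# document to tiers; real classification requires legal analysis.
EXAMPLE_TIERS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "clinical decision support": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}


def obligations(tier: RiskTier) -> str:
    """Return a one-line summary of what each tier requires."""
    return {
        RiskTier.UNACCEPTABLE: "Prohibited on the EU market.",
        RiskTier.HIGH: "Risk management, documentation, human oversight, conformity assessment.",
        RiskTier.LIMITED: "Transparency: inform users they are interacting with AI.",
        RiskTier.MINIMAL: "No additional obligations.",
    }[tier]
```

A deployer triaging a portfolio of AI systems could use a table like this as a first-pass screen before seeking formal legal classification.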

Key Obligations for High-Risk AI Systems

  • Register high-risk AI systems in the official EU database before placing them on the EU market.
  • Maintain a continuous risk-management system across the lifecycle and complete conformity assessments before deployment.
  • Maintain detailed technical documentation for accountability.
  • Ensure human-in-the-loop oversight in decision-making processes.
  • Implement post-market monitoring and incident reporting procedures.
  • Adopt robust data governance and bias prevention frameworks.
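The obligations above can be tracked internally as a pre-market checklist. The sketch below is a hypothetical internal tool mirroring this document's bullet list; the field names are assumptions for illustration and do not correspond to any official form or template.

```python
from dataclasses import dataclass, fields


@dataclass
class HighRiskChecklist:
    """Illustrative pre-market checklist for a high-risk AI system,
    mirroring the obligations listed in this document."""
    registered_in_eu_database: bool = False
    conformity_assessment_done: bool = False
    technical_documentation: bool = False
    human_oversight_in_place: bool = False
    post_market_monitoring: bool = False
    data_governance_framework: bool = False

    def ready_for_market(self) -> bool:
        # All obligations must be satisfied before placing the system
        # on the EU market.
        return all(getattr(self, f.name) for f in fields(self))

    def outstanding(self) -> list[str]:
        # Names of obligations not yet satisfied.
        return [f.name for f in fields(self) if not getattr(self, f.name)]
```

In practice such a checklist would feed a quality-management system rather than stand alone, but it shows how the Act's obligations translate into discrete, auditable gates.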

Impact on Innovation and Compliance

The AI Act is widely regarded as a leading global benchmark for responsible AI governance. Supporters argue it fosters trust, aligns with the EU Charter of Fundamental Rights, and enhances patient safety in healthcare. Critics caution that compliance costs may challenge startups and SMEs, potentially slowing innovation. To mitigate this, the EU will provide regulatory sandboxes and technical guidance, allowing organizations to test AI systems under supervision while working toward compliance.

The Road Ahead

The AI Act is the beginning of a broader European regulatory framework for AI. Additional measures are under discussion, particularly regarding:

  • Foundation Models: Oversight for large-scale general-purpose models (e.g., GPT-based systems).
  • General-Purpose AI (GPAI): Clarified obligations for generative and adaptive AI systems.
  • Adaptive AI Systems: Ensuring that systems which continue to learn or change after deployment remain compliant and safe.

Enforcement is phased:

  • 2 February 2025: Bans on prohibited AI systems
  • 2 August 2025: Obligations for general-purpose AI
  • 2 August 2026: Full enforcement of high-risk AI obligations

Relevant and Overlapping Laws

Compliance with the AI Act should be aligned with other frameworks to ensure comprehensive governance:

  • GDPR: Ensures data protection and patient privacy in tandem with AI Act requirements.
  • PHIPA (Ontario): Provincial privacy law for health data in Canada, relevant for cross-border alignment.
  • ISO/IEC 27001: Information security management standards for enterprise-wide controls.
  • ISO/IEC 42001: AI governance standard for managing AI lifecycles and risks.
  • OECD AI Principles: Promote fairness, accountability, and transparency in AI systems.
  • NIST AI RMF: U.S. framework for managing risks in trustworthy AI development.

References & Official Source

EU Artificial Intelligence Act (Regulation (EU) 2024/1689): https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng

AI Act — PDF of Official Journal version: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ%3AL_202401689

AI Act Explorer (summary + languages): https://artificialintelligenceact.eu/the-act/

High-Level Summary of the AI Act: https://artificialintelligenceact.eu/high-level-summary/