Understanding EU AI Act Risk Levels

The EU Artificial Intelligence Act (AI Act), adopted in 2024 and entering phased enforcement from 2025, introduces the world’s first horizontal regulatory framework for AI. Its risk-based approach is designed to ensure that AI systems are regulated in proportion to the risks they pose to people’s safety, rights, and well-being.

For healthcare, this framework is especially critical. Clinical AI tools often involve sensitive health data and can directly influence diagnosis, treatment, and patient outcomes. That combination of sensitive data and direct impact on care means most clinical AI systems fall under the ‘high-risk’ classification, while some non-clinical or administrative tools may be limited- or minimal-risk.

Updated: September 20, 2025 

The Four Risk Levels in the EU AI Act

The EU AI Act defines four levels of risk, each with different regulatory consequences (a short code sketch of the tiers follows the list):

1. Unacceptable Risk

Definition: AI systems that are considered incompatible with EU values, rights, and safety.

Legal Status: Completely prohibited within the EU.

Examples:

    • AI used for social scoring by governments.

    • Real-time biometric identification for mass surveillance in public spaces (subject to narrowly defined exceptions).

    • AI that uses subliminal techniques to manipulate users in ways that cause harm.

2. High Risk

Definition: AI systems that significantly impact fundamental rights, safety, or access to essential services.

Legal Status: Permitted, but subject to strict requirements.

Examples:

    • AI used in critical infrastructure (e.g., electricity grids).

    • AI in education and exams (e.g., automated grading).

    • AI in employment decisions (hiring, firing).

    • AI in medical devices, diagnostics, and treatment.

3. Limited Risk

Definition: AI systems where the main risk is lack of transparency to the user.

Legal Status: Allowed, with transparency obligations (e.g., informing users that they are interacting with AI).

Examples:

    • Chatbots in customer service.

    • Deepfakes (must be labeled as such).

    • Supervised AI scribe tools.

4. Minimal or No Risk

Definition: AI that poses little or no threat to rights or safety.

Legal Status: Fully permitted, with no additional obligations.

Examples:

    • AI spam filters.

    • AI-powered video games.

    • AI recommendation engines in low-stakes contexts.
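
To make the tiered structure above concrete, here is a minimal sketch of how the four tiers and their headline consequences might be modeled in code. Everything here (RiskTier, TIER_CONSEQUENCES) is a hypothetical illustration, not terminology or structure defined by the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """The four EU AI Act risk tiers, from most to least restricted."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # permitted under strict requirements
    LIMITED = "limited"            # permitted with transparency duties
    MINIMAL = "minimal"            # permitted, no additional obligations

# Hypothetical mapping from each tier to its headline consequence.
TIER_CONSEQUENCES = {
    RiskTier.UNACCEPTABLE: "Prohibited within the EU.",
    RiskTier.HIGH: ("Permitted, subject to risk management, documentation, "
                    "human oversight, and post-market monitoring."),
    RiskTier.LIMITED: "Permitted, with transparency obligations.",
    RiskTier.MINIMAL: "Permitted, with no additional AI Act obligations.",
}

for tier in RiskTier:
    print(f"{tier.value}: {TIER_CONSEQUENCES[tier]}")
```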

How Risk Levels Apply to Healthtech AI

Healthcare is a special case under the EU AI Act because it combines:

Sensitive data: Health data is classified under GDPR as a special category with the highest protections.

High stakes: AI can influence diagnosis, treatment, or patient triage.

Safety-critical environments: Errors can cause physical, emotional, or financial harm.

1. Unacceptable Risk in Healthcare

Rare, but possible.

Regulatory outcome: Such tools are banned outright.

Examples:

  • An AI tool that denies access to treatment based on economic or social profiling.

  • A system that manipulates patient behavior (e.g., pushing unnecessary treatments for profit).

2. High Risk in Healthcare

The default category for most healthcare AI.

Regulatory obligations: These must meet strict requirements (risk management, technical documentation, human oversight, post-market monitoring).

Examples:

  • Diagnostic imaging systems (e.g., AI that detects cancer in scans).
  • Clinical decision support systems recommending treatments.
  • AI managing drug dosage or insulin delivery.
  • Patient monitoring tools that trigger early warning alerts.
  • AI systems embedded in medical devices (e.g., surgical robots, wearables).

3. Limited Risk in Healthcare

Applies mainly to non-clinical AI where stakes are lower.

Regulatory obligations: Transparency (users must be informed when they are interacting with AI, such as through chatbots or virtual assistants). A code sketch of this disclosure pattern follows the examples below.

Examples:

  • AI chatbots answering general health FAQs.
  • Appointment scheduling systems using conversational AI.
  • Symptom checkers that disclose they are not a replacement for a doctor.
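
As a rough illustration of the disclosure pattern referenced above, the sketch below prepends a "you are talking to an AI" notice to a chatbot's first reply. The wrap_reply helper and the notice wording are assumptions for illustration; the Act requires that users be informed, not this particular implementation.

```python
# Hypothetical disclosure text -- real wording needs legal/clinical review.
AI_DISCLOSURE = (
    "You are chatting with an automated assistant, not a clinician. "
    "It cannot diagnose conditions or replace professional medical advice."
)

def wrap_reply(model_reply: str, first_turn: bool) -> str:
    """Prepend the transparency notice on the first turn of a conversation.

    How the notice is surfaced (banner, first message, or both) is a
    product decision; this sketch simply puts it in the first message.
    """
    if first_turn:
        return f"{AI_DISCLOSURE}\n\n{model_reply}"
    return model_reply

print(wrap_reply("The clinic is open Monday to Friday, 8am to 6pm.", first_turn=True))
```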

4. Minimal/No Risk in Healthcare

Applies to purely administrative or technical tools.

Regulatory obligations: None under the AI Act.

Examples:

  • AI spam filters in clinic email.
  • AI-powered transcription of staff meetings (not patient data).
  • Predictive text in internal communication platforms.

Obligations for High-Risk Healthcare AI

Healthcare AI systems that fall into the high-risk category must comply with detailed obligations before and after deployment. These include:

Risk Management System

Identify and analyze risks across the AI lifecycle.

Implement mitigations and continuously review them.

Data Governance & Quality

Training data must be representative, relevant, and free from discriminatory bias.

Data must be processed in line with GDPR requirements for health data, including a valid legal basis, explicit conditions under Article 9, and appropriate safeguards.
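
As a sketch of how such a data-governance gate might look in practice, the snippet below checks hypothetical per-dataset metadata before training. The DatasetRecord fields are illustrative assumptions, not a checklist taken from the Act or GDPR.

```python
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    """Hypothetical metadata a governance review might require per dataset."""
    name: str
    legal_basis: str             # documented GDPR legal basis, e.g. "consent"
    article9_condition: str      # explicit Article 9 condition for health data
    demographics_reviewed: bool  # representativeness / bias review completed?

def governance_gaps(ds: DatasetRecord) -> list:
    """Return open governance issues; an empty list means no gaps found."""
    gaps = []
    if not ds.legal_basis:
        gaps.append("No documented GDPR legal basis.")
    if not ds.article9_condition:
        gaps.append("No Article 9 condition recorded for health data.")
    if not ds.demographics_reviewed:
        gaps.append("Representativeness/bias review not completed.")
    return gaps

print(governance_gaps(DatasetRecord("chest-xrays-2024", "consent", "", False)))
```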

Technical Documentation

Developers must produce detailed records of design, training, testing, and deployment.

Documentation must be accessible to regulators.

Transparency & Explainability

AI outputs must be interpretable by clinicians.

Patients should be informed of AI use when decisions affect their care.

Human Oversight

AI cannot make final medical decisions without human validation.

Clinicians must be able to override AI recommendations.
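
The oversight pattern can be made explicit in code: the model's output is advisory, and nothing is finalized without a clinician's sign-off, which can also be an override. The sketch below is a hypothetical structure, with all type and field names assumed for illustration.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A hypothetical AI output awaiting human review -- advisory only."""
    patient_id: str
    suggestion: str
    confidence: float

@dataclass
class ClinicianDecision:
    """What actually gets recorded and acted on: the human's decision."""
    recommendation: Recommendation
    approved: bool
    override_note: str = ""

def finalize(rec: Recommendation, approved: bool, note: str = "") -> ClinicianDecision:
    # No recommendation becomes a clinical action without explicit sign-off;
    # a rejection carries the clinician's reasoning for the audit trail.
    return ClinicianDecision(recommendation=rec, approved=approved, override_note=note)

rec = Recommendation("p-001", "Increase dose to 10 mg", confidence=0.87)
decision = finalize(rec, approved=False, note="Contraindicated: renal impairment.")
print(decision.approved, decision.override_note)
```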

Post-Market Monitoring

Continuous monitoring of performance, safety, and bias.

Reporting of serious incidents or malfunctions.
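
As a minimal sketch of the record-keeping half of this duty, the snippet below appends structured incident records and flags serious ones for the (separate) regulatory reporting workflow. The file name, fields, and severity labels are assumptions for illustration.

```python
import datetime
import json

def log_incident(system_id: str, severity: str, description: str,
                 path: str = "incidents.jsonl") -> None:
    """Append a structured incident record for post-market monitoring."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,
        "severity": severity,
        "description": description,
    }
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
    if severity == "serious":
        # In production this would trigger the regulatory reporting workflow.
        print(f"ALERT: serious incident logged for {system_id}")

log_incident("triage-model-v2", "serious",
             "Early-warning alert missed a deteriorating patient.")
```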

How Regulators Assess Healthcare AI

Regulators will focus on:

Classification justification: Has the developer properly assessed the system’s risk category?

Traceability: Can design and testing decisions be verified in documentation?

Clinical validation: Is there evidence that AI outputs meet medical standards?

Oversight mechanisms: Are clinicians able to monitor, interpret, and override AI outputs?

Alignment with GDPR: Are health data processing rules respected in parallel?
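
One way teams operationalize the points above is as a self-assessment checklist. The sketch below is a hypothetical version, not an official conformity-assessment procedure.

```python
# Hypothetical self-assessment against the points regulators focus on.
REGULATOR_CHECKLIST = [
    "Risk classification justified and documented",
    "Design and testing decisions traceable in documentation",
    "Clinical validation evidence meets medical standards",
    "Clinicians can monitor, interpret, and override outputs",
    "GDPR health-data processing rules respected in parallel",
]

def open_items(answers: dict) -> list:
    """Return checklist items not yet satisfied; empty means ready."""
    return [item for item in REGULATOR_CHECKLIST if not answers.get(item, False)]

print(open_items({"Risk classification justified and documented": True}))
```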

Key Takeaways

The EU AI Act applies a tiered system of risk: unacceptable (banned), high risk (heavily regulated), limited risk (transparency obligations), and minimal risk (no additional obligations).

Healthcare AI is overwhelmingly high risk due to patient safety and sensitive-data concerns.

Compliance for high-risk systems involves documentation, oversight, testing, and ongoing monitoring.

Limited- and minimal-risk healthcare AI tools exist but are mostly non-clinical, administrative applications.

Official Resources

European Commission – AI Act Portal: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

Full Text of the EU AI Act: https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng

European Data Protection Board – AI Guidance: https://www.edpb.europa.eu/our-work-tools/our-documents/topic/artificial-intelligence_en

European Medicines Agency – AI in Medicine: https://www.ema.europa.eu/en/about-us/how-we-work/data-regulation-big-data-other-sources/artificial-intelligence