Prohibited AI Systems Under the EU AI Act


The European Union’s Artificial Intelligence Act (EU AI Act) establishes the world’s first comprehensive legal framework for governing artificial intelligence. It divides AI systems into four categories based on their potential impact on safety and fundamental rights — minimal risk, limited risk, high risk, and unacceptable risk.

Systems in the “unacceptable risk” tier are considered so dangerous to human rights, public trust, or democratic values that they are completely banned from use within the European Union. These prohibitions, outlined in Article 5 of the AI Act, target AI systems that manipulate users, exploit vulnerabilities, conduct social scoring, perform intrusive biometric surveillance, or otherwise threaten individual freedoms. Understanding these bans is essential for developers, regulators, and businesses building or deploying AI solutions in Europe.

1. Manipulative or Deceptive Techniques to Distort Behaviour

Definition: AI systems that deploy subliminal, manipulative or deceptive techniques with the objective or effect of materially distorting someone’s behaviour and impairing their ability to make informed decisions.

Scope conditions: The ban applies when the system is placed on the market, put into service or used for that purpose, and the distortion of behaviour causes, or is reasonably likely to cause, significant physical or psychological harm or an infringement of the person’s rights.

Examples:

  • An online platform that uses hidden emotional triggers to push users in vulnerable states into high-risk financial decisions without transparent disclosure.
  • A gaming app that uses AI to pressure children into extended play or purchases by exploiting behavioural patterns without transparency.

2. Exploiting Vulnerabilities of Specific Persons or Groups

Definition: AI systems that exploit vulnerabilities linked to age, disability, socio-economic status or other personal circumstances, with the objective or effect of materially distorting behaviour in a way that causes, or is reasonably likely to cause, significant harm.

Scope conditions: The exploitation must target a person or group known to have a specific vulnerability. The system must go beyond legitimate persuasion and lead to a distortion of behaviour or a harmful outcome.

Examples:

  • An AI-driven chatbot in a social context that exploits an elderly person’s cognitive limitations to steer them into unfavourable credit deals.
  • A children’s interactive toy that uses psychological profiles to encourage unsafe data sharing or purchases by children with limited autonomy.

3. Social Scoring Systems

Definition: AI systems that evaluate or classify natural persons or groups over time based on social behaviour or inferred personal or personality characteristics, where the resulting “social score” leads to detrimental or unfavourable treatment of the person or group.

Scope conditions: The system must (a) evaluate or classify, (b) based on social behaviour or personal traits, (c) across a period of time, and (d) when the scoring influences treatment in an unrelated context (i.e., outside the original context of the behaviour) and leads to disadvantage.

Examples:

  • A mobile app that aggregates data about a person’s social media behaviour, purchases and community engagement, produces a “trust score,” and uses that score to restrict access to services like housing or credit.
  • An employer-facing platform scoring employees’ personal behaviour traits (outside work) and using that score to decide promotion or benefits eligibility.
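
To make the four cumulative conditions above more concrete, here is a minimal pre-screening sketch in Python. The class and field names are illustrative assumptions for this post, not terms defined by the Act, and the check is no substitute for legal review.

```python
from dataclasses import dataclass

@dataclass
class SocialScoringScreen:
    # Field names are illustrative only; they mirror the four cumulative
    # conditions described in this section, not wording from the Act.
    evaluates_or_classifies_persons: bool       # (a) evaluates or classifies natural persons or groups
    based_on_behaviour_or_traits: bool          # (b) based on social behaviour or inferred characteristics
    over_a_period_of_time: bool                 # (c) the assessment spans a period of time
    detrimental_treatment_out_of_context: bool  # (d) the score drives unfavourable treatment in an unrelated context

    def likely_in_scope(self) -> bool:
        # All four conditions must hold for the social-scoring ban to apply.
        return all((
            self.evaluates_or_classifies_persons,
            self.based_on_behaviour_or_traits,
            self.over_a_period_of_time,
            self.detrimental_treatment_out_of_context,
        ))

# The "trust score" app from the first example above would trip all four conditions.
trust_score_app = SocialScoringScreen(True, True, True, True)
print(trust_score_app.likely_in_scope())  # True -> escalate to legal review
```

Because the conditions are cumulative, a system that scores behaviour only within its original context (condition (d) absent) would not be caught by this particular ban, although it may still qualify as high risk.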

4. Predictive Systems for Criminal Offence Risk Based on Profiling or Personal Traits

Definition: AI systems that assess or predict the risk that a person will commit a criminal offence, based solely on profiling, personality traits or characteristics of the person, rather than objective facts related to actual criminal activity.

Scope conditions: The prohibition applies when the AI system assesses or predicts the crime risk of a natural person and the assessment rests essentially on profiling or personal traits, rather than on objective, verifiable facts linked to actual criminal activity that support a human assessment.

Examples:

  • An AI tool that aggregates personality traits, neighbourhood data, online behaviour and banking transactions to assign individuals a probability of committing theft, then restricts access to certain services or flags them for monitoring.
  • A system used by private security vendors to label individuals as “high-risk” for violent crime solely based on demographic profiling and algorithmic scoring without human review or connection to actual investigations.

5. Biometric Categorisation of Sensitive Attributes

Definition: AI systems that categorise, infer or deduce sensitive attributes—such as racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, sex life or sexual orientation—from biometric data.

Scope conditions: The system must use biometric data (e.g., face, fingerprint, iris) and infer or categorise sensitive attributes. Systems simply using biometrics for identification (without inferring sensitive attributes) may fall outside this specific prohibition but may still be high-risk.

Examples:

  • A facial recognition solution claiming to infer political leanings or religious beliefs based on subtle facial cues.
  • A biometric system deployed in a public space that categorises individuals by sexual orientation or trade union membership from fingerprint or face data.

6. Emotion Recognition or Behaviour Analysis in Sensitive Contexts (Workplaces/Education)

Definition: AI systems that infer or detect emotional states, moods, intentions or behaviours of natural persons in the context of workplaces or educational institutions.

Scope conditions: The prohibition applies when the system is designed to infer emotions in those contexts and is used for monitoring or decision-making about natural persons. It does not cover emotion detection directed at customers, or systems put in place for strictly medical or safety-related purposes (e.g., fatigue detection), provided other legal requirements are met.

Examples:

  • A workplace camera system using AI to analyse facial expressions of employees and grade their engagement or suitability for promotion.
  • A digital classroom tool using voice and video data to continuously score student moods or participation and influence grading or resource allocation.
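
As an illustration of how a deployer might triage an emotion-inference feature against the scope described above, the following Python sketch separates workplace and education contexts from customer-facing use and applies the safety/medical carve-out. The Context values and function name are assumptions made for this example, not categories from the Act.

```python
from enum import Enum, auto

class Context(Enum):
    WORKPLACE = auto()
    EDUCATION = auto()
    CUSTOMER_FACING = auto()
    OTHER = auto()

def triage_emotion_inference(context: Context,
                             infers_emotions: bool,
                             strictly_safety_or_medical: bool) -> str:
    """Rough triage of an emotion-inference feature against the scope above."""
    if not infers_emotions:
        return "outside this prohibition"
    if context in (Context.WORKPLACE, Context.EDUCATION):
        if strictly_safety_or_medical:
            # e.g. fatigue detection deployed for safety reasons
            return "likely outside the ban (safety/medical purpose); check other requirements"
        return "likely prohibited: remove or redesign the feature"
    return "not covered by this specific ban; may still be regulated as high risk"

# The engagement-grading workplace camera from the first example above:
print(triage_emotion_inference(Context.WORKPLACE, True, False))
```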

7. Untargeted Scraping or Expansion of Facial Recognition Databases

Definition: AI systems that build or expand facial recognition databases by means of untargeted scraping of facial images from the internet or CCTV footage, for the purpose of recognition of natural persons.

Scope conditions: The scraping must be untargeted (mass collection not aimed at specific individuals), the images must feed a facial recognition database, and the purpose must be the recognition of natural persons. Systems that collect facial images purely for model development or training, where individuals are not identified, may fall outside the ban.

Examples:

  • A data firm scraping millions of social-media profile images and linking them to names and face recognition models used for surveillance.
  • An AI vendor harvesting CCTV feeds from public spaces to build a searchable face database for private security firms.

8. Real-Time Remote Biometric Identification in Public Spaces for Law Enforcement (with Narrow Exceptions)

Definition: AI systems that perform real-time remote biometric identification of natural persons in publicly accessible spaces for law enforcement purposes. Such use is prohibited unless narrowly defined exceptions apply (for instance, the targeted search for specific victims, the prevention of an imminent threat to life or of a terrorist attack, or the localisation of suspects of serious crimes, each under strict authorisation).

Scope conditions: The prohibition covers the real-time use of such systems in publicly accessible spaces. Use may still be permitted in the specific circumstances defined by the Act (use by law enforcement authorities, prior judicial or independent authorisation, and the prevention or investigation of serious crimes).

Examples:

  • A city-wide network of live facial cameras identifying passersby in real-time and matching them to criminal databases without individual suspicion or oversight.
  • A public surveillance drone using AI face recognition to identify every person in an open square and track them continuously.

Summary Table of Prohibited Categories

# | Category | Core Focus | Key Example
1 | Manipulative or deceptive techniques | Covert distortion of behaviour | Hidden emotional triggers in ads
2 | Exploiting vulnerabilities | Targeting age, disability, or socio-economic weakness | High-risk credit push to elderly
3 | Social scoring | Classification based on behaviour leading to disadvantage | Reputation score restricting access
4 | Predictive crime risk profiling | Crime risk prediction via traits or profiling | Labeling individuals “high-risk” without evidence
5 | Biometric categorisation of sensitive attributes | Inferring race, religion, or orientation from biometrics | Facial analysis to infer sexual orientation
6 | Emotion recognition in work or education | Monitoring moods or engagement in sensitive settings | Camera scoring students’ attention
7 | Untargeted face scraping for recognition | Mass collection of face data for identification | Scraping social media images into surveillance databases
8 | Real-time remote biometric ID in public | Live face ID in publicly accessible spaces | City-wide facial recognition network

Important Compliance Notes

The prohibitions are absolute for the contextual uses described. They apply to providers, deployers, distributors and importers of AI systems within the EU market. The fact that an AI system may also serve another function or is subject to contracts or consent does not override the prohibition if the banned use is present.

Providers and deployers should screen each AI system for any of the eight categories, document their findings, eliminate or redesign features that fall into prohibited uses, and track regulatory updates since the list may evolve annually under Article 112.
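
As a rough illustration of the screening and documentation step described above, the sketch below records findings per prohibited category for a single AI system. The category labels paraphrase the eight prohibitions in this post; the class and field names are this example’s own assumptions, not regulatory terminology.

```python
from dataclasses import dataclass, field

# Category labels paraphrase the eight prohibitions discussed in this post.
PROHIBITED_CATEGORIES = (
    "manipulative or deceptive techniques",
    "exploitation of vulnerabilities",
    "social scoring",
    "predictive crime-risk profiling",
    "biometric categorisation of sensitive attributes",
    "emotion recognition in workplaces or education",
    "untargeted facial-image scraping",
    "real-time remote biometric identification in public",
)

@dataclass
class ScreeningRecord:
    system_name: str
    findings: dict = field(default_factory=dict)  # category -> documented rationale

    def flag(self, category: str, rationale: str) -> None:
        # Record why a feature was judged to fall into a prohibited category.
        if category not in PROHIBITED_CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        self.findings[category] = rationale

    def requires_redesign(self) -> bool:
        # Any flagged category means the feature must be removed or redesigned
        # before the system can be placed on the EU market.
        return bool(self.findings)

# Documenting a finding for a hypothetical HR analytics product:
record = ScreeningRecord("hr-analytics-suite")
record.flag("emotion recognition in workplaces or education",
            "camera module grades employee engagement from facial expressions")
print(record.requires_redesign())  # True
```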

 


Written by Artem Polynko

Cybersecurity Researcher at AI Healthcare Compliance
