OECD AI Principles

Organisation for Economic Co-operation and Development AI Principles

Updated: September 20, 2025 

OECD AI Principles in General

The OECD AI Principles were adopted in May 2019 by the Organisation for Economic Co-operation and Development (OECD) and revised in May 2024. They were one of the first intergovernmentally agreed frameworks for responsible and trustworthy AI, adhered to by 46 countries, including the U.S., Canada, EU member states, and Japan. Although voluntary and non-binding, the principles have strongly influenced regulatory efforts worldwide, including the EU AI Act and national AI strategies.

History and overview

  • Adopted: May 2019; endorsed by the G20 in June 2019 and updated by the OECD in May 2024.
  • Nature: Voluntary recommendations, not legally binding regulations.
  • Goal: Provide a global baseline for developing and deploying AI that is innovative, trustworthy, and aligned with societal values.
  • Reach: Applies to both governments and private-sector organizations across multiple sectors, including healthcare.

How it applies to AI in healthcare

Healthcare AI systems directly impact patient safety, trust, and fairness. The OECD AI Principles provide an ethical and operational framework to guide responsible use of AI in clinical and healthcare environments.

Key principles for healthcare AI

  • Inclusive growth and well-being: Ensure AI benefits patients broadly, avoiding bias and inequities in access to care.
  • Human-centered values: Protect human rights and dignity, ensuring clinicians and patients remain central in AI-driven decisions.
  • Transparency and explainability: Provide patients and healthcare professionals with understandable explanations of AI outputs and their limitations.
  • Robustness, security, and safety: Validate AI systems to ensure accuracy, resilience, and patient safety under clinical conditions.
  • Accountability: Ensure clear organizational responsibility for AI outcomes in diagnosis, treatment, and patient engagement.

Documentation and governance requirements in healthcare

  • Ethical oversight: Establish AI review boards or ethics committees for healthcare deployments.
  • Clinical impact assessments: Document how AI may influence patient outcomes, access to care, or treatment quality.
  • Transparency practices: Maintain documentation explaining training data, algorithms, and potential limitations for patient-facing applications.
  • Accountability mechanisms: Assign roles for monitoring, reporting, and addressing adverse AI outcomes.
  • Integration with compliance: Align with legal requirements such as HIPAA, GDPR, and FDA SaMD rules to ensure ethical and regulatory readiness.
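The documentation items above can also be tracked as a structured record inside an organization's own tooling. The sketch below is a minimal illustration in Python; the `ClinicalAIGovernanceRecord` class, its field names, and the deployment gate are hypothetical conventions for this example, not part of any OECD or regulatory specification.

```python
from dataclasses import dataclass, field

@dataclass
class ClinicalAIGovernanceRecord:
    """Hypothetical record bundling the governance artifacts listed above."""
    system_name: str
    ethics_review_completed: bool = False           # ethical oversight sign-off
    clinical_impact_assessment: str = ""            # effect on outcomes, access, quality
    transparency_notes: str = ""                    # training data, algorithm, limitations
    accountable_roles: dict[str, str] = field(default_factory=dict)  # who monitors/reports
    compliance_mappings: list[str] = field(default_factory=list)     # e.g. HIPAA, GDPR, FDA SaMD

    def deployment_ready(self) -> bool:
        """Simple gate: every artifact must be present before go-live."""
        return (
            self.ethics_review_completed
            and bool(self.clinical_impact_assessment)
            and bool(self.transparency_notes)
            and bool(self.accountable_roles)
            and bool(self.compliance_mappings)
        )

# Example: a record with all artifacts filled in passes the gate.
record = ClinicalAIGovernanceRecord(
    system_name="triage-assistant",
    ethics_review_completed=True,
    clinical_impact_assessment="May affect ED prioritisation; reviewed for demographic bias.",
    transparency_notes="Trained on de-identified 2018-2023 encounters; not validated for pediatrics.",
    accountable_roles={"monitoring": "Clinical AI Safety Officer", "adverse_events": "CMIO"},
    compliance_mappings=["HIPAA", "FDA SaMD guidance"],
)
print(record.deployment_ready())  # → True
```

In practice such a record would live in a governance platform or audit database; the point here is only that each bullet above maps naturally to an auditable field with a named owner.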

Best practices for healthcare AI

  • Adopt OECD AI Principles as a baseline ethical framework for healthcare AI projects.
  • Complement technical compliance (HIPAA, ISO/IEC 42001, ISO/IEC 27001) with ethical commitments such as fairness and transparency.
  • Establish structured risk management processes that include clinical, ethical, and patient impact considerations.
  • Promote patient trust and inclusivity by making AI-driven healthcare tools explainable and accessible.
  • Ensure cross-functional governance between AI engineers, clinicians, compliance officers, and patient representatives.

Future developments in healthcare AI governance

  • The OECD updated the principles in May 2024 and continues to monitor how they are applied across sectors, including healthcare.
  • Continued influence on emerging regulations such as the EU AI Act and U.S. AI policy frameworks.
  • Expansion of sector-specific profiles and case studies for AI in medicine, public health, and life sciences.
  • Potential integration with certification schemes like ISO/IEC 42001 for AI governance.

Relevant and overlapping laws and frameworks

  • HIPAA (U.S.): Privacy and security of PHI, complemented by OECD’s emphasis on accountability and fairness.
  • GDPR, PHIPA, PIPEDA: Data protection and privacy obligations align with OECD’s transparency and rights-based approach.
  • ISO/IEC 42001: Provides a certifiable governance system that operationalizes OECD’s principles in AI development.
  • NIST AI RMF: Offers a structured risk management approach aligned with OECD values of trustworthy AI.
  • FDA AI/ML SaMD Guidance: Regulatory oversight for AI as medical devices, supported by OECD’s principles of robustness and safety.

References and official sources

OECD AI Principles: https://www.oecd.org/en/topics/sub-issues/ai-principles.html

OECD.AI — AI Principles overview: https://oecd.ai/en/ai-principles

OECD updates AI Principles to stay abreast of rapid technological developments: https://www.oecd.org/en/about/news/press-releases/2024/05/oecd-updates-ai-principles-to-stay-abreast-of-rapid-technological-developments.html

OECD Framework for the Classification of AI systems: https://www.oecd.org/en/publications/oecd-framework-for-the-classification-of-ai-systems_cb6d9eca-en.html