Weekly News and Updates (Nov 22 – 28, 2025)

Nov 29, 2025 | AI News & Updates

Between 22–28 November 2025, global regulators accelerated the shift from high-level principles to mandatory operational controls, particularly in Canada, which launched its first public AI Register detailing hundreds of government AI systems. The EU continued streamlining the AI Act, while the US framework solidified around new guidance for Generative AI in medical devices and ongoing state-level guardrails for clinical and payer-facing AI. The core compliance themes are transparency (public registries, patient disclosures), lifecycle accountability (for Generative AI and Software as a Medical Device, SaMD), and robust clinical guardrails (especially for mental health and diagnostic tools). Health organizations must move swiftly to formalize AI governance committees and implement mandatory audit trails for all high-risk AI tools.

Canada

1) Government of Canada Launches First Public AI Register (Nov 28, 2025)

The Honourable Shafqat Ali, President of the Treasury Board, published the Government of Canada’s first public AI Register. The register provides detailed, publicly accessible information on where and how over 400 AI systems, from early research through fully deployed tools, are being explored, developed, implemented, or deployed across 42 federal institutions. The launch is a key milestone in implementing the federal public service’s AI Strategy and reflects a commitment to transparency and responsible AI adoption.

Treasury Board of Canada Secretariat – Canada launches first register of AI uses in federal government (Nov 28, 2025)

How it applies to AI Healthcare Compliance:

While the register covers all federal AI systems, the inclusion of systems in healthcare-related agencies sets a new mandatory transparency standard. Health organizations and vendors working with federal or provincial health systems should anticipate similar requirements for public disclosure on their own high-risk AI tools (as outlined in the companion documents for the Artificial Intelligence and Data Act – AIDA). This also provides insight into the type and maturity of AI systems the government is already considering and funding.

2) Canada’s Drug Agency 2025 Watch List Focuses on AI in Health Care (Ongoing Operative Guidance)

Canada’s Drug Agency (formerly CADTH) released its annual 2025 Watch List, focusing entirely on Artificial Intelligence technologies and the corresponding ethical, legal, and operational issues in health care. This document highlights the top 5 emerging AI technologies (e.g., AI for notetaking, AI for disease detection/diagnosis) and critical issues (e.g., liability, data quality/bias, data sovereignty). Though developed from a workshop in late 2024, the list was officially released in 2025 and remains a key operational guide for health system planning in this window.

Canada’s Drug Agency – 2025 Watch List: Artificial Intelligence in Health Care (2025)

How it applies to AI Healthcare Compliance:

This report is a direct signal to Canadian health organizations of where health system planning and procurement will focus. Vendors and health providers must proactively address the issues raised: for instance, documenting data-bias mitigation strategies and clearly defining liability models for AI tools used in diagnosis or treatment. The Watch List can also serve as a checklist for organizational readiness.

United States

1) FDA Digital Health Advisory Committee Focuses on Generative AI in Medical Devices (Operative Guidance)

The FDA’s Digital Health Advisory Committee (DHAC) continues to emphasize the need to optimize the Total Product Life Cycle (TPLC) approach for medical devices incorporating complex, iterative technologies like Generative AI. The focus is on ensuring the safety and effectiveness of new digital mental health medical devices, such as AI therapists, which are increasing in complexity and demand. This ongoing focus signals a sustained, high-level regulatory concern for the unique challenges posed by Generative AI in sensitive clinical settings.

FDA – Generative Artificial Intelligence-Enabled Digital Mental Health Medical Devices (Nov 20, 2024 – Ongoing)

How it applies to AI Healthcare Compliance:

Organizations and vendors deploying Generative AI in high-risk environments, particularly mental health, must adopt an enhanced TPLC strategy. This requires continuous monitoring for model drift, rigorous safety validation against unintended or harmful outputs (hallucinations), and explicit guardrails preventing the AI from making independent therapeutic decisions without human oversight, as reflected in various state laws.
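One concrete way to operationalize continuous drift monitoring is a periodic statistical check on a model's output distribution. The sketch below is illustrative only: it computes the Population Stability Index (PSI) between a validation-time baseline and recent production scores, with hypothetical data, and the 0.2 review threshold is a common industry heuristic, not a regulatory value.

```python
import math

# Illustrative drift check: Population Stability Index (PSI) between a
# baseline score distribution and a current one. All values hypothetical.
def psi(baseline, current, bins=10):
    """PSI between two score distributions; higher means more drift."""
    lo = min(min(baseline), min(current))
    hi = max(max(baseline), max(current))
    width = (hi - lo) / bins or 1.0  # guard against zero-width range

    def proportions(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Smooth empty buckets to avoid log(0)
        return [(c or 0.5) / len(xs) for c in counts]

    b, c = proportions(baseline), proportions(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

# Hypothetical monitoring rule: flag for human review above PSI 0.2
baseline_scores = [0.1, 0.2, 0.25, 0.3, 0.4, 0.5, 0.55, 0.6, 0.7, 0.8]
current_scores  = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]
drift = psi(baseline_scores, current_scores)
print("PSI:", round(drift, 3), "-> review" if drift > 0.2 else "-> ok")
```

In a real deployment this check would run on a schedule, log its result to the audit trail, and open a human review ticket when the threshold is crossed.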

2) New National Guidance on Responsible Use of AI in Healthcare (September 2025 – Ongoing)

The comprehensive guidance released by the Joint Commission and the Coalition for Health AI (CHAI) in September 2025 provides a national model for responsible AI practices that remains highly relevant and is being adopted by many US health systems. It stresses the establishment of clear AI Policies and Governance Structures (including a cross-functional oversight team) and mandates increased Patient Privacy and Transparency (clear disclosure when AI is used in care). State laws, such as Texas HB 149 (effective Jan 1, 2026), reinforce these themes by requiring conspicuous patient disclosure when AI influences healthcare services.

The Joint Commission / CHAI – The Responsible Use of AI in Healthcare (Sept 2025)

How it applies to AI Healthcare Compliance:

This guidance is a clear call to action on internal governance. Health organizations must immediately appoint a cross-functional AI governance committee and develop or update patient transparency policies and consent forms to address the use of AI. For vendors, this translates into a demand for complete documentation (model cards) that supports the hospital’s transparency obligations.
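As a hedged illustration of what such vendor documentation might contain, the minimal model-card record below uses hypothetical field names and values; it is a sketch, not a published schema or the CHAI template.

```python
import json

# Illustrative only: a minimal machine-readable "model card" a vendor
# might supply to support a hospital's transparency obligations.
# System name, fields, and values are hypothetical.
model_card = {
    "name": "sepsis-risk-v2",
    "intended_use": "Decision support for early sepsis screening",
    "out_of_scope": ["Autonomous diagnosis", "Pediatric patients"],
    "training_data": "De-identified EHR records, 2018-2023",
    "known_limitations": ["Lower sensitivity in underrepresented groups"],
    "human_oversight": "Clinician must confirm before any intervention",
    "last_validated": "2025-10-01",
}

print(json.dumps(model_card, indent=2))
```

A structured record like this can be attached to procurement files and surfaced in patient-facing disclosures without manual re-drafting.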

European Union & United Kingdom

1) European Commission Advances AI Act (Nov 19, 2025 – Ongoing)

The European Commission proposed targeted amendments to the AI Act on November 19, 2025, as part of a Digital Simplification Package. This effort, alongside the launch of the AI Act Service Desk, aims to provide clarity for businesses and national authorities on implementing the complex AI Act. The AI Act continues its risk-based approach, subjecting High-Risk systems (which include many healthcare applications) to strict requirements for data quality, technical documentation, human oversight, and robustness.

European Commission – European approach to artificial intelligence (Nov 19, 2025 – Ongoing)

How it applies to AI Healthcare Compliance:

Despite simplification efforts, the core obligations for healthcare-related High-Risk AI systems remain. Vendors must adhere to standards for accuracy, cybersecurity, and human oversight. Health organizations deploying these systems must have documented procedures ensuring continued compliance with these standards (Article 14 of the AI Act), including human-in-the-loop oversight for critical clinical decisions.
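To illustrate how a documented human-in-the-loop procedure might be encoded in software, the sketch below routes critical or low-confidence outputs to a clinician instead of executing them automatically. The types, action names, and the 0.9 confidence threshold are assumptions for this example, not requirements taken from Article 14.

```python
from dataclasses import dataclass

# Illustrative human-in-the-loop gate for a high-risk clinical
# recommendation. Action names and threshold are hypothetical.
@dataclass
class Recommendation:
    patient_id: str
    action: str
    confidence: float  # model's self-reported confidence, 0..1

def requires_clinician_signoff(rec: Recommendation,
                               critical_actions: set) -> bool:
    """Route every critical action, and any low-confidence output,
    to a licensed clinician rather than executing it automatically."""
    return rec.action in critical_actions or rec.confidence < 0.9

rec = Recommendation("p-001", "adjust_dosage", confidence=0.97)
print(requires_clinician_signoff(rec, {"adjust_dosage", "discharge"}))
```

The point of the gate is that the list of critical actions and the escalation path are explicit, versioned artifacts that an auditor can inspect, rather than implicit behavior.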

2) OECD Publishes Working Paper on Evolving AI Capabilities and the School Curriculum (Nov 21, 2025)

The OECD published a working paper on November 21, 2025, exploring the implications of rapidly evolving AI capabilities, particularly Generative AI, for the school curriculum. While not directly regulating healthcare, this publication indicates a global, high-level policy focus on the societal impact of advanced AI, including the need to build digital literacy and address potential harms like misinformation and algorithmic bias from an early age.

OECD – Evolving AI capabilities and the school curriculum (Nov 21, 2025)

How it applies to AI Healthcare Compliance:

This reinforces the need for internal workforce education in healthcare. Organizations should create ongoing training programs for clinicians and staff on the capabilities and limitations of AI tools, focusing on the potential for bias and “hallucinations” to ensure critical thinking is not replaced by automation. This aligns with the “human-in-the-loop” requirement in the EU AI Act and US/Canadian clinical guidance.

Rest of the world

1) OECD Report on AI in Strategic Foresight (Nov 19, 2025)

The OECD published a report on AI in Strategic Foresight on November 19, 2025. This report, focusing on using AI for long-term governmental and policy planning, reflects the increasing global integration of AI into government operations. This is part of the OECD’s continued work on the responsible use of AI across all public sectors.

OECD – AI in Strategic Foresight (Nov 19, 2025)

How it applies to AI Healthcare Compliance:

This signals that international bodies are using AI to predict future risks and policy needs. Health organizations and vendors operating globally must embed AI impact assessments (AIAs) and strategic risk assessments into their planning, anticipating that future regulations will be built on AI-driven predictions of societal and health system impact.

2) WHO Continues to Urge Collaborative Approach to Safe and Equitable AI in Health (Oct 2025 – Ongoing)

The WHO’s ongoing departmental updates continue to urge a collaborative approach to advance safe and equitable AI in health, reinforcing global governance and capacity-building needs. This global push ensures that, even where local laws are absent, international best practices (like WHO’s AI governance guidelines) are the expected minimum standard, especially when working in low- and middle-income countries (LMICs).

WHO – Digital Health (Ongoing Updates)

How it applies to AI Healthcare Compliance:

Global implementers must align AI development and deployment strategies with WHO’s established technical guidance on AI for health. A strong focus on equity, data sovereignty, and ethical capacity building helps ensure they meet global ethical minimums and qualify for international funding and partnership opportunities.

Cross-Cutting Themes Across Jurisdictions

  1. Mandatory Transparency: Canada’s AI Register sets a new bar for mandatory public disclosure on AI systems. This trend will likely influence other jurisdictions, pressuring health organizations and vendors to publish clear, accessible information about their clinical AI tools.
  2. Generative AI (GenAI) Scrutiny: The US FDA’s focus on GenAI in digital mental health and the OECD’s attention to its societal impact confirm that GenAI in clinical settings is the highest regulatory priority, requiring enhanced lifecycle monitoring and human safety controls.
  3. Internal Governance as Compliance: National guidance in the US (Joint Commission/CHAI) and the foundational principles in the EU and Canada all mandate the same thing: organizations must establish formal, cross-functional AI governance committees and clear internal policies before deployment.
  4. Human-in-the-Loop is Non-Negotiable: Whether for therapy, diagnosis, or clinical education, regulators are reinforcing that AI must be a decision-support tool that augments, but does not replace, the clinical judgment and accountability of licensed health professionals.

Immediate, Concrete Checklist for Health Organizations & Vendors

  • Establish or Formalize AI Governance Committee: Appoint a standing committee (including Compliance, IT, Clinical Ops, and Legal) with clearly defined responsibilities for approving, monitoring, and updating all AI tools used clinically or operationally.
  • Implement Public AI Inventory/Registry: For all deployed AI tools, begin compiling the information required for a public registry (purpose, data source, risk rating, oversight process) in anticipation of future mandatory requirements, using the Canadian model as a reference.
  • Update Patient Consent & Transparency: Revise patient consent and notice of privacy practices to explicitly disclose how and when AI tools are used to inform or influence their care, aligning with US state-level requirements.
  • Require Generative AI Safety Dossier from Vendors: Demand evidence from GenAI vendors on continuous monitoring for model drift, testing protocols for hallucination mitigation, and documented human-in-the-loop failure modes before procurement.
  • Mandate Workforce Education: Implement mandatory training for clinical staff on AI literacy, focusing on the potential for bias, “hallucination,” and the ultimate accountability that remains with the human clinician.
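The registry-readiness item in the checklist above can be sketched as a simple internal record. The field names mirror the checklist (purpose, data source, risk rating, oversight process) but are assumptions for illustration, not the Canadian register’s actual schema.

```python
from dataclasses import dataclass, asdict

# Illustrative internal inventory record for a public AI registry.
# System name, values, and lifecycle labels are hypothetical.
@dataclass
class RegistryEntry:
    system_name: str
    purpose: str
    data_source: str
    risk_rating: str        # e.g. "high" for clinical decision support
    oversight_process: str
    lifecycle_stage: str    # e.g. exploration / development / deployed

entry = RegistryEntry(
    system_name="radiology-triage-assist",
    purpose="Prioritize suspected fractures in imaging worklists",
    data_source="De-identified imaging archive (2019-2024)",
    risk_rating="high",
    oversight_process="Radiologist review of every flagged study",
    lifecycle_stage="deployed",
)
print(asdict(entry))
```

Maintaining entries in a structured form like this makes it straightforward to export a public-facing registry later, whatever schema a regulator ultimately mandates.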

Written by Grigorii Kochetov

Cybersecurity Researcher at AI Healthcare Compliance
