Weekly News and Updates (Dec 12 – 19, 2025)

Dec 20, 2025 | AI News & Updates

Between 12–19 December 2025, the regulatory landscape for AI in healthcare shifted decisively toward national-level consolidation and operational security: the U.S. White House issued a landmark Executive Order to centralize AI policy and preempt state-level fragmentation; Canada’s Drug Agency (CDA-AMC) released its 2025 AI Watch List identifying the top technologies and legal hurdles for the coming year; NIST unveiled a dedicated AI profile for its Cybersecurity Framework; and the UK’s MHRA launched a pivotal national call for evidence to shape future medical device standards. Meanwhile, the EU moved to fast-track biotech and cardiovascular AI via the Biotech Act, and WHO Europe warned of a critical “legal safety net” gap across 53 member states.

Canada

1) CDA-AMC: 2025 Watch List for AI in Health Care (Dec 15, 2025)

Canada’s Drug Agency (CDA-AMC) released its annual horizon-scanning report, the 2025 Watch List, which explicitly focuses on the integration of AI within the Canadian health system. The report identifies the top five emerging technologies: AI for medical notetaking, clinical training/education, disease detection/diagnosis, disease treatment, and personalized care. Crucially, it identifies five “issue guardrails” that will define Canadian health-AI policy in 2026: Privacy/Data Security, Liability/Accountability, Data Quality/Bias, Data Sovereignty, and the impact on the Health Human Resources (HHR) workforce.

CDA-AMC – 2025 Watch List: Artificial Intelligence in Health Care

How it applies to AI in healthcare:

The Watch List serves as a procurement and planning guide for provincial health authorities. Vendors must demonstrate how their tools address the issue guardrails, particularly liability and data sovereignty, to be considered for large-scale public-sector adoption. It signals that Canadian regulators are shifting focus from “what” AI can do to “how” it is governed legally and ethically within clinical workflows.
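
To make the guardrail requirement concrete, here is a minimal sketch, in Python, of how a vendor might check that a product dossier documents evidence for each of the five Watch List guardrails before a procurement submission. The five category names come from the report; the dossier structure and helper function are hypothetical.

```python
# Hypothetical dossier check. The five guardrail categories come from the
# CDA-AMC 2025 Watch List; everything else here is illustrative.
GUARDRAILS = {
    "privacy_data_security",
    "liability_accountability",
    "data_quality_bias",
    "data_sovereignty",
    "hhr_workforce_impact",
}

def missing_guardrails(dossier):
    """Return guardrail categories with no documented evidence."""
    return {g for g in GUARDRAILS if not dossier.get(g, "").strip()}

dossier = {
    "privacy_data_security": "PIPEDA/PHIPA mapping; encryption at rest and in transit",
    "data_sovereignty": "All PHI processed and stored in Canadian regions",
    # liability, bias, and workforce sections not yet written
}

gaps = missing_guardrails(dossier)
if gaps:
    print("Dossier incomplete; missing:", ", ".join(sorted(gaps)))
```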

2) Health Canada: National Wastewater Drug Surveillance AI Integration (Dec 12, 2025)

Health Canada launched an expanded National Wastewater Drug Surveillance dashboard, incorporating AI-driven predictive modeling to monitor community drug use patterns. The initiative is a direct application of the 2025-26 Departmental Plan’s commitment to using “advanced technologies” for public health surveillance and early intervention.
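
Health Canada has not published the dashboard’s models, so the following is only a sketch of the simplest kind of signal such a system might compute: a rolling z-score that flags weeks where a metabolite’s wastewater concentration jumps well above its recent baseline. All numbers and names are invented for illustration.

```python
# Illustrative only: not Health Canada's actual model.
from statistics import mean, stdev

def flag_anomalies(series, window=8, threshold=3.0):
    """Return indices where a value exceeds `threshold` standard deviations
    above the trailing `window`-week baseline."""
    flagged = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Hypothetical weekly metabolite concentrations (ng/L)
weekly_ng_per_l = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3, 12.2, 12.4, 19.7, 12.6]
print(flag_anomalies(weekly_ng_per_l))  # -> [8]
```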

Health Canada – Departmental Plan 2025-26 & Public Health Dashboards

How it applies to AI in healthcare:

This deployment sets a precedent for how Health Canada utilizes population-level health data. For AI developers in the public health and surveillance space, this confirms that the federal government is prioritizing “non-invasive” AI data streams (like wastewater) for policy-making. It emphasizes the need for high-quality data governance to maintain public trust in automated surveillance.

United States

1) White House: Executive Order on National Policy Framework for AI (Dec 12, 2025)

President Trump signed an Executive Order (EO) titled “Ensuring a National Policy Framework for Artificial Intelligence.” The EO’s primary objective is to create a “minimally burdensome” national standard that preempts state-level AI regulations (such as those in California and New York) to avoid a “patchwork” of conflicting laws. It establishes an AI Litigation Task Force to challenge state-level prohibitions on algorithmic discrimination that the administration deems “unfair and deceptive.”

White House / AHA News – Executive Order: National Policy Framework for AI

How it applies to AI in healthcare:

This centralizes AI regulation at the federal level (FDA/HHS), potentially simplifying the compliance landscape for vendors by overriding more stringent state laws. However, it also introduces legal uncertainty as state attorneys general are likely to challenge the preemption. Health systems must maintain flexible governance that can withstand shifts between federal and state authority during this litigation phase.
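
One way to operationalize that flexibility, sketched here with entirely hypothetical requirement names, is to keep federal and state obligations as data rather than hard-coding them, so the applicable rule set can be recomputed as the preemption litigation resolves.

```python
# Hypothetical compliance register; requirement names are illustrative,
# not drawn from any statute or from the Executive Order.
FEDERAL_BASELINE = {"transparency_notice", "adverse_event_reporting"}
STATE_RULES = {
    "CA": {"algorithmic_discrimination_audit", "transparency_notice"},
    "NY": {"bias_audit", "adverse_event_reporting"},
}

def applicable_requirements(state, federal_preemption):
    """Federal baseline only if preemption is upheld; otherwise the union
    of federal and state obligations."""
    if federal_preemption:
        return set(FEDERAL_BASELINE)
    return FEDERAL_BASELINE | STATE_RULES.get(state, set())

# Today (preemption contested): comply with both layers.
print(sorted(applicable_requirements("CA", federal_preemption=False)))
```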

2) NIST: Cybersecurity Framework Profile for Artificial Intelligence (Draft released Dec 17, 2025)

The National Institute of Standards and Technology (NIST) released a companion draft to its widely used Cybersecurity Framework (CSF) specifically for AI systems. The profile maps AI-specific challenges, such as data poisoning, model extraction, and adversarial machine learning, onto the CSF’s core functions (Identify, Protect, Detect, Respond, Recover). It introduces three strategic focus areas: Secure (the AI itself), Defend (using AI for cyber defense), and Thwart (blocking AI-powered attacks).
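
As a rough illustration of what “mapping” means in practice, the sketch below tabulates a few AI-specific threats against the CSF core functions and the three focus areas. The threat and function names follow the profile’s vocabulary; the specific rows and the coverage check are our own assumptions, not content from the NIST draft.

```python
# Illustrative mapping only; the rows below are assumptions, not NIST content.
CSF_FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]

AI_THREAT_MAP = {
    "data_poisoning":        {"focus": "Secure", "csf": ["Identify", "Protect", "Detect"]},
    "model_extraction":      {"focus": "Secure", "csf": ["Protect", "Detect"]},
    "ai_generated_phishing": {"focus": "Thwart", "csf": ["Protect", "Detect", "Respond"]},
}

def uncovered_functions(threat):
    """CSF functions with no mapped control yet for a given threat."""
    covered = set(AI_THREAT_MAP[threat]["csf"])
    return [f for f in CSF_FUNCTIONS if f not in covered]

for threat in AI_THREAT_MAP:
    print(threat, "-> functions still to review:", uncovered_functions(threat))
```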

NIST – Cybersecurity Framework Profile for AI (Draft Dec 2025)

How it applies to AI in healthcare:

Although still a draft, this profile is positioned to become the reference standard for health-AI security audits. Healthcare organizations and vendors should begin aligning their risk management frameworks with it now: following the profile provides “good faith” evidence in the event of a breach or regulatory audit, particularly as federal agencies (FDA/HHS) increasingly reference NIST standards in their own oversight.

European Union & United Kingdom

1) European Commission: Biotech Act and Safe Hearts Plan (Dec 16, 2025)

The EU Commission proposed new measures to modernize the healthcare sector, headlined by the Biotech Act. Key provisions include the creation of AI-focused regulatory sandboxes, simplified rules for medical device development, and a shortened clinical trial approval process. The Safe Hearts Plan specifically targets cardiovascular disease by funding the development of AI diagnostic tools and personalized predictive models.

European Commission – Biotech Act and Safe Hearts Plan Press Release (Dec 16, 2025)

How it applies to AI in healthcare:

For vendors, the regulatory sandboxes offer a vital pathway to test AI in drug discovery and clinical settings with reduced immediate liability. The simplified medical device rules aim to cut the conformity assessment delays that have plagued the industry since the Medical Device Regulation (MDR) took effect. Companies specializing in cardiovascular AI should look for upcoming “targeted support” and investment facility opportunities within the EU.

2) UK MHRA: National Commission Call for Evidence on AI Regulation (Dec 18, 2025)

The MHRA officially launched a public “Call for Evidence” for the newly established National Commission into the Regulation of AI in Healthcare. The call runs until February 2026; the Commission is seeking evidence on whether current UK frameworks are sufficient for AI-enabled healthcare and on how to allocate liability among manufacturers, clinicians, and NHS Trusts. Key themes include “human-in-the-loop” safeguards and post-market safety monitoring.
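
To illustrate the kind of “human-in-the-loop” safeguard the Commission is asking about, here is a minimal sketch in which low-confidence model outputs are routed to a clinician and every decision is logged to support post-market monitoring. The threshold and field names are hypothetical; a real gate would need clinically validated thresholds.

```python
# Sketch only: the threshold and field names are hypothetical.
import time

REVIEW_THRESHOLD = 0.90  # illustrative; requires clinical validation

def triage(prediction, confidence, audit_log):
    """Route low-confidence outputs to a clinician; log every decision to
    support post-market safety monitoring."""
    route = "auto_release" if confidence >= REVIEW_THRESHOLD else "clinician_review"
    audit_log.append({
        "timestamp": time.time(),
        "prediction": prediction,
        "confidence": confidence,
        "route": route,
    })
    return route

log = []
print(triage("suspected_nodule", 0.97, log))  # auto_release
print(triage("suspected_nodule", 0.62, log))  # clinician_review
```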

UK Government / MHRA – Call for Evidence: Regulation of AI in Healthcare

How it applies to AI in healthcare:

This is the most critical opportunity for health-AI innovators to influence the UK’s long-term regulatory roadmap. The evidence gathered will inform the 2026 standards for Software as a Medical Device (SaMD). Organizations should submit data on “real-world performance” and “explainability” to ensure future rules remain risk-proportionate rather than prohibitively restrictive.

Rest of the world

1) Australia TGA: Consultation on Digital Mental Health Tools (Dec 17, 2025)

The Therapeutic Goods Administration (TGA) has opened a review into the regulation of digital mental health tools, including AI-driven chatbots and therapeutic software. The TGA is seeking feedback on when these tools cross the line from “wellness” product to “medical device,” which would require clinical evidence and registration on the Australian Register of Therapeutic Goods (ARTG).
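
The boundary will ultimately turn on a tool’s claims. As a purely hypothetical sketch (not the TGA’s actual test), a vendor might triage its own marketing claims for language that suggests a therapeutic purpose and therefore likely medical-device classification:

```python
# Hypothetical claims triage; the keyword list is illustrative and is not
# the TGA's classification test.
THERAPEUTIC_TERMS = ("diagnose", "treat", "cure", "prevent", "therapy")

def likely_medical_device(claims):
    """Flag claims whose wording suggests a therapeutic purpose."""
    return [c for c in claims if any(t in c.lower() for t in THERAPEUTIC_TERMS)]

claims = [
    "Track your daily mood and sleep",               # wellness-style claim
    "Chatbot therapy to treat symptoms of anxiety",  # device-style claim
]
print(likely_medical_device(claims))
```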

TGA – Seeking input on digital mental health tools

How it applies to AI in healthcare:

Vendors in the mental health space must prepare for stricter classification in Australia. This review follows a global trend (also seen in the UK MHRA) toward treating high-risk AI chatbots as regulated medical devices. Early participation in this consultation is crucial for companies wishing to avoid sudden reclassification and market removal.

2) WHO Europe: Report on Legal Safeguards for AI in Healthcare (Dec 19, 2025)

The World Health Organization Regional Office for Europe released its first comprehensive assessment of AI adoption across 53 countries. The report reveals a massive gap in legal protections: while nearly all countries use AI for diagnostics or disease surveillance, 86% cite legal uncertainty as the top barrier to adoption, and fewer than 10% have liability standards for AI-driven clinical errors.

UN News / WHO – UN calls for legal safeguards for AI in healthcare

How it applies to AI in healthcare:

This report will likely drive a new wave of regional legislation across Europe and Central Asia focused on “liability and legal guardrails.” International vendors should anticipate new requirements for “AI literacy” training for healthcare workers and clear “manufacturer liability” clauses in service agreements for tools deployed in these regions.

Cross-Cutting Themes

  1. Centralization of Authority: The U.S. Executive Order and the EU Biotech Act show a move toward federal/centralized control to accelerate innovation and reduce regional fragmentation.
  2. Security Standardization: The NIST AI profile marks a shift from general cybersecurity to AI-specific defense (thwarting data poisoning and model theft) as a regulatory baseline.
  3. Liability as the Primary Barrier: Both the CDA-AMC (Canada) and WHO Europe report identify liability—who is responsible when the AI makes a mistake—as the #1 obstacle to widespread clinical adoption.
  4. Clinical Use-Case Specificity: Regulators are moving away from broad AI rules to specific guidance for high-risk domains like mental health (Australia) and cardiovascular diagnostics (EU).

Immediate, Concrete Checklist for Health Organizations & Vendors

  • Review U.S. State vs. Federal Strategy: For US-based vendors, assess if current state-level compliance (e.g., California) conflicts with the new White House “National Policy Framework” and prepare for potential litigation-driven changes.
  • Audit Security against NIST AI Profile: Map current AI infrastructure against the “Secure, Defend, Thwart” focus areas in the new NIST draft to identify vulnerabilities to model-specific attacks.
  • Document CDA-AMC “Guardrail” Compliance: For Canadian deployments, ensure that product dossiers specifically address the five issue guardrails (Data Sovereignty, Liability, etc.) highlighted in the 2025 Watch List.
  • Participate in UK/Australia Consultations: Submit evidence to the MHRA Call for Evidence (open until February 2026) and the TGA digital mental health consultation to influence future SaMD classification.
  • Verify “AI Literacy” of Users: Per WHO recommendations, implement and document training programs for clinicians using AI to demonstrate “human oversight” and mitigate liability risks; a minimal record-keeping sketch follows this list.
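
As one illustrative way to make such training auditable (the record format below is hypothetical; WHO prescribes no schema), a minimal register might look like this:

```python
# Hypothetical training register; WHO calls for "AI literacy" but
# prescribes no particular format.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TrainingRecord:
    clinician_id: str
    module: str
    completed_on: date
    tools_covered: list = field(default_factory=list)

register = [
    TrainingRecord("c-1042", "ai-literacy-core", date(2025, 12, 18),
                   ["triage-assistant", "note-summarizer"]),
]

def trained_for(clinician_id, tool):
    """True if the clinician has a documented module covering the tool."""
    return any(r.clinician_id == clinician_id and tool in r.tools_covered
               for r in register)

print(trained_for("c-1042", "note-summarizer"))  # True
```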

Written by Grigorii Kochetov

Cybersecurity Researcher at AI Healthcare Compliance
