Canada
1) CDA-AMC: 2025 Watch List for AI in Health Care (Dec 15, 2025)
Canada’s Drug Agency (CDA-AMC) released its annual horizon-scanning report, the 2025 Watch List, which focuses explicitly on the integration of AI within the Canadian health system. The report identifies the top five emerging technologies: AI for medical notetaking, clinical training/education, disease detection/diagnosis, disease treatment, and personalized care. Crucially, it also names five “issue guardrails” that will define Canadian health-AI policy in 2026: Privacy/Data Security, Liability/Accountability, Data Quality/Bias, Data Sovereignty, and the impact on the Health Human Resources (HHR) workforce.
CDA-AMC – 2025 Watch List: Artificial Intelligence in Health Care
How it applies to AI in healthcare:
The Watch List serves as a procurement and planning guide for provincial health authorities. Vendors must demonstrate how their tools address the issue guardrails (specifically liability and data sovereignty) to be considered for large-scale public-sector adoption. It signals that Canadian regulators are shifting focus from “what” AI can do to “how” it is governed legally and ethically within clinical workflows.
2) Health Canada: National Wastewater Drug Surveillance AI Integration (Dec 12, 2025)
Health Canada launched an expanded National Wastewater Drug Surveillance dashboard, incorporating AI-driven predictive modeling to monitor community drug use patterns. This initiative is a direct application of the 2025-26 Departmental Plan’s commitment to using “advanced technologies” for public health surveillance and early intervention.
Health Canada – Departmental Plan 2025-26 & Public Health Dashboards
How it applies to AI in healthcare:
This deployment sets a precedent for how Health Canada utilizes population-level health data. For AI developers in the public health and surveillance space, this confirms that the federal government is prioritizing “non-invasive” AI data streams (like wastewater) for policy-making. It emphasizes the need for high-quality data governance to maintain public trust in automated surveillance.
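Health Canada has not published the dashboard’s underlying model, but the basic pattern (forecasting a community-level trend from a wastewater time series and flagging anomalies) can be sketched simply. The Python sketch below is illustrative only: the readings, smoothing factor, and alert threshold are invented assumptions, not Health Canada’s actual method.

```python
# Minimal sketch of trend forecasting on wastewater surveillance data.
# All readings, parameters, and thresholds are hypothetical; Health Canada
# has not published its actual modeling approach.

def exponential_smoothing(series: list[float], alpha: float = 0.3) -> list[float]:
    """Smooth the series; each point blends the newest observation with
    the running estimate (higher alpha = more reactive to recent weeks)."""
    smoothed = [series[0]]
    for observation in series[1:]:
        smoothed.append(alpha * observation + (1 - alpha) * smoothed[-1])
    return smoothed

# Hypothetical weekly metabolite concentrations (ng/L) from one sampling site.
weekly_readings = [112.0, 118.5, 121.0, 119.2, 130.4, 142.7, 151.3]

trend = exponential_smoothing(weekly_readings)

# Naive one-step-ahead forecast: project the latest smoothed level forward
# by the most recent smoothed change.
forecast = trend[-1] + (trend[-1] - trend[-2])

# Flag the site if the forecast exceeds the early-period baseline by >20%.
baseline = sum(weekly_readings[:4]) / 4
if forecast > baseline * 1.2:
    print(f"Alert: forecast {forecast:.1f} ng/L exceeds baseline {baseline:.1f} ng/L by >20%")
```

A production system would use a richer model (seasonality, site covariates, uncertainty intervals), but the governance questions raised above, such as data quality and public trust, apply even to a pipeline this simple.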
United States
1) White House: Executive Order on National Policy Framework for AI (Dec 12, 2025)
President Trump signed an Executive Order (EO) titled “Ensuring a National Policy Framework for Artificial Intelligence.” The EO’s primary objective is to create a “minimally burdensome” national standard that preempts state-level AI regulations (such as those in California and New York) to avoid a “patchwork” of conflicting laws. It establishes an AI Litigation Task Force to challenge state-level prohibitions on algorithmic discrimination that the administration deems “unfair and deceptive.”
White House / AHA News – Executive Order: National Policy Framework for AI
How it applies to AI in healthcare:
This centralizes AI regulation at the federal level (FDA/HHS), potentially simplifying the compliance landscape for vendors by overriding more stringent state laws. However, it also introduces legal uncertainty as state attorneys general are likely to challenge the preemption. Health systems must maintain flexible governance that can withstand shifts between federal and state authority during this litigation phase.
2) NIST: Cybersecurity Framework Profile for Artificial Intelligence (Draft released Dec 17, 2025)
The National Institute of Standards and Technology (NIST) released a companion draft to its widely used Cybersecurity Framework (CSF) specifically for AI systems. The profile maps AI-specific challenges, such as data poisoning, model extraction, and adversarial machine learning, onto the CSF’s core functions (Identify, Protect, Detect, Respond, Recover). It introduces three strategic focus areas: Secure (the AI itself), Defend (using AI for cyber defense), and Thwart (blocking AI-powered attacks).
NIST – Cybersecurity Framework Profile for AI (Draft Dec 2025)
How it applies to AI in healthcare:
Once finalized, this profile is positioned to become the benchmark for health-AI security audits. Healthcare organizations and vendors should begin aligning their risk management frameworks with the draft now. Following the profile provides “good faith” evidence in the event of a breach or regulatory audit, particularly as federal agencies (FDA/HHS) increasingly reference NIST standards in their own oversight.
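One practical way to start that alignment is to keep an explicit, machine-checkable mapping from AI-specific threats to the CSF core functions and the profile’s three focus areas, then run a gap report against the controls actually in place. The sketch below is an illustration under assumptions: the threat names, mappings, and control entries are invented examples, and the draft profile’s own subcategory identifiers are not reproduced here.

```python
# Illustrative gap analysis against the draft NIST CSF profile for AI.
# Threats, mappings, and controls are hypothetical examples, not the
# profile's own subcategory identifiers.

# Map each AI-specific threat to CSF core functions and a strategic focus
# area (Secure the AI itself / Defend with AI / Thwart AI-powered attacks).
THREAT_MAP = {
    "data_poisoning":       {"functions": ["Identify", "Protect", "Detect"], "focus": "Secure"},
    "model_extraction":     {"functions": ["Protect", "Detect"],             "focus": "Secure"},
    "adversarial_examples": {"functions": ["Detect", "Respond"],             "focus": "Secure"},
    "ai_powered_phishing":  {"functions": ["Detect", "Respond", "Recover"],  "focus": "Thwart"},
}

# Controls the organization currently has documented, keyed by threat.
controls_in_place = {
    "data_poisoning": ["training-data provenance checks"],
    "adversarial_examples": ["input anomaly detection", "human review of low-confidence outputs"],
}

def gap_report(threat_map: dict, controls: dict) -> list[str]:
    """Return the threats in the mapping with no documented control."""
    return [threat for threat in threat_map if not controls.get(threat)]

for threat in gap_report(THREAT_MAP, controls_in_place):
    entry = THREAT_MAP[threat]
    print(f"GAP: {threat} (focus: {entry['focus']}; functions: {', '.join(entry['functions'])})")
```

Keeping the mapping itself in version control makes the audit trail reviewable, which is the kind of “good faith” evidence regulators tend to credit.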
European Union & United Kingdom
1) European Commission: Biotech Act and Safe Hearts Plan (Dec 16, 2025)
The EU Commission proposed new measures to modernize the healthcare sector, headlined by the Biotech Act. Key provisions include the creation of AI-focused regulatory sandboxes, simplified rules for medical device development, and a shortened clinical trial approval process. The Safe Hearts Plan specifically targets cardiovascular disease by funding the development of AI diagnostic tools and personalized predictive models.
European Commission – Biotech Act and Safe Hearts Plan Press Release (Dec 16, 2025)
How it applies to AI in healthcare:
For vendors, the “regulatory sandboxes” offer a vital pathway to test AI in drug discovery and clinical settings with reduced immediate liability. The simplified medical device rules aim to cut the conformity-assessment delays that have plagued the industry since the EU Medical Device Regulation (MDR) took effect. Companies specializing in cardiovascular AI should watch for the upcoming “targeted support” and investment facility opportunities within the EU.
2) UK MHRA: National Commission Call for Evidence on AI Regulation (Dec 18, 2025)
The MHRA officially launched a public “Call for Evidence” for the newly established National Commission into the Regulation of AI in Healthcare. Open until February 2026, the consultation seeks evidence on whether current UK frameworks are sufficient for AI-enabled healthcare and how to manage liability between manufacturers, clinicians, and NHS Trusts. Key themes include “human-in-the-loop” safeguards and post-market safety monitoring.
UK Government / MHRA – Call for Evidence: Regulation of AI in Healthcare
How it applies to AI in healthcare:
This is the most critical opportunity for health-AI innovators to influence the UK’s long-term regulatory roadmap. The evidence gathered will inform the 2026 standards for Software as a Medical Device (SaMD). Organizations should submit data on “real-world performance” and “explainability” to ensure future rules remain risk-proportionate rather than prohibitively restrictive.
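The “real-world performance” evidence the Commission is asking about generally means tracking a deployed model against confirmed clinical outcomes and flagging drift early. A minimal monitoring sketch follows; the metric choice (rolling sensitivity), window size, and alert threshold are assumptions for illustration, not MHRA requirements.

```python
# Minimal sketch of post-market performance monitoring for a deployed
# diagnostic model. Window size and threshold are illustrative
# assumptions, not MHRA-specified values.

from collections import deque

class PerformanceMonitor:
    """Track rolling sensitivity against clinician-confirmed outcomes."""

    def __init__(self, window: int = 200, min_sensitivity: float = 0.85):
        # Each entry pairs the model's call with the confirmed outcome.
        self.outcomes = deque(maxlen=window)
        self.min_sensitivity = min_sensitivity

    def record(self, predicted_positive: bool, actually_positive: bool) -> None:
        self.outcomes.append((predicted_positive, actually_positive))

    def sensitivity(self) -> float | None:
        """Fraction of confirmed-positive cases the model caught."""
        caught = [pred for pred, actual in self.outcomes if actual]
        return sum(caught) / len(caught) if caught else None

    def drift_alert(self) -> bool:
        s = self.sensitivity()
        return s is not None and s < self.min_sensitivity

monitor = PerformanceMonitor(window=200, min_sensitivity=0.85)
monitor.record(predicted_positive=True, actually_positive=True)
monitor.record(predicted_positive=False, actually_positive=True)  # missed case
if monitor.drift_alert():
    print(f"Sensitivity fell to {monitor.sensitivity():.2f}; trigger clinical review")
```

Submitting this kind of longitudinal performance log, alongside explainability documentation, is the sort of evidence that can keep future SaMD rules risk-proportionate.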
Rest of the world
1) Australia TGA: Consultation on Digital Mental Health Tools (Dec 17, 2025)
The Therapeutic Goods Administration (TGA) has opened a review into the regulation of digital mental health tools, including AI-driven chatbots and therapeutic software. The TGA is seeking feedback to determine when these tools cross the line from “wellness” to “medical device,” necessitating clinical evidence and registration on the Australian Register of Therapeutic Goods (ARTG).
TGA – Seeking input on digital mental health tools
How it applies to AI in healthcare:
Vendors in the mental health space must prepare for stricter classification in Australia. This review follows a global trend, also visible at the UK MHRA, toward treating high-risk AI chatbots as regulated medical devices. Early participation in the consultation is crucial for companies that want to avoid sudden reclassification and market removal.
2) WHO Europe: Report on Legal Safeguards for AI in Healthcare (Dec 19, 2025)
The World Health Organization Regional Office for Europe released its first comprehensive assessment of AI adoption across 53 countries. The report reveals a stark gap in legal protections: while nearly all countries use AI for diagnostics or disease surveillance, 86% cite legal uncertainty as the top barrier to adoption, and fewer than 10% have liability standards for AI-driven clinical errors.
UN News / WHO – UN calls for legal safeguards for AI in healthcare
How it applies to AI in healthcare:
This report will likely drive a new wave of regional legislation across Europe and Central Asia focused on “liability and legal guardrails.” International vendors should anticipate new requirements for “AI literacy” training for healthcare workers and clear “manufacturer liability” clauses in service agreements for tools deployed in these regions.
Cross-Cutting Themes
- Centralization of Authority: The U.S. Executive Order and the EU Biotech Act show a move toward federal/centralized control to accelerate innovation and reduce regional fragmentation.
- Security Standardization: The NIST AI profile marks a shift from general cybersecurity to AI-specific defense (thwarting data poisoning and model theft) as a regulatory baseline.
- Liability as the Primary Barrier: Both the CDA-AMC (Canada) and WHO Europe report identify liability—who is responsible when the AI makes a mistake—as the #1 obstacle to widespread clinical adoption.
- Clinical Use-Case Specificity: Regulators are moving away from broad AI rules to specific guidance for high-risk domains like mental health (Australia) and cardiovascular diagnostics (EU).
Immediate, Concrete Checklist for Health Organizations & Vendors
- Review U.S. State vs. Federal Strategy: For US-based vendors, assess whether current state-level compliance (e.g., California) conflicts with the new White House “National Policy Framework” and prepare for potential litigation-driven changes.
- Audit Security against NIST AI Profile: Map current AI infrastructure against the “Secure, Defend, Thwart” focus areas in the new NIST draft to identify vulnerabilities to model-specific attacks.
- Document CDA-AMC “Guardrail” Compliance: For Canadian deployments, ensure that product dossiers specifically address the five issue guardrails (Data Sovereignty, Liability, etc.) highlighted in the 2025 Watch List; a completeness-check sketch follows this list.
- Participate in UK/Australia Consultations: Submit evidence to the MHRA Call for Evidence (closes February 2026) and the TGA digital mental health consultation to influence future SaMD classification.
- Verify “AI Literacy” of Users: Per WHO recommendations, implement and document training programs for clinicians using AI to demonstrate “human oversight” and mitigate liability risks.
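For the CDA-AMC guardrail item in particular, the dossier can be kept auditable as structured data with a simple completeness check. In the sketch below, the five guardrail labels come from the Watch List itself; the evidence file names are hypothetical placeholders.

```python
# Completeness check for a CDA-AMC "issue guardrail" product dossier.
# Guardrail labels are from the 2025 Watch List; the evidence entries
# are hypothetical placeholders.

GUARDRAILS = [
    "Privacy/Data Security",
    "Liability/Accountability",
    "Data Quality/Bias",
    "Data Sovereignty",
    "Health Human Resources Impact",
]

dossier = {
    "Privacy/Data Security": ["privacy_impact_assessment.pdf", "encryption_attestation.pdf"],
    "Liability/Accountability": ["liability_allocation_clause.pdf"],
    "Data Sovereignty": ["canadian_data_residency_architecture.pdf"],
    # "Data Quality/Bias" and "Health Human Resources Impact" not yet documented.
}

missing = [guardrail for guardrail in GUARDRAILS if not dossier.get(guardrail)]
if missing:
    print("Dossier incomplete; missing evidence for:", "; ".join(missing))
```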
Sources
- Canada’s Drug Agency (CDA-AMC) – 2025 Watch List: Artificial Intelligence in Health Care (Dec 15, 2025)
  https://www.cda-amc.ca/sites/default/files/Tech%20Trends/2025/ER0015%3D2025_Watch_List.pdf
- Health Canada – 2025-26 Departmental Plan: Commitments to Digital Health and AI (June 17, 2025; operative Dec 2025)
  https://www.canada.ca/en/health-canada/corporate/transparency/corporate-management-reporting/report-plans-priorities/2025-2026-departmental-plan.html
- American Hospital Association (AHA) – White House issues executive order for national AI framework (Dec 12, 2025)
  https://www.aha.org/news/headline/2025-12-12-white-house-issues-executive-order-national-ai-framework
- RamaOnHealthcare / NIST – NIST adds to AI security guidance with Cybersecurity Framework profile (Dec 17, 2025)
  https://ramaonhealthcare.com/nist-adds-to-ai-security-guidance-with-cybersecurity-framework-profile/
- European Commission – Commission proposes new measures to improve health and the healthcare sector / Biotech Act (Dec 16, 2025)
  https://commission.europa.eu/news-and-media/news/commission-proposes-new-measures-improve-health-and-healthcare-sector-2025-12-16_en
- UK Government / MHRA – MHRA seeks input on AI regulation at ‘pivotal moment’ for healthcare (Dec 18, 2025)
  https://www.gov.uk/government/news/mhra-seeks-input-on-ai-regulation-at-pivotal-moment-for-healthcare
- Therapeutic Goods Administration (TGA) Australia – Seeking input from users of digital mental health tools (Dec 17, 2025)
  https://www.tga.gov.au/news/news-articles/seeking-input-users-digital-mental-health-tools
- UN News / World Health Organization – UN calls for legal safeguards for AI in healthcare (Dec 19, 2025)
  https://news.un.org/en/story/2025/11/1166400