Weekly News and Updates (Nov 29 – Dec 12, 2025)

by Grigorii Kochetov | Dec 13, 2025 | AI News & Updates

Between 29 November and 12 December 2025, three major jurisdictions released concrete, actionable regulatory documents demonstrating a rapid shift from high-level principles to mandatory operational controls. The U.S. Department of Health and Human Services (HHS) published a comprehensive “OneHHS” AI Strategy, mandating enterprise-wide governance and a use case inventory. Canada deepened regulatory collaboration with European partners (EU, UK, Germany) following a G7 meeting, reinforcing a risk-based regulatory path similar to the proposed AI and Data Act (AIDA). Most concretely, the UK MHRA issued a framework clarifying the medical device classification of Digital Mental Health Technologies (DMHTs) using AI, providing essential guidance for this rapidly growing sector. The overarching theme is the institutionalization of AI governance and the clarification of regulatory pathways for complex, high-risk tools.

Canada

1) G7 Industry, Digital and Technology Ministerial Declaration (Dec 9, 2025)

As G7 President, Canada hosted the G7 Industry, Digital and Technology (IDT) Ministers’ Meeting, culminating in the 2025 G7 Industry, Digital and Technology Ministerial Declaration. The Declaration explicitly addresses “Artificial Intelligence for Growth”, committing G7 nations to “promote a human-centric approach and create an enabling environment for the widespread adoption of secure, responsible and trustworthy AI.” The Declaration, co-endorsed by the UK and EU, reinforces Canada’s direction toward a robust, internationally interoperable, and risk-based AI framework, seeking to “mitigate negative externalities” while promoting innovation. Crucially, on the margins of the meeting, Canada confirmed Memoranda of Understanding with the European Union on AI, data governance and digital services, and launched a new Canada–Germany Digital Alliance (Source: G7 2025 IDT Ministerial Declaration; Government of Canada News Release, Dec 9, 2025).

G7 2025 Industry, Digital and Technology Ministerial Declaration (Dec 9, 2025)

How it applies to AI in healthcare:

The G7 commitment to “secure, responsible and trustworthy AI” directly aligns with the safety and ethical requirements expected for high-risk AI in healthcare under Canada’s proposed AI and Data Act (AIDA). The international agreements (EU, Germany, UK) signal a commitment to regulatory interoperability. Canadian health-AI vendors must ensure their models and documentation meet this high international common-denominator standard, particularly regarding human oversight and accountability, to secure cross-border partnerships and compliant deployments.

2) Health Canada: Global Strategy on Digital Health 2020-2027 Endorsed (Dec 1, 2025 / WHO)

The World Health Assembly (which includes Canadian representation) endorsed the extension of the Global Strategy on Digital Health to 2027 and approved the next phase (2028-2033). This international alignment reinforces Canada’s internal commitments (Health Canada’s Departmental Plan) to exploring advanced technologies like AI within health services. It signals that Canadian public health initiatives utilizing AI must adhere to global ethical standards for equity, security, and access.

WHO – Global Strategy on Digital Health 2020-2027 (Dec 1, 2025 Document update)

How it applies to AI in healthcare:

Canadian health organizations receiving public funds for AI projects must demonstrate alignment with global best practices on equity and data governance, as outlined in WHO’s guidance. This international commitment provides a robust, non-negotiable floor for ethical AI deployment in Canadian public health initiatives.

United States

1) HHS Unveils Department-Wide AI Strategy (Dec 4, 2025)

The U.S. Department of Health and Human Services (HHS), which encompasses the FDA, CDC, and CMS, released its comprehensive AI Strategy. The strategy marks a major shift, committing the department to a “OneHHS” approach built on five pillars: governance and risk management, infrastructure and platforms, workforce development, health research, and care/public health delivery modernization. Key operational elements include the formation of a high-level “AI Governance Board” and the creation of an “enterprise AI use case inventory” (cataloguing the 271 active or planned AI use cases across HHS divisions) to track progress and encourage reuse.

HHS – HHS Unveils AI Strategy to Transform Agency Operations (Dec 4, 2025)

How it applies to AI in healthcare:

This strategy institutionalizes AI governance at the highest level of US health policy. For vendors, this signals strong regulatory expectations on all submissions (FDA, CMS) regarding transparency, risk management, and the ethical use of data. Health organizations should model their internal governance (establishing an AI oversight committee and use-case inventory) on the HHS structure to prepare for future compliance audits and contracting requirements.

2) FDA Expands Artificial Intelligence Capabilities with Agentic AI Deployment (Dec 1, 2025)

The U.S. Food and Drug Administration (FDA) announced the deployment of “agentic AI capabilities” for all agency employees on a voluntary basis. Agentic AI is defined as advanced AI systems designed to achieve specific goals by “planning, reasoning, and executing multi-step actions” with built-in guidelines and human oversight. According to the agency, the tools are intended to assist with complex internal tasks such as “pre-market reviews, post-market surveillance, inspections, and compliance” functions. FDA confirmed the agentic models are deployed within a high-security GovCloud environment and importantly, “do not train on data submitted by regulated industry”, ensuring the security of sensitive research and regulated data (Source: FDA Press Release, Dec 1, 2025).

FDA Expands Artificial Intelligence Capabilities with Agentic AI Deployment (Dec 1, 2025)

How it applies to AI in healthcare:

The FDA’s internal adoption of sophisticated, multi-step agentic AI systems signals an accelerated regulatory comfort level with complex AI workflows. This foreshadows a strong expectation that manufacturers submitting AI-driven medical devices (SaMD) will need equally robust documentation for their systems, detailing how they manage complexity, reasoning chains, human oversight, and the associated risks in high-consequence applications like clinical decision support and radiology analysis. The security caveat also reinforces the agency’s strict stance on data separation and privacy.
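The human-oversight expectation described above can be sketched as a simple approval gate around each planned agent step. This is an illustrative pattern only: the function name, step strings, and callbacks below are assumptions for demonstration and do not represent the FDA's internal tooling or any specific agent framework.

```python
from typing import Callable

def run_with_oversight(steps: list,
                       execute: Callable[[str], str],
                       approve: Callable[[str], bool]) -> list:
    """Execute planned steps one at a time, pausing for human approval.

    Each step passes through an approval callback (standing in for a human
    reviewer) before execution; rejected steps are logged, not run.
    """
    log = []
    for step in steps:
        if not approve(step):              # reviewer rejects this step
            log.append(f"SKIPPED: {step}")
            continue
        log.append(f"DONE: {execute(step)}")
    return log

# Hypothetical plan: approve everything except steps that submit data externally
plan = ["summarize adverse-event reports", "submit draft to external registry"]
audit_log = run_with_oversight(plan,
                               execute=lambda s: s,
                               approve=lambda s: "submit" not in s)
print(audit_log)
```

The audit log this produces is the kind of artifact regulators are likely to expect when manufacturers document reasoning chains and oversight for multi-step AI systems.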


European Union & United Kingdom

1) MHRA Digital Mental Health Technology: Qualification and Classification Guidance (Updated Dec 2025)

The UK Medicines and Healthcare products Regulatory Agency (MHRA) updated its core guidance on “Digital Mental Health Technology: Qualification and Classification”. This guidance provides the definitive framework for manufacturers to determine when a DMHT (including those using AI) must be regulated as Software as a Medical Device (SaMD). Classification is determined by the intended purpose (e.g., diagnose, treat, prevent) and functional impact, with high functionality (such as AI algorithms performing personalized or unverified data processing, or adaptive generative-AI chatbots) being a key factor for medical device qualification. The MHRA’s ongoing commitment to this framework (with recent page updates in December) provides continuing clarity for vendors developing high-risk AI tools for mental healthcare (Source: MHRA Guidance, Digital mental health technology: qualification and classification, latest update Dec 10, 2025).

Digital mental health technology: qualification and classification (Latest updates Dec 10, 2025)

How it applies to AI in healthcare:

This is crucial for the Generative AI sector. Any vendor or health organization procuring an AI-driven DMHT in the UK must reference this guidance to confirm device classification. High-risk, complex AI-powered DMHTs are formally classified as SaMD, requiring compliance with UK medical device regulations, rigorous clinical validation evidence, and post-market surveillance plans consistent with their assigned risk classification.
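The qualification questions above can be expressed as a rough first-pass triage. This sketch is a deliberately simplified assumption, not a substitute for the MHRA guidance: the purpose categories, the `high_functionality` flag, and the output labels are illustrative only, and real classification requires the full regulatory assessment.

```python
# Simplified, illustrative triage of a DMHT against the two factors the
# guidance highlights: intended purpose and functional impact.
MEDICAL_PURPOSES = {"diagnose", "treat", "prevent", "monitor"}

def triage_dmht(intended_purposes: set, high_functionality: bool) -> str:
    """Rough first-pass read on whether a DMHT likely qualifies as SaMD."""
    if not (intended_purposes & MEDICAL_PURPOSES):
        # No claimed medical purpose: likely outside device regulation
        return "likely not a medical device"
    if high_functionality:
        # e.g. adaptive generative-AI output, unverified personalized processing
        return "likely SaMD (higher risk class)"
    return "likely SaMD"

# Example: an adaptive chatbot marketed to treat anxiety
print(triage_dmht({"treat"}, high_functionality=True))
```

A helper like this is only useful for flagging which products need a formal classification review by regulatory affairs, not for making the determination itself.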


2) UK and Singapore Launch Regulatory Innovation Corridor (Dec 12, 2025)

The MHRA and Singapore’s Health Sciences Authority (HSA) launched a new regulatory innovation corridor to fast-track patient access to breakthrough health technologies, including digital health and advanced diagnostics. This partnership allows developers to seek early, informal joint advice from both regulators simultaneously, aiming to cut delays and duplication in the regulatory process for promising therapies in areas like cancer and neurodegenerative disease.

UK and Singapore launch a regulatory innovation corridor to speed up access to breakthrough health technologies (Dec 12, 2025)

How it applies to AI in healthcare:

For AI vendors aiming for both UK and Asian markets, this corridor provides an invaluable, accelerated regulatory pathway. Companies should leverage this for novel AI-powered diagnostics and therapeutics. It also signals the MHRA’s commitment to agile regulation that balances rapid patient access with rigorous safety standards.

Rest of the world

1) WHO Global Strategy on Digital Health 2020-2027 Extended (Dec 1, 2025)

The World Health Assembly extended the Global Strategy on Digital Health to 2027 and approved the next phase for 2028-2033. This emphasizes WHO’s long-term commitment to fostering safe, equitable, and ethical AI deployment, particularly in low- and middle-income countries (LMICs). Related WHO documents highlight the continued need for guidance on large multi-modal models (LMMs).

WHO – Digital Health (Dec 1, 2025 Document update)

How it applies to AI in healthcare:

International organizations and vendors working in global health must maintain alignment with WHO’s ethical frameworks and capacity-building priorities. The strategy’s extension underscores that AI solutions deployed in LMICs will be held to the high bar of WHO governance to ensure they are beneficial, contextually appropriate, and do not introduce bias or exacerbate health inequities.

2) EU Launches Strategic Partnership Negotiations with UAE, Highlighting AI (Dec 11, 2025)

The European Commission officially launched negotiations for an EU-UAE Strategic Partnership Agreement (SPA). The key priority areas for cooperation explicitly include “digitalisation and artificial intelligence”. While not a direct regulation, this activity signals the EU’s intention to export its AI governance and standards through international agreements.

European Commission – Daily News 11 / 12 / 2025 (Dec 11, 2025)

How it applies to AI in healthcare:

For European and international vendors, this shows that AI compliance, specifically alignment with EU-type standards, will become an increasingly important prerequisite for market access and partnership in fast-growing regions like the Middle East.

Cross-Cutting Themes Across Jurisdictions

  1. Institutionalizing Governance: The HHS AI Strategy plays a role in the US comparable to Canada’s proposed AIDA and the EU AI Act. It mandates formal governance structures (AI Governance Board) and internal transparency (use case inventory), moving AI from an IT issue to an executive compliance mandate.
  2. Niche Regulatory Clarity: The MHRA DMHT framework demonstrates a global trend where regulators are no longer issuing generic guidance but are clarifying specific, high-risk, and fast-moving sectors (like mental health AI, as also noted by the US FDA). This allows for targeted innovation while maintaining patient safety.
  3. International Interoperability: Canada’s G7 agreements and the UK-Singapore corridor show a concerted effort to prevent regulatory silos. This means vendors should focus on designing AI that meets the highest common denominator of global standards, not just one local set of rules, to ensure market viability.
  4. Generative AI is the Focal Point: All major updates (HHS internal platform, MHRA DMHT framework, WHO LMM focus) center on managing the advanced risk profiles of Generative AI, requiring enhanced controls for human oversight, bias, and unpredictable outputs (hallucination).

Immediate, Concrete Checklist for Health Organizations & Vendors

  • Establish or Empower an AI Governance Body: Formalize a cross-functional AI Governance Board or Committee, drawing on the HHS model, to oversee all AI-related strategy, risk, and procurement.
  • Conduct DMHT Classification Audit (UK/EU): Immediately review any digital mental health technology (DMHT) utilizing AI against the new MHRA framework to confirm medical device classification and ensure full compliance with MDR/UKCA requirements.
  • Create an Internal AI Use Case Inventory: Following the HHS mandate, begin cataloguing all active or planned AI systems within the organization (purpose, risk level, data sources, clinical owner) to manage risk and prepare for future disclosure requirements.
  • Update Clinical Validation Protocol: For all AI/ML models, require validation data that specifically addresses performance against bias, subgroup equity, and the known risks of Generative AI (hallucination, autonomous drift).
  • Leverage Regulatory Corridors: Vendors targeting UK and Asian markets should immediately explore the UK-Singapore regulatory innovation corridor to accelerate regulatory review and secure a first-mover advantage.
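The inventory fields named in the checklist (purpose, risk level, data sources, clinical owner, status) can be captured in a minimal record type. The schema and the example entry below are illustrative assumptions for a starting point, not the HHS inventory format.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One entry in an internal AI use case inventory (illustrative schema)."""
    name: str
    purpose: str
    risk_level: str        # e.g. "low", "medium", "high"
    data_sources: list
    clinical_owner: str
    status: str = "planned"  # "planned" or "active"

# Hypothetical example entry
inventory = [
    AIUseCase(
        name="Radiology triage assistant",
        purpose="Prioritize suspected-stroke CT scans for review",
        risk_level="high",
        data_sources=["PACS imaging archive"],
        clinical_owner="Chief of Radiology",
        status="active",
    ),
]

# Roll-up views like this support audits and future disclosure requirements.
high_risk = [uc.name for uc in inventory if uc.risk_level == "high"]
print(high_risk)
```

Even a spreadsheet with these columns would serve; the point is that every active or planned system has a named clinical owner and an assigned risk level before any external audit asks for them.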



Written by Grigorii Kochetov

Cybersecurity Researcher at AI Healthcare Compliance
