Weekly News and Updates (Oct 14 – Oct 24, 2025)

Oct 24, 2025 | AI News & Updates

Between 14 and 24 October 2025, regulators and international bodies emphasised moving from principles to practice: the EU launched COMPASS-AI to operationalise safe clinical AI; the UK's MHRA published AI Airlock pilot outputs and announced new AI drug-safety projects; the FDA ran a workshop on modelling and AI in generic drug development; WHO regional activity reinforced governance and capacity-building needs; Australia released its Annual Cyber Threat Report calling out AI-enabled threats; and Canada saw active AI strategy and consultation activity, though no new federal health-AI regulation in this window. Each update below is followed by a short “How it applies to AI in healthcare” section.

Canada

1) National AI Strategy Task Force & Sprint (Oct 2025)

The federal government launched an AI Strategy Task Force and a national AI “sprint” (Oct 1–31), with public consultations and ministerial activity continuing through mid-October (ongoing in the Oct 14–24 window).

Innovation, Science and Economic Development Canada (ISED) has convened a national AI Strategy Task Force and is running a “sprint” of public consultations during October 2025, aiming to chart Canada’s next chapter in AI leadership. Although not health-specific, the strategy emphasises safe, ethical, and inclusive AI deployment across sectors. Health-care organisations and vendors should monitor it: AI in healthcare is explicitly cited in the broader national strategy, and positioning proposals around health-care use-cases may increase eligibility for funding and regulatory pilot involvement.

ISED / AIDA companion and AI ecosystem pages
ISED – Help define the next chapter of Canada’s AI leadership

How it applies to AI in healthcare:

Although not a health-specific regulation released in 14–24 Oct, federal AI strategy activity signals prioritized coordination and likely increased federal support for health AI pilots and standards. Health organisations and vendors should map potential funding/partnership opportunities, and align their AI governance to federal consultation themes (safety, equity, data stewardship) to be competitive for grants and pilots.

2) Health Canada Departmental Plan 2025-26 (June 2025, still operative)

Health Canada’s 2025-26 departmental plan reiterates a commitment to exploring advanced technologies – including AI – in health services, and to developing regulatory frameworks and guidance for emerging technologies. Though published earlier, it remains the operative blueprint during the Oct 14-24 window and signals what Health Canada expects from vendors and health-care organisations. Aligning product development, pilot studies and internal governance with those stated departmental priorities (safety, interoperability, equity) will strengthen regulatory readiness and procurement positioning.

Health Canada – Departmental Plan 2025-26

How it applies to AI in healthcare:

Health Canada’s stated commitments mean vendors and institutions should prepare evidence-aligned pilot proposals and clinical validation packages consistent with Health Canada priorities (safety, interoperability, equity). Expect procurement and regulatory review to reference these departmental priorities.

United States

1) FDA Workshop on Modeling & AI in Generic Drug Development (Oct 15-16, 2025)

The U.S. Food & Drug Administration (FDA) held a two-day public workshop through its Center for Research on Complex Generics (CRCG) focused on the role of modeling and AI in generic drug development and lifecycle management. The event included discussions on model validation, regulatory decision-making, and integrating AI into full product lifecycle controls. For health-care AI developers – especially those in drug development, pharmacovigilance, or generics analytics – this signals stronger regulatory expectations for transparent model provenance, structured validation approaches and continuous monitoring capabilities.

FDA – CRCG Workshop: Modeling and AI in Generic Drug Development (Oct 15-16, 2025)

How it applies to AI in healthcare:

The FDA’s workshop signals active regulatory engagement on AI for drug development. Organisations using AI for formulation, bioequivalence modelling, or lifecycle control should prepare to submit model provenance, validation protocols, and continuous-monitoring plans. Expect cross-centre expectations for evidence and possible requests for RWE/bench validation.

2) FDA AI/ML Device Guidance & Lifecycle Emphasis (ongoing)

Although not a new document in the defined window, the FDA’s guidance on AI/ML-enabled medical devices (including lifecycle management, Good Machine Learning Practice (GMLP) principles, and post-market surveillance) continues to serve as the regulatory foundation and is being emphasised in recent engagements. Health-care organisations and vendors need to ensure their AI-based medical devices or software maintain full lifecycle documentation: training data summaries, subgroup performance, monitoring and retraining plans, and human-in-the-loop controls where needed.

FDA – AI-Enabled Medical Devices (guidance & resources)

How it applies to AI in healthcare:

If your AI is a SaMD or supports drug/device regulatory submissions, ensure documentation covers:

  • Model training data and demographic representativeness.
  • Performance across subgroups.
  • Continuous monitoring plans.
  • Human-in-the-loop fail-safes.
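The documentation themes above can be captured in a machine-readable model card so that gaps are visible before submission. A minimal Python sketch, where the class and field names are illustrative assumptions, not an FDA-prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal lifecycle-documentation record for an AI-enabled device.

    Fields mirror the documentation themes above (training data,
    subgroup performance, monitoring, human oversight); they are not
    an official FDA schema.
    """
    name: str
    version: str
    training_data_summary: str                                # provenance & demographics
    subgroup_performance: dict = field(default_factory=dict)  # e.g. AUC per cohort
    monitoring_plan: str = ""                                 # drift checks, retraining triggers
    human_in_the_loop: str = ""                               # fail-safe / override procedure

    def gaps(self) -> list:
        """Return documentation sections that are still empty."""
        missing = []
        if not self.subgroup_performance:
            missing.append("subgroup_performance")
        if not self.monitoring_plan:
            missing.append("monitoring_plan")
        if not self.human_in_the_loop:
            missing.append("human_in_the_loop")
        return missing

card = ModelCard(
    name="sepsis-risk-score",
    version="1.2.0",
    training_data_summary="2019-2023 EHR cohort, 3 hospitals, demographics on file",
    subgroup_performance={"age>=65": 0.81, "age<65": 0.84},
)
print(card.gaps())  # sections still missing before a submission
```

A structured record like this makes it trivial to gate releases on documentation completeness rather than discovering gaps during regulatory review.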

European Union & United Kingdom

1) Launch of COMPASS-AI (21 Oct 2025)

The European Commission launched COMPASS-AI – a flagship initiative under the EU’s Apply AI Strategy that will create a multidisciplinary expert community, pilot clinical deployment guidelines, and facilitate knowledge-sharing via a digital platform. The initial areas of focus include cancer care and remote/underserved regions. For health-care providers and AI vendors operating in Europe, this means upcoming pilot opportunities, emerging EU-level guidance on clinical deployment of AI, and the potential to shape de-facto standards via participation.

European Commission – Commission launches flagship initiative to increase use of AI in healthcare (21 Oct 2025)

How it applies to AI in healthcare:

COMPASS-AI moves the EU from policy to operational tools and pilot validation. Vendors and hospital systems operating in the EU should track COMPASS-AI pilots and guidance: participation or alignment with COMPASS-AI outputs will likely influence procurement decisions and standard-setting (clinical validation, interoperability, fairness testing).

2) MHRA AI Airlock Phase 2 Pilot Candidates Selected (16 Oct 2025)

The Medicines and Healthcare products Regulatory Agency (MHRA) announced the selection of seven AI healthcare technologies for Phase 2 of its AI Airlock regulatory sandbox (including tools for clinical note-taking, cancer diagnostics, and eye disease detection). The sandbox will generate regulatory-grade evidence on real-world performance, explainability, hallucination mitigation, and post-market surveillance. Vendors and health systems in the UK should align their development plans with these sandbox outputs, and healthcare purchasers should expect more rigorous vendor evidence packages matching the Airlock learnings.

MHRA – AI Airlock Sandbox Pilot Programme Report (16 Oct 2025)

How it applies to AI in healthcare:

The AI Airlock findings provide concrete test methods and evaluation checklists for explainability and hallucination management. Vendors should adopt similar test suites, and health orgs should demand pilot evidence consistent with MHRA Airlock metrics before clinical deployment.
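The Airlock's actual test methods are published in the MHRA report rather than reproduced here, but the general shape of an automated grounding check can be sketched: a crude token-overlap heuristic that flags sentences in a generated clinical note with little support in the source material. The function, word-length cutoff, and threshold below are illustrative assumptions, not MHRA metrics.

```python
import re

def unsupported_sentences(generated_note: str, source_text: str,
                          threshold: float = 0.6) -> list:
    """Flag sentences whose content words are poorly supported by the
    source text -- a rough grounding heuristic for hallucination
    screening, not a substitute for a validated evaluation suite."""
    source_words = set(re.findall(r"[a-z']+", source_text.lower()))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", generated_note.strip()):
        words = re.findall(r"[a-z']+", sentence.lower())
        content = [w for w in words if len(w) > 3]  # skip short function words
        if not content:
            continue
        support = sum(w in source_words for w in content) / len(content)
        if support < threshold:
            flagged.append(sentence)
    return flagged

source = "Patient reports chest pain for two days. No fever. Prescribed aspirin."
note = "Patient has chest pain for two days. Patient diagnosed with pneumonia."
print(unsupported_sentences(note, source))  # flags the unsupported diagnosis
```

In practice a test suite of this kind would run over a curated set of consultations with clinician-labelled ground truth, and the flagged rate would be tracked as a release metric.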

3) MHRA Announces AI Project for Predicting Drug-Interaction Side Effects (22 Oct 2025)

The MHRA, in partnership with academic and industry teams and NHS data custodians, announced a government-supported project to use AI and anonymised NHS data to predict adverse drug interactions. The project is funded via the Regulatory Innovation Office’s AI Capability Fund. For vendors in pharmacovigilance, EHR analytics or AI safety systems, this indicates regulator support for innovative AI safety tools and heightens expectations for high-quality data access, validation protocols and explainability frameworks.

MHRA – Side effects from drug interactions to be predicted by AI before reaching patients (22 Oct 2025)

How it applies to AI in healthcare:

This is a concrete example of regulator-sponsored AI applied to pharmacovigilance and safety surveillance. Vendors in drug-safety, pharmacovigilance, and EHR analytics should prepare methods for anonymised NHS data access, reproducible model validation, and explainability reports to support regulatory assessment and safe deployment.

Rest of the world

1) WHO Regional Committee (Western Pacific) Session Highlights AI Governance (22-23 Oct 2025)

The World Health Organization (WHO) Western Pacific Regional Committee (Session 76) published documentation emphasising AI’s role in health system planning, governance, data stewardship and equitable access across Member States. While not a binding regulation, the session materials underscore the global trend toward standardised frameworks and capacity-building in AI for health. Organisations working internationally should monitor WHO’s evolving guidance and align pilots or deployments to those frameworks – especially when targeting low- and middle-income countries or global health programs.

WHO – Seventy-sixth session of the Regional Committee (20-24 Oct 2025)

How it applies to AI in healthcare:

WHO regional activity underscores global governance and capacity-building priorities: low- and middle-income countries will use WHO frameworks to adapt AI governance and deployment. International implementers and vendors should align pilots with WHO technical guidance to enable deployment support and funding.

2) Australian Cyber Threat Report 2024-25 – AI Risk to Health Systems (14 Oct 2025)

The Australian Cyber Security Centre (ACSC) released its annual cyber threat report for 2024-25, noting that AI is both a powerful defensive tool and a growing enabler for malicious actors targeting critical infrastructure, including healthcare. For health-care providers and AI vendors in Australia (and globally) this means elevated threats: model theft, data poisoning, adversarial ML, and hybrid cyber-AI attacks must be addressed in risk modelling, vendor contracts, audit trails, and incident-response plans.

ASD / ACSC – Annual Cyber Threat Report 2024-25 (published Oct 2025)

How it applies to AI in healthcare:

Health organisations using AI must include AI-specific threat models, protect model weights and training data, and assume adversaries may weaponise hallucinations, data poisoning, or model extraction. Tighten logging, segmentation, and model-access controls in line with ACSC recommendations.
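One of those controls, protecting model weights, can start with a simple integrity gate: record a hash of the released artifact and refuse to load anything that no longer matches. A minimal sketch, assuming the file name is a throwaway stand-in for real weights; ACSC guidance of course covers far more than this single check.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a model artifact through SHA-256 so large files
    never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, expected_hex: str) -> bool:
    """Refuse to load weights that do not match the hash recorded at
    release time; log the mismatch for incident response."""
    actual = sha256_of(path)
    if actual != expected_hex:
        print(f"INTEGRITY FAILURE: {path} hash {actual[:12]}... "
              f"!= expected {expected_hex[:12]}...")
        return False
    return True

# Demo with a throwaway file standing in for real model weights.
weights = Path("demo_weights.bin")
weights.write_bytes(b"fake weights for illustration")
release_hash = sha256_of(weights)
print(verify_artifact(weights, release_hash))   # matches: safe to load
weights.write_bytes(b"tampered weights")        # simulate tampering
print(verify_artifact(weights, release_hash))   # mismatch: refuse to load
```

In production the expected hash would live in a signed release manifest, and the verification result would feed the audit trail and incident-response flow rather than stdout.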

Cross-Cutting Themes 

  1. From principles to pilots: EU COMPASS-AI and MHRA Airlock outputs show regulator focus on operational pilots and community knowledge-sharing.
  2. Lifecycle & monitoring: FDA workshop and MHRA outputs emphasise continuous monitoring, GMLP and post-market surveillance.
  3. RWD + lab validation: Drug interaction projects and FDA modelling discussions stress combining real-world data with bench/lab validation to make regulator-grade evidence.
  4. Cyber and data governance: Australian ACSC and WHO activity highlight that security, data stewardship, and equity are central to safe AI deployment.

Immediate, concrete checklist for health organisations & vendors 

  • Map applicable regimes: For each product, map EU/UK/FDA/AIDA and local privacy laws; identify sandbox/pilot opportunities (COMPASS-AI, AI Airlock) and regulatory expectations (GMLP, lifecycle monitoring).
  • Prepare an evidence dossier: model card, training data summary, bias and subgroup performance, bench & real-world validation reports, explainability approach, hallucination tests (per MHRA Airlock).
  • Implement lifecycle controls: versioning, drift detection, retraining governance, monitoring dashboards, and incident escalation flows tied to clinical safety metrics.
  • Harden security: treat models as critical assets — access control, encryption, logging, and incident response aligned to ACSC/ASD guidance.
  • Privacy impact assessments: update PIAs to reflect data flows for training, inference, and RWD linking; document de-identification steps and data governance controls.
  • Procurement clauses: require vendor transparency (model provenance, monitoring SLAs, regulatory attachments) and pilot evidence for AI products used clinically.
  • Engage regulators early: where possible, use sandboxes and workshops (MHRA Airlock, FDA engagement forums, COMPASS-AI pilots) to validate evidence and test deployment.
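For the lifecycle-controls item above, drift detection can be as simple as comparing the live score distribution against the training baseline. Below is a sketch using the Population Stability Index, a common industry heuristic; the thresholds in the docstring are conventional rules of thumb, not regulatory requirements.

```python
import math

def psi(expected: list, observed: list, bins: int = 10) -> float:
    """Population Stability Index between a baseline score distribution
    (expected) and a live one (observed). Rule of thumb: < 0.1 stable,
    0.1-0.25 monitor, > 0.25 investigate and consider retraining."""
    lo = min(min(expected), min(observed))
    hi = max(max(expected), max(observed))
    width = (hi - lo) / bins or 1.0

    def fractions(values):
        counts = [0] * bins
        for v in values:
            counts[min(int((v - lo) / width), bins - 1)] += 1
        # Small floor avoids log(0) when a bin is empty.
        return [max(c / len(values), 1e-4) for c in counts]

    e, o = fractions(expected), fractions(observed)
    return sum((oi - ei) * math.log(oi / ei) for ei, oi in zip(e, o))

baseline = [i / 100 for i in range(100)]                  # uniform scores 0.00-0.99
live_same = [i / 100 for i in range(100)]                 # unchanged population
live_shifted = [min(0.99, i / 100 + 0.3) for i in range(100)]  # upward shift

print(round(psi(baseline, live_same), 4))     # near 0: no drift
print(round(psi(baseline, live_shifted), 4))  # well above 0.25: raise an alarm
```

Wired to a scheduled job, a check like this gives the monitoring dashboard and retraining governance in the checklist a concrete trigger tied to clinical safety metrics.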

Written by Grigorii Kochetov

Cybersecurity Researcher at AI Healthcare Compliance
