Canada
1) National AI Strategy Task Force & Sprint (Oct 2025)
Government launched an AI Strategy Task Force and a national AI “sprint” (Oct 1–31), with active public consultations and ministerial activity through mid-October (ongoing in the Oct 14–24 window).
Innovation, Science and Economic Development Canada (ISED) has convened a national AI Strategy Task Force and is running a “sprint” of public consultations during October 2025, aiming to chart Canada’s next chapter in AI leadership. Although not a health-specific regulation, the strategy emphasises safe, ethical and inclusive AI deployment across sectors, and AI in healthcare is explicitly cited in the broader national strategy. Health-care organisations and vendors should monitor this work: positioning proposals around health-care use cases may increase eligibility for funding and involvement in regulatory pilots.
ISED / AIDA companion and AI ecosystem pages
ISED – Help define the next chapter of Canada’s AI leadership
How it applies to AI in healthcare:
Although no health-specific regulation was released in the 14–24 Oct window, federal AI strategy activity signals prioritised coordination and likely increased federal support for health AI pilots and standards. Health organisations and vendors should map potential funding and partnership opportunities, and align their AI governance with the federal consultation themes (safety, equity, data stewardship) to be competitive for grants and pilots.
2) Health Canada Departmental Plan 2025-26 (June 2025, still operative)
Health Canada’s 2025-26 departmental plan reiterates a commitment to exploring advanced technologies – including AI – in health services, and to developing regulatory frameworks and guidance for emerging technologies. Though published earlier, it remains the operative blueprint during the Oct 14-24 window and signals what Health Canada expects from vendors and health-care organisations. Aligning product development, pilot studies and internal governance with those stated departmental priorities (safety, interoperability, equity) will strengthen regulatory readiness and procurement positioning.
Health Canada – Departmental Plan 2025-26
How it applies to AI in healthcare:
Health Canada’s stated commitments mean vendors and institutions should prepare evidence-aligned pilot proposals and clinical validation packages consistent with Health Canada priorities (safety, interoperability, equity). Expect procurement and regulatory review to reference these departmental priorities.
United States
1) FDA Workshop on Modeling & AI in Generic Drug Development (Oct 15-16, 2025)
The U.S. Food & Drug Administration (FDA) held a two-day public workshop through its Center for Research on Complex Generics (CRCG) focused on the role of modeling and AI in generic drug development and lifecycle management. The event included discussions on model validation, regulatory decision-making, and integrating AI into full product lifecycle controls. For health-care AI developers – especially those in drug development, pharmacovigilance, or generics analytics – this signals stronger regulatory expectations for transparent model provenance, structured validation approaches and continuous monitoring capabilities.
FDA – CRCG Workshop: Modeling and AI in Generic Drug Development (Oct 15-16, 2025)
How it applies to AI in healthcare:
The FDA’s workshop signals active regulatory engagement on AI for drug development. Organisations using AI for formulation, bioequivalence modelling, or lifecycle control should prepare to submit model provenance, validation protocols, and continuous-monitoring plans, and should anticipate cross-centre alignment on evidence requirements, including possible requests for real-world evidence (RWE) or bench validation.
2) FDA AI/ML Device Guidance & Lifecycle Emphasis (ongoing)
Although not a new document in the defined window, the FDA’s guidance on AI/ML-enabled medical devices (including lifecycle management, Good Machine Learning Practice (GMLP) principles, and post-market surveillance) continues to serve as the regulatory foundation and is being emphasised in recent engagements. Health-care organisations and vendors need to ensure their AI-based medical devices or software maintain full lifecycle documentation: training data summaries, subgroup performance, monitoring and retraining plans, and human-in-the-loop controls where needed.
FDA – AI-Enabled Medical Devices (guidance & resources)
How it applies to AI in healthcare:
If your AI is software as a medical device (SaMD) or supports drug/device regulatory submissions, ensure documentation covers:
- Model training data and demographic representativeness.
- Performance across subgroups.
- Continuous monitoring plans.
- Human-in-the-loop fail-safes.
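The subgroup-performance item above is the one most often under-documented. A minimal sketch of what such a check might look like — the subgroup labels, data, and metric choices here are illustrative assumptions, not values from any FDA guidance:

```python
from collections import defaultdict

def subgroup_metrics(records):
    """Per-subgroup sensitivity and specificity from labelled predictions.

    records: iterable of (subgroup, y_true, y_pred) tuples with 0/1 labels.
    Returns {subgroup: {"sensitivity": ..., "specificity": ..., "n": ...}}.
    """
    counts = defaultdict(lambda: {"tp": 0, "fn": 0, "tn": 0, "fp": 0})
    for group, y_true, y_pred in records:
        c = counts[group]
        if y_true == 1:
            c["tp" if y_pred == 1 else "fn"] += 1
        else:
            c["tn" if y_pred == 0 else "fp"] += 1
    out = {}
    for group, c in counts.items():
        pos, neg = c["tp"] + c["fn"], c["tn"] + c["fp"]
        out[group] = {
            "sensitivity": c["tp"] / pos if pos else None,
            "specificity": c["tn"] / neg if neg else None,
            "n": pos + neg,
        }
    return out

# Hypothetical predictions tagged with a demographic subgroup.
records = [
    ("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 1), ("B", 0, 1), ("B", 0, 0),
]
report = subgroup_metrics(records)
```

A real dossier would report these per-subgroup numbers with confidence intervals and flag any subgroup whose sensitivity falls materially below the overall figure.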
European Union & United Kingdom
1) Launch of COMPASS-AI (21 Oct 2025)
The European Commission launched COMPASS-AI – a flagship initiative under the EU’s Apply AI Strategy that will create a multidisciplinary expert community, pilot clinical deployment guidelines, and facilitate knowledge-sharing via a digital platform. The initial areas of focus include cancer care and remote/underserved regions. For health-care providers and AI vendors operating in Europe, this means upcoming pilot opportunities, emerging EU-level guidance on clinical deployment of AI, and the potential to shape de facto standards through participation.
How it applies to AI in healthcare:
COMPASS-AI moves the EU from policy to operational tools and pilot validation. Vendors and hospital systems operating in the EU should track COMPASS-AI pilots and guidance: participation or alignment with COMPASS-AI outputs will likely influence procurement decisions and standard-setting (clinical validation, interoperability, fairness testing).
2) MHRA AI Airlock Phase 2 Pilot Candidates Selected (16 Oct 2025)
The Medicines and Healthcare products Regulatory Agency (MHRA) announced the selection of seven AI healthcare technologies for Phase 2 of its AI Airlock regulatory sandbox (including tools for clinical note taking, cancer diagnostics, eye disease detection). The sandbox will generate regulatory-grade evidence on real-world performance, explainability, hallucination mitigation, and post-market surveillance. Vendors and health systems in the UK should align their development plans with these sandbox outputs, and healthcare purchasers should expect more rigorous vendor evidence packages matching the Airlock learnings.
MHRA – AI Airlock Sandbox Pilot Programme Report (16 Oct 2025)
How it applies to AI in healthcare:
The AI Airlock findings provide concrete test methods and evaluation checklists for explainability and hallucination management. Vendors should adopt similar test suites, and health orgs should demand pilot evidence consistent with MHRA Airlock metrics before clinical deployment.
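The Airlock framing treats hallucination management as something to test, not merely assert. As a toy illustration of one such check — verifying that medication names in a generated summary actually appear in the source note — the word-matching heuristic and the example texts below are entirely hypothetical, far simpler than any real evaluation suite:

```python
import re

def unsupported_terms(source_note, summary, watchlist):
    """Return watchlist terms that appear in the AI summary but not in
    the source note -- candidate hallucinations to escalate for review."""
    def found(term, text):
        # Whole-word, case-insensitive match.
        return re.search(r"\b" + re.escape(term) + r"\b", text, re.IGNORECASE) is not None
    return [t for t in watchlist if found(t, summary) and not found(t, source_note)]

# Hypothetical clinical note, AI-generated summary, and medication watchlist.
note = "Patient started on metformin 500 mg twice daily; BP stable."
summary = "Commenced metformin and atorvastatin; blood pressure stable."
meds = ["metformin", "atorvastatin", "ramipril"]

flags = unsupported_terms(note, summary, meds)  # "atorvastatin" is unsupported
```

Production suites would use clinical terminology services and entity linking rather than string matching, but the principle — every claim in the output must trace to the input — is the same.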
3) MHRA Announces AI Project for Predicting Drug-Interaction Side Effects (22 Oct 2025)
The MHRA, in partnership with academic and industry teams and NHS data custodians, announced a government-supported project to use AI and anonymised NHS data to predict adverse drug interactions. The project is funded via the Regulatory Innovation Office’s AI Capability Fund. For vendors in pharmacovigilance, EHR analytics or AI safety systems, this indicates regulator support for innovative AI safety tools and heightens expectations for high-quality data access, validation protocols and explainability frameworks.
How it applies to AI in healthcare:
This is a concrete example of regulator-sponsored AI applied to pharmacovigilance and safety surveillance. Vendors in drug-safety, pharmacovigilance, and EHR analytics should prepare methods for anonymised NHS data access, reproducible model validation, and explainability reports to support regulatory assessment and safe deployment.
Rest of the world
1) WHO Regional Committee (Western Pacific) Session Highlights AI Governance (22-23 Oct 2025)
The World Health Organization (WHO) Western Pacific Regional Committee (Session 76) published documentation emphasising AI’s role in health system planning, governance, data stewardship and equitable access across Member States. While not a binding regulation, the session materials underscore the global trend toward standardised frameworks and capacity-building in AI for health. Organisations working internationally should monitor WHO’s evolving guidance and align pilots or deployments to those frameworks – especially when targeting low- and middle-income countries or global health programs.
WHO – Seventy-sixth session of the Regional Committee (20-24 Oct 2025)
How it applies to AI in healthcare:
WHO regional activity underscores global governance and capacity-building priorities: low- and middle-income countries will use WHO frameworks to adapt AI governance and deployment. International implementers and vendors should align pilots with WHO technical guidance to enable deployment support and funding.
2) Australian Cyber Threat Report 2024-25 – AI Risk to Health Systems (14 Oct 2025)
The Australian Cyber Security Centre (ACSC) released its annual cyber threat report for 2024-25, calling out that AI is both a powerful defensive tool and a growing enabler for malicious actors targeting critical infrastructure, including healthcare. For health-care providers and AI vendors in Australia (and globally), this means elevated threats: model theft, data poisoning, adversarial ML and hybrid cyber-AI attacks must be addressed in risk modelling, vendor contracts, audit trails and incident-response plans.
ASD / ACSC – Annual Cyber Threat Report 2024-25 (published Oct 2025)
How it applies to AI in healthcare:
Health organisations using AI must include AI-specific threat models, protect model weights and training data, and assume adversaries may weaponise hallucinations, data poisoning, or model extraction. Tighten logging, segmentation, and model-access controls in line with ACSC recommendations.
Cross-Cutting Themes
- From principles to pilots: EU COMPASS-AI and MHRA Airlock outputs show regulator focus on operational pilots and community knowledge-sharing.
- Lifecycle & monitoring: FDA workshop and MHRA outputs emphasise continuous monitoring, GMLP and post-market surveillance.
- RWD + lab validation: Drug interaction projects and FDA modelling discussions stress combining real-world data with bench/lab validation to make regulator-grade evidence.
- Cyber and data governance: Australian ACSC and WHO activity highlight that security, data stewardship, and equity are central to safe AI deployment.
Immediate, concrete checklist for health organisations & vendors
- Map applicable regimes: For each product, map EU/UK/FDA/AIDA and local privacy laws; identify sandbox/pilot opportunities (COMPASS-AI, AI Airlock) and regulatory expectations (GMLP, lifecycle monitoring).
- Prepare an evidence dossier: model card, training data summary, bias and subgroup performance, bench & real-world validation reports, explainability approach, hallucination tests (per MHRA Airlock).
- Implement lifecycle controls: versioning, drift detection, retraining governance, monitoring dashboards, and incident escalation flows tied to clinical safety metrics.
- Harden security: treat models as critical assets — access control, encryption, logging, and incident response aligned to ACSC/ASD guidance.
- Privacy impact assessments: update PIAs to reflect data flows for training, inference, and RWD linking; document de-identification steps and data governance controls.
- Procurement clauses: require vendor transparency (model provenance, monitoring SLAs, regulatory attachments) and pilot evidence for AI products used clinically.
- Engage regulators early: where possible, use sandboxes and workshops (MHRA Airlock, FDA engagement forums, COMPASS-AI pilots) to validate evidence and test deployment approaches.
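For the “drift detection” item in the checklist above, one common starting point is the Population Stability Index (PSI) over a model input or score distribution. A minimal stdlib sketch — the bin edges, sample data, and the 0.2 escalation threshold are illustrative conventions, not values mandated by any regulator:

```python
import math

def psi(expected, actual, bin_edges):
    """Population Stability Index between a baseline ('expected') sample
    and a recent ('actual') sample, over fixed bin edges.

    Rule of thumb often used in practice: < 0.1 stable, 0.1-0.2 watch,
    > 0.2 investigate -- conventions, not regulatory requirements.
    """
    def proportions(values):
        counts = [0] * (len(bin_edges) - 1)
        for v in values:
            for i in range(len(bin_edges) - 1):
                if bin_edges[i] <= v < bin_edges[i + 1]:
                    counts[i] += 1
                    break
        total = len(values)
        # Floor at a tiny value so empty bins don't blow up the log.
        return [max(c / total, 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Hypothetical model scores: baseline vs. a recent monitoring window.
# With samples this tiny, PSI is noisy -- real monitoring uses far more data.
baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
recent = [0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85]
edges = [0.0, 0.25, 0.5, 0.75, 1.0]

drift = psi(baseline, recent, edges)
alert = drift > 0.2  # illustrative escalation threshold
```

In a monitoring dashboard this score would be recomputed per window and per subgroup, with alerts routed into the incident-escalation flow the checklist describes.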
Sources
- Innovation, Science and Economic Development Canada (ISED) – Artificial Intelligence and Data Act (AIDA) companion document / AI ecosystem pages (Jan 31, 2025 / ongoing)
  https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document
- ISED – Help define the next chapter of Canada’s AI leadership / AI Strategy Task Force (Oct 2025 public consultations)
  https://ised-isde.canada.ca/site/ised/en/public-consultations/help-define-next-chapter-canadas-ai-leadership
- Health Canada – Departmental Plan 2025–26 (17 Jun 2025) — commitments on exploring advanced technologies, including AI, in health services
  https://www.canada.ca/en/health-canada/corporate/transparency/corporate-management-reporting/report-plans-priorities/2025-2026-departmental-plan.html
- U.S. Food and Drug Administration / Center for Research on Complex Generics (CRCG) – Workshop: Modeling and Artificial Intelligence (AI) in Generic Drug Development (Oct 15–16, 2025)
  https://www.fda.gov/drugs/news-events-human-drugs/fdacenter-research-complex-generics-crcg-workshop-modeling-and-artificial-intelligence-ai-generic
- U.S. Food and Drug Administration – Artificial Intelligence-Enabled Medical Devices (guidance & resource page, ongoing reference)
  https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-enabled-medical-devices
- European Commission – Commission launches flagship initiative to increase use of AI in healthcare (21 Oct 2025)
  https://digital-strategy.ec.europa.eu/en/news/commission-launches-flagship-initiative-increase-use-ai-healthcare
- European Commission – Daily News 21/10/2025 (COMPASS-AI press entry) (21 Oct 2025)
  https://ec.europa.eu/commission/presscorner/detail/en/mex_25_2461
- UK Government / MHRA – AI Airlock Sandbox Pilot Programme Report (published 16 Oct 2025)
  https://www.gov.uk/government/publications/ai-airlock-sandbox-pilot-programme-report
- UK Government / MHRA – Side effects from drug interactions to be predicted by AI before reaching patients (22 Oct 2025)
  https://www.gov.uk/government/news/side-effects-from-drug-interactions-to-be-predicted-by-ai-before-reaching-patients
- World Health Organization (Western Pacific) – Seventy-sixth session of the Regional Committee (20–24 Oct 2025, session documents)
  https://www.who.int/westernpacific/about/governance/regional-committee/session-76
- World Health Organization – Harnessing artificial intelligence for health (WHO programme page, ongoing resource)
  https://www.who.int/teams/digital-health-and-innovation/harnessing-artificial-intelligence-for-health
- Australian Government / Cyber.gov.au (ASD/ACSC) – Annual Cyber Threat Report 2024–2025 (published Oct 2025)
  https://www.cyber.gov.au/about-us/view-all-content/reports-and-statistics/annual-cyber-threat-report-2024-2025