Canada
1) ISED – National AI Strategy Task Force & October AI “Sprint” (ongoing through Oct 31, 2025)
Innovation, Science & Economic Development Canada (ISED) launched a 30-day national consultation sprint throughout October 2025 to guide the country’s next National AI Strategy. The newly formed AI Strategy Task Force includes experts from academia, industry, and civil society, focusing on issues like research funding, adoption by government and business, AI safety, education, and data infrastructure. The sprint collected input from individuals, startups, and organizations nationwide to ensure a broad, inclusive policy direction for Canada’s evolving AI ecosystem.
These consultations are expected to shape how Canada funds AI research and regulates responsible use, with final strategy recommendations anticipated by early 2026. The initiative signals stronger alignment between AI innovation, ethical governance, and national priorities such as data sovereignty and equitable access.
Innovation, Science & Economic Development Canada (ISED)
How it applies to AI in healthcare:
National strategy work influences funding priorities, data-sharing expectations, and procurement preferences. Health organisations and vendors should track task-force outputs for upcoming funding calls and align proposals to the consultation themes (safety, equity, data stewardship) to increase eligibility for federal pilots or procurement opportunities.
2) Federal Budget / AI measurement funding (late October – early November activity)
The 2025 federal budget introduced substantial AI-related investments, emphasizing the need for measurable outcomes in public sector and industry adoption. In addition to continued infrastructure funding, the government launched an AI & Technology Measurement Program (TechStat) under Statistics Canada to monitor how AI technologies are being used, their economic contributions, and their societal effects.
This move reflects a shift toward evidence-driven governance, where public funding and regulatory frameworks will increasingly depend on measurable performance, accountability, and transparent evaluation of AI’s real-world impacts.
Government of Canada – Budget / Annex references to AI & Technology measurement programs
How it applies to AI in healthcare:
Increased federal measurement capability can create opportunities for partnerships (data access, evaluation frameworks) and raises expectations that healthcare AI systems contribute measurable outcomes and participate in national measurement programs. Health organisations should prepare data-sharing plans and evidence packages that map to government measurement priorities.
United States
1) FDA – Digital Health Advisory Committee (Generative AI in digital mental health) (public meeting: 6 Nov 2025)
The FDA’s Digital Health Advisory Committee convened to evaluate generative AI applications in digital mental health technologies. Discussions focused on how conversational and adaptive AI models are influencing clinical and therapeutic devices, addressing potential benefits alongside concerns about hallucinations, bias, and explainability. The committee also examined pre-market evidence expectations and ongoing monitoring methods.
This meeting marks a turning point for U.S. regulators as generative AI begins to enter regulated healthcare domains. Developers in digital health and therapeutic AI are expected to produce clear evidence of model reliability, subgroup performance, and post-market surveillance readiness.
U.S. Food & Drug Administration (FDA) – Digital Health Advisory Committee meeting announcement
How it applies to AI in healthcare:
Generative AI in behavioural and mental health will receive focused agency scrutiny: expect requests for explicit safety controls, transparency about generative behaviours and hallucination risk, subgroup performance data, and robust postmarket surveillance plans. Vendors working on generative-AI clinical support or therapeutic apps should prepare premarket evidence packages and monitoring plans aligned to the topics listed on the Advisory Committee docket.
2) FDA Sentinel Initiative public workshop (6 Nov 2025) – RWD and surveillance
The FDA’s Sentinel Initiative workshop focused on the use of real-world data and real-world evidence in active safety surveillance systems, with growing attention to AI-based analytics. The session explored progress in data standardization, model validation, and transparent signal detection. For healthcare stakeholders, this reinforces the expectation that AI systems used in post-market surveillance or pharmacovigilance be compatible with standardized RWE frameworks and maintain traceability, auditability, and reproducibility across datasets and analytical workflows.
U.S. Food & Drug Administration (FDA) – Sentinel Initiative Public Workshop
How it applies to AI in healthcare:
AI systems used for pharmacovigilance, safety signal detection or postmarket surveillance should be architected to integrate with Sentinel-style RWE pipelines: standardised data models, audit trails, reproducible signal-detection methods, and validated performance metrics for prospective monitoring.
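To make "reproducible signal-detection methods" concrete, a minimal sketch of one classical pharmacovigilance statistic is shown below: the Proportional Reporting Ratio (PRR) over a drug/event 2×2 contingency table. This is a generic illustration of the technique, not Sentinel's actual pipeline; the function names and the PRR ≥ 2 / ≥ 3 reports / lower-CI > 1 alert rule are common conventions used here as assumptions.

```python
import math

def prr(a: int, b: int, c: int, d: int) -> tuple[float, float, float]:
    """Proportional Reporting Ratio for a drug/event 2x2 table.

    a: reports with drug AND event       b: reports with drug, other events
    c: reports without drug, with event  d: reports without drug, other events
    Returns (PRR, lower 95% CI bound, upper 95% CI bound).
    """
    prr_value = (a / (a + b)) / (c / (c + d))
    # Standard error of ln(PRR); 95% CI via the normal approximation.
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(prr_value) - 1.96 * se)
    hi = math.exp(math.log(prr_value) + 1.96 * se)
    return prr_value, lo, hi

def is_signal(a, b, c, d, min_reports=3, threshold=2.0) -> bool:
    # Illustrative alert rule: PRR >= 2, at least 3 co-reports,
    # and the lower CI bound above 1 (i.e., elevation unlikely by chance).
    value, lo, _ = prr(a, b, c, d)
    return a >= min_reports and value >= threshold and lo > 1.0
```

The point of encoding the rule as a small, deterministic function is auditability: the same inputs always yield the same signal decision, which is what "traceability, auditability, and reproducibility" demand of more complex AI-based detectors as well.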
European Union & United Kingdom
1) EU – Work-programme / funding governance activity (early November 2025)
The European Commission’s early-November updates to its Digital Europe and European Innovation Council work-programmes outlined major AI funding initiatives for the next cycle. The programs prioritize AI capacity-building, cross-sector pilots, testing facilities, and healthcare data-space development. Funding will continue to emphasize ethical deployment, cybersecurity, and interoperability of AI systems across member states. This reinforces the EU’s strategic objective to maintain leadership in responsible AI innovation, combining strong regulatory oversight with sustained investment in practical implementation and infrastructure readiness.
European Commission – Digital Strategy pages and Apply AI Strategy / Digital Europe Programme updates
EIC Work Programme 2026 (decision reference)
How it applies to AI in healthcare:
The EU’s work programmes fund clinical AI pilots, testing facilities and data-space activities – vendors and hospital systems should watch specific calls (Digital Europe, EU4Health, Horizon/EIC) and prepare consortium proposals that meet EU requirements for cross-border data, interoperability and clinical evaluation.
2) UK – MHRA & regulatory innovation commentary (late Oct – early Nov 2025)
The UK’s Medicines and Healthcare products Regulatory Agency (MHRA) released updates related to its AI Airlock sandbox, including results from the second-phase cohort of companies testing AI-enabled medical technologies. The reports highlight progress in evidence generation, model validation, and post-market performance tracking for AI-driven medical devices. These activities show the UK’s continued effort to modernize its regulatory framework for AI, using sandbox testing to identify and address gaps in data reliability, transparency, and patient-safety evaluation ahead of formal regulation.
AI Airlock Phase 2 cohort / AI Airlock Sandbox Pilot Programme report and related MHRA updates
Regulatory Innovation Office: One Year On.
How it applies to AI in healthcare:
The MHRA’s sandbox outputs and regulator-led projects (including AI-powered assessment tools and pharmacovigilance projects) set practical evidence expectations: reproducible validation, synthetic data guidance, surveillance metrics and explainability tests. UK deployers and vendors should map their evidence and PMS (post-market surveillance) approaches to MHRA Airlock recommendations and procurement signals.
Rest of the world
1) WHO – “Countries, regulators and partners urge a collaborative approach to advance safe and equitable AI in health” (24 Oct 2025)
The World Health Organization issued a global call for coordinated action on AI in healthcare, urging collaboration among regulators, governments, and industry. The statement emphasized equity, capacity-building, and responsible governance as key priorities for developing and deploying AI systems in health. This initiative reflects growing international alignment on AI safety and ethical use, particularly for health technologies aimed at low- and middle-income countries. It also sets a framework for funding programs and cross-border cooperation anchored in transparency and fairness.
How it applies to AI in healthcare:
WHO’s call amplifies the push for harmonised governance, measurement frameworks, and capacity-building. Multilateral programmes and funders will expect deployments in low- and middle-income settings to demonstrate alignment with WHO governance and equity principles; vendors aiming for global adoption should document governance, explainability and equity testing plans.
2) Australia – Regional cyber guidance & risk reporting (reported mid-October)
Australia’s Cyber Security Centre released its annual cyber threat report for 2024–25, noting both the defensive benefits and the emerging risks of AI-driven automation. The report highlights a rise in AI-enhanced cyberattacks targeting healthcare and other critical sectors, urging greater investment in resilience, monitoring, and workforce training. This reflects a broader trend in which governments increasingly recognize AI as a double-edged tool: improving detection and response capabilities while simultaneously expanding the attack surface and complexity of security operations.
Australian Cyber Security Centre (ASD/ACSC) – Annual Cyber Threat Report 2024-25
How it applies to AI in healthcare:
Where organisations deploy clinical AI, expect increased attention to adversarial robustness, model-access controls, and supplier security requirements. Although the report was published just before the Oct 24 cut-off, its operational security guidance is being referenced by national agencies during the Oct 24–Nov 8 window and therefore informs current risk expectations.
Cross-Cutting Themes Across Jurisdictions
- Operationalisation & funding: governments moved from strategy statements to concrete funding/work-programme activity and public advisory engagements (EU work programmes, ISED sprint, FDA Advisory Committee).
- Focus on lifecycle & surveillance: regulator engagements emphasise total-product-lifecycle controls and postmarket surveillance (FDA Sentinel, MHRA Airlock learnings).
- Capacity & governance alignment (global): WHO’s call highlights that countries need governance roadmaps and capacity building if health AI is to be equitable and safe.
- Security is now a regulatory expectation: national cyber reports and regulator procurement notes increasingly expect AI-specific threat models and vendor security guarantees.
Immediate, concrete checklist for health organisations & vendors
- Map your regulatory footprint – identify which regimes apply (AI Act / Apply AI outputs, MHRA Airlock learnings, FDA guidance & advisory committee topics, AIDA/AIDA-adjacent consultations). Keep a tracker for active funding calls (EU DIGITAL/EIC, national sprint funds).
- Evidence dossier (short, deployable): model card, training-data summary, subgroup performance, bench & RWE validation notes, synthetic-data fidelity tests (MHRA Airlock recommended), and drift/monitoring plans.
- Lifecycle & surveillance: implement versioning, drift detection, monitoring dashboards and incident-report pathways that can feed regulator surveillance (e.g., Sentinel-compatible RWE flows).
- Security & procurement controls: treat models & training sets as high-value assets – access control, encryption, logging, secure model-hosting, contractual incident response and model-provenance obligations.
- PIA & data governance: update PIAs/DPIAs for training and inference data; document de-identification, lawful bases for RWD linkages, and cross-border flows.
- Engage early: use sandboxes, advisory meetings and funding calls (MHRA Airlock learnings / EU pilots / ISED Task Force hooks / FDA advisory docket) to validate evidence and test deployment strategies.
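The "lifecycle & surveillance" item above can be sketched concretely. Below is a minimal Population Stability Index (PSI) check, a common way monitoring dashboards flag input-distribution drift between a model's baseline data and live traffic. The binning scheme and the 0.2 alert threshold are widely used conventions, assumed here for illustration; they are not a regulatory standard.

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample.

    Bins are derived from the baseline's range; a small epsilon avoids
    log(0) when a bin is empty in either sample.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    eps = 1e-6

    def proportions(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            idx = max(idx, 0)  # clamp values that fall below the baseline range
            counts[idx] += 1
        return [max(c / len(values), eps) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Common (illustrative) interpretation: PSI < 0.1 stable, 0.1-0.2 moderate
# shift, > 0.2 investigate and consider triggering a retraining review.
def drift_alert(expected: list[float], actual: list[float],
                threshold: float = 0.2) -> bool:
    return psi(expected, actual) > threshold
```

A check like this, run on a schedule over versioned model inputs and logged to a dashboard, is the kind of simple, reproducible surveillance artefact that can feed the incident-report pathways and Sentinel-compatible RWE flows described above.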
Sources
- Innovation, Science & Economic Development Canada (ISED) – Help define the next chapter of Canada’s AI leadership (public consultation / AI Strategy Task Force / October 2025 sprint).
https://ised-isde.canada.ca/site/ised/en/public-consultations/help-define-next-chapter-canadas-ai-leadership
- Government of Canada – Budget / Annex references to AI & Technology measurement programs (Budget documents, early Nov 2025 releases).
https://budget.canada.ca/2025/report-rapport/anx6-en.html
- U.S. Food & Drug Administration (FDA) – Digital Health Advisory Committee meeting announcement (Generative AI in digital mental health; meeting 6 Nov 2025).
https://www.fda.gov/medical-devices/digital-health-center-excellence/fda-digital-health-advisory-committee
- European Commission – Digital Strategy pages and Apply AI Strategy / Digital Europe Programme updates (work-programme documents; COMPASS-AI / Apply AI pages – background & funding).
https://digital-strategy.ec.europa.eu/en/policies/apply-ai
EIC Work Programme 2026 (decision reference).
- Medicines & Healthcare products Regulatory Agency (MHRA), UK – AI Airlock Phase 2 cohort / AI Airlock Sandbox Pilot Programme report and related MHRA updates (Oct–Nov 2025).
https://www.gov.uk/government/publications/ai-airlock-phase-2-cohort
Regulatory Innovation Office: One Year On.
- WHO – Countries, regulators and partners urge a collaborative approach to advance safe and equitable AI in health (24 Oct 2025). Official WHO departmental update.
https://www.who.int/news/item/24-10-2025-countries–regulators-and-partners-urge-a-collaborative-approach-to-advance-safe-and-equitable-ai-in-health
- Australian Cyber Security Centre (ASD/ACSC) – Annual Cyber Threat Report 2024–25 (published Oct 2025).
https://www.cyber.gov.au/about-us/view-all-content/reports-and-statistics/annual-cyber-threat-report-2024-2025