Canada
1) Federal political engagement: Ministers at ALL IN (Sep 25, 2025)
The Government of Canada published an official statement describing Ministers (including Mélanie Joly) meeting AI leaders at the ALL IN 2025 conference and signalling continued federal focus on AI governance, international collaboration, and investment in trustworthy AI. This is a political/strategic signal rather than a binding law. Official statement (Gov of Canada news).
How this relates to AI in healthcare:
- Policy direction: federal political attention typically accelerates downstream activity such as new guidance, funding calls, standards contracts, and interdepartmental working groups. Expect Health Canada / ISED / CIHR / PHAC to receive clearer mandates or resources to coordinate healthcare AI initiatives.
- Procurement & standards: hospitals and provincial health authorities will likely see stronger expectations for alignment with federal “trustworthy AI” objectives in vendor selection and procurement language (e.g., explicit checks for transparency, bias testing and safety cases). Prepare procurement templates requiring supplier attestations to federal trust/ethics standards.
- International alignment: statements emphasizing international cooperation increase the probability that Canada will adopt or harmonize technical standards that mirror EU and FDA approaches, which is useful when planning multi-jurisdiction product strategies.
2) OPC enforcement — TikTok PIPEDA Report of Findings #2025-003 (Sep 23, 2025)
The Office of the Privacy Commissioner (OPC) published its Report of Findings for a joint investigation into TikTok, addressing issues such as transparency, de-identification, and cross-border data flows. The report concludes an enforcement action and is a public statement of the OPC’s expectations. OPC: Report of Findings #2025-003.
How this relates to AI in healthcare:
Enforcement posture: the OPC may be testing the limits of PIPEDA enforcement. For healthcare AI that touches personal health information (PHI), the risk of regulatory inquiry is non-trivial; expect the OPC to scrutinize transparency statements, de-identification claims, risk-based consent models, and vendor cross-border transfers.
Suggested actions to take:
- Re-run Privacy Impact Assessments (PIAs / DPIAs) specifically for AI pipelines; document algorithms’ inputs/outputs, retention, and re-identification risk testing.
- Keep reproducible documentation that supports de-identification claims (techniques used, re-identification risk metrics, datasets used to test de-identification); a minimal risk-metric sketch follows this list.
- When exporting PHI, ensure contractual and technical safeguards (standard contractual clauses, equivalent protections, encryption, vendor audits) and record lawful bases. Consider data localization where feasible.
- Clarify patient-facing notices: make AI usage and data uses prominent (not buried in long privacy policies), and obtain consent where required under provincial health privacy regimes (PHIPA, etc.).
- Evidence trail: maintain an auditable decision log (who approved model changes, why, and with what validation data) because OPC actions emphasise accountability and traceability.
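To make the de-identification evidence above reproducible, here is a minimal sketch of one commonly used re-identification risk metric, k-anonymity over quasi-identifiers. It assumes pandas; the column names are hypothetical examples, and a real PIA would pair this with stronger tests (l-diversity, adversarial linkage attempts).

```python
# Minimal k-anonymity check over quasi-identifiers, to back a
# de-identification claim with a reproducible metric.
# The quasi-identifier columns below are hypothetical examples.
import pandas as pd

QUASI_IDENTIFIERS = ["age_band", "postal_prefix", "sex"]  # hypothetical columns

def k_anonymity(df: pd.DataFrame, quasi_ids: list) -> int:
    """Smallest equivalence-class size over the quasi-identifiers.
    A dataset is k-anonymous if every combination of quasi-identifier
    values is shared by at least k records."""
    return int(df.groupby(quasi_ids).size().min())

def at_risk_records(df: pd.DataFrame, quasi_ids: list, k: int = 5) -> pd.DataFrame:
    """Records in equivalence classes smaller than k (elevated linkage risk)."""
    sizes = df.groupby(quasi_ids)[quasi_ids[0]].transform("size")
    return df[sizes < k]

if __name__ == "__main__":
    records = pd.DataFrame({
        "age_band": ["30-39", "30-39", "70-79"],
        "postal_prefix": ["M5V", "M5V", "K1A"],
        "sex": ["F", "F", "M"],
    })
    print("k =", k_anonymity(records, QUASI_IDENTIFIERS))    # k = 1: one record is unique
    print(at_risk_records(records, QUASI_IDENTIFIERS, k=5))  # all rows flagged here
```

A record in an equivalence class of size 1 is unique on its quasi-identifiers and therefore at elevated linkage risk; retaining runs of this kind of check, with the datasets used, is the sort of evidence trail the OPC’s findings emphasize.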
3) Canadian Digital Regulators Forum (CDRF) — synthetic media paper (Sep 18, 2025)
The Canadian Digital Regulators Forum (via the Competition Bureau) published a policy paper addressing synthetic media: risks, harms, and recommended regulatory approaches including provenance, labeling and cross-regulator coordination. CDRF / Competition Bureau: synthetic media policy paper.
How this relates to AI in healthcare:
Synthetic media issues map directly to patient or clinician-facing outputs (generated reports, voice assistants, avatar clinicians, automatically generated educational material). Regulators expect clear provenance and labeling of AI-generated content.
Suggested operational controls to adopt:
- Provenance metadata: embed and retain metadata ribbons that record model version, dataset provenance, timestamp, confidence scores, and whether content was synthesized (text, voice, image); a minimal sketch follows this list.
- Mandatory labelling: display visible notices in patient portals and clinician interfaces when content or suggestions are AI-generated (recommend wording, placement, and persistence in UI/UX tests).
- Audit and rollback: implement traceable audit trails and a capability to flag or remove AI-generated content that is inaccurate or harmful; perform human review for high-risk outputs.
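As one way to implement the metadata-ribbon control above, here is a minimal sketch of a provenance record attached to AI-generated content. The field names are illustrative assumptions, not a standard; align them with whatever provenance scheme (e.g., C2PA) your organisation adopts.

```python
# Minimal provenance "ribbon" attached to every AI-generated artifact.
# Field names are illustrative; align them with the provenance standard
# (e.g., C2PA) your organisation adopts.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class ProvenanceRibbon:
    model_name: str
    model_version: str
    dataset_provenance: str        # e.g., an internal dataset-registry ID
    generated_at: str              # ISO 8601 UTC timestamp
    confidence: Optional[float]    # model confidence score, if available
    synthesized: bool              # True for generated text/voice/image
    content_type: str              # "text" | "voice" | "image"

def make_ribbon(model_name: str, model_version: str, dataset_id: str,
                confidence: Optional[float], content_type: str) -> ProvenanceRibbon:
    """Build a ribbon at generation time; store it alongside the content."""
    return ProvenanceRibbon(
        model_name=model_name,
        model_version=model_version,
        dataset_provenance=dataset_id,
        generated_at=datetime.now(timezone.utc).isoformat(),
        confidence=confidence,
        synthesized=True,
        content_type=content_type,
    )

ribbon = make_ribbon("discharge-summary-llm", "2.3.1", "dsreg://cohort-2024-07",
                     confidence=0.87, content_type="text")
print(json.dumps(asdict(ribbon), indent=2))
```

Persisting the ribbon with the content (rather than regenerating it later) is what makes the audit-and-rollback control in the previous list workable.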
Liability & governance: be prepared for cross-sector enforcement (privacy, competition, and consumer protection) if AI outputs cause misinformation; risk assessments should include reputational, safety, and regulatory dimensions.
4) Health Canada — Pre-market guidance for machine learning-enabled medical devices (published Feb 5, 2025 — continued implementation)
Health Canada’s pre-market guidance for ML-enabled medical devices (MLMD) remains the primary source of obligations for devices that meet the software as a medical device (SaMD) definition in Canada. The document sets expectations on transparency, validation, and clinical evidence, and introduces the Predetermined Change Control Plan (PCCP) for planned algorithm changes.
Full guidance & PDF: Health Canada MLMD guidance (web) · Guidance (PDF).
How this relates to AI in healthcare:
This guidance applies to Class II–IV MLMD. If your clinical decision support (CDS), diagnostic aid, triage system, or image analysis module influences clinical decisions and meets the medical-device definition, you must treat it as a regulated device in Canada.
Key pre-market expectations (actionable checklist):
- Device classification mapping: confirm your device class (I–IV) based on intended use and risk; document the rationale and regulatory pathway.
- Clinical evidence: provide clinical validation covering retrospective and prospective datasets, external validation cohorts, performance metrics (sensitivity/specificity, AUROC, calibration), and subgroup analysis for bias; include sample sizes, selection criteria, and missing-data handling. A subgroup-metrics sketch follows this checklist.
- Algorithm description & provenance: describe model architecture, training datasets (sources, date ranges), preprocessing, feature engineering, and data governance controls; identify any use of synthetic or third-party datasets and consent provenance.
- Risk management: include hazard analysis mapped to clinical impact, mitigation strategies, and human-in-the-loop controls where applicable. Document failure modes, severity, and likelihood estimates (per ISO 14971).
- PCCP (Predetermined Change Control Plan): define allowable post-market changes (retraining, threshold tuning), validation requirements for each change type, monitoring triggers, and communication plans for regulators and users. The PCCP should specify what changes can be made without a new submission and what requires one.
- Transparency & labeling: provide device labeling for clinicians/patients, describing intended use, limitations, model version, and performance metrics. Consider including confidence intervals and recommended clinician actions on low-confidence outputs.
- Post-market surveillance: set up active post-market monitoring (real-world performance, drift detection, adverse event reporting) with defined KPIs, thresholds, and rapid mitigation pathways.
- Cybersecurity & data integrity: follow device cybersecurity guidance (update processes, patching, secure telemetry) and protect model weights and training data.
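To illustrate the clinical-evidence expectations above, here is a minimal sketch of subgroup performance reporting: sensitivity, specificity, AUROC, and a crude calibration gap per subgroup. It assumes scikit-learn and NumPy; the data and group labels are hypothetical.

```python
# Sketch of subgroup performance reporting for a binary classifier:
# sensitivity, specificity, AUROC, and a crude calibration gap per
# subgroup. Inputs and group labels are hypothetical.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def subgroup_report(y_true, y_prob, groups, threshold=0.5):
    y_true, y_prob, groups = map(np.asarray, (y_true, y_prob, groups))
    for g in np.unique(groups):
        mask = groups == g
        yt, yp = y_true[mask], y_prob[mask]
        pred = (yp >= threshold).astype(int)
        tn, fp, fn, tp = confusion_matrix(yt, pred, labels=[0, 1]).ravel()
        sens = tp / (tp + fn) if (tp + fn) else float("nan")
        spec = tn / (tn + fp) if (tn + fp) else float("nan")
        auroc = roc_auc_score(yt, yp) if len(np.unique(yt)) == 2 else float("nan")
        cal_gap = abs(yp.mean() - yt.mean())  # calibration-in-the-large only
        print(f"group={g}: n={mask.sum()} sens={sens:.2f} spec={spec:.2f} "
              f"AUROC={auroc:.2f} cal_gap={cal_gap:.3f}")

# Hypothetical example data (two sites as subgroups):
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
p = np.clip(y * 0.6 + rng.normal(0.2, 0.2, 200), 0, 1)
g = rng.choice(["site_A", "site_B"], 200)
subgroup_report(y, p, g)
```

A submission would report these per-subgroup numbers with confidence intervals and pre-specified acceptance criteria; the sketch shows only the mechanical structure of the analysis.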
Suggested deliverables for a Health Canada pre-market package for MLMD:
- Device description + intended use statement
- Model card + dataset datasheet + bias/representativeness analysis
- Clinical validation study reports (statistical analysis plan + results)
- Risk management file (per ISO 14971) and safety case
- PCCP with change classification and validation protocols (a configuration sketch follows this list)
- Post-market monitoring plan and patient communication templates
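As a sketch of how a PCCP change taxonomy can be made machine-readable, the configuration below classifies hypothetical change types as pre-authorized or submission-triggering. The categories, validation steps, and notification flags are illustrative assumptions, not Health Canada requirements.

```python
# Hypothetical machine-readable PCCP change taxonomy: which change types
# are pre-authorized under the plan versus which trigger a new submission,
# and the validation each change type must pass. Illustrative only.
PCCP_CHANGE_TAXONOMY = {
    "retraining_same_data_sources": {
        "pre_authorized": True,
        "validation": ["re-run locked clinical test set",
                       "subgroup metrics within 2% of baseline"],
        "notify_regulator": False,
    },
    "threshold_tuning": {
        "pre_authorized": True,
        "validation": ["sensitivity/specificity re-verified on locked test set"],
        "notify_regulator": False,
    },
    "new_data_source": {
        "pre_authorized": False,  # outside the PCCP: new submission required
        "validation": ["full external validation", "updated risk management file"],
        "notify_regulator": True,
    },
    "architecture_change": {
        "pre_authorized": False,
        "validation": ["treated as a new device version"],
        "notify_regulator": True,
    },
}

def requires_new_submission(change_type: str) -> bool:
    """True when the proposed change falls outside the pre-authorized plan."""
    return not PCCP_CHANGE_TAXONOMY[change_type]["pre_authorized"]

assert requires_new_submission("new_data_source")
assert not requires_new_submission("threshold_tuning")
```

Encoding the taxonomy this way lets MLOps pipelines enforce the PCCP automatically: a proposed change is blocked until its validation steps are attached and, where required, the regulatory team is notified.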
Timeline & resourcing: building these artifacts requires clinical partnerships for validation cohorts, statisticians for study design, legal/privacy teams for data provenance, and dedicated regulatory resources to prepare the submission; budget 3–9 months depending on device class and readiness.
United States
1) FDA — AI/ML-enabled device transparency (AI-Enabled Medical Device List)
The U.S. Food & Drug Administration continues to maintain and update its AI/ML-Enabled Medical Device resources (a public list/registry and guidance pages that describe the agency’s approach to AI/ML SaMD). This is an operational transparency measure used by purchasers and clinicians to identify regulated AI devices. FDA AI/ML resources & device list.
How this relates to AI in healthcare:
- Usefulness to procurement: hospitals should consult the FDA list to check marketed devices’ regulatory status and to verify whether an AI tool is FDA-authorized (and under which conditions). This affects purchasing, liability, and clinical trust.
- Regulatory expectations mirrored elsewhere: the FDA has emphasized lifecycle approaches, transparency, and real-world performance monitoring — concepts now echoed by Health Canada and other regulators. Vendors targeting North America should harmonize submissions and PCCP-style artifacts where possible.
- Post-market & labeling: where the FDA requires specific labeling or post-market studies, purchasers should ensure vendor contracts require compliance support and data sharing for those studies.
2) HHS / OCR — HIPAA enforcement & guidance (ongoing)
HHS’s Office for Civil Rights (OCR) continues to publish HIPAA guidance and maintain enforcement resources; there was no new HIPAA-specific AI rule this week, but HIPAA obligations remain central whenever AI processes protected health information in the U.S. HHS / OCR — HIPAA resources for professionals.
How this relates to AI in healthcare:
- Business Associate Agreements (BAAs): cloud vendors or AI processors that handle PHI will routinely be considered Business Associates under HIPAA — ensure BAAs explicitly cover model training, secondary uses, subcontractors, and breach notification.
- Security Rule requirements: implement administrative, technical, and physical safeguards for ePHI, including encryption in transit and at rest, access controls, and logging that captures who ran models and what data they accessed (a minimal logging sketch follows this list).
- Incident response & breach reporting: ensure rapid detection and notification flows are contractually and operationally defined; consider tabletop exercises that include AI model compromise or data leak scenarios.
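To illustrate the logging safeguard above, here is a minimal sketch of a structured audit log for model invocations over ePHI. The wrapper and field names are illustrative assumptions, not a HIPAA-mandated schema.

```python
# Minimal structured audit log for model invocations over ePHI: records
# who ran which model version against which records. The wrapper and
# field names are illustrative, not a HIPAA-mandated schema.
import json
import logging
from datetime import datetime, timezone

audit_logger = logging.getLogger("ephi_audit")
audit_logger.setLevel(logging.INFO)
audit_logger.addHandler(logging.FileHandler("ephi_audit.log"))

def log_model_access(user_id, model_version, record_ids, purpose):
    """Append a structured audit event; ship events to tamper-evident storage."""
    audit_logger.info(json.dumps({
        "event": "model_invocation",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "model_version": model_version,
        "record_ids": record_ids,  # identifiers only, never raw PHI values
        "purpose": purpose,
    }))

log_model_access("clinician-042", "sepsis-risk-1.4.0",
                 ["pt-889", "pt-413"], purpose="triage decision support")
```

Logging record identifiers rather than PHI values keeps the audit trail itself from becoming a second copy of sensitive data, while still answering the who/what/when questions an OCR inquiry or breach investigation would ask.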
European Union & United Kingdom
1) EU — EU4Health calls; DG SANTE signals (Sep 23, 2025)
HaDEA (the EU Health and Digital Executive Agency) published new EU4Health open calls for proposals under the 2025 Work Programme on Sep 23, 2025, tied to DG SANTE priorities (digital health, interoperability, and safe AI in health). These calls steer funding toward projects that implement EU policy goals. EU4Health — new calls (HaDEA).
How this relates to AI in healthcare:
- Funding & collaboration: EU-based consortia can bid for funds that explicitly promote trustworthy, interoperable AI in health; Canadian or global partners should monitor RFPs for collaboration opportunities and for signals on EU technical priorities (e.g., transparency, data governance, interoperability).
- Standards alignment: EU funding often requires alignment with the EU AI Act principles and MDR/IVDR device rules — vendors should prepare documentation to demonstrate compliance with both the AI Act’s obligations (where applicable) and medical device requirements.
2) EMA — AI workplan for medicines regulation and data strategy
The European Medicines Agency continues implementing an AI workplan that addresses the use of AI across regulatory science (data analysis, pharmacovigilance, clinical trials), and maintains pages describing data and big data strategies. No single new binding law this week, but continuing activity shapes member states’ expectations. EMA — AI & data strategy.
How this relates to AI in healthcare:
- Regulatory science: expect EMA guidance focusing on AI for pharmacovigilance (signal detection), trial analytics, and modelling; sponsors should plan for data standards, explainability of algorithms used for safety signals, and auditability.
- Multi-stakeholder alignment: EMA’s work encourages national regulators to harmonize approaches to AI in regulated products (drugs + devices), meaning combined submissions (e.g., a drug–device combination that uses AI for dosing) will require coordinated evidence.
3) UK — MHRA AI Airlock sandbox & expanded pilot activity
The MHRA’s regulatory sandbox for AI as a Medical Device (AI Airlock) continues to run and expand cohorts; public pages describe pilot cohorts, background and recent cohort openings. The UK positions the AI Airlock as a world-leading sandbox to test real-world regulatory approaches. MHRA — AI Airlock (collection).
How this relates to AI in healthcare:
- Practical pathway for innovators: the AI Airlock offers a route to test devices under regulatory oversight with structured data collection and feedback — consider applying if you need real-world evidence or want early regulatory engagement for a UK roll-out.
- Learning outcomes: MHRA pilot results feed technical expectations (monitoring, clinical evaluation frameworks) that other regulators often watch; participants should expect to share anonymized learnings and validation approaches.
Rest of world — key official items
1) Australia — TGA publishes list of AI-enabled medical devices (ARTG)
The Therapeutic Goods Administration published and continues to update a list of AI-enabled medical devices on the Australian Register of Therapeutic Goods (ARTG), and published outcomes of its AI review that stress compliance, transparency and monitoring. TGA — list of AI-enabled devices in ARTG · TGA — AI & medical device software guidance.
How this relates to AI in healthcare:
- Transparency & comparability: the ARTG AI device list gives purchasers visibility into marketed AI devices; use it as an input to procurement and to benchmark device labeling and claims.
- Compliance focus: TGA has emphasized compliance reviews and has guidance for digital scribes and AI products — vendors should expect routine compliance checks and should maintain up-to-date device dossiers.
2) World Health Organization (WHO) — ethics & governance guidance
WHO continues to publish and update high-level guidance on ethics and governance for AI in health, including recent work on large multimodal models and the Global Initiative on AI for Health (GI-AI4H). These documents are non-binding but influential globally. WHO — AI for health programme · WHO — Ethics & governance of AI for health (guidance).
How this relates to AI in healthcare:
- Global norms & equity: WHO guidance focuses on equity, human rights, explainability and safety — valuable when designing multinational studies or deployments, especially for low-resource settings.
- Operational adoption: WHO’s GI-AI4H can provide technical toolkits and validation frameworks useful for regulators and health systems; reference WHO recommendations when arguing for ethical clearance or funding.
Cross-cutting themes & recommendations (what regulators are converging on)
Across outputs from Canada, the U.S., the EU, the UK, Australia, and the WHO in this period, several durable themes are evident: transparency & registries, lifecycle oversight (PCCP / change control), post-market surveillance & real-world performance monitoring, provenance/labeling for synthetic outputs, active privacy enforcement, and the use of regulatory sandboxes to handle novel adaptive AI designs. These themes define the practical checklist below.
Immediate, concrete checklist for health organisations & vendors (actionable & detailed)
- Regulatory mapping (by jurisdiction): classify each AI component vs local medical-device definitions (Health Canada, FDA, MHRA, TGA, EU MDR/IVDR). Produce a one-page map: component → intended use → likely device class/regime → required filings.
- Pre-market evidence & PCCP readiness: for MLMD, prepare the PCCP (change taxonomy + validation protocols), clinical evidence packages (external validation + subgroup analysis), and full model documentation (model card + dataset datasheet). Have a validation statistics plan and independent clinical reviewer.
- Privacy & DPIAs: run DPIAs with adversarial re-identification tests, log data lineage, and document cross-border contracts. Update consent language to explicitly mention AI processing and potential secondary uses.
- Provenance & labeling: implement metadata ribbons, visible notices for AI-generated outputs, and a human review threshold for clinical-impact outputs; maintain immutable audit logs.
- Post-market surveillance system: define KPIs (performance drift metrics, calibration over time, A/B monitoring), reporting pipelines to regulators and clinicians, and an incident triage/rollback plan. Automate drift alerts with human escalation (a minimal drift-check sketch follows this list).
- Contracts & BAAs: ensure vendor contracts include obligations for regulatory submissions, audit rights, breach notification, BAA coverage for PHI, model retraining governance and liability allocation.
- Sandbox & early engagement: where feasible, apply to sandboxes (MHRA AI Airlock) or request early meetings with regulators (Health Canada pre-submission meetings, FDA Q-sub meetings) to de-risk approval pathways.
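As a sketch of the automated drift alerting mentioned in the checklist, the snippet below compares a recent performance window against the validated baseline and escalates to a human when degradation exceeds a tolerance. It assumes scikit-learn; the baseline, tolerance, and notify hook are hypothetical.

```python
# Minimal post-market drift check: compare recent AUROC against the
# validated baseline and escalate to a human when degradation exceeds a
# tolerance. Baseline, tolerance, and the notify hook are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score

BASELINE_AUROC = 0.91  # from the pre-market validation report (hypothetical)
TOLERANCE = 0.03       # escalate if AUROC drops by more than this

def check_drift(y_true, y_prob, notify):
    """Return current AUROC; call notify() when the drop breaches tolerance."""
    current = roc_auc_score(y_true, y_prob)
    if BASELINE_AUROC - current > TOLERANCE:
        notify(f"AUROC drift: baseline={BASELINE_AUROC:.2f}, current={current:.2f}")
    return current

# Hypothetical recent window of outcomes and scores from a degraded model:
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 500)
p = np.clip(y * 0.4 + rng.normal(0.3, 0.35, 500), 0, 1)
check_drift(y, p, notify=lambda msg: print("ESCALATE:", msg))
```

In production the same check would run on rolling windows with the thresholds taken from the device's PCCP and post-market monitoring plan, so a breach triggers the human escalation path rather than a silent retrain.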
Sources
- Government of Canada — Ministers at ALL IN (news index) — Ministers at ALL IN
- Office of the Privacy Commissioner (OPC) — PIPEDA Report of Findings #2025-003 (TikTok) — TikTok report (OPC)
- Competition Bureau / CDRF — synthetic media policy paper — CDRF synthetic media policy paper
- Health Canada — Pre-market guidance for machine learning-enabled medical devices (web & PDF) — Health Canada MLMD guidance (web)
- FDA — AI/ML-Enabled Medical Device resources & list — FDA AI/ML resources
- HHS / OCR — HIPAA resources — HHS HIPAA
- EU HaDEA / EU4Health — new calls under 2025 Work Programme — EU4Health calls
- EMA — AI & data strategy / AI workplan — EMA AI pages
- MHRA — AI Airlock (collection & pilot pages) — MHRA AI Airlock
- TGA (Australia) — AI-enabled devices in ARTG + guidance — TGA ARTG AI list
- WHO — AI for health programme & ethics guidance — WHO AI for health