April 2026 saw meaningful regulatory movement across several major jurisdictions. While activity in Canada and the United States was relatively modest compared to previous months – each producing one notable update – the EU and UK were particularly active, with a cluster of funding announcements, institutional developments, and governance reports that together signal a significant acceleration in how Europe is approaching AI in healthcare. Canada finalized its long-awaited machine learning device guidance, the FDA launched AI-powered real-time clinical trial pilots, the UK secured multi-year funding for its AI regulatory sandbox, and the EU opened €63.2 million in AI health funding. WHO/Europe published its first comprehensive snapshot of AI deployment across all 27 EU member states.
Disclaimer: This article is produced for educational and informational purposes only. It summarises publicly available regulatory updates and does not constitute legal, regulatory, financial, or professional advice. All readers should seek qualified counsel before taking any action based on the information presented here.
Canada
Health Canada — Final Pre-Market Guidance for Machine Learning-Enabled Medical Devices (April 1, 2026)
Category: Regulatory Guidance / Market Authorization
Health Canada published its pre-market guidance for machine learning-enabled medical devices (MLMD), establishing what has been described as one of the most comprehensive national frameworks of its kind. The guidance covers the full device lifecycle: good machine learning practice, design and architecture, risk management, data selection and curation, model development and training, performance testing, clinical validation, transparency obligations, and post-market monitoring. It applies to manufacturers submitting new or amended applications for Class II, III, and IV MLMDs under Canada's Medical Devices Regulations.
The most significant structural feature is the Predetermined Change Control Plan (PCCP) — a mechanism that may allow manufacturers to pre-authorize a defined envelope of future ML model modifications without requiring a new regulatory submission for each change. This aims to address one of the core commercial tensions in deploying adaptive AI in regulated settings: the need to continuously improve models post-market while remaining within an approved regulatory scope. Health Canada aligned definitions and terminology with the International Medical Device Regulators Forum (IMDRF), with the stated aim of supporting harmonization with FDA and EU frameworks that have adopted comparable PCCP approaches. Readers are encouraged to consult the original guidance and seek qualified regulatory advice to understand how these requirements apply to their specific circumstances.
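The "defined envelope" idea can be made concrete with a small sketch: the manufacturer enumerates permitted modification types and performance floors up front, and each candidate model update is checked against that envelope before deployment — anything outside it triggers a new submission. The field names, change types, and thresholds below are illustrative assumptions, not terms taken from the guidance.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the envelope fields and thresholds here are
# hypothetical, not drawn from Health Canada's PCCP guidance.
@dataclass
class ChangeEnvelope:
    allowed_change_types: set = field(
        default_factory=lambda: {"retraining_on_new_data", "threshold_adjustment"}
    )
    min_sensitivity: float = 0.92    # performance floor agreed at authorization
    min_specificity: float = 0.90
    locked_architecture: bool = True  # architecture changes fall outside the envelope

def within_envelope(envelope: ChangeEnvelope, change: dict) -> bool:
    """Return True if a proposed model change stays inside the pre-authorized scope."""
    if change["type"] not in envelope.allowed_change_types:
        return False
    if change.get("modifies_architecture") and envelope.locked_architecture:
        return False
    metrics = change["validated_metrics"]
    return (metrics["sensitivity"] >= envelope.min_sensitivity
            and metrics["specificity"] >= envelope.min_specificity)

envelope = ChangeEnvelope()
retrain = {
    "type": "retraining_on_new_data",
    "modifies_architecture": False,
    "validated_metrics": {"sensitivity": 0.94, "specificity": 0.91},
}
print(within_envelope(retrain and envelope and envelope, retrain) if False else within_envelope(envelope, retrain))  # True: in-scope, no new submission
```

The point of the sketch is the structure, not the numbers: the commercial value of a PCCP lies in agreeing on the envelope once, so that routine retraining cycles do not each restart the authorization clock.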
Pre-market guidance for machine learning-enabled medical devices
United States
FDA — Real-Time Clinical Trials Proof-of-Concept and RFI Launch (April 28, 2026)
Category: Clinical Trials Modernization / Regulatory Innovation
The FDA announced two proof-of-concept clinical trials that will report endpoints and data signals to the agency in real time, alongside a formal Request for Information for a proposed summer 2026 pilot program. The agency described this as a direct product of advances in AI and data science, noting that sponsors and trial sites may have the practical capability to conduct continuous, real-time trials that could dramatically enhance safety monitoring and operational efficiency. Comments on the RFI were open until May 29, 2026.
For those following AI-enabled therapeutics and tools, this suggests that the regulatory evidentiary framework itself may be changing — not just the tools used to navigate it.
FDA Announces Major Steps to Implement Real-Time Clinical Trials
UK & EU
1) UK MHRA — AI Airlock Programme Secures Multi-Year Funding (April 8, 2026)
Category: Regulatory Sandbox / Institutional Governance
The MHRA confirmed that the AI Airlock Programme — the UK’s dedicated regulatory sandbox for AI as a medical device — had secured multi-year funding, removing the year-to-year financial uncertainty that had constrained the programme’s scope and ambition. Phase 2 of the sandbox, which concluded in March 2026, assessed seven AI technologies spanning clinical note-taking, advanced cancer diagnostics, eye disease detection, and obesity treatment support. Phase 3 design is underway, with details to follow later in 2026.
The significance here is institutional: multi-year funding may effectively transform the AI Airlock from a pilot initiative into a permanent feature of the UK’s medical device regulatory infrastructure. This could have meaningful implications for developers seeking a structured pre-submission regulatory engagement pathway, and the programme is being watched as a potential template by other jurisdictions.
MHRA expands AI Airlock programme with a £3.6 million funding boost over three years
2) UK National Commission on AI in Healthcare — Public Engagement and Evidence Synthesis (April 15–17, 2026)
Category: Policy Development / Public Consultation
On April 15, the MHRA published a blog from Dame Jennifer Dixon (CEO, Health Foundation) setting out the National Commission’s approach to adaptive AI regulation, emphasizing that both pre-market evaluation and robust post-market surveillance should be in place as AI technologies evolve — with safety, performance, and equity as stated anchors. On April 17, Professor Henrietta Hughes (Patient Safety Commissioner and Deputy Chair of the National Commission) reported that the Commission’s Call for Evidence received over 770 responses, with approximately one-third from patients and members of the public. The dominant theme across responses was trust — in the technologies themselves and in the systems and institutions that govern them. A public Ask Me Anything session was also published on April 17, aimed specifically at patients and carers.
This may be relevant to the industry because the Commission’s evidence base is intended to inform recommendations on how AI medical devices are regulated in the UK going forward. The strong patient voice in the evidence record suggests that equity, transparency, and public accountability are likely to feature prominently in those recommendations.
How to seize the growing opportunities of AI and technology ahead
Shaping the future of healthcare
3) EU Commission — €63.2 Million Digital Europe Programme Funding for AI in Health (April 21, 2026)
Category: Funding / Policy Implementation
The European Commission opened seven Digital Europe Programme calls totalling €63.2 million to support AI in health, digital health, digital skills, and online safety. These calls form part of the AICare@EU initiative and the broader Apply AI Strategy, which aims to move AI from clinical research settings into widespread deployment across EU healthcare systems. Priority areas include diagnostics, cancer imaging infrastructure (targeting 60 million cancer images by end of 2026), and cardiovascular care. Eligible applicants include health systems, research institutions, and technology developers.
For those operating in or seeking entry into EU markets, these calls may represent both a direct funding opportunity and an indication of where the Commission expects AI adoption to be concentrated over the near term. Prospective applicants should consult the official programme documentation for eligibility and application requirements.
Commission makes €63.2 million available to support AI innovation in health and online safety
4) EU Joint Research Centre — AI in Cardiovascular Care Report (April 7, 2026)
Category: Clinical Evidence / Policy Guidance
The European Commission’s Joint Research Centre published Artificial Intelligence in Cardiovascular Care: From Promise to Practice, a report examining the current deployment landscape of AI tools in cardiovascular prevention, diagnosis, and treatment across EU member states. The report sits within the AICare@EU initiative and aims to map the gap between demonstrated AI capability and actual clinical adoption — identifying where barriers are regulatory, infrastructural, or cultural. Cardiovascular disease remains the leading cause of death across the EU, making this a high-priority domain for the Commission’s Apply AI Strategy.
Artificial intelligence in cardiovascular care: from promise to practice
5) WHO/Europe — First Comprehensive Report on AI in Healthcare Across All 27 EU Member States (April 20, 2026)
Category: Landscape Assessment / International Governance
WHO/Europe released what it describes as the first comprehensive report assessing AI in healthcare across all 27 EU member states, based on data collected from June 2024 to March 2025. Key findings: all 27 countries identified improved patient care as the primary driver of AI development, and the majority are already deploying AI tools in clinical settings — a baseline of adoption that did not exist even three years prior. WHO/Europe identified three priority areas for governments: workforce readiness (education in AI fundamentals, ethics, and data governance); inclusive engagement (actively involving health professionals, patients, and the public in AI policy development); and centres of excellence (shared testing infrastructure and common standards for safe and equitable implementation).
This report has been described as important not just as a snapshot but as a calibration tool — it may establish an evidence-based baseline against which EU member states and their regulators could measure progress over the next regulatory cycle, and its recommendations may inform both national policy and EU-level governance priorities.
Cross-Cutting Themes
1) Lifecycle governance is now a widely stated baseline.
The single most consistent thread across April’s updates is that regulatory responsibility no longer ends at market authorization — at least according to the frameworks discussed above. Canada’s MLMD guidance, the EU’s AICare@EU framework, and the FDA’s real-time trial initiative all embed ongoing post-market obligations — model drift monitoring, performance surveillance, and change reporting — as stated requirements. The PCCP mechanism adopted by Health Canada (aligning with FDA and EU approaches) is the practical expression of this. Those building ML-enabled medical devices may wish to consider how post-market obligations factor into their product and operational planning, in consultation with qualified regulatory counsel.
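To make "model drift monitoring" less abstract, here is a minimal stdlib-only sketch of one widely used drift metric, the Population Stability Index (PSI), comparing a model's live score distribution against its validation-time baseline. The binning scheme and the conventional 0.1/0.25 cutoffs are industry rules of thumb, not thresholds taken from any of the frameworks discussed above.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline score distribution (e.g. the validation set at
    authorization) and live production scores. Common rule of thumb: < 0.1
    stable, 0.1-0.25 moderate shift, > 0.25 significant drift. These cutoffs
    are an industry convention, not a regulatory requirement."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins

    def frac(data, i):
        in_bin = sum(
            1 for x in data
            if lo + i * width <= x < lo + (i + 1) * width
            or (i == bins - 1 and x == hi)   # include the top edge in the last bin
        )
        return max(in_bin / len(data), 1e-6)  # floor avoids log(0) on empty bins

    return sum(
        (frac(actual, i) - frac(expected, i))
        * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )

baseline = [i / 100 for i in range(100)]        # scores at authorization
live = [min(0.99, s + 0.3) for s in baseline]   # live population shifted upward
print(round(population_stability_index(baseline, live), 3))  # well above the 0.25 alert line
```

In an operational post-market surveillance plan, a check like this would run on a schedule, with alerts above an agreed threshold feeding the change-reporting and incident processes the new guidance documents describe.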
2) Regulatory sandboxes appear to be becoming permanent infrastructure.
The UK AI Airlock’s multi-year funding is a structural signal, not just a financial one. It suggests the MHRA has committed to maintaining a pre-market regulatory engagement pathway for AI medical devices as an ongoing institutional function. This is consistent with how other jurisdictions are evolving: the sandbox model appears to be transitioning from an experiment to an expected feature of the regulatory landscape for AI-specific medical technologies.
3) Public trust is increasingly being discussed as a regulatory consideration.
The UK National Commission’s Call for Evidence — where a third of over 770 responses came from patients and the public, with trust emerging as the dominant theme — reflects a broader international conversation. WHO/Europe’s recommendations similarly centre inclusive public engagement as a government priority. Observers have noted that transparency, explainability, and equity may be moving from differentiators toward expected features in regulatory frameworks — though the precise requirements will vary by jurisdiction and product type.
4) Clinical evidence frameworks may be restructured around AI.
The FDA’s real-time clinical trials initiative represents a potential shift in how clinical evidence is generated, accumulated, and reviewed. Moving toward continuous, phase-less evidence generation could change the evidentiary requirements for approval decisions — and may reshape how AI tools used in trial operations, patient monitoring, and outcome analysis are evaluated. This is still early-stage, and the eventual shape of this framework remains to be seen.
5) Investment and deployment appear to be converging.
The EU’s €63.2 million in Digital Europe Programme funding, combined with the AICare@EU initiative and the Apply AI Strategy, suggests that the Commission is increasingly investing in AI deployment at scale alongside its regulatory work. For those following this space, this creates a notable parallel track: regulatory compliance and publicly funded deployment programmes appear to be increasingly co-located in the same policy architecture.
Key Considerations for Regulatory Alignment
For Founders & Business Owners
1) Consider lifecycle governance as part of your product planning.
With Canada, the FDA, and the EU all embedding post-market monitoring, drift detection, and change reporting into their frameworks, these may need to become operational capabilities built into product architecture from an early stage. Consulting with regulatory and technical advisors early in development is recommended.
2) Understand the PCCP and its potential commercial relevance.
The Predetermined Change Control Plan — now described as available in Canada and aligned with FDA and EU equivalents — may allow manufacturers to pre-authorize a defined envelope of future model modifications without triggering a full re-submission. For products with continuous learning or iterative improvement cycles, this could be a meaningful commercial enabler. Those interested in leveraging a PCCP should consult regulatory counsel early in development.
3) Monitor the FDA’s real-time clinical trials pilot for developments.
The now-closed RFI represented an early opportunity to shape this framework; the summer 2026 pilot will show how it develops in practice. Those whose products touch clinical trial operations, decision support, or outcome monitoring may find the evolving framework relevant to their regulatory planning.
4) The EU’s €63.2 million in funding may represent an opportunity for eligible organisations.
The seven Digital Europe Programme calls opened in April may offer direct funding access for AI health applications, particularly in diagnostics, cancer imaging, and cardiovascular care. Eligible organisations should review the official programme documentation directly.
5) Be aware that public trust mechanisms are receiving increasing regulatory attention.
The pattern across jurisdictions — the UK’s patient-dominated Call for Evidence, WHO/Europe’s emphasis on inclusive governance, Canada’s transparency obligations — suggests that external stakeholders are increasingly present in the regulatory process. How this translates into enforceable requirements will depend on jurisdiction and product type.
For Compliance & Regulatory Specialists
1) Consider initiating a PCCP assessment for relevant ML-enabled medical device products.
Health Canada’s April 1 guidance introduces the PCCP as a formal regulatory mechanism for Canadian submissions. Those operating in Canada or planning to may wish to map anticipated model modification types — retraining on new data, threshold adjustments, architecture updates — against the PCCP scope. Coordination with qualified regulatory counsel across relevant jurisdictions is advisable given the IMDRF harmonization intent.
2) Review post-market surveillance obligations in light of new guidance.
Canada’s MLMD guidance codifies requirements for model drift monitoring and performance reporting that may exceed what existing post-market surveillance plans cover. Readers may wish to cross-reference existing plans against the new guidance requirements — particularly around documentation of “significant changes,” bias monitoring protocols, and incident reporting thresholds for ML-specific failure modes. Qualified regulatory counsel should be involved in any gap assessment.
3) Note that the FDA’s early-phase AI trials RFI closed May 29, 2026.
Docket No. FDA-2026-N-4390 was an active public consultation on how AI tools will be evaluated in early-phase trial contexts. Those with experience deploying AI in biomarker identification, dose optimization, or patient selection may wish to monitor how the framework develops as the pilot progresses.
4) Consider assessing AI Airlock eligibility for Phase 3.
With the UK AI Airlock Phase 3 design underway and multi-year funding confirmed, the next cohort intake is expected to be announced later in 2026. Those whose products lack a clearly mapped UK market authorization pathway may wish to begin eligibility assessment in consultation with regulatory advisors.
5) Track the EU AI Act high-risk rules timeline alongside Medical Devices Regulation obligations.
The AI Act’s high-risk rules, originally set for August 2, 2026, are now linked to the availability of harmonized standards via the Digital Omnibus — with political agreement reached May 7, 2026. This may create a window to align AI Act compliance work with existing MDR/IVDR regulatory programmes rather than treating them as entirely separate tracks. Readers should monitor official EU communications and seek qualified legal advice on how this timeline affects their specific obligations.
Disclaimer: This checklist is provided for general informational purposes only and does not constitute legal, regulatory, or professional advice; organizations should consult with their legal and compliance departments to ensure adherence to specific jurisdictional requirements.
Sources
- Pre-market guidance for machine learning-enabled medical devices (February 5, 2025)
  https://www.canada.ca/en/health-canada/services/drugs-health-products/medical-devices/application-information/guidance-documents/pre-market-guidance-machine-learning-enabled-medical-devices.html
- FDA Announces Major Steps to Implement Real-Time Clinical Trials (April 28, 2026)
  https://www.fda.gov/news-events/press-announcements/fda-announces-major-steps-implement-real-time-clinical-trials
- MHRA expands AI Airlock programme with a £3.6 million funding boost over three years (April 8, 2026)
  https://www.gov.uk/government/news/mhra-expands-ai-airlock-programme-with-a-36-million-funding-boost-over-three-years
- How to seize the growing opportunities of AI and technology ahead (April 15, 2026)
  https://www.gov.uk/government/news/how-to-seize-the-growing-opportunities-of-ai-and-technology-ahead
- Shaping the future of healthcare (April 17, 2026)
  https://medregs.blog.gov.uk/2026/04/17/shaping-the-future-of-healthcare/
- Commission makes €63.2 million available to support AI innovation in health and online safety (April 21, 2026)
  https://digital-strategy.ec.europa.eu/en/news/commission-makes-eu632-million-available-support-ai-innovation-health-and-online-safety
- Artificial intelligence in cardiovascular care: from promise to practice (April 7, 2026)
  https://joint-research-centre.ec.europa.eu/jrc-news-and-updates/artificial-intelligence-cardiovascular-care-promise-practice-2026-04-07_en
- New WHO/Europe report provides first-ever snapshot of AI in health care across European Union Member States (April 20, 2026)
  https://www.who.int/europe/news/item/20-04-2026-new-who-europe-report-provides-first-ever-snapshot-of-ai-in-health-care-across-european-union-member-states