FDA AI/ML

Guidance for Software as a Medical Device

Updated: September 20, 2025 

FDA AI/ML Guidance in General

The U.S. Food and Drug Administration (FDA) regulates medical devices under the Federal Food, Drug, and Cosmetic Act (FDCA), which gives it authority to oversee not only traditional hardware devices but also Software as a Medical Device (SaMD). SaMD is software intended to diagnose, treat, mitigate, or prevent disease that performs these functions without being part of a hardware medical device.

With the rise of artificial intelligence (AI) and machine learning (ML), the FDA has issued specific guidance documents and discussion papers outlining how AI/ML-based software will be evaluated. While these guidance documents are non-binding rather than law, they represent the FDA’s current thinking and are critical for developers seeking market clearance or approval.

AI/ML-based SaMD is increasingly being deployed in:

  • Diagnostics and imaging (e.g., radiology tools that detect tumors or abnormalities)

  • Predictive analytics (e.g., models forecasting disease risk or adverse events)

  • Clinical decision support (AI assisting physicians with treatment planning)

  • Patient triage and monitoring (AI tools directing care priorities or monitoring vital signs in real time)

Depending on the product’s risk classification and novelty, the FDA requires one of several premarket pathways (see the illustrative routing sketch after this list):

  • 510(k) Clearance – Showing that the device is “substantially equivalent” to an existing, legally marketed device.

  • De Novo Classification – For novel devices with no existing predicate but considered low-to-moderate risk.

  • Premarket Approval (PMA) – The most rigorous pathway, required for high-risk devices, demonstrating clinical safety and effectiveness through extensive evidence.
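
As a rough planning illustration only (not a regulatory determination tool), the hypothetical Python helper below captures the two inputs that most often drive the choice of pathway: the device’s risk class and whether a legally marketed predicate exists. The function name and inputs are assumptions made for this sketch.

    # Illustrative sketch only: the pathway is ultimately an FDA determination.
    # "risk_class" and "has_predicate" are simplified, hypothetical inputs.
    def suggest_pathway(risk_class: str, has_predicate: bool) -> str:
        """Return a rough premarket-pathway suggestion for planning discussions."""
        risk_class = risk_class.upper()
        if risk_class == "III":
            return "PMA"                    # high-risk devices: Premarket Approval
        if has_predicate:
            return "510(k)"                 # substantial equivalence to a predicate
        if risk_class in {"I", "II"}:
            return "De Novo"                # novel, low-to-moderate risk, no predicate
        return "Consult FDA (Pre-Sub)"      # unclear cases: engage the FDA early

    print(suggest_pathway("II", has_predicate=False))   # -> De Novo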

The FDA’s AI/ML SaMD framework emphasizes:

  • Patient safety and clinical validity – AI must be safe and beneficial in real-world clinical use.

  • Transparency and explainability – Developers should provide clear documentation of algorithms, limitations, and intended uses.

  • Ongoing performance monitoring – AI systems must be continuously evaluated post-market, especially if adaptive or retraining mechanisms are involved (a minimal monitoring sketch follows this list).
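
A minimal sketch of what ongoing performance monitoring can look like in code, assuming a batch of labeled real-world outcomes and a hypothetical sensitivity threshold; real post-market surveillance programs are considerably broader than this.

    # Post-market monitoring sketch: compare recent real-world sensitivity against
    # a hypothetical acceptance threshold and flag the model for review if it drops.
    from dataclasses import dataclass

    @dataclass
    class MonitoringResult:
        sensitivity: float
        needs_review: bool

    def check_sensitivity(y_true: list, y_pred: list, threshold: float = 0.90) -> MonitoringResult:
        """y_true and y_pred are binary labels collected from recent real-world use."""
        true_pos = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        actual_pos = sum(y_true)
        sensitivity = true_pos / actual_pos if actual_pos else 0.0
        return MonitoringResult(sensitivity, needs_review=sensitivity < threshold)

    print(check_sensitivity([1, 1, 0, 1, 0, 1], [1, 0, 0, 1, 0, 1]))
    # MonitoringResult(sensitivity=0.75, needs_review=True)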

This guidance complements broader U.S. regulations such as HIPAA, which governs privacy and security of patient health information, ensuring that healthcare AI systems meet both safety and data protection standards.

 

How it applies to AI in healthcare

AI/ML-based SaMD is transforming clinical care, but it carries significant patient-safety implications. FDA oversight is intended to ensure that these technologies deliver clinical benefit while minimizing risk.

 

Scope of application

  • AI systems designed to diagnose, treat, or mitigate medical conditions.
  • Decision-support systems that provide clinical recommendations to healthcare professionals.
  • AI tools for medical image analysis, triage, and anomaly detection.
  • Real-time monitoring systems that track patient health or predict adverse events.

 

Relevance to healthcare AI

  • Evaluation of clinical validity and patient benefit.
  • Assessment of algorithm performance, reproducibility, and consistency (illustrated in the sketch after this list).
  • Requirements for cybersecurity and data integrity, aligned with HIPAA safeguards.
  • Integration with quality management systems such as ISO 13485 for medical devices.
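
To make “performance, reproducibility, and consistency” concrete, the sketch below computes a bootstrap confidence interval for AUC with a fixed random seed so the evaluation can be rerun and reproduced exactly. It assumes scikit-learn and NumPy are available; the labels and scores shown are stand-ins.

    # Reproducible evaluation sketch: fixed seed plus a bootstrap 95% interval for AUC.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def bootstrap_auc(y_true, y_score, n_boot=1000, seed=42):
        rng = np.random.default_rng(seed)                    # fixed seed -> repeatable results
        y_true, y_score = np.asarray(y_true), np.asarray(y_score)
        aucs = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
            if len(np.unique(y_true[idx])) < 2:              # AUC needs both classes present
                continue
            aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
        return np.percentile(aucs, [2.5, 97.5])

    y_true = [0, 0, 1, 1, 0, 1, 0, 1]                        # illustrative stand-in data
    y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.3, 0.9]
    print(bootstrap_auc(y_true, y_score))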

 

Key obligations and requirements

  • Premarket submissions: Depending on risk, devices may require 510(k) clearance, De Novo classification, or PMA approval.
  • Good Machine Learning Practices (GMLP): Transparent model development, validation, and testing procedures.
  • Real-world performance monitoring: Continuous post-market surveillance and data collection to ensure safety.
  • Change management protocols: Predefined plans and documentation for adaptive or retrained AI systems (see the sketch after this list).
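
As an illustration of what a predefined change-management record might capture, a sketch follows; the field names are assumptions made for this example, not an FDA-prescribed schema.

    # Sketch of a change-control record for a planned model modification.
    # Field names are illustrative assumptions, not an FDA-prescribed schema.
    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class PlannedChange:
        model_version: str         # version the change applies to
        description: str           # what will change (e.g., retraining data window)
        validation_protocol: str   # how the change is verified before release
        acceptance_criteria: str   # performance bar that must be met
        approved_by: str           # accountable reviewer or board

    change = PlannedChange(
        model_version="2.1.0",
        description="Quarterly retraining on newly collected multi-site data",
        validation_protocol="Hold-out test on a frozen multi-site benchmark",
        acceptance_criteria="AUC >= 0.92 with no subgroup drop greater than 0.03",
        approved_by="Clinical Safety Board",
    )
    print(json.dumps(asdict(change), indent=2))   # serialize for the submission dossier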

 

Governance requirements in healthcare AI

  • Model design specifications: Document system architecture, input data, and intended clinical use.
  • Clinical evaluation reports: Demonstrate safety, efficacy, and alignment with medical standards.
  • Performance monitoring logs: Track outcomes, errors, and deviations under real-world use (a minimal logging sketch follows this list).
  • Change logs and version control: Maintain detailed records of model updates, retraining, and adaptive behaviors.
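
One lightweight way to keep monitoring and change logs auditable is an append-only JSON Lines file; the sketch below assumes that convention, and the file name and field names are illustrative.

    # Append-only JSON Lines audit log for model outcomes and version changes.
    # LOG_PATH and the field names are illustrative assumptions.
    import json
    from datetime import datetime, timezone

    LOG_PATH = "model_audit_log.jsonl"

    def append_log(event_type: str, model_version: str, details: dict) -> None:
        """Append one immutable entry; existing entries are never rewritten."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event_type": event_type,        # e.g., "prediction_outcome", "retraining"
            "model_version": model_version,
            "details": details,
        }
        with open(LOG_PATH, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

    append_log("prediction_outcome", "2.0.3",
               {"case_id": "C-1042", "prediction": 1, "clinician_override": False})
    append_log("retraining", "2.1.0",
               {"reason": "quarterly update", "validation_report": "VR-2026-01"})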

 

Risks and challenges

  • Regulatory complexity: Approval processes can be lengthy for high-risk AI systems.
  • Adaptive AI uncertainty: Existing frameworks struggle with continuously learning models.
  • Cross-functional demands: Requires collaboration between clinicians, data scientists, and regulatory teams.
  • Privacy and cybersecurity: Ensuring HIPAA compliance and robust security in real-world deployments.

 

Best practices

  • Engage early with the FDA via the Pre-Submission (Pre-Sub) process to clarify expectations.
  • Use explainability tools for clinical decision-support AI to aid clinician trust and accountability (a generic example follows this list).
  • Conduct pilot deployments in controlled environments before broad rollout.
  • Document all design, validation, and post-market activities to align with GMLP principles.
  • Integrate cybersecurity and HIPAA-compliant privacy controls into system design and monitoring.
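
For the explainability point above, here is a minimal sketch using permutation importance from scikit-learn on a synthetic dataset. This is one generic, model-agnostic technique rather than an FDA-endorsed method, and the feature names are invented for the example.

    # Explainability sketch: permutation importance on a synthetic dataset.
    # Generic, model-agnostic technique; feature names are invented for illustration.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=6, random_state=0)
    feature_names = ["age", "heart_rate", "bp_systolic", "spo2", "lab_a", "lab_b"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature and measure how much held-out accuracy drops.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for name, score in sorted(zip(feature_names, result.importances_mean),
                              key=lambda pair: pair[1], reverse=True):
        print(f"{name}: {score:.3f}")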

 

Future developments

  • Development of a framework for adaptive and continuously learning AI systems.
  • Draft guidance on predefined change control plans to streamline updates and retraining.
  • Greater alignment with international standards such as ISO 13485 (medical devices) and ISO/IEC 42001 (AI governance).
  • Increasing focus on bias mitigation, transparency, and human oversight for AI in clinical use.

 

Alignment with other standards

  • ISO 13485: Quality management systems for medical devices, required for many FDA submissions.
  • ISO/IEC 42001: AI lifecycle governance that complements FDA’s risk-based approach.
  • HIPAA: Protects privacy and security of PHI in AI systems.
  • FDA SaMD Guidance: Broader regulatory guidance for software-based medical devices.
  • NIST AI RMF: Provides a voluntary risk management framework that complements FDA requirements.

 

References and official sources

Artificial Intelligence and Machine Learning Software as a Medical Device (SaMD): https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device

Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations (Draft): https://www.fda.gov/regulatory-information/search-fda-guidance-documents/artificial-intelligence-enabled-device-software-functions-lifecycle-management-and-marketing

Good Machine Learning Practice for Medical Device Development: Guiding Principles: https://www.fda.gov/medical-devices/software-medical-device-samd/good-machine-learning-practice-medical-device-development-guiding-principles

Artificial Intelligence-Enabled Medical Devices: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-enable