Validating Clinical AI: What FDA SaMD Guidance Means for Your Product
The Regulatory Landscape Has Changed
For most of the history of medical device regulation, software was either a component of a hardware device or a standalone tool with predictable, static behavior. The FDA's regulatory frameworks were designed for devices that do what they're programmed to do, consistently, without changing over time.
AI/ML-based medical software doesn't fit this model. Adaptive algorithms change their behavior as they encounter new data. The model that was validated before clearance may behave differently six months post-market as it learns from real-world patient populations.
The FDA's Action Plan for AI/ML-Based Software as a Medical Device, published in 2021 and updated since, represents an attempt to create a regulatory framework for software that changes over time. Understanding this framework is essential before designing your clinical AI system — not after.
Is Your Product SaMD?
Software as a Medical Device is defined by the IMDRF (International Medical Device Regulators Forum) as software intended to be used for medical purposes that performs these purposes without being part of a hardware medical device.
The key determinant is intended use. A dermatology AI that analyzes photos to diagnose melanoma is SaMD. An administrative tool that schedules patient appointments is not. A clinical decision support tool that provides information to a clinician who independently makes the final decision occupies an ambiguous middle ground, and the FDA has issued guidance on where that line falls.
If you're building in the gray zone, get FDA feedback early. The Pre-Submission (Q-Sub) program allows you to describe your intended device and ask the FDA to characterize its regulatory status before you invest heavily in development.
The Software Development Lifecycle Requirements
FDA-regulated medical software must follow a documented software development lifecycle that demonstrates:
- Requirements are defined before development begins
- Design decisions are documented and reviewed
- Code is tested against requirements
- Changes are managed with version control and impact assessment
- Defects are tracked and resolved
For clinical AI specifically, the development lifecycle documentation must also include:
- Training data description: source, population, size, labeling methodology, and quality controls
- Algorithm selection rationale: why this model architecture was chosen
- Performance testing methodology: what metrics were evaluated and how the test set was constructed
- Failure mode analysis: what happens when the model is wrong and how the device handles it
- Human factors: how the AI output is presented to clinicians to support appropriate reliance
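One practical way to keep these documentation items from drifting out of date is to version them alongside the model artifact itself. The sketch below is illustrative only: the class and field names (`LifecycleRecord`, `TrainingDataDescription`, and so on) are our own shorthand, not an FDA-mandated schema.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class TrainingDataDescription:
    # Mirrors the first bullet above: source, population, size,
    # labeling methodology, and quality controls
    source: str
    population: str
    size: int
    labeling_methodology: str
    quality_controls: list

@dataclass
class LifecycleRecord:
    """Illustrative container for AI/ML lifecycle documentation.

    Field names are hypothetical, chosen to match the bullets above.
    """
    model_version: str
    training_data: TrainingDataDescription
    algorithm_rationale: str
    performance_metrics: dict   # metric name -> value on the locked test set
    failure_mode_analysis: str
    human_factors_summary: str

    def to_json(self) -> str:
        # Serialize so the record can be committed next to the model weights
        return json.dumps(asdict(self), indent=2)
```

Committing a record like this with every model version means the submission evidence accrues as a side effect of development rather than a retrospective exercise.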
The Predetermined Change Control Plan
The PCCP is one of the most practically significant developments in FDA AI policy. Traditionally, any change to a cleared medical device that could significantly affect safety or effectiveness requires a new 510(k) submission — a process that takes months. For AI/ML devices that learn from post-market data, this created a fundamental tension: the devices could improve with real-world data, but regulatory requirements effectively prevented them from doing so.
The PCCP framework allows a manufacturer to pre-specify the types of changes they plan to make (algorithm updates, expansion to new patient populations, changes to intended use) and the performance testing they will conduct to validate each change type. If the device performs within the pre-specified bounds on the pre-specified tests, the change can be implemented without a new submission.
The practical implication: if you're building a clinical AI product that will benefit from continuous learning, you should design your PCCP into the product architecture from the beginning — not add it as an afterthought before submission.
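As a concrete illustration of what "performing within pre-specified bounds" can look like in a release pipeline, consider an automated gate that compares a candidate model's validation metrics against thresholds frozen in the PCCP. The metric names and threshold values below are invented for illustration; a real PCCP specifies its own metrics, test sets, and bounds.

```python
# Hypothetical PCCP release gate: a candidate model update is deployable
# only if every pre-specified metric meets its pre-specified bound.
# Metric names and thresholds are illustrative, not from any real PCCP.

PCCP_BOUNDS = {
    "sensitivity": 0.90,   # each value is a floor on the locked validation set
    "specificity": 0.85,
    "auroc": 0.92,
}

def passes_pccp_gate(candidate_metrics: dict) -> tuple:
    """Return (ok, failures) for a candidate model's validation metrics.

    A missing metric counts as a failure: the PCCP requires every
    pre-specified test to be run, not just the ones that pass.
    """
    failures = [
        name for name, bound in PCCP_BOUNDS.items()
        if candidate_metrics.get(name, 0.0) < bound
    ]
    return (not failures, failures)

# A failing gate blocks automated deployment and routes the update
# to manual review instead of silently shipping it.
```

Baking the gate into the deployment pipeline is what makes the PCCP an engineering artifact rather than a document: the bounds in the submission and the bounds in the release check are the same source of truth.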
Building for Regulatory Success
The teams that navigate FDA SaMD review successfully share a common characteristic: they treat regulatory compliance as an engineering requirement, not a documentation exercise. The model architecture, training pipeline, evaluation framework, and post-market monitoring system are all designed from the start with regulatory documentation in mind.
This means more upfront work. It also means that when the 510(k) submission is being prepared, the evidence package largely writes itself from the development documentation that already exists — rather than requiring a retrospective reconstruction of decisions that were made informally months earlier.
Frequently Asked Questions
Does my clinical AI product need FDA clearance?
It depends on intended use. If your software is intended to diagnose, treat, mitigate, cure, or prevent a disease or condition, it is likely a Software as a Medical Device (SaMD) subject to FDA oversight. Tools that inform clinical decision-making without directing it may qualify as non-device clinical decision support (CDS) exempt from that oversight. The FDA's Digital Health Center of Excellence has guidance documents and pre-submission meetings available to help determine your regulatory pathway before significant engineering investment.
What is the FDA's Predetermined Change Control Plan for AI/ML medical devices?
The FDA's AI/ML-based SaMD Action Plan proposes a Predetermined Change Control Plan (PCCP) framework that allows AI/ML medical devices to be updated post-market without requiring a new 510(k) submission for each update — provided the types of changes and the performance testing required to validate them are pre-specified in the original submission. This enables continuous learning models to improve post-market while maintaining regulatory compliance, a significant advance over traditional static software review frameworks.
How do you handle training data that includes protected health information (PHI)?
Clinical AI model training requires either fully de-identified data (safe harbor de-identification under HIPAA, which removes 18 specific categories of identifiers) or a business associate agreement (BAA) in place with all parties who access PHI during training. De-identification is preferred but not always achievable, because some clinical signals are correlated with identity. For models requiring identified training data, we implement technical safeguards (encryption at rest and in transit, access controls, audit logging), organizational controls (training, access minimization), and document the approach for your compliance team and any regulatory submissions.
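To make the safe-harbor approach concrete, a de-identification step typically strips the HIPAA identifier categories from each record before the data enters the training pipeline. The sketch below covers only a few of the 18 categories and uses hypothetical field names; a real pipeline must handle all 18, including identifiers buried in free text.

```python
# Illustrative safe-harbor scrub: removes a subset of the 18 HIPAA
# identifier categories from a structured patient record. Field names
# are hypothetical; coverage here is deliberately partial.

SAFE_HARBOR_FIELDS = {
    "name", "street_address", "phone", "email", "ssn",
    "medical_record_number", "full_face_photo",
}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    clean = {k: v for k, v in record.items() if k not in SAFE_HARBOR_FIELDS}
    # Safe harbor also requires generalizing dates to the year alone
    # (and aggregating ages over 89 into a single 90+ category)
    if "date_of_birth" in clean:
        clean["birth_year"] = clean.pop("date_of_birth")[:4]
    return clean
```

Note that structured-field scrubbing is the easy part; free-text clinical notes are where de-identification efforts usually succeed or fail, and where "some clinical signals are correlated with identity" bites hardest.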
