AI in U.S. Healthcare Law: FDA Regulation, Liability, and Patient Rights
Artificial intelligence tools embedded in clinical workflows, diagnostic imaging, drug discovery, and patient record systems now operate under a layered regulatory structure that spans the U.S. Food and Drug Administration, the Department of Health and Human Services, and state tort law. This page maps the regulatory classifications, liability frameworks, and patient rights protections that apply when AI systems influence medical decisions in the United States. The stakes are high: a misclassified device or an unresolved liability gap can expose patients to harm and providers to legal jeopardy without clear recourse. Understanding these frameworks is foundational to any analysis of AI liability in torts under U.S. law.
- Definition and scope
- Core mechanics or structure
- Causal relationships or drivers
- Classification boundaries
- Tradeoffs and tensions
- Common misconceptions
- Checklist or steps
- Reference table or matrix
Definition and scope
AI in U.S. healthcare law refers to the body of federal and state legal obligations governing the development, clearance, deployment, and use of artificial intelligence and machine learning (AI/ML) systems that affect clinical care, administrative decisions, and patient data. The scope encompasses Software as a Medical Device (SaMD), clinical decision support (CDS) tools, AI-driven diagnostic imaging platforms, predictive risk stratification algorithms, and AI systems processing protected health information under the Health Insurance Portability and Accountability Act of 1996 (HIPAA, 45 CFR Parts 160 and 164).
The FDA defines Software as a Medical Device in alignment with the International Medical Device Regulators Forum (IMDRF) framework as "software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device" (FDA SaMD Guidance, 2023). That definition is the threshold determination for whether an AI health tool enters the FDA premarket review pathway.
The practical scope of healthcare AI law in the United States is broad. As of the FDA's October 2023 update to its public list of AI/ML-enabled medical devices, the agency had authorized more than 690 such devices, with the preponderance concentrated in radiology and cardiovascular imaging. These authorizations do not exhaust the legal landscape: algorithms used for insurance coverage determinations, prior authorization, and hospital resource allocation may fall outside FDA jurisdiction but remain subject to HHS anti-discrimination rules under Section 1557 of the Affordable Care Act (42 U.S.C. § 18116).
Core mechanics or structure
FDA Premarket Pathways
Three primary FDA pathways govern AI medical devices:
510(k) Substantial Equivalence is the most common route for lower-risk AI/ML SaMD. A manufacturer demonstrates that the device is substantially equivalent to a legally marketed predicate device. The majority of cleared AI radiology tools have used this pathway (FDA 510(k) Database).
De Novo Classification applies when no predicate exists and the device presents low-to-moderate risk. The FDA has used De Novo to establish novel classifications for AI tools, including the first AI-based diabetic retinopathy screening system, IDx-DR, authorized in 2018 (FDA De Novo Authorization DEN180001).
Premarket Approval (PMA) is reserved for Class III high-risk devices and requires the most rigorous clinical evidence, including valid scientific evidence providing reasonable assurance of safety and effectiveness under 21 C.F.R. Part 814.
Clinical Decision Support Carve-Out
The 21st Century Cures Act (Pub. L. 114-255, enacted 2016) carved out certain CDS software from FDA device regulation. Under 21 U.S.C. § 360j(o)(1)(E), non-device CDS must meet four criteria: it must not acquire, process, or analyze a medical image or a signal from an in vitro diagnostic or signal acquisition system; it must be intended to display, analyze, or print medical information; it must be intended to support or provide recommendations to a health care professional; and it must enable that professional to independently review the basis for the recommendations rather than rely primarily on them. AI tools that fail any of these criteria remain subject to FDA oversight (FDA CDS Guidance, September 2022).
HIPAA and Algorithmic Data Processing
AI systems that process protected health information (PHI) must operate within HIPAA's Privacy Rule and Security Rule. The Privacy Rule governs permissible uses and disclosures of PHI; the Security Rule requires administrative, physical, and technical safeguards for electronic PHI (ePHI) under 45 C.F.R. §§ 164.302–164.318. Training an AI model on de-identified patient data requires compliance with the de-identification standard at 45 C.F.R. § 164.514(b), which specifies either expert determination or the Safe Harbor method.
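To make the Safe Harbor mechanics concrete, the sketch below screens a single record represented as a Python dict. It is a minimal illustration, not a compliance tool: the field list is an abbreviated subset of the 18 identifier categories enumerated at 45 C.F.R. § 164.514(b)(2), all field names are hypothetical, and actual Safe Harbor de-identification must address every category plus the covered entity's lack of actual knowledge that the residual data could identify an individual.

```python
# Illustrative Safe Harbor screening sketch; field names are hypothetical.
# The set below is an abbreviated subset of the 18 identifier categories in
# 45 C.F.R. § 164.514(b)(2); real de-identification must address all 18,
# plus ZIP-code truncation rules and ages over 89.
SAFE_HARBOR_FIELDS = {
    "name", "street_address", "city", "zip_code", "phone", "fax", "email",
    "ssn", "medical_record_number", "health_plan_id", "account_number",
    "device_serial_number", "url", "ip_address", "biometric_id", "photo",
}

def strip_safe_harbor_identifiers(record: dict) -> dict:
    """Return a copy of the record with enumerated identifiers dropped and
    dates generalized to year only, per the Safe Harbor date rule."""
    cleaned = {}
    for field, value in record.items():
        if field in SAFE_HARBOR_FIELDS:
            continue  # drop direct identifiers outright
        if field.endswith("_date") and isinstance(value, str):
            cleaned[field] = value[:4]  # keep year only (assumes ISO dates)
        else:
            cleaned[field] = value
    return cleaned

record = {"name": "Jane Doe", "zip_code": "02139",
          "admission_date": "2023-04-17", "diagnosis_code": "E11.9"}
print(strip_safe_harbor_identifiers(record))
# {'admission_date': '2023', 'diagnosis_code': 'E11.9'}
```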
Causal relationships or drivers
Three intersecting pressures have accelerated the legal complexity of healthcare AI:
Regulatory gap between device and non-device AI. The 21st Century Cures Act's CDS carve-out was designed for simple reference tools, not for large-scale predictive algorithms. Vendors engineered products to satisfy the four-factor test without independent clinical validation, creating unreviewed tools operating in high-stakes settings. The FDA's 2022 CDS Guidance attempted to clarify boundaries but acknowledged residual ambiguity for AI tools that partially meet the criteria.
Adaptive AI and post-market drift. Unlike static software, AI/ML models may change predictions as they train on new data. The FDA's proposed framework for "predetermined change control plans" (PCCPs) in its 2021 AI/ML Action Plan identifies this as a central challenge: a device cleared at one performance specification may degrade or shift outside that specification after deployment without triggering a new premarket submission. This connects directly to the broader challenge of AI regulatory frameworks in the United States.
Liability ambiguity under malpractice doctrine. Medical malpractice under state law requires proof that a defendant owed a duty, breached the standard of care, and caused injury. When an AI system generates a recommendation that a clinician follows and harm results, courts must resolve whether liability attaches to the clinician, the hospital, the device manufacturer, or all three. No federal statute currently allocates that liability. Product liability doctrine under the Restatement (Third) of Torts: Products Liability may apply to device manufacturers, but software defect claims remain contested across circuits.
Classification boundaries
FDA risk classification for AI/ML medical devices follows the three-class structure implemented in 21 C.F.R. Part 860:
- Class I (General Controls): Low-risk devices subject only to general controls. Most administrative AI tools, if subject to FDA jurisdiction at all, would fall here. Examples include scheduling optimization software that does not influence clinical decisions.
- Class II (Special Controls + 510(k)): Moderate-risk devices requiring premarket notification. AI-assisted radiology CAD (computer-aided detection) tools and many cardiovascular risk algorithms are Class II.
- Class III (PMA): High-risk devices with no predicate and significant injury potential. Autonomous diagnostic AI with no clinician review loop may qualify as Class III.
Outside FDA classification: AI systems used purely for billing, coding, and administrative prior authorization are generally not medical devices under 21 U.S.C. § 321(h). However, these systems may be subject to HHS Section 1557 non-discrimination requirements, CMS conditions of participation, or state insurance regulation when they systematically deny coverage.
The "locked vs. adaptive" distinction is a secondary classification axis the FDA uses in its SaMD framework. Locked algorithms produce the same output for a given input and change only through a software update triggering a new submission. Adaptive algorithms modify behavior without such an update, requiring a PCCP or separate regulatory strategy.
Tradeoffs and tensions
Innovation speed versus evidence standards. The 510(k) pathway requires substantial equivalence, not superiority. A new AI diagnostic tool may be cleared on the basis that it resembles a predicate from a different technological era without demonstrating improved patient outcomes. Commentators at the Brookings Institution and the American Medical Association have argued that this creates a lower evidentiary bar for AI devices than for pharmaceuticals.
Transparency versus trade secrets. Clinicians and patients increasingly seek algorithmic transparency — understanding what variables drove a recommendation. Device manufacturers assert that model architecture and training weights constitute trade secrets protected under the Defend Trade Secrets Act (18 U.S.C. § 1836). This tension is unresolved at the federal statutory level and is relevant to the broader discussion of AI and trade secret law.
Autonomy versus oversight. Fully autonomous AI diagnostics, where no clinician reviews the AI output before a care decision is made, raise distinct liability and patient rights questions. The FDA authorized IDx-DR in 2018 as the first autonomous AI diagnostic requiring no clinician review, demonstrating that the pathway exists but remains exceptional.
Anti-discrimination obligations versus algorithmic opacity. Section 1557 of the ACA prohibits discrimination on the basis of race, color, national origin, sex, age, or disability in health programs receiving federal financial assistance. If an AI tool systematically produces inferior recommendations for one demographic group, the using entity may bear Section 1557 liability even without intentional discrimination. HHS's 2024 rule on Section 1557 (published at 89 Fed. Reg. 37,522) explicitly addresses algorithmic discrimination in covered health programs.
Common misconceptions
Misconception 1: FDA clearance means the device is safe for all patient populations.
Clearance means the device met the substantial equivalence or PMA standard at the time of review. Performance may vary across patient subgroups not represented in the training data. The FDA's 2021 action plan explicitly acknowledged the need for transparency about demographic subgroup performance, though subgroup reporting is not currently a mandatory clearance requirement.
Misconception 2: HIPAA prohibits AI training on patient data.
HIPAA does not categorically prohibit using PHI to train AI models. It requires that such use fall within a permissible purpose: a valid authorization; use for treatment, payment, or health care operations; or de-identification under the Safe Harbor or expert determination methods of 45 C.F.R. § 164.514. Improper use constitutes a HIPAA violation subject to civil monetary penalties of up to approximately $1.9 million per violation category per year, as adjusted for inflation (HHS HIPAA Enforcement).
Misconception 3: If a clinician reviewed the AI output, the manufacturer bears no liability.
The learned intermediary doctrine in pharmaceutical law holds that a manufacturer discharges its duty to warn by adequately informing prescribing physicians. Whether this doctrine applies to AI medical devices is unsettled. Courts have not uniformly adopted it for software-based tools, and hospitals may face independent negligence claims for deploying poorly validated AI systems regardless of clinician review.
Misconception 4: The 21st Century Cures Act exempted all clinical decision support from FDA oversight.
The Act created a narrow carve-out requiring satisfaction of all four CDS criteria simultaneously. AI tools that aggregate multiple patient data streams, apply non-transparent scoring, or support decisions a clinician cannot independently verify fall outside the carve-out and remain subject to FDA jurisdiction. The FDA's September 2022 CDS guidance provides worked examples of tools that do and do not qualify.
Checklist or steps
The following sequence describes the regulatory determination framework applicable to an AI system intended for clinical use in the United States. This is a descriptive map of the process, not legal or regulatory advice.
Step 1 — Determine intended use.
Identify whether the software is intended to diagnose, treat, cure, mitigate, or prevent a disease or condition. If yes, the software meets the statutory definition of a device under 21 U.S.C. § 321(h) and FDA jurisdiction is presumptively applicable.
Step 2 — Apply the 21st Century Cures Act CDS four-factor test.
Evaluate whether the software: (a) does not acquire, process, or analyze medical images or in vitro diagnostic signals; (b) is intended to display, analyze, or print medical information; (c) is intended to support or provide recommendations to a health care professional; and (d) enables that professional to independently review the basis for the recommendations. Failure on any factor leaves the tool subject to FDA oversight; a minimal sketch of this screen appears below.
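The sketch renders the four-factor screen as a boolean check. The flag names paraphrase 21 U.S.C. § 360j(o)(1)(E) and are hypothetical; the real determination turns on intended use and the worked examples in the FDA's 2022 guidance, not on a software test.

```python
from dataclasses import dataclass

@dataclass
class CdsProfile:
    """Paraphrased four-factor profile; names are hypothetical, and the legal
    test under 21 U.S.C. § 360j(o)(1)(E) turns on intended use, not flags."""
    analyzes_images_or_signals: bool      # factor (a): images / IVD signals
    displays_medical_information: bool    # factor (b)
    recommends_to_clinician: bool         # factor (c): HCP-facing
    basis_independently_reviewable: bool  # factor (d)

def is_non_device_cds(p: CdsProfile) -> bool:
    """All four factors must hold simultaneously; failing any one leaves
    the software within FDA device jurisdiction."""
    return (not p.analyzes_images_or_signals
            and p.displays_medical_information
            and p.recommends_to_clinician
            and p.basis_independently_reviewable)

# An opaque risk score whose basis clinicians cannot independently verify
# fails factor (d) and stays within FDA oversight.
opaque_score = CdsProfile(False, True, True, False)
print(is_non_device_cds(opaque_score))  # False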
Step 3 — Identify the FDA device class.
Using the FDA's product classification database (accessdata.fda.gov), identify the applicable device classification regulation and risk class. Determine whether a predicate exists for 510(k) or whether De Novo or PMA is required.
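The classification database is also exposed through the public openFDA API. The sketch below queries the device classification endpoint documented at open.fda.gov; the search fields shown are openFDA's, but the helper itself is hypothetical and its results are informational, not a substitute for reading the classification regulation.

```python
import json
import urllib.parse
import urllib.request

def lookup_classification(device_name: str, limit: int = 3) -> list:
    """Query openFDA's device classification endpoint for candidate
    classification regulations matching a device name."""
    query = urllib.parse.urlencode(
        {"search": f'device_name:"{device_name}"', "limit": limit},
        quote_via=urllib.parse.quote,
    )
    url = f"https://api.fda.gov/device/classification.json?{query}"
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    return [
        {
            "device_name": r.get("device_name"),
            "device_class": r.get("device_class"),  # "1", "2", or "3"
            "regulation_number": r.get("regulation_number"),
            "product_code": r.get("product_code"),
        }
        for r in payload.get("results", [])
    ]

for row in lookup_classification("computer assisted detection"):
    print(row)
```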
Step 4 — Assess locked versus adaptive algorithm status.
Determine whether the model updates its behavior without a discrete software update. Adaptive models generally require a Predetermined Change Control Plan or an equivalent regulatory strategy agreed with the FDA before deployment.
Step 5 — Evaluate HIPAA compliance for training and inference data.
Confirm that any PHI used during development and deployment has a valid legal basis under HIPAA's Privacy Rule and that ePHI at inference is protected under the Security Rule.
Step 6 — Assess Section 1557 non-discrimination obligations.
If the deploying entity receives federal financial assistance, evaluate whether the AI system's outputs produce disparate impacts along protected characteristics. Document validation data by demographic subgroup.
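One way to document subgroup validation is a per-group performance table with a disparity flag, as in the sketch below. The 0.8 ratio is borrowed from the EEOC "four-fifths" rule of thumb purely as an illustrative screen; Section 1557 imposes no such numeric test, and all names here are hypothetical.

```python
from collections import defaultdict

def subgroup_sensitivity(records):
    """records: iterable of (group, y_true, y_pred) tuples with binary labels.
    Returns per-group sensitivity (true positive rate)."""
    tp, fn = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        if y_true == 1:
            (tp if y_pred == 1 else fn)[group] += 1
    return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}

def flag_disparity(rates, threshold=0.8):
    """Flag groups whose rate falls below `threshold` times the best group's
    rate: an illustrative screen, not a Section 1557 legal test."""
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

validation = [("A", 1, 1), ("A", 1, 1), ("A", 1, 0),
              ("B", 1, 1), ("B", 1, 0), ("B", 1, 0)]
rates = subgroup_sensitivity(validation)
print(rates)                  # roughly {'A': 0.667, 'B': 0.333}
print(flag_disparity(rates))  # roughly {'B': 0.333}
```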
Step 7 — Review state law obligations.
Identify applicable state medical device laws, informed consent requirements, and any state-enacted algorithmic accountability statutes. The FDA coordinates with state authorities, but whether federal law preempts state tort claims involving AI devices remains actively litigated.
Step 8 — Establish post-market monitoring.
Implement a Quality Management System and adverse event reporting protocol consistent with 21 C.F.R. Part 803 (Medical Device Reporting). Adaptive AI models require ongoing performance monitoring against the PCCP benchmarks.
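A minimal monitoring sketch under stated assumptions: periodic batches of adjudicated cases arrive, and each batch's sensitivity is compared against a specification carried over from the premarket submission. The benchmark and tolerance values are illustrative placeholders for whatever the device's PCCP actually specifies; a breach should route into the QMS investigation and MDR evaluation workflow.

```python
CLEARED_SENSITIVITY = 0.87  # illustrative spec from the premarket submission
ALERT_MARGIN = 0.05         # illustrative tolerance from the PCCP

def batch_sensitivity(labels, predictions):
    """Sensitivity on one monitoring batch of adjudicated binary cases."""
    positives = [(t, p) for t, p in zip(labels, predictions) if t == 1]
    if not positives:
        return None
    return sum(p for _, p in positives) / len(positives)

def check_batch(labels, predictions):
    """Compare a batch against the cleared spec; a breach should trigger the
    QMS investigation / MDR evaluation workflow, not a silent retrain."""
    s = batch_sensitivity(labels, predictions)
    if s is None:
        return "OK: no adjudicated positives in this batch"
    if s < CLEARED_SENSITIVITY - ALERT_MARGIN:
        return f"ALERT: sensitivity {s:.2f} below cleared spec {CLEARED_SENSITIVITY}"
    return f"OK: sensitivity {s:.2f}"

print(check_batch([1, 1, 1, 0, 1], [1, 0, 1, 0, 1]))  # 0.75 -> ALERT
```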
Reference table or matrix
AI Healthcare Tool Regulatory Classification Matrix
| Tool Type | FDA Jurisdiction | Typical Pathway | HIPAA Applicability | Section 1557 Risk |
|---|---|---|---|---|
| Autonomous diagnostic AI (e.g., retinal screening) | Yes — SaMD | De Novo or PMA | Yes (ePHI at inference) | High if federally funded |
| AI-assisted radiology CAD | Yes — SaMD | 510(k) | Yes | Moderate |
| Adaptive clinical risk scoring (model changes without a discrete software update) | Yes — SaMD with PCCP required | 510(k) + PCCP | Yes | High |
| CDS tool meeting all 4 Cures Act criteria | No — statutory carve-out | N/A | Yes (if PHI used) | Moderate |
| Prior authorization AI (insurance, administrative) | No — not a device | N/A | Depends on data use | High |
| Administrative scheduling/operations AI | No | N/A | Limited | Low |
| AI trained on de-identified data (Safe Harbor) | Depends on intended use | Variable | No (if properly de-identified) | Moderate |
References
- U.S. Food and Drug Administration – Digital Health Center of Excellence (Software as a Medical Device)
- U.S. Food and Drug Administration – Artificial Intelligence and Machine Learning in Software as a Medical Device
- U.S. Food and Drug Administration – Clinical Decision Support Software Guidance
- Electronic Code of Federal Regulations – 45 CFR Part 164 (HIPAA Security and Privacy Standards)
- Electronic Code of Federal Regulations – 45 CFR Part 160 (HIPAA General Administrative Requirements)
- U.S. Department of Health and Human Services – Health Insurance Portability and Accountability Act (HIPAA)
- U.S. Department of Health and Human Services – HHS Artificial Intelligence Strategy
- U.S. Department of Health and Human Services Office for Civil Rights – HIPAA Enforcement
- National Institute of Standards and Technology – AI Risk Management Framework (AI RMF 1.0)
- NIST Computer Security Resource Center – AI and Machine Learning Security
- Federal Trade Commission – Protecting Privacy and Security in Health Technology
- U.S. House of Representatives – 21st Century Cures Act (Public Law 114-255)