AI Liability Under U.S. Tort Law: Products Liability and Negligence Frameworks

U.S. tort law has not yet produced a unified statutory framework for artificial intelligence liability, leaving courts and regulators to apply products liability doctrine, negligence principles, and agency theories developed for earlier technologies to AI systems that embed autonomous decision-making, probabilistic outputs, and opaque internal logic. This page examines how those existing frameworks apply — and strain — under AI-specific fact patterns, covering the operative legal tests, the classification boundaries that determine which theory applies, and the contested zones where doctrine has not stabilized. The analysis draws on published Restatement provisions, Federal Trade Commission enforcement records, and academic commentary recognized by U.S. courts as persuasive authority.



Definition and Scope

AI liability under U.S. tort law refers to the legal mechanisms by which injured parties may hold developers, deployers, or operators of artificial intelligence systems responsible for harm caused by those systems' outputs or actions. The scope spans physical injury (autonomous vehicle collisions, AI-directed medical device errors), economic injury (algorithmic credit denial, AI-generated defamation), and dignitary harm (biased predictive policing outputs, facial recognition misidentification).

Two primary tort doctrines govern most AI harm claims. Products liability applies when an AI system is characterized as a product and the harm is traced to a manufacturing defect, design defect, or failure to warn. Negligence applies when a party owes a duty of care in the development or deployment of an AI system and breaches that duty in a way that proximately causes harm. A third pathway — strict liability — may attach in products liability contexts without requiring proof of fault, depending on the jurisdiction and the applicable Restatement.

The Restatement (Third) of Torts: Products Liability (American Law Institute, 1998), adopted or cited in the majority of U.S. jurisdictions, distinguishes among manufacturing defects (deviation from intended design), design defects (the product's design is unreasonably dangerous), and inadequate instructions or warnings. Each category maps differently onto AI system characteristics. The broader AI regulatory framework in the United States involves agency-level enforcement actions but does not yet displace common-law tort doctrine as the primary compensation mechanism.


Core Mechanics or Structure

Products Liability Mechanics

Under the Restatement (Third) framework, a plaintiff pursuing an AI products liability claim must establish the four elements below (a schematic sketch of their conjunctive structure follows the list):

  1. That the AI system qualifies as a "product." Courts have generally extended product status to embedded software and hardware-software combinations. Pure software sold as a standalone service raises contested questions about whether the product/service distinction bars strict liability. The Restatement (Second) of Torts § 402A, which predates the Third Restatement, covers "products" without addressing software directly, creating a split of authority.

  2. A cognizable defect category. For AI, design defect claims predominate. The risk-utility test, used in most jurisdictions following the Third Restatement, asks whether the foreseeable risks of the design could have been reduced by a reasonable alternative design at acceptable cost. The consumer expectations test, retained in a minority of jurisdictions, asks whether the product performed below ordinary consumer expectations — a standard that becomes difficult to apply when consumer expectations of AI performance are themselves undefined.

  3. Causation. The plaintiff must show that the defect caused the harm, a requirement that grows technically complex when the AI output reflects probabilistic inference rather than deterministic logic. Actual cause (but-for causation) and proximate cause both apply.

  4. Damages. Standard tort categories apply: personal injury, property damage, and in some jurisdictions, pure economic loss — though the economic loss rule bars purely economic tort claims in products contexts in most states.
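
Because these elements are conjunctive, failure at any one step defeats the claim. The sketch below makes that structure explicit; it is a minimal illustration, and the class and field names are placeholders rather than terms of art from the Restatement or any case law.

```python
# Minimal sketch of the conjunctive element screen described above.
# All names here are illustrative, not drawn from any statute or case.
from dataclasses import dataclass

@dataclass
class ProductsLiabilityClaim:
    is_product: bool                 # element 1: AI system qualifies as a "product"
    defect_category: str | None      # element 2: "manufacturing", "design", or "warning"
    defect_caused_harm: bool         # element 3: but-for and proximate causation
    cognizable_damages: bool         # element 4: personal injury, property damage, etc.

def survives_element_screen(claim: ProductsLiabilityClaim) -> bool:
    """Return True only if every element is satisfied; any single failure defeats the claim."""
    return (
        claim.is_product
        and claim.defect_category in {"manufacturing", "design", "warning"}
        and claim.defect_caused_harm
        and claim.cognizable_damages
    )

# Example: a standalone SaaS model with contested product status fails at element 1.
saas_claim = ProductsLiabilityClaim(
    is_product=False, defect_category="design",
    defect_caused_harm=True, cognizable_damages=True,
)
print(survives_element_screen(saas_claim))  # False
```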

Negligence Mechanics

A negligence claim against an AI developer or deployer requires establishing: (1) duty, (2) breach, (3) causation, and (4) damages. The key structural questions for AI are:

  1. Duty. Whether a developer's duty of care extends to downstream users and third parties with whom it has no contractual relationship, given the distributed supply chains described below.

  2. Breach. What benchmark defines reasonable care for a novel system; guidance documents such as the NIST AI RMF 1.0 are increasingly offered as evidence of the standard, as discussed under Tradeoffs and Tensions.

  3. Causation. Whether model opacity and human intermediation between output and injury defeat but-for or proximate causation, the subject of the next section.

  4. Damages. Whether the harm falls within a compensable category, subject to the economic loss rule noted above.


Causal Relationships or Drivers

Several technical and structural characteristics of AI systems create causation challenges that drive doctrinal difficulty:

Opacity of inference chains. Deep learning models derive outputs through weight matrices involving billions of parameters. When a model produces a harmful recommendation — a misdiagnosis, a biased loan denial — the internal reasoning path is not human-readable. This creates a "black box" problem for but-for causation: plaintiffs cannot always isolate which feature of the model's design caused the specific harmful output. Courts have not yet settled on whether probabilistic causation evidence (statistical correlation between model outputs and protected characteristics, for example) satisfies but-for standards.
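
As one illustration of what such probabilistic evidence can look like, the sketch below computes selection rates by group and the four-fifths (80%) disparate impact ratio familiar from employment discrimination practice. The data are hypothetical, and no court has held that this kind of statistic satisfies but-for causation in an AI tort claim.

```python
# Hypothetical sketch of statistical correlation evidence: selection rates
# by group and the four-fifths (80%) disparate impact rule of thumb.
# Illustrative only; not a causation standard endorsed by any court.

def selection_rate(decisions: list[int]) -> float:
    """Fraction of favorable (1) outcomes in a list of 0/1 model decisions."""
    return sum(decisions) / len(decisions)

group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # hypothetical favorable-outcome rate: 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # hypothetical favorable-outcome rate: 0.375

impact_ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"disparate impact ratio: {impact_ratio:.2f}")  # 0.50, below the 0.8 threshold
```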

Distributed development chains. Modern AI products involve foundation model developers, fine-tuning deployers, API intermediaries, and end-product integrators. Each layer may have altered the model's behavior. This mirrors the component parts liability analysis under Restatement (Third) § 5, under which component-part sellers are liable only when the component itself is defective or when the seller participates substantially in the integration. Allocation of liability across AI supply chains remains an open doctrinal question — one that intersects directly with AI legal malpractice risk when law firms use third-party AI tools.
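
Per actor, the § 5 rule summarized above reduces to a disjunctive test, which the following sketch encodes. The supply chain actors and their attributes are hypothetical.

```python
# Sketch of the two-prong component-parts rule under Restatement (Third) § 5,
# as summarized above. Actors and attribute values are hypothetical.
from dataclasses import dataclass

@dataclass
class SupplyChainActor:
    name: str
    component_itself_defective: bool      # prong 1: the component is defective
    substantially_participated: bool      # prong 2: substantial participation in integration

def potentially_liable(actor: SupplyChainActor) -> bool:
    """A component seller faces liability only if either prong is met."""
    return actor.component_itself_defective or actor.substantially_participated

chain = [
    SupplyChainActor("foundation model developer", False, False),
    SupplyChainActor("fine-tuning deployer", False, True),
    SupplyChainActor("end-product integrator", True, True),
]
for actor in chain:
    print(actor.name, "->", potentially_liable(actor))
```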

Post-deployment model drift. Many deployed AI systems retrain on new data continuously or are updated without user notice. A product that was non-defective at time of sale may become defective post-deployment. Traditional products liability attaches defect status at the time of sale; courts applying that rule may find no liability for post-sale drift unless a duty to update or warn is independently established.

Intervening human action. When an AI system outputs a recommendation and a human actor — a physician, a loan officer, a parole board — makes the final decision, defendants frequently argue that the human decision breaks the causal chain as a superseding cause. The strength of that argument depends on whether the human exercised genuine independent judgment or functionally rubber-stamped the AI recommendation. This issue surfaces acutely in AI pretrial detention decisions and similar high-stakes contexts.
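
One form of evidence bearing on this question is how often the human decision-maker actually departed from the AI recommendation. The sketch below computes that override rate on hypothetical data; no fixed threshold distinguishes independent judgment from rubber-stamping in the case law.

```python
# Hypothetical sketch of "rubber-stamp" evidence: the rate at which the human
# decision-maker overrode the AI recommendation. A low override rate may support
# the argument that human review did not break the causal chain.

ai_recommendations = ["detain", "release", "detain", "detain", "release", "detain"]
human_decisions    = ["detain", "release", "detain", "detain", "release", "release"]

overrides = sum(1 for ai, human in zip(ai_recommendations, human_decisions) if ai != human)
override_rate = overrides / len(ai_recommendations)
print(f"override rate: {override_rate:.0%}")  # 17% -> near-total deference to the AI
```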


Classification Boundaries

The applicable liability theory depends on how the AI system and the harm are classified:

| Scenario | Governing Doctrine | Key Classification Question |
| --- | --- | --- |
| AI embedded in a physical product causes injury | Products liability (strict or negligence) | Is the software a component "product"? |
| Standalone AI software subscription causes economic harm | Negligence or warranty; strict liability uncertain | Does the economic loss rule bar tort recovery? |
| AI as a professional service (legal, medical, financial advice) | Professional negligence / malpractice | Does AI output constitute practice of a licensed profession? |
| AI outputs used in criminal justice | Constitutional tort (§ 1983) plus common-law negligence | Is state action present? |
| AI-generated defamation | Defamation / products liability hybrid | Is the developer a "publisher" under 47 U.S.C. § 230? |

Section 230 of the Communications Decency Act (47 U.S.C. § 230) is one of the sharpest classification boundaries: if an AI system is characterized as a publisher or speaker of third-party content, the developer may be immune from tort liability for that content. Courts are actively contesting whether generative AI outputs constitute "information provided by another information content provider" or are independently generated content that falls outside § 230 protection. The Electronic Frontier Foundation maintains public analysis of § 230 scope litigation.


Tradeoffs and Tensions

Strict liability versus negligence for design defects. Strict liability in design defect cases — available under the consumer expectations test but constrained under the risk-utility test — places cost-internalization pressure on developers regardless of fault. Critics argue this deters beneficial AI development. Negligence-based standards allow developers to set their own practices as the benchmark for "reasonableness," which may underprotect plaintiffs facing information asymmetry about model behavior.

Transparency requirements versus trade secret protection. Establishing a design defect often requires access to model architecture, training data, and evaluation results. Developers resist disclosure as trade secret. Partial disclosure regimes, such as those contemplated under Executive Order 14110 on AI (October 2023), do not resolve the litigation discovery question. This tension also shapes AI evidence admissibility disputes.

Developer liability versus deployer liability. Imposing liability upstream on foundation model developers creates broad incentive effects but may be over-inclusive (a general-purpose model is not designed for every harmful downstream use). Imposing liability on deployers creates targeted incentives but may leave under-capitalized deployers unable to pay judgments. Multi-defendant joint and several liability schemes address this in some jurisdictions but create their own allocation complexity.

Speed of AI iteration versus legal standard stability. The reasonable care standard in negligence is applied at the time of the alleged breach. The NIST AI RMF 1.0 and similar guidance documents are updated periodically, meaning the benchmark for reasonable care shifts. Developers face retrospective liability under standards that were not fully articulated when design decisions were made.


Common Misconceptions

Misconception 1: AI systems cannot be "products" under tort law because they are software.
Correction: U.S. courts in the majority of jurisdictions have extended products liability to mass-marketed software, including AI components embedded in physical devices. The FDA's Software as a Medical Device (SaMD) framework reflects regulatory treatment of AI as a regulated product subject to premarket requirements. Product status for standalone software-as-a-service remains unsettled, not categorically denied.

Misconception 2: Section 230 immunizes AI developers from all tort liability.
Correction: Section 230 immunity is conditioned on the defendant acting as a publisher or speaker of third-party content. Generative AI that produces original outputs — rather than hosting user submissions — may fall outside § 230 protection. Courts are actively litigating this boundary as of the early 2020s.

Misconception 3: If a human makes the final decision, the AI developer bears no liability.
Correction: Human intermediation does not automatically sever proximate cause. If the human decision-maker was effectively constrained by the AI output — through automation bias, time pressure, or lack of independent information — courts may find the AI's role remained a substantial factor in causing harm. This analysis is central to AI bias in criminal justice litigation.

Misconception 4: Negligence always requires proving intent.
Correction: Negligence is a fault standard based on objective reasonableness, not on subjective intent. An AI developer can be negligent without any subjective awareness that the system would cause harm, if a reasonable developer in the same position would have identified and mitigated the risk.

Misconception 5: Federal preemption shields AI developers whose products comply with agency guidance.
Correction: Agency guidance documents — including NIST AI RMF 1.0 and FTC policy statements — are not statutes or regulations and do not carry preemptive force. Compliance with voluntary guidance may be relevant to establishing a negligence defense (showing reasonable care) but does not categorically preclude state tort claims.


Checklist or Steps

The following outlines the analytical sequence that courts and litigants apply when assessing an AI tort claim; a schematic sketch consolidating the first two steps follows the list. This is a structural description of legal analysis, not advisory guidance.

Step 1 — Characterize the AI system.
Determine whether the system is embedded hardware-software (more clearly a "product"), standalone software-as-a-service, or a professional advisory service. This determines available theories.

Step 2 — Identify the harm category.
Classify the harm as physical injury, property damage, economic loss, or dignitary/constitutional harm. The economic loss rule may bar tort recovery for purely economic harms in products contexts.

Step 3 — Identify all potentially liable parties.
Map the AI supply chain: foundation model developer, fine-tuning entity, API provider, deployer, and integrator. Each may bear independent or joint liability under applicable state law.

Step 4 — Assess the defect or breach theory.
For products liability: determine whether the claim sounds in manufacturing defect, design defect (risk-utility or consumer expectations test), or failure to warn. For negligence: identify the specific act or omission alleged to fall below the standard of care.

Step 5 — Analyze causation.
Establish but-for and proximate causation. Address the black-box opacity problem and any human intermediation that may constitute a superseding cause.

Step 6 — Check preemption and immunity.
Evaluate applicability of Section 230 (if content generation is involved), federal agency preemption (if a regulated product is involved, e.g., FDA-cleared AI medical device), and any applicable state tort reform caps.

Step 7 — Address damages.
Quantify compensatory damages. Assess availability of punitive damages under the applicable state standard (typically requiring willful, wanton, or reckless conduct).

Step 8 — Evaluate disclosure obligations.
Determine what model documentation, training data records, and internal testing records are subject to discovery. Assess trade secret protections under the Defend Trade Secrets Act (18 U.S.C. § 1836) and state equivalents.
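
Read together, Steps 1 and 2 operate as a coarse classifier over the scenarios in the Classification Boundaries table. The sketch below consolidates them under simplifying assumptions; the category labels are shorthand for this illustration, and actual choice-of-theory analysis is jurisdiction-specific.

```python
# Schematic consolidation of Steps 1-2: system characterization plus harm
# category narrows the available theories. Labels are illustrative shorthand.

def available_theories(system_type: str, harm: str) -> list[str]:
    theories: list[str] = []
    if system_type == "embedded":                  # hardware-software combination
        theories.append("strict products liability")
    theories.append("negligence")                  # generally available across types
    if system_type == "professional_service":
        theories.append("professional negligence")
    if harm == "economic" and system_type != "embedded":
        theories.append("warranty (economic loss rule may bar tort)")
    return theories

print(available_theories("embedded", "physical"))   # products liability + negligence
print(available_theories("saas", "economic"))       # negligence + warranty caveat
```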


Reference Table or Matrix

| Legal Theory | Fault Required? | Plaintiff Must Prove | Key AI-Specific Challenge | Governing Authority |
| --- | --- | --- | --- | --- |
| Strict products liability (design defect, consumer expectations test) | No | Defect; causation; damages | Defining "ordinary consumer expectations" for AI | Restatement (Second) § 402A; minority of jurisdictions |
| Strict products liability (design defect, risk-utility test) | No | Reasonable alternative design; causation; damages | Identifying a feasible alternative AI architecture | Restatement (Third) Products Liability § 2(b) |
| Products liability (manufacturing defect) | No | Deviation from intended design; causation | Proving deviation in probabilistic model output | Restatement (Third) Products Liability § 2(a) |
| Products liability (failure to warn) | No | Inadequate warning; causation | Scope of the disclosure duty for AI behavior | Restatement (Third) Products Liability § 2(c) |
| Negligence | Yes (objective) | Duty; breach of reasonable care; causation; damages | Establishing the standard of care for novel AI risk | Common law; NIST AI RMF 1.0 as benchmark evidence |
| Professional negligence | Yes (professional standard) | Duty; deviation from professional standard; causation | Whether AI advisory output constitutes professional practice | State licensing law; Restatement (Second) Torts § 299A |
| § 1983 constitutional tort | Yes (state action plus constitutional violation) | State action; rights violation; causation | Proving government use and due process deprivation | 42 U.S.C. § 1983; Mathews v. Eldridge, 424 U.S. 319 (1976) |
| Negligence per se | Statutory violation suffices | Statutory violation; causation; plaintiff within the protected class | Identifying an applicable AI-specific regulatory statute | Common-law doctrine; varies by jurisdiction |
