AI Legal Terminology: Key Definitions for U.S. Legal Practice

Artificial intelligence has introduced a distinct vocabulary that intersects with established legal doctrine, regulatory frameworks, and professional conduct rules across U.S. practice. This page defines and scopes the core terms practitioners, courts, and regulators use when analyzing AI systems, their outputs, and their legal consequences. Precise terminology matters because ambiguity in contract drafting, evidentiary arguments, or ethics opinions can produce materially different legal outcomes. The definitions below draw from named federal agencies, standards bodies, and published regulatory guidance.


Definition and Scope

Artificial Intelligence (AI) in a legal context refers to computational systems that perform tasks typically requiring human cognition — including classification, prediction, generation, and decision-making. The National Institute of Standards and Technology (NIST) defines AI in its AI Risk Management Framework (AI RMF 1.0, 2023) as "an engineered or machine-based system that can, for a given set of objectives, make predictions, recommendations, or decisions influencing real or virtual environments."

U.S. law has not yet adopted a single statutory definition of AI applicable across all domains, though the National AI Initiative Act of 2020 (15 U.S.C. § 9401) provides a working federal definition: "a machine-based system that can, for a given set of objectives, make predictions, recommendations, or decisions influencing real or virtual environments." The European Union's AI Act uses a similar but distinct formulation, creating cross-border definitional friction addressed on the International AI Law and U.S. Comparison page.

Key terms and their legal scope:

  1. Algorithm — A finite set of rules or instructions that a system follows to produce an output. In legal proceedings, the word "algorithm" appears most often in due process challenges to algorithmic decision-making and sentencing tools.
  2. Machine Learning (ML) — A subset of AI in which systems improve performance through exposure to data without explicit reprogramming. ML underpins most commercial AI legal research tools and predictive analytics platforms.
  3. Large Language Model (LLM) — A deep-learning model trained on large text corpora to generate or classify language. LLMs are the engine behind AI legal drafting tools and raise specific hallucination risks with legal consequences.
  4. Generative AI — AI capable of producing novel content (text, images, code) rather than merely classifying existing data. Generative AI intersects directly with AI-generated content and copyright law and AI patent inventorship questions.
  5. Autonomous System — A system that acts without real-time human intervention. Autonomy level is central to AI liability and tort analysis.
  6. Explainability / Interpretability — The degree to which a system's internal processes can be understood by a human. Courts and regulators increasingly require explainability, particularly in AI pretrial detention decisions and AI sentencing contexts.
  7. Bias (Algorithmic) — Systematic and repeatable errors in output that correlate with protected characteristics. The FTC, DOJ Civil Rights Division, and CFPB have each published guidance on algorithmic bias. See the AI Bias in Criminal Justice page for sector-specific treatment; a minimal sketch of one common bias metric appears after this list.
  8. Training Data — The dataset used to develop an AI model's parameters. Legal disputes over training data implicate copyright, trade secret, and privacy law (see AI Trade Secret Law).
  9. Hallucination — An AI output that is factually incorrect but presented with apparent confidence. In legal practice, hallucinated citations have resulted in sanctions under Fed. R. Civ. P. 11 in federal courts.
  10. Foundation Model — A large AI model trained on broad data and adaptable to many downstream tasks. The term appears in the Executive Order on Safe, Secure, and Trustworthy AI (E.O. 14110, October 2023).
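
To illustrate how "bias (algorithmic)" is commonly quantified, the sketch below computes selection rates by group and the ratio each bears to the highest-rate group, the comparison behind the EEOC's four-fifths rule of thumb. This is a minimal Python sketch; the function names and sample data are hypothetical and not drawn from any statute or agency guidance.

    from collections import Counter

    def selection_rates(outcomes):
        """Compute per-group selection rates from (group, selected) pairs."""
        totals, selected = Counter(), Counter()
        for group, was_selected in outcomes:
            totals[group] += 1
            if was_selected:
                selected[group] += 1
        return {g: selected[g] / totals[g] for g in totals}

    def adverse_impact_ratios(outcomes):
        """Ratio of each group's selection rate to the highest group's rate.

        Under the EEOC's four-fifths rule of thumb, a ratio below 0.8 is
        commonly treated as preliminary evidence of adverse impact.
        """
        rates = selection_rates(outcomes)
        top = max(rates.values())
        return {g: rate / top for g, rate in rates.items()}

    # Hypothetical screening outcomes from an automated hiring tool.
    sample = [("A", True)] * 40 + [("A", False)] * 60 + \
             [("B", True)] * 25 + [("B", False)] * 75
    print(adverse_impact_ratios(sample))  # {'A': 1.0, 'B': 0.625} -> flags group B

The numbers here are fabricated for illustration; the point is that the legal term "bias" often resolves, in practice, to a simple rate comparison of this kind.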

How It Works

Legal terminology in the AI space functions across three distinct regulatory layers, each with its own definitional authority:

Layer 1 — Federal Statutory and Executive Definitions
Congress and the Executive Branch set baseline definitions through statutes and executive orders. The National AI Initiative Act of 2020 and E.O. 14110 are the primary federal definitional instruments as of 2023–2024. Agencies including the FTC, HHS, SEC, and CFPB then publish sector-specific guidance that refines or extends those definitions for their regulatory domains.

Layer 2 — Standards Body Definitions
NIST produces the AI RMF and companion glossary, which courts and agencies increasingly treat as persuasive authority for technical meaning. The NIST AI 100-1 document defines terms including "AI system," "risk," "trustworthiness," and "transparency" with precision that statutory language often lacks.

Layer 3 — Professional Conduct and Bar Definitions
The American Bar Association's Formal Opinion 512 (2024) addresses generative AI use by attorneys and implicitly defines "generative AI" for ethics purposes. State bar associations — including California, New York, and Florida — have issued their own guidance that may apply local definitional variants. The Attorney Ethics and AI Use page catalogs these opinions by jurisdiction.

The interaction among these three layers creates definitional gaps. A term like "decision support system" may carry one meaning under FDA medical device regulations (21 C.F.R. Part 882, as amended effective February 2, 2026), a different meaning under EEOC algorithmic hiring guidance, and no settled meaning in common law tort doctrine. Because the Part 882 amendment is now in effect, practitioners applying FDA definitional standards to AI-based medical decision support tools should consult the current amended text through the Electronic Code of Federal Regulations (eCFR) rather than any superseded version.

Common Scenarios

Scenario A — Evidentiary Disputes Over AI Output
When AI-generated documents or predictions are offered into evidence, courts must resolve whether the output is hearsay, what foundational showing is required, and whether the underlying model is sufficiently reliable under Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993). The operative terms here — "scientific knowledge," "reliable principles and methods," and "fit" — were not drafted with AI in mind, requiring definitional extension by analogy. The AI Evidence Admissibility page covers these standards in detail.

Scenario B — Professional Responsibility
When a lawyer uses an LLM to draft a brief, the duties of "competence" (Model Rules of Professional Conduct 1.1), "confidentiality" (MRPC 1.6), and "supervision" (MRPC 5.3) are all implicated. ABA Formal Opinion 512 provides that lawyers using "generative AI" tools must understand the tools' limitations — a requirement grounded in the duty of competence for AI use.

Scenario C — Regulatory Enforcement
The FTC's enforcement authority under Section 5 of the FTC Act (15 U.S.C. § 45) extends to deceptive or unfair uses of AI in commerce. When the FTC alleges "deceptive AI," the factual and legal analysis hinges on how "AI" and "automated decision system" are defined in the complaint and applicable guidance. The FTC AI Enforcement page tracks enforcement actions by term and theory.

Scenario D — Criminal Justice Risk Scores
Tools such as COMPAS assign numeric risk scores that influence bail, sentencing, and parole. The legal controversy centers on whether these scores constitute "evidence," whether they are sufficiently "reliable," and whether their opacity violates due process. Each question requires a precise definition of the underlying term — as explored on the COMPAS Risk Assessment Tools page.


Decision Boundaries

Definitional classification determines which legal rules apply. Two contrasts illustrate this:

Decision Support vs. Autonomous Decision-Making
A system that recommends a credit decision to a human who then independently decides is treated differently under the Equal Credit Opportunity Act (15 U.S.C. § 1691 et seq.) than a system that makes the decision without human review. The CFPB's adverse action notice guidance (Regulation B, 12 C.F.R. Part 1002) turns on this boundary: automated systems that generate adverse actions must provide specific reasons, regardless of whether a human reviews the output.
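
A minimal sketch of this boundary follows, using hypothetical field and function names (nothing here is drawn from Regulation B's text): the obligation to state specific reasons attaches to the adverse action itself, not to whether a human reviewed the output.

    from dataclasses import dataclass

    @dataclass
    class CreditDecision:
        approved: bool
        principal_reasons: list[str]  # specific reasons, e.g. "insufficient credit history"
        human_reviewed: bool          # whether a human independently reviewed the output

    def adverse_action_notice(decision: CreditDecision):
        """Return notice text for an adverse action, or None if approved.

        Illustrative only: the notice requirement does not depend on the
        human_reviewed flag, mirroring the boundary discussed above.
        """
        if decision.approved:
            return None
        if not decision.principal_reasons:
            raise ValueError("Adverse action requires specific reasons, even with human review")
        return "Principal reasons for adverse action: " + "; ".join(decision.principal_reasons)

    print(adverse_action_notice(
        CreditDecision(approved=False,
                       principal_reasons=["insufficient credit history"],
                       human_reviewed=True)
    ))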

General-Purpose AI vs. High-Risk AI
NIST AI RMF 1.0 distinguishes between AI systems deployed in low-stakes contexts and those affecting "safety, rights, or other high-impact areas." This classification boundary — which the EU AI Act formalizes into prohibited, high-risk, and limited-risk tiers — has no direct U.S. statutory analog yet, but federal agency guidance increasingly uses a similar risk-tiered vocabulary. High-risk classification triggers enhanced documentation, explainability, and human oversight requirements under proposed rules from agencies including the EEOC (algorithmic employment tools) and HHS (clinical AI tools, 45 C.F.R. subparts).
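
To make the risk-tiered vocabulary concrete, the sketch below maps a deployment context to a tier and to the obligations the text above associates with high-risk classification. The tier names echo the EU AI Act framing noted above; the domain list and function are illustrative assumptions, not drawn from any statute or agency rule.

    # Hypothetical risk-tier mapping, loosely tracking the prohibited /
    # high-risk / limited-risk vocabulary referenced above. The domain
    # categories and triggered obligations are illustrative assumptions.
    HIGH_RISK_DOMAINS = {"employment screening", "credit", "clinical decision support",
                         "pretrial detention", "sentencing"}

    def risk_tier(domain: str, prohibited: bool = False) -> dict:
        """Classify a deployment context and list the obligations it would trigger."""
        if prohibited:
            return {"tier": "prohibited", "obligations": ["may not be deployed"]}
        if domain in HIGH_RISK_DOMAINS:
            return {"tier": "high-risk",
                    "obligations": ["enhanced documentation", "explainability", "human oversight"]}
        return {"tier": "limited-risk", "obligations": ["baseline transparency"]}

    print(risk_tier("employment screening"))
    print(risk_tier("marketing copy generation"))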

Practical Classification Questions
Practitioners applying these definitions must resolve the following questions (an illustrative sketch follows the list):

  1. Does the system generate output or retrieve existing information? (Generative vs. retrieval-augmented — relevant to copyright and hallucination liability)
  2. Does the system replace human judgment or inform it? (Decision support vs. autonomous decision-making; relevant to adverse action notices and human oversight requirements)
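
The sketch below is a hypothetical issue-spotting aid showing how answers to these questions point toward different review areas; the category labels simply restate the parentheticals above and the Decision Boundaries discussion.

    def issues_to_review(generates_content: bool, replaces_human_judgment: bool) -> list[str]:
        """Map answers to the classification questions to rough review areas (illustrative only)."""
        issues = []
        if generates_content:
            issues += ["copyright in AI-generated content", "hallucination liability"]
        else:
            issues += ["accuracy and provenance of retrieved sources"]
        if replaces_human_judgment:
            issues += ["adverse action notice obligations", "human oversight requirements"]
        else:
            issues += ["decision-support framing; human remains the decision-maker"]
        return issues

    print(issues_to_review(generates_content=True, replaces_human_judgment=False))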
