Federal courts in the United States occupy a distinctive regulatory position with respect to artificial intelligence: they are simultaneously rule-makers, adopters, and arbiters of AI-related disputes. This page maps the formal policies governing AI use by judges, clerks, and attorneys in the federal judiciary; the pilot programs underway across circuits; and the precedent-setting decisions that are shaping how AI evidence, AI-generated filings, and algorithmic tools are treated under federal procedural and evidentiary rules. The treatment covers both the institutional use of AI within court operations and the litigation dimensions where AI outputs enter federal proceedings as evidence or argument.
- Definition and Scope
- Core Mechanics or Structure
- Causal Relationships or Drivers
- Classification Boundaries
- Tradeoffs and Tensions
- Common Misconceptions
- Checklist or Steps
- Reference Table or Matrix
Definition and Scope
"AI use in federal courts" encompasses three functionally distinct domains. The first is administrative AI, meaning tools deployed by the Administrative Office of the U.S. Courts (AO) and individual district courts for case management, scheduling, document processing, and workload analytics. The second is litigant-facing AI, meaning generative AI and legal research tools used by attorneys and self-represented parties to draft filings, conduct research, or prepare exhibits. The third is decisional AI, meaning algorithmic tools that inform or purport to inform judicial or quasi-judicial decisions — the category that has drawn the most constitutional scrutiny under due process doctrine, addressed in depth at Algorithmic Due Process.
The Administrative Office of the U.S. Courts operates under Title 28 of the U.S. Code, and the Judicial Conference of the United States — the national policy-making body for the federal courts — holds authority to issue guidance governing court technology and practice under 28 U.S.C. § 331. That authority is the formal basis for any AI-specific policy the federal judiciary adopts.
Scope boundaries matter: federal courts do not directly regulate how private parties develop AI systems, but they do regulate what attorneys may submit as evidence, how filings must be certified, and what disclosure obligations attach to AI-generated content. Those procedural constraints are the operative boundary of "federal court AI policy."
Core Mechanics or Structure
The Judicial Conference Policy Framework
The Judicial Conference of the United States issued guidance in 2023 acknowledging that generative AI tools pose verification and accuracy risks in federal filings. Rather than a single uniform rule, the Conference's approach operates through model local rules that individual district courts adopt and modify. Based on district court orders publicly available through PACER and court websites, at least 30 federal district courts had issued standing orders addressing generative AI use in court filings by mid-2024, according to the Thomson Reuters Institute's tracking of federal AI standing orders.
The core mechanic in most standing orders follows a certification model: attorneys filing documents must either affirm that no AI-generated content appears in the filing, or disclose that AI was used and that a human attorney has reviewed and verified all content. This mirrors — but extends — the existing certification obligation under Federal Rule of Civil Procedure 11, which requires an attorney's signature to certify that filings are not frivolous and are supported by existing law or a nonfrivolous argument for modifying it (Fed. R. Civ. P. 11(b)).
Electronic Filing Infrastructure
The federal judiciary's Case Management/Electronic Case Files (CM/ECF) system, administered by the AO, does not currently incorporate AI filtering or detection at the point of submission. Verification responsibility remains with the filing attorney. The AO has published strategic technology plans through its Long Range Plan for Information Technology, which identifies AI as a priority area for case analytics and workload distribution, without specifying deployment timelines.
AI-Generated Evidence Admission
When AI outputs are offered as evidence — whether as exhibits, expert reports, or demonstrative aids — the Federal Rules of Evidence govern admissibility. Rule 702 controls expert testimony and was amended effective December 1, 2023 (Fed. R. Evid. 702) to clarify that the proponent bears the burden of demonstrating, by a preponderance of the evidence, that the expert's methodology satisfies the rule's reliability requirements. AI-generated analytical outputs offered through an expert witness fall squarely within this amended framework.
Causal Relationships or Drivers
Three structural forces are driving formalization of AI policy in federal courts.
Documented hallucination incidents created immediate pressure. The 2023 disciplinary proceedings in Mata v. Avianca (S.D.N.Y.) — in which attorneys submitted a brief citing six nonexistent cases generated by ChatGPT — produced a $5,000 sanctions award against the filing attorneys (Order, Mata v. Avianca, No. 22-cv-1461 (S.D.N.Y. June 22, 2023)). That single incident generated standing orders from courts nationwide within months. The mechanics of AI hallucination in legal contexts explain why large language models produce plausible but fabricated citations.
COMPAS and algorithmic sentencing litigation demonstrated that decisional AI without disclosure creates appellate vulnerability. State v. Loomis, 881 N.W.2d 749 (Wis. 2016), while a state court decision, established a template for federal courts evaluating due process objections to opaque risk scores. Federal circuits have since addressed analogous arguments regarding pretrial risk assessment instruments used in the federal pretrial services system, which operates under 18 U.S.C. § 3142.
Executive Branch AI governance created an external policy environment that courts cannot ignore. Executive Order 14110 (October 2023), titled "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," directed federal agencies to develop AI use policies, and while Article III courts are not executive agencies, the AO faced parallel pressure from bar associations and Congress to develop policies of its own.
Classification Boundaries
Federal court AI use falls into four distinct regulatory categories, each with different governing authority:
- Judicial administrative AI (e.g., docket management, case assignment analytics): governed by AO internal technology governance; no public disclosure requirement.
- Attorney-submitted AI content in filings: governed by local standing orders, FRCP Rule 11, and circuit-level bar discipline rules.
- AI as expert or analytical evidence: governed by Federal Rules of Evidence 702–705 and Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993).
- Decisional AI influencing detention, sentencing, or supervision: governed by constitutional due process (Fifth Amendment), 18 U.S.C. § 3553 (sentencing factors), and 18 U.S.C. § 3142 (pretrial detention).
The boundary between categories 2 and 3 is frequently contested: an attorney who uses AI to draft a damages calculation and presents it as counsel argument (category 2) versus an expert who uses an AI model to generate a valuation and presents it as expert opinion (category 3) triggers entirely different evidentiary standards. The AI Evidence Admissibility page maps this distinction in detail.
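The four-category scheme can be expressed as a toy classifier. This is an illustrative sketch only: the boolean inputs and check order are assumptions introduced here to make the boundary logic concrete, not an official taxonomy.

```python
from enum import Enum

class AIUseCategory(Enum):
    """The four regulatory categories described above (illustrative model only)."""
    ADMINISTRATIVE = "judicial administrative AI"
    FILING_CONTENT = "attorney-submitted AI content in filings"
    EVIDENCE = "AI as expert or analytical evidence"
    DECISIONAL = "decisional AI in detention/sentencing/supervision"

def classify_ai_use(presented_as_expert_opinion: bool,
                    influences_liberty_decision: bool,
                    appears_in_filing: bool) -> AIUseCategory:
    """Map the facts of an AI use to its regulatory category.

    Check order matters: decisional use is tested first because it
    triggers constitutional scrutiny regardless of packaging.
    """
    if influences_liberty_decision:
        return AIUseCategory.DECISIONAL      # due process; 18 U.S.C. §§ 3142, 3553
    if presented_as_expert_opinion:
        return AIUseCategory.EVIDENCE        # FRE 702-705 / Daubert apply
    if appears_in_filing:
        return AIUseCategory.FILING_CONTENT  # standing orders + Rule 11 apply
    return AIUseCategory.ADMINISTRATIVE      # AO internal governance
```

The category-2/3 boundary from the text falls out of the second check: the same AI-drafted damages calculation classifies as `FILING_CONTENT` when presented as counsel argument but as `EVIDENCE` when presented as expert opinion.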
Tradeoffs and Tensions
Transparency Versus Competitive Sensitivity
Mandatory AI disclosure requirements in standing orders create tension with attorney work-product protection. When a litigant must disclose that AI tools assisted in document review or argument drafting, opposing counsel gains information about litigation strategy. The work-product doctrine under Fed. R. Civ. P. 26(b)(3) does not straightforwardly resolve this because disclosure-triggering standing orders operate at the filing level, not the discovery level.
Uniformity Versus Local Innovation
Because there is no single Judicial Conference rule, district-by-district standing orders produce a patchwork of at least six materially different disclosure formulations. Attorneys practicing in multiple districts face inconsistent obligations. The U.S. Court of Appeals for the Fifth Circuit, for instance, proposed a circuit-wide certification rule for generative AI use but announced in June 2024 that it would not adopt it, and the Second Circuit had not issued a circuit-wide rule as of mid-2024.
Access Versus Accuracy
AI tools dramatically lower the cost of legal research and drafting, potentially expanding access for self-represented litigants — a constituency the federal courts explicitly recognize under the AO's Pro Se Statistics program. However, self-represented parties using AI without verification training face higher hallucination risk, and standing orders imposing attorney-certification requirements do not straightforwardly apply to pro se filers, creating an enforcement gap. This dynamic is explored at AI Legal Access for Self-Represented Litigants.
Algorithmic Opacity Versus Procedural Due Process
Pretrial Services Officers in the federal system use a validated risk instrument (the federal Pretrial Risk Assessment, or PTRA) whose algorithm weights are not fully public. Defendants cannot challenge the instrument's methodology without access to its source code or training data, raising Fifth Amendment due process concerns analogous to those documented in COMPAS Risk Assessment Tools.
Common Misconceptions
Misconception: All federal courts now ban AI in filings.
Correction: Neither the Judicial Conference nor any federal circuit has issued a blanket prohibition on AI use in filings. Existing orders require disclosure and human verification, not prohibition. The distinction between disclosure requirements and bans is operationally significant.
Misconception: FRCP Rule 11 already covers all AI filing problems.
Correction: Rule 11 requires certification that legal contentions are warranted by existing law, but it does not specifically address AI-generated citations or require disclosure of AI tool use. Standing orders were issued precisely because Rule 11 was considered insufficient on its own to address hallucinated citations.
Misconception: Daubert automatically excludes AI-generated evidence.
Correction: Daubert establishes a reliability gatekeeping standard — it does not categorically exclude any technology. AI-generated analytical outputs can satisfy Daubert if the proponent demonstrates the methodology's reliability, validation, and known error rate. Federal courts have admitted outputs from algorithmic models in antitrust, patent, and environmental damages contexts.
Misconception: The Judicial Conference has no authority over AI in courts.
Correction: The Judicial Conference holds express statutory authority under 28 U.S.C. § 331 to supervise federal court administration and to recommend changes to the rules of practice and procedure promulgated under the Rules Enabling Act (28 U.S.C. §§ 2071–2077), including technology-related rules. That authority is the formal basis for any future binding AI policy.
Checklist or Steps
The following sequence reflects the publicly documented procedural elements that apply when AI-generated content appears in federal court contexts. This is a reference map of process elements, not legal advice.
Phase 1: Pre-Filing
- [ ] Identify whether the applicable district court has a standing order on AI use in filings (check court's local rules page on uscourts.gov)
- [ ] Identify whether the applicable circuit court has issued a circuit-wide AI disclosure requirement
- [ ] Determine whether any AI-generated content appears in the proposed filing (drafts, citations, legal arguments, exhibits)
- [ ] Cross-verify every citation against primary sources — Westlaw, Lexis, or official government repositories (regulations.gov, ecfr.gov, uscode.house.gov)
Phase 2: Filing Certification
- [ ] Prepare required AI disclosure language consistent with the applicable standing order (exact language varies by district)
- [ ] Ensure the certifying attorney has personally reviewed all AI-generated content for accuracy
- [ ] Apply Rule 11 signature certification to the completed filing
- [ ] Retain documentation of AI tool used, prompts, and outputs as potential sanctions-response material
Phase 3: Evidence Presentation
- [ ] If AI-generated analysis is offered as expert evidence, confirm Rule 702 foundation materials are prepared (methodology, validation, error rate)
- [ ] Anticipate Daubert challenge; prepare reliability documentation from AI tool's technical specifications or research-based validation studies
- [ ] If AI output is offered as a demonstrative or lay exhibit, confirm relevance and authentication grounds under FRE 901
Phase 4: Appellate Preservation
- [ ] Ensure objections to opposing AI evidence are made on the record at the trial level to preserve appellate review
- [ ] Document any due process objections to decisional AI (risk instruments, sentencing algorithms) in the district court record before appeal
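The citation cross-verification step in Phase 1 can be partly mechanized. A minimal sketch, under stated assumptions: the regular expression below covers only a few common reporter formats and is a stand-in for a dedicated citation parser; every extracted citation still requires manual verification against a primary source such as Westlaw, Lexis, or an official repository.

```python
import re

# Rough pattern for common reporter citations (e.g., "509 U.S. 579",
# "881 N.W.2d 749", "345 F. Supp. 2d 12"). Illustrative only; real
# briefs use many more reporter abbreviations than are listed here.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"
    r"(?:U\.S\.|S\. Ct\.|F\. Supp\.(?: 2d| 3d)?|F\.(?:2d|3d|4th)?|N\.W\.2d)"
    r"\s+\d{1,4}\b"
)

def citations_to_verify(brief_text: str) -> list[str]:
    """Extract reporter citations so each can be checked against a primary source."""
    return sorted(set(CITATION_RE.findall(brief_text)))
```

Running this over a draft brief yields a deduplicated worklist; a hallucinated citation will still match the pattern, which is exactly why the output is a list of items to verify, not a list of valid authorities.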
Reference Table or Matrix
| Domain | Governing Authority | Key Instrument | Disclosure Required? | Enforcement Mechanism |
|---|---|---|---|---|
| Attorney AI in filings | District standing orders + FRCP Rule 11 | Local standing orders (30+ districts as of mid-2024) | Yes (disclosure + verification) | Rule 11 sanctions; bar discipline |
| AI as expert evidence | Federal Rules of Evidence 702–705 | Daubert standard (509 U.S. 579) | Yes (via expert report under FRCP 26(a)(2)) | Exclusion; adverse instruction |
| AI in pretrial risk assessment | 18 U.S.C. § 3142; Fifth Amendment | AO federal Pretrial Risk Assessment (PTRA) | Limited (instrument score disclosed, not full algorithm) | Due process challenge; appellate review |
| AI in sentencing | 18 U.S.C. § 3553; USSG | U.S. Sentencing Guidelines Manual | No binding rule; circuit-dependent | Due process objection; Loomis framework |
| Judicial administrative AI | AO internal governance; 28 U.S.C. § 331 | AO Long Range Plan for IT | No public disclosure requirement | Internal AO oversight |
| AI in e-discovery / document review | FRCP Rules 26, 34, 37; Fed. R. Evid. 901 | TAR/CAL protocol orders | Negotiated per case (protocol) | Sanctions under Rule 37; exclusion |
References
- Judicial Conference of the United States — About the Judicial Conference
- 28 U.S.C. § 331 — Judicial Conference of the United States
- Federal Rule of Civil Procedure 11 — Cornell LII
- Federal Rule of Evidence 702 (2023 Amendment) — Cornell LII
- 18 U.S.C. § 3142 — Release or Detention of a Defendant Pending Trial
- 18 U.S.C. § 3553 — Imposition of a Sentence
- Administrative Office of the U.S. Courts — Pro Se Statistics
- Executive Order 14110 — Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Federal Register)
- Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993) — Cornell LII
- U.S. Courts — CM/ECF Overview
- U.S. Sentencing Commission — Guidelines Manual