AI Use in State Courts: Adoption Trends and Judicial Policies

State court systems across the United States are navigating a rapid and uneven integration of artificial intelligence tools into judicial operations, case management, and legal research. This page maps the scope of that adoption, the policy frameworks courts have issued to govern it, and the boundaries that distinguish permissible AI assistance from prohibited substitution of judicial judgment. Understanding how state courts differ from federal courts in their AI governance approach is essential context for practitioners, researchers, and the public following AI in federal courts.


Definition and scope

AI use in state courts encompasses any application of machine-learning systems, large language models, natural language processing, or algorithmic scoring tools within the judicial branch of a state government. This includes tools used by judges, clerks, court administrators, public defenders, prosecutors, and self-represented litigants operating within the court's digital infrastructure.

The scope divides into two primary categories:

  1. Administrative and operational applications: case management, document processing, transcription, translation, and legal research support.

  2. Decision-support applications: risk assessment, sentencing-comparison, and other tools whose outputs bear on consequential determinations about litigants.

The Conference of State Court Administrators (COSCA) and the National Center for State Courts (NCSC) serve as the two principal national bodies tracking and advising on AI deployment across the 50 state court systems and the District of Columbia. NCSC has published guidance frameworks and survey data on court technology adoption that form the primary evidence base for understanding adoption patterns nationally.

State courts operate under their own rule-making authority, meaning the 50 state systems plus the District of Columbia each retain independent discretion to permit, restrict, or regulate AI use. No single federal mandate governs AI in state courts, though federal constitutional floors — particularly due process requirements — constrain how AI outputs can be used in consequential decisions.


How it works

AI adoption in state courts follows a recognizable institutional pathway, though timelines vary substantially by state funding levels, population served, and judicial leadership priorities.

Phase 1 — Procurement and piloting: A court administrator or technology committee identifies a specific operational problem (e.g., backlog in document processing, inconsistent risk assessment practices) and evaluates commercial or publicly available AI tools. Procurement is governed by state procurement law and, in some cases, state-level AI policies issued by governors' offices.

Phase 2 — Policy drafting: Before or concurrent with deployment, courts draft internal standing orders or administrative rules governing disclosure, use limitations, and oversight. During 2023 and 2024, courts in at least 15 states issued formal orders or guidelines specifically addressing generative AI use by attorneys and court staff, according to NCSC tracking (NCSC AI in Courts Resource Guide).

Phase 3 — Disclosure requirements: Courts require that attorneys disclose when AI tools assisted in drafting filings and certify that human review was applied. This mirrors guidance from state bar associations and addresses AI hallucination legal consequences that have produced sanctions in documented cases.

Phase 4 — Ongoing audit and review: Courts with formal AI policies typically require periodic review of vendor accuracy claims and maintain human override authority for all AI-generated outputs touching case decisions.

Risk assessment instruments — the most scrutinized category — operate by ingesting structured data about a defendant's criminal history, demographics, and other variables, then outputting a score or classification. The COMPAS risk assessment tool is the most publicly examined example, following the State v. Loomis decision by the Wisconsin Supreme Court in 2016, which upheld the use of COMPAS at sentencing while requiring that judges not treat the score as determinative.
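The ingest-score-classify shape described above can be sketched in a few lines. COMPAS's actual model is proprietary, so every weight, feature name, and threshold below is invented for illustration; the point is only the structure: structured inputs, a numeric score, and an advisory risk band that a judge must not treat as determinative.

```python
# Hypothetical weights and cutoffs for illustration only; real instruments
# are trained on historical data and are far more complex.
WEIGHTS = {
    "prior_arrests": 0.4,
    "failed_appearances": 0.8,
    "age_at_first_offense_under_21": 1.2,
}
# (cutoff, label) pairs checked from highest to lowest.
THRESHOLDS = [(3.0, "high"), (1.5, "moderate"), (0.0, "low")]

def risk_score(features: dict) -> float:
    """Weighted sum over structured defendant data; unknown keys are ignored."""
    return sum(WEIGHTS[k] * float(v) for k, v in features.items() if k in WEIGHTS)

def classify(score: float) -> str:
    """Map a numeric score to an advisory risk band."""
    for cutoff, label in THRESHOLDS:
        if score >= cutoff:
            return label
    return "low"
```

Under Loomis, the output of `classify` may inform a sentencing decision but cannot substitute for the judge's independent determination.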


Common scenarios

State courts encounter AI across a widening range of operational contexts:

  1. Pretrial risk assessment: Algorithmic tools score defendants for flight risk and public safety risk to assist bail and release decisions. New Jersey's Public Safety Assessment (PSA), developed with Arnold Ventures, is used across all 21 New Jersey counties under court oversight.

  2. Legal research assistance: Judges and clerks use AI-powered legal research platforms to surface relevant precedent. Platforms built on large language models require verification workflows to catch hallucinated citations — a documented failure mode addressed in AI citation verification in legal practice.

  3. Document review and e-discovery in civil matters: Courts with high-volume civil dockets increasingly permit AI-assisted document review under controlled conditions; see AI document review and e-discovery for the framework governing that process.

  4. Self-represented litigant support: Courts in states including California and Utah have piloted AI-powered guidance tools that help unrepresented parties navigate form selection and procedural steps without constituting legal advice. This implicates AI legal access for self-represented litigants.

  5. Sentencing support tools: Beyond risk scores, some jurisdictions have piloted tools that surface comparable sentences from prior cases to promote consistency, raising the algorithmic due process questions addressed in academic and policy literature.

  6. Translation and transcription: AI-powered transcription and real-time translation services are now operational in courts across Texas, California, and New York, reducing reliance on human interpreters for routine proceedings.
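The citation-verification workflow noted in scenario 2 can be sketched as a filter over an AI-drafted passage: every citation string must match an entry in a human-verified authority list before the draft is relied on. The regular expression and function below are a simplified illustration, not any platform's actual implementation; real citation formats are far more varied than this pattern captures.

```python
import re

# Simplified pattern for "volume Reporter page" citations, e.g. "881 N.W.2d 749".
CITATION_PATTERN = re.compile(r"\b\d+\s+[A-Z][\w.]*(?:\s[\w.]+)*\s+\d+\b")

def unverified_citations(text: str, verified: set) -> list:
    """Return citations in the draft that are absent from the human-verified
    authority list — candidates for hallucinated authority."""
    return [c for c in CITATION_PATTERN.findall(text) if c not in verified]
```

A clerk's workflow would flag any non-empty result for manual lookup before the research memo or draft order moves forward.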


Decision boundaries

The critical legal and policy boundary in state court AI use is the distinction between decision support and decision delegation. Courts and bar bodies have consistently drawn this line: AI may inform, organize, and surface information, but a human officer of the court or judge must make the actual determination.

Key boundary markers established by court orders and professional ethics guidance include:

  1. Disclosure: attorneys must state when AI tools assisted in drafting a filing.

  2. Human review: a licensed attorney or court officer must verify AI-generated content before it is filed or relied upon.

  3. Non-determinative scoring: under State v. Loomis, a risk assessment score may inform but not determine a sentence.

  4. Human override: courts retain final authority over any AI-generated output touching a case decision, with periodic review of vendor accuracy claims.

The contrast between state court AI governance and federal court governance is primarily one of uniformity: federal courts operate under Judicial Conference oversight, which can issue consistent guidance across all 94 federal district courts, while state courts fragment across 50 independent rule-making authorities. This fragmentation creates compliance complexity for practitioners appearing in state courts across multiple jurisdictions, as obligations and disclosure formats differ state by state. Practitioners must also track state AI laws affecting legal practice enacted through state legislatures that may impose obligations separate from court rules.

