AI Legislation Tracker: Federal and State Bills Affecting Legal Practice
The legislative landscape governing artificial intelligence in the United States spans dozens of active federal proposals, enacted state statutes, and executive directives — each carrying direct consequences for how lawyers practice, how courts operate, and how AI-driven tools may be deployed in legal contexts. This page maps the structural categories of AI legislation, the mechanisms through which bills become binding obligations, the scenarios where legislative change most sharply affects legal practitioners, and the classification boundaries that distinguish federal from state regulatory authority. Understanding this legislative map is foundational to navigating the AI regulatory framework in the US and to tracking obligations that flow from state AI laws affecting legal practice.
Definition and scope
AI legislation, as a category of law affecting legal practice, encompasses statutory enactments, pending bills, executive orders, and agency rulemakings that impose obligations, prohibitions, or procedural requirements on systems that use machine learning, algorithmic decision-making, or large language models. The scope includes:
- Federal bills introduced in the 117th, 118th, and 119th Congresses addressing AI accountability, transparency, and civil rights
- State statutes already signed into law governing algorithmic employment decisions, automated decision systems in public benefits, and AI use in criminal justice
- Executive orders with regulatory force, including Executive Order 14110 (signed October 2023, later revoked and partially superseded by Executive Order 14179, signed January 2025)
- Agency rulemakings from bodies including the Federal Trade Commission, the Equal Employment Opportunity Commission, and the Consumer Financial Protection Bureau
For lawyers, the operative question is not merely whether a bill has passed, but whether it imposes compliance obligations on law firms, courts, or clients — and whether it creates new causes of action or evidentiary standards. The AI unauthorized practice of law landscape, for instance, is shaped more by state bar authority than by federal statute, while data privacy obligations tracking AI data privacy law arise from a patchwork of state consumer protection statutes and sector-specific federal rules.
How it works
AI legislation moves through distinct procedural stages, each of which carries different legal weight:
- Bill introduction — A sponsor files a bill in the House or Senate (federal) or in a state legislature. At this stage, no legal obligation exists, but practitioner awareness matters because bills signal regulatory intent and may prompt agency guidance even before passage.
- Committee markup and amendment — Bills are refined in committee, often narrowing or expanding definitional scope. Key terms such as "high-risk AI system," "automated decision system," and "consequential decision" are frequently defined at this stage.
- Floor passage and bicameral reconciliation — Federal bills require passage in both chambers; every state legislature except Nebraska's unicameral body is likewise bicameral.
- Executive signature or veto — A governor's or president's signature enacts the statute. Vetoed bills may be overridden by a supermajority.
- Effective date and rulemaking trigger — Many AI statutes delegate rulemaking authority to agencies. The statute's effective date may precede final agency rules, creating a gap period of legal uncertainty.
- Enforcement activation — Statutes specifying a private right of action, agency enforcement authority, or both activate compliance obligations with concrete penalty structures.
At the federal level, the National Institute of Standards and Technology (NIST) has published the AI Risk Management Framework (AI RMF 1.0), which, while not a statute, is referenced in pending federal bills as a compliance standard and informs how agencies interpret "responsible AI" obligations. The FTC's authority under Section 5 of the FTC Act (15 U.S.C. § 45) covers unfair or deceptive practices involving AI, as detailed in FTC AI enforcement.
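The staged pipeline above can be modeled in code for tracking purposes. The sketch below is illustrative only: the stage names, the hypothetical measure, and the notion of a "rulemaking gap" flag are assumptions drawn from the list above, not from any statute's actual text or dates.

```python
from dataclasses import dataclass
from datetime import date
from enum import IntEnum
from typing import Optional

class Stage(IntEnum):
    """The six procedural stages described above, in order."""
    INTRODUCED = 1
    COMMITTEE = 2
    PASSED_BOTH_CHAMBERS = 3
    SIGNED = 4
    EFFECTIVE = 5
    ENFORCEMENT_ACTIVE = 6

@dataclass
class Measure:
    name: str
    stage: Stage
    effective_date: Optional[date] = None  # set once known
    agency_rules_final: bool = False       # delegated rulemaking completed?

    def binding(self, today: date) -> bool:
        """A measure creates binding obligations only once signed and effective."""
        return (self.stage >= Stage.SIGNED
                and self.effective_date is not None
                and today >= self.effective_date)

    def rulemaking_gap(self, today: date) -> bool:
        """True during the gap period: statute effective, agency rules not yet final."""
        return self.binding(today) and not self.agency_rules_final

# Hypothetical measure with an assumed effective date, for illustration only.
m = Measure("Hypothetical State AI Act", Stage.SIGNED, date(2026, 2, 1))
print(m.binding(date(2025, 6, 1)))        # False: not yet effective
print(m.binding(date(2026, 3, 1)))        # True: signed and past effective date
print(m.rulemaking_gap(date(2026, 3, 1))) # True: binding but rules still pending
```

The point of the `rulemaking_gap` check is the compliance risk noted above: a statute can bind before the agency rules that flesh it out exist.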
Common scenarios
Scenario 1: A state enacts an automated decision system transparency law.
Illinois, Colorado, and Virginia have enacted privacy statutes with provisions governing automated profiling. Colorado's AI Act (SB 205, signed May 2024) (Colorado General Assembly) imposes developer and deployer obligations for "high-risk artificial intelligence systems" affecting consequential decisions. Law firms advising covered entities must assess whether client systems qualify as high-risk under the statutory definition.
Scenario 2: A federal bill proposes mandatory algorithmic impact assessments.
Bills such as the Algorithmic Accountability Act of 2023 (introduced in the 118th Congress) (Congress.gov) would require impact assessments for automated decision systems used in employment, credit, housing, and criminal justice. Attorneys advising technology companies track such bills because committee amendments can shift definitional scope within weeks.
Scenario 3: An executive order reshapes agency AI governance.
Executive Order 14110 (2023) directed agencies including the Department of Justice and the Department of Homeland Security to develop AI use policies. Its partial revocation by Executive Order 14179 (2025) (Federal Register) altered agency obligations, illustrating how executive order AI legal implications create volatile compliance environments.
Scenario 4: State criminal justice AI bills affect court-admissible tools.
At least 11 states have introduced or enacted bills addressing algorithmic risk assessment in pretrial and sentencing contexts, directly intersecting with AI bias in criminal justice and AI sentencing guidelines.
Decision boundaries
Classifying an AI legislative measure requires answering four threshold questions:
Federal vs. state jurisdiction. Federal legislation preempts state law only where Congress expressly so provides or where conflict or field preemption applies. Absent federal preemption, states retain authority to regulate AI within their borders — producing a 50-jurisdiction compliance matrix. Colorado's SB 205 and Illinois's Artificial Intelligence Video Interview Act (820 ILCS 42) exemplify state-level statutes with no current federal counterpart.
Binding statute vs. non-binding guidance. NIST's AI RMF and EEOC technical assistance documents on AI hiring (EEOC) are not statutes. They carry interpretive weight but do not independently create enforceable obligations. Practitioners tracking AI employment law must distinguish between agency guidance and promulgated rules under the Administrative Procedure Act (5 U.S.C. § 553).
High-risk vs. general-purpose AI classification. Multiple legislative frameworks bifurcate obligations based on risk level. High-risk systems — those affecting employment, credit, healthcare, criminal justice, or housing — typically carry heavier transparency, audit, and impact-assessment requirements. General-purpose AI tools face lighter or no sector-specific obligations under most current proposals.
Private right of action vs. agency-only enforcement. Colorado's SB 205 directs enforcement through the state Attorney General, with no private right of action. Illinois's Biometric Information Privacy Act (740 ILCS 14), by contrast, provides a private right of action with statutory damages of $1,000 per negligent violation and $5,000 per intentional or reckless violation (Illinois General Assembly). This distinction determines litigation exposure and class action risk, bearing directly on AI legal malpractice risk for practitioners advising clients on compliance.
The intersection of these classification boundaries with attorney ethics and AI use and the duty of competence for lawyers using AI underscores why static compliance snapshots are insufficient — legislative tracking must be continuous and jurisdiction-specific.
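The four threshold questions can be expressed as a simple triage routine. This is a sketch, not legal logic: the sector list, field names, and output labels are assumptions modeled on the boundaries described above, and any real classification turns on each statute's own definitions.

```python
from dataclasses import dataclass, field

# Sectors that current legislative frameworks commonly treat as "high-risk";
# the exact list varies statute by statute (assumed here for illustration).
HIGH_RISK_SECTORS = {"employment", "credit", "healthcare", "criminal_justice", "housing"}

@dataclass
class AIMeasure:
    jurisdiction: str               # "federal" or a state name
    binding_statute: bool           # promulgated statute/rule vs. non-binding guidance
    sectors: set = field(default_factory=set)
    private_right_of_action: bool = False

    def classify(self) -> dict:
        """Answer the four threshold questions as a compliance triage record."""
        return {
            "level": "federal" if self.jurisdiction == "federal" else "state",
            "binding": self.binding_statute,
            "risk_tier": ("high-risk" if self.sectors & HIGH_RISK_SECTORS
                          else "general-purpose"),
            "enforcement": ("private suits + agency" if self.private_right_of_action
                            else "agency-only"),
        }

# Colorado SB 205 as described above: state statute, high-risk scope, AG-only enforcement.
co = AIMeasure("Colorado", binding_statute=True,
               sectors={"employment", "credit"}, private_right_of_action=False)
print(co.classify())
```

Running the triage on the Colorado example yields a state-level, binding, high-risk, agency-only record — matching the classification walked through in the text.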
References
- Executive Order 14110 — Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Federal Register, 2023)
- Executive Order 14179 — Removing Barriers to American Leadership in Artificial Intelligence (Federal Register, 2025)
- NIST AI Risk Management Framework (AI RMF 1.0)
- Colorado SB 24-205 — Artificial Intelligence Act (Colorado General Assembly)
- Algorithmic Accountability Act of 2023, S.3312, 118th Congress (Congress.gov)
- Illinois Biometric Information Privacy Act, 740 ILCS 14 (Illinois General Assembly)
- Illinois Artificial Intelligence Video Interview Act, 820 ILCS 42 (Illinois General Assembly)