International AI Law Compared to U.S. Frameworks: EU AI Act and Beyond
Artificial intelligence governance has fractured along jurisdictional lines, producing meaningfully different legal obligations depending on where an AI system is deployed, who deploys it, and what it decides. This page maps the structural differences between the European Union's AI Act, the United Kingdom's sector-led approach, China's algorithm regulations, and the United States' fragmented federal and state frameworks. Understanding these differences matters for legal practitioners, compliance officers, and technology developers operating across borders, particularly as extraterritorial provisions increasingly pull non-EU actors into EU compliance obligations.
Definition and scope
The EU AI Act, formally Regulation (EU) 2024/1689, entered into force on 1 August 2024 and applies a risk-tiered classification system to AI systems placed on the EU market or used within EU territory — regardless of where the developer is headquartered. This extraterritorial scope mirrors the GDPR model and directly affects U.S. companies with EU-facing products.
The United States, by contrast, has no single enacted federal AI statute as of the Act's effective date. Governance is distributed across sector-specific agencies — the Federal Trade Commission under 15 U.S.C. § 45, the Equal Employment Opportunity Commission under Title VII, the Food and Drug Administration under 21 C.F.R. Part 820, and the Consumer Financial Protection Bureau under the Fair Credit Reporting Act — each applying existing authority to AI-enabled conduct. Amendments to 21 C.F.R. Part 820, effective February 2, 2026 and February 4, 2026 respectively, replace the FDA's prior Quality System Regulation with a Quality Management System Regulation aligned with ISO 13485, the international standard for medical device quality management systems; the amended rule imposes updated design controls, risk management, and documentation requirements that bear directly on AI-enabled medical devices subject to FDA oversight. The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (E.O. 14110, October 2023) directed agency action but did not itself create enforceable private rights. For a broader map of the domestic landscape, see AI Regulatory Framework (U.S.).
The UK's approach, articulated in the Department for Science, Innovation and Technology's AI Regulation Policy Paper (2023), delegates AI oversight to existing sector regulators — the Financial Conduct Authority, the Information Commissioner's Office, and the Medicines and Healthcare products Regulatory Agency — without a horizontal AI statute.
China enacted three sequential measures: the Provisions on the Management of Algorithmic Recommendations (2022), the Provisions on Deep Synthesis (2022), and the Interim Measures for the Management of Generative AI Services (2023). Together, these measures create obligations specific to recommendation, deep-synthesis, and generative AI systems used within Chinese jurisdiction.
How it works
EU AI Act: Risk classification structure
The EU AI Act organizes AI systems into four tiers:
- Unacceptable risk (prohibited) — AI systems that manipulate persons through subliminal techniques, exploit vulnerabilities of specific groups, enable real-time biometric surveillance in public spaces by law enforcement (with narrow exceptions), or conduct social scoring on behalf of public authorities. These are banned entirely under Article 5.
- High risk — AI used in biometric identification, critical infrastructure, education, employment, access to essential services, law enforcement, migration, and administration of justice. High-risk systems must comply with conformity assessments, maintain technical documentation, register in an EU database, and implement human oversight measures (Articles 8–15).
- Limited risk — Systems with transparency obligations only, such as chatbots that must disclose AI identity.
- Minimal risk — AI such as spam filters and AI-enabled video games, subject to no mandatory requirements beyond voluntary codes.
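The four-tier structure above can be sketched as a simple lookup. The mapping below uses only the example use cases named in this section; it is an illustration of the tiered logic, not a legal classification tool, and a real determination requires analysis against Annex III and Article 5.

```python
# Illustrative mapping of the EU AI Act's four risk tiers to example
# use cases drawn from the list above; not an exhaustive reading of the Act.
RISK_TIERS = {
    "unacceptable": {"subliminal manipulation", "social scoring by public authorities"},
    "high": {"biometric identification", "employment screening",
             "administration of justice"},
    "limited": {"chatbot"},
    "minimal": {"spam filter", "ai-enabled video game"},
}

def risk_tier(use_case: str) -> str:
    """Return the tier whose example set contains the use case."""
    for tier, examples in RISK_TIERS.items():
        if use_case.lower() in examples:
            return tier
    return "unclassified"  # real classification requires Annex III analysis

print(risk_tier("Administration of justice"))  # high
print(risk_tier("spam filter"))                # minimal
```

Note that the default here is "unclassified" rather than "minimal": under the Act, a system outside the listed categories is presumptively minimal risk, but a compliance screen should flag it for review rather than assume.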
Penalties under the EU AI Act reach €35 million or 7% of global annual turnover, whichever is higher, for prohibited-use violations, and €15 million or 3% of turnover for high-risk system violations (EU AI Act, Article 99).
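Because the Article 99 ceilings follow a "whichever is higher" rule for undertakings, the maximum exposure depends on turnover. The short calculation below uses the tier amounts stated above; the function name and structure are illustrative, and the actual fine is set case by case by national authorities up to this ceiling.

```python
def max_fine_eur(turnover_eur: float, violation: str) -> float:
    """Illustrative ceiling on EU AI Act administrative fines (Article 99).

    Applies the 'whichever is higher' rule for undertakings; not legal advice.
    """
    tiers = {
        "prohibited_use": (35_000_000, 0.07),  # Article 5 violations
        "high_risk": (15_000_000, 0.03),       # high-risk system violations
    }
    fixed, pct = tiers[violation]
    return max(fixed, pct * turnover_eur)

# A firm with €2 billion global turnover: 7% (€140M) exceeds the €35M floor.
print(max_fine_eur(2_000_000_000, "prohibited_use"))  # 140000000.0
# A firm with €100 million turnover: the €15M floor exceeds 3% (€3M).
print(max_fine_eur(100_000_000, "high_risk"))         # 15000000.0
```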
U.S. approach: Enforcement-led, sector-specific
The U.S. framework operates through enforcement actions rather than pre-market approval for most AI applications. The FTC has used Section 5 of the FTC Act to challenge deceptive AI claims. The EEOC issued Technical Assistance on AI and Title VII (2023) addressing algorithmic hiring tools. NIST published the AI Risk Management Framework (AI RMF 1.0) in January 2023 as a voluntary governance standard, which E.O. 14110 directed agencies to use as a baseline.
This structure means U.S. AI compliance is determined reactively — through litigation, enforcement, and administrative guidance — rather than through the proactive conformity assessments the EU requires. The implications for attorney ethics and AI use differ accordingly.
Common scenarios
Scenario 1: A U.S. legal tech company deploys an AI contract review tool to EU law firms.
Under the EU AI Act, AI used in "administration of justice and democratic processes" is listed as high-risk (Annex III, point 8). The vendor must conduct a conformity assessment, maintain logs of system operations, and register the system in the EU database before deployment. Under U.S. law, no equivalent pre-deployment requirement exists; the primary exposure is through FTC enforcement for deceptive capability claims or bar association rules on AI competence.
Scenario 2: An AI tool used in U.S. pretrial detention decisions.
In the United States, risk assessment tools used in pretrial detention are governed by state statutes and judicial rules, with constitutional constraints under the Due Process Clause. The EU AI Act would classify such a tool as high-risk under Annex III, point 6 (law enforcement), triggering mandatory human oversight and adverse-decision explanation requirements. The U.S. has no federal equivalent for AI-specific explanation rights in criminal proceedings, though algorithmic due process arguments are litigated under existing constitutional doctrine.
Scenario 3: Generative AI used in legal drafting.
China's 2023 Generative AI Measures require providers to label AI-generated content, ensure political content conforms to state ideology, and conduct security assessments before public release. The EU's limited-risk tier requires disclosure of AI identity but not content-level review. U.S. obligations are minimal at the federal level; AI hallucination and its legal consequences are addressed primarily through professional responsibility rules rather than AI-specific statute.
Decision boundaries
The table below identifies the primary classification variables that determine which legal framework applies:
| Variable | EU AI Act | U.S. Framework | UK Framework | China Framework |
|---|---|---|---|---|
| Trigger | Market placement or use in EU territory | Sector of deployment + enforcement action | Sector regulator jurisdiction | Use within China or service to Chinese users |
| Pre-market requirement | Yes (high-risk systems) | No (general); Yes (medical devices, drugs) | No horizontal requirement | Security assessment (generative AI) |
| Risk classification | Statutory 4-tier hierarchy | Agency-by-agency, case-by-case | Sector-defined | Activity-specific |
| Penalty basis | Global turnover percentage | Per-violation statutory cap | Sector-specific | Fine schedules in each measure |
| Extraterritorial reach | Explicit | Limited (FTC, FCPA in specific contexts) | Limited | Applies to services accessed in China |
The critical decision boundary for U.S. practitioners is whether the AI system touches EU-based users, operators, or data subjects. If it does, EU AI Act obligations attach regardless of the developer's location. For AI systems operating purely within U.S. jurisdiction, compliance is determined by the sector — financial services AI faces CFPB and SEC scrutiny, healthcare AI faces FDA oversight, and employment AI faces EEOC analysis. The state AI laws tracker captures the growing layer of state-level obligations that add further jurisdictional complexity, particularly for AI systems used in consumer-facing applications in Colorado, Illinois, and Texas.
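The jurisdictional triggers in the table can be sketched as a first-pass screening function. The field names and regulator mapping below are illustrative assumptions, and a real conflict-of-laws analysis turns on the specific facts of deployment.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Deployment:
    eu_facing: bool            # touches EU users, operators, or data subjects
    china_facing: bool         # service accessible to users in China
    us_sector: Optional[str]   # e.g. "finance", "healthcare", "employment"

def applicable_frameworks(d: Deployment) -> list[str]:
    """First-pass screen for which frameworks may attach; not legal advice."""
    frameworks = []
    if d.eu_facing:
        frameworks.append("EU AI Act (risk-tier obligations attach extraterritorially)")
    if d.china_facing:
        frameworks.append("China algorithm/deep-synthesis/generative AI measures")
    if d.us_sector is not None:
        regulator = {"finance": "CFPB/SEC", "healthcare": "FDA",
                     "employment": "EEOC"}.get(d.us_sector, "FTC (Section 5)")
        frameworks.append(f"U.S. sectoral oversight: {regulator}")
    return frameworks

# A U.S. legal-tech vendor with EU customers (Scenario 1 above):
vendor = Deployment(eu_facing=True, china_facing=False, us_sector="legal services")
print(applicable_frameworks(vendor))
```

Run against Scenario 1, the screen flags both the EU AI Act and residual FTC exposure, matching the analysis above: the EU obligations attach because of the EU-facing deployment, not the vendor's U.S. headquarters.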
References
- EU AI Act, Regulation (EU) 2024/1689 — European Parliament and Council
- NIST AI Risk Management Framework (AI RMF 1.0) — National Institute of Standards and Technology
- Executive Order 14110 on Safe, Secure, and Trustworthy AI (October 2023) — The White House
- EEOC Technical Assistance on AI and the ADA / Title VII (2023) — U.S. Equal Employment Opportunity Commission
- UK AI Regulation Policy Paper: A pro-innovation approach to AI regulation (2023) — Department for Science, Innovation and Technology