AI in U.S. Election Law: Disinformation, Deepfakes, and Regulatory Responses

AI-generated disinformation and synthetic media — commonly called deepfakes — have become measurable threats to U.S. electoral integrity, prompting legislative action at both the federal and state levels. This page covers the legal classification of AI-produced election content, the technical mechanisms that make detection difficult, the scenarios in which such content has appeared in electoral contexts, and the regulatory boundaries that distinguish protected political speech from unlawful deceptive conduct. The subject intersects campaign finance law, federal election statutes, state disclosure mandates, and First Amendment doctrine in ways that remain actively contested.

Definition and scope

Synthetic media in election contexts refers to audio, video, or image content generated or substantially altered by machine learning systems — including generative adversarial networks (GANs) and large language models — to depict a real candidate, official, or voter in a false or fabricated scenario. The Federal Election Commission (FEC) regulates campaign communications under the Federal Election Campaign Act (FECA), codified at 52 U.S.C. § 30101 et seq., but FECA does not contain provisions that explicitly name "deepfakes" or "synthetic media" as regulated categories.

The definitional gap is consequential. Whether a given piece of AI-generated content constitutes a regulated "public communication" under FECA — and thus triggers disclaimer and disclosure requirements — depends on its medium, sponsor, and proximity to an election. The FEC opened a rulemaking docket on artificial intelligence in campaign ads (REG 2023-02) in response to a public petition, but the proceeding stopped short of creating a categorical prohibition on synthetic content.

At the state level, at least 20 states had enacted legislation specifically targeting AI-generated election content by 2024, according to tracking by the National Conference of State Legislatures (NCSL). These statutes vary significantly: some require disclosure labels on synthetic media in political ads; others criminalize the knowing distribution of materially deceptive AI content within a defined window before an election.

The scope of this regulatory domain also touches AI and voting and election law, AI constitutional law questions, and the broader AI regulatory framework in the U.S.

How it works

AI-generated election disinformation typically moves through four identifiable phases:

  1. Content generation: A generative model — either a GAN, a diffusion model, or a text-to-video system — is used to produce or alter audio/video/image content depicting a candidate, election official, or voter. Voice-cloning tools can synthesize speech from as little as three seconds of source audio.
  2. Platform distribution: Synthetic content is uploaded to social media platforms, messaging applications, or email networks, where algorithmic amplification can accelerate reach before human reviewers flag the content.
  3. Contextual injection: The content is timed relative to electoral events — primaries, debate nights, or the 72-hour window before Election Day — to maximize confusion and minimize time for correction.
  4. Attribution evasion: Creators use anonymous accounts, foreign-based servers, or intermediary political committees to sever the traceable link between the content and its origin, complicating FEC enforcement of disclaimer rules under 52 U.S.C. § 30120.

Detection relies on forensic tools that analyze pixel-level artifacts, unnatural blinking patterns, lighting inconsistencies, and phoneme-to-lip synchronization errors. The Defense Advanced Research Projects Agency (DARPA) funded the Media Forensics (MediFor) program specifically to develop automated deepfake detection, though detection accuracy degrades as generation models improve. The National Institute of Standards and Technology (NIST) maintains research into media provenance standards that could support chain-of-custody verification for political communications.
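
The provenance idea behind that NIST work can be illustrated with a deliberately simplified sketch. The example below is a hypothetical model, not the C2PA specification or any agency tool: a publisher binds a SHA-256 content hash and producer metadata into an HMAC-signed manifest, so any later alteration of the media or the manifest fails verification. The key, field names, and function names are illustrative assumptions.

    import hashlib
    import hmac
    import json

    # Hypothetical, simplified provenance check. This is not the C2PA specification
    # or any NIST standard; it only illustrates hash-plus-signature chain-of-custody logic.

    SHARED_KEY = b"demo-key-held-by-the-publisher"   # placeholder secret for the sketch

    def make_manifest(media_bytes: bytes, producer: str, tool: str) -> dict:
        # Record who produced the asset and with what tool, plus a content hash,
        # then sign the manifest so later tampering is detectable.
        manifest = {
            "producer": producer,
            "tool": tool,
            "sha256": hashlib.sha256(media_bytes).hexdigest(),
        }
        payload = json.dumps(manifest, sort_keys=True).encode()
        manifest["signature"] = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
        return manifest

    def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
        # Re-hash the media and re-compute the signature; editing either the
        # asset or the manifest makes verification fail.
        claimed = dict(manifest)
        signature = claimed.pop("signature", "")
        payload = json.dumps(claimed, sort_keys=True).encode()
        expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
        return (hmac.compare_digest(signature, expected)
                and hashlib.sha256(media_bytes).hexdigest() == claimed.get("sha256"))

    if __name__ == "__main__":
        original = b"<approved campaign video bytes>"
        manifest = make_manifest(original, producer="Example Committee",
                                 tool="licensed-generative-video-tool")
        print(verify_manifest(original, manifest))                 # True
        print(verify_manifest(original + b" tampered", manifest))  # False

Real provenance standards rely on public-key signatures and standardized manifest formats rather than a shared key, but the core failure mode is the same: any edit to the asset breaks the chain of custody.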

Common scenarios

Four scenario types recur in documented electoral disinformation cases:

Candidate impersonation audio: Synthetic voice recordings impersonate a candidate conceding an election, making inflammatory statements, or discouraging voter turnout. A documented incident before the 2024 New Hampshire primary involved robocalls that used a synthetic voice resembling President Biden to urge Democrats not to vote in the primary; the incident later resulted in criminal charges against a political consultant.

Fabricated video of election officials: AI-generated video depicts election administrators making false statements about voting procedures, poll closures, or vote-counting irregularities, targeting specific precincts or demographic groups.

Synthetic supporter content: AI-generated social media profiles and content simulate grassroots political movements — a form of "astroturfing" that the FEC's existing disclaimer rules were not designed to detect when the content lacks a human author.

Targeted voter suppression messaging: Personalized AI-generated messages, derived from data broker files or voter rolls, send jurisdiction-specific false instructions — wrong polling locations, incorrect ID requirements — to targeted voter segments. This conduct may trigger federal criminal liability under 18 U.S.C. § 594 (intimidation of voters) and 18 U.S.C. § 1343 (wire fraud).

These scenarios contrast meaningfully with AI bias issues in criminal justice and AI surveillance and Fourth Amendment concerns, where the harm pathway runs through government actors rather than private political operatives.

Decision boundaries

Regulatory treatment of AI-generated election content turns on four principal distinctions:

Disclosure vs. prohibition: State statutes generally fall into one of two categories. Disclosure-based regimes require that AI-generated political ads carry a visible or audible label identifying the content as synthetic. Prohibition-based regimes (exemplified by California's AB 2839) criminalize the knowing distribution of materially deceptive synthetic media within a specified pre-election window, typically 60 to 90 days; a schematic sketch of the two regimes follows the four distinctions below. These approaches carry different First Amendment exposure: pure prohibition regimes face heightened scrutiny under United States v. Alvarez, 567 U.S. 709 (2012), which limits government power to criminalize false statements absent additional harm elements.

Material deception threshold: Not all AI-generated content is legally actionable. Satire and parody that a reasonable viewer would not mistake for fact generally retains First Amendment protection. Statutes that reach only "materially deceptive" content — content designed to create a false impression of fact and likely to do so — track the constitutional limiting principle more closely.

Federal vs. state jurisdiction: FEC jurisdiction attaches to content that qualifies as a "public communication" and constitutes an "expenditure" or "electioneering communication" under FECA. State election law applies independently, although federal law may preempt state regulation of the same communication when it concerns a federal race.

Coordination and attribution: Content produced by a campaign or coordinated with a candidate is subject to contribution and expenditure limits regardless of its AI origin. Independent expenditures that happen to use synthetic media face the same disclosure obligations as conventional political ads under 52 U.S.C. § 30104.
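
The sketch below, referenced under the disclosure-vs.-prohibition distinction above, models the two regime types as simple predicates. The window length, field names, and class name are hypothetical assumptions rather than a restatement of any particular state's statute; it is meant only to show how the decision boundary differs between a label requirement and a time-limited prohibition on materially deceptive content.

    from dataclasses import dataclass
    from datetime import date, timedelta

    # Hypothetical decision-boundary sketch; the 90-day window and field names are
    # illustrative assumptions, not any specific state statute.

    @dataclass
    class SyntheticAd:
        is_ai_generated: bool
        carries_disclosure_label: bool
        materially_deceptive: bool   # False for content a reasonable viewer reads as satire
        distribution_date: date

    def violates_disclosure_regime(ad: SyntheticAd) -> bool:
        # Label-based regime: synthetic political ads must carry a disclosure label.
        return ad.is_ai_generated and not ad.carries_disclosure_label

    def violates_prohibition_regime(ad: SyntheticAd, election_day: date,
                                    window_days: int = 90) -> bool:
        # Window-based regime: materially deceptive synthetic media is barred
        # inside the pre-election window.
        window_start = election_day - timedelta(days=window_days)
        in_window = window_start <= ad.distribution_date <= election_day
        return ad.is_ai_generated and ad.materially_deceptive and in_window

    if __name__ == "__main__":
        ad = SyntheticAd(is_ai_generated=True, carries_disclosure_label=False,
                         materially_deceptive=True,
                         distribution_date=date(2026, 10, 15))
        print(violates_disclosure_regime(ad))                      # True: label missing
        print(violates_prohibition_regime(ad, date(2026, 11, 3)))  # True: inside 90-day window

Note that the prohibition predicate excludes content a reasonable viewer would recognize as satire or parody through the materially_deceptive flag, mirroring the constitutional limiting principle described under the material deception threshold above.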

Administrative Fine Program extension: Legislation enacted on December 19, 2023, amended FECA to extend the FEC's Administrative Fine Program, which lets the Commission assess civil monetary penalties for late or non-filed reports without initiating a full enforcement proceeding. For committees spending on AI-generated political communications, the extension means that reporting lapses connected to such expenditures remain subject to this streamlined penalty mechanism, and it signals continued congressional support for the FEC's disclosure-compliance infrastructure.

The FTC's AI enforcement activity and AI consumer protection law frameworks offer parallel enforcement vectors when election-related AI content also constitutes a deceptive commercial practice — a boundary that remains poorly defined where political speech and commercial activity overlap. For practitioners tracking how these rules interact with platform obligations and AI legislation across U.S. jurisdictions, the statutory landscape continues to evolve faster at the state level than through federal rulemaking.
