Attorney Ethics and AI Use: ABA Rules and State Bar Guidance

The intersection of artificial intelligence and legal practice has generated a growing body of formal guidance from the American Bar Association and dozens of state bars, creating a patchwork of obligations that directly shapes how attorneys may research, draft, and advise using AI tools. This page covers the primary ethical rules implicated by AI use in legal practice, the formal guidance documents that interpret those rules, the classification boundaries between permissible and prohibited conduct, and the unresolved tensions that remain contested across jurisdictions. Understanding these frameworks is essential for any legal practitioner deploying AI in client-facing or case-related work.


Definition and scope

Attorney ethics in the context of AI use refers to the body of professional conduct rules — principally the ABA Model Rules of Professional Conduct and their state-level adoptions — that govern how lawyers may integrate artificial intelligence tools into legal work. The scope encompasses generative AI applications (such as large language models used for drafting), AI-assisted legal research platforms, predictive analytics tools, and automated document review systems.

The ABA Model Rules do not mention AI explicitly in their base text, but the ABA's Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512 (2024) addressing generative AI specifically. Before that opinion, the ABA's Formal Opinion 477R (2017) on cybersecurity and communication established foundational principles on technology security that courts and bars have applied to AI contexts.

At the state level, bar associations in California, Florida, New York, North Carolina, and Pennsylvania, among others, have issued formal ethics opinions or guidance documents specifically addressing AI use. The California State Bar's Practical Guidance for the Use of Generative Artificial Intelligence (2023) is among the most detailed publicly available state-level documents. The geographic scope of applicable rules follows which state(s) have disciplinary jurisdiction over a given attorney under ABA Model Rule 8.5, making multi-jurisdictional practice a compounding complexity.


Core mechanics or structure

The ethical framework governing AI use in legal practice operates through five primary ABA Model Rules, each activated by different features of AI deployment.

Model Rule 1.1 — Competence requires lawyers to provide competent representation, which the rule defines as requiring the legal knowledge, skill, thoroughness, and preparation reasonably necessary for the representation. Comment 8 to Rule 1.1 explicitly states that competence includes keeping current with the benefits and risks of relevant technology. ABA Formal Opinion 512 identifies AI competence as falling within this obligation — meaning attorneys must understand how a tool functions, its known error modes, and its limitations before relying on it. Technological competence with AI is now treated as a discrete analytical category in state guidance documents.

Model Rule 1.6 — Confidentiality prohibits disclosure of client information without informed consent. When client data is entered into a third-party AI platform, confidentiality obligations attach. The threshold question is whether the platform's data retention, training, and sharing practices constitute a prohibited "disclosure." ABA Formal Opinion 512 instructs attorneys to review vendor terms of service, disable training-data collection features where available, and obtain client consent where the risk of disclosure is non-trivial.

Model Rule 5.1 / 5.3 — Supervision imposes responsibility on supervising attorneys and law firm partners for the work of subordinates and nonlawyer assistants. Bar guidance in North Carolina (2024 Formal Ethics Opinion 5) and Florida (Florida Bar Ethics Opinion 24-1) treats AI tools as subject to the same supervision obligations as paralegals — meaning partners cannot delegate AI output review entirely to junior associates without maintaining adequate oversight structures.

Model Rule 3.3 — Candor Toward the Tribunal prohibits making false statements of law or fact to a court and requires correction of material errors. The AI hallucination problem — where large language models generate plausible but nonexistent citations — directly implicates Rule 3.3. Federal sanctions in Mata v. Avianca (S.D.N.Y. 2023) and Park v. Kim (2d Cir. 2024) were grounded in failures to verify AI-generated citations before filing.

Model Rule 7.1 governs attorney communications about services. Attorneys using AI to generate marketing content must ensure statements are not materially false or misleading.


Causal relationships or drivers

The acceleration of formal bar guidance after 2022 traces directly to the public release of large language model products accessible without specialized infrastructure. Before 2023, AI ethics questions arose primarily in e-discovery contexts governed by predictable workflows; generative AI introduced open-ended text generation into drafting and research workflows with no built-in citation verification.

Three structural drivers have shaped the current regulatory response:

Court sanctions created concrete disciplinary data points. After sanctions orders in Mata v. Avianca (2023) and related cases became widely reported, state bars received formal inquiry requests from members seeking prospective guidance — creating political and institutional pressure to issue formal opinions.

Vendor terms of service vary widely — across platforms including Westlaw AI, Lexis+ AI, Harvey AI, and general-purpose services — creating jurisdiction-crossing confidentiality exposure that existing guidance did not address. The market for AI legal research tools fragmented in ways that made blanket rules impractical.

ABA Resolution 112 (2019) had already urged courts and bar associations to address AI's implications for practice, creating a soft mandate that accelerated after generative AI's commercial debut.


Classification boundaries

Ethics guidance distinguishes AI use along three primary axes:

1. Client-data exposure vs. no client-data exposure. Entering client-identifying information, case facts, or privileged communications into an AI platform with external data retention creates confidentiality risk. Using AI tools exclusively on sanitized or hypothetical data does not. This boundary determines whether Rule 1.6 analysis is triggered.

2. AI as research aid vs. AI as final work product. Using AI to generate a research outline that an attorney then independently verifies falls under ordinary supervision norms. Filing AI-generated text without independent verification is the conduct pattern courts have sanctioned. Independent citation verification is the procedural checkpoint at this boundary.

3. Disclosed vs. undisclosed AI use. A growing body of court standing orders — including orders in the U.S. District Courts for the Northern and Eastern Districts of Texas — requires affirmative disclosure of generative AI use in filed documents. Absent such an order, no ABA Model Rule independently mandates disclosure of AI use to opposing counsel or the court, though disclosure to clients may be required under Rule 1.4 (communication) where AI use is material to the representation.
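The verification checkpoint at the second boundary is ultimately a human task, but the worklist for it can be built mechanically. The sketch below is a hypothetical illustration, not part of any bar guidance: `CITATION_RE` and `citations_to_verify` are names invented here, and the pattern catches only reporter-style case citations, so it will miss other formats.

```python
import re

# Hypothetical helper: extract reporter-style case citations from a draft
# so each one can be checked against a primary source by a human reviewer.
# The pattern is illustrative only and will not catch every citation form.
CITATION_RE = re.compile(
    r"[A-Z][\w.'-]*(?:\s[A-Z][\w.'&-]*)*\sv\.\s"   # first party name
    r"[A-Z][\w.'-]*(?:\s[A-Z][\w.'&-]*)*"          # second party name
    r",\s\d+(?:\s[\w.]+)+\s\d+"                    # volume, reporter, page
)

def citations_to_verify(draft: str) -> list[str]:
    """Return every citation-shaped string found in the draft.

    This does NOT verify anything; it only builds the worklist for the
    independent verification step required before filing.
    """
    return CITATION_RE.findall(draft)

draft = ("Plaintiff relies on Mata v. Avianca, 678 F. Supp. 3d 443, "
         "and on Smith v. Jones, 123 F.3d 456.")
print(citations_to_verify(draft))
# → ['Mata v. Avianca, 678 F. Supp. 3d 443', 'Smith v. Jones, 123 F.3d 456']
```

Each extracted string still has to be confirmed against an authoritative primary source; automating the extraction does not discharge the verification duty itself.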


Tradeoffs and tensions

The core tension in attorney AI ethics is between the competence obligation to adopt effective technology and the accuracy obligation to verify outputs before reliance. A lawyer who avoids AI tools entirely may fall behind workflow efficiency norms, potentially disadvantaging clients through higher costs or slower production. A lawyer who relies on AI output without verification exposes clients to errors and the lawyer to sanctions.

A second tension exists between client confidentiality and AI tool effectiveness. More contextual client data fed into an AI system produces better-tailored outputs; less data protects confidentiality but reduces utility. Bar guidance does not resolve this tension with a fixed rule — it requires case-by-case analysis, creating interpretive burden.

Disclosure obligations present a third unresolved area. Courts that require AI disclosure in filings create asymmetric obligations: attorneys in those districts must disclose, while attorneys in other districts need not. The lack of a uniform federal rule produces inconsistent practice, and both the federal-court and state-court landscapes remain fragmented across jurisdictions.

Fee ethics create a fourth dimension: if AI tools reduce drafting time from 8 hours to 2 hours, hourly billing for 8 hours without disclosure may implicate Rule 1.5 (reasonable fees) and Rule 8.4 (misconduct). ABA Formal Opinion 512 addresses this point, concluding that a lawyer billing by the hour may charge only for time actually spent, even where AI materially shortens the work.


Common misconceptions

Misconception 1: ABA Model Rules do not apply to AI use.
The ABA Model Rules apply to the conduct of lawyers, not to specific technologies. AI use triggers existing rules (1.1, 1.6, 3.3, 5.1, 5.3) through the conduct the technology enables, not through technology-specific provisions. Formal Opinion 512 makes this application explicit.

Misconception 2: Using a "legal-specific" AI platform eliminates confidentiality concerns.
Platforms marketed to law firms still vary in their data retention, training, and security practices. Legal-specific branding does not by itself satisfy Rule 1.6 analysis. Attorneys must review specific vendor data agreements, not assume compliance from product category.

Misconception 3: AI-generated citations are automatically accurate if drawn from a legal research database.
Legal research AI tools integrated with Westlaw or LexisNexis reduce but do not eliminate hallucination risk. ABA Formal Opinion 512 and bar guidance in Pennsylvania and North Carolina treat independent verification as a non-waivable obligation regardless of platform.

Misconception 4: Supervision rules (5.1/5.3) do not apply to software.
Bar guidance in Florida, North Carolina, and New York has explicitly extended supervision-analog analysis to AI tool outputs, treating the attorney's relationship to AI-generated work product as analogous to oversight of nonlawyer assistant work.

Misconception 5: There is a uniform national standard.
No uniform federal or national attorney ethics rule on AI exists. The ABA Model Rules are models — each state adopts, modifies, or supplements them independently. California, New York, and Florida each have guidance documents that diverge in specific respects from one another and from the ABA opinion.


Checklist or steps (non-advisory)

The following steps reflect the procedural sequence described in ABA Formal Opinion 512 and state bar guidance documents for AI use in legal practice:

  1. Identify applicable jurisdiction(s). Determine which state bar(s) have disciplinary jurisdiction under ABA Model Rule 8.5 for the matter at issue.

  2. Review governing ethics opinions. Locate formal opinions from the relevant state bar(s) addressing AI use; check for updates, as guidance has changed in multiple states between 2023 and 2024.

  3. Assess client data exposure. Identify whether any information to be entered into an AI platform constitutes client confidential information under Rule 1.6, including information that could identify the client.

  4. Review vendor data practices. Obtain and read the AI platform's current terms of service, privacy policy, and data processing agreement with specific attention to training data use, retention periods, and third-party sharing.

  5. Determine consent requirements. Assess whether the intended use requires client informed consent under Rule 1.6 and whether the engagement agreement or matter-specific authorization addresses AI tool use.

  6. Verify all AI-generated legal citations independently. Cross-reference every case citation, statute reference, and regulatory citation against an authoritative primary source before inclusion in any court filing or client document.

  7. Check court-specific disclosure requirements. Confirm whether the applicable court has a standing order, local rule, or judge-specific requirement mandating disclosure of generative AI use in filings.

  8. Document the review process. Retain records of verification steps taken, vendor agreements reviewed, and any client communications regarding AI use, consistent with ordinary matter documentation practices.

  9. Apply supervision obligations. Ensure that any junior attorney or staff member using AI tools on a matter is subject to oversight consistent with Rules 5.1 and 5.3, including review of AI-assisted work product before filing or delivery.

  10. Monitor for guidance updates. State bar opinions on AI are being issued at an accelerating rate; subscribe to ethics opinion updates from each relevant state bar.
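The ten-step sequence above can be encoded as a matter-level checklist so that no step is silently skipped. The sketch below is a hypothetical illustration only — the step identifiers paraphrase the guidance, and `AiUseChecklist` is a name invented here, not an official ABA or state bar artifact.

```python
from dataclasses import dataclass, field

# Step identifiers paraphrasing the review sequence in the guidance,
# kept in the order the guidance documents present them.
STEPS = (
    "identify_jurisdictions",
    "review_ethics_opinions",
    "assess_client_data_exposure",
    "review_vendor_data_practices",
    "determine_consent_requirements",
    "verify_citations",
    "check_court_disclosure_rules",
    "document_review_process",
    "apply_supervision_obligations",
    "monitor_guidance_updates",
)

@dataclass
class AiUseChecklist:
    """Hypothetical per-matter tracker for the AI-use review steps."""
    matter_id: str
    completed: set[str] = field(default_factory=set)

    def complete(self, step: str) -> None:
        if step not in STEPS:
            raise ValueError(f"unknown step: {step}")
        self.completed.add(step)

    def outstanding(self) -> list[str]:
        # Preserve the ordering of the guidance documents.
        return [s for s in STEPS if s not in self.completed]

checklist = AiUseChecklist("2024-0042")  # illustrative matter number
checklist.complete("identify_jurisdictions")
checklist.complete("review_ethics_opinions")
print(checklist.outstanding()[:2])
# → ['assess_client_data_exposure', 'review_vendor_data_practices']
```

Treating the sequence as data rather than memory mirrors the documentation step itself: the `completed` set doubles as a record of which review steps were actually performed on the matter.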


Reference table or matrix

| Ethical Obligation | Governing Rule(s) | Primary Trigger in AI Context | Key Authority |
| --- | --- | --- | --- |
| Technological competence | ABA Model Rule 1.1, Comment 8 | Deploying AI without understanding its limitations or error modes | ABA Formal Opinion 512 (2024) |
| Client confidentiality | ABA Model Rule 1.6 | Entering client data into third-party AI platforms with external data retention | ABA Formal Opinion 512; CA State Bar AI Guidance (2023) |
| Candor to tribunal | ABA Model Rule 3.3 | Filing AI-generated citations without independent verification | Mata v. Avianca (S.D.N.Y. 2023); Park v. Kim (2d Cir. 2024) |
| Supervision of AI outputs | ABA Model Rules 5.1, 5.3 | AI-assisted work product not reviewed before filing or delivery | FL Bar Ethics Opinion 24-1; NC 2024 Formal Ethics Opinion 5 |
| Reasonable fees | ABA Model Rule 1.5 | Billing hourly rates for time AI has materially reduced | ABA Formal Opinion 512 |
| Communication with client | ABA Model Rule 1.4 | Material AI use affecting representation not communicated to client | ABA Formal Opinion 512 |
| Court disclosure | Local rules / standing orders | Generative AI used in preparing filed documents | N.D. Tex., E.D. Tex. standing orders; jurisdiction-specific |
| Supervision of staff AI use | ABA Model Rules 5.1, 5.3 | Paralegal or associate AI use without partner oversight | PA Bar Association Guidance (2024) |
