Duty of Technological Competence: What Lawyers Must Know About AI
The duty of technological competence requires attorneys to understand the benefits and risks of technology relevant to their practice — a standard that has expanded significantly as AI tools have entered legal workflows. This page covers the regulatory foundation of that duty, its operational mechanics, the practice scenarios where it applies, and the boundaries that separate permissible reliance on AI from professional violations. The subject matters because disciplinary consequences, malpractice exposure, and client harm all follow from misapplication of AI tools by practitioners who lack adequate technical literacy.
Definition and scope
Rule 1.1 of the ABA Model Rules of Professional Conduct establishes the baseline competence obligation, requiring that a lawyer provide representation with "the legal knowledge, skill, thoroughness, and preparation reasonably necessary for the representation." Comment 8 to Rule 1.1, adopted by the ABA in 2012, extended this requirement explicitly to technology, stating that competence includes keeping "abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology" (ABA Model Rules of Professional Conduct, Comment 8 to Rule 1.1).
As of 2023, at least 40 U.S. states had adopted Comment 8 or equivalent language into their state bar rules, according to the ABA's Center for Professional Responsibility. The scope of technological competence is not limited to cybersecurity or e-filing systems. It now encompasses AI-powered legal research platforms, contract analysis tools, predictive analytics, and AI legal drafting tools — any technology that a competent practitioner in a given practice area would reasonably be expected to evaluate and use appropriately.
The duty is ongoing, not point-in-time. An attorney who understood a particular tool at the time of its adoption may become non-compliant as the tool's capabilities, limitations, or known failure modes evolve. This dynamic quality distinguishes the technological competence duty from static knowledge requirements in substantive law areas.
How it works
The duty of technological competence operates through a three-layer framework:
- Evaluation duty: Before deploying any AI tool in a client matter, the attorney bears responsibility for understanding what the tool does, how it generates outputs, its documented error rates, and whether its outputs require verification. This applies with particular force to AI legal research tools that may produce hallucinated citations — fabricated case references that appear authoritative but do not exist.
- Supervision duty: Under ABA Model Rules 5.1 and 5.3, attorneys must supervise subordinate lawyers and nonlawyer assistants. AI tools fall within the Rule 5.3 framework by analogy, as confirmed in ethics opinions from state bars including the State Bar of California's Interim AI Guidance (2023) and the New York City Bar Association's Formal Opinion 2023-2. Supervision means reviewing AI outputs before relying on them in pleadings, negotiations, or advice.
- Disclosure duty: Competence intersects with Model Rule 1.4 (communication) when clients need material information about how their matter is being handled. State ethics opinions diverge on whether AI use must be disclosed proactively, but the California State Bar's guidance and the Florida Bar's Ethics Opinion 24-1 both identify circumstances where disclosure or consent is required, particularly when confidential data is input into third-party AI systems.
These layers are not sequential in practice — they run concurrently throughout representation. Across jurisdictions, the framework governing attorneys' ethical use of AI operationalizes these layers through bar opinions, court rules, and guidance documents rather than through a single unified federal standard.
Common scenarios
Four practice scenarios illustrate how the competence duty activates in AI-integrated workflows:
Legal research: An attorney uses a large language model to generate a memorandum of supporting authority. Without independent verification of each cited case, the attorney submits a brief containing nonexistent citations. Courts including the U.S. District Court for the Southern District of New York (Mata v. Avianca, 2023) have sanctioned attorneys under Rule 11 of the Federal Rules of Civil Procedure for this failure. The competence duty required verification; the attorney's omission breached it. The AI hallucination legal consequences page covers this failure mode in detail.
Contract review: An attorney uses an AI platform to flag nonstandard clauses in a commercial agreement. The platform misses a limitation-of-liability provision because the clause was formatted atypically. If the attorney treated the AI output as complete without reviewing the full document, the competence duty — alongside potential malpractice liability — is triggered. Contrast this with an attorney who uses the AI as a first-pass filter and reviews the full document independently: that workflow satisfies the duty.
Criminal and immigration matters: AI risk-assessment tools generate scores that influence plea bargaining advice and case strategy. An attorney relying on such scores without understanding the underlying methodology may provide inaccurate risk assessments to clients, implicating both the competence duty and the concerns about AI bias in criminal justice documented by researchers and civil liberties organizations including the Electronic Frontier Foundation.
Document review in litigation: AI-assisted eDiscovery document review workflows require attorneys to understand predictive coding validation protocols. Courts have held, in cases applying Federal Rule of Evidence 502 and Federal Rule of Civil Procedure 26, that attorneys must be able to explain and defend the reasonableness of their document review methodology.
Decision boundaries
The duty of technological competence draws clear lines between permissible and impermissible conduct. The following contrasts define those boundaries:
Delegation vs. Abdication: Using AI to accelerate research, draft initial documents, or flag issues is delegation within a supervised workflow. Submitting AI outputs without review, representing AI-generated content as independently verified, or failing to understand the tool's known limitations is abdication. Bar ethics authorities in California, New York, and Florida treat these differently in outcome determinations.
Tool familiarity vs. Technical mastery: The standard does not require attorneys to understand the mathematical architecture of a neural network. It requires sufficient functional understanding to recognize when an output is unreliable, when to seek technical consultation, and when a tool is unsuitable for a given task. The ABA's Formal Opinion 477R (cybersecurity) provides an analogous framework: competence means reasonable understanding, not engineering expertise.
Confidentiality breach vs. Permissible processing: Inputting client confidential information into a public-facing AI tool that retains and trains on that data may violate ABA Model Rule 1.6 (confidentiality). The analysis of AI confidentiality and attorney-client privilege turns on whether the attorney assessed the tool's data retention policies before use — a competence prerequisite.
Supervised use vs. Unauthorized practice risks: AI tools that generate legal documents without attorney review raise unauthorized-practice-of-law concerns when offered directly to consumers, but an attorney's internal use of the same tool does not. The competence duty governs the attorney's review obligation in the latter scenario.
The legal malpractice exposure associated with competence failures is not theoretical. Malpractice insurers have begun asking underwriting questions about AI workflows, signaling that the insurance market treats technological competence as a measurable risk variable. Bar disciplinary authorities treat ignorance of a tool's known limitations as an aggravating factor, not a mitigating one, when AI-related harm occurs in a client representation.
References
- ABA Model Rules of Professional Conduct, Rule 1.1 and Comment 8
- ABA Center for Professional Responsibility
- ABA Formal Opinion 477R — Securing Communication of Protected Client Information
- State Bar of California — Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law (2023)
- New York City Bar Association, Formal Opinion 2023-2
- Florida Bar Ethics Opinion 24-1
- Federal Rules of Civil Procedure, Rule 11 and Rule 26 — U.S. Courts
- Electronic Frontier Foundation — AI and Civil Liberties