ABA Opinion 512 on AI | Five Questions GCs Should Be Able to Answer. Most Can't.
Welcome to Legal Ops Briefs. Inspired by the mot-r mindset, this series of 3-minute reads gives in-house Legal Ops teams quick, operational insights. Each post explores the tech, trends, and tactics that boost operational effectiveness and ease legal team stress, without adding to the noise.
Somewhere in your legal department, someone is using an AI tool you didn’t authorize, on a matter you don’t know about, in a way you haven’t reviewed. In a 2024 survey commissioned by Axiom Law and conducted by Wakefield Research, 83% of in-house lawyers reported using AI tools not provided by their company, and 81% acknowledged using unapproved tools. Every respondent acknowledged that this carries risk.
Bar discipline is the least of the consequences here; enforcement against in-house counsel is historically rare. The more significant risks are structural: already present in most departments, and growing without ever becoming visible.
In July 2024, the ABA issued Formal Opinion 512, its first formal guidance on generative AI in legal practice. The opinion maps existing professional obligations (competence, confidentiality, communication, supervision) onto a new category of tool. Distilled, it comes down to five questions any GC should be able to answer about AI use in their department:
1. Did the people using these tools understand them well enough to know when to trust the output?
2. Did client information stay out of systems where vendors could access it or use it for model training?
3. Were clients informed where the rules require disclosure, and did someone make a deliberate decision where they don't?
4. Did a lawyer exercise genuine professional judgment over every AI output before it went anywhere?
5. Did the GC, as supervising attorney, ensure that lawyers and non-lawyers alike were using AI consistently with their obligations?
These five questions are the minimum a competent GC should be able to answer about any tool in active use across the department. Most can't, and the reasons are structural.
You can write an AI policy, deliver training, and mandate human review of outputs, but if your workflows aren't documented and there's no systematic way to see what's being worked on, by whom, and with what tools, your policy framework describes controls you can't actually enforce.
Privileged communications routed through a vendor's commercial AI tool (anything from a board discussion to an employment matter to an acquisition) can end up in that vendor's training data, and the privilege is gone before anyone knows it was at risk. An overworked AGC uses AI to draft advice that turns out to be wrong, the business acts on it, and the harm is done before any lawyer reviews the output.
A department that can see its own workflows can answer all five questions. Most can't, which carries a particular irony when the legal team is also the one setting AI policy for the rest of the enterprise.
Chime In. Be Heard.
The conditions outlined above — unauthorized tool use, undocumented workflows, policy frameworks that describe controls no one can actually verify — are not hypothetical. They exist in most departments right now. If you're working on this problem, or if you've made progress on it, the specifics of what you've tried and what you've learned are worth more to this community than any framework we could offer. Your insights can help other Legal Ops leaders navigate one of the most urgent and least understood shifts in how legal work gets done. Share them in the comments.
mot-r is the next-generation ELM platform for modern Legal Ops teams. Unlike traditional ELMs, CLM tools, or disconnected point solutions, mot-r provides a low-risk way to resolve the structural causes of legal overload—not just track matters after the fact. By bringing structure to legal intake and visibility to execution, mot-r helps legal teams improve service quality, regain capacity, and reduce burnout. The result is better decisions, higher-value legal service, and an operating model teams can sustain as demand grows.

