The 95% Problem
Roughly a third of lawyers now use AI in their practice. The ABA’s 2024 Legal Technology Survey Report found that 30% of respondents are using AI-based tools, up from 11% the year before. The Thomson Reuters 2025 Generative AI in Professional Services Report puts it at 26% of legal organizations actively using generative AI, up from 14% in 2024. The vendor ecosystem, meanwhile, has grown as if the number is much higher. Harvey, CoCounsel, Spellbook, Everlaw, Lexis+ AI, and a dozen others now compete to sell lawyers a chatbot for every task they do.
Even those figures overstate how settled adoption is. The ABA survey found that an additional 15% of respondents were “seriously considering” AI tools but had not yet purchased them, while 22% said they did not know enough about AI to say whether their firm was using it. The Thomson Reuters 2025 Future of Professionals Report identifies what it calls a stark competitive divide: organizations with a clear AI strategy are nearly four times more likely to see benefits from AI than those without one. Among law firms, nearly a third had no plan for AI adoption at all.
So: a minority of lawyers have adopted AI, most of them recently, and most adopters have no framework for what they are doing with it.
But what the adopters are actually doing with it is the real problem.
Ask a lawyer who does use AI what they use it for, and you get the same handful of answers: drafting routine contracts, summarizing depositions and transcripts, drafting discovery responses, accelerating legal research, and cleaning up correspondence. These are the use cases vendors market, the use cases CLE panels demonstrate, and the use cases that show up in every “AI for lawyers” explainer published in the last couple of years. Each one treats AI as a faster paralegal — a tool for producing routine work product more quickly.
This is the 95% use case. It is not wrong. It saves hours on routine tasks, but it caps the value of the technology at exactly the ceiling of the work it replaces, which is the grunt work.
The high-leverage work is untouched.
What the 95% Use Case Misses
Strategic litigation turns on what you notice before the other side does. Which case to bring. Which theory to advance. Which factual pattern in the record will undermine the other side’s argument. Which case from an unrelated area of law turns out to control the argument.
Every one of these tasks is cognitive — pattern recognition, interpretation, adversarial reasoning — and none of them appear in any vendor’s feature list. They require holding enough of the statute and the record in mind to see the patterns across them, weighing two readings of the same provision against each other, and finding someone willing to push back when the reasoning is weak.
Historically, the only way to get this kind of analytical partnership has been to find a senior colleague willing to give you their time, or a co-counsel relationship where the intellectual labor is reciprocal, or a small inner circle of lawyers in your subject area who will take your call. Senior attention is the thing in short supply, and even big public interest shops run short on it.
Generative AI changes what is available. It is available at any hour, adversarial when you need it to be, and capable of reading the statute and the record together. Used at the strategic level, it works through the statute, the facts, and your theory of the case, then pushes on the theory until it breaks or holds.
This is not what the vendors are selling.
A Concrete Example
Consider a litigator preparing to file a federal challenge to an agency action that is already being challenged by three separate plaintiff groups in another district. The claims need to satisfy several constraints simultaneously. They must be strong enough on the merits to support a preliminary injunction. They must avoid duplicating claims already being litigated by other plaintiffs — both to prevent waste of judicial resources and to increase the chance of winning by bringing a wider range of distinct legal theories. And certain statutory frameworks, while facially available, are undesirable because they introduce vulnerabilities on the merits or dilute the strongest theories.
These constraints cut against each other. The strongest merits theory may overlap substantially with claims already pending elsewhere. The claims most distinct from existing litigation may rest on weaker legal ground. Flagging a related case requires enough doctrinal overlap to satisfy the local rule, while the existence of parallel litigation in another district means the government will argue the whole dispute belongs there. Every choice about which claims to include changes every other piece at once.
A single attorney working through this problem sequentially would start with the merits analysis, then map the existing complaints, then check whether the strongest theories duplicate what other plaintiffs have already filed, then loop back to adjust. Each pass changes the picture. The process is iterative, slow, and limited by how many moving pieces a single mind can hold at once.
Loaded with the full text of the relevant statutes, the existing complaints from all three plaintiff groups, the agency’s decision documents, and the local rules governing related-case designation, AI can hold the entire problem in view simultaneously. Prompted to identify claims that satisfy all constraints at once — strong on the merits, doctrinally parallel to a pending related matter on the key statutory question, absent from the other plaintiffs’ complaints, and grounded in statutes that avoid the undesired framework — it can generate candidate claim sets and test each one against the full set of constraints.
The output of this process is a litigation architecture — a set of claims selected for strategic fit across multiple simultaneous requirements that no single linear review could optimize as quickly or as thoroughly.
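The selection logic described above can be sketched as a simple constraint filter. This is a toy model: the claim names, numeric strength scores, thresholds, and boolean attributes are all invented for illustration, and real claim selection involves doctrinal judgment that no boolean flag captures.

```python
# Toy sketch of multi-constraint claim selection. Every claim name,
# score, and threshold here is hypothetical, chosen only to show the
# shape of the filtering process described in the text.
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Claim:
    name: str
    merits_strength: float          # subjective 0-1 estimate of PI-stage strength
    overlaps_pending: bool          # duplicates a claim in the parallel suits?
    uses_undesired_framework: bool  # rests on a statute the team wants to avoid?

def viable(claim: Claim) -> bool:
    """A claim survives only if it clears every constraint at once."""
    return (claim.merits_strength >= 0.6
            and not claim.overlaps_pending
            and not claim.uses_undesired_framework)

def candidate_sets(claims: list[Claim], size: int) -> list[tuple[Claim, ...]]:
    """Enumerate claim sets whose members all pass the constraint filter."""
    survivors = [c for c in claims if viable(c)]
    return list(combinations(survivors, size))

claims = [
    Claim("APA arbitrary-and-capricious", 0.80, overlaps_pending=True,  uses_undesired_framework=False),
    Claim("Notice-and-comment failure",   0.70, overlaps_pending=False, uses_undesired_framework=False),
    Claim("Ultra vires",                  0.65, overlaps_pending=False, uses_undesired_framework=False),
    Claim("Spending Clause theory",       0.90, overlaps_pending=False, uses_undesired_framework=True),
]

for claim_set in candidate_sets(claims, size=2):
    print([c.name for c in claim_set])
```

The point of the sketch is the structure, not the arithmetic: every candidate set is tested against every constraint at the same time, rather than one constraint per pass, which is what distinguishes the approach from the sequential review a single attorney would run.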
Why the Profession Is Stuck
Several forces hold the 95% pattern in place.
The billable hour rewards inefficiency at the level of the individual matter, which means any technology that shortens the thinking work threatens revenue instead of building capacity. The vendors, in turn, have built their products around tasks that can be sold as time-savers without disrupting the underlying economic model — drafting, summarizing, reviewing — because those are the tasks firms will actually pay to automate. The CLE and conference circuit reflects what the vendors are selling, which reinforces the perception that the 95% use case is the use case.
Risk aversion compounds the problem. The dominant narrative around AI in law is hallucination, and the ethics opinions and sanctions orders that have followed it. That narrative has produced a defensive posture in which the safest use of AI is the most clerical use — the use where a human reviews every output against a known template, and the worst that happens is a typo. Strategic use feels riskier because the lawyer is using AI to think, not just to draft.
There is also a reason nobody talks about. The lawyers who have figured out how to use AI strategically are not, for the most part, writing about it. They are using it. The public conversation about legal AI is dominated by vendors, consultants, and generalists, but the practitioners are quiet.
Where the Cost of the Mistake Is Highest
Opposing counsel in public interest cases is already investing in this capability. Regulated industries facing environmental, civil rights, and consumer enforcement actions are represented by BigLaw firms with the budgets, the technologists, and the institutional appetite to use AI aggressively. They are doing it now. The asymmetry between resourced defendants and public interest plaintiffs — already the defining structural problem in this kind of litigation — is widening in real time, and the public interest bar has not noticed.
The organizations doing this work are not small. The largest public interest litigation organizations carry budgets in the tens to hundreds of millions of dollars and field benches of well over a hundred attorneys. They are also slow to adopt AI, and slower to change how they work.
Buying more Harvey licenses will not produce a litigator who can use AI to spot the vulnerability the agency missed. That skill develops inside live matters, against real opposition, over time. It is not something a vendor can sell, and it is not something an institution can procure on a timeline.
That is what the profession is missing.

