The Rule 5.1 to 1.5 Palindrome of AI
- Frederick L Shelton
- Jun 20

The AI Evolution From Rule 5.1 to 1.5
Before the advent of ChatGPT, I published an article predicting that AI would not only reduce millions of billable hours but also reshape the legal profession itself.
At the time, many dismissed my insights as hyperbole. And yet here we are, with AI tools now embedded in everything from research to contract analysis, and even risk assessment.
Some of those early criticisms weren’t without merit—AI tools did, and still sometimes do, have serious flaws and glitches.
What attorneys missed then—and many still miss now—is this: the most dangerous misconception about legal AI isn’t that it’s fallible. It’s that it’s just tech.
It isn’t. Not anymore.
It’s Not Just a Tool—It’s a Team Member
In 2025 and beyond, attorneys need to view AI as the equivalent of a junior associate—an intelligent, tireless, if occasionally overeager team member that you are responsible for supervising. This isn’t a metaphor. It’s an ethical reality that places AI squarely under the purview of ABA Model Rule 5.1.
Under Rule 5.1, attorneys are obligated to supervise subordinate lawyers and ensure that their conduct complies with the Rules of Professional Conduct. With AI now performing tasks traditionally done by junior attorneys (research, drafting, summarization, redlining, citation validation, and more), its outputs must be reviewed with the same scrutiny we apply to the work of a first-year associate.
The consequences of failing to do so aren’t theoretical. Inaccurate citations, incomplete risk language, misinterpreted precedent—any of these can lead to professional embarrassment or even professional liability claims.
The Human Eyes Failsafe
This doesn’t mean lawyers need to become AI engineers. But they do need a Human Eyes Failsafe (HEF)—an internal checkpoint ensuring that everything the AI produces is reviewed, verified, and, where needed, revised. As our clients using both generative and agentic AI already do, firms should build this requirement into all client agreements.
Think of AI like a highly capable associate on an endless caffeine drip. It won’t sleep. It won’t complain. But it also won’t raise its hand when something seems off. That’s still your job.
My Next Prediction: AI Will Shift From Rule 5.1 to Rule 1.5
Here’s where the next wave of ethical scrutiny is headed. Rule 1.5 of the Model Rules requires that a lawyer’s fees be “reasonable.”
So what happens when a lawyer bills ten hours for a task that AI could have completed in five minutes? What happens when firms refuse to adopt AI tools that dramatically reduce overhead, complexity, and time, and pass those inefficiencies on to clients as higher fees?
Eventually, lawyers who decline to use appropriate AI tools could face the same professional critique once levied at those who billed for days in the law library after Westlaw became standard. The professional rules won’t just require the supervision of AI; they will require the use of it.
Within 18 to 24 months, the firms still resisting AI adoption will face a two-pronged threat: clients questioning their value, and regulators questioning their ethics.
Frederick Shelton is a Market Advisor and Consultant to law firms, legal MSOs, and funds on subjects including legal AI, ABS models, MSOs, and M&A. He can be reached at fs@sheltonsteele.com.