Law School Admissions Just Became a Test of AI Judgment
- Ayven Dodd

- Oct 8
- 3 min read
Updated: Oct 9

For years, law schools warned students not to touch AI. That stance is shifting. The University of Michigan and the University of Miami now invite applicants to engage with generative AI in optional essay prompts. While not mandatory, these prompts signal a changing direction in legal education and the legal industry as a whole. At Michigan, applicants may submit an extra essay reflecting on how they have used AI and how they plan to manage its ethical use. Miami went further by encouraging applicants to experiment with AI in a separate essay prompt, with a significant portion of applicants taking up the invitation.
Why Law Schools Are Doing This
Law schools are responding to what firms and clients are already looking for. Both want attorneys who know how to use AI responsibly and efficiently.
Like a pianist who starts young and develops muscle memory over time, a law graduate who learns AI early will move faster and perform with more confidence than one who starts later.
This can also help prevent future missteps. Courts have sanctioned lawyers for submitting fake AI-generated citations, firms are investing in tools they still do not fully understand, and clients are starting to ask what their counsel's AI policy looks like.
By introducing AI earlier, schools are teaching future lawyers to use these tools with discipline and judgment. The goal is to strengthen critical thinking around technology that is already changing the legal market.
Clients will notice. If law schools are normalizing AI, clients will expect the same from their counsel. Firms that cannot show how they use technology to save time and increase accuracy will start to look overpriced, no matter how strong the brand.
For firms already implementing AI, it builds confidence when partners can tell clients that their teams are trained on responsible AI use and that new associates arrive from law school already fluent in it. Clients hear two things at once: the firm is efficient with resources, and it is ahead of compliance risk. In a pitch meeting, that matters.
The New Wave
Younger attorneys will see AI the way older attorneys saw Westlaw when it arrived, as the natural next step. They will treat it as part of the work, not an optional tool.
If leadership cannot mentor AI-literate attorneys or explain to clients how those tools are governed, they will lose credibility fast.
Law schools are producing a class of lawyers who expect to work with AI. Firms still debating it are starting to look like Kodak arguing about digital cameras.
The Crystal Ball
Since the Mississippi School of Law has already announced that all first-year students will be required to complete a certification in AI and the law in spring 2026, it is not difficult to see where things are headed. Today, it is an optional essay. In a few years, it might be part of the bar exam. AI ethics and verification (a human-review fail-safe) will likely become standard curriculum nationally, and perhaps internationally. Firms will begin factoring AI fluency into their hiring processes, not just pedigree. Client RFPs will include questions about responsible AI use. And before long, prompt logs might sit alongside billing records to show how work was produced.
If law schools are already teaching attorneys to reason with AI, it's because they know what's coming. The firms that learn now will lead. The firms that can't explain how their attorneys use AI to streamline work won't just lose clients; they will lose new talent as well.