The Emerging Role of Artificial Intelligence in the Courtroom

By: Smriti Vijay 
Edited by: Sophia Cheng and Hanna Becker

As artificial intelligence in everyday life has grown more prominent in recent years, it was only a matter of time before discussions emerged about AI’s impact on the legal industry. AI continues to challenge the balance between improving productivity and preserving human judgment in the courtroom. 

Earlier this year, U.S. Senate Judiciary Committee Chairman Chuck Grassley revealed that two U.S. District Judges, Henry Wingate of Mississippi and Julien Xavier Neals of New Jersey, had used artificial intelligence to prepare court orders.[1] One of Wingate’s law clerks used Perplexity.ai, an AI-powered answer engine, as a drafting assistant, while a law school intern for Neals used ChatGPT for research purposes. As a result, Wingate’s and Neals’s court orders were riddled with errors, including misquotes of state law and references to nonexistent people and events. 

Following the exposure of their inaccurate court orders, both Wingate and Neals instituted preventive measures to ensure that similar AI-related mistakes do not occur again. For instance, Wingate now requires all draft opinions, orders, and memos to undergo independent review by additional law clerks, and all cited cases to be printed and attached to the final draft. Similarly, Neals requires further review of all documents and has prohibited all law clerks and interns from using AI when drafting opinions and orders.[2] 

Although these judges quickly moved to resolve these issues, the broader conversation about the extent of AI usage in the courtroom persists. As AI becomes more popular and embedded in daily life, is it reasonable to completely prohibit its use? Furthermore, can courts use AI as a beneficial and effective tool while still upholding the law's main tenets?

The National Center for State Courts recommends that courts begin implementing AI through small tasks like summarizing documents or drafting internal communications.[3] For larger undertakings, however, courts must weigh their confidentiality obligations to the parties involved in cases, along with other security risks. 

In a statement, Grassley warned about the growing use of AI in legal proceedings and emphasized that those who use this technology must ensure that it does not violate the rights of litigants or undermine fair treatment under the law.[4] Similarly, many lawmakers have been taking steps to understand how AI can play a part in the courtroom. 

Despite its potential harms, AI boasts indisputable benefits. For example, Technology-Assisted Review (TAR) is an AI tool that uses predictive coding to categorize electronic documents.[5] A federal court first approved such computer-assisted review in 2012, in the landmark case Da Silva Moore v. Publicis Groupe.[6] In litigation, TAR helps handle the large volume of data typically involved in legal cases. Human reviewers manually review a sample of documents and label them based on their relevance to the case; machine-learning models then learn from these categorizations and classify the remaining documents. TAR has the potential to significantly reduce the time required for document review and to accelerate the overall timeline of court cases.
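The predictive-coding workflow described above can be sketched in a few lines of code. This is a simplified, hypothetical illustration, not any court's actual system: the documents, labels, and the naive Bayes scorer are all assumptions chosen for brevity. A human reviewer labels a small "seed set" of documents as relevant or not, a simple classifier is trained on those labels, and the classifier then categorizes the remaining, unreviewed documents.

```python
from collections import Counter
import math

# Hypothetical seed set: a human reviewer has labeled each document
# as relevant (1) or not relevant (0) to the case.
seed_set = [
    ("merger agreement signed by the board", 1),
    ("quarterly merger negotiations memo", 1),
    ("office holiday party schedule", 0),
    ("cafeteria menu for next week", 0),
]

def train(labeled_docs):
    """Count word frequencies per class -- a minimal 'predictive coding' model."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter()
    for text, label in labeled_docs:
        priors[label] += 1
        counts[label].update(text.lower().split())
    return counts, priors

def score(model, text):
    """Return the more likely label via naive Bayes with add-one smoothing."""
    counts, priors = model
    vocab = set(counts[0]) | set(counts[1])
    best_label, best_logp = None, float("-inf")
    for label in (0, 1):
        total = sum(counts[label].values())
        logp = math.log(priors[label] / sum(priors.values()))
        for word in text.lower().split():
            logp += math.log((counts[label][word] + 1) / (total + len(vocab) + 1))
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

model = train(seed_set)
# The trained model then categorizes the remaining, unreviewed documents.
for doc in ["draft merger agreement terms", "lunch menu update"]:
    print(doc, "->", "relevant" if score(model, doc) else "not relevant")
```

Production TAR systems use far more sophisticated classifiers and iterative review rounds, but the division of labor is the same: humans supply judgments on a sample, and the model extrapolates those judgments to the rest of the collection.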

While AI tools like TAR increase productivity and efficiency, it is important for lawmakers to consider the potential impact of AI-related mistakes in the courtroom. Misclassifying a single document as insignificant rather than highly important could mean the difference between a breakthrough and a dead end in a case. Because people’s lives and livelihoods may be at stake, such procedures should retain meaningful human oversight. 

Earlier this year, Judge Robert J. Conrad, Director of the Administrative Office of the U.S. Courts, said an AI task force has assumed responsibility for developing a guide on using AI and distributing it to federal courts.[7] While Conrad acknowledged the benefits of AI in his statement, he did not shy away from the qualms he and many other officials have about its capabilities, including “concerns around maintaining high ethical standards, preserving the integrity of judicial opinions, safeguarding sensitive Judiciary data, and ensuring the security of the Judiciary’s IT systems.” While the AI task force’s guide has not been released to the public, Conrad shared that it focuses on suggestions for handling confidential information with AI and ensuring the security of court cases. 

In addition, the Senate has held hearings on AI, discussing the future of regulation and its implications for the Judiciary.[8] During these hearings, the Senate focused on the main risks of AI, including bias, privacy violations, scams, fraud, cyberattacks, discrimination, and misinformation. As for handling these risks, much of the conversation has focused on the National Institute of Standards and Technology (NIST) AI Risk Management Framework.[9] The Framework outlines four core functions for managing AI risks: govern, or promote a culture of risk management; map, or recognize the context of AI risks in any given situation; measure, or assess and analyze identified risks; and manage, or prioritize AI risks based on their impact. Additionally, commonly suggested preventive measures have emphasized transparency in AI use, like implementing disclosure requirements and adding watermarks to AI-generated content. 

Ultimately, while AI has the potential to transform judicial proceedings, human verification at every step remains necessary.[10] AI excels at pattern recognition, but it still lacks the ability to make judgments comparable to those of humans. Especially in the courtroom, where human intelligence and empathy are so crucial, AI, in its current form, should be used cautiously. To be as safe as possible, courts should assume that AI-generated content may contain errors or biases. Even the most advanced AI models can still suffer from “hallucinations”: fabricated statements presented as fact in AI-generated work. 

However, the answer is not to completely reject AI in the litigation process; many courts are currently testing AI tools for legal research, document review, and case management. As of now, courts should use AI on a case-by-case basis to ensure that generative tools are used responsibly. Furthermore, courts must continue to prioritize human judgment and fair treatment under the law. 

Notes: 

1. Sara Merken, “Two Federal Judges Say Use of AI Led to Errors in U.S. Court Rulings,” Reuters, October 23, 2025, https://www.reuters.com/sustainability/society-equity/two-federal-judges-say-use-ai-led-errors-us-court-rulings-2025-10-23/. 

2. U.S. Senate Committee on the Judiciary, “Grassley Releases Judges’ Responses Owning Up to AI Use, Calls for Continued Oversight and Regulation,” Press Release, November 5, 2025, https://www.judiciary.senate.gov/press/rep/releases/grassley-releases-judges-responses-owning-up-to-ai-use-calls-for-continued-oversight-and-regulation. 

3. National Center for State Courts, “Guidance for Implementing AI in Courts,” NCSC, 2024, https://www.ncsc.org/resources-courts/guidance-implementing-ai-courts. 

4. U.S. Senate Committee on the Judiciary, “Grassley Calls on the Federal Judiciary to Formally Regulate AI Use,” Press Release, October 17, 2025, https://www.judiciary.senate.gov/press/rep/releases/grassley-calls-on-the-federal-judiciary-to-formally-regulate-ai-use. 

5. Paul W. Grimm, Cary Coglianese, and Maura R. Grossman, “AI in the Courts: How Worried Should We Be?” Judicature 107, no. 3 (2024), https://judicature.duke.edu/articles/ai-in-the-courts-how-worried-should-we-be/. 

6. Da Silva Moore v. Publicis Groupe, 2012 WL 607412 (S.D.N.Y. Feb. 24, 2012).

7. Madison Alder, “Interim AI Guidance for U.S. Courts Aims for Experimentation, Guardrails,” FedScoop, August 14, 2024, https://fedscoop.com/interim-ai-guidance-us-courts-aims-experimentation-guardrails/. 

8. Faiza Patel and Melanie Geller, “Senate AI Hearings Highlight Increased Need for Regulation,” Brennan Center for Justice, July 9, 2024, https://www.brennancenter.org/our-work/analysis-opinion/senate-ai-hearings-highlight-increased-need-regulation. 

9. National Institute of Standards and Technology (NIST), Artificial Intelligence Risk Management Framework (AI RMF 1.0), NIST AI 100-1 (Gaithersburg, MD: U.S. Department of Commerce, 2023), https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.

10. Alexander Melvin, “AI Hallucinations in Legal Proceedings,” JDSupra, July 8, 2024, https://www.jdsupra.com/legalnews/ai-hallucinations-in-legal-proceedings-9738494/. 

Bibliography: 

Alder, Madison. “Interim AI Guidance for U.S. Courts Aims for Experimentation, Guardrails.” FedScoop, August 14, 2024. https://fedscoop.com/interim-ai-guidance-us-courts-aims-experimentation-guardrails/. 

Da Silva Moore v. Publicis Groupe. 2012 WL 607412 (S.D.N.Y. February 24, 2012).

Grimm, Paul W., Cary Coglianese, and Maura R. Grossman. “AI in the Courts: How Worried Should We Be?” Judicature 107, no. 3 (2024). https://judicature.duke.edu/articles/ai-in-the-courts-how-worried-should-we-be/. 

Melvin, Alexander. “AI Hallucinations in Legal Proceedings.” JDSupra, July 8, 2024. https://www.jdsupra.com/legalnews/ai-hallucinations-in-legal-proceedings-9738494/. 

Merken, Sara. “Two Federal Judges Say Use of AI Led to Errors in U.S. Court Rulings.” Reuters, October 23, 2025. https://www.reuters.com/sustainability/society-equity/two-federal-judges-say-use-ai-led-errors-us-court-rulings-2025-10-23/. 

National Center for State Courts. “Guidance for Implementing AI in Courts.” NCSC. 2024. https://www.ncsc.org/resources-courts/guidance-implementing-ai-courts. 

National Institute of Standards and Technology (NIST). Artificial Intelligence Risk Management Framework (AI RMF 1.0). NIST AI 100-1. Gaithersburg, MD: U.S. Department of Commerce, 2023. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf. 

Patel, Faiza, and Melanie Geller. “Senate AI Hearings Highlight Increased Need for Regulation.” Brennan Center for Justice, July 9, 2024. https://www.brennancenter.org/our-work/analysis-opinion/senate-ai-hearings-highlight-increased-need-regulation. 

U.S. Senate Committee on the Judiciary. “Grassley Calls on the Federal Judiciary to Formally Regulate AI Use.” Press Release, October 17, 2025. https://www.judiciary.senate.gov/press/rep/releases/grassley-calls-on-the-federal-judiciary-to-formally-regulate-ai-use. 

U.S. Senate Committee on the Judiciary. “Grassley Releases Judges’ Responses Owning Up to AI Use, Calls for Continued Oversight and Regulation.” Press Release, November 5, 2025. https://www.judiciary.senate.gov/press/rep/releases/grassley-releases-judges-responses-owning-up-to-ai-use-calls-for-continued-oversight-and-regulation.
