As AI enters courtrooms, court leaders lay out potential risks and rewards

A new report from the Unified Court System says that artificial intelligence can increase public access to the courts, but improper use risks spreading bias and false information in court cases. AP file photo/Kiichiro Sato

By Noah Powelson

The New York Unified Court System on Thursday released its first annual report on the use of artificial intelligence in state courtrooms, outlining both the technology’s potential benefits and the risks it poses to the legal system.

The inaugural 154-page report from the UCS’ recently formed Advisory Committee on Artificial Intelligence and the Courts detailed the possible benefits of AI’s use and the significant concerns attorneys, judges and court staff should weigh when employing the technology, which has the power to completely alter the industry.

While the report establishes some of the first court policies on AI use, it also confirms a well-known fact – that AI technology is here and has already been integrated into the court community in New York.

The report lays out possible benefits AI tools can have for judges, attorneys and court users. It highlights how AI can improve public access to the courts through language interpretation, more user-friendly fillable forms and better online navigation systems.

The report, and UCS leadership, made it clear that they understood AI tools were only going to become more prevalent in the years to come, and that courts needed to be proactive in establishing guidelines to prevent future problems.

“The use of AI in and by our courts must be thoughtful, careful, and principled,” Chief Judge Rowan Wilson said in a statement. “This report provides a roadmap for harnessing technology to improve efficiency and access to justice while safeguarding fairness and fostering public trust in our courts and justice system.”

But the report also highlights major concerns, including security breaches that can occur when using AI for highly sensitive and complex legal matters. Bias in AI tools is a significant worry, the report said: because these tools are trained on historical data, they run a high risk of replicating language that reflects systemic inequities.

What’s more concerning, however, is the potential for attorneys to submit papers to the court with fabricated content generated by AI. The AI subcommittee said in the report that educating attorneys on how AI tools function is the best current practice for preventing fabrications from being submitted. The report also noted judges have already begun adopting individual policies governing, or outright prohibiting, AI use in their courts.

“This inaugural Annual Report reflects the Advisory Committee’s in-depth exploration of the myriad, knotty issues relating to the use of AI within our courts and legal system as we seek, cautiously, to embrace the efficiencies and enhancements offered by this fast-evolving technology,” Chief Administrative Judge Joseph Zayas said in a statement.

Because AI technology is relatively nascent in its development and use, the report notes that more needs to be done to research potential uses and educate court personnel on the findings.

“A key takeaway from this pivotal document, which lays the groundwork for the courts’ ongoing progress in integrating AI, is the need for continuous learning, collaboration, and adaptability as we work to leverage this emerging — and transformative — technology in the fairest, most effective manner,” First Deputy Chief Administrative Judge Norman St. George said in a statement.

AI tools in legal matters have generated controversy over the past year, as reports have surfaced of attorneys caught submitting motions, written with AI tools, that contained completely fabricated case law.

In February 2025, a Wyoming judge threatened two lawyers with sanctions after they cited two nonexistent cases in their lawsuit against Walmart. One of the lawyers later apologized, admitting he had used AI tools to help craft his case and had not reviewed the case law the AI tool generated. The lawyer said the AI had “hallucinated” the two cases.

Later that year, in October, Fox News reported that a federal judge reprimanded Alabama attorney James Johnson and fined him $5,000 for using AI to draft court filings containing inaccurate information in a drug case. Johnson’s client dropped him, declaring he had no confidence in his attorney’s ability, and the judge wrote to the court’s advisory panel asking it to consider removing Johnson’s eligibility for criminal appointments.

These reports have prompted several state court systems to issue their own AI policies, including California, Delaware and Illinois.

In New York, AI tools are currently available to UCS employees on UCS devices, but only after they complete an initial training course. Use of AI tools without completing the UCS training course is prohibited. The policy also limits UCS employees to a set of pre-approved and vetted AI tools.

Microsoft’s suite of AI tools, including Copilot and Azure AI Services, is permitted for UCS use. ChatGPT, the AI chatbot that can generate text for a variety of functions, is also approved.

But outside of court system leadership, judges, attorneys and bar associations are trying to find their own ways to reckon with AI tools' rapid incorporation into their work lives.

Kristen Dubowski-Barba, the president of the Queens County Bar Association, said many association members recognized AI tools have a practical use, but also come with many dangers that could jeopardize attorneys and their clients alike.

“There’s a way, if you understand it, to use [AI] to benefit you in helping to do some research and help search documents themselves,” Dubowski-Barba said. “However, I think there are concerns that…without reviewing the information that you're receiving is right, it could be dangerous. It could be a pitfall.”

Dubowski-Barba said she was aware of cases where attorneys in Queens had submitted motions to court that cited cases that didn’t actually exist, because the attorneys did not review the information that AI had given them when they drafted their papers.

But beyond the risk of submitting incorrect information, Dubowski-Barba said, there are concerns that overreliance on AI can have long-term effects on new attorneys’ development. Researching and reviewing case law is a cornerstone skill for any attorney building a career, and relying on AI tools to do that research can shape how lawyers develop their knowledge of the law.

“Another one of the concerns of experienced practitioners, for attorneys that are just starting out, is that [AI] could be a way to use shortcuts where you're not necessarily learning the information that you need to,” Dubowski-Barba said. “That you’re not learning how to do the research and how to gain this knowledge that you would continue to grow as an attorney.”

It’s an issue that has come up many times in discussions at the QCBA, Dubowski-Barba said. The association has been incorporating more education and training on AI use for its members.

Dubowski-Barba also said QCBA’s Technology in the Law Committee has been revitalized in recent months, and that the committee has been holding regular meetings on AI developments affecting members’ work.

“They're trying to keep our members as informed as possible, and I think that’s the best we can do as an association,” Dubowski-Barba said.