Courtroom Chaos: AI Avatar Stunt Backfires on Plaintiff | Image Source: www.fox5ny.com
NEW YORK, April 4, 2025 – In a scene that could easily be mistaken for satire of the digital age, a 74-year-old man representing himself in court delegated his legal argument to an AI-generated avatar. The incident, which occurred on March 26 during a session of the Appellate Division, First Judicial Department, of the New York State Supreme Court, not only raised eyebrows but also sparked intense debate over the limits and risks of artificial intelligence in court proceedings.
Jerome Dewald, the plaintiff in an employment dispute against the insurance company MassMutual Metro, took an unconventional route to argue his case. Instead of addressing the panel of judges directly, Dewald presented a pre-recorded video featuring a digital stand-in: a “big, beautiful hunk of a guy” called Jim, as Dewald later described him. The avatar, dressed in a sweater and seated in a digital office, opened the statement with appropriate courtroom decorum, but was quickly cut off by a confused judge.
According to the court record, Associate Justice Sallie Manzanet-Daniels halted the video and requested clarification. When Dewald admitted that the speaker was not real and had been generated by AI, the judge’s tone shifted from confusion to fury. The courtroom exchange quickly escalated into a rebuke that highlighted the lack of transparency and an apparent attempt to use the courtroom as a promotional platform for Dewald’s startup, Pro Se Pro, a service designed to assist self-represented litigants with AI-generated avatars.
Why did Dewald resort to AI?
Dewald stated that his reasons were rooted in practicality, not performance. He had survived throat cancer 25 years earlier and explained that speaking at length remains difficult for him. He had received the court’s permission to present a video, but he admitted that he had not disclosed that the presenter would be AI-generated. This omission, which he now acknowledges, likely aggravated the situation.
Dewald had originally intended to create an avatar in his own likeness using a service called Tavus. But, pressed for time, he used a pre-built avatar instead. The move, though perhaps well-intentioned, backfired badly when the judges felt blindsided by what they perceived as a theatrical trick rather than a serious legal presentation.
Is it legal to use AI in court?
The short answer: yes, but with caveats. Courts often allow self-represented litigants to present material in several forms – videos, briefs, visual aids – provided it is disclosed and approved in advance. But as Dewald’s experience shows, pushing this boundary without transparency invites scepticism.
“You will not use this courtroom as a launch for your company, sir,” said Judge Manzanet-Daniels during the exchange, clearly interpreting Dewald’s presentation as an attempt to promote Pro Se Pro rather than as a genuine accommodation for a medical limitation.
Legal professionals point out that while AI can play a role in preparing legal documents or analysing case law, its presence in oral argument remains highly controversial. According to the Associated Press, Dewald’s subsequent apology to the court included a note stating that he had no lawyer and had not intended to deceive the court, an admission that highlighted the fine line between innovation and overreach.
What went wrong in court?
The entire event was a collision of intent and execution. Dewald did not openly violate any law, but his failure to communicate his plan clearly created a sense of deception. The court had approved a video, not an artificially generated stand-in. That is an important distinction.
Legal experts caution that transparency is essential when introducing new technologies into legal proceedings. According to Dr. Adam Wandt of John Jay College of Criminal Justice, the use of AI avatars in oral argument is far from becoming mainstream. “I don’t think there’s any time in the near future a judge will let an AI avatar argue a case.”
His comments reflect a broader discomfort within the judiciary about the implications of AI impersonation, especially when it comes to credibility and accountability in legal advocacy.
Can AI be a reliable legal assistant?
While avatars may be a bridge too far, AI continues to find its way into the legal world, albeit behind the scenes. Legal research, document drafting and data analysis are areas where AI has already proven useful. However, according to Fox News and the New York Times, even these uses are fraught with pitfalls. Lawyers have already been fined for citing fictitious case law generated by AI, showing the technology’s potential for misinformation.
In 2023, two lawyers in New York were fined $5,000 after using ChatGPT for legal research, which produced fabricated case citations. The fallout from that case highlighted how dangerous undisclosed AI-generated content can be in a legal context. Michael Cohen, former personal lawyer to President Trump, also admitted that his legal team had submitted filings citing non-existent cases produced by artificial intelligence tools.
What does this mean for the future of AI in court?
The Dewald incident is probably only the start of a long and complicated conversation about how AI will be used in the courts. Whether it is AI-produced avatars, AI-derived evidence, or predictive analytics used by lawyers, the legal system will need to adapt, and quickly.
“Prosecutors and defence counsel will want to use AI evidence,” said Dr. Wandt. The key issue is not the technology itself, but how it is presented, validated and understood within the framework of a judicial proceeding. That means stricter guidelines, more training and perhaps even judicial reform to meet these evolving challenges.
The legal community is already taking note. Discussions are emerging about the need for AI transparency, such as mandatory disclosure of how content was generated, or even pre-approval of AI tools used in courtrooms. Some believe that AI avatars may eventually play a role in arbitration or remote hearings, where formal rules of evidence are more relaxed. But in the traditional courtroom setting, trust and authenticity remain non-negotiable.
What can self-represented litigants learn?
Dewald’s story is a cautionary tale for others navigating the judicial system without legal representation. Technology can be a valuable tool, but it must be used responsibly and within the bounds of legal ethics and courtroom decorum.
In any event, the incident underscores the importance of communicating clearly with the court and following not only the letter of the law but its spirit. If a person needs an accommodation for medical reasons, it should be documented and discussed in advance. Attempting to substitute an AI-generated figure for oneself, even with good intentions, can easily be interpreted as misleading or disrespectful.
Dewald told the media that, although he believed the court had given him room to present creatively, “they were not ready to see an artificially produced image.” That lack of preparation, combined with a general cultural scepticism around AI, turned the episode into a combustible moment in legal history.
For the courts, new guidelines may be needed to manage emerging technologies. For litigants like Dewald, it is a reminder that courtrooms are not technology showcases; they are institutions built on trust, clarity and human judgment.
The aftermath of this debacle leaves an open question: can AI be integrated into legal advocacy without eroding the authenticity of the courts? For now, the answer seems to be no. But the conversation has started, and it is one that judges, lawyers, technologists and citizens will have to continue.