Australian King's Counsel Apologizes for AI-Generated Fake Legal Submissions in Murder Case

In a significant legal embarrassment that highlights growing concerns about artificial intelligence in judicial systems, a senior Australian lawyer has issued a formal apology to a Supreme Court judge for submitting documents containing fabricated quotes and nonexistent case judgments generated by AI technology. The incident occurred in the Supreme Court of Victoria, where defense lawyer Rishi Nathwani, who holds the prestigious title of King's Counsel, took full responsibility for filing false information in a murder case involving a teenage defendant.

Court Proceedings Disrupted by AI Errors

According to court documents reviewed by The Associated Press, Nathwani expressed deep regret and embarrassment on behalf of his defense team during proceedings before Justice James Elliott. The AI-generated errors caused a 24-hour delay in resolving a case that Justice Elliott had hoped to conclude promptly. Despite the flawed submissions, Elliott ruled on Thursday that Nathwani's client, who cannot be identified because they are a minor, was not guilty of murder because of mental impairment.

Judge's Strong Criticism of Legal Practice

Justice Elliott delivered pointed criticism of the situation, saying the way events had unfolded was unsatisfactory and emphasizing that the court's ability to rely on the accuracy of counsel's submissions is fundamental to the proper administration of justice. He noted that the Supreme Court had released guidelines last year on how lawyers should appropriately use artificial intelligence in legal practice.

Elliott made clear that it is unacceptable for artificial intelligence to be used in legal work unless its output is independently and thoroughly verified by legal professionals. The court documents revealed that the problematic submissions included fabricated quotes from a speech to the state legislature and citations to nonexistent cases purportedly drawn from the Supreme Court's own records.

Discovery and Admission of Errors

The AI-generated errors were uncovered by Justice Elliott's associates, who became suspicious when they could not locate the referenced cases through normal legal research channels. When the defense lawyers were asked to provide copies of the cited judgments, they admitted that the citations did not exist and that their submission contained fictitious quotes. The lawyers explained that they had checked the accuracy of the initial citations but had wrongly assumed the remaining references would also be correct.

Prosecutor Daniel Porceddu, who received copies of the problematic submissions, also failed to independently verify their accuracy. The court documents did not identify which generative artificial intelligence system the lawyers used to produce the erroneous legal materials.

Global Pattern of AI Legal Mishaps

This Australian incident is another entry in a growing list of artificial intelligence-related mishaps affecting justice systems worldwide. In a comparable 2023 case in the United States, a federal judge fined two lawyers and their law firm $5,000 after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim. Judge P. Kevin Castel found that the lawyers had acted in bad faith but credited their apologies and remedial steps when explaining why harsher sanctions were not warranted.

Later that same year, additional fictitious court rulings invented by artificial intelligence appeared in legal papers filed by attorneys representing Michael Cohen, the former personal lawyer for U.S. President Donald Trump. Cohen accepted responsibility for the errors, explaining that he hadn't realized the Google tool he was using for legal research was capable of producing what are commonly called AI hallucinations—plausible-sounding but completely fabricated information.

Judicial Warnings About Consequences

British High Court Justice Victoria Sharp issued a stern warning in June about the serious consequences of presenting false material as genuine in legal proceedings. She indicated that such actions could potentially be considered contempt of court or, in the most egregious cases, perverting the course of justice—an offense that carries a maximum sentence of life imprisonment in the United Kingdom.

The Melbourne incident serves as a cautionary tale for legal professionals worldwide who are increasingly incorporating artificial intelligence tools into their practice. It underscores the critical importance of maintaining traditional legal verification processes even when using advanced technological assistance, particularly in matters as serious as murder trials, where a defendant's liberty hangs in the balance.