Sullivan & Cromwell has admitted to a U.S. bankruptcy court that it submitted an error-ridden filing in a significant case, attributing the mistakes, which included fabricated citations and distorted references, to artificial intelligence.
Andrew Dietderich, head of restructuring at the firm, apologized to Judge Martin Glenn, acknowledging that AI-induced ‘hallucinations’ had produced citations to fictional authorities and misrepresentations of real ones. The admission came in a letter to the U.S. Bankruptcy Court for the Southern District of New York, where Sullivan & Cromwell represents court-appointed liquidators from the British Virgin Islands.
The errors appeared in a motion filed on April 9, which the firm said had been prepared in breach of its AI usage policies. The case concerns claims against Prince Group and its owner, Chen Zhi, whom prosecutors accuse of orchestrating scams that defrauded victims globally and led to the seizure of billions of dollars in cryptocurrency. Chen was detained in Cambodia earlier this year before his repatriation to China.
Through Chapter 15 proceedings in the U.S., the liquidators are seeking recognition to act on behalf of creditors and alleged victims. Prince Group, registered in the British Virgin Islands, has been linked by U.S. authorities to large-scale frauds in Southeast Asia and has been sanctioned by both the UK and U.S. governments.
In a corrected submission, the firm acknowledged that the original filing had miscited case law, included unsupported citations, and mischaracterized several references. It withdrew the initial motion and filed a revised version in its place.
Boies Schiller Flexner, representing Prince Group and Chen, first flagged the errors. It pointed to language incorrectly attributed to the U.S. Bankruptcy Code and to authorities that were incorrect or misrepresented. In one instance, a cited source actually referred to a different ruling from another circuit.
In further filings, the defendants claimed that at least 28 citations were erroneous, including quotations from nonexistent court decisions. They argued that the timing of the corrected filing was prejudicial because it came after they had submitted their objections, and they requested an adjournment of the scheduled hearing, proposing a status conference instead.
Sullivan & Cromwell emphasized that lawyers using AI tools receive mandatory training and must independently verify all output. Lawyers are required to complete two training modules, which highlight risks such as citation fabrication and misinterpretation, before gaining access to generative AI tools. The firm’s policy instructs lawyers to ‘trust nothing and verify everything’; failing to do so is itself a policy violation.
A broader review found minor drafting issues in other documents, which the firm attributed to human error rather than AI. It did not disclose details about the lawyers who drafted the original motion.
The incident is the latest in a series of AI-related blunders in legal practice, as firms explore tools to speed up research and drafting. Courts have recently reprimanded or sanctioned lawyers for submitting filings containing inaccurate AI-generated references. Last year in Australia, a lawyer lost their principal status over improper AI usage.
Legal education is adapting too, with law schools incorporating instruction on the technology. Senior judges have cautioned that misuse could compromise the integrity of proceedings. Recent rulings have considered how AI fits into legal frameworks and whether interactions with AI tools are protected by privilege. Meanwhile, some courts are exploring the use of AI systems to manage heavy caseloads.