
Text from the ChatGPT page of the OpenAI website is shown in this photo, in New York, Feb. 2, 2023. (AP Photo/Richard Drew, File)
A personal injury lawyer representing a man suing an airline now faces possible sanctions for citing fake cases generated by ChatGPT in court documents.
Roberto Mata sued the airline Avianca after a metal serving cart struck his knee during a flight. As is typical in personal injury cases, Avianca moved to dismiss the claim on the grounds that the applicable statute of limitations had expired. Mata’s lawyers opposed the motion to dismiss, and in the accompanying court documents they cited multiple cases supporting their client’s legal position: Varghese v. China Southern Airlines, Shaboon v. Egyptair, Petersen v. Iran Air, Martinez v. Delta Airlines, Estate of Durden v. KLM Royal Dutch Airlines, and Miller v. United Airlines.
There was just one problem: Avianca’s lawyers could not find the cited cases anywhere, even after extensive legal research. They raised the issue in a letter to U.S. District Judge Kevin Castel, a George W. Bush appointee. “Defendant respectfully submits that the authenticity of many of these cases is questionable,” the airline’s lawyers wrote, adding that standard legal research had turned up no trace of the decisions.
The mystery of the phantom cases unraveled when Steven A. Schwartz of the New York law firm Levidow, Levidow & Oberman submitted an affidavit to the court explaining that he had used the artificial intelligence program ChatGPT to “supplement the legal research” while drafting the documents. Schwartz told the judge that the program “has revealed itself to be unreliable.”
Schwartz, an attorney since 1991, said he had “consulted with” the chatbot for the legal work but, because it was his first time using the program, “was unaware of the possibility that its content could be false.” Indeed, ChatGPT had supplied case names, captions, summaries, and citations in standard legal format.
Schwartz accepted full responsibility for the error and said he had no intent to deceive the court. He said he “greatly regrets using generative artificial intelligence” and promised he “will never do so in the future without absolute verification of its authenticity.” Schwartz added that Peter LoDuca, the attorney named as counsel of record in the case, had no part in drafting the document that contained the false sources.
LoDuca, Schwartz, and the law firm now face potential consequences for Schwartz’s mistake. Castel ordered them Friday to appear at a June 8 hearing to face possible sanctions under the Federal Rules of Civil Procedure for “the citation of non-existent cases.” A written response is due by June 2.
Representatives of Levidow, Levidow & Oberman did not immediately respond to a request for comment.