Lawyer Who Used ChatGPT Faces Penalty for Made Up Citations - The New York Times
In a cringe-inducing court hearing, a lawyer who relied on A.I. to craft a motion full of made-up case law said he "did not comprehend" that the chatbot could lead him astray. Steven A. Schwartz told a judge considering sanctions that the episode had been "deeply embarrassing."

Credit: Jefferson Siegel for The New York Times

June 8, 2023, Updated 5:50 p.m. ET

As the court hearing in Manhattan began, the lawyer, Steven A. Schwartz, appeared nervously upbeat, grinning while talking with his legal team. Nearly two hours later, Mr. Schwartz sat slumped, his shoulders drooping and his head rising barely above the back of his chair.

For nearly two hours Thursday, Mr. Schwartz was grilled by a judge in a hearing ordered after the disclosure that the lawyer had created a legal brief for a case in Federal District Court that was filled with fake judicial opinions and legal citations, all generated by ChatGPT. The judge, P. Kevin Castel, said he would now consider whether to impose sanctions on Mr. Schwartz and his partner, Peter LoDuca, whose name was on the brief.

At times during the hearing, Mr. Schwartz squeezed his eyes shut and rubbed his forehead with his left hand. He stammered and his voice dropped. He repeatedly tried to explain why he did not conduct further research into the cases that ChatGPT had provided to him.

"God, I wish I did that, and I didn't do it," Mr. Schwartz said, adding that he felt embarrassed, humiliated and deeply remorseful.

"I did not comprehend that ChatGPT could fabricate cases," he told Judge Castel.

In contrast to Mr. Schwartz's contrite posture, Judge Castel gesticulated often in exasperation, his voice rising as he asked pointed questions. Repeatedly, the judge lifted both arms in the air, palms up, while asking Mr. Schwartz why he did not check his work more carefully.

As Mr. Schwartz answered the judge's questions, the reaction in the courtroom, crammed with close to 70 people including lawyers, law students, law clerks and professors, rippled across the benches. There were gasps, giggles and sighs. Spectators grimaced, darted their eyes around, chewed on pens.

"I continued to be duped by ChatGPT. It's embarrassing," Mr. Schwartz said. An onlooker let out a soft, descending whistle.

The episode, which arose in an otherwise obscure lawsuit, has riveted the tech world, where there has been a growing debate about the dangers posed by artificial intelligence, even an existential threat to humanity. It has also transfixed lawyers and judges.

"This case has reverberated throughout the entire legal profession," said David Lat, a legal commentator. "It is a little bit like looking at a car wreck."

The case involved a man named Roberto Mata, who had sued the airline Avianca, claiming he was injured when a metal serving cart struck his knee during an August 2019 flight from El Salvador to New York. Avianca asked Judge Castel to dismiss the lawsuit because the statute of limitations had expired.

Mr. Mata's lawyers responded with a 10-page brief citing more than half a dozen court decisions, with names like Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines, in support of their argument that the suit should be allowed to proceed. After Avianca's lawyers could not locate the cases, Judge Castel ordered Mr. Mata's lawyers to provide copies. They submitted a compendium of decisions. It turned out the cases were not real.

Mr. Schwartz, who has practiced law in New York for 30 years, said in a declaration filed with the judge this week that he had learned about ChatGPT from his college-aged children and from articles, but that he had never used it professionally. He told Judge Castel on Thursday that he had believed ChatGPT had greater reach than standard databases.

"I heard about this new site, which I falsely assumed was, like, a super search engine," Mr. Schwartz said.

Programs like ChatGPT and other large language models in fact produce realistic responses by analyzing which fragments of text should follow other sequences, based on a statistical model that has ingested billions of examples pulled from all over the internet.

Irina Raicu, who directs the internet ethics program at Santa Clara University, said this week that the Avianca case clearly showed what critic...
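The next-token idea described above can be sketched with a toy bigram model: count which word follows which in a small corpus, then generate text by repeatedly picking the most frequent continuation. This is an illustrative simplification of the article's point, not how ChatGPT is actually built; real models use neural networks trained on vastly more data. The corpus and function names here are invented for the example.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count how often each word follows each other word."""
    words = corpus.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows: dict, start: str, length: int = 8) -> str:
    """Extend `start` by always choosing the most frequent next word."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # no observed continuation: stop
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

# Tiny made-up corpus; the model learns word-to-word statistics only.
corpus = (
    "the court cited the case and the case was real "
    "the court cited the case and the case was fake"
)
model = train_bigrams(corpus)
print(generate(model, "the", length=5))
```

The generated sentence is fluent-sounding but carries no notion of truth: the model emits whatever continuation is statistically plausible, which is exactly why a system built this way can produce convincing but nonexistent case citations.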