ChatBot Lawyer Citations

ChatGPT and a lawyer are sitting in a bar ... the legal bar for sanctions, that is
Some of these links are affiliate links. This lets me create great content at no cost to you. Thank you for your support!

I busted out laughing at this boneheaded mistake made by one of my brethren. Attorney Steven Schwartz of Levidow, Levidow & Oberman landed himself in internet fame when he used ChatGPT to write a brief … in which he cited at least six fake cases. This is not the first time an attorney has messed up with technology. Remember the “I am not a cat” memes?

MidJourney prompt: a commercial photograph of legal books on bookshelves behind a desk, feminine neutral colors, minimalistic –ar 4:5 –s 250

The original case was Mata v. Avianca. Roberto Mata sued Avianca airlines, alleging that he was injured when a metal serving cart hit him in the knee during a flight from El Salvador to Kennedy International Airport in New York. Mata claims he was injured on August 27, 2019.

I betcha Attorney Schwartz wishes he read The Deep View before he wrote that brief. It’s a 5-minute daily newsletter about what matters in AI.

Avianca filed for bankruptcy in May 2020. Mata filed his complaint on July 27, 2020. Unfortunately, that put him squarely inside the bankruptcy filing, so he asked for his case to be dismissed. The same day it was dismissed, February 2, 2022, he refiled in New York court.

The defense counsel then asked to move the case to federal court in the Southern District of New York because it involved a federal subject matter. (Transportation issues, including airlines, are federal issues because the federal government regulates interstate travel.)

What followed was that Avianca filed a Motion to Dismiss because the filing date … it was February 2, 2022 … was “time-barred by the Montreal Convention.” Like the limitations periods for some torts (including fraud and defamation), the Montreal Convention only lets you claim compensation for damages for up to two years. (Honestly, this looks like it’s meant for baggage claims, but I don’t practice in federal law … so personal injury could fall under it too.)

And here’s the fun part. Attorney Schwartz filed a response to the Motion to Dismiss, claiming that the statute of limitations in New York is three years (see the point above). (FYI, I have no idea what the statute of limitations is for a federal tort.) After Avianca responded that its lawyers couldn’t find some of the citations, Attorney Schwartz filed an affidavit apology, to which the Court responded with a Rule to Show Cause regarding sanctions.

Book Look

Attorney Schwartz needs this book: Understanding and Mastering The Bluebook. The Bluebook is basically the legal citation guide for attorneys and paralegals.

What Were the Bogus Citations?

Attorney Schwartz identified six cases that were miscited (all errors in citation in original):

  • Varghese v. China Southern Airlines Co Ltd, 925 F.3d 1339 (11th Cir. 2019)
  • Shaboon v. Egyptair 2013 Il App (1st) 111279-U (Ill App. Ct. 2013)
  • Petersen v. Iran Air 905 F. Supp 2d 121 (D.D.C. 2012)
  • Martinez v. Delta Airlines, Inc., 2019 WL 4639462 (Tex. App. Sept. 25, 2019)
  • Estate of Durden v. KLM Royal Dutch Airlines, 2107 WL 2418825 (Ga. Ct. App. June 5, 2017)
  • Miller v. United Airlines, Inc., 174 F.3d 366 (2d Cir. 1999)
  • MSN reported another one: Zicherman v. Korean Air Lines
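
As an aside for the technically inclined: even before opening a research database, a dumb sanity pass over the citation strings above would have raised a flag. The sketch below is only an illustration of that idea (the citation list and the check are mine, not anything Attorney Schwartz’s firm actually uses): Westlaw numbers take the form “YEAR WL number,” so a cite reading “2107 WL 2418825” for a decision supposedly issued in 2017 should jump out immediately.

```python
# A toy sanity check over citation strings -- an illustration, not a real
# citation parser. It pulls out anything that looks like a year and flags
# values that cannot possibly be right (e.g., the "2107" in the Durden cite).
import re

citations = [
    "Varghese v. China Southern Airlines Co Ltd, 925 F.3d 1339 (11th Cir. 2019)",
    "Estate of Durden v. KLM Royal Dutch Airlines, 2107 WL 2418825 "
    "(Ga. Ct. App. June 5, 2017)",
]

CURRENT_YEAR = 2023  # the year the brief was filed

for cite in citations:
    years = [int(y) for y in re.findall(r"\b(1[6-9]\d{2}|2\d{3})\b", cite)]
    bad = [y for y in years if y > CURRENT_YEAR]
    if bad:
        print(f"Suspicious year(s) {bad} in: {cite}")
```

It won’t catch a fabricated-but-plausible cite, obviously. But it costs nothing, and catching even one “2107” is one fewer reason for a judge to reach for the sanctions pen.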

Error #1: Not Using WestLaw or LexisNexis for Current Law

ChatGPT doesn’t know recent events.

The most obvious issue here is that ChatGPT cannot tell you anything about events after September 2021 (although I have found that it does, in fact, know some stuff, and you can “fool” the ChatBot sometimes). What about case law that happened after September 2021?
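
To make that concrete, here is a minimal sketch of what actually happens when you ask a chat model for case law. Treat everything in it as an assumption for illustration: it assumes the openai Python package (v1+), an API key sitting in the OPENAI_API_KEY environment variable, and an illustrative model name. The thing to notice is that the answer comes back as generated text. Nothing in this pipeline touches WestLaw, LexisNexis, or any court docket, so nothing verifies that the “citations” in the output exist.

```python
# A hedged sketch: asking a chat model for case law gets you *text*, not law.
# Assumptions: the openai Python package (v1+) is installed and the
# OPENAI_API_KEY environment variable is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{
        "role": "user",
        "content": "List cases holding that a bankruptcy stay tolls the "
                   "Montreal Convention's two-year limitations period.",
    }],
)

# The model returns prose. Any citations in it are unverified strings --
# they may be real, mangled, or entirely made up. Checking them is on you.
print(response.choices[0].message.content)
```

That last comment is the whole ballgame: the model is a text generator, not a legal database.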

WestLaw and LexisNexis are expensive. Very, very expensive. Google Scholar does exist for case law research, although it tends to give you everything and the kitchen sink. But many state bars (including the Pennsylvania state bar) include a free case research tool that is built into the bar membership.

In other words, this was more than a mere temporary lapse of judgment. This was straight-up cutting (financial) corners.

Error #2: Not Shepardizing

“Shepardizing” is the term used (sometimes wrongly) for checking whether the cases you cite are still good law. Shepardizing is actually specific to LexisNexis. It’s been popularized by TV shows like Better Call Saul. Saul Goodman uses WestLaw, which has its own tool called “KeyCite.” (Just FYI, it is very irritating to me that the show writers get this stuff wrong.)

The legal field perpetuates an overwork culture: you are indoctrinated into slavery in law school. Law firms, the state and federal governments, and all courts use free labor. In exchange for being treated like shit, you get to put something on your resume, so when you get out of law school, you can show your potential employer that you are(n’t) a complete loser.

For the lucky few graduates who land jobs in the private sector, the Ponzi scheme that is the private law firm continues. First-, second-, and sometimes third-year associates are relegated to duties that no one wants to do unless you want to be a career law clerk, in which case you do not work for a private law firm. The three branches of government have much better hours, a pension, and work-life balance (and you don’t get treated like shit).

Bottom line: Bruh, get a lowly associate or law school student to do the grunt work of Shepardizing.

Error #3: Not Even Looking at the Cases You Cited

One of the first things that stood out to me was that some of the cases appeared to be unreported (assuming that they weren’t fake).

An attorney can (and does) cite unreported cases, but these cases are supposed to be for persuasive value only. A reported case is a case that makes it into one of those big-ass, leather-bound tomes you see on television shows and in stock images of attorneys. Before the days of the internet, we lawyers had to look up cases using our own library-sorting system, one that makes the Dewey Decimal System look like child’s play.

But if you are going to cite an unreported case … one of the proliferated thousands of cases that are exactly the same as the last case on the subject … then you should at least make an attempt to find a reported one. It’s unreported for a reason.

Error #4: Not Editing Your Work

Judges are unwieldy beasts of the system. Most are also strict grammarians. This is also a law school foible: usually (but not always) the top of the class gets to be a law clerk for a judge.* You cannot get into the top of the class unless you also are top of the legal analysis class, the class where you learn how to do things like cite cases and not be an idiot using ChatGPT to do legal research.

Now, granted, Attorney Schwartz has been practicing for a long time (maybe 30 years?), so the ChatBot wasn’t around when he was taking legal analysis.

But the point here is that, if you submit something to the court, it should be edited. At least twice. That’s why you have law students and first year associates doing slave work, okay???

* The exception is if you have stellar personality. What constitutes a stellar personality is unique to each judge.

What’s Happening From Here

The Court issued a Rule to Show Cause asking Attorney Schwartz and Peter LoDuca (the representative of the firm) why they should not be sanctioned. After a couple of requests for continuances to respond to said Rule to Show Cause, the Court said:

Mr. LoDuca is differently situated from Mr. Schwartz and the Firm. He has availed himself of a full and fair opportunity to respond to the Court’s OSC regarding non-existent case law and three (3) possible grounds for sanctions. He is not entitled to a do-over. The only point of response to the supplemental OSC of May 26 is whether he, in fact, physically appeared before Mr. Schwartz, a notary public, on April 25, 2023 and took an oath to tell the truth. That should be a simple and straight forward matter. But if he needs until June 6, 2023 to respond on this point he may have the time. Application Granted.

Translation: STFU.

Full Circle: What Happened When I Searched for These Cases?

Just for giggles, I put these cases into WestLaw.

  • Varghese v. China Southern Airlines Co Ltd, 925 F.3d 1339 (11th Cir. 2019)

  • Shaboon v. Egyptair 2013 Il App (1st) 111279-U (Ill App. Ct. 2013) … this one doesn’t even look like a freaking citation

  • Petersen v. Iran Air 905 F. Supp 2d 121 (D.D.C. 2012) … hey, this one brought something up! It’s not the same name, but at least something showed.

  • Martinez v. Delta Airlines, Inc., 2019 WL 4639462 (Tex. App. Sept. 25, 2019)

  • Estate of Durden v. KLM Royal Dutch Airlines, 2107 WL 2418825 (Ga. Ct. App. June 5, 2017)

  • Miller v. United Airlines, Inc., 174 F.3d 366 (2d Cir. 1999)

  • MSN reported another one: Zicherman v. Korean Air Lines … this one actually looks like something. Maybe the article was written by Bing’s AI? (And it miscited a “miscitation”?)
Total time spent copying and pasting: 1 minute. It took me longer to copy the images for this article.
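
If you’d rather not even copy and paste, a few lines of scripting can do the same spot-check against a free, public case law database. The sketch below is an assumption, not a recipe: it assumes CourtListener exposes a public search endpoint at the URL shown with the parameters I use (check their current API docs for the real endpoint, fields, and rate limits), and the citations list is just a hypothetical copy-paste from the brief.

```python
# A hedged sketch of spot-checking citations against a free case law search.
# Assumptions: the requests package is installed, and CourtListener offers a
# public search endpoint at the URL below accepting q/type parameters --
# verify against their current API documentation before relying on this.
import requests

SEARCH_URL = "https://www.courtlistener.com/api/rest/v4/search/"

citations = [  # hypothetical list, copied straight out of the brief
    "Varghese v. China Southern Airlines Co Ltd, 925 F.3d 1339 (11th Cir. 2019)",
    "Miller v. United Airlines, Inc., 174 F.3d 366 (2d Cir. 1999)",
]

for cite in citations:
    resp = requests.get(SEARCH_URL, params={"q": cite, "type": "o"}, timeout=30)
    resp.raise_for_status()
    hits = resp.json().get("count", 0)
    verdict = "found something" if hits else "NOTHING FOUND -- check by hand"
    print(f"{cite}\n  -> {verdict} ({hits} result(s))\n")
```

Zero hits is not proof a case is fake (no free database is complete), but it is a giant flashing sign that somebody needs to lay eyes on the actual opinion before signing anything.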

My Opinion on the Faulty Chatbot Lawyer Citations

This is hilarious, but it does highlight some of the issues with using AI … for anything.

Disclaimer: 0% of this article was written with the help of ChatGPT or AI.


If you like this post, please PIN it! This helps me know what people like and create better content.