Chatbot Legal Defamation Issues

When you've been defamed by a bot
Some of these links are affiliate links. This lets me create great content at no cost to you. Thank you for your support!

I have two separate, equally unfunny news articles about ChatGPT inventing scandals, one involving sexual harassment and one a criminal conviction.


The first involved Jonathan Turley, a law professor at George Washington University, whom ChatGPT accused of sexual harassment, citing a 2018 Washington Post article … that didn’t exist.

The second involved Brian Hood, mayor of Hepburn Shire in Australia. ChatGPT claimed that he was a convicted criminal involved in the (real) bribery scandal at Australia’s Reserve Bank (RBA). He was not. He was, in fact, the whistleblower in said bribery scandal.

Book Look

Want to know more about how to use ChatGPT? I’ve read tons of Amazon books on the subject, and “authors” are flooding the Kindle marketplace with AI-written books … that are all utterly terrible. Utterly. (Or is it udderly? Okay, I couldn’t resist that. #DadJokesRule)

Photo by Megumi Nachev on Unsplash

I recommend this book, though: ChatGPT for Nonfiction Authors: How to Use ChatGPT to Write Better, Faster, and More Effectively by Hassan Osman. I also like his other books, so give them a go.

Chatbot Legal Defamation Issues: What’s Defamation of Character Mean, Anyway?

Okay, first things first. Defamation means very different things in different jurisdictions. While I normally use the term “jurisdiction” to mean the different states here in the U.S., it can also refer to different countries.

So Brian Hood’s defamation lawsuit would look very different … and could have opposite results … from Jonathan Turley’s lawsuit.

In the U.S., Defamation of Character (generally) is (1) a false statement someone makes about you, (2) which is published as a statement of fact, and (3) which harms your personal and/or professional reputation (or causes other damages, such as financial loss or emotional distress).

Defamation: Libel or Slander

Libel and slander are types of defamation. Libel is something written. Slander is something spoken. I get confused about these, as well, but I try to remember that S = S. Slander = Spoken.

Burden of Proof

We have a concept in the law called the burden of proof. These are things that you must, well, prove. In civil suits (in America), the burden is “the preponderance of the evidence,” which is generally “there is a greater than 50% chance that the claim is true.”

Sometimes, you might hear “more likely than not.” The weighing factor is generally “what a reasonable person may think.”

So let’s break defamation down again:

(1) a false statement someone makes about you

(2) which is published as a statement of fact

(3) which harms your personal and/or professional reputation

False Statement

In both of these cases, the statements were false, so both Mr. Turley and Mr. Hood would win on the first factor.

But there is a second criterion, #1.5, which requires that the person (in the U.S., companies are considered people) either knew the statement was false at the time or showed “reckless disregard” for whether it was true or false.

#1.5 is important because, if you are suing in the U.S., you would have to show that ChatGPT (or OpenAI) knew that the statement was false or showed a “reckless disregard” for whether it was true.

That raises the question: how does one know what ChatGPT knows? How could you prove a “reckless disregard” for the truth?

Mr. Hood is giving OpenAI some time to correct the problem. If OpenAI fails to correct ChatGPT, would this show a reckless disregard for the truth?

N.B. Keep in mind that it is OpenAI’s actions that count.
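If it helps to see the moving parts in one place, here is a toy sketch in Python of the element-by-element logic. It is purely illustrative (and emphatically not legal advice): the field names and the pass/fail values are my own simplifications, not anything a court uses.

```python
from dataclasses import dataclass

@dataclass
class DefamationClaim:
    """Toy model of a U.S. defamation claim. Illustrative only."""
    false_statement: bool    # (1) a false statement about you
    fault_shown: bool        # (1.5) knew it was false, or "reckless disregard"
    published_as_fact: bool  # (2) published as a statement of fact
    harm_shown: bool         # (3) harm to personal/professional reputation

    def viable(self) -> bool:
        # You must carry the burden of proof on every element;
        # a single "no" sinks the whole claim.
        return all([self.false_statement, self.fault_shown,
                    self.published_as_fact, self.harm_shown])

# The Hood scenario as sketched above: falsity is clear, but can you
# actually prove OpenAI's knowledge or reckless disregard?
hood = DefamationClaim(false_statement=True, fault_shown=False,
                       published_as_fact=True, harm_shown=True)
print(hood.viable())  # False -- element #1.5 is the sticking point
```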

Published

The second element may also be satisfied, depending on what your definition of “published” is. Merely repeating the statement to someone would generally count as publication.

There is a broader issue here, though, about publication. If something is published about you, to you, is that publication? In a Chat AI scenario, does publication mean to someone other than you?

This blurs the lines a bit with the concept of harm or damages (below), but how do you prove that something was published?

Harm

In a case here in America (though this depends heavily on the state jurisdiction), if you can prove #1, #1.5, and #2, the jury will presume that you were harmed.

Harm, though, generally needs to be measured in damages. That means you must show that the statement actually cost you something.

Some harm is obvious. Let’s say that your boss looks you up on ChatGPT, and ChatGPT says you sexually harassed students. You get fired. Well, you might be able to sue OpenAI for the amount of your salary.

Let’s say a student writes a journal article about the #MeToo movement, and that student isn’t very student-ly and uses ChatGPT instead of doing real research. That student writes about you. Another student reads it and trashes your car. You get to sue for the cost of your car.

You can also get something called “punitive damages,” which are damages meant to punish the wrongdoer rather than to compensate you. The criterion for punitive damages is that the statement must have been made maliciously, which is very difficult to show.

We also have an exception for “famous” people: public figures, government officials, and my dog, Joe, being that he is an Instagram star. Just kidding on Joe. Although he is an Instagram star. To get damages, you must show that actual malice existed in the making of the statement.

So here is the next question: Can ChatGPT have malice? Probably not, all kidding aside about AI becoming sentient beings.

Disclaimers

OpenAI’s terms of use contain a disclaimer that the information that the chat spits out could be false:

“Given the probabilistic nature of machine learning, use of our Services may in some situations result in incorrect Output that does not accurately reflect real people, places, or facts. You should evaluate the accuracy of any Output as appropriate for your use case, including by using human review of the Output.”

Putting these two cases aside, what happens if you use the ChatBot, and it defames someone?

Chatbots have become an increasingly popular tool for businesses looking to improve their customer service and engagement. These automated tools can handle a wide range of tasks, from answering simple questions to providing personalized recommendations.

However, these AI tools can open you up to possible legal risks, especially defamation. Let’s take the above scenario one step further. Let’s say that you wrote an article, using ChatGPT, on Brian Hood and cited the above defamation. Are you liable?

The short answer: yes, although I expect “I used ChatGPT” to show up as a defense in the future.

Companies can mitigate this risk by training the AI properly, monitoring conversations, and establishing a clear process to handle situations that do arise.
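What might that “clear process” look like? Here is a minimal sketch, assuming a hypothetical publishing pipeline of my own invention: hold back any chatbot output that both names a person and makes a risky factual claim, and send it to a human for fact-checking before it goes anywhere. The crude name-matching and the term list are placeholders, not any real moderation API.

```python
import re

# Crude stand-in for named-entity recognition: two consecutive
# capitalized words. A real system would use a proper NER model.
NAME_PATTERN = re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b")

# Terms suggesting a factual claim about someone's conduct.
RISKY_TERMS = ("convicted", "harassed", "bribery", "fraud", "arrested")

def needs_human_review(output: str) -> bool:
    """True if the output names a person AND makes a risky claim."""
    names_person = bool(NAME_PATTERN.search(output))
    risky_claim = any(term in output.lower() for term in RISKY_TERMS)
    return names_person and risky_claim

def publish(output: str) -> None:
    if needs_human_review(output):
        # The "clear process": log it, hold it, have a human fact-check it.
        print("HELD FOR REVIEW:", output)
    else:
        print("PUBLISHED:", output)

publish("Brian Hood was convicted in the RBA bribery scandal.")  # held
publish("Chatbots are a popular customer service tool.")         # published
```

The point is not the (deliberately naive) heuristics; it is that the decision to publish is gated, logged, and reviewable, which is exactly the kind of paper trail you want if a “reckless disregard” argument ever comes up.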

My Opinion

I would be infuriated if ChatGPT created some sort of false information about me, especially that I was a convicted criminal or sexually harassed someone. So don’t get me wrong, here. I would more than likely sue the company that made ChatGPT, too.

But, here in the U.S., a defamation suit against an AI might be a very difficult thing to prove on all three factors.

Although individual people (not OpenAI) have shallower pockets, I predict that people will get sued for their “reckless disregard” in not fact-checking what ChatGPT spits out.

As always, though, this is a new area of the law. One that is smashing everything as we know it. Jobs, the economy, learning … everything. I would definitely not be surprised if someone was successful with a defamation lawsuit against the AI.

Viva la Judgment Day.

 

If you like this post, please PIN it! This helps me know what people like and create better content.