By Joseph Kroetsch
Boies Schiller Flexner’s Joseph Kroetsch argues that AI-generated fake content undermines the credibility of all digital content and makes it harder to fault a defendant for disregarding the truth.
Artificial intelligence technology is fast-moving, with little recent precedent. Among its many uses (and misuses) are generated images, video, and synthesized voice audio that are increasingly indistinguishable from the real thing. This technology will only become more realistic and accessible in the coming years.
AI technology is said to facilitate lying by making it easier to pass off fake images as genuine. But there is a flip side: it will become increasingly reasonable to doubt the authenticity of real images and other digital evidence.
Videos and images that today seem like irrefutable evidence may soon be given little weight by a public that will become familiar with how a simple text prompt can create similar proof out of thin air in seconds.
This looming decline in the credibility of digital evidence will have a host of ramifications for our legal system. With what was once hard evidence now easily fabricated, we may see trials shift their emphasis toward the credibility of the person who created or authenticated the evidence.
And given the historical role prejudice has played in how juries weigh the credibility of witnesses, we will have to take care that we don’t slide backwards when so much work still remains to be done.
Implications for Defamation Law
The declining credibility of digital evidence presents particularly thorny issues in the context of defamation litigation. As it becomes harder for people to evaluate for themselves whether digital imagery is truly authentic, the press could play an increasingly important role in helping the public understand what is real. But the press’s credibility needs accountability to survive, and defamation law plays an important role in this respect.
In most defamation cases, a public-figure plaintiff must satisfy the “actual malice” standard by proving the defendant subjectively knew that the defamatory statement was false, or recklessly disregarded the truth. It is not enough to show that the defamation defendant should have known its speech was false—the plaintiff must, at a minimum, prove by clear and convincing evidence that the defendant was actually aware of the probable falsity of its speech.
This is a notoriously daunting burden. As Justice Neil Gorsuch observed, “over time the actual malice standard has evolved from a high bar to recovery into an effective immunity from liability,” a view that prompted a strong rebuke from media defense counsel.
The rise of AI-generated content could make the actual malice standard even harder to meet. A defendant’s subjective intent may be proven through circumstantial evidence, such as by showing the defendant reviewed contradictory evidence before publication. In such circumstances, a jury can infer the defendant knew its defamatory publication was false.
As the credibility of hard evidence wanes, however, this inference becomes more difficult to justify, especially under the applicable “clear and convincing evidence” standard. In a world where anyone with a phone can create a fake video, photograph, or voice recording, isn’t it reasonable—or at least not reckless—for a defendant to genuinely doubt the authenticity of formerly irrefutable evidence?
A similar problem may arise when proving actual malice by showing that the defendant purposefully avoided the truth. In the US Supreme Court’s 1989 decision in Harte-Hanks Communications v. Connaughton, the defendant published a defamatory article claiming that the plaintiff had offered a bribe during a conversation. The plaintiff gave the defendant tapes of that conversation to rebut the allegation, but the defendant chose not to listen to them.
The Supreme Court rejected the defendant’s excuse that its sources had said the tape was not worth listening to because the plaintiff had selectively started and stopped the recording, holding that any such manipulation would have been apparent from simply listening to the tape. But with sophisticated tools for mimicking voices beginning to emerge, it may become harder for future courts to fault a reporter for not reviewing evidence when a source says that evidence is fake.
Thus, even as AI technology makes the public more reliant on the press to authenticate images and recordings, the same technology could erode public trust in the press if it makes actual malice so difficult to prove that it creates an appearance of unaccountability.
Future Implications
Calls for overturning the actual malice standard have grown in recent years, fueled by claims that the standard makes the media nearly immune to liability. In truth, it is hard to sue the media, but not impossible.
But if AI-generated content undermines plaintiffs’ ability to prove actual malice even in strong cases, that criticism may gain more traction. And any erosion of the standard triggered by AI-generated content is unlikely to be confined to the AI context.
What is to be done? On the technological front, efforts such as the Content Authenticity Initiative seek to restore credibility to digital content by developing standards for authenticating image and video files. Legally, it is still too early to say how courts will adapt to this changing landscape, but the challenge of adapting law to new technology is hardly new.
Courts have adapted to other technologies such as fingerprints, DNA, polygraphs, and even photography. In these early days, lawyers should maintain thorough documentation authenticating their evidence to preempt new avenues of doubt.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Reproduced with permission. Published April 20, 2023. Copyright 2023 Bloomberg Industry Group.