The Liar's Dividend: How Deepfakes Make Real Evidence Deniable

When everyone knows fakes are possible, anyone can claim the truth is one. That asymmetry — not the fakes themselves — is the deeper crisis.

In 2019, law professors Robert Chesney and Danielle Citron published a landmark paper in the California Law Review titled "Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security." Most readers focused on the obvious threat the paper described: synthetic media that makes people appear to say and do things they never did. But buried inside it was a second, subtler problem — one that may prove even harder to contain.

Chesney and Citron called it the liar's dividend: the advantage accruing to bad actors who, in a world saturated with convincing fakes, can dismiss authentic evidence as fabricated. The logic is clean and merciless. As the public learns that deepfakes are real and increasingly indistinguishable from genuine footage, the mere possibility of fabrication becomes a credible defense against the real thing. Deepfakes don't just manufacture false evidence. They corrode the evidentiary value of true evidence.

Real Evidence, Dismissed

The liar's dividend has already moved from theory to practice, in courtrooms, boardrooms, and political campaigns.

In litigation over Tesla's self-driving technology, lawyers for the company argued that video of Elon Musk making public claims about autonomous vehicles should be treated with skepticism because Musk "like many public figures, is the subject of many 'deepfake' videos." The presiding judge rejected the argument in terms that deserve attention: "What Tesla is contending is that Mr. Musk, and others in his position, can simply say whatever they like in the public domain, then hide behind the potential for their recorded statements being a deep fake to avoid taking ownership of what they did actually say and do." The court declined to set that precedent — but the fact that the argument was made at all signals the direction of travel.

Defendants in trials related to the January 6th Capitol riot attempted to challenge video evidence of their presence by invoking the possibility of AI manipulation. In a UK child custody dispute, a mother submitted a doctored audio recording as evidence, and when challenged, the father's attorney warned that "it would never occur to most judges that deepfake material could be submitted as evidence." Courts, for now, have generally held firm — but the burden of defending authenticity is rising with every case.

Outside the courtroom, the pattern is the same. A Slovak politician whose embarrassing audio was confirmed as authentic by independent researchers nonetheless insisted it was AI-generated. An Indian politician made the same claim about audio clips, even after research teams concluded at least one was real. A U.S. mayor called tape-recorded comments "phony, engineered tapes" despite expert confirmation of their authenticity. These aren't isolated incidents. They are a strategy — one that works better every year.

The Structural Asymmetry

What makes the liar's dividend structurally dangerous is the asymmetry it creates. Claiming that something is fake is cheap and instant. Proving it is real is expensive, slow, and never fully conclusive.

A journalist publishes footage of a government official committing misconduct. The official calls it a deepfake. A forensic analysis takes days or weeks. The original story is buried in the rebuttal cycle. Even if the analysis confirms authenticity, a sizable portion of the audience will have moved on — or will trust the denial over the technical report. Doubt, once seeded, grows independently of the facts that follow.

"When people learn that deepfakes are increasingly realistic, false claims that real content is AI-generated become more persuasive too." Brennan Center for Justice, analyzing the Chesney/Citron framework

This asymmetry scales badly across every domain where documented evidence matters:

  • Courts: Any audio, video, or image can now be challenged as synthetic, forcing resource-intensive authentication proceedings that disadvantage parties without forensic experts on retainer.
  • Journalism: Eyewitness footage of atrocities, police conduct, or political wrongdoing faces automatic credibility challenges — not because the footage is suspect, but because the category of footage is.
  • Elections: Authentic recordings of a candidate's statements can be dismissed by the candidate's own campaign as AI-fabricated, with no requirement to prove the claim before it shapes public perception.
  • Personal reputation: Real images and videos of private individuals — submitted in custody disputes, harassment cases, or civil proceedings — are now vulnerable to a denial strategy that, a few years ago, only well-resourced defendants could afford to run.

The unifying feature across all of these is that the victim of the denial bears the burden of proof, in an environment where that proof grows harder to supply every year.

Why Detection Cannot Win This

The intuitive response is to build better detectors. If AI can reliably identify deepfakes, the argument goes, false claims of fabrication can be refuted quickly and the liar's dividend collapses.

The problem is that detection is structurally downstream of generation. Deepfake detectors are trained on known fakes; they fail on architectures they haven't seen. Every advance in detection incentivizes a corresponding advance in generation. Deepfake creators have already learned to use adversarial techniques — pixel-level manipulations that defeat specific detectors — as part of their workflow. Detection accuracy is measured in a controlled environment; in the wild, with compressed video, re-encoded files, and novel generation techniques, the numbers deteriorate quickly.
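How cheap that evasion can be is easy to see in a deliberately simplified sketch. Everything below is invented for illustration (a linear toy detector, an exaggerated perturbation budget); real detectors are deep networks and real attacks are subtler, but the gradient-following principle is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)                 # toy "detector": a linear score over 64 pixels

def flagged_as_fake(frame: np.ndarray) -> bool:
    return frame @ w > 0                # positive score means "fake"

# A frame the detector correctly flags as fake.
frame = rng.normal(size=64) + 0.5 * np.sign(w)
print(flagged_as_fake(frame))           # True

# Gradient-sign evasion: for a linear score the gradient is just w, so nudging
# every pixel against sign(w) lowers the score as fast as possible per pixel.
eps = 1.0                               # per-pixel budget (exaggerated for the toy scale)
adversarial = frame - eps * np.sign(w)
print(flagged_as_fake(adversarial))     # False: the verdict flips, the content doesn't
```

The attacker needs nothing but query access to the detector's gradient (or an approximation of it); the defender needs a new detector.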

This is not a solvable engineering problem. It is a structural one. Detection is reactive by definition. The generator always gets first-mover advantage, and in the liar's dividend scenario, even a small probability of fabrication is enough to give a determined bad actor room to maneuver.

There is also a deeper issue: even accurate detection produces a probabilistic finding, not a legal or journalistic standard of proof. "This video is 94% likely to be authentic" is a weaker foundation for accountability than a verifiable chain of custody from capture to publication. Courts, news organizations, and democratic institutions need the latter, not the former.
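A back-of-envelope calculation shows how little a probabilistic flag can settle. The prevalence and error rates below are assumptions chosen for illustration, not measurements; the arithmetic is just Bayes' rule.

```python
# Illustrative only: prevalence and error rates are assumed, not measured.
p_fake = 0.05            # suppose 5% of disputed clips really are fakes
p_flag_given_fake = 0.94 # detector sensitivity
p_flag_given_real = 0.06 # detector false-positive rate

p_flag = p_flag_given_fake * p_fake + p_flag_given_real * (1 - p_fake)
p_fake_given_flag = p_flag_given_fake * p_fake / p_flag
print(f"P(actually fake | flagged as fake) = {p_fake_given_flag:.0%}")  # about 45%
```

Under those assumed numbers, a "fake" flag is wrong more often than it is right. A verified chain of custody fails differently: it asserts a fact about specific bytes, not a likelihood about a population of clips.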

Authentication at the Source

The only way to close the asymmetry is to shift the burden of proof before authenticity ever becomes a dispute. If the authenticity of a piece of media can be verified by reference to an unforgeable record made at the moment of capture — before any chain of custody questions arise — then a claim of fabrication has to contend with a specific, cryptographically verifiable fact, not just general doubt about the category.

This is the logic behind creation-time authentication: embedding proof of origin and integrity into media at the instant it is captured, not retrofitted afterward. When a video carries a cryptographic signature tied to the device, time, and conditions of its recording, the question "is this real?" has a concrete answer that doesn't depend on forensic analysis of the content itself. The provenance is the proof.
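As a minimal sketch of what such a record could look like, the Python below uses the `cryptography` package's Ed25519 primitives, with a freshly generated key standing in for one provisioned in a device's secure hardware. The record format (`device_id`, `captured_at`, `media_sha256`) is invented for this example; it is not Mysterion's format or any published standard.

```python
import hashlib, json, time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

device_key = Ed25519PrivateKey.generate()   # in practice: sealed in device hardware

def capture_record(media: bytes, device_id: str) -> dict:
    """Sign a provenance record at the moment of capture."""
    record = {
        "device_id": device_id,
        "captured_at": int(time.time()),
        "media_sha256": hashlib.sha256(media).hexdigest(),
    }
    canonical = json.dumps(record, sort_keys=True).encode()
    record["signature"] = device_key.sign(canonical).hex()
    return record

def verify_record(media: bytes, record: dict, public_key: Ed25519PublicKey) -> bool:
    """Anyone holding the device's public key can later check the claim."""
    if hashlib.sha256(media).hexdigest() != record["media_sha256"]:
        return False                        # the bytes are not what was captured
    claim = {k: v for k, v in record.items() if k != "signature"}
    canonical = json.dumps(claim, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(record["signature"]), canonical)
        return True
    except InvalidSignature:
        return False                        # record was forged or tampered with

frame = b"raw sensor bytes of one captured frame"
record = capture_record(frame, "cam-0042")
print(verify_record(frame, record, device_key.public_key()))          # True
print(verify_record(frame + b"x", record, device_key.public_key()))   # False
```

Even this toy version inverts the asymmetry: a denier must now explain away a signature bound to a specific device and moment, rather than gesture at the general existence of deepfakes.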

This is what Mysterion is built to provide. By embedding invisible, steganographic authentication directly into media at the point of creation, every frame carries its own verifiable record. That record can't be detached from the content, can't be retroactively added to synthetic media, and doesn't degrade as the file passes through compression and re-encoding. When authenticity is challenged — in a court, a newsroom, or a public forum — the response is not a probability estimate from a detector. It is a verified chain of custody that begins at the shutter.

Chesney and Citron identified the liar's dividend as a looming crisis. It has arrived. The question now is whether we respond with tools that play defense at the wrong end of the problem — or with infrastructure that makes authentic media provably, irrefutably real from the moment it exists.