In January 2024, a robocall mimicking President Biden's voice told New Hampshire Democrats to skip the primary and save their votes for November. The call was fake: AI-generated in minutes, distributed at scale, and convincing enough that the Federal Communications Commission had to issue an emergency ruling banning AI-generated voices in robocalls. It was an early warning shot. Eighteen months later, the situation is categorically worse.
A Crisis of Scale, Not Just Sophistication
The numbers are no longer theoretical. According to cybersecurity firm DeepStrike, the volume of deepfake content shared online grew from roughly 500,000 files in 2023 to an estimated 8 million in 2025, a sixteenfold increase in two years. In the first quarter of 2025 alone, there were more recorded deepfake fraud incidents than in all of 2024. Financial losses from AI-enabled fraud in the United States exceeded $3 billion between January and September 2025. The Deloitte Center for Financial Services projects those losses will reach $40 billion annually by 2027. This is not a niche threat. It is a mainstream one.
Voice cloning has crossed what researchers are calling the "indistinguishable threshold." A few seconds of audio now suffice to generate a convincing clone, complete with natural intonation, rhythm, pauses, and breathing noise. Major retailers report receiving more than 1,000 AI-generated scam calls per day. A 2023 McAfee study found that 1 in 4 adults have experienced an AI voice scam themselves or know someone who has. Human detection rates for high-quality video deepfakes sit at just 24.5%. The perceptual tells that once gave synthetic content away have largely disappeared.
"In practical terms, synthetic media have become indistinguishable from authentic recordings for ordinary people and, in some cases, even for institutions." — Dr. Siwei Lyu, computer scientist and deepfake researcher, University at Buffalo
The Real-World Damage Is Already Here
The harms extend well beyond financial fraud. AI-generated content is actively corroding the foundations of democratic discourse. Deepfakes have been used to discourage voters in Indian elections, to fabricate battlefield footage during the Russian invasion of Ukraine, and to destabilize governments: in Gabon, an allegation that a video of the president was a deepfake helped fuel an attempted military coup. The World Economic Forum's Global Risks Report 2025 ranked misinformation and disinformation as the number one short-term global risk for the second consecutive year, identifying generative AI as its primary accelerant.
But there is a subtler harm that may prove even more durable: the erosion of trust in authentic content. Scholars Robert Chesney and Danielle Citron named this the "liar's dividend": bad actors can dismiss genuine evidence as AI-generated simply because synthetic content now exists. As awareness of deepfakes grows, so does the plausibility of claiming that real footage is fake. The problem is not only that we believe false things. It is that we stop being able to trust true ones.
Why the Current Toolbox Is Failing
The industry's primary response has been a combination of metadata standards and AI watermarking — most notably the Coalition for Content Provenance and Authenticity (C2PA), which attaches cryptographically signed provenance records to media files. C2PA is serious, well-designed infrastructure backed by Adobe, Microsoft, Google, and major camera manufacturers. It is also structurally limited in ways that its proponents frequently understate.
The core problems are these:
- Metadata is trivially stripped. WhatsApp, iMessage, Facebook, Instagram, and most social platforms automatically re-encode images on upload, silently removing any embedded C2PA credentials (the sketch after this list demonstrates the failure mode). A file that leaves a camera with a valid manifest arrives at its destination with no signal that credentials ever existed.
- Absence of credentials proves nothing. The vast majority of authentic content today carries no C2PA manifest. Treating an unsigned file as suspicious would condemn nearly everything on the internet. The standard only works if adoption is near-universal — a condition that will not be met for years, if ever.
- C2PA cannot detect fabrication. A camera can cryptographically sign a photo of a screen displaying a deepfake. The manifest passes every check. The content is still false. C2PA verifies the signer's claims — not the underlying reality.
- Visible watermarks are easily defeated. They can be cropped, inpainted over, or removed with freely available tools in seconds. Invisible statistical watermarks are more robust but can still be degraded or stripped through compression and re-encoding.
- Adoption barriers are significant. A trusted C2PA signing certificate currently costs approximately $289 per year from the only authorized Certificate Authorities — creating real friction for individual creators and smaller organizations.
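To make the first failure concrete, here is a minimal sketch, in Python with the Pillow library, of what a platform-style re-encode does to embedded metadata. The filenames are hypothetical, and EXIF stands in for embedded credentials generally; a C2PA manifest lives in a JUMBF segment of the JPEG, which a naive re-encode discards the same way.

```python
from PIL import Image

# Hypothetical input: a photo carrying embedded metadata.
original = Image.open("signed_photo.jpg")
print("metadata bytes before:", len(original.info.get("exif", b"")))

# Re-encode the pixels, roughly what most platforms do on upload.
# Nothing is carried over except the image data itself.
original.save("reuploaded.jpg", quality=85)

recoded = Image.open("reuploaded.jpg")
print("metadata bytes after:", len(recoded.info.get("exif", b"")))  # typically 0
```

The pixels survive the round trip essentially intact; everything attached to the container does not.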
None of this makes C2PA worthless. It is a meaningful step forward for structured media pipelines at large organizations. But as a comprehensive solution to the trust crisis facing everyday digital content, it is insufficient. The attack surface it leaves open is enormous.
Authentication That Survives the Real World
What the moment requires is not metadata that can be stripped but authentication baked into the content itself. Steganographic embedding addresses the fundamental weakness of external metadata standards: it conceals authentication data within the pixels, audio frames, or video structure of a file, so that the provenance signal travels with the content regardless of what happens to its container. Even after the file is cropped, compressed, re-encoded, and re-shared across platforms that strip every byte of conventional metadata, the authentication persists.
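Mysterion's production embedding scheme is not described here, but the architectural idea can be shown with the simplest textbook example: least-significant-bit (LSB) steganography, sketched below in Python with NumPy and Pillow. The function names and the payload are illustrative, not part of any real API.

```python
import numpy as np
from PIL import Image

def embed_lsb(pixels: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bits in the least significant bit of each channel value."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.flatten()  # flatten() copies, so the input array is untouched
    if bits.size > flat.size:
        raise ValueError("payload too large for this image")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_lsb(pixels: np.ndarray, n_bytes: int) -> bytes:
    """Read n_bytes back out of the least significant bits."""
    bits = pixels.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

payload = b"creator=alice;issued=2025-06-01"  # hypothetical provenance record
img = np.asarray(Image.open("original.png").convert("RGB"), dtype=np.uint8)
stamped = embed_lsb(img, payload)
assert extract_lsb(stamped, len(payload)) == payload  # invisible, but present
```

A toy LSB scheme like this survives lossless formats but not JPEG re-compression; robust production watermarks instead spread a redundant, error-corrected signal across frequency-domain coefficients so it persists through compression and resizing. The point of the sketch is the architecture, the payload living inside the pixels, not the specific encoding.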
This matters because content doesn't move through the world the way standards bodies imagine it does. It moves through WhatsApp threads, Discord channels, screenshots, and social feeds. It gets resized, re-exported, and recompressed dozens of times before it reaches most of the people who see it. An authentication method that survives only inside a controlled pipeline is not a solution to public misinformation. A signal embedded in the content itself is.
The difference, in practice, is the difference between a passport and a luggage tag. A luggage tag can be removed or swapped. A passport contains the identity. Steganographic authentication is the passport model — the credential is the content, not an annotation attached to it.
That is the problem Mysterion is built to solve. Our approach embeds cryptographically verifiable provenance directly into media at the moment of creation — invisible to viewers, durable across distribution channels, and verifiable by anyone without requiring the cooperation of every platform in the chain. If you created something, you can prove it. If content has been tampered with, that tampering is detectable. In a media environment where the volume of synthetic content is doubling every few months, that is not a nice-to-have. It is the foundation of trust.
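The cryptographic half of that claim has a familiar shape, even though Mysterion's actual protocol is not specified here. The sketch below, using Ed25519 signatures from Python's `cryptography` package, shows the general pattern under that assumption: sign the media at creation, ship the signature with the content (in a real system, embedded steganographically as above), and let anyone holding the public key verify it.

```python
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Creator side: sign the media bytes at the moment of creation.
signing_key = Ed25519PrivateKey.generate()
media = Path("original.png").read_bytes()  # hypothetical file
signature = signing_key.sign(media)  # 64-byte signature to embed in the content

# Verifier side: anyone holding the public key can check the claim,
# with no cooperation needed from the platforms in between.
public_key = signing_key.public_key()
try:
    public_key.verify(signature, media)
    print("provenance verified: content unmodified since signing")
except InvalidSignature:
    print("tampering detected: content no longer matches its signature")
```

One real design consideration the sketch glosses over: embedding the signature changes the file's bytes, so a production scheme must sign something stable under embedding, such as the content with its embedding slots zeroed out or a robust perceptual hash, rather than the raw bytes.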
The tools that allowed anyone to fabricate convincing media in minutes exist now. The tools to prove what's real need to exist in every camera, every publishing workflow, and every distribution channel — not as optional add-ons, but as defaults.