The 2026 State of Synthetic Media

From 500,000 deepfake files in 2023 to 8 million in 2025. An annual look at the numbers, the incidents, the legislation, and what the trajectory means for trust online.

Every year we try to take stock of where synthetic media actually stands: not the hype, not the panic, but the documented numbers, the real incidents, and the structural shifts that will determine whether the internet becomes a fundamentally untrustworthy place. 2025 was the year several long-building trends crossed thresholds that are hard to walk back from. Here's what the data shows.

The Volume Numbers

The clearest single signal is volume. Cybersecurity firm DeepStrike estimates that approximately 8 million deepfake files were shared online in 2025, up from roughly 500,000 in 2023: a sixteenfold rise in two years, with annual growth running close to 900% at its peak. The BitMind 2025 Year in Review corroborates the trajectory, counting 179 documented deepfake incidents in Q1 2025 alone, 19% more than were recorded in all of 2024.

Creation speed has collapsed. Some deepfakes can now be generated in as little as 27 seconds. Technical sophistication is no longer the barrier it once was: the tools are accessible to anyone with a phone. What required a production studio five years ago can now be done in a browser tab in minutes.

The quality threshold has also moved decisively. Human detection rates for high-quality deepfake video sit at just 24.5%, according to DeepStrike's research. In practice, a determined viewer examining a polished synthetic video will correctly identify it as fake less than one time in four. Detection by training alone is not a viable organizational strategy.

The Financial Toll

Where volume trends are alarming, the financial data is genuinely staggering. North American losses tied to deepfake fraud exceeded $200 million in the first quarter of 2025 alone. The IRONSCALES 2025 Deepfake Threat Report found that over half of surveyed organizations reported financial losses tied to deepfake or AI voice fraud in the past year, with an average loss of more than $280,000 per incident. Nearly 20% reported losses of $500,000 or more.

Looking further out, the Deloitte Center for Financial Services projects that U.S. fraud losses facilitated by generative AI will grow from $12.3 billion in 2023 to $40 billion by 2027 — a compound annual rate of 32%.

"In 2024, deepfake attacks occurred at a rate of one every five minutes. In the time it takes to read this paragraph, another has likely been attempted." DeepStrike, Deepfake Statistics 2025

The attack vector breakdown is instructive. Voice cloning remains the most widely deployed form — cheap, fast, and startlingly convincing. A 2024 McAfee study found that 1 in 4 adults had experienced an AI voice scam, with 1 in 10 having been personally targeted. Video-based fraud is the higher-value end of the attack spectrum, and the incidents from 2025 illustrate why.

Notable Incidents of 2025

The defining case wasn't a political deepfake or a celebrity fake; it was the Arup engineering firm incident, a fraud that actually dates to early 2024 but remained the most studied corporate deepfake case through 2025. A finance employee in the Hong Kong office of the UK-based firm was convinced to authorize 15 wire transfers totaling $25.6 million after participating in a video conference that appeared to feature the company's CFO and multiple colleagues. Every other person on that call was a deepfake. The attackers used publicly available footage of Arup executives to train the models. As of early 2026, none of the funds have been recovered and no arrests have been made. Arup's CIO later told the World Economic Forum that the incident revealed how completely standard verification procedures fail when the attacker controls both the audio and video of a "live" call.

Beyond corporate fraud, the political and personal dimensions of the problem widened considerably:

  • Ireland: A highly realistic deepfake video mimicking an RTÉ News broadcast circulated on Facebook ahead of the Irish presidential election in October 2025, fabricating statements by a candidate.
  • India: A 71-year-old retired doctor lost over ₹20 lakh (~$22,600) to scammers using a deepfake video of India's Finance Minister promoting a fake investment scheme.
  • Non-consensual imagery: Reports noted a sharp rise in sextortion cases involving AI-generated intimate images, with lawmakers in every U.S. state introducing some form of sexual deepfake legislation during 2025.
  • Celebrity targeting: 47 celebrity deepfake incidents were recorded in Q1 2025 alone — up 81% from 2024. Taylor Swift topped McAfee's 2025 Most Dangerous Celebrity list based on how often her likeness was exploited in AI-generated scams.

The IRONSCALES report surfaced a finding worth dwelling on: 99% of security leaders reported confidence in their deepfake defenses. In simulated detection exercises, the average organization scored 44%. Only 8.4% scored above 80%. The gap between perceived readiness and actual readiness is one of the more dangerous features of the current landscape.

The Legislative Response

Lawmakers moved faster in 2025 than in any previous year, though the legislation still lags well behind the threat.

The Take It Down Act, signed into federal law in 2025, requires online platforms to remove AI-generated non-consensual sexual content upon notice. The NO FAKES Act, reintroduced in Congress in April 2025, would establish a federal right protecting individuals' voice and visual likeness from unauthorized AI recreation, with a tiered liability framework for platforms. Critics note that the bill protects well-resourced public figures more readily than ordinary people, and that its preemption of existing state laws has raised constitutional concerns.

At the state level, every U.S. state introduced some form of deepfake legislation in 2025, primarily targeting non-consensual sexual content and AI manipulation of political advertising. Several state laws were challenged on First Amendment grounds; federal courts struck down two California laws as overbroad.

Looking into 2026, regulators are expected to expand their scope beyond individual creators to include the platforms, payment processors, and AI tools that enable deepfake production at scale. The EU AI Act's transparency obligations, which take effect in August 2026, require labeling of AI-generated content, a requirement that C2PA's AI assertion type is directly designed to satisfy.
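
What does that labeling look like in practice? As a rough illustration, and not a complete manifest, the sketch below shows the shape of a C2PA actions assertion declaring that an asset was AI-generated. The label and digitalSourceType URI follow the C2PA specification's use of the IPTC digital source type vocabulary; the tool name is hypothetical, and the claim, hash bindings, and signature that wrap a real assertion are omitted.

```python
import json

# Rough shape of a C2PA "actions" assertion declaring AI generation.
# The label and digitalSourceType value follow the C2PA specification;
# the softwareAgent name is hypothetical, and the claim, hash bindings,
# and signature that wrap a real assertion are omitted for brevity.
ai_generated_assertion = {
    "label": "c2pa.actions",
    "data": {
        "actions": [
            {
                "action": "c2pa.created",
                "digitalSourceType": (
                    "http://cv.iptc.org/newscodes/digitalsourcetype/"
                    "trainedAlgorithmicMedia"
                ),
                "softwareAgent": "ExampleImageGenerator 1.0",
            }
        ]
    },
}

print(json.dumps(ai_generated_assertion, indent=2))
```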

Provenance Technology: The Other Side of the Ledger

If the threat side of the ledger darkened considerably in 2025, the provenance side showed real movement for the first time.

The Coalition for Content Provenance and Authenticity (C2PA) saw meaningful hardware adoption. The Google Pixel 10 shipped with native C2PA support built into the camera app — making it the first widely available consumer device to sign images at capture using dedicated security hardware. The Content Authenticity Initiative noted that this moved provenance from an enterprise-only capability into the hands of millions of people. Photo Mechanic — the dominant ingest tool in professional photojournalism workflows — announced C2PA support in February 2026, creating a path for signed provenance to travel from camera through editing to publication without being stripped.
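
The mechanics of signing at capture are worth making concrete. The sketch below is not the Pixel's implementation or the C2PA wire format; it is a minimal illustration of the underlying pattern, using Python's cryptography library: a private key held by the device signs the image bytes the moment they are captured, and anyone holding the matching public key can later verify that those bytes are unchanged.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- At capture ---
# On a real device the private key lives in (and never leaves) the
# security chip; generating one here is purely for illustration.
device_key = Ed25519PrivateKey.generate()
image_bytes = b"...raw sensor data for the captured frame..."
signature = device_key.sign(image_bytes)

# --- Later, anywhere downstream ---
# A verifier needs only the device's public key and the file. If even
# one byte changed after capture, verification raises InvalidSignature.
public_key = device_key.public_key()
try:
    public_key.verify(signature, image_bytes)
    print("intact: these bytes are exactly what the camera signed")
except InvalidSignature:
    print("modified after capture, or signed by a different key")
```

A real C2PA manifest is much richer than a bare signature: it binds hashes of the asset to signed assertions about capture and edits and carries a certificate chain, which is why hardware-backed keys matter; the signature is only as trustworthy as the key that made it.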

The gap remains distribution. As Glyn Dewis noted in a detailed 2026 analysis of C2PA adoption, "basically no photos published online were carrying C2PA metadata" as of 2025, and while that is improving, it remains a chicken-and-egg adoption loop rather than a solved problem: platforms have little incentive to preserve and display credentials that few files carry, and publishers have little incentive to attach credentials that platforms strip. The infrastructure is being built; the question is whether the pipeline from capture to publication to platform display can be completed before the trust deficit becomes structural.
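
The stripping problem is easy to observe directly. In JPEGs, C2PA stores its manifest in JUMBF boxes carried in APP11 marker segments, so a quick scan of a file's segments shows whether any provenance container survived a platform's re-encode. Below is a minimal sketch; it is a heuristic only, since APP11 can carry non-C2PA JUMBF data too, and full validation requires parsing and verifying the manifest itself.

```python
import struct
import sys

def has_app11_segment(path: str) -> bool:
    """Scan a JPEG's header segments for APP11, the marker segment in
    which C2PA embeds its JUMBF manifest boxes. A hit means a provenance
    container may be present; a miss means it was never there or was
    stripped somewhere in the pipeline."""
    with open(path, "rb") as f:
        if f.read(2) != b"\xff\xd8":               # SOI: must be a JPEG
            raise ValueError("not a JPEG file")
        while True:
            byte = f.read(1)
            if not byte:
                return False                        # EOF, no APP11 found
            if byte != b"\xff":
                continue
            marker = f.read(1)
            while marker == b"\xff":                # skip fill bytes
                marker = f.read(1)
            if not marker or marker == b"\xda":     # SOS: headers end here
                return False
            if marker == b"\x01" or b"\xd0" <= marker <= b"\xd7":
                continue                            # standalone, no length
            (length,) = struct.unpack(">H", f.read(2))
            if marker == b"\xeb":                   # APP11 found
                return True
            f.seek(length - 2, 1)                   # skip segment payload

if __name__ == "__main__":
    found = has_app11_segment(sys.argv[1])
    print("APP11 segment present: provenance container may have survived"
          if found else
          "no APP11 segment: any C2PA metadata was stripped or never added")
```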

What the Trajectory Means

The numbers from 2025 tell a coherent story. Volume is up 16x in two years. Financial losses are accelerating. Detection by humans has essentially failed as a primary defense. Legislation is advancing but remains reactive and fragmented. And provenance technology — the only approach that addresses the problem at the source rather than the symptom — is gaining hardware adoption but has yet to achieve the pipeline completeness that would make it a systemic answer.

The structural problem hasn't changed since Chesney and Citron described it in 2019: the asymmetry between making a fake and proving something is real. What has changed is the scale. In 2025, that asymmetry stopped being a theoretical concern and became a documented operational reality for corporations, courts, journalists, and individuals alike.

The case for authentication at the source — embedding proof of origin into media at the moment of capture, before any question of provenance arises — becomes harder to argue against with each passing quarter. The infrastructure is being built. The regulatory pressure is increasing. What's needed now is adoption at the pace the threat demands.