
Deepfakes, Disinformation, and the Fight for Media Authenticity

Deepfake videos surged 16-fold between 2023 and 2025. Here is what that means for elections, financial markets, and public trust.

ResponsibleAI Labs
March 23, 2026
9 min read

Deepfake videos shared online surged from 500,000 in 2023 to a projected 8 million by 2025 - a 16-fold increase. Losses from deepfake-enabled fraud exceeded $200 million in the first quarter of 2025 alone, and 38 countries have experienced deepfake interference in their elections. This article examines the scale of the synthetic media threat, the emerging regulatory and technical responses, and what remains to be done.


From novelty to national security threat

Not long ago, deepfakes were a curiosity - impressive but detectable, entertaining but rarely consequential. That era is over. In 2026, synthetic media has matured into one of the most versatile tools in the arsenals of fraudsters, foreign influence operators, and online harassers. Deepfake-as-a-service platforms became widely available in 2025, lowering the cost and technical skill required to produce convincing fake video, audio, and images to near zero.

The numbers tell the story of an exponential escalation. According to European Parliamentary Research Service estimates, deepfake files surged from roughly 500,000 in 2023 to a projected 8 million by 2025. Deepfake incidents recorded in the first half of 2025 exceeded the full-year 2024 total by 171%, according to tracking by Resemble AI. And a 2025 iProov study delivered a sobering finding: only 0.1% of human participants correctly identified all of the deepfakes and genuine media shown to them.

The implications extend far beyond individual deception. Deepfakes are now a systemic risk - to financial markets, democratic elections, public health, and personal safety.


The scale of the threat

The deepfake explosion: key statistics

Figure 1: Key statistics on the deepfake threat, from financial losses and election interference to the human detection gap.

Financial fraud

Deepfake-enabled fraud is growing at an alarming rate. Financial losses exceeded $200 million in Q1 2025 alone, according to Resemble AI's incident tracking. In the United States, deepfake-related losses reached $1.1 billion in 2025, tripling from $360 million the prior year, with deepfake-enabled vishing attacks surging over 1,600% between Q4 2024 and Q1 2025.

Corporate impersonation has become a particularly costly vector. In one widely reported 2024 incident, criminals used deepfake video to impersonate senior executives at British engineering firm Arup, convincing staff to transfer $25 million to fraudulent accounts. According to Cyble's threat monitoring, AI-powered deepfakes were involved in over 30% of high-impact corporate impersonation attacks in 2025. Deloitte projects that deepfake-enabled fraud losses in the US alone could reach $40 billion by 2027.

Just three seconds of audio is now sufficient to clone a voice with an 85% match to the original speaker - making phone-based social engineering trivially easy at scale.

Non-consensual intimate imagery

The most common use of deepfake technology by volume remains the creation of non-consensual sexually explicit content. In Q2 2025, 84% of deepfake attempts targeted women. South Korea reported approximately 297 deepfake sex crime cases in just seven months of 2024, nearly double the figure from 2021. AI-generated sexual images of Taylor Swift reached 47 million views before removal, illustrating the speed at which such content can spread.

In late 2025 and early 2026, the Grok chatbot faced global criticism after its image-editing features were used to generate non-consensual sexualized imagery of women and children, leading X (formerly Twitter) to restrict the feature following investigations in multiple countries.

Political manipulation and election interference

The weaponization of deepfakes in elections has accelerated dramatically: 38 countries have experienced election-related deepfake incidents since 2021, in nations with a combined population of 3.8 billion people.

Election deepfakes: a global timeline

Figure 2: Major deepfake election incidents across seven countries in 2024–2025, from robocalls to fabricated candidate withdrawals.

The pattern is striking for its consistency across geographies. In the January 2024 New Hampshire primary, a deepfake Biden robocall urged Democrats not to vote, resulting in a $6 million FCC fine. During India's 2024 elections, a deceased politician's AI-generated avatar appeared at a live rally. In Argentina's May 2025 Buenos Aires elections, deepfakes falsely claimed candidates had withdrawn hours before polls opened. Poland's May 2025 presidential election saw AI-generated images featured in four of 23 viral disinformation videos - none with disclosure labels. In Ireland's October 2025 presidential election, a deepfake falsely announced a candidate's withdrawal four days before the vote.

As the Turing Institute's Centre for Emerging Technology and Security observed: the most effective election deepfakes are deployed in the final hours before polls open, when fact-checkers have no time to respond. The goal is not always to change minds - sometimes it is simply to create enough confusion that people stay home.


The detection crisis

The technical challenge of detecting deepfakes is growing harder, not easier. Human detection accuracy for high-quality deepfake video stands at just 24.5%. Detection tools trained on known deepfake techniques often fail when encountering newer or adversarial approaches. The detection tool market is projected to grow from $5.5 billion in 2023 to $15.7 billion by 2026, reflecting urgent demand - but the fundamental arms race between generation and detection shows no sign of resolving.

Several approaches are emerging. Content provenance systems - most notably the C2PA Content Credentials standard, backed by Adobe, Microsoft, Intel, and others - aim to embed verifiable metadata into media at the point of creation, establishing a tamper-evident chain of custody. Google DeepMind's SynthID applies invisible watermarks to AI-generated images and audio. Enterprise detection platforms like Reality Defender and Sensity AI offer real-time scanning APIs for businesses.
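To make the provenance idea concrete, here is a minimal conceptual sketch of a tamper-evident content manifest. It is not the C2PA API: real Content Credentials are signed with X.509 certificate chains and embedded in the media file itself, whereas this sketch uses an HMAC over a JSON manifest purely to keep the example self-contained. All function names are illustrative.

```python
import hashlib
import hmac
import json

# Stand-in signing key. Real C2PA manifests are signed with X.509
# certificate chains, not a shared secret; HMAC keeps this sketch runnable.
SIGNING_KEY = b"demo-key-not-for-production"

def attach_credentials(media: bytes, claims: dict) -> dict:
    """Bundle a media hash and claims into a signed, tamper-evident manifest."""
    manifest = {
        "media_sha256": hashlib.sha256(media).hexdigest(),
        "claims": claims,  # e.g. capture device, edit history, AI involvement
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credentials(media: bytes, manifest: dict) -> bool:
    """Accept only if both the signature and the media hash check out."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # the manifest itself was altered
    return unsigned["media_sha256"] == hashlib.sha256(media).hexdigest()

photo = b"\x89PNG...original pixel data"
manifest = attach_credentials(photo, {"generator": "camera", "ai_generated": False})
print(verify_credentials(photo, manifest))                # True
print(verify_credentials(photo + b"tampered", manifest))  # False
```

The property this illustrates is the one provenance systems depend on: any edit to the pixels or to the claimed history invalidates the signature, so verification fails loudly rather than silently.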

But these solutions face a fundamental adoption problem. Provenance standards only work if universally implemented. Open-source generation tools and non-compliant platforms can easily bypass watermarks. And even well-resourced platforms struggle with scale: by Q3 2025, YouTube, Instagram, Facebook, TikTok, and WhatsApp were the top five platforms for deepfake distribution.


The regulatory response

Governments worldwide have moved from debating deepfakes to legislating against them - though the resulting framework remains fragmented and uneven.

Global deepfake regulation landscape

Figure 3: Major regulatory approaches across the US, EU, Asia-Pacific, UK, and France, plus key industry responses.

United States

The US has enacted significant legislation in 2025–2026. The TAKE IT DOWN Act, signed in May 2025, criminalizes the publication of non-consensual intimate deepfakes with penalties of up to two years' imprisonment (three for content involving minors). Platforms must remove flagged content within 48 hours. The DEFIANCE Act, which passed the Senate unanimously in January 2026, would create a federal civil remedy allowing victims to sue for up to $250,000 in damages.

At the state level, the pace has been even faster. As of February 2026, 46 states have enacted deepfake-specific legislation, with 146 bills introduced in 2025 alone. These laws vary widely - some focus on election integrity, others on intimate imagery, and a growing number address AI content labeling requirements. California's AI Transparency Act (AB853) established watermarking standards, while New York's proposed Stop Deepfakes Act would require traceable metadata on all AI-generated content.

European Union

The EU's approach is the most comprehensive globally. Article 50 of the EU AI Act, which becomes enforceable on August 2, 2026, requires AI-generated content to be "marked in a machine-readable format and detectable as artificially generated." Deployers must disclose synthetic content clearly at first interaction. Penalties can reach €15 million or 3% of global turnover. The European Commission published the first draft of its Code of Practice on AI-generated content transparency in December 2025, with the final version expected mid-2026.

Asia-Pacific

China's "Deep Synthesis" provisions, in force since 2023 with 2025 updates, require mandatory labeling and real-identity verification for AI-generated content. South Korea's AI Basic Act, effective January 2026, addresses both deepfake harms and broader AI fairness. India, whose massive social media user base makes it particularly vulnerable to viral deepfakes, has increased digital surveillance and partnerships with platforms, though detection continues to lag behind creation.

UK and France

The UK's Online Safety Act requires platforms to remove illegal deepfakes, including AI sexual imagery, once notified. France amended its Penal Code in 2024 to criminalize non-consensual sexual deepfakes (up to two years' imprisonment and €60,000 fines), with a proposed bill mandating AI content labeling on social networks pending.


The governance gap

Despite this flurry of legislation, significant gaps remain. As the London School of Economics observed in a December 2025 analysis, current AI regulation "misdiagnoses the challenge" by treating deepfakes primarily as a content distribution and transparency problem - solvable through labeling and platform moderation - while overlooking their use as tools of organized financial crime, identity theft, and systematic harassment.

Key structural weaknesses include:

Fragmentation. With 46 US states, federal law, and multiple international frameworks applying different standards, compliance is burdensome and enforcement inconsistent. A single deepfake incident may cross multiple jurisdictions simultaneously - 65% of tracked incidents in Q3 2025 crossed national borders.

The timing problem in elections. Deepfakes deployed in the final hours before polling cannot be debunked in time. No current regulation addresses this temporal vulnerability specifically.

Detection lag. Regulation requires enforcement, and enforcement requires detection. But detection technology consistently lags generation capability, creating an enforcement gap even where strong laws exist.

Emerging attack vectors. Data-poisoning attacks on AI chatbots - corrupting the training data of widely used LLMs to make them generate misleading election information - represent a new frontier that existing deepfake-focused legislation does not address.

Insurance gaps. Most standard corporate crime and fidelity policies exclude deepfake-enabled fraud under "voluntary parting" clauses. As Jones Walker LLP noted in 2026, most companies remain financially exposed to deepfake losses.


What comes next

The deepfake challenge is, fundamentally, a challenge to the concept of shared truth. As media scholars warn, the long-term effect may not be that people believe specific deepfakes, but that the existence of convincing synthetic media breeds universal skepticism - what researchers call "truth fatigue" - in which any inconvenient piece of evidence can be dismissed as potentially fake.

Addressing this requires action across multiple fronts:

Universal content provenance. The C2PA standard and similar approaches need to move from voluntary adoption to industry-wide and regulatory-mandated implementation. If provenance metadata becomes the norm, content without it can be flagged for scrutiny.

Election-specific protections. Regulators should establish "quiet periods" prohibiting the distribution of unverified synthetic media in the 48–72 hours before elections, with expedited platform takedown obligations.

Financial system hardening. Multi-factor authentication that goes beyond voice and video verification - incorporating behavioral biometrics, out-of-band confirmation, and real-time deepfake detection - is essential to protect against corporate impersonation fraud.

Media literacy at scale. Public education programs that teach citizens how to verify media provenance, recognize deepfake indicators, and report suspicious content are a critical complement to technical solutions. The fact that only 13% of companies have anti-deepfake protocols - and 22% of people have never heard of deepfakes - underscores the gap.

International coordination. Deepfake threats are inherently transnational. Cross-border detection sharing, harmonized legal definitions, and mutual assistance frameworks are needed to match the global reach of deepfake operations.


Conclusion

The era of "seeing is believing" is ending. In its place, we are entering what experts call the "post-truth AI era," where the ability to generate convincing synthetic media at zero cost and global scale challenges every institution that depends on authentic evidence - from courts to newsrooms to democracies themselves.

The tools to fight back exist: content provenance standards, detection algorithms, legislative frameworks, and public education. The question is whether they will be deployed with the speed and coordination that the threat demands. With the 2026 US midterms approaching, the stakes could not be higher.


References

  1. Stimson Center (2026). "AI in the Age of Fake (Imagined) Content."

  2. Jones Walker LLP (2026). "Deepfakes-as-a-Service Meets State Laws."

  3. TechPolicy.Press (2026). "What the EU's New AI Code of Practice Means for Labeling Deepfakes."

  4. Ondato (2026). "Deepfake Laws Explained: Global Regulations and Legal Risks."

  5. Reality Defender (2025). "The State of Deepfake and AI Regulations."

  6. Pearl Cohen (2025). "New Guidance under the EU AI Act."

  7. Regula Forensics (2025). "Deepfake Regulations: AI and Deepfake Laws of 2025."

  8. Programs.com (2025). "Deepfake Legislation Tracker."

  9. Turing Institute CETaS (2025). "From Deepfake Scams to Poisoned Chatbots."

  10. Surfshark (2025). "38 countries have faced deepfakes in elections."

  11. SQ Magazine (2026). "Deepfake Statistics 2026: The Hidden Cyber Threat."

  12. Keepnet Labs (2026). "Deepfake Statistics & Trends 2026."

  13. Programs.com (2025). "The Latest Deepfake Facts & Statistics."

  14. Variety (2025). "Deepfake-Enabled Fraud Has Already Caused $200 Million in Financial Losses in 2025."

  15. Cyble (2026). "Deepfake-as-a-Service Exploded In 2025."

  16. LSE International Development (2025). "The Deepfake Blindspot in AI Governance."


This article is part of ResponsibleAI Labs' 2026 series on emerging AI ethics and risk. For more, visit responsibleailabs.com.