Deepfake Statistics By Types, Fraud, Crime, Scams and Facts (2026)
Updated · Dec 22, 2025
Table of Contents
- Introduction
- Editor’s Choice
- General Deepfake Statistics
- Impacts of Deepfake Threats
- Types of Deepfake Statistics
- Deepfake Marketing Use Cases Statistics
- Deepfake Fraud Statistics
- Deepfake Scams Statistics
- Deepfake Crime Statistics
- Deepfake Technology Market Statistics
- Length of Deepfake Ads Statistics
- Deepfake Statistics By Affected Sectors
- Employee Confidence In Spotting AI Deepfakes
- Conclusion
Introduction
Deepfake statistics show that synthetic media is spreading quickly. As AI tools grow more powerful, it has become far easier to create videos, images, and audio that look real but are fake. Recent data indicate a sharp rise in deepfake content online, and much of it is tied to scams, misinformation, and privacy violations. At the same time, the numbers show that people are putting more effort into finding and blocking deepfake content.
These patterns show how deepfakes are affecting ordinary individuals, businesses, and even government agencies. In general, the statistics indicate that deepfakes are increasing rapidly and will continue to shape how we interact and share information online.
Editor’s Choice
- The average deepfake fraud incident now costs around USD 500K.
- Deepfake files are projected to reach 8 million by 2025.
- About 96% to 98% of deepfakes online are intimate images made without consent.
- Due to deepfakes, companies worldwide lost an average of USD 500,000, while large firms lost up to USD 680,000.
- By 2025, approximately 81% of marketers are expected to use deepfakes with synthetic voices, and 71% are expected to use them for localisation.
- About 1 in 4 leaders admit they do not fully understand deepfakes.
- North America saw a substantial 1,740% increase in deepfake fraud.
- Deepfake spear phishing has increased by more than 1,000% over the last decade.
- Around 42.5% of financial fraud attempts were linked to AI, with deepfake attacks accounting for 6.5% of cases, roughly 1 in 15 incidents.
- In 2024, deepfake attempts occurred once every five minutes.
- As of 2024, more than 10% of companies had experienced deepfake fraud attempts.
- Over three years, deepfake fraud grew from 0.1% to about 6.5% of detected cases by 2025.
- In 2024, most deepfake ads were short, with 40% lasting under 30 seconds.
- Deepfake usage in pornography is the biggest concern, making up 30.6% of the total impact.
- Between February 2024 and February 2025, 73.8% of security staff and 68.7% of employees felt able to recognise audio deepfake payment calls.
General Deepfake Statistics
- According to DeepStrike.io, deepfake files have increased from 5.5 million in 2023 to a projected 8 million by 2025.
- Meanwhile, fraud attempts grew by 3,000% in 2023, including a 1,740% increase in North America.
- Voice cloning has become the top attack vector, while human detection rates for high-quality video are just 24.5%.
- The average deepfake fraud incident now costs around USD 500K.
- As of 2024, deepfake attacks occur at a rate of one every 5 minutes.
- As video deepfakes improve, business email compromise (BEC) scams are becoming more prevalent, affecting over 400 companies worldwide each day.
- A 2024 report published by eftsure.com found that, due to deepfakes, companies worldwide lost an average of USD 500,000, while large firms lost up to USD 680,000.
- In 2024, McAfee reported that 1 in 4 people encountered AI voice scams, and 1 in 10 were personally targeted.
(Source: deepstrike.io)
- About 96% to 98% of deepfakes online are intimate images made without consent, causing a major social problem.
- Approximately 60% of people report being confident they can spot a deepfake, but tests show that humans correctly identify high-quality fake videos only 24.5% of the time.
- Approximately 80% of companies still lack clear plans for responding to deepfake attacks today.
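The growth figures above (5.5 million deepfake files in 2023 rising to a projected 8 million by 2025) imply an annual growth rate that is easy to derive, though the rate itself is not stated in the source. A minimal sketch of that arithmetic:

```python
# Implied compound annual growth rate (CAGR) between two counts.
# The inputs (5.5M in 2023, 8M projected for 2025) come from the
# article; the derived rate is illustrative, not a source figure.
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

growth = cagr(5.5e6, 8.0e6, 2025 - 2023)
print(f"Implied annual growth: {growth:.1%}")  # Implied annual growth: 20.6%
```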
Impacts of Deepfake Threats
| Aspect | Details |
| --- | --- |
| Attack growth | Monthly deepfake cases rose from four or five to hundreds. |
| Ease of scams | Anyone can run a DIY deepfake for about USD 5 and 10 minutes of effort. |
| Crypto sector | Crypto firms lose approximately USD 440k per incident, and 57% have experienced deepfake attacks. |
| Major heists | In Hong Kong, a fraudulent video call resulted in USD 25.6 million in losses. In Australia, 20% of targets lost USD 25 million. |
| Fintech losses | Around 25% of fintech companies have deepfake losses above USD 1 million. |
| Public exposure | People detect deepfakes in 80% of images, 64% of videos, and 48% of audio, but only 22% report them. |
| Business response | Only 42% of firms are confident in detecting deepfakes, 62% want alerts, and pausing to verify improves detection by 8%. |
Types of Deepfake Statistics
(Reference: amazonaws.com)
- In 2025, deepfake tools are highly advanced, and video deepfakes account for 46% of all fake content.
- Moreover, image deepfakes account for 32% and audio deepfakes for 22%.
Deepfake Marketing Use Cases Statistics
(Reference: amraandelma.com)
- By 2025, approximately 81% of marketers are expected to use deepfakes with synthetic voices, and 71% are expected to use them for localisation.
- Around 40% of these uses appear in skincare and beauty promotions, while 28% are AI spokesperson ads.
- Brands also experiment with people: 19% of uses involve reviving legacy celebrities, and 17% feature completely fake celebrities.
- However, only 12% of deepfake marketing content is openly disclosed as AI-generated.
Deepfake Fraud Statistics
- According to a report from eftsure.com, about 1 in 4 leaders admit they do not really understand deepfakes.
- Meanwhile, 31% think deepfakes have not raised their fraud risk.
- Approximately 32% of respondents doubt that their staff can spot a deepfake scam, and more than half report that employees have had no training in recognising or handling such attacks.
- Another 10% are unsure whether their company has already been affected by a deepfake-related cyberattack.
- In 2023, the number of deepfakes found across industries increased 10-fold, with cryptocurrency becoming the primary target, accounting for 88% of detected deepfake fraud cases.
- North America saw a substantial 1,740% increase in deepfake fraud.
- In addition, losses from generative AI-driven fraud in the United States are expected to reach USD 40 billion by 2027.
In the Financial Sector
- According to keepnetlabs.com, in 2024, 42.5% of financial fraud attempts were linked to AI, with deepfake attacks accounting for 6.5% of cases, roughly 1 in 15 incidents.
- Around 25.9% of executives reported that their organisations had experienced at least one deepfake incident targeting financial or accounting data in the previous year.
- Approximately 53% of financial professionals reported having encountered attempted deepfake scams, and 85% in the US and the UK considered them an “existential” threat.
- Approximately 50% of firms in these countries had been targeted by AI-powered financial scams, and 43% were successfully tricked.
- Overall, more than 50% of finance staff reported being targeted.
- Meanwhile, cryptocurrency accounted for 9.5% of fraud attempts, followed by lending & mortgages (5.4%), and traditional banks (5.3%).
Deepfake Scams Statistics
- Entrust’s Identity Fraud Report says that in 2024, deepfake attempts occurred once every five minutes.
- Cryptocurrency firms experienced nearly twice as many fraud attempts as any other industry (9.5%), whereas lending and mortgage providers faced 5.4% and traditional banks 5.3%.
- Crypto platforms also saw the fastest growth, with attempts rising 50% year over year, from 6.4% in 2023 to 9.5% in 2024.
- Keepnet’s 2025 review found deepfake cases climbed from 42 in 2023 to 150 in 2024, a 257% rise, and total activity increased about 680%.
- Deepfake spear phishing has increased by more than 1,000% over the last decade.
- In early 2025, 179 incidents were logged, 19% more than in 2024.
- Deepfake videos rose 550% between 2019 and 2024.
- As of 2024, more than 10% of companies had experienced deepfake fraud attempts.
- Yet 25% of leaders had only a limited understanding of the technology, and 31% believed it had not increased fraud risk.
- Moreover, 50% reported that staff had no training, and only 5% reported multi-layered protections, leaving many organisations dangerously exposed to fraud.
Deepfake Crime Statistics
- A 2024 Axios report states that deepfake attacks occur every five minutes, while document forgeries increased by 244%.
- Signicat’s 2024 data indicate that AI-driven fraud accounted for 42.5% of fraud attempts in the finance and payments sector.
- Over three years, deepfake fraud grew from 0.1% to about 6.5% of detected cases by 2025.
- About 25.9% of executives reported at least one deepfake incident.
- 50% of U.S. and U.K. firms faced deepfake financial scams, and 43% of those attacks succeeded.
- Around 96% of deepfakes are non-consensual sexual content, and 99% of those depict women.
- Deepfakes are doubling every six months, with about 8 million expected in 2025.
- The UK IWF found 1,286 illegal AI child abuse videos in early 2025.
Deepfake Technology Market Statistics
- According to Coherent Market Insights, the global deepfake technology market is expected to reach USD 5.82 billion in 2025.
- The market is estimated to reach USD 32.23 billion by the end of 2032, growing at a CAGR of 27.7% from 2025 to 2032.
- By 2025, software is expected to be the largest segment of this market, accounting for approximately 68.4% of total revenue.
- Generative Adversarial Networks (GANs) are projected to dominate the market, accounting for 58.2%.
- The North American region will lead the global deepfake market, accounting for approximately 38.25%, followed by the Asia-Pacific region with 27.1%.
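The market projection above can be cross-checked: compounding USD 5.82 billion at 27.7% per year from 2025 to 2032 (seven growth years) should land near the stated USD 32.23 billion. A minimal sketch of that check:

```python
# Verify that the stated CAGR reproduces the 2032 market forecast.
# Inputs (USD 5.82B in 2025, 27.7% CAGR through 2032) come from the
# article; the projection loop itself is illustrative.
value = 5.82  # USD billions, 2025 baseline
for year in range(2025, 2032):
    value *= 1.277  # apply 27.7% annual growth

print(f"Projected 2032 market: USD {value:.2f} billion")  # USD 32.23 billion
```

The figures are internally consistent: 5.82 × 1.277⁷ ≈ 32.23.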
By Usage and Creation
- SQ Magazine further states that deepfakes use Generative Adversarial Networks (GANs) to synthesise voices, and that detection tools report over 95% accuracy in payment-related contexts.
- Voice cloning needs 20 to 30 seconds of audio.
- Real-time video-and-voice systems are readily available, enabling live scams.
- A benchmark found that audio detectors lost 43% of their performance on more realistic fakes.
- A challenge-response method achieved an AUROC of 87.7%.
- Detectors trained on older synthetic data struggle, and tools claiming 99% accuracy often fail on new, zero-shot deepfakes.
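The AUROC figure cited above measures how reliably a detector ranks fake samples above real ones: an AUROC of 87.7% means a randomly chosen fake scores higher than a randomly chosen real sample about 88% of the time. A minimal, self-contained sketch of the metric (the scores below are made-up illustrative values, not output from any real detector):

```python
def auroc(real_scores, fake_scores):
    """Probability that a random fake outscores a random real sample
    (Mann-Whitney U formulation); ties count as half a win."""
    wins = 0.0
    for f in fake_scores:
        for r in real_scores:
            if f > r:
                wins += 1.0
            elif f == r:
                wins += 0.5
    return wins / (len(real_scores) * len(fake_scores))

# Hypothetical detector scores (higher = judged more likely fake)
real = [0.10, 0.25, 0.30, 0.45]
fake = [0.20, 0.60, 0.80, 0.95]
print(f"AUROC: {auroc(real, fake):.3f}")  # about 0.81
```

A perfect detector scores 1.0; random guessing scores 0.5, which is why benchmark losses like the 43% drop mentioned above are so damaging.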
Length of Deepfake Ads Statistics
(Reference: amraandelma.com)
- In 2024, most deepfake ads were short, with 40% lasting under 30 seconds.
- Meanwhile, ads between 30 and 60 seconds accounted for 32%.
- Long deepfake ads were rare: 20% lasted 60-90 seconds, and only 8% exceeded 90 seconds.
Deepfake Statistics By Affected Sectors
(Reference: els-cdn.com)
- The pie-chart data show that deepfake usage in pornography is the biggest concern, at 30.6% of the total impact.
- This is followed by a 21.4% impact on trust in the media, while 18.4% relates to increased public awareness of deepfakes.
- Approximately 15.3% relates to the increase in deepfake-related crimes, while 6.1% is attributable to deepfake use in cybercrime.
- Only 8.2% of the impact is attributed to deepfakes used in politics.
By Affected Areas
| Area | Germany | Mexico | Singapore | UAE | Average |
| --- | --- | --- | --- | --- | --- |
| News and media | 34% | 48% | 32% | 23% | 33% |
| Legal and judicial systems | 27% | 35% | 25% | 36% | 32% |
| Political elections and campaigns | 34% | 23% | 27% | 21% | 28% |
| Personal relationships/social media | 23% | 25% | 28% | 34% | 26% |
| Healthcare and medical advice | 22% | 28% | 35% | 16% | 24% |
Employee Confidence In Spotting AI Deepfakes
(Reference: statista.com)
- Between February 2024 and February 2025, 73.8% of security staff and 68.7% of employees felt able to recognise audio deepfake payment calls.
- Regarding deepfake identity documents such as ID cards or passports, confidence was 72.2% among security teams and 71% among employees.
- For video-conference deepfakes, such as a fake Zoom meeting, 69% of security staff and 70.2% of employees felt confident.
- In the case of fake videos of senior executives announcing layoffs, confidence dropped slightly to 65.9% among security professionals and 65.5% among employees.
Conclusion
The statistics above show that AI-generated photos, videos, and voices are spreading rapidly worldwide, creating new deepfake risks. As fake content proliferates, it becomes easier to deceive people, spread misinformation, run online scams, and violate privacy. At the same time, more specialists, businesses, and governments are building tools to find and block these fakes.
By understanding these trends, individuals, companies, and countries can prepare for a future in which distinguishing authentic content from fake is even more important.
FAQ
How are deepfakes created?
AI tools learn from real photos or voice recordings, and then use that learning to create fake videos or audio, which are called deepfakes.
Why are deepfakes a problem?
Deepfakes spread lies, enable scams, harm privacy, and damage reputations.
Can people detect deepfakes?
Most people can't see the subtle digital clues in a deepfake, though advanced detection tools often can.
Can deepfakes imitate anyone?
Yes, deepfakes can mimic faces or voices when sufficient photos, videos, or audio samples are available.
How can you protect yourself?
Limit the personal photos, videos, and audio you share online, use privacy settings, and be cautious with suspicious media.
Maitrayee Dey has a background in Electrical Engineering and has worked in various technical roles before transitioning to writing. Specializing in technology and Artificial Intelligence, she has served as an Academic Research Analyst and Freelance Writer, particularly focusing on education and healthcare in Australia. Maitrayee's lifelong passions for writing and painting led her to pursue a full-time writing career. She is also the creator of a cooking YouTube channel, where she shares her culinary adventures. At Smartphone Thoughts, Maitrayee brings her expertise in technology to provide in-depth smartphone reviews and app-related statistics, making complex topics easy to understand for all readers.