iProov Study Reveals Deepfake Blindspot: Only 0.1% of People Can Accurately Detect AI-Generated Deepfakes

Most Consumers Can't Identify AI-Generated Fakes


For more information:
Louise Burke
Global PR Manager
iProov
Louise.burke@iproov.com

New research from iProov, the world's leading provider of science-based solutions for biometric identity verification, reveals that most people can’t identify deepfakes – those incredibly realistic AI-generated videos and images often designed to impersonate people. The study tested 2,000 UK and US consumers, exposing them to a series of real and deepfake content. The results are alarming: only 0.1% of participants could accurately distinguish real from fake content across all stimuli, which included both images and videos.

Key Findings:

  • Deepfake detection fails: Just 0.1% of respondents correctly identified all deepfake and real stimuli (e.g., images and videos) in a study where participants were primed to look for deepfakes. In real-world scenarios, where people are less aware, the vulnerability to deepfakes is likely even higher.
  • Older generations are more vulnerable to deepfakes: The study found that 30% of 55-64-year-olds and 39% of those aged 65+ had never even heard of deepfakes, highlighting a significant knowledge gap and an increased susceptibility to this emerging threat among older age groups.
  • Video challenge: Deepfake videos proved more challenging to identify than deepfake images, with participants 36% less likely to correctly identify a synthetic video than a synthetic image. This vulnerability raises serious concerns about video-based fraud, such as impersonation on video calls or in scenarios that rely on video for identity verification.
  • Deepfakes are everywhere but misunderstood: While concern about deepfakes is rising, many remain unaware of the technology. One in five consumers (22%) had never even heard of deepfakes before the study.
  • Overconfidence is rampant: Despite their poor performance, more than 60% of people remained confident in their deepfake detection skills, regardless of whether their answers were correct. This was particularly true of young adults (18-34). This false sense of security is a significant concern.
  • Trust takes a hit: Social media platforms are seen as breeding grounds for deepfakes, with Meta (49%) and TikTok (47%) viewed as the most prevalent locations for deepfakes online. This, in turn, has led to reduced trust in online information and media: 49% trust social media less after learning about deepfakes. Just one in five would report a suspected deepfake to a social media platform.
  • Deepfakes are fueling widespread concern and distrust, especially among older adults: Three in four people (74%) worry about the societal impact of deepfakes, with "fake news" and misinformation being the top concern (68%). This fear is particularly pronounced among older generations, with up to 82% of those aged 55+ expressing anxieties about the spread of false information.
  • Better awareness and reporting mechanisms are needed: Fewer than a third of people (29%) take any action when they encounter a suspected deepfake, most likely because 48% say they don’t know how to report deepfakes, while a quarter don’t care whether they see one.
  • Most consumers fail to actively verify the authenticity of information online, increasing their vulnerability to deepfakes: Despite the rising threat of misinformation, just one in four searches for alternative information sources when they suspect a deepfake. Only 11% of people critically analyze the source and context of information to determine whether it is a deepfake, leaving the vast majority highly susceptible to deception and the spread of false narratives.

Professor Edgar Whitley, a digital identity expert at the London School of Economics and Political Science adds: “Security experts have been warning of the threats posed by deepfakes for individuals and organizations alike for some time. This study shows that organizations can no longer rely on human judgment to spot deepfakes and must look to alternative means of authenticating the users of their systems and services.”

"Just 0.1% of people could accurately identify the deepfakes, underlining how vulnerable both organizations and consumers are to the threat of identity fraud in the age of deepfakes," says Andrew Bud, founder and CEO of iProov. "And even when people do suspect a deepfake, our research tells us that the vast majority of people take no action at all. Criminals are exploiting consumers’ inability to distinguish real from fake imagery, putting our personal information and financial security at risk. It’s down to technology companies to protect their customers by implementing robust security measures. Using facial biometrics with liveness provides a trustworthy authentication factor and prioritizes both security and individual control, ensuring that organizations and users can keep pace and remain protected from these evolving threats."

The Growing Threat of Deepfakes

Deepfakes pose an overwhelming threat in today's digital landscape and have evolved at an alarming rate over the past 12 months. iProov’s 2024 Threat Intelligence Report highlighted a 704% increase in face swaps (a type of deepfake) alone. Their ability to convincingly impersonate individuals makes them a powerful tool for cybercriminals seeking unauthorized access to accounts and sensitive data. Deepfakes can also be used to create synthetic identities for fraudulent purposes, such as opening fake accounts or applying for loans. This poses a significant challenge to the human ability to discern truth from falsehood and has wide-ranging implications for security, trust, and the spread of misinformation.

What can be done?

With deepfakes becoming increasingly sophisticated, humans alone can no longer reliably distinguish real from fake and instead need to rely on technology to detect them. To combat the rising threat of deepfakes, organizations should look to adopt solutions that use advanced biometric technology with liveness detection, which verifies that an individual is the right person, a real person, and is authenticating right now. These solutions should include ongoing threat detection and continuous improvement of security measures to stay ahead of evolving deepfake techniques. There must also be greater collaboration between technology providers, platforms, and policymakers to develop solutions that mitigate the risks posed by deepfakes.

Take the Deepfake Detection Test

Think you're immune to deepfake deception? Put your skills to the test! iProov has created an online quiz that challenges you to distinguish real from fake. Take the quiz and see how you score.

About iProov

iProov provides science-based biometric identity solutions that combine exceptional user experiences with the highest levels of assurance. The company's Biometric Solutions Suite enables secure and effortless remote onboarding and authentication, streamlining both digital and physical access experiences. Backed by a unique blend of scientific expertise, AI, and proactive threat intelligence, iProov safeguards high-value transactions and empowers organizations seeking innovative identity verification that outpaces evolving threats without compromising usability. With proven success in global deployments, iProov is a trusted partner for governments and enterprises, including the Australian Taxation Office, GovTech Singapore, ING, Rabobank, UBS, the U.K. Home Office, the UK National Health Service (NHS), and the U.S. Department of Homeland Security. In December 2023, Gartner listed iProov as a representative vendor in its Innovation Insight report for Biometric Authentication, and Acuity Market Intelligence named it a Luminary in the 2023 Biometric Digital Identity Prism. iProov was also recognized as an Innovation Leader in industry analyst KuppingerCole's Market Compass: Providers of Verified Identity 2022. For more information, please see www.iproov.com or follow us on LinkedIn or Twitter.

