Surge in AI-Driven Deepfake Fraud: New Report Unveils Rising Threats for Businesses and Consumers


TL;DR: Leading identity verification provider AuthenticID recently released its 2024 Mid-Year Identity Fraud Review. It revealed surprising insights into identity and financial fraud, with a particular focus on the role of generative AI technologies such as deepfakes. We talked with Blair Cohen, the Founder & President of AuthenticID, about the report and what its findings may mean for the future of consumers and businesses.

In January 2024, explicit deepfake images of Taylor Swift began circulating on the internet. These AI-generated images were viewed millions of times before they were finally taken down, causing significant outrage among her fans and general concern about consent and privacy.

In April 2024, cybercriminals used deepfake videos to defraud a Hong Kong-based multinational company. During a video conference call populated by multiple deepfakes, including one of the company’s U.K.-based chief financial officer, employees were instructed to wire $25.6 million. They complied, believing the requests genuinely came from their bosses.

In March 2023, Russian President Vladimir Putin was the focus of a deepfake video during the Russia-Ukraine war. As part of a broader disinformation campaign, the video falsely depicted him declaring peace with Ukraine. It was quickly identified and removed by social media platforms.

Even though deepfakes have recently surged into the spotlight, they’re not a new phenomenon. Their use keeps rising as cyberattackers adopt increasingly sophisticated techniques and tools.

AuthenticID logo
AuthenticID’s solutions specialize in fraud prevention and identity verification.

AuthenticID, a leader in identity proofing and fraud detection, recently released its 2024 Mid-Year Identity Fraud Review. The report revealed alarming statistics about the current state of identity theft and fraud, particularly highlighting how aspects of generative AI are exacerbating the problem.

Blair Cohen, AuthenticID’s Founder and President, shared some essential takeaways from the report, including how consumers and businesses can better protect themselves from becoming victims of new fraud techniques.

The Surge in AI-Driven Deepfake Fraud

Before we dive in, here’s what you need to know about the 2024 Mid-Year Identity Fraud Review: Its findings refer to the first half of 2024 (H1) with insights from fraud surveys conducted in Q2 2024.

Bar graph displaying surveyed responses on deepfakes
Another alarming statistic: 91% of surveyed respondents said they could not pick out the real person from a lineup of deepfake headshots.

The report found that much of the fraud in the first half of 2024 targeted consumers and businesses in the finance, automotive, government, and retail industries.

The report explores the growing role of cyberattackers in identity fraud and theft. As technology evolves, so do cybercriminals: they’re increasingly using deepfakes for fraudulent activities such as identity theft, financial fraud, and social engineering attacks.

Here are some of the most concerning statistics from the report:

  • 63% of surveyed financial firms saw fraud increase by at least 6% in 12 months, with digital channels responsible for half of the losses
  • 40% of people responding to the survey reported their personal data was involved in a data breach this year
  • Fraudulent transactions rose by 73% from H1 2023 to H1 2024
  • Suspected fraudulent transactions increased by 84% from H1 2023 to H1 2024
  • 68% of people surveyed say the threat of identity fraud and scams affects their purchasing, account opening, and business activities
  • 74% of respondents said they are worried about deepfakes influencing elections
  • 91% of people in the study couldn’t identify a real person from deepfake headshots

The report shared that in the hands of bad actors, deepfake technology is a frightening tool, allowing fraudsters to create more convincing fakes than ever at a fraction of the time and cost.

“What we’ve seen thus far in 2024 is that identity crime will continue to hit record highs, targeting both businesses and consumers,” said Blair.

The Consumer Impact of Rising Identity Fraud

Thanks to generative AI, it’s becoming easier to create fake identities by using manipulated images, videos, and audio.

This technology enables impersonation by appearance and voice — potentially deceiving individuals into believing they’re interacting with someone they know.

Additionally, deepfakes are often used in catfishing schemes, where fake images or videos are leveraged to extort or deceive individuals. Both deepfakes and catfishing are social engineering tactics used to gain the victim’s trust.

“It’s up to businesses to stay ahead of fraud as it continues to evolve — fast.”

Blair Cohen, Founder and President of AuthenticID

Deepfakes illustrate just one of the many ways bad actors exploit personal information.

According to AuthenticID’s report, roughly 80% of all fraudulent activity involved ID verification fraud, including headshot manipulation, fake signatures, and more.

And when 91% of people cannot distinguish between authentic and deepfake images, this becomes a significant problem.

Around 40% of People Have Had Their Data Exposed

Identity fraud happens all the time and can affect anyone. (Heck, my personal data has been compromised on multiple occasions due to data leaks.)

Even the popstar queen herself, Taylor Swift, isn’t immune. During her 2024 “Eras Tour,” hackers exploited vulnerabilities in Ticketmaster’s system to gain unauthorized access to the personal and financial information of nearly 500,000 fans.

Of course, neither I nor the millions of Taylor fans are alone: According to the report, 40% of people have had their data exposed.

Graphic of statistics displaying surveyed responses on data leaks
The report covers only six months of data, which makes these stats even more shocking.

Earlier, I touched on how consumer trust and business profits are interconnected. In this context, a data breach can erode customers’ trust in a brand, and historically, that mistrust tends to translate into financial losses for the business.

Take Target, for example. In 2013, hackers gained access to the personal and financial information of nearly 40 million customers, including debit and credit card details. Target’s reputation was hit hard, and sales declined noticeably in the months after.

While 39% of surveyed consumers say they’ve lost confidence in retail’s ability to safeguard their identity data, social media took an even bigger hit, with 54% saying they’ve lost confidence.

Nearly 70% of Consumers Are Changing Their Behaviors

As a result of this mistrust, 68% of consumers are changing their behavior due to fears of identity fraud.

Much of that shift centers on common digital identity trends, such as biometric authentication and mobile driver’s licenses (mDLs).

AuthenticID’s report shows that 65% of people prefer biometric authentication over traditional passwords, and 78% are open to the idea of adopting an mDL once it becomes more readily available.

Screenshot of statistic graphic pertaining to mDLs
mDL users can store their virtual IDs in their smartphone wallet apps.

Biometric authentication uses biological characteristics, such as fingerprints, facial features, voice patterns, and retina scans, to verify identities.
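
To make that concrete, here is a minimal sketch of the core idea behind facial biometric matching: the system turns an enrolled face and a newly captured face into numeric embeddings and compares them against a similarity threshold. The embeddings and the threshold below are made-up placeholders, not AuthenticID’s implementation.

```python
# Minimal sketch of facial biometric matching: compare two face embeddings
# with cosine similarity. The vectors and threshold are made-up placeholders;
# production systems use trained models that output hundreds of dimensions.
import math

def cosine_similarity(a, b):
    """Angle-based similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

enrolled_template = [0.12, 0.87, 0.44, 0.31]   # stored at account setup
login_capture     = [0.10, 0.85, 0.47, 0.33]   # captured during a login attempt

MATCH_THRESHOLD = 0.95  # tuned in practice against false-accept/false-reject rates

score = cosine_similarity(enrolled_template, login_capture)
print(f"similarity={score:.3f}:", "match" if score >= MATCH_THRESHOLD else "no match")
```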

mDLs are digital versions of traditional driver’s licenses that can be stored and accessed on smartphones. Only a few states, including New York and California, currently support and accept mDLs as legitimate forms of identification.
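
The trust model behind an mDL boils down to the issuing authority digitally signing the license data so any verifier can check that signature. The sketch below is a heavily simplified illustration of that idea using a generic ECDSA signature; real mDLs follow the ISO/IEC 18013-5 standard, with selective disclosure and device binding that this toy example does not implement.

```python
# Simplified illustration only: an issuing authority signs license data and a
# verifier checks the signature. Real mDLs (ISO/IEC 18013-5) are far more
# involved, with selective disclosure, session encryption, and device binding.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# The DMV-style issuer generates a signing key (in practice, an HSM-protected key).
issuer_key = ec.generate_private_key(ec.SECP256R1())

# Hypothetical license data the holder's wallet app would present.
license_data = b'{"family_name":"Doe","birth_date":"1990-01-01","license_number":"D1234567"}'
signature = issuer_key.sign(license_data, ec.ECDSA(hashes.SHA256()))

# A relying party (a bank, an airline) verifies with the issuer's public key.
try:
    issuer_key.public_key().verify(signature, license_data, ec.ECDSA(hashes.SHA256()))
    print("mDL data matches the issuer's signature")
except InvalidSignature:
    print("Tampered or forged mDL data")
```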

Combating Identity Fraud with New Technologies

After the “Eras Tour” fiasco, Ticketmaster immediately implemented enhanced security measures. It introduced identity monitoring services and collaborated closely with cybersecurity experts to mitigate future incidents.

Businesses of all types and sizes can take proactive steps to prevent incidents before they happen. AuthenticID is made up of experienced cybersecurity experts who can deploy cutting-edge tools and technologies to safeguard your system, no matter what creative attacks hackers come up with next.

Screenshot of AuthenticID's statistics
AuthenticID is also the recipient of the 2024 Fortress Cybersecurity Award and the 2023 Cybersecurity Breakthrough Award.

Here’s a peek into a few of the solutions it offers:

  • Technological Advancements: Say hello to new product enhancements designed to tackle identity verification challenges, including synthetic signatures, high false rates in authentication, slow verification processes, and deepfakes.
  • Consumer Acceptance: AuthenticID’s ID verification software uses AI and machine learning (ML) to achieve more than 99% accuracy in detecting fraudulent documents (see the sketch after this list for a rough sense of how such checks can work).
  • Identity Fraud Taskforce: The report also highlights innovations from AuthenticID’s Identity Fraud Taskforce. Through its identity verification platform, this expert team focuses on detecting fraud before it impacts a client’s systems.
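
To give a rough sense of what automated document checks can look like, here is a toy scoring sketch. It is my own illustration with made-up signals and weights, not AuthenticID’s models, which rely on trained AI/ML classifiers rather than hand-written rules.

```python
# Toy illustration of combining document-fraud signals into a risk score.
# The signals, weights, and threshold here are hypothetical; real systems
# learn these patterns from labeled document images.

def score_document(signals: dict) -> float:
    """Return a fraud-risk score between 0.0 and 1.0 from simple boolean signals."""
    score = 0.0
    if signals.get("headshot_tampering_detected"):    # e.g., splicing artifacts around the photo
        score += 0.5
    if not signals.get("mrz_checksum_valid", True):   # machine-readable zone check digits fail
        score += 0.3
    if signals.get("font_inconsistency"):             # mismatched fonts across data fields
        score += 0.2
    return min(score, 1.0)

sample = {"headshot_tampering_detected": True, "mrz_checksum_valid": False}
risk = score_document(sample)
print(f"fraud risk: {risk:.1f} ->", "reject" if risk >= 0.5 else "pass")
```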

“With the proliferation of new, generative AI-powered tools for fraudsters, businesses and consumers alike face a significant challenge in the form of breaches and business attacks,” Blair said. “But those same generative AI-powered tools can also stop fraud. It’s up to businesses to stay ahead of fraud as it continues to evolve — fast.”

Blair makes a great point. While the unfortunate truth is that cyberattackers will never go away, that doesn’t mean cyberattacks can’t be prevented. Discover how the industry is evolving and learn how to protect yourself with cutting-edge technologies such as identity proofing, ID verification, biometric authentication, and fraud shielding in AuthenticID’s free Mid-Year Identity Fraud Report 2024.