Deepfake-enabled fraud has cost victims worldwide $2.19 billion, with a striking $1.65 billion of that total recorded in 2025 alone - a concentration that signals not just growth but acceleration. The figures, drawn from an analysis by cybersecurity firm Surfshark, reveal that AI-generated deception has moved well beyond a niche threat and into one of the most financially damaging forms of fraud operating today. The human and institutional toll is spreading across borders, though the United States bears the heaviest burden.
Investment Scams Built on Borrowed Faces
The single most destructive application of deepfake technology is not hacking or data theft - it is persuasion. Scammers have found that fabricating a video of a trusted public figure endorsing an investment opportunity is, at scale, extraordinarily effective. This method alone has generated $1.13 billion in losses, representing 52 percent of all reported deepfake fraud. The faces used are typically those of government officials or celebrities - figures whose perceived authority or wealth lends credibility to fake financial schemes.
The mechanism is straightforward and difficult to counter. A realistic AI-generated video of a recognizable figure - a finance minister, a billionaire entrepreneur, a well-known broadcaster - is circulated on social media or through targeted messaging platforms. Victims, believing they are acting on a credible endorsement, transfer money into fraudulent accounts. The scam dissolves. Recovery is rare. The damage is compounded by the fact that the real person whose likeness was stolen has no advance knowledge of the fraud and limited legal recourse once the fabricated video circulates internationally.
Corporate Vulnerability and the Rise of Executive Impersonation
The second-largest category of deepfake fraud - accounting for 25 percent of global losses - targets organizations rather than individuals. These corporate attacks typically involve the impersonation of senior executives, most often a CEO or CFO, instructing finance staff to authorize large wire transfers. The tactic, a deepfake-enhanced variant of the long-running scheme known as CEO fraud, exploits the deference employees extend to figures at the top of an organization's hierarchy.
In the United States, corporate losses are disproportionately high. Of the $712 million in total US deepfake losses, 43 percent - roughly $306 million - occurred in the corporate sector. This includes not only fraudulent financial transfers but a distinct and growing pattern: the placement of fake candidates in remote jobs. Using deepfakes during video interviews, bad actors have secured positions within companies to gain access to internal systems, sensitive data, or financial infrastructure. Remote hiring, normalized during the pandemic era and now a permanent feature of many industries, has created a structural opening that fraud operations are actively exploiting.
Family Impersonation: A Distinctly American Crisis
One of the most unsettling findings in the Surfshark analysis concerns a form of fraud that is personal in the most literal sense. Deepfake impersonation of family members - in which a victim receives what appears to be a video or voice message from a relative in distress, requesting urgent money - has caused $124 million in losses in the United States. That figure represents 17 percent of all US deepfake losses and, remarkably, 99.9 percent of all such losses globally.
The concentration of this crime in one country likely reflects a combination of factors: high smartphone penetration, widespread comfort with video communication, and possibly the emotional and financial exposure created by geographically dispersed families. Surfshark's researchers note explicitly that this trend is expected to spread internationally as the underlying technology becomes cheaper and more accessible to criminal networks worldwide. What is currently an American problem carries the architecture of a global one.
A Technology Outpacing Defenses
The broader context behind these figures is the rapid commodification of deepfake generation tools. What once required significant computing resources and specialized knowledge can now be produced with consumer hardware and freely available software. The barrier to entry for this class of fraud has collapsed over the past several years, while detection tools - whether deployed by platforms, financial institutions, or law enforcement - have struggled to keep pace.
Financial crimes enabled by deepfakes, including identity theft used to secure bank loans or drain accounts, account for 9 percent of losses. Romance scams involving fabricated identities make up 7 percent. Family impersonation, as described above, accounts for 6 percent. These categories are not static. Each represents a fraud typology that evolves as perpetrators refine their methods and as new platforms and communication channels emerge.
The policy response remains fragmented. Some jurisdictions have introduced or proposed legislation specifically targeting non-consensual deepfakes, but enforcement across borders - where most of these fraud operations originate - is a persistent obstacle. Financial institutions have begun piloting real-time deepfake detection during video verification processes, but adoption is uneven. For individuals, the practical guidance is sobering in its simplicity: verify any urgent financial request through a separate, independently confirmed channel, regardless of how convincing the source appears.