'This Isn't Funny Anymore': AI Deepfakes Are Stealing Millions Every Year — and These Entrepreneurs Are Racing to Stop Them

July 23, 2025

Your CFO is on the video call asking you to transfer $25 million. He gives you all the bank info. Pretty routine. You got it.

But wait. What the...? It wasn't the CFO? How can that be? You saw him with your own eyes and heard that unmistakable voice. The other colleagues on the screen weren't really them, either. And yes, you already made the transaction.

Ring a bell? It actually happened to an employee at the global engineering firm Arup, which lost $25 million to criminals last year. In other incidents, people were scammed when deepfakes of "Elon Musk" and "Goldman Sachs executives" took to social media promoting investment opportunities. And a WPP agency leader was nearly tricked into transferring money during a Teams call with a deepfake he believed was CEO Mark Read.

Experts have warned for years that deepfake technology was evolving to dangerous levels, and now it's happening. Used maliciously, these clones are infiltrating everything from Hollywood to the White House. Although most businesses keep deepfake attacks under wraps to avoid alarming clients, insiders say such incidents are increasing. Deloitte predicts that AI-driven fraud losses in the U.S. could reach $40 billion by 2027.

The U.S. government is rolling out deepfake regulations, and the AI community is developing guardrails—like digital signatures and watermarks—to help identify AI-generated content. But scammers aren't known for stopping at boundaries.

That’s why many have pinned their hopes on "deepfake detection," an emerging field with promise. Ideally, these tools can detect whether something—voice, video, image, or text—was generated by AI, giving individuals a way to protect themselves. But there’s a catch: sometimes the tools themselves accelerate the problem. Every time a new detector emerges, bad actors could learn from it, training their own malicious tools and making deepfakes even harder to detect.

So who will take up this challenge? It’s a high-stakes cat-and-mouse game that moves faster than most. But startups may have an edge—unlike larger firms, they can focus solely on the problem and iterate quickly, says Ankita Mittal, senior research consultant at The Insight Partners, which forecasts explosive growth in this market.

Highlights from the startup landscape:

  • Reality Defender, based in the old Western Union building in Manhattan, was launched in early 2021 by Ben Colman (a former Goldman Sachs cybersecurity expert), before ChatGPT even existed. Originally aimed at detecting AI avatars, the platform now claims 99% accuracy in identifying real-time voice and video deepfakes. Clients include banks and government agencies; backers and partners include Accenture, IBM Ventures, and Booz Allen Ventures.
  • GetReal Security, co-founded by Hany Farid (a digital forensics pioneer behind PhotoDNA), counts clients like John Deere and Visa. Farid demonstrated a real-time deepfake on a Zoom call by seamlessly replacing himself with a younger, unfamiliar face while retaining his voice—making the threat immediately clear.
  • Both companies acknowledge that detection systems cannot definitively label content as "real" or "fake." Instead, they assign graded confidence ratings such as strong, medium, or weak; a minimal sketch of that banding idea follows this list. Critics say the grades can confuse users, but defenders believe they nudge clients to ask the right security questions.
  • To stay ahead, Reality Defender and GetReal iterate rapidly on their products and maintain both deployed and experimental model pipelines. Farid's team even runs "red team" (attack) and "blue team" (defense) cycles to stress-test its systems. Detection tactics include spotting the statistical artifacts AI generators leave in images and video, analyzing inconsistent lighting, checking lip-sync, and assessing acoustic signatures tied to physical environments; a toy example of artifact-hunting appears after this list.
  • The tools rely on high-quality training data—both real and fake—and gathering that data remains a major challenge. Phil Swatton of The Alan Turing Institute notes detectors often fail on real-world data, and labeled datasets remain scarce.
  • Reality Defender addresses this by using pre-AI-era real datasets and generating its own fake data in-house. It also partners with AI companies (like ElevenLabs, PlayAI, Respeecher) for early access to models and additional data. An inventive attempt to gather real-world voice samples via rideshare drivers failed to yield clean audio—but provided insights into noise patterns used by fraudsters to evade detection. They’ve also developed offline solutions (like a standalone laptop) to protect proprietary detection tools from reverse-engineering.
  • Some startups, like Polyguard, are taking a different tack: instead of detecting fakes, they authenticate real identity. Users verify who they are from their phones, for example during secure calls, validating with IDs or facial scans. Polyguard aims to certify truth rather than just flag deception; a bare-bones sketch of that authenticate-the-caller idea also follows this list.
  • Undetectable, another startup, began with text-detection tools and is expanding into image, audio, and video detection, now boasting nearly 19 million users and a team of 76.
  • Loti, founded by Luke and Rebekah Arrigoni, focuses on impersonation threats. Users submit real images and voice clips, and Loti scans the internet to find unauthorized use. Clients range from celebrities to parents and professionals guarding their public identity. New laws (like the Take It Down Act) help support takedown efforts.
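
How might a "strong/medium/weak" rating work in practice? Neither Reality Defender nor GetReal publishes its scoring internals, so the following is only a minimal sketch of the general idea, assuming a detector that outputs a probability. The cutoffs and function names are invented for illustration.

```python
# Hypothetical banding of a detector's raw score. The 0.85/0.5 cutoffs
# are illustrative assumptions, not any vendor's real thresholds.
def band_score(p_fake: float) -> str:
    """Map an estimated probability that content is AI-generated
    to a coarse, human-readable confidence band."""
    if not 0.0 <= p_fake <= 1.0:
        raise ValueError("p_fake must be a probability in [0, 1]")
    if p_fake >= 0.85:
        return "strong evidence of manipulation"
    if p_fake >= 0.5:
        return "medium evidence of manipulation"
    return "weak evidence of manipulation"

if __name__ == "__main__":
    for p in (0.97, 0.62, 0.12):
        print(f"{p:.2f} -> {band_score(p)}")
```

The point of the bands is the workflow they trigger: a "strong" result doesn't settle the question, it tells a client which calls to escalate.

And what does "identifying AI artifacts" actually look like? Production detectors are trained models, but one published line of research looks for the periodic, high-frequency traces that generators' upsampling layers can leave in an image's frequency spectrum. Here is a toy, assumption-laden version of that idea using nothing but NumPy; real systems are far more sophisticated.

```python
# Toy frequency-domain artifact check. This is an illustration of the
# general technique, not any company's detector.
import numpy as np

def high_freq_energy_ratio(gray: np.ndarray) -> float:
    """Fraction of spectral energy outside a low-frequency core.
    Periodic generator artifacts tend to raise this ratio relative
    to typical camera images."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 8  # radius of the "low-frequency core" (arbitrary choice)
    yy, xx = np.ogrid[:h, :w]
    core = (yy - cy) ** 2 + (xx - cx) ** 2 <= r * r
    return float(spectrum[~core].sum() / spectrum.sum())

if __name__ == "__main__":
    yy, xx = np.mgrid[:256, :256]
    smooth = np.exp(-((yy - 128) ** 2 + (xx - 128) ** 2) / 5000.0)  # blob, like a defocused photo
    artifact = smooth + 0.05 * np.sin(xx * np.pi / 2)  # add a 4-pixel periodic pattern
    print("smooth image :", round(high_freq_energy_ratio(smooth), 4))
    print("with artifact:", round(high_freq_energy_ratio(artifact), 4))
```

This is also why the cat-and-mouse framing matters: once an artifact like this is known, the next generator can be trained to suppress it.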
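As for Polyguard, it hasn't published its protocol, so the sketch below only illustrates the general "certify the real" idea in the simplest terms: an enrolled device signs a short-lived token at call time, and the other side verifies it before trusting the face on screen. The shared key, freshness window, and function names are all assumptions; a production system would use per-device public-key credentials, not a shared secret.

```python
# Bare-bones illustration, not Polyguard's actual design: authenticate the
# caller with a signed, short-lived token instead of trying to detect a fake.
import hashlib
import hmac
import time

SHARED_KEY = b"per-user key provisioned at enrollment"  # assumption for the demo

def issue_call_token(user_id: str) -> str:
    """Run on the verified user's enrolled device when a call starts."""
    ts = int(time.time())
    msg = f"{user_id}:{ts}".encode()
    sig = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
    return f"{user_id}:{ts}:{sig}"

def verify_call_token(token: str, max_age_s: int = 30) -> bool:
    """Run by the other party before trusting the voice or face on screen."""
    user_id, ts, sig = token.rsplit(":", 2)
    msg = f"{user_id}:{ts}".encode()
    expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
    fresh = abs(time.time() - int(ts)) <= max_age_s
    return fresh and hmac.compare_digest(sig, expected)

if __name__ == "__main__":
    token = issue_call_token("cfo@example.com")
    print("genuine token:", verify_call_token(token))
    tampered = token[:-1] + ("0" if token[-1] != "0" else "1")
    print("tampered token:", verify_call_token(tampered))
```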
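The design tradeoff between the two camps is worth noting: detection degrades as generators improve, while authentication holds up but only works when both parties have enrolled ahead of time.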

The big question remains: will these anti-deepfake solutions become sustainable businesses?

Many are early-stage and constantly investing in R&D to keep pace with evolving AI threats. But the market is just beginning to awaken—and needs could expand across academia, HR, law, and more. Many predict acquisition by larger tech or security firms. Reality Defender’s founder sees detection becoming ubiquitous—like antivirus software that runs automatically in the background.

Hany Farid imagines a nightmare scenario: a fake earnings call from a Fortune 500 company going viral.

GetReal’s CEO, Matt Moynahan, believes 2026 could be the tipping point—driven by a mix of clear threats and increasing regulation. “Executives will connect the dots,” he says. “And start saying, ‘This isn’t funny anymore.’”