Deepfake fraud has gone “industrial”, an analysis published by AI experts has said.
Tools to create tailored, even personalised, scams – leveraging, for example, deepfake videos of Swedish journalists or the president of Cyprus – are no longer niche, but inexpensive and easy to deploy at scale, said the analysis from the AI Incident Database.
It catalogued more than a dozen recent examples of “impersonation for profit”, including a deepfake video of Western Australia’s premier, Roger Cook, hawking an investment scheme, and deepfake doctors promoting skin creams.
These examples are part of a trend in which scammers are using widely available AI tools to perpetrate increasingly targeted heists. Last year, a finance officer at a Singaporean multinational paid out nearly $500,000 to scammers during what he believed was a video call with company leadership. UK consumers are estimated to have lost £9.4bn to fraud in the nine months to November 2025.
“Capabilities have suddenly reached that level where fake content can be produced by pretty much anybody,” said Simon Mylius, an MIT researcher who works on a project linked to the AI Incident Database.
He calculates that “frauds, scams and targeted manipulation” have made up the largest proportion of incidents reported to the database in 11 of the past 12 months. He said: “It’s become very accessible to a point where there is really effectively no barrier to entry.”
“The scale is changing,” said Fred Heiding, a Harvard researcher studying AI-powered scams. “It’s becoming so cheap, almost anyone can use it now. The models are getting really good – they’re becoming much faster than most experts think.”
In early January, Jason Rebholz, the chief executive of Evoke, an AI security company, posted a job listing on LinkedIn and was immediately contacted by a stranger in his network, who recommended a candidate.
Within days, he was exchanging emails with someone who, on paper, appeared to be a talented engineer.
“I looked at the resume and I was like, this is actually a really good resume. And so I thought, even though there were some red flags, let me just have a conversation.”
Then things became strange. The candidate’s emails went directly to spam. His resume had quirks. But Rebholz had dealt with unusual candidates before and decided to go ahead with the interview.
When Rebholz took the call, the candidate’s video took nearly a minute to appear.
“The background was extremely fake,” he said. “It just looked super, super fake. And it was really struggling to deal with [the area] around the edges of the individual. Like part of his body was coming in and out … And then when I’m looking at his face, it’s just very soft around the edges.”
Rebholz went through with the conversation, not wanting to face the awkwardness of asking the candidate directly whether they were, in fact, part of an elaborate scam. Afterwards, he sent a recording of it to a contact at a deepfake detection firm, who told him that the video image of the candidate was AI-generated. He rejected the candidate.
Rebholz still does not know what the scammer wanted – an engineering salary, or trade secrets. While there have been reports of North Korean hackers trying to get jobs at Amazon, Evoke is a startup, not a massive player.
“It’s like, if we’re getting targeted with this, everyone’s getting targeted with it,” said Rebholz.
Heiding said the worst was ahead. Deepfake voice-cloning technology is already excellent, making it easy for scammers to impersonate, for example, a grandchild in distress over the phone. Deepfake videos, by contrast, still have room for improvement.
This could have extreme consequences: for hiring, for elections, and for broader society. Heiding added: “That’ll be the big pain point here, the complete lack of trust in digital institutions, and institutions and material in general.”