It is being fought out on the battlefield of artificial intelligence (AI), the latest leap in computer science, a technology that simulates human intelligence and can distort reality itself with near-perfect imitations of voices, faces and videos.
These imitations are being used to "deepfake" victims out of millions of their hard-earned dollars, trash reputations, create fake investment schemes, post embarrassing phony porn online, destroy political candidates, launch fake "celebrity-endorsed" ads and run all manner of scams on the public.
And it's only getting worse; deepfakery is exploding.
“We’re seeing a dramatic increase in the volume of deepfakes, especially in comparison to 2023 and 2024,” David Maimon, a Georgia State University professor of criminology, said. “It wasn’t a whole lot. We’re talking about maybe four or five a month [before]. Now, we’re seeing hundreds of these on a monthly basis across the board, which is mind-boggling.”
Often enormous amounts of money are involved. One man in Hong Kong lost $25 million after receiving a perfectly imitated video call from someone pretending to be the chief financial officer of his company. In New Zealand, a retiree lost $133,000 in a deepfake scam imitating the country’s prime minister.
All it takes is a brief snippet of someone's voice and appearance, and anyone skilled in AI can take it from there, producing a fake but highly believable clip of that person saying something they never said.
Perhaps it's an attractive woman pretending to be in love with you, who eventually asks for money. It might be a slick-sounding "investment advisor" with a "hot tip"; after you invest, they disappear, and so does your money.

Even influential celebrities and politicians have fallen victim to deepfake artists who have stolen their images and voices, and manipulated them to make it appear they are endorsing a product they may never have even heard of.
TV chef Gordon Ramsay, singer Taylor Swift, Facebook billionaire Mark Zuckerberg and actress Jennifer Garner have all had their personas shanghaied by AI tricksters using them to falsely pitch products.
"If there's an image of you on the internet, that would be enough to manipulate a face to look like it's saying something that you haven't said before or doing something you haven't done before," said Northwestern University's Matt Groh, who studies people's ability to detect deepfakes. "Just a single image and five seconds of audio online mean that it's definitely possible for a scammer to make some kind of realistic deepfake of you."
Most famously, in 2020, even the late Queen Elizabeth was the subject of a deepfake Christmas broadcast in which she allegedly “discussed” Prince Harry’s decision to leave the UK and the involvement of Prince Andrew with the scandals of notorious sex offender Jeffrey Epstein.
Of course, it wasn’t really her. BBC royal correspondent Nicholas Witchell said: “There have been countless imitations of the Queen. This isn’t a particularly good one. The voice sounds what it is—a rather poor attempt to impersonate her. What makes it troubling is the use of video technology to attempt to sync her lips to the words being spoken.”
Deepfake’s rendition of Queen Elizabeth may have been “rather poor,” but with the rapid and extensive advancements in computer and AI technology in the five years since, AI and deepfake are now much more developed and much harder to detect.
"While it [AI] offers tremendous commercial and creative opportunities, transforming entire industries from entertainment to communication, it is also a technology that will be weaponized," said Nina Schick, author of Deep Fakes and the Infocalypse: What You Urgently Need to Know. "Used maliciously, AI-generated synthetic media, or deepfakes, are sophisticated forms of visual disinformation."
And that’s where the problem lies.
So is AI a blessing or a curse… or, perhaps, both?
There are companies developing technology today designed to thwart deepfake hijacking of images or scams but, as in any other arms race, as soon as a new defensive method emerges, a new offense emerges to negate and overcome it—and on it goes.
"The major thing we have to understand is that the technology we have right now is not good enough to detect those deepfakes," Maimon said. "We're still very much behind."
At the same time, cybercriminals have sprung up, offering content creation services and instruction on how to use AI to produce deepfake videos and voice calls. In fact, one person posted an offer to pay $16,000 for "deepfake services that included video and photo editing," while another offered $1,500 for "deepfake services to construct and design fraudulent bank cards, signatures, documents, persons (images) and card numbers that are not detectable via Google or Yandex searches."
One of the biggest edges that those who use AI deepfakery possess is that many aren’t even aware that such technology exists—and, even if they are, are unable to recognize it.
AI deception has now breached every border; welcome to the era of globalized fraud. A retired man in India named Radhakrishnan received a call from someone pretending to be an "old colleague," who said he was in serious need of hospital funds. Suspicious, he initiated a video call, and the caller did indeed seem to be his old friend. In fact, it was a deepfake video, and Radhakrishnan was suckered out of nearly $500.
So how do you protect yourself against deepfake scammers?
If you get a call, an email or a text message from someone claiming to be an old friend or a relative, even if you believe you recognize their voice or image, ask a question only they would know, like the name of a book they suggested you read. A scam artist wouldn't know the answer. Then contact that friend or relative directly to verify they really reached out to you.
Be wary of posting photos, recordings or videos of yourself that could be used to create a deepfake version of you to victimize your friends or family.
The Federal Communications Commission advises that you not answer calls from unrecognized numbers, never give out personal information like account or Social Security numbers and do not respond to any questions from unknown callers.
If a caller claims to be from a company or government agency, hang up and call the real number for the agency or company to verify the contact.
Talk with your phone company about blocking tools or look into computer apps that can block unwanted calls.
In other words, be suspicious—always. Be very aware that there are people out there working on ways to trick you.
In an age where voices lie and faces cheat, trust is no longer enough—verify everything.