3D-generated face representing artificial intelligence technology | Getty Images
A growing wave of deepfake scams has robbed companies around the world of millions of dollars, and cybersecurity experts warn the situation is likely to get worse as criminals harness generative artificial intelligence to perpetrate fraud.
Deepfakes are videos, voice recordings or images of real people that have been digitally altered and manipulated, often through artificial intelligence, to convincingly misrepresent them.
Hong Kong police revealed to local media in February that, in the largest known case to date, a finance worker in Hong Kong was tricked into transferring more than $25 million to fraudsters who used deepfake technology to pose as the worker’s colleagues on a video call.
Last week, British engineering firm Arup confirmed to CNBC that it was the company involved, but could not disclose details of the matter as the investigation is ongoing.
David Fairman, chief information and security officer at cybersecurity company Netskope, said such threats have been growing since the launch of OpenAI’s ChatGPT in 2022, which quickly pushed generative AI technology into the mainstream.
“The public accessibility of these services lowers the barrier to entry for cybercriminals—they no longer need to possess special technical skills,” Fairman said.
He added that as artificial intelligence technology continues to develop, scams are growing in both number and sophistication.
Upward trend
Various generative AI services can produce human-like text, image and video content, making them powerful tools for illicit actors seeking to digitally manipulate and recreate certain individuals.
“Like many other businesses around the world, our business is regularly subject to attacks including invoice fraud, phishing scams, WhatsApp voice spoofing and deepfakes,” an Arup spokesperson told CNBC.
The finance staff member reportedly joined a video call with people he believed to be the company’s chief financial officer and other employees, who asked him to make the transfers. In fact, the other attendees on the call were digitally recreated deepfakes.
Arup confirmed that “fake voices and images” were used in the incident, adding that “the number and sophistication of these attacks have increased dramatically in recent months”.
Chinese state media this year reported a similar case in Shanxi province involving a female finance employee who was tricked into transferring 1.86 million yuan ($262,000) into a scammer’s account after a video call with her boss.
Wider impact
Cybersecurity experts say that in addition to direct attacks, companies are increasingly concerned that deepfake photos, videos or speeches of senior executives could be used for malicious purposes.
Jason Hogg, a cybersecurity expert and executive-in-residence at Great Hill Partners, said deepfakes of a company’s senior executives can be used to spread fake news, manipulate stock prices, damage a firm’s brand and sales, and disseminate other harmful disinformation.
“That’s just scratching the surface,” said Hogg, a former FBI agent.
He emphasized that generative AI can create deepfakes from the vast amounts of digital information available about a person, such as publicly hosted content on social media and other platforms.
In 2022, Binance chief communications officer Patrick Hillmann said in a blog post that scammers had created a deepfake of him based on his previous news interviews and TV appearances, using it to trick customers and contacts into attending meetings.
Netskope’s Fairman said such risks have led some executives to begin deleting or limiting their online presence for fear it could be used as ammunition by cybercriminals.
Deepfake technology has spread beyond the corporate world.
From fake pornographic images to doctored videos promoting cookware, celebrities such as Taylor Swift have fallen victim to deepfake technology. Deepfakes of politicians are also rampant.
At the same time, some scammers are creating deepfakes of individuals’ family and friends in an attempt to defraud them of their money.
Hogg said the broader problem will accelerate and get worse before it gets better, as cybercrime prevention requires thoughtful analysis to develop the systems, practices and controls needed to defend against new technologies.
However, cybersecurity experts told CNBC that companies can strengthen their defenses against AI-powered threats through better employee education, regular cybersecurity testing, and by requiring passwords and multiple layers of approval for all transactions, measures that could have prevented attacks such as the one on Arup.