Deepfakes and WormGPT: The Advent of AI-Assisted Cyber Fraud

In the past, spotting a phishing email was easy: the spelling and grammatical errors gave it away. Likewise, hearing your child’s voice on the phone was all the proof you needed of their identity. With advances in AI, neither of these simple identifiers is reliable anymore.

The Fakiest Fakes of All

Let’s turn first to so-called deepfakes. A deepfake is a form of synthetic media created by AI that shows a known individual saying something they never said or doing something they never did, very convincingly. One of the most famous examples is MIT’s Apollo 11 disaster deepfake, “In Event of Moon Disaster,” which includes a two-minute speech by President Nixon mourning the loss of the Apollo 11 astronauts. Of course, the Apollo 11 astronauts made it home safely and Nixon never delivered that speech, but you wouldn’t know it from watching this video. (If you want to see Nixon’s speech, fast-forward to about the 4:40 mark.)

It might seem at first blush that audio deepfakes pose a significant threat to any financial institution that uses voice biometrics to authenticate inbound callers. The good news: experts in the field assert that current voice biometric systems can accurately flag synthetic voices, even when those voices sound convincing to human ears.

Therein lies the rub. If your institution has spent tens of thousands of dollars on voice biometrics, you can be reasonably confident that a synthetic voice won’t breach that system. Your accountholders, however, have no such technology; their only detector is their own ears. If a fraudster can obtain a good enough sample of Junior’s voice, they can probably convince Mom or Dad that it’s really Junior calling for that money.

ChatGPT’s Evil Twin

Since its introduction in 2022, ChatGPT has taken the world by storm. Its ethics and accuracy are open to debate, but OpenAI, the developer of ChatGPT, has built in safeguards to deter misuse. How well those safeguards hold up amid the company’s recent governance changes remains to be seen. But what if those safeguards had never been included at all? What if ChatGPT had been created specifically to facilitate illegal activity?

Enter WormGPT.

Debuting this past summer, WormGPT has been promoted online as a ChatGPT alternative “that lets you do all sorts of illegal stuff and easily sell it online in the future.” This “stuff” includes creating sophisticated phishing campaigns and writing custom malware. Since then, WormGPT’s creator, known as Last, has backpedaled somewhat, claiming that they now want to focus on the “uncensored” aspect of the software rather than its potential criminal uses. To that end, Last has already added some constraints to WormGPT.

Said Last in a public statement: “Anything related to murders, drug traffic, kidnapping, child porn, ransomwares, financial crime. We are working on blocking BEC (business email compromise) too, at the moment it is still possible but most of the times it will be incomplete because we already added some limitations.”

Almost immediately after those constraints appeared, FraudGPT surfaced on the dark web.

What’s the Point?

AI has made cybercriminals more efficient and effective. Still, like the obvious phishing scams of 2008, these advanced schemes depend on people making unwise decisions. So how do you help protect your accountholders from becoming the hapless victims of these new and improved scams?

Accountholder education is your first line of defense. Even though your accountholders should know better, many don’t. It’s smart business to give them as much information as possible to counter the AI-driven cybercrime offensive taking place right now.

Countering AI-assisted cyber fraud takes a layered approach. Keep your training programs current as the AI landscape changes, and support regulatory efforts to require clear labeling of AI-generated content. Complement that education with technical safeguards in your payment systems, such as dollar and velocity limits, and with multi-factor authentication that verifies a request through a separate, real-world channel before money moves.
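To make the payment-safeguard idea concrete, here is a minimal sketch of how dollar and velocity limits might be enforced before a transfer is released. Everything in it (the threshold values, the Transfer record, and the LimitChecker class) is a hypothetical illustration for this article, not the API of any particular banking platform.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Illustrative thresholds; real values would come from your own risk policy.
MAX_DOLLARS_PER_TXN = 2_500.00   # single-transfer dollar limit
MAX_TXNS_PER_DAY = 5             # velocity limit: transfers per 24 hours
MAX_DOLLARS_PER_DAY = 5_000.00   # velocity limit: total dollars per 24 hours

@dataclass
class Transfer:
    account_id: str
    amount: float
    timestamp: datetime

@dataclass
class LimitChecker:
    # Per-account history of approved transfers (in-memory for this sketch).
    history: dict[str, list[Transfer]] = field(default_factory=dict)

    def check(self, t: Transfer) -> tuple[bool, str]:
        """Return (allowed, reason); run before a transfer is released."""
        if t.amount > MAX_DOLLARS_PER_TXN:
            return False, "exceeds per-transfer dollar limit"

        # Look only at this account's transfers in the trailing 24 hours.
        window_start = t.timestamp - timedelta(hours=24)
        recent = [p for p in self.history.get(t.account_id, [])
                  if p.timestamp >= window_start]

        if len(recent) + 1 > MAX_TXNS_PER_DAY:
            return False, "exceeds 24-hour transfer-count limit"
        if sum(p.amount for p in recent) + t.amount > MAX_DOLLARS_PER_DAY:
            return False, "exceeds 24-hour dollar limit"

        # Passed every check: record it and allow the transfer.
        self.history.setdefault(t.account_id, []).append(t)
        return True, "ok"

if __name__ == "__main__":
    checker = LimitChecker()
    print(checker.check(Transfer("acct-123", 3_000.00, datetime.now())))
    # -> (False, 'exceeds per-transfer dollar limit')
```

In practice, checks like these would run server-side against persistent transaction history, and a failed check would ideally route the transfer to out-of-band verification, giving a human a chance to catch the scam, rather than simply rejecting it outright.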

It’s harder and harder to stay one step ahead of the crooks, but it’s more important than ever.