The Dark Side of AI: WormGPT

By Rebecca Tague, Security Analyst

18 Oct, 2023

Cybersecurity is a constantly evolving field. In this blog, Q2 Security Analyst Rebecca Tague tackles the subject of WormGPT. Check for more blogs from Rebecca throughout October as Q2 recognizes National Cybersecurity Awareness Month in the U.S.

WormGPT is a chatbot-like custom generative artificial intelligence (GenAI) module based on GPT-J, an open-source large language model (LLM). It has been used 'in the wild' to support business email compromise (BEC) attacks by automating the creation of phishing emails, texts, and potentially malware.

The tool was first seen being sold on a hacker forum under a subscription-based license. It is described as "similar to ChatGPT but has no ethical boundaries or limitations." The creator of this tool has been adamant that it was created as a 'jailbroken' or 'uncensored' version of ChatGPT, not as a vector for malicious activity. In a post within the WormGPT Telegram channel, the creator commented, "From the beginning, the media has portrayed us as a malicious LLM, when all we did was use the name 'blackhatgpt' for our Telegram channel as a meme. We encourage researchers to test our tool and provide feedback to determine if it is as bad as the media is portraying it to the world." It is possible that WormGPT is being used as a honeypot to collect data from its users (as with any tool advertised on a hacking forum), so anyone researching the tool should do so with caution.

That said, the creator's actions around the tool remain contradictory. On one side, the creator claims to have made updates prohibiting certain topics related to murder, kidnapping, ransomware, financial crimes, and more, and says they are attempting to remove BEC template generation from the tool. On the other side, the creator has advertised the tool as being able to generate malware scripts and perform other malicious activities. Even if created with good intentions, a tool can always be turned to malicious ends in the wrong hands.

WormGPT may be one of the first AI-based tools to gain criminal notoriety; however, it is important to note that many others may appear in the future. Few are likely to match the broad capabilities of mature GenAI tools like Bard and ChatGPT; currently, training and maintaining an AI model is a time- and money-intensive endeavor. This does not stop fraudsters from leveraging already available AI tooling for nefarious goals. Cybercriminals have been recorded using legitimate tools to generate fake invoice material and phishing emails, automate their workflows, and assist with performing OSINT (open-source intelligence) research on their targets. If your voice has ever been posted online, it can be used to generate audio samples that can then be used to target the people around you. Within the financial industry alone, we see AI being leveraged to create bot users, audio deepfakes, AI-generated malware, and convincing social engineering content.

To stay vigilant against AI generated scams, you can protect yourself by:

  • Ensuring multifactor authentication (MFA) is enabled for all your accounts. NEVER give out your secure access codes for MFA; anyone who enters a valid code is accepted, as the sketch after this list illustrates.
  • If you need to discuss financial or personal matters over phone, email, or text, consider validating the identity of the individual before the discussion. For example, you can create a passphrase for the other party to confirm before details are discussed.
  • Consider limiting the personal material you share online. Fraudsters can use public information to target and manipulate you.
  • Do not answer or reply to calls or texts from unknown parties. In many cases, doing so may encourage the fraudster to re-target you using different methods. Instead, you can report text scams by forwarding them to 7726 (SPAM), which will help your provider detect similar messages.
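To illustrate why MFA codes must never be shared, here is a minimal, hedged sketch of how time-based one-time passwords (TOTP) are generated and checked. It uses the pyotp library as an assumed example; the point is that the service accepts whoever submits a currently valid code, which is exactly what phishers who ask for your code rely on.

```python
# Minimal TOTP sketch (illustrative only) using the pyotp library.
# Whoever submits a currently valid code is accepted, which is why codes must never be shared.
import pyotp

# Secret shared between the service and your authenticator app at enrollment time.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)  # 30-second time steps by default

code = totp.now()  # what your authenticator app displays right now
print("Current code:", code)

# Server-side check: any bearer of a valid code within the time window passes.
print("Legitimate login:", totp.verify(code))
print("Phisher replaying the same code moments later:", totp.verify(code))
print("Stale or guessed code:", totp.verify("000000"))
```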

We will see new malicious tools evolve from AI-based technology, and we must constantly recalibrate our defense tooling and our awareness of adapting threats that utilize AI. Although there is no way to eliminate all financial crime, real-time fraud detection, bot identification, improved machine learning pattern-recognition algorithms, neural-network prediction software, and other tools can bolster our ability to identify and halt new AI fraud vectors. Thankfully, fraud fighters have been given room to innovate and create within this space. According to a 2022 report from Juniper Research, global businesses spent just over $6.5 billion on AI-enabled financial fraud detection and prevention platforms. This has led to an increase in the security and effectiveness of defense tooling. As an example, PayPal reported that it was able to cut its false alerts in half using AI tooling.
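To make the idea of ML-assisted fraud detection concrete, the sketch below flags anomalous transactions with an isolation forest from scikit-learn. The feature set (amount, hour of day, transaction count in the past day) and thresholds are illustrative assumptions, not a description of any specific vendor's platform; production systems layer many such signals with supervised models and human review.

```python
# Minimal sketch of anomaly-based fraud screening (illustrative only).
# Assumes scikit-learn and NumPy; the features and thresholds are made up for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy historical transactions: [amount_usd, hour_of_day, tx_count_last_24h]
normal = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.6, size=5000),  # typical purchase amounts
    rng.integers(7, 23, size=5000),                 # daytime/evening activity
    rng.poisson(2, size=5000),                      # a few transactions per day
])

# Train an isolation forest on (mostly) legitimate history.
model = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
model.fit(normal)

# Score incoming transactions: -1 means "anomalous", 1 means "looks normal".
incoming = np.array([
    [45.0, 14, 2],     # ordinary afternoon purchase
    [9800.0, 3, 40],   # large amount, 3 a.m., burst of activity
])
labels = model.predict(incoming)
scores = model.decision_function(incoming)  # lower = more anomalous

for tx, label, score in zip(incoming, labels, scores):
    flag = "REVIEW" if label == -1 else "ok"
    print(f"amount=${tx[0]:>8.2f} hour={int(tx[1]):>2} count24h={int(tx[2]):>3} -> {flag} (score={score:.3f})")
```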

As AI-related crimes evolve and increase, it is vital for industries (and others within the supply chain) to continue advancing their systems and advocating for AI tooling standards. That said, it is important not to rely on AI as your only layer of defense; as always, defense in depth is required. AI tools should also be used with caution, as users may inadvertently give up rights to the data they input into the tool, depending on its Terms and Conditions. Together, we can fight the uphill battle against fraud and cybercrime within our work and communities.