WormGPT is an AI malware-generation tool created by a hacker for hackers, built to produce malware and phishing email templates.


At this point, everyone is familiar with ChatGPT, but are you aware of WormGPT, its evil twin? WormGPT is not an AI chatbot designed to amusingly provide wriggly invertebrate assistance, like the cat-focused CatGPT. Instead, it’s a far more malevolent instrument, built without regard for ethics and with the express intent of increasing productivity, improving effectiveness, and lowering the barrier to entry for the average cybercriminal.


WormGPT is the “degenerate Large Language Model (LLM),” as PCGamer aptly put it. Because WormGPT was created by a hacker, it has no ethical guardrails and can be asked to carry out malicious tasks, such as creating malware and “everything blackhat related.”


What is WormGPT?

WormGPT is based on GPT-J, an open-source LLM released in 2021, and has reportedly been trained on malware-related data. Essentially, it’s made to produce malware and phishing email templates. That certainly sounds intriguing, but it also serves as a good illustration of how dangerous such technologies could end up being.


It functions much like ChatGPT: it accepts requests in plain human language and produces whatever is asked of it, such as summaries or code. With ethics taken out of the equation, however, WormGPT is in many ways a far riskier counterpart to ChatGPT or Bard.


The AI Boom’s Dark Side

You’d have been naive to think that the AI boom would result only in friendly chatbots, generative art, and simple ways to get your homework done faster.


WormGPT’s release may be the first concrete evidence that we’re hurtling toward a cybersecurity hellscape, as the list of troubling ways AI is being used expands every day.


Security experts have long been concerned about hackers using AI as a weapon, with only a thin buffer of frequently circumvented ethical safeguards standing between bad actors and their intentions. That buffer is shrinking further thanks to widespread attempts to “jailbreak” ChatGPT into operating without restrictions.


As highly customized, one-of-a-kind, and diverse malware is developed in mere seconds by AI tools like ChatGPT and WormGPT, our ability to protect against a potential wave of security threats could be pushed to its absolute limits.


Capabilities of WormGPT

In a test conducted by SlashNext, the AI chatbot was asked to create a phishing email for a business email compromise (BEC) attack, and the researchers found WormGPT’s capabilities to be “unsettling.” Unsurprisingly, WormGPT succeeded. It produced an email that was “remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks,” according to the report.


What Can You Do?

With countless pieces of customized malware now easily generated by AI, everyone should be alarmed. Hackers have grown very sophisticated in their attacks, and with AI this accessible, anything can happen. Now is the time to take your cybersecurity seriously. Keeping your network security up to date, enabling multi-factor authentication, and not opening suspicious emails are the first steps to avoid falling victim to hackers.


Remember that vigilance is key to keeping yourself safe from any form of cyberattack. If you want to make sure that you are fully protected, feel free to contact us about our cybersecurity services here at Intelecis.