You know the trope: Something fishy’s going on with the protagonist, and here comes that tired twist — there’s an evil twin.
Gasp. So surprising.
This latest evil twin reveal will surprise nobody, but it’s pretty jarring all the same — ChatGPT has an evil twin that caters to cybercriminals.
The chatbot, dubbed WormGPT, was created by an unknown hacker and launched last month, with its designer calling it ChatGPT’s “biggest enemy.”
OpenAI built ethical guardrails into ChatGPT to prevent malicious activities; WormGPT has none of that.
This huge jerk of a chatbot proudly “lets you do all sorts of illegal stuff,” per PCMag.
Cybersecurity firm SlashNext put WormGPT to use, asking it to write an email pressuring an account manager into paying a fraudulent invoice; the result was alarmingly persuasive, showing the bot’s potential for sophisticated phishing and business email compromise (BEC) attacks.
… so, that all sucks.
Unlike its more responsible sibling, WormGPT isn’t free: Access costs ~$617/year, limiting adoption.
Also slowing its growth: bad word-of-mouth. One buyer on the hacker forum where WormGPT is sold said it’s “not worth any dime.”
And while smarter BEC attacks present a challenge, their threat is nothing new — a leading cybercrime that cost victims $2.4B worldwide in 2021, BEC attacks are already a focus for cybersecurity experts.
For now, WormGPT is less an active concern and more a sign of the looming fights ahead as AI increasingly gets into the hands of bad actors.