Update, March 21, 2025: This story, originally published March 19, has been updated with highlights from a new report into the AI threat landscape as well as a statement from OpenAI regarding the LLM jailbreak threat to Chrome password manager users.
There is, so it seems, quite literally no stopping the rise of infostealer malware. With 2.1 billion credentials compromised by the insidious threat, 85 million newly stolen passwords being used in ongoing attacks, and some tools able to defeat browser security in 10 seconds flat, it’s certainly hard to ignore. But things look set to get worse as new research has revealed how hackers can use a large language model jailbreak technique, something known as an immersive world attack, to get AI to create the infostealer malware for them. Here’s what you need to know.
AI Password Infostealer Creation, No Coding Experience Needed
A threat intelligence researcher with absolutely no malware coding experience has managed to jailbreak multiple large language models and get the AI to create a fully functional, highly dangerous password infostealer capable of compromising sensitive information from the Google Chrome web browser.
That is the chilling takeaway from the introduction to the latest Cato Networks threat intelligence report, published March 18. The worrying hack managed to get around the protections built into large language models, which are supposed to act as guardrails against exactly this kind of malicious behavior, by employing something known as the immersive world jailbreak.
“Our new LLM jailbreak technique, which we’ve uncovered and called Immersive World,” Vitaly Simonovich, a threat intelligence researcher at Cato Networks, said, “showcases the dangerous potential of creating an infostealer with ease.” And, oh boy, Vitaly is not wrong.
An Immersive World AI Attack
According to the Cato Networks researchers, an immersive world attack uses what is called “narrative engineering” to bypass those aforementioned LLM security guardrails. The attacker creates a highly detailed but entirely fictional world and assigns the LLM roles within it in order to normalize operations that should be restricted. The researcher in question, the report said, got three different AI tools to play roles within this fictional and immersive world, each with specific tasks and challenges involved.
The end result, as highlighted in the Cato Networks report, was malicious code that successfully extracted credentials from the Google Chrome password manager. “This validates both the Immersive World technique and the generated code’s functionality,” the researchers said.
Cato Networks said that it contacted all the AI vendors concerned, with DeepSeek being unresponsive while Microsoft and OpenAI acknowledged receipt of the threat disclosure. Google also acknowledged receipt, Cato said, but declined to review the code. I have reached out to Google, Microsoft and DeepSeek regarding the AI jailbreak report and will update this article if any statements are forthcoming.
An OpenAI spokesperson provided the following statement: “We value research into AI security and have carefully reviewed this report. The generated code shared in the report does not appear to be inherently malicious—this scenario is consistent with normal model behavior and was not the product of circumventing any model safeguards. ChatGPT generates code in response to user prompts but does not execute any code itself. As always, we welcome researchers to share any security concerns through our bug bounty program or our model behavior feedback form.”
New research from Zscaler, contained within the March 20 ThreatLabz 2025 AI Security Report, paints a vivid picture of just how dangerous the AI landscape is. With enterprise AI tool usage growing 3,000% year-over-year, Zscaler warned of the need for security measures as these technologies are rapidly adopted into almost every industry. Businesses are well aware of this risk, of course, which is why Zscaler reported that 59.9% of all AI and machine learning transactions were blocked by enterprises, according to its analysis of some 536.5 billion such transactions between February 2024 and December 2024 in the Zscaler cloud.
The potential risks included data leakage and unauthorized access, as well as compliance violations. “Threat actors are also increasingly leveraging AI to amplify the sophistication, speed, and impact of attacks,” Zscaler said, which means everyone, enterprises and consumers alike, needs to rethink their security strategies.
When it came to the most used AI applications, ChatGPT was unsurprisingly the front runner, accounting for 45.2% of all identified global transactions, and was also the most blocked application due to concerns regarding data exposure and unsanctioned usage. Grammarly, Microsoft Copilot, QuillBot and Wordtune were also towards the top of the tree.
“As AI transforms industries, it also creates new and unforeseen security challenges,” Deepen Desai, chief security officer at Zscaler, said. “Zero trust everywhere is the key to staying ahead in the rapidly evolving threat landscape as cybercriminals look to leverage AI in scaling their attacks.”