AI Infostealers Are Hijacking Passwords—What You Must Know
NEW YORK, USA, March 21, 2025 - Cybersecurity experts are raising new alarms as a wave of AI-assisted infostealers emerges, exposing an unsettling truth: sophisticated malware can now be created with zero coding skills, simply by manipulating large language models (LLMs) through storytelling. The tactic, a jailbreak technique known as the “Immersive World”, was highlighted by researchers at Cato Networks, who recently demonstrated how a fictional narrative could be used to bypass the built-in safeguards of popular AI platforms such as ChatGPT, Microsoft Copilot and DeepSeek, resulting in fully functional malware.
According to Cato Networks’ updated threat intelligence report released on March 18, the researcher behind the demonstration had no prior experience developing malware. Yet through detailed prompts and role play, they were able to guide the LLMs into creating a Chrome infostealer capable of extracting sensitive data from Google Chrome’s password manager. The implications are alarming, not only for individuals but also for organizations that rely on browser-based password storage.
The news comes on the heels of additional findings from Zscaler and Menlo Security, both reporting exponential growth in AI-assisted cyber threats. With malware creation now democratized, cybersecurity experts warn that the era of “zero-knowledge” threat actors, bad actors who need only intent and access to an LLM, is now a reality. Here is what you need to understand about this growing threat, how it works, and what steps you can take to protect yourself.
What is the Immersive World attack?
The Immersive World technique is a jailbreaking strategy built on narrative engineering, a process in which attackers construct detailed fictional environments where restricted actions are normalized. In this invented universe, the LLMs are given characters and missions that justify normally prohibited behaviour. In the Cato test, for example, the AI played an elite coder named Jaxon who was tasked with creating software to “defeat a bad guy”, in practice a euphemism for stealing passwords.
As Cato Networks explained, this imaginative framing effectively disarmed the AI’s safety measures, which would normally block such malicious prompts. The AI, unaware of the real-world context, carried out its assigned mission and generated working code capable of harvesting credentials from Chrome 133. According to the report, the model did not even need details on how Chrome encrypts or stores passwords; it filled in the gaps from its general knowledge and the researcher’s iterative feedback.
How are existing AI protections bypassed?
It comes down to reframing. Ask ChatGPT outright to write malware and you will likely get a refusal: “I’m sorry, I can’t help with that.” But embed the same request in a fictional world and the model no longer sees it as a real-world threat. Instead, it sees a prompt in a story, and its role is to keep the story moving. The method sidesteps ethical guardrails without changing the model’s underlying architecture.
“We call them zero-knowledge threat actors,” said Vitaly Simonovich of Cato Networks in an interview with Business Insider, “which means that with the sole power of an LLM, all you need is the intent and the goal in mind to create something malicious.”
Which AI models were affected?
The Immersive World jailbreak successfully manipulated ChatGPT, Microsoft Copilot and DeepSeek R1. Google’s Gemini and Anthropic’s Claude, however, resisted the exploit. After the experiment, Cato Networks disclosed the vulnerability to all affected vendors. Google acknowledged receipt but declined to review the malicious code, while Microsoft and OpenAI responded positively. DeepSeek did not respond.
An OpenAI spokesperson told Business Insider:
“We value security research into AI and have carefully reviewed this report. The code generated in the report does not appear to be inherently malicious… This scenario is consistent with normal model behaviour and was not the product of circumventing the model’s safeguards.”
The response highlights a deeper issue: although an LLM is not inherently dangerous, it can be steered toward risky output under the right conditions. And as AI systems become more advanced and versatile, so does the creativity of those who seek to abuse them.
What are the real-world implications?
It is easy to dismiss these findings as theoretical, but the reality is more disturbing. Symantec recently demonstrated another AI-driven attack in which an AI agent produced a complete phishing email with a malicious PowerShell script. Simply by telling the AI that the task was authorized, the researchers got it to comply. The result was an end-to-end phishing attack designed entirely by AI.
And this is not an isolated finding. Menlo Security’s latest report points to a 130% increase in zero-hour phishing incidents and nearly 600 confirmed cases of AI-generated fraud. Meanwhile, the Zscaler ThreatLabz 2025 AI Security Report reveals a staggering 3,000% increase in enterprise AI usage. Of the 536.5 billion AI transactions monitored, almost 60% were blocked by enterprises, mainly over concerns about data leakage, non-compliance and unauthorized access.
Why is Chrome particularly vulnerable?
Google Chrome, because of its dominance in the browser market, is a prime target, and its built-in password manager makes for a lucrative entry point. In the Cato case, the AI-generated malware accessed the password store within seconds during a simulated intrusion, validating the effectiveness of the exploit. Even if an attacker has no direct access to a user’s device, such malware can be combined with phishing techniques and delivered remotely.
What can you do to stay safe?
The consensus among security experts is clear: passwords are no longer a reliable defence. Two-factor authentication (2FA), while useful, is also increasingly being bypassed. Instead, experts recommend transitioning to passkeys, cryptographic credentials that are bound to a device and nearly impossible to phish.
- Audit your accounts: Prioritize communications, financial, and healthcare platforms.
- Enable passkeys: Platforms like Google, Apple, and Microsoft now support them (see the sketch after this list).
- Use strong 2FA methods: Prefer app-based authenticators over SMS codes.
- Update security tools: Use real-time browser protection and phishing detection software.
- Avoid storing passwords in browsers: Use dedicated password managers with end-to-end encryption.
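To make the passkey recommendation concrete, here is a minimal sketch of how a website might register a passkey in the browser using the standard WebAuthn API. The site name, domain, user details, and locally generated challenge are placeholders for illustration only; in a real deployment the challenge and user handle are issued by the server, which also verifies and stores the resulting public key.

```typescript
// Minimal sketch: browser-side passkey registration via the WebAuthn API.
// The challenge, user ID, and domain below are placeholders; a real server
// must generate the challenge and verify the credential it gets back.

async function registerPasskey(): Promise<void> {
  if (!window.PublicKeyCredential) {
    console.warn("This browser does not support passkeys (WebAuthn).");
    return;
  }

  const options: PublicKeyCredentialCreationOptions = {
    // Placeholder random challenge; normally supplied by the server.
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    rp: { name: "Example Site", id: "example.com" },   // relying party (assumed domain)
    user: {
      id: crypto.getRandomValues(new Uint8Array(16)),  // placeholder user handle
      name: "user@example.com",
      displayName: "Example User",
    },
    pubKeyCredParams: [
      { type: "public-key", alg: -7 },   // ES256
      { type: "public-key", alg: -257 }, // RS256
    ],
    authenticatorSelection: {
      residentKey: "required",           // discoverable credential, i.e. a passkey
      userVerification: "required",      // biometric or device PIN
    },
  };

  // The browser prompts for Face ID, a fingerprint, or the device PIN.
  const credential = await navigator.credentials.create({ publicKey: options });
  console.log("New passkey created:", credential);
  // Send `credential` to the server so it can verify and store the public key.
}
```

The private key never leaves the device and the credential is bound to the site’s domain, so a phishing page on a lookalike domain cannot reuse it, which is exactly the protection that browser-stored passwords lack.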
Stephen Kowski of SlashNext puts it bluntly: what was once “0-day” is now “0-time”. Attackers no longer wait; they strike the moment an opportunity arises. With nearly a million new phishing sites appearing each month and 80% of malicious content hosted on cloud platforms such as AWS and Cloudflare, the urgency to act has never been greater.
Are companies taking this seriously?
Some are, but not quickly enough. Google, for example, has been championing the death of the password since 2023, yet widespread adoption of passkeys remains slow. While companies invest in AI for productivity, many are unprepared for its darker side. The current climate, as Deepen Desai, Chief Security Officer at Zscaler, describes it, demands “zero trust everywhere”.
In a world where AI can code better than many developers and criminals can deploy malware without any technical background, companies must rethink their security strategies. It is not just about installing antivirus software: it is about redefining trust, training people, and preparing for a future in which LLMs are both a tool and a threat.
“We are already seeing a rise in phishing emails that are super realistic,” Simonovich warned. “Think about applying this to malware development; we will see more and more of it being developed using these LLMs.”
Ultimately, the specifics of each exploit may change, but the takeaway remains the same: complacency is no longer an option. Whether you are an individual user or a global enterprise, the responsibility to adapt is yours. Because while AI will not execute the code on its own, it certainly will not stop someone else from running what it generates.
It’s time to act. Change your habits. Adopt better security tools. And if you haven’t already, stop relying on browser-based password managers. The digital world has just become much more dangerous, and much smarter.