HELSINKI, Finland, April 4, 2025 – In a shift that cybersecurity experts have long warned of, artificial intelligence has decisively outperformed human hackers at crafting phishing emails. A recent and sobering report from the security company Hoxhunt reveals that its proprietary AI agent, codenamed JKR, has become 24% more effective than elite human red teams at simulated phishing attacks, marking a turning point in the war against cybercrime.
As Forbes reported, and as corroborating data from BetaNews, SiliconANGLE and IBM X-Force supports, this progress is not merely academic. AI-generated phishing campaigns have reached a level of sophistication and scale that traditional defences, built for slower-moving adversaries, struggle to contain. It is an alarming but perhaps inevitable milestone, reflecting the accelerating arms race between digital attackers and defenders.
How did AI beat human hackers at their own game?
Phishing has long relied on psychological levers – urgency, fear, curiosity – to lure users into clicking malicious links or surrendering credentials. Traditionally, that required a human touch: someone who understands culture, grammar and emotional triggers. Now AI has learned all of this, and it doesn't need coffee breaks.
According to Hoxhunt's report, the JKR agent was trained on large language models and dynamically adapted through an internal process called "Evolves", which fine-tunes both incentive and response in real time. This allowed the AI to craft novel, personalized phishing emails using public data from LinkedIn profiles, social networks and professional affiliations, at scale and with chilling accuracy.
In tests conducted in March 2025, JKR's phishing emails outperformed human-crafted ones at every target skill level, including users with more than six months of cybersecurity training. This is a significant leap from earlier baselines: in 2023, AI was 31% less effective than humans. By the end of 2024, the gap had narrowed to 10%. Then, within a few months, AI pulled 24% ahead – a 55-percentage-point swing from where it stood two years ago.
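The arithmetic behind that "55" figure is worth spelling out: the report's numbers are relative-effectiveness gaps versus human red teams, so the two-year change is a swing in percentage points, not a multiplied percentage. A minimal sketch of the calculation:

```python
# Relative effectiveness of AI vs. human red teams, per the Hoxhunt
# figures cited above (negative = AI less effective than humans).
relative_effectiveness = {
    "2023": -31,       # AI 31% less effective
    "late 2024": -10,  # gap narrowed to 10%
    "March 2025": 24,  # AI 24% more effective
}

# The two-year change is the difference between the endpoints.
swing = relative_effectiveness["March 2025"] - relative_effectiveness["2023"]
print(f"Swing since 2023: {swing} percentage points")  # prints 55
```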
Q: What makes AI phishing attacks so effective?
A: AI combines speed, personalization and emotional intelligence at a level never seen before. It can generate, in seconds, convincing messages that resonate with a recipient's background, job role, or even recent online activity.
The alarming reality of AI-driven phishing campaigns
Even as tech giants like Google and Microsoft boast of blocking "over 99%" of malicious emails, users' inboxes still receive plenty of them – including poorly written scams from suspicious domains. If those crude attempts can slip through, what chance do users have against AI-crafted messages that mimic internal HR emails or impersonate senior management with uncanny accuracy?
"The big bad AI wolf is knocking at the door," said Pyry Åvist, co-founder and CTO of Hoxhunt, when the report was published. "It will inevitably get in. It huffs and it puffs, yet many organizations continue to build their human defenses out of straw."
Simply put, the digital defenses we rely on are no longer enough. Attackers can use generative AI to churn out highly convincing phishing emails in under five minutes – a task that once took experienced social engineers up to 16 hours, not counting infrastructure setup.
Q: Are AI phishing attacks already common?
A: Not yet. Most current phishing campaigns are still human-written, although AI is increasingly used to enhance them. But according to Hoxhunt, mass adoption of AI is imminent and inevitable.
The rise of AI-driven Phishing-as-a-Service (PhaaS) platforms
The arrival of advanced AI phishing agents is reshaping the entire phishing ecosystem. Just as ransomware became RaaS (Ransomware-as-a-Service), phishing is now undergoing a similar transformation. Malicious actors can subscribe to platforms that generate, distribute and even manage phishing campaigns for them, all driven by AI.
"The phishing-as-a-service market will shift to mass adoption of AI agents," warns the Hoxhunt report. Once that transition is complete, we will no longer be talking about one-off spam messages. Instead, we will face a flood of targeted, effective and relentless attacks that adapt to user behaviour and keep evolving with minimal human supervision.
Q: What is spear phishing and why is AI a threat here?
A: Spear phishing targets specific individuals or organizations, often using personalized information. AI can mine public data to automate and scale this process, making each message more credible and therefore more dangerous.
Why current defences are not enough
Many organizations depend on filters, anti-spam tools and static rules to protect users. These were designed to detect patterns – known signatures, flagged domains and keywords. But AI doesn't play by those rules. It writes in natural language, understands context and varies its content just enough to evade traditional detection methods.
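To see why static rules fail, consider a deliberately toy keyword filter (illustrative only – the keyword list and both sample emails are invented here; production filters combine sender reputation, header analysis, URL checks and ML scoring):

```python
# A toy signature/keyword filter, showing why static rules struggle
# against fluent AI-written text. Keywords and emails are invented
# for illustration; this is not any real product's rule set.
SUSPICIOUS_KEYWORDS = {"urgent", "verify your account", "password expired"}

def naive_filter(email_body: str) -> bool:
    """Return True if the email trips a known phishing keyword."""
    body = email_body.lower()
    return any(kw in body for kw in SUSPICIOUS_KEYWORDS)

# A crude scam trips the keyword list...
assert naive_filter("URGENT: verify your account now!")

# ...but an AI-written message applying the same pressure in plain,
# context-aware language sails straight through.
assert not naive_filter(
    "Hi Sam, HR flagged a mismatch in your benefits record. "
    "Could you confirm your details before Friday's payroll run?"
)
```

The second message exploits exactly what the article describes: natural language and situational context, with no fixed signature for a rule to match.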
Moreover, attackers are not standing still. They continually refine their lures, experiment with delivery vectors and exploit stolen data to boost authenticity. Meanwhile, most users have not received even basic phishing training, let alone adaptive simulations designed to match the sophistication of today's AI threats.
"In 2024," Hoxhunt's data shows, "AI began to fool more novice users with better-written emails." But that gap did not last long. By February 2025, even experienced users were falling for AI-generated attacks more often than for human-crafted ones.
Q: Can AI also help defend against phishing?
A: Yes. Cybersecurity platforms already use AI to spot patterns, flag suspicious behaviour and automate responses. But for now, offensive AI is advancing faster than defensive adaptations.
Hope on the horizon: building an AI-driven immune system
Despite the dire warnings, the report is not without optimism. "AI is a sword that cuts both ways," says Hoxhunt. The same technology that powers phishing agents can be deployed to build stronger, smarter defences – above all, in training human users.
Mika Aalto, CEO and co-founder of Hoxhunt, draws a parallel with the early days of cybersecurity. "When the first computer viruses emerged in the 1980s, they shocked people. But that shock led to antivirus software, firewalls and intrusion detection systems. We are now at a similar moment with AI. We need to build an immune system for people – powered by AI, rooted in behaviour."
Training simulations that adapt to each user's strengths and weaknesses can significantly improve resilience. Behaviour-based detection, real-time threat modelling and continuous engagement can add protective layers beyond the moment of a mouse click.
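As a rough sketch of what "adapting to each user" could mean in practice (the class, thresholds and difficulty scale below are invented for illustration, not Hoxhunt's actual system), a training loop might raise simulation difficulty for users who keep reporting the lures and step it back down when they click:

```python
from dataclasses import dataclass

# Hypothetical difficulty-adaptive phishing simulation, invented for
# illustration: harder lures for users on a reporting streak, easier
# ones after a click. Not any vendor's real algorithm.
@dataclass
class TraineeProfile:
    difficulty: int = 1  # 1 = crude scam ... 5 = tailored spear phish
    streak: int = 0      # consecutive simulations correctly reported

    def record_result(self, reported: bool) -> None:
        if reported:
            self.streak += 1
            if self.streak >= 3 and self.difficulty < 5:
                self.difficulty += 1  # user is ready for harder lures
                self.streak = 0
        else:
            self.streak = 0
            if self.difficulty > 1:
                self.difficulty -= 1  # user clicked: step back down

user = TraineeProfile()
for reported in [True, True, True]:  # three correct reports in a row
    user.record_result(reported)
print(user.difficulty)  # prints 2
```

The point of such a loop is the article's "beyond a mouse click" idea: the signal being tracked is sustained behaviour over time, not a single pass/fail event.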
Q: What should organizations do now?
A: Prioritize adaptive training, invest in behaviour-based AI defenses, and stop treating phishing as a purely technical problem. It is now a hybrid human-AI threat, and countermeasures must reflect that complexity.
Above all, leaders must understand that this is not just a future concern – it is happening now. As OpenAI, Microsoft and others continue to release more capable models, attackers are not far behind. The race is on, and there is no time to waste.
According to Hoxhunt, phishing attacks that evade traditional filters have increased by 49% since the rise of ChatGPT in 2022. That is not just a statistic – it is a warning that the threat landscape is shifting faster than many organizations can adapt.
The question is not whether AI will be used in phishing – that has already been answered. The real question is whether we can match its pace and build the digital and behavioural immunity we need before it's too late.