HELSINKI, Finland, 4 April 2025 – A decisive step has been taken in the evolution of cybersecurity threats. According to a new report by the cybersecurity training company Hoxhunt, artificial intelligence has officially outperformed humans at phishing, and not by a small margin. After years of lagging behind, AI phishing agents are now 24% more effective than elite human red teams at crafting simulated phishing campaigns. This tectonic shift raises serious concerns about the future of digital security and the ability of individuals and organizations to defend themselves against increasingly intelligent and personalized cyberattacks.
The alarm is sounding louder than ever. “For the first time in more than two years, AI agents have created more effective simulated campaigns than our elite red teams. What took experienced social engineers 16 hours can now be generated by AI in five minutes.” As Forbes asserts, these machine-driven attacks are not only effective; they evolve at a frightening pace. This rapid rise calls into question how long our current safeguards – including spam filters and user training – can remain effective.
How did AI surpass humans at phishing?
The turning point came in March 2025. Hoxhunt’s AI spear-phishing agent, codenamed JKR, achieved a surprising 55% improvement over its 2023 performance, now surpassing human red teams at every tested skill level. According to SiliconANGLE, JKR uses a continuous feedback loop through which it evolves, allowing it to refine and adapt its strategies in real time. Unlike human attackers, who need to sleep, eat and study their targets manually, JKR can scrape social networks, LinkedIn and endless public records, tailoring emails with uncanny precision.
“The AI big bad wolf is knocking at the door,” said Pyry Åvist, CTO and co-founder of Hoxhunt. “It huffs and it puffs, but too many organizations continue to build their human defenses out of straw.”
In other words, our traditional cyber defenses — training, vigilance, spam filters — might no longer be enough.
What makes AI phishing so dangerous?
Unlike a human, AI does not need to learn from experience. Once a phishing model is trained, it can replicate its tactics at unlimited scale, without fatigue or emotional bias. According to the Hoxhunt report, modern AI systems can write impeccable phishing emails with near-perfect grammar, a human tone and zero errors, all tailored to specific targets. Unlike human-written scams, these AI-generated messages are often indistinguishable from legitimate correspondence.
This personalized phishing at scale is particularly frightening because it merges the reach of broad-based attacks with the precision of targeted spear phishing. Historically, attackers had to choose between quantity and quality. AI gives them both. As Hoxhunt describes, phishing-as-a-service is evolving rapidly, and AI is becoming its backbone.
How did human defenders fare against AI?
In earlier controlled simulations, novice users were the most vulnerable, and human red teams still significantly outperformed AI at getting them to click malicious links. But by early 2025, even experienced employees with more than six months of phishing training had fallen into AI-crafted traps. The margin? A 24% advantage for AI over its human counterparts. IBM X-Force highlights this shift as “the beginning of a new arms race in cyberdeception.”
According to Hoxhunt data, AI-generated attacks that evade email filters have increased by 49% since ChatGPT launched in 2022. This is not just a statistical anomaly; it is a sign of a gathering storm. Mika Aalto, CEO of Hoxhunt, draws an apt historical comparison: “Just as we built the immune system for computers when the first viruses hit the world in the 1980s, we now have to build one for people and the digital environment – powered by AI.”
Can AI be the cure as well as the curse?
Ironically, AI may also be our best defense against AI. Hoxhunt researchers point out that augmenting behavioral training with AI could provide a viable solution. Adaptive training tools – those that learn from user behavior and evolving threats – can simulate phishing attempts using AI to strengthen user instincts. Organizations must start fighting fire with fire.
As Hoxhunt points out, AI is “a sword that cuts both ways.” Deployed wisely, it can strengthen employee resilience by simulating realistic threats, giving people a safe space to fail, learn and improve. However, these tools must be widely adopted and continuously updated to keep pace with evolving threats. Waiting is not an option.
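To make “adaptive” concrete: such tools raise the difficulty of simulated lures as a user gets better at reporting them, and ease off after a failure. The sketch below illustrates that feedback loop in Python. All names, thresholds and the difficulty scale are hypothetical, for illustration only; this is not Hoxhunt’s actual system.

```python
# Hypothetical sketch of an adaptive phishing-simulation loop: difficulty
# rises when a user keeps reporting simulated lures, and falls when they
# click one. Names and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class TrainingState:
    difficulty: int = 1   # 1 = obvious lure, 5 = highly personalized
    streak: int = 0       # consecutive simulations reported correctly

def update(state: TrainingState, reported: bool) -> TrainingState:
    """Adjust simulation difficulty based on the user's last response."""
    if reported:
        state.streak += 1
        if state.streak >= 3 and state.difficulty < 5:
            state.difficulty += 1   # user is ready for harder lures
            state.streak = 0
    else:
        state.difficulty = max(1, state.difficulty - 1)  # ease off after a click
        state.streak = 0
    return state

# Example run: three reports raise the difficulty, one click lowers it again.
state = TrainingState()
for outcome in [True, True, True, False, True]:
    state = update(state, outcome)
print(state.difficulty)  # 1
```

The point of the loop is that training pressure tracks the individual: a real system would drive the same decision with richer signals (time-to-report, lure theme, role), but the feedback structure is the same.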
What does this mean for businesses and individuals?
The implications go far beyond IT departments. For individuals, AI-powered phishing means you can no longer rely solely on gut instinct. That suspicious HR email or surprise invoice could be perfectly written, referencing a recent post you made on LinkedIn or a transaction you made online. As Pyry Åvist puts it, “We are not ready for this.”
Companies need to take proactive action now. This includes:
- Implementing behavior-based security training powered by AI
- Regularly auditing and updating spam filters and threat detection systems
- Encouraging a culture of skepticism and verification within teams
- Monitoring emerging phishing trends with actionable intelligence
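One small, concrete instance of the verification habit above is flagging sender domains that nearly match, but are not, a domain you trust (think `paypa1.com` vs `paypal.com`), since polished AI lures often ride on lookalike domains. The Python sketch below shows the idea; the trusted-domain list and the edit-distance threshold are assumptions for demonstration, not anything from the Hoxhunt report.

```python
# Illustrative lookalike-domain check: flag a sender domain that is a small
# edit away from a trusted domain. TRUSTED and the threshold (2) are
# assumptions chosen for this demo.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

TRUSTED = {"paypal.com", "microsoft.com", "hoxhunt.com"}

def is_lookalike(domain: str) -> bool:
    """True if the domain nearly matches, but is not, a trusted domain."""
    return any(0 < edit_distance(domain, t) <= 2 for t in TRUSTED)

print(is_lookalike("paypa1.com"))  # True: one character swapped
print(is_lookalike("paypal.com"))  # False: exact trusted domain
```

A real mail gateway layers many such signals (SPF/DKIM results, display-name mismatches, URL reputation); the sketch only shows why “check the sender” is a check a machine can help with.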
The urgency cannot be overstated. With each passing month, AI improves. And unlike humans, it neither plateaus nor burns out.
Is there still hope?
Despite the grim statistics, experts remain optimistic. The technology that powers these attacks can also power advanced defenses. “We are at a crossroads,” said Aalto, “and the decisions we make today will shape the digital landscape for decades.” This includes not only businesses but also governments, educational institutions and individuals.
The arms race will not be won by firewalls alone. It requires a collective shift in the mindset, behavior and tools we deploy. Organizations should not wait for a major breach to force change. If AI has taught us anything, it is that evolution is constant and fast.
As users, we need to sharpen our digital instincts. Look twice. Check the sender. Think before you click. And above all, never dismiss your suspicions just because an email seems too polished to be fake – in 2025, that polish could mean it is even more dangerous.