Cyberattacks skyrocket by 131% as hackers embrace AI

Threat actors have harnessed artificial intelligence and automation at an unprecedented speed, according to a new cybersecurity report.

In a year defined by acceleration, Hornetsecurity's annual Cybersecurity Report revealed that threat actors embraced automation, artificial intelligence, and social engineering at unprecedented speed, while defenders raced to adapt governance, resilience, and awareness programmes to match. 

Analysis of over six billion emails processed monthly (72 billion annually) confirmed that email was a consistent delivery vector for cyberattacks in 2025.

Malware-laden emails surged by 131 percent year-over-year, accompanied by a rise in both email scams (+34.7 percent) and phishing (+21 percent). 

Generative AI enabled threat actors to create even more convincing fraudulent content, with more than three-quarters of CISOs (77 percent) identifying AI-generated phishing as a serious and emerging threat.

Nevertheless, defence teams are catching up: 68 percent of organisations invested in AI-powered detection and protection capabilities against such threats this year.

Commenting on these findings, Daniel Hofmann, Hornetsecurity CEO, said: "AI is both a tool and a target, and attack vectors are expanding faster than many realise.

"The result is an arms race where both sides are using machine learning.

"On one side, the goal is to deceive; on the other, to defend and forestall.

"Attackers are increasingly using generative AI and automation to identify vulnerabilities, craft more convincing phishing lures, and orchestrate multi-stage intrusions with minimal human oversight."

AI's emerging cybersecurity threats: synthetic identity fraud, deepfakes, and more 
AI's potential for misuse has become a defining feature of the threat landscape, with 61 percent of CISOs believing AI has directly increased ransomware risk.

For CISOs, the most pressing concerns include synthetic identity fraud, which uses AI to generate documents and credentials; voice cloning and deepfake videos that impersonate users; model poisoning, in which malicious data corrupts internal AI systems; and employee misuse of public AI tools.

These emerging technologies blur the line between legitimate and malicious activity, making traditional security controls less effective, especially as cybercriminals seek to compromise trust rather than force access.

The AI leadership awareness gap 
Yet even as companies strengthen their recovery capabilities, many risk guarding an old goalpost.

The next wave of attacks will target something less tangible but more powerful: trust.

CISOs highlighted a wide disparity in leadership's understanding of AI-related risks this year, with reported C-suite awareness of AI's role in such attacks ranging from "deep awareness" to "no real understanding".

The median response, however, was that leadership had some awareness, though progress was clearly inconsistent and varied widely from business to business.

Looking ahead, resilience, driven by a cultural change rather than prevention alone, will define cybersecurity success in 2026. 

Hofmann added: "The results of our report demonstrate that organisations are learning to recover without negotiating. But in-house security awareness efforts need to evolve at the pace of AI adoption.

"Few boards run cyber crisis simulations, and cross-functional playbooks remain the exception rather than the rule.

"As AI-driven misinformation and deepfake extortion become more commonplace, a security culture of readiness, backed by an awareness of AI and the possibilities it creates, will have to be a focus for 2026."

The full Cybersecurity Report, including its 2026 predictions, is available from Hornetsecurity.
