Two independent analyses from the UK’s AI Security Institute (AISI) and Palo Alto Networks suggest frontier AI systems have reached a new level of autonomous cybersecurity capability. Anthropic’s Claude Mythos Preview and OpenAI’s GPT-5.5 reportedly exceeded previously established capability growth trends, with researchers unable to determine whether the jump represents a temporary anomaly or a permanent acceleration.
The findings focus on how effectively AI systems can independently perform complex cybersecurity tasks such as vulnerability discovery, reverse engineering, exploit chaining, and multi-stage attack simulations.
Researchers say the progression is now happening on a scale measured in months rather than years, sharply compressing the timelines prior assessments assumed for how quickly advanced AI cyber capabilities would evolve.
Why It Matters: The reports reinforce growing concerns that AI is becoming both a defensive security tool and an offensive force multiplier. Enterprises, governments, and infrastructure operators may now face a shrinking timeline to harden systems before increasingly autonomous AI-driven attacks become commonplace.
- AI capability growth is accelerating faster than prior forecasts: AISI researchers previously estimated that AI cyber task performance was doubling every eight months in late 2025. By February 2026, that estimate had already accelerated to roughly every 4.7 months. Claude Mythos Preview and GPT-5.5 then exceeded even those updated projections, suggesting current forecasting models may underestimate the pace of capability growth.
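The compounding effect of that shrinking doubling period can be sketched with a quick calculation. This is an illustrative extrapolation, not a figure from the reports; only the 8-month and 4.7-month doubling estimates come from AISI.

```python
# Illustrative sketch: how a shrinking doubling time compounds capability
# over a fixed window. The doubling periods (8 months, then 4.7 months)
# are the estimates attributed to AISI; the 12-month window is arbitrary.

def capability_multiplier(months: float, doubling_period_months: float) -> float:
    """Capability growth factor after `months`, assuming exponential
    growth with a fixed doubling period."""
    return 2 ** (months / doubling_period_months)

# Projected growth over one year under each estimate:
old = capability_multiplier(12, 8.0)   # ~2.83x at the late-2025 estimate
new = capability_multiplier(12, 4.7)   # ~5.87x at the February 2026 estimate
print(f"12-month growth at 8.0-month doubling: {old:.2f}x")
print(f"12-month growth at 4.7-month doubling: {new:.2f}x")
```

Under the updated estimate, a year of progress delivers roughly twice the capability gain that the earlier forecast implied, which is why researchers describe their own forecasting models as potentially underestimating the pace.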
- Models are now completing realistic multi-stage attack simulations: In structured cyber range tests, Claude Mythos Preview became the first AI model to fully complete both of AISI’s simulated enterprise attack environments. One scenario, “The Last Ones,” involved a 32-step corporate network compromise. Another, “Cooling Tower,” had previously been unsolved by any AI system. GPT-5.5 also demonstrated meaningful autonomous attack capability in the same environments.
- Security companies are already seeing operational impact: Palo Alto Networks reported that AI-assisted testing uncovered 75 security issues mapped to 26 CVEs across more than 130 products, dramatically above its normal monthly discovery rate. The company described the latest models as exceptionally capable at identifying vulnerabilities and escalating them into exploitable attack chains in near real time.
- Benchmarks may already be lagging behind reality: AISI acknowledged its current evaluation framework is becoming insufficient because some models perform too well under existing constraints. The institute noted that without token caps, success rates become so high that meaningful “time horizon” calculations break down. Researchers also admitted their longest tests, capped at 12 hours, may no longer expose where model reliability actually begins to fail.
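The "time horizon" breakdown AISI describes can be illustrated with a toy estimator. Horizon-style analyses typically locate the task duration at which a model's success rate crosses 50%, fit from pass/fail results across tasks of varying length; the crude bracketing approach and the data below are invented for illustration, not AISI's actual method.

```python
# Illustrative sketch of why saturated success rates break "time horizon"
# estimates. The horizon is the task duration at which success probability
# crosses 50%. If a model passes every task within the test's duration cap,
# that crossover point is unidentifiable -- the problem AISI reports.

def fit_horizon(durations_hours, successes):
    """Crude 50% horizon: midpoint between the longest solved task and
    the shortest unsolved one. Returns None when results are all passes
    (or all failures), since no crossover is observable."""
    if all(successes) or not any(successes):
        return None
    longest_pass = max(d for d, s in zip(durations_hours, successes) if s)
    shortest_fail = min(d for d, s in zip(durations_hours, successes) if not s)
    return (longest_pass + shortest_fail) / 2

tasks = [0.5, 1, 2, 4, 8, 12]  # task lengths in hours, capped at 12
print(fit_horizon(tasks, [1, 1, 1, 1, 0, 0]))  # 6.0 -- horizon estimable
print(fit_horizon(tasks, [1, 1, 1, 1, 1, 1]))  # None -- model solves everything
```

The second case is the failure mode described above: when a model clears every task up to the 12-hour cap, the benchmark can no longer say where its reliability ends, only that the cap is too low.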
- Defensive preparation is now considered urgent, not optional: Both AISI and Palo Alto Networks emphasized that organizations should immediately strengthen baseline cybersecurity controls, reduce exposed attack surfaces, accelerate patching cycles, and improve automated detection and response systems. Researchers warned that AI-powered attacks could soon unfold in minutes rather than days or weeks, fundamentally changing incident response expectations.
Source: "AI is getting better at security – and it's doing it faster than expected" (ITPro)