AI and Cybersecurity: The Arms Race That Claude Mythos Just Escalated
Cybersecurity has always been an arms race between attackers and defenders. Claude Mythos Preview marks a significant escalation: it introduces a new class of AI-powered capability that can find and exploit vulnerabilities autonomously. This post examines what that means for the balance between offense and defense.
The Historical Pattern: Security Tools and the Attacker-Defender Balance
Anthropic’s technical disclosure explicitly frames the Mythos Preview capability in the context of the historical pattern for new security tools. When automated fuzzers were first deployed at scale, the security community had the same concern now being raised about AI: would these tools enable attackers to find vulnerabilities faster than defenders could patch them? They did accelerate vulnerability discovery. But the long-term outcome was net positive for defenders: fuzzers like AFL became standard components of defensive software development, used by projects like Google’s OSS-Fuzz to systematically find and patch vulnerabilities in critical open source software before attackers could exploit them.
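The fuzzing technique this history refers to can be sketched in a few lines: feed a target function randomly generated inputs and record the ones that crash it. The toy Python fuzzer below (the buggy `parse_header` target is invented purely for illustration) shows the core loop that tools like AFL industrialise with coverage guidance and input mutation.

```python
import random
import string

def parse_header(data: str) -> str:
    """Toy target with a planted bug: raises on inputs containing '%'."""
    if "%" in data:
        raise ValueError("malformed escape sequence")  # the 'vulnerability'
    return data.upper()

def fuzz(target, trials: int = 10_000, seed: int = 0):
    """Minimal random fuzzer: generate inputs, collect those that crash the target."""
    rng = random.Random(seed)
    alphabet = string.ascii_letters + string.punctuation
    crashes = []
    for _ in range(trials):
        data = "".join(rng.choice(alphabet) for _ in range(rng.randint(1, 12)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, exc))
    return crashes

crashes = fuzz(parse_header)
print(f"found {len(crashes)} crashing inputs")
```

Real fuzzers replace the blind random generation here with coverage feedback, which is what made AFL-style tools effective enough for OSS-Fuzz to run them continuously against critical open source projects.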
Anthropic’s expectation is that AI security tools will follow the same trajectory: an initial period of risk during the transition, followed by a new equilibrium in which AI primarily benefits defenders. The reasoning: for the purpose of deploying AI security tools systematically, defenders are a larger, better-organised, and better-resourced constituency than attackers. Attackers are motivated individually; defenders — operating-system vendors, browser teams, open source maintainers, security researchers — are motivated collectively and have the infrastructure to deploy AI tools at scale.
Why the Transitional Period Is the Critical Risk Window
The timing asymmetry
The risk during the transitional period comes from a timing asymmetry: Mythos Preview (and future models with similar capabilities) exists now. The defensive infrastructure to counter AI-powered attacks does not yet exist at scale. Project Glasswing is Anthropic’s attempt to use this period to patch vulnerabilities before they are discovered by attackers using similar tools — but the programme reaches a limited set of critical systems. The broader software ecosystem — the thousands of open source projects, enterprise applications, and infrastructure components that are not covered by Project Glasswing — remains exposed during this transitional period.
The democratisation of sophisticated attack capability
Anthropic’s finding that non-experts with no formal security training can use Mythos Preview to find and exploit remote code execution vulnerabilities changes the threat model. Previously, the most sophisticated attacks required specialised expertise — which limited the number of potential attackers to those with significant technical skills. AI tools that democratise this capability expand the potential attacker population. This does not mean catastrophic risk is imminent, but it does mean that the baseline security investment required to protect against a broader range of potential attackers is higher than it was before.
The N-day window compression
N-day vulnerabilities — known vulnerabilities with patches available but not yet deployed — have historically given defenders a grace period of days to weeks before exploit code is developed and weaponised. Mythos Preview’s ability to autonomously develop working exploits from known vulnerabilities compresses this window dramatically. A vulnerability disclosed today may have working exploit code within hours if AI tools are applied to it. This changes the urgency calculus for patch deployment and raises the cost of patch management delays.
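The shift in the urgency calculus can be made concrete with back-of-the-envelope arithmetic (the durations below are illustrative assumptions, not measured figures): exposure is the gap between when a working exploit exists and when the patch is actually deployed.

```python
from dataclasses import dataclass

@dataclass
class PatchScenario:
    """Illustrative timeline in hours after public disclosure; values are assumptions."""
    exploit_ready: float   # hours until a working exploit exists
    patch_deployed: float  # hours until the fix is rolled out

    def exposure_hours(self) -> float:
        """Window during which a system is exploitable; zero if patched first."""
        return max(0.0, self.patch_deployed - self.exploit_ready)

# Historical pacing: exploit development takes roughly a week, patching ~3 days.
human_paced = PatchScenario(exploit_ready=7 * 24, patch_deployed=3 * 24)

# AI-compressed pacing: same patch cadence, but an exploit exists within hours.
ai_paced = PatchScenario(exploit_ready=6, patch_deployed=3 * 24)

print(human_paced.exposure_hours())  # 0.0  — the patch lands before the exploit
print(ai_paced.exposure_hours())     # 66.0 — nearly three days of exposure
```

The point of the sketch: a patch cadence that was comfortably fast under human-paced exploit development leaves a multi-day exposure window once exploit development is compressed to hours.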
What the Long-Term Equilibrium Looks Like
AI-powered defensive scanning as the new standard
In the long-term equilibrium Anthropic anticipates, AI-powered vulnerability scanning will be a standard component of software development and deployment. The same capability that Mythos Preview demonstrates — finding zero-day vulnerabilities autonomously in real codebases — will be available to defenders at scale. Open source projects will benefit from AI-powered security review. Enterprise development teams will use AI security tools in their CI/CD pipelines. The result: a higher baseline security quality across the software ecosystem.
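One way such a tool might slot into a CI/CD pipeline is as a gate script that reads the scanner's findings and fails the build above a severity threshold. The JSON shape and the `ai-sec-scan` command mentioned in the comment are hypothetical; any real scanner would define its own output format.

```python
import json
import sys

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings: list, fail_at: str = "high") -> int:
    """Return a CI exit code: nonzero if any finding meets the failure threshold."""
    threshold = SEVERITY_RANK[fail_at]
    blocking = [f for f in findings if SEVERITY_RANK.get(f["severity"], 0) >= threshold]
    for f in blocking:
        print(f"BLOCKING: {f['id']} ({f['severity']}) in {f['file']}", file=sys.stderr)
    return 1 if blocking else 0

# Hypothetical scanner output; a real tool would be invoked earlier in the
# pipeline, e.g. `ai-sec-scan --output findings.json` (illustrative, not a real CLI).
report = json.loads("""[
  {"id": "FIND-001", "severity": "medium",   "file": "auth/session.py"},
  {"id": "FIND-002", "severity": "critical", "file": "api/upload.py"}
]""")

exit_code = gate(report, fail_at="high")
print("build", "FAILED" if exit_code else "PASSED")
```

The gating pattern itself is already standard practice with conventional scanners; the change the equilibrium implies is in what the scanner upstream of the gate can find.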
The advantage shifts to defenders with more to protect
Anthropic’s expectation that defenders ultimately benefit more than attackers is grounded in the structural difference between the two sides. Defenders have more to gain from systematic AI-powered security review — they have large, known codebases, established relationships with software maintainers, and the organisational infrastructure to act on vulnerability findings. Attackers benefit from finding a single exploitable vulnerability; defenders benefit from finding and patching all of them. AI tools that search comprehensively are more structurally aligned with the defender’s objective.
The role of coordinated disclosure and industry collaboration
The Project Glasswing approach — deploying AI defensively to critical systems, then responsibly disclosing vulnerabilities through coordinated processes — is the template for how the industry can manage the transitional period. For this to work at scale, software maintainers need to be able to receive, triage, and act on large volumes of AI-discovered vulnerability reports. The security industry’s coordinated disclosure infrastructure — currently built for human-paced vulnerability discovery — may need to adapt to handle AI-paced discovery rates.
Is AI making cyberattacks inevitable for all businesses?
No — but the risk profile is changing. AI tools make sophisticated attacks more accessible, which raises the baseline security investment required for businesses that operate internet-connected systems. The most effective response is not to assume breaches are inevitable but to make them harder through systematic patching, reduced attack surface, and stronger detection capabilities. The businesses most at risk are those with significant unpatched known vulnerabilities — the N-day compression that Mythos demonstrates makes legacy, unpatched systems significantly more exposed than they were before.
What is the likely timeline for the new security equilibrium?
Anthropic does not specify a timeline, and the honest answer is that nobody knows. The fuzzer analogy is instructive but imperfect — AI capability is advancing faster than fuzzer capability did, and the potential applications are broader. The transitional period could be measured in months if the industry responds quickly and coordinated defensive deployment is effective. It could be measured in years if the response is fragmented or slow. Anthropic’s publication of technical details and launch of Project Glasswing are attempts to accelerate the defensive response and shorten the transitional period.
Want to Understand What AI Advances Mean for Your Business Risk?
SA Solutions helps businesses navigate the AI landscape — from practical integrations to understanding the security and strategic implications of frontier AI developments.
