AI in Cybersecurity: A History

From Fuzzers to Frontier AI: The History of AI in Cybersecurity

Anthropic’s Claude Mythos Preview announcement draws an explicit analogy to the introduction of automated fuzzers in cybersecurity — tools that found vulnerabilities, raised concerns, and ultimately became indispensable to defenders. Understanding that history explains why Mythos represents an inflection point, not just another model release.

1988 — Morris Worm: the first AI-adjacent automated security event
1990s — Fuzzing first applied to software testing
2026 — Claude Mythos Preview: autonomous zero-day discovery

A Brief History of Automation in Security

1. The early era: manual vulnerability research (pre-1990s)

Before automated tools, vulnerability research was entirely manual — security researchers read source code, reverse-engineered binaries, and constructed exploits by hand. The barrier to entry was extremely high: finding and exploiting a serious vulnerability demanded deep specialist knowledge, significant time, and skills very few people possessed. That high barrier kept the pool of effective attackers small, and the attacks that did occur were primarily the work of highly skilled individuals or well-resourced nation states.

2. The fuzzer era: automated vulnerability discovery (1990s-2010s)

Fuzzing — the automated generation of random or semi-random inputs to find software crashes — transformed security research. Early fuzzers like SPIKE and Sulley lowered the barrier to finding crashes. AFL (American Fuzzy Lop), released by Michal Zalewski in 2013, introduced coverage-guided fuzzing that could systematically explore code paths — finding vulnerabilities that purely random fuzzing missed. Google’s OSS-Fuzz, launched in 2016, applied coverage-guided fuzzing at scale to open source software — finding and enabling the patching of tens of thousands of vulnerabilities. The fuzzer story followed the pattern Anthropic now cites: initial concerns about enabling attackers, followed by adoption as a critical defensive tool.
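As a toy illustration of the idea (not a reconstruction of any particular historical tool), the mutate-and-observe loop at the heart of early fuzzers can be sketched in a few lines of Python. The `parse_header` target and its planted out-of-bounds bug are entirely hypothetical, included only so the loop has something to crash:

```python
import random

def parse_header(data: bytes) -> int:
    """Hypothetical target with a planted bug: trusts its own length field."""
    if len(data) < 2 or data[0] != 0x7F:
        return 0  # not our format; parse nothing
    length = data[1]
    return data[2 + length]  # IndexError when the length field exceeds the payload

def fuzz(target, seed: bytes, iterations: int = 10_000) -> list[bytes]:
    """Randomly mutate the seed input and record every input that crashes the target."""
    crashes = []
    rng = random.Random(0)  # fixed seed so runs are reproducible
    for _ in range(iterations):
        data = bytearray(seed)
        for _ in range(rng.randint(1, 4)):          # flip a few random bytes
            data[rng.randrange(len(data))] = rng.randrange(256)
        try:
            target(bytes(data))
        except Exception:                           # a crash is a finding
            crashes.append(bytes(data))
    return crashes

crashes = fuzz(parse_header, seed=bytes([0x7F, 0x01, 0xAA, 0xBB]))
print(f"{len(crashes)} crashing inputs found")
```

This is the purely random approach the paragraph describes; coverage-guided fuzzers like AFL improve on it by keeping mutated inputs that reach new code paths and mutating those further, rather than always starting from the same seed.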

3. The ML-assisted era: AI augments security research (2015-2024)

Machine learning began augmenting security research in meaningful ways: anomaly detection systems using ML to identify unusual network traffic, malware classification models trained on behavioural features, and natural language processing for threat intelligence analysis. These applications improved security tooling but did not fundamentally change the nature of vulnerability research — they made existing approaches faster and more scalable but did not produce qualitatively new capabilities.

4. The autonomous AI era: Mythos and beyond (2025-present)

Claude Mythos Preview represents a qualitative shift: an AI system that can autonomously perform the full vulnerability research cycle, from initial code analysis through working exploit development. Unlike ML-assisted tools that augment human security researchers, Mythos can complete that cycle without human intervention at each step. Anthropic engineers with no security training could ask it to find remote code execution vulnerabilities and wake up to working exploits — a capability that previously required years of specialist training.

5. What comes next: the defensive equilibrium (projected 2027+)

Anthropic’s expectation — explicitly modelled on the fuzzer trajectory — is that the same AI capability that currently raises security concerns will ultimately become a standard component of defensive security practice. The analogy is instructive: OSS-Fuzz now continuously tests hundreds of critical open source projects, finding and enabling patching of vulnerabilities faster than they can be exploited at scale. AI-powered vulnerability scanning at the Mythos capability level, deployed defensively through programmes like Project Glasswing and their successors, is the expected destination.

The Lessons From the Fuzzer Transition

The transition period is real and requires active management

When fuzzers became powerful and accessible, there was a genuine period during which attackers could find vulnerabilities faster than defenders could patch them. This period was managed — imperfectly but effectively — through coordinated disclosure programmes, defensive deployment prioritisation, and industry collaboration. The Mythos transitional period requires the same active management, accelerated because AI capability advances faster than fuzzer capability did.

Access control matters during the transition

The fuzzer transition was smoother because early powerful fuzzers required significant technical expertise to deploy effectively — which limited their accessibility during the period before defensive deployment was complete. Mythos Preview’s accessibility to non-experts (as documented in Anthropic’s disclosure) means the access control burden is higher and the limited-release approach Anthropic has taken with Project Glasswing is correspondingly more important.

Defenders organise more effectively than attackers at scale

The ultimate reason fuzzers became more beneficial to defenders: defenders — operating system teams, browser vendors, open source maintainers — could coordinate to deploy fuzzing at scale across their entire codebases, systematically finding and patching vulnerabilities. Attackers need to find only one exploitable vulnerability per target; defenders need to find and fix all of them. Tools that search comprehensively are structurally more useful to defenders — and the same logic applies to AI security tools.

How long did the fuzzer transition take?

The transition from fuzzer concern to fuzzer adoption as a standard defensive tool took roughly a decade: AFL's 2013 release made coverage-guided fuzzing widely accessible, and by the early 2020s OSS-Fuzz had established continuous fuzzing as standard practice for open source security. The AI transition may be faster — the institutional infrastructure for security coordination exists now in ways it did not in 2013, and the potential defensive value of AI security tools is more clearly understood from the start. It could also be slower, if the gap between offensive AI capability and defensive AI tooling proves larger than expected.

Are there historical precedents where the attacker-defender balance never fully recovered?

Some security tools have had more lasting offensive impact than defensive: certain classes of exploit frameworks and some automated attack tools became primarily offensive in practice despite theoretical defensive applications. The difference with fuzzers — and the reason Anthropic draws this specific analogy — is that fuzzers are structurally better suited to defenders: they require access to source code or a cooperative target, which attackers may not have. AI security tools that require source code access share this defender advantage; those that work purely on binary analysis are more symmetrically useful.

Want to Understand How AI Security Developments Affect Your Business?

SA Solutions tracks frontier AI developments and helps businesses understand their practical implications — security, strategy, and integration opportunity.

Book a Free Consultation · Our AI Integration Services
