Claude Mythos Preview: The Questions Anthropic Hasn’t Answered Yet
Anthropic’s April 7, 2026 technical disclosure is unusually detailed by AI industry standards — but it leaves specific questions unanswered that security professionals, businesses, and policymakers need to address. This post identifies them honestly.
Questions About Mythos’s Full Capability
What is the full scope of vulnerabilities found — by category and severity?
Anthropic discloses that over 99% of vulnerabilities found have not been publicly disclosed because they are not yet patched. The categories of software affected, the severity distribution, and the capability depth across different vulnerability classes are unknown. Understanding whether Mythos's capabilities are uniformly distributed across software types or concentrated in particular vulnerability classes (for example, memory corruption versus logic flaws) would help security teams prioritise their defensive response. This information will become available progressively as vulnerabilities are patched and disclosed.
How does capability vary with guidance and scaffolding?
Anthropic describes Mythos finding vulnerabilities autonomously and also mentions researchers developing scaffolds that allow Mythos to turn vulnerabilities into exploits without human intervention. The relationship between the model’s raw capability and its scaffolded capability — how much does purpose-built scaffolding improve performance beyond the base model — is not disclosed. This matters for understanding what a well-resourced adversary with Mythos-level capability and custom scaffolding could achieve versus the baseline capabilities described.
What is the false positive rate — how often does Mythos report non-exploitable issues?
The disclosure focuses on successful exploits — 181 working Firefox exploits, 10 tier-5 crashes. The false positive rate — how many reported vulnerabilities turned out to be non-exploitable, misidentified, or duplicates — is not disclosed. For practitioners using AI-powered vulnerability discovery, the false positive rate determines how much human review time is required to validate AI findings: a 50% false positive rate doubles the review burden per confirmed finding, while a 5% rate makes AI-reported vulnerabilities actionable with little additional human validation.
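The review-burden arithmetic above can be made concrete with a short sketch. The function name and the sample rates below are illustrative assumptions, not figures from Anthropic's disclosure; the underlying relationship is simply that at a false positive rate f, a reviewer triages 1 / (1 − f) reports on average per genuinely exploitable finding.

```python
def reports_reviewed_per_true_finding(false_positive_rate: float) -> float:
    """Average number of AI-generated reports a human must triage
    to confirm one genuinely exploitable vulnerability."""
    if not 0 <= false_positive_rate < 1:
        raise ValueError("false positive rate must be in [0, 1)")
    return 1 / (1 - false_positive_rate)

# Hypothetical rates for illustration only.
for fpr in (0.05, 0.50):
    burden = reports_reviewed_per_true_finding(fpr)
    print(f"FPR {fpr:.0%}: {burden:.2f} reports reviewed per true finding")
```

At a 50% false positive rate the function returns 2.0 (double the burden), matching the claim above; at 5% it returns roughly 1.05, which is why low false positive rates make AI findings nearly directly actionable.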
Questions About Project Glasswing
What is the scale of the defensive impact so far?
Anthropic’s disclosure launched Project Glasswing without specifying the scale of the defensive deployment: how many software projects are being scanned, how many vulnerabilities have been found and reported to maintainers, and what the projected patching timeline looks like for the known findings. For the security community evaluating whether Project Glasswing is achieving its defensive objective, quantitative progress data would be valuable. Some of this data will become publicly available as vulnerabilities are disclosed following patching.
What are the governance structures for partner access?
The disclosure describes Project Glasswing as a limited release to vetted critical industry partners and open source developers. The specific vetting criteria, the governance structure for how partners can use Mythos, the audit and accountability mechanisms, and the process for partners who misuse access are not disclosed. For organisations evaluating whether to participate if given the opportunity, and for regulators evaluating whether the programme’s governance is adequate, these details matter.
What is the timeline for broader access?
The most commercially relevant unanswered question: when will Mythos Preview be available for broader business API access? Anthropic has not announced a timeline. The timeline depends on factors that are not public: the progress of defensive patching for discovered vulnerabilities, Anthropic's confidence in the monitoring and governance infrastructure for broader access, and regulatory considerations around dual-use AI capabilities. Following Anthropic's official channels is the only way to get this answer when it becomes available.
Questions About Industry Implications
Beyond Mythos-specific questions, the announcement raises broader questions that no single organisation can answer:
- Do other frontier AI models have comparable security capabilities that have not been publicly evaluated or disclosed?
- What industry standards for AI security capability evaluation and disclosure should emerge from the Mythos precedent?
- How should coordinated vulnerability disclosure processes adapt to handle AI-paced discovery rates, which may be dramatically faster than human-paced discovery?
- How should regulatory frameworks address AI dual-use capability in ways that are specific enough to be enforceable but flexible enough to accommodate rapid capability advance?
SA Solutions does not have answers to these questions — they require the collective engagement of frontier AI labs, the security research community, policymakers, and standards bodies. What SA Solutions can do is track the answers as they emerge and translate them into practical implications for the businesses we work with. The Mythos announcement opened a conversation; the conversation will continue through 2026 and beyond.
Will Anthropic answer these questions in follow-up communications?
Some questions — particularly those about vulnerability scale and Project Glasswing impact — will be partially answered through the coordinated disclosure process as vulnerabilities are patched and disclosed. Questions about broader access timelines will be answered through Anthropic’s commercial communications when decisions are made. Questions about governance structures may be addressed if Anthropic publishes a Project Glasswing governance document as the programme matures. The industry and regulatory questions will be addressed through the broader community process rather than by Anthropic alone.
Should businesses wait for these questions to be answered before making AI investments?
No — the questions identified in this post matter to the security community, policymakers, and researchers, but they are not prerequisites for most business AI investment decisions. The decision to implement a Claude-powered proposal generation system or client reporting automation does not depend on knowing Mythos's full vulnerability category breakdown. Build AI infrastructure on the information available now; incorporate additional Mythos-specific context as it becomes available.
Want to Stay Current on Mythos Developments as They Emerge?
SA Solutions publishes analysis of frontier AI developments and their business implications. Follow our content series for updates.
