TITLE: Anthropic Mythos Leaked: A Step-Change Model Above Opus
DATE: 2026-03-31
COMPANY: Anthropic
TOPIC: AI Security

SUMMARY: A misconfigured content management system exposed internal Anthropic documents on 27 March 2026, revealing a new model called Claude Mythos, described as a step change above the existing Opus tier. The leaked draft blog warns that Mythos poses unprecedented cybersecurity risks and is far ahead of any other AI model in cyber capabilities. Anthropic has confirmed the model exists and is restricting early access to cyber defence organisations while it improves efficiency before a general release.

WHAT CHANGED: On 27 March 2026, independent security researchers discovered that Anthropic's content management system had been misconfigured, leaving close to 3,000 unpublished internal assets publicly accessible on the open internet. The exposed material included a draft blog post intended to announce a new AI model called Claude Mythos, known internally under the codename "Capybara." The draft described Mythos as "by far the most powerful AI model we've ever developed" and framed it as a new tier of model, larger and more capable than the existing Opus range. According to the leaked document, "Compared to our previous best model, Claude Opus 4.6, Capybara gets dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity, among others."

Anthropic quickly locked down access after being notified, and a company spokesperson confirmed the situation to Fortune: "We're developing a general purpose model with meaningful advances in reasoning, coding, and cybersecurity. Given the strength of its capabilities, we're being deliberate about how we release it." The company attributed the exposure to human error in the configuration of its systems.

The leaked documents did not stop at capability benchmarks. They also disclosed that Mythos has a feature described as "recursive self-fixing": an ability to autonomously identify and patch vulnerabilities in its own code (sketched illustratively below). Internal documents warned that the model "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders." Anthropic has reportedly been privately briefing government officials that Mythos makes large-scale cyberattacks more likely in 2026.
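ILLUSTRATIVE SKETCH: The leaked material describes what recursive self-fixing does, not how it is built. The sketch below is a hypothetical illustration of the general pattern, not Anthropic's implementation: a flagged file is rewritten by a patch-proposing function (in a real system, a code model), the candidate patch is applied to a sandboxed copy of the repository, and it is kept only if the test suite still passes. All names here, including propose_patch, tests_pass, and self_fix_once, are placeholders invented for this example.

# Hypothetical scan -> patch -> verify loop. Not Anthropic code: the model
# call is abstracted behind propose_patch, a callable supplied by the caller.
import shutil
import subprocess
import tempfile
from pathlib import Path
from typing import Callable


def tests_pass(repo: Path) -> bool:
    """Run the project's test suite in repo; True if it exits cleanly."""
    result = subprocess.run(["pytest", "-q"], cwd=repo, capture_output=True)
    return result.returncode == 0


def self_fix_once(
    repo: Path,
    file_name: str,
    finding: str,
    propose_patch: Callable[[str, str], str],
) -> bool:
    """One iteration: patch the flagged file, verify in a sandbox, promote.

    propose_patch(source, finding) stands in for a code model that returns
    a rewritten version of source addressing the reported vulnerability.
    """
    target = repo / file_name
    original = target.read_text()
    patched = propose_patch(original, finding)

    # Apply the candidate patch to a throwaway copy of the repository,
    # so a bad rewrite never touches the real working tree.
    with tempfile.TemporaryDirectory() as tmp:
        sandbox = Path(tmp) / "repo"
        shutil.copytree(repo, sandbox)
        (sandbox / file_name).write_text(patched)
        if not tests_pass(sandbox):
            return False  # reject: the patch changed observable behaviour

    # The sandboxed run passed, so promote the patch to the working tree.
    target.write_text(patched)
    return True

A self-fixing system would wrap this in an outer loop driven by a vulnerability scanner, re-running the scan after each accepted patch until no findings remain; the "recursive" label refers to that loop. What the leak suggests is that the patch-proposing step can now be filled by the model itself, which is the same capability the internal warning frames as usable by attackers.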
WHY IT MATTERS:
- A new AI model tier has been confirmed above Opus, which will eventually raise the capability ceiling for every task AI is used for, including coding, reasoning, and security analysis.
- The model's cybersecurity capabilities are dual-use: they can help defenders find and close vulnerabilities faster, but they can equally help attackers exploit them at speed and scale.
- Recursive self-fixing suggests that the gap between AI and human software engineering capability in security contexts is narrowing faster than most organisations have planned for.
- Cybersecurity stocks including CrowdStrike, Palo Alto Networks, Zscaler, and Fortinet fell on the news, reflecting market uncertainty about how frontier AI models affect the existing security vendor landscape.
- 48% of cybersecurity professionals now rank agentic AI as the number one attack vector for 2026, according to a Dark Reading poll conducted in the same week as the leak.
- The fact that this model was disclosed through a security breach at Anthropic itself adds a layer of practical significance: AI companies are not immune to the risks they are building tools to address.

DAVID & GOLIATH ANALYSIS: The Mythos leak is a preview of a shift that was already underway. Frontier AI models have been growing more capable in cybersecurity contexts for two years. What the leaked documents confirm is that the pace of that development has accelerated significantly, and that Anthropic is far enough ahead of the public narrative that it felt it necessary to restrict early access entirely.

For operators, the immediate question is not whether to adopt Mythos. It is not available to most organisations and will be expensive when it is. The question is what a world with Mythos-level capabilities means for the security posture of businesses that cannot afford enterprise-grade defence tools. Attackers do not need general availability. They need access, and access to powerful models will find its way to bad actors well before it reaches most small and mid-sized businesses through official channels.

The practical recommendation is straightforward: treat this as a signal to review your security fundamentals now, before more capable attack tools are in wider circulation. Patch your systems. Audit your vendor access. Understand where your most sensitive data lives. And when Mythos or models like it do become available to defenders, get there early. In this particular race, the organisations that move first on defence will have a meaningful advantage.

RELEVANT SYSTEMS: Secure AI Brain, Employee Amplification Systems
SOURCE URL: https://davidandgoliath.ai/daily-ai-briefing/anthropic-mythos-leaked-step-change-model
FEED URL: https://davidandgoliath.ai/daily-ai-briefing/feed

---
Published by David & Goliath | https://davidandgoliath.ai
Daily AI Briefing: one AI development per day, decoded for business operators. This is a structured companion file optimised for LLM retrieval and citation.