The leak that repriced cybersecurity
Anthropic's accidental Mythos reveal crashed cybersecurity stocks — but the market was catching up to a reality that was already here. On the same day, CISA warned of active exploitation of AI agent frameworks, researchers disclosed basic vulnerabilities in LangChain's widely-used tooling, and IBM confirmed the first AI-generated malware in a live ransomware campaign. AI is simultaneously the most powerful offensive tool, the most attacked infrastructure, and the newest malware author — and the market just priced that triple threat in.
Fortune
Anthropic accidentally reveals Claude Mythos, a 'step change' model with unprecedented cyber capabilities
A CMS misconfiguration exposed nearly 3,000 unpublished assets including draft announcements for Claude Mythos, which Anthropic confirms is 'the most capable model we've built to date' and 'far ahead of any other AI model in cyber capabilities.'
fortune.com

Anthropic's content management system had a default setting that made uploads public. That's it. A checkbox. And it wiped billions off cybersecurity stocks in a single afternoon.
Fortune reported that nearly 3,000 unpublished assets leaked from Anthropic's CMS, including draft announcements for Claude Mythos, a model the company describes as 'far ahead of any other AI model in cyber capabilities'. The leaked materials reference a new 'Capybara' tier above Opus, with scores in coding, reasoning, and cybersecurity that suggest a step change, not an increment. Anthropic is restricting early access to cyber defence organisations while it evaluates the risks. The company blamed 'human error'. The market blamed the company.
But here's what bothers me: the sell-off treated Mythos as the news. It wasn't. Mythos was the catalyst for the market to price in a reality that already existed across three separate stories on the same day.
The triple threat
Start with the infrastructure. BleepingComputer reported that CISA added a critical Langflow vulnerability to its Known Exploited Vulnerabilities catalogue after attackers began hijacking AI pipelines within 20 hours of disclosure. No public exploit code existed at the time. The flaw, CVE-2026-33017 with a CVSS of 9.3, allows arbitrary Python code execution via a single crafted HTTP request because the flow execution environment runs unsandboxed. Compromised instances expose API keys for OpenAI, Anthropic, AWS, and any connected databases. Federal agencies have until 8 April to patch or stop using the product entirely.
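To make the severity concrete, here is a deliberately toy sketch of the failure mode CISA describes: a server that `exec`s attacker-supplied component code in its own process. This is not Langflow's actual code — the function, payload, and key are invented for illustration — but it shows why an unsandboxed execution environment hands over every secret the process can see.

```python
import os

# Hypothetical toy "flow runner" illustrating the general failure mode:
# executing user-supplied component code in the server process, unsandboxed.
def run_flow_component(source: str) -> dict:
    namespace: dict = {}
    exec(source, namespace)  # runs with the server's full privileges
    return namespace

# An attacker-controlled "component" can read anything the process can,
# including API keys held in environment variables.
os.environ["OPENAI_API_KEY"] = "sk-demo-not-real"  # stand-in secret
payload = "stolen = dict(os.environ)"              # crafted request body
ns = run_flow_component("import os\n" + payload)
print("OPENAI_API_KEY" in ns["stolen"])            # True -- keys exfiltrated
```

One crafted request, and every credential in the environment — OpenAI, Anthropic, AWS, database connection strings — walks out the door.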
Then look at the tooling. The Hacker News disclosed three separate vulnerabilities in LangChain and LangGraph: a path traversal that lets attackers read Docker configs through prompt-loading functionality, an SQL injection in LangGraph's SQLite checkpointer that exposes conversation histories, and a deserialisation flaw. These aren't exotic attack vectors. They're the kind of basic security failures that would get a junior developer pulled into a code review. Patches exist (langchain-core 1.2.22+, langgraph-checkpoint-sqlite 3.0.1+), but the pattern is telling: the most widely used AI agent frameworks shipped with vulnerability classes that predate the LLM era.
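Both of the first two bug classes have decades-old fixes. The sketch below shows the standard defences — canonicalise-and-check for path traversal, bound parameters for SQL injection. The function names, directory, and table schema are hypothetical, not LangChain or LangGraph code.

```python
import sqlite3
from pathlib import Path

PROMPT_DIR = Path("/srv/app/prompts").resolve()  # assumed allowed root

def load_prompt_path(name: str) -> Path:
    # Canonicalise first, then refuse anything that escapes the allowed
    # directory -- the standard defence against "../" traversal.
    candidate = (PROMPT_DIR / name).resolve()
    if not candidate.is_relative_to(PROMPT_DIR):  # Python 3.9+
        raise ValueError(f"path traversal blocked: {name!r}")
    return candidate

def fetch_checkpoint(conn: sqlite3.Connection, thread_id: str):
    # Parameterised query: thread_id is bound as data, never interpolated
    # into the SQL string, so injection payloads are inert.
    return conn.execute(
        "SELECT state FROM checkpoints WHERE thread_id = ?", (thread_id,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checkpoints (thread_id TEXT, state TEXT)")
conn.execute("INSERT INTO checkpoints VALUES ('t1', 'secret-history')")
# The classic injection string matches no row instead of dumping the table.
print(fetch_checkpoint(conn, "' OR '1'='1"))  # []
```

Neither technique is new, which is exactly the point: these frameworks didn't fall to novel AI-era attacks, they skipped defences that were textbook material twenty years ago.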
Finally, the offence. IBM X-Force confirmed the first known AI-generated malware used in a live ransomware campaign. The PowerShell backdoor, dubbed 'Slopoly' and deployed by ransomware group Hive0163, carries the unmistakable fingerprints of LLM generation: extensive code comments, structured logging, clearly named variables. The malware maintained persistent command-and-control access for over a week during a data exfiltration campaign. The initial breach came through a ClickFix social engineering lure; the AI-generated backdoor took over from there.
Put these four stories together and you get a triangle the cybersecurity industry hasn't faced before. AI is simultaneously the most capable offensive weapon (Mythos, Slopoly), the most targeted infrastructure (Langflow exploitation), and the source of basic engineering failures in the tools people use to build with it (LangChain). That's not three separate problems. It's one problem with three faces.
The way I see it, the market correction on cybersecurity stocks was overdue but misdirected. Investors panicked about a model that isn't even publicly available yet while ignoring the fact that attackers are already exploiting AI infrastructure with yesterday's techniques. The threat from Mythos is theoretical. The threat from unsandboxed Python execution in production AI pipelines is happening now.
For anyone building on AI agent frameworks: patch today and audit your API key exposure. Stop assuming that popular open-source tooling has been through rigorous security review. The Langflow and LangChain disclosures make clear that it hasn't. The next model capable of finding these flaws won't need to be leaked. It'll just need access to the same public repositories your agents already depend on.
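As a starting point for that audit, a minimal version check against the patched releases named above might look like this. The package names are assumed to match their PyPI distributions, and the version parser is deliberately naive — any non-numeric suffix is treated conservatively as below the minimum.

```python
from importlib import metadata

# Patched minimums from the disclosures above; package names assumed to
# match their PyPI distribution names.
PATCHED = {
    "langchain-core": (1, 2, 22),
    "langgraph-checkpoint-sqlite": (3, 0, 1),
}

def parse(version: str) -> tuple:
    # Naive numeric parse: good enough for plain x.y.z release strings,
    # and conservative (flags for upgrade) on anything fancier.
    return tuple(int(p) for p in version.split(".")[:3] if p.isdigit())

def audit() -> list:
    findings = []
    for pkg, minimum in PATCHED.items():
        try:
            installed = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            continue  # not installed, nothing to patch
        if parse(installed) < minimum:
            wanted = ".".join(map(str, minimum))
            findings.append(f"{pkg} {installed} < required {wanted}")
    return findings

for finding in audit():
    print("UPGRADE:", finding)
```

Run it in every environment that touches production, not just the one on your laptop — the Langflow exploitation window was 20 hours, and stale staging deployments hold the same API keys.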
Read the original on Fortune
fortune.com