Anthropic's Claude Mythos: A Paradigm Shift Toward AI-Native Security

The series of announcements around Anthropic’s Claude Mythos Preview in April 2026 feels different in nature from previous model upgrades.

What these announcements make clear is that AI has moved past the stage of being a “convenient tool,” and has started to dramatically compress the timeline of attack and defense in cybersecurity, undermining many of the assumptions underneath current defenses.

Anthropic explained that Mythos Preview discovered “a number of critical vulnerabilities spanning major operating systems, major web browsers, and other important software.” The company decided against a general release and launched Project Glasswing as a limited, defense-oriented framework instead.

Separately, the UK’s AI Security Institute (AISI) reported that this is the first model to have completed a 32-stage enterprise network attack simulation end to end.

What is really at stake here is a very practical shift in strategy: how should defenders respond when the lead time between attack and defense is disappearing?


First, a Fact-Based Summary of What Happened

The headlines are dramatic, so let me first organize what has actually been published.

In Anthropic’s technical write-up, Mythos Preview is said to have autonomously found a 27-year-old bug in OpenBSD and a 16-year-old vulnerability in FFmpeg during testing. For the OpenBSD case, Anthropic notes that the cost of a specific run that led to the discovery was under $50 (Claude Mythos Preview | red.anthropic.com).

In AISI’s evaluation, on a 32-step enterprise network attack simulation called “The Last Ones” that mimics realistic attack chains, Mythos Preview completed the full chain in 3 out of 10 runs. Its average was 22 of 32 stages, beating the next-best model (Our evaluation of Claude Mythos Preview’s cyber capabilities | AISI Work).

Given these capabilities, Anthropic has held back a general release and is offering limited access centered on Glasswing participants and organizations responsible for critical software. Reuters has reported that Anthropic has limited access to roughly 40 companies and is in ongoing discussions with the US government.

The story has reached financial authorities as well. According to Reuters, US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell held an emergency meeting with major bank CEOs to brief them on the cyber risks associated with Mythos. In the UK, Bank of England Governor Andrew Bailey has said that financial authorities must urgently understand what this new AI model means.

The market reacted too. According to Reuters, on April 9 cybersecurity stocks such as Cloudflare, Okta, CrowdStrike, and SentinelOne fell 4.9% to 6.5%. What matters here is not simply that cybersecurity stocks were sold off — rather, the concern that AI itself is absorbing security functionality and eroding the value of existing security software has returned to the fore.


The Core of Mythos Shock Is the Compression of the Attack-Defense Cycle

What Mythos represents is not just a spec bump.

The process of “finding a vulnerability → verifying it can actually be abused → building the attack code (the exploit) that takes advantage of it” — which used to require deep expertise and a lot of time — is being automated by a combination of high-performance AI models and appropriate agent environments.

Vulnerabilities that skilled humans used to find through careful work can now be tried and evaluated in parallel, at high speed, by AI.

The change this brings is that the traditional reactive defense model — “wait for a vulnerability to be disclosed, then patch it” — no longer holds up on the time axis.

Once attackers can use AI to find and abuse vulnerabilities at scale, defenders cannot afford to wait for public disclosure. They have to run an active self-healing loop continuously: proactively find the vulnerabilities in their own systems and keep closing them, on their own initiative.

In other words, the future looks like this: waiting until you are attacked is too late. The only option is to keep finding and closing gaps in your own code and configuration continuously, before anyone else does.
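The self-healing loop described above can be sketched in a few lines. This is a minimal toy illustration, not a real scanner: `scan_for_vulnerabilities` and `apply_patch` are hypothetical stand-ins for AI-driven analysis and patch generation.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    component: str
    description: str
    patched: bool = False

def scan_for_vulnerabilities(codebase):
    # Stand-in: in practice an AI model would analyze code and config here.
    return [Finding("auth-service", "hardcoded credential")]

def apply_patch(finding):
    # Stand-in: an AI model would generate and validate a fix here.
    finding.patched = True
    return finding

def self_healing_cycle(codebase):
    """One pass of the proactive loop: find your own gaps, then close them."""
    findings = scan_for_vulnerabilities(codebase)
    return [apply_patch(f) for f in findings]

# In the Mythos-era model, this cycle runs continuously, not after disclosure.
results = self_healing_cycle("my-repo")
```

The point of the structure is that the trigger is a schedule you control, not a public advisory you wait for.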


Does This Mean Existing Security Companies Are No Longer Needed?

I do not think it is that simple.

One thing worth noting in the market’s reaction is that Cloudflare, CrowdStrike, Okta, and SentinelOne all fell together. Just looking at the short-term price reaction, it is hard to say the market has clearly separated winners from losers.

The more important point is this: whether a product defends at the software layer (EDR-style) or closer to the network edge and physical layer (Cloudflare-style), no type of security company is outside the same AI-driven pressure. Both face the same underlying question — can they redesign themselves on AI-native terms, assuming that attackers will use AI to speed up their work? In the next sections, I look at what specifically happens in each of these two structures.

“Software-level detection” alone becomes harder to differentiate on

Let me take CrowdStrike as an example.

EDR (Endpoint Detection and Response) products like CrowdStrike continuously record and monitor behavior on endpoints and workloads, detect suspicious activity, and respond to it. CrowdStrike itself describes EDR as “technology that continuously monitors activity on endpoints, surfaces hard-to-see threats, and responds to them” (What is EDR? Endpoint Detection & Response Defined | CrowdStrike).

This role does not disappear just because Mythos exists. Actual intrusions do not come only from code vulnerabilities — they also involve credential theft, privilege abuse, living-off-the-land techniques, configuration changes, lateral movement, and so on. Attacks that only become visible at runtime will still be there.

However, the piece of CrowdStrike’s portfolio most exposed to pressure is the upstream workflow of pre-identifying vulnerabilities and exposures, prioritizing them, and driving remediation. CrowdStrike’s Falcon Exposure Management product highlights attack surface visibility, AI-driven prioritization, and remediation support.

This is exactly where Mythos comes in.

Anthropic says Mythos Preview found zero-days across major OSes and major browsers, and autonomously discovered a 27-year-old OpenBSD bug and a 16-year-old FFmpeg vulnerability.

In other words, the knowledge-intensive upstream work — vulnerability search, code analysis, attack path discovery, and patch generation — is becoming something that is not exclusive to security software anymore.

And there is another layer beyond that.

Even if EDR itself remains necessary, once Mythos-class AI is available to attackers, they can rapidly test “what behavior triggers EDR” and “how to reshape an attack to slip past it.” EDR then has to become a product that assumes it will be evaded, and learns and updates faster in response.
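That attacker-side testing loop can be made concrete with a toy sketch. Here a trivial string-matching check stands in for real EDR telemetry, and `mutate` stands in for AI-generated attack reshaping — both are illustrative assumptions, not how any real product works.

```python
def edr_detects(behavior):
    """Toy detector: flags behaviors containing a known-bad token."""
    return "mimikatz" in behavior or "encoded-powershell" in behavior

def mutate(behavior):
    """Toy attacker step: obfuscate whatever the detector keys on."""
    return behavior.replace("mimikatz", "m1m1k4tz").replace(
        "encoded-powershell", "split-script")

def evasion_search(behavior, max_tries=10):
    """Attack-side loop: reshape the behavior until the detector misses it."""
    for attempt in range(max_tries):
        if not edr_detects(behavior):
            return behavior, attempt
        behavior = mutate(behavior)
    return behavior, max_tries

evaded, tries = evasion_search("run mimikatz then encoded-powershell")
```

The asymmetry is the point: once this loop is cheap to run at scale, static detection rules decay quickly, which is why the defender's rules have to update on the same cadence.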

The Mythos shock is not a story of CrowdStrike’s value going to zero. It is a story that pressures companies like CrowdStrike to redesign their products, on an AI-first basis, faster than before.

What about security closer to the physical layer — Cloudflare-style companies?

What about security that sits at the network entry point or edge, like Cloudflare’s products? I do not think that area is safe either.

Companies like Cloudflare can filter requests, enforce authentication, and apply rate-limiting or blocking at the edge through WAFs and Gateway products. Cloudflare’s own reference architecture shows WAFs inspecting requests at the edge and Gateway controlling HTTP and network traffic as it passes through.

At the end of the day, the decisions — is this traffic suspicious, is this a bot or a human, is this a valid-looking but illegitimate access — depend on Cloudflare’s own software capabilities.

In that sense, Cloudflare-style companies are not outside the Mythos shock either. If Mythos-class AI starts testing attack techniques at scale and rapidly learning patterns that bypass WAF, bot detection, and Gateway decisions, then even software running close to the physical layer will be forced into the same arms race.

In short, both software that protects on the endpoint and software that protects at the network entry are under the same AI-driven sophistication-and-evasion pressure.


Defense Going Forward Will Look More Like “AI vs. AI”

So how should we respond?

In an earlier post — “A New World Powered by Multi-AI Agents — Toward an Era Where AIs Review, Complement, and Negotiate with Each Other” — I wrote about a GAN-style (Generative Adversarial Networks) approach, which I think is worth considering as a defense strategy.

The idea there was to separate the “generating side” and the “breaking side” and pit them against each other so the whole system gets stronger. To be clear, this is not about literally using a GAN as a training mechanism — it is about borrowing the core idea behind it: AIs in different roles competing with each other, and the whole system getting sharper through that competition.

Applied to security, the Generator is the defender’s AI. The security company’s AI reads code, produces patches, corrects configurations, and generates mitigations.

The Discriminator is a Mythos-class attack model. It looks for attack paths and vulnerability exploits that the defender has not thought of, and tries to actually break through.

If a gap is found, the Generator fixes it, and the Discriminator goes after the fix. By running this loop repeatedly, defense turns from a static rulebook into a dynamic system that gets hardened through adversarial interaction.
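The Generator/Discriminator loop above can be sketched as follows. The stand-in functions are deliberately trivial — a real Discriminator would be a Mythos-class attack model and a real Generator a patch-producing defense model — but the control flow is the part the sketch is meant to show.

```python
import random

random.seed(0)  # make the toy run deterministic

def discriminator_find_gap(open_gaps):
    """Attack-AI stand-in: probe the system for any remaining weakness."""
    return random.choice(sorted(open_gaps)) if open_gaps else None

def generator_patch(open_gaps, gap):
    """Defense-AI stand-in: produce a fix that closes the discovered gap."""
    open_gaps.discard(gap)

def adversarial_loop(open_gaps, max_rounds=100):
    """Alternate attack and defense until a full probe finds nothing."""
    for round_no in range(1, max_rounds + 1):
        gap = discriminator_find_gap(open_gaps)
        if gap is None:
            return round_no  # the defense survived a complete probe
        generator_patch(open_gaps, gap)
    return max_rounds

gaps = {"sql-injection", "path-traversal", "weak-session-token"}
rounds = adversarial_loop(gaps)
```

Termination here means the attacker's current search came up empty — in practice the loop never truly ends, because both sides keep improving.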

Anthropic itself positions Glasswing as “defensive use for protecting critical software.” What AISI’s evaluation and Anthropic’s own technical writeup point to is that Mythos-class models can significantly accelerate the discovery of unknown vulnerabilities and attack paths.

That means defenders will have to move to the side that runs a defensive AI continuously, assuming an attack AI is out there.

The important shift is this: assume a Mythos-class model will be used for attacks, and field your own attack-style AI on the defense side first. The defensive AI produces patches, the attacking AI tries to break them, and when it succeeds, the defender fixes the gap again. If you can run this loop continuously and dynamically, a change like the Mythos shock starts to look like something you can actually push back against.


Conclusion

At its core, the Mythos shock is the fact that the timelines for vulnerability discovery, attack chaining, patching, and monitoring have all been compressed dramatically by AI.

This change is turning cybersecurity from a story about defensive software into a theme that pulls in AI operations, software quality, infrastructure control, and even the stability of the financial system as a whole.

What carries value in AI-era security, then, is not the company that simply raises alerts, but the company that can find, fix, and keep controlling issues faster than the attacker.

But reaching that state is not just a matter of “adding AI” to existing products. What is needed is the kind of GAN-style thinking described above — an AI-native form of security where defensive AI and attacking AI are continuously pitted against each other, and defense itself is hardened through that interaction.

The defender’s AI fixes code, adjusts configurations, and generates patches. A Mythos-class attacker AI is thrown at it, including techniques the defender never considered. When the attacker breaks through, the defender fixes it immediately and runs the attacker again. Whether you can keep this dynamic loop going is, I think, what will determine defensive strength from here.

The Mythos shock was, to me, a fairly clear signal that the very way we think about security needs to be rebuilt on AI-native terms.
