In partnership with

The Key to This $240B Market Is in Your Bloodstream

Every year, $240B is spent on treating the symptoms of osteoarthritis. But not a single therapy has been able to actually stop it. The answer, it turns out, has been inside us all along.

A startup named Cytonics discovered the human body already produces a protein designed to protect cartilage. It just doesn’t produce enough where it's needed most. So Cytonics harnessed it. 

Their first-generation therapy has already treated 10,000+ patients. Now they've engineered a 200% more potent, mass-producible version pushing toward FDA approval.

If approved, it could be the first therapy to actually halt cartilage destruction and promote regrowth in a market that has never had a real solution. Invest at the pre-clinical stage before March 26 to receive time-sensitive early-investor bonuses.

This is a paid advertisement for Cytonics' Regulation CF offering. Please read the offering circular at https://cytonics.com/

Project Glasswing
12 companies. 1 unreleased model. Same day.

Claude Mythos Preview launch partners: AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan, the Linux Foundation, Microsoft, NVIDIA, Palo Alto Networks, and Anthropic. $100M committed · 90-day report due July.

Today Anthropic did something unusual.

They announced an unreleased frontier model called Claude Mythos Preview, and on the same day, twelve of the biggest companies in technology signed on as launch partners. AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, Palo Alto Networks, and Anthropic itself.

Coordinated launches at this scale don't happen by accident. When CrowdStrike, Microsoft, and Google all show up to the same press release on the same day, something has shifted. And when the press release is about an AI model that doesn't even have a public release date, the shift is bigger than the model itself.

The reason they all moved at once lies in three specific findings that Anthropic published alongside the announcement. Each finding is more striking than the last. Together, they describe the moment AI became the best vulnerability hunter in the world.


🔍 Finding #1

A 27-year-old vulnerability in OpenBSD

OpenBSD has a reputation as the most security-hardened operating system on Earth. It is what runs firewalls, VPN endpoints, and the kind of critical infrastructure where a single bug can mean catastrophic failure. The OpenBSD project has a culture built around adversarial code review. Every commit is scrutinized. Every line of network-facing code has been read by some of the best security engineers alive, in many cases multiple times across multiple decades.

Mythos found a vulnerability that had been sitting in OpenBSD's code for twenty-seven years. The bug allows an attacker to remotely crash any machine running OpenBSD just by connecting to it. No credentials. No exploit chain. Connect to the right port and the machine goes down.

Twenty-seven years of human review by some of the best security engineers in the world. One AI model, working autonomously, found it.

The bug has been disclosed to the OpenBSD maintainers and patched. Anthropic isn't releasing the technical details until the fix has propagated to all production systems, which is the responsible disclosure norm. But the fact that this bug existed at all, in this codebase, for this long, tells you something about where AI capabilities have arrived. The hardest target in operating system security held a 27-year-old vulnerability that didn't survive its first contact with a frontier model.
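Because the technical details are embargoed, any illustration has to stay generic. Here is a minimal, entirely hypothetical Python sketch of the bug class (a network-facing parser that trusts an attacker-controlled length field), not the actual OpenBSD bug:

```python
# Hypothetical sketch of a pre-auth remote-crash bug class.
# This is NOT the actual OpenBSD bug; those details are embargoed.

def parse_packet(data: bytes) -> bytes:
    """Parse a toy length-prefixed packet: [1-byte length][payload]."""
    if not data:
        raise ValueError("empty packet")
    claimed_len = data[0]
    payload = data[1:]
    # Bug: trusts the attacker-controlled length field. When claimed_len
    # exceeds the real payload, this index runs past the end and raises,
    # and a daemon with no handler for it simply dies.
    checksum_byte = payload[claimed_len - 1]
    return payload[:claimed_len]
```

One malformed packet, such as `parse_packet(bytes([5]) + b"ab")`, is enough to take the hypothetical daemon down, with no credentials involved. That is the shape of "connect to the right port and the machine goes down."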


🔍 Finding #2

A 16-year-old vulnerability in FFmpeg

FFmpeg is the video codec library that almost every piece of software on Earth uses to encode and decode video. Streaming services depend on it. Browsers depend on it. The video upload pipeline at most companies passes through FFmpeg at some point. It is one of the most-tested pieces of open source software in existence, with continuous fuzzing harnesses that have been running against it for over a decade.

Mythos found a bug in a single line of FFmpeg code that automated testing tools had executed five million times without ever catching the problem.

One line of code in FFmpeg, a library used everywhere. Sixteen years and roughly 5,000,000 automated test executions: AFL fuzzing, static analysis, code review, CI/CD. Mythos Preview found the bug in one run.
Read that again. Five million automated test executions. Sixteen years of production use. Continuous fuzzing by AFL. Static analysis tools. Continuous integration. Code reviews by paid maintainers. None of it caught this bug. One run with Mythos found it.

What that tells you is that automated testing and frontier-AI vulnerability discovery are not the same thing. Fuzzers are random input generators with coverage guidance. They find bugs that get triggered by mutating inputs in mechanical ways.

Mythos appears to be doing something different. It is reading the code, building a mental model of what the code is trying to do, and reasoning about where that intent breaks down. That is a fundamentally more powerful technique, and it is what the next decade of security tooling is going to look like.
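The difference is easy to demonstrate with a toy. Below is a hypothetical parser (invented for illustration, not FFmpeg code) whose crash hides behind a checksum guard: coverage-guided random mutation almost never satisfies the guard, so huge numbers of executions miss the bug, while anyone reading the code spots the division immediately:

```python
import random

def decode(data: bytes) -> int:
    """Toy decoder: [payload bytes][1-byte checksum]. Hypothetical format."""
    if len(data) < 2:
        return 0
    payload, check = data[:-1], data[-1]
    # Guard: random mutation matches this checksum with probability
    # roughly 1/256 per attempt; real format checksums are far stricter.
    if sum(payload) % 256 != check:
        return 0
    # The bug: division by an attacker-controlled byte. A fuzzer must
    # satisfy the checksum AND set payload[0] == 0 to reach it.
    return 1000 // payload[0]

def fuzz(trials: int = 10_000, seed: int = 0) -> int:
    """Blind byte mutation: count how often the crash is ever triggered."""
    rng = random.Random(seed)
    crashes = 0
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(2, 8)))
        try:
            decode(data)
        except ZeroDivisionError:
            crashes += 1
    return crashes
```

A directed input like `decode(b"\x00\x00")` (a zero payload byte with a matching checksum) crashes instantly, while random mutation hits that branch on the order of once per 65,000 attempts. Reasoning about intent finds in seconds what blind mutation finds almost never.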


🔍 Finding #3

An autonomous Linux kernel privilege escalation chain

The Linux kernel runs most of the world's servers. A privilege escalation bug in the kernel is the most valuable kind of vulnerability there is, because it converts ordinary user access into complete machine control. Finding one such bug is hard. Chaining multiple bugs together to defeat the kernel's defense-in-depth protections is harder. Doing it without human guidance has historically been the boundary between human security researchers and automated tooling.

Mythos crossed that boundary. It found multiple vulnerabilities in the Linux kernel and chained them together by itself, escalating from ordinary user access to full root.

Autonomous privilege escalation in the Linux kernel, with no human steering:

User (ordinary access) → Vuln 1 (kernel race condition) → Vuln 2 (memory corruption) → Vuln 3 (privilege bypass) → Root (full machine control)

Mythos found, chained, and weaponized the bugs autonomously. (Source: Anthropic Frontier Red Team writeup.)

The phrase to focus on is "by itself." This was not a researcher giving Mythos hints. Not a human suggesting which subsystems to look at. Not a chain assembled by a security expert with the model filling in details. The model was given access to the Linux kernel source and a goal. It found three separate bugs, recognized that the bugs could be combined, designed the chain, and produced working exploit code.

This is the line the security industry has been watching for. Autonomous discovery of single bugs is impressive but not new. Autonomous chaining of multiple bugs into a working exploit, against a target as scrutinized as the Linux kernel, is the threshold capability that changes how every defender has to think about their exposure. If Mythos can do this for the Linux kernel, the assumption has to be that future models can do it for any codebase a defender owns. Which means defenders now have to either get access to comparable tools or accept that they are behind.
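One way to see why chaining is the threshold capability is to model an exploit as a sequence of capability transitions: each bug is only useful given the state the previous bug produced. Here is a toy Python model of that structure; the primitive names follow the announcement's diagram, but the mechanics are invented for illustration:

```python
# Toy model of exploit chaining as capability transitions.
# Names follow the announcement's diagram (race condition, memory
# corruption, privilege bypass); all mechanics here are invented.

def race_condition(state: set) -> set:
    """Vuln 1: from ordinary user access, win a race for a dangling handle."""
    if "user" not in state:
        raise PermissionError("need user access first")
    return state | {"dangling_handle"}

def memory_corruption(state: set) -> set:
    """Vuln 2: use the dangling handle to corrupt a kernel structure."""
    if "dangling_handle" not in state:
        raise PermissionError("no primitive to corrupt with")
    return state | {"arbitrary_write"}

def privilege_bypass(state: set) -> set:
    """Vuln 3: use the write primitive to defeat the credential check."""
    if "arbitrary_write" not in state:
        raise PermissionError("cannot reach credentials")
    return state | {"root"}

def chain(state: set) -> set:
    """Each step consumes the capability the previous step produced."""
    for step in (race_condition, memory_corruption, privilege_bypass):
        state = step(state)
    return state
```

`chain({"user"})` ends in a state containing `"root"`, while calling `privilege_bypass({"user"})` alone fails. That is the point: no single bug suffices, and recognizing that three unrelated bugs compose into a working chain is the part that used to require a human researcher.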




The benchmark numbers tell the same story

Mythos is not a marginal improvement over Opus 4.6, the previous flagship Claude model. It is a step change. The gap is large enough to suggest Mythos is doing things Opus 4.6 simply cannot, not just doing the same things better.

Benchmark                        Mythos    Opus 4.6
CyberGym (vuln reproduction)     83.1%     66.6%
SWE-bench Verified               93.9%     80.8%
Terminal-Bench 2.0               82.0%     65.4%

A 16.5-point jump on CyberGym is the number to watch. CyberGym measures the model's ability to reproduce known vulnerabilities given a description of what to look for. That is the closest public benchmark we have to "can this model find security bugs the way a human researcher would," and Mythos is now within striking distance of expert human performance on it.
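Raw percentage points also understate the gains, because benchmarks near saturation leave little headroom. A quick sanity check of the gaps, using only the numbers from the table above:

```python
# Scores from the benchmark table above (percent): (Mythos, Opus 4.6).
scores = {
    "CyberGym (vuln reproduction)": (83.1, 66.6),
    "SWE-bench Verified": (93.9, 80.8),
    "Terminal-Bench 2.0": (82.0, 65.4),
}

for name, (mythos, opus) in scores.items():
    gap = mythos - opus                     # percentage points gained
    closed = gap / (100 - opus) * 100       # share of Opus's errors eliminated
    print(f"{name}: +{gap:.1f} pts; closes {closed:.0f}% of the remaining error")
```

On CyberGym, Mythos eliminates roughly half of Opus 4.6's remaining failures, and on SWE-bench Verified roughly two thirds, which is the arithmetic behind calling this a step change rather than a refresh.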


The thing nobody is talking about yet

Anthropic is not releasing Mythos generally. This is the most important sentence in the entire announcement, and almost no coverage has highlighted it. Instead of a public API launch, Anthropic is committing $100M in API credits to the 12 launch partners plus 40 additional critical-infrastructure organizations, with access restricted to defensive use only, along with $4M in direct donations to open-source security maintainers.

After the preview ends, Mythos will be priced at $25 per million input tokens and $125 per million output tokens. That is roughly 1.7x the price of Opus 4.6. The pricing alone tells you Anthropic considers Mythos a different class of model, not a refresh. Frontier labs do not raise prices on routine releases. They raise prices when they have a capability tier that justifies it.
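At those rates, cost scales linearly with token volume, and output tokens dominate. A minimal cost sketch using the announced Mythos prices; the audit workload numbers are made up for illustration:

```python
# Announced Mythos pricing (USD per million tokens).
INPUT_PER_M = 25.0
OUTPUT_PER_M = 125.0

def mythos_cost(input_tokens: int, output_tokens: int) -> float:
    """USD cost of one workload at the announced Mythos rates."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# Hypothetical audit: 40M tokens of source code in, 5M tokens of findings out.
audit = mythos_cost(40_000_000, 5_000_000)
print(f"${audit:,.2f}")  # 40 * $25 + 5 * $125 = $1,625.00
```

Even at the premium tier, scanning an entire large codebase costs on the order of a few thousand dollars, which is part of why the "defenders deploy it first, at scale" framing is economically plausible.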

In Microsoft's quote, Igor Tsyganskiy mentions Mythos showed "substantial improvements" against CTI-REALM, Microsoft's open-source security benchmark. The actual numbers aren't in the public post.

That benchmark detail is probably the most interesting thing the announcement didn't fully explain. CTI-REALM is Microsoft's internal benchmark for cyber threat intelligence, and it covers reasoning about adversary techniques, attack chains, and incident response, not just raw bug discovery. If Mythos is also strong on CTI-REALM, then the model is not just a vulnerability hunter. It is also a defender's analyst. That is a much bigger claim than the one Anthropic led with, and it is where the next few weeks of independent analysis will land.


The framing matters more than the model

Anthropic is not selling this as "look how powerful our model is." They are framing it as a race condition: the same capabilities that make Mythos dangerous in adversarial hands are what make it the best defensive tool ever shipped, and the only way to come out ahead is for defenders to deploy it first, at scale, before equivalent capabilities proliferate.

This framing is doing two things at once. First, it gives the launch partners a rationale for moving quickly. None of these companies wants to be the one that didn't deploy AI-augmented defense the year their competitors did.

Second, it positions Anthropic as the responsible actor. They are not selling Mythos to the highest bidder. They are giving it away in API credits, to a curated list, for a curated purpose, with a public report due in 90 days. That is a different kind of release than the industry has seen before, and it sets a template that other labs are going to be pressured to follow.

The 90-day report is the thing to actually wait for. That report will tell you which kinds of vulnerabilities Mythos is and is not finding, which codebases are getting hardened, and what the early production deployment looks like for AI-augmented defensive security. If you only read one thing about Project Glasswing, read that report when it lands in July.

For the cyber side of AI, today is the inflection point we have been told was coming for two years. It arrived as a coordinated coalition press release, not as a research paper. That detail is the most honest signal of where the field actually is right now.


Read the full Project Glasswing announcement →

Read the Anthropic Frontier Red Team technical writeup →

Read the Claude Mythos Preview system card →

That's it for today.

ResearchAudio.io · Daily AI research, decoded
