When Anthropic keeps its most advanced AI model under wraps, enterprising developers take matters into their own hands. Kye Gomez has introduced OpenMythos on GitHub—an open-source attempt to decipher the workings of Claude Mythos. Garnering over 10,000 stars in a matter of weeks, the repository features an extensive ‘readme’ with equations and citations, accompanied by a disclaimer asserting its independence from Anthropic.
The effort is speculative, but it is grounded in concrete, runnable code rather than pure conjecture.
A bit of context: Mythos came to light in late March when draft materials mistakenly revealed it as Anthropic’s premier model, surpassing Opus. A preview version, Mythos Preview, demonstrated exceptional cybersecurity prowess during Mozilla tests, discovering 271 Firefox vulnerabilities and completing a complex corporate attack simulation. Anthropic secured the AI within Project Glasswing, partnering with industry giants like Microsoft, Apple, Amazon, and the NSA.
Since public access remains off-limits, Gomez endeavored to decode its mechanics. OpenMythos postulates that Mythos utilizes a Recurrent-Depth Transformer—or looped transformer—characterized by repeated cycles through a smaller stack of layers in each forward pass. This approach facilitates deeper processing within continuous latent space before generating tokens.
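The looped-transformer idea can be sketched in a few lines: instead of stacking many distinct layers, a small shared stack is applied repeatedly, deepening computation in latent space without adding parameters. The sketch below is illustrative only; all layer counts, sizes, and the loop count are assumptions, not details confirmed for Mythos.

```python
# Minimal sketch of a recurrent-depth ("looped") transformer, the
# architecture OpenMythos hypothesizes for Mythos. All hyperparameters
# here are illustrative assumptions.
import torch
import torch.nn as nn

class LoopedTransformer(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=2, n_loops=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        # A small shared stack of layers...
        self.stack = nn.TransformerEncoder(layer, num_layers=n_layers)
        # ...iterated several times per forward pass.
        self.n_loops = n_loops

    def forward(self, x):
        # Reusing the same weights each pass deepens processing in
        # continuous latent space before any token is emitted.
        for _ in range(self.n_loops):
            x = self.stack(x)
        return x

model = LoopedTransformer()
h = model(torch.randn(2, 16, 64))  # (batch, seq, d_model)
```

The key trade-off is visible in the loop: effective depth is `n_layers * n_loops`, while parameter count stays at `n_layers`, which is the composition-over-storage property the repo attributes to Mythos.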
The repository suggests this architecture explains Mythos’s unique abilities: tackling unprecedented challenges while displaying inconsistent memorization, indicative of looping architecture that prioritizes composition over raw storage capacity.
OpenMythos references the Parcae paper from UC San Diego and Together AI, which addressed looped-model training instability in April 2026 and showed that a looped model can match a fixed-depth transformer’s quality and scaling predictability with fewer parameters. The repo also incorporates DeepSeek’s Multi-Latent Attention for memory compression and Mixture-of-Experts to handle different domains.
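The Mixture-of-Experts component is the most self-contained of these techniques to illustrate: a router assigns each token to its top-k expert networks and blends their outputs. This is a generic top-2 MoE sketch under assumed sizes, not code from the OpenMythos repo.

```python
# Generic top-k Mixture-of-Experts layer, one of the public techniques
# OpenMythos incorporates. Sizes and k are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=32, n_experts=4, k=2):
        super().__init__()
        # The router scores each token against every expert.
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts))
        self.k = k

    def forward(self, x):  # x: (tokens, d_model)
        weights, idx = torch.topk(
            F.softmax(self.router(x), dim=-1), self.k, dim=-1)
        out = torch.zeros_like(x)
        # Each token is processed only by its k chosen experts, so most
        # parameters stay inactive on any given token.
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    w = weights[mask, slot].unsqueeze(-1)
                    out[mask] += w * expert(x[mask])
        return out

moe = TopKMoE()
y = moe(torch.randn(6, 32))
```

Sparse routing is what lets an MoE model hold many domain specialists while keeping per-token compute close to that of a much smaller dense model.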
However, OpenMythos lacks the actual weights, rendering it more of a theoretical framework than a functioning model.
The code outlines configurations ranging from 1 billion to 1 trillion parameters, but users must do the training themselves: the readme points to a script for training on FineWeb-Edu with a Chinchilla-inspired token target, a run that would demand substantial compute, potentially costing hundreds of thousands of dollars on H100s. No one has attempted it yet.
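The scale of that commitment follows from standard back-of-envelope arithmetic: the Chinchilla heuristic of roughly 20 training tokens per parameter, and the common estimate of 6 × N × D FLOPs for dense training. These constants are rules of thumb from the literature, not figures taken from the OpenMythos readme.

```python
# Back-of-envelope training budget using two common heuristics:
# Chinchilla-optimal tokens ~ 20 x parameters, and training compute
# ~ 6 x N x D FLOPs. Both are rough rules of thumb, not repo figures.
def training_budget(n_params: float) -> dict:
    tokens = 20 * n_params           # Chinchilla-style token target
    flops = 6 * n_params * tokens    # standard dense-training estimate
    return {"tokens": tokens, "flops": flops}

for n in (1e9, 70e9, 1e12):
    b = training_budget(n)
    print(f"{n:.0e} params -> {b['tokens']:.1e} tokens, "
          f"{b['flops']:.2e} FLOPs")

b1 = training_budget(1e9)
```

Even the 1-billion-parameter configuration implies tens of billions of tokens; at the trillion-parameter end the FLOP count grows quadratically with model size, which is why the readme’s largest targets sit far beyond hobbyist budgets.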
Why is this significant? OpenMythos follows closely on the heels of a Vidoc Security study that replicated several Mythos vulnerability findings with GPT-5.4 and Claude Opus 4.6, all within an open-source agent framework at minimal cost. Both instances suggest Anthropic’s protective measures around Mythos may not be as robust as portrayed.
Vidoc focused on replicating Mythos outputs—vulnerability discoveries—using existing models. In contrast, OpenMythos aims to replicate the architecture behind those outputs, proposing that one could eventually construct a model akin to Mythos independently.
Anthropic has not confirmed any of Gomez’s architectural assumptions, and many of OpenMythos’s design choices are explicitly hedged with terms like “likely” and “suspected.” The real Mythos could differ substantially, or rest on details no one outside Anthropic knows.
OpenMythos highlights that key components for constructing a model comparable to Mythos exist in public research. It compiles known methodologies such as looped transformers, Mixture of Experts, Multi-Latent Attention, Adaptive Computation Time, and Parcae’s stability solutions—none proprietary. The repository serves more as an inventory of publicly accessible knowledge on building a Mythos-like AI.
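Of the catalogued techniques, Adaptive Computation Time is the one that pairs most naturally with a looped architecture: a learned scalar lets each input decide how many extra refinement steps to spend before halting. The cell below is a simplified, generic ACT sketch; its layers, step limit, and threshold are illustrative assumptions, not details from the repo.

```python
# Simplified Adaptive Computation Time (ACT)-style halting loop, one of
# the public techniques OpenMythos catalogs. Sizes, the step cap, and
# the halting threshold are illustrative assumptions.
import torch
import torch.nn as nn

class ACTCell(nn.Module):
    def __init__(self, d_model=16, max_steps=8, threshold=0.99):
        super().__init__()
        self.update = nn.Linear(d_model, d_model)
        self.halt = nn.Linear(d_model, 1)  # learned per-step halt score
        self.max_steps, self.threshold = max_steps, threshold

    def forward(self, x):  # x: (batch, d_model)
        halted = x.new_zeros(x.shape[0])   # cumulative halting probability
        out = torch.zeros_like(x)
        for _ in range(self.max_steps):
            x = torch.tanh(self.update(x))
            p = torch.sigmoid(self.halt(x)).squeeze(-1)
            still = (halted < self.threshold).float()
            # Accumulate a probability-weighted mix of intermediate states.
            out = out + (still * p).unsqueeze(-1) * x
            halted = halted + still * p
            if (halted >= self.threshold).all():
                break  # every input has spent enough "pondering" steps
        return out

cell = ACTCell()
z = cell(torch.randn(5, 16))
```

Combined with a looped stack, this kind of halting mechanism would let hard inputs loop more times than easy ones, which is exactly the variable-depth behavior the repo speculates about.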
Licensed under MIT, OpenMythos has attracted 2,700 forks, with the training script waiting for someone with enough compute to put the theory to the test.