During an interview on the Core Memory podcast with tech journalist Ashlee Vance, OpenAI CEO Sam Altman criticized rival Anthropic for allegedly employing ‘fear-based marketing’ in promoting their new AI model, Claude Mythos. He suggested this strategy aims to limit AI accessibility to a select few by invoking safety concerns.
Altman highlighted that while some fears about AI safety are valid, using them as justification for restricted access can be problematic: “You could argue many angles, and some fears are genuine, but if the goal is ‘we need control over AI because we’re trustworthy,’ fear-based marketing becomes a persuasive tool,” he explained.
He further illustrated his point by comparing it to extreme scenarios: “It’s brilliant marketing to say, ‘We’ve created a bomb. We’re about to drop it on you. Buy our $100 million bomb shelter if you want protection—but only if we choose you.’”
Altman also mentioned the challenge of balancing AI capabilities with OpenAI’s philosophy that technology should be broadly accessible.
Recently unveiled, Anthropic’s Claude Mythos has attracted significant attention from researchers, governments, and the cybersecurity sector. It demonstrated its ability to autonomously identify software vulnerabilities and simulate complex cyber operations during tests. The model is currently restricted to a select group of organizations through Project Glasswing.
The debate over controlled versus widespread distribution of AI systems like Claude Mythos reflects broader industry divisions. Anthropic portrays the model as both a defensive tool for detecting critical flaws and a potential risk if misused, citing its identification of hundreds of vulnerabilities in Mozilla’s Firefox browser during testing.
Access is limited to select entities including Amazon, Apple, and Microsoft under Project Glasswing, while Anthropic supports open-source security efforts. The company has also pointed out that many existing AI evaluation systems are inadequate for assessing the capabilities of Claude Mythos.
Despite some researchers replicating Mythos’ findings with publicly available models, concerns about its potential use in warfare or surveillance have led some in the U.S. government to call for a halt to the technology. However, the National Security Agency is reportedly testing an early version on classified networks. Prediction market Myriad estimates a 49% chance of Claude Mythos being released by June 30.
Altman anticipates increased rhetoric around dangerous AI systems as capabilities advance but advises caution in taking all claims at face value: “There will be more talk about models too hazardous to release, alongside genuinely risky ones that must be deployed differently,” he noted. Altman expressed confidence in OpenAI’s strategy for releasing such technologies.
Additionally, Altman refuted reports of OpenAI reducing its infrastructure investment, asserting the company’s commitment to expanding computing capacity: “I’m not sure where these stories originate… People seem eager to portray us as retreating, yet soon enough, they’ll criticize us again for reckless spending,” he remarked.