Major tech companies including Google, Microsoft, and xAI have agreed to allow the U.S. government early access to their advanced AI models for evaluation before public release. This development follows a report suggesting that President Donald Trump’s administration is contemplating an executive order on this topic.
The announcement was made by the Commerce Department’s Center for AI Standards and Innovation, which stated that these companies have consented to provide pre-release access to assess system capabilities. “Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications,” commented Chris Fall, director of the center. He added that these collaborations will scale their public interest work during a critical period.
A day earlier, The New York Times reported that discussions are underway about forming a working group, established via executive order, to oversee advanced AI models before they reach the market. White House officials reportedly met with executives from Anthropic, Google, and OpenAI last week to discuss oversight plans.
The talks were partly spurred by Anthropic’s recent disclosure of its Claude Mythos model, whose proficiency at identifying cybersecurity vulnerabilities raised national security concerns among officials. Instead of launching the model publicly, Anthropic provided limited access to select startups and organizations; Mozilla used it to identify 271 vulnerabilities in its Firefox browser.
On Myriad, a prediction market platform under Decrypt’s parent company Dastan, users currently see only a 13% chance that Claude Mythos will be broadly released by June 30.
The administration has also clashed with Anthropic over access to its models. In February, the dispute escalated after Anthropic refused a request for unrestricted access to its AI models, leading Defense Secretary Pete Hegseth to label the company a national security supply chain risk. A federal appeals court declined to halt that designation amid ongoing litigation, though Axios reported that the White House is reconsidering whether to restore its partnership with Anthropic.
The prospective executive order contrasts sharply with Trump’s previous stance on AI regulation, where he advocated for minimal industry oversight. “We’re going to make this industry absolutely the top because right now it’s a beautiful baby that’s born,” he stated last July, emphasizing the need for growth without regulatory constraints.
Since returning to office in 2025, Trump has undone Biden-era regulations that required AI developers to conduct safety evaluations and report models with military applications. On his first day back, he repealed a 2023 executive order from former President Joe Biden that required developers of potentially risky AI systems to share safety test results before public release.
More recently, the Trump administration proposed a national framework for AI regulation, aiming to set standards without establishing a new regulatory body. The federal initiative comes as states pursue their own AI rules, some of which, such as Colorado’s law against “algorithmic discrimination,” have faced pushback from federal agencies.