Politics

Trump weighs federal AI review before public model releases

Trump was weighing a federal review of new AI models before release, a move that could slow launches and shift power toward Washington.

Sarah Chen · 2 min read

President Donald Trump was weighing an executive order that would put the federal government inside the AI release process, creating a government working group and a formal review for new models before they reach the public. For a White House that has generally leaned toward lighter-touch tech regulation, the idea marked a sharp turn toward oversight, with the government moving from cheerleader to gatekeeper.

The discussions took on new urgency after White House officials met last week with executives from Anthropic, Google and OpenAI. The catalyst was Anthropic’s Claude Mythos Preview, which the company said had unusually strong capability to find and exploit software vulnerabilities. That kind of advance matters beyond Silicon Valley: it raises the stakes for cyber defense, gives Washington more reason to inspect frontier systems before release, and increases the odds that product timelines will be shaped by federal review as much as by engineering decisions.

AI-generated illustration

The proposal would sit somewhere between voluntary consultation and a licensing regime. In practice, the biggest frontier labs would gain a clearer path to federal sign-off, while smaller developers could lose one of their main advantages: speed. A review process could also give the government earlier visibility into safety and cybersecurity risks, including whether a model can be used to automate intrusion, accelerate exploitation or expose critical systems, before any public rollout.


The structure under discussion appeared to draw from the United Kingdom’s frontier-AI safety approach, where several government bodies help evaluate safety standards. The UK AI Security Institute describes itself as the first state-backed organization dedicated to advancing safe, secure and beneficial advanced AI. That model would give government a much more active role than the United States has taken so far, and it would also preserve some industry input by bringing executives and officials into the same oversight process.


The broader policy shift is already visible. The White House issued a national AI legislative framework on March 20, 2026, emphasizing innovation, national security and federal leadership. That same month, Trump appointed a 13-member AI advisory panel that included Mark Zuckerberg, Larry Ellison, Jensen Huang and Michael Dell, showing the administration was already willing to bring major industry figures into the policy orbit even as it considered tighter controls.

Donald Trump. Photo: Shealeah Craighead via Wikimedia Commons (public domain)

The cyber angle extends beyond model review. U.S. officials were also considering whether to shorten the time federal agencies get to patch critical vulnerabilities from two weeks to three days in response to AI-enabled threats. Taken together, the ideas suggest Washington is moving toward a more selective security regime for frontier AI, one that could give the federal government more leverage over the sector’s most powerful systems while blurring the line between safety policy and industrial policy.
