Microsoft, Google, xAI to share new AI models with U.S. government first
Microsoft, Google DeepMind and xAI will give U.S. officials early access to new models, letting Washington test frontier AI before the public sees it.

The federal government has secured a rare look inside the next generation of artificial intelligence before it reaches consumers. Microsoft, Google DeepMind and Elon Musk’s xAI agreed to share new models early with the U.S. Department of Commerce’s Center for AI Standards and Innovation, giving officials a chance to inspect systems for national-security risks before public release.
CAISI said it will use the access for pre-deployment evaluations and targeted research aimed at understanding frontier AI capabilities and improving AI security. The center, housed at the National Institute of Standards and Technology, said it has already completed more than 40 evaluations, including assessments of state-of-the-art models that had not yet been released. Its focus includes demonstrable national-security risks tied to cybersecurity, biosecurity and chemical weapons.

The arrangement marks a quiet but significant expansion of federal oversight. Instead of waiting for harmful uses to surface after deployment, Washington is trying to build a technical checkpoint in advance, one that can examine what powerful models can do, how they might be misused and where defensive safeguards still fail. That matters because the risks now extend well beyond misinformation or content moderation. Officials are increasingly concerned about cyber abuse, model theft, capability escalation and defense-related threats.
The timing fits a broader policy shift. The White House’s “Winning the AI Race: America’s AI Action Plan,” released on July 23, 2025, directed more than 90 federal actions on AI innovation, infrastructure and security. CAISI’s new agreements suggest that the administration is pushing for a more active federal role in AI development and testing, not just post hoc enforcement.

The companies involved are not the only ones in the tent. NIST said CAISI has worked with OpenAI and Anthropic as well, and reporting from Reuters has described those relationships as part of a growing federal testing regime. NIST also sought public comment on draft best practices for automated benchmark evaluations through March 31, 2026, and signed an AI evaluation science memorandum with the General Services Administration in March 2026 to support federal procurement and adoption. NIST has also published guidance on post-deployment monitoring, signaling that scrutiny is meant to continue after launch.

The deal is voluntary, which gives it both reach and limits. CAISI can deepen its access, compare models and develop scientific measurement standards, but it does not amount to a blanket licensing system. That leaves the central question intact: whether this new partnership is a meaningful check on frontier AI or another version of the industry self-regulation promises that have often sounded stronger than they proved to be.