EU welcomes OpenAI cybersecurity access offer, presses Anthropic to follow
Brussels praised OpenAI’s cyber access offer and singled out Anthropic’s narrower approach, turning AI security tools into a test of EU oversight.

The European Commission welcomed OpenAI’s offer to open up its cybersecurity features, but made clear that Brussels is looking for more than promises. The dispute now centers on what “open access” means in practice: not public release, but controlled access for vetted defenders, with guardrails meant to keep the same tools from being repurposed by attackers.
OpenAI said its Trusted Access for Cyber program was built on an identity- and trust-based framework and was being scaled to thousands of verified individual defenders and hundreds of teams responsible for protecting critical software. The company said it had committed $10 million in API credits to accelerate cyber defense, and that approved users could see lower rates of classifier-based refusals for authorized work such as vulnerability identification and triage, malware analysis, binary reverse engineering, detection engineering and patch validation. OpenAI also said it had already given GPT-5.4-Cyber access to the U.S. Center for AI Standards and Innovation and the UK AI Security Institute for evaluations.

Anthropic has moved into the same terrain, but more cautiously. In October 2025, the company said it had invested in improving Claude’s ability to help defenders detect, analyze and remediate vulnerabilities. More recently it released Claude Code Security as a limited research preview for Enterprise and Team customers, with expedited access for maintainers of open-source repositories. Anthropic’s Project Glasswing also ties its cyber push to a broad coalition of partners, including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA and Palo Alto Networks.
For Brussels, the stakes go beyond one company’s product roadmap. The European Union’s AI Act is the bloc’s first comprehensive legal framework on AI, and the Commission says obligations for general-purpose AI models entered into application on 2 August 2025. Enforcement powers for newer models begin on 2 August 2026, while older models placed on the market before August 2025 fall under enforcement from 2 August 2027. The Commission’s guidance says the most advanced systems may pose systemic risks and face duties including model evaluations, incident reporting and cybersecurity protections.

The cyber debate also sits inside a wider policy push. On 20 January 2026, the Commission proposed a new cybersecurity package, warning that the threat landscape had worsened and that state actors were among the players developing more sophisticated offensive capabilities. In that setting, OpenAI’s willingness to expand verified access gives Brussels a working example of the model it appears to want: not fully open, not fully closed, but filtered through trust, verification and oversight. For European regulators, that makes cyber-AI governance less a theoretical debate than a test of whether voluntary access can be turned into accountable practice.