UK regulators warn frontier AI already outpaces human cyber skills
UK regulators said frontier AI already outperforms skilled hackers, warning banks and utilities that AI cyber risk is no longer theoretical but operational.

Britain’s top financial regulators warned companies that frontier AI has already crossed into cyber territory once thought to belong only to highly skilled attackers. In a joint statement on Friday, the Bank of England, the Financial Conduct Authority and HM Treasury said current frontier models represent a “step-change in capability” and that their cyber abilities are “already exceeding what a skilled practitioner could achieve,” and doing so faster, at greater scale and at lower cost.
The warning was blunt about what that means for business. Regulators said malicious use of frontier AI could amplify threats to firms’ safety and soundness, customers, market integrity and financial stability. They urged boards and senior managers to treat AI security as a core governance issue now, not after a breach, and to strengthen protective, detective, threat-containment and cyber-response capabilities alongside vulnerability management and third-party risk controls.

That has implications far beyond the City of London. Hospitals, utilities, banks and government systems all depend on software, data and vendors that can be probed, manipulated or disrupted at speed. If frontier models can identify weaknesses, draft exploit code and automate attack chains, then the pressure shifts from defending against occasional human intruders to defending against systems that can test thousands of targets in parallel. For institutions that rely on AI for coding, analysis, customer service and internal operations, the same tools that raise productivity can also widen the attack surface.
The new guidance followed a rapid escalation in official concern across Whitehall and Threadneedle Street. Andrew Bailey warned on 14 April that regulators needed to quickly understand the cybersecurity implications of Anthropic’s Mythos model. The next day, the UK government told business leaders that AI systems were becoming capable of work that once required rare expertise, including finding software weaknesses and writing exploit code. The AI Security Institute said Anthropic’s Claude Mythos Preview was substantially more capable at cyber offence than any model it had previously assessed, and that frontier model capabilities were doubling every four months, rather than every eight.

The picture is now widening into macroeconomic risk. On 7 May, the International Monetary Fund warned that AI-driven cyberattacks could lower the time and cost of exploiting vulnerabilities and create correlated failures that disrupt payments, financial intermediation and confidence across the system. Taken together, the UK warning and the IMF’s assessment suggest regulators are no longer treating AI cyber risk as a future scenario. They are treating it as part of the operating environment, and one that boards, lenders, hospitals, power networks and public agencies will have to manage before the next major incident forces the issue.