UK lawmakers urge AI stress tests to protect financial stability
UK parliamentary panel tells regulators to run AI stress tests to identify system vulnerabilities and set binding guidance by end-2026.

A cross-party parliamentary committee has warned that Britain’s financial regulators are not sufficiently prepared for the risks posed by artificial intelligence and has urged the Bank of England and the Financial Conduct Authority to design and run AI-specific stress tests to probe system-wide vulnerabilities.
The Treasury Select Committee’s report calls for scenarios that simulate severe market shocks triggered by automated and algorithmic systems, saying such exercises would help firms and supervisors identify weak points before they become systemic. The committee argued that tailored stress testing would reveal how interconnected systems and AI-driven behaviours could amplify market disruption and create knock-on consequences across retail and wholesale markets.
Lawmakers singled out as a particular concern the rapid rollout among banks and other firms of so-called agentic AI systems, tools that can make decisions and act autonomously. The report warned that such systems, combined with heavy reliance on a small number of major U.S. technology firms for AI services, could create concentration risk that amplifies the effects of an outage or malfunction.
Committee chair Meg Hillier set a stark tone in a statement accompanying the report: “Based on the evidence I’ve seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident and that is worrying.” Her comments underscored the committee’s criticism of a perceived “wait and see” approach by Britain’s financial watchdogs and its call for faster, more proactive regulation.

Among the committee’s concrete recommendations is a requirement that the Financial Conduct Authority publish detailed guidance by the end of 2026 clarifying how existing consumer protection rules apply to AI systems and specifying the extent to which senior managers should be expected to understand and oversee the AI tools used by their firms. The committee also wants the Bank of England and the FCA to coordinate on scenario design and to ensure stress tests cover both market functioning and consumer-facing harms.
The report drew support from outside central bank circles. Jonathan Hall, an external member of the Bank of England’s Financial Policy Committee, told the committee that bespoke AI stress tests could help oversight bodies detect emerging risks earlier, enabling supervisors and firms to adjust governance and controls before problems propagate system-wide.
The recommendations arrive amid a broader parliamentary push to strengthen regulatory information powers and oversight of opaque corners of finance. The Bank of England last year launched a stress test of the global private equity and private credit sector, an exercise the committee described as covering roughly $16 trillion, with results due in early 2027. Sheila Noakes, a member of the House of Lords, stressed that the proposed measure for the central bank is “an information gathering power, it’s not the same as regulating the industry.”
The FCA has said it has already undertaken work to ensure AI is used safely and responsibly and indicated it will review the committee’s findings carefully. The Treasury Select Committee’s timetable for guidance and its push for stress-testing mark a shift toward time-bound regulatory action that could reshape firm practices, senior manager responsibilities, and the data collection powers of supervisors.
If regulators adopt the committee’s recommendations, banks and other financial firms will face new simulation requirements and clearer expectations for AI governance, potentially accelerating investment in resilience and transparency while raising questions about oversight of third-party AI providers and the costs of compliance.