LANL study finds some quantum machine-learning models are classically simulable
LANL researchers found some quantum-learning models built to dodge barren plateaus can still run on classical machines, a sharp check on quantum hype.

Los Alamos National Laboratory researchers have delivered an important reality check for quantum machine learning: some variational quantum computing architectures designed to avoid barren plateaus are still classically simulable. In a county where LANL’s computing work shapes both the local economy and the lab’s national-security standing, that finding lands well beyond the theory seminar.
Variational quantum computing is one of the most closely watched near-term uses for quantum devices because it blends quantum hardware with classical optimization. It has also been dogged by barren plateaus, a training problem in which gradients flatten out and performance becomes hard to improve as circuits grow more complex. LANL said its new work showed that architectures proposed as a fix for the quantum “curse of dimensionality” could also narrow or erase any practical advantage over classical computing.
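To give a sense of what a barren plateau looks like in practice, the sketch below is a minimal, self-contained NumPy illustration, not any of the models studied in the LANL work. It simulates a generic layered circuit of RY rotations and controlled-Z gates (a common "hardware-efficient" style of ansatz, chosen here as an assumption for illustration), then estimates the variance of one cost-function gradient over random parameter settings. As the qubit count grows, that variance shrinks, which is the flattening landscape that makes training hard.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation (real-valued, so statevectors stay real)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_single(state, gate, qubit, n):
    """Apply a 2x2 gate to one qubit of an n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [qubit]))
    psi = np.moveaxis(psi, 0, qubit)
    return psi.reshape(-1)

def apply_cz(state, q1, q2, n):
    """Apply a controlled-Z gate between two qubits."""
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    psi[tuple(idx)] *= -1
    return psi.reshape(-1)

def loss(params, n, layers):
    """Cost = <Z> on qubit 0 after a layered RY + CZ circuit on |0...0>."""
    state = np.zeros(2 ** n)
    state[0] = 1.0
    p = params.reshape(layers, n)
    for l in range(layers):
        for q in range(n):
            state = apply_single(state, ry(p[l, q]), q, n)
        for q in range(n - 1):
            state = apply_cz(state, q, q + 1, n)
    probs = (state ** 2).reshape(2, -1).sum(axis=1)  # marginal of qubit 0
    return probs[0] - probs[1]

rng = np.random.default_rng(0)
variances = {}
for n in (2, 4, 6):
    layers = 2 * n  # depth grows with width, as in typical layered circuits
    grads = []
    for _ in range(200):
        theta = rng.uniform(0, 2 * np.pi, size=layers * n)
        shift = np.zeros_like(theta)
        shift[0] = np.pi / 2  # parameter-shift rule: exact gradient for RY
        grads.append(0.5 * (loss(theta + shift, n, layers)
                            - loss(theta - shift, n, layers)))
    variances[n] = float(np.var(grads))
    print(f"{n} qubits: gradient variance = {variances[n]:.4f}")
```

Note the irony the sketch makes concrete: the whole experiment runs on a classical machine in milliseconds, which is exactly the kind of tractability the LANL paper warns can undercut claims of quantum advantage.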
The underlying Nature Communications perspective, “Does provable absence of barren plateaus imply classical simulability?”, said the authors found case-by-case evidence that many commonly used models with favorable loss landscapes can also be simulated classically. In some settings, the paper said, that can hold if classical data are gathered during an initial data-acquisition phase. The message was not that quantum machine learning is dead, but that the field’s most optimistic claims need to be tested against what a classical computer can already do.
For Los Alamos, that distinction matters. LANL’s computing portfolio is not just an academic calling card. It helps shape which projects get scaled, which teams get staffed, and where future investments go in a lab that serves both scientific and strategic missions. A model that looks promising in theory but turns out to be classically tractable can change grant plans, delay hardware commitments, and steer researchers toward approaches with a clearer chance of real quantum advantage.
The new result also fits into a longer LANL research arc. The lab said it has spent six years studying barren plateaus, including a 2021 Physical Review X paper that reported an absence of barren plateaus in quantum convolutional neural networks and a 2024 Nature Communications paper that developed a Lie-algebraic theory of barren plateaus for deep parameterized quantum circuits. That history shows the latest study was not a standalone warning, but part of a sustained effort to separate useful quantum methods from claims that sound stronger than the hardware can support.