Quantum computers can't remember anything
Quantum computers can process information at speeds that make classical supercomputers look like abacuses. They can simulate molecular structures, crack optimization problems, and model entire financial systems. But there's a catch that rarely makes the headlines: they can't hold a thought. The real bottleneck for quantum computing isn't processing power. It's memory. Quantum information vanishes unpredictably, often within microseconds, and until very recently, we couldn't even measure how fast it was disappearing. A team of researchers from Norway and Denmark just changed that, building a method to track quantum data loss 100 times faster than anything before it. It's not a fix for the problem, but it's the diagnostic tool that might finally make a fix possible.
The fastest computers with the worst memory
Classical computers store information as bits, zeros and ones etched into silicon that stay put until you tell them otherwise. Quantum computers use qubits, which exploit quantum mechanical properties like superposition and entanglement to represent vastly more complex states. A qubit can be both 0 and 1 simultaneously, and when qubits are entangled, their states are correlated in ways no classical system can reproduce: measuring one instantly constrains what you will find when you measure the others. This is what gives quantum computers their extraordinary potential.

But the same quantum properties that make qubits powerful also make them fragile. The moment a qubit interacts with its environment, even slightly, through thermal fluctuations, stray electromagnetic fields, or vibrations, it loses its quantum state. This process is called decoherence, and it's the single biggest obstacle standing between today's prototype quantum machines and the fault-tolerant quantum computers the industry has been promising.

Think of it like writing on a whiteboard in a wind tunnel. You can write incredibly fast, but the letters start disappearing before you finish the sentence.

In superconducting qubits, the most widely used type, found in systems from IBM, Google, and others, the average time before information is lost (called the coherence time) has improved steadily over the years. But as Professor Jeroen Danon from NTNU's Department of Physics explains, the real problem isn't the average. "The time it takes for information to disappear seems to vary randomly over time." One moment a qubit holds information reasonably well; the next, it collapses almost immediately. And until now, there hasn't been a fast, reliable way to see that happening.
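The standard model of this loss is exponential (T1) relaxation: the probability that a qubit still holds its state after a wait of t decays as exp(-t/T1). A minimal Python sketch shows why a randomly fluctuating T1 is so damaging; the specific numbers (100 and 20 microseconds) are illustrative, not measured values:

```python
import math

def survival_probability(t_us: float, t1_us: float) -> float:
    """Chance a qubit still holds its excited state after t_us microseconds,
    under simple exponential (T1) relaxation."""
    return math.exp(-t_us / t1_us)

# Illustrative qubit with T1 = 100 microseconds:
p = survival_probability(50.0, 100.0)     # ~0.61
# The same 50-microsecond wait on a "bad moment" where T1 has
# collapsed to 20 microseconds:
p_bad = survival_probability(50.0, 20.0)  # ~0.08
```

The same delay that is usually survivable becomes almost certain data loss when T1 dips, which is exactly why a snapshot average of the coherence time hides the real problem.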
You can't fix what you can't see
This is where the Norwegian breakthrough comes in. Danon and an international team led by the Niels Bohr Institute in Copenhagen developed a new measurement technique that can track qubit relaxation rates, essentially how fast quantum information is leaking away, in approximately 10 milliseconds. Previous methods took about one full second. That's roughly a 100-fold improvement, and in quantum physics, that difference is enormous.

The research, published in Physical Review X in early 2026, uses a technique the team calls "real-time adaptive tracking." Rather than running lengthy calibration routines that only give you a snapshot average, the new method monitors qubits continuously, revealing rapid fluctuations that were previously invisible. "This will in turn make it easier to identify the underlying causes that make the information disappear," Danon said.

This matters because decoherence isn't a single, uniform problem. It's a collection of different noise sources, each contributing in different ways at different times: thermal noise, two-level system defects in the qubit materials, crosstalk between neighboring qubits, and electromagnetic interference all play a role. Without real-time visibility into how and when information is being lost, engineers are essentially trying to debug a program without access to the error logs.

The NTNU/Niels Bohr technique gives quantum engineers something closer to a live debugger. By seeing exactly when and how fast decoherence spikes, researchers can start isolating causes, whether it's a manufacturing defect, an environmental factor, or something intrinsic to the qubit design.
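To make the idea of continuous tracking concrete, here is a toy sketch, emphatically not the published protocol: repeatedly prepare the qubit, wait a fixed delay, record whether it survived, and keep an exponentially weighted running estimate of the survival probability, which inverts to a relaxation-rate estimate after every shot. All parameters (the 10-microsecond delay, the smoothing factor, the simulated rates) are illustrative assumptions:

```python
import math
import random

def track_relaxation_rate(outcomes, delay_us=10.0, alpha=0.05):
    """Toy tracker: exponentially weighted running estimate of the survival
    probability, inverted to a relaxation rate after every shot.
    Illustrative only -- not the published adaptive-tracking method."""
    p_hat = 0.5                      # initial guess for survival probability
    rates = []
    for bit in outcomes:             # bit = 1 if the qubit survived the delay
        p_hat = (1 - alpha) * p_hat + alpha * bit
        p_safe = min(max(p_hat, 1e-6), 1 - 1e-6)   # keep log well-defined
        rates.append(-math.log(p_safe) / delay_us)  # Gamma = -ln(p) / tau
    return rates

# Simulate a qubit whose true relaxation rate jumps mid-run, the kind of
# fluctuation only a fast tracker can catch:
random.seed(1)
true_rates = [0.02] * 2000 + [0.10] * 2000           # per microsecond
shots = [1 if random.random() < math.exp(-g * 10.0) else 0
         for g in true_rates]
estimates = track_relaxation_rate(shots)
```

A slow calibration routine would average the two regimes together; the running estimate instead settles near the low rate, then visibly jumps when the qubit's environment changes.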
Quantum circuits forget their own work
The memory problem goes even deeper than individual qubits. A separate study published in Nature Physics in April 2026, led by researchers at EPFL and the Free University of Berlin, showed that noise causes entire quantum circuits to forget their earlier computations.

Quantum algorithms work by building up layers of operations, each one depending on the results of the ones before it. But the EPFL team demonstrated that in noisy circuits, the influence of earlier layers fades as noise accumulates. Only the final few layers of operations actually affect the output. Deep quantum circuits, in practice, behave like shallow ones.

The domino analogy the researchers used is apt. Imagine setting up an elaborate chain of dominoes where each piece must strike the next in perfect sequence. Now imagine each domino is slightly wobbly. By the time you reach the end of the chain, only the last few dominoes determine whether the final piece falls. Everything before that has been absorbed by the wobble.

This has serious implications for quantum advantage, the idea that quantum computers can solve problems classical computers cannot. If noise effectively collapses deep circuits into shallow ones, then many of the algorithms that theoretically give quantum computers their edge may not work in practice on current hardware.
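A toy one-qubit model makes the effect visible. Each "layer" rotates the qubit's state (tracked as a Bloch vector), then relaxation noise pulls it back toward the ground state; nudging an early layer barely moves the output, while nudging a late one moves it a lot. This is an illustrative sketch under assumed parameters (amplitude-damping noise, arbitrary rotation angles), not the EPFL analysis itself:

```python
import math

def amplitude_damp(x, z, gamma):
    """Bloch-vector action of amplitude damping (relaxation toward |0>)."""
    return math.sqrt(1 - gamma) * x, (1 - gamma) * z + gamma

def run_circuit(angles, gamma=0.15):
    """Rotation layers about the y-axis, each followed by relaxation noise.
    Returns the final <Z> expectation value."""
    x, z = 0.0, 1.0  # start in |0>
    for theta in angles:
        x, z = (x * math.cos(theta) + z * math.sin(theta),
                z * math.cos(theta) - x * math.sin(theta))
        x, z = amplitude_damp(x, z, gamma)
    return z

depth = 40
base = [0.3] * depth
ref = run_circuit(base)

# Nudge one layer at a time and measure how much the output moves:
influence = []
for k in range(depth):
    tweaked = list(base)
    tweaked[k] += 0.5
    influence.append(abs(run_circuit(tweaked) - ref))
```

The `influence` values grow roughly exponentially with layer index: the noise contracts each earlier perturbation layer after layer, so the deep circuit's output is effectively determined by its last few layers, mirroring the paper's conclusion.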
Why sound waves might be part of the answer
While the Norwegian team focused on diagnosing the memory problem, others are working on extending how long quantum memory lasts. In August 2025, a team at Caltech demonstrated a hybrid quantum memory that converts qubit information into sound waves using a mechanical oscillator, essentially a miniature tuning fork vibrating at gigahertz frequencies. The result: quantum memory lifetimes up to 30 times longer than the best superconducting qubits alone.

Sound waves travel far more slowly than electromagnetic waves, which means the devices can be more compact, and they don't propagate in free space, reducing energy leakage. It's an elegant approach: using the physical properties of sound to protect information that electromagnetic systems can't hold. The team acknowledged that transfer rates need to improve by a factor of three to ten before the system is practical, but the proof of concept is promising.
The honest timeline
Every year brings "quantum breakthroughs" that make it sound like useful quantum computers are just around the corner. The reality is more nuanced.

IBM's roadmap targets a large-scale, fault-tolerant quantum computer (their "Starling" system) by 2029, with 200 logical qubits capable of running circuits with 100 million gates. IonQ projects over 2 million physical qubits by 2030, translating to 40,000 to 80,000 logical qubits. Google has laid out a five-stage roadmap from theoretical advantage to real-world applications.

But these are the optimistic projections from companies with billions invested in the outcome. Independent assessments paint a wider range. A 2025 survey of expert predictions found timelines ranging from 2 to 5 years for early practical use to 15+ years for the kind of fault-tolerant machines that could transform industries. Caltech researchers published a finding in March 2026 suggesting useful quantum computers could be built with as few as 10,000 to 20,000 qubits, potentially by the end of the decade, but that's still a theoretical result that needs engineering validation.

The industry consensus seems to be converging on the early 2030s for fault-tolerant systems that can do things classical computers genuinely cannot. That's not far off, but it's also not tomorrow. And the gap between "fault-tolerant" and "commercially transformative" could be another decade beyond that.
Why this matters for AI
The intersection of quantum computing and AI is often cited as the ultimate endgame. Quantum machine learning algorithms could theoretically process and classify massive datasets exponentially faster than classical methods. Quantum simulation could revolutionize drug discovery and materials science, areas where AI is already making strides but hitting computational walls.

But quantum AI requires stable qubits, long coherence times, and deep circuits that actually work: everything the memory problem currently prevents. The EPFL study on circuit depth is particularly relevant here, because many proposed quantum machine learning algorithms require deep circuits to function. If noise collapses those circuits into effectively shallow ones, the quantum advantage for AI evaporates.

This is why the measurement breakthrough from the NTNU team is more significant than it might appear. It isn't glamorous: it's a diagnostic tool, not a quantum computer that can break encryption or simulate proteins. But diagnosing the memory problem is a prerequisite for solving it, and solving it is a prerequisite for everything else.
The real quantum race
The race in quantum computing isn't really about who can build the most qubits. It's about who can make qubits remember. Speed without stability is just expensive noise.

The Norwegian team's 100x-faster measurement technique, Caltech's sound-based quantum memory, and the EPFL team's honest assessment of circuit-depth limitations all point to the same conclusion: quantum computing's hardest problem isn't computation. It's preservation.

When we eventually solve the memory problem, everything else follows: fault tolerance, quantum advantage, quantum AI. Until then, quantum computers remain the fastest thinkers that can't hold a thought.
References
- Quantum computers keep losing data. This breakthrough finally tracks it, ScienceDaily, April 8, 2026
- Helping resolve quantum computers' memory problem, Norwegian SciTech News, April 4, 2026
- Scientists find quantum computers forget most of their work, ScienceDaily, April 6, 2026
- Caltech breakthrough makes quantum memory last 30 times longer, ScienceDaily, August 27, 2025
- IBM lays out clear path to fault-tolerant quantum computing, IBM Quantum Blog, June 10, 2025
- Caltech team finds useful quantum computers could be built with as few as 10,000 qubits, Caltech News, March 31, 2026
- Quantum computing timelines 2025: predictions from around the globe, Brian Lenahan, Substack