Requirements go down, qubit counts go up. Plan accordingly.

Welcome,

Our team is at Bitcoin Vegas this week. We’re eager to speak with anybody thinking about how cryptosystems should prepare for quantum computing. Let us know if you’re there.

dx/dt

Best not to base your security on the assumption that a number is set in stone, when in fact it is changing rapidly.

Craig Gidney got the world’s attention again last week, demonstrating a 20x reduction in the number of qubits required to run Shor’s algorithm. We can now say confidently that RSA-2048 will be broken with fewer than 1M physical qubits. The exact number in Gidney’s paper is 897,864 physical qubits, and future advances will certainly bring it down further.

Researchers keep figuring out cheaper ways to run algorithms on quantum computers. In 2023, Gidney and Google colleagues yoked logical qubits together so that they cost fewer physical qubits. In 2024, Gidney and Google colleagues (again) addressed a core bottleneck of quantum algorithms by making it cheaper to run Toffoli gates and T gates. Also in 2024, a French team figured out a cheaper way to do modular exponentiation, by approximating it instead of calculating it exactly. Applied in isolation, these three techniques reduced the requirements for Shor’s algorithm to 3M qubits (yoked surface codes), 2M qubits (magic state cultivation), or 4M qubits (approximate exponentiation). These techniques build on, and are additive to, a rich body of older work, including windowing, myriad arithmetic optimizations, and the Ekerå-Håstad variant of Shor’s algorithm, which is tailored to RSA integers.

Last week’s paper combines these techniques into a single algorithm, giving us the 1M-qubit headline figure. The paper makes very reasonable physical assumptions: a square grid of physical qubits with nearest-neighbor connectivity, a 0.1% error rate, and surface code cycles of <1 μs. For context, the worst error rate you could ascribe to Google’s Willow chips would be 0.77%, so they’re already close to the assumed specs. It would be surprising if they didn’t reach 0.1% error rates in the next chip they ship, say in 2-3 years. Note that the physical assumptions did not change from Gidney-Ekerå 2019 to Gidney 2025, meaning the improvements are due exclusively to better algorithms.

Gate counts matter too. They determine how long an algorithm takes and how much error correction is required. Gidney estimates that factoring a 2048-bit number now takes 1 week, 6.9B Toffoli gates, and 900k qubits, whereas his 2019 result took 8 hours, 3B Toffoli gates, and 20M qubits. The difference arises, in part, from space-time tradeoffs: we can ‘spend’ gates in exchange for saving on qubit count. Beyond that tradeoff, gate count also has an indirect effect on qubit count: fewer gates mean we need fewer physical qubits per logical qubit. If we need to perform 1B logical operations, we need logical qubits with an error rate below one in a billion; if we are performing 1M logical operations, the logical error rate only needs to be below one in a million. Logical qubits get better as we add physical qubits to them, so fewer gates mean fewer physical qubits, because we can get away with poorer-quality logical qubits. There are a number of other parameters that can be varied in Gidney’s implementation, and he obtains the main result by optimizing for the minimum number of physical qubits.
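To make the chain from gate count to logical error rate to physical qubits concrete, here is a rough back-of-envelope sketch of ours (not Gidney’s model). It uses the standard surface-code suppression heuristic p_L ≈ A·(p/p_th)^((d+1)/2) and roughly 2d² physical qubits per logical qubit of distance d; the prefactor A = 0.1 and threshold p_th = 1% are illustrative assumptions.

```python
# Back-of-envelope sketch (illustrative assumptions, not Gidney's exact model):
# how the target logical error rate sets the code distance, and hence the
# number of physical qubits needed per logical qubit.

def logical_error_rate(d, p_phys, p_th=1e-2, A=0.1):
    """Standard surface-code heuristic: p_L ~ A * (p_phys / p_th) ** ((d + 1) / 2)."""
    return A * (p_phys / p_th) ** ((d + 1) / 2)

def distance_for(target_p_logical, p_phys):
    """Smallest odd code distance whose heuristic logical error rate meets the target."""
    d = 3
    while logical_error_rate(d, p_phys) > target_p_logical:
        d += 2
    return d

p_phys = 1e-3  # the 0.1% physical error rate assumed in the paper

for target in (1e-6, 1e-9, 1e-15):
    d = distance_for(target, p_phys)
    phys_per_logical = 2 * d * d  # data qubits plus measurement qubits, ~2*d^2
    print(f"target p_L = {target:.0e}: distance {d}, "
          f"~{phys_per_logical} physical qubits per logical qubit")
```

Under these toy numbers, relaxing the target logical error rate from one in a billion to one in a million drops the code distance from 15 to 9, i.e. from roughly 450 to roughly 160 physical qubits per logical qubit.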

There is one area where Gidney’s estimate might be conservative. He assumes that we need logical qubits with error rates of 10^-15, which allows us to run Shor’s algorithm successfully 93% of the time. If instead we were to use logical qubits with error rates of 10^-14, we could still run Shor’s algorithm, but with a 50% success probability. If an adversary’s aim were to forge a single digital signature at the earliest possible date (e.g. for one of the largest exposed wallets), they might take this approach, after adapting Gidney’s work to the discrete log problem on elliptic curves. One forged signature is enough to launch an incredibly destructive attack.
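As a rough illustration of why a 10x worse logical error rate costs so much success probability: if a run offers N independent opportunities for a logical fault, the chance of a clean run is roughly (1 − p_L)^N ≈ exp(−N·p_L). The N below is back-solved from the 93% figure and is our assumption, not a number taken from the paper.

```python
import math

# Back-of-envelope (our illustration, not the paper's exact accounting):
# with N independent opportunities for a logical fault, the chance of a
# fault-free run is roughly (1 - p_L)^N ~ exp(-N * p_L).

N = 7e13  # fault opportunities, back-solved so that p_L = 1e-15 gives ~93%

for p_L in (1e-15, 1e-14):
    success = math.exp(-N * p_L)
    print(f"logical error rate {p_L:.0e}: ~{success:.0%} chance of a clean run")
```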

Gidney’s result is a triumph of algorithmic progress. Further gains might come from better error-correcting codes, higher-quality physical qubits, and improved compilation techniques.

TLDR: Resources required to run Shor’s algorithm continue to decrease. Quantum computer specs continue to improve. Plan accordingly.

PS: If you’d like to play around with how the ratio of physical to logical qubits varies with error rates, check out Sam Jaques’ neat surface code cost calculator.


Links

Until next time,

The Project Eleven Team

[email protected]