Timelines for quantum computers relevant to cryptography are often exaggerated—leading to urgent calls for a sweeping, immediate transition to post-quantum cryptography.
But these calls often overlook the costs and risks of premature migration, and ignore the dramatically different risk profiles among cryptographic primitives:
Post-quantum encryption, despite its costs, still demands immediate deployment: “Harvest-Now-Decrypt-Later” (HNDL) attacks are already underway, because sensitive data encrypted today will still be valuable when quantum computers arrive, even if that’s decades from now. The performance overhead and implementation risks of post-quantum encryption are real, but HNDL attacks leave no choice for data that requires long-term confidentiality.
Post-quantum signatures face different considerations. They are not subject to HNDL attacks, and their costs and risks (larger sizes, performance overhead, immature implementations, and bugs) demand careful consideration rather than immediate migration.
These distinctions are critical. Misunderstanding them distorts cost-benefit analyses and can lead teams to overlook more pressing security risks—such as implementation bugs.
The real challenge in a successful transition to post-quantum cryptography is matching urgency with actual threats. Below, I will clarify common misconceptions about the quantum threat to cryptography—covering encryption, signatures, and zero-knowledge proofs—with a particular focus on their implications for blockchains.
How is our timeline progressing?
Despite the hype, the likelihood of a cryptographically relevant quantum computer (CRQC) appearing in the 2020s is extremely low.
By “cryptographically relevant quantum computer,” I mean a fault-tolerant, error-corrected quantum computer large enough to run Shor’s algorithm to break secp256k1 or RSA-2048 (i.e., attack elliptic curve cryptography or RSA) within a reasonable timeframe (e.g., within at most a month of continuous computation).
By any reasonable interpretation of public milestones and resource estimates, we are still very far from a cryptographically relevant quantum computer. Companies sometimes claim that a CRQC may appear before 2030 or as late as 2035, but publicly known progress does not support those claims.
For context, across all current architectures—trapped ion, superconducting qubits, and neutral atom systems—today’s quantum computing platforms are nowhere near the hundreds of thousands to millions of physical qubits required to run Shor’s algorithm to attack RSA-2048 or secp256k1 (the exact number depends on error rates and error correction schemes).
The limiting factors are not just the number of qubits, but also gate fidelity, qubit connectivity, and the sustained error-correction circuit depth required to run deep quantum algorithms. While some systems now exceed 1,000 physical qubits, the raw qubit count is misleading: these systems lack the connectivity and gate fidelity needed for crypto-relevant computations.
Recent systems are approaching the physical error rates needed for quantum error correction to start being effective, but no one has demonstrated more than a handful of logical qubits with sustained error-correction circuit depth… let alone the thousands of high-fidelity, deep-circuit, fault-tolerant logical qubits required to actually run Shor’s algorithm. The gap between demonstrating quantum error correction in principle and achieving the scale needed for cryptanalysis remains enormous.
Simply put: unless both the number and fidelity of qubits improve by several orders of magnitude, cryptographically relevant quantum computers remain a distant prospect.
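To make “orders of magnitude” concrete, here is a back-of-envelope sketch of the physical-qubit overhead implied by quantum error correction. It assumes a surface code, where one logical qubit costs roughly 2d² − 1 physical qubits at code distance d; the distance and logical-qubit count below are illustrative assumptions, not hardware-backed figures, and published resource estimates vary widely.

```python
# Rough overhead of fault-tolerant logical qubits under a surface code:
# one logical qubit uses about d^2 data qubits plus d^2 - 1 measurement
# ancillas. The chosen distance (d=25) and logical-qubit count (2,000)
# are illustrative assumptions for a Shor-scale attack, not predictions.

def physical_per_logical(d: int) -> int:
    """Approximate physical qubits per surface-code logical qubit of distance d."""
    return 2 * d * d - 1

def total_physical(logical_qubits: int, d: int) -> int:
    """Total physical qubits for a machine with the given logical-qubit count."""
    return logical_qubits * physical_per_logical(d)

if __name__ == "__main__":
    # Even under these optimistic round numbers, the total lands in the millions.
    print(total_physical(logical_qubits=2_000, d=25))
```

Even this crude arithmetic shows why counting raw physical qubits is misleading: thousands of usable logical qubits translate into millions of physical ones.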
Nevertheless, corporate press releases and media coverage can easily create confusion. Common sources of misunderstanding include:
Demonstrations claiming “quantum advantage” currently target artificially crafted tasks. These tasks are chosen not for their practical value but because they can run on current hardware and appear to demonstrate significant quantum speedups—a fact often obscured in announcements.
Companies claiming to have achieved thousands of physical qubits. But this refers to quantum annealers, not gate-model machines needed to run Shor’s algorithm against public-key cryptography.
Companies using the term “logical qubit” loosely. Physical qubits are noisy. As noted above, quantum algorithms need logical qubits; Shor’s algorithm needs thousands. Using quantum error correction, many physical qubits can be used to implement one logical qubit—typically hundreds to thousands, depending on error rates. But some companies have stretched this term beyond recognition. For example, a recent announcement claimed to realize a logical qubit using just two physical qubits and a distance-2 code. This is absurd: a distance-2 code can only detect errors, not correct them. Real fault-tolerant logical qubits for cryptanalysis each require hundreds to thousands of physical qubits, not two.
More generally, many quantum computing roadmaps use the term “logical qubit” to refer to qubits that only support Clifford operations. These can be efficiently simulated classically and are insufficient to run Shor’s algorithm, which requires thousands of error-corrected T gates (or, more generally, non-Clifford gates).
Even if one roadmap’s target is “thousands of logical qubits in year X,” that does not mean the company expects to run Shor’s algorithm to break classical cryptography in the same year X.
These practices seriously distort public perception of how close we are to cryptographically relevant quantum computers, even among well-informed observers.
That said, some experts are indeed excited by the progress. For example, Scott Aaronson recently wrote that, given “the current astonishing rate of hardware progress,”
I now regard it as a live possibility that we’ll have a fault-tolerant quantum computer running Shor’s algorithm before the next US presidential election.
But Aaronson later clarified that his statement did not mean a cryptographically relevant quantum computer: he would count even a fully fault-tolerant run of Shor’s algorithm factoring 15 = 3 × 5 as an achievement—a computation you can do faster with pencil and paper. Even that modest bar has not yet been met: previous experiments factoring 15 on quantum computers used simplified circuits, not the full, fault-tolerant Shor’s algorithm. And there’s a reason these experiments always target factoring 15: arithmetic mod 15 is easy, while factoring even slightly larger numbers like 21 is much harder. Thus, quantum experiments that claim to factor 21 usually rely on extra hints or shortcuts.
In short: the expectation that a cryptographically relevant quantum computer capable of breaking RSA-2048 or secp256k1 will appear within the next 5 years—which is what actually matters for practical cryptography—is not supported by any publicly known progress.
Even 10 years is still ambitious. Given how far we are from a CRQC, it is entirely reasonable to be excited about progress while still thinking on timelines of a decade or more.
What about the US government’s setting of 2035 as a deadline for full post-quantum (PQ) migration of government systems? I think this is a reasonable timeline for completing such a large-scale transition. However, it is not a prediction that a cryptographically relevant quantum computer will exist by then.
What does an HNDL attack apply to (and not apply to)?
A Harvest-Now-Decrypt-Later (HNDL) attack is when an adversary archives encrypted traffic now, then decrypts it once a cryptographically relevant quantum computer exists. Nation-state adversaries are certainly already archiving encrypted communications from the US government at scale, to decrypt them years later if and when a CRQC does exist.
That’s why encryption needs to transition immediately—at least for anyone with confidentiality needs of 10–50 years or more.
But digital signatures—all blockchains rely on them—are different: there is no confidentiality to be attacked retroactively.
In other words, if a cryptographically relevant quantum computer arrives, signature forgery does become possible from that point forward, but past signatures do not “hide” secrets the way encrypted messages do. As long as you know a digital signature was generated before a CRQC arrived, it cannot be a forgery.
This makes the transition to post-quantum digital signatures less urgent than the post-quantum transition for encryption.
Major platforms are acting accordingly: Chrome and Cloudflare have rolled out hybrid X25519+ML-KEM for encryption in TLS.
In this piece, for readability, I use “encryption schemes” even though, strictly, secure communication protocols like TLS use key exchange or key encapsulation mechanisms, not public-key encryption.
Here, “hybrid” means both a post-quantum secure scheme (i.e., ML-KEM) and the current scheme (X25519) are used together, to get the combined security of both. This way, they can (hopefully) prevent HNDL attacks via ML-KEM, while retaining classical security via X25519 in case ML-KEM is found insecure even against today’s computers.
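The “combined security of both” idea can be sketched in a few lines: feed both shared secrets into a single key-derivation step, so an attacker must break both component schemes to recover the session key. This is a minimal illustration using an HKDF-extract-style construction; real protocols such as TLS define their own exact concatenation order, labels, and KDF, so treat this as a sketch of the principle, not the wire format.

```python
# Sketch of hybrid key derivation: the classical and post-quantum shared
# secrets are concatenated and fed through one KDF. If EITHER component
# scheme remains secure, the derived key stays secret. Illustrative only;
# real hybrid deployments (e.g., X25519+ML-KEM in TLS) specify the exact
# encoding and labels themselves.
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract step (RFC 5869 style): HMAC-SHA256(salt, input keying material)."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hybrid_session_key(ss_classical: bytes, ss_pq: bytes, context: bytes) -> bytes:
    # An attacker must recover BOTH input secrets to reconstruct this output.
    return hkdf_extract(context, ss_classical + ss_pq)

if __name__ == "__main__":
    key = hybrid_session_key(b"\x01" * 32, b"\x02" * 32, b"demo-context")
    print(key.hex())  # 32-byte derived session key
```

The design point: the hybrid construction is only as weak as the KDF plus the stronger of the two inputs, which is exactly the hedge described above.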
Apple’s iMessage has also deployed this kind of hybrid post-quantum encryption via its PQ3 protocol, and Signal has done so via its PQXDH and SPQR protocols.
In contrast, the rollout of post-quantum digital signatures to critical internet infrastructure is being postponed until a cryptographically relevant quantum computer is truly on the horizon, because current post-quantum signature schemes come with performance penalties (covered in more detail below).
zkSNARKs—Zero-Knowledge Succinct Non-Interactive Arguments of Knowledge, crucial for blockchain scalability and privacy in the long term—are in a similar position to signatures. This is because, even for non-post-quantum-secure zkSNARKs (which use elliptic curve cryptography, just as today’s non-post-quantum encryption and signature schemes do), their zero-knowledge property is post-quantum secure.
The zero-knowledge property ensures that nothing about the secret witness is leaked in the proof—even to a quantum adversary—so there is no secret material to “harvest” for later decryption.
Thus, zkSNARKs are not vulnerable to Harvest-Now-Decrypt-Later attacks. Just as today’s non-post-quantum signatures generated before a CRQC are safe, any zkSNARK proof generated before a CRQC arrives is trustworthy (i.e., the statement proven is definitely true)—even if the zkSNARK uses elliptic curve cryptography. Only after a CRQC arrives can an attacker find convincing proofs of false statements.
What this means for blockchains
Most blockchains are not exposed to HNDL attacks:
Most non-privacy chains, like today’s Bitcoin and Ethereum, primarily use non-post-quantum cryptography for transaction authorization—that is, they use digital signatures, not encryption.
These signatures are not an HNDL risk: “Harvest-Now-Decrypt-Later” attacks apply to encrypted data. Bitcoin’s blockchain is public; the quantum threat is signature forgery (deriving private keys to steal funds), not decrypting already-public transaction data. So the urgency that HNDL attacks create for encryption does not carry over to these chains.
Unfortunately, even otherwise reputable sources like the Federal Reserve have incorrectly claimed that Bitcoin is vulnerable to HNDL attacks, a mistake that overstates the urgency to transition to post-quantum cryptography.
That said, reduced urgency does not mean Bitcoin can wait: it faces different timeline pressures from the immense social coordination required to change the protocol.
Exceptions as of today are privacy chains, many of which encrypt or otherwise obfuscate recipients and amounts. Once quantum computers can break elliptic curve cryptography, this confidentiality can be harvested now and retroactively deanonymized.
For such privacy chains, the severity of the attack varies by blockchain design. For example, for Monero’s curve-based ring signatures and key images (linkability tags for each output to prevent double-spending), the public ledger alone is enough to retroactively reconstruct the spend graph. But in other chains, the damage is more limited—see Zcash cryptography engineer and researcher Sean Bowe’s discussion for details.
If it’s important that users’ transactions are not exposed by a cryptographically relevant quantum computer, then privacy chains should transition to post-quantum primitives (or hybrid schemes) as soon as feasible. Alternatively, they should adopt architectures that avoid putting decryptable secrets on-chain.
Bitcoin’s special dilemma: governance + abandoned coins
For Bitcoin in particular, two realities drive urgency to begin moving toward post-quantum digital signatures. Neither is about quantum technology itself.
One concern is governance speed: Bitcoin moves slowly. Any contentious issue on which the community cannot reach consensus risks a destructive hard fork.
Another concern is that Bitcoin’s move to post-quantum signatures cannot be a passive migration: owners must actively migrate their coins. This means abandoned, quantum-vulnerable coins cannot be protected. Some estimates put the number of quantum-vulnerable and likely-abandoned BTC at several million coins, worth tens of billions of dollars at current prices (as of December 2025).
However, the quantum threat to Bitcoin will not be a sudden, overnight disaster… but rather a selective, gradual targeting process. Quantum computers will not break all crypto at once—Shor’s algorithm must be run one public key at a time. Early quantum attacks will be extremely expensive and slow. So, once quantum computers can break a single Bitcoin signature key, attackers will selectively prey on high-value wallets.
Moreover, users who avoid address reuse and do not use Taproot addresses—which directly expose the public key on-chain—are largely protected even without protocol changes: their public key is hidden behind a hash function until their coins are spent. When they eventually broadcast a spend transaction, the public key becomes visible, and there is a short, real-time race between honest spenders needing confirmation and a quantum-equipped attacker trying to find the private key and spend the coins before the legitimate transaction finalizes. So, truly vulnerable coins are those whose public keys are already exposed: early pay-to-pubkey outputs, reused addresses, and Taproot holdings.
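The protection described above comes from the way legacy Bitcoin addresses commit to a hash of the public key rather than the key itself. The sketch below illustrates that structure; a dummy byte string stands in for a real secp256k1 public key, and SHA-256 alone stands in for Bitcoin’s actual HASH160 construction (RIPEMD160 over SHA256), since RIPEMD-160 availability varies across Python/OpenSSL builds. The structural point is identical either way.

```python
# Why non-reused legacy (P2PKH) addresses hide the public key until spend
# time: the chain stores only a one-way hash commitment to the key. Shor's
# algorithm needs the public key itself, which appears on-chain only when
# the coins are spent. Bitcoin's real commitment is RIPEMD160(SHA256(pk))
# ("HASH160"); we use truncated SHA-256 here purely for portability.
import hashlib

def commitment(pubkey: bytes) -> bytes:
    """A 20-byte one-way commitment to a public key (stand-in for HASH160)."""
    return hashlib.sha256(pubkey).digest()[:20]

if __name__ == "__main__":
    pubkey = b"\x02" + b"\x11" * 32   # placeholder 33-byte compressed key
    print(commitment(pubkey).hex())   # what the chain sees before spend time
```

A quantum attacker who observes only this 20-byte digest has nothing for Shor’s algorithm to work on; the race described above begins only once the spend transaction reveals the key.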
For abandoned vulnerable coins, there is no simple solution. Some options include:
The Bitcoin community agrees on a “flag day” after which any un-migrated coins are declared burned.
Allowing abandoned, quantum-vulnerable coins to be easily seized by anyone with a cryptographically relevant quantum computer.
The second option presents serious legal and security concerns. Taking coins without the private key using a quantum computer—even if claiming legitimate ownership or good intentions—could trigger major issues under theft and computer fraud laws in many jurisdictions.
Furthermore, “abandoned” is itself a presumption based on inactivity. Nobody really knows if these coins lack a living keyholder. Evidence of prior ownership may not provide legal authority to break cryptographic protection to recover them. This legal ambiguity increases the likelihood that abandoned, quantum-vulnerable coins fall into the hands of malicious actors willing to ignore legal restrictions.
A final Bitcoin-unique issue is its low transaction throughput. Even if a migration plan is finalized, migrating all quantum-vulnerable funds to post-quantum-secure addresses would take months at Bitcoin’s current transaction rate.
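The throughput constraint is easy to quantify with rough arithmetic. Both numbers in the sketch below are assumptions for illustration: the vulnerable-output count is a placeholder, a real migration could batch several outputs per transaction, and migration traffic would compete with ordinary transactions for block space.

```python
# Back-of-envelope: how long migrating quantum-vulnerable outputs would
# take at Bitcoin-scale throughput. The output count and transaction rate
# below are illustrative assumptions, not measured figures; real plans
# could batch outputs per transaction, shortening (but not eliminating)
# the queue.

def migration_days(n_outputs: int, tx_per_second: float) -> float:
    """Days to push one migration transaction per output at a given tx rate."""
    return n_outputs / tx_per_second / 86_400  # 86,400 seconds per day

if __name__ == "__main__":
    # ~50M outputs at ~7 tx/s, with ALL block space devoted to migration:
    print(round(migration_days(n_outputs=50_000_000, tx_per_second=7.0)))
```

Even under these generous assumptions the answer lands around three months of exclusive block usage, which is why migration logistics need to be planned well in advance.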
These challenges make it vital for Bitcoin to start planning its post-quantum transition now—not because a cryptographically relevant quantum computer is likely before 2030, but because the governance, coordination, and technical logistics needed to migrate tens of billions of dollars’ worth of coins will take years to resolve.
The quantum threat to Bitcoin is real, but its timeline pressure comes from Bitcoin’s own limitations, not an imminent quantum computer. Other blockchains face their own quantum-vulnerable funds challenges, but Bitcoin’s exposure is unique: its earliest transactions used pay-to-pubkey outputs, putting public keys directly on-chain and making a large portion of BTC especially vulnerable to quantum computers. This technical distinction—combined with Bitcoin’s age, value concentration, low throughput, and governance rigidity—makes the problem particularly acute.
Note that the vulnerabilities described above pertain to the cryptographic security of Bitcoin’s digital signatures—not the economic security of the Bitcoin blockchain. That economic security derives from proof-of-work (PoW) consensus, which is not easily attacked by quantum computers, for three reasons:
PoW is based on hashing, so it is subject only to Grover’s quadratic quantum speedup, not Shor’s exponential speedup.
The practical overhead of implementing Grover’s search makes it extremely unlikely that any quantum computer could achieve even moderate real-world speedup over Bitcoin’s PoW mechanism.
Even if significant speedup were realized, it would give large quantum miners an advantage over smaller ones, but would not fundamentally undermine Bitcoin’s economic security model.
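The gap between Grover’s quadratic speedup and Shor’s exponential one can be made concrete. The sketch below compares the iteration counts for an unstructured search over a space of 2^k items, ignoring the enormous constant-factor overhead of fault-tolerant execution (which, in practice, erases much of even this advantage) and the fact that Grover iterations are inherently serial.

```python
# Grover vs. classical search over a space of size 2**k: classically you
# expect ~2**k hash evaluations; Grover needs ~(pi/4) * 2**(k/2) serial
# iterations. This ignores fault-tolerance overheads, which in practice
# make the real-world advantage far smaller still.
import math

def grover_iterations(search_space_bits: int) -> int:
    """Approximate Grover iterations for unstructured search over 2**bits items."""
    n = 2 ** search_space_bits
    return int(math.pi / 4 * math.isqrt(n))

if __name__ == "__main__":
    # An 80-bit search drops to roughly 2**40-scale iterations--
    # a quadratic improvement, nothing like Shor's exponential break.
    print(grover_iterations(80))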
Costs and risks of post-quantum signatures
To understand why blockchains should not rush to deploy post-quantum signatures, we need to understand the performance costs and our still-evolving confidence in post-quantum security.
Most post-quantum cryptography is based on one of five approaches:
Hashing
Codes
Lattices
Multivariate quadratic (MQ) systems
Isogenies
Why five different approaches? The security of any post-quantum primitive is based on the assumption that quantum computers cannot efficiently solve a specific mathematical problem. The more “structured” this problem is, the more efficient crypto protocols we can build on top of it.
But this is a double-edged sword: extra structure also gives attack algorithms more to exploit. This creates a fundamental tension—stronger assumptions enable better performance, but at the cost of potential security holes (i.e., the risk that the assumption is proven false).
In general, hash-based approaches are the most conservative from a security perspective, since we are most confident quantum computers cannot efficiently attack these protocols. But they are also the least performant. For example, the NIST-standardized hash-based signature scheme (SLH-DSA, formerly SPHINCS+), even at its smallest parameter setting, produces signatures of 7–8 KB. In contrast, today’s elliptic curve digital signatures are just 64 bytes. That’s roughly a 100x size difference.
Lattice schemes are the main focus for deployment today. The only encryption scheme and two out of three signature algorithms selected by NIST for standardization are lattice-based. One lattice scheme (ML-DSA, formerly Dilithium) produces signatures from 2.4 KB (at 128-bit security) to 4.6 KB (at 256-bit security)—making them about 40–70 times larger than today’s elliptic curve signatures. Another lattice scheme, Falcon, has somewhat smaller signatures (Falcon-512 is 666 bytes, Falcon-1024 is 1.3 KB), but involves complex floating-point operations, which NIST itself flagged as a special implementation challenge. Falcon’s creator Thomas Pornin called it “the most complicated crypto algorithm I’ve ever implemented.”
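The size figures above can be collected into one comparison. The table in the sketch below uses the byte counts discussed in the surrounding text (the hash-based figure is parameter-dependent and approximate); the ratio column is the point, since signature bytes translate directly into bandwidth, storage, and block-space costs.

```python
# Size overhead of post-quantum signatures relative to today's 64-byte
# elliptic curve signatures, using the figures discussed in the text.
# The SLH-DSA number is approximate and parameter-dependent.

SIG_BYTES = {
    "ECDSA/Schnorr (secp256k1)": 64,
    "Falcon-512": 666,
    "ML-DSA-44 (Dilithium, 128-bit)": 2_420,
    "SLH-DSA (hash-based, small param)": 7_856,
}

def ratios(baseline: str = "ECDSA/Schnorr (secp256k1)") -> dict:
    """Signature-size blowup of each scheme relative to the baseline."""
    base = SIG_BYTES[baseline]
    return {name: size / base for name, size in SIG_BYTES.items()}

if __name__ == "__main__":
    for name, r in ratios().items():
        print(f"{name}: {r:.1f}x")
```

For a blockchain where most of every transaction is a signature, a 10x–100x blowup in this one field is a first-order cost, not a rounding error, which is why aggregation techniques (discussed below) matter so much.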
Implementation security for lattice-based digital signatures is also more challenging than for elliptic curve-based schemes: ML-DSA has more sensitive intermediates and non-trivial rejection sampling logic, requiring side-channel and fault protection. Falcon adds constant-time floating-point issues; in fact, several side-channel attacks have already extracted secret keys from Falcon implementations.
These issues pose immediate risks, unlike the distant threat of cryptographically relevant quantum computers.
There is good reason for caution in deploying more performant post-quantum cryptography. Historically, leading candidates like Rainbow (an MQ-based signature scheme) and SIKE/SIDH (an isogeny-based encryption scheme) were broken classically—that is, using today’s computers, not quantum computers.
This happened late in the NIST standardization process. This is healthy science at work, but it shows that premature standardization and deployment can backfire.
As noted above, internet infrastructure is taking a cautious approach to signature migration. This is especially notable given how long the internet’s cryptographic transitions take once started. The move away from MD5 and SHA-1 hash functions—technically deprecated by internet governance bodies years ago—took many years to actually implement in infrastructure, and in some cases is still ongoing. This happened because these schemes were completely broken, not just potentially vulnerable to future technology.
Unique challenges for blockchains vs. internet infrastructure
Fortunately, blockchains like Ethereum or Solana, maintained by active open-source developer communities, can upgrade faster than traditional internet infrastructure. On the other hand, traditional infrastructure benefits from frequent key rotation, which shortens the window during which any single key can be attacked—something blockchains cannot match, since coins and their associated keys can remain exposed indefinitely.
But overall, blockchains should still follow the internet’s thoughtful approach to signature migration. Signatures in both settings are not exposed to HNDL attacks, and the costs and risks of prematurely migrating to immature post-quantum schemes are still significant, regardless of key longevity.
There are blockchain-specific challenges that make premature migration especially risky and complex: for example, blockchains have unique requirements for signature schemes, particularly the ability to rapidly aggregate many signatures. Today, BLS signatures are often used because they enable very fast aggregation, but they are not post-quantum secure. Researchers are exploring SNARK-based post-quantum signature aggregation. This work is promising but still in its early days.
For SNARKs, the community is currently focused on hash-based constructions as the leading post-quantum option. But a major shift is coming: I believe that in the coming months and years, lattice-based options will become attractive alternatives. These alternatives will offer better performance across the board compared to hash-based SNARKs, such as shorter proofs—just as lattice-based signatures are shorter than hash-based ones.
The more pressing current issue: implementation security
For the next few years, implementation bugs will be a greater security risk than cryptographically relevant quantum computers. For SNARKs, the main concern is bugs.
Bugs are already a challenge in digital signature and encryption schemes, and SNARKs are much more complex. Indeed, a digital signature scheme can be viewed as a very simple zkSNARK for the statement “I know the private key corresponding to my public key and authorize this message.”
For post-quantum signatures, the immediate risks also include implementation attacks such as side-channel and fault injection attacks. These types of attacks are well-documented and can extract secret keys from deployed systems. They are a much more urgent threat than distant quantum computers.
The community will spend years identifying and fixing bugs in SNARKs, and hardening post-quantum signature implementations against side-channel and fault injection attacks. Because the dust has not yet settled on post-quantum SNARKs and signature aggregation schemes, blockchains that transition prematurely risk being locked into suboptimal schemes. When better options appear, or implementation bugs are found, they may need to migrate again.
What should we do? 7 recommendations
Given the realities I’ve outlined above, I’ll close with recommendations for various stakeholders—from builders to policymakers. The core principle: take the quantum threat seriously, but do not operate on the assumption that a cryptographically relevant quantum computer will arrive before 2030. There is no evidence for this based on current progress. Yet, there are things we can and should do now:
We should deploy hybrid encryption immediately.
At least wherever long-term confidentiality matters and the costs are bearable.
Many browsers, CDNs, and messaging apps (like iMessage and Signal) have already deployed hybrid approaches. Hybrid—post-quantum + classical—can defend against HNDL attacks while hedging against potential weaknesses in post-quantum schemes.
Use hash-based signatures immediately when size is acceptable.
Software/firmware updates—and other such low-frequency, size-insensitive scenarios—should immediately adopt hybrid hash-based signatures. (Hybrid here is to hedge against implementation bugs in new schemes, not because there are doubts about the hash-based security assumption.)
This is conservative, and it provides society with a clear “lifeboat” in the unlikely event a cryptographically relevant quantum computer appears unexpectedly soon. If post-quantum signatures for software updates are not deployed in advance, we would face a bootstrapping problem after a CRQC appears: we would be unable to safely distribute the post-quantum crypto fixes we need to defend against it.
Blockchains need not rush to deploy post-quantum signatures—but should start planning immediately.
Blockchain developers should follow the lead of the network PKI community and adopt a thoughtful approach to post-quantum signature deployment. This allows post-quantum signature schemes to continue maturing in performance and in our understanding of their security. This approach also gives developers time to rearchitect systems to handle larger signatures and to develop better aggregation techniques.
For Bitcoin and other L1s: communities need to define migration paths and policies for abandoned quantum-vulnerable funds. Passive migration is not possible, so planning is essential. And since Bitcoin faces special non-technical challenges—slow governance and a large volume of high-value, potentially abandoned quantum-vulnerable addresses—it is especially important for the Bitcoin community to start planning now.
Meanwhile, we need to allow research on post-quantum SNARKs and aggregatable signatures to mature (likely still several years). Again, premature migration risks lock-in to suboptimal schemes or the need for a second migration to fix implementation bugs.
A note on Ethereum’s account model: Ethereum supports two account types with different implications for post-quantum migration—Externally Owned Accounts (EOAs), the traditional account type controlled by secp256k1 private keys; and smart contract wallets with programmable authorization logic.
In a non-emergency, if Ethereum adds post-quantum signature support, upgradable smart contract wallets could switch to post-quantum verification via contract upgrade—while EOAs may need to move their funds to new post-quantum-secure addresses (though Ethereum would likely provide a dedicated migration mechanism for EOAs as well).
In a quantum emergency, Ethereum researchers have proposed a hard fork plan to freeze vulnerable accounts and allow users to recover funds by proving knowledge of their mnemonic via post-quantum-secure SNARKs. This recovery mechanism would apply to EOAs and any smart contract wallets not yet upgraded.
Practical effects for users: Well-audited, upgradable smart contract wallets may offer a slightly smoother migration path—but not dramatically, and with tradeoffs in trust in wallet providers and upgrade governance. More important is that the Ethereum community continues its work on post-quantum primitives and emergency response plans.
A broader design lesson for builders: Today, many blockchains tightly couple account identity to a specific cryptographic primitive—Bitcoin and Ethereum to ECDSA signatures on secp256k1, and other chains to EdDSA. The challenges of post-quantum migration highlight the value of decoupling account identity from any specific signature scheme. Ethereum’s move toward smart accounts and similar account abstraction work on other chains reflect this trend: allowing accounts to upgrade their authentication logic without abandoning their on-chain history and state. This decoupling does not make post-quantum migration trivial, but it does provide more flexibility than hard-coding accounts to a single signature scheme. (It also supports unrelated features like sponsored transactions, social recovery, and multisig.)
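The decoupling idea can be sketched abstractly: keep the account identity stable while treating the authorization check as a pluggable component that can later be swapped for a post-quantum verifier. All names below are hypothetical illustrations; real smart-account systems implement this logic on-chain with their own interfaces.

```python
# Minimal sketch of "account abstraction": the account's identity persists
# while its authorization logic is a swappable verifier. This lets an
# account migrate from a classical signature check to a post-quantum one
# without abandoning its identity, history, or state. Names and interfaces
# here are hypothetical, for illustration only.
from typing import Callable

# A verifier takes (message, proof) and decides whether the proof authorizes it.
Verifier = Callable[[bytes, bytes], bool]

class Account:
    def __init__(self, account_id: str, verifier: Verifier):
        self.account_id = account_id   # stable identity, survives upgrades
        self._verifier = verifier      # swappable authorization logic

    def authorize(self, message: bytes, proof: bytes) -> bool:
        return self._verifier(message, proof)

    def upgrade_verifier(self, new_verifier: Verifier) -> None:
        # e.g., replace an ECDSA check with a post-quantum signature check
        self._verifier = new_verifier

if __name__ == "__main__":
    # Stand-in checks; a real system would verify actual signatures.
    acct = Account("alice", lambda msg, proof: proof == b"classical-sig")
    assert acct.authorize(b"tx", b"classical-sig")
    acct.upgrade_verifier(lambda msg, proof: proof == b"pq-sig")
    assert acct.authorize(b"tx", b"pq-sig")
    print(acct.account_id)  # identity unchanged across the upgrade
```

Contrast this with hard-coding the account to one curve: there, “upgrading” means moving funds to a brand-new identity, which is exactly the mass-migration problem described above for Bitcoin and Ethereum EOAs.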
For privacy chains, which encrypt or hide transaction details, earlier transition should be prioritized if performance allows.
User confidentiality on these chains is currently exposed to HNDL attacks, though the severity varies by design. Chains where full retroactive deanonymization is possible from the public ledger face the most urgent risk.
Consider hybrid schemes (post-quantum + classical) to guard against the possibility that an ostensibly post-quantum scheme turns out to be insecure even classically, or adopt architectural changes that avoid putting decryptable secrets on-chain.
In the near term, prioritize implementation security—not quantum threat mitigation.
Especially for complex cryptographic primitives like SNARKs and post-quantum signatures, bugs and implementation attacks (side-channel, fault injection) will be a much bigger security risk over the next few years than cryptographically relevant quantum computers.
Immediately invest in audits, fuzzing, formal verification, and defense-in-depth/layered security approaches—don’t let quantum concerns distract from the more pressing threat of implementation bugs!
Fund quantum computing development.
One major national security implication of all the above is that we need sustained funding and talent development in quantum computing.
A major adversary achieving cryptographically relevant quantum computing before the US would pose a severe national security risk to us and the rest of the world.
Keep perspective on quantum computing announcements.
As quantum hardware matures, there will be many milestones in the coming years. Paradoxically, the very frequency of these announcements is evidence of how far we are from cryptographically relevant quantum computers: each milestone is just one of many bridges to cross before reaching that point, and each will generate its own headlines and excitement.
Treat press releases as progress reports to be critically evaluated, not as triggers for sudden action.
Of course, there may be surprising breakthroughs or innovations that accelerate projected timelines, just as there may be severe scaling bottlenecks that delay them.
I am not arguing that a cryptographically relevant quantum computer in five years is impossible, just that it is extremely unlikely. The above recommendations are robust to this uncertainty, and following them will avoid more immediate and probable risks: implementation bugs, rushed deployments, and the usual ways crypto transitions go wrong.
Quantum Computing and Blockchain: Matching Urgency with Real Threats
Written by: Justin Thaler
Translated by: Plain Language Blockchain
By any reasonable interpretation of public milestones and resource estimates, we are still very far from a cryptographically relevant quantum computer. Companies sometimes claim that a CRQC may appear before 2030 or as late as 2035, but publicly known progress does not support those claims.
For context, across all current architectures—trapped ion, superconducting qubits, and neutral atom systems—today’s quantum computing platforms are nowhere near the hundreds of thousands to millions of physical qubits required to run Shor’s algorithm to attack RSA-2048 or secp256k1 (the exact number depends on error rates and error correction schemes).
The limiting factors are not just the number of qubits, but also gate fidelity, qubit connectivity, and the sustained error-correction circuit depth required to run deep quantum algorithms. While some systems now exceed 1,000 physical qubits, the raw qubit count is misleading: these systems lack the connectivity and gate fidelity needed for crypto-relevant computations.
Recent systems are approaching the physical error rates needed for quantum error correction to start being effective, but no one has demonstrated more than a handful of logical qubits with sustained error-correction circuit depth… let alone the thousands of high-fidelity, deep-circuit, fault-tolerant logical qubits required to actually run Shor’s algorithm. The gap between demonstrating quantum error correction in principle and achieving the scale needed for cryptanalysis remains enormous.
Simply put: unless both the number and fidelity of qubits improve by several orders of magnitude, cryptographically relevant quantum computers remain a distant prospect.
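To make the gap concrete, here is a back-of-the-envelope sketch of surface-code overhead. The physical error rate, the heuristic logical-error formula, the target logical error rate, and the ~6,000-logical-qubit figure are all illustrative assumptions, not a rigorous resource estimate:

```python
# Back-of-the-envelope surface-code overhead, for intuition only.
# All constants below are illustrative assumptions, not a rigorous
# resource estimate for running Shor's algorithm.

def physical_per_logical(distance: int) -> int:
    """A distance-d surface-code patch uses roughly 2*d^2 physical qubits
    (d^2 data qubits plus about as many ancilla/measurement qubits)."""
    return 2 * distance ** 2

def logical_error_rate(p_phys: float, distance: int,
                       p_thresh: float = 1e-2) -> float:
    """Common heuristic: logical error per round ~ 0.1 * (p/p_th)^((d+1)/2)."""
    return 0.1 * (p_phys / p_thresh) ** ((distance + 1) // 2)

# Suppose physical error rates of ~5e-4 and a target logical error rate
# low enough to survive billions of error-correction rounds:
p = 5e-4
d = 3
while logical_error_rate(p, d) > 1e-12:
    d += 2  # surface-code distances are odd

logical_qubits_for_shor = 6_000  # order of magnitude only (assumption)
total = logical_qubits_for_shor * physical_per_logical(d)
print(f"code distance d = {d}")
print(f"physical qubits per logical qubit ~ {physical_per_logical(d)}")
print(f"total physical qubits ~ {total:,}")
```

Even under these charitable assumptions, the sketch lands in the millions of physical qubits—orders of magnitude beyond today’s devices.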
Nevertheless, corporate press releases and media coverage can easily create confusion. Here are some common sources of misunderstanding:
Demonstrations claiming “quantum advantage” currently target artificially crafted tasks. These tasks are chosen not for their practical value but because they can run on current hardware and appear to demonstrate significant quantum speedups—a fact often obscured in announcements.
Companies claiming to have achieved thousands of physical qubits. But this refers to quantum annealers, not the gate-model machines needed to run Shor’s algorithm against public-key cryptography.
Companies using the term “logical qubit” loosely. Physical qubits are noisy. As noted above, quantum algorithms need logical qubits; Shor’s algorithm needs thousands. Using quantum error correction, many physical qubits can be used to implement one logical qubit—typically hundreds to thousands, depending on error rates. But some companies have stretched this term beyond recognition. For example, a recent announcement claimed to realize a logical qubit using just two physical qubits and a distance-2 code. This is absurd: a distance-2 code can only detect errors, not correct them. Real fault-tolerant logical qubits for cryptanalysis each require hundreds to thousands of physical qubits, not two.
More generally, many quantum computing roadmaps use the term “logical qubit” to refer to qubits that only support Clifford operations. These can be efficiently simulated classically and are insufficient to run Shor’s algorithm, which requires thousands of error-corrected T gates (or, more generally, non-Clifford gates).
Even if one roadmap’s target is “thousands of logical qubits in year X,” that does not mean the company expects to run Shor’s algorithm to break classical cryptography in the same year X.
These practices seriously distort public perception of how close we are to cryptographically relevant quantum computers, even among well-informed observers.
That said, some experts are indeed excited by the progress. For example, Scott Aaronson recently wrote that, given “the current astonishing rate of hardware progress,”
I now regard it as a live possibility that we’ll have a fault-tolerant quantum computer running Shor’s algorithm before the next US presidential election.
But Aaronson later clarified that he did not mean a cryptographically relevant quantum computer: he would count even a fully fault-tolerant run of Shor’s algorithm factoring 15 = 3 × 5 as an achievement—a computation you can do faster with pencil and paper. That bar would still be a milestone, because previous experiments factoring 15 on quantum computers used simplified circuits, not the full, fault-tolerant Shor’s algorithm. And there’s a reason these experiments always target 15: arithmetic mod 15 is especially easy, while factoring even slightly larger numbers like 21 is much harder on real hardware. Quantum experiments that claim to factor 21 usually rely on extra hints or shortcuts.
In short: the expectation that a cryptographically relevant quantum computer capable of breaking RSA-2048 or secp256k1 will appear within the next 5 years—which is what actually matters for practical cryptography—is not supported by any publicly known progress.
Even 10 years is still ambitious. Given how far we are from a CRQC, it is entirely reasonable to be excited about progress while still thinking on timelines of a decade or more.
What about the US government’s setting of 2035 as a deadline for full post-quantum (PQ) migration of government systems? I think this is a reasonable timeline for completing such a large-scale transition. However, it is not a prediction that a cryptographically relevant quantum computer will exist by then.
What does an HNDL attack apply to (and not apply to)?
A Harvest-Now-Decrypt-Later (HNDL) attack is when an adversary archives encrypted traffic now, then decrypts it once a cryptographically relevant quantum computer exists. Nation-state adversaries are certainly already archiving encrypted communications from the US government at scale, to decrypt them years later if and when a CRQC does exist.
That’s why encryption needs to transition immediately—at least for anyone with confidentiality needs of 10–50 years or more.
But digital signatures—which all blockchains rely on—are different: there is no confidentiality to be attacked retroactively.
In other words, if a cryptographically relevant quantum computer arrives, signature forgery does become possible from that point forward, but past signatures do not “hide” secrets the way encrypted messages do. As long as you know a digital signature was generated before a CRQC arrived, it cannot be a forgery.
This makes the transition to post-quantum digital signatures less urgent than the post-quantum transition for encryption.
Major platforms are acting accordingly: Chrome and Cloudflare have rolled out hybrid X25519+ML-KEM for encryption in TLS.
In this piece, for readability, I use “encryption schemes” even though, strictly, secure communication protocols like TLS use key exchange or key encapsulation mechanisms, not public-key encryption.
Here, “hybrid” means both a post-quantum secure scheme (i.e., ML-KEM) and the current scheme (X25519) are used together, to get the combined security of both. This way, they can (hopefully) prevent HNDL attacks via ML-KEM, while retaining classical security via X25519 in case ML-KEM is found insecure even against today’s computers.
Apple’s iMessage has also deployed this kind of hybrid post-quantum encryption via its PQ3 protocol, and Signal has done so via its PQXDH and SPQR protocols.
In contrast, the rollout of post-quantum digital signatures to critical internet infrastructure is being postponed until a cryptographically relevant quantum computer is truly on the horizon, because current post-quantum signature schemes come with performance penalties (covered in more detail below).
zkSNARKs—Zero-Knowledge Succinct Non-Interactive Arguments of Knowledge, crucial for blockchain scalability and privacy in the long term—are in a similar position to signatures. This is because, even for non-post-quantum-secure zkSNARKs (which use elliptic curve cryptography, just as today’s non-post-quantum encryption and signature schemes do), their zero-knowledge property is post-quantum secure.
The zero-knowledge property ensures that nothing about the secret witness is leaked in the proof—even to a quantum adversary—so there is no secret material to “harvest” for later decryption.
Thus, zkSNARKs are not vulnerable to Harvest-Now-Decrypt-Later attacks. Just as today’s non-post-quantum signatures generated before a CRQC are safe, any zkSNARK proof generated before a CRQC arrives is trustworthy (i.e., the statement proven is definitely true)—even if the zkSNARK uses elliptic curve cryptography. Only after a CRQC arrives can an attacker find convincing proofs of false statements.
What this means for blockchains
Most blockchains are not exposed to HNDL attacks:
Most non-privacy chains, like today’s Bitcoin and Ethereum, primarily use non-post-quantum cryptography for transaction authorization—that is, they use digital signatures, not encryption.
And these signatures are not an HNDL risk: “Harvest-Now-Decrypt-Later” attacks apply to encrypted data. For example, Bitcoin’s blockchain is public; the quantum threat is signature forgery (deriving private keys to steal funds), not decrypting already-public transaction data. So the urgency that HNDL attacks create for encryption does not apply here.
Unfortunately, even otherwise reputable sources like the Federal Reserve have incorrectly claimed that Bitcoin is vulnerable to HNDL attacks, a mistake that overstates the urgency to transition to post-quantum cryptography.
That said, reduced urgency does not mean Bitcoin can wait: it faces different timeline pressures from the immense social coordination required to change the protocol.
Exceptions as of today are privacy chains, many of which encrypt or otherwise obfuscate recipients and amounts. That on-chain data can be harvested now and retroactively deanonymized once quantum computers can break elliptic curve cryptography.
For such privacy chains, the severity of attack varies by blockchain design. For example, for Monero’s curve-based ring signatures and key images (linkability tags for each output to prevent double-spending), the public ledger alone is enough to retrospectively reconstruct the spend graph. But in other chains, the damage is more limited—see Zcash cryptography engineer and researcher Sean Bowe’s discussion for details.
If it’s important that users’ transactions are not exposed by a cryptographically relevant quantum computer, then privacy chains should transition to post-quantum primitives (or hybrid schemes) as soon as feasible. Alternatively, they should adopt architectures that avoid putting decryptable secrets on-chain.
Bitcoin’s special dilemma: governance + abandoned coins
For Bitcoin in particular, two realities drive urgency to begin moving toward post-quantum digital signatures. Neither is about quantum technology itself.
One concern is governance speed: Bitcoin moves slowly. Any contentious issue that the community cannot reach consensus on for an appropriate solution can risk a destructive hard fork.
Another concern is that Bitcoin’s move to post-quantum signatures cannot be a passive migration: owners must actively migrate their coins. This means abandoned, quantum-vulnerable coins cannot be protected. Some estimates put the number of quantum-vulnerable and likely-abandoned BTC at several million coins, worth tens of billions of dollars at current prices (as of December 2025).
However, the quantum threat to Bitcoin will not be a sudden, overnight disaster… but rather a selective, gradual targeting process. Quantum computers will not break all crypto at once—Shor’s algorithm must be run one public key at a time. Early quantum attacks will be extremely expensive and slow. So, once quantum computers can break a single Bitcoin signature key, attackers will selectively prey on high-value wallets.
Moreover, users who avoid address reuse and do not use Taproot addresses—which directly expose the public key on-chain—are largely protected even without protocol changes: their public key is hidden behind a hash function until their coins are spent. When they eventually broadcast a spend transaction, the public key becomes visible, and there is a short, real-time race between honest spenders needing confirmation and a quantum-equipped attacker trying to find the private key and spend the coins before the legitimate transaction finalizes. So, truly vulnerable coins are those whose public keys are already exposed: early pay-to-pubkey outputs, reused addresses, and Taproot holdings.
For abandoned vulnerable coins, there is no simple solution. Some options include:
The Bitcoin community agrees on a “flag day” after which any un-migrated coins are declared burned.
Allowing abandoned, quantum-vulnerable coins to be easily seized by anyone with a cryptographically relevant quantum computer.
The second option presents serious legal and security concerns. Taking coins without the private key using a quantum computer—even if claiming legitimate ownership or good intentions—could trigger major issues under theft and computer fraud laws in many jurisdictions.
Furthermore, “abandoned” is itself a presumption based on inactivity. Nobody really knows if these coins lack a living keyholder. Evidence of prior ownership may not provide legal authority to break cryptographic protection to recover them. This legal ambiguity increases the likelihood that abandoned, quantum-vulnerable coins fall into the hands of malicious actors willing to ignore legal restrictions.
A final Bitcoin-unique issue is its low transaction throughput. Even if a migration plan is finalized, migrating all quantum-vulnerable funds to post-quantum-secure addresses would take months at Bitcoin’s current transaction rate.
These challenges make it vital for Bitcoin to start planning its post-quantum transition now—not because a cryptographically relevant quantum computer is likely before 2030, but because the governance, coordination, and technical logistics needed to migrate tens of billions of dollars’ worth of coins will take years to resolve.
The quantum threat to Bitcoin is real, but its timeline pressure comes from Bitcoin’s own limitations, not an imminent quantum computer. Other blockchains face their own quantum-vulnerable funds challenges, but Bitcoin’s exposure is unique: its earliest transactions used pay-to-pubkey outputs, putting public keys directly on-chain and making a large portion of BTC especially vulnerable to quantum computers. This technical distinction—combined with Bitcoin’s age, value concentration, low throughput, and governance rigidity—makes the problem particularly acute.
Note that the vulnerabilities described above pertain to the cryptographic security of Bitcoin’s digital signatures—not the economic security of the Bitcoin blockchain. That economic security derives from proof-of-work (PoW) consensus, which is not easily attacked by quantum computers, for three reasons:
PoW is based on hashing, so it is subject only to Grover’s quadratic quantum speedup, not Shor’s exponential speedup.
The practical overhead of implementing Grover’s search makes it extremely unlikely that any quantum computer could achieve even moderate real-world speedup over Bitcoin’s PoW mechanism.
Even if significant speedup were realized, it would give large quantum miners an advantage over smaller ones, but would not fundamentally undermine Bitcoin’s economic security model.
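The first of those three points is just arithmetic: Grover’s quadratic speedup halves the security exponent rather than collapsing it the way Shor’s exponential speedup does. A short sketch of the query counts, for illustration:

```python
# Grover's search finds a preimage among 2^n candidates in roughly
# 2^(n/2) quantum hash evaluations: a quadratic speedup, which halves
# the security exponent rather than collapsing it as Shor's does.

def classical_queries(bits: int) -> int:
    return 2 ** bits

def grover_queries(bits: int) -> int:
    return 2 ** (bits // 2)

n = 128  # illustrative preimage-resistance target
print(f"classical brute force: ~2^{n} hash evaluations")
print(f"Grover's algorithm:    ~2^{n // 2} quantum hash evaluations")
```

And even this understates the gap in practice: each quantum hash evaluation is far more expensive than a classical one, the evaluations within one search must run serially, and Grover parallelizes poorly—which is why the second point above holds.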
Costs and risks of post-quantum signatures
To understand why blockchains should not rush to deploy post-quantum signatures, we need to understand the performance costs and our still-evolving confidence in post-quantum security.
Most post-quantum cryptography is based on one of five approaches:
Hashing
Codes
Lattices
Multivariate quadratic systems (MQ)
Isogenies
Why five different approaches? The security of any post-quantum primitive is based on the assumption that quantum computers cannot efficiently solve a specific mathematical problem. The more “structured” this problem is, the more efficient crypto protocols we can build on top of it.
But this is a double-edged sword: extra structure also gives attack algorithms more to exploit. This creates a fundamental tension—stronger assumptions enable better performance, but at the cost of potential security holes (i.e., the risk that the assumption is proven false).
In general, hash-based approaches are the most conservative from a security perspective, since we are most confident quantum computers cannot efficiently attack these protocols. But they are also the least performant. For example, the NIST-standardized hash-based signature scheme (SLH-DSA), even in its smallest parameter setting, produces signatures of 7–8 KB. In contrast, today’s elliptic curve digital signatures are just 64 bytes—a size difference of more than 100x.
Lattice schemes are the main focus for deployment today. The only encryption scheme and two out of three signature algorithms selected by NIST for standardization are lattice-based. One lattice scheme (ML-DSA, formerly Dilithium) produces signatures from 2.4 KB (at 128-bit security) to 4.6 KB (at 256-bit security)—making them about 40–70 times larger than today’s elliptic curve signatures. Another lattice scheme, Falcon, has somewhat smaller signatures (Falcon-512 is 666 bytes, Falcon-1024 is 1.3 KB), but involves complex floating-point operations, which NIST itself flagged as a special implementation challenge. Falcon’s creator Thomas Pornin called it “the most complicated crypto algorithm I’ve ever implemented.”
Implementation security for lattice-based digital signatures is also more challenging than for elliptic curve-based schemes: ML-DSA has more sensitive intermediates and non-trivial rejection sampling logic, requiring side-channel and fault protection. Falcon adds constant-time floating-point issues; in fact, several side-channel attacks have already extracted secret keys from Falcon implementations.
These issues pose immediate risks, unlike the distant threat of cryptographically relevant quantum computers.
There is good reason for caution in deploying more performant post-quantum cryptography. Historically, leading candidates like Rainbow (an MQ-based signature scheme) and SIKE/SIDH (an isogeny-based encryption scheme) were broken classically—that is, using today’s computers, not quantum computers.
This happened late in the NIST standardization process. This is healthy science at work, but it shows that premature standardization and deployment can backfire.
As noted above, internet infrastructure is taking a cautious approach to signature migration. This is especially notable given how long the internet’s cryptographic transitions take once started. The move away from MD5 and SHA-1 hash functions—technically deprecated by internet governance bodies years ago—took many years to actually implement in infrastructure, and in some cases is still ongoing. This happened because these schemes were completely broken, not just potentially vulnerable to future technology.
Unique challenges for blockchains vs. internet infrastructure
Fortunately, blockchains like Ethereum or Solana, maintained by active open-source developer communities, can upgrade faster than traditional internet infrastructure. On the other hand, traditional infrastructure benefits from frequent key rotation, which retires vulnerable keys faster than early quantum machines could target them—something blockchains cannot do, since coins and their associated keys can remain exposed indefinitely.
But overall, blockchains should still follow the internet’s thoughtful approach to signature migration. Signatures in both settings are not exposed to HNDL attacks, and the costs and risks of prematurely migrating to immature post-quantum schemes are still significant, regardless of key longevity.
There are blockchain-specific challenges that make premature migration especially risky and complex: for example, blockchains have unique requirements for signature schemes, particularly the ability to rapidly aggregate many signatures. Today, BLS signatures are often used because they enable very fast aggregation, but they are not post-quantum secure. Researchers are exploring SNARK-based post-quantum signature aggregation. This work is promising but still in its early days.
For SNARKs, the community is currently focused on hash-based constructions as the leading post-quantum option. But a major shift may be coming: I believe that in the coming months and years, lattice-based options will become attractive alternatives, offering better performance than hash-based SNARKs—including shorter proofs, just as lattice-based signatures are shorter than hash-based ones.
The more pressing current issue: implementation security
For the next few years, implementation bugs will be a greater security risk than cryptographically relevant quantum computers. For SNARKs, the main concern is bugs.
Bugs are already a challenge in digital signature and encryption schemes, and SNARKs are much more complex. Indeed, a digital signature scheme can be viewed as a very simple zkSNARK for the statement “I know the private key corresponding to my public key and authorize this message.”
For post-quantum signatures, the immediate risks also include implementation attacks such as side-channel and fault injection attacks. These types of attacks are well-documented and can extract secret keys from deployed systems. They are a much more urgent threat than distant quantum computers.
The community will spend years identifying and fixing bugs in SNARKs, and hardening post-quantum signature implementations against side-channel and fault injection attacks. Because the dust has not yet settled on post-quantum SNARKs and signature aggregation schemes, blockchains that transition prematurely risk being locked into suboptimal schemes. When better options appear, or implementation bugs are found, they may need to migrate again.
What should we do? 7 recommendations
Given the realities I’ve outlined above, I’ll close with recommendations for various stakeholders—from builders to policymakers. The core principle: take the quantum threat seriously, but do not operate on the assumption that a cryptographically relevant quantum computer will arrive before 2030. There is no evidence for this based on current progress. Yet, there are things we can and should do now:
We should deploy hybrid encryption immediately.
At least wherever long-term confidentiality is important and the costs are bearable.
Many browsers, CDNs, and messaging apps (like iMessage and Signal) have already deployed hybrid approaches. Hybrid—post-quantum + classical—can defend against HNDL attacks while hedging against potential weaknesses in post-quantum schemes.
Use hash-based signatures immediately when size is acceptable.
Software/firmware updates—and other such low-frequency, size-insensitive scenarios—should immediately adopt hybrid hash-based signatures. (Hybrid here is to hedge against implementation bugs in new schemes, not because there are doubts about the hash-based security assumption.)
This is conservative, and it provides society with a clear “lifeboat” in the unlikely event a cryptographically relevant quantum computer appears unexpectedly soon. If post-quantum signatures for software updates are not deployed in advance, we would face a bootstrapping problem after a CRQC appears: we would be unable to safely distribute the post-quantum crypto fixes we need to defend against it.
Blockchains need not rush to deploy post-quantum signatures—but should start planning immediately.
Blockchain developers should follow the lead of the network PKI community and adopt a thoughtful approach to post-quantum signature deployment. This allows post-quantum signature schemes to continue maturing in performance and in our understanding of their security. This approach also gives developers time to rearchitect systems to handle larger signatures and to develop better aggregation techniques.
For Bitcoin and other L1s: communities need to define migration paths and policies for abandoned quantum-vulnerable funds. Passive migration is not possible, so planning is essential. And since Bitcoin faces special non-technical challenges—slow governance and a large volume of high-value, potentially abandoned quantum-vulnerable addresses—it is especially important for the Bitcoin community to start planning now.
Meanwhile, we need to allow research on post-quantum SNARKs and aggregatable signatures to mature (likely still several years). Again, premature migration risks lock-in to suboptimal schemes or the need for a second migration to fix implementation bugs.
A note on Ethereum’s account model: Ethereum supports two account types with different implications for post-quantum migration—Externally Owned Accounts (EOAs), the traditional account type controlled by secp256k1 private keys; and smart contract wallets with programmable authorization logic.
In a non-emergency, if Ethereum adds post-quantum signature support, upgradable smart contract wallets could switch to post-quantum verification via contract upgrade—while EOAs may need to move their funds to new post-quantum-secure addresses (though Ethereum would likely provide a dedicated migration mechanism for EOAs as well).
In a quantum emergency, Ethereum researchers have proposed a hard fork plan to freeze vulnerable accounts and allow users to recover funds by proving knowledge of their mnemonic via post-quantum-secure SNARKs. This recovery mechanism would apply to EOAs and any smart contract wallets not yet upgraded.
Practical effects for users: Well-audited, upgradable smart contract wallets may offer a slightly smoother migration path—but not dramatically, and with tradeoffs in trust in wallet providers and upgrade governance. More important is that the Ethereum community continues its work on post-quantum primitives and emergency response plans.
A broader design lesson for builders: Today, many blockchains tightly couple account identity to a specific cryptographic primitive—Bitcoin and Ethereum to ECDSA signatures on secp256k1, and other chains to EdDSA. The challenges of post-quantum migration highlight the value of decoupling account identity from any specific signature scheme. Ethereum’s move toward smart accounts and similar account abstraction work on other chains reflect this trend: allowing accounts to upgrade their authentication logic without abandoning their on-chain history and state. This decoupling does not make post-quantum migration trivial, but it does provide more flexibility than hard-coding accounts to a single signature scheme. (It also supports unrelated features like sponsored transactions, social recovery, and multisig.)
For privacy chains, which encrypt or hide transaction details, earlier transition should be prioritized if performance allows.
User confidentiality on these chains is currently exposed to HNDL attacks, though the severity varies by design. Chains where full retroactive deanonymization is possible from the public ledger face the most urgent risk.
Consider hybrid schemes (post-quantum + classical) to guard against the possibility that an ostensibly post-quantum scheme turns out to be insecure even against classical computers, or make architectural changes to avoid putting decryptable secrets on-chain.
In the near term, prioritize implementation security—not quantum threat mitigation.
Especially for complex cryptographic primitives like SNARKs and post-quantum signatures, bugs and implementation attacks (side-channel, fault injection) will be a much bigger security risk over the next few years than cryptographically relevant quantum computers.
Immediately invest in audits, fuzzing, formal verification, and defense-in-depth/layered security approaches—don’t let quantum concerns distract from the more pressing threat of implementation bugs!
Fund quantum computing development.
One major national security implication of all the above is that we need sustained funding and talent development in quantum computing.
A major adversary achieving cryptographically relevant quantum computing before the US would pose a severe national security risk to us and the rest of the world.
Keep perspective on quantum computing announcements.
As quantum hardware matures, there will be many milestones in the coming years. Paradoxically, the very frequency of these announcements is evidence of how far we are from cryptographically relevant quantum computers: each milestone is just one of many bridges to cross before reaching that point, and each will generate its own headlines and excitement.
Treat press releases as progress reports to be critically evaluated, not as triggers for sudden action.
Of course, there may be surprising breakthroughs or innovations that accelerate projected timelines, just as there may be severe scaling bottlenecks that delay them.
I am not arguing that a cryptographically relevant quantum computer in five years is impossible, just that it is extremely unlikely. The above recommendations are robust to this uncertainty, and following them will avoid more immediate and probable risks: implementation bugs, rushed deployments, and the usual ways crypto transitions go wrong.