Quantum Ready (Part 5): The Quantum-Ready Playbook: A 5-Step Plan for Crypto-Agility

By Ryan Wentzel
#Quantum #AI #Crypto-Agility #PQC #NIST

What Does "Quantum Ready" Actually Mean?

Throughout this series, we have traced the quantum computing landscape from fundamental physics to cryptographic threats to near-term applications to engineering challenges. Now we arrive at the question that matters most for security leaders, CISOs, and technology executives: what should your organization actually do about all of this?

"Quantum readiness" is not about purchasing a quantum computer. It is not about hiring quantum physicists. For the vast majority of organizations, quantum readiness is a defensive cybersecurity strategy focused on ensuring that your cryptographic infrastructure can survive the transition to a post-quantum world.

The core of this strategy is straightforward in concept, even if complex in execution: you must inventory every piece of cryptography your organization depends on, assess which systems face the greatest quantum risk, and architect your infrastructure so that cryptographic algorithms can be swapped efficiently when the time comes -- and in some cases, that time is now.

This is not a problem you can solve in a quarter. Cryptographic migrations are historically measured in years. The transition from SHA-1 to SHA-256 began with NIST's deprecation recommendation in 2011 and was still incomplete a decade later. The move from 3DES to AES followed a similar multi-year trajectory. The post-quantum migration is more complex than either, involving larger key sizes, new protocol behaviors, potential performance regressions, and ecosystem-wide coordination.

Organizations that begin this work now are not overreacting to a distant threat. They are applying the same risk management discipline they would to any other foreseeable business continuity challenge. The difference is that the "Harvest Now, Decrypt Later" threat (discussed in Part 2) means that data encrypted today with quantum-vulnerable algorithms may be captured now and decrypted once a sufficiently capable quantum computer exists. Every day of delay extends the window of exposure.

The Prime Directive: Achieve Crypto-Agility

If there is a single organizing principle for quantum readiness, it is crypto-agility: the ability to swap cryptographic algorithms, protocols, and implementations across your infrastructure without re-architecting your systems.

Crypto-agility is not a new concept invented for the quantum threat. It has always been a best practice in security architecture. Every time an algorithm is deprecated (MD5, SHA-1, RC4, 3DES), every time a vulnerability is discovered (Heartbleed in OpenSSL, the Dual EC DRBG backdoor), every time a compliance requirement changes (PCI DSS mandating TLS 1.2+), organizations with crypto-agile architectures adapt quickly while those with hard-coded cryptographic dependencies scramble for months or years.

The quantum transition simply makes crypto-agility existential rather than aspirational. When NIST-standardized post-quantum algorithms must be deployed across your infrastructure, the question is whether that deployment takes weeks or years. Crypto-agility determines the answer.

What does crypto-agility look like in practice? At its core, it means three things:

Abstraction: Cryptographic operations are accessed through abstraction layers rather than direct algorithm calls. Your application code calls "encrypt" or "sign" through an API; the specific algorithm is configured externally, not embedded in source code. This sounds obvious, but an enormous amount of production code contains hard-coded references to specific algorithms, key sizes, and parameters.

Configuration-driven algorithm selection: The choice of which algorithm to use for a given operation is determined by configuration, policy, or a centralized cryptographic service -- not by compiled code. Changing the algorithm should require a configuration update, not a code release.

Modularity: Cryptographic components (key management, certificate authorities, TLS termination, data-at-rest encryption, digital signatures) are modular and independently upgradable. You should be able to upgrade your TLS library without rebuilding your application, rotate certificates without downtime, and migrate key management to new algorithms without touching every system that consumes keys.
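
To make the first two properties concrete, here is a minimal sketch of configuration-driven algorithm selection. The registry, configuration dictionary, and function names are hypothetical illustrations, not a production design; the point is that application code never names an algorithm, so "migrating" is a configuration change rather than a code release.

```python
import hmac
import hashlib

# Hypothetical backend registry: algorithm names mapped to implementations.
# In a real system this lives in a central cryptographic service or policy file.
MAC_BACKENDS = {
    "hmac-sha256": lambda key, data: hmac.new(key, data, hashlib.sha256).digest(),
    "hmac-sha3-256": lambda key, data: hmac.new(key, data, hashlib.sha3_256).digest(),
}

# Configuration, not code: swapping the algorithm is a config update.
CONFIG = {"mac_algorithm": "hmac-sha256"}

def mac(key: bytes, data: bytes) -> bytes:
    """Application code calls 'mac'; the algorithm is resolved at runtime."""
    return MAC_BACKENDS[CONFIG["mac_algorithm"]](key, data)

tag_a = mac(b"k", b"msg")
CONFIG["mac_algorithm"] = "hmac-sha3-256"   # a "migration" with no code release
tag_b = mac(b"k", b"msg")
```

The same pattern extends naturally: when a post-quantum backend becomes available in your library of choice, it is registered once and enabled everywhere through configuration.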

Achieving full crypto-agility across a large enterprise is a multi-year effort. But even partial progress dramatically reduces migration risk. The 5-step playbook that follows provides a structured approach to building quantum readiness with crypto-agility as the foundation.

A 5-Step Quantum-Ready Playbook

Step 1: Build Your Cryptographic Inventory (CBOM)

You cannot protect what you cannot see. The first step in any quantum readiness program is a comprehensive cryptographic inventory -- sometimes called a Cryptographic Bill of Materials (CBOM), analogous to the Software Bill of Materials (SBOM) concept that has gained traction in software supply chain security.

A CBOM catalogs every cryptographic asset across your technology stack:

  • Algorithms in use: RSA, ECDSA, ECDH, AES, SHA-256, HMAC, and others. Document key sizes, curve parameters, and modes of operation.
  • Protocols: TLS versions and cipher suites, IPsec configurations, SSH key types, S/MIME certificates, VPN protocols, API authentication mechanisms.
  • Certificates: Every X.509 certificate in your infrastructure -- web servers, internal services, code signing, email, mutual TLS, IoT device certificates. Document their issuing CAs, validity periods, key types, and renewal processes.
  • Key management systems: Hardware Security Modules (HSMs), Key Management Services (cloud KMS), certificate management platforms, secrets managers. Document what algorithms they support and their upgrade paths.
  • Data-at-rest encryption: Database encryption (TDE), file-level encryption, full-disk encryption, backup encryption, archive encryption. Document the algorithms, key sizes, and key rotation procedures.
  • Third-party and SaaS dependencies: Your vendors' cryptographic implementations matter too. If your payment processor or cloud provider uses quantum-vulnerable cryptography, that is your risk.
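
A CBOM entry can be as simple as a structured record with a derived risk flag. The schema below is a hypothetical minimal sketch (real CBOM tooling captures far more detail, such as certificate chains and library versions), but it shows the essential idea: each asset carries enough metadata to answer "is this quantum-vulnerable, and where is it?"

```python
from dataclasses import dataclass, field

# Public-key algorithms whose security rests on factoring or discrete logs,
# which Shor's algorithm breaks on a large fault-tolerant quantum computer.
QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "DSA", "DH"}

@dataclass
class CBOMEntry:
    asset: str            # e.g. "api-gateway TLS cert"
    algorithm: str        # e.g. "RSA"
    key_size: int         # in bits
    location: str         # host, repo, or service where it was found
    quantum_vulnerable: bool = field(init=False)

    def __post_init__(self):
        self.quantum_vulnerable = self.algorithm in QUANTUM_VULNERABLE

inventory = [
    CBOMEntry("web TLS certificate", "RSA", 2048, "edge-lb"),
    CBOMEntry("backup encryption", "AES", 256, "backup-cluster"),
]
at_risk = [e.asset for e in inventory if e.quantum_vulnerable]
```

Note that symmetric algorithms like AES-256 are flagged as safe here: Grover's algorithm only halves their effective key strength, which adequate key sizes already absorb.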

Manual inventory is impractical for any organization of meaningful size. Automated discovery tools are essential. Several vendors now offer cryptographic discovery and inventory solutions that scan code repositories, network traffic, certificate stores, and configuration files to build a CBOM. Open-source tools like the OWASP Dependency-Check project can identify cryptographic libraries and their versions in your software supply chain.

The output of this step is a comprehensive map of your cryptographic surface area, with enough detail to prioritize migration efforts.

Step 2: Prioritize by Data Shelf-Life and Risk

Not all cryptographic assets face equal quantum risk. Prioritization requires understanding two dimensions: the sensitivity of the data being protected and its required confidentiality duration (shelf-life).

Start by classifying your data into shelf-life categories:

  • Decades-long sensitivity: Military/government secrets, healthcare records (patient lifetime), trade secrets, biometric data, intellectual property with long competitive value
  • Medium-term sensitivity (5-15 years): Financial records, legal documents, strategic business plans, customer PII under regulatory retention requirements
  • Short-term sensitivity (under 5 years): Session tokens, short-lived authentication credentials, transient communications

Cross-reference shelf-life with the threat model. Systems handling data in the first category are immediate HNDL targets and should be prioritized for PQC migration. Systems in the second category should be on near-term roadmaps. Systems in the third category have more runway but should still be included in crypto-agility architecture planning.
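
This prioritization logic is often expressed as Mosca's inequality: if the data's shelf-life plus your migration time exceeds the time until a cryptographically relevant quantum computer (CRQC) arrives, the data is already at risk. A small calculation makes it tangible; the CRQC horizon below is an illustrative assumption, not a prediction.

```python
def quantum_risk_window(shelf_life_years: float,
                        migration_years: float,
                        years_to_crqc: float) -> float:
    """Mosca's inequality: data is exposed when shelf-life plus migration
    time exceeds the time until a CRQC. Returns the exposure window in
    years; a result <= 0 means the data stays safe."""
    return shelf_life_years + migration_years - years_to_crqc

# Illustrative numbers only -- the CRQC horizon is an assumption, not a fact.
exposure = quantum_risk_window(shelf_life_years=25,
                               migration_years=5,
                               years_to_crqc=12)
# Healthcare records with 25-year sensitivity and a 5-year migration would
# remain exposed for years after classical encryption falls.
```

The uncomfortable feature of this arithmetic is that two of the three variables (shelf-life and migration time) are under your control, and only one is not.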

Beyond data sensitivity, prioritize systems that are:

  • Internet-facing: Exposed to traffic interception by sophisticated adversaries
  • High-value targets: Government, defense, healthcare, financial services, critical infrastructure
  • Difficult to update: Embedded systems, IoT devices, OT/ICS environments where firmware updates are rare or risky
  • Long-lived: Systems expected to operate for a decade or more without major overhaul

Step 3: Follow NIST Standards and Regulatory Guidance

The cryptographic community has not been idle. NIST finalized its first set of post-quantum cryptographic standards in 2024, providing concrete algorithms that organizations can begin implementing:

  • FIPS 203 (ML-KEM): Module-Lattice-Based Key Encapsulation Mechanism, based on CRYSTALS-Kyber. This is the primary standard for key exchange and encryption, replacing the quantum-vulnerable ECDH and RSA key exchange.
  • FIPS 204 (ML-DSA): Module-Lattice-Based Digital Signature Algorithm, based on CRYSTALS-Dilithium. This is the primary standard for digital signatures, replacing ECDSA and RSA signatures.
  • FIPS 205 (SLH-DSA): Stateless Hash-Based Digital Signature Algorithm, based on SPHINCS+. This provides a conservative, hash-based alternative for signatures that relies on the well-understood security of hash functions rather than lattice assumptions.
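
To see why hash-based signatures like SLH-DSA are considered conservative, it helps to look at the simplest member of the family. The toy below is a Lamport one-time signature, not SLH-DSA itself (SLH-DSA builds a full many-time scheme on top of related ideas), but it shows the core property: forging a signature requires inverting a hash function, nothing more exotic.

```python
import os
import hashlib

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    # 256 pairs of random secrets; the public key is their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits(msg: bytes):
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Reveal one secret per digest bit. The key must NEVER be reused --
    # hence "one-time" -- which is the limitation SLH-DSA removes.
    return [sk[i][bit] for i, bit in enumerate(bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(s) == pk[i][bit]
               for i, (s, bit) in enumerate(zip(sig, bits(msg))))

sk, pk = keygen()
sig = sign(sk, b"release-v1.0")
```

The price of this conservatism is size: even this toy scheme has kilobytes of key and signature material, which foreshadows the certificate-size discussion later in this playbook.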

The NSA's CNSA 2.0 (Commercial National Security Algorithm Suite 2.0) guidance provides a timeline for U.S. national security systems. Key milestones include: software and firmware signing, along with web servers and cloud services, must support CNSA 2.0 algorithms by 2025; traditional networking equipment by 2026; operating systems by 2027; and custom and legacy applications by 2033. Non-national-security organizations should treat these dates as leading indicators of broader industry expectations.

The regulatory landscape is expanding rapidly. The White House's National Security Memorandum 10 (NSM-10) directed federal agencies to inventory their cryptographic systems and develop migration plans. The EU is developing its own post-quantum transition guidance. Financial regulators, healthcare regulators, and critical infrastructure authorities are incorporating quantum risk into their frameworks.

The message is clear: PQC migration is becoming a compliance requirement, not just a best practice.

Step 4: Architect for Agility

With your inventory complete, priorities set, and standards identified, the next step is ensuring your architecture can actually execute the migration efficiently.

Abstraction layers: Implement cryptographic abstraction in your application code. Instead of calling specific algorithm implementations directly, use wrapper libraries or cryptographic service providers that support algorithm selection through configuration. Languages and frameworks increasingly offer this: Java's JCA/JCE architecture, .NET's CNG API, and Python's cryptography library all support pluggable algorithm backends.

Algorithm-agnostic APIs: Design internal APIs so that cryptographic parameters (algorithm, key size, mode) are metadata, not embedded logic. When a service requests encryption, it should specify a security policy ("encrypt-sensitive-data"), not an algorithm ("AES-256-GCM"). A central policy engine maps security policies to algorithms, making algorithm changes a configuration update.
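
A minimal sketch of such a policy engine, with hypothetical intent names and a plain dictionary standing in for what would realistically be a managed configuration service:

```python
# Hypothetical central policy table: applications name an intent, never an
# algorithm. A PQC migration edits this table, not application code.
CRYPTO_POLICY = {
    "encrypt-sensitive-data": {"alg": "AES-256-GCM", "key_bits": 256},
    "sign-code":              {"alg": "ECDSA-P256"},
    "key-exchange-external":  {"alg": "X25519"},
}

def resolve_policy(intent: str) -> dict:
    """Map a security intent to concrete cryptographic parameters."""
    return CRYPTO_POLICY[intent]

# Migration day: flip external key exchange to a hybrid classical+PQC scheme.
CRYPTO_POLICY["key-exchange-external"] = {"alg": "X25519+ML-KEM-768"}
```

Every service that asked for "key-exchange-external" now gets the hybrid scheme on its next policy fetch, with no redeployment.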

HSM readiness: Verify that your Hardware Security Modules support PQC algorithms or have firmware upgrade paths to add support. Major HSM vendors (Thales, Entrust, Utimaco) have been adding PQC capabilities, but older hardware may require replacement.

Hybrid key exchange: During the transition period, implement hybrid schemes that combine a classical algorithm with a post-quantum algorithm. For example, X25519+ML-KEM combines the battle-tested X25519 elliptic curve key exchange with the new ML-KEM post-quantum algorithm. If either algorithm is broken, the other still provides protection. This belt-and-suspenders approach is recommended by NIST and is already supported in TLS 1.3 implementations. Chrome and other major browsers began supporting hybrid key exchange in 2024.
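
The essential mechanic of a hybrid scheme is the combiner: both shared secrets feed a single key derivation step, so an attacker must break both components to recover the session key. The sketch below uses random placeholders where a real handshake would supply the X25519 and ML-KEM outputs, and an HMAC-based extract step as a simplified stand-in for the KDF a real protocol specifies.

```python
import os
import hmac
import hashlib

def combine_shared_secrets(ss_classical: bytes, ss_pq: bytes,
                           context: bytes = b"hybrid-kex-demo") -> bytes:
    """Derive one session key from both secrets (HKDF-extract style).
    Compromise of either input alone does not reveal the output."""
    return hmac.new(context, ss_classical + ss_pq, hashlib.sha256).digest()

# Placeholders: in a real handshake these come from X25519 and ML-KEM-768.
ss_x25519 = os.urandom(32)
ss_mlkem = os.urandom(32)
session_key = combine_shared_secrets(ss_x25519, ss_mlkem)
```

This is why hybrid deployment is low-regret: if ML-KEM were ever weakened, the classical component still protects the session; if quantum computers break X25519, the PQC component does.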

Certificate management: PQC certificates are significantly larger than their classical counterparts. ML-DSA public keys are approximately 1,312 bytes (compared to 32 bytes for Ed25519), and signatures are approximately 2,420 bytes. This impacts certificate chain sizes, TLS handshake latency, and certificate storage. Ensure your certificate management infrastructure can handle larger certificates and plan for the bandwidth implications.
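
Back-of-the-envelope arithmetic shows why this matters at the protocol level. Using the approximate figures above (ML-DSA-44 parameter set versus Ed25519; exact numbers vary with encoding and parameter choice), the cryptographic material in a typical three-certificate chain grows by well over an order of magnitude:

```python
# Approximate public-key and signature sizes in bytes. Figures are for the
# ML-DSA-44 parameter set vs. Ed25519; exact values depend on encoding.
SIZES = {
    "Ed25519":   {"public_key": 32,   "signature": 64},
    "ML-DSA-44": {"public_key": 1312, "signature": 2420},
}

def cert_crypto_bytes(alg: str, chain_length: int = 3) -> int:
    """Key + signature material across a chain (leaf, intermediate, root).
    Ignores subject names, extensions, and encoding overhead."""
    s = SIZES[alg]
    return chain_length * (s["public_key"] + s["signature"])

classical = cert_crypto_bytes("Ed25519")     # 3 * (32 + 64)
pqc = cert_crypto_bytes("ML-DSA-44")         # 3 * (1312 + 2420)
```

Roughly 288 bytes versus about 11 KB of cryptographic material per chain: enough to push TLS handshakes past initial congestion windows on some paths, which is why measuring handshake latency (Step 5) is not optional.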

Step 5: Test, Benchmark, and Roadmap

The final step translates architecture into action through rigorous testing, performance validation, and a phased rollout plan.

Performance benchmarking: PQC algorithms have different performance characteristics than their classical predecessors. ML-KEM key encapsulation is fast (comparable to or faster than RSA key exchange), but key sizes are larger. ML-DSA signing and verification are fast, but signatures and public keys are significantly larger. Test the impact on your specific workloads: TLS handshake times, API response latencies, certificate validation overhead, bandwidth consumption, and storage requirements. Pay particular attention to constrained environments (mobile devices, IoT, embedded systems) where increased key and signature sizes may have outsized impact.
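
A benchmarking harness need not be elaborate to be useful. The generic helper below times any callable; the hash workload is a stand-in you would replace with your actual operations under test (TLS handshake, ML-DSA signing via your chosen library, certificate validation, and so on).

```python
import time
import hashlib

def benchmark(fn, iterations: int = 1000) -> float:
    """Return mean wall-clock time per call, in milliseconds."""
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    return (time.perf_counter() - start) / iterations * 1000.0

# Stand-in workload; swap in the operation you actually need to measure.
payload = b"x" * 4096
ms_per_call = benchmark(lambda: hashlib.sha256(payload).digest())
```

Run the same harness before and after enabling PQC algorithms, on production-representative hardware, and track the deltas per workload rather than trusting published microbenchmarks.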

Interoperability testing: Your systems do not operate in isolation. Test PQC algorithm support across your ecosystem: load balancers, CDNs, API gateways, partner integrations, client applications, and third-party services. Identify interoperability gaps early. The OQS (Open Quantum Safe) project provides PQC-enabled forks of OpenSSL and other libraries for testing purposes.

Phased rollout plan: Define a migration sequence that prioritizes high-risk systems identified in Step 2 while managing operational risk:

  • Phase 1 (immediate): Enable hybrid key exchange on internet-facing TLS endpoints. This protects new communications against HNDL with minimal disruption.
  • Phase 2 (near-term): Migrate internal PKI and certificate infrastructure to support PQC algorithms. Update code signing to use ML-DSA.
  • Phase 3 (medium-term): Migrate data-at-rest encryption, VPN infrastructure, and internal service-to-service communication.
  • Phase 4 (longer-term): Address legacy systems, embedded devices, and third-party dependencies that require vendor coordination.

Vendor readiness assessment: Survey your critical technology vendors on their PQC roadmaps. Key questions include: when will their products support FIPS 203/204/205? Do they have a hybrid deployment option? What is their HSM upgrade path? Vendor readiness (or lack thereof) will constrain your migration timeline and should be factored into procurement decisions.

Governance and Communication

Technical execution alone is insufficient without organizational governance. Establish a quantum readiness working group that includes representation from security, IT infrastructure, application development, compliance, legal, and executive leadership.

Board-level communication should frame quantum risk in business terms, not technical jargon. The message is: "Our encrypted data has a shelf-life, and our encryption has an expiration date. The gap between those two dates is our risk window, and we are closing it through a structured migration program." Quantify the risk where possible: regulatory exposure, competitive intelligence loss, and customer trust implications.

Budget planning should account for the multi-year nature of cryptographic migrations. Major cost categories include: cryptographic discovery tooling, application code refactoring, HSM upgrades or replacements, certificate infrastructure changes, performance testing and optimization, staff training, and third-party integration coordination. The investment is significant but substantially less than the cost of an emergency migration under regulatory or threat pressure.

Conclusion

Quantum readiness is not a technology purchase -- it is an organizational capability. It begins with understanding your cryptographic surface area, prioritizing by risk, aligning with standards, architecting for agility, and executing a disciplined migration plan. The organizations that treat this as a strategic program rather than a one-time project will navigate the quantum transition smoothly. Those that defer will face compressed timelines, regulatory pressure, and the uncomfortable realization that years of their most sensitive data may have been harvested by adversaries who planned ahead.

The quantum era is approaching. The question is not whether your cryptography will need to change -- it will. The question is whether you will change it on your timeline or be forced to change it on someone else's. Start now. Build your inventory. Achieve crypto-agility. The playbook is clear; the only variable is execution.
