Muhammad Ibrahim
April 2026 · 14 min read · Post-Quantum Cryptography TLS Research

Wire-Level PQC Detection: What Your TLS Handshakes Are Actually Saying

Most tools tell you what cipher suites your server supports. The prototype described here tells you what cryptographic guarantees your infrastructure is actually providing — at the wire level.

There is a meaningful difference between what your server advertises and what it actually negotiates. Most post-quantum cryptography scanning tools operate at the configuration layer — they inspect your server settings, your nginx or Apache TLS configuration, your certificate metadata. They tell you what cipher suites are enabled.

But cipher suite configuration is not the same as cryptographic negotiation. A server can be configured to support ML-KEM and still negotiate a classical ECDH handshake with every client that connects. The configuration says one thing. The wire says another.

This gap — between advertised capability and actual negotiated cryptography — is the problem that this research prototype was built to close.

The core argument: If you haven't inspected your live TLS handshakes at the byte level, you don't actually know your PQC posture. You know your configuration. Those are different things.

How TLS negotiation actually works

To understand why wire-level detection matters, it helps to understand how TLS 1.3 negotiation proceeds. The sequence is deceptively simple:

  1. The client sends a ClientHello — listing the cipher suites, supported groups, and key share data it is willing to use.
  2. The server responds with a ServerHello — selecting a single cipher suite and key share from the client's offerings, and providing its own key share data.
  3. Both parties derive session keys from the exchanged key material. Everything after this point is encrypted.

The ServerHello is the moment of truth. It is the server's binding commitment to a specific cryptographic algorithm for this session. And it is encoded as a structured binary message with a precisely defined format — which means it can be parsed, inspected, and analysed at the byte level.

What the ServerHello byte structure tells you

A TLS 1.3 ServerHello message follows a well-documented structure. The field that matters most for PQC detection is the key_share extension: its named group identifier records which key-exchange group the server selected, and its key exchange field carries the server's key material. (The supported_groups extension in the ClientHello lists what the client offered; the ServerHello's key_share records what was actually chosen.)

For classical TLS, you typically see named groups like x25519 (0x001d) or secp256r1 (0x0017). For hybrid PQC handshakes, you see combined named groups — such as X25519MLKEM768 (0x11ec) — where the key share contains concatenated classical and PQC key material.

Offset       Field            Value (example)   Meaning
0            Content type     0x16              Handshake record
1–2          Legacy version   0x03 0x03         TLS 1.2 compatibility
5            Handshake type   0x02              ServerHello
38–39        Cipher suite     0x13 0x01         TLS_AES_128_GCM_SHA256
ext offset   Named group      0x11 0xec         X25519MLKEM768 — hybrid PQC
ext offset   Named group      0x00 0x1d         x25519 — classical only

The named group value at the key_share extension is the definitive signal. 0x001d means classical x25519. 0x11ec means hybrid ML-KEM-768 with x25519. These two bytes tell you more about your actual PQC posture than any configuration audit.

The prototype's detection approach

The prototype — a PQC detection system I developed as part of my research — performs detection by capturing live TLS traffic and parsing the raw ServerHello bytes in real time. The approach has three stages:

Stage 1 — Packet capture and handshake isolation

The prototype captures network traffic on the target interface and filters for TLS handshake records. Because TLS 1.3 encrypts application data immediately after the handshake, the ServerHello is the only point at which the negotiated parameters are visible in plaintext on the wire. The prototype isolates these records from the packet stream for analysis.
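The record isolation in Stage 1 can be sketched in a few lines of Python. The capture side (libpcap, a mirror port, etc.) is out of scope here; this assumes you already have a reassembled TCP payload as bytes, and the function names are illustrative rather than the prototype's actual API.

```python
TLS_HANDSHAKE = 0x16   # record content type: handshake
SERVER_HELLO = 0x02    # handshake message type: ServerHello

def iter_tls_records(payload: bytes):
    """Yield (content_type, version, body) for each TLS record in a TCP payload."""
    i = 0
    while i + 5 <= len(payload):
        ctype = payload[i]
        version = (payload[i + 1], payload[i + 2])
        length = int.from_bytes(payload[i + 3:i + 5], "big")
        body = payload[i + 5:i + 5 + length]
        if len(body) < length:
            break  # truncated record; wait for more reassembled data
        yield ctype, version, body
        i += 5 + length

def is_server_hello(record_body: bytes) -> bool:
    """True if a handshake record body carries a ServerHello message."""
    return len(record_body) >= 4 and record_body[0] == SERVER_HELLO
```

Filtering is then a matter of keeping records where the content type is 0x16 and `is_server_hello` holds.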

Stage 2 — ServerHello byte parsing

Each captured ServerHello is parsed according to the TLS 1.3 record format. The prototype extracts the extensions block, locates the key_share extension (type 0x0033), and reads the named group identifier from the server's key share entry.
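The Stage 2 parse can be sketched as follows. Field offsets follow the ServerHello layout in RFC 8446, and the sketch assumes a well-formed message; the function name is an assumption, not the prototype's real interface.

```python
KEY_SHARE = 0x0033  # extension type for key_share

def server_hello_group(hs: bytes):
    """Extract the named-group identifier from a ServerHello handshake message.

    `hs` is the full handshake message: type(1) + length(3) + body.
    Returns the 16-bit group id from the key_share extension, or None.
    """
    assert hs[0] == 0x02, "not a ServerHello"
    i = 4                        # skip handshake header
    i += 2 + 32                  # legacy_version + random
    sid_len = hs[i]
    i += 1 + sid_len             # legacy_session_id_echo
    i += 2 + 1                   # cipher_suite + legacy_compression_method
    ext_end = i + 2 + int.from_bytes(hs[i:i + 2], "big")
    i += 2                       # extensions length field
    while i + 4 <= ext_end:
        etype = int.from_bytes(hs[i:i + 2], "big")
        elen = int.from_bytes(hs[i + 2:i + 4], "big")
        if etype == KEY_SHARE:
            return int.from_bytes(hs[i + 4:i + 6], "big")  # named group
        i += 4 + elen
    return None
```

Run against a synthetic ServerHello carrying a key_share with group 0x11ec, this returns 4588 (X25519MLKEM768).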

The named group value is then classified against a registry of known algorithm identifiers.
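One plausible shape for such a registry is below. The codepoints are real IANA TLS supported-group values; the registry structure itself is an assumption, since the prototype's actual tables are not published.

```python
# group id -> (name, classification); illustrative subset only
NAMED_GROUPS = {
    0x0017: ("secp256r1", "classical"),
    0x0018: ("secp384r1", "classical"),
    0x001D: ("x25519", "classical"),
    0x11EC: ("X25519MLKEM768", "hybrid-pqc"),
}

def classify_group(group_id: int):
    """Classify a named-group codepoint; unknown ids are flagged, not dropped."""
    return NAMED_GROUPS.get(group_id, (f"unknown-0x{group_id:04x}", "unknown"))
```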

Stage 3 — Formal verification artefact generation

This is where this approach goes beyond conventional scanning tools. For each observed handshake, the prototype system generates a structured verification artefact — a formal record of the negotiated parameters, the observed key material lengths, and the classification result. These artefacts are designed to bridge the gap between runtime observation and formal cryptographic verification.

The artefacts can be ingested by formal verification toolchains — including Lean 4 theorem provers and Tamarin protocol models — to produce machine-verifiable proofs about the cryptographic guarantees that a given session actually provided. This transforms a runtime observation into a formally verifiable claim: not just "we observed a PQC handshake" but "we have machine-verifiable proof that this session used ML-KEM-768 key encapsulation."
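A minimal illustration of what such an artefact could contain is sketched below. The prototype's real schema, and the bridge into Lean 4 or Tamarin, are unpublished, so every field name here is an assumption; the point is that the record is canonical and content-addressed, so a downstream prover can check it was not altered.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_artefact(server: str, group_id: int, group_name: str,
                  classification: str, key_share_len: int) -> dict:
    """Build an illustrative verification artefact for one observed handshake."""
    record = {
        "server": server,
        "observed_at": datetime.now(timezone.utc).isoformat(),
        "named_group": f"0x{group_id:04x}",
        "group_name": group_name,
        "classification": classification,
        "key_share_length": key_share_len,
    }
    # Digest over a canonical serialisation, so verifiers can detect tampering.
    canonical = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(canonical).hexdigest()
    return record
```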

Why formal verification matters for PQC migration: Regulatory and compliance frameworks increasingly require verifiable evidence of security posture, not just configuration documentation. A formally verified artefact that proves a session used quantum-resistant key exchange is a fundamentally different class of evidence from a configuration screenshot. As PQC compliance requirements emerge in financial services, healthcare, and government sectors, the ability to produce machine-verifiable proof of cryptographic posture will become increasingly important.

What the detection surface looks like in practice

When the prototype is deployed against real infrastructure, the findings typically fall into four categories:

Fully classical endpoints — servers negotiating x25519 or ECDH exclusively, with no PQC capability. These are the highest-priority migration targets. They are producing encrypted traffic that is potentially vulnerable to harvest-now-decrypt-later collection.

PQC-capable but not negotiating PQC — servers that are configured to support hybrid cipher suites but are negotiating classical handshakes in practice. This is the most common and most dangerous finding. The configuration audit says PQC-ready; the wire says classical. This happens when clients connecting to the server don't yet support hybrid groups, so the server falls back to classical negotiation.

Hybrid PQC endpoints — servers successfully negotiating hybrid key exchange, typically X25519MLKEM768, with PQC-capable clients. This is the current best-practice posture — quantum-resistant for clients that support it, classically secure for those that don't.

Application-layer PQC with classical transport — perhaps the most interesting finding. Some systems implement PQC algorithms at the application layer — encrypting payloads using ML-KEM or Dilithium — while presenting a classical TLS frontend. The prototype detects this pattern through the mismatch between the observed ServerHello (classical) and application-layer indicators, and flags it as a configuration that requires careful documentation to avoid being misclassified as either fully quantum-safe or fully classical.
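The four categories above can be expressed as a simple decision function. The labels and input signals are illustrative (the prototype's actual decision logic is not published); the inputs are the wire-level transport classification, whether a configuration audit found PQC capability, and whether application-layer PQC indicators were observed.

```python
def posture(transport_class: str, config_pqc_capable: bool,
            app_layer_pqc: bool) -> str:
    """Map observed signals to one of the four detection-surface categories."""
    if transport_class == "hybrid-pqc":
        return "hybrid-pqc"                          # best-practice posture
    if app_layer_pqc:
        return "app-layer-pqc-classical-transport"   # the mismatch finding
    if config_pqc_capable:
        return "pqc-capable-not-negotiating"         # config says PQC, wire says classical
    return "fully-classical"                         # highest-priority migration target
```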

The mismatch finding: The application-layer versus transport-layer PQC mismatch is one of the most commonly misunderstood postures in enterprise infrastructure. A system running CRYSTALS-Kyber at the application layer is not the same as a quantum-safe system — if the TLS transport is classical, the session keys are still vulnerable. The prototype makes this distinction explicit.

What the methodology reveals in practice

When this detection approach is applied to real infrastructure, findings tend to fall into one of the four classification states described above. The most significant and consistent finding across evaluations is the application-layer versus transport-layer mismatch — servers that are configured with PQC algorithm support at the application layer but are presenting a classical TLS frontend to every connecting client.

This finding is not rare. It is the default state of most systems that have begun PQC implementation without upgrading their TLS frontend to an OQS-capable library. Configuration audits miss it entirely. Certificate inspection misses it. Only wire-level ServerHello parsing reveals it — because only the ServerHello contains the actual group identifier that was negotiated.

A second consistent finding: detecting PQC negotiation in the wild requires OQS-capable client tooling. A measurement client running stock OpenSSL will advertise only classical groups in its ClientHello — which means even servers with full server-side PQC capability will respond with classical negotiation. This means that most existing PQC deployment surveys significantly undercount actual PQC capability in production infrastructure.

I will share detailed evaluation findings here as this research progresses toward publication. If you are working on PQC deployment assessment and want to discuss methodology, I would welcome the conversation.

The gap between scanning and knowing

Configuration audits answer the question: what is this server capable of? Wire-level detection answers the question: what is this server actually doing?

For most organisations, the answer to these two questions is different — and the gap between them is where the real risk lives. A server that is capable of hybrid PQC but negotiating classical handshakes with 90% of its clients is, for those clients, indistinguishable from a server with no PQC capability at all.

Understanding your actual negotiated cipher suite distribution — across real clients, under real traffic conditions — requires wire-level visibility. It cannot be inferred from configuration alone.
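Once per-session group identifiers are being logged, the distribution itself is trivial to compute. A sketch, where the set of hybrid codepoints is an assumption to be extended as new groups are standardised:

```python
def pqc_share(observed_groups, hybrid_ids=frozenset({0x11EC})):
    """Fraction of observed sessions that negotiated a hybrid/PQC named group."""
    if not observed_groups:
        return 0.0
    hits = sum(1 for g in observed_groups if g in hybrid_ids)
    return hits / len(observed_groups)
```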

Of the TLS sessions your infrastructure completed in the last 24 hours, what percentage negotiated a PQC or hybrid key exchange?

If you don't know the answer to that question, you don't yet know your PQC posture. You know your PQC configuration. The research underpinning the prototype — including the full detection methodology and formal verification pipeline — is in preparation for publication. I will share details here as the papers approach submission.

If you are working on PQC migration, infrastructure security, or formal verification of cryptographic protocols, I would welcome the conversation. You can reach me on LinkedIn or via my research blog.

Muhammad Ibrahim — Post-Quantum Security Researcher · Cloud Architect · Contributor to the prototype system at my research group · Research co-authored with Imperial College London · MSc, Middlesex University London