I would like to disagree with Felix's conclusions.

As for his three assumptions, I can't disagree with (1); if CRQCs never exist, 
this is largely a nonissue (I'm not particularly worried about classical 
attacks against large RSA keys).  However, I don't believe that 'a CRQC will 
never exist' is a prudent security assumption.

Assumption (2) presumes that the certificates across the entire internet
can easily be updated to be post-quantum.  That is, there won't be an old
server out there with only classical certificates, or if there is, we don't
mind denying access to it.  Given the number of largely unmaintained servers
out there in the world, I personally find that implausible.  Remember how long
it took before we got rid of the old certificates with MD5- or SHA-1-based
signatures?

And assumption (3) asks whether we have any responsibility for the 'unwashed
masses', that is, the people who aren't security experts.  Well, yes, I would
claim that we do.  Those people are relying on us to protect them (so that they
can spend their time on the areas they are experts in).  I believe that we have
a responsibility to those who depend on us, just as we rely on other parts of
the infrastructure (e.g. DNS, routing) that we assume 'just work'.

Felix also raises the point of failures.  I can think of two non-attack
scenarios where an aborted connection may happen:


  * The web site has multiple servers (with DNS records pointing to the
    various servers to do load distribution), and some of the servers have
    PQ certs and some don't.

  * The server was upgraded to a PQ cert, and then that cert was
    deliberately backed out for some reason.

Both of those are possible, but (IMHO) fairly rare; section 3.5 of the draft
discusses (but does not totally mitigate) the second scenario.  Further
discussion in the draft may be warranted.
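
To make that concrete, here is a rough sketch (in Python; the names and the
cache structure are mine, not the draft's) of the remember-and-insist logic
under discussion, with comments marking where the two scenarios above would
trigger a spurious abort:

# Illustrative only: a toy version of the client-side "remember PQ support"
# policy being discussed.  Nothing here is taken from the draft.

pq_seen = {}  # host -> True once a PQ-authenticated handshake has succeeded

def continuity_check(host, handshake_used_pq_cert):
    """Return True to proceed with the connection, False to abort."""
    if handshake_used_pq_cert:
        pq_seen[host] = True   # remember: this server can do PQ
        return True
    if pq_seen.get(host):
        # The server previously authenticated with a PQ cert but is now
        # presenting a classical one.  A real downgrade attack looks exactly
        # like this, but so do both non-attack scenarios above: a mixed
        # fleet behind one DNS name, or a deliberately backed-out PQ cert
        # (the section 3.5 case).
        return False           # abort
    return True                # no PQ history: classical fallback is OK

Even this toy version makes the problem visible: at the moment of the abort
decision, the client cannot distinguish either scenario from a genuine
downgrade.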


________________________________
From: Felix Linker <[email protected]>
Sent: Monday, February 9, 2026 6:02 AM
To: Eric Rescorla <[email protected]>
Cc: TLS WG <[email protected]>
Subject: [TLS] Re: PQC Continuity draft

Hi everyone,

I'm doubtful whether this draft can actually enhance the security of TLS 
connections. I think this draft relies on three assumptions (eventually 
holding): (1) CRQCs exist, (2) the world is somehow oblivious to this and 
PQ-certs are not widely deployed, and (3) we want to protect "the masses," not 
security-sensitive individuals.

If not (1): No downgrade attack possible without compromising server key 
material (which might as well be PQ-key material).
If not (2): Every client could refuse non-PQ-secure connections.
If not (3): Security-sensitive individuals could enforce PQ-secure connections 
(potentially on a per-server basis, when they're concerned about specific 
connections; effectively out-of-band pinning).

Supporting legacy clients cannot really be the concern. It seems more relevant 
how many servers can be expected to support PQC. If servers overwhelmingly 
support PQC, the adversary could try MITM-ing sessions by pretending that a 
client does not support PQC, but an honest client that supports PQC would 
abort that connection on seeing that it is not PQ-secure. The client not 
supporting PQC is screwed anyway.

Additionally, I think this draft can only work if the "failure case" (abort 
connection) is infrequent and correlates with an attack taking place. If there 
are too many false positives, adoption might drop significantly. It seems 
intuitive to me that this may not be the case, and that policy violations will 
more often lead to aborted connections because of misconfiguration (especially 
factoring in the operational feedback mentioned earlier).

Also, I'm doubtful that assumption (2) will hold. Wouldn't the preferred way 
forward be to enforce PQ algorithms and eventually treat non-PQ connections 
as insecure?

Best,
Felix


On Sat, Feb 7, 2026 at 10:20 PM Eric Rescorla <[email protected]> wrote:


On Sat, Feb 7, 2026 at 1:12 PM Muhammad Usama Sardar 
<[email protected]> wrote:

On 07.02.26 21:07, Eric Rescorla wrote:

However, if the client successfully
connects to the server once with the PQ algorithm, then the client can remember
that and in future insist on the server using P and thus prevent this kind of 
attack.

[I don't have a PQ model yet, this is just my intuition which may be completely 
wrong] What I am failing to see is how remembering is better than a simple 
solution: If the client is already convinced that traditional signature 
algorithm T is weak and it only wants PQ signature algorithm P, then it should 
simply not offer T in ClientHello.

The setting of interest is one where there is a large fraction of servers 
which do not support PQ algorithms. In this case, any client which rejects T 
will effectively be unable to communicate with those servers. This might be 
desirable if CRQCs are ubiquitous and attacks are cheap, but what about the 
case where CRQCs are very expensive, or where it's unknown whether a CRQC even 
exists? In this case, it might be desirable to have clients insist on PQ 
algorithms for servers they know support them, but fall back to non-PQ 
algorithms otherwise.

You might find this post useful, as it goes into the situation in some more 
detail:
https://educatedguesswork.org/posts/pq-emergency/#signature-algorithms

-Ekr

_______________________________________________
TLS mailing list -- [email protected]
To unsubscribe send an email to [email protected]
