Hi Yaroslav,

Thank you for raising important operational questions.
Let me start with your concrete questions:

- Who is expected to deploy and benefit from this mechanism?

In this context I am only interested in the "open Internet". As an aside, I only wish real-life enterprise usage were as well governed as you present it... but that's beside the point.

- In which environments would it realistically be safe and reliable to use?

If we're talking about the open Internet, the question almost answers itself: the mechanism has to "just work" no matter what the environment is.

- How are clients expected to recover from cache poisoning or environmental changes?

I don't have a good answer on recovery, but I think there are good ways to prevent the risk in the first place.

Stepping back: in addition to the two failures you mentioned, there was also HSTS, which was a success. Arguably that is because, from a server admin's point of view, it was very simple: just an on/off switch and a time duration (a single Strict-Transport-Security header with a max-age). Moreover, many of the attacks on HPKP were the result of it operating at the application/HTTP layer and would not apply to a TLS-level mechanism.

To address your specific failure mode, I believe it could be managed by indexing the client cache correctly, e.g. keying on not only the EE certificate fingerprint but also a hash of the entire certificate chain; a rough sketch follows below. Alternatively, we could add rules for middlebox behavior, but I think you'll agree that would be less effective.
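To make the cache-keying idea concrete, here is a minimal sketch (Python). Everything in it is illustrative rather than taken from the draft: the function name, the inclusion of the server name in the key, and the length-prefixed encoding are all my own assumptions.

    import hashlib

    def continuity_cache_key(server_name: str, chain_der: list) -> bytes:
        """Derive a continuity-cache key bound to the exact certificate
        chain the client validated (DER-encoded, end-entity first),
        not merely to the server identity or the EE certificate.
        Illustrative sketch only; names and encoding are assumptions."""
        h = hashlib.sha256()
        h.update(server_name.encode("utf-8"))
        # EE certificate fingerprint, as in the suggestion above.
        h.update(hashlib.sha256(chain_der[0]).digest())
        # Fold in every certificate in the chain, length-prefixed so
        # adjacent certificates cannot be confused with one another.
        for cert in chain_der:
            h.update(len(cert).to_bytes(4, "big"))
            h.update(cert)
        return h.digest()

    # Hypothetical usage, with the chain as captured from the handshake:
    # key = continuity_cache_key("example.com", [ee_der, ica_der, root_der])

The point is that a commitment recorded behind an inspecting middlebox is stored under the middlebox's locally-issued chain. Once the client leaves that environment, the origin's real chain produces a different key, so the poisoned entry is never consulted and the scenario degrades to a cache miss rather than a hard failure.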
Thanks,
Yaron

From: Yaroslav Rosomakho <[email protected]>
Date: Thursday, 5 February 2026 at 17:53
To: Yaron Sheffer <[email protected]>
Cc: TLS WG <[email protected]>
Subject: Re: [TLS] PQC Continuity draft

Hi Yaron,

Thanks for the draft and for driving discussion on this important topic. I have a general question about the intended deployment model and threat environment for this mechanism. From my perspective, there appear to be two broad classes of TLS usage:

1. Closed or managed ecosystems. These environments typically operate with centrally managed TLS profiles, well-defined trust anchors, and coordinated upgrade paths. Once such an ecosystem migrates to PQC or composite signature schemes, downgrade attacks are largely a configuration and policy problem, not a protocol problem. In these settings, endpoints can be required to enforce PQC-only policies directly, and I am not sure that an additional cache-based continuity mechanism materially improves security.

2. The open public Web. For public web clients, we have substantial historical experience with mechanisms that rely on client-side state to enforce stronger future behavior (HPKP and Token Binding immediately come to mind). In practice, these mechanisms have proven extremely difficult to deploy.

One concern I have with the proposed approach is the interaction with TLS intermediaries. Some clients operate behind trusted middleboxes such as enterprise proxies, NGFWs, or inspection devices. Such intermediaries could legitimately inject the proposed extension while presenting a locally-issued (non-WebPKI) certificate that the client trusts in that environment. This would effectively "poison" the client's cache with a continuity commitment tied to the intermediary rather than to the origin server. If the same client later moves outside that managed environment, it may no longer trust the original server certificate, even though the server is operating correctly. Recovering from this state could be operationally very challenging.

More generally, public services that migrate to PQC certificates and deploy this extension are likely to see sporadic client failures caused purely by environmental factors. These failures would be intermittent, hard to reproduce, and difficult for operators to troubleshoot.

My concern is that this proposal may run into many of the same practical obstacles that ultimately limited the deployment of HPKP and Token Binding: mechanisms with strong theoretical security properties but problematic real-world failure modes when applied to the heterogeneous public Internet.

Before we pursue this further, it would be helpful to better understand:

- Who is expected to deploy and benefit from this mechanism?
- In which environments would it realistically be safe and reliable to use?
- How are clients expected to recover from cache poisoning or environmental changes?

Without clear answers to these questions, I worry that the proposal may be hard to operationalize for the open web, while providing limited additional value and unnecessary complexity for tightly managed ecosystems.

-yaroslav

On Sun, Feb 1, 2026 at 5:04 PM Yaron Sheffer <[email protected]> wrote:

Hi,

A few months ago, Tiru and I published a draft [1] whose goal is to minimize rollback attacks while the Internet is slowly migrating from classic to PQC (or composite) certificates. It seems that the TLS WG is now ready to turn its attention to PQ-resistant signatures, and we would like to present the draft at the upcoming IETF-125. If anybody has had a chance to read the draft in the meantime, we would appreciate your feedback. People might also want to refer to the earlier discussion on this list [2].

Thanks,
Yaron

[1] https://datatracker.ietf.org/doc/draft-sheffer-tls-pqc-continuity/
[2] https://mailarchive.ietf.org/arch/msg/tls/qfmTs0dFq-79aJOkKysIP_3KhEI/
_______________________________________________
TLS mailing list -- [email protected]
To unsubscribe send an email to [email protected]
