Hi Usama,

>I have a formal proof for that for TLS. Please clarify what you see wrong in 
>my proof.

Did your proof consider known past and current attacks on deployed systems that 
rely on malleable signatures or signatures lacking beyond-unforgeability 
properties, including firewalls that use certificate fingerprints in blocklists?

I think your PR would have been less controversial if it had not included a 
reference to very weak hybrid constructions. Given your usual emphasis on 
strong security properties, I was somewhat surprised by that direction. ML-DSA 
is an excellent algorithm in this context. In addition to being non-malleable, 
which should be a baseline requirement, it benefits from NIST significantly 
strengthening Dilithium with hedged signing and beyond-unforgeability (BUFF) 
properties.
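
For intuition, here is a rough Python sketch of the hedged signing idea in 
FIPS 204 (schematic only; the helper name and details are mine, not the 
standard's):

  import hashlib, os

  def hedged_signing_randomness(K: bytes, mu: bytes) -> bytes:
      # Schematic of FIPS 204 hedged signing: per-signature randomness
      # is derived from the secret seed K, 32 bytes of fresh randomness,
      # and the message representative mu. A weak RNG thus degrades to
      # deterministic signing rather than the key-exposing nonce reuse
      # known from ECDSA.
      rnd = os.urandom(32)
      return hashlib.shake_256(K + rnd + mu).digest(64)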

>It increasingly feels to me that if we had adopted Stephen's draft [1]
>and focused even a small fraction of the energy we have spent on debates
>on ML-DSA and ML-KEM, we would have been far better off.

Stephen’s draft suggested not doing any work on PQC signatures, which would 
have been risky and is not aligned with the EU PQC Roadmap, which identifies PQC 
as a high priority for trust anchors in long-lived devices and PKI.

>The broader IETF consensus is captured in [0]

There was IETF consensus to publish [0]. However, I do not think there is 
either IETF or LAMPS consensus that [0] is on par with RFC 9881 or RFC 9909. 
[0] inherits the malleability weakness of ECDSA, introduces a new independent 
malleability weakness, and destroys the beyond-unforgeability (BUFF) properties 
provided by ML-DSA. Compared to standalone ML-DSA, I believe [0] has a higher 
likelihood of introducing serious vulnerabilities than of mitigating them. In 
addition, the legal uncertainty around the use of composite constructions is 
high. I also recently noted that designing new cryptographic algorithms such as 
[0], without CFRG vetting, is out of scope for the LAMPS charter. If there were 
a new WGLC, I would oppose publication.

Note that a main driver for [0], SecP256r1MLKEM768, and SecP384r1MLKEM1024 is 
the ability to sell FIPS-validated implementations of P-256, P-384, and 
RSA-PKCS#1 v1.5 as “quantum-resistant”, even though only the quantum-vulnerable 
components are FIPS certified.

Ericsson just posted a long public comment [4] on signature security 
properties, government recommendations, and different approaches to signature 
hybridization.

[4] https://emanjon.github.io/NIST-comments/2026%20SP%20800-230%20IPD.pdf

--- Quote from [4] ---

All modern signature schemes (RSA-PSS, EdDSA, LMS, XMSS, ML-DSA, SLH-DSA, 
FN-DSA) avoid trivial attacks on strong unforgeability and are widely believed 
to provide a high level of SUF-CMA security [10]. The transition to PQC 
provides an excellent opportunity to phase out signatures with trivial attacks 
on strong unforgeability. Malleable EUF-CMA signatures have enabled serious 
attacks in the past [11] and will likely do so again if they continue to be 
used. If any future signature schemes are standardized despite known 
low-complexity attacks against SUF-CMA security, such schemes should be clearly 
labeled with appropriate warnings and should not be considered general-purpose.

Malleable EUF-CMA signatures can undermine system integrity, auditability, and 
provenance, and, in the worst case, may even enable replay attacks, see [12]. 
This is also true for certificates. There is a significant gap between what 
people think a certificate fingerprint represents and what cryptography 
actually guarantees when a certificate is signed with a malleable EUF-CMA only 
signature algorithm. With such signature algorithms, a CA does not issue a 
single certificate; instead, it issues a set of valid certificates, each with 
its own fingerprint. This mismatch has practical consequences. Logging, SIEM, 
and threat intelligence systems often record events such as “Observed 
certificate fingerprint X connecting to service Y,” implicitly treating the 
fingerprint as a stable identifier. Similarly, some firewall blocklists operate 
on fingerprints (e.g., “Block fingerprint X”), see e.g., [13], and incident 
response workflows often rely on fingerprints as unique identifiers when 
searching for the attacker across datasets. In the presence of only EUF-CMA 
guarantees, these assumptions break down, as the same underlying certificate 
can appear under many fingerprints. That is, given a certificate 
(tbsCertificate, Signature), an attacker can derive one or more additional 
valid certificates (tbsCertificate, Signature′) that have different 
fingerprints H(tbsCertificate, Signature′). Nation-state attackers are known to 
deliberately confuse logging systems, evade signature-based detection, and 
introduce ambiguity into forensic analysis, and they can be expected to exploit 
signature malleability for these purposes as well.
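
As a minimal illustration of both the trivial attack on SUF-CMA mentioned 
above and its fingerprint consequence, the following Python sketch uses the 
python-ecdsa package, which (like FIPS 186-5) accepts both s values; the tbs 
bytes are a stand-in, not a real DER-encoded certificate:

  import hashlib
  from ecdsa import SigningKey, NIST256p
  from ecdsa.util import sigdecode_string, sigencode_string

  sk = SigningKey.generate(curve=NIST256p)
  vk = sk.get_verifying_key()
  tbs = b"stand-in for the DER-encoded tbsCertificate"

  sig = sk.sign(tbs, hashfunc=hashlib.sha256)   # encoded as r || s
  r, s = sigdecode_string(sig, NIST256p.order)
  sig2 = sigencode_string(r, NIST256p.order - s, NIST256p.order)

  # (r, n - s) verifies whenever (r, s) does: the trivial SUF-CMA break.
  assert vk.verify(sig, tbs, hashfunc=hashlib.sha256)
  assert vk.verify(sig2, tbs, hashfunc=hashlib.sha256)

  # Same tbsCertificate, two different fingerprints.
  print(hashlib.sha256(tbs + sig).hexdigest())
  print(hashlib.sha256(tbs + sig2).hexdigest())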

We believe that standardizing ECDSA with trivial attacks on strong 
unforgeability was a mistake that should not be repeated. Several widely 
deployed cryptographic libraries enforce canonical (low-s) ECDSA signatures to 
eliminate signature malleability. This should be best practice; however, it is 
not compliant with ECDSA as specified in FIPS 186-5.
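
The low-s rule itself is a one-line normalization; a sketch of the check such 
libraries enforce (the function name is mine):

  def normalize_low_s(r: int, s: int, n: int) -> tuple[int, int]:
      # Of the two equivalent values s and n - s, always emit the
      # smaller one, removing this malleability vector on the signer side.
      return (r, n - s) if s > n // 2 else (r, s)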

SP 800-227 [5] provides requirements and specifications for hybridizing KEMs. 
It states that a well-constructed composite KEM should preserve the security 
properties of its components, which aligns with the European definition [2], 
where a hybrid is defined as “a combination of a post-quantum algorithm and a 
quantum-vulnerable algorithm for the same mechanism, such that the security is 
as high as the stronger of the two components.” While hybridization of 
signatures is generally not considered necessary, some European agencies 
require temporary hybridization of ML-DSA and FN-DSA during the post-quantum 
migration period [14]. As no government requires hybridization of SLH-DSA, it 
is expected that many global industries will use standalone SLH-DSA. For many 
infrastructure use cases involving TLS and IPsec, the performance of SLH-DSA is 
adequate.
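
For comparison, the standard way a hybrid KEM achieves "as high as the 
stronger of the two" is to run both components and feed both shared secrets 
(and, for binding, the ciphertexts) through a KDF. A schematic sketch, not the 
SP 800-227 construction itself:

  import hashlib

  def combine_kem_secrets(ss_pq: bytes, ss_trad: bytes,
                          ct_pq: bytes, ct_trad: bytes) -> bytes:
      # Under standard assumptions on the hash, the output is secret as
      # long as either component shared secret is secret, so the hybrid
      # is as strong as its stronger component. Including the
      # ciphertexts binds the derived key to this particular exchange.
      return hashlib.sha3_256(ss_pq + ss_trad + ct_pq + ct_trad).digest()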

For the limited cases where signature hybridization is used, additional NIST 
guidance would be valuable. We suggest that NIST recommend avoiding signature 
hybridization altogether, and clarify that any composite signature construction 
must preserve the security properties of its components. Composite 
constructions that do not preserve the security properties of ML-DSA should be 
disallowed from 2035. For niche use cases where hybridization is necessary and 
EUF-CMA security can be shown to suffice, it is appropriate to use 
non-composite hybrid approaches based on two independent signatures, each 
anchored in a separate certificate chain. This approach is currently the only 
hybridization method that appears sufficiently mature for medium-term 
deployment. It also offers a clear operational advantage: it avoids the 
combinatorial explosion problem inherent in composite constructions, and the 
traditional component can be cleanly removed once it is no longer considered to 
provide meaningful security, which according to NIST is from 2035.
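
A schematic of such a dual-signature verification policy (all names and types 
are hypothetical, for illustration only):

  from dataclasses import dataclass
  from typing import Callable

  @dataclass
  class SigBundle:
      chain: bytes  # certificate chain anchored in its own root
      sig: bytes    # signature over the message

  def verify_dual_hybrid(msg: bytes, pq: SigBundle, trad: SigBundle,
                         verify: Callable[[bytes, bytes, bytes], bool],
                         require_trad: bool = True) -> bool:
      # Accept only if every still-required component verifies. Once the
      # traditional algorithm is retired (per NIST, from 2035), callers
      # simply set require_trad=False; there is no composite encoding,
      # OID, or key format to unwind.
      ok_pq = verify(pq.chain, pq.sig, msg)
      ok_trad = (not require_trad) or verify(trad.chain, trad.sig, msg)
      return ok_pq and ok_trad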

Composite public keys and signatures cannot be used for long-lived roots of 
trust in Europe, where the ECCG ACM [15] only considers cryptographic 
mechanisms as agreed if their underlying cryptographic primitives are agreed. 
Consequently, composite signatures will likely be deprecated alongside their 
quantum-vulnerable components. Some poorly designed composite signature 
constructions such as [16] not only significantly reduce the security 
properties of ML-DSA by inheriting the malleability of ECDSA, but also 
introduce additional malleability weaknesses. In particular, when hedged or 
randomized signing is used, an attacker who has observed n valid signatures of 
a message M can derive up to O(n²) distinct new valid signatures for the same 
message. In practice, repeated signing of the same message M with the same 
private key ξ often occurs when a signing request is retried after failure or 
interruption, as well as in high-availability systems where the same message is 
submitted to multiple HSMs. The composites in [16] also significantly weaken 
the security of ML-DSA by failing to preserve its beyond-unforgeability (BUFF) 
properties [6–7]. As shown in [17], existential unforgeability alone does not 
capture the guarantees required by real-world protocols, and the lack of 
beyond-unforgeability (BUFF) properties in traditional signature schemes has enabled 
concrete attacks on deployed systems; see [6–7]. We note that there exist 
signature combiners that preserve both SUF-CMA and BUFF properties [18].
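
To make the counting behind the O(n²) claim concrete, here is a schematic, 
assuming the two components are freshly randomized on each signing and are 
verified independently of each other (the stand-ins below are random bytes, 
not real signatures):

  import os

  n = 3
  # Each signing of the same message M draws fresh randomness in both
  # components, so the signer emits n distinct (pq, trad) pairs.
  observed = [(os.urandom(8), os.urandom(8)) for _ in range(n)]

  # Without cross-binding between the components, any pq component
  # paired with any trad component also verifies for M.
  cross = {(pq, trad) for pq, _ in observed for _, trad in observed}
  print(len(cross))       # n*n = 9 valid composites
  print(len(cross) - n)   # n*(n-1) = 6 that the signer never emitted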

---

Cheers,
John Preuß Mattsson

From: Muhammad Usama Sardar <[email protected]>
Date: Wednesday, 6 May 2026 at 11:13
To: [email protected] <[email protected]>
Subject: [TLS] Re: [EXT] Re: Complaint to chairs regarding false claim of 
consensus to issue an RFC for draft-ietf-tls-mldsa


Hi all,

Not to go into an endless loop here, but just to mention: My technical 
objection is outstanding and has not been addressed to date. The broader IETF 
consensus is captured in [0], since it is in the publication queue. I have a 
formal proof for that for TLS. Please clarify what you see wrong in my proof. 
To overturn that broader IETF consensus captured in [0], proponents have to 
come up with strong technical arguments, because the burden of proof here is on 
the proponents, not the opponents.

Neither making meta arguments (like A-B; rechartering; "milk") nor presenting a 
one-sided story (like counting of proponents) seems to be helpful. Please 
address the technical objections technically, not by exhausting the opponents.

Also, to be clear: I will respond only to technical arguments, and no longer to 
these meta points. That doesn't mean my objection is addressed.

It increasingly feels to me that if we had adopted Stephen's draft [1] and 
focused even a small fraction of the energy we have spent on debates on ML-DSA 
and ML-KEM, we would have been far better off.

On 06.05.26 02:58, Blumenthal, Uri - 0553 - MITLL wrote:
> Well, I’ve been participating in the IETF WGs only since ~1992, so how would
> I know…

I am very naive in process things, but I'm happy to know that I learnt in less 
than 34 years that "consensus" is not the same as "rough consensus." Chairs 
declared the former, not the latter.

> But there’s a difference between “declaring” a consensus (which you kindly
> attributed to me), and repeating what the Chairs already stated a while ago
> (especially when some people keep contesting their decision).

I don't see how repetition helps, especially without adding any technical 
argument and without addressing my technical objection.

> IMHO, the only “key participant” remaining in this WG today is Eric Rescorla.

To the best of my understanding, Ekr has been swinging back and forth. Very 
recently he has been in strong opposition to publishing such drafts: see [2]. I 
fail to understand what suddenly changed to make him support the publication of 
this draft, since it seems to be in the same category as pointed out in [2].

In particular, I also haven't seen him refuting my proof of security of hybrids.

> Considering the ratio of the “objectors” to the “supporters”, the consensus
> seems to be there.

I believe ratio alone is not what determines the 'consensus.' Technical 
objections have to be addressed. Chairs, please correct me if I am wrong.


Sincerely,

-Usama


[0] 
https://www.ietf.org/archive/id/draft-ietf-lamps-pq-composite-sigs-19.html#section-9.1

[1] https://datatracker.ietf.org/doc/draft-farrell-tls-pqg/

[2] https://mailarchive.ietf.org/arch/msg/tls/vIGryOB0TU_vD81HUUxXQUNdnN0/
_______________________________________________
TLS mailing list -- [email protected]
To unsubscribe send an email to [email protected]
