Re: [TLS] ML-KEM key agreement for TLS 1.3

2024-03-15 Thread D. J. Bernstein
Eric Rescorla writes:
> It's of course worth noting that a CRQC might be very far in the
> future and we might get better PQ algorithms by that point, in which
> case we'd never deploy pure ML-KEM.

There are already various lattice KEMs that outperform Kyber, the most
recent being https://eprint.iacr.org/2023/1298. So there are at least
two obvious scenarios where deploying pure Kyber doesn't make sense:

   * Scenario 1: Continued advances in lattice attacks publicly break
     Kyber, in which case pure Kyber will (hopefully!) never have been
     deployed.

   * Scenario 2: Lattice cryptanalysis eventually stabilizes and people
     switch to any of the more efficient lattice KEMs, in which case
     pure Kyber won't be a security problem but also won't be a sensible
     investment of IETF time.

Is there an argument that Kyber will simultaneously avoid both of these
scenarios? People are supposed to trust lattice cryptanalysis enough to
be sure Kyber will survive, while also being sure that all of the more
efficient lattice KEMs will be broken? This sounds fragile.

Another interesting scenario to consider is Scenario 3: Quantum attacks
are demonstrated, but not with low enough cost to make users think that
it's a good idea to give up on hybrids.

Meanwhile the elephant in the room, a problem for both pure Kyber and
hybrid Kyber, is Scenario 0: Kyber deployment is slow, tentative, and
perhaps ultimately aborted, because Kyber is in a patent minefield. Part
of the minefield is two patents for which NIST's buyouts seem finally
set to activate this year, but there are further patents that threaten
Kyber, as illustrated by Yunlei Zhao in

   https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/Fm4cDfsx65s/m/F63mixuWBAAJ

saying "Kyber is covered by our patents". That was almost two years ago.
I haven't heard reports of Zhao asking for money yet, but I also haven't
seen an analysis explaining why Zhao is wrong.

---D. J. Bernstein



Re: [TLS] ML-KEM key agreement for TLS 1.3

2024-03-07 Thread D. J. Bernstein
Bas Westerbaan writes:
> We think it's worth it now, but of course we're not going to keep
> hybrids around when the CRQC arrives.

I think this comment illustrates an important ambiguity in the "CRQC"
terminology. Consider the scenario described in the following paragraph
from https://blog.cr.yp.to/20240102-hybrid.html:

   Concretely, think about a demo showing that spending a billion
   dollars on quantum computation can break a thousand X25519 keys.
   Yikes! We should be aiming for much higher security than that! We
   don't even want a billion-dollar attack to be able to break _one_ key!
   Users who care about the security of their data will be happy that we
   deployed post-quantum cryptography. But are the users going to say
   "Let's turn off X25519 and make each session a million dollars
   cheaper to attack"? I'm skeptical. I think users will need to see
   much cheaper attacks before agreeing that X25519 has negligible
   security value.

It's easy to imagine the billion-dollar demo being important as an
advertisement for the quantum-computer industry but having negligible
impact on cryptography:

   * Hopefully we'll have upgraded essentially everything to
     post-quantum crypto before then.

   * It's completely unclear that the demo should or will prompt users
     to turn off hybrids.

   * On the attack side, presumably real attackers will have been
     carrying out quantum attacks before the public demo happens.

For someone who understands what "CRQC" is supposed to mean: Is such a
demo "cryptographically relevant"? Is the concept of relevance broad
enough that Google's earlier demonstration of "quantum supremacy" also
counts as "cryptographically relevant", so CRQCs are already here?

---D. J. Bernstein



Re: [TLS] ML-KEM key agreement for TLS 1.3

2024-03-07 Thread D. J. Bernstein
Here's a chart I sent CFRG a few weeks ago of recent claims regarding
the exponent, including memory-access costs, of attacks against the most
famous lattice problem, namely the "shortest-vector problem" (SVP):

   * November 2023: 0.396, and then 0.349 after an erratum:
     https://web.archive.org/web/20231125213807/https://finiterealities.net/kyber512/

   * December 2023: 0.349, or 0.329 in 3 dimensions:
     https://web.archive.org/web/20231219201240/https://csrc.nist.gov/csrc/media/Projects/post-quantum-cryptography/documents/faq/Kyber-512-FAQ.pdf

   * January 2024: 0.311, or 0.292 in 3 dimensions:
     https://web.archive.org/web/20240119081025/https://eprint.iacr.org/2024/080.pdf

I then wrote: "Something is very seriously wrong when the asymptotic
security level claimed three months ago for SVP---as part of a chorus of
confident claims that these memory-access costs make Kyber-512 harder to
break than AES-128---is 27% higher than what's claimed today."

This sort of dramatic instability in security analyses is exciting for
cryptographers, and one of the perennial scientific attractions of
lattice-based cryptography. It's also a security risk. The right way to
handle this tension is to treat these cryptosystems _very_ carefully.
The wrong way is to try to conceal the instability.

John Mattsson writes:
> https://csrc.nist.gov/csrc/media/Projects/post-quantum-cryptography/documents/faq/Kyber-512-FAQ.pdf

That's the December 2023 document above. There are many problems with
that document, but the most obvious is that the document claims a much
higher exponent for the "cost of memory access" than the January 2024
document. This is not some minor side issue: the December 2023 document
labels this cost as an "important consideration" and spends pages
computing the exponent.

One wonders why NIST didn't issue a prompt statement either admitting
error or disputing the January 2024 document. That document was posted
almost two full months ago. The document is on the list of accepted
papers for NIST's next workshop, but accepting a paper (1) isn't a
statement of endorsement and (2) doesn't tell readers "Please disregard
the fundamentally flawed December 2023 statement".

> https://keymaterial.net/2023/11/18/kyber512s-security-level/

See https://blog.cr.yp.to/20231125-kyber.html for comments on that.

---D. J. Bernstein



Re: [TLS] ML-KEM key agreement for TLS 1.3

2024-03-06 Thread D. J. Bernstein
Andrey Jivsov writes:
> Does this point apply in your opinion to hash-based signatures?

Yes. Here's a comment I made about this topic in CFRG a few weeks ago:
"I've sometimes run into people surprised that I recommend _always_
using hybrids rather than making exceptions for McEliece and SPHINCS+.
This is easy to answer: When a defense is simple and easily affordable,
why make exceptions? Many reviewers aren't familiar with post-quantum
cryptography; why give them excuses to delay deployment? Also, if some
random McEliece implementation has a devastating bug, is blaming the
programmer really the right answer?"

---D. J. Bernstein



Re: [TLS] ML-KEM key agreement for TLS 1.3

2024-03-05 Thread D. J. Bernstein
The security analysis of post-quantum crypto is far less mature than the
security analysis of ECC was when the Internet moved to ECC:

   * 48% of the 69 round-1 submissions to the NIST post-quantum
     competition in 2017 have been broken by now.

   * 25% of the 48 submissions unbroken during round 1 have been broken
     by now.

   * 36% of the 28 submissions _selected by NIST in 2019 for round 2_
     have been broken by now.

See https://cr.yp.to/papers.html#qrcsp for the data, and slide 11 of
https://cr.yp.to/talks.html#2024.01.11 for a graph showing when the
breaks were published.
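
As a quick cross-check, here's the arithmetic in Python; the broken
counts (33, 12, 10) are reconstructions chosen to match the stated
percentages, not numbers taken from the paper:

   # broken/total for round 1, round-1 survivors, and round-2 selections
   for broken, total in [(33, 69), (12, 48), (10, 28)]:
       print(f"{broken}/{total} = {broken/total:.0%}")   # 48%, 25%, 36%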

We have to try to protect users against quantum computers. This means
rolling out post-quantum crypto asap. But we also need a simple rule of
always using hybrids in case the post-quantum crypto fails. This rule
has been followed by every major post-quantum deployment so far, has
played an important role in _encouraging_ post-quantum deployment, and
meant that the break of SIKE didn't turn into an immediate break of
real user data that Google and Cloudflare had encrypted with CECPQ2.

NSA and GCHQ have been arguing to the contrary. Their arguments don't
hold up to examination; see https://blog.cr.yp.to/20240102-hybrid.html.
But the arguments are still sitting there, and NSA's market influence
cannot be ignored. I would treat non-hybrid drafts in IETF the same way
as "export" options in code: they're security risks. I would encourage
explicit withdrawal of any such drafts.

---D. J. Bernstein



Re: [TLS] [CFRG] Chempat-X: hybrid of X25519 and sntrup761

2024-01-29 Thread D. J. Bernstein

https://cr.yp.to/papers.html#pppqefs includes dollar costs for Internet
communication (roughly 2^-40 dollars/byte) and for CPU cycles (roughly
2^-51 dollars/cycle). Cycle counts for hashing (under 2^4 cycles/byte)
are readily available from https://bench.cr.yp.to. Combining these
numbers produces the 1% figure (2^-47/2^-40). 1% is negligible.
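
Here's that calculation as a few lines of Python, using the cost
estimates above:

   comm_cost = 2**-40              # dollars/byte to send data
   hash_cost = 2**4 * 2**-51       # dollars/byte to hash it: 2^-47
   print(hash_cost / comm_cost)    # 2^-7, i.e., under 1%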

There was also a question about security tokens, where I laid out
details of a calculation concluding that communication takes 82ms (plus
lower-layer overhead) while this hashing takes 2ms. The CPUs there cost
roughly $1, a smaller fraction of the token cost than desktop CPUs
inside desktop computers.

> If somebody starts blindly replacing algorithms, there is much higher
> risk of actual security problem caused by choosing bad algorithms than
> omitting some hashing.

How much higher? Where is that number coming from? Why is "blindly" a
useful model for real-world algorithm choice?

To be clear, I'm also concerned about the risk of people plugging in
KEMs that turn out to be breakable. The basic reason we're applying double
encryption is that, as https://kyberslash.cr.yp.to illustrates, this is
a serious risk even for NIST's selected KEM! But I don't see how that
risk justifies picking a combiner that's unnecessarily fragile and
unnecessarily hard to review.

> No real attacker is looking how to break IND-CCA. They are looking at
> how to attack actual systems. The value of IND-CCA is that despite
> unrealistically strong attack model, most algorithms that fail it tend
> to fail in rather dangerous ways. Which is not at all clear for
> combiner with insufficient hashing.

What's the source for these statistics? Is the long series of
Bleichenbacher-type vulnerabilities, including

   https://nvd.nist.gov/vuln/detail/CVE-2024-23218

from Apple a week ago, being counted as just one failure?

I agree that the security goals we're aiming for here aren't as
_obviously_ necessary as, e.g., stopping attackers from figuring out
secret keys. But I'd be horrified to see a car manufacturer or plane
manufacturer saying "We don't really have to worry whether our seatbelts
work since they're only for disasters". We should be assuming there's a
disaster and making sure our seatbelts do what they're supposed to do.

---D. J. Bernstein



Re: [TLS] [CFRG] [EXT] Re: Chempat-X: hybrid of X25519 and sntrup761

2024-01-28 Thread D. J. Bernstein
Uri Blumenthal writes:
> D. J. Bernstein writes:
> > Filippo Valsorda writes:
> > > Because "a nominal group and a ciphertext collision resistant KEM" is
> > > not "any two KEMs"
> > How is the above dividing line supposed to interact with the stated
> > harm scenario? Why won't implementors in both cases want to be
> > "flexible" and "agile" and so on, leading to "more and more poorly
> > understood and tested combinations"?
> Because of the simple fact that in the stated scenario "KEM" choice is
> constrained by possession of certain listed properties?

I agree that asking for a KEM to be "ciphertext collision resistant"
(plus of course IND-CCA2) is a constraint on the choice of KEM. But how
is the stated scenario of

   (1) some implementations exposing the combiner, and
   (2) applications using "a different combination", leading to
   (3) "more and more poorly understood and tested combinations"

supposed to be prevented by the fact that there's a constraint on the
choice of KEM? And why is asking just for the standard IND-CCA2 goal
(which is also a constraint) _not_ supposed to prevent this scenario?

To be clear, I find #1+#2+#3 plausible for other reasons---and am forced
to conclude that the current X-Wing design is unnecessarily dangerous:

   * The modules in typical cryptographic libraries come from common
     software-layering practices that have nothing to do with security
     and that I'd expect to produce some instances of #1.

   * The elephant in the room---the fact that NIST picked a KEM in the
     middle of a patent minefield---provides a clear incentive for #2,
     whether #2 is supported by #1 or handled by itself.

   * #3 will be a default consequence of #2. Maybe #3 can be stopped,
     but I'm skeptical.

   * Once we're in situation #3, the "ciphertext collision resistant"
     constraint is a security risk. Maybe all KEMs in use fortuitously
     meet the constraint, but we have no reason to assume they will, so
     we're in uncharted waters.

Even without that risk, this constraint is asking security reviewers to
investigate something they wouldn't otherwise have to investigate.

We can, at negligible cost, make the combined KEM safer and easier to
review by having the combiner hash the full transcript (the same way
that TLS moved to hashing the full transcript, but of course the goal
here is to have something that's safe far beyond TLS). What's the
contrary argument supposed to be?

There continue to be unquantified suggestions that the hashing is a cost
issue (e.g., "significant"). I've given numbers showing that the hash
cost is around 1% of the communication cost.

There were various complaints about exposing parameters to applications
(e.g., "leaking heavily parameterized algorithms up into the
cryptography interface"). None of the proposals at hand (X-Wing; my
recommended modifications to X-Wing; Chempat-X) are doing that.

I'm still hoping for clarification of the latest messages. I think the
messages are trying to argue that the parameters will (or "more likely"
will?) _indirectly_ be exposed because the difference between

   (A) a combiner that reaches its security goals given any IND-CCA2
       KEM, and

   (B) a combiner that reaches its security goals given any "ciphertext
       collision resistant" IND-CCA2 KEM

will somehow make the difference between #1+#2+#3 happening and not
happening. But, again, I don't see the mechanism that's supposed to
make the A-B distinction trigger #1+#2+#3. Meanwhile there's much more
clear damage done if we pick unnecessarily fragile mechanism B and then
#3 occurs for other reasons, such as the reasons predicted above.

https://www.imperialviolet.org/2018/12/12/cecpq2.html commented that
CCA-vs.-CPA was a "subtle and dangerous" distinction. How should we
imagine that implementors are going to notice the even more subtle
distinction between A and B, let alone take action on that basis? If
the answer is supposed to be that implementors will see and heed
warnings in a spec using B: Why shouldn't such warnings be just as
effective in a spec using A, which has the giant advantage of protecting
the implementors who _aren't_ paying attention to the warnings?

Filippo Valsorda writes:
> it was argued that modularity should be the dispositive reason to
> choose universal combiners, even despite modest performance
> disadvantages.

No. I stated various reasons for modifications to X-Wing, and then my
email dated 16 Jan 2024 gave a numbered list of four reasons "for
assigning responsibility to the combiner rather than to the underlying
KEM". I stand by all of those reasons; I did not say that one of those
was "the dispositive reason". Narrowing the list to just one entry
understates the case.

---D. J. Bernstein

Re: [TLS] [CFRG] Chempat-X: hybrid of X25519 and sntrup761

2024-01-27 Thread D. J. Bernstein
David Benjamin writes:
> No more heavily parameterized algorithms. Please precompose them.

https://cr.yp.to/papers.html#coolnacl explains advantages of providing
precomposed parameter-free bundles to the application. The current
discussions are about specific proposals for such bundles (or at least
KEM bundles to wrap inside bigger bundles such as HTTPS). I don't see
anyone claiming that precomposition has disadvantages.

However, it's important to keep in mind that proposed bundles are
_internally_ choosing parameters for more general algorithms. This
generality can be tremendously helpful for implementation, testing,
security review, and verification. Judging which parameters are useful
_inside_ designs is an engineering question, and shouldn't be conflated
with the question of what's useful to expose to applications.

Consider, e.g., modular inversion, with the modulus as a parameter.
There's now fully verified fast constant-time software for this---quite
a change from, e.g.,

   https://www.forbes.com/sites/daveywinder/2019/06/12/warning-windows-10-crypto-vulnerability-outed-by-google-researcher-before-microsoft-can-fix-it/

---and almost all of the effort that went into this was shared across
moduli. For applications that use prime moduli and care more about code
conciseness than speed, Fermat inversion is better, and again the tools
are shared across moduli.
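
For instance, here's Fermat inversion for the X25519 prime in Python;
the test value 5 is just an example:

   p = 2**255 - 19                    # the prime modulus, a parameter
   def invert(x):                     # x^(p-2) = x^(-1) mod p for x != 0
       return pow(x, p - 2, p)
   assert 5 * invert(5) % p == 1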

Applications using X25519 should be simply using the DH functions
without caring what happens at lower layers, and inside the DH functions
I think it's good to have an invert-mod-2^255-19 layer to abstract away
the choice of modular-inversion algorithm---but in the end the algorithm
is there, and the modulus parameter is clearly useful. Hiding it from
the application doesn't mean eliminating it!

Should there be further parameters to allow not just inversion mod p or
inversion mod m but, e.g., inversion mod f(x) mod p? That's many more
parameters for the degree and coefficients of the polynomial f, but this
enables support for many post-quantum algorithms. One sees different
decisions being made as to how much of this generality is useful:

   * Inversion libraries typically cover only one case or the other (or
     are even more specialized, at some expense in overall code size;
     see the examples in https://cr.yp.to/papers.html#pqcomplexity).

   * https://cr.yp.to/papers.html#safegcd covers the integer and
     polynomial cases, while commenting on the analogies.

   * Number theorists talking about "local fields" are using an
     abstraction that covers both cases simultaneously.

The benefits of generality need to be weighed against the costs. I don't
think most readers of https://cr.yp.to/papers.html#safegcd would have
appreciated having the paper phrased in terms of arbitrary local fields.

When I read "no more heavily parameterized algorithms", I see "no more"
as saying something absolute but "heavily" as missing quantification, so
I don't know how to evaluate what this means for any concrete example.
Meanwhile this is understating the application goal of _zero_ parameters
("I just want a secure connection, don't ask me to make choices").

> Once you precompose them, you may as well take advantage of properties
> of the inputs and optimize things.

In my implementor's hat, I partially agree. Knowing the context often
enables speedups that aren't available in more generality, as long as
the implementation isn't factored into context-independent pieces.

However, at the design stage, speedups have to be weighed against other
considerations, such as overloading security reviewers and introducing
unnecessary traps into the ecosystem.

The particular speedup we're talking about here is eliminating hashing a
kilobyte or two of data. In context, this speedup is negligible: about
1% of the cost of communicating that data in the first place, never mind
other application costs.

There's a 20-page paper saying that this tiny speedup is safe _for
Kyber_. Why exactly do we want security reviewers spending time on this?
And why should this be sitting around in the ecosystem, given the clear
risk of someone plugging in something other than Kyber?

These two issues are cleanly separated for sntrup761: it's easy to check
that sntrup761 internally hashes public keys and ciphertexts, but this
doesn't answer the question of what happens if someone plugs in, say,
mceliece6688128. Meanwhile the argument that the speedup is negligible
applies equally to whichever KEM people plug in: the cost of hashing the
public key is negligible next to the cost of communicating the key; same
for ciphertexts.

---D. J. Bernstein



Re: [TLS] [CFRG] X-Wing: the go-to PQ/T hybrid KEM?

2024-01-18 Thread D. J. Bernstein
Natanael writes:
> This assumes performance requirements only comes from settings like
> datacenters, but this is also likely to eventually get used in embedded
> devices some of which may be battery powered. Consider NFC devices such as
> security keys (typically 1-10 mW, less than 100 Kbps, and few available
> cycles).

Sure, security tokens could move from signatures to KEMs. The security
token's public key is then a KEM public key. To prove it's online (and
optionally approve and/or send some data), the token receives a KEM
ciphertext from the verifier, and uses the KEM session key as the key
for a MAC (or for an authenticated cipher to send secret data).
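
Here's a sketch of that message flow in Python. The "KEM" here is a
deliberately insecure placeholder standing in for a real KEM such as
Kyber-768; it exists only to make the flow runnable:

   import hashlib, hmac, os

   def kem_encaps(pk):                # placeholder KEM---NOT secure
       m = os.urandom(32)
       ct = bytes(a ^ b for a, b in zip(m, pk))
       return ct, hashlib.sha3_256(m + pk).digest()

   def kem_decaps(sk, pk, ct):
       m = bytes(a ^ b for a, b in zip(ct, pk))
       return hashlib.sha3_256(m + pk).digest()

   sk = os.urandom(32); pk = sk       # placeholder keypair
   ct, ss = kem_encaps(pk)            # verifier -> token: ct
   tag = hmac.new(kem_decaps(sk, pk, ct),       # token -> verifier: MAC
                  b"challenge", hashlib.sha3_256).digest()
   assert hmac.new(ss, b"challenge", hashlib.sha3_256).digest() == tag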

For Kyber-768, communicating 1088 bytes of ciphertext at (say) 106 Kbps
(minimum-speed NFC) takes 82 ms plus lower-layer overhead.

Meanwhile the "few" cycles that you're talking about would be on, say, a
Cortex-M4 running at tens of MHz (see https://eprint.iacr.org/2022/1225
running Dilithium on an OpenSK security token), and this CPU would hash
1088 bytes with SHA3-256 in about 2 ms (even without Keccak hardware).
How is the user of the security token supposed to notice those 2 ms, or
another 2 ms to also hash the public key?
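
The arithmetic, with an assumed 30MHz clock and an assumed 60
cycles/byte for software SHA3-256 (the exact figures don't change the
conclusion):

   ct_bytes = 1088                     # Kyber-768 ciphertext
   print(ct_bytes * 8 / 106_000)       # ~0.082 s: 82 ms at 106 Kbps
   print(ct_bytes * 60 / 30e6)         # ~0.002 s: 2 ms of hashing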

---D. J. Bernstein



Re: [TLS] [CFRG] X-Wing: the go-to PQ/T hybrid KEM?

2024-01-17 Thread D. J. Bernstein
> > This reduces load on security reviewers: everyone can see that the full
> > ct is included in the hash, without having to worry about KEM details.
> Here I disagree: the security analysis needs to be done *once* (and
> has been done, but of course still needs review; ideally also
> computer-verification).

To clarify, what I'm talking about is the extra review load that comes
from having more combiners than necessary in the ecosystem.

For example, if Situation A asks security reviewers to look at a generic
combiner, and Situation B asks security reviewers to look at a generic
combiner _plus_ a 20-page paper about a Kyber-specific combiner, then
Situation A clearly reduces review load compared to Situation B.

In Situation A, with the combiner that I'm talking about, everyone can
look at the hybridss formula and immediately see that everything is
hashed in. In Situation B, reviewers have to look at the generic formula
_and_ look at how Kyber details interact with another combiner.

Of course we hope that any particular piece of security review can be
done just once and that's the end of it (OK Google, please read once
through the Chrome source code and remove the buffer overflows), but the
bigger picture is that many security failures are easily explained by
reviewer overload, so minimizing the complexity of what has to be
reviewed is a useful default policy. Minimizing TCBs is another example
of the same approach.

> if we manage to eliminate a significant (albeit not huge)
> amount of cycles through a careful security analysis that needs to be
> done once, I expect this to be helpful for adoption.

I don't see how this argument survives quantification regarding the
hashes that we're talking about. Sending 1KB of ciphertext has roughly
the same dollar cost as 2 million CPU cycles; why would the application
care about a combiner spending 1% or even 2% of that on hashing?

Another way to save a similar amount of CPU time would be to use a
universal hash to compress the input to Kyber's implicit-rejection hash
("J"). This has well-known proofs, and it can be decided separately by
each secret key without any interoperability issues---but I don't see
how applications would care: the KEM bottleneck is the communication
cost, not the cycle count.

> > It also reduces risks for people who rip out the KEM (for example,
> > because of patent concerns) and swap in another KEM.
> This is why it's very important to standardize (and communicate) X-Wing
> as a KEM, not as a combiner.

And yet the combiner appeared explicitly in the first message in this
thread! I do think that adding enough warnings can reduce risks, but
it's even better to take the sharp edges out of the underlying tool.

---D. J. Bernstein



Re: [TLS] [CFRG] X-Wing: the go-to PQ/T hybrid KEM?

2024-01-17 Thread D. J. Bernstein
Peter Schwabe writes:
> we would like to have an answer to the question "What KEM should
> I use" that is as simple as
>   "Use X-Wing."

Having an easy-to-use, prepackaged answer is great! What I'm saying is
that the easy-to-use, prepackaged answer should _internally_ use a
combiner that includes the full ciphertext and public key in the hash:

   H = SHA3-256,
   hybridpk = (receiverpkECDH,receiverpkKEM),
   hybridct = (senderpkECDH,senderctKEM),
   hybridss = H(ssECDH,ssKEM,H(hybridct),H(hybridpk),context)
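
Here's that combiner as Python, with H = SHA3-256 from hashlib; the
flat concatenation used to encode the tuples is a simplification (a
real spec would pin down unambiguous fixed-length encodings):

   import hashlib

   def H(*parts):
       return hashlib.sha3_256(b"".join(parts)).digest()

   def hybrid_ss(ssECDH, ssKEM, senderpkECDH, senderctKEM,
                 receiverpkECDH, receiverpkKEM, context):
       hybridct = senderpkECDH + senderctKEM
       hybridpk = receiverpkECDH + receiverpkKEM
       return H(ssECDH, ssKEM, H(hybridct), H(hybridpk), context)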

This reduces load on security reviewers: everyone can see that the full
ct is included in the hash, without having to worry about KEM details.
It also reduces risks for people who rip out the KEM (for example,
because of patent concerns) and swap in another KEM.

  [ regarding TLS ]
> I would trust that careful
> evaluations of the pros and cons lead to the decision to *not* use a
> generic combiner to build a hybrid KEM from Kyber768 and X25519.

When and where would this comparison of combiners have happened?
Citation needed, especially if the previous evaluation is supposed to
serve as a substitute for current evaluation.

---D. J. Bernstein



Re: [TLS] [CFRG] X-Wing: the go-to PQ/T hybrid KEM?

2024-01-17 Thread D. J. Bernstein
Ilari Liusvaara writes:
> I am not one of draft authors, but I tried to estimate the overhead
> and ended up with in ballpark of 7%.

To clarify, you mean that the _cycle_ counts go up by 7%?

My comparison was explicitly against the cost of "communicating the
ciphertexts". That's a much larger cost.

Quantitatively (see https://cr.yp.to/papers.html#pppqefs), sending a
byte through the Internet costs roughly 2^(-40) dollars, while a cycle
of CPU time costs roughly 2^(-51) dollars.

Sending a kilobyte, for example, costs roughly 2^(-30) dollars. Hashing
a kilobyte, at roughly 2^3 cycles/byte, costs roughly 2^(-38) dollars,
which is hundreds of times smaller.

As another example of the same type of comparison, spending 2^15 cycles
on Kyber enc or dec costs roughly 2^(-36) dollars, which is still a very
small percentage of roughly 2^(-30) dollars for communicating the Kyber
ciphertext (never mind the scenarios where the key is sent too).
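
In Python, using those per-byte and per-cycle dollar costs:

   byte_cost, cycle_cost = 2**-40, 2**-51
   print(1024 * byte_cost)             # ~2^-30: sending a kilobyte
   print(1024 * 2**3 * cycle_cost)     # ~2^-38: hashing it at 2^3 c/b
   print(2**15 * cycle_cost)           # ~2^-36: one Kyber enc or dec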

This is very different from the situation with (e.g.) X25519, where the
cost of CPU time is much more noticeable next to communication cost.

---D. J. Bernstein



Re: [TLS] [CFRG] X-Wing: the go-to PQ/T hybrid KEM?

2024-01-16 Thread D. J. Bernstein
Bas Westerbaan writes:
> X-Wing is a KEM - not a combiner.

Sure, but there's a combiner present inside it---and even advertised:
see "X-Wing uses the combiner" etc. at the beginning of this thread.

If people are motivated by things like http://tinyurl.com/5cu2j5hf to
use the same combiner with a different KEM, would they be deterred by a
presentation purely as a unified package? Or by enough warnings? Maybe,
but a little more hashing has negligible cost and will reduce the risk.

> Insisting that X-Wing use that generic combiner, is not dissimilar to
> insisting that every KEM that uses an FO transform, should use the
> same generic FO transform.

The title and introduction of https://cr.yp.to/papers.html#tightkem
recommend unifying FO transforms. This would have avoided various
subsequent breaks of NIST submissions.

To be clear, I think other concerns such as efficiency _can_ outweigh
the advantages of unification, but this has to be quantified. When I see
a complaint about "hashing the typically large PQ ciphertexts", I ask
how this compares quantitatively to communicating the ciphertexts, and
end up with a cost increment around 1%, which is negligible even in the
extreme case that the KEM is the main thing the application is doing.

---D. J. Bernstein



Re: [TLS] [CFRG] X-Wing: the go-to PQ/T hybrid KEM?

2024-01-16 Thread D. J. Bernstein
Jack Grigg writes:
> As the paper states at the top of page 4, X-Wing includes the recipient's
> X25519 public key "as a measure of security against multi-target attacks,
> similarly to what is done in the ML-KEM design".

Thanks for the data. Assuming arguendo that this matters (as in my first
message), the basic risk to consider is that people end up mixing

   * a combiner that doesn't hash the post-quantum KEM public key
     because it expects the KEM to do that and

   * a KEM that doesn't hash the public key because it expects the
     combiner to do that,

so that the KEM's public key doesn't end up getting hashed at all.

Given the basic goal of helping auditors, I think we should settle on
principles of (1) always using double encryption, (2) having as few
combiners as possible, and (3) having the combiner responsible for
hashing public keys and ciphertexts along with the shared secrets.

Rationale for assigning responsibility to the combiner rather than to
the underlying KEM: (1) KEM designs and analyses are already hard, with
breaks often taking years to appear---see the graph on slide 11 of

   https://cr.yp.to/talks/2024.01.11/slides-djb-20240111-pqcrypto-4x3.pdf

for the timeline of breaks so far of round-1 NIST submissions---and
adding more goals makes it harder. (2) KEM designs typically focus on
IND-CCA2. Properties beyond IND-CCA2 might be achieved as experiments or
accidents, but treating those as stable commitments is unjustified and
unsafe. (3) We know how to convincingly analyze a combiner as a separate
module, as in https://eprint.iacr.org/2018/024, starting purely from
IND-CCA2 assumptions on KEMs. (4) The arguments for multiple KEMs are
stronger than the arguments for multiple combiners. Having just one
combiner, and thus just one universal review of whether ciphertexts and
public keys are adequately hashed, seems more feasible than having just
one KEM.

---D. J. Bernstein



Re: [TLS] [CFRG] X-Wing: the go-to PQ/T hybrid KEM?

2024-01-15 Thread D. J. Bernstein
> > 2. I think it's good that both of the X25519 public keys are included
> > where some hybrid constructions would include just one (labeled as
> > ciphertext).
> And it is required for the IND-CCA robustness: without it, it's not.

Well, that depends on _which_ X25519 key is included in the hash.

If the recipient's KEM public key is an X25519 public key, and the
sender's KEM ciphertext is an X25519 public key, and the KEM session key
is the X25519 shared secret, then the KEM obviously isn't IND-CCA2: it
has what https://www.shoup.net/iso/ calls "benign malleability".

The normal way to upgrade from benign malleability to IND-CCA2 is to
also hash the ciphertext---i.e., the sender's X25519 public key---into
the session key. If the goal is maximum streamlining for IND-CCA2 then
one shouldn't include the _recipient's_ X25519 public key in the hash,
so why exactly does X-Wing include it?

I don't think the goal here should be maximum streamlining for IND-CCA2.
The point of hybrids is to do a bit more work to reduce the damage from
screwups; I think the scope of screwups under consideration should go
beyond mathematical breaks and also include implementation issues.

In particular, what happens if protocol designers confuse the two X25519
public keys here, and hash the _recipient's_ public key instead of the
_sender's_ public key? The upgrade to IND-CCA2 doesn't work. Hopefully
the protocol is okay with benign malleability, but there has been so
much emphasis on IND-CCA2 that it's hard to blame a protocol designer
for assuming that KEMs have that property.

So, as I said, I'm happy with a combiner hashing both of the X25519
public keys (along with the shared secret, obviously). But the same
perspective also makes me ask what happens when people replace Kyber
with a different post-quantum KEM. The combiner

   H = SHA3-256,
   hybridpk = (receiverpkECDH,receiverpkKEM),
   hybridct = (senderpkECDH,senderctKEM),
   hybridss = H(ssECDH,ssKEM,H(hybridct),H(hybridpk),context)

is then safer than X-Wing. Even for people using Kyber, this KEM makes
security review easier than X-Wing does. This combiner also satisfies
requests (see my first message for references) to include KEM public
keys (or at least prefixes of those) in the hash for other reasons.

I don't see how the cost of hashing hybridct can be an issue next to the
cost of communicating it. Same for hybridpk (and obviously the hash can
be saved whenever the public key is saved). I worry about complicating
KEM analyses since they're already complicated and error-prone in the
first place---we've seen many breaks of KEM proposals---but this
combiner is a separate module, and the IND-CCA2 property that it's
requesting from the input KEM is what we're asking a KEM to do anyway. I
think that the potential review benefit of _omitting_ hybridct and/or
hybridpk will be outweighed by the review complication of ending up with
more combiners than necessary.

---D. J. Bernstein



Re: [TLS] [CFRG] X-Wing: the go-to PQ/T hybrid KEM?

2024-01-12 Thread D. J. Bernstein
Scott Fluhrer (sfluhrer) writes:
> If we have a combiner with a proof that “if either of the primitives
> we have meet security property A, then the output of the combiner
> meets security property B”, and we have proofs that both our
> primitives meet security property A”, then doesn’t that mean that our
> system has a proof that it meets security property B?

Certainly those proofs would compose. Even better, _one_ of the
primitives having property A would be enough.

However, the logic relies on a match between the two A properties that
you mention: the property provided by the KEMs, and the property assumed
by the combiner. The situation is different when

   * KEM designers and reviewers focus primarily on IND-CCA2, and then
   * a combiner comes along requiring its input KEM to provide some
     property _beyond_ IND-CCA2.

Then more review time is needed to see which KEMs have that property.
There's no reason to think that KEMs will always have that property.
There's a clear risk of people making the mistake of using the combiner
with a KEM that doesn't have that property.

We already know how to avoid this risk by changing the combiner to hash
more data, as in the

   hybridss = H(ECDHss,KEMss,H(hybridct),H(hybridpk),context)

construction that I mentioned before. Sure, for Kyber this is kilobytes
of extra hashing; how can this matter next to the cost of communicating
those kilobytes through the Internet in the first place?

Note also that real KEMs _aren't_ proven to be IND-CCA2 (never mind
properties beyond IND-CCA2); they aren't even proven to resist a
narrower class of attacks, namely QROM IND-CCA2. For example,

   https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/C0D3W1KoINY/m/GuWevJfPAQAJ

plausibly claims provability for the statement that an attacker limited
to (say) 2^90 hash calls can't carry out a high-probability QROM
IND-CCA2 attack against Kyber _if_ the attacker can't carry out an
IND-CPA attack with probability around 2^-92---but there's no proof that
such a low-probability IND-CPA attack is hard. Right now one can't even
find a clear statement of the resources required for the best _known_
low-probability IND-CPA attack.

---D. J. Bernstein



Re: [TLS] [CFRG] X-Wing: the go-to PQ/T hybrid KEM?

2024-01-11 Thread D. J. Bernstein
Bas Westerbaan writes:
> At the moment the choice of hybrid is left to the application/protocol.
> This has led to many different proposals for hybrids, which wastes a lot of
> engineering, standardisation and security review time. I think it's better
> if hybridisation is done at the level of cryptographic primitive.

I agree that it's desirable to have KEMs that are internally using
double encryption and that can be reviewed independently of the target
application---not relying on details of TLS 1.3, for example.

For the same reasons (reducing security-review time, simplifying
standardization, etc.), it's desirable to minimize the number of
different double-encryption mechanisms. So I'd think that the default
path forward would be to pick a single combiner such as

   hybridss = H(ssECDH,ssKEM,H(hybridct),H(hybridpk),context)

where H is SHA3-256, hybridpk is (receiverpkECDH,receiverpkKEM), and
hybridct is (senderpkECDH,senderctKEM).

If there's instead a mechanism where the security analysis makes
reference to the Kyber details, then that's more effort to review, and
people plugging in KEMs other than Kyber (for good reasons: worrying
about the Kyber patent claim from http://tinyurl.com/5cu2j5hf, trusting
McEliece more, etc.) will need a different mechanism, so we end up with
more mechanisms than necessary. What's the advantage justifying this?

I saw the "hashing the typically large PQ ciphertexts" comment, but the
dollar-cost calculations from https://cr.yp.to/papers.html#pppqefs imply
that several thousand cycles to hash 1KB cost roughly 2^-38 dollars,
whereas sending or receiving 1KB costs roughly 2^-30 dollars. Is there
evidence of applications where the hashing cost is an issue?

---D. J. Bernstein



Re: [TLS] [CFRG] X-Wing: the go-to PQ/T hybrid KEM?

2024-01-11 Thread D. J. Bernstein
Do we have a survey of hybrid patents?

To be clear, for security reasons I recommend a straightforward policy
of always using hybrids (https://blog.cr.yp.to/20240102-hybrid.html).
NIST reportedly bought out some hybrid patents; I'm not aware of hybrid
patents that predate the clear prior art; and in any case it has been
obvious for many years to try hashing any selection of the available
inputs that both sides see, such as ciphertexts, public keys, session
keys, and/or context labels. But I worry that a profusion of hybrid
mechanisms could have someone getting into trouble with a non-bought-out
patent on some specific hybrid mechanism, because of an unfortunate
choice of details matching what the patent happens to cover. A patent
survey would reduce concerns here.

Bas Westerbaan writes:
> SHA3-256( xwing-label || ss_ML-KEM || ss_X25519 || ct_X25519 || pk_X25519 )

1. I'd include the post-quantum ciphertext (or a hash of it). Rationale:
This makes the construction more generic, and simplifies security
review. There's negligible performance cost compared to the cost of
communicating the ciphertext in the first place. (For quantification of
costs of communication etc., see https://cr.yp.to/papers.html#pppqefs.)

2. I think it's good that both of the X25519 public keys are included
where some hybrid constructions would include just one (labeled as
ciphertext). Rationale: less chance of confusion regarding which key to
include; better fit with some existing uses of X25519; might marginally
simplify security review; even smaller performance cost than including
the post-quantum ciphertext.

3. There are papers that recommend also including at least a 32-byte
prefix of the post-quantum pk: (1) https://eprint.iacr.org/2021/708
recommends including some sort of user identifier and claims that it
isn't "robust" to have ciphertexts that might be decryptable by multiple
users; (2) https://eprint.iacr.org/2021/1351 recommends including a pk
prefix for a different reason, namely to ensure that certain types of
cryptanalytic attacks have to commit to the key they're attacking, which
might make multi-key attacks harder.

These arguments are weak, but the counterarguments that I see are also
weak. On balance, I'd think that it's best to just include the pk (or a
hash of the pk) in the hybrid-hash input, so people won't have to worry
about the possibility of protocols where omitting it causes issues.

There's a layering question regarding who's responsible for this hash. 
https://classic.mceliece.org/mceliece-security-20221023.pdf says the
following: "Classic McEliece follows the principle that any generic
transformation aiming at a goal beyond IND-CCA2 is out of scope for a
KEM specification. This is not saying that further hashing should be
avoided; it is saying that cryptographic systems should be appropriately
modularized."

I think the hybrid construction is a good place to put this hash. If
there are many different hybrid constructions then factoring out another
layer might be useful for reviewers, but I'd rather settle on a minimal
number of hybrid constructions.

4. I'd put ss_X25519 before the post-quantum session key. This has a
two-part rationale.

First, there's a rule of thumb saying to start with the input that's
least predictable for the attacker. This provides a principled way to
settle the order even in situations where there's no reason to think
that the order matters.

Second, available evidence such as https://kyberslash.cr.yp.to indicates
that the post-quantum session key is more likely to be predictable than
the X25519 shared secret. It's of course reasonable to argue that this
situation will be reversed eventually by a combination of quantum
computing and any required upgrades to the post-quantum KEMs, the same
way that it's reasonable to argue that hybrids will eventually be
unnecessary, but hybrid designs should disregard those arguments.

---D. J. Bernstein



Re: [TLS] What is the TLS WG plan for quantum-resistant algorithms?

2023-11-09 Thread D. J. Bernstein
Sophie Schmieg writes:
> NTRU being chosen for non-security related criteria that have since
> materially changed.

I recommend discussing the patent issues explicitly, including public
analysis of the patent threats. For example, Yunlei Zhao in

   https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/Fm4cDfsx65s/m/F63mixuWBAAJ

said "Kyber is covered by our patents (not only the two patents
mentioned in the KCL proposal, but also more patent afterforwards)". The
first two patents were filed a month before the publication of "NewHope
without reconciliation", Kyber's direct predecessor:

   https://patents.google.com/patent/CN107566121A/en
   https://eprint.iacr.org/2016/1157

Maybe there's some reason that Zhao is wrong and these patents don't
actually cover Kyber, but then it's important to have a public analysis
convincingly saying why.

More broadly, the big project of protecting user data against future
quantum computers has suffered years of delay from the combination of

   * paying inadequate attention to patents and
   * selecting cryptosystems in the GAM/LPR family.

Some people seem to think that the activation of NIST's licenses in 2024
will bring this mess to an end; I'm skeptical.

---D. J. Bernstein



Re: [TLS] What is the TLS WG plan for quantum-resistant algorithms?

2023-11-08 Thread D. J. Bernstein
John Mattsson writes:
> NIST does not deserve any criticism for continuing to evaluate SIKE.

The NIST actions that I quoted go far beyond "continuing to evaluate
SIKE". NIST explicitly pointed to SIKE as part of its official rationale
for throwing away FrodoKEM and delaying a decision on Classic McEliece.
NIST flatly denied the impact of an important line of SIKE attacks. NIST
said SIKE was attractive because of its small key and ciphertext sizes.
NIST said it intended to standardize _some_ KEM not based on structured
lattices; NIST selected SIKE as one of just a few options for that.

NIST's main complaint about SIKE was about the CPU time used---"SIKE
encapsulation and decapsulation take on the order of tens of millions of
cycles, which is still relatively slow compared to other post-quantum
schemes"---but SIKE was affordable in the Google-Cloudflare experiment,
was continuing to find speedups, was already down to 8 million cycles on
Intel's Alder Lake CPUs, and was within striking range of BIKE and HQC
in Figure 10 of NIST's report.

For people imagining that NIST's post-quantum selection procedures
reduce the risk to such low levels that the risk can be ignored: How
exactly is that supposed to work? Why is the SIKE example not enough
evidence that it _doesn't_ work?

> D. J. Bernstein wrote:
> > as far as I know there was only one other cryptographer on record
> > recommending against using SIKE.
  [ ... ]
> NSA

NSA in 2021 telling people to wait for post-quantum standards is not a
"cryptographer on record recommending against using SIKE".

https://www.youtube.com/watch?v=q6vnfytS51w is a talk that Tanja Lange
and I gave in 2018. At minute 48 she says the following:

   There's some hope it will remain unbroken until next year. But, well,
   I'm not sure yet where I'll put my money. At this moment I think
   actually CSIDH has a better chance than SIKE of surviving, but who
   knows. Don't use it for anything yet.

This is a very clear warning not just about CSIDH (which she coauthored)
but also about SIKE. My own public warning in 2021 was also evaluating
SIKE (https://twitter.com/hashbreaker/status/1387779717370048518):

   Agreeing with main points in 3, 4, 6, 10 in
   https://eprint.iacr.org/2021/543. More objections to 2, 5, 7, 9. Most
   important dispute is regarding risk management, 1+8. Recent advances
   in torsion-point attacks have killed a huge part of the SIKE
   parameter space, far worse than MOV vs ECDLP.

For comparison, here's context you omitted from the 2021 NSA quote:

   Once NIST post-quantum cryptographic standards are published and
   certification procedures for those algorithms are established,
   CNSSP-15 will be updated with a timeline for required use of the
   post-quantum algorithms and disuse of the quantum-vulnerable portion
   of the current CNSA Suite of algorithms. The nature of this timeline
   will depend upon the standards selected and the ability of the market
   to provide supporting products. NSS customers are reminded that NSA
   does _not_ recommend and policy does not allow implementing or using
   unapproved, non-standard or experimental cryptographic algorithms.

This isn't saying anything about any particular cryptographic algorithm:
it's a generic please-don't-roll-out-post-quantum-crypto-yet. There's
also no cryptographer named as an author.

> I think it is very sad that deploying a lot of non-standard and
> experimental cryptographic algorithms gives a lot of positive media
> attention…

People rolling out post-quantum encryption as quickly as possible are at
least _trying_ to solve the problem of today's user data being recorded
by attackers and decrypted with future quantum computers. If you have a
rational argument for simply giving away today's user data to attackers,
please lay out the argument without using emotion-laden language.

People adding this encryption as a second layer on top of the existing
encryption layer (typically X25519), rather than throwing away the
existing layer, are limiting the damage if the post-quantum encryption
turns out to be breakable. This has negligible cost.

> you according to this paper don’t agree that

If you have a question about something I wrote, please quote it and ask
the question.

---D. J. Bernstein



Re: [TLS] What is the TLS WG plan for quantum-resistant algorithms?

2023-11-07 Thread D. J. Bernstein
Yoav Nir writes:
> To justify a hybrid key exchange you need people who are both worried
> about quantum computers and worried about cryptanalysis or the new
> algorithms, but are willing to bet that those things won’t happen at
> the same time. Or at least, within the time where the generated key
> still matters.

Google and Cloudflare encrypted quite a bit of actual user data using
SIKE:

   https://blog.cloudflare.com/the-tls-post-quantum-experiment/

The only reason this didn't give the user data away to today's attackers
is that Google and Cloudflare had the common sense to insist that any
post-quantum algorithm be added as a second layer of encryption on top
of the existing X25519 layer, rather than removing the existing layer.

That was in 2019. For anyone who thinks a few years of subsequent study
were enough for the public to identify which post-quantum cryptosystems
are breakable, it's useful to look at NIST's official report

   https://web.archive.org/web/20220705160405/https://nvlpubs.nist.gov/nistpubs/ir/2022/NIST.IR.8413.pdf

in July 2022 saying that

   * SIKE is "being considered for future standardization";

   * regarding NIST deciding to throw away FrodoKEM: "While NIST intends
 to select at least one additional KEM not based on structured
 lattices for standardization after the fourth round, three other
 KEM alternates (BIKE, HQC, and SIKE) are better suited than
 FrodoKEM for this role";

   * "SIKE remains an attractive candidate for standardization because
 of its small key and ciphertext sizes";

   * regarding NIST delaying a decision on Classic McEliece: "For
 applications that need a very small ciphertext, SIKE may turn out
 to be more attractive";

   * regarding torsion-point attacks: "there has been no impact on
 SIKE".

SIKE had been advertised in 2021 (https://eprint.iacr.org/2021/543) as
"A decade unscathed". I think I was the only person speaking up to
object (https://twitter.com/hashbreaker/status/1387779717370048518), and
as far as I know there was only one other cryptographer on record
recommending against using SIKE.

---D. J. Bernstein



Re: [TLS] Revised hybrid key exchange draft

2022-01-24 Thread D. J. Bernstein
Nimrod Aviram writes:
> The construction is proven to satisfy this property under precise
> assumptions about its components.

So, to be clear, the statement that a function F provides "provable
security" means that there's a proof that F achieves the security
property under discussion (in this case, dual PRF) under precise
assumptions about various functions declared to be "components" of F?

Katz and Lindell wrote that "One of the key intellectual contributions
of modern cryptography has been the realization that formal definitions
of security are _essential_ prerequisites for the design, usage, or
study of any cryptographic primitive or protocol." Surely the concept of
"provable security" should be defined, so that everyone can evaluate
claims that particular functions are "provably secure".

> We think the resulting assumptions are reasonable when SHA-256 is used
> in the instantiation

To support security evaluation, can you please list the assumptions
regarding SHA-256? Also, for people who prefer SHA-3, can you please
list any differences in the case of SHA-3? Thanks in advance.

Btw, I did read the paper before asking questions. Section 5 doesn't
have a clearly labeled list of hash-function hypotheses, never mind
proofs. Sometimes there was a clear flow of logic from something that I
would presume was intended as a hypothesis, but in general the section
doesn't follow basic falsifiability rules. A cryptanalyst wants to see
authors committing to a clear statement of the hypotheses being made
about SHA-256, so that the work for breaking those hypotheses is given
appropriate credit, rather than being met by a change in the rules.

---Dan



Re: [TLS] Revised hybrid key exchange draft

2022-01-24 Thread D. J. Bernstein
Nimrod Aviram writes:
  [ regarding the "dual-PRF" security property ]
> Our construction satisfies this property.

To make sure I understand:

   (1) You mean that the construction is _conjectured_ to satisfy this
       property, i.e., to be a dual PRF? There must be some sort of
       limit on the hash functions allowed here; is SHA-256 allowed?

   (2) The basis for this conjecture is your previous claim that the
       construction provides "provable security"?

   (3) Meanwhile you claim that the H(x,y) construction used in the
       hybrid-key-exchange draft doesn't provide "provable security"?

In any case, can you please clarify what precisely you mean by "provable
security" in the previous claim that the construction provides "provable
security"? Clarity is a prerequisite for evaluation of the claim. Thanks
in advance.

---Dan



Re: [TLS] Revised hybrid key exchange draft

2022-01-24 Thread D. J. Bernstein
Nimrod Aviram writes:
> To summarize, we recommend using our new proposed construction. It’s fast,
> easy to implement, and provides provable security.

The baseline construction is faster and is easier to implement, so
you're saying it doesn't provide "provable security"? Can you please
clarify what precisely you mean by "provable security" here? 

---Dan



Re: [TLS] Confirming consensus: TLS1.3->TLS*

2016-11-20 Thread D. J. Bernstein
The messages on the list seem to be perfectly split between "TLS 1.3"
and "TLS 4". I suspect that the "TLS 2017" idea will break this impasse:

   * it shares the fundamental advantage that led to the "TLS 4" idea;
   * it has the additional advantage of making the age obvious;
   * it eliminates the "4 sounds too much like 3" complaint; and
   * it eliminates the "where are TLS 2 and TLS 3?" complaint.

Perhaps it's worth starting a poll specifically between "TLS 1.3" and
"TLS 2017"? Or at least asking whether the new "TLS 2017" option would
swing some previous opinions?

Of course people who prioritize retaining the existing "TLS 1.3"
mindshare will be just as unhappy with "TLS 2017" as with "TLS 4", but
they'll get over it within a few years. :-)

---Dan



Re: [TLS] Confirming consensus: TLS1.3->TLS*

2016-11-18 Thread D. J. Bernstein
The largest number of users have the least amount of information, and
they see version numbers as part of various user interfaces. It's clear
how they will be inclined to guess 3>1.3>1.2>1.1>1.0 (very bad) but
4>3>1.2>1.1>1.0 (eliminating the problem as soon as 4 is supported).

We've all heard anecdotes of 3>1.2>1.1>1.0 disasters. Even if this type
of disaster happens to only 1% of site administrators, it strikes me as
more important for security than any of the arguments that have been
given for "TLS 1.3". So I would prefer "TLS 4".

Yes, sure, we can try to educate people that TLS>SSL (but then we're
fighting against tons of TLS=SSL messaging), or educate them to use
server-testing tools (so that they can fix the problem afterwards---but
I wonder whether anyone has analyzed the damage caused by running SSLv3
for a little while before switching the same keys to a newer protocol),
and hope that this education fights against 3>1.3 more effectively than
it fought against 3>1.2. But it's better to switch to a less error-prone
interface that doesn't require additional education in the first place.

---Dan



Re: [TLS] TLS 1.2 Long-term Support Profile draft posted

2016-03-20 Thread D. J. Bernstein
Peter Gutmann writes:
> compressed points are patented

Which patent are you referring to?

US 6141420, I suppose. Let's ignore the question of what's in the prior
art (1992 Harper--Menezes--Vanstone) and what's actually claimed in the
patent. Are you aware that this patent expired in July 2014?

> Everything uses uncompressed points at the moment without any problems

The same way that everyone uses C and C++ without any problems?

https://www.nds.rub.de/research/publications/ESORICS15/ completely broke
two implementations of uncompressed (x,y) ECDH in TLS. The problem, of
course, is that the implementors forgot to check that the input (x,y)
was on the curve.

OpenSSL _does_ try to check, but it seems that this check is sometimes
affected by recently announced bugs in OpenSSL's carry handling. The
impact isn't clear---analyzing this sort of thing is very difficult.
Using compressed (x,y) significantly reduces the amount of rarely tested
checking code for the implementor to screw up.

More importantly, there's a third option (introduced in Miller's
original ECC paper), namely using just x-coordinates. Section 4.1 of
https://cr.yp.to/papers.html#nistecc explains how X25519 uses this third
option to proactively and robustly avoid this type of attack.
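
For illustration, here's x-only DH in Python via the pyca/cryptography
package (an assumed dependency); note that there's no (x,y) on-curve
check for the implementor to forget, since no y-coordinate is ever
transmitted:

   from cryptography.hazmat.primitives.asymmetric.x25519 import (
       X25519PrivateKey, X25519PublicKey)

   a, b = X25519PrivateKey.generate(), X25519PrivateKey.generate()
   a_wire = a.public_key().public_bytes_raw()   # 32 bytes: just x
   b_wire = b.public_key().public_bytes_raw()
   assert (a.exchange(X25519PublicKey.from_public_bytes(b_wire))
           == b.exchange(X25519PublicKey.from_public_bytes(a_wire)))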

---Dan
