Re: [cryptography] Implementing constant-time string comparison

2014-06-18  D. J. Bernstein
John Gilmore writes, on a semi-moderated mailing list:
> A bugfree C compiler

Bwahahaha. That's funny.

A large part of the game here is to envision the screwups that people
will make and build systems that survive those screwups. For example,
it's common to have C code such as

   x ? MACRO_A : MACRO_B

where the two macros happen, in this compilation, to be the same:

   x ? hello : hello

So it's perfectly reasonable to see a C compiler producing just

   hello

by folding the code across the branch---the compiler writer saw this
pattern often enough and realized that it wouldn't be hard to optimize.
Of course, this optimization is valid only if the two branch results are
identical compile-time constants (after constant propagation etc.), so
the compiler writer does a compile-time comparison of those constants:
== for int, strcmp() for strings, etc.

What I've just described isn't exactly right---it's a compiler bug, a
bug that really occurred in gcc some years ago. Did you notice the bug
as I was describing it? Would you have produced test cases that catch
the bug?

> would be unable to shortcut the loop if the
> arguments were merely declared as pointers to volatile storage

The compiler would be required to access the storage but would still be
allowed to skip the intermediate calculations. For example, it could
convert

   int result = 0;
   int iszero;
   for (i = 0;i < n;++i) result |= (x[i] ^ y[i]);
   iszero = (result == 0);
   return iszero - 1;

into

   int iszero = 1;
   for (i = 0;i < n;++i) if (x[i] ^ y[i]) iszero = 0;
   return iszero - 1;

or into

   int iszero = 1;
   for (i = 0;i < n;++i) if (x[i] != y[i]) iszero = 0;
   return iszero - 1;

or into

   for (i = 0;i < n;++i) if (x[i] != y[i]) goto shortcut;
   return 0;
   shortcut: while (++i < n) { x[i]; y[i]; }
   return -1;

without violating the C language specification. You're deluding yourself
if you think that the guarantees made by the C specification are
adequate for writing constant-time code.

What's the chance of a compiler screwing things up in this way? This
isn't a question of language lawyering; it's a question of what the
compiler writer is thinking. Has the compiler writer seen examples where
it might seem useful to replace

   result = 0; result |= ...; result |= ...; result == 0

with

   iszero = 1; if (...) iszero = 0; if (...) iszero = 0; iszero

which would then hook nicely into early exits? Sure, the early exits
should check for volatile memory accesses in the skipped calculations,
but this doesn't mean that the replacement has to check for volatile.

NaCl has no data flow from secret data to array indices, and no data
flow from secret data to branch conditions---including == inputs. In
particular, to screw up the crypto_verify() timing, the compiler writer
would have to do two things wrong: not just the type of replacement
described above, but also converting NaCl's arithmetic to branching in
the first place. This isn't a common type of arithmetic, and so far we
haven't heard any reports of compilers screwing it up.
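
To make that concrete, here is the shape of the arithmetic in question:
a comparison in the style of NaCl's crypto_verify (a sketch, not the
verbatim NaCl source), where the last line turns the accumulated
difference into 0 or -1 without ever feeding secret data to == or to a
branch condition:

   int verify_n(const unsigned char *x,const unsigned char *y,int n)
   {
     unsigned int d = 0;
     int i;
     for (i = 0;i < n;++i) d |= x[i] ^ y[i]; /* d == 0 exactly when all bytes match */
     return (1 & ((d - 1) >> 8)) - 1;        /* 0 if equal, -1 otherwise */
   }

If d is 0 then d - 1 is 0xffffffff, the shift-and-mask gives 1, and the
function returns 0; any d in 1..255 gives 0 and the function returns -1.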

Of course, to properly assign blame for screwups, and maybe to prevent
the screwups from happening in the first place, it would be useful to
have the code written in a language whose semantics _do_ include timing.
To some extent NaCl is written in asm, and to some extent asm semantics
do include timing, although the situation could be improved:

   http://blog.cr.yp.to/20140517-insns.html

---Dan


Re: [cryptography] ECC curves that are safe safecurves.cr.yp.to

2014-01-20  D. J. Bernstein
Peter Gutmann writes (on one of the harder-to-use mailing lists):
> Some of their objections seem pretty subjective though, I mean they
> don't like the Brainpool curves

Actually, the Brainpool curves _meet_ the rigidity requirement that
you're alluding to. The SafeCurves site displays this in the summary
table (bottom of http://safecurves.cr.yp.to) and in the detailed
rigidity analysis (http://safecurves.cr.yp.to/rigid.html).

What's confusing you, apparently, is the distinction between fully
rigid and somewhat rigid. The question here is quantitative and
objective: how many different curves could have been generated within
the limits imposed by the curve-generation method?

Brainpool, in particular, generated curves using a method with _some_
details that are not explained anywhere in the Brainpool documents. This
structure gave _some_ unnecessary power to the people who chose those
details. If those people had (for whatever reason) secretly decided to
target a one-in-a-million type of curve, they would have been able to do
so by varying these details. The general public has no way to verify
that this didn't happen.

Something else that might be confusing you is the gap between what's
absolutely known to be required for security and what conservative
cryptographers recommend. For example, ECC standards prohibit small
discriminants merely to eliminate a potential source of concern---not to
stop any actual attacks. Similarly, the SafeCurves site describes a lack
of rigidity as a potential lack of assurance in a corner case---not
as something that's actually known to allow attacks.

I'm not aware of any dispute regarding any of this, so I don't know why
you describe it as subjective. Perhaps you should read more carefully
through what the SafeCurves site actually says.

---Dan


[cryptography] another Certicom patent

2014-01-07  D. J. Bernstein
Dan Brown writes, on the semi-moderated c...@irtf.org list:
> I agree with your multiple PK algs suggestion, for parties who can afford it.
> What about sym key algs? Maybe too costly for now?
> By the way, this kind of idea goes back at least as far as 1999 from
> Johnson and Vanstone under the name of "resilient cryptographic schemes".

What Dan Brown carefully avoids mentioning here is that his employer
holds patents US7797539, US8233617, USRE44670 (issued just last month),
and CA2259738 on "Resilient cryptographic schemes". Presumably this is
also why he's so enthusiastic about the idea.

Of course, the idea of using multiple cryptographic algorithms together
has a long history before the 1999.01.20 priority date of the patent
(see, e.g., http://link.springer.com/article/10.1007%2FBF02620231). The
idea also has very little use, for several obvious reasons:

   * We have enough problems even getting _one_ algorithm deployed.

   * For applications with larger cost limits, we obtain much more
     security by pumping up the key size, rounds, etc. of a single
     algorithm rather than by combining algorithms.

However, no matter how minor the idea is, it's interesting to see how
Dan Brown pushes the idea on a standardization-related mailing list
without mentioning his employer's related patents.

There's a common myth that security is the primary design goal for
cryptographic standards. In reality, security might be somewhere on the
list of goals, but it certainly isn't at the top---it's constantly being
compromised for the sake of other goals that have more obvious value for
the participants in the standardization process.

---Dan


[cryptography] ECC patent FUD revisited

2014-01-05  D. J. Bernstein
NSA's Kevin Igoe writes, on the semi-moderated c...@irtf.org list:
> Certicom has granted permission to the IETF to use the NIST curves,
> and at least two of these, P256 and P384, have p = 3 mod 4.  Not
> being a patent lawyer, I have no idea what impact the Certicom patents
> have on the use of newer families of curves, such as Edwards curves.

There are several interesting aspects to this patent FUD. Notice that
the FUD is being used to argue against switching to curves that improve
ECC security. Notice also the complete failure to specify any patent
numbers---so the FUD doesn't have any built-in expiration date, and
there's no easy way for the reader to investigate further.

http://www.certicom.com/index.php/licensing/certicom-ip says that
Certicom "discovered and patented many fundamental innovations" and has
"more than 350 patents and patents pending worldwide". This sounds
impressive until you look at what the portfolio actually contains.

The reality is that Certicom has contributed essentially nothing to
state-of-the-art ECC. Its patent portfolio consists of a few fringe
ideas and a few obsolete ideas---nothing essential for mainstream ECC
usage. Nobody needs MQV, for example: traditional DH achieves the same
security goals in a much more straightforward way, and very few people
notice the marginal performance benefit provided by MQV.

The reason that Certicom has so many patents and patents pending
worldwide, despite having contributed so few ideas, is that it keeps
splitting its patent applications. For example, the original MQV patent
filings in early 1995 ended up being split into an incredibly redundant
collection of US patents 5761305, 5889865, 5896455, 5933504, 6122736,
6487661, 7243232, 7334127, 7779259, 8090947, and 8209533, not to mention
the corresponding non-US patents CA2237688, DE69636815, EP0873617, etc.

---Dan


Re: [cryptography] [Cryptography] RSA is dead.

2013-12-23  D. J. Bernstein
Peter Gutmann writes (on the moderated cryptogra...@metzdowd.com list):
> Any sufficiently capable developer of crypto software should be
> competent enough to backdoor their own source code in such a way that
> it can't be detected by an audit.

Some of us have been working on an auditable crypto library:

   https://twitter.com/TweetNaCl

The original, nicely indented, version is 809 lines, 16621 bytes. The
Python script to print tweetnacl.h is 1811 bytes. The accompanying paper
(to be posted soon) says "Of course, compilers also need to be audited
(or to produce proofs of correct translations), as do other critical
system components---but there's progress on that too." In general it
seems that Peter's fatalist view consists entirely of "nobody has done
this yet" rather than "it's impossible".

TweetNaCl's speed doesn't match the asm in NaCl, but if you can tolerate
OpenSSL's 4.2 million cycles for RSA-2048 decryption then you should be
able to tolerate TweetNaCl's 2.5 million cycles for Curve25519.
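
To give a sense of how small the API surface being audited is, here is a
complete crypto_box round trip against the NaCl/TweetNaCl C API. This is
a sketch: real code needs a real randombytes() (TweetNaCl leaves that to
the application), a nonce that never repeats for a given key pair, and
error handling.

   #include <string.h>
   #include "tweetnacl.h"

   void randombytes(unsigned char *,unsigned long long); /* supplied by the application */

   int example(void)
   {
     unsigned char apk[crypto_box_PUBLICKEYBYTES], ask[crypto_box_SECRETKEYBYTES];
     unsigned char bpk[crypto_box_PUBLICKEYBYTES], bsk[crypto_box_SECRETKEYBYTES];
     unsigned char n[crypto_box_NONCEBYTES];
     unsigned char m[crypto_box_ZEROBYTES + 5]; /* 32 zero bytes, then the plaintext */
     unsigned char c[sizeof m], p[sizeof m];

     crypto_box_keypair(apk,ask);               /* Alice's keys */
     crypto_box_keypair(bpk,bsk);               /* Bob's keys */
     randombytes(n,sizeof n);                   /* nonce: must never repeat for this pair */

     memset(m,0,crypto_box_ZEROBYTES);          /* the C API requires these leading zeros */
     memcpy(m + crypto_box_ZEROBYTES,"hello",5);

     crypto_box(c,m,sizeof m,n,bpk,ask);        /* Alice encrypts and authenticates to Bob */
     if (crypto_box_open(p,c,sizeof m,n,apk,bsk)) return -1; /* rejects forgeries */
     return memcmp(p + crypto_box_ZEROBYTES,"hello",5);      /* 0 on success */
   }

The zero padding in front of the plaintext (and the 16 zero bytes that
crypto_box leaves in front of the ciphertext) is an artifact of the C
API; higher-level wrappers hide it.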

---Dan


Re: [cryptography] What is the state of patents on elliptic curve cryptography?

2013-08-25  D. J. Bernstein
Zooko Wilcox-O'Hearn writes (on the old cryptogra...@metzdowd.com list):
> I'd be keen to see a list of potentially-relevant patents which have
> expired or are due to expire within the next 5 years.

http://ed25519.cr.yp.to/software.html includes a chart and pointers.
Pretty much the entirety of the ECC patent FUD comes from patents that

   * have expired by now (Schnorr in 2008, Crandall in 2011, etc.) and
   * never claimed in the first place to cover anything that we're
     actually doing in real-world ECC.

The only exception I'm aware of is the point-compression patent 6141420,
but there's very solid prior art for that one, and in any case it'll
expire in July 2014.

---D. J. Bernstein
   Research Professor, Computer Science, University of Illinois at Chicago


Re: [cryptography] urandom vs random

2013-08-16  D. J. Bernstein
Aaron Toponce writes:
> Cryptographers don't like the idea that it's possible, even if it's
> excessively remote, and highly improbable. This is why you see suggestions
> to use /dev/random for long term SSH, SSL and OpenPGP keys.

Cryptographers are certainly not responsible for this superstitious
nonsense. Think about this for a moment: whoever wrote the /dev/random
manual page seems to simultaneously believe that

   (1) we can't figure out how to deterministically expand one 256-bit
   /dev/random output into an endless stream of unpredictable keys
   (this is what we need from urandom), but

   (2) we _can_ figure out how to use a single key to safely encrypt
   many messages (this is what we need from SSL, PGP, etc.).

For a cryptographer this doesn't even pass the laugh test.
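
To make (1) concrete: the deterministic expansion is nothing more than a
stream cipher. Here's a sketch using NaCl's crypto_stream (XSalsa20 in
NaCl), treating the 24-byte nonce as a counter so that one 256-bit seed
yields as many independent-looking 32-byte keys as anyone will ever ask
for:

   #include "crypto_stream.h"

   void nth_key(unsigned char out[32],
                const unsigned char seed[crypto_stream_KEYBYTES],
                unsigned long long counter)
   {
     unsigned char n[crypto_stream_NONCEBYTES] = {0};
     int i;
     for (i = 0;i < 8;++i) n[i] = (counter >> (8 * i)) & 0xff; /* distinct nonce per key */
     crypto_stream(out,32,n,seed);
   }

Anyone who trusts a 256-bit AES or Salsa20 key to encrypt gigabytes of
traffic is already trusting exactly this kind of expansion; that's
point (2).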

I'm not saying that /dev/urandom has a perfect API. It's disappointingly
common for vendors to deploy devices where the randomness pool has never
been initialized; BSD /dev/urandom catches this configuration bug by
blocking, but Linux /dev/urandom (unlike Linux /dev/random) spews
predictable data, causing (e.g.) the widespread RSA security failures
documented on http://factorable.net. But fixing this configuration bug
has nothing to do with the /dev/random superstitions.

---D. J. Bernstein
   Research Professor, Computer Science, University of Illinois at Chicago


Re: [cryptography] Web Cryptography API (W3C Working Draft 8 January 2013)

2013-03-10  D. J. Bernstein
Ryan Sleevi writes:
> What use case makes the NaCl algorithms (whose specification is merely
> 'use NaCl', which boils down to "Use Salsa+Curve25519") worthwhile?

Here's the abstract of "The security impact of a new cryptographic
library" (http://cr.yp.to/highspeed/coolnacl-20120725.pdf):

   This paper introduces a new cryptographic library, NaCl, and explains
   how the design and implementation of the library avoid various types
   of cryptographic disasters suffered by previous cryptographic
   libraries such as OpenSSL. Specifically, this paper analyzes the
   security impact of the following NaCl features: no data flow from
   secrets to load addresses; no data flow from secrets to branch
   conditions; no padding oracles; centralizing randomness; avoiding
   unnecessary randomness; extremely high speed; and cryptographic
   primitives chosen conservatively in light of the cryptanalytic
   literature.

The paper cites and analyzes cryptographic failures in SSH, ECDSA in
SSL, RSA in SSL, Linux disk encryption, the PlayStation 3, et al. What's
particularly convincing is to look at _newer_ failures, such as the very
recent Lucky 13 attack recovering plaintext from SSL, and observe that
those failures would have been prevented by precisely the NaCl features
identified in this document. (Lucky 13 relies on padding oracles and
on these prohibited forms of data flow.)

These NaCl features are at a quite different level from the features
advertised by cryptographic APIs from the dark ages (e.g., "we support
MD5 and RSA-512"), and in many cases are in direct conflict with those
features. This is one of the reasons that NaCl has a simple high-level
API. Of course, simplicity has other benefits.

As for specification, there's a state-of-the-art "Cryptography in NaCl"
document (http://cr.yp.to/highspeed/naclcrypto-20090310.pdf) that has
complete self-contained definitions of every aspect of key generation,
encryption, and authentication involved in NaCl's crypto_box(); plus an
end-to-end example expressed both as

   * self-contained Python/Sage test scripts that compute every detail
     of the crypto and
   * simple test programs using NaCl,

of course producing the same results; plus security notes. Someone who
wants to write a new implementation that interoperates with crypto_box()
doesn't need to read anything else. I'm not saying that this is the end
of the story---implementors should also learn about crypto_sign(),
constant-time code, and more---but it's way ahead of the documentation
mess that one has to read to reimplement other existing protocols with
similar functionality.
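
For comparison, here is what a complete crypto_sign() round trip looks
like against the NaCl C API---again a sketch, with error handling
omitted:

   #include "crypto_sign.h"

   int sign_and_verify(void)
   {
     unsigned char pk[crypto_sign_PUBLICKEYBYTES], sk[crypto_sign_SECRETKEYBYTES];
     const unsigned char m[] = "attack at dawn";
     unsigned char sm[sizeof m + crypto_sign_BYTES]; /* signed message: signature plus m */
     unsigned char recovered[sizeof sm];
     unsigned long long smlen, mlen;

     crypto_sign_keypair(pk,sk);
     crypto_sign(sm,&smlen,m,sizeof m,sk);           /* no per-message randomness needed */
     /* returns 0, and recovers m, only if the signature is valid */
     return crypto_sign_open(recovered,&mlen,sm,smlen,pk);
   }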

> And how can we be sure that the problems that NaCl sets out to solve
> are the same problems developers want or need to solve, especially
> when all the evidence suggests otherwise?

The main reason for a developer to use a cryptographic library is to
protect data against espionage, sabotage, etc. There's ample evidence
that most cryptographic libraries _don't_ actually manage to protect
data---and imitating their decisions is simply going to produce more
security disasters.

Of course, this doesn't imply that NaCl is what developers want, but
high-profile applications such as DNSCrypt are in fact using NaCl in
ways that seem easily generalizable to other applications.

---D. J. Bernstein
   Research Professor, Computer Science, University of Illinois at Chicago


[cryptography] Last call: DIAC: Directions in Authenticated Ciphers

2012-06-19  D. J. Bernstein
For people interested in the future of secret-key cryptography, the list
of talks for next month's ECRYPT DIAC workshop has now been posted:

   http://hyperelliptic.org/DIAC

Registration is still open today:

   http://hyperelliptic.org/DIAC/registration.html

Here's the list of invited speakers and refereed papers:

   * Invited: Joan Daemen, STMicroelectronics
   * Invited: David McGrew, Cisco
   * Invited: Phillip Rogaway, University of California at Davis
   * Invited: Palash Sarkar, ISI Kolkata
   * Refereed papers:
     - A Do-It-All-Cipher for RFID: design requirements (extended abstract)
       (Saarinen, Engels)
     - AEGIS: a fast authenticated encryption algorithm
       (Wu, Preneel)
     - An improved hardware implementation of the Grain-128a stream cipher
       (Mansouri, Dubrova)
     - Authenticated encryption in civilian space missions: context and requirements
       (Sanchez, Fischer)
     - Cryptanalysis of EAXprime
       (Minematsu, Lucks, Morita, Iwata)
     - Hash-CFB
       (Forler, McGrew, Lucks, Wenzel)
     - Heavy Quark for secure AEAD
       (Aumasson, Knellwolf, Meier)
     - How fast can a two-pass mode go? A parallel deterministic authenticated encryption mode for AES-NI
       (Aoki, Iwata, Yasuda)
     - Lightweight AES-based authenticated encryption
       (Bogdanov, Mendel, Regazzoni, Rijmen)
     - Permutation-based encryption, authentication and authenticated encryption
       (Bertoni, Daemen, Peeters, Van Assche)
     - SipHash: a fast short-input PRF
       (Aumasson, Bernstein)
     - Stronger security guarantees for authenticated encryption
       (Boldyreva, Paterson, Stam)
     - Suggestions for hardware evaluation of cryptographic algorithms
       (Gurkaynak)

See you in Stockholm!

---D. J. Bernstein
   Research Professor, Computer Science, University of Illinois at Chicago


[cryptography] DIAC: Directions in Authenticated Ciphers

2012-05-02  D. J. Bernstein
The DIAC submission page is now open, with a deadline at the end of
Monday 7 May (American Samoa time):

   http://hyperelliptic.org/conferences/diac/iChair/submit.php

DIAC is an ECRYPT-sponsored workshop that will take place 5--6 July in
Stockholm, in particular evaluating the idea of a new competition for
authenticated ciphers. The call for papers asks for submissions on new
components, combinations, attacks, and implementations, but also asks
for submissions discussing requirements---what users actually want.
Submissions of panel proposals, white papers, lists of desiderata, etc.
are encouraged, and there are no particular length requirements.

I should emphasize that an authenticated-cipher competition would be
much more than an AE mode competition. There are certainly people
working on new ways to use AES, but there are many more people working
on new authenticators, new block ciphers, new stream ciphers, new
ciphers with built-in authentication mechanisms, etc.

Zooko Wilcox-O'Hearn writes:
> authenticated encryption can't satisfy any of my use cases!

Of course it can! Evidently you want to combine it with public-key
signatures, which will render the secret-key authenticator useless, so
for efficiency you'd like to suppress that authenticator. This doesn't
work well with something like AES-OCB3, but it _does_ work well with
something like AES-GCM, giving you AES-CTR.

There are clear engineering advantages to having an AES-CTR module
that's reused by AES-GCM (for applications that want the authentication)
and by Tahoe-LAFS. On the other hand, AES-OCB3 encrypts faster. If you
help people see Tahoe-LAFS as part of this picture then you have a
chance of influencing future work in a way that you'd find useful.

Let me again emphasize that these AES modes are only one corner of the
authenticated-ciphers topic. If we do in fact end up with hundreds of
cryptographers working on authenticated ciphers for years then I
wouldn't bet on AES (or GCM, or OCB3) being part of the final result.

ianG writes:
> the cryptographer's push for AE mode is simply the creation of a more
> perfect hammer, when our real worries are about the building, not the nail.

I agree that the building is in sorry shape, but you shouldn't paint an
overly positive view of the current hammer. Here are a few recent and
ongoing examples of failures of secret-key cryptography:

   * OpenSSH leaking some plaintext (Albrecht, Paterson, Watson).
   * OpenSSL DTLS leaking much more plaintext (AlFardan, Paterson).
   * TLS leaking cookies et al. (Dai, Moeller, Bard, Duong, Rizzo).
   * EAXprime (Smart Grid) allowing fast forgeries (Minematsu et al.).
   * Many breaks in "encrypt only; authentication is too slow" IPsec.
   * Keeloq door/car/garage RFID completely broken (Eisenbarth et al.).
   * More broken "AES is too big" RFID proposals: HB, HB+, etc.

To summarize: Yes, non-cryptographic security is a disaster, but
cryptography is a disaster too. :-)

---D. J. Bernstein
   Research Professor, Computer Science, University of Illinois at Chicago


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-16  D. J. Bernstein
> > My thoughts exactly, I've always stayed away from DLP-based PKCs
> > (except DH) because they're extraordinarily brittle, with RSA you
> > have to get entropy use right just once, with DLP PKCs you have to
> > get it right every single time you use them. For embedded systems
> > in particular that's just too risky.
> Just to make it clear, if you re-use the random input value to a DSA
> signature, you not only compromise the signature, you compromise the
> private key. In my opinion this makes DSA much more brittle than RSA
> (I wrote a paper about this for one of the early NDSS papers).

ObPlug: DSA certainly has this problem (as illustrated by the Sony PS3
disaster), but there are other discrete-log systems such as Ed25519---

   http://ed25519.cr.yp.to
   
---that completely eliminate this class of problems. There's no need for
any randomness after the initial key generation; the same message always
generates the same signature.
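
To spell out the DSA brittleness being contrasted here: if the same
nonce k is ever used to sign two messages with hashes h_1 and h_2, the
two signatures (r,s_1) and (r,s_2) share the same r, and anyone can
solve for the long-term secret x. This is textbook algebra, not specific
to any implementation:

   s_i = (h_i + x*r) / k mod q
     =>  k = (h_1 - h_2) / (s_1 - s_2) mod q
     =>  x = (s_1*k - h_1) / r mod q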

Ed25519 in particular uses just 32 bytes (256 bits) of randomness to
generate its key, and is deterministic after that. It's also reasonably
robust against deficiencies in these 256 bits: nothing goes wrong if,
e.g., the first 30% of the random bits are all replaced by 0, and the
system still isn't trivial to break if the first 70% of the random
bits are all replaced by 0.

There are of course more defenses that one can add to provide resilience
against more severe randomness deficiencies: one can start with more
random bits and hash them down to 256 bits; use repeated RDTSC calls as
auxiliary randomness input; etc. These details have essentially nothing
to do with the choice of cryptographic primitive, and the whole point of
/dev/urandom is to centralize these details and get them right rather
than having everybody reimplement them badly. It would be interesting to
understand how /dev/urandom failed for the repeated RSA primes---I'm
presuming here that /dev/urandom was in fact the main culprit.

---D. J. Bernstein
   Research Professor, Computer Science, University of Illinois at Chicago