```
My paper ``Understanding brute force'' explains an attack with a much
better price-performance ratio than the attack described by Biryukov:

http://cr.yp.to/talks.html#2005.05.27
http://cr.yp.to/papers.html#bruteforce

Biryukov's central point regarding key amortization was made earlier
(and, I think, more clearly) in my paper. My paper also analyzes the
merits of various defenses against the attack.

---D. J. Bernstein, Associate Professor, Department of Mathematics,
Statistics, and Computer Science, University of Illinois at Chicago

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]

```

### Re: Optimisation Considered Harmful

```
Here's an amusing example of optimization: On the PowerPC 7450 (G4e),
integer multiplication is faster by one cycle if the second operand is
between -131072 and 131071. Ever use multiplication in cryptography?

Jerrold Leichter writes:
There are only a couple of roads forward:
- Develop algorithms that offer reasonable performance even if
implemented in unoptimized ways.

We already have some secret-key ciphers that naturally run in constant
time on current CPUs. One example is my own Salsa20, which is a quite
conservative design but still faster than AES. Some other examples are
Phelix and good old TEA.

Furthermore, most CPUs have constant-time floating-point multiplication.
Floating-point multiplication usually turns out to be the fastest way to
do integer arithmetic in RSA etc., although it takes some effort to use.

The quick summary is that we _can_ have high-speed constant-time
cryptography. The algorithms are already there---although they need to
be distinguished from the impostors such as Rijndael. The implementation
techniques are already there---although they need to be more widely
understood and used.

I can't recall ever seeing a benchmark that reported the
variance of timing across instances of the loop.

For years now I've been reporting multiple individual timings rather
than averages. See, e.g., http://cr.yp.to/mac/poly1305-20041101.pdf,
Appendix B; those are the AES timings that prompted me to start
investigating cache-timing attacks.

(Subsequent versions of the poly1305 paper report even more timing
information but, for space reasons, have to compress the information
into small graphs. Big tables are on the web.)

---D. J. Bernstein, Associate Professor, Department of Mathematics,
Statistics, and Computer Science, University of Illinois at Chicago

```
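The floating-point remark above is worth unpacking: an IEEE double has a 53-bit significand, so products of integers below 2^26 are computed exactly, and a multiprecision multiply can be assembled from float limb products. A minimal sketch (my own illustration, not code from the paper; the limb size and carry handling are my choices):

```python
def split_limbs(n, limb_bits=26):
    """Split a non-negative integer into base-2^26 limbs (little-endian)."""
    mask = (1 << limb_bits) - 1
    limbs = []
    while n:
        limbs.append(float(n & mask))
        n >>= limb_bits
    return limbs or [0.0]

def fp_multiply(a, b, limb_bits=26):
    """Multiply two non-negative integers using only float limb products."""
    la, lb = split_limbs(a, limb_bits), split_limbs(b, limb_bits)
    acc = [0.0] * (len(la) + len(lb))
    for i, x in enumerate(la):
        for j, y in enumerate(lb):
            acc[i + j] += x * y  # exact: each factor is below 2^26
    # Carry propagation, done back in integer arithmetic.
    result, shift = 0, 0
    for c in acc:
        result += int(c) << shift
        shift += limb_bits
    return result
```

Real code has to keep each accumulator below 2^53 (fine for a few limbs, as here; larger operands need scheduled carries), which is part of the "some effort" the post alludes to.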

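The reporting style described above, individual timings rather than a single average, can be sketched as follows (my own illustration, using Python's `perf_counter_ns`; a real cipher benchmark would read a cycle counter):

```python
import time

def sample_timings(fn, reps=25):
    """Collect one timing per call instead of a single averaged figure."""
    timings = []
    for _ in range(reps):
        t0 = time.perf_counter_ns()
        fn()
        timings.append(time.perf_counter_ns() - t0)
    return timings

# Report the whole distribution: an average hides exactly the variance
# that cache effects (and cache-timing attacks) live on.
ts = sorted(sample_timings(lambda: sum(range(1000))))
print('min/median/max ns:', ts[0], ts[len(ts) // 2], ts[-1])
```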
### Re: Protecting against the cache-timing attack.

```
Jon Callas writes:
So let's conduct a small thought experiment. Take the set of timings T,
where T is the set of timings of all possible AES keys on a given
computer. (It's going to be different depending on CPU, compiler,
memory, etc.)
Order that set so that the shortest timing is t_0 and the longest one
is t_omega. Obviously, if you delay until t_omega, you have a
constant-time encryption.

A network packet arrives during your AES computation, takes tens of
thousands of cycles to handle, and knocks a few of your table lines
(perhaps chosen by a remote attacker?) out of both L1 and L2 cache.

Part of the problem here is that t_omega is huge, forcing a huge delay.
Another part of the problem is that your timings are now interacting
with the timings of other processes. An extreme form of this effect is
illustrated by the very fast hyperthreading attack; I'm sure that some
bored undergraduate will figure out a remote exploit for a less extreme
form of the effect.

Section 13 of my paper discusses a solution to the interrupt problem,
but that solution requires massive software changes. I'm not aware of
simpler solutions.

---D. J. Bernstein, Associate Professor, Department of Mathematics,
Statistics, and Computer Science, University of Illinois at Chicago

```
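The pad-to-t_omega idea quoted above fits in a few lines (my illustration; busy-waiting on a wall-clock timer stands in for whatever delay mechanism a real implementation would use). The objection is visible in the sketch: if an interrupt pushes the computation past the budget, the padding silently stops helping.

```python
import time

def padded_call(fn, budget_ns):
    """Run fn, then busy-wait until a fixed time budget has elapsed,
    so fast and slow keys finish at (nominally) the same moment."""
    deadline = time.perf_counter_ns() + budget_ns
    result = fn()
    while time.perf_counter_ns() < deadline:
        pass  # burn the leftover time; duration reveals nothing
    return result  # if fn overran the budget, the timing leak is back
```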

### Re: AES cache timing attack

```
http://cr.yp.to/talks.html#2005.06.01 has slides that people might find
useful as an overview of what's going on. In particular, there's a list
of six obstacles to performing array lookups in constant time. People
who mention just one of the obstacles are oversimplifying the problem.

Hal Finney writes:
The one extra piece of information it does return is the encryption of an
all-zero packet.  So there is a small element of chosen plaintext involved.

Known plaintext, not chosen plaintext. I used timings to identify 105
key bits and then compared the remaining 2^23 keys against a known
plaintext-ciphertext pair, namely the encrypted zero that you mention.

One can carry out the final search with nothing more than known
ciphertext: try decrypting the ciphertext with each key and see which
result looks most plausible. It should even be possible to carry out a
timing attack with nothing more than known ciphertext, by focusing on
either the time variability in the last AES-encryption round or the time
variability in the first AES-decryption round.

Of course, most applications have some known plaintext, and some
applications allow chosen plaintext, so even a chosen-plaintext attack
is considered to be a fatal flaw in a cryptographic standard. The user
isn't supposed to have to worry that someone who influences part of the
plaintext will be able to read all the rest.

---D. J. Bernstein, Associate Professor, Department of Mathematics,
Statistics, and Computer Science, University of Illinois at Chicago

```
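The final step described above, completing a mostly-known key by trial encryption against one known plaintext/ciphertext pair, is easy to sketch. Everything below is my illustration, not the attack code itself: a hash stands in for AES, and the search space is scaled down from 2^23 so the example runs quickly.

```python
import hashlib

def toy_encrypt(key, plaintext):
    """Stand-in for AES: any deterministic keyed function works here."""
    return hashlib.sha256(key + plaintext).digest()

def final_search(known_bits, unknown, plaintext, ciphertext):
    """Timing analysis has fixed the high key bits; enumerate the
    remaining `unknown` low bits and test each candidate key."""
    for low in range(1 << unknown):
        key = ((known_bits << unknown) | low).to_bytes(16, 'big')
        if toy_encrypt(key, plaintext) == ciphertext:
            return key
    return None
```

With only known ciphertext, the equality test would be replaced by a plausibility check on the trial decryption, as the post notes.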