Peter Gutmann wrote:
> Stephan Neuhaus <[EMAIL PROTECTED]> writes:
>> Concerning the practical use of AES, you may be right (even though it
>> would be nice to have some advice on what one *should* do instead).
> Definitely. Maybe time for a BCP, not just for AES but for general block
> ciphers.
I think so.
>> I find it pretty alarming that in spite of all the review that AES got,
>> [resistance to timing attacks] was not met, and in an exploitable
>> fashion to boot.
> Well, it depends on what your design assumptions were. [...] In fact I'd
> say it's actually not possible to certify resistance to timing attacks
> across all possible CPUs, because it'll always be possible to find some
> oddball CPU for which an AES-critical instruction somewhere has some
> weird characteristic that helps in an attack.
True, but what we have here is not some oddball CPU, but the fact that a
natural AES implementation on one of the most popular CPUs in existence
today has this problem. It's a problem because the algorithm (and by
extension, any natural implementation of it) isn't supposed to be
vulnerable to a timing attack.
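To make the "natural implementation" point concrete, here is a rough sketch of the table-driven pattern at issue (the table and names are hypothetical stand-ins, not actual AES code): the index into the lookup table depends on secret key material, so which cache line is touched, and hence how long the load takes, depends on the secret.

```c
#include <stdint.h>

/* Hypothetical 256-entry table standing in for an AES T-table. */
static uint32_t T[256];

/* The table index mixes a data byte with a secret key byte. The address
 * loaded (T + idx) therefore depends on the key, so whether the load hits
 * or misses the cache -- and thus how long it takes -- leaks information
 * about idx to anyone who can measure timing. */
uint32_t leaky_lookup(uint8_t data_byte, uint8_t key_byte)
{
    uint8_t idx = data_byte ^ key_byte;  /* secret-dependent index */
    return T[idx];                       /* secret-dependent address */
}
```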
> Let's say you want constant timing for at least the most common CPU
> family, [...] So in the end you've got an algorithm design that happens
> to be resistant to timing attacks on the D0 stepping of a Northwood-core
> Intel P4. Anything else and all bets are off. This doesn't seem very
> useful to me.
I don't know. That cache accesses are faster than memory accesses is
not exactly new.
I agree totally that we shouldn't insist on constant-time
implementations across all possible architectures. This way madness
lies. But the fact that it is apparently difficult to produce a fast
constant-time implementation on the P4 is definitely a warning sign,
especially when resistance to timing attacks was an explicit design goal.
How can we get fast constant-time implementations? (Or even just an
implementation that is resistant to timing attacks, which isn't
necessarily the same thing?) I don't know. But what you can't do is
solicit a cipher that is supposed to be free of timing attacks and then,
when one is found, say, "well, don't do that then" :-)
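One standard answer to the question above, sketched here under the assumption that data-cache access patterns are the only leak we care about: make the memory-access pattern independent of the secret by reading the entire table and selecting the wanted entry with a branch-free mask. This is constant-time but clearly not fast, which is exactly the tension being discussed.

```c
#include <stdint.h>

/* Constant-time table lookup (a sketch, not production code): read every
 * entry and keep the wanted one with a branch-free mask, so the sequence
 * of memory addresses touched is independent of the secret index. The
 * price is scanning all 256 entries instead of doing one load. */
uint32_t ct_lookup(const uint32_t table[256], uint8_t idx)
{
    uint32_t result = 0;
    for (uint32_t i = 0; i < 256; i++) {
        uint32_t diff = i ^ idx;
        /* is_zero is 1 iff diff == 0, computed without a branch */
        uint32_t is_zero = ((diff - 1) & ~diff) >> 31;
        uint32_t mask = (uint32_t)0 - is_zero;  /* all-ones iff i == idx */
        result |= table[i] & mask;
    }
    return result;
}
```

Bitsliced implementations, which compute the S-box as boolean logic instead of a table lookup, are the other well-known route to constant time.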
>> I think this says more about the standardization and review process than
>> about AES itself.
> I think the standardisation process went about as well as can be expected,
> given Newtonian physics-level assumptions about how CPUs work.
Again, I don't know. That cache accesses are faster than memory
accesses is very much inside the limits of "Newtonian physics-level
assumptions". If the standardizers had had a testable, implementable
phrasing of their design requirements, this embarrassing mistake could
have been avoided. Granted, I don't see at the moment how you could
phrase this so that the word "cache" does not already appear somewhere,
but I feel that this should have been possible. It's just good engineering
practice.
IIRC, the timing resistance was accepted on a theoretical argument (that
table accesses take constant time); nobody actually tried it out before
accepting it. If they had, they would have seen that the implementation
was not constant-time. I think this is bad and I still think that the
fault lies with the standardization process.