Travis writes:
>The naive countermeasure to timing attacks is to add a random delay,
>but of course that can be averaged out by repeating the computation. 
>I have never heard anyone propose a delay that is based on the input,
>and maybe some per-machine secret, so that it is unpredictable but
>constant.  Of course this wouldn't solve averaging across different
>inputs from some subset, but it would prevent averaging on the same
>input.

Sadly, I don't think this works.
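To make the proposal concrete, it might amount to something like the
following sketch.  The HMAC construction, the secret handling, and the
delay range are my own assumptions for illustration, not part of the
original suggestion:

```python
import hashlib
import hmac
import time

# Hypothetical per-machine secret; in the proposal this would be
# generated once per machine and kept private.
MACHINE_SECRET = b"per-machine-secret"

def pseudorandom_delay(input_bytes: bytes, max_delay_s: float = 0.001) -> float:
    """Derive a delay that is unpredictable to an attacker but constant
    for a given input: HMAC(secret, input) mapped into [0, max_delay_s)."""
    digest = hmac.new(MACHINE_SECRET, input_bytes, hashlib.sha256).digest()
    fraction = int.from_bytes(digest[:8], "big") / 2**64
    return fraction * max_delay_s

def guarded_operation(x: bytes) -> bytes:
    result = hashlib.sha256(x).digest()   # stand-in for the real computation
    time.sleep(pseudorandom_delay(x))     # input-dependent, repeatable delay
    return result
```

The key property is that the same input always gets the same delay, so
repeating a query yields no fresh randomness from the delay itself.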

In many cases, the observed time depends both on the input and on some
other random noise.  In such cases, averaging attacks that use the same
input over and over again will continue to work, despite the use of
a pseudorandom input-dependent delay.  For instance, think of a timing
attack on AES, where the time to compute the map X |--> AES(K,X) depends only
on K and X, but where the measured time depends on the computation time
(which might be a deterministic function of K and X) plus the network
latency (which is random).  Indeed, in this example even the computation
time might not be a deterministic function of K and X: it might depend
on the state of the cache, which might have some random component.
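A toy simulation makes the point.  The numbers below are made up: the
measured time is modeled as a secret-dependent compute time, plus an
input-dependent delay that is constant for that input, plus Gaussian
network noise.  Averaging many measurements of the same input drives the
noise term toward zero, so the attacker still recovers a stable,
input-dependent quantity:

```python
import random
import statistics

def simulate_measurements(true_time: float, fixed_delay: float,
                          noise_sd: float, n: int) -> list[float]:
    """Measured time = compute time + constant input-dependent delay
    + random network noise (fresh on every measurement)."""
    return [true_time + fixed_delay + random.gauss(0, noise_sd)
            for _ in range(n)]

random.seed(0)
samples = simulate_measurements(true_time=5.0, fixed_delay=1.7,
                                noise_sd=2.0, n=100_000)
estimate = statistics.fmean(samples)
# The noise averages away: the estimate converges to
# true_time + fixed_delay = 6.7, even though each individual
# measurement is dominated by noise.
```

The constant delay merely shifts the recovered value by an unknown
offset; it adds no variance for the averaging to fight against.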

And there are many settings where you can average across many slightly
different inputs.

So I don't think this kind of approach is likely to go anywhere, except
possibly in a few specialized settings.

In many cases, a better defense than random delays is to make the total
time constant and independent of the inputs.  Then averaging is pointless.
It's not always possible, but when it is, it's worth considering.
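One simple way to approximate this, when a fixed time budget is
acceptable, is to pad every call out to a deadline.  This is only a
sketch under assumptions of my own (the budget value, the stand-in
computation): sleep granularity and scheduler jitter mean it is only
approximately constant, and real constant-time code also has to avoid
secret-dependent branches and memory accesses:

```python
import hashlib
import time

def padded_operation(x: bytes, budget_s: float = 0.05) -> bytes:
    """Run the computation, then sleep until a fixed deadline so the
    total elapsed time is roughly independent of the input.
    budget_s must exceed the worst-case computation time."""
    deadline = time.monotonic() + budget_s
    result = hashlib.sha256(x).digest()  # stand-in for the real computation
    remaining = deadline - time.monotonic()
    if remaining < 0:
        # Budget overrun would itself leak timing; fail loudly instead.
        raise RuntimeError("time budget exceeded")
    time.sleep(remaining)
    return result
```

With the total time pinned to the budget, repeated measurements of any
input converge to the same value, so averaging tells the attacker
nothing about the secret.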

The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
