Here's an amusing example of optimization: On the PowerPC 7450 (G4e), integer multiplication is faster by one cycle if the second operand is between -131072 and 131071. Ever use multiplication in cryptography?
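To see why such an early-out multiplier matters, here is a minimal Python sketch of the leak. The cycle counts are made up for illustration (only the one-cycle difference and the -131072..131071 threshold come from the text above); the point is that an observer who sees only total time can distinguish a small secret operand from a large one.

```python
# Toy model of an early-out integer multiplier: one cycle faster when
# the second operand fits in 17 signed bits (-131072..131071).
# The absolute cycle counts (3 vs 4) are illustrative, not from a manual.

def mul_cycles(a, b):
    """Hypothetical cycle count for a*b under the early-out model."""
    return 3 if -131072 <= b <= 131071 else 4

def total_cycles(base, secret, n_muls=1000):
    """Total cycles for a loop that repeatedly multiplies by a secret."""
    return sum(mul_cycles(base, secret) for _ in range(n_muls))

# Timing alone separates the two cases -- no access to the secret needed:
small = total_cycles(12345, 99)      # secret inside the fast range
large = total_cycles(12345, 2**20)   # secret outside it
assert small < large
```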
Jerrold Leichter writes:
> There are only a couple of roads forward:
> - Develop algorithms that offer reasonable performance even if
>   implemented in "unoptimized" ways.

We already have some secret-key ciphers that naturally run in constant time on current CPUs. One example is my own Salsa20, which is a quite conservative design but still faster than AES. Some other examples are Phelix and good old TEA.

Furthermore, most CPUs have constant-time floating-point multiplication. Floating-point multiplication usually turns out to be the fastest way to do integer arithmetic in RSA etc., although it takes some effort to use.

The quick summary is that we _can_ have high-speed constant-time cryptography. The algorithms are already there---although they need to be distinguished from the impostors such as Rijndael. The implementation techniques are already there---although they need to be more widely understood and used.

> I can't recall ever seeing a benchmark that reported the
> variance of timing across instances of the loop.

For years now I've been reporting multiple individual timings rather than averages. See, e.g., http://cr.yp.to/mac/poly1305-20041101.pdf, Appendix B; those are the AES timings that prompted me to start investigating cache-timing attacks. (Subsequent versions of the poly1305 paper report even more timing information but, for space reasons, have to compress the information into small graphs. Big tables are on the web.)

---D. J. Bernstein, Associate Professor, Department of Mathematics,
Statistics, and Computer Science, University of Illinois at Chicago

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
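[As an illustration of the floating-point technique mentioned above: a minimal Python sketch of exact integer multiplication built from floating-point multiplies. The radix and limb counts here are illustrative choices, not the parameters of any real implementation, and real code would also pipeline and vectorize; the sketch only shows why the products never round: IEEE doubles represent integers up to 2^53 exactly.]

```python
# Exact bignum multiplication using only float multiplies and adds.
# Limbs are 13 bits, so each partial product is below 2^26 and each
# accumulated coefficient stays far below 2^53 -- no rounding occurs.
# (Radix choice is illustrative, not taken from a real RSA library.)

RADIX = 2 ** 13

def to_limbs(n, k):
    """Split a nonnegative integer into k base-2^13 limbs, stored as floats."""
    return [float((n >> (13 * i)) & (RADIX - 1)) for i in range(k)]

def from_limbs(limbs):
    """Recombine (possibly unreduced) float limbs back into an integer."""
    return sum(int(v) << (13 * i) for i, v in enumerate(limbs))

def mul(a, b):
    """Schoolbook product of two limb vectors, float arithmetic only."""
    out = [0.0] * (len(a) + len(b))
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj   # exact: every sum fits in a double
    return out

x, y = 123456789, 987654321
assert from_limbs(mul(to_limbs(x, 3), to_limbs(y, 3))) == x * y
```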
