(The use of memory speed leads to an interesting notion:  Functions that are
designed to be differentially expensive on different kinds of fielded hardware.
On a theoretical basis, of course, all hardware is interchangeable; but in
practice, something differentially expensive to calculate on an x86 will remain
"expensive" for many years to come.)

In fact, such things are probably pretty easy to do - as was determined during
arguments over the design of Java.  The original Java specs pinned down
floating point arithmetic exactly:  A conforming implementation was required
to use IEEE single- and double-precision arithmetic, and give answers
identical at the bit level to a reference implementation.  This is easy to do
on a SPARC.  It's extremely difficult to do on an x86, because x86 FP
arithmetic is done to a higher precision.  The hardware provides only one way
to round an intermediate result to true IEEE single or double precision:
Store to memory, then read back.  This imposes a huge cost.  No one could find
any significantly better way to get the bit-for-bit same results on an x86.
(The Java standards were ultimately loosened up.)

So one should be able to define a highly FP-intensive, highly numerically
unstable calculation, all of whose final bits are considered to be part of
the answer.  This would be extremely difficult to calculate rapidly on an
x86.

Conversely, one could define the answer - possibly to the same problem - as
that produced using the higher intermediate precision of the x86.  This would
be very hard to compute quickly on machines whose FP hardware doesn't provide
intermediate results of exactly the same width as the x86's.

One can probably find problems that are linked to other kinds of hardware. For
example, the IBM PowerPC chip doesn't have generic extended-precision
arithmetic, but its fused multiply/add keeps an extended-precision
intermediate product.

Some machines provide fast transfers between FP and integer registers; others
require you to go to memory.  Vector-like processing - often of a specialized,
limited sort intended for graphics - is available on some architectures and
not others.  Problems requiring more than 32 bits of address space will pick
out the 64-bit machines.  (Imagine requiring lookups in a table with 2^33
entries.  8 Gig of real memory isn't unreasonable today - a few thousand
dollars - and is becoming cheaper all the time.  But using it effectively on
the 32-bit machines out there is very hard, typically requiring changes to
the memory mapping or segment registers and such, at a cost equivalent to
hundreds or even thousands of instructions.)

                                                        -- Jerry

The Cryptography Mailing List