Lucky Green writes:
> The panel, consisting of Ian Goldberg and Nicko van Someren, put forth
> the following rough first estimates:
>
> While the interconnections required by Bernstein's proposed architecture
> add a non-trivial level of complexity, as Bruce Schneier correctly
> pointed out in his latest CRYPTOGRAM newsletter, a 1024-bit RSA
> factoring device can likely be built using only commercially available
> technology for a price range of several hundred million dollars to about
> 1 billion dollars. Costs may well drop lower if one has the use of a
> chip fab. It is a matter of public record that the NSA as well as the
> Chinese, Russian, French, and many other intelligence agencies all
> operate their own fabs.
> ...
> Bernstein's machine, once built, will have power requirements in the MW
> to operate, but in return will be able to break a 1024-bit RSA or DH key
> in seconds to minutes.

Is there anything written, even a paragraph or two of back-of-the-envelope
scrawls, to justify these numbers?  How big a factor base are
they assuming?  What sieving/factoring method are they using to generate
the relations?  Are they following Bernstein's proposal to abandon
sieving and try to factor values directly using massive parallelism?

Rough calculations on the cryptography list showed that Bernstein's
strategy of replacing sieving with direct smoothness testing is
not feasible.  For a 1024-bit RSA modulus, assuming a factor base of
size approximately 2^36, we need to test approximately 2^89 values for
smoothness in order to get enough relations.  These values are several
hundred bits in length.
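As a rough consistency check on those parameters, the standard GNFS heuristic
L_n[1/3, c] with c = (64/9)^(1/3) predicts the total work for a 1024-bit
modulus.  This is a sketch only: the asymptotic formula hides large o(1)
constants, and the exact exponents above come from the list discussion,
not from this formula.

```python
import math

# Heuristic GNFS work estimate: L_n[1/3, c] = exp(c * (ln n)^(1/3) * (ln ln n)^(2/3))
# with c = (64/9)^(1/3).  Treat the result as an order-of-magnitude figure,
# not a precise operation count.

def log2_gnfs_work(bits):
    ln_n = bits * math.log(2)            # ln of a 'bits'-bit modulus
    c = (64 / 9) ** (1 / 3)
    ln_L = c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return ln_L / math.log(2)            # express the work as a power of 2

print(round(log2_gnfs_work(1024)))       # roughly 2^87 operations
```

That lands in the same ballpark as 2^89 smoothness tests over a 2^36 factor
base, so the parameters quoted above are at least self-consistent.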

Assuming each smoothness test (which amounts to finding all the factors
of each of these values up to 2^36 in size) takes 2^18 cycles, the
total number of cycles in this phase is 2^107.  These are not trivial
algorithms: they involve multi-precision multiplication and will require
substantial circuitry for each unit on the chip.  Any problem with a
work factor of 2^107 cycles is effectively impossible even if you've
got a billion dollars to spend.
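Multiplying that out makes the point concrete.  Even granting a hypothetical
machine of a billion chips each running at 1 GHz (numbers chosen purely for
illustration, not anyone's actual design), the wall-clock time is measured in
millions of years:

```python
# 2^89 smoothness tests at an assumed 2^18 cycles each, run on an
# (imaginary) billion-chip, 1 GHz machine.

tests = 2 ** 89                 # smoothness candidates needed for relations
cycles_per_test = 2 ** 18       # assumed cost of one smoothness test
total_cycles = tests * cycles_per_test   # 2^107 cycles overall

chips = 10 ** 9                 # hypothetical chip count
hz_per_chip = 10 ** 9           # hypothetical clock rate
seconds = total_cycles / (chips * hz_per_chip)
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.2e} years")     # on the order of five million years
```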

Understand this: Bernstein had only two ideas in his paper.  The first was
to replace sieving with direct smoothness testing, which is asymptotically
faster (but almost certainly not faster for keys of practical size;
sieving is just too efficient).  The second was to speed up the matrix
solving phase by building a smart computing fabric.

It's possible that you can throw out the first idea and just use the
clever matrix solver, but that's not going to give you the kind of
speedups claimed by his paper (the factor of 3 in modulus length).
It seems very questionable that this change alone would let you solve
RSA keys in "seconds to minutes", unless you already thought that you
could solve them in hours to days using conventional technology of
comparable expense.
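The implied ratio is worth writing down.  Taking the optimistic end of
"seconds to minutes" against even a single day on conventional hardware
(both figures are my illustrative assumptions), the matrix improvement
alone would have to supply a speedup of over a thousandfold on the whole
computation, not just the linear-algebra phase:

```python
# Implied overall speedup if the matrix step alone turned a one-day
# factorization into a one-minute one.

seconds_claimed = 60            # optimistic end of "seconds to minutes"
day_baseline = 24 * 3600        # one day on conventional hardware, in seconds
print(day_baseline // seconds_claimed)   # factor of 1440
```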

It's too bad that Lucky took the precipitous action of revoking his keys
before the community had a chance to perform some peer review of
these claims of feasibility.  We need to see the analysis, rough though
it may be, so we can judge for ourselves whether these remarkable claims
are sound.
