Re: Lucky's 1024-bit post [was: RE: objectivity and factoring analysis]

2002-05-13 Thread Nomen Nescio

Wei Dai writes:
 Using a factor base size of 10^9, in the relationship finding phase you
 would have to check the smoothness of 2^89 numbers, each around 46 bits
 long. (See Frog3's analysis posted at
 http://www.mail-archive.com/cryptography%40wasabisystems.com/msg01833.html.  
 Those numbers look correct to me.)  If you assume a chip that can check
 one number per microsecond, you would need 10^13 chips to be able to
 complete the relationship finding phase in 4 months. Even at one dollar
 per chip this would cost ten trillion dollars (approximately the U.S. 
 GDP).

This is probably not the right way to approach the problem.  Bernstein's
relation-finding proposal to directly use ECM on each value, while
asymptotically superior to conventional sieving, is unlikely to be
cost-effective for 1024 bit keys.  Better to extrapolate from the recent
sieving results.

http://citeseer.nj.nec.com/cavallar00factorization.html is the paper
from Eurocrypt 2000 describing the first 512-bit RSA factorization.
The relation-finding phase took about 8000 MIPS years.  Based on the
conventional asymptotic formula, doing the work for a 1024-bit key
should take about 10^7 times as long, or 80 billion MIPS years.
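
As a sanity check on that factor of 10^7, here is a small Python sketch
using the standard GNFS heuristic complexity (constants and o(1) terms
ignored, so treat it as an order-of-magnitude check, not a prediction):

# Rough check of the 512-bit -> 1024-bit extrapolation using the usual
# GNFS heuristic  L(n) = exp((64/9 * ln n)^(1/3) * (ln ln n)^(2/3)).
from math import exp, log

def gnfs_work(bits):
    ln_n = bits * log(2)      # ln(n) for a modulus of the given bit length
    return exp((64.0 / 9.0 * ln_n) ** (1.0 / 3.0) * log(ln_n) ** (2.0 / 3.0))

ratio = gnfs_work(1024) / gnfs_work(512)
print("1024-bit / 512-bit work ratio: about 10^%.1f" % log(ratio, 10))
# Prints roughly 10^7, consistent with 8000 MIPS years x 10^7 = 8 x 10^10.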

For about $200 you can buy a 1000 MIPS CPU, and the memory needed for
sieving is probably another couple of hundred dollars.  So call it $500
to get a computer that can sieve 1000 MIPS years in a year.

If we are willing to take one year to generate the relations then
($500 / 1000) x 8 x 10^10 is $40 billion, used to buy
approximately 80 million cpu+memory combinations.  This will generate
the relations to break a 1024-bit key in a year.  If you need it in less
time you can spend proportionately more.  A $400 billion machine
could generate the relations in about a month.  This would be about 20%
of the current annual U.S. federal government budget.

However if you were limited to a $1 billion budget as the matrix
solver estimate assumed, the machine would take 40 years to generate
the relations.

 BTW, if we assume one watt per chip, the machine would consume 87 trillion
 kWh of electricity per year. The U.S. electricity production was only 3.678
 trillion kWh in 1999.

The $40 billion, 1-year sieving machine draws on the order of 10 watts
per CPU so would draw about 800 megawatts in total, adequately supplied
by a dedicated nuclear reactor.
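
For anyone who wants to replay the arithmetic, here is the whole
back-of-the-envelope in a few lines of Python, using only the assumptions
stated above ($500 and 1000 MIPS per cpu+memory node, roughly 10 watts
per CPU):

# Back-of-the-envelope check of the sieving-machine figures above.
total_work_mips_years = 8e10      # ~80 billion MIPS years of sieving
node_mips             = 1000.0    # one $500 cpu+memory node
node_cost_usd         = 500.0
node_power_watts      = 10.0

nodes_for_one_year = total_work_mips_years / node_mips      # ~8e7 nodes
cost_one_year      = nodes_for_one_year * node_cost_usd     # ~$4e10
power_megawatts    = nodes_for_one_year * node_power_watts / 1e6

print("nodes: %.0e  cost: $%.0f billion  power: %.0f MW"
      % (nodes_for_one_year, cost_one_year / 1e9, power_megawatts))
# -> about 8e7 nodes, $40 billion, 800 MW, matching the text above.
# A one-month machine needs ~12x the nodes and cost, hence the rough
# $400 billion figure.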


Re: Lucky's 1024-bit post [was: RE: objectivity and factoring analysis]

2002-05-12 Thread Bill Stewart

At 08:52 AM 04/24/2002 +0800, Enzo Michelangeli wrote:
In particular, none of the naysayers explained to me clearly why it should be
reasonable to use 256-bit ciphers like AES with 1024-bit PK keypairs. Even
before Bernstein's papers it was widely accepted that brute-forcing a 256-bit
cipher requires computing power equivalent to ~16Kbit RSA or DH keys (and
~512-bit ECC keys). Given that a cipher protects only one session,

*Something* has to be the weakest link; calls for balance really come down to
"If Algorithm A is already the stronger part of the system,
why should I waste extra time/work strengthening it instead of Algorithm B?"
It doesn't hurt to make parts of the system stronger than necessary,
unless there are other costs like limiting the sizes of the other keys
that can fit in a UDP packet or whatever.  And making the AES algorithm
use longer-than-needed keys gives you some extra insurance against
mathematical breakthroughs or other sorts of discovered weaknesses.

The important issue about whether you can use X-bit block ciphers with
Y-bit public-key ciphers is whether Y bits of PK can give you X good key bits.
For Diffie-Hellman, the conventional wisdom seems to be that
Y bits of public key gives you Y/2 bits of usable keying material,
which means that 1024-bit DH is good enough for 256-bit AES.
For RSA, I think you can protect almost up to pubkeylength bits of message,
since the key generation happens separately from the public-key parts,
but you're definitely into overkill range.
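
As a concrete (and hedged) illustration of the mechanics, the usual
construction simply hashes the agreed public-key secret down to the
symmetric key length. A minimal Python sketch, with the DH computation
itself stubbed out by random bytes since the group parameters don't matter
for the point being made:

import hashlib
import secrets

def derive_aes256_key(dh_shared_secret: bytes) -> bytes:
    # SHA-256 output is exactly the 256-bit AES key size; a real protocol
    # would use a proper KDF, but the principle is the same.
    return hashlib.sha256(dh_shared_secret).digest()

# Illustration only: stand in for a real 1024-bit DH shared secret.
fake_shared_secret = secrets.token_bytes(128)
aes_key = derive_aes256_key(fake_shared_secret)
print(len(aes_key) * 8, "bit AES key derived")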

So the question falls back to "Is 1024 bits long enough?"





Is There a Quantum Mechanic in the House? (was: Lucky's 1024-bit post [was: RE: objectivity and factoring analysis])

2002-05-12 Thread Mark S. Miller

At 05:52 PM 4/23/2002 Tuesday, Enzo Michelangeli wrote:
[...] And if the reason for the 256 bits is the possible deployment,
sometime in the future, of quantum computers, well in that case we should
stop using PK cryptography altogether.

Hi Enzo!

Disclaimer: I am not a quantum mechanic, and I find the whole subject, 
including quantum computation, deeply confusing.  Also, the article I refer 
to -- an article in Science by the guy who came up with the original 
factoring result -- is an article that I haven't read, and wouldn't 
understand if I had.  (Neither do I happen to know enough to give a proper 
citation.)  Perhaps someone on this list can fill in some of these gaps, or 
let us know that I've badly misinterpreted the situation, which is quite 
possible.

All that said, my understanding is that the plausible worst case news from 
quantum computing would probably not require us to give up PK crypto.  My 
understanding is that the article I'd like to cite states something like the 
following:


1) Factoring is special because it's all pervasively about periodicities.  
The author's original factoring proposal took advantage of this in an 
essential way in order to show a potential exponential speedup.

2) For NP problems in general, since they are search trees with only 
polynomial depth to an answer but exponential fanout of choices, by using 
superposition, quantum computation may indeed be able to give an exponential 
speedup on the first phase of the search -- finding the answer.

3) The problem then is reporting the answer out.  The superposition that 
found the answer co-exists with an exponential number of superpositions that 
don't.  My understanding is that the paper is postulating an upper bound 
for search problems without peculiar special properties (like the 
periodicities of factoring) -- the most we can get is an 
order-of-square-root speedup on the problem as a whole.
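
If that square-root bound is roughly the right picture (it matches the
Grover-style search result, as I understand it), the practical arithmetic
is simple; a tiny Python sketch:

# A k-bit symmetric key falls from ~2^k classical trials to ~2^(k/2)
# quantum search steps under a square-root speedup, which is the usual
# argument for doubling symmetric key sizes rather than abandoning them.
def effective_bits_against_quantum_search(key_bits: int) -> int:
    return key_bits // 2   # square-root speedup halves the exponent

for k in (128, 256):
    print("%d-bit cipher -> ~2^%d quantum search steps"
          % (k, effective_bits_against_quantum_search(k)))
# 128-bit -> ~2^64 (uncomfortable); 256-bit -> ~2^128 (still far out of reach).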


If the above description is approximately right, then the remaining question 
for PK is, can one design a PK system now whose trapdoor depends on a search 
problem that we can be confident cannot be transformed into one with the 
enabling peculiar properties?  AFAIK, this question has not been examined.


Btw, out of fear that quantum computation might destroy PK but not 
single-key, I worried about whether one could do cryptographic capabilities 
in such a world.  http://www.erights.org/elib/distrib/captp/unibus.html is a 
protocol sketch of a distributed capability protocol that depends only on 
single-key cryptography.  But I hope it will never be necessary.



Text by me above is hereby placed in the public domain

Cheers,
--MarkM



Re: Lucky's 1024-bit post [was: RE: objectivity and factoring analysis]

2002-05-12 Thread Wei Dai

Sorry, there's a mistake in my post, which makes the relationship finding
phase look easier than it actually is. BTW, why did it take 5 days for
that post to go through?

On Wed, Apr 24, 2002 at 12:30:26PM -0700, Wei Dai wrote:
 Using a factor base size of 10^9, in the relationship finding phase you
 would have to check the smoothness of 2^89 numbers, each around 46 bits
 long.

Obviously there are not 2^89 integers that are 46 bits long. Each of the 
numbers that need to be checked for smoothness is actually around 195 bits 
long.


Re: Lucky's 1024-bit post [was: RE: objectivity and factoring analysis]

2002-04-29 Thread Wei Dai

I have one other question about the panel analysis. Why did it focus only 
on the linear algebra part of the NFS algorithm? I would like to know, 
given the same assumption on the factor base size (10^9), how much would 
it cost to build a machine that can perform the relationship finding phase 
of the algorithm in the same estimated time frame (i.e. months)?

Using a factor base size of 10^9, in the relationship finding phase you
would have to check the smoothness of 2^89 numbers, each around 46 bits
long. (See Frog3's analysis posted at
http://www.mail-archive.com/cryptography%40wasabisystems.com/msg01833.html.  
Those numbers look correct to me.)  If you assume a chip that can check
one number per microsecond, you would need 10^13 chips to be able to
complete the relationship finding phase in 4 months. Even at one dollar
per chip this would cost ten trillion dollars (approximately the U.S. 
GDP).

So it would seem that it's still not possible for even a major government
to factor 1024-bit keys within an operationally relevant time frame unless
it was willing to devote most of its national income to the effort.

BTW, if we assume one watt per chip, the machine would consume 87 trillion
kWh of electricity per year. The U.S. electricity production was only 3.678
trillion kWh in 1999.
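
For reference, here is the arithmetic behind those two figures, under the
stated assumptions (2^89 candidates, one check per microsecond per chip, a
four-month deadline, one watt per chip); a rough Python check:

candidates        = 2.0 ** 89        # ~6.2e26 numbers to test for smoothness
checks_per_second = 1e6              # one per microsecond per chip
seconds           = 4 * 30 * 86400   # ~4 months

chips = candidates / (checks_per_second * seconds)
print("chips needed: %.1e" % chips)  # ~6e13, within an order of magnitude
                                     # of the rounded 10^13 figure above

# At one watt per chip, taking the rounded 10^13 figure:
kwh_per_year = 1e13 / 1000.0 * 8766  # hours in an average year
print("electricity: %.1e kWh/year" % kwh_per_year)
# -> ~8.8e13 kWh/year, the ~87 trillion kWh quoted above, against ~3.7e12
# kWh of total U.S. production in 1999.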


Re: Lucky's 1024-bit post [was: RE: objectivity and factoring analysis]

2002-04-29 Thread Anonymous

Lucky Green writes:
 Given how panels are assembled and the role they fulfill, I thought it
 would be understood that when one writes that certain results came out
 of a panel that this does not imply that each panelist performed the
 same calculations. But rather that the information gained from a
 panel (Ian: math appears to be correct, Nicko: if the math is correct,
 these are the engineering implications of the math) is based on the
 combined input from the panelists. My apologies if this process of a
 panel was not understood by all readers and some readers therefore
 interpreted my post to indicate that both Ian and Nicko performed
 parallel engineering estimates.

What he wrote originally was:

: The panel, consisting of Ian Goldberg and Nicko van Someren, put forth
: the following rough first estimates:
:
: While the interconnections required by Bernstein's proposed architecture
: add a non-trivial level of complexity, as Bruce Schneier correctly
: pointed out in his latest CRYPTOGRAM newsletter, a 1024-bit RSA
: factoring device can likely be built using only commercially available
: technology for a price range of several hundred million dollars to about
: 1 billion dollars
: Bernstein's machine, once built, ... will be able to break a 1024-bit
: RSA or DH key in seconds to minutes.

It's not a matter of assuming parallel engineering estimates, but rather
the implication here is that Ian endorsed the results.  In saying that
the panel put forth a result, and the panel is composed of named people,
it implies that the named people put forth the result.  The mere fact
that Ian found it necessary to immediately post a disclaimer makes it
clear how misleading this phrasing was.

Another problem with Lucky's comment is that somewhere between Nicko's
thinking and Lucky's posting, the fact was dropped that only the matrix
solver was being considered.  This is only 1/2 the machine; in fact in
most factoring efforts today it is the smaller part of the whole job.
Neither Nicko nor Ian nor anyone else passed judgement on the equally
crucial question of whether the other part of the machine was buildable.

 It was not until at least a week after FC that I contacted Nicko
 inquiring if he still believed that his initial estimates were correct,
 now that that he had some time to think about it. He told me that the
 estimates had not changed.

It is obvious that in fact Nicko had not spent much time going over
his figures, else he would have immediately spotted the factor of 10
million error in his run time estimate.  Saying that his estimates had
not changed is meaningless if he has not reviewed them.

Lucky failed to make clear the cursory nature of these estimates, that the
machine build cost was based on a hurried hour's work before the panel,
and that the run time was based on about 5 seconds calculation during
the panel itself.  It's not relevant whether this was in part Nicko's
fault for perhaps not making clear to Lucky that the estimate stood in
the same shape a week later.  But it was Lucky who went public with the
claim, so he must take the blame for the inaccuracy.

In fact, if Lucky had passed his incendiary commentary to Nicko and
Ian for review before publishing it, it is clear that they would have
asked for corrections.  Ian would have wanted to remove his name from
the implied endorsement of the numeric results, and Nicko would have
undoubtedly wanted to see more caveats placed on figures which were
going to be attached to his name all over the net, as well as making
clear that he was just talking about the matrix solution.  Of course
this would have removed much of the drama from Lucky's story.

The moral is if you're going to quote people, you're obligated to check
the accuracy of the quotes.  Lucky is not a journalist but in this
instance he is playing one on the net, and he deserves to be criticized
for committing such an elementary blunder, just as he would deserve
credit for bringing a genuine breakthrough to wide attention.

 For example, Bruce has been quoted in a widely-cited eWeek article that
 "I don't assume that someone with a massive budget has already built
 this machine, because I don't believe that the machine can be built."

 Bruce shortly thereafter stated in his Cryptogram newsletter that "I
 have long believed that a 1024-bit key could fall to a machine costing
 $1 billion."

 Since these quotes describe mutually exclusive viewpoints, we have an
 example of what can happen when a debate spills over into the popular
 media.
 ...
 http://www.eweek.com/article/0,3658,s=712a=24663,00.asp

They are not mutually exclusive, and the difference is clear.  In the
first paragraph, Bruce is saying that Bernstein's design is not practical.
To get his asymptotic results of 3x key length, Bernstein must forgo the
use of sieving and replace it with a parallel ECM factoring algorithm
to determine smoothness.  Asymptotically, this is a much lower cost
approach for finding relations, 

Re: Lucky's 1024-bit post [was: RE: objectivity and factoring analysis]

2002-04-25 Thread Enzo Michelangeli

Further to Lucky's comments: in the last few days I have discussed keysize
issues with a few people on a couple of mailing lists, and I have
encountered a hostility to large keysizes whose reasons, frankly, I don't
understand. On the client side at least, performance is not an
issue: PGP 7.0.3 with my new 4096-bit PGP key appears to be as snappy as it
was with 1024-bit keys, and the table at
http://www.mccune.cc/PGPpage2.htm#Speed looks quite reassuring.

In particular, none of the naysayers explained to me clearly why it should be
reasonable to use 256-bit ciphers like AES with 1024-bit PK keypairs. Even
before Bernstein's papers it was widely accepted that brute-forcing a 256-bit
cipher requires computing power equivalent to ~16Kbit RSA or DH keys (and
~512-bit ECC keys). Given that a cipher protects only one session, but PK
protection extends to a large number of sessions, one would expect the PK
part to be engineered to be the _stronger_ link of a cryptosystem, not the
weaker. And if the reason for the 256 bits is the possible deployment,
sometime in the future, of quantum computers, well in that case we should
stop using PK cryptography altogether.
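
For what it's worth, the rough shape of those equivalences can be reproduced
from the GNFS heuristic alone; a crude, uncalibrated Python sketch (constants
and o(1) terms ignored, so the absolute numbers overshoot the usual published
tables somewhat, but the ballpark is the same):

# Crude "symmetric-equivalent bits" for an RSA/DH modulus, from the GNFS
# heuristic L(n) = exp((64/9 * ln n)^(1/3) * (ln ln n)^(2/3)). Uncalibrated,
# so treat the outputs as ballpark only.
from math import log

def gnfs_equivalent_bits(modulus_bits):
    ln_n = modulus_bits * log(2)
    return (64.0 / 9.0 * ln_n) ** (1.0 / 3.0) * log(ln_n) ** (2.0 / 3.0) / log(2)

for bits in (1024, 2048, 4096, 16384):
    print("%5d-bit RSA/DH ~ %3.0f-bit symmetric" % (bits, gnfs_equivalent_bits(bits)))
# Prints roughly 87, 117, 157, 277 bits; published estimates that calibrate
# against real factoring effort land a bit lower (around 80 bits for 1024),
# but the overall picture of 256-bit ciphers pairing with ~16Kbit moduli holds.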

What am I missing?

Enzo





RE: Lucky's 1024-bit post [was: RE: objectivity and factoring analysis]

2002-04-25 Thread Lucky Green

Enzo wrote:
 Further to Lucky's comments: in the last few days I have 
 discussed keysize issues with a few people on a couple of 
 mailing lists, and I have encountered a hostility to large 
 keysizes of which, frankly, I don't understand the reasons. 
 On the client side at least, performance is not an
 issue: PGP 7.0.3 with my new 4096-bit PGP key appears to be 
 as snappy as it was with 1024-bit keys, and the table at 
 http://www.mccune.cc/PGPpage2.htm#Speed looks  quite 
 reassuring.
 
 In particular, none of the naysayers explained to me clearly why
 it should be reasonable to use 256-bit ciphers like AES with
 1024-bit PK keypairs.

Allow me to shed at least some light on the strange phenomenon that you
experienced, for I have experienced the same phenomenon and was able to
glean at least a few of the reasons why application authors and
maintainers have proven so reluctant to increase RSA key sizes or
mandate minimum key sizes in their applications.

1) Very, very few applications that currently employ RSA, and no
cryptographic libraries that I am aware of, perform any kind of sanity
check on the size of the keys. The current release version of various
programs that I looked at will happily accept a 4-bit RSA client key that
anyone could factor in their head. VeriSign not too long ago signed a
384-bit SSL server key that many readers of this list could break with
the old computers in their homes. Such certificate signing practices, or
the lack thereof, are nothing short of irresponsible.
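
For illustration, the sanity check being asked for amounts to a few lines.
A sketch using the modern Python `cryptography` package purely as a
stand-in (the libraries under discussion in 2002 predate it, and the
2048-bit floor is just an example policy):

# Minimal key-size sanity check: refuse RSA public keys below a floor.
from cryptography.hazmat.primitives.serialization import load_pem_public_key
from cryptography.hazmat.primitives.asymmetric import rsa

MIN_RSA_BITS = 2048   # example policy; the post argues this should be configurable

def check_rsa_public_key(pem_bytes: bytes) -> None:
    key = load_pem_public_key(pem_bytes)
    if not isinstance(key, rsa.RSAPublicKey):
        raise ValueError("not an RSA public key")
    if key.key_size < MIN_RSA_BITS:
        raise ValueError("RSA modulus is only %d bits; minimum is %d"
                         % (key.key_size, MIN_RSA_BITS))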

I have not tested the following hypothesis and am not willing to spend
the required $125 on the experiment, but I would not be in the least
surprised if many public CA's would readily sign a certificate for a
4-bit RSA key. Cf. the "caught with one's pants down" observation
in my previous post. If somebody wants to perform this experiment, better
do it quick, since the CA vendors read this mailing list. :-)

There are some positive examples: as a result of my original post, the
next release version of OpenSSH will enforce a 768-bit minimum size on
the key. I have not been following the discussion that led to the
determination of this figure and therefore don't know on which
cryptological results the OpenSSH designers based that number.

However, I note that even those experts I am aware of who assert
that 1024-bit keys are sufficient for the type of long-term use SSH
client keys are frequently employed for do not appear to claim that
768-bit keys should be considered secure today.

Unfortunately, the OpenSSH designers have chosen not to expose the
minimum key size requirement via the sshd configuration file, though at
least there will now be a known spot in the source that can easily
be patched to a value the server operator might consider to be more in
line with current key size recommendations. I thank the OpenSSH team for
at least providing an easy hook to make the desired changes in the
source, though of course I would like to see both a larger default and a
corresponding option in the config file. Still, hats off to the OpenSSH
team for implementing sanity checks that few other vendors have
bothered to implement.

Some other deployed applications either use libraries that can't go
higher than 1024 bits or have the key sizes and structures hard coded
all over the source, making a change expensive and uneconomical absent
severe customer pressure on the vendor. A much cheaper solution than
rewriting good chunks of the core crypto processing code (or worse,
having to upgrade hardware that is limited to 1024 bits) is putting out
a press release stating why the customer doesn't need keys larger than
1024 bits, which some claim is one of the many reasons why there is so
much resistance to moving to larger keys. I am not quite ready to
subscribe to such a cynical viewpoint, but I have heard that viewpoint
voiced by several crypto old-timers in private conversations more than
once.

2) One frequently voiced argument against increasing key sizes is the
resultant decrease in performance. Yes, it is absolutely true that
increasing the size of an RSA key leads to a decrease in performance.
However, larger keys decrease performance less than naive experiments
may seem to indicate. For many applications, generating a 2048-bit key
and comparing the performance with that of a 1024-bit key will lead to
numbers that differ widely, much more so than can be attributed to the
larger key size. 
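
One way to separate the raw key-size cost from the optimization effects
described below is to time the private-key operation directly. A rough,
hedged sketch using the modern Python `cryptography` package (purely
illustrative; it postdates this discussion); private-key operations grow
roughly with the cube of the modulus size, so a doubling should cost
several times more even before any per-size tuning:

import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

def time_signing(key_size: int, iterations: int = 50) -> float:
    # Average time for one RSA signature at the given modulus size.
    key = rsa.generate_private_key(public_exponent=65537, key_size=key_size)
    message = b"benchmark message"
    start = time.perf_counter()
    for _ in range(iterations):
        key.sign(message, padding.PKCS1v15(), hashes.SHA256())
    return (time.perf_counter() - start) / iterations

for bits in (2048, 4096):
    print("%d-bit RSA sign: %.2f ms" % (bits, time_signing(bits) * 1000))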

The reason for this phenomenon is that RSA algorithm performance is
highly dependent on optimizations that are both key-size and processor
specific. The same optimization strategy that will give you blazingly
fast performance with a 1024-bit key will absolutely kill performance with
a 4096-bit key, for which a different optimization strategy is needed.
Similarly, optimizations are highly processor specific. An optimization
designed specifically for a Pentium III processor will run slower on a
Pentium IV than your basic

Lucky's 1024-bit post [was: RE: objectivity and factoring analysis]

2002-04-23 Thread Lucky Green

Anonymous wrote (quoting Adam):
 Adam Back wrote:
  The mocking tone of recent posts about Lucky's call seems quite 
  misplaced given the checkered bias and questionable 
 authority of the 
  above conflicting claims we've seen quoted.
 
 No, Lucky made a few big mistakes.  First, he invoked Ian 
 Goldberg's name as a source of the estimate, which was wrong. 
  Second, he presented Nicko's estimate as being more 
 authoritative than it actually was, as Nicko makes clear 
 here.  And third, he fostered panic by precipitously revoking
 his key and widely promulgating his "sky is falling" message.

Rather than continuing with guesses by those who were not present at
the time as to my motivations and objectives behind my post, allow me to
establish the facts and thought processes that led to my original post.

Prior to the panel at FC, I held the belief that 1024-bit RSA keys could
not be factored in an operationally significant timeframe irrespective
of the budget of the attacker. I know that this belief was held by many,
if not most, of the implementers of cryptographic production systems and
believe that it was held by many, if not most, cryptographers. In some
sense, if this belief had not been held so widely, current debate would
not be as heated.

So let's look at the supposed mistakes Anonymous asserts I made:

1) As is the case with many panel discussions, and in many cases the
reason for choosing a panel format rather than an individual presenter,
the panelists, Ian Goldberg and Nicko van Someren, were selected to
represent subject matter experts in different areas of relevance to the
subject to be discussed: Ian's role was to help determine the
mathematical impact and correctness of Bernstein's proposal: are the
mathematical assumptions correct? Did the author make a mathematical
error in the paper? Ian did not identify errors in the math, though he
cautioned that the interconnections required by an actual device would
represent a challenge of significant engineering impact. (Which, as
Nicko addressed in his previous posts to this list, he too considered to
be the limiting factor on performance).

Having thus established, as well as it could have been established at the
time, that the paper that triggered the discussion appeared to not
contain mathematical errors, Nicko, as the subject matter expert in
building cryptographic hardware implementations, presented what the math
meant from an engineering perspective. In particular, can a device be
built based on the mathematical assumptions, how much would it cost to
build such a device, and what would the device's operating
characteristics be?

It is correct that Nicko presented estimates that literally were being
refined during the panel session. Naturally, I would not have even
considered posting such hastily generated estimates to a widely-read
mailing list. (More on that later).

Interestingly enough, the reaction to the estimates from the attendees
at the conference, which included many well-known cryptographers, was
quite different from what I would have expected. Nobody stood up (and FC
is a conference quite amenable to open discussion) to say that they found
the suggestion that 1024-bit RSA could be broken by a well-resourced
attacker in operationally significant times to be unrealistic. The most
vocal comment in the ensuing discussion came from Yvo Desmedt, who
pointed out that no expert in the field should be surprised by these
results, since it was pointed out in Beth, Frisch, and Simmons'
Public-Key Cryptography: State of the Art and Future Directions, LNCS
578, published back in 1992, ten years ago, that 1024-bit keys would be
suitable for protection against a national adversary for about another
10 years: until about 2002. As it so happens, this is the year 2002.

Given how panels are assembled and the role they fulfill, I thought it
would be understood that when one writes that certain results came out
of a panel that this does not imply that each panelist performed the
same calculations. But rather that the information gained from a
panel (Ian: math appears to be correct, Nicko: if the math is correct,
these are the engineering implications of the math) is based on the
combined input from the panelists. My apologies if this process of a
panel was not understood by all readers and some readers therefore
interpreted my post to indicate that both Ian and Nicko performed
parallel engineering estimates.

2) Immediately after the panel, a reporter for the Financial Times in
attendance approached me, inquiring if these estimates had already been
published in the media. I told him that I was not aware of any such
publications and that this was the first time I had heard these
estimates. He informed me that he intended to publish this information
in a matter of days. I don't know if he wrote the article or not; I am
not a Financial Times subscriber.

It was not until at least a week after FC that I contacted Nicko
inquiring if he still believed that his