Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-10-01 Thread Samuel Neves
 On 01-10-2010 02:41, Victor Duchovni wrote:
 Should we be confident that 4-prime RSA is stronger at 2048 bits than
 2-prime is at 1024? At the very least, it is not stronger against ECM
 (yes ECM is not effective at this factor size) and while GNFS is not
 known to benefit from small factors, is this enough evidence that
 4-prime 2048-bit keys are effective?


It is slightly stronger than RSA-1024 against ECM, since ECM is then
performed modulo a 2048-bit value instead of a 1024-bit one. Doubling the
modulus size slows each modular multiplication down by a factor of between
3 and 4 (Karatsuba vs. schoolbook multiplication). Further, there are now
3 factors for ECM to find instead of just 1.

Going by asymptotic complexities, factoring 4-prime RSA-2048 by NFS
should cost around 2^116 operations. Using ECM to find a 512-bit prime
costs around 2^93 elliptic curve group additions (not counting the
arithmetic cost of each addition modulo the 2048-bit value). Factoring
RSA-1024 by NFS costs around 2^80 operations.
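
For reference, here is a minimal Python sketch of where those ballpark
figures come from, using the textbook L-notation formulas with the o(1)
terms dropped (so the constants are only indicative, and the smaller
sizes come out a few bits above the record-calibrated estimates usually
quoted):

    from math import log

    def l_bits(n_bits, alpha, c):
        """log2 of L_n[alpha, c] = exp(c * (ln n)^alpha * (ln ln n)^(1-alpha)),
        with the o(1) term ignored."""
        ln_n = n_bits * log(2)
        return c * ln_n ** alpha * log(ln_n) ** (1 - alpha) / log(2)

    # NFS on the full modulus: L_n[1/3, (64/9)^(1/3) ~= 1.923]
    print("NFS, 2048-bit modulus: 2^%.1f" % l_bits(2048, 1 / 3.0, 1.923))
    print("NFS, 1024-bit modulus: 2^%.1f" % l_bits(1024, 1 / 3.0, 1.923))

    # ECM pulling out one ~512-bit prime: L_p[1/2, sqrt(2)], counted in
    # curve group additions only; the arithmetic of each addition modulo
    # the 2048-bit value comes on top of this.
    print("ECM, 512-bit factor:   2^%.1f" % l_bits(512, 0.5, 2 ** 0.5))

The bare formula gives roughly 2^117, 2^87 and 2^93; the commonly quoted
2^80 for RSA-1024 comes from calibrating against actual factoring records
rather than from the raw asymptotics.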

Thus, I believe that 4-prime RSA-2048 is slightly easier than 2-prime
RSA-2048, but still significantly harder than RSA-1024.

Best regards,
Samuel Neves



Re: 2048-bit RSA keys

2010-08-17 Thread Samuel Neves
 On 17-08-2010 21:42, Perry E. Metzger wrote:
 On Tue, 17 Aug 2010 22:32:52 +0200 Simon Josefsson
 si...@josefsson.org wrote:
 Bill Stewart bill.stew...@pobox.com writes:

 Basically, 2048's safe with current hardware
 until we get some radical breakthrough
 like P==NP or useful quantum computers,
 and if we develop hardware radical enough to
 use a significant fraction of the solar output,
 we'll probably find it much easier to eavesdrop
 on the computers we're trying to attack than to
 crack the crypto.
 Another breakthrough in integer factoring could be sufficient for an
 attack on RSA-2048.  Given the number of increasingly efficient
 integer factorization algorithms that have been discovered
 throughout history, another breakthrough here seems more natural
 than unlikely to me.
 A breakthrough could also render 10kbit keys broken, or might never
 happen at all. A breakthrough could make short ECC keys vulnerable.
 A breakthrough could make AES vulnerable. One can't operate on this
 basis -- it makes it impossible to use anything other than one-time
 pads.


A breakthrough is a rather strong term. But it's not unreasonable to
believe that the number field sieve's complexity could be lowered in the
near future by an *incremental* improvement --- it would only require
lowering the complexity from L[1/3, ~1.92] to L[1/3, ~1.2] to make
2048-bit factorization roughly as easy as 768-bit factorization is today.
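
As a rough check of that claim, here is a small sketch (again just the
bare L-notation formula, o(1) dropped) solving for the constant that
would put a 2048-bit modulus on par with a 768-bit one:

    from math import log

    def l_exponent(n_bits, alpha=1 / 3.0):
        """(ln n)^alpha * (ln ln n)^(1-alpha), so L[alpha, c] = exp(c * this)."""
        ln_n = n_bits * log(2)
        return ln_n ** alpha * log(ln_n) ** (1 - alpha)

    # Cost of a 768-bit modulus under today's NFS constant, in bits.
    bits_768 = 1.923 * l_exponent(768) / log(2)

    # Constant that would make a 2048-bit modulus cost the same.
    c_needed = bits_768 * log(2) / l_exponent(2048)
    print("required NFS constant: %.2f" % c_needed)   # roughly 1.26

The bare formula puts the required constant at about 1.26, in the same
ballpark as the ~1.2 above.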

Coppersmith's variant of the number field sieve proposed a tradeoff that
dramatically lowered the exponent if one wanted to break many keys of
roughly the same size. The idea was to fix m, the 'base' of the number
field polynomial, and precompute a great many pairs (a, b) such that
a - bm is smooth. With this precomputation, the NFS runs in L[1/3, ~1.639],
which is dramatically faster (and quite worth it for a large
organization --- they're bound to want to break multiple keys).
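
To put that exponent drop in perspective, a sketch (same caveats about
the o(1) terms) of the per-key saving at 2048 bits:

    from math import log

    def l_bits(n_bits, c, alpha=1 / 3.0):
        """log2 of L_n[alpha, c], o(1) term dropped."""
        ln_n = n_bits * log(2)
        return c * ln_n ** alpha * log(ln_n) ** (1 - alpha) / log(2)

    # Ordinary NFS versus the per-key cost of Coppersmith's precomputed
    # variant, both evaluated at a 2048-bit modulus.
    ordinary = l_bits(2048, 1.923)
    per_key = l_bits(2048, 1.639)
    print("per-key saving: about 2^%.0f" % (ordinary - per_key))   # ~2^17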

It is not unreasonable to think that a small(ish) improvement to the
number field sieve could significantly lower the strength of current
keys. It *looks* more likely to happen than a significant improvement in
the speed of ECDLP breaking (I'll make no bets on AES, though).

Best regards,
Samuel Neves



Re: 2048-bit RSA keys

2010-08-17 Thread Samuel Neves
 Forwarded at Andrew's request.

-------- Original Message --------
Subject: Re: 2048-bit RSA keys
Date: Tue, 17 Aug 2010 19:11:55 -0500 (CDT)
From: Andrew Odlyzko odly...@umn.edu
To: Samuel Neves sne...@dei.uc.pt
CC: cryptography@metzdowd.com


It is not unreasonable to consider the possibility of
algorithmic improvements.  But that does not guarantee
accuracy.

My 1995 paper "The future of integer factorization",

http://www.dtc.umn.edu/~odlyzko/doc/future.of.factoring.pdf

published in CryptoBytes (The technical newsletter of RSA
Laboratories), vol. 1, no. 2, 1995, pp. 5-12, considered
the historical record of integer factorizations up to that
point, and took account both of the increasing computing
power and the progress in algorithms (which over the preceding
20 years contributed about as much as the growth in the
number of available cycles).  It then projected when
integers of various sizes might get factored, and even
the most conservative projection had 768-bit integers
getting factored by 2004 or so.

In retrospect, the latest 768-bit factorization shows that
over the preceding 15 years there has been practically no
progress in algorithms, and even the computing power that
is actually used for the experiments has fallen behind
projections.

Andrew

On Tue, 17 Aug 2010, Samuel Neves wrote:

 On 17-08-2010 21:42, Perry E. Metzger wrote:
 On Tue, 17 Aug 2010 22:32:52 +0200 Simon Josefsson
 si...@josefsson.org wrote:
 Bill Stewart bill.stew...@pobox.com writes:

 Basically, 2048's safe with current hardware
 until we get some radical breakthrough
 like P==NP or useful quantum computers,
 and if we develop hardware radical enough to
 use a significant fraction of the solar output,
 we'll probably find it much easier to eavesdrop
 on the computers we're trying to attack than to
 crack the crypto.
 Another breakthrough in integer factoring could be sufficient for an
 attack on RSA-2048.  Given the number of increasingly efficient
 integer factorization algorithms that have been discovered
 throughout history, another breakthrough here seems more natural
 than unlikely to me.
 A breakthrough could also render 10kbit keys broken, or might never
 happen at all. A breakthrough could make short ECC keys vulnerable.
 A breakthrough could make AES vulnerable. One can't operate on this
 basis -- it makes it impossible to use anything other than one-time
 pads.


 A breakthrough is a rather strong term. But it's not unreasonable to
 believe that the number field sieve's complexity could be lowered in the
 near future by an *incremental* improvement --- it would only require
 lowering the complexity from L[1/3, ~1.92] to L[1/3, ~1.2] to make
 2048-bit factorization roughly as easy as 768-bit factorization is today.

 Coppersmith's variant of the number field sieve proposed a tradeoff that
 dramatically lowered the exponent if one wanted to break many keys of
 roughly the same size. The idea was to fix m, the 'base' of the number
 field polynomial, and precompute a great many pairs (a, b) such that
 a - bm is smooth. With this precomputation, the NFS runs in L[1/3, ~1.639],
 which is dramatically faster (and quite worth it for a large
 organization --- they're bound to want to break multiple keys).

 It is not unreasonable to think that a small(ish) improvement to the
 number field sieve could significantly lower the strength of current
 keys. It *looks* more likely to happen than a significant improvement in
 the speed of ECDLP breaking (I'll make no bets on AES, though).

 Best regards,
 Samuel Neves





Re: non 2048-bit keys

2010-08-15 Thread Samuel Neves

If an attacker creating a special-purpose machine to break your keys is
a realistic scenario, why are you even considering keys of that size?

Best regards,
Samuel Neves

On 15-08-2010 04:25, John Gilmore wrote:
  ... 2048-bit keys performing
 at 1/9th of 1024-bit. My own internal benchmarks have been closer to
 1/7th to 1/8th. Either way, that's back in line with the above stated
 90-95% overhead. Meaning, in Dan's words 2048 ain't happening.
 Can I abuse a phrase and call this binary thinking?

 There is no reason that the next step after 1024 bits has to be 2048 bits.
 How about 1032 bits?  Or 1040?  Or 1104?
 How about 1200 bits?  How about 1536?  How about 1600?  1808?

 I have a theory that if everyone picked a pseudo-random key size above
 1024 and below 2048, rather than standardizing on Yet Another Fixed
 Keysize, we'd avoid making a powerful incentive for bad guys to build
 a key-cracker optimized for one size.  Which incentive we have
 currently created at 1024 bits.  It's the Microsoft Windows of key
 sizes -- the target that gets you 90+% of the market.  So pick a
 larger size than 1024 that your server load can live with, even if it
 isn't 2048.  And don't tell anybody else what size you picked :-).

   John





Re: 1280-Bit RSA

2010-07-11 Thread Samuel Neves
 On 11-07-2010 01:11, Brandon Enright wrote:
 On Fri, 9 Jul 2010 21:16:30 -0400 (EDT)
 Jonathan Thornburg jth...@astro.indiana.edu wrote:

 The following usenet posting from 1993 provides an interesting bit
 (no pun intended) of history on RSA key sizes.  The key passage is the
 last paragraph, asserting that 1024-bit keys should be ok (safe from
 key-factoring attacks) for a few decades.  We're currently just
 under 1.75 decades on from that message.  I think the take-home lesson
 is that forecasting progress in factoring is hard, so it's useful to
 add a safety margin...
 This is quite interesting.  The post doesn't say, but I suspect that the
 factoring effort was based on using the Quadratic Sieve rather than GNFS.
 The difference in speed for QS versus GNFS starts to really diverge with
 larger composites.  Here's another table:

   RSA     GNFS       QS
 =======================
   256    43.68    43.73
   384    52.58    55.62
   512    59.84    65.86
   664    67.17    76.64
   768    71.62    83.40
  1024    81.22    98.48
  1280    89.46   111.96
  1536    96.76   124.28
  2048   109.41   146.44
  3072   129.86   184.29
  4096   146.49   216.76
  8192   195.14   319.63
 16384   258.83   469.80
 32768   342.05   688.62

 Clearly starting at key sizes of 1024 and greater GNFS starts to really
 improve over QS.  If the 1993 estimate for RSA 1024 was assuming QS
 then that was roughly equivalent to RSA 1536 today.  Even improving the
 GNFS constant from 1.8 to 1.6 cuts off the equivalent of about 256 bits
 from the modulus.

 The only certainty in factoring techniques is that they won't get worse
 than what we have today.

 Brandon



The exponent can be lowered even further from that (to somewhere between
1.6 and 1.7) --- RSA-768 took about 2^67 Opteron instructions to
complete, and RSA-512 can be done in about 2^54 similar operations
(within the realm of possibility for a single box over a few days or
weeks).
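
A rough way to see where the 'somewhere between 1.6 and 1.7' comes from,
assuming the bare L[1/3, c] formula and treating the ~2^67-instruction
RSA-768 effort as the calibration point:

    from math import log

    def l_exponent(n_bits, alpha=1 / 3.0):
        """(ln n)^alpha * (ln ln n)^(1-alpha); L[alpha, c] = exp(c * this)."""
        ln_n = n_bits * log(2)
        return ln_n ** alpha * log(ln_n) ** (1 - alpha)

    # Fit the effective constant to the RSA-768 record (~2^67 instructions).
    c_eff = 67 * log(2) / l_exponent(768)
    print("effective NFS constant: %.2f" % c_eff)   # about 1.68

    # Re-use it for RSA-512; compare with the ~2^54 figure above.
    print("RSA-512 estimate: 2^%.0f" % (c_eff * l_exponent(512) / log(2)))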

Best regards,
Samuel Neves



Re: What's the state of the art in factorization?

2010-04-21 Thread Samuel Neves
On 21-04-2010 02:40, Victor Duchovni wrote:
 EC definitely has practical merit. Unfortunately the patent issues around
 protocols using EC public keys are murky.

 Neither RSA nor EC come with complexity proofs.

While EC (by that I assume you mean ECDSA) does not have a formal
security proof, i.e., a proof that breaking it is as hard as the EC
discrete logarithm problem, it is much closer to one than RSA is to
factoring. In particular, Pointcheval and Stern, and later Brown, came
close to a formal proof for ECDSA [1].

If one goes further into other schemes, there is Rabin-Williams for the
factoring problem. There are also the schemes by Goh et al. [2], which
are reducible to the CDH and DDH problems in generic abelian groups
(like EC). Would patents also apply to one of these schemes over an
elliptic curve?

Best regards,
Samuel Neves

[1] http://www.cacr.math.uwaterloo.ca/techreports/2000/corr2000-54.ps
[2] http://www.cs.umd.edu/~jkatz/papers/dh-sigs-full.pdf



Re: What's the state of the art in factorization?

2010-04-20 Thread Samuel Neves

The state of the art in factorization is essentially what was used for
the factorization of RSA-768 [1] --- there haven't been many advances in
the number field sieve algorithm itself. The current effort, as Bernstein
puts it, is in speeding up smoothness detection as part of the relation
collection process.

Both the RSA-768 factorization paper and a previous one by the same
authors [2] try to predict the effort needed for a 1024-bit factorization,
which is estimated to be around 1000 times harder than for a 768-bit
modulus. [1] estimates the number of operations in the RSA-768
factorization to be in the ballpark of 2^67 instructions; a thousand
times harder puts this at about 2^77, which is in the realm of the
doable, but very hard, even for a well-funded organization.
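
As a sanity check on the 'around 1000 times harder' figure, here is a
sketch using the bare L[1/3, 1.923] formula with the o(1) term dropped
(the estimates in [1] and [2] are considerably more careful than this):

    from math import log

    def l_bits(n_bits, c=1.923, alpha=1 / 3.0):
        """log2 of L_n[alpha, c], o(1) term dropped."""
        ln_n = n_bits * log(2)
        return c * ln_n ** alpha * log(ln_n) ** (1 - alpha) / log(2)

    ratio_bits = l_bits(1024) - l_bits(768)
    print("1024-bit vs. 768-bit: about 2^%.1f harder (~%d times)"
          % (ratio_bits, round(2 ** ratio_bits)))
    # Scaling the ~2^67-instruction RSA-768 effort by that ratio lands
    # in the neighbourhood of 2^77.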

We also have to take into account the logistics of doing such a
factorization. Unlike an elliptic curve discrete logarithm computation,
which takes (relatively) negligible storage and communication, the
number field sieve requires massive amounts of data, and the linear
algebra step could become (even more of) a problem.

Best regards,
Samuel Neves

[1] http://eprint.iacr.org/2010/006
[2] http://eprint.iacr.org/2009/389

On 20-04-2010 16:45, Perry E. Metzger wrote:
 I was alerted to some slides from a talk that Dan Bernstein gave a few
 days ago at the University of Montreal on what tools will be needed to
 factor 1024 bit numbers:

 http://cr.yp.to/talks/2010.04.16/slides.pdf

 It has been a couple of years since there has been serious discussion on
 the list on this topic, and especially in the light of various technical
 decisions being undertaken on the size of DNS signing keys for high
 valued zones (like root), I was curious as to whether anyone had any
 interesting comments on the state of the art in factorization.

 Perry
