Re: 1280-Bit RSA

2010-07-10 Thread Brandon Enright
On Fri, 9 Jul 2010 21:16:30 -0400 (EDT)
Jonathan Thornburg jth...@astro.indiana.edu wrote:

 The following usenet posting from 1993 provides an interesting bit
 (no pun intended) of history on RSA key sizes.  The key passage is the
 last paragraph, asserting that 1024-bit keys should be ok (safe from
 key-factoring attacks) for a few decades.  We're currently just
 under 1.75 decades on from that message.  I think the take-home lesson
 is that forecasting progress in factoring is hard, so it's useful to
 add a safety margin...

This is quite interesting.  The post doesn't say, but I suspect the
factoring effort estimate was based on the Quadratic Sieve rather than
GNFS.  The speed of QS versus GNFS really starts to diverge with larger
composites.  Here's another table (RSA modulus size versus equivalent
symmetric key strength, in bits):

RSA      GNFS      QS
=====================
256      43.68    43.73
384      52.58    55.62
512      59.84    65.86
664      67.17    76.64
768      71.62    83.40
1024     81.22    98.48
1280     89.46   111.96
1536     96.76   124.28
2048    109.41   146.44
3072    129.86   184.29
4096    146.49   216.76
8192    195.14   319.63
16384   258.83   469.80
32768   342.05   688.62

Clearly, at key sizes of 1024 bits and greater, GNFS starts to really
improve over QS.  If the 1993 estimate for RSA 1024 was assuming QS,
then that estimate is roughly equivalent to RSA 1536 today.  Even
improving the GNFS constant from 1.8 to 1.6 cuts the equivalent of
about 256 bits off the modulus.
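
For anyone who wants to sanity-check these numbers, here is a minimal
Python sketch (mine, not from the original post) that reproduces both
columns from the standard heuristic L-notation runtimes, assuming the
conservative GNFS constant of 1.8 and the usual QS constant of 1:

import math

def gnfs_bits(n_bits, c=1.8):
    # log2 of exp(c * (ln N)^(1/3) * (ln ln N)^(2/3)) for an n-bit modulus N
    ln_n = n_bits * math.log(2)
    return c * ln_n ** (1 / 3.0) * math.log(ln_n) ** (2 / 3.0) / math.log(2)

def qs_bits(n_bits):
    # log2 of exp((ln N)^(1/2) * (ln ln N)^(1/2))
    ln_n = n_bits * math.log(2)
    return math.sqrt(ln_n * math.log(ln_n)) / math.log(2)

for n in (256, 384, 512, 664, 768, 1024, 1280, 1536,
          2048, 3072, 4096, 8192, 16384, 32768):
    print("%-6d %8.2f %8.2f" % (n, gnfs_bits(n), qs_bits(n)))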

The only certainty in factoring techniques is that they won't get worse
than what we have today.

Brandon



Re: [TIME_WARP] 1280-Bit RSA

2010-07-09 Thread Brandon Enright
On Thu, 1 Jul 2010 06:46:30 +0200
Dan Kaminsky d...@doxpara.com wrote:

 All,
 
I've got a perfect vs. good question.
 
NIST is pushing RSA-2048.  And I think we all agree that's
 probably a good thing.
 
However, performance on RSA-2048 is too low for a number of real
 world uses.
 
Assuming RSA-2048 is unavailable, is it worth taking the
 intermediate step of using RSA-1280?  Or should we stick to RSA-1024?
 
 --Dan
 

Dan,

I looked at the GNFS runtime and plugged a few numbers in.  It seems
RSA Security is using a more conservative constant of about 1.8 rather
than the suggested 1.92299... (the asymptotic (64/9)^(1/3)).

See:
http://mathworld.wolfram.com/NumberFieldSieve.html

So using 1.8, a 1024-bit RSA key is roughly equivalent to an 81-bit
symmetric key.  Plugging in 1280 yields 89 bits.
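
To see how sensitive these figures are to the choice of constant, here
is a small sketch (my code, not RSA Security's) comparing the
conservative 1.8 against the asymptotic 1.92299...:

import math

def gnfs_bits(n_bits, c):
    # log2 of exp(c * (ln N)^(1/3) * (ln ln N)^(2/3)) for an n-bit modulus N
    ln_n = n_bits * math.log(2)
    return c * ln_n ** (1 / 3.0) * math.log(ln_n) ** (2 / 3.0) / math.log(2)

for n in (1024, 1280, 1536):
    print("%4d-bit RSA: %5.1f bits (c=1.8), %5.1f bits (c=1.92299)"
          % (n, gnfs_bits(n, 1.8), gnfs_bits(n, 1.92299)))
# 1024-bit RSA:  81.2 bits (c=1.8),  86.8 bits (c=1.92299)
# 1280-bit RSA:  89.5 bits (c=1.8),  95.6 bits (c=1.92299)
# 1536-bit RSA:  96.8 bits (c=1.8), 103.4 bits (c=1.92299)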

I'm of the opinion that if you take action to improve security, you
should get more than 8 additional bits for your efforts.  For example,
1536 shouldn't be that much slower but gives 96 bits of security.

For posterity, here is a table using 1.8 for the GNFS constant:

RSA     Symmetric
=================
256      43.7
512      59.8
768      71.6
1024     81.2
1280     89.5
1536     96.8
2048    109.4
3072    129.9
4096    146.5
8192    195.1

Brandon



Re: MD6 withdrawn from SHA-3 competition

2009-07-04 Thread Brandon Enright
On Thu, 2 Jul 2009 20:51:47 -0700 or thereabouts Joseph Ashwood
ashw...@msn.com wrote:

 Sent: Wednesday, July 01, 2009 4:05 PM
 Subject: MD6 withdrawn from SHA-3 competition
 
  Also from Bruce Schneier, a report that MD6 was withdrawn from the
  SHA-3 competition because of performance considerations.
 
 I find this disappointing. With the rate of destruction of primitives
 in any such competition I would've liked to see them let it stay
 until it is either broken or at least until the second round. A quick
 glance at the SHA-3 zoo and you won't see much left with no attacks.
 It would be different if it was yet another M-D, using AES as a
 foundation, blah, blah, blah, but MD6 is a truly unique and
 interesting design.
 
 I hope the report is wrong, and in keeping that hope alive, the MD6
 page has no statement about the withdrawal.
 Joe
 

It wasn't entirely clear to me whether it really was withdrawn.  Ron
Rivest posted some thoughts on MD6 performance on behalf of the MD6
team, and specifically suggested/requested that NIST require submitted
algorithms to be provably resistant to differential attacks.

The logic was that MD6 is slow because the high number of rounds is
needed for their proof.  They won't tweak or submit a version that
doesn't meet this requirement of theirs, and under the current contest
requirements they can't be competitive speed-wise without losing their
proof of resistance to differential attacks.  Unless the contest
changes to require such a proof, there is no point in moving MD6
forward.

Brandon



Re: 80-bit security? (Was: Re: SHA-1 collisions now at 2^{52}?)

2009-05-08 Thread Brandon Enright
On Wed, 6 May 2009 20:54:34 -0400
Steven M. Bellovin s...@cs.columbia.edu wrote:

 On Thu, 30 Apr 2009 17:44:53 -0700
 Jon Callas j...@callas.org wrote:
 
  The accepted wisdom
  on 80-bit security (which includes SHA-1, 1024-bit RSA and DSA keys,
  and other things) is that it is to be retired by the end of 2010. 
 
 That's an interesting statement from a historical perspective -- is it
 true?  And what does that say about our ability to predict the future,
 and hence to make reasonable decisions on key length?
 
 See, for example, the 1996 report on key lengths, by Blaze, Diffie,
 Rivest, Schneier, Shimomura, Thompson, and Wiener, available at
 http://www.schneier.com/paper-keylength.html -- was it right?
 

On breaking DES the paper says:

As explained above, 40-bit encryption provides inadequate
protection against even the most casual of intruders, content to
scavenge time on idle machines or to spend a few hundred dollars.
Against such opponents, using DES with a 56-bit key will provide a
substantial measure of security. At present, it would take a year
and a half for someone using $10,000 worth of FPGA technology to
search out a DES key. In ten years time an investment of this size
would allow one to find a DES key in less than a week.


This is surprisingly accurate.  As Sandy Harris pointed out,
http://www.copacobana.org/ is selling about $10k worth of FPGA
technology to crack DES in about 6.4 days:

With further optimization of our implementation, we could achieve a
clock frequency of 136MHz for the brute force attack with COPACOBANA.
Now, the average search time for a single DES key is less than a week,
precisely 6.4 days. The worst case for the search has been reduced to
12.8 days now.


Now, even assuming 64 bits is within reach of modern computing power, I
still think it is naive to assume that computing power will keep
growing until 80-bit keyspaces are searchable any time soon.  The
energy requirements for cycling an 80-bit counter are significant.  We
are likely to reach a point where the question is not "how parallel a
machine can you afford to build?" but rather "how much heat can you
afford to dissipate?"
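
As a back-of-envelope sketch (my arithmetic, not from the COPACOBANA
paper), extrapolating COPACOBANA's published rate to larger keyspaces
makes the gap vivid:

# Scale COPACOBANA's DES rate (2^56 keys in 12.8 days worst case) to
# 64- and 80-bit keyspaces, ignoring hardware progress.
keys_per_sec = 2 ** 56 / (12.8 * 86400.0)   # ~6.5e10 keys/sec for ~$10k

for bits in (56, 64, 80):
    seconds = 2 ** bits / keys_per_sec
    print("%2d-bit keyspace: %.3g days worst case (%.3g years)"
          % (bits, seconds / 86400.0, seconds / (86400.0 * 365)))
# 56-bit: 12.8 days; 64-bit: ~9 years; 80-bit: ~590,000 years

Every doubling of speed or budget removes just one bit from those
exponents, so even a thousand-fold improvement leaves an 80-bit search
at roughly 590 years.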

Brandon



Re: Storm, Nugache lead dangerous new botnet barrage

2008-01-02 Thread Brandon Enright
On Fri, 28 Dec 2007 09:06:44 -0800 or thereabouts ' =JeffH '
[EMAIL PROTECTED] wrote:

 Storm, Nugache lead dangerous new botnet barrage
 By Dennis Fisher, Executive Editor
 19 Dec 2007 | SearchSecurity.com
 http://searchsecurity.techtarget.com/originalContent/0,289142,sid14_gci1286808,00.html?track=NL-358&ad=614777&asrc=EM_NLN_2785475&uid=1408222
...snip...

Storm made a pretty significant comeback this week:

http://noh.ucsd.edu/~bmenrigh/stormdrain/stormdrain.enctotal_encactive.html

Note that those graphs are *only* from the peers that speak encrypted
Overnet.  If you include all the legacy Storm bots out there that still
speak the unencrypted variant, Storm is getting back up to its heyday
size.

Brandon



Re: fyi: Storm Worm botnet numbers, via Microsoft

2007-10-23 Thread Brandon Enright
On Mon, 22 Oct 2007 17:55:39 -0700 plus or minus some time ' =JeffH '
[EMAIL PROTECTED] wrote:
...snip...
  I will be presenting /some/ of this work at Toorcon in San Diego this
  Saturday:  
   
  http://www.toorcon.org/2007/event.php?id=38  
 
 excellent, how'd it go? Anyone else present on Storm?  

Things went pretty smoothly.  Storm is a complicated and evolving beast,
so a 50-minute talk can't really go into the depth needed to really
understand how it works.  There weren't any other presentations on Storm
at Toorcon, but it's a pretty hot topic, so there should be more talks
and papers coming out from various researchers in the coming weeks and
months.

It seems like whenever anyone says anything about Storm, the story gets
picked up by some news service and makes its way to Slashdot.

   
  The presentation is not academic paper quality and takes more of a
  code-monkey approach to the network.  Real (sane and substantiated)
  numbers, stats, and graphs will be presented.  To the best of my
  knowledge, it will be the first publicly released estimates of the size
  of the network with actual supporting data and evidence.   
 
 are your slides now available?  

They are:
http://noh.ucsd.edu/~bmenrigh/exposing_storm.ppt

The link to the historical trends of the network is here:
http://noh.ucsd.edu/~bmenrigh/storm_data.tar.bz2

It can be very hard to track the size of a botnet, even in the case of
Storm where I'm crawling the network.  Technologies like NAT can
significantly complicate things.

See
http://www.usenix.org/events/hotbots07/tech/full_papers/rajab/rajab_html/
for a discussion on tracking the size of botnets.

 
 =JeffH
   

My slides should provide adequate detail for someone to understand how to
interpret the graphs and data.  For specific questions, feel free to email
me directly.

Brandon




Re: fyi: Storm Worm botnet numbers, via Microsoft

2007-10-22 Thread Brandon Enright
On Mon, 15 Oct 2007 16:02:54 -0700 plus or minus some time ' =JeffH '
[EMAIL PROTECTED] wrote:
 
 I haven't come across any detailed Storm extent analysis, even with
 having Google search specific security company sites (e.g. using 
 site:sec-corp.com). So if anyone has pointers to pages (other than the
 MSFT blog article pointed to in an earlier post) that present a sane and 
 substantiated analysis of Storm extent, please post 'em. Maybe folks
 don't want to (post 'em or point to 'em)? Are there papers in
 submission? ;-)
 
   

Detailed analysis of the Storm network (how it works, its size, etc.) is
being actively worked on by several research groups.  Storm is nowhere
near 50 million nodes and never was.

I will be presenting /some/ of this work at Toorcon in San Diego this
Saturday:

http://www.toorcon.org/2007/event.php?id=38

The presentation is not academic paper quality and takes more of a
code-monkey approach to the network.  Real (sane and substantiated)
numbers, stats, and graphs will be presented.  To the best of my knowledge,
it will be the first publicly released estimates of the size of the network
with actual supporting data and evidence.

Brandon



Re: World's most powerful supercomputer goes online

2007-09-02 Thread Brandon Enright
On Sun, 2 Sep 2007 14:48:31 +0200 plus or minus some time Guus Sliepen
[EMAIL PROTECTED] wrote:

 Experience with tinc (a VPN daemon with peer-to-peer like architecture,
 which replicates certain information to all daemons in a single VPN),
 showed that even in a network with only 20 nodes, it is extremely hard
 to get rid of information.  You either need to shut down all daemons at
 the same time to make sure all state is lost, or modify the software to
 allow explicit deletion of certain information. With more than 1
 million nodes it will be even harder to delete data.
   

Actually, the stormworm network illustrates this problem perfectly.  As
with most DHT-based P2P networks, stormworm suffers from latent/stale
node data lingering in the memory of other nodes.  Aside from the
Overnet peer bootstrap files for each stormworm node, the list of nodes
in the network is distributed in memory across all the nodes.

Stormworm is especially bad because the authors didn't take the latent
data problem into account.  There is no built-in mechanism for a botted
host to remove dead peers from its in-memory list.  With tens of
thousands of nodes, IPs of machines that were infected and cleaned weeks
ago still occasionally show up.  I suspect this behavior is the primary
source of the ridiculously high (and inaccurate) estimates for the size
of the stormworm botnet.
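
To illustrate the mechanism, here is a minimal, hypothetical sketch
(mine, not Storm's actual code) of why a peer table with no expiry
inflates size estimates, compared to the last-seen eviction that Storm
lacks:

import time

class PeerTable(object):
    def __init__(self, max_age=None):
        self.peers = {}             # peer_id -> last-seen timestamp
        self.max_age = max_age      # None = never expire (Storm-style)

    def saw(self, peer_id):
        self.peers[peer_id] = time.time()

    def live_count(self, now=None):
        if self.max_age is None:
            return len(self.peers)  # counts long-dead peers too
        now = now if now is not None else time.time()
        return sum(1 for t in self.peers.values()
                   if now - t < self.max_age)

A crawler tallying Storm-style tables (max_age=None) keeps counting
hosts that were cleaned weeks ago, which is exactly the inflation
described above.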

Brandon
