Re: SHA1 broken?

2005-03-09 Thread Tyler Durden
Ah. You meant as a principle in general. Of course the prevailing wisdom is 
to go from FPGAs to ASICs when you have heavy tasks.

In Telecom equipment, however, there are a few issues that basically 'require' 
FPGAs.

First, the standards change quite a bit, depending on which area you're in. 
For instance, RPR didn't really get settled until very recently. Second, 
your customers may ask for more or different kinds of functionality, so 
you may have a new release of firmware to address that. Putting the framing 
and/or PM on an FPGA while keeping the guts (e.g., packet processing) on the 
main ASIC/processor allows you to swap out the trivial parts without a major 
heart transplant.

In addition, there's probably the far more important issue of design cycle 
times. ASICs will take (at the very minimum) 18 months to create, and if you 
make a mistake early on and don't catch it, you have to start all over again. 
For some fields that's just unacceptable.

Then again, if you're looking for sheer, brute performance and design cycle 
times are not a limiting factor, ASICs are often the way to go. Even in a 
Variola Suitcase, however, I'd bet some of the trivial functions are 
off-loaded to an FPGA, for the reasons above.

-TD
From: Riad S. Wahby [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: SHA1 broken?
Date: Tue, 8 Mar 2005 13:26:48 -0600
Tyler Durden [EMAIL PROTECTED] wrote:
 Well, maybe I misunderstand your statement here, but in Telecom most heavy 
 iron has plenty of FPGAs, and as far as I understand it, they more or less 
 have to.

"Have to" in what sense?  If they're constantly reconfiguring the FPGAs
(new software revs, or some sort of evolutionary learning process---
the latter not likely in telecom, of course), sure, they have to be on
reprogrammable structures.
If, on the other hand, you're building a custom hash cracking machine,
you don't need to reconfigure your gates. You could design your
parallelized SHA1 cracking machine and dump it onto a bunch of FPGAs,
but if you really have unlimited resources you take the plunge into
ASICs, at which point you can tighten your timing substantially.
--
Riad S. Wahby
[EMAIL PROTECTED]



Re: SHA1 broken?

2005-03-09 Thread Riad S. Wahby
Tyler Durden [EMAIL PROTECTED] wrote:
 Then again, if you're looking for sheer, brute performance and design cycle 
 times are not a limiting factor, ASICs are often the way to go. Even in a 
 Variola Suitcase, however, I'd bet some of the trivial functions are 
 off-loaded to an FPGA, though, for reasons above.

Oh, sure.  Buy yourself the flexibility of the FPGA, e.g., by putting an
FPGA on a huge DMA pipe.  But don't count on the FPGA to do the brunt of
the crunching once you've settled on an implementation.

Note also that you can probably buy yourself lots of performance without
increasing the design cycle time all that much by simply synthesizing
(via Synopsys or the like) the same Verilog with which you would have
programmed the FPGA.  Buy (or pirate if you can; it's not like you're
selling these things, so who cares about the IP issues?) a set of
standard logic cells in the smallest process you can afford so that even
the lion's share of the layout can be done in a completely automated
fashion, and you're basically all set.

-- 
Riad S. Wahby
[EMAIL PROTECTED]



Re: SHA1 broken?

2005-03-08 Thread Tyler Durden
Well, maybe I misunderstand your statement here, but in Telecom most heavy 
iron has plenty of FPGAs, and as far as I understand it, they more or less 
have to.

-TD
From: Riad S. Wahby [EMAIL PROTECTED]
To: Cypherpunks [EMAIL PROTECTED]
Subject: Re: SHA1 broken?
Date: Mon, 7 Mar 2005 17:57:50 -0600
Thomas Shaddack shaddack@ns.arachne.cz wrote:
 There are FPGAs with on-chip RISC CPU cores, allowing reaping the benefits 
 of both architectures in a single chip.

FPGAs are mostly useful for prototyping.  Once you've decided on a
design, there's no point in realizing it in a reprogrammable
environment.  Synthesize it, time it carefully, and run it as fast as
your process allows.
TSMC 0.13u just ain't that pricey any more.
--
Riad S. Wahby
[EMAIL PROTECTED]



Re: SHA1 broken?

2005-03-08 Thread Riad S. Wahby
Tyler Durden [EMAIL PROTECTED] wrote:
 Well, maybe I misunderstand your statement here, but in Telecom most heavy 
 iron has plenty of FPGAs, and as far as I understand it, they more or less 
 have to.

"Have to" in what sense?  If they're constantly reconfiguring the FPGAs
(new software revs, or some sort of evolutionary learning process---
the latter not likely in telecom, of course), sure, they have to be on
reprogrammable structures.

If, on the other hand, you're building a custom hash cracking machine,
you don't need to reconfigure your gates. You could design your
parallelized SHA1 cracking machine and dump it onto a bunch of FPGAs,
but if you really have unlimited resources you take the plunge into
ASICs, at which point you can tighten your timing substantially.

-- 
Riad S. Wahby
[EMAIL PROTECTED]



Re: SHA1 broken?

2005-03-07 Thread Thomas Shaddack

FPGAs will have a very hard time being as fast as dedicated CPUs, 
frequency-wise. The FPGA structures have to be too generic, and are much 
bigger than the specialized structures of CPUs, so they have higher 
capacitance, which limits the maximum achievable switching frequency. The 
length of the wiring between the structures, together with the lazy speed 
of light, plays its role as well. However, the FPGA structure allows 
parallelizing of processing tasks, which can in some cases neatly beat 
sequential CPUs.
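
A rough software sketch of that parallel-search pattern (my illustration, 
not from the original message; the 16-bit target prefix and the worker 
split are toy assumptions chosen so the example finishes quickly):

import hashlib
from multiprocessing import Pool

TARGET_PREFIX = b"\x00\x00"   # toy difficulty: 16 leading zero bits

def search(bounds):
    # Each worker scans its own slice of the counter space independently,
    # much as independent hash pipelines in an FPGA would divide the work.
    start, stop = bounds
    hits = []
    for i in range(start, stop):
        if hashlib.sha1(i.to_bytes(8, "big")).digest().startswith(TARGET_PREFIX):
            hits.append(i)
    return hits

if __name__ == "__main__":
    chunk = 1 << 17
    slices = [(k * chunk, (k + 1) * chunk) for k in range(8)]
    with Pool() as pool:               # one process per CPU core
        found = [i for part in pool.map(search, slices) for i in part]
    print(len(found), "partial preimages found")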

There are FPGAs with on-chip RISC CPU cores, allowing reaping the benefits 
of both architectures in a single chip.



On Sat, 5 Mar 2005, Tyler Durden wrote:

 Well, what would you call a network processor? An FPGA or a CPU? I think of it
 as somewhere in between, given credence to the FPGA statement below.
 
 -TD
 
  From: Major Variola (ret) [EMAIL PROTECTED]
  To: [EMAIL PROTECTED] [EMAIL PROTECTED]
  Subject: Re: SHA1 broken?
  Date: Sat, 05 Mar 2005 06:51:24 -0800
  
  At 09:23 PM 2/19/05 +, Dave Howe wrote:
 I am unaware of any massive improvement (certainly to the scale of
  the comparable improvement in CPUs) in FPGAs, and the ones I looked at
  a few days ago while researching this question seemed to have pretty
  
  FPGAs scale with tech the same as CPUs, however CPUs contain a lot
  more design info (complexity).  But FPGAs since '98 have gotten
  denser (Moore's observation), pioneering Cu wiring, smaller features,
  etc.
 
 



Re: SHA1 broken?

2005-03-07 Thread Riad S. Wahby
Thomas Shaddack shaddack@ns.arachne.cz wrote:
 There are FPGAs with on-chip RISC CPU cores, allowing reaping the benefits 
 of both architectures in a single chip.

FPGAs are mostly useful for prototyping.  Once you've decided on a
design, there's no point in realizing it in a reprogrammable
environment.  Synthesize it, time it carefully, and run it as fast as
your process allows.

TSMC 0.13u just ain't that pricey any more.

-- 
Riad S. Wahby
[EMAIL PROTECTED]



Re: SHA1 broken?

2005-03-07 Thread Tyler Durden
Well, what would you call a network processor? An FPGA or a CPU? I think of 
it as somewhere in between, giving credence to the FPGA statement below.

-TD
From: Major Variola (ret) [EMAIL PROTECTED]
To: [EMAIL PROTECTED] [EMAIL PROTECTED]
Subject: Re: SHA1 broken?
Date: Sat, 05 Mar 2005 06:51:24 -0800
At 09:23 PM 2/19/05 +, Dave Howe wrote:
   I am unaware of any massive improvement (certainly to the scale of
the comparable improvement in CPUs) in FPGAs, and the ones I looked at
a few days ago while researching this question seemed to have pretty
FPGAs scale with tech the same as CPUs, however CPUs contain a lot
more design info (complexity).  But FPGAs since '98 have gotten
denser (Moore's observation), pioneering Cu wiring, smaller features,
etc.



Re: SHA1 broken?

2005-03-05 Thread Major Variola (ret)
At 09:23 PM 2/19/05 +, Dave Howe wrote:
   I am unaware of any massive improvement (certainly to the scale of
the comparable improvement in CPUs) in FPGAs, and the ones I looked at
a few days ago while researching this question seemed to have pretty

FPGAs scale with tech the same as CPUs, however CPUs contain a lot
more design info (complexity).  But FPGAs since '98 have gotten
denser (Moore's observation), pioneering Cu wiring, smaller features,
etc.





Re: SHA1 broken?

2005-02-22 Thread Eugen Leitl
On Sat, Feb 19, 2005 at 03:53:53PM +, Dave Howe wrote:

 I wasn't aware that FPGA technology had improved that much if any - feel 
 free to correct my misapprehension in that area though :)

FPGAs are too slow (and too expensive); if you want lots of SHA-1 performance,
use a crypto processor (or lots of forthcoming C5J mini-ITX boards), or an
ASIC.

Assuming fast SHA-1 computation is the basis for the attack -- we do not
know that.

While looking, I came across

http://www.ietf.org/proceedings/02jul/slides/saag-1.pdf

"We really DO NOT need SHA-256 for Message Authentication", mid-2002.

-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07078, 11.61144    http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
http://moleculardevices.org http://nanomachines.net


pgpiyYiZfRHUC.pgp
Description: PGP signature


Re: SHA1 broken?

2005-02-22 Thread Joseph Ashwood
- Original Message - 
From: Dave Howe [EMAIL PROTECTED]
Subject: Re: SHA1 broken?


  Indeed so. however, the argument in 1998, a FPGA machine broke a DES 
key in 72 hours, therefore TODAY... assumes that (a) the problems are 
comparable, and (b) that moores law has been applied to FPGAs as well as 
CPUs.
That is a misreading of my statements, and it misses a very large portion where 
I specifically stated that the new machine would need to be custom instead 
of semi-custom. The proposed system was not based on FPGAs; instead it would 
need to be based on ASICs engineered using modern technology, much more 
along the lines of a DSP. The primary gains available are actually from the 
larger wafers in use now, along with the transistor shrinkage. Combined, 
these have approximately kept the cost in line with Moore's law, and the 
benefits of custom engineering account for the rest. As for the exact details 
of how I did the calculations: I assumed Moore's law for speed, and an 
additional 4x improvement from custom chips instead of off-the-shelf. In 
order to verify the calculations I also redid them assuming DSPs which 
should be capable of processing the data (specifically from TI); I came to a 
cost within a couple orders of magnitude, although the power consumption 
would be substantially higher.
   Joe 



Re: SHA1 broken?

2005-02-22 Thread Dave Howe
Joseph Ashwood wrote:
I believe you substantially misunderstood my statements, 2^69 work is 
doable _now_. 2^55 work was performed in 72 hours in 1998, scaling 
forward the 7 years to the present (and hence through known data) leads 
to a situation where the 2^69 work is achievable today in a reasonable 
timeframe (3 days), assuming reasonable quantities of available money 
($500,000US). There is no guessing about what the future holds for this, 
the 2^69 work is NOW.
I wasn't aware that FPGA technology had improved that much if any - feel 
free to correct my misapprehension in that area though :)



Re: SHA1 broken?

2005-02-22 Thread Dave Howe
Eugen Leitl wrote:
On Sat, Feb 19, 2005 at 03:53:53PM +, Dave Howe wrote:
I wasn't aware that FPGA technology had improved that much if any - feel
free to correct my misapprehension in that area though :)
FPGAs are too slow (and too expensive), if you want lots of SHA-1
performance,
use a crypto processor (or lots of forthcoming C5J mini-ITX boards), or an
ASIC.
Assuming, fast SHA-1 computation is the basis for the attack -- we do not
know that.
  Indeed so. However, the argument "in 1998, an FPGA machine broke a DES 
key in 72 hours, therefore TODAY..." assumes that (a) the problems are 
comparable, and (b) that Moore's law has been applied to FPGAs as well as 
CPUs.
  I am unaware of any massive improvement (certainly to the scale of 
the comparable improvement in CPUs) in FPGAs, and the ones I looked at 
a few days ago while researching this question seemed to have pretty 
much the same spec sheet as the ones I looked at back then. However, I 
am not a gate array techie, and most of my experience with them has been 
small (two-three chip) devices at very long intervals, purely for my own 
interest. It is possible there has been a quantum leap forward in FPGA 
tech or some substitute tech that can perform massively parallel 
calculations, on larger block sizes and hence more operations, at a 
noticeably faster rate than the DES cracker could back then.
Schneier apparently believes there has been - but he is simply applying 
Moore's law to the machine from back then, and that may not be true 
unless he knows something I don't (I assume he knows lots of things I 
don't, but of course he may not have thought this one through :)



Re: SHA1 broken?

2005-02-18 Thread Joseph Ashwood
- Original Message - 
From: Dave Howe [EMAIL PROTECTED]
Sent: Thursday, February 17, 2005 2:49 AM
Subject: Re: SHA1 broken?


Joseph Ashwood wrote:
  I believe you are incorrect in this statement. It is a matter of public
record that RSA Security's DES Challenge II was broken in 72 hours by 
$250,000 worth of semi-custom machine, for the sake of solidity let's 
assume they used 2^55 work to break it. Now moving to a completely custom 
design, bumping up the cost to $500,000, and moving forward 7 years, 
delivers ~2^70 work in 72 hours (give or take a couple orders of 
magnitude). This puts the 2^69 work well within the realm of realizable 
breaks, assuming your attackers are smallish businesses, and if your 
attackers are large businesses with substantial resources the break can 
be assumed in minutes if not seconds.

2^69 is completely breakable.
   Joe
  Its fine assuming that moore's law will hold forever, but without that 
you can't really extrapolate a future tech curve. with *todays* 
technology, you would have to spend an appreciable fraction of the 
national budget to get a one-per-year break, not that anything that has 
been hashed with sha-1 can be considered breakable (but that would allow 
you to (for example) forge a digital signature given an example)
  This of course assumes that the break doesn't match the criteria from 
the previous breaks by the same team - ie, that you *can* create a 
collision, but you have little or no control over the plaintext for the 
colliding elements - there is no way to know as the paper hasn't been 
published yet.
I believe you substantially misunderstood my statements: 2^69 work is doable 
_now_. 2^55 work was performed in 72 hours in 1998; scaling forward the 7 
years to the present (and hence through known data) leads to a situation 
where the 2^69 work is achievable today in a reasonable timeframe (3 days), 
assuming reasonable quantities of available money ($500,000 US). There is no 
guessing about what the future holds for this; the 2^69 work is NOW.


- Original Message - 
From: Trei, Peter [EMAIL PROTECTED]
To: Dave Howe [EMAIL PROTECTED]; Cypherpunks 
[EMAIL PROTECTED]; Cryptography cryptography@metzdowd.com


Actually, the final challenge was solved in 23 hours, about
1/3 Deep Crack, and 2/3 Distributed.net. They were lucky, finding
the key after only 24% of the keyspace had been searched.
More recently, RC5-64 was solved about a year ago. It took
d.net 4 *years*.
2^69 remains non-trivial.
What you're missing in this is that Deep Crack was already a year old at the 
time it was used for this; I was assuming that the most recent technologies 
would be used, so the 1998 point for Deep Crack was the critical point. Also, 
if you check the real statistics for RC5-64 you will find that 
Distributed.net suffered from a major lack of optimization on the workhorse 
of the DES cracking effort (the DEC Alpha processor), even to the point where 
running the x86 code in emulation was faster than the native code. Since an 
Alpha processor had been the breaking force for DES Challenge I and a factor 
of 1/3 for III, this crippled the performance, resulting in the Alphas 
running at only ~2% of their optimal speed, while the x86 systems were running 
at only about 50%. Based on just this, 2^64 should have taken only 1.5 years. 
Additionally, add in that virtually the entire Alpha community pulled out 
because we had better things to do with our processors (e.g., IIRC the same 
systems rendered Titanic) and Distributed.net was effectively sucked dry of 
workhorse systems, so a timeframe of 4-6 months is more likely, without any 
custom hardware and with rather sad software optimization. Assuming that the new 
attacks can be pipelined (the biggest problem with the RC5-64 optimizations 
was pipeline breaking), it is entirely possible to use modern technology 
along with a GaAs substrate to generate chips in the 10-20 GHz range, or about 
10x the speed available to Distributed.net. Add targeted hardware to the 
mix, deep pipelining, and massive multiprocessors, and my numbers still 
hold, give or take a few orders of magnitude (the 8% of III done by Deep 
Crack in 23 hours is only a little over 2 orders of magnitude off, so within 
acceptable bounds).
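
A back-of-the-envelope version of that scaling argument (my sketch, not 
part of the original post; the 18-month doubling period, the 2x budget 
factor and the 4x full-custom factor are assumptions taken from earlier 
messages in this thread):

import math

base_work_log2 = 55     # DES Challenge II: ~2^55 operations in 72 hours (1998)
years = 7               # 1998 -> 2005
doubling_months = 18    # assumed Moore's-law doubling period
budget_factor = 2       # $250,000 semi-custom -> $500,000
custom_factor = 4       # assumed gain of full-custom ASICs over semi-custom

speedup = 2 ** (years * 12 / doubling_months) * budget_factor * custom_factor
print("about 2^%.1f operations in 72 hours" % (base_work_log2 + math.log2(speedup)))
# Prints roughly 2^62.7; the ~2^69-2^70 figure above sits within the
# "give or take a couple orders of magnitude" slack allowed for.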

2^69 is achievable. It may not be pretty, and it certainly isn't kind to the 
security of the vast majority of secure infrastructure, but it is 
achievable, and while the cost bounds may have to be shifted, that is 
achievable as well.

It is still my view that everyone needs to keep a close eye on their hashes 
and make sure the numbers add up correctly; it is simply my view now that SHA-1 
needs to be put out to pasture, and the rest of the SHA line needs to be 
heavily reconsidered because of its close relation to SHA-1.

The biggest unknown surrounding this is the actual amount of work necessary 
to perform the 2^69; if the workload is all XOR, then the costs and timeframe 
I gave are reasonably pessimistic

Re: SHA1 broken?

2005-02-18 Thread Joseph Ashwood
- Original Message - 
From: Joseph Ashwood [EMAIL PROTECTED]
Sent: Friday, February 18, 2005 3:11 AM

[the attack is reasonable]
Reading through the summary I found a bit of information that means my 
estimates of workload have to be re-evaluated. Page 1: "Based on our 
estimation, we expect that real collisions of SHA1 reduced to 70-steps can 
be found using todays supercomputers." This is a very important statement 
for estimating the real workload. Assuming there is an implicit "in one 
year" in there, and assuming BlueGene (Top 500 list slot 1), this represents 
22937.6 GHz*years, or slightly over 2^69 clock cycles; I am obviously still 
using gigahertz because the information gives us nothing better to work from. 
This clearly indicates that the operations used for the workload span 
multiple processor clocks. Performing a gross estimation based on pure 
guesswork, I'm guessing that my numbers are actually off by a factor of 
between 50 and 500; this factor will likely work cleanly into either adjusting 
the timeframe or the production cost.
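
A quick arithmetic check of that figure (my addition, not part of the 
original message):

import math

ghz_years = 22937.6                     # the BlueGene estimate quoted above
seconds_per_year = 365.25 * 24 * 3600   # ~3.156e7 seconds
cycles = ghz_years * 1e9 * seconds_per_year
print("%.2e cycles, about 2^%.1f" % (cycles, math.log2(cycles)))
# ~7.2e20 cycles, about 2^69.3 -- consistent with "slightly over 2^69".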

My suggestion, though, to make a switch away from SHA-1 as soon as reasonable, 
and to prepare to switch hashes very quickly in the future, remains the same: 
the march of processor progress is not going to halt, and the advance of 
cryptographic attacks will not halt, which will inevitably squeeze SHA-1 to 
broken. I would actually argue that the 2^80 strength it should have is 
enough to begin its retirement; 2^80 has been strong enough for a decade 
in spite of the march of technology. Under the processor speed enhancements 
that have happened over the last decade we should have increased the 
keylength already, to accommodate dual-core chips running at 20 times the 
speed for a total of 40 times the prior speed (I was going to use SPEC data 
for a better calculation but I couldn't immediately find specs for a Pentium 
Pro 200), by adding at least 5 bits, preferably 8, to our necessary protection 
profile.
   Joe 



RE: SHA1 broken?

2005-02-18 Thread Trei, Peter
Actually, the final challenge was solved in 23 hours, about
1/3 Deep Crack, and 2/3 Distributed.net. They were lucky, finding
the key after only 24% of the keyspace had been searched.

More recently, RC5-64 was solved about a year ago. It took
d.net 4 *years*. 

2^69 remains non-trivial.

Peter


-Original Message-
From: [EMAIL PROTECTED] on behalf of Dave Howe
Sent: Thu 2/17/2005 5:49 AM
To: Cypherpunks; Cryptography
Subject: Re: SHA1 broken?
 
Joseph Ashwood wrote:
   I believe you are incorrect in this statement. It is a matter of public
 record that RSA Security's DES Challenge II was broken in 72 hours by 
 $250,000 worth of semi-custom machine, for the sake of solidity let's 
 assume they used 2^55 work to break it. Now moving to a completely 
 custom design, bumping up the cost to $500,000, and moving forward 7 
 years, delivers ~2^70 work in 72 hours (give or take a couple orders of 
 magnitude). This puts the 2^69 work well within the realm of realizable 
 breaks, assuming your attackers are smallish businesses, and if your 
 attackers are large businesses with substantial resources the break can 
 be assumed in minutes if not seconds.
 
 2^69 is completely breakable.
Joe
   Its fine assuming that moore's law will hold forever, but without 
that you can't really extrapolate a future tech curve. with *todays* 
technology, you would have to spend an appreciable fraction of the 
national budget to get a one-per-year break, not that anything that 
has been hashed with sha-1 can be considered breakable (but that would 
allow you to (for example) forge a digital signature given an example)
   This of course assumes that the break doesn't match the criteria 
from the previous breaks by the same team - ie, that you *can* create a 
collision, but you have little or no control over the plaintext for the 
colliding elements - there is no way to know as the paper hasn't been 
published yet.





Re: SHA1 broken?

2005-02-17 Thread Dave Howe
Joseph Ashwood wrote:
  I believe you are incorrect in this statement. It is a matter of public
record that RSA Security's DES Challenge II was broken in 72 hours by 
$250,000 worth of semi-custom machine, for the sake of solidity let's 
assume they used 2^55 work to break it. Now moving to a completely 
custom design, bumping up the cost to $500,000, and moving forward 7 
years, delivers ~2^70 work in 72 hours (give or take a couple orders of 
magnitude). This puts the 2^69 work well within the realm of realizable 
breaks, assuming your attackers are smallish businesses, and if your 
attackers are large businesses with substantial resources the break can 
be assumed in minutes if not seconds.

2^69 is completely breakable.
   Joe
  It's fine assuming that Moore's law will hold forever, but without 
that you can't really extrapolate a future tech curve. With *today's* 
technology, you would have to spend an appreciable fraction of the 
national budget to get a one-per-year break, not that anything that 
has been hashed with SHA-1 can be considered breakable (but that would 
allow you to (for example) forge a digital signature given an example).
  This of course assumes that the break doesn't match the criteria 
from the previous breaks by the same team - i.e., that you *can* create a 
collision, but you have little or no control over the plaintext for the 
colliding elements - there is no way to know, as the paper hasn't been 
published yet.



Re: SHA1 broken?

2005-02-17 Thread Roland Dowdeswell
On 1108637369 seconds since the Beginning of the UNIX epoch
Dave Howe wrote:


   Its fine assuming that moore's law will hold forever, but without 
that you can't really extrapolate a future tech curve. with *todays* 
technology, you would have to spend an appreciable fraction of the 
national budget to get a one-per-year break, not that anything that 
has been hashed with sha-1 can be considered breakable (but that would 
allow you to (for example) forge a digital signature given an example)

I think that it is generally prudent to make the most ``conservative''
assumption with regards to Moore's Law in any given context.  I.e.
bet that it will continue when determining how easy your security
is to brute force, and assume that it will not when writing code.

--
Roland Dowdeswell  http://www.Imrryr.ORG/~elric/



Re: [p2p-hackers] SHA1 broken?

2005-02-17 Thread R.A. Hettinga

--- begin forwarded text


Delivered-To: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: [p2p-hackers] SHA1 broken?
Date: Thu, 17 Feb 2005 14:25:36 -0800 (PST)
From: [EMAIL PROTECTED] (Hal Finney)
Reply-To: Peer-to-peer development. [EMAIL PROTECTED]
Sender: [EMAIL PROTECTED]

The problem with the attack scenario where two versions of a program are
created with the same hash, is that from what little we know of the new
attacks, they aren't powerful enough to do this.

All of the collisions they have shown have the property where the two
alternatives start with the same initial value for the hash; they then
have one or two blocks which are very carefully selected, with a few
bits differing between the two blocks; and at the end, they are back
to a common value for the hash.

It is known that their techniques are not sensitive to this initial value.
They actually made a mistake when they published their MD5 collision,
because they had the wrong initial values due to a typo in Schneier's
book.  When people gave them the correct initial values, they were able
to come up with new collisions within a matter of hours.

If you look at their MD5 collision in detail, it was two blocks long.
Each block was almost the same as the other, with just a few bits
different.  They start with the common initial value.  Then they run
the first blocks through.  Amazingly, this has only a small impact on
the intermediate value after this first block.  Only a relatively few
bits are different.

If you or I tried to take two blocks with a few bits different and feed
them to MD5, we would get totally different outputs.  Changing even
one bit will normally change half the output bits.  The fact that they
are able to change several bits and get only a small difference in the
output is the first miracle.

But then they do an even better trick.  They now go on and do the
second pair of blocks.  The initial values for these blocks (which are
the outputs from the previous stage) are close but not quite the same.
And amazingly, these second blocks not only keep things from getting
worse, they manage to heal the differences.  They precisely compensate
for the changes and bring the values back together.  This is the second
miracle and it is even greater.
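
As a structural sketch (mine, not Hal's): MD5 and SHA-1 are Merkle-Damgard 
constructions, so the two-block collisions described above come down to 
block pairs whose compression outputs first diverge by a few bits and then 
reconverge. Schematically, with compress() standing in for the real 
compression function (no real colliding values are shown here):

import hashlib

def compress(state: bytes, block: bytes) -> bytes:
    # Stand-in mixing step, NOT the real MD5/SHA-1 compression function.
    return hashlib.md5(state + block).digest()

def md_hash(iv: bytes, blocks) -> bytes:
    # Merkle-Damgard chaining: each block updates the running state.
    state = iv
    for block in blocks:
        state = compress(state, block)
    return state

# The published collisions work like this, with M1/M1' and M2/M2' being the
# carefully chosen block pairs:
#   state1  = compress(IV, M1 )   differs from
#   state1' = compress(IV, M1')   in only a few bits, and then
#   state2  = compress(state1,  M2 )  equals
#   state2' = compress(state1', M2')  exactly.
# From that point on, any identical suffix keeps the two hashes equal.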

Now, it would be a big leap from this to being able to take two arbitrary
different initial values and bring them together to a common output.
That is what would be necessary to mount the code fraud attack.  But as
we can see by inspection of the collisions produced by the researchers
(who are keeping their methodology secret for now), they don't seem to
have that power.  Instead, they are able to introduce a very carefully
controlled difference between the two blocks, and then cancel it.
Being able to cancel a huge difference between blocks would be a problem
of an entirely different magnitude.

Now, there is this other idea which Zooko alludes to, from Dan Kaminsky,
www.doxpara.com, which could exploit the power of the new attacks to
do something malicious.  Let us grant that the only ability we have is
that we can create slightly different pairs of blocks that collide.
We can't meaningfully control the contents of these blocks, and they
will differ in only a few bits.  And these blocks have to be inserted
into a program being distributed, which will have two versions that
are *exactly the same* except for the few bits of difference between
the blocks.  This way the two versions will have the same hash, and this
is the power which the current attacks seem to have.

Kaminsky shows that you could still have good and bad versions of
such a program.  You'd have to write a program which tested a bit in
the colliding blocks, and behaved good if the bit was set, and bad
if the bit was clear.  When someone reviewed this program, they'd see
the potential bad behavior, but they'd also see that the behavior was
not enabled because the bit that enabled it was not set.  Maybe the
bad behavior could be a back door used during debugging, and there is
some flag bit that turns off the debugging mode.  So the reviewer might
assume that the program was OK despite this somewhat questionable code,
because he builds it and makes sure to sign or validate the hash when
built in the mode when the bad features are turned off.
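
A minimal sketch of the program shape Kaminsky describes (my illustration, 
not his code; the embedded block below is a placeholder standing in for one 
member of a hypothetical colliding pair):

COLLIDING_BLOCK = bytes(64)        # reviewer's copy: the flag bit is clear

def backdoor_enabled() -> bool:
    # Behaviour keys off a single bit buried in the "random-looking" data.
    return bool(COLLIDING_BLOCK[0] & 0x01)

if __name__ == "__main__":
    if backdoor_enabled():
        print("debug back door active")    # the build that gets shipped
    else:
        print("normal operation")          # the build the reviewer approves

Swapping in the other block of the pair flips that bit without changing the 
file's hash, which is exactly the substitution described next.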

But what he doesn't know is, Kaminsky has another block of data prepared
which has that flag bit in the opposite state, and which he can substitute
without changing the hash.  That will cause the program to behave in its
bad mode, even though the only change was a few bits in this block
of random data.  So this way he can distribute a malicious build and it
has the hash which was approved by the reviewer.

And as Zooko points out, this doesn't have to be the main developer
who is doing this, anyone who is doing some work on creating the final
package might be able to do so.

On the other hand, this attack is pretty blatant once you know it is
possible.  The lesson is that a reviewer



Re: [p2p-hackers] SHA1 broken?

2005-02-17 Thread Eric Murray

On Wed, Feb 16, 2005 at 07:55:15AM -0500, R.A. Hettinga wrote:
 From: Serguei Osokine [EMAIL PROTECTED]
 To: Peer-to-peer development. [EMAIL PROTECTED]
 Subject: RE: [p2p-hackers] SHA1 broken?
 Date: Wed, 16 Feb 2005 00:11:07 -0800
 
 Okay, so the effective SHA-1 length is 138 bits instead of full
 160 - so what's the big deal? It is still way more than, say, MD5

In applications where collisions are important, SHA1 is now
effectively 69 bits as opposed to 80.

That's not very much, and odds are there will be an improvement on
this attack in the near future. 
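
The arithmetic behind those numbers, for reference (my gloss, not Eric's): a 
brute-force collision on an n-bit hash takes about 2^(n/2) work by the 
birthday bound, so SHA-1's 160 bits gave 2^80, and the reported 2^69 attack 
is what the "effective length of 138 bits" quoted above corresponds to. At 
an assumed rate of 10^9 hash operations per second (illustrative only):

rate = 1e9                      # assumed hash operations per second
seconds_per_year = 3.156e7
for label, work in (("birthday bound, 2^80", 2 ** 80),
                    ("reported attack, 2^69", 2 ** 69)):
    years = work / rate / seconds_per_year
    print("%s: about %.0f machine-years" % (label, years))
# 2^80 -> ~38,000,000 machine-years; 2^69 -> ~19,000 machine-years.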

Eric




Re: SHA1 broken?

2005-02-17 Thread James A. Donald
--
 There is however a huge problem with replacing SHA-1 by something
 else from now to tomorrow: other algorithms are not as well
 analyzed and compared against SHA-1 as, for example, AES is to
 DES; so there is no immediate successor of SHA-1 which we can
 be sure will withstand the possible new techniques.
 Second, SHA-1 is tightly integrated in many protocols without
 a fallback algorithm (OpenPGP: fingerprints, MDC, default
 signature algorithm and more).

They reduced the break time of SHA1 from 2^80 to 2^69.

Presumably they will succeed in reducing the break time of
SHA256 from 2^128 to a mere 2^109 or so.

So SHA256 should be OK.

2^69 is damn near unbreakable.  2^80 is really unbreakable. 

--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 IQqit8pqSokARYxy1xVLrTaVRSKMAGvz2MXbQqXi
 4DAQZgw0sbP3OcD3kgO+x7f+VfsPD4E8EBsB96d/D




Re: SHA1 broken?

2005-02-17 Thread R.A. Hettinga

--- begin forwarded text


Date: Wed, 16 Feb 2005 11:13:23 -0500 (EST)
From: Atom Smasher [EMAIL PROTECTED]
OpenPGP: id=0xB88D52E4D9F57808; algo=1 (RSA); size=4096;
url=http://atom.smasher.org/pgp.txt
To: [EMAIL PROTECTED]
Subject: Re: SHA1 broken?
Sender: [EMAIL PROTECTED]

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On Wed, 16 Feb 2005, David Shaw wrote:

 In terms of GnuPG: it's up to you whether you want to switch hashes or
 not.  GnuPG supports all of the SHA-2 hashes, so they are at least
 available.  Be careful you don't run up against compatibility problems:
 PGP doesn't support 384 or 512, and only recently started supporting
 256.  GnuPG before 1.2.2 (2003-05-01), doesn't have any of the new
 hashes.  Finally, if you have a DSA signing key (most people do) you are
 required to use either SHA-1 or RIPEMD/160.  RSA signing keys can use
 any hash.


there's more to it than that. openPGP specifies SHA-1 (and nothing else)
as the hash used to generate key fingerprints, and that is what key IDs are
derived from.

a real threat if this can be extended into a practical attack is
substituting a key with a *different* key having the same ID and
fingerprint. it would be difficult for average users (and impossible for
the current openPGP infrastructure) to tell bob's key from mallory's key
that claims to be bob's.

it can also be used (if the attack becomes practical) to forge key
signatures. mallory can create a bogus key and sign it with anyone's
real key. this would turn the web of trust into dust.

the openPGP spec seemed to have assumed that SHA-1 just wouldn't fail.
ever. this was the same mistake made in the original version of pgp that
relied on md5. the spec needs to allow a choice of hash algorithms for
fingerprints and key IDs, or else we'll play this game every time someone
breaks a strong hash algorithm.
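
For reference, this is roughly how tightly the fingerprint is bound to SHA-1 
(a sketch based on the OpenPGP v4 fingerprint definition in RFC 2440, not 
code from this thread):

import hashlib

def v4_fingerprint(pubkey_packet_body: bytes) -> str:
    # v4 fingerprint: SHA-1 over 0x99, a two-octet length, then the
    # public-key packet body -- colliding bodies give colliding fingerprints.
    prefix = b"\x99" + len(pubkey_packet_body).to_bytes(2, "big")
    return hashlib.sha1(prefix + pubkey_packet_body).hexdigest().upper()

def key_id(pubkey_packet_body: bytes) -> str:
    # The 64-bit key ID is just the low 8 octets of the same fingerprint,
    # so it offers no independent check.
    return v4_fingerprint(pubkey_packet_body)[-16:]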


- --
 ...atom

  _
  PGP key - http://atom.smasher.org/pgp.txt
  762A 3B98 A3C3 96C9 C6B7 582A B88D 52E4 D9F5 7808
  -

Any sufficiently advanced technology
 is indistinguishable from magic.
-- Arthur C. Clarke

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.0 (FreeBSD)
Comment: What is this gibberish?
Comment: http://atom.smasher.org/links/#digital_signatures

iQEcBAEBCAAGBQJCE3EoAAoJEAx/d+cTpVcinwsIAKnjw1AqwY0guPtdxMagoZC2
Rv7mCZt3QnpH4uEaWNLh5R3VImVwOBevW9VdYm+UdMwdmodD79Bc0MyPOaHDuUiP
okmo0PigWIht2vGWK7F6xLtUwLUlGyuAWO5w8g/hNCt0ftdb1jUam0wQtqnTTarM
B1kyTWU0sHsjyloSh0umQ8kC0nt9nNhLIasp84oIo+D3b0r6yKIWjMS7dHr1hIbx
2gXBdVw01HJng/BtF/THfZwAD2IE+OLNPg4Q6v6QnVf3BGBBPSiiD2mXrizuknA8
RevXGYgBc4plOWOlDmx2ydbRqFHe5obGMGFCk4muFh8veFhPbFxCKvfBwsawi+U=
=f0+g
-END PGP SIGNATURE-

___
Gnupg-users mailing list
[EMAIL PROTECTED]
http://lists.gnupg.org/mailman/listinfo/gnupg-users

--- end forwarded text


-- 
-
R. A. Hettinga mailto: [EMAIL PROTECTED]
The Internet Bearer Underwriting Corporation http://www.ibuc.com/
44 Farquhar Street, Boston, MA 02131 USA
... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience. -- Edward Gibbon, 'Decline and Fall of the Roman Empire'



Re: SHA1 broken?

2005-02-17 Thread Joseph Ashwood
- Original Message - 
From: James A. Donald [EMAIL PROTECTED]
Subject: Re: SHA1 broken?


2^69 is damn near unbreakable.
I believe you are incorrect in this statement. It is a matter of public 
record that RSA Security's DES Challenge II was broken in 72 hours by 
$250,000 worth of semi-custom machine; for the sake of solidity let's assume 
they used 2^55 work to break it. Now moving to a completely custom design, 
bumping up the cost to $500,000, and moving forward 7 years delivers ~2^70 
work in 72 hours (give or take a couple orders of magnitude). This puts the 
2^69 work well within the realm of realizable breaks, assuming your 
attackers are smallish businesses; and if your attackers are large 
businesses with substantial resources, the break can be assumed in minutes if 
not seconds.

2^69 is completely breakable.
   Joe 



Re: SHA1 broken?

2005-02-17 Thread Roland Dowdeswell
On 1108637369 seconds since the Beginning of the UNIX epoch
Dave Howe wrote:


   It's fine assuming that moore's law will hold forever, but without 
that you can't really extrapolate a future tech curve. with *today's* 
technology, you would have to spend an appreciable fraction of the 
national budget to get a one-per-year break; that's not to say that 
everything hashed with sha-1 would then be breakable, but it would 
allow you to (for example) forge a digital signature, given an example.

I think that it is generally prudent to make the most ``conservative''
assumption with regards to Moore's Law in any given context.  I.e.
bet that it will continue when determining how easy your security
is to brute force, and assume that it will not when writing code.

--
Roland Dowdeswell  http://www.Imrryr.ORG/~elric/



Re: [p2p-hackers] SHA1 broken?

2005-02-16 Thread R.A. Hettinga

--- begin forwarded text


Delivered-To: [EMAIL PROTECTED]
Date: Wed, 16 Feb 2005 01:10:13 -0800
From: Gordon Mohr (@ Bitzi) [EMAIL PROTECTED]
User-Agent: Mozilla Thunderbird 1.0 (X11/20041206)
To: Peer-to-peer development. [EMAIL PROTECTED]
Subject: Re: [p2p-hackers] SHA1 broken?
Reply-To: Peer-to-peer development. [EMAIL PROTECTED]
Sender: [EMAIL PROTECTED]

Serguei Osokine wrote:
#   * collisions in the full SHA-1 in 2**69 hash operations,
# much less than the brute-force attack of 2**80 operations...


 Okay, so the effective SHA-1 length is 138 bits instead of full
 160 - so what's the big deal?

If the results hold up:

SHA1 is not as strong as it was designed to be, and its effective
strength is being pushed in the wrong direction by new research,
rather than being confirmed.

Even while maintaining that SHA1 was unbroken and likely to
remain so just last week, NIST was still recommending that SHA1 be
phased out of government use by 2010:

   http://www.fcw.com/fcw/articles/2005/0207/web-hash-02-07-05.asp

One more paper from a group of precocious researchers anywhere in
the world, or an unpublished result exploited in secret, could topple
SHA1 from practical use entirely. Of course, that's remotely possible
with any hash, but the pattern of recent results suggests that a
further break is now more likely with SHA1 (and related hashes) than
with others.

So the big deal would be: don't rely on SHA1 in any applications
you intend to have a long effective life.

 It is still way more than, say, MD5
 length. And MD5 is still widely used for stuff like content id'ing
 in various systems, because even 128 bits is quite a lot, never
 mind 138 bits.

Just because it's widely used doesn't mean it's a good idea.

MD5 should not be used for content identification, given the ability
to create content pairs with the same MD5, with one version being
(and appearing and acquiring a reputation for being) innocuous, and
the other version malicious.

- Gordon @ Bitzi
___
p2p-hackers mailing list
[EMAIL PROTECTED]
http://zgp.org/mailman/listinfo/p2p-hackers
___
Here is a web page listing P2P Conferences:
http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences

--- end forwarded text


-- 
-
R. A. Hettinga mailto: [EMAIL PROTECTED]
The Internet Bearer Underwriting Corporation http://www.ibuc.com/
44 Farquhar Street, Boston, MA 02131 USA
... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience. -- Edward Gibbon, 'Decline and Fall of the Roman Empire'



RE: [p2p-hackers] SHA1 broken?

2005-02-16 Thread R.A. Hettinga

--- begin forwarded text


Delivered-To: [EMAIL PROTECTED]
From: Serguei Osokine [EMAIL PROTECTED]
To: Peer-to-peer development. [EMAIL PROTECTED]
Subject: RE: [p2p-hackers] SHA1 broken?
Date: Wed, 16 Feb 2005 00:11:07 -0800
Reply-To: [EMAIL PROTECTED],
Peer-to-peer development. [EMAIL PROTECTED]
Sender: [EMAIL PROTECTED]

 #   * collisions in the full SHA-1 in 2**69 hash operations,
 # much less than the brute-force attack of 2**80 operations...

Okay, so the effective SHA-1 length is 138 bits instead of full
160 - so what's the big deal? It is still way more than, say, MD5
length. And MD5 is still widely used for stuff like content id'ing
in various systems, because even 128 bits is quite a lot, never
mind 138 bits.

Best wishes -
S.Osokine.
16 Feb 2005.

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Gordon Mohr (@ Bitzi)
Sent: Tuesday, February 15, 2005 9:41 PM
To: p2p-hackers
Subject: [p2p-hackers] SHA1 broken?


Via Slashdot, as reported by Bruce Schneier:

 http://www.schneier.com/blog/archives/2005/02/sha1_broken.html

Schneier writes:

#   SHA-1 Broken
#
# SHA-1 has been broken. Not a reduced-round version. Not a
# simplified version. The real thing.
#
# The research team of Xiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu
# (mostly from Shandong University in China) have been quietly
# circulating a paper announcing their results:
#
#   * collisions in the full SHA-1 in 2**69 hash operations,
# much less than the brute-force attack of 2**80 operations
# based on the hash length.
#
#   * collisions in SHA-0 in 2**39 operations.
#
#   * collisions in 58-round SHA-1 in 2**33 operations.
#
# This attack builds on previous attacks on SHA-0 and SHA-1, and
# is a major, major cryptanalytic result. It pretty much puts a
# bullet into SHA-1 as a hash function for digital signatures
# (although it doesn't affect applications such as HMAC where
# collisions aren't important).
#
# The paper isn't generally available yet. At this point I can't
# tell if the attack is real, but the paper looks good and this
# is a reputable research team.
#
# More details when I have them.

- Gordon @ Bitzi
___
p2p-hackers mailing list
[EMAIL PROTECTED]
http://zgp.org/mailman/listinfo/p2p-hackers
___
Here is a web page listing P2P Conferences:
http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences

___
p2p-hackers mailing list
[EMAIL PROTECTED]
http://zgp.org/mailman/listinfo/p2p-hackers
___
Here is a web page listing P2P Conferences:
http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences

--- end forwarded text


-- 
-
R. A. Hettinga mailto: [EMAIL PROTECTED]
The Internet Bearer Underwriting Corporation http://www.ibuc.com/
44 Farquhar Street, Boston, MA 02131 USA
... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience. -- Edward Gibbon, 'Decline and Fall of the Roman Empire'



Re: [p2p-hackers] SHA1 broken?

2005-02-16 Thread Eric Murray

On Wed, Feb 16, 2005 at 07:55:15AM -0500, R.A. Hettinga wrote:
 From: Serguei Osokine [EMAIL PROTECTED]
 To: Peer-to-peer development. [EMAIL PROTECTED]
 Subject: RE: [p2p-hackers] SHA1 broken?
 Date: Wed, 16 Feb 2005 00:11:07 -0800
 
 Okay, so the effective SHA-1 length is 138 bits instead of full
 160 - so what's the big deal? It is still way more than, say, MD5

In applications where collisions are important, SHA1 is now
effectively 69 bits as opposed to 80.

That's not very much, and odds are there will be an improvement on
this attack in the near future. 

Eric
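
Eric's 69-versus-80 figure is just the birthday bound: a generic
collision search against an n-bit hash costs about 2^(n/2) operations,
so 160-bit SHA-1 was designed for roughly 2^80 collision work, and an
attack at 2^69 is a shortcut of about 2^11 (a factor of ~2000). A
minimal sketch of that arithmetic, using only the numbers quoted in
the thread:

    import math

    n_bits = 160                              # SHA-1 output size
    generic_collision = 2 ** (n_bits // 2)    # birthday bound: ~2^80 hash operations
    reported_attack = 2 ** 69                 # Wang/Yin/Yu figure quoted in the thread

    speedup = generic_collision // reported_attack
    print(f"generic collision search: 2^{n_bits // 2} operations")
    print(f"reported attack         : 2^69 operations")
    print(f"shortcut factor         : 2^{int(math.log2(speedup))} (about {speedup}x)")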




Re: SHA1 broken?

2005-02-16 Thread R.A. Hettinga

--- begin forwarded text


Date: Wed, 16 Feb 2005 11:13:23 -0500 (EST)
From: Atom Smasher [EMAIL PROTECTED]
OpenPGP: id=0xB88D52E4D9F57808; algo=1 (RSA); size=4096;
url=http://atom.smasher.org/pgp.txt
To: [EMAIL PROTECTED]
Subject: Re: SHA1 broken?
Sender: [EMAIL PROTECTED]

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On Wed, 16 Feb 2005, David Shaw wrote:

 In terms of GnuPG: it's up to you whether you want to switch hashes or
 not.  GnuPG supports all of the SHA-2 hashes, so they are at least
 available.  Be careful you don't run up against compatibility problems:
 PGP doesn't support 384 or 512, and only recently started supporting
 256.  GnuPG before 1.2.2 (2003-05-01), doesn't have any of the new
 hashes.  Finally, if you have a DSA signing key (most people do) you are
 required to use either SHA-1 or RIPEMD/160.  RSA signing keys can use
 any hash.


there's more to it than that. openPGP specifies SHA-1 (and nothing else)
as the hash used to generate key fingerprints, and key IDs are derived
from the fingerprint.

a real threat, if this can be extended into a practical attack, is
substituting a key with a *different* key having the same ID and
fingerprint. it would be difficult for average users (and impossible for
the current openPGP infrastructure) to tell bob's key from mallory's key
that claims to be bob's.

it can also be used (if the attack becomes practical) to forge key
signatures. mallory can create a bogus key and sign it with anyone's
real key. this would turn the web of trust into dust.

the openPGP spec seemed to have assumed that SHA-1 just wouldn't fail.
ever. this was the same mistake made in the original version of pgp that
relied on md5. the spec needs to allow a choice of hash algorithms for
fingerprints and key IDs, or else we'll play this game every time someone
breaks a strong hash algorithm.
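
For anyone who hasn't looked at the spec: a V4 openPGP fingerprint is
literally the SHA-1 hash of the public-key packet (prefixed with the
octet 0x99 and a two-octet length), and the 64-bit key ID is just the
low-order bits of that fingerprint, which is why a SHA-1 collision
translates directly into colliding fingerprints and key IDs. A minimal
Python sketch of the derivation follows; the key material is a made-up
placeholder, not a real key.

    import hashlib

    def v4_fingerprint(pubkey_packet_body: bytes) -> str:
        # RFC 2440: fingerprint = SHA-1 over 0x99, a two-octet length,
        # and the body of the version-4 public-key packet
        data = b"\x99" + len(pubkey_packet_body).to_bytes(2, "big") + pubkey_packet_body
        return hashlib.sha1(data).hexdigest().upper()

    def key_id(fingerprint_hex: str) -> str:
        # the key ID is the low-order 64 bits (last 16 hex digits) of the fingerprint
        return fingerprint_hex[-16:]

    # placeholder bytes standing in for a real public-key packet body
    fake_packet_body = bytes.fromhex("04421a2b3c") + b"\x01" + b"\x00" * 20

    fpr = v4_fingerprint(fake_packet_body)
    print("fingerprint:", fpr)
    print("key ID     :", key_id(fpr))
    # two SHA-1-colliding packet bodies would print identical lines here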


- --
 ...atom

  _
  PGP key - http://atom.smasher.org/pgp.txt
  762A 3B98 A3C3 96C9 C6B7 582A B88D 52E4 D9F5 7808
  -

Any sufficiently advanced technology
 is indistinguishable from magic.
-- Arthur C. Clarke

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.0 (FreeBSD)
Comment: What is this gibberish?
Comment: http://atom.smasher.org/links/#digital_signatures

iQEcBAEBCAAGBQJCE3EoAAoJEAx/d+cTpVcinwsIAKnjw1AqwY0guPtdxMagoZC2
Rv7mCZt3QnpH4uEaWNLh5R3VImVwOBevW9VdYm+UdMwdmodD79Bc0MyPOaHDuUiP
okmo0PigWIht2vGWK7F6xLtUwLUlGyuAWO5w8g/hNCt0ftdb1jUam0wQtqnTTarM
B1kyTWU0sHsjyloSh0umQ8kC0nt9nNhLIasp84oIo+D3b0r6yKIWjMS7dHr1hIbx
2gXBdVw01HJng/BtF/THfZwAD2IE+OLNPg4Q6v6QnVf3BGBBPSiiD2mXrizuknA8
RevXGYgBc4plOWOlDmx2ydbRqFHe5obGMGFCk4muFh8veFhPbFxCKvfBwsawi+U=
=f0+g
-END PGP SIGNATURE-

___
Gnupg-users mailing list
[EMAIL PROTECTED]
http://lists.gnupg.org/mailman/listinfo/gnupg-users

--- end forwarded text


-- 
-
R. A. Hettinga mailto: [EMAIL PROTECTED]
The Internet Bearer Underwriting Corporation http://www.ibuc.com/
44 Farquhar Street, Boston, MA 02131 USA
... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience. -- Edward Gibbon, 'Decline and Fall of the Roman Empire'



Re: SHA1 broken?

2005-02-16 Thread R.A. Hettinga

--- begin forwarded text


To: [EMAIL PROTECTED]
From: Werner Koch [EMAIL PROTECTED]
Organisation: g10 Code GmbH
OpenPGP: id=5B0358A2; url=finger:[EMAIL PROTECTED]
Mail-Followup-To: [EMAIL PROTECTED]
Date: Wed, 16 Feb 2005 19:54:35 +0100
User-Agent: Gnus/5.1007 (Gnus v5.10.7) Emacs/21.3 (gnu/linux)
Subject: Re: SHA1 broken?
Sender: [EMAIL PROTECTED]

On Wed, 16 Feb 2005 11:57:36 -0500, David Shaw said:

 Yes it is.  Assuming this is true, we must start migrating away from
 SHA-1.  Actually, we should start this anyway - even the NIST
 recommends moving away from SHA-1 for long-term security.

The real problem with the breakthrough is that they seem to have
developed a new cryptanalytic method, and that might pave the way for
further improvements.  Over the last 2 decades the art of cryptanalysis
has changed dramatically in the area of symmetric ciphers.  This will
probably now happen to hash algorithms as well.

There is, however, a huge problem with replacing SHA-1 by something else
from now to tomorrow: first, other algorithms are not as well analyzed
and compared against SHA-1 as, for example, AES was against DES, so
there is no immediate successor to SHA-1 that we can be sure will
withstand the possible new techniques.  Second, SHA-1 is tightly
integrated into many protocols without a fallback algorithm (OpenPGP:
fingerprints, MDC, default signature algorithm and more).


Salam-Shalom,

   Werner



___
Gnupg-users mailing list
[EMAIL PROTECTED]
http://lists.gnupg.org/mailman/listinfo/gnupg-users

--- end forwarded text


-- 
-
R. A. Hettinga mailto: [EMAIL PROTECTED]
The Internet Bearer Underwriting Corporation http://www.ibuc.com/
44 Farquhar Street, Boston, MA 02131 USA
... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience. -- Edward Gibbon, 'Decline and Fall of the Roman Empire'



Re: SHA1 broken?

2005-02-16 Thread James A. Donald
--
 There is, however, a huge problem with replacing SHA-1 by something
 else from now to tomorrow: first, other algorithms are not as well
 analyzed and compared against SHA-1 as, for example, AES was against
 DES, so there is no immediate successor to SHA-1 that we can be sure
 will withstand the possible new techniques.  Second, SHA-1 is tightly
 integrated into many protocols without a fallback algorithm (OpenPGP:
 fingerprints, MDC, default signature algorithm and more).

They reduced the work to break SHA1 from 2^80 to 2^69.

Presumably they will succeed in reducing the work to break
SHA256 from 2^128 to a mere 2^109 or so.

So SHA256 should be OK.

2^69 is damn near unbreakable.  2^80 is really unbreakable. 

--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 IQqit8pqSokARYxy1xVLrTaVRSKMAGvz2MXbQqXi
 4DAQZgw0sbP3OcD3kgO+x7f+VfsPD4E8EBsB96d/D
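
The "2^109 or so" guess looks like a proportional scaling of the SHA-1
result rather than a fixed 11-bit saving: 2^69 is 69/80 of SHA-1's
designed exponent, and 128 x 69/80 is about 110. A small sketch of both
readings, purely to show where a figure like that could come from (this
is an assumption about the reasoning, not something stated in the
message):

    # Two ways to map the SHA-1 result (2^80 designed, 2^69 attacked) onto
    # SHA-256 (2^128 designed); just a guess at where "2^109 or so" comes from.
    sha1_design, sha1_attack = 80, 69
    sha256_design = 128

    proportional = sha256_design * sha1_attack / sha1_design    # scale the exponent
    fixed_bits = sha256_design - (sha1_design - sha1_attack)    # knock off the same 11 bits

    print(f"proportional scaling: ~2^{proportional:.0f}")       # ~2^110
    print(f"fixed 11-bit saving : 2^{fixed_bits}")              # 2^117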




Re: SHA1 broken?

2005-02-16 Thread Joseph Ashwood
- Original Message - 
From: James A. Donald [EMAIL PROTECTED]
Subject: Re: SHA1 broken?


2^69 is damn near unbreakable.
I believe you are incorrect in this statement. It is a matter of public 
record that RSA Security's DES Challenge II was broken in 72 hours by 
$250,000 worth of semi-custom machine; to keep the argument on solid ground, 
let's assume they used 2^55 work to break it. Moving to a completely custom 
design, bumping the cost up to $500,000, and moving forward 7 years delivers 
~2^70 work in 72 hours (give or take a couple of orders of magnitude). This 
puts the 2^69 work well within the realm of realizable breaks, assuming your 
attackers are smallish businesses; if your attackers are large businesses 
with substantial resources, the break can be assumed in minutes if not 
seconds.

2^69 is completely breakable.
   Joe
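
Ashwood's figure is a Moore's-law extrapolation from the 1998 DES
Challenge machine. The sketch below redoes that arithmetic with every
parameter spelled out; the message itself only fixes the ~2^55 baseline,
the 72-hour window, the doubling of the budget and the 7-year gap, so
the doubling period and any full-custom efficiency gain are assumptions
here, and the result swings by several powers of two depending on what
you pick, which is presumably why he allows himself a couple of orders
of magnitude of slack.

    import math

    # Rough re-run of Ashwood's extrapolation. All parameters are assumptions
    # for illustration except the 2^55 baseline, the 2x budget increase and
    # the 7-year gap, which come from the message above.
    def work_bits(baseline_bits=55, years=7, doubling_months=18,
                  budget_factor=2.0, fullcustom_gain=1.0):
        """Return log2 of the work deliverable in the same 72-hour window."""
        moore = years * 12 / doubling_months          # number of hardware doublings
        return baseline_bits + moore + math.log2(budget_factor * fullcustom_gain)

    print(f"Moore only (18-month doubling)      : ~2^{work_bits():.0f}")
    print(f"12-month doubling                   : ~2^{work_bits(doubling_months=12):.0f}")
    print(f"12-month doubling + 32x full custom : ~2^{work_bits(doubling_months=12, fullcustom_gain=32):.0f}")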