Re: [p2p-hackers] SHA1 broken?
> --- begin forwarded text
>
> To: [EMAIL PROTECTED]
> Subject: Re: [p2p-hackers] SHA1 broken?
> Date: Thu, 17 Feb 2005 14:25:36 -0800 (PST)
> From: [EMAIL PROTECTED] ("Hal Finney")
[...]
> Now, it would be a big leap from this to being able to take two arbitrary
> different initial values and bring them together to a common output.
> That is what would be necessary to mount the code fraud attack. But as
> we can see by inspection of the collisions produced by the researchers
> (who are keeping their methodology secret for now), they don't seem to
> have that power. Instead, they are able to introduce a very carefully
> controlled difference between the two blocks, and then cancel it.
> Being able to cancel a huge difference between blocks would be a problem
> of an entirely different magnitude.
>
> Now, there is this other idea which Zooko alludes to, from Dan Kaminsky,
> www.doxpara.com, which could exploit the power of the new attacks to
> do something malicious. Let us grant that the only ability we have is
> that we can create slightly different pairs of blocks that collide.
> We can't meaningfully control the contents of these blocks, and they
> will differ in only a few bits. And these blocks have to be inserted
> into a program being distributed, which will have two versions that
> are *exactly the same* except for the few bits of difference between
> the blocks. This way the two versions will have the same hash, and
> this is the power which the current attacks seem to have.
>
> Kaminsky shows that you could still have "good" and "bad" versions of
> such a program. You'd have to write a program which tested a bit in
> the colliding blocks, and behaved "good" if the bit was set, and "bad"
> if the bit was clear. When someone reviewed this program, they'd see
> the potential bad behavior, but they'd also see that the behavior was
> not enabled because the bit that enabled it was not set.
I would have to argue that a single bit could indeed constitute enough of
a difference to represent a security vulnerability. A number of security
issues have occurred due to an underlying integer-wrap vulnerability;
what if the single bit that was changed was the MSB of a length field in
an often-unused code path, which, when properly manipulated, resulted in
a controllable heap overwrite? The difference would be subtle, only a
single bit, which fits with the attack.

Exploitable examples, where flipping the MSB is enough to allow an
attack, are known in the computer security field. One doesn't need to
include the entire back door to cause a problem.

Jacob
--
Jacob Langseth <[EMAIL PROTECTED]>

- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
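The single-bit flip Jacob describes can be sketched concretely. The buggy bounds check below is a hypothetical illustration (not from any real codebase, and not from the original message): a C-style signed comparison lets an MSB-set 32-bit length slip through, while the subsequent copy would treat it as an enormous unsigned count.

```python
BUF_SIZE = 4096

def to_signed32(x: int) -> int:
    """Reinterpret a 32-bit value as signed, as a C `(int)` cast would."""
    return x - (1 << 32) if x & 0x80000000 else x

def bounds_check_passes(length: int) -> bool:
    # Buggy C-style check: `if ((int)length > BUF_SIZE) reject;`
    # An MSB-set length is negative when viewed as signed, so it passes.
    return to_signed32(length) <= BUF_SIZE

benign_len = 0x00000100               # 256 bytes: legitimately fine
evil_len = benign_len | 0x80000000    # same field with only the MSB flipped

assert bounds_check_passes(benign_len)
assert bounds_check_passes(evil_len)  # the bug: the check passes anyway
assert evil_len > BUF_SIZE            # but an unsigned copy would smash the heap
print("MSB flip defeats the signed bounds check")
```

In real code the copy following such a check would use the raw unsigned value, so the one-bit difference is all an attacker needs.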
Re: SHA1 broken?
- Original Message -
From: "Dave Howe" <[EMAIL PROTECTED]>
Subject: Re: SHA1 broken?

> Indeed so. however, the argument "in 1998, a FPGA machine broke a DES
> key in 72 hours, therefore TODAY..." assumes that (a) the problems are
> comparable, and (b) that moores law has been applied to FPGAs as well
> as CPUs.

That misreads my statements and misses a very large portion where I
specifically stated that the new machine would need to be custom instead
of semi-custom. The proposed system was not based on FPGAs; instead it
would need to be based on ASICs engineered using modern technology, much
more along the lines of a DSP. The primary gains available are actually
from the larger wafers in use now, along with the transistor shrinkage.
Combined, these have approximately kept the cost in line with Moore's
law, and the benefits of custom engineering account for the rest.

As for the exact details of how I did the calculations: I assumed
Moore's law for speed, and an additional 4x improvement from custom
chips instead of off-the-shelf. To verify the calculations I also redid
them assuming DSPs which should be capable of processing the data
(specifically from TI); I came to a cost within a couple orders of
magnitude, although the power consumption would be substantially higher.

Joe
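Ashwood's stated assumptions - Moore's law for speed plus a 4x gain from full-custom silicon - can be tabulated in a quick back-of-the-envelope script. The 18-month doubling period and the 2x budget factor are my assumptions; the result lands a couple of orders of magnitude under the ~2^70 claimed elsewhere in the thread, which is consistent with his own "give or take a couple orders of magnitude" caveat.

```python
import math

# Scaling the 1998 DES Challenge II result forward, per Ashwood's assumptions.
base_work_log2 = 55        # ~2^55 DES operations in 72 hours, 1998
years = 7                  # 1998 -> 2005
doubling_months = 18       # one common reading of Moore's law (my assumption)
budget_factor = 2          # $250,000 machine -> $500,000 machine
custom_factor = 4          # Ashwood's assumed gain from full-custom ASICs

speedup = 2 ** (years * 12 / doubling_months) * budget_factor * custom_factor
total_log2 = base_work_log2 + math.log2(speedup)
print(f"~2^{total_log2:.1f} work in the same 72 hours")  # ~2^62.7
```

Changing the doubling period to 12 months only buys about 2.3 more bits, so the gap between this estimate and 2^69 has to come from the hardware-specific factors the thread argues about.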
Re: SHA1 broken?
Eugen Leitl wrote:
> On Sat, Feb 19, 2005 at 03:53:53PM +, Dave Howe wrote:
>> I wasn't aware that FPGA technology had improved that much if any -
>> feel free to correct my misapprehension in that area though :)
> FPGAs are too slow (and too expensive); if you want lots of SHA-1
> performance, use a crypto processor (or lots of forthcoming C5J
> mini-ITX boards), or an ASIC. Assuming fast SHA-1 computation is the
> basis for the attack -- we do not know that.

Indeed so. However, the argument "in 1998, a FPGA machine broke a DES
key in 72 hours, therefore TODAY..." assumes that (a) the problems are
comparable, and (b) that Moore's law has been applied to FPGAs as well
as CPUs. I am unaware of any massive improvement (certainly to the scale
of the comparable improvement in CPUs) in FPGAs, and the ones I looked
at a few days ago while researching this question seemed to have pretty
much the same spec sheet as the ones I looked at back then. However, I
am not a gate array techie, and most of my experience with them has been
small (two-three chip) devices at very long intervals, purely for my own
interest.

It is possible there has been a quantum leap forward in FPGA tech, or
some substitute tech that can perform massively parallel calculations,
on larger block sizes and hence more operations, at a noticeably faster
rate than the DES cracker could back then. Schneier apparently believes
there has been - but he is simply applying Moore's law to the machine
from back then, and that may not be true unless he knows something I
don't (I assume he knows lots of things I don't, but of course he may
not have thought this one through :)
Re: SHA1 broken?
Joseph Ashwood wrote:
> I believe you substantially misunderstood my statements, 2^69 work is
> doable _now_. 2^55 work was performed in 72 hours in 1998; scaling
> forward the 7 years to the present (and hence through known data)
> leads to a situation where the 2^69 work is achievable today in a
> reasonable timeframe (3 days), assuming reasonable quantities of
> available money ($500,000US). There is no guessing about what the
> future holds for this, the 2^69 work is NOW.

I wasn't aware that FPGA technology had improved that much if any -
feel free to correct my misapprehension in that area though :)
Re: SHA1 broken?
- Original Message -
From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Sent: Friday, February 18, 2005 3:11 AM

[the attack is reasonable]

Reading through the summary I found a bit of information that means my
estimates of workload have to be re-evaluated. Page 1: "Based on our
estimation, we expect that real collisions of SHA1 reduced to 70-steps
can be found using todays supercomputers."

This is a very important statement for estimating the real workload.
Assuming there is an implicit "in one year" in there, and assuming
BlueGene (Top 500 list, slot 1), this represents 22937.6 GHz*years, or
slightly over 2^69 clock cycles (I am obviously still using gigahertz
because the information gives us nothing better to work from). This
clearly indicates that the operations used for the workload span
multiple processor clocks. Performing a gross estimation based on pure
guesswork, I'm guessing that my numbers are actually off by a factor of
between 50 and 500; this factor will likely work cleanly into adjusting
either the timeframe or the production cost.

My suggestion, though - to make a switch away from SHA-1 as soon as
reasonable, and to prepare to switch hashes very quickly in the future -
remains the same. The march of processor progress is not going to halt,
and the advance of cryptographic attacks will not halt, which will
inevitably squeeze SHA-1 to broken. I would actually argue that the 2^80
strength it should have is enough to begin its retirement; 2^80 has been
"strong enough" for a decade in spite of the march of technology. Under
the processor speed enhancements that have happened over the last decade
we should have increased the keylength already, to accommodate dual-core
chips running at 20 times the speed for a total of 40 times the prior
speed (I was going to use SPEC data for a better calculation but I
couldn't immediately find specs for a Pentium Pro 200), by adding at
least 5 bits, preferably 8, to our necessary protection profile.
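The 22937.6 GHz*years figure can be reproduced if one assumes the November 2004 Top500 #1 entry, BlueGene/L, with 32768 processors at 700 MHz. Those machine details are my assumption - Ashwood names only the list slot - but they make the arithmetic line up with his "slightly over 2^69 clock cycles":

```python
import math

cpus = 32768               # assumed BlueGene/L processor count
clock_ghz = 0.7            # assumed 700 MHz per-processor clock
seconds_per_year = 365.25 * 24 * 3600

ghz_years = cpus * clock_ghz               # 22937.6 GHz*years, matching the text
cycles = ghz_years * 1e9 * seconds_per_year
print(f"{ghz_years} GHz*years over one year = 2^{math.log2(cycles):.1f} cycles")
```

Note this counts raw clock cycles, not hash compressions, which is exactly why Ashwood concludes the per-operation workload spans multiple clocks.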
Joe
Re: SHA1 broken?
- Original Message -
From: "Dave Howe" <[EMAIL PROTECTED]>
Sent: Thursday, February 17, 2005 2:49 AM
Subject: Re: SHA1 broken?

> Joseph Ashwood wrote:
>> I believe you are incorrect in this statement. It is a matter of
>> public record that RSA Security's DES Challenge II was broken in 72
>> hours by $250,000 worth of semi-custom machine; for the sake of
>> solidity let's assume they used 2^55 work to break it. Now moving to
>> a completely custom design, bumping up the cost to $500,000, and
>> moving forward 7 years, delivers ~2^70 work in 72 hours (give or
>> take a couple orders of magnitude). This puts the 2^69 work well
>> within the realm of realizable breaks, assuming your attackers are
>> smallish businesses, and if your attackers are large businesses with
>> substantial resources the break can be assumed in minutes if not
>> seconds. 2^69 is completely breakable.
>> Joe
>
> It's fine assuming that Moore's law will hold forever, but without
> that you can't really extrapolate a future tech curve. With *today's*
> technology, you would have to spend an appreciable fraction of the
> national budget to get a one-per-year "break" - not that anything that
> has been hashed with SHA-1 can be considered breakable (but that would
> allow you to (for example) forge a digital signature given an
> example). This of course assumes that the "break" doesn't match the
> criteria from the previous breaks by the same team - ie, that you
> *can* create a collision, but you have little or no control over the
> plaintext for the colliding elements - there is no way to know as the
> paper hasn't been published yet.

I believe you substantially misunderstood my statements, 2^69 work is
doable _now_. 2^55 work was performed in 72 hours in 1998; scaling
forward the 7 years to the present (and hence through known data) leads
to a situation where the 2^69 work is achievable today in a reasonable
timeframe (3 days), assuming reasonable quantities of available money
($500,000US).
There is no guessing about what the future holds for this, the 2^69 work
is NOW.

- Original Message -
From: "Trei, Peter" <[EMAIL PROTECTED]>
To: "Dave Howe" <[EMAIL PROTECTED]>; "Cypherpunks" <[EMAIL PROTECTED]>; "Cryptography"

> Actually, the final challenge was solved in 23 hours, about 1/3 Deep
> Crack, and 2/3 Distributed.net. They were lucky, finding the key after
> only 24% of the keyspace had been searched. More recently, RC5-64 was
> solved about a year ago. It took d.net 4 *years*. 2^69 remains
> non-trivial.

What you're missing in this is that Deep Crack was already a year old at
the time it was used for this; I was assuming that the most recent
technologies would be used, so the 1998 point for Deep Crack was the
critical point.

Also, if you check the real statistics for RC5-64 you will find that
Distributed.net suffered from a major lack of optimization on the
workhorse of the DES cracking effort (the DEC Alpha processor), even to
the point where running the x86 code in emulation was faster than the
native code. Since an Alpha processor had been the breaking force for
DES Challenge I, and a factor of > 1/3 for III, this crippled the
performance, resulting in the Alphas running at only ~2% of their
optimal speed while the x86 systems were running at only about 50%.
Based on just this, 2^64 should have taken only 1.5 years. Additionally,
add in that virtually the entire Alpha community pulled out because we
had better things to do with our processors (e.g., IIRC the same systems
rendered Titanic) and Distributed.net was effectively sucked dry of
workhorse systems, so a timeframe of 4-6 months is more likely, without
any custom hardware and with rather sad software optimization.

Assuming that the new attacks can be pipelined (the biggest problem with
the RC5-64 optimizations was pipeline breaking), it is entirely possible
to use modern technology along with a GaAs substrate to generate chips
in the 10-20 GHz range, or about 10x the speed available to
Distributed.net.
Add targeted hardware to the mix, deep pipelining, and massive
multiprocessing, and my numbers still hold, give or take a few orders of
magnitude (the 8% of III done by Deep Crack in 23 hours is only a little
over 2 orders of magnitude off, so within acceptable bounds). 2^69 is
achievable; it may not be pretty, and it certainly isn't kind to the
security of the vast majority of "secure" infrastructure, but it is
achievable, and while the cost bounds may have to be shifted, that is
achievable as well.

It is still my view that everyone needs to keep a close eye on their
hashes and make sure the numbers add up correctly; it is simply my view
now that SHA-1 needs to be put out to pasture, and the rest of the SHA
line needs to be heavily reconsidered because of their close relation to
SHA-1. The biggest unknown surrounding this i
RE: SHA1 broken?
Actually, the final challenge was solved in 23 hours, about 1/3 Deep
Crack, and 2/3 Distributed.net. They were lucky, finding the key after
only 24% of the keyspace had been searched. More recently, RC5-64 was
solved about a year ago. It took d.net 4 *years*. 2^69 remains
non-trivial.

Peter

-----Original Message-----
From: [EMAIL PROTECTED] on behalf of Dave Howe
Sent: Thu 2/17/2005 5:49 AM
To: Cypherpunks; Cryptography
Subject: Re: SHA1 broken?

Joseph Ashwood wrote:
> I believe you are incorrect in this statement. It is a matter of public
> record that RSA Security's DES Challenge II was broken in 72 hours by
> $250,000 worth of semi-custom machine, for the sake of solidity let's
> assume they used 2^55 work to break it. Now moving to a completely
> custom design, bumping up the cost to $500,000, and moving forward 7
> years, delivers ~2^70 work in 72 hours (give or take a couple orders of
> magnitude). This puts the 2^69 work well within the realm of realizable
> breaks, assuming your attackers are smallish businesses, and if your
> attackers are large businesses with substantial resources the break can
> be assumed in minutes if not seconds.
>
> 2^69 is completely breakable.
> Joe

Its fine assuming that moore's law will hold forever, but without that
you can't really extrapolate a future tech curve. with *todays*
technology, you would have to spend an appreciable fraction of the
national budget to get a one-per-year "break", not that anything that
has been hashed with sha-1 can be considered breakable (but that would
allow you to (for example) forge a digital signature given an example)

This of course assumes that the "break" doesn't match the criteria from
the previous breaks by the same team - ie, that you *can* create a
collision, but you have little or no control over the plaintext for the
colliding elements - there is no way to know as the paper hasn't been
published yet.
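For scale, the DES Challenge III figures Trei cites imply a concrete aggregate search rate (the 2^56 single-DES keyspace size is standard; the 24%-in-23-hours numbers are from the message above):

```python
keyspace = 2 ** 56        # single-DES keyspace
fraction = 0.24           # portion searched before the key was found
hours = 23

rate = keyspace * fraction / (hours * 3600)
print(f"implied aggregate rate: ~{rate:.2e} keys/second")  # ~2.09e+11
```

Roughly 2^37.6 keys per second, combined across Deep Crack and Distributed.net - which is the baseline the thread's "scale it forward to 2^69" arguments start from.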
Re: [p2p-hackers] SHA1 broken?
--- begin forwarded text

To: [EMAIL PROTECTED]
Subject: Re: [p2p-hackers] SHA1 broken?
Date: Thu, 17 Feb 2005 14:25:36 -0800 (PST)
From: [EMAIL PROTECTED] ("Hal Finney")
Reply-To: "Peer-to-peer development." <[EMAIL PROTECTED]>
Sender: [EMAIL PROTECTED]

The problem with the attack scenario where two versions of a program are
created with the same hash is that, from what little we know of the new
attacks, they aren't powerful enough to do this. All of the collisions
they have shown have the property where the two alternatives start with
the same initial value for the hash; they then have one or two blocks
which are very carefully selected, with a few bits differing between the
two blocks; and at the end, they are back to a common value for the
hash.

It is known that their techniques are not sensitive to this initial
value. They actually made a mistake when they published their MD5
collision, because they had the wrong initial values due to a typo in
Schneier's book. When people gave them the correct initial values, they
were able to come up with new collisions within a matter of hours.

If you look at their MD5 collision in detail, it was two blocks long.
Each block was almost the same as the other, with just a few bits
different. They start with the common initial value. Then they run the
first blocks through. Amazingly, this has only a small impact on the
intermediate value after this first block. Only a relatively few bits
are different. If you or I tried to take two blocks with a few bits
different and feed them to MD5, we would get totally different outputs.
Changing even one bit will normally change half the output bits. The
fact that they are able to change several bits and get only a small
difference in the output is the first miracle.

But then they do an even better trick. They now go on and do the second
pair of blocks. The initial values for these blocks (which are the
outputs from the previous stage) are close but not quite the same.
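Finney's claim that "changing even one bit will normally change half the output bits" is easy to check with Python's hashlib (a quick illustration of the avalanche effect, not part of the original message):

```python
import hashlib

# Hash a message, flip a single input bit, hash again, and count
# how many of MD5's 128 output bits changed.
msg = bytearray(b"two almost-identical blocks of data " * 16)
h1 = hashlib.md5(bytes(msg)).digest()

msg[0] ^= 0x01                        # flip one input bit
h2 = hashlib.md5(bytes(msg)).digest()

diff_bits = sum(bin(a ^ b).count("1") for a, b in zip(h1, h2))
print(diff_bits, "of 128 output bits differ")   # typically ~64
```

This is exactly why the published collisions are remarkable: the attack steers a few-bit input difference through the compression function so that, instead of the usual ~64-bit avalanche, the difference is first contained and then cancelled.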
And amazingly, these second blocks not only keep things from getting
worse, they manage to heal the differences. They precisely compensate
for the changes and bring the values back together. This is the second
miracle and it is even greater.

Now, it would be a big leap from this to being able to take two
arbitrary different initial values and bring them together to a common
output. That is what would be necessary to mount the code fraud attack.
But as we can see by inspection of the collisions produced by the
researchers (who are keeping their methodology secret for now), they
don't seem to have that power. Instead, they are able to introduce a
very carefully controlled difference between the two blocks, and then
cancel it. Being able to cancel a huge difference between blocks would
be a problem of an entirely different magnitude.

Now, there is this other idea which Zooko alludes to, from Dan Kaminsky,
www.doxpara.com, which could exploit the power of the new attacks to do
something malicious. Let us grant that the only ability we have is that
we can create slightly different pairs of blocks that collide. We can't
meaningfully control the contents of these blocks, and they will differ
in only a few bits. And these blocks have to be inserted into a program
being distributed, which will have two versions that are *exactly the
same* except for the few bits of difference between the blocks. This way
the two versions will have the same hash, and this is the power which
the current attacks seem to have.

Kaminsky shows that you could still have "good" and "bad" versions of
such a program. You'd have to write a program which tested a bit in the
colliding blocks, and behaved "good" if the bit was set, and "bad" if
the bit was clear. When someone reviewed this program, they'd see the
potential bad behavior, but they'd also see that the behavior was not
enabled because the bit that enabled it was not set.
Maybe the bad behavior could be a back door used during debugging, and
there is some flag bit that turns off the debugging mode. So the
reviewer might assume that the program was OK despite this somewhat
questionable code, because he builds it and makes sure to sign or
validate the hash when built in the mode when the bad features are
turned off.

But what he doesn't know is, Kaminsky has another block of data prepared
which has that flag bit in the opposite state, and which he can
substitute without changing the hash. That will cause the program to
behave in its "bad" mode, even though the only change was a few bits in
this block of random data. So this way he can distribute a malicious
build and it has the hash which was approved by the reviewer. And as
Zooko points out, this doesn't have to be the main developer who is
doing this; anyone who is doing some work on creating the final package
might be able to do so.
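The good/bad switch Finney attributes to Kaminsky can be sketched in a few lines. The blocks below are plain stand-ins - producing a genuinely colliding pair is the part the new attacks supply, and this snippet does not perform it:

```python
def run_program(colliding_block: bytes) -> str:
    """The reviewed program: its behavior keys off one bit in an opaque blob."""
    debug_backdoor_on = bool(colliding_block[0] & 0x01)
    return "bad" if debug_backdoor_on else "good"

# Stand-ins for a colliding pair: identical except for the one flag bit.
# The attack's premise is that both blobs hash to the same SHA-1 value,
# so a signature over the reviewed build also covers the swapped build.
reviewed_block = bytes([0x00]) + b"opaque random-looking filler"
swapped_block  = bytes([0x01]) + b"opaque random-looking filler"

print(run_program(reviewed_block))  # "good" - the build the reviewer signed off
print(run_program(swapped_block))   # "bad"  - the build that actually ships
```

The reviewer sees the suspicious branch but verifies it is disabled in the build he hashes; the substitution happens afterward, inside data the hash can no longer distinguish.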
Re: SHA1 broken?
Joseph Ashwood wrote:
> I believe you are incorrect in this statement. It is a matter of
> public record that RSA Security's DES Challenge II was broken in 72
> hours by $250,000 worth of semi-custom machine; for the sake of
> solidity let's assume they used 2^55 work to break it. Now moving to a
> completely custom design, bumping up the cost to $500,000, and moving
> forward 7 years, delivers ~2^70 work in 72 hours (give or take a
> couple orders of magnitude). This puts the 2^69 work well within the
> realm of realizable breaks, assuming your attackers are smallish
> businesses, and if your attackers are large businesses with
> substantial resources the break can be assumed in minutes if not
> seconds.
>
> 2^69 is completely breakable.
> Joe

It's fine assuming that Moore's law will hold forever, but without that
you can't really extrapolate a future tech curve. With *today's*
technology, you would have to spend an appreciable fraction of the
national budget to get a one-per-year "break" - not that anything that
has been hashed with SHA-1 can be considered breakable (but that would
allow you to (for example) forge a digital signature given an example).

This of course assumes that the "break" doesn't match the criteria from
the previous breaks by the same team - ie, that you *can* create a
collision, but you have little or no control over the plaintext for the
colliding elements - there is no way to know as the paper hasn't been
published yet.