Re: Certainty

2009-08-25 Thread Perry E. Metzger

h...@finney.org ("Hal Finney") writes:
> Paul Hoffman wrote:
>> Getting a straight answer on whether or not the recent preimage work
>> is actually related to the earlier collision work would be useful.
[...]
> There was an amusing demo at the rump session though of a different
> kind of preimage technique which does depend directly on collisions. It
> uses the Merkle-Damgard structure of MD5 and creates lots of blocks that
> collide (possibly with different prefixes, I didn't look at it closely).
> Then they were able to show a second preimage attack on a chosen message.
>
> That is, they could create a message and have a signer sign it using MD5.
> Then they could create more messages at will that had the same MD5 hash.
> In this demo, the messages started with text that said, "Dear so-and-so"
> and then had more readable text, followed by binary data. They were able
> to change the person's name in the first line to that of a volunteer
> from the audience, then modify the binary data and create a new version
> of the message with the same MD5 hash, in just a second or two! Very
> amusing demo.

That was the "restricted preimage" attack that I earlier mentioned
seeing in the video of the rump session. It isn't fully general, but it
is certainly disturbing.

As we're often fond of saying, attacks only get better with time; they
never roll back.

Perry

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Certainty

2009-08-25 Thread "Hal Finney"
Paul Hoffman wrote:
> Getting a straight answer on whether or not the recent preimage work
> is actually related to the earlier collision work would be useful.

I am not clueful enough about this work to give an authoritative answer.
My impression is that they use some of the same general techniques and
weaknesses, for example the ability to make modifications to message
words and compensate for them with modifications to other message words
that cancel. However, I think there are differences as well: the preimage
work often uses meet-in-the-middle techniques, which I don't think apply
to collisions.

There was an amusing demo at the rump session though of a different
kind of preimage technique which does depend directly on collisions. It
uses the Merkle-Damgard structure of MD5 and creates lots of blocks that
collide (possibly with different prefixes, I didn't look at it closely).
Then they were able to show a second preimage attack on a chosen message.

That is, they could create a message and have a signer sign it using MD5.
Then they could create more messages at will that had the same MD5 hash.
In this demo, the messages started with text that said, "Dear so-and-so"
and then had more readable text, followed by binary data. They were able
to change the person's name in the first line to that of a volunteer
from the audience, then modify the binary data and create a new version
of the message with the same MD5 hash, in just a second or two! Very
amusing demo.
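
(For intuition only, here is a toy sketch of the Merkle-Damgard property
such tricks build on; it is my own illustration of Joux-style
multicollisions, not the technique from the paper linked below. Because
the hash chains a small fixed-size state through the blocks, one
collision search per block position gives a pair of interchangeable
blocks, and k positions give 2^k messages with a single digest.)

  import hashlib
  from itertools import product

  STATE_BYTES = 3   # toy 24-bit chaining value so collisions are cheap to find

  def compress(state, block):
      # Toy compression function: truncated MD5 of (state || block).
      return hashlib.md5(state + block).digest()[:STATE_BYTES]

  def md_hash(iv, blocks):
      # Plain Merkle-Damgard chaining (length padding omitted for brevity).
      state = iv
      for b in blocks:
          state = compress(state, b)
      return state

  def find_colliding_pair(state):
      # Birthday-search two distinct 8-byte blocks that map state to the
      # same next chaining value.
      seen = {}
      i = 0
      while True:
          block = i.to_bytes(8, "big")
          out = compress(state, block)
          if out in seen:
              return seen[out], block, out
          seen[out] = block
          i += 1

  iv = b"\x00" * STATE_BYTES
  pairs = []
  state = iv
  for _ in range(4):                 # 4 positions -> 2**4 = 16 messages
      b0, b1, state = find_colliding_pair(state)
      pairs.append((b0, b1))

  # Every choice of one block per position yields the same digest.
  digests = {md_hash(iv, [p[bit] for p, bit in zip(pairs, bits)])
             for bits in product((0, 1), repeat=len(pairs))}
  assert len(digests) == 1
  print("2**%d messages, one digest: %s" % (len(pairs), digests.pop().hex()))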

Google for "trojan message attack" to find details, or read:
www.di.ens.fr/~bouillaguet/pub/SAC2009.pdf
slides (not too informative):
http://rump2009.cr.yp.to/ccbe0b9600bfd9f7f5f62ae1d5e915c8.pdf

Hal Finney

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Certainty

2009-08-23 Thread Paul Hoffman
At 7:10 PM -0700 8/19/09, james hughes wrote:
>On Aug 19, 2009, at 3:28 PM, Paul Hoffman wrote:
>>I understand that "creaking" is not a technical cryptography term, but 
>>"certainly" is. When do we become "certain" that devastating attacks on one 
>>feature of hash functions (collision resistance) have any effect at all on 
>>even weak attacks on a different feature (either first or second preimages)?
>>
>>This is a serious question. Has anyone seen any research that took some of 
>>the excellent research on collision resistance and used it directly for 
>>preimage attacks, even with greatly reduced rounds?
>
>This is being done. What Perry said.


At 9:02 PM -0700 8/19/09, Greg Rose wrote:
>Not directly, as far as I know. But some research and success on preimages, 
>yes.

Getting a straight answer on whether or not the recent preimage work is 
actually related to the earlier collision work would be useful.


--Paul Hoffman, Director
--VPN Consortium

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Certainty

2009-08-21 Thread John Gilmore
> Getting back towards topic, the hash function employed by Git is showing 
> signs of bitrot, which, given people's desire to introduce malware 
> backdoors and legal backdoors into Linux, could well become a problem in 
> the very near future.
>
> "James A. Donald" 

> I believe attacks on Git's use of SHA-1 would require second pre-image
> attacks, and I don't think anyone has demonstrated such a thing for
> SHA-1 at this point. None the less, I agree that it would be better if
> Git eventually used better hash functions. Attacks only get better with
> time, and SHA-1 is certainly creaking.
> 
> Emphasis on "eventually", however. This is a "as soon as convenient, not
> as soon as possible" sort of situation -- more like within a year than
> within a week.
> 
> Yet another reason why you always should make the crypto algorithms you
> use pluggable in any system -- you *will* have to replace them some day.
> --
> Perry E. Metzger  pe...@piermont.com

> Of course, I still believe in hash algorithm agility: regardless of how 
> preimage attacks will be found, we need to be able to deal with them 
> immediately.
> 
> --Paul Hoffman, Director

I tried telling this to Linus within a few weeks of the design, while
he was still writing git.  He rejected the advice.  Perhaps a
delegation of cryptographers should approach him -- before it's too
late.

His biggest argument was that the important git trees would be "off-net"
and would not depend on public trees.  I think git is getting enough
use (e.g. by thousands of development projects other than the Linux
kernel) that those assumptions are probably no longer valid.

His secondary argument was that git only uses the hash as a
collision-free oracle, not a cryptographic hash.  But that's exactly
the problem.  If malicious people can make his oracle produce
collisions, other parts of the git code will make false assumptions
that can be exploited.
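
To make the "oracle" concrete: git names a blob by hashing a short
type-and-length header followed by the raw contents, and the rest of the
object store trusts that name completely. A minimal sketch of that
naming scheme (my illustration, not code from this thread):

  import hashlib

  def git_blob_id(content):
      # Git's blob naming: SHA-1 over "blob <length>\0" plus the raw bytes.
      header = b"blob %d\x00" % len(content)
      return hashlib.sha1(header + content).hexdigest()

  # Same value `git hash-object` would print for these bytes, e.g.
  # b"hello world\n" -> 3b18e512dba79e4c8300dd08aeb37f8e728b8dad
  print(git_blob_id(b"hello world\n"))

Anyone who can produce two different byte strings that hash identically
under this framing has two objects git cannot tell apart.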

His final argument is the same one I heard NSA make to Diffie and
Hellman about DES in 1976: "the crypto will never be the weakest link
in the system, so it doesn't really have to be that strong".  That
argument was wrong then and it's wrong now.  The cost of using a
strong cryptosystem isn't significantly greater than the cost of using
a weak cryptosystem; and cracking the crypto HAS become the weakest
link in the overall security of many systems (CSS is an obvious one).
See:

  http://www.toad.com/des-stanford-meeting.html

John

To: torva...@osdl.org, g...@toad.com
Subject: SHA1 is broken; be sure to parameterize your hash function
Date: Sat, 23 Apr 2005 15:21:07 -0700
From: John Gilmore 

It's interesting watching git evolve.  I have one comment, which is
that the code and the contributors are throwing around the term "SHA1
hash" a lot.  They shouldn't.  SHA1 has been broken; it's possible to
generate two different blobs that hash to the same SHA1 hash.  (MD5
has totally failed; there's a one-machine one-day crack.  SHA1 is
still *hard* to crack.)  But as Jon Callas and Bruce Schneier said:
"Attacks always get better; they never get worse.  It's time to walk,
but not run, to the fire exits.  You don't see smoke, but the fire
alarms have gone off.  It's time for us all to migrate away from
SHA-1."  See the summary with bibliography at:

  http://www.schneier.com/crypto-gram-0503.html

Since we don't have a reliable long-term hash function today, you'll
have to change hash functions a few years out.  Some foresight now
will save much later pain in keeping big trees like the kernel secure.
Either that, or you'll want to re-examine git's security assumptions
now: what are the implications if multiple different blobs can be
intentionally generated that have the same hash?  My initial guess is
that changing hash functions will be easier than making git work in
the presence of unreliable hashing.

In the git sources, you'll need to install a better hash function when
one is invented.  For now, just make sure the code and the
repositories are modular -- they don't care what hash function is in
use.  Whether that means making a single git repository able to use
several hash functions, or merely making it possible to have one
repository that uses SHA1 and another that uses some future
WonderHash, is a system design decision for you and the git
contributors to make.  The simplest case -- copying a repository with
one hash function into a new repository using a different hash
function -- will change not only all the hashes, but also the contents
of objects that use hash values to point to other objects.  If any of
those objects are signed (e.g. by PGP keys) then those signatures will
not be valid in the new copy.
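
One minimal way to get that kind of modularity (a sketch of the general
idea only, not a proposal for git's actual object format) is to make
every identifier self-describing, carrying the algorithm name next to
the digest:

  import hashlib

  ALGORITHMS = {"sha1": hashlib.sha1, "sha256": hashlib.sha256}

  def make_object_id(algo, content):
      # Self-describing identifier: "<algorithm>:<hex digest>".
      return algo + ":" + ALGORITHMS[algo](content).hexdigest()

  def verify_object(object_id, content):
      algo, expected = object_id.split(":", 1)
      return ALGORITHMS[algo](content).hexdigest() == expected

  oid = make_object_id("sha256", b"tree contents")
  assert verify_object(oid, b"tree contents")

The hard part noted above remains: objects that embed other objects'
identifiers, and signatures over them, still have to be rewritten when
the function changes.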

Adding support now for SHA256 as well as SHA1 would make it likely
that at least git has no wired-in dependencies on the *names* or
*lengths* of hashes, and let you explore the system level issues.  (I
wouldn't build in the assumption that each different hash function
produces a different length output.)

Re: Certainty

2009-08-21 Thread Greg Rose


On 2009 Aug 19, at 3:28, Paul Hoffman wrote:

> At 5:28 PM -0400 8/19/09, Perry E. Metzger:
>> I believe attacks on Git's use of SHA-1 would require second pre-image
>> attacks, and I don't think anyone has demonstrated such a thing for
>> SHA-1 at this point. None the less, I agree that it would be better if
>> Git eventually used better hash functions. Attacks only get better with
>> time, and SHA-1 is certainly creaking.
>
> I understand that "creaking" is not a technical cryptography term, but
> "certainly" is. When do we become "certain" that devastating attacks on
> one feature of hash functions (collision resistance) have any effect at
> all on even weak attacks on a different feature (either first or second
> preimages)?
>
> This is a serious question. Has anyone seen any research that took some
> of the excellent research on collision resistance and used it directly
> for preimage attacks, even with greatly reduced rounds?

Not directly, as far as I know. But some research and success on
preimages, yes.

> The longer that MD5 goes without any hint of preimage attacks, the less
> "certain" I am that collision attacks are even related to preimage
> attacks.

They aren't particularly related, but there was a presentation at
Eurocrypt about MD5 preimages earlier this year. Or maybe it was MD4...

Greg.

> Of course, I still believe in hash algorithm agility: regardless of how
> preimage attacks will be found, we need to be able to deal with them
> immediately.
>
> --Paul Hoffman, Director
> --VPN Consortium

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Certainty

2009-08-21 Thread james hughes

Caution, the following contains a rant.

On Aug 19, 2009, at 3:28 PM, Paul Hoffman wrote:
> I understand that "creaking" is not a technical cryptography term, but
> "certainly" is. When do we become "certain" that devastating attacks on
> one feature of hash functions (collision resistance) have any effect at
> all on even weak attacks on a different feature (either first or second
> preimages)?
>
> This is a serious question. Has anyone seen any research that took some
> of the excellent research on collision resistance and used it directly
> for preimage attacks, even with greatly reduced rounds?


This is being done. What Perry said.

> The longer that MD5 goes without any hint of preimage attacks, the less
> "certain" I am that collision attacks are even related to preimage
> attacks.


There was an invited talk at Crypto, "Alice and Bob Go To Washington: A
Cryptographic Theory of Politics and Policy"

http://www.iacr.org/conferences/crypto2009/acceptedpapers.html#crypto06

which was interesting in that it explained that facts are not what
politicians want, and that politicians form blocs to create shared power.

It seems that your comment about "certainty" is not a technical one, but
a political one. The bloc of people who have implemented MD-5 believe
that the algorithm is good enough, and that the facts that the hash
function contains no science of how it works, cannot be proven resistant
to pre-image attacks, and cannot even be reduced to any known hard
problem are not "certain". Maybe this particular bloc just wants it to
be secure? If MD-5 is secure against pre-image attacks, the cryptographic
community does not know why. It seems that the only proof that will be
accepted as "certainty" is an existence proof that the bad deed _has_
been done.


Maybe this is not really an MD-5 bloc, but an HMAC implementers' bloc.
That bloc does have some results to hang its hat on. The paper "New
Proofs for NMAC and HMAC: Security without Collision-Resistance" was
published in 2006

http://eprint.iacr.org/2006/043.pdf

and it states that as long as the compression function is a PRF, HMAC is
secure. This is mostly because the algorithm is keyed. It places HMAC in
the class of keyed primitives (PRFs, like ciphers) and out of the class
of plain hash functions.
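
For concreteness, a small sketch of the distinction being drawn (the key
and message are made up, not from the thread): a bare MD5 digest that
anyone can recompute, versus an HMAC-MD5 tag where the hash is used as a
keyed PRF, which is the property the proof above relies on:

  import hashlib
  import hmac

  key = b"0123456789abcdef0123456789abcdef"   # example key, not from the thread
  msg = b"example message to authenticate"

  bare = hashlib.md5(msg).hexdigest()                  # unkeyed hash
  tag = hmac.new(key, msg, hashlib.md5).hexdigest()    # keyed HMAC-MD5 tag

  # Verification of a received tag should use a constant-time comparison.
  ok = hmac.compare_digest(tag, hmac.new(key, msg, hashlib.md5).hexdigest())
  print(bare, tag, ok)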


I find this "interesting". Cryptographers knew in 2004 that the wheels
had just come off MD-5 and that its future was going to be grim. The
"common sense" was that a collision by itself was not relevant. Then
there was "Colliding X.509 Certificates"

http://eprint.iacr.org/2005/067

and still the "common sense" was that MD-5 could still be used. So then
there was "Chosen-Prefix Collisions for MD5 and Colliding X.509
Certificates for Different Identities"

http://www.win.tue.nl/hashclash/EC07v2.0.pdf

but that was still not enough. At this year's Crypto, the paper "Short
Chosen-Prefix Collisions for MD5 and the Creation of a Rogue CA
Certificate"

http://www.iacr.org/conferences/crypto2009/acceptedpapers.html#crypto04

https://documents.epfl.ch/users/l/le/lenstra/public/papers/Crypto09nonanom.pdf

seems to have put a nail in this issue, but not in the issue of the
"certainty" of pre-image attacks.


Some believe that the Best Paper award was given for the persistence the
authors showed in continuing to spend time and effort on what the
cryptographic community already knows is a cart with no wheels, in order
to counter the "common sense" of an implementers' bloc that will not
believe it until they see it.


Effort placed on replacing MD-5 is more important now than taunting the
cryptographers to prove that MD-5 pre-images are feasible when there is
literally nothing proving that pre-images of MD-5 are difficult. (Again,
this is for bare MD-5, not HMAC.)


> Of course, I still believe in hash algorithm agility: regardless of how
> preimage attacks will be found, we need to be able to deal with them
> immediately.


I am curious: do you mean immediately as in now, or immediately when a
pre-image attack is found?



> --Paul Hoffman, Director
> --VPN Consortium


Jim

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Certainty

2009-08-19 Thread Perry E. Metzger

Paul Hoffman  writes:
> The longer that MD5 goes without any hint of preimage attacks, the
> less "certain" I am that collision attacks are even related to
> preimage attacks.

I believe that yesterday, at the rump session at Crypto, restricted
preimage attacks were described. Not quite what you want, but getting
closer.

Perry

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com