On Wed, Mar 16, 2005 at 10:03:55PM +0100, Karel Gardas wrote:

> "One-way hash functions are supposed to have two properties.  One,
> they're one way.  This means that it is easy to take a message and
> compute the hash value, but it's impossible to take a hash value and
> recreate the original message.  (By 'impossible' I mean 'can't be
> done in any reasonable amount of time.')

Unless one knows the size of the hashed data, aren't there an infinite
number of increasingly large combinations of bytes that resolve to any
given hash?
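That point is easy to see with a toy hash (a deliberately weak stand-in for MD5, used only to illustrate the pigeonhole situation):

```python
# Toy 8-bit "hash": sum of bytes mod 256. Purely illustrative --
# a real hash like MD5 has the same pigeonhole property, just with
# 2^128 buckets instead of 256.
def toy_hash(data: bytes) -> int:
    return sum(data) % 256

target = toy_hash(b"hello")

# Ever-larger messages that all hash to the same value: appending a
# zero byte doesn't change the byte sum, so we can do it forever.
preimages = [b"hello" + b"\x00" * n for n in range(5)]
assert all(toy_hash(m) == target for m in preimages)
```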

Isn't irreversibility a property of any lossy algorithm?  Even MP3
and JPEG compression are one-way, although they try their hardest to
produce an *approximation* of the original.

> Two, they're collision free.  This means that it is impossible to
> find two messages that hash to the same hash value.

Hashes are a fixed length.  Hashed data is not (although there are
pragmatic limits).  So how can a finite set of hash values be
unique for every one of an infinite number of source messages?

There will always be collisions.  The key, I would think, is that it's
practically impossible to predict where collisions will occur.
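You can watch this happen by truncating MD5 to 24 bits, which shrinks the output space enough that a brute-force search turns up a collision in seconds (the "message-N" strings are arbitrary):

```python
import hashlib
from itertools import count

# Truncate MD5 to 24 bits so a collision is quick to find by brute
# force -- the pigeonhole principle guarantees one exists, and the
# birthday bound says to expect it after roughly 2^12 tries.
def short_hash(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()[:6]  # 6 hex chars = 24 bits

seen = {}
for i in count():
    msg = b"message-%d" % i
    h = short_hash(msg)
    if h in seen:
        break
    seen[h] = msg

collision_a, collision_b = seen[h], msg
assert collision_a != collision_b
assert short_hash(collision_a) == short_hash(collision_b)
```

Which pair of messages collides is not something you could have predicted in advance; you only find out by doing the search.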

Further, as far as I can tell (I'm no expert), the original posting
discusses the creation of two documents, *both* under the attacker's
control, that can be made to have the same checksum.  Attacking arch's
use of MD5 requires coming up with a document on your side that
matches the specific hash of data outside your control.
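That asymmetry (a collision search versus what is usually called a second-preimage search) can be sketched on a 20-bit truncation of MD5, where both attacks are cheap enough to run; the document strings here are made up:

```python
import hashlib

# 20-bit truncation keeps both searches fast enough to demonstrate.
def h(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()[:5]  # 5 hex chars = 20 bits

# Collision search: both messages are chosen by the attacker, so the
# birthday bound applies -- roughly 2^10 tries for a 20-bit hash.
seen, i = {}, 0
while True:
    m = b"doc-%d" % i
    d = h(m)
    if d in seen:
        pair = (seen[d], m)
        break
    seen[d] = m
    i += 1

# Second-preimage search: the target digest is fixed, outside the
# attacker's control, so expect roughly 2^20 tries -- about a
# thousand times more work than the collision search above.
target = h(b"the archived revision")
j = 0
while h(b"forged-%d" % j) != target:
    j += 1
forged = b"forged-%d" % j

assert pair[0] != pair[1] and h(pair[0]) == h(pair[1])
assert h(forged) == target
```

With MD5's full 128 bits the collision search is the recently weakened one; the second-preimage search remains far beyond reach, which is why arch's usage is a harder target.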

That being said, I still advocate using more than one hash, and
particularly signing the file size in addition to the hash.
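A sketch of what such a record might look like (the field names and helper functions are invented for illustration, not anything arch does today): a forger would have to satisfy every digest *and* the exact byte length simultaneously.

```python
import hashlib

# Hypothetical integrity record: two independent digests plus the
# exact size, all of which would be covered by the signature.
def integrity_record(data: bytes) -> dict:
    return {
        "size": len(data),
        "md5": hashlib.md5(data).hexdigest(),
        "sha1": hashlib.sha1(data).hexdigest(),
    }

def verify(data: bytes, record: dict) -> bool:
    return integrity_record(data) == record

rec = integrity_record(b"some archived file contents")
assert verify(b"some archived file contents", rec)
assert not verify(b"tampered contents", rec)
```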


_______________________________________________
Gnu-arch-users mailing list
Gnu-arch-users@gnu.org
http://lists.gnu.org/mailman/listinfo/gnu-arch-users

GNU arch home page:
http://savannah.gnu.org/projects/gnu-arch/
