Re: More problems with hash functions

2004-08-25 Thread Daniel Carosone
My immediate (and not yet further considered) reaction to the
description of Joux' method was that it might be defeated by something
as simple as adding a block counter to the input each time.

In any case, I see it as a form of dictionary attack, and wonder
whether the same kinds of techniques wouldn't help.
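
A minimal sketch of the block-counter idea in Python (toy code; comp() is
a hypothetical stand-in compression function, not any particular hash's):

    import hashlib

    def comp(state: bytes, block: bytes) -> bytes:
        # Toy compression function; stand-in only.
        return hashlib.sha256(state + block).digest()

    def counter_hash(blocks, iv=b"\x00" * 32):
        # Fold the block index into every compression call: a colliding
        # block found at position i then no longer collides at position
        # j != i, which is the property the counter aims to add against
        # Joux-style multicollision chaining.
        state = iv
        for i, block in enumerate(blocks):
            state = comp(state, i.to_bytes(8, "big") + block)
        return state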

--
Dan.




Re: Cryptography and the Open Source Security Debate

2004-08-25 Thread Ben Laurie
lrk wrote:
On Thu, Aug 12, 2004 at 03:27:07PM -0700, Jon Callas wrote:
On 10 Aug 2004, at 5:16 AM, John Kelsey wrote:

So, how many people on this list have actually looked at the PGP key 
generation code in any depth?  Open source makes it possible for 
people to look for security holes, but it sure doesn't guarantee that 
anyone will do so, especially anyone who's at all good at it.
Incidentally, none of the issues that lrk brought up (RSA key being 
made from an easy to factor composite, a symmetric key that is a weak 
key, etc.) are unique to PGP.

Yep. And I know that. But as my hair turns grey, I make more simple mistakes
and catch fewer of them.
Looks like we are batting zero here. I have seen no responses nor received
off-list e-mail from anyone admitting to examining the open source for holes.
My examination of RSAREF and OpenSSL code was more toward understanding how
they handled big numbers. It appears both generate prime numbers which are
half the length of the required N, with the two most significant
bits set to one. This means the ratio R = P/Q (P being the larger prime) is
limited to 1 < R < 4/3. The actual maximum R is less and can be determined
by examining N.
This doesn't sound right to me - OpenSSL, IIRC, sets the top and bottom 
bits to 1. Of course, all large primes have the bottom bit set to one.
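
For what it's worth, the ratio bound itself follows from the forced bit
pattern alone. A quick illustrative check (hypothetical candidate() below;
primality testing omitted, and this is not OpenSSL's actual code path):

    import random

    def candidate(bits=512):
        # Random odd integer with the two most significant bits forced
        # to 1, the candidate shape described above.
        n = random.getrandbits(bits)
        n |= 0b11 << (bits - 2)  # top two bits set
        n |= 1                   # bottom bit set: all large primes are odd
        return n

    # Every candidate lies in [0.75 * 2^bits, 2^bits), so with p >= q the
    # ratio p/q stays below 2^bits / (0.75 * 2^bits) = 4/3.
    for _ in range(10_000):
        p, q = candidate(), candidate()
        if p < q:
            p, q = q, p
        assert 1 <= p / q < 4 / 3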

Cheers,
Ben.
--
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/
There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff
-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: On hash breaks, was Re: First quantum crypto bank transfer

2004-08-25 Thread John Kelsey
From: Jerrold Leichter [EMAIL PROTECTED]
Sent: Aug 24, 2004 7:18 AM
To: Joseph Ashwood [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: On hash breaks, was Re: First quantum crypto bank transfer


[[Note: I've tried to sort out who wrote what, but something odd was
going on in the quoting of the messages, so I may have it all
wrong]]

...
| Actually for years the cryptography community has been saying
|retire MD5, ...because it's been seen as giving too short a hash,
|and because of a minor weakness - widely described as
|certificational - in the compression function that no one ever
showed led to an attack.  (While the details of the current attack
|aren't yet completely clear, the fact that it worked on so many
|functions strongly indicates that the particular weakness in the MD5
|compression function has nothing to do with it.)

The advice may have been prudent, but it doesn't rise to the level of
a theory for distinguishing good from bad hash functions.

How about this: When someone finds any collision at all in your hash
compression function, even a pseudocollision or a free-start
collision, it's time to change hash functions.  This is true, even
when the alternatives are slower, and the existing attacks don't yet
turn into a full attack.  Also, when your collision resistance is
known to be vulnerable to brute-force collision attacks, you really
need to stop using it.  Even when the alternatives are slower, and you
think you can maybe get away with using MD5 here if the stars all line
up properly.
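
For scale, a brute-force (birthday) collision search against an n-bit hash
takes roughly 2^(n/2) evaluations, which is why a 128-bit output like MD5's
sits uncomfortably close to feasible; a back-of-the-envelope sketch:

    # Birthday bound: ~2^(n/2) evaluations to brute-force a collision
    # in an n-bit hash output.
    for name, n in [("MD5", 128), ("SHA-1", 160), ("SHA-256", 256)]:
        print(f"{name}: ~2^{n // 2} hash evaluations")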

Now, for fielded hardware and (to some extent) software, you can try
to phase out the use of the broken primitive, if the attack isn't yet
leading to a practical fast collision-finding algorithm.  If MD5 had
started being phased out when the pseudocollision attack was found, or
even when the Dobbertin attack was found, it seems like we'd be in
better shape now.  

...
| So basically I encourage my clients to maintain good business
| practices which means that they don't need to have belief in the
| long term security of AES, or SHA-1, or RSA, or . This is
| just good business, and it is a process that evolved to deal with
| similar circumstances.

Real good business practice has to make judgements about possible
risks and trade them off against potential costs.  I quite agree that
your advice is sound.  But that doesn't change the facts: Our
theoretical bases for security are much weaker than we sometimes let
on.  We can still be surprised.

True.  But was anyone surprised at another attack on MD5, which had
already had two high-profile attacks on its compression function?  Was
anyone surprised at an attack on HAVAL?  

Suppose a year ago I offered the following bet: At the next Crypto,
all but one of the widely-discussed hash functions will be shown to be
fundamentally flawed.  What odds would you have given me?  

You would have lost the bet.  Where's the fundamental flaw in SHA1,
SHA256, SHA512, or RIPEMD-160?  Where's the fundamental flaw in
Whirlpool?  There may *be* such flaws in any or all of these hashes,
but they haven't been shown yet.  (Phil Hawkes' results on SHA256 look
interesting; it will be interesting to see if they lead anywhere, but
it sure doesn't look trivial to control those corrective patterns with
choices of message block differences.)  

What odds would you have given me on the following bet: At the next
Crypto, an attack against AES that is substantially better than brute
force will be published?  If the odds were significantly different,
how would you have justified the difference?

Remember that we had the algebraic attacks, which claimed the ability
to break the whole AES, though the attacks apparently don't work as
claimed because of a miscounting of variables.  (It's certainly
possible that someone will find an algebraic attack on AES.)  

Let's update the question to today: Replace "widely-discussed hash
functions" with "SHA-1 and the related family."  Keep the AES bet
intact.  But let's go out 5 years.  Now what odds do you give me?
Why?

I don't know.  If you had to build something today to be secure, it
wouldn't be crazy to use SHA1, IMO.  But you just can't ever rule out
cryptanalytic advances of this kind.  I think the difference between
block ciphers and hash functions is that there's a much better
developed theory of block cipher design and analysis in the public
world than for hash function design and analysis.  This may be
changing, though.  And new attacks (algebraic attacks, the integral
attack that is so effective against reduced-round Rijndael versions)
are always coming up, even so.  

I think seriously trying to beat up on our algorithms, publishing
intermediate results, etc., is the best we can do at our current state
of knowledge.  

-- Jerry

--John Kelsey



Re: New directions for hash function designs (was: More problems with hash functions)

2004-08-25 Thread Thierry Moreau

Hal Finney wrote:
Another of the Crypto talks that was relevant to hash function security
was by Antoine Joux, discoverer of the SHA-0 collision that required
2^51 work.  Joux showed how most modern hash functions depart from the
ideal of a random function.
The problem is with the iterative nature of most hash functions, which
are structured like this (view with a fixed-width font):
IV  ---  COMP   ---   COMP   ---   COMP   ---  Output
          ^            ^            ^
          |            |            |
          |            |            |
       Block 1      Block 2      Block 3
The idea is that there is a compression function COMP, which takes
two inputs: a state vector, which is the size of the eventual hash
output; and an input block.  The hash input is padded and divided
into fixed-size blocks, and they are run through the hash function
via the pattern above.
This pattern applies to pretty much all the hashes in common use,
including MDx, RIPEMDx, and SHA-x.
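
A minimal Python sketch of this iterated pattern (toy code, not any real
hash: comp() is a stand-in compression function and the padding is a
simplified length-strengthening pad; real hashes fix their own IV, block
size, and padding):

    import hashlib

    def comp(state: bytes, block: bytes) -> bytes:
        # Stand-in compression function COMP: maps (state, block) to a
        # new state the size of the eventual hash output.
        return hashlib.sha256(state + block).digest()

    def iterated_hash(message: bytes, block_size: int = 64) -> bytes:
        # Pad, split into fixed-size blocks, and chain each block
        # through COMP from a fixed IV, as in the diagram above.
        pad_len = -(len(message) + 9) % block_size
        padded = (message + b"\x80" + b"\x00" * pad_len
                  + (8 * len(message)).to_bytes(8, "big"))
        state = b"\x00" * 32  # fixed IV (toy choice)
        for i in range(0, len(padded), block_size):
            state = comp(state, padded[i:i + block_size])
        return state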
 

Just for the record, the Frogbit algorithm is a compression function with a
1-bit block size and variable-size state information.
(http://www.connotech.com/frogbit.htm)
The Helix cipher shares with the Frogbit algorithm the use of two key
streams, between which the plaintext stream is inserted in the computations.
However, the Helix authors do not recommend their construct for a hash
function.
(http://www.schneier.com/paper-helix.pdf)
Perhaps ideas combined from these two approaches can help define other
constructs for hash functions. Obviously, if the state information is to end
up at a standard size at the end of the plaintext processing, the additional
state information has to be folded, which means additional processing costs,
or discarded.
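
One simple way such folding might look (illustrative only; neither Frogbit
nor Helix specifies this particular scheme):

    def fold_state(state: bytes, out_len: int = 32) -> bytes:
        # Fold a wide internal state down to a standard output size by
        # XORing it, slice by slice, into an out_len-byte accumulator.
        folded = bytearray(out_len)
        for i, b in enumerate(state):
            folded[i % out_len] ^= b
        return bytes(folded)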

--
- Thierry Moreau
CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, Qc
Canada   H2M 2A1
Tel.: (514)385-5691
Fax:  (514)385-5900
web site: http://www.connotech.com
e-mail: [EMAIL PROTECTED]


RFC 3833 Threat analysis of the domain name system (DNS)

2004-08-25 Thread Anne Lynn Wheeler
as always ... can go to
http://www.garlic.com/~lynn/rfcietff.htm
and either scroll down the summary page to the 3833 summary and then 
retrieve the actual RFC by clicking on the .txt= field.

In this case it is also possible to click on Term (term->RFC#) in the
RFCs listed by section ... and then click on DNSSEC in the acronym
fastpath section at the top. That brings up the DNSSEC RFCs ... where
DNSSEC (and/or domain name security) appeared somewhere in the title or
abstract.

as a side note, I've just done about everything possible that I can do with 
scanning actual RFCs for references. I did a pass ... where I created a 
list of all RFCs ... where the scan didn't produce RFC numbers from a 
reference section ... and then scanned those RFCs for anything that looked 
like there might be an RFC number anywhere in the body. Then I manually
examined that list of RFCs for how they formatted/called the references
section. Somewhat more detailed discussion of the references & md5 stuff:
http://www.garlic.com/~lynn/2004k.html#6



RFC 3833
Title:  Threat Analysis of the Domain Name System (DNS)
Author(s):  D. Atkins, R. Austein
Status: Informational
Date:   August 2004
Mailbox:[EMAIL PROTECTED], [EMAIL PROTECTED]
Pages:  16
Characters: 39303
Updates/Obsoletes/SeeAlso: None
I-D Tag:draft-ietf-dnsext-dns-threats-07.txt
URL:ftp://ftp.rfc-editor.org/in-notes/rfc3833.txt
Although the DNS Security Extensions (DNSSEC) have been under
development for most of the last decade, the IETF has never written
down the specific set of threats against which DNSSEC is designed to
protect.  Among other drawbacks, this cart-before-the-horse situation
has made it difficult to determine whether DNSSEC meets its design
goals, since its design goals are not well specified.  This note
attempts to document some of the known threats to the DNS, and, in
doing so, attempts to measure to what extent (if any) DNSSEC is a
useful tool in defending against these threats.

--
Anne & Lynn Wheeler   http://www.garlic.com/~lynn/
