Re: First quantum crypto bank transfer

2004-08-24 Thread Jerrold Leichter
| > ... the comments I've seen on this list and elsewhere have been much
| > broader, and amount to "QM secure bit distribution is dumb, it solves
| > no problem we haven't already solved better with classical
| > techniques."
|
| Most of the comments on this list are more nuanced than that.
Perhaps we hear them differently.

| Examples of sensible comments include:
|   -- We have seen claims that QM solves "the" key distribution
|problem.  These claims are false.
I'm not sure what "the" key distribution problem would be or what "solving" it
would mean.  As we all know, the real problem with OTP systems is that you
have to distribute as much keying material, securely, as you have material to
protect.  So OTP pretty much comes down to leveraging a single secure channel
to produce another.  In all practical instances I know of, the two channels
are separated in time and space:  You leverage the security of your diplomatic
pouch today to get secure messages from a spy tomorrow.

QM key sharing lets you build an OTP with a shared transmission medium and an
arbitrarily small time separation.  This is new.  It gives you guarantees that
the bits sent have not been intercepted.  That's new.  Certainly, it doesn't
solve MITM attacks as a mathematical abstraction.  What it does is reduce
protection from MITM attacks to protection of physical assets.  All crypto
ultimately has to rest on that - if you can't protect your keys, nothing
works.  The nature of the system that must be protected, and the kind of
protection, are somewhat different than in traditional systems, but the
inherent problem is neither eliminated nor made inherently worse.

|   -- _Commercialization_ of QM bit-exchange is dumb, for now
|and for the foreseeable future
Here, I'll pretty much agree with you.

| > Also, there is a world of difference between:
| >
| > 1.  Showing something is possible in principle;
| > 2.  Making it work on the lab bench;
| > 3.  Making it into something that works in the real world.
| >
| > For QM key exchange, step 1 goes back maybe 10-15 years, and most
| > people thought it was a curiosity - that you could never maintain
| > coherence except in free space and over short distances.
|
| That's backwards.  Quantum crypto in free space is hard.
The thought experiments on this always involve simple pictures in free space.
I agree, actually *doing* anything in free space over macroscopic distances is
a non-starter.

| It's
| much easier to use a single-mode fiber, over distances such
| that there is little total attenuation (which can be a quite
| macroscopic distance, since the attenuation is a fraction of
| a dB/km if you do it right).
|
| > Step 2 is a couple of years back, the first surprise being that you
| > could actually make things work through fiber, then through a couple
| > of km of fiber coiled on a bench.
|
| Again, that diametrically misstates the physics.  Propagation
| through a couple km of fiber shouldn't have surprised anybody.
I think that's obvious now, but might not have been so obvious 20 years ago.
(For that matter, just how long have we had usable multi-km single-mode
fibers?)

| > BTW, if we look at QM *computation* in comparison, we've barely made
| > it through Step 1.  There are still plausible arguments that you
| > can't maintain coherence long enough to solve any interesting
| > problems.
|
| Within a year of the invention of quantum computation,
| people were working on quantum error correction.
Actually, they started off pointing out that error correction couldn't be
done in QM systems without unmixing the states, thus losing the essence of the
computation.  Well, it turned out that things are more subtle than that.

Don't take this as a criticism of those who said quantum error correction was
impossible!  This is all new, complex physics.  We're wrong before we're
right.

|  This
| is interesting work and has had spin-offs in the form
| of changing how people think about error correction even
| in non-quantum systems.  And it has had spin-offs
| applicable to quantum cryptography, i.e. showing how it
| is possible to survive a modest amount of attenuation.
|
| > Some of the papers I've seen solve the problem only in their titles:
| > They use a QM system, but they seem to only make classical bits
| > available for general use.
|
| Huh?  The world abounds in QM systems that produce classical
| results, including e.g. transistors, lasers, practically all of
| chemistry, etc. etc. etc.  Quantum computers produce classical
| results because that is what is desired.
You miss my point.  Papers have been published - there's not much point
dredging them up - whose title and abstract imply that they are providing a
way to store and manipulate qubits, but when you look at what they actually
end up providing, you can't *use* them as qubits, just classical bits.  (What
a surprise.)

Re: First quantum crypto bank transfer

2004-08-24 Thread John Denker
Jerrold Leichter wrote:
... the comments I've seen on this list and elsewhere have been much 
broader, and amount to "QM secure bit distribution is dumb, it solves
no problem we haven't already solved better with classical 
techniques."
Most of the comments on this list are more nuanced than that.
Examples of sensible comments include:
 -- We have seen claims that QM solves "the" key distribution
  problem.  These claims are false.
 -- _Commercialization_ of QM bit-exchange is dumb, for now
  and for the foreseeable future.  I am reminded of a slide
  Whit Diffie showed (in a different context) of an attempt
  to build a picket fence consisting of a single narrow pale
  a mile high ... while the rest of the perimeter remains
  undefended.  That's a dumb allocation of resources.  The
  opposition aren't going to attack the mega-pale;  they are
  going to go around it.  QM doesn't solve the whole problem.
  Sensible research should not be directed toward making the
  tall pale taller;  instead it should be directed toward
  filling in the gaps in the fence.
> Even if some snake-oil salesmen have attached themselves
> to the field doesn't say research in the field is worthless.
Be that as it may, there are other grounds for judging the
commercialization projects to be near-worthless.
Also, there is a world of difference between:
1.  Showing something is possible in principle;
2.  Making it work on the lab bench;
3.  Making it into something that works in the real world.
For QM key exchange, step 1 goes back maybe 10-15 years, and most
people thought it was a curiosity - that you could never maintain
coherence except in free space and over short distances.
That's backwards.  Quantum crypto in free space is hard.  It's
much easier to use a single-mode fiber, over distances such
that there is little total attenuation (which can be a quite
macroscopic distance, since the attenuation is a fraction of
a dB/km if you do it right).
Step 2 is a couple of years back, the first surprise being that you
could actually make things work through fiber, then through a couple
of km of fiber coiled on a bench.
Again, that diametrically misstates the physics.  Propagation
through a couple km of fiber shouldn't have surprised anybody.
BTW, if we look at QM *computation* in comparison, we've barely made
it through Step 1.  There are still plausible arguments that you
can't maintain coherence long enough to solve any interesting
problems.
Within a year of the invention of quantum computation,
people were working on quantum error correction.  This
is interesting work and has had spin-offs in the form
of changing how people think about error correction even
in non-quantum systems.  And it has had spin-offs
applicable to quantum cryptography, i.e. showing how it
is possible to survive a modest amount of attenuation.
Some of the papers I've seen solve the problem only in their titles:
They use a QM system, but they seem to only make classical bits
available for general use.   
Huh?  The world abounds in QM systems that produce classical
results, including e.g. transistors, lasers, practically all of
chemistry, etc. etc. etc.  Quantum computers produce classical
results because that is what is desired.
The contrast between this work and QM
key exchange is striking. 
If the intent is to make quantum cryptography sound better
than quantum computation, the point is implausible and
unproven.
If the intent is to make the best results in quantum crypto
sound better than the lamest parts of quantum computation,
then the comparison is (a) unfair and (b) hardly a ringing
endorsement of quantum crypto.
after all, transistors were invented to build phone lines, not
computers!
It's not true that transistors were invented solely for
application to phone lines.  Even if it were true, it would
be irrelevant for multiple reasons.  For starters, keep
in mind that the big computers built during the 1940s
were built using vast amounts of telecom switch gear.
Bletchley Park relied on engineers from the Post Office
(which was the 'phone company' in those days).
And even if the facts had been otherwise, arguments about
the near-term applicability of one technology are largely
irrelevant to the near-term applicability of another
technology.
-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: More problems with hash functions

2004-08-24 Thread "Hal Finney"
Jerry Leichter writes:
> It strikes me that Joux's attack relies on *two* features of current
> constructions:  The block-at-a-time structure, and the fact that the state
> passed from block to block is the same size as the output state.  Suppose we
> did ciphertext chaining:  For block i, the input to the compression function
> is the compressed previous state and the xor of block i and block i-1.  Then
> I can no longer mix-and-match pairs of collisions to find new ones.

Here's how I understand what you are suggesting.  Presently the hash
compression function takes two inputs: a state, and an input data block.
For SHA-1 the state is 160 bits and the input data block is 512 bits.
It outputs a state-size block of 160 bits, which gets fed into the
next block:

IV  --->  COMP   --->   COMP   --->   COMP   --->  Output
           ^             ^             ^
           |             |             |
           |             |             |
        Block 1       Block 2       Block 3

I think you are proposing to increase the output of each block by 512
bits in this case, so as to provide some data that can be xored into
the input data block of the next stage, a la CBC:

IV  --->  COMP   --->   COMP   --->   COMP   --->  Output
           ^      \      ^      \      ^
           |       \     |       \     |
           |        \--> X        \--> X
           |             ^             ^
           |             |             |
        Block 1       Block 2       Block 3

By itself, adding an xor step doesn't do anything.  The attacker would
still have full control of the output of the xor, since he has full
control of one of its two inputs.  So I think it is more appropriate to
merely think of the compression function block as taking a wider state
input from the previous stage, and not specify how the input data block
is mixed with the previous state.
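
To make the diagrams concrete, here is a minimal Python sketch of the
plain iteration pictured above (my illustration, not from the original
post).  The toy compression function is SHA-256 truncated to 160 bits,
standing in for SHA-1's real compression function; message padding is
omitted.

import hashlib

STATE = 20   # 160-bit chaining state, as in SHA-1
BLOCK = 64   # 512-bit message blocks

def comp(state, block):
    # Toy stand-in for a compression function: state-size in, state-size out.
    return hashlib.sha256(state + block).digest()[:STATE]

def md_hash(msg, iv=b"\x00" * STATE):
    assert len(msg) % BLOCK == 0   # padding omitted for clarity
    state = iv
    for i in range(0, len(msg), BLOCK):
        state = comp(state, msg[i:i + BLOCK])
    return state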

This might lead to a construction in which the state passed from block
to block is, say, 512 bits, rather than 160.  The compression function
still takes two inputs, the state and the input block, and now produces
a 512 bit output.

IV  ===>  COMP   ===>   COMP   ===>   COMP   --->  Output
           ^             ^             ^
           |             |             |
           |             |             |
        Block 1       Block 2       Block 3

(Here, the ==> are meant to depict wide paths.)

For the final output of the hash, we would presumably "derate" the
output of the last stage down to 160 bits and not claim the full 512
bits of security.  If we didn't do this step, then the construction is
precisely SHA-512 and Joux's attack still applies.
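
A minimal sketch of that wide-pipe-plus-derating idea (mine, not Hal's
code; it keeps a 256-bit internal state rather than 512, for brevity):

import hashlib

BLOCK = 64   # 512-bit message blocks

def wide_hash(msg, iv=b"\x00" * 32):
    assert len(msg) % BLOCK == 0     # padding omitted for clarity
    state = iv                       # 256-bit state carried between blocks
    for i in range(0, len(msg), BLOCK):
        state = hashlib.sha256(state + msg[i:i + BLOCK]).digest()
    return state[:20]                # derate: claim only 160 bits at the end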

This approach does appear to defeat the Joux attack.  Finding a collision
in a sub-block which takes a 512 bit input to a 512 bit output will take
2^256 work rather than the 2^80 in the SHA-1 compression function.
So you will never find a multicollision in less work than this.  And the
most strength SHA-1 can provide against anything is 2^160 since that
is what it takes to invert it.  Since 2^256 is greater than 2^160,
the construction is strong against this attack.
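
Restating that arithmetic as a quick check (my numbers, taken straight
from the paragraph above):

narrow = 2 ** (160 // 2)   # collision in SHA-1's compression: ~2^80
wide   = 2 ** (512 // 2)   # collision in a 512-bit-state block: ~2^256
invert = 2 ** 160          # generic cost to invert a 160-bit hash
assert wide > invert > narrow   # the wide pipe closes the Joux shortcut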

We can also see from this analysis that making the paths 512 bits wide is
overkill; they only need to be twice the claimed hash strength, or 320
bits wide in this case.  Or in other words, if we took SHA-512 and only
claimed it to have 256 bits of strength, it would avoid the Joux attack.

However, the problem with this construction is that the inner blocks
now genuinely need to have 512 bits of strength, and that may not be
that easy to provide.  SHA-512's compression function does claim to have
that amount of collision resistance, but it is difficult to analyze such
strong claims, and adding this much strength may make for a slow hash.

Plus, given that we have managed to create a block with 512 bits of
security, it seems a shame to throw away half of the strength to produce
an output that avoids the Joux attack.  I think cryptographers will look
harder to try to find a way of combining sub-blocks which will retain the
strength of the individual pieces rather than throwing half of it away.

Hal Finney

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: Bad day at the hash function factory

2004-08-24 Thread Greg Rose
I wrote:
> Phil Hawkes' paper on the SHA-2 round function has just been posted as
> Eprint number 207.  It contains rather a lot of detail, unlike some of
> the other papers on the subject of hash function collisions.
At 14:17 2004-08-23 -0400, Trei, Peter wrote:
Could you possibly post a direct link?
http://eprint.iacr.org/2004/207.pdf
Greg.
-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: First quantum crypto bank transfer

2004-08-24 Thread Ian Grigg
Matt Crawford wrote:
Please don't blame the physicists for that.  It is still research, but 
someone is selling tincture of quantum physics in their snake-oil 
bottles.  Too bad that may poison the market for a really useful 
development a few years from now, but it does help shake the money tree 
for research.  And physics can use every dime it can get right now.
That logic amounts to "it's ok to sell a fraud that
will make people less secure if it shakes out some
money for my good works."  Which is fine, as it is
standard practice to pay off some scientist to sit
on the board and say what he's told.  As long as we
all understand that it's not science, and it's not
scientific!
Still, it may be that the hype from selling expensive
boxes full of fibre snake oil will flow through to
more cheap boxes full of software crypto, which would
be a good thing.  So it's not clear that every QC box
sold will reduce security overall (through lost
opportunity).
iang
-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Cryptography and the Open Source Security Debate

2004-08-24 Thread lrk
On Thu, Aug 12, 2004 at 03:27:07PM -0700, Jon Callas wrote:
> On 10 Aug 2004, at 5:16 AM, John Kelsey wrote:
> 
> >So, how many people on this list have actually looked at the PGP key 
> >generation code in any depth?  Open source makes it possible for 
> >people to look for security holes, but it sure doesn't guarantee that 
> >anyone will do so, especially anyone who's at all good at it.
> 
> Incidentally, none of the issues that lrk brought up (RSA key being 
> made from an "easy to factor" composite, a symmetric key that is a weak 
> key, etc.) are unique to PGP.

Yep. And I know that. But as my hair turns grey, I make more simple mistakes
and catch fewer of them.


Looks like we are batting zero here. I have seen no responses nor received
off-list e-mail from anyone admitting to examining the open source for holes.


My examination of RSAREF and OpenSSL code was more toward understanding how
they handled big numbers. It appears both generate prime numbers which are
half the length of the required N and with both of the two most significant
bits set to one. This means the ratio R=P/Q (P being the larger prime) is
limited to 1 < R < 4/3.
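
A quick sanity check of that bound (my sketch, not RSAREF or OpenSSL
code; assumes sympy is available): a k-bit prime with its two top bits
set lies in [3*2^(k-2), 2^k), so the ratio of two such primes must fall
below 4/3.

from sympy import randprime

def prime_top2(k):
    # Random k-bit prime with the two most significant bits forced to 1.
    return randprime(3 << (k - 2), 1 << k)

k = 512
p, q = prime_top2(k), prime_top2(k)
p, q = max(p, q), min(p, q)
r = p / q
assert 1 <= r < 4 / 3
print("R = %.4f" % r)   # always lands in [1, 1.333...)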

The Call Is Cheap. The Wiretap Is Extra

2004-08-24 Thread John Denker
1) Here's an article from the New York Times.
The headline just about says it all.  Reportedly
THEY want voice-over-internet users to pay for
the privilege of having their calls tapped.
> The Call Is Cheap. The Wiretap Is Extra.
http://www.theledger.com/apps/pbcs.dll/article?AID=/20040823/ZNYT01/408230401/1001/BUSINESS
(I cite the version online at The Ledger because
folks can read it there without registering, unlike
the nytimes.com site.)
===
2) A modest proposal:
I think we should set up the following system:
  a) Users certify to their ISP that they use end-to-end
strong crypto on all their voice-over-internet calls, using
tamper-resistant (or at least tamper-evident) hardware and
software.
  b) The ISP demands such certification from all users.
  c) The ISP declines to install wiretap equipment, and
passes the savings on to the users.
  ... Who could possibly object?
Note that traffic-analysis is still possible, but the
equipment to do that is much cheaper.
Also note that if THEY are going to bugger my endpoints
to defeat the crypto, they might as well do the job right
and change the signalling so that the call goes directly
to THEIR premises ... That way the ISP doesn't need to get
involved, i.e. the ISP has no tap-related costs.
-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: First quantum crypto bank transfer

2004-08-24 Thread Bill Stewart
At 02:02 AM 8/23/2004, Florian Weimer wrote:
* Bill Stewart:
> I agree that it doesn't look useful, but "lawful intercept" is harder,
> if you're defining that as "undetected eavesdropping with
> possible cooperation of the telco in the middle",
> because quantum crypto needs end-to-end fiber so there's
> nothing the telco can help with except installing dark fiber,
> and the quantum crypto lets you detect eavesdroppers.
But this doesn't scale.
You'd need dark fiber to all communication partners.
Yes.  That's part of one definition of "doesn't look useful".
So if quantum key distribution was mandated for
applications involving more than just a handful communication
partners, you'd need relays (or rather unlikely advances in optical
circuit switching).
It would be possible to use it as link encryption,
giving up the benefits of end-to-end in return for better scaling,
but you could still make all the relaying happen in the
user organization's facilities, rather than in a telco building
that's outside the user organization's control.
(Just because something isn't very useful doesn't mean you can't
at least try to do the job semi-correctly...)
By the way, the complete bashing of the recent QKD experiment is
probably not totally deserved.  Apparently, the experimenters used a
QKD variant that relies on quantum teleportation of photons.
This QKD variant is currently *not* available commercially,
and the experiment itself could well be an important refinement of
Zeilinger's earlier work in this area.
That's at least interesting, though I don't see why you'd take
the experiment out of the lab without a really well-defined
benefit to the end user (unless you've got a research grant.)
I'm surprised to hear that _any_ quantum key distribution variant
is available commercially, given the costs of dedicating fiber
and the effectiveness of current mathematical crypto
or the alternative approach of couriers with briefcases and handcuffs.

Bill Stewart  [EMAIL PROTECTED] 

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: On hash breaks, was Re: First quantum crypto bank transfer

2004-08-24 Thread Jerrold Leichter
| > Alternatively, how anyone can have absolute confidence in conventional
| > crypto
| > in a week when a surprise attack appears against a widely-fielded
| > primitive
| > like MD5 is beyond me.  Is our certainty about AES's security really any
| > better today than was our certainty about RIPEM - or even SHA-0 - was
| > three
| > weeks ago?
| > -- Jerry
|
| Actually for years the cryptography community has been saying "retire MD5,"
...because it's been seen as giving too short a hash, and because of a minor
weakness - widely described as "certificational" - in the compression function
that no one ever showed led to an attack.  (While the details of the current
attack aren't yet completely clear, the fact that it worked on so many
functions strongly indicates that the particular weakness in the MD5
compression function has nothing to do with it.)

The advice may have been prudent, but it doesn't rise to the level of a theory
for distinguishing good from bad hash functions.

| SHA-0 has been required to be replaced by SHA-1 for some time,
because the NSA said so.  It turns out they were ahead of public crypto by a
couple of years.  I will grant you that this is indirect evidence that NSA
has no attacks on AES, since this is now the second time that they've
strengthened a proposed primitive against which no publically-known attacks
existed.  It tells us little about how strong AES actually is - and absolutely
nothing about any other system out there, since NSA has no reason to comment
on those and every reason not to.

|   the RIPEM
| series is functionally-speaking unused
...but not because anyone thought there was a weakness.  MD5 happened to be
widely used, SHA-1 had standards pushing it; little room was left for another
hash.

|and represented the only real
| surprise. Except for RIPEM there were known to be reasons for this, MD5 was
| known to be flawed, SHA-0 was replaced because it was flawed (although
| knowledge of the nature of the flaw was hidden). Even with RIPEM (and SHA-1
| for the same reason) I have plans in place (and have had for some time) the
| move away from 160-bit hashes to larger ones, so the attack on RIPEM had
| little effect on me and my clients, even a full attack on SHA-1 would have
| little effect on the clients that actually listen (they all have backup
| plans that involve the rest of the SHA series and at the very least
| Whirlpool).
Moving to a larger hash function with no underlying theory isn't very far from
the "million-bit key" algorithms you see all over the place.  Bigger probably
can't be worse, but is it really better?

| So basically I encourage my clients to maintain good business practices
| which means that they don't need to have belief in the long term security of
| AES, or SHA-1, or RSA, or . This is just good business, and it is a
| process that evolved to deal with similar circumstances.
Real good business practice has to make judgements about possible risks and
trade them off against potential costs.  I quite agree that your advice is
sound.  But that doesn't change the facts:  Our theoretical bases for security
are much weaker than we sometimes let on.  We can still be surprised.

Suppose a year ago I offered the following bet:  At the next Crypto, all but
one of the widely-discussed hash functions will be shown to be fundamentally
flawed.  What odds would you have given me?  What odds would you have given me
on the following bet:  At the next Crypto, an attack against AES that is
substantially better than brute force will be published?  If the odds were
significantly different, how would you have justified the difference?

Let's update the question to today:  Replace "widely-discussed hash functions"
with "SHA-1 and the related family".  Keep the AES bet intact.  But let's got
out 5 years.  Now what odds do you give me?  Why?

-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: More problems with hash functions

2004-08-24 Thread Jerrold Leichter
| > It strikes me that Joux's attack relies on *two* features of current
| > constructions:  The block-at-a-time structure, and the fact that the state
| > passed from block to block is the same size as the output state.  Suppose we
| > did ciphertext chaining:  For block i, the input to the compression function
| > is the compressed previous state and the xor of block i and block i-1.  Then
| > I can no longer mix-and-match pairs of collisions to find new ones.
|
| Here's how I understand what you are suggesting.  Presently the hash
| compression function takes two inputs: a state, and an input data block.
| For SHA-1 the state is 160 bits and the input data block is 512 bits.
| It outputs a state-size block of 160 bits, which gets fed into the
| next block:
|
| IV  --->  COMP   --->   COMP   --->   COMP   --->  Output
|            ^             ^             ^
|            |             |             |
|            |             |             |
|         Block 1       Block 2       Block 3
|
| I think you are proposing to increase the output of each block by 512
| bits in this case, so as to provide some data that can be xored into
| the input data block of the next stage, a la CBC:
|
| IV  --->  COMP   --->   COMP   --->   COMP   --->  Output
|            ^      \      ^      \      ^
|            |       \     |       \     |
|            |        \--> X        \--> X
|            |             ^             ^
|            |             |             |
|         Block 1       Block 2       Block 3
|
| By itself, adding an xor step doesn't do anything
Not true.

Joux's attack says:  Find single block messages M1 and M1' that collide on
the "blank initial state".  Now find messages M2 amd M2' that collide with
the (common) final state from M1 and M1'.  Then you hav four 2-block
collisions for the cost of two:  M1|M2, M1'|M2, and so on.
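
A toy illustration of that four-for-the-price-of-two trick (my sketch,
not from the thread), using a 16-bit toy compression function so each
collision search takes only a few hundred tries:

import hashlib
from itertools import count

def comp(state, block):
    return hashlib.sha256(state + block).digest()[:2]   # 16-bit toy state

def find_collision(state):
    # Brute-force two distinct one-block messages that collide from `state`.
    seen = {}
    for i in count():
        m = i.to_bytes(8, "big")
        h = comp(state, m)
        if h in seen:
            return seen[h], m
        seen[h] = m

iv = b"\x00\x00"
m1, m1p = find_collision(iv)              # collide on the initial state
m2, m2p = find_collision(comp(iv, m1))    # collide on the common next state
digests = {comp(comp(iv, a), b) for a in (m1, m1p) for b in (m2, m2p)}
assert len(digests) == 1   # M1|M2, M1|M2', M1'|M2, M1'|M2' all collide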

But even a simple XOR breaks this.  M1 and M1' have the same hash H, but the
state being passed is now very different:  <H, M1> in one case, <H, M1'> in the
other.  So M1|M2 and M1'|M2 are completely different:  Both started the second
step with the compression function in state H, but the first compressed
M1 XOR M2, and the second M1' XOR M2.

All I'm left with, unless there's some cleverer attack I'm missing, is:

- Find M1 and M1' that collide;
- Find M2 and M2' such that M1 XOR M2 collides with M1' XOR M2',
on the common initial state.

Then for the cost of finding two block-level collisions, I have a collision
between two-block messages (M1|M2 and M1'|M2').  But why bother?  It's always
been sufficient to find a collision on the *last* block, with an arbitrary
initial state.  (It would be nice to eliminate *this* generic weakness, but
it's not clear you can and still have an on-line algorithm.)

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: More problems with hash functions

2004-08-24 Thread "Hal Finney"
Jerry Leichter writes:
> Joux's attack says:  Find single block messages M1 and M1' that collide on
> the "blank initial state".  Now find messages M2 amd M2' that collide with
> the (common) final state from M1 and M1'.  Then you hav four 2-block
> collisions for the cost of two:  M1|M2, M1'|M2, and so on.
>
> But even a simple XOR breaks this.  M1 and M1' have the same hash H, but the
> state being passed is now very different:  <H, M1> in one case, <H, M1'> in the
> other.  So M1|M2 and M1'|M2 are completely different:  Both started the second
> step with the compression function in state H, but the first compressed
> M1 XOR M2, and the second M1' XOR M2.

Okay, I misunderstood your earlier suggestion.  Sorry.  Rereading it:

> Suppose we
> did ciphertext chaining:  For block i, the input to the compression function
> is the compressed previous state and the xor of block i and block i-1.  Then
> I can no longer mix-and-match pairs of collisions to find new ones.

It sounds like you're saying that the input to the second compression
function should be M1 XOR M2 instead of just M2.  Like this:

IV  --->  COMP   --->   COMP   --->   COMP   --->  Output
           ^             ^             ^
           |        /--> X        /--> X
           |       /     ^       /     ^
           |      /      |      /      |
        Block 1       Block 2       Block 3

But this is easily compensated for: if you wanted the input to the
2nd compression stage to be M2, you'd just input M1 XOR M2 instead.
Then when this gets XOR'd with M1, M1 will cancel out in the XOR
and you'll have a nice clean M2 going into COMP, just as you wanted.

So if you found a compression-function collision starting with IV on M1
and M1', and you found a collision starting with the common output of
that stage on M2 and M2', your four collisions would be:

M1 || (M1 xor M2)
M1 || (M1 xor M2')
M1' || (M1' xor M2)
M1' || (M1' xor M2')

In each case the actual input to the 2nd block compression function
(after xoring with the first block input) would be M2 or M2', as desired.
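
Checking that compensation with the same 16-bit toy compression as in
the earlier sketch (mine, not Hal's code): the chain xors each block
with the previous block before compressing, and the precomputed xor
cancels it out.

import hashlib
from itertools import count

def comp(state, block):
    return hashlib.sha256(state + block).digest()[:2]   # 16-bit toy state

def find_collision(state):
    seen = {}
    for i in count():
        m = i.to_bytes(8, "big")
        h = comp(state, m)
        if h in seen:
            return seen[h], m
        seen[h] = m

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def chained_hash(blocks, iv=b"\x00\x00"):
    state, prev = iv, bytes(8)
    for b in blocks:
        state = comp(state, xor(b, prev))   # input is block i xor block i-1
        prev = b
    return state

iv = b"\x00\x00"
m1, m1p = find_collision(iv)
m2, m2p = find_collision(comp(iv, m1))
# Feed M1 xor M2 as the second block: the chain xors M1 back in, so the
# compression function sees M2 again and all four messages collide.
digests = {chained_hash([a, xor(a, b)]) for a in (m1, m1p) for b in (m2, m2p)}
assert len(digests) == 1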

Hal Finney

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: On hash breaks, was Re: First quantum crypto bank transfer

2004-08-24 Thread "Hal Finney"
Joe Ashwood writes:
> Except for RIPEM there were known to be reasons for this, MD5 was 
> known to be flawed, SHA-0 was replaced because it was flawed (although 
> knowledge of the nature of the flaw was hidden). Even with RIPEM (and SHA-1 
> for the same reason) I have plans in place (and have had for some time) the 
> move away from 160-bit hashes to larger ones, so the attack on RIPEM had 
> little effect on me and my clients...

A minor terminology correction: the hash is RIPEMD, the more recent (and
still unbroken) version being RIPEMD-160.  RIPEMD is the RIPE Message
Digest, where RIPE is the EU's RACE Integrity Primitives Evaluation
project, and I haven't been able to find out what RACE stands for.

RIPEM was an old implementation by Mark Riordan of the PEM (Privacy
Enhanced Email) standard which preceded S/MIME.

Hal Finney

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: On hash breaks, was Re: First quantum crypto bank transfer

2004-08-24 Thread Joseph Ashwood
- Original Message - 
From: "Jerrold Leichter" <[EMAIL PROTECTED]>
Subject: Re: On hash breaks, was Re: First quantum crypto bank transfer


| (they all have backup
| plans that involve the rest of the SHA series and at the very least
| Whirlpool).
Moving to a larger hash function with no underlying theory isn't very far
from the "million-bit key" algorithms you see all over the place.  Bigger
probably can't be worse, but is it really better?
The key expansion problem is why the rest of the SHA series is present, and
Whirlpool is present because of the fundamental flaw problem. The truth is
that having a diversity of options for this is simple enough: it takes only
a small amount of additional work to allow a cryptographic function to be
easily replaced, and making it replaceable by 1000 is only marginally more
difficult than by 2. The four I listed are well-built, which is why they are
the recommended ones.
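
As a sketch of how little work that indirection takes (my illustration;
the names are hypothetical): route every hash call through a table keyed
by algorithm name, so adding or swapping a backup is a one-line change.

import hashlib

HASHES = {
    "sha256": lambda d: hashlib.sha256(d).digest(),
    "sha512": lambda d: hashlib.sha512(d).digest(),
    # backups (Whirlpool, etc.) drop in here without touching callers
}

def digest(alg, data):
    return HASHES[alg](data)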

Suppose a year ago I offered the following bet:  At the next Crypto, all but
one of the widely-discussed hash functions will be shown to be fundamentally
flawed.  What odds would you have given me?
I think it would be important to change the phrasing a bit to make the odds
more quantifiable: simply change "At the next Crypto" to "By the end of the
next Crypto." With that said, considering history, I would've put the odds at
~5:1 (current hash functions seem to be broken quite often, and being the
house I want the odds in my favor). But you are correct in that this
represents a major advance in the state of the art, one that has caught large
portions of the security community completely blind. I simply took the
opportunity to push the concept of good business planning into this as a way
that allows a good escape plan should anything happen.

What odds would you have given me
on the following bet:  At the next Crypto, an attack against AES that is
substantially better than brute force will be published?  If the odds were
significantly different, how would you have justified the difference?
Very different odds actually: we as a group have a much better understanding
of block ciphers than hash functions, as evidenced by the just-published
4-for-the-price-of-2 break (cryptography list post by "Hal Finney", Subject:
More problems with hash functions, 8/20/2004). However AES has one of the
smallest security margins available, so let's put it around 10:1; I really
don't expect a break, but I would not be excessively shocked to see one made.
It is for this very reason that again I recommend to all my clients that they
have backup plans here as well: all the AES finalists, and Camellia because
of its NESSIE selection.


Let's update the question to today:  Replace "widely-discussed hash functions"
with "SHA-1 and the related family".  Keep the AES bet intact.  But let's go
out 5 years.  Now what odds do you give me?  Why?
SHA series   1:1
AES          3:1
Whirlpool    3:1 (even though it wasn't asked)
Camellia     3:1
Of SHA and Whirlpool being felled by the same attack in the next 5 years 100:1
AES and Camellia by the same attack within 5 years 30:1

SHA series at 1:1 in five years because the SHA methodology is showing some
cracks: there are only minor differences between SHA-0 and SHA-1, and the
differences between SHA-1 and SHA-256/384/512 are basically just matters of
scale. I expect to see a major break against the methodology within 10 years,
and with the current renewed interest in hash functions I expect the manpower
to be available very soon to find that break.

AES is a very solid algorithm, but its security margin is too close for me;
that is always solid evidence that a break may be just around the corner.
That various agencies don't appear to have a break is irrelevant: the current
evidence is that the general cryptographic community is < 10 years behind and
gaining quickly.

Whirlpool has the same odds as AES because the underlying cipher is based on 
the same methodology, by the same people, so if it has a flaw it is likely 
to be extremely similar.

Camellia simply does not have the examination behind it that the AES 
finalists do, something that makes me nervous and why it is only a backup 
algorithm.

SHA and Whirlpool are unlikely to fall at the same time because they have
fundamentally different cores: SHA is a dedicated hash construction, while
Whirlpool builds its primitive from a block cipher in a chaining mode.
This makes the odds of a single attack felling both slim at best. These odds
are probably slanted too far in my favor.

AES and Camellia falling to the same attack is more likely because the tools
used against block ciphers generally carry across design borders, and the
differences between the styles of Camellia and AES are simply not great
enough to prevent this. The difference in styles, though, accounts for the
additional 3.333:1 odds.

All my odds on this are conservative and based on sloppy meanings (you and I 
may have very different meaning