Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-25 Thread Ben Laurie
Perry E. Metzger wrote:
 Yet another reason why you always should make the crypto algorithms you
 use pluggable in any system -- you *will* have to replace them some day.

In order to roll out a new crypto algorithm, you have to roll out new
software. So, why is anything needed for pluggability beyond versioning?

It seems to me protocol designers get all excited about this because
they want to design the protocol once and be done with it. But software
authors are generally content to worry about the new algorithm when they
need to switch to it - and since they're going to have to update their
software anyway and get everyone to install the new version, why should
they worry any sooner?

-- 
http://www.apache-ssl.org/ben.html   http://www.links.org/

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-25 Thread Jonathan Thornburg
On Tue, 25 Aug 2009, Ben Laurie wrote:
 In order to roll out a new crypto algorithm, you have to roll out new
 software. So, why is anything needed for pluggability beyond versioning?

If active attackers are part of the threat model, then you need to
worry about version-rollback attacks for as long as in-the-field software
still groks the old (now-insecure) versions.  So plain versioning isn't
enough: the version negotiation itself has to be robust against an active
attacker forcing both ends back down to a broken version.
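
To make the rollback point concrete, here is a minimal sketch (Python, with
a made-up message format; it mirrors the idea behind transcript-binding
checks such as TLS Finished messages rather than any particular protocol)
of how a naive "pick the highest common version" negotiation can be
silently downgraded, and how MACing the original offer under the eventual
session key detects the tampering:

    # Minimal sketch: naive version negotiation vs. an active attacker who
    # strips the newest version from the offer. Message format is invented.
    import hmac, hashlib

    def negotiate(offered, supported):
        # Naive negotiation: highest version both sides claim to support.
        return max(v for v in offered if v in supported)

    def transcript_mac(key, offered, chosen):
        # Bind what each side believes was offered into an authenticated tag.
        msg = repr((sorted(offered), chosen)).encode()
        return hmac.new(key, msg, hashlib.sha256).digest()

    session_key = b"derived-later-from-key-exchange"  # assumed authenticated
    client_offer = [1, 2, 3]                          # 3 is the new, secure version
    server_supports = {1, 2, 3}

    # An attacker on the wire strips version 3 from the client's offer.
    tampered_offer = [1, 2]
    chosen = negotiate(tampered_offer, server_supports)   # server picks 2

    # At handshake end, both sides MAC what they *think* was offered.
    client_view = transcript_mac(session_key, client_offer, chosen)
    server_view = transcript_mac(session_key, tampered_offer, chosen)
    print("downgrade detected:", not hmac.compare_digest(client_view, server_view))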

-- 
-- Jonathan Thornburg jth...@astro.indiana.edu
   Dept of Astronomy, Indiana University, Bloomington, Indiana, USA
   Washing one's hands of the conflict between the powerful and the
powerless means to side with the powerful, not to be neutral.
  -- quote by Freire / poster by Oxfam



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-25 Thread Darren J Moffat

Ben Laurie wrote:

Perry E. Metzger wrote:

Yet another reason why you always should make the crypto algorithms you
use pluggable in any system -- you *will* have to replace them some day.


In order to roll out a new crypto algorithm, you have to roll out new
software. So, why is anything needed for pluggability beyond versioning?


Versioning catches a large part of it, but that alone isn't always 
enough.  Sometimes, for on-disk formats, you need to reserve padding space 
to add larger or differently formatted things later.


Also, support for a new crypto algorithm can actually be added without 
changes to the software code if it is truly pluggable.


An example from Solaris is how our IPsec implementation works.  If 
a new algorithm is available via the Solaris crypto framework, in many 
cases we don't need any code changes to support it; the end-system admin 
just runs the ipsecalgs(1M) command to update the mappings from IPsec 
protocol numbers to crypto framework algorithm names (we use 
PKCS#11-style mechanism names that combine algorithm and mode).  The 
Solaris IPsec implementation has no crypto algorithm names in the code 
base at all (we do currently assume CBC mode, though we are in the 
process of adding generic CCM, GCM and GMAC support).
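
For illustration only, a rough sketch of that table-driven idea in Python,
with made-up mechanism names in the style of PKCS#11 (the real
ipsecalgs(1M) database and Solaris crypto framework interfaces look nothing
like this): the protocol code dispatches purely on an algorithm number, so
adding an algorithm means adding a table entry rather than touching code:

    # Hypothetical number-to-mechanism table, nominally maintained by an
    # admin tool rather than compiled into the protocol code.
    ESP_ENCR_ALGS = {
        3:  {"mechanism": "CKM_DES3_CBC", "key_bits": 192},
        12: {"mechanism": "CKM_AES_CBC",  "key_bits": 256},
        # A future entry for a new cipher goes here; no code changes needed.
    }

    def crypto_framework_encrypt(mechanism, key, iv, plaintext):
        # Stand-in for a pluggable provider that looks the mechanism up
        # among whatever implementations happen to be loaded.
        print(f"encrypting {len(plaintext)} bytes with {mechanism}")
        return plaintext  # placeholder only

    def encrypt_payload(alg_number, key, iv, plaintext):
        # The IPsec-side code never names an algorithm, only a number.
        entry = ESP_ENCR_ALGS.get(alg_number)
        if entry is None:
            raise ValueError(f"no mechanism configured for algorithm {alg_number}")
        return crypto_framework_encrypt(entry["mechanism"], key, iv, plaintext)

    encrypt_payload(12, b"k" * 32, b"i" * 16, b"hello")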


Now, having said all that, the PF_KEY protocol (RFC 2367) between userland 
and the kernel does know about crypto algorithms.



It seems to me protocol designers get all excited about this because


It's not just on-the-wire protocols but persistent on-disk formats, and on 
disk it is a much bigger deal.  Consider the case where you have terabytes of 
data written in the old format and you need to migrate to the new format 
 - you have to support both at the same time.  So not just versioning 
but space padding can be helpful.
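
To illustrate (Python, with an invented field layout): an on-disk record
header that carries an explicit format version plus reserved padding, so a
later revision can, say, grow the MAC field without rewriting terabytes of
existing records:

    import struct

    # magic, format version, mac length, 16-byte mac, 40 reserved bytes
    HEADER = struct.Struct("<4sHH16s40s")

    def write_header(version, mac):
        reserved = b"\x00" * 40          # space held back for future formats
        return HEADER.pack(b"BLOB", version, len(mac),
                           mac.ljust(16, b"\x00"), reserved)

    def read_header(buf):
        magic, version, mac_len, mac, _reserved = HEADER.unpack_from(buf)
        if version == 1:
            return mac[:mac_len]         # the original 16-byte MAC
        if version == 2:
            # A hypothetical later format could reinterpret the reserved
            # bytes, e.g. as the tail of a 32-byte MAC, while version-1
            # readers keep working on old data.
            raise NotImplementedError("version 2 reader not shipped yet")
        raise ValueError(f"unknown on-disk format version {version}")

    hdr = write_header(1, b"0123456789abcdef")
    assert read_header(hdr) == b"0123456789abcdef"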


--
Darren J Moffat



Re: SHA-1 and Git

2009-08-25 Thread Paul Hoffman
At 9:39 AM -0400 8/25/09, Perry E. Metzger wrote:
Ben Laurie b...@links.org writes:
  Perry E. Metzger wrote:
 Yet another reason why you always should make the crypto algorithms you
 use pluggable in any system -- you *will* have to replace them some day.

 In order to roll out a new crypto algorithm, you have to roll out new
 software. So, why is anything needed for pluggability beyond versioning?

For one example, it is not theoretical that some people will often want
to use different algorithms than others and will need negotiation. Some
things like SSH have approximately done this right. Others have done
this quite wrong.

When we were planning out IPSec, a key management protocol, SKIP, was
proposed that had no opportunity for negotiating algorithms at all --
they were burned into the metal. As it happens, by now we would have had
to completely scrap it.

Of course you can go too far in the other direction. IPSec is a total
mess because there are far too many choices -- the standard key
management protocols are so jelly-like as to be incomprehensible and
unconfigurable.

Perry is completely correct on the two previous paragraphs. Hard-coded 
algorithms, particularly asymmetric encryption/signing and hash algorithms, 
will lead to later scrapping and a transition for the entire protocol. But it 
is quite easy to build a protocol with too many knobs, and IPsec is living 
proof of that. TLS's fixed, registered ciphersuites are far from perfect, but 
in retrospect, they seem a lot better from an operations / deployment 
standpoint than IPsec.

  It seems to me protocol designers get all excited about this because
  they want to design the protocol once and be done with it. But software
  authors are generally content to worry about the new algorithm when they
  need to switch to it - and since they're going to have to update their
  software anyway and get everyone to install the new version, why should
  they worry any sooner?

You speak of "beyond versioning" as though introducing versioning or
algorithm negotiation were a trivial thing, but I don't think you can
generally tack such things on after the fact. You have to think about
them carefully from the start.

A different answer to Ben would be "because not planning sooner causes your 
software users more grief."  If you build in both algorithm agility and a few 
probably-good algorithms, the operational changes needed when one algorithm 
fails are small. Later software updates that contain other changes can also 
include new algorithms that are suspected to be good even if all of the 
original ones fail.
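
A small sketch of that operational point (Python, hypothetical algorithm
names and configuration, not modelled on any real protocol): ship several
probably-good algorithms from day one, negotiate from an ordered preference
list, and demote a broken one by configuration rather than by redesigning
the protocol:

    SHIPPED_HASHES = ["sha3-256", "sha-256", "sha-1"]   # built in from day one

    def pick_hash(our_prefs, peer_supports, banned=frozenset()):
        # First mutually acceptable algorithm in our preference order.
        for alg in our_prefs:
            if alg in peer_supports and alg not in banned:
                return alg
        raise RuntimeError("no mutually acceptable hash algorithm")

    # Day one: an old peer that only speaks SHA-1 still interoperates.
    print(pick_hash(SHIPPED_HASHES, {"sha-1"}))                    # sha-1

    # SHA-1 falls.  Operators ban it by configuration; peers negotiate one
    # of the already-shipped replacements with no new protocol version.
    print(pick_hash(SHIPPED_HASHES, {"sha-256", "sha-1"},
                    banned={"sha-1"}))                             # sha-256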

--Paul Hoffman, Director
--VPN Consortium



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-25 Thread Thor Lancelot Simon
On Tue, Aug 25, 2009 at 12:44:57PM +0100, Ben Laurie wrote:
 Perry E. Metzger wrote:
  Yet another reason why you always should make the crypto algorithms you
  use pluggable in any system -- you *will* have to replace them some day.
 
 In order to roll out a new crypto algorithm, you have to roll out new
 software. So, why is anything needed for pluggability beyond versioning?
 
 It seems to me protocol designers get all excited about this because
 they want to design the protocol once and be done with it. But software
 authors are generally content to worry about the new algorithm when they
 need to switch to it - and since they're going to have to update their
 software anyway and get everyone to install the new version, why should
 they worry any sooner?

Look at the difference between the time it requires to add an algorithm
to OpenSSL and the time it requires to add a new SSL or TLS version to
OpenSSL.  Or should we expect TLS 1.2 support any day now?  If earlier
TLS versions had been designed to allow the hash functions in the PRF
to be swapped out, the exercise of recovering from new horrible problems
with SHA1 would be vastly simpler, easier, and far quicker.  It is just
not the case that the software development exercise of implementing a
new protocol is on a scale with that of implementing a new cipher or hash
function -- it is far, far larger, and that, alone, seems to me to be
sufficient reason to design protocols so that algorithms and algorithm
parameter sizes are not fixed.
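
For illustration, a simplified sketch in the spirit of the TLS 1.2 P_hash
construction (Python; this shows only the general idea, not the complete
TLS PRF): when the PRF takes the hash function as a parameter, swapping out
a broken hash is local to one argument instead of requiring a new protocol:

    import hmac, hashlib

    def p_hash(digestmod, secret, seed, length):
        # HMAC expansion: A(i) = HMAC(secret, A(i-1)); output blocks are
        # HMAC(secret, A(i) + seed), truncated to the requested length.
        out, a = b"", seed
        while len(out) < length:
            a = hmac.new(secret, a, digestmod).digest()
            out += hmac.new(secret, a + seed, digestmod).digest()
        return out[:length]

    secret = b"master secret"
    seed = b"key expansion" + b"\x00" * 32

    old = p_hash(hashlib.sha1, secret, seed, 48)    # what a fixed design locks in
    new = p_hash(hashlib.sha256, secret, seed, 48)  # the swap is one argument
    print(len(old), len(new))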

Thor



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-25 Thread Nicolas Williams
On Tue, Aug 25, 2009 at 12:44:57PM +0100, Ben Laurie wrote:
 In order to roll out a new crypto algorithm, you have to roll out new
 software. So, why is anything needed for pluggability beyond versioning?
 
 It seems to me protocol designers get all excited about this because
 they want to design the protocol once and be done with it. But software
 authors are generally content to worry about the new algorithm when they
 need to switch to it - and since they're going to have to update their
 software anyway and get everyone to install the new version, why should
 they worry any sooner?

Many good replies have been given already.  Here's a few more reasons to
want pluggability in the protocol:

 - Yes, we want to design the protocol once and be done with the hard
   parts of the design problem that we can reasonably expect to have to
   do only once.  Having to do things only once isn't just cool; it's a
   real saving of effort.

 - Pluggability at the protocol layer enables pluggability in the
   implementations.  A pluggable design does not imply open plug-in
   interfaces, but a pluggable design does imply highly localized
   development of new plug-ins (see the sketch after this list).

 - It's a good idea to promote careful thought about the future, which is
   precisely what designing a pluggable protocol does and requires.

   We may get it wrong (e.g., the SSHv2 alg nego protocol has quirks,
   some of which were discovered when we worked on RFC4462), but the
   result is likely to be much better than not putting much or any such
   thought into it.
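
The sketch referred to above (Python, with invented names): a minimal
registry in which the core code looks plug-ins up by algorithm name, so
adding an algorithm touches only its own module plus a single registration
call, and nothing else needs to change:

    import hashlib
    from typing import Callable, Dict

    HASH_PLUGINS: Dict[str, Callable[[bytes], bytes]] = {}

    def register_hash(name: str, fn: Callable[[bytes], bytes]) -> None:
        HASH_PLUGINS[name] = fn

    def digest(name: str, data: bytes) -> bytes:
        # Core code dispatches by name; it never enumerates algorithms.
        try:
            return HASH_PLUGINS[name](data)
        except KeyError:
            raise ValueError(f"hash algorithm {name!r} not plugged in") from None

    # Existing algorithms, registered once at start-up.
    register_hash("sha-256", lambda d: hashlib.sha256(d).digest())

    # A later algorithm is one registration in its own module; digest() and
    # all of its callers are untouched.
    register_hash("sha3-256", lambda d: hashlib.sha3_256(d).digest())

    print(digest("sha3-256", b"git object").hex()[:16])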

If the protocol designers and the implementors get their respective
designs right, the best case scenario is that switching from one
cryptographic algorithm to another requires less effort in the pluggable
case than in the non-pluggable case.  Specifically, specification and
implementation of new crypto algs can be localized -- no existing
specification or code needs to change!  Yes, new SW must still get
deployed, and that's pretty hard, but it helps that the new SW is easier
to develop.

Nico
-- 



Small-key DSA variant

2009-08-25 Thread Hal Finney
While thinking about Zooko's problem with needing small keys, I seem to
have recalled an idea for a DSA variant that uses small keys. I can't
remember where I heard it, maybe at a Crypto rump session. I wonder if
anyone recognizes it.

Let the DSA public key be y = g^x mod p. DSA signatures are defined by:

r = g^k mod p mod q
s satisfies sk - rx = h mod q

Let's define R = g^k mod p. Then r = R mod q. The verification raises
both sides of the s equation to the g power:

g^(sk) / g^(rx) = g^h mod p, or equivalently:
R^s / y^r = g^h mod p

The old ElGamal signature would have been (R,s) (and maybe wouldn't have
used a subgroup). Then this second equation would be the verification
(substituting R for r in the y exponentiation works because the exponent
arithmetic is implicitly mod q). But DSA improved this by using (r,s)
which is smaller. We can't substitute r for R globally in the verification
equation; it won't work. But we can solve for R:

R = g^(h/s) * y^(r/s) mod p

and take this mod q:

r = R mod q = g^(h/s) * y^(r/s) mod p mod q

which is the conventional DSA verification equation.
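
As a sanity check of the algebra, here is a toy-parameter walk-through in
Python (tiny numbers, obviously not secure) showing that the ElGamal-style
check R^s / y^r = g^h mod p and the rearranged conventional DSA check
accept the same signature:

    import secrets

    p, q, g = 23, 11, 2          # toy group: q divides p-1, g has order q mod p

    x = secrets.randbelow(q - 1) + 1        # private key
    y = pow(g, x, p)                        # public key
    h = 7                                   # message "hash", reduced mod q

    # Sign: r = (g^k mod p) mod q,  s = k^-1 * (h + r*x) mod q
    while True:
        k = secrets.randbelow(q - 1) + 1
        R = pow(g, k, p)
        r = R % q
        s = (pow(k, -1, q) * (h + r * x)) % q
        if r != 0 and s != 0:
            break

    # ElGamal-style check: R^s / y^r == g^h (mod p)
    assert (pow(R, s, p) * pow(pow(y, r, p), -1, p)) % p == pow(g, h, p)

    # Conventional DSA check: recover R from (r, s) alone, then reduce mod q
    w = pow(s, -1, q)
    R_recovered = (pow(g, h * w % q, p) * pow(y, r * w % q, p)) % p
    assert R_recovered == R and R_recovered % q == r
    print("both verification forms agree")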

The small-key variant I'm asking about goes back to the ElGamal
verification equation:

R^s / y^r = g^h mod p

but instead of solving for R, we solve for y in a similar way:

y = R^(s/r) / g^(h/r) mod p

This means that with an ElGamal style (R,s) signature, the verifier
can derive y = g^x mod p. So he doesn't have to know the public key.
Instead of letting the public key be y, we can make it H(y) for some
hash function H. Then the verification is:

H(y) = H( R^(s/r) / g^(h/r) mod p )

We have increased the signature size from twice the size of q to the
sum of the sizes of p and q (which is much bigger, for typical non-EC
DSA groups). But we have decreased the public key from the size of p to
the size of H.
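
And a matching sketch of the variant itself (same toy parameters; an
arbitrarily truncated SHA-256 stands in for the short hash H): the
signature is the ElGamal-style pair (R, s), the verifier recovers y from
it, and only H(y) is published as the key:

    import hashlib, secrets

    p, q, g = 23, 11, 2          # toy group as above, not secure

    def H(y):
        # Stand-in for a short hash of the public value y.
        return hashlib.sha256(str(y).encode()).hexdigest()[:10]

    x = secrets.randbelow(q - 1) + 1
    y = pow(g, x, p)
    pubkey = H(y)                 # the published key is only the hash of y

    h = 7                         # message "hash", reduced mod q
    while True:
        k = secrets.randbelow(q - 1) + 1
        R = pow(g, k, p)
        r = R % q
        if r == 0:
            continue
        s = (pow(k, -1, q) * (h + r * x)) % q
        if s != 0:
            break
    signature = (R, s)            # R is the size of p, so the signature grows

    # Verify: recover y = R^(s/r) / g^(h/r) mod p, then compare hashes.
    r = R % q
    r_inv = pow(r, -1, q)
    y_recovered = (pow(R, s * r_inv % q, p) *
                   pow(pow(g, h * r_inv % q, p), -1, p)) % p
    assert H(y_recovered) == pubkey
    print("signature verifies against the hashed public key")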

Now the interesting question is whether H can be the size of the security
parameter, rather than twice the size like q. Maybe we could use an 80
bit H. This would make for a very small public key (again at the expense
of a much larger signature).

The idea would be that y is a fixed target, and a forger needs to come
up with an R,s whose hash matches the hash of y. It's a second preimage
problem, not a collision problem.

One issue is that there are many keys in the world, and perhaps a forger
is happy to forge anyone's signature. (Or more to the point, a verifier
really wants to trust all signatures out there.) Then the forger only
needs to match one of H(y) for any of potentially millions or billions
of y values, and this reduces his work by a factor equal to the number of
keys. So we probably do have to boost the key size up to accommodate this
issue. But it could still probably be smaller than for even ECDSA keys.

Anyway, that's the concept. Does anyone recognize it?

Hal Finney



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-25 Thread James A. Donald



Perry E. Metzger wrote:

Yet another reason why you always should make the crypto algorithms you
use pluggable in any system -- you *will* have to replace them some day.


Ben Laurie wrote:

In order to roll out a new crypto algorithm, you have to roll out new
software. So, why is anything needed for pluggability beyond versioning?


New software has to work with new and old data files and communicate 
with new and old software.


Thus full protocol negotiation has to be built into everything from the 
beginning - which was the insight behind COM and the cure for DLL hell.





Re: Certainty

2009-08-25 Thread Hal Finney
Paul Hoffman wrote:
 Getting a straight answer on whether or not the recent preimage work
 is actually related to the earlier collision work would be useful.

I am not clueful enough about this work to give an authoritative answer.
My impression is that they use some of the same general techniques and
weaknesses, for example the ability to make modifications to message
words and compensate for them with modifications to other message words
that cancel. However, I think there are differences as well; the preimage
work often uses meet-in-the-middle techniques, which I don't think apply
to collisions.

There was an amusing demo at the rump session though of a different
kind of preimage technique which does depend directly on collisions. It
uses the Merkle-Damgard structure of MD5 and creates lots of blocks that
collide (possibly with different prefixes, I didn't look at it closely).
Then they were able to show a second preimage attack on a chosen message.

That is, they could create a message and have a signer sign it using MD5.
Then they could create more messages at will that had the same MD5 hash.
In this demo, the messages started with text that said "Dear so-and-so"
and then had more readable text, followed by binary data. They were able
to change the person's name in the first line to that of a volunteer
from the audience, then modify the binary data and create a new version
of the message with the same MD5 hash, in just a second or two! Very
amusing demo.

Google for "trojan message attack" to find details, or read:
www.di.ens.fr/~bouillaguet/pub/SAC2009.pdf
slides (not too informative):
http://rump2009.cr.yp.to/ccbe0b9600bfd9f7f5f62ae1d5e915c8.pdf

Hal Finney



Re: Certainty

2009-08-25 Thread Perry E. Metzger

h...@finney.org (Hal Finney) writes:
 Paul Hoffman wrote:
 Getting a straight answer on whether or not the recent preimage work
 is actually related to the earlier collision work would be useful.
[...]
 There was an amusing demo at the rump session though of a different
 kind of preimage technique which does depend directly on collisions. It
 uses the Merkle-Damgard structure of MD5 and creates lots of blocks that
 collide (possibly with different prefixes, I didn't look at it closely).
 Then they were able to show a second preimage attack on a chosen message.

 That is, they could create a message and have a signer sign it using MD5.
 Then they could create more messages at will that had the same MD5 hash.
 In this demo, the messages started with text that said "Dear so-and-so"
 and then had more readable text, followed by binary data. They were able
 to change the person's name in the first line to that of a volunteer
 from the audience, then modify the binary data and create a new version
 of the message with the same MD5 hash, in just a second or two! Very
 amusing demo.

That was the restricted preimage attack that I earlier mentioned
seeing in the video of the rump session. It isn't fully general, but it
is certainly disturbing.

As we're often fond of saying, attacks only get better with time, they
never roll back.

Perry
