Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-09-30 Thread Nicolas Williams
On Sun, Sep 27, 2009 at 02:23:16PM -0700, Fuzzy Hoodie-Monster wrote:
 As usual, I tend to agree with Peter. Consider the time scale and
 severity of problems with cryptographic algorithms vs. the time scale
 of protocol development vs. the time scale of bug creation
 attributable to complex designs. Let's make up some fake numbers,
 shall we? (After all, we're software engineers. Real numbers are for
 real engineers! Bah!)
 
 [snip]
 
 Although the numbers are fake, perhaps the orders of magnitude are
 close enough to make the point. Which is: your software will fail for
 reasons unrelated to cryptographic algorithm problems long before
 SHA-256 is broken enough to matter. Perhaps pluggability is a source
 of frequent failures, designed to solve for infrequent and
 low-severity algorithm failures. I would worry about an overfull \hbox
 (badness 1!) long before I worried about AES-128 in CBC mode with
 a unique IV made from /dev/urandom. Between now and the time our

AES-128 in CBC mode with a unique IV made from /dev/urandom is
manifestly not the issue of the day.  The issue is hash function
strength.  So when would you worry about MD5?  SHA-1?  By your own
admission MD5 has already been fatally wounded and SHA-1 is headed
that way.

 ciphers and hashes and signatures are broken, we'll have a decade to
 design and implement the next simple system to replace our current
 system. Most software developers would be overjoyed to have a full
 decade. Why are we whining?

We don't have a decade to replace MD5.  We've had a long time to replace
MD5, and even SHA-1 already, but we haven't done it yet.  The reason is
simple: there's more to it than you've stated.  Specifically, you
ignored protocol update development (you assumed one new protocol per
year, but that says nothing about how long it takes to, say, update
TLS) and deployment issues entirely, and you supposed that software
development happens at a consistent, fast clip throughout.
Software development and deployment are usually constrained by legacy
and customer behavior, as well as resource availability, all of which
vary enormously.  Protocol upgrade development, for example, is harder
than you might think (I'm guessing, though, since you didn't address that
issue).  Complexity exists outside the protocol.  This is why we must plan
ahead and make reasonable trade-offs.  Devising protocols that make
upgrade easier is important, supposing that they actually help with the
deployment issues (cue your argument that they do not).

I'm OK with making up numbers for the sake of argument.  But you have to
make up all the relevant numbers.  Then we can plug in real data where
we have it, argue about the other numbers, ...

 What if TLS v1.1 (2006) specified that the only ciphersuite was RSA
 with >= 1024-bit keys, HMAC_SHA256, and AES-128 in CBC mode?  How
 likely is it that attackers will be able to reliably and economically
 attack those algorithms in 2016? Meanwhile, the comically complex
 X.509 is already a punching bag
 (http://www.blackhat.com/presentations/bh-dc-09/Marlinspike/BlackHat-DC-09-Marlinspike-Defeating-SSL.pdf
 and 
 http://www.blackhat.com/presentations/bh-usa-09/MARLINSPIKE/BHUSA09-Marlinspike-DefeatSSL-SLIDES.pdf,
 including the remote exploit in the certificate handling code itself).

We don't have crystal balls.  We don't really know what's in store for
AES, for example.  Conservative design says we should have a way to
deploy alternatives in a reasonably short period of time.

You and Peter are clearly biased against TLS 1.2 specifically, and
algorithm negotiation generally.  It's also clear that you're outside
the IETF consensus on both matters _for now_.  IMO you'll need to make
better arguments, or wait enough time to be proven right by events, in
order to change that consensus.

Nico
-- 



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-09-28 Thread Fuzzy Hoodie-Monster
On Mon, Sep 7, 2009 at 6:02 AM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:

 That's a rather high cost to pay just for the ability to make a crypto fashion
 statement.  Even if the ability to negotiate hash algorithms had been built in
 from the start, this only removes the non-interoperability but doesn't remove
 the complexity issue.

As usual, I tend to agree with Peter. Consider the time scale and
severity of problems with cryptographic algorithms vs. the time scale
of protocol development vs. the time scale of bug creation
attributable to complex designs. Let's make up some fake numbers,
shall we? (After all, we're software engineers. Real numbers are for
real engineers! Bah!)

cryptographic algorithm weakness discovery rate: several per decade

cryptographic algorithm weakness severity: 5 badness points per decade
the weakness has been known; 7 badness points is considered fatal.
Let's say MD5's badness is 8 and SHA-1's is 3. AES-256's is 1, because
even after the attack it is still strong enough for most real uses.

protocol development rate: 1 per year

bug creation rate (baseline): tens per day per project

bug creation rate for bugs due to complex designs: half of baseline
(the other half is due to just regular mistakes)
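
To see how the made-up numbers play out, here is a toy projection in
Python (everything in it is fake by construction, same as the numbers
above):

    # Toy model: a weakness gains 5 "badness points" per decade it has
    # been known; 7 points is considered fatal.
    FATAL = 7
    RATE_PER_DECADE = 5

    def decades_until_fatal(badness: float) -> float:
        """Decades until an algorithm's badness reaches the fatal mark."""
        return max(0.0, (FATAL - badness) / RATE_PER_DECADE)

    for name, badness in [("MD5", 8), ("SHA-1", 3), ("AES-256", 1)]:
        print(f"{name}: {decades_until_fatal(badness):.1f} decades to fatal")
    # MD5: 0.0 (already past fatal), SHA-1: 0.8, AES-256: 1.2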

Although the numbers are fake, perhaps the orders of magnitude are
close enough to make the point. Which is: your software will fail for
reasons unrelated to cryptographic algorithm problems long before
SHA-256 is broken enough to matter. Perhaps pluggability is a source
of frequent failures, designed to solve for infrequent and
low-severity algorithm failures. I would worry about an overfull \hbox
(badness 1!) long before I worried about AES-128 in CBC mode with
a unique IV made from /dev/urandom. Between now and the time our
ciphers and hashes and signatures are broken, we'll have a decade to
design and implement the next simple system to replace our current
system. Most software developers would be overjoyed to have a full
decade. Why are we whining?

What if TLS v1.1 (2006) specified that the only ciphersuite was RSA
with >= 1024-bit keys, HMAC_SHA256, and AES-128 in CBC mode?  How
likely is it that attackers will be able to reliably and economically
attack those algorithms in 2016? Meanwhile, the comically complex
X.509 is already a punching bag
(http://www.blackhat.com/presentations/bh-dc-09/Marlinspike/BlackHat-DC-09-Marlinspike-Defeating-SSL.pdf
and 
http://www.blackhat.com/presentations/bh-usa-09/MARLINSPIKE/BHUSA09-Marlinspike-DefeatSSL-SLIDES.pdf,
including the remote exploit in the certificate handling code itself).



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-09-08 Thread Peter Gutmann
Thor Lancelot Simon t...@rek.tjls.com writes:

 I think we're largely talking past one another.  As regards "new horrible
 problems", I meant simply that if there _are_ new horrible problems such
 that we need to switch away from SHA1 in the TLS PRF, the design mistakes
 made in TLS 1.1 will make it much harder.

Well, let's move the TLS 1.2 aspect out of the discussion and look at the
underlying issues.  If you're looking at this purely from a theoretical point
of view then it's possible that the ability to use SHA-2 in the PRF is an
improvement (it may also be a step backwards since you're now relying on a
single hash rather than the dual hash used in the original design).  Since
no-one knows of any attacks, we don't know whether it's a step backwards, a
step forwards, or (most likely) a no-op.

However there's more to it than this.  Once you've got the crypto sorted out,
you need to implement it, and then deploy it.  So looking at the two options
you have:

Old: No known crypto weaknesses.
 Interoperable with all deployed implementations.
 Only one option, so not much to get wrong.

New: No known crypto weaknesses.
 Interoperable with no deployed implementations.
 Lots of flexibility and options to get wrong.

Removing the common factors (the crypto portion) and the no-op terms
(interoperable with existing implementations) we're left with:

Old: -
New: Non-interoperable.
 Complex - Likely to exhibit security flaws (from the maxim that
   complexity is the enemy of security).

That's a rather high cost to pay just for the ability to make a crypto fashion
statement.  Even if the ability to negotiate hash algorithms had been built in
from the start, this only removes the non-interoperability but doesn't remove
the complexity issue.

 As I read Ben's comments, they were _advocating_ those kinds of design
 mistakes, advocating hard-wiring particular algorithms or their parameter
 sizes into protocols,

You keep asserting that this is a mistake, but in the absence of any
cryptographic argument in support, and with practical arguments against it, it
looks like a feature to me.

 In fact, it is radically harder to replace an entire protocol, even with a
 related one, than to drop a new algorithm into an existing, properly-designed
 protocol.

A properly-designed security protocol is one that's both cryptographically
sound and simple enough that it's hard to get wrong (or at least relatively
easy to get right, admittedly not necessarily the same thing).  Adding a pile
of complexity simply so you can make a crypto fashion statement doesn't seem
to be helping here.

 If TLS 1.{0,1} had been designed to make the hash functions pluggable
 everywhere

... like that model of security protocol design IKEv1 was [0], then we'd have
all kinds of interop problems and quite probably security issues based on
exploitation of the unnecessary complexity of the protocol, for a net loss in
security and interoperability, and nothing gained.

Peter.

[0] Apologies to the IPsec folks on the list, just trying to illustrate the
point.



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-25 Thread Ben Laurie
Perry E. Metzger wrote:
 Yet another reason why you always should make the crypto algorithms you
 use pluggable in any system -- you *will* have to replace them some day.

In order to roll out a new crypto algorithm, you have to roll out new
software. So, why is anything needed for pluggability beyond versioning?

It seems to me protocol designers get all excited about this because
they want to design the protocol once and be done with it. But software
authors are generally content to worry about the new algorithm when they
need to switch to it - and since they're going to have to update their
software anyway and get everyone to install the new version, why should
they worry any sooner?
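
To be concrete about what versioning alone looks like, here is a minimal
sketch (the message layout and version-to-suite table are hypothetical,
not any real protocol's):

    # "Versioning is enough": each protocol version pins exactly one
    # algorithm suite, so rolling out a new algorithm means rolling out
    # a new version -- and therefore new software anyway.
    SUITES = {
        1: ("RSA-1024", "SHA-1", "3DES-CBC"),       # legacy
        2: ("RSA-2048", "SHA-256", "AES-128-CBC"),  # current
    }

    def parse_hello(version: int) -> tuple:
        """No per-algorithm negotiation; unknown versions are rejected."""
        try:
            return SUITES[version]
        except KeyError:
            raise ValueError(f"unsupported protocol version {version}")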

-- 
http://www.apache-ssl.org/ben.html   http://www.links.org/

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-25 Thread Jonathan Thornburg
On Tue, 25 Aug 2009, Ben Laurie wrote:
 In order to roll out a new crypto algorithm, you have to roll out new
 software. So, why is anything needed for pluggability beyond versioning?

If active attackers are part of the threat model, then you need to
worry about version-rollback attacks for as long as in-the-field software
still groks the old (now-insecure) versions, so versioning is actually
more like Byzantine versioning.
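
One standard defence, sketched below on the assumption that the peers
share an authentication key by the end of the handshake (the function
name is mine, not any particular protocol's): each side MACs a
transcript containing every version it offered, so an attacker who
stripped the newest version from the offer is caught when the MACs are
compared.

    import hmac, hashlib

    def transcript_mac(key: bytes, offered: list[int], chosen: int) -> bytes:
        # Cover both the full offer and the final choice, so silently
        # deleting the newest version from the offer changes the MAC.
        transcript = b"offered:" + bytes(offered) + b"chosen:" + bytes([chosen])
        return hmac.new(key, transcript, hashlib.sha256).digest()

    # Each peer computes this over what it sent/saw and compares with
    # hmac.compare_digest(); a mismatch reveals the rollback attempt.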

-- 
-- Jonathan Thornburg jth...@astro.indiana.edu
   Dept of Astronomy, Indiana University, Bloomington, Indiana, USA
   Washing one's hands of the conflict between the powerful and the
powerless means to side with the powerful, not to be neutral.
  -- quote by Freire / poster by Oxfam



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-25 Thread Darren J Moffat

Ben Laurie wrote:
 Perry E. Metzger wrote:
  Yet another reason why you always should make the crypto algorithms you
  use pluggable in any system -- you *will* have to replace them some day.

 In order to roll out a new crypto algorithm, you have to roll out new
 software. So, why is anything needed for pluggability beyond versioning?


Versioning catches a large part of it, but that alone isn't always
enough.  Sometimes, for on-disk formats, you need to reserve padding
space so that larger or differently formatted things can be added later.

Also, support for a new crypto algorithm can actually be added without
changes to the software code if it is truly pluggable.


An example from Solaris is how our IPsec implementation works.  If a
new algorithm is available via the Solaris crypto framework, in many
cases we don't need any code changes to support it; we just have the
end-system admin run the ipsecalgs(1M) command to update the mappings
from IPsec protocol numbers to crypto framework algorithm names (we use
PKCS#11-style mechanism names that combine algorithm and mode).  The
Solaris IPsec implementation has no crypto algorithm names in the code
base at all (though we do currently assume CBC mode; we are in the
process of adding generic CCM, GCM and GMAC support).

Now, having said all that, the PF_KEY protocol (RFC 2367) between
userland and kernel does know about crypto algorithms.



 It seems to me protocol designers get all excited about this because

Not just on-the-wire protocols but persistent on-disk formats too; on
disk is a much bigger deal.  Consider the case when you have terabytes
of data written in the old format and you need to migrate to the new
format -- you have to support both at the same time.  So not just
versioning but space padding can be helpful.
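
As a sketch of the reserve-padding idea (a hypothetical on-disk header,
not Solaris's actual format):

    import struct

    # magic, format version, algorithm name, reserved padding
    HEADER = struct.Struct("<4sB16s32s")

    def pack_header(alg_name: bytes) -> bytes:
        # 32 reserved zero bytes today; a later version can claim them
        # (say, for a longer key identifier) while old readers skip them.
        return HEADER.pack(b"BLOB", 1, alg_name.ljust(16, b"\0"), b"\0" * 32)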


--
Darren J Moffat



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-25 Thread Thor Lancelot Simon
On Tue, Aug 25, 2009 at 12:44:57PM +0100, Ben Laurie wrote:
 Perry E. Metzger wrote:
  Yet another reason why you always should make the crypto algorithms you
  use pluggable in any system -- you *will* have to replace them some day.
 
 In order to roll out a new crypto algorithm, you have to roll out new
 software. So, why is anything needed for pluggability beyond versioning?
 
 It seems to me protocol designers get all excited about this because
 they want to design the protocol once and be done with it. But software
 authors are generally content to worry about the new algorithm when they
 need to switch to it - and since they're going to have to update their
 software anyway and get everyone to install the new version, why should
 they worry any sooner?

Look at the difference between the time it requires to add an algorithm
to OpenSSL and the time it requires to add a new SSL or TLS version to
OpenSSL.  Or should we expect TLS 1.2 support any day now?  If earlier
TLS versions had been designed to allow the hash functions in the PRF
to be swapped out, the exercise of recovering from new horrible problems
with SHA1 would be vastly simpler, easier, and far quicker.  It is just
not the case that the software development exercise of implementing a
new protocol is on a scale with that of implementing a new cipher or hash
function -- it is far, far larger, and that, alone, seems to me to be
sufficient reason to design protocols so that algorithms and algorithm
parameter sizes are not fixed.
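
For illustration, here is roughly what a PRF with a swappable hash looks
like, modelled on TLS 1.2's P_hash construction (RFC 5246, section 5);
with this shape, moving off SHA-1 is a one-argument change rather than a
new protocol:

    import hmac

    def p_hash(hash_name: str, secret: bytes, seed: bytes, n: int) -> bytes:
        out, a = b"", seed                         # A(0) = seed
        while len(out) < n:
            a = hmac.new(secret, a, hash_name).digest()            # A(i)
            out += hmac.new(secret, a + seed, hash_name).digest()
        return out[:n]

    key_block = p_hash("sha256", b"master secret", b"key expansion", 48)
    # Swapping the hash: p_hash("sha512", ...) -- no other code changes.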

Thor



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-25 Thread Nicolas Williams
On Tue, Aug 25, 2009 at 12:44:57PM +0100, Ben Laurie wrote:
 In order to roll out a new crypto algorithm, you have to roll out new
 software. So, why is anything needed for pluggability beyond versioning?
 
 It seems to me protocol designers get all excited about this because
 they want to design the protocol once and be done with it. But software
 authors are generally content to worry about the new algorithm when they
 need to switch to it - and since they're going to have to update their
 software anyway and get everyone to install the new version, why should
 they worry any sooner?

Many good replies have been given already.  Here's a few more reasons to
want pluggability in the protocol:

 - Yes, we want to design the protocol once and be done with the hard
   parts of the design problem that we can reasonably expect to have to
   do only once.  Having to do things only once is not just cool.

 - Pluggability at the protocol layer enables pluggability in the
   implementations.  A pluggable design does not imply open plug-in
   interfaces, but a pluggable design does imply highly localized
   development of new plug-ins.

 - It's a good idea to promote careful thought about the future, which
   is precisely what designing a pluggable protocol does and requires.

   We may get it wrong (e.g., the SSHv2 alg nego protocol has quirks,
   some of which were discovered when we worked on RFC4462), but the
   result is likely to be much better than not putting much or any such
   thought into it.

If the protocol designers and the implementors get their respective
designs right, the best case scenario is that switching from one
cryptographic algorithm to another requires less effort in the pluggable
case than in the non-pluggable case.  Specifically, specification and
implementation of new crypto algs can be localized -- no existing
specification or code need change!  Yes, new SW must still get
deployed, and that's pretty hard, but it helps to make it easier to
develop that SW.
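
A toy sketch of that localization (the registry shape is mine, not any
particular implementation's): adding an algorithm is one registration
call in one new module, and the protocol engine never changes.

    import hashlib
    from typing import Callable

    HASHES: dict[str, Callable[[bytes], bytes]] = {}

    def register_hash(name: str, fn: Callable[[bytes], bytes]) -> None:
        HASHES[name] = fn

    register_hash("sha-256", lambda m: hashlib.sha256(m).digest())
    # Years later, a new plug-in module adds exactly one line:
    register_hash("sha3-256", lambda m: hashlib.sha3_256(m).digest())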

Nico
-- 



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-25 Thread James A. Donald



Perry E. Metzger wrote:
 Yet another reason why you always should make the crypto algorithms you
 use pluggable in any system -- you *will* have to replace them some day.

Ben Laurie wrote:
 In order to roll out a new crypto algorithm, you have to roll out new
 software. So, why is anything needed for pluggability beyond versioning?


New software has to work with new and old data files and communicate
with new and old software.

Thus full protocol negotiation has to be built in to everything from the
beginning -- which was the insight behind COM and the cure to DLL hell.
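
The core of such negotiation fits in a few lines; a minimal sketch (not
COM's mechanism, just the offer-and-pick shape):

    def choose(offered: list[str], supported: set[str]) -> str:
        # Offer is in the initiator's preference order; unknown names are
        # skipped, which is what lets old and new software interoperate.
        for alg in offered:
            if alg in supported:
                return alg
        raise ValueError("no algorithm in common")

    assert choose(["sha3-256", "sha-256"], {"sha-256", "sha-1"}) == "sha-256"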





Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git

2009-08-19 Thread Jack Lloyd
On Wed, Aug 19, 2009 at 09:28:45AM -0600, Zooko Wilcox-O'Hearn wrote:

 [*] Linus Torvalds got the idea of a Cryptographic Hash Function
 Directed Acyclic Graph structure from an earlier distributed revision
 control tool named Monotone.  He didn't go out of his way to give
 credit to Monotone, and many people mistakenly think that he invented
 the idea.

OT trivia: The idea actually predates either monotone or git; opencm
(http://opencm.org/docs.html) was using a similar technique for VCS
access control a year or two prior to monotone's first release. AFAIK
Graydon Hoare (the original monotone designer) came up with the
technique independently of the opencm design. I'm actually not certain
that opencm originated the technique, either; all I can say for
certain is that it was using it prior to monotone or git.

-Jack



Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git

2009-08-19 Thread Zooko Wilcox-O'Hearn

On Wednesday, 2009-08-19, at 10:05, Jack Lloyd wrote:


 On Wed, Aug 19, 2009 at 09:28:45AM -0600, Zooko Wilcox-O'Hearn wrote:

  [*] Linus Torvalds got the idea of a Cryptographic Hash Function
  Directed Acyclic Graph structure from an earlier distributed
  revision control tool named Monotone.

 OT trivia: The idea actually predates either monotone or git;
 opencm (http://opencm.org/docs.html) was using a similar technique
 for VCS access control a year or two prior to monotone's first
 release.


Note that I didn't say Monotone invented it.  :-)  Graydon Hoare of  
Monotone got the idea from a friend of his who, as far as we know,  
came up with it independently.  I personally got it from Eric Hughes  
who came up with it independently.  I think OpenCM got it from the  
Xanadu project who came up with it independently.  :-)


Regards,

Zooko



Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git

2009-08-19 Thread Perry E. Metzger

Zooko Wilcox-O'Hearn zo...@zooko.com writes:
 On Wednesday, 2009-08-19, at 10:05, Jack Lloyd wrote:

 On Wed, Aug 19, 2009 at 09:28:45AM -0600, Zooko Wilcox-O'Hearn wrote:

 [*] Linus Torvalds got the idea of a Cryptographic Hash Function
 Directed Acyclic Graph structure from an earlier distributed
 revision control tool named Monotone.

 OT trivia: The idea actually predates either monotone or git; opencm
 (http://opencm.org/docs.html) was using a similar technique for
 VCS access control a year or two prior to monotone's first release.

 Note that I didn't say Monotone invented it.  :-)  Graydon Hoare of
 Monotone got the idea from a friend of his who, as far as we know,
 came up with it independently.  I personally got it from Eric Hughes
 who came up with it independently.  I think OpenCM got it from the
 Xanadu project who came up with it independently.  :-)

The whole thing simply seems like a very obvious use of Merkle hash
trees. It is very understandable that many people familiar with Merkle
trees and related structures would think to apply them this way, since
it is more or less the purpose for which they were intended.
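
A minimal sketch of that use (a hash-function DAG over revision history;
this is the idea, not git's actual object format): each commit's id is a
hash over its content plus its parents' ids, so the newest id
transitively authenticates the entire history.

    import hashlib

    def commit_id(content: bytes, parent_ids: list[bytes]) -> bytes:
        h = hashlib.sha256()
        for p in sorted(parent_ids):   # parents first; order-independent
            h.update(p)
        h.update(content)
        return h.digest()

    root = commit_id(b"initial import", [])
    tip = commit_id(b"fix bug", [root])  # altering root would change tip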

Perry



Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git

2009-08-19 Thread James A. Donald
   [*] Linus Torvalds got the idea of a Cryptographic Hash Function
   Directed Acyclic Graph structure from an earlier distributed
   revision control tool named Monotone.

  OT trivia: The idea actually predates either monotone or git;
  opencm (http://opencm.org/docs.html) was using a similar technique
  for VCS access control a year or two prior to monotone's first
  release.

 Note that I didn't say Monotone invented it.  :-)  Graydon Hoare of
 Monotone got the idea from a friend of his who, as far as we know,
 came up with it independently.  I personally got it from Eric Hughes
 who came up with it independently.  I think OpenCM got it from the
 Xanadu project who came up with it independently.  :-)


Getting back towards topic, the hash function employed by Git is showing 
signs of bitrot, which, given people's desire to introduce malware 
backdoors and legal backdoors into Linux, could well become a problem in 
the very near future.





SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-19 Thread Perry E. Metzger

James A. Donald jam...@echeque.com writes:
 Getting back towards topic, the hash function employed by Git is
 showing signs of bitrot, which, given people's desire to introduce
 malware backdoors and legal backdoors into Linux, could well become a
 problem in the very near future.

I believe attacks on Git's use of SHA-1 would require second pre-image
attacks, and I don't think anyone has demonstrated such a thing for
SHA-1 at this point. None the less, I agree that it would be better if
Git eventually used better hash functions. Attacks only get better with
time, and SHA-1 is certainly creaking.

Emphasis on "eventually", however.  This is an "as soon as convenient",
not an "as soon as possible", sort of situation -- more like within a
year than within a week.

Yet another reason why you always should make the crypto algorithms you
use pluggable in any system -- you *will* have to replace them some day.
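
To make "pluggable" concrete in git's setting: git names a blob by
hashing the header "blob <size>\0" followed by the content (that header
is git's real format; the swappable hash parameter below is the
hypothetical part, since git hard-wires SHA-1 here).

    import hashlib

    def object_id(content: bytes, hash_name: str = "sha1") -> str:
        header = b"blob %d\x00" % len(content)
        return hashlib.new(hash_name, header + content).hexdigest()

    print(object_id(b"hello world\n"))            # matches `git hash-object`
    print(object_id(b"hello world\n", "sha256"))  # the drop-in replacement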

Perry
--
Perry E. Metzger                  pe...@piermont.com
