On Sun, Sep 27, 2009 at 02:23:16PM -0700, Fuzzy Hoodie-Monster wrote:
> As usual, I tend to agree with Peter. Consider the time scale and
> severity of problems with cryptographic algorithms vs. the time scale
> of protocol development vs. the time scale of bug creation
> attributable to complex designs. Let's make up some fake numbers,
> shall we? (After all, we're software engineers. Real numbers are for
> real engineers! Bah!)
> [snip]
> Although the numbers are fake, perhaps the orders of magnitude are
> close enough to make the point. Which is: your software will fail for
> reasons unrelated to cryptographic algorithm problems long before
> SHA-256 is broken enough to matter. Perhaps pluggability is a source
> of frequent failures, designed to solve for infrequent and
> low-severity algorithm failures. I would worry about an overfull \hbox
> (badness 10000!) long before I worried about AES-128 in CBC mode with
> a unique IV made from /dev/urandom. Between now and the time our

"AES-128 in CBC mode with a unique IV made from /dev/urandom" is
manifestly not the issue of the day.  The issue is hash function
strength.  So when would you worry about MD5?  SHA-1?  By your own
admission MD5 has already been fatally wounded and SHA-1 is headed
that way.
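
To make that concrete (a sketch of my own in Python, not anything from the thread): hashlib will happily compute an MD5 digest today, and code written against an algorithm *parameter* can retire a broken hash with a one-line change. The hard part, as I argue below, is everything around that line.

```python
import hashlib

def digest(data: bytes, algorithm: str = "sha256") -> str:
    """Hex digest with the algorithm as a parameter, so a broken hash
    (e.g. MD5) can be retired without rewriting every caller."""
    return hashlib.new(algorithm, data).hexdigest()

msg = b"attack at dawn"
print(digest(msg, "md5"))     # legacy: collision-broken, unfit for signatures
print(digest(msg, "sha256"))  # the conservative choice today
```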

> ciphers and hashes and signatures are broken, we'll have a decade to
> design and implement the next simple system to replace our current
> system. Most software developers would be overjoyed to have a full
> decade. Why are we whining?

We don't have a decade to replace MD5.  We've had a long time to replace
MD5, and even SHA-1, but we haven't finished the job yet.  The reason is
simple: there's more to it than you've stated.  Specifically, you
ignored protocol update development (you assumed one new protocol per
year, which says nothing about how long it takes to, say, update TLS),
you ignored deployment entirely, and you supposed that software
development happens at a consistent, fast clip throughout.  Software
development and deployment are usually constrained by legacy and
customer behavior, as well as resource availability, all of which vary
enormously.  Protocol upgrade development, for example, is harder than
you might think (I'm guessing, though, since you didn't address that
issue).  Complexity exists outside the protocol, too.  This is why we
must plan ahead and make reasonable trade-offs.  Devising protocols
that make upgrades easier is important, supposing that they actually
help with the deployment issues (cue your argument that they do not).
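
The mechanism in dispute is small; here's a toy sketch of algorithm negotiation (my own construction, names illustrative, not from any RFC): each side orders what it supports, and the server picks its most-preferred algorithm that the client also offers. Note that even this trivial chooser is new attack surface (downgrade handling, empty intersections), which is exactly the complexity trade-off being argued.

```python
# Hypothetical preference list, strongest first -- illustrative only.
SERVER_PREFERENCE = ["sha256", "sha1", "md5"]

def negotiate(client_offer, server_preference=SERVER_PREFERENCE):
    """Pick the first server-preferred algorithm the client also offers."""
    for alg in server_preference:
        if alg in client_offer:
            return alg
    raise ValueError("no common algorithm")

print(negotiate(["md5", "sha256"]))  # server preference wins: sha256
```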

I'm OK with making up numbers for the sake of argument.  But you have to
make up all the relevant numbers.  Then we can plug in real data where
we have it, argue about the other numbers, ...

> What if TLS v1.1 (2006) specified that the only ciphersuite was RSA
> with >= 1024-bit keys, HMAC_SHA256, and AES-128 in CBC mode. How
> likely is it that attackers will be able to reliably and economically
> attack those algorithms in 2016? Meanwhile, the comically complex
> X.509 is already a punching bag
> (http://www.blackhat.com/presentations/bh-dc-09/Marlinspike/BlackHat-DC-09-Marlinspike-Defeating-SSL.pdf
> and 
> http://www.blackhat.com/presentations/bh-usa-09/MARLINSPIKE/BHUSA09-Marlinspike-DefeatSSL-SLIDES.pdf,
> including the remote exploit in the certificate handling code itself).

We don't have crystal balls.  We don't really know what's in store for
AES, for example.  Conservative design says we should have a way to
deploy alternatives in a reasonably short period of time.
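
One cheap way to buy that deployability (a minimal sketch of my own, not a standard format): tag every stored digest with its algorithm name, so verifiers keep accepting old digests while new ones are written with a stronger hash.

```python
import hashlib

def make_digest(data: bytes, algorithm: str = "sha256") -> str:
    """Prefix the digest with its algorithm so it can be swapped later."""
    return f"{algorithm}:{hashlib.new(algorithm, data).hexdigest()}"

def verify(data: bytes, tagged: str) -> bool:
    """Recompute with whatever algorithm the stored digest names."""
    algorithm, expected = tagged.split(":", 1)
    return hashlib.new(algorithm, data).hexdigest() == expected

tag = make_digest(b"payload")   # "sha256:..." today; another name tomorrow
assert verify(b"payload", tag)
```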

You and Peter are clearly biased against TLS 1.2 specifically, and
algorithm negotiation generally.  It's also clear that you're outside
the IETF consensus on both matters _for now_.  IMO you'll need to make
better arguments, or wait long enough to be proven right by events, in
order to change that consensus.


The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com