Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-09-30 Thread Nicolas Williams
On Sun, Sep 27, 2009 at 02:23:16PM -0700, Fuzzy Hoodie-Monster wrote:
> As usual, I tend to agree with Peter. Consider the time scale and
> severity of problems with cryptographic algorithms vs. the time scale
> of protocol development vs. the time scale of bug creation
> attributable to complex designs. Let's make up some fake numbers,
> shall we? (After all, we're software engineers. Real numbers are for
> real engineers! Bah!)
> 
> [snip]
> 
> Although the numbers are fake, perhaps the orders of magnitude are
> close enough to make the point. Which is: your software will fail for
> reasons unrelated to cryptographic algorithm problems long before
> SHA-256 is broken enough to matter. Perhaps pluggability is a source
> of frequent failures, designed to solve for infrequent and
> low-severity algorithm failures. I would worry about an overfull \hbox
> (badness 1!) long before I worried about AES-128 in CBC mode with
> a unique IV made from /dev/urandom. Between now and the time our

"AES-128 in CBC mode with a unique IV made from /dev/urandom" is
manifestly not the issue of the day.  The issue is hash function
strength.  So when would you worry about MD5?  SHA-1?  By your own
admission MD5 has already been fatally wounded and SHA-1 is headed
that way.

> ciphers and hashes and signatures are broken, we'll have a decade to
> design and implement the next simple system to replace our current
> system. Most software developers would be overjoyed to have a full
> decade. Why are we whining?

We don't have a decade to replace MD5.  We've had a long time to replace
MD5, and even SHA-1 already, but we haven't done it yet.  The reason is
simple: there's more to it than you've stated.  Specifically, you ignored
protocol update development (you assumed one new protocol per year, which
says nothing about how long it takes to, say, update TLS), you ignored
deployment issues entirely, and you supposed that software development
happens at a consistent, fast clip throughout.
Software development and deployment are usually constrained by legacy
and customer behavior, as well as resource availability, all of which
varies enormously.  Protocol upgrade development, for example, is harder
than you might think (I'm guessing though, since you didn't address that
issue).  Complexity exists outside the protocol, too.  This is why we must plan
ahead and make reasonable trade-offs.  Devising protocols that make
upgrade easier is important, supposing that they actually help with the
deployment issues (cue your argument that they do not).

I'm OK with making up numbers for the sake of argument.  But you have to
make up all the relevant numbers.  Then we can plug in real data where
we have it, argue about the other numbers, ...

> What if TLS v1.1 (2006) had specified that the only ciphersuite was RSA
> with >= 1024-bit keys, HMAC_SHA256, and AES-128 in CBC mode? How
> likely is it that attackers will be able to reliably and economically
> attack those algorithms in 2016? Meanwhile, the comically complex
> X.509 is already a punching bag
> (http://www.blackhat.com/presentations/bh-dc-09/Marlinspike/BlackHat-DC-09-Marlinspike-Defeating-SSL.pdf
> and 
> http://www.blackhat.com/presentations/bh-usa-09/MARLINSPIKE/BHUSA09-Marlinspike-DefeatSSL-SLIDES.pdf,
> including the remote exploit in the certificate handling code itself).

We don't have crystal balls.  We don't really know what's in store for
AES, for example.  Conservative design says we should have a way to
deploy alternatives in a reasonably short period of time.

You and Peter are clearly biased against TLS 1.2 specifically, and
algorithm negotiation generally.  It's also clear that you're outside
the IETF consensus on both matters _for now_.  IMO you'll need to make
better arguments, or wait long enough to be proven right by events, in
order to change that consensus.

Nico
-- 



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-09-28 Thread Fuzzy Hoodie-Monster
On Mon, Sep 7, 2009 at 6:02 AM, Peter Gutmann  wrote:

> That's a rather high cost to pay just for the ability to make a crypto fashion
> statement.  Even if the ability to negotiate hash algorithms had been built in
> from the start, this only removes the non-interoperability but doesn't remove
> the complexity issue.

As usual, I tend to agree with Peter. Consider the time scale and
severity of problems with cryptographic algorithms vs. the time scale
of protocol development vs. the time scale of bug creation
attributable to complex designs. Let's make up some fake numbers,
shall we? (After all, we're software engineers. Real numbers are for
real engineers! Bah!)

cryptographic algorithm weakness discovery rate: several per decade

cryptographic algorithm weakness severity: 5 badness points per decade
the weakness has been known; 7 badness points is considered fatal.
Let's say MD5's badness is 8 and SHA-1's is 3. AES-256's is 1, because
even after the attack it is still strong enough for most real uses.

protocol development rate: 1 per year

bug creation rate (baseline): tens per day per project

bug creation rate for bugs due to complex designs: half of baseline
(the other half is due to just regular mistakes)

Although the numbers are fake, perhaps the orders of magnitude are
close enough to make the point. Which is: your software will fail for
reasons unrelated to cryptographic algorithm problems long before
SHA-256 is broken enough to matter. Perhaps pluggability is a source
of frequent failures, designed to solve for infrequent and
low-severity algorithm failures. I would worry about an overfull \hbox
(badness 1!) long before I worried about AES-128 in CBC mode with
a unique IV made from /dev/urandom. Between now and the time our
ciphers and hashes and signatures are broken, we'll have a decade to
design and implement the next simple system to replace our current
system. Most software developers would be overjoyed to have a full
decade. Why are we whining?
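
A back-of-the-envelope calculation with the made-up numbers above (a Python
sketch; every constant is an assumption taken from this message, not data):

    YEARS = 10
    bugs_per_day = 20                  # "tens per day per project"
    complexity_fraction = 0.5          # half of baseline blamed on complex designs
    complexity_bugs = bugs_per_day * complexity_fraction * 365 * YEARS
    algorithm_breaks = 3               # "several per decade"
    print(complexity_bugs, algorithm_breaks)   # ~36500 vs. 3: the ratio is the point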

What if TLS v1.1 (2006) had specified that the only ciphersuite was RSA
with >= 1024-bit keys, HMAC_SHA256, and AES-128 in CBC mode? How
likely is it that attackers will be able to reliably and economically
attack those algorithms in 2016? Meanwhile, the comically complex
X.509 is already a punching bag
(http://www.blackhat.com/presentations/bh-dc-09/Marlinspike/BlackHat-DC-09-Marlinspike-Defeating-SSL.pdf
and 
http://www.blackhat.com/presentations/bh-usa-09/MARLINSPIKE/BHUSA09-Marlinspike-DefeatSSL-SLIDES.pdf,
including the remote exploit in the certificate handling code itself).
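
For concreteness, a minimal sketch of "AES-128 in CBC mode with a unique IV
made from /dev/urandom", assuming the third-party Python `cryptography`
package (an assumption, not something from this thread); the HMAC_SHA256
half of the suite is deliberately omitted here:

    import os
    from cryptography.hazmat.primitives import padding
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def encrypt_cbc(key: bytes, plaintext: bytes) -> bytes:
        iv = os.urandom(16)                           # fresh, unique IV per message
        padder = padding.PKCS7(128).padder()          # CBC needs block-aligned input
        padded = padder.update(plaintext) + padder.finalize()
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        return iv + enc.update(padded) + enc.finalize()

    ciphertext = encrypt_cbc(os.urandom(16), b"attack at dawn")   # 128-bit key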



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-09-08 Thread Peter Gutmann
Thor Lancelot Simon  writes:

>I think we're largely talking past one another.  As regards "new horrible
>problems" I meant simply that if there _are_ "new horrible problems_ such
>that we need to switch away from SHA1 in the TLS PRF, the design mistakes
>made in TLS 1.1 will make it much harder.

Well, let's move the TLS 1.2 aspect out of the discussion and look at the
underlying issues.  If you're looking at this purely from a theoretical point
of view then it's possible that the ability to use SHA-2 in the PRF is an
improvement (it may also be a step backwards since you're now relying on a
single hash rather than the dual hash used in the original design).  Since
no-one knows of any attacks, we don't know whether it's a step backwards, a step
forwards, or (most likely) a no-op.

However there's more to it than this.  Once you've got the crypto sorted out,
you need to implement it, and then deploy it.  So looking at the two options
you have:

Old: No known crypto weaknesses.
 Interoperable with all deployed implementations.
 Only one option, so not much to get wrong.

New: No known crypto weaknesses.
 Interoperable with no deployed implementations.
 Lots of flexibility and options to get wrong.

Removing the common factors (the crypto portion) and the no-op terms
("interoperable with existing implementations") we're left with:

Old: -
New: Non-interoperable.
 Complex -> Likely to exhibit security flaws (from the maxim that
   complexity is the enemy of security).

That's a rather high cost to pay just for the ability to make a crypto fashion
statement.  Even if the ability to negotiate hash algorithms had been built in
from the start, this only removes the non-interoperability but doesn't remove
the complexity issue.

>As I read Ben's comments, they were _advocating_ those kinds of design
>mistakes, advocating hard-wiring particular algorithms or their parameter
>sizes into protocols,

You keep asserting that this is a mistake, but in the absence of any
cryptographic argument in support, and with practical arguments against it, it
looks like a feature to me.

>In fact, it is radically harder to replace an entire protocol, even with a
>related one, than to drop a new algorithm into an existing, properly-designed
>protocol.

A properly-designed security protocol is one that's both cryptographically
sound and simple enough that it's hard to get wrong (or at least relatively
easy to get right, admittedly not necessarily the same thing).  Adding a pile
of complexity simply so you can make a crypto fashion statement doesn't seem
to be helping here.

>If TLS 1.{0,1} had been designed to make the hash functions pluggable
>everywhere

... like that model of security protocol design IKEv1 was [0], then we'd have
all kinds of interop problems and quite probably security issues based on
exploitation of the unnecessary complexity of the protocol, for a net loss in
security and interoperability, and nothing gained.

Peter.

[0] Apologies to the IPsec folks on the list, just trying to illustrate the
point.



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-31 Thread Thor Lancelot Simon
On Thu, Aug 27, 2009 at 11:30:08AM +1200, Peter Gutmann wrote:
> 
> Thor Lancelot Simon  writes:
> 
> >the exercise of recovering from new horrible problems with SHA1 would be
> >vastly simpler, easier, and far quicker
> 
> What new horrible problems in SHA1 (as it's used in SSL/TLS)?  What old
> horrible problems, for that matter?  The only place I can think of offhand
> where it's used in a manner where it might be vulnerable is for DSA sigs, and
> how many of those have you seen in the wild?

I think we're largely talking past one another.  As regards "new horrible
problems" I meant simply that if there _are_ "new horrible problems_ such
that we need to switch away from SHA1 in the TLS PRF, the design mistakes
made in TLS 1.1 will make it much harder.

As I read Ben's comments, they were _advocating_ those kinds of design
mistakes, advocating hard-wiring particular algorithms or their parameter
sizes into protocols, because -- as I understood him -- both replacing
an algorithm and replacing a whole protocol are just "software upgrades"
and all software upgrades are alike.

Well, I don't think it's true that all software upgrades are alike in the
relevant way.  In fact, it is radically harder to replace an entire
protocol, even with a related one, than to drop a new algorithm into an
existing, properly-designed protocol.  It may be no different for _users_,
but the difference for _implementers_ is vast, and that greatly delays the
availability of the relevant software upgrade to users, which is not a
good thing.

I think the current TLS 1.2 debacle is about the best evidence of this one
could ask for.  If TLS 1.{0,1} had been designed to make the hash functions
pluggable everywhere they're used, then users would have new software which
didn't rely on SHA1 (even in a way we currently think is still safe)
available now, rather than having to wait quite a bit longer before the
possibility of upgrading even arose.

Thor



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-26 Thread James A. Donald

Peter Gutmann wrote:
> Consider for example a system that uses two
> authentication algorithms in case one fails, or that
> has an algorithm-upgrade/rollover capability, perhaps
> via downloadable plugins.  At some point a device
> receives a message authenticated with algorithm A
> saying "Algorithm B has been broken, don't use it any
> more" (with an optional side-order of "install and run
> this plugin that implements a new algorithm instead").
> It also receives a message authenticated with
> algorithm B saying "Algorithm A has been broken, don't
> use it any more", with optional extras as before.

Not so hard.  True breaks occur infrequently.  Those
that download the scam version will find that they can
*only* communicate with the scammers, so will sort
things out in due course and all will be well until the
next break - which will not happen for a long time, and
may well never happen - unless of course one has the
IEEE 802.11 working group designing the standards.



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-26 Thread Peter Gutmann
Ben Laurie  writes:

>It seems to me protocol designers get all excited about this because they
>want to design the protocol once and be done with it. But software authors
>are generally content to worry about the new algorithm when they need to
>switch to it - and since they're going to have to update their software
>anyway and get everyone to install the new version, why should they worry any
>sooner?

It's not just that: while pluggability (for transparent crypto upgrade) may
sound like a fun theoretical exercise for geeks, it's really a special case of
the (unsolvable) secure-initialisation problem.

Consider for example a system that uses two authentication algorithms in case
one fails, or that has an algorithm-upgrade/rollover capability, perhaps via
downloadable plugins.  At some point a device receives a message authenticated
with algorithm A saying "Algorithm B has been broken, don't use it any more"
(with an optional side-order of "install and run this plugin that implements a
new algorithm instead").  It also receives a message authenticated with
algorithm B saying "Algorithm A has been broken, don't use it any more", with
optional extras as before.  Although you could then apply fault-tolerant
design concepts to try and make this less problematic, this adds a huge amount
of design complexity, and therefore new attack surface.  Adding to the
problems is the fact that this capability will only be exercised in extremely
rare circumstances.  So you have a piece of complex, error-prone code that's
never really exercised and that has to sit there unused (but resisting all
attacks) for years until it's needed, at which point it has to work perfectly
the first time.  In addition you have some nice catch-22's such as the
question of how you safely load a replacement algorithm into a remote device
when the existing algorithm that's required to secure the load has been
broken.
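
A toy model of that catch-22 (an illustrative Python sketch, not any real
protocol; both "algorithms" are simply represented as HMAC keys here):

    import hmac, hashlib

    KEYS = {"A": b"key-for-algorithm-A", "B": b"key-for-algorithm-B"}
    trusted = {"A", "B"}

    def verify(alg, msg, tag):
        return alg in trusted and hmac.compare_digest(
            hmac.new(KEYS[alg], msg, hashlib.sha256).digest(), tag)

    def revoke(signed_with, revoked_alg, msg, tag):
        if verify(signed_with, msg, tag):      # believe a revocation only if its
            trusted.discard(revoked_alg)       # authenticating algorithm is trusted

    msg_a = b"algorithm B is broken"
    tag_a = hmac.new(KEYS["A"], msg_a, hashlib.sha256).digest()
    msg_b = b"algorithm A is broken"
    tag_b = hmac.new(KEYS["B"], msg_b, hashlib.sha256).digest()

    revoke("A", "B", msg_a, tag_a)   # processed first: B is now distrusted...
    revoke("B", "A", msg_b, tag_b)   # ...so this one is ignored; order alone decided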

Compounding this even further is the innate tendency of security geeks to want
to replace half the security infrastructure that you're relying on as a side-
effect of any algorithm upgrade.  After all, if you're replacing one of the
hash algorithms then why not take the opportunity to replace the key
derivation that it's used in, and the signature mechanisms, and the key
management as well?  This results in huge amounts of turmoil as a
theoretically minor algorithm change carries over into a requirement to
reimplement half the security mechanisms being used.  One example of this is
TLS 1.2, for which the (theoretically minor) step from TLS 1.1 to TLS 1.2 was
much, much bigger than the change from SSL to TLS, because the developers
redesigned significant portions of the security mechanisms as a side-effect of
introducing a few new hash algorithms.  As a result, TLS 1.2 adoption has
lagged for years after the first specifications became available.

Thor Lancelot Simon  writes:

>the exercise of recovering from new horrible problems with SHA1 would be
>vastly simpler, easier, and far quicker

What new horrible problems in SHA1 (as it's used in SSL/TLS)?  What old
horrible problems, for that matter?  The only place I can think of offhand
where it's used in a manner where it might be vulnerable is for DSA sigs, and
how many of those have you seen in the wild?

Peter.



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-26 Thread Jon Callas


On Aug 25, 2009, at 4:44 AM, Ben Laurie wrote:

> Perry E. Metzger wrote:
>> Yet another reason why you always should make the crypto algorithms you
>> use pluggable in any system -- you *will* have to replace them some day.
>
> In order to roll out a new crypto algorithm, you have to roll out new
> software. So, why is anything needed for "pluggability" beyond versioning?
>
> It seems to me protocol designers get all excited about this because
> they want to design the protocol once and be done with it. But software
> authors are generally content to worry about the new algorithm when they
> need to switch to it - and since they're going to have to update their
> software anyway and get everyone to install the new version, why should
> they worry any sooner?


I have no idea, myself.

I have said many times effectively what you said, and there's always the
same hand-wringing.

I believe that it boils down to this:

They aren't software engineers and we are. We've designed parameterized
or (that's or, not xor) versioned protocols before. We've done upgrades.

They will inevitably bring up downgrade attacks, but come on. It is a
truism that there is more stupidity than malice in the world, and if you
stupid-proof your protocol, you've also malice-proofed it.

And yes, yes, one has to be thorough in the design of a pluggable
system. I, too, can come up with a scenario where a simple version
number is not enough. It's just a software engineering problem, and you
and I and the other software engineers know how to do software
engineering.

I think that, again, they haven't in general deployed software to a
population large enough to contain stupid people. If they have deployed
it to stupid people, they haven't had the attitude that stupidity is a
fact of life and has to be fixed in the software, not the person.

And after boiling it down, let me go further and reduce it to a sticky,
bitter sauce:

They don't believe it's important. They so believe the naive
simple-is-better line that they end up believing that brittle is better
than resilient. They're so enamored with the aphorism that you can make
something so simple it's obviously secure, or so complex that it's not
obviously insecure, that they forget the aphorism that you should make
things as simple as possible and no simpler. They're not engineers, so
for them, upgrades are free. Therefore brittle is simpler than
resilient.


Jon



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-25 Thread James A. Donald



Perry E. Metzger wrote:
> Yet another reason why you always should make the crypto algorithms you
> use pluggable in any system -- you *will* have to replace them some day.

Ben Laurie wrote:
> In order to roll out a new crypto algorithm, you have to roll out new
> software. So, why is anything needed for "pluggability" beyond versioning?

New software has to work with new and old data files and communicate
with new and old software.

Thus full protocol negotiation has to be built in to everything from the
beginning - which was the insight behind COM and the cure to DLL hell.





Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-25 Thread Nicolas Williams
On Tue, Aug 25, 2009 at 12:44:57PM +0100, Ben Laurie wrote:
> In order to roll out a new crypto algorithm, you have to roll out new
> software. So, why is anything needed for "pluggability" beyond versioning?
> 
> It seems to me protocol designers get all excited about this because
> they want to design the protocol once and be done with it. But software
> authors are generally content to worry about the new algorithm when they
> need to switch to it - and since they're going to have to update their
> software anyway and get everyone to install the new version, why should
> they worry any sooner?

Many good replies have been given already.  Here's a few more reasons to
want "pluggability" in the protocol:

 - Yes, we "want to design the protocol once and be done with" the hard
   parts of the design problem that we can reasonably expect to have to
   do only once.  Having to do things only once is not just "cool".

 - Pluggability at the protocol layer enables pluggability in the
   implementations.  A pluggable design does not imply open plug-in
   interfaces, but a pluggable design does imply highly localized
   development of new plug-ins.

 - It's a good idea to promote careful thought about the future,
   precisely what designing a pluggable protocol does and requires.

   We may get it wrong (e.g., the SSHv2 alg nego protocol has quirks,
   some of which were discovered when we worked on RFC4462), but the
   result is likely to be much better than not putting much or any such
   thought into it.

If the protocol designers and the implementors get their respective
designs right, the best case scenario is that switching from one
cryptographic algorithm to another requires less effort in the pluggable
case than in the non-pluggable case.  Specifically, specification and
implementation of new crypto algs can be localized -- no existing
specification nor code need change!  Yes, new SW must still get
deployed, and that's pretty hard, but it helps to make it easier to
develop that SW.
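
A minimal sketch of what "localized development of new plug-ins" can look
like in an implementation (illustrative Python, not any particular
protocol's real API; the algorithm names are invented):

    import hashlib, hmac

    # The protocol negotiates a name; the implementation maps names to code.
    MAC_ALGS = {
        "hmac-sha1":   lambda key, msg: hmac.new(key, msg, hashlib.sha1).digest(),
        "hmac-sha256": lambda key, msg: hmac.new(key, msg, hashlib.sha256).digest(),
    }

    def negotiate(client_offer, server_support):
        # first mutually supported algorithm, in the client's preference order
        for name in client_offer:
            if name in server_support:
                return name
        raise ValueError("no common MAC algorithm")

    # Adding "hmac-sha512" later means one new table entry; the engine is untouched.
    chosen = negotiate(["hmac-sha256", "hmac-sha1"], MAC_ALGS.keys())
    tag = MAC_ALGS[chosen](b"session key", b"protocol record")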

Nico
-- 



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-25 Thread Thor Lancelot Simon
On Tue, Aug 25, 2009 at 12:44:57PM +0100, Ben Laurie wrote:
> Perry E. Metzger wrote:
> > Yet another reason why you always should make the crypto algorithms you
> > use pluggable in any system -- you *will* have to replace them some day.
> 
> In order to roll out a new crypto algorithm, you have to roll out new
> software. So, why is anything needed for "pluggability" beyond versioning?
> 
> It seems to me protocol designers get all excited about this because
> they want to design the protocol once and be done with it. But software
> authors are generally content to worry about the new algorithm when they
> need to switch to it - and since they're going to have to update their
> software anyway and get everyone to install the new version, why should
> they worry any sooner?

Look at the difference between the time it takes to add an algorithm
to OpenSSL and the time it takes to add a new SSL or TLS version to
OpenSSL.  Or should we expect TLS 1.2 support any day now?  If earlier
TLS versions had been designed to allow the hash functions in the PRF
to be swapped out, the exercise of recovering from new horrible problems
with SHA1 would be vastly simpler, easier, and far quicker.  It is just
not the case that the software development exercise of implementing a
new protocol is on a scale with that of implementing a new cipher or hash
function -- it is far, far larger, and that, alone, seems to me to be
sufficient reason to design protocols so that algorithms and algorithm
parameter sizes are not fixed.
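
For a sense of what "pluggable in the PRF" means in code, here is a sketch
of the TLS 1.2-style P_<hash> expansion (RFC 5246, section 5) with the hash
as a parameter; swapping SHA-1 for SHA-256 is a one-argument change
(illustrative Python, not OpenSSL's API, and the inputs are placeholders):

    import hmac

    def p_hash(hash_name, secret, seed, length):
        out, a = b"", seed                                  # A(0) = seed
        while len(out) < length:
            a = hmac.new(secret, a, hash_name).digest()     # A(i) = HMAC(secret, A(i-1))
            out += hmac.new(secret, a + seed, hash_name).digest()
        return out[:length]

    def prf(hash_name, secret, label, seed, length):
        return p_hash(hash_name, secret, label + seed, length)

    key_block = prf("sha256", b"master secret", b"key expansion",
                    b"server_random" + b"client_random", 104)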

Thor



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-25 Thread Darren J Moffat

Ben Laurie wrote:
> Perry E. Metzger wrote:
>> Yet another reason why you always should make the crypto algorithms you
>> use pluggable in any system -- you *will* have to replace them some day.
>
> In order to roll out a new crypto algorithm, you have to roll out new
> software. So, why is anything needed for "pluggability" beyond versioning?

Versioning catches a large part of it, but that alone isn't always
enough.  Sometimes for on-disk formats you need to reserve padding space
to add larger or differently formatted things later.

Also, support for a new crypto algorithm can actually be added without
changes to the software code if it is "truly" pluggable.

An example from Solaris is how our IPsec implementation works.  If a new
algorithm is available via the Solaris crypto framework, in many cases we
don't need any code changes to support it; the end system admin just runs
the ipsecalgs(1M) command to update the mappings from IPsec protocol
numbers to crypto framework algorithm names (we use PKCS#11-style
mechanism names that combine algorithm and mode).  The Solaris IPsec
implementation has no crypto algorithm names in the code base at all (we
do currently assume CBC mode, though we are in the process of adding
generic CCM, GCM and GMAC support).
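
The idea, roughly sketched below (illustrative Python with an invented file
format, not the Solaris code itself), is that the mapping from protocol
numbers to mechanism names lives in configuration, so adding an algorithm
is a table edit rather than a code change:

    # Hypothetical config lines: "<esp_alg_id> <pkcs11_mechanism> <key_bits>"
    def load_alg_map(path):
        table = {}
        with open(path) as f:
            for line in f:
                line = line.split("#", 1)[0].strip()    # drop comments and blanks
                if not line:
                    continue
                alg_id, mech, key_bits = line.split()
                table[int(alg_id)] = (mech, int(key_bits))
        return table

    # e.g. table[12] -> ("CKM_AES_CBC", 128); the packet path looks algorithms
    # up by number and never names them in code.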


Now, having said all that, the PF_KEY protocol (RFC 2367) between user and
kernel does know about crypto algorithms.

> It seems to me protocol designers get all excited about this because

Not just on-the-wire protocols but persistent on-disk formats; on disk is
a much bigger deal.  Consider the case when you have terabytes of data
written in the old format and you need to migrate to the new format - you
have to support both at the same time.  So not just versioning but space
padding can be helpful.
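
A sketch of that on-disk idea (illustrative Python with an invented
layout): an explicit version plus reserved padding lets old readers skip
fields they don't understand and new writers add them without a reformat.

    import struct

    HEADER = struct.Struct("<4sHH24s")   # magic, version, cipher id, 24 reserved bytes

    def pack_header(version, cipher_id):
        return HEADER.pack(b"BLOB", version, cipher_id, b"\x00" * 24)

    def unpack_header(raw):
        magic, version, cipher_id, _reserved = HEADER.unpack_from(raw)
        if magic != b"BLOB":
            raise ValueError("bad magic")
        return version, cipher_id        # v1 readers simply ignore the reserved space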


--
Darren J Moffat



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-25 Thread Jonathan Thornburg
On Tue, 25 Aug 2009, Ben Laurie wrote:
> In order to roll out a new crypto algorithm, you have to roll out new
> software. So, why is anything needed for "pluggability" beyond versioning?

If active attackers are part of the threat model, then you need to
worry about version-rollback attacks for as long as in-the-field software
still groks the old (now-insecure) versions, so "versioning" is actually
more like "Byzantine versioning".

-- 
-- Jonathan Thornburg 
   Dept of Astronomy, Indiana University, Bloomington, Indiana, USA
   "Washing one's hands of the conflict between the powerful and the
powerless means to side with the powerful, not to be neutral."
  -- quote by Freire / poster by Oxfam



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-25 Thread Ben Laurie
Perry E. Metzger wrote:
> Yet another reason why you always should make the crypto algorithms you
> use pluggable in any system -- you *will* have to replace them some day.

In order to roll out a new crypto algorithm, you have to roll out new
software. So, why is anything needed for "pluggability" beyond versioning?

It seems to me protocol designers get all excited about this because
they want to design the protocol once and be done with it. But software
authors are generally content to worry about the new algorithm when they
need to switch to it - and since they're going to have to update their
software anyway and get everyone to install the new version, why should
they worry any sooner?

-- 
http://www.apache-ssl.org/ben.html   http://www.links.org/

"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff



SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-19 Thread Perry E. Metzger

"James A. Donald"  writes:
> Getting back towards topic, the hash function employed by Git is
> showing signs of bitrot, which, given people's desire to introduce
> malware backdoors and legal backdoors into Linux, could well become a
> problem in the very near future.

I believe attacks on Git's use of SHA-1 would require second pre-image
attacks, and I don't think anyone has demonstrated such a thing for
SHA-1 at this point. None the less, I agree that it would be better if
Git eventually used better hash functions. Attacks only get better with
time, and SHA-1 is certainly creaking.
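
For reference, Git's object naming is easy to parameterize at the hashing
step: object IDs are the hash of "<type> <size>\0" plus the content. A
Python sketch follows; the second call is hypothetical, since Git itself
only uses SHA-1 here.

    import hashlib

    def git_object_id(data, obj_type="blob", algo="sha1"):
        header = f"{obj_type} {len(data)}\0".encode()   # Git's object header
        return hashlib.new(algo, header + data).hexdigest()

    print(git_object_id(b"hello\n"))                  # matches `git hash-object` output
    print(git_object_id(b"hello\n", algo="sha256"))   # a hypothetical SHA-256 object id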

Emphasis on "eventually", however. This is a "as soon as convenient, not
as soon as possible" sort of situation -- more like within a year than
within a week.

Yet another reason why you always should make the crypto algorithms you
use pluggable in any system -- you *will* have to replace them some day.

Perry
--
Perry E. Metzger    pe...@piermont.com
