Ben Laurie <> writes:

>It seems to me protocol designers get all excited about this because they
>want to design the protocol once and be done with it. But software authors
>are generally content to worry about the new algorithm when they need to
>switch to it - and since they're going to have to update their software
>anyway and get everyone to install the new version, why should they worry any

It's not just that: while pluggability (for transparent crypto upgrade) may
sound like a fun theoretical exercise for geeks, it's really a special case of
the (unsolvable) secure-initialisation problem.

Consider for example a system that uses two authentication algorithms in case
one fails, or that has an algorithm-upgrade/rollover capability, perhaps via
downloadable plugins.  At some point a device receives a message authenticated
with algorithm A saying "Algorithm B has been broken, don't use it any more"
(with an optional side-order of "install and run this plugin that implements a
new algorithm instead").  It also receives a message authenticated with
algorithm B saying "Algorithm A has been broken, don't use it any more", with
optional extras as before.  Although you could then apply fault-tolerant
design concepts to try and make this less problematic, this adds a huge amount
of design complexity, and therefore new attack surface.  Adding to the
problems is the fact that this capability will only be exercised in extremely
rare circumstances.  So you have a piece of complex, error-prone code that's
never really exercised and that has to sit there unused (but resisting all
attacks) for years until it's needed, at which point it has to work perfectly
the first time.  In addition you have some nice catch-22s, such as the
question of how you safely load a replacement algorithm into a remote device
when the existing algorithm that's required to secure the load has been
broken.
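The mutual-revocation deadlock above can be sketched in a few lines.  This is
a toy illustration, not anyone's real protocol: the verify functions are
hypothetical stubs that always succeed, since the point is the decision logic,
not the crypto.  Whichever revocation notice the device happens to process
first determines which algorithm survives:

```python
def verify_a(message):
    """Hypothetical stub: check the message's signature under algorithm A."""
    return True

def verify_b(message):
    """Hypothetical stub: check the message's signature under algorithm B."""
    return True

# Algorithms the device currently trusts.
trusted = {"A", "B"}

# Two revocation notices arrive, each authenticated with exactly the
# algorithm that the other one revokes.
msg1 = {"signed_with": "A", "revokes": "B"}
msg2 = {"signed_with": "B", "revokes": "A"}

verifiers = {"A": verify_a, "B": verify_b}

for msg in (msg1, msg2):
    algo = msg["signed_with"]
    # A notice only counts if its own signing algorithm is still trusted --
    # but each notice's signer is precisely what the other notice revokes,
    # so arrival order alone picks the winner.
    if algo in trusted and verifiers[algo](msg):
        trusted.discard(msg["revokes"])

print(sorted(trusted))   # processing msg1 first leaves only A trusted
```

Swap the processing order and the device ends up trusting only B instead,
which is the whole problem: an attacker who has broken either algorithm can
forge the notice that arrives first.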

Compounding this even further is the innate tendency of security geeks to want
to replace half the security infrastructure that you're relying on as a side-
effect of any algorithm upgrade.  After all, if you're replacing one of the
hash algorithms then why not take the opportunity to replace the key
derivation that it's used in, and the signature mechanisms, and the key
management as well?  This results in huge amounts of turmoil as a
theoretically minor algorithm change carries over into a requirement to
reimplement half the security mechanisms being used.  One example of this is
TLS 1.2, for which the (theoretically minor) step from TLS 1.1 to TLS 1.2 was
much, much bigger than the change from SSL to TLS, because the developers
redesigned significant portions of the security mechanisms as a side-effect of
introducing a few new hash algorithms.  As a result, TLS 1.2 adoption has
lagged for years after the first specifications became available.

Thor Lancelot Simon <> writes:

>the exercise of recovering from new horrible problems with SHA1 would be
>vastly simpler, easier, and far quicker

What new horrible problems in SHA1 (as it's used in SSL/TLS)?  What old
horrible problems, for that matter?  The only place I can think of offhand
where it's used in a manner where it might be vulnerable is for DSA sigs, and
how many of those have you seen in the wild?


The Cryptography Mailing List