On 10/9/13 at 7:18 PM, crypto....@gmail.com (John Kelsey) wrote:
> We know how to address one part of this problem--choose only
> algorithms whose design strength is large enough that there's
> not some relatively close by time when the algorithms will need
> to be swapped out. That's not all that big a problem now--if
> you use, say, AES256 and SHA512 and ECC over P521, then even in
> the far future, your users need only fear cryptanalysis, not
> Moore's Law. Really, even with 128-bit security level
> primitives, it will be a very long time until the brute-force
> attacks are a concern.
We should try to characterize what "a very long time" is in
years. :-)
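One way to put a number on "a very long time" is simple arithmetic on the key space. The sketch below assumes a hypothetical attacker testing 10^18 keys per second (roughly a very large dedicated cluster; the rate is an assumption, and the conclusion barely moves if you change it):

```python
# Back-of-envelope: expected years to exhaust a key space by brute force.
# The keys_per_second rate is an assumption for illustration only.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def years_to_exhaust(key_bits: int, keys_per_second: float = 1e18) -> float:
    """Years to try every key of the given size at the given rate."""
    return (2 ** key_bits) / keys_per_second / SECONDS_PER_YEAR

for bits in (80, 128, 256):
    print(f"{bits}-bit key: {years_to_exhaust(bits):.3e} years")
```

At that rate an 80-bit key space falls in weeks, while 128 bits takes on the order of 10^13 years, which is why the discussion above treats 128-bit primitives as safe from Moore's Law for the foreseeable future.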
> This is actually one thing we're kind-of on the road to doing
> right in standards now--we're moving away from
> barely-strong-enough crypto and toward crypto that's going to
> be strong for a long time to come.
We had barely-strong-enough crypto because we couldn't afford
the computation time for longer key sizes. I hope things are
better now, although there may still be a problem for certain
devices. Let's hope they are only needed in low security/low
value applications.
> Protocol attacks are harder, because while we can choose a key
> length, modulus size, or sponge capacity to support a known
> security level, it's not so easy to make sure that a protocol
> doesn't have some kind of attack in it.
> I think we've learned a lot about what can go wrong with
> protocols, and we can design them to be more ironclad than in
> the past, but we still can't guarantee we won't need to
> upgrade. But I think this is an area that would be interesting
> to explore--what would need to happen in order to get more
> ironclad protocols? A couple random thoughts:
I fully agree that this is a valuable area to research.
> a. Layering secure protocols on top of one another might
> provide some redundancy, so that a flaw in one didn't undermine
> the security of the whole system.
Defense in depth has been useful from longer ago than the
Trojans and Greeks.
> b. There are some principles we can apply that will make
> protocols harder to attack, like encrypt-then-MAC (to eliminate
> reaction attacks), nothing is allowed to change its
> execution path or timing based on the key or plaintext, every
> message includes a sequence number and the hash of the previous
> message, etc. This won't eliminate protocol attacks, but will
> make them less common.
I think the attacks on MAC-then-encrypt, and timing attacks in
general, were first described within the last 15 years. It is
only normal paranoia to expect more equally interesting
discoveries in the future.
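The principles in point (b) can be sketched together: encrypt, then MAC over a header that binds a sequence number and the hash of the previous message, and compare MACs in constant time. Everything here is invented for illustration (`seal`, `open_`, the 40-byte header layout), and the XOR keystream is a toy stand-in so the example stays stdlib-only; a real protocol would use AES-GCM or similar.

```python
# Sketch: encrypt-then-MAC framing with a sequence number and the
# hash of the previous message bound into the authenticated header.
# The HMAC-derived XOR keystream is a TOY cipher for illustration.
import hashlib
import hmac
import struct

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream: HMAC-SHA256 in counter mode (illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + struct.pack(">Q", counter),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def seal(enc_key, mac_key, seq, prev_hash, plaintext):
    """Encrypt, then MAC over header (seq + prev_hash) and ciphertext."""
    nonce = struct.pack(">Q", seq)                     # 8 bytes
    ct = bytes(a ^ b for a, b in
               zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    header = nonce + prev_hash                         # 8 + 32 = 40 bytes
    tag = hmac.new(mac_key, header + ct, hashlib.sha256).digest()
    return header + ct + tag

def open_(enc_key, mac_key, expected_seq, expected_prev_hash, msg):
    """Verify the MAC in constant time BEFORE decrypting or parsing."""
    header, ct, tag = msg[:40], msg[40:-32], msg[-32:]
    want = hmac.new(mac_key, header + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, want):             # constant-time compare
        raise ValueError("bad MAC")
    seq = struct.unpack(">Q", header[:8])[0]
    if seq != expected_seq or header[8:] != expected_prev_hash:
        raise ValueError("replayed or reordered message")
    nonce = header[:8]
    return bytes(a ^ b for a, b in
                 zip(ct, keystream(enc_key, nonce, len(ct))))
```

The receiver tracks `hashlib.sha256(msg).digest()` of each accepted message as `prev_hash` for the next one, so a stripped or reordered message breaks the chain and is rejected rather than silently accepted.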
> c. We could try to treat at least some kinds of protocols more
> like crypto algorithms, and expect to have them widely vetted
> before use.
Most definitely! Lots of eyes. Formal proofs, because they are a
completely different way of looking at things. Simplicity. All
will help.
> What else?
> ...
Perhaps the shortest limit on the lifetime of an embedded
system is the security protocol, and not the hardware. If so,
how do we as a society deal with this limit?
> What we really need is some way to enforce protocol upgrades
> over time. Ideally, there would be some notion that if you
> support version X of the protocol, this meant that you would
> not support any version lower than, say, X-2. But I'm not sure
> how practical that is.
This is the direction I'm pushing today. If you look at auto
racing you will notice that the safety equipment commonly used
before WW2 is no longer permitted. It is patently unsafe. We
need to make the same judgements in high security/high risk applications.
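The "support X, refuse anything below X-2" idea can be made concrete as a moving floor in version negotiation. The version numbers and window width below are invented for illustration; the point is that the floor advances automatically as new versions ship, forcing retired protocol versions out of use:

```python
# Hypothetical version-negotiation sketch: supporting version X commits
# an implementation to refusing anything below X - VERSION_WINDOW.

SUPPORTED_MAX = 7      # assumed: newest version this side implements
VERSION_WINDOW = 2     # supporting X implies refusing anything below X-2

def negotiate(peer_versions):
    """Pick the highest mutually acceptable version, enforcing the
    moving floor of SUPPORTED_MAX - VERSION_WINDOW."""
    floor = SUPPORTED_MAX - VERSION_WINDOW
    acceptable = [v for v in peer_versions if floor <= v <= SUPPORTED_MAX]
    if not acceptable:
        raise ConnectionError("no acceptable protocol version")
    return max(acceptable)
```

A peer offering only versions below the floor gets a refusal rather than a downgraded connection, which is exactly the "unsafe equipment is no longer permitted" judgement applied to protocols.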
Cheers - Bill
-----------------------------------------------------------------------
Bill Frantz        | The nice thing about standards | Periwinkle
(408)356-8506      | is there are so many to choose | 16345 Englewood Ave
www.pwpconsult.com | from. - Andrew Tanenbaum       | Los Gatos, CA 95032
_______________________________________________
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography