Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-13 Thread Christian Huitema
 Without doing any key management or requiring some kind of reliable
identity or memory of previous sessions, the best we can do in the inner 
 protocol is an ephemeral Diffie-Hellman, so suppose we do this:  

 a.  Generate random a and send aG on curve P256

 b.  Generate random b and send bG on curve P256

 c.  Both sides derive the shared key abG, and then use SHAKE512(abG) to
generate an AES key for messages in each direction.
 
 d.  Each side keeps a sequence number to use as a nonce.  Both sides use
AES-CCM with their sequence number and their sending key, and keep 
 track of the sequence number of the most recent message received from the
other side.  

 ...

 Thoughts?

We should get Stev Knowles to explain the skeeter and bubba TCP options.
From private conversations I understand that the options were doing
pretty much what you describe: use Diffie-Hellman in the TCP exchange to
negotiate an encryption key for the TCP session.

That would actually be a very neat thing. I don't believe using TCP options
would be practical today; too many firewalls would filter them. But the same
results could be achieved with a zero-knowledge version of TLS. That would
make sessions encrypted by default.

Of course, any zero-knowledge protocol can be vulnerable to
man-in-the-middle attacks. But applications can protect against that
with an end-to-end exchange. For example, if there is a shared secret, even
a lowly password, the application protocol can embed verification of the
zero-knowledge session key in the password verification, by combining the
session key with either the challenge or the response in a basic
challenge-response protocol.

That would be pretty neat: zero-knowledge TLS, with the password exchange
then used to mutually authenticate server and client while protecting
against MITM. Pretty much any site could deploy that.
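For concreteness, the binding could look something like the sketch below (the helper names, the SHA-256 "password hashing," and the use of HMAC are illustrative assumptions, not a worked-out design):

import hashlib
import hmac
import os

def channel_binding(session_key: bytes) -> bytes:
    # A value both ends can compute from the zero-knowledge session key.
    return hashlib.sha256(b"tls-zk-binding" + session_key).digest()

def make_response(password: str, challenge: bytes, session_key: bytes) -> bytes:
    # Mix the channel binding into an ordinary challenge-response: a MITM
    # relaying between two separate sessions holds two different session
    # keys, so its relayed response no longer verifies.
    pw_key = hashlib.sha256(password.encode()).digest()  # toy password hashing
    return hmac.new(pw_key, challenge + channel_binding(session_key),
                    hashlib.sha256).digest()

# Server side: send a fresh challenge, then recompute the expected response
# with its own view of the session key and compare with hmac.compare_digest().
challenge = os.urandom(16)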

-- Christian Huitema




Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-12 Thread James A. Donald

On 2013-10-11 15:48, ianG wrote:
Right now we've got a TCP startup, and a TLS startup.  It's pretty 
messy.  Adding another startup inside isn't likely to gain popularity.


The problem is that layering creates round trips, and as CPUs get ever 
faster, and pipes ever fatter, round trips become a bigger and bigger 
problem.  Legend has it that each additional round trip decreases usage 
of your web site by twenty percent, though I am unaware of any evidence 
on this.





(Which was one thing that suggests a redesign of TLS -- to integrate 
back into IP layer and replace/augment TCP directly. Back in those 
days we -- they -- didn't know enough to do an integrated security 
protocol.  But these days we do, I'd suggest, or we know enough to 
give it a try.)


TCP provides eight bits of protocol negotiation, which results in 
multiple layers of protocol negotiation on top.


Ideally, we should extend the protocol negotiation and do crypto 
negotiation at the same time.


But, I would like to see some research on how evil round trips really are.

I notice that bank web pages take an unholy long time to come up, 
probably because one secure web page loads another, which then loads a 
script, etc.




Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-12 Thread Ben Laurie
On 10 October 2013 17:06, John Kelsey crypto@gmail.com wrote:
 Just thinking out loud

 The administrative complexity of a cryptosystem is overwhelmingly in key 
 management and identity management and all the rest of that stuff.  So 
 imagine that we have a widely-used inner-level protocol that can use strong 
 crypto, but also requires no external key management.  The purpose of the 
 inner protocol is to provide a fallback layer of security, so that even an 
 attack on the outer protocol (which is allowed to use more complicated key 
 management) is unlikely to be able to cause an actual security problem.  On 
 the other hand, in case of a problem with the inner protocol, the outer 
 protocol should also provide protection against everything.

 Without doing any key management or requiring some kind of reliable identity 
 or memory of previous sessions, the best we can do in the inner protocol is 
 an ephemeral Diffie-Hellman, so suppose we do this:

 a.  Generate random a and send aG on curve P256

 b.  Generate random b and send bG on curve P256

 c.  Both sides derive the shared key abG, and then use SHAKE512(abG) to 
 generate an AES key for messages in each direction.

 d.  Each side keeps a sequence number to use as a nonce.  Both sides use 
 AES-CCM with their sequence number and their sending key, and keep track of 
 the sequence number of the most recent message received from the other side.

 The point is, this is a protocol that happens *inside* the main security 
 protocol.  This happens inside TLS or whatever.  An attack on TLS then leads 
 to an attack on the whole application only if the TLS attack also lets you do 
 man-in-the-middle attacks on the inner protocol, or if it exploits something 
 about certificate/identity management done in the higher-level protocol.  
 (Ideally, within the inner protocol, you do some checking of the identity 
 using a password or shared secret or something, but that's application-level 
 stuff the inner and outer protocols don't know about.)

 Thoughts?

AIUI, you're trying to make it so that only active attacks work on the
combined protocol, whereas passive attacks might work on the outer
protocol. In order to achieve this, you assume that your proposed
inner protocol is not vulnerable to passive attacks (I assume the
outer protocol also thinks this is true). Why should we believe the
inner protocol is any better than the outer one in this respect?
Particularly since you're using tainted algorithms ;-).


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-12 Thread Jerry Leichter
On Oct 11, 2013, at 11:09 PM, James A. Donald wrote:
 Right now we've got a TCP startup, and a TLS startup.  It's pretty messy.  
 Adding another startup inside isn't likely to gain popularity.
 
 The problem is that layering creates round trips, and as CPUs get ever 
 faster, and pipes ever fatter, round trips become a bigger and bigger problem. 
  Legend has it that each additional round trip decreases usage of your web 
 site by twenty percent, though I am unaware of any evidence on this.
The research is on time delays, which you could easily enough convert to round 
trips.  The numbers are nowhere near 20%, but are significant if you have many 
users:  http://googleresearch.blogspot.com/2009/06/speed-matters.html

-- Jerry



Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-12 Thread John Kelsey
On Oct 12, 2013, at 6:51 AM, Ben Laurie b...@links.org wrote:
...
 AIUI, you're trying to make it so that only active attacks work on the
 combined protocol, whereas passive attacks might work on the outer
 protocol. In order to achieve this, you assume that your proposed
 inner protocol is not vulnerable to passive attacks (I assume the
 outer protocol also thinks this is true). Why should we believe the
 inner protocol is any better than the outer one in this respect?

The point is, we don't know how to make protocols that really are reliably 
secure against future attacks.  If we did, we'd just do that. 


My hope is that if we layer two of our best attempts at secure protocols on top 
of one another, then we will get security because the attacks will be hard to 
get through the composed protocols.  So maybe my protocol (or whatever inner 
protocol ends up being selected) isn't secure against everything, but as long 
as its weaknesses are covered up by the outer protocol, we still get a secure 
final result.  

One requirement for this is that the inner protocol must not introduce new 
weaknesses.  I think that means it must not:

a.  Leak information about its plaintexts in its timing, error messages, or 
ciphertext sizes.  

b.  Introduce ambiguities about how the plaintext is to be decrypted that could 
mess up the outer protocol's authentication.  

I think we can accomplish (a) by not compressing the plaintext before 
processing it, by using crypto primitives that don't leak plaintext data in 
their timing, and by having the only error message that can ever be generated 
from the inner protocol be essentially a MAC failure or an out-of-sequence 
error.  

I think (b) is pretty easy to accomplish with standard crypto, but maybe I'm 
missing something.  

...
 Particularly since you're using tainted algorithms ;-).

If using AES or P256 are the weak points in the protocol, that is a big win.  
Right now, we aren't getting anywhere close to that.  And there's no reason 
either AES or P256 have to be used--I'm just looking for a simple, lightweight 
way to get as much security as possible inside some other protocol.  

--John



Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-11 Thread John Kelsey
On Oct 11, 2013, at 1:48 AM, ianG i...@iang.org wrote:

...
 What's your goal?  I would say you could do this if the goal was ultimate 
 security.  But for most purposes this is overkill (and I'd include online 
 banking, etc, in that).

We were talking about how hard it is to solve crypto protocol problems by 
getting the protocol right the first time, so we don't end up with fielded 
stuff that's weak but can't practically be fixed.  One approach I can see to 
this is to have multiple layers of crypto protocols that are as independent as 
possible in security terms.  The hope is that flaws in one protocol will 
usually not get through the other layer, and so they won't lead to practical 
security flaws.  

Actually getting the outer protocol right the first time would be better, but 
we haven't had great success with that so far. 

 Right now we've got a TCP startup, and a TLS startup.  It's pretty messy.  
 Adding another startup inside isn't likely to gain popularity.

Maybe not, though I think a very lightweight version of the inner protocol adds 
only a few bits to the traffic used and a few AES encryptions to the workload.  
I suspect most applications would never notice the difference.  (Even the 
version with the ECDH key agreement step would probably not add noticeable 
overhead for most applications.)  On the other hand, I have no idea if anyone 
would use this.  I'm still at the level of thinking "what could be done to 
address this problem?", not "how would you sell this?"

 iang

--John


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-11 Thread ianG

On 10/10/13 19:06 PM, John Kelsey wrote:

Just thinking out loud

The administrative complexity of a cryptosystem is overwhelmingly in key 
management and identity management and all the rest of that stuff.  So imagine 
that we have a widely-used inner-level protocol that can use strong crypto, but 
also requires no external key management.  The purpose of the inner protocol is 
to provide a fallback layer of security, so that even an attack on the outer 
protocol (which is allowed to use more complicated key management) is unlikely 
to be able to cause an actual security problem.  On the other hand, in case of 
a problem with the inner protocol, the outer protocol should also provide 
protection against everything.

Without doing any key management or requiring some kind of reliable identity or 
memory of previous sessions, the best we can do in the inner protocol is an 
ephemeral Diffie-Hellman, so suppose we do this:

a.  Generate random a and send aG on curve P256

b.  Generate random b and send bG on curve P256

c.  Both sides derive the shared key abG, and then use SHAKE512(abG) to 
generate an AES key for messages in each direction.

d.  Each side keeps a sequence number to use as a nonce.  Both sides use 
AES-CCM with their sequence number and their sending key, and keep track of the 
sequence number of the most recent message received from the other side.

The point is, this is a protocol that happens *inside* the main security 
protocol.  This happens inside TLS or whatever.  An attack on TLS then leads to 
an attack on the whole application only if the TLS attack also lets you do 
man-in-the-middle attacks on the inner protocol, or if it exploits something 
about certificate/identity management done in the higher-level protocol.  
(Ideally, within the inner protocol, you do some checking of the identity using 
a password or shared secret or something, but that's application-level stuff 
the inner and outer protocols don't know about.)

Thoughts?



What's your goal?  I would say you could do this if the goal was 
ultimate security.  But for most purposes this is overkill (and I'd 
include online banking, etc, in that).


Right now we've got a TCP startup, and a TLS startup.  It's pretty 
messy.  Adding another startup inside isn't likely to gain popularity.


(Which was one thing that suggests a redesign of TLS -- to integrate 
back into IP layer and replace/augment TCP directly.  Back in those days 
we -- they -- didn't know enough to do an integrated security protocol. 
 But these days we do, I'd suggest, or we know enough to give it a try.)


iang


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-11 Thread ianG

On 10/10/13 08:41 AM, Bill Frantz wrote:


We should try to characterize what a very long time is in years. :-)



Look at the product life cycle for known crypto products.  We have some 
experience of this now: Skype, SSL v2/3 -> TLS 1.0/1.1/1.2, SSH 1 -> 2, PGP 2 
-> 5+.


As a starting point, I would suggest 10 years.

iang


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-11 Thread ianG

On 10/10/13 17:58 PM, Salz, Rich wrote:

TLS was designed to support multiple ciphersuites. Unfortunately this opened 
the door
to downgrade attacks, and transitioning to protocol versions that wouldn't do 
this was nontrivial.
The ciphersuites included all shared certain misfeatures, leading to the 
current situation.


On the other hand, negotiation let us deploy it in places where full-strength 
cryptography is/was regulated.



That same regulator that asked for that capability is somewhat prominent 
in the current debacle.


Feature or bug?



Sometimes half a loaf is better than nothing.



A shortage of bread has been the inspiration for a few revolutions :)

iang



Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-11 Thread Zooko O'Whielacronx
I like the ideas, John.

The idea, and the protocol you sketched out, are a little reminiscent
of ZRTP ¹ and of tcpcrypt ². I think you can go one step further,
however, and make it *really* strong, which is to offer the higher
or outer layer a way to hook into the crypto from your inner layer.

This could be by the inner layer exporting a crypto value which the
outer layer enforces an authorization or authenticity requirement on,
as is done in ZRTP if the a=zrtp-hash is delivered through an
integrity-protected outer layer, or in tcpcrypt if the Session ID is
verified by the outer layer.

I think this is a case where a separation of concerns between layers
with a simple interface between them can have great payoff. The
lower/inner layer enforces confidentiality (encryption),
integrity, hopefully forward-secrecy, etc., and the outer layer
decides on policy: authorization, naming (which is often but not
necessarily used for authorization), etc. The interface between them
can be a simple cryptographic interface, for example the way it is
done in the two examples above.

I think the way that SSL combined transport layer security,
authorization, and identification was a terrible idea. I (and others)
have been saying all along that it was a bad idea, and I hope that the
related security disasters during the last two years have started
persuading more people to rethink it, too. I guess the designers of
SSL were simply following the lead of the original inventors of public
key cryptography, who delegated certain critical unsolved problems to
an underspecified Trusted Third Party. What a colossal, historic
mistake.

The foolscap project ³ by Brian Warner demonstrates that it is
possible to retrofit a nice abstraction layer onto SSL. The way that
it does this is that each server automatically creates a self-signed
certificate, the secure hash of that certificate is embedded into the
identifier pointing at that server, and the client requires that the
server's public key match the certificate with that hash. The fact
that this is a useful thing to do, yet an inconvenient and rare thing to
do with SSL, should give security architects food for thought.
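As a rough sketch of that check (the identifier parsing below is simplified and hypothetical, not Foolscap's actual FURL syntax, and SHA-256 is assumed as the hash):

import hashlib

def expected_hash_from_identifier(identifier: str) -> bytes:
    # e.g. "pb://<hex-of-cert-hash>@host:port/..." -- simplified parsing only.
    hex_digest = identifier.split("//", 1)[1].split("@", 1)[0]
    return bytes.fromhex(hex_digest)

def check_pinned_cert(der_cert: bytes, identifier: str) -> None:
    # The identifier pins the certificate: whoever holds the matching private
    # key is, by definition, the server the identifier points at.
    if hashlib.sha256(der_cert).digest() != expected_hash_from_identifier(identifier):
        raise ValueError("server certificate does not match pinned hash")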

So I have a few suggestions for you:

1. Go, go, go! The path your thoughts are taking seems fruitful. Just
design a really good inner layer of crypto, without worrying (for
now) about the vexing and subtle problems of authorization,
authentication, naming, Man-In-The-Middle-Attack and so on. For now.

2. Okay, but leave yourself an out, by defining a nice simple
cryptographic hook by which someone else who *has* solved those vexing
problems could extend the protection that they've gained to users of
your protocol.

3. Maybe study ZRTP and tcpcrypt for comparison. Don't try to study
foolscap, even though it is a very interesting practical approach,
because there doesn't exist documentation of the protocol at the right
level for you to learn from.

Regards,

Zooko

https://LeastAuthority.com ← verifiably end-to-end-encrypted storage

P.S. Another example that you and I should probably study is cjdns ⁴.
Despite its name, it is *not* a DNS-like thing. It is a
transport-layer thing. I know less about cjdns so I didn't cite it as
a good example above.

¹ https://en.wikipedia.org/wiki/ZRTP
² http://tcpcrypt.org/
³ http://foolscap.lothar.com/docs/using-foolscap.html
⁴ http://cjdns.info/

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-11 Thread Bill Frantz

On 10/11/13 at 10:32 AM, zoo...@gmail.com (Zooko O'Whielacronx) wrote:


Don't try to study
foolscap, even though it is a very interesting practical approach,
because there doesn't exist documentation of the protocol at the right
level for you to learn from.


Look at the E language sturdy refs, which are a lot like the 
Foolscap references. They are documented at www.erights.org.


Cheers - Bill

---
Bill Frantz        | Truth and love must prevail | Periwinkle
(408)356-8506      | over lies and hate.         | 16345 Englewood Ave
www.pwpconsult.com |   - Vaclav Havel            | Los Gatos, CA 95032




Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-11 Thread Trevor Perrin
On Fri, Oct 11, 2013 at 10:32 AM, Zooko O'Whielacronx zoo...@gmail.com wrote:
 I like the ideas, John.

 The idea, and the protocol you sketched out, are a little reminiscent
 of ZRTP ¹ and of tcpcrypt ². I think you can go one step further,
 however, and make it *really* strong, which is to offer the higher
 or outer layer a way to hook into the crypto from your inner layer.

 This could be by the inner layer exporting a crypto value which the
 outer layer enforces an authorization or authenticity requirement on,
 as is done in ZRTP if the a=zrtp-hash is delivered through an
 integrity-protected outer layer, or in tcpcrypt if the Session ID is
 verified by the outer layer.

Hi Zooko,

Are you and John talking about the same thing?

John's talking about tunnelling a redundant inner record layer of
encryption inside an outer record layer (using TLS terminology).

I think you're talking about a couple different-but-related things:

 * channel binding, where an unauthenticated-but-encrypted channel
can be authenticated by performing an inside-the-channel
authentication which commits to values uniquely identifying the outer
channel (note that the inner vs outer distinction has flipped
around here!)

 * out-of-band verification, where a channel is authenticated by
communicating values identifying the channel (fingerprint, SAS,
sessionIDs) over some other, authenticated channel (e.g. ZRTP's use of
the signalling channel to protect the media channel).

So I think you're focusing on *modularity* between authentication
methods and the record layer, whereas I think John's getting at
*redundancy*.


 I think the way that SSL combined transport layer security,
 authorization, and identification was a terrible idea. I (and others)
 have been saying all along that it was a bad idea, and I hope that the
 related security disasters during the last two years have started
 persuading more people to rethink it, too.

This seems like a different thing again.  I agree that TLS could have
been more modular wrt key agreement and public-key authentication.
 It would be nice if the keys necessary to compute a TLS handshake
were part of TLS, instead of requiring X.509 certs.  This would avoid
self-signed certs, and would allow the client to request various
proofs for the server's public key, which could be X.509, other cert
formats, or other info (CT, TACK, DNSSEC, revocation data, etc.).

But this seems like a minor layering flaw; I'm not sure it should be
blamed for any TLS security problems.  The problems with chaining CBC
IVs, plaintext compression, authenticate-then-encrypt, renegotiation,
and a non-working upgrade path aren't solved by better modularity, nor
are they solved by redundancy.  They're solved by making better
choices.


 I guess the designers of
 SSL were simply following the lead of the original inventors of public
 key cryptography, who delegated certain critical unsolved problems to
 an underspecified Trusted Third Party. What a colossal, historic
 mistake.

If you're talking about the New Directions paper, Diffie and Hellman
talk about a public file.  Certificates were a later idea, due to
Kohnfelder... I'd argue that's where things went wrong...


 1. Go, go, go! The path your thoughts are taking seems fruitful. Just
 design a really good inner layer of crypto, without worrying (for
 now) about the vexing and subtle problems of authorization,
 authentication, naming, Man-In-The-Middle-Attack and so on. For now.

That's easy though, right?  Use a proper KDF from a shared secret, do
authenticated encryption, don't f*ck up the IVs

The worthwhile problems are the hard ones, no? :-)


Trevor


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread John Kelsey
Just thinking out loud

The administrative complexity of a cryptosystem is overwhelmingly in key 
management and identity management and all the rest of that stuff.  So imagine 
that we have a widely-used inner-level protocol that can use strong crypto, but 
also requires no external key management.  The purpose of the inner protocol is 
to provide a fallback layer of security, so that even an attack on the outer 
protocol (which is allowed to use more complicated key management) is unlikely 
to be able to cause an actual security problem.  On the other hand, in case of 
a problem with the inner protocol, the outer protocol should also provide 
protection against everything.

Without doing any key management or requiring some kind of reliable identity or 
memory of previous sessions, the best we can do in the inner protocol is an 
ephemeral Diffie-Hellman, so suppose we do this:  

a.  Generate random a and send aG on curve P256

b.  Generate random b and send bG on curve P256

c.  Both sides derive the shared key abG, and then use SHAKE512(abG) to 
generate an AES key for messages in each direction.

d.  Each side keeps a sequence number to use as a nonce.  Both sides use 
AES-CCM with their sequence number and their sending key, and keep track of the 
sequence number of the most recent message received from the other side.  

The point is, this is a protocol that happens *inside* the main security 
protocol.  This happens inside TLS or whatever.  An attack on TLS then leads to 
an attack on the whole application only if the TLS attack also lets you do 
man-in-the-middle attacks on the inner protocol, or if it exploits something 
about certificate/identity management done in the higher-level protocol.  
(Ideally, within the inner protocol, you do some checking of the identity using 
a password or shared secret or something, but that's application-level stuff 
the inner and outer protocols don't know about.)
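A sketch of steps a-d using the pyca/cryptography package; SHAKE256 stands in for the SHAKE512 shorthand above (hashlib offers only SHAKE128/SHAKE256), and the split into per-direction keys and the counter-as-nonce encoding are assumptions filled in for illustration:

import hashlib

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

curve = ec.SECP256R1()

# a./b.  Each side generates an ephemeral key pair and sends its public point.
priv_a = ec.generate_private_key(curve)
priv_b = ec.generate_private_key(curve)
point_b = priv_b.public_key().public_bytes(
    serialization.Encoding.X962, serialization.PublicFormat.UncompressedPoint)

# c.  Side A derives abG from its secret and B's point, then expands it into
#     one AES-128 key per direction (B does the same with A's point).
shared = priv_a.exchange(
    ec.ECDH(), ec.EllipticCurvePublicKey.from_encoded_point(curve, point_b))
okm = hashlib.shake_256(shared).digest(32)
key_send, key_recv = okm[:16], okm[16:]      # A->B and B->A keys

# d.  A per-direction sequence number doubles as the 96-bit CCM nonce.
seq_send = 0
nonce = seq_send.to_bytes(12, "big")
ciphertext = AESCCM(key_send).encrypt(nonce, b"application data", None)
seq_send += 1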

Thoughts?

--John


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread Bill Frantz

On 10/9/13 at 7:18 PM, crypto@gmail.com (John Kelsey) wrote:

We know how to address one part of this problem--choose only 
algorithms whose design strength is large enough that there's 
not some relatively close by time when the algorithms will need 
to be swapped out.  That's not all that big a problem now--if 
you use, say, AES256 and SHA512 and ECC over P521, then even in 
the far future, your users need only fear cryptanalysis, not 
Moore's Law.  Really, even with 128-bit security level 
primitives, it will be a very long time until the brute-force 
attacks are a concern.


We should try to characterize what a very long time is in 
years. :-)



This is actually one thing we're kind-of on the road to doing 
right in standards now--we're moving away from 
barely-strong-enough crypto and toward crypto that's going to 
be strong for a long time to come.


We had barely-strong-enough crypto because we couldn't afford 
the computation time for longer key sizes. I hope things are 
better now, although there may still be a problem for certain 
devices. Let's hope they are only needed in low security/low 
value applications.



Protocol attacks are harder, because while we can choose a key 
length, modulus size, or sponge capacity to support a known 
security level, it's not so easy to make sure that a protocol 
doesn't have some kind of attack in it.
I think we've learned a lot about what can go wrong with 
protocols, and we can design them to be more ironclad than in 
the past, but we still can't guarantee we won't need to 
upgrade.  But I think this is an area that would be interesting 
to explore--what would need to happen in order to get more 
ironclad protocols?  A couple random thoughts:


I fully agree that this is a valuable area to research.



a.  Layering secure protocols on top of one another might 
provide some redundancy, so that a flaw in one didn't undermine 
the security of the whole system.


Defense in depth has been useful from longer ago than the 
Trojans and Greeks.



b.  There are some principles we can apply that will make 
protocols harder to attack, like encrypt-then-MAC (to eliminate 
reaction attacks), nothing is allowed to change its 
execution path or timing based on the key or plaintext, every 
message includes a sequence number and the hash of the previous 
message, etc.  This won't eliminate protocol attacks, but will 
make them less common.
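A toy sketch of principle (b): encrypt-then-MAC, with the sequence number and the hash of the previous message bound under the MAC (AES-CTR and HMAC-SHA-256 are arbitrary stand-ins here, not part of the proposal):

import hashlib
import hmac
import os

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def send_message(enc_key: bytes, mac_key: bytes, seq: int,
                 prev_hash: bytes, plaintext: bytes):
    nonce = os.urandom(16)
    enc = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
    ct = enc.update(plaintext) + enc.finalize()
    header = seq.to_bytes(8, "big") + prev_hash
    # Encrypt-then-MAC: the tag covers the sequence number, the previous
    # message's hash, and the ciphertext -- never the plaintext directly.
    tag = hmac.new(mac_key, header + nonce + ct, hashlib.sha256).digest()
    msg = header + nonce + ct + tag
    return msg, hashlib.sha256(msg).digest()   # hash chained into the next message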


I think that the attacks on MAC-then-encrypt and timing attacks 
were first described within the last 15 years. I think it is 
only normal paranoia to think there may be some more equally 
interesting discoveries in the future.



c.  We could try to treat at least some kinds of protocols more 
like crypto algorithms, and expect to have them widely vetted 
before use.


Most definitely! Lots of eyes. Formal proofs, because they are a 
completely different way of looking at things. Simplicity. All 
will help.




What else?
...
Perhaps the shortest limit on the lifetime of an embedded 
system is the security protocol, and not the hardware. If so, 
how do we as a society deal with this limit?


What we really need is some way to enforce protocol upgrades 
over time.  Ideally, there would be some notion that if you 
support version X of the protocol, this meant that you would 
not support any version lower than, say, X-2.  But I'm not sure 
how practical that is.


This is the direction I'm pushing today. If you look at auto 
racing you will notice that the safety equipment commonly used 
before WW2 is no longer permitted. It is patently unsafe. We 
need to make the same judgements in high security/high risk applications.


Cheers - Bill

---
Bill Frantz        | The nice thing about standards | Periwinkle
(408)356-8506      | is there are so many to choose | 16345 Englewood Ave
www.pwpconsult.com | from.   - Andrew Tanenbaum     | Los Gatos, CA 95032




Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread Bill Frantz

On 10/9/13 at 7:12 PM, watsonbl...@gmail.com (Watson Ladd) wrote:


On Tue, Oct 8, 2013 at 1:46 PM, Bill Frantz fra...@pwpconsult.com wrote:
... As professionals, we have an obligation to share our 
knowledge of the limits of our technology with the people who 
are depending on it. We know that all crypto standards which 
are 15 years old or older are obsolete, not recommended for 
current use, or outright dangerous. We don't know of any way 
to avoid this problem in the future.


15 years ago is 1997. Diffie-Hellman is much, much older and still
works. Kerberos is of similar vintage. Feige-Fiat-Shamir is from 1988,
Schnorr signatures from 1989.


When I developed the VatTP crypto protocol for the E language 
(www.erights.org) about 15 years ago, key sizes of 1024 bits 
were high security. Now they are seriously questioned. 3DES was 
state of the art. No widely distributed protocols used 
Feige-Fiat-Shamir or Schnorr signatures. Do any now? I stand by 
my statement.




I think the burden of proof is on the people who suggest that 
we only have to do it right the next time and things will be 
perfect. These proofs should address:


New applications of old attacks.
The fact that new attacks continue to be discovered.
The existence of powerful actors subverting standards.
The lack of a "did right" example to point to.


... long post of problems with TLS, most of which are valid 
criticisms deleted as not addressing the above questions.



Protocols involving crypto need to be so damn simple that if it
connects correctly, the chance of a bug is vanishingly small. If we
make a simple protocol, with automated analysis of its security, the
only danger is a primitive failing, in which case we are in trouble
anyway.


I agree with this general direction, but I still don't have the 
warm fuzzies that good answers to the above questions might 
give. I have seen too many projects to "do it right" that didn't 
pull it off.


See also my response to John Kelsey.

Cheers - Bill

---
Bill Frantz        | Privacy is dead, get over | Periwinkle
(408)356-8506      | it.                       | 16345 Englewood Ave
www.pwpconsult.com |   - Scott McNealy         | Los Gatos, CA 95032




Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread Peter Gutmann
Watson Ladd watsonbl...@gmail.com writes:

The obvious solution: Do it right the first time.

And how do you know that you're doing it right?  PGP in 1992 adopted a
bleeding-edge cipher (IDEA) and was incredibly lucky that it's stayed secure
since then.  What new cipher introduced up until 1992 has had that
distinction?  "Doing it right the first time" is a bit like the concept of
stopping rules in heuristic decision-making: if they were that easy, then
people wouldn't be reading this list but would be in Las Vegas applying the
stopping rule "stop playing just before you start losing".

This is particularly hard in standards-based work because any decision about
security design tends to rapidly degenerate into an argument about whose
fashion statement takes priority.  To get back to an earlier example that I
gave on the list, the trivial and obvious fix to TLS of switching from MAC-
then-encrypt to encrypt-then-MAC is still being blocked by the WG chairs after
nearly a year, despite the fact that a straw poll on the list indicated
general support for it (rough consensus) and implementations supporting it are
already deployed (running code).  So "do it right the first time" is a lot
easier said than done.

Peter.


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread Salz, Rich
 TLS was designed to support multiple ciphersuites. Unfortunately this opened 
 the door
 to downgrade attacks, and transitioning to protocol versions that wouldn't do 
 this was nontrivial.
 The ciphersuites included all shared certain misfeatures, leading to the 
 current situation.

On the other hand, negotiation let us deploy it in places where full-strength 
cryptography is/was regulated.

Sometimes half a loaf is better than nothing.

/r$
--  
Principal Security Engineer
Akamai Technology
Cambridge, MA



Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread Stephen Farrell


 On 10 Oct 2013, at 17:06, John Kelsey crypto@gmail.com wrote:
 
 Just thinking out loud
 
 The administrative complexity of a cryptosystem is overwhelmingly in key 
 management and identity management and all the rest of that stuff.  So 
 imagine that we have a widely-used inner-level protocol that can use strong 
 crypto, but also requires no external key management.  The purpose of the 
 inner protocol is to provide a fallback layer of security, so that even an 
 attack on the outer protocol (which is allowed to use more complicated key 
 management) is unlikely to be able to cause an actual security problem.  On 
 the other hand, in case of a problem with the inner protocol, the outer 
 protocol should also provide protection against everything.
 
 Without doing any key management or requiring some kind of reliable identity 
 or memory of previous sessions, the best we can do in the inner protocol is 
 an ephemeral Diffie-Hellman, so suppose we do this:  
 
 a.  Generate random a and send aG on curve P256
 
 b.  Generate random b and send bG on curve P256
 
 c.  Both sides derive the shared key abG, and then use SHAKE512(abG) to 
 generate an AES key for messages in each direction.
 
 d.  Each side keeps a sequence number to use as a nonce.  Both sides use 
 AES-CCM with their sequence number and their sending key, and keep track of 
 the sequence number of the most recent message received from the other side.  
 
 The point is, this is a protocol that happens *inside* the main security 
 protocol.  This happens inside TLS or whatever.  An attack on TLS then leads 
 to an attack on the whole application only if the TLS attack also lets you do 
 man-in-the-middle attacks on the inner protocol, or if it exploits something 
 about certificate/identity management done in the higher-level protocol.  
 (Ideally, within the inner protocol, you do some checking of the identity 
 using a password or shared secret or something, but that's application-level 
 stuff the inner and outer protocols don't know about.)
 
 Thoughts?


Suggest it on the tls wg list as a feature of 1.3?

S

 
 --John


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread John Kelsey
More random thoughts:

The minimal inner protocol would be something like this:

Using AES-CCM with a tag size of 32 bits, IVs constructed based on an implicit 
counter, and an AES-CMAC-based KDF, we do the following:

Sender: 
a.  Generate random 128 bit value R
b.  Use the KDF to compute K[S],N[S],K[R],N[R] = KDF(R, 128+96+128+96)
c.  Sender's 32-bit unsigned counter C[S] starts at 0.
d.  Compute IV[S,0] = 96 bits of binary 0s||C[S]
e.  Send R, CCM(K[S],N[S],IV[S,0],sender_message[0])

Receiver:
a.  Receive R and derive K[S],N[S],K[R],N[R] from it as above.
b.  Set Receiver's counter C[R] = 0.
c.  Compute IV[R,0] = 96 bits of binary 0s||C[R]
d.  Send CCM(K[R],N[R],IV[R,0],receiver_message[0])

and so on.  
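A sketch of the sender side (the CMAC counter-mode KDF below is only in the spirit of SP 800-108, and folding the derived nonce material and the counter into one 96-bit CCM nonce is one possible reading of the steps above, not the only one):

import os

from cryptography.hazmat.primitives import cmac
from cryptography.hazmat.primitives.ciphers import algorithms
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

def kdf(r: bytes, n_bytes: int) -> bytes:
    # AES-CMAC in counter mode, shaped like SP 800-108 but with details assumed.
    out, i = b"", 1
    while len(out) < n_bytes:
        c = cmac.CMAC(algorithms.AES(r))
        c.update(i.to_bytes(4, "big") + b"inner-protocol")
        out += c.finalize()
        i += 1
    return out[:n_bytes]

# Sender
r = os.urandom(16)                          # a. random 128-bit value R
okm = kdf(r, 16 + 12 + 16 + 12)             # b. K[S], N[S], K[R], N[R]
k_s, n_s = okm[:16], okm[16:28]
k_r, n_r = okm[28:44], okm[44:56]
c_s = 0                                     # c. sender counter starts at 0
# d./e. fold the counter into the 96-bit nonce and send R plus the ciphertext;
#       after the first message the only per-message overhead is the 32-bit tag.
iv = (int.from_bytes(n_s, "big") ^ c_s).to_bytes(12, "big")
first_msg = r + AESCCM(k_s, tag_length=4).encrypt(iv, b"sender_message[0]", None)
c_s += 1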

Note that in this protocol, we never send a key or IV or nonce.  The total 
communications overhead of the inner protocol is an extra 160 bits in the first 
message and an extra 32 bits thereafter.  We're assuming the outer protocol is 
taking care of message ordering and guaranteed delivery--otherwise, we need to 
do something more complicated involving replay windows and such, and probably 
have to send along the message counters.  

This doesn't provide a huge amount of extra protection--if the attacker can 
recover more than a very small number of bits from the first message (attacking 
through the outer protocol), then the security of this protocol falls apart.  
But it does give us a bare-minimum-cost inner layer of defenses, inside TLS or 
SSH or whatever other thing we're doing.  

Both this and the previous protocol I sketched have the property that they 
expect to be able to generate random numbers.  There's a problem there, 
though--if the system RNG is weak or trapdoored, it could compromise both the 
inner and outer protocol at the same time.  

One way around this is to have each endpoint that uses the inner protocol 
generate its own internal secret AES key, Q[i].  Then, when it's time to 
generate a random value, the endpoint asks the system RNG for a random number 
X, and computes E_Q(X).  If the attacker knows Q but the system RNG is secure, 
we're fine.  Similarly, if the attacker can predict X but doesn't know Q, we're 
fine.  Even when the attacker can choose the value of X, he can really only 
force the random value in the beginning of the protocol to repeat.  In this 
protocol, that doesn't do much harm.  

The same idea works for the ECDH protocol I sketched earlier.  I request two 
128 bit random values from the system RNG, X, X'.  I then use E_Q(X)||E_Q(X') 
as my ephemeral DH private key. If an attacker knows Q but the system RNG is 
secure, then we get an unpredictable value for the ECDH key agreement.  If an 
attacker knows X,X' but doesn't know Q, he doesn't know what my ECDH ephemeral 
private key is.  If he forces it to a repeated value, he still doesn't weaken 
anything except this run of the protocol--no long-term secret is leaked if AES 
isn't broken.  
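A sketch of that hardening step (Q here is simply a locally held AES key; ECB over fresh random blocks is acceptable because each block is encrypted exactly once):

import os

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Q: per-endpoint secret AES key, generated once and kept local to the endpoint.
Q = os.urandom(16)

def hardened_random(n_blocks: int = 1) -> bytes:
    # Ask the system RNG for X and output E_Q(X).  The result is unpredictable
    # to an attacker who knows Q but not X, and to one who knows X but not Q.
    x = os.urandom(16 * n_blocks)
    enc = Cipher(algorithms.AES(Q), modes.ECB()).encryptor()
    return enc.update(x) + enc.finalize()

# Two blocks give the 256 bits used as the ephemeral ECDH private key above.
ephemeral_scalar = hardened_random(2)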

This is subject to endless tweaking and improvement.  But the basic idea seems 
really valuable:  

a.  Design an inner protocol, whose job is to provide redundancy in security 
against attacks on the outer protocol.

b.  The inner protocol should be:

(i)  As cheap as possible in bandwidth and computational terms.

(ii) Flexible enough to be used extremely widely, implemented in most places, 
etc.  

(iii) Administratively free, adding no key management or related burdens.

(iv) Free from revisions or updates, because the whole point of the inner 
protocol is to provide redundant security.  (That's part of administratively 
free.)  

(v)  There should be one or at most two versions (maybe something like the two 
I've sketched, but better thought out and analyzed).

c.  As much as possible, we want the security of the inner protocol to be 
independent of the security of the outer protocol.  (And we want this without 
wanting to know exactly what the outer protocol will look like.)  This means:

(i)  No shared keys or key material or identity strings or anything.

(ii) The inner protocol can't rely on the RNG being good.

(iii) Ideally, the crypto algorithms would be different, though that may impose 
too high a cost.  At least, we want as many of the likely failure modes to be 
different.  

Comments?  I'm not all that concerned with the protocol being perfect, but what 
do you think of the idea of doing this as a way to add redundant security 
against protocol attacks?  

--John



Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread Richard Outerbridge
On 2013-10-10 (283), at 15:29:33, Stephen Farrell stephen.farr...@cs.tcd.ie 
wrote:

 On 10 Oct 2013, at 17:06, John Kelsey crypto@gmail.com wrote:
 
 Just thinking out loud
 

[]

 c.  Both sides derive the shared key abG, and then use SHAKE512(abG) to 
 generate an AES key for messages in each direction.

How does this prevent MITM?  Where does G come from?

I'm also leery of using literally the same key in both directions.  Maybe a 
simple transform would suffice; maybe not.

 d.  Each side keeps a sequence number to use as a nonce.  Both sides use 
 AES-CCM with their sequence number and their sending key, and keep track of 
 the sequence number of the most recent message received from the other side. 

If the same key is used, there needs to be a simple way of ensuring the 
sequence numbers can never overlap each other.
__outer





Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread John Kelsey
On Oct 10, 2013, at 5:15 PM, Richard Outerbridge ou...@sympatico.ca wrote:
 
 How does this prevent MITM?  Where does G come from?

I'm assuming G is a systemwide shared parameter.  It doesn't prevent 
MITM--remember, the idea here is to make a fairly lightweight protocol to run 
*inside* another crypto protocol like TLS.  The inner protocol mustn't add 
administrative requirements to the application, which means it can't need key 
management from some administrator or something.  The goal is to have an inner 
protocol which can run inside TLS or some similar thing, and which adds a layer 
of added security without the application getting more complicated by needing 
to worry about more keys or certificates or whatever.  

Suppose we have this inner protocol running inside a TLS version that is 
subject to one of the CBC padding reaction attacks.  The inner protocol 
completely blocks that.  

 I'm also leery of using literally the same key in both directions.  Maybe a 
 simple transform would suffice; maybe not.

I probably wasn't clear in my writeup, but my idea was to have different keys 
in different directions--there is a NIST KDF that uses only AES as its crypto 
engine, so this is relatively easy to do using standard components.  
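For example, deriving independent directional keys from the shared secret with an AES-CMAC construction (the label encoding below is a simplified stand-in for the NIST KDF, not the exact SP 800-108 format):

import os

from cryptography.hazmat.primitives import cmac
from cryptography.hazmat.primitives.ciphers import algorithms

shared_secret = os.urandom(16)   # stand-in for the value coming out of key agreement

def direction_key(secret: bytes, label: bytes) -> bytes:
    # One block of an AES-CMAC-based KDF; distinct labels yield independent
    # keys for the two directions from the same shared secret.
    c = cmac.CMAC(algorithms.AES(secret))
    c.update(b"\x00\x00\x00\x01" + label + b"\x00" + (128).to_bytes(4, "big"))
    return c.finalize()

key_client_to_server = direction_key(shared_secret, b"c2s")
key_server_to_client = direction_key(shared_secret, b"s2c")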

--John


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread Trevor Perrin
On Thu, Oct 10, 2013 at 3:32 PM, John Kelsey crypto@gmail.com wrote:
  The goal is to have an inner protocol which can run inside TLS or some 
 similar thing
[...]

 Suppose we have this inner protocol running inside a TLS version that is 
 subject to one of the CBC padding reaction attacks.  The inner protocol 
 completely blocks that.

If you can design an inner protocol to resist such attacks - which
you can, easily - why wouldn't you just design the outer protocol
the same way?


Trevor


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread David Mercer
On Thursday, October 10, 2013, Salz, Rich wrote:

  TLS was designed to support multiple ciphersuites. Unfortunately this
 opened the door
  to downgrade attacks, and transitioning to protocol versions that
 wouldn't do this was nontrivial.
  The ciphersuites included all shared certain misfeatures, leading to the
 current situation.

 On the other hand, negotiation let us deploy it in places where
 full-strength cryptography is/was regulated.

 Sometimes half a loaf is better than nothing.


 The last time various SSL/TLS ciphersuites needed to be removed from
webserver configurations, back when I managed a datacenter some years ago, it
led to the following 'failure modes': either the user's browser now warned
about or refused to connect to a server using an insecure cipher suite, or
the only cipher suites offered by a server weren't supported by an old browser
(or both at once):

1) for sites that had low barriers to switching, loss of traffic/customers
to sites that didn't drop the insecure ciphersuites

2) for sites that are harder to leave (your bank, google/facebook-level
sticky public ones [less common]), large increases in calls to support,
with large costs for the business. Non-PCI-compliant businesses taking CC
payments are generally so insecure that customers who fled to them really
are upping their chances of suffering fraud.

In both cases you have a net decrease of security and an increase of fraud
and financial loss.

So in some cases anything less than a whole loaf, which you can't guarantee
for N years of time, isn't 'good enough.' In other words, we are screwed no
matter what.

-David Mercer



-- 
David Mercer - http://dmercer.tumblr.com
IM:  AIM: MathHippy Yahoo/MSN: n0tmusic
Facebook/Twitter/Google+/Linkedin: radix42
FAX: +1-801-877-4351 - BlackBerry PIN: 332004F7
PGP Public Key: http://davidmercer.nfshost.com/radix42.pubkey.txt
Fingerprint: A24F 5816 2B08 5B37 5096  9F52 B182 3349 0F23 225B

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-09 Thread Watson Ladd
On Tue, Oct 8, 2013 at 7:38 AM, Jerry Leichter leich...@lrw.com wrote:

 On Oct 8, 2013, at 1:11 AM, Bill Frantz fra...@pwpconsult.com wrote:
  If we can't select ciphersuites that we are sure we will always be
 comfortable with (for at least some foreseeable lifetime) then we urgently
 need the ability to *stop* using them at some point.  The examples of MD5
 and RC4 make that pretty clear.
  Ceasing to use one particular encryption algorithm in something like
 SSL/TLS should be the easiest case--we don't have to worry about old
 signatures/certificates using the outdated algorithm or anything.  And yet
 we can't reliably do even that.
 
  We seriously need to consider what the design lifespan of our crypto
 suites is in real life. That data should be communicated to hardware and
 software designers so they know what kind of update schedule needs to be
 supported. Users of the resulting systems need to know that the crypto
 standards have a limited life so they can include update in their
 installation planning.
 This would make a great April Fool's RFC, to go along with the classic
 evil bit.  :-(

 There are embedded systems that are impractical to update and have
 expected lifetimes measured in decades.  RFID chips include cryptography,
 are completely un-updatable, and have no real limit on their lifetimes -
 the percentage of the population represented by any given vintage of
 chips will drop continuously, but it will never go to zero.  We are rapidly
 entering a world in which devices with similar characteristics will, in
 sheer numbers, dominate the ecosystem - see the remote-controllable
 Phillips Hue light bulbs (
 http://www.amazon.com/dp/B00BSN8DLG/?tag=googhydr-20hvadid=27479755997hvpos=1t1hvexid=hvnetw=ghvrand=1430995233802883962hvpone=hvptwo=hvqmt=bhvdev=cref=pd_sl_5exklwv4ax_b)
 as an early example.  (Oh, and there's been an attack against them:
 http://www.engadget.com/2013/08/14/philips-hue-smart-light-security-issues/.
 The response from Phillips to that article says "In developing Hue we have
 used industry standard encryption and authentication techniques ... [O]ur main
 advice to customers is that they take steps to ensure they are secured from
 malicious attacks at a network level."

 The obvious solution: Do it right the first time. Many of the TLS issues
we are dealing with today were known at the time the standard was being
developed. RFID usually isn't that security critical: if a shirt insists
it's an ice cream, a human will usually be around to see that it is a shirt.
AES will last forever, unless cryptanalytic advances develop. Quantum
computers will doom ECC, but in the meantime we are good.

Cryptography for two parties authenticating and communicating is a
solved problem. What isn't solved, and what is behind many of these issues, is 1)
getting the standards committees up to speed and 2) deployment/PKI issues.


 I'm afraid the reality is that we have to design for a world in which some
 devices will be running very old versions of code, speaking only very old
 versions of protocols, pretty much forever.  In such a world, newer devices
 either need to shield their older brethren from the sad realities or
 relegate them to low-risk activities by refusing to engage in high-risk
 transactions with them.  It's by no means clear how one would do this, but
 there really aren't any other realistic alternatives.

Great big warning lights saying "Insecure device! Do not trust!". If Wells
Fargo customers got a "Warning: This site is using outdated security" when
visiting it on all browsers, they would fix that F5 terminator currently
stopping the rest of us from deploying various TLS extensions.

 -- Jerry





-- 
Those who would give up Essential Liberty to purchase a little Temporary
Safety deserve neither  Liberty nor Safety.
-- Benjamin Franklin

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-09 Thread Bill Frantz

On 10/8/13 at 7:38 AM, leich...@lrw.com (Jerry Leichter) wrote:


On Oct 8, 2013, at 1:11 AM, Bill Frantz fra...@pwpconsult.com wrote:


We seriously need to consider what the design lifespan of our 
crypto suites is in real life. That data should be 
communicated to hardware and software designers so they know 
what kind of update schedule needs to be supported. Users of 
the resulting systems need to know that the crypto standards 
have a limited life so they can include update in their 
installation planning.



This would make a great April Fool's RFC, to go along with the classic evil 
bit.  :-(


I think the situation is much more serious than this comment 
makes it appear. As professionals, we have an obligation to 
share our knowledge of the limits of our technology with the 
people who are depending on it. We know that all crypto 
standards which are 15 years old or older are obsolete, not 
recommended for current use, or outright dangerous. We don't 
know of any way to avoid this problem in the future.


I think the burden of proof is on the people who suggest that we 
only have to do it right the next time and things will be 
perfect. These proofs should address:


New applications of old attacks.
The fact that new attacks continue to be discovered.
The existence of powerful actors subverting standards.
The lack of a "did right" example to point to.


There are embedded systems that are impractical to update and 
have expected lifetimes measured in decades...
Many perfectly good PC's will stay on XP forever because even 
if there was the will and staff to upgrade, recent versions of 
Windows won't run on their hardware.

...
I'm afraid the reality is that we have to design for a world in 
which some devices will be running very old versions of code, 
speaking only very old versions of protocols, pretty much 
forever.  In such a world, newer devices either need to shield 
their older brethren from the sad realities or relegate them to 
low-risk activities by refusing to engage in high-risk 
transactions with them.  It's by no means clear how one would 
do this, but there really aren't any other realistic alternatives.


Users of this old equipment will need to make a security/cost 
tradeoff based on their requirements. The ham radio operator who 
is still running Windows 98 doesn't really concern me. (While 
his internet connected system might be a bot, the bot 
controllers will protect his computer from others, so his radio 
logs and radio firmware update files are probably safe.) I've 
already commented on the risks of sending Mailman passwords in 
the clear. Low-value/low-risk targets don't need titanium security.


The power plant which can be destroyed by a cyber attack, c.f. 
STUXNET, does concern me. Gas distribution systems do concern 
me. Banking transactions do concern me, particularly business 
accounts. (The recommendations for online business accounts 
include using a dedicated computer -- good advice.)


Perhaps the shortest limit on the lifetime of an embedded system 
is the security protocol, and not the hardware. If so, how do we 
as a society deal with this limit?


Cheers -- Bill

---
Bill Frantz        | gets() remains as a monument | Periwinkle
(408)356-8506      | to C's continuing support of | 16345 Englewood Ave
www.pwpconsult.com | buffer overruns.             | Los Gatos, CA 95032




Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-09 Thread Watson Ladd
On Tue, Oct 8, 2013 at 1:46 PM, Bill Frantz fra...@pwpconsult.com wrote:

 On 10/8/13 at 7:38 AM, leich...@lrw.com (Jerry Leichter) wrote:

 On Oct 8, 2013, at 1:11 AM, Bill Frantz fra...@pwpconsult.com wrote:


 We seriously need to consider what the design lifespan of our crypto suites 
 is in real life. That data should be communicated to hardware and software 
 designers so they know what kind of update schedule needs to be supported. 
 Users of the resulting systems need to know that the crypto standards have 
 a limited life so they can include update in their installation planning.


 This would make a great April Fool's RFC, to go along with the classic evil 
 bit.  :-(


 I think the situation is much more serious than this comment makes it appear. 
 As professionals, we have an obligation to share our knowledge of the limits 
 of our technology with the people who are depending on it. We know that all 
 crypto standards which are 15 years old or older are obsolete, not 
 recommended for current use, or outright dangerous. We don't know of any way 
 to avoid this problem in the future.

15 years ago is 1997. Diffie-Hellman is much, much older and still
works. Kerberos is of similar vintage. Feige-Fiat-Shamir is from 1988,
Schnorr signatures from 1989.

 I think the burden of proof is on the people who suggest that we only have to 
 do it right the next time and things will be perfect. These proofs should 
 address:

 New applications of old attacks.
 The fact that new attacks continue to be discovered.
 The existence of powerful actors subverting standards.
 The lack of a 'did it right' example to point to.
As one of the 'do it right the first time' people I'm going to argue
that the experience with TLS shows that extensibility doesn't work.

TLS was designed to support multiple ciphersuites. Unfortunately this
opened the door to downgrade attacks, and transitioning to protocol
versions that wouldn't do this was nontrivial. The ciphersuites
included all shared certain misfeatures, leading to the current
situation.

TLS is difficult to model: the use of key confirmation makes standard
security notions not applicable. The fact that every cipher suite is
indicated separately, rather than using generic composition makes
configuration painful.

In addition bugs in widely deployed TLS accelerators mean that the
claimed upgradability doesn't actually exist. Implementations can work
without supporting very necessary features. Had the designers of TLS
used a three-pass Diffie-Hellman protocol with encrypt-then-mac,
rather than the morass they came up with, we wouldn't be in this
situation today. TLS was not exploring new ground: it was well-hoed
turf intellectually, and they still screwed it up.
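
For concreteness, a minimal sketch of the encrypt-then-MAC composition being
pointed at (Python with the pyca/cryptography package; the DH exchange, key
separation and message framing are assumptions of the sketch, not a TLS
proposal):

    import os, hmac, hashlib
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def seal(enc_key, mac_key, plaintext):
        # Encrypt first (AES-CTR), then MAC the nonce plus ciphertext.
        nonce = os.urandom(16)
        enc = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
        ct = enc.update(plaintext) + enc.finalize()
        tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
        return nonce + ct + tag

    def open_(enc_key, mac_key, blob):
        # Verify the MAC before touching the ciphertext; reject on failure.
        nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
        expect = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            raise ValueError("MAC failure")
        dec = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).decryptor()
        return dec.update(ct) + dec.finalize()

The point of the composition is that the receiver never decrypts
unauthenticated data, which removes the padding/reaction-attack surface the
CBC-era suites carried.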

Any standard is only an approximation to what is actually implemented.
Features that aren't used are likely to be skipped or implemented
incorrectly.

Protocols involving crypto need to be so damn simple that if it
connects correctly, the chance of a bug is vanishingly small. If we
make a simple protocol, with automated analysis of its security, the
only danger is a primitive failing, in which case we are in trouble
anyway.


 There are embedded systems that are impractical to update and have expected 
 lifetimes measured in decades...

 Many perfectly good PC's will stay on XP forever because even if there was 
 the will and staff to upgrade, recent versions of Windows won't run on their 
 hardware.
 ...

 I'm afraid the reality is that we have to design for a world in which some 
 devices will be running very old versions of code, speaking only very old 
 versions of protocols, pretty much forever.  In such a world, newer devices 
 either need to shield their older brethren from the sad realities or 
 relegate them to low-risk activities by refusing to engage in high-risk 
 transactions with them.  It's by no means clear how one would do this, but 
 there really aren't any other realistic alternatives.



-- 
Those who would give up Essential Liberty to purchase a little
Temporary Safety deserve neither  Liberty nor Safety.
-- Benjamin Franklin
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-09 Thread John Kelsey
On Oct 8, 2013, at 4:46 PM, Bill Frantz fra...@pwpconsult.com wrote:

 I think the situation is much more serious than this comment makes it appear. 
 As professionals, we have an obligation to share our knowledge of the limits 
 of our technology with the people who are depending on it. We know that all 
 crypto standards which are 15 years old or older are obsolete, not 
 recommended for current use, or outright dangerous. We don't know of any way 
 to avoid this problem in the future.

We know how to address one part of this problem--choose only algorithms whose 
design strength is large enough that there's not some relatively close by time 
when the algorithms will need to be swapped out.  That's not all that big a 
problem now--if you use, say, AES256 and SHA512 and ECC over P521, then even in 
the far future, your users need only fear cryptanalysis, not Moore's Law.  
Really, even with 128-bit security level primitives, it will be a very long 
time until the brute-force attacks are a concern.  
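
As a sketch of that parameter choice using one common library
(pyca/cryptography; the key derivation and the ephemeral-only exchange shown
here are illustrative assumptions, not something specified in this post):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Ephemeral ECDH over P-521 (one side shown; the peer does the same).
    priv = ec.generate_private_key(ec.SECP521R1())
    peer_pub = ec.generate_private_key(ec.SECP521R1()).public_key()  # stand-in for the peer
    shared = priv.exchange(ec.ECDH(), peer_pub)

    # Derive a 256-bit key with HKDF-SHA512, then protect traffic with AES-256-GCM.
    key = HKDF(algorithm=hashes.SHA512(), length=32, salt=None,
               info=b"example").derive(shared)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, b"message", None)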

This is actually one thing we're kind-of on the road to doing right in 
standards now--we're moving away from barely-strong-enough crypto and toward 
crypto that's going to be strong for a long time to come. 

Protocol attacks are harder, because while we can choose a key length, modulus 
size, or sponge capacity to support a known security level, it's not so easy to 
make sure that a protocol doesn't have some kind of attack in it.  

I think we've learned a lot about what can go wrong with protocols, and we can 
design them to be more ironclad than in the past, but we still can't guarantee 
we won't need to upgrade.  But I think this is an area that would be 
interesting to explore--what would need to happen in order to get more ironclad 
protocols?  A couple random thoughts:

a.  Layering secure protocols on top of one another might provide some 
redundancy, so that a flaw in one didn't undermine the security of the whole 
system.  

b.  There are some principles we can apply that will make protocols harder to 
attack, like encrypt-then-MAC (to eliminate reaction attacks), nothing is 
allowed to change its execution path or timing based on the key or 
plaintext, every message includes a sequence number and the hash of the 
previous message (a sketch of that last idea follows this list), etc.  This 
won't eliminate protocol attacks, but will make them less common.

c.  We could try to treat at least some kinds of protocols more like crypto 
algorithms, and expect to have them widely vetted before use.  

What else?  
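
Taking item (b)'s sequence-number-plus-previous-hash rule as an example, a
minimal sketch of per-direction framing (the class name, header layout and use
of HMAC-SHA256 are my own illustration, not from the post):

    import hashlib, hmac

    class SendState:
        def __init__(self, mac_key):
            self.mac_key = mac_key
            self.seq = 0
            self.prev = hashlib.sha256(b"channel-init").digest()

        def frame(self, payload):
            # Each frame authenticates its sequence number and the hash of the
            # previous frame, so replay, reordering and truncation are detectable.
            header = self.seq.to_bytes(8, "big") + self.prev
            tag = hmac.new(self.mac_key, header + payload, hashlib.sha256).digest()
            self.seq += 1
            self.prev = hashlib.sha256(header + payload + tag).digest()
            return header + payload + tag

The receiver mirrors the same state and rejects any frame whose sequence
number, chained hash, or tag does not match.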

 ...
 Perhaps the shortest limit on the lifetime of an embedded system is the 
 security protocol, and not the hardware. If so, how do we as a society deal 
 with this limit?

What we really need is some way to enforce protocol upgrades over time.  
Ideally, there would be some notion that if you support version X of the 
protocol, this meant that you would not support any version lower than, say, 
X-2.  But I'm not sure how practical that is.  
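
A toy illustration of that "supporting X implies refusing anything below X-2"
rule (the version numbers and function below are invented for illustration):

    SUPPORTED_MAX = 7   # highest protocol version this build speaks
    FLOOR_OFFSET = 2    # supporting X commits us to refusing anything below X-2

    def negotiate(peer_versions):
        floor = SUPPORTED_MAX - FLOOR_OFFSET
        mutual = [v for v in peer_versions if floor <= v <= SUPPORTED_MAX]
        if not mutual:
            # Refusing to fall back is the whole point; anything else
            # reintroduces the downgrade path.
            raise ConnectionError("peer offers nothing at or above the floor")
        return max(mutual)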

 Cheers -- Bill

--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-08 Thread Bill Frantz

On 10/6/13 at 8:26 AM, crypto@gmail.com (John Kelsey) wrote:

If we can't select ciphersuites that we are sure we will always 
be comfortable with (for at least some foreseeable lifetime) 
then we urgently need the ability to *stop* using them at some 
point.  The examples of MD5 and RC4 make that pretty clear.
Ceasing to use one particular encryption algorithm in something 
like SSL/TLS should be the easiest case--we don't have to worry 
about old signatures/certificates using the outdated algorithm 
or anything.  And yet we can't reliably do even that.


We seriously need to consider what the design lifespan of our 
crypto suites is in real life. That data should be communicated 
to hardware and software designers so they know what kind of 
update schedule needs to be supported. Users of the resulting 
systems need to know that the crypto standards have a limited 
life so they can include update in their installation planning.


Cheers - Bill

---
Bill Frantz        | If the site is supported by  | Periwinkle
(408)356-8506      | ads, you are the product.    | 16345 Englewood Ave
www.pwpconsult.com |                              | Los Gatos, CA 95032


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-08 Thread Jerry Leichter
On Oct 8, 2013, at 1:11 AM, Bill Frantz fra...@pwpconsult.com wrote:
 If we can't select ciphersuites that we are sure we will always be 
 comfortable with (for at least some foreseeable lifetime) then we urgently 
 need the ability to *stop* using them at some point.  The examples of MD5 
 and RC4 make that pretty clear.
 Ceasing to use one particular encryption algorithm in something like SSL/TLS 
 should be the easiest case--we don't have to worry about old 
 signatures/certificates using the outdated algorithm or anything.  And yet 
 we can't reliably do even that.
 
 We seriously need to consider what the design lifespan of our crypto suites 
 is in real life. That data should be communicated to hardware and software 
 designers so they know what kind of update schedule needs to be supported. 
 Users of the resulting systems need to know that the crypto standards have a 
 limited life so they can include update in their installation planning.
This would make a great April Fool's RFC, to go along with the classic evil 
bit.  :-(

There are embedded systems that are impractical to update and have expected 
lifetimes measured in decades.  RFID chips include cryptography, are completely 
un-updatable, and have no real limit on their lifetimes - the percentage of the 
population represented by any given vintage of chips will drop continuously, 
but it will never go to zero.  We are rapidly entering a world in which devices 
with similar characteristics will, in sheer numbers, dominate the ecosystem - 
see the remote-controllable Philips Hue light bulbs 
(http://www.amazon.com/dp/B00BSN8DLG/?tag=googhydr-20hvadid=27479755997hvpos=1t1hvexid=hvnetw=ghvrand=1430995233802883962hvpone=hvptwo=hvqmt=bhvdev=cref=pd_sl_5exklwv4ax_b)
 as an early example.  (Oh, and there's been an attack against them:  
http://www.engadget.com/2013/08/14/philips-hue-smart-light-security-issues/.  
The response from Philips to that article says "In developing Hue we have used 
industry standard encryption and authentication techniques ... [O]ur main advice 
to customers is that they take steps to ensure they are secured from malicious 
attacks at a network level."

Even in the PC world, where updates are a part of life, makers eventually stop 
producing them for older products.  Windows XP, as of about 10 months ago, was 
running on 1/4 of all PC's - many 100's of millions of PC's.  About 9 months 
from now, Microsoft will ship its final security update for XP.  Many perfectly 
good PC's will stay on XP forever because even if there was the will and staff 
to upgrade, recent versions of Windows won't run on their hardware.

In the Mac world, hardware in general tends to live longer, and there's plenty 
of hardware still running that can't run recent OS's.  Apple pretty much only 
does patches for at most 3 versions of the OS (with a new version roughly every 
year).  The Linux world isn't really much different except that it's less 
likely to drop support for old hardware, and because it tends to be used by a 
more techie audience who are more likely to upgrade, the percentages probably 
look better, at least for PC's.  (But there are antique versions of Linux 
hidden away in all kinds of appliances that no one ever upgrades.)

I'm afraid the reality is that we have to design for a world in which some 
devices will be running very old versions of code, speaking only very old 
versions of protocols, pretty much forever.  In such a world, newer devices 
either need to shield their older brethren from the sad realities or relegate 
them to low-risk activities by refusing to engage in high-risk transactions 
with them.  It's by no means clear how one would do this, but there really 
aren't any other realistic alternatives.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread Nico Williams
On Sat, Oct 05, 2013 at 09:29:05PM -0400, John Kelsey wrote:
 One thing that seems clear to me:  When you talk about algorithm
 flexibility in a protocol or product, most people think you are
 talking about the ability to add algorithms.  Really, you are talking
 more about the ability to *remove* algorithms.  We still have stuff
 using MD5 and RC4 (and we'll probably have stuff using dual ec drbg
 years from now) because while our standards have lots of options and
 it's usually easy to add new ones, it's very hard to take any away.  

Algorithm agility makes it possible to add and remove algorithms.  Both,
addition and removal, are made difficult by the fact that it is
difficult to update deployed code.  Removal is made much more difficult
still by the need to remain interoperable with legacy that has been
deployed and won't be updated fast enough.  I don't know what can be
done about this.  Auto-update is one part of the answer, but it can't
work for everything.

I like the idea of having a CRL-like (or OCSP-like?) system for
revoking algorithms.  This might -in some cases- do nothing more
than warn the user, or -in other cases- trigger auto-update checks.
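
A rough sketch of what such an algorithm-revocation check might look like on
the consuming side (the list format, field names and reasons are hypothetical;
fetching and verifying the signed list are omitted):

    import warnings

    # A signed, periodically refreshed list, obtained the way a CRL or OCSP
    # response would be.  The contents here are made up for illustration.
    ALGORITHM_STATUS = {
        "MD5":  {"state": "revoked", "reason": "practical collisions"},
        "RC4":  {"state": "revoked", "reason": "keystream biases"},
        "SHA1": {"state": "warn",    "reason": "deprecation scheduled"},
    }

    def check_algorithm(name, trigger_update=None):
        entry = ALGORITHM_STATUS.get(name)
        if entry is None:
            return
        if entry["state"] == "revoked":
            if trigger_update is not None:
                trigger_update()     # e.g. kick off an auto-update check
            raise ValueError(f"{name} has been revoked: {entry['reason']}")
        warnings.warn(f"{name}: {entry['reason']}")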

But, really, legacy is a huge problem that we barely know how to
ameliorate a little.  It still seems likely that legacy code will
continue to remain deployed for much longer than the advertised
service lifetime of the same code (see XP, for example), and for at
least a few more product lifecycles (i.e., another 10-15 years
before we come up with a good solution).

Nico
-- 
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread Phillip Hallam-Baker
On Sat, Oct 5, 2013 at 7:36 PM, James A. Donald jam...@echeque.com wrote:

 On 2013-10-04 23:57, Phillip Hallam-Baker wrote:

 Oh and it seems that someone has murdered the head of the IRG cyber
 effort. I condemn it without qualification.


 I endorse it without qualification.  The IRG are bad guys and need killing
 - all of them, every single one.

 War is an honorable profession, and is in our nature.  The lion does no
 wrong to kill the deer, and the warrior does no wrong to fight in a just
 war, for we are still killer apes.

 The problem with the NSA and NIST is not that they are doing warlike
 things, but that they are doing warlike things against their own people.


If people who purport to be on our side go round murdering their people
then they are going to go round murdering people on ours. We already have
Putin's group of thugs murdering folk with Polonium-laced teapots, just so
that there can be no doubt as to the identity of the perpetrators.

We are not at war with Iran. I am aware that there are people who would
like to start a war with Iran, the same ones who wanted to start the war
with Iraq which caused a half million deaths but no war crimes trials to
date.

Iran used to have a democracy, remember what happened to it? It was people
like the brothers Dulles who preferred a convenient dictator to a
democratic government that overthrew it with the help of a rent-a-mob
supplied by one Ayatollah Khomeini.


I believe that it was the Ultra-class signals intelligence that made the
operation possible and the string of CIA inspired coups that installed
dictators or pre-empted the emergence of democratic regimes in many other
countries until the mid 1970s. Which not coincidentally is the time that
mechanical cipher machines were being replaced by electronic.

I have had a rather closer view of your establishment than most. You have
retired four star generals suggesting that in the case of a cyber-attack
against critical infrastructure, the government should declare martial law
within hours. It is not hard to see where that would lead; there are plenty
of US military types who would dishonor their uniforms with a coup at home.
I have met them.


My view is that we would all be rather safer if the NSA went completely
dark for a while, at least until there has been some accountability for the
crimes of the '00s and a full account of which coups the CIA backed, who
authorized them and why.

I have lived with terrorism all my life. My family was targeted by
terrorists that Rep King and Rudy Giuliani profess to wholeheartedly
support to this day. I am not concerned about the terrorists because they
obviously can't win. It is like the current idiocy in Congress, the
Democrats are bound to win because at the end of the day the effects of the
recession that the Republicans threaten to cause will be temporary while
universal health care will be permanent. The threatened harm is not great
enough to cause a change in policy. The only cases where terrorist tactics
have worked is where a small minority have been trying to suppress the
majority, as in Rhodesia or French occupied Spain during the Napoleonic
wars.

But when I see politicians passing laws to stop people voting, judges
deciding that the votes in a Presidential election cannot be counted and
all the other right wing antics taking place in the US at the moment, the
risk of a right wing fascist coup has to be taken seriously.


-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread James A. Donald

On 2013-10-07 01:18, Phillip Hallam-Baker wrote:

We are not at war with Iran.


We are not exactly at peace with Iran either, but that is irrelevant, 
for presumably it was a Jew that did it, and Iran is at war with Jews.

(And they are none too keen on Christians, Bahais, or Zoroastrians either)


I am aware that there are people who would
like to start a war with Iran, the same ones who wanted to start the war
with Iraq which caused a half million deaths but no war crimes trials to
date.


You may not be interested in war, but war is interested in you.   You 
can reasonably argue that we should not get involved in Israel's 
problems, but you should not complain about Israel getting involved in 
Israel's problems.



Iran used to have a democracy


Had a democracy where if you opposed Mohammad Mosaddegh you got murdered 
by Islamists.


Which, of course differs only in degree from our democracy, where (to 
get back to some slight relevance to cryptography) Ladar Levison gets 
put out of business for defending the fourth Amendment, and Pax gets put 
on a government blacklist that requires him to be fired and prohibits 
his business from being funded for tweeting disapproval of affirmative 
action for women in tech.


And similarly, if Hitler's Germany was supposedly not a democracy, why 
then was Roosevelt's America supposedly a democracy?


I oppose democracy because it typically results from, and leads to, 
government efforts to control the thoughts of the people.  There is not 
a large difference between our government requiring Pax to be fired, and 
Mohammad Mosaddegh murdering Haj-Ali Razmara.  Democracy also frequently 
results in large scale population replacement and ethnic cleansing, as 
for example Detroit and the Ivory Coast, as more expensive voters get 
laid off and cheaper voters get imported.


Mohammad Mosaddegh loved democracy because he was successful and 
effective in murdering his opponents, and the Shah was unwilling or 
unable to murder the Shah's opponents.


And our government loves democracy because it can blacklist Pax and 
destroy Levison.


If you want murder and blacklists, population replacement and ethnic 
cleansing, support democracy.  If you don't want murder and blacklists, 
should have supported the Shah.

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread Jerry Leichter
On Oct 5, 2013, at 9:29 PM, John Kelsey wrote:
 One thing that seems clear to me:  When you talk about algorithm flexibility 
 in a protocol or product, most people think you are talking about the ability 
 to add algorithms.  Really, you are talking more about the ability to 
 *remove* algorithms.  We still have stuff using MD5 and RC4 (and we'll 
 probably have stuff using dual ec drbg years from now) because while our 
 standards have lots of options and it's usually easy to add new ones, it's 
 very hard to take any away.  
Q.  How did God create the world in only 6 days?
A.  No installed base.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread Ray Dillinger
Is it just me, or does the government really have absolutely no one
with any sense of irony?  Nor, increasingly, anyone with a sense of
shame?

I have to ask, because after directly suborning the cyber security
of most of the world including the USA, and destroying the credibility
of just about every agency who could otherwise help maintain it, the
NSA kicked off National Cyber Security Awareness Month on the first
of October this year.

http://blog.sfgate.com/hottopics/2013/10/01/as-government-shuts-down-nsa-excitedly-announces-national-cyber-security-awareness-month/

[Slow Clap]  Ten out of ten for audacity, wouldn't you say?

Bear
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread Ray Dillinger


 Original message 
From: Jerry Leichter leich...@lrw.com 
Date: 10/06/2013  15:35  (GMT-08:00) 
To: John Kelsey crypto@gmail.com 
Cc: cryptography@metzdowd.com List cryptography@metzdowd.com,Christoph 
Anton Mitterer cales...@scientia.net,james hughes 
hugh...@mac.com,Dirk-Willem van Gulik di...@webweaving.org 
Subject: Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was:
NIST about to weaken SHA3? 
 
On Oct 5, 2013, at 9:29 PM, John Kelsey wrote:
  Really, you are talking more about the ability to *remove* algorithms.  We 
still have stuff using MD5 and RC4 (and we'll probably have stuff using dual ec 
drbg years from now) because while our standards have lots of options and it's 
usually easy to add new ones, it's very hard to take any away.

Can we do anything about that? If the protocol allows correction (particularly 
remote or automated correction) of an entity using a weak crypto primitive, 
that opens up a whole new set of attacks on strong primitives.

We'd like the answer to be that people will decline to communicate with you if 
you use a weak system,  but honestly when was the last time you had that degree 
of choice in from whom you get exactly the content and services you need?

Can we even make renegotiating the cipher suite inconveniently long or heavy so 
defaulting weak becomes progressively more costly as more people default 
strong? That opens up denial of service attacks, and besides it makes it 
painful to be the first to default strong.

Can a check for a revoked signature for the cipher's security help? That makes 
the CA into a point of control.

Anybody got a practical idea?

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread Jerry Leichter
On Oct 7, 2013, at 12:45 PM, Ray Dillinger b...@sonic.net wrote:
 Can we do anything ...[to make it possible to remove old algorithms]? If the 
 protocol allows correction (particularly remote or automated correction) of 
 an entity using a weak crypto primitive, that opens up a whole new set of 
 attacks on strong primitives.
 
 We'd like the answer to be that people will decline to communicate with you 
 if you use a weak system,  but honestly when was the last time you had that 
 degree of choice in from whom you get exactly the content and services you 
 need?
 
 Can we even make renegotiating the cipher suite inconveniently long or heavy 
 so defaulting weak becomes progressively more costly as more people default 
 strong? That opens up denial of service attacks, and besides it makes it 
 painful to be the first to default strong.
 
 Can a check for a revoked signature for the cipher's security help? That 
 makes the CA into a point of control.
 
 Anybody got a practical idea?
I don't see how there can be any solution to this.  Slow renegotiation doesn't 
affect users until it gets to the point where they feel the something is 
broken; at that point, the result to them is indistinguishable from just 
refusing connections with the old suites.  And of course what's broken is never 
*their* software, it's the other guy's - and given the alternative, they'll go 
to someone who isn't as insistent that their potential customers do it the 
right way.  So you'll just set off a race to the bottom.

Revoking signatures ... well, just how effective are bad signature warnings 
today?  People learn - in fact, are often *taught* - to click through them.  If 
software refuses to let them do that, they'll look for other software.

Ultimately, I think you have to look at this as an economic issue.  The only 
reason to change your software is if the cost of changing is lower than the 
estimated future cost of *not* changing.  Most users (rightly) estimate that 
the chance of them losing much is very low.  You can change that estimate by 
imposing a cost on them, but in a world of competitive suppliers (and consumer 
protection laws) that's usually not practical.

It's actually interesting to consider the single counter-example out there:  
the iOS world (and, to a slightly lesser degree, the OS X world).  Apple doesn't 
force iOS users to upgrade their existing hardware (and sometimes it's 
obsolete and isn't software-upgradeable) but in fact iOS users upgrade very 
quickly.  (iOS 7 exceeded 50% of installations within 7 days - a faster ramp 
than iOS 6.  Based on past patterns, iOS 7 will be in the high 90's in a fairly 
short time.)  No other software comes anywhere close to that.  Moving from iOS 
6 to iOS 7 is immensely more disruptive than moving to a new browser version 
(say) that drops support for a vulnerable encryption algorithm.  And yet huge 
numbers of people do it.  Clearly it's because of the new things in iOS 7 - and 
yet Microsoft still has a huge population of users on XP.

I think the real take-away here is that getting upgrades into the field is a 
technical problem only at the margins.  It has to do with people's attitudes in 
subtle ways that Apple has captured and others have not.  (Unanswerable 
question:  If the handset makers and the Telco vendors didn't make it so hard - 
often impossible - to upgrade, what would the market penetration numbers for 
different Android versions look like?)

-- Jerry


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread Phillip Hallam-Baker
On Sun, Oct 6, 2013 at 11:26 AM, John Kelsey crypto@gmail.com wrote:

 If we can't select ciphersuites that we are sure we will always be
 comfortable with (for at least some foreseeable lifetime) then we urgently
 need the ability to *stop* using them at some point.  The examples of MD5
 and RC4 make that pretty clear.

 Ceasing to use one particular encryption algorithm in something like
 SSL/TLS should be the easiest case--we don't have to worry about old
 signatures/certificates using the outdated algorithm or anything.  And yet
 we can't reliably do even that.


I proposed a mechanism for that a long time back based on Rivest's notion
of a suicide note in SDSI.


The idea was that some group of cryptographers get together and create some
random numbers which they then keyshare amongst themselves so that there
are (say) 11 shares and a quorum of 5.

Let the key be k; if the algorithm being witnessed is AES, then the value
AES(k) is published as the 'witness value' for AES.

A device that ever sees the witness value for AES presented knows to stop
using it. It is in effect a 'suicide note' for AES.
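
One possible reading of the witness construction, sketched in Python with the
pyca/cryptography package; the post does not pin down what AES(k) means, so
here it is taken as AES encryption of a fixed public block under the secret
key k, and the 5-of-11 secret sharing of k is left to a separate library:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    FIXED_BLOCK = b"WITNESS-FOR-AES!"   # 16 bytes; an assumed public convention

    def witness(k):
        # AES(k): encrypt the fixed block under k (a single ECB block suffices).
        enc = Cipher(algorithms.AES(k), modes.ECB()).encryptor()
        return enc.update(FIXED_BLOCK) + enc.finalize()

    # Setup by the custodians: pick k, publish witness(k), secret-share k 5-of-11.
    k = os.urandom(32)
    PUBLISHED_WITNESS = witness(k)

    def is_suicide_note(candidate_k):
        # A device shown a key checks it against the published witness value;
        # a match means the quorum (or someone who broke AES) released k.
        return witness(candidate_k) == PUBLISHED_WITNESS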


Similar witness functions can be specified easily enough for hashes etc. We
already have the RSA factoring competition for RSA public key. In fact I
suggested to Burt Kaliski that they expand the program.

The cryptographic basis here is that there are only two cases where the
witness value will be released, either there is an expert consensus to stop
using AES (or whatever) or someone breaks AES.

The main downside is that there are many applications where you can't
tolerate fail-open. For example in the electricity and power system it is
more important to keep the system going than to preserve confidentiality.
An authenticity attack on the other hand might be cause...

-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-06 Thread James A. Donald

On 2013-10-04 23:57, Phillip Hallam-Baker wrote:

Oh and it seems that someone has murdered the head of the IRG cyber
effort. I condemn it without qualification.


I endorse it without qualification.  The IRG are bad guys and need 
killing - all of them, every single one.


War is an honorable profession, and is in our nature.  The lion does no 
wrong to kill the deer, and the warrior does no wrong to fight in a just 
war, for we are still killer apes.


The problem with the NSA and NIST is not that they are doing warlike 
things, but that they are doing warlike things against their own people.

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-06 Thread John Kelsey
One thing that seems clear to me:  When you talk about algorithm flexibility in 
a protocol or product, most people think you are talking about the ability to 
add algorithms.  Really, you are talking more about the ability to *remove* 
algorithms.  We still have stuff using MD5 and RC4 (and we'll probably have 
stuff using dual ec drbg years from now) because while our standards have lots 
of options and it's usually easy to add new ones, it's very hard to take any 
away.  

--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-05 Thread John Kelsey
On Oct 4, 2013, at 10:10 AM, Phillip Hallam-Baker hal...@gmail.com wrote:
...
 Dobbertin demonstrated a birthday attack on MD5 back in 1995 but it had no 
 impact on the security of certificates issued using MD5 until the attack was 
 dramatically improved and the second pre-image attack became feasible.

Just a couple nitpicks: 

a.  Dobbertin wasn't doing a birthday (brute force collision) attack, but 
rather a collision attack from a chosen IV.  

b.  Preimages with MD5 still are not practical.  What is practical is using the 
very efficient modern collision attacks to do a kind of herding attack, where 
you commit to one hash and later get some choice about which message gives that 
hash.  

...
 Proofs are good for getting tenure. They produce papers that are very 
 citable. 

There are certainly papers whose only practical importance is getting a smart 
cryptographer tenure somewhere, and many of those involve proofs.  But there's 
also a lot of value in being able to look at a moderately complicated thing, 
like a hash function construction or a block cipher chaining mode, and show 
that the only way anything can go wrong with that construction is if some 
underlying cryptographic object has a flaw.  Smart people have proposed 
chaining modes that could be broken even when used with a strong block cipher.  
You can hope that security proofs will keep us from doing that.  

Now, sometimes the proofs are wrong, and almost always, they involve a lot of 
simplification of reality (like most proofs aren't going to take low-entropy 
RNG outputs into account).  But they still seem pretty valuable to me for 
real-world things.  Among other things, they give you a completely different 
way of looking at the security of a real-world thing, with different people 
looking over the proof and trying to attack things.  

--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-05 Thread Phillip Hallam-Baker
On Fri, Oct 4, 2013 at 10:23 AM, John Kelsey crypto@gmail.com wrote:

 On Oct 4, 2013, at 10:10 AM, Phillip Hallam-Baker hal...@gmail.com
 wrote:
 ...
  Dobbertin demonstrated a birthday attack on MD5 back in 1995 but it had
 no impact on the security of certificates issued using MD5 until the attack
 was dramatically improved and the second pre-image attack became feasible.

 Just a couple nitpicks:

 a.  Dobbertin wasn't doing a birthday (brute force collision) attack, but
 rather a collision attack from a chosen IV.


Well if we are going to get picky, yes it was a collision attack but the
paper he circulated in 1995 went beyond a collision from a known IV, he had
two messages that resulted in the same output when fed a version of MD5
where one of the constants had been modified in one bit position.



 b.  Preimages with MD5 still are not practical.  What is practical is
 using the very efficient modern collision attacks to do a kind of herding
 attack, where you commit to one hash and later get some choice about which
 message gives that hash.


I find the preimage nomenclature unnecessarily confusing and have to look
up the distinction between first, second and platform 9 3/4 each time I do
a paper.



 ...
  Proofs are good for getting tenure. They produce papers that are very
 citable.

 There are certainly papers whose only practical importance is getting a
 smart cryptographer tenure somewhere, and many of those involve proofs.
  But there's also a lot of value in being able to look at a moderately
 complicated thing, like a hash function construction or a block cipher
 chaining mode, and show that the only way anything can go wrong with that
 construction is if some underlying cryptographic object has a flaw.  Smart
 people have proposed chaining modes that could be broken even when used
 with a strong block cipher.  You can hope that security proofs will keep us
 from doing that.


Yes, that is what I would use them for. But I note that a very large
fraction of the field has studied formal methods, including myself and few
of us find them to be quite as useful as the academics think them to be.

The oracle model is informative but does not necessarily need to be reduced
to symbolic logic to make a point.


 Now, sometimes the proofs are wrong, and almost always, they involve a lot
 of simplification of reality (like most proofs aren't going to take
 low-entropy RNG outputs into account).  But they still seem pretty valuable
 to me for real-world things.  Among other things, they give you a
 completely different way of looking at the security of a real-world thing,
 with different people looking over the proof and trying to attack things.


I think the main value of formal methods turns out to be pedagogical. When
you teach students formal methods they quickly discover that the best way
to deliver a proof is to refine out every bit of crud possible before
starting and arrive at an appropriate level of abstraction.

But oddly enough I am currently working on a paper that presents a
formalized approach.


-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-05 Thread james hughes

On Oct 2, 2013, at 7:46 AM, John Kelsey crypto@gmail.com wrote:

 Has anyone tried to systematically look at what has led to previous crypto 
 failures?  T

In the case we are in now, I don't think that it is actually crypto failures 
(RSA is still secure, but 1024 bit is not. 2048 DHE is still secure, but no one 
uses it, AES is secure, but not with an insecure key exchange) but standards 
failures. These protocol and/or implementation failures are either because the 
standards committee said to the cryptographers 'prove it' (the case of WEP) and 
even when an algorithm is dead, they refuse to deprecate it (the MD5 certificate 
mess), or just use a bad RNG (too many examples to cite). 

The antibodies in the standards committees need to read this and think about it 
really hard. 

 (1)  Overdesign against cryptanalysis (have lots of rounds)
 (2)  Overdesign in security parameters (support only high security levels, 
 use bigger than required RSA keys, etc.) 
 (3)  Don't accept anything without a proof reducing the security of the whole 
 thing down to something overdesigned in the sense of (1) or (2).

and (4) Assume algorithms fall faster than Moore's law and, in the standard, 
provide a sunset date.

I completely agree. 
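
As a sketch of what a standard-mandated sunset date could look like to an
implementer (the suite names and dates below are invented for illustration):

    from datetime import date

    # Hypothetical registry: each suite carries the sunset date its standard sets.
    SUNSET = {
        "TLS_RSA_WITH_RC4_128_MD5":            date(2010, 1, 1),
        "TLS_DHE_RSA_WITH_AES_256_GCM_SHA384": date(2030, 1, 1),
    }

    def suite_allowed(name, today=None):
        # A suite with no registered sunset date, or one past its date, is refused.
        today = today or date.today()
        sunset = SUNSET.get(name)
        return sunset is not None and today < sunset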


<rhetoric>
The insane thing is that it is NOT the cryppies that are complaining about 
moving to RSA 2048 and 2048 bit DHE, it is the standards wonks that complain 
that a 3ms key exchange is excessive. 

Who is the CSO of the Internet? We have Vint Cerf, Bob Kahn or Sir Tim, but 
what about security? Who is responsible for the security of eCommerce? Who will 
VISA turn to? It was NIST (effectively). Thank you NSA, because of you NIST now 
has lost most of its credibility. (Secrets are necessary, but many come to 
light over time. Was the probability of throwing NIST under the bus 
[http://en.wikipedia.org/wiki/Throw_under_the_bus] part of the challenge in 
finesse? Did NSA consider backing down when the Shumow, Ferguson presentation 
(which Schneier blogged about) came to light in 2007?).  

We have a mess. Who is going to lead? Can the current IETF Security Area step 
into the void? They have cryptographers on the Directorate list, but history 
has shown that they are not incredibly effective at implementing a 
cryptographic vision. One can easily argue that vision is rarely provided by a 
committee oversight committee. 
</rhetoric>


John: Thank you. These are absolutely the right criteria. 

Now what? 

Jim

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-03 Thread Alan Braggins

On 02/10/13 18:42, Arnold Reinhold wrote:

On 1 Oct 2013 23:48 Jerry Leichter wrote:


The larger the construction project, the tighter the limits on this stuff.  I used to work with a former structural 
engineer, and he repeated some of the bad example stories they are taught.  A famous case a number of years 
back involved a hotel in, I believe, Kansas City.  The hotel had a large, open atrium, with two levels of concrete 
skyways for walking above.  The skyways were hung from the roof.  As the structural engineer 
specified their attachment, a long threaded steel rod ran from the roof, through one skyway - with the skyway held on 
by a nut - and then down to the second skyway, also held on by a nut.  The builder, realizing that he would have to 
thread the nut for the upper skyway up many feet of rod, made a minor change:  He instead used two threaded 
rods, one from roof to upper skyway, one from upper skyway to lower skyway.  It's all the same, right?  Well, no:  In 
the original design, the upper nut holds the weight of just the upper skyway.  In the modified version, it holds the weight of *both* skyways.  The upper fastening 
failed, the structure collapsed, and as I recall several people on the skyways 
at the time were killed.  So ... not even a factor of two safety margin there.  
(The take-away from the story as delivered to future structural engineers was 
*not* that there wasn't a large enough safety margin - the calculations were 
accurate and well within the margins used in building such structures.  The 
issue was that no one checked that the structure was actually built as 
designed.)


This would be the 1981 Kansas City Hyatt Regency walkway collapse 
(http://en.wikipedia.org/wiki/Hyatt_Regency_walkway_collapse)


Which says of the original design: "Investigators determined eventually 
that this design supported only 60 percent of the minimum load required 
by Kansas City building codes."[19], though the reference seems to be a 
dead link. (And as built it supported 30% of the required minimum.)


So even if it had been built as designed, the safety margin would not
have been well within the margins used in building such structures.

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-03 Thread James A. Donald

On 2013-10-03 00:46, John Kelsey wrote:

a.  Most attacks come from protocol or mode failures, not so much crypto 
primitive failures.  That is, there's a reaction attack on the way CBC 
encryption and message padding play with your application, and it doesn't 
matter whether you're using AES or FEAL-8 for your block cipher.


The repeated failures of wifi are more crypto primitive failure, though 
underlying crypto primitives were abused in ways that exposed subtle 
weaknesses.



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-03 Thread ianG

On 2/10/13 17:46 PM, John Kelsey wrote:

Has anyone tried to systematically look at what has led to previous crypto 
failures?


This has been a favourite topic of mine, ever since I discovered that 
the entire foundation of SSL was built on theory, never confirmed in 
practice.  But my views are informal, never published nor systematic. 
Here's a history I started for risk management of CAs, informally:


http://wiki.cacert.org/Risk/History

But I don't know of any general history of internet protocol breaches.




That would inform us about where we need to be adding armor plate.  My 
impression (this may be the availability heuristic at work) is that:



a.  Most attacks come from protocol or mode failures, not so much crypto 
primitive failures.  That is, there's a reaction attack on the way CBC 
encryption and message padding play with your application, and it doesn't 
matter whether you're using AES or FEAL-8 for your block cipher.



Most attacks go around the protocol, or as Adi so eloquently put it. 
Then, of the rest, most go against the software engineering outer 
layers.  Attacks become less and less frequent as we peel the onion to 
get to the crypto core.  However, it would be good to see an empirical 
survey of these failures, in order to know if my picture is accurate.




b.  Overemphasis on performance (because it's measurable and security usually 
isn't) plays really badly with having stuff be impossible to get out of the 
field when it's in use.  Think of RC4 and DES and MD5 as examples.



Yes.  Software engineers are especially biased by this issue.  Although 
it rarely causes a breach, it more often distracts attention from what 
really matters.



c.  The ways I can see to avoid problems with crypto primitives are:

(1)  Overdesign against cryptanalysis (have lots of rounds)



Frankly, I see this as a waste.  The problem with rounds and analysis of 
same is that it isn't just one algorithm, it's many.  Which means you 
are overdesigning for many algorithms, which means ... what?


It is far better to select a target such as 128 bit security, and then 
design each component to meet this target.  If you want overdesign 
then up the target to 160 bits, etc.  And make all the components 
achieve this.


The papers and numbers shown on keylength.com provide the basis for 
this.  It's also been frequently commented that the NSA's design of 
Skipjack was balanced this way, and that's how they like it.
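
For reference, the comparable-strength figures usually cited (NIST SP 800-57
and keylength.com; a summary added here, not part of the post) pair up roughly
as follows:

    # Approximate comparable strengths, in bits of security (SP 800-57 style).
    COMPARABLE = {
        128: {"symmetric": "AES-128", "rsa_dh": 3072,  "ecc": 256, "hash": "SHA-256"},
        192: {"symmetric": "AES-192", "rsa_dh": 7680,  "ecc": 384, "hash": "SHA-384"},
        256: {"symmetric": "AES-256", "rsa_dh": 15360, "ecc": 521, "hash": "SHA-512"},
    }

Picking one row and making every component meet it is the "balanced" design
being described here.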


Also note that the black-box effect in crypto protocols is very 
important.  Once we have the black box (achieved par excellence by block 
ciphers and MDs) we can then concentrate on the protocol using those 
boxes.  Which is to say that, because crypto can be black-boxed, then 
security protocols are far more of a software engineering problem than 
they are a crypto problem.  (As you know, the race is now on to develop 
an AE stream black box.)


Typically then we model the failure of an entire black box, as if it is 
totally transparent, rather than if it becomes weak.  For example, in my 
payments work, I ask what happens if my AES128 fails?  Well, because 
all payments are signed by RSA2048, the attacker can only read the 
payments, but cannot make or inject payments.  And the converse.


This software engineering approach dominates questions such as AES at 
128 level or 96 level, as it covers more attack surface area than the 
bit strength question.




(2)  Overdesign in security parameters (support only high security levels, use 
bigger than required RSA keys, etc.)



As above.  Perhaps the reason why I like a balanced approach is that, by 
the time that some of the components have started to show their age (and 
overdesign is starting to look attractive in hindsight) we have moved on 
*for everything*.


Which is to say, it's time to replace the whole darn lot, and no 
overdesign would have saved us.  E.g., look at SSL's failures.  All 
(most?) of them were design flaws from complexity, none of them could be 
saved by overdesign in terms of rounds or params.


So, overdesign can be seen as a sort of end-of-lifecycle bias of hindsight.



(3)  Don't accept anything without a proof reducing the security of the whole 
thing down to something overdesigned in the sense of (1) or (2).



Proofs are ... good for cryptographers :)  As I'm not, I can't comment 
further (nor do I design to them).




iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-02 Thread Jerry Leichter
On Oct 1, 2013, at 12:27 PM, Dirk-Willem van Gulik wrote:
 It's clear what 10x stronger than needed means for a support beam:  We're 
 pretty good at modeling the forces on a beam and we know how strong beams of 
 given sizes are.  
 Actually - do we ? I picked this example as it is one of those where this 'we 
 know' falls apart on closer examination. Wood varies a lot; and our ratings 
 are very rough. We drill holes through it; use hugely varying ways to 
 glue/weld/etc. And we liberally apply safety factors everywhere; and a lot of 
 'otherwise it does not feel right' throughout. And in all fairness - while 
 you can get a bunch of engineers to agree that 'it is strong enough' - they'd 
 argue endlessly and have 'it depends' sort of answers when you ask them how 
 strong is it 'really' ?
[Getting away from crypto, but ... ]  Having recently had significant work done 
on my house, I've seen this kind of thing close up.

There are three levels of construction.  If you're putting together a small 
garden shed, it looks right is generally enough - at least if it's someone 
with sufficient experience.  If you're talking non-load-bearing walls, or even 
some that bear fairly small loads, you follow standards - use 2x4's, space them 
36" apart, use doubled 2x4's over openings like windows and doors, don't cut 
holes larger than some limit - and you'll be fine (based on what I saw, you 
could cut a hole large enough for a water supply, but not for a water drain 
pipe).  Methods of attachment are also specified.  These standards - enforced 
by building codes - are deliberately chosen with large safety margins so that 
you don't need to do any detailed calculations.  They are inherently safe over 
some broad range of sizes of a constructed object.

Beyond that, you get into the realm of computation.  I needed a long open span, 
which was accomplished with an LV beam (engineered wood - LV is Layered 
Veneer).  The beam was supporting a good piece of the house's roof, so the 
actual forces needed to be calculated.  LV beams come in multiple sizes, and 
the strengths are well characterized.  In this case, we would not have wanted 
the architect/structural engineer to just build in a larger margin of safety:  
There was limited space in the attic to get this into place, and if we chose 
too large an LV beam just for good measure, it wouldn't fit.  Alternatively, 
we could have added a vertical support beam just to be sure - but it would 
have disrupted the kitchen.  (A larger LV beam would also have cost more money, 
though with only one beam, the percentage it would have added to the total cost 
would have been small.  On a larger project - or, if we'd had to go with a 
steel beam if no LV beam of appropriate size and strength existed - the cost increase could have been significant.)

The larger the construction project, the tighter the limits on this stuff.  I 
used to work with a former structural engineer, and he repeated some of the 
bad example stories they are taught.  A famous case a number of years back 
involved a hotel in, I believe, Kansas City.  The hotel had a large, open 
atrium, with two levels of concrete skyways for walking above.  The skyways 
were hung from the roof.  As the structural engineer specified their 
attachment, a long threaded steel rod ran from the roof, through one skyway - 
with the skyway held on by a nut - and then down to the second skyway, also 
held on by a nut.  The builder, realizing that he would have to thread the nut 
for the upper skyway up many feet of rod, made a minor change:  He instead 
used two threaded rods, one from roof to upper skyway, one from upper skyway to 
lower skyway.  It's all the same, right?  Well, no:  In the original design, 
the upper nut holds the weight of just the upper skyway.  In the modified version, it holds the weight of *both* skyways.  The upper fastening 
failed, the structure collapsed, and as I recall several people on the skyways 
at the time were killed.  So ... not even a factor of two safety margin there.  
(The take-away from the story as delivered to future structural engineers was 
*not* that there wasn't a large enough safety margin - the calculations were 
accurate and well within the margins used in building such structures.  The 
issue was that no one checked that the structure was actually built as 
designed.)

I'll leave it to others to decide whether, and how, these lessons apply to 
crypto design.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-02 Thread John Kelsey
Has anyone tried to systematically look at what has led to previous crypto 
failures?  That would inform us about where we need to be adding armor plate.  
My impression (this may be the availability heuristic at work) is that:

a.  Most attacks come from protocol or mode failures, not so much crypto 
primitive failures.  That is, there's a reaction attack on the way CBC 
encryption and message padding play with your application, and it doesn't 
matter whether you're using AES or FEAL-8 for your block cipher.  

b.  Overemphasis on performance (because it's measurable and security usually 
isn't) plays really badly with having stuff be impossible to get out of the 
field when it's in use.  Think of RC4 and DES and MD5 as examples.  

c.  The ways I can see to avoid problems with crypto primitives are:

(1)  Overdesign against cryptanalysis (have lots of rounds)

(2)  Overdesign in security parameters (support only high security levels, use 
bigger than required RSA keys, etc.) 

(3)  Don't accept anything without a proof reducing the security of the whole 
thing down to something overdesigned in the sense of (1) or (2).

--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-02 Thread Jonathan Thornburg
<maybe offtopic>
On Tue, 1 Oct 2013, someone who (if I've unwrapped the nested quoting
correctly) might have been Jerry Leichter wrote:
 There are three levels of construction.  If you're putting together
 a small garden shed, it looks right is generally enough - at least
 if it's someone with sufficient experience.  If you're talking
 non-load-bearing walls, or even some that bear fairly small loads,
 you follow standards - use 2x4's, space them 36" apart, [[...]]

Standard construction in the US & Canada uses 2x4's on 16" (repeat: 16")
centers.  Perhaps there's a lesson here:  leave carpentry to people
who are experts at carpentry.
</maybe offtopic>
And leave crypto to people who are experts at crypto.

-- 
-- Jonathan Thornburg [remove -animal to reply] 
jth...@astro.indiana-zebra.edu
   Dept of Astronomy  IUCSS, Indiana University, Bloomington, Indiana, USA
   There was of course no way of knowing whether you were being watched
at any given moment.  How often, or on what system, the Thought Police
plugged in on any individual wire was guesswork.  It was even conceivable
that they watched everybody all the time.  -- George Orwell, 1984
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-02 Thread Arnold Reinhold
On 1 Oct 2013 23:48 Jerry Leichter wrote:

 The larger the construction project, the tighter the limits on this stuff.  I 
 used to work with a former structural engineer, and he repeated some of the 
 bad example stories they are taught.  A famous case a number of years back 
 involved a hotel in, I believe, Kansas City.  The hotel had a large, open 
 atrium, with two levels of concrete skyways for walking above.  The 
 skyways were hung from the roof.  As the structural engineer specified 
 their attachment, a long threaded steel rod ran from the roof, through one 
 skyway - with the skyway held on by a nut - and then down to the second 
 skyway, also held on by a nut.  The builder, realizing that he would have to 
 thread the nut for the upper skyway up many feet of rod, made a minor 
 change:  He instead used two threaded rods, one from roof to upper skyway, 
 one from upper skyway to lower skyway.  It's all the same, right?  Well, no:  
 In the original design, the upper nut holds the weight of just the upper 
 skyway.  In the modified version, it holds the weight of *both* skyways.  The upper fastening 
 failed, the structure collapsed, and as I recall several people on the 
 skyways at the time were killed.  So ... not even a factor of two safety 
 margin there.  (The take-away from the story as delivered to future 
 structural engineers was *not* that there wasn't a large enough safety margin 
 - the calculations were accurate and well within the margins used in building 
 such structures.  The issue was that no one checked that the structure was 
 actually built as designed.)
 
 I'll leave it to others to decide whether, and how, these lessons apply to 
 crypto design.

This would be the 1981 Kansas City Hyatt Regency walkway collapse 
(http://en.wikipedia.org/wiki/Hyatt_Regency_walkway_collapse), where 114 people 
died, a bit more than "several". And the take-away included the fact that 
there were no architectural codes covering that particular structural design. I 
believe they now exist and include a significant safety margin.  The Wikipedia 
article includes a link to a NIST technical report on the disaster, but NIST 
and its web site are now closed due to the government shutdown. 

The concept of safety margin is a meta-design principle that is basic to 
engineering.  It's really the only way to answer the questions, vital in 
retrospect, we don't yet know to ask.  

That nist.gov is down also keeps me from reading the slide sets there on the 
proposal to change to SHA-3 from the design that won the competition.  I'll 
reserve judgment on the technical arguments until I can see them, but there is 
a separate question of how much time the cryptographic community should be 
given to analyze a major change like that (think years). I would also note that 
the opinions of the designers of Keccak, while valuable, should not be 
considered dispositive any more than they were in the original competition.  


Arnold Reinhold
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-01 Thread Dirk-Willem van Gulik

On 30 Sep 2013, at 05:12, Christoph Anton Mitterer 
cales...@scientia.net wrote:
 
 Not sure whether this has been pointed out / discussed here already (but
 I guess Perry will reject my mail in case it has):
 
 https://www.cdt.org/blogs/joseph-lorenzo-hall/2409-nist-sha-3
 This makes NIST seem somehow like liars,... on the one hand they claim

Do keep in mind that in this case the crux is not around SHA-3 as a 
specification/algorithm - but about the number of bits one should use.

One aspect in all this is the engineering culture into which standards (such as 
those created by NIST) finally land. 

Is it one which is a bit insecure and just does the absolute minimum; or is 
it one where practitioners have certain gut feelings - and take those as 
absolute minimums?

I do note that in crypto (possibly driven by the perceived expense of too many 
bits) we tend to very carefully observe the various bit lengths found in 
800-78-3, 800-131A , etc etc. And rarely go much beyond it*.

While in a lot of other fields - it is very common for 'run of the mill' 
constructions; such as when calculating a floor, wooden support beam, a joist, 
to take the various standards and liberally apply safety factors. A factor 10 
or 20x too strong is quite common *especially* in 'consumer' constructions.  

It is only when one does large/complex engineering works that one takes the time 
to really calculate strength; and even then - a factor of 2 or 3 is still very 
common, and barely raises an eyebrow with a cost-conscious customer. 

So perhaps we need to look at those NIST et al. standards in crypto and do the 
same - take them as an absolute minimum; but by default and routinely not feel 
guilty when we add a 10x or more. 
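
To make 'take them as an absolute minimum' concrete: a minimal sketch, for one of 
the few knobs we can turn routinely - the iteration count in password-based key 
derivation. It assumes Python's standard hashlib, and it assumes a published 
floor of 1,000 PBKDF2 iterations (the figure suggested in SP 800-132); the 100x 
multiplier is just the 'don't feel guilty' margin, not a number from any standard.

import hashlib
import os

# Hypothetical policy: treat the standard's iteration-count floor as a
# minimum, not a target, and multiply it by a generous safety factor.
PUBLISHED_MINIMUM = 1000   # assumed floor (SP 800-132 suggests at least 1,000)
SAFETY_FACTOR = 100        # arbitrary 'do not feel guilty' margin

def derive_key(password, salt=None):
    # PBKDF2-HMAC-SHA256, 256-bit output, iteration count well above the floor.
    if salt is None:
        salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password, salt,
                              PUBLISHED_MINIMUM * SAFETY_FACTOR, dklen=32)
    return key, salt

key, salt = derive_key(b"correct horse battery staple")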

And at the same time evoke a certain 'feeling' of strength with our users. A 
supporting column can just 'look' right or too thin; a BMW car door can just 
make that right sound on closing***. 

And :) :) people like (paying for/owning) tools that look fit for purpose :) :) 
:).

Dw

*) and yes; compute power may have been an issue - but rarely is these days; I 
have a hard time measuring symmetric AES on outbound packet flows relative to 
all other stuff.
**) and yes; compute, interaction/UI/UX & joules may be a worry - but at the 
same time - CPUs have gotten faster, clever UIs can background things, and good 
engineers can devise async/queues and what not.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-01 Thread Jerry Leichter
On Oct 1, 2013, at 3:29 AM, Dirk-Willem van Gulik di...@webweaving.org wrote:
 ...I do note that in crypto (possibly driven by the perceived expense of too 
 many bits) we tend to very carefully observe the various bit lengths found in 
 800-78-3, 800-131A , etc etc. And rarely go much beyond it*.
 
 While in a lot of other fields - it is very common for 'run of the mill' 
 constructions; such as when calculating a floor, wooden support beam, a 
 joist, to take the various standards and liberally apply safety factors. A 
 factor 10 or 20x too strong is quite common *especially* in 'consumer' 
 constructions  
It's clear what 10x stronger than needed means for a support beam:  We're 
pretty good at modeling the forces on a beam and we know how strong beams of 
given sizes are.  We have *no* models for the strength of a crypto system 
that would allow one to meaningfully make such comparisons in general.  It's 
like asking that houses be constructed to survive intact even when hit by the 
Enterprise's tractor beam.

Oh, if you're talking brute force, sure, 129 bits takes twice as long as 128 
bits.  But even attacking a 128-bit cipher by brute force is way beyond 
anything we can even sketch today, and 256 bits is getting into 'if you could 
use the whole known universe as a computer it would take you more than the life 
of the universe' territory.
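
A back-of-the-envelope sketch (Python, though the arithmetic hardly needs it) 
puts rough numbers on that. The trial rate below is a deliberately absurd 
assumption, far beyond anything we can even sketch, and the age of the universe 
is taken as roughly 4.3 * 10^17 seconds:

# Exhausting a key space at a fantasy rate of 10**18 trials per second.
GUESSES_PER_SECOND = 10**18        # assumption: wildly generous to the attacker
AGE_OF_UNIVERSE_SECONDS = 4.3e17   # rough figure

for bits in (56, 128, 256):
    seconds = 2**bits / GUESSES_PER_SECOND
    lifetimes = seconds / AGE_OF_UNIVERSE_SECONDS
    print("%3d bits: %.1e seconds (%.1e universe lifetimes)"
          % (bits, seconds, lifetimes))

# Roughly: 56 bits falls in under a tenth of a second even at this fantasy
# rate, 128 bits takes on the order of 10**2..10**3 universe lifetimes, and
# 256 bits takes on the order of 10**41.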

If, on the other hand, you're talking analytic attacks, there's no way to know 
ahead of time what matters.  The ultimate example of this occurred back when 
brute force attacks against DES, at 56 bits, were clearly on the horizon - so 
people proposed throwing away the key schedule and making the key the full 
expanded schedule of 448 bits, or whatever it came to.  Many times more secure 
- except then differential cryptanalysis was (re-)discovered and it turned out 
that 448-bit DES was no stronger than 56-bit DES.

There are three places I can think of where the notion of adding a safety 
factor makes sense today; perhaps someone can add to the list, but I doubt it 
will grow significantly longer:

1.  Adding a bit to the key size when that key size is small enough;
2.  Using multiple encryption with different mechanisms and independent keys;
3.  Adding rounds to a round-based symmetric encryptor of the design we 
currently use pretty universally (multiple S and P transforms with some keying 
information mixed in per round, repeated for multiple rounds).  In a good 
cipher designed according to our best practices today, the best attacks we know 
of extend to some number of rounds and then just die - i.e., after some number 
of rounds they do no better than brute force.  Adding a few more beyond that 
makes sense.  But ... if you think adding many more beyond that makes sense, 
you're into tin-foil hat territory.  We understand what certain attacks look 
like and we understand how they (fail to) extend beyond some number of rounds - 
but the next attack down the pike, about which we have no theory, might not be 
sensitive to the number of rounds at all.
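
For the second of these, a minimal sketch of what multiple encryption under 
independent keys might look like, assuming the Python 'cryptography' package and 
taking AES-GCM wrapped inside ChaCha20-Poly1305 as the two different mechanisms; 
key storage and nonce transport are reduced to module-level variables and 
os.urandom, which a real system would handle properly:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

# Cascade two unrelated ciphers under independent keys, so that a catastrophic
# break of either one alone does not expose the plaintext.
k_inner = AESGCM.generate_key(bit_length=256)
k_outer = ChaCha20Poly1305.generate_key()

def cascade_encrypt(plaintext):
    n_inner, n_outer = os.urandom(12), os.urandom(12)
    inner = AESGCM(k_inner).encrypt(n_inner, plaintext, None)
    outer = ChaCha20Poly1305(k_outer).encrypt(n_outer, inner, None)
    return n_inner, n_outer, outer

def cascade_decrypt(n_inner, n_outer, outer):
    inner = ChaCha20Poly1305(k_outer).decrypt(n_outer, outer, None)
    return AESGCM(k_inner).decrypt(n_inner, inner, None)

The price is two keys to manage and two passes over the data; whether that is 
worth paying is exactly the sort of judgment call being argued in this thread.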

These arguments apply to some other primitives as well, particularly hash 
functions.  They *don't* apply to asymmetric cryptography, except perhaps for 
case 2 above - though it may not be so easy to apply.  For asymmetric crypto, 
the attacks are all algorithmic and mathematical in nature, and the game is 
different.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-01 Thread Dirk-Willem van Gulik

On 1 Oct 2013, at 17:59, Jerry Leichter leich...@lrw.com 
wrote:

 On Oct 1, 2013, at 3:29 AM, Dirk-Willem van Gulik di...@webweaving.org 
 wrote:
 ...I do note that in crypto (possibly driven by the perceived expense of too 
 many bits) we tend to very carefully observe the various bit lengths found 
 in 800-78-3, 800-131A , etc etc. And rarely go much beyond it*.
 
 While in a lot of other fields - it is very common for 'run of the mill' 
 constructions; such as when calculating a floor, wooden support beam, a 
 joist, to take the various standards and liberally apply safety factors. A 
 factor 10 or 20x too strong is quite common *especially* in 'consumer' 
 constructions….  

 It's clear what 10x stronger than needed means for a support beam:  We're 
 pretty good at modeling the forces on a beam and we know how strong beams of 
 given sizes are.  

Actually - do we ? I picked this example as it is one of those where this 'we 
know' falls apart on closer examination. Wood varies a lot; and our ratings are 
very rough. We drill holes through it; use hugely varying ways to 
glue/weld/etc. And we liberally apply safety factors everywhere; and a lot of 
'otherwise it does not feel right' throughout. And in all fairness - while you 
can get a bunch of engineers to agree that 'it is strong enough' - they'd argue 
endlessly and give 'it depends' sort of answers when you ask them how strong 
it 'really' is.

 Oh, if you're talking brute force, sure, 129 bits takes twice as long as 128 
 bits.  
...
 If, on the other hand, you're talking analytic attacks, there's no way to 
 know ahead of time what matters.  

So I think you are hitting the crux of the matter - the material we work with, 
like most, is not that easy to gauge. But then when we consider your example of 
DES:

 The ultimate example of this occurred back when brute force attacks against 
 DES, at 56 bits, were clearly on the horizon - so people proposed throwing 
 away the key schedule and making the key the full expanded schedule of 448 
 bits, or whatever it came to.  Many times more secure - except then 
 differential cryptanalysis was (re-)discovered and it turned out that 448-bit 
 DES was no stronger than 56-bit DES.

with hindsight we can conclude that, despite all this, the various 
institutions and interests conspiring, fighting and collaborating roughly 
yielded us a fair level of safety for a fair number of years - and that is 
roughly what we got. 

Sure - that relied on 'odd' things; like the s-boxes getting strengthened 
behind the scenes, the EFF stressing that a hardware device was 'now' cheap 
enough. But by and large - these were more or less done 'on time'. 

So I think we roughly got the minimum about right with DES. 

The thing which fascinates/strikes me as odd - is that that is then exactly what 
we all implemented. Not more. Not less. No safety; no nothing. Just a bit of 
hand waving about how complex it all is; how hard it is to predict; so we listen 
to NIST* et al. and that is it then.

*Despite* the fact that, as you so eloquently argue, the material we work with 
is notoriously unpredictable, finicky and has many an uncontrolled unknown.

And any failures or issues come back to haunt us, not NIST et al.

 There are three places I can think of where the notion of adding a safety 
 factor makes sense today; perhaps someone can add to the list, but I doubt 
 it will grow significantly longer:
 
 1.  Adding a bit to the key size when that key size is small enough;
 2.  Using multiple encryption with different mechanisms and independent keys;
 3.  Adding rounds to a round-based symmetric encryptor of the design we 
 currently use pretty universally (multiple S and P transforms with some 
 keying information mixed in per round, repeated for multiple rounds).  In a 
 good cipher designed according to our best practices today, the best attacks 
 we know of extend to some number of rounds and then just die - i.e., after 
 some number of rounds they do no better than brute force.  Adding a few more 
 beyond that makes sense.  But ... if you think adding many more beyond that 
 makes sense, you're into tin-foil hat territory.  We understand what certain 
 attacks look like and we understand how they (fail to) extend beyond some 
 number of rounds - but the next attack down the pike, about which we have no 
 theory, might not be sensitive to the number of rounds at all.

Agreed - and perhaps develop some routine practices around which way you layer; 
i.e. what is best wrapped inside which; where you do (or avoid) padding; and how 
you get the most out of IVs.
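
One candidate for such a routine practice is 'the MAC goes around the 
ciphertext, never inside it' (encrypt-then-MAC). A minimal sketch, assuming 
Python's standard hmac module plus the 'cryptography' package, with AES-CTR and 
HMAC-SHA256 under independent keys:

import hashlib
import hmac
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Encrypt-then-MAC: authenticate the IV and the ciphertext, so a tampered or
# forged message is rejected before anything is ever decrypted.
def encrypt_then_mac(k_enc, k_mac, plaintext):
    iv = os.urandom(16)
    enc = Cipher(algorithms.AES(k_enc), modes.CTR(iv)).encryptor()
    ct = enc.update(plaintext) + enc.finalize()
    tag = hmac.new(k_mac, iv + ct, hashlib.sha256).digest()
    return iv, ct, tag

def check_mac_then_decrypt(k_enc, k_mac, iv, ct, tag):
    expected = hmac.new(k_mac, iv + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC check failed; refusing to decrypt")
    dec = Cipher(algorithms.AES(k_enc), modes.CTR(iv)).decryptor()
    return dec.update(ct) + dec.finalize()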
 
 These arguments apply to some other primitives as well, particularly hash 
 functions.  They *don't* apply to asymmetric cryptography, except perhaps for 
 case 2 above - though it may not be so easy to apply.  For asymmetric crypto, 
 the attacks are all algorithmic and mathematical in nature, and the game is 
 different.

Very good point (I did 

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-01 Thread Bill Frantz
On 10/1/13 at 12:29 AM, di...@webweaving.org (Dirk-Willem van 
Gulik) wrote:


While in a lot of other fields - it is very common for 'run of 
the mill' constructions; such as when calculating a floor, 
wooden support beam, a joist, to take the various standards and 
liberally apply safety factors. A factor 10 or 20x too strong 
is quite common *especially* in 'consumer' constructions.


In cave rescue the National Cave Rescue Commission (a training 
organization) uses a 7:1 system safety ratio in its trainings. 
This is for building systems where people could be seriously 
hurt or killed if the system fails.


Cheers - Bill, NCRC instructor

---
Bill Frantz        | If the site is supported by | Periwinkle
(408)356-8506      | ads, you are the product.   | 16345 Englewood Ave
www.pwpconsult.com |                             | Los Gatos, CA 95032


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography