[Cryptography] was this FIPS 186-1 (first DSA) an attempted NSA backdoor?

2013-10-10 Thread Adam Back

Some may remember that Bleichenbacher found a random-number-generator bias in
the original DSA spec that could leak the key after some number of signatures,
depending on the circumstances.

It's described in this summary of DSA issues by Vaudenay, "Evaluation Report
on DSA":

http://www.ipa.go.jp/security/enc/CRYPTREC/fy15/doc/1002_reportDSA.pdf

   
Bleichenbacher's attack is described in section 5.


The conclusion is: "Bleichenbacher estimates that the attack would be
practical for a non-negligible fraction of qs, with a time complexity of
2^63, a space complexity of 2^40, and a collection of 2^22 signatures.  We
believe the attack can still be made more efficient."

NIST reacted by issuing special publication SP 800-xx to address it, and I
presume that was folded into FIPS 186-3.  Of course NIST's site is down due to
the USG political-level stupidity (why they took the extra work to switch off
the web server on the way out, I don't know).

That means 186-1 and 186-2 were vulnerable.

An even older NSA sabotage spotted by Bleichenbacher?

Anyway it highlights the significant design fragility in DSA/ECDSA: not just
in the entropy of the secret key, but in the generation of each and every k
value.  That leads to the better (but non-NIST-recommended) idea, adopted by
various libraries and applied-crypto people, of using k=H(m,d), so that the
signature is in fact deterministic, and the same k value will only ever be used
with the same message (which is harmless, as that's just reissuing the
bitwise-same signature).
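For illustration, here is a minimal Python sketch of the k=H(m,d) idea (hashlib
only; the constant is the P-256 group order, the "private key" is a toy value,
and the derivation is deliberately simplified -- RFC 6979 specifies a more
careful HMAC-based construction):

import hashlib

def deterministic_k(d, message, q):
    # k is derived from the private key d and the message, so the same (d, m)
    # pair always yields the same k.  A rolled-back or repeating RNG can then
    # never cause the fatal "same k, two different messages" condition.
    qlen = (q.bit_length() + 7) // 8
    h = hashlib.sha256(d.to_bytes(qlen, "big") + message).digest()
    return (int.from_bytes(h, "big") % (q - 1)) + 1   # force k into [1, q-1]

q = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551  # P-256 order
d = int.from_bytes(hashlib.sha256(b"toy private key, illustration only").digest(), "big") % q

k1 = deterministic_k(d, b"message one", q)
k2 = deterministic_k(d, b"message one", q)
assert k1 == k2                                   # same message: bitwise-same signature
assert k1 != deterministic_k(d, b"message two", q)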


What happens if a VM is rolled back, including the RNG state, and it outputs
the same k value for a different, network-dependent m value?  Etc.  It's just
unnecessarily fragile in its NIST/NSA-mandated form.

Adam
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Iran and murder

2013-10-10 Thread John Kelsey
The problem with offensive cyberwarfare is that, given the imbalance between 
attackers and defenders and the expanding use of computer controls in all sorts 
of systems, a cyber war between two advanced countries will not decide anything 
militarily, but will leave both combatants much poorer than they were 
previously, cause some death and a lot of hardship and bitterness, and leave 
the actual hot war to be fought. 

Imagine a conflict that starts with both countries wrecking a lot of each 
others' infrastructure--causing refineries to burn, factories to wreck 
expensive equipment, nuclear plants to melt down, etc.  A week later, that 
phase of the war is over.  Both countries are, at that point, probably 10-20% 
poorer than they were a week earlier.  Both countries have lots of really 
bitter people out for blood, because someone they care about was killed or 
their job's gone and their house burned down or whatever.  But probably there's 
been little actual degradation of their standard war-fighting ability.  Their 
civilian aviation system may be shut down, some planes may even have been 
crashed, but their bombers and fighters and missiles are mostly still working.  
Fuel and spare parts may be hard to come by, but the military will certainly 
get first pick.  My guess is that what comes next is that the two countries 
have a standard hot war, but with the pleasant addition of a great-depression-sized 
economic collapse for both right in the middle of it.

--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread John Kelsey
Just thinking out loud

The administrative complexity of a cryptosystem is overwhelmingly in key 
management and identity management and all the rest of that stuff.  So imagine 
that we have a widely-used inner-level protocol that can use strong crypto, but 
also requires no external key management.  The purpose of the inner protocol is 
to provide a fallback layer of security, so that even an attack on the outer 
protocol (which is allowed to use more complicated key management) is unlikely 
to be able to cause an actual security problem.  On the other hand, in case of 
a problem with the inner protocol, the outer protocol should also provide 
protection against everything.

Without doing any key management or requiring some kind of reliable identity or 
memory of previous sessions, the best we can do in the inner protocol is an 
ephemeral Diffie-Hellman, so suppose we do this:  

a.  Generate random a and send aG on curve P256

b.  Generate random b and send bG on curve P256

c.  Both sides derive the shared key abG, and then use SHAKE512(abG) to 
generate an AES key for messages in each direction.

d.  Each side keeps a sequence number to use as a nonce.  Both sides use 
AES-CCM with their sequence number and their sending key, and keep track of the 
sequence number of the most recent message received from the other side.  

The point is, this is a protocol that happens *inside* the main security 
protocol.  This happens inside TLS or whatever.  An attack on TLS then leads to 
an attack on the whole application only if the TLS attack also lets you do 
man-in-the-middle attacks on the inner protocol, or if it exploits something 
about certificate/identity management done in the higher-level protocol.  
(Ideally, within the inner protocol, you do some checking of the identity using 
a password or shared secret or something, but that's application-level stuff 
the inner and outer protocols don't know about.)
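A minimal Python sketch of steps a-d, using the `cryptography` package for
P-256 ECDH and AES-CCM; hashlib's SHAKE256 stands in for the SHAKE512 named
above, and the convention for splitting the output into the two directional
keys is my own assumption, not part of the proposal:

import hashlib
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

def ephemeral_share():
    """Steps a/b: generate a fresh random scalar and return (private, public = aG)."""
    priv = ec.generate_private_key(ec.SECP256R1())
    return priv, priv.public_key()

def derive_keys(priv, peer_pub):
    """Step c: derive abG, then expand it into one AES-128 key per direction."""
    shared = priv.exchange(ec.ECDH(), peer_pub)           # x-coordinate of abG
    okm = hashlib.shake_256(shared).digest(32)
    return okm[:16], okm[16:]                             # (initiator->responder, responder->initiator)

a_priv, a_pub = ephemeral_share()                         # step a (initiator)
b_priv, b_pub = ephemeral_share()                         # step b (responder)
assert derive_keys(a_priv, b_pub) == derive_keys(b_priv, a_pub)
k_ab, k_ba = derive_keys(a_priv, b_pub)

# Step d, for one message in the initiator->responder direction: the sender's
# sequence number is the CCM nonce, and must never repeat under the same key.
seq = 0
nonce = seq.to_bytes(13, "big")
ct = AESCCM(k_ab).encrypt(nonce, b"first message from the initiator", None)
assert AESCCM(k_ab).decrypt(nonce, ct, None) == b"first message from the initiator"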

Thoughts?

--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread Bill Frantz

On 10/9/13 at 7:18 PM, crypto@gmail.com (John Kelsey) wrote:

We know how to address one part of this problem--choose only 
algorithms whose design strength is large enough that there's 
not some relatively close by time when the algorithms will need 
to be swapped out.  That's not all that big a problem now--if 
you use, say, AES256 and SHA512 and ECC over P521, then even in 
the far future, your users need only fear cryptanalysis, not 
Moore's Law.  Really, even with 128-bit security level 
primitives, it will be a very long time until the brute-force 
attacks are a concern.


We should try to characterize what a very long time is in 
years. :-)



This is actually one thing we're kind-of on the road to doing 
right in standards now--we're moving away from 
barely-strong-enough crypto and toward crypto that's going to 
be strong for a long time to come.


We had barely-strong-enough crypto because we couldn't afford 
the computation time for longer key sizes. I hope things are 
better now, although there may still be a problem for certain 
devices. Let's hope they are only needed in low security/low 
value applications.



Protocol attacks are harder, because while we can choose a key 
length, modulus size, or sponge capacity to support a known 
security level, it's not so easy to make sure that a protocol 
doesn't have some kind of attack in it.
I think we've learned a lot about what can go wrong with 
protocols, and we can design them to be more ironclad than in 
the past, but we still can't guarantee we won't need to 
upgrade.  But I think this is an area that would be interesting 
to explore--what would need to happen in order to get more 
ironclad protocols?  A couple random thoughts:


I fully agree that this is a valuable area to research.



a.  Layering secure protocols on top of one another might 
provide some redundancy, so that a flaw in one didn't undermine 
the security of the whole system.


Defense in depth has been useful from longer ago than the 
Trojans and Greeks.



b.  There are some principles we can apply that will make 
protocols harder to attack, like encrypt-then-MAC (to eliminate 
reaction attacks), nothing is allowed to change its 
execution path or timing based on the key or plaintext, every 
message includes a sequence number and the hash of the previous 
message, etc.  This won't eliminate protocol attacks, but will 
make them less common.


I think that the attacks on MAC-then-encrypt and timing attacks 
were first described within the last 15 years. I think it is 
only normal paranoia to think there may be some more equally 
interesting discoveries in the future.



c.  We could try to treat at least some kinds of protocols more 
like crypto algorithms, and expect to have them widely vetted 
before use.


Most definitely! Lots of eyes.  Formal proofs, because they are a 
completely different way of looking at things.  Simplicity.  All 
will help.




What else?
...
Perhaps the shortest limit on the lifetime of an embedded 
system is the security protocol, and not the hardware.  If so, 
how do we as a society deal with this limit?


What we really need is some way to enforce protocol upgrades 
over time.  Ideally, there would be some notion that if you 
support version X of the protocol, this meant that you would 
not support any version lower than, say, X-2.  But I'm not sure 
how practical that is.


This is the direction I'm pushing today. If you look at auto 
racing you will notice that the safety equipment commonly used 
before WW2 is no longer permitted. It is patently unsafe. We 
need to make the same judgements in high security/high risk applications.


Cheers - Bill

---
Bill Frantz        | The nice thing about standards | Periwinkle
(408)356-8506      | is there are so many to choose | 16345 Englewood Ave
www.pwpconsult.com | from.   - Andrew Tanenbaum     | Los Gatos, CA 95032


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Iran and murder

2013-10-10 Thread Lodewijk andré de la porte
2013/10/9 Phillip Hallam-Baker hal...@gmail.com

 I see cyber-sabotage as being similar to use of chemical or biological
 weapons: It is going to be banned because the military consequences fall
 far short of being decisive, are unpredictable and the barriers to entry
 are low.


I doubt that's anywhere near how they'll be treated.  Bio and chem are banned
for their extreme relative effectiveness and far greater cruelty than most
weapons have.  Bleeding out is apparently considered quite humane, compared
to choking on foamed-up parts of your own lungs.  Cyberwarfare will likely
be effectively counteracted by better security.  The more I think, the less I
understand "fall far short of being decisive".  If cyber is out, you switch
to old-school tactics.  If chemical or biological happens, it's either death
for hundreds or thousands, or nothing happens.

Of course the bigger armies will want to keep it away from the
terrorists; it'd level the playing field quite a bit.  A 200-losses,
2000-kills battle could turn into 1200 losses, 1700 kills quite fast.  But that's
not what I'd call a ban.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Elliptic curve question

2013-10-10 Thread Lodewijk andré de la porte
2013/10/10 Phillip Hallam-Baker hal...@gmail.com

  The original author was proposing to use the same key for encryption and
 signature which is a rather bad idea.


Explain why, please.  It might expand the attack surface, that's true.  You
could always add a signed message that says "I used a key named 'Z' for
encryption here."  Would that solve the problem?
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread Bill Frantz

On 10/9/13 at 7:12 PM, watsonbl...@gmail.com (Watson Ladd) wrote:


On Tue, Oct 8, 2013 at 1:46 PM, Bill Frantz fra...@pwpconsult.com wrote:
... As professionals, we have an obligation to share our 
knowledge of the limits of our technology with the people who 
are depending on it. We know that all crypto standards which 
are 15 years old or older are obsolete, not recommended for 
current use, or outright dangerous. We don't know of any way 
to avoid this problem in the future.


15 years ago is 1997. Diffie-Hellman is much, much older and still
works. Kerberos is of similar vintage. Feige-Fiat-Shamir is from 1988,
Schnorr signature 1989.


When I developed the VatTP crypto protocol for the E language 
(www.erights.org) about 15 years ago, key sizes of 1024 bits 
were considered high security.  Now they are seriously questioned.  3DES was 
state of the art.  No widely distributed protocols used 
Feige-Fiat-Shamir or Schnorr signatures.  Do any now?  I stand by 
my statement.




I think the burden of proof is on the people who suggest that 
we only have to do it right the next time and things will be 
perfect. These proofs should address:


New applications of old attacks.
The fact that new attacks continue to be discovered.
The existence of powerful actors subverting standards.
The lack of a "did it right" example to point to.


... long post of problems with TLS, most of which are valid 
criticisms, deleted as not addressing the above questions.



Protocols involving crypto need to be so damn simple that if it
connects correctly, the chance of a bug is vanishingly small. If we
make a simple protocol, with automated analysis of its security, the
only danger is a primitive failing, in which case we are in trouble
anyway.


I agree with this general direction, but I still don't have the 
warm fuzzies that good answers to the above questions might 
give. I have seen too many projects to do it right that didn't 
pull it off.


See also my response to John Kelsey.

Cheers - Bill

---
Bill Frantz        | Privacy is dead, get over | Periwinkle
(408)356-8506      | it.                       | 16345 Englewood Ave
www.pwpconsult.com |   - Scott McNealy         | Los Gatos, CA 95032


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread Peter Gutmann
Watson Ladd watsonbl...@gmail.com writes:

The obvious solution: Do it right the first time.

And how do you know that you're doing it right?  PGP in 1992 adopted a
bleeding-edge cipher (IDEA) and was incredibly lucky that it's stayed secure
since then.  What new cipher introduced up until 1992 has had that
distinction?  "Doing it right the first time" is a bit like the concept of
stopping rules in heuristic decision-making: if they were that easy, then
people wouldn't be reading this list but would be in Las Vegas applying the
stopping rule "stop playing just before you start losing".

This is particularly hard in standards-based work because any decision about
security design tends to rapidly degenerate into an argument about whose
fashion statement takes priority.  To get back to an earlier example that I
gave on the list, the trivial and obvious fix to TLS of switching from MAC-
then-encrypt to encrypt-then-MAC is still being blocked by the WG chairs after
nearly a year, despite the fact that a straw poll on the list indicated
general support for it (rough consensus) and implementations supporting it are
already deployed (running code).  So "do it right the first time" is a lot
easier said than done.

Peter.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread Salz, Rich
 TLS was designed to support multiple ciphersuites. Unfortunately this opened 
 the door
 to downgrade attacks, and transitioning to protocol versions that wouldn't do 
 this was nontrivial.
 The ciphersuites included all shared certain misfeatures, leading to the 
 current situation.

On the other hand, negotiation let us deploy it in places where full-strength 
cryptography is/was regulated.

Sometimes half a loaf is better than nothing.

/r$
--  
Principal Security Engineer
Akamai Technology
Cambridge, MA

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] prism-proof email in the degenerate case

2013-10-10 Thread R. Hirschfeld
Very silly but trivial to implement so I went ahead and did so:

To send a prism-proof email, encrypt it for your recipient and send it
to irrefrangi...@mail.unipay.nl.  Don't include any information about
the recipient; just send the ciphertext (in some form of ASCII armor).
Be sure to include something in the message itself to indicate who
it's from because no sender information will be retained.

To receive prism-proof email, subscribe to the irrefrangible mailing
list at http://mail.unipay.nl/mailman/listinfo/irrefrangible/.  Use a
separate email address for which you can pipe all incoming messages
through a script.  Upon receipt of a message, have your script attempt
to decrypt it.  If decryption succeeds (almost never), put it in your
inbox.  If decryption fails (almost always), put it in the bit bucket.
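A minimal Python sketch of that receiving filter (assumptions: gpg is the
decryption tool and ~/Maildir-prismproof is the inbox; hook it up via
procmail/.forward or your MDA of choice):

#!/usr/bin/env python3
# Read one raw message on stdin, try to decrypt its body, and keep the
# plaintext only if decryption succeeds.
import email
import mailbox
import os
import subprocess
import sys

raw = sys.stdin.buffer.read()
msg = email.message_from_bytes(raw)
body = msg.get_payload(decode=True) or b""          # the ASCII-armored ciphertext

result = subprocess.run(
    ["gpg", "--batch", "--quiet", "--decrypt"],
    input=body,
    capture_output=True,
)

if result.returncode == 0:                           # almost never: it was for us
    inbox = mailbox.Maildir(os.path.expanduser("~/Maildir-prismproof"))
    inbox.add(result.stdout)
else:                                                # almost always: the bit bucket
    pass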

(If you prefer not to subscribe you can instead download messages from
the public list archive, but at some point I may discard archived
messages and/or stop archiving.)

The simple(-minded) idea is that everybody receives everybody's email,
but can only read their own.  Since everybody gets everything, the
metadata is uninteresting and traffic analysis is largely fruitless.

Spam isn't an issue because it will be discarded along with all the
other mail that fails to decrypt for the recipient.

Each group of correspondents can choose its own methods of encryption
and key exchange.  Scripts interfacing to, e.g., gpg on either end
should be straightforward.

Enjoy!

/tongue-in-cheek
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Iran and murder

2013-10-10 Thread Lodewijk andré de la porte
2013/10/10 John Kelsey crypto@gmail.com

 The problem with offensive cyberwarfare is that, given the imbalance
 between attackers and defenders and the expanding use of computer controls
 in all sorts of systems, a cyber war between two advanced countries will
 not decide anything militarily, but will leave both combattants much poorer
 than they were previously, cause some death and a lot of hardship and
 bitterness, and leave the actual hot war to be fought.


I think you'd only employ most of the offensive means in harmony with the
start of the hot war.  That makes a lot more sense than annoying your
opponent.


 Imagine a conflict that starts with both countries wrecking a lot of each
 others' infrastructure--causing refineries to burn, factories to wreck
 expensive equipment, nuclear plants to melt down, etc.  A week later, that
 phase of the war is over.  Both countries are, at that point, probalby
 10-20% poorer than they were a week earlier.


I think this would cause more than 20% damage (esp. the nuclear reactor!).
But I can imagine a slow buildup of disabled things happening.


 Both countries have lots of really bitter people out for blood, because
 someone they care about was killed or their job's gone and their house
 burned down or whatever.  But probably there's been little actual
 degradation of their standard war-fighting ability.  Their civilian
 aviation system may be shut down, some planes may even have been crashed,
 but their bombers and fighters and missiles are mostly still working.  Fuel
 and spare parts may be hard to come by, but the military will certainly get
 first pick.  My guess is that what comes next is that the two countries
 have a standard hot war, but with the pleasant addition of a great
 depression sized economic collapse for both right in the middle of it.


This would be a major plus in the eyes of the countries' leaders.
Motivating people for war is the hardest thing about it.  I do think the
military relies heavily on electronic tools for coordination.  And I think
they have plenty of parts stockpiled for a proper blitzkrieg.

Most of the things you mentioned can be achieved with infiltration and covert
operations, which are far more traditional, and far harder to do at great
scale.  But they are not done until there is already a significant blood
thirst.

I'm not sure what'd happen, simply put.  But I think it'll become just
another aspect of warfare.  It is already another aspect of covert
operations, and we haven't lived through a high-tech vs. high-tech war.  And if it
does happen, the chance we live to talk about it is less than I'd like.

You pose an interesting notion about the excessiveness of causing a great
depression before the first bullets fly. I counter that with the effects of
conventional warfare being more excessively destructive.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

[Cryptography] Other Backdoors?

2013-10-10 Thread Phillip Hallam-Baker
I sarcastically proposed the use of GOST as an alternative to NIST crypto.
Someone shot back a note saying the elliptic curves might be 'bent'.

Might be interesting for EC to take another look at GOST since it might be
the case that the GRU and the NSA both found a similar backdoor but one was
better at hiding it than the other.


On the NIST side, can anyone explain the reason for this mechanism for
truncating SHA512?

Denote H(0)' to be the initial hash value of SHA-512 as specified in Section 5.3.5
above.
Denote H(0)'' to be the initial hash value computed below.
H(0) is the IV for SHA-512/t.
For i = 0 to 7
{
    H(0)''_i = H(0)'_i XOR a5a5a5a5a5a5a5a5 (in hex).
}
H(0) = SHA-512("SHA-512/t") using H(0)'' as the IV,
where t is the specific truncation value.
(end.)

[Can't link to FIPS 180-4 right now as it's down]

I really don't like the futzing with the IV like that, not least because a
lot of implementations don't give access to the IV.  Certainly the
object-oriented ones I tend to use don't.

But does it make the scheme weaker?

Is there anything wrong with just truncating the output?

The only advantage I can see to the idea is to stop the truncated digest
being used as leverage to reveal the full digest in a scheme where one was
public and the other was not.
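For what it's worth, the IV tweak itself is trivial.  A Python sketch using the
well-known SHA-512 initial hash values from FIPS 180-4; the second step (running
SHA-512 over the string "SHA-512/t" with that modified IV) is exactly the part a
stock hash API such as hashlib gives you no handle on, which is the objection
above, while plain truncation is a one-liner:

import hashlib

# The eight standard SHA-512 initial hash values (FIPS 180-4, Section 5.3.5).
SHA512_IV = [
    0x6a09e667f3bcc908, 0xbb67ae8584caa73b, 0x3c6ef372fe94f82b, 0xa54ff53a5f1d36f1,
    0x510e527fade682d1, 0x9b05688c2b3e6c1f, 0x1f83d9abfb41bd6b, 0x5be0cd19137e2179,
]

# Step 1 of the quoted procedure: H(0)'' = H(0)' XOR a5a5a5a5a5a5a5a5.
H0_double_prime = [h ^ 0xa5a5a5a5a5a5a5a5 for h in SHA512_IV]

# Step 2 -- SHA-512("SHA-512/t") computed *with H(0)'' as the IV* -- can't be
# expressed here at all: hashlib exposes no way to set the IV.

# What "just truncating the output" looks like instead.  Note the truncated
# value is then a prefix of the full digest of the same input, which is the
# leverage the IV tweak is presumably meant to remove.
def sha512_truncated(data, t_bits):
    return hashlib.sha512(data).digest()[: t_bits // 8]

assert sha512_truncated(b"example", 224) == hashlib.sha512(b"example").digest()[:28]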


-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] prism-proof email in the degenerate case

2013-10-10 Thread John Kelsey
Having a public bulletin board of posted emails, plus a protocol for 
anonymously finding the ones your key can decrypt, seems like a pretty decent 
architecture for prism-proof email.  The tricky bit of crypto is in making 
access to the bulletin board both efficient and private.  

--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread Stephen Farrell


 On 10 Oct 2013, at 17:06, John Kelsey crypto@gmail.com wrote:
 
 Just thinking out loud
 
 The administrative complexity of a cryptosystem is overwhelmingly in key 
 management and identity management and all the rest of that stuff.  So 
 imagine that we have a widely-used inner-level protocol that can use strong 
 crypto, but also requires no external key management.  The purpose of the 
 inner protocol is to provide a fallback layer of security, so that even an 
 attack on the outer protocol (which is allowed to use more complicated key 
 management) is unlikely to be able to cause an actual security problem.  On 
 the other hand, in case of a problem with the inner protocol, the outer 
 protocol should also provide protection against everything.
 
 Without doing any key management or requiring some kind of reliable identity 
 or memory of previous sessions, the best we can do in the inner protocol is 
 an ephemeral Diffie-Hellman, so suppose we do this:  
 
 a.  Generate random a and send aG on curve P256
 
 b.  Generate random b and send bG on curve P256
 
 c.  Both sides derive the shared key abG, and then use SHAKE512(abG) to 
 generate an AES key for messages in each direction.
 
 d.  Each side keeps a sequence number to use as a nonce.  Both sides use 
 AES-CCM with their sequence number and their sending key, and keep track of 
 the sequence number of the most recent message received from the other side.  
 
 The point is, this is a protocol that happens *inside* the main security 
 protocol.  This happens inside TLS or whatever.  An attack on TLS then leads 
 to an attack on the whole application only if the TLS attack also lets you do 
 man-in-the-middle attacks on the inner protocol, or if it exploits something 
 about certificate/identity management done in the higher-level protocol.  
 (Ideally, within the inner protcol, you do some checking of the identity 
 using a password or shared secret or something, but that's application-level 
 stuff the inner and outer protocols don't know about.  
 
 Thoughts?


Suggest it on the tls wg list as a feature of 1.3?

S

 
 --John
 ___
 The cryptography mailing list
 cryptography@metzdowd.com
 http://www.metzdowd.com/mailman/listinfo/cryptography
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] prism-proof email in the degenerate case

2013-10-10 Thread Salz, Rich
 The simple(-minded) idea is that everybody receives everybody's email, but 
 can only read their own.  Since everybody gets everything, the metadata is 
 uninteresting and traffic analysis is largely fruitless.

Some traffic analysis is still possible based on just the message originator.  If I 
see a message from A, and then soon see messages from B and C, then I can 
perhaps assume they are collaborating.  If A's message is significantly 
larger than the other two, then perhaps they're taking some kind of vote.

So while it's a neat hack, I think the claims are overstated.

/r$
 
--  
Principal Security Engineer
Akamai Technology
Cambridge, MA
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] prism-proof email in the degenerate case

2013-10-10 Thread Jerry Leichter
On Oct 10, 2013, at 11:58 AM, R. Hirschfeld r...@unipay.nl wrote:
 Very silly but trivial to implement so I went ahead and did so:
 
 To send a prism-proof email, encrypt it for your recipient and send it
 to irrefrangi...@mail.unipay.nl
Nice!  I like it.

A couple of comments:

1.  Obviously, this has scaling problems.  The interesting question is how to 
extend it while retaining the good properties.  If participants are willing to 
be identified to within 1/k of all the users of the system (a set which will 
itself remain hidden by the system), choosing one of k servers based on a hash 
of the recipient would work.  (A concerned recipient could, of course, check 
servers that he knows can't possibly have his mail.)  Can one do better?

2.  The system provides complete security for recipients (all you can tell 
about a recipient is that he can potentially receive messages - though the 
design has to be careful so that a recipient doesn't, for example, release 
timing information depending on whether his decryption succeeded or not).  
However, the protection is more limited for senders.  A sender can hide its 
activity by simply sending random messages, which of course no one will ever 
be able to decrypt.  Of course, that adds yet more load to the entire system.

3.  Since there's no acknowledgement when a message is picked up, the number of 
messages in the system grows without bound.  As you suggest, the service will 
have to throw out messages after some time - but that's a blind process which 
may discard a message a slow receiver hasn't had a chance to pick up while 
keeping one that was picked up a long time ago.  One way around this, for 
cooperative senders:  When creating a message, the sender selects a random R 
and appends tag Hash(R).  Anyone may later send a "you may delete message R" 
message.  The server computes Hash(R), finds any message with that tag, and 
discards it.  (It will still want to delete messages that are old, but it may 
be able to define "old" as a larger value if enough of the senders are 
cooperative.)
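A toy Python sketch of the cooperative-deletion tag; a dict stands in for the
service's message store, and the names are illustrative only:

import hashlib
import os

store = {}

def post(ciphertext):
    """Sender side: pick a random R, file the message under tag Hash(R)."""
    r = os.urandom(16)
    tag = hashlib.sha256(r).hexdigest()
    store[tag] = ciphertext
    return r                      # keep R until you want the copy discarded

def delete(r):
    """'You may delete message R': anyone holding R can have the copy dropped."""
    store.pop(hashlib.sha256(r).hexdigest(), None)

r = post(b"...opaque ciphertext...")
delete(r)
assert not store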

Since an observer can already tell who created the message with tag H(R), it 
would normally be the original sender who deletes his messages.  Perhaps he 
knows they are no longer important; or perhaps he received an application-level 
acknowledgement message from the recipient.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] prism-proof email in the degenerate case

2013-10-10 Thread arxlight
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Cool.

Drop me a note if you want hosting (gratis) for this.

On 10/10/13 10:22 PM, Jerry Leichter wrote:
 On Oct 10, 2013, at 11:58 AM, R. Hirschfeld r...@unipay.nl
 wrote:
 Very silly but trivial to implement so I went ahead and did
 so:
 
 To send a prism-proof email, encrypt it for your recipient and
 send it to irrefrangi...@mail.unipay.nl
 Nice!  I like it.
 
 A couple of comments:
 
 1.  Obviously, this has scaling problems.  The interesting question
 is how to extend it while retaining the good properties.  If
 participants are willing to be identified to within 1/k of all the
 users of the system (a set which will itself remain hidden by the
 system), choosing one of k servers based on a hash of the recipient
 would work.  (A concerned recipient could, of course, check servers
 that he knows can't possibly have his mail.)  Can one do better?
 
 2.  The system provides complete security for recipients (all you
 can tell about a recipient is that he can potentially receive
 messages - though the design has to be careful so that a recipient
 doesn't, for example, release timing information depending on
 whether his decryption succeeded or not).  However, the protection
 is more limited for senders.  A sender can hide its activity by
 simply sending random messages, which of course no one will ever
 be able to decrypt.  Of course, that adds yet more load to the
 entire system.
 
 3.  Since there's no acknowledgement when a message is picked up,
 the number of messages in the system grows without bound.  As you
 suggest, the service will have to throw out messages after some
 time - but that's a blind process which may discard a message a
 slow receiver hasn't had a chance to pick up while keeping one that
 was picked up a long time ago.  One way around this, for
 cooperative senders:  When creating a message, the sender selects a
 random R and appends tag Hash(R).  Anyone may later send a you may
 delete message R message.  A sender computes Hash(R), finds any
 message with that tag, and discards it.  (It will still want to
 delete messages that are old, but it may be able to define old as
 a larger value if enough of the senders are cooperative.)
 
 Since an observer can already tell who created the message with tag
 H(R), it would normally be the original sender who deletes his
 messages.  Perhaps he knows they are no longer important; or
 perhaps he received an application-level acknowledgement message
 from the recipient. -- Jerry
 
 ___ The cryptography
 mailing list cryptography@metzdowd.com 
 http://www.metzdowd.com/mailman/listinfo/cryptography
 

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.10 (Darwin)

iQIcBAEBAgAGBQJSVxYkAAoJEAWtgNHk7T8Q+uwP/0sWLASYrvKHkVYo4yEjLLYK
+s4Yfnz4sBJRUkndj6G3mhk+3lutcMiMhD2pWaTjo/FENCqMveiReI3LiA57aJ9l
eaB2whG8pslm+NKirFJ//3AL6mBPJEqeH4QfrfaxNbu61T3oeU9jwihQ/1XpZUxb
F1vPGN5GZyrW4GdNBWW+0bzgjoBKsyBNTe/0F/JhtKz/KD6aEQjzeNDJkgm4z6DA
Euf+qYT+K3QlWWe8IMxliJcP4HacKhUPO6YUCx6mjbz34zNNa3th4eXXTzlcTWUR
LWFXcDnmor3E9yMdFOdtN8+qXvauyi5HGq55Rge3fZ/TqZbNrfPh2AWqDSd/N1rW
TFkx9w7b3ndfbkipK51lrdJsZcOudDgvPVnZUZBNm8H7dHi4jb4CJz+Cfr7e7Ar8
wze58qz/kYFqZ7h91e/m4TaIM+jXtPteAM2HZnAAtx3daNqcbcFd8DRtZGdOpjWt
ugz2f1NUQrj8f17jUFRwIZfwi2E6wBfKTfVebQy7kMMBbN3fwvIHjyXJTHaz6o0I
AX1u3bvAilFdxObwULP4PRl7ReDB42XonCf90VHSDetE/qHQy4CKiIiMrGQIlY7Y
NhyAkd3dGvs57TP5gH+d39G0hkJ/iBqgaJtHcU1CwMxYABNasj2yyKPzA7Lvma62
8qzw2uTKepVPUkCjbqcy
=mvZ0
-END PGP SIGNATURE-
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] prism-proof email in the degenerate case

2013-10-10 Thread lists
 Having a public bulletin board of posted emails, plus a protocol
 for anonymously finding the ones your key can decrypt, seems
 like a pretty decent architecture for prism-proof email.
 The tricky bit of crypto is in making access to the bulletin
 board both efficient and private.

This idea has been around for a while but not built AFAIK.
http://petworkshop.org/2003/slides/talks/stef/pet2003/Lucky_Green_Anonmail_PET_2003.ppt
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread John Kelsey
More random thoughts:

The minimal inner protocol would be something like this:

Using AES-CCM with a tag size of 32 bits, IVs constructed based on an implicit 
counter, and an AES-CMAC-based KDF, we do the following:

Sender: 
a.  Generate random 128 bit value R
b.  Use the KDF to compute K[S],N[S],K[R],N[R] = KDF(R, 128+96+128+96)
c.  Sender's 32-bit unsigned counter C[S] starts at 0.
d.  Compute IV[S,0] = 96 bits of binary 0s||C[S]
e.  Send R, CCM(K[S],N[S],IV[S,0],sender_message[0])

Receiver:
a.  Receive R and derive K[S],N[S],K[R],N[R] from it as above.
b.  Set Receiver's counter C[R] = 0.
c.  Compute IV[R,0] = 96 bits of binary 0s||C[R]
d.  Send CCM(K[R],N[R],IV[R,0],receiver_message[0])

and so on.  

Note that in this protocol, we never send a key or IV or nonce.  The total 
communications overhead of the inner protocol is an extra 160 bits in the first 
message and an extra 32 bits thereafter.  We're assuming the outer protocol is 
taking care of message ordering and guaranteed delivery--otherwise, we need to 
do something more complicated involving replay windows and such, and probably 
have to send along the message counters.  
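A rough Python sketch of the sender side, using the `cryptography` package.  The
KDF below is a simple CMAC-in-counter-mode expansion of R in the spirit of the
AES-CMAC-based KDF named above (not any particular NIST mode), and the
per-message nonce is formed by XORing the derived 96-bit N[S] with the 32-bit
counter, which is only one plausible reading of the IV construction; treat the
details as assumptions rather than the exact spec:

import os
from cryptography.hazmat.primitives import cmac
from cryptography.hazmat.primitives.ciphers import algorithms
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

def kdf(r, nbytes):
    """Expand the 128-bit R into nbytes of key material with AES-CMAC."""
    out = b""
    counter = 0
    while len(out) < nbytes:
        c = cmac.CMAC(algorithms.AES(r))
        c.update(counter.to_bytes(4, "big") + b"inner-protocol-kdf")
        out += c.finalize()
        counter += 1
    return out[:nbytes]

r = os.urandom(16)                               # step a: random 128-bit R
okm = kdf(r, 16 + 12 + 16 + 12)                  # step b: K[S], N[S], K[R], N[R]
k_s, n_s = okm[:16], okm[16:28]
k_r, n_r = okm[28:44], okm[44:56]

c_s = 0                                          # step c: sender counter starts at 0
nonce = (int.from_bytes(n_s, "big") ^ c_s).to_bytes(12, "big")   # step d (one reading)
ccm = AESCCM(k_s, tag_length=4)                  # 32-bit tag, as above
wire = r + ccm.encrypt(nonce, b"sender_message[0]", None)        # step e: send R || ciphertext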

This doesn't provide a huge amount of extra protection--if the attacker can 
recover more than a very small number of bits from the first message (attacking 
through the outer protocol), then the security of this protocol falls apart.  
But it does give us a bare-minimum-cost inner layer of defenses, inside TLS or 
SSH or whatever other thing we're doing.  

Both this and the previous protocol I sketched have the property that they 
expect to be able to generate random numbers.  There's a problem there, 
though--if the system RNG is weak or trapdoored, it could compromise both the 
inner and outer protocol at the same time.  

One way around this is to have each endpoint that uses the inner protocol 
generate its own internal secret AES key, Q[i].  Then, when it's time to 
generate a random value, the endpoint asks the system RNG for a random number 
X, and computes E_Q(X).  If the attacker knows Q but the system RNG is secure, 
we're fine.  Similarly, if the attacker can predict X but doesn't know Q, we're 
fine.  Even when the attacker can choose the value of X, he can really only 
force the random value in the beginning of the protocol to repeat.  In this 
protocol, that doesn't do much harm.  
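A minimal sketch of that hardening step in Python, with a single-block AES-ECB
encryption standing in for E_Q (names are illustrative):

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

Q = os.urandom(16)                       # endpoint's internal secret key, generated once

def hardened_random_128():
    x = os.urandom(16)                   # X from the system RNG
    enc = Cipher(algorithms.AES(Q), modes.ECB()).encryptor()
    return enc.update(x) + enc.finalize()    # E_Q(X): unpredictable if either X or Q is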

The same idea works for the ECDH protocol I sketched earlier.  I request two 
128 bit random values from the system RNG, X, X'.  I then use E_Q(X)||E_Q(X') 
as my ephemeral DH private key. If an attacker knows Q but the system RNG is 
secure, then we get an unpredictable value for the ECDH key agreement.  If an 
attacker knows X,X' but doesn't know Q, he doesn't know what my ECDH ephemeral 
private key is.  If he forces it to a repeated value, he still doesn't weaken 
anything except this run of the protocol--no long-term secret is leaked if AES 
isn't broken.  

This is subject to endless tweaking and improvement.  But the basic idea seems 
really valuable:  

a.  Design an inner protocol, whose job is to provide redundancy in security 
against attacks on the outer protocol.

b.  The inner protocol should be:

(i)  As cheap as possible in bandwidth and computational terms.

(ii) Flexible enough to be used extremely widely, implemented in most places, 
etc.  

(iii) Administratively free, adding no key management or related burdens.

(iv) Free from revisions or updates, because the whole point of the inner 
protocol is to provide redundant security.  (That's part of administratively 
free.)  

(v)  There should be one or at most two versions (maybe something like the two 
I've sketched, but better thought out and analyzed).

c.  As much as possible, we want the security of the inner protocol to be 
independent of the security of the outer protocol.  (And we want this without 
wanting to know exactly what the outer protocol will look like.)  This means:

(i)  No shared keys or key material or identity strings or anything.

(ii) The inner protocol can't rely on the RNG being good.

(iii) Ideally, the crypto algorithms would be different, though that may impose 
too high a cost.  At least, we want as many of the likely failure modes to be 
different.  

Comments?  I'm not all that concerned with the protocol being perfect, but what 
do you think of the idea of doing this as a way to add redundant security 
against protocol attacks?  

--John

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread Richard Outerbridge
On 2013-10-10 (283), at 15:29:33, Stephen Farrell stephen.farr...@cs.tcd.ie 
wrote:

 On 10 Oct 2013, at 17:06, John Kelsey crypto@gmail.com wrote:
 
 Just thinking out loud
 

[]

 c.  Both sides derive the shared key abG, and then use SHAKE512(abG) to 
 generate an AES key for messages in each direction.

How does this prevent MITM?  Where does G come from?

I'm also leery of using literally the same key in both directions.  Maybe a 
simple transform would suffice; maybe not.

 d.  Each side keeps a sequence number to use as a nonce.  Both sides use 
 AES-CCM with their sequence number and their sending key, and keep track of 
 the sequence number of the most recent message received from the other side. 

If the same key is used, there needs to be a simple way of ensuring the 
sequence numbers can never overlap each other.
__outer



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] prism-proof email in the degenerate case

2013-10-10 Thread Ray Dillinger
On 10/10/2013 12:54 PM, John Kelsey wrote:
 Having a public bulletin board of posted emails, plus a protocol 
 for anonymously finding the ones your key can decrypt, seems 
 like a pretty decent architecture for prism-proof email.  The 
 tricky bit of crypto is in making access to the bulletin board 
 both efficient and private.  

Wrong on both counts, I think.  If you make access private, you
generate metadata because nobody can get at mail other than their
own.  If you make access efficient, you generate metadata because
you're avoiding the wasted bandwidth that would otherwise prevent
the generation of metadata. Encryption is sufficient privacy, and
efficiency actively works against the purpose of privacy.

The only bow I'd make to efficiency is to split the message stream
into channels when it gets to be more than, say, 2GB per day. At
that point you would need to know both what channel your recipient
listens to *and* the appropriate encryption key before you could
send mail.

Bear




___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] PGP Key Signing parties

2013-10-10 Thread John Gilmore
 Does PGP have any particular support for key signing parties built in or is
 this just something that has grown up as a practice of use?

It's just a practice.  I agree that building a small amount of automation
for key signing parties would improve the web of trust.

I have started on a prototype that would automate small key signing
parties (as small as 2 people, as large as a few dozen) where everyone
present has a computer or phone that is on the same wired or wireless
LAN.

 I am specifically thinking of ways that key signing parties might be made
 scalable so that it was possible for hundreds of thousands of people...

An important user experience point is that we should be teaching GPG
users to only sign the keys of people who they personally know.
Having a signature that says, "This person attended the RSA conference
in October 2013," is not particularly useful.  (Such a signature could
be generated by the conference organizers themselves, if they wanted
to.)  Since the conference organizers -- and most other attendees --
don't know what an attendee's real identity is, their signature on
that identity is worthless anyway.

So, if I participate in a key signing party with a dozen people, but I
only personally know four of them, I will only sign the keys of those
four.  I may have learned a public key for each of the dozen, but that
is separate from me signing those keys.  Signing them would assert to
any stranger that I know that this key belongs to this identity, which
would be false and would undermine the strength of the web of trust.

John


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread John Kelsey
On Oct 10, 2013, at 5:15 PM, Richard Outerbridge ou...@sympatico.ca wrote:
 
 How does this prevent MITM?  Where does G come from?

I'm assuming G is a systemwide shared parameter.  It doesn't prevent 
mitm--remember the idea here is to make a fairly lightweight protocol to run 
*inside* another crypto protocol like TLS.  The inner protocol mustn't add 
administrative requirements to the application, which means it can't need key 
management from some administrator or something.  The goal is to have an inner 
protocol which can run inside TLS or some similar thing, and which adds a layer 
of added security without the application getting more complicated by needing 
to worry about more keys or certificates or whatever.  

Suppose we have this inner protocol running inside a TLS version that is 
subject to one of the CBC padding reaction attacks.  The inner protocol 
completely blocks that.  

 I'm also leery of using literally the same key in both directions.  Maybe a 
 simple transform would suffice; maybe not.

I probably wasn't clear in my writeup, but my idea was to have different keys 
in different directions--there is a NIST KDF that uses only AES as its crypto 
engine, so this is relatively easy to do using standard components.  

--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] prism-proof email in the degenerate case

2013-10-10 Thread John Kelsey
On Oct 10, 2013, at 5:20 PM, Ray Dillinger b...@sonic.net wrote:

 On 10/10/2013 12:54 PM, John Kelsey wrote:
 Having a public bulletin board of posted emails, plus a protocol 
 for anonymously finding the ones your key can decrypt, seems 
 like a pretty decent architecture for prism-proof email.  The 
 tricky bit of crypto is in making access to the bulletin board 
 both efficient and private.  
 
 Wrong on both counts, I think.  If you make access private, you
 generate metadata because nobody can get at mail other than their
 own.  If you make access efficient, you generate metadata because
 you're avoiding the wasted bandwidth that would otherwise prevent
 the generation of metadata. Encryption is sufficient privacy, and
 efficiency actively works against the purpose of privacy.

So the original idea was to send a copy of all the emails to everyone.  What 
I'm wanting to figure out is if there is a way to do this more efficiently, 
using a public bulletin-board-like scheme.  The goal here would be:

a.  Anyone in the system can add an email to the bulletin board, which I am 
assuming is public and cryptographically protected (using a hash chain to make 
it impossible for even the owner of the bulletin board to alter things once 
published).

b.  Anyone can run a protocol with the bulletin board which results in them 
getting only the encrypted emails addressed to them, and prevents the bulletin 
board operator from finding out which emails they got.

This sounds like something that some clever crypto protocol could do.  (It's 
related to the idea of searching on encrypted data.)  And it would make an 
email system that was really resistant to tracing users.  
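For (a), the append-only part is cheap; a toy Python sketch (the
private-retrieval protocol in (b) is the genuinely hard piece and isn't
attempted here):

import hashlib

class BulletinBoard:
    def __init__(self):
        self.entries = []                # (head_hash, ciphertext) pairs
        self.head = b"\x00" * 32

    def post(self, ciphertext):
        # Each entry commits to the previous head, so the operator cannot
        # silently alter or drop history without changing every later hash.
        self.head = hashlib.sha256(self.head + ciphertext).digest()
        self.entries.append((self.head, ciphertext))
        return self.head                 # publish/mirror this to pin the history

board = BulletinBoard()
h1 = board.post(b"encrypted email 1")
h2 = board.post(b"encrypted email 2")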


--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] PGP Key Signing parties

2013-10-10 Thread Glenn Willen
John,

On Oct 10, 2013, at 2:31 PM, John Gilmore wrote:
 
 An important user experience point is that we should be teaching GPG
 users to only sign the keys of people who they personally know.
 Having a signature that says, This person attended the RSA conference
 in October 2013 is not particularly useful.  (Such a signature could
 be generated by the conference organizers themselves, if they wanted
 to.)  Since the conference organizers -- and most other attendees --
 don't know what an attendee's real identity is, their signature on
 that identity is worthless anyway.
 
 So, if I participate in a key signing party with a dozen people, but I
 only personally know four of them, I will only sign the keys of those
 four.  I may have learned a public key for each of the dozen, but that
 is separate from me signing those keys.  Signing them would assert to
 any stranger that I know that this key belongs to this identity, which
 would be false and would undermine the strength of the web of trust.

I am going to be interested to hear what the rest of the list says about this, 
because this definitely contradicts what has been presented to me as 'standard 
practice' for PGP use -- verifying identity using government issued ID, and 
completely ignoring personal knowledge.

Do you have any insight into what proportion of PGP/GPG users mean their 
signatures as personal knowledge (my preference and evidently yours), versus 
government ID (my perception of the community standard best practice), 
versus no verification in particular (my perception of the actual common 
practice in many cases)?

(In my ideal world, we'd have a machine-readable way of indicating what sort of 
verification was performed. Signing policies, not being machine readable or 
widely used, don't cover this well. There is space for key-value annotations in 
signature packets, which could help with this if we standardized on some.)

Glenn Willen
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] prism-proof email in the degenerate case

2013-10-10 Thread John Denker
On 10/10/2013 02:20 PM, Ray Dillinger wrote:

 split the message stream
 into channels when it gets to be more than, say, 2GB per day.

That's fine, in the case where the traffic is heavy.

We should also discuss the opposite case:

*) If the traffic is light, the servers should generate cover traffic.

*) Each server should publish a public key for /dev/null so that
 users can send cover traffic upstream to the server, without
 worrying that it might waste downstream bandwidth.

 This is crucial for deniability:  If the rubber-hose guy accuses
 me of replying to ABC during the XYZ crisis, I can just shrug and 
 say it was cover traffic.


Also:

*) Messages should be sent in standard-sized packets, so that the
 message-length doesn't give away the game.

*) If large messages are common, it might help to have two streams:
 -- the pointer stream, and
 -- the bulk stream.

It would be necessary to do a trial-decode on every message in the
pointer stream, but when that succeeds, it yields a pilot message
containing the fingerprints of the packets that should be pulled 
out of the bulk stream.  The first few bytes of the packet should 
be a sufficient fingerprint.  This reduces the number of trial-
decryptions by a factor of roughly sizeof(message) / sizeof(packet).
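A toy Python sketch of the pilot-message mechanics, with AES-GCM standing in
for whatever the real system would use and an 8-byte fingerprint; key
distribution is out of scope and the names are illustrative:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

FPR_LEN = 8                                          # "first few bytes" fingerprint

def make_pilot(key, bulk_packets):
    """Pilot message: the fingerprints of the bulk packets carrying the body."""
    nonce = os.urandom(12)
    fprs = b"".join(p[:FPR_LEN] for p in bulk_packets)
    return nonce + AESGCM(key).encrypt(nonce, fprs, None)

def trial_decode(key, pilot):
    """Run on every pointer-stream message; returns fingerprints or None."""
    try:
        fprs = AESGCM(key).decrypt(pilot[:12], pilot[12:], None)
    except InvalidTag:
        return None                                  # not for us
    return [fprs[i:i + FPR_LEN] for i in range(0, len(fprs), FPR_LEN)]

key = AESGCM.generate_key(bit_length=128)
bulk = [os.urandom(512) for _ in range(3)]           # standard-sized packets
pilot = make_pilot(key, bulk)
assert trial_decode(os.urandom(16), pilot) is None   # wrong key: ignore
assert trial_decode(key, pilot) == [p[:FPR_LEN] for p in bulk]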


From the keen-grasp-of-the-obvious department:

*) Forward Secrecy is important here.

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] prism-proof email in the degenerate case

2013-10-10 Thread grarpamp
On Thu, Oct 10, 2013 at 11:58 AM, R. Hirschfeld r...@unipay.nl wrote:
 To send a prism-proof email, encrypt it for your recipient and send it
 to irrefrangi...@mail.unipay.nl.  Don't include any information about

 To receive prism-proof email, subscribe to the irrefrangible mailing
 list at http://mail.unipay.nl/mailman/listinfo/irrefrangible/.  Use a

This is the same as NNTP, but worse in that it's not distributed.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] prism-proof email in the degenerate case

2013-10-10 Thread Lars Luthman
On Thu, 2013-10-10 at 14:20 -0700, Ray Dillinger wrote: 
 Wrong on both counts, I think.  If you make access private, you
 generate metadata because nobody can get at mail other than their
 own.  If you make access efficient, you generate metadata because
 you're avoiding the wasted bandwidth that would otherwise prevent
 the generation of metadata. Encryption is sufficient privacy, and
 efficiency actively works against the purpose of privacy.
 
 The only bow I'd make to efficiency is to split the message stream
 into channels when it gets to be more than, say, 2GB per day. At
 that point you would need to know both what channel your recipient
 listens to *and* the appropriate encryption key before you could
 send mail.

This is starting to sound a lot like Bitmessage, doesn't it? A central
message stream that is split into a tree of streams when it gets too
busy and everyone tries to decrypt every message in their stream to see
if they are the recipient. In the case of BM the stream is distributed
in a P2P network, the stream of an address is found by walking the tree,
and you need a hash collision proof-of-work in order for other peers to
accept your sent messages. The P2P aspect and the proof-of-work
(according to the whitepaper[1] it should represent 4 minutes of work on
an average computer) probably makes it less attractive for mobile
devices though.

[1] https://bitmessage.org/bitmessage.pdf


--ll


signature.asc
Description: This is a digitally signed message part
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] PGP Key Signing parties

2013-10-10 Thread Paul Hoffman
On Oct 10, 2013, at 2:31 PM, John Gilmore g...@toad.com wrote:

 Does PGP have any particular support for key signing parties built in or is
 this just something that has grown up as a practice of use?
 
 It's just a practice.  I agree that building a small amount of automation
 for key signing parties would improve the web of trust.
 
 I have started on a prototype that would automate small key signing
 parties (as small as 2 people, as large as a few dozen) where everyone
 present has a computer or phone that is on the same wired or wireless
 LAN.

Phil Zimmermann and Jon Callas had started to work on that around 1998; they 
might still have some of that design around.

--Paul Hoffman

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread Trevor Perrin
On Thu, Oct 10, 2013 at 3:32 PM, John Kelsey crypto@gmail.com wrote:
  The goal is to have an inner protocol which can run inside TLS or some 
 similar thing
[...]

 Suppose we have this inner protocol running inside a TLS version that is 
 subject to one of the CBC padding reaction attacks.  The inner protocol 
 completely blocks that.

If you can design an inner protocol to resist such attacks - which
you can, easily - why wouldn't you just design the outer protocol
the same way?


Trevor
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Other Backdoors?

2013-10-10 Thread David Mercer
 Thursday, October 10, 2013, Phillip Hallam-Baker wrote:


 [Can't link to FIPS180-4 right now as its down]


For the lazy among us, including my future self, a shutdown-proof URL to
the archive.org copy of the NIST FIPS 180-4 PDF:
 http://tinyurl.com/FIPS180-4

-David Mercer




-- 
David Mercer - http://dmercer.tumblr.com
IM:  AIM: MathHippy Yahoo/MSN: n0tmusic
Facebook/Twitter/Google+/Linkedin: radix42
FAX: +1-801-877-4351 - BlackBerry PIN: 332004F7
PGP Public Key: http://davidmercer.nfshost.com/radix42.pubkey.txt
Fingerprint: A24F 5816 2B08 5B37 5096  9F52 B182 3349 0F23 225B
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-10 Thread David Mercer
On Thursday, October 10, 2013, Salz, Rich wrote:

  TLS was designed to support multiple ciphersuites. Unfortunately this
 opened the door
  to downgrade attacks, and transitioning to protocol versions that
 wouldn't do this was nontrivial.
  The ciphersuites included all shared certain misfeatures, leading to the
 current situation.

 On the other hand, negotiation let us deploy it in places where
 full-strength cryptography is/was regulated.

 Sometimes half a loaf is better than nothing.


The last time various SSL/TLS ciphersuites needed to be removed from
webserver configurations, when I managed a datacenter some years ago, it led to
the following 'failure modes': either the user's browser now warned about or
refused to connect to a server using an insecure ciphersuite, or the only
ciphersuites used by a server weren't supported by an old browser
(or both at once):

1) for sites that had low barriers to switching, loss of traffic/customers
to sites that didn't drop the insecure ciphersuites

2) for sites that are harder to leave (your bank, google/facebook-level
sticky public ones [less common]), large increases in calls to support,
with large costs for the business. Non-PCI-compliant businesses taking CC
payments are generally so insecure that customers who fled to them really
are upping their chances of suffering fraud.

In both cases you have a net decrease of security and an increase of fraud
and financial loss.

So in some cases anything less than a whole loaf, which you can't guarantee
for N years of time, isn't 'good enough.' In other words, we are screwed no
matter what.

-David Mercer



-- 
David Mercer - http://dmercer.tumblr.com
IM:  AIM: MathHippy Yahoo/MSN: n0tmusic
Facebook/Twitter/Google+/Linkedin: radix42
FAX: +1-801-877-4351 - BlackBerry PIN: 332004F7
PGP Public Key: http://davidmercer.nfshost.com/radix42.pubkey.txt
Fingerprint: A24F 5816 2B08 5B37 5096  9F52 B182 3349 0F23 225B
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography