Re: [cryptography] cryptographic agility
On 10/4/13 9:48 PM, Jeffrey Goldberg wrote:
> The AES “failure” in TLS is a CBC padding failure. Any block cipher would have “failed” in exactly the same way.

Yes, I know. My second point, about needing a stream cipher other than RC4, is what's applicable to the current BEAST-vs-RC4 dilemma. My point about block ciphers was more hypothetical. As far as we know, AES is good, but some day it might turn out not to be, and even now there is the concern that the AES-256 key schedule is not as good as it could be. My point was just that if you are going to have multiple block ciphers, you should have some diversity, and be able to explain the rationale for why you picked each one (i.e., "this one was for speed, that one was for security margin"). But TLS seems to have opted for the logic that if one 128-bit block cipher is good, four 128-bit block ciphers are better. Perhaps Camellia is a good back-up to AES; I don't know. But I'm not aware of it having been presented as having a higher security margin or something like that, the way Serpent could have been presented. It was just "here's another one." And then we got SEED and ARIA piling on after that. (Or maybe SEED was before Camellia; I don't remember, and it doesn't really matter.)

Yes, CBC mode has been an issue in a lot of the recent attacks against TLS. So block cipher modes are another axis for diversity. A lot of folks seem to be putting a lot of eggs in the GCM basket lately. Maybe that's okay, but I know some concerns have been raised about the complexity of implementing GCM, and the potential for side-channel attacks. Maybe we need EAX as a backup in case GCM doesn't turn out to be as great as it was supposed to be.

Again, I'm not *specifically* saying we need a Serpent-EAX cipher suite or something like that. I'm just saying that, in general, this is the kind of thinking that should be going on: how can we add cipher suites that add diversity, rather than just "me too"?
--Patrick

_______________________________________________
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography
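Patrick's "diversity with a rationale" idea can be sketched concretely: tag each suite with the cipher family it belongs to, and on a break, fall back to the most-preferred suite from a *different* family. A minimal sketch; the suite names, families, and rationales below are illustrative, not actual TLS registry entries.

```python
# Each entry: (suite name, cipher family, rationale for inclusion).
# Names are illustrative, not real TLS ciphersuite identifiers.
SUITES = [
    ("AES128-GCM",   "aes",     "speed (hardware support)"),
    ("SERPENT-EAX",  "serpent", "security-margin backup"),
    ("SALSA20-POLY", "salsa20", "stream cipher, RC4 replacement"),
]

def fallback(broken_family: str) -> str:
    """Pick the most-preferred suite whose cipher family differs
    from the family that was just broken."""
    for name, family, _rationale in SUITES:
        if family != broken_family:
            return name
    raise RuntimeError("no diverse fallback available")
```

With four same-family 128-bit block ciphers (AES, Camellia, SEED, ARIA), a break of the family leaves nothing to fall back to; with a deliberately diverse list, there is always a survivor.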
Re: [cryptography] cryptographic agility (was: Re: the spell is broken)
On Fri, Oct 4, 2013 at 11:48 PM, Jeffrey Goldberg jeff...@goldmark.org wrote:
> On 2013-10-04, at 10:46 PM, Patrick Pelletier c...@funwithsoftware.org wrote:
>> On 10/4/13 3:19 PM, Nico Williams wrote:
>>> b) algorithm agility is useless if you don't have algorithms to choose from, or if the ones you have are all in the same family.
>>
>> Yes, I think that's where TLS failed. TLS supports four block ciphers with a 128-bit block size (AES, Camellia, SEED, and ARIA) without (as far as I'm aware) any clear tradeoff between them.

Well, maybe I was too emphatic. I didn't mean that a protocol like, say, TLS should be born with a large number of ciphersuites. It needs to be born with *two* (of each negotiable cryptographic primitive), to prove that algorithm agility works.

Also, none of this one-integer-to-name-combinations-of-all-algorithms: key exchange, authentication, and KDF should all be negotiated separately from session ciphers (but cipher modes, OTOH, should not be negotiated separately from ciphers). The rationale is that a Cartesian product of algorithms in a manual registry -- and with small integers! -- is not really manageable. Some cipher modes could be separated from ciphers, but there are relatively few combinations of ciphers and cipher modes, so no need to separate them.

> The AES “failure” in TLS is a CBC padding failure. Any block cipher would have “failed” in exactly the same way.

Indeed. 3DES and AES both failed because of CBC IV chaining without randomization in SSHv2. Any block cipher would have failed in the same situation, because the failure was the *mode*'s.

Nico
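Nico's Cartesian-product point can be made concrete with a quick count: negotiating each primitive on its own axis needs one codepoint per entry per axis, while one-integer-per-combination needs the full product. A sketch with illustrative registry sizes (the entries below are examples, not a complete registry):

```python
# Illustrative registry contents per negotiable axis.
key_exchange = ["RSA", "DHE", "ECDHE"]
auth         = ["RSA", "ECDSA"]
cipher_mode  = ["AES-CBC", "AES-GCM", "CAMELLIA-CBC"]  # cipher+mode stay paired
kdf          = ["SHA-1", "SHA-256"]

# Separate negotiation: one codepoint per entry per axis.
separate = len(key_exchange) + len(auth) + len(cipher_mode) + len(kdf)

# One integer per full combination: the Cartesian product.
combined = len(key_exchange) * len(auth) * len(cipher_mode) * len(kdf)

print(separate, combined)  # 10 codepoints vs 36 suite identifiers
```

Even at these toy sizes the product registry is 3-4x larger, and every new algorithm on any axis multiplies it again, which is why a manually managed flat suite registry becomes unmanageable.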
[cryptography] Daniel the King. Jon the President. Linus the God?
On 4/10/13 01:39 AM, James A. Donald wrote:
> On 2013-10-04 03:45, Adam Back wrote:
>> Is it just me or could we better replace NIST by DJB? ;) He can do that EC crypto, and do constant-time coding (NaCl), and non-hackable mail servers (qmail), and worst-time databases (cdb). Most people in the world look like rank amateurs or no-real-programming-understanding, niche-bound math geeks compared to DJB!
>
> Committees are at best inherently more stupid than their most stupid member, and are at worst also inclined to evil and madness. Linux was a success because Linus is unelected president for life. Let us have Jon Callas as unelected president for life of symmetric cryptography, Bernstein as God-King of public key cryptography. Long live the King :) Recall the long succession of WiFi debacles. Has any committee ever done anything good in cryptography? IEEE 802.11 was stupid. If NIST was not stupid, it was because evil was calling the shots behind the scenes, overruling the stupid.

But before we say "Long Live the King", there is another phrase: "The King is Dead!" The problem with ICANN writ large -- before them, there was Jon Postel. When he died, there was ... a power vacuum. This is no new story; traditionally there have been many battles for succession, and historically some of these have been bloody.

While I agree that committees have not served us in security (and that is as far as my thesis goes), I think Kings do not either. We've seen NIST the king, of sorts, and we wonder who whispers into his ear.

It is for this reason I like competition. I like the fact that DJB is whipping NIST's tail. I also like the fact that any young upstart can whip DJB's tail if he can get the curves lined up ...

Long Live Competition!

iang
Re: [cryptography] Daniel the King. Jon the President. Linus the God?
On Sat, Oct 5, 2013 at 4:21 AM, ianG i...@iang.org wrote:
> Long Live Competition!

There should be no King to serve, no Committee to subvert, only an open Process.
Re: [cryptography] the spell is broken
On 04/10/13 22:58, Jeffrey Goldberg wrote:
> On 2013-10-04, at 4:24 AM, Alan Braggins alan.bragg...@gmail.com wrote:
>> Surely that's precisely because they (and SSL/TLS generally) _don't_ have a One True Suite, they have a "pick a suite, any suite" approach?
>
> And for those of us having to choose between preferring BEAST and RC4 for our webservers, it doesn't look like we are really seeing the expected benefits of "negotiate a suite". I'm not trying to use this to condemn the approach; it's a single example. But it's a BIG single example.

Well yes, for most browsers and servers it's "pick a suite -- sorry, we haven't added AES-GCM yet, so you have a choice of one flawed stream cipher or a load of block ciphers, all in flawed MAC-then-encrypt mode". I wasn't suggesting that this choice is a huge benefit over picking One True Suite, just commenting on how Firefox comes to pick Camellia.

(The supposed agility does mean that when people get round to implementing TLS 1.2 and AES-GCM, or if Salsa20 gets added, it can be used without having to define a new One True Suite. But that only helps if new suites actually get adopted before attacks are found on all the old ones, and if an attacker can't easily force a downgrade to SSL 3.0 without the user being warned.)
Re: [cryptography] the spell is broken
On 4/10/13 10:52 AM, Peter Gutmann wrote:
> Jon Callas j...@callas.org writes:
>> In Silent Text, we went far more to the "one true ciphersuite" philosophy. I think that Iang's writings on that are brilliant.
>
> Absolutely. The one downside is that you then need to decide what the OTS is going to be. For example Mozilla (at least via Firefox) seems to think it involves Camellia (!!!?!!?).

Thanks for those kind words, all. Perhaps some deeper background.

When I was writing those hypotheses, I was very conscious that there was *no silver bullet*. I was trying to extrapolate what we should do in a messy world. We all know that too many ciphersuites is a mess. We also know that only one suite is vulnerable to catastrophic failure, and that two or three suites are vulnerable to downgrade attacks, bugs in the switching, and expansion attacks in committee. A conundrum!

Perhaps worse, we know that /our work is probably good/ but we are too few! We need ways to make cryptoplumbing safe for general software engineers, not just us. Not just hand out received wisdom like "use TLS" or "follow NIST". If we've learnt anything recently, it is that standards and NISTs and similar are not always or necessarily the answer.

There are many bad paths. I was trying to figure out what the best path among those bad paths was. From theory, I heard no clarity; I saw noise. But in history I found clues, and that is what informs those hypotheses.

If one looks at the lifecycle of suites (or algorithms, or protocols, or products), then one sees that typically, stuff sticks around much longer than we want. Suites live way past their sell-by date. Once a cryptosystem is in there, it is there to stay until way past embarrassing. Old algorithms, old suites are like senile great-aunts: they hang around, part of the family, we can't quite see how to push them off, and we justify keeping them for all sorts of inane reasons.

Alternatively, if one looks at the history of failures, as John Kelsey pointed to a few days ago, one sees something surprising: rarely is a well-designed, state-of-the-art cryptosuite broken. E.g., AES/CBC/HMAC as a suite is now a decade old, and still strong. Where things go wrong is typically outside the closely designed envelope. More, the failures are like an onion: the outside skin is the UI, and it's tatty before it hits the store. Take the outer layer off, and the inner is quite good, but occasionally broken too. If we keep peeling off the layers, our design looks better and better.

Those blemished outer onion layers, those breaks, wherever they are, provide the next clue in the puzzle. Not only security issues; we also have many business issues, features, compliance ... all sorts of stuff we'd rather ignore. E.g., I'm now adding photos to a secure datagram protocol -- oops! SSL took over a decade for SNI, coz it was a feature-not-bug. Examples abound where we've ignored wider issues because they're SEPs, Somebody-Else's-Problem.

Regardless of what we think or want, if we are really being responsible for the end-user result, we would be faced with pressures to do wholesale fixes. And these fixes will come more from the outside of the onion than from inside. Therefore, I claim: the cryptoplumber will be pressured to replace the system well before needing to replace any particular crypto component.

Add in more issues: resources -- I haven't got a team to spend on tweaking. Better knowledge over time -- we know so much more now. Incompatibility nightmares. Then it becomes clearer that the big picture is rarely about a cryptosuite; it's about the whole darn system.

Hence, I say: plan on replacing the whole lot, when it is needed. Which leads to the corollary: do a good job -- make it Good as well as True! -- and you likely won't need a second. And another corollary: prepare the next generation in background time. In your sleep, on the train, on honeymoon... Be advanced, be ready! In the rarest of circumstances that you do need to replace a cryptosuite, just replace the whole darn lot. It'll be about time, anyway.

> One True Suite works until that suite is no longer true, and then you're left hanging. One way to deal with this that got discussed some time ago over dinner (dining geeks, not cryptographers) is to swap at random among a small number of probably-OK suites and/or algorithms, a sort of probabilistic-security defence against the OTS having a problem. It's not like there's a shortage of them in... well, SSH, SSL/TLS, PGP, S/MIME, etc, anything really.

For some reason, I'm wondering what the optimal method for a random shuffle of dinner choices/plates is, and how the vegetarians are going to respond...

iang
[cryptography] ciphersuite revocation model? (Re: the spell is broken)
You know, part of this problem is the inability to disable dud ciphersuites. Maybe it's time to get pre-emptive on that issue: pair a protocol revocation cert with a new ciphersuite.

I am reminded of the Mondex security model: it was an offline respendable smart-card based ecash system in the UK, with the backing of a few large-name UK banks and card issuers, with little wallets like calculators to check a card's balance. Secure up to tamper resistance or ciphersuite cryptographic break. Not sure Mondex got very far in market penetration beyond a few trial runs, but the ciphersuite update model is interesting.

So their plan was: deploy ciphersuites A and B in the first card issue. Now when someone finds a defect in ciphersuite A, issue a revocation cert for ciphersuite A, and deploy it together with a signed update for ciphersuite C, which you work on polishing in the background during the life-cycle of A. Then the cards run on ciphersuite B, with C as the backup. At all times there is a backup, and at no time do you run on known-defective ciphersuites.

Now the ciphersuite revocation certs are actually distributed p2p, because Mondex is offline respendable. If a card encounters another card that has heard the news that ciphersuite A is dead, it refuses to use it, and passes on the signed news. Maybe to get the update they actually have to go online proper, after a grace period of running only on ciphersuite B, but that's OK; it'll only happen once in a few years. Ciphersuite A is pretty much instantly disabled as the news spreads with each payment.

Maybe something like that could work for browser ciphersuites. It's something related to vendor security updates, except there is a prompting in which each site and client you interact with starts warning you that the clock is ticking, that you have to disable a ciphersuite. If you persist in ignoring it, your browser or server stops working after 6 months.
Adam

On Sat, Oct 05, 2013 at 02:03:38PM +0300, ianG wrote:
> Thanks for those kind words, all. Perhaps some deeper background. [...]
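Adam's Mondex-style scheme above can be sketched in a few lines: an issuer signs a revocation notice for a suite, and cards gossip the notice during each transaction, dropping the dead suite as the news spreads. A toy sketch, with a shared-secret HMAC standing in for the real public-key signature a deployment would use; all names are illustrative.

```python
import hashlib
import hmac

# Stand-in for the issuer's signing key; a real system would use a
# public-key signature (e.g. Ed25519) so anyone can verify, not a
# shared-secret HMAC.
ISSUER_KEY = b"issuer-demo-key"

def make_revocation(suite: str) -> dict:
    """Issuer creates a signed notice that `suite` is dead."""
    mac = hmac.new(ISSUER_KEY, suite.encode(), hashlib.sha256).hexdigest()
    return {"revoked": suite, "sig": mac}

class Card:
    """Toy Mondex-style card: preference-ordered suites plus gossiped news."""
    def __init__(self, suites):
        self.suites = list(suites)  # preference order, e.g. A, B, C
        self.news = []              # verified revocation notices heard so far

    def hear(self, notice: dict) -> None:
        expect = hmac.new(ISSUER_KEY, notice["revoked"].encode(),
                          hashlib.sha256).hexdigest()
        if hmac.compare_digest(expect, notice["sig"]):
            if notice not in self.news:
                self.news.append(notice)
            if notice["revoked"] in self.suites:
                self.suites.remove(notice["revoked"])

    def transact(self, other: "Card") -> str:
        # Exchange news first, then agree on the best surviving suite.
        for n in list(self.news):
            other.hear(n)
        for n in list(other.news):
            self.hear(n)
        common = [s for s in self.suites if s in other.suites]
        return common[0]
```

Usage: if one card has heard that A is revoked, a single transaction both propagates the notice and settles the payment on B, so the defective suite dies out at the speed of commerce.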
Re: [cryptography] ciphersuite revocation model? (Re: the spell is broken)
Should we create some kind of CRL-style protocol for algorithms? Then we'd have a bunch of servers, run by various organizations specializing in crypto/computer security, that can issue warnings against insecure algorithms, as well as cipher modes and combinations of ciphers and whatever else it might be. And your client software would subscribe to a bunch of those servers.

There should probably be degrees to the warnings, since a cipher can be totally broken in one set of circumstances but still work perfectly fine in others. Switching when it's not needed can be costly.

I think I'd prefer if this was OS-level, so all your local software could use it. I think a problem can be that there doesn't seem to be a universal naming convention for ciphers or ciphersuites in software configurations. We'd have to define one that clients can understand.

- Sent from my phone

On 5 Oct 2013 13:41, Adam Back a...@cypherspace.org wrote:
> You know, part of this problem is the inability to disable dud ciphersuites. Maybe it's time to get pre-emptive on that issue: pair a protocol revocation cert with a new ciphersuite. [...]
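Natanael's "degrees of warning" point implies advisories keyed by both algorithm and usage context, since (as noted) a primitive can be broken in one setting and fine in another. A minimal sketch of the client-side lookup; the advisory entries, severity labels, and naming convention here are all illustrative, not a proposed registry.

```python
from enum import Enum

class Severity(Enum):
    OK = 0       # no known practical problem in this context
    WEAK = 1     # avoid for new deployments
    BROKEN = 2   # practical attacks exist in this context

# Hypothetical advisory feed: algorithm -> usage context -> severity.
# Entries are illustrative; a real feed would be signed and subscribed to.
ADVISORIES = {
    "RC4":  {"tls-record": Severity.BROKEN, "wpa-tkip": Severity.WEAK},
    "SHA1": {"signature": Severity.WEAK, "hmac": Severity.OK},
}

def check(algorithm: str, context: str) -> Severity:
    """Client-side lookup: unknown algorithm/context defaults to OK."""
    return ADVISORIES.get(algorithm, {}).get(context, Severity.OK)
```

Note the per-context granularity: the same table that marks RC4 broken as a TLS record cipher can leave HMAC-SHA1 at OK, avoiding costly switches where none is needed.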
Re: [cryptography] ciphersuite revocation model? (Re: the spell is broken)
On Sat, Oct 05, 2013 at 02:29:11PM +0200, Natanael wrote:
> Should we create some kind of CRL-style protocol for algorithms? Then we'd have a bunch of servers, run by various organizations specializing in crypto/computer security, that can issue warnings against insecure algorithms, as well as cipher modes and combinations of ciphers and whatever else it might be. And your client software would subscribe to a bunch of those servers.

Just make sure you sign your protocol revocation message using more than one protocol...

Speaking of which, as a last-ditch measure you can use two messages that hash to the same digest as a type of revocation message.

--
'peter'[:-1]@petertodd.org
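Peter's last-ditch idea works because an actual collision is self-certifying: two distinct messages with the same digest prove the hash is broken, and anyone can verify that without trusting any signer. A sketch of the verification, using a deliberately truncated hash so a collision can be found by brute force for demonstration (a real revocation would present a collision in the deployed hash itself):

```python
import hashlib
from itertools import count

def weak_hash(data: bytes) -> bytes:
    # Deliberately crippled 2-byte hash so a collision is cheap to find;
    # stands in for a real hash function that has been broken.
    return hashlib.sha256(data).digest()[:2]

def is_valid_break_proof(m1: bytes, m2: bytes) -> bool:
    """Two distinct messages with equal digests prove the hash is broken."""
    return m1 != m2 and weak_hash(m1) == weak_hash(m2)

def find_collision():
    """Birthday search: ~2^8 tries on average for a 16-bit digest."""
    seen = {}
    for i in count():
        m = str(i).encode()
        d = weak_hash(m)
        if d in seen:
            return seen[d], m
        seen[d] = m
```

The check is trust-free, which is what makes it suitable as an out-of-band "this algorithm is dead" signal even when the signing infrastructure itself used that hash.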
Re: [cryptography] Daniel the King. Jon the President. Linus the God?
As we're down a rat hole now, perhaps this can be the last word:

"We reject: kings, presidents and voting. We believe in: rough consensus and running code." -- David Clark
Re: [cryptography] Daniel the King. Jon the President. Linus the God?
On 2013-10-06 02:52, d...@geer.org wrote:
> We reject: kings, presidents and voting. We believe in: rough consensus and running code.

Which gave us IEEE 802.11. Which, like Occupy Wall Street, worked by consensus.
Re: [cryptography] the spell is broken
On Sat, Oct 5, 2013 at 3:13 PM, Erwann Abalea eaba...@gmail.com wrote:
> 2013/10/4 Paul Wouters p...@cypherpunks.ca
> [...]
>>> People forget the NSA has two faces. One side is good.
>>
>> NIST and FIPS and NSA are all related. One lesson here might be: only use FIPS when the USG requires it. That said, a lot of FIPS still makes sense. I'm surely not going to stick with MD5 or SHA-1.
>
> We're still using HMAC-SHA1 for most TLS ciphersuites, RSA(MD5||SHA1) for TLS signatures (until TLS 1.2), and RSA(SHA1) to sign (EC)DHE parameters. SHA1 is still there.
>
>> There are alternatives; it doesn't hurt to get them in place. Yes, like the IETF Brainpool drafts.
>
> RFC 5639 standardized the curves, RFC 7027 allows them to be used for TLS. They're no longer drafts.

Do you know if there's a standard name and OID assigned to Dr. Bernstein's gear? The IETF makes only one mention of 25519 in the RFC search, and it's related to TLS and marked TBD. Lack of a mailing list for NaCl is crippling. (Sorry to wander a bit.)

Jeff
[cryptography] Curve25519 OID (was: Re: the spell is broken)
On 10/5/13 2:47 PM, Jeffrey Walton wrote:
> Do you know if there's a standard name and OID assigned to Dr. Bernstein's gear? The IETF makes only one mention of 25519 in the RFC search, and it's related to TLS and marked TBD.

Not yet. See this thread: http://www.ietf.org/mail-archive/web/tls/current/msg10074.html

(In short, the argument was that an OID for Curve25519 is only useful if it's going to be used for signatures, and Curve25519 shouldn't directly be used for signatures; Ed25519 should be used instead.)

--Patrick
Re: [cryptography] Curve25519 OID (was: Re: the spell is broken)
On Sat, Oct 5, 2013 at 7:35 PM, Patrick Pelletier c...@funwithsoftware.org wrote:
> Not yet. See this thread: http://www.ietf.org/mail-archive/web/tls/current/msg10074.html
>
> (In short, the argument was that an OID for Curve25519 is only useful if it's going to be used for signatures, and Curve25519 shouldn't directly be used for signatures; Ed25519 should be used instead.)

Thanks Patrick. I tend to agree with Simon when he remarked "[OID assignment for ed25519] doesn't belong in the TLS WG", though.

For completeness, Crypto++ has a factory-like method that serves curves. The curves are sorted by OID in the function, so Crypto++ would need an OID for Ed25519. See around lines 120 and 250 at http://www.cryptopp.com/docs/ref/eccrypto_8cpp_source.html. I doubt Wei Dai will accept a patch that breaks from his design.

In the meantime, folks are hacking in something (from other conversations I've had with some folks). That makes it hard to use Ed25519 correctly, and possibly easy to use incorrectly.

Jeff
Re: [cryptography] Allergy for client certificates
On 30/09/13 19:55 PM, Guido Witmond wrote:
> On 09/30/13 17:43, Adam Back wrote:
>> Anyway, and all that because we are seemingly allergic to using client-side keys, which kill the password problem dead.
>
> Hi Adam, I wondered about that 'allergy' myself. I have some ideas about that and I'm curious to learn about others. Here are mine:
>
> 1. The long-standing belief is that client systems are untrustworthy. Any malware will go after the client certificates. So without proper sandboxing, capability-security and other partitioning mechanisms, the user is toast.

If the client system is untrustworthy, then the user is toast, and the password is so much candy. So this is not something that affects client certs one way or another. The most popular consumer OS was (is?) also the most leaky. Where was The Hurd when we needed it? Why did people fall for Unix when Multics was so much better?

> 2. It's easier to change a password in a database than to talk the user through creating and submitting a new pub/priv key pair.

No way. We've done that work over at CAcert, and it is far easier to have the user create new certs than to authenticate the user for a new password. In essence, what it does is outsource the lost-password problem over to the certificate business, which is also more efficient. The problem of client cert management is strictly bad software and not enough attention to making it easy. There is a cert rollover issue, but again, that's because there isn't enough attention to it.

> 3. The crypto-programs were too difficult to use, requiring end users to make trust decisions about entities they never heard of.

Again, this is a myth. It's actually easy enough to run a single-purpose CA. It's a few thousand lines of code.

> Who is Verisign and why should I trust them?

That's certainly a question.

> 4. Client certificates from the big CA-peddlers are akin to digital passports, eliminating all non-repudiation.

That's all marketing blather. It can be ignored for the most part.

> I.e., a privacy problem.

Yes to the privacy problem. But that's a lost battle, as if they are tracking the users, they are doing it through 100 other mechanisms anyway.

> 5. Only recently, computers have become powerful enough to encrypt everything, all the time. Now we can afford to burn CPU cycles on encryption without having usability suffer.

That's also an old dead argument. In order to address the phishing thing, we have to move everything over to HTTPS. So we're going to be encrypting everything anyway.

iang