Re: Fixing SSL (was Re: Dutch Transport Card Broken)
Philipp Gühring wrote:
> I had the feeling that Microsoft wants to abandon the usage of client
> certificates completely, and move the people to CardSpace instead. But how
> do you sign your emails with CardSpace? CardSpace only does the realtime
> authentication part of the market ...

It's not rocket science. You have a public/private keypair. You can sign
emails. For example, import your CardSpace key into PGP.

--
http://www.apache-ssl.org/ben.html           http://www.links.org/

"There is no limit to what a man can do or how far he can go if he doesn't
mind who gets the credit." - Robert Woodruff

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
Thierry Moreau <[EMAIL PROTECTED]> writes:

>At first, it seems neat. But then, looking at how it works in practice: the
>client receives an e-mail notification soliciting him to click on a HTML link
>and then enroll for a security certificate, the client is solicited exactly
>like a phishing criminal would do,

Correction: "exactly like phishing criminals are actively doing right now"
(hat tip to Don Jackson of SecureWorks, who has investigated and documented
this practice). Given the almost complete failure of client certs in the
marketplace, I found it most amusing that the current active users of "client
certs" are phishers. It reminded me of spammers and SPF.

>Title: Sender driven certification enrollment system
>Document Type and Number: United States Patent 6651166
>Link to this page: http://www.freepatentsonline.com/6651166.html
>
>Filing Date: 04/09/1998
>Publication Date: 11/18/2003

Thus postdating Microsoft's CertEnroll/Certenr3/Xenroll ActiveX control by
several years. The only difference here is that the user generates the cert
directly rather than involving a CA.

Peter.
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
Leichter, Jerry wrote:
> While trying to find something else, I came across the following reference:
>
> Title: Sender driven certification enrollment system
> Document Type and Number: United States Patent 6651166
> Link to this page: http://www.freepatentsonline.com/6651166.html
>
> Abstract: A sender driven certificate enrollment system and methods of its
> use are provided, in which a sender controls the generation of a digital
> certificate that is used to encrypt and send a document to a recipient in a
> secure manner. The sender compares previously stored recipient information
> to gathered information from the recipient. If the information matches, the
> sender transfers key generation software to the recipient, which produces
> the digital certificate, comprising a public and private key pair. The
> sender can then use the public key to encrypt and send the document to the
> recipient, wherein the recipient can use the matching private key to
> decrypt the document.

Some feedback on the above security certificate issuance process.

At first, it seems neat. But then, looking at how it works in practice: the
client receives an e-mail notification soliciting him to click on an HTML
link and then enroll for a security certificate. The client is solicited
exactly like a phishing criminal would do it, and a Java software utility
downloaded from the web should not be allowed to modify security-critical
parameters on the local machine.

According to my records, this issuance process is nonetheless representative
of research directions for user enrollment, i.e. there aren't too many other
documented processes in this area.

Regards,

--
- Thierry Moreau
CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, Qc Canada H2M 2A1
Tel.: (514)385-5691
Fax: (514)385-5900
web site: http://www.connotech.com
e-mail: [EMAIL PROTECTED]
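[Editorial sketch: the enrollment flow described in the patent abstract can
be modelled roughly as below. This is an illustrative toy, not code from the
patent -- the class names are invented, and the textbook-RSA parameters
(p=61, q=53) with per-byte encryption are trivially breakable, for
demonstration only.]

```python
# Toy textbook-RSA parameters (p=61, q=53) -- demonstration only.
N, E, D = 3233, 17, 2753

def generate_keypair():
    """Recipient-side 'key generation software': returns (public, private)."""
    return (N, E), (N, D)

def encrypt(public, data: bytes) -> list:
    n, e = public
    return [pow(b, e, n) for b in data]       # per-byte RSA: toy only

def decrypt(private, blocks: list) -> bytes:
    n, d = private
    return bytes(pow(c, d, n) for c in blocks)

class Sender:
    def __init__(self, stored_recipient_info):
        self.stored = stored_recipient_info   # previously stored recipient info
        self.pubkeys = {}

    def enroll(self, recipient_id, gathered_info, recipient):
        # Step 1: compare stored info against info gathered from the recipient.
        if self.stored.get(recipient_id) != gathered_info:
            raise PermissionError("recipient info mismatch; no enrollment")
        # Step 2: transfer the key generation software; the recipient runs it
        # locally and only the public half comes back to the sender.
        self.pubkeys[recipient_id] = recipient.run_keygen(generate_keypair)

    def send_document(self, recipient_id, document: bytes):
        # Step 3: encrypt the document under the recipient's public key.
        return encrypt(self.pubkeys[recipient_id], document)

class Recipient:
    def run_keygen(self, keygen):
        public, self.private = keygen()       # private key never leaves here
        return public

    def receive(self, blocks):
        return decrypt(self.private, blocks)
```

Note that, as Thierry observes above, the weak point is step 2: nothing here
authenticates the "key generation software" delivery itself.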
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
> Is anyone aware of any third-party usability studies on CardSpace, OpenID,
> ...?

I'm not. It would be a good opportunity for security usability researchers to
contribute, though.

> [0] I'm not sure whether putting "CardSpace" and "Liberty" in such close
> proximity in the above line was a good idea. If your monitor starts smoking
> due to the friction generated, please cut&paste one of the two elsewhere.

Actually, lots of Liberty and WS/Infocard/etc. people are working on interop
scenarios; see:

http://projectconcordia.org/index.php/Main_Page

- RL "Bob"
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
> Imagine if a website could instruct your browser to transparently generate
> a public/private keypair for use with that website only and send the public
> key to that website. Then, any time that the user returns to that website,
> the browser would automatically use that private key to authenticate
> itself. For instance, boa.com might instruct my browser to create one
> private key for use with *.boa.com; later, citibank.com could instruct my
> browser to create a private key for use with *.citibank.com. By associating
> the private key with a specific DNS domain (just as cookies are), this
> means that the privacy implications of client authentication would be
> comparable to the privacy implications of cookies. Also, in this scheme,
> there wouldn't be any need to have your public key signed by a CA; the site
> only needs to know your public key (e.g., your browser could send
> self-signed certs), which eliminates the dependence upon the third-party
> CAs. Any thoughts on this?

You don't have to imagine this. It is exactly how Infocard (the generic name
of the technology of which Microsoft's CardSpace is one implementation of one
component) works in its most common mode (the personal or self-issued card).
It has lots of other benefits even in this mode (user-managed attributes, a
graphical UI), as well as other modes to support identity providers (managed
cards).

Lest you think that this is Microsoft-only, be assured that there is a large
community building implementations for many other platforms and systems. OSIS
(http://osis.idcommons.net/) is the prime venue for people to work on
interoperability across the spectrum of implementations. There's a big
interop event coming up at the RSA conference in April. If you'd like to help
make your scenario a pervasive reality, check it out.

- RL "Bob"
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
Philipp Gühring <[EMAIL PROTECTED]> writes:

>I had the feeling that Microsoft wants to abandon the usage of client
>certificates completely, and move the people to CardSpace instead.

While there's an obvious interpretation of that ("Microsoft want to lock
everyone into CardSpace"), there's a second interpretation that's equally
likely: after more than 10 years of effort and getting almost exactly nowhere
with client certs, Microsoft are moving on to something more likely to
succeed. (Actually, I have no idea how workable CardSpace is, since I don't
think anyone's done any usability studies on it, but I doubt it's more
unworkable than client certs. Is anyone aware of any third-party usability
studies on CardSpace, OpenID, ...?)

>But how do you sign your emails with CardSpace?

Does anyone care that you can't sign your emails with CardSpace? (I could
post my standard reference on this here again :-). The unwashed masses don't
even know what signed email is, let alone care about using it. I know that
there are assorted corporates and so on that are still keen on email signing,
but they can keep playing with PKI for that. CardSpace/Liberty/OpenID/SAML/
whatever[0] should handle the rest. Eventually.

Peter.

[0] I'm not sure whether putting "CardSpace" and "Liberty" in such close
proximity in the above line was a good idea. If your monitor starts smoking
due to the friction generated, please cut&paste one of the two elsewhere.
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
On Feb 11, 2008, at 8:28 AM, Philipp Gühring wrote:
> I had the feeling that Microsoft wants to abandon the usage of client
> certificates completely, and move the people to CardSpace instead. But how
> do you sign your emails with CardSpace? CardSpace only does the realtime
> authentication part of the market ...

We (Morgan Stanley) were able to pressure them into a rapid fix, and they
have committed to delivering it in SP1. Keep your fingers crossed.

> If anyone needs more information on how to upgrade your Web-based CA for
> IE7: http://wiki.cacert.org/wiki/IE7VistaSource

Step (2), "On Vista you have to add this website to the list of trusted sites
in the internet-settings," can be quite unpalatable. Depending on your
customers' situations, an alternative might be more palatable: generate the
key and deliver a PKCS#12. This depends on whether you believe in the
non-repudiation fairy or not -- or, more accurately, whether you're already
assuming the repudiation risk.

-wps
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
Hi,

> Microsoft broke this in IE7... It is no longer possible to generate and
> enroll a client cert from a CA not on the trusted root list. So private
> label CAs can no longer enroll client certs. We have requested a fix,
> so this may come in the future, but the damage is already done...
>
> Also the IE7 browser APIs for this are completely different and rather
> minimally documented. The interfaces are not portable between browsers,
> ... It's a mess.

I can fully confirm this. Microsoft claimed that they had to rewrite the API
to make it more secure, but I only found one small security-relevant weakness
that they fixed; the others are still there. (And even that fix wouldn't have
justified a rewrite of the API for websites. They could have kept the
frontend API compatible, in my opinion.)

I had the feeling that Microsoft wants to abandon the usage of client
certificates completely, and move the people to CardSpace instead. But how do
you sign your emails with CardSpace? CardSpace only does the realtime
authentication part of the market ...

If anyone needs more information on how to upgrade your Web-based CA for IE7:
http://wiki.cacert.org/wiki/IE7VistaSource

Best regards,
Philipp Gühring
Re: TLS-SRP & TLS-PSK support in browsers (Re: Dutch Transport Card Broken)
> "Werner" == Werner Koch <[EMAIL PROTECTED]> writes:

Werner> The last time I checked the Mozilla code they used their own crypto
Werner> stuff. When did they switch to OpenSSL, and how do they solve the
Werner> GPL/OpenSSL license incompatibility?

Indeed they do. It is called NSS, is available as a package of its own on
several dists, is written in C, is MPL|GPL|LGPL and has its own page at:

http://www.mozilla.org/projects/security/pki/nss/

The Gentoo ebuild even installs a pkgconfig file. I don't recall seeing
anything non-Mozilla using it, though.

-JimC
--
James Cloos <[EMAIL PROTECTED]>         OpenPGP: 1024D/ED7DAEA6
Re: TLS-SRP & TLS-PSK support in browsers (Re: Dutch Transport Card Broken)
On Sun, Feb 10, 2008 at 07:27:28PM +0100, Werner Koch wrote:
> On Thu, 7 Feb 2008 16:37, [EMAIL PROTECTED] said:
>
> > I don't have any idea why or why not, but all they can release now is
> > source code with #ifdef openssl >= 0.9.9 ... do PSK stuff ... #endif,
>
> The last time I checked the Mozilla code they used their own crypto
> stuff. When did they switch to OpenSSL, and how do they solve the
> GPL/OpenSSL license incompatibility?

You are probably right about that; they use the "NSS" library. It is
sometimes easy to forget that not all the world is OpenSSL...

--
    /"\ ASCII RIBBON                    NOTICE: If received in error,
    \ / CAMPAIGN    Victor Duchovni     please destroy and notify
     X  AGAINST     IT Security,        sender. Sender does not waive
    / \ HTML MAIL   Morgan Stanley      confidentiality or privilege,
                                        and use is prohibited.
Re: TLS-SRP & TLS-PSK support in browsers (Re: Dutch Transport Card Broken)
On Thu, 7 Feb 2008 16:37, [EMAIL PROTECTED] said:

> I don't have any idea why or why not, but all they can release now is
> source code with #ifdef openssl >= 0.9.9 ... do PSK stuff ... #endif,

The last time I checked the Mozilla code they used their own crypto stuff.
When did they switch to OpenSSL, and how do they solve the GPL/OpenSSL
license incompatibility?

Salam-Shalom,

   Werner

--
Die Gedanken sind frei. Ausnahme regelt ein Bundesgesetz.
(Thoughts are free. Exceptions are regulated by a federal law.)
Re: TLS-SRP & TLS-PSK support in browsers (Re: Dutch Transport Card Broken)
Peter Gutmann wrote:
> Victor Duchovni <[EMAIL PROTECTED]> writes:
> > While Firefox should ideally be developing and testing PSK now, without
> > stable libraries to use in servers and browsers, we can't yet expect
> > anything to be released.
>
> Is that the FF developers' reason for holding back? Just wondering... why
> not release it with TLS-PSK/SRP anyway (particularly with 3.0 being in the
> beta stage, it'd be the perfect time to test new features), tested against
> existing implementations, then at least it's ready for when server support
> appears. At the moment we seem to be in a catch-22: servers don't support
> it because browsers don't, and browsers don't support it because servers
> don't.

I would say that this would not hold the FF developers back, as they were
definitely capable of implementing the TLS/SNI extension a year or two back,
without any support from stable libraries in Apache httpd, Microsoft IIS,
etc. (still waiting...).

I'd also suggest that TLS/SNI (which will apparently turn up one day in
Apache) will have a much more dramatic effect on phishing than TLS-PSK/SRP
... because of the economics, of course. Lowering the barriers on all TLS use
is far more important than making existing TLS use easier. Of course, this is
not a competition, as the effects add rather than compete.

The good thing is that we may actually get to see the effects of both fixes
to TLS roll out at similar times. In economics, it is a truism that we can't
run the experiment; we have to watch real life, Heisenberg style, and this
may give us a chance to do that.

Also, we can observe another significant factor in the mix: the rollout of
virtual machine platforms (Xen and the like) is dramatically changing the
economics of IP numbers, these now becoming more of a limiting factor than
they were, which might also put more pressure on Apache ... to release
earlier and more often.

iang
Re: TLS-SRP & TLS-PSK support in browsers (Re: Dutch Transport Card Broken)
Peter Gutmann wrote:
> There's always the problem of politics. You'd think that support for a free
> CA like CAcert would also provide fantastic marketing opportunities for a
> free browser like Firefox, but this seems to be stalled pretty much
> indefinitely because, since CAcert doesn't charge for certificates,
> including it in Firefox would upset the commercial CAs that do (there's
> actually a lot more to it than this, see the interminable flamewars on this
> topic on blogs and whatnot for more information).

The situation with CAcert and Mozo is fairly simple. Mozo ran a long and open
design exercise for a CA policy, which specifies that each CA requires an
audit [1]. CAcert hasn't got an audit [2].

Mozo did indeed work quite hard to give CAcert and others some more open
access to the process. One could debate the wisdom of having an audit at all,
or ascribe the motives to politics, or whatever [3] ... in the end, Mozo
moved a considerable distance by opening up the process to
non-financial-audit firms and to criteria from non-consortium authors [4].

CAcert also now conducts an open process [5], so it is much easier to talk
about the audit. It is well advanced on the policy side, only lacking one or
two critical policies which are works-in-progress. Audits generally deliver
reports that say things like "management has put in place procedures and
policies...", so CAcert is in good shape here.

Where the audit has stalled is on the systems side (and the missing policies
are all on that side as well). CAcert will either solve their systems
problems or die in the attempt. My current estimate is that if CAcert moves
seriously to solve the systems problems, then it may have the audit by early
2009. If not, not.

You can read more about it [6] or ask me or them, or join their many mail
lists, etc. etc.

iang

[1] The process was led by Frank Hecker on the open Mozo security maillist. I
was part of that process, as was Duane (founder of CAcert), because it was an
open process. http://www.mozilla.org/projects/security/certs/policy/
IMO, the Mozo CA policy project was a great case study in open security, and
should be copied by others, including other Mozo security processes.

[2] By way of disclosure, I am the auditor. Minutes of the most recent
published audit report: http://wiki.cacert.org/wiki/AuditPresentations

[3] FTR, I argued against the requirements for audits.

[4] The case for audits was significantly weakened when rumours spread of
audited CAs conducting MITMs on their own customers, and the logical claim
that this was permitted under audit as long as it was disclosed, sort of,
somewhere, maybe. This was crucial in shifting consensus to allow competition
in audit criteria and auditors.

[5] Due to direction from Greg Rose (retiring President) and a funding deal
with NLnet that imposes frequent public reports.

[6] http://wiki.cacert.org/wiki/Audit
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
| By the way, it seems like one thing that might help with client certs
| is if they were treated a bit like cookies. Today, a website can set
| a cookie in your browser, and that cookie will be returned every time
| you later visit that website. This all happens automatically. Imagine
| if a website could instruct your browser to transparently generate a
| public/private keypair for use with that website only and send the
| public key to that website. Then, any time that the user returns to
| that website, the browser would automatically use that private key to
| authenticate itself. For instance, boa.com might instruct my browser
| to create one private key for use with *.boa.com; later,
| citibank.com could instruct my browser to create a private key for
| use with *.citibank.com. By associating the private key with a specific
| DNS domain (just as cookies are), this means that the privacy
| implications of client authentication would be comparable to the
| privacy implications of cookies. Also, in this scheme, there wouldn't
| be any need to have your public key signed by a CA; the site only needs
| to know your public key (e.g., your browser could send self-signed
| certs), which eliminates the dependence upon the third-party CAs.
| Any thoughts on this?

While trying to find something else, I came across the following reference:

Title: Sender driven certification enrollment system
Document Type and Number: United States Patent 6651166
Link to this page: http://www.freepatentsonline.com/6651166.html

Abstract: A sender driven certificate enrollment system and methods of its
use are provided, in which a sender controls the generation of a digital
certificate that is used to encrypt and send a document to a recipient in a
secure manner. The sender compares previously stored recipient information to
gathered information from the recipient. If the information matches, the
sender transfers key generation software to the recipient, which produces the
digital certificate, comprising a public and private key pair. The sender can
then use the public key to encrypt and send the document to the recipient,
wherein the recipient can use the matching private key to decrypt the
document.

This was work done at Xerox. I was trying to find a different report at Xerox
in response to Peter Gutmann's comment that certificates aren't used because
they are impractical/unusable. PARC has done some wonderful work on dealing
with those problems. See:

http://www.parc.com/research/projects/usablesecurity/wireless.html

Not "Internet scale", but in an enterprise, it should work.

-- Jerry
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
re: http://www.garlic.com/~lynn/aadsm28.htm#30 Fixing SSL

So lots of the AADS scenarios
http://www.garlic.com/~lynn/x959.html#aads
are that every place a password might appear, have a public key instead. For
various of the cookie authentication operations ... also think Kerberos
tickets. Recent reference:
http://www.garlic.com/~lynn/2008c.html#31 Kerberized authorization service

Part of the scenario for cookie/ticket encryption ... involving servers ...
is brute force attack on the server secret key. The cookie, instead of being
all encrypted data ... has some sort of client registration value ...
analogous to an account number or userid. The cookie carries the registration
value followed by the server-encrypted data. The encryption part uses a
derived key ... formed by combination of the server's secret key and the
client's registration value. These derived-key scenarios are also found in
transit system operation (both magstripe and memory chip) as well as
financial transactions.

The issue then is initial registration ... the part where the user chooses
their userid (and/or the client registration value is otherwise selected) and
supplies a password (but in this case a public key). Microsoft and others
have been using CAPTCHA to weed out the non-humans, but this has come under
attack. Reference to recent news item:
http://www.garlic.com/~lynn/2008d.html#2 Spammer's bot cracks Microsoft's CAPTCHA

The ticket/cookie carries the client's public key (and whatever other
characteristics) ... which then can be used by the server(s) to perform
dynamic authentication (digital signing of some server-supplied random data,
as a countermeasure to replay attacks). This is in lieu of the server having
to maintain the client account record ... ala a RADIUS scenario where a
public key has been registered in lieu of a password (some sort of online
access to RADIUS account records). Various RADIUS public-key-in-lieu-of-
password postings:
http://www.garlic.com/~lynn/subpubkey.html#radius

The ticket/cookie scenario (with derived-key encryption) is a cross between
dynamic server-side account record data (say, a RADIUS repository) and the
stale, static digital certificate scenario. As in the transit gate operation,
the ticket/cookie could also be retrieved, decrypted, updated, re-encrypted,
and returned as part of the operation.

Initial server handshakes can include the server sending some random
challenge data. The client returns the digital signature and their previously
obtained cookie. In the straight RADIUS public key handshake scenario, just
the digital signature and client userid/account-number are returned, since
the rest of the cookie/ticket-equivalent info is online in the RADIUS account
repository. The straight RADIUS scenario would be to take the server-side
random challenge data and combine it with the client registration value
(account number, userid) and whatever else the client-side digital signing
requires ... and return the userid/account-number, any other data, and the
digital signature (i.e. the server side has to be able to reconstruct what
the client actually digitally signed as part of verifying the digital
signature).

In the straight RADIUS scenario, the public key (and any associated
permissions, authorizations, etc.) is obtained from the RADIUS repository. In
the cookie/ticket scenario, it is obtained from the cookie/ticket appended to
the message.

The business process still has the initial registration phase ... where the
original cookie is created (or, in the RADIUS scenario, where the userid
definition is initially created) and the public key is supplied (in lieu of a
password). This is also effectively the original certificateless pk-init
scenario for Kerberos (aka public key in lieu of password):
http://www.garlic.com/~lynn/subpubkey.html#kerberos

The cookie scenario is standard client/server ... attempting to eliminate the
server having to retain an account record on behalf of every client (as in
either the RADIUS and/or Kerberos scenarios). Encrypting the cookie data is
standard ... although transit systems and financial transactions have gone to
derived keys for the situation ... as a countermeasure to brute force attack
on the infrastructure secret key.
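[Editorial sketch: the derived-key cookie construction described above can be
illustrated as follows. This is my own toy code, not from the post; for
clarity it shows only key derivation and an integrity tag -- the transit and
financial systems described would also encrypt the payload, and the field
framing here is simplistic.]

```python
import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)   # the server's master secret key

def derive_key(registration_value: bytes) -> bytes:
    # Per-client key = f(server secret, client registration value). The
    # server keeps no per-client account record, and an attack on one
    # cookie only targets that client's derived key, not the master key.
    return hmac.new(SERVER_SECRET, registration_value, hashlib.sha256).digest()

def issue_cookie(registration_value: bytes, payload: bytes) -> bytes:
    # Cookie = registration value in the clear, followed by protected data.
    # Toy framing: fields are assumed not to contain "|" (the hex tag never does).
    k = derive_key(registration_value)
    tag = hmac.new(k, payload, hashlib.sha256).hexdigest().encode()
    return registration_value + b"|" + payload + b"|" + tag

def check_cookie(cookie: bytes) -> bytes:
    # The server re-derives the key from the cleartext registration value,
    # so no lookup in an account repository is needed.
    registration_value, payload, tag = cookie.split(b"|")
    k = derive_key(registration_value)
    expected = hmac.new(k, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("cookie forged or tampered")
    return payload
```

As in the transit-gate description, a server could decode such a cookie,
update the payload, and re-issue it within a single operation.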
Re: Dutch Transport Card Broken
Steven M. Bellovin wrote:
> There's another issue: initial account setup. [Even with SRP] people will
> still need to rely on certificate-checking for that. It's a real problem at
> some hotspots, where Evil Twin attacks are easy and lots of casual users
> are signing up for the first time.

For banks and health care, initial account setup always involves out-of-band
communication, so certificate checking is not needed. We need to build our
security mechanisms to fit characteristic human out-of-band security, rather
than trying to force humans to imitate computers.
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
On Sat, Feb 09, 2008 at 05:04:28PM -0800, David Wagner wrote:
> By the way, it seems like one thing that might help with client certs
> is if they were treated a bit like cookies. Today, a website can set
> a cookie in your browser, and that cookie will be returned every time
> you later visit that website. This all happens automatically. Imagine
> if a website could instruct your browser to transparently generate a
> public/private keypair for use with that website only and send the
> public key to that website. Then, any time that the user returns to
> that website, the browser would automatically use that private key to
> authenticate itself. For instance, boa.com might instruct my browser
> to create one private key for use with *.boa.com; later,
> citibank.com could instruct my browser to create a private key for
> use with *.citibank.com.

Microsoft broke this in IE7... It is no longer possible to generate and
enroll a client cert from a CA not on the trusted root list. So private label
CAs can no longer enroll client certs. We have requested a fix, so this may
come in the future, but the damage is already done...

Also, the IE7 browser APIs for this are completely different and rather
minimally documented. The interfaces are not portable between browsers, ...
It's a mess.

--
Victor Duchovni
IT Security, Morgan Stanley
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
On 2/9/08, David Wagner <[EMAIL PROTECTED]> wrote:
> By the way, it seems like one thing that might help with client certs
> is if they were treated a bit like cookies.

I don't see how this helps with phishing. Phishers will just go after the
password or other secrets used to authenticate a new system, or a system that
has lost its cert.

--
Taral <[EMAIL PROTECTED]>
"Please let me know if there's any further trouble I can give you."
    -- Unknown
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
David Wagner <[EMAIL PROTECTED]> writes:

>Tim Dierks writes:
>>(there are totally different reasons that client certs aren't being
>>widely adopted, but that's beside the point).
>
>I'd be interested in hearing your take on why SSL client certs aren't widely
>adopted.

Because they're essentially unworkable. At the risk of spamming this
reference a bit too often here:

http://www.cs.auckland.ac.nz/~pgut001/pubs/usability.pdf

There's detailed discussion there of results of user studies, conference
papers, references, (hopefully) all the information you need.

Peter.
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
David Wagner wrote:
> I'd be interested in hearing your take on why SSL client certs aren't
> widely adopted. It seems like they could potentially help with the phishing
> problem (at least, the problem of theft of web authenticators -- it
> obviously won't help with theft of SSNs). If users don't know the
> authentication secret, they can't reveal it. The nice thing about using
> client certs instead of passwords is that users don't know the private key
> -- only the browser knows the secret key.
>
> The standard concerns I've heard are: (a) SSL client certs aren't supported
> very well by some browsers; (b) this doesn't handle the mobility problem,
> where the user wants to log in from multiple different browsers. So you'd
> need a different mechanism for initially registering the user's browser.
>
> By the way, it seems like one thing that might help with client certs is if
> they were treated a bit like cookies. Today, a website can set a cookie in
> your browser, and that cookie will be returned every time you later visit
> that website. This all happens automatically. Imagine if a website could
> instruct your browser to transparently generate a public/private keypair
> for use with that website only and send the public key to that website.
> Then, any time that the user returns to that website, the browser would
> automatically use that private key to authenticate itself. For instance,
> boa.com might instruct my browser to create one private key for use with
> *.boa.com; later, citibank.com could instruct my browser to create a
> private key for use with *.citibank.com. By associating the private key
> with a specific DNS domain (just as cookies are), this means that the
> privacy implications of client authentication would be comparable to the
> privacy implications of cookies. Also, in this scheme, there wouldn't be
> any need to have your public key signed by a CA; the site only needs to
> know your public key (e.g., your browser could send self-signed certs),
> which eliminates the dependence upon third-party CAs.
>
> Any thoughts on this?

In AADS
http://www.garlic.com/~lynn/x959.html#aads
and certificateless public key
http://www.garlic.com/~lynn/subpubkey.html#certless
we referred to the scenario as person-centric ... as a contrast to
institutional-centric oriented implementations.

Past posts in this thread:
http://www.garlic.com/~lynn/aadsm28.htm#20 Fixing SSL (was Re: Dutch Transport Card Broken)
http://www.garlic.com/~lynn/aadsm28.htm#24 Fixing SSL (was Re: Dutch Transport Card Broken)
http://www.garlic.com/~lynn/aadsm28.htm#26 Fixing SSL (was Re: Dutch Transport Card Broken)
Fixing SSL (was Re: Dutch Transport Card Broken)
Tim Dierks writes:
>(there are totally different reasons that client certs aren't being
>widely adopted, but that's beside the point).

I'd be interested in hearing your take on why SSL client certs aren't widely
adopted. It seems like they could potentially help with the phishing problem
(at least, the problem of theft of web authenticators -- it obviously won't
help with theft of SSNs). If users don't know the authentication secret, they
can't reveal it. The nice thing about using client certs instead of passwords
is that users don't know the private key -- only the browser knows the secret
key.

The standard concerns I've heard are: (a) SSL client certs aren't supported
very well by some browsers; (b) this doesn't handle the mobility problem,
where the user wants to log in from multiple different browsers. So you'd
need a different mechanism for initially registering the user's browser.

By the way, it seems like one thing that might help with client certs is if
they were treated a bit like cookies. Today, a website can set a cookie in
your browser, and that cookie will be returned every time you later visit
that website. This all happens automatically. Imagine if a website could
instruct your browser to transparently generate a public/private keypair for
use with that website only and send the public key to that website. Then, any
time that the user returns to that website, the browser would automatically
use that private key to authenticate itself. For instance, boa.com might
instruct my browser to create one private key for use with *.boa.com; later,
citibank.com could instruct my browser to create a private key for use with
*.citibank.com. By associating the private key with a specific DNS domain
(just as cookies are), this means that the privacy implications of client
authentication would be comparable to the privacy implications of cookies.

Also, in this scheme, there wouldn't be any need to have your public key
signed by a CA; the site only needs to know your public key (e.g., your
browser could send self-signed certs), which eliminates the dependence upon
third-party CAs.

Any thoughts on this?
Re: TLS-SRP & TLS-PSK support in browsers (Re: Dutch Transport Card Broken)
On Thu, Feb 07, 2008 at 08:47:20PM +1300, Peter Gutmann wrote: > Victor Duchovni <[EMAIL PROTECTED]> writes: > > >While Firefox should ideally be developing and testing PSK now, without > >stable libraries to use in servers and browsers, we can't yet expect anything > >to be released. > > Is that the FF developers' reason for holding back? Just wondering... why not > release it with TLS-PSK/SRP anyway (particularly with 3.0 being in the beta > stage, it'd be the perfect time to test new features), tested against existing > implementations, then at least it's ready for when server support appears. At > the moment we seem to be in a catch-22, servers don't support it because > browsers don't, and browsers don't support it because servers don't. I don't have any idea why or why not, but all they can release now is source code with #ifdef openssl >= 0.9.9 ... do PSK stuff ... #endif, with binaries (dynamically) linked against the default OpenSSL on the oldest supported release of each platform... For RedHat 4.x systems, for example, that means that binary packages use 0.9.7... Distributions that build their own Firefox from source may at some point have PSK (once they ship OpenSSL 0.9.9). I don't think we will see this available in many users' hands for 2-3 years after the code is written (fielding new systems to the masses takes a long time...). -- Victor Duchovni, IT Security, Morgan Stanley [ASCII ribbon campaign against HTML mail]. NOTICE: If received in error, please destroy and notify sender. Sender does not waive confidentiality or privilege, and use is prohibited. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: TLS-SRP & TLS-PSK support in browsers (Re: Dutch Transport Card Broken)
Victor Duchovni <[EMAIL PROTECTED]> writes: >While Firefox should ideally be developing and testing PSK now, without >stable libraries to use in servers and browsers, we can't yet expect anything >to be released. Is that the FF developers' reason for holding back? Just wondering... why not release it with TLS-PSK/SRP anyway (particularly with 3.0 being in the beta stage, it'd be the perfect time to test new features), tested against existing implementations, then at least it's ready for when server support appears. At the moment we seem to be in a catch-22, servers don't support it because browsers don't, and browsers don't support it because servers don't. Peter. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
"Steven M. Bellovin" <[EMAIL PROTECTED]> writes: >There's another issue: initial account setup. People will still need to rely >on certificate-checking for that. It's a real problem at some hotspots, >where Evil Twin attacks are easy and lots of casual users are signing up for >the first time. It really depends on the value of the account, for high-value ones I would hope it's done out-of-band (so you can't just sign up for online banking by going to a bank's purported web page and saying "Hi, I'm Bob, give me access to my account"), and for low-value stuff like Facebook I'm not sure how much effort your password is worth to an attacker when they can get a million others from the same site. I agree that it's still a problem, but switching to failsafe auth is a major attack surface reduction since now an attacker has to be there at the initial signup rather than at any arbitrary time of their choosing. It's turning an open channel into a time- and location-limited channel. Peter. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
On Thu, 07 Feb 2008 17:37:02 +1300 [EMAIL PROTECTED] (Peter Gutmann) wrote: > The real issues occur in two locations: > > 1. In the browser UI. > 2. In the server processing, which no longer gets the password via an > HTTP POST but as a side-effect of the TLS connect. > > (1) is a one-off cost for the browser developers, (2) is a bit more > complex to estimate because it's on a per-site basis, but in general > since the raw data (username+pw) is already present it's mostly a > case of redoing the data flow a bit, and not necessarily rebuilding > the whole system from scratch. To give one example, a healthcare > provider, they currently trigger an SQL query from an HTTP POST that > looks up the password with the username as key, and the change would > be to do the same thing at the TLS stage rather than the post-TLS > HTTP stage. There's another issue: initial account setup. People will still need to rely on certificate-checking for that. It's a real problem at some hotspots, where Evil Twin attacks are easy and lots of casual users are signing up for the first time. --Steve Bellovin, http://www.cs.columbia.edu/~smb - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: TLS-SRP & TLS-PSK support in browsers (Re: Dutch Transport Card Broken)
Frank Siebenlist <[EMAIL PROTECTED]> writes: >With the big browser war still going strong, wouldn't that provide fantastic >marketing opportunities for Firefox? There's always the problem of politics. You'd think that support for a free CA like CAcert would also provide fantastic marketing opportunities for a free browser like Firefox, but this seems to be stalled pretty much indefinitely because CAcert doesn't charge for certificates, and including it in Firefox would upset the commercial CAs that do (there's actually a lot more to it than this, see the interminable flamewars on this topic on blogs and whatnot for more information). >If Firefox would support these secure password protocols, and the banks would >openly recommend their customers to use Firefox because it's safer and >protects them better from phishing, that would be great publicity for >Firefox, draw more users, and force M$ to support it too in the long run... Here's a suggestion to list members: - If you know a Firefox developer, go to them and tell them that TLS-PSK and TLS-SRP support would be a fantastic selling point and would allow Firefox to trump IE in terms of resisting phishing, which might encourage banks to recommend it to users in place of IE. - If you know anyone with some clout at Microsoft, tell them that your organisation is thinking of mandating a switch to Firefox because IE doesn't support phish-resistant authentication like TLS-PSK/TLS-SRP, and since you have x million paying customers this won't look good for MS. - If you work for any banking regulators (for example the FFIEC), require failsafe authentication (in which the remote site doesn't get a copy of your credentials if the authentication fails) rather than the current two-factor auth (which has led to farcical "two-factor" mechanisms like SiteKey). Oh, and don't tell them I put you up to this :-). Peter. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
"James A. Donald" <[EMAIL PROTECTED]> writes: >However, seems to me that logging into the website using SRP is a non trivial >refactoring, and not just a matter of dropping in TLS-SRP as a simple >replacement of TLS-DSA-X509 I've discussed this with (so far) a small sample of assorted corporate TLS users to get at least a general idea of what'd be involved. At a very abstract level all they see is "username + password + TLS" -> "permitted/denied", the only change is that by moving the verification into TLS this process happens a bit earlier than when it's done in HTML (and obviously the failsafe nature means the other side never gets the password if the auth fails). At an implementation level it's also fairly simple, it's maybe 2-3 pages of code added to my SSL implementation, and I spoke to another SSL developer who gave similar figures. All you're doing is mixing a little extra keying material into the premaster secret, it's not a major piece of programming. The real issues occur in two locations: 1. In the browser UI. 2. In the server processing, which no longer gets the password via an HTTP POST but as a side-effect of the TLS connect. (1) is a one-off cost for the browser developers, (2) is a bit more complex to estimate because it's on a per-site basis, but in general since the raw data (username+pw) is already present it's mostly a case of redoing the data flow a bit, and not necessarily rebuilding the whole system from scratch. To give one example, a healthcare provider, they currently trigger an SQL query from an HTTP POST that looks up the password with the username as key, and the change would be to do the same thing at the TLS stage rather than the post-TLS HTTP stage. With the folks I've discussed this with their concern has been far more "We want this yesterday, why isn't it here yet" rather than "We can't integrate this with our existing back-ends". Peter. 
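[Editor's sketch: the "mixing a little extra keying material into the premaster secret" that Peter describes is the core of SRP-style authentication. The registration half can be shown in a few lines of stdlib Python: the server stores a verifier g^x mod N derived from the password, never the password itself. The group below is a deliberately toy Mersenne-prime group for illustration only; real deployments use the standardised TLS-SRP groups, and the helper name is made up.]

```python
import hashlib, os

# Toy group parameters -- NOT secure, chosen only so the sketch is
# self-contained. TLS-SRP (RFC 5054) specifies real groups to use.
N = 2**127 - 1          # a Mersenne prime
g = 3

def private_value(username, password, salt):
    """x = H(salt || H(username ":" password)), as in SRP-6a."""
    inner = hashlib.sha256((username + ":" + password).encode()).digest()
    return int.from_bytes(hashlib.sha256(salt + inner).digest(), "big")

# Registration: the server keeps only (salt, verifier).
salt = os.urandom(16)
x = private_value("alice", "correct horse", salt)
verifier = pow(g, x, N)

# During the handshake both sides mix password-derived values into the
# key exchange; a wrong password yields a different x, hence different
# keys, and the password itself never crosses the wire.
x_bad = private_value("alice", "wrong guess", salt)
print(pow(g, x_bad, N) == verifier)   # False: auth fails, nothing leaked
print(pow(g, x, N) == verifier)       # True
```

This is what makes the change "2-3 pages of code" on the TLS side: the verifier check replaces the plaintext password comparison, and the rest of the handshake machinery is unchanged.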
- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: TLS-SRP & TLS-PSK support in browsers (Re: Dutch Transport Card Broken)
On Wed, Feb 06, 2008 at 09:21:47AM -0800, Frank Siebenlist wrote: > With the big browser war still going strong, wouldn't that provide > fantastic marketing opportunities for Firefox? > > If Firefox would support these secure password protocols, and the banks > would openly recommend their customers to use Firefox because it's safer > and protects them better from phishing, that would be great publicity > for Firefox, draw more users, and force M$ to support it too in the long > run... It is a bit early. OpenSSL 0.9.9 is not yet released. I wish OpenSSL releases were more frequent, and each added fewer features, allowing features to be released as they mature; this would also reduce pressure to add features to stable releases (which occasionally break binary compatibility, and lead to vendors back-porting fixes rather than deploying the next patch level of the stable release). While Firefox should ideally be developing and testing PSK now, without stable libraries to use in servers and browsers, we can't yet expect anything to be released. -- Victor Duchovni, IT Security, Morgan Stanley [ASCII ribbon campaign against HTML mail]. NOTICE: If received in error, please destroy and notify sender. Sender does not waive confidentiality or privilege, and use is prohibited. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
a recent reference Research unmasks anonymity networks http://www.techworld.com/security/news/index.cfm?newsID=11295 Research unmasks anonymity networks http://www.networkworld.com/news/2008/020108-research-unmasks-anonymity.html Research unmasks anonymity networks http://www.arnnet.com.au/index.php/id;1270745171;fp;4194304;fpid;1 Paper Outlines Methods for Beating Anonymity Technology http://www.darkreading.com/document.asp?doc_id=144606 - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: TLS-SRP & TLS-PSK support in browsers (Re: Dutch Transport Card Broken)
Peter Gutmann wrote: Frank Siebenlist <[EMAIL PROTECTED]> writes: That's actually a sad observation. I keep telling my colleagues that this technology is coming "any day now" to a browser near you - didn't realize that there was no interest with the browser companies to add support for this... I know of a number of organisations (mostly governmental, but also some financial) in various countries who are really, really keen to get support for (as James Donald pointed out) cryptographically secured relationships (not requiring PKI would be a big feature) into browsers, but no-one knows who to beat over the head about it. The last group I talked to (banks) were hoping to use commercial pressure to get MS to add support for it in IE7^H^H8 at which point Firefox would be forced to follow, but it's a slow process. With the big browser war still going strong, wouldn't that provide fantastic marketing opportunities for Firefox? If Firefox would support these secure password protocols, and the banks would openly recommend their customers to use Firefox because it's safer and protects them better from phishing, that would be great publicity for Firefox, draw more users, and force M$ to support it too in the long run... Why do the browser companies not care? What is the adoption issue? Still the dark cloud of patents looming over it? Not enough understanding about the benefits? (marketing) Economic reasons that we wouldn't buy any more server certs? I think it's a combination of two factors: 1. Everyone knows that passwords are insecure, so it's not worth trying to do anything with them. (My counter-argument to this is that passwords are only insecure because protocol designers have chosen to make them insecure, see my previous post about the quaint 1970s-vintage hand-over-the-password model used by SSH and SSL/TLS). 
...these protocols would even make the use of one-time-passwords more secure (no MITM exposure - phishing), and make them securely usable without any server-certs... 2. If you add failsafe authentication to browsers, CAs become redundant. (My counter-argument to this is to ask whether browser security exists in order to provide a business model for CAs or to protect users. Currently it seems to be the former, with EV certs being a prime example). I was afraid that this cynical argument would play a role... so the server-cert racketeering scheme has just been made more profitable through more expensive but equally "trustworthy" EV-certs, which makes it more difficult to introduce alternatives that don't fit into this "business model"... On the other hand, I'm sure that the marketeers will be able to sell server-certs together with those secure password protocols to the naive customers as it will be very difficult to explain why you do/don't need the certs and why it would be more/less secure... -Frank. -- Frank Siebenlist [EMAIL PROTECTED] The Globus Alliance - Argonne National Laboratory - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
On Tue, Feb 05, 2008 at 08:17:32AM +1000, James A. Donald wrote: > Nicolas Williams wrote: > > Sounds a bit like SCTP, with crypto thrown in. > > SCTP is what we should have done http over, though of > course SCTP did not exist back then. Perhaps, like > quite a few other standards, it still does not quite > exist. Proposing something new won't help make that available sooner than SCTP if that something new, like SCTP, must be implemented in kernel-land. > > I thought it was the latency caused by unnecessary > > round-trips and expensive key exchange crypto that > > motivated your proposal. The cost of session crypto > > is probably not as noticeable as that of the latency > > of key exchange and authentication. > > The big problem is that between the time one logs on to > one's bank, and the time one logs off, one is apt to > have done lots and lots of cryptographic key exchanges. > One key exchange per customer session is a really small > cost, but we have a storm of them. This is what session resumption is all about, and now that we have a way to do it without server-side state (RFC4507) there should be no more complaints. If the latency of multiple key exchanges is the issue then we should push for deployment of RFC4507 before we go push for a brand new transport protocol. > Whenever the web page shows what is particular to the > individual rather than universal, it uses a session > cookie, visible to server side web page code. > Encryption, the bundle of shared secrets that enable > encrypted communications, should be visible at that > level, should be a session cookie characteristic rather > than a low level transport characteristic, should have > the durability and scope of a session cookie, instead of > the durability and scope of a transaction. If I understand what you mean then the ticket in RFC4507 is just that. Nico -- - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
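[Editor's sketch: the RFC 4507 mechanism Nico refers to has the server offload its session state into a ticket the client stores, so resumption needs no server-side cache. A real ticket is encrypted as well as integrity-protected; this minimal stdlib sketch only MACs the state, and every name in it is illustrative rather than taken from any TLS implementation.]

```python
import hashlib, hmac, json

# Server-local key protecting tickets; in practice it would be rotated.
TICKET_KEY = b"server-local ticket-protection key"

def issue_ticket(session_state):
    """Seal session state into an opaque blob the client stores."""
    blob = json.dumps(session_state, sort_keys=True).encode()
    mac = hmac.new(TICKET_KEY, blob, hashlib.sha256).digest()
    return blob + b"." + mac.hex().encode()

def resume(ticket):
    """Restore state from a ticket, or reject it and fall back to a
    full handshake -- the server stored nothing in between."""
    blob, _, mac_hex = ticket.rpartition(b".")
    expected = hmac.new(TICKET_KEY, blob, hashlib.sha256).digest()
    if not hmac.compare_digest(bytes.fromhex(mac_hex.decode()), expected):
        return None
    return json.loads(blob)

state = {"cipher": "AES-128", "master_secret_id": 42}
ticket = issue_ticket(state)            # client keeps this, much like a cookie
print(resume(ticket) == state)          # True: resumed without server storage
print(resume(b"x" + ticket[1:]))        # tampered blob -> None
```

This is also why the ticket behaves with "the durability and scope of a session cookie" in James's terms: its lifetime is a policy decision of the server that minted it, not a property of any one transport connection.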
Re: Dutch Transport Card Broken
Nicolas Williams wrote: > Sounds a bit like SCTP, with crypto thrown in. SCTP is what we should have done http over, though of course SCTP did not exist back then. Perhaps, like quite a few other standards, it still does not quite exist. > I thought it was the latency caused by unnecessary > round-trips and expensive key exchange crypto that > motivated your proposal. The cost of session crypto > is probably not as noticeable as that of the latency > of key exchange and authentication. The big problem is that between the time one logs on to one's bank, and the time one logs off, one is apt to have done lots and lots of cryptographic key exchanges. One key exchange per customer session is a really small cost, but we have a storm of them. Whenever the web page shows what is particular to the individual rather than universal, it uses a session cookie, visible to server side web page code. Encryption, the bundle of shared secrets that enable encrypted communications, should be visible at that level, should be a session cookie characteristic rather than a low level transport characteristic, should have the durability and scope of a session cookie, instead of the durability and scope of a transaction. Because we use encryption merely at a level where it is logically transient, because it protects transactions rather than relationships, the connections are too costly, and fail to provide the information about relationships that are needed to protect the user. If we had implemented http over something like SCTP, then an SCTP-like connection value should have been a cookie. One should have been able to look at the SCTP-like connection value in the server side page code, and be pretty sure that if the person is the same, the connection value will be unchanged, so that one could then associate additional state with the connection value - encryption being some more state. Encryption parameters have more in common with session cookies than with transactions. 
They should be about relationships, not data transport. If encryption setups were made and discarded only as often as session cookies, not so costly. It is making them and discarding them as often as transactions that hurts. Also, the fact that they are so frequently discarded means that scope information is unavailable to secure relationships, means we cannot provide useful information to the end user about who he is really talking to, because the encryption does not know about relationships, even though encryption should be about relationships. With encryption merely at the transactional level, the browser can know the true name of website you are looking at, that being merely a page property, but cannot know what relationship you think you are participating in. To provide security, client side code, browser chrome, needs to know not the true name of the web site, but if you are at a web site where you have user name or durable user ID. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: TLS-SRP & TLS-PSK support in browsers (Re: Dutch Transport Card Broken)
Frank Siebenlist <[EMAIL PROTECTED]> writes: >That's actually a sad observation. > >I keep telling my colleagues that this technology is coming "any day now" to >a browser near you - didn't realize that there was no interest with the >browser companies to add support for this... I know of a number of organisations (mostly governmental, but also some financial) in various countries who are really, really keen to get support for (as James Donald pointed out) cryptographically secured relationships (not requiring PKI would be a big feature) into browsers, but no-one knows who to beat over the head about it. The last group I talked to (banks) were hoping to use commercial pressure to get MS to add support for it in IE7^H^H8 at which point Firefox would be forced to follow, but it's a slow process. >Why do the browser companies not care? >What is the adoption issue? >Still the dark cloud of patents looming over it? >Not enough understanding about the benefits? (marketing) >Economic reasons that we wouldn't buy any more server certs? I think it's a combination of two factors: 1. Everyone knows that passwords are insecure, so it's not worth trying to do anything with them. (My counter-argument to this is that passwords are only insecure because protocol designers have chosen to make them insecure, see my previous post about the quaint 1970s-vintage hand-over-the-password model used by SSH and SSL/TLS). 2. If you add failsafe authentication to browsers, CAs become redundant. (My counter-argument to this is to ask whether browser security exists in order to provide a business model for CAs or to protect users. Currently it seems to be the former, with EV certs being a prime example). There are probably other contributory reasons as well. Peter. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: TLS-SRP & TLS-PSK support in browsers (Re: Dutch Transport Card Broken)
On Feb 1, 2008, at 9:34 PM, Ian G wrote: * Browser vendors don't employ security people as we know them on this mailgroup [...] But they are completely at sea when it comes to systemic security failings or designing new systems. I don't know about other browsers, but Mozilla's CSO-type is Window Snyder who I'd easily describe as a pretty top-notch "security person". -- Ivan Krstić <[EMAIL PROTECTED]> | http://radian.org - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
"Steven M. Bellovin" <[EMAIL PROTECTED]> writes: >On Fri, 01 Feb 2008 13:29:52 +1300 >[EMAIL PROTECTED] (Peter Gutmann) wrote: >> Actually it doesn't even require X.509 certs. TLS-SRP and TLS-PSK >> provide mutual authentication of client and server without any use of >> X.509. The only problem has been getting vendors to support it, >> several smaller implementations support it, it's in the (still >> unreleased) OpenSSL 0.9.9, and the browser vendors don't seem to be >> interested at all, which is a pity because the mutual auth (the >> server has to prove possession of the shared secret before the client >> can connect) would significantly raise the bar for phishing attacks. >> >> (Anyone have any clout with Firefox or MS? Without significant >> browser support it's hard to get any traction, but the browser >> vendors are too busy chasing phantoms like EV certs). > >The big issue is prompting the user for a password in a way that no one will >confuse with a web site doing so. HCI people have been studying this for quite some time, and there's been a lot of good work done in this area. Because of the amount of information, I'll answer indirectly via a link (warning, it's a partial book draft and is currently ~140 pages long): http://www.cs.auckland.ac.nz/~pgut001/pubs/usability.pdf Even without this detailed analysis, one of the Mac browsers (Safari?) already has a quite distinctive password prompt that rolls down out of the menu bar at the top. Sure, you can spoof that if you own the browser, but if malware owns your browser then you're toast anyway. >It might have been the right thing, once upon a time, but the horse may be >too far out of the barn by now to make it worthwhile closing the barn door. That's the response I got from a browser developer when I talked about this about a year ago, "Sufficiently sophisticated malware can spoof any piece of browser UI, so let's just give up and admit that the phishers have won". 
At the moment, after 15-odd years of work, the state of the art for both major secure-channel protocols is to connect to anything listening on port 22 or 443 and then hand over the user's password in plaintext form (although inside a secure tunnel, as if that made any difference) [0]. This is only just barely better than the 1970s-era telnet in that the authenticator is still handed over in plaintext, but at least you can't capture it with a packet sniffer. Moving to a challenge-response mechanism (which PSK and SRP aren't really, it's more a bit-commitment since there's no real challenge or response process [1]) would at least move the security into the late 1980s. As a side-note, I was talking to a security person from a large (multi- national) bank recently and they mentioned that they were slowing down on the push to move to two-factor auth (real two-factor auth with SecurIDs and the like, not the gimmicks that US banks are using :-) because the problem isn't authenticating the user, it's authenticating the server and/or the transaction, and most two-factor auth tokens can't do that. As a result they're not going to commit to sinking much more money into something that doesn't actually solve the problem. So mutual client/server auth is something that's of concern to more than just some geeks on security mailing lists, it's coming onto the radar of large financial institutions as well. Peter. [0] By "443" I mean HTTP over SSL/TLS, obviously. [1] Actually this is neither challenge-response nor bit-commitment so in the absence of anything better I'll propose "failsafe authentication" because the other side doesn't get your authenticator unless they can prove they already possess it. In other words if the authentication process fails, it fails safe. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
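[Editor's sketch: footnote [1] above defines "failsafe authentication" as the other side never receiving your authenticator unless it can prove it already possesses it. The server-goes-first property can be shown in a toy stdlib protocol; this is an illustration of the property only, not TLS-SRP or TLS-PSK, and all names in it are made up.]

```python
import hashlib, hmac, os

def proof(secret, label, nonce):
    """Keyed proof of knowledge over a fresh nonce."""
    return hmac.new(secret, label + nonce, hashlib.sha256).digest()

def client_login(client_secret, server_secret):
    """Client aborts unless the server proves knowledge of the shared
    secret first, so a phishing site never sees anything password-derived."""
    client_nonce = os.urandom(16)
    # Server goes first: it must answer the client's nonce correctly
    # before the client releases any proof of its own.
    server_proof = proof(server_secret, b"server", client_nonce)
    if not hmac.compare_digest(server_proof,
                               proof(client_secret, b"server", client_nonce)):
        return "aborted: nothing password-derived was sent"
    return "authenticated"

secret = os.urandom(32)
print(client_login(secret, secret))           # genuine server: authenticated
print(client_login(secret, os.urandom(32)))   # impostor: auth fails safe
```

Contrast this with the plaintext-over-a-tunnel model criticised above, where the password is handed over before the client learns anything about who is on the other end.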
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
StealthMonger wrote: > They can't be as "anonymous as cash" if the party > being dealt with can be identified. And the party can > be identified if the transaction is "online, > real-time". Even if other clues are erased, there's > still traffic analysis in this case. > > What the offline paradigm has going for it is the > possibility of true, untraceable anonymity through the > use of anonymizing remailers and related technologies. A ripple payment protocol could in practice much resemble an onion protocol. Someone trying to trace a ripple payment might find that the first level is some highly cooperative bank, and the next level is someone in the Caribbean who will cooperate only if offered a suitable inducement, and upon a suitable inducement being applied, reveals that the next level is ... I suspect, however, that ripple is apt to be a violation of the money laundering laws, with ripple intermediaries being defined as straw men or smurfs. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
On Sun, Feb 03, 2008 at 09:24:48PM +1000, James A. Donald wrote: > Nicolas Williams wrote: > >What, specifically, are you proposing? > > I am still writing it up. > > > Running the web over UDP? > > In a sense. > > That should have been done from the beginning, even before security > became a problem. TCP is a poor fit to a transactional protocol, as the > gyrations with "Keep-alive" and its successors illustrate. In the beginning most pages were simple enough that to speak of "transactional protocol" is almost an exaggeration. Web technologies grew organically. Solutions to the various resulting problems will, I bet, also grow organically. A complete revamping is probably not in the cards. But if one should be, then it should not surprise you that I'm all in favor of piercing abstraction layers. User authentication should happen at the application layer, and session crypto should happen at the transport layer, with everything cryptographically bound up. In any case we should re-use what we know works (e.g., ESP/AH for transport session crypto, IKEv2/TLS/DTLS for key exchange, ...). > In rough summary outline, what I propose is to introduce a distinction > between connections and streams, that a single long lasting connection > contains many transient streams. This is equivalent to TCP in the case > that a single connection always contains exactly two streams, one in > each direction, and the two streams are created when the connection is > created and shut down when the connection is shut down, but the main > objective is to support usages that are not equivalent to TCP. This is > pretty much the same thing as T/TCP, except that a "connection" can have > a large shared secret associated with it to encrypt the streams. For an > unencrypted connection, it can be spoof flooded the same way as T/TCP > can be spoof flooded, Sounds a bit like SCTP, with crypto thrown in. 
> but the main design objective is to make > encryption efficient enough that one always encrypts everything. I thought it was the latency caused by unnecessary round-trips and expensive key exchange crypto that motivated your proposal. The cost of session crypto is probably not as noticeable as that of the latency of key exchange and authentication. Nico -- - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
>They can't be as "anonymous as cash" if the party being dealt with >can be identified. And the party can be identified if the >transaction is "online, real-time". Even if other clues are erased, >there's still traffic analysis in this case. If I show up at a store and pay cash for something every week, they can still do traffic analysis on me ("oh him, he's a regular customer") unless I go out of my way to obscure my routine like asking other people to buy stuff for me. It's not clear to me what the object of this argument is. Yes, the harder you work, the more difficult you can make it for other people to tie your transactions to you. This shouldn't be news to anyone. R's, John - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
StealthMonger wrote: They can't be as "anonymous as cash" if the party being dealt with can be identified. And the party can be identified if the transaction is "online, real-time". Even if other clues are erased, there's still traffic analysis in this case. What the offline paradigm has going for it is the possibility of true, untraceable anonymity through the use of anonymizing remailers and related technologies. most people who heard the statement understood that. i think that possibly 2nd level detail was that they didn't want PII easily associated by casual merchant. Initial response was to remove name from payment cards & magstripes. This also precluded merchants from requesting other forms of identification to see if the names matched the name on the payment card. The implication being that the payment infrastructure would have to come up with other mechanisms to improve the infrastructure integrity. The offline payment paradigms ... while touting "true" anonymity were actually primarily justified based on other factors. We had been asked to design and cost the data processing supporting US deployments of some of the "offline" products (that were being used in Europe). Along the way, we did some business process and revenue analysis and realized that the primary motivation behind these system deployments was the float. About the same time as the EU statement about the privacy of electronic retail payments ... there was also a statement by the EU (and some of the country central banks) that the offline products would be allowed to keep the float for a short grace period to help in the funding of the infrastructure deployment ... but after the grace period ... the operators would have to start paying interest on the balance held in the "offline" instruments (eliminating float from the equation). After that, much of the interest in the offline deployments drifted away. 
In that time frame we had also done the design, implementation and deployment of a payment transaction infrastructure supporting target marketing ... recent reference http://www.garlic.com/~lynn/2008c.html#27 Diversity

Support was for a small pilot of 60 million accounts and 1.5 million transactions/day ... but capable of scaling up to 20-30 times that amount. There was significant attention paid to privacy issues and it was subject to quarterly auditing by a dozen or so privacy organizations. There had to be a large amount of sensitive treatment of the information along the lines of what HIPAA specifies for health information, aka:

anonymized: Previously identifiable data that have been deidentified and for which a code or other link no longer exists. An investigator would not be able to link anonymized information back to a specific individual. [HIPAA] (see also anonymous, coded, directly identifiable, indirectly identifiable)

as part of co-authoring the x9.99 financial privacy standard, one of the things we created was a merged privacy glossary and taxonomy ... including GLBA, HIPAA, and EU-DPD references. some notes: http://www.garlic.com/~lynn/index.html#glosnote

in our work on the x9.59 financial transaction standard http://www.garlic.com/~lynn/x959.html#x959 we made the statement that it was privacy agnostic ... since the transactions were tied to accounts ... but whether or not the accounts were tied to individuals was outside the x9.59 standard http://www.garlic.com/~lynn/subpubkey.html#x959

As a total aside ... as part of the Digicash liquidation, we were brought in to evaluate the patent portfolio.

- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: TLS-SRP & TLS-PSK support in browsers (Re: Dutch Transport Card Broken)
At 09:34 PM 2/1/2008 +0100, Ian G wrote:

> * Browser vendors don't employ security people as we know them on this mailgroup, they employ cryptoplumbers. Completely different layer. These people are mostly good (and often very good) at fixing security bugs. We thank them for that! But they are completely at sea when it comes to systemic security failings or designing new systems.

An excellent observation Ian!! I too have run into this mindset at enterprises with inhouse security teams (mostly in Silicon Valley). They focus on the nuts and bolts like producing/using cryptographic libraries, fixing security bugs in code or configuring network appliances to stop intrusions. But it is really hard to find any of them with decent experience or knowledge at the overall software/hardware/people system design level. They are often very smart and educated engineers. I find that there's this "mindless" focus on using groups of "security" standards, e.g. PKI / LDAP / SSL type combinations, etc. The DoD contractor firms seem to be a little bit better at recognizing the system-level aspects of security, although they too are often blinded by the emphasis on "COTS" security products.

- Alex

-- Alex Alten [EMAIL PROTECTED]

- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
Anne & Lynn Wheeler <[EMAIL PROTECTED]> write:

> one of my favorite exchanges from the mid-90s was somebody claiming that adding digital certificates to the electronic payment transaction infrastructure would bring it into the modern age. my response was that it actually would regress the infrastructure at least a couple decades to the time when online, real-time transactions weren't being done. The online, real-time transaction provides much higher quality and useful information than a stale, static digital certificate (with an offline paradigm from before modern communication). Having an available repository about the party being dealt with ... including things like timely, aggregated information (recent transactions) is significantly more valuable than the stale, static digital certificate environment (the only thing that it has going for it, is it is better than nothing in the oldtime offline environment).
> [...]
> EU had also made a statement in the mid-90s that electronic retail payments should be as anonymous as cash.

They can't be as "anonymous as cash" if the party being dealt with can be identified. And the party can be identified if the transaction is "online, real-time". Even if other clues are erased, there's still traffic analysis in this case.

What the offline paradigm has going for it is the possibility of true, untraceable anonymity through the use of anonymizing remailers and related technologies.

-- StealthMonger <[EMAIL PROTECTED]>

-- stealthmail: Scripts to hide whether you're doing email, or when, or with whom. http://stealthsuite.afflictions.org

- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
On Thu, Jan 31, 2008 at 11:12:45PM -0500, Victor Duchovni wrote:
> On Fri, Feb 01, 2008 at 01:15:09PM +1300, Peter Gutmann wrote:
> > If anyone's interested, I did an analysis of this sort of thing in an unpublished draft "Performance Characteristics of Application-level Security Protocols", http://www.cs.auckland.ac.nz/~pgut001/pubs/app_sec.pdf. It compares (among other things) the cost in RTT of several variations of SSL and SSH. It's not the TCP RTTs that hurt, it's all the handshaking that takes place during the crypto connect. SSH is particularly bad in this regard.
>
> Thanks, an excellent reference! Section 6.2 is most enlightening, we were already considering adopting HPN fixes in the internal OpenSSH deployment, this provides solid material to motivate the work...

To be fair, the "handbrake" in SFTP isn't inherent -- the clients and servers should be using async I/O and support interleaving the transfers of many files concurrently, which should allow the peers to exchange data as fast as it can be read from disk. The same is true of NFS, and keep in mind that SFTP is more of a remote filesystem protocol than a file transfer protocol. But nobody writes archivers that work asynchronously (or which are threaded, since, e.g., close(2) has no async equivalent, and is required to be synchronous in the NFS case). And nobody writes SFTP clients and servers that work asynchronously. But we could, and we should.

And the handbrake in the SSHv2 connection protocol has its rationale as well (namely to allow interactive sessions to be responsive). As described in Peter's paper, it can effectively be turned off. It's most useful when mixing interactive sessions with X11 display forwarding (and port forwarding that doesn't involve bulk data transfers). It's most useless when doing bulk transfers. So use separate connections for bulk transfers.

Nico --

- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
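[Editor's note: the interleaving Nico describes can be sketched as a toy event-loop model. Everything here is illustrative, not an SFTP implementation: `transfer` is a hypothetical coroutine, and `asyncio.sleep(0)` stands in for an async disk read or network write. The point is only that many files can be in flight at once instead of one lock-step transfer at a time.]

```python
import asyncio

async def transfer(name, chunks, log):
    # Move one file chunk by chunk, yielding to the event loop between
    # chunks so other transfers can interleave their own I/O.
    for i in range(chunks):
        await asyncio.sleep(0)   # stand-in for an async read/write
        log.append((name, i))

async def main():
    log = []
    # Two "files" in flight concurrently: neither waits for the other
    # to finish before moving its next chunk.
    await asyncio.gather(
        transfer("a.iso", 3, log),
        transfer("b.iso", 3, log),
    )
    return log

log = asyncio.run(main())
print(log)  # chunks of a.iso and b.iso interleaved, not sequential
```

With a synchronous client the log would show all of a.iso before any of b.iso; here the first chunk of b.iso lands before the last chunk of a.iso, which is the property an async SFTP client would exploit.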
Re: TLS-SRP & TLS-PSK support in browsers (Re: Dutch Transport Card Broken)
Frank Siebenlist wrote:

> Why do the browser companies not care?

I spent a few years trying to interest (at least) one browser vendor in looking at new security problems (phishing) and using the knowledge that we had to solve this (opportunistic cryptography). No luck whatsoever. My view of why it is impractical / impossible to interest the browser vendors in new ideas and new security might be summed up as this:

* Browser vendors operate a closed security shop. I think this is because of a combination of things. Mostly, all security shops are closed, and there aren't any good examples of open security shops (at least that I can think of). We see some outreach in the last few years (blogs or lists by some) but they are very ... protected, the moat is still there.

* Browser vendors are influenced heavily by companies, which have strong agendas. Security programmers at the open browsers are often employed by big companies who want their security in. They are not interested in user security. Security programmers need jobs, they don't do this stuff for fun. So it is not as if you can blame them.

* Browser vendors don't employ security people as we know them on this mailgroup, they employ cryptoplumbers. Completely different layer. These people are mostly good (and often very good) at fixing security bugs. We thank them for that! But they are completely at sea when it comes to systemic security failings or designing new systems.

* Which also means it is rather difficult to have a conversation with them. For example, programmers don't know what governance is, so they don't know how to deal with PKI (which is governance with some certificate sugar), and they can't readily map a multi-party failure. OTOH, they know what code is, so if you code it up you can have a conversation. But if your conversation needs non-code elements ... glug glug...

* Browser vendors work to a limited subset of the old PKI book. Unfortunately, the book itself isn't written, with consequent problems.
So certain myths (like "all CAs must be the same") have arisen which are out of sync with the original PKI thinking ... and out of sync with reality ... but there is no easy way to deal with this because of the previous points.

* Browser vendors may be on the hook for phishing. When you start to talk in terms like that, legal considerations make people go gooey and vague. Nobody in a browser vendor can have that conversation.

Which is all to say ... it's not the people! It's the assumptions and history and finance and all the other structural issues. That won't change until they are ready to change, and there are only limited things that outsiders can do.

Just a personal opinion.

iang

- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
On Fri, Feb 01, 2008 at 07:58:16PM +, Steven M. Bellovin wrote: > On Fri, 01 Feb 2008 13:29:52 +1300 > [EMAIL PROTECTED] (Peter Gutmann) wrote: > > (Anyone have any clout with Firefox or MS? Without significant > > browser support it's hard to get any traction, but the browser > > vendors are too busy chasing phantoms like EV certs). > > > The big issue is prompting the user for a password in a way that no one > will confuse with a web site doing so. Given all the effort that's > been put into making Javascript more and more powerful, and given > things like picture-in-picture attacks, I'm not optimistic. It might > have been the right thing, once upon a time, but the horse may be too > far out of the barn by now to make it worthwhile closing the barn door. And on top of that web site designers don't want browser dialogs for HTTP/TLS authentication. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
On Fri, 01 Feb 2008 13:29:52 +1300 [EMAIL PROTECTED] (Peter Gutmann) wrote:

> Actually it doesn't even require X.509 certs. TLS-SRP and TLS-PSK provide mutual authentication of client and server without any use of X.509. The only problem has been getting vendors to support it, several smaller implementations support it, it's in the (still unreleased) OpenSSL 0.9.9, and the browser vendors don't seem to be interested at all, which is a pity because the mutual auth (the server has to prove possession of the shared secret before the client can connect) would significantly raise the bar for phishing attacks.
>
> (Anyone have any clout with Firefox or MS? Without significant browser support it's hard to get any traction, but the browser vendors are too busy chasing phantoms like EV certs).

The big issue is prompting the user for a password in a way that no one will confuse with a web site doing so. Given all the effort that's been put into making Javascript more and more powerful, and given things like picture-in-picture attacks, I'm not optimistic. It might have been the right thing, once upon a time, but the horse may be too far out of the barn by now to make it worthwhile closing the barn door.

--Steve Bellovin, http://www.cs.columbia.edu/~smb

- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
TLS-SRP & TLS-PSK support in browsers (Re: Dutch Transport Card Broken)
Peter Gutmann wrote:

> "Perry E. Metzger" <[EMAIL PROTECTED]> writes:
> >> SSL involves digital certificates.
> > Not really, James Donald/George W. Bush. It involves public keys, and it provides a channel by which X.509 certificates can be exchanged,
>
> Actually it doesn't even require X.509 certs. TLS-SRP and TLS-PSK provide mutual authentication of client and server without any use of X.509. The only problem has been getting vendors to support it, several smaller implementations support it, it's in the (still unreleased) OpenSSL 0.9.9, and the browser vendors don't seem to be interested at all, which is a pity because the mutual auth (the server has to prove possession of the shared secret before the client can connect) would significantly raise the bar for phishing attacks.
>
> (Anyone have any clout with Firefox or MS? Without significant browser support it's hard to get any traction, but the browser vendors are too busy chasing phantoms like EV certs).

That's actually a sad observation. I keep telling my colleagues that this technology is coming "any day now" to a browser near you - didn't realize that there was no interest from the browser companies in adding support for this...

Why do the browser companies not care? What is the adoption issue? Still the dark cloud of patents looming over it? Not enough understanding about the benefits? (marketing) Economic reasons that we wouldn't buy any more server certs?

-Frank.

-- Frank Siebenlist [EMAIL PROTECTED] The Globus Alliance - Argonne National Laboratory
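[Editor's note: the mutual-authentication property being discussed comes from the SRP exchange underneath TLS-SRP. A toy sketch of SRP-6a follows, with deliberately insecure illustrative group parameters (real TLS-SRP uses the large safe-prime groups of RFC 5054, and a proper key-derivation step). The point it demonstrates: both sides derive the same key only if the server holds the password verifier and the client knows the password, so the server proves possession before the client sends anything sensitive.]

```python
import hashlib
import secrets

# Toy group parameters for illustration only -- NOT safe for real use.
N = 2**127 - 1   # a Mersenne prime standing in for an RFC 5054 group
g = 2

def H(*parts):
    # Toy hash-to-integer; real SRP hashes specific byte encodings.
    h = hashlib.sha256()
    for p in parts:
        h.update(str(p).encode())
    return int.from_bytes(h.digest(), "big") % N

# Enrollment: the server stores only salt and verifier v = g^x,
# never the password itself.
password, salt = "correct horse", secrets.randbits(64)
x = H(salt, password)
v = pow(g, x, N)

k = H(N, g)                       # SRP-6a multiplier parameter

# Per-connection exchange.
a = secrets.randbits(64)          # client ephemeral secret
A = pow(g, a, N)                  # client -> server
b = secrets.randbits(64)          # server ephemeral secret
B = (k * v + pow(g, b, N)) % N    # server -> client
u = H(A, B)                       # scrambling parameter

# Client needs the password (via x); server needs the verifier v.
S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
S_server = pow(A * pow(v, u, N) % N, b, N)

assert S_client == S_server       # both sides hold g^(ab + bux) mod N
```

A phisher who doesn't know the verifier cannot produce a B that yields the client's key, which is why the server effectively proves possession of the shared secret before the session proceeds.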
Re: Dutch Transport Card Broken
On Fri, Feb 01, 2008 at 06:24:25PM +1000, James A. Donald wrote:

> You are asking for a layered design that works better than the existing layered design. My claim is that you get an additional round trip for each layer - which your examples have just demonstrated.
>
> SSL has to be on top of a reliable transport layer, hence has to have an extra round trip. I was not proposing something better *for* SSL, I was proposing something better *instead* *of* SSL. If one takes SSL as a given, then indeed, *three* round trips are needed before the client can send any actual data - which is precisely my objection to SSL.

What, specifically, are you proposing? Running the web over UDP? That's the only alternative that I can see short of modifying TCP or IPsec. I doubt any of those three will take the web world by storm, but HTTP over DTLS over UDP would have to be the least unlikely, and even then, I strongly doubt it.

I think we'll just have to deal with those round-trips. As long as there are plenty of other, cheaper or more practical ways to improve web app performance, that's all we're likely to see pursued.

Nico --

- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
Ian G wrote:

> The PII equation is particularly daunting, echoing Lynn's early '90s experiences. I am told (but haven't really verified) that the certificate serial number is PII and therefore falls under the full weight of privacy law & regs ... this may sound ludicrous, but privacy and security are different fields with different logics. If that is true, the liability is far too high for something that should be private, but is already public by dint of its exposure in certificates. Privacy liabilities are sky-high in some places, and not only that, they are incalculable, unknowable, and vary with the person you are talking to. So a superficial conclusion would be "don't use client certificates because of the privacy issues" although the issues are somewhat more complex than "PII revealed in SSL key exchange." As I say, they'll plug on, as they need to prove that the cert is worth issuing. It's a data point, no more, and it doesn't exactly answer your spec above. But I'm having fun observing them trying to prove that client certs are worth any amount of effort.

the problem that digital certificates were supposed to address was first-time communication between strangers ... the electronic analogy of the letters of credit/introduction from sailing ship days. this harks back to the "offline" email days of the early 80s ... dial up the electronic post-office, exchange email, hang up, and now authenticate first-time email from a total stranger.

the design point assumptions are invalidated if the relying party has their own repository of information about the party being dealt with (and therefore can include that party's public key) and/or has online, timely electronic access to such information.

one of my favorite exchanges from the mid-90s was somebody claiming that adding digital certificates to the electronic payment transaction infrastructure would bring it into the modern age.
my response was that it actually would regress the infrastructure at least a couple decades to the time when online, real-time transactions weren't being done. The online, real-time transaction provides much higher quality and useful information than a stale, static digital certificate (with an offline paradigm from before modern communication). Having an available repository about the party being dealt with ... including things like timely, aggregated information (recent transactions) is significantly more valuable than the stale, static digital certificate environment (the only thing that it has going for it, is it is better than nothing in the oldtime offline environment).

misc. past posts referencing "certificate-less" public key operation http://www.garlic.com/~lynn/subpubkey.html#certless

for some real topic drift ... i've mentioned the x9.59 financial standard protocol that can use digital signatures for authentication w/o requiring digital certificates http://www.garlic.com/~lynn/x959.html#x959

part of the issue included that digital certificates (even relying-party-only digital certificates) can add a factor of one hundred times payload bloat to a typical payment transaction http://www.garlic.com/~lynn/subpubkey.html#bloat

however, we also got involved in co-authoring the x9.99 privacy standard ... as part of that we had to look at a number of things: HIPAA, GLBA ... as well as EU-DPD. as part of that we had also done a privacy merged taxonomy and glossary ... some notes http://www.garlic.com/~lynn/index.html#glosnote

EU had also made a statement in the mid-90s that electronic retail payments should be as anonymous as cash. The dominant use of SSL in the world today is electronic commerce between a consumer and a merchant. Passing a client certificate (with PII information) within an encrypted SSL channel to a merchant ... still exposes the information to the merchant ... also violating making purchases as anonymous as cash.
- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
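[Editor's note: the "factor of one hundred times payload bloat" claim is easy to sanity-check with back-of-envelope numbers. The byte counts below are the editor's illustrative assumptions, not figures from the post: a minimal authenticated payment message on the order of tens of bytes versus a multi-kilobyte certificate appended to every transaction.]

```python
# Illustrative sizes only: a compact payment transaction vs. a
# certificate riding along with every single transaction.
txn_bytes = 60            # assumed minimal payment message
cert_bytes = 6 * 1024     # assumed typical certificate size

bloat_factor = cert_bytes / txn_bytes
print(round(bloat_factor))   # roughly the "factor of one hundred" bloat
```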
Re: Dutch Transport Card Broken
Nicolas Williams wrote:

> I don't have one that exists today and is practical. But we can certainly imagine possible ways to improve this situation: move parts of TLS into TCP and/or IPsec. There are proposals that come close enough to this (see the last IETF SAAG meeting's proceedings, see the IETF BTNS WG) that it's not too farfetched, but for web stuff I just don't think they're remotely likely.

my view of ipsec was that it faced a significant barrier to entry since it required upgrading lots of installed kernels all over the infrastructure (aka tcp/ip protocol stacks have been integrated kernel implementations). both SSL and VPN offered implementations that didn't require upgrading existing deployed kernels (something that has gotten somewhat easier in the last decade plus).

about the same time as SSL, a friend that we had worked on & off with over a couple decades introduced what was to become called VPN in the gateway committee at the fall '94 IETF meeting in san jose. my view was this resulted in some amount of consternation with the ipsec forces as well as some of the router vendors. the ipsec forces were somewhat mollified by being able to refer to vpn as "lightweight" ipsec (while others then would refer to ipsec as "heavyweight" ipsec).

the initial proposal involved border routers providing authentication and (encryption) tunneling through the internet. some of the router vendors had processors that could handle the encryption load. however, there was opposition from the router vendors that didn't have products with processors that could handle the encryption load (or at least stalling until they had such a product).

in any case, the uptake of both SSL and VPN ... was the significantly easier and less complex deployment ... vis-a-vis ipsec.

- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
>> (as if anyone uses client certificates anyway)?
>
> Guess why so few people are using it ... If it were secure, more people would be able to use it.

People don't use it because the workload of getting signed up is vastly beyond their skillset, and the user experience of using the things is pretty bad too.

> And there are hundreds of internal systems I heard of that are using client certificates in reality every day.

There's always a few people using a technology. It's certainly a nonplayer out there. Probably more servers out there authing with Digest, honestly.

> Validated email addresses for spamming.

Spear-phishing perhaps, ...

> There are CAs on this planet that put things like social security numbers into certificates.

Who? Seriously, that's pretty significant, I'd like to know who does this.

> Where does the SSL specification say that certificates shouldn't contain sensitive information? I am missing that detail in the security considerations section of the RFC.

The word "public" in Public Key isn't exactly subtle.

> Do we have any more ideas how we can get this flaw fixed before it starts hurting too much?

Make it really easy to use some future version of SSL client certs, and quietly add the property you seek. Ease of use drives technology adoption; making the tech actually work is astonishingly secondary. Heh, you asked :)

> We have an issue here. And the issue isn't going to go away, until we deprecate SSL/TLS, or it gets solved.

To be clear, we'd *have* an issue, if any serious number of people used SSL client certs. I think you have a point that if SSL client certs become very popular going forward, then every website you go to will quietly grab your identity through their ad banners.

> * We fix SSL
> Does anyone have a solution for SSL/TLS available that we could propose on the TLS list? If not: Can anyone with enough protocol design experience please develop it?

What solution could there be?
You're actually going to SSL to the banner ad network, and you're going to give them your client cert.

> * We deprecate SSL for client certificate authentication.
> We write in the RFC that people MUST NOT use SSL for client authentication. (Perhaps we get away by pretending that client certificates accidentally slipped into the specification.)

People by and large do not use SSL client cert authentication. This is problematic, as there are some very nice cryptographic aspects of the system.

> * We switch from TLS to hmmm ... perhaps SSH, which has fixed the problem already. Hmm, there we would have to write all the glue RFCs like "HTTP over SSH" again ...

I used to code for SSH. SSL is an entire top-to-bottom stack, replete with a deep PKI infrastructure. SSH? Tunneling transport, barely even librarized.

> Try to send a DVD iso image (4GB) over a SSL or SSH encrypted link with bit errors every 1 bits with a client software like scp that cannot resume downloads. I gave up after 5 tries that all broke down in average after 1 GB. (In that case it was a hardware (bad cable) initiated denial of service attack ;-)

The problem here isn't checksums. SSH is notoriously buggy when packets are dropped. I think there are certain windows in which OpenSSH assumes it will get a response. If it doesn't, it just dies. So, outages of more than a few hundred milliseconds have a small percentage chance of causing the session to permanently stall. "Corrupted MAC on input" -- this is a decent sign of corruption at the app layer. Did you really try this with OpenSSL? I've had much better luck there.

> If the link layer gives you 1/256, and the TCP layer gives you 1/65536, and the SSL layer demands 0/16777216, then end up with 1/16777216 too much.

Actually, 256*65536 = 16777216 :) In actuality, you have both IP and TCP checksums. So you get 8 bits from link, 16 bits from IP, and 16 bits from TCP. A random corrupt packet has about 2^40 odds of getting through.
Of course, one real problem is that the checksum algorithms don't exactly distribute noise randomly, and noise is not random. Still, corruption doesn't start being a problem until you get some pretty serious amounts of transfer. (Interestingly, I've been looking at IPsec lately, not for encryption, but for better checksumming.)

> Best regards, Philipp Gühring

- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
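[Editor's note: the layered-checksum arithmetic in the exchange above can be checked directly, under the same simplifying assumption the post makes, namely that each check is independent and a random corruption passes a b-bit checksum with probability 2^-b.]

```python
# Bits of checksum protection per layer, as stated in the thread.
link_bits, ip_bits, tcp_bits = 8, 16, 16

# The arithmetic correction from the thread: 256 * 65536 is 2^24 ...
assert 256 * 65536 == 16777216 == 2 ** 24

# ... and once the IP checksum is counted as well, the independent
# escape probabilities multiply, giving about 1 in 2^40 odds that a
# randomly corrupted packet slips past all three layers.
escape_odds = 2 ** (link_bits + ip_bits + tcp_bits)
print(escape_odds)  # 1099511627776, i.e. 2**40
```

As the post notes, real corruption is not uniformly random and real checksums do not distribute errors uniformly, so 2^-40 is an idealized bound rather than a measured rate.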
Re: Dutch Transport Card Broken
Victor Duchovni wrote:

> Jumping in late, but the idea that *TCP* (and not TLS protocol design) adds round-trips to SSL warrants some evidence (it is very tempting to express this skepticism more bluntly). With unextended SMTP for example, the minimum RTT count is:
>
>   0.   SYN                                   SYN-ACK
>   1.   ACK                                   220
>   2.   HELO                                  250
>   3.   MAIL                                  250
>   4.   RCPT                                  250
>   ...  (n recipients)
>   4+n. DATA                                  354
>   5+n. stream of message content segments    250
>
> so it takes at least 6 RTTs to perform a delivery (of a short single-recipient message), but only 1 of the 6 RTTs is TCP "overhead". This is improved with PIPELINING:
>
>   0. SYN                                     SYN-ACK
>   1. ACK                                     220
>   2. EHLO                                    250 ... PIPELINING ...
>   3. MAIL, RCPT (n times), DATA              250, 250 (n times), 354
>   4. stream of message content segments      250
>
> Here the application protocol is pipelined, and 5+n RTTs becomes 4 RTTs. The solution is not replacing TCP, but reducing the number of lock-step interactions in the application protocol. If someone has a connection establishment protocol faster than the 3-way handshake that SSL could leverage instead of TCP, please explain the design.

You are asking for a layered design that works better than the existing layered design. My claim is that you get an additional round trip for each layer - which your examples have just demonstrated.

SSL has to be on top of a reliable transport layer, hence has to have an extra round trip. I was not proposing something better *for* SSL, I was proposing something better *instead* *of* SSL. If one takes SSL as a given, then indeed, *three* round trips are needed before the client can send any actual data - which is precisely my objection to SSL.

The TCP handshake adds a 1-RTT delay at the start of the connection. What 0-RTT algorithm will allow the server to delay creating expensive connections to clients until the client acks the server response, or discover the MSS before sending the first segment? With TCP, at least SYN floods require unspoofed client IPs. Most of the application protocols we wrap in TLS are not DNS.
Sure, if you can guarantee a single-packet response to a single-packet request, TCP is not the answer. Otherwise, claiming that SSL is less efficient over TCP smacks of arrogance.

- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
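[Editor's note: Victor's RTT accounting is simple enough to pin down in a throwaway sketch. The function name and its contract are illustrative only; the counts come straight from the exchange above: 5+n round trips for an unextended single-message delivery with n recipients, 4 with PIPELINING regardless of recipient count.]

```python
def smtp_delivery_rtts(n_recipients, pipelining=False):
    """Minimum round trips for one short SMTP delivery, counted as in
    the exchange above (connection setup, greeting, envelope, data)."""
    if pipelining:
        # SYN/SYN-ACK, ACK+220, EHLO/250, then MAIL + RCPTs + DATA
        # batched into one round trip, then the message content itself.
        return 4
    # One lock-step round trip per envelope command, so each extra
    # recipient costs a full RTT.
    return 5 + n_recipients

print(smtp_delivery_rtts(1))                   # 6, only 1 of which is TCP overhead
print(smtp_delivery_rtts(1, pipelining=True))  # 4
```

The gap widens with recipient count: at 10 recipients the unextended exchange needs 15 RTTs while the pipelined one still needs 4, which is Victor's point that the lock-step application protocol, not TCP, dominates.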
Re: Dutch Transport Card Broken
On Fri, Feb 01, 2008 at 01:15:09PM +1300, Peter Gutmann wrote:

> Victor Duchovni <[EMAIL PROTECTED]> writes:
> > Jumping in late, but the idea that *TCP* (and not TLS protocol design) adds round-trips to SSL warrants some evidence (it is very tempting to express this skepticism more bluntly).
>
> If anyone's interested, I did an analysis of this sort of thing in an unpublished draft "Performance Characteristics of Application-level Security Protocols", http://www.cs.auckland.ac.nz/~pgut001/pubs/app_sec.pdf. It compares (among other things) the cost in RTT of several variations of SSL and SSH. It's not the TCP RTTs that hurt, it's all the handshaking that takes place during the crypto connect. SSH is particularly bad in this regard.

Thanks, an excellent reference! Section 6.2 is most enlightening, we were already considering adopting the HPN fixes in the internal OpenSSH deployment, this provides solid material to motivate the work...

-- Victor Duchovni, IT Security, Morgan Stanley

NOTICE: If received in error, please destroy and notify sender. Sender does not waive confidentiality or privilege, and use is prohibited.

- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
Eric Rescorla wrote: (as if anyone uses client certificates anyway)? Guess why so few people are using it ... If it were secure, more people would be able to use it. No, if it were *convenient* people would use it. I know of absolutely zero evidence (nor have you presented any) that people choose not to use certs because of this kind of privacy issue--but I know of plenty that they find getting certs way too inconvenient. In a CA I have something to do with, I'm observing a site that just started experimenting with client certs (100 users, will reach 1000, maybe more). When we discovered that the certificate includes PII (personally identifying information) and the website stores additional PII, the service was directed to drop all additional PII, and some thought was put into the in-cert PII. Current view is that the service must engage the user in a contract to accept the storing of that in-cert PII, otherwise it must not store the info in the cert (which means no identity, no persistence, and no point to the client certs). Writing contracts and securing agreement of course is a barrier, a burden. If this were a general requirement, then this would be enough (imho) to not recommend client certs, because contracts need lawyers, they cost real money, they don't solve the problem, and not recommending them is likewise unacceptable. (Then, as you say, there are convenience issues.) This is an experiment to force client certs to be used, so they are plugging on. It's a CA so it is trying to prove that there is value in these things. So... there are two slight variations that could be employed. Firstly, all data placed in the cert could be declared public in advance, and then no contract is required to use it in a context that is compatible with public data. That is, the question of the contract is pushed to the CA/CPS. (You mentioned that the premise is that it is all public data...) 
Another variation is to switch to username + password, of course, in which case the username is freely given and expected to be stored (certs being more or less invisible to users, so we can presume no such). (definitely open to other ideas...)

The PII equation is particularly daunting, echoing Lynn's early '90s experiences. I am told (but haven't really verified) that the certificate serial number is PII and therefore falls under the full weight of privacy law & regs ... this may sound ludicrous, but privacy and security are different fields with different logics. If that is true, the liability is far too high for something that should be private, but is already public by dint of its exposure in certificates. Privacy liabilities are sky-high in some places, and not only that, they are incalculable, unknowable, and vary with the person you are talking to. So a superficial conclusion would be "don't use client certificates because of the privacy issues" although the issues are somewhat more complex than "PII revealed in SSL key exchange."

As I say, they'll plug on, as they need to prove that the cert is worth issuing. It's a data point, no more, and it doesn't exactly answer your spec above. But I'm having fun observing them trying to prove that client certs are worth any amount of effort.

iang

PS: normal disclosures of interest + conflicts, included.

- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
"Perry E. Metzger" <[EMAIL PROTECTED]> writes: >> SSL involves digital certificates. > >Not really, James Donald/George W. Bush. It involves public keys, and it >provides a channel by which X.509 certificates can be exchanged, Actually it doesn't even require X.509 certs. TLS-SRP and TLS-PSK provide mutual authentication of client and server without any use of X.509. The only problem has been getting vendors to support it, several smaller implementations support it, it's in the (still unreleased) OpenSSL 0.99, and the browser vendors don't seem to be interested at all, which is a pity because the mutual auth (the server has to prove possession of the shared secret before the client can connect) would significantly raise the bar for phishing attacks. (Anyone have any clout with Firefox or MS? Without significant browser support it's hard to get any traction, but the browser vendors are too busy chasing phantoms like EV certs). >> The particular digital certificate format necessarily imply a PKI >> structure > >No, James Donald/George W. Bush, that's not even remotely true. There is no >requirement that you use the certs as anything other than key containers. There's actually no requirement that you use certs at all. In fact if everyone dropped them (i.e. stopped pretending that they work and moved towards something that does) we might all be a whole lot better off. Peter. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
Victor Duchovni <[EMAIL PROTECTED]> writes:

> Jumping in late, but the idea that *TCP* (and not TLS protocol design)
> adds round-trips to SSL warrants some evidence (it is very tempting to
> express this skepticism more bluntly).

If anyone's interested, I did an analysis of this sort of thing in an unpublished draft, "Performance Characteristics of Application-level Security Protocols", http://www.cs.auckland.ac.nz/~pgut001/pubs/app_sec.pdf. It compares (among other things) the cost in RTTs of several variations of SSL and SSH. It's not the TCP RTTs that hurt, it's all the handshaking that takes place during the crypto connect. SSH is particularly bad in this regard.

Peter. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
Dave Howe <[EMAIL PROTECTED]> writes: >SSL - Cludge thrown together by a browser manufacturer, To paraphrase Winston Churchill, "SSL is the worst secure-pipe protocol, except for all the others". Like most people here, I can find assorted nits to pick with it (mostly message-formatting stuff and the like, which is actually relatively trivial), but every time I look at its competitors I realise that they're all much, much worse. Conversely, it's amazing how many other protocols are just SSL reinvented badly (or in several cases, really really badly). Peter. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
Victor Duchovni wrote:
> SMTP does not need TCP to provide reliability for the tail of the
> session; the application-level "." (end-of-data) and server "250"
> response complete a transaction, everything after that is optional. So
> for example Postfix will send (when PIPELINING):
>
>     DATA
>     354 Go ahead
>     Message-Content
>     Lots of acks
>     . QUIT
>     250 Ok
>
> and will disconnect after reading the "250" response without waiting
> for the 221 response. The TCP 3-way shutdown (FIN, FIN-ACK, ACK)
> happens in the kernel in the background; the SMTP server and client
> are by that point handling different connections. So the reliable
> shutdown latency is of no consequence for application throughput. A
> pipelined SMTP delivery can be completed over TCP in 5 RTTs, not 7:
>
>     0. SYN                SYN-ACK
>     1. ACK                220
>     2. EHLO               250
>     3. MAIL RCPT DATA     250 250 354
>     4. MSG . QUIT         250 221
>     5. close socket
>
> TCP is fine; latency is primarily the result of application protocol
> details, not TCP overhead. The only TCP overhead above is 1 extra RTT
> for the connection setup. Everything else is SMTP, not TCP, and running
> SMTP over UDP (with ideal conditions and no lost packets, ...) would
> save just 1 RTT.

re: http://www.garlic.com/~lynn/aadsm28.htm#21 Dutch Transport Card Broken

Sorry, I didn't say that TCP required seven round-trips for reliable exchange; the statement was that minimum TCP operation was a seven-packet exchange (for reliable operation), sort of 3.5 round-trips. VMTP (RFC 1045) reduced that to a minimum of five packets (sort of 2.5 round-trips), and XTP got it to a minimum of three packets (sort of 1.5 round-trips) for reliable operation.
From my RFC index
http://www.garlic.com/~lynn/rfcietff.htm

rfc 1045 summary
http://www.garlic.com/~lynn/rfcidx3.htm#1045

    1045 E
    VMTP: Versatile Message Transaction Protocol: Protocol specification,
    Cheriton D., 1988/02/01 (123pp) (.txt=264928) (Refs 955, 966, 969)
    (Ref'ed By 1050, 1072, 1105, 1106, 1190, 1263, 1323, 1453, 1458,
    1700, 2018, 2375, 2757) (VMTP)

As always, clicking on the ".txt=nnn" field (in an RFC summary) retrieves the actual RFC.

If there is more than a minimum amount of data, TCP might involve more than a seven-packet exchange ... but the minimum exchange is seven packets (not round-trips). - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
On Wed, Jan 30, 2008 at 02:47:46PM -0500, Victor Duchovni wrote: > If someone has a faster than 3-way handshake connection establishment > protocol that SSL could leverage instead of TCP, please explain the > design. I don't have one that exists today and is practical. But we can certainly imagine possible ways to improve this situation: move parts of TLS into TCP and/or IPsec. There are proposals that come close enough to this (see the last IETF SAAG meeting's proceedings, see the IETF BTNS WG) that it's not too farfetched, but for web stuff I just don't think they're remotely likely. Prior to the advent of AJAX-like web design patterns the most noticeable latency in web apps was in the server (for dynamic content) and the client (re-rendering the whole page on every click). Applying GUI lessons to the web (asynchrony! callbacks/closures!) fixed that. TLS was not to blame. TLS probably still isn't to blame for whatever latency users might be annoyed by in web apps. It's *much* easier to look for improvements in the app layer first given that web app updates are much easier to deploy than TLS (which in turn is much easier to deploy than changes to TCP or IPsec). Nico -- - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
On Thu, Jan 31, 2008 at 02:28:30PM -0500, Anne & Lynn Wheeler wrote:

> TCP requires minimum of seven message exchange for reliable transport.
> VMTP (rfc 1045) got that down to minimum of five messages, and XTP
> then got it down to three messages minimum for reliable transport
> (disclaimer: we were on the XTP technical advisory board).

SMTP does not need TCP to provide reliability for the tail of the session; the application-level "." (end-of-data) and server "250" response complete a transaction, everything after that is optional. So for example Postfix will send (when PIPELINING):

    DATA
    354 Go ahead
    Message-Content
    Lots of acks
    . QUIT
    250 Ok

and will disconnect after reading the "250" response without waiting for the 221 response. The TCP 3-way shutdown (FIN, FIN-ACK, ACK) happens in the kernel in the background; the SMTP server and client are by that point handling different connections. So the reliable shutdown latency is of no consequence for application throughput. A pipelined SMTP delivery can be completed over TCP in 5 RTTs, not 7:

    0. SYN                SYN-ACK
    1. ACK                220
    2. EHLO               250
    3. MAIL RCPT DATA     250 250 354
    4. MSG . QUIT         250 221
    5. close socket

TCP is fine; latency is primarily the result of application protocol details, not TCP overhead. The only TCP overhead above is 1 extra RTT for the connection setup. Everything else is SMTP, not TCP, and running SMTP over UDP (with ideal conditions and no lost packets, ...) would save just 1 RTT.

-- Viktor. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
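[Victor's 5-RTT count can be sanity-checked with a few lines of Python. This is purely an illustrative model mirroring his numbered exchanges, not real SMTP code; the batch grouping is an assumption taken from his transcript.]

```python
# Model each numbered exchange in the transcript above as one round
# trip: one RTT for the TCP SYN/SYN-ACK, then four application-level
# round trips for a pipelining SMTP client.
smtp_exchanges = [
    ["ACK", "220 greeting"],                  # exchange 1
    ["EHLO", "250 (PIPELINING advertised)"],  # exchange 2
    ["MAIL", "RCPT", "DATA"],                 # exchange 3, pipelined
    ["message", ".", "QUIT"],                 # exchange 4, pipelined tail
]
tcp_overhead_rtts = 1                         # exchange 0: SYN / SYN-ACK
total_rtts = tcp_overhead_rtts + len(smtp_exchanges)
print(total_rtts)  # -> 5, of which only 1 is TCP connection overhead
```

Without PIPELINING, exchanges 3 and 4 split into one round trip per command, which is where the unpipelined 6-RTT (or worse, multi-recipient) count comes from.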
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
On Jan 30, 2008 9:04 PM, Philipp Gühring <[EMAIL PROTECTED]> wrote:
> Hi,
>
> > Huh? What are you claiming the problem with sending client
> > certificates in plaintext is
>
> * It´s a privacy problem
> * It´s a security problem for people with a security policy that
>   requires their identities to be kept secret, and only to be used to
>   authenticate to the particular server they need
> * It´s an availability problem for people that need high-security
>   authentication mechanisms, combined with high-privacy demands
> * It´s an identity theft problem in case the certificate contains
>   personal data that can be used for identity theft

I totally disagree that this is a material problem that is in any meaningful way impeding the use of SSL client certificates (there are totally different reasons that client certs aren't being widely adopted, but that's beside the point).

However, TLS supports what you want right now: just do the initial handshake without client auth, then renegotiate after the session encryption starts. The renegotiation will happen under the encrypted, identity-protected and server-authenticated session, and client authentication can be requested in the renegotiation; the client cert will then be confidential.

The reason nobody actually bothers to do this is because there's no customer demand (see paragraph 1).

- Tim - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
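[The renegotiation trick Tim describes can be visualized with a toy transcript model. This is emphatically not a TLS implementation; the message names echo TLS handshake messages, but the flow and flags are simplified assumptions made only to show which messages cross the wire in the clear.]

```python
# Toy model of the "double handshake": the first handshake does server
# auth only and its messages are in the clear; the renegotiation rides
# inside the record layer keyed by the first handshake, so the client
# certificate never appears on the wire unencrypted.
def handshake(client_cert: bool, encrypted: bool):
    msgs = [("ClientHello", encrypted),
            ("ServerHello", encrypted),
            ("ServerCertificate", encrypted)]
    if client_cert:
        msgs.append(("CertificateRequest", encrypted))
        msgs.append(("ClientCertificate", encrypted))
    msgs.append(("Finished", True))  # Finished is always protected
    return msgs

# First handshake: server auth only, plaintext handshake messages.
first = handshake(client_cert=False, encrypted=False)
# Renegotiation: client auth requested, under the session keys.
second = handshake(client_cert=True, encrypted=True)

cleartext_on_wire = [name for name, enc in first + second if not enc]
print(cleartext_on_wire)  # -> ['ClientHello', 'ServerHello', 'ServerCertificate']
```

The cost Rescorla notes elsewhere in this thread applies here too: the server performs two private-key operations instead of one.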
Re: Dutch Transport Card Broken
Victor Duchovni wrote:
> Jumping in late, but the idea that *TCP* (and not TLS protocol design)
> adds round-trips to SSL warrants some evidence (it is very tempting to
> express this skepticism more bluntly). With unextended SMTP, for
> example, the minimum RTT count is:
>
>     0.   SYN             SYN-ACK
>     1.   ACK             220
>     2.   HELO            250
>     3.   MAIL            250
>     4.   RCPT            250
>          ... n recipients
>     4+n. DATA            354
>     5+n. ... stream of message content segments
>                          250
>
> so it takes at least 6 RTTs to perform a delivery (of a short
> single-recipient message), but only 1 of the 6 RTTs is TCP "overhead".
> This is improved with PIPELINING:
>
>     0. SYN                      SYN-ACK
>     1. ACK                      220
>     2. EHLO                     250 ... PIPELINING ...
>     3. MAIL RCPT (n times) DATA
>                                 250 250 (n times) 354
>     4. ... stream of message content segments
>                                 250

re:
http://www.garlic.com/~lynn/aadsm28.htm#15 Dutch Transport Card Broken
http://www.garlic.com/~lynn/aadsm28.htm#16 Dutch Transport Card Broken
http://www.garlic.com/~lynn/aadsm28.htm#20 Fixing SSL (was Re: Dutch Transport Card Broken)

TCP requires a minimum seven-message exchange for reliable transport. VMTP (RFC 1045) got that down to a minimum of five messages, and XTP then got it down to a three-message minimum for reliable transport (disclaimer: we were on the XTP technical advisory board).

I've frequently pontificated that with reliable registration of public keys in the DNS system, and then piggy-backing any registered public key on a standard DNS response, it would be possible to encode the randomly generated secret key (with that public key) and the encrypted message in the XTP packets for a minimum three-packet exchange.
http://www.garlic.com/~lynn/subtopic.html#subpubkey.html#catch22

HTTP already went through its own period of problems with implicit assumptions about TCP. TCP sessions were assumed to be long-lived, and session shutdown was assumed to be relatively infrequent; non-session activity like HTTP was always assumed to use UDP for efficiency. HTTP ignored all of that and used TCP for non-session activity.
As a result, webserver systems went through a period where the processor was spending 95+ percent of its time in session shutdown processing. Systems were then retrofitted with a new kind of TCP session shutdown implementation to handle the misuse by HTTP.

The original SSL deployment was to 1) encrypt data in transit and 2) authenticate the server. The implicit assumption was that the user understood the binding between the business and the URL; the browser then provided the second part, verifying the binding between the URL and the server contacted (was the server the user thought they were talking to the server they were actually talking to?). The dependency for valid SSL operation was violated almost immediately when merchants found that SSL overhead reduced throughput by 5-10 times. The result was that instead of the initial contact with the webserver (presumably a URL supplied by the user) being SSL, SSL was moved to the checkout/pay phase, where the user clicked on a button (and URL) provided by the webserver, not a URL provided by the user. It was no longer possible to provide any assurances as to the authenticity of the webserver contacted (SSL being reduced to purely encrypting data in transmission).

We had been called in to consult with the small client/server company on using this technology (they created) called SSL for payment transactions
http://www.garlic.com/~lynn/subnetwork.html#gateway
and had to go through detailed walk-throughs of the technology as it applied to actual business processes (and the associated implicit dependencies) ... as well as detailed walk-throughs of the new business operations that were calling themselves certification authorities.

The other issue that came up in applying this SSL technology was communication between webservers and something called the payment gateway. For this communication we mandated mutual authentication ... this was before mutual authentication had been implemented in SSL.
It turns out that by the time we had it all implemented and deployed, it had also become very apparent that the things called digital certificates were redundant and superfluous. The basic design point for digital certificates is first-time communication between total strangers. The payment gateway business processes required that all the merchants be pre-registered with the payment gateway, and the payment gateway pre-registered with all the merchants, violating the basic justification for having digital certificates. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
RE: Fixing SSL (was Re: Dutch Transport Card Broken)
To add to the examples Philipp has mentioned, I've been closely involved in the design and implementation of a number of projects for the Spanish government using SSL + client certificates; indeed, the new Spanish ID card includes two certificates, one for authentication and the other for digital signature. There are some examples of services using SSL + client certs at:
http://www.mir.es/MIR/Servicios_Telematicos/ConCertificacion/

Regards,
Jim Cheesman

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On behalf of Philipp Gühring
Sent: Thursday, January 31, 2008 3:04
To: Eric Rescorla
Cc: Cryptography; Rasika Dayarathna
Subject: Re: Fixing SSL (was Re: Dutch Transport Card Broken)

Hi,

> Huh? What are you claiming the problem with sending client certificates
> in plaintext is

* It´s a privacy problem
* It´s a security problem for people with a security policy that requires their identities to be kept secret, and only to be used to authenticate to the particular server they need
* It´s an availability problem for people that need high-security authentication mechanisms, combined with high-privacy demands
* It´s an identity theft problem in case the certificate contains personal data that can be used for identity theft

Quoted from Lynn's email:
> i.e. the x.509 identity digital certificates from the early 90s were
> becoming more and more overloaded with personal information ... and by
> the mid-90s, lots of institutions were starting to realize all that
> personal information represented significant privacy and liability
> issues ... and the RPO (relying-party-only) digital certificates were
> born.

* It´s a liability issue (Lynn, can you go into more details here? On the other hand, I would say it´s self-explaining ...)

> (as if anyone uses client certificates anyway)?

Guess why so few people are using it ... If it were secure, more people would be able to use it.
If you want a "public" example of client certificate usage: https://secure.cacert.org/ (You need a (free) client certificate from www.CAcert.org to be able to access this page) There are ISPs out there who provide internet access based on client certificates, authenticated in HTTPS sessions Creative Commons is running a registry for digital works, based on authors client certificate authentication: http://www.registeredcommons.org/ The Austrian governmental inhabitant register is using client certificates for about 10,000 users all around Austria since 2001. (If I remember the details correctly) http://zmr.bmi.gv.at/pages/home.htm And there are hundreds of internal systems I heard of that are using client certificates in reality every day. > That the phisher gets to see the client's identity? Validated email addresses for spamming. Spear-phishing perhaps, ... > So what? Why doesn´t SSH leak the client identity in plaintext? The problem isn´t a key-agreement problem. The problem is a client-authentication problem. > It doesn't let them impersonate the client to anyone. It does let them impersonate the client to anyone who doesn´t care about the public key. (There are applications that just use the DN+Issuer information that they normally extract out of the certificates, ...) But impersonation is just one threat out of the huge SSL/TLS threat-model. > Certificates > shouldn't contain sensitive information anyway. There are CA´s on this planet that put things like social security numbers into certificates. (I guess those CA´s would say that SSL shouldn´t leak certificates in plaintext anyway.) Shovling around responsibility won´t help us. Let´s fix the problems. (Yes, we are already trying to get those CA´s to stop doing that ... but it´s a bit like asking credit card companies to not print those sensitive creditcard numbers on those credit cards ...) And there are a lot of people who would be interested to use certificates for more applications than pure identity. 
(which aren´t necessarily sensitive, but are personal data).

Where does the SSL specification say that certificates shouldn´t contain sensitive information? I am missing that detail in the security considerations section of the RFC. There is a market demand for using sensitive information in certificates, dating back to the mid 90's (according to Lynn), and showing itself in various forms like Stefan Brands' credentials, Attribute Certificates, and even the OACerts by Jiangtao Li and Ninghui Li.

I have been talking to many people about client certificates and client authentication, and a lot of them are interested in using client certificates for authentication, and also in adding other attributes to the certificates.

> > We have the paradox situation that I have to tell people that they
> > should use HTTPS with server-certificates and username+password
> > inside the HTTPS session, because that´s more secure
Re: Dutch Transport Card Broken
"James A. Donald" <[EMAIL PROTECTED]> writes: > Perry E. Metzger wrote: >> (No, I'm not a fan of X.509 certs, but those are not >> core to the protocol, and you can think of them as >> nothing more than a fancy key container format if you >> like. Key management is not addressed by SSL, so there >> is no reason that fixing key management has anything >> to do with SSL per se.) > > The two actually working, widely used, secure systems > are SSH and Skype, SSL is in use just about everywhere, James. https: is used constantly, and there are many other applications that make use of it. My mom uses SSL regularly, but by contrast I don't think she's ever touched SSH. Perhaps you argue that SSL is not secure, but it appears to have withstood all attacks to date. You might claim that it does not prevent phishing attacks or some such, but then you are making a claim about how people use https in practice,, not about SSL at all. The SSL part of that protocol is fine, just as the RSA and AES or 3DES parts of that protocol are fine. > neither of which uses SSL/TLS/PKI I'm going to start referring to you as James Donald/George W. Bush. Why does James Donald/George W. Bush persist in involving us in wars in foreign countries, I wonder? Please don't claim that you're not somehow part of James Donald/George W. Bush, because as you see I've juxtaposed your names, which is proof that you must be part of the same entity. If you don't like my doing that, then stop referring to SSL/TLS/PKI because SSL has nothing to do with PKI. SSL has nothing to do with PKI. X.509 certs are just a key container format. The applications get to decide what to do with them. You persistently claim that SSL has something to do with public key infrastructure, but it has no more to do with public key infrastructure at all. You shouldn't mention them at the same time. You are free to write applications that use SSL with a PKI, or without it -- the two have nothing to do with each other whatsoever. 
I know of many apps that use SSL and don't touch PKI at all. Constantly mentioning PKI and SSL at the same time betrays a substantial ignorance of the system architecture we're discussing -- it would make as much sense to claim that SSL has something to do with IKE because both use X.509 certificates.

> The proof of the pudding is in the eating. When large numbers of
> people use cryptography that really does make them secure, they are
> not using SSL/TLS/PKI.

Well, James Donald/George W. Bush, I presume this means that you have a way of breaking SSL. Could you share it with us? If not, please stop conflating things that are unconnected.

> SSL involves digital certificates.

Not really, James Donald/George W. Bush. It involves public keys, and it provides a channel by which X.509 certificates can be exchanged, but by the same token, SSH also provides a channel by which X.509 certs can be exchanged. Should we therefore refer to SSH/PKI?

> The particular digital certificate format necessarily imply a PKI
> structure

No, James Donald/George W. Bush, that's not even remotely true. There is no requirement that you use the certs as anything other than key containers.

>> My opinion (and just about everyone else's) is well known.
>
> There is a serious security problem in the network. It needs fixing.
> SSL/TLS/PKI exists, yet is entirely ineffectual in fixing it.

Well, James Donald/George W. Bush, perhaps that is because SSL has nothing to do with the issue. SSL works perfectly so far as we know. The issue is that higher levels of the stack (like key management) aren't properly designed, but SSL itself is just fine.

-- Perry E. Metzger [EMAIL PROTECTED] - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
Philipp Gühring wrote:
> Hi,
>
>> SSL key distribution and management is horribly broken, with the
>> result that everyone winds up using plaintext when they should not.
>
> Yes, sending client certificates in plaintext while claiming that
> SSL/TLS is secure doesn´t work in a world of phishing and identity
> theft anymore. We have the paradox situation that I have to tell
> people that they should use HTTPS with server-certificates and
> username+password inside the HTTPS session, because that´s more secure
> than client certificates ...
>
> Does anyone have an idea how we can fix this flaw within SSL/TLS
> within a reasonable timeframe, so that it can be implemented and
> shipped by the vendors in this century? (I don´t think that starting
> from scratch and replacing SSL makes much sense, since it´s just one
> huge flaw ...)

If I recall correctly, SSL was designed chronologically after the ISO OSI Network-Layer Security Protocol (yes, the official WAN was actually X.25 at one point) and Transport Layer Security Protocol, both in their connection-oriented flavor, which used ideas originating in DECnet designs (researcher names Tardo and Alagappan; I once had a patent number in this thread of protocol engineering, but I lost it).

Anyway, the key point in these visionary ideas is that the D-H exchange occurs *before* the exchange of security certificates. This provides the traffic-flow confidentiality that has become desirable for protecting privacy these days.

So, you got your fix with OSI NLSP or TLSP; you just have to overcome the *power of the installed base*!

Regards,

-- Thierry Moreau
CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, Qc Canada H2M 2A1
Tel.: (514)385-5691
Fax: (514)385-5900
web site: http://www.connotech.com
e-mail: [EMAIL PROTECTED]
- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
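[Thierry's point, that an anonymous D-H exchange completed before any identities are sent lets the certificates travel encrypted, can be illustrated with a deliberately tiny sketch. Everything below is a toy: the parameters are NOT cryptographically safe, and the SHA-256 counter keystream is a throwaway stand-in for a real cipher.]

```python
# Toy illustration only (NOT secure): ephemeral Diffie-Hellman first,
# then the "certificate" travels under the derived key, so a passive
# observer never sees the identity in the clear.
import hashlib
import secrets

p = 2**127 - 1   # a Mersenne prime; toy-sized, not a real DH group
g = 3

# Each side picks an ephemeral secret; nothing identifying is sent yet.
a = secrets.randbelow(p - 2) + 2
b = secrets.randbelow(p - 2) + 2
shared_client = pow(pow(g, b, p), a, p)   # client computes (g^b)^a
shared_server = pow(pow(g, a, p), b, p)   # server computes (g^a)^b
assert shared_client == shared_server

key = hashlib.sha256(shared_client.to_bytes(16, "big")).digest()

def stream_xor(key: bytes, data: bytes) -> bytes:
    # Throwaway SHA-256 counter keystream, for illustration only.
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        out += bytes(x ^ y for x, y in zip(data[i:i + 32], block))
    return bytes(out)

cert = b"-----BEGIN CERTIFICATE----- (identity goes here) ..."
wire = stream_xor(key, cert)              # what a sniffer sees
assert wire != cert                       # identity is not in the clear
assert stream_xor(key, wire) == cert      # the peer recovers it
```

Unauthenticated D-H like this only hides the certificates from a passive observer; an active man-in-the-middle still has to be defeated by the authentication step that follows, which is exactly the ordering NLSP/TLSP (and later TLS 1.3) adopt.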
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
At Thu, 31 Jan 2008 03:04:00 +0100, Philipp Gühring wrote:
> Hi,
>
> > Huh? What are you claiming the problem with sending client
> > certificates in plaintext is
>
> * It´s a privacy problem
> * It´s a security problem for people with a security policy that
>   requires their identities to be kept secret, and only to be used to
>   authenticate to the particular server they need
> * It´s an availability problem for people that need high-security
>   authentication mechanisms, combined with high-privacy demands
> * It´s an identity theft problem in case the certificate contains
>   personal data that can be used for identity theft

I don't find this at all convincing. There are a variety of different threat vectors here:

1. Phishing.
2. Pharming (DNS spoofing).
3. Passive attacks.

In the case of phishing, the fact that the client sends its certificates in the clear is totally irrelevant, since the client would simply send its identity encrypted under the server's certificate. The only fix for this alleged privacy leak in the phishing context is for the client to refuse to deliver his certificate to anyone but people who present valid certs that he otherwise trusts.

Now, this is potentially an attack if the attacker is passive but on-path, either via pharming or via subverting some router, but I'm unaware of any evidence that this is used as a certificate disclosure attack vector.

> > (as if anyone uses client certificates anyway)?
>
> Guess why so few people are using it ...
> If it were secure, more people would be able to use it.

No, if it were *convenient* people would use it. I know of absolutely zero evidence (nor have you presented any) that people choose not to use certs because of this kind of privacy issue--but I know of plenty that they find getting certs way too inconvenient.

> > That the phisher gets to see the client's identity?
>
> Validated email addresses for spamming. Spear-phishing perhaps, ...
Validated email addresses are not exactly hard to obtain. > > It doesn't let them impersonate the client to anyone. > > It does let them impersonate the client to anyone who doesn´t care about the > public key. (There are applications that just use the DN+Issuer information > that they normally extract out of the certificates, ...) If those applications do not force the client to do proof of possession of the private key, then they are fatally broken. It's not our job to fix them. > > > We have the paradox situation that I have to tell people that they should > > > use HTTPS with server-certificates and username+password inside the HTTPS > > > session, because that´s more secure than client certificates ... > > > > No it isn't more secure. > > Using username+password inside HTTPS does not leak the client´s identity in > cleartext on the line. (If I am wrong and HTTPS leaks usernames sent as HTTP > Forms or with HTTP Basic Authentication, please tell me) No, it just leaks the password to the phishing server. Yeah, that's totally a lot better. > > This gets discussed on the TLS mailing list occasionally, but the > > arguments for making this change aren't very convincing. > > Yes, there are regularly people popping up there that need it, but they > always > get ignored there, it seems. Because the arguments they present are handwavy and unconvincing, just like yours. > > If you have > > an actual credible security argument you should post it to > > [EMAIL PROTECTED] > > Do you think the the security arguments I summed up above qualify on the tls > list? It's an open list. Feel free to make these arguments. > Should I go into more detail? Present practical examples? I would certainly find practical examples more convincing than the ones you've presented. > I see several possible options: > * We fix SSL > Does anyone have a solution for SSL/TLS available that we could propose on > the > TLS list? 
> If not: Can anyone with enough protocol design experience please
> develop it?

There's already a solution: the double handshake. You do an ordinary handshake with server auth only, and then you do a second handshake with client auth. This hides the certificate perfectly well. Yes, you have to do two private key ops on the server, but if this issue is as important as you say, this is a tradeoff you should be happy to make. I've pointed this out on the TLS mailing list a number of times, but maybe you missed it.

> * We change the rules of the market, and tell the people that they
>   MUST NOT ask for additional data in their certificates anymore

Fundamentally, this *is* the fix. Even if SSL guaranteed that nobody but the person you were handshaking with got the certificate, this would still be incredibly brittle, because any random server can ask you for your cert and users can't be trusted not to hand them over. The basic premise of certs is that they're public info. If you want to carry private data around in them, then you should encrypt that data.

> > > TC
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
Hi,

> Huh? What are you claiming the problem with sending client certificates
> in plaintext is

* It´s a privacy problem
* It´s a security problem for people with a security policy that requires their identities to be kept secret, and only to be used to authenticate to the particular server they need
* It´s an availability problem for people that need high-security authentication mechanisms, combined with high-privacy demands
* It´s an identity theft problem in case the certificate contains personal data that can be used for identity theft

Quoted from Lynn's email:
> i.e. the x.509 identity digital certificates from the early 90s were
> becoming more and more overloaded with personal information ... and by
> the mid-90s, lots of institutions were starting to realize all that
> personal information represented significant privacy and liability
> issues ... and the RPO (relying-party-only) digital certificates were
> born.

* It´s a liability issue (Lynn, can you go into more details here? On the other hand, I would say it´s self-explaining ...)

> (as if anyone uses client certificates anyway)?

Guess why so few people are using it ... If it were secure, more people would be able to use it.

If you want a "public" example of client certificate usage: https://secure.cacert.org/
(You need a (free) client certificate from www.CAcert.org to be able to access this page.)

There are ISPs out there who provide internet access based on client certificates, authenticated in HTTPS sessions.

Creative Commons is running a registry for digital works, based on authors' client certificate authentication: http://www.registeredcommons.org/

The Austrian governmental inhabitant register has been using client certificates for about 10,000 users all around Austria since 2001 (if I remember the details correctly): http://zmr.bmi.gv.at/pages/home.htm

And there are hundreds of internal systems I've heard of that are using client certificates in reality every day.

> That the phisher gets to see the client's identity?
Validated email addresses for spamming. Spear-phishing, perhaps ...

> So what?

Why doesn't SSH leak the client identity in plaintext? The problem isn't a key-agreement problem. The problem is a client-authentication problem.

> It doesn't let them impersonate the client to anyone.

It does let them impersonate the client to anyone who doesn't care about the public key. (There are applications that just use the DN+Issuer information that they normally extract out of the certificates, ...) But impersonation is just one threat out of the huge SSL/TLS threat model.

> Certificates
> shouldn't contain sensitive information anyway.

There are CAs on this planet that put things like social security numbers into certificates. (I guess those CAs would say that SSL shouldn't leak certificates in plaintext anyway.) Shoveling around responsibility won't help us. Let's fix the problems. (Yes, we are already trying to get those CAs to stop doing that ... but it's a bit like asking credit card companies not to print those sensitive credit card numbers on the credit cards themselves ...)

And there are a lot of people who would be interested in using certificates for more applications than pure identity (attributes which aren't necessarily sensitive, but which are personal data).

Where does the SSL specification say that certificates shouldn't contain sensitive information? I am missing that detail in the security considerations section of the RFC. There is a market demand for using sensitive information in certificates, dating back to the mid-90s (according to Lynn), and showing itself in various forms like Stefan Brands' credentials, Attribute Certificates, and even the OACerts by Jiangtao Li and Ninghui Li.

I have been talking to many people about client certificates and client authentication, and a lot of them are interested in using client certificates for authentication, and also in adding other attributes to the certificates. 
> > We have the paradox situation that I have to tell people that they should
> > use HTTPS with server-certificates and username+password inside the HTTPS
> > session, because that's more secure than client certificates ...
>
> No it isn't more secure.

Using username+password inside HTTPS does not leak the client's identity in cleartext on the line. (If I am wrong and HTTPS leaks usernames sent as HTTP forms or with HTTP Basic Authentication, please tell me.)

> > Does anyone have an idea how we can fix this flaw within SSL/TLS within a
> > reasonable timeframe, so that it can be implemented and shipped by the
> > vendors in this century?

Do we have any more ideas how we can get this flaw fixed before it starts hurting too much?

> This gets discussed on the TLS mailing list occasionally, but the
> arguments for making this change aren't very convincing.

Yes, there are regularly people popping up there who need it, but it seems they always get ignored there. I think we have the boiling frog problem here.
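The plaintext-certificate concern in this exchange can be made concrete. Below is a minimal Python sketch of a client configured to present a certificate during the TLS handshake; the file names and the commented-out host usage are hypothetical, and this is only an illustration of the mechanics, not anyone's production setup. With the SSL/TLS versions under discussion, the Certificate message such a context sends travels before encryption starts, which is exactly the leak being debated:

```python
import socket
import ssl

def client_context(certfile, keyfile):
    """Build a TLS client context that will present a client certificate.

    During the handshake, the Certificate message carrying this cert is
    sent before record encryption begins, so a passive eavesdropper can
    read the client's identity off the wire.
    """
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    return ctx

# Usage sketch (hypothetical file names; host from the post above):
# ctx = client_context("client.pem", "client.key")
# with socket.create_connection(("secure.cacert.org", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="secure.cacert.org") as tls:
#         tls.sendall(b"GET / HTTP/1.0\r\nHost: secure.cacert.org\r\n\r\n")
```

Note that the username+password alternative discussed above differs precisely in that the credentials only cross the wire after `wrap_socket` has finished the handshake and encryption is active.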
Re: Dutch Transport Card Broken
Perry E. Metzger wrote:
> (No, I'm not a fan of X.509 certs, but those are not
> core to the protocol, and you can think of them as
> nothing more than a fancy key container format if you
> like. Key management is not addressed by SSL, so there
> is no reason that fixing key management has anything
> to do with SSL per se.)

The two actually working, widely used, secure systems are SSH and Skype, neither of which uses SSL/TLS/PKI. The proof of the pudding is in the eating. When large numbers of people use cryptography that really does make them secure, they are not using SSL/TLS/PKI.

SSL involves digital certificates. The particular digital certificate format necessarily implies a PKI structure with the same sort of defects as the existing PKI structure, which secures what does not matter much, and fails to secure that which does matter. In this sense, X.509 certificates are core to the protocol, and that is the big problem with the protocol, though neither am I happy about the fact that when the client initiates a communication, the data it actually wants to send only gets sent after the *third* round trip.

> My opinion (and just about everyone else's) is well
> known.

There is a serious security problem in the network. It needs fixing. SSL/TLS/PKI exists, yet is entirely ineffectual in fixing it.
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
Eric Rescorla wrote:
> Huh? What are you claiming the problem with sending
> client certificates in plaintext is (as if anyone uses
> client certificates anyway)?

Well, that is one problem - no one uses them, and no one should use them, while PKI was designed under the assumption that everyone would be using them.

Another problem is that in practice the system merely ensures you are getting the purported domain name. Since we are overwhelmed by a multitude of irrelevant and confusing domain names, this is not much help. Further, I frequently get the warning that the certificate does not agree with the domain name when I know well that I am communicating with the intended entity - frequent misconfiguration results in false warnings, which I am thus trained to ignore, rendering the system entirely useless.

Since we rely on shared secrets - passwords, social security numbers, and so forth - people are trained to give away secrets to purported authority, which creates the phishing hazard. We need to fix both problems. Of course, if the phishing hazard were fixed, we would still have the malware hazard, but we now know how to fix the malware hazard. We should fix both problems, rather than using one as an excuse for not fixing the other. We need to fix the network assuming the node is going to be made safe, and fix the node assuming the network is going to be made safe.

>> Does anyone have an idea how we can fix this flaw
>> within SSL/TLS within a reasonable timeframe, so that
>> it can be implemented and shipped by the vendors in
>> this century?

Eric Rescorla wrote:
> This gets discussed on the TLS mailing list
> occasionally, but the arguments for making this change
> aren't very convincing. If you have an actual credible
> security argument you should post it to [EMAIL PROTECTED]

I don't think that is a useful discussion forum. The IETF is moribund, paralyzed and increasingly irrelevant. If the internet is to be fixed, the fixes have to bypass the IETF. 
When one has a large group, group dynamics can make the large group a little bit smarter than its smartest members but, more commonly, make it a lot dumber than its dumbest members. If the IETF were capable of handling, or even noticing, the crisis that we are in, then we would not be in this crisis.

To fix the phishing problem, we need to cryptographically secure relationships, rather than attempting to cryptographically secure true names, and to greatly reduce reliance on revealing shared secrets. It should be unusual and disturbing to reveal shared secrets, rather than routine, and it should only be done with humans, not machines.

1. As with Skype-to-Skype IM, the fact that you can receive a message from what purports to be an entity with which you have a relationship should be compelling evidence that it really is that entity - the entity to which you have given a petname on your contacts list. Thus phishing is hard to initiate. As with Skype, what we seek to secure is petnames, not true names. We want to secure the bookmark list, and the list that comes up in a Google search. We want to ensure that when you click on the top entry of the Google list, you are contacting the intended entity.

2. As with Skype-to-Skype IM, this should be symmetric. If you respond to a message from your bank, or initiate a message to your bank, you should not have to reveal some shared secrets to prove an existing relationship before getting on with your task. Thus phishing should fail to catch any phish.
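The petname idea in points 1 and 2 can be sketched in a few lines. This is an illustrative toy, not any real Skype mechanism: the identities below are stood in for by made-up byte strings, and SHA-256 fingerprinting is an assumption chosen for the sketch. The point is that recognition keys off a prior cryptographic relationship, not a presented true name:

```python
import hashlib

class PetnameStore:
    """Toy petname store: bind human-chosen names to cryptographic
    identities (here, a hash of a public key), so that "my bank" means
    whatever key I previously named, not whatever presents a plausible
    true name."""

    def __init__(self):
        self._by_fingerprint = {}

    @staticmethod
    def fingerprint(pubkey_bytes):
        # Assumed fingerprinting scheme for this sketch.
        return hashlib.sha256(pubkey_bytes).hexdigest()[:16]

    def introduce(self, petname, pubkey_bytes):
        # Record a relationship once, e.g. when the account is opened.
        self._by_fingerprint[self.fingerprint(pubkey_bytes)] = petname

    def recognize(self, pubkey_bytes):
        # Return the petname if a prior relationship exists, else None.
        return self._by_fingerprint.get(self.fingerprint(pubkey_bytes))

store = PetnameStore()
store.introduce("my bank", b"bank-public-key")
print(store.recognize(b"bank-public-key"))     # → my bank
print(store.recognize(b"phisher-public-key"))  # → None
```

A phisher's key has no entry, so no amount of plausible branding makes it "my bank" - which is the sense in which phishing is "hard to initiate" above.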
Re: Dutch Transport Card Broken
On Wed, Jan 30, 2008 at 06:08:37PM -, Dave Korn wrote:
> On 30 January 2008 17:01, Jim Cheesman wrote:
> > James A. Donald:
> SSL is layered on top of TCP, and then one layers
> one's actual protocol on top of SSL, with the result
> that a transaction involves a painfully large number
> of round trips.

Jumping in late, but the idea that *TCP* (and not TLS protocol design) adds round-trips to SSL warrants some evidence (it is very tempting to express this skepticism more bluntly). With unextended SMTP, for example, the minimum RTT count is:

  0.    SYN                                  SYN-ACK
  1.    ACK                                  220
  2.    HELO                                 250
  3.    MAIL                                 250
  4.    RCPT                                 250
  ...   n recipients                         RCPT 250
  4+n.  DATA                                 354
  5+n.  stream of message content segments   250

so it takes at least 6 RTTs to perform a delivery (of a short single-recipient message), but only 1 of the 6 RTTs is TCP "overhead". This is improved with PIPELINING:

  0.    SYN                                  SYN-ACK
  1.    ACK                                  220
  2.    EHLO                                 250 ... PIPELINING ...
  3.    MAIL RCPT (n times) DATA             250, 250 (n times), 354
  4.    stream of message content segments   250

Here the application protocol is pipelined, and 5+n RTTs becomes 4 RTTs. The solution is not replacing TCP, but reducing the number of lock-step interactions in the application protocol.

If someone has a faster-than-3-way-handshake connection establishment protocol that SSL could leverage instead of TCP, please explain the design. The TCP handshake adds a 1-RTT delay at the start of the connection. What 0-RTT algorithm will allow the server to delay creating expensive connections to clients until the client acks the server response, or to discover the MSS before sending the first segment? With TCP, at least SYN floods require unspoofed client IPs.

Most of the application protocols we wrap in TLS are not DNS. Sure, if you can guarantee a single-packet response to a single-packet request, TCP is not the answer. Otherwise, claiming that SSL is less efficient over TCP smacks of arrogance. 
-- /"\ ASCII RIBBON NOTICE: If received in error, \ / CAMPAIGN Victor Duchovni please destroy and notify X AGAINST IT Security, sender. Sender does not waive / \ HTML MAILMorgan Stanley confidentiality or privilege, and use is prohibited. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
Philipp Gühring wrote:
I once implemented SSL over GSM data channel (without PPP and without TCP), and discovered that SSL needs better integrity protection than raw GSM delivers. (I am quite sure that's why people normally run PPP over GSM channels ...) SSH has the same problems. It also assumes an active attack in case of integrity problems of the lower layer, and terminates the connection.

TBH I can't see the problem - the unix philosophy of doing one thing well, and chaining simple tools to make complex ones, works well here. We have:

TCP - well understood, has crude integrity and reliability checks built in, works reasonably well at converting a bunch of packets leaving and arriving via your network connection into something vaguely like a stream point-to-point connection. Provided by every ISP across the planet; problems at this level can be handed off to experienced network engineers who will at least understand the problem.

SSL - kludge thrown together by a browser manufacturer, probably to create a market for a bunch of companies who generated two prime numbers and now sell the answers to simple math queries involving the numbers. However, it works reasonably well, and has some crude authentication of the server built in (via the aforementioned bunch of companies), which at least limits potential hackers to those whose money the bunch of companies will accept ;) Again, works well in its domain, but requires a reasonably reliable channel to talk over, and a message to carry. Effectively turns an unencrypted channel into an encrypted one. Would work as well over a serial link as a TCP link (modulo the domain name check in the cert).

HTTP - pretty basic file transfer protocol, with limited scope for negotiation, but designed largely to move text files from a server to a client. Requires a transport; can use TCP, SSL-over-TCP, serial, whatever your server will listen on and your client request on.

Add them together and you get HTTPS. 
Leave out the SSL, and you get HTTP as normally spoken, so the SSL and HTTP are pretty much drop-in modules. You could define HTTPG (HTTP over a security protocol other than SSL) and, if a browser could support it, both TCP and HTTP would still be happy. You could also define HTTPS-over-Aldis-lamp and, provided the operators were sufficiently accurate, securely download your web page from a server on a nearby hilltop after dark by replacing the TCP layer :)
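The drop-in layering described in the last two posts is easy to demonstrate in code: the HTTP bytes are built independently of the transport, and the SSL layer simply wraps the TCP socket. A minimal Python sketch (host and path are whatever you point it at; drop the `wrap_socket` line and the same HTTP bytes ride bare TCP):

```python
import socket
import ssl

def build_request(host, path="/"):
    # The HTTP layer: just bytes, independent of what carries them.
    return f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode()

def https_get(host, path="/"):
    """HTTP over SSL over TCP, one layer wrapping the next."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as tcp:           # TCP layer
        with ctx.wrap_socket(tcp, server_hostname=host) as tls:  # SSL layer
            tls.sendall(build_request(host, path))               # HTTP layer
            chunks = []
            while data := tls.recv(4096):
                chunks.append(data)
    return b"".join(chunks)
```

Nothing in `build_request` knows about TLS, and nothing in the TLS wrapping knows about HTTP - which is the "HTTPG over anything" modularity the post is describing.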
RE: Dutch Transport Card Broken
On 30 January 2008 17:03, Perry E. Metzger wrote: > My main point here was, in fact, quite related to yours, and one that > we make over and over again -- innovation in such systems for its own > sake is also not economically efficient or engineering smart. Hear hear! This maxim should be burned into the frontal lobes of every single member of Microsoft's engineering (and marketing) teams with a red-hot poker[*]. [ Over-engineered solutions to non-problems and gratuitous marketing-driven featuritis have been the root cause of almost every windows security disaster ever - e.g., email featuring 'rich content' such as scripts; web browsers that download and locally run active-x from random websites; lots of vulnerable RPC services installed and enabled by default on home user PCs; ... etc etc.; certainly they have far outnumbered the occasional flaws in core kernel services. But - economics again!, and a tip'o the hat to Schneier and his externalities argument - as long as the extra sales go to Microsoft's coffers, and the extra costs are all imposed on their victims^Wusers, there's no incentive for them to do otherwise. Hence my suggestion that they need a red-hot one (incentive, that is). ] cheers, DaveK [*] - or red-hot Gutmann soundwave -- Can't think of a witty .sigline today - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
RE: Dutch Transport Card Broken
On 30 January 2008 17:01, Jim Cheesman wrote: > James A. Donald: SSL is layered on top of TCP, and then one layers one's actual protocol on top of SSL, with the result that a transaction involves a painfully large number of round trips. > > Richard Salz wrote: > > Perhaps theoretically painful, but in practice this is > > not the case; commerce on the web is the > > counter-example. > > James A. Donald: > >> The delay is often humanly perceptible. If humanly >> perceptible, too much. > > I respectfully disagree - I'd argue that a short wait is actually more > reassuring to the average user (Hey! The System's checking me out!) than an > instantaneous connection would be. I also disagree. It's not like anyone says to themselves "Hey, this website is taking me several seconds to access - I'll spend a couple of hours physically going to the shop instead". It's economics again: what amount of time or money constitutes "too much" depends what the alternative choices are. cheers, DaveK -- Can't think of a witty .sigline today - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
At Wed, 30 Jan 2008 17:59:51 -, Dave Korn wrote: > > On 30 January 2008 17:03, Eric Rescorla wrote: > > > >>> We really do need to reinvent and replace SSL/TCP, > >>> though doing it right is a hard problem that takes more > >>> than morning coffee. > >> > >> TCP could need some stronger integrity protection. 8 Bits of checksum isn´t > >> enough in reality. (1 out of 256 broken packets gets injected into your TCP > >> stream) Does IPv6 have a stronger TCP? > > > > Whether this is true or not depends critically on the base rate > > of errors in packets delivered to TCP by the IP layer, since > > the rate of errors delivered to SSL is 1/256th of those delivered > > to the TCP layer. > > Out of curiosity, what kind of TCP are you guys using that has 8-bit > checksums? You're right. It's 16 bit, isn't it. I plead it being early in the morning. I think my point now applies even moreso :) > > Since link layer checksums are very common, > > as a practical matter errored packets getting delivered to protocols > > above TCP is quite rare. > > Is it not also worth mentioning that TCP has some added degree of protection > in that if the ACK sequence num isn't right, the packet is likely to be > dropped (or just break the stream altogether by desynchronising the seqnums)? Right, so this now depends on the error model... -Ekr - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
RE: Fixing SSL (was Re: Dutch Transport Card Broken)
On 30 January 2008 17:03, Eric Rescorla wrote: >>> We really do need to reinvent and replace SSL/TCP, >>> though doing it right is a hard problem that takes more >>> than morning coffee. >> >> TCP could need some stronger integrity protection. 8 Bits of checksum isn´t >> enough in reality. (1 out of 256 broken packets gets injected into your TCP >> stream) Does IPv6 have a stronger TCP? > > Whether this is true or not depends critically on the base rate > of errors in packets delivered to TCP by the IP layer, since > the rate of errors delivered to SSL is 1/256th of those delivered > to the TCP layer. Out of curiosity, what kind of TCP are you guys using that has 8-bit checksums? > Since link layer checksums are very common, > as a practical matter errored packets getting delivered to protocols > above TCP is quite rare. Is it not also worth mentioning that TCP has some added degree of protection in that if the ACK sequence num isn't right, the packet is likely to be dropped (or just break the stream altogether by desynchronising the seqnums)? cheers, DaveK -- Can't think of a witty .sigline today - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
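For reference in this sub-thread: the checksum TCP actually uses is the 16-bit ones'-complement sum of RFC 1071, not 8 bits, so an undetected corrupt segment slips through at roughly 1 in 65536, not 1 in 256 (before counting the link-layer CRCs and sequence-number checks mentioned above). A minimal sketch, checked against the worked example bytes in RFC 1071 itself:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum per RFC 1071, as used by TCP."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

# Example bytes from RFC 1071 section 3; the checksum there is 0x220D.
print(hex(internet_checksum(b"\x00\x01\xf2\x03\xf4\xf5\xf6\xf7")))  # → 0x220d
```

A receiver verifies by summing the data with the checksum appended; a correct segment yields zero.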
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
Philipp Gühring wrote:
Yes, sending client certificates in plaintext while claiming that SSL/TLS is secure doesn't work in a world of phishing and identity theft anymore. We have the paradox situation that I have to tell people that they should use HTTPS with server-certificates and username+password inside the HTTPS session, because that's more secure than client certificates ... Does anyone have an idea how we can fix this flaw within SSL/TLS within a reasonable timeframe, so that it can be implemented and shipped by the vendors in this century? (I don't think that starting from scratch and replacing SSL makes much sense, since it's just one huge flaw ...)

re:
http://www.garlic.com/~lynn/aadsm28.htm#15 Dutch Transport Card Broken
http://www.garlic.com/~lynn/aadsm28.htm#16 Dutch Transport Card Broken

aka ... that was part of the relying-party-only certificates from the mid-90s:
http://www.garlic.com/~lynn/subpubkey.html#rpo

i.e. the x.509 identity digital certificates from the early 90s were becoming more and more overloaded with personal information ... and by the mid-90s, lots of institutions were starting to realize all that personal information represented significant privacy and liability issues ... and the RPO digital certificates were born. However, it was trivial to demonstrate that (for all those business processes) the digital certificates were redundant and superfluous (however, there was some amount of industry brainwashing that digital certificates were mandatory ... especially if digital signatures were used ... even if they served no useful purpose). This also showed up in work on pk-init for kerberos supporting digital signature authentication ... 
and got into the confused mess with redundant and superfluous digital certificates
http://www.garlic.com/~lynn/subpubkey.html#kerberos
and similarly digital signatures for radius
http://www.garlic.com/~lynn/subpubkey.html#radius
(between kerberos and radius, they represent possibly the majority of authentication in the world today)

part of the confusion regarding the necessity for digital certificates could be seen in the X9F financial standards work ... the appending of even a relying-party-only digital certificate (lacking any personal information) could represent a factor of 100 times payload bloat
http://www.garlic.com/~lynn/subpubkey.html#bloat
for a nominal electronic payment transaction (and also 100 times processing bloat).

as a result, there was some standardization effort looking at "compressed" (relying-party-only) digital certificates (even though they were serving no useful purpose), attempting to get the payload bloat down to possibly only 5-10 times (instead of 100 times). I took the opportunity to demonstrate that it would be logically possible to compress such digital certificates to zero bytes ... totally eliminating the payload bloat. then rather than advocating the elimination of totally useless, redundant and superfluous digital certificates
http://www.garlic.com/~lynn/subpubkey.html#certless
there could be an infrastructure that mandated zero-byte digital certificates appended to every transaction.
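The payload-bloat factor described above is simple arithmetic. The byte counts below are illustrative assumptions only (not measured values from the X9F work), chosen to show how appending a certificate to a small payment transaction reaches roughly 100-times bloat:

```python
# Illustrative arithmetic for the payload-bloat argument above.
# Both byte sizes are assumptions for the sake of the sketch.
transaction_bytes = 60     # a nominal electronic payment transaction
certificate_bytes = 6_000  # a nominal appended digital certificate

bloat = (transaction_bytes + certificate_bytes) / transaction_bytes
print(round(bloat))  # → 101, i.e. on the order of the 100x claimed above
```

The "zero-byte certificate" punchline then falls out directly: set `certificate_bytes = 0` and the bloat factor is exactly 1.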
RE: Dutch Transport Card Broken
> Folks on this list and its progenitors have long noted that cryptography is a matter of economics. Agreed, but using an insecure technology doesn't make sense from even an economic perspective. They spent enough money that they could have implemented a secure system, but instead, made two fundamental errors: 1.) The cost of fraud is probably much less than the cost of the system - 2 billion. So, even if the system were completely secure, they still might have been better off using paper tickets and the honor system. >From all indications, there were no cost controls on this project, so it seems likely that the technology was not chosen because of technical reasons or economic reasons, but rather, because someone was familiar with it. Perhaps it was suggested by a politician, and his cronies made it their mission to make it happen. Perhaps someone thought that it would impress visitors; maybe it was a matter of national pride. 2.) The implementation was insecure. Yes, there were probably technical factors involved, but for the cost of the project, they could have implemented a secure system, using other means if necessary. The problem, as I see it, was not an economic one, but rather, that the developers relied on the secrecy of the algorithm for security, rather than the size of the key. Even unpaid, open-source developers have produced secure systems for far less than the Dutch spent simply because they followed good cryptographic design guidelines. The question about mag strip versus RFID versus physical-contact readers is a valid one. For 2 billion, the cost/convenience difference between radio and contact cards would have to be rather large to justify implementing an insecure system. Even a swipe time of 100 ms is enough to implement a secure solution. I find it very unlikely that a competent engineering firm could not implement this in a reliable, secure, and fast manner given this project's budget. 
If the assertions are correct - that the subway is used 1,000,000 times (or by 1,000,000 people?) a year - spending 2 billion on the fare system means approximately 2,000 per user per year. For the math types, that's ~5.50 per day just to pay for the fare system, not to mention the cost of electricity, trains, maintenance, etc. How many people spend more than 5.50 per day on train/subway/bus fare?

This system, and its attendant costs - though obsolete even before its inception - will probably be amortized over a few decades. Which is why fraud is a very important issue. In that time frame, it is very likely that the criminal underground could produce, and profit from, counterfeit cards on a large scale. Unlike turnstile jumpers, fraud of this kind could easily become so widespread that the subway system operates at a significant loss. A turnstile jumper is easily caught; a rider with a cloned card is virtually undetectable (without expensive upgrades to the system).

If this system had been securely implemented, we might be able to know whether the fraud prevention would ever have exceeded the 2 billion cost of the system; but because it isn't, the Dutch have essentially flushed the money into the sewer. And, bringing economics back into the picture, the purpose of the Mifare system is *to prevent fraud*. I seriously doubt that such a system - especially now that it is broken - will eliminate 2 billion worth of fraud. It seems the Dutch would have been better off simply issuing paper tickets and relying on the honor system. Most people are honest; the purpose of the ticket system is to keep people that way. Unfortunately, it fails from both perspectives: it isn't economically viable, and neither is it secure. 
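The per-rider arithmetic in the post checks out under its own (unverified) assumptions - 2 billion spent, 1,000,000 uses per year, amortized over one year:

```python
# Back-of-the-envelope check of the figures above; both inputs are the
# post's claims, not verified data, and the currency unit is unspecified.
system_cost = 2_000_000_000  # claimed project cost
uses_per_year = 1_000_000    # claimed annual usage

cost_per_use_year = system_cost / uses_per_year  # per user, per year
cost_per_day = cost_per_use_year / 365

print(cost_per_use_year)       # → 2000.0
print(round(cost_per_day, 2))  # → 5.48, i.e. the ~5.50/day in the post
```

Amortizing over the "few decades" mentioned above would of course divide these figures by the number of years, which is the post's reason fraud over the system's lifetime matters more than year-one cost.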
Re: Dutch Transport Card Broken
"James A. Donald" <[EMAIL PROTECTED]> writes: > James A. Donald: >>> SSL is layered on top of TCP, and then one layers >>> one's actual protocol on top of SSL, with the result >>> that a transaction involves a painfully large number >>> of round trips. > > Richard Salz wrote: >> Perhaps theoretically painful, but in practice this is >> not the case; commerce on the web is the >> counter-example. > > The delay is often humanly perceptible. If humanly > perceptible, too much. The initial delay in connecting is usually DNS related, not SSL related, and is often experienced even in ordinary http: web surfing. I don't think that the delays involved in the SSL handshake are particularly perceptible amidst the other delays involved in connection setup, unless you're on a very high delay network like one of the older cellular data systems that are now going away anyway. Protocol hacks also make subsequent connections to the same server quite fast. In any case, although SSL has some compromises in the design, it is pretty good overall, and I can't see a good reason why one would pick something else in almost any ordinary situation. A real expert might find corner cases where it is not suitable, but there are very few experts out there, though lots of people who incorrectly think they are. I've seen idiots produce many things that were slower and had horrible security properties, all the while costing far more in software development, because they thought they knew better -- I've never seen anyone actually do better, though. For practical purposes, the rule is "don't use something else", and "if you think you're smart enough to do better, you almost certainly aren't". (No, I'm not a fan of X.509 certs, but those are not core to the protocol, and you can think of them as nothing more than a fancy key container format if you like. Key management is not addressed by SSL, so there is no reason that fixing key management has anything to do with SSL per se.) 
I'm sure you're going to disagree with me, James, but I won't be responding -- I don't think you're right, but I also see no reason to beat a dead horse. My opinion (and just about everyone else's) is well known. We live in a world where you are free to have a dissenting view.

-- 
Perry E. Metzger  [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
I don't disagree with your posting in general. I will note one thing: "Steven M. Bellovin" <[EMAIL PROTECTED]> writes: > A transit system has to move people. For all that the New York City > Metrocard works, it's slower than a contactless wireless system. As a consultant, I happen to have a lot of ID badges. I've used contactless systems for entry at several firms on a regular basis. I've experienced the equivalent of "re-swipe" problems even with the contactless systems -- that is, I've been forced to wave the card past the reader more than once. I'm told that similar issues can be found in other RFID systems. Although I will not disagree that the only important criterion for a transit system is "will we maximize overall economic efficiency with this design choice", I'm still far from certain that contactless is always going to be faster. It could in theory be faster -- whether that theory can be reduced to practice is a different question. (As an aside, I'll also point out that, in the NYC transit system, it is fairly rare that the "rate limiting step" is the speed of turnstile reads. Far more often, limited space on stairwells, limited numbers of turnstiles (which are used both for entry and exit), etc., seem to be the limiting factor on how fast people can flow onto and off of the platforms.) I want repeat that I don't disagree with you that all of this is about economics first, and the security level and costs have to take that into consideration. We are in violent agreement there. A $100 but "perfect" entry token is going to be worthless for most transit systems, and an attack that costs a system a few dollars a year at most is unlikely to be worth closing. (Indeed, the Metrocard system isn't perfect, in that you can clone cards -- you just can't steal more than a trivial sum before the card will be turned off, so no one bothers.) 
My main point here was, in fact, quite related to yours, and one that we make over and over again -- innovation in such systems for its own sake is also not economically efficient or engineering smart. If an existing system works reasonably well and you can use it off the shelf without paying development and other costs, why not use it? I find the fact that nearly every city in the world seems to have a custom-designed electronic fare system somewhat peculiar -- I'm not surprised that several such systems might exist, but surely every city in the world does not need to sink the costs of custom development of an entire fare system. The Dutch apparently sunk vast sums into the development of a brand-new fare card system -- one questions what requirements could not have been met with one of the several hundred existing systems.

-- 
Perry E. Metzger  [EMAIL PROTECTED]
Re: Fixing SSL (was Re: Dutch Transport Card Broken)
At Wed, 30 Jan 2008 11:25:04 +0100, Philipp Gühring wrote:
>
> Hi,
>
> > SSL key distribution and management is horribly broken,
> > with the result that everyone winds up using plaintext
> > when they should not.
>
> Yes, sending client certificates in plaintext while claiming that SSL/TLS is
> secure doesn't work in a world of phishing and identity theft anymore.

Huh? What are you claiming the problem with sending client certificates in plaintext is (as if anyone uses client certificates anyway)? That the phisher gets to see the client's identity? So what? It doesn't let them impersonate the client to anyone. Certificates shouldn't contain sensitive information anyway.

> We have the paradox situation that I have to tell people that they should use
> HTTPS with server-certificates and username+password inside the HTTPS
> session, because that's more secure than client certificates ...

No it isn't more secure.

> Does anyone have an idea how we can fix this flaw within SSL/TLS within a
> reasonable timeframe, so that it can be implemented and shipped by the
> vendors in this century?

This gets discussed on the TLS mailing list occasionally, but the arguments for making this change aren't very convincing. If you have an actual credible security argument you should post it to [EMAIL PROTECTED]

> > We really do need to reinvent and replace SSL/TCP,
> > though doing it right is a hard problem that takes more
> > than morning coffee.
>
> TCP could need some stronger integrity protection. 8 bits of checksum isn't
> enough in reality. (1 out of 256 broken packets gets injected into your TCP
> stream.) Does IPv6 have a stronger TCP?

Whether this is true or not depends critically on the base rate of errors in packets delivered to TCP by the IP layer, since the rate of errors delivered to SSL is 1/256th of those delivered to the TCP layer. Since link-layer checksums are very common, as a practical matter errored packets getting delivered to protocols above TCP is quite rare. 
-Ekr - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
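Ekr's base-rate argument can be put into numbers. The sketch below is not from the thread; every rate in it is an illustrative assumption (the 1/256 miss rate is the thread's own figure; the real TCP checksum is 16 bits wide, so its miss rate for random corruption is closer to 1/65536):

```python
# Back-of-envelope model of the base-rate argument: the rate of
# corrupted data reaching SSL is the rate of corrupted packets that
# reach TCP at all, times the TCP checksum's miss rate.

def undetected_rate(link_error_rate, link_detect_fraction, checksum_miss):
    """Fraction of packets whose corruption survives both the
    link-layer CRC and the TCP checksum."""
    reaches_tcp = link_error_rate * (1 - link_detect_fraction)
    return reaches_tcp * checksum_miss

# Assumed: 1 in 10,000 packets corrupted on the wire, a link-layer
# CRC (e.g. Ethernet CRC-32) catching 99.99% of those, and the
# thread's 1-in-256 checksum miss rate.
rate = undetected_rate(1e-4, 0.9999, 1 / 256)
print(rate)  # ~3.9e-11 per packet
```

With any plausible link-layer CRC in the path, the corruption rate seen above TCP is vanishingly small, which is the point being made.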
RE: Dutch Transport Card Broken
James A. Donald: > >> SSL is layered on top of TCP, and then one layers > >> one's actual protocol on top of SSL, with the result > >> that a transaction involves a painfully large number > >> of round trips. Richard Salz wrote: > Perhaps theoretically painful, but in practice this is > not the case; commerce on the web is the > counter-example. James A. Donald: > The delay is often humanly perceptible. If humanly > perceptible, too much. I respectfully disagree - I'd argue that a short wait is actually more reassuring to the average user (Hey! The System's checking me out!) than an instantaneous connection would be. Adding in a false wait (a nice pop-up, a progress bar and a snake-oily security message) would be even better... Regards, Jim Cheesman - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
At Wed, 30 Jan 2008 09:04:37 +1000, James A. Donald wrote: > > Ivan Krstic' wrote: > > Some number of these muppets approached me over the > > last couple of years offering to donate a free license > > for their excellent products. I used to be more polite > > about it, but nowadays I ask that they Google the > > famous Gutmann Sound Wave Therapy[0] and mail me > > afterwards. > > Gutmann Sound Wave Therapy: Gutmann recommends: > : : Whenever someone thinks that they can replace > : : SSL/SSH with something much better that they > : : designed this morning over coffee, their > : : computer speakers should generate some sort > : : of penis-shaped sound wave and plunge it > : : repeatedly into their skulls until they > : : achieve enlightenment. > > On SSL, Gutmann is half wrong: > > SSL key distribution and management is horribly broken, > with the result that everyone winds up using plaintext > when they should not. > > SSL is layered on top of TCP, and then one layers one's > actual protocol on top of SSL, with the result that a > transaction involves a painfully large number of round > trips. > > We really do need to reinvent and replace SSL/TCP, > though doing it right is a hard problem that takes more > than morning coffee. I can't believe I'm getting into this with James. Ignoring the technical question of "broken", I know of no evidence whatsoever that round trip latency is in any way a limiting factor for people to use SSL/TLS. I've heard of people resisting using SSL for performance concerns, but they're almost always about the RSA operation on the server (and hence the cost of server hardware). If you have some evidence I'd be interested in hearing it. -Ekr - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
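The round-trip cost being debated can be made concrete with a toy latency model (all numbers are hypothetical; classic TLS adds roughly two round trips of handshake before application data flows):

```python
# Toy model of one page fetch: total time is round-trip time
# multiplied by the number of serialized round trips. Numbers are
# illustrative, not measurements.

def fetch_time_ms(rtt_ms, tls_round_trips=2, app_round_trips=1):
    tcp_handshake = 1  # SYN / SYN-ACK / ACK costs ~one RTT before data
    total_rtts = tcp_handshake + tls_round_trips + app_round_trips
    return total_rtts * rtt_ms

plain = fetch_time_ms(50, tls_round_trips=0)  # HTTP over bare TCP
tls = fetch_time_ms(50)                       # HTTPS, full handshake
print(plain, tls)  # 100 200
```

At a 50 ms RTT the full handshake costs an extra tenth of a second, which frames the disagreement: perceptible to some, negligible to others, and in any case unrelated to the server-side RSA cost that actually drove deployment resistance.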
Fixing SSL (was Re: Dutch Transport Card Broken)
Hi, > SSL key distribution and management is horribly broken, > with the result that everyone winds up using plaintext > when they should not. Yes, sending client certificates in plaintext while claiming that SSL/TLS is secure doesn't work in a world of phishing and identity theft anymore. We have the paradox situation that I have to tell people that they should use HTTPS with server-certificates and username+password inside the HTTPS session, because that's more secure than client certificates ... Does anyone have an idea how we can fix this flaw within SSL/TLS within a reasonable timeframe, so that it can be implemented and shipped by the vendors in this century? (I don't think that starting from scratch and replacing SSL makes much sense, since it's just one huge flaw ...) > SSL is layered on top of TCP, and then one layers one's > actual protocol on top of SSL, with the result that a > transaction involves a painfully large number of round > trips. SSL already looks quite round-trip optimized to me (at least the key-agreement part). > We really do need to reinvent and replace SSL/TCP, > though doing it right is a hard problem that takes more > than morning coffee. TCP could need some stronger integrity protection. 8 Bits of checksum isn't enough in reality. (1 out of 256 broken packets gets injected into your TCP stream) Does IPv6 have a stronger TCP? > As discussed earlier on this list, layering induces > excessive round trips. The SSL implementations I analyzed behaved quite nicely; I didn't notice any round trip problems there. (But feel free to send me a traffic capture file that shows the problem) I once implemented SSL over GSM data channel (without PPP and without TCP), and discovered that SSL needs better integrity protection than raw GSM delivers. (I am quite sure that's why people normally run PPP over GSM channels ...) SSH has the same problems.
It also assumes an active attack in case of integrity problems of the lower layer, and terminates the connection. > Layering communications > protocols is analogous to having a high level > interpreter written in a low level language. What we > need instead of layering is a protocol compiler, > analogous to the Microsoft IDL compiler. The Microsoft > IDL compiler automatically generates a C++ interface > that correctly handles run time version negotiation, > which hand generated interfaces always screw up, with > the result that hand generated interfaces result in > forward and backward incompatibility, resulting in the > infamous Microsoft DLL hell. Similarly we want a > compiler that automatically generates secure message > exchange and reliable transactions from unreliable > packets. (And of course, run time version negotiation) Sounds like an interesting idea to me. Best regards, Philipp Gühring - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
James A. Donald: >> SSL is layered on top of TCP, and then one layers >> one's actual protocol on top of SSL, with the result >> that a transaction involves a painfully large number >> of round trips. Richard Salz wrote: > Perhaps theoretically painful, but in practice this is > not the case; commerce on the web is the > counter-example. The delay is often humanly perceptible. If humanly perceptible, too much. > The benefits of layering far outweigh the perceived > gains of just merging it all together into one glob. > For example, the ability to replace layers, or replace > them by just dropping in a new library. Compilation would provide the same benefits, and a fair bit more - such as built in protocol negotiation, rather than protocol negotiation being reinvented ad hoc in a different and incompatible way each time, and bolted on after the fact in a different way each time. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
> SSL is layered on top of TCP, and then one layers one's > actual protocol on top of SSL, with the result that a > transaction involves a painfully large number of round > trips. Perhaps theoretically painful, but in practice this is not the case; commerce on the web is the counter-example. The benefits of layering far outweigh the perceived gains of just merging it all together into one glob. For example, the ability to replace layers, or replace them by just dropping in a new library. /r$ -- STSM, DataPower Chief Programmer WebSphere DataPower SOA Appliances http://www.ibm.com/software/integration/datapower/ - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
> Why require contactless in the first place? > > Is swiping one's card, credit-card style too difficult for the average > user? I'm thinking two parallel copper traces on the card could be > used to power it for the duration of the swipe, with power provided > by the reader. Why, in a billion-dollar project, one must use COTS > RFIDs - with their attendant privacy and security problems - is > beyond me. > > A little ingenuity would have gone a long way. > OPs deliberately elided. This posting (and several others in this thread) disturb me. Folks on this list and its progenitors have long noted that cryptography is a matter of economics. That is, cryptography and security aren't absolute goals; rather, they're tools for achieving something else. The obvious answers in this case are "prevent fare fraud" or "make money", and even those would suffice. However, there are other issues less easily monetized, such as "make the trams and buses run efficiently". A security system doesn't have to be perfect. Rather, it has to be good enough that you save more than you lose via the holes, including the holes you know about up front. Spending more than you have to is simply bad engineering. Speaking as an engineer, rather than as a scientist, the real failure mode is too high a net loss. As a cryptographer and security guy, I'd rather there were no loss -- but that's not real. A transit system has to move people. For all that the New York City Metrocard works, it's slower than a contactless wireless system. How much longer will it take people to board trams with a stripe reader than with a contactless smart card? What is your power budget (which affects range)? Even leaving out the effect that delays have on ridership, a transit system that wants to move N people needs more units if the latency per rider is above a certain threshold. Let's take a closer look at the New York system, since it was touted as superior. 
It's optimized for subways, not buses, which has several implications. (Subway ridership in New York is twice bus ridership -- see http://www.crainsnewyork.com/apps/pbcs.dll/article?AID=/20070223/FREE/70223008/1066) First, subway turnstiles are much more easily used as part of an online system than are bus fare card readers. The deployment started in 1994, when cellular data simply wasn't an option, based on cost, bandwidth, availability, and much more. Second, on a subway you use your fare card well in advance of boarding; there is thus little latency effect on the system. Third, wireless is *still* faster -- according to some reports (http://www.dslreports.com/forum/r19222677-The-Next-MetroCard), the MTA is considering replacing the current system with a wireless one. Online systems have another issue: they require constant communication to a high-availability server. When that's not an option (i.e., New York buses, or subway turnstiles when the server is down), the system has to fall back to some other scheme. This scheme is more restrictive, precisely because of the fraud issue. Back when I was in high school, some students got bus passes. I recall a frequent sight: those who had boarded early moving to the back of the bus and handing their passes to other students still waiting to board the bus. Replay worked well against an overloaded driver... Metrocards don't have that failure mode -- but the failure mode they do have is a limitation on how many times they can be used in a short time interval. This affects, for example, a family of five or more trying to travel on a single card, even on subways. How much of this applies to the Dutch farecards? I have no idea. But this group is trying to *engineer* a system without looking at costs and other constraints. That leads to security by checklist, an all-too-common failing. Systems like this have two primary failure modes -- "failure" in the sense of losing more money (or time, or what have you) than anticipated. 
First, the designers may not have understood the available technology and its limitations. That was certainly the case with WEP; I suspect it's the case here, but I don't know. Even so, it is far from clear that exploitation of the hole will have an economic impact; that's as much a sociological question as a technical one. (Maybe the incremental cost per card of better crypto is €0.01. One web site I found put tram ridership in Amsterdam at >1,000,000/year (http://blog.wired.com/cars/2007/10/trams-dominate-.html), which means that the cost might be €10,000/year. How many riders will try to cheat the system? Enough to be an issue? I don't know -- but that's precisely my point; I don't know and I doubt very much that most other posters here know. That said, I do suspect that stronger crypto would be economical.) The second failure mode comes from misunderstanding the threat model. That's why the old American AMPS cellular phones were subject to cloning attacks. It was *not* that the designers d
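The back-of-the-envelope economics above can be spelled out as a toy calculation. Every figure below is a guess for illustration, not data about the Dutch system:

```python
# A security upgrade pays off only if the fraud it prevents exceeds
# what the upgrade costs -- the "cryptography is economics" point.

def upgrade_pays_off(rides_per_year, extra_cost_per_card,
                     cards_per_year, fraud_rate, fare):
    upgrade_cost = cards_per_year * extra_cost_per_card
    fraud_loss = rides_per_year * fraud_rate * fare
    return fraud_loss > upgrade_cost, upgrade_cost, fraud_loss

# Assumed: 1,000,000 rides/year, EUR 0.01 extra per card on
# 1,000,000 cards/year, 1% of rides cheated at a EUR 2 fare.
ok, cost, loss = upgrade_pays_off(1_000_000, 0.01, 1_000_000, 0.01, 2.0)
print(ok, cost, loss)  # True 10000.0 20000.0
```

With these made-up inputs the upgrade wins, but halve the fraud rate and it loses; the whole argument turns on numbers most posters (as the message says) simply don't have.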
Re: Dutch Transport Card Broken
Ivan Krstic' wrote: > Some number of these muppets approached me over the > last couple of years offering to donate a free license > for their excellent products. I used to be more polite > about it, but nowadays I ask that they Google the > famous Gutmann Sound Wave Therapy[0] and mail me > afterwards. Gutmann Sound Wave Therapy: Gutmann recommends: : : Whenever someone thinks that they can replace : : SSL/SSH with something much better that they : : designed this morning over coffee, their : : computer speakers should generate some sort : : of penis-shaped sound wave and plunge it : : repeatedly into their skulls until they : : achieve enlightenment. On SSL, Gutmann is half wrong: SSL key distribution and management is horribly broken, with the result that everyone winds up using plaintext when they should not. SSL is layered on top of TCP, and then one layers one's actual protocol on top of SSL, with the result that a transaction involves a painfully large number of round trips. We really do need to reinvent and replace SSL/TCP, though doing it right is a hard problem that takes more than morning coffee. As discussed earlier on this list, layering induces excessive round trips. Layering communications protocols is analogous to having a high level interpreter written in a low level language. What we need instead of layering is a protocol compiler, analogous to the Microsoft IDL compiler. The Microsoft IDL compiler automatically generates a C++ interface that correctly handles run time version negotiation, which hand generated interfaces always screw up, with the result that hand generated interfaces result in forward and backward incompatibility, resulting in the infamous Microsoft DLL hell. Similarly we want a compiler that automatically generates secure message exchange and reliable transactions from unreliable packets. 
(And of course, run time version negotiation) - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
Harald Koch <[EMAIL PROTECTED]> writes: > Crawford Nathan-HMGT87 wrote: >> Why require contactless in the first place? >> >> Is swiping one's card, credit-card style too difficult for the average >> user? > > As compared to slapping your wallet on the reader? yes. > > I swipe my Visa / debit / Tim Horton's cards regularly. With the > plethora of bad reader technology out there, no matter how practiced I > am it sometimes takes 2-3 tries to get a good read. Heck, sales clerks > who swipe hundreds of times per day sometimes have trouble. And that's > with a relatively easy to read magnetic stripe... Here in New York City, we use a swipe based system called Metrocard. New Yorkers are not known for passive, accepting behavior, and yet no one seems to complain about the swipe system -- it seems to work more than well enough, and I have to double swipe at most once every few dozen times. The system has held up quite well to quite determined attacks, and the cards themselves are insanely cheap to produce. I therefore expect no one else will ever use the technology -- anything cheap, secure and well tested in the field can't possibly see wide adoption. -- Perry E. Metzger[EMAIL PROTECTED] - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
Crawford Nathan-HMGT87 wrote: Why require contactless in the first place? Is swiping one's card, credit-card style too difficult for the average user? As compared to slapping your wallet on the reader? yes. I swipe my Visa / debit / Tim Horton's cards regularly. With the plethora of bad reader technology out there, no matter how practiced I am it sometimes takes 2-3 tries to get a good read. Heck, sales clerks who swipe hundreds of times per day sometimes have trouble. And that's with a relatively easy to read magnetic stripe... GO Transit here in Toronto ran a pilot program with contactless stored-value fare cards several years ago, which worked quite well; I'm sure technology details are available via Google. Alas, the project got squashed by a political movement for a Greater Toronto Area fare card project (which still hasn't gone anywhere 5-6 years later...). -- Harald - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
RE: Dutch Transport Card Broken
Why require contactless in the first place? Is swiping one's card, credit-card style too difficult for the average user? I'm thinking two parallel copper traces on the card could be used to power it for the duration of the swipe, with power provided by the reader. Why, in a billion-dollar project, one must use COTS RFIDs - with their attendant privacy and security problems - is beyond me. A little ingenuity would have gone a long way. -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Karsten Nohl Sent: Monday, January 28, 2008 12:41 AM To: Aram Perez Cc: Cryptography Subject: Re: Dutch Transport Card Broken > Not to defend the designers in any way or fashion, but I'd like to > ask, How much security can you put into a plastic card, the size of a > credit card, that has to perform its function in a secure manner, all > in under > 2 seconds (in under 1 second in parts of Asia)? And it has to do this > while receiving its power via the electromagnetic field being > generated by the reader. You are raising a very interesting point. The constraints under which RFIDs and contactless smart-cards need to operate seem to vary widely depending on the application. The Mifare Classic cards, for example, authenticate in under 2 ms, but wouldn't need to be that fast as you point out. Their crypto is also very small, much smaller even than their flash memory. What good is it, though, to have a lot of memory that is badly protected? Last, the power consumption of the Mifare cards is certainly lower than others, which doesn't matter, though, in the near-field where even micro-processor based designs can operate. This is where contactless smart-cards and RFIDs often get confused. Only for the latter is power consumption a limiting constraint. To answer your question directly: Within the limits of Mifare Classic (48-bit cipher, 16-bit RNG), one can build a 64-bit cipher that generates 'random' numbers internally.
Within the same limits, one could almost implement TEA, which at least has undergone its share of peer-review. Again: Trading some of the memory for this much higher level of security would certainly have been worth it. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
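For reference, TEA, the peer-reviewed alternative mentioned above, really is tiny: a 64-bit block, a 128-bit key, and 32 cycles of adds, shifts and XORs (Wheeler and Needham, 1994). A straightforward sketch follows; whether it fits Mifare Classic's gate budget is the poster's claim, not something shown here, and TEA has known related-key weaknesses that its successor XTEA was designed to fix:

```python
# Textbook TEA: 64-bit block as two 32-bit words, 128-bit key as
# four 32-bit words, 32 Feistel-like cycles.

DELTA = 0x9E3779B9  # derived from the golden ratio
MASK = 0xFFFFFFFF   # keep arithmetic in 32 bits

def tea_encrypt(v0, v1, k):
    s = 0
    for _ in range(32):
        s = (s + DELTA) & MASK
        v0 = (v0 + (((v1 << 4) + k[0]) ^ (v1 + s) ^ ((v1 >> 5) + k[1]))) & MASK
        v1 = (v1 + (((v0 << 4) + k[2]) ^ (v0 + s) ^ ((v0 >> 5) + k[3]))) & MASK
    return v0, v1

def tea_decrypt(v0, v1, k):
    s = (DELTA * 32) & MASK
    for _ in range(32):
        v1 = (v1 - (((v0 << 4) + k[2]) ^ (v0 + s) ^ ((v0 >> 5) + k[3]))) & MASK
        v0 = (v0 - (((v1 << 4) + k[0]) ^ (v1 + s) ^ ((v1 >> 5) + k[1]))) & MASK
        s = (s - DELTA) & MASK
    return v0, v1

key = (0x0123, 0x4567, 0x89AB, 0xCDEF)
ct = tea_encrypt(0xDEADBEEF, 0xCAFEBABE, key)
assert tea_decrypt(ct[0], ct[1], key) == (0xDEADBEEF, 0xCAFEBABE)
```

The whole round function is a handful of 32-bit operations, which is why it keeps coming up as the benchmark for "smallest cipher anyone has actually reviewed".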
Re: Dutch Transport Card Broken
On Jan 25, 2008, at 4:27 PM, Perry E. Metzger wrote: However, you should be very skeptical when someone claims that they "need" to use a home grown crypto algorithm or that they "need" to use a home grown protocol instead of a well proven one. I'm beginning to suspect that more often than not, this nonsense is a result of market forces rather than idiot technologists. In my experience, senior decision-maker types outside of the computer industry (and even within it, but perhaps a tad less so) are sufficiently non-technical as to never have heard of Kerckhoffs' principle -- and to disbelieve it when they do, since it opposes their intuition of what makes for secure systems. Various companies (or departments) then emerge peddling their home-grown crypto and trumpeting the fact that it's proprietary as a feature, commonly going hand in hand with stupidly large key sizes. Some number of these muppets approached me over the last couple of years offering to donate a free license for their excellent products. I used to be more polite about it, but nowadays I ask that they Google the famous Gutmann Sound Wave Therapy[0] and mail me afterwards. I've never heard back. [0] Last paragraph, http://diswww.mit.edu/bloom-picayune/crypto/14238 -- Ivan Krstić <[EMAIL PROTECTED]> | http://radian.org - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
Not to defend the designers in any way or fashion, but I'd like to ask, How much security can you put into a plastic card, the size of a credit card, that has to perform its function in a secure manner, all in under 2 seconds (in under 1 second in parts of Asia)? And it has to do this while receiving its power via the electromagnetic field being generated by the reader. You are raising a very interesting point. The constraints under which RFIDs and contactless smart-cards need to operate seem to vary widely depending on the application. The Mifare Classic cards, for example, authenticate in under 2 ms, but wouldn't need to be that fast as you point out. Their crypto is also very small, much smaller even than their flash memory. What good is it, though, to have a lot of memory that is badly protected? Last, the power consumption of the Mifare cards is certainly lower than others, which doesn't matter, though, in the near-field where even micro-processor based designs can operate. This is where contactless smart-cards and RFIDs often get confused. Only for the latter is power consumption a limiting constraint. To answer your question directly: Within the limits of Mifare Classic (48-bit cipher, 16-bit RNG), one can build a 64-bit cipher that generates 'random' numbers internally. Within the same limits, one could almost implement TEA, which at least has undergone its share of peer-review. Again: Trading some of the memory for this much higher level of security would certainly have been worth it. - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
The per-card cost need not be such a big problem. Singapore has a proximity-card-based system. They use the same card both for the long-term cards and for the single-use cards. There is a S$ 2 (IIRC) deposit on the card, which is refunded after the card is used. Waste not want not! /ji - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
RE: Dutch Transport Card Broken
Oberthur Card Systems has a card designed for transit use with 3DES; according to their datasheet (registration required, http://www.oberthurcs.com/get_downloadsection_file.aspx?id=43&otherid=95&typeid=5) it's certainly fast enough. Interestingly, they also make the card that's failed so spectacularly here... Regards, Jim Cheesman -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Aram Perez Sent: Friday, January 25, 2008 5:59 To: Cryptography Subject: Re: Dutch Transport Card Broken Hi Folks, > Ed Felten has an interesting post on his blog about a Dutch smartcard > based transportation payment system that has been broken. Among other > foolishness, the designers used a custom cryptosystem and 48 bit keys. Not to defend the designers in any way or fashion, but I'd like to ask, How much security can you put into a plastic card, the size of a credit card, that has to perform its function in a secure manner, all in under 2 seconds (in under 1 second in parts of Asia)? And it has to do this while receiving its power via the electromagnetic field being generated by the reader. Regards, Aram Perez - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
Moin, On Thu, 24 Jan 2008 20:58:38 -0800, Aram Perez wrote: > Not to defend the designers in any way or fashion, but I'd like to > ask, How much security can you put into a plastic card, the size of > a credit card, that has to perform its function in a secure manner, > all in under 2 seconds (in under 1 second in parts of Asia)? And it > has to do this while receiving its power via the electromagnetic > field being generated by the reader. Hmm, how about Triple-DES for starters? :-) There are cards using 3DES (called Mifare DESfire) available from the same manufacturer (NXP) as the Mifare Classic cards with the proprietary algorithm that we looked at. Apparently the main difference is that DESfire cards cost 1.50 EUR per piece while Classic cards are at 0.50 EUR per piece. Other public transport systems, such as Madrid, did the sensible thing and chose DESfire: http://www.nxp.com/news/identification/articles/otm81/madrid/ -- Henryk Plötz Greetings from Berlin ~~ Help Microsoft fight software piracy: Give Linux to a friend today! ~
Re: Dutch Transport Card Broken
Aram Perez wrote: Not to defend the designers in any way or fashion, but I'd like to ask, How much security can you put into a plastic card, the size of a credit card, that has to perform its function in a secure manner, all in under 2 seconds (in under 1 second in parts of Asia)? And it has to do this while receiving its power via the electromagnetic field being generated by the reader. we sort of saw that in the mid-90s when we were doing the x9.59 financial standard http://www.garlic.com/~lynn/x959.html#x959 and getting comments that it wasn't possible to have both low cost and high security at the same time. we looked at it and made the semi-facetious statements that we would take a $500 milspec part and aggressively cost reduce it by 2-3 orders of magnitude while improving the security. along the way we got tapped by some in the transit industry to also be able to meet the (then) transit gate requirements (well under 1 second and do it within iso 14443 power profile). part of it was having to walk the whole end-to-end process ... all the way back to chip design and fab manufacturing process ... little drift about walking fab in a "bunny suit" http://www.garlic.com/~lynn/2008b.html#13 we effectively did get it on close to the RFID chip (i.e. the one that they are targeting for UPC) technology curve ... i.e. chip fabrication cost is roughly constant per wafer ... wafer size and circuit size have been leading to higher number of chips per wafer (significantly reducing cost/chip). As circuit size shrank with a corresponding shrinkage in the size of chips (that didn't have corresponding increase in number of circuits) there was a "blip" on the cost/chip curve as the area of the cuts (to separate chips in the wafer) exceeded the (decreasing) chip size. Earlier this decade there was a new cutting process that significantly reduced the "cut" area ...
allowing yield of (small) chips per wafer to continue to significantly increase (allowing pushing close to four orders of magnitude reduction ... rather than 3-4 orders of magnitude reduction). aads chip strawman references http://www.garlic.com/~lynn/x959.html#x959 - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
my impression has been that with the lack of takeup of various kinds of security solutions that were extensively marketed in the 90s ... the current situation has many of those same organizations heavily involved in behind the scenes lobbying. we saw some of that nearly a decade ago when we were brought in to help wordsmith the cal. state electronic signature legislation, which led to also being brought in on the federal electronic signature legislation ... some past posts http://www.garlic.com/~lynn/subpubkey.html#signature some other references ... Hackers break into transport smart card http://www.dutchnews.nl/news/archives/2008/01/german_hackers_break_transport.php Transport smart card hacked again (update) http://www.dutchnews.nl/news/archives/2008/01/transport_smart_card_hacked_ag.php - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
> How much security can you put into a plastic card, the size of a > credit card, that has to perform its function in a secure manner, all > in under 2 seconds (in under 1 second in parts of Asia)? And it has to > do this while receiving its power via the electromagnetic field being > generated by the reader. The 24C3 presenters to their credit made this exact point. But mixing the 16-bit nonce with the card identifier was an optimization too far. That said, it's a hard problem. Inside Picopass is one of many examples that progress is possible. IMHO as always. Cheers, Scott - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
Aram Perez <[EMAIL PROTECTED]> writes: >> Ed Felten has an interesting post on his blog about a Dutch smartcard >> based transportation payment system that has been broken. Among other >> foolishness, the designers used a custom cryptosystem and 48 bit keys. > > Not to defend the designers in any way or fashion, but I'd like to > ask, How much security can you put into a plastic card, the size of a > credit card, that has to perform its function in a secure manner, all > in under 2 seconds (in under 1 second in parts of Asia)? Several other transit systems have payment cards that have proven remarkably resilient to attack. For example, the NYC "Metrocard" system has been attacked repeatedly without significant breaks (but it does not rely on its cards being tamperproof -- it is an online system using magstripes.) The authors of the paper on the Dutch break claim that it would have been possible to use far more secure means even given the basic design, such as a non-proprietary crypto algorithm and longer keys. I see no real reason to disbelieve this. In any case, if it was not possible to do this with smartcards, existing, well proven mechanisms that are in use in other transit systems could have been adopted. It is not necessary to use an unimplementable architecture when implementable and proven architectures exist. Often we hear of a false need for "engineering tradeoffs" in such circumstances. Engineering tradeoffs do indeed sometimes become critical in security design. However, you should be very skeptical when someone claims that they "need" to use a home grown crypto algorithm or that they "need" to use a home grown protocol instead of a well proven one. Generally these are not "engineering tradeoffs" but reflections of ignorance on the part of the designers. Perry -- Perry E. Metzger[EMAIL PROTECTED] - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
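One way to see why the 48-bit key keeps coming up as the headline problem: exhaustive key search at that size is a back-of-the-envelope exercise. The search rate below is a hypothetical figure chosen for illustration, not a benchmark of any real attack hardware:

```python
# Time to try every key, as a function of key length and a guessed
# search rate. Purely illustrative arithmetic.

def brute_force_days(key_bits, keys_per_second):
    return 2 ** key_bits / keys_per_second / 86_400

# A single (hypothetical) cracker trying one billion keys/second:
print(round(brute_force_days(48, 1e9), 1))       # 3.3 days
print(f"{brute_force_days(128, 1e9):.1e}")       # 3.9e+24 days
```

A 48-bit keyspace falls in days on modest hardware, while each added bit doubles the work; this is the gap between "custom cryptosystem and 48 bit keys" and the well-proven alternatives the message points to.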
Re: Dutch Transport Card Broken
Hi Folks, Ed Felten has an interesting post on his blog about a Dutch smartcard based transportation payment system that has been broken. Among other foolishness, the designers used a custom cryptosystem and 48 bit keys. Not to defend the designers in any way or fashion, but I'd like to ask, How much security can you put into a plastic card, the size of a credit card, that has to perform its function in a secure manner, all in under 2 seconds (in under 1 second in parts of Asia)? And it has to do this while receiving its power via the electromagnetic field being generated by the reader. Regards, Aram Perez - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: Dutch Transport Card Broken
Perry E. Metzger wrote: Ed Felten has an interesting post on his blog about a Dutch smartcard based transportation payment system that has been broken. Among other foolishness, the designers used a custom cryptosystem and 48 bit keys. http://www.freedom-to-tinker.com/?p=1250 The Dutch government paid two billion dollars for stupidity, for foolishness that almost anyone on this list could have told them was foolish. Secret algorithm! - The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]