(Kyle, resend)

> On 4/22/05, Ian Grigg <[EMAIL PROTECTED]> wrote:
>> If that's what has happened, then the core identity
>> of the merchant has in some sense changed, and then
>> we should expect some stuff to happen.  In fact, it
>> is quite critical that we surface this change-of-key
>> event, as that is one of the weaknesses in the
>> infrastructure: a phisher can go to any other CA
>> and trick them into giving out a cert in the name
>> of the targeted merchant.  The only serious way
>> to defend against this is to have the browser cache
>> the cert relationship, then ask the user to examine
>> the cert, and hope that the difference is enough
>> to inspire caution.  Without that defence, PKI has
>> a hole we could drive a ship full of trucks through.
>
> If it's assured by the same CA, is that Good Enough[tm]?
Yes, in the large.  If CAJoe tells you that this cert is Amazon, and
then sends you another cert and says it is also Amazon, you more or
less have to accept that as "as good as it gets", because exactly the
same statement is being made each time; if you don't accept the
second, then why did you accept the first?

(The other thing that makes this a better bet is that if CAJoe was in
fact tricked, he's on the hook for all legs of the transaction, and
he's the one party that can prevent this happening; so there is a
good chance that responsibility will land in his lap.  Not so with
any wider trickster.)

> The 'core identity' -- the first-order data that we use to
> determine the site's identity -- is the encryption key.  However,
> encryption keys also have lifetimes, and life-cycles.  So, we use
> certificates from CAs as our only proof that the encryption key is
> in actuality bound to the identity of the owner of the server.

Right, and given the number of steps involved, and the uncertainty
of the statements being made, you can see that this is not exactly a
robust system.  So coupling these mechanisms with browser-based
relationship monitoring is a much better idea than relying on certs
alone (and the converse is true, too).

>> Also, it turns out there is a much bigger case where
>> key changes are prevalent, and that is in the use of
>> hardware SSL farms.  Larger merchants use lots of
>> certs in hardware, and switch rapidly between them
>> depending on the moment.
>
> That's not too surprising -- up until recently, no CA would issue
> wildcard certs, based on the concept that it would require the
> private key to be in more than one place, which is "bad key-handling
> practice".  An SSL server can only handle so many SSL connections
> at a time, and that number is at least 2 orders of magnitude less
> than the number of non-SSL connections.
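The "browser caches the cert relationship" defence described earlier
is essentially trust-on-first-use pinning.  A minimal sketch of the
idea (the in-memory pin store and the return strings are illustrative,
not any toolbar's actual design):

```python
import hashlib


def fingerprint(der_bytes):
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()


def check_relationship(pins, host, der_bytes):
    """Trust-on-first-use: remember the cert fingerprint seen for each
    host and surface any change-of-key event to the user."""
    fp = fingerprint(der_bytes)
    known = pins.get(host)
    if known is None:
        pins[host] = fp          # first visit: record the relationship
        return "new"
    if known == fp:
        return "unchanged"
    # Key changed: could be a legitimate re-issue, or a phisher who
    # tricked a different CA into issuing a cert for this name --
    # ask the user to examine the cert before proceeding.
    return "changed"
```

In a real browser the DER bytes would come from the TLS handshake
(e.g. via something like Python's ssl.get_server_certificate);
the point is only that a tiny host-to-fingerprint cache is enough to
detect the change-of-key event that certs alone do not surface.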
Bear in mind that this is a "big merchant" problem, and smaller
operations should not be penalised for the CA's and the merchant's
stupidity.

> This scheme breaks SSL session caching, though.  So they're
> throwing money and hardware at a problem that could be solved with
> fewer resources, if they had intelligence in their process.

They should just share the keys.  "Bad key handling" is a lesser
problem than "users being ripped off by phishers because everyone's
too scared to use SSL."

>> In this case, both the trustbar and petname toolbars
>> have developed strategies to deal with this.  For the
>> former, trustbar suggests that you click-to-accept
>> *all* certs from that CA.  For petname, I think Tyler
>> outlined his approach.  Either way, there is some
>> experimentation to be done; we shouldn't see these
>> as the ultimate word, but rather as steps in the
>> right direction of stopping the above truck-liner
>> being driven by phishers, as well as addressing
>> phishing as it is driven at the single-truck level,
>> today.
>
> This brings to mind a question that I have: Is Firefox the best
> place to experiment with issues of this gravity?  I agree that the
> status quo is broken; however, many people rely on Firefox and
> don't really want to play around with experimental code.

You raise an interesting point.  Let me ponder that.

A reasonable body of experimentation has been done; if you look at
the trustbar paper you will see that they have carefully tested
their assumptions (although not in depth and not in large numbers).
There is also a fair bit of other academic work backing up this
direction, and security theory itself agrees with it.

Another issue is that these toolbars work, and work today.  If
unsure, download them and try them.  The petname one is particularly
simple and innocuous once installed; it is hard to see that it would
do any damage.

Opposing that is the notion that these could incur costs for users
in their use of Firefox.
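For readers who haven't tried it, the petname idea boils down to a
tiny mapping from certificate fingerprints to user-chosen names.
This is my own simplified rendering of the concept, not Tyler's
actual implementation; the class and its display strings are
illustrative:

```python
import hashlib


class PetnameStore:
    """User-assigned local names for sites, keyed by cert fingerprint.
    The toolbar shows the petname when a known cert appears, and an
    'unknown' indicator otherwise -- so a phisher's look-alike site,
    presenting a different cert, never displays the familiar name."""

    def __init__(self):
        self.names = {}

    def assign(self, der_bytes, petname):
        """User names a site after deciding to trust it."""
        fp = hashlib.sha256(der_bytes).hexdigest()
        self.names[fp] = petname

    def lookup(self, der_bytes):
        """What the toolbar displays for this connection."""
        fp = hashlib.sha256(der_bytes).hexdigest()
        return self.names.get(fp, "** unknown site **")
```

The strength of the scheme is that the familiar name is bound to the
key the user actually saw, not to whatever string a CA printed into a
cert.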
The easy answers are: phishing costs (mostly) Americans something
like half a billion dollars or more per year.  If Mozilla were to
deliver something that reduced their share of that by 50%, and if
Mozilla's share of the market were 10%, then that's $25 million
saved over the next year.  (These are low figures; I used to use a
billion, but these days another estimate is popular that says half a
billion.)

One could talk about other costs ... but we'd have to talk long and
hard before we got anywhere near the potential savings.  The main
problem for Mozilla is that they don't have an incentive to save
money for their users, so they really don't care if their users are
losing money (most phishing is badly reported, so it is easy to
pretend it doesn't affect us).

> Is there a development tree available for experimentation, akin to
> the odd-numbered variants of Linux and Apache?

I'd suggest that the experimentation "space" is the plugins.  They
already show enough good ideas: Trustbar is well developed, well
thought out, and backed by some academic rigour.  Petnames is based
on a single strong concept that has stood the test of time.  As well
as these two, there are also alternate toolbars by GeoTrust, Comodo
and Netcraft which are experimenting with variations on central
databases.

iang

_______________________________________________
mozilla-crypto mailing list
[email protected]
http://mail.mozilla.org/listinfo/mozilla-crypto
