Ummm... actually, it wasn't meant to be off-list.  Yay Gmail, and
having to fix the headers every time.

(Key:
  belongs to aerowolf (Kyle Hamilton)
> belongs to iang (Ian Grigg)
> > belongs to aerowolf (Kyle Hamilton)
> >> belongs to iang (Ian Grigg)
)

On 4/23/05, Ian Grigg <[EMAIL PROTECTED]> wrote:
> Hi Kyle,
> 
> (was this meant to be off-list?)
> 
> 
> > On 4/22/05, Ian Grigg <[EMAIL PROTECTED]> wrote:
> >>
> >> If that's what has happened, then the core identity
> >> of the merchant has in some sense changed, and then
> >> we should expect some stuff to happen.  In fact, it
> >> is quite critical that we do surface this change of
> >> key event as that is one of the weaknesses in the
> >> infrastructure:  a phisher can go to any other CA
> >> and trick them into giving out a cert in the name
> > of the targeted merchant.  The only serious way
> >> to defend against this is to have the browser cache
> >> the cert relationship, then ask the user to examine
> >> the cert, and hope that the difference is enough
> >> to inspire caution.  Without that defence, PKI has
> >> a hole we could drive a ship full of trucks through.
> >
> > If it's assured by the same CA, is that Good Enough[tm]?
> 
> Yes, in the large.  If CAJoe tells you that this
> cert is amazon, and then sends you another cert
> and says it is also amazon, you more or less have
> to accept that as being "as good as it gets"
> because exactly the same statement is being made
> each time;  if you don't accept the second, then
> why did you accept the first?
> 
> (The other thing that makes this a better bet is
> if in fact CAJoe was tricked, he's on the hook
> for all legs of the transaction, and he's the
> one party that can prevent this happening;  so
> there is a good chance that responsibility can
> land in his lap.  Not so with any wider tricker.)

...which brings us back to the limitation-of-liability issue.  (I
wonder... can we come up with a coherent concept of what
fiscally-trusted CAs should and should not be on the hook for?  Maybe
eventually make that a requirement for inclusion of the CA's cert in
Firefox?)

> 
> > The 'core
> > identity' -- the first-order data that we use to determine the site's
> > identity -- is the encryption key.  However, encryption keys also have
> > lifetimes, and life-cycles.  So, we use certificates from CAs as our
> > only proof that the encryption key is in actuality bound to the
> > identity of the owner of the server.
> 
> Right, and given the number of steps involved, and
> the uncertainty of the statements being made, you
> can see that this is not exactly a robust system.
> So, coupling these mechanisms with browser based
> relationship monitoring is a much better idea than
> relying on certs alone (and the alternate is true,
> too).

The uncertainty of the statements being made comes down to this:
there is no coherent set of statements being made at all.

(pardon me while I go completely off my rocker for a few moments...
this is a completely hypothetical thought exercise, and I have no idea
how useful it might be.)

In an absolutely free market, we'd have no Trent watching out to make
sure that each party has one-and-only-one identity.  Self-signed certs
would be the norm, and the value of keys would go up as the reputation
of the key went up.

(This, by the way, is how authentication via the Freenet project
works... you insert data into the network with the private key, and
people find it via the public key.  If you can insert data that can be
found via the public key, you have the private key.  Thus, the
completely-individual-and-individualized reputation is based on the
data that you put into the medium.)
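The Freenet-style flow described above can be sketched as follows.  This is a toy stand-in, not Freenet's actual crypto: real Freenet verifies a DSA signature over inserted data, while here, to stay dependency-free, the store simply derives the published key from the private key at insert time.  All names are invented for illustration.

```python
import hashlib
import os

# Toy sketch of key-based addressing in the spirit of Freenet's signed
# keys: data is inserted under an address derived from the keypair, and
# only the holder of the private half can add entries under it.

def make_keypair():
    private = os.urandom(32)                       # kept secret
    public = hashlib.sha256(private).hexdigest()   # what you publish
    return private, public

def insert(store, private, data):
    # Deriving the address requires the private key, so only its holder
    # can grow the body of data (i.e. the reputation) under this key.
    public = hashlib.sha256(private).hexdigest()
    store[public] = store.get(public, []) + [data]
    return public

def lookup(store, public):
    # Anyone holding only the public key can find everything inserted
    # under it; producing new entries requires the private key.
    return store.get(public, [])
```

The reputation attached to a key is then exactly the accumulated data retrievable under it, which is the point the paragraph above makes.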

The alternative is to have a "key-signing key" -- use that key to sign
transaction-signing keys, or message-signing keys, or such.  Trust
would be put in the KSK, rather than individual transaction keys.

(i.e., everyone becomes their own CA.)

As time goes on, you start to trust certain people ['trust' defined
as 'expect that they will, in good faith, uphold the commonly-held
protocol without attempting to subvert it' -- not 'trust for
introducing other people', which is a different decision].  So, they
create a transaction key for signing other people's keys, and sign
('certify') the keys of those people they trust.
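The key-signing-key arrangement just described can be sketched as a graph walk: certifications are edges, and a key is trusted if it is reachable from your own KSK within some bound.  The names and the depth limit here are illustrative, not any real PGP structure.

```python
# Minimal sketch of the KSK idea: each party's long-lived key-signing
# key certifies its own working keys and, optionally, other parties'
# KSKs.  Trust is computed by walking certification edges.

signatures = {
    # signer KSK -> set of keys it has certified
    "alice-ksk": {"alice-txn-key", "bob-ksk"},
    "bob-ksk":   {"bob-txn-key", "carol-ksk"},
    "carol-ksk": {"carol-txn-key"},
}

def reachable(root, max_depth=2):
    """Keys trusted by walking certification edges from `root`."""
    trusted, frontier = {root}, [(root, 0)]
    while frontier:
        key, depth = frontier.pop()
        if depth >= max_depth:
            continue  # bound how far introduction-trust propagates
        for signed in signatures.get(key, ()):
            if signed not in trusted:
                trusted.add(signed)
                frontier.append((signed, depth + 1))
    return trusted
```

Note the depth bound is one crude way to keep the two kinds of trust apart: certifying a key is not the same as trusting that key to introduce further keys indefinitely.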

(This is the Web of Trust model.  Note that there are at least 2
different kinds of trust involved, and it could probably be said
that there is an effectively infinite number of kinds of trust.
Attempting to identify and enumerate all of them at design time is
pure folly, and there's no actual sense of 'design' in this model in
any case.  So, you start issuing everyone unique identifiers that
they can branch off themselves, to embed their own meanings.  OIDs,
anyone?  You can get one delegated from the IANA for free, if you're
willing to wait a week or so.  Or I can delegate you one from mine.
(If you're a member of SourceForge, I already have.))

Now, because of how easy it is to make signatures, someone gets the
bright idea that they're going to charge for them.  But what are they
going to do?  Oh, let's validate the identity of the person, and
certify it on their KSK.  (i.e., bind the identity of the person to
the KSK so that people who become part of the network can find people
they know and already trust offline.)

(That's the Certifying Authority model coming into play.)

However, just because the CA has bound that person's offline identity
to their key, there's no reason to trust them any more or less than
anyone else.  'Caveat emptor' -- buyer beware, both of the quality of
goods sold and the reliability of the merchant.  Beware of who you
tell your secrets to, for fear that they might haunt you later when
you least expect it.

(and here's where the role of the Mozilla Foundation and browser
manufacturers comes in.)

As it stands, when a lock icon is presented, it means that a
certificate from a "trusted CA" (meaning, a CA embedded into the
certificate database, with the "trust bit" set) has been presented and
verified.  However, this only means that you're dealing with someone
who has an identity that's asserted, not a reputation to uphold.  Why
should certificates like this be blindly trusted by action of the
developers, with the decision taken out of the hands of the user?

Since phishers rely on being able to look as much like "the real
thing" as possible, and the CAs aren't on the hook for it (right
now)... why should the CAs be given "trust bits"?  Instead, the first
time that the site comes up [and this may be very difficult, given
X.501 DNs and the entire X.509v3 certificate model], pop up a dialog
giving the user the identity that's asserted... and the identity of
the CA that asserted it.  As well, keep track of (and display) how
many times certificates by that CA have been rejected, and how many
times they've been accepted, as a means of determining whether that
CA's assertions should be trusted on a /per-user basis/.
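The per-user tally proposed above could look something like this.  The class and field names are invented for illustration; a real implementation would live in the browser's certificate store and feed the dialog described in the previous paragraph.

```python
from collections import defaultdict

# Sketch of per-user CA scoring: record each accept/reject decision
# against the issuing CA, and surface the tallies the next time that
# CA vouches for a previously-unseen site.

class CATally:
    def __init__(self):
        self.accepted = defaultdict(int)
        self.rejected = defaultdict(int)

    def record(self, ca_name, accepted):
        # Called whenever the user acts on a first-contact dialog.
        if accepted:
            self.accepted[ca_name] += 1
        else:
            self.rejected[ca_name] += 1

    def summary(self, ca_name):
        # The string the dialog would show alongside the asserted identity.
        a, r = self.accepted[ca_name], self.rejected[ca_name]
        return (f"You have accepted {a} and rejected {r} certificates "
                f"asserted by {ca_name!r}.")
```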

The "trust bit" is set on all of those CAs, and the user interface
doesn't present the information except in a very
difficult-to-understand manner.  (I really don't understand needing
to click 6 times in order to get to the DN in the certificate, and I
don't understand not being able to 'walk the chain of trust' to
figure out where the trust is being delegated from.)  Together, these
mean that the Mozilla developers are essentially creating a
"transitive trust" situation -- "I trust that I know who this person
is, so I trust that I can go after them if they misuse my
information."  This, more than anything, hurts the security of
Firefox and NSS in general.

(I know that these are 'prescribed procedures', but I've just
explained why the procedures are horribly insufficient.)
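The "cache the cert relationship" defence Ian describes earlier, and the relationship monitoring both of us keep coming back to, amounts to something like the sketch below: remember what was seen on first contact, and flag any later change for the user to inspect.  This is a stand-alone illustration with invented names, not how NSS actually stores anything.

```python
import hashlib

# Rough sketch of a browser-side cert-relationship cache: remember the
# (issuer, key fingerprint) pair seen for each host on first contact,
# and classify later changes so the user can be asked to examine them.

class CertRelationshipCache:
    def __init__(self):
        self.seen = {}   # host -> (issuer, fingerprint)

    def check(self, host, issuer, der_bytes):
        fp = hashlib.sha256(der_bytes).hexdigest()
        if host not in self.seen:
            self.seen[host] = (issuer, fp)
            return "first-contact"      # nothing to compare against yet
        old_issuer, old_fp = self.seen[host]
        if (issuer, fp) == (old_issuer, old_fp):
            return "unchanged"
        if issuer == old_issuer:
            return "rekeyed-same-ca"    # same CA re-asserting the name
        return "changed-ca"             # the hole phishers drive through
```

Whether "rekeyed-same-ca" deserves a warning at all is exactly the "is the same CA Good Enough" question discussed above.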

> >> Also, it turns out there is a much bigger case where
> >> key changes are prevalent, and that is in the use of
> >> hardware SSL farms.  Larger merchants use lots of
> >> certs in hardware, and switch rapidly between them
> >> depending on the moment.
> >
> > That's not too surprising -- up until recently, no CA would issue
> > wildcard certs, based on the concept that it would require the private
> > key to be in more than one place, which is "bad key-handling
> > practice".  An SSL server can only handle so many SSL connections at a
> > time, and that number is at least 2 orders of magnitude less than the
> > number of non-SSL connections.
> 
> Bear in mind that this is a "big merchant" problem,
> and smaller operations should not be penalised for
> the CA's and the merchant's stupidity.

I wholeheartedly agree... but if there were a way to explain the
benefits of SSL session caching in a way that bean-counters can
understand, I bet we'd see a lot more pressure to issue wildcard
certs in any case.
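The bean-counter version might be nothing more than this back-of-the-envelope model: a full SSL handshake costs a large RSA private-key operation, a resumed session skips it.  The millisecond figures below are placeholders I made up, not measurements.

```python
# Toy cost model for SSL session caching.  The constants are assumed
# illustrative values, not benchmarks of any real server.

FULL_HANDSHAKE_MS = 20.0     # assumed: RSA private-key op + key exchange
RESUMED_HANDSHAKE_MS = 1.0   # assumed: look up cached session, resume

def server_cpu_ms(connections, cache_hit_rate):
    """Total handshake CPU time for a batch of connections."""
    resumed = connections * cache_hit_rate
    full = connections - resumed
    return full * FULL_HANDSHAKE_MS + resumed * RESUMED_HANDSHAKE_MS
```

With no cache, 1000 connections cost 20,000 ms of handshake CPU; at an 80% hit rate the same batch costs 4,800 ms, i.e. the same hardware handles roughly four times the load.  Breaking the cache across an SSL farm throws that away.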

Side note: Is a wildcard like "secure*.example.com" supported?
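For what it's worth, the answer seems to be "it depends on the implementation": RFC 2818 permits a wildcard fragment within a label ("f*.com matches foo.com"), but common practice restricts the wildcard to a whole leftmost label ("*.example.com").  A sketch of the stricter rule, with the logic hand-rolled for illustration:

```python
# Hostname matching under the strict whole-label wildcard rule: "*" may
# stand for one entire leftmost label only, so "secure*.example.com" is
# treated as a literal and matches nothing.

def matches(pattern, hostname):
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False             # "*" never spans multiple labels
    for p, h in zip(p_labels, h_labels):
        if p == "*":
            continue             # whole-label wildcard only
        if p != h:
            return False         # partial wildcards like "secure*" fail here
    return True
```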

> > This scheme breaks SSL session caching, though.  So they're throwing
> > money and hardware at a problem that could be solved with fewer
> > resources, if they had intelligence in their process.
> 
> They should just share the keys.  "Bad key handling"
> is a lesser problem than "users being ripped off by
> phishers because everyone's too scared to use SSL."

"too scared"?  What assertion is this?

> >> In this case, both the trustbar and petname toolbars
> >> have developed strategies to deal with this.  For the
> >> former, trustbar suggests that you click-to-accept
> >> *all* certs from that CA.  For petname, I think Tyler
> >> outlined his approach.  Either way, there is some
> >> experimentation to be done, we shouldn't see these
> >> as being the ultimate word but rather steps in the
> >> right direction of stopping the above truck-liner
> >> being driven by phishers, as well as addressing
> >> phishing as it is driven at the single truck level,
> >> today.
> >
> > This brings to mind a question that I have: Is Firefox the best place
> > to experiment with issues of this gravity?  I agree that the status
> > quo is broken; however, many people rely on Firefox and don't really
> > want to play around with experimental code.
> 
> You raise an interesting point.  Let me ponder that.
> 
> A reasonable body of experimentation has been done,
> if you look at the trustbar paper you will see that
> they have carefully tested their assumptions (although
> not in depth and not in large numbers).  There is
> also a fair bit of other academic work backing up
> this direction, and security theory itself agrees
> with the direction.

I'm not arguing there. :)

> Another issue is that these toolbars work and work
> today.  If unsure, download them and try them.  The
> petname one is particularly simple and innocuous once
> installed, it is hard to see that it would do any
> damage.

The Trustbar is... a lot less useful than I would expect.

And just a sidenote... why doesn't Mozilla complain about cross-site
certifications by another CA?  (for an example of what I mean, disable
the trust on the Thawte Server CA, and then browse to PayPal (you
might have to log in, if you have an account with them).)

> Opposing that is the notion that these could incur
> costs for users in their use of Firefox.  The easy
> answers are:  phishing costs (mostly) Americans
> something like half a billion plus plus per year.
> If Mozilla were to deliver something that reduced
> their share by 50%, and if Mozilla's share of the
> market were 10%, then that's $25 million saved over
> the next year.  (These are low figures, I used to
> use a billion, but these days another estimate is
> popular that says half a billion.)

What are the current numbers for Mozilla-related software market
penetration?  Where would I find them?

> One could talk about other costs ... but we'd have
> to talk long and hard before we got anywhere near
> the potential savings.  The main problem for Mozilla
> is that they don't have an incentive to save money
> for their users so they really don't care if
> their users are losing money (most phishing is badly
> reported so it is easy to pretend it doesn't affect
> us).

See, I don't really understand this point of view.

Mozilla has an incentive for saving users money, because by saving
users money Mozilla increases its market share.

Increased market share means more users.  More users means more ideological win.

> > Is there a development tree available for experimentation, akin to the
> > odd-numbered variants of Linux and Apache?
> 
> I'd suggest that the experimentation "space" is
> the plugins.  They already show enough good ideas,
> Trustbar is well developed, well thought out and
> is backed by some academic rigour.  Petnames is
> based on a single strong concept that has stood
> the test of time.  As well as these two, there are
> also alternate toolbars by GeoTrust, Comodo and
> Netcraft which are experimenting with variations
> on central databases.

Trustbar did not warn me about the web bug that PayPal put in their
pages, that directed to an SSL site with a certificate issued to a
completely different company, and signed by a completely different CA
than the one that signed PayPal's.  As well, the user interface on it is
nowhere near ready for prime time.  And it doesn't handle full CA
paths.  (Then again, neither does the UI for Firefox.)

Guh.  It's enough to make a saint swear.

Cordially,

Kyle Hamilton

_______________________________________________
mozilla-crypto mailing list
[email protected]
http://mail.mozilla.org/listinfo/mozilla-crypto
