Hi Kyle,

> ummm... actually, it wasn't meant to be offlist.  Yay gmail, and
> having to fix the headers every time.

I have the same problem with Mozilla lists;
the To and CC have to be checked manually
on each post.

> (Key:
>   belongs to aerowolf (Kyle Hamilton)
>> belongs to iang (Ian Grigg)
>> > belongs to aerowolf (Kyle Hamilton)
>> >> belongs to iang (Ian Grigg)
> )
>
> On 4/23/05, Ian Grigg <[EMAIL PROTECTED]> wrote:
>> Hi Kyle,
>>
>> (was this meant to be offlist?)


[snip of substitute CA attack]


>> (The other thing that makes this a better bet is
>> if in fact CAJoe was tricked, he's on the hook
>> for all legs of the transaction, and he's the
>> one party that can prevent this happening;  so
>> there is a good chance that responsibility can
>> land in his lap.  Not so with any wider tricker.)
>
> ...which brings us back to the limitation of liability issue.  (I
> wonder... can we come up with a coherent concept of what
> fiscal-trusted CAs should and should not be on the hook for?  Maybe
> eventually make that a requirement for inclusion of the cert in
> Firefox?)


Nope, it's not possible.  This is something where
the techie / SSL world got sold a bill of goods;
there is no way to create a reliable "limitation
of liability" connection into the browser, and the
more you try the more damage you are likely to do.

About the only thing you can do is create a series
of statements and collect the evidence for those
statements.  But if you miss a link - as is done
now - you blow away any sense of reliability.

(This was shown in British and German cases,
where the mere appearance of holes in the crypto
was enough to knock out the banks' cases.  And in
those cases, it was fairly likely that the holes
hadn't been exploited...  The game has now entered
a Florida court, and the banks are worried sick because
they know they'll have to wear it.)


>> > The 'core
>> > identity' -- the first-order data that we use to determine the site's
>> > identity -- is the encryption key.  However, encryption keys also have
>> > lifetimes, and life-cycles.  So, we use certificates from CAs as our
>> > only proof that the encryption key is in actuality bound to the
>> > identity of the owner of the server.
>>
>> Right, and given the number of steps involved, and
>> the uncertainty of the statements being made, you
>> can see that this is not exactly a robust system.
>> So, coupling these mechanisms with browser based
>> relationship monitoring is a much better idea than
>> relying on certs alone (and the alternate is true,
>> too).
>
> The uncertainty of the statements being made is that "there is no
> coherent set of statements that are being made".


:-)

> (pardon me while I go completely off my rocker for a few moments...
> this is a completely hypothetical thought exercise, and I have no idea
> how useful it might be.)

[big snip on web of trust :]


> (and here's where the role of the Mozilla Foundation and browser
> manufacturers comes in.)
>
> As it stands, when a lock icon is presented, it means that a
> certificate from a "trusted CA" (meaning, a CA embedded into the
> certificate database, with the "trust bit" set) has been presented and
> verified.  However, this only means that you're dealing with someone
> who has an identity that's asserted, not a reputation to uphold.  Why
> should certificates like this be blindly trusted by action of the
> developers, with the decision taken out of the hands of the user?


Ah, that's because the browser tries to create an
environment that the user can trust enough to do
ecommerce in.  Here's the problem.  Back in the mid-90s
a lot of people ran around saying we have to protect
commerce.  "Otherwise it will never happen."  Now, the
users who were trialled on all of the various ideas
basically refused to get involved, and insisted
that they simply be told it was trusted.  So this
thing called trust bounced around until it found
somewhere it could sit without bouncing again.  When
the music stopped, trust was sitting in the browser's lap.

Still, the system that was created had a bunch of relatively
stable statements:  "the browser says that the CA says that
this identity says that he can accept credit cards."  Firstly,
the users (or their proxy, the gfx UI team) said ... oh
that's all too complex, so it became, by steps, "the browser
says this site is ok."

And then everyone realised that the user wouldn't do
anything until she had someone telling her it was trusted,
so everyone said it was trusted.  Which meant that the
message migrated to "the browser says you can trust this
site."

So from a reasonably cohesive set of statements, it
migrated to a nonsense.  Back then, there was no real
threat, and no danger that ecommerce wouldn't take off,
so it didn't matter.  Now, it does matter, and if you
know how to read the signs, people are worried sick,
because they know the first time this goes to court,
the whole charade will be ripped to shreds.

(For example, in the crypto world it is no longer
reasonable to use the word "trust".  This has gone
so far that Verisign dropped the word from its logo!)

Getting back to the web of trust - that world has the
same problem.  The essence of the word trust is that
it is very hard to define.  So if you use it, you
either have to be very sure of your meaning, or you
are basically using the word to pass the buck.  In
OpenPGP, for example, a signature is undefined.  It
therefore means whatever you want it to mean.  It
explicitly doesn't mean trust, unless you explicitly
say so.  Hence, the web of trust is misnamed; it is
really the web of contacts, or links, or relationships.

When you get to S/MIME sigs it is even messier, as
sigs are used both for message authentication purposes
and for identity purposes (without blushing).  This
basically breaks the signature as an application, and
is one small contributing factor in why S/MIME hasn't
taken off in the world.  (To explain - when people
send communications in the real world, they understand
what it means to sign them.  But no such meaning is clear
in S/MIME.)


> Since phishers rely on being able to look as much like "the real
> thing" as possible, and the CAs aren't on the hook for it (right
> now)... why should the CAs be given "trust bits"?  Instead, the first
> time that the site comes up [and this may be very difficult, given
> X.501 DNs and the entire X.509v3 certificate model], pop up a dialog
> giving the user the identity that's asserted... and the identity of
> the CA that asserted it.  As well, keep track of (and display) how
> many times certificates by that CA have been rejected, and how many
> times they've been accepted, as a means of determining whether that
> CA's assertions should be trusted on a /per-user basis/.


Right.  That's if you are trying to address
phishing.  But the browser security model doesn't
address spoofing or MITM of that nature; the
protection chrome was dropped in favour of real
estate back in the mid-90s.


> The fact that the "trust bit" is set on all of those CAs, combined
> with the fact that the user interface doesn't present the information
> except in a very difficult-to-understand manner (I really don't
> understand needing to click 6 times in order to get to the DN in the
> certificate, and I don't understand not being able to 'walk the chain
> of trust' to figure out where the trust is being delegated from),
> means that essentially the Mozilla developers are creating a
> "transitive trust" situation -- "I trust that I know who this person
> is, so I trust that I can go after them if they misuse my
> information."  This, more than anything, hurts the security of Firefox
> and the NSS in-general.

Yep.

> (I know that these are 'prescribed procedures', but I've just
> explained why the procedures are horribly insufficient.)


Yep.  They were designed to address a static
model of a threat that never eventuated.  People
hang onto that threat in case it ever turns
up; it is an article of faith that as soon as
we loosen our guard, the naughty protocol MITM
will sneak in and prove us all idiots, no matter
that he can always forge some documents and buy
a cert from a CA instead.


>> >> Also, it turns out there is a much bigger case where
>> >> key changes are prevalent, and that is in the use of
>> >> hardware SSL farms.  Larger merchants use lots of
>> >> certs in hardware, and switch rapidly between them
>> >> depending on the moment.
>> >
>> > That's not too surprising -- up until recently, no CA would issue
>> > wildcard certs, based on the concept that it would require the private
>> > key to be in more than one place, which is "bad key-handling
>> > practice".  An SSL server can only handle so many SSL connections at a
>> > time, and that number is at least 2 orders of magnitude less than the
>> > number of non-SSL connections.
>>
>> Bear in mind that this is a "big merchant" problem,
>> and smaller operations should not be penalised for
>> the CA's and the merchant's stupidity.
>
> I wholeheartedly agree... but if there was a way to explain the
> benefits of SSL session caching in a way that bean-counters can
> understand, I bet we'd see a lot more pressure to issue wildcard certs
> in any case.

I don't quite understand what there is to explain;
just copy the key and re-use it.  It may be that
you are risking the loss of the cert, but that's
something to explain to the management: do you
want to pay more, or run the risk?

> Side note: Is a wildcard like "secure*.example.com" supported?

(Any CAs reading this far?)
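
On the side note: RFC 2818 does permit a partial-label
pattern like "secure*.example.com" (a "*" may match a
component fragment), though in practice many implementations
only honour a bare "*" as the whole leftmost label.  A
minimal sketch of the common matching rule - my own
illustrative function, not any particular library's API:

```python
def wildcard_match(pattern: str, hostname: str) -> bool:
    """Match a hostname against a certificate name pattern.

    A wildcard is only honoured in the leftmost label, and it
    never spans a dot: '*.example.com' matches
    'secure.example.com' but not 'a.b.example.com'.  A
    partial-label pattern such as 'secure*.example.com'
    matches 'secure1.example.com'.
    """
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False  # wildcard covers exactly one label
    first_p, first_h = p_labels[0], h_labels[0]
    if "*" in first_p:
        prefix, _, suffix = first_p.partition("*")
        if not (first_h.startswith(prefix)
                and first_h.endswith(suffix)
                and len(first_h) >= len(prefix) + len(suffix)):
            return False
    elif first_p != first_h:
        return False
    # Remaining labels must match exactly.
    return p_labels[1:] == h_labels[1:]
```

So under that rule, "secure*.example.com" would cover
secure1 and secure2 but not, say, login.example.com -
whether a given CA will issue such a cert is another matter.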

>> > This scheme breaks SSL session caching, though.  So they're throwing
>> > money and hardware at a problem that could be solved with fewer
>> > resources, if they had intelligence in their process.
>>
>> They should just share the keys.  "Bad key handling"
>> is a lesser problem than "users being ripped off by
>> phishers because everyone's too scared to use SSL."
>
> "too scared"?  What assertion is this?


The solution to phishing starts with using more
SSL.  Along with more SSL, the other step is to
manage the identity relationship on the chrome,
a la Trustbar and petname, etc.
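
To illustrate the petname idea: the user binds her own
label to the site's key or cert fingerprint, and the chrome
displays that label rather than anything the CA asserts.  A
toy sketch (the class and names here are mine, not
Trustbar's or the petname toolbar's actual design):

```python
class PetnameStore:
    """Toy model of petname-style chrome: map a site's cert
    fingerprint to a user-chosen name.  An unknown fingerprint,
    even one presenting the same domain name, gets no
    recognisable label."""

    def __init__(self):
        self._names = {}  # cert fingerprint -> user's petname

    def assign(self, fingerprint: str, petname: str) -> None:
        # Called once, when the user decides to trust the site.
        self._names[fingerprint] = petname

    def chrome_label(self, fingerprint: str) -> str:
        # The user's own name is displayed, never the CA's claim,
        # so a phisher with a different key shows as unrecognised.
        return self._names.get(fingerprint, "UNRECOGNISED SITE")

store = PetnameStore()
store.assign("ab:cd:ef", "my bank")
print(store.chrome_label("ab:cd:ef"))  # the user's own label
print(store.chrome_label("12:34:56"))  # a phisher's different key
```

The point of the design is that the relationship lives on the
user's side; no statement from a CA is needed to maintain it.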

But, to use SSL more pervasively, we need to
migrate the CA industry to a series of differing
products, including domain-control certs and the
like.  But that seems like a change to the model of
security, and for many people, that's scary.
To too many people, changing the model means
breaking it, as they cannot entertain the
alternative: that the model wasn't perfect to
begin with and that improvements exist.

It is that which causes the fear - knowing that
there are two possibilities for change.  Is
changing the model going to reduce security,
or was everything I've been working on for the
last 10 years based on a wrong model?

So there is a lot of resistance to change (along
with no sympathy or respect for victims of phishing;
it is not as if the debate has anything to do with
users' security, it's more to do with insiders'
security).

E.g., what happens if domain-control certs become the
norm?  As they already hold something like a quarter
of the market, and most browsers already have roots
that accept domain-control certs, we know
these fears aren't anything to do with the users
or the CAs.

Oh, and it isn't just here, it is all the way through
the SSL world.  What is happening now in the browser
world is just a repeat of what happened in the crypto
world a few years back, and that was a repeat of what
happened in the academic crypto world in the late 90s.

The good news is that the debate is more or less
over in the crypto world - the model was wrong,
and even the insiders accept it when talking to
their peers.  The torch has now passed to the
browser world.

>> Another issue is that these toolbars work and work
>> today.  If unsure, download them and try them.  The
>> petname one is particularly simple and innocuous once
>> installed, it is hard to see that it would do any
>> damage.
>
> The Trustbar is... a lot less useful than I would expect.

Sure - but consider the operative word - experimentation!

It's a start.  Nobody is suddenly going to spring
up with the right answer and wow the world; what
is needed is a series of experiments and ideas all
cross-fertilising each other.

If you look at Trustbar, which was the first in
modern times, it started out in form A and then
moved to forms B, C, D...  each time gobbling up
the ideas thrust at it.

(As a tool, it has bugs, sure - in fact on my FreeBSD
the logos didn't work at all, which was disappointing.)

> And just a sidenote... why doesn't Mozilla complain about cross-site
> certifications by another CA?  (for an example of what I mean, disable
> the trust on the Thawte Server CA, and then browse to PayPal (you
> might have to log in, if you have an account with them).)


I don't understand this part, and have no easy
access to check it out ... I'll leave that for
others; you might like to file a bug.

>> Opposing that is the notion that these could incur
>> costs for users in their use of Firefox.  The easy
>> answers are:  phishing costs (mostly) Americans
>> something like half a billion plus plus per year.
>> If Mozilla were to deliver something that reduced
>> their share by 50%, and if Mozilla's share of the
>> market were 10%, then that's $25 million saved over
>> the next year.  (These are low figures, I used to
>> use a billion, but these days another estimate is
>> popular that says half a billion.)
>
> What are the current numbers for Mozilla-related software market
> penetration?  Where would I find them?

Mine above were guesses; I don't know where
real numbers are.  I think only downloads are
recorded, and that doesn't mean much - the
conversion rate of a download is probably
somewhere between 1% and 25%.
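
For the record, the back-of-envelope sum quoted above is
just losses times reduction times share.  As a sketch (all
three inputs are the guesses from the quoted text, not
measured figures):

```python
# Back-of-envelope: annual phishing losses * fraction a browser
# tool could prevent * Mozilla's market share.  All three inputs
# are guesses from the discussion, not measured figures.
annual_phishing_losses = 500_000_000  # "half a billion plus plus"
reduction = 0.50                      # a tool that halves the losses
market_share = 0.10                   # guessed Mozilla share

savings = annual_phishing_losses * reduction * market_share
print(f"${savings:,.0f}")  # -> $25,000,000
```

Each factor could easily be off by half an order of
magnitude, which is why the estimate ranges from $25
million up to the older billion-based figure.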


>> One could talk about other costs ... but we'd have
>> to talk long and hard before we got anywhere near
>> the potential savings.  The main problem for Mozilla
>> is that they don't have an incentive to save money
>> for their users so they really don't care if
>> their users are losing money (most phishing is badly
>> reported so it is easy to pretend it doesn't affect
>> us).
>
> See, I don't really understand this point of view.
>
> Mozilla has an incentive for saving users money, because by saving
> users money Mozilla increases its market share.


Pretty weak.  Mozilla increases its market share
currently, because a) it's cool, b) it's more
robust than IE, c) dodgy stuff is turned off,
and d) it has a rep for security.  Mostly, these
are perceptions rather than necessary realities.

I grant that there is an opportunity to address
protecting users from phishing and therefore
saving money.  But Mozilla doesn't buy that.

(The yellow bar and the domain name in the status
bar help, and Opera has started putting the identity
name on the chrome;  I think this all changes when we
see what Microsoft has done.  Frankly, I think that
the number of people in Mozilla who appreciate
phishing is too small to make a difference, which
is testament to the fact that Mozilla isn't about
saving money for its users.)

> Increased market share means more users.  More users means more ideological
> win.


Very pretty :)  Ideology doesn't like money,
so if you are saying that is Mozilla's goal,
that explains it.

> Trustbar did not warn me about the web bug that PayPal put in their
> pages, that directed to an SSL site with a certificate issued to a
> completely different company, and signed by a completely different CA
> than the one that signed PayPal.  As well, the user interface on it is
> nowhere near ready for prime time.  And it doesn't handle full CA
> paths.  (Then again, neither does the UI for Firefox.)


Can you write that up with the URLs to get
to those certs?  I'm sure Amir and Ahmad would
be interested in trialling that case!

iang
_______________________________________________
mozilla-crypto mailing list
[email protected]
http://mail.mozilla.org/listinfo/mozilla-crypto
