Nelson B wrote:
Ian Grigg wrote:
...
Microsoft, which also distributes CA root certs inside
its browser (and accepts CA-signed certs), charges each
CA for the privilege.


No, they don't.


Yes, I saw that - no easy choice for Microsoft either,
I guess.


[mozilla] simply hasn't
got the easy money choice, so it has to decide what is
"good" in certs some other way.


The question is not "what is good in certs", but rather,
"who can be trusted to truthfully and correctly issue certs?"


Well, I prefer mine, because the padlock doesn't say
or imply anything about trust or truth, it simply
asserts yes or no;  one man's meat is another man's
poison.


Hence Nelson's dilemma in deciding how to handle a
cert that isn't laid out to standard, but is usable
none the less.


A cert that isn't laid out to standard isn't useful.


The dilemma comes out most strongly when a major
browser accepts a non-standard cert.  If a product
has 90% of the market, and accepts a non-standard
approach, it's useful.  No matter how much one
believes in standards, this happens over and over
again.

Standards are tools, not religions.  People use
standards, not the other way around.


Jean-Marc proposed a diagnostic tool for CAs (if I understood him
correctly) that would tell them what was wrong with their certs in
terms that competent CAs would understand.  That's not an end user tool.


Would you say that such a diagnostic tool, or the
code in Mozilla, couldn't be adapted to state that
a cert is "non-standard, but functional"?


I fully agree that if a CA gives too much trouble, then
MF should consider dropping it - but I wonder how likely
that is to be a "fair" process?


I suggest the "trouble" be measured in terms of the number of bugs filed
and complaints filed in newsgroups like this.  More than, say, 1 or 2
a month is way too many.  Consider that we've only gotten 2-3 such
complaints EVER for certs from your favorite commercial CA.


I do understand your frustration, I also used to
have to deal with such requests, but...

If I were a commercial CA, and a browser maker
set up such a rule to determine trouble by number
of complaints & bugs, I would ensure that there
were at least that many bugs and complaints filed
for the weaker products.

Why doesn't the normal "rule" cover it?  If
there aren't enough people to get around to it,
then the bugs will sit there.  If developers get
too annoyed by the "wannabe CAs", some tools will
be created, eventually.


It all comes back to the little lock symbol.  The one
binary decision, good or bad, all compressed into that
one lonely icon, is what makes it so difficult.


Make what so difficult?  After all the SSL crypto computation, and the
validation of the cert chain, one of two outcomes arises.  Either
we've proven to our own satisfaction that the party at the other end
of the SSL "pipe" is who the cert says it is and we now have a "secure
pipe" to that party, or we haven't and don't.
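In today's terms, that all-or-nothing outcome can be sketched with
Python's stdlib `ssl` module (purely illustrative; not what any
browser of the day actually ran):

```python
import socket
import ssl

# A default-verifying context performs the two checks under discussion:
# (1) the cert chains to a trusted root, (2) the name matches the site.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED  # chain must verify
assert ctx.check_hostname                    # hostname must match

def secure_pipe(host, port=443):
    # Either both checks pass and the caller gets a "secure pipe" to
    # the named party, or wrap_socket raises
    # ssl.SSLCertVerificationError -- there is no third outcome.
    return ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
```

The caller sees exactly the two outcomes described above: a verified
connection object, or an exception.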


No, you've shown that the cert is signed by a
root cert, and that the cert id matches the
website.  (As you mentioned further below.)

If the root cert is of lesser pedigree, then
the user might want to know that [1].  Are all
CAs equal?  You might assert that, but is that
a reliable statement?  And does it make sense
to claim it is?  Do all
CAs conduct anything like appropriate due
diligence?  I don't think so, simply because
the due diligence is the same for a flower shop
as it is for a bank.

The list of flaws with the above logic is rather
long and boring.  Luckily, this is never at issue,
as a real attacker bypasses the HTTPS security
altogether.


What can't be done is to hide all the available information
and try to mash it all into one tiny little lock symbol.


Can't?

I think all security conscious folks who've participated here wish that
browsers would display more info about
a) what is the full name (as full as we have in the cert) of the party
at the other end of this pipe, and
b) who says so?
I know that the various managers of browser crypto security software
under which I've worked in the last 7+ years all wanted that, as did
most (if not all) the crypto security developers.
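Both (a) and (b) are already sitting in the verified peer cert.  A
sketch of how little code they take, using the nested-tuple shape
that Python's `ssl.SSLSocket.getpeercert()` returns (the names below
are hypothetical sample values, not from any real cert):

```python
# Shape of ssl.SSLSocket.getpeercert() for a verified peer.
# "www.example.com" and "Example CA Root" are illustrative only.
cert = {
    "subject": ((("commonName", "www.example.com"),),),
    "issuer":  ((("organizationName", "Example CA"),),
                (("commonName", "Example CA Root"),)),
}

def name_and_signer(cert):
    """(a) who is at the other end of the pipe, (b) who says so."""
    def flatten(rdns):
        # Collapse the RDN tuple-of-tuples into a plain dict.
        return {key: val for rdn in rdns for (key, val) in rdn}
    return (flatten(cert["subject"]).get("commonName"),
            flatten(cert["issuer"]).get("commonName"))

print(name_and_signer(cert))  # ('www.example.com', 'Example CA Root')
```

So the information is cheap to extract; the cost, as noted below, is
window real estate, not code.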

The fact that browsers don't do that today is not because security
people are holding back, but rather is because UI people do not value
security enough to be willing to devote the window real estate to it.


That's a key point, so, we are agreed that the
security is unsatisfactory.

One question then is, how much security is delivered
with the information that is implicit in the padlock,
and how much security is lost without the rest of
the information?

This is a security decision.  There are certain
compromises that shouldn't be made.  If the UI
people say "you only get a padlock" then they
have to be told that they don't get the security
they think they are getting.

If then, the UI people say, that's fine, we only
want "moderate" security, that opens up the way for
a whole host of different directions.  This is a
big opportunity, because there is a lot we can do
to improve security, once the UI people accept the
status quo for what it is.

But, what shouldn't be done is to pretend or
represent that the current arrangement delivers
satisfactory security, even though we can't put
in the information to close the loop.


...
The only way forward is to engage the user in the protocol,
...
That's not the only way.  But, it is clear that the only
way to deal with the complex security choices facing
browsers is something to do with presenting the user with
more information.


One thing we learned from Communicator 4.x was that if you give the
user a way to override a security error, the user will always choose
to do so.  Any error or warning that you let the user disable, he will
disable.  The user views warnings that say "you're not talking to the
party who you said you want to talk to" as just another annoying
dialog that they have to click through on the way to what they want.
IOW, the user is typically not able to aptly judge security risks,
even though it is they who are most likely to lose from their choices.


Right.  This brings up some quite difficult results,
such as, even if the browser says, "this party is
using a bad cert," the user clicks through anyway.
I know this; I've done it myself, and I was
trying to research the very nature of an attack that
I knew was in progress (some phisher was actually
using an SSL cert as an experiment, so I was very
keen to work out who they'd tricked a cert from...[2]).

So, we seem to be in agreement that the current
notion of popup dialogs that ask questions is not
going to help.  False certs are not defeated, in
the practice of the browser activity, although they
are addressed in the protocol and the CA regime.

Hence, the branding idea.  This is an idea that has
been floating around for some time, and was brought
up recently by Tim Dierks.  CAs want it, as per the
below [1], and they should, otherwise their business
is free riding off the other CAs, and they can't set
their prices properly.

There are some good reasons why brand works when
dialogs don't:  it enables the user to make choices
based on information that they've acquired via
other channels.  It enables the CAs to express some
real claims.  There are also some reasons why it
isn't perfect, but it does seem to be way way better
than what's there at the moment.


That's one reason why mozilla has fewer security dialogs that the user
can override, and more that simply say "no dice".  And it hasn't been
a great tragedy.  More web sites that used to be sloppy have now done
the right thing and fixed the problems.  That's a GOOD THING.

I'd say the way forward is to enforce the rules no less tightly than
before, maybe more so, and give the users fewer decisions to make,
fewer chances to hurt themselves.  With the presence of low-cost CAs,
there won't be any remaining excuse for people to continue to use
improperly made certs from their own homebrew CAs.


Security is a hard problem.  It's not amenable
to a binary result, secure or insecure.  It's
not like politics, where someone can be voted in
on the platform of "making people safe."  There
are degrees of
safeness, and there are risks and losses.  There
are economic choices to be made, and each person,
each party, has different choices.

It's a process, and a progression.  The HTTPS
implementation is already so imbalanced within
today's browsing, by means of its desire to
create a user-simple binary security choice,
that it is ignored by almost all attackers [2].

By any measure, browsing is not well protected.
The security that was built in is bypassed on a
routine basis in real attacks.  One of the core
reasons for banking websites being so vulnerable
is that HTTPS is "too secure" - so theoretically
secure that it results in costs and inconveniences
that lead to easy bypasses and easy ignoring.

In security, the result justifies the means, not
the other way around.

You are totally right that this is also/really
a UI problem.  But, the UI people won't look at
it until they realise that the HTTPS system in
browsers doesn't deliver the security that they
thought.

UI people will figure this out, in time, because
they read the news and there has been an increasing
flood of attacks over the last year.  I personally
would rather that it was the crypto guys who said,
"the crypto solution didn't work, sorry ... we
need a rethink."

Because, that will make it a whole lot easier to
keep the crypto in there, doing its job in the
long run.


iang



[1] I stumbled on this last night while looking for the audit cabinet stuff:

http://www.cio-dpi.gc.ca/pki-icp/pki-in-practice/efforts/2002-07/scan-analyse_e.rtf

  "VeriSign Inc is mounting a marketing and education campaign
  saying its authentication services are more trustworthy than
  those of some of its rivals.

  The company announced a "Trusted Commerce" initiative, which
  will include "fairly significant" advertising and PR aimed at
  getting consumers to realize that all the "solid padlocks"
  that appear in their browsers are not equal, and that some
  are more trustworthy than others.

  Part of VeriSign's initiative is its participation in industry
  standards, mainly WebTrust, an auditing standard for best
  practices developed by the American Institute of Certified
  Public Accountants and the Canadian Institute of Chartered
  Accountants. VeriSign, Entrust Inc and Baltimore Technologies
  Inc are WebTrust-certified."


[2] I know of that one case where the attackers experimented with using a
cert.  They didn't repeat the experiment, as it probably lost people in
the additional dialogs that popped up.  They get much more success by
showing a professional HTTP site copy, without any HTTPS certs getting
in the way.