On 26/06/11 1:26 PM, Marsh Ray wrote:
On 06/25/2011 03:48 PM, Ian G wrote:
On 21/06/11 4:15 PM, Marsh Ray wrote:
This was about the CNNIC situation,
Ah, the "I'm not in control of my own root list" threat scenario.
See, the thing there is that CNNIC has a dirty reputation.
That's part of it. But there are some deeper issues.
Deeper issue A applies equally if you *are* the government of China.
Would it make sense for you to trust root CAs controlled by other
governments? Of course, this might seem a more academic question if
you're in China, since your OS is likely MS Windows made in the US anyway.
Yes, exactly. For everyone who's paranoid about CNNIC, there are 10 who
are scared of some other government. It's not about one CA, it's about
all of those scenarios; there are many, many people outside the USA who
feel the same about the USA government.
If we pander to those who are scared of CNNIC, that means all the
USA-based CAs are next.
A better thing to do is work our risk analysis (which is shortly to be
mandated for CAs, but not for anyone else...).
For what we want browsers to do, is it reasonable that governments
somewhere somehow can MITM us?
Probably: for online banking or credit cards (what SSL was intended to
deal with) it is reasonable. For freedom fighting / terrorism, it's
probably not reasonable. But, are we really saying that we want to
provide a system for those latter people? What costs are we willing to
take on board? Are we going to kill it for the former group?
That's a rabbit hole, are you sure you want to go down it?
Deeper issue B is a simple engineering failure calculation. Even if you
only trust reliable CAs that will protect your security 99 years out of
100 (probably a generous estimate of CA quality), then with 100 such
roots you can expect to be pwned 63% of the time.
(1 - 0.99^100) = 0.63
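The arithmetic can be checked directly (a sketch; the 99-years-in-100 reliability figure and the 100-root count are the assumptions stated above):

```python
# Probability that at least one of n independently trusted roots
# fails you in a year, given each protects you with probability p.
# p = 0.99 and n = 100 are the figures assumed in the text above.
p = 0.99
n = 100
p_pwned = 1 - p ** n
print(round(p_pwned, 2))  # → 0.63
```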
Well, ug! Those numbers assume that a CA breaches us for the entire
year, and it breaches everyone for that year, and we all lose big time
from that breach.
It seems unreasonable to assume such apocalyptic results, especially
given the rather singular data points we have (a handful of breaches,
and zero damaged customers or users).
More likely, we will see breaches at a level of 0.1% to 1% per year, and
those breaches will affect around 0.00001% of the users, and around
0.000000001% of the RPs.
That's an acceptable risk.
But CNNIC passed the test to get into the root lists.
That tells me it was a bad test.
Many might agree with you. When I did the test, the result was
positive, but it still didn't pass ... I'm not sure I can confirm
whether it was a good test or a bad test from one data point, but I can
tell you it is an expensive test :)
Which do you want? A CA gets into a root list because it is nice and
pretty and bribes its way in? That was the old way, pre-1995. Or
there is an objective test that presents all CAs with an equivalent
hurdle to pass? That was the post-1995 way.
There's no dichotomy here. Cash payments can make a fantastically
objective test.
:) So CNNIC is in either way.
There's no easy answer to this. Really, the question being asked is
wrong.
Yeah.
The question really should be something like "do we need a
centralised root list?"
Well something is going to get shipped with the browser, even if it's
something small and just used to bootstrap the more general system.
Right. The Microsoft dynamic population makes a lot of sense, from an
engineering perspective. Especially if you're aware of how hard Mozilla
has found it to police this issue. Indeed, the fixed root list of
Mozilla looks very 1970s-ish.
How about these questions:
When is a centralized root list necessary and when can it be avoided?
Vendors typically reject all variations of the centralised root list
model at the centralised distribution level. This is an article of faith.
Where there is some room to experiment is with plugins to browsers.
How can the quality of root CAs be improved?
Not easily. There are several barriers:
Disclosures. We need a lot more of the right disclosures before we can
move to improve the quality of the CAs, as only once the entire model is
on the table in documented form can focus be achieved. The CAs control
what they want to disclose via CABForum. So you will only see the right
disclosures come slowly, if at all. There is a batch of new
disclosures coming through in a document called the Baseline Requirements.
Reputation. The vendors hold the line that reputation of CAs is not to
be used in a formalised sense to allow the CAs to compete. This is
basically a failure of marketing on the part of the vendors. To their
credit, the CAs have grumbled about this for a long time. EV goes some
way towards branding the CAs, but it mucked it up by exchanging the
branding for a hill of beans called EVG. So it ended up confirming the
race to the bottom, but starting at a slightly higher point.
How can the number of root CAs be reduced in general?
How can the number of root CAs be reduced in specific situations?
What difference does a reduction make? Do you feel more secure with
only 100 roots? 50? For my money, I don't see it makes a difference
until maybe 5.
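The point can be sketched with the same hypothetical 1%-per-CA annual failure rate used in the 63% calculation earlier in the thread (all figures illustrative, not measured):

```python
# Chance that at least one trusted root fails in a year, assuming
# each fails independently with probability 0.01 (hypothetical).
# Shrinking the list from 100 to 50 roots still leaves a large
# aggregate exposure; only very small lists change the picture.
for n in (100, 50, 10, 5, 1):
    p_fail = 1 - 0.99 ** n
    print(f"{n:3d} roots -> {p_fail:.0%} chance of at least one failure")
```

Halving the list roughly halves the exponent, not the risk perception: 100 roots gives about 63%, 50 roots still gives about 39%.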
And even then, what happens if Microsoft's dynamic root list is the way
of the future?
and most importantly:
How can we give the people who need it the skills and information needed
to assess the security of their connection?
Hmmm... do you mean the geeks who understand crypto? They don't need it
as much as they say they do, they just like the idea of tweaking and
playing. I speak as one of them of course... it never really protected
me from enemies coz I didn't have anywhere near as many as I imagined.
And the enemies I had didn't bother with attacking the crypto,
confirming Adi's 3rd law. Ya know, frankly, those who are scared of
connection-level threats oughta join society for a few years and
discover where the real threats are... #1 threat is the guy on the
other end.
Or do you mean the masses? Well, the masses really only do two
mechanisms: reputation in the large, and word-of-mouth in the small.
Advertising or social networks, take your pick.
This is the geek's realisation that they cannot control their list of
"trusted" CAs.
It's more prosaic than you make it sound.
When engineers sit down to review the security of a real-world product,
often with sharp people from the customer's side present, occasionally
someone thinks to ask the question: "OK, so supposing there are no
killer defects in the implementation, and all the crypto works as
expected, who has keys to the damn thing?"
If the product's implementation relies on SSL/TLS (e.g., has a
management port with a web interface), then be prepared to have this
conversation.
Yeah.
To me this is a validation of the cipherpunks' foresight of taking the
attack model at face value. What was once considered spy-fantasy
paranoia by many is, in reality, a textbook engineering calculation
after all.
Well, as far as the question goes, it is reasonable. I would hold short
of calling it a textbook engineering calculation though. For that, I'd
like to see their data first.
As I understand it, most of these devices provide reasonable protection
from eavesdropping. That's their job - to stop passwords being sniffed.
What they don't do is stop the manufacturer from getting in. Often
this is desirable. Nor the manufacturer's friends and enemies. Less
desirable. But, as long as our threat surface is reduced over the
alternative (no device) then we're in the positive.
Their judgement is undermined further now that MS Windows' root list has
gone the next step to dynamic control: the users' ability to verify the
root is eroded a bit more, because they have no way to stop the future
dynamic updates.
You can go to "Add/remove Windows Components" (or whatever they call it
these days) and remove the "Automatic Certificate Update" feature. But
if you do this you need to be prepared to troubleshoot some pretty
mysterious breakages many months later after you've forgotten about it.
Yes. That's the "don't touch the settings" button. The lesson takes a
while to take effect, tho.
In practice, if we assume a centralised root list, this is probably
the better result.
Maybe sometimes. But when?
When any ordinary user is accessing any website, mostly. The choice
they have is this: zero protection, or some crypto thingie that is
transparent except when it is telling them to go back to zero protection.
The user has a choice: something or nothing. Which do you want to provide?
This is very hard to quantify because it's all theoretical until the
instant that the client software tries to make a connection to a
specific server and receives a specific certificate from the next-hop
router. Does the client software accept the connection or fail it and
tell the user that they're possibly being attacked?
(a) It's impossible to quantify until attackers start attacking it.
That has only just started in the last year or two.
(b) the client software is now seeing the difference between something
or nothing. What we got before that was good. What comes now is
controversial.
From a UI designer's perspective, this is as close to a "launch the
nuclear missiles" moment as they're ever likely to encounter, because
showing the scary page to a browser user instead of the page they
requested probably seems pretty much like the end of the world to these
people.
(c) Bayesian statistics has something to say about false positives.
Basically, if the false positives are too high, the true positives will
never be spotted. That's the problem we have now: the dial is turned up
too high. Arguably, PKI will never work in the consumer field
because the inherent design generates too many false positives.
(I may have gotten my true/false/+/- mixed up above...)
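The base-rate effect is easy to demonstrate numerically. All three rates below are made-up figures chosen only to illustrate the shape of the problem, not measured browser or attack data:

```python
# Bayes: how often does a certificate warning indicate a real attack?
# All rates are hypothetical, for illustration only.
p_attack = 1e-6        # prior: fraction of connections actually MITM'd
p_warn_attack = 0.99   # warning shown given a real attack
p_warn_benign = 0.01   # warning shown on a benign connection (misconfig etc.)

p_warn = p_warn_attack * p_attack + p_warn_benign * (1 - p_attack)
p_attack_given_warn = p_warn_attack * p_attack / p_warn
print(f"P(attack | warning) = {p_attack_given_warn:.4%}")
```

With these figures, well under 0.1% of warnings correspond to real attacks, so users rationally learn to click through them.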
Here's an example of some thinking by UI design types. It's obviously
biased, but it confirms my own biased experience :-) so I'll link it:
http://www.reddit.com/r/pics/comments/hvuhg/apple_why/c1yuah6
Welcome to marketing. You either love it or hate it. It's not
personal, marketing doesn't care :P I highly recommend doing some
advanced classes on marketing, to see why the purist engineer always
loses these battles.
It works quite simply: 1 billion users don't check the root list, at
all. They rely entirely on the ueber-CA to generate a good root
list.
Isn't this basically the system we have now with the browser vendor
acting as the ueber-CA?
Yup.
A tiny fraction of that number (under 1 million, or 0.1%) know about
something called a root list, something perversely called "trust"
bits, and the ability to fiddle those bits. They do that, and imagine
that they have achieved some higher level of security. But, this
technique has difficulty establishing itself as anything more than a
placebo.
Any model that offers a security feature to a trivially tiny
minority, to the expense of the dominant majority, is daft.
Heh. Unless the dominant majority has a dependency on this tiny minority
that they are unaware of:
http://www.cisco.com/en/US/products/sw/secursw/ps5318/index.html :
Cisco Router and Security Device Manager (SDM) is a Web-based
device-management tool for Cisco routers [...]
It ships preinstalled on all new Cisco 850 Series,
Cisco 870 Series, Cisco 1800 Series, Cisco 2800 Series, and Cisco
3800 Series integrated services routers.
The people who legitimately manage Cisco routers are a tiny minority,
perhaps you would consider them "trivial". Yet the dominant majority of
users' internet traffic is passing through a Cisco router at some point.
So, the tiny minority bought a tool that was protected by a
masses-security mechanism, and didn't bother to complain?
I don't think that this tiny minority really deserves much sympathy from
us, if they are relying on a chain of unknown root list manipulators to
protect them, in a design that is entirely sub-standard to their
requirements.
Now maybe it's different for ISP core router admins, but the existence
of this product strongly implies that at least some admins are
connecting to their router with their web browser over HTTPS and typing
in the same password that they use via SSH.
Gee, how the heck do you suppose Cisco admins are going to get a unique
and valid certificate on every router like that?
:)
A lower layer (IP routing) depending on a higher layer (TCP port 443 and
PKI) for its secure functioning. This is the type of layering violation
and cyclic dependency that will take down the entire internet if we're
not careful. Let's hope it doesn't take down the power grid along with it.
Yeah. Clunk. Fail.
(Not much point in blaming the root list system for that.)
The logical conclusion of 1.5 decades worth of experience with
centralised root lists is that we, in the aggregate, may as well
trust Microsoft and the other root vendors' root list entirely.
The party that decides the quality of the authentication is the party
that writes the check for it. This is often not the party with the most
to lose and is almost never the end user.
In other words, Microsoft and Mozilla are looking out for their own
interests. Their interests diverge in significant ways from the
interests of secure sites, ISPs, corporate firewall admins, governments,
Yeah...
and even their own end users.
And that is the problem. Users are not represented in any forum that
matters.
Or: find another model. Change the assumptions. Re-do the security
engineering.
We need some new ideas, that's for sure.
- Marsh
iang
_______________________________________________
cryptography mailing list
[email protected]
http://lists.randombit.net/mailman/listinfo/cryptography