On 06/25/2011 03:48 PM, Ian G wrote:
On 21/06/11 4:15 PM, Marsh Ray wrote:
This was about the CNNIC situation,
Ah, the "I'm not in control of my own root list" threat scenario.
See, the thing there is that CNNIC has a dirty reputation.
That's part of it. But there are some deeper issues.
Deeper issue A applies equally if you *are* the government of China.
Would it make sense for you to trust root CAs controlled by other
governments? Of course, this might seem a more academic question if you
are in China, since your OS is likely MS Windows made in the US anyway.
Deeper issue B is a simple engineering failure calculation. Even if you
only trust reliable CAs, each of which will protect your security 99
years out of 100 (probably a generous estimate of CA quality), with 100
such roots you can expect to be pwned 63% of the time.
(1 - 0.99^100) ≈ 0.63
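For the record, a minimal sketch of that arithmetic in Python (the
99-in-100 per-root reliability is my rough estimate from above, not a
measured figure):

    # Probability that at least one of n independently-failing roots
    # betrays you, given per-root reliability p.
    def p_compromise(p_reliable, n_roots):
        return 1.0 - p_reliable ** n_roots

    print(p_compromise(0.99, 100))  # -> 0.6339..., i.e. ~63%

The exact number matters less than the shape: the failure odds compound
with every root you add to the list.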
But CNNIC passed the test to get into the root lists.
That tells me it was a bad test.
Which do you want? A CA gets into a root list because it is nice and
pretty and bribes its way in? This was the old way, pre-1995. Or
there is an objective test that presents every CA with an equivalent
hurdle? This was the post-1995 way.
There's no dichotomy here. Cash payments can make a fantastically
objective test.
There's no easy answer to this. Really, the question being asked is
wrong.
Yeah.
The question really should be something like "do we need a
centralised root list?"
Well, something is going to get shipped with the browser, even if it's
something small and just used to bootstrap the more general system.
How about these questions:
When is a centralized root list necessary and when can it be avoided?
How can the quality of root CAs be improved?
How can the number of root CAs be reduced in general?
How can the number of root CAs be reduced in specific situations?
and most importantly:
How can we give the people who need it the skills and information needed
to assess the security of their connection?
This is the geek's realisation that they cannot control their list of
"trusted" CAs.
It's more prosaic than you make it sound.
When engineers sit down to review the security of a real-world product,
often with sharp people from the customer's side present, occasionally
someone thinks to ask the question: "OK, so supposing there are no
killer defects in the implementation, and all the crypto works as
expected, who has keys to the damn thing?"
If the product's implementation relies on SSL/TLS (e.g., has a
management port with a web interface), then be prepared to have this
conversation.
To me this is a validation of the cypherpunks' foresight in taking the
attack model at face value. What was once considered spy-fantasy
paranoia by many is, in reality, a textbook engineering calculation
after all.
Their judgement is undermined, as MS Windows' root list has gone the
next step to dynamic control, which means that the users' ability to
verify the root is undermined a bit more by their having no way to
stop future dynamic enhancements.
You can go to "Add/remove Windows Components" (or whatever they call it
these days) and remove the "Automatic Certificate Update" feature. But
if you do, be prepared to troubleshoot some pretty mysterious breakages
many months later, after you've forgotten all about it.
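If you'd rather script it, a hedged sketch: this assumes the
group-policy registry value Microsoft documents for "Turn off Automatic
Root Certificates Update", so check it against the docs for your
Windows version before relying on it:

    # Hedged sketch: write the documented policy value that disables
    # automatic root certificate updates. Needs admin rights; the key
    # and value names are assumptions to verify for your Windows version.
    import winreg

    key = winreg.CreateKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SOFTWARE\Policies\Microsoft\SystemCertificates\AuthRoot",
    )
    winreg.SetValueEx(key, "DisableRootAutoUpdate", 0, winreg.REG_DWORD, 1)
    winreg.CloseKey(key)

Either way, the same mysterious-breakage caveat applies.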
In practice, if we assume a centralised root list, this is probably
the better result.
Maybe sometimes. But when?
This is very hard to quantify because it's all theoretical until the
instant that the client software tries to make a connection to a
specific server and receives a specific certificate from the next-hop
router. Does the client software accept the connection or fail it and
tell the user that they're possibly being attacked?
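To make that instant concrete, here's a minimal Python sketch; the
hostname is a placeholder, and the except branch is exactly the
decision point in question:

    # Minimal sketch of the moment of truth: either the presented
    # certificate chain validates against the shipped root list, or
    # the client has to decide what to tell the user.
    import socket, ssl

    host = "example.com"  # placeholder for the specific server
    ctx = ssl.create_default_context()  # trusts whatever roots shipped

    try:
        with socket.create_connection((host, 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                print("accepted:", tls.getpeercert()["subject"])
    except ssl.SSLCertVerificationError as err:
        # Fail and show the scary page, or accept silently?
        print("possible attack:", err)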
From a UI designer's perspective, this is as close to a "launch the
nuclear missiles" moment as they're ever likely to encounter, because
showing the scary page to a browser user instead of the page they
requested probably seems pretty much like the end of the world to these
people.
Here's an example of some thinking by UI design types. It's obviously
biased, but it confirms my own biased experience :-) so I'll link it:
http://www.reddit.com/r/pics/comments/hvuhg/apple_why/c1yuah6
It works quite simply: 1 billion users don't check the root list, at
all. They rely entirely on the ueber-CA to generate a good root
list.
Isn't this basically the system we have now with the browser vendor
acting as the ueber-CA?
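(For the curious, a quick sketch of what the ueber-CA handed you, using
Python's ssl module and the platform's default trust store; exact
contents vary by OS and vendor:)

    # Sketch: count the CA certificates in the default trust store a
    # TLS client would use -- the root list almost nobody checks.
    import ssl

    ctx = ssl.create_default_context()  # loads the shipped defaults
    roots = ctx.get_ca_certs()
    print(len(roots), "trusted roots, for example:")
    for cert in roots[:3]:
        subject = dict(rdn[0] for rdn in cert["subject"])
        print(" ", subject.get("organizationName", subject))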
A tiny fraction of that number (under 1 million, or 0.1%) know about
something called a root list, something perversely called "trust"
bits, and the ability to fiddle those bits. They do that, and imagine
that they have achieved some higher level of security. But, this
technique has difficulty establishing itself as anything more than a
placebo.
Any model that offers a security feature to a trivially tiny
minority, at the expense of the dominant majority, is daft.
Heh. Unless the dominant majority has a dependency on this tiny minority
that they are unaware of:
http://www.cisco.com/en/US/products/sw/secursw/ps5318/index.html :
Cisco Router and Security Device Manager (SDM) is a Web-based
device-management tool for Cisco routers [...]
It ships preinstalled on all new Cisco 850 Series,
Cisco 870 Series, Cisco 1800 Series, Cisco 2800 Series, and Cisco
3800 Series integrated services routers.
The people who legitimately manage Cisco routers are a tiny minority,
perhaps you would consider them "trivial". Yet the dominant majority of
users' internet traffic is passing through a Cisco router at some point.
Now maybe it's different for ISP core router admins, but the existence
of this product strongly implies that at least some admins are
connecting to their router with their web browser over HTTPS and typing
in the same password that they use via SSH.
Gee, how the heck do you suppose Cisco admins are going to get a unique
and valid certificate on every router like that?
A lower layer (IP routing) depending on a higher layer (TCP port 443 and
PKI) for its secure functioning. This is the type of layering violation
and cyclic dependency that will take down the entire internet if we're
not careful. Let's hope it doesn't take down the power grid along with it.
The logical conclusion of 1.5 decades worth of experience with
centralised root lists is that we, in the aggregate, may as well
trust Microsoft and the other root vendors' root list entirely.
The party that decides the quality of the authentication is the party
that writes the check for it. This is often not the party with the most
to lose and is almost never the end user.
In other words, Microsoft and Mozilla are looking out for their own
interests. Their interests diverge in significant ways from the
interests of secure sites, ISPs, corporate firewall admins, governments,
and even their own end users.
Or: find another model. Change the assumptions. Re-do the security
engineering.
We need some new ideas, that's for sure.
- Marsh
_______________________________________________
cryptography mailing list
[email protected]
http://lists.randombit.net/mailman/listinfo/cryptography