Thanks for bringing up these points.
On 03/13/2013 01:53 AM, Steve Weis wrote:
> At the core of this proposal, sites run their own CAs and users
> install site-specific client-side certificates. Many organizations
> have been doing this for years. For example, MIT:
> http://ist.mit.edu/certificates .
In these organisations people need to know each other by name. MIT is
not going to sign a certificate unless it knows the person is
affiliated with it. By signing, it states that the person belongs to
MIT. That's a different (and good) use of client certificates.
Eccentric offers client certificates that let users stay anonymous
while creating an account. It's like the difference between email
accounts at google.com and gmail.com: I assume that mail from the
first comes from a Google employee, but mail from the second does not.
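To make the mechanics concrete, here is a minimal sketch of a
site-local CA signing a certificate for a self-chosen nickname, using
a recent version of Python's 'cryptography' library. The host name and
the nickname format are only illustrative, not part of any spec; the
point is that the certificate binds nothing but the pseudonym the user
picked at that site.

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    # The site's own CA key (normally loaded from disk; generated here
    # to keep the example self-contained).
    ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    ca_name = x509.Name(
        [x509.NameAttribute(NameOID.COMMON_NAME, "CA.example.org")])

    # In the real protocol the user's key pair stays in the browser;
    # both keys are generated in one script here only for brevity.
    user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # The only identity bound into the certificate is the nickname
    # chosen at this site: no real name, no email address, nothing
    # linkable to the person.
    subject = x509.Name(
        [x509.NameAttribute(NameOID.COMMON_NAME, "alice@@example.org")])

    client_cert = (
        x509.CertificateBuilder()
        .subject_name(subject)
        .issuer_name(ca_name)
        .public_key(user_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow()
                         + datetime.timedelta(days=365))
        .sign(ca_key, hashes.SHA256())
    )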
> I like client certificates as an additional factor in general, but
> user enrollment across multiple devices, browser and platform
> compatibility, and revocation of lost devices are a pain. I think the
> biggest adoption of client certificates has been in large
> organizations with managed devices and support staff.
Current browsers handle client certificates very poorly. I believe
that is the reason only large organisations can deploy them
successfully.
I'm also starting to believe that browsers are the biggest Trojan
horse on the computer. They're written by advertisers and software
companies that want world domination over our privacy. And people
wonder why there is no privacy? :-)
> Incidentally, there have been attacks to use client certificates as
> persistent "supercookies" to track users, but I don't know the current
> state of how browsers handle this. Here's an old PoC:
> http://0x90.eu/ff_tls_poc.html . Firefox 4 at least prompts you before
> dumping your cert to https://www.apache-ssl.org/cgi/cert-export .
This is quite a big concern. I counter that risk by specifying that
the browser must not log in automatically to any site; logging in is a
user decision. My idea: only when the user browses to a site (a
deliberate action) may the browser log in automatically, and only when
the user has specified that. When the site is merely linked from
another page, the browser must prevent linkability. See:
https://blog.torproject.org/blog/improving-private-browsing-modes-do-not-track-vs-real-privacy-design.
(There is so much to write before this becomes a real protocol.)
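Roughly, and only as a sketch (the navigation types and the preference
store are hypothetical, not an existing browser API), the decision
could look like this:

    from enum import Enum

    class NavigationType(Enum):
        TYPED = "typed"        # user typed the address or used a bookmark
        LINK = "link"          # user followed a link from another page
        EMBEDDED = "embedded"  # loaded by another page (img, script, iframe)

    def should_offer_client_cert(origin, navigation, user_prefs):
        # Never present a certificate for third-party/embedded loads:
        # that is exactly the "supercookie" tracking channel.
        if navigation is NavigationType.EMBEDDED:
            return False
        # Following a link from another page must not be linkable either.
        if navigation is NavigationType.LINK:
            return False
        # Deliberate navigation: only log in if the user has explicitly
        # enabled automatic login for this origin.
        return user_prefs.get(origin, {}).get("auto_login", False)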
> The author also claims this could prevent cross-site scripting with
> a "cryptographic same origin policy". I don't buy that, since XSS
> attacks could still be served from sites with valid certificates. If
> someone has a vulnerable web app, it's still going to be vulnerable.
The problem comes from the 'we trust the world' model in browsers. The
fact that a piece of JavaScript comes from a site that has a
certificate does not imply it is trusted. That's the mistake current
browsers make.
The Cryptographic Same Origin Policy states that all servers with
certificates from the local CA are a single trust-domain. Resources
served from other sites (signed by a different CA) have a different
trust-domain.
The browser is able to distinguish between the two trust domains; if
it doesn't take heed of that distinction, it's broken.
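As a rough sketch of that check (using Python's 'cryptography' library
only to compare issuer names; how the browser then isolates the two
trust domains is a much bigger question):

    from cryptography import x509

    def same_trust_domain(cert_pem_a: bytes, cert_pem_b: bytes) -> bool:
        cert_a = x509.load_pem_x509_certificate(cert_pem_a)
        cert_b = x509.load_pem_x509_certificate(cert_pem_b)
        # Same issuer (the site's local CA) means same trust domain.
        # A different issuer means the script or image came from a
        # foreign trust domain and must not get first-party privileges.
        return cert_a.issuer == cert_b.issuer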
> Finally, this proposal requires changes to server-side authentication
> and potentially in browsers themselves. Sites don't typically change
> their authentication system unless it drives user adoption (e.g.
> OpenID or Facebook Connect) or is needed for security (e.g. 2-factor
> auth). I don't see any incentives for adoption here.
Indeed, it needs changes at the server and many more at the browser.
Driving adoption is the difficult part (as always). That's why I
brought it to this list.
With regards, Guido Witmond.