Re: [cryptography] this house believes that user's control over the root list is a placebo

2011-06-26 Thread Ralph Holz
Hi,

 Any model that offers a security feature to a trivially tiny minority,
 to the expense of the dominant majority, is daft.  The logical
 conclusion of 1.5 decades worth of experience with centralised root
 lists is that we, in the aggregate, may as well trust Microsoft and the
 other root vendors' root list entirely.
 
 Or: find another model.  Change the assumptions.  Re-do the security
 engineering.

You have avoided the wording "find a better model" - intentionally so?
Because such work would only be meaningful if we could show we have
achieved an improvement by doing it.

Which brings us to the next point: how do we measure improvement? What
we would need - and don't have, and likely won't have for another long
while - are numbers that are statistically meaningful.

On moz.dev.sec.policy, a proposal is on the table that CAs must publicly
disclose security incidents and breaches. This could actually be a good
step forward. If the numbers show that incidents are far more frequent
than generally assumed, this would get us away from the low frequency,
high impact scenario that we all currently seem to assume, and which is
so hard to analyse. If the numbers show that incidents are very rare -
fine, too. Then the current model is maybe not too bad (apart from the
fact that one foul apple will still spoil everything, and government
interference will still likely remain undetected).

The problem is that CAs object to disclosure on the simple grounds that
public disclosure hurts them. Even StartCom, otherwise keen to present
a clean record, has not yet disclosed what happened on June 15th this year.

Mozilla seems to take the stance that incidents should, at most, be
disclosed to Mozilla, not the general public. While understandable from
Moz's point of view - you don't want to hurt the CAs too badly if you
are a vendor - it still means researchers won't get the numbers they
need. And the circle closes - no numbers, no facts, no improvements,
other than those subjectively perceived.

Ralph



signature.asc
Description: OpenPGP digital signature
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] this house believes that user's control over the root list is a placebo

2011-06-26 Thread James A. Donald

On 2011-06-26 7:50 PM, Ralph Holz wrote:

On moz.dev.sec.policy, the proposal is out that CAs need to publicly
disclose security incidents and breaches. This could actually be a good
step forward. If the numbers show that incidents are far more frequent
than generally assumed, this would get us away from the low frequency,
high impact scenario that we all currently seem to assume, and which is
so hard to analyse. If the numbers show that incidents are very rare -
fine, too. Then the current model is maybe not too bad (apart from the
fact that one foul apple will still spoil everything, and government
interference will still likely remain undetected).


The most common security breach is probably that a government or 
powerful private group launches a man in the middle attack.  Are CAs 
going to report that?  Seems unlikely.


On tor, a website is identified by the hash of its public key.

Thus the infamous silk road is: http://ianxz6zefk72ulzz.onion/index.php

If it had been on the regular web, in very short order, it would have 
been redirected to the DEA, and the CAs would have given the DEA a 
certificate.



Re: [cryptography] this house believes that user's control over the root list is a placebo

2011-06-26 Thread Ralph Holz
Hi,

 The most common security breach is probably that a government or
 powerful private group launches a man in the middle attack.  Are CAs
 going to report that?  Seems unlikely.

The key word in your sentence is "probably". Just how much is that?

I'm not saying I'm not with you in the general argument, but I am saying
that in order to compare one model with another, we need more facts, and
less belief.

Ralph





Re: [cryptography] this house believes that user's control over the root list is a placebo

2011-06-26 Thread Marsh Ray

On 06/25/2011 03:48 PM, Ian G wrote:

On 21/06/11 4:15 PM, Marsh Ray wrote:


This was about the CNNIC situation,


Ah, the "I'm not in control of my own root list" threat scenario.

See, the thing there is that CNNIC has a dirty reputation.


That's part of it. But there are some deeper issues.

Deeper issue A applies equally if you *are* the government of China.
Would it make sense for you to trust root CAs controlled by other
governments? Of course, this might seem a more academic question if you
are in China, since your OS is likely MS Windows, made in the US, anyway.

Deeper issue B is a simple engineering failure calculation. Even if you
only trust reliable CAs that will protect your security 99 years out of
100 (probably a generous estimate of CA quality), then with 100 such
roots you can expect to be pwned 63% of the time.
(1 - 0.99^100) = 0.63
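That failure calculation can be checked in a couple of lines (a sketch; the 99-years-in-100 reliability figure is an assumption for illustration, not measured CA data):

```python
# Chance that at least one of n independently-failing trusted roots
# betrays you, if each root protects you with probability p_ok.
def p_any_compromise(p_ok: float, n_roots: int) -> float:
    return 1.0 - p_ok ** n_roots

print(round(p_any_compromise(0.99, 100), 2))  # -> 0.63
```

The point of the sketch is how fast the risk compounds: the trust of the whole root list is only as strong as its weakest member, and every added root multiplies in another chance of failure.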


But CNNIC passed the test to get into the root lists.


That tells me it was a bad test.


Which do you want? A CA gets into a root list because it is nice and
pretty and bribes its way in? This was the old way, pre-1995. Or
there is an objective test that sets all CAs an equivalent hurdle to
pass? This was the post-1995 way.


There's no dichotomy here. Cash payments can make a fantastically
objective test.


There's no easy answer to this. Really, the question being asked is
wrong.


Yeah.


The question really should be something like "do we need a
centralised root list?"


Well something is going to get shipped with the browser, even if it's
something small and just used to bootstrap the more general system.

How about these questions:
When is a centralized root list necessary and when can it be avoided?
How can the quality of root CAs be improved?
How can the number of root CAs be reduced in general?
How can the number of root CAs be reduced in specific situations?

and most importantly:
How can we give the people who need it the skills and information needed
to assess the security of their connection?


This is the geek's realisation that they cannot control their list of
 trusted CAs.


It's more prosaic than you make it sound.

When engineers sit down to review the security of a real-world product,
often with sharp people from the customer's side present, occasionally
someone thinks to ask the question: "OK, so supposing there are no
killer defects in the implementation, and all the crypto works as
expected, who has keys to the damn thing?"

If the product's implementation relies on SSL/TLS (e.g., has a
management port with a web interface), then be prepared to have this
conversation.

To me this is a validation of the cipherpunks' foresight of taking the
attack model at face value. What was once considered spy-fantasy
paranoia by many is, in reality, a textbook engineering calculation
after all.


Their judgement is undermined, as MS Windows' root list has gone the
next step to dynamic control, which means that the users' ability to
verify the root is undermined a bit more by not having an ability to
stop the future dynamic enhancements.


You can go to Add/Remove Windows Components (or whatever they call it
these days) and remove the "Automatic Certificate Update" feature. But
if you do this you need to be prepared to troubleshoot some pretty
mysterious breakages many months later, after you've forgotten about it.


In practice, if we assume a centralised root list, this is probably
the better result.


Maybe sometimes. But when?

This is very hard to quantify because it's all theoretical until the
instant that the client software tries to make a connection to a
specific server and receives a specific certificate from the next-hop
router. Does the client software accept the connection or fail it and
tell the user that they're possibly being attacked?

From a UI designer's perspective, this is as close to a "launch the
nuclear missiles" moment as they're ever likely to encounter, because
showing the scary page to a browser user instead of the page they
requested probably seems pretty much like the end of the world to these
people.

Here's an example of some thinking by UI design types. It's obviously
biased, but it confirms my own biased experience :-) so I'll link it:
http://www.reddit.com/r/pics/comments/hvuhg/apple_why/c1yuah6


It works quite simply: 1 billion users don't check the root list, at
 all. They rely entirely on the ueber-CA to generate a good root
list.


Isn't this basically the system we have now with the browser vendor
acting as the ueber-CA?


A tiny fraction of that number (under 1 million, or 0.1%) know about
something called a root list, something perversely called "trust
bits", and the ability to fiddle those bits. They do that, and imagine
that they have achieved some higher level of security. But this
technique has difficulty establishing itself as anything more than a
placebo.

Any model that offers a security feature to a trivially tiny
minority, to the expense of the dominant majority, is daft.


Heh. Unless the dominant 

Re: [cryptography] this house believes that user's control over the root list is a placebo

2011-06-26 Thread Jonathan Thornburg
On Sun, 26 Jun 2011, Marsh Ray wrote:
 How about these questions:
 When is a centralized root list necessary and when can it be avoided?
 How can the quality of root CAs be improved?
 How can the number of root CAs be reduced in general?
 How can the number of root CAs be reduced in specific situations?

I think the last of these is very important, because it's the difference
between

[today]
I want to connect to https://www.bank.com or https://www.airline.com.
If *any* CA in the world has falsely issued a certificate for that domain,
then I could be talking to a phisher or MITM and be none the wiser.

[if we used certificates a bit more wisely]
I want to connect to https://www.bank.com or https://www.airline.com.
If bank.com's or airline.com's CA has falsely issued a certificate for
that domain, then I could be talking to a phisher or MITM and be none
the wiser.

The latter is far from perfect, but it's a lot better than the former.
I think the ssh model (cross your fingers the first time you connect,
but then remember the info so future connections are safer if that first
time was actually ok) has a lot of potential.  I think there's a firefox
extension that does this for certificates, but I forget its name...
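The ssh-style approach is simple enough to sketch (hypothetical helper; a real client would persist the store to disk and feed in the DER-encoded certificate from the TLS handshake):

```python
import hashlib

def check_tofu(store: dict, host: str, cert_der: bytes) -> str:
    """Trust-on-first-use: remember a certificate's fingerprint the
    first time we see a host, then flag any later change."""
    fp = hashlib.sha256(cert_der).hexdigest()
    if host not in store:
        store[host] = fp                 # first contact: cross fingers
        return "first-use"
    return "ok" if store[host] == fp else "MISMATCH"

store = {}
print(check_tofu(store, "www.bank.com", b"cert-seen-on-day-one"))  # first-use
print(check_tofu(store, "www.bank.com", b"cert-seen-on-day-one"))  # ok
print(check_tofu(store, "www.bank.com", b"cert-from-a-mitm"))      # MISMATCH
```

The weakness is exactly the one named above: the first connection is unauthenticated. But any MITM introduced *after* that point is detected, without trusting any CA at all.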

ciao,

-- 
-- Jonathan Thornburg [remove -animal to reply] 
jth...@astro.indiana-zebra.edu
   Dept of Astronomy  IUCSS, Indiana University, Bloomington, Indiana, USA
   Washing one's hands of the conflict between the powerful and the
powerless means to side with the powerful, not to be neutral.
  -- quote by Freire / poster by Oxfam


Re: [cryptography] this house believes that user's control over the root list is a placebo

2011-06-26 Thread The Fungi
On Sun, Jun 26, 2011 at 12:26:40PM -0500, Marsh Ray wrote:
[...]
 Now maybe it's different for ISP core router admins, but the
 existence of this product strongly implies that at least some admins
 are connecting to their router with their web browser over HTTPS and
 typing in the same password that they use via SSH.
[...]

Valid point, but flawed example. Managing these things day in and
day out, I can tell you this is the first thing any experienced
admin disables when initially configuring the device. If your admin
is managing your routers with a Web interface, SSL MitM is the
*least* of your worries, honestly.
-- 
{ IRL(Jeremy_Stanley); WWW(http://fungi.yuggoth.org/); PGP(43495829);
WHOIS(STANL3-ARIN); SMTP(fu...@yuggoth.org); FINGER(fu...@yuggoth.org);
MUD(kin...@katarsis.mudpy.org:6669); IRC(fu...@irc.yuggoth.org#ccl);
ICQ(114362511); YAHOO(crawlingchaoslabs); AIM(dreadazathoth); }


Re: [cryptography] this house believes that user's control over the root list is a placebo

2011-06-26 Thread Marsh Ray

On 06/26/2011 01:13 PM, The Fungi wrote:

On Sun, Jun 26, 2011 at 12:26:40PM -0500, Marsh Ray wrote: [...]

Now maybe it's different for ISP core router admins, but the
existence of this product strongly implies that at least some
admins are connecting to their router with their web browser over
HTTPS and typing in the same password that they use via SSH.

[...]

Valid point, but flawed example. Managing these things day in and day
out, I can tell you this is the first thing any experienced admin
disables when initially configuring the device.


But what about all the other admins? :-)

You're probably right today; the guys running the core routers are some
of the best. This web management thing seems to be targeted at
small/medium non-ISP businesses.

But what about after a few more rounds of IT people graduate from
courses and certification programs which now divert time from the old
command-line stuff to teach the new web management functionality?

What if functionality gets released for which there is no command-line
interface?

What about all the other datacenter gear plugging into trusted segments?

What about the other makes of routers? Well, Juniper, that is.
Hmmm...

http://www.juniper.net/us/en/products-services/software/network-management-software/j-web/



http://www.redelijkheid.com/blog/2011/3/11/configure-ssl-certificate-for-juniper-j-web-interface.html
By default, the J-Web interface (GUI for the Juniper SRX firewalls)
has SSL enabled. Like most devices with SSL out-of-the-box, the
protection is based on a self-signed certificate. Self-signed
certificates are easy (they come basically out-of-the-box), but they
tend to nag you every time you connect to the GUI. So, it's time to
install a proper certificate.


OK, good, so this guy is going to make a cert for his router! He even 
shows you how to use the subject alternative name to make it so you can 
connect to it via the raw IP address 192.168.1.254!


Anyone else see any problems with that? :-)


http://www.instantssl.com/ssl-certificate-products/ssl/ssl-certificate-intranetssl.html
Intranet SSL Certificates allow you to secure internal servers with SSL issued to
either a Full Server Name or a Private IP Address. [...]
Trusted by all popular browsers.


Comodo to the rescue! I wonder how many people they'll be willing to 
sell the same IP address to.


On 06/26/2011 01:13 PM, The Fungi wrote:

If your admin is managing your routers with a Web interface, SSL MitM
is the *least* of your worries, honestly.


:-)

It's only the least of your worries until somebody gets around to
exploiting it, at which point it may be the greatest of your worries.

A lot of systems are set up with RADIUS/TACACS centralized
authentication. In these cases there are many admins with access to many
routers and other pieces of equipment. The bad guy only needs to
convince the high-level admin to use his password once on the
least-important piece of equipment.

A self-propagating router MitM would make for a very interesting and
scary worm. Hopefully such a thing would first start out on some small
home routers and give time to raise awareness for those with login
credentials on the big ones.

- Marsh


Re: [cryptography] this house believes that user's control over the root list is a placebo

2011-06-26 Thread Ian G

On 26/06/11 5:50 AM, Ralph Holz wrote:

Hi,


Any model that offers a security feature to a trivially tiny minority,
to the expense of the dominant majority, is daft.  The logical
conclusion of 1.5 decades worth of experience with centralised root
lists is that we, in the aggregate, may as well trust Microsoft and the
other root vendors' root list entirely.

Or: find another model.  Change the assumptions.  Re-do the security
engineering.


You have avoided the wording find a better model - intentionally so?


:)  It's very hard to word proposals that go against the beliefs of many 
without being inflammatory.  If it is too inflammatory, nobody reads it, 
even if it is right.  We lose another few years...



Because such work would only be meaningful if we could show we have
achieved an improvement by doing it.


Yeah.  So we have a choice:  improve the overall result of the current 
model, or try another model.


The point of the subject line is that certain options are fantasy.  In 
the current model, we're rather stuck with a global solution.


So, fixing it for CNNIC is ... changing the model.


Which brings us to the next point: how do we measure improvement? What
we would need - and don't have, and likely won't have for another long
while - are numbers that are statistically meaningful.


Right, indeed.  The blind leading the blind :)


On moz.dev.sec.policy, the proposal is out that CAs need to publicly
disclose security incidents and breaches.


Yes, but they (we) haven't established why or what yet.


This could actually be a good
step forward. If the numbers show that incidents are far more frequent
than generally assumed, this would get us away from the low frequency,
high impact scenario that we all currently seem to assume, and which is
so hard to analyse. If the numbers show that incidents are very rare -
fine, too. Then the current model is maybe not too bad (apart from the
fact that one foul apple will still spoil everything, and government
interference will still likely remain undetected).


Except, we've known that the number of security patches released by 
Microsoft tells us ... nothing.  We need more than "numbers" and 
"research" to justify a disclosure.



The problem is that CAs object to disclosure on the simple grounds that
public disclosure hurts them. Even StartCom, otherwise keen to present
a clean record, has not yet disclosed what happened on June 15th this year.


Yes, it's hilarious isn't it :)


Mozilla seems to take the stance that incidents should, at most, be
disclosed to Mozilla, not the general public. While understandable from
Moz's point of view


Mozo are doing it because it makes them feel more in control.  They are 
not as yet able to fully explain what the benefit is.  Nor what the 
costs are.



- you don't want to hurt the CAs too badly if you
are a vendor - it still means researchers won't get the numbers they
need. And the circle closes - no numbers, no facts, no improvements,
other than those subjectively perceived.



OK.  So we need to show why researchers can benefit us with those numbers :)

(IMHO, the point is nothing to do with researchers.  It's all to do with 
reputation.  It's the only tool we have.  So disclosure as a blunt 
weapon might work.)




iang


Re: [cryptography] this house believes that user's control over the root list is a placebo

2011-06-26 Thread Marsh Ray

On 06/26/2011 05:58 PM, Ian G wrote:


On 26/06/11 5:50 AM, Ralph Holz wrote:

- you don't want to hurt the CAs too badly if you are a vendor


Vendors spend all day long talking internally and with other vendors.
Consequently, they tend to forget who holds the real money.

For most healthy vendors in a market economy, that's the customers. 
Browsers seem to live on a planet without the usual market forces however.


In the case of Mozilla, 97% of their revenue comes from royalties

http://www.mozilla.org/foundation/documents/mf-2009-audited-financial-statement.pdf

of which 86% is one contract. It's a safe bet that's probably Google.
That contract is said to expire in November, and Google now makes a 
competing browser.


Google seems to care more about actual security than Mozilla. Last I 
checked Mozilla didn't even bother to sign all the addons for their own 
package system, whereas we see Google doing things like pinning their 
own certs in the Chrome codebase.


Maybe that's because Google actually runs services that people use (e.g. 
Gmail).



- it still means researchers won't get the numbers they need. And
the circle closes - no numbers, no facts, no improvements, other
than those subjectively perceived.


OK. So we need to show why researchers can benefit us with those
numbers :)


Because having a system that's credibly secure will increase
adoption among organizations with money.

You can't credibly claim to defend against earthquakes while keeping 
seismic resiliency data secret.



(IMHO, the point is nothing to do with researchers. It's all to do
with reputation. It's the only tool we have. So disclosure as a blunt
weapon might work.)


Nothing undermines credibility and trust like public denials and secrecy.

CAs seem to think they can act like nuclear power plant operators or 
something. But NPPs at least produce electric power! On the other hand, 
every additional trusted root beyond the necessary minimum represents 
pure risk.


The general public and those who defend networks understand the need to 
take active network attacks seriously far more than they did just a year 
or two ago.


- Marsh