On Tue, Feb 14, 2012 at 2:55 PM, Martin Rex <[email protected]> wrote:
We can continue to outlaw it, in which case it will continue to exist
outside of our sight.

There are two solutions for this type of "usage".

- Provide terminal servers that you monitor, to which your users have
 to dial-in when they want to connect to the outside.

- set up IPsec on your network and disallow the use of TLS on your
 internal network (require them to use an application gateway
 similar to a HTTP CONNECT proxy)


TLS was not designed to provide wiretapping

  http://tools.ietf.org/html/rfc2804

TLS was designed to prevent undetectable wiretapping.  It wasn't designed to 
prevent wiretapping by one of the endpoints themselves.  And it is 
entirely legitimate for a company to install keyloggers and screen recorders on 
their computers as long as they get employee signatures that they have been 
informed of such.  (I worked somewhere that did this, a large life insurance 
company.)

Regardless of whether it was designed to permit one thing or the other, the fact is that 
it is being used that way.  We can't rely on standards of intentional purity; we have to 
engineer something that works.  (If it doesn't work, we're not engineers, we're 
"money sinks".)

What is currently exploited is a serious flaw in the security of
the trust model used by TLS.  And it can be abused by evil just as
easily; see the DigiNotar hack and what it was used for.

What was exploited is a serious flaw, yes.  And authoritative-CA-issued certificates will 
always be high-value items.  However, I don't really think the flaw is in the trust model 
"used by TLS".

The flaw is in the trust model implemented by the browsers, which (taking their cue from 
this insistence on a binary "Correct" or "Crap") categorically cannot acknowledge different 
authorities for different tasks, instead forcing everything into a very simplistic "does it 
verify to our trust store?  Keep the UI uncluttered; if it verifies to the trust store, it's 
okay for everything".

Put another way, X.509 itself says that the privilege management infrastructure will 
almost always be completely separate from the certification infrastructure.  Why are we 
using "the simple existence of a chain which goes up to one of the many authorities 
we accept" as the marker for an error-free UI?

Keeping rfc2804 in mind, when fixing the weaknesses in the trust model
of TLS, maintaining wiretapping capabilities is *NOT* appropriate.

I think it's incredibly naive to adhere to the directions and demands of a 
document created on the cusp of a technology which suddenly had a much broader 
potential than had ever been considered before.  We've got the Law of 
Deployment Inertia against us: if we try to create something which breaks 
compatibility with existing systems, nobody will upgrade.

I'm all for idealism -- believe me, I am an idealist of the highest order.  My main 
ideal: we must be able to communicate freely, without interference or unacknowledged 
eavesdropping.  We've had a lot of experiences in the intervening 15 years, and we should 
learn from them.  We should probably also avoid that definition of insanity, "doing 
the same thing and expecting a different result".

We can continue the slippery slope toward a state-mandated fully-tagged and 
fully-state-identified communications regime by continuing to specify Absolute 
Correctness.  (Law is the only actual mandate, so the only place to mandate any 
particular version of Correctness is the legislature.)  Alternatively, 
we can do what we're supposed to be doing, which is authenticate the public key 
used by the endpoint and authenticate the signatures on the certificate 
chain(s) which terminate there, and somehow communicate that to the user so 
that someone who's under attack has a fighting chance to realize it.

The only tool a person under attack has to aid in that defense is his 
computer.  Is it appropriate for the IETF to limit the applicability of its 
protocols, and snub any implementor who exists in a world that we can't imagine?

We aren't states.  We aren't lawmakers.  We aren't regulators.  We aren't enforcers.  
(...unless employed by or as such.)  Your idea of "where the limit should be" 
doesn't change the fact that there will always exist legitimate reasons to venture beyond 
it.  We must acknowledge that, and find a compromise which acknowledges that people 
simply will be presented with trustworthy authoritative CAs, untrustworthy pseudo-CAs, 
and trustworthy axiomatic bindings, and that all three categories will be commonplace events.  
Not acknowledging it only means that when these inevitable cases happen, our users and 
implementors are left without guidance and without compatibility.

I have no problems at all with the output of the current authoritative CAs.  I only have 
a problem with how that output is interpreted and applied.  Getting a certificate from a 
CA (which, by the way, makes it rather difficult to say "I'm trying out this 
technology, this signature doesn't mean anything") should not be a prerequisite, or 
else the security capabilities will not be used.  Nobody knows the value of encryption, 
because we've made it something they can't use.

All we (as users) really need to know is when we do or don't have a 
trustworthy authoritative signature chain.  We still derive benefit from 
opportunistic encryption; it prevents wide-scale trawling, forcing resources to 
be spent on targeted attacks.

How and why is the lack of a trustworthy signature chain more alarming than the lack of 
encryption or authentication at all?  At the very least, a key expresses a useful 
identity: "if it's verifiable with the same public key, it was signed by the holder 
of the same private key".  Key continuity is the approach that the One Laptop Per 
Child project used.
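For concreteness, key continuity ("trust on first use") is simple enough to sketch.  The
class below is a hypothetical illustration of the idea, not OLPC's actual code: pin the
SHA-256 fingerprint of the first public key a peer presents, and flag any later change.
The class and method names are my own; a real client would persist the store and pin the
DER-encoded SubjectPublicKeyInfo taken from the TLS handshake.

```python
import hashlib


class KeyContinuityStore:
    """Trust-on-first-use pinning: remember the first public key seen
    for each peer, and report whether later connections match it."""

    def __init__(self):
        # hostname -> hex SHA-256 fingerprint of the pinned public key
        self._pins = {}

    @staticmethod
    def fingerprint(public_key_der: bytes) -> str:
        return hashlib.sha256(public_key_der).hexdigest()

    def check(self, host: str, public_key_der: bytes) -> str:
        fp = self.fingerprint(public_key_der)
        known = self._pins.get(host)
        if known is None:
            self._pins[host] = fp      # first use: pin it
            return "new"
        if known == fp:
            return "continuity"        # same holder of the same private key
        return "changed"               # key rotation -- or an interception
```

Note that "changed" is exactly the alarm condition: it doesn't tell you *who* the peer is,
only that it is no longer the holder of the private key you spoke to before, which is the
identity claim made in the paragraph above.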

-Kyle H


_______________________________________________
therightkey mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/therightkey
