Something else you all need to consider, which no one is currently talking
about: root of trust and chain of trust from the PC, through the OS, to the
application, and on to the customer (client).

The issue is simple - how do I know I can trust the machine, let alone the
applications running on it? The current answer is a TPM chip. Sorry,
that's not good enough. A TPM allows you to verify that the boot sequence
hasn't been tampered with; after that you need to ensure that you can build
a chain of trust through the firmware, then the operating system, and
finally to the application.
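To make the chain-of-trust idea concrete, here's a toy Python sketch of the TPM-style "measure and extend" operation. The stage names, key values, and hash choice are purely illustrative - this is not any real firmware's behaviour, just the shape of the mechanism:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new value = SHA-256(old value || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

# Each boot stage measures (hashes) the next stage before handing over control.
stages = [b"firmware image", b"bootloader", b"kernel", b"application"]

pcr = b"\x00" * 32  # the PCR starts at all zeros
for code in stages:
    pcr = extend(pcr, hashlib.sha256(code).digest())

good_pcr = pcr  # the value an untampered boot produces

# Tampering with ANY stage changes the final PCR value, so a verifier
# comparing against the known-good value detects it.
pcr = b"\x00" * 32
for code in [b"firmware image", b"EVIL bootloader", b"kernel", b"application"]:
    pcr = extend(pcr, hashlib.sha256(code).digest())

print(pcr != good_pcr)  # True: the tampered chain is detectable
```

Because each value folds in all previous measurements, an attacker can't swap one stage without changing every value downstream of it.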

Currently Windows, Linux and Unix use only two levels of privilege - Ring 3
and Ring 0. Everybody and their uncle's code wants to run at Ring 0. Another
really bad idea: once I introduce a network/video/keyboard/whatever
driver at that level, I can execute malicious code. From there I can control
the machine.

The web of trust starts much lower in the stack than people are currently
talking about. 

What you've outlined below is the following:

Client <----------------> attacker <--------------> Server

Substitute a legitimate "user" for the "attacker", e.g. SSL proxy acceleration
via compression. The man in the middle is now a legitimate user. The benefit
is faster downloads for the customer and conservation of bandwidth. Many
people currently use mod_gzip this way. Works like a champ.

Now of course there's the announcement on News.com today that Blue Coat is
selling a box that sits in the "attacker" spot, decrypts the traffic, and
scans it to ensure there's no spyware, etc.

Foiled again. If you want a web of trust, start at the machine and extend it
via a chain of trust to the operating system and applications, and finally to
the client's desktop. Anything else is simply a band-aid, which can be used
for either malicious or good purposes.


Peter
-----Original Message-----
From: Brian Candler [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 09, 2005 2:22 AM
To: Nick Kew
Cc: dev@httpd.apache.org
Subject: Re: pgp trust for https?

On Tue, Nov 08, 2005 at 12:46:18PM +0000, Nick Kew wrote:
> On Tuesday 08 November 2005 12:02, Brian Candler wrote:
> 
> [twice - please don't]

Not sure what you mean by that. I probably did a group reply, so you got one
copy directly and one via the list. I'm afraid it's impossible to please
everyone on this point: some people insist on getting a direct reply (so
they see it sooner), and some people insist on replies to the list only.
Unfortunately my brain and my MUA are not able to record individual
preferences here, so everyone gets a 'G'roup reply.

> > > I'll sign my server.  Same as I'll sign an httpd tarball if I roll one
> > > for public consumption.  You sign your server.  Where's the problem?
> >
> > The problem is that you'll have no protection against man-in-the-middle
> > attacks, whereby an attacker impersonates you, or intercepts your traffic
> > (decrypting it and re-encrypting it, allowing them to read and/or modify
> > all communication on your supposedly 'secure' connection)
> 
> Nonsense.  The encryption is unaffected by this.  It's only the server
> identity we're verifying.

Which fundamentally affects whether the encryption is actually performing
the job it was intended to do, namely to ensure that the data between the
two endpoints is (a) not visible to any other party, and (b) not subject to
tampering by any other party.

Simple example: attacker inserts a machine between the client and the server
(the archetypal "man in the middle" attack)

   Client <----------------> attacker <--------------> Server

The client negotiates an encrypted session with the attacker, and the
attacker negotiates a separate encrypted session with the server.

The client *thinks* they are talking to the server, and that the session is
strongly encrypted (which it is). The server *thinks* it is talking to the
client, and that the session is strongly encrypted (it is).

In fact, the attacker can read *everything* which goes between the client
and the server - and either just copy it back and forth, or modify any parts
of it that it wishes. Both confidentiality and integrity have failed.
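The failure mode above can be sketched in a few lines of Python. The XOR "cipher" here is a toy stand-in for the real TLS ciphers, and the keys and plaintext are invented; the point is only that each hop is separately and strongly encrypted, yet the attacker in the middle sees everything:

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Toy 'encryption' (one-time-pad-style XOR); real TLS uses AES etc."""
    return bytes(d ^ k for d, k in zip(data, key))

# Without identity verification, the attacker simply negotiates one
# session key with the client and a different one with the server.
key_client_attacker = secrets.token_bytes(64)
key_attacker_server = secrets.token_bytes(64)

# The client "sends" sensitive data, strongly encrypted... to the attacker.
plaintext = b"card=4111111111111111"
wire1 = xor(plaintext, key_client_attacker)

# The attacker decrypts, reads (or modifies) it, then re-encrypts for the server.
seen_by_attacker = xor(wire1, key_client_attacker)
wire2 = xor(seen_by_attacker, key_attacker_server)

# The server decrypts successfully; neither endpoint notices anything wrong.
received = xor(wire2, key_attacker_server)

print(seen_by_attacker == plaintext == received)  # True: confidentiality failed
```

Both "legs" of the connection are genuinely encrypted, which is exactly why encryption alone proves nothing about who is at the other end.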

The fundamental problem is down to identity: can the client be sure it is
talking directly to the server, or is it talking to an imposter? Without
this assurance of identity, encryption is useless. Well, it's not 100%
useless because it protects you against passive attacks (sniffing only), but
you have to assume you're dealing with a *determined* attacker. These
man-in-the-middle attacks are trivial to implement, and there are toolkits
for doing so.

> > For example, an attacker could redirect requests for www.yourdomain.com to
> > their server, perhaps by spoofing the DNS, or by unplugging the cable
> > somewhere between your ISP and your server and inserting their own server.
> > Clients would be none the wiser.
> 
> Nonsense.  My server is signed with my (private) key.  If they've got my
> key and passphrase, then the whole thing is dead, just as if they got
> my verispam certificate.

I think you're getting a bit confused - your "server" is a piece of hardware
plus software, and is not signed (unless you are talking about some sort of
Trusted Computing Platform, which is a different ball game).

Your server has a private key, which you could think of as its digital
identity. You can give out the corresponding public key to anyone, and when
they communicate with you, you can prove that they are communicating
directly and securely with you, because only you hold the private key. The
man-in-the-middle attack is not possible if the client knows your public key
in advance, because the attacker doesn't know your private key; their server
would have a different private key and would therefore reveal their identity
to be different.
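A minimal sketch of what "knowing the public key in advance" (key pinning) looks like in practice. The key bytes and function names here are made up for illustration - a real client would pin the fingerprint of the server's actual DER-encoded key or certificate:

```python
import hashlib

def fingerprint(public_key_der: bytes) -> str:
    """SHA-256 fingerprint of a public key (or certificate) in DER form."""
    return hashlib.sha256(public_key_der).hexdigest()

# The client obtained this fingerprint in advance over a trusted channel
# (the floppy-disk handover described above); the bytes are placeholders.
pinned = fingerprint(b"server public key bytes")

def verify_peer(presented_key_der: bytes) -> bool:
    """Accept the connection only if the presented key matches the pin."""
    return fingerprint(presented_key_der) == pinned

print(verify_peer(b"server public key bytes"))    # True: the genuine server
print(verify_peer(b"attacker public key bytes"))  # False: MITM detected
```

The attacker can generate as many key pairs as they like; none of them will hash to the pinned fingerprint.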

If you have made prior arrangements with everyone that you communicate with,
so that they have a trustworthy copy of your public key (e.g. you handed it
to them in person on a floppy disk, and at the same time they were able to
prove to your satisfaction that the person in front of you was not an
imposter acting on behalf of an attacker), then your communication with
those parties is secure. Good for you.

So what about signing? Well, you could sign your own public key with your
own private key (self-signed certificate) or with a separate private key
which you keep for signing purposes (set up your own CA).

However the attacker can do the same on their server, so it's still not
possible to tell the difference between talking to the attacker or talking
to you unless the client already knows your public key. The signing doesn't
make any difference here, *unless* the client already has a copy of the
public key corresponding to the private key you used for signing, in which
case the signature validates the key. So now the client doesn't need your
server's public key, but they still do need to know your signing public key.
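A toy illustration of why the signature alone doesn't help. HMAC stands in here for a real asymmetric signature (a real CA would use RSA or ECDSA), purely to show the trust relationship; all the key values are invented:

```python
import hashlib
import hmac

def sign(signing_key: bytes, server_pubkey: bytes) -> bytes:
    """Toy 'certificate': a keyed hash over the server's public key.
    Real CAs use asymmetric signatures; HMAC just models the idea."""
    return hmac.new(signing_key, server_pubkey, hashlib.sha256).digest()

# You sign your server key with your signing key; the attacker does
# exactly the same with *their* keys. Both "certificates" are valid.
your_cert = sign(b"your signing key", b"your server pubkey")
attacker_cert = sign(b"attacker signing key", b"attacker server pubkey")

def client_verifies(trusted_signing_key: bytes,
                    server_pubkey: bytes, cert: bytes) -> bool:
    """The check only works if the client ALREADY holds the signing key."""
    expected = sign(trusted_signing_key, server_pubkey)
    return hmac.compare_digest(expected, cert)

# A client that has your signing key a priori can tell the two apart:
print(client_verifies(b"your signing key", b"your server pubkey", your_cert))
print(client_verifies(b"your signing key", b"attacker server pubkey",
                      attacker_cert))  # False: attacker's cert is rejected
```

Signing just moves the problem: instead of pre-sharing the server key, the client must pre-share (and trust) the signing key.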

Passphrases? Well, that prevents someone breaking into your office, stealing
your machine, and then setting up a clone somewhere else with the exact same
key. Good, but it doesn't change the fact that the client still needs
a-priori knowledge of your identity, or a way to verify it.

> > The attacker doesn't have your private key, so they would create their own
> > key pair. As a result, the connecting client would see a *different* key
> > than the one they would see if they connect to your server directly. The
> > problem is, they have no way of telling which key is the one which belongs
> > to you, and which one is the one which belongs to the attacker.
> 
> Of course you do!  That's exactly what the web of trust is all about.
> By your argument, I shouldn't be able to trust the httpd-2.1.9 tarball
> I downloaded about a week ago either - for exactly the same reason.

If it's PGP signed, and you have a degree of trust that the copy of the
public key you have really belongs to a trustworthy Apache developer, then
that's fine. You can obtain that trust by a number of routes. You might
exchange signed E-mails over a long period of time and via different routes
and mailboxes, and decide that it's unlikely that a man-in-the-middle attack
has been going on all this time for every message you exchanged. You might
meet them in person at a developer conference. Or the PGP key might be
PGP-signed by someone else who you trust to verify their identity.

Not all of these mechanisms are appropriate if you were going to
www.amazon.com and entering your credit card number to order a book though.
You *might* know someone at Amazon personally; they *might* be sufficiently
technical to be able to find out the fingerprint of the public key used on
their SSL web servers, and print it out or give it to you personally; and
you *might* manually verify the authenticity of the key within your browser,
by calling up the certificate details page *every* time you connect to
Amazon, and manually comparing the key fingerprint with the one you were
given.

However, 99.99% of the rest of the web world doesn't work that way, as it's
just too painful. There are additional problems too (e.g. if the Amazon
techies decide to replace their keys with new ones, you will have to go
through the whole process again).

If you don't trust the third parties to do this work for you, and you insist
you want to do it yourself, that's fine. But I don't think you'll end up
buying anything over the Internet. You are of course free to choose not to
do so.

> > If the client knows you personally, they can phone you up and ask for you
> > to read the key fingerprint over the phone, or fax it to them. That doesn't
> > scale very well.
> 
> And it's far more insecure.  If I've nicked your webpage, whose 'phone
> or fax number do you suppose is on it?  And how is telephone routing
> inherently any more secure than DNS, especially when so much of the
> former Free World is openly becoming police states.

The point about telephone calls is that you can recognise the voice of the
person you're talking to. It's one way to convince yourself, to a sufficient
degree, that the key you have really belongs to the person you believe it's
from. This applies equally to PGP, unless you're going to rely only on
third-party introductions.

Also, phone numbers can be obtained from printed directories. Yes, the
government might have replaced your copy of a directory with a modified one,
or the copy in your local library, or might be rerouting calls from your
phone (try using a payphone?) You have to assess the level of these risks
yourself. The more different routes of communication you use concurrently,
the more difficult it would be for a government to intercept them all
without you noticing.

Ultimately, even face-to-face meetings with strangers can't be trusted:
suppose someone intercepted your communication and sent an imposter to meet
you instead? You would need to convince yourself somehow that you are happy
to deal with this particular person. You can ask for identity documents, but
that ultimately still puts trust in government agencies. So you might just
want to establish a rapport until you can convince yourself that this is a
person you wish to deal with, and that they are not a very good con artist
acting on behalf of some malicious force.

Anyway, I think we're losing the point :-) With SSL you *need* to establish
the digital identity of the people you are communicating with, otherwise the
encryption is worthless as it is easily bypassed.

Regards,

Brian.
