[whatwg] fixing the authentication problem

2008-10-21 Thread Aaron Swartz
The most common way of authenticating to web applications is:

Client: GET /login
Server: <html>...<form method=post>...
Client: POST /login
user=joesmith01&password=secret
Server: 200 OK
Set-Cookie: acct=joesmith01,2008-10-21,sj89d89asd89s8d

The obvious problem with this is that passwords are transferred in the
clear. Some major web services redirect the user to an SSL server for
the login transaction, but SSL is too expensive for the vast majority
of services. (We can hope ObsTCP will fix this, but that's a long way
away, if ever.)

Another alternative is HTTP Digest authentication, but I vaguely
remember Hixie saying it was insecure and, in any event, most Web
services will not adopt it because the browser UI isn't customizable.

My proposal: add something to HTML5 so that the transaction looks like this:

Client: GET /login
Server: <html>...<form method=post pubkey="/pubkey.key">...
Client: POST /login
dXNlcj1qb2VzbWl0aDAxJnBhc3N3b3JkPXNlY3JldA==
Server: 200 OK
Set-Cookie: acct=joesmith01,2008-10-21,sj89d89asd89s8d

where the base64 string is the form data encrypted with the key
downloaded from /pubkey.key. This should be fairly easy to implement
(for clients and servers), falls back to exactly the current behavior
on browsers that don't support it, and solves a rather important
problem on the Web.
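One detail worth noting when reading the exchange above: the base64 string in the example is just the base64 encoding of the raw form body, not actual ciphertext (a real implementation would encrypt before encoding). A couple of lines of Python confirm this:

```python
import base64

# The payload from the example exchange above. In the example it is
# simply base64 of the form body; real ciphertext would go here.
payload = "dXNlcj1qb2VzbWl0aDAxJnBhc3N3b3JkPXNlY3JldA=="
print(base64.b64decode(payload).decode())  # user=joesmith01&password=secret
```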


Re: [whatwg] fixing the authentication problem

2008-10-21 Thread Philip Taylor
On Tue, Oct 21, 2008 at 2:16 PM, Aaron Swartz [EMAIL PROTECTED] wrote:
 The most common way of authenticating to web applications is:

 Client: GET /login
 Server: <html>...<form method=post>...
 Client: POST /login
 user=joesmith01&password=secret
 Server: 200 OK
 Set-Cookie: acct=joesmith01,2008-10-21,sj89d89asd89s8d

 [...]

 My proposal: add something to HTML5 so that the transaction looks like this:

 Client: GET /login
 Server: <html>...<form method=post pubkey="/pubkey.key">...
 Client: POST /login
 dXNlcj1qb2VzbWl0aDAxJnBhc3N3b3JkPXNlY3JldA==
 Server: 200 OK
 Set-Cookie: acct=joesmith01,2008-10-21,sj89d89asd89s8d

 where the base64 string is the form data encrypted with the key
 downloaded from /pubkey.key.

As I understand it: As an attacker, I can intercept that dXN...
string. Then I can simply make a login POST request myself at any time
in the future, sending the same encrypted string, and will get the
valid login cookies even though I don't know the password. So it
doesn't seem to work very well at keeping me out of the user's
account. Also this seems vulnerable to dictionary attacks, e.g. I can
 easily encrypt user=joesmith01&password=... for every word in the
dictionary and will probably discover the user's password.
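Philip's dictionary attack can be sketched in a few lines. Here SHA-256 stands in for the deterministic "encrypt under the site's public key" step: it only models the property that equal inputs always yield equal ciphertexts, and is not the proposed mechanism itself; the word list is made up.

```python
import hashlib

def encrypt(form_body: str) -> str:
    """Stand-in for deterministic encryption under a public key:
    same input always gives the same output (the property the
    attack exploits)."""
    return hashlib.sha256(form_body.encode()).hexdigest()

# What the attacker intercepted on the wire:
intercepted = encrypt("user=joesmith01&password=secret")

# The public key is, by definition, public, so the attacker can
# encrypt candidate passwords and compare against the intercepted blob.
dictionary = ["letmein", "password", "secret", "hunter2"]
recovered = next(
    (word for word in dictionary
     if encrypt("user=joesmith01&password=" + word) == intercepted),
    None,
)
print(recovered)  # secret
```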

-- 
Philip Taylor
[EMAIL PROTECTED]


Re: [whatwg] fixing the authentication problem

2008-10-21 Thread Aaron Swartz
 As I understand it: As an attacker, I can intercept that dXN...
 string. Then I can simply make a login POST request myself at any time
 in the future, sending the same encrypted string, and will get the
 valid login cookies even though I don't know the password. So it
 doesn't seem to work very well at keeping me out of the user's
 account. Also this seems vulnerable to dictionary attacks, e.g. I can
 easily encrypt user=joesmith01&password=... for every word in the
 dictionary and will probably discover the user's password.

I was simplifying; in real life, I expect the server will include a
nonce with the form (as a hidden input), which they'll only permit to
be used once. (I also expect their cookie will have an ID that maps to
the username instead of the actual username. Or they'll just have the
cookie encrypted entirely instead of using an HMAC.) This, of course,
doesn't affect the HTML spec.
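The server-side bookkeeping Aaron describes amounts to issuing a fresh nonce per form and accepting each nonce at most once. A minimal sketch (names and in-memory storage are illustrative, not part of the proposal):

```python
import secrets

class NonceStore:
    """One-time nonces: issue a fresh value per login form,
    accept each value at most once."""
    def __init__(self):
        self.outstanding = set()

    def issue(self) -> str:
        nonce = secrets.token_hex(16)  # embedded in the form as a hidden input
        self.outstanding.add(nonce)
        return nonce

    def redeem(self, nonce: str) -> bool:
        if nonce in self.outstanding:
            self.outstanding.remove(nonce)  # one use only
            return True
        return False  # unknown or already used: a replayed request

store = NonceStore()
n = store.issue()
print(store.redeem(n))  # True  -- first login attempt succeeds
print(store.redeem(n))  # False -- the replayed request is refused
```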


Re: [whatwg] fixing the authentication problem

2008-10-21 Thread Tab Atkins Jr.
On Tue, Oct 21, 2008 at 9:36 AM, Eduard Pascual [EMAIL PROTECTED] wrote:

 On Tue, Oct 21, 2008 at 2:16 PM, Aaron Swartz [EMAIL PROTECTED] wrote:
  My proposal: add something to HTML5 so that the transaction looks like
 this:
 
  Client: GET /login
  Server: <html>...<form method=post pubkey="/pubkey.key">...
  Client: POST /login
  dXNlcj1qb2VzbWl0aDAxJnBhc3N3b3JkPXNlY3JldA==
  Server: 200 OK
  Set-Cookie: acct=joesmith01,2008-10-21,sj89d89asd89s8d
 
  where the base64 string is the form data encrypted with the key
  downloaded from /pubkey.key. This should be fairly easy to implement
  (for clients and servers), falls back to exactly the current behavior
  on browsers that don't support it, and solves a rather important
  problem on the Web.
 What's the actual difference between this and https? Both mechanisms
 are using public-key encryption to protect the communications; the
 only difference being that with https the encryption is handled at the
 protocol level; while your suggestion would (currently) require to
 reinvent the wheel, encrypting the data on the client (maybe using
 JavaScript?) and then decrypting it on the server (probably via
 server-side scripting).
 Maybe there is a good point on that suggestion, and I'm simply failing
 to see it. If that's the case, I invite you to enlighten me on it.


I agree in general with the criticisms raised here, but I'll correct a small
point in your post.  The goal for this is to *not* require authors to do any
client-side encrypting, but for the UAs to encrypt instead.  It would then
be the responsibility of the author to decrypt on the server side.

~TJ


Re: [whatwg] fixing the authentication problem

2008-10-21 Thread Philip Taylor
On Tue, Oct 21, 2008 at 2:52 PM, Aaron Swartz [EMAIL PROTECTED] wrote:
 As I understand it: As an attacker, I can intercept that dXN...
 string. Then I can simply make a login POST request myself at any time
 in the future, sending the same encrypted string, and will get the
 valid login cookies even though I don't know the password. So it
 doesn't seem to work very well at keeping me out of the user's
 account. Also this seems vulnerable to dictionary attacks, e.g. I can
 easily encrypt user=joesmith01&password=... for every word in the
 dictionary and will probably discover the user's password.

 I was simplifying; [...]

Simplifications make it hard to tell whether it's possible to use the
feature securely (and hard to tell what "securely" means in this
context), which is a necessary condition for usefulness, so it's
probably best to explain in detail exactly how you expect it'll be
used, and then people can try to pick holes in it :-) . (But at least
in my case, I know little enough about security that even if I can't
pick holes then I'd be unwilling to assume it's secure...)

 in real life, I expect the server will include a
 nonce with the form (as a hidden input), which they'll only permit to
 be used once.

That still doesn't help with the dictionary attacks, since the
attacker knows the nonce too. I'd guess the client has to add an extra
nonce (which is never transmitted in the clear) to avoid that problem.

For the server-generated nonce, the login form will have to be on a
page that is never cached, so that every client will get a new nonce
every time they load the page. That would prevent it being used in a
lot of cases where sites put a login box on every page (instead of
requiring the user to go through an extra login page), which is a
minor disadvantage of this scheme.

How will the server limit each nonce to being used once? If it stores
a list of every nonce that was ever used, it's going to be a pretty
large table and slow to check on any reasonably popular site. If it
encodes a timestamp in the nonce, it won't work if a user opens the
login page (causing the new nonce to be generated) in a background tab
and leaves it for a few days before trying to log in, which breaks the
usually-valid assumption that you can wait indefinitely between
separate HTTP requests. (Digest authentication avoids that problem
because it's defined at the HTTP level and can say that the browser
ought to respond immediately and to retry silently if the nonce was
stale.)
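The timestamp-in-the-nonce variant Philip mentions can be sketched as follows. The HMAC key, field layout, and five-minute window are assumptions for illustration; the second call shows exactly the background-tab failure he describes.

```python
import hashlib
import hmac
import time

SERVER_KEY = b"example-server-secret"  # hypothetical server-side secret

def make_nonce(now=None) -> str:
    """Stateless nonce: a timestamp plus an HMAC over it, so the
    server needs no table of issued nonces."""
    ts = str(int(now if now is not None else time.time()))
    tag = hmac.new(SERVER_KEY, ts.encode(), hashlib.sha256).hexdigest()
    return ts + ":" + tag

def check_nonce(nonce: str, max_age=300, now=None) -> bool:
    ts, _, tag = nonce.partition(":")
    expected = hmac.new(SERVER_KEY, ts.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False  # forged or tampered nonce
    age = (now if now is not None else time.time()) - int(ts)
    return 0 <= age <= max_age

now = time.time()
print(check_nonce(make_nonce(now), now=now))          # True: fresh form
print(check_nonce(make_nonce(now - 86400), now=now))  # False: stale (form
# sat in a background tab for a day, so the login attempt is refused)
```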

Probably more importantly, does this solve any of the security flaws
you indicated Digest authentication has? (i.e. how would it be better
than inventing a mechanism to allow custom styling of the browser's
username/password dialog box?)

-- 
Philip Taylor
[EMAIL PROTECTED]


Re: [whatwg] fixing the authentication problem

2008-10-21 Thread WeBMartians
Vanguard Investments has an interesting approach:

1- User enters an identification but not a password
This page is an HTTPS one, by the way.

2- On a subsequent page (also HTTPS), the user enters the password
Additionally, there is an identifying image that is associated with the user:
"Your security image... You'll see your image whenever you log on."
Sometimes, based on a variety of factors, there is an intermediate page that 
challenges with a security question.

Please don't consider this to be, in any way, a criticism of the proposal. The 
above is just an example of what appears to be a very
good security regimen within the current constraints.
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Eduard Pascual
Sent: Tuesday, 2008 October 21 10:37
To: Aaron Swartz; whatwg@lists.whatwg.org
Subject: Re: [whatwg] fixing the authentication problem

On Tue, Oct 21, 2008 at 2:16 PM, Aaron Swartz [EMAIL PROTECTED] wrote:

 Some major web services redirect the user to an SSL server for the 
 login transaction, but SSL is too expensive for the vast majority of 
 services.
The issue is not SSL being expensive: the only expensive part is having a third 
party (the Certification Authority) to endorse your
SSL keys, but for basic authentication self-signed certificates *should* be 
enough: when a user logs into a forum, for example, s/he
shouldn't care about example.com being actually owned by Example Inc.; but just 
about the fact that the username and password will
only be readable by example.com, regardless of who is behind example.com.
The actual issue is that current UAs are over-paranoid about self-signed 
certificates: of course, a CA-endorsed certificate is safer
than a self-signed one (especially for transactions involving money); but a
self-signed certificate is still safer than no
certificate at all (and is definitely safe enough for login purposes):
browsers, however, treat a self-signed certificate as almost a guarantee of 
fraud, while sending login data through unencrypted
connections doesn't even raise an eyebrow: this behavior is definitely wrong, 
and this wrong behavior is the actual issue that
should be dealt with (I don't really know if it should fall within HTML5's 
scope to deal with this, probably it doesn't). In
essence, if UAs *lie* to the user about security (treating cheap self-signed 
certificates for login as fraud attempts; but
unsecure communications as a non-issue), then what's the point at all about 
security?

 My proposal: add something to HTML5 so that the transaction looks like this:

 Client: GET /login
 Server: <html>...<form method=post pubkey="/pubkey.key">...
 Client: POST /login
 dXNlcj1qb2VzbWl0aDAxJnBhc3N3b3JkPXNlY3JldA==
 Server: 200 OK
 Set-Cookie: acct=joesmith01,2008-10-21,sj89d89asd89s8d

 where the base64 string is the form data encrypted with the key 
 downloaded from /pubkey.key. This should be fairly easy to implement 
 (for clients and servers), falls back to exactly the current behavior 
 on browsers that don't support it, and solves a rather important 
 problem on the Web.
What's the actual difference between this and https? Both mechanisms are using 
public-key encryption to protect the communications;
the only difference being that with https the encryption is handled at the 
protocol level; while your suggestion would (currently)
require to reinvent the wheel, encrypting the data on the client (maybe using
JavaScript?) and then decrypting it on the server (probably via server-side
scripting).
Maybe there is a good point on that suggestion, and I'm simply failing to see 
it. If that's the case, I invite you to enlighten me
on it.



Re: [whatwg] fixing the authentication problem

2008-10-21 Thread Eduard Pascual
On Tue, Oct 21, 2008 at 3:48 PM, Aaron Swartz [EMAIL PROTECTED] wrote:
 There are three costs to SSL:

 1. Purchasing a signed cert.
 2. Configuring the web server.
 3. The CPU time necessary to do the encryption.

 1 could be fixed by less paranoid UAs, 2 could be fixed with better
 software and SNI, and 3 could be fixed by better hardware. But,
 realistically, I don't see any of these things happening.
There is a difference between something having a cost, and that cost
being expensive:
(1) is definitely expensive (I know that first-hand), and most
probably out of the reach for any non-revenue website.
(2) is not expensive: currently, many server management software
already handles this decently (I'm right now thinking of CPanel, one
of the most widely deployed utilities of this type, and it allows
installing a certificate with just a few clicks).
(3) Your suggestion is not addressing that point: encryption will
still be done by the client, and decryption by the server.

In addition, for the first cost; I'm still convinced that UAs should
be fixed, because their paranoid behavior is generally wrong. I don't
think this spec should deal with browsers' bugs and paranoias on
aspects that are not strictly HTML-related; even less to specify
workarounds to these bugs that require browsers to duplicate the tasks
that are currently showing these bugs. What makes you think browsers
would be less paranoid about your approach than about self-signed
certificates? OTOH, changing the messages shown to the user when
self-signed certificates are encountered to be more informative and
less misleading should be far easier than adding a new hook to
trigger encryption (the former only requires reviewing and updating
some texts to something that makes sense, while the latter involves
changes to the way forms are handled, which would require additional
testing and might even give rise to new bugs). That's, however, only my
point of view.


Re: [whatwg] fixing the authentication problem

2008-10-21 Thread Kristof Zelechovski
Sending any data, including log-in data, through an unencrypted connection
is greeted by a warning dialogue box in Internet Explorer.  A similar
precaution is taken when the server certificate is not trusted.  The risk of
using an invalid certificate is bigger than not using any because your level
of trust is bigger while you are equally unprotected.

It is not enough to make sure that your credentials do not unintentionally
leave example.com.
Consider the following scenario:
1. You want to update your blog at blog.com 
2. Evil.org poses as blog.com by phishing or DNS poisoning.
3. You log in to evil.org using your credentials of blog.com.
4. The bad guys at evil.org use your credentials to post an entry at
blog.com that you are going to deploy a dirty bomb in NYC.
5. You travel to the USA and you end up in Guantanamo.
Nice, eh?
Chris


Re: [whatwg] fixing the authentication problem

2008-10-21 Thread Julian Reschke

Kornel Lesinski wrote:

...
Anyway, it doesn't make sense to duplicate all that functionality in 
forms just because typical interface for HTTP authentication is ugly and 
unusable. You can fix the interface, and there's proposal for it already 
(from 1999!):

http://www.w3.org/TR/NOTE-authentform

I think that proposal is generally a good idea, but the details could be 
improved (i.e. should reuse existing forms and input types rather than 
creating new ones that can't offer seamless fallback).

...


+1

See also http://www.w3.org/html/wg/tracker/issues/13 (currently in
state "raised").


BR, Julian





Re: [whatwg] Caching offline Web applications

2008-10-21 Thread Dave Camp
On Tue, Oct 21, 2008 at 12:47 PM, Ian Hickson [EMAIL PROTECTED] wrote:
 On Tue, 21 Oct 2008, Dave Camp wrote:
 On Fri, Oct 17, 2008 at 6:36 PM, Ian Hickson [EMAIL PROTECTED] wrote:
  Summary of changes:

   * Made application caches scoped to their browsing context, and allowed
iframes to start new scopes. By default the contents of an iframe are
part of the appcache of the parent, but if you declare a manifest, you
get your own cache.

 Should this inheritance be subject to the same origin restriction
 enforced while selecting a cache during navigation?

 The same-origin restriction is intended to prevent people from setting up
 their manifests such that another site will stop being fetched from the
 net. In an iframe, the risk isn't present, since you have to go to the
 evil site in the first place, and it has to explicitly pick the victim
 site in an iframe. Since you can't tell what the URL of the victim iframe
 content is anyway, there's no practical difference between it being on a
 remote site or the same site, as far as I can tell.

 No?

Yeah, but it does let an evil site persist a potential DOM-based XSS
attack permanently.  It still depends on you visiting the evil site
repeatedly, though.

-dave


Re: [whatwg] fixing the authentication problem

2008-10-21 Thread Eduard Pascual
On Tue, Oct 21, 2008 at 4:35 PM, Kristof Zelechovski
[EMAIL PROTECTED] wrote:
 Sending any data, including log-in data, through an unencrypted connection
 is greeted by a warning dialogue box in Internet Explorer.
Only the first time. IIRC, the "don't display this again" checkbox is
checked by default.

 A similar precaution is taken when the server certificate is not trusted.
Not similar at all: for unencrypted connections, you have the "don't
bother me again" option, in the form of an obvious checkbox; while
with self-signed certificates you are warned continuously; with the
only option to install the certificate on your system to trust it
(which is a non-trivial task; out of the reach for most average users;
still annoying even for web professionals; and, to top it up, you need
to do it on a site-by-site basis).
It doesn't make any sense for UAs to treat unencrypted connections as
safer than (some) encrypted ones: that's simply wrong.

 The risk of using an invalid certificate is bigger than not using any because
 your level of trust is bigger while you are equally unprotected.
That's, simply put, not true. The level of trust doesn't actually
depend (for average users) on the certificate at all, but on what the
browser says about it.
The level of protection, instead, is independent from the user, and
it's not the same for each case:
On an unencrypted connection, everyone could read the data being sent.
This is no protection at all.
On a connection encrypted with a self-signed certificate, the user can
rest assured that the data is only readable by the server, regardless
of who is actually behind that server. There is some protection here,
even if it's not the strongest possible.
On an encrypted connection with a CA-signed cert, the user has the
protection from encryption (only the server will be able to read the
data), plus the guarantee that the CA has taken care to verify that
the entity in charge of that server is who it claims to be.

 It is not enough to make sure that your credentials do not unintentionally
 leave example.com.
 Consider the following scenario:
 1. You want to update your blog at blog.com
 2. Evil.org poses as blog.com by phishing or DNS poisoning.
 3. You log in to evil.org using your credentials of blog.com.
 4. The bad guys at evil.org use your credentials to post an entry at
 blog.com that you are going to deploy a dirty bomb in NYC.
 5. You travel to the USA and you end up in Guantanamo.
 Nice, eh?
Although I'm not sure what you mean by "Evil.org poses as
blog.com", I see nothing in Aaron's original suggestion that would
deal with such a case.

In summary, besides UAs' paranoia, I can't see any case where the
suggested feature would provide anything self-signed certificates
don't already provide. And since it involves using public-key
encryption, I don't see any reason why UAs would treat the encryption
keys differently from current SSL certificates.

On Tue, Oct 21, 2008 at 6:08 PM, Andy Lyttle [EMAIL PROTECTED] wrote:
 4. The need for a dedicated IP address, instead of using name-based virtual
 hosts.

 That and #1 are the reasons I don't use it more.
#4 is, again, a cost, but not an expensive one: most of the hosts I
know of offer dedicated IP for a fee that's just a fraction of the
actual hosting price.
And, about #1, I just read my points about self-signed certificates in
this and my previous mail.


Re: [whatwg] fixing the authentication problem

2008-10-21 Thread WeBMartians
Somewhere, is there a definition of "trust" in this context? I say that in all
seriousness; it's not a facetious remark. I feel that
it might be useful. 




Re: [whatwg] fixing the authentication problem

2008-10-21 Thread Eduard Pascual
On Wed, Oct 22, 2008 at 1:28 AM, WeBMartians [EMAIL PROTECTED] wrote:
 Somewhere, is there a definition of trust in this context? I say that in 
 all seriousness; it's not a facetious remark. I feel that
 it might be useful.
I can't speak for others, but just for myself: the way I understand
the term "trust" (in contrast with "security" or "protection"), and
what I meant by it in my previous message, is as a measure of how
confident a user would feel about providing (generally sensitive) data
to a website. I.e., a user that absolutely trusts a site won't hesitate
to provide any kind of data to it; while a user who doesn't trust the
site at all won't knowingly provide any data at all (of course,
s/he'll still be providing a request HTTP header and similar
details, but that's most probably not known by the user; otherwise the
user wouldn't even visit the site). Of course, there is a full range
of grays between these extremes.


Re: [whatwg] fixing the authentication problem

2008-10-21 Thread Martin Atkins

Eduard Pascual wrote:

Not similar at all: for unencrypted connections, you have the "don't
bother me again" option, in the form of an obvious checkbox; while
with self-signed certificates you are warned continuously; with the
only option to install the certificate on your system to trust it
(which is a non-trivial task; out of the reach for most average users;
still annoying even for web professionals; and, to top it up, you need
to do it on a site-by-site basis).


There is some sense in this requirement to store the cert. It allows the 
browser to warn you if the cert changes later, which is what would 
happen if an attacker managed to intercept your connection. If you don't 
store the cert, one self-signed cert is the same as the next.


This is similar to the SSH model; the first time you connect, you're 
expected to manually check by some means that you're connecting to the 
right server.  On subsequent connections, you won't be bothered unless 
the key changes.


I'll concede that in most cases no-one actually verifies the key in the 
first connection case, but at least this requires an attacker to 
intercept your *first* connection from a particular client, rather than 
just any connection.


The UI for this is a bit overboard in today's browsers, but I think the 
general principle is sound.
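Martin's SSH analogy amounts to trust-on-first-use pinning, which can be sketched like this (the fingerprinting scheme and in-memory store are illustrative, not any browser's actual behavior):

```python
import hashlib

class TofuStore:
    """Trust-on-first-use: pin each host's certificate fingerprint the
    first time it is seen; flag any later change as suspicious."""
    def __init__(self):
        self.pins = {}

    def check(self, host: str, cert_der: bytes) -> str:
        fp = hashlib.sha256(cert_der).hexdigest()
        if host not in self.pins:
            self.pins[host] = fp
            return "first-contact"  # SSH-style: verify out of band if you can
        return "ok" if self.pins[host] == fp else "CHANGED"

store = TofuStore()
print(store.check("example.com", b"cert-bytes-v1"))  # first-contact
print(store.check("example.com", b"cert-bytes-v1"))  # ok: same cert as pinned
print(store.check("example.com", b"cert-bytes-v2"))  # CHANGED: warn the user
```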