Ben Laurie wrote:
> This is the SSH design for host keys, of course, and also the petnames
> design for URLs. Unfortunately petnames don't solve the problem that it
> is hard to check the URL even the first time.

the original SSL paradigm was predicated on end-to-end security ... that
"the server the user thought they were talking to" was "the server that
they were actually talking to". certificates addressed the part from "the
URL inside the browser" to "the server".

the paradigm was dependent on the user having a tight binding between
"the server the user thought they were talking to" and "the URL inside the
browser" ... which in turn was dependent on the user actually inputting
the URL (as demonstration of the binding between the server the user
thought they were talking to and the input URL).

the problem was that as the infrastructure matured ... the actual URL
came to have less & less meaning to the user. so the MITM attacks moved
to the weak points in the chain ... rather than attacking a valid
certificate and/or the process after the URL was inside the browser,
attackers attack the process before the URL gets inside the browser.

petnames would seem to suffer somewhat the same problem as
shared-secrets and passwords ... requiring a unique petname for every
URL. it works as long as there are a few ... when they reach scores ...
the user can no longer manage them.

so part of the problem is that the URL has become almost some internal
infrastructure representation... almost on par with the ip-address ...
the user pays nearly as much attention to the URL for a website as they
pay to the lower-level ip-address for the site (legacy requirements
still have ways for people to deal with both the URL and the ip-address
... but they don't have a lot of meaning for a lot of people).

however, the URL is one way of internally doing bookkeeping about a site.

so security issues might be

1) is the user talking to the server they think they are talking to

2) does the user believe that the site is safe

3) is the site safe for providing certain kinds of sensitive information

4) is the site safe for providing specific sensitive information

#1 is the original SSL design point ... but the infrastructure has
resulted in creating a disconnect for establishing this information.

possibly another approach is that the local environment remembers things
... akin to a PGP key repository. rather than the SSL lock icon ... have a
large green/yellow/red indicator. red is not both SSL locked and
checked. yellow is both SSL locked and checked. green is SSL locked,
initially checked, and further checked for entry of sensitive information.
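as a rough sketch of that indicator policy (the function and its inputs are purely illustrative, not any real browser API; the "checked" flags are assumed to come from the hypothetical local repository):

```python
# hypothetical sketch: mapping connection + repository state to an
# indicator color. all names are assumptions for illustration.

def indicator_color(ssl_locked: bool, initial_checked: bool,
                    checked_for_sensitive: bool) -> str:
    """Return the indicator color for the current site."""
    if ssl_locked and initial_checked and checked_for_sensitive:
        return "green"   # locked, checked, and cleared for sensitive data
    if ssl_locked and initial_checked:
        return "yellow"  # locked and checked
    return "red"         # not both SSL locked and checked
```

the point of the three states is that the indicator reflects remembered local checking, not just whether the current link happens to be encrypted.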

a human factors issue is how easy you can make preliminary checking ...
and then not have to do it again ... where the current infrastructure
requires users to match something meaningful to the URL to the SSL
certificate on every interaction. preliminary checking is more effort than
the current stuff done on every SSL URL ... but could be made to be done
relatively rarely and be part of an overall infrastructure that directly
relates to something the end-user might find meaningful.

bits and pieces of the infrastructure are already there. for instance
there is already support for automatically entering userid/password on
specific web forms. using bits and pieces of that repository could
provide the ability to flag a specific web form as approved/not-approved
for specific sensitive information (like a specific userid/password).
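a minimal sketch of that reuse, assuming a simple local store keyed by site and form (the data structures and example sites are hypothetical, not any real browser's password store):

```python
# hypothetical sketch: reusing a form-fill style repository to record
# which kinds of sensitive information a form has been approved for.

repository = {
    # (site, form id) -> set of information kinds approved for that form
    ("https://bank.example", "login"): {"userid", "password"},
}

def approved_for(site: str, form_id: str, kind: str) -> bool:
    """True only if the user previously approved this form for this kind."""
    return kind in repository.get((site, form_id), set())
```

an unapproved form (or an approved form asking for a new kind of information) would fall through to "not approved" and could trigger the red indicator rather than silent form-fill.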

the issue isn't that a simple indicator with 2-4 states isn't useful ...
but the states presented need to realistically mean something to the
user. the locked/unlocked indicator just says that the link is encrypted.
it doesn't indicate that the remote site is the server that the user
thinks it is ... in part because of the way that the infrastructure has
created a disconnect between the URL and what users actually deal in.

if the browser kept track of whether the user actually hit the keys for
the entering of the URL ... then it might be useful for the browser to
provide a higher level of confidence to the SSL certificate checking
(aka it is only if the user actually typed in the URL that there can be a
high level of confidence related to the SSL certificate checking).
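one way to sketch that idea (the enum and the confidence levels are assumptions for illustration, not an existing browser mechanism):

```python
# hypothetical sketch: weighting SSL certificate checking by how the
# URL arrived in the browser. all names are illustrative.

from enum import Enum

class UrlOrigin(Enum):
    TYPED = "typed"        # user physically keyed in the URL
    BOOKMARK = "bookmark"  # user-created bookmark (typed at some point)
    LINK = "link"          # followed from another page or an email

def cert_check_confidence(origin: UrlOrigin, cert_valid: bool) -> str:
    """Only a typed (or once-typed) URL can yield high confidence."""
    if not cert_valid:
        return "none"
    if origin is UrlOrigin.TYPED:
        return "high"
    if origin is UrlOrigin.BOOKMARK:
        return "medium"
    return "low"  # a valid cert for a link-supplied URL proves little
```

the bookmark case reflects the "at some point in time" qualifier below: a physical act of correlation needs to have happened once, not on every visit.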

one might be tempted to make some grandiose philosophical security
statement ... that unless the user is involved in actually doing some
physical operation (at least at some point in time) to correlate between
what is meaningful to the user and the internal infrastructure, there is
no real basis for confidence. the original SSL scheme was dependent on
the user actually typing in the URL.

this is somewhat analogous to the confusion that seems to have cropped
up in the past with respect to the difference between digital signature
and human signature.


a digital signature could actually be applied to a retail transaction at
point-of-sale as a means of authentication. however, that digital
signature wouldn't be the representation of human intent, aka read,
understood, agrees, approves, and/or authorizes. pin-debit POS currently
has two-factor authentication: you swipe the magnetic card and you enter
a PIN. however, both are purely authentication. to get human intent, the
(certified) POS terminal asks the person to push the yes button if they
agree with the transaction. in the case of an x9.59 transaction at a
point-of-sale, the digital signature is authentication, but NOT human
intent. pushing the green/yes button on the POS terminal is what
indicates human intent (and therefore is the equivalent of a human
signature indicating read, understood, approves, agrees, and/or authorizes).
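the separation can be sketched as two distinct gates (a toy model of the flow described above; the function and return strings are purely illustrative):

```python
# hypothetical sketch: authentication (card + PIN, or a digital
# signature) is necessary but not sufficient -- only the yes button
# records human intent. all names are illustrative.

def pos_transaction(card_swiped: bool, pin_correct: bool,
                    yes_button_pushed: bool) -> str:
    """Approve only when both authentication and intent are present."""
    if not (card_swiped and pin_correct):
        return "declined: authentication failed"
    if not yes_button_pushed:
        return "declined: no indication of human intent"
    return "approved"
```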

The Cryptography Mailing List