On 07/22/2011 01:17 PM, Zooko O'Whielacronx wrote:

> From http://www.hpl.hp.com/techreports/2009/HPL-2009-53.pdf :
>
> “1. INTRODUCTION Most people agree with the statement, ‘There is an
> inevitable tension between usability and security.’ We don’t, so we
> set out to build a useful tool to prove our point.”
>
> If they've done what they claim (which I find plausible), then how
> could it be possible? Where does this "free energy" come from?

They could get it from optimizing things that were previously
unnecessarily cumbersome.

Or they could be hand-waving and pushing the problem off somewhere else.
Usually developers try to push it back onto the user, where it gets ignored:

> A webkey is displayed in the Webkey field after the user clicks Save.
> The user sends this webkey to the new Pal by whatever means the user
> desires, typically via a conventional email, much as one would send
> an email address. Users can prevent a third party who intercepts the
> webkey from impersonation attacks by confirming the webkey out of
> band, such as over the telephone, much as they would double check an
> email address. The recipient clicks "Create a Pal with a Webkey",
> fills out the form, inserts the webkey, and clicks Save. The
> difference between these two ways of creating a Pal avoids the
> confusion some early users had when they generated a webkey instead
> of using one they had been sent.

This doesn't sound like free security to me.
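For readers unfamiliar with the term: a webkey is essentially a capability URL, an unguessable token where possession of the URL *is* the authorization, with no separate login step. A minimal sketch of the idea follows; the base URL, the in-memory store, and the function names are my own illustrative assumptions, not the HP design.

```python
import secrets

# Hypothetical sketch of a webkey as a capability URL. Whoever holds the
# URL can reach the object; there is no separate identity check. The host
# and store below are placeholders for illustration, not from the paper.

BASE_URL = "https://example.net/pal/"

def mint_webkey(store, obj):
    """Create an unguessable token granting access to one object."""
    token = secrets.token_urlsafe(24)  # ~192 bits of entropy
    store[token] = obj
    return BASE_URL + "#" + token      # fragment keeps token out of server logs

def resolve_webkey(store, webkey):
    """Anyone presenting the token gets the object -- possession is authority."""
    token = webkey.rsplit("#", 1)[-1]
    return store.get(token)

store = {}
wk = mint_webkey(store, "shared document")
assert resolve_webkey(store, wk) == "shared document"
```

This also makes the interception concern above concrete: anyone who sees the URL in transit gets exactly the same authority as the intended Pal, which is why the paper falls back on out-of-band confirmation.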

> I think it comes from taking advantage of information which is
> already present but which is just lying about unused by the security
> mechanism: expressions of intent that the user makes but that some
> security mechanisms ignore.

I like this direction of work, but I just don't see the word "attack" in
the paper enough times.

What does the user see when they *are* under attack and the server
authentication step fails?

How do the security properties change when the user clicks on a link in
a phishing email?

The design says:

> A webkey is the moral equivalent of a password, but one the user
> treats as a bookmark and that controls access to a specific object

So what do you do when one of these webkey passwords eventually does get
disclosed? Can you revoke it, or is it equivalent to the name of the
document?

> For example, if you send a file to someone, then there is no need
> for your tools to interrupt your workflow with security-specific
> questions, like prompting for a password or access code, popping up
> a dialog that says "This might be insecure! Are you sure?",

Yeah that's lame.

> or asking you to specify a public key of your recipient. You've
> already specified (as part of your *normal* workflow) what file and
> who to send it to,

How do you specify "what file" without an existing server authentication
infrastructure?

How do you specify "who" without presuming an existing user identity and
authentication infrastructure?

> and that information is sufficient for the security system to figure
> out what to do. Likewise there is no need for the recipient of the
> file to have her workflow interrupted by security issues.

I think a system can still be moderately secure without interrupting the
normal workflow. But at first glance it looks to me like this HP system
has most of the security limitations of any other web-based email client
operated by a typical user. In this case the user is told that they don't
have to "interrupt their workflow" for security concerns (except perhaps
to install the binary plugin).

> Again, the point is that *you've already specified*. The human has
> already communicated all of the necessary information to the
> computer. Security tools that request extra steps are usually being
> deaf to what the human has already told the computer. (Or else they
> are just doing "CYA Security", a.k.a. "Blame The Victim Security",
> where if anything goes wrong later they can say "Well, I popped up an
> 'Are You Sure?' dialog box, so what happened wasn't my fault!".)

Agreed. The programmer succeeds at shifting blame, which is actually a
reasonable thing to do if you're thinking as an engineer documenting an
interface contract and fulfilling your side of it.

> Okay, now I admit that once we have security tools that integrate
> into user workflow and take advantage of the information that is
> already present, *then* we'll still have some remaining hard problems
> about fitting usability and security together.

Remember, the attacker gets a vote in how "usable" the system really is:
he chooses when to attack the user, which is exactly when usability
really matters. What capabilities can he trick the (intentionally naive)
user into delegating? What can the user give away, and how hard is it to
clean up afterwards?

- Marsh
_______________________________________________
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography