Re: Five Theses on Security Protocols

2010-08-03 Thread Perry E. Metzger
On Mon, 2 Aug 2010 16:20:01 -0500 Nicolas Williams
nicolas.willi...@oracle.com wrote:
 But users have to help you establish the context.  Have you ever
 been prompted about invalid certs when navigating to pages where
 you couldn't have cared less about the server's ID?  On the web,
 when does security matter?  When you fill in a field on a form?
 Maybe you're just submitting an anonymous comment somewhere.  But
 certainly not just when making payments.

 I believe the user has to be involved somewhat.  The decisions the
 user has to make need to be simple, real simple (e.g., never about
 whether to accept a defective cert).  But you can't treat the user
 as a total ignoramus unless you're also willing to severely
 constrain what the user can do (so that you can then assume that
 everything the user is doing requires, say, mutual authentication
 with peer servers).

There are decisions, and there are decisions.

If, for example (and this is really just an example, not a worked
design), your browser authenticates the bank website using a USB
attached hardware token containing both parties' credentials, which
also refuses to work for any other web site, it is very difficult for
the user to do anything to give away the store, and the user has very
little scope for decision making (beyond, say, deciding whether to
make a transfer once they're authenticated).

This is a big contrast to the current situation, where the user needs
to figure out whether they're typing their password into the correct
web site etc., and can be phished into giving up their credentials.
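
To make that flavor of design concrete: here is a minimal sketch (Python; the
derivation scheme and names are purely illustrative, not a worked design) of
how such a token could derive a distinct credential per web site, so that
nothing the user does at a phishing site yields anything usable at the real
bank:

    import hmac, hashlib

    # Hypothetical device-unique secret that never leaves the token.
    MASTER_SECRET = b"secret held only inside the hardware token"

    def per_site_key(origin: str) -> bytes:
        # The same origin always yields the same key; any other origin
        # yields an unrelated key, so a credential shown to one site is
        # useless at every other site -- there is no password to phish.
        return hmac.new(MASTER_SECRET, origin.encode(), hashlib.sha256).digest()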

You can still be attacked even in an ideal situation, of course. For
example, you could still follow instructions from con men telling you
to wire money to them. However, the trick is to change the system from
one where the user must be constantly on the alert lest they do
something wrong, like typing in a password to the wrong web site, to
one in which the user has to go out of their way to do something
wrong, like actively making the decision to send a bad guy all their
money.

Perry
-- 
Perry E. Metzger pe...@piermont.com



Re: Five Theses on Security Protocols

2010-08-02 Thread Anne Lynn Wheeler

On 08/01/2010 01:51 PM, Jeffrey I. Schiller wrote:

I remember them well. Indeed these protocols, presumably you are
talking about Secure Electronic Transactions (SET), were a major
improvement over SSL, but adoption was killed not only by failing to
give the merchants a break on the fraud surcharge, but also by requiring
the merchants to pick up the up-front cost of upgrading all of their
systems to use these new protocols. And there was the risk that it
would turn off consumers because it required consumers to set up
credentials ahead of time. So if a customer arrived at my SET
protected store-front, they might not be able to make a purchase if
they had not already setup their credentials. Many would just go to a
competitor that doesn't require SET rather than establish the
credentials.


The SET specification predated these (it was also internet specific, from the 
mid-90s, and was contemporaneous with the x9a10 financial standards work ... which 
had a requirement to preserve the integrity of *ALL* retail payments) ... the 
efforts of the past decade were much simpler and more practical ... and tended to 
be various kinds of "something you have" authentication. I'm unaware of any 
publicity and/or knowledge about these payment products (from a decade ago) 
outside the payment industry and select high-volume merchants.

The mid-90s PKI/certificate-based specifications tended to hide behind a large 
amount of complexity ... and provided no effective additional benefit over and 
above SSL (aka, with all the additional complexity ... they did little more than 
hide the transaction during transit on the internet). They also would strip all 
the PKI gorp off at the internet boundary (because of the 100-times payload-size 
and processing bloat that the certificate processing represented) and send the 
transaction thru the payment network with just a flag indicating that certificate 
processing had occurred (end-to-end security was not feasible). Various past posts 
mention the 100-times payload-size and processing bloat that certificates added 
to typical payment transactions:
http://www.garlic.com/~lynn/subpubkey.html#bloat

In the time-frame of some of the pilots, there were presentations by 
payment-network business people at ISO standards meetings, saying they were seeing 
transactions come thru the network with the certificate-processed flag on ... but 
they could prove that no certificate processing had actually occurred (there was 
financial motivation to lie, since turning the flag on lowered the interchange fee).

The certificate processing overhead also further increased the merchant 
processing overhead ... and was in large part responsible for the low uptake, even 
with some benefit of a lowered interchange fee. The associations looked at 
providing additional incentive (somewhat similar to more recent point-of-sale 
hardware-token incentives in Europe), effectively changing the burden of proof 
in disputes (rather than the merchant having to prove the consumer was at fault, 
the consumer would have to prove they weren't at fault; of course this would 
have met with some difficulty in the US with regard to Regulation E).

Old thread interchange with members of that specification team, regarding how 
the specification was (effectively) never intended to do more than hide the 
transaction during transmission:
http://www.garlic.com/~lynn/aepay7.htm#norep5 non-repudiation, was re: crypto 
flaw in secure mail standards

aka, the high-overhead, convoluted, complex processing of the specification 
provided little practical added benefit over and above what was already being 
provided by SSL.

Oblique reference to that specification in a recent post in this thread, 
regarding having done both a PKI-operation benchmark profile (using the BSAFE 
library) as well as a business-benefit profile of the specification (when it was 
initially published ... before any operational pilots):
http://www.garlic.com/~lynn/2010l.html#59 A mighty fortress is our PKI

With regard specifically to the BSAFE processing bloat referenced in the above 
... there is folklore that one of the people working on the specification 
admitted to adding a huge number of additional PKI-operations (and message 
interchanges) to the specification ... effectively for no other reason than the 
added complexity and use of PKI-operations.

--
virtualization experience starting Jan1968, online at home since Mar1970



Re: Five Theses on Security Protocols

2010-08-02 Thread Ian G

On 1/08/10 9:08 PM, Peter Gutmann wrote:

John Levine jo...@iecc.com writes:


Geotrust, to pick the one I use, has a warranty of $10K on their cheap certs
and $150K on their green bar certs.  Scroll down to the bottom of this page
where it says "Protection Plan":

http://www.geotrust.com/resources/repository/legal/

It's not clear to me how much this is worth, since it seems to warrant mostly
that they won't screw up, e.g., leak your private key, and they'll only pay
to the party that bought the certificate, not third parties that might have
relied on it.


A number of CAs provide (very limited) warranty cover, but as you say it's
unclear that this provides any value because it's so locked down that it's
almost impossible to claim on it.


Although distasteful, this is more or less essential.  The problem is 
best seen like this:  take all the potential relying parties for a large 
site / large CA, and multiply that by the damages in a (hypothetical) 
fat-ass class action suit.  Think phishing, or an MD5 crunch, or a 
random debian code downsizing.


What results is a Very Large Number (tm).

By fairly standard business processes one ends up at the sad but 
inevitable principle:


   the CA sets expected liabilities to zero

And must do so.  Note that there is a difference between expected 
liabilities and liabilities stated in some document.  I use the term 
"expected" in the finance sense (cf. Net Present Value calculations).


In practice, this is what could be called "best practices", to the extent 
that I've seen it.


http://www.iang.org/papers/open_audit_lisa.html#rlo says the same thing 
in many many pages, and shows how CAcert does it.




Does anyone know of someone actually
collecting on this?


I've never heard of anyone collecting, but I wish I had (heard).


Could an affected third party sue the cert owner


In theory, yes.  This is expected.  In some sense, the certificate's 
name might be interpreted as suggesting that, because the name is 
validated, you can sue that person.


However, I'd stress that's a theory.  See the above paper for my trashing of 
that ("What's in a Name?") at an individual level.  I'd speculate that 
the problem will be some class action suit, because of the enormous 
costs involved.




who can
then claim against the CA to recover the loss?


If the cause of loss is listed in the documentation . . .


Is there any way that a
relying party can actually make this work, or is the warranty cover more or
less just for show?


We are facing Dan Geer's disambiguation problem:

 The design goal for any security system is that the
 number of failures is small but non-zero, i.e., N > 0.
 If the number of failures is zero, there is no way
 to disambiguate good luck from spending too much.
 Calibration requires differing outcomes.


Maybe money can buy luck ;)



iang



Re: Five Theses on Security Protocols

2010-08-02 Thread Adam Fields
On Sat, Jul 31, 2010 at 12:32:39PM -0400, Perry E. Metzger wrote:
[...]
 3 Any security system that demands that users be educated,
   i.e. which requires that users make complicated security decisions
   during the course of routine work, is doomed to fail.
[...]

I would amend this to say "which requires that users make _any_
security decisions".

It's useful to have users confirm their intentions, or notify the user
that a potentially dangerous action is being taken. It is not useful
to ask them to know (or more likely guess, or even more likely ignore)
whether any particular action will be harmful or not.

-- 
- Adam



Re: Five Theses on Security Protocols

2010-08-02 Thread Anne Lynn Wheeler

minor addenda about speeds & feeds concerning the example of the mid-90s payment 
protocol specification that had enormous PKI/certificate bloat ... and SSL.

The original SSL security was predicated on the user understanding the 
relationship between the webserver they thought they were talking to and the 
corresponding URL. They would enter that URL into the browser ... and the 
browser would then establish that the URL corresponded to the webserver being 
talked to (both parts were required in order to create an environment where the 
webserver you thought you were talking to was, in fact, the webserver you were 
actually talking to). This requirement was almost immediately violated when 
merchant servers found that using SSL for the whole operation cost them 90-95% 
of their thruput. As a result, the merchants dropped back to using SSL just for 
the payment part, having the user click on a check-out/payment button. The 
(potentially unvalidated, counterfeit) webserver now provides the URL ... and 
SSL has been reduced to just validating that the URL corresponds to the 
webserver being talked to (i.e. validating that the webserver being talked to 
is the webserver it claims to be ... NOT validating that the webserver is the 
one you think you are talking to).
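
To spell out what's left, a minimal sketch (Python; hostnames hypothetical) of
the check SSL still performs: the certificate is verified against whatever URL
was supplied, which says nothing about whether that URL is the site the user
intended:

    import socket, ssl

    def connect_and_verify(host: str, port: int = 443) -> dict:
        ctx = ssl.create_default_context()  # CA-chain + hostname checks
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                # All we know here is that the peer holds a key certified
                # for `host`. If a counterfeit page supplied `host` in its
                # check-out link, this check happily succeeds anyway.
                return tls.getpeercert()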

Now, the backend of the SSL payment process was an SSL connection between the 
webserver and a payment gateway (which sat on the internet and acted as gateway 
to the payment networks). Under moderate to heavy load, the avg. transaction 
elapsed time (at the payment gateway, thru the payment network) was under a 
third of a second round-trip. The avg. round-trip seen at merchant servers could 
be a little over a third of a second (depending on the internet connection 
between the webserver and the payment gateway).

I've referenced before doing BSAFE benchmarks for the PKI/certificate-bloated 
payment specification ... using a speeded-up BSAFE library ... the people 
involved in the bloated payment specification claimed the benchmark numbers 
were 100 times too slow (apparently believing that the standard BSAFE library at 
the time ran nearly 1000 times faster than it actually did).

When pilot code (for the enormously bloated PKI/certificate specification) was 
finally available, using the BSAFE library (the speedup enhancements had been 
incorporated into the standard distribution) ... dedicated pilot demos of the 
transaction round trip took nearly a minute of elapsed time ... effectively all 
of it BSAFE computation (using dedicated computers doing nothing else).

Merchants that had found that using SSL for the whole consumer interaction would 
have required ten to twenty times the number of computers (to handle the 
equivalent non-SSL load) were potentially being faced with needing hundreds of 
additional computers to handle just the BSAFE computational load (for the 
mentioned extremely PKI/certificate-bloated payment specification) ... and 
still wouldn't have been able to perform the transaction anywhere close to the 
elapsed time of the implementation being used with SSL.

--
virtualization experience starting Jan1968, online at home since Mar1970



Re: Five Theses on Security Protocols

2010-08-01 Thread Guus Sliepen
On Sun, Aug 01, 2010 at 11:20:51PM +1200, Peter Gutmann wrote:

 But, if you query an online database, how do you authenticate its answer? If
 you use a key or SSL certificate for that, I see a chicken-and-egg problem.
 
 What's your threat model?

My threat model is practice.

I assume Perry assumed that you have some pre-established trust relationship
with the online database. However, I do not see myself having many of those.
Yes, my browser comes preloaded with a set of root certificates, but Verisign
is as much a third party to me as any SSL protected website I want to visit.

Anyway, suppose we do all trust Verisign. Then everybody needs its public key
on their computers to safely communicate with it. How is this public key
distributed? Just like those preloaded root certs in the browser? What if their
key gets compromised? How do we revoke that key and get a new one? We still
have all the same problems with the public key of our root of trust as we have
with long-lived certificates. Perry says we should do online checks in such a
case. So which online database can tell us if Verisign's public key is still
good? Do we need multiple trusted online databases who can vouch for each
other, and hope not all of them fail simultaneously?

Another issue with online verification is the increase in traffic. Would
Verisign like it if they got queried for a significant fraction of all the SSL
connections that are made by all users in the world?

-- 
Met vriendelijke groet / with kind regards,
  Guus Sliepen g...@sliepen.org




Five Theses on Security Protocols

2010-07-31 Thread Perry E. Metzger
Inspired by recent discussion, these are my theses, which I hereby
nail upon the virtual church door:

1 If you can do an online check for the validity of a key, there is no
  need for a long-lived signed certificate, since you could simply ask
  a database in real time whether the holder of the key is authorized
  to perform some action. The signed certificate is completely
  superfluous.

  If you can't do an online check, you have no practical form of
  revocation, so a long-lived signed certificate is unacceptable
  anyway.
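
  As a minimal sketch of the distinction (Python; the database and names
  are hypothetical), the online model reduces to a real-time lookup, and
  revocation is nothing more than updating a record:

      # Hypothetical online authorization database, keyed by key fingerprint.
      AUTHZ_DB = {
          "ab:cd:ef:...": {"login", "transfer"},
      }

      def authorized(fingerprint: str, action: str) -> bool:
          # Real-time question: may the holder of this key do `action`
          # right now? There is no long-lived certificate to parse, and
          # revocation is just deleting or editing the record.
          return action in AUTHZ_DB.get(fingerprint, set())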

2 A third party attestation, e.g. any certificate issued by any modern
  CA, is worth exactly as much as the maximum liability of the third
  party for mistakes. If the third party has no liability for
  mistakes, the certification is worth exactly nothing. All commercial
  CAs disclaim all liability.

  An organization needs to authenticate and authorize its own users;
  it cannot ask some other organization with no actual liability to
  perform this function on its behalf. A bank has to know its own
  customers, the customers have to know their own bank. A company
  needs to know on its own that someone is allowed to reboot a machine
  or access a database.

3 Any security system that demands that users be educated,
  i.e. which requires that users make complicated security decisions
  during the course of routine work, is doomed to fail.

  For example, any system which requires that users actively make sure
  throughout a transaction that they are giving their credentials to
  the correct counterparty and not to a thief who could reuse them
  cannot be relied on.

  A perfect system is one in which no user can perform an action that
  gives away their own credentials, and in which no action can be
  authorized without the user's participation and knowledge. No
  system can be perfect, but that is the ideal to be sought after.

4 As a partial corollary to 3, but one that requires saying on its own:
  If false alarms are routine, all alarms, including real ones, will
  be ignored. Any security system that produces warnings that need to
  be routinely ignored during the course of everyday work, and which
  can then be ignored by simple user action, has trained its users to be
  victims.

  For example, the failure of a cryptographic authentication check
  should be rare, and should nearly always actually mean that
  something bad has happened, like an attempt to compromise security,
  and should never, ever, ever result in a user being told "oh, ignore
  that warning", and should not even provide a simple UI that permits
  the warning to be ignored, should someone advise the user to do so.

  If a system produces too many false alarms to permit routine work to
  happen without an "ignore warning" button, the system is worthless
  anyway.
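
  As a sketch of the design consequence (Python; illustrative only), a
  failed check simply aborts the operation, with no code path that lets a
  user, or someone coaching a user, wave the alarm through:

      class AuthenticationFailure(Exception):
          """Raised when a cryptographic check fails; deliberately fatal."""

      def require_authentic(check_passed: bool) -> None:
          if not check_passed:
              # No "ignore" flag, no override parameter: the only way
              # forward is to fix whatever is actually wrong.
              raise AuthenticationFailure("peer failed authentication check")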

5 Also related to 3, but important in its own right: to quote Ian
  Grigg:

*** There should be one mode, and it should be secure. ***

  There must not be a confusing combination of secure and insecure
  modes, requiring the user to actively pay attention to whether the
  system is secure, and to make constant active configuration choices
  to enforce security. There should be only one, secure mode.

  The more knobs a system has, the less secure it is. It is trivial to
  design a system sufficiently complicated that even experts, let
  alone naive users, cannot figure out what the configuration
  means. The best systems should have virtually no knobs at all.

  In the real world, bugs will be discovered in protocols, hash
  functions and crypto algorithms will be broken, etc., and it will be
  necessary to design protocols so that, subject to avoiding downgrade
  attacks, newer and more secure modes can and will be used as they
  are deployed to fix such problems. Even then, however, the user
  should not have to make a decision to use the newer, more secure
  mode; it should simply happen.
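
  A minimal sketch of that upgrade rule (Python; version numbers invented):
  both sides advertise what they support and the newest common mode is
  selected automatically, with no user decision anywhere. A real protocol
  must also authenticate the advertised lists themselves (e.g. by binding
  them into the handshake MAC), or the negotiation itself can be downgraded:

      def negotiate(client_versions: set, server_versions: set) -> int:
          common = client_versions & server_versions
          if not common:
              raise ValueError("no common protocol version")  # fail closed
          return max(common)  # newest mutually supported mode, no knob

      # e.g. negotiate({1, 2, 3}, {2, 3, 4}) -> 3, and later -> 4,
      # silently, as deployed versions advance.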


Perry
-- 
Perry E. Metzger pe...@piermont.com



Re: Five Theses on Security Protocols

2010-07-31 Thread Anne Lynn Wheeler

A corollary to security proportional to risk is parameterized risk management 
... where a variety of technologies with varying integrity levels can co-exist 
within the same infrastructure/framework. Transactions exceeding a particular 
technology's risk/integrity threshold may still be approved, given that various 
compensating processes are invoked (this allows for multi-decade infrastructure 
operation w/o traumatic dislocation when moving from technology to technology, 
as well as multi-technology co-existence).
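
A rough sketch of the idea (Python; the technologies, integrity levels, and
ceilings are invented purely for illustration): each technology carries an
integrity level, each level a transaction ceiling, and transactions above the
ceiling are approved only if compensating processes succeed:

    # Hypothetical integrity levels and per-level transaction ceilings.
    INTEGRITY = {"magstripe": 1, "pin_debit": 2, "hw_token": 3}
    CEILING = {1: 50.00, 2: 500.00, 3: 50_000.00}

    def approve(amount: float, technology: str) -> bool:
        if amount <= CEILING[INTEGRITY[technology]]:
            return True
        # Above the ceiling: invoke compensating processes (a stand-in
        # for out-of-band confirmation, manual review, etc.).
        return compensating_checks(amount, technology)

    def compensating_checks(amount: float, technology: str) -> bool:
        return False  # placeholder: fail closed unless checks succeed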

In the past I had brought this up to the people defining the V3 extensions ... 
early in their process ... and they offered to let me do the work of defining a 
V3 integrity-level field. My response was: why bother with stale, static 
information, when real-value operations would use a much more capable dynamic, 
realtime, online process?

--
virtualization experience starting Jan1968, online at home since Mar1970



Re: Five Theses on Security Protocols

2010-07-31 Thread John Levine
Nice theses.  I'm looking forward to the other 94.  The first one is a
nice summary of why DKIM might succeed in e-mail security where S/MIME
failed.  (Succeed as in, people actually use it.)

2 A third party attestation, e.g. any certificate issued by any modern
  CA, is worth exactly as much as the maximum liability of the third
  party for mistakes. If the third party has no liability for
  mistakes, the certification is worth exactly nothing. All commercial
  CAs disclaim all liability.

Geotrust, to pick the one I use, has a warranty of $10K on their cheap
certs and $150K on their green bar certs.  Scroll down to the bottom
of this page where it says "Protection Plan":

http://www.geotrust.com/resources/repository/legal/

It's not clear to me how much this is worth, since it seems to warrant
mostly that they won't screw up, e.g., leak your private key, and
they'll only pay to the party that bought the certificate, not third
parties that might have relied on it.

R's,
John



Re: Five Theses on Security Protocols

2010-07-31 Thread Peter Gutmann
Perry E. Metzger pe...@piermont.com writes:

Inspired by recent discussion, these are my theses, which I hereby nail upon
the virtual church door:

Are we allowed to play peanut gallery for this?

1 If you can do an online check for the validity of a key, there is no
  need for a long-lived signed certificate, since you could simply ask
  a database in real time whether the holder of the key is authorized
  to perform some action.

Based on the ongoing discussion I've now had, both on-list and off, about
blacklist-based key validity checking [0], I would like to propose an
addition:

  The checking should follow the credit-card authorised/declined model, and
  not be based on blacklists (a.k.a. "the second dumbest idea in computer
  security"; see
  http://www.ranum.com/security/computer_security/editorials/dumb/).

(Oh yes, for a laugh, have a look at the X.509 approach to doing this.  It's
eighty-seven pages long, and that's not including the large number of other
RFCs that it includes by reference: http://tools.ietf.org/html/rfc5055).
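
To make the contrast concrete, a sketch (Python; the stores are hypothetical):
the credit-card model answers "is this key good right now?" and fails closed on
anything unknown, while a blacklist answers only "have we been told it's bad?"
and fails open for every key it never heard about:

    VALID_NOW = set()   # authorised/declined model: known-good keys
    REVOKED = set()     # blacklist model: known-bad keys

    def status_check(key_id: str) -> bool:
        return key_id in VALID_NOW      # unknown key => declined

    def blacklist_check(key_id: str) -> bool:
        return key_id not in REVOKED    # unknown key => accepted (!)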

 The signed certificate is completely superfluous.

This is, I suspect, the reason for the vehement opposition to any kind of
credit-card-style validity checking of keys: if you were to introduce it, it
would make both certificates and the entities that issue them superfluous.

Peter.

[0] It's kinda scary that it's taking this much debate to try and convince
people that blacklists are not a valid means of dealing with arbitrarily
delegatable capabilities.



Re: Five Theses on Security Protocols

2010-07-31 Thread Chris Palmer
Usability engineering requires empathy. Isn't it interesting that nerds
built themselves a system, SSH, that mostly adheres to Perry's theses? We
nerds have empathy for ourselves. But when it comes to a system for other
people, we suddenly lose all empathy and design a system that ignores
Perry's theses.

(In an alternative scenario, given the history of X.509, we can imagine that
PKI's woes are due not to nerd un-empathy, but to
government/military/hierarchy-lover un-empathy. Even in that scenario, nerd
cooperation is necessary.)

The irony is, normal people and nerds need systems with the same properties,
for the same reasons.
