IDS systems (was Re: Five Theses on Security Protocols)

2010-08-30 Thread Perry E. Metzger
On Mon, 30 Aug 2010 13:49:54 +0100 Ben Laurie  wrote:
> On 02/08/2010 12:32, Ian G wrote:
> > We are facing Dan Geer's disambiguation problem:
> > 
> >> The design goal for any security system is that the
> >> number of failures is small but non-zero, i.e., N>0.
> >> If the number of failures is zero, there is no way
> >> to disambiguate good luck from spending too much.
> >> Calibration requires differing outcomes.
> > 
> > Maybe money can buy luck ;)
> 
> Late to the party, I realise, but I have to argue with this. This is
> only true if there's no way to distinguish close misses from nothing
> interesting happening at all.
> 
> This may be the first time I've realised why there's any point to an
> IDS: I've always argued that if you can detect the attacks, then you
> should not be vulnerable to them; however, if your goal is to
> justify the money you spent on not being vulnerable, then suddenly
> an IDS makes some kind of sense. However, no-one has ever suggested
> to me that that is their actual purpose...

Perhaps you haven't been in the right kinds of companies. Your
observation is one many have made in the past, and I don't think it
is even much of a secret.

I suspect that many IDS systems have been put in place over the years
largely as a way of showing management how bad a problem the
security team faces, and why its budget is justified. That is
never the public claim, of course. However, without some sort of
evidence of a continuing threat, there are managers who would see
an ideally performing security team (i.e. one with a perfect record of
defense) and interpret it as a group spending money to no effect
whatsoever. "Why do I need you when no one ever breaks in?" might be
the (foolish) question.

IDS systems generate voluminous reports which may be used, in part,
to justify continuing funding for a security effort. They allow
management to feel that they are getting something concrete for their
money.

Perry
-- 
Perry E. Metzger pe...@piermont.com

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Five Theses on Security Protocols

2010-08-03 Thread Perry E. Metzger
On Mon, 2 Aug 2010 16:20:01 -0500 Nicolas Williams wrote:
> But users have to help you establish the context.  Have you ever
> been prompted about invalid certs when navigating to pages where
> you couldn't have cared less about the server's ID?  On the web,
> when does security matter?  When you fill in a field on a form?
> Maybe you're just submitting an anonymous comment somewhere.  But
> certainly not just when making payments.
>
> I believe the user has to be involved somewhat.  The decisions the
> user has to make need to be simple, real simple (e.g., never about
> whether to accept a defective cert).  But you can't treat the user
> as a total ignoramus unless you're also willing to severely
> constrain what the user can do (so that you can then assume that
> everything the user is doing requires, say, mutual authentication
> with peer servers).

There are decisions, and there are decisions.

If, for example (and this is really just an example, not a worked
design), your browser authenticates the bank website using a USB-
attached hardware token containing both parties' credentials, which
also refuses to work for any other web site, it is very difficult for
the user to do anything to give away the store, and the user has very
little scope for decision making (beyond, say, deciding whether to
make a transfer once they're authenticated).

This is a big contrast to the current situation, where the user needs
to figure out whether they're typing their password into the correct
web site etc., and can be phished into giving up their credentials.

You can still be attacked even in an ideal situation, of course. For
example, you could still follow instructions from con men telling you
to wire money to them. However, the trick is to change the system from
one where the user must be constantly on the alert lest they do
something wrong, like typing in a password to the wrong web site, to
one in which the user has to go out of their way to do something
wrong, like actively making the decision to send a bad guy all their
money.
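
To make that concrete, here is a toy sketch in Python (all names are
hypothetical; this illustrates the shape of the idea, not a worked
design) of a token bound to a single site that refuses to authenticate
anywhere else:

    import hashlib
    import hmac
    import os

    class BankToken:
        # Toy model of a single-site hardware token: it holds a secret
        # shared with exactly one site, and the token itself -- not the
        # user -- enforces the origin check, so there is no credential
        # for a phishing site to capture.
        def __init__(self, pinned_origin, shared_secret):
            self._origin = pinned_origin
            self._secret = shared_secret

        def respond(self, origin, challenge):
            if origin != self._origin:
                raise ValueError("token refuses: unknown origin %r" % origin)
            return hmac.new(self._secret, challenge, hashlib.sha256).digest()

    secret = os.urandom(32)       # provisioned into both token and bank
    token = BankToken("https://bank.example", secret)

    challenge = os.urandom(16)    # issued by the bank at login
    response = token.respond("https://bank.example", challenge)
    assert response == hmac.new(secret, challenge, hashlib.sha256).digest()

    # A look-alike site relaying the same challenge gets nothing:
    # token.respond("https://phisher.example", challenge)  # raises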

Perry
-- 
Perry E. Metzger pe...@piermont.com

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Five Theses on Security Protocols

2010-08-03 Thread Nicolas Williams
On Mon, Aug 02, 2010 at 11:29:32AM -0400, Adam Fields wrote:
> On Sat, Jul 31, 2010 at 12:32:39PM -0400, Perry E. Metzger wrote:
> [...]
> > 3 Any security system that demands that users be "educated",
> >   i.e. which requires that users make complicated security decisions
> >   during the course of routine work, is doomed to fail.
> [...]
> 
> I would amend this to say "which requires that users make _any_
> security decisions".
> 
> It's useful to have users confirm their intentions, or notify the user
> that a potentially dangerous action is being taken. It is not useful
> to ask them to know (or more likely guess, or even more likely ignore)
> whether any particular action will be harmful or not.

But users have to help you establish the context.  Have you ever been
prompted about invalid certs when navigating to pages where you couldn't
have cared less about the server's ID?  On the web, when does security
matter?  When you fill in a field on a form?  Maybe you're just
submitting an anonymous comment somewhere.  But certainly not just when
making payments.

I believe the user has to be involved somewhat.  The decisions the user
has to make need to be simple, real simple (e.g., never about whether to
accept a defective cert).  But you can't treat the user as a total
ignoramus unless you're also willing to severely constrain what the user
can do (so that you can then assume that everything the user is doing
requires, say, mutual authentication with peer servers).

Nico
-- 

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Five Theses on Security Protocols

2010-08-02 Thread Anne & Lynn Wheeler

minor addenda about speeds & feeds concerning the example of the mid-90s payment
protocol specification that had enormous PKI/certificate bloat ... and SSL.

The original SSL security was predicated on the user understanding the 
relationship between the webserver they thought they were talking to, and the 
corresponding URL. They would enter that URL into the browser ... and the 
browser would then establish that the URL corresponded to the webserver being 
talked to (both parts were required in order to create an environment where the 
webserver you thought you were talking to, was, in fact, the webserver you were
actually talking to). This requirement was almost immediately violated when 
merchant servers found that using SSL for the whole operation cost them 90-95% 
of their thruput. As a result, the merchants dropped back to just using SSL for 
the payment part and having a user click on a check-out/payment button. The 
(potentially unvalidated, counterfeit) webserver now provides the URL ... and 
SSL has been reduced to just validating that the URL corresponds to the 
webserver being talked to (or validating that the webserver being talked to
is the webserver that it claims to be; i.e., NOT validating that the
webserver is the one you think you are talking to).

Now, the backend of the SSL payment process was an SSL connection between the webserver
and a "payment gateway" (which sat on the internet and acted as a gateway to the payment
networks). Under moderate to heavy load, the avg. transaction elapsed time (at the payment
gateway, thru the payment network) was under 1/3rd of a second round-trip. The avg. roundtrip
at the merchant servers could be a little over 1/3rd of a second (depending on the internet
connection between the webserver and the payment gateway).

I've referenced before doing BSAFE benchmarks for the PKI/certificate-bloated
payment specification ... and using a speeded-up BSAFE library ... the people
involved in the bloated payment specification claimed the benchmark numbers
were 100 times too slow (apparently believing that the standard BSAFE library at
the time ran nearly 1000 times faster than it actually did).

When pilot code (for the enormously bloated PKI/certificate specification) was
finally available, using the BSAFE library (the speedup enhancements had by then
been incorporated into the standard distribution) ... dedicated pilot demos took
nearly a minute of elapsed time per transaction round trip ... effectively all
of it BSAFE computations (using dedicated computers doing nothing else).

Merchants that had found that using SSL for the whole consumer interaction would
have required ten to twenty times the number of computers (to handle a load
equivalent to non-SSL) were potentially being faced with needing hundreds of
additional computers to handle just the BSAFE computational load (for the
mentioned extremely PKI/certificate-bloated payment specification) ... and still
wouldn't be able to perform the transaction anywhere close to the elapsed time
of the implementation being used with SSL.

--
virtualization experience starting Jan1968, online at home since Mar1970

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Five Theses on Security Protocols

2010-08-02 Thread Adam Fields
On Sat, Jul 31, 2010 at 12:32:39PM -0400, Perry E. Metzger wrote:
[...]
> 3 Any security system that demands that users be "educated",
>   i.e. which requires that users make complicated security decisions
>   during the course of routine work, is doomed to fail.
[...]

I would amend this to say "which requires that users make _any_
security decisions".

It's useful to have users confirm their intentions, or notify the user
that a potentially dangerous action is being taken. It is not useful
to ask them to know (or more likely guess, or even more likely ignore)
whether any particular action will be harmful or not.

-- 
- Adam
--
If you liked this email, you might also like:
"Some iPad apps I like" 
-- http://workstuff.tumblr.com/post/680301206
"Sous Vide Black Beans" 
-- http://www.aquick.org/blog/2010/07/28/sous-vide-black-beans/
"Sous Vide Black Beans" 
-- http://www.flickr.com/photos/fields/4838987109/
"fields: Readdle turns 3: Follow @readdle, RT to win an #iPad. $0.99 for any 
ap..." 
-- http://twitter.com/fields/statuses/20072241887
--
** I design intricate-yet-elegant processes for user and machine problems.
** Custom development project broken? Contact me, I can help.
** Some of what I do: http://workstuff.tumblr.com/post/70505118/aboutworkstuff

[ http://www.adamfields.com/resume.html ].. Experience
[ http://www.morningside-analytics.com ] .. Latest Venture
[ http://www.confabb.com ]  Founder

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Five Theses on Security Protocols

2010-08-02 Thread Ian G

On 1/08/10 9:08 PM, Peter Gutmann wrote:

> John Levine writes:
>
>> Geotrust, to pick the one I use, has a warranty of $10K on their cheap certs
>> and $150K on their green bar certs.  Scroll down to the bottom of this page
>> where it says Protection Plan:
>>
>> http://www.geotrust.com/resources/repository/legal/
>>
>> It's not clear to me how much this is worth, since it seems to warrant mostly
>> that they won't screw up, e.g., leak your private key, and they'll only pay
>> to the party that bought the certificate, not third parties that might have
>> relied on it.
>
> A number of CAs provide (very limited) warranty cover, but as you say it's
> unclear that this provides any value because it's so locked down that it's
> almost impossible to claim on it.


Although distasteful, this is more or less essential.  The problem is 
best seen like this:  take all the potential relying parties for a large 
site / large CA, and multiply that by the damages in a (hypothetical) 
fat-ass class action suit.  Think phishing, or an MD5 crunch, or a 
random Debian code downsizing.


What results is a Very Large Number (tm).

By fairly standard business processes one ends up at the sad but 
inevitable principle:


   the CA sets expected liabilities to zero

And must do so.  Note that there is a difference between "expected 
liabilities" and "liabilities stated in some document".  I use the term 
"expected" in the finance sense (cf. Net Present Value calculations).


In practice, this is what could be called best practices, to the extent 
that I've seen it.


http://www.iang.org/papers/open_audit_lisa.html#rlo says the same thing 
in many many pages, and shows how CAcert does it.




> Does anyone know of someone actually
> collecting on this?


I've never heard of anyone collecting, but I wish I had (heard).


> Could an affected third party sue the cert owner


In theory, yes.  This is "expected".  In some sense, the certificate's 
name might be interpreted as suggesting that, because the name is 
validated, you can sue that person.


However, I'd stress that's a theory.  See the above paper for my trashing of 
that, "What's in a Name?", at an individual level.  I'd speculate that 
the problem will be some class action suit, because of the enormous 
costs involved.




> who can
> then claim against the CA to recover the loss?


If the cause of loss is listed in the documentation . . .


> Is there any way that a
> relying party can actually make this work, or is the warranty cover more or
> less just for show?


We are facing Dan Geer's disambiguation problem:

> The design goal for any security system is that the
> number of failures is small but non-zero, i.e., N>0.
> If the number of failures is zero, there is no way
> to disambiguate good luck from spending too much.
> Calibration requires differing outcomes.


Maybe money can buy luck ;)



iang

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Five Theses on Security Protocols

2010-08-02 Thread Anne & Lynn Wheeler

On 08/01/2010 04:08 PM, Anne & Lynn Wheeler wrote:

> Old past thread interchange with members of that specification team
> regarding how the specification was (effectively) never intended to do
> more than hide the transaction during transmission:
> http://www.garlic.com/~lynn/aepay7.htm#norep5 non-repudiation, was re: crypto
> flaw in secure mail standards


oops, finger-slip ... that should be:
http://www.garlic.com/~lynn/aepay7.htm#nonrep5 non-repudiation, was re: crypto 
flaw in secure mail standards

my archived post (14 July 2001) references an earlier thread in the commerce.net-hosted,
ANSI-standard electronic payments list ... archive gone 404 ... but it lives on at the
wayback machine; aka, from 1999, regarding what SET intended to address:
http://web.archive.org/web/20010725154624/http://lists.commerce.net/archives/ansi-epay/199905/msg9.html


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Five Theses on Security Protocols

2010-08-02 Thread Anne & Lynn Wheeler

On 08/01/2010 01:51 PM, Jeffrey I. Schiller wrote:

> I remember them well. Indeed these protocols, presumably you are
> talking about Secure Electronic Transactions (SET), were a major
> improvement over SSL, but adoption was killed by not only failing to
> give the merchants a break on the fraud surcharge, but also requiring
> the merchants to pick up the up-front cost of upgrading all of their
> systems to use these new protocols. And there was the risk that it
> would turn off consumers because it required that consumers set up
> credentials ahead of time. So if a customer arrived at my SET
> protected store-front, they might not be able to make a purchase if
> they had not already set up their credentials. Many would just go to a
> competitor that didn't require SET rather than establish the
> credentials.


The SET specification predated these (it was also internet-specific, from the mid-90s, and
went on concurrently with the x9a10 financial standards work ... which had the requirement
to preserve the integrity of *ALL* retail payments) ... the efforts of the past decade were
much simpler and more practical ... and tended to be various kinds of "something you
have" authentication. I'm unaware of any publicity and/or knowledge about these
payment products (from a decade ago) outside the payment industry and select high-volume
merchants.

The mid-90s PKI/certificate-based specifications tended to hide behind a large 
amount of complexity ... and provided no effective additional benefit over & 
above SSL (aka, with all the additional complexity, they did little more than hide the 
transaction during transit on the internet).  They also would strip all the PKI 
gorp off at the Internet boundary (because of the 100-times payload size and 
processing bloat that the certificate processing represented) and send the 
transaction thru the payment network with just a flag indicating that certificate 
processing had occurred (end-to-end security was not feasible). Various past posts 
mention the 100-times payload size and processing bloat that certificates added 
to typical payment transactions:
http://www.garlic.com/~lynn/subpubkey.html#bloat

In the time-frame of some of the pilots, there were presentations by payment network 
business people at ISO standards meetings that they were seeing transactions come thru 
the network with the "certificate processed" flag on ... but they could prove that 
no certificate processing had actually occurred (there was financial motivation to lie, 
since turning the flag on lowered the interchange fee).

The certificate processing overhead also further increased the merchant 
processing overhead ... in large part responsible for the low uptake ... even 
with some benefit of a lowered interchange fee. The associations looked at 
providing additional incentives (somewhat similar to the more recent point-of-sale 
hardware token incentives in Europe), effectively changing the burden of proof 
in disputes (rather than the merchant having to prove the consumer was at fault, 
the consumer would have to prove they weren't at fault; of course this would 
have met with some difficulty in the US with regard to Regulation E).

Old past thread interchange with members of that specification team regarding 
how the specification was (effectively) never intended to do more than hide the 
transaction during transmission:
http://www.garlic.com/~lynn/aepay7.htm#nonrep5 non-repudiation, was re: crypto 
flaw in secure mail standards

aka, the high-overhead, convoluted, complex processing of the specification 
provided little practical added benefit over and above what was already being 
provided by SSL.

an oblique reference to that specification appears in a recent post in this thread, 
regarding having done both a PKI-operation benchmark profile (using the BSAFE library) 
as well as a business benefit profile of the specification (when it was initially 
published ... before any operational pilots):
http://www.garlic.com/~lynn/2010l.html#59 A mighty fortress is our PKI

with regard specifically to the BSAFE processing bloat referenced in the above ... 
there is folklore that one of the people working on the specification 
admitted to adding a huge number of additional PKI-operations (and message 
interchanges) to the specification ... effectively for no other reason than the 
added complexity and use of PKI-operations.

--
virtualization experience starting Jan1968, online at home since Mar1970

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Five Theses on Security Protocols

2010-08-02 Thread Jeffrey I. Schiller
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 08/01/2010 09:31 AM, Anne & Lynn Wheeler wrote:
> Part of what was recognized by the x9a10 financial standard working
> group (and the resulting x9.59 financial standard) was that relying
> on the merchant (and/or the transaction processor) to provide major
> integrity protection for financial transactions ... is placing the
> responsibility on the entities with the least financial interest
> ... the "security proportional to risk" scenario (where largest
> percentage of exploits occur in the current infrastructure
> ... including data breaches)

Speaking as a merchant (yep, I get to do that too!), albeit one in
Higher Education, we don't view the risk of a compromise of a
transaction simply as the risk of losing the associated profit. Our
risk concern is more about risk to reputation. MIT wants its name in
the papers associated with scientific breakthroughs and the like, not
with reports of security breaches!

We also need to be concerned with penalties that might be levied from
the various card networks (this is a more recent concern).

I am involved in our PCI security efforts (led them for a while). We
are not at all concerned with issues involved with SSL. Almost all of
our concern is about protecting the PC's of non-technical people who
are processing these transactions.

Similarly the breaches I am aware of were all about compromising
back-end systems, not breaking SSL.

> A decade ago, there were a number of "secure" payment transaction
> products floated for the internet ... with significant upfront
> merchant interest ... assuming that the associated transactions
> would have significant lower interchange fees (because of the
> elimination of "fraud" surcharge). Then things went thru a period of
> "cognitive dissonance" when financial institutions tried to explain
> why these transactions should have a higher interchange fee ... than
> the highest "fraud surchange" interchange fees. The severity of the
> "cognitive dissonance" between the merchants and the financial
> institutions over whether "secure" payment transactions products
> should result in higher fees or lower fees contributed significantly
> to the products not being deployed.

I remember them well. Indeed these protocols, presumably you are
talking about Secure Electronic Transactions (SET), were a major
improvement over SSL, but adoption was killed by not only failing to
give the merchants a break on the fraud surcharge, but also requiring
the merchants to pick up the up-front cost of upgrading all of their
systems to use these new protocols. And there was the risk that it
would turn off consumers because it required that consumers set up
credentials ahead of time. So if a customer arrived at my SET
protected store-front, they might not be able to make a purchase if
they had not already set up their credentials. Many would just go to a
competitor that didn't require SET rather than establish the
credentials.

So another aspect of the failure of SET was that it didn't provide
real incentives for consumers to go through the hassle of having their
credentials set up, thus creating a competitive advantage for those
merchants who did *not* use SET. Sigh.

It comes back to Perry's theses: you *must* design security systems to
work for *real* people (both as consumers and merchants).

-Jeff

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iD8DBQFMVbQg8CBzV/QUlSsRAlCwAKDL85NXtQ+HXMvjvhpBs3fnOiL+0wCghXTT
aILxpWKCSavrIDukc+VCKVU=
=tATX
-END PGP SIGNATURE-

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Five Theses on Security Protocols

2010-08-02 Thread Perry E. Metzger
On Sun, 1 Aug 2010 15:07:46 +0200 Guus Sliepen wrote:
> On Sun, Aug 01, 2010 at 11:20:51PM +1200, Peter Gutmann wrote:
>
> > >But, if you query an online database, how do you authenticate
> > >its answer? If you use a key for that or SSL certificate, I see
> > >a chicken-and-egg problem.
> >
> > What's your threat model?
>
> My threat model is practice.
>
> I assume Perry assumed that you have some pre-established trust
> relationship with the online database. However, I do not see myself
> having much of those. Yes, my browser comes preloaded with a set of
> root certificates, but Verisign is as much a third party to me as
> any SSL protected website I want to visit.

Security is not magic. If you have no pre-existing trust relationship
with some source of information, you cannot get additional trusted
information from that source. You have to have some sort of existing
pre-shared information -- a public or private key and a decision to
believe the information from that source -- to get anywhere.

> Anyway, suppose we do all trust Verisign. Then everybody needs its
> public key on their computers to safely communicate with it. How is
> this public key distributed? Just like those preloaded root certs
> in the browser? What if their key gets compromised? How do we
> revoke that key and get a new one?

I think the "we all trust Verisign" model is a bad one (no particular
slam on Verisign here, I think any "we all trust some third party"
model is bad.)

However, you are asking an important question, which is, how do we
replace compromised keys in our configuration files?

This depends a lot on the application. If we're talking about a single
administrative domain (say a few thousand machines inside a company
that are all managed by the same small group), you push out a new
config file. If we're talking about, say, a banking application where
lots of people have the bank's key in the config for their software,
the protocol either has a way of rolling over to the "next key" or you
are forced to send all the users new smartcards or USB keys or what
have you, which is a good incentive to have a means in your protocol
for moving to the next key.
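
To make the "next key" idea concrete, here is a toy sketch in Python
(hypothetical field names; toy_verify stands in for whatever signature
scheme the protocol actually uses). The configuration always carries
one spare key, and a rollover announcement signed with the current key
promotes the spare and supplies a new one:

    config = {
        "current_key": "pk1",   # key trusted today
        "next_key":    "pk2",   # successor, distributed in advance
    }

    def toy_verify(key, body, sig):
        # Stand-in for a real signature verification.
        return sig == ("signed-by-" + key, repr(body))

    def handle_rollover(config, msg, verify):
        # The announcement carries the key-after-next, preserving the
        # one-spare invariant, so keys can be retired in-protocol
        # instead of by mailing everyone new smartcards.
        if not verify(config["current_key"], msg["body"], msg["sig"]):
            raise ValueError("rollover announcement not authentic")
        config["current_key"] = config["next_key"]
        config["next_key"] = msg["body"]["new_next_key"]

    body = {"new_next_key": "pk3"}
    msg = {"body": body, "sig": ("signed-by-pk1", repr(body))}
    handle_rollover(config, msg, toy_verify)
    assert config == {"current_key": "pk2", "next_key": "pk3"}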

As for the case of "everyone on earth trusts third party X", except
for something like the DNS, I see no reason or benefit in such a
system at all.

> We still have all the same problems with the public key of our root
> of trust as we have with long-lived certificates. Perry says we
> should do online checks in such a case. So which online database can
> tell us if Verisign's public key is still good? Do we need multiple
> trusted online databases who can vouch for each other, and hope not
> all of them fail simultaneously?

No. I think you're not focusing on real applications here. You're
instead thinking far too abstractly.

Say, for example, we're talking about some credit card processor's
public key that is used by all their validation terminals. Presumably
they need some sort of update protocol. However, that protocol does
not need to involve certificates and is not a matter of global
concern. There should be very few cases where a key is a matter of
global concern -- I think having keys be a matter of global concern is
something of an architectural error.

> Another issue with online verification is the increase in traffic.
> Would Verisign like it if they get queried for a significant
> fraction of all the SSL connections that are made by all users in
> the world?

I think the "Verisign is the standard of all truth" model is broken,
but lets consider what you're saying for a minute.

If Verisign is indeed the "standard of truth", then how could we do
anything else? If we don't check a key with Verisign at least every
few hours, then how can we ever have meaningful revocation? Whether
you think that Verisign wouldn't "like" this or not, there is no other
choice given the model you are presenting. Either revocation is
impossible or people make online checks.

Perry
-- 
Perry E. Metzger pe...@piermont.com

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Five Theses on Security Protocols

2010-08-01 Thread Anne & Lynn Wheeler

On 07/31/2010 08:37 PM, Jeffrey I. Schiller wrote:

> In general I agree with you, in particular when the task at hand is
> authenticating individuals (or more to the point, Joe
> Sixpack). However the use case of certificates for websites has worked
> out pretty well (from a purely practical standpoint). The site owner
> has to protect their key, because as you say, revocation is pretty
> much non-existent.


The publicity campaign for SSL digital certificates, and why consumers should feel good 
about them, was a major reason that, long ago and far away, I coined the term "merchant 
comfort" certificates.

Part of what was recognized by the x9a10 financial standard working group (and the 
resulting x9.59 financial standard) was that relying on the merchant (and/or the 
transaction processor) to provide major integrity protection for financial transactions 
... is placing the responsibility on the entities with the least financial interest ... 
the "security proportional to risk" scenario
(where the largest percentage of exploits occur in the current infrastructure ... 
including data breaches).

In the current payment paradigm, the merchant's financial interest in the 
transaction information is the profit on the transaction ... which can be a 
couple of dollars (and the transaction processor's profit can be a couple of cents 
on the transaction). By comparison (in the current paradigm), the crooks' financial 
motivation in the transaction information is the account credit limit (or 
account balance), which can be several hundred to several thousand dollars ... 
as a result, the crooks attacking the system can frequently afford to outspend 
the defenders by two orders of magnitude (or more).

The majority of fraud (in the current infrastructure) also contributed to retailers having 
significant "fraud" surcharges added onto their interchange fees. Past crypto mailing list 
threads have discussed that financial infrastructures make a significant percentage of their 
profit/bottom-line from these "fraud surcharges" (large US issuing financial institutions 
having made 40-60% of their bottom line from these fees) ... with the interchange-fee "fraud 
surcharges" for the highest-risk transactions being an order of magnitude or more larger 
than for the lowest-risk transactions.

The work on the x9.59 financial standard recognized this dichotomy and slightly 
tweaked the paradigm ... eliminating knowledge of the account number and/or 
information from previous transactions as a risk. This would significantly 
decrease the fraud for all x9.59 transactions in the world (i.e., the x9a10 
financial standard working group had been given the requirement to preserve the 
integrity of the financial infrastructure for *ALL* retail payments; 
point-of-sale, face-to-face, unattended, internet, debit, credit, stored-value, 
high-value, low-value, transit turnstile, cardholder-not-present; aka *ALL*). 
As a result, it also eliminates the major use of SSL in the world today ... 
hiding financial transaction information. It also eliminates other kinds of 
risks from things like data breaches (it didn't eliminate data breaches, but 
eliminated the motivation behind the majority of breaches in the world today: 
being able to use the information for fraudulent financial transactions).

The downside is, with the elimination of all that fraud ... it eliminates the majority of 
the "fraud surcharge" from interchange fees ... and potentially cuts the "interchange 
fee" bottom line for large issuing institutions from 40-60% to possibly 4-6%. It sort of 
could be viewed as commoditizing payment transactions.

A decade ago, there were a number of "secure" payment transaction products floated for the internet ... with significant upfront 
merchant interest ... assuming that the associated transactions would have significant lower interchange fees (because of the elimination 
of "fraud" surcharge). Then things went thru a period of "cognitive dissonance" when financial institutions tried to 
explain why these transactions should have a higher interchange fee ... than the highest "fraud surcharge" interchange fees. The
severity of the "cognitive dissonance" between the merchants and the financial institutions over whether "secure" 
payment transactions products should result in higher fees or lower fees contributed significantly to the products not being deployed.

--
virtualization experience starting Jan1968, online at home since Mar1970

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Five Theses on Security Protocols

2010-08-01 Thread Guus Sliepen
On Sun, Aug 01, 2010 at 11:20:51PM +1200, Peter Gutmann wrote:

> >But, if you query an online database, how do you authenticate its answer? If
> >you use a key for that or SSL certificate, I see a chicken-and-egg problem.
> 
> What's your threat model?

My threat model is practice.

I assume Perry assumed that you have some pre-established trust relationship
with the online database. However, I do not see myself having much of those.
Yes, my browser comes preloaded with a set of root certificates, but Verisign
is as much a third party to me as any SSL protected website I want to visit.

Anyway, suppose we do all trust Verisign. Then everybody needs its public key
on their computers to safely communicate with it. How is this public key
distributed? Just like those preloaded root certs in the browser? What if their
key gets compromised? How do we revoke that key and get a new one? We still
have all the same problems with the public key of our root of trust as we have
with long-lived certificates. Perry says we should do online checks in such a
case. So which online database can tell us if Verisign's public key is still
good? Do we need multiple trusted online databases who can vouch for each
other, and hope not all of them fail simultaneously?

Another issue with online verification is the increase in traffic. Would
Verisign like it if they get queried for a significant fraction of all the SSL
connections that are made by all users in the world?

-- 
Met vriendelijke groet / with kind regards,
  Guus Sliepen 




Re: Five Theses on Security Protocols

2010-08-01 Thread Peter Gutmann
Guus Sliepen  writes:

>But, if you query an online database, how do you authenticate its answer? If
>you use a key for that or SSL certificate, I see a chicken-and-egg problem.

What's your threat model?  At the moment if I get a key from a PGP keyserver
for a random contact I have no way of authenticating it (it may be signed, but
I have no idea who the signers are), I just hope the key's the right one.  The
ability to receive email at the given address helps prove it's them, and the
ability to reply indicates proof of possession of the private key.

In this case if I want to know whether (say) a Verisign-issued cert is valid I
go to www.verisign.com and ask.  Sure, you can defeat this with a fair bit of
effort, but doing a live MITM on a random TCP connection from an arbitrary
user just doesn't scale as well as spamming out a zillion phishing emails
and waiting for users to come to my botnet.  The point isn't to create a
perfect defence but to raise the bar sufficiently that attackers can no longer
profitably use it as an attack vector.

In any case for this specific case DNSSEC will be along any minute to save us
all, so we don't have to worry about it any more.

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Five Theses on Security Protocols

2010-08-01 Thread Peter Gutmann
John Levine  writes:

>Geotrust, to pick the one I use, has a warranty of $10K on their cheap certs
>and $150K on their green bar certs.  Scroll down to the bottom of this page
>where it says Protection Plan:
>
>http://www.geotrust.com/resources/repository/legal/
>
>It's not clear to me how much this is worth, since it seems to warrant mostly
>that they won't screw up, e.g., leak your private key, and they'll only pay
>to the party that bought the certificate, not third parties that might have
>relied on it.

A number of CAs provide (very limited) warranty cover, but as you say it's
unclear that this provides any value because it's so locked down that it's
almost impossible to claim on it.  Does anyone know of someone actually
collecting on this?  Could an affected third party sue the cert owner who can
then claim against the CA to recover the loss?  Is there any way that a
relying party can actually make this work, or is the warranty cover more or
less just for show?

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Five Theses on Security Protocols

2010-07-31 Thread Jeffrey I. Schiller
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 07/31/2010 12:32 PM, Perry E. Metzger wrote:
> 1 If you can do an online check for the validity of a key, there is
>   no need for a long-lived signed certificate, since you could
>   simply ask a database in real time whether the holder of the key
>   is authorized to perform some action. The signed certificate is
>   completely superfluous.
>
>   If you can't do an online check, you have no practical form of
>   revocation, so a long-lived signed certificate is unacceptable
>   anyway.

In general I agree with you, in particular when the task at hand is
authenticating individuals (or more to the point, Joe
Sixpack). However the use case of certificates for websites has worked
out pretty well (from a purely practical standpoint). The site owner
has to protect their key, because as you say, revocation is pretty
much non-existent.

> 2 A third party attestation, e.g. any certificate issued by any
>   modern CA, is worth exactly as much as the maximum liability of
>   the third party for mistakes. If the third party has no liability
>   for mistakes, the certification is worth exactly nothing. All
>   commercial CAs disclaim all liability.
>
>   An organization needs to authenticate and authorize its own users;
>   it cannot ask some other organization with no actual liability to
>   perform this function on its behalf. A bank has to know its own
>   customers, the customers have to know their own bank. A company
>   needs to know on its own that someone is allowed to reboot a
>   machine or access a database.

This is one of the issues driving "Federated Authentication." The idea
being that each organization authenticates its own users (however it
deems appropriate) and the federation technology permits this
authentication to be used transitively. I view federation as still in
its infancy, so there are plenty of growing pains ahead of us.

As an aside... a number of years ago I was speaking with the security
folks at a large financial organization which does business with
MIT. Their authentication approach was pretty lame. I asked them if
they could instead accept MIT client certificates. They had a simple
question for me. They asked me if MIT would make good if a transaction
went bad and the "badness" could be attributed to us
mis-authenticating someone. I said "No". They said, well, our
authentication may be lame, but we stand behind it. If someone loses
money as a result, we will make them whole. And there you have it.

> 3 Any security system that demands that users be "educated",
>   i.e. which requires that users make complicated security decisions
>   during the course of routine work, is doomed to fail.
>
>   For example, any system which requires that users actively make
>   sure throughout a transaction that they are giving their
>   credentials to the correct counterparty and not to a thief who
>   could reuse them cannot be relied on.
>
>   A perfect system is one in which no user can perform an action
>   that gives away their own credentials, and in which no user can
>   authorize an action without their participation and knowledge. No
>   system can be perfect, but that is the ideal to be sought after.

Completely agree. One of the appeals of public key credentials, notice
that I didn't say "certificate" here, is that you can prove your
identity without permitting the relying party to turn around and use
your credentials. I call this class of system "non-disclosing" because
you do not disclose sufficient information to permit the relying party
to impersonate you. Passwords are "disclosing"!
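
To illustrate the distinction, here is a toy sketch in Python, using
Ed25519 from the third-party "cryptography" package (the names and the
protocol framing are mine, purely for illustration): the verifier
stores only a public key, and nothing it ever sees is sufficient to
impersonate the user.

    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    user_key = Ed25519PrivateKey.generate()   # stays with the user
    enrolled_pub = user_key.public_key()      # all the verifier keeps

    challenge = os.urandom(32)                # fresh nonce per login
    signature = user_key.sign(challenge)      # proof of possession

    # Raises InvalidSignature on a forgery. The verifier has seen a
    # nonce, a signature, and a public key -- none of which lets it
    # impersonate the user. A password, by contrast, hands the
    # relying party the credential itself.
    enrolled_pub.verify(signature, challenge)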

We do not require drivers of automobiles to be auto mechanics. We
shouldn't require internet users to be security technologists.

> 4 As a partial corollary to 3, but which requires saying on its own:
>   If "false alarms" are routine, all alarms, including real ones,
>   will be ignored. Any security system that produces warnings that
>   need to be routinely ignored during the course of everyday work,
>   and which can then be ignored by simple user action, has trained
>   its users to be victims.
>
>   For example, the failure of a cryptographic authentication check
>   should be rare, and should nearly always actually mean that
>   something bad has happened, like an attempt to compromise
>   security, and should never, ever, ever result in a user being told
>   "oh, ignore that warning", and should not even provide a simple UI
>   that permits the warning to be ignored should someone advise the
>   user to do so.
>
>   If a system produces too many false alarms to permit routine work
>   to happen without an "ignore warning" button, the system is
>   worthless anyway.

I learned about this from a story when I was a kid. I believe it was
called "The Boy who Cried Wolf."

> 5 Also related to 3, but important in its own right: to quote Ian
>   Grigg:
>
> *** There should be one mode, and it should be secure. ***
>
>   There must not be a confusing combination of secure and insecure
>   modes, requiring the user to actively pay attention to whether the
>   system is secure, and to make constant active configuration choices
>   to enforce security. There should be only one, secure mode.

Re: Five Theses on Security Protocols

2010-07-31 Thread Nicolas Williams
On Sat, Jul 31, 2010 at 12:32:39PM -0400, Perry E. Metzger wrote:
> 5 Also related to 3, but important in its own right: to quote Ian
>   Grigg:
> 
> *** There should be one mode, and it should be secure. ***

6. Enrolment must be simple.

I didn't see anything about transitive trust.  My rule regarding that:

7. Transitive trust, if used at all, should be used to bootstrap
   non-transitive trust (see "enrolment must be simple") or should be
   limited to scales where transitive trust is likely to work (e.g.,
   corporate scale).

Nico
-- 

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Five Theses on Security Protocols

2010-07-31 Thread Anne & Lynn Wheeler

On 07/31/2010 01:30 PM, Guus Sliepen wrote:

> But, if you query an online database, how do you authenticate its answer? If
> you use a key for that or SSL certificate, I see a chicken-and-egg problem.


Part of what is now referred to as "electronic commerce" is a payment gateway that 
sits between the internet and the payment networks. This small client/server startup that 
wanted to do payment transactions (and had invented this technology called SSL) wanted to 
also use SSL for internet communication between the merchant servers and the payment gateway 
(as well as between browsers and merchant servers). One of the things that I mandated for the 
merchant servers & payment gateway was mutual authentication (which wasn't part of the 
implementation up until then). By the time all the required registration and configuration 
operations were done for both the merchant servers and the payment gateway ... it was 
apparent that the SSL digital certificates were redundant and superfluous ... purely an 
artificial side-effect of the software library being used.

The existing SSL digital certificate infrastructure has a chicken-and-egg problem as to 
the public key trusted repository for the authorized Certification Authorities ... 
aka it requires a trusted repository of Certification Authority public keys in 
order to validate acceptable SSL digital certificates (as mentioned elsewhere, 
the infrastructure is vulnerable since all entries in the trusted repository 
are treated as equivalent; i.e., it is only as strong as its weakest Certification 
Authority ... aka the weakest-link-in-the-security-chain scenario).

If the relying party has its own public key trusted repository and/or has 
trusted communication to a public key trusted repository then it can use public 
keys from the trusted repository. In fact, the whole PKI infrastructure 
collapses w/o relying parties having a public key trusted repository (for at 
least the public keys of trusted Certification Authorities).

In that sense, PKI is just a restricted, special case of relying party public key trusted 
repository ... where the (special case Certification Authority) trusted public keys, in 
addition to providing "direct" trust, are then used to establish indirect trust 
for public keys belonging to complete strangers in first time (no-value) communication.

For at least the first decade or so, the major world-wide use of SSL for 
electronic commerce ... was quite skewed ... with the top 100 or so merchant 
servers accounting for the majority of all electronic commerce transactions. 
Collecting and distributing those (few) public keys (in a manner similar to the 
way that Certification Authority public keys are collected and distributed) 
would satisfy the majority of all trusted electronic commerce. Then volume 
starts to drop off quite quickly ... so there are possibly a million or more 
websites that have electronic commerce activity that could possibly justify 
spending $10 for the highest-possible-integrity SSL digital certificate.

The SSL Certification Authority operations started out having a severe catch-22. A major 
objective for SSL was to provide countermeasures to various vulnerabilities in the domain 
name infrastructure and things like ip-address take-over (MITM-attacks, etc.; is the 
webserver that I think I'm talking to really the webserver that I'm talking to?). 
Certification Authorities can typically require a lot of information from an applicant, 
and then they do an error-prone, time-consuming, and expensive identification process 
attempting to match the supplied information against the on-file information and the 
domain name infrastructure as to the true owner of the domain. There have been "domain 
name take-over" attacks against the domain name infrastructure ... the attacker could 
then use a front company to apply for an SSL certificate (certificate authority shopping 
... analogous to some of the things in the news associated with the financial mess and 
regulator shopping). Any issued certificate will be taken as equivalent to the highest 
quality and most expensive certificate from any other Certification Authority.

So part of some Certification Authority-backed integrity improvements to the 
domain name infrastructure ... is to have domain name owners register a public 
key with the domain name infrastructure ... and then all future communication 
is digitally signed (and validated with the certificateless, on-file public key) 
... as a countermeasure to various things like domain name hijacking (eliminating 
some of the exploits where the wrong people can get valid SSL certificates).

It turns out, then, that the Certification Authority business could require that SSL 
digital certificate applications also be digitally signed. The Certification 
Authority then could do a real-time retrieval of the on-file public key to 
validate the digital signature (replacing the time-consuming, error-prone, and 
expensive identification matching process with an efficient, reliable, 
inexpensive authentication process).

Re: Five Theses on Security Protocols

2010-07-31 Thread Perry E. Metzger
On Sat, 31 Jul 2010 19:30:06 +0200 Guus Sliepen wrote:
> On Sat, Jul 31, 2010 at 12:32:39PM -0400, Perry E. Metzger wrote:
> 
> > 1 If you can do an online check for the validity of a key, there
> > is no need for a long-lived signed certificate, since you could
> > simply ask a database in real time whether the holder of the key
> > is authorized to perform some action. The signed certificate is
> > completely superfluous.
> > 
> >   If you can't do an online check, you have no practical form of
> >   revocation, so a long-lived signed certificate is unacceptable
> >   anyway.
> 
> But, if you query an online database, how do you authenticate its
> answer?

With a public key you have in a configuration file, or a pairwise
shared secret key stored in a database.

A key sitting in a configuration file is not the same thing as a
certificate signed by a CA and trusted for that reason. Instead, it is
trusted for the same reason that, say, the /etc/passwd file on a Unix
box is trusted -- because if someone could break in and alter the
file, they could do anything else they wanted anyway.

You do not need a signed certificate.
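
Here's a minimal sketch of what I mean, in Python with Ed25519 from
the third-party "cryptography" package (names are illustrative, not a
worked design): the database's public key sits in a local
configuration file, and every answer the database returns is checked
against that pinned key.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    db_key = Ed25519PrivateKey.generate()

    # The client's configuration file, trusted the same way
    # /etc/passwd is trusted:
    config = {"authz_db_pubkey": db_key.public_key()}

    def db_answer(query):
        # Served by the online database: a signed, real-time answer.
        answer = b"AUTHORIZED:" + query
        return answer, db_key.sign(answer)

    answer, sig = db_answer(b"key-42:reboot-machine-7")

    # A tampered or spoofed answer fails this check -- no certificate,
    # no CA, no chicken-and-egg. Raises on forgery:
    config["authz_db_pubkey"].verify(sig, answer)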

> If you use a key for that or SSL certificate, I see a
> chicken-and-egg problem.

I don't see why you need a certificate for any purpose whatsoever.

A key, on the other hand, is a very different thing. There's nothing
wrong with keys.

Perry
-- 
Perry E. Metzger pe...@piermont.com

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Five Theses on Security Protocols

2010-07-31 Thread Guus Sliepen
On Sat, Jul 31, 2010 at 12:32:39PM -0400, Perry E. Metzger wrote:

> 1 If you can do an online check for the validity of a key, there is no
>   need for a long-lived signed certificate, since you could simply ask
>   a database in real time whether the holder of the key is authorized
>   to perform some action. The signed certificate is completely
>   superfluous.
> 
>   If you can't do an online check, you have no practical form of
>   revocation, so a long-lived signed certificate is unacceptable
>   anyway.

But, if you query an online database, how do you authenticate its answer? If
you use a key for that or SSL certificate, I see a chicken-and-egg problem.

-- 
Met vriendelijke groet / with kind regards,
  Guus Sliepen 




Re: Five Theses on Security Protocols

2010-07-31 Thread Chris Palmer
Usability engineering requires empathy. Isn't it interesting that nerds
built themselves a system, SSH, that mostly adheres to Perry's theses? We
nerds have empathy for ourselves. But when it comes to a system for other
people, we suddenly lose all empathy and design a system that ignores
Perry's theses.

(In an alternative scenario, given the history of X.509, we can imagine that
PKI's woes are due not to nerd un-empathy, but to
government/military/hierarchy-lover un-empathy. Even in that scenario, nerd
cooperation is necessary.)

The irony is, normal people and nerds need systems with the same properties,
for the same reasons.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Five Theses on Security Protocols

2010-07-31 Thread Peter Gutmann
"Perry E. Metzger"  writes:

>Inspired by recent discussion, these are my theses, which I hereby nail upon
>the virtual church door:

Are we allowed to play peanut gallery for this?

>1 If you can do an online check for the validity of a key, there is no
>  need for a long-lived signed certificate, since you could simply ask
>  a database in real time whether the holder of the key is authorized
>  to perform some action.

Based on the ongoing discussion I've now had, both on-list and off, about
blacklist-based key validity checking [0], I would like to propose an
addition:

  The checking should follow the credit-card authorised/declined model, and
  not be based on blacklists (a.k.a. "the second dumbest idea in computer
  security", see
  http://www.ranum.com/security/computer_security/editorials/dumb/).

(Oh yes, for a laugh, have a look at the X.509 approach to doing this.  It's
eighty-seven pages long, and that's not including the large number of other
RFCs that it includes by reference: http://tools.ietf.org/html/rfc5055).
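
To spell out the difference, a toy sketch in Python (hypothetical
names): under a blacklist, a key is "good" merely because nobody has
listed it yet, so silence, staleness, and never-reported compromises
all look like approval; under authorised/declined, the authority has
to positively answer for the key right now.

    revoked = {"key-123"}              # CRL-style blacklist

    def blacklist_ok(key_id):
        # Absence of evidence is treated as approval.
        return key_id not in revoked

    authorised = {"key-456": "sign-for:example.com"}   # live database

    def online_check(key_id, action):
        # Declined unless affirmatively authorised right now.
        return authorised.get(key_id) == action

    print(blacklist_ok("key-999"))                          # True, by default
    print(online_check("key-999", "sign-for:example.com"))  # False, by default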

> The signed certificate is completely superfluous.

This is, I suspect, the reason for the vehement opposition to any kind of
credit-card style validity checking of keys, if you were to introduce it, it
would make both certificates and the entities that issue them superfluous.

Peter.

[0] It's kinda scary that it's taking this much debate to try and convince
people that blacklists are not a valid means of dealing with arbitrarily
delegatable capabilities.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Five Theses on Security Protocols

2010-07-31 Thread John Levine
Nice theses.  I'm looking forward to the other 94.  The first one is a
nice summary of why DKIM might succeed in e-mail security where S/MIME
failed.  (Succeed as in, people actually use it.)

>2 A third party attestation, e.g. any certificate issued by any modern
>  CA, is worth exactly as much as the maximum liability of the third
>  party for mistakes. If the third party has no liability for
>  mistakes, the certification is worth exactly nothing. All commercial
>  CAs disclaim all liability.

Geotrust, to pick the one I use, has a warranty of $10K on their cheap
certs and $150K on their green bar certs.  Scroll down to the bottom
of this page where it says Protection Plan:

http://www.geotrust.com/resources/repository/legal/

It's not clear to me how much this is worth, since it seems to warrant
mostly that they won't screw up, e.g., leak your private key, and
they'll only pay to the party that bought the certificate, not third
parties that might have relied on it.

R's,
John

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Five Theses on Security Protocols

2010-07-31 Thread Anne & Lynn Wheeler

corollary to "security proportional to risk" is "parameterized risk management" 
... where variety of technologies with varying integrity levels can co-exist within the same 
infrastructure/framework. transactions exceeding particularly technology risk/integrity threshold 
may still be approved given various compensating processes are invoked (allows for multi-decade 
infrastructure operation w/o traumatic dislocation moving from technology to technology as well as 
multi-technology co-existence).

in the past I had brought this up to the people defining the v3 extensions ... 
early in their process ... and they offered to let me do the work of defining a v3 
integrity level field. My response was: why bother with stale, static 
information when real valued operations would use a much more capable dynamic, 
realtime, online process?

--
virtualization experience starting Jan1968, online at home since Mar1970

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Five Theses on Security Protocols

2010-07-31 Thread Perry E. Metzger
Inspired by recent discussion, these are my theses, which I hereby
nail upon the virtual church door:

1 If you can do an online check for the validity of a key, there is no
  need for a long-lived signed certificate, since you could simply ask
  a database in real time whether the holder of the key is authorized
  to perform some action. The signed certificate is completely
  superfluous.

  If you can't do an online check, you have no practical form of
  revocation, so a long-lived signed certificate is unacceptable
  anyway.

2 A third party attestation, e.g. any certificate issued by any modern
  CA, is worth exactly as much as the maximum liability of the third
  party for mistakes. If the third party has no liability for
  mistakes, the certification is worth exactly nothing. All commercial
  CAs disclaim all liability.

  An organization needs to authenticate and authorize its own users;
  it cannot ask some other organization with no actual liability to
  perform this function on its behalf. A bank has to know its own
  customers, the customers have to know their own bank. A company
  needs to know on its own that someone is allowed to reboot a machine
  or access a database.

3 Any security system that demands that users be "educated",
  i.e. which requires that users make complicated security decisions
  during the course of routine work, is doomed to fail.

  For example, any system which requires that users actively make sure
  throughout a transaction that they are giving their credentials to
  the correct counterparty and not to a thief who could reuse them
  cannot be relied on.

  A perfect system is one in which no user can perform an action that
  gives away their own credentials, and in which no user can
  authorize an action without their participation and knowledge. No
  system can be perfect, but that is the ideal to be sought after.

4 As a partial corollary to 3, but which requires saying on its own:
  If "false alarms" are routine, all alarms, including real ones, will
  be ignored. Any security system that produces warnings that need to
  be routinely ignored during the course of everyday work, and which
  can then be ignored by simple user action, has trained its users to be
  victims.

  For example, the failure of a cryptographic authentication check
  should be rare, and should nearly always actually mean that
  something bad has happened, like an attempt to compromise security,
  and should never, ever, ever result in a user being told "oh, ignore
  that warning", and should not even provide a simple UI that permits
  the warning to be ignored should someone advise the user to do so.

  If a system produces too many false alarms to permit routine work to
  happen without an "ignore warning" button, the system is worthless
  anyway.

5 Also related to 3, but important in its own right: to quote Ian
  Grigg:

*** There should be one mode, and it should be secure. ***

  There must not be a confusing combination of secure and insecure
  modes, requiring the user to actively pay attention to whether the
  system is secure, and to make constant active configuration choices
  to enforce security. There should be only one, secure mode.

  The more knobs a system has, the less secure it is. It is trivial to
  design a system sufficiently complicated that even experts, let
  alone naive users, cannot figure out what the configuration
  means. The best systems should have virtually no knobs at all.

  In the real world, bugs will be discovered in protocols, hash
  functions and crypto algorithms will be broken, etc., and it will be
  necessary to design protocols so that, subject to avoiding downgrade
  attacks, newer and more secure modes can and will be used as they
  are deployed to fix such problems. Even then, however, the user
  should not have to make a decision to use the newer more secure mode,
  it should simply happen.


Perry
-- 
Perry E. Metzger pe...@piermont.com

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com