I've left your entire email here, because it needs to
be re-read several times.  Understanding it is key to
developing protocols for security.

Ed Gerck wrote:
> Arguments such as "we don't want to reduce the fraud level because
> it would cost more to reduce the fraud than the fraud costs" are just a
> marketing way to say that a fraud has become a sale. Because fraud
> is a hemorrhage that adds up, while efforts to fix it -- if done correctly
> -- are mostly an up front cost that is incurred only once.  So, to accept
> fraud debits is to accept that there is also a credit that continuously
> compensates the debit. Which credit ultimately flows from the customer
> -- just like in car theft.

What you are talking about there is a misalignment
of interests.  That is, the car manufacturer has no
incentive to reduce the theft (by better locks, for
e.g.) if each theft results in a replacement sale.

Conventionally, this is dealt with by another interested
party, the insurer.  He arranges for the owner to have
more incentive to look after her car.  He also publishes
ratings and costs for different cars.  Eventually, the
car maker works out that there is a demand for a car
that doesn't incur so many follow-on costs for the owner.

This is what we call a "free market" solution to a
problem.  The alternative would be some form of
intervention into the marketplace, by some well-
meaning "authority."

The problem with the intervention is that it generally
fails to arise from, and align with, the underlying
problem.  That is, the authority is no authority at
all, and puts in place some crock according to its
own interests.

E.g., ordering all car manufacturers to fit NIST
standard locks (as lobbied for by NIST-standard
lock makers).  Or giving every car owner a free
steering lock.

And, that's more or less what we have with HTTPS.  A
security decision by the authority - the early designers
- that rides on a specious logical chain with no bearing
on the marketplace, the result being a double block
against deployment.

(It's interesting to study these twin lock-ins, where
two parties are each dependent on the other for their
mutual protocol.  For those interested, the longest
running commercial double cartel is about to come
crashing down:  DeBeers is now threatened by the
advent of gem quality stones for throwaway prices,
its grip on the mines and retailers won't last out
the decade.  Understanding how DeBeers created its
twin interlocking cartels is perhaps the best single
path to understanding how cartels work.)

> Some 10 years ago I was officially discussing a national
> security system to help prevent car theft. A lawyer representing
> a large car manufacturer told me that "a car stolen is a car sold"
> -- and that's why they did not have much incentive to reduce
> car theft. Having the car stolen was an "acceptable risk" for
> the consumer and a sure revenue for the manufacturer. In fact, a
> car stolen will need replacement that will be provided by insurance
> or by the customer working again to buy another car.  While the
> stolen car continues to generate revenue for the manufacturer in
> service and parts.
> The "acceptable risk" concept is an euphemism for that business
> model that shifts the burden of fraud to the customer, and eventually
> penalizes us all with its costs.
> Today, IT security hears the same argument over and over again.
> For example, the dirty little secret of the credit card industry is that
> they are very happy with +10% of credit card fraud over the Internet.
> In fact, if they would reduce fraud to zero today, their revenue
> would decrease as well as their profits.

Correct!  You've revealed it.  IMHO, not understanding
that fact has been the root cause of more crypto biz
failures than almost any other issue.  My seat-of-the-
pants view is that over a billion was lost in the late
eighties on payments ventures alone (I worked for a
project that lost about 250 million before it gave up
and let itself be swallowed up...).

In reality, the finance industry cares little about
reducing fraud.  This is easy to show, as you've done.

> There is really no incentive to reduce fraud. On the contrary, keeping
> the status quo is just fine.
> This is so mostly because of a slanted use of insurance. Up to a certain
> level,  which is well within the operational boundaries, a fraudulent
> transaction does not go unpaid through VISA,  American Express or
> Mastercard servers.  The transaction is fully paid, with its insurance cost
> paid by the merchant and, ultimately, by the customer.
> Thus, the credit card industry has successfully turned fraud into
> a sale.  This is the same attitude reported to me by that car manufacturer
> representative who said: "A car stolen is a car sold."
> The important lesson here is that whenever we see continued fraud, we must
> be certain: the defrauded is profiting from it.  Because no company will accept
> a continued loss without doing anything to reduce it.

It's perverse, because as you say, the so-called
defrauded is profiting from this situation.  But,
nobody is looking at the real source of frauds,
comfortable in the knowledge that crypto has done
its job, and its impenetrability is a wonder to
behold.  Nobody can doubt the system is secure,
because the crypto is secure...

> What is to blame? Not only the shortsighted ethics behind this attitude but also
> that security "school of thought" which is based on risk, surveillance and
> insurance as "security tools". There is no consideration of what trust is or
> means, no consideration whether it is ethically justifiable.  "A fraud is a sale" is
> the only outcome possible from using such methods.
> The solution is to consider the concept of trust(*) and provide means to
> induce trust among the dialogue parties, so that the protocol can be
> not only correct but also effective.  The problem I see with a protocol
> such as 3D Secure (for example) is that it does not allow trust to be
> represented -- even though it allows authorization to be represented (**).

I couldn't agree more.  The essence of the secure
protocol is to build up trust.  For that, trust
has to be represented.  And it has to be encouraged
to flow.  As Lynn Wheeler has pointed out, trust is
not a huge cert with lots of data, it's a small thing
that is so insignificant, it can travel fast and light,
and it can be counted and catalogued.

When you walk through the jungle of trust, you watch
how the birds fly, not how the bamboo hides the tiger.

(I like Lynn Wheeler's posts.  Generally there's no
response possible, because he concentrates on the
point, and nails it.)

> (*) BTW, I often see comments that it is difficult to use the concept of trust.
> Indeed, and unless the concept of trust in communication systems is well-
> defined, it really does not make sense to apply it. The definition that I use
> is that  "trust is that which is essential to a communication  channel but
> cannot be transferred through that same channel." This definition allows one
> to use Shannon's communication theory formalism and define trust without any
> reference to emotions, feelings or other hard to define concepts.
> (**) Trust  is often used as a synonym for authorization (see InterTrust usage,
> for example). This may work where a trusted user is a user authorized by
> management  to use some resources. But it does not work across trust
> boundaries. Trust is more than authorization.

Pretty much.  "Trust" in the certificate world means that
a CA has authorised a web server to conduct crypto stuff.

That ain't trust, not as most of us know it.  And, any
trust, as we do know it, is actually removed from the system
by the overbearing nature of x.509 certs, in cahoots with
the rote browser acceptance of the CA's authorisation.

Trust could be added - in the browsers.  It would start
by ignoring or downgrading the presence of a signer, and
concentrating on the persistence.  For most browsing
purposes, a recognised face is probably the best trust
signal available.

There have been many thoughts as to where to go beyond
simple SSH-style persistent self-signed certificate caching.
Can we mix in PGP web-of-trust?  Is it possible to infect
URLs with hashes?  etc etc.
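The "hashes in URLs" idea can be sketched quickly.  A
hypothetical scheme (the function names and the fragment
syntax below are my invention, not any standard): the
link itself carries a sha256 fingerprint of the
certificate the client should expect, and the client
warns or refuses if the served certificate doesn't match.

```python
# Hypothetical sketch of "infecting URLs with hashes":
# the URL fragment pins a sha256 fingerprint of the
# expected certificate.  Not any real standard; just an
# illustration of the idea.

import hashlib
from urllib.parse import urlparse

PREFIX = "cert=sha256:"

def make_pinned_url(base: str, cert_der: bytes) -> str:
    """Append the certificate's fingerprint to the URL."""
    return base + "#" + PREFIX + hashlib.sha256(cert_der).hexdigest()

def verify_pin(url: str, cert_der: bytes) -> bool:
    """Check the served certificate against the URL's pin."""
    frag = urlparse(url).fragment
    if not frag.startswith(PREFIX):
        return False          # no pin present: nothing to trust
    expected = frag[len(PREFIX):]
    return hashlib.sha256(cert_der).hexdigest() == expected

url = make_pinned_url("https://example.org/pay", b"cert-v1")
verify_pin(url, b"cert-v1")   # True: certificate matches the pin
verify_pin(url, b"evil")      # False: substituted certificate caught
```

The point of the sketch is that the trust decision rides
on the channel that delivered the link, not on any CA.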

But, I think the 1st step has to be to encourage the
designer to look at even the basic unit of trust, which
is establishing trust over a single certificate.  Until
the browsers start to cache and analyse, it's futile to
think of more sophisticated shared forms of trust, as
found in other trust-based protocols like PGP's WoT.
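That first step - SSH-style trust-on-first-use caching -
can be sketched as follows.  The CertCache class is my
own hypothetical name; a real browser would hang this
off its connection logic, keying on hostname and raising
a warning only when a remembered fingerprint changes.

```python
# Minimal sketch of SSH-style "trust on first use"
# certificate caching, as suggested for browsers above.
# CertCache is a hypothetical illustration, not any
# browser's actual API.

import hashlib

class CertCache:
    """Remember the first certificate seen per host;
    flag any later change."""

    def __init__(self):
        self._seen = {}  # hostname -> hex fingerprint

    @staticmethod
    def fingerprint(cert_der: bytes) -> str:
        return hashlib.sha256(cert_der).hexdigest()

    def check(self, hostname: str, cert_der: bytes) -> str:
        fp = self.fingerprint(cert_der)
        known = self._seen.get(hostname)
        if known is None:
            self._seen[hostname] = fp  # first use: remember the face
            return "new"
        if known == fp:
            return "recognised"        # persistence accumulates trust
        return "CHANGED"               # the warning that matters

cache = CertCache()
cache.check("example.org", b"cert-bytes-v1")  # "new"
cache.check("example.org", b"cert-bytes-v1")  # "recognised"
cache.check("example.org", b"cert-bytes-v2")  # "CHANGED"
```

Note the signer is never consulted: the only event worth
alarming the user about is a change in a known face.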


The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
