Ian Grigg <[EMAIL PROTECTED]> writes:
> "Perry E. Metzger" wrote:
> ...
> >    Dumb cryptography kills people.
> What's your threat model?  Or, that's your threat
> model?
> Applying the above threat model as written up in
> "The Codebreakers" to, for example, SSL and its
> original credit card needs would seem to be a
> mismatch.

People's software is rarely used in just one place. These days, one
might very well wake up to discover that one's operating system or
cryptographic utility is being used to protect ATMs or power
generation equipment or worse. People die when power systems fail.

Furthermore, the little open source utility that you think is never
going to be used for anything life-critical may (with or without your
knowledge) end up being used by someone at an NGO who will be killed
when the local government's thugs break it.

> On the face of it, that is.  Correct me if I'm
> wrong, but I don't recall anyone ever mentioning
> that anyone was ever killed over a sniffed credit
> card.

SSL is not only used to protect people's credit cards.

It is one thing if, as a customer, with eyes wide open, you make a
decision to use something iffy.

However, as a producer, it is a bad idea to assume you know what
people will do with your tools, because you don't. People end up
using tools in surprising ways. You cannot control them.

Furthermore, it is utterly senseless to build something to use bad
cryptography when good cryptography is free and easy to use. You claim
there is some "Cryptography Guild" out there, but unlike every other
"Guild" in history, all our work is available for the taking by anyone
who wants it, without the slightest remuneration to said fictitious
guild.

> > > Well, the opposition to "the guild" is one of pro-market
> > > people who get out there and build applications.
> > 
> > I don't see any truth to that. You can build applications just as
> > easily using things like TLS -- and perhaps even more easily. The
> > "alternatives" aren't any simpler or easier, and are almost always
> > dangerous.
> OK, that's a statement.  What is clear is that,
> regardless of the truth of that statement,
> developers time and time again look at the crypto
> that is there and conclude that it is "too much."

For decades, I've seen programmers claim they didn't have time to test
their code or document it, either. Should I believe them, or should I
keep kicking?
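For what it's worth, "the crypto that is there" really is only a few
lines these days. A minimal sketch, using Python's standard-library
ssl module purely as an illustration (the host name is hypothetical):

```python
import socket
import ssl

def tls_connect(host, port=443):
    """Open a certificate-verified TLS connection to host."""
    context = ssl.create_default_context()  # verification on by default
    sock = socket.create_connection((host, port))
    return context.wrap_socket(sock, server_hostname=host)

# The safe defaults come for free -- no extra configuration needed:
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # certificate checking on
print(context.check_hostname)                    # hostname checking on
```

The point is not this particular library, but that a verified TLS
channel is a handful of lines, not a research project.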

> > People just finally realize what is needed in
> > order to make critical -- and I do mean critical -- pieces of
> > infrastructure safe enough for use.
> I find this mysterious.  When I send encrypted email
> to my girlfriend with saucy chat in there, is that
> what you mean by "critical" ?

Someone else who is not skilled in the art will then use that same
piece of software to send information to someone at Amnesty
International, and might very well end up dead if the software doesn't
work right.

Just because YOU do not use a piece of software in a life-critical way
does not mean someone else out there will not.

> Or,
> if I implement a VPN between my customers and suppliers,
> do you mean that this is "critical" ?

And someone else will use that VPN software to connect in to the
management interface for sections of the electrical grid, or a
commuter train system, or other things that can easily cause people to
die when they fail.

You do not know who will use your software.

> For those applications that *are* critical, surely the
> people best placed to understand and deal with that
> criticality are the people who run the application
> themselves?

I've been a security consultant for years. There are very few
organizations -- even ones with critical security needs -- that
actually understand security well.

Perry E. Metzger                [EMAIL PROTECTED]

The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]