Re: Reliance on Microsoft called risk to U.S. security

2003-10-01 Thread Peter Gutmann
Bill Frantz [EMAIL PROTECTED] writes:

The real problem is that the viewer software, whether it is an editor, PDF
viewer, or a computer language interpreter, runs with ALL the user's
privileges.  If we ran these programs with a minimum of privilege, most of
the problems would just go away.

This doesn't really work.  Consider the simple case where you run Outlook with
'nobody' privs rather than the current user's privs.  You need to be able to
send and receive mail, so a worm that mails itself to others won't be slowed
down much.  In addition, everyone's sending you HTML-formatted mail, so you
need access to (in effect) MSIE via the various HTML controls.  Further, you
need Word and Excel and Powerpoint for all the attachments that people send
you.  They need access to various subsystems like ODBC and who knows what else
as an extension of the above.  As you follow these dependencies further and
further out, you eventually end up running what's more or less an MLS system
where you do normal work at one privilege level, read mail at another, and
browse the web at a third.  This was tried in the 1970s and 1980s and it
didn't work very well even if you were prepared to accept a (sizeable) loss of
functionality in exchange for having an MLS OS, and would be totally
unacceptable for someone today who expects to be able to click on anything in
sight and have it automatically processed by whatever app is assigned to it.

Even if you could somehow enforce the MLS-style restrictions and convince
people to run an OS with this level of security enabled, the outcome when this
was tried with MLS OSes was that users would do everything possible to bypass
it because it was seen as an impediment to getting any work done: SIGMA
eventually allowed users to violate the *-property to spare them having to
retype messages at lower security levels (i.e. it recognised that they were
going to violate security anyway, so it made it somewhat less awkward to do),
Multics and GEMSOS allowed users to be logged in at multiple security levels
to get work done (now add the 1,001 ways that Windows can move data from A to
B to see how much harder this is to control than on a 1970s system where the
only data-transfer mechanism was copy a file), KSOS used non-kernel
security-related functions (kludges) to allow users to violate security
properties and get their work done, etc etc.

One thing that I noticed in the responses to CyberInsecurity: The Cost of
Monopoly was that of the people who criticised it as recommending the wrong
solution, no two could agree on any alternative remedy.  This indicates just
how hard a problem this really is...

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Monoculture

2003-10-01 Thread Don Davis
EKR writes:
 I'm trying to figure out why you want to invent a new authentication
 protocol rather than just going back to the literature ...

there's another rationale my clients often give for
wanting a new security system, instead of the off-
the-shelf standbys:  IPSec, SSL, Kerberos, and the
XML security specs are seen as too heavyweight for
some applications.  the developer doesn't want to
shoehorn these systems' bulk and extra flexibility
into their applications, because most applications
don't need most of the flexibility offered by these
systems.

some shops experiment with the idea of using only
part of OpenSSL, but stripping unused stuff out of
each new release of OpenSSL is a maintenance hassle.

note that customers aren't usually dissatisfied with
the crypto protocols per se;  they just want the
protocol's implementation to meet their needs exactly,
without extra baggage of flexibility, configuration
complexity, and bulk.  they want their crypto clothing
to fit well, but what's available off-the-rack is
a choice between frumpy one-size-fits-all, and a
difficult sew-your-own kit, complete with pattern,
fabric, and sewing machine.  so, they often opt for
tailor-made crypto clothing.

my clients' concern (to keep their crypto code as
small and as simple as possible) doesn't justify
their inventing and deploying broken protocols, but
their concern does point out that neither the crypto
industry nor the crypto literature has fully met
these customers' crypto needs.

- don davis, boston


Re: Monoculture

2003-10-01 Thread Eric Rescorla
Don Davis [EMAIL PROTECTED] writes:

 EKR writes:
  I'm trying to figure out why you want to invent a new authentication
  protocol rather than just going back to the literature ...
 
 there's another rationale my clients often give for
 wanting a new security system, instead of the off-
 the-shelf standbys:  IPSec, SSL, Kerberos, and the
 XML security specs are seen as too heavyweight for
 some applications.  the developer doesn't want to
 shoehorn these systems' bulk and extra flexibility
 into their applications, because most applications
 don't need most of the flexibility offered by these
 systems.

I hear this a lot, but I think that Perry nailed it earlier. SSL, for
instance, is about as simple as we know how to make a protocol that
does what it does. The two things that are generally cited as being
sources of complexity are:

(1) Negotiation.
(2) Certificates.

Negotiation doesn't really add that much protocol complexity,
and certificates are kind of the price of admission if you want
third party authentication.


 some shops experiment with the idea of using only
 part of OpenSSL, but stripping unused stuff out of
 each new release of OpenSSL is a maintenance hassle.
But here you're talking about something different, which is
OpenSSL.  Most of the OpenSSL complexity isn't actually in
SSL.

The way I see it, there are basically four options:
(1) Use OpenSSL (or whatever) as-is.
(2) Strip down your toolkit but keep using SSL.
(3) Write your own toolkit that implements a stripped down subset
of SSL (e.g. self-signed certs or anonymous DH).
(4) Design your own protocol and then implement it.

Since SSL without certificates is about as simple as a stream
security protocol can be, I don't see that (4) holds much of
an advantage over (3).
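To make option (3) concrete, here is a hedged sketch (Python for concreteness; the names `fingerprint_ok` and `KNOWN_FP` are illustrative, not from any toolkit): with self-signed certs, the client can skip CA-chain validation entirely and instead pin the server's certificate by a SHA-256 fingerprint exchanged out of band.

```python
import hashlib

def fingerprint_ok(cert_der: bytes, expected_sha256_hex: str) -> bool:
    """Compare a peer certificate (raw DER bytes) against a pinned
    SHA-256 fingerprint distributed out of band -- the whole of the
    'PKI' in a stripped-down, self-signed-certs-only deployment."""
    return hashlib.sha256(cert_der).hexdigest() == expected_sha256_hex.lower()

# After the TLS handshake (with X.509 chain checking turned off),
# the client would fetch the raw peer cert and pin-check it, e.g.:
#   der = tls_sock.getpeercert(binary_form=True)
#   if not fingerprint_ok(der, KNOWN_FP):
#       raise ConnectionError("server key changed, or MITM")
```

The point of the sketch is how little code is left once certificates stop carrying third-party authority: the SSL record and handshake machinery stays, and trust management collapses to one hash comparison.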

-Ekr

-- 
Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



RE: Monoculture

2003-10-01 Thread Jill Ramonsky
I could do an implementation of SSL. Speaking as a programmer with an 
interest in crypto, I'm fairly sure I could produce a cleanly 
implemented and simple-to-use version.

I confess I didn't realise there was a need. You see, it's not that it 
doesn't seem to excite [me] - it's just that, well, OpenSSL already 
exists, and creating another tool (or library or whatever) to do exactly 
the same thing seems a bit of a waste of time, like re-inventing the 
wheel. If you can provide some reasonable reassurance that it's not a
waste of time, I'll make a start.

But I would like to ask you to clarify something about SSL which has 
been bugging me. Allow me to present a scenario. Suppose:
(1) Alice runs a web server.
(2) Bob has a web client.
(3) Alice and Bob know each other personally, and see each other every day.
(4) Eve is the bad guy. She runs a Certificate Authority, which is 
trusted by Bob's browser, but not by Bob.
Is it possible for Bob to instruct his browser to (a) refuse to trust 
anything signed by Eve, and (b) to trust Alice's certificate (which she 
handed to him personally)? (And if so, how?)

I am very much hoping that you can answer both (a) and (b) with a yes, 
in which case I will /definitely/ get on with recoding SSL.
Jill





 -Original Message-
 From: Perry E. Metzger [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, October 01, 2003 3:36 PM
 To: [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Subject: Re: Monoculture

 We could use more implementations of ssl and of ssh, no
 question.

 However, suggesting to people that they produce more cleanly
 implemented and simpler to use versions of existing algorithms and
 protocols doesn't seem to excite people, although it would be of
 tremendous utility.

 Perry



Re: Monoculture

2003-10-01 Thread Bill Sommerfeld
 Who on this list just wrote a report on the dangers of Monoculture?

An implementation monoculture is more dangerous than a protocol
monoculture.

Most exploitable security problems arise from implementation errors,
rather than from inherent flaws in the protocol being implemented.

And broad diversity in protocols has a downside from another general
systems security principle: minimization.

The more protocols you need to implement to talk to other systems, the
less time you have to make sure the ones you implement are implemented
well, and the more likely you are to pick up one which has a latent
implementation flaw.

- Bill



Re: Monoculture

2003-10-01 Thread John Saylor
hi

( 03.09.30 20:39 -0700 ) [EMAIL PROTECTED]:
 And, given the recent set of widely publicized flaws in openssl and
 openssh, I think that concern about monoculture in cryptography
 software is pretty damn well founded.

except for the fact that these holes get fixed as opposed to the other
flaws in the true monoculture computing environment [m$ windows] that
get denied, then fixed [at a later date, and with no external review of
the fix code possible].

the monoculture you refer to [ssl/ssh] is brought on by the
effectiveness of this software to allow for some measure of secure
network computing. a lot of people use it because it works.

but you're probably just trolling anyway ...

-- 
\js



Re: Monoculture

2003-10-01 Thread John S. Denker
On 10/01/2003 11:22 AM, Don Davis wrote:

 there's another rationale my clients often give for
 wanting a new security system, instead of the off-
 the-shelf standbys:  IPSec, SSL, Kerberos, and the
 XML security specs are seen as too heavyweight for
 some applications.  the developer doesn't want to
 shoehorn these systems' bulk and extra flexibility
 into their applications, because most applications
 don't need most of the flexibility offered by these
 systems.
Is that a rationale, or an irrationale?

According to 'ps', an all-up ssh system is less
than 3 megabytes (sshd, ssh-agent, and the ssh
client).  At current memory prices, your clients
would save less than $1.50 per system even if
their custom software could reduce this bulk
to zero.
With the cost of writing custom software being
what it is, they would need to sell quite a
large number of systems before de-bulking began
to pay off.  And that's before accounting for
the cost of security risks.
 some shops experiment with the idea of using only
 part of OpenSSL, but stripping unused stuff out of
 each new release of OpenSSL is a maintenance hassle.
1) Well, they could just ignore the new release
and stick with the old version.  Or, if they think
the new features are desirable, then they ought
to compare the cost of re-stripping against the
cost of implementing the new desirable features
in the custom code.
I'm just trying to inject some balance into the
balance sheet.
2) If you do a good job stripping the code, you
could ask the maintainers to put your #ifdefs into
the mainline version.  Then you have no maintenance
hassle at all.
 they want their crypto clothing
 to fit well, but what's available off-the-rack is
 a choice between frumpy
Aha.  They want to make a fashion statement.

That at least is semi-understandable.  People do
expensive and risky things all the time in the name
of fashion.


Re: New authentication protocol, was Re: Tinc's response to Linux's answer to MS-PPTP

2003-10-01 Thread Derek Atkins
Guus Sliepen [EMAIL PROTECTED] writes:

 Compared with the entire TLS protocol it is much simpler, compared with
 just the handshake protocol it is about as simple and probably just as
 efficient, but as I said earlier, I want to get rid of the client/server
 distinction.

You can't get rid of the distinction.  You will always have a client
and a server -- however you may just rename it Initiator and
Responder to make it sound more peer-like, but it's just the same
emperor in different clothes.  The only real distinction between a
_pure_ client-server protocol and a peer-to-peer protocol is that the
latter is generally reversible where the former is not.  By
"reversible" I mean that either party could be the initiator and
either could be the responder.

HOWEVER, during the run of a protocol it behooves you to label the
parties, and client/server is just as valid a naming as
initiator/responder.  IPsec (IKE) is clearly peer/peer.  Even with
TLS the protocol is reversible if you perform the name mappings and
assume both ends have certificates.

So, I urge you to be careful with trying to get rid of a distinction
that really has little meaning in most protocols.

-derek

-- 
   Derek Atkins 617-623-3745
   [EMAIL PROTECTED] www.ihtfp.com
   Computer and Internet Security Consultant



Re: Monoculture

2003-10-01 Thread Ian Grigg
Matt Blaze wrote:

  I imagine the Plumbers & Electricians Union must have used similar
  arguments to enclose the business to themselves, and keep out unlicensed
  newcomers.  No longer acceptable indeed.  Too much competition boys?
 

 Rich,

 Oh come on.  Are you willfully misinterpreting what I wrote, or
 did you honestly believe that that was my intent?


Sadly, there is a shared culture amongst cryptography   
professionals that presses a certain logical, scientific 
viewpoint.

What is written in these posts (not just the present one)
does derive from that viewpoint and although one can   
quibble about the details, it does look very much from
the outside that there is an informal Cryptographers'
Guild in place [1].

I don't think the jury has reached an opinion on why
the cryptography group looks like a guild as yet,
and it may never do so.  A guild, of course, is either
a group of well-meaning skilled people serving the
community, or a cartel for raising prices, depending
on who is doing the answering.

But, even if a surprise to some, I think it is a fact
that the crypto community looks like, and acts as if it were,
a guild.


 I'd encourage the designer of the protocol who asked the original question
 to learn the field.  Unfortunately, he's going about it sub-optimally.
 Instead of hoping to just design a protocol and getting others to throw
 darts at it (or bless it), he might have better luck (and learn far
 more) by looking at the recent literature of protocol design and analysis
 and trying to emulate the analysis and design process of other protocols
 when designing his own.  Then when he throws it over the wall to the rest
 of the world, the question would be not is my protocol any good but
 rather are my arguments convincing and sufficient?


This is where maybe the guild and the outside world part
ways.

The guild would like the application builder to learn the
field.  They would like him to read up on all the literature,
the analyses.  To emulate the successes and avoid the
pitfalls of those protocols that went before them.  The  
guild would like the builder to present his protocol and  
hope it be taken seriously.  The guild would like the
builder of applications to reach acceptable standards.

And, the guild would like the builder to take the guild
seriously, in recognition of the large amounts of time
guildmembers invest in their knowledge.



None of that is likely to happen.  The barrier to entry
into serious cryptographic protocol design is too high
for the average builder of new applications [2].  He has,
after all, an application to build.

What *is* going to happen is this:  builders will continue
to ignore the guild.  They will build their application,
and throw any old shonk crypto in there.  Then, they will
deploy their application, in the marketplace, and they will
prove it, in the marketplace.

The builder will find users, again, in the marketplace.   

At some point along this evolution, certain truths will   
become evident:  the app is successful (or not).  The code
is good enough (or not).  People get benefit (or not).
Companies with value start depending on the app (or not).
Security is adequate (or is not).  Someone comes along and
finds some easy breaches (or not).  That embarrasses (or
not).

And, maybe someone nasty comes along and starts doing
damage (or not).

What may not be clear is that the investment in the security
protocol does not pay for its effort until well down the track.
And, as an unfortunate but inescapable corollary, if the app
never gets to travel the full distance of its evolutionary
path, then any effort spent up front on high-end security
is wasted.

Crypto is high up-front cost, and long term payoff.  In
such a scenario, standard finance theory would say that
if the project is risky, do not add expensive, heavy duty
crypto in up front.

This tradeoff is so strong that when we look about the
security field, we find very few applications that
succeeded when also built with security in mind from
the initial stages.

And, almost all successful apps had little or bad security
in them up front.  If they needed it later, they required
expensive add-ons.  Later on.

There are no successful systems that started with perfect
crypto, to my knowledge.  There are only perfect protocols
and successful systems.  A successful system can evolve
to enjoy a great crypto protocol, but it would seem that
a great protocol can only spoil the success of a system
in the first instance.



The best we can hope for, therefore, in the initial phase,
is a compromise: maybe the builder can be encouraged to
think about security as an add-on in the future?

Maybe some cheap and nasty crypto can be stuck in there
as a placemarker?  The equivalent of TEA or 40 bit RC4,
but in a protocol sense.

Or, maybe he can encourage a journeyman of the guild to
add the stuff in, on the side, as a fun project.

Maybe, just maybe, someone can create Bob's Simple Crypto
Library.  As a stopgap 

Re: Monoculture

2003-10-01 Thread Dave Howe
Jill Ramonsky wrote:
 Is it possible for Bob to instruct his browser to (a) refuse to trust
 anything signed by Eve, and (b) to trust Alice's certificate (which
 she handed to him personally)? (And if so, how?)

 I am very much hoping that you can answer both (a) and (b) with a yes,
ok then yes :)

What it comes down to is a browser will trust any certificate either
a) explicitly marked as trusted or
b) signed by a root CA in its root certificate store

so the correct procedure for (a) is for Bob to delete Eve's root
certificate from his root store.
For (b) he can either explicitly mark Alice's cert as accepted, or
(technically more interesting) if he trusts her as introducer add her
root cert - which is the same thing if she self-signed her cert - to his
root store, so that *any* cert she signs is accepted.



Re: Monoculture

2003-10-01 Thread Barney Wolff
On Wed, Oct 01, 2003 at 04:48:33PM +0100, Jill Ramonsky wrote:
 
 But I would like to ask you to clarify something about SSL which has 
 been bugging me. Allow me to present a scenario. Suppose:
 (1) Alice runs a web server.
 (2) Bob has a web client.
 (3) Alice and Bob know each other personally, and see each other every day.
 (4) Eve is the bad guy. She runs a Certificate Authority, which is 
 trusted by Bob's browser, but not by Bob.
 Is it possible for Bob to instruct his browser to (a) refuse to trust 
 anything signed by Eve, and (b) to trust Alice's certificate (which she 
 handed to him personally)? (And if so, how?)

The list of trusted certs is part of the browser config, and can be
altered.  It would be hard to imagine a browser so badly written as
to hard-code that list.  Certainly Mozilla makes it easy (Manage Certs
under Privacy & Security in Edit Preferences) and I've even added
a self-signed server cert under IE with no trouble or inconvenience.
(Yes it did ask whether to accept the site's cert.)

-- 
Barney Wolff http://www.databus.com/bwresume.pdf
I'm available by contract or FT, in the NYC metro area or via the 'Net.



Re: Monoculture

2003-10-01 Thread Ian Grigg
Don Davis wrote:
 
 EKR writes:
  I'm trying to figure out why you want to invent a new authentication
  protocol rather than just going back to the literature ...

 note that customers aren't usually dissatisfied with
 the crypto protocols per se;  they just want the
 protocol's implementation to meet their needs exactly,
 without extra baggage of flexibility, configuration
 complexity, and bulk.  they want their crypto clothing
 to fit well, but what's available off-the-rack is
 a choice between frumpy one-size-fits-all, and a
 difficult sew-your-own kit, complete with pattern,
 fabric, and sewing machine.  so, they often opt for
 tailor-made crypto clothing.


This is also security-minded thinking on the part
of the customer.

Including extra functionality means that they have
to understand it, they have to agree with its choices,
they have to follow the rules in using it, and have
to pay the costs.  If they can ditch the stuff they
don't want, that means they are generally much safer
in making simple statements about the security model
that they have left.

So, coming up with a tailor-made solution has the
security advantage of reducing complexity.  If one
is striving to develop the whole security model on
one's own, without the benefit of formal methods,
that approach is a big advantage.

(None of which goes to say that they won't ditch a
critical component, of course.  I'm just trying to
get into their heads here when they act like this.)


iang



Re: Monoculture

2003-10-01 Thread Don Davis
eric wrote:
 The way I see it, there are basically four options:
 (1) Use OpenSSL (or whatever) as-is.
 (2) Strip down your toolkit but keep using SSL.
 (3) Write your own toolkit that implements a
 stripped down subset of SSL (e.g. self-signed
 certs or anonymous DH).
 (4) Design your own protocol and then implement it.

 Since SSL without certificates is about as simple
 as a stream security protocol can be, I don't see
 that (4) holds much of an advantage over (3)

i agree, except that simplifying the SSL protocol
will be a daunting task for a non-specialist.  when
a developer is faced with reading & understanding
the intricacy of the SSL spec, he'll naturally be
tempted to start over.  this doesn't exculpate the
developer for biting off more than he could chew,
but it's unfair to claim that his only motivation
was NIH or some other sheer stupidity.

btw, i also agree that when a developer decides to
design a new protocol, he should study the literature
about the design & analysis of such protocols.  but
at the same time, we should recognize that there's a
wake-up call for us in these recurrent requests for
our review of seemingly-superfluous, obviously-broken
new protocols.  such developers evidently want and
need a fifth option, something like:

   (5) use SSSL: a truly lightweight variant of
   SSL, well-analyzed and fully standardized,
   which trades away flexibility in favor of
   small code size & ease of configuration.

arguably, this is as much an opportunity as a wake-up
call.

- don davis


Re: Monoculture

2003-10-01 Thread Eric Murray
On Wed, Oct 01, 2003 at 04:48:33PM +0100, Jill Ramonsky wrote:
 I could do an implementation of SSL. Speaking as a programmer with an 
 interest in crypto, I'm fairly sure I could produce a cleanly 
 implemented and simple-to-use version.

Yep.  It's a bit of work, and more work to ensure that
there are no programming-bug-type security holes, such as those
recently announced, but it's not rocket science.

 But I would like to ask you to clarify something about SSL which has 
 been bugging me. Allow me to present a scenario. Suppose:
 (1) Alice runs a web server.
 (2) Bob has a web client.
 (3) Alice and Bob know each other personally, and see each other every day.
 (4) Eve is the bad guy. She runs a Certificate Authority, which is 
 trusted by Bob's browser, but not by Bob.
 Is it possible for Bob to instruct his browser to (a) refuse to trust 
 anything signed by Eve, and (b) to trust Alice's certificate (which she 
 handed to him personally)? (And if so, how?)

Yes and yes.  Most SSL/TLS implementations let the application designate a
set of certs as trusted CA certs for purposes of authenticating SSL peers.
If his client is programmed to let him, Bob can delete Eve's cert from
the trusted CA list.  Many browsers let you do this although it's often
hard to find in the config menus.

For (b), Bob's client would need to be able to mark Bob's copy of Alice's
cert as trusted even though it's not a self-signed CA cert.  This is
also just a matter of programming, but most browsers don't let you do
this-- their programmers decided that in order to simplify operation,
they would not allow browsers to mark non-selfsigned certs as trusted.

The SSL/TLS spec is pretty quiet about what peers use to authenticate
the certs that they receive.  You'd be free to implement a PGP-style
web of trust in your TLS implementation as long as the certs themselves are
X.509 format.
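Eric's point that implementations let the application designate its own trusted cert set can be sketched with Python's stdlib `ssl` module (a hedged illustration of the general mechanism, not any particular browser's behavior; `alice.pem` is a hypothetical file Bob got from Alice in person):

```python
import ssl

def bob_context(alice_pem: str) -> ssl.SSLContext:
    # A freshly built TLS client context starts with an EMPTY trust
    # store: Eve's CA (and every other default root) is simply absent,
    # so nothing she signs will verify -- that answers (a).
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # CERT_REQUIRED by default
    # Load ONLY Alice's self-signed cert as a trust anchor -- (b).
    ctx.load_verify_locations(cafile=alice_pem)
    return ctx

# Hypothetical usage:
#   with socket.create_connection(("alice.example", 443)) as s:
#       tls = bob_context("alice.pem").wrap_socket(
#           s, server_hostname="alice.example")
```

The design choice worth noticing: trust lives in the application's context object, not in the protocol, which is why the same TLS implementation can back either a CA hierarchy or Jill's hand-me-the-cert model.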


Eric



Re: Monoculture

2003-10-01 Thread Perry E. Metzger

Ian Grigg [EMAIL PROTECTED] writes:
 This is where maybe the guild and the outside world part
 ways.
 
 The guild would like the application builder to learn the
 field.  They would like him to read up on all the literature,
 the analyses.  To emulate the successes and avoid the
 pitfalls of those protocols that went before them.  The  
 guild would like the builder to present his protocol and  
 hope it be taken seriously.  The guild would like the
 builder of applications to reach acceptable standards.
 
 And, the guild would like the builder to take the guild
 seriously, in recognition of the large amounts of time
 guildmembers invest in their knowledge.

Actually, I couldn't care less if they take the guild seriously,
because there isn't any guild. What I care about is that people take
the risks seriously.

This is all very much like the reaction back when lots of people were
saying "please don't operate on people when you haven't washed your
hands" and lots of other folks said "nuts to that sort of thing --
I've been a surgeon for 30 years and almost 20% of my patients
survive!"

When I read The Codebreakers in the late 1970s, one thing got
drummed into my head in chapter after chapter after chapter. It is a
simple lesson, but one that I will repeat here.

Dumb cryptography kills people.

It has a simple corollary.

Dumb cryptography is built by people who don't understand that the
problem is hard and that doing a bad job kills people.

In chapter after chapter, you read about people making the same
mistakes, over and over, and never learning, and then other people
dying because they were too egotistical to believe that they could
have made a mistake in the design of their security systems.

We do not ask that anyone join a mythical guild. We ask that people not
go off and build suspension bridges out of rotting twine.

The problem, of course, is that although it is obvious why you don't
want your suspension bridge hung from rotting twine instead of steel,
it is far less obvious to the naked eye that using the C library
random() call doesn't provide enough security to keep your nuclear
power plant controls safe.
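Perry's point about library random() calls is easy to demonstrate (Python standing in for the C library here, as a sketch): a seeded general-purpose PRNG is fully determined by its seed, so anyone who recovers or guesses the seed reproduces every "random" key, while a CSPRNG draws from the OS entropy pool.

```python
import random
import secrets

# Two generators with the same seed emit identical streams -- this is
# what makes general-purpose library PRNGs unfit for key material.
alice_rng = random.Random(20031001)
eve_rng = random.Random(20031001)      # Eve guessed the seed
assert [alice_rng.getrandbits(32) for _ in range(8)] == \
       [eve_rng.getrandbits(32) for _ in range(8)]

# A CSPRNG is the right tool: unpredictable even given prior output.
session_key = secrets.token_bytes(16)  # 128-bit key
assert len(session_key) == 16
```

To the naked eye both produce plausible-looking gibberish, which is exactly why the rotting-twine failure mode is so hard to spot.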

 Well, the opposition to the guild is one of pro-market
 people who get out there and build applications.

I don't see any truth to that. You can build applications just as
easily using things like TLS -- and perhaps even more easily. The
alternatives aren't any simpler or easier, and are almost always
dangerous.

There isn't a guild. People just finally realize what is needed in
order to make critical -- and I do mean critical -- pieces of
infrastructure safe enough for use.


-- 
Perry E. Metzger[EMAIL PROTECTED]



Re: Monoculture

2003-10-01 Thread Ian Grigg
Perry E. Metzger wrote:

...

Dumb cryptography kills people.


What's your threat model?  Or, that's your threat
model?

Applying the above threat model as written up in
The Codebreakers to, for example, SSL and its
original credit card needs would seem to be a
mismatch.

On the face of it, that is.  Correct me if I'm
wrong, but I don't recall anyone ever mentioning
that anyone was ever killed over a sniffed credit
card.

And, I'm not sure it is wise to draw threat models
from military and national security history and
apply it to commercial and individual life.

There are scenarios where people may get killed
and there was crypto in the story.  But they are
few and far between [1].  And in general, those
parties gradually find themselves taking the crypto
seriously enough to match their own threat model
to an appropriate security model.

But, for the rest of us, that's not a good threat
model, IMHO.

  Well, the opposition to the guild is one of pro-market
  people who get out there and build applications.
 
 I don't see any truth to that. You can build applications just as
 easily using things like TLS -- and perhaps even more easily. The
 alternatives aren't any simpler or easier, and are almost always
 dangerous.


OK, that's a statement.  What is clear is that,
regardless of the truth of that statement,
developers time and time again look at the crypto
that is there and conclude that it is too much.

The issue is that the gulf is there, not whether
it is a fair gulf.


 There isn't a guild.

BTW, just to clarify.  The intent of my post was not to
claim that there is a guild.  Just to claim that there
is an environment that is guild-like.

 People just finally realize what is needed in
 order to make critical -- and I do mean critical -- pieces of
 infrastructure safe enough for use.


I find this mysterious.  When I send encrypted email
to my girlfriend with saucy chat in there, is that
what you mean by "critical"?  Or perhaps, when I send
a credit card number that is limited to $50 losses, is
verified directly by the merchant, and has a home
delivery address, do you mean, that's "critical"?  Or,
if I implement a VPN between my customers and suppliers,
do you mean that this is "critical"?

I think not.  For most purposes, I'm looking to reduce
the statistical occurrences of breaches.  I'll take
elimination of breaches if it is free, but in the
absence of a perfect world, for most comms needs, near
enough is fine by me, and anyone that tells me that the
crypto is 100% secure is more than likely selling snake
oil.

For those applications that *are* critical, surely the
people best placed to understand and deal with that
criticality are the people who run the application
themselves?  Surely it's their call as to whether they
take their responsibilities fully, or not?


iang


[1] the human rights activities of http://www.cryptorights.org/
do in fact present a case where people can get killed, and their
safety may depend to a lesser or greater extent on crypto.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Monoculture

2003-10-01 Thread Guus Sliepen
On Wed, Oct 01, 2003 at 02:34:23PM -0400, Ian Grigg wrote:

 Don Davis wrote:
 
  note that customers aren't usually dissatisfied with
  the crypto protocols per se;  they just want the
  protocol's implementation to meet their needs exactly,
  without extra baggage of flexibility, configuration
  complexity, and bulk.
[...]
 Including extra functionality means that they have
 to understand it, they have to agree with its choices,
 they have to follow the rules in using it, and have
 to pay the costs.  If they can ditch the stuff they
 don't want, that means they are generally much safer
 in making simple statements about the security model
 that they have left.

You clearly formulated what we are doing! We want to keep our crypto as
simple and to the point as necessary for tinc. We also want to
understand it ourselves. Implementing our own authentication protocol
helps us do all that.

Uhm, before getting flamed again: by "our own", I don't mean we think we
necessarily have to implement something different from all the existing
protocols. We just want to understand it so well and want to be so
comfortable with it that we can implement it ourselves.

-- 
Met vriendelijke groet / with kind regards,
Guus Sliepen [EMAIL PROTECTED]




Re: Monoculture

2003-10-01 Thread Perry E. Metzger

Ian Grigg [EMAIL PROTECTED] writes:
 Perry E. Metzger wrote:
 ...
 Dumb cryptography kills people.
 
 What's your threat model?  Or, that's your threat
 model?
 
 Applying the above threat model as written up in
 The Codebreakers to, for example, SSL and its
  original credit card needs would seem to be a
 mismatch.

People's software is rarely used in just one place. These days, one
might very well wake up to discover that one's operating system or
cryptographic utility is being used to protect ATM machines or power
generation equipment or worse. People die when power systems fail.

Furthermore, the little open source utility that you think is never
going to be used for something life critical may (with or without your
knowledge) end up being used by someone at an NGO who'll be killed
when the local government thugs break something.

 On the face of it, that is.  Correct me if I'm
 wrong, but I don't recall anyone ever mentioning
 that anyone was ever killed over a sniffed credit
 card.

SSL is not only used to protect people's credit cards.

It is one thing if, as a customer, with eyes wide open, you make a
decision to use something iffy.

However, as a producer, it is a bad idea to make assumptions you know
what people will do with your tools, because you don't. People end up
using tools in surprising ways. You can't control them.

Furthermore, it is utterly senseless to build something to use bad
cryptography when good cryptography is free and easy to use. You claim
there is some Cryptography Guild out there, but unlike every other
Guild in history, all our work is available for the taking by anyone
who wants it without the slightest remuneration to said fictitious
Guild.

   Well, the opposition to the guild is one of pro-market
   people who get out there and build applications.
  
  I don't see any truth to that. You can build applications just as
  easily using things like TLS -- and perhaps even more easily. The
  alternatives aren't any simpler or easier, and are almost always
  dangerous.
 
 OK, that's a statement.  What is clear is that,
 regardless of the truth of that statement,
 developers time and time again look at the crypto
 that is there and conclude that it is too much.

For decades, I've seen programmers claim they didn't have time to test
their code or document it, either. Should I believe them, or should I
keep kicking?

  People just finally realize what is needed in
  order to make critical -- and I do mean critical -- pieces of
  infrastructure safe enough for use.
 
 I find this mysterious.  When I send encrypted email
 to my girlfriend with saucy chat in there, is that
 what you mean by critical ?

Someone else who is not skilled in the art will then use that same
piece of software to send information to someone at Amnesty
International, and might very well end up dead if the software doesn't
work right.

Just because YOU do not use a piece of software in a life-critical way
does not mean someone else out there will not.

 Or,
 if I implement a VPN between my customers and suppliers,
 do you mean that this is critical ?

And someone else will use that VPN software to connect in to the
management interface for sections of the electrical grid, or a
commuter train system, or other things that can easily cause people to
die.

You do not know who will use your software.

 For those applications that *are* critical, surely the
 people best placed to understand and deal with that
 criticality are the people who run the application
 themselves?

I've been a security consultant for years. There are very few
organizations -- even ones with critical security needs -- that
actually understand security well.


-- 
Perry E. Metzger[EMAIL PROTECTED]



Re: Monoculture

2003-10-01 Thread M Taylor
On Wed, Oct 01, 2003 at 02:24:00PM -0400, Ian Grigg wrote:
 Matt Blaze wrote:
 
   I imagine the Plumbers & Electricians Union must have used similar
   arguments to enclose the business to themselves, and keep out unlicensed
   newcomers.  No longer acceptable indeed.  Too much competition boys?
  
 
  Rich,
 
  Oh come on.  Are you willfully misinterpreting what I wrote, or
  did you honestly believe that that was my intent?
 
 
 Sadly, there is a shared culture amongst cryptography   
 professionals that presses a certain logical, scientific 
 viewpoint.

So is being logical and scientific a bad way to do cryptography?
Maybe you would rather some sort of more 'post-modern', 'liberal'
or 'free market' cryptography?
 
 What is written in these posts (not just the present one)
 does derive from that viewpoint and although one can   
 quibble about the details, it does look very much from
 the outside that there is an informal Cryptographers  
 Guild in place [1].

Bollocks. Anyone is free to learn and practice (in the 'western' world,
and many other countries) cryptography. Some people are just better
at it, and many of those people are recognized for being better or
more experienced. 

By your argument any group that has education and/or training is
a guild. Heaven forbid CS and IT types look at the history of their
own field.

 The guild would like the application builder to learn the
 field.  They would like him to read up on all the literature,
 the analyses.  To emulate the successes and avoid the
 pitfalls of those protocols that went before them.  The  

That sounds like a progressive, enlightened way of doing business,
at least trying to avoid known mistakes, and trying to discover
new ones. 

 None of that is likely to happen.  The barrier to entry
 into serious cryptographic protocol design is too high
 for the average builder of new applications [2].  He has,
 after all, an application to build.

Which is why the implementation is different from protocol design,
except for the insecure application developer. 
 
 to boot.  What is not nice is that there is no easy way
 to work out which code to use, and the protocols are not
 so easy to understand.  It's nice that we have an open

Cryptography is hard; suck it up. That is not a reason to act
irrationally and encourage using known weak or flawed methods when
we do have better known methods.




Re: Reliance on Microsoft called risk to U.S. security

2003-10-01 Thread bear


On Wed, 1 Oct 2003, Peter Gutmann wrote:

This doesn't really work.  Consider the simple case where you run Outlook with
'nobody' privs rather than the current user privs.  You need to be able to
send and receive mail, so a worm that mails itself to others won't be slowed
down much.  In addition everyone's sending you HTML-formatted mail, so you
need access to (in effect) MSIE via the various HTML controls.  Further, you
need Word and Excel and Powerpoint for all the attachments that people send
you.  They need access to various subsystems like ODBC and who knows what else
as an extension of the above.  As you follow these dependencies further and
further out, you eventually end up running what's more or less an MLS system
where you do normal work at one privilege level, read mail at another, and
browse the web at a third.  This was tried in the 1970s and 1980s and it
didn't work very well even if you were prepared to accept a (sizeable) loss of
functionality in exchange for having an MLS OS, and would be totally
unacceptable for someone today who expects to be able to click on anything in
sight and have it automatically processed by whatever app is assigned to it.

I think part of the point is that that expectation is a substantial
problem.

Data that moves between machines is inherently suspect; and if it can
originate at unknown machines (as in SMTP or NNTP), it should be
regarded as guilty until proven innocent.  There ought to be no way to
send live code through the mail.  Users simply cannot be expected to
have the ability to make an informed decision (as opposed to a habit)
about whether to run it, because its format does not give them enough
information to make an informed decision.

The distinction between live code and text is crucial.  While both are
just sequences of bytes, text has no semantics as far as the machine
is concerned.  Once you start sending something that has machine
semantics - something that contains instructions for the machine to
run and running those instructions may cause the machine to do
something besides just displaying it - then you are dealing with live
code. And live code is handy, but dangerous.

There is pressure to stick live code into any protocol that moves
text; SMTP sprouted 'clickable' attachments.  Java, javascript, and
now flash seem to have gotten stuck into HTTP. But I think that live
code really and truly needs a different set of protocols; and for
security's sake, there really need to be text-only protocols.  It
should be part of their charter and part of their purpose that they do
*NOT* under any circumstances deliver live code.

"Can be relied on to _only_ deliver text" is a valuable and important
piece of functionality, and a capability that has been cut out of too
many protocols with no replacement in sight.

Separating it by protocol would give people practical things that they
could do.  You could, for example, allow people to use a live-code
mail protocol inside a company firewall or allow a live-code browsing
protocol inside the firewall, while allowing only a text mail protocol
or a text browsing protocol to make connections from outside the
company.  We approximate this by trying to make smarter clients that
have different trust models for different domains, but that's always a
crapshoot; you then have to depend on a client, and if the client can
be misconfigured and/or executes live code it can't really be relied
on.  It would be better to have separate protocols; ideally, even
separate applications.
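A text-only gateway of the sort described above can be sketched in a few lines. This is only an illustration of the policy, not a production filter; the function name and the one-entry whitelist are my own, using Python's standard email library:

```python
import email
from email.message import Message

# Content types a text-only gateway is willing to pass: plain text only.
# HTML, scripts, and binary attachments all count as "live code" here.
ALLOWED = {"text/plain"}

def strip_live_content(raw):
    """Return a copy of the message keeping only its text/plain parts."""
    msg = email.message_from_string(raw)
    if not msg.is_multipart():
        if msg.get_content_type() not in ALLOWED:
            msg.set_payload("[non-text part removed]")
            msg.set_type("text/plain")
        return msg.as_string()
    kept = Message()
    for name, value in msg.items():          # copy the headers...
        if name.lower() != "content-type":   # ...except the container type
            kept[name] = value
    kept.set_type("multipart/mixed")
    for part in msg.walk():
        if part.get_content_type() in ALLOWED:
            kept.attach(part)
    return kept.as_string()
```

The point of the sketch is that the decision is made by content type at the protocol boundary, not by asking the user to judge each attachment.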

One thing that I noticed in the responses to CyberInsecurity: The Cost of
Monopoly was that of the people who criticised it as recommending the wrong
solution, no two could agree on any alternative remedy.  This indicates just
how hard a problem this really is...

Indeed.  I think that there ought to be simpler, text-only protocols
for the use of people who don't need to send and receive live code, so
that they could be effectively protected from live code at the outset
unless they really need it.  Others, of course, disagree.

Bear



Re: Monoculture

2003-10-01 Thread Perry E. Metzger

Guus Sliepen [EMAIL PROTECTED] writes:
 You clearly formulated what we are doing! We want to keep our crypto as
 simple and to the point as necessary for tinc. We also want to
 understand it ourselves.

There is nothing wrong with either goal.

 Implementing our own authentication protocol helps us do all that.

Implementing is fine. Designing, however, may have a world of problems.

 Uhm, before getting flamed again: by "our own", I don't mean we think we
 necessarily have to implement something different from all the existing
 protocols. We just want to understand it so well and want to be so
 comfortable with it that we can implement it ourselves.

That's fine. There is nothing wrong with new implementations. My
biggest concern is with people rolling their own crypto algorithms and
protocols, not with people re-implementing them.

If you are going to implement something on your own, though, may I
strongly encourage you to write your code in a way that is inherently
secure?

Security is not only a question of correct protocols, but of good
implementation. Avoiding buffer overflows, using principles like
aperture minimization and least privilege, and a dozen other
techniques will help you make your system far more secure than it
would otherwise be.
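The least-privilege point can be made concrete: a daemon does its privileged setup (say, binding a low port) and then irrevocably drops root. The uid/gid values below are placeholders, and the sketch only shows the conventional ordering (groups, then gid, then uid):

```python
import os

def drop_privileges(uid, gid):
    """Irrevocably drop root privileges after any privileged setup
    (e.g. binding a port below 1024). Returns False when not root."""
    if os.getuid() != 0:
        return False          # nothing to drop in an unprivileged run
    os.setgroups([])          # shed supplementary groups while still root
    os.setgid(gid)            # group first: setgid fails once uid changes
    os.setuid(uid)            # point of no return
    return True
```

After the setuid call, a hijacked process simply no longer has the privileges an attacker would want to abuse.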

-- 
Perry E. Metzger[EMAIL PROTECTED]



Re: Monoculture

2003-10-01 Thread bear


On Wed, 1 Oct 2003, John S. Denker wrote:

According to 'ps', an all-up ssh system is less
than 3 megabytes (sshd, ssh-agent, and the ssh
client).  At current memory prices, your clients
would save less than $1.50 per system even if
their custom software could reduce this bulk
to zero.

That's not the money they're trying to save.  The money they're trying
to save is spent on the salaries of the guys who have to understand
it.  Depending on what needs you have, that's anything from
familiarity with setting up the certs and authorizations and servers
and configuring the clients, to the ability to sit down and verify the
source line by line and routine by routine.  The price of computer
memory is a non sequitur here; people want something dead-simple so
that there won't be so much overhead in _human_ knowledge and
understanding required to operate it.

Crypto is not like some game or something that nobody has to really
understand how it works; key management and cert management is a
complex issue and people have to be hired to do it.  Code that has so
much riding on it has to be audited in lots of places, and people have
to be hired to do that.  Every line of code costs money in an audit,
even if somebody else wrote it.

So, yeah, they'd rather see a lot of stuff hard-coded instead of
configurable; hard-coded is easier to verify, hard-coded has less
configuration to do, and hard-coded is cheaper to own.  We get so busy
trying to be all things to all people in computer science that we
often forget that what a lot of our clients really want is simplicity.

1) Well, they could just ignore the new release
and stick with the old version.  Or, if they think
the new features are desirable, then they ought
to compare the cost of re-stripping against the
cost of implementing the new desirable features
in the custom code.

And in a lot of places that's exactly what they do.  If the shop
requires a full code audit before taking any new software, going to
the new version can cost tens of millions of dollars over and above
the price.  And the bigger the new version's sourcecode is, the more
the audit is going to cost.

2) If you do a good job stripping the code, you
could ask the maintainers to put your #ifdefs into
the mainline version.  Then you have no maintenance
hassle at all.

You wouldn't.  But the people who have to slog through that tarball of
code for an audit get the jibblies when they see #ifdefs all over the
place, because it means they have to go through line by line and
routine by routine again and again and again with different
assumptions about what symbols are defined during compilation, before
they can certify it.

Bear



Re: Monoculture

2003-10-01 Thread Thor Lancelot Simon
On Wed, Oct 01, 2003 at 10:20:53PM +0200, Guus Sliepen wrote:
 
 You clearly formulated what we are doing! We want to keep our crypto as
 simple and to the point as necessary for tinc. We also want to
 understand it ourselves. Implementing our own authentication protocol
 helps us do all that.
 
  Uhm, before getting flamed again: by "our own", I don't mean we think we
 necessarily have to implement something different from all the existing
 protocols. We just want to understand it so well and want to be so
 comfortable with it that we can implement it ourselves.

In that case, I don't see why you don't bend your efforts towards
producing an open-source implementation of TLS that doesn't suck.
If you insist on not using ESP to encapsulate the packets -- which in
my opinion is a silly restriction to put on yourself; the ESP encapsulation
is extremely simple, to the point that one of my former employers has a
fully functional implementation that works well at moderate data rates
on an 8088 running MS-DOS! -- TLS is probably exactly what you're looking
for.

Note that it's *entirely* possible to use ESP without using IKE for the
user/host authentication and key exchange.  Nothing is preventing you
from using TLS or its moral equivalent to exchange keys -- and looking
at some of the open-source IKE implementations, it's easy to see how
this would be a tempting choice.  Indeed, there's no reason your ESP
implementation would need to live in the kernel; I already know of more
than one that simply grabs packets using the kernel's tunnel driver, for
portability reasons.

However, if for what seem to me to be very arbitrary reasons you insist on
using an encapsulation that's not ESP, I urge you to use TLS for the whole
thing.  As I and others have pointed out here, if you're willing to *pay* 
for it, you can have your choice of TLS implementations that are simple, 
secure, and well under 100K.  Compare and contrast with the behemoth that 
is OpenSSL and it's easy to see why you wouldn't want to use the 
open-source implementation that is available to you now, but there is no 
reason you could not produce one yourself that was much less awful.

You say that you object to existing protocols because you want simplicity
and performance.  I say that it's not reasonable of you to blame the
failures of the existing *open-source implementations* of those protocols
on the protocols themselves.  I think that both the multiple good, small,
simple commercial SSL/TLS implementations and the two MS-DOS IPsec
implementations are good examples that demonstrate that what you should
object to, more properly, is lousy software design and implementation on
the part of many open-source protocol implementors, not lousy protocol design
in cases where the protocol design is actually quite good.  So if you're
going to set out to fix something, I think if you're trying to fix the
protocols, you're wasting your effort -- there are existing, widely
peer-reviewed and accepted protocols that are *already* about as simple
as they can get and still be secure the way users actually use them in the
real world.  I think that it would make a lot more sense to fix the lousy 
implementation quality instead; that way you seem much more likely to 
achieve your security, performance, and simplicity goals.



VeriSign tapped to secure Internet voting

2003-10-01 Thread R. A. Hettinga
http://msnbc-cnet.com.com/2102-1029_3-5083772.html?tag=ni_print



VeriSign tapped to secure Internet voting
By Robert Lemos
Staff Writer, CNET News.com
http://news.com.com/2100-1029-5083772.html

VeriSign announced Monday that it will provide key components of a system
designed to let Americans abroad cast absentee votes over the Internet.

The contract was granted by consulting firm Accenture, which is working
with the U.S. Department of Defense on a voting system known as the Secure
Electronic Registration and Voting Experiment. When completed, the system
will allow absentee military personnel and overseas Americans from eight
participating states to cast their votes in the 2004 general election.

"The solution we are building will enable absentee voters to exercise
their right to vote," said George Schu, a vice president at VeriSign. "The
sanctity of the vote can't be compromised nor can the integrity of the
system be compromised--it's security at all levels."


VeriSign has been selected to host the servers and information needed to
authenticate voters and ensure that they cast only one vote.  Internet and
electronic voting systems are notoriously hard to secure. In July,
researchers at Johns Hopkins University raised extensive security issues
with a leading electronic voting system manufactured by Diebold Election
Systems.

Schu stressed that several layers of security will prevent hackers from
accessing the system. VeriSign will house the security servers in its own
hosting centers. The company will ask military personnel to use their
Common Access Cards--the latest form of ID for the military--to access the
system and cast a vote. Civilians will use digital signatures.

Overseas U.S. citizens from Arkansas, Florida, Hawaii, Minnesota, North
Carolina, South Carolina, Utah and Washington will be able to use the
system to cast votes.

Related News
Voting machine fails inspection  July 24, 2003
http://news.com.com/2100-1009-5054088.html

Tech glitches don't mar Florida vote  November 6, 2002
http://news.com.com/2100-1023-964609.html

Tech makes its mark at the ballot box  November 6, 2002
http://news.com.com/2009-1023-964723.html

U.K. puts online voting to the test  April 26, 2002
http://news.com.com/2110-1023-893093.html

Toward digital democracy  November 6, 2001
http://news.com.com/2009-1023-275348.html

Get this story's Big Picture
http://news.com.com/2104-1029-5083772.html


-- 
-
R. A. Hettinga mailto: [EMAIL PROTECTED]
The Internet Bearer Underwriting Corporation http://www.ibuc.com/
44 Farquhar Street, Boston, MA 02131 USA
... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience. -- Edward Gibbon, 'Decline and Fall of the Roman Empire'



Re: Monoculture

2003-10-01 Thread Perry E. Metzger

Ronald L. Rivest [EMAIL PROTECTED] writes:
 What is aperture minimization?  That's a new term for me...
 Never heard of it before.  Google has never seen it either...
 
 (Perhaps others on the list would be curious as well...)

I'm sure you have heard of it, just under other names.

The term aperture minimization really just means keeping the
potential opening that can be attacked minimized.

If you have only a tiny piece of trusted code, it is easier to fully
audit than if you have a large piece of trusted code. If you have only
a brief period when you have privileges asserted, there is less scope
for hijacking a program than if it asserts privileges at all
times. If your system can send general SQL queries to the database
server, someone hijacking it can do the same, but if you can only send
very limited canned queries by an ad hoc protocol the hijacker has
less scope for mischief.

Thus, aperture minimization: narrow the window (aperture) and less
stuff can get through it.
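The SQL example above translates directly into code. The sketch below contrasts the two apertures; the table and function names are invented for illustration, using Python's standard sqlite3 module:

```python
import sqlite3

# Wide aperture: whoever controls this code path can run arbitrary SQL,
# so a hijacker who reaches it can too.
def query_wide(conn, sql):
    return conn.execute(sql).fetchall()

# Narrow aperture: only one canned, parameterized query is reachable.
# A hijacker who controls 'username' cannot change the query's shape;
# at worst he looks up other users.
def lookup_user(conn, username):
    return conn.execute(
        "SELECT name, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Because the username is bound as a parameter rather than spliced into the SQL text, even a classic injection string like `x' OR '1'='1` is treated as a literal value and matches nothing.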

Perry



Re: how simple is SSL? (Re: Monoculture)

2003-10-01 Thread Eric Rescorla
Adam Back [EMAIL PROTECTED] writes:

 On Wed, Oct 01, 2003 at 08:53:39AM -0700, Eric Rescorla wrote:
   there's another rationale my clients often give for
   wanting a new security system [existing protcools] too heavyweight for
   some applications.
  
  I hear this a lot, but I think that Perry nailed it earlier. SSL, for
  instance, is about as simple as we know how to make a protocol that
  does what it does. The two things that are generally cited as being
  sources of complexity are:
  
  (1) Negotiation.
 
  Negotiation doesn't really add that much protocol complexity,
 
 eh well _now_ we can say that negotiation isn't a problem, but I don't
 think we can say it doesn't add complexity: but in the process of
 getting to SSLv3 we had un-MACed and hence MITM tamperable
 ciphersuites preferences (v1), and then version roll-back attack (v2).
Right, but that's a DESIGN cost that we've already paid. 
It doesn't add significant implementation cost. As in check
out any SSL implementation.


  (2) Certificates.
 
  and certificates are kind of the price of admission if you want
  third party authentication.
 
 Maybe but X.509 certificates, ASN.1 and X.500 naming, ASN.1 string
 types ambiguities inherited from PKIX specs are hardly what one could
 reasonably calls simple.  There was no reason SSL couldn't have used
 for example SSH key formats or something that is simple.  If one reads
 the SSL rfcs it's relatively clear what the formats are; the state
 stuff is a little funky, but ok, and then there's a big call out to a
 for-pay ITU standard which references half a dozen other for-pay ITU
 standards.  Hardly compatible with IETF doctrines on open standards
 you would think (though this is a side-track).
 
  Since SSL without certificates is about as simple as a stream
  security protocol can be
 
 I don't think I agree with this assertion.  It may be relatively
 simple if you want X.509 compatibility, and if you want ability to
 negotiate ciphers.

I said WITHOUT certificates.

Take your SSL implementation and code it up to use anonymous
DH only. There's not a lot of complexity to remove at that point.

-Ekr


-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: Monoculture

2003-10-01 Thread Eric Rescorla
Don Davis [EMAIL PROTECTED] writes:

 eric wrote:
  The way I see it, there are basically four options:
  (1) Use OpenSSL (or whatever) as-is.
  (2) Strip down your toolkit but keep using SSL.
  (3) Write your own toolkit that implements a
  stripped down subset of SSL (e.g. self-signed
  certs or anonymous DH).
  (4) Design your own protocol and then implement it.
 
  Since SSL without certificates is about as simple
  as a stream security protocol can be, I don't see
  that (4) holds much of an advantage over (3)
 
 i agree, except that simplifying the SSL protocol
 will be a daunting task for a non-specialist.  when
 a developer is faced with reading & understanding
 the intricacy of the SSL spec, he'll naturally be
 tempted to start over.  this doesn't exculpate the
 developer for biting off more than he could chew,
 but it's unfair to claim that his only motivation
 was NIH or some other sheer stupidity.
I disagree. If someone doesn't understand enough about SSL
to understand where to simplify, they shouldn't even consider
designing a new protocol.

 btw, i also agree that when a developer decides to
 design a new protocol, he should study the literature
 about the design & analysis of such protocols.  but
 at the same time, we should recognize that there's a
 wake-up call for us in these recurrent requests for
 our review of seemingly-superfluous, obviously-broken
 new protocols.  such developers evidently want and
 need a fifth option, something like:
 
(5) use SSSL: a truly lightweight variant of
SSL, well-analyzed and fully standardized,
which trades away flexibility in favor of
small code size & ease of configuration.
 
 arguably, this is as much an opportunity as a wake-up
 call.

I'm not buying this, especially in the dimension of code
size. I don't see any evidence that the people complaining
about how big SSL is are basing their opinion on anything
more than the size of OpenSSL. I've seen SSL implementations
in well under 100k.

-Ekr




Re: VeriSign tapped to secure Internet voting

2003-10-01 Thread Roy M. Silvernail
On Wednesday 01 October 2003 17:33, R. A. Hettinga forwarded:

 VeriSign tapped to secure Internet voting

 "The solution we are building will enable absentee voters to exercise
 their right to vote," said George Schu, a vice president at VeriSign. "The
 sanctity of the vote can't be compromised nor can the integrity of the
 system be compromised--it's security at all levels."

One would wish that were a design constraint.  Sadly, I'm afraid it's just a 
bullet point from the brochure.



anonymous DH MITM

2003-10-01 Thread M Taylor

Stupid question I'm sure, but does TLS's anonymous DH protect against
man-in-the-middle attacks? If so, how? I cannot figure out how it would,
and it would seem TLS would be wide open to abuse without MITM protection so
I cannot imagine it would be acceptable practice without some form of
security.




Re: anonymous DH MITM

2003-10-01 Thread Eric Rescorla
M Taylor [EMAIL PROTECTED] writes:

 Stupid question I'm sure, but does TLS's anonymous DH protect against
 man-in-the-middle attacks? If so, how? I cannot figure out how it would,
 and it would seem TLS would be wide open to abuse without MITM protection so
 I cannot imagine it would be acceptable practice without some form of
 security.

It doesn't protect against MITM. 

You could, however, use a static DH key and then client could
cache it as with SSH.
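The SSH-style caching mentioned here is trust-on-first-use. A minimal sketch (the SHA-256 fingerprinting and the dict-as-cache are illustrative choices of mine, not part of any protocol):

```python
import hashlib

def check_server_key(host, pubkey, cache):
    """SSH-style trust-on-first-use: remember an unknown key, accept a
    matching known key, and reject any key that has changed."""
    fingerprint = hashlib.sha256(pubkey).hexdigest()
    known = cache.get(host)
    if known is None:
        cache[host] = fingerprint   # first contact: trust and remember
        return True
    return known == fingerprint     # later contacts: must match exactly
```

The first contact is unauthenticated by construction; all the scheme buys is that a man-in-the-middle must be present from the very first connection onward, or the key change is detected.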

-Ekr


-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: anonymous DH MITM

2003-10-01 Thread Tim Dierks
At 07:06 PM 10/1/2003, M Taylor wrote:
Stupid question I'm sure, but does TLS's anonymous DH protect against
man-in-the-middle attacks? If so, how? I cannot figure out how it would,
and it would seem TLS would be wide open to abuse without MITM protection so
I cannot imagine it would be acceptable practice without some form of
security.
It does not, and most SSL/TLS implementations/installations do not support 
anonymous DH in order to avoid this attack. Many wish that anon DH was more 
broadly used as an intermediate security level between bare, insecure TCP &
authenticated TLS, but this is not common at this time.

(Of course, it's not even clear what "MITM" means for an anonymous 
protocol, given that the layer in question makes no distinction between Bob 
& Mallet.)
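That point can be made concrete with toy numbers: in unauthenticated DH, Mallet simply runs one honest-looking exchange with each side, and neither Alice nor Bob can tell. (The prime below is absurdly small and provides no security whatsoever; it only illustrates the algebra.)

```python
import secrets

# Toy finite-field Diffie-Hellman over a tiny prime -- demo only.
P = 0xFFFFFFFB   # 2**32 - 5, a prime
G = 5

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

a_priv, a_pub = dh_keypair()   # Alice
b_priv, b_pub = dh_keypair()   # Bob
m_priv, m_pub = dh_keypair()   # Mallet, relaying the handshake

# Mallet substitutes his own public value in each direction, so each
# honest party unknowingly completes the exchange with Mallet instead.
alice_key = pow(m_pub, a_priv, P)   # Alice believes this is with Bob
bob_key   = pow(m_pub, b_priv, P)   # Bob believes this is with Alice
mallet_a  = pow(a_pub, m_priv, P)   # Mallet's key with Alice
mallet_b  = pow(b_pub, m_priv, P)   # Mallet's key with Bob
```

Mallet now shares one key with each side and can decrypt, read, and re-encrypt everything in transit; since no public value is authenticated, the two exchanges are indistinguishable from a single honest one.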

 - Tim



Re: anonymous DH MITM

2003-10-01 Thread Ian Grigg
M Taylor wrote:
 
 Stupid question I'm sure, but does TLS's anonymous DH protect against
 man-in-the-middle attacks? If so, how? I cannot figure out how it would,


Ah, there's the rub.  ADH does not protect against
MITM, as far as I am aware.


 and it would seem TLS would be wide open to abuse without MITM protection so
 I cannot imagine it would be acceptable practice without some form of
 security.

View A:

MITM is extremely rare.  It's quite a valid threat
model to say that MITM is a possibility that won't
need to be defended against, 100%.

E.g.1, SSH, which successfully defends most online
Unix servers by assuming the first contact is a
good contact.  E.g.2, PGP, which bounces MITM
protection up to a higher layer.

Or, what's your threat model?  Why does it include
MITM and how much do you want to pay?

View B:

MITM is a real and valid threat, and should be
considered.  By this motive, ADH is not a recommended
mode in TLS, and is also deprecated.

Ergo, your threat model must include MITM, and you
will pay the cost.

(Presumably this logic is behind the decision by the
TLS RFC writers to deprecate ADH.  Hence, talking
about ADH in TLS is a waste of time, which is why I
have stopped suggesting that ADH be used to secure
browsing, and am concentrating on self-signed certs.
Anybody care to comment from the TLS team as to what
the posture is?)

iang



Re: VeriSign tapped to secure Internet voting

2003-10-01 Thread Ian Grigg
Roy M. Silvernail wrote:
 
 On Wednesday 01 October 2003 17:33, R. A. Hettinga forwarded:
 
  VeriSign tapped to secure Internet voting
 
  The solution we are building will enable absentee voters to exercise
  their right to vote, said George Schu, a vice president at VeriSign. The
  sanctity of the vote can't be compromised nor can the integrity of the
  system be compromised--it's security at all levels.
 
 One would wish that were a design constraint.  Sadly, I'm afraid it's just a
 bullet point from the brochure.

It's actually quite cunning.  The reason that this
is going to work is because the voters are service
men and women, and if they attack the system, they'll
get their backsides tanned.  Basically, it should
be relatively easy to put together a secure voting
application under the limitations, control structures
and security infrastructure found within the US military.

It would be a mistake to apply the solution to wider
circumstances, and indeed another mistake to assume
that Verisign had anything to do with any purported
success in solving the voting problem.

iang



Re: anonymous DH MITM

2003-10-01 Thread Eric Murray
On Thu, Oct 02, 2003 at 12:06:40AM +0100, M Taylor wrote:
 
 Stupid question I'm sure, but does TLS's anonymous DH protect against
 man-in-the-middle attacks?

No, it doesn't.

 If so, how? I cannot figure out how it would,
 and it would seem TLS would be wide open to abuse without MITM protection so
 I cannot imagine it would be acceptable practice without some form of
 security.

The anon DH suites are there in the spec for use when
your security model allows.  Not many uses of TLS do.

Last time I checked, which was a while ago now, very few deployed
https servers offered anon DH suites.  Which is appropriate
since MITM breaks the https security model.

Eric




Re: VeriSign tapped to secure Internet voting

2003-10-01 Thread Roy M. Silvernail
On Wednesday 01 October 2003 19:53, Ian Grigg wrote:
 Roy M. Silvernail wrote:
  On Wednesday 01 October 2003 17:33, R. A. Hettinga forwarded:
   VeriSign tapped to secure Internet voting
  
   The solution we are building will enable absentee voters to exercise
   their right to vote, said George Schu, a vice president at VeriSign.
   The sanctity of the vote can't be compromised nor can the integrity of
   the system be compromised--it's security at all levels.
 
  One would wish that were a design constraint.  Sadly, I'm afraid it's
  just a bullet point from the brochure.

 It's actually quite cunning.  The reason that this
 is going to work is because the voters are service
 men and women, and if they attack the system, they'll
 get their backsides tanned.  

Good observation.  I missed that one.

 Basically, it should
 be relatively easy to put together a secure voting
 application under the limitations, control structures
 and security infrastructure found within the US military.

 It would be a mistake to apply the solution to wider
 circumstances, and indeed another mistake to assume
 that Verisign had anything to do with any purported
 success in solving the voting problem.

Definitely, but I can see Verisign doing both.  The rabbit hole gets ever 
deeper.



Re: Monoculture

2003-10-01 Thread Peter Gutmann
John S. Denker [EMAIL PROTECTED] writes:

According to 'ps', an all-up ssh system is less than 3 megabytes (sshd, ssh-
agent, and the ssh client).  At current memory prices, your clients would
save less than $1.50 per system even if their custom software could reduce
this bulk to zero.

Let me guess, your background is in software rather than hardware? :-).  Not
all computers are PCs, where you can just drop in another SIMM and the problem
is fixed.  Depending on how you measure it, there are at least as many (if not
many more) embedded systems out there as PCs, where you have X system resources
and can't add any more even if you wanted to because (a) the system is already
deployed and can't be altered, (b) it's cheaper to rewrite the crypto from
scratch than spend even 5 cents (not $1.50) on more memory, or (c) the
hardware can't address any more than the 128K or 512K (64K and 256K 8-bit
SRAMs x 2, the bread and butter of many embedded systems) that it already has.

With the cost of writing custom software being what it is, they would need to
sell quite a large number of systems before de-bulking began to pay off.  And
that's before accounting for the cost of security risks.

See above.  This is exactly the situation that embedded-systems vendors find
themselves in (insert tales of phone exchanges built from clustered Z80s
because it's easier to keep adding more of those than to move the existing
firmware to new hardware without the Z80's restrictions, or people being paid
outrageous amounts of money to hand-code firmware for 4-bit CPUs because it's
cheaper than moving everything to 8-bit ones, or ...).

Perry E. Metzger [EMAIL PROTECTED] writes:

SSL is not only used to protect people's credit cards.

It is one thing if, as a customer, with eyes wide open, you make a decision
to use something iffy.

However, as a producer, it is a bad idea to make assumptions you know what
people will do with your tools, because you don't. People end up using tools
in surprising ways. You can't control them.

Yup.  I once had a user discuss with me the use of my SSL code in an embedded
application that controlled X.  I was a bit curious as to why they'd bother,
until they explained the scale of the X they were controlling.  If anything
were to go wrong there, it'd be a lot more serious than a few stolen credit
cards.

Once you have a general-purpose security tool available, it's going to be used
in ways that the original designers and implementors never dreamed of.  That's
why you need to build it as securely as you possibly can, and once it's done
go back over it half a dozen times and see if you can build it even more
securely than that.

Peter.



Re: anonymous DH MITM

2003-10-01 Thread Peter Gutmann
Tim Dierks [EMAIL PROTECTED] writes:

It does not, and most SSL/TLS implementations/installations do not support
anonymous DH in order to avoid this attack.

Uhh, I think that implementations don't support DH because the de facto
standard is RSA, not because of any concern about MITM (see below).  You can
talk to everything using RSA, you can talk to virtually nothing using DH,
therefore...

Many wish that anon DH was more broadly used as an intermediate security
level between bare, insecure TCP and authenticated TLS, but this is not common
at this time.

RSA is already used as anon-DH (via self-signed, snake-oil CA, expired,
invalid, etc etc certs), indicating that MITM isn't much of a concern for most
users.

Peter.



Re: Monoculture

2003-10-01 Thread Steven M. Bellovin
In message [EMAIL PROTECTED], Perry E. Metzger writes:


Unfortunately, those parts are rather dangerous to omit.

0) If you omit the message authenticator, you will now be subject to a
   range of fine and well documented cut and paste attacks. With some
   ciphers, especially stream ciphers, you'll be subject to far worse
   attacks still.
1) If you omit the IV, suddenly you're going to be subject to a
   second new range of attacks based on the fact that fixed blocks
   will always encrypt the exact same way.

We went through all that, by the way, when designing IPSec. At first,
we didn't put in mandatory authenticators, because we didn't
understand that they were security critical. Then, of course, we
discovered that they were damn critical, and that most of the text
books on this had been wrong. We didn't understand lots of subtleties
about our IVs, either. One big hint: do NOT use IVs on sequential
packets with close Hamming distance!

Better yet, don't use predictable IVs; the threat is much clearer.
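Perry's point 0 -- cut-and-paste and bit-flipping attacks when the authenticator is omitted -- takes only a few lines to demonstrate. A toy sketch (keystream built from SHA-256 in counter mode, purely for illustration; the message layout and field offsets are assumptions of the example):

```python
# Toy stream cipher showing why a missing message authenticator is fatal:
# ciphertext bits map one-to-one onto plaintext bits, so an attacker who
# knows the message layout can flip them without knowing the key.
import hashlib


def keystream(key, n):
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]


def encrypt(key, pt):
    return bytes(a ^ b for a, b in zip(pt, keystream(key, len(pt))))


decrypt = encrypt  # XOR is its own inverse

key = b"shared secret"
ct = encrypt(key, b"PAY $0100 TO ALICE")
# The attacker XORs a chosen difference into bytes 4..8 of the ciphertext:
delta = bytes(a ^ b for a, b in zip(b"$0100", b"$9999"))
forged = ct[:4] + bytes(c ^ d for c, d in zip(ct[4:9], delta)) + ct[9:]
print(decrypt(key, forged))  # b'PAY $9999 TO ALICE'
```

With an integrity check over the ciphertext (or plaintext), the forgery would be rejected; without one, the receiver has no way to notice.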

Perry is right -- a number of us learned the hard way about 
cryptographic protocol complexity.  I led the fight to remove sequence 
numbers from the early version of ESP, since no one could elucidate a 
threat model beyond "the enemy could duplicate packets".  My response 
was "so what -- packet duplication is always possible per the IP 
datagram model".  (A while back, my ISP fulfilled that part of the 
model; I was seeing up to 90% duplicate packets.  But I digress.)  But 
then I wrote a paper where I showed lots of ways to attack IPsec if you 
didn't have both sequence numbers and integrity protection, so I led 
the fight to reintroduce sequence numbers, and to make integrity 
protection part of ESP rather than leaving it to AH.  We all learn, 
even in embarrassing ways.

My first published cryptographic protocol, EKE, has had an interesting 
history.  One version of it is still believed secure:  encrypt both halves
of a DH exchange with a shared secret.  (Ironically enough, that was
the very first variant we came up with -- I still have the notebook 
where I recorded it.)  We came up with lots of variations and 
optimizations that all looked just fine.  We were wrong...
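That still-believed-secure variant -- both halves of the DH exchange encrypted under a key derived from the shared password -- can be sketched in a few lines. This is a toy illustration only: the tiny Mersenne-prime group and the XOR mask standing in for the password cipher are assumptions of the example, and real EKE needs a proper group, a real cipher, and careful encoding of group elements:

```python
# Sketch of the EKE variant described above: Alice and Bob exchange
# E_pw(g^a) and E_pw(g^b), then each derives g^(ab) mod p.  Toy
# parameters throughout; NOT a secure implementation.
import hashlib
import secrets

p, g = 2**127 - 1, 5  # toy group; far too small for real use


def pw_mask(password, n):
    """Derive an n-byte mask from the password (stand-in for a cipher)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(password + bytes([ctr])).digest()
        ctr += 1
    return out[:n]


def enc(password, value):
    raw = value.to_bytes(16, "big")
    return bytes(a ^ b for a, b in zip(raw, pw_mask(password, 16)))


def dec(password, blob):
    raw = bytes(a ^ b for a, b in zip(blob, pw_mask(password, 16)))
    return int.from_bytes(raw, "big")


password = b"correct horse"
a = secrets.randbelow(p - 2) + 1       # Alice's DH secret
b = secrets.randbelow(p - 2) + 1       # Bob's DH secret
msg1 = enc(password, pow(g, a, p))     # Alice -> Bob: E_pw(g^a)
msg2 = enc(password, pow(g, b, p))     # Bob -> Alice: E_pw(g^b)
k_alice = pow(dec(password, msg2), a, p)
k_bob = pow(dec(password, msg1), b, p)
assert k_alice == k_bob                # both sides hold g^(ab) mod p
```

The point of encrypting the half-keys is that an eavesdropper who guesses a wrong password decrypts to a random-looking group element and gets no way to verify the guess offline.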

Someone has already alluded to the Needham-Schroeder protocol.  It's 
instructive to review the history of it.  The original protocol was 
published in 1978; it was the first cryptographic protocol in the open 
literature.  Presciently enough, it warned that cryptographic protocol 
design seemed to be a very subtle art.  Three years later, Denning and 
Sacco showed an attack on the protocol under certain assumptions; they 
suggested changes.  In 1994, Abadi and Needham published a paper 
showing a flaw in the Denning-Sacco variant.  In 1996, Lowe published 
a new attack on the *original* Needham-Schroeder protocol.  Translated 
into modern terms -- the first paper was published before certificates 
were invented -- the faulty protocol was only three lines long!  Three 
lines of protocol, in the oldest paper in the literature, and it took 
18 years to find the flaw...
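Lowe's attack on those three lines is easy to replay symbolically. A sketch, with tuples standing in for public-key encryptions (the names `pk_A`, `pk_B`, `pk_M` and the helper functions are inventions of the example):

```python
# Symbolic trace of Lowe's 1996 attack on public-key Needham-Schroeder.
# {X}k is modelled as a tuple; only the key's "owner" can decrypt.
def enc(key, *fields):
    return ("enc", key, fields)


def dec(key, ct):
    kind, k, fields = ct
    assert kind == "enc" and k == key, "only the key's owner can decrypt"
    return fields


Na, Nb = "Na", "Nb"
# 1. Alice innocently starts a run with Mallet:
m1 = enc("pk_M", Na, "Alice")            # A -> M    : {Na, A}pk_M
# Mallet decrypts it and replays Alice's nonce at Bob, posing as Alice:
m1f = enc("pk_B", *dec("pk_M", m1))      # M(A) -> B : {Na, A}pk_B
# 2. Bob answers what he believes is Alice; Mallet forwards it unchanged
#    (he can't read it, but he doesn't need to):
m2 = enc("pk_A", Na, Nb)                 # B -> A    : {Na, Nb}pk_A
# 3. Nothing in m2 names Bob, so Alice assumes it came from Mallet and
#    dutifully returns Nb -- encrypted for Mallet:
m3 = enc("pk_M", dec("pk_A", m2)[1])     # A -> M    : {Nb}pk_M
m3f = enc("pk_B", *dec("pk_M", m3))      # M(A) -> B : {Nb}pk_B
# Bob now believes he has authenticated Alice, yet Mallet knows Na and Nb.
# Lowe's fix: message 2 becomes {Na, Nb, B}pk_A, so Alice sees the mismatch.
```

The flaw is exactly the missing identity in message 2, which is why the fix is one field.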

No, we're not a guild.  To me, "guild" has connotations of exclusivity 
and closed membership.  Anyone can develop their own protocols, and 
we're quite happy -- *if* they understand what they're doing.  That 
means reading the literature, understanding the threats, and deciding 
which you need to counter and which you can ignore.  In IPsec, Steve 
Kent -- who has far more experience with cryptographic protocols than 
most of us, since he has access to, shall we say, more than just the 
open literature -- was a strong proponent of making integrity checks 
optional in ESP.  Why, when I just finished saying that they're 
important?  Integrity checks can be expensive, and in some situations 
the attacks just don't apply.  The trick is to understand the 
tradeoffs, and *to document them*.  Leave out what you want, but tell 
people what you've left out, why you've left it out, and under what 
circumstances will that change get them into trouble.


--Steve Bellovin, http://www.research.att.com/~smb




Re: anonymous DH MITM

2003-10-01 Thread Tim Dierks
At 10:37 PM 10/1/2003, Peter Gutmann wrote:
Tim Dierks [EMAIL PROTECTED] writes:
It does not, and most SSL/TLS implementations/installations do not support
anonymous DH in order to avoid this attack.
Uhh, I think that implementations don't support DH because the de facto
standard is RSA, not because of any concern about MITM (see below).  You can
talk to everything using RSA, you can talk to virtually nothing using DH,
therefore...
Sure, although it's a chicken-and-egg thing: it's not the standard because 
the initial adopters  designers of SSL didn't have any use for it (not to 
mention the political strength of RSADSI in the era).

Many wish that anon DH was more broadly used as an intermediate security
level between bare, insecure TCP and authenticated TLS, but this is not common
at this time.
RSA is already used as anon-DH (via self-signed, snake-oil CA, expired,
invalid, etc etc certs), indicating that MITM isn't much of a concern for most
users.
There are so many different categories of users that it's probably 
impossible to make any blanket statements about "most users". It's 
certainly true that a web e-commerce vendor doesn't have much use for 
self-signed certificates, since she knows that dialogs popping up warning 
customers that they have some problem they don't understand are going to 
lead to the loss of some small fraction of sales.  (Not that she necessarily 
has any concern about the security implications: it's almost entirely a 
customer comfort and UI issue.)

 - Tim
