Re: PKI Research Workshop '04, CFP

2003-10-22 Thread Peter Gutmann
Carl Ellison [EMAIL PROTECTED] writes:

The third annual PKI Research workshop CFP has been posted.

I note that it's still not possible to use PKI to authenticate submissions to
the PKI workshop :-).

(To those people who missed the original comment a year or two back, the first
 PKI workshop required that people use plain passwords for the web-based
 submission system due to the lack of a PKI to handle the task).

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


(Fwd) Electronic Signatures: Final Report available

2003-10-22 Thread Stefan Kelm
FYI:

From:   Jos Dumortier [EMAIL PROTECTED]
Subject:Electronic Signatures: Final Report available 
Date sent:  Tue, 21 Oct 2003 18:08:33 +0200

Dear colleagues and friends,
The final report on legal and market aspects of electronic signatures in
Europe has been published by the European Commission on the e-Europe
website. The URL is
http://europa.eu.int/information_society/eeurope/2005/index_en.htm
On behalf of the core research team, I wish to thank all of you who
contributed to this study.
Kind regards
Jos Dumortier

--


http://www.Anti-Spam-Symposium.de  18.-19. November 2003

Dipl.-Inform. Stefan Kelm
Security Consultant

Secorvo Security Consulting GmbH
Albert-Nestler-Strasse 9, D-76131 Karlsruhe

Tel. +49 721 6105-461, Fax +49 721 6105-455
E-Mail [EMAIL PROTECTED], http://www.secorvo.de/
---
PGP Fingerprint 87AE E858 CCBC C3A2 E633 D139 B0D9 212B




Re: PKI Research Workshop '04, CFP

2003-10-22 Thread Sean Smith

(To those people who missed the original comment a year or two back, the first
 PKI workshop required that people use plain passwords for the web-based
 submission system due to the lack of a PKI to handle the task).

Hey, but at least the password was protected by an SSL channel,
which was authenticated by a real certificate signed by one of
the 10^4 trust roots built into your browser :)
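(For the curious, one can actually count them; a minimal Python sketch, assuming a modern Python whose ssl module loads the platform's default CA store.)

```python
import ssl

# Build a default client context, which loads the system's trusted CA store,
# then ask OpenSSL how many CA certificates ended up in it.
ctx = ssl.create_default_context()
stats = ctx.cert_store_stats()   # e.g. {'x509': ..., 'x509_ca': ..., 'crl': ...}
print("trust roots on this system:", stats["x509_ca"])
```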

---Sean

-- 
Sean W. Smith, Ph.D. [EMAIL PROTECTED]   
http://www.cs.dartmouth.edu/~sws/   (has ssl link to pgp key)
Department of Computer Science, Dartmouth College, Hanover NH USA





[Announce] GPA 0.7.0 released

2003-10-22 Thread R. A. Hettinga

--- begin forwarded text


Status:  U
Date: Wed, 22 Oct 2003 14:45:24 +0200
From: Miguel Coca [EMAIL PROTECTED]
To: [EMAIL PROTECTED], [EMAIL PROTECTED]
Mail-Followup-To: [EMAIL PROTECTED], [EMAIL PROTECTED]
User-Agent: Mutt/1.5.4i
Cc: 
Subject: [Announce] GPA 0.7.0 released
List-Id: Help and discussion among users of GnuPG <gnupg-users.gnupg.org>
List-Archive: /pipermail
List-Post: mailto:[EMAIL PROTECTED]
List-Help: mailto:[EMAIL PROTECTED]
List-Subscribe: http://lists.gnupg.org/mailman/listinfo/gnupg-users,
mailto:[EMAIL PROTECTED]
Sender: [EMAIL PROTECTED]

Hello,

We are pleased to announce the release of GPA 0.7.0

GPA is a graphical frontend for the GNU Privacy Guard (GnuPG,
http://www.gnupg.org). GPA can be used to encrypt, decrypt, and sign files,
to verify signatures and to manage the private and public keys.

GPA depends on the GnuPG Made Easy (GPGME) library, version 0.4.3 or
later.

This is a development release. Please be careful when using it on
production keys.

You can find the release here:

  ftp://ftp.gnupg.org/gcrypt/alpha/gpa/gpa-0.7.0.tar.gz
  ftp://ftp.gnupg.org/gcrypt/alpha/gpa/gpa-0.7.0.tar.gz.sig

The MD5 checksums for this release are:

  44cb60cba64a48837588ed27f8db08b2  gpa-0.7.0.tar.gz
  a6e414cce650597a24609cc374af4b4d  gpa-0.7.0.tar.gz.sig
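(A download can be checked against these digests in a few lines of Python; a minimal sketch using only the standard library, with the file assumed to sit in the current directory.)

```python
import hashlib

def md5_hex(path, chunk_size=65536):
    """Compute the MD5 hex digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected digests, copied from the announcement above.
EXPECTED = {
    "gpa-0.7.0.tar.gz": "44cb60cba64a48837588ed27f8db08b2",
    "gpa-0.7.0.tar.gz.sig": "a6e414cce650597a24609cc374af4b4d",
}

def verify(path, name):
    ok = md5_hex(path) == EXPECTED[name]
    print(f"{name}: {'OK' if ok else 'MISMATCH'}")
    return ok
```

(MD5 was still the customary release checksum in 2003; the .sig file carries the actual OpenPGP signature.)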

Noteworthy changes in version 0.7.0 (2003-10-22)


 * Long file operations no longer block GPA, so several operations can
 be run at the same time. This also means GPA does not freeze while an
 operation runs, leading to a more responsive interface.

 * The keyring editor now displays all the subkeys of the currently
 selected key. This is only visible if GPA is in advanced mode
 (available from the preferences dialog).

 * The capabilities of a key (certify, sign, encrypt) are now visible
 from the keyring editor.

 * The keyring editor can now sort keys by any column. By default,
 they are listed in the order they were imported into the keyring
 (i.e. the same order as gpg --list-keys).

 * The key list is now displayed while it is being filled, allowing
 for faster startup times.

 * A warning dialog is now displayed when an operation slows down due
 to gpg rebuilding the trust database.

 * Imports and exports from files and servers have been separated into
 different dialogs and menu options.

 * Invoking GPA with file names as arguments will open those files in
 the file manager.

 * Cosmetic and minor fixes to the file manager window.

 * GPA now remembers the brief/detailed setting view and restores it
 when GPA is started.

 * Removed all deprecated widgets. GPA is now pure GTK+ 2.2.

 * Fixed a hang on startup on PowerPC machines.

Known problems in version 0.7.0


 * Keyserver access now depends on GnuPG's plugins themselves,
 instead of on the gpg executable. Therefore, these plugins must be
 installed for keyserver access to work:

- LDAP keyservers require the gpgkeys_ldap plugin, which is
automatically compiled along with gpg if the OpenLDAP
libraries are available.

- HKP keyserver access needs gpgkeys_hkp, which is provided by
default along with GnuPG >= 1.3.0, and can be compiled for
GnuPG 1.2.x by passing the --enable-external-hkp option to
configure.

 * It's been reported that compiling a Win32 version of GPA is
 impossible right now, due to our dependence on GPGME.

-- 
Miguel Coca ([EMAIL PROTECTED])http://zipi.fi.upm.es/~e970095/
   OpenPGP: E60A CBF4 5C6F 914E B6C1  C402 8C4D C7B6 27FC 3CA8

___
Gnupg-announce mailing list
[EMAIL PROTECTED]
http://lists.gnupg.org/mailman/listinfo/gnupg-announce


___
Gnupg-users mailing list
[EMAIL PROTECTED]
http://lists.gnupg.org/mailman/listinfo/gnupg-users

--- end forwarded text


-- 
-
R. A. Hettinga mailto: [EMAIL PROTECTED]
The Internet Bearer Underwriting Corporation http://www.ibuc.com/
44 Farquhar Street, Boston, MA 02131 USA
... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience. -- Edward Gibbon, 'Decline and Fall of the Roman Empire'



Liberty groups attack plan for EU health ID card

2003-10-22 Thread R. A. Hettinga
http://news.telegraph.co.uk/news/main.jhtml?xml=/news/2003/10/21/wid21.xml&sSheet=/news/2003/10/21/ixworld.html

The Telegraph



Liberty groups attack plan for EU health ID card
By Ambrose Evans-Pritchard in Brussels
(Filed: 21/10/2003)

The European Union took its first step yesterday towards the creation of an
EU-wide health identity card able to store a range of biometric and
personal data on a microchip by 2008. Approved by Union ministers in
Luxembourg, the plastic disk will slide into the credit-card pouch of a
wallet or purse.

The European Health Insurance Card is intended to end the bureaucratic
misery of E111 forms currently used by travellers who fall ill in other EU
countries. Eventually it will replace a plethora of other complex forms
needed for longer stays.

But civil liberties groups said it was the start of a scheme for a
harmonised data chip that would quickly evolve into an EU identity card
containing intrusive information of all kinds that could be read by a
computer.

During the first phase from June 1 next year, each country will be able to
choose whether to include photographs, fingerprints and biometric data,
such as eye measurements, on the national side of the card. Britain is
opting for a minimalist version.

The European Commission said yesterday that the final phase in 2008 would
add a smart chip containing a range of data, including health files and
records of treatment received. "The ultimate objective is to have an
electronic chip on the card, as the technology improves," said a spokesman.

Tony Bunyan, the head of Statewatch, said it was part of a disturbing
Union-wide erosion of privacy since September 11 2001. "We all know where
they're heading with this," he said. "They want a single card with all our
data on one chip. It'll be a passport and driver's licence rolled into one
with everything from our national insurance numbers, bank accounts, to
health records."


-- 
-
R. A. Hettinga mailto: [EMAIL PROTECTED]
The Internet Bearer Underwriting Corporation http://www.ibuc.com/
44 Farquhar Street, Boston, MA 02131 USA
... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience. -- Edward Gibbon, 'Decline and Fall of the Roman Empire'



SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Tom Otvos
I read the WYTM thread with great interest because it dovetailed nicely
with some research I am currently involved in.  But I would like to branch
this topic onto something specific, to see what everyone here thinks.

As far as I can glean, the general consensus in WYTM is that MITM attacks
are very low (read: inconsequential) probability.  Is this *really* true?
I came across this paper last year, at the SANS reading room:

http://rr.sans.org/threats/man_in_the_middle.php

I found it both fascinating and disturbing, and I have since confirmed much
of what it was describing.  This leads me to think that an MITM attack is
not merely of academic interest but one that can occur in practice.  With
sufficiently simplified tools, this type of attack can readily be launched
by script kiddies or someone only just slightly higher on the hacker
evolutionary scale.

Having said that then, I would like to suggest that one of the really big
flaws in the way SSL is used for HTTP is that the server rarely, if ever,
requires client certs.  We all seem to agree that convincing server certs
can be crafted with ease, so that a significant portion of the Web
population can be fooled into communicating with a MITM, especially when
one takes into account Bruce Schneier's observations of legitimate uses of
server certs (as quoted by Bryce O'Whielacronx).  But as long as servers do
*no* authentication on client certs (to the point of not even asking for
them), then the essential handshaking built into SSL is wasted.
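(A sketch of what "asking for them" looks like in practice, using Python's ssl module; the certificate and CA paths are hypothetical placeholders.)

```python
import ssl

def mutual_tls_server_context(certfile=None, keyfile=None, client_ca=None):
    """Build a server-side TLS context that *requires* a client certificate.

    certfile/keyfile/client_ca are placeholder paths; real deployments would
    pass the server's own key pair and the CA(s) trusted to sign client certs.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if certfile:
        ctx.load_cert_chain(certfile, keyfile)   # server's own certificate
    if client_ca:
        ctx.load_verify_locations(client_ca)     # CAs trusted for client certs
    # CERT_REQUIRED makes the handshake itself fail unless the client
    # presents a certificate chaining to a loaded CA -- so authentication
    # is tied to channel setup rather than bolted on afterwards.
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx
```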

I can think of numerous online examples where requiring client certs would
be a good thing: online banking and stock trading are two examples that
immediately leap to mind.  So the question is, why are client certs not
more prevalent?  Is it simply an ease-of-use thing?  Since the Internet
threat model upon which SSL is based makes the assumption that the channel
is *not* secure, why is MITM not taken more seriously?  Why, if SSL is
designed to solve a problem that can be solved, namely securing the channel
(and people are content with just that), are not more people jumping up and
down yelling that it is being used incorrectly?

Am I missing something obvious here?  I look forward to any comments you might have.

-- Tom Otvos

Don't think you are. Know you are. - Morpheus




Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Ian Grigg
Tom Otvos wrote:

 As far as I can glean, the general consensus in WYTM is that MITM attacks
 are very low (read: inconsequential) probability.  Is this *really* true?


The frequency of MITM attacks is very low, in the sense
that there are few or no reported occurrences.  This
makes it a challenge to respond to in any measured way.


 I came across this paper last year, at the SANS reading room:
 
 http://rr.sans.org/threats/man_in_the_middle.php
 
 I found it both fascinating and disturbing, and I have since confirmed
 much of what it was describing.  This leads me to think that an MITM
 attack is not merely of academic interest but one that can occur in
 practice.


Nobody doubts that it can occur, and that it *can*
occur in practice.  It is whether it *does* occur
that is where the problem lies.

The question is one of costs and benefits - how much
should we spend to defend against this attack?  How
much do we save if we do defend?

[ Mind you, the issues that are raised by the paper
are to do with MITM attacks, when SSL/TLS is employed
in an anti-MITM role.  (I only skimmed it briefly I
could be wrong.)  We in the SSL/TLS/secure browsing
debate have always assumed that SSL/TLS when fully
employed covers that attack - although it's not the
first time I've seen evidence that the assumption
is unwarranted. ]


 Having said that then, I would like to suggest that one of the really big
 flaws in the way SSL is used for HTTP is that the server rarely, if ever,
 requires client certs.  We all seem to agree that convincing server certs
 can be crafted with ease, so that a significant portion of the Web
 population can be fooled into communicating with a MITM, especially when
 one takes into account Bruce Schneier's observations of legitimate uses
 of server certs (as quoted by Bryce O'Whielacronx).  But as long as
 servers do *no* authentication on client certs (to the point of not even
 asking for them), then the essential handshaking built into SSL is wasted.

 I can think of numerous online examples where requiring client certs
 would be a good thing: online banking and stock trading are two examples
 that immediately leap to mind.  So the question is, why are client certs
 not more prevalent?  Is it simply an ease-of-use thing?


I think the failure of client certs has the same
root cause as the failure of SSL/TLS to branch
beyond its mandated role of protecting e-
commerce.  Literally, the requirement that
the cert be supplied (signed) by a third party
killed it dead.  If there had been a button on
every browser that said "generate self-signed
client cert now" then the whole world would be
using them.

Mind you, the whole client cert thing was a bit
of an afterthought, wasn't it?  The orientation
that it was at server discretion also didn't help.


 Since the Internet threat model upon which SSL is based makes the
 assumption that the channel is *not* secure, why is MITM not taken
 more seriously?


People often say that there are no successful MITM
attacks because of the presence of SSL/TLS !

The existence of the bugs in Microsoft browsers
puts the lie to this - literally, nobody has bothered
with MITM attacks, simply because they are way way
down on the average crook's list of sensible things
to do.

Hence, that rant was in part intended to separate
out 1994's view of threat models to today's view
of threat models.  MITM is simply not anywhere in
sight - but a whole heap of other stuff is!

So, why bother with something that isn't a threat?
Why can't we spend more time on something that *is*
a threat, one that occurs daily, even hourly, some
times?


 Why, if SSL is designed to solve a problem that can be solved, namely
 securing the channel (and people are content with just that), are not
 more people jumping up and down yelling that it is being used incorrectly?


Because it's not necessary.  Nobody loses anything
much over the wire, that we know of.  There are
isolated cases of MITMs in other areas, and in
hacker conferences for example.  But, if 10 bit
crypto and ADH was used all the time, it would
still be the least of all risks.


iang



RE: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Tom Otvos

 So what purpose would client certificates address? Almost all of the use
 of SSL domain name certs is to hide a credit card number when a consumer
 is buying something. There is no requirement for the merchant to
 identify and/or authenticate the client ... the payment infrastructure
 authenticates the financial transaction and the server is concerned
 primarily with getting paid (which comes from the financial institution)
 not who the client is.


The CC number is clearly not hidden if there is a MITM.  I think the "I got
my money so who cares where it came from" argument is not entirely a fair
representation.  Someone ends up paying for abuses, even if it is us in CC
fees, otherwise why bother encrypting at all?  But that is beside the point.

 So, there are some infrastructures that have web servers that want to
 authenticate clients (for instance online banking). They currently
 establish the SSL session and then authenticate the user with
 userid/password against an online database.


These are, I think, more important examples and again, if there is a MITM,
then doing additional authentication post-channel setup is irrelevant.
These can be easily replayed after the attack has completed.  The
authentication *should* be deeply tied to channel setup, should it not?  Or
stated another way, having chained authentication where the first link in
the chain is demonstrably weak doesn't seem to achieve an awful lot.


 There was an instance of a bank issuing client certificates for use in
 online banking. At one time they claimed to have the largest issued PKI
 client certificates (aka real PKI as opposed to manufactured
 certificates).

 However, they discovered

 1) the certificates had to be reduced back to relying-party-only
 certificates with nothing but an account number (because of numerous
 privacy and liability concerns)

 2) the certificates quickly became stale

 3) they had to look up the account and went ahead and did a separate
 password authentication  in part because the certificates were
 stale.

 They somewhat concluded that the majority of client certificate
 authentication aren't being done because they want the certificates ...
 it is because the available COTS software implements it that way (if you
 want to use public key) ... but not because certificates are in anyway
 useful to them (in fact, it turns out that the certificates are
 redundant and superfluous ... and because of the staleness issue
 resulted in them also requiring passwords).


Fascinating!  Can you please tell me what bank that was?

-- tomo



Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread John S. Denker
On 10/22/2003 04:33 PM, Ian Grigg wrote:

 The frequency of MITM attacks is very low, in the sense that there
 are few or no reported occurrences.
We have a disagreement about the facts on this point.
See below for details.
 This makes it a challenge to
 respond to in any measured way.
We have a disagreement about the philosophy of how to
measure things.  One should not design a bridge according
to a simple measurement of the amount of cross-river
traffic in the absence of a bridge.  One should not approve
a launch based on the observed fact that previous instances
of O-ring failures were non-fatal.
Designers in general, and cryptographers in particular,
ought to be proactive.
But this philosophy discussion is a digression, because
we have immediate practical issues to deal with.
 Nobody doubts that it can occur, and that it *can* occur in practice.
 It is whether it *does* occur that is where the problem lies.
According to the definitions I find useful, MITM is
basically a double impersonation.  For example,
Mallory impersonates PayPal so as to get me to
divulge my credit-card details, and then impersonates
me so as to induce my bank to give him my money.
This threat is entirely within my threat model.  There
is nothing hypothetical about this threat.  I get 211,000
hits from
  http://www.google.com/search?q=ID-theft
SSL is distinctly less than 100% effective at defending
against this threat.  It is one finger in a dike with
multiple leaks.  Client certs arguably provide one
additional finger ... but still multiple leaks remain.
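(The double impersonation above can be made concrete with a toy unauthenticated Diffie-Hellman exchange in Python; the modulus is the Mersenne prime 2^127 - 1 and the private exponents are arbitrary illustrative values, so nothing here is real-strength crypto.)

```python
# Toy unauthenticated Diffie-Hellman, showing Mallory's double impersonation.
# Illustration only: tiny fixed private exponents and no authentication.
P = 2**127 - 1   # a Mersenne prime; large enough for the demonstration
G = 3

def dh_pub(priv):
    return pow(G, priv, P)

def dh_shared(their_pub, priv):
    return pow(their_pub, priv, P)

a, b = 1234567, 7654321        # Alice's and Bob's private exponents
m1, m2 = 1111111, 2222222      # Mallory's, one per victim

# Mallory intercepts each public value and substitutes his own:
alice_key = dh_shared(dh_pub(m1), a)   # what Alice thinks she shares with Bob
mallory_a = dh_shared(dh_pub(a), m1)   # ... is actually shared with Mallory
bob_key   = dh_shared(dh_pub(m2), b)   # what Bob thinks he shares with Alice
mallory_b = dh_shared(dh_pub(b), m2)   # ... is also shared with Mallory

assert alice_key == mallory_a and bob_key == mallory_b
assert alice_key != bob_key    # Alice and Bob never established a common key
```

Mallory can now decrypt, read, and re-encrypt traffic in both directions; nothing in the key exchange itself exposes him, which is exactly why the authentication step (certificates, or whatever plays their role) matters.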
==

The expert reader may have noticed that there are
other elements to the threat scenario I outlined.
For instance, I interact with Mallory for one seemingly
trivial transaction, and then he turns around and
engages in numerous and/or large-scale transactions.
But this just means we have more than one problem.
A good system would be robust against all forms
of impersonation (including MITM) *and* would be
robust against replays *and* would ensure that
trivial things and large-scale things could not
easily be confused.  Et cetera.


RE: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Anne Lynn Wheeler
At 05:08 PM 10/22/2003 -0400, Tom Otvos wrote:

The CC number is clearly not hidden if there is a MITM.  I think the "I
got my money so who cares where it came from" argument is not entirely a
fair representation.  Someone ends up paying for abuses, even if it is us
in CC fees, otherwise why bother encrypting at all?  But that is beside
the point.
the statement was that an SSL domain name certificate provides:

1) am i really talking to who I think I'm talking to
2) encrypted channel

obviously #1 addresses MITM (am i really talking to who I think I'm
talking to).

The issue for CC is that it really is a shared secret and is extremely
vulnerable ... as I've commented before

1) CC needs to be in the clear in a dozen or so business processes
2) much simpler to harvest a whole merchant file with possibly millions of
CC numbers in about the same effort as to eavesdrop one off the net (even
if there was no SSL) ... return on investment ... for approx. the same
amount of effort get one CC number or get millions
3) all the instances in the press are in fact involved with harvesting
large files of numbers ... not one or two at a time off the wire
4) burying the earth in miles of crypto still wouldn't eliminate the
current shared-secret CC problem

slightly related ... security proportional to risk:
http://www.garlic.com/~lynn/2001h.html#61
so the requirement given the X9 financial standards working group X9A10
http://www.x9.org/
was to preserve the integrity of the financial infrastructure for all 
electronic retail payment (regardless of kind, origin, method, etc). The 
result was X9.59 standard
http://www.garlic.com/~lynn/index.html#x959

which effectively defines a digitally signed, authenticated transaction
... no certificate required ... and the CC number used in X9.59
authenticated transactions shouldn't be used in non-authenticated
transactions. Since the transaction is now a digitally signed transaction
and the CC# can't be used in non-authenticated transactions ... you can
listen in on X9.59 transactions and harvest all the CC# that you want
and it doesn't help with doing fraudulent transactions. In effect, X9.59
changes the business rules so that CC# no longer need to be treated as
shared secrets.
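(The flavor of a certificate-less signed transaction can be sketched with textbook RSA in Python; the primes are tiny and the scheme deliberately naive, purely to illustrate the changed business rule, not the actual X9.59 format.)

```python
import hashlib

# Textbook RSA with toy primes -- illustration only, never use for real.
p, q = 10007, 10009
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (needs Python 3.8+)

def h(msg: bytes) -> int:
    # Hash reduced mod n so the toy modulus can sign it.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    return pow(h(msg), d, n)

def verify(msg: bytes, sig: int) -> bool:
    return pow(sig, e, n) == h(msg)

# The bank registers the public key (n, e) against the account -- no
# certificate involved.  An eavesdropper who harvests the account number,
# or even a whole signed transaction, cannot author a *different* one.
txn = b"pay merchant 42.00 from account 12345, serial 987"
sig = sign(txn)
assert verify(txn, sig)
assert not verify(b"pay attacker 9999.00 from account 12345, serial 988", sig)
```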

misc. past stuff about ssl domain name certificates
http://www.garlic.com/~lynn/subpubkey.html#sslcert
misc. past stuff about relying-party-only certificates
http://www.garlic.com/~lynn/subpubkey.html#rpo
misc. past stuff about using certificateless digital signatures in radius 
authentication
http://www.garlic.com/~lynn/subpubkey.html#radius

misc. past stuff about using certificateless digital signatures in kerberos 
authentication
http://www.garlic.com/~lynn/subpubkey.html#kerberos

misc. fraud & exploits (including some number of cc related press
announcements)
http://www.garlic.com/~lynn/subtopic.html#fraud

some discussion of early SSL deployment for what is now referred to as 
electronic commerce
http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3

--
Anne & Lynn Wheeler    http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
 



RE: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Tom Otvos

 Nobody doubts that it can occur, and that it *can*
 occur in practice.  It is whether it *does* occur
 that is where the problem lies.


Or, whether it gets reported if it does occur.

 The question is one of costs and benefits - how much
 should we spend to defend against this attack?  How
 much do we save if we do defend?


Absolutely true.  If the only effect of a MITM is loss of privacy, then
that is certainly a lower-priority item to fix than some quick cash scheme.
So the threat model needs to clearly define who the bad guys are, and what
their motivations are.  But then again, if I am the victim of a MITM
attack, even if the bad guy did not financially gain directly from the
attack (as in, getting my money or something for free), I would consider
loss of privacy a significant thing.  What if an attacker were paid by
someone (indirect financial gain) to ruin me by buying a bunch of stock on
margin?  Maybe not the best example, but you get the idea.  It is not an
attack that affects millions of people, but to the person involved, it is
pretty serious.  Shouldn't the server in this case help mitigate this type
of attack?


 So, why bother with something that isn't a threat?
 Why can't we spend more time on something that *is*
 a threat, one that occurs daily, even hourly, some
 times?


I take your point, but would suggest "isn't a threat" be replaced by
"doesn't threaten the majority".  And are we at a point where it needs to
be a binary thing -- fix this OR that but NOT both?

-- tomo



Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread David Wagner
Tom Otvos wrote:
As far as I can glean, the general consensus in WYTM is that MITM
attacks are very low (read: inconsequential) probability.  Is this
*really* true?

I'm not aware of any such consensus.
I suspect you'd get plenty of debate on this point.
But in any case, widespread exploitation of a vulnerability
shouldn't be a prerequisite to deploying countermeasures.

If we see a plausible future threat and the stakes are high enough, it is
often prudent to deploy defenses in advance against the possibility that
attackers will exploit it.  If we wait until the attacks are widespread,
it may be too late to stop them.  It often takes years (or possibly a
decade or more: witness IPSec) to design and widely deploy effective
countermeasures.

It's hard to predict with confidence which of the many vulnerabilities
will be popular among attackers five years from now, and I've been very wrong,
in both directions, many times.  In recognition of our own fallibility at
predicting the future, the conclusion I draw is that it is a good idea
to be conservative.



Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Perry E. Metzger

Ian Grigg [EMAIL PROTECTED] writes:
 Nobody doubts that it can occur, and that it *can*
 occur in practice.  It is whether it *does* occur
 that is where the problem lies.
 
 The question is one of costs and benefits - how much
 should we spend to defend against this attack?  How
 much do we save if we do defend?

I have to say I find this argument very odd.

You argue that TLS defends against man in the middle attacks, but that
we do not observe man in the middle attacks, so why do we need the
defense?

Well, we don't observe the attacks much because they are hard to
undertake. Make them easy and I am sure they would happen
frequently. Protocols subject to such attacks are frequently subjected
to them, and there are whole suites of tools you can download to help
you in intercepting traffic to facilitate them.

You argue that we have to make a cost/benefit analysis, but we're
talking about computer algorithms where the cost is minuscule if it
is measurable at all. Why should we use a second-best practice when a
best practice is in reality no more expensive?

It is one thing to argue that a bridge does not need another million
dollars worth of steel, but who can rationally argue that we should
use different, less secure algorithms when there is no obvious
benefit, either in computation, in development costs or in license
fees (since TLS is after all free of any such fees), and the
alternatives are less secure? In such a light, a cost/benefit analysis
leads inexorably to "Use TLS" -- second best saves nothing and might
cost a lot in lower security.

Some of your arguments seem to come down to "there wasn't enough
thought given to the threat model." That might have been true when the
SSL/TLS process began, but a bunch of fairly smart people worked on
it, and we've ended up with a pretty solid protocol that is at worst
more secure than you might absolutely need but which covers the threat
model in most of the cases in which it might be used. You've yet to
argue that the threat model is insufficiently secure -- only that it
might be more than one needs -- so what is the harm?

Honestly the only really good argument against TLS I can think of is
that if one wants to use something like SSH keying instead of X.509
keying the detailed protocol doesn't support it very well, but the
protocol can be trivially adapted to do what one wants and the
underlying security model is almost exactly what one wants in a
majority of cases. Such an adaptation might be a fine idea, but it can
be done without giving up any of the fine analysis that went into TLS.

Actually, there is one other argument against TLS -- it does not
protect underlying TCP signaling the way that IPSec does. However,
given where it sits in the stack, you can't fault it for that.

 I think the failure of client certs has the same
 root cause as the failure of SSL/TLS to branch
 beyond its mandated role of protecting e-
 commerce.  Literally, the requirement that
 the cert be supplied (signed) by a third party
 killed it dead.  If there had been a button on
 every browser that said "generate self-signed
 client cert now" then the whole world would be
 using them.

This is not a failure of TLS. This is a failure of the browsers and
web servers. There is no reason browsers couldn't do exactly that,
tomorrow, and that sites couldn't operate on an SSH-style "accept only what
you saw the first time" model. TLS is fully capable of supporting that.
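(The SSH model described here is trust-on-first-use; a minimal pin-store sketch in Python, where the host names and certificate bytes are made-up placeholders.)

```python
import hashlib

class PinStore:
    """Trust-on-first-use: remember a peer's certificate the first time it is
    seen, and reject any later connection presenting a different one -- the
    SSH known_hosts model, applied to TLS peers."""

    def __init__(self):
        self.pins = {}   # hostname -> certificate fingerprint

    @staticmethod
    def fingerprint(der_cert: bytes) -> str:
        return hashlib.sha256(der_cert).hexdigest()

    def check(self, host: str, der_cert: bytes) -> bool:
        fp = self.fingerprint(der_cert)
        if host not in self.pins:
            self.pins[host] = fp      # first sight: pin it
            return True
        return self.pins[host] == fp  # later sights must match the pin

store = PinStore()
assert store.check("bank.example", b"cert-A")      # first use: pinned
assert store.check("bank.example", b"cert-A")      # same cert: accepted
assert not store.check("bank.example", b"cert-B")  # changed cert: possible MITM
```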

If you want to argue against X.509, that might be a fine and quite
reasonable argument. I would happily argue against lots of X.509
myself. However, X.509 is not TLS, and TLS's properties are not those
of X.509.

-- 
Perry E. Metzger[EMAIL PROTECTED]



Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Thor Lancelot Simon
On Wed, Oct 22, 2003 at 05:08:32PM -0400, Tom Otvos wrote:
 
  So what purpose would client certificates address? Almost all of the use
  of SSL domain name certs is to hide a credit card number when a consumer
  is buying something. There is no requirement for the merchant to
  identify and/or authenticate the client ... the payment infrastructure
  authenticates the financial transaction and the server is concerned
  primarily with getting paid (which comes from the financial institution)
  not who the client is.
 
 
 The CC number is clearly not hidden if there is a MITM.

Can you please posit an *exact* situation in which a man-in-the-middle
could steal the client's credit card number even in the presence of a
valid server certificate?  Can you please explain *exactly* how using a
client-side certificate rather than some other form of client authentication
would prevent this?

Thor



Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Perry E. Metzger

[EMAIL PROTECTED] (David Wagner) writes:
 Tom Otvos wrote:
 As far as I can glean, the general consensus in WYTM is that MITM
 attacks are very low (read:
 inconsequential) probability.  Is this *really* true?
 
 I'm not aware of any such consensus.

I will state that MITM attacks are hardly a myth. They're used by
serious attackers when the underlying protocols permit it, and I've
witnessed them in the field with my own two eyes. Hell, they're even
well enough standardized that I've seen them in use on conference
networks. Some such attacks have been infamous.

MITM attacks are not currently the primary means of stealing credit
card numbers, both because TLS makes MITM attacks harder to mount and
because it is usually easier just to break in to a poorly defended web
server and steal the card numbers directly. However, that is not a
reason to remove anti-MITM defenses from TLS -- it is in fact a reason
to think of them as a success.

 I suspect you'd get plenty of debate on this point.
 But in any case, widespread exploitation of a vulnerability
 shouldn't be a prerequisite to deploying countermeasures.

Indeed. Imagine if we waited until airplanes exploded regularly to
design them so they would not explode, or if we had designed our first
suspension bridges by putting up some randomly selected amount of
cabling and seeing if the bridge collapsed. That's not how good
engineering works.

 If we see a plausible future threat and the stakes are high enough,
 it is often prudent to deploy defenses in advance against the
 possibility that attackers will exploit it.

This is especially true when the marginal cost of the defenses is near
zero. The design cost of the countermeasures was high, but once
designed they can be replicated with no greater expense than that of
any other protocol.

 It's hard to predict with confidence which of the many
 vulnerabilities will be popular among attackers five years from now,
 and I've been very wrong, in both directions, many times.  In
 recognition of our own fallibility at predicting the future, the
 conclusion I draw is that it is a good idea to be conservative.

Ditto.

-- 
Perry E. Metzger[EMAIL PROTECTED]



Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Ian Grigg
Tom Weinstein wrote:
 
 Ian Grigg wrote:
 
  Nobody doubts that it can occur, and that it *can* occur in practice.
  It is whether it *does* occur that is where the problem lies.
 
 This sort of statement bothers me.
 
 In threat analysis, you have to base your assessment on capabilities,
 not intentions. If an attack is possible, then you must guard against
 it. It doesn't matter if you think potential attackers don't intend to
 attack you that way, because you really don't know if that's true or not
 and they can always change their minds without telling you.

In threat analysis, you base your assessment on
economics of what is reasonable to protect.  It
is perfectly valid to decline to protect against
a possible threat, if the cost thereof is too high,
as compared against the benefits.

This is the reason that we cannot simply accept
the possible as a basis for engineering of any
form, let alone cryptography.  And this is the
reason why, if we can't measure it, then we are
probably justified in assuming it's not a threat
we need to worry about.

(Of course, anecdotal evidence helps in that
respect, hence there is a lot of discussion
about MITMs in other forums.)

iang

Here's Eric Rescorla's words on this:

http://www.iang.org/ssl/rescorla_1.html

The first thing that we need to do is define our *threat model*.
A threat model describes resources we expect the attacker to
have available and what attacks the attacker can be expected
to mount.  Nearly every security system is vulnerable to some
threat or another.  To see this, imagine that you keep your
papers in a completely unbreakable safe.  That's all well and
good, but if someone has planted a video camera in your office
they can see your confidential information whenever you take it
out to use it, so the safe hasn't bought you that much.

Therefore, when we define a threat model, we're concerned
not only with defining what attacks we are going to worry
about but also those we're not going to worry about.
Failure to take this important step typically leads to
complete deadlock as designers try to figure out how to
counter every possible threat.  What's important is to
figure out which threats are realistic and which ones we
can hope to counter with the tools available.



MITM attacks

2003-10-22 Thread l . crypto
Take many grains of salt before concluding that MITM attacks are either
hard or don't happen.

It is just that the environment for them is not the Internet per se, but
modern switched LANs.   The basic trick to monitoring someone's LAN traffic
is to convince the ARP machinery that the MITM MAC is associated with
the target's IP address, and then to forward the intercepted traffic to
the real MAC address.

This sort of thing is also one approach to getting into wireless lans.

So given switched LANs with wireless access points, (drive up access)
I would not be surprised at a rise in MITM attacks, even with
no crypto involved.

-Larry
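The ARP trick described above can be made concrete. A forged ARP "is-at" reply tells the victim's ARP cache that the target IP (typically the gateway) lives at the attacker's MAC. The sketch below only *builds* the 42-byte Ethernet+ARP frame and sends nothing; all addresses are made up for illustration:

```python
import struct

def arp_reply(attacker_mac, victim_mac, spoofed_ip, victim_ip):
    """Build an Ethernet frame carrying an ARP reply that binds
    spoofed_ip to attacker_mac in the victim's ARP cache."""
    eth = victim_mac + attacker_mac + b"\x08\x06"    # dst MAC, src MAC, EtherType=ARP
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 2)  # Ethernet/IPv4, opcode 2 = reply
    arp += attacker_mac + spoofed_ip                 # sender: attacker claims spoofed IP
    arp += victim_mac + victim_ip                    # target: the victim
    return eth + arp

mac_attacker = bytes.fromhex("020000000001")   # made-up addresses
mac_victim   = bytes.fromhex("020000000002")
ip_gateway   = bytes([192, 168, 1, 1])         # IP the attacker impersonates
ip_victim    = bytes([192, 168, 1, 50])

frame = arp_reply(mac_attacker, mac_victim, ip_gateway, ip_victim)
print(len(frame))  # 42: 14-byte Ethernet header + 28-byte ARP payload
```

Because ARP has no authentication at all, the victim's stack simply updates its cache; the attacker then forwards intercepted traffic to the real MAC, exactly as described.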




Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Perry E. Metzger

Ian Grigg [EMAIL PROTECTED] writes:
 In threat analysis, you base your assessment on
 economics of what is reasonable to protect.  It
 is perfectly valid to decline to protect against
 a possible threat, if the cost thereof is too high,
 as compared against the benefits.

The cost of MITM protection is, in practice, zero. Indeed, if you
wanted to produce an alternative to TLS without MITM protection, you
would have to spend lots of time and money crafting and evaluating a
new protocol that is still reasonably secure without that
protection. One might therefore call the cost of using TLS, which may
be used for free, to be substantially lower than that of an
alternative.

How low does the risk have to get before you will be willing not just
to pay NOT to protect against it? Because that is, in practice, what
you would have to do. You would actually have to burn money to get
lower protection. The cost burden is on doing less, not on doing
more.

There is, of course, also the cost of what happens when someone MITM's
you.

You keep claiming we have to do a cost benefit analysis, but what is
the actual measurable financial benefit of paying more for less
protection?

Perry



RE: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Anne Lynn Wheeler
At 05:42 PM 10/22/2003 -0400, Tom Otvos wrote:

Absolutely true.  If the only effect of a MITM is loss of privacy, then 
that is certainly a
lower-priority item to fix than some quick cash scheme.  So the threat 
model needs to clearly
define who the bad guys are, and what their motivations are.  But then 
again, if I am the victim of
a MITM attack, even if the bad guy did not financially gain directly from 
the attack (as in, getting
my money or something for free), I would consider loss of privacy a 
significant thing. What if an
attacker were paid by someone (indirect financial gain) to ruin me by 
buying a bunch of stock on
margin?  Maybe not the best example, but you get the idea.  It is not an 
attack that affects
millions of people, but to the person involved, it is pretty 
serious.  Shouldn't the server in
this case help mitigate this type of attack?


ok, the original SSL domain name certificate for what became electronic 
commerce addressed two things:

1) am I really talking to the server that I think I'm talking to
2) encrypted session.

so the attack in #1 was plausibly some impersonation ... either MITM or 
straight impersonation. The issue was that there was a perceived 
vulnerability in the domain name infrastructure: somebody could 
contaminate the domain name look-up and get the ip-address for the client 
redirected to the impersonator.

The SSL domain name certificates carry the original domain name ... the 
client validates the domain name certificate with one of the public keys in 
the browser CA table ... and then validates that the server that it is 
communicating with can sign/encrypt something with the private key that 
corresponds to the public key carried in the certificate ... and then the 
client compares the domain name in the certificate with the URL that the 
browser used. In theory, if all of that works ... then it is highly 
unlikely that the client is talking to the wrong ip-address (since it 
should be the ip-address of the server that the certificate was issued for).
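The final comparison step above — certificate domain name against the URL's host — can be sketched as a small matcher. This is a deliberate simplification of what browsers actually implement (real clients also walk subjectAltName lists and follow the full RFC 2818/6125 rules):

```python
def hostname_matches(cert_name, url_host):
    """Compare the domain name in a certificate against the host part
    of the URL the browser used.  Allows one left-most wildcard label,
    a simplification of real browser matching rules."""
    cert_name, url_host = cert_name.lower(), url_host.lower()
    if cert_name == url_host:
        return True
    if cert_name.startswith("*."):
        suffix = cert_name[1:]                 # e.g. ".example.com"
        # Match exactly one extra label: shop.example.com matches
        # *.example.com, but example.com and a.b.example.com do not.
        return url_host.endswith(suffix) and "." not in url_host[:-len(suffix)]
    return False

print(hostname_matches("*.example.com", "shop.example.com"))   # True
print(hostname_matches("*.example.com", "a.b.example.com"))    # False
```

If any link in the chain — CA signature, proof of private-key possession, or this name comparison — is skipped, the "am I talking to the right server" guarantee evaporates.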

So what are the subsequent problems:

1) the original idea was that the whole shopping experience was protected 
by the SSL domain name certificate ... preventing MITM and impersonation 
attacks. However, it was found that SSL overhead was way too expensive and 
so the servers dropped back to just using it for encrypting the payment 
portion. This means that the client does all their shopping ... with the 
real server or the imposter ... and then clicks on a button to check 
out that drops the client into SSL for the credit card number. The problem 
is that if it is an imposter ... the button likely carries a URL for which 
the imposter has a valid certificate.

or

2) the original concern was possible ip-address hijacking in the domain 
name infrastructure ... so the correct domain name maps to the wrong ip 
address ... and the client goes to an imposter (whether or not the 
imposter needs to do an actual MITM). The problem is that when 
somebody approaches a CA for a certificate ... the CA has to consult the 
domain name system as to the true owner of the domain name. It turns out 
that integrity issues in the domain name infrastructure not only can result 
in ip-address take-over ... but also domain name take-over. The imposter 
exploits integrity flaws in the domain name infrastructure and does a 
domain name take-over ... approaches a CA for an SSL domain name 
certificate ... and the CA issues it ... because the domain name 
infrastructure claims it is the true owner.

So somewhat from the CA industry ... there is a proposal that people 
register a public key in the domain name database when they obtain a domain 
name. After that ... all communication is digitally signed and validated 
with the database entry public key (notice this is certificate-less). This 
has the attribute of improving the integrity of the domain name 
infrastructure ... so the CA industry can trust the domain name 
infrastructure's integrity, and the rest of the world can trust the SSL 
domain name certificates.

This has the opportunity for simplifying the SSL domain name certificate 
requesting process. The entity requesting the SSL domain name certificate 
digitally signs the request (certificate-less of course). The CA 
validates the SSL domain name certificate request by retrieving the valid 
owner's public key from the domain name infrastructure database to 
authenticate the request. This is a lot more efficient and has fewer 
vulnerabilities than the current infrastructure.
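The lookup-and-verify flow sketched above can be modeled in a few lines. Everything here is illustrative: the registry is an in-memory dict, and HMAC stands in for the asymmetric signature a real deployment would use (the registry would hold a *public* key), since the point is the flow — key registered with the domain name, CA validates the certificate request against it — not the signature algorithm:

```python
import hashlib
import hmac

# Stand-in for the domain name infrastructure database: a key is
# registered at the same time as the domain name itself.
dns_registry = {"example.org": b"key-registered-with-the-domain"}

def sign_request(key, request):
    """Sign a certificate request (HMAC as a stand-in for a real
    public-key signature)."""
    return hmac.new(key, request, hashlib.sha256).hexdigest()

def ca_validates(domain, request, signature):
    """The CA retrieves the registered key from the domain name
    database and checks the certificate-less signed request against it."""
    key = dns_registry.get(domain)
    if key is None:
        return False
    return hmac.compare_digest(sign_request(key, request), signature)

req = b"please issue an SSL cert for example.org"
sig = sign_request(dns_registry["example.org"], req)
print(ca_validates("example.org", req, sig))      # True
print(ca_validates("example.org", req, "bogus"))  # False
```

Note that once the registered key is trusted, the CA's identity check reduces to a single signature verification — no out-of-band paperwork at all.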

The current infrastructure has some identification of the domain name owner 
recorded in the domain name infrastructure database. When an entity 
requests a SSL domain name certificate ... they provide additional 
identification to the CA. The CA now has to retrieve the information from 
the domain name infrastructure database and map it to some real world 
identification. They then have to take the requester's information and also 
map it to 

Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Tom Weinstein
Ian Grigg wrote:

Tom Weinstein wrote:
 

In threat analysis, you have to base your assessment on capabilities,
not intentions. If an attack is possible, then you must guard against
it. It doesn't matter if you think potential attackers don't intend to
attack you that way, because you really don't know if that's true or not
and they can always change their minds without telling you.
   

In threat analysis, you base your assessment on
economics of what is reasonable to protect.  It
is perfectly valid to decline to protect against
a possible threat, if the cost thereof is too high,
as compared against the benefits.
This is the reason that we cannot simply accept
the possible as a basis for engineering of any
form, let alone cryptography.  And this is the
reason why, if we can't measure it, then we are
probably justified in assuming it's not a threat
we need to worry about.
The economic view might be a reasonable view for an end-user to take, 
but it's not a good one for a protocol designer. The protocol designer 
doesn't have an economic model for how end-users will end up using the 
protocol, and it's dangerous to assume one. This is especially true for 
a protocol like TLS that is intended to be used as a general solution 
for a wide range of applications.

In some ways, I think this is something that all standards face. For any 
particular application, the standard might be less cost effective than a 
custom solution. But it's much cheaper to design something once that 
works for everyone off the shelf than it would be to custom design a new 
one each and every time.

--
Give a man a fire and he's warm for a day, but set   | Tom Weinstein
him on fire and he's warm for the rest of his life.  | [EMAIL PROTECTED] 



TLS, costs, and threat models

2003-10-22 Thread Perry E. Metzger

We've heard a bit recently from certain parties, especially Ian Grigg,
claiming that one should use a cost/benefit analysis before using
TLS. The claim seems to be that it provides more protection than one
really needs.

However, there are many perfectly free (in both senses) TLS
implementations, and the added protocol components needed for
providing the protection that one may or may not need do not
substantially change the cost in practice of running the protocols.

We also do not have well studied protocols that provide lesser
protection, or a very good analysis of what the security properties
are of such theoretical protocols. Developing such protocols with
lower standards may be a substantial challenge if our past experience
with protocol development is any indication.

I would argue that most of the comments made claiming that TLS has a
cost associated with it are therefore specious. 

To those making these public statements, this is all a theoretical
concern. To me, however, this is a day to day nightmare I face in my
practice. Ad hoc protocols are almost universally filled with
problems.

A few days ago I saw yet another example of an in house ad hoc
protocol with what I can only describe as extraordinarily poor
properties, which is being used to protect financial transactions. The
author of the protocol is a very smart guy who knew nothing about
cryptography. He could have easily used TLS, and have used it with
much less effort than he actually spent, and would not have had these
problems. However, he didn't think he needed something that
complicated, so in the typical way of such things he spent more time
and effort doing something far less secure.

This is, sadly, a routine sort of discovery.

To Ian, it is important that we do a detailed analysis and see whether
or not the free (in both senses) and well understood protocol is cost
effective, never mind that the cost is in practice zero, or even
negative when compared to other protocols.

To me, it is important that we have at-hand cryptographic tools that
cover the needs of a wide variety of users, many of whom do not
understand the implications of their design decisions and who are not
going to gain anything from using a not-really-but-we'll-pretend less
complicated protocol.

By the way, I could make far more money custom designing protocols for
the needs of each customer. If there really was a Guild of
Cryptographers we'd be making money left and right by following the
process that Ian in effect recommends and trying to tailor ad hoc
protocols for everyone carefully designed to do no more than
necessary, and then taking yet more money fixing the mess when it
turns out that we're wrong.

The era of ad hoc protocols, however, has passed, just as the era
where every computer-using company built its own new languages and
operating systems has long passed. We no longer have economic
justification for that sort of thing, and we no longer have economic
justification for everyone rolling their own protocols either.


Perry



Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Ian Grigg
Perry E. Metzger wrote:
 
 Ian Grigg [EMAIL PROTECTED] writes:
  In threat analysis, you base your assessment on
  economics of what is reasonable to protect.  It
  is perfectly valid to decline to protect against
  a possible threat, if the cost thereof is too high,
  as compared against the benefits.
 
 The cost of MITM protection is, in practice, zero.


Not true!  The cost is from 10 million dollars to
100 million dollars per annum.  Those certs cost
money, Perry!  All that sysadmin time costs money,
too!  And all that managerial time trying to figure
out why the servers don't just work.  All those
consultants that come in and look after all those
secure servers and secure key storage and all that.

In fact, it costs so much money that nobody bothers
to do it *unless* they are forced to do it by people
telling them that they are being irresponsibly
vulnerable to the MITM!  Whatever that means.

Literally, nobody - 1% of everyone - runs an SSL
server, and even only a quarter of those do it
properly.  Which should be indisputable evidence
that there is huge resistance to spending money
on MITM.


 Indeed, if you
 wanted to produce an alternative to TLS without MITM protection, you
 would have to spend lots of time and money crafting and evaluating a
 new protocol that is still reasonably secure without that
 protection. One might therefore call the cost of using TLS, which may
 be used for free, to be substantially lower than that of an
 alternative.


I'm not sure how you come to that conclusion.  Simply
use TLS with self-signed certs.  Save the cost of the
cert, and save the cost of the re-evaluation.

If we could do that on a widespread basis, then it
would be worth going to the next step, which is caching
the self-signed certs, and we'd get our MITM protection
back!  Albeit with a bootstrap weakness, but at real
zero cost.

Any merchant who wants more, well, there *will* be
ten offers in his mailbox to upgrade the self-signed
cert to a better one.  Vendors of certs may not be
the smartest cookies in the jar, but they aren't so
dumb that they'll miss the financial benefit of self-
signed certs once it's been explained to them.

(If you mean, use TLS without certs - yes, I agree,
that's a no-win.)


 How low does the risk have to get before you will be willing not just
 to pay NOT to protect against it? Because that is, in practice, what
 you would have to do. You would actually have to burn money to get
 lower protection. The cost burden is on doing less, not on doing
 more.


This is a well known metric.  Half is a good rule of
thumb.  People will happily spend X to protect themselves
from X/2.  Not all the people all the time, but it's
enough to make a business model out of.  So if you
were able to show that certs protected us from 5-50
million dollars of damage every year, then you'd be
there.

(Mind you, where you would be is, proposing that certs
would be good to make available.  Not compulsory for
applications.)


 There is, of course, also the cost of what happens when someone MITM's
 you.


So I should spend the money.  Sure.  My choice.


 You keep claiming we have to do a cost benefit analysis, but what is
 the actual measurable financial benefit of paying more for less
 protection?


Can you take that to the specific case?

iang



Re: SSL, client certs, and MITM (was WYTM?)

2003-10-22 Thread Perry E. Metzger

Ian Grigg [EMAIL PROTECTED] writes:
 Perry E. Metzger wrote:
  The cost of MITM protection is, in practice, zero.
 
 Not true!  The cost is from 10 million dollars to
 100 million dollars per annum.  Those certs cost
 money, Perry!

They cost nothing at all. I use certs every day that I've created in
my own CA to provide MITM protection, and I paid no one for them. It
isn't even hard to do.

Repeat after me:
TLS is not only for protecting HTTP, and should not be mistaken for https:.
TLS is not X.509, and should not be mistaken for X.509.
TLS is also not "buy a cert from Verisign", and should not be
mistaken for "buy a cert from Verisign".

TLS is just a pretty straightforward well analyzed protocol for
protecting a channel -- full stop. It can be used in a wide variety of
ways, for a wide variety of apps. It happens to allow you to use X.509
certs, but if you really hate X.509, define an extension to use SPKI
or SSH style certs. TLS will accommodate such a thing easily. Indeed, I
would encourage you to do such a thing.

Perry
