Re: browser vendors and CAs agreeing on high-assurance certificates

2005-12-27 Thread Ian G

Ben Laurie wrote:

Ian G wrote:

...

http://wiki.cacert.org/wiki/VhostTaskForce



(The big problem of course is that you can use
one cert to describe many domains only if they
are the same administrative entity.)



If they share an IP address (which they must, otherwise there's no
problem), then they must share a webserver, which means they can share a
cert, surely?


Certainly they *can* share a cert.  But a cert
speaks to identity - at the human level the cert
is supposed to (by some readings) indicate who
the site is purporting to be and in some scenarios,
there are people who think the cert actually
proves that the site is who it claims to be.

So regardless of the technical details of the
underlying software (complex, I grant), websites
SHOULD NOT share a cert.

(by capitals I mean the RFC sense, not the shouting
sense.)



What we really need is for the webservers to
implement the TLS extension which I think is
called server name indication.

And we need SSL v2 to die so it doesn't interfere
with the above.
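The server name indication (SNI) extension mentioned above can be seen from the client side in a short sketch. This is a hypothetical illustration using Python's ssl module (which did not exist in this form in 2005, and is not from the thread): the server_hostname argument is what populates the server_name extension in the ClientHello, letting one IP address present the right certificate for each virtual host.

```python
# Hypothetical sketch (not from the thread): how SNI appears in a
# modern TLS client.  The server_hostname argument below is what
# populates the server_name (SNI) extension in the ClientHello.
import socket
import ssl

def make_client_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()
    # check_hostname forces callers to supply server_hostname,
    # i.e. the SNI value, on every connection.
    assert ctx.check_hostname
    return ctx

def open_tls(host: str, port: int = 443) -> ssl.SSLSocket:
    ctx = make_client_context()
    raw = socket.create_connection((host, port))
    # server_hostname is sent as the SNI extension, so the server
    # can pick the certificate for this particular virtual host.
    return ctx.wrap_socket(raw, server_hostname=host)
```

The function names here are illustrative; the point is only that the hostname travels inside the ClientHello, before any certificate is chosen.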



Actually, you just disable it in the server. I don't see why we need
anything more than that.


If browsers don't know what is available on the
server, they send a Hello message asking which
protocol versions and ciphersuites to use.
That Hello is the SSL v2 compatibility message,
sent just in case, so to rectify this situation
we need to get all the browsers distro'd with
SSL v2 turned off by default.  The shorthand for
this is "SSL v2 must die"...  Thankfully, they
did decide to do just that at last month's
browser pow-wow.
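Shipping with SSL v2 off by default looks roughly like this in a modern TLS stack. A hedged sketch using Python's ssl module (an anachronism relative to this 2005 thread, shown only to make the configuration concrete): PROTOCOL_TLS_CLIENT already refuses SSLv2/SSLv3, and pinning the floor to TLS 1.2 means no pre-TLS-1.2 Hello is ever sent.

```python
# Illustrative sketch: a client context that can never speak SSL v2.
# PROTOCOL_TLS_CLIENT refuses SSLv2/SSLv3 outright; minimum_version
# raises the floor further so old-style Hellos are never emitted.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```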

iang

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: browser vendors and CAs agreeing on high-assurance certificates

2005-12-27 Thread Ben Laurie
Eric Rescorla wrote:
 Ben Laurie [EMAIL PROTECTED] writes:
 And we need SSL v2 to die so it doesn't interfere
 with the above.
 Actually, you just disable it in the server. I don't see why we need
 anything more than that.
 
 The problem is that the ServerHostName extension that signals
 which host the client is trying to contact is only available
 in the TLS ClientHello.

Sure, but if the server won't negotiate SSL 2, why is this a problem?
-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/
**  ApacheCon - Dec 10-14th - San Diego - http://apachecon.com/ **
There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff



Re: browser vendors and CAs agreeing on high-assurance certificates

2005-12-27 Thread Ben Laurie
Ian G wrote:
 Ben Laurie wrote:
 Ian G wrote:
 ...
 http://wiki.cacert.org/wiki/VhostTaskForce
 
 (The big problem of course is that you can use
 one cert to describe many domains only if they
 are the same administrative entity.)


 If they share an IP address (which they must, otherwise there's no
 problem), then they must share a webserver, which means they can share a
 cert, surely?
 
 Certainly they *can* share a cert.  But a cert
 speaks to identity - at the human level the cert
 is supposed to (by some readings) indicate who
 the site is purporting to be and in some scenarios,
 there are people who think the cert actually
 proves that the site is who it claims to be.
 
 So regardless of the technical details of the
 underlying software (complex, I grant), websites
 SHOULD NOT share a cert.

I don't see why not - the technical details actually matter. Since the
servers will all share a socket, on any normal architecture, they'll all
have access to everyone's private keys. So, what is gained by having
separate certs?

I do agree that the process by which the additional names get added to
the main cert needs to reflect ownership of the name, but that's a
different matter.

And I'm not claiming, btw, that this mechanism is better than the server
name extension. However, I don't believe it's as broken as some are claiming.

Cheers,

Ben.



Re: browser vendors and CAs agreeing on high-assurance certificates

2005-12-27 Thread Ian G

Ben Laurie wrote:

Ian G wrote:



http://wiki.cacert.org/wiki/VhostTaskForce



(The big problem of course is that you can use
one cert to describe many domains only if they
are the same administrative entity.)



If they share an IP address (which they must, otherwise there's no
problem), then they must share a webserver, which means they can share a
cert, surely?


Certainly they *can* share a cert.  But a cert
speaks to identity - at the human level the cert
is supposed to (by some readings) indicate who
the site is purporting to be and in some scenarios,
there are people who think the cert actually
proves that the site is who it claims to be.

So regardless of the technical details of the
underlying software (complex, I grant), websites
SHOULD NOT share a cert.



I don't see why not - the technical details actually matter. Since the
servers will all share a socket, on any normal architecture, they'll all
have access to everyone's private keys. So, what is gained by having
separate certs?


1. Because the activity is being done in the name
of the site.  When a business signs or otherwise
represents a site as its own, we still want to do
so in a way that separates that business from
every other.

2. The system operator has access to the private
keys, yes, but he's just the agent, and this does
not mean that anyone else has access.  We have
systems in place to separate out the protection
of the keys from the rest of the business.

Most small businesses have some level of cooperation
where they share techies, systems, and other services,
so cert sharing is probably more common and more useful
in the SOHO (small office home office) world.  Of course,
this is less interesting to the security world,
because there isn't the money to pay for consultants
there...

All the more reason why the software should provide
the best it can for free!


I do agree that the process by which the additional names get added to
the main cert needs to reflect ownership of the name, but that's a
different matter.

And I'm not claiming, btw, that this mechanism is better than the server
 name extension. However, I don't believe it's as broken as some are claiming.


Well, sure.  For many uses it will be a useful
stopgap measure, until SNI is deployed.  It's
only broken if you like a binary world, and you
happen to fall on the zero side of the question.

iang



Re: X.509 / PKI, PGP, and IBE Secure Email Technologies

2005-12-27 Thread Ben Laurie
Anne & Lynn Wheeler wrote:
 James A. Donald wrote:
 However, the main point of attack is phishing, when an
 outsider attempts to interpose himself, the man in the
 middle, into an existing relationship between two people
 that know and trust each other. 
 
 in the public key model ... whether it involves pgp, pki, digital
 certificates, what-ever; the local user (relying party) has to have a
 local trusted repository for public keys. in the pki model, this tends
 to be restricted to public keys of certification authorities ... so that
 the relying party can verify the digital signature on these
 message/document constructs called digital certificates.
 
 in the traditional, ongoing relationship scenario, relying parties
 directly record authentication information of the parties they are
 dealing with. if a relying party were to directly record the public key
 of the people they are communicating with ... it is the trusting of that
 public key and the validating of associated public key operations that
 provide for the countermeasure for man-in-the-middle attacks and
 phishing attacks.
 
 the issue that has been repeatedly discussed is that supposedly the
 existing SSL domain name digital certificates were to prevent
 impersonation and mitm-attacks. however, because of various
 infrastructure shortcomings ... an attacker can still operate with
 perfectly valid SSL domain name digital certificates ... and it doesn't
 stop the MITM-attack and/or phishing.

Eh? It surely does stop MitM attacks - the problem is that there's
little value in doing so for various reasons, such as no strong binding
between domain name and owner, UI that doesn't make it clear which
domain you are going to, or homograph attacks.

Cheers,

Ben.



Re: X.509 / PKI, PGP, and IBE Secure Email Technologies

2005-12-27 Thread Anne Lynn Wheeler
Ben Laurie wrote:
 Eh? It surely does stop MitM attacks - the problem is that there's
 little value in doing so for various reasons, such as no strong binding
 between domain name and owner, UI that doesn't make it clear which
 domain you are going to, or homograph attacks.

it stops the MITM attacks where the client supplies a URL and the server
supplies a certificate that corresponds to the URL.

the original issue is that a MITM might have redirected the connection
from the client to a bogus site ... or an intermediate site that then
impersonated the real site.

the infrastructure issue was that the merchants decided that SSL was too
high an overhead and stopped using SSL for the main connection where the
client supplied the URL. they allowed the client supplied URL connection
to be done w/o SSL. then later ... the website provided a click button
for checkout/pay ... which supplied a URL and then they also supplied a
certificate that matches the URL that they provided.

this situation could either be a completely bogus site ... or even a
mitm-attack ... which just did a pure passthru of all traffic going
each way ... except for the pay/checkout button. for the pay/checkout
button, the mitm substituted their own URL & certificate. everything
else passes thru as usual ... except the mitm is having two ssl sessions
... the mitm to real server session and the mitm to the client
session. the mitm to real server session uses the real server's
certificate ... the mitm to client session uses the mitm certificate.
since the mitm supplied the URL to the client as part of the click
operation ... the mitm can control that the actual URL invoked by the
client matches the certificate used by the mitm. the e-commerce use for
pay/checkout scenario is one of the major uses for SSL on the internet
today ... and the way that the infrastructure has come to use SSL no
longer prevents the mitm-attack when the attacker can supply both the
URL and the certificate.

the issue for preventing mitm-attacks ... you need the client to supply
the URL and have the SSL process validate the other end of that
connection (with a server provided ssl domain name certificate ... or at
least a trusted, supplied public key associated with the URL). when the
attacker provides both the URL and a trusted public key ... what is
being prevented?

there is another problem, somewhat the weak binding between domain name
and domain name owner. the issue is that many of the certification
authorities aren't the authoritative agency for the information they are
certifying. much of the original justification for SSL related to mitm
attacks was various integrity issues in the domain name infrastructure.

the process tends to be that a domain name owner registers some amount
of identification information for their domain name ownership with the
domain name infrastructure. the certification authorities then require
that SSL domain name certificate applicants also provide some amount of
identification information. then the certification authorities attempt
to do the expensive, time-consuming, and error-prone process of matching
the supplied identification information for the SSL domain name
certificate with the identification information on file with the domain
name infrastructure for the domain name.

as part of various integrity issues related to that process, there has
been a proposal, somewhat backed by the ssl domain name certification
authority industry, that domain name owners also register a public key
with the domain name infrastructure (in addition to identification
information). then future communication can be digitally signed and
verified with the on-file public key. also the ssl domain name
certification authority industry can require that ssl domain name
certificate applications be digitally signed. then the certification
authority can replace the expensive, time-consuming, and error-prone
identification matching process with a much less-expensive and efficient
authentication process by doing a real-time retrieval of the on-file
public key from the domain name infrastructure for verifying the digital
signature (in lieu of doing a real-time retrieval of the on-file
identification information for the expensive, time-consuming and
error-prone identification matching).

the two catch-22 issues here are

1) improving the overall integrity of the domain name
infrastructure lessens the original justification for ssl domain name
certificates

2) if the certification authority industry can rely on real-time
retrieval of publickeys from the domain name infrastructure as the base,
TRUST ROOT for all of their operations ... it is possible that other
people in the world might also be able to do real-time retrieval of
publickeys as a substitute to relying on SSL domain name certificates

misc, numerous past postings mentioning SSL and ssl domain name certificates
http://www.garlic.com/~lynn/subpubkey.html#sslcert


Re: another feature RNGs could provide

2005-12-27 Thread Travis H.
On 12/26/05, Ben Laurie [EMAIL PROTECTED] wrote:
 Surely if you do this, then there's a meet-in-the-middle attack: for a
 plaintext/ciphertext pair, P, C, I choose random keys to encrypt P and
 decrypt C. If E_A(P)=D_B(C), then your key was A.B, which reduces the
 strength of your cipher from 2^x to 2^(x/2)?

Almost true.  The cardinality of the symmetric group S_(2^x) is
(2^x)!, so it reduces it from (2^x)! to roughly sqrt((2^x)!).  That's
still a lot.

I suspect this is some information-theoretic limit for x-bit block ciphers.
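Travis's numbers can be checked for a toy block size. A quick illustrative computation (my sketch, not from the thread): for x = 2 there are (2^x)! = 24 permutations of the four block values, and a meet-in-the-middle search over a composed cipher would cost roughly the square root of that per side.

```python
# Sanity-check the cardinality claim for a toy block size x = 2:
# an x-bit block cipher is a permutation of 2^x values, so there
# are (2^x)! candidate "keys"; meet-in-the-middle costs ~sqrt of that.
import math

x = 2
keyspace = math.factorial(2 ** x)   # (2^x)! possible block permutations
mitm_cost = math.isqrt(keyspace)    # rough per-side search cost
assert keyspace == 24 and mitm_cost == 4
```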
--
http://www.lightconsulting.com/~travis/
Vast emptiness, nothing sacred. -- Bodhidharma --
GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B



Re: crypto for the average programmer

2005-12-27 Thread Ben Laurie
Jack Lloyd wrote:
 On Fri, Dec 16, 2005 at 05:41:48PM +, Ben Laurie wrote:
 
 No, OpenSSL is self-contained. There is, IIRC, an engine that uses GMP
 if you want, but its entirely optional; OpenSSL has its own bignum
 implementation that's just as good.
 
 Last I checked, public key operations in OpenSSL were significantly faster
 using the GNU MP engine - so "just as good" is perhaps not entirely
 accurate. OpenSSL's BN library is still very fast compared to many other MPI
 implementations, of course.

Apparently this rather depends on platform and compiler options. I am
reliably informed that GMP is not always faster.

For those that really care it'd be cool if someone did a careful
comparison. It would also be interesting to know why they differ.
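The shape of such a comparison can be sketched briefly. This is a hypothetical methodology sketch (mine, not from the thread): time the same modular exponentiation at several RSA-like operand sizes. Python's built-in pow stands in for an OpenSSL-BN or GMP call; for the real head-to-head one would swap in bindings to each library and run on each platform and compiler configuration of interest.

```python
# Hypothetical benchmark skeleton for comparing bignum backends:
# time modular exponentiation at several operand sizes.  pow() is a
# stand-in for the library call under test.
import random
import time

def bench_modexp(bits: int, reps: int = 20) -> float:
    random.seed(bits)                                   # reproducible operands
    n = random.getrandbits(bits) | (1 << (bits - 1)) | 1  # odd, full-size modulus
    base = random.getrandbits(bits) % n
    exp = random.getrandbits(bits)
    start = time.perf_counter()
    for _ in range(reps):
        pow(base, exp, n)                               # the operation under test
    return (time.perf_counter() - start) / reps         # seconds per op

if __name__ == "__main__":
    for bits in (512, 1024, 2048):
        print(f"{bits:5d} bits: {bench_modexp(bits) * 1e3:.3f} ms/op")
```

This matches Jack's observation below that differences shrink at small key sizes: the fixed per-call overhead dominates at 512 bits, while the core multiplication loops dominate at 2048.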

Cheers,

Ben.



Re: X.509 / PKI, PGP, and IBE Secure Email Technologies

2005-12-27 Thread Ben Laurie
Anne & Lynn Wheeler wrote:
 a more sensible human factors design ... is to remember whether a person
 has checked out first time communication with a stranger ... the real
 first time, have the person do something additional ... and from then on
 remember that checking. in that respect ... creating a dependency on the
 user to repeatedly check a field that changes possibly thousands of
 times per day is extremely poor human factors security design.

This is the SSH design for host keys, of course, and also the petnames
design for URLs. Unfortunately petnames don't solve the problem that it
is hard to check the URL even the first time.
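The SSH host-key design mentioned here is trust-on-first-use, which fits in a few lines. A minimal sketch (names and storage are illustrative, not any real SSH or browser implementation): the first contact records the peer's key fingerprint, and every later contact only checks that the remembered value hasn't changed.

```python
# Minimal trust-on-first-use (known_hosts-style) sketch.  Hypothetical
# class and method names; real SSH persists this to ~/.ssh/known_hosts.
import hashlib

class KnownHosts:
    def __init__(self):
        self._store = {}  # host -> remembered key fingerprint

    def check(self, host: str, public_key: bytes) -> str:
        fp = hashlib.sha256(public_key).hexdigest()
        seen = self._store.get(host)
        if seen is None:
            self._store[host] = fp   # first contact: remember the key
            return "first-use"
        return "ok" if seen == fp else "MISMATCH"
```

The human only has to get the check right once, at "first-use"; afterwards the software does the comparison, which is exactly the human-factors win Lynn describes above.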

Cheers,

Ben.



Re: X.509 / PKI, PGP, and IBE Secure Email Technologies

2005-12-27 Thread Anne Lynn Wheeler
Ben Laurie wrote:
 This is the SSH design for host keys, of course, and also the petnames
 design for URLs. Unfortunately petnames don't solve the problem that it
 is hard to check the URL even the first time.

the original SSL paradigm was predicated on end-to-end security: that
the server the user thought they were talking to was the server they
were actually talking to. certificates addressed the part from the URL
inside the browser to the server.

the paradigm was dependent on the user having a tight binding between
the server the user thought they were talking to and the URL inside the
browser ... which in turn was dependent on the user actually inputting
the URL (as demonstration of the binding between the server the user
thought they were talking to and the input URL).

the problem was that as the infrastructure matured ... the actual URL
came to have less & less meaning to the user. so the MITM-attacks moved
to the weak points in the chain ... rather than attacking a valid
certificate and/or the process after the URL was inside the browser,
attack the process before the URL got inside the browser.

petnames would seem to suffer somewhat the same problem as
shared-secrets and passwords ... requiring a unique petname for every
URL. it works as long as there are a few ... when they reach scores ...
the user can no longer manage.

so part of the problem is that the URL has become almost some internal
infrastructure representation... almost on par with the ip-address ...
the user pays nearly as much attention to the URL for a website as they
pay to the lower-level ip-address for the site (legacy requirements
still have ways for people to deal with both the URL and the ip-address
... but they don't have a lot of meaning for a lot of people).

however the URL is one way of internally doing bookkeeping about a site.

so security issues might be

1) is the user talking to the server they think they are talking to

2) does the user believe that the site is safe

3) is the site safe for providing certain kinds of sensitive information

4) is the site safe for providing specific sensitive information

#1 is the original SSL design point ... but the infrastructure has
resulted in creating a disconnect for establishing this information.

possibly another approach is that the local environment remembers things
... akin to a PGP key repository. rather than the SSL locked icon ...
have a large green/yellow/red indicator. red is not both SSL locked and
checked. yellow is both SSL locked and checked.  green is SSL locked,
initially checked, and further checked for entry of sensitive information.
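Lynn's three-colour indicator is simple enough to state as code. An illustrative sketch (my rendering of the proposal, not an actual browser feature): the colour is a pure function of whether the link is encrypted, whether the site has passed the one-time check, and whether it has been approved for sensitive entry.

```python
# Sketch of the proposed green/yellow/red trust indicator: red unless
# the link is both SSL-locked and checked; green only once the site is
# additionally approved for entering sensitive information.
def indicator(ssl_locked: bool, checked: bool, approved: bool) -> str:
    if not (ssl_locked and checked):
        return "red"
    return "green" if approved else "yellow"
```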

a human factors issue is how easy can you make preliminary checking ...
and then not have to do it again ... where the current infrastructure
requires users to match something meaningful to URL to SSL certificate
on every interaction. preliminary checking is more effort than the
current stuff done on every SSL URL ... but could be made to be done
relatively rarely and part of an overall infrastructure that directly
relates to something the end-user might find meaningful.

bits and pieces of the infrastructure are already there. for instance
there is already support for automatically entering userid/password on
specific web forms. using bits and pieces of that repository could
provide the ability to flag a specific web form as approved/not-approved
for specific sensitive information (like a specific userid/password).

the issue isn't that a simple indicator with 2-4 states isn't useful ...
but the states presented need to be realistic and need to mean something
to the user. the locked/unlocked icon just says that the link is
encrypted. it doesn't indicate that the remote site is the server that
the user thinks it is ... in part because of the way that the
infrastructure has created a disconnect between the URL and what users
actually deal in.

if the browser kept track of whether the user actually hit the keys for
the entering of the URL ... then it might be useful for the browser to
provide a higher level of confidence in the SSL certificate checking
(aka it is only if the user actually typed in the URL ... that there can
be a high level of confidence related to the SSL certificate checking).

one might be tempted to make some grandiose philosophical security
statement ... that unless the user is involved in actually doing some
physical operation (at least at some point in time) to correlate what is
meaningful to the user with the internal infrastructure, there can be no
high confidence in the result. the original SSL scheme was dependent on
the user actually typing in the URL.

this is somewhat analogous to the confusion that seems to have cropped
up in the past with respect to the difference between digital signature
and human signature.
http://www.garlic.com/~lynn/subpubkey.html#signature

x9.59
http://www.garlic.com/~lynn/index.html#x959
http://www.garlic.com/~lynn/subpubkey.html#x959

could actually have a digital signature applied to a retail transaction
at point-of-sale as a means of authentication.

Re: another feature RNGs could provide

2005-12-27 Thread David Malone
On Tue, Dec 27, 2005 at 03:26:59AM -0600, Travis H. wrote:
 On 12/26/05, Ben Laurie [EMAIL PROTECTED] wrote:
  Surely if you do this, then there's a meet-in-the-middle attack: for a
  plaintext/ciphertext pair, P, C, I choose random keys to encrypt P and
  decrypt C. If E_A(P)=D_B(C), then your key was A.B, which reduces the
  strength of your cipher from 2^x to 2^(x/2)?

 Almost true.  The cardinality of the symmetric group S_(2^x) is
 (2^x)!, so it reduces it from (2^x)! to roughly sqrt((2^x)!).  That's
 still a lot.

I'm fairly sure knowing that E(P) = C reduces the key space from
(2^x)!  to (2^x - 1)!, because you've just got to choose images for
the remaining 2^x - 1 possible blocks.

I think a problem with Ben's argument is in assuming that knowing
E_A(P)=D_B(C) tells you that your key was A.B. For example, suppose
my key K is the permutation:

1 -> 2
2 -> 3
3 -> 4
4 -> 1

and my P = 2. Now we know E_K(P) = C = 3. Ben guesses A:

1 -> 1
2 -> 3
3 -> 2
4 -> 4

and B:

1 -> 1
2 -> 2
3 -> 3
4 -> 4

He sees that E_A(P) = E_A(2) = 3 = D_B(3), and so assumes that K =
A.B. But A.B = A != K.

(In this example, imagine x = 2, and we label the blocks 00 = 1,
01 = 2, 10 = 3, 11 = 4.)
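David's counterexample can be executed directly. A short sketch transcribing the permutations above as dictionaries: the guessed pair (A, B) satisfies E_A(P) = D_B(C), yet the composition A.B is not the real key K.

```python
# David's counterexample, executed: the meet-in-the-middle "hit"
# E_A(P) = D_B(C) does not imply that A.B is the real key K.
K = {1: 2, 2: 3, 3: 4, 4: 1}   # the real key
A = {1: 1, 2: 3, 3: 2, 4: 4}   # first guessed permutation
B = {1: 1, 2: 2, 3: 3, 4: 4}   # second guess: the identity

def decrypt(perm, c):
    """Invert a permutation: find p with perm[p] == c."""
    return next(p for p, q in perm.items() if q == c)

P = 2
C = K[P]                        # E_K(2) = 3
assert A[P] == decrypt(B, C)    # the hit: E_A(P) = D_B(C) = 3

AB = {p: B[A[p]] for p in A}    # composition: A then B
assert AB == A                  # B is the identity, so A.B = A
assert AB != K                  # ...and A.B is not the key
```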

David.



ADMIN: end of latest SSL discussion

2005-12-27 Thread Perry E. Metzger

The latest round of "SSL and X.509 certs in browsers are broken" has
gone on too long. I kept hoping after weeks people might get bored,
but they haven't. I'm cutting it off for at least a little while.

I'll entertain new postings only if they propose actual solutions
rather than long philosophical discussions of how we went wrong when
we developed notochords or left the ocean or went bipedal or what have
you.

The unending rant can continue in a few weeks after I've forgotten
about this one.

By the way, this does not apply to any sort of actual technical
discussion (like the discussion of which bignum implementations are
fastest).

Perry


Re: crypto for the average programmer

2005-12-27 Thread Jack Lloyd
On Tue, Dec 27, 2005 at 02:28:07PM +, Ben Laurie wrote:

 Apparently this rather depends on platform and compiler options. I am
 reliably informed that GMP is not always faster.
 
 For those that really care it'd be cool if someone did a careful
 comparison. It would also be interesting to know why they differ.

Thank you for the correction. My statement was primarily on the basis of some
benchmarks I ran at the time I wrote some backend code in Botan to dump crypto
operations to GNU MP and/or OpenSSL when available, and at the time GNU MP
outperformed OpenSSL by a fairly large margin on x86 and Alpha machines (up to
50% on large RSA private key operations; as the keys got smaller the
performance difference reduced, down to basically nothing at 512 bit
keys). However I have since checked my changelogs and realized I must have run
those tests almost two years ago now (which surprised me a bit!), so I'm sure
those results are not particularly reflective of current performance. I'll have
to revisit this and see how things stack up these days on the platforms I care
about.

-Jack
