Re: browser vendors and CAs agreeing on high-assurance certificates

2005-12-27 Thread Ian G

Ben Laurie wrote:
> Ian G wrote:
> ...
>> http://wiki.cacert.org/wiki/VhostTaskForce
>>
>> (The big problem of course is that you can use
>> one cert to describe many domains only if they
>> are the same administrative entity.)
>
> If they share an IP address (which they must, otherwise there's no
> problem), then they must share a webserver, which means they can share a
> cert, surely?


Certainly they *can* share a cert.  But a cert
speaks to identity - at the human level the cert
is supposed to (by some readings) indicate who
the site is purporting to be and in some scenarios,
there are people who think the cert actually
proves that the site is who it claims to be.

So regardless of the technical details of the
underlying software (complex, I grant), websites
SHOULD NOT share a cert.

(by capitals I mean the RFC sense, not the shouting
sense.)



>> What we really need is for the webservers to
>> implement the TLS extension which I think is
>> called "server name indication."
>>
>> And we need SSL v2 to die so it doesn't interfere
>> with the above.
>
> Actually, you just disable it in the server. I don't see why we need
> anything more than that.


If browsers don't know what is available on the
server, they send a Hello message that asks for
what protocol versions and ciphersuites to use.
By default that opening message is the SSL v2
ClientHello - sent just in case the server only
speaks SSL v2 - and the SSL v2 hello cannot carry
TLS extensions such as server name indication.
So to rectify this situation we need to get all
the browsers distro'd with SSL v2 turned off by
default.  The shorthand for this is "SSL v2 must
die..."  Thankfully, they did decide to do just
that at last month's browser pow-wow.
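
(A minimal client-side sketch of both points, in C against
OpenSSL.  The SNI call only exists in builds with TLS extension
support - hence the guard - and error handling is omitted:)

    #include <openssl/ssl.h>

    /* sketch: a TLS client that refuses SSL v2 and names the
       virtual host it wants via the server_name (SNI) extension;
       connection setup and error handling omitted */
    SSL *make_client_ssl(int fd, const char *hostname)
    {
        SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());
        SSL *ssl;

        /* "SSL v2 must die": with this option set, OpenSSL sends
           a v3/TLS ClientHello rather than the v2-compatible one,
           so hello extensions can be carried at all */
        SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2);

        ssl = SSL_new(ctx);
    #ifdef SSL_set_tlsext_host_name
        /* server name indication: lets one IP serve many certs */
        SSL_set_tlsext_host_name(ssl, hostname);
    #endif
        SSL_set_fd(ssl, fd);
        return ssl;
    }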

iang

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: browser vendors and CAs agreeing on high-assurance certificates

2005-12-27 Thread Ben Laurie
Eric Rescorla wrote:
> Ben Laurie <[EMAIL PROTECTED]> writes:
>>> And we need SSL v2 to die so it doesn't interfere
>>> with the above.
>> Actually, you just disable it in the server. I don't see why we need
>> anything more than that.
> 
> The problem is that the ServerHostName extension that signals
> which host the client is trying to contact is only available
> in the TLS ClientHello.

Sure, but if the server won't negotiate SSL 2, why is this a problem?
-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/
**  ApacheCon - Dec 10-14th - San Diego - http://apachecon.com/ **
"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: browser vendors and CAs agreeing on high-assurance certificates

2005-12-27 Thread Ben Laurie
Ian G wrote:
> Ben Laurie wrote:
>> Ian G wrote:
> ...
>>> http://wiki.cacert.org/wiki/VhostTaskForce
> 
>>> (The big problem of course is that you can use
>>> one cert to describe many domains only if they
>>> are the same administrative entity.)
>>
>>
>> If they share an IP address (which they must, otherwise there's no
>> problem), then they must share a webserver, which means they can share a
>> cert, surely?
> 
> Certainly they *can* share a cert.  But a cert
> speaks to identity - at the human level the cert
> is supposed to (by some readings) indicate who
> the site is purporting to be and in some scenarios,
> there are people who think the cert actually
> proves that the site is who it claims to be.
> 
> So regardless of the technical details of the
> underlying software (complex, I grant), websites
> SHOULD NOT share a cert.

I don't see why not - the technical details actually matter. Since the
servers will all share a socket, on any normal architecture, they'll all
have access to everyone's private keys. So, what is gained by having
separate certs?

I do agree that the process by which the additional names get added to
the main cert needs to reflect ownership of the name, but that's a
different matter.

And I'm not claiming, btw, that this mechanism is better than the server
name extension. However, I don't believe it's as broken as some are claiming.

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/
**  ApacheCon - Dec 10-14th - San Diego - http://apachecon.com/ **
"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: browser vendors and CAs agreeing on high-assurance certificates

2005-12-27 Thread Ian G

Ben Laurie wrote:
> Ian G wrote:
>> Ben Laurie wrote:
>>> Ian G wrote:
>>>> http://wiki.cacert.org/wiki/VhostTaskForce
>>>>
>>>> (The big problem of course is that you can use
>>>> one cert to describe many domains only if they
>>>> are the same administrative entity.)
>>>
>>> If they share an IP address (which they must, otherwise there's no
>>> problem), then they must share a webserver, which means they can share a
>>> cert, surely?
>>
>> Certainly they *can* share a cert.  But a cert
>> speaks to identity - at the human level the cert
>> is supposed to (by some readings) indicate who
>> the site is purporting to be and in some scenarios,
>> there are people who think the cert actually
>> proves that the site is who it claims to be.
>>
>> So regardless of the technical details of the
>> underlying software (complex, I grant), websites
>> SHOULD NOT share a cert.
>
> I don't see why not - the technical details actually matter. Since the
> servers will all share a socket, on any normal architecture, they'll all
> have access to everyone's private keys. So, what is gained by having
> separate certs?


1. Because the activity is being done "in the name
of" the site.  When a business "signs" or otherwise
represents a site as purporting to be in the name of
some business, we still want to do it in a way that
separates out that business from every other.

2. The system operator has access to the private
keys, yes, but he's just the agent, and this does
not mean that anyone else has access.  We have
systems in place to separate out the protection
of the keys from the rest of the business.

Most small businesses have some level of cooperation
where they share techies, systems, and other services,
so cert sharing is probably more often seen and more
useful in the SOHO (small office, home office) world.
Of course, this is less interesting to the security
world, because there isn't the money to pay for
consultants there...

All the more reason why the software should provide
the best it can for free!


> I do agree that the process by which the additional names get added to
> the main cert needs to reflect ownership of the name, but that's a
> different matter.
>
> And I'm not claiming, btw, that this mechanism is better than the server
> name extension. However, I don't believe it's as broken as some are claiming.


Well, sure.  For many uses it will be a useful
stopgap measure until SNI is deployed.  It's
only broken if you like a binary world, and you
happen to fall on the zero side of the question.

iang

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: RNG quality verification

2005-12-27 Thread James A. Donald
--
From:   Philipp Gühring <[EMAIL PROTECTED]>
> The problem is that I have to live with COTS
> (commercial off-the-shelf) software out there that is
> generating the certificate requests. The only thing I
> can do is create a blacklist or a whitelist of known
> bad or known good software, to tell the users: Use
> this software, or don't use that software.

Randomness is necessarily theory laden.  To determine 
what is good, and what is bad, you have to look inside 
the software.

Software should get its randomness from /dev/random, or 
from similarly open sources of randomness, so that the 
source of randomness can be inspected.

The general rule is that true randomness comes from 
quantities that are known to be unknown - for example 
the variation in disk read timing, which is affected by 
turbulence, or the microphone input, which is inherently 
noisy. You have to ask where these random numbers
ultimately come from. 
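
A minimal sketch of doing that in C - the 32-byte 
read is an arbitrary choice, and blocking 
/dev/random versus non-blocking /dev/urandom is a 
policy decision left to the reader:

    #include <stdio.h>

    /* sketch: pull 256 bits of kernel-gathered entropy from
       /dev/random, suitable for seeding a PRNG or keying directly */
    int main(void)
    {
        unsigned char seed[32];
        unsigned int i;
        FILE *f = fopen("/dev/random", "rb");

        if (f == NULL || fread(seed, 1, sizeof seed, f) != sizeof seed) {
            perror("reading /dev/random");
            return 1;
        }
        fclose(f);

        for (i = 0; i < sizeof seed; i++)   /* show what we got */
            printf("%02x", seed[i]);
        putchar('\n');
        return 0;
    }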

--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 5i5rAiu+t+UqxlCHKBfiAn24UbuH1D2GsYrL3hv7
 4q7w1mi+V9whucgThiyHnkPt0EkjS1oIAp9hQ1UKc



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: X.509 / PKI, PGP, and IBE Secure Email Technologies

2005-12-27 Thread Ben Laurie
Anne & Lynn Wheeler wrote:
> James A. Donald wrote:
>> However, the main point of attack is phishing, when an
>> outsider attempts to interpose himself, the man in the
>> middle, into an existing relationship between two people
>> that know and trust each other. 
> 
> in the public key model ... whether it involves pgp, pki, digital
> certificates, what-ever; the local user (relying party) has to have a
> local trusted repository for public keys. in the pki model, this tends
> to be restricted to public keys of certification authorities ... so that
> the relying party can verify the digital signature on these
> message/document constructs called digital certificates.
> 
> in the traditional, ongoing relationship scenario, relying parties
> directly record authentication information of the parties they are
> dealing with. if a relying party were to directly record the public key
> of the people they are communicating with ... it is the trusting of that
> public key and the validating of associated public key operations that
> provide for the countermeasure for man-in-the-middle attacks and
> phishing attacks.
> 
> the issue that has been repeatedly discussed is that supposedly the
> existing SSL domain name digital certificates were to prevent
> impersonation and mitm-attacks. however, because of various
> infrastructure shortcomings ... an attacker can still operate with
> perfectly valid SSL domain name digital certificates ... and it doesn't
> stop the MITM-attack and/or phishing.

Eh? It surely does stop MitM attacks - the problem is that there's
little value in doing so for various reasons, such as no strong binding
between domain name and owner, UI that doesn't make it clear which
domain you are going to, or homograph attacks.

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/
**  ApacheCon - Dec 10-14th - San Diego - http://apachecon.com/ **
"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: another feature RNGs could provide

2005-12-27 Thread Ben Laurie
Travis H. wrote:
> On 12/21/05, Perry E. Metzger <[EMAIL PROTECTED]> wrote:
>>> Good ciphers aren't permutations, though, are they? Because if they
>>> were, they'd be groups, and that would be bad.
>> Actually, by definition, a cipher should be a permutation from the set
>> of plaintexts to the set of ciphertexts. It has to be 1 to 1 bijective
>> or it isn't an encryption algorithm.
> 
> Isn't the question people normally care about whether encryption over
> all keys is closed or not, and only relevant if you're trying to
> increase the keyspace through multiple encryption?
> 
> The other day I was thinking of using a very large key to select a
> permutation at random from the symmetric group S_(2^x).  That would be
> a group, but I don't see how you knowing that I'm using a random
> permutation would help you at all.

Having shot myself in the foot once already, I've hesitated over
responding to this, but...

Surely if you do this, then there's a meet-in-the-middle attack: for a
plaintext/ciphertext pair, P, C, I choose random keys to encrypt P and
decrypt C. If E_A(P)=D_B(C), then your key was A.B, which reduces the
strength of your cipher from 2^x to 2^(x/2)?

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/
**  ApacheCon - Dec 10-14th - San Diego - http://apachecon.com/ **
"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: X.509 / PKI, PGP, and IBE Secure Email Technologies

2005-12-27 Thread Anne & Lynn Wheeler
Ben Laurie wrote:
> Eh? It surely does stop MitM attacks - the problem is that there's
> little value in doing so for various reasons, such as no strong binding
> between domain name and owner, UI that doesn't make it clear which
> domain you are going to, or homograph attacks.

it stops the MITM attacks where the client supplies a URL and the server
supplies a certificate that corresponds to the URL.

the original issue is that a MITM might have redirected the connection
from the client to a bogus site ... or an intermediate site that then
impersonated the real site.

the infrastructure issue was that the merchants decided that SSL was too
high an overhead and stopped using SSL for the main connection where the
client supplied the URL. they allowed the client supplied URL connection
to be done w/o SSL. then later ... the website provided a click button
for checkout/pay ... which supplied a URL and then they also supplied a
certificate that matched the URL that they provided.

this situation could either be a completely bogus site ... or even a
mitm-attack ... which just did a pure passthru of all traffic going in
each way ... except for the pay/checkout button. for the pay/checkout
button, the mitm substituted their own URL & certificate. everything
else passes thru as usual ... except the mitm is running two ssl
sessions ... the mitm-to-"real"-server session and the mitm-to-client
session. the mitm-to-"real"-server session uses the "real" server's
certificate ... the mitm-to-client session uses the mitm certificate.
since the mitm supplied the URL to the client as part of the click
operation ... the mitm can control that the actual URL invoked by the
client matches the certificate used by the mitm. the e-commerce
pay/checkout scenario is one of the major uses for SSL on the internet
today ... and the way that the infrastructure has come to use SSL no
longer prevents the mitm-attack when the attacker can supply both the
URL and the certificate.

the issue for preventing mitm-attacks is that you need the client to
supply the URL and have the SSL process validate the other end of that
connection (with a server provided ssl domain name certificate ... or at
least a trusted, supplied public key associated with the URL). when the
attacker provides both the URL and a trusted public key ... what is
being prevented?

there is another problem, the somewhat weak binding between domain name
and domain name owner. the issue is that many of the certification
authorities aren't the authoritative agency for the information they are
certifying. much of the original justification for SSL related to mitm
attacks was various integrity issues in the domain name infrastructure.

the process tends to be that a domain name owner registers some amount
of identification information for their domain name ownership with the
domain name infrastructure. the certification authorities then require
that SSL domain name certificate applicants also provide some amount of
identification information. then the certification authorities attempt
to do the expensive, time-consuming, and error-prone process of matching
the supplied identification information for the SSL domain name
certificate with the identification information on file with the domain
name infrastructure for the domain name.

as part of various integrity issues related to that process, there has
been a proposal, somewhat backed by the ssl domain name certification
authority industry, that domain name owners also register a public key
with the domain name infrastructure (in addition to identification
information). then future communication can be digitally signed and
verified with the on-file public key. also the ssl domain name
certification authority industry can require that ssl domain name
certificate applications be digitally signed. then the certification
authority can replace the expensive, time-consuming, and error-prone
identification matching process with a much less-expensive and efficient
authentication process by doing a real-time retrieval of the on-file
publickey from the domain name infrastructure for verifying the digital
signature (in lieu of doing a real-time retrieval of the on-file
identification information for the expensive, time-consuming and
error-prone identification matching).
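
a minimal sketch of that substituted authentication step, assuming
openssl and assuming the on-file publickey has already been retrieved
from the domain name infrastructure (the retrieval itself is left out):

    #include <openssl/evp.h>

    /* sketch: verify the digital signature on a certificate
       application against the publickey on file with the domain
       name infrastructure (0.9.x-era EVP interface) */
    int verify_application(EVP_PKEY *onfile_key,
                           const unsigned char *request,
                           unsigned int reqlen,
                           const unsigned char *sig,
                           unsigned int siglen)
    {
        EVP_MD_CTX ctx;
        int ok;

        EVP_MD_CTX_init(&ctx);
        EVP_VerifyInit(&ctx, EVP_sha1());
        EVP_VerifyUpdate(&ctx, request, reqlen);
        ok = EVP_VerifyFinal(&ctx, sig, siglen, onfile_key);
        EVP_MD_CTX_cleanup(&ctx);

        return ok == 1;   /* 1 = verifies under the on-file key */
    }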

the two catch22 issues here are

1) improving the overall integrity issues of the domain name
infrastructure lessens the original justification for ssl domain name
certificates

2) if the certification authority industry can rely on real-time
retrieval of publickeys from the domain name infrastructure as the base,
TRUST ROOT for all of their operations ... it is possible that other
people in the world might also be able to do real-time retrieval of
publickeys as a substitute to relying on SSL domain name certificates

misc, numerous past postings mentioning SSL and ssl domain name certificates
http://www.garlic.com/~lynn/subpubkey.html#sslcert

---

Re: X.509 / PKI, PGP, and IBE Secure Email Technologies

2005-12-27 Thread James A. Donald
--
From:   Anne & Lynn Wheeler
<[EMAIL PROTECTED]>
> as part of various integrity issues related to that
> process, there has been a proposal, somewhat backed by
> the ssl domain name certification authority industry
> that domain name owners also register a public key 
> with the domain name infrastructure (in addition to
> identification information). then future communication
> can be digitally signed and verified with the on-file
> public key. also the ssl domain name certification
> authority industry can require that ssl domain name 
> certificate applications be digitally signed. then the
> certification authority can replace the expensive,
> time-consuming, and error-prone identification
> matching process with a much less-expensive and 
> efficient authentication process by doing a real-time
> retrieval of the on-file publickey from the domain
> name infrastructure for verifying the digital
> signature (in lieu of doing a real-time retrieval of
> the on-file identification information for the
> expensive, time-consuming and error-prone
> identification matching).

Unfortunately most domain name registrars take a
completely irresponsible attitude to domain name theft,
despite the fact that domain name theft is a major
problem.   OpenSRS is good but their resellers are very
bad.  Unfortunately by default, one winds up having the
same password with OpenSRS as with the reseller. 

--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 LA7xNzxuTFoXA1ir8b2UWqPg/P6NhF+naIs34+LG
 49FONv1xLEWSjg/TiZ8oHGLHyCAhQLOM7CzPNCuTD


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: X.509 / PKI, PGP, and IBE Secure Email Technologies

2005-12-27 Thread Anne & Lynn Wheeler
Ben Laurie wrote:
> Eh? It surely does stop MitM attacks - the problem is that there's
> little value in doing so for various reasons, such as no strong binding
> between domain name and owner, UI that doesn't make it clear which
> domain you are going to, or homograph attacks.

part II;

i've repeatedly asserted that the fundamental, underlying certificate
business practice is to address the first time communication between
complete strangers ... analogous to the letters of credit/introduction
from the sailing ship days.

so the original SSL design point was to cross-check the domain name from
the URL typed in by the client to the certificate supplied by the
server. that basic premise is undermined when the server supplies the
URL and the certificate.

so you are left with placing the burden on the user to cross-check the
URL displayed with the URL they think they are going to. it is simple
human dynamics ... after the first several thousand displayed URLs ...
they are going to ignore the process.

this is somewhat akin to the shared-secret passwords ... that the
security experts define that the user has to have hard-to-guess,
impossible-to-remember passwords that change every 30 days, can never be
written down and every new password has to be unique ... as well as
unique across all security domains. the problem is that the number of
unique security domains that a person deals with has grown from 1-2 (I
had my first logon password in the 60s followed with the addition of an
ATM pin in the late 70s) to scores ... there is no practical possibility
that all such requirements can be satisfied. misc. past collected posts
on shared-secret
http://www.garlic.com/~lynn/subpubkey.html#secrets

the underlying infrastructure further complicated the whole process when
a large percentage of the merchants outsourced the payment process to
a 3rd party ... where the click button supplied a URL of the 3rd party
payment processor that had absolutely no relationship to the merchant
site the client had been shopping at. this not only creates the
situation where

1) any initial connection to a merchant site where the user might
possibly have typed in the URL (or controls the URL generation via other
mechanisms) is not checked ... and any actual checking for things like
MITM-attacks doesn't occur until there is a URL provided by a
potentially suspect site.

but also

2) conditions the user to accept as a normal process that the
pay/checkout button may have a completely different domain name URL than
the domain name of the shopping site.

so, pretty well documented human factors ... especially related to the
design of security systems ... is that you don't tie humans making
determinations about some security issue to something that repeatedly
happens thousands and thousands of times. there are some guards that
have to check badges against faces ... but they tend to have intensive
training AND organizations that have high volume have gone to guards
doing it only for short periods and rotating ... and/or the guards are
looking for a very simple repeating pattern and are trained to look for
the missing pattern. having the human repeatedly check a (URL) field
that changes several thousand times a day against something they are
supposed to expect ... is pretty quickly a useless security design.

a more sensible human factors design ... is to remember whether a person
has checked out first time communication with a stranger ... the real
first time, have the person do something additional ... and from then on
remember that checking. in that respect ... creating a dependency on the
user to repeatedly check a field that changes possibly thousands of
times per day is extremely poor human factors security design.

now, the other part of my constant theme about certificates having a
design point of first time communication between complete strangers ...
involves the additional constraint that the relying party has no other
recourse to obtain information about the other party. if you go to a
paradigm where the relying party has facilities to remember first time
checking ... then the appended certificate on the communication is
actually only useful for the real first-time-communication (since by
definition the relying party has facilities to remember previous
checking ... like saving away the other party's publickey in a trusted
public key repository).

another part is that if you have the relying party do some additional
checking on the real first time interaction (rather than expecting the
user to do increasingly trivial checking on each new URL) ... and the
user is really online ... then that first time checking can involve
real-time check of online resources ... again invalidating more of the
underlying design point of appending a certificate on every
communication for the benefit of relying parties who have no other
recourse for determining information about a complete stranger in first
time communication.

there is something of a dichotomy here ... where ther

Re: RNG quality verification

2005-12-27 Thread Travis H.
On 12/23/05, Philipp Gühring <[EMAIL PROTECTED]> wrote:
> It's easy to say that it's their responsibility.
> But how should they do it?

Very carefully.

Picking random numbers is far too important to be left to chance.
--
http://www.lightconsulting.com/~travis/
"Vast emptiness, nothing sacred." -- Bodhidharma -><-
GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: another feature RNGs could provide

2005-12-27 Thread Travis H.
On 12/26/05, Ben Laurie <[EMAIL PROTECTED]> wrote:
> Surely if you do this, then there's a meet-in-the-middle attack: for a
> plaintext/ciphertext pair, P, C, I choose random keys to encrypt P and
> decrypt C. If E_A(P)=D_B(C), then your key was A.B, which reduces the
> strength of your cipher from 2^x to 2^(x/2)?

Almost true.  The cardinality of the symmetric group S_(2^x) is
(2^x)!, so it reduces it from (2^x)! to roughly sqrt((2^x)!).  That's
still a lot.

I suspect this is some information-theoretic limit for x-bit block ciphers.
--
http://www.lightconsulting.com/~travis/
"Vast emptiness, nothing sacred." -- Bodhidharma -><-
GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: crypto for the average programmer

2005-12-27 Thread Ben Laurie
Jack Lloyd wrote:
> On Fri, Dec 16, 2005 at 05:41:48PM +, Ben Laurie wrote:
> 
>> No, OpenSSL is self-contained. There is, IIRC, an engine that uses GMP
>> if you want, but its entirely optional; OpenSSL has its own bignum
>> implementation that's just as good.
> 
> Last I checked, public key operations in OpenSSL were significantly faster
> using the GNU MP engine - so "just as good" is perhaps not entirely
> accurate. OpenSSL's BN library is still very fast compared to many other MPI
> implementations, of course.

Apparently this rather depends on platform and compiler options. I am
reliably informed that GMP is not always faster.

For those that really care it'd be cool if someone did a careful
comparison. It would also be interesting to know why they differ.
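
A starting point for anyone who wants to try, assuming both
libraries and their headers are installed; the sizes, iteration
count and link flags (-lcrypto -lgmp) are arbitrary or assumed:

    #include <stdio.h>
    #include <time.h>
    #include <openssl/bn.h>
    #include <gmp.h>

    #define BITS  1024
    #define ITERS 200

    /* sketch: time modular exponentiation (the heart of RSA) in
       OpenSSL's BN and in GNU MP on identical random operands */
    int main(void)
    {
        BN_CTX *ctx = BN_CTX_new();
        BIGNUM *a = BN_new(), *e = BN_new(), *m = BN_new(), *r = BN_new();
        mpz_t ga, ge, gm, gr;
        clock_t t0;
        int i;

        BN_rand(a, BITS, 0, 0);
        BN_rand(e, BITS, 0, 0);
        BN_rand(m, BITS, 0, 1);          /* top bit set, odd modulus */

        t0 = clock();
        for (i = 0; i < ITERS; i++)
            BN_mod_exp(r, a, e, m, ctx);
        printf("OpenSSL BN_mod_exp: %.3fs\n",
               (double)(clock() - t0) / CLOCKS_PER_SEC);

        /* hand the same operands to GNU MP via hex strings
           (leaks the strings BN_bn2hex allocates; fine here) */
        mpz_init_set_str(ga, BN_bn2hex(a), 16);
        mpz_init_set_str(ge, BN_bn2hex(e), 16);
        mpz_init_set_str(gm, BN_bn2hex(m), 16);
        mpz_init(gr);

        t0 = clock();
        for (i = 0; i < ITERS; i++)
            mpz_powm(gr, ga, ge, gm);
        printf("GNU MP mpz_powm:    %.3fs\n",
               (double)(clock() - t0) / CLOCKS_PER_SEC);
        return 0;
    }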

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/

"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: X.509 / PKI, PGP, and IBE Secure Email Technologies

2005-12-27 Thread Ben Laurie
Anne & Lynn Wheeler wrote:
> a more sensible human factors design ... is to remember whether a person
> has checked out first time communication with a stranger ... the real
> first time, have the person do something additional ... and from then on
> remember that checking. in that respect ... creating a dependency on the
> user to repeatedly check a field that changes possibly thousands of
> times per day is extremely poor human factors security design.

This is the SSH design for host keys, of course, and also the petnames
design for URLs. Unfortunately petnames don't solve the problem that it
is hard to check the URL even the first time.
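
A minimal sketch of the remember-on-first-use check, with OpenSSL and
a one-fingerprint-per-host flat file standing in for a real trusted
store (the file naming and format are illustrative only):

    #include <stdio.h>
    #include <string.h>
    #include <openssl/evp.h>
    #include <openssl/x509.h>

    /* sketch: SSH-style continuity for server certs.  Returns 1 if
       the cert matches what we remembered, 0 on first contact (now
       remembered), -1 on mismatch - a possible man-in-the-middle. */
    int check_continuity(const char *host, X509 *cert)
    {
        unsigned char md[EVP_MAX_MD_SIZE], old[EVP_MAX_MD_SIZE];
        unsigned int len;
        char path[256];
        FILE *f;

        X509_digest(cert, EVP_sha1(), md, &len);
        snprintf(path, sizeof path, "%s.fpr", host);

        if ((f = fopen(path, "rb")) != NULL) {
            size_t n = fread(old, 1, sizeof old, f);
            fclose(f);
            return (n == len && memcmp(old, md, len) == 0) ? 1 : -1;
        }
        if ((f = fopen(path, "wb")) == NULL)
            return -1;
        fwrite(md, 1, len, f);      /* first contact: remember it */
        fclose(f);
        return 0;
    }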

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/

"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: X.509 / PKI, PGP, and IBE Secure Email Technologies

2005-12-27 Thread Anne & Lynn Wheeler
Ben Laurie wrote:
> This is the SSH design for host keys, of course, and also the petnames
> design for URLs. Unfortunately petnames don't solve the problem that it
> is hard to check the URL even the first time.

the original SSL paradigm was predicated on end-to-end security that
"the server the user thought they were talking to" was "the server that
they were actually talking to". certificates addressed the part from
"the URL inside the browser" to "the server".

the paradigm was dependent on the user having a tight binding between
"the server the user thought they were talking to" and "the URL inside
the browser" ... which in turn was dependent on the user actually
inputting the URL (as demonstration of the binding between the server
the user thought they were talking to and the inputted URL).

the problem was that as the infrastructure matured ... the actual URL
came to have less & less meaning to the user. so the MITM-attacks moved
to the weak points in the chain ... rather than attacking a valid
certificate and/or the process after the URL was inside the browser,
attack the process before the URL got inside the browser.

petnames would seem to suffer somewhat the same problem as
shared-secrets and passwords ... requiring a unique petname for every
URL. it works as long as there are a few ... when they reach scores ... the
user no longer can manage.

so part of the problem is that the URL has become almost some internal
infrastructure representation... almost on par with the ip-address ...
the user pays nearly as much attention to the URL for a website as they
pay to the lower-level ip-address for the site (legacy requirements
still have ways for people to deal with both the URL and the ip-address
... but they don't have a lot of meaning for a lot of people).

however the URL is one way of internally doing bookkeeping about a site.

so security issues might be

1) is the user talking to the server they think they are talking to

2) does the user believe that the site is safe

3) is the site safe for providing certain kinds of sensitive information

4) is the site safe for providing specific sensitive information

#1 is the original SSL design point ... but the infrastructure has
resulted in creating a disconnect for establishing this information.

possibly another approach is that the local environment remembers things
... akin to a PGP key repository. rather than the SSL locked/unlocked
icon ... have a large green/yellow/red indicator. red is neither SSL
locked nor checked. yellow is both SSL locked and checked.  green is SSL
locked, initially checked, and further checked for entry of sensitive
information.

a human factors issue is how easy can you make preliminary checking ...
and then not have to do it again ... where the current infrastructure
requires users to match something meaningful to the URL and SSL certificate
on every interaction. preliminary checking is more effort than the
current stuff done on every SSL URL ... but could be made to be done
relatively rarely and part of an overall infrastructure that directly
relates to something the end-user might find meaningful.

bits and pieces of the infrastructure are already there. for instance
there is already support for automatically entering userid/password on
specific web forms. using bits and pieces of that repository could
provide the ability to flag a specific web form as approved/not-approved
for specific sensitive information (like a specific userid/password).

the issue isn't that a simple indicator with 2-4 states isn't useful ...
but the states presented realistically need to mean something to the
user. the locked/unlocked indicator just says that the link is
encrypted. it doesn't indicate that the remote site is the server that
the user thinks it is ... in part because of the way that the
infrastructure has created a disconnect between the URL and what users
actually deal in.

if the browser kept track of whether the user actually hit the keys for
the entering of the URL ... then it might be useful for the browser to
provide a higher level of confidence to the SSL certificate checking
(aka it is only if the user actually typed in the URL ... can there be a
high-level of confidence related to the SSL certificate checking).

one might be tempted to make some grandiose philosophical security
statement ... that there is no real security unless the user is involved
in actually doing some physical operation (at least at some point in
time) to correlate what is meaningful to the user with the internal
infrastructure. the original SSL scheme was dependent on the user
actually typing in the URL.

this is somewhat analogous to the confusion that seems to have cropped
up in the past with respect to the difference between digital signature
and human signature.
http://www.garlic.com/~lynn/subpubkey.html#signature

x9.59
http://www.garlic.com/~lynn/index.html#x959
http://www.garlic.com/~lynn/subpubkey.html#x959

could actually have digital signature applied to a retail transaction at
point-of-sale as means of 

Re: another feature RNGs could provide

2005-12-27 Thread David Malone
On Mon, Dec 26, 2005 at 12:51:37PM +, Ben Laurie wrote:
> > The other day I was thinking of using a very large key to select a
> > permutation at random from the symmetric group S_(2^x).  That would be
> > a group, but I don't see how you knowing that I'm using a random
> > permutation would help you at all.

> Surely if you do this, then there's a meet-in-the-middle attack: for a
> plaintext/ciphertext pair, P, C, I choose random keys to encrypt P and
> decrypt C. If E_A(P)=D_B(C), then your key was A.B, which reduces the
> strength of your cipher from 2^x to 2^(x/2)?

S_n has size n!, so the size of the keyspace is (2^x)!. The thing
is that if you compose two of these the resulting key space is of
size (2^x)!, because you've already got all possible permutations,
so you gain nothing from it.

Usually a cypher is a small subset of the set of all possible
permutations, so composing the permutations may result in a bigger
subset. If the subset turns out to be a subgroup, then you gain
nothing, because a subgroup would be closed under composition.
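
(That closure claim is easy to sanity-check exhaustively at a toy
size - throwaway C over S_4, whose 24 permutations stand in for the
(2^x)! keys:)

    #include <stdio.h>
    #include <string.h>

    /* sketch: enumerate S_4, compose every pair of permutations,
       and count how often each composite occurs.  Every permutation
       shows up exactly 24 times, so composing two uniformly random
       permutations just gives one uniformly random permutation. */
    static int perms[24][4];
    static int nperms = 0;

    static void gen(int *p, int k)
    {
        int i, t;
        if (k == 4) { memcpy(perms[nperms++], p, 4 * sizeof(int)); return; }
        for (i = k; i < 4; i++) {
            t = p[k]; p[k] = p[i]; p[i] = t;
            gen(p, k + 1);
            t = p[k]; p[k] = p[i]; p[i] = t;
        }
    }

    int main(void)
    {
        int base[4] = {0, 1, 2, 3}, hits[24] = {0};
        int a, b, x, i, c[4];

        gen(base, 0);
        for (a = 0; a < 24; a++)
            for (b = 0; b < 24; b++) {
                for (x = 0; x < 4; x++)        /* c = "A then B" */
                    c[x] = perms[b][perms[a][x]];
                for (i = 0; i < 24; i++)
                    if (memcmp(perms[i], c, 4 * sizeof(int)) == 0)
                        hits[i]++;
            }
        for (i = 0; i < 24; i++)
            printf("permutation %2d occurs %2d times\n", i, hits[i]);
        return 0;
    }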

In the case of having a plaintext/cyphertext pair in a cypher where
the key can be any possible permutation, knowing E(P) = C tells you
nothing except D(C) = P and E(X) != C for X != P, because the image
of one element tells you nothing about the others.

David.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: another feature RNGs could provide

2005-12-27 Thread David Malone
On Tue, Dec 27, 2005 at 03:26:59AM -0600, Travis H. wrote:
> On 12/26/05, Ben Laurie <[EMAIL PROTECTED]> wrote:
> > Surely if you do this, then there's a meet-in-the-middle attack: for a
> > plaintext/ciphertext pair, P, C, I choose random keys to encrypt P and
> > decrypt C. If E_A(P)=D_B(C), then your key was A.B, which reduces the
> > strength of your cipher from 2^x to 2^(x/2)?

> Almost true.  The cardinality of the symmetric group S_(2^x) is
> (2^x)!, so it reduces it from (2^x)! to roughly sqrt((2^x)!).  That's
> still a lot.

I'm fairly sure knowing that E(P) = C reduces the key space from
(2^x)!  to (2^x - 1)!, because you've just got to choose images for
the remaining 2^x - 1 possible blocks.
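
(again checkable by brute force at x = 2: of the 4! = 24
permutations, exactly 3! = 6 send 2 to 3:)

    #include <stdio.h>

    /* sketch: count permutations of {1,2,3,4} with E(2) = 3 by
       trying all images; fixing one image leaves 3! choices for
       the remaining blocks */
    int main(void)
    {
        int e1, e2, e3, e4, count = 0, total = 0;

        for (e1 = 1; e1 <= 4; e1++)
        for (e2 = 1; e2 <= 4; e2++)
        for (e3 = 1; e3 <= 4; e3++)
        for (e4 = 1; e4 <= 4; e4++) {
            if (e1 == e2 || e1 == e3 || e1 == e4 ||
                e2 == e3 || e2 == e4 || e3 == e4)
                continue;              /* not a permutation */
            total++;
            if (e2 == 3)               /* E(2) = 3 */
                count++;
        }
        printf("%d of %d permutations have E(2) = 3\n", count, total);
        return 0;                      /* prints: 6 of 24 */
    }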

I think a problem with Ben's argument is in assuming that knowing
E_A(P)=D_B(C) tells you that your key was A.B. For example, suppose
my key K is the permutation:

1 -> 2
2 -> 3
3 -> 4
4 -> 1

and my P = 2. Now we know E_K(P) = C = 3. Ben guesses A:

1 -> 1
2 -> 3
3 -> 2
4 -> 4

and B:

1 -> 1
2 -> 2
3 -> 3
4 -> 4

He sees that E_A(P) = E_A(2) = 3 = D_B(3), and so assumes that K =
A.B. But A.B = A != K.

(In this example, imagine x = 2, and we label the blocks 00 = 1,
01 = 2, 10 = 3, 11 = 4.)
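
(The example can be checked mechanically - a few lines of C, with
the blocks as array indices and index 0 unused:)

    #include <stdio.h>

    /* sketch: compose the guessed A and B and compare with the
       real key K; they agree at P = 2 but nowhere else, so the
       meet-in-the-middle "hit" does not identify the key */
    int main(void)
    {
        int K[5] = {0, 2, 3, 4, 1};   /* 1->2, 2->3, 3->4, 4->1 */
        int A[5] = {0, 1, 3, 2, 4};   /* guess A: swaps 2 and 3 */
        int B[5] = {0, 1, 2, 3, 4};   /* guess B: the identity */
        int x;

        for (x = 1; x <= 4; x++)
            printf("x=%d  K(x)=%d  (A.B)(x)=%d%s\n",
                   x, K[x], B[A[x]],
                   K[x] == B[A[x]] ? "" : "  <- differs");
        return 0;
    }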

David.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: another feature RNGs could provide

2005-12-27 Thread Ben Laurie
David Malone wrote:
> On Tue, Dec 27, 2005 at 03:26:59AM -0600, Travis H. wrote:
>> On 12/26/05, Ben Laurie <[EMAIL PROTECTED]> wrote:
>>> Surely if you do this, then there's a meet-in-the-middle attack: for a
>>> plaintext/ciphertext pair, P, C, I choose random keys to encrypt P and
>>> decrypt C. If E_A(P)=D_B(C), then your key was A.B, which reduces the
>>> strength of your cipher from 2^x to 2^(x/2)?
> 
>> Almost true.  The cardinality of the symmetric group S_(2^x) is
>> (2^x)!, so it reduces it from (2^x)! to roughly sqrt((2^x)!).  That's
>> still a lot.
> 
> I'm fairly sure knowing that E(P) = C reduces the key space from
> (2^x)!  to (2^x - 1)!, because you've just got to choose images for
> the remaining 2^x - 1 possible blocks.
> 
> I think a problem with Ben's argument is in assuming that knowing
> E_A(P)=D_B(C) tells you that your key was A.B. For example, suppose
> my key K is the permutation:
> 
>   1 -> 2
>   2 -> 3
>   3 -> 4
>   4 -> 1
> 
> and my P = 2. Now we know E_K(P) = C = 3. Ben guesses A:
> 
>   1 -> 1
>   2 -> 3
>   3 -> 2
>   4 -> 4
> 
> and B:
> 
>   1 -> 1
>   2 -> 2
>   3 -> 3
>   4 -> 4
> 
> He sees that E_A(P) = E_A(2) = 3 = D_B(3), and so assumes that K =
> A.B. But A.B = A != K.
> 
> (In this example, imagine x = 2, and we label the blocks 00 = 1,
> 01 = 2, 10 = 3, 11 = 4.)

If you don't have sufficient plain/ciphertext, then of course you can
choose incorrect pairs.

-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/

"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: crypto for the average programmer

2005-12-27 Thread Sidney Markowitz
Ben Laurie wrote:
> For those that really care it'd be cool if someone did a careful
> comparison. It would also be interesting to know why they differ.

One set of comparisons of OpenSSL 0.9.7d and GMP RSA speed from last March was
posted on the GMP discussion mailing list by the GMP developer at

http://swox.com/list-archives/gmp-discuss/2005-March/001575.html

The less than surprising result is that if one of the packages has optimized
assembler language Bignum routines for a platform and the other doesn't, then
the package with the assembler routines is faster.

A few years ago I found that it was not worth the overhead of a JNI call to
have a Java program call out to the OpenSSL library instead of using the Java
BigInteger class, which is written in Java. At the time I thought I was testing
the effect of JNI overhead, assuming that the C bignum implementation in
OpenSSL was of course much faster than one in Java.

I revisited the question sometime in the last six months and was surprised to
find that the JNI calls to OpenSSL were now much faster. It was not a matter of
JNI overhead being large, but that OpenSSL bignum arithmetic had been slow.

Looking into it I found some discussion of how GMP bignum was much faster than
OpenSSL and subsequent revision of the OpenSSL code on the x86 platform. I'm
pretty sure I did these later tests with a newer version of OpenSSL than 0.9.7d
and the x86 performance was about equal to that of GMP, as opposed to GMP being
2.5 to 3 times faster in the March tests.

Recently OpenSSL added a GMP engine to allow GMP to be used for Bignum
arithmetic on platforms where GMP is faster than the OpenSSL library.

Based on the final sentence of the posting to gmp-discuss linked to above, I
would predict that there will continue to be a version race between the two
packages until they both reach some limit of performance, at least on
platforms on which the OpenSSL developers want to maintain parity, letting
the other platforms be serviced by the GMP engine.

Another small thread about this in an OpenSSL mailing list:
http://www.nabble.com/GMP-vs-BigNum-t213669.html

 -- Sidney Markowitz
http://www.sidney.com

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


ADMIN: end of latest SSL discussion

2005-12-27 Thread Perry E. Metzger

The latest round of "SSL and X.509 certs in browsers are broken" has
gone on too long. I kept hoping after weeks people might get bored,
but they haven't. I'm cutting it off for at least a little while.

I'll entertain new postings only if they propose actual solutions
rather than long philosophical discussions of how we went wrong when
we developed notochords or left the ocean or went bipedal or what have
you.

The unending rant can continue in a few weeks after I've forgotten
about this one.

By the way, this does not apply to any sort of actual technical
discussion (like the discussion of which bignum implementations are
fastest).

Perry
-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: crypto for the average programmer

2005-12-27 Thread Jack Lloyd
On Tue, Dec 27, 2005 at 02:28:07PM +, Ben Laurie wrote:

> Apparently this rather depends on platform and compiler options. I am
> reliably informed that GMP is not always faster.
> 
> For those that really care it'd be cool if someone did a careful
> comparison. It would also be interesting to know why they differ.

Thank you for the correction. My statement was primarily on the basis of some
benchmarks I ran at the time I wrote some backend code in Botan to dump crypto
operations to GNU MP and/or OpenSSL when available, and at the time GNU MP
outperformed OpenSSL by a fairly large margin on x86 and Alpha machines (up to
50% on large RSA private key operations; as the keys got smaller the
performance difference reduced, down to basically nothing at 512 bit
keys). However I have since checked my changelogs and realized I must have run
those tests almost two years ago now (which surprised me a bit!), so I'm sure
those results are not particularly reflective of current performance. I'll have
to revisit this and see how things stack up these days on the platforms I care
about.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]