Re: security questions

2008-08-08 Thread John Ioannidis

[EMAIL PROTECTED] wrote:

> John Ioannidis wrote:
> | Does anyone know how this "security questions" disease started, and why
> | it is spreading the way it is?  If your company does this, can you find
> | the people responsible and ask them what they were thinking?
>
> The answer is Help Desk Call Avoidance; allow the end-user to fix
> their own account without having to get someone on the phone. This is
> simply an available mechanism in the spectrum between easy-to-use and
> rock-solid security.

As the discussion so far indicates, and as published papers show, the
security of these "security questions" is lower than the security of
the password.

> | My theory is that no actual security people have ever been involved, and
> | that it's just another one of those stupid design practices that are
> | perpetuated because nobody has ever complained or that's what
> | everybody is doing.
>
> Your theory is incorrect. There is considerable analysis on what

Can you reference it please?  There has been some analysis on the
entropy of passphrases as a password replacement, but it is not relevant.

> constitute good security questions based on the anticipated entropy of
> the responses. This is why, for example, no good security question has a
> yes/no answer (i.e., 1-bit). Aren't security questions just an
> automation of what happens once you get a customer service
> representative on the phone? In some regards they may be more secure as
> they're less subject to social manipulation (i.e., if I mention a few
> possible answers to a customer support person, I can probably get them
> to confirm an answer for me).

The difference is that when you are interfacing with a human, you have
to go through a low-speed interface, namely, voice. In that respect,
a security question, coupled with a challenge about recent transactions,
makes for adequate security.  The on-line version of the security
question is vulnerable to automated dictionary attacks.


/ji

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


More man-in-the-middle'd SSL sessions on the way

2008-08-08 Thread Jerrold Leichter

From an article about WAN optimization appliances in Computerworld:

    In some markets, such as health and finance, [hiring] a managed
    provider [who will do the encryption outside your routers] isn't a
    good option for another reason: Because data is optimized in an
    unencrypted state, privacy and security concerns arise. But vendors
    such as Riverbed, Juniper Networks and Blue Coat Systems can serve as
    a trusted man in the middle for optimizing data encrypted with SSL,
    which is commonly used in applications with Web interfaces and other
    Internet traffic. They terminate the encrypted session, decrypt,
    optimize and then re-encrypt and forward the traffic.  [Gartner's
    Joe] Skorupa said most vendors are developing this useful capability.

It may indeed be a useful capability - but widespread use will destroy  
what little is left of the SSL trust model.


-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Ben Laurie
Security Advisory (08-AUG-2008) (CVE-2008-3280)
===

Ben Laurie of Google's Applied Security team, while working with an
external researcher, Dr. Richard Clayton of the Computer Laboratory,
Cambridge University, found that various OpenID Providers (OPs) had
TLS Server Certificates that used weak keys, as a result of the Debian
Predictable Random Number Generator (CVE-2008-0166).

In combination with the DNS Cache Poisoning issue (CVE-2008-1447) and
the fact that almost all SSL/TLS implementations do not consult CRLs
(currently an untracked issue), this means that it is impossible to
rely on these OPs.

Attack Description
--

In order to mount an attack against a vulnerable OP, the attacker
first finds the private key corresponding to the weak TLS
certificate. He then sets up a website masquerading as the original
OP, both for the OpenID protocol and also for HTTP/HTTPS.

Then he poisons the DNS cache of the victim to make it appear that his
server is the true OpenID Provider.

There are two cases, one is where the victim is a user trying to
identify themselves, in which case, even if they use HTTPS to ensure
that the site they are visiting is indeed their provider, they will be
unable to detect the substitution and will give their login
credentials to the attacker.

The second case is where the victim is the Relying Party (RP). In this
case, even if the RP uses TLS to connect to the OP, as is recommended
for higher assurance, he will not be defended, as the vast majority of
OpenID implementations do not check CRLs, and will, therefore, accept
the malicious site as the true OP.

Mitigation
--

Mitigation is surprisingly hard. In theory the vulnerable site should
revoke their weak certificate and issue a new one.

However, since the CRLs will almost certainly not be checked, this
means the site will still be vulnerable to attack for the lifetime of
the certificate (and perhaps beyond, depending on user
behaviour). Note that shutting down the site DOES NOT prevent the
attack.

Therefore mitigation falls to other parties.

1. Browsers must check CRLs by default.

2. OpenID libraries must check CRLs.

3. DNS caching resolvers must be patched against the poisoning attack.

4. Until either 1 and 2 or 3 have been done, OpenID cannot be trusted
   for any OP that cannot demonstrate it has never had a weak
   certificate.
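
As an illustration of mitigations 1 and 2, the following is a minimal
sketch (modern Python, with placeholder host and file names -- not part
of any existing implementation) of enabling CRL checking for a TLS
connection:

import socket
import ssl

# Illustrative only: reject the peer if its leaf certificate appears on a
# CRL.  "ca-bundle.pem" and "op-ca.crl.pem" are placeholders; a CRL for the
# issuing CA must be loaded or the handshake will fail.
ctx = ssl.create_default_context(cafile="ca-bundle.pem")
ctx.load_verify_locations("op-ca.crl.pem")        # CRL(s) in PEM form
ctx.verify_flags |= ssl.VERIFY_CRL_CHECK_LEAF     # check the leaf against the CRL

host = "openid.example.net"                       # placeholder OP host
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print("verified peer:", tls.getpeercert()["subject"])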

Discussion
--

Normally, when security problems are encountered with a single piece
of software, the responsible thing to do is to wait until fixes
are available before making any announcement. However, as a number of
examples in the past have demonstrated, this approach does not work
particularly well when many different pieces of software are involved
because it is necessary to coordinate a simultaneous release of the
fixes, whilst hoping that the very large number of people involved
will cooperate in keeping the vulnerability secret.

In the present situation, the fixes will involve considerable
development work in adding CRL handling to a great many pieces of
openID code. This is a far from trivial amount of work.

The fixes will also involve changes to browser preferences to ensure
that CRLs are checked by default -- which many vendors have resisted
for years. We are extremely pessimistic that a security vulnerability
in OpenID will be seen as sufficiently important to change the browser
vendors' minds.

Hence, we see no value in delaying this announcement; and by making
the details public as soon as possible, we believe that individuals
who rely on OpenID will be better able to take their own individual
steps to avoid relying upon the flawed certificates we have
identified.

OpenID is at heart quite a weak protocol, when used in its most
general form[1], and consequently there is very limited reliance upon
its security. This means that the consequences of the combination of
attacks that are now possible are nothing like as serious as might
otherwise have been the case.

However, it does give an insight into the type of security disaster
that may occur in the future if we do not start to take CRLs
seriously, but merely stick them onto to-do lists or disable them in
the name of tiny performance improvements.

Affected Sites
--

There is no central registry of OpenID systems, and so we cannot be
sure that we have identified all of the weak certificates that are
currently being served. The list of those we have found so far is:

openid.sun.com
www.xopenid.net
openid.net.nz

Notes
-

[1] There are ways of using OpenID that are significantly more secure
than the commonly deployed scheme; I shall describe those in a
separate article.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: security questions

2008-08-08 Thread Leichter, Jerry
|  | My theory is that no actual security people have ever been involved, and
|  | that it's just another one of those stupid design practices that are
|  | perpetuated because nobody has ever complained or that's what
|  | everybody is doing.
|  
|  Your theory is incorrect. There is considerable analysis on what
| 
| Can you reference it please?  There has been some analysis on the
| entropy of passphrases as a password replacement, but it is not
| relevant.
RSA sells a product that is based on such research.  I don't have
references; perhaps someone else does.

I think the accurate statement here is:  There's been some research on
this matter, and there are some reasonable implementations out there;
but there are also plenty of me-too implementations that are quite
worthless.

In fact, I've personally never run into an implementation that I would
not consider worthless.  (Oddly, the list of questions that started
this discussion is one of the better ones I've seen.  Unfortunately,
what it demonstrates is that producing a useful implementation with
a decent amount of total entropy probably involves more setup time
than the average user will want to put up with.)

|  constitute good security questions based on the anticipated entropy
|  of the responses. This is why, for example, no good security
|  question has a yes/no answer (i.e., 1-bit). Aren't security
|  questions just an automation of what happens once you get a customer
|  service representative on the phone? In some regards they may be
|  more secure as they're less subject to social manipulation (i.e., if
|  I mention a few possible answers to a customer support person, I can
|  probably get them to confirm an answer for me).
| The difference is that when you are interfacing with a human, you have
| to go through a low-speed interface, namely, voice. In that respect, a
| security question, coupled with a challenge about recent transactions,
| makes for adequate security.  The on-line version of the security
| question is vulnerable to automated dictionary attacks.
Actually, this cuts both ways.  Automated interfaces generally require
exact matches; at most, they will be case-blind.  This is appropriate
and understood for passwords.  It is inappropriate for what people
perceive as natural-text questions and answers.  When I first started
running into such systems, when asked for where I was born, I would
answer "New York" - or maybe "New York City", or maybe "NY" or "NYC".
I should have thought about the consequences of providing a natural-
text answer to a natural-text question - but I didn't.  Sure enough,
when I actually needed to reset my password - I ended up getting locked
out of the system because there was no way I could remember, 6 months
later, what exact answer I'd given.

A human being is more forgiving.  This makes the system more vulnerable
to social engineering - but it makes it actually useable.  The
tradeoff here is very difficult to make.  By its nature, a secondary
access system will be rarely used.  People may, by dint of repetition,
learn to parrot back exact answers, even a random bunch of characters,
if they have to use them every day.  There's no way anything but a
fuzzy match on meaning will work for an answer people have to give
once every couple of months - human memory simply doesn't work that
way.
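
For concreteness, the kind of loosened matching this implies might look
something like the sketch below (illustrative Python; the alias table is
an assumption, not a description of any real system):

import re

ALIASES = {"nyc": "new york", "new york city": "new york"}  # assumed aliases

def normalize(answer: str) -> str:
    # lower-case, strip punctuation and extra whitespace, then map aliases
    a = re.sub(r"[^a-z0-9 ]", "", answer.strip().lower())
    a = re.sub(r"\s+", " ", a)
    return ALIASES.get(a, a)

def answers_match(given: str, stored: str) -> bool:
    return normalize(given) == normalize(stored)

# answers_match("N.Y.C.", "New York") -> True; an exact-match system says False.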

I learned my lesson and never provide actual answers to these questions
any more.
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Eric Rescorla
At Fri, 8 Aug 2008 11:50:59 +0100,
Ben Laurie wrote:
 However, since the CRLs will almost certainly not be checked, this
 means the site will still be vulnerable to attack for the lifetime of
 the certificate (and perhaps beyond, depending on user
 behaviour). Note that shutting down the site DOES NOT prevent the
 attack.
 
 Therefore mitigation falls to other parties.
 
 1. Browsers must check CRLs by default.

Isn't this a good argument for blacklisting the keys on the client
side?

-Ekr

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


UK e-passport cloned

2008-08-08 Thread Perry E. Metzger

Quoting:

New microchipped passports designed to be foolproof against
identity theft can be cloned and manipulated in minutes and
accepted as genuine by the computer software recommended for use
at international airports.

Tests for The Times exposed security flaws in the microchips
introduced to protect against terrorism and organised crime. The
flaws also undermine claims that 3,000 blank passports stolen last
week were worthless because they could not be forged.

http://www.timesonline.co.uk/tol/news/uk/crime/article4467106.ece

Hat tip: Bruce Schneier's blog

-- 
Perry E. Metzger[EMAIL PROTECTED]

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Dave Korn
Eric Rescorla wrote on 08 August 2008 16:06:

 At Fri, 8 Aug 2008 11:50:59 +0100,
 Ben Laurie wrote:
 However, since the CRLs will almost certainly not be checked, this
 means the site will still be vulnerable to attack for the lifetime of
 the certificate (and perhaps beyond, depending on user
 behaviour). Note that shutting down the site DOES NOT prevent the attack.
 
 Therefore mitigation falls to other parties.
 
 1. Browsers must check CRLs by default.
 
 Isn't this a good argument for blacklisting the keys on the client
 side?

  Isn't that exactly what Browsers must check CRLs means in this context
anyway?  What alternative client-side blacklisting mechanism do you suggest?

cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Eric Rescorla
At Fri, 8 Aug 2008 17:31:15 +0100,
Dave Korn wrote:
 
 Eric Rescorla wrote on 08 August 2008 16:06:
 
  At Fri, 8 Aug 2008 11:50:59 +0100,
  Ben Laurie wrote:
  However, since the CRLs will almost certainly not be checked, this
  means the site will still be vulnerable to attack for the lifetime of
  the certificate (and perhaps beyond, depending on user
  behaviour). Note that shutting down the site DOES NOT prevent the attack.
  
  Therefore mitigation falls to other parties.
  
  1. Browsers must check CRLs by default.
  
  Isn't this a good argument for blacklisting the keys on the client
  side?
 
   Isn't that exactly what Browsers must check CRLs means in this context
 anyway?  What alternative client-side blacklisting mechanism do you suggest?

It's easy to compute all the public keys that will be generated
by the broken PRNG. The clients could embed that list and refuse
to accept any certificate containing one of them. So, this
is distinct from CRLs in that it doesn't require knowing 
which servers have which cert...
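
For concreteness, a client-side check along those lines might look like
the following sketch (illustrative Python; the fingerprint file is a
placeholder, and the real list would be precomputed by running the broken
PRNG over every possible seed):

import hashlib

def load_weak_fingerprints(path="weak-rsa1024.sha1"):
    # one hex SHA-1 of a weak key's SubjectPublicKeyInfo per line (placeholder file)
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

WEAK_KEYS = load_weak_fingerprints()

def key_is_blacklisted(spki_der: bytes) -> bool:
    # reject the certificate if its public key matches a known weak key
    return hashlib.sha1(spki_der).hexdigest() in WEAK_KEYS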

-Ekr

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Leichter, Jerry
On Fri, 8 Aug 2008, Dave Korn wrote:
|  Isn't this a good argument for blacklisting the keys on the client
|  side?
| 
| Isn't that exactly what Browsers must check CRLs means in this
| context anyway?  What alternative client-side blacklisting mechanism
| do you suggest?
Since the list of bad keys is known and fairly short, one could
explicitly check for them in the browser code, without reference to
any external CRL.

Of course, the browser itself may not see the bad key - it may see a key
for something that *contains* a bad key.  So such a check would not be
complete.  Still, it couldn't hurt.

One could put similar checks everywhere that keys are used.  Think of it
as the modern version of code that checks for and rejects DES weak and
semi-weak keys.  The more code out there that does the check, the faster
bad keys will be driven out of use.

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Dave Korn
Eric Rescorla wrote on 08 August 2008 17:58:

 At Fri, 8 Aug 2008 17:31:15 +0100,
 Dave Korn wrote:
 
 Eric Rescorla wrote on 08 August 2008 16:06:
 
 At Fri, 8 Aug 2008 11:50:59 +0100,
 Ben Laurie wrote:
 However, since the CRLs will almost certainly not be checked, this
 means the site will still be vulnerable to attack for the lifetime of
 the certificate (and perhaps beyond, depending on user
 behaviour). Note that shutting down the site DOES NOT prevent the
 attack. 
 
 Therefore mitigation falls to other parties.
 
 1. Browsers must check CRLs by default.
 
 Isn't this a good argument for blacklisting the keys on the client
 side?
 
   Isn't that exactly what Browsers must check CRLs means in this
 context anyway?  What alternative client-side blacklisting mechanism do
 you suggest? 
 
 It's easy to compute all the public keys that will be generated
 by the broken PRNG. The clients could embed that list and refuse
 to accept any certificate containing one of them. So, this
 is distinct from CRLs in that it doesn't require knowing
 which servers have which cert...

scurries off to read CRL format in RFC

  Oh, you can't specify them solely by key, you have to have all the
associated metadata.  That's annoying, yes, I understand your point now.

  IIRC various of the vendors' sshd updates released in the immediate wake
of the Debian catastrophe do indeed block all the weak keys.


cheers,
  DaveK
-- 
Can't think of a witty .sigline today

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Ben Laurie
On Fri, Aug 8, 2008 at 5:57 PM, Eric Rescorla [EMAIL PROTECTED] wrote:
 At Fri, 8 Aug 2008 17:31:15 +0100,
 Dave Korn wrote:

 Eric Rescorla wrote on 08 August 2008 16:06:

  At Fri, 8 Aug 2008 11:50:59 +0100,
  Ben Laurie wrote:
  However, since the CRLs will almost certainly not be checked, this
  means the site will still be vulnerable to attack for the lifetime of
  the certificate (and perhaps beyond, depending on user
  behaviour). Note that shutting down the site DOES NOT prevent the attack.
 
  Therefore mitigation falls to other parties.
 
  1. Browsers must check CRLs by default.
 
  Isn't this a good argument for blacklisting the keys on the client
  side?

   Isn't that exactly what Browsers must check CRLs means in this context
 anyway?  What alternative client-side blacklisting mechanism do you suggest?

 It's easy to compute all the public keys that will be generated
 by the broken PRNG. The clients could embed that list and refuse
 to accept any certificate containing one of them. So, this
 is distinct from CRLs in that it doesn't require knowing
 which servers have which cert...

It also only fixes this single type of key compromise. Surely it is
time to stop ignoring CRLs before something more serious goes wrong?

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Dan Kaminsky



Eric Rescorla wrote:
 At Fri, 8 Aug 2008 17:31:15 +0100,
 Dave Korn wrote:
  Eric Rescorla wrote on 08 August 2008 16:06:
   At Fri, 8 Aug 2008 11:50:59 +0100,
   Ben Laurie wrote:
    However, since the CRLs will almost certainly not be checked, this
    means the site will still be vulnerable to attack for the lifetime of
    the certificate (and perhaps beyond, depending on user
    behaviour). Note that shutting down the site DOES NOT prevent the attack.

    Therefore mitigation falls to other parties.

    1. Browsers must check CRLs by default.

   Isn't this a good argument for blacklisting the keys on the client
   side?

  Isn't that exactly what Browsers must check CRLs means in this context
  anyway?  What alternative client-side blacklisting mechanism do you suggest?

 It's easy to compute all the public keys that will be generated
 by the broken PRNG. The clients could embed that list and refuse
 to accept any certificate containing one of them. So, this
 is distinct from CRLs in that it doesn't require knowing
 which servers have which cert...

Funnily enough I was just working on this -- and found that we'd end up 
adding a couple megabytes to every browser.  #DEFINE NONSTARTER.  I am 
curious about the feasibility of a large bloom filter that fails back to 
online checking though.  This has side effects but perhaps they can be 
made statistically very unlikely, without blowing out the size of a browser.


Updating the filter could then be something we do on a 24 hour 
autoupdate basis.  Doing either this, or doing revocation checking over 
DNS (seriously), is not necessarily a bad idea.  We need to do better 
than we've been.
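
A rough sketch of that filter-plus-fallback shape (illustrative Python;
the size, hash count and online_check hook are placeholders, not a
measured design):

import hashlib

M_BITS = 4_000_000      # filter size in bits; tune against the false-positive target
K_HASHES = 20           # number of hash functions

def _positions(fingerprint: bytes):
    # derive K_HASHES bit positions from the fingerprint
    for i in range(K_HASHES):
        h = hashlib.sha256(bytes([i]) + fingerprint).digest()
        yield int.from_bytes(h[:8], "big") % M_BITS

class WeakKeyFilter:
    def __init__(self):
        self.bits = bytearray(M_BITS // 8 + 1)

    def add(self, fingerprint: bytes):
        for p in _positions(fingerprint):
            self.bits[p // 8] |= 1 << (p % 8)

    def maybe_weak(self, fingerprint: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in _positions(fingerprint))

def key_is_weak(fingerprint: bytes, flt: WeakKeyFilter, online_check) -> bool:
    # "definitely not weak" answers are final; "possibly weak" answers fall
    # back to the (slower) online check, so a false positive costs a lookup,
    # never a wrong verdict.
    return flt.maybe_weak(fingerprint) and online_check(fingerprint)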


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Peter Gutmann
Eric Rescorla [EMAIL PROTECTED] writes:

It's easy to compute all the public keys that will be generated
by the broken PRNG. The clients could embed that list and refuse
to accept any certificate containing one of them. So, this
is distinct from CRLs in that it doesn't require knowing 
which servers have which cert...

You'd also end up with a rather large list for the client to carry around, 
which would be especially problematic for lightweight clients.  You'd need to 
represent it as something like a Bloom filter to avoid this (given that most 
users will just click OK on invalid certs, the small false positive rate 
shouldn't have much effect either :-).

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Perry E. Metzger

Ben Laurie [EMAIL PROTECTED] writes:
 It's easy to compute all the public keys that will be generated
 by the broken PRNG. The clients could embed that list and refuse
 to accept any certificate containing one of them. So, this
 is distinct from CRLs in that it doesn't require knowing
 which servers have which cert...

 It also only fixes this single type of key compromise. Surely it is
 time to stop ignoring CRLs before something more serious goes wrong?

The problem is, the CRL mechanism itself is also dangerous.  Sadly,
clients are required to keep on going if they can't reach a CRL
server. That means that if you DoS the CRL servers or use DNS
attacks to effectively take them offline, you've also effectively
eliminated certificate revocation.
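
The failure mode is easy to sketch (illustrative Python; the URL and the
"parsing" step are stand-ins, not real CRL handling):

import urllib.request

def cert_revoked(serial_hex: str, crl_url: str) -> bool:
    try:
        crl = urllib.request.urlopen(crl_url, timeout=5).read()
    except OSError:
        return False                     # soft fail: unreachable CRL treated as "not revoked"
    return serial_hex.encode() in crl    # stand-in for actually parsing the CRL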

I'm not going to tell you that paying attention to CRLs wouldn't be
better than what happens now, but it will not eliminate the
problem. It is too hard to prove a negative (that is, to prove to
yourself that no revocation exists).

The Kerberos style of having credentials expire very quickly is one
(somewhat less imperfect) way to deal with such things, but it is far
from perfect and it could not be done for the ad-hoc certificate
system https: depends on -- the infrastructure for refreshing all the
world's certs every eight hours doesn't exist, and if it did imagine
the chaos if it failed for a major CA one fine morning.

One also worries about what will happen in the UI when a certificate
has been revoked. If it just says "this cert has been revoked,
continue anyway?" the wrong thing will almost always happen.

Perry
-- 
Perry E. Metzger[EMAIL PROTECTED]

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Eric Rescorla
At Fri, 08 Aug 2008 10:43:53 -0700,
Dan Kaminsky wrote:
 Eric Rescorla wrote:
  It's easy to compute all the public keys that will be generated
  by the broken PRNG. The clients could embed that list and refuse
  to accept any certificate containing one of them. So, this
  is distinct from CRLs in that it doesn't require knowing 
  which servers have which cert...
 Funnily enough I was just working on this -- and found that we'd end up 
 adding a couple megabytes to every browser.  #DEFINE NONSTARTER.  I am 
 curious about the feasibility of a large bloom filter that fails back to 
 online checking though.  This has side effects but perhaps they can be 
 made statistically very unlikely, without blowing out the size of a browser.

Why do you say a couple of megabytes? 99% of the value would be
1024-bit RSA keys. There are ~32,000 such keys. If you devote an
80-bit hash to each one (which is easily large enough to give you a
vanishingly small false positive probability; you could probably get
away with 64 bits), that's 320KB.  Given that the smallest Firefox
build (Windows) is 7.1 MB, this doesn't sound like a nonstarter to me
at all, especially since the browser could download it in the
background.
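
(For the record, the arithmetic behind those figures:)

# size of the embedded hash list for ~32,768 weak 1024-bit RSA keys
for bits_per_entry in (64, 80):
    print(bits_per_entry, "bits/entry ->", 32_768 * bits_per_entry / 8 / 1024, "KB")
# 64 bits/entry -> 256.0 KB;  80 bits/entry -> 320.0 KB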


 Updating the filter could then be something we do on a 24 hour 
 autoupdate basis.  Doing either this, or doing revocation checking over 
 DNS (seriously), is not necessarily a bad idea.  We need to do better 
 than we've been.

Yes, there are a number of approaches to more efficient CRL
checking, I think that's a separate issue.

-Ekr

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Nicolas Williams
On Fri, Aug 08, 2008 at 02:08:37PM -0400, Perry E. Metzger wrote:
 The kerberos style of having credentials expire very quickly is one
 (somewhat less imperfect) way to deal with such things, but it is far
 from perfect and it could not be done for the ad-hoc certificate
 system https: depends on -- the infrastructure for refreshing all the
 world's certs every eight hours doesn't exist, and if it did imagine
 the chaos if it failed for a major CA one fine morning.

The PKIX moral equivalent of Kerberos V tickets would be OCSP Responses.

I understand most current browsers support OCSP.

 One also worries about what will happen in the UI when a certificate
 has been revoked. If it just says "this cert has been revoked,
 continue anyway?" the wrong thing will almost always happen.

No doubt.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Nicolas Williams
On Fri, Aug 08, 2008 at 11:20:15AM -0700, Eric Rescorla wrote:
 At Fri, 08 Aug 2008 10:43:53 -0700,
 Dan Kaminsky wrote:
  Funnily enough I was just working on this -- and found that we'd end up 
  adding a couple megabytes to every browser.  #DEFINE NONSTARTER.  I am 
  curious about the feasibility of a large bloom filter that fails back to 
  online checking though.  This has side effects but perhaps they can be 
  made statistically very unlikely, without blowing out the size of a browser.
 
 Why do you say a couple of megabytes? 99% of the value would be
 1024-bit RSA keys. There are ~32,000 such keys. If you devote an
 80-bit hash to each one (which is easily large enough to give you a
 vanishingly small false positive probability; you could probably get
 away with 64 bits), that's 320KB.  Given that the smallest Firefox
 [...]

You could store {hash, seed} and check matches for false positives
by generating a key with the corresponding seed and then checking for an
exact match -- slow, but rare.  This way you could choose your false
positive rate / table size comfort zone and vary the size of the hash
accordingly.

Nico
-- 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Paul Hoffman

At 1:47 PM -0500 8/8/08, Nicolas Williams wrote:

On Fri, Aug 08, 2008 at 02:08:37PM -0400, Perry E. Metzger wrote:

 The kerberos style of having credentials expire very quickly is one
 (somewhat less imperfect) way to deal with such things, but it is far
 from perfect and it could not be done for the ad-hoc certificate
 system https: depends on -- the infrastructure for refreshing all the
 world's certs every eight hours doesn't exist, and if it did imagine
 the chaos if it failed for a major CA one fine morning.


The PKIX moral equivalent of Kerberos V tickets would be OCSP Responses.

I understand most current browsers support OCSP.


...and only a tiny number of CAs do so.

--Paul Hoffman, Director
--VPN Consortium

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Telephone Phishing

2008-08-08 Thread Perry E. Metzger

I just got called by an autodialer -- the Caller ID was faked (and in
any case didn't point at a real number since area codes don't start
with 0 -- probably a mistake by the scammers).

After I answered, a tape of a cheerful woman informed me this was my
last chance to lower the rate on my credit card, and asked me to
press one to continue.

You can fill in the rest of the script on your own.

I'm sure this happens all the time now and I was just unaware of it,
but it is always more vivid when you see it yourself.

I'm certain this scenario would get enough average people to hand over
their credit card data to more than pay for itself, and smart scammers
are probably using VOIP accounts they got with stolen credit card
numbers to do this anyway.

One can also imagine using this technique for a wide variety of spear
phishing attacks. For example, say you stole a large number of credit
card numbers but didn't have the CVV2s -- you could set up an IVR
system to automatically collect them from your victims.

-- 
Perry E. Metzger[EMAIL PROTECTED]

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: [OpenID] OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Ben Laurie
On Fri, Aug 8, 2008 at 8:27 PM, Eddy Nigg (StartCom Ltd.)
[EMAIL PROTECTED] wrote:
 Ben Laurie:

 On Fri, Aug 8, 2008 at 12:44 PM, Eddy Nigg (StartCom Ltd.)
 [EMAIL PROTECTED] wrote:


 This affects any web site and service provider of various natures. It's not
 exclusive for OpenID nor for any other protocol / standard / service! It may
 affect an OpenID provider if it uses a compromised key in combination with
 unpatched DNS servers. I don't understand why OpenID is singled out, since
 it can potentially affect any web site including Google's various services
 (if Google would have used Debian systems to create their private keys).


 OpenID is singled out because I am not talking about a potential
 problem but an actual problem.


 Sorry Ben, but any web site or service (HTTP, SMTP, IMAP, SSH, VPN, etc.)
 which makes use of a compromised key has an actual problem and not a
 potential problem. OpenID as a standard isn't more affected than, let's say,
 XMPP... If there are servers and providers relying on such keys they have a
 real actual problem.

I do not dispute this.

 I don't see your point about OpenID nor did I see
 anything new

The point is I found OpenID servers with weak keys.

 The problem of weak keys should be dealt at the CA level, many which have
 failed to do anything serious about it.

Indeed.

 We have spotted other actual problems in other services. Details will
 be forthcoming at appropriate times.


 I think it's superfluous to single out different services since any service
 making use of the weak keys is affected, with the recent discovery of DNS
 poisoning making the matter worse. I suggest you try a forum which can
 potentially reach many CAs; they in fact have everything at their disposal
 to remove this threat!

If you have a better forum, bring it on.

However, CAs do not have everything at their disposal to remove the
threat. Browsers, OpenID libraries and RPs must also participate.

Just as saying "buffer overflows are bad" has not magically caused all
buffer overflows to be fixed, I am confident that the only way to get
this problem fixed is to chase down all the culprits individually. I
am sure that OpenID is not the only thing with problems, as you say.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Leichter, Jerry
|   Funnily enough I was just working on this -- and found that we'd
|   end up adding a couple megabytes to every browser.  #DEFINE
|   NONSTARTER.  I am curious about the feasibility of a large bloom
|   filter that fails back to online checking though.  This has side
|   effects but perhaps they can be made statistically very unlikely,
|   without blowing out the size of a browser.
|  Why do you say a couple of megabytes? 99% of the value would be
|  1024-bit RSA keys. There are ~32,000 such keys. If you devote an
|  80-bit hash to each one (which is easily large enough to give you a
|  vanishingly small false positive probability; you could probably get
|  away with 64 bits), that's 320KB.  Given that the smallest Firefox
|  [...]
You can get by with a lot less than 64 bits.  People see problems like
this and immediately think birthday paradox, but there is no birthday
paradox here:  You aren't looking for pairs in an ever-growing set,
you're looking for matches against a fixed set.  If you use 30-bit
hashes - giving you about a 120KB table - the chance that any given
key happens to hash to something in the table is one in a billion,
now and forever.  (Of course, if you use a given key repeatedly, and
it happens to be that 1 in a billion, it will hit every time.  So an
additional table of known good keys that happen to collide is worth
maintaining.  Even if you somehow built and maintained that table for
all the keys across all the systems in the world - how big would it
get, if only 1 in a billion keys world-wide got entered?)

| You could store {hash, seed} and check matches for false positives
| by generating a key with the corresponding seed and then checking for an
| exact match -- slow, but rare.  This way you could choose your false
| positive rate / table size comfort zone and vary the size of the hash
| accordingly.
Or just go off to one of a number of web sites that have a full table.
Many solutions are possible, when they only need to be invoked very,
very rarely.
-- Jerry


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Tim Dierks
[Sorry for duplicates, but I got multiple requests for a non-HTML
version, and I didn't want to fork the thread. Also sorry for
initially sending HTML; I didn't realize it was so abhorrent these
days. ]

On Fri, Aug 8, 2008 at 1:43 PM, Dan Kaminsky [EMAIL PROTECTED] wrote:

 It's easy to compute all the public keys that will be generated
 by the broken PRNG. The clients could embed that list and refuse
 to accept any certificate containing one of them. So, this
 is distinct from CRLs in that it doesn't require knowing which servers have 
 which cert...

 Funnily enough I was just working on this -- and found that we'd end up 
 adding a couple megabytes to every browser.  #DEFINE NONSTARTER.  I am 
 curious about the feasibility of a large bloom filter that fails back to 
 online checking though.  This has side effects but perhaps they can be made 
 statistically very unlikely, without blowing out the size of a browser.

Using this Bloom filter calculator:
http://www.cc.gatech.edu/~manolios/bloom-filters/calculator.html ,
plus the fact that there are 32,768 weak keys for every key type &
size, I get various sizes of necessary Bloom filter, based on how many
key types/sizes you want to check and various false positive rates:
 * 3 key types/sizes with 1e-6 false positive rate: 2826759 bits = 353 KB
 * 3 key types/sizes with 1e-9 false positive rate: 4240139 bits = 530 KB
 * 7 key types/sizes with 1e-6 false positive rate: 6595771 bits = 824 KB
 * 7 key types/sizes with 1e-9 false positive rate: 9893657 bits = 1237 KB
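
For reference, those sizes follow from the usual optimal-Bloom-filter
formula m = -n*ln(p)/(ln 2)^2; this quick check (illustrative Python)
reproduces them to within rounding:

import math

def bloom_bits(n, p):
    # minimum filter size in bits for n items at false-positive rate p
    return math.ceil(-n * math.log(p) / math.log(2) ** 2)

for types in (3, 7):
    for p in (1e-6, 1e-9):
        m = bloom_bits(types * 32_768, p)
        print(f"{types} key types/sizes, p={p:g}: {m} bits ~= {round(m / 8000)} KB")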

I presume that the first 3 & first 7 key type/sizes in this list
http://metasploit.com/users/hdm/tools/debian-openssl/ are the best to
incorporate into the filter.

Is there any chance it would be feasible to get a list of all the weak
keys that were actually certified by browser-installed CAs, or those
weak certificates? Presumably, this list would be much smaller and
would be more effectively distributed in Bloom filter form.

 - Tim

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Ben Laurie
On Fri, Aug 8, 2008 at 7:54 PM, Tim Dierks [EMAIL PROTECTED] wrote:
 Using this Bloom filter calculator:
 http://www.cc.gatech.edu/~manolios/bloom-filters/calculator.html , plus the
 fact that there are 32,768 weak keys for every key type & size, I get
 various sizes of necessary Bloom filter, based on how many key type / sizes
 you want to check and various false positive rates:
  * 3 key types/sizes with 1e-6 false positive rate: 2826759 bits = 353 KB
  * 3 key types/sizes with 1e-9 false positive rate: 4240139 bits = 530 KB
  * 7 key types/sizes with 1e-6 false positive rate: 6595771 bits = 824 KB
  * 7 key types/sizes with 1e-9 false positive rate: 9893657 bits = 1237 KB

 I presume that the first 3 & first 7 key type/sizes in this list
 http://metasploit.com/users/hdm/tools/debian-openssl/ are the best to
 incorporate into the filter.

 Is there any chance it would be feasible to get a list of all the weak keys
 that were actually certified by browser-installed CAs, or those weak
 certificates? Presumably, this list would be much smaller and would be more
 effectively distributed in Bloom filter form.

Or as a CRL :-)

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Nicolas Williams
On Fri, Aug 08, 2008 at 12:35:43PM -0700, Paul Hoffman wrote:
 At 1:47 PM -0500 8/8/08, Nicolas Williams wrote:
 On Fri, Aug 08, 2008 at 02:08:37PM -0400, Perry E. Metzger wrote:
  The kerberos style of having credentials expire very quickly is one
  (somewhat less imperfect) way to deal with such things, but it is far
  from perfect and it could not be done for the ad-hoc certificate
  system https: depends on -- the infrastructure for refreshing all the
  world's certs every eight hours doesn't exist, and if it did imagine
  the chaos if it failed for a major CA one fine morning.
 
 The PKIX moral equivalent of Kerberos V tickets would be OCSP Responses.
 
 I understand most current browsers support OCSP.
 
 ...and only a tiny number of CAs do so.

Not that long ago nothing supported OCSP.  If all that's left (ha) is
the CAs then we're in good shape.  (OCSP services can be added without
modifying a CA -- just issue the OCSP Responders their certs and let
them use CRLs as their source of revocation information.)

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Eric Rescorla
At Fri, 8 Aug 2008 15:52:07 -0400 (EDT),
Leichter, Jerry wrote:
 
 |   Funnily enough I was just working on this -- and found that we'd
 |   end up adding a couple megabytes to every browser.  #DEFINE
 |   NONSTARTER.  I am curious about the feasibility of a large bloom
 |   filter that fails back to online checking though.  This has side
 |   effects but perhaps they can be made statistically very unlikely,
 |   without blowing out the size of a browser.
 |  Why do you say a couple of megabytes? 99% of the value would be
 |  1024-bit RSA keys. There are ~32,000 such keys. If you devote an
 |  80-bit hash to each one (which is easily large enough to give you a
 |  vanishingly small false positive probability; you could probably get
 |  away with 64 bits), that's 320KB.  Given that the smallest Firefox
 |  [...]
 You can get by with a lot less than 64 bits.  People see problems like
 this and immediately think birthday paradox, but there is no birthday
 paradox here:  You aren't looking for pairs in an ever-growing set,
 you're looking for matches against a fixed set.  If you use 30-bit
 hashes - giving you about a 120KB table - the chance that any given
 key happens to hash to something in the table is one in a billion,
 now and forever.  (Of course, if you use a given key repeatedly, and
 it happens to be that 1 in a billion, it will hit every time.  So an
 additional table of known good keys that happen to collide is worth
 maintaining.  Even if you somehow built and maintained that table for
 all the keys across all the systems in the world - how big would it
 get, if only 1 in a billion keys world-wide got entered?)

I don't believe your math is correct here. Or rather, it would
be correct if there was only one bad key.

Remember, there are N bad keys and you're using a b-bit hash,
which has 2^b distinct values. If you put N' entries in the
hash table, the probability that a new key will have the
same digest as one of them is N'/(2^b). If b is sufficiently
large to make collisions rare, then N'=~N and we get 
N/(2^b).

To be concrete, we have 2^15 distinct keys, so the
probability of a false positive becomes (2^15)/(2^b) = 2^(15-b).
To get that probability below 1 in a billion, b - 15 = 30, so
you need about 45 bits. I chose 64 because it seemed to me
that a false positive probability of 2^{-48} or so was better.
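
(A quick numeric check of that, for illustration:)

# false-positive probability 2^15 / 2^b for a few hash widths b
for b in (30, 45, 64):
    print(b, "bits ->", 2**15 / 2**b)
# 30 -> ~3.1e-05, 45 -> ~9.3e-10, 64 -> ~1.8e-15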

-Ekr




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Leichter, Jerry
|  You can get by with a lot less than 64 bits.  People see problems
|  like this and immediately think birthday paradox, but there is no
|  birthday paradox here:  You aren't looking for pairs in an
|  ever-growing set, you're looking for matches against a fixed set.
|  If you use 30-bit hashes - giving you about a 120KB table - the
|  chance that any given key happens to hash to something in the table
|  is one in a billion, now and forever.  (Of course, if you use a
|  given key repeatedly, and it happens to be that 1 in a billion, it
|  will hit every time.  So an additional table of known good keys
|  that happen to collide is worth maintaining.  Even if you somehow
|  built and maintained that table for all the keys across all the
|  systems in the world - how big would it get, if only 1 in a billion
|  keys world-wide got entered?)
| I don't believe your math is correct here. Or rather, it would
| be correct if there was only one bad key.
| 
| Remember, there are N bad keys and you're using a b-bit hash, which
| has 2^b distinct values. If you put N' entries in the hash table, the
| probability that a new key will have the same digest as one of them is
| N'/(2^b). If b is sufficiently large to make collisions rare, then
| N'=~N and we get N/(2^b).
| 
| To be concrete, we have 2^15 distinct keys, so the probability of a
| false positive becomes (2^15)/(2^b) = 2^(15-b).  To get that probability
| below 1 in a billion, b - 15 = 30, so you need about 45 bits. I chose 64
| because it seemed to me that a false positive probability of 2^{-48}
| or so was better.
You're right, of course - I considered 32,000 to be vanishingly small
compared to the number of hash values, but of course it isn't.  The
perils of looking at one number just as decimal and the other just in
exponential form...

In any case, I think it's clear that even for extremely conservative
false hit ratios, the table size is quite reasonable.  You wouldn't
want the table on your smart card or RFID chip, perhaps, but there even
a low-end smartphone would have no problems.

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Florian Weimer
* Eric Rescorla:

 Why do you say a couple of megabytes? 99% of the value would be
 1024-bit RSA keys. There are ~32,000 such keys.

There are three sets of keys, for big-endian 32-bit, little-endian
32-bit and little-endian 64-bit.  On top of that, openssl genrsa
generates different keys depending on the existence of $HOME/.rnd (and
-3 creates yet another set of keys, but this is more in the league of
different key length).  If the library is used for key generation
(instead of the command line tool), different keys might result.

On the other hand, the on-disk size would be comparable to the phishing
filter database.

Part of the problem of the CRL approach is that CAs usually have
policies against obtaining private keys and therefore can't prove to the
customer that their keys are compromised.  And adding a CRL entry when
the customer isn't convinced that they've got a problem is probably not
a good idea, either.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Hal Finney
[I feel a little uncomfortable replying with such a wide distribution!]

Getting browsers, or OpenID installations, to check CRLs or use OCSP to
check for freshness is likely to be slow going. At this point I think
the momentum still favors fixing the remaining DNS systems that are
vulnerable to cache poisoning. This turnkey-MITM bug makes OpenSSL bad
certs far more exploitable, as Dan Kaminsky pointed out in his report.
OpenID is just one example of many where this is going to keep happening
as long as DNS is unpatched.

I thought of one possible mitigation that can protect OpenID end users
against remote web sites which have not patched their DNS. OpenID
providers who used weak OpenSSL certs would have to change their URLs
so that their old X.509 CA certs on their old URLs no longer work on the
new ones. This will require all of their clients (users who log in with
their OpenID credentials) to change their identifiers. DNS based MITMs
will not be able to forge messages related to the new identifiers.

Customers can be alerted to this requirement as soon as they log in to
a web site (relying party) whose DNS is NOT hacked; the redirection to
the OpenID provider will give opportunity to notify the customer of the
name change. Making this change may be somewhat inconvenient, but since
OpenID is a relatively new standard, at least it is easier than would
be the case with a more established protocol.

In the other direction of attack, the end user's DNS is poisoned and
he gets redirected to a bogus site in place of the OpenID provider;
that site is then able to provide a valid SSL certificate due to the
OpenSSL weakness, thereby stealing whatever authentication credentials
the user normally sends to his OpenID provider. This is one instance of
the general attack where a user is DNS-misdirected to a bogus copy of
a secure site which unfortunately used weak OpenSSL based certs.

Again, I see fixing the DNS as the path of least resistance here,
especially so since the end user is the one bearing most of the risk,
typically DNS is provided by an ISP or some other agency with a formal
legal relationship, and there is the possibility of liability on the
part of the lax DNS provider. Hopefully we will continue to see rapid
uptake of the DNS fix over the next few weeks.

That still leaves weak-cert OpenID users vulnerable to DNS-unpatched
service providers (OpenID relying parties), and that is where my proposed
mitigation above comes in. By renaming its URLs, an OpenID provider who
had the misfortune to create a weak OpenSSL cert (through no fault of
its own) can save its end users considerable potential grief.

Hal Finney

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]