Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-12 Thread Ben Laurie
On 10 October 2013 17:06, John Kelsey crypto@gmail.com wrote:
 Just thinking out loud

 The administrative complexity of a cryptosystem is overwhelmingly in key 
 management and identity management and all the rest of that stuff.  So 
 imagine that we have a widely-used inner-level protocol that can use strong 
 crypto, but also requires no external key management.  The purpose of the 
 inner protocol is to provide a fallback layer of security, so that even an 
 attack on the outer protocol (which is allowed to use more complicated key 
 management) is unlikely to be able to cause an actual security problem.  On 
 the other hand, in case of a problem with the inner protocol, the outer 
 protocol should also provide protection against everything.

 Without doing any key management or requiring some kind of reliable identity 
 or memory of previous sessions, the best we can do in the inner protocol is 
 an ephemeral Diffie-Hellman, so suppose we do this:

 a.  Generate random a and send aG on curve P256

 b.  Generate random b and send bG on curve P256

 c.  Both sides derive the shared key abG, and then use SHAKE512(abG) to 
 generate an AES key for messages in each direction.

 d.  Each side keeps a sequence number to use as a nonce.  Both sides use 
 AES-CCM with their sequence number and their sending key, and keep track of 
 the sequence number of the most recent message received from the other side.

 The point is, this is a protocol that happens *inside* the main security 
 protocol.  This happens inside TLS or whatever.  An attack on TLS then leads 
 to an attack on the whole application only if the TLS attack also lets you do 
 man-in-the-middle attacks on the inner protocol, or if it exploits something 
 about certificate/identity management done in the higher-level protocol.  
 (Ideally, within the inner protocol, you do some checking of the identity 
 using a password or shared secret or something, but that's application-level 
 stuff the inner and outer protocols don't know about.)

 Thoughts?

AIUI, you're trying to make it so that only active attacks work on the
combined protocol, whereas passive attacks might work on the outer
protocol. In order to achieve this, you assume that your proposed
inner protocol is not vulnerable to passive attacks (I assume the
outer protocol also thinks this is true). Why should we believe the
inner protocol is any better than the outer one in this respect?
Particularly since you're using tainted algorithms ;-).
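For concreteness, the quoted steps a-d can be sketched in a few lines. This is a toy illustration only: a tiny Mersenne-prime multiplicative group stands in for curve P-256, SHAKE256 stands in for the quoted SHAKE512, and the AES-CCM step (d) is omitted.

```python
import hashlib
import secrets

# Toy stand-ins for the quoted protocol: a Mersenne-prime multiplicative
# group replaces curve P-256 and SHAKE256 replaces the quoted SHAKE512.
# Far too small for real use -- illustration only.
P = 2**127 - 1
G = 3

def dh_keypair():
    x = secrets.randbelow(P - 2) + 1      # random exponent
    return x, pow(G, x, P)                # (private, public = g^x mod p)

def derive_keys(shared: int):
    # Step c: stretch SHAKE(abG) into one 128-bit AES key per direction.
    okm = hashlib.shake_256(shared.to_bytes(16, "big")).digest(32)
    return okm[:16], okm[16:]

# Steps a and b: each side generates a random exponent and sends its public value.
a, aG = dh_keypair()
b, bG = dh_keypair()

# Step c: both sides derive the same shared secret, hence the same keys.
alice_keys = derive_keys(pow(bG, a, P))
bob_keys = derive_keys(pow(aG, b, P))
assert alice_keys == bob_keys
# Step d (sequence-number nonces with AES-CCM) is omitted from this sketch.
```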
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Sha3

2013-10-06 Thread Ben Laurie
On 5 October 2013 20:18, james hughes hugh...@mac.com wrote:
 On Oct 5, 2013, at 12:00 PM, John Kelsey crypto@gmail.com wrote:

 http://keccak.noekeon.org/yes_this_is_keccak.html

 From the authors: NIST's current proposal for SHA-3 is a subset of the 
 Keccak family; one can generate the test vectors for that proposal using 
 the Keccak reference code, and this shows that [SHA-3] cannot contain 
 internal changes to the algorithm.

 The process of setting the parameters is an important step in 
 standardization. NIST has done this and the authors state that this has not 
 crippled the algorithm.

 I bet this revelation does not make it to Slashdot…

 Can we put this to bed now?

I have to take issue with this:

The security is not reduced by adding these suffixes, as this is only
restricting the input space compared to the original Keccak. If there
is no security problem on Keccak(M), there is no security problem on
Keccak(M|suffix), as the latter is included in the former.

I could equally argue, to take an extreme example:

The security is not reduced by adding these suffixes, as this is only
restricting the input space compared to the original Keccak. If there
is no security problem on Keccak(M), there is no security problem on
Keccak(preimages of Keccak(42)), as the latter is included in the
former.

In other words, I have to also make an argument about the nature of
the suffix and how it can't have been chosen s.t. it influences the
output in a useful way.

I suspect I should agree with the conclusion, but I can't agree with
the reasoning.


Re: [Cryptography] encoding formats should not be committee'ized

2013-10-01 Thread Ben Laurie
On 1 October 2013 01:10, James A. Donald jam...@echeque.com wrote:

 On 2013-10-01 04:22, Salz, Rich wrote:

 designate some big player to do it, and follow suit?
 Okay that data encoding scheme from Google protobufs or Facebook thrift.
  Done.


 We have a compiler to generate C code from ASN.1 code

 Google has a compiler to generate C code from protobufs source

 The ASN.1 compiler is open source.  Google's compiler is not.


Ahem: https://code.google.com/p/protobuf/downloads/list.


 Further, google is unhappy that too-clever-code gives too-clever
 programmers too much power, and has prohibited its employees from ever
 doing something like protobufs again.


Wat? Sez who?

Re: [Cryptography] RSA equivalent key length/strength

2013-10-01 Thread Ben Laurie
On 30 September 2013 23:24, John Kelsey crypto@gmail.com wrote:

 Maybe you should check your code first?  A couple nist people verified
 that the curves were generated by the described process when the questions
 about the curves first came out.


If you don't quote the message you're replying to, it's hard to guess who
should check what code - perhaps you could elaborate?


  Don't trust us, obviously--that's the whole point of the procedure.  But
 check your code, because the process worked right when we checked it.

 --John

Re: [Cryptography] encoding formats should not be committee'ized

2013-10-01 Thread Ben Laurie
On 1 October 2013 09:46, James A. Donald jam...@echeque.com wrote:

  On 2013-10-01 18:06, Ben Laurie wrote:
 On 1 October 2013 01:10, James A. Donald jam...@echeque.com wrote:

 Further, google is unhappy that too-clever-code gives too-clever
 programmers too much power, and has prohibited its employees from ever
 doing something like protobufs again.


  Wat? Sez who?

Protobufs is code generating code.  Not allowed by google style guide.


News to me - where does it say that?

Re: [Cryptography] [cryptography] TLS2

2013-09-30 Thread Ben Laurie
On 30 September 2013 10:47, Adam Back a...@cypherspace.org wrote:

 I think lack of soft-hosting support in TLS was a mistake - it's another
 reason not to turn on SSL (IPv4 addresses are scarce and can only host one
 SSL domain per IP#, which means it costs more, or a small hosting company
 can only host a limited number of domains, and so has to charge more for
 SSL): and I don't see why it's a cost worth avoiding to include the domain
 in the client hello.  There's an RFC for how to retrofit soft-host support
 via client-hello into TLS but it's not deployed AFAIK.


Boy, are you out of date:
http://en.wikipedia.org/wiki/Server_Name_Indication.

Re: [Cryptography] RSA equivalent key length/strength

2013-09-21 Thread Ben Laurie
On 18 September 2013 22:23, Lucky Green shamr...@cypherpunks.to wrote:

 According to published reports that I saw, NSA/DoD pays $250M (per
 year?) to backdoor cryptographic implementations. I have knowledge of
 only one such effort. That effort involved DoD/NSA paying $10M to a
 leading cryptographic library provider to both implement and set as
 the default the obviously backdoored Dual_EC_DRBG as the default RNG.


Surprise! The leading blah blah was RSA:
http://stream.wsj.com/story/latest-headlines/SS-2-63399/SS-2-332655/.

Re: [Cryptography] PRISM-Proofing and PRISM-Hardening

2013-09-18 Thread Ben Laurie
On 18 September 2013 15:30, Viktor Dukhovni cryptogra...@dukhovni.org wrote:

 On Tue, Sep 17, 2013 at 11:48:40PM -0700, Christian Huitema wrote:

   Given that many real organizations have hundreds of front end
   machines sharing RSA private keys, theft of RSA keys may very well be
   much easier in many cases than broader forms of sabotage.
 
  Or we could make it easy to have one separate RSA key per front end,
 signed
  using the main RSA key of the organization.

 This is only realistic with DANE TLSA (certificate usage 2 or 3),
 and thus will start to be realistic for SMTP next year (provided
 DNSSEC gets off the ground) with the release of Postfix 2.11, and
 with luck also a DANE-capable Exim release.


What's wrong with name-constrained intermediates?



 For HTTPS, there is little indication yet that any of the major
 browsers are likely to implement DANE support in the near future.

 --
 Viktor.

Re: [Cryptography] End to end

2013-09-16 Thread Ben Laurie
On 16 September 2013 18:49, Phillip Hallam-Baker hal...@gmail.com wrote:

 To me the important thing about transparency is that it is possible for
 anyone to audit the key signing process from publicly available
 information. Doing the audit at the relying party end prior to every
 reliance seems a lower priority.


This is a fair point, and we could certainly add on to CT a capability to
post-check the presence of a pre-CT certificate in a log.


 In particular, there are some types of audit that I don't think it is
 feasible to do in the endpoint. The validity of a CT audit is only as good
 as your newest notary timestamp value. It is really hard to guarantee that
 the endpoint is not being spoofed by a PRISM capable adversary without
 going to techniques like quorate checking which I think are completely
 practical in a specialized tracker but impractical to do in an iPhone or
 any other device likely to spend much time turned off or otherwise
 disconnected from the network.


I think the important point is that even infrequently connected devices can
_eventually_ reveal the subterfuge.

Re: [Cryptography] Suite B after today's news

2013-09-10 Thread Ben Laurie
On 10 September 2013 11:29, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:

 Ben Laurie b...@links.org writes:

 We need to get an extension number allocated, since the one it uses
 clashes
 with ALPN.

 It does?  draft-ietf-tls-applayerprotoneg-01 doesn't mention ID 0x10
 anywhere.

 (In any case, encrypt-then-MAC got there first; these Johnny-come-latelys
 can find their own ID to squat on :-).


Feel free to argue the toss with IANA:
http://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml

In the meantime, I suggest getting a new number would be more productive.
Which, apparently, means first getting adopted by the TLS WG.

Alternatively, allocate a random number.

Re: [Cryptography] What TLS ciphersuites are still OK?

2013-09-10 Thread Ben Laurie
On 9 September 2013 22:49, Stephen Farrell stephen.farr...@cs.tcd.ie wrote:


 Hi Ben,

 On 09/09/2013 05:29 PM, Ben Laurie wrote:
  Perry asked me to summarise the status of TLS a while back ... luckily I
  don't have to because someone else has:
 
  http://tools.ietf.org/html/draft-sheffer-tls-bcp-00
 
  In short, I agree with that draft. And the brief summary is: there's only
  one ciphersuite left that's good, and unfortunately it's only available in
  TLS 1.2:
 
  TLS_DHE_RSA_WITH_AES_128_GCM_SHA256

 I don't agree the draft says that at all. It recommends using
 the above ciphersuite. (Which seems like a good recommendation
 to me.) It does not say anything much, good or bad, about any
 other ciphersuite.

 Claiming that all the rest are no good also seems overblown, if
 that's what you meant.


Other than minor variations on the above, all the other ciphersuites have
problems - known attacks, unreviewed ciphers, etc.

If you think there are other ciphersuites that can be recommended -
particularly ones that are available on versions of TLS other than 1.2,
then please do name them.

Re: [Cryptography] What TLS ciphersuites are still OK?

2013-09-10 Thread Ben Laurie
On 10 September 2013 03:59, james hughes hugh...@mac.com wrote:


 On Sep 9, 2013, at 2:49 PM, Stephen Farrell stephen.farr...@cs.tcd.ie
 wrote:

 On 09/09/2013 05:29 PM, Ben Laurie wrote:

 Perry asked me to summarise the status of TLS a while back ... luckily I
 don't have to because someone else has:

 http://tools.ietf.org/html/draft-sheffer-tls-bcp-00

 In short, I agree with that draft. And the brief summary is: there's only
 one ciphersuite left that's good, and unfortunately it's only available in
 TLS 1.2:

 TLS_DHE_RSA_WITH_AES_128_GCM_SHA256

 I retract my previous +1 for this ciphersuite. This is hard-coded 1024-bit
 DHE and 1024-bit RSA.


It is not hard-coded to 1024-bit RSA. I have seen claims that some
platforms hard code DHE to 1024 bits, but I have not investigated these
claims. If true, something should probably be done.

Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-10 Thread Ben Laurie
On 10 September 2013 22:04, Joe Abley jab...@hopcount.ca wrote:

 Suppose Mallory has access to the private keys of CAs which are in the
 browser list or otherwise widely-trusted.

 An on-path attack between Alice and Bob would allow Mallory to terminate
 Alice's TLS connection, presenting an opportunistically-generated
 server-side certificate with signatures that allow it to be trusted by
 Alice without pop-ups and warnings. Instantiating a corresponding session
 with Bob and ALGing the plaintext through with interception is then
 straightforward.


CT makes this impossible to do undetected, of course.

[Cryptography] What TLS ciphersuites are still OK?

2013-09-09 Thread Ben Laurie
Perry asked me to summarise the status of TLS a while back ... luckily I
don't have to because someone else has:

http://tools.ietf.org/html/draft-sheffer-tls-bcp-00

In short, I agree with that draft. And the brief summary is: there's only
one ciphersuite left that's good, and unfortunately it's only available in
TLS 1.2:

TLS_DHE_RSA_WITH_AES_128_GCM_SHA256

Re: [Cryptography] People should turn on PFS in TLS

2013-09-06 Thread Ben Laurie
On 6 September 2013 18:24, Perry E. Metzger pe...@piermont.com wrote:

 On Fri, 6 Sep 2013 18:18:05 +0100 Ben Laurie b...@links.org wrote:
  On 6 September 2013 18:13, Perry E. Metzger pe...@piermont.com
  wrote:
 
   Google is also now (I believe) using PFS on their connections, and
   they handle more traffic than anyone. A connection I just made to
   https://www.google.com/ came out as, TLS 1.2, RC4_128, SHA1,
   ECDHE_RSA.
  
   It would be good to see them abandon RC4 of course, and soon.
  
 
  In favour of what, exactly? We're out of good ciphersuites.

 I thought AES was okay for TLS 1.2? Isn't the issue simply that
 Firefox etc. still use TLS 1.0? Note that this was a TLS 1.2
 connection.


Apart from its fragility, AES-GCM is still OK, yes. The problem is that
there's nothing good left for TLS < 1.2.

Re: [Cryptography] People should turn on PFS in TLS

2013-09-06 Thread Ben Laurie
On 6 September 2013 18:13, Perry E. Metzger pe...@piermont.com wrote:

 Google is also now (I believe) using PFS on their connections, and
 they handle more traffic than anyone. A connection I just made to
 https://www.google.com/ came out as, TLS 1.2, RC4_128, SHA1,
 ECDHE_RSA.

 It would be good to see them abandon RC4 of course, and soon.


In favour of what, exactly? We're out of good ciphersuites.

Re: [Cryptography] People should turn on PFS in TLS (was Re: Fwd: NYTimes.com: N.S.A. Foils Much Internet Encryption)

2013-09-06 Thread Ben Laurie
On 6 September 2013 17:20, Peter Saint-Andre stpe...@stpeter.im wrote:

  Is there a handy list of PFS-friendly
 ciphersuites that I can communicate to XMPP developers and admins so
 they can start upgrading their software and deployments?


Anything with EDH, DHE or ECDHE in the name...
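That rule of thumb can be written down directly; the ciphersuite names below are an illustrative sample, not an exhaustive or authoritative list:

```python
# A literal rendering of the rule of thumb: a suite offers forward secrecy
# if its key-exchange half names ephemeral DH. Sample names for illustration.
SUITES = [
    "TLS_RSA_WITH_AES_128_CBC_SHA",
    "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256",
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
    "SSL_EDH_RSA_WITH_3DES_EDE_CBC_SHA",   # older OpenSSL-style "EDH" name
    "TLS_ECDH_RSA_WITH_AES_128_CBC_SHA",   # static ECDH: no forward secrecy
]

def offers_pfs(suite: str) -> bool:
    kx = suite.split("_WITH_")[0]          # key-exchange part of the name
    return any(tag in kx for tag in ("EDH", "DHE"))  # "DHE" also matches ECDHE

pfs = [s for s in SUITES if offers_pfs(s)]
```

Note that matching on the key-exchange half only is what keeps static ECDH (no "DHE"/"EDH" substring) out of the PFS list.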

Re: [Cryptography] Using Raspberry Pis

2013-09-06 Thread Ben Laurie
On 26 August 2013 22:43, Perry E. Metzger pe...@piermont.com wrote:

 (I would prefer to see hybrid capability systems in such
 applications, like Capsicum, though I don't think any such have been
 ported to Linux and that's a popular platform for such work.)


FWIW, we're working on a Linux port of Capsicum. Help is always welcome :-)

Re: [Cryptography] Hashes into Ciphers

2013-09-04 Thread Ben Laurie
On 4 September 2013 15:49, Perry E. Metzger pe...@piermont.com wrote:

 On Wed, 4 Sep 2013 10:37:12 -0400 Perry E. Metzger
 pe...@piermont.com wrote:
  Phil Karn described a construction for turning any hash function
  into the core of a Feistel cipher in 1991. So far as I can tell,
  such ciphers are actually quite secure, though impractically slow.
 
  Pointers to his original sci.crypt posting would be appreciated, I
  wasn't able to find it with a quick search.

 Answering my own question


 https://groups.google.com/forum/#!original/sci.crypt/tTWR2qIII0s/iDvT3ptY5CEJ

 Note that Karn's construction need not use any particular hash
 function -- he's more or less simply describing how to use a hash
 function of any sort as the heart of a Feistel cipher.


His claim is that it is actually faster than DES, not impractically slow.
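The construction itself is simple enough to sketch. This is an illustrative reimplementation, not Karn's original code: SHA-256 as the round function, eight rounds, 64-byte blocks, and no performance claims.

```python
import hashlib

def _round(key: bytes, half: bytes, i: int) -> bytes:
    # Round function: hash of (key, round number, input half).
    return hashlib.sha256(key + bytes([i]) + half).digest()

def feistel(block: bytes, key: bytes, rounds: int = 8, decrypt: bool = False) -> bytes:
    # Balanced Feistel network over a 64-byte block (two 32-byte halves).
    assert len(block) == 64
    L, R = block[:32], block[32:]
    order = range(rounds - 1, -1, -1) if decrypt else range(rounds)
    for i in order:
        L, R = R, bytes(a ^ b for a, b in zip(L, _round(key, R, i)))
    # Final swap, so decryption is the same network with reversed round order.
    return R + L

key = b"example key"        # hypothetical key for illustration
pt = bytes(range(64))
ct = feistel(pt, key)
assert feistel(ct, key, decrypt=True) == pt
```

The point of the Feistel shape is that the round function never needs to be inverted, which is why any one-way hash can serve as its core.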

Re: [Cryptography] Thoughts about keys

2013-09-01 Thread Ben Laurie
On 25 August 2013 21:29, Perry E. Metzger pe...@piermont.com wrote:

 [Disclaimer: very little in this seems deeply new, I'm just
 mixing it up in a slightly different way. The fairly simple idea I'm
 about to discuss has germs in things like SPKI, Certificate
 Transparency, the Perspectives project, SSH, and indeed dozens of
 other things. I think I even suggested a version of this exact idea
 several times in the past, and others may have as well. I'm not going
 to pretend to make claims of real originality here, I'm more
 interested in thinking about how to get such things quite widely
 deployed, though it would be cool to hear about prior art just in case
 I decide to publish a tech report.]

 One element required to get essentially all messaging on the
 Internet end to end encrypted is a good way to find out what people's
 keys are.

 If I meet someone at a reception at a security conference, they might
 scrawl their email address (al...@example.org) for me on a cocktail
 napkin.

 I'd like to be able to then write to them, say to discuss their
 exciting new work on evading censorship of mass releases of stolen
 government documents using genetically engineered fungal spores to
 disseminate the information in the atmosphere worldwide.

 However, in our new everything is always encrypted world, I'll be
 needing their encryption key, and no one can remember something as
 long as that.

 So, how do I translate al...@example.org into a key?

 Now, the PGP web-of-trust model, which I think is broken, would have
 said check a key server, see if there's a reasonable trust path
 between you and Alice.

 I have an alternative suggestion.

 Say that we have a bunch of (only vaguely) trustworthy organizations
 out in the world. (They can use crypto based log protocols of various
 kinds to make sure you don't _really_ need to trust them, but for the
 moment that part doesn't matter.)

 Say that Alice, at some point in the past, sent an email message,
 signed in her own key, to such an organization's key server, saying in
 effect this is al...@example.org's key.

 At intervals, the trustworthy organization (and others like it) can
 send out email messages to Alice, encrypted in said key, saying Hi
 there! Please reply with a message containing this magic cookie,
 encrypted in our key, signed in yours.

 If a third party needing the key for al...@example.org queries the
 vaguely trusted server, it will then give them information like For
 the past six years, we've been sending al...@example.org emails every
 couple of weeks asking her to reply to demonstrate she controls a
 particular public key, and she always has -- new keys have always been
 signed in the old one, too. Here's a log, including various sorts of
 widely witnessed events and hash chains so that if we were lying about
 this we had to be planning to lie about it for a very long time.

 Now of course, in the real world, who wants to go through the effort
 of hand replying to query messages to establish a key over time?
 Instead, Alice has some actually trusted software running on her
 computer at home.

 She can either leave it to automatically do IMAP queries against her
 mailbox (which could even be GMail or what have you) and reply on her
 behalf, or it could ask her to unlock her key while she's at home in
 the morning having her coffee. However, I think the former is actually
 preferable. We'd rather have an imperfect system that is effortless to
 use but can be bypassed by physically breaking in to someone's home.
 (After all if you did that you probably can bug Alice's hardware
 anyway.)

 Alice probably also needs to make sure someone isn't spoofing her
 replies by accessing her IMAP box and replying for her (using a key
 known to the attacker but presumably not to Alice) and then deleting
 the query, but the mere absence of periodic pings from the trusted
 party may be enough to create suspicion, as might doing one's own
 queries against the trusted parties and noticing that the key isn't
 your own.

 Presumably, of course, there should be a bunch of such servers out
 there -- not so many that the traffic becomes overwhelming, but not so
 few that it is particularly feasible to take the system off the
 air. (One can speculate about distributed versions of such systems as
 well -- that's not today's topic.)

 So, this system has a bunch of advantages:

 1) It doesn't require that someone associated with administrators of
 the domain name you're using for email has to cooperate with deploying
 your key distribution solution. Alice doesn't need her managers to agree
 to let her use the system -- her organization doesn't even need to
 know she's turned it on. Yet, it also doesn't allow just anyone to
 claim to be al...@example.org -- to put in a key, you have to show you
 can receive and reply to emails sent to the mailbox.

 2) You know that, if anyone is impersonating Alice, they had to have
 been planning it for a while. In general, this is 

Re: [Cryptography] PRISM PROOF Email

2013-08-23 Thread Ben Laurie
On 22 August 2013 10:36, Phillip Hallam-Baker hal...@gmail.com wrote:

 Preventing key substitution will require a combination of the CT ideas
 proposed by Ben Laurie (so catenate proof notaries etc) and some form of
 'no key exists' demonstration.


We have already outlined how to make verifiable maps as well as verifiable
logs, which I think is all you need:
http://www.links.org/files/RevocationTransparency.pdf
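The verifiable-log half of that construction rests on Merkle audit paths. A minimal sketch, under simplifying assumptions (SHA-256, power-of-two leaf counts, RFC 6962-style 0x00/0x01 domain-separation prefixes; the verifiable-map side is omitted):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(entry: bytes) -> bytes:
    return h(b"\x00" + entry)            # 0x00 prefix: leaf

def tree_hash(leaves):
    if len(leaves) == 1:
        return leaves[0]
    mid = len(leaves) // 2
    return h(b"\x01" + tree_hash(leaves[:mid]) + tree_hash(leaves[mid:]))

def audit_path(leaves, idx):
    # Sibling hashes from the leaf up to the root, lowest level first.
    if len(leaves) == 1:
        return []
    mid = len(leaves) // 2
    if idx < mid:
        return audit_path(leaves[:mid], idx) + [("R", tree_hash(leaves[mid:]))]
    return audit_path(leaves[mid:], idx - mid) + [("L", tree_hash(leaves[:mid]))]

def verify(entry, path, root) -> bool:
    node = leaf_hash(entry)
    for side, sib in path:
        node = h(b"\x01" + sib + node) if side == "L" else h(b"\x01" + node + sib)
    return node == root

entries = [b"cert-%d" % i for i in range(8)]   # hypothetical log entries
leaves = [leaf_hash(e) for e in entries]
root = tree_hash(leaves)
assert verify(entries[5], audit_path(leaves, 5), root)
```

An auditor holding only the signed root can then check any entry's presence with a log-length proof, which is what makes the log verifiable rather than merely trusted.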

Re: English 19-year-old jailed for refusal to disclose decryption key

2010-10-06 Thread Ben Laurie
On 6 October 2010 11:57, Ray Dillinger b...@sonic.net wrote:
 a 19-year-old just got a 16-month jail sentence for his refusal to
 disclose the password that would have allowed investigators to see
 what was on his hard drive.

16 weeks, says the article.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Hashing algorithm needed

2010-09-15 Thread Ben Laurie
On 15/09/2010 00:26, Nicolas Williams wrote:
 On Tue, Sep 14, 2010 at 03:16:18PM -0500, Marsh Ray wrote:
 How do you deliver Javascript to the browser securely in the first
 place? HTTP?
 
 I'll note that Ben's proposal is in the same category as mine (which
 was, to remind you, implement SCRAM in JavaScript and use that, with
 channel binding using tls-server-end-point CB type).
 
 It's in the same category because it has the same flaw, which I'd
 pointed out earlier: if the JS is delivered by normal means (i.e., by
 the server), then the script can't be used to authenticate the server.

That's one of the reasons I said it was only good for experimentation.

-- 
http://www.apache-ssl.org/ben.html   http://www.links.org/

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff



Re: Hashing algorithm needed

2010-09-15 Thread Ben Laurie
On 14/09/2010 21:16, Marsh Ray wrote:
 On 09/14/2010 09:13 AM, Ben Laurie wrote:
 Demo here: https://webid.digitalbazaar.com/manage/
 
 This Connection is Untrusted

So? It's a demo.




Re: Hashing algorithm needed

2010-09-14 Thread Ben Laurie
On 14/09/2010 12:29, Ian G wrote:
 On 14/09/10 2:26 PM, Marsh Ray wrote:
 On 09/13/2010 07:24 PM, Ian G wrote:
 
 1. In your initial account creation / login, trigger a creation of a
 client certificate in the browser.

 There may be a way to get a browser to generate a cert or CSR, but I
 don't know it. But you can simply generate it at the server side.
 
 Just to be frank here, I'm also not sure what the implementation details
 are here.  I somewhat avoided implementation until it becomes useful.

FWIW, you can get browsers to generate CSRs and eat the resulting certs.
The actual UIs vary from appalling to terrible.

Of some interest to me is the approach I saw recently (confusingly named
WebID) of a pure Javascript implementation (yes, TLS in JS, apparently),
allowing UI to be completely controlled by the issuer. Ultimately this
approach seems too risky for real use, but it could be used to prototype
UI, perhaps finally leading to something usable in browsers.

Slide deck here: http://payswarm.com/slides/webid/#(1)

(note, videos use flash, I think, so probably won't work for anyone with
their eye on the ball).

Demo here: https://webid.digitalbazaar.com/manage/

Cheers,

Ben.




Re: Hashing algorithm needed

2010-09-09 Thread Ben Laurie
On 9 September 2010 10:08, James A. Donald jam...@echeque.com wrote:
 On 2010-09-09 6:35 AM, Ben Laurie wrote:

 What I do in Nigori for this is use DSA. Your private key, x, is the
 hash of the login info. The server has g^x, from which it cannot
 recover x,

 Except, of course, by dictionary attack, hence g^x, being low
 entropy, is treated as a shared secret.

Indeed, if it is low entropy (I don't think you can assume it is,
though I'll readily agree it is likely to be), then it is subject to a
dictionary attack.



Re: Hashing algorithm needed

2010-09-08 Thread Ben Laurie
On 8 September 2010 16:45, f...@mail.dnttm.ro wrote:

 Hi.

 Just subscribed to this list for posting a specific question. I hope the 
 question I'll ask is in place here.

 We do a web app with an Ajax-based client. Anybody can download the client 
 and open the app, only, the first thing the app does is ask for login.

 The login doesn't happen using form submission, nor does it happen via a 
 known, standard http mechanism.

 What we do is ask the user for some login information, build a hash out of 
 it, then send it to the server and have it verified. If it checks out, a 
 session ID is generated and returned to the client. Afterwards, only requests 
 accompanied by this session ID are answered by the server.

 Right now, the hash sent by the browser to the server is actually not a hash, 
 but the unhashed login info. This has to be changed, of course.

 What we need is a hashing algorithm that:
 - should not generate the same hash every time - i.e. should include some 
 random element
 - should require little code to generate
 - should allow verification of whether two hashes stem from the same login 
 data, without having access to the actual login data

 We need to implement the hashing algorithm in Javascript and the verification 
 algorithm in Java, and it needs to execute reasonably fast, that's why it has 
 to require little code. None of us is really into cryptography, so the best 
 thing we could think of was asking for advice from people who grok the domain.

Well, you can't always get what you want.

What I do in Nigori for this is use DSA. Your private key, x, is the
hash of the login info. The server has g^x, from which it cannot
recover x, and the client does DSA using x.
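The scheme reads roughly as follows in code. A toy sketch: a small Mersenne-prime group instead of real DSA domain parameters, a hypothetical login string, and a Schnorr-style identification standing in for the DSA signature itself.

```python
import hashlib
import secrets

# Toy parameters: a Mersenne-prime group, far too small for real use;
# Nigori would use proper DSA domain parameters.
P = 2**127 - 1
G = 3

def private_key(login_info: bytes) -> int:
    # x = H(login info). As noted downthread, if the login info is low
    # entropy, g^x is open to dictionary attack.
    return int.from_bytes(hashlib.sha256(login_info).digest(), "big")

x = private_key(b"alice@example.org:hunter2")   # hypothetical login info
server_value = pow(G, x, P)   # the server stores g^x, not x

# The client later proves knowledge of x without revealing it
# (Schnorr-style identification standing in for the DSA step):
k = secrets.randbelow(P - 2) + 1
commit = pow(G, k, P)
e = int.from_bytes(hashlib.sha256(commit.to_bytes(16, "big")).digest(), "big")
s = k + e * x
# Verifier's check: g^s == commit * (g^x)^e mod P
assert pow(G, s, P) == (commit * pow(server_value, e, P)) % P
```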



Re: questions about RNGs and FIPS 140

2010-09-05 Thread Ben Laurie
On 27/08/2010 19:38, Joshua Hill wrote:
 The fact is that all of the approved deterministic RNGs have places that
 you are expected to use to seed the generator.  The text of the standard
 explicitly states that you can use non-approved non-deterministic RNGs
 to seed your approved deterministic RNG.

This is nice.

 It's an even better situation if you look at the modern deterministic RNGs
 described in NIST SP800-90. (You'll want to use these, anyway.  They are
 better designs and last I heard, NIST was planning on retiring the other
 approved deterministic RNGs.) Every design in SP800-90 requires that your
 initial seed is appropriately large and unpredictable, and the designs all
 allow (indeed, require!) periodic reseeding in similarly reasonable ways.

Given that we seem to have agreed that unpredictable is kinda hard,
I'm amused that SP800-90 requires it. If it is a requirement then I
wonder why NIST didn't specify how to generate and validate such a seed?

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.links.org/

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff



Re: Merkle Signature Scheme is the most secure signature scheme possible for general-purpose use

2010-09-03 Thread Ben Laurie
On 01/09/2010 22:45, Zooko O'Whielacronx wrote:
 On Wed, Sep 1, 2010 at 2:55 PM, Ben Laurie b...@links.org wrote:
 Or, to put it another way, in order to show that a Merkle signature is
 at least as good as any other, then you'll first have to show that an
 iterated hash is at least as secure as a non-iterated hash (which seems
 like a hard problem, since it isn't).
 
 I'm not sure that I agree with you that security of a hash function
 used once on an arbitrarily large message is likely to be better than
 security of a hash function used a few times iteratively on its own
 outputs.

That's the whole point - a hash function used on an arbitrary message
produces one of its possible outputs. Feed that hash back in and it
produces one of a subset of its possible outputs. Each time you do this,
you lose a little entropy (I can't remember how much, but I do remember
David Wagner explaining it to me when I discovered this for myself quite
a few years ago).

So, on that basis alone, I reject the "most secure possible" argument.
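The shrinking-image effect is easy to see empirically with a deliberately truncated hash (an illustration under assumed toy parameters: a 16-bit truncation of SHA-256 so the whole output space is enumerable; a real hash loses far less per iteration, but the direction is the same):

```python
import hashlib

def h16(x: int) -> int:
    """SHA-256 truncated to 16 bits: a random-looking function on 65536 points."""
    return int.from_bytes(hashlib.sha256(x.to_bytes(2, "big")).digest()[:2], "big")

domain = set(range(2**16))
images = [domain]
for _ in range(3):
    # image of the previous image: a subset of it, typically strictly smaller
    images.append({h16(x) for x in images[-1]})

sizes = [len(s) for s in images]
# each iteration can only shrink the reachable set of outputs
assert sizes[0] > sizes[1] > sizes[2] >= sizes[3]
```

For a random function, one application already covers only about 63% of the space, and feeding outputs back in shrinks the reachable set further each round.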

 But regardless of that, I think the fair comparison here is:
 
 ... show that an iterated hash is more likely to have preimage
 resistance than a non-iterated hash is to have collision-resistance.
 
 And I think it is quite clear that for any real hash function such as
 MD5, SHA-1, Tiger, Ripemd, SHA-2, and the SHA-3 candidates that this
 does hold!
 
 What do you think of that argument?

I think I've failed to understand why you think collisions are not a
problem for Merkle trees.

Also, regardless, you are now talking probabilities and so a claim of
"most secure possible" still doesn't apply.

Merkle trees, probably the best signature in the world? :-)

Cheers,

Ben.



Re: A mighty fortress is our PKI

2010-07-28 Thread Ben Laurie
On 28/07/2010 01:07, Paul Tiemann wrote:
 There is a long list of flyblown metaphors which could similarly be
 got rid of if enough people would interest themselves in the job; and
 it should also be possible to laugh the not un- formation out of
 existence*...
 
 *One can cure oneself of the not un- formation by memorizing this
 sentence: A not unblack dog was chasing a not unsmall rabbit across a
 not ungreen field.

Had he succeeded in this mission, then we could never have had John
Major on Spitting Image being not inconsiderably incandescent.



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Ben Laurie
On 28/07/2010 00:14, Paul Tiemann wrote:
 On Jul 27, 2010, at 3:34 PM, Ben Laurie wrote:
 
 On 24/07/2010 18:55, Peter Gutmann wrote:
 - PKI dogma doesn't even consider availability issues but expects the
  straightforward execution of the condition problem - revoke cert.  For a
  situation like this, particularly if the cert was used to sign 64-bit
  drivers, I wouldn't have revoked because the global damage caused by that 
 is
  potentially much larger than the relatively small-scale damage caused by 
 the
  malware.  So alongside too big to fail we now have too widely-used to
  revoke.  Is anyone running x64 Windows with revocation checking enabled 
 and
  drivers signed by the Realtek or JMicron certs?

 One way to mitigate this would be to revoke a cert on a date, and only
 reject signatures on files you received after that date.
 
 I like that idea, as long as a verifiable timestamp is included.
 
 Without a trusted timestamp, would the bad guy be able to backdate the 
 signature?

Note that I avoided this issue by using the date of receipt.
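The policy under discussion (revoke from a date, judged by the date of receipt) can be sketched like so (a hypothetical data model, not any real PKI or OS API; the revocation date is the 16th mentioned later in the thread):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class CertStatus:
    revoked_on: Optional[date]   # None = not revoked

def accept_signature(cert: CertStatus, received_on: date) -> bool:
    # A signature is rejected only for files *received* after the
    # revocation date; already-deployed signed code keeps working.
    if cert.revoked_on is None:
        return True
    return received_on < cert.revoked_on

realtek = CertStatus(revoked_on=date(2010, 7, 16))
assert accept_signature(realtek, date(2010, 7, 1))       # old file: accepted
assert not accept_signature(realtek, date(2010, 7, 20))  # new file: rejected
```

Keying the decision off the relying party's own receipt date is what sidesteps the backdated-signature problem raised above.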



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Ben Laurie
On 28/07/2010 09:57, Peter Gutmann wrote:
 Ben Laurie b...@links.org writes:
 On 24/07/2010 18:55, Peter Gutmann wrote:
 - PKI dogma doesn't even consider availability issues but expects the
   straightforward execution of the condition problem - revoke cert.  For 
 a
   situation like this, particularly if the cert was used to sign 64-bit
   drivers, I wouldn't have revoked because the global damage caused by that 
 is
   potentially much larger than the relatively small-scale damage caused by 
 the
   malware.  So alongside too big to fail we now have too widely-used to
   revoke.  Is anyone running x64 Windows with revocation checking enabled 
 and
   drivers signed by the Realtek or JMicron certs?

 One way to mitigate this would be to revoke a cert on a date, and only 
 reject 
 signatures on files you received after that date.
 
 This wouldn't make any difference, except for the special case of x64, 
 signatures are only verified on install, so existing installed code isn't 
 affected and anything new that's being installed is, with either form of 
 sig-checking.

Obviously if you are going to change revocation you can also change when
signatures are checked. This hardly seems like a show-stopper.

 In any case though the whole thing is really a moot point given the sucking 
 void that is revocation-handling, the Realtek certificate was revoked on the 
 16th but one of my spies has informed me that as of yesterday it was still 
 regarded as valid by Windows.  Previous experience with revoked certs has 
 been 
 that they remain valid more or less indefinitely (which would be really great 
 if CAs offered something like domain-tasting for certs, you could get as many 
 free certs as you wanted).

Again, citing the failure to use revocation correctly right now does not
tell us anything much about the possibility of using it correctly in the
future.

 The way to revoke a cert for signed malware is to wait 0-12 hours for the 
 malware signature to be added to AV databases, not to actually expect PKI to 
 work.

At which time they release another version? Doesn't sound like the
optimal answer to me.

I find your response strange. You ask how we might fix the problems,
then you respond that since the world doesn't work that way right now,
the fixes won't work. Is this just an exercise in one-upmanship? You
know more ways the world is broken than I do?

Cheers,

Ben.



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Ben Laurie
On 28/07/2010 13:18, Peter Gutmann wrote:
 Ben Laurie b...@links.org writes:
 
 I find your response strange. You ask how we might fix the problems, then 
 you 
 respond that since the world doesn't work that way right now, the fixes 
 won't 
 work. Is this just an exercise in one-upmanship? You know more ways the 
 world 
 is broken than I do?
 
 It's not just that the world doesn't work that way now, it's quite likely 
 that 
 it'll never work that way (for the case of PKI/revocations mentioned in the 
 message, not the original SNI).  We've been waiting for between 20 and 30 
 years (depending on what you define as the start date) for PKI to start 
 working, and your response seems to indicate that we should wait even harder.  
 If I look at the mechanisms we've got now, I can identify that commercial PKI 
 isn't helping, and revocations aren't helping, and work around that.  I'm 
 after effective practical solutions, not just a solution exists, QED 
 solutions.

The core problem appears to be a lack of will to fix the problems, not a
lack of feasible technical solutions.

I don't know why it should help that we find different solutions for the
world to ignore?



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Ben Laurie
On 28/07/2010 14:05, Perry E. Metzger wrote:
 It is not always the case that a dead technology has failed because of
 infeasibility or inapplicability. I'd say that a number of fine
 technologies have failed for other reasons. However, at some point, it
 becomes incumbent upon the proponents of a failed technology to
 either demonstrate that it can be made to work in a clear and
 convincing way, or to abandon it even if, on some level, they are
 certain that it could be made to work if only someone would do it.

To be clear, I am not a proponent of PKI as we know it, and certainly
the current use of PKI to sign software has never delivered any actual
value, and still wouldn't if revocation worked perfectly.

However, using private keys to prove that you are (probably) dealing
with the same entity as yesterday seems like a useful thing to do. And
still needs revocation.

Is there a good replacement for pk for this purpose?



Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Ben Laurie
On 28 July 2010 15:05, Perry E. Metzger pe...@piermont.com wrote:
 On Wed, 28 Jul 2010 14:38:53 +0100 Ben Laurie b...@links.org wrote:
 On 28/07/2010 14:05, Perry E. Metzger wrote:
  It is not always the case that a dead technology has failed
  because of infeasibility or inapplicability. I'd say that a
  number of fine technologies have failed for other reasons.
  However, at some point, it becomes incumbent upon the proponents
  of a failed technology to either demonstrate that it can be made
  to work in a clear and convincing way, or to abandon it even if,
  on some level, they are certain that it could be made to work if
  only someone would do it.

 To be clear, I am not a proponent of PKI as we know it, and
 certainly the current use of PKI to sign software has never
 delivered any actual value, and still wouldn't if revocation worked
 perfectly.

 However, using private keys to prove that you are (probably) dealing
 with the same entity as yesterday seems like a useful thing to do.

 I agree with that fully.

 And still needs revocation.

 Does it?

 I will point out that many security systems, like Kerberos, DNSSEC and
 SSH, appear to get along with no conventional notion of revocation at 
 all.


Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Ben Laurie
On 28/07/2010 15:18, Peter Gutmann wrote:
 Ben Laurie b...@links.org writes:
 
 However, using private keys to prove that you are (probably) dealing with 
 the 
 same entity as yesterday seems like a useful thing to do. And still needs 
 revocation.
 
 It depends on what you mean by revocation, traditional revocation in the PKI 
 sense isn't needed because (well apart from the fact that it doesn't work, 
 you 
 can't un-say something after you've already said it) if you look at what a PK 
 or a cert is, it's just a capability, and the way to revoke (in the 
 capability 
 sense) a capability is to do something like rename the object that the 
 capability refers to or use a level of indirection and break the link when 
 you 
 want to revoke (in the capability sense) the access.  This means that no 
 matter how many copies of the capability are floating around out there and 
 whether the relying party checks CRLs or not, they're not going to be able to 
 get in.

Now you are talking my language! Have I mentioned that my new project at
Google is all about finding good UI for exposing capabilities to users?

 Is there a good replacement for pk for this purpose?
 
 Which purpose?  If you mean securing the distribution channel for binaries, 
 here's a very simple one that doesn't need PK at all, it's called a 
 self-authenticating URL.  To use it you go to your software site, and here's 
 a 
 real-world example that uses it, the Python Package Index, and click on a 
 download link, something like 
 http://pypi.python.org/packages/package.gz#md5= (yeah, I know, it uses 
 MD5...).  This link can point anywhere, because the trusted source of the 
 link 
 includes a hash of the binary (and in this case it's a non-HTTPS source, you 
 can salt and pepper it as required, for example make it an HTTPS link and 
 use 
 key continuity to manage the cert).  In this form the concept is called link 
 fingerprints, it was actually implemented for Firefox as a Google Summer of 
 Code project, but then removed again over concerns that if it was present 
 people might actually use it (!!).  It's still available in DL managers like 
 DownThemAll.

The problem here is that it doesn't directly give me an upgrade path. Of
course, I agree that this is sufficient to give me a link to the right
binary, but what about its successors?
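The link-fingerprint check Peter describes is simple to sketch (illustrative code, not the Firefox implementation; the historical form used #md5=, and SHA-256 is shown here as the obvious modern substitution):

```python
import hashlib
from urllib.parse import urlparse

def check_fingerprint(url: str, payload: bytes) -> bool:
    """Verify a download against the hash carried in the URL fragment."""
    frag = urlparse(url).fragment            # e.g. "sha256=ab12..."
    algo, _, want = frag.partition("=")
    if algo not in hashlib.algorithms_available:
        return False
    return hashlib.new(algo, payload).hexdigest() == want

data = b"pretend this is package.gz"
digest = hashlib.sha256(data).hexdigest()
# the trusted page embeds the hash; the download can come from anywhere
url = f"http://example.org/packages/package.gz#sha256={digest}"
assert check_fingerprint(url, data)
assert not check_fingerprint(url, data + b"tampered")
```

The point of the scheme is that only the page carrying the link needs to be trusted; the mirror serving the bytes does not.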

 Another option is to cryptographically bind the key to the URL, so you again 
 have a trusted link source for your download and the link is to 
 https://base64.downloadsite.com, where base64 is a fingerprint of the 
 cert 
 you get when you connect to the site.  This does away with the need for a CA,
 because the link itself authenticates the cert that's used.

Yes, this is of course the YURL scheme.

 Then there are other variations, cryptographically generated addresses, ... 
 all sorts of things have been proposed.
 
 The killer, again, is the refusal of any browser vendor to adopt any of it.  
 In the case of FF someone actually wrote the code for them, and it was 
 rejected.  Without support from browser vendors, it doesn't matter what cool
 ideas people come up with, it's never going to get any better.
 
 Peter.
 
 




Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Ben Laurie
On 28/07/2010 16:01, Perry E. Metzger wrote:
 On Wed, 28 Jul 2010 15:16:32 +0100 Ben Laurie b...@google.com wrote:
 SSH does appear to have got away without revocation, though the
 nature of the system is s.t. if I really wanted to revoke I could
 almost always contact the users and tell them in person.
 
 No, that's not what SSH does, or rather, it confuses the particular
 communications channel (i.e. some out of band mechanism) with the
 method that actually de-authorizes the key.
 
 The point is that in SSH, if a key is stolen, you remove it from the
 list of keys allowed to log in to a host. The key now need never be
 thought about again. We require no list of revoked keys be kept,
 just as we required no signed list of keys that were authorized. We
 just had some keys in a database to indicate that they were
 authorized, and we removed a key to de-authorize it.

I am referring to the SSH host key. Fully agree for user keys.



Re: A mighty fortress is our PKI

2010-07-27 Thread Ben Laurie
On 27/07/2010 15:11, Peter Gutmann wrote:
 The intent with posting it to the list was to get input from a collection of
 crypto-savvy people on what could be done.  The issue had previously been
 discussed on a (very small) private list, and one of the members suggested I
 post it to the cryptography list to get more input from people.  The follow-up
 message (the Part II one) is in a similar vein, a summary of a problem and
 then some starters for a discussion on what the issues might be.

Haven't we already decided what to do: SNI?



Re: A mighty fortress is our PKI, Part II

2010-07-27 Thread Ben Laurie
On 24/07/2010 18:55, Peter Gutmann wrote:
 - PKI dogma doesn't even consider availability issues but expects the
   straightforward execution of the condition problem - revoke cert.  For a
   situation like this, particularly if the cert was used to sign 64-bit
   drivers, I wouldn't have revoked because the global damage caused by that is
   potentially much larger than the relatively small-scale damage caused by the
   malware.  So alongside too big to fail we now have too widely-used to
   revoke.  Is anyone running x64 Windows with revocation checking enabled and
   drivers signed by the Realtek or JMicron certs?

One way to mitigate this would be to revoke a cert on a date, and only
reject signatures on files you received after that date.

Cheers,

Ben.



Re: Intel to also add RNG

2010-07-14 Thread Ben Laurie
On 12 July 2010 18:13, Jack Lloyd ll...@randombit.net wrote:
 On Mon, Jul 12, 2010 at 12:22:51PM -0400, Perry E. Metzger wrote:

 BTW, let me note that if Intel wanted to gimmick their chips to make
 them untrustworthy, there is very little you could do about it. The
 literature makes it clear at this point that short of carefully
 tearing apart and analyzing the entire chip, you're not going to catch
 subtle behavioral changes designed to allow attackers backdoor
 access. Given that, I see little reason not to trust them on an RNG,
 and I wish they would make it a standard part of the architecture
 already.

 I think it's important to make the distinction between trusting Intel
 not to have made it actively malicious, and trusting them to have
 gotten it perfectly correct in such a way that it cannot fail.
 Fortunately, the second problem, that it is a well-intentioned but
 perhaps slightly flawed RNG [*], could be easily alleviated by feeding
 the output into a software CSPRNG (X9.31, a FIPS 186-3 design, take
 your pick I guess). And the first could be solved by also feeding your
 CSPRNG with anything that you would have fed it with in the absence of
 the hardware RNG - in that case, you're at least no worse off than you
 were before. (Unless your PRNG's security can be negatively affected
 by non-random or maliciously chosen inputs, in which case you've got
 larger problems).

Several years ago I reviewed a new design for FreeBSD's PRNG. It was
vulnerable to sources that had high data rates but low entropy (they
effectively drowned out lower data rate sources). This was fixed,
but I wonder how common an error it is?
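One known mitigation for that class of error is to spread events from each source round-robin over several pools, roughly as Fortuna does; here is a minimal sketch (an assumed design for illustration, not FreeBSD's actual code):

```python
import hashlib

NUM_POOLS = 32

class EntropyAccumulator:
    """Round-robin, per-source pooling: a chatty low-entropy source fills
    its own rotation of pools and cannot displace a slower source's input."""

    def __init__(self):
        self.pools = [hashlib.sha256() for _ in range(NUM_POOLS)]
        self.counters = {}   # per-source round-robin position

    def add_event(self, source: str, data: bytes):
        i = self.counters.get(source, 0)
        # tag with source id and length so events cannot be reinterpreted
        self.pools[i % NUM_POOLS].update(
            source.encode() + len(data).to_bytes(2, "big") + data)
        self.counters[source] = i + 1

    def reseed_material(self) -> bytes:
        return hashlib.sha256(b"".join(p.digest() for p in self.pools)).digest()

acc = EntropyAccumulator()
# a high-rate, low-entropy source...
for n in range(10_000):
    acc.add_event("timer", n.to_bytes(4, "big"))
# ...still leaves room for a low-rate, high-entropy one: both get mixed in
acc.add_event("keyboard", b"\x8f\x11\xa2\xb7")
seed = acc.reseed_material()
assert len(seed) == 32
```

Because mixing is a hash over everything contributed, the high-rate source can only add to the state, never wash out what the slow source supplied.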



Re: Intel to also add RNG

2010-07-12 Thread Ben Laurie
On 2 July 2010 13:19, Eugen Leitl eu...@leitl.org wrote:

 http://www.technologyreview.com/printer_friendly_article.aspx?id=25670&channel=Briefings&section=Microprocessors

 Tuesday, June 29, 2010

 Nanoscale Random Number Circuit to Secure Future Chips

 Intel unveils a circuit that can pump out truly random numbers at high speed.

Have they forgotten the enormous amount of suspicion last time they tried this?



Re: Spy/Counterspy

2010-07-11 Thread Ben Laurie
On 10 July 2010 11:57, Jerry Leichter leich...@lrw.com wrote:
 Beyond simple hacking - someone is quoted saying You can consider GPS a
 little like computers before the first virus - if I had stood here before
 then and cried about the risks, you would've asked 'why would anyone
 bother?'. - among the possible vulnerabilities are to high-value cargo,
 armored cars, and rental cars tracked by GPS. As we build more and more
 location-aware services, we are inherently building more
 false-location-vulnerable services at the same time.

Most location-aware services should not care whether the location is
real or false, for privacy reasons. Agree about the issue of
high-value cargo (but I guess they'll just have to use more reliable
mechanisms, like maps and their eyes), don't care about rental cars.



Re: Against Rekeying

2010-03-25 Thread Ben Laurie
On 24/03/2010 08:28, Simon Josefsson wrote:
 Perry E. Metzger pe...@piermont.com writes:
 
 Ekr has an interesting blog post up on the question of whether protocol
 support for periodic rekeying is a good or a bad thing:

 http://www.educatedguesswork.org/2010/03/against_rekeying.html

 I'd be interested in hearing what people think on the topic. I'm a bit
 skeptical of his position, partially because I think we have too little
 experience with real world attacks on cryptographic protocols, but I'm
 fairly open-minded at this point.
 
 One situation where rekeying appears to me not only useful but actually
 essential is when you re-authenticate in the secure channel.
 
 TLS renegotiation is used for re-authentication, for example, when you
 go from no user authentication to user authenticated, or go from user X
 authenticated to user Y authenticated.  This is easy to do with TLS
 renegotiation: just renegotiate with a different client certificate.
 
 I would feel uncomfortable using the same encryption keys that were
 negotiated by an anonymous user (or another user X) before me when I'm
 authentication as user Y, and user Y is planning to send a considerably
 amount of traffic that user Y wants to be protected.  Trusting the
 encryption keys negotiated by user X doesn't seem prudent to me.
 Essentially, I want encryption keys to always be bound to
 authentication.

Note, however, that one of the reasons the TLS renegotiation attack was
so bad in combination with HTTP was that reauthentication did not result
in use of the new channel to re-send the command that had resulted in a
need for reauthentication. This command could have come from the
attacker, but the reauthentication would still be used to authenticate it.

In other words, designing composable secure protocols is hard. And TLS
isn't one. Or maybe it is, now that the channels before and after
rekeying are bound together (which would seem to invalidate your
argument above).

Cheers,

Ben.



Re: TLS break

2009-11-17 Thread Ben Laurie
On Mon, Nov 16, 2009 at 11:30 AM, Bernie Cosell ber...@fantasyfarm.com wrote:

 As I understand it, this is only really a vulnerability in situations
 where a command to do something *precedes* the authentication to enable
 the command.  The obvious place where this happens, of course, is with
 HTTPS where the command [GET or POST] comes first and the authentication
 [be it a cookie or form vbls] comes later.

This last part is not really accurate - piggybacking the evil command
onto authentication that is later presented is certainly one possible
attack, but there are others, such as the Twitter POST attack and the
SMTP attack outlined by Wietse Venema (which doesn't work because of
implementation details, but _could_ work with a different
implementation).



Re: Crypto dongles to secure online transactions

2009-11-09 Thread Ben Laurie
On Sun, Nov 8, 2009 at 7:07 AM, John Levine jo...@iecc.com wrote:
 So before I send it off, if people have a moment could you look at it
 and tell me if I'm missing something egregiously obvious?  Tnx.

 I've made it an entry in my blog at

 http://weblog.johnlevine.com/Money/securetrans.html

Haven't read this thoroughly yet, though I think I disagree with the
idea that the display should be minimal - imagine checking out of
amazon on a 2-line display. Tedious.

Anyway, I should mention my own paper on this subject (with Abe
Singer) from NSPW 2008, Take The Red Pill _and_ The Blue Pill:

http://www.links.org/files/nspw36.pdf



Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread Ben Laurie
On Sat, Oct 17, 2009 at 10:23 AM, John Gilmore g...@toad.com wrote:
 Even plain DSA would be much more space efficient on the signature
 side - a DSA key with p=2048 bits, q=256 bits is much stronger than a
 1024 bit RSA key, and the signatures would be half the size. And NIST
 allows (2048,224) DSA parameters as well, if saving an extra 8 bytes
 is really that important.

 DSA was (designed to be) full of covert channels.

 Given that they attempted to optimize for minimal packet size, the
 choice of RSA for signatures actually seems quite bizarre.

 It's more bizarre than you think.  But packet size just isn't that big
 a deal.  The root only has to sign a small number of records -- just
 two or three for each top level domain -- and the average client is
 going to use .com, .org, their own country, and a few others.  Each
 of these records is cached on the client side, with a very long
 timeout (e.g. at least a day).  So the total extra data transfer for
 RSA (versus other) keys won't be either huge or frequent.  DNS traffic
 is still a tiny fraction of overall Internet traffic.  We now have
 many dozens of root servers, scattered all over the world, and if the
 traffic rises, we can easily make more by linear replication.  DNS
 *scales*, which is why we're still using it, relatively unchanged,
 after more than 30 years.

 The bizarre part is that the DNS Security standards had gotten pretty
 well defined a decade ago, when one or more high-up people in the IETF
 decided that "no standard that requires the use of Jim Bidzos's
 monopoly crypto algorithm is ever going to be approved on my watch."
 Jim had just pissed off one too many people, in his role as CEO of RSA
 Data Security and the second most hated guy in crypto.  (NSA export
 controls was the first reason you couldn't put decent crypto into your
 product; Bidzos's patent, and the way he licensed it, was the second.)
 This IESG prejudice against RSA went so deep that it didn't matter
 that we had a free license from RSA to use the algorithm for DNS, that
 the whole patent would expire in just three years, that we'd gotten
 export permission for it, and had working code that implemented it.
 So the standard got sent back to the beginning and redone to deal with
 the complications of deployed servers and records with varying algorithm
 availability (and to make DSA the officially mandatory algorithm).
 Which took another 5 or 10 years.

It's a fun story, but... RFC 4034 says RSA/SHA1 is mandatory and DSA is
optional. I wasn't involved in DNSSEC back then, and I don't know why
it got redone, but not, it seems, to make DSA mandatory. Also, the new
version is different from the old in many more ways than just the
introduction of DSA.



Re: Client Certificate UI for Chrome?

2009-08-26 Thread Ben Laurie
On Mon, Aug 10, 2009 at 6:35 PM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:
 More generally, I can't see that implementing client-side certs gives you much
 of anything in return for the massive amount of effort required because the
 problem is a lack of server auth, not of client auth.  If I'm a phisher then I
 set up my bogus web site, get the user's certificate-based client auth
 message, throw it away, and report successful auth to the client.  The browser
 then displays some sort of indicator that the high-security certificate auth
 was successful, and the user can feel more confident than usual in entering
 their credit card details.  All you're doing is building even more substrate
 for phishing attacks.

 Without simultaneous mutual auth, which -SRP/-PSK provide but PKI doesn't,
 you're not getting any improvement, and potentially just making things worse
 by giving users a false sense of security.

I certainly agree that if the problem you are trying to solve is
server authentication, then client certs don't get you very far. I
find it hard to feel very surprised by this conclusion.

If the problem you are trying to solve is client authentication then
client certs have some obvious value.

That said, I do tend to agree that mutual auth is also a good avenue
to pursue, and the UI you describe fits right in with Chrome's UI in
other areas. Perhaps I'll give it a try.



Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-25 Thread Ben Laurie
Perry E. Metzger wrote:
 Yet another reason why you always should make the crypto algorithms you
 use pluggable in any system -- you *will* have to replace them some day.

In order to roll out a new crypto algorithm, you have to roll out new
software. So, why is anything needed for pluggability beyond versioning?

It seems to me protocol designers get all excited about this because
they want to design the protocol once and be done with it. But software
authors are generally content to worry about the new algorithm when they
need to switch to it - and since they're going to have to update their
software anyway and get everyone to install the new version, why should
they worry any sooner?
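The versioning alternative amounts to nothing more than an algorithm tag on every stored value; a minimal sketch (illustrative names and format, with SHA-1 standing in as the "old" algorithm):

```python
import hashlib

ALGOS = {1: "sha1", 2: "sha256"}   # version tag -> algorithm name
CURRENT = 2

def digest(data: bytes) -> bytes:
    """Store a one-byte version tag in front of the raw digest."""
    return bytes([CURRENT]) + hashlib.new(ALGOS[CURRENT], data).digest()

def check(data: bytes, stored: bytes) -> bool:
    """New software verifies old tagged digests while writing new ones."""
    algo = ALGOS.get(stored[0])
    return algo is not None and hashlib.new(algo, data).digest() == stored[1:]

d = digest(b"payload")
assert d[0] == 2 and check(b"payload", d)
# a v1 (SHA-1) digest written by the old software still verifies:
old = bytes([1]) + hashlib.sha1(b"payload").digest()
assert check(b"payload", old)
```

The switch to a new algorithm then rides along with the software update that was needed anyway, which is exactly the point of the paragraph above.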



Re: cleversafe says: 3 Reasons Why Encryption is Overrated

2009-08-06 Thread Ben Laurie
Zooko Wilcox-O'Hearn wrote:
 I don't think there is any basis to the claims that Cleversafe makes
 that their erasure-coding (Information Dispersal)-based system is
 fundamentally safer, e.g. these claims from [3]: a malicious party
 cannot recreate data from a slice, or two, or three, no matter what the
 advances in processing power. ... Maybe encryption alone is 'good
 enough' in some cases now  - but Dispersal is 'good always' and
 represents the future.

Surely this is fundamental to threshold secret sharing - until you reach
the threshold, you have not reduced the cost of an attack?
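The threshold property Ben appeals to can be illustrated with a minimal Shamir-style sketch (toy field and parameters, purely illustrative): below the threshold k, the shares reveal nothing about the secret, but they do not make recovering it *harder* than attacking the underlying scheme.

```python
import random

P = 2**127 - 1  # a Mersenne prime; a toy field, not production parameters

def split(secret: int, k: int, n: int):
    """Split secret into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        shares.append((x, y))
    return shares

def combine(shares):
    """Lagrange interpolation at x=0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

With a 2-of-3 split, any two shares recover the secret; a single share is consistent with every possible secret.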



Re: The clouds are not random enough

2009-08-02 Thread Ben Laurie
On Sat, Aug 1, 2009 at 10:06 PM, Jerry Leichterleich...@lrw.com wrote:
 Why Cloud Computing Needs More Chaos:

 http://www.forbes.com/2009/07/30/cloud-computing-security-technology-cio-network-cloud-computing.html

 [Moderator's note: ... the article is about a growing problem -- the
 lack of good quality random numbers in VMs provided by services like EC2
 and the effect this has on security. --Perry]

 The problem is broader than this.  A while back, I evaluated a technology
 that did it best to solve a basically insoluble problem:  How does a server,
 built on stock technology, keep secrets that it can use to authenticate with
 other servers after an unattended reboot?  Without tamper-resistant hardware
 that controls access to keys, anything the software can get at at boot, an
 attacker who steals a copy of a backup, say - can also get at.  So, the
 trick is to use a variety of measurements of the hardware - amount of
 memory, disk sizes, disk serial numbers, whatever you can think of that
 varies from machine to machine and is not stored in a backup - and combines
 them to produce a key that encrypts the important secrets.  Since hardware
 does need to be fixed or upgraded at times, a good implementation will use
 some kind of m unchanged out of n measurements algorithm.  Basically, this
 is the kind of thing Microsoft uses to lock license keys to particular
 instances of hardware.  Yes, it can be broken - but you can make breaking it
 a great deal of work.

 Virtualization changes all of this.  Every copy of a virtual machine will
 be identical as far as most of these measurements are concerned.

I'd imagine (I'm not particularly interested in licence enforcement,
so I really am imagining) that the opposite was the problem: i.e.
that the host could run you on any VM which might have wildly varying
characteristics, depending on what the real machine underneath was,
and what else you were sharing with. So, every time you see the
measurements, they'll be different.



Re: Factoring attack against RSA based on Pollard's Rho

2009-06-07 Thread Ben Laurie
Paul Hoffman wrote:
 At 8:07 PM -0700 6/5/09, Greg Perry wrote:
 Greetings list members,
 
 I have published a unique factoring method related to Pollard's Rho
 that is published here:
 
 http://blog.liveammo.com/2009/06/factoring-fun/
 
 Any feedback would be appreciated.
 
 Is there any practical value to this work? That's a serious question.
 The main statement about the value is This is a factoring attack
 against RSA with an up to 80% reduction in the search candidates
 required for a conventional brute force key attack. Does that mean
 that it reduces the search space for a 1024-bit RSA key to, at best
 205 bits (0.2 * 1024) of brute force?

No, no. You don't multiply by .2, you add log_2(.2), which is around -3.
So, 1021 bits.
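The arithmetic, spelled out (Ben rounds log_2(0.2) to -3; the exact value is about -2.32, so roughly 1022 bits either way):

```python
import math

bits = 1024
shift = math.log2(0.2)      # about -2.32: an 80% cut in candidates
remaining = bits + shift    # about 1021.7 bits of effective search space
```

Cutting the candidate count to a fifth removes barely two bits of work, which is why the claimed reduction is of no practical consequence.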



Re: Solving password problems one at a time, Re: The password-reset paradox

2009-04-30 Thread Ben Laurie
Steven M. Bellovin wrote:
 We've become prisoners of dogma here.  In 1979, Bob Morris and Ken
 Thompson showed that passwords were guessable.  In 1979, that was
 really novel.  There was a lot of good work done in the next 15 years
 on that problem -- Spaf's empirical observations, Klein's '90 paper on
 improving password security, Lamport's algorithm that gave rise to
 S/Key, my and Mike Merritt's EKE, many others.  Guess what -- we're not
 living in that world now.  We have shadow password files on Unix
 systems; we have Kerberos; we have SecurID; we have SSL which rules out
 the network attacks and eavesdropping that EKE was intended to counter;
 etc.  We also have web-based systems whose failure modes are not nearly
 the same.  Why do we think that the solutions are the same?  There was
 a marvelous paper at Hotsec '07 that I resent simply because the
 authors got there before me; I had (somewhat after them) come to the
 same conclusions: the defenses we've built up against password failure
 since '79 don't address the problems of today's world.  We have to recognize
 the new problems before we can solve them.  (I *think* that the paper
 is at
 http://www.usenix.org/events/hotsec07/tech/full_papers/florencio/florencio.pdf
 but I'm on an airplane now and can't check...)

That's a pretty annoying paper.

Firstly, I don't care about the average rate of account compromise for
sites that host my stuff, I only care about _my_ account. This means
that I cannot, despite their claim, write down my long, secret user ID
because if anyone ever sees it, I'm sunk because of the short password
they are advocating.

Secondly, they claim that user IDs are in practice secret, on the basis
that if they weren't, then sites would be experiencing massive DoS
attacks. To prove this claim, they cite a case where SSNs are used as
user IDs. Now, if there's one thing we know, it's that SSNs aren't even
a little bit secret. Therefore the reason there is no widespread DoS is
because no-one wants to mount the attack.

Thirdly, they really need to learn when to use apostrophes!

Incidentally, the reason we don't use EKE (and many other useful
schemes) is not because they don't solve our problems, it's because the
rights holders won't let us use them.

 But usability is *the* problem, with server and client penetration a
 close second.

On this we agree. We do have any number of decent cryptographic schemes
that would completely solve phishing. All we have to do is figure out:

a) How to show the user that he is actually using the scheme and is not
being phished.

b) Get it rolled out everywhere.

I am not holding my breath, though perhaps '09 is the year for action?



Re: Crypto Craft Knowledge

2009-02-25 Thread Ben Laurie
Cat Okita wrote:
 On Sat, 21 Feb 2009, Peter Gutmann wrote:
 This points out an awkward problem though, that if you're a commercial
 vendor
 and you have a customer who wants to do something stupid, you can't
 afford not
 to allow this.  While my usual response to requests to do things
 insecurely is
 If you want to shoot yourself in the foot then use CryptoAPI, I can
 only do
 this because I care more about security than money.  For any
 commercial vendor
 who has to put the money first, this isn't an option.
 
 That's not entirely true -- even commercial vendors have things like
 ongoing support to consider, and some customers just cost more money
 than they're worth.

Furthermore, it's entirely simplistic to suggest that money first ==
do any fool thing a customer demands. Some businesses do actually care
about their reputation, even if only because they believe that will make
them more money in the long run.

Plus, even the most accommodating company will draw the line somewhere -
not every foolish thing is profitable, even if a customer wants it.



Re: Crypto Craft Knowledge

2009-02-20 Thread Ben Laurie
Stephan Neuhaus wrote:
 Many mistakes in crypto coding come from the fact that API developers
 have so far very successfully shifted the burden of secure usage to the
 application developer, the API user.  But I believe this hasn't worked
 and needs to be changed.

I totally agree, and this is the thinking behind the Keyczar project
(http://www.keyczar.org/):

Cryptography is easy to get wrong. Developers can choose improper
cipher modes, use obsolete algorithms, compose primitives in an unsafe
manner, or fail to anticipate the need for key rotation. Keyczar
abstracts some of these details by choosing safe defaults, automatically
tagging outputs with key version information, and providing a simple
programming interface.

Keyczar is designed to be open, extensible, and cross-platform
compatible. It is not intended to replace existing cryptographic
libraries like OpenSSL, PyCrypto, or the Java JCE, and in fact is built
on these libraries.
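The key-version tagging idea can be sketched as a toy HMAC envelope (hypothetical key store and field layout, not Keyczar's actual output format): because each output carries the version of the key that produced it, old outputs remain verifiable after a rotation.

```python
import hashlib
import hmac

KEYS = {1: b"old-key", 2: b"new-key"}  # hypothetical key store
CURRENT = 2                            # version used for new outputs

def sign(msg: bytes) -> bytes:
    # Prefix the MAC with the version of the key that made it.
    tag = hmac.new(KEYS[CURRENT], msg, hashlib.sha256).digest()
    return bytes([CURRENT]) + tag

def verify(msg: bytes, sig: bytes) -> bool:
    # The version byte selects the right key, old or new.
    version, tag = sig[0], sig[1:]
    expected = hmac.new(KEYS[version], msg, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)
```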

Cheers,

Ben.



Re: full-disk subversion standards released

2009-02-13 Thread Ben Laurie
Alexander Klimov wrote:
 On Wed, 11 Feb 2009, Ben Laurie wrote:
 If I have data on my server that I would like to stay on my server
 and not get leaked to some third party, then this is exactly the
 same situation as DRMed content on an end user's machine, is it not?
 
 The threat model is completely different: for DRM the attacker is the
 user who supposedly has complete access to computer, while for server
 the attacker is someone who has only (limited) network connection to
 your server.

You wish. The threat is an attacker who has root on your machine.



Re: full-disk subversion standards released

2009-02-12 Thread Ben Laurie
Peter Gutmann wrote:
 Ben Laurie b...@links.org writes:
 
 Apart from the obvious fact that if the TPM is good for DRM then it is also
 good for protecting servers and the data on them,
 
 In which way, and for what sorts of protection?  And I mean that as a 
 serious inquiry, not just a Did you spill my pint? question.

If I have data on my server that I would like to stay on my server and
not get leaked to some third party, then this is exactly the same
situation as DRMed content on an end user's machine, is it not?

  At the moment 
 the sole significant use of TPMs is Bitlocker, which uses it as little more 
 than a PIN-protected USB memory key and even then functions just as well 
 without it.  To take a really simple usage case, how would you:
 
 - Generate a public/private key pair and use it to sign email (PGP, S/MIME,
   take your pick)?
 - As above, but send the public portion of the key to someone and use the
   private portion to decrypt incoming email?
 
 (for extra points, prove that it's workable by implementing it using an actual
 TPM to send and receive email with it, which given the hit-and-miss
 functionality and implementation quality of TPMs is more or less a required
 second step).  I've implemented PGP email using a Fortezza card (which is
 surely the very last thing it was ever intended for), but not using a TPM...

Note that I am not claiming expertise in the use of TPMs. I am making
the claim that _if_ they are good for DRM, _then_ they are also good for
protecting data on servers.

 Mark Ryan presented a plausible use case that is not DRM:
 http://www.cs.bham.ac.uk/~mdr/research/projects/08-tpmFunc/.
 
 This use is like the joke about the dancing bear, the amazing thing isn't the 
 quality of the dancing but the fact that the bear can dance at all :-).  
 It's an impressive piece of lateral thinking, but I can't see people rushing 
 out to buy TPM-enabled PCs for this.

I agree that it is more cute than practical.



Re: full-disk subversion standards released

2009-02-01 Thread Ben Laurie

Peter Gutmann wrote:

John Gilmore g...@toad.com writes:


The theory that we should build good and useful tools capable of monopoly
and totalitarianism, but use social mechanisms to prevent them from being
used for that purpose, strikes me as naive.


There's another problem with this theory and that's the practical
implementation issue.  I've read through... well, at least skimmed through the
elephantine bulk of the TCG specs, and also read related papers and
publications and talked to people who've worked with the technology, to see
how I could use it as a crypto plugin for my software (which already supports
some pretty diverse stuff, smart cards, HSMs, the VIA Padlock engine, ARM
security cores, Fortezza cards (I even have my own USG-allocated Fortezza ID
:-), and in general pretty much anything out there that does crypto in any
way, shape, or form).  However after detailed study of the TCG specs and
discussions with users I found that the only thing you can really do with
this, or at least the bits likely to be implemented and supported and not full
of bugs and incompatibilities, is DRM.


Apart from the obvious fact that if the TPM is good for DRM then it is 
also good for protecting servers and the data on them, Mark Ryan 
presented a plausible use case that is not DRM: 
http://www.cs.bham.ac.uk/~mdr/research/projects/08-tpmFunc/.


I wrote it up briefly here: http://www.links.org/?p=530.

As for John's original point, isn't the world full of such tools (guns, 
TV cameras, telephone networks, jet engines, blah blah)?




Re: What EV certs are good for

2009-01-28 Thread Ben Laurie
On Wed, Jan 28, 2009 at 5:14 AM, William Soley william.so...@sun.com wrote:
 On Jan 27, 2009, at 6:04 AM, Jerry Leichter wrote:

 It might be useful to put together a special-purpose HTTPS client which
 would initiate a connection and tell you about the cert returned, then exit.

 I use ...

openssl s_client -connect www.whatever.com:443 -showcerts

 Ships with Mac OS, Solaris, Linux, etc.

And to use TOR, put torify on the front. Having run the tor server, of course.

Except on MacOS, where torify doesn't (can't? does anyone know better?) work.



Re: What EV certs are good for

2009-01-27 Thread Ben Laurie
On Sun, Jan 25, 2009 at 11:04 PM, Jerry Leichter leich...@lrw.com wrote:
 I just received a phishing email, allegedly from HSBC:

Dear HSBC Member,

Due to the high number of fraud attempts and phishing scams, it has been
 decided to
implement EV SSL Certification on this Internet Banking website.

The use of EV SSL certification works with high security Web browsers to
 clearly
identify whether the site belongs to the company or is another site
 imitating that
company's site

 (I hope I haven't quoted enough to trigger someone's spam detectors!)
  Needless to say, the message goes on to suggest clicking on a link to
 update your account.

So did the link have a EV cert?



Re: MD5 considered harmful today, SHA-1 considered harmful tomorrow

2009-01-24 Thread Ben Laurie
On Sat, Jan 24, 2009 at 2:36 AM, Victor Duchovni
victor.ducho...@morganstanley.com wrote:
 You seem to be out of touch I am afraid. Just look at what many O/S
 distributions do. They adopt a new OpenSSL 0.9.Xy release from time to
 time (for some initial y) and back-port security fixes never changing
 the letter. One can't actually tell from openssl version what version
 one is running and which fixes have been applied.

 Why am I back-porting patch-sets to 0.9.8i? Is that because there is no
 demand for bugfix releases? There is indeed demand for real bugfix
 releases, just that people have gotten used to doing it themselves,
 but this is not very effective and is difficult to audit.

It is not that I am unaware of this, I was pointing out what we
actually do. But you do have a fair point and I will take it up with
the team.

However, I wonder how this is going to pan out? Since historically
pretty much every release has been prompted by a security issue, but
also includes new features and non-security bugfixes, in order to
release 0.9.8j the way you want us to, we would also have to test and
release security updates for 0.9.8 - 0.9.8i, for a total of 10
branched versions. I think this is asking rather a lot of volunteers!

Don't suggest that we should release feature/bugfix versions less
often, I think we already do that less often than we should.

Perhaps the answer is that we security patch every version that is
less than n months old, and end-of-life anything before that?
Suggestions for reasonable values of n?



Re: MD5 considered harmful today, SHA-1 considered harmful tomorrow

2009-01-23 Thread Ben Laurie
On Tue, Jan 20, 2009 at 5:14 AM, Victor Duchovni
victor.ducho...@morganstanley.com wrote:
 On Mon, Jan 19, 2009 at 10:45:55AM +0100, Bodo Moeller wrote:

 The RFC does exit (TLS 1.2 in RFC 5246 from August 2008 makes SHA-256
 mandatory), so you can send a SHA-256 certificate to clients that
 indicate they support TLS 1.2 or later.  You'd still need some other
 certificate for interoperability with clients that don't support
 SHA-256, of course, and you'd be sending that one to clients that do
 support SHA-256 but not TLS 1.2.  (So you'd fall back to SHA-1, which
 is not really a problem when CAs make sure to use the hash algorithm
 in a way that doesn't rely on hash collisions being hard to find,
 which probably is a good idea for *any* hash algorithm.)

 It would be helpful if as a first step, SSL_library_init() (a.k.a.
 OpenSSL_add_ssl_algorithms()) enabled the SHA-2 family of digests,
 I would make this change in the 0.9.9 development snapshots.

 [ Off topic: I find OpenSSL release-engineering a rather puzzling
 process. The patch releases are in fact feature releases,

Who said they were patch releases?

 and there
 are no real patch releases even for critical security issues.  I chose
 to backport the 0.9.8j security fixes to 0.9.8i and sit out all the
 new FIPS code, ... This should not be necessary. I really hope to see
 real OpenSSL patch releases some day with development of new features
 *strictly* in the development snapshots. Ideally this will start with
 0.9.9a, with no new features, just bugfixes, in [b-z]. ]

I think that is not likely to happen, because that's not the way it
works. The promise we try to keep is ABI compatibility between
releases that have the same numbers. Letters signify new versions
within that series. We do not have a bugfix-only branch. There doesn't
seem to be much demand for one.


 --
Viktor.





OpenPGP:SDK v0.9 released

2009-01-09 Thread Ben Laurie
I thought people might be interested in this now somewhat-complete,
BSD-licensed OpenPGP library...

http://openpgp.nominet.org.uk/cgi-bin/trac.cgi/wiki/V0.9



Re: Security by asking the drunk whether he's drunk

2008-12-30 Thread Ben Laurie
On Mon, Dec 29, 2008 at 10:10 AM, Peter Gutmann
pgut...@cs.auckland.ac.nz wrote:
 David Molnar dmol...@eecs.berkeley.edu writes:

Service from a group at CMU that uses semi-trusted notary servers to
periodically probe a web site to see which public key it uses. The notaries
provide the list of keys used to you, so you can attempt to detect things
like a site that has a different key for you than previously shown to all of
the notaries. The idea is that to fool the system, the adversary has to
compromise all links between the target site and the notaries all the time.

 I think this is missing the real contribution of Perspectives, which (like
 almost any security paper) has to include a certain quota of crypto rube-
 goldbergism in order to satisfy conference reviewers.  The real value isn't the
 multi-path verification and crypto signing facilities and whatnot but simply
 the fact that you now have something to deal with leap-of-faith
 authentication, whether it's for self-generated SSH or SSL keys or for rent-a-
 CA certificates.  Currently none of these provide any real assurance since a
 phisher can create one on the fly as and when required.  What Perspectives
 does is guarantee (or at least provide some level of confidence) that a given
 key has been in use for a set amount of time rather than being a here-this-
 morning, gone-in-the-afternoon affair like most phishing sites are.  In other
 words a phisher would have to maintain their site for a week, a month, a year,
 of continuous operation, not just set it up an hour after the phishing email
 goes out and take it down again a few hours later.

 For this function just a single source is sufficient, thus my suggestion of
 Google incorporating it into their existing web crawling.  You can add the
 crypto rube goldberg extras as required, but a basic this site has been in
 operation at the same location with the same key for the past eight months is
 a powerful bar to standard phishing approaches, it's exactly what you get in
 the bricks-and-mortar world, Serving the industry since 1962 goes a lot
 further than Serving the industry since just before lunchtime.

Two issues occur to me. Firstly, you have to trust Google (and your
path to Google).

Secondly, and this seems to me to be a generic issue with Perspectives
and SSL - what happens when the cert rolls? If the key also changes
(which would seem to me to be good practice), then the site looks
suspect for a while.



Re: Security by asking the drunk whether he's drunk

2008-12-30 Thread Ben Laurie
On Tue, Dec 30, 2008 at 4:25 AM, Peter Gutmann
pgut...@cs.auckland.ac.nz wrote:
 Ben Laurie b...@google.com writes:

what happens when the cert rolls? If the key also changes (which would seem
to me to be good practice), then the site looks suspect for a while.

 I'm not aware of any absolute figures for this but there's a lot of anecdotal
 evidence that many cert renewals just re-certify the same key year in, year
 out (there was even a lawsuit over the definition of the term renewal in
 certificates a few years ago).  So you could in theory handle this by making a
 statement about the key rather than the whole cert it's in.  OTOH this then
 requires the crawler to dig down into the data structure (SSH, X.509,
 whatever) to pick out the bits corresponding to the key.

Not really a serious difficulty.

 Other alternatives
 are to use a key-rollover mechanism that signs the new key with old one
 (something that I've proposed for SSH, since their key-continuity model kinda
 breaks at that point), and all the other crypto rube-goldbergisms you can
 dream up.

Yeah, that's pretty much the answer I came up with - another option
would be to use both the old and new certs for a while.

But signing the new with the old seems easiest to implement - the
signature can go in an X509v3 extension, which means CAs can sign it
without understanding it, and only Google has to be able to verify it,
so all that needs to change is CSR generating s/w...



Re: Security by asking the drunk whether he's drunk

2008-12-27 Thread Ben Laurie
On Fri, Dec 26, 2008 at 7:39 AM, Peter Gutmann
pgut...@cs.auckland.ac.nz wrote:
Adding support for a
 service like Perspectives (discussed here a month or two back) would be a good
 start since it provides some of the assurance that a commercial PKI can't (and
 as an additional benefit it also works for SSH servers, since it's not built
 around certificates).

 So, when will Google add Perspectives support to their search database? :-).


I can't find discussion of Perspectives - hint?



Re: combining entropy

2008-10-29 Thread Ben Laurie
On Tue, Oct 28, 2008 at 7:55 PM, Leichter, Jerry
[EMAIL PROTECTED] wrote:
2.  The Byzantine model.  Failed modules can do anything
including cooperating by exchanging arbitrary
information and doing infinite computation.

So in the Byzantine model I can crack RSA?



Re: combining entropy

2008-10-27 Thread Ben Laurie
On Sat, Oct 25, 2008 at 12:40 PM, IanG [EMAIL PROTECTED] wrote:
 Jonathan Katz wrote:
 I think it depends on what you mean by N pools of entropy.


 I can see that my description was a bit weak, yes.  Here's a better
 view, incorporating the feedback:

   If I have N people, each with a single pool of entropy,
   and I pool each of their contributions together with XOR,
   is that as good as it gets?

I think you need to define what you mean by as good as it gets.
Clearly XOR loses entropy that might be there, so on the measure of
good == most entropy, it is not.



Re: Who cares about side-channel attacks?

2008-10-27 Thread Ben Laurie
Peter Gutmann wrote:
 In fact none of the people/organisations I queried about this fitted into any 
 of the proposed categories, it was all embedded devices, typically SCADA 
 systems, home automation, consumer electronics, that sort of thing, so it was 
 really a single category which was Embedded systems.  Given the string of 
 attacks on crypto in embedded devices (XBox, iPhone, iOpener, Wii, some 
 not-yet-published ones on HDCP devices :-), etc) this is by far the most 
 at-risk category because there's a huge incentive to attack them, the result 
 affects tens/hundreds of millions of devices, and the attacks are immediately 
 and widely actively exploited (modchips/device unlocking/etc, an important 
 difference between this and academic proof-of-concept attacks), so this is 
 the 
 one where I'd expect the vendors to care most.

But they've all been unlocked using easier attacks, surely?



Re: once more, with feeling.

2008-10-24 Thread Ben Laurie
Peter Gutmann wrote:
 If this had been done in the beginning, before users -- and web site
 designers, and browser vendors -- were mistrained, it might have worked.
 Now, though?  I'm skeptical.
 
 For existing apps with habituated users, so am I.  So how about the following
 strawman: Take an existing browser (say Firefox), brand it as some special-
 case secure online banking browser, and use the new developments solution
 above, i.e. it only talks mutual-auth challenge-response crypto and nothing
 else.  At that point you've reduced Reformat user and reinstall browsing
 habits to Train users to only use safe-browser when they do their banking,
 i.e. 'Never enter banking details using anything other than safe-browser'.
 Even if you only get a subset of users doing this, it's still a massive attack
 surface reduction because you've raised the bar from any idiot who buys a
 phishing kit to having to perform a man-in-the-browser attack.

We've been debating this a lot at Google lately. One argument that I
have increasing sympathy with is that SSO (or if you want to be modern,
federated login) provides an opportunity to change the playing field
sufficiently that we can reprogram users to be less vulnerable to
phishing - or just switch them to protocols that make phishing irrelevant.

To that end, we've released some usability research...

http://google-code-updates.blogspot.com/2008/09/usability-research-on-federated-login.html

Obviously the end game here is that the user only has to protect his
login to a small number of sites - i.e. those that provide the IdP. Once
we get there, perhaps users can be persuaded to authenticate to those
sites using something stronger than username/password.

A sidenote that provides me with some amusement: although the modern
trend is towards using OpenID, no-one wants to use it in the mode it is
designed for, i.e. where the user can pick any old IdP and the RP will
just trust it. In practice where we seem to be headed is that RPs will
trust some smallish number of trusted IdPs. This is, of course, exactly
what the Liberty guys have been working on all along. I predict that
over time, most of the elements of Liberty will be incorporated into OpenID.

Which makes me think that if Liberty had done what it claimed to be
doing when it started, i.e. be a community-based, open-source-friendly
protocol suite, it would have worked much better.

Cheers,

Ben.



Re: combining entropy

2008-10-24 Thread Ben Laurie
On Mon, Sep 29, 2008 at 1:13 PM, IanG [EMAIL PROTECTED] wrote:
 If I have N pools of entropy (all same size X) and I pool them
 together with XOR, is that as good as it gets?

Surely not. Consider N pools each of size 1 bit. Clearly you can do
better than the 1 bit your suggestion would yield.

More concretely, concatenation would seem better than XOR.
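A concrete toy count (assuming two independent, uniform 1-bit pools):

```python
from itertools import product

# Two independent 1-bit pools. XOR collapses the pair to a single
# bit of entropy; concatenation preserves both bits.
xor_outputs = {a ^ b for a, b in product([0, 1], repeat=2)}
cat_outputs = {(a, b) for a, b in product([0, 1], repeat=2)}

assert len(xor_outputs) == 2   # 1 bit: only the values 0 and 1
assert len(cat_outputs) == 4   # 2 bits: all four pairs survive
```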



Re: Quiet in the list...

2008-09-06 Thread Ben Laurie
Allen wrote:
 So I'll ask a question. I saw the following on another list:
 
 I stopped using WinPT after it crashed too many times.
I am now using Thunderbird with the Enigmail plugin
for GPG interface. It works rather flawlessly and I've
never looked back.
 http://pgp.mit.edu:11371/pks/lookup?search=0xBB678C30&op=index


 Yes, I regard the combination of Thunderbird + Enigmail + GPG as the
 best existing solution for secure email.
 
 What does anyone think of the combo?

I agree.




Re: Quiet in the list...

2008-09-06 Thread Ben Laurie
IanG wrote:
 2.  GPG + Enigmail + Thunderbird.  Will never be totally robust because
 there is too much dependency.

What does this mean? GPG + Enigmail, whilst not the best architecture I
ever heard of, is a tiny increment to the complexity of Thunderbird.

Are you saying anything other than big software has bugs?




Re: [OpenID] rfc2817: https vs http

2008-09-01 Thread Ben Laurie
[Adding the cryptography list, since this seems of interest]

On Wed, Aug 27, 2008 at 8:58 PM, Story Henry [EMAIL PROTECTED] wrote:
 Apparently rfc2817 allows an http url to be used for https security.

 Given that Apache seems to have that implemented [1] and that the
 openid url is mostly used for server to server communication, would
 this be a way out of the http/https problem?

 I know that none of the browsers support it, but I suppose that if the
 client does not support this protocol, the server can redirect to the
 https url? This seems like it could be easier to implement than XRI.

 Disclaimer: I don't know much about rfc2817

This inspired a blog post: http://www.links.org/?p=382.

Recent events, and a post to the OpenID list got me thinking.

blockquote
Apparently rfc2817 allows an http url to be used for https security.

Given that Apache seems to have that implemented [1] and that the
openid url is mostly used for server to server communication, would
this be a way out of the http/https problem?

I know that none of the browsers support it, but I suppose that if the
client does not support this protocol, the server can redirect to the
https url? This seems like it could be easier to implement than XRI.

Disclaimer: I don't know much about rfc2817

Henry


[1] http://www.mail-archive.com/[EMAIL PROTECTED]/msg00251.html
/blockquote

The core issue is that HTTPS is used to establish end-to-end security,
meaning, in particular, authentication and secrecy. If the MitM can
disable the upgrade to HTTPS then he defeats this aim. The fact that
the server declines to serve an HTTP page is irrelevant: it is the
phisher that will be serving the HTTP page, and he will have no such
compunction.

The traditional fix is to have the client require HTTPS, which the
MitM is powerless to interfere with. Upgrades would work fine if the
HTTPS protocol said "connect on port 80, ask for an upgrade, and if
you don't get it, fail". However, as it stands, upgrades work at the
behest of the server, and therefore don't work.
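The client-enforced requirement can be sketched as a tiny fail-closed policy check (illustrative only; `require_https` is my own name, not an API from any real library):

```python
from urllib.parse import urlparse

def require_https(url):
    """Fail closed: refuse anything that is not already HTTPS.

    A MitM can strip a server-driven upgrade, but it cannot stop the
    client from refusing plaintext outright.
    """
    if urlparse(url).scheme != "https":
        raise ValueError("refusing insecure URL: " + url)
    return url  # safe to hand to the real fetch layer

require_https("https://openid.example/op")   # accepted; an http:// URL raises
```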

Of course, the client requires HTTPS because there was a link with a
scheme of https. But why was that link followed? Because there was an
earlier page with a trusted link (we hope) that was followed. (Note
that this argument applies both to users clicking links and to OpenID
servers following metadata.)

If that page was served over HTTP, then we are screwed, obviously
(bearing in mind DNS cache attacks and weak PRNGs).

This leads to the inescapable conclusion that we should serve
everything over HTTPS (or other secure channels).

Why don't we? Cost. It takes far more tin to serve HTTPS than HTTP.
Even really serious modern processors can only handle a few thousand
new SSL sessions per second. New plaintext sessions can be dealt with
in their tens of thousands.

Perhaps we should focus on this problem: we need cheap end-to-end
encryption. HTTPS solves this problem partially through session
caching, but it can't easily be shared across protocols, and sessions
typically last on the order of five minutes, an insanely conservative
figure.

What we need is something like HTTPS, shareable across protocols, with
caches that last at least hours, maybe days. And, for sites we have a
particular affinity with, an SSH-like pairing protocol (with less
public key crypto - i.e. more session sharing).

Having rehearsed this discussion many times, I know the next objection
will be DoS on the servers: a bad guy can require the server to spend
its life doing PK operations by pretending he has never connected
before. Fine, relegate PK operations to the slow queue. Regular users
will not be inconvenienced: they already have a session key.
Legitimate new users will have to wait a little longer for initial
load. Oh well.


 Henry


 [1] http://www.mail-archive.com/[EMAIL PROTECTED]/msg00251.html


 http://www.ietf.org/rfc/rfc2817.txt
 Home page: http://bblfish.net/





Re: [OpenID] rfc2817: https vs http

2008-09-01 Thread Ben Laurie
On Mon, Sep 1, 2008 at 9:49 PM, Eric Rescorla [EMAIL PROTECTED] wrote:
 At Mon, 1 Sep 2008 21:00:55 +0100,
 Ben Laurie wrote:
 The core issue is that HTTPS is used to establish end-to-end security,
 meaning, in particular, authentication and secrecy. If the MitM can
 disable the upgrade to HTTPS then he defeats this aim. The fact that
 the server declines to serve an HTTP page is irrelevant: it is the
 phisher that will be serving the HTTP page, and he will have no such
 compunction.

 The traditional fix is to have the client require HTTPS, which the
 MitM is powerless to interfere with. Upgrades would work fine if the
 HTTPS protocol said "connect on port 80, ask for an upgrade, and if
 you don't get it, fail". However, as it stands, upgrades work at the
 behest of the server, and therefore don't work.

 Even without an active attacker, this is a problem if there is
 sensitive information in the request, since that will generally
 be transmitted prior to discovering the server can upgrade.

Obviously we can fix this at the protocol level.

 Why don't we? Cost. It takes far more tin to serve HTTPS than HTTP.
 Even really serious modern processors can only handle a few thousand
 new SSL sessions per second. New plaintext sessions can be dealt with
 in their tens of thousands.

 Perhaps we should focus on this problem: we need cheap end-to-end
 encryption. HTTPS solves this problem partially through session
 caching, but it can't easily be shared across protocols, and sessions
 typically last on the order of five minutes, an insanely conservative
 figure.

 Session caches are often dialed this low, but it's not really necessary
 in most applications. First, a session cache entry isn't really that
 big. It easily fits into 100 bytes on the server, so you can serve
 a million concurrent users for a measly 100M.

But if the clients drop them after five minutes, this gets you
nowhere. BTW, sessions are only that small if there are no client
certs.

 Second, you can use
 CSSC/Tickets [RFC5077] to offload all the information onto the client.

Likewise.


 -Ekr




Keyczar

2008-08-13 Thread Ben Laurie

http://www.links.org/?p=374

When I joined Google over two years ago I was asked to find a small 
project to get used to the way development is done there. The project I 
chose was one that some colleagues had been thinking about, a key 
management library. I soon realised that unless the library also handled 
the crypto it was punting on the hard problem, so I extended it to do 
crypto and to handle key rotation and algorithm changes transparently to 
the user of the library.


About nine months later I handed over my starter project to Steve 
Weis, who has worked on it ever since. For a long time we've talked 
about releasing an open source version, and I'm pleased to say that 
Steve and intern Arkajit Dey did just that, earlier this week: Keyczar[1].


Keyczar is an open source cryptographic toolkit designed to make 
it easier and safer for developers to use cryptography in their 
applications. Keyczar supports authentication and encryption with both 
symmetric and asymmetric keys. Some features of Keyczar include:


* A simple API
* Key rotation and versioning
* Safe default algorithms, modes, and key lengths
* Automated generation of initialization vectors and ciphertext 
signatures


When we say simple, by the way, the code for loading a keyset and 
encrypting some plaintext is just two lines. Likewise for decryption. 
And the user doesn't need to know anything about algorithms or modes.
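To illustrate that design (this is a stdlib toy in the same spirit, NOT the actual Keyczar API, and NOT real cryptography -- it reuses a keystream and has no authentication; it only shows the API shape, with key versions and rotation hidden behind the library):

```python
import os, hashlib

class Keyset:
    """Toy versioned keyset illustrating Keyczar-style key rotation."""

    def __init__(self):
        self.keys = {}       # version -> key bytes
        self.primary = 0
        self.rotate()

    def rotate(self):
        """Add a new key version; old ciphertexts remain decryptable."""
        self.primary += 1
        self.keys[self.primary] = os.urandom(32)

    def _stream(self, key, n):
        out, ctr = b"", 0
        while len(out) < n:
            out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
            ctr += 1
        return out[:n]

    def encrypt(self, plaintext):
        key = self.keys[self.primary]
        body = bytes(p ^ s for p, s in zip(plaintext, self._stream(key, len(plaintext))))
        return bytes([self.primary]) + body      # version-byte header

    def decrypt(self, blob):
        key = self.keys[blob[0]]                 # select key by version
        return bytes(c ^ s for c, s in zip(blob[1:], self._stream(key, len(blob) - 1)))

# The promised two lines -- no algorithms or modes in sight:
ks = Keyset()
ct = ks.encrypt(b"hello")
```

The version byte in the header is what makes rotation transparent: after `ks.rotate()`, new ciphertexts use the new key, and old ones still decrypt under their recorded version.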


Great work, guys! I look forward to the real version (C++, of course!).

[1] http://www.keyczar.org/

Cheers,

Ben.




Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-12 Thread Ben Laurie
On Tue, Aug 12, 2008 at 9:55 AM, Clausen, Martin (DK - Copenhagen)
[EMAIL PROTECTED] wrote:
 You could use the SSL Blacklist plugin
 (http://codefromthe70s.org/sslblacklist.asp) for Firefox or heise SSL
 Guardian
 (http://www.heise-online.co.uk/security/Heise-SSL-Guardian--/features/111039/)
 for IE to do this. If presented with a Debian key they show a
 warning.

 The blacklists are implemented using either a traditional blacklist
 (text file) or distributed using DNS.

There are two parties that are vulnerable: the user logging into the
OpenID Provider (OP), and the Relying Party (RP). If the RP
communicates with the OP, then it needs to use TLS and CRLs or OCSP.
Browser plugins do not bail it out.

Cheers,

Ben.



Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-09 Thread Ben Laurie

Hal Finney wrote:

I thought of one possible mitigation that can protect OpenID end users
against remote web sites which have not patched their DNS. OpenID
providers who used weak OpenSSL certs would have to change their URLs
so that their old X.509 CA certs on their old URLs no longer work on the
new ones. This will require all of their clients (users who log in with
their OpenID credentials) to change their identifiers. DNS based MITMs
will not be able to forge messages related to the new identifiers.


Yeah, I considered this scheme. The problem is that it doesn't really 
help the relying parties, who can still be fooled into believing an 
existing user is returning (or a new one is arriving) from the original 
site. This is particularly a problem for Sun's OpenID Provider, which 
makes the additional assertion (out of band) that the user is a Sun 
employee. So, anyone can become a Sun employee, as of a few days ago.


This is why the lack of CRL checking in OpenID libraries is an issue.


Again, I see fixing the DNS as the path of least resistance here,
especially so since the end user is the one bearing most of the risk,
typically DNS is provided by an ISP or some other agency with a formal
legal relationship, and there is the possibility of liability on the
part of the lax DNS provider. Hopefully we will continue to see rapid
uptake of the DNS fix over the next few weeks.


In general, DNS is not fixable without deploying DNSSEC.

a) The current fix just reduces the probability of an attack. If 
attacker and victim have sufficient bandwidth, it can still be done in 
under a day.


b) There are many scenarios, mostly revolving around the use of wireless 
hotspots, where users are easily fooled into using a malicious DNS provider.


So, DNS patching is not, IMO, the real answer to this problem. Of 
course, the second scenario has been around forever, but is conveniently 
ignored when explaining why CRLs are not necessary (and all other things 
that rely on perfect DNS). All that's happened recently is we've made 
people who are sitting still just as vulnerable as travellers.


But increasingly we are all travellers some of the time, from a how we 
get our 'net POV. We really can't ignore this use case.


Cheers,

Ben.




OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Ben Laurie
Security Advisory (08-AUG-2008) (CVE-2008-3280)
===============================================

Ben Laurie of Google's Applied Security team, while working with an
external researcher, Dr. Richard Clayton of the Computer Laboratory,
Cambridge University, found that various OpenID Providers (OPs) had
TLS Server Certificates that used weak keys, as a result of the Debian
Predictable Random Number Generator (CVE-2008-0166).

In combination with the DNS Cache Poisoning issue (CVE-2008-1447) and
the fact that almost all SSL/TLS implementations do not consult CRLs
(currently an untracked issue), this means that it is impossible to
rely on these OPs.

Attack Description
------------------

In order to mount an attack against a vulnerable OP, the attacker
first finds the private key corresponding to the weak TLS
certificate. He then sets up a website masquerading as the original
OP, both for the OpenID protocol and also for HTTP/HTTPS.

Then he poisons the DNS cache of the victim to make it appear that his
server is the true OpenID Provider.

There are two cases, one is where the victim is a user trying to
identify themselves, in which case, even if they use HTTPS to ensure
that the site they are visiting is indeed their provider, they will be
unable to detect the substitution and will give their login
credentials to the attacker.

The second case is where the victim is the Relying Party (RP). In this
case, even if the RP uses TLS to connect to the OP, as is recommended
for higher assurance, he will not be defended, as the vast majority of
OpenID implementations do not check CRLs, and will, therefore, accept
the malicious site as the true OP.

Mitigation
----------

Mitigation is surprisingly hard. In theory the vulnerable site should
revoke their weak certificate and issue a new one.

However, since the CRLs will almost certainly not be checked, this
means the site will still be vulnerable to attack for the lifetime of
the certificate (and perhaps beyond, depending on user
behaviour). Note that shutting down the site DOES NOT prevent the
attack.

Therefore mitigation falls to other parties.

1. Browsers must check CRLs by default.

2. OpenID libraries must check CRLs.

3. DNS caching resolvers must be patched against the poisoning attack.

4. Until either 1 and 2 or 3 have been done, OpenID cannot be trusted
   for any OP that cannot demonstrate it has never had a weak
   certificate.

Discussion
----------

Normally, when security problems are encountered with a single piece
of software, the responsible thing to do is to wait until fixes
are available before making any announcement. However, as a number of
examples in the past have demonstrated, this approach does not work
particularly well when many different pieces of software are involved
because it is necessary to coordinate a simultaneous release of the
fixes, whilst hoping that the very large number of people involved
will cooperate in keeping the vulnerability secret.

In the present situation, the fixes will involve considerable
development work in adding CRL handling to a great many pieces of
openID code. This is a far from trivial amount of work.

The fixes will also involve changes to browser preferences to ensure
that CRLs are checked by default -- which many vendors have resisted
for years. We are extremely pessimistic that a security vulnerability
in OpenID will be seen as sufficiently important to change the browser
vendors' minds.

Hence, we see no value in delaying this announcement; and by making
the details public as soon as possible, we believe that individuals
who rely on OpenID will be better able to take their own individual
steps to avoid relying upon the flawed certificates we have
identified.

OpenID is at heart quite a weak protocol, when used in its most
general form[1], and consequently there is very limited reliance upon
its security. This means that the consequences of the combination of
attacks that are now possible is nothing like as serious as might
otherwise have been the case.

However, it does give an insight into the type of security disaster
that may occur in the future if we do not start to take CRLs
seriously, but merely stick them onto to-do lists or disable them in
the name of tiny performance improvements.

Affected Sites
--------------

There is no central registry of OpenID systems, and so we cannot be
sure that we have identified all of the weak certificates that are
currently being served. The list of those we have found so far is:

openid.sun.com
www.xopenid.net
openid.net.nz

Notes
-----

[1] There are ways of using OpenID that are significantly more secure
than the commonly deployed scheme, I shall describe those in a
separate article.



Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Ben Laurie
On Fri, Aug 8, 2008 at 5:57 PM, Eric Rescorla [EMAIL PROTECTED] wrote:
 At Fri, 8 Aug 2008 17:31:15 +0100,
 Dave Korn wrote:

 Eric Rescorla wrote on 08 August 2008 16:06:

  At Fri, 8 Aug 2008 11:50:59 +0100,
  Ben Laurie wrote:
  However, since the CRLs will almost certainly not be checked, this
  means the site will still be vulnerable to attack for the lifetime of
  the certificate (and perhaps beyond, depending on user
  behaviour). Note that shutting down the site DOES NOT prevent the attack.
 
  Therefore mitigation falls to other parties.
 
  1. Browsers must check CRLs by default.
 
  Isn't this a good argument for blacklisting the keys on the client
  side?

   Isn't that exactly what Browsers must check CRLs means in this context
 anyway?  What alternative client-side blacklisting mechanism do you suggest?

 It's easy to compute all the public keys that will be generated
 by the broken PRNG. The clients could embed that list and refuse
 to accept any certificate containing one of them. So, this
 is distinct from CRLs in that it doesn't require knowing
 which servers have which cert...

It also only fixes this single type of key compromise. Surely it is
time to stop ignoring CRLs before something more serious goes wrong?



Re: [OpenID] OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Ben Laurie
On Fri, Aug 8, 2008 at 8:27 PM, Eddy Nigg (StartCom Ltd.)
[EMAIL PROTECTED] wrote:
 Ben Laurie:

 On Fri, Aug 8, 2008 at 12:44 PM, Eddy Nigg (StartCom Ltd.)
 [EMAIL PROTECTED] wrote:


 This affects any web site and service provider of various natures. It's not
 exclusive for OpenID nor for any other protocol / standard / service! It may
 affect an OpenID provider if it uses a compromised key in combination with
 unpatched DNS servers. I don't understand why OpenID is singled out, since
 it can potentially affect any web site including Google's various services
 (if Google would have used Debian systems to create their private keys).


 OpenID is singled out because I am not talking about a potential
 problem but an actual problem.


 Sorry Ben, but any web site or service (HTTP, SMTP, IMAP, SSH, VPN, etc.)
 which makes use of a compromised key has an actual problem and not a
 potential problem. OpenID as a standard isn't more affected than, let's
 say, XMPP... If there are servers and providers relying on such keys,
 they have a real, actual problem.

I do not dispute this.

 I don't see your point about OpenID, nor did I see
 anything new.

The point is I found OpenID servers with weak keys.

 The problem of weak keys should be dealt at the CA level, many which have
 failed to do anything serious about it.

Indeed.

 We have spotted other actual problems in other services. Details will
 be forthcoming at appropriate times.


 I think it's superfluous to single out different services since any service
 making use of the weak keys is affected, with recent discovery of DNS
 poisoning making the matter worse. I suggest you try a forum which can
 potentially reach many CAs, they in fact have everything at their disposal
 to remove this threat!

If you have a better forum, bring it on.

However, CAs do not have everything at their disposal to remove the
threat. Browsers, OpenID libraries and RPs must also participate.

Just as saying buffer overflows are bad has not magically caused all
buffer overflows to be fixed, I am confident that the only way to get
this problem fixed is to chase down all the culprits individually. I
am sure that OpenID is not the only thing with problems, as you say.



Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Ben Laurie
On Fri, Aug 8, 2008 at 7:54 PM, Tim Dierks [EMAIL PROTECTED] wrote:
 Using this Bloom filter calculator:
 http://www.cc.gatech.edu/~manolios/bloom-filters/calculator.html , plus the
 fact that there are 32,768 weak keys for every key type & size, I get
 various sizes of necessary Bloom filter, based on how many key type / sizes
 you want to check and various false positive rates:
  * 3 key types/sizes with 1e-6 false positive rate: 2826759 bits = 353 KB
  * 3 key types/sizes with 1e-9 false positive rate: 4240139 bits = 530 KB
  * 7 key types/sizes with 1e-6 false positive rate: 6595771 bits = 824 KB
  * 7 key types/sizes with 1e-9 false positive rate: 9893657 bits = 1237 KB

 I presume that the first 3 & first 7 key type/sizes in this list
 http://metasploit.com/users/hdm/tools/debian-openssl/ are the best to
 incorporate into the filter.

 Is there any chance it would be feasible to get a list of all the weak keys
 that were actually certified by browser-installed CAs, or those weak
 certificates? Presumably, this list would be much smaller and would be more
 effectively distributed in Bloom filter form.

Or as a CRL :-)
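Tim's figures follow from the standard optimal Bloom-filter sizing formula, m = -n ln p / (ln 2)^2, with n = 32,768 weak keys per key type & size (a quick check I've added, not part of the original thread):

```python
import math

WEAK_KEYS_PER_TYPE = 32768   # weak Debian keys per key type & size

def bloom_bits(n, p):
    """Optimal Bloom filter size in bits for n items at false-positive rate p."""
    return math.ceil(-n * math.log(p) / math.log(2) ** 2)

for types, p in [(3, 1e-6), (3, 1e-9), (7, 1e-6), (7, 1e-9)]:
    bits = bloom_bits(types * WEAK_KEYS_PER_TYPE, p)
    print(f"{types} types at {p:g}: {bits} bits = {bits / 8 / 1000:.0f} KB")
```

The four results land within rounding distance of the 353 KB / 530 KB / 824 KB / 1237 KB figures quoted above.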



Re: On the randomness of DNS

2008-08-03 Thread Ben Laurie

Philipp Gühring wrote:

Hi,

I would suggest to use http://www.cacert.at/random/ to test the 
randomness of the DNS source ports. Due to the large variety of 
random-number sources that have been tested there already, it's useful 
as a classification service of unknown randomly looking numbers.
You just have to collect 12 MB of numbers from a DNS server and upload 
it there. (If you get 2 Bytes per request, that's 6 million requests you 
have to do)



I don't see the point of evaluating the quality of a random number
generator by statistical tests.


We successfully used statistical tests to detect broken random number 
generators, we informed the vendors and they fixed them.

http://www.cacert.at/cgi-bin/rngresults


Are you seriously saying that the entropy of FreeBSD /dev/random is 0?




Re: On the unpredictability of DNS

2008-08-03 Thread Ben Laurie

William Allen Simpson wrote:

I've changed the subject.  Some of my own rants are about mathematical
cryptographers who are looking for the perfect solution instead of a
practical security solution.  Always think about the threat first!

In this threat environment, the attacker is unlikely to have perfect
knowledge of the sequence.  Shared resolvers are the most critical
vulnerability, but the attacker isn't necessarily in the packet path, and
cannot discern more than a few scattered numbers in the sequence.  The
more sharing (and greater impact), the more sparse the information.

In any case, the only perfect solution is DNS-security.  Over many
years, I've given *many* lectures to local university, network, and
commercial institutions about the need to upgrade and secure our zones.
But the standards kept changing, and the roots and TLDs were not secured.

Now, the lack of collective attention to known security problems has
bitten us collectively.

Never-the-less, with rephrasing, Ben has some good points


I don't see any actual rephrasing below, unless you are suggesting I 
should have said unpredictable instead of random. I think that's a 
perfectly fine substitution to make.



Ben Laurie wrote:
But just how GREAT is that, really? Well, we don't know. Why? Because 
there isn't actually a way to test for randomness. ...


While randomness is sufficient for perfect unpredictability, it isn't
necessary in this threat environment.


I agree, but my point is unaltered if you switch randomness to 
unpredictability.



Keep in mind that the likely unpredictability is about 2**24.  In many
or most cases, that will be implementation limited to 2**18 or less.


Why?

Your DNS resolver could be using some easily predicted random number 
generator like, say, a linear congruential one, as is common in the 
rand() library function, but DNS-OARC would still say it was GREAT.


In this threat environment, a better test would be for determination of a
possible seed for any of several common PRNG.  Or lack of PRNG.


I don't see why. A perfectly reasonable threat is that the attacker 
reverse engineers the PRNG (or just checks out the source). It doesn't 
need to be common to be predictable.


Oh, and I should say that number of ports and standard deviation are 
not a GREAT way to test for randomness. For example, the sequence 
1000, 2000, ..., 27000 has 27 ports and a standard deviation of over 
7500, which looks pretty GREAT to me. But not very random.



Again, the question is not randomness, but unpredictability.


Again, changing the words does not alter my point in any way, though I 
do agree that unpredictable is a better word.
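The arithmetic-sequence counterexample quoted above is easy to check (a sketch I've added): it sails past a ports-and-standard-deviation heuristic while being trivially predictable.

```python
from statistics import pstdev

ports = list(range(1000, 28000, 1000))    # 1000, 2000, ..., 27000
print(len(set(ports)), pstdev(ports))     # 27 distinct ports, std dev ~7789

assert len(set(ports)) == 27              # "27 ports": looks diverse
assert pstdev(ports) > 7500               # big spread: looks "GREAT"
# ...yet every next port is simply last + 1000: zero unpredictability.
```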


Cheers,

Ben.




Re: Strength in Complexity?

2008-08-03 Thread Ben Laurie
So, an executive summary of your responses appears to be "EKMI leaves
all the hard/impossible problems to be solved by components that are out
of scope".


As such, I'm not seeing much value.

Anyway...

Arshad Noor wrote:

Ben Laurie wrote:
OK, so you still have a PKI problem, in that you have to issue and 
manage client certificates. How is this done?



One man's meat  :-).  (I don't necessarily view this as a problem
Ben.  I've built up a career and a small business in the last 9 years
doing just that.)

Nevertheless, to answer the question, the PKI deployment is not part
of the SKSML protocol (other than the WSS header that carries the XML
Signature and/or XML Encryption of the SOAP Body), but it is part of
an EKMI.  (EKMI = PKI + SKMS).  A company deploying an EKMI must have,
or must build a PKI and deploy the client/server certificates separately
from the SKMS.

While this might be viewed as a problem for some/many companies in the
short-term, I fully anticipate that vendor implementations of SKMS will
integrate with PKI software to manage this process seamlessly in the
future.


PKI out of scope...

I do not believe this is the case. DRM fails because the attacker has 
physical possession of the system he is attacking.




Which is why we are recommending that SKMS clients use hardware based
modules (be it TPMs, smartcards, HSMs, etc.) to generate and store the
Private Key used by SKMS client to decrypt the symmetric keys.  While
even these can be attacked, the problem is now in a different domain
than the SKSML protocol.


...impossibility of solving DRM problem out of scope...


EKMI's goals are not to provide bullet-proof security.  It has more
modest ambitions - to provide a management framework for incremental
security, targeted at the right resource: the data, rather than the 
network.


Are there any even vaguely modern systems that target the network for 
security, or is this a straw man?


What I meant to say is that EKMI's goal is to protect the data and not
the network.


...goals the same as pretty much all cryptographic protocols...

If it is up to them, then why bother with this verification process? 
This sounds like yet more security theatre to me.




I'm not sure which verification process you're referring to.

No, this is not security theater.  EKMI does not guarantee anything, nor
does it promise unbreakable anything.  Everything in EKMI is a choice.
You get to choose the strength of your keys, your PKI, your policies,
your procedures and your implementation.  All we're offering is a tool
that does something specific to the extent that the specifications and
the technology allows.  Nothing more, nothing less.  If you - as a
cryptographer - say that AES-256 will do X under these conditions, then
EKMI will support X under those conditions.  EKMI only adds the ability
to manage a large number of keys centrally, and to ease many of the
administrative burdens we see that large-scale key-management can cause.
Everything it does is constrained by the limitations of the underlying
technology components, polices and practices.  But you still have to
make the choice.


...security out of scope and scope out of scope.

Is there anything other than key escrow that's actually in scope?

Cheers,

Ben.




Re: On the randomness of DNS

2008-07-31 Thread Ben Laurie

Dirk-Willem van Gulik wrote:
I fail to see how you could evaluate this without seeing the code (and
even then I doubt that one can properly do this -- the old NSA habit
of tweaking your random generator rather than your protocol/algorithm
when they wanted your product "upgraded" to export quality is terribly
effective and very hard to spot).


Or am I missing something ?


I think that, in general, you are correct. However, in the case of NAT 
your adversary is not someone who is trying to guess your randomness, 
but someone who is trying to sell you their NAT gateway. In this case, 
code/silicon inspection probably suffices.


Cheers,

Ben.




On the randomness of DNS

2008-07-30 Thread Ben Laurie
I thought this list might be interested in a mini-rant about DNS source 
port randomness on my blog: http://www.links.org/?p=352.


Ever since the recent DNS alert people have been testing their DNS 
servers with various cute things that measure how many source ports you 
use, and how random they are. Not forgetting the command line 
versions, of course


dig +short porttest.dns-oarc.net TXT
dig +short txidtest.dns-oarc.net TXT

which yield output along the lines of

aaa.bbb.ccc.ddd is GREAT: 27 queries in 12.7 seconds from 27 ports with 
std dev 15253


But just how GREAT is that, really? Well, we don't know. Why? Because 
there isn't actually a way to test for randomness. 
Your DNS resolver could be using some easily predicted random number 
generator like, say, a linear congruential one, as is common in the 
rand() library function, but DNS-OARC would still say it was GREAT. 
Believe them when they say it isn't GREAT, though! Non-randomness we can 
test for.
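
To make the point concrete, here is a sketch (mine, not from the post) of
a resolver whose source ports come from a linear congruential generator.
The constants are the textbook rand()-style ones and the port mapping is
invented for illustration; the point is that such a generator sails
through a distinct-ports-plus-std-dev check while being completely
predictable from the seed:

```python
import statistics

def lcg_ports(seed, n):
    """Classic linear congruential generator (textbook rand()-style
    constants); every output is determined by the seed."""
    state = seed
    ports = []
    for _ in range(n):
        state = (1103515245 * state + 12345) % 2**31
        # Map into the ephemeral port range, as a naive resolver might.
        ports.append(1024 + state % 64511)
    return ports

ports = lcg_ports(seed=1, n=27)

# The kind of check the testers report: distinct ports, big std dev...
print(len(set(ports)), round(statistics.pstdev(ports)))

# ...yet knowing the algorithm and the seed reveals every later port.
assert lcg_ports(seed=1, n=27) == ports
```

So the GREAT verdict measures dispersion, not unpredictability.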


So, how do you tell? The only way to know for sure is to review the code 
(or the silicon, see below). If someone tells you "don't worry, we did 
statistical checks and it's random" then make sure you're holding on to 
your wallet - he'll be selling you a bridge next.


But, you may say, we already know all the major caching resolvers have 
been patched and use decent randomness, so why is this an issue?


It is an issue because of NAT. If your resolver lives behind NAT (which 
is probably way more common since this alert, as many people's reaction 
[mine included] was to stop using their ISP's nameservers and stand up 
their own to resolve directly for them) and the NAT is doing source port 
translation (quite likely), then you are relying on the NAT gateway to 
provide your randomness. But random ports are not the best strategy for 
NAT. They want to avoid re-using ports too soon, so they tend to use an 
LRU queue instead. Pretty clearly an LRU queue can be probed and 
manipulated into predictability.
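
A toy model of that LRU behaviour (hypothetical code, not any vendor's
implementation; the tiny port range is just to keep the example short)
shows how little probing an attacker needs:

```python
from collections import deque

class LruNat:
    """Toy NAT source-port allocator: free ports live in an LRU queue,
    so the next port handed out is always the head of the queue."""
    def __init__(self, lo=1024, hi=1030):    # tiny range for illustration
        self.free = deque(range(lo, hi))

    def allocate(self):
        port = self.free.popleft()
        self.free.append(port)               # recycled least-recently-used
        return port

nat = LruNat()
observed = [nat.allocate() for _ in range(3)]   # attacker watches a few flows
predicted = observed[-1] + 1                    # and trivially guesses the next
assert nat.allocate() == predicted
```

Once the queue order is known, every subsequent port is known too - the
opposite of what the DNS fix relies on.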


So, if your NAT vendor is telling you not to worry, because the 
statistics say they are random, then I would start worrying a lot: 
your NAT vendor doesn't understand the problem. It's also pretty 
unhelpful for the various testers out there not to mention this issue, I 
must say.


Incidentally, I'm curious how much this has impacted the DNS 
infrastructure in terms of traffic - anyone out there got some statistics?


Oh, and I should say that number of ports and standard deviation are not 
a GREAT way to test for randomness. For example, the sequence 1000, 
2000, ..., 27000 has 27 ports and a standard deviation of over 7500, 
which looks pretty GREAT to me. But not very random.
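
The arithmetic is easy to check; pstdev here is the population standard
deviation from Python's standard library:

```python
import statistics

# The deliberately non-random sequence from the text: 1000, 2000, ..., 27000.
ports = list(range(1000, 28000, 1000))

assert len(set(ports)) == 27                # "27 ports"
print(round(statistics.pstdev(ports)))      # → 7789, i.e. "std dev over 7500"
```

Twenty-seven evenly spaced ports score as well on this metric as a
genuinely random resolver would.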




Re: On the randomness of DNS

2008-07-30 Thread Ben Laurie

Pierre-Evariste Dagand wrote:

 But just how GREAT is that, really? Well, we don't know. Why? Because
there isn't actually a way to test for randomness. Your DNS resolver
could be using some easily predicted random number generator like, say,
a linear congruential one, as is common in the rand() library function,
but DNS-OARC would still say it was GREAT. Believe them when they say it
isn't GREAT, though!


Well, there are some tests to judge the quality of a random number
generator. The best known being the Diehard tests:

http://en.wikipedia.org/wiki/Diehard_tests
http://stat.fsu.edu/pub/diehard/

For sure, these tests might be overkill here. Also, there must be
some tests in The Art of Computer Programming too, but I don't have it
at hand right now (shame on me).


I doubt you can get a large enough sample in any reasonable time.


I don't see the point of evaluating the quality of a random number
generator by statistical tests.


Which is entirely my point.


But I might be wrong, though.

Regards,






Re: On the randomness of DNS

2008-07-30 Thread Ben Laurie

Pierre-Evariste Dagand wrote:

I doubt you can get a large enough sample in any reasonable time.


Indeed.


I don't see the point of evaluating the quality of a random number
generator by statistical tests.

 Which is entirely my point.


I fear I was not clear: I don't see what is wrong in evaluating the
quality of a random number generator with (an extensive set of)
statistical tests.


SHA-1(1), SHA-1(2), SHA-1(3), ... SHA-1(N) will look random, but clearly 
is not.
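
A quick sketch of that example: the hash outputs pass a crude bit-balance
check, yet the whole stream is reproducible by anyone who guesses the
scheme. (The counter encoding and the 0.45-0.55 window are my choices,
purely for illustration.)

```python
import hashlib

# Hash a simple counter: SHA-1(1), SHA-1(2), ..., SHA-1(100).
stream = b"".join(hashlib.sha1(str(i).encode()).digest() for i in range(1, 101))

# The outputs look statistically random: about half of the 16000 bits are set...
ones = sum(bin(b).count("1") for b in stream)
assert 0.45 < ones / (len(stream) * 8) < 0.55

# ...but the "randomness" evaporates for anyone who knows the construction.
assert stream[:20] == hashlib.sha1(b"1").digest()
```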



For sure, it would be better if we could check the source code and
match the implemented RNG against an already known RNG.

But, then, there is a chicken-and-egg problem: how would you
ensure that a *new* RNG is a good source of randomness? (it's not a
rhetorical question, I'm curious about other approaches).


By reviewing the algorithm and thinking hard.



Explaining DNSSEC

2008-07-10 Thread Ben Laurie
I was asked off-list for a pointer to an explanation of DNSSEC. I guess 
there may be other readers who'd like that, so here's a pointer to 
Matasano Chargen's rather beautiful exposition:


http://www.matasano.com/log/case-against-dnssec/

Unfinished, but good enough. In particular, part 2 explains DNSSEC

http://www.matasano.com/log/772/a-case-against-dnssec-count-2-too-complicated-to-deploy/

Cheers,

Ben.



Re: Kaminsky finds DNS exploit

2008-07-09 Thread Ben Laurie

Paul Hoffman wrote:
First off, big props to Dan for getting this problem fixed in a 
responsible manner. If there were widespread real attacks first, it 
would take forever to get fixes out into the field.


However, we in the security circles don't need to spread the Kaminsky 
finds meme. Take a look at 
http://tools.ietf.org/wg/dnsext/draft-ietf-dnsext-forgery-resilience/. 
The first draft of this openly-published document was in January 2007. 
It is now in WG last call.


The take-away here is not that Dan didn't discover the problem, but 
Dan got it fixed. An alternate take-away is that IETF BCPs don't make 
nearly as much difference as a diligent security expert with a good name.


Guess you need to tell Dan that - he seems to think he did discover it.



Re: Kaminsky finds DNS exploit

2008-07-09 Thread Ben Laurie

Steven M. Bellovin wrote:

On Wed, 09 Jul 2008 11:22:58 +0530
Udhay Shankar N [EMAIL PROTECTED] wrote:

I think Dan Kaminsky is on this list. Any other tidbits you can add 
prior to Black Hat?


Udhay

http://www.liquidmatrix.org/blog/2008/07/08/kaminsky-breaks-dns/


I'm curious about the details of the attack.  Paul Vixie published the
basic idea in 1995 at Usenix Security
(http://www.usenix.org/publications/library/proceedings/security95/vixie.html)
-- in a section titled "What We Cannot Fix", he wrote:

With only 16 bits worth of query ID and 16 bits worth of UDP port
number, it's hard not to be predictable.  A determined attacker
can try all the numbers in a very short time and can use patterns
derived from examination of the freely available BIND code.  Even
if we had a white noise generator to help randomize our numbers,
it's just too easy to try them all.


So this seems to me to be really true only in a theoretical sense. 
Exploring the whole 32 bit space obviously requires well in excess of 4 
GB of traffic, which is clearly a non-trivial amount to dump on your victim.
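
A back-of-envelope version of that comparison, assuming a minimal
spoofed response of around 64 bytes (my figure, purely illustrative):

```python
# 16-bit query ID x 16-bit source port gives 2**32 combinations to guess.
guesses = 2**16 * 2**16
assert guesses == 2**32

# At an assumed ~64 bytes per spoofed response, trying them all costs
# hundreds of gigabytes on the wire - "non-trivial to dump on your victim".
spoof_bytes = guesses * 64
print(spoof_bytes / 2**30)          # → 256.0 (GiB)
```

Which is why randomizing the port as well as the ID raises the bar in
practice, even if Vixie's "too easy to try them all" holds in theory.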


According to other data, the fix in BIND is to:

a) use random ports

b) use a good PRNG

so I'm beginning to suspect the issue is simply that the theory that it 
was easy to attack led to no effort being made to make it as hard as 
possible. And now it has.



Obligatory crypto: the ISC web page on the attack notes DNSSEC is the
only definitive solution for this issue. Understanding that immediate
DNSSEC deployment is not a realistic expectation...


The beauty of DNSSEC being, of course, that any answer that verifies can 
be trusted - so it's of no interest who provided that answer.




Re: Strength in Complexity?

2008-07-08 Thread Ben Laurie

Arshad Noor wrote:

Ben Laurie wrote:

Arshad Noor wrote:


I may be a little naive, but can a protocol itself enforce proper
key-management?  I can certainly see it facilitating the required
discipline, but I can't see how a protocol alone can enforce it.


I find the question difficult to understand. Before I could even begin 
to answer, you'd have to define what proper key management actually is.


I consider KM to be the discipline of defining policy and establishing
procedures  infrastructure for the generation, use, escrow, recovery
and destruction of cryptographic keys, in conformance with the defined
policies.


Then I would agree that a protocol alone could not achieve all of this, 
though obviously it is possible to design a protocol that makes it 
impossible.


That said, EKMI (from my brief reading) has a view of key management 
that is only proper in quite constrained circumstances. In 
particular, keys are available to participants other than those who 
are communicating, which is generally considered to be a very bad idea. 


I'm not sure I'm following your comment here, Ben.  Did some word get
left out?  In EKMI, keys are available only to those who are known to
the central Symmetric Key Services (SKS) server,  and who are authorized
to receive keys.  The knowledge comes from entries in the SKS server
about the clients and their digital certificates.  The authorization
comes from ACLs and from policies that apply to the client.


OK, so you still have a PKI problem, in that you have to issue and 
manage client certificates. How is this done?


So, yes, EKMIs are designed for constrained environments.




The design paradigm we chose for EKMI was to have:

1) the centralized server be the focal point for defining policy;
2) the protocol carry the payload with its corresponding policy;
3) and the client library enforce the policy on client devices;



Well. You said centralized server. Many cryptographic systems don't 
have one of those.




I realized that two years ago when I started defining the architecture
for EKMI and the software to implement it.  It was the only logical way
of addressing the business problem of managing encryption keys for five
different platforms/applications that needed to share ciphertext on a
daily basis across hundreds of locations and thousands of POS registers.


I'd be very surprised if it were the _only_ logical way to do it. But 
that aside, my point stands: these characteristics are not shared by all 
cryptographic systems. In fact, I'd say that all of them are not shared 
by most cryptographic systems.


Also, the idea of a client library enforcing policy is DRM all over 
again. Which, as we all know, will never work.


You make an interesting point here.  While, conceptually, EKMI and DRM
share similar designs, I believe the reasons for DRM's failure has more
to do with philosophy than with technology.  But that's a different
debate.


I do not believe this is the case. DRM fails because the attacker has 
physical possession of the system he is attacking.


The fact that the attacker is highly motivated because of the 
objectionable nature of DRM does not seem to differ much from your 
system, though in your case the motivator will presumably be profit.



P.S. Companies deploying an EKMI must have an external process in
place to ensure their applications are using verified libraries
on the client devices, so their policies are not subverted.



Ha ha. Like that's going to work. Even if we assume that libraries are 
verified (fat chance, IMO), how are you going to stop, for example, 
cut'n'paste? Employees reading things out over the phone? Bugs? Etc.




EKMI's goals are not to provide bullet-proof security.  It has more
modest ambitions - to provide a management framework for incremental
security, targeted at the right resource: the data, rather than the 
network.


Are there any even vaguely modern systems that target the network for 
security, or is this a straw man?


As such, it will merely be a tool in the evolution towards more secure
systems - how people use the tool is up to them.


If it is up to them, then why bother with this verification process? 
This sounds like yet more security theatre to me.


Cheers,

Ben.



Re: Strength in Complexity?

2008-07-07 Thread Ben Laurie

Arshad Noor wrote:

Florian Weimer wrote:

* Arshad Noor:


http://www.informationweek.com/shared/printableArticle.jhtml?articleID=208800937 



On a more serious note, I think the criticism probably refers to the
fact that SKSML does not cryptographically enforce proper key
management.  If a participant turns bad (for instance, by storing key
material longer than permitted by the protocol), there's nothing in the
protocol that stops them.


Thank you for your comment, Florian.

I may be a little naive, but can a protocol itself enforce proper
key-management?  I can certainly see it facilitating the required
discipline, but I can't see how a protocol alone can enforce it.
Any examples you can cite where this has been done, would be very
helpful.


I find the question difficult to understand. Before I could even begin 
to answer, you'd have to define what proper key management actually is.


That said, EKMI (from my brief reading) has a view of key management 
that is only proper in quite constrained circumstances. In particular, 
keys are available to participants other than those who are 
communicating, which is generally considered to be a very bad idea. This 
is fine if you are a corporation wanting to achieve escrow, of course. 
Though that can be done without requiring a central server to remember 
all the keys, of course.



The design paradigm we chose for EKMI was to have:

1) the centralized server be the focal point for defining policy;
2) the protocol carry the payload with its corresponding policy;
3) and the client library enforce the policy on client devices;

In some form or another, don't all cryptographic systems follow a
similar paradigm?


Well. You said centralized server. Many cryptographic systems don't 
have one of those.


Also, the idea of a client library enforcing policy is DRM all over 
again. Which, as we all know, will never work.


So, in short: no, they don't.


Arshad Noor
StrongAuth, Inc.

P.S. Companies deploying an EKMI must have an external process in
place to ensure their applications are using verified libraries
on the client devices, so their policies are not subverted.


Ha ha. Like that's going to work. Even if we assume that libraries are 
verified (fat chance, IMO), how are you going to stop, for example, 
cut'n'paste? Employees reading things out over the phone? Bugs? Etc.


Cheers,

Ben.



Re: Can we copy trust?

2008-06-03 Thread Ben Laurie

Ed Gerck wrote:

Ben Laurie wrote:
But doesn't that prove the point? The trust that you consequently 
place in the web server because of the certificate _cannot_ be copied 
to another webserver. That other webserver has to go out and buy its 
own copy, with its own domain name in it.


A copy is something identical. So, in fact you can copy that server cert 
to another server that has the same domain (load balancing), and it will 
work. Web admins do it all the time. The user will not notice any 
difference in how the SSL will work.


Obviously. Clearly I am talking about a server in a different domain.



Re: Unpatented PAKE!

2008-06-02 Thread Ben Laurie

Scott G. Kelly wrote:

Here's another approach to password authenticated key exchange with
similar security claims. The underlying mechanism is under
consideration for inclusion in by the 802.11s group in IEEE:

http://www.ietf.org/internet-drafts/draft-harkins-emu-eap-pwd-01.txt


Hmmm. I don't see any IPR statements for that draft.



Re: Unpatented PAKE!

2008-06-02 Thread Ben Laurie

Scott G. Kelly wrote:

Ben Laurie wrote:

Scott G. Kelly wrote:

Here's another approach to password authenticated key exchange with
similar security claims. The underlying mechanism is under
consideration for inclusion in by the 802.11s group in IEEE:

http://www.ietf.org/internet-drafts/draft-harkins-emu-eap-pwd-01.txt

Hmmm. I don't see any IPR statements for that draft.


My understanding is that there are no IPR claims on this method. I am not a 
lawyer, though.


Given the stupidity of the US patent system, at least the authors need
to state that they make no claims, AIUI.



Re: Can we copy trust?

2008-06-02 Thread Ben Laurie

Ed Gerck wrote:
In the essay Better Than Free, Kevin Kelly debates which concepts hold 
value online, and how to monetize those values. See 
www.kk.org/thetechnium/archives/2008/01/better_than_fre.php


Kelly's point can be very useful: *When copies are free, you need to 
sell things which can not be copied.*


The problem that I see and present to this list is when he discusses 
qualities that can't be copied and considers trust as something that 
cannot be copied.


Well, in the digital economy we had to learn how to copy trust and we 
did. For example, SSL would not work if trust could not be copied.


How do we copy trust? By recognizing that because trust cannot be 
communicated by self-assertions (*), trust cannot be copied by 
self-assertions either.


To trust something, you need to receive information from sources OTHER 
than the source you want to trust, and from as many other sources as 
necessary according to the extent of the trust you want. With more trust 
extent, you are more likely to need more independent sources of 
verification.


To copy trust, all you do is copy the information from those channels in 
a verifiable way and add that to the original channel information. We do 
this all the time in scientific work: we provide our findings, we 
provide the way to reproduce the findings, and we provide the published 
references that anyone can verify.


To copy trust in the digital economy, we provide  digital signatures 
from one or more third-parties that most people will trust.


This is how SSL works. The site provides a digital certificate signed by 
a CA that most browsers trust, providing an independent channel to 
verify that the web address is correct -- in addition to what the 
browser's location line says.


But doesn't that prove the point? The trust that you consequently place 
in the web server because of the certificate _cannot_ be copied to 
another webserver. That other webserver has to go out and buy its own 
copy, with its own domain name in it.


Cheers,

Ben.



Unpatented PAKE!

2008-05-30 Thread Ben Laurie

http://grouper.ieee.org/groups/1363/passwdPK/submissions/hao-ryan-2008.pdf

At last.

Cheers,

Ben.



Re: The perils of security tools

2008-05-25 Thread Ben Laurie

Steven M. Bellovin wrote:

On Sat, 24 May 2008 20:29:51 +0100
Ben Laurie [EMAIL PROTECTED] wrote:

Of course, we have now persuaded even the most stubborn OS that 
randomness matters, and most of them make it available, so perhaps

this concern is moot.

Though I would be interested to know how well they do it! I did have 
some input into the design for FreeBSD's, so I know it isn't

completely awful, but how do other OSes stack up?


I believe that all open source Unix-like systems have /dev/random
and /dev/urandom; Solaris does as well.


I meant: how good are the PRNGs underneath them?



Re: The perils of security tools

2008-05-24 Thread Ben Laurie

Eric Young wrote:

#ifndef PURIFY
	MD_Update(m,buf,j); /* purify complains */
#endif

  


I just re-checked, this code was from SSLeay, so it pre-dates OpenSSL
taking over from me
(about 10 years ago, after I was assimilated by RSA Security).

So in some ways I'm the one at fault for not being clear enough about
why 'purify complains' and why it was not relevant.
Purify also incorrectly complained about a construct used in the digest
gathering code which functioned correctly, but purify was
also correct (a byte in a read word was uninitialised, but it was later
overwritten by a shifted byte).

One of the more insidious things about Purify is that once its
complaints are investigated, and deemed irrelevant (but left in the
library),
anyone who subsequently runs purify on an application linking in the
library will get the same purify warning.
This leads to rather distressed application developers.  Especially if
their company has a policy of 'no purify warnings'.

One needs to really ship the 'warning ignore' file for purify (does
valgrind have one?).

I personally do wonder why, if the original author had purify related
comments, which means he was aware of the issues,
but had still left the code in place, the reviewer would not consider
that the code did some-thing important enough to
ignore purify's complaints.


I think the core point is that 10+ years ago, when this code was 
written, randomness was actually quite hard to come by. Daemons like EGD 
had to be installed and fed and cared for. So, even a little entropy 
from uninitialised memory (I use the quotes because I do appreciate 
that the memory probably has somewhat predictable content) was worth having.


Of course, we have now persuaded even the most stubborn OS that 
randomness matters, and most of them make it available, so perhaps this 
concern is moot.


Though I would be interested to know how well they do it! I did have 
some input into the design for FreeBSD's, so I know it isn't completely 
awful, but how do other OSes stack up?


Cheers,

Ben.


