[cryptography] no, don't advertise that you support SSLv2!

2015-08-03 Thread Patrick Pelletier
I was on an e-commerce site today, and was horrified when I saw the 
following badge:


https://lib.store.yahoo.net/lib/yhst-11870311283124/secure.gif

Did they still have SSLv2 enabled?  I checked, and luckily they don't:

https://www.ssllabs.com/ssltest/analyze.html?d=us-dc2-order.store.yahoo.net

So, it's not as bad as their badge claims, but still, they only get a 
C.  (They support only one version: TLS 1.0.)  I would've thought a big 
Web property like Yahoo could do better.  :(


--Patrick



Re: [cryptography] on using RDRAND [was: Entropy improvement: haveged + rngd together?]

2013-12-06 Thread Patrick Pelletier

On 12/2/13, 3:16 PM, dj-0ozvisyrzglbdgjk7y7...@public.gmane.org wrote:


I'm currently arguing with NIST about their specifications which make it
hard to provide raw entropy while being FIPS 140-2 and NIST SP800-90
compliant. If I had a free hand, it would not be a configuration.
Configurations suck in numerous ways. It would just be there.


Is the TRNG circuit small enough you could just slap down two of them, 
and use one to feed the NIST pipeline and use the other for raw entropy 
access?


--Patrick




Re: [cryptography] cryptographic agility

2013-10-05 Thread Patrick Pelletier

On 10/4/13 9:48 PM, Jeffrey Goldberg wrote:


The AES “failure” in TLS is a CBC padding failure. Any block cipher would have 
“failed” in exactly the same way.


Yes, I know.  My second point, about needing a stream cipher other than
RC4, is what's applicable to the current BEAST vs RC4 dilemma.  My
point about block ciphers was more hypothetical.  As far as we know,
AES is good, but some day it might turn out not to be, and even now
there is the concern that the AES-256 key schedule is not as good as it
could be.  My point was just that if you are going to have multiple
block ciphers, you should have some diversity, and be able to explain
the rationale for why you picked each one.  (I.e., "this one was for
speed, that one was for security margin.")  But TLS seems to have opted
for the logic that if one 128-bit block cipher is good, four 128-bit
block ciphers are better.  Perhaps Camellia is a good back-up to AES; I
don't know.  But I'm not aware of it having been presented as having a
higher security margin or something like that, the way Serpent could
have been presented.  It was just "here's another one."  And then we
got SEED and ARIA piling on after that.  (Or maybe SEED was before
Camellia; I don't remember, and it doesn't really matter.)


Yes, CBC mode has been an issue in a lot of the recent attacks against
TLS.  So, block cipher modes are another axis for diversity.  A lot of
folks seem to be putting a lot of eggs in the GCM basket lately.  Maybe
that's okay, but I know some concerns have been raised about the
complexity of implementing GCM, and the potential for side-channel
attacks.  Maybe we need EAX as a backup in case GCM doesn't turn out to
be as great as it was supposed to be.  Again, I'm not *specifically*
saying we need a Serpent-EAX cipher suite or something like that.  I'm
just saying that, in general, this is the kind of thinking that should
be going on: how can we add cipher suites that add diversity, rather
than just "me too"?
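
For what it's worth, the caller-facing side of GCM is pretty simple;
the complexity concerns are about what the library has to implement
underneath.  A minimal sketch of an AES-128-GCM seal via OpenSSL's EVP
interface (my own illustration, with error handling pared down):

#include <openssl/evp.h>

int gcm_seal(const unsigned char key[16], const unsigned char iv[12],
             const unsigned char *pt, int ptlen,
             unsigned char *ct, unsigned char tag[16])
{
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int len, ok = 0;
    /* GCM's default IV length is 12 bytes, so no SET_IVLEN needed */
    if (ctx != NULL &&
        EVP_EncryptInit_ex(ctx, EVP_aes_128_gcm(), NULL, key, iv) == 1 &&
        EVP_EncryptUpdate(ctx, ct, &len, pt, ptlen) == 1 &&
        EVP_EncryptFinal_ex(ctx, ct + len, &len) == 1 &&
        EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, 16, tag) == 1)
        ok = 1;
    EVP_CIPHER_CTX_free(ctx);
    return ok;
}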


--Patrick



[cryptography] Curve25519 OID (was: Re: the spell is broken)

2013-10-05 Thread Patrick Pelletier

On 10/5/13 2:47 PM, Jeffrey Walton wrote:


Do you know if there's a standard name and OID assigned to Dr.
Bernstein's gear?  IETF only makes one mention of 25519 in the RFC
search, and it's related to TLS and marked TBD.


Not yet.  See this thread:

http://www.ietf.org/mail-archive/web/tls/current/msg10074.html

(In short, the argument was that an OID for Curve25519 is only useful if 
it's going to be used for signatures, and Curve25519 shouldn't directly 
be used for signatures; Ed25519 should be used instead.)


--Patrick



[cryptography] cryptographic agility (was: Re: the spell is broken)

2013-10-04 Thread Patrick Pelletier

On 10/4/13 3:19 PM, Nico Williams wrote:


b) algorithm agility is useless if you don't have algorithms to choose
from, or if the ones you have are all in the same family.


Yes, I think that's where TLS failed.  TLS supports four block ciphers
with a 128-bit block size (AES, Camellia, SEED, and ARIA) without (as
far as I'm aware) any clear trade-off between them.  As opposed to,
say, if Serpent had been provided as the alternative to AES, where
there would be a fairly clear trade-off.  (Since Serpent was generally
recognized as being more conservative, albeit slower, than AES, it
would make a nice back-up cipher.)  Or, today, the 1024-bit version of
Threefish would add interesting diversity, since it has a radically
different block size.


And, of course, the big problem was that RC4 was the only stream cipher 
supported by TLS.  There's now work to remedy that with a Salsa20 or 
ChaCha cipher suite, but that should have been done long ago, since 
everyone knew RC4 was getting old and broken-ish.


So, my point is that you should pick certain axes such as stream versus 
block, or security versus speed, and then choose a small number of 
ciphersuites which are radically different on those axes.  There's no 
point in defining many cipher suites that cover areas that are already 
well-covered.  And, conversely, if a particular area is only covered by 
cipher suites that are getting long in the tooth, it's time to 
proactively cover that area with something new.
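
At the application level, you can at least enforce that kind of short,
deliberate list yourself, rather than accepting everything the library
offers.  E.g., with OpenSSL (a sketch; the particular suites are just
for illustration):

#include <openssl/ssl.h>

/* one modern AES suite plus one deliberately non-AES fallback,
 * instead of every 128-bit block cipher the library knows */
int configure_ciphers(SSL_CTX *ctx)
{
    return SSL_CTX_set_cipher_list(ctx,
        "ECDHE-RSA-AES128-GCM-SHA256:CAMELLIA128-SHA") == 1;
}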


--Patrick



Re: [cryptography] what has the NSA broken?

2013-09-05 Thread Patrick Pelletier

On 9/5/13 6:25 PM, Andy Isaacson wrote:


However, virtually nobody properly keys their ciphers with physical
entropy.  I suspect that correlated key PRNG attacks are almost
certainly a significant part of the NSA/GCHQ crypto break.  Many
deployed systems expose a significant amount of correlated output of
/dev/urandom or the in-process PRNG.


Isn't the point of a good PRNG that future output can't be predicted,
even knowing all previous output?  If we assume that AES can't be
broken even with the NSA's resources, why would a PRNG based on AES be
breakable by the NSA?  (I.e., breaking AES-CTR used as a PRNG and
breaking AES-CTR used as a cipher amount to the same thing.)  This
gets back to the old random vs. urandom debate, and whether it's
actually possible to decrease entropy.
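
To make that concrete, here is a sketch of AES-CTR used as a PRNG, via
OpenSSL's EVP interface (key and IV would come from real entropy);
predicting its future output is exactly the problem of breaking
AES-CTR as a cipher:

#include <openssl/evp.h>

int ctr_prng(const unsigned char key[16], const unsigned char iv[16],
             unsigned char *out, int outlen)
{
    /* encrypting zeros in CTR mode just emits the keystream */
    static const unsigned char zeros[4096];
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int len, ok = 0;
    if (ctx != NULL &&
        EVP_EncryptInit_ex(ctx, EVP_aes_128_ctr(), NULL, key, iv) == 1) {
        ok = 1;
        while (outlen > 0 && ok) {
            int chunk = outlen < (int) sizeof(zeros)
                          ? outlen : (int) sizeof(zeros);
            ok = EVP_EncryptUpdate(ctx, out, &len, zeros, chunk) == 1;
            out += len;
            outlen -= chunk;
        }
    }
    EVP_CIPHER_CTX_free(ctx);
    return ok;
}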



Also, retrieving key material from endpoints is a high return activity.
Nearly nobody uses PFS ciphersuites, many HTTPS privatekeys are used for
multiple years, and a single 1 KiB leak of key material is sufficient to
decrypt all traffic under that key.


Yeah, the long life of private keys was recently a subject on the 
perpass list:


http://www.ietf.org/mail-archive/web/perpass/current/msg00066.html


RSA-1024 I'd treat as dead, RSA-2048 is
probably robust enough that if NSA have an attack it would be too
valuable to risk exposing under anything but an existential threat
scenario.


It would be fair to say the same thing about 1024-bit Diffie-Hellman,
too, right?  Most of the charts I've seen seem to indicate that.  So
even a PFS ciphersuite wouldn't help you that much if you used
1024-bit DHE?  And yet a lot of software seems set against using
larger primes:


http://blog.ivanristic.com/2013/08/increasing-dhe-strength-on-apache.html

and OpenSSL seems to consider it the fault of the people wanting to use 
larger primes, rather than vice-versa:


http://www.mail-archive.com/openssl-users@openssl.org/msg71899.html
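
(For the record, once you have stronger parameters in hand, e.g.
generated offline with "openssl dhparam -out dh2048.pem 2048",
installing them is only a few lines of OpenSSL; a sketch:)

#include <stdio.h>
#include <openssl/pem.h>
#include <openssl/ssl.h>

/* load 2048-bit DH parameters from a PEM file and install them,
 * instead of settling for a 1024-bit default */
int use_stronger_dhe(SSL_CTX *ctx, const char *path)
{
    DH *dh;
    long rc;
    FILE *f = fopen(path, "r");
    if (f == NULL)
        return 0;
    dh = PEM_read_DHparams(f, NULL, NULL, NULL);
    fclose(f);
    if (dh == NULL)
        return 0;
    rc = SSL_CTX_set_tmp_dh(ctx, dh);  /* makes its own copy */
    DH_free(dh);
    return rc == 1;
}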


I've met djb and all my checks for NSA minders came up negative.


Speaking of which, would Curve25519 be a wiser choice for ECDHE than the 
NIST-approved curves, given that Bruce Schneier believes the NSA is 
influencing NIST (for the worse)?


http://www.ietf.org/mail-archive/web/perpass/current/msg00087.html

--Patrick



Re: [cryptography] urandom vs random

2013-08-22 Thread Patrick Pelletier

On 8/22/13 9:40 AM, Nico Williams wrote:


My suggestion is /dev/urandomN where N is one of 128, 192, or 256, and
represents the minimum entropy estimate of HW RNG inputs to date to
/dev/urandomN's pool.  If the pool hasn't received that much entropy
at read(2) time, then block, else never block and just keep stretching
that entropy and accepting new entropy as necessary.


That sounds like the perfect interface!  The existing dichotomy
between random and urandom (on Linux) is horrible, and it's nice to be
able to specify how much entropy you need.
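
To make it concrete, here is how an application might consume such a
device.  Note that "/dev/urandom256" is the hypothetical name from the
proposal above, not something that exists today:

#include <errno.h>
#include <fcntl.h>
#include <stddef.h>
#include <unistd.h>

/* the first read(2) may block until the pool has accumulated 256
 * bits of entropy; after that it never blocks */
int get_random(unsigned char *buf, size_t n)
{
    int fd = open("/dev/urandom256", O_RDONLY);
    if (fd < 0)
        return -1;
    while (n > 0) {
        ssize_t r = read(fd, buf, n);
        if (r <= 0) {
            if (r < 0 && errno == EINTR)
                continue;
            close(fd);
            return -1;
        }
        buf += r;
        n -= (size_t) r;
    }
    close(fd);
    return 0;
}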


--Patrick



Re: [cryptography] urandom vs random

2013-08-19 Thread Patrick Pelletier

On 8/19/13 1:51 PM, grarpamp wrote:


This reminds me, where are the open designs for a strong hwRNG based
on the common smoke detector? People say they want a hwRNG, lots
of them are free for asking right down the street at the demolition site.
But where are the designs?


The creator of HotBits provides a fair amount of information about his 
design:


http://www.fourmilab.ch/hotbits/hardware3.html

He actually recommends against using the americium from smoke
detectors, though, and says it is safer to purchase a commercial
cesium radiation source, for which he provides links.


--Patrick



[cryptography] best practices for hostname validation when using JSSE

2013-08-09 Thread Patrick Pelletier
One thing mentioned in the "Most Dangerous Code in the World" paper
(and I've verified experimentally) is that JSSE doesn't validate the
hostname against the X.509 certificate, so if one uses JSSE naively,
one is open to man-in-the-middle attacks.  The best solution I've been
able to figure out is to borrow the hostname validation code from
Apache HttpComponents.  But I'm curious what other people who use JSSE
are doing, and if there's a best practice for doing this.


Apologies if this isn't on-topic for this list; I know you guys mostly  
discuss higher-level issues, rather than APIs.  I already tried asking  
on Stack Overflow, and they said it was off-topic for Stack Overflow:


http://stackoverflow.com/questions/18139448/how-should-i-do-hostname-validation-when-using-jsse

So, a meta-question would be: where is the right place to ask this  
question?  I haven't been able to find a JSSE-specific mailing list.


Thanks,

--Patrick



Re: [cryptography] best practices for hostname validation when using JSSE

2013-08-09 Thread Patrick Pelletier

On Aug 9, 2013, at 12:49 PM, Tim Dierks wrote:

I added a comment on your Stack Overflow post (incorrectly closed,  
IMHO, but the SO crowd can be prickly).


Thanks!  This was my first attempt at using Stack Overflow.


The right thing to do depends on knowing a couple more details:
 1. Where are you getting your certificates?


To be determined, mostly.  Right now, I'm just generating them on my  
desktop with gnoMint.  The deployment has yet to be ironed out.  (I'm  
just writing a library, and the actual applications that use it will  
be written by others in our company.)  We definitely don't plan on  
using commercial CAs, though!  Either our company would act as a CA  
for all of our customers, or else each customer would act as a CA for  
themselves.  (In the latter case, though, we still want customers to  
be able to talk to each other, so they would have to install each  
other's trust roots.  Although then one customer could in theory MITM  
another customer, which seems bad.  I suppose that we're running up  
against Zooko's triangle, though, in that in order to be secure, we  
either need a centralized CA, or else we'd need to move away from  
domain names and use something like a hash of the public key.  Ugh.)   
But either way, the number of trust roots will be very limited.


I suppose we could switch to a system like trust-on-first-use, which  
would be much less hassle for the users, but then it would be easy to  
MITM as long as the MITM is in place ahead of time.



 2. What's the best way to name the servers you trust?


I think DNS is best.  We identify servers via a URI-like string (e.g.,
foo://example.com/bar/whatever), so the advantage of using the DNS
name is that we are validating information that is already contained
in the URI.  If we used some other sort of identifier, then the user
would need to communicate that identifier in addition to the hostname.
(But see previous lament about Zooko's triangle.  I suppose we could
adopt a syntax like foo://hashofpublic...@example.com/bar/whatever, to
avoid needing a centralized CA, but users would probably complain
about typing in ugly hex strings.)


Since you have a proprietary protocol, the easiest thing to do is  
make sure the cert chains up to a root you trust (ideally not system- 
installed roots, because nobody knows how deep the sewage flows there 
—the only exception would be if you want to delegate trust issues to  
a user of your software who might be expected to manage their own  
trustpoints using system configuration tools, but that gives me the  
willies).


Yeah, I plan on using JSSE's ability to validate the certificate  
chain, except I'll give it a custom set of trust roots instead of the  
standard ones.  That's one reason I'm leaning toward JSSE instead of  
BouncyCastle, since BouncyCastle lacks certificate chain validation in  
addition to hostname validation, while JSSE has certificate chain  
validation and only lacks hostname validation.


Then make sure the name on the cert matches the name of who you  
think you're communicating with (could be DNS name, or some other  
identification of the entity); if you may want to use SSL libraries  
which check certs and which are designed for HTTPS, you probably  
want to use the DNS name.


Yeah, I think that modeling the validation on HTTPS makes sense, even
though it isn't HTTPS.  That's what I'm already doing in the C
implementation.  (Since I did this in C several months ago, and now
I'm doing it in Java.)  I went through pretty much this same thing
back then, since OpenSSL doesn't validate hostnames, either.  I ended
up using some sample code from the 2002 O'Reilly book "Network
Security with OpenSSL" by Viega/Messier/Chandra to do hostname
validation, although I'm thinking about switching to the hostname
validation code published by iSEC Partners in conjunction with the
"Most Dangerous Code in the World" paper.  Either way, though, neither
of those validation routines handles wildcards.
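
The core of that C validation looks roughly like the following (a
pared-down sketch of my own: it checks only the subject CN, has no
wildcard support, and real code should prefer the subjectAltName
extension when present):

#include <strings.h>
#include <openssl/ssl.h>

int peer_hostname_matches(SSL *ssl, const char *expected)
{
    char cn[256];
    int ok = 0;
    X509 *cert = SSL_get_peer_certificate(ssl);
    if (cert == NULL)
        return 0;               /* peer presented no certificate */
    if (SSL_get_verify_result(ssl) == X509_V_OK &&
        X509_NAME_get_text_by_NID(X509_get_subject_name(cert),
                                  NID_commonName, cn, sizeof(cn)) > 0)
        ok = (strcasecmp(cn, expected) == 0);
    X509_free(cert);
    return ok;
}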


I'm not sure yet whether we'll need wildcards.  We're getting along  
fine without them so far.  But if we end up acting as a CA, it might  
be convenient to just issue each customer a wildcard certificate for  
their domain, and let them install it on as many servers as they want,  
rather than us having to get involved every time they add or rename a  
server.  In which case, I'd need to revisit the C implementation and  
add wildcard support.


I don't know enough about JSSE-specific implementation to be able to  
give you a precise answer.


Bruno on Stack Overflow gave an answer that works on Java 7:

http://stackoverflow.com/a/17979954/372643

but since I need a solution that's more portable than that, I think
I'll just proceed with my original plan of adapting some code from
Apache HttpComponents.  I just wanted to run it by some experienced
people and see if anyone said, "No, that's crazy!" or "Here, try my
library that already does

Re: [cryptography] Workshop on Real-World Cryptography

2013-03-03 Thread Patrick Pelletier

On 3/2/13 4:12 AM, ianG wrote:


This one had the talk written out, which makes it a top talk in just
that alone:

"things that bit us, things we fixed and things that are waiting in
the grass"  [slides]
Adam Langley (Google)

http://www.imperialviolet.org/2013/01/13/rwc03.html


This article surprised me, because it could almost be read as an
argument against AES (or even against block ciphers in general).
Which seems to contradict the common cryptographic wisdom of "just use
AES and be done with it."


Besides the argument about AES having timing side-channels in #9, the
"room 101" section at the end suggests we should do away with not only
CBC, but also AES-GCM, which is commonly touted as the solution to
CBC's woes.  (He admits it was his most controversial point, and I'm
curious how it was received when the talk was given.)  But I believe
that if we rule out both CBC and AES-GCM ciphersuites in TLS, that
leaves us with only RC4.  (And indeed, unsurprisingly given the
author, RC4 seems to be what Google's sites prefer.)


It seems like we've been told for ages that RC4 is old and busted, and 
that AES is the one-size-fits-all algorithm, and yet recent developments 
like BEAST and Lucky 13 seem to be pushing us back into the arms of RC4 
and away from AES.


Although cipher suite proliferation is a common criticism of TLS (and
indeed, it seems like neither Camellia nor SEED nor ARIA offer any
benefit over AES as far as I'm aware, though I'm not a cryptographer),
I wonder if there's benefit in adding a ciphersuite for a new stream
cipher (such as Salsa20) to TLS, to eventually replace RC4.  Such a
proposal could at least have clearly stated goals (faster than RC4 and
AES, more secure than RC4, avoiding the side-channel issues and CBC
issues of AES), versus the unclear and never-stated goals of
"yet another 128-bit block cipher."


--Patrick



[cryptography] OpenSSL wikibook

2013-02-01 Thread Patrick Pelletier
Since the quality of OpenSSL documentation, and the ease of
contributing to it, have been subjects of discussion on both the
openssl-users list and the cryptography list in the past few months,
and since the only commercial book on OpenSSL is over a decade old
now, I thought it would be worthwhile to start an OpenSSL wikibook:


https://en.wikibooks.org/wiki/OpenSSL

All I have in place right now is a skeleton of a table of contents,  
but I'm hoping that OpenSSL users will contribute to the book in their  
areas of expertise, or as they learn new things that they wish had  
been documented.


--Patrick



Re: [cryptography] Just how bad is OpenSSL ?

2012-10-27 Thread Patrick Pelletier

On 10/26/12 11:29 AM, John Case wrote:


So, given what is in the stanford report and then reading this rant
about openssl, I am wondering just how bad openssl is ?  I've never had
to implement it or code with it, so I really have no idea.


I think that "OpenSSL is written by monkeys" is a bit sensationalist
and unfair, but I do think it's fair to say that OpenSSL is
user-hostile, and is written by experts for their own use, not for
others to come along and use.


The biggest problem with OpenSSL is that it is really, really poorly
documented.  There is no "big picture," gentle-introduction
documentation that ships with OpenSSL.  Most of the OpenSSL functions
are documented in man pages, but some important functions are not
documented.  And the man pages often don't explain enough about the
functions, like "why would I want to use this function instead of that
one, when they both sound like they do the same thing?"  Also, the
naming of the man pages is user-hostile.  The worst one is that there
is a man page named "rand".  The other OpenSSL man pages tell me to
"see rand(3)".  But, of course, "man 3 rand" gets me the man page for
the C standard library function, rand().  I eventually figured out I
needed to do "man 3ssl rand" to get the man page for OpenSSL's random
number functions.
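
(For anyone else who gets lost the same way: the function that page
documents is RAND_bytes.  A minimal usage sketch:)

#include <stdio.h>
#include <stdlib.h>
#include <openssl/rand.h>

int main(void)
{
    unsigned char key[32];
    /* returns 1 on success, 0 if the PRNG is insufficiently seeded */
    if (RAND_bytes(key, sizeof(key)) != 1) {
        fprintf(stderr, "RAND_bytes failed\n");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}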


As far as I can tell, there is only one decent book that has been
written about OpenSSL: the O'Reilly book "Network Security with
OpenSSL".  It is ten years old now, which is not as bad as it sounds,
because OpenSSL hasn't changed that much since then.  Nevertheless, it
obviously doesn't cover anything that's been added to OpenSSL in the
past decade, and I had to modify some of the code examples because
current versions of OpenSSL are better about using "const" in function
prototypes, for instance.  I also got some good laughs, like when the
book talked about this brand-new algorithm called AES that OpenSSL had
just recently added support for.


Between reading the O'Reilly book, the man pages, the OpenSSL mailing 
list, random things people have written on the web, and of course, the 
source code (a lot!), I've been able to use OpenSSL, but it's been very 
slow and frustrating, and I curse the name of OpenSSL every day.


Besides the poor documentation, the other thing about OpenSSL is that
it is definitely not "batteries included."  Now, I'm not expecting it
to be some high-level https library like curl.  I intentionally wanted
a low-level TLS library, because my job was to encapsulate a custom
messaging protocol in TLS.  But still, OpenSSL is lacking some things
I would hope a low-level TLS library would have.  For instance, it
doesn't have everything you need to validate certificates, and you
need to write some of that code yourself.
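
To illustrate, here is a sketch of the pieces the application has to
assemble itself; hostname matching isn't shown at all, because OpenSSL
provides no API for it:

#include <openssl/ssl.h>

/* before the handshake: request a peer cert and supply trust roots */
int configure_verification(SSL_CTX *ctx, const char *ca_file)
{
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
    return SSL_CTX_load_verify_locations(ctx, ca_file, NULL) == 1;
}

/* after SSL_connect(): SSL_get_verify_result() returns X509_V_OK
 * when no certificate was presented at all, so the peer-cert check
 * is not optional */
int peer_chain_ok(SSL *ssl)
{
    X509 *cert = SSL_get_peer_certificate(ssl);
    if (cert == NULL)
        return 0;
    X509_free(cert);
    return SSL_get_verify_result(ssl) == X509_V_OK;
}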


One of the most glaring things OpenSSL is missing out-of-the-box is
thread safety.  If you want OpenSSL to be threadsafe, you have to
supply your own callbacks.  This is different from the approach taken
by many other libraries (for example, libevent, or GnuTLS) which
supply threading implementations for pthreads and Win32, and only
require callbacks if you are on a more exotic operating system.  Not
only is it annoying to have to supply this boilerplate code, but my
bigger concern is that it makes it very tricky to use OpenSSL from
more than one library in the same process.  Assuming that each library
wants to be threadsafe, and wants to hide its use of OpenSSL as an
implementation detail (rather than telling the user "you need to
supply OpenSSL thread callbacks"), that means each library needs to
set the OpenSSL threading callbacks to point at its own
implementation.  But since the OpenSSL threading callbacks are global,
only one of them is going to win.  Now, it's possible that everything
will be okay, because whoever wins will just end up providing thread
safety to everyone.  But it seems kind of sketchy and scary, and I can
imagine bad things happening, like if one library is initialized (and
thus sets the OpenSSL callbacks) after another library is already
using OpenSSL.


(One of the things I'm getting at in the previous paragraph is that 
OpenSSL seems to be written with the intention that it's only going to 
be used by the main program, which a single person or organization is 
going to write.  OpenSSL doesn't seem to have in mind the possibility 
that it will be used by multiple, higher-level libraries which don't 
know anything about each other, but which are all used by a single 
program in a single address space.)
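
For reference, the boilerplate I'm complaining about looks roughly
like this; a minimal pthreads sketch of the callbacks (the part that
libevent or GnuTLS would simply ship for you):

#include <pthread.h>
#include <openssl/crypto.h>

static pthread_mutex_t *locks;

static void locking_cb(int mode, int n, const char *file, int line)
{
    (void) file;
    (void) line;
    if (mode & CRYPTO_LOCK)
        pthread_mutex_lock(&locks[n]);
    else
        pthread_mutex_unlock(&locks[n]);
}

static unsigned long id_cb(void)
{
    return (unsigned long) pthread_self();
}

int init_openssl_locking(void)
{
    int i, n = CRYPTO_num_locks();
    locks = OPENSSL_malloc(n * sizeof(pthread_mutex_t));
    if (locks == NULL)
        return 0;
    for (i = 0; i < n; i++)
        pthread_mutex_init(&locks[i], NULL);
    CRYPTO_set_id_callback(id_cb);
    CRYPTO_set_locking_callback(locking_cb);
    return 1;
}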


So, I think that OpenSSL could be made better by two things: (1) better 
documentation, and (2) a small helper library that sits on top of it 
and provides thread-safety callbacks, and other missing helper 
functions like certificate validation.  But, the helper library should 
be small, simple, and low-level enough that it would be possible to 
convince the high-level libraries (such as curl, libevent, etc.) to link 
against