Re: [openssl.org #3607] nistz256 is broken.

2014-12-03 Thread Bodo Moeller

 2. When will RT2574 be integrated to protect our ECC keys in the
 inevitable presence of software defects like this?
 http://rt.openssl.org/Ticket/Display.html?id=2574&user=guest&pass=guest


Reportedly, Cryptography Research (i.e., Rambus) alleges to have broad
patents on techniques like this (and they might not be the only ones). I'm
not going to look for specific patents and can't assess the validity of
that rumor; the only thing I know for certain is that Cryptography Research
and Rambus are famous, above all else, for starting patent lawsuits (see,
e.g.,
http://www.sec.gov/Archives/edgar/data/1403161/000119312507270394/d10k.htm).

Unfortunately, this means that the OpenSSL project may not be willing to
incorporate coordinate-blinding techniques at this time.

Bodo


Re: [openssl.org #3575] [BUG] FALLBACK_SCSV early in the cipher list breaks handshake

2014-10-20 Thread Bodo Moeller via RT
Sorry, my fault. I'll fix this.

__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: [openssl.org #3575] [BUG] FALLBACK_SCSV early in the cipher list breaks handshake

2014-10-20 Thread Bodo Moeller via RT
The fix will be in the next version.

Note that OpenSSL servers aren't expected to see TLS_FALLBACK_SCSV in
normal operation (the code is sufficiently version tolerant, etc.), and if
you've enabled TLS 1.2, there isn't even a higher protocol version that the
client could be falling back from, so the impact of this bug is really low.
It's just bad for testing.

Bodo



Re: [openssl.org #3575] [BUG] FALLBACK_SCSV early in the cipher list breaks handshake

2014-10-20 Thread Bodo Moeller
The fix will be in the next version.

Note that OpenSSL servers aren't expected to see TLS_FALLBACK_SCSV in
normal operation (the code is sufficiently version tolerant, etc.), and if
you've enabled TLS 1.2, there isn't even a higher protocol version that the
client could be falling back from, so the impact of this bug is really low.
It's just bad for testing.

Bodo


Re: Patch to mitigate CVE-2014-3566 (POODLE)

2014-10-18 Thread Bodo Moeller
mancha manc...@zoho.com:

 Bodo Moeller wrote:


 I certainly think that the claim that "new SCSV does not help with
  [the SSL 3.0 protocol issue related to CBC padding] at all" is wrong,
  and that my statement that TLS_FALLBACK_SCSV can be used to counter
  CVE-2014-3566 is right.



 The point is more nuanced and boils down to there being a difference
 between CVE-2014-3566 (SSLv3's vulnerability to padding oracle attacks
 on CBC-mode ciphers) and POODLE (an attack that exploits CVE-2014-3566
 by leveraging protocol fallback implementations to force peers into
 SSLv3 communication).

 TLS_FALLBACK_SCSV does not fix or mitigate CVE-2014-3566. With or
 without 0x5600, SSLv3 CBC-mode cipher usage is broken.


Sure, I understand that. Disabling SSL 3.0 doesn't fix CVE-2014-3566
either, because SSL 3.0 remains just as broken even if you don't use it. In
both cases (TLS_FALLBACK_SCSV or disabling SSL 3.0), it's about avoiding
unwarranted use of SSL 3.0 to avoid the vulnerability.


Chrome, Firefox, etc. intentionally implement protocol fallback (which I
 presume is why there are no MITRE CVE designations for the behavior per
 se). However, one can make a strong case that protocol fallback
 implementations that are MITM-triggerable deserve CVE designations.


I agree. If there were such a CVE, that would be the main CVE to point to
here.

Bodo


Re: Patch to mitigate CVE-2014-3566 (POODLE)

2014-10-18 Thread Bodo Moeller
Jeffrey Walton noloa...@gmail.com:


 Is there a way to compile without the patch? I think I would rather
 'config no=ssl3' and omit the additional complexity. Its additional
 protocol complexity and heartbleed is still fresh in my mind.


There's no way to compile without the patch, other than reverting it. It's
a tiny amount of extra logic.

Disabling SSL 3.0 is a good idea, but note that TLS_FALLBACK_SCSV also
addresses similar downgrade attacks to TLS 1.1 or TLS 1.0 (when you should
rather be using TLS 1.2).


Also, are there any test cases that accompany the patch? I'm trying to
 figure out when, exactly, SSL_MODE_SEND_FALLBACK_SCSV needs to be set
 (using the sources as a guide).


If you don't use fallback retries (in which you *intentionally* avoid the
latest protocol versions), you don't need to set it at all.

Presumably I should update the documentation to be more explicit about
this. Where did you look for documentation? Do you think that changing the
SSL_set_mode man page (SSL_CTX_set_mode.pod) would be sufficient, or do you
think that adding guidance to ssl.h is equally (or more) important?

Bodo


Re: Patch to mitigate CVE-2014-3566 (POODLE)

2014-10-16 Thread Bodo Moeller
This is not quite the same discussion as in the TLS Working Group, but I
certainly think that the claim that "new SCSV does not help with [the SSL
3.0 protocol issue related to CBC padding] at all" is wrong, and that my
statement that TLS_FALLBACK_SCSV can be used to counter CVE-2014-3566 is
right.

Yes, regardless of what you do, SSL 3.0 still has that vulnerability. The
vulnerability is best avoided by not using SSL 3.0. One way to avoid SSL
3.0 is to entirely disable it. Another way to avoid SSL 3.0 at least in
certain scenarios, in case you are not ready to entirely disable it, is to
make use of TLS_FALLBACK_SCSV.

Deploying TLS_FALLBACK_SCSV has further benefits that indeed have nothing
to do with CVE-2014-3566, and deploying TLS_FALLBACK_SCSV will certainly
not fully protect against CVE-2014-3566 if you continue to allow SSL 3.0,
given that TLS_FALLBACK_SCSV requires client-side *and* server-side support
to achieve anything -- so TLS_FALLBACK_SCSV is not *the* fix for
CVE-2014-3566.

However, in the current implementation landscape, if you *do* have both
client-side and server-side support for TLS_FALLBACK_SCSV, this provides
perfectly fine protection against CVE-2014-3566 for these connections;
so CVE-2014-3566
is a very good reason to deploy TLS_FALLBACK_SCSV support now (or to have
it deployed a couple of months ago).

In other words, TLS_FALLBACK_SCSV can be used to counter CVE-2014-3566.

Bodo


Re: Patch to mitigate CVE-2014-3566 (POODLE)

2014-10-15 Thread Bodo Moeller
mancha manc...@zoho.com:


 Any reason for the s_client -fallback_scsv option check to be within an
 #ifndef OPENSSL_NO_DTLS1 block?


Thanks for catching this. No, there's no good reason for that; I should
move it elsewhere.

Bodo


Patch to mitigate CVE-2014-3566 (POODLE)

2014-10-14 Thread Bodo Moeller
Here's a patch for the OpenSSL 1.0.1 branch that adds support for
TLS_FALLBACK_SCSV, which can be used to counter the POODLE attack
(CVE-2014-3566; https://www.openssl.org/~bodo/ssl-poodle.pdf).

Note well that this is not about a bug in OpenSSL -- it's a protocol issue.
If SSL 3.0 is disabled in either the client or in the server, that is
completely sufficient to avoid the POODLE attack. (Also, there's only a
vulnerability if the client actively falls back to SSL 3.0 in case TLS
connections don't work -- but many browsers still do that to ensure
interoperability with broken legacy servers.) If you can't yet disable SSL
3.0 entirely, TLS_FALLBACK_SCSV can help avoid the attack, if both the
client and the server support it.

Server-side TLS_FALLBACK_SCSV support is automatically provided if you use
the patch. Clients that do fallback connections downgrading the protocol
version should use SSL_set_mode(ssl, SSL_MODE_SEND_FALLBACK_SCSV) for such
downgraded connections.
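For clients that do implement downgrade retries, the shape of the logic is roughly the following. This is an illustrative, self-contained C sketch, not OpenSSL code: the version table, the `try_connect` callback type, and `connect_with_fallback` are invented for the example; the one real API involved is `SSL_set_mode(ssl, SSL_MODE_SEND_FALLBACK_SCSV)`, which a real client would call on every downgraded attempt.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical protocol-version ladder, highest first (values mirror the
 * TLS wire encodings: 0x0303 = TLS 1.2 ... 0x0300 = SSL 3.0). */
static const int versions[] = { 0x0303, 0x0302, 0x0301, 0x0300 };
#define NVERSIONS (sizeof(versions) / sizeof(versions[0]))

/* Stand-in for a real connection attempt.  The rule from the patch: any
 * retry at a version below the highest one the client supports is a
 * downgraded connection and must signal the fallback, e.g. via
 * SSL_set_mode(ssl, SSL_MODE_SEND_FALLBACK_SCSV) in OpenSSL. */
typedef int (*try_connect_fn)(int version, int send_fallback_scsv);

/* Returns the version that finally connected, or -1. */
static int connect_with_fallback(try_connect_fn try_connect)
{
    size_t i;
    for (i = 0; i < NVERSIONS; i++) {
        int downgraded = (i > 0);   /* not the first/highest attempt */
        if (try_connect(versions[i], downgraded))
            return versions[i];
    }
    return -1;
}

/* Demo peer that only speaks TLS 1.0 (0x0301); records the flag. */
static int last_scsv = -1;
static int accept_tls10(int version, int send_fallback_scsv)
{
    last_scsv = send_fallback_scsv;
    return version == 0x0301;
}
```

A client without such a retry loop never sets the mode at all, which is exactly the point made above.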

The OpenSSL team will follow up with official releases that will include
TLS_FALLBACK_SCSV support. Meanwhile, if you can't simply disable SSL 3.0,
you may want to use this patch.

Bodo
diff --git a/apps/s_client.c b/apps/s_client.c
index 4625467..c2e160c 100644
--- a/apps/s_client.c
+++ b/apps/s_client.c
@@ -337,6 +337,7 @@ static void sc_usage(void)
BIO_printf(bio_err," -tls1_1        - just use TLSv1.1\n");
BIO_printf(bio_err," -tls1          - just use TLSv1\n");
BIO_printf(bio_err," -dtls1         - just use DTLSv1\n");
+   BIO_printf(bio_err," -fallback_scsv - send TLS_FALLBACK_SCSV\n");
BIO_printf(bio_err," -mtu           - set the link layer MTU\n");
BIO_printf(bio_err," -no_tls1_2/-no_tls1_1/-no_tls1/-no_ssl3/-no_ssl2 - turn off that protocol\n");
BIO_printf(bio_err," -bugs          - Switch on all SSL implementation bug workarounds\n");
@@ -617,6 +618,7 @@ int MAIN(int argc, char **argv)
char *sess_out = NULL;
struct sockaddr peer;
int peerlen = sizeof(peer);
+   int fallback_scsv = 0;
int enable_timeouts = 0 ;
long socket_mtu = 0;
 #ifndef OPENSSL_NO_JPAKE
@@ -823,6 +825,10 @@ int MAIN(int argc, char **argv)
meth=DTLSv1_client_method();
socket_type=SOCK_DGRAM;
}
+   else if (strcmp(*argv,"-fallback_scsv") == 0)
+   {
+   fallback_scsv = 1;
+   }
else if (strcmp(*argv,"-timeout") == 0)
enable_timeouts=1;
else if (strcmp(*argv,"-mtu") == 0)
@@ -1235,6 +1241,10 @@ bad:
SSL_set_session(con, sess);
SSL_SESSION_free(sess);
}
+
+   if (fallback_scsv)
+   SSL_set_mode(con, SSL_MODE_SEND_FALLBACK_SCSV);
+
 #ifndef OPENSSL_NO_TLSEXT
if (servername != NULL)
{
diff --git a/crypto/err/openssl.ec b/crypto/err/openssl.ec
index e0554b4..34754e5 100644
--- a/crypto/err/openssl.ec
+++ b/crypto/err/openssl.ec
@@ -71,6 +71,7 @@ R SSL_R_TLSV1_ALERT_EXPORT_RESTRICTION1060
 R SSL_R_TLSV1_ALERT_PROTOCOL_VERSION   1070
 R SSL_R_TLSV1_ALERT_INSUFFICIENT_SECURITY  1071
 R SSL_R_TLSV1_ALERT_INTERNAL_ERROR 1080
+R SSL_R_SSLV3_ALERT_INAPPROPRIATE_FALLBACK 1086
 R SSL_R_TLSV1_ALERT_USER_CANCELLED 1090
 R SSL_R_TLSV1_ALERT_NO_RENEGOTIATION   1100
 R SSL_R_TLSV1_UNSUPPORTED_EXTENSION1110
diff --git a/ssl/d1_lib.c b/ssl/d1_lib.c
index 6bde16f..82ca653 100644
--- a/ssl/d1_lib.c
+++ b/ssl/d1_lib.c
@@ -266,6 +266,16 @@ long dtls1_ctrl(SSL *s, int cmd, long larg, void *parg)
case DTLS_CTRL_LISTEN:
ret = dtls1_listen(s, parg);
break;
+   case SSL_CTRL_CHECK_PROTO_VERSION:
+   /* For library-internal use; checks that the current protocol
+    * is the highest enabled version (according to s->ctx->method,
+    * as version negotiation may have changed s->method). */
+#if DTLS_MAX_VERSION != DTLS1_VERSION
+#  error Code needs update for DTLS_method() support beyond DTLS1_VERSION.
+#endif
+   /* Just one protocol version is supported so far;
+    * fail closed if the version is not as expected. */
+   return s->version == DTLS_MAX_VERSION;
 
default:
ret = ssl3_ctrl(s, cmd, larg, parg);
diff --git a/ssl/dtls1.h b/ssl/dtls1.h
index e65d501..192c5de 100644
--- a/ssl/dtls1.h
+++ b/ssl/dtls1.h
@@ -84,6 +84,8 @@ extern "C" {
 #endif
 
 #define DTLS1_VERSION  0xFEFF
+#define DTLS_MAX_VERSION   DTLS1_VERSION
+
 #define DTLS1_BAD_VER  0x0100
 
 #if 0
@@ -284,4 +286,3 @@ typedef struct dtls1_record_data_st
 }
 #endif
 #endif
-
diff --git a/ssl/s23_clnt.c b/ssl/s23_clnt.c
index 2b93c63..d4e43c3 100644
--- a/ssl/s23_clnt.c
+++ b/ssl/s23_clnt.c
@@ -736,6 +736,9 @@ static 

Re: EC_METHOD struct

2014-07-16 Thread Bodo Moeller
balaji marisetti balajimarise...@gmail.com:


 In the EC_METHOD structure, the pointers to methods for converting
 between affine and projective coordinates are named:

 `point_set_Jprojective_coordinates_GFp` and
 `point_get_Jprojective_coordinates_GFp`

 Does that mean any implementation of EC_METHOD (for prime curves) can
 only use Jacobian coordinates? Is it not possible to use some other
 coordinate system (may be homogeneous)?


This method name just means that the EC_METHOD implementation should
support setting points based on Jacobian coordinates. Internally, it might
convert them to something else. (Or, the EC_METHOD could entirely omit
support for this method. Typical usage of the ECC library won't depend on
this.)
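To make concrete what "Jacobian coordinates" means here: a triple (X, Y, Z) represents the affine point (X/Z^2, Y/Z^3). The following toy C demo shows the get-side conversion over the small prime p = 23; real EC_METHOD code of course works on BIGNUMs, and the function names below are ours, not the library's.

```c
#include <assert.h>

/* Modular exponentiation by square-and-multiply (small numbers only). */
static unsigned long mod_pow(unsigned long b, unsigned long e, unsigned long m)
{
    unsigned long r = 1;
    b %= m;
    while (e > 0) {
        if (e & 1)
            r = (r * b) % m;
        b = (b * b) % m;
        e >>= 1;
    }
    return r;
}

#define P 23UL

/* Convert Jacobian (X, Y, Z) back to affine (x, y); requires Z != 0. */
static void jprojective_to_affine(unsigned long X, unsigned long Y,
                                  unsigned long Z,
                                  unsigned long *x, unsigned long *y)
{
    unsigned long zinv = mod_pow(Z, P - 2, P);          /* Fermat inversion */
    unsigned long zinv2 = (zinv * zinv) % P;
    *x = (X * zinv2) % P;                               /* x = X / Z^2 */
    *y = (Y * ((zinv2 * zinv) % P)) % P;                /* y = Y / Z^3 */
}
```

For example, the affine point (3, 5) scaled with Z = 2 becomes (X, Y, Z) = (12, 17, 2) mod 23, and the conversion recovers (3, 5).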

Bodo


Re: EC_METHOD struct

2014-07-16 Thread Bodo Moeller
Thulasi Goriparthi thulasi.goripar...@gmail.com:

Wouldn't it have been simpler to name these function pointers just
 "projective" instead of "Jprojective"?

 This way, EC methods that use different projective system than jacobian
 could have their own implementation to set/get projective co-ordinates and
 use these function pointers without confusion.


Well, I don't necessarily agree with the "without confusion" part ...

The behavior that you get with these methods would then depend on the
internals of that implementation, which isn't necessarily what users might
expect from the library.  If someone uses (the hypothetical)
EC_POINT_set_projective_coordinates_GFp with Jacobian coordinates but these
are interpreted as something else, that could be a problem.


Another reason for this is, new EC methods that get implemented would take
 existing simple EC method as reference and steal as much code (as many
 functions) as possible from it.  In simple EC method,
 set_affine_coordinates would internally call set_projective_coordinates
 with Z as 1. One cannot stick to this code, and leave set_projective
 function unset at the same time. Here, change is necessary to call the
 internal function instead of the function pointer that sets x, y, 1 to X,
 Y, Z.


I know -- if you don't implement point_set_Jprojective_coordinates_GFp,
you'll have to provide your own point_set_affine_coordinates_GFp (etc.).
 That should be straightforward in any case, though.  You'll necessarily
have to implement point_get_affine_coordinates_GFp, which will be more
involved.  I think at least for the get functions, it should be pretty
clear why I prefer to have an explicit "Jprojective" in the function
names, rather than merely "projective":
if ec_GFp_simple_point_get_affine_coordinates were to use a generically
named EC_POINT_get_projective_coordinates_GFp instead of the actual
EC_POINT_get_Jprojective_coordinates_GFp, you'd have to pay more attention
to notice that it's not actually appropriate for the coordinates that your
implementation is using.

Bodo


[openssl.org #3432]

2014-07-04 Thread Bodo Moeller via RT



Re: splitting clientHello into fragments?

2014-06-17 Thread Bodo Moeller

 Does openssl handle a clientHello (or any handshake message) that splits
 across records?


Mostly yes (I know because I made the changes to allow this a long time
ago).  A notable exception is that the cross-version code in s23_srvr.c
requires that the first fragment contain at least 6 bytes of the formatted
ClientHello message (msg_type, length, and client_version), so that it can
find the client_version at a fixed location before the actual SSL 3.0/TLS
handshake code gets called.
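Those fixed-position fields are just the handshake header plus the version: msg_type (1 byte), a 24-bit length, and the 2-byte client_version, per the TLS handshake layout (RFC 5246, section 7.4). A minimal sketch of the parse the cross-version code relies on -- the struct and function names are ours, not OpenSSL's:

```c
#include <assert.h>

/* The first 6 bytes of a formatted ClientHello handshake message. */
struct hs_header {
    unsigned char msg_type;       /* 1 = client_hello */
    unsigned long length;         /* 24-bit body length */
    unsigned int client_version;  /* e.g. 0x0303 = TLS 1.2 */
};

/* Returns 0 if fewer than 6 bytes are available (the limitation described
 * above: the first fragment must carry at least this much), 1 on success. */
static int parse_client_hello_header(const unsigned char *buf, int n,
                                     struct hs_header *out)
{
    if (n < 6)
        return 0;
    out->msg_type = buf[0];
    out->length = ((unsigned long)buf[1] << 16) |
                  ((unsigned long)buf[2] << 8) | buf[3];
    out->client_version = (buf[4] << 8) | buf[5];
    return 1;
}
```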

Bodo


Re: Locking inefficiency

2014-06-10 Thread Bodo Moeller
Geoffrey Thorpe ge...@geoffthorpe.com:

So I'm going to propose that we initially put this patch into the
 development head only, and defer a decision on whether to cherry-pick it
 into stable branches until that testing is in place.


Sure, sounds right.  (Will you go ahead and handle the patch?)


I certainly don't want us to replace the mutex-using code with something
 that uses mutex *or* rwlock depending on whether OPENSSL_PTHREADS_SUCK is
 defined or not.

 BTW, I mention this because NPTL headers apparently cage the rwlock
 definitions in some #ifdef-ery that I think we want to avoid in the
 mutex-rwlock changes in openssl. Rather than grappling with the
 will-some-platform-fail-in-some-subtle-way issues, I prefer that we rely on
 the short-term arrival of platform coverage/testing to detect the issue if
 there is one to cater for.


OK.


For future work, a lock-free approach (using thread-local storage?)
 certainly might make sense, but switching to read-write locks in the
 default callbacks should be a tiny change with significant benefits to
 multithreaded applications.


 Yeah, but a couple of things come to mind.

 (1) rwlocks (under optimised conditions anyway) seem to be essentially
 lock free in the fast-path case anyway, ie. for the
 read-lock/no-contention case, due to futex magic. That means no
 context-switch (to the kernel or otherwise) in that by-far-the-most-common
 case. So I think a change to rwlocks is likely to eliminate the observable
 syscall and contention overheads anyway.


Yes, that's what I'm thinking too.  However, for those cases that actually
do use the write lock (and remember that it's not *only* about CRYPTO_ERR,
so merely avoiding errors doesn't resolve the issue :-) ), the current
array of global locks is certainly generally not ideal.

Bodo


Re: Locking inefficiency

2014-06-10 Thread Bodo Moeller
Thor, can you quantify what you mean by "much more expensive"?  (And
qualify it - what platform, what operations?)

The way we use the locks, heavily multi-threaded applications can see a lot
of contention with mutexes that wouldn't exist with read/write locks,
because often all threads would only require the read locks.

(However, a lot of the remaining contention is unnecessary because we're
using a fixed array of global locks where, in theory, we could be using
per-object locks rather than these per-type locks.)


Re: Locking inefficiency

2014-06-09 Thread Bodo Moeller
Geoffrey Thorpe ge...@geoffthorpe.com:

First, you're right, pthreads_locking_callback() is collapsing everything
 to a mutex.


I was well aware of this and thought we did this for compatibility reasons
(because I couldn't think of any other reasonable explanation, I guess).
 If actual read-write locks are just as portable, I think it's a no-brainer
that we should switch to them.  (After all, our code is already prepared
for that, for applications that provide appropriate custom callbacks.  It's
just the default that falls behind.)

For future work, a lock-free approach (using thread-local storage?)
certainly might make sense, but switching to read-write locks in the
default callbacks should be a tiny change with significant benefits to
multithreaded applications.

Bodo


[openssl.org #3149] [patch] Fast and side channel protected implementation of the NIST P-256 Elliptic Curve, for x86-64 platforms

2014-04-11 Thread Bodo Moeller via RT
For the record, I have reviewed Adam's versions of the code before these were
posted here, and Adam has resolved the problems that I pointed out. As of the
latest patch, I think the code is suitable for inclusion in OpenSSL. The final
missing part is support that makes it easy to build with or without this NIST
P-256 implementation, depending on whether the target platform supports it,
similar to the enable-ec_nistp_64_gcc_128 config option for the 64-bit
optimized implementations using type __uint128_t. (The current patch
unconditionally links in the new files, but we may not even be able to process
the new assembler files.)

Also, it would be nice to have Shay review our changes to his contribution (the
March 11 patch.bz2 as further changed by the April 10 patch) to make sure
that while fixing the problems we found, we didn't do unwanted changes.


[openssl.org #3113] OpenSSL’s DH implementation uses an unnecessarily long exponent, leading to significant performance loss

2014-01-27 Thread Bodo Moeller via RT
 The -dsaparam option to dhparam converts DSA parameters to DH and sets the
length parameter.

Note that this isn't actually safe to do in general; it's OK for ephemeral DH
with no re-use of private keys.

A shortcoming of our internal format (following from a similar shortcoming of
the TLS DH format) is that DH groups and DH public keys don't come with the
subgroup order or cofactor: so in general we can't validate that the DH share
received from the peer is in the expected subgroup (or even that the generator
is for a prime-order subgroup in the first place), because we don't know what
that subgroup is.
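The check that the missing subgroup order q would enable is simple: verify 1 < y < p and y^q mod p == 1 for the received share y. A toy illustration with small numbers follows (a real implementation would use BIGNUMs, e.g. BN_mod_exp); here p = 23 and q = 11, since 2 generates the order-11 subgroup mod 23.

```c
#include <assert.h>

/* Modular exponentiation by square-and-multiply (small numbers only). */
static unsigned long mod_pow(unsigned long b, unsigned long e, unsigned long m)
{
    unsigned long r = 1;
    b %= m;
    while (e > 0) {
        if (e & 1)
            r = (r * b) % m;
        b = (b * b) % m;
        e >>= 1;
    }
    return r;
}

/* The small-subgroup check that requires knowing q:
 * accept y only if 1 < y < p and y^q == 1 (mod p). */
static int dh_share_in_subgroup(unsigned long y,
                                unsigned long p, unsigned long q)
{
    if (y <= 1 || y >= p)
        return 0;
    return mod_pow(y, q, p) == 1;
}
```

For example 4 = 2^2 passes (it lies in the subgroup generated by 2), while 5 and p-1 = 22 are rejected.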

For servers, we could change DH parameter generation to either create safe
primes with the exponent length set to the recommended length plus one, or to
create Lim-Lee primes and avoid that small-subgroup safety margin. (Or enhance
the internal data format and add subgroup checks, which has some advantages and
some disadvantages; with Lim-Lee primes, this would double the necessary
computational cost.)

For clients, when we receive a DH key from the server in the ServerKeyExchange
message, there's not much we can do about unnecessarily long exponents in
general because that message lacks the information needed to decide about
exponent length. We can hope that the generator is for a prime-order subgroup
or otherwise that the DH prime is a safe prime, but I don't think that this
is generally guaranteed by the TLS specification: so someone might get a
practical attack from it. What we could do is check for well-known DH groups
and set the exponent length accordingly. Has anyone ever done a survey for TLS
servers supporting DH, to check how widely used well-known groups are vs.
custom DH groups?


It used to be the case that DH was little supported in SSL/TLS. Now it's more
widely supported, but there's also ECDH support. My immediate reaction about
this DH performance issue would be to recommend using ECDH instead. Given the
complications, is improving the classical DH case worth the effort?



Re: [openssl.org #3149] [patch] Fast and side channel protected implementation of the NIST P-256 Elliptic Curve, for x86-64 platforms

2013-11-08 Thread Bodo Moeller via RT
 Here is an updated version of the patch.

 Addressing a) pointer to the function (to select ADCX/ADOX) and b)
 multiple points addition

 There is (only) ~1% performance deterioration due to the pointer now being
 passed, instead of (originally) being static. You can choose which
 style is preferable.


Thanks!

Alternatives would be (a) using a new lock for safe static initialization,
or (b) more code duplication to avoid the need for an explicit pointer
(there could be two separate implementations for the higher-level
routines).  However, given the 1% performance penalty, that's a minor issue
at this point.

Do you have any comment from Intel on the concerns regarding the scattering
technique (http://cryptojedi.org/peter/data/chesrump-20130822.pdf)?

Bodo



Re: [openssl.org #3149] [patch] Fast and side channel protected implementation of the NIST P-256 Elliptic Curve, for x86-64 platforms

2013-11-08 Thread Bodo Moeller via RT
 While if (functiona==NULL || functionb==NULL) { assign functiona,
 functionb } can be unsafe, I'd argue that if (functiona==NULL) { assign
 functiona } followed by if (functionb) { assign functionb } is safe.


We're implicitly assuming here that (thanks to alignment, etc.) each
pointer can be accessed atomically, which so far seems reasonable given
the particular platform this code is for. However, the C11 memory model
also allows the compiler to assume there's no write race, and it thus
could, for example, use the same memory location to hold other temporary
values, which could then be misinterpreted as the function pointer by
concurrent threads. See
http://static.usenix.org/event/hotpar11/tech/final_files/Boehm.pdf for
ideas how this might break -- maybe not right now, but possibly with future
compilers, possibly after this code has evolved a bit.
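One conventional way to sidestep this class of problem is to publish the function pointer through a one-time initializer rather than racy lazy assignment, so the write has a proper happens-before relationship with every reader. A self-contained sketch using pthread_once; all names here are illustrative, not from the actual patch, and cpu_has_adx is a stand-in for the real CPUID check.

```c
#include <assert.h>
#include <pthread.h>

static pthread_once_t once = PTHREAD_ONCE_INIT;

/* Two interchangeable implementations; in the real code these would be
 * the generic path and the ADCX/ADOX path. */
static int add_generic(int a, int b) { return a + b; }
static int add_fast(int a, int b)    { return a + b; }

static int (*add_impl)(int, int);

static int cpu_has_adx(void)         /* stand-in for the CPUID bit test */
{
    return 0;
}

static void select_impl(void)
{
    add_impl = cpu_has_adx() ? add_fast : add_generic;
}

static int safe_add(int a, int b)
{
    /* pthread_once guarantees select_impl runs exactly once, and that the
     * write to add_impl happens-before every return from pthread_once --
     * so no thread can observe a torn or stale pointer. */
    pthread_once(&once, select_impl);
    return add_impl(a, b);
}
```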

(I'm not promising that it will actually break, but thread-safety analysis
tools are likely to complain loudly.  And at some point the code might
actually fail spectacularly.)



[openssl.org #3149] [patch] Fast and side channel protected implementation of the NIST P-256 Elliptic Curve, for x86-64 platforms

2013-10-29 Thread Bodo Moeller via RT
 This initialization is used for selecting a code path that would use
ADCX/ADOX
 instructions when the processor supports them. The outcome depends only on
 the appropriate CPUID bits. Therefore, there is no “thread-safe” issue
(because
 any thread would select the same path).

I understand that that's the idea, and would have considered the code to be
safe a while ago (and might have written the initialization just like that),
but actually compiler transformations that are legal with the C memory model
could break this. See
http://static.usenix.org/event/hotpar11/tech/final_files/Boehm.pdf for
inspiration.


 Your ec_p256_points_mul implementation is much worse than necessary when
 the input comprises many points

 Indeed right. However, this patch is intended to optimize ECDSA sign/verify
(and ECDH). This usage does not require adding more than a single point.

Sure, but there's no compelling reason to make the other (rarer) use cases
slower. Also, adapting the addition/subtraction chain used in the existing
crypto/ec/ecp_nistp256.c (modified Booth encoding instead of unsigned windows)
could make the new implementation even faster.



[openssl.org #3149] [patch] Fast and side channel protected implementation of the NIST P-256 Elliptic Curve, for x86-64 platforms

2013-10-24 Thread Bodo Moeller via RT
Thanks for the submission!

It seems that the BN_MONT_CTX-related code (used in crypto/ecdsa for
constant-time signing) is entirely independent of the remainder of the patch,
and should be considered separately.


Regarding your reference 'S.Gueron and V.Krasnov, Fast Prime Field Elliptic
Curve Cryptography with 256 Bit Primes' for your NIST P-256 code, is that
document available? (Web search only pointed me back to your patch.)

I've noticed that for secret-independent constant-time memory access, your code
relies on the scattering approach. However
http://cryptojedi.org/peter/data/chesrump-20130822.pdf points out that
apparently this doesn't actually work as intended. (Dan Bernstein's earlier
references: Sections 14, 15 in http://cr.yp.to/papers.html#cachetiming;
http://cr.yp.to/mac/athlon.html.)

Note that in your code, OPENSSL_ia32cap_P-dependent initialization of global
variables is not done in a thread-safe way. How about entirely avoiding this
global state, and passing pointers down to the implementations?

Your ec_p256_points_mul implementation is much worse than necessary when the
input comprises many points (more precisely, more than one point other than the
group generator), because you call ec_p256_windowed_mul multiple times
separately and add the results. I'd suggest instead implementing this modeled
on ec_GFp_nistp256_points_mul to benefit from interleaved left-to-right
point multiplication. (This avoids the additional point-double operations from
the separate point multiplication algorithm executions going through each
additional scalar.) Your approach for precomputation also is different (using
fewer point operations based on a larger precomputed table than the one we
currently use in ec_GFp_nistp256_points_mul) -- that table size still seems
appropriate, so keeping that probably makes sense.
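The saving from interleaving can be seen in a toy model: represent the "curve group" as integers mod N under addition, so point-doubling is x -> 2x and point-addition is modular addition. Computing a*P + b*Q left-to-right with one shared doubling chain (as ec_GFp_nistp256_points_mul does) instead of running two separate ladders and adding the results halves the number of doublings. This sketch is illustrative only; it uses unsigned windows of width 1 rather than the modified Booth encoding mentioned above.

```c
#include <assert.h>

#define N 1000003UL  /* toy group order */

/* Interleaved left-to-right double-and-add for two scalar/point pairs:
 * one doubling per bit position serves BOTH scalars. */
static unsigned long interleaved_mul2(unsigned long a, unsigned long P,
                                      unsigned long b, unsigned long Q)
{
    unsigned long acc = 0;
    int i;
    for (i = 31; i >= 0; i--) {
        acc = (acc * 2) % N;        /* shared "point doubling" */
        if ((a >> i) & 1)
            acc = (acc + P) % N;    /* "add P" where bit of a is set */
        if ((b >> i) & 1)
            acc = (acc + Q) % N;    /* "add Q" where bit of b is set */
    }
    return acc;
}
```

Each additional scalar in the separate-ladder approach costs a full extra run of doublings; here it costs only its conditional additions.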



Re: not fork-safe if pids wrap

2013-08-22 Thread Bodo Moeller
 Most other libraries I've seen handle this by saving the pid in a static
 variable, and then comparing the current pid to it.  This has the advantage
 of not needing pthreads, and also of only adding the entropy to the child
 if it is actually needed (i. e. it doesn't exec after fork).


We may have to do that, but we'll still want to always use the current PID
so that we don't end up relying on any kind of random device actually being
present (not all environments have that, so while we can try to reseed, we
can't be sure that this will work).

By the way, in case you wonder why OpenSSL doesn't try to detect forking at
all, that's because the PID may differ between threads running on the
same memory. If I remember correctly, that was the case for LinuxThreads
in the ancient times when this code was written:

http://cvs.openssl.org/chngview?cn=1519
http://cvs.openssl.org/chngview?cn=1520

Bodo


Re: not fork-safe if pids wrap

2013-08-22 Thread Bodo Moeller
On Thu, Aug 22, 2013 at 4:50 AM, Bodo Moeller bmoel...@acm.org wrote:


 Most other libraries I've seen handle this by saving the pid in a static
 variable, and then comparing the current pid to it.  This has the advantage
 of not needing pthreads, and also of only adding the entropy to the child
 if it is actually needed (i. e. it doesn't exec after fork).


 We may have to do that, but we'll still want to always use the current PID
 so that we don't end up relying on any kind of random device actually being
 present (not all environments have that, so while we can try to reseed, we
 can't be sure that this will work).


(So we probably should use the current time in addition to the PID to get a
general solution to the PID wrap-around problem even on systems where
actual independent reseeding isn't possible.)
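Putting the two messages together, the suggested behavior is: track the last-seen PID, and on every reseed mix both the current PID and the current time into the pool, so even a wrapped-around PID in a grandchild contributes distinct input. A self-contained sketch, with mix_into_pool as a toy stand-in for the real RNG update (RAND_add in OpenSSL):

```c
#include <assert.h>
#include <sys/types.h>
#include <time.h>

static unsigned long pool;
static pid_t seen_pid = -1;

/* Toy absorb step; the real code would feed v into the PRNG state. */
static void mix_into_pool(unsigned long v)
{
    pool = pool * 2654435761UL + v;
}

/* Compare the current PID against the last one seen; mix PID and time
 * into the pool unconditionally, so reseeding does not depend on a
 * random device being present.  Returns 1 if a fork was detected. */
static int rand_check_fork(pid_t current_pid, time_t now)
{
    int forked = (current_pid != seen_pid);
    seen_pid = current_pid;
    mix_into_pool((unsigned long)current_pid);
    mix_into_pool((unsigned long)now);
    return forked;
}
```

A real hook would call this (with getpid() and time(NULL)) at the top of every random-bytes request.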


Re: not fork-safe if pids wrap

2013-08-22 Thread Bodo Moeller
  (So we probably should use the current time in addition to the PID to
 get a
  general solution to the PID wrap-around problem even on systems where
  actual independent reseeding isn't possible.)

 The FIPS PRNG uses a combination of PID, a counter and a form of system
 timer
 for the DT vector which is used on every invocation (a requirement of the
 standard).


Oh, good. (I guess it was before the NIST SP 800-90A deterministic random
bit generator that we couldn't use PID at all in NIST mode?)  Using the
same inputs with the different PRNGs certainly would make sense.


Re: Apple are, apparently, dicks...

2013-06-14 Thread Bodo Moeller
On Thu, Jun 13, 2013 at 6:39 PM, Ben Laurie b...@links.org wrote:

It is therefore suggested that I pull this patch:


 https://github.com/agl/openssl/commit/0d26cc5b32c23682244685975c1e9392244c0a4d


The behavior change applies only if new option
SSL_OP_SAFARI_ECDHE_ECDSA_BUG is used (part of SSL_OP_ALL), as is standard
for interoperability bug workarounds, so while it is very unfortunate that
we'd need to do this, I'm in favor of accepting this patch.


Re: Apple are, apparently, dicks...

2013-06-14 Thread Bodo Moeller
  Note that the patch changes the value of SSL_OP_ALL, so if OpenSSL shared
 libraries are updated to include the patch, existing applications won't set
 it:
 they'd all need to be recompiled.


 That's a valid point.


This is true, unfortunately.





  Possibly alternative is to reuse one of the existing *ancient* flags. Does
 anyone really care about compatibility with a bug in SSLeay 0.80 for
 example?


 Wouldn't it be better to reverse the meaning of the flag and not set it in
 SSL_OP_ALL?


Hm, without any SSL_OP_... settings, the expectation generally is that we
kind of sort of follow the specs and don't do any weird stuff like this for
interoperability's sake. If we switch semantics around for certain options,
the resulting inconsistencies would make all that even more confusing.

In theory we could create an explicit SSL_OP_ALL-equivalent bit
(SSL_OP_ALL_ALL?) that enables all current SSL_OP_ALL features and doesn't
allow further masking, but that seems hard to deploy given that some
current applications may expressly want SSL_OP_ALL *without* certain flags.
Of course the legacy flag an application disables could be the one that
we're about to recycle ... SSL_OP_ALL ideally would always have included
some unused bits for future use, but that again is hard to pull off
retroactively -- it's probably a good idea for a later release (with an
incompatible .so version so that we can safely change SSL_OP_ALL).

If we can find an appropriate ancient flag that no one should care about
(which sounds plausible), recycling that one sounds like a good idea. If we
can't, using reverted semantics might be the best option we have.

Bodo
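The masking pattern under discussion can be illustrated without OpenSSL at all. The sketch below uses made-up flag names and values (not OpenSSL's real SSL_OP_* constants): an application that wants "all workarounds except one" clears the bit, which is exactly why silently recycling a bit changes that application's behavior.

```c
/* Hypothetical option bits in the style of SSL_OP_...; the names and
 * values are invented for this sketch, not OpenSSL's actual constants. */
#define OP_WORKAROUND_A   0x00000001UL
#define OP_WORKAROUND_B   0x00000002UL  /* imagine this ancient bit gets recycled */
#define OP_ALL            (OP_WORKAROUND_A | OP_WORKAROUND_B)

/* Typical "all workarounds minus one flag" pattern used by applications. */
static unsigned long options_without(unsigned long all, unsigned long unwanted)
{
    return all & ~unwanted;
}
```

An application compiled against the old headers that masks out the recycled bit would, after the bit is reused, be unknowingly toggling the new feature instead.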


Re: OCB Authenticated Encryption

2013-02-05 Thread Bodo Moeller
On Tue, Feb 5, 2013 at 9:20 AM, Ted Krovetz t...@krovetz.net wrote:

 At last month's Workshop on Real-World Cryptography at Stanford
 University, Phil Rogaway released a new license for OCB, granting free use
 for all open-source implementations.

   http://www.cs.ucdavis.edu/~rogaway/ocb/license1.pdf


There's a problem with that license, though:

"Open Source Software Implementation does not include any Software
Implementation in which the software implicating the Licensed Patents is
combined, so as to form a larger program, with software that is not Open
Source Software."

This restriction seems OK for GPL'ed libraries (because they have a similar
restriction anyway), but not for libraries that are meant to be available
for use in programs that are not necessarily open source. Thus, as much as
I like OCB, I'd rather keep it out of OpenSSL for now.

Bodo


Re: OCB Authenticated Encryption

2013-02-05 Thread Bodo Moeller
On Tue, Feb 5, 2013 at 1:41 PM, Ted Krovetz t...@krovetz.net wrote:

 There are actually two licenses. The second allows all software (even
 closed), but only for non-military use.

   http://www.cs.ucdavis.edu/~rogaway/ocb/license.htm


Thanks.  Is some explanation of the non-military use condition available?
This seems to imply you still can't use the software for any public service
(that could be used for military purposes), unless the open source license
applies.

Note that in any case, given the specifics of the two licenses, the new
code would be excluded from default builds (so that those agreeing with the
conditions of the license can explicitly enable it) -- we're doing that in
other similar cases, to ensure that default builds wouldn't be considered
non-free.

Bodo


[openssl.org #2929] Patch for recursive deadlock in x_pubkey.c [1.0.1c]

2013-01-17 Thread Bodo Moeller via RT
This appears to be a duplicate of ticket #2813 (which is fixed, but missed the
1.0.1c release by one day).

__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: [CVS] OpenSSL: OpenSSL_1_0_1-stable: openssl/crypto/ cryptlib.c

2012-09-18 Thread Bodo Moeller
 Doh. I see it doesn't write to it. Nevertheless, seems like a bad
 piece of code - it's assuming errno is thread local, right?


This code uses the address of errno as a default thread ID for OpenSSL
purposes. This works precisely because you typically have something like
#define errno (*__error()) where the given function returns a
thread-specific pointer, although the relevant standards don't guarantee
that.  (Obviously different threads' instances of errno aren't allowed to
interfere with each other, but the standards don't say how that's
achieved.) We don't promise that this works everywhere, we just offer this
as a default.
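A rough sketch of the default described above (an illustration of the idea, not OpenSSL's exact code): taking the address of errno yields a per-thread value on platforms where errno expands to a call returning a thread-local pointer, so the address can double as a cheap thread identifier.

```c
#include <errno.h>

/* Where errno is defined as something like (*__error()), &errno differs
 * per thread, so its address can serve as a default thread ID. This is
 * not guaranteed by the standards -- it is only offered as a default. */
static unsigned long default_thread_id(void)
{
    return (unsigned long)&errno;
}
```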


Re: OpenSSL 1.0.1c deadlock

2012-09-05 Thread Bodo Moeller
 We've managed on a few occasions now to reproduce an issue where OpenSSL
 deadlocks while trying to acquire a mutex it already has.  I filed
 http://rt.openssl.org/Ticket/Display.html?id=2866
 about this issue.  I
 currently have a server where this has occurred, with the process in GDB.
 However, the team that owns the server needs it back, so I wanted to know
 if there is anything further the dev team would like me to gather from the
 process before I drop out of GDB.  So far we've encountered this issue on
 both SLES11 SP2 and Ubuntu 12 LTS linux distributions.


Thanks -- I've managed to find the buggy code (crypto/asn1/x_pubkey.c calls
EVP_PKEY_free(ret) while holding lock CRYPTO_LOCK_EVP_PKEY, but
EVP_PKEY_free(ret) always tries to obtain that lock first). Will patch this
in a moment.

Bodo


[openssl.org #2866] Openssl can deadlock OpenSSL version 1.0.1c

2012-09-05 Thread Bodo Moeller via RT
This issue has been reported in
http://rt.openssl.org/Ticket/Display.html?id=2813 (and fixed).



Re: OpenSSL 1.0.1c deadlock

2012-09-05 Thread Bodo Moeller
On Wed, Sep 5, 2012 at 3:06 PM, Bodo Moeller bmoel...@acm.org wrote:


 We've managed on a few occasions now to reproduce an issue where OpenSSL
 deadlocks while trying to acquire a mutex it already has.  I filed
 http://rt.openssl.org/Ticket/Display.html?id=2866
 about this issue.  I
 currently have a server where this has occurred, with the process in GDB.
 However, the team that owns the server needs it back, so I wanted to know
 if there is anything further the dev team would like me to gather from the
 process before I drop out of GDB.  So far we've encountered this issue on
 both SLES11 SP2 and Ubuntu 12 LTS linux distributions.


 Thanks -- I've managed to find the buggy code (crypto/asn1/x_pubkey.c
 calls EVP_PKEY_free(ret) while holding lock CRYPTO_LOCK_EVP_PKEY, but
 EVP_PKEY_free(ret) always tries to obtain that lock first). Will patch this
 in a moment.


Actually I see this has been fixed already -- please try the latest 1.0.0
snapshot to confirm:

http://cvs.openssl.org/chngview?cn=22572


Re: [openssl.org #2635] 1/n-1 record splitting technique for CVE-2011-3389

2012-04-17 Thread Bodo Moeller via RT
I think from the point of view of both interoperability and security, the
original empty-fragment approach is best when a cipher using 8-byte blocks
has been negotiated (usually 3DES), while 1 / n-1 splitting is better for
interoperability and fully adequate for large block sizes (AES).

Regardless of which of these splitting techniques is chosen, we'd want it
to be enabled by default, but it always should be possible to entirely
disable this.

So I'd suggest to rename SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS to cover 1 /
n-1 splitting (e.g., SSL_OP_DONT_SPLIT_FRAGMENTS) while retaining the old
name as an alias (#define SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS
SSL_OP_DONT_SPLIT_FRAGMENTS).

To slightly simplify the code, we could stop worrying about the 8-byte
block ciphers (i.e., never use empty fragments, use 1 / n-1 splitting
instead), but while using 8-byte blocks comes with known security issues,
there's probably no good justification to make this worse [*]. Data that
Yngve Pettersen has shown actually suggests that interoperability with 3DES
implementations is better with the empty-fragment approach anyway.


[*] For an 8-byte block cipher, the 1 / n-1 splitting approach means you get
56 MAC bits for randomization in the first block. With some luck (p = 2^-56),
these bits will come out as needed by the attacker. You'd typically have
other security problems too when using an 8-byte block cipher, but why add
to them?
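The 1 / n-1 split itself can be sketched in a few lines (illustrative only; record framing, MAC, and padding are omitted): the first record carries a single plaintext byte, whose MAC makes the following record's effective IV unpredictable, and the second record carries the remaining n-1 bytes.

```c
#include <stddef.h>

/* Sketch of 1/n-1 record splitting: returns the number of records
 * (0, 1, or 2) and fills in the plaintext length of each. */
static int split_1_n1(size_t len, size_t *first, size_t *second)
{
    if (len <= 1) {       /* nothing worth splitting */
        *first = len;
        *second = 0;
        return len ? 1 : 0;
    }
    *first = 1;           /* 1-byte record randomizes the next record's IV */
    *second = len - 1;
    return 2;
}
```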




Re: [openssl.org #2765] openssl negotiates ECC ciphersuites in SSL 3.0

2012-03-17 Thread Bodo Moeller
On Sat, Mar 17, 2012 at 3:53 PM, Stephen Henson via RT r...@openssl.org wrote:


   My reading of RFC4492 is that the ECC ciphersuites apply only to TLS
  1.0 or later. According to it: This document describes additions to TLS
  to support ECC, applicable both to TLS Version 1.0 [2] and to TLS
  Version 1.1 [3].  In particular, it defines


Well, SSL 3.0 was never passed as an IETF specification, so if SSL 3.0 is the
common protocol version, everything's an ad hoc interpretation of the RFCs
(or, worse, you're really following draft-freier-ssl-version3-01.txt by the
letter).  SSL 3.0 behavior is just out of the scope of the RFCs; there's
no good reason not to use the ECC ciphersuites in SSL 3.0 (apart from the
various good reasons to entirely avoid SSL 3.0).


 $ ./gnutls-cli localhost -p 5556 --x509cafile
 ../doc/credentials/x509/ca.pem  -d 99
 ...
 |3| HSK[0x1d0bdc0]: Server's version: 3.0

Does this indicate that the server was actually configured to only
support SSL 3.0, not TLS?

Bodo


Re: Limiting EC curves in ClientHello

2012-03-05 Thread Bodo Moeller
On Thu, Mar 1, 2012 at 11:28 PM, Erik Tkal et...@me.com wrote:

 So then the question is will this be addressed in 1.0.1 or later?


Probably a bit later.

Bodo


Re: Limiting EC curves in ClientHello

2012-03-01 Thread Bodo Moeller
On Thu, Mar 1, 2012 at 11:16 AM, Erik Tkal et...@juniper.net wrote:

 I looked around and found RFC 5430 - Suite B Profile for Transport Layer
 Security (TLS), which states:

   RFC 4492 defines a variety of elliptic curves.  For cipher suites
   defined in this specification, only secp256r1(23) or secp384r1(24)
   may be used.  …

   Clients desiring to negotiate only a Suite B compliant connection
   MUST generate a Supported Elliptic Curves Extension containing only
   the allowed curves.

 So does this mean that OpenSSL will not support RFC 5430 / Suite B in
 1.0.1?


RFC 5430 specifies that "A Suite B compliant TLS server MUST be configured
to support the 128-bit security level, the 192-bit security level, or both
security levels." OpenSSL can be configured for the 128-bit security level
(using secp256r1) or for the 192-bit security level (using secp384r1), but
it currently can't be configured to cleanly support both. (The section from
which you quoted also says that "Clients that are willing to do both Suite
B compliant and non-Suite B compliant connections MAY omit the extension or
send the extension but offer other curves as well as the appropriate Suite
B ones."  I don't think that supporting Suite B means that you can't also
allow non-Suite B compliant connections, with clients that don't support
Suite B.)

So without having checked all of the formal requirements, I think that
OpenSSL 1.0.1 will support Suite B as specified by RFC 5430, even though
there's not yet a good way to enable two or more explicitly chosen elliptic
curves while disabling all the others.

Bodo
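The Suite B curve restriction quoted above can be sketched in isolation (plain C, using the RFC 4492 NamedCurve IDs 23 and 24 mentioned in the thread; this is not OpenSSL's configuration API, which at the time could not express a two-curve subset):

```c
#include <stddef.h>

/* From a list of RFC 4492 NamedCurve IDs, keep only the Suite B curves:
 * secp256r1 (23) and secp384r1 (24). Returns the number kept; out must
 * have room for n entries. */
static size_t filter_suite_b(const int *curves, size_t n, int *out)
{
    size_t k = 0;
    for (size_t i = 0; i < n; i++)
        if (curves[i] == 23 || curves[i] == 24)
            out[k++] = curves[i];
    return k;
}
```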


Re: Limiting EC curves in ClientHello

2012-03-01 Thread Bodo Moeller
On Thu, Mar 1, 2012 at 4:06 PM, Erik Tkal et...@juniper.net wrote:

You mentioned previously that you can get it to specify none or one curve?
 I don’t see how you would specify this, as it appears the client hello
 preparation adds all of them if any EC cipher suite is specified?


Oh, sorry, you are right. Setting up negotiation for one specific curve is
possible for the *server*-side implementation, but the *client*-side code
isn't even quite there yet.

Bodo


Re: Limiting EC curves in ClientHello

2012-02-29 Thread Bodo Moeller
 It appears there is no way to specify that only a subset should be used?


Yes, this is a known deficiency in the current code. I'm more familiar with
the server side, but I think it's similar: if you set up *one* curve, then
negotiation should happen accordingly; if you use a callback to provide
curves, it will be expected to be able to handle any curve, which is
fundamentally broken (a peer could be using a named curve that's not even
defined yet).

So technically, there is a way to specify that only a subset should be
used -- it's just that the subset needs to be of size 0 or 1, which isn't
utterly flexible. We should get around to fixing that at some point.

Bodo


Re: openssl-1.0.1-stable-SNAP-20111019 failure

2011-10-19 Thread Bodo Moeller
On Wed, Oct 19, 2011 at 4:48 PM, Kenneth Robinette 
supp...@securenetterm.com wrote:

 The openssl-1.0.1-stable-20111019 build fails as follows:

 fips_premain.c
 link /nologo /subsystem:console /opt:ref /debug /dll /map /base:0xFB0
 /out:out32dll\libeay32.dll /def:ms/LIBEAY32.def
 @C:\DOCUME~1\zkrr01\LOCALS~1\Temp\nmb02032.
 LIBEAY32.def : error LNK2001: unresolved external symbol
 EC_GFp_nistp224_method
 out32dll\libeay32.lib : fatal error LNK1120: 1 unresolved externals
 LINK : fatal error LNK1141: failure during build of exports file
 First stage Link failure at \utility\FIPS_2.0\bin\fipslink.pl line 42.
 NMAKE : fatal error U1077: 'perl' : return code '0x75'
 Stop.


Thanks for the report.  I had failed to update util/libeay.num.  This should
be fixed in the next snapshot (20111020).

Bodo


Re: OpenSSL Security Advisory: OCSP stapling vulnerability

2011-02-09 Thread Bodo Moeller
Thanks, Rob; I have updated the Security Advisory at
http://www.openssl.org/news/secadv_20110208.txt.

Bodo


OpenSSL 1.0.0d released

2011-02-08 Thread Bodo Moeller
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


   OpenSSL version 1.0.0d released
   ===

   OpenSSL - The Open Source toolkit for SSL/TLS
   http://www.openssl.org/

   The OpenSSL project team is pleased to announce the release of
   version 1.0.0d of our open source toolkit for SSL/TLS. This new
   OpenSSL version is a security and bugfix release. For a complete
   list of changes, please see

   http://www.openssl.org/source/exp/CHANGES.

   The most significant changes are:

  o Fix for security issue CVE-2011-0014
[http://www.openssl.org/news/secadv_20110208.txt]

   We consider OpenSSL 1.0.0d to be the best version of OpenSSL
   available and we strongly recommend that users of older versions
   upgrade as soon as possible. OpenSSL 1.0.0d is available for
   download via HTTP and FTP from the following master locations (you
   can find the various FTP mirrors under
   http://www.openssl.org/source/mirror.html):

 * http://www.openssl.org/source/
 * ftp://ftp.openssl.org/source/

   The distribution file name is:

o openssl-1.0.0d.tar.gz
  Size: 4025484
  MD5 checksum: 40b6ea380cc8a5bf9734c2f8bf7e701e
  SHA1 checksum: 32ca934f380a547061ddab7221b1a34e4e07e8d5

   The checksums were calculated using the following commands:

openssl md5 openssl-1.0.0d.tar.gz
openssl sha1 openssl-1.0.0d.tar.gz

   Yours,

   The OpenSSL Project Team...

Mark J. Cox Nils Larsch Ulf Möller
Ralf S. Engelschall Ben Laurie  Andy Polyakov
Dr. Stephen Henson  Richard Levitte Geoff Thorpe
Lutz Jänicke    Bodo Möller



-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.11 (GNU/Linux)

iQCVAgUBTVGBGapYnaxaapuFAQJltgP/UWoaBO5R7WAGB3p0TBPODCU6Aaw8MroO
p4qKI7363uBnLgLGQIgS8BBar0n8QARYv4t6c7O+HR3Kn7VCix8cErUm5MkoL79n
C2YJVRKPmpuwoPkLGwC6beB1fBiwvUaJd/n+BSU5LO534QcSzF+u4UKczsGnPX72
HSA/Mzf8C6w=
=Rpu4
-END PGP SIGNATURE-


--
Bodo Moeller    b...@openssl.org
OpenSSL Project http://www.openssl.org/


OpenSSL Security Advisory: OCSP stapling vulnerability

2011-02-08 Thread Bodo Moeller
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

OpenSSL Security Advisory [8 February 2011]

OCSP stapling vulnerability in OpenSSL
==

Incorrectly formatted ClientHello handshake messages could cause OpenSSL
to parse past the end of the message.

This issue applies to the following versions:
  1) OpenSSL 0.9.8h through 0.9.8q
  2) OpenSSL 1.0.0 through 1.0.0c

The parsing function in question is already used on arbitrary data so no
additional vulnerabilities are expected to be uncovered by this.
However, an attacker may be able to cause a crash (denial of service) by
triggering invalid memory accesses.

The results of the parse are only available to the application using
OpenSSL so do not directly cause an information leak. However, some
applications may expose the contents of parsed OCSP extensions,
specifically an OCSP nonce extension. An attacker could use this to read
the contents of memory following the ClientHello.

Users of OpenSSL should update to the OpenSSL 1.0.0d (or 0.9.8r) release,
which contains a patch to correct this issue. If upgrading is not
immediately possible, the source code patch provided in this advisory
should be applied.

Neel Mehta (Google) identified the vulnerability. Adam Langley and
Bodo Moeller (Google) prepared the fix.

Which applications are affected
- ---

Applications are only affected if they act as a server and call
SSL_CTX_set_tlsext_status_cb on the server's SSL_CTX. This includes
Apache httpd >= 2.3.3.

Patch
- -

- --- ssl/t1_lib.c  25 Nov 2010 12:28:28 -  1.64.2.17
+++ ssl/t1_lib.c  8 Feb 2011 00:00:00 -
@@ -917,6 +917,7 @@
}
n2s(data, idsize);
dsize -= 2 + idsize;
+   size -= 2 + idsize;
if (dsize < 0)
{
*al = SSL_AD_DECODE_ERROR;
@@ -955,9 +956,14 @@
}
 
/* Read in request_extensions */
+   if (size < 2)
+   {
+   *al = SSL_AD_DECODE_ERROR;
+   return 0;
+   }
n2s(data,dsize);
size -= 2;
- - if (dsize > size)
+   if (dsize != size)
{
*al = SSL_AD_DECODE_ERROR;
return 0;

References
- --

This vulnerability is tracked as CVE-2011-0014.

URL for this Security Advisory:
http://www.openssl.org/news/secadv_20110208.txt

OCSP stapling is defined in RFC 2560.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.11 (GNU/Linux)

iQCVAgUBTVGA/qpYnaxaapuFAQJSqAQAo3zal2kp+/ZcBcdhXnn98kuDDJaUhCqz
tG+IpnKRqQsGqprz72cOsdlB6C1pzlaLt5tofkxVlXBiAtx1Vn8YeJwQIXAj2CEi
6edgg/w+ni1hBASZBbCQUGLfAmW5tsOxp1ShxCovwh/I+7eetzuSeDfIbB+NYpz7
p3xrSBAVwTY=
=zV3P
-END PGP SIGNATURE-



--
Bodo Moeller    b...@openssl.org
OpenSSL Project http://www.openssl.org/
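The length accounting that the advisory's patch adds can be sketched in isolation (a hypothetical helper, not the actual t1_lib.c code): read a 2-byte big-endian length only when at least 2 bytes remain, then check the announced length against what is actually left -- the missing "size" bookkeeping was what let the parser walk past the end of the ClientHello.

```c
#include <stddef.h>

/* Bounds-checked read of a 2-byte big-endian length field, in the style
 * of the fix. Returns 1 on success, 0 if the buffer is too short or the
 * announced length exceeds the remaining data. */
static int read_u16_len(const unsigned char **p, size_t *remaining, size_t *out)
{
    if (*remaining < 2)
        return 0;                          /* would read past the end */
    *out = ((size_t)(*p)[0] << 8) | (*p)[1];
    *p += 2;
    *remaining -= 2;
    if (*out > *remaining)
        return 0;                          /* announced length is a lie */
    return 1;
}
```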






Re: OpenSSL 1.0.0d released

2011-02-08 Thread Bodo Moeller
On Tue, Feb 8, 2011 at 7:48 PM, Corinna Vinschen vinsc...@redhat.com wrote:

OpenSSL version 1.0.0d released



 I'm missing an official release mail for 0.9.8r.  Will you create one?


I wasn't planning to -- http://www.openssl.org/news/secadv_20110208.txt also
announces 0.9.8r for those using the 0.9.8 branch, but a separate
announcement for 0.9.8r doesn't seem right (or at least not using our
template claiming that this is the best version of OpenSSL available).

(Maybe we should have had a combined release announcement OpenSSL versions
1.0.0d and 0.9.8r released?)

Bodo


Re: [openssl.org #1833] [PATCH] Abbreviated Renegotiations

2010-09-06 Thread Bodo Moeller

On Sep 6, 2010, at 10:39 AM, Darryl Miles wrote:

The only user of these field(s) is libssl.so itself.  The exact  
meaning, usage and interpretation of the field(s) is a matter of  
implementation detail which is encapsulated and presented to the  
application via the documented OpenSSL APIs.


Ideally this would be true, but in practice various applications do  
access some fields directly.


The big change to stop that would be to move all the struct details  
completely out of the externally visible header files.   Of course,  
that change too would be rather painful for such applications.


Bodo



Re: [openssl.org #1833] [PATCH] Abbreviated Renegotiations

2010-09-06 Thread Bodo Moeller via RT
On Sep 6, 2010, at 10:39 AM, Darryl Miles wrote:

 The only user of these field(s) is libssl.so itself.  The exact  
 meaning, usage and interpretation of the field(s) is a matter of  
 implementation detail which is encapsulated and presented to the  
 application via the documented OpenSSL APIs.

Ideally this would be true, but in practice various applications do  
access some fields directly.

The big change to stop that would be to move all the struct details  
completely out of the externally visible header files.   Of course,  
that change too would be rather painful for such applications.

Bodo




Re: openssl 0.9.8n issue with no-tlsext

2010-03-30 Thread Bodo Moeller

On Mar 30, 2010, at 3:04 PM, Adam Langley wrote:

On Tue, Mar 30, 2010 at 7:35 AM, Thomas Jarosch
thomas.jaro...@intra2net.com wrote:

28141:error:14092073:SSL routines:SSL3_GET_SERVER_HELLO:bad packet
length:s3_clnt.c:878:

openssl is compiled with the no-tlsext option. no-tlsext was  
added back
in 2009 as openssl 0.9.8j had trouble connecting to a Centos 3  
based server.

(http://marc.info/?l=openssl-devm=123192990505188)

openssl-0.9.8m is also affected. Any idea what might be going on?


A tcpdump would be very helpful.


Or just add the -msg option to the command line.

I can see what is happening, though: as of RFC 5746, disabling TLS  
extensions isn't really tenable because you can't do secure  
renegotiation without those, and can't even quite do a secure  
*initial* negotiation because that requires at least sending the  
pseudo-ciphersuite number 0x00 0xFF -- which to current servers is  
equivalent to a TLS extension.


So client-side OpenSSL is buggy if compiled with no-tlsext (in 0.9.8m  
and 0.9.8n) because it sends that pseudo-ciphersuite number without  
being able to handle the TLS extension then expected in the server's  
response.  So the no-tlsext build shouldn't be sending the
pseudo-ciphersuite number.  However, then you'd soon have problems connecting
to some updated servers, as these may start to *demand* confirmation  
that clients are updated to support RFC 5746.  So the fix won't help  
you in the long run.


Bodo
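For illustration, the signaling in question is just a two-byte value in the ClientHello's cipher suite list. A hedged sketch of how a server might spot it (not OpenSSL's actual implementation):

```c
#include <stddef.h>

/* TLS_EMPTY_RENEGOTIATION_INFO_SCSV is the cipher suite value
 * { 0x00, 0xFF } in the ClientHello's cipher suite list; per RFC 5746,
 * servers treat its presence like an empty renegotiation_info extension.
 * suites holds the raw list, two bytes per suite. */
static int has_renego_scsv(const unsigned char *suites, size_t len)
{
    for (size_t i = 0; i + 1 < len; i += 2)
        if (suites[i] == 0x00 && suites[i + 1] == 0xFF)
            return 1;
    return 0;
}
```

This is why the pseudo-ciphersuite works with version-intolerant servers that would choke on a real extension: to a pre-RFC-5746 server it is simply an unknown cipher suite to ignore.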



Re: OpenSSL Security Advisory

2010-03-25 Thread Bodo Moeller

On Mar 25, 2010, at 6:33 PM, Jean-Marc Desperrier wrote:


OpenSSL wrote:

"Record of death" vulnerability in OpenSSL 0.9.8f through 0.9.8m


How come the vulnerability doesn't touch 0.9.8e though the patched  
file wasn't modified between 0.9.8e and 0.9.8f?


But that code was modified between 0.9.8d and 0.9.8e, see this patch:
http://cvs.openssl.org/filediff?f=openssl/ssl/s3_pkt.c&v1=1.60&v2=1.61

Could it be a reference mistake and that this vulnerability is from  
0.9.8e through 0.9.8m ?


No, it's not a mistake -- it's code elsewhere that no longer tolerates  
the coarse logic we are changing in the patch, which has been around  
forever.


Bodo



Re: SSL_library_init() EVP_sha256

2009-06-15 Thread Bodo Moeller
On Mon, Jun 15, 2009 at 5:46 AM, Phil Pennock openssl-...@spodhuis.org wrote:

 When RFC 5246 came out, specifying TLS 1.2 and having all mandated
 cipher suites use SHA-256, we assumed that to aid the transition OpenSSL
 would add EVL_sha256() to the list of digests initialised in
 SSL_library_init(), even before support of TLS 1.2 itself.  I've checked
 OpenSSL 1.0.0 beta 2 and see that this is still not the case.

 I'm seeing usage of SHA-256 become more widespread by CAs today.

 Are there plans to add this digest to the list initialised by
 SSL_library_init() ?

I think SSL_library_init() is meant to provide just the subset of
algorithms needed by the SSL/TLS protocol implementation itself, which
currently doesn't include SHA-256.

Most applications, however, just call OpenSSL_add_all_algorithms() to
get more than that subset.  If you'd rather not define more encryption
algorithms than needed to cut down some overhead, you should be able
to make do with calling SSL_library_init() and
OpenSSL_add_all_digests().  Then the hash algorithms available for
certificate verification will include SHA-256.

Bodo


Re: [openssl.org #1831] PATCH: openssl rand -hex

2009-02-01 Thread Bodo Moeller

What is the rationale of not having a newline at the end?  It's text,
after all?


no rationale, just an oversight.



So ... I was going to add the newline while working on the patch, but  
then it occurred to me as you said this comes from OpenBSD CVS I might  
be breaking something there.  No risk then?


Bodo



Re: [openssl.org #1831] PATCH: openssl rand -hex

2009-02-01 Thread Bodo Moeller via RT
 What is the rationale of not having a newline at the end?  It's text,
 after all?

 no rationale, just an oversight.


So ... I was going to add the newline while working on the patch, but  
then it occurred to me as you said this comes from OpenBSD CVS I might  
be breaking something there.  No risk then?

Bodo




Re: [openssl.org #1831] PATCH: openssl rand -hex

2009-02-01 Thread Bodo Moeller via RT
 we'll cope ;)

Here's my version of the patch.  Let me know if it looks OK for you.

Bodo



Index: CHANGES
===
RCS file: /e/openssl/cvs/openssl/CHANGES,v
retrieving revision 1.1468
diff -u -r1.1468 CHANGES
--- CHANGES 28 Jan 2009 12:54:51 -  1.1468
+++ CHANGES 1 Feb 2009 17:48:03 -
@@ -745,6 +745,9 @@
 
  Changes between 0.9.8j and 0.9.8k  [xx XXX ]
 
+  *) New -hex option for openssl rand.
+ [Matthieu Herrb]
+
   *) Print out UTF8String and NumericString when parsing ASN1.
  [Steve Henson]
 
Index: apps/rand.c
===
RCS file: /e/openssl/cvs/openssl/apps/rand.c,v
retrieving revision 1.20
diff -u -r1.20 rand.c
--- apps/rand.c 12 Aug 2007 17:44:27 -  1.20
+++ apps/rand.c 1 Feb 2009 17:48:05 -
@@ -68,7 +68,8 @@
 
 /* -out file - write to file
  * -rand file:file   - PRNG seed files
- * -base64   - encode output
+ * -base64   - base64 encode output
+ * -hex  - hex encode output
  * num   - write 'num' bytes
  */
 
@@ -84,6 +85,7 @@
char *outfile = NULL;
char *inrand = NULL;
int base64 = 0;
+   int hex = 0;
BIO *out = NULL;
int num = -1;
 #ifndef OPENSSL_NO_ENGINE
@@ -133,6 +135,13 @@
else
badopt = 1;
}
+   else if (strcmp(argv[i], "-hex") == 0)
+   {
+   if (!hex)
+   hex = 1;
+   else
+   badopt = 1;
+   }
else if (isdigit((unsigned char)argv[i][0]))
{
if (num < 0)
@@ -148,6 +157,9 @@
badopt = 1;
}
 
+   if (hex && base64)
+   badopt = 1;
+
if (num < 0)
badopt = 1;

@@ -160,7 +172,8 @@
BIO_printf(bio_err, "-engine e - use engine e, possibly a hardware device.\n");
 #endif
BIO_printf(bio_err, "-rand file%cfile%c... - seed PRNG from files\n", LIST_SEPARATOR_CHAR, LIST_SEPARATOR_CHAR);
-   BIO_printf(bio_err, "-base64   - encode output\n");
+   BIO_printf(bio_err, "-base64   - base64 encode output\n");
+   BIO_printf(bio_err, "-hex  - hex encode output\n");
goto err;
}
 
@@ -210,7 +223,14 @@
r = RAND_bytes(buf, chunk);
if (r <= 0)
goto err;
-   BIO_write(out, buf, chunk);
+   if (!hex)
+   BIO_write(out, buf, chunk);
+   else
+   {
+   for (i = 0; i < chunk; i++)
+   BIO_printf(out, "%02x", buf[i]);
+   BIO_puts(out, "\n");
+   }
num -= chunk;
}
(void)BIO_flush(out);
Index: doc/apps/rand.pod
===
RCS file: /e/openssl/cvs/openssl/doc/apps/rand.pod,v
retrieving revision 1.5
diff -u -r1.5 rand.pod
--- doc/apps/rand.pod   7 Sep 2001 06:13:26 -   1.5
+++ doc/apps/rand.pod   1 Feb 2009 17:48:15 -
@@ -10,6 +10,7 @@
[B<-out> I<file>]
[B<-rand> I<file(s)>]
[B<-base64>]
+[B<-hex>]
I<num>
 
 =head1 DESCRIPTION
@@ -41,6 +42,10 @@
 
 Perform base64 encoding on the output.
 
+=item B<-hex>
+
+Show the output as a hex string.
+
 =back
 
 =head1 SEE ALSO




Re: [openssl.org #1834] PKCS7_verify return value -1?

2009-01-31 Thread Bodo Moeller
On Fri, Jan 30, 2009 at 10:37 PM, Kurt Roeckx via RT r...@openssl.org wrote:

 The documentation for PKCS7_verify says:

   PKCS7_verify() returns 1 for a successful verification and zero or a
   negative value if an error occurs.

 And in apps/smime.c there is this code:

if (PKCS7_verify(p7, other, store, indata, out, flags))
BIO_printf(bio_err, Verification successful\n);
else
{
BIO_printf(bio_err, Verification failure\n);
goto end;
}

 But looking at the code for PKCS7_verify I can't see a case where
 it returns something other than 0 or 1.

 Could either the code or the documentation be fixed?

Or both:

apps/smime.c isn't changed with the patch from
http://www.openssl.org/news/secadv_20090107.txt, and that's certainly
because PKCS7_verify() doesn't actually ever return -1.  Thanks for
bringing up the inconsistency with the documentation.  Using "if
(PKCS7_verify(...) > 0)" in smime.c can't hurt (that's the pattern
that you have to follow with certain functions, after all), and
updating the documentation to describe the actual PKCS7_verify()
behavior that smime.c is currently relying on can't hurt either.

Bodo


[openssl.org #1831] PATCH: openssl rand -hex

2009-01-31 Thread Bodo Moeller via RT
 [...@mindrot.org - Fr. 30. Jan. 2009, 11:52:17]:

 This patch adds a -hex option to the rand app. E.g.
 
 $ openssl rand -hex 8
 d203552d5eb39e76

What is the rationale of not having a newline at the end?  It's text,
after all?

Bodo



Re: OpenSSL 0.9.8j bug (reproducible SSL negotiation issue, 0.9.8i unaffected)

2009-01-09 Thread Bodo Moeller
On Fri, Jan 9, 2009 at 1:42 PM, Brad House b...@mainstreetsoftworks.com wrote:

 BTW,  I didn't see in the changelog the fact that tls extensions were
 enabled by default between 0.9.8i and j...

 It's there, 3rd entry:

  *) Enable TLS extensions by default.
 [Ben Laurie]

 Hmm, we must be looking at different things.  I was looking
 at the changelog referenced from the 0.9.8j announcement e-mail:
 http://www.openssl.org/source/exp/CHANGES
 The 0.9.8i-0.9.8j changes start at around line 730...
 What changelog are you looking at?

You have found an unfortunate inconsistency between CHANGES from the
stable branch (from which 0.9.8 releases are made) and the CVS head
(which is the one you are seeing on the webserver, also at
http://www.openssl.org/news/changelog.html).  Everything for 0.9.8
releases appearing in the stable branch (see
http://cvs.openssl.org/fileview?f=openssl/CHANGES&v=1.1238.2.134)
should also be added to the CHANGES file at the CVS head.  Sometimes
this is forgotten, either when a change doesn't actually have to be
applied to the CVS head, or when it should be but hasn't.

Bodo


Re: [openssl.org #1812] the openssl build environment is broken

2009-01-08 Thread Bodo Moeller via RT
On Thu, Jan 8, 2009 at 4:01 PM, Felix von Leitner via RT r...@openssl.org 
wrote:

 [...]  Apart from the inherent
 wrongness of doing recursive make (see
 http://miller.emu.id.au/pmiller/books/rmch/ and note that the
 traditionally cited reason for doing recursive makes, namely being able
 to go into apps and doing make install to only get the apps content
 installed, does not actually work with openssl)

For this particular point, note that while having all those
mini-makefiles is awkward, it does serve a purpose here in that you
can individually remove the source code sub-directories corresponding
to various cryptographic algorithms that you might want to exclude
from your builds (such as for patent reasons).  If you say "no-rc5"
(say) when configuring OpenSSL, then crypto/rc5/Makefile won't be
invoked at all.

Also, while you can't do make install to only install the apps
content, you can go to a subdirectory to only *build* the stuff in
there (so you won't always have to wait for the recursive make to
finish if you're just doing development work within some part of
OpenSSL).  The sub-Makefile will invoke the master Makefile,
instructing it to invoke the appropriate sub-Makefile with appropriate
settings (well ... mostly: you've pointed to CC issues).

These are aspects that the Recursive Make Considered Harmful paper
doesn't talk about.

I'm not saying that the whole Configure/Makefile thing shouldn't be
thoroughly redone -- it's just much more complex than just pasting
everything into a single file.  (There's some support for having this
done automatically, actually: util/mk1mf.pl, which is used for Windows
builds.  You can try make makefile.one to see this.  Currently this
set-up is mostly adding to the overall complexity because many
configuration options need to be handled in mk1mf.pl as special cases
that are already handled outside mk1mf.pl for Unix builds.  Maybe some
kind of makefile.one setup should be used for Unix platforms too: we'd
be keeping individual per-directory Makefiles mostly to record
information on what is in the respective directory [to be used to
create makefile.one], but would not actually call these when building
stuff.)

Bodo




Re: [PATCH] PURIFY and valgrind

2008-07-18 Thread Bodo Moeller
On Thu, Jul 17, 2008 at 7:07 PM, Frederic Heem [EMAIL PROTECTED] wrote:

 Please find attached a patch which makes valgrind and friends happy. Some
 changes had been done in md_rand.c which broke the purpose of PURIFY.
 Needless to say that the define PURIFY is *not* for production system...

Defining PURIFY should never make the PRNG weak.  If Valgrind finds
data that is used uninitialized, then a PURIFY patch should only
ensure that those exact bytes of data are initialized with some data.
Never overwrite a byte that actually may have been initialized.

Bodo


Re: [PATCH] PURIFY and valgrind

2008-07-18 Thread Bodo Moeller
On Fri, Jul 18, 2008 at 6:00 PM, Geoff Thorpe [EMAIL PROTECTED] wrote:
 On Friday 18 July 2008 10:57:50 Bodo Moeller wrote:
 On Thu, Jul 17, 2008 at 7:07 PM, Frederic Heem [EMAIL PROTECTED] wrote:

  Please find attached a patch which makes valgrind and friends happy. Some
  changes had been done in md_rand.c which broke the purpose of PURIFY.
  Needless to say that the define PURIFY is *not* for production system...

 Defining PURIFY should never make the PRNG weak.  If Valgrind finds
 data that is used uninitialized, then a PURIFY patch should only
 ensure that those exact bytes of data are initialized with some data.
 Never overwrite a byte that actually may have been initialized.

 Agreed, though where possible it's preferable for PURIFY-handling to simply
 not use the uninitialised data at all, rather than initialising it before
 use.

Absolutely true!  Thanks for adding this aspect to the picture.

Bodo


[openssl.org #1695] RSA_padding_check_SSLv23 broken

2008-07-17 Thread Bodo Moeller via RT
 [EMAIL PROTECTED] - Mi. 04. Jun. 2008, 08:08:00]:
 
 We have addressed the following issue in Mac OS X:
 
 RSA_padding_check_SSLv23 has a bug in the loop that verifies the  
 presence of eight consecutive 0x03 padding bytes just before the null  
 marker signifying the end of the padding.  The problem is that at the  
start of the for loop (for (k= -8; k<0; k++)), p points at the byte  
 *after* the NULL terminator. The eight 0x03 bytes are actually from  
 p[-9] to p[-2] inclusive. The byte at p[-1] is the NULL.  As a result,  
 if an SSLv2-only client is extraordinarily unlucky, an OpenSSL server  
 with SSLv2 enabled may erroneously detect a rollback attack.  Well,  
 this could have happened anyway with a probability of 1 in 2^64, but  
 with this bug the probability was increased to 1 in 2^56.

Thank you very much for your report!  I have just checked in the fix
into the OpenSSL CVS (i.e., it will be in the next snapshots, both for
the main development branch and the 0.9.8 stable branch).

Note that your proposed patch isn't quite right, though.  The loop is
correct, but the error should still be raised in the case that k == -1,
meaning that p[-9] through p[-2] each had the value 0x03.

Bodo


 
diff -Naur /var/tmp/OpenSSL.roots/OpenSSL/openssl/crypto/rsa/rsa_ssl.c ./crypto/rsa/rsa_ssl.c
 --- /var/tmp/OpenSSL.roots/OpenSSL/openssl/crypto/rsa/rsa_ssl.c
 2000-11-06 14:34:16.0 -0800
 +++ ./crypto/rsa/rsa_ssl.c2006-10-11 16:40:48.0 -0700
 @@ -130,11 +130,11 @@

 RSAerr(RSA_F_RSA_PADDING_CHECK_SSLV23,RSA_R_NULL_BEFORE_BLOCK_MISSING);
   return(-1);
   }
- for (k= -8; k<0; k++)
+ for (k= -9; k<-1; k++)
   {
   if (p[k] !=  0x03) break;
   }
 - if (k == -1)
 + if (k != -1)
   {
   
 RSAerr(RSA_F_RSA_PADDING_CHECK_SSLV23,RSA_R_SSLV3_ROLLBACK_ATTACK);
   return(-1);




Re: [BUGFIX] BN_GF2m_mod_arr() infinite loop

2008-06-23 Thread Bodo Moeller
On Wed, May 28, 2008 at 03:55:27PM +0800, Huang, Ying wrote:

 The following code will make BN_GF2m_mod_arr() into infinite loop.
[...]
 This patch is based on openssl SNAPSHOT 20080519, and has been tested
 on x86_64 with openssl/test/bntest.c and above program.

Thank you very much for your contribution!  Your bugfix will be in
future snapshots (openssl-SNAP-20080624.tar.gz and later,
openssl-0.9.8-stable-SNAP-20080624.tar.gz and later) and releases.

Bodo



Re: [CVS] OpenSSL: openssl/apps/ ca.c

2008-06-02 Thread Bodo Moeller
On Mon, Jun 2, 2008 at 12:47 PM, Dr. Stephen Henson [EMAIL PROTECTED] wrote:
 On Sun, Jun 01, 2008, Ben Laurie wrote:

 Stop const mismatch warning.

 - else if (index_name_cmp(row,rrow))
 + else if (index_name_cmp((const CSTRING *)row,(const CSTRING *)rrow))

 I do wish you'd find ways to fix these that don't involve casts!

 Well I'm open to suggestions on this one...

 It's a feature of C that if you do...

 const something * const *foo;
 something **bar;

 foo = bar;

 you get a warning about different const types. This bit in the ASN1 code where
 what used to be:

 char **p;

 was changed to the more correct:

 const char * const *p;

 and produces warnings in code which uses the previous form.

You can create a type-safe macro that does the cast for you -- see
examples in Ben's recent code.

An expression such as

(1 ? (p) : (char **)NULL)

in the macro's definition should ensure that using the macro will
cause a warning if p isn't of the intended type.  So just cast an
expression like this to the properly constified type, within the
macro.  Then the cast is just in the macro definition where it can be
more easily verified, rather than having casts directly in the code.

Bodo


Re: valgrind and openssl

2008-05-19 Thread Bodo Moeller
On Mon, May 19, 2008 at 6:00 AM, Michael Sierchio [EMAIL PROTECTED] wrote:
 Theodore Tso wrote:

 ... I'd be comfortable with an adversary knowing the first megabyte of data 
 fed
 through SHA1, as long as it was followed up by at least 256 bits which
 the adversary *didn't* know.

 I'd be comfortable with an adversary knowing the first zetabyte of
 data fed though SHA1, as long as it was followed up by at least 256 bits
 which the adversary didn't know. ;-)

You are being a few orders of magnitude too optimistic here, though
... ;-)  A zettabyte would be 2^73 bits (less if you use the standard
decimal version of zetta), but SHA-1 will only handle inputs up to
2^64 - 1 bits.

Bodo


Re: valgrind and openssl

2008-05-19 Thread Bodo Moeller
On Mon, May 19, 2008 at 6:30 PM, Thor Lancelot Simon [EMAIL PROTECTED] wrote:
 On Sun, May 18, 2008 at 10:07:03PM -0400, Theodore Tso wrote:
 On Sun, May 18, 2008 at 05:24:51PM -0400, Thor Lancelot Simon wrote:

  So you're comfortable with the adversary knowing, let's say, 511 of
  the first 512 bits fed through SHA1?

 *Sigh*.

 Thor, you clearly have no idea how SHA-1 works.  In fact, I'd be
 comfortable with an adversary knowing the first megabyte of data fed
 through SHA1, as long as it was followed up by at least 256 bits which
 the adversary *didn't* know.

 Thanks for the gratuitous insult.  I'd be perfectly happy with the case
 you'd be happy with, too, but you took my one bit and turned it into 256.

 What I _wouldn't_ be happy with is a PRNG which has been fed only known
 data, but enough of it at startup that it agrees to provide output to
 the user.  There are a terrible lot of these around, and pretending that
 stack contents are random is a great way to accidentally build them.

 Not feeding in data which you have a pretty darned good idea will be
 predictable -- potentially as the first bits in at RNG startup -- is to
 my mind one thing one can should do to avoid the problem.

No-one pretends that stack contents are random.

The OpenSSL PRNG tries to keep a tally of how much entropy has been
added from external sources.  I won't generate any output for key
generation and such until it is happy about this amount of entropy.
Those stack contents are taken into account with an entropy estimate
of 0.0, i.e., not at all.  Thus, after feeding those 511 known bits to
the OpenSSL PRNG [*], it would still expect just as much additional
seeding as before.  Your failure scenario has nothing to do with the
way this PRNG operates.


[*] Actually the PRNG won't take fractions of bytes, so make that 512
bits, or 504.


Re: Make ssleay_rand_bytes more deterministic

2008-05-19 Thread Bodo Moeller
On Mon, May 19, 2008 at 11:57 PM, Richard Stoughton [EMAIL PROTECTED] wrote:

  - do not mix the PID into the internal entropy pool, and

The OpenSSL PRNG uses the PID twice:

First, it is used as part of the initial seeding on Unix machines, to
get some data that might provide a little actual entropy.  This part
wasn't functional in the Debian version, because the content of each
and every seed byte was ignored.

But then the PRNG also mixes the PID into the output (via a hash).
This is why the PID did influence the output bytes on Debian.  The
point in using the PID here is *not* to collect entropy.  Rather, it
is to ensure that after a fork() both processes will see different
random numbers.  Without this feature, many typical Unix-style server
programs would be utterly broken.


  - do not mix bits of the given output buffer into the internal entropy pool.

 Note that the second improvement may totally break already broken
 client software.

Why would it?


Re: valgrind and openssl

2008-05-16 Thread Bodo Moeller
On Fri, May 16, 2008 at 6:47 AM, Thor Lancelot Simon [EMAIL PROTECTED] wrote:
 On Thu, May 15, 2008 at 11:45:14PM +0200, Bodo Moeller wrote:
 On Thu, May 15, 2008 at 11:41 PM, Erik de Castro Lopo
 [EMAIL PROTECTED] wrote:
  Goetz Babin-Ebell wrote:

  But here the use of this uninitialized data is intentional
  and the programmer are very well aware of what they did.

  The use of unititialized data in this case is stupid because the
  entropy of this random data is close to zero.

 It may be zero, but it may be more, depending on what happened earlier
 in the program if the same memory locations have been in use before.
 This may very well include data that would be unpredictable to
 adversaries -- i.e., entropy; that's the point here.

 Unfortunately, it may also very well include data that would be
 highly predictable to adversaries.

Sure.  That's not a problem, though.  What happens to the PRNG then is
not too different from what happens when you use it to output bits
(except that with RAND_add(), there is no output that might be seen by
the adversary, so seeding with known data should actually be safer
than generating output if you're worrying about this kind of things at
all).  The adversary may know something about what is going on, but
the internal state still remains secret; and the internal state's
entropy won't be adversely affected more than marginally if at all.
(Because of the way the internal state is structured, this stirring
achieved even with a fixed input might even be considered a feature to
improve the distribution of whatever entropy you already have.)

Bodo


Re: valgrind and openssl

2008-05-15 Thread Bodo Moeller
On Thu, May 15, 2008 at 4:58 PM, John Parker [EMAIL PROTECTED] wrote:

 In the wake of the issues with Debian, is it possible to modify the
 source so that it is possible to use valgrind with openssl without
 reducing the key space?

Sure.  This might happen with the next release.

 Are we really relying on uninitialized memory for randomness?

Not at all.  It's just that OpenSSL in some situations tries to feed
possibly uninitialized memory into the random number generator anyway,
essentially just for fun and because there *might* be some actual
randomness there from whatever happened earlier in the same process.

The Debian-internal patch was blatantly overbroad in disabling the
essential functionality of the RAND_add() function rather than just
avoiding the one case where this function might have been called with
uninitialized memory.  (That one case is in RAND_load_file(), which
would intentionally feed a complete 1024-byte buffer to RAND_add()
even if fewer than 1024 bytes had been put into the buffer by
fread().)

Bodo


Re: valgrind and openssl

2008-05-15 Thread Bodo Moeller
On Thu, May 15, 2008 at 7:53 PM, Theodore Tso [EMAIL PROTECTED] wrote:
 On Thu, May 15, 2008 at 11:09:46AM -0500, John Parker wrote:

 What I was hoping for was a -DNO_UNINIT_DATA that wouldn't be the
 default, but wouldn't reduce the keyspace either.

 -DPURIFY *does* do what you want.  It doesn't reduce the keyspace.

 The problem was that what Debian did went far beyond -DPURIFY.  The
 Debian developer in question disabled one call that used uninitialized
 memory, but then later on, removed another similar call that looked
 the same, but in fact *was* using initialized data --- said
 initialized data being real randomness critically necessary for
 security.

This similar call would, under certain conditions, use uninitialized
data too.  I guess Valgrind is more thorough than Purify, because it
seems that those using Purify were not shown this as suspicious, and
thus -DPURIFY didn't cover this particular case.  Of course, totally
disabling the offending call in md_rand.c as was done in Debian was
blatantly wrong.  The correct way would have been to change
RAND_load_file() in randfile.c; that's the function that might
sometimes pass uninitialized data to RAND_add() (intentionally, but
without relying on this uninitialized data as a source of randomness).

One of the offending RAND_add() calls has already been taken care of
about a year ago:


http://cvs.openssl.org/filediff?f=openssl/crypto/rand/randfile.c&v1=1.47.2.1&v2=1.47.2.2

However, another intentional use of potentially unitialized data is
still left as of
http://cvs.openssl.org/getfile/openssl/crypto/rand/randfile.c?v=1.47.2.2
:

i=fread(buf,1,n,in);
if (i <= 0) break;
/* even if n != i, use the full array */
RAND_add(buf,n,(double)i);

Changing this into RAND_add(buf,i,(double)i) should make verification
tools happier.  Or it could be

#ifdef PURIFY
RAND_add(buf,i,(double)i);
#else
RAND_add(buf,n,(double)i);
#endif

(abusing the PURIFY macro with a more general meaning).

Bodo


Re: valgrind and openssl

2008-05-15 Thread Bodo Moeller
On Thu, May 15, 2008 at 11:41 PM, Erik de Castro Lopo
[EMAIL PROTECTED] wrote:
 Goetz Babin-Ebell wrote:

 But here the use of this uninitialized data is intentional
 and the programmer are very well aware of what they did.

 The use of unititialized data in this case is stupid because the
 entropy of this random data is close to zero.

It may be zero, but it may be more, depending on what happened earlier
in the program if the same memory locations have been in use before.
This may very well include data that would be unpredictable to
adversaries -- i.e., entropy; that's the point here.


Re: valgrind and openssl

2008-05-15 Thread Bodo Moeller
On Thu, May 15, 2008 at 11:51 PM, Erik de Castro Lopo
[EMAIL PROTECTED] wrote:
 Bodo Moeller wrote:

 It may be zero, but it may be more, depending on what happened earlier
 in the program if the same memory locations have been in use before.
 This may very well include data that would be unpredictable to
 adversaries -- i.e., entropy; that's the point here.

 Do you know it's unpredictable or are you only guessing?

 Can a bad guy force it to be predictable?

 How much entropy is actually there? Has anyone actually measured it?

All this depends on the specific application.  For many, there almost
certainly won't be any unpredictable data.  For others, in particular
long-running interactive software, there certainly will be at least
some information that is unpredictable to at least some adversaries.
Even if it's just return addresses on the stack, the specific pattern
will depend on the program's past, some aspects of which may be
unknown to adversaries.

We don't care if anyone can force this to be predictable, because
we're in no way relying on it to deliver more than zero bits of
entropy.  We're just hoping there might be some entropy in there
sometimes.


Re: valgrind and openssl

2008-05-15 Thread Bodo Moeller
On Fri, May 16, 2008 at 12:39 AM, David Schwartz [EMAIL PROTECTED] wrote:

 2) Zeroing memory that doesn't need to be zeroed has a performance cost.

This particular argument doesn't actually apply here.  We wouldn't
have to zeroize any memory, we just wouldn't feed those bytes that are
not known to have been initialized into RAND_add().  The cost of
RAND_add() is a lot higher than that of memset(), so we'd even gain
some performance.  But we'd lose the randomness that is available when
the bytes not currently known to have been initialized have in fact
been previously initialized in a way not known by an attacker.


Re: OpenSSL: OpenSSL_0_9_8-stable: openssl/ CHANGES Configure

2008-05-02 Thread Bodo Moeller
[EMAIL PROTECTED] (Andy Polyakov) to openssl-dev:

 Log:
   Unobtrusive backport of 32-bit x86 Montgomery improvements from 
 0.9.9-dev:
   you need to use enable-montasm to see a difference.  (Huge speed
   advantage, but BN_MONT_CTX is not binary compatible, so this can't be
   enabled by default in the 0.9.8 branch.)

 Index: openssl/CHANGES
 
 
 $ cvs diff -u -r1.1238.2.94 -r1.1238.2.95 CHANGES
 --- openssl/CHANGES  30 Apr 2008 16:11:31 -  1.1238.2.94
 +++ openssl/CHANGES  1 May 2008 23:11:30 -   1.1238.2.95
 @@ -4,6 +4,28 @@

   Changes between 0.9.8g and 0.9.8h  [xx XXX ]

 +  *) Partial backport from 0.9.9-dev:
 +
 + New candidate for BIGNUM assembler implementation, bn_mul_mont,
 + dedicated Montgomery multiplication procedure, is introduced.
 + While 0.9.9-dev has assembler for various architectures, here
 + in the 0.9.8 branch, only x86_64 is available by default.
 +
 + With Configure option enable-montasm (which exists only for
 + this backport), the 32-bit x86 assembler implementation can be
 + activated at compile-time.  In 0.9.9-dev, BN_MONT_CTX is modified
 + to allow bn_mul_mont to reach for higher 64-bit performance on
 + certain 32-bit targets.  With enable-montasm, this BN_MONT_CTX
 + change is activated in the 0.9.8 branch.
 +
 + Warning: Using enable-montasm thus means losing binary
 + compatibility between patchlevels!  (I.e., applications will
 + have to be recompiled to match the particular library.)
 + So you may want to avoid this setting for shared libraries.
 + Use at your own risk.

  The keyword is certain. While some platforms do require binary
  incompatible changes, others do *not*. x86 does *not*, so there is no
  reason to modify bn.h in this case. As for *now* the only platform that
  depends on bn.h update is 32-bit UltraSPARC build (though for 64-bit
  build one would have to adapt sparcv9a-mont.pl), but PowerPC and
  Itanium are to join.

Yeah, I guess in the end it should be easier to just take out this
part of the modification, because then we make life easier for most by
avoiding the BN_MONT_CTX incompatibility.

It seems that for x86, you are computing but not actually using n0[1].
Including the BN_BITS2<=32 special case makes life easier for those
who want to plug in some of the other assembler variants that actually
use n0[1] (that assembler code is obviously not part of this CVS
change, but a lot easier to use with it as a starting point).  But the
BN_MONT_CTX incompatibility is pretty annoying, so I'll get rid of it.


+#if defined(MONT_WORD) && defined(OPENSSL_BN_ASM_MONT) && (BN_BITS2<=32)
 +if (!BN_from_montgomery_word(r,tmp,mont)) goto err;
 +#else
  if (!BN_from_montgomery(r,tmp,mont,ctx)) goto err;
 +#endif

  For reference, BN_from_montgomery_word was introduced for performance
  (it eliminates redundant malloc, which gives measurable improvement),
there is no reason to make it BN_BITS2<=32... I mean it should be there
or not be there at all. The reason it was not back-ported with x86_64
  module was that the minor releases are not widely tested by community
  (at least we don't explicitly encourage it prior minor releases) and  we
  formally don't know if code breaks on some platform team members don't
  use on regular basis. x86-mont.pl was not back-ported for approximately
  same reason. Trouble is that while on x86_64 we have to cope with
  limited number of assemblers (fairly recent GNU and Solaris ones, both
  were explicitly tested by me), it's not the case on x86 platforms, where
  we find GNU, nasm and few vendor assemblers of all ages. One can argue
  that I'm being conservative, but isn't it what -stable branch should be
  about?

I know that BN_from_montgomery_word() is entirely optional.  The
reason for this specific #if condition is that this makes sure that it
is excluded for all of the default builds (including x86_64) for
exactly the code stability reasons that you describe.

This rationale certainly deserves a source code comment for the #if, though!

In an intermediate patch, I wasn't using BN_from_montgomery_word() at
all.  However, given that the x86 bn_mul_mont code is only activated
for more adventurous types who actively specify enable-montasm at
compile time for the sake of better performance, my reasoning was (and
is) that there's nothing wrong with making available some related
non-stable code for these as well to further improve performance --
while making sure to change nothing at all for all of the default
builds, as it should be in a stable branch.

Bodo

[openssl.org #1583] assertion

2007-09-18 Thread Bodo Moeller via RT
This transaction appears to have no content


Re: ECDSA verify fails when digest is all zeros in 0.9.8e

2007-05-22 Thread Bodo Moeller
On Thu, May 17, 2007 at 08:43:47AM -0700, [EMAIL PROTECTED] wrote:

 This is not a problem with the algorithm or the protocol.  It is a
 bug in the implementation.  Digest values that are zero are allowed
 by the ANSI X9.62 (and there is no special case for them) and they
 work fine in other implementations.
[...]

 compute_wNAF is at ec_mult.c:188.and is called with scalar pointing
 to a zero BIGNUM.  But compute_wNAF, either by design or by
 accident, can't deal with a scalar that is zero.

Let's say that by accident compute_wNAF was designed such that it
cannot deal with a scalar that is zero:  At least it will cleanly
signal an internal error in this special case rather than going
completely mad.

This clearly is a bug in crypto/ec/ec_mult.c; and here is a patch that
should fix it.  (This will be in the next daily snapshots.)


--- crypto/ec/ec_mult.c 14 Mar 2006 22:48:31 -  1.32.2.1
+++ crypto/ec/ec_mult.c 22 May 2007 09:03:47 -
@@ -194,6 +194,19 @@
int bit, next_bit, mask;
size_t len = 0, j;

+   if (BN_is_zero(scalar))
+   {
+   r = OPENSSL_malloc(1);
+   if (!r)
+   {
+   ECerr(EC_F_COMPUTE_WNAF, ERR_R_MALLOC_FAILURE);
+   goto err;
+   }
+   r[0] = 0;
+   *ret_len = 1;
+   return r;
+   }
+   
if (w <= 0 || w > 7) /* 'signed char' can represent integers with 
absolute values less than 2^7 */
{
ECerr(EC_F_COMPUTE_WNAF, ERR_R_INTERNAL_ERROR);
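For readers following the patch, the same zero-scalar guard can be shown in a standalone sketch of windowed-NAF recoding for a small unsigned scalar. This is illustrative code only, not OpenSSL's compute_wNAF; the names and the digit-selection loop are simplified.

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative wNAF recoding, mirroring the guard added by the patch:
 * a zero scalar yields the single digit 0 instead of entering the
 * generic loop (which assumes at least one set bit).  Digits are
 * returned least-significant first; each nonzero digit is odd with
 * absolute value below 2^w.  Not OpenSSL code. */
static signed char *small_wnaf(unsigned long scalar, int w, size_t *ret_len)
{
    signed char *r;
    size_t len = 0;
    unsigned long bit = 1UL << w;        /* 2^w */
    unsigned long mask = (bit << 1) - 1; /* selects the low w+1 bits */

    if (scalar == 0) {                   /* the patched special case */
        r = malloc(1);
        if (r == NULL)
            return NULL;
        r[0] = 0;
        *ret_len = 1;
        return r;
    }

    r = malloc(sizeof(scalar) * 8 + 1);  /* one digit per bit suffices */
    if (r == NULL)
        return NULL;
    while (scalar != 0) {
        if (scalar & 1) {
            unsigned long window = scalar & mask;
            if (window > bit) {          /* pick the negative digit */
                r[len] = (signed char)((long)window - (long)(mask + 1));
                scalar += mask + 1 - window;   /* scalar - digit */
            } else {
                r[len] = (signed char)window;
                scalar -= window;
            }
        } else {
            r[len] = 0;
        }
        scalar >>= 1;
        len++;
    }
    *ret_len = len;
    return r;
}
```

Without the guard, a zero scalar would skip the loop entirely and hand back a zero-length digit string — the corner case that the real code formerly flagged as an internal error.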


Re: [PATCH] Mitigation for branch prediction attacks

2007-03-28 Thread Bodo Moeller
On Wed, Mar 28, 2007 at 10:56:54AM -0700, Wood, Matthew D wrote:

 We intentionally use BN_with_flags() to set BN_FLG_CONSTTIME for d
 before d mod (p-1) and d mod (q-1) are computed. 
 
 The reason is that BN_mod(rem,num,divisor,ctx) is equivalent to
 BN_div(NULL,(rem),(num),(divisor),(ctx)). BN_div invokes
 BN_div_no_branch only if num has the BN_FLG_CONSTTIME flag on.
 
 Therefore, we need to set BN_FLG_CONSTTIME for d, rather than p-1 and
 q-1.

Yes, of course.  Somehow I had assumed that it's the flag for the
divisor being looked at, by analogy with the BN_mod_inverse() case,
where it's the flag for the modulus that matters.

I guess I could explain this by the time of day when I was reading the
patch (around 1:30 am), but I actually do think that it makes sense
to expect what I expected.

I'll at least have to fix my description in the CHANGES files.  But I
think the best choice here is to make both BN_div() and
BN_mod_inverse() more fool-proof, by having them check
BN_FLG_CONSTTIME on *both* input BIGNUMs and use the no_branch variant
if either of these is set.
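The both-operands rule can be sketched with a toy structure; FLG_CONSTTIME and struct bignum here are illustrative stand-ins, not OpenSSL's BN_FLG_CONSTTIME or BIGNUM.

```c
#include <assert.h>

#define FLG_CONSTTIME 0x04   /* illustrative flag value */

struct bignum {
    unsigned long value;
    int flags;
};

/* Take the hardened no-branch path if *either* operand carries the
 * constant-time flag, so a caller need not know which operand the
 * library happens to inspect internally. */
static int want_no_branch(const struct bignum *num, const struct bignum *divisor)
{
    return (num->flags & FLG_CONSTTIME) || (divisor->flags & FLG_CONSTTIME);
}
```

With this rule, flagging the secret exponent d is enough for d mod (p-1), whether the implementation checks the dividend or the divisor.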

Bodo



Re: [PATCH] Mitigation for branch prediction attacks

2007-03-27 Thread Bodo Moeller
On Tue, Mar 27, 2007 at 02:23:08PM -0700, Wood, Matthew D wrote:

 Changes to OpenSSL 0.9.8d to mitigate branch prediction attacks

Thanks!  I'm working on moving this into the OpenSSL CVS.  I have just
one question: In crypto/rsa/rsa_gen.c, you use BN_with_flags() to set
BN_FLG_CONSTTIME for d before  d mod (p-1)  and  d mod (q-1)  are
computed.  Can I assume that you meant to set the flag for p-1
(stored in variable r1) and q-1 (stored in r2)?

Bodo


Re: Explicit cipher name behaviour has changed.

2006-09-12 Thread Bodo Moeller
On Tue, Sep 12, 2006 at 11:17:16AM +0300, Vlad W. wrote:
 On 9/6/06, Bodo Moeller [EMAIL PROTECTED] wrote:

 Question: Are you sure about the 0.9.7 (or 0.9.7d) behavior regarding
 AES128-SHA?  What is the exact OpenSSL version?

 Yes, I am. Initially, I found the problem when regression test of my
 application failed. Then, I found the same effect in openssl ciphers
 ultility.
 The version was definitely 0.9.7d.

Hm.  I've tried this again in the 0.9.7 branch, using the ssl_ciph.c
version from 0.9.7d, and found that openssl ciphers -v RC4-MD5
correctly gets two ciphersuites (SSLv3 and SSLv2) and that openssl
ciphers -v AES128-SHA incorrectly gets the two ciphersuites
AES256-SHA and AES128-SHA.  However, you reported that openssl
ciphers -v AES128-SHA returns the single cipher in 0.9.7[d], and I
don't see how this can be true for the 0.9.7 branch without additional
patches.

As far as I know, the problem that openssl ciphers -v RC4-MD5
selects just a single ciphersuite should exist in all OpenSSL releases
so far in which openssl ciphers -v AES128-SHA correctly selects just
a single ciphersuite.  There's always one of the two bugs.  The former
was introduced in OpenSSL 0.9.8b as a side-effect of fixing the
latter, which used to exist since AES ciphersuites were added with
the OpenSSL 0.9.7 release.

OpenSSL 0.9.8d will incorporate my patch to solve the problem.


 Thank you very much, your patch solved the problem, for both openssl
 ciphers util's symptom and my application.
 
 However, the fix based on mask matching is still interesting. May be,
 I'm a paranoiac, but I don't like mixing names and bit masks in the
 same mechanism, especially if previous versions worked without it. The
 cipher_id parameter had been added to ssl_cipher_apply_rule function
 only in 0.9.8b version, and I cannot understand how the mechanism
 worked without it in 0.9.7. Anyway, that's rather interesting than
 necessary for now.

Well, I can't really believe that the mechanism worked before.  It can
only work without tricks like the one in the patch if there are
separate bits for AES128 and AES256 (same for Camellia), which is
something that we can do in OpenSSL 0.9.9 by changing the SSL_CIPHER
structure.  (Of course the problem did not exist before OpenSSL 0.9.7,
when the AES ciphersuites were not in OpenSSL.)

Bodo



Re: Explicit cipher name behaviour has changed.

2006-09-12 Thread Bodo Moeller
On Tue, Sep 12, 2006 at 03:41:14PM +0300, Vlad W. wrote:
 On 9/12/06, Bodo Moeller [EMAIL PROTECTED] wrote:

 Hm.  I've tried this again in the 0.9.7 branch, using the ssl_ciph.c
 version from 0.9.7d, and found that openssl ciphers -v RC4-MD5
 correctly gets two ciphersuites (SSLv3 and SSLv2) and that openssl
 ciphers -v AES128-SHA incorrectly gets the two ciphersuites
 AES256-SHA and AES128-SHA.  However, you reported that openssl
 ciphers -v AES128-SHA returns the single cipher in 0.9.7[d], and I
 don't see how this can be true for the 0.9.7 branch without additional
 patches.

 UPDATE: I've just downloaded several tar files from the openssl.org
 and compiled them.
 
 openssl ciphers -v AES128-SHA changed its behaviour between 0.9.7g
 (single ciphersuite) and 0.9.7h (AES256-SHA added).
 
 Regression test is good, isn't it? :)

Thanks!  Comparing the behavior of 0.9.7g and 0.9.7h, I finally found
out what is going on here:

In versions up to 0.9.7g, the AES128-SHA and AES256-SHA ciphersuites
*did* have different bitmap descriptions and thus were treated
differently in ciphersuite processing, because the AES128 ciphersuites
were classified as having MEDIUM strength whereas AES256 went under
HIGH.

With 0.9.7h, the AES128 classification was changed into HIGH (3DES
is called HIGH, so it makes sense to call AES128 HIGH as well,
even though AES256 has higher strength).  Well, this meant that there
was no longer a difference between the bitmaps for AES128-SHA and
AES256-SHA, so now both show up when you intend to select just one of
them.  The first attempted fix to this (in the 0.9.8 and 0.9.9
branches only) caused the SSLv2/SSLv3 problem that you reported,
and which will be corrected in the next releases.  The combined patch
will also go into the next release for the 0.9.7 branch.

Bodo



Re: [patch] rsa_locl.h

2006-09-08 Thread Bodo Moeller
On Thu, Sep 07, 2006 at 03:06:47PM +0200, Gisle Vanem wrote:

 crypto/rsa/rsa_locl.h wrongly uses 'size_t' in some arguments. It should 
 match the implementation in crypto/rsa/rsa_sign.c (what happened to this 
 file?). A patch:
 
 --- orig/crypto/rsa/rsa_locl.h   2006-08-28 19:01:02 +0200
 +++ crypto/rsa/rsa_locl.h 2006-08-29 15:30:50 +0200
 @@ -1,4 +1,4 @@
 -extern int int_rsa_verify(int dtype, const unsigned char *m, size_t m_len,
 -   unsigned char *rm, size_t *prm_len,
 -   const unsigned char *sigbuf, size_t siglen,
 +extern int int_rsa_verify(int dtype, const unsigned char *m, unsigned int m_len,
 +unsigned char *rm, unsigned int *prm_len,
 +const unsigned char *sigbuf, unsigned int siglen,
RSA *rsa);

Makes sense.  I'll put this into the CVS (applicable to the 0.9.9-dev
branch only, if anyone is wondering -- i.e., this mismatch isn't in
any of the releases).



Re: Explicit cipher name behaviour has changed.

2006-09-06 Thread Bodo Moeller
On Sun, Sep 03, 2006 at 08:51:50PM +0200, Vlad W. wrote:

 Working on openssl library upgrade from 0.9.7d to 0.9.8a I found a
 change in explicit cipher name behaviour.
 
 E.g., in version 0.9.7
 openssl ciphers -v RC4-MD5 gets 2 ciphers named RC4-MD5, for SSLv3 and 
 SSLv2,
 and
 openssl ciphers -v AES128-SHA gets the single cipher, as was expected.
 
 In 0.9.8a this command in the first case behaves in the same way, but
 in the second case both AES128-SHA and AES256-SHA was returned.
 
 I found a change in the version 0.9.8b (Check-in Number: 15185) which
 has fixed the problem in the case of AES ciphers.
 
 However, the same change affects the case of the same name and
 different protocols, e.g., there is only SSLv3 RC4-MD5 cipher in the
 first example output. The similar problem occur in any case when same
 cipher name exist in both SSLv3 and SSLv2: when first cipher_id is
 found in ssl_cipher_process_rulestr function (from the check-in
 15185), the function does not continue the searching process.
 
 Thus, both openssl 0.9.8a and 0.9.8b behave differently from 0.9.7
 branch. I'm confused which behaviour is correct, but the both of the
 new ones seem to be problematical. They cause to either wrong cipher
 using (AES128-SHA instead of AES256-SHA in 0.9.8a) or SSLv2 connection
 failure (in 0.9.8b).
 
 Could you comment that and to suggest a solution?

First, clearly you have found a bug.

Question: Are you sure about the 0.9.7 (or 0.9.7d) behavior regarding
AES128-SHA?  What is the exact OpenSSL version?  Given the structure
of the internal tables, code that matches two ciphersuites for
RC4-MD5 (SSLv2 and SSLv3) should match two ciphersuites for
AES128-SHA as well (AES128-SHA and AES256-SHA).  This is because
there currently only is a single bit that indicates AES, be it AES128
or AES256, so the AES128-SHA and AES256-SHA ciphersuites are described
through the same bit pattern in the table that is used for ciphersuite
string processing.  If AES128-SHA or RC4-MD5 is interpreted as a
rule for ciphersuite matching (where said rule is given by the bit-map
description of a ciphersuite with the respective name), then there
will be multiple ciphersuites found for the respective pattern.

Anyway.  The proper way to fix this in OpenSSL is to extend the bit
masks so that we can get two bits for AES128 and AES256 instead of
just a single AES bit (and similarly for Camellia).  Then AES128-SHA
will give a pattern that matches just the AES128-SHA ciphersuite,
and not AES256-SHA as well.
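The over-matching can be reproduced with a toy bitmap model (all flag values illustrative, not OpenSSL's algorithm masks):

```c
#include <assert.h>

/* Old scheme: AES128-SHA and AES256-SHA share one AES bit, so their
 * bitmaps are identical.  New scheme (as proposed above): distinct
 * bits for AES128 and AES256.  Values are illustrative only. */
#define O_AES    0x01
#define O_SHA1   0x02

#define N_AES128 0x01
#define N_AES256 0x02
#define N_SHA1   0x04

/* A rule matches a ciphersuite when every rule bit is set in the
 * suite's bitmap. */
static int rule_matches(unsigned long suite, unsigned long rule)
{
    return (suite & rule) == rule;
}
```

Under the old scheme a rule built from AES128-SHA's bitmap necessarily selects AES256-SHA too; with separate bits, each rule selects exactly one suite.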

For now, please try the following patch, play around with various
ciphersuite description strings that you can think of, and let me know
if it appears to work properly.  (This patch is for the 0.9.8 branch,
trivial manual editing would be required for applying it to the 0.9.9
branch.)


Index: ssl_ciph.c
===
RCS file: /e/openssl/cvs/openssl/ssl/ssl_ciph.c,v
retrieving revision 1.49.2.11
diff -u -r1.49.2.11 ssl_ciph.c
--- ssl_ciph.c  2 Jul 2006 14:43:21 -   1.49.2.11
+++ ssl_ciph.c  6 Sep 2006 09:13:31 -
@@ -565,7 +565,7 @@
*ca_curr = NULL;/* end of list */
}
 
-static void ssl_cipher_apply_rule(unsigned long cipher_id,
+static void ssl_cipher_apply_rule(unsigned long cipher_id, unsigned long ssl_version,
unsigned long algorithms, unsigned long mask,
unsigned long algo_strength, unsigned long mask_strength,
int rule, int strength_bits, CIPHER_ORDER *co_list,
@@ -592,9 +592,10 @@
 
	cp = curr->cipher;
 
-   /* If explicit cipher suite match that one only */
+	/* If explicit cipher suite, match only that one for its own protocol version.
+	 * Usual selection criteria will be used for similar ciphersuites from other version! */
 
-   if (cipher_id)
+	if (cipher_id && (cp->algorithms & SSL_SSL_MASK) == ssl_version)
{
		if (cp->id != cipher_id)
continue;
@@ -731,7 +732,7 @@
 */
	for (i = max_strength_bits; i >= 0; i--)
		if (number_uses[i] > 0)
-   ssl_cipher_apply_rule(0, 0, 0, 0, 0, CIPHER_ORD, i,
+   ssl_cipher_apply_rule(0, 0, 0, 0, 0, 0, CIPHER_ORD, i,
co_list, head_p, tail_p);
 
OPENSSL_free(number_uses);
@@ -745,7 +746,7 @@
unsigned long algorithms, mask, algo_strength, mask_strength;
const char *l, *start, *buf;
int j, multi, found, rule, retval, ok, buflen;
-   unsigned long cipher_id = 0;
+   unsigned long cipher_id = 0, ssl_version = 0;
char ch;
 
retval = 1;
@@ -836,6 +837,7 @@
 */
 j = found = 0;
 cipher_id = 0;
+ssl_version = 0;
 while (ca_list[j])
   

Re: what the heck is with camellia update?

2006-07-25 Thread Bodo Moeller
On Thu, Jul 20, 2006 at 03:26:55PM +0200, Andy Polyakov wrote:

 The time span between original submission and update suggests that 
 contributors were planning the update, meaning that the code was 
 considered work in progress all along. If so, why it went into stable 
 branch?

I understand that two different implementations were developed in
parallel, and in the end the one finished later was found to provide
better performance.

Notice that while the new code indeed is in the stable branch, it is
completely excluded from compilation with the default settings
(implicit -no_camellia).  Otherwise I would not have accepted the
patch for 0.9.8-stable.


 Then this update quality... It's just wrong on several points. 
 Most notably
 
 #ifdef L_ENDIAN
 ...
 #else /* big endian */
 
 The expectation is
 
 #ifdef L_ENDIAN
  little-endian
 #elif defined(B_ENDIAN)
  big-endian
 #else
  endian *neutral*!
 #endif
 
 I mean undefined L_ENDIAN does not mean big-endian, not in OpenSSL 
 context. Furthermore
 
#if (defined (__GNUC__) && !defined(i386))
 #define CAMELLIA_SWAP4(x) \
   do{\
asm("bswap %1" : "+r" (x));\
   }while(0)
 
 So if you try to compile with gcc on non-x86 platform, it will insist on 
 injecting x86 instruction...

True, this does not make sense.
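An endian-neutral fallback for the final #else branch would assemble the word from bytes instead of assuming an x86 instruction, e.g. (illustrative):

```c
#include <assert.h>

/* Endian-neutral big-endian 32-bit load: build the word from bytes,
 * so the same code is correct on any platform and compiler, with no
 * inline assembly required. */
static unsigned int load_be32(const unsigned char *p)
{
    return ((unsigned int)p[0] << 24) | ((unsigned int)p[1] << 16) |
           ((unsigned int)p[2] <<  8) |  (unsigned int)p[3];
}
```

A compiler is free to recognize this pattern and emit a single bswap (or nothing, on big-endian hosts), so portability need not cost much performance.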

  Update appears as if they were trying to 
 improve performance, but I observe over 30% *degradation* on x86...

The stated reason for switching to the new implementation was
performance; however, I don't know what specific platforms this
applies to.

A new patch taking into account the compilation problems found so far
(including the ones pointed out by Gisle Vanem) should be on the way ...




Re: [openssl.org #1346] Re: SSL_accept concurrency in 0.9.7j and 0.9.8b

2006-06-23 Thread Bodo Moeller
On Tue, Jun 20, 2006 at 07:03:49PM +0200, Kurt Roeckx wrote:

 Applications are also expected to provide a thread ID callback by
 calling CRYPTO_set_id_callback(), although the failure to do so should
 not be a problem on Linux where different threads run with different
 PIDs, since OpenSSL uses the PID as a default for the thread ID.

 I believe this is the only LinuxThreads implementation that
 you're talking about.  Since kernel 2.6 there is also support for
 the Native POSIX Thread Library (NPTL).  Afaik, that doesn't
 change PID for each process anymore.  I guess in that case one
 must use pthread_self(), which returns a pthread_t.

Yes, true.  Sorry for the incomplete and incorrect description.


 (OpenSSL requires a thread ID that is an unsigned long.  Not all
 systems may provide this, but in practice, you can work around this
 problem by casting a pointer of any per-thread object in shared memory
 space into an unsigned long; e.g., do foo=malloc(1); and then use
 (unsigned long)(void *)foo as the thread ID.  You might want to add
 assert(sizeof(void *) <= sizeof(long)); to the program if you use
 this approach.)

 This would be a problem on platforms like windows x64 which are
 LLP64, where a long is still 32 bit and a pointer is 64 bit.
 Fortunately, we don't need that on windows.

OK, I have implemented something new for OpenSSL 0.9.9-dev
(this will become available in openssl-SNAP-20060624.tar.gz
at ftp://ftp.openssl.org/snapshot/ in about 12 hours, and
of course in later 0.9.9-dev snapshots): In addition to

void CRYPTO_set_id_callback(unsigned long (*func)(void));

there will be

void CRYPTO_set_idptr_callback(void *(*func)(void));

Same thing, just here the type of the ID is void * rather than
unsigned long.  Thus the malloc() trick will work.  OpenSSL compares
both IDs and believes that it is in a previous thread only if both
values agree with what they previously were.

The default value I have chosen for the pointer-type thread ID (if an
application does not provide a callback) is errno.  For most, if not
all, platforms, this default might end all worries about
CRYPTO_set_id_callback().
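The errno-based default mentioned above can be sketched as follows; the function name is illustrative, not OpenSSL's actual callback symbol.

```c
#include <assert.h>
#include <errno.h>

/* On C99/POSIX systems errno designates a distinct per-thread object,
 * so its address can serve as a default pointer-type thread ID.  The
 * returned pointer is only ever compared, never dereferenced. */
static void *default_idptr(void)
{
    return (void *)&errno;
}
```

Within one thread the value is stable, and two live threads can never return the same pointer, which is exactly what an opaque thread ID needs.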



Re: [CVS] OpenSSL: openssl/ CHANGES FAQ openssl/crypto/bn/ bn.h bn_blind.c...

2006-06-23 Thread Bodo Moeller
On Fri, Jun 23, 2006 at 06:42:10PM +0200, Kurt Roeckx wrote:
 On Fri, Jun 23, 2006 at 04:36:07PM +0100, Joe Orton wrote:

   Log:
 New functions CRYPTO_set_idptr_callback(),
 CRYPTO_get_idptr_callback(), CRYPTO_thread_idptr() for a 'void *' type
 thread ID, since the 'unsigned long' type of the existing thread ID
 does not always work well.

 To clarify this, if CRYPTO_get_idptr_callback() is used, is it 
 unnecessary to also call CRYPTO_set_id_callback?

Yes, exactly.

 Does C9x actually guarantee that you can take the address of errno?

 From C99, section 7.5:
[...]
 And:
6.5.3.2  Address and indirection operators
[...]
 I believe you can take the address of errno.

Yes, that is what I too was reading from the standard.



Re: [openssl.org #1346] Re: SSL_accept concurrency in 0.9.7j and 0.9.8b

2006-06-20 Thread Bodo Moeller
On Fri, Jun 09, 2006 at 07:02:36PM +0200, Kurt Roeckx wrote:
 On Fri, Jun 09, 2006 at 12:58:56PM +0200, Howard Chu via RT wrote:
 Howard Chu wrote:

 I'm seeing a lot of bad record mac errors when receiving a lot of 
 connection requests at once. It sounds the same as this email
 http://www.redhat.com/archives/rhl-list/2005-May/msg01506.html
 which unfortunately was never replied to.

 Surrounding the SSL_accept call with its own mutex seems to resolve the 
 problem. Is that supposed to be necessary?

 Given the lack of response here, we're tracking this now as
 http://www.openldap.org/its/index.cgi/Software%20Bugs?id=4583
 
 The same problem occurs with 0.9.8b.

 There are various bugs open in Debian that might also be related
 to this:
 http://bugs.debian.org/198746
 http://bugs.debian.org/212410

Please try verifying the bugs using the latest snapshot of your
preferred version branch (0.9.7, 0.9.8, or 0.9.9-dev) from
ftp://ftp.openssl.org/source and make sure that the affected
multi-threaded applications do provide a locking callback by calling
CRYPTO_set_locking_callback().  There are some recent changes
in OpenSSL that may help avoid the bugs you are observing.

Applications are also expected to provide a thread ID callback by
calling CRYPTO_set_id_callback(), although the failure to do so should
not be a problem on Linux where different threads run with different
PIDs, since OpenSSL uses the PID as a default for the thread ID.

(OpenSSL requires a thread ID that is an unsigned long.  Not all
systems may provide this, but in practice, you can work around this
problem by casting a pointer of any per-thread object in shared memory
space into an unsigned long; e.g., do foo=malloc(1); and then use
(unsigned long)(void *)foo as the thread ID.  You might want to add
assert(sizeof(void *) <= sizeof(long)); to the program if you use
this approach.)
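The malloc() trick can be sketched as follows (hypothetical helper; a real application would keep the marker in thread-local storage and free it at thread exit — a static suffices for this one-thread sketch):

```c
#include <assert.h>
#include <stdlib.h>

/* Allocate one byte per thread and use its address, cast to unsigned
 * long, as the thread ID.  Heap addresses are unique among live
 * allocations, so distinct threads get distinct IDs. */
static unsigned long thread_id_from_marker(void)
{
    static void *marker = NULL;   /* per-thread in real code */

    if (marker == NULL)
        marker = malloc(1);
    /* The suggested sanity check; note it fails on LLP64 systems such
     * as 64-bit Windows, where long is narrower than a pointer. */
    assert(sizeof(void *) <= sizeof(unsigned long));
    return (unsigned long)marker;
}
```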




[openssl.org #1346] Re: SSL_accept concurrency in 0.9.7j and 0.9.8b

2006-06-20 Thread Bodo Moeller via RT

Current snapshots use a more thorough locking approach that takes into
account inconsistent cache views on multi-processor or multi-core
systems (where consistency can be reached by obtaining locks).  The
application has to call CRYPTO_set_id_callback() for OpenSSL to work
properly.


Re: Error in 20060610 releases

2006-06-10 Thread Bodo Moeller
On Sat, Jun 10, 2006 at 06:25:33AM -0600, The Doctor wrote:

[...]
 making all in crypto/evp...
 make: don't know how to make e_camellia.o. Stop
 *** Error code 1

Oops ... a new file that I forgot to add to the CVS.  This will be
fixed in the next snapshot (20060611).



Re: s23_srvr.c

2005-10-24 Thread Bodo Moeller
On Mon, Oct 24, 2005 at 04:08:19PM +0200, Peter Sylvester wrote:

 [...]  I.e., a client that connects to a
 server can *either* support SSL 2.0 servers *or* use TLS extensions,
 but not both.
 
 The SSL 3.0 and TLS 1.0 specifications have the forward compatibility
 note about extra data at the end of the Client Hello, so s23_srvr.c
 should tolerate always extra data in a Client Hello that does not use
 the SSL 2.0 format.

 A client that fills extra data into the compatible data must indeed
 be prepared that a strict v2 server rejects the client hello, and repeat
 with a correct one. Here we are talking about the server mode.
 
 Would it hurt Openssl to be a tolerant server, and ignore the additional
 data in v2 mode, because that doesn't hurt as far as I understand.

Hm.  Probably being this liberal wouldn't actually hurt, but I don't
see a good case for doing this -- it helps only with ill-behaving
clients.  I think it's better to fix the latter (should they exist) and
to generally encourage implementors to step away from
2.0-compatibility.  Accepting this new extended 2.0 format might
perpetuate a data format that is already obsolete.



Re: s23_srvr.c

2005-10-20 Thread Bodo Moeller
On Fri, Oct 07, 2005 at 11:17:47AM +0200, Peter Sylvester wrote:

 In s23_srvr.c there is a length test
 
  if ((csl+sil+cl+11) != s->packet_length)
{
SSLerr(SSL_F_GET_CLIENT_HELLO,SSL_R_RECORD_LENGTH_MISMATCH);
 
 in case that the record contains a SSLV3 or TLSv1 header.
 IMO the != should be a > since tls allows additional
 data in extensions.

This length test occurs in the branch of ssl23_get_client_hello() that
is responsible for parsing a Client Hello sent in SSL 2.0 backwards
compatible format (where s23_srvr.c has to translate from SSL 2.0 into
SSL 3.0/TLS 1.0 format so that s3_srvr.c can continue processing the
handshake).  Backwards compatible Client Hello messages can't include
additional data because this would confuse SSL 2.0 servers, they
strictly follow the format

char MSG-CLIENT-HELLO
char CLIENT-VERSION-MSB
char CLIENT-VERSION-LSB
char CIPHER-SPECS-LENGTH-MSB
char CIPHER-SPECS-LENGTH-LSB
char SESSION-ID-LENGTH-MSB
char SESSION-ID-LENGTH-LSB
char CHALLENGE-LENGTH-MSB
char CHALLENGE-LENGTH-LSB
char CIPHER-SPECS-DATA[(MSB<<8)|LSB]
char SESSION-ID-DATA[(MSB<<8)|LSB]
char CHALLENGE-DATA[(MSB<<8)|LSB]

after the two-byte record header.  I.e., a client that connects to a
server can *either* support SSL 2.0 servers *or* use TLS extensions,
but not both.

The SSL 3.0 and TLS 1.0 specifications have the forward compatibility
note about extra data at the end of the Client Hello, so s23_srvr.c
should tolerate always extra data in a Client Hello that does not use
the SSL 2.0 format.
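The strict length check this format implies can be sketched as follows (illustrative parsing only, not s23_srvr.c; offsets count the 2-byte record header plus the 9 fixed bytes of the layout above):

```c
#include <assert.h>
#include <stddef.h>

/* The 2-byte record header, 9 fixed bytes (message type, version,
 * three 16-bit lengths), and the three variable-length fields must
 * account for the whole record, hence csl + sil + cl + 11. */
static int ssl2_hello_length_ok(const unsigned char *packet, size_t packet_length)
{
    size_t csl, sil, cl;

    if (packet_length < 11)
        return 0;
    csl = ((size_t)packet[5] << 8) | packet[6];   /* CIPHER-SPECS-LENGTH */
    sil = ((size_t)packet[7] << 8) | packet[8];   /* SESSION-ID-LENGTH   */
    cl  = ((size_t)packet[9] << 8) | packet[10];  /* CHALLENGE-LENGTH    */
    return csl + sil + cl + 11 == packet_length;
}
```

Tolerating trailing data would mean rejecting only when csl + sil + cl + 11 exceeds packet_length, rather than on any mismatch.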




Re: question concerning SSL_ctrl and SSL_CTX_ctrl etc

2005-10-20 Thread Bodo Moeller
On Thu, Oct 13, 2005 at 01:41:56PM +0200, Peter Sylvester wrote:

 In ssl/ssl_lib.c there is a lot of functionality of get/set implemented
 through a SSL_ctrl or SSL_CTX_ctrl, but some are implemented
 directly as functions.
 
 There may be some logic behind that but I am not sure which one.
 One thing seems to be that the get function which need a pointer
 are implemented directly whilst some functions that return integers
 are in a ctrl.
 
 There is for example the GET/SET READ_AHEAD in a ctrl returning
 an int, but set/get_verify_mode etc. are all independent
 functions, and, well, there is a void SSL_set_read_ahead
 which duplicates the functionality.
 
 It seems that there had been an effort to move accessors to
 the SSL_ctrl and SSL_CTX_ctrl, since in older versions the
 SSL_ctrl was basically empty and  just an interface to the
 method dependant code. There is also the GET_SESSION_REUSED
 which is common to the v2 and v3, thus could be moved to
 ssl_lib.c
 
 It may be that some stuff is left there to maintain compatibility,
 i.e., the explicit functions like SSL_set_read_ahead
 
 It would be nice to have a kind of roadmap somewhere (which may
 already exist) to indicate whether the xxx_ctrl are 'the future'
 or not, and if yes, how to provide the 'get' functions for structures
 like (SSL_get_ctx).

I don't think there is a clear policy on this ...

An advantage of specific functions is that you can define appropriate
prototypes.  Not everything will fit into the SSL_ctrl API, and if it
does, you always have the 'long' and 'void *' arguments even if
'int' or a specific pointer may be more appropriate.

An advantage of the SSL_ctrl (and SSL_CTX_ctrl) approach is that these
are automatically method-dependent, i.e. you can put things into
ssl3_ctrl() that don't apply to SSL 2.0.

Often it may make sense to define a specific function with appropriate
prototype, but implement it through SSL_ctrl() so that its action is
method-dependent.
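The trade-off can be illustrated with a toy ctrl-style dispatcher (names illustrative, not the real SSL_ctrl interface):

```c
#include <assert.h>

/* One entry point multiplexes commands through (cmd, larg): the action
 * stays method-dependent, but per-command prototypes are given up. */
enum { CMD_SET_READ_AHEAD, CMD_GET_READ_AHEAD };

struct conn {
    long read_ahead;
};

static long conn_ctrl(struct conn *c, int cmd, long larg)
{
    switch (cmd) {
    case CMD_SET_READ_AHEAD:
        c->read_ahead = larg;
        return 1;
    case CMD_GET_READ_AHEAD:
        return c->read_ahead;
    default:
        return 0;   /* command unknown to this method */
    }
}

/* A specific function with a proper prototype can still be implemented
 * on top of the generic ctrl, combining both advantages. */
static void conn_set_read_ahead(struct conn *c, int yes)
{
    conn_ctrl(c, CMD_SET_READ_AHEAD, yes);
}
```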



Re: TLS Extension support - Server Name Indication

2005-10-20 Thread Bodo Moeller
On Thu, Oct 13, 2005 at 07:23:29PM +0200, Peter Sylvester wrote:

 I have put a version of openssl that supports the TLS servername extension
 into our web server. It is based on a openssl development snapshot of 
 last week.
 We have split off and simplified the code that was done together with SRP
 last year, and corrected known bugs.
 
 See http://www.edelweb.fr/EdelKey/files/openssl-0.9.8+SERVERNAME.tar.gz
 
 see also http://www.edelweb.fr/EdelKey/
 
 The snapshot was one day before the 0.9.8a announcement, [...]

 basically s_client and s_server have been slightly enhanced and in ssl
 there the modules that have OPENSSL_NO_TLSEXT contain the new
 functionality.
 In s23_lib.c it is possible to announce a TLS extension and
 to ignore it on the server side as with s3_lib.
 
 There is one functionality which is not necessary to support the servername
 extension, but only to allow a renegotiation of a session using another
 servername, e.g. when a web server receives a Host: header.  This is not yet
 fully tested, and I am not sure whether the implemenation is good.
 The idea is to switch the ssl->ctx pointer to another context. The reference
 counting for the ctx is simple, but during an SSL_new there is some
 data cached down into the SSL, and, in particular the interesting
 one, the server's certificate. It may not be necessary to switch the
 actual CTX, but rather change the SSL to cache from the other CTX.

Great!  Can you provide your changes in 'diff -u' format, relative to
the snapshot it was based on?



Re: [openssl.org #1070] PATCH: fix for memory leaks in smime verification

2005-05-16 Thread Bodo Moeller
On Mon, May 16, 2005 at 06:30:26PM +0200, Riaz Rahaman via RT wrote:

 I don't see any patch attached?

The message came to the openssl-dev mailing list via the OpenSSL RT2
request tracker.  Attachments are not forwarded on this path, but
can be viewed on the web -- in this case:

   https://www.aet.tu-cottbus.de/rt2/Ticket/Display.html?id=1070

(Log in with name guest and password guest.)



[openssl.org #1070] PATCH: fix for memory leaks in smime verification

2005-05-16 Thread Bodo Moeller via RT

Thanks, this should be fixed in the next snapshots.


Re: version 2 is used for Client Hello when version 3 was requested in client code

2005-05-12 Thread Bodo Moeller
On Thu, May 12, 2005 at 09:40:38AM +0200, Thomas wrote:
 On Friday, 13 May 2005 20:32, Bodo Moeller wrote:
  On Wed, May 11, 2005 at 02:14:23PM +0200, Thomas Biege wrote:

 You see I use SSLv23_method() and later SSL_CTX_set_options(ctx,
 SSL_OP_ALL | SSL_OP_NO_SSLv2); to disable SSLv2 support.

 Is it normal that the Client Hello message is SSLv2 and later TLS is
 used?

 Yes.  In the past this used to be necessary because some SSL 3.0
 implementations were confused by seeing TLS 1.0 records in the Client
 Hello.  But now these issues should be history.

 Why wasn't SSLv3(.0) be used? Or will only headers of SSLv3(.1) be
 identified as real SSLv3? I am confused a bit b/c everyone tells you that
 SSLv2 isn't secure and so usage of it should be avoided... and then it was
 used silently. Maybe its insecurity doesn't matter in this early stage.

With SSL_OP_NO_SSLv2, SSL 2.0 was never used, so its security problems
did not apply.  The SSL 2.0 compatible client hello message that was
generated by SSLv23_client_method() is just a different way of
arranging essentially the same information that occurs in an SSL 3.0
or TLS 1.0 client hello message.  (You just can't list compression
techniques in the SSL 2.0 format, and you can't include TLS
extensions.  TLS extensions are not yet supported by OpenSSL, though.)

When the SSL 2.0 compatible client hello is *not* used, the data sent
by the client contains two version numbers: One is the version number
in the record headers (the SSL 2.0 format does not have anything like
this); the second is the version number given in the actual client
hello message (the maximum protocol version supported by the client).
In the past when many servers supported only SSL 2.0 and 3.0 but not
TLS 1.0, setting the version number in the record header to 3.1 (for
TLS 1.0) could lead to some servers rejecting such packets because,
not recognizing the record header format, they did not even look at
the actual client hello message -- clients had to use the SSL 2.0
format to avoid this server bug.  By now, this is no longer a problem,
and even when clients use a nonsense version number such as 3.42,
servers will simply reply with the maximum protocol version that they
support (i.e., either TLS 1.0 or SSL 3.0).
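The tolerant server behaviour described in the last sentence amounts to a minimum over version codes, encoded here as (major << 8) | minor; this is an illustrative sketch, not OpenSSL's negotiation code.

```c
#include <assert.h>

/* The server replies with the highest protocol version it supports
 * that does not exceed the client's offered maximum, so even an
 * unknown client version like 3.42 gets a sensible answer. */
static int negotiate_version(int client_max, int server_max)
{
    return client_max < server_max ? client_max : server_max;
}
```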



Re: PATCH: p2q (or rather q2p) 'RSA' option (also TSU NOTIFICATION)

2005-04-27 Thread Bodo Moeller
On Tue, Apr 26, 2005 at 01:25:03PM -0700, Marius Schilder wrote:

 See attached, I removed the extra field I added to
 rsa_st. All computation is done in-line, very
 unintrusive patch now. Any reason this can't make it
 in the dist?

Speed improvements like this are certainly interesting, but
unfortunately there *are* reasons why this particular approach cannot
be used: namely, US patent 6,396,926 and similar patents elsewhere.

[As for the patch itself, I think it would have to include additional
comments that say what it is you are doing.  By this I don't mean a
detailed description of the algorithm, but a brief explanation of the
basic idea and pointers to a reference for those interested in further
background.  Everyone knows more or less what RSA is, but this
particular variant needs some additional explanation.]


Extending the RSA implementation to ordinary multi-prime RSA (with a
square-free modulus, i.e., p_1 * ... * p_m with all prime factors p_i
distinct) would be less problematic because, while there is a bogus
multi-prime RSA patent, multi-prime RSA is in fact already described
in the original RSA patent.  Of course, for more than two distinct
primes we can't get away with kludging the new functionality into the
current function API and data structures as for the p^2 * q variant,
so this would require a more thorough overhaul of the code.
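For illustration, a toy Python sketch of what multi-prime decryption
with a square-free modulus looks like mathematically: reduce the
exponent modulo each p_i - 1, exponentiate per prime, and recombine
with the CRT.  The parameters below are made up and far too small for
real use, and the code is not constant-time:

```python
from math import prod

def crt(residues, moduli):
    """Recombine per-prime residues via the Chinese Remainder Theorem
    (moduli must be pairwise coprime, which square-freeness guarantees)."""
    n = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        q = n // m
        x += r * q * pow(q, -1, m)
    return x % n

def multiprime_rsa_decrypt(c, d, primes):
    """Multi-prime RSA private-key operation for n = p_1 * ... * p_m."""
    residues = [pow(c, d % (p - 1), p) for p in primes]
    return crt(residues, primes)

# Toy parameters (three primes, nowhere near real key sizes).
primes = [11, 13, 17]
n = prod(primes)                        # 2431
phi = prod(p - 1 for p in primes)       # 1920
e = 7
d = pow(e, -1, phi)
c = pow(42, e, n)
print(multiprime_rsa_decrypt(c, d, primes))   # 42
```

The per-prime exponentiations are what make the multi-prime variant
faster than a plain c^d mod n, at the cost of the extra CRT step.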



[openssl.org #801] Memory leak in pem_lib.c

2005-03-11 Thread Bodo Moeller via RT

Thanks, this will be fixed in the next snapshots.


[openssl.org #1017] Bug in EC_GROUP_cmp

2005-03-09 Thread Bodo Moeller via RT

Thanks for the correction.  This will be in the next snapshot.



[openssl.org #674] bug report - lock_dbg_cb

2003-08-14 Thread Bodo Moeller via RT

Thanks for the report.  This has now been corrected in the CVS.



Re: [openssl.org #555] RSA blinding MT patch

2003-04-03 Thread (Bodo Moeller) via RT

Tom Wu via RT [EMAIL PROTECTED]:
 Bodo Moeller via RT wrote:

 The next round of snapshots (20030402, to appear at
 ftp://ftp.openssl.org/snapshot;type=d in about six hours)
 should solve the multi-threading problems.  Please test them when they
 are available.

 The good news is that the fix in the snapshot fixes the problem, but the 
 bad news is that it seems to kill performance in my benchmarks.  On a 
 P3-750 running Linux, I get 106 RSA sign/s (1024-bit) with my patch, 
 regardless of the number of simultaneous threads.  With the snapshot 
 fix, I get 102 RSA sign/s with one thread, but if I try with 2 or more 
 threads it drops down to 81 sign/s.
 
 It's quite possible that I've misconfigured something on my own end, but 
 I suspect that it is more likely that the local blinding operation is 
 slowing things down.

Yes, surely this is what is happening: local blinding is somewhat
expensive.

 In the case where the blinding struct is owned by a different thread
 from the one doing an RSA op, the code has to do a modexp and a mod
 inverse, as opposed to the two squarings that the update normally
 does.

These two squarings should be changed, though -- OpenSSL should use a
random new blinding factor after a couple of RSA secret key operations
instead of predictably updating the factor (I didn't want to change
all of this at once).  So the update code will be slower in the
future; not every time, but sometimes.

 I believe that on most if not all platforms, the cost of putting
 critical sections around the blinding convert/update will be
 drastically smaller than the cost of the extra local blinding
 computation.

This depends: on a single-processor machine, indeed additional locking
should usually be faster than using local blinding; but for
multi-processor systems, the cost of locking could be quite high.


There are some strategies that could be used to make blinding faster
without expensive locking, but these would require incompatible
changes to the RSA and/or BN_BLINDING structures (the addition of
thread_id is an incompatible change in theory, but it's one that does
not directly affect applications).
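The base-blinding idea under discussion can be sketched in a few lines
of Python (a toy illustration, not the OpenSSL implementation): blind
the input with a fresh r^e before the secret-key exponentiation, then
strip the factor r afterwards, so the private-key operation never sees
the caller's input directly:

```python
import secrets
from math import gcd

def blinded_rsa_sign(m, d, e, n):
    """RSA secret-key operation with base blinding: compute
    (m * r^e)^d * r^(-1) mod n, which equals m^d mod n."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:              # r must be invertible mod n
            break
    blinded = (m * pow(r, e, n)) % n    # "convert": multiply in r^e
    s = pow(blinded, d, n)              # secret-key op on blinded input
    return (s * pow(r, -1, n)) % n      # "unblind": strip the factor r

# Toy parameters, far too small for real use.
p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))
assert blinded_rsa_sign(1234, d, e, n) == pow(1234, d, n)
```

Picking a fresh random r each time corresponds to the more expensive
"local blinding" path above; reusing and updating a cached r (e.g. by
squaring) is the cheaper, less conservative variant being debated.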


-- 
Bodo Möller [EMAIL PROTECTED]
PGP http://www.informatik.tu-darmstadt.de/TI/Mitarbeiter/moeller/0x36d2c658.html
* TU Darmstadt, Theoretische Informatik, Alexanderstr. 10, D-64283 Darmstadt
* Tel. +49-6151-16-6628, Fax +49-6151-16-6036


