Re: [openssl-dev] Re: Verify X.509 certificate, openssl verify returns bad signature

2010-08-30 Thread Erwann ABALEA
On 29 August 2010 (Hodie IV Kal. Sep. MMX), Mounir IDRASSI wrote:
[...]
 Specifically, Peter Gutmann in his X.509 Style Guide says this about this
 field: "If you're writing certificate-handling code, just treat the
 serial number as a blob which happens to be an encoded integer."

This is the kind of advice that pushes programmers to allocate
fixed-size fields in databases and to assume a certificate's serial
number will always fit that size. That is also bad practice.
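Gutmann's advice is easy to follow in practice: a DER INTEGER carries its own length, so the serial number can be pulled out and stored as a variable-length byte string. A minimal sketch (not a full DER parser; `parse_der_serial` is an illustrative name, not an OpenSSL function):

```python
# Minimal sketch: treat a certificate serial number as an opaque byte
# blob rather than a fixed-size machine integer.  A DER INTEGER is
# tag 0x02, a length, then big-endian content octets of arbitrary size.

def parse_der_serial(der: bytes) -> bytes:
    """Return the raw content octets of a DER-encoded INTEGER."""
    if not der or der[0] != 0x02:
        raise ValueError("not a DER INTEGER")
    if der[1] < 0x80:                      # short-form length
        length, offset = der[1], 2
    else:                                  # long-form length
        n = der[1] & 0x7F
        length = int.from_bytes(der[2:2 + n], "big")
        offset = 2 + n
    return der[offset:offset + length]

# A 20-octet serial (the RFC 5280 maximum) does not fit in a 64-bit
# integer column -- store the blob as-is.
serial = parse_der_serial(b"\x02\x14" + bytes(range(1, 21)))
assert len(serial) == 20
```

RFC 5280 allows serial numbers up to 20 octets, so even a 64-bit integer column is too small; a variable-length binary column sidesteps the problem entirely.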

-- 
Erwann ABALEA erwann.aba...@keynectis.com
Département R&D
KEYNECTIS
__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: [openssl.org #1833] [PATCH] Abbreviated Renegotiations

2010-08-30 Thread Robin Seggelmann via RT

On Aug 27, 2010, at 2:32 PM, Stephen Henson via RT wrote:

 [seggelm...@fh-muenster.de - Fri Aug 27 11:34:17 2010]:
 
 Unfortunately, there was newer code which was not yet covered by the
 patch. This caused an abbreviated handshake to fail.
 
 
 Applied now, thanks.
 
 Note that since we need to retain binary compatibility between 1.0.0 and
 1.0.1 we will need to either avoid having to add a new field to ssl.h or
 move it to the end of the structure.
 
 As things are, any application accessing a field after the new member
 would misbehave.
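The quoted concern can be illustrated with a small sketch; the field names below are hypothetical and the real (struct ssl_st) is far larger, but the offset arithmetic is the same:

```python
# Sketch of the ABI problem: adding a field in the middle of a struct
# shifts the offsets of everything after it, so an application compiled
# against the old layout reads the wrong bytes.  Hypothetical layout,
# not the real (struct ssl_st).
import ctypes

class SSL_old(ctypes.Structure):           # 1.0.0 layout
    _fields_ = [("version", ctypes.c_int),
                ("options", ctypes.c_int)]

class SSL_mid(ctypes.Structure):           # new field in the middle
    _fields_ = [("version", ctypes.c_int),
                ("renegotiate", ctypes.c_int),
                ("options", ctypes.c_int)]

class SSL_end(ctypes.Structure):           # new field at the end
    _fields_ = [("version", ctypes.c_int),
                ("options", ctypes.c_int),
                ("renegotiate", ctypes.c_int)]

# "options" moves if the field is inserted in the middle...
assert SSL_mid.options.offset != SSL_old.options.offset
# ...but keeps its old offset if the field is appended at the end.
assert SSL_end.options.offset == SSL_old.options.offset
```

Appending at the end keeps every pre-existing offset stable, which is why that is the usual way to preserve binary compatibility within a release series.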

Do you need a patch which moves the int renegotiate; to the end of the struct 
for 1.0.1?

-Robin



Re: inconsistent timings for rsa sign/verify with 100K bit rsa keys

2010-08-30 Thread Mounir IDRASSI

 On 8/30/2010 12:20 PM, Georgi Guninski wrote:

You write about the sign operation; does it also explain the verification
timings, i.e. signing with a low public exponent key vs. verification with
a big exponent?



To answer this question, one must remember that the signing is done 
using the CRT parameters (p, q, dp, dq and q^-1 mod p) and that 
theoretically it is 4 times faster than doing a raw exponentiation with 
the private exponent d (see section 14.75 in the Handbook of Applied 
Cryptography for a justification).

Your figures match this exactly. I'll explain.

The verification with key2 involves a modular exponentiation with a 
public exponent of 100 001 bits with a Hamming weight of 49 945.
The private exponent of key1 is 100 002 bits and has a Hamming 
weight of 49 922.
Thus, a modular exponentiation with the public exponent of key2 will 
cost roughly the same as the modular exponentiation with the private 
exponent of key1.
Moreover, as I explained at the beginning of this email, the actual 
signing is done using CRT, which is 4 times faster than the modular 
exponentiation with the private exponent.


So, the modular exponentiation with the public exponent of key2 is 4 
times slower than the signing operation of key1 and it should cost 4 x 5 
min = 20 min, which is very close to the 21 min you actually obtained.
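The arithmetic above can be checked with a toy key (insecure textbook primes, for illustration only). CRT signing replaces one exponentiation mod n with two half-size exponentiations mod p and mod q; since modular exponentiation costs roughly cubic time in the operand size, halving it gives 2 x (1/2)^3 = 1/4 of the work, which is the 4x factor:

```python
# Toy RSA-CRT illustration (insecure, tiny primes).  The CRT result is
# identical to the plain m^d mod n signature, but is computed from two
# half-size exponentiations plus a cheap recombination step.
p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))

# Precomputed CRT parameters, as stored in an RSA private key.
dp, dq = d % (p - 1), d % (q - 1)
qinv = pow(q, -1, p)              # q^-1 mod p

def sign_plain(m):
    return pow(m, d, n)           # one full-size exponentiation

def sign_crt(m):
    sp = pow(m % p, dp, p)        # half-size exponentiation mod p
    sq = pow(m % q, dq, q)        # half-size exponentiation mod q
    h = (qinv * (sp - sq)) % p    # Garner's recombination
    return sq + q * h

m = 42
assert sign_crt(m) == sign_plain(m)
assert pow(sign_plain(m), e, n) == m   # the signature verifies
```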


Does this answer your question?

--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

On 8/30/2010 12:20 PM, Georgi Guninski wrote:

On Mon, Aug 30, 2010 at 06:10:23AM +0200, Mounir IDRASSI wrote:

  Hi,

The big difference in the sign operation timings between the two
keys is not caused by any property of the second key's parameters
(like their Hamming weight) but is rather the expected
manifestation of two counter-measures implemented by OpenSSL. These
are:
- RSA Blinding that protects against timing attacks.
- Verification of CRT output that protects against fault attacks.


ok, thanks.
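The first counter-measure quoted above, RSA blinding, can be sketched with textbook-size numbers (an illustration of the idea, not OpenSSL's implementation):

```python
# Sketch of RSA blinding against timing attacks (toy numbers, insecure).
# Instead of computing m^d mod n directly, compute (m * r^e)^d * r^-1
# mod n for a fresh random r, so the value fed to the secret-exponent
# operation is randomized and its timing reveals nothing about m.
import math
import secrets

p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

def sign_blinded(m):
    while True:
        r = secrets.randbelow(n - 2) + 2      # random blinding factor
        if math.gcd(r, n) == 1:               # must be invertible mod n
            break
    blinded = (m * pow(r, e, n)) % n          # m * r^e mod n
    s = pow(blinded, d, n)                    # (m * r^e)^d = m^d * r mod n
    return (s * pow(r, -1, n)) % n            # unblind: multiply by r^-1

assert sign_blinded(42) == pow(42, d, n)      # same signature as unblinded
```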




Re: Verify X.509 certificate, openssl verify returns bad signature

2010-08-30 Thread Goran Rakic
On Mon, 30 Aug 2010 at 20:38 +0200, Dr. Stephen Henson wrote:

 I wouldn't advise changing the code in that way (FYI I wrote it). The normal
 workaround in OpenSSL for broken encodings is to use the original encoding
 by caching it. The attached three line patch adds this workaround for
 certificates.

Thanks Stephen. This preprocessor black magic looks very interesting; I
will spend some free time trying to understand it over the next few days.

I read your message on openssl-dev about the issue with a dirty cache.
As a naive code reader, I am wondering: could the modified field in
the cached data be set whenever certificate data is modified, to
invalidate the cache? Would this allow integrating the patch upstream?
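For readers following along, the caching workaround and the dirty-cache question can be sketched roughly as follows; the class and method names are hypothetical and merely stand in for OpenSSL's internal encoding cache:

```python
# Sketch of caching the original encoding (hypothetical names, not the
# actual OpenSSL patch).  Verifying a signature over the cached bytes
# tolerates broken encodings that would not survive a decode/re-encode
# round trip; any mutation must mark the cache dirty so the data is
# re-encoded before its next use.
class CachedCert:
    def __init__(self, der: bytes):
        self.fields = self._decode(der)   # parsed representation
        self._cached_der = der            # exact bytes as received
        self._dirty = False

    @staticmethod
    def _decode(der):
        return {"raw": der}               # stand-in for real DER parsing

    def _encode(self):
        return self.fields["raw"]         # stand-in for real DER encoding

    def set_field(self, name, value):
        self.fields[name] = value
        self._dirty = True                # mutation invalidates the cache

    def signed_bytes(self):
        if self._dirty:
            self._cached_der = self._encode()   # lazily re-encode
            self._dirty = False
        return self._cached_der

cert = CachedCert(b"\x30\x03\x02\x01\x05")
assert cert.signed_bytes() == b"\x30\x03\x02\x01\x05"  # original bytes kept
cert.set_field("serial", 7)
assert cert._dirty                        # cache marked stale by mutation
cert.signed_bytes()
assert not cert._dirty                    # re-encoded on next use
```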

Kind regards,
Goran Rakic




Re: [openssl.org #1833] [PATCH] Abbreviated Renegotiations

2010-08-30 Thread Darryl Miles

Robin Seggelmann via RT wrote:

Note that since we need to retain binary compatibility between 1.0.0 and
1.0.1 we will need to either avoid having to add a new field to ssl.h or
move it to the end of the structure.

As things are any application accessing a field after the new member
would misbehave.


Can you cite the mechanism by which an application would end up 
misbehaving?




Do you need a patch which moves the int renegotiate; to the end of the struct 
for 1.0.1?


Which internal members of the openssl/ssl.h (struct ssl_st) are visible 
outside of the OpenSSL implementation (i.e. to the application)?


My understanding is that, provided there are no macros directly 
accessing members of the struct from application code, the ordering 
issue is moot.


If the application programmer has read ssl.h and decided he is going to 
access internal members of (struct ssl_st) directly, when it has not 
been documented as safe to do so, should he not be left to burn?


If there are functions/macros/mechanisms that can be compiled into 
application code which do access and expect structure members to be at 
specific offsets, why is this the default anyway? I.e. why doesn't the 
application programmer have to define some 
-DOPENSSL_UNSAFE_DIRECT_ACCESS to switch those accesses from indirecting 
through a function (inside the OpenSSL implementation library) to being 
implemented as macros, and therefore embedded inside applications?


But first, please confirm which API calls you are concerned would be 
put at risk by this patch/feature.




A larger concern to me is the increase in the size of (struct 
ssl_st), a matter you seem to place at a lower priority than struct 
member order.


It is possible, and accepted usage, that an application might allocate 
a fixed amount of storage: static global variables, local stack 
variables, or embedding the (SSL) inside another application-defined 
struct, all relying on sizeof(SSL).


If this is a concern, it might be useful to:
 * Implement an API call that allows an application program to check 
the sizeof(SSL) it was compiled with against the runtime library's 
implementation size (preferably in a convenient way, mostly assisted by 
header files and a man page copy'n'paste snippet, with a view to being 
future proof).
 * Reserve some extra headroom in the struct, if you think you need to 
increase the size during the lifetime of the ABI compatibility you wish 
to retain.
 * Document any restrictions placed on the programmer when using the 
library, for example if storage for a specific type is not to be 
allocated statically (at compile time).
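The first suggestion, a size self-check, might look roughly like this from the application's point of view; SSL_get_struct_size and both size values are hypothetical, not part of any OpenSSL API:

```python
# Sketch of the proposed size self-check (hypothetical API and values,
# not OpenSSL's).  The application records the sizeof(SSL) it was
# compiled against and compares it with what the runtime library
# reports before allocating any SSL objects itself.

RUNTIME_SIZEOF_SSL = 808          # what the loaded shared library implements

def SSL_get_struct_size():
    """Hypothetical library call returning the runtime sizeof(SSL)."""
    return RUNTIME_SIZEOF_SSL

COMPILED_SIZEOF_SSL = 800         # baked in at application build time

def abi_size_ok():
    # An application that statically allocates SSL objects must refuse
    # to run if the library's struct has grown past its compiled size.
    return SSL_get_struct_size() <= COMPILED_SIZEOF_SSL

assert abi_size_ok() is False     # 808 > 800: the app must bail out early
```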


If you increase the size of the struct, those applications that do 
allocate a fixed amount of storage based on openssl-1.0.0 will find that 
the OpenSSL library is scribbling on memory when it accesses the 
locations at the highest offsets of the new, larger structure.


The application will not have allocated quite enough memory, and so 
random problems will occur.


May I suggest you combine these flags into an existing storage area so 
no size increase is necessary? The extra logical AND/OR instructions 
for masking a register value are very cheap, and the patch does not 
appear to affect any performance-critical bulk-transfer path.
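The flag-combining suggestion amounts to the usual bit-mask idiom: several booleans share one existing integer field, read and written with AND/OR masks, so the struct needs no new member. The flag names and values below are illustrative, not OpenSSL's:

```python
# Sketch of packing boolean flags into one existing integer field so
# the struct does not grow.  Constants are illustrative only.
FLAG_RENEGOTIATE = 0x01
FLAG_SHUTDOWN    = 0x02

flags = 0
flags |= FLAG_RENEGOTIATE                 # set with a logical OR
assert flags & FLAG_RENEGOTIATE           # test with a logical AND
assert not flags & FLAG_SHUTDOWN          # other flags are unaffected
flags &= ~FLAG_RENEGOTIATE                # clear with AND of the complement
assert flags == 0
```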



Darryl
__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org

