Re: Possibly a bug

2007-03-22 Thread Mounir IDRASSI
Hi Gabriel,
This is no bug. In your code, you should check the return value of
BN_bn2bin and you will notice that it returns zero. This means that
nothing has been written to your bytes buffer. So, your code is just
printing garbage. In your example, you are dealing with the zero value,
and in OpenSSL, BN_num_bytes returns 0 in this case (it is deduced from
BN_num_bits).
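
For illustration, here is a minimal sketch of how the zero case can be
handled (the 16-byte buffer size is taken from your example, not an
OpenSSL requirement):

#include <stdio.h>
#include <string.h>
#include <openssl/bn.h>

int main(void)
{
    BIGNUM *bn = NULL;
    unsigned char bytes[16];
    int n, i;

    BN_hex2bn(&bn, "00");            /* bn now holds the value zero */

    memset(bytes, 0, sizeof(bytes)); /* pre-fill so unused bytes are defined */
    n = BN_bn2bin(bn, bytes);        /* returns BN_num_bytes(bn), i.e. 0 here */
    printf("BN_bn2bin wrote %d byte(s)\n", n);

    for (i = 0; i < (int)sizeof(bytes); i++)
        printf("%02x", bytes[i]);
    printf("\n");

    BN_free(bn);
    return 0;
}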

Cheers,

Mounir IDRASSI
IDRIX
http://www.idrix.fr

Gabriel Maganis wrote:
 Hello,
I am new to openssl and I have tried to use the bignumber library
 like below,

 --
  unsigned char* hex = "00";
  BIGNUM* bn;
  unsigned char* bytes;
  int i;
  bn = BN_new();

  BN_hex2bn(&bn, hex);

  bytes = (unsigned char*) malloc(16);
  BN_bn2bin(bn, bytes);

   for(i=0; i<16; i++)
 printf("%02x", bytes[i]);
 -

 I believe I have found a bug because the above should print 32 '0'
 characters but it doesn't. I also found that the code above works for
 other hexadecimal strings and it's only for the above case that it
 fails.

 Thanks.
 __
 OpenSSL Project http://www.openssl.org
 Development Mailing List   openssl-dev@openssl.org
 Automated List Manager   [EMAIL PROTECTED]



__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   [EMAIL PROTECTED]


Re: Visual Studio 2005 and openssl question

2007-06-01 Thread Mounir IDRASSI
Hi,
Does the Windows XP machine where you have trouble have the
Microsoft runtime for msvcr80 installed?
Visual Studio 2005 links by default to the msvcr80 DLL. This DLL must be
installed on the target machine using a setup that can be downloaded
from the following link:
http://go.microsoft.com/fwlink/?linkid=65127 .
A more complete description of the deployment requirements and methods
for applications built with VS2005 can be found here:
http://msdn2.microsoft.com/en-us/library/ms235291(VS.80).aspx .
I hope this helps.

Cheers,
Mounir IDRASSI
IDRIX - Cryptography and IT Security Experts
http://www.idrix.fr


gary clark wrote:
 hello,

 I know this is probably an inappropriate venue, but I am
 at a loss as to why I cannot run OpenSSL on a Windows XP
 machine which does not have OpenSSL installed.

 I have built a client and server on a machine which
 has had openssl installed and got it to work with
 certificates. However when I port the executable code
 to a machine without openssl installed it fails.

 1) I port the built libeay32.dll and ssleay32.dll to
 windows32 directory.

 2) I then attempt to run my application and it fails
 to load the libeay32.dll and ssleay32.dll libraries.
 I use the call LoadLibrary(L"libeay32.dll"); is this
 valid to do?

 Why would it fail to load the libraries?

   
__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   [EMAIL PROTECTED]


Re: [openssl.org #1650] signature length in ECDSA

2008-03-07 Thread Mounir IDRASSI
Hi,
This is due to the fact that r and s are unsigned values while DER
INTEGERs are signed. So, when the MSB of a computed value is set, the
encoding adds an extra 0x00 byte to its ASN.1 representation.
This is normal and it explains what you are experiencing.

Cheers,

Mounir IDRASSI
IDRIX - Cryptography And IT Security Experts
http://www.idrix.fr

On 3/7/08, JP Szikora via RT [EMAIL PROTECTED] wrote:
 Hi,

  I am trying to understand why the signature length is variable in ECDSA.
  Normally, with a 160-bit EC key it should be 46 bytes long: 20 bytes for each of
  the 2 components and 4 + 2 bytes for ASN.1.

  I think it must be a bug in the ASN.1 creation of the signature.

  Here is the details:

  I'm testing this with openssl-0.9.8g.
  I create a key:
  openssl ecparam -out ec_key.pem -name secp160k1 -genkey
  And I sign with it:
  openssl dgst -ecdsa-with-SHA1 -sign ec_key.pem  -out test_ec.sign test.txt

  Now this signature (test_ec.sign) is between 46 and 48 bytes long when I
  do it a few times.

  I compared the asn1parse output with the hexadecimal content of the
  signature, and the difference is an extra 0x00 before one or the two
  members of the pair (r,s).

  1. the most frequent case: 47 bytes:
  asn.1 structure:
    0:d=0  hl=2 l=  45 cons: SEQUENCE
    2:d=1  hl=2 l=  21 prim: INTEGER  :BD8188D4FB9445C456FF257BC9A77E759CC63DA9
   25:d=1  hl=2 l=  20 prim: INTEGER  :2AC486BB6DF4D81A44B38CE319935270B22CACC8
  the signature in hexadecimal:
  
 302d0215_00bd8188d4fb9445c456ff257bc9a77e759cc63da9_0214_2ac486bb6df4d81a44b38ce319935270b22cacc8


  I put a _ to clearly separate the elements.

  2.  48 bytes signature:
    0:d=0  hl=2 l=  46 cons: SEQUENCE
    2:d=1  hl=2 l=  21 prim: INTEGER  :95CB1F3A35F4358D158BE94BA41031CE1563CD0F
   25:d=1  hl=2 l=  21 prim: INTEGER  :A07D76EF47CF74D385FF60DA7EBF8E86652AD230
  
 302e0215_0095cb1f3a35f4358d158be94ba41031ce1563cd0f_0215_00a07d76ef47cf74d385ff60da7ebf8e86652ad230



  3. 46 bytes signature:
    0:d=0  hl=2 l=  44 cons: SEQUENCE
    2:d=1  hl=2 l=  20 prim: INTEGER  :22294F048F61B727DB3B0786D440717532601082
   24:d=1  hl=2 l=  20 prim: INTEGER  :09D21753A2DD8395CB965D583F27835B051E7C42
  
 302c0214_22294f048f61b727db3b0786d440717532601082_0214_09d21753a2dd8395cb965d583f27835b051e7c42



  I reproduced this on a recent snapshot of the 0.9.9-dev branch.

  Now, if I modify the signature to remove the extra 0x00 preceding one
  of the members and adjust the length fields in the ASN.1, the
  signature is still valid...

  Thanks for your help,

  Best Regards,

  Jean-Pierre

  --
  Dr Jean-Pierre Szikora  e-mail: [EMAIL PROTECTED]
tel: 32-2-764.75.00
  74, av. Hippocrate - UCL 7459  fax: 32-2-764.65.65
  1200 Brussels - Belgium


  __
  OpenSSL Project http://www.openssl.org
  Development Mailing List   openssl-dev@openssl.org
  Automated List Manager   [EMAIL PROTECTED]




-- 
Mounir IDRASSI
IDRIX
http://www.idrix.fr
__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   [EMAIL PROTECTED]


Re: Private Key Larger then Public Key

2008-03-25 Thread Mounir IDRASSI
Hi,
This is normal since the RSA private key is stored in the Chinese Remainder
format (p, q, dp, dq and d). The first four elements have half the size
of the modulus and the last has the same size as the modulus. Thus, the
whole RSA private key encoding takes roughly three times the modulus size,
and the modulus size is roughly the size of the RSA public key.
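
As a rough illustration (my own sketch, not part of the original question),
you can compare the DER-encoded sizes directly; RSA_generate_key is the old
simple API and error handling is omitted:

#include <stdio.h>
#include <openssl/rsa.h>
#include <openssl/x509.h>   /* i2d_RSAPrivateKey / i2d_RSAPublicKey */

int main(void)
{
    RSA *rsa = RSA_generate_key(1024, 65537, NULL, NULL);

    int priv_len = i2d_RSAPrivateKey(rsa, NULL); /* PKCS#1 private key, CRT form */
    int pub_len  = i2d_RSAPublicKey(rsa, NULL);  /* PKCS#1 public key (n, e) */

    printf("private key DER: %d bytes, public key DER: %d bytes\n",
           priv_len, pub_len);

    RSA_free(rsa);
    return 0;
}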

Mounir IDRASSI
IDRIX - Cryptography And IT Security Experts
http://www.idrix.fr



robert2007 a écrit :
 Hello,

 I am working with OpenSSL and am interested in why my private key is three
 times the size of my public key when using 1024-bit RSA?

 Thanks.
   

__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   [EMAIL PROTECTED]


Re: Question about ECDH_compute_key and X9.63 standard

2008-05-28 Thread Mounir IDRASSI
Hi,

The KDF implementation in ecdhtest.c is based on the IEEE P1363 standard,
like the rest of the ECDH implementation in OpenSSL. It can be regarded
as a generalization of the X9.63 standard. However, the file ecdhtest.c
is not part of the OpenSSL core, so you can provide your own
implementation of the KDF and still use the OpenSSL ECDH functions without
any problem.
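
For reference, here is a minimal sketch of an X9.63-style KDF with a
single SHA-1 block (counter fixed to 1, optional SharedInfo omitted),
which is what the standard reduces to when at most 20 bytes of keying
material are needed. This is my own illustration, not the code from
ecdhtest.c; its signature should match the KDF callback that
ECDH_compute_key expects:

#include <string.h>
#include <openssl/sha.h>

/* Derive up to SHA_DIGEST_LENGTH bytes as SHA1(Z || counter), counter = 1. */
static void *x963_kdf_sha1(const void *z, size_t zlen, void *out, size_t *outlen)
{
    SHA_CTX ctx;
    unsigned char counter[4] = { 0x00, 0x00, 0x00, 0x01 }; /* 32-bit big-endian */
    unsigned char md[SHA_DIGEST_LENGTH];

    if (out == NULL || *outlen > SHA_DIGEST_LENGTH)
        return NULL;

    SHA1_Init(&ctx);
    SHA1_Update(&ctx, z, zlen);                  /* Z: the shared secret value */
    SHA1_Update(&ctx, counter, sizeof(counter)); /* the counter, big-endian    */
    /* SharedInfo would be hashed here if present */
    SHA1_Final(md, &ctx);

    memcpy(out, md, *outlen);
    return out;
}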

Regards,

Mounir IDRASSI
IDRIX
http://www.idrix.fr

Mark Shnaider a écrit :

 Hello,

 If I understand correctly, according to the X9.63 standard (5.6.3) the derived
 key (in the KDF_SHA1 case) must be computed as

   SHA1(Z || counter || [SharedInfo])

 Z - the shared secret value.

 But the KDF function in the file ecdhtest.c does not use a counter and
 computes the key as:

    SHA1(Z)

 To my mind, the bit string of the counter (equal to 1) must be included in the
 SHA-1 hash calculation.

 Is it a bug, or is my understanding wrong?

 Best regards

 Mark


 *Mark Shnaider | Software engineer | ARX*
 phone: +972.3.9279543 | mobile: +972.54.2448543 | email: [EMAIL PROTECTED]
 |_ www.arx.com_


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   [EMAIL PROTECTED]


Re: [openssl.org #1681] 0.9.8h bug report

2008-05-29 Thread Mounir IDRASSI
Hi,

You should not touch the file sha1-586.pl because the problem is located
in the file x86ms.pl, which is dedicated to MASM. In this file, line 273
(the one containing $extra) should be removed so that the generated
assembly files can be compiled.

Cheers,
-- 
Mounir IDRASSI
IDRIX
http://www.idrix.fr

On Thu, May 29, 2008 8:13 am, Craig Muchinsky via RT wrote:
 I pulled down 0.9.8h today and attempted to build on a win32 machine,
 but ran into an issue when compiling the generated s1_win32.asm file. It
 looks like there is a syntax error in sha1-586.pl at line 152, the
 second argument (16) is causing the following error:



  ml /Cp /coff /c /Cx /Focrypto\sha\asm\s1_win32.obj
 .\crypto\sha\asm\s1_win32.asm

Assembling: .\crypto\sha\asm\s1_win32.asm

   Microsoft (R) Macro Assembler Version 8.00.50727.762

   Copyright (C) Microsoft Corporation.  All rights reserved.

   .\crypto\sha\asm\s1_win32.asm(13) : error A2008: syntax error :
 integer



   NMAKE : fatal error U1077: 'C:\Program Files (x86)\Microsoft Visual
 Studio 8\VC\bin\ml.EXE' : return code '0x1'

   Stop.



 By simply removing the ',16' from line 152 everything compiles fine.



 Thanks,

 Craig Muchinsky


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   [EMAIL PROTECTED]


Re: syntax error in generated asm for sha1 (0.9.8h)

2008-06-25 Thread Mounir IDRASSI
Hi,

This error has already been reported and I indicated how to solve it: in
the file x86ms.pl, line 273 (the one containing $extra) should be removed.
Please refer to the following link:

http://www.mail-archive.com/openssl-dev@openssl.org/msg24059.html

Cheers,

-- 
Mounir IDRASSI
IDRIX
http://www.idrix.fr

On Wed, June 25, 2008 2:43 pm, Pierre Joye wrote:
 Hello,

 I'm new to the list and did not find a better place to report bugs.
 Please point me to the right place if this mailing list should be used
 for bug reports.

 While building the new binaries to be used by PHP windows releases, I
 got a syntax error in s1_win32.asm (generated by sha1-586.pl).

 Generated code:
   TITLE   sha1-586.asm
 .486
 .model FLAT
 _TEXT$ SEGMENT PAGE 'CODE'

 PUBLIC _sha1_block_data_order
 16
 _sha1_block_data_order PROC NEAR


 The 16 is obviously causing the error. I tried to figure out (quickly)
 what's wrong in the script, but perl can sometimes be more cryptic than
 the generated asm ;)

 Thanks for your fantastic work on openssl!

 Cheers,
 --
 Pierre

 http://blog.thepimp.net | http://www.libgd.org
 __
 OpenSSL Project http://www.openssl.org
 Development Mailing List   openssl-dev@openssl.org
 Automated List Manager   [EMAIL PROTECTED]


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   [EMAIL PROTECTED]


Re: PKCS11 private key

2008-12-04 Thread Mounir IDRASSI

Hi,

Usually, the PKCS11 engine is used to access a smart card or an HSM,
which doesn't offer the possibility of exporting a private key. So, in
this case, it is not feasible.
More generally, you can check whether a PKCS#11 object is exportable by
reading the attributes CKA_EXTRACTABLE, CKA_NEVER_EXTRACTABLE and
CKA_ALWAYS_SENSITIVE.
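
As an illustration (assuming a Cryptoki session is already open, p11 is
the function list returned by C_GetFunctionList, hKey is a handle to the
private key object, and the usual cryptoki.h platform wrapper around
pkcs11.h is available), the attributes can be read with
C_GetAttributeValue; error handling is omitted:

#include "cryptoki.h"   /* platform wrapper around the PKCS#11 headers */

/* Sketch: CK_TRUE if the key claims to be extractable and was never sensitive. */
CK_BBOOL is_exportable(CK_FUNCTION_LIST_PTR p11, CK_SESSION_HANDLE hSession,
                       CK_OBJECT_HANDLE hKey)
{
    CK_BBOOL extractable = CK_FALSE, never_extractable = CK_FALSE,
             always_sensitive = CK_FALSE;
    CK_ATTRIBUTE tmpl[] = {
        { CKA_EXTRACTABLE,       &extractable,       sizeof(extractable) },
        { CKA_NEVER_EXTRACTABLE, &never_extractable, sizeof(never_extractable) },
        { CKA_ALWAYS_SENSITIVE,  &always_sensitive,  sizeof(always_sensitive) }
    };

    p11->C_GetAttributeValue(hSession, hKey, tmpl,
                             sizeof(tmpl) / sizeof(tmpl[0]));

    return (CK_BBOOL)(extractable && !never_extractable && !always_sensitive);
}

Note that even when CKA_EXTRACTABLE is true, a sensitive key can usually
only leave the token wrapped (C_WrapKey), not in clear form.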


Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

venkat naidu wrote:

hi
 all,
 
 i have a query related to PKCS11
 
can we export the private key retrieved from PKCS11 ? ( here we are 
storing the private key using the PKCS11 functionalities )
 
if so, how can we do that? Would any such operation violate any standards?
 
Thanks for all your help
 
regards

   Venkat


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   [EMAIL PROTECTED]


Re: Can I create cryptoprovider based on openssl ?

2008-12-18 Thread Mounir IDRASSI

Hi,

What do you mean by CryptoProvider? Does it refer to a Microsoft
Cryptographic Service Provider?
If that is the case, you can take a look at the CSP #11 project
(http://csp11.labs.libre-entreprise.org/): it provides a CSP
implementation based on the PKCS#11 interface, and there are
implementations of PKCS#11 DLLs based on OpenSSL (look at OpenCryptoki).

I hope this helps,

--
Mounir IDRASSI
IDRIX
http://www.idrix.fr


Rustam Rakhimov wrote:

Hi everybody 
Can I create a CryptoProvider based on OpenSSL? If somebody has some idea
about it, please let me know.

Rustam !!!

  


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: pubkey format

2008-12-20 Thread Mounir IDRASSI

Hi,

You can also use the well-known PuTTYgen tool, which can import PEM encoded
keys and export the public key in the OpenSSH authorized_keys file format.
Concerning the ssh-rsa format itself, it is rather simple. Each line is
as follows:


   ssh-rsa XXXX rsa-key-description

where the XXXX field is the base-64 encoding of the following content
(all length fields are 4-byte big-endian values):


4 bytes : 00 00 00 07 (length of the public key type string)
7 bytes : 73 73 68 2D 72 73 61 ("ssh-rsa" in ASCII)
4 bytes : length of the public exponent encoding
E bytes : public exponent value (i.e. if 0x010001, then E=3 and the encoding
is 01 00 01)
4 bytes : length of the unsigned modulus encoding. If the MSB of the modulus
is set, then the length must count the extra 00 that will be added to the
content.
N bytes : modulus value. If its MSB is set, then an extra 00 must precede
its real value.


For example, if the public exponent is 0x25 and the modulus is 
0xFEDCBA91, we will have :


00 00 00 07 73 73 68 2D 72 73 61 00 00 00 01 25 00 00 00 05 00 FE DC BA 91

and in base-64 encoding :

AAAAB3NzaC1yc2EAAAABJQAAAAUA/ty6kQ==

So, the OpenSSH public key will be :
ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAAUA/ty6kQ== sample-openssh-key
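
Assuming an OpenSSL RSA* key is at hand, a sketch of building the binary
blob described above could look like this (my own illustration of the
layout, not the code of an existing tool; the base-64 step is left out,
and the direct rsa->e / rsa->n access matches the OpenSSL versions of
that time):

#include <string.h>
#include <openssl/rsa.h>
#include <openssl/bn.h>

/* Append a 4-byte big-endian length followed by the bytes of bn,
 * prefixing a 0x00 when the most significant bit is set. */
static size_t put_mpint(unsigned char *out, const BIGNUM *bn)
{
    int n = BN_num_bytes(bn);
    int pad = (BN_num_bits(bn) % 8 == 0) ? 1 : 0;  /* MSB of top byte set? */
    unsigned int len = (unsigned int)(n + pad);

    out[0] = (len >> 24) & 0xFF; out[1] = (len >> 16) & 0xFF;
    out[2] = (len >> 8) & 0xFF;  out[3] = len & 0xFF;
    if (pad)
        out[4] = 0x00;
    BN_bn2bin(bn, out + 4 + pad);
    return 4 + len;
}

/* Build "\x00\x00\x00\x07ssh-rsa" || exponent || modulus into buf;
 * the result is what gets base-64 encoded. */
static size_t build_ssh_rsa_blob(unsigned char *buf, const RSA *rsa)
{
    static const unsigned char hdr[] =
        { 0, 0, 0, 7, 's', 's', 'h', '-', 'r', 's', 'a' };
    size_t off = sizeof(hdr);

    memcpy(buf, hdr, sizeof(hdr));
    off += put_mpint(buf + off, rsa->e);   /* public exponent */
    off += put_mpint(buf + off, rsa->n);   /* modulus */
    return off;
}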

I hope this will help,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr


Shahin Khorasani wrote:

Hi,

You can download a simple utility to transform a PKCS#1 RSA public key to
the OpenSSH public key format from here:
http://www.parssign.com/openssh_pk_linux.tar.gz


It is free to use and statically linked on Linux (it should work on most
distributions)


Regards,
Shahin Khorasani

Dhiva wrote:

openssl x509 -in sample.pem -pubkey -noout

What is the format of the pubkey?

How can I convert or transform this key to the ssh-rsa format? I am talking
about the ssh keys that are available in the authorized_keys file.

Or
Does openssl have any tools to manage the pubkey? Like dismantling it and
assembling it again.

thanks
dhiva
__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org
  


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org



__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: pubkey format

2008-12-21 Thread Mounir IDRASSI

Hi,

I have written a small program that converts an encoded OpenSSL public 
key to the OpenSSH format. Its output can be used to fill the 
authorized_keys file.

You can get the source from the following link :
http://www.idrix.fr/Root/Samples/pubkey2ssh.c

It should compile under MacOSX with no problem. Tell me if it's not the 
case.


--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

Shahin Khorasani wrote:
Sorry, the source code is not available, but you can write a customized
application (Mounir explained the format properly).


- Shahin Khorasani

Kyle Hamilton wrote:
 Can the source be made available?  I would like to use it on MacOSX.

 -Kyle H

 On Fri, Dec 19, 2008 at 11:43 PM, Shahin Khorasani
 khoras...@amnafzar.com wrote:
   
 Hi,


 You can download simple utility to transform PKCS#1 RSA public key to
 opnessh public key format from here:
 http://www.parssign.com/openssh_pk_linux.tar.gz

 It is free to use and linked statically on Linux (must works on most
 distributions)

 Regards,
 Shahin Khorasani

 Dhiva wrote:
 
 openssl x509 -in sample.pem -pubkey -noout


 What is the format of the pubkey ?

 How can i convert or transform this key to ssh-rsa format? I am talking
 about the ssh keys that are available in authorized_keys file.

 Or
 Does openssl has any tools to manage the pubkey ? like dismantle and
 assemble again.

 thanks
 dhiva
 __
 OpenSSL Project http://www.openssl.org
 Development Mailing List   openssl-dev@openssl.org
 Automated List Manager   majord...@openssl.org

   
 __

 OpenSSL Project http://www.openssl.org
 Development Mailing List   openssl-dev@openssl.org
 Automated List Manager   majord...@openssl.org

 
 __

 OpenSSL Project http://www.openssl.org
 Development Mailing List   openssl-dev@openssl.org
 Automated List Manager   majord...@openssl.org
   

__ OpenSSL 
Project http://www.openssl.org Development Mailing List openssl-dev@openssl.org 
Automated List Manager majord...@openssl.org
  


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: pubkey format

2009-01-09 Thread Mounir IDRASSI

Hi,

These are link errors. You most likely forgot to add -lcrypto to the gcc
link command line (gcc -o pubkey2ssh pubkey2ssh.c -lcrypto).


Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

Dhiva wrote:

Thanks for the sample code.

I am getting the following errors.
 _BIO_f_base64, referenced from:
 _main in ccxPEkYV.o
 _ERR_get_error, referenced from:
 _main in ccxPEkYV.o
 _main in ccxPEkYV.o
 _ERR_free_strings, referenced from:
...
in total 23 errors

I tried with gcc as well as with xcode. Same result.
Looks like some headers files may be missing.

__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org



__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: Building OpenSSL 0.9.8k on Win64A

2009-08-19 Thread Mounir IDRASSI
Hi,

You are using the wrong build environment. Be sure to launch the right command
prompt using the shortcut installed under Program Files by Visual Studio 2008 under
the name "Visual Studio 2008 x64 Win64 Command Prompt".
You can also use the batch file named vcvarsamd64.bat to set up the right build
environment: this file resides under VCInstallDir\VC\bin\amd64.

Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr


Bernhard Marschall wrote:
 Hi,

 Sorry to ask again, but I only found the question in the mailing list
 archive but no answer.

 I tried to build OpenSSL 0.9.8k on 64-bit Windows, but it failed.

 As mentioned in the docu I called

   perl Configure VC-WIN64A
   ms\do_win64a
   nmake -f ms\ntdll.mak

 link /nologo /subsystem:console /opt:ref /dll /out:out32dll\libeay32.dll 
 /def:ms/LIBEAY32.def @C:\Users\build\AppData\Local\Temp\2\nmEB5C.tmp 
 tmp32dll\uplink.obj : fatal error LNK1112: module machine type 'X86' 
 conflicts with target machine type 'x64'
 NMAKE : fatal error U1077: 'C:\win32app\devstudio_2008\VC\BIN\link.EXE' : 
 return code '0x458'
 Stop.

 Any ideas how to fix this?

 Best regards,
 Bernhard
 --
 
 Bernhard Marschall, Software Development
 Hyperwave AG, Albrechtgasse 9, A-8010 Graz
 Tel +43 (0) 316 820918 32, Fax +43 (0) 316 820918 99
 Landesgericht Graz, FN 269228z, Vorstand: Herwig Gangl, Gerhard Pail
 
 __
 OpenSSL Project http://www.openssl.org
 Development Mailing List   openssl-dev@openssl.org
 Automated List Manager   majord...@openssl.org
   
__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: Bug in ECDSA_do_sign?

2009-10-12 Thread Mounir IDRASSI

Hi,

In order to be able to sign a digest with ECDSA, the bit length of the
digest value must be less than or equal to the bit size of the field used
in the elliptic curve.
So, if you want to sign a SHA-256 digest, you must use an elliptic
curve defined over a field whose bit size is at least 256.


The sample code you modified is using the wap-wsg-idm-ecid-wtls8 curve,
which is defined over a 112-bit prime field. Thus, it is normal that
ECDSA_do_sign fails, because the input size (256 bits) is greater than 112.
You have two possible solutions here: either use another curve with a
bigger field (like secp256k1 or secp384r1), or truncate the digest
value to at most 14 bytes (equivalent to 112 bits).
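
A sketch of the first option (my own example; the NID constants are those
used in the OpenSSL headers, and error handling is kept minimal):

#include <openssl/ec.h>
#include <openssl/ecdsa.h>
#include <openssl/objects.h>
#include <openssl/sha.h>

/* Sign a SHA-256 digest with a curve whose field is at least 256 bits wide. */
ECDSA_SIG *sign_sha256(const unsigned char *msg, size_t len)
{
    unsigned char digest[SHA256_DIGEST_LENGTH];
    EC_KEY *key = EC_KEY_new_by_curve_name(NID_secp256k1); /* 256-bit field */
    ECDSA_SIG *sig = NULL;

    if (key != NULL && EC_KEY_generate_key(key)) {
        SHA256(msg, len, digest);
        sig = ECDSA_do_sign(digest, SHA256_DIGEST_LENGTH, key);
    }
    if (key != NULL)
        EC_KEY_free(key);
    return sig;
}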


Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

Kirk81 wrote:

Actually I'm looking for the EVP interface and I found out that 'The EVP
interface should almost always be used in preference to the low level
interfaces. This is because the code then becomes transparent to the
algorithm used and much more flexible'. It might be true but... 


...I would like to know how I can use the low-level functions for the
example that I posted. What am I missing? Can anyone help me?



Dr. Stephen Henson wrote:
  

On Fri, Oct 09, 2009, Kirk81 wrote:



Hello,

I found your example of ECDSA_do_sign/verify very useful.

Now I'm trying to modify the code and I would like to use an SHA-256's
message digest in your sign function. Something like:

unsigned char obuf[32];

SHA-256(data, len, obuf);

// now, in obuf there's the message digest (calculated using the SHA-256
function).

but I have an error in the ECDSA_do_sign function when I pass the message
digest in this way:
sig = ECDSA_do_sign(obuf, 32, pkey);

what's wrong? 


Probably I'm missing the conversion between data types: I mean,
something
used to convert the output of the SHA to an integer. In this case: which
function should I use and how?

  
I'd suggest you try OpenSSL 1.0.0 and the EVP interface instead. 


Steve.
--
Dr Stephen N. Henson. OpenSSL project core developer.
Commercial tech support now available see: http://www.openssl.org
__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org





  


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: ECDSA_do_verify()

2009-10-27 Thread Mounir IDRASSI

Hi,

What you are seeing is the side-effect of OpenSSL initialization 
internals during the first time you access a cryptographic function that 
uses random numbers (like ECDSA).
If, in your code, you do two signatures in a row before doing the
verification, you will notice that the first signature is always slower
than the second one, and that the second signature takes almost the same time
as the verification.


If you want to remove this side-effect, add the following two lines at
the beginning of your program before doing any cryptographic operation:


   BIGNUM *dummy = BN_new();
   BN_rand(dummy, 256, 1, 1);

After adding these lines, you will see the magic! (the timings will 
become more reasonable)


FYI, the side-effect has to do with the entropy collection of the 
OpenSSL random generator. During the first cryptographic operation, most 
of the time is consumed by the function RAND_poll.
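
Equivalently, the entropy collection can be triggered explicitly at
program start so that its cost is not charged to the first signature
(a sketch; RAND_poll() is the function that actually does the seeding):

#include <openssl/rand.h>

int main(void)
{
    /* Force the one-time entropy gathering before any timing is done. */
    RAND_poll();

    /* ... benchmark ECDSA_do_sign / ECDSA_do_verify here ... */
    return 0;
}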


Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

Kirk81 wrote:

Hello,
I'm trying to benchmark ECDSA with a 160-bit prime-field key and the SHA-1
function: I pass a string of characters to SHA-1 and then I pass the
digest to the ECDSA_do_sign and ECDSA_do_verify functions.

For this purpose I've modified code that was posted previously. The code
is the following and it's for MSVC 2005.
http://www.nabble.com/file/p26074867/ecdsa.c ecdsa.c 


With an Intel Pentium M processor at 1500MHz, I can hash and sign (with the
above configuration) in 2.6 [ms] and I'm able to verify in 0.02 [ms].

BUT... Is it possible that the verify function is so fast? Am I making a
mistake, or is it a bug?

Thanks in advance
  


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: ECDSA_do_verify()

2009-10-27 Thread Mounir IDRASSI

Hi,

Of course I tested it before writing the email!! The output I get from
your program is below. Moreover, I used a professional profiling tool to
analyse the time consumption and to verify that it comes from the
first call of the first signing operation, and specifically from RAND_poll.
How could initializing a BIGNUM and computing a random value
for it make things worse??
Also, in your code, MSVC 2008 complains that there is an overflow in the
integral constant at line 79: you have to replace 134774L by 134774LL to
avoid this.


More generally, if you need more accurate timing values, I advise you to
compute the mean of several measurements: for example, in your code,
you can perform a loop of 1000 iterations containing the calls to SHA1 and
ECDSA_do_sign and then divide the elapsed time by 1000 (and do the same thing
for the verification). Thus, you will remove the side-effect of the
first signature call and you will get more meaningful values.
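
A sketch of such a measurement loop (my own example; now_ms() stands for
whatever millisecond clock your platform provides and is not an OpenSSL
function):

#include <openssl/ec.h>
#include <openssl/ecdsa.h>

#define ITER 1000

extern double now_ms(void);   /* hypothetical platform timer */

/* Average the cost of one signature over ITER runs. */
double avg_sign_ms(EC_KEY *key, const unsigned char *digest, int dlen)
{
    double start = now_ms();
    int i;

    for (i = 0; i < ITER; i++) {
        ECDSA_SIG *sig = ECDSA_do_sign(digest, dlen, key);
        ECDSA_SIG_free(sig);
    }
    return (now_ms() - start) / ITER;
}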


Have you tried modifying your code to do two signatures in a row, one after
the other, and output the timing of each one?


And here is the output of your program after introducing the dummy 
BIGNUM random initialization (on a Pentium M processor 1700 MHz) :


C:\ecdsatest.exe

-- WinTimeHigh: 0
-- WinTimeLow: 0 [ns]
-- CPU-Ticks.High = 0
-- CPU-Ticks.Low = 18832
(sig->r, sig->s): 
(9134279177818445EE242B823B088E70CDB05AB9,6BA4942A96BA5B1D798F859FB331F557D5170E1F)


sign returned 1
(sig->r, sig->s): 
(9134279177818445EE242B823B088E70CDB05AB9,6BA4942A96BA5B1D798F859FB331F557D5170E1F)

i2d_ECDSA_SIG returned 0062E2B8, length 47
d2i_ECDSA_SIG returned 0062E2E8

-- WinTimeHigh: 0
-- WinTimeLow: 0 [ns]
-- CPU-Ticks.High = 0
-- CPU-Ticks.Low = 22368
verify returned 1

And just in case, I have put the MSVC 2008 build binary against OpenSSL 
09.8k on the following link : http://www.idrix.fr/test/ecdsatest.zip


Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

Kirk81 wrote:

Hello,

I put in the two lines but it doesn't work at all: in fact, it works worse!
:-/ 


Have you tried it yourself before suggesting it to me? What result did you get?

Thanks




Mounir IDRASSI wrote:
  

Hi,

What you are seeing is the side-effect of OpenSSL initialization 
internals during the first time you access a cryptographic function that 
uses random numbers (like ECDSA).
If, in your code, you do two signature in a raw before doing the 
verification, you will notice that the first signature is always slower 
that the second one and the second signature takes almost the same time 
as the verification.


If you want to remove this side-effect, add the following two lines at 
beginning of your program before doing any cryptographic operation :


BIGNUM *dummy = BN_new();
BN_rand(dummy, 256, 1, 1);

After adding these lines, you will see the magic! (the timings will 
become more reasonable)


FYI, the side-effect has to do with the entropy collection of the 
OpenSSL random generator. During the first cryptographic operation, most 
of the time is consumed by the function RAND_poll.


Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr




  


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: ECDSA_do_verify()

2009-10-28 Thread Mounir IDRASSI

Hi,

I have already thought about it but never done it because I found it to 
be too tedious, especially concerning the build system and the heavy 
macro usage, combined with a lack of motivation!
However, I believe it is possible to isolate ECDSA and it should take a 
week at most for an experienced OpenSSL developer to come up with a 
clean library subset.


Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

Kirk81 wrote:

yep, thanks!!

you were right! I had a shock about the performance: I didn't expect it to be so
fast!! That was my doubt.
Here is my MSVC 2008 build binary against 'openssl-1.0.0-stable-SNAP-20091028'.
http://www.nabble.com/file/p26093729/ECDSA.exe ECDSA.exe 


Thanks for the hint about the loop of measurements: it will be in the program in the
following version.

And thank you very much about the overflow: I didn't notice that.

You seem quite familiar with OpenSSL and cryptography, so I'd like to ask
if you have ever tried de-coupling the functions of the OpenSSL project. I would like to
isolate ECDSA (and the corresponding functions) from OpenSSL and remove
all the rest.  Is it possible?

Kirk



Mounir IDRASSI wrote:
  

Hi,

Of course I tested it before writing the email!! The output I get from 
your program is below. Moreover, I used a professional profiling tool to 
analyse the time consumption and to verify that it is coming from the 
first call of the first signing operation, and specifically from

RAND_poll.
How could the initialization of a BIGNUM and computing a random value 
for it be worse??
Also, in your code, MSVC 2008 complains that there is an overflow in the 
integral constant line 79 : you have to replace 134774L by 134774LL to 
avoid this.


More generally, if you need more accurate timing values, I advise you to 
compute the mean of several measurements : for example, in your code, 
you can perform a loop of 1000 iteration containing the call to SHA1 and 
ECDSA_do_sign and then divide the elapsed time by 1000 (the same thing 
for the verification). Thus, you will remove the side effect of the 
first signature call and you will get more significant values.


Have tried modifying your code to do two signatures in a raw, one after 
another, and output the timing of each one?


And here is the output of your program after introducing the dummy 
BIGNUM random initialization (on a Pentium M processor 1700 MHz) :


C:\ecdsatest.exe

 -- WinTimeHigh: 0
 -- WinTimeLow: 0 [ns]
 -- CPU-Ticks.High = 0
 -- CPU-Ticks.Low = 18832
(sig-r, sig-s): 
(9134279177818445EE242B823B088E70CDB05AB9,6BA4942A96BA5B1D798F859FB331F557D5170E1F)


 sign returned 1
(sig-r, sig-s): 
(9134279177818445EE242B823B088E70CDB05AB9,6BA4942A96BA5B1D798F859FB331F557D5170E1F)

i2d_ECDSA_SIG returned 0062E2B8, length 47
d2i_ECDSA_SIG returned 0062E2E8

 -- WinTimeHigh: 0
 -- WinTimeLow: 0 [ns]
 -- CPU-Ticks.High = 0
 -- CPU-Ticks.Low = 22368
verify returned 1

And just in case, I have put the MSVC 2008 build binary against OpenSSL 
09.8k on the following link : http://www.idrix.fr/test/ecdsatest.zip


Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

Kirk81 wrote:


Hello,

I put the two lines but it doesn't work , at all: in fact, it works
worth!
:-/ 


Have u tried to do it before suggest it to me? What result did u get?

Thanks




Mounir IDRASSI wrote:
  
  

Hi,

What you are seeing is the side-effect of OpenSSL initialization 
internals during the first time you access a cryptographic function that 
uses random numbers (like ECDSA).
If, in your code, you do two signature in a raw before doing the 
verification, you will notice that the first signature is always slower 
that the second one and the second signature takes almost the same time 
as the verification.


If you want to remove this side-effect, add the following two lines at 
beginning of your program before doing any cryptographic operation :


BIGNUM *dummy = BN_new();
BN_rand(dummy, 256, 1, 1);

After adding these lines, you will see the magic! (the timings will 
become more reasonable)


FYI, the side-effect has to do with the entropy collection of the 
OpenSSL random generator. During the first cryptographic operation, most 
of the time is consumed by the function RAND_poll.


Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr



  
  

__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org





  


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: [openssl.org #2117] Bug submission: ECDSA with Curves Below 128-Level

2009-11-29 Thread Mounir IDRASSI

Hi,

What you are seeing here is actually the combination of two issues :
   - First, there is a bug in the function pkey_ec_sign (ec_pmeth.c
line 146): the error code returned by ECDSA_sign is not correctly
handled. The line should be if (ret <= 0) instead of if (ret < 0).
   - Secondly, the current implementation of ECDSA in OpenSSL doesn't
handle the case where the digest is bigger than the EC field size. In the
function ecdsa_do_sign (ecs_ossl.c line 256), a comment there says
that the digest should be truncated in this case, but apparently
no decision has been made yet for this corner case and an error
code is returned.


If you correct the bug in pkey_ec_sign, you will get the following
error message in the cases where you previously had empty output:


Error Signing Data
5052:error:2A065065:lib(42):ECDSA_do_sign:data too large for key 
size:.\crypto\ecdsa\ecs_ossl.c:265:


 From this point, we have to push for a decision from the OpenSSL team
about digest truncation and its implementation in ecdsa_do_sign,
which seems necessary for a fully compliant ECDSA implementation.
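
For reference, the truncation that the ECDSA standards describe (keep only
the leftmost bits of the digest, as many as the group order has) could be
sketched as follows; this only illustrates the rule, it is not the patch
that would eventually be applied:

#include <openssl/bn.h>

/* Convert a digest to an integer, keeping only its leftmost num_bits bits
 * when the digest is longer than the group order. */
static int digest_to_bn(BIGNUM *out, const unsigned char *dgst, int dgst_len,
                        int num_bits)
{
    int dgst_bits = dgst_len * 8;

    if (!BN_bin2bn(dgst, dgst_len, out))
        return 0;
    if (dgst_bits > num_bits)            /* drop the rightmost excess bits */
        return BN_rshift(out, out, dgst_bits - num_bits);
    return 1;
}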


Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr


Michael Smith via RT wrote:

Operating System: Ubuntu 9.10
OpenSSL is openssl-1.0.0-stable-SNAP-20091122


A blank result is returned for ECDSA signatures using SHA-256 and
112-level and below curves.  128-level and above curves work.  SHA-1
with these curves works.  I'm guessing that this part isn't yet
built into OpenSSL.  I'm submitting a bug report to get this into the
roadmap and in case I'm wrong and did find a hard-limit on something.
=)

These curves and SHA-256 are specified in NIST 186-3 as part of the
ECDSA standard and as part of RFC 5480.

From RFC 5480:
To promote interoperability, the following choices are RECOMMENDED:

   Minimum  | ECDSA    | Message    | Curves
   Bits of  | Key Size | Digest     |
   Security |          | Algorithms |
   ---------+----------+------------+-----------
     80     |   192    |  SHA-256   | secp192r1
   ---------+----------+------------+-----------
     112    |   224    |  SHA-256   | secp224r1
   ---------+----------+------------+-----------
     128    |   256    |  SHA-256   | secp256r1
   ---------+----------+------------+-----------
     192    |   384    |  SHA-384   | secp384r1
   ---------+----------+------------+-----------
     256    |   512    |  SHA-512   | secp521r1
   ---------+----------+------------+-----------




The process to recreate:
Create P-192 private key
Create P-224 private key
Create P-256 private key
Create P-384 private key
Validate all keys
Create digital signature in hex output using each key

The private keys are attached.
The output from each process is attached.
A strace from each signature creation is attached.


I've repeated this process with the following curves and results:
B-233/sect233r1  No output with SHA-256
K-233/sect233k1  No output with SHA-256
B-283/sect283r1  Works
K-283/sect283k1  Works


I've also used the -sign without -hex with the same result.



Thanks Much, you guys are great!

Cheers
--Mike




  


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: Sha1 hash differs

2006-08-01 Thread Mounir IDRASSI

Hi,
The problem almost certainly comes from the line calling SHA_Update: you are
always hashing DATA_SIZE_IN_BYTES bytes of data, but the command line
tool hashes only the exact length of the file. You should replace
DATA_SIZE_IN_BYTES with strlen(data).
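
In other words, a minimal sketch of the hashing step using the SHA1_*
functions (which is what the openssl sha1 command computes) and
strlen(data) as the length:

#include <string.h>
#include <openssl/sha.h>

/* Hash exactly strlen(data) bytes, as the command line tool does. */
int hash_string(const char *data, unsigned char hash[SHA_DIGEST_LENGTH])
{
    SHA_CTX ctx;

    if (!SHA1_Init(&ctx))
        return -1;
    SHA1_Update(&ctx, data, strlen(data));
    SHA1_Final(hash, &ctx);
    return 0;
}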


Cheers,

Mounir IDRASSI
IDRIX
http://www.idrix.net


k b a écrit :

Hi,
here's what I'm doing: I am comparing the SHA-1 hash generated with the
command line tool

#openssl sha1 -out digest.txt -binary smallplaintxt
against the one created by my C program.
I load both hashes using hexedit to see if they are the same, but to
my surprise they differ.


here's what i'm doing in my c file.

#define DATA_SIZE_IN_BYTES  5

int main (int argc, char *argv[])
{
 SHA_CTX shaCTX;
 static unsigned char hash[SHA_DIGEST_LENGTH];
 char * generateHashFile = argv[1];

  if (!SHA_Init(&shaCTX)) {
    return -1;
  }

  char *data = (char *) readPlainText();

  printf("length %d, %s\n", strlen(data), data);
  SHA_Update(&shaCTX, data, (sizeof(char) * DATA_SIZE_IN_BYTES));
  SHA_Final(hash, &shaCTX);
  OPENSSL_cleanse(&shaCTX, sizeof(shaCTX));

  FILE *fp;
  printf("writing hash to %s\n", generateHashFile);
  fp = fopen(generateHashFile, "wb");

  if (ferror(fp) != 0) {
  return -1;
  }
  int i = 0;
  for (i = 0; i < SHA_DIGEST_LENGTH; i++) {
 putc(hash[i], fp);
 }
 fclose(fp);
 return 0;
}


char * readPlainText()
{
 char *plainTxt = malloc(sizeof(char) * DATA_SIZE_IN_BYTES);
  if (plainTxt == NULL ) printf("malloc failed\n");
  memset(plainTxt, 0x00, sizeof(char) * DATA_SIZE_IN_BYTES);

  FILE *fp ;
  int ch = -1;

  fp = fopen("smallplaintxt", "rb");
  if (fp == NULL || (ferror(fp) != 0))
  {
  printf("unable to read plain text file \n");
  exit(-1);
  }
  char * ptr = plainTxt;
  while ( (ch = getc(fp)) != EOF )
  {
    printf("%c\n", (char) ch);
   *ptr++ = (char) ch;
 }
 fclose(fp);
 return plainTxt;
}

any insight would be appreciated.
KB


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   [EMAIL PROTECTED]




__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   [EMAIL PROTECTED]


Re: when does RAND_pseudo_bytes() return 0?

2010-02-17 Thread Mounir IDRASSI

Hi,

If you are not using an engine, then pseudorand is implemented in
md_rand.c, in the function ssleay_rand_pseudo_bytes (line 524).
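
As a usage sketch of the return values quoted below (my own example, not
from the OpenSSL sources):

#include <stdio.h>
#include <openssl/rand.h>

int main(void)
{
    unsigned char buf[16];
    int rc = RAND_pseudo_bytes(buf, (int)sizeof(buf));

    if (rc == 1)
        printf("bytes are cryptographically strong\n");
    else if (rc == 0)
        printf("bytes generated, but not cryptographically strong\n");
    else /* rc == -1 */
        printf("not supported by the current RAND method\n");
    return 0;
}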


Cheers,

--
Mounir IDRASSI
IDRIX
http://www.idrix.fr


On 2/17/2010 8:10 PM, Thomas Anderson wrote:

According to http://www.openssl.org/docs/crypto/RAND_bytes.html,
RAND_bytes() returns 1 on success, 0 otherwise. The error code can be
obtained by ERR_get_error(3). RAND_pseudo_bytes() returns 1 if the
bytes generated are cryptographically strong, 0 otherwise. Both
functions return -1 if they are not supported by the current RAND
method. 

From http://cvs.openssl.org/fileview?f=openssl/crypto/rand/rand_lib.c&v=1.20:

int RAND_pseudo_bytes(unsigned char *buf, int num)
 {
 const RAND_METHOD *meth = RAND_get_rand_method();
 if (meth && meth->pseudorand)
 return meth->pseudorand(buf,num);
 return(-1);
 }

Where is pseudorand defined?  I figured maybe each of the rand_win.c,
rand_unix.c, etc, would define it, but the string pseudorand doesn't
appear to occur in any of those files.

Any ideas?
__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org
   


--
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: SSL_ENC_MASK since SEED

2010-02-22 Thread Mounir IDRASSI

Hi,

SSL_ENC_MASK is a bit mask. Each time a new algorithm is added, a new
bit is assigned to it.
If you look in the file ssl_locl.h just under the define of
SSL_ENC_MASK, you'll find the definitions of the bits associated with
each algorithm.
For example, for Camellia it's 0x08000000 (bit number 27) and for SEED
it's 0x10000000 (bit number 28).


Cheers,

--
Mounir IDRASSI
IDRIX
http://www.idrix.fr


On 2/22/2010 12:14 PM, Gregory BELLIER wrote:

Hello.

I'm studying some parts of the OpenSSL code and I now have a question.
In ssl/ssl_locl.h, I'm wondering if the #define SSL_ENC_MASK is right.

Before Camellia was added (up to 0.9.8b), we had :
#define SSL_ENC_MASK 0x043F8000L

In 0.9.8c:
#define SSL_ENC_MASK 0x0C3F8000L

From 4 to C -> +8

I guess, that each time you add a new cipher, the SSL_ENC_MASK is +8.
I may guess wrong, please correct me so I can learn.

However, the SSL_ENC_MASK in 0.9.8f is 0x1C3F8000L because of the new 
cipher which is SEED.


- I would have thought the new mask should have been 0x143F8000L since 
SEED.


- In case I have misunderstood and the current mask is the right
one, what should the next one be ? 0x253F8000L or 0x2C3F8000L ?


Regards,
   Grégory BELLIER.



__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: SSL_ENC_MASK since SEED

2010-02-22 Thread Mounir IDRASSI


The bit flag for a new algorithm would logically be 0x20000000 and the
next 0x40000000. Thus, the value of the mask would be 0x3C3F8000L and
0x7C3F8000L respectively.


--
Mounir IDRASSI
IDRIX
http://www.idrix.fr


On 2/22/2010 2:29 PM, Gregory BELLIER wrote:

Thanks Mounir but you didn't exactly answer my question.

I noticed Camellia and SEED. My question was about how to define the 
mask according to a new cipher.

That's why I've already taken a look at the last 2 entries.

However, if a new algorithm makes it into OpenSSL, what would the mask be ?
I guess it would be 0x2C3F8000L because the new algorithm would be
declared with 0x20000000. But I'm not sure and am wondering if I'm missing
something.





Mounir IDRASSI a écrit :

Hi,

SSL_ENC_MASK is a bit mask. Each time a new algorithm is added, a new 
bit est positioned.
If you look in the file ssl_locl.h just under the define of 
SSL_ENC_MASK, you'll find the definitions of the bits associated with 
each algorithm.
For example, for Camellia it's 0x0800 (bit number 27) and for 
SEED it's 0x1000 (bit number 28).


Cheers,




__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


--
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: [openssl.org #2220] Resolved: BUG REPORT - 1.0.0 won't compile with no-rc4 option

2010-04-06 Thread Mounir IDRASSI
Steve has checked in a fix for this issue in CVS today:
http://cvs.openssl.org/chngview?cn=19520 and
http://cvs.openssl.org/chngview?cn=19521

You can grab the source from CVS or wait for tomorrow's snapshot.

Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

On 4/6/2010 3:47 PM, Mark Aldred via RT wrote:

How was this resolved?  Is there a patch or new release?

On Tue, Apr 6, 2010 at 7:21 AM, Stephen Henson via RT r...@openssl.org wrote:

   

According to our records, your request has been resolved. If you have any
further questions or concerns, please respond to this message.

 



   







--
Mark Aldred
Technical Development Manager
TwinStrata, Inc.
508.651.0199 x205
www.twinstrata.com
   




Re: [openssl.org #2240] Missing Supported Point Formats Extension in ServerHello should be ignored

2010-04-24 Thread Mounir IDRASSI
Hi,

I'm attaching a simple patch that should correct this behavior.
Can you test it and tell us the results?
Thanks,

--
Mounir IDRASSI
IDRIX
http://www.idrix.fr


 Dear openssl support,

 I investigated the following web servers.
 But all of them failed with the same error.

 1) apache-tomcat-6.0.26 + bcprov-ext-jdk16-145 + jdk1.6.0_17 (centos 5)
 2) jboss-4.2.3.GA + bcprov-jdk15 + jdk1.6.0_17 (centos 5)
 3) IIS 7 (windows 7)

 On the other hand, many browsers except for opera successfully connect to
 the servers.
 Something wrong?

 Regards,
 Koichi Sugimoto.

 2010/4/20 Jack Lloyd via RT r...@openssl.org


 RFC 4492 says:

   A client that receives a ServerHello message containing a Supported
   Point Formats Extension MUST respect the server's choice of point
   formats during the handshake (cf. Sections 5.6 and 5.7).  If no
   Supported Point Formats Extension is received with the ServerHello,
   this is equivalent to an extension allowing only the uncompressed
   point format.

 OpenSSL 1.0.0 rejects such a negotiation, always requiring the
 extension to exist in the ServerHello:

 CONNECTED(0003)
  TLS 1.0 Handshake [length 00cd], ClientHello
01 00 00 c9 03 01 4b cc f2 87 fc 1d 05 2d 0c 1f
4a 74 8b 8c 6f 20 c3 56 fb 35 4a 73 b0 9c e0 c1
6f 34 1b 10 f9 9f 00 00 5c c0 14 c0 0a 00 39 00
38 00 88 00 87 c0 0f c0 05 00 35 00 84 c0 12 c0
08 00 16 00 13 c0 0d c0 03 00 0a c0 13 c0 09 00
33 00 32 00 9a 00 99 00 45 00 44 c0 0e c0 04 00
2f 00 96 00 41 00 07 c0 11 c0 07 c0 0c c0 02 00
05 00 04 00 15 00 12 00 09 00 14 00 11 00 08 00
06 00 03 00 ff 01 00 00 44 00 0b 00 04 03 00 01
02 00 0a 00 34 00 32 00 01 00 02 00 03 00 04 00
05 00 06 00 07 00 08 00 09 00 0a 00 0b 00 0c 00
0d 00 0e 00 0f 00 10 00 11 00 12 00 13 00 14 00
15 00 16 00 17 00 18 00 19 00 23 00 00
  TLS 1.0 Handshake [length 002a], ServerHello
02 00 00 26 03 01 20 3f 72 c5 29 9f 22 b1 a6 af
4b 81 31 eb 4c 85 bf bb 3a a5 8b b8 21 86 16 c5
7c 84 5c 73 4a 4a 00 c0 08 00
 139742562498200:error:1411809D:SSL
 routines:SSL_CHECK_SERVERHELLO_TLSEXT:tls invalid ecpointformat
 list:t1_lib.c:1440:
 139742562498200:error:14092113:SSL
 routines:SSL3_GET_SERVER_HELLO:serverhello tlsext:s3_clnt.c:942:

 OpenSSL 1.0.0 29 Mar 2010
 built on: Mon Apr 19 19:52:35 EDT 2010
 platform: linux-x86_64
 options:  bn(64,64) rc4(1x,char) des(idx,cisc,16,int) idea(int)
 blowfish(idx)
 compiler: gcc -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H
 -m64 -DL_ENDIAN -DTERMIO -O3 -Wall -DMD32_REG_T=int -DOPENSSL_IA32_SSE2
 -DOPENSSL_BN_ASM_MONT -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM
 -DAES_ASM -DWHIRLPOOL_ASM
 OPENSSLDIR: /usr/local/ssl

 __
 OpenSSL Project http://www.openssl.org
 Development Mailing List   openssl-dev@openssl.org
 Automated List Manager   majord...@openssl.org




t1_lib.c.diff
Description: Binary data


Re: [openssl.org #2312] Function protos in 1.0.0a: unsigned long changed to size_t not so good for amd/x64, Itanium

2010-07-28 Thread Mounir IDRASSI

 Hi,

As far as I know, OpenSSL 1.0 is not meant to be binary compatible with
OpenSSL 0.9.8x, at least for low-level APIs like the AES one you are
referring to.
So, as you suggest, an application should know whether it is using a 0.9.8
libeay32 or a 1.0 one, and depending on that it should use the correct
prototype.
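
If a single binary really has to cope with both series, the version of the
library actually loaded can be checked at run time (a sketch; SSLeay()
returns the version number of the library the call lands in):

#include <stdio.h>
#include <openssl/crypto.h>

int main(void)
{
    unsigned long v = SSLeay();   /* version number of the loaded libeay32/libcrypto */

    if (v >= 0x10000000UL)
        printf("1.0.x series loaded\n");    /* size_t-based prototypes */
    else
        printf("0.9.8 series loaded\n");    /* unsigned long based prototypes */

    printf("%s\n", SSLeay_version(SSLEAY_VERSION));
    return 0;
}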


Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

On 7/28/2010 3:02 PM, John Skodon via RT wrote:

Hi guys:

I'm probably wrong here, but it looks like you've changed some function prototypes, e.g., aes.h, in 
version 1.0.0a to "size_t" from "unsigned long" in 0.9.8o.

E.g.,
0.9.8o, AES.H:
void AES_ctr128_encrypt(const unsigned char *in, unsigned char *out,
  size_t length, const AES_KEY *key,
  unsigned char ivec[AES_BLOCK_SIZE],
  unsigned char ecount_buf[AES_BLOCK_SIZE],
  unsigned int *num);

1.0.0a, AES.H:
void AES_ctr128_encrypt(const unsigned char *in, unsigned char *out,
  const unsigned long length, const AES_KEY *key,
  unsigned char ivec[AES_BLOCK_SIZE],
  unsigned char ecount_buf[AES_BLOCK_SIZE],
  unsigned int *num);

The eccentric LLP64 model of Microsoft Windows AMD64 and Itanium has long and unsigned 
long as 32-bits, and size_t as 64-bits. So, it would seem that code that called 
AES_ctr128_encrypt() compiled under 0.9.8o would push 32-bits less onto the stack on AMD/Itanium than code 
using the 1.0.0a headers.

Just about every other popular compiler model I can think of, primarily W32, 
Unix 32, and Unix 64 LP64 would not experience a problem.

If I'm correct, code calling these functions on AMD/x64 would need maybe two 
different function pointers defined for AES_ctr128_encrypt(), and on the fly 
switching between the two, depending on the version retrieved from LIBEAY32.DLL.

Am I missing something here?

Thanks in advance for your help,
JGS





__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: [openssl.org #2312] Function protos in 1.0.0a: unsigned long changed to size_t not so good for amd/x64, Itanium

2010-07-28 Thread Mounir IDRASSI

 Hi,

I mentioned only the low-level APIs because at the time of my writing I
remembered only these.
Actually, there are also breaking changes in high-level APIs between the
0.9.8 and the 1.0 series. For example, if you look at the EVP high-level
API, you'll notice that new fields have been added to the EVP_PKEY and
EVP_MD structures and that the do_cipher callback in the EVP_CIPHER
structure had its last parameter type changed from unsigned int to
size_t. Also, in the SSL API, the structure SSL_CIPHER saw many fields
removed and new ones added.
These changes, especially the modified structure definitions,
were in most cases needed to accommodate new features that couldn't be
supported using the older definitions.
So, as Steve said in a previous posting, OpenSSL doesn't claim binary
compatibility across major version changes: in general, recompiling
source against different major versions is recommended. And I will add
that in many cases, recompiling is mandatory.


I hope this clarifies things.
Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

On 7/29/2010 12:19 AM, John Skodon wrote:

Hi:

Thanks for the quick reply.

You mentioned ...not meant to be binary compatible...for low-level 
APIs... However, it could have been binary compatible had the size_t 
version of the function been given a new name, e.g.,  
AES_ctr128_encrypt_sz(), while the old entry name had also been 
retained for the unsigned long version. If you're using 
GetProcAddress() to retrieve the function address out of the DLL, 
you've got to decide, based on SSL version, which function pointer var 
to associate with the func and which to use, on the fly. A bit tricky, 
perhaps. With C, you can't have two different prototypes for the same 
function, not to mention, how would you call one versus another?


Anyway, it's probably too late now, even if I could persuade you to 
change. There would still be some tricky func var switching required 
for versions 1.0.0, and 1.0.0a. Also, I haven't diff-ed all the 
function protos in SSL. So, you're saying you didn't do any unsigned 
long to size_t changes in the higher-level prototypes?


JGS

- Original Message - From: Mounir IDRASSI via RT 
r...@openssl.org

To: skod...@earthlink.net
Cc: openssl-dev@openssl.org
Sent: Wednesday, July 28, 2010 8:03 AM
Subject: Re: [openssl.org #2312] Function protos in 1.0.0a: unsigned 
long changed to size_t not so good for amd/x64, Itanium




 Hi,

As far as I know, OpenSSL 1.0 is not meant to be binary compatible with
OpenSSL 0.9.8x, at least for low-level APIs like the AES one you are
referring to.
So, as you suggest it, an application should know if it is using a 0.9.8
libeay32 or an 1.0 one, and depending on that it will use the correct
prototype.

Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

On 7/28/2010 3:02 PM, John Skodon via RT wrote:

Hi guys:

I'm probably wrong here, but it looks like you've changed some 
function prototypes, e.g., aes.h, in version 1.0.0a to  size_t 
from unsigned long in 0.9.8o.


E.g.,
0.9.8o, AES.H:
void AES_ctr128_encrypt(const unsigned char *in, unsigned char *out,
  size_t length, const AES_KEY *key,
  unsigned char ivec[AES_BLOCK_SIZE],
  unsigned char ecount_buf[AES_BLOCK_SIZE],
  unsigned int *num);

1.0.0a, AES.H:
void AES_ctr128_encrypt(const unsigned char *in, unsigned char *out,
  const unsigned long length, const AES_KEY *key,
  unsigned char ivec[AES_BLOCK_SIZE],
  unsigned char ecount_buf[AES_BLOCK_SIZE],
  unsigned int *num);

The eccentric LLP64 model of Microsoft Windows AMD64 and Itanium has 
long and unsigned long as 32-bits, and size_t as 64-bits. So, 
it would seem that code that called AES_ctr128_encrypt() compiled 
under 0.9.8o would push 32-bits less onto the stack on AMD/Itanium 
than code using the 1.0.0a headers.


Just about every other popular compiler model I can think of, 
primarily W32, Unix 32, and Unix 64 LP64 would not experience a 
problem.


If I'm correct, code calling these functions on AMD/x64 would need 
maybe two different function pointers defined for 
AES_ctr128_encrypt(), and on the fly switching between the two, 
depending on the version retrieved from LIBEAY32.DLL.


Am I missing something here?

Thanks in advance for your help,
JGS



Hi guys:
I'm probably wrong here, but it looks like you've changed some function
prototypes, e.g., aes.h, in version 1.0.0a to size_t from 
unsigned long in

0.9.8o.
E.g.,
0.9.8o, AES.H:
void AES_ctr128_encrypt(const unsigned char *in, unsigned char *out,
* size_t length*, const AES_KEY *key,
unsigned char ivec[AES_BLOCK_SIZE],
unsigned char ecount_buf[AES_BLOCK_SIZE],
unsigned int *num);
1.0.0a, AES.H:
void AES_ctr128_encrypt(const unsigned char *in, unsigned char *out,
* const unsigned long length*, const AES_KEY *key,
unsigned char ivec[AES_BLOCK_SIZE],
unsigned char ecount_buf[AES_BLOCK_SIZE],
unsigned int *num);
The eccentric LLP64 model of Microsoft Windows AMD64 and Itanium has 
long and
unsigned long as 32

Re: openssl-1.0.0a and glibc detected sthg ;)

2010-08-07 Thread Mounir IDRASSI

 Hi,

I checked the parameters of your 4008-bit key and it is indeed invalid
(q is not prime).
How did you generate it? It would be surprising if it was done through 
OpenSSL.

Anyway, you must generate a new RSA key.

--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

On 8/7/2010 1:21 PM, Georgi Guninski wrote:

openssl-1.0.0a on ubuntu, debian and arch.
attached a private key and a cert.

~/local/bin/openssl s_server -www -accept  -cert /tmp/CA.cert  -key 
/tmp/CA.key

~/local/bin/openssl s_client -connect localhost:

depth=0 CN = CA
verify return:1
*** glibc detected *** /home/build/local/bin/openssl: double free or corruption 
(fasttop): 0x00979300 ***

  ~/local/bin/openssl rsa -check -in /tmp/CA.key |more
writing RSA key
RSA key error: q not prime # definitely


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: openssl-1.0.0a and glibc detected sthg ;)

2010-08-08 Thread Mounir IDRASSI

 Hi,

You are right: there is a double-free bug in the function 
ssl3_get_key_exchange which leads to a crash if an error occurs.
The bug is at line 1510 of s3_clnt.c, where the variable bn_ctx is not 
reset to NULL after being freed, so BN_CTX_free is called on it a second 
time at line 1650.


I'm attaching a patch against the latest source that corrects this. I'll 
also send to RT.

Thanks for the report.

--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

On 8/8/2010 3:04 PM, Georgi Guninski wrote:

i was pointing out this:

~/local/bin/openssl s_client -connect localhost:

depth=0 CN = CA
verify return:1
*** glibc detected *** /home/build/local/bin/openssl: double free or
corruption (fasttop): 0x00979300 ***

the glibc message means that the current heap operation is on invalid
pointer. the testcase crashed browser links on arch linux too (when
trying to connect to s_server -www).

btw, it seems *important* to use |s_server| from *1.0.0a*


On Sat, Aug 07, 2010 at 02:21:09PM +0300, Georgi Guninski wrote:

openssl-1.0.0a on ubuntu, debian and arch.
attached a private key and a cert.

~/local/bin/openssl s_server -www -accept  -cert /tmp/CA.cert  -key 
/tmp/CA.key

~/local/bin/openssl s_client -connect localhost:

depth=0 CN = CA
verify return:1
*** glibc detected *** /home/build/local/bin/openssl: double free or corruption 
(fasttop): 0x00979300 ***


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


--- E:/dev/libraries/openssl-1.0.latest/ssl/s3_clnt.c.original  Sun Feb 28 
01:24:24 2010
+++ E:/dev/libraries/openssl-1.0.latest/ssl/s3_clnt.c   Sun Aug  8 14:49:30 2010
@@ -1508,6 +1508,7 @@
s->session->sess_cert->peer_ecdh_tmp=ecdh;
ecdh=NULL;
BN_CTX_free(bn_ctx);
+   bn_ctx = NULL;
EC_POINT_free(srvr_ecpoint);
srvr_ecpoint = NULL;
}


Re: [openssl.org #2315] PSS certificates with keysize n*8+1 don't validate

2010-08-08 Thread Mounir IDRASSI

 Hi,

I was not able to reproduce your problem using the same snapshot. I ran 
your commands a dozen times with no error, tested under Linux 32-bit 
(CentOS 5, gcc 4.1.2) and Linux 64-bit (Debian 5, gcc 4.3.2).

What platform/compiler are you using?
What does your openssl.cnf look like? In my tests, I use the one 
installed by the snapshot build.


Is anyone else able to reproduce this problem?

--
Mounir IDRASSI
IDRIX
http://www.idrix.fr


On 8/8/2010 9:40 PM, Hanno Boeck via RT wrote:

It seems that openssl has a problem with pss certificates and uncommon rsa key
sizes. For all keysizes with keysize mod 8 = 1 (or keysize = n*8+1),
verification of a self-signed test cert fails.

I've not yet investigated if it's the generation or the verification code that
is wrong, it's probably related to the emBits variable from the emsa-pss-
verify/encode-code.

Check with this:
openssl genrsa 2007 > test.key
openssl req -batch -new -x509 -sigopt rsa_padding_mode:pss -nodes -days 9
-key test.key > test.crt
openssl verify -check_ss_sig -CAfile test.crt test.crt

Output of the last command is:
139831192893096:error:0407E06D:rsa routines:RSA_verify_PKCS1_PSS:data too
large:rsa_pss.c:127:
139831192893096:error:0D0C5006:asn1 encoding routines:ASN1_item_verify:EVP
lib:a_verify.c:215:


Tested with openssl-SNAP-20100808.

__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: openssl-1.0.0a and glibc detected sthg ;)

2010-08-09 Thread Mounir IDRASSI

 Hi,

Signature verification is done through a modular exponentiation (using 
the public exponent and modulus) that always produces a result, even for 
a bogus RSA modulus.
This result is then checked against the PKCS#1 padding format. Since the 
RSA private key is invalid, the output of this exponentiation differs 
from the DataToBeSigned block produced during certificate creation, so 
the code doesn't find the PKCS#1 padding block header.

So, the signature is bad because the decrypted signature has a bad format!
I hope this clarifies things to you.
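
As an illustration only (not from the original thread; the function below is a
made-up helper and error handling is omitted), the raw result of that public
key operation can be inspected with RSA_public_decrypt() and RSA_NO_PADDING.
For a good signature the recovered block starts with the PKCS#1 header bytes
00 01 followed by FF padding:

#include <stdio.h>
#include <openssl/rsa.h>

/* sig/siglen: the certificate's signature bytes; rsa: the issuer's public key */
static void dump_recovered_block(const unsigned char *sig, int siglen, RSA *rsa)
{
    unsigned char em[1024];   /* large enough for keys up to 8192 bits */
    int i, emlen = RSA_public_decrypt(siglen, sig, em, rsa, RSA_NO_PADDING);

    for (i = 0; i < emlen; i++)
        printf("%02x", em[i]);
    printf("\n");
    /* a valid PKCS#1 v1.5 signature recovers: 00 01 ff ... ff 00 || DigestInfo */
}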

You say at the end of your message that the private key was generated by 
a python wrapper, certainly a wrapper of OpenSSL, but in a previous 
message you said that you generated the key yourself (pen and 
paper). Which statement is correct? Maybe your wrapper wraps something 
else...


Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr



On 8/9/2010 7:44 AM, Georgi Guninski wrote:

is the certificate at http://marc.info/?l=openssl-devm=128118163216952w=2
(with the malformed key) *syntactically* correct modulo the bad self signature?

with 1.0.0a
~/local/bin/openssl verify -check_ss_sig -CAfile /tmp/CA-P.cert /tmp/CA-P.cert


/tmp/CA-P.cert: CN = CA
error 7 at 0 depth lookup:certificate signature failure
139828504536744:error:0407006A:rsa 
routines:RSA_padding_check_PKCS1_type_1:block type is not 01:rsa_pk1.c:100:
139828504536744:error:04067072:rsa routines:RSA_EAY_PUBLIC_DECRYPT:padding 
check failed:rsa_eay.c:699:
139828504536744:error:0D0C5006:asn1 encoding routines:ASN1_item_verify:EVP 
lib:a_verify.c:184:

echo $?
0

i would expect an error about bad self signature, not format stuff.

the private key was generated by a python wrapper, the cert was generated with
ubuntu's 0.9.8k 25 Mar 2009


On Sun, Aug 08, 2010 at 03:21:34PM +0200, Mounir IDRASSI wrote:
__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: Verify X.509 certificate, openssl verify returns bad signature

2010-08-28 Thread Mounir IDRASSI

 Hi,

The problem you are encountering is partly caused by the way OpenSSL 
handles integers whose DER encoded value starts with one or more zeros: 
in this case, OpenSSL removes the leading zero when creating the 
corresponding ASN1_INTEGER structure, so the DER re-computed from this 
structure differs from the original encoding.


In your case, the certificate you are trying to verify has a DER encoded 
serial number of "00 00 65". So, OpenSSL will create an ASN1_INTEGER with 
a value of "00 65". During certificate signature verification, this 
structure is encoded back to DER, which yields the encoded value "00 65". 
Thus, the DER generated for the CertInfo differs from the original one, 
which explains why the signature 
verification fails.
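
As a small illustration (this is not code from the thread; it simply reproduces
the decode/re-encode behaviour described above), the following round trip shows
that the DER produced by OpenSSL no longer matches the original bytes:

#include <stdio.h>
#include <openssl/asn1.h>

int main(void)
{
    /* DER INTEGER with a redundant leading zero, as in the certificate */
    const unsigned char der[] = { 0x02, 0x03, 0x00, 0x00, 0x65 };
    const unsigned char *p = der;
    unsigned char out[8], *q = out;
    int i, outlen;

    ASN1_INTEGER *serial = d2i_ASN1_INTEGER(NULL, &p, sizeof(der));
    outlen = i2d_ASN1_INTEGER(serial, &q);

    for (i = 0; i < outlen; i++)
        printf("%02x ", out[i]);  /* per the behaviour described above, this
                                     no longer matches the 5 input bytes */
    printf("\n");

    ASN1_INTEGER_free(serial);
    return 0;
}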


After some digging, I found that part of the problem is caused by the 
functions c2i_ASN1_INTEGER and d2i_ASN1_UINTEGER in file 
crypto\asn1\a_int.c. At lines 244 and 314, there is an if block that 
removes any leading zeros. Commenting out these blocks solves the DER 
encoding mismatch but the verification still fails because the computed 
digest is different from the recovered one.


I will continue my investigation to find all the culprits.
Meanwhile, the question remains why in the first place the removal of 
the leading zero from the parsed DER encoding was added since this 
clearly have the side effect of making the computed DER different from 
the original one.


Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr


On 8/28/2010 10:43 PM, Goran Rakic wrote:

Hi all,

I have two X.509 certificates MUPCAGradjani.crt and MUPCARoot.crt
downloaded from http://ca.mup.gov.rs/sertifikati-lat.html

Certificate path is MUPCARoot  MUPCAGradjani and I would like to
validate MUPCAGradjani against the other. What I did is to convert both
to PEM format and rename them by hash as efd6650d.0 (Gradjani) and
fc5fe32d.0 (Root) using this script:

 #!/bin/bash
 hash=`openssl x509 -in $1 -inform DER -noout -hash`
 echo Saving $1 as $hash.0
 openssl x509 -in $1 -inform DER -out $hash.0 -outform PEM

Now I run:

 $ openssl verify -CApath . efd6650d.0
 error 7 at 0 depth lookup:certificate signature failure
 16206:error:04077068:rsa routines:RSA_verify:bad signature:rsa_sign.c:255:
 16206:error:0D0C5006:asn1 encoding routines:ASN1_item_verify:EVP 
lib:a_verify.c:173:

Hm, that is not working. What am I doing wrong here?

I am running OpenSSL 0.9.8k 25 Mar 2009 on Ubuntu 10.04 GNU/Linux. I
also have my personal certificate issued by MUPCAGradjani that I would
like to verify but it is failing with the same error (just one level
down):

 $ openssl verify -CApath . qualified.pem
 qualified.pem: /CN=MUPCA Gradjani/O=MUP Republike 
Srbije/L=Beograd/C=Republika Srbija (RS)
 error 7 at 1 depth lookup:certificate signature failure
 16258:error:04077068:rsa routines:RSA_verify:bad signature:rsa_sign.c:255:
 16258:error:0D0C5006:asn1 encoding routines:ASN1_item_verify:EVP 
lib:a_verify.c:173:

When I install downloaded certificates in Windows using Internet
Explorer and doubleclick on my personal certificate (qualified.cer) it
looks valid. I am not sure, but I believe it is doing certificate chain
validation so the certificates and paths should be valid. After all they
are issued by a trustful CA.

Output of openssl x509 -nameopt multiline,utf8,-esc_msb -noout -text
-in $1 looks reasonable for both downloaded certificates and is the
same before and after conversion to PEM (using -inform DER in the first
case). My take on this is that I am not doing conversion properly or
maybe the original certificates are in some other format requiring extra
argument, but I can not find answer in the docs.

How can I properly validate X.509 certificate from
http://ca.mup.gov.rs/sertifikati-lat.html by certificate chain?

Kind regards,
Goran


__
OpenSSL Project http://www.openssl.org
User Support Mailing Listopenssl-us...@openssl.org
Automated List Manager   majord...@openssl.org


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: Verify X.509 certificate, openssl verify returns bad signature

2010-08-29 Thread Mounir IDRASSI
Hi Peter,

Although the certificate's encoding of the serial number field breaks the
BER specification about the minimal-bytes representation, it is known that
many CAs and libraries treat this field as a blob and usually encode it
on a fixed-length basis without caring about leading zeros.
Specifically, Peter Gutmann, in his X.509 Style Guide, says this about this
field: "If you're writing certificate-handling code, just treat the
serial number as a blob which happens to be an encoded integer."

Moreover, major PKI libraries are tolerant vis-a-vis the encoding of the
serial number field of a certificate and they verify successfully the
certificate chain given by the original poster.

For example, NSS, GnuTLS and CryptoAPI accept the given certificates and
verify successfully their trust.

Supporting or not specific broken implementations have always been the
subject of heated debates. Concerning the specific issue here, it's clear
that OpenSSL is too restrictive compared to other major libraries since
this is a minor deviation from the BER specs (i.e. minimal bytes
representation) and thus hurts deployments of real-world certificates.

--
Mounir IDRASSI
IDRIX
http://www.idrix.fr


 The encoding is invalid BER.
 The openssl is tolerant but also destructive in copy.

 whenever  you use openssl x509 -in -out ... you remove one leading 0
 octet.

 IMHO openssl should reject the cert because of invalid encoding.


 On 08/29/2010 04:17 AM, Mounir IDRASSI wrote:
  Hi,

 The problem you are encountering is partly caused by the way OpenSSL
 handles integers whose DER encoded value starts with one or more zeros
 : in this case, OpenSSL removes the leading zero when creating the
 corresponding ASN1_INTEGER structure thus leading to the fact that
 computed DER of this structure and the original one will be different!!

 In your case, the certificate you are trying to verify has a DER
 encoded serial number 00 00 65. So, OpenSSL will create an
 ASN1_INTEGER with a value of 00 65. And in the course of the
 certificate signature verification, this structure will be encoded to
 DER which will lead to a encoded value of 00 65. Thus, the generated
 DER of the CertInfo will be different from the original one, which
 explains why the signature verification fails.

 After some digging, I found that part of the problem is caused by the
 functions c2i_ASN1_INTEGER and d2i_ASN1_UINTEGER in file
 crypto\asn1\a_int.c. At lines 244 and 314, there is an if block that
 removes any leading zeros. Commenting out these blocks solves the DER
 encoding mismatch but the verification still fails because the
 computed digest is different from the recovered one.

 I will continue my investigation to find all the culprits.
 Meanwhile, the question remains why in the first place the removal of
 the leading zero from the parsed DER encoding was added since this
 clearly have the side effect of making the computed DER different from
 the original one.

 Cheers,
 --
 Mounir IDRASSI
 IDRIX
 http://www.idrix.fr


 On 8/28/2010 10:43 PM, Goran Rakic wrote:
 Hi all,

 I have two X.509 certificates MUPCAGradjani.crt and MUPCARoot.crt
 downloaded from http://ca.mup.gov.rs/sertifikati-lat.html

 Certificate path is MUPCARoot  MUPCAGradjani and I would like to
 validate MUPCAGradjani against the other. What I did is to convert both
 to PEM format and rename them by hash as efd6650d.0 (Gradjani) and
 fc5fe32d.0 (Root) using this script:

  #!/bin/bash
  hash=`openssl x509 -in $1 -inform DER -noout -hash`
  echo Saving $1 as $hash.0
  openssl x509 -in $1 -inform DER -out $hash.0 -outform PEM

 Now I run:

  $ openssl verify -CApath . efd6650d.0
  error 7 at 0 depth lookup:certificate signature failure
  16206:error:04077068:rsa routines:RSA_verify:bad
 signature:rsa_sign.c:255:
  16206:error:0D0C5006:asn1 encoding routines:ASN1_item_verify:EVP
 lib:a_verify.c:173:

 Hm, that is not working. What am I doing wrong here?

 I am running OpenSSL 0.9.8k 25 Mar 2009 on Ubuntu 10.04 GNU/Linux. I
 also have my personal certificate issued by MUPCAGradjani that I would
 like to verify but it is failing with the same error (just one level
 down):

  $ openssl verify -CApath . qualified.pem
  qualified.pem: /CN=MUPCA Gradjani/O=MUP Republike
 Srbije/L=Beograd/C=Republika Srbija (RS)
  error 7 at 1 depth lookup:certificate signature failure
  16258:error:04077068:rsa routines:RSA_verify:bad
 signature:rsa_sign.c:255:
  16258:error:0D0C5006:asn1 encoding routines:ASN1_item_verify:EVP
 lib:a_verify.c:173:

 When I install downloaded certificates in Windows using Internet
 Explorer and doubleclick on my personal certificate (qualified.cer) it
 looks valid. I am not sure, but I believe it is doing certificate chain
 validation so the certificates and paths should be valid. After all
 they
 are issued by a trustful CA.

 Output of openssl x509 -nameopt multiline,utf8,-esc_msb -noout -text
 -in $1 looks reasonable

Re: inconsistent timings for rsa sign/verify with 100K bit rsa keys

2010-08-29 Thread Mounir IDRASSI

 Hi,

The big difference in the sign operation timings between the two keys is 
not caused by any property of the second key parameters (like their 
hamming weight) but it is rather the expected manifestation of two 
counter-measures implemented by OpenSSL. Those are :

   - RSA Blinding that protects against timing attacks.
   - Verification of CRT output that protects against fault attacks.

Each of these counter-measures involves a modular exponentiation by the 
public exponent e.
When the public exponent e is equal to 2^16-1, then the cost of these 
two counter-measures is negligible.
When e is big (in your case the same size as the modulus), then these 
counter-measures add an overhead that is equal to twice the cost of an 
RSA verification.
In your case, for the second key, this overhead is expected to be equal 
to 2x21 min = 42 min. The cost of a pure CRT signature without 
counter-measures is roughly 5 min (taken from CRT of key1 for whom 
counter-measures are negligible).
This gives us an expected running time for a signature with key2 of 42 + 
5 = 47 min, that is very close to what you actually obtained.


You can deactivate the blinding counter-measure by calling the function 
RSA_blinding_off. On the other hand, CRT output verification 
counter-measure can't be deactivated.
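
For benchmarking purposes only, a minimal sketch (illustrative; disabling
blinding in production code weakens the protection against timing attacks):

    /* rsa holds the loaded RSA private key */
    RSA_blinding_off(rsa);        /* disable blinding before timing RSA_sign()  */
    /* ... run the signing benchmark here ... */
    RSA_blinding_on(rsa, NULL);   /* re-enable it afterwards (NULL: internal BN_CTX) */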

I hope this clarifies the behavior you have encountered.

Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr


On 8/29/2010 10:51 AM, Georgi Guninski wrote:

inconsistent timings for rsa sign/verify with 100K bit rsa keys.

using pycrypto i generated two valid 100 000 bit rsa keys with the same modulus:

key1: log(n)=100K, e=2^16-1,d=BIG
key2: log(n)=100K, e=BIG, d=BIG
(note key1 and key2 share the same modulus)

recompiled openssl with increased parameters so the keys are usable.

i expect the keys to be slow, but this benchmarks quite surprise me:

        sign    verify
key1    5min    1sec
key2    48min   21min
(tested on patched openssl1.0.0a)

is it normal for key2 to be so much slower compared with the signing of key1 (the 1sec 
verification with the low exponent is clear to me)?

signature verification passes for both keys, and the big exponents seem to be of the right 
size. Both keys passed "rsa check" with the number of pseudoprimality tests reduced 
(to 3).

pycrypto is much faster with key2, and a general purpose math program suggests 
sign/verify should take about 5min for big exponents (phi(n)).

the tarball with the private keys + 2 certs (190K) is at:

http://seclists.org/fulldisclosure/2010/Aug/384
__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: inconsistent timings for rsa sign/verify with 100K bit rsa keys

2010-08-30 Thread Mounir IDRASSI

 On 8/30/2010 12:20 PM, Georgi Guninski wrote:

you write "sign operation"; does it explain the "verification operation" -
timings for signing with the low pub exponent key vs verification with the big 
exponent?



To answer this question, one must remember that the signing is done 
using the CRT parameters (p, q, dp, dq and q^-1 mod p) and that 
theoretically it is 4 times faster than doing a raw exponentiation with 
the private exponent d (see section 14.75 in the Handbook of Applied 
Cryptography for a justification).
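
Roughly, and as a back-of-the-envelope sketch only (assuming schoolbook
multiplication), the factor of 4 comes from CRT replacing one exponentiation
modulo the full n-bit modulus by two exponentiations with half-size exponents
on half-size operands:

2 \times \frac{1}{2} \times \frac{1}{4} = \frac{1}{4}

(two exponentiations, each with half as many modular multiplications, each
multiplication on n/2-bit operands costing about a quarter of a full-size one).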

Your figures exactly meet this. I'll explain.

The verification with key2 involves a modular exponentiation with a 
public exponent of 100 001 bits with a hamming weight equal to 49945.
The private exponent of key1 is  100 002 bits and it has a hamming 
weight of 49 922.
Thus, a modular exponentiation with the public exponent of key2 will 
cost roughly the same as the modular exponentiation with the private 
exponent of key1.
Moreover, as I explained at the beginning of this email, the actual 
signing is done using CRT, which is 4 times faster than the modular 
exponentiation with the private exponent.


So, the modular exponentiation with the public exponent of key2 is 4 
times slower than the signing operation of key1 and it should cost 4 x 5 
min = 20 min, which is very close to the 21 min you actually obtained.


Does this answer your question?

--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

On 8/30/2010 12:20 PM, Georgi Guninski wrote:

On Mon, Aug 30, 2010 at 06:10:23AM +0200, Mounir IDRASSI wrote:

  Hi,

The big difference in the sign operation timings between the two
keys is not caused by any property of the second key parameters
(like their hamming weight) but it is rather the expected
manifestation of two counter-measures implemented by OpenSSL. Those
are :
- RSA Blinding that protects against timing attacks.
- Verification of CRT output that protects against fault attacks.


ok, thanks.


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


CVE-2010-2939

2010-09-03 Thread Mounir IDRASSI
Hi,

The very simple patch I submitted to RT, for the issue CVE-2010-2939, on
August 8th under reference #2314 has not been applied yet.
Is there any reason for that? I hope it was not lost in translation...

Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr
__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: TLS 1.1 / 1.0 Interoperation

2010-10-09 Thread Mounir IDRASSI

 Hi Paul,

I was not able to reproduce your problem using that snapshot. I set up 
an SSL server using SSLv23_server_method and set the options SSL_OP_ALL 
| SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3 as you did: I always have 
s->version equal to 0x0301 as expected, and the test you mentioned is OK 
since s->client_version is also equal to 0x0301.

Same test can be done using the command line :
openssl s_server -accept 443 -key server.pem -cert server.pem -no_ssl2 
-no_ssl3 -bugs


Can you post a sample code that exposes the problem?

By the way, I detected a double free in the implementation of 
ssl3_send_server_key_exchange in this snapshot. I'll see if it has been 
already corrected, otherwise I'll send a patch for it.


Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

On 08/10/2010 18:55, Paul Suhler wrote:

Hi, everyone.

[I'm re-sending this to the developers list.]

I've found that when a server built with
openssl-1.0.1-stable-SNAP-20101004 receives a Client Hello from a client
specifying TLS 1.0 (version = 0x0301), the connection is rejected for a
bad version.  This appears to be implemented in ssl3_get_client_hello()
by:

	if ((s->version == DTLS1_VERSION && s->client_version > s->version) ||
	    (s->version != DTLS1_VERSION && s->client_version < s->version))
		{
		SSLerr(SSL_F_SSL3_GET_CLIENT_HELLO, SSL_R_WRONG_VERSION_NUMBER);

In the SSL_CTX, I'm setting options SSL_OP_ALL | SSL_OP_NO_SSLv2 |
SSL_OP_NO_SSLv3.  I see no options that would be forcing TLS 1.1 only.

However, RFC 4346 Appendix E says:

Similarly, a TLS 1.1  server that wishes to interoperate with TLS 1.0

or SSL 3.0 clients SHOULD accept SSL 3.0 client hello messages and
respond with a SSL 3.0 server hello if an SSL 3.0 client hello with a

version field of {3, 0} is received, denoting that this client does
not support TLS.  Similarly, if a SSL 3.0 or TLS 1.0 hello with a
version field of {3, 1} is received, the server SHOULD respond with a

TLS 1.0 hello with a version field of {3, 1}.

Am I misunderstanding the requirements of the RFC, or is this part of
the fix for the renegotiation exploit?  (I'm not renegotiating when this
happens; it's the initial connection attempt that's rejected.)

Thanks very much,

Paul

_
Paul A. Suhler | Firmware Engineer | Quantum Corporation | Office:
949.856.7748 | paul.suh...@quantum.commailto:paul.suh...@quantum.com

Preserving the World's Most Important Data. Yours.(tm)



--
The information contained in this transmission may be confidential. Any 
disclosure, copying, or further distribution of confidential information is not 
permitted unless such privilege is explicitly granted in writing by Quantum. 
Quantum reserves the right to have electronic communications, including email 
and attachments, sent across its networks filtered through anti virus and spam 
software programs and retain such messages in order to comply with applicable 
data security and retention requirements. Quantum is not responsible for the 
proper and complete transmission of the substance of this communication or for 
any delay in its receipt.



__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: TLS 1.1 / 1.0 Interoperation

2010-10-10 Thread Mounir IDRASSI

 Hi Paul,

The use of an XXX_server_method function in a server defines the minimal 
client version it supports.

SSLv23_server_method   => SSLv2
SSLv3_server_method    => SSLv3
TLSv1_server_method    => TLS 1.0
TLSv1_1_server_method  => TLS 1.1.
Thus, the error you are getting is normal: you told OpenSSL to support 
only TLS 1.1 and that's why TLS 1.0 clients are rejected.
Use TLSv1_server_method if you want to support both TLS 1.0 and TLS 1.1 
clients.
By the way, setting SSL_OP_NO_SSLv2 and SSL_OP_NO_SSLv3 is useless since 
the server only supports TLS 1.0/1.1.


Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

On 10/10/2010 6:28 AM, Paul Suhler wrote:

Hi, Mounir.

In the server, I use TLSv1_1_server_method, resulting in s->version ==
0x0302 (TLS 1.1).  In the client, I use TLSv1_client_method to get TLS
1.0.  When the server sees s->client_version == 0x0301, shouldn't it
change s->version to 0x0301 and operate thereafter in 1.0 mode?

Thanks for the warning about the double free.

Cheers,

Paul

_
Paul A. Suhler | Firmware Engineer | Quantum Corporation | Office:
949.856.7748 | paul.suh...@quantum.com
Preserving the World's Most Important Data. Yours.(tm)

-Original Message-
From: owner-openssl-...@openssl.org
[mailto:owner-openssl-...@openssl.org] On Behalf Of Mounir IDRASSI
Sent: Saturday, October 09, 2010 6:37 PM
To: openssl-dev@openssl.org
Subject: Re: TLS 1.1 / 1.0 Interoperation


   Hi Paul,

I was not able to reproduce your problem using that snapshot. I set up
an SSL server using SSLv23_server_method and set the options SSL_OP_ALL
| SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3 as you did : I always have
s->version equal to 0x0301 as expected and the test you mentioned is OK
since s->client_version is also equal to 0x0301.
Same test can be done using the command line :
openssl s_server -accept 443 -key server.pem -cert server.pem -no_ssl2
-no_ssl3 -bugs

Can you post a sample code that exposes the problem?

By the way, I detected a double free in the implementation of
ssl3_send_server_key_exchange in this snapshot. I'll see if it has been
already corrected, otherwise I'll send a patch for it.

Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

On 08/10/2010 18:55, Paul Suhler wrote:

Hi, everyone.

[I'm re-sending this to the developers list.]

I've found that when a server built with
openssl-1.0.1-stable-SNAP-20101004 receives a Client Hello from a
client specifying TLS 1.0 (version = 0x0301), the connection is
rejected for a bad version.  This appears to be implemented in
ssl3_get_client_hello()
by:

	if ((s->version == DTLS1_VERSION && s->client_version > s->version) ||
	    (s->version != DTLS1_VERSION && s->client_version < s->version))
		{
		SSLerr(SSL_F_SSL3_GET_CLIENT_HELLO, SSL_R_WRONG_VERSION_NUMBER);

In the SSL_CTX, I'm setting options SSL_OP_ALL | SSL_OP_NO_SSLv2 |
SSL_OP_NO_SSLv3.  I see no options that would be forcing TLS 1.1 only.

However, RFC 4346 Appendix E says:

 Similarly, a TLS 1.1  server that wishes to interoperate with TLS
1.0

 or SSL 3.0 clients SHOULD accept SSL 3.0 client hello messages and
 respond with a SSL 3.0 server hello if an SSL 3.0 client hello
with a

 version field of {3, 0} is received, denoting that this client

does

 not support TLS.  Similarly, if a SSL 3.0 or TLS 1.0 hello with a
 version field of {3, 1} is received, the server SHOULD respond
with a

 TLS 1.0 hello with a version field of {3, 1}.

Am I misunderstanding the requirements of the RFC, or is this part of
the fix for the renegotiation exploit?  (I'm not renegotiating when
this happens; it's the initial connection attempt that's rejected.)

Thanks very much,

Paul
__
__
_
Paul A. Suhler | Firmware Engineer | Quantum Corporation | Office:
949.856.7748 | paul.suh...@quantum.commailto:paul.suh...@quantum.com

Preserving the World's Most Important Data. Yours.(tm)



--
The information contained in this transmission may be confidential.

Any disclosure, copying, or further distribution of confidential
information is not permitted unless such privilege is explicitly granted
in writing by Quantum. Quantum reserves the right to have electronic
communications, including email and attachments, sent across its
networks filtered through anti virus and spam software programs and
retain such messages in order to comply with applicable data security
and retention requirements. Quantum is not responsible for the proper
and complete transmission of the substance of this communication or for
any delay in its receipt.
__
OpenSSL Project http://www.openssl.org

Re: TLS 1.1 / 1.0 Interoperation

2010-10-13 Thread Mounir IDRASSI

 Hi Paul,

I'm glad to see that my post helped you even if it was not completely 
correct.
I answered too quickly and I wrongly extrapolated the 
SSLv23_server_method behavior to the others.
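
For the record, a rough sketch of the server setup the thread converged on
(flexible version negotiation with SSLv2 and SSLv3 disabled; error handling
omitted and the function name is illustrative):

    #include <openssl/ssl.h>

    /* assumes SSL_library_init() has already been called */
    SSL_CTX *create_server_ctx(void)
    {
        SSL_CTX *ctx = SSL_CTX_new(SSLv23_server_method());
        SSL_CTX_set_options(ctx, SSL_OP_ALL | SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3);
        /* accepts TLS 1.0 and, where the build supports it, TLS 1.1 clients */
        return ctx;
    }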


Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

On 10/13/2010 8:10 PM, Paul Suhler wrote:

Hi, Mounir.

Thanks for your help; we can now negotiate between 1.0 and 1.1.  My only
comment is that -- based on our testing -- only SSLv23_{server,
client}_method allows negotiation.  TLSv1_*_method will *not* accept TLS
1.1 connections.  And SSL3_*_method will not accept TLS connections.

This is actually documented in
http://www.openssl.org/docs/ssl/SSL_CTX_new.html, although it doesn't
(yet) mention TLS 1.1.  For the benefit of whoever works on that
documentation I'd recommend that it be changed to specify 1.0:

TLSv1_method(void), TLSv1_server_method(void), TLSv1_client_method(void)

A TLS/SSL connection established with these methods will only understand
the TLSv1.0 protocol. A client will send out TLSv1.0 client hello
messages and will indicate that it only understands TLSv1.0. A server
will only understand TLSv1.0 client hello messages. This especially
means, that it will not understand SSLv2 client hello messages which are
widely used for compatibility reasons, see SSLv23_*_method(). It will
also not understand SSLv3 client hello messages.

And if you really want consistency, change TLSv1_method to
TLSv1_0_method, etc.

Unless the intention is really that TLSv1_method will accept 1.1, but
that's a lot more work.

Cheers,

Paul

_
Paul A. Suhler | Firmware Engineer | Quantum Corporation | Office:
949.856.7748 | paul.suh...@quantum.com
Preserving the World's Most Important Data. Yours.(tm)

-Original Message-
From: owner-openssl-...@openssl.org
[mailto:owner-openssl-...@openssl.org] On Behalf Of Mounir IDRASSI
Sent: Sunday, October 10, 2010 3:58 PM
To: openssl-dev@openssl.org
Subject: Re: TLS 1.1 / 1.0 Interoperation


   Hi Paul,

The use of an XXX_server_method function in a server defines the minimal
client version it supports.
  SSLv23_server_method   =>  SSLv2
  SSLv3_server_method    =>  SSLv3
  TLSv1_server_method    =>  TLS 1.0
  TLSv1_1_server_method  =>  TLS 1.1.
Thus, the error you are getting is normal: you told OpenSSL to support
only TLS 1.1 and that's why TLS 1.0 clients are rejected.
Use TLSv1_server_method if you want to support both TLS 1.0 and TLS 1.1
clients.
By the way, setting SSL_OP_NO_SSLv2 and SSL_OP_NO_SSLv3 is useless since
the server only supports TLS 1.0/1.1.

Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

On 10/10/2010 6:28 AM, Paul Suhler wrote:

Hi, Mounir.

In the server, I use TLSv1_1_server_method, resulting in s->version ==
0x0302 (TLS 1.1).  In the client, I use TLSv1_client_method to get TLS
1.0.  When the server sees s->client_version == 0x0301, shouldn't it
change s->version to 0x0301 and operate thereafter in 1.0 mode?

Thanks for the warning about the double free.

Cheers,

Paul
__
__
_
Paul A. Suhler | Firmware Engineer | Quantum Corporation | Office:
949.856.7748 | paul.suh...@quantum.com Preserving the World's Most
Important Data. Yours.(tm)

-Original Message-
From: owner-openssl-...@openssl.org
[mailto:owner-openssl-...@openssl.org] On Behalf Of Mounir IDRASSI
Sent: Saturday, October 09, 2010 6:37 PM
To: openssl-dev@openssl.org
Subject: Re: TLS 1.1 / 1.0 Interoperation


Hi Paul,

I was not able to reproduce your problem using that snapshot. I set up
an SSL server using SSLv23_server_method and set the options
SSL_OP_ALL
| SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3 as you did : I always have
s->version equal to 0x0301 as expected and the test you mentioned is
OK
since s->client_version is also equal to 0x0301.
Same test can be done using the command line :
openssl s_server -accept 443 -key server.pem -cert server.pem -no_ssl2
-no_ssl3 -bugs

Can you post a sample code that exposes the problem?

By the way, I detected a double free in the implementation of
ssl3_send_server_key_exchange in this snapshot. I'll see if it has
been already corrected, otherwise I'll send a patch for it.

Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

On 08/10/2010 18:55, Paul Suhler wrote:

Hi, everyone.

[I'm re-sending this to the developers list.]

I've found that when a server built with
openssl-1.0.1-stable-SNAP-20101004 receives a Client Hello from a
client specifying TLS 1.0 (version = 0x0301), the connection is
rejected for a bad version.  This appears to be implemented in
ssl3_get_client_hello()
by:

	if ((s->version == DTLS1_VERSION && s->client_version > s->version) ||
	    (s->version != DTLS1_VERSION && s->client_version < s->version))
		{
		SSLerr(SSL_F_SSL3_GET_CLIENT_HELLO, SSL_R_WRONG_VERSION_NUMBER

Concerning [openssl.org #2240] and kEECDH handshake failures

2010-11-25 Thread Mounir IDRASSI

Hi,

As discovered 7 months ago, OpenSSL wrongly returns an error if the 
ServerHello is missing the Supported Point Formats extension. This 
contradicts RFC 4492, which clearly states that in this case the client 
should interpret it as meaning that only the uncompressed point format 
is supported.
For the moment, the patch I sent at the time under ticket #2240 has not 
been accepted yet.


As this issue is becoming more widespread thanks to the generalization 
of ECC support, the correction should IMHO be present in the next 
release.

Is this already planned for 1.0.0c?

Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: BN_NIST_521 or BN_NIST_512 ?

2011-02-01 Thread Mounir IDRASSI

Hi,

NIST's FIPS PUB 186-3 defines curve P-521 (taken from NSA Suite B). Take 
a look at: 
http://csrc.nist.gov/publications/fips/fips186-3/fips_186-3.pdf .


You are certainly confusing it with Brainpool ECC curve P-512.

Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

On 2/2/2011 1:40 AM, Paul Suhler wrote:


Hi, everyone.

The file crypto/bn/bn_nist.c seems to have some mis-named symbols, e.g.,

BN_NIST_521_TOP

BN_get0_nist_prime_521

BN_get0_nist_prime_521

BN_NIST_521_RSHIFT

BN_nist_mod_521

…etc.

It looks like they all should be “512”.

I see this at least as early as 0.9.8o, and it’s been carried forward 
into 1.0.0c, openssl-1.0.1-stable-SNAP-20110201, and openssl-SNAP-20110201.


Thanks,

Paul

_
Paul A. Suhler, PhD | Firmware Engineer | Quantum Corporation | Office:
949.856.7748 | paul.suh...@quantum.com

Preserving the World's Most Important Data. Yours.(tm)


The information contained in this transmission may be confidential. 
Any disclosure, copying, or further distribution of confidential 
information is not permitted unless such privilege is explicitly 
granted in writing by Quantum. Quantum reserves the right to have 
electronic communications, including email and attachments, sent 
across its networks filtered through anti virus and spam software 
programs and retain such messages in order to comply with applicable 
data security and retention requirements. Quantum is not responsible 
for the proper and complete transmission of the substance of this 
communication or for any delay in its receipt.


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


New Timing Attack on OpenSSL ECDSA

2011-05-25 Thread Mounir IDRASSI

Hi all,

Is there any plan for implementing counter measures against the newly 
discovered vulnerability in ECDSA operations of OpenSSL?
For those not aware of it, here is the US-CERT link of this 
vulnerability : http://www.kb.cert.org/vuls/id/536044
Here is also the original paper that contains the vulnerability details 
: http://eprint.iacr.org/2011/232.pdf


The patch suggested by the paper seems simple enough. It can be enhanced 
by adding a random multiple of the order to the scalar k. Is there any 
objection to getting this merged into the OpenSSL source?
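
For illustration only, a rough sketch of that enhancement (this is not the
patch that was actually committed; the variable names k, order and ctx loosely
follow ecs_ossl.c, the usual err: cleanup label is assumed, and error handling
is abbreviated):

    /* blind the scalar: add a random multiple of the group order so that
     * the value fed to the point multiplication no longer depends only on
     * the secret k ([order]G is the point at infinity, so [k]G is unchanged) */
    BIGNUM *blind = BN_new();
    if (!BN_rand(blind, 32, -1, 0)) goto err;         /* small random factor     */
    if (!BN_mul(blind, blind, order, ctx)) goto err;  /* blind := blind * order  */
    if (!BN_add(k, k, blind)) goto err;               /* k := k + blind          */
    BN_free(blind);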


Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr
__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: New Timing Attack on OpenSSL ECDSA

2011-05-25 Thread Mounir IDRASSI

Hi all,

The paper clearly indicates that they successfully mounted a practical 
attack against the OpenSSL TLS implementation when it uses elliptic 
curves and ECDHE_ECDSA based ciphers. They used the OpenSSL s_server 
utility, and the versions indicated in their paper are 0.9.8o and 1.0.1a. 
I'm not aware of any changes in this part of OpenSSL since these 
versions were released, so all current OpenSSL versions are vulnerable.


David: Can you explain your argument a little more? I couldn't find the 
code referenced in your email in the OpenSSL source, and I'm not sure how 
the details you gave relate to the OpenSSL implementation.


As I stated in my first email, the paper comes with a temporary patch 
that should mitigate this issue. Is anyone working on this? I 
think it should be taken seriously even if ECDSA based ciphers are not 
widely used.


Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

On 5/25/2011 7:20 PM, Paul Suhler wrote:


Hi, David.

So what is the meaning of the “Affected” status for OpenSSL? Is that 
simply because ECDSA is supported by OpenSSL? Or did they actually 
test against an implementation that exhibited the vulnerability?


Either way, FIPS 140-3 will only require protection against 
non-invasive attacks at level 3 and higher.


Cheers,

Paul

_
Paul A. Suhler | Firmware Engineer | Quantum Corporation | Office:
949.856.7748 | paul.suh...@quantum.com

Preserving the World's Most Important Data. Yours.(tm)

From: owner-openssl-...@openssl.org 
[mailto:owner-openssl-...@openssl.org] On Behalf Of David McGrew
Sent: Wednesday, May 25, 2011 8:25 AM
To: John Foley
Cc: openssl-dev@openssl.org
Subject: Re: New Timing Attack on OpenSSL ECDSA

Hi John,

thanks for forwarding. There has been a short thread on this on 
attack-interest yesterday and today.


The way that these timing attacks work is that the attacker will time 
a lot of crypto operations (in this case the ECDSA signing operation) 
and then exploit the fact that the time taken by the algorithm depends 
to some extent on the private key or on another secret value (in this 
case the secret k used in ECDSA signing). If the signing operation 
has a branch on the bits of the secret key, so that a "1" bit will 
cause an operation that takes longer than if a "0" bit is present, 
then this will cause a key-dependent timing. In elliptic curve 
cryptography, the secret is always the exponent used in the 
exponentiation routine.


I think our implementation is safe against this type of attack. We 
use a sliding window method for exponentiation, so that the 
branching takes place on windows. The exponent is broken up into 
4-bit windows. There is a loop over all the windows, and each window 
gets processed by a switch statement to determine what 
ec_group_element should get multiplied into the accumulator r. In 
the case in which all the bits of the window are zero, then this is a 
multiplication by the identity element, and we can skip that 
multiplication if we want. However, I put in a dummy operation to make 
sure that a multiplication gets done even when the window is all zero:


case 0x0:

/* multiply by IE, which we don't need to actually perform */

//printf("multiplying r by 1\n"); // ec_group_elementH_print(x0); 
printf ("\n");


#ifdef DUMMY_MULT

ec_group_mult(dum, dum, r, C);

#endif

break;

So as long as the compiler doesn't optimize away that ec_group_mult() 
operation, the execution time of the exponentiation routine ought to 
be independent of the exponent.


I have skimmed over the paper, and it turns out that the dependency on 
the exponent that they exploit is the fact that the openssl 
exponentiation for binary curves skips over the initial zero bits in 
the exponent. The signature only leaks information when there are a 
significant number of leading zeros.


It would not be hard to write a function that collected timing 
information based on different exponents, and could estimate/detect 
this sort of vulnerability. That would be a *great* thing to add to 
our test suite. But since it doesn't need to go inside the canister, 
let's put off implementing it until after next Tues ;-)


David

On May 25, 2011, at 7:52 AM, John Foley wrote:



David,

Would your ECDSA implementation be subject to the following timing attack?


 Original Message 

Subject: New Timing Attack on OpenSSL ECDSA
Date: Wed, 25 May 2011 15:59:58 +0200
From: Mounir IDRASSI mounir.idra...@idrix.net
Reply-To: openssl-dev@openssl.org
Organization: IDRIX
To: openssl-dev@openssl.org

Hi all,
  
Is there any plan for implementing counter measures against the newly

discovered vulnerability in ECDSA operations

Re: [CVS] OpenSSL: openssl/ CHANGES openssl/crypto/ecdsa/ ecs_ossl.c

2011-05-27 Thread Mounir IDRASSI

Hi ,

I agree with Bruce: we should default to constant-time behavior, so the 
code must definitely use #ifndef instead of #ifdef, since the patch 
makes the scalar a fixed-bit-length value.

I think the paper authors got confused when they wrote the code.

Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

On 5/27/2011 4:10 PM, Bruce Stephens wrote:

Dr. Stephen Henson st...@openssl.org writes:

[...]


   +#ifdef ECDSA_POINT_MUL_NO_CONSTTIME
   +/* We do not want timing information to leak the length of k,
   + * so we compute G*k using an equivalent scalar of fixed
   + * bit-length. */
   +
   +if (!BN_add(k, k, order)) goto err;
   +if (BN_num_bits(k) <= BN_num_bits(order))
   +if (!BN_add(k, k, order)) goto err;
   +#endif /* def(ECDSA_POINT_MUL_NO_CONSTTIME) */
   +

Almost certainly my misunderstanding, but isn't the sense of this wrong?

That is, surely the new code should be added if we want the CONSTTIME
behaviour (i.e., if NO_CONSTTIME is not defined), and we'd want that by
default so it should be #ifndef rather than #ifdef?

(I agree it's #ifdef in the eprint too, which increases the likelyhood
that I'm just misunderstanding something.)

[...]

__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: [openssl-dev] [openssl.org #2782] BUG report: RSA private key serializer

2012-04-02 Thread Mounir IDRASSI

Hi,

I'm afraid Erwann is right: you are mistaken in your understanding of 
RSA and DER encoding rules.
RSA specifies the size of the modulus and its two primes (in order to be 
immune against some factoring attacks) but it says nothing about the 
size of the exponents.
Erwann's explanation of DER encoding is very clear. Even Microsoft 
implementation of Crypto API and CNG adheres to this. So, as he pointed 
it out, there must be another explanation for the .NET error you are 
encountering.


Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr


On 4/2/2012 3:28 PM, Tamir Khason via RT wrote:

Hello, Erwann
This is not related to .NET. An integer is not only a value, but also a size.
Both exponents and the coefficient should be the same length
(according to the RSA definition, both are integers), so those numbers should be
serialized into ASN1_INTEGER. If, for some reason, you want to have
integers of different sizes (for me it's wrong, but it might be your
decision because of size optimization), you should use variable-size
serialization.

This is what is this bug about.


On Mon, Apr 2, 2012 at 3:52 PM, Erwann Abalea via RT r...@openssl.org wrote:

Bonjour,

On 02/04/2012 13:21, Tamir Khason via RT wrote:

There is a bug in ASN.1 DER serializer used to generate RSA private
keys. It trims trailing zeros despite the DER specification. Please
see the full info and reproduction steps in my blog
http://khason.net/dev/openssl-bug-or-why-some-private-keys-cannot-be-used-for-net/#comments


You're wrong. You're mixing two things, length encoding and value encoding
(as in TLV).

In DER, there are no indefinite-length objects, because the purpose of
DER is to have a single, non-ambiguous representation of an object.
Since an indefinite-length (i.e. not known in advance) object can also
be represented by its definite-length counterpart by rewriting it once
the object length is known, an indefinite length can't be the
only representation of this object.

Next, when writing a DER object, its serialization needs to be unique. A
set of rules is applied to enforce this. For integers, these rules tell
us that the lowest number of bytes must be used, while also ensuring that
negative numbers are expressed in 2's complement form (highest bit set to
1). Therefore, while you can express the number 0x32 with the following
serialization forms, all representing the same number:
   32
   0032
   000032
only the first representation is a DER one. The others encode the same
value, but with useless leading bytes.

Negative numbers cannot have a heading 00 octet, because the highest
order bit would then be equal to 0, and the number considered positive.

Therefore, the number 0x92 can be serialized as:
   92
   0092
   000092
only the second form is a DER one. The first has its highest order bit
set to 1, so the number is considered negative and its value is then -0x6E. The
third form has an unnecessary leading 00 octet.

Of course, adding trailing 00 octets is forbidden, as this would
completely change the encoded number, just as writing "70" is not the same
as writing "7".

In your "bad" example key, exponent2's length is smaller than
exponent1's and the coefficient's. They're not guaranteed to be of the
same length. Exponent{1,2} and the coefficient are results of calculations
(d mod (p-1), d mod (q-1), and q^-1 mod p respectively), and their
magnitudes can vary.
An "a mod b" value is not guaranteed to be the same size as "b" (consider for
example 2^32+1 mod 2^32 = 1, which is not a 32-bit integer).

If your bad key cannot be used in .NET, there's another reason.

--
Erwann ABALEA
-
podoclaste: casse-pieds







__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: [openssl.org #1681] 0.9.8h bug report

2008-05-29 Thread Mounir IDRASSI via RT
Hi,

You should not touch the file sha1-586.pl because the problem is located
in the file x86ms.pl that is dedicated to MASM. In this file, the line 273
containing $extra should be removed to be able to compile the generated
assembly files.

Cheers,
-- 
Mounir IDRASSI
IDRIX
http://www.idrix.fr

On Thu, May 29, 2008 8:13 am, Craig Muchinsky via RT wrote:
 I pulled down 0.9.8h today and attempted to build on a win32 machine,
 but ran into an issue when compiling the generated s1_win32.asm file. It
 looks like there is a syntax error in sha1-586.pl at line 152, the
 second argument (16) is causing the following error:



  ml /Cp /coff /c /Cx /Focrypto\sha\asm\s1_win32.obj
 .\crypto\sha\asm\s1_win32.asm

Assembling: .\crypto\sha\asm\s1_win32.asm

   Microsoft (R) Macro Assembler Version 8.00.50727.762

   Copyright (C) Microsoft Corporation.  All rights reserved.

   .\crypto\sha\asm\s1_win32.asm(13) : error A2008: syntax error :
 integer



   NMAKE : fatal error U1077: 'C:\Program Files (x86)\Microsoft Visual
 Studio 8\VC\bin\ml.EXE' : return code '0x1'

   Stop.



 By simply removing the ',16' from line 152 everything compiles fine.



 Thanks,

 Craig Muchinsky



__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   [EMAIL PROTECTED]


[openssl.org #2118] [PATCH] handle ECDSA_sign error correctly in pkey_ec_sign (the correct one)

2009-11-30 Thread Mounir IDRASSI via RT
Hi,

This is a patch against openssl-1.0.0-stable-SNAP-20091129 which 
corrects the way the error code returned by ECDSA_sign is handled in the 
function pkey_ec_sign.

Cheers,
-- 
Mounir IDRASSI
IDRIX
http://www.idrix.fr



--- H:/Dev/libraries/openssl-1.0.0-stable-SNAP-20091129/crypto/ec/ec_pmeth.c
Wed Nov  5 19:38:56 2008
+++ H:/Dev/libraries/openssl-1.0.0-stable-SNAP-20091129/crypto/ec/ec_pmeth_1.c  
Sun Nov 29 20:58:37 2009
@@ -143,7 +143,7 @@
 
ret = ECDSA_sign(type, tbs, tbslen, sig, sltmp, ec);
 
-   if (ret < 0)
+   if (ret <= 0)
return ret;
*siglen = (size_t)sltmp;
return 1;


Re: [openssl.org #2240] Missing Supported Point Formats Extension in ServerHello should be ignored

2010-04-24 Thread Mounir IDRASSI via RT
Hi,

I'm attaching a simple patch that should correct this behavior.
Can you test it and tell us the results?
Thanks,

--
Mounir IDRASSI
IDRIX
http://www.idrix.fr


 Dear openssl support,

 I investigated the following web servers.
 But all of them failed with the same error.

 1) apache-tomcat-6.0.26 + bcprov-ext-jdk16-145 + jdk1.6.0_17 (centos 5)
 2) jboss-4.2.3.GA + bcprov-jdk15 + jdk1.6.0_17 (centos 5)
 3) IIS 7 (windows 7)

 On the other hand, many browsers except for opera successfully connect to
 the servers.
 Something wrong?

 Regards,
 Koichi Sugimoto.

 2010/4/20 Jack Lloyd via RT r...@openssl.org


 RFC 4492 says:

   A client that receives a ServerHello message containing a Supported
   Point Formats Extension MUST respect the server's choice of point
   formats during the handshake (cf. Sections 5.6 and 5.7).  If no
   Supported Point Formats Extension is received with the ServerHello,
   this is equivalent to an extension allowing only the uncompressed
   point format.

 OpenSSL 1.0.0 rejects such a negotiation, always requiring the
 extension to exist in the ServerHello:

 CONNECTED(0003)
  TLS 1.0 Handshake [length 00cd], ClientHello
01 00 00 c9 03 01 4b cc f2 87 fc 1d 05 2d 0c 1f
4a 74 8b 8c 6f 20 c3 56 fb 35 4a 73 b0 9c e0 c1
6f 34 1b 10 f9 9f 00 00 5c c0 14 c0 0a 00 39 00
38 00 88 00 87 c0 0f c0 05 00 35 00 84 c0 12 c0
08 00 16 00 13 c0 0d c0 03 00 0a c0 13 c0 09 00
33 00 32 00 9a 00 99 00 45 00 44 c0 0e c0 04 00
2f 00 96 00 41 00 07 c0 11 c0 07 c0 0c c0 02 00
05 00 04 00 15 00 12 00 09 00 14 00 11 00 08 00
06 00 03 00 ff 01 00 00 44 00 0b 00 04 03 00 01
02 00 0a 00 34 00 32 00 01 00 02 00 03 00 04 00
05 00 06 00 07 00 08 00 09 00 0a 00 0b 00 0c 00
0d 00 0e 00 0f 00 10 00 11 00 12 00 13 00 14 00
15 00 16 00 17 00 18 00 19 00 23 00 00
  TLS 1.0 Handshake [length 002a], ServerHello
02 00 00 26 03 01 20 3f 72 c5 29 9f 22 b1 a6 af
4b 81 31 eb 4c 85 bf bb 3a a5 8b b8 21 86 16 c5
7c 84 5c 73 4a 4a 00 c0 08 00
 139742562498200:error:1411809D:SSL
 routines:SSL_CHECK_SERVERHELLO_TLSEXT:tls invalid ecpointformat
 list:t1_lib.c:1440:
 139742562498200:error:14092113:SSL
 routines:SSL3_GET_SERVER_HELLO:serverhello tlsext:s3_clnt.c:942:

 OpenSSL 1.0.0 29 Mar 2010
 built on: Mon Apr 19 19:52:35 EDT 2010
 platform: linux-x86_64
 options:  bn(64,64) rc4(1x,char) des(idx,cisc,16,int) idea(int)
 blowfish(idx)
 compiler: gcc -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H
 -m64 -DL_ENDIAN -DTERMIO -O3 -Wall -DMD32_REG_T=int -DOPENSSL_IA32_SSE2
 -DOPENSSL_BN_ASM_MONT -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM
 -DAES_ASM -DWHIRLPOOL_ASM
 OPENSSLDIR: /usr/local/ssl

 __
 OpenSSL Project http://www.openssl.org
 Development Mailing List   openssl-dev@openssl.org
 Automated List Manager   majord...@openssl.org





t1_lib.c.diff
Description: Binary data


Re: [openssl.org #2245] [PATCH] Add /Zi to VC++ CFLAG in debug configuration (1.0.0 and 0.9.8)

2010-04-27 Thread Mounir IDRASSI via RT
Hi,

I deliberately added /Zi only to the debug build, because it is not 
always desirable to add symbols to release builds, whereas it is always 
needed for debug ones.

--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

On 4/26/2010 11:58 PM, William A. Rowe Jr. wrote:
 On 4/26/2010 1:18 PM, Mounir IDRASSI via RT wrote:

 Hi,

 This patch adds the /Zi switch to CFLAG in the debug configuration in
 order to permit stepping inside OpenSSL code during debug sessions.
 It applied to the latest snapshots of 1.0.0 and 0.9.8 source trees.
  
 It should be in base_cflags, since it is required to produce something from a 
 crash
 dump that can be analyzed.  Apparently half of a patch was applied without 
 thought
 to this, you are certainly right that in the current state, the win32 build 
 results
 are worthless to someone creating a release and to someone trying to debug 
 the build.

 __
 OpenSSL Project http://www.openssl.org
 Development Mailing List   openssl-dev@openssl.org
 Automated List Manager   majord...@openssl.org



__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


Re: [openssl.org #2312] Function protos in 1.0.0a: unsigned long changed to size_t not so good for amd/x64, Itanium

2010-07-28 Thread Mounir IDRASSI via RT
  Hi,

As far as I know, OpenSSL 1.0 is not meant to be binary compatible with 
OpenSSL 0.9.8x, at least for low-level APIs like the AES one you are 
referring to.
So, as you suggest, an application should know whether it is using a 0.9.8 
libeay32 or a 1.0 one and, depending on that, use the correct 
prototype.

Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

On 7/28/2010 3:02 PM, John Skodon via RT wrote:
 Hi guys:

 I'm probably wrong here, but it looks like you've changed some function 
 prototypes, e.g., aes.h, in version 1.0.0a to "size_t" from "unsigned long" 
 in 0.9.8o.

 E.g.,
 0.9.8o, AES.H:
 void AES_ctr128_encrypt(const unsigned char *in, unsigned char *out,
   const unsigned long length, const AES_KEY *key,
   unsigned char ivec[AES_BLOCK_SIZE],
   unsigned char ecount_buf[AES_BLOCK_SIZE],
   unsigned int *num);

 1.0.0a, AES.H:
 void AES_ctr128_encrypt(const unsigned char *in, unsigned char *out,
   size_t length, const AES_KEY *key,
   unsigned char ivec[AES_BLOCK_SIZE],
   unsigned char ecount_buf[AES_BLOCK_SIZE],
   unsigned int *num);

 The eccentric LLP64 model of Microsoft Windows AMD64 and Itanium has long 
 and unsigned long as 32-bits, and size_t as 64-bits. So, it would seem 
 that code that called AES_ctr128_encrypt() compiled under 0.9.8o would push 
 32-bits less onto the stack on AMD/Itanium than code using the 1.0.0a headers.

 Just about every other popular compiler model I can think of, primarily W32, 
 Unix 32, and Unix 64 LP64 would not experience a problem.

 If I'm correct, code calling these functions on AMD/x64 would need maybe two 
 different function pointers defined for AES_ctr128_encrypt(), and on the fly 
 switching between the two, depending on the version retrieved from 
 LIBEAY32.DLL.

 Am I missing something here?

 Thanks in advance for your help,
 JGS





__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


[openssl.org #2314] [PATCH] fix double free in ssl3_get_key_exchange in case of error

2010-08-08 Thread Mounir IDRASSI via RT
  Hi,

This patch corrects a double free bug in ssl3_get_key_exchange 
(s3_clnt.c) when an error happens during the connection to a server.

Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr


--- E:/dev/libraries/openssl-1.0.latest/ssl/s3_clnt.c.original  Sun Feb 28 
01:24:24 2010

+++ E:/dev/libraries/openssl-1.0.latest/ssl/s3_clnt.c   Sun Aug  8 14:49:30 2010

@@ -1508,6 +1508,7 @@

s->session->sess_cert->peer_ecdh_tmp=ecdh;
ecdh=NULL;
BN_CTX_free(bn_ctx);
+   bn_ctx = NULL;
EC_POINT_free(srvr_ecpoint);
srvr_ecpoint = NULL;
}


Re: [openssl.org #2240] Missing Supported Point Formats Extension in ServerHello should be ignored

2010-10-01 Thread Mounir IDRASSI via RT
  Hi Steven,

Can you please check the protocol and the cipher used for each case 
(SSLv3_server_method vs SSLv23_server_method) using the same client?
The only explanation for the difference you are seeing is that when you 
use SSLv3_server_method, the ECPointFormats TLS extension is sent with 
the ServerHello message, whereas it is not sent when SSLv23_server_method 
is used.

--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

On 10/1/2010 12:59 AM, Steven Noonan via RT wrote:
 Hi Mounir,

 The patch you attached to PR 2240 works perfectly here. Was having
 difficulty connecting to an OpenFire Jabber server via Gajim, Psi, and
 Kopete, but now I'm not.

 Another fix I discovered for the Psi/Kopete issue was to use
 SSLv3_server_method() instead of SSLv23_server_method() in qca-ossl.
 Any idea why this makes a difference?

 - Steven


 __
 OpenSSL Project http://www.openssl.org
 Development Mailing List   openssl-dev@openssl.org
 Automated List Manager   majord...@openssl.org


__
OpenSSL Project http://www.openssl.org
Development Mailing List   openssl-dev@openssl.org
Automated List Manager   majord...@openssl.org


[openssl.org #2358] [PATCH] Correct a double free bug in ssl3_send_server_key_exchange

2010-10-10 Thread Mounir IDRASSI via RT
  Hi,

This patch against the latest 1.0.1 stable snapshot corrects a double 
free bug in function ssl3_send_server_key_exchange (s3_srvr.c) that 
occurs when an ECDHE cipher is used, leading to a crash.

Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

--- C:/Dev/libraries/openssl-1.0.1-stable/ssl/s3_srvr.c.originalSun Oct 
10 03:42:36 2010

+++ C:/Dev/libraries/openssl-1.0.1-stable/ssl/s3_srvr.c Sun Oct 10 03:47:02 2010

@@ -1768,6 +1768,7 @@

(unsigned char *)encodedPoint, 
encodedlen);
OPENSSL_free(encodedPoint);
+   encodedPoint = NULL;
p += encodedlen;
}
 #endif