Initialization of RNG in FIPS mode

2014-10-08 Thread Roger No-Spam
Hi,
 
I'm experimenting with porting openssl-1.0.1/openssl-fips-2.0 to a proprietary 
platform.  FIPS_mode_set was failing for me, and some investigation showed that 
it was the RSA POST (power-on self-test) tests that failed, and that it was related to RNG 
initialization. I found that if I added the following code before my 
FIPS_mode_set() call, FIPS mode was entered successfully. 
 
 {
 DRBG_CTX *ctx;
 size_t i;
 /*FIPS_set_error_callbacks(put_err_cb, add_err_cb); */
 for (i = 0; i < sizeof(dummy_entropy); i++)
  dummy_entropy[i] = i & 0xff;
 if (entropy_stick)
  memcpy(dummy_entropy + 32, dummy_entropy + 16, 16);
 ctx = FIPS_get_default_drbg();
 FIPS_drbg_init(ctx, NID_aes_256_ctr, DRBG_FLAG_CTR_USE_DF);
 FIPS_drbg_set_callbacks(ctx, dummy_cb, 0, 16, dummy_cb, 0);
 FIPS_drbg_instantiate(ctx, dummy_entropy, 10);
 FIPS_rand_set_method(FIPS_drbg_method());
 }

This looks a bit complicated. I've been trying to find information on how RNG 
initialization is supposed to work in FIPS mode, but I have not been able to 
find anything. How is this supposed to be handled? I fear that I unknowingly 
have ripped something out that is causing this.
 
Can anyone give me a description of RNG initialization in FIPS mode, please?
 
--
R
 
  

Re: Initialization of RNG in FIPS mode

2014-10-08 Thread Kevin Fowler
Roger,
The FIPS_mode_set() call normally calls OPENSSL_init(), which calls
RAND_init_fips(), which initializes/instantiates the FIPS DRBG (including
seeding it with good entropy from a call to the default DRBG bytes() method).
This all happens only if OpenSSL is built with OPENSSL_FIPS defined. So check
that it is defined, and check that FIPS_mode_set() really does call OPENSSL_init().

You are right that the rsa/dsa selftests fail if the FIPS DRBG is not
seeded, and your workaround does accomplish the seeding. But I assume you want
the DRBG seeded with good entropy from the system/kernel rather than a fixed pattern.
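
For what it's worth, if you do have to wire this up by hand on your platform,
the shape of it would be something like the rough sketch below. This is only a
sketch under assumptions: it assumes a POSIX-style /dev/urandom, the callback
and function names are mine (not from the FIPS module), and the header names
may differ in your build; substitute whatever real entropy source your
platform actually has.

 #include <stdio.h>
 #include <string.h>
 #include <openssl/objects.h>
 #include <openssl/fips_rand.h>   /* DRBG API from the FIPS 2.0 module */

 static unsigned char seed_buf[64];

 /* Hypothetical entropy callback: pull min_len bytes from the OS RNG.
  * Returning 0 makes instantiation fail, which is what you want on error. */
 static size_t get_os_entropy(DRBG_CTX *dctx, unsigned char **pout,
                              int entropy, size_t min_len, size_t max_len)
 {
     FILE *f = fopen("/dev/urandom", "rb");   /* assumption: platform has it */
     size_t n = min_len > sizeof(seed_buf) ? sizeof(seed_buf) : min_len;
     if (f == NULL)
         return 0;
     if (fread(seed_buf, 1, n, f) != n) {
         fclose(f);
         return 0;
     }
     fclose(f);
     *pout = seed_buf;
     return n;
 }

 static void cleanup_entropy(DRBG_CTX *dctx, unsigned char *out, size_t olen)
 {
     memset(out, 0, olen);                    /* wipe the seed material */
 }

 /* Same sequence as your snippet, but fed from the OS instead of a pattern. */
 static int seed_default_drbg(void)
 {
     DRBG_CTX *ctx = FIPS_get_default_drbg();

     if (FIPS_drbg_init(ctx, NID_aes_256_ctr, DRBG_FLAG_CTR_USE_DF) <= 0)
         return 0;
     /* entropy block length of 16 just mirrors your code */
     if (FIPS_drbg_set_callbacks(ctx, get_os_entropy, cleanup_entropy, 16,
                                 get_os_entropy, cleanup_entropy) <= 0)
         return 0;
     if (FIPS_drbg_instantiate(ctx, (const unsigned char *)"my app", 6) <= 0)
         return 0;
     return FIPS_rand_set_method(FIPS_drbg_method());
 }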

Kevin

On Wed, Oct 8, 2014 at 9:02 AM, Roger No-Spam roger_no_s...@hotmail.com
wrote:

 Hi,

 I'm experimenting with porting openssl-1.0.1/openssl-fips-2.0 to a
 proprietary platform.  FIPS_mode_set was failing for me, and some
 investigation showed that it was the RSA POST (power-on self-test) tests that failed, and that
 it was related to RNG initialization. I found that if I added the following
 code before my FIPS_mode_set() call, FIPS mode was entered successfully.

  {
  DRBG_CTX *ctx;
  size_t i;
  /*FIPS_set_error_callbacks(put_err_cb, add_err_cb); */
  for (i = 0; i < sizeof(dummy_entropy); i++)
   dummy_entropy[i] = i & 0xff;
  if (entropy_stick)
   memcpy(dummy_entropy + 32, dummy_entropy + 16, 16);
  ctx = FIPS_get_default_drbg();
  FIPS_drbg_init(ctx, NID_aes_256_ctr, DRBG_FLAG_CTR_USE_DF);
  FIPS_drbg_set_callbacks(ctx, dummy_cb, 0, 16, dummy_cb, 0);
  FIPS_drbg_instantiate(ctx, dummy_entropy, 10);
  FIPS_rand_set_method(FIPS_drbg_method());
  }

 This looks a bit complicated. I've been trying to find information on how
 RNG initialization is supposed to work in FIPS mode, but I have not been
 able to find anything. How is this supposed to be handled? I fear that I
 unknowingly have ripped something out that is causing this.

 Can anyone give me a description of RNG initialization in FIPS mode,
 please?

 --
 R




[openssl.org #3559] Weak digest for (EC)DH key exchange when connecting to SNI defined host

2014-10-08 Thread Hubert Kario via RT
# Start a server:
openssl req -x509 -newkey rsa:2048 -keyout localhost.key -out localhost.crt 
-subj /CN=localhost -nodes -batch
openssl req -x509 -newkey rsa:2048 -keyout server.key -out server.crt -subj 
/CN=server -nodes -batch
openssl s_server -key localhost.key -cert localhost.crt -key2 server.key -cert2 
server.crt -servername server

# connect to it using a new enough client (openssl 1.0.2 at least):
openssl s_client -connect localhost:4433 </dev/null 2>/dev/null | grep 'Peer signing digest'
openssl s_client -connect localhost:4433 -servername server </dev/null 2>/dev/null | grep 'Peer signing digest'

The results are respectively:
Peer signing digest: SHA512
Peer signing digest: SHA1

The virtual host should use the same signing digest as the
default host (that is, the strongest digest mutually supported by
the client and the server).
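
(For reference, the digest in question can also be read programmatically from
a 1.0.2 client after the handshake; a minimal sketch, assuming "ssl" is a
connected SSL object from an s_client-like program:)

 #include <stdio.h>
 #include <openssl/ssl.h>
 #include <openssl/objects.h>

 /* Sketch only: print the digest the server used to sign its (EC)DH
  * parameters, the same value s_client reports as "Peer signing digest". */
 static void print_peer_signing_digest(SSL *ssl)
 {
     int nid = NID_undef;
     if (SSL_get_peer_signature_nid(ssl, &nid) == 1)
         printf("Peer signing digest: %s\n", OBJ_nid2sn(nid));
     else
         printf("Peer signing digest: not available\n");
 }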

The issue is present in at least openssl-1.0.1e-39.fc20.x86_64
(Fedora package, where it also affects Apache mod_ssl) as well
as the current development master, e0fdea3e49e7454.

In master it also affects SuiteB mode, where it causes SNI
to stop working entirely:
openssl ecparam -name prime256v1 -out p256
openssl req -x509 -newkey ec:p256 -keyout server.key -out server.crt -subj 
/CN=server -nodes -batch -sha256
openssl req -x509 -newkey ec:p256 -keyout localhost.key -out localhost.crt 
-subj /CN=localhost -nodes -batch -sha256
openssl s_server -key localhost.key -cert localhost.crt -key2 server.key -cert2 
server.crt -servername server -cipher SUITEB128

In different terminal:
$ openssl s_client -connect localhost:4433 </dev/null 2>/dev/null | grep 'Peer signing digest'
Peer signing digest: SHA256

$ openssl s_client -connect localhost:4433 -servername server </dev/null
WARNING: can't open config file: /usr/local/ssl/openssl.cnf
CONNECTED(0003)
140627487106720:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert 
handshake failure:s23_clnt.c:757:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 390 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
---

While at the same time the server reports:
ACCEPT
Hostname in TLS extension: server
Switching server context.
ERROR
140475191449248:error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared 
cipher:s3_srvr.c:1405:
shutting down SSL
CONNECTION CLOSED

-- 
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Email: hka...@redhat.com
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic



[openssl.org #3560] OpenSSL selects weak digest for (EC)DH kex signing in TLSv1.2 when connecting to SNI virtual server

2014-10-08 Thread Tomas Mraz via RT
When connecting to a virtual, SNI-defined host, openssl selects the SHA1
digest instead of SHA512, which is what it selects for the default host.

Steps to Reproduce:
1. openssl req -x509 -newkey rsa:2048 -keyout localhost.key -out localhost.crt 
-subj /CN=localhost -nodes -batch
2. openssl req -x509 -newkey rsa:2048 -keyout server.key -out server.crt -subj 
/CN=server -nodes -batch
3. openssl s_server -key localhost.key -cert localhost.crt -key2 server.key 
-cert2 server.crt -servername server

In another console, using OpenSSL 1.0.2:
1. openssl s_client -connect localhost:4433 </dev/null 2>/dev/null | grep 'Peer signing digest'
2. openssl s_client -connect localhost:4433 -servername server </dev/null 2>/dev/null | grep 'Peer signing digest'


Actual results:
1. Peer signing digest: SHA512
2. Peer signing digest: SHA1

Expected results:
1. Peer signing digest: SHA512
2. Peer signing digest: SHA512

See also: https://bugzilla.redhat.com/show_bug.cgi?id=1150033

I've investigated this a little and found that the second SSL context
that is used when the server receives the servername extension does not
have a full copy of the settings from the main context. In particular,
tls1_process_sigalgs() is not properly called for it. I am not sure what
the proper fix would be, though.
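
For context, the server side here is the usual servername-callback pattern,
roughly the sketch below (callback and variable names are illustrative, not
taken from mod_ssl or s_server); the switch happens in SSL_set_SSL_CTX(),
which looks like exactly the point where the processed sigalgs get lost:

 #include <string.h>
 #include <openssl/ssl.h>

 static SSL_CTX *default_ctx;   /* "localhost" certificate/key */
 static SSL_CTX *sni_ctx;       /* "server" certificate/key */

 /* Typical SNI callback: pick the second context when the client asks for
  * the virtual host name.  Registered with
  *   SSL_CTX_set_tlsext_servername_callback(default_ctx, servername_cb); */
 static int servername_cb(SSL *s, int *ad, void *arg)
 {
     const char *name = SSL_get_servername(s, TLSEXT_NAMETYPE_host_name);
     (void)ad;
     (void)arg;

     if (name != NULL && strcmp(name, "server") == 0)
         SSL_set_SSL_CTX(s, sni_ctx);   /* digest/sigalg state gets dropped here */

     return SSL_TLSEXT_ERR_OK;
 }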

-- 
Tomas Mraz
No matter how far down the wrong road you've gone, turn back.
  Turkish proverb
(You'll never know whether the road is wrong though.)




[openssl.org #3561] bug report: openssl prints wrong pre_master secret in debug mode

2014-10-08 Thread Shuai Li via RT
Hello,

After defining the macro TLS_DEBUG during debugging, I found that openssl
printed the wrong pre-master secret. What it is actually printing out is the
master secret!

The bug is in ssl/t1_enc.c at line 635. My OS is Ubuntu 12.04, with openssl-1.0.1i.


Best,

Shuai



Re: [openssl.org #3538] 1.0.1h make test fails on test_verify - Debian x64

2014-10-08 Thread Andrey Kulikov
Strange, but now everything works fine on the same machine.
It seems it was just fluctuations of the world ether...

On 21 September 2014 15:08, Andrey Kulikov via RT r...@openssl.org wrote:

 # uname -a
 Linux deb7 3.2.0-4-amd64 #1 SMP Debian 3.2.60-1+deb7u3 x86_64 GNU/Linux
 # gcc --version
 gcc-4.7.real (Debian 4.7.2-5) 4.7.2

 ./config && make && make test
 fails with the following:
 ...
 make[1]: *** [test_verify] Error 2
 make: *** [tests] Error 2
 make[1]: Leaving directory `openssl-1.0.1h/test'

 All other versions pass make test OK, including 1.0.1i.



[openssl.org #3560] OpenSSL selects weak digest for (EC)DH kex signing in TLSv1.2 when connecting to SNI virtual server

2014-10-08 Thread Stephen Henson via RT
On Wed Oct 08 19:12:41 2014, tm...@redhat.com wrote:
 When connecting to a virtual, SNI defined host openssl selects SHA1
 digest instead of SHA512, as it does for the default host.


The cause is that some negotiated parameters are wiped when SSL_set_SSL_CTX is
called. Try the attached patch.

Steve.
--
Dr Stephen N. Henson. OpenSSL project core developer.
Commercial tech support now available see: http://www.openssl.org



cert.diff
Description: Binary data