Hi,

I've run into this error too, but in my case it is seen intermittently during
regression runs when there is a lot of traffic, i.e. a lot of invocations of
OpenSSL's DRBG.

What could cause the continuous RNG test to fail for the default DRBG in
FIPS mode?
My first guess was low entropy, and I was considering feeding more entropy
into the kernel's pool. However, David's post says "No matter how strange or
low-entropy the seeding, this should happen only with vanishingly small
probability.", and "But the design of the system needs to combine that with
some other things like date&time, processor serial number, etc., that
together make a value that will never occur more than once.". I understand
and agree with these statements, but I am still trying to understand the real
cause of this failure.
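
For reference, my understanding of the continuous test is roughly the
following. This is only a simplified sketch of the FIPS 140-2 comparison, not
the actual OpenSSL FIPS module code; the block size and the generate_raw()
name are placeholders:

    /* Simplified model of a FIPS 140-2 continuous RNG test: each newly
     * generated block is compared with the block from the previous call,
     * and the test fails only if the two consecutive blocks are identical.
     * (In the strict FIPS 140-2 formulation the very first block is saved
     * and not output; that detail is omitted here for brevity.) */
    #include <string.h>

    #define BLOCK_LEN 16   /* hypothetical block size for illustration */

    static unsigned char last_block[BLOCK_LEN];
    static int have_last = 0;

    /* stand-in for the underlying DRBG output routine */
    extern int generate_raw(unsigned char *out, size_t len);

    int generate_with_crngt(unsigned char *out)
    {
        unsigned char block[BLOCK_LEN];

        if (!generate_raw(block, BLOCK_LEN))
            return 0;
        if (have_last && memcmp(block, last_block, BLOCK_LEN) == 0)
            return 0;   /* continuous test failure: two identical blocks */
        memcpy(last_block, block, BLOCK_LEN);
        have_last = 1;
        memcpy(out, block, BLOCK_LEN);
        return 1;
    }

If that model is right, a failure means two consecutive output blocks were
byte-for-byte identical, which is why I find it hard to attribute to entropy
alone.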

Also, I saw that the DRBG is fed entropy only at instantiation (done once)
and at reseed, and the reseed interval for the default DRBG
(NID_aes_256_ctr) is 2^24 generate calls. So, unless this failure is
happening at reseed time (which I haven't verified yet), the entropy
shouldn't be directly related to this failure; my mental model is sketched
below. Please correct me if my understanding is not accurate.
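
Again, this is only an illustration of my understanding, not the OpenSSL
implementation; get_entropy(), mix_in() and output_block() are placeholders:

    /* Simplified model of when the DRBG touches the entropy source:
     * entropy is requested only at instantiation and when the generate
     * counter reaches the reseed interval (2^24 generate calls by
     * default, as mentioned above). */
    #include <stddef.h>

    #define RESEED_INTERVAL (1UL << 24)

    struct drbg_model {
        unsigned long generate_count;
        /* internal state omitted */
    };

    extern int get_entropy(unsigned char *buf, size_t len);
    extern void mix_in(struct drbg_model *d, const unsigned char *seed,
                       size_t len);
    extern void output_block(struct drbg_model *d, unsigned char *out,
                             size_t len);

    int model_generate(struct drbg_model *d, unsigned char *out, size_t len)
    {
        if (d->generate_count >= RESEED_INTERVAL) {
            unsigned char seed[48];
            /* the entropy source is consulted only here
             * (and once at instantiation) */
            if (!get_entropy(seed, sizeof(seed)))
                return 0;
            mix_in(d, seed, sizeof(seed));
            d->generate_count = 0;
        }
        output_block(d, out, len);
        d->generate_count++;
        return 1;
    }

So between reseeds the output depends only on the internal state, which is
why I would expect entropy quality to matter only at instantiate/reseed time.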

Thanks,
Neha.


