Re: [openssl-dev] DRBG entropy

2016-07-27 Thread Paul Dale
John's spot on the mark here.  Testing gives a maximum entropy, not a minimum.  
While a maximum is certainly useful, it isn't what you really need to guarantee 
your seeding.

A simple example which passes the NIST SP800-90B first draft tests with flying 
colours:

seed = π - 3
for i = 1 to n do
    seed = frac(exp(1 + 2*seed))
    entropy[i] = 256 * frac(2^20 * seed)

where frac is the fractional-part function and exp is the exponential function.
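For the avoidance of doubt, the iteration above can be written as a few lines of
Python (a sketch; the identifier names are mine):

```python
import math

def pseudo_entropy(n):
    """Fully deterministic byte stream that nonetheless rates as
    near-perfect IID under the SP800-90B first-draft tests."""
    seed = math.pi - 3                       # frac(pi)
    out = bytearray()
    for _ in range(n):
        seed = math.modf(math.exp(1 + 2 * seed))[0]           # frac(exp(1+2*seed))
        out.append(int(256 * math.modf(2 ** 20 * seed)[0]))   # bits 21-28 as a byte
    return bytes(out)
```

Feeding a few hundred kilobytes of this into the test suite reproduces the result
described below.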

I.e. start with the fractional part of the transcendental π and iterate with a 
simple exponential function.  Take bits 21-28 of each iterate as a byte of 
"entropy".  Clearly there is really zero entropy present: the formula is simple 
and deterministic; the floating point arithmetic operations will all be 
correctly rounded; the exponential is evaluated in a well behaved area of its 
curve where there will be minimal rounding concerns; the bits being extracted 
are nowhere near where any rounding would occur and any rounding errors will 
likely be deterministic anyway.

Yet this passes the SP800-90B (first draft) tests as IID with 7.89 bits of 
entropy per byte!

IID is a statistical term meaning independent and identically distributed, which 
means that no sample depends on any of the other samples (clearly not the case 
here) and that all samples are drawn from the same distribution.  The 7.89 bits 
of entropy per byte is pretty much as high as the NIST tests will ever report.  
According to the test suite, we've got an "almost perfect" entropy source.


There are other test suites if you've got sufficient data.  The Dieharder suite 
is okay, however the TestU01 suite is the most discerning one I'm currently aware of.  
Still, neither will provide an entropy estimate for you.  For either of these 
you will need a lot of data -- since you've got a hardware RNG, this shouldn't 
be a major issue.  Avoid the "ent" program, it seems to overestimate the 
maximum entropy present.


John's suggestion of collecting additional "entropy" and running it through a 
cryptographic hash function is probably the best you'll be able to achieve 
without a deep investigation.  As for how much data to collect, be 
conservative.  If the estimate of the maximum entropy is 2.35 bits per byte, 
round this down to 2 bits per byte, 1 bit per byte or even ½ bit per byte.  The 
lower you go the more likely you are to be getting the entropy you want.  The 
trade-off is the time for the hardware to generate the data and for the 
processor to hash it together.
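To make the arithmetic concrete, here is a minimal sketch of conservative
over-collection followed by hashing.  The `read_raw` callback and the assumed
1 bit/byte rate are illustrative, not prescriptive:

```python
import hashlib
import math
import os

def conservative_seed(read_raw, assumed_bits_per_byte=1.0, target_bits=256):
    """Collect raw bytes sized by a deliberately pessimistic entropy
    rate (not the measured 2.35 bits/byte estimate), then compress
    them with SHA-256.  read_raw(n) is a hypothetical callback that
    returns n bytes from the hardware RNG."""
    n = math.ceil(target_bits / assumed_bits_per_byte)  # e.g. 256 bytes at 1 bit/byte
    raw = read_raw(n)
    return hashlib.sha256(raw).digest()  # 32-byte seed for the DRBG

# Usage sketch -- os.urandom stands in for the hardware source:
seed = conservative_seed(os.urandom)
```

Halving the assumed rate simply doubles the read size, which is exactly the
time trade-off mentioned above.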


Pauli
-- 
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption 
Phone +61 7 3031 7217
Oracle Australia

-----Original Message-----
From: John Denker [mailto:s...@av8n.com] 
Sent: Wednesday, 27 July 2016 11:40 PM
To: openssl-dev@openssl.org
Subject: Re: [openssl-dev] DRBG entropy

On 07/27/2016 05:13 AM, Leon Brits wrote:
> 
> I have a chip (FDK RPG100) that generates randomness, but the 
> SP800-90B python test suite indicated that the chip only provides
> 2.35 bits/byte of entropy. According to the FIPS test lab, the lowest value 
> from all the tests is used as the entropy and 2 is too low. I must 
> however make use of this chip.


Re: [openssl-dev] Session resume with different TLS version?

2016-07-27 Thread David Woodhouse
On Tue, 2016-07-26 at 23:52 +, David Benjamin wrote:
> Ah, you've hit upon a slew of odd behaviors which only got fully fixed on the 
> master branch.

Thanks for the comprehensive response. I'm not going to touch that with
a barge-pole then.

> (I'm not familiar with DTLS1_BAD_VER, but if it's a different
> protocol version, it sounds like you should configure it like other
> versions and not mess with session resumption bugs.)

It's a different protocol version, and the *only* way it ever gets used
in the modern world is with a session resume (because that's how
Cisco's AnyConnect VPN works). Hence the thought process was that if
the session resume would *force* the protocol version (which you now
told me it shouldn't, for the client), then I wouldn't *need* any other
method of specifying it.

In RT#3711 we had previously talked about the option of enabling full
support via something like DTLSv0_9_client_method(), and it had been
decided not to — on the basis that the existing SSL_OP_CISCO_ANYCONNECT
hack was sufficient.

That's less true now with the generic DTLS_client_method() and
DTLS_ANY_VERSION, because the SSL_OP_CISCO_ANYCONNECT hack needs to be
propagated into a lot more places and it actually ends up being
*cleaner* to implement it "properly" AFAICT.

I've updated my submission in PR#1296 accordingly; thanks for the
feedback.

https://github.com/openssl/openssl/pull/1296

-- 
David Woodhouse            Open Source Technology Centre
david.woodho...@intel.com  Intel Corporation

-- 
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev


Re: [openssl-dev] Load secrets to context.

2016-07-27 Thread Dr. Stephen Henson
On Wed, Jul 27, 2016, john gloster wrote:

> Hi,
> 
> Can we use both the following APIs in the same application to load
> certificate to the SSL context?
> 
> *SSL_CTX_use_certificate_file()*
> *SSL_CTX_use_certificate_chain_file()*
> 

You should only use one. If you use SSL_CTX_use_certificate_chain_file() the
file needs to contain the certificates in order from the EE (end-entity)
certificate to the root, in PEM format.

If you want to do things differently you can load the PEM files manually and
use functions like SSL_CTX_use_certificate() and SSL_CTX_add0_chain_cert().

Steve.
--
Dr Stephen N. Henson. OpenSSL project core developer.
Commercial tech support now available see: http://www.openssl.org


Re: [openssl-dev] [TLS1 PRF]: unknown algorithm

2016-07-27 Thread Dr. Stephen Henson
On Wed, Jul 27, 2016, Catalin Vasile wrote:

> Hi,
> 
> I'm trying to use the EVP_PKEY_TLS1_PRF interface.
> 
> The first thing I do inside my code is:
> pctx = EVP_PKEY_CTX_new_id(EVP_PKEY_TLS1_PRF, NULL);
> But pctx is NULL after that call.
> 
> I've looked at test/evp_test.c and it does not seem to do anything special, 
> but it is successful in running the TLS1-PRF tests.
> 
> Is there something I'm missing?
> 

Is it linking against an older version of OpenSSL?

Steve.
--
Dr Stephen N. Henson. OpenSSL project core developer.
Commercial tech support now available see: http://www.openssl.org


[openssl-dev] Load secrets to context.

2016-07-27 Thread john gloster
Hi,

Can we use both the following APIs in the same application to load
certificate to the SSL context?

*SSL_CTX_use_certificate_file()*
*SSL_CTX_use_certificate_chain_file()*

If we can how to use them?

Thanks in advance.


[openssl-dev] [TLS1 PRF]: unknown algorithm

2016-07-27 Thread Catalin Vasile
Hi,

I'm trying to use the EVP_PKEY_TLS1_PRF interface.

The first thing I do inside my code is:
pctx = EVP_PKEY_CTX_new_id(EVP_PKEY_TLS1_PRF, NULL);
But pctx is NULL after that call.

I've looked at test/evp_test.c and it does not seem to do anything special, but 
it is successful in running the TLS1-PRF tests.

Is there something I'm missing?

Best regards,
Cata


Re: [openssl-dev] DRBG entropy

2016-07-27 Thread Leon Brits
John,

Thanks for your reply.

The SP800-90B suite comprises several types of tests, and the test with the 
lowest output is taken as the maximum entropy capability of the chip. That is 
how I understand it from the FIPS lab.

For the FIPS validation, when using an NDRNG, that source must feed the DRBG 
directly (per the FIPS lab) and not via something like the PRNG. I currently 
seed /dev/random from the NDRNG and then source from the PRNG, but that is not 
allowed for DRBGs. Again, I hope I understand them correctly.

They said I must look at the OpenSSL User Guide v2.0, paragraph 6.1.1, where 
low-entropy sources are discussed. Now, I already make use of the "get_entropy" 
callback for my DRBG implementation, and I used to source from the PRNG in that 
callback. I must now get data directly from my entropy source, which gives rise 
to my question of how to ensure that I have high-entropy data before the 
callback exits.

Regards,
LJB


[openssl-dev] Windows uplink override, PR 1356

2016-07-27 Thread Jim Carroll
I'm assisting with the port of the Python package M2Crypto to OpenSSL 1.1.0.
The latest Windows build of Python 2.7.12 still does not include applink.c,
which leaves us unable to use the BIO_s_fd functions and those BIO_s_file
functions that accept FILE objects.

I'd like to offer patch #1356 (https://github.com/openssl/openssl/pull/1356)

The patch modifies how OPENSSL_Uplink() searches for the applinktable.
Instead of only searching within the process space of the application, it
enables a developer to register an optional applinktable stored in another
module (aka another DLL).

The solution requires the developer to ensure the call to
OPENSSL_SetApplink() is made from an extension that was compiled using the
same threading model as both python and the installed OpenSSL library. But,
because python's distutils system is fully aware of the build environment,
this is a trivial matter for python developers.

The patch includes documentation along with detailed example of how to use
it.



Re: [openssl-dev] DRBG entropy

2016-07-27 Thread John Denker
On 07/27/2016 05:13 AM, Leon Brits wrote:
> 
> I have a chip (FDK RPG100) that generates randomness, but the
> SP800-90B python test suite indicated that the chip only provides
> 2.35 bits/byte of entropy. According to the FIPS test lab, the lowest
> value from all the tests is used as the entropy and 2 is too low. I
> must however make use of this chip.

That's a problem on several levels.

For starters, keep in mind the following maxim:
 Testing can certainly show the absence of entropy.
 Testing can never show the presence of entropy.

That is to say, you have ascertained that 2.35 bits/byte is an
/upper bound/ on the entropy density coming from the chip.  If
you care about security, you need a lower bound.  Despite what
FIPS might lead you to believe, you cannot obtain this from testing.
The only way to obtain it is by understanding how the chip works.
This might require a tremendous amount of effort and expertise.



Secondly, entropy is probably not even the correct concept.  For any
given probability distribution P, i.e. for any given ensemble, there
are many measurable properties (i.e. functionals) you might look at.
Entropy is just one of them.  It measures a certain /average/ property.
For cryptologic security, depending on your threat model, it is quite
possible that you ought to be looking at something else.  It may help
to look at this in terms of the Rényi functionals:
  H_0[P] = multiplicity  = Hartley functional
  H_1[P] = plain old entropy = Boltzmann functional
  H_∞[P] = adamance

The entropy H_1 may be appropriate if the attacker needs to break
all messages, or a "typical" subset of messages.  The adamance H_∞
may be more appropriate if there are many messages and the attacker
can win by breaking any one of them.

To say the same thing in other words:
 -- A small multiplicity (H_0) guarantees the problem is easy for the attacker.
 -- A large adamance (H_∞) guarantees the problem is hard for the attacker.
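For concreteness, the three functionals can be computed for any finite
distribution; this small Python sketch (illustrative, not from the original
mail) shows how far apart they can sit for a skewed source:

```python
import math

def renyi(p):
    """Hartley (H_0), Shannon (H_1) and min-entropy (H_inf) of a finite
    probability distribution, all in bits."""
    h0 = math.log2(sum(1 for x in p if x > 0))       # multiplicity
    h1 = -sum(x * math.log2(x) for x in p if x > 0)  # plain old entropy
    hinf = -math.log2(max(p))                        # adamance / min-entropy
    return h0, h1, hinf

# A skewed source: the adamance sits well below the Shannon entropy.
print(renyi([0.5, 0.25, 0.125, 0.125]))  # → (2.0, 1.75, 1.0)
```

The ordering H_0 ≥ H_1 ≥ H_∞ always holds, which is why quoting only the
average (H_1) can overstate what an attacker must overcome.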



Now let us fast-forward and suppose, hypothetically, that you
have obtained a lower bound on what the chip produces.

One way to proceed is to use a hash function.  For clarity, let's
pick SHA-256.  Obtain from the chip not just 256 bits of adamance,
but 24 bits more than that, namely 280 bits.  This arrives in the
form of a string of bytes, possibly hundreds of bytes.  Run this
through the hash function.  The output word is 32 bytes i.e. 256
bits of high-quality randomness.  The key properties are:
 a) There will be 255.99 bits of randomness per word, guaranteed
  with high probability, more than high enough for all practical
  purposes.
 b) It will be computationally infeasible to locate or exploit
  the missing 0.01 bit.

Note that it is not possible to obtain the full 256 bits of
randomness in a 256-bit word.  Downstream applications must be
designed so that 255.99 is good enough.
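The procedure above can be sketched in a few lines of Python.  The `read_chip`
callback and the per-byte adamance rate are assumptions for illustration:

```python
import hashlib
import math

def seed_from_chip(read_chip, adamance_bits_per_byte=1.0):
    """Gather 280 bits' worth of adamance at an assumed per-byte rate,
    then hash down to one 256-bit output word.  read_chip(n) is a
    hypothetical callback returning n raw bytes from the chip."""
    need = math.ceil(280 / adamance_bits_per_byte)  # e.g. 280 bytes at 1 bit/byte
    raw = read_chip(need)
    return hashlib.sha256(raw).digest()  # 32 bytes, ~255.99 bits of randomness
```

At the 2.35 bits/byte upper bound discussed earlier the read would shrink to
120 bytes, but a lower-bound rate should be used instead.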



As with all of crypto, this requires attention to detail.  You
need to protect the hash inputs, outputs, and all intermediate
calculations.  For example, you don't want such things to get
swapped out.


[openssl-dev] DRBG entropy

2016-07-27 Thread Leon Brits
Hi all,

I have a chip (FDK RPG100) that generates randomness, but the SP800-90B python 
test suite indicated that the chip only provides 2.35 bits/byte of entropy. 
According to the FIPS test lab, the lowest value from all the tests is used as 
the entropy and 2 is too low. I must however make use of this chip.

I am looking at the paragraph in the User Guide 2.0 where low-entropy sources 
are discussed, and I have some additional questions:

1. In my DRBG callback for entropy (function get_entropy in the guide), I 
simply used our chip as the source (the driver reading from the chip makes it 
available at /dev/hwrng). Now that I've come to learn that the chip's entropy 
is too low, how do I ensure that this callback exits with a buffer of 
acceptable entropy?

2. Should I just return a buffer four times larger? What if that is larger than 
the "max_len"?

3. Can the DRBG repeatedly call the callback until the entropy is high 
enough?

Your advice is appreciated.

Regards
LJB