We have to keep in mind what threats we care about and their practicality.
The security of a DRBG depends on two things: the secrecy of the seed
material provided, and the algorithm's output not leaking enough
information about the internal state for an attacker to recover or reverse
that state and thereby determine previous or future outputs.

Some of the arguments used to date appear to assume that, with a broken
DRBG algorithm, separating out the DRBG instances per SSL connection makes
the break less of a security issue.
In real terms, if a DRBG is broken and its state can be determined
remotely, separating DRBG instances makes no practical difference - they
are all equally vulnerable in the same manner.
In the case of Dual EC DRBG this was clear - and no one I've seen has ever
suggested that you were safer with separate instances of a broken DRBG
algorithm - it makes no practical difference to the security at all.
Sure, there is a slight technical difference - but from a security
perspective there is none - you are susceptible to the same attack - so
the minor technical difference offers no meaningful security value.
Everyone who has referenced this to date has also indicated that they
don't think there is actually any real practical value to the difference -
it has been more of an "it cannot harm" sort of comment.

In more general terms we need to have a clear view on what we think about
our threat model - what is considered inside the scope of what we care to
address - and what is frankly outside the scope (for our view).

   - We don't consider attacks from the same process against itself within
   our threat model.
   - Correspondingly, we don't consider attacks from one thread against
   another thread within our threat model.
   - We don't consider privileged user attacks against the user in our
   threat model (i.e. root can read the memory of the process on most
   Unix-like systems).
   - We also don't actually consider a need to protect all secret
   information from every possible other bug that might leak arbitrary parts
   of memory. We could. But we don't. And if we did we would need to protect
   both the seeding material for the DRBG and its internal state and
   potentially its output. We don't do that - because that isn't within our
   threat model.

Typical applications share an SSL_CTX between multiple SSL instances and we
maintain the session cache against the SSL_CTX. This may be in a single
process (thread) or shared across multiple threads - or even shared across
multiple processes (which, from our perspective, is simply the same as
being in a single process, where the "magic" to coordinate the session id
cache between processes is left to the developer/user).
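The sharing pattern described above can be sketched roughly as follows.
This is a hedged illustration in plain C, not OpenSSL's API - the
tls_ctx, tls_conn and ctx_store_session names are hypothetical, and the
"session cache" is a toy array:

```c
#include <pthread.h>
#include <stddef.h>

#define CACHE_SLOTS 128

/* One context owns the session cache, guarded by a single lock. */
typedef struct {
    pthread_mutex_t cache_lock;
    int session_ids[CACHE_SLOTS];   /* toy stand-in for real sessions */
    size_t used;
} tls_ctx;

/* Many connections share one context - the analogue of SSL -> SSL_CTX. */
typedef struct {
    tls_ctx *ctx;
} tls_conn;

/* Any connection (on any thread) stores sessions into the shared cache. */
void ctx_store_session(tls_ctx *c, int id)
{
    pthread_mutex_lock(&c->cache_lock);
    if (c->used < CACHE_SLOTS)
        c->session_ids[c->used++] = id;
    pthread_mutex_unlock(&c->cache_lock);
}
```

The point of the sketch is only that the context is already the natural
home for shared, lock-protected security state.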

In a FIPS context, every DRBG has requirements on its inputs (seeding) and
on maintaining a continuous RNG test (a block-level compare to detect
repeated output blocks).
All of these would be per-instance requirements on the DRBG, and they have
to be factored in.
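A per-instance continuous test of this kind amounts to retaining the
previous output block and rejecting a repeat - which is exactly the
per-instance state cost being weighed here. A minimal sketch (the crngt_*
names are mine, and a real FIPS implementation involves more than this):

```c
#include <string.h>

#define CRNGT_BLOCK 16  /* compare granularity in bytes */

/* Per-instance state: every DRBG instance must retain its own previous
 * block, so the cost scales with the number of instances. */
typedef struct {
    unsigned char prev[CRNGT_BLOCK];
    int primed;             /* the first block only primes the compare */
} crngt_state;

/* Returns 1 if the block passes, 0 if it repeats the previous block. */
int crngt_check(crngt_state *st, const unsigned char *block)
{
    if (st->primed && memcmp(st->prev, block, CRNGT_BLOCK) == 0)
        return 0;           /* identical consecutive blocks: fail */
    memcpy(st->prev, block, CRNGT_BLOCK);
    st->primed = 1;
    return 1;
}
```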

There is also the argument that locking is bad and fewer locks are better.
That argument needs to be backed up by looking at the overall picture:
which particular application model are we concerned about? Have we
measured it? Have we figured out where the bottlenecks are? Have we worked
through optimising the actual areas of performance impact? Or are we just
prematurely optimising? Excessive locking will have an impact for certain
application models - but I don't think anyone is suggesting that what we
had previously was excessive. And given that the significant performance
impact of the recent changes went unmeasured and unaddressed, I think it
is clear we haven't been measuring performance-related items for the DRBG
at all to date - so there wasn't any "science" behind the choices made.

Simple, clear, well-documented code with good tests and known
architectural assumptions is what we are trying to achieve. My sense from
the conversations on this topic to date is that we don't have a consensus
on what problem we are actually trying to solve - so the design approach
shifts, and shifts again, with the authors of the PRs responding to what
are (in my view at least) conflicting suggestions based on different
assumptions.

That is why I put the -1 on the PR - to have this discussion, agree on
what we are trying to solve, and also agree on what we are not trying to
solve. And perhaps we can actually document some of our "threat model" -
as I'm sure we have different views on that as well.

I don't think we should have per-SSL DRBGs - it offers no meaningful
security value. We could have a per-SSL_CTX DRBG - but I'm not sure that
is needed. We could have a per-thread DRBG - but again it is unclear
whether we actually need that either.
My thoughts are per-SSL_CTX might make the most sense based on my
understanding of the high-performance server contexts (in various web
servers and custom servers).
You can make a reasonable argument that we are sharing many security
related things between SSL_CTXs already so this is the right place.
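To make the trade-off concrete, a per-SSL_CTX DRBG is essentially one
shared generator behind one lock, with every connection created from that
context drawing from it. A toy sketch under those assumptions - the
ctx_drbg type is hypothetical, and the "DRBG" here is a trivial LCG
stand-in, emphatically not a real DRBG algorithm:

```c
#include <pthread.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical per-context DRBG: one instance, one lock, shared by all
 * connections created from the context. */
typedef struct {
    pthread_mutex_t lock;
    uint64_t state;          /* stand-in for real DRBG internal state */
} ctx_drbg;

void ctx_drbg_init(ctx_drbg *d, uint64_t seed)
{
    pthread_mutex_init(&d->lock, NULL);
    d->state = seed;
}

/* Every connection sharing the context calls this; contention on this
 * one lock is the cost being weighed against per-connection instances. */
void ctx_drbg_generate(ctx_drbg *d, unsigned char *out, size_t len)
{
    pthread_mutex_lock(&d->lock);
    for (size_t i = 0; i < len; i++) {
        /* Toy LCG step - NOT a cryptographic DRBG. */
        d->state = d->state * 6364136223846793005ULL
                 + 1442695040888963407ULL;
        out[i] = (unsigned char)(d->state >> 33);
    }
    pthread_mutex_unlock(&d->lock);
}
```

The same structure would hold whether the instance hangs off the SSL_CTX,
the thread, or the process - only the sharing scope (and hence the
contention and the per-instance FIPS costs) changes.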

And if we cannot reach a consensus because we cannot get to a shared view
then perhaps it needs to be configurable for the user.


On Wed, Mar 14, 2018 at 12:10 PM, Salz, Rich <rs...@akamai.com> wrote:

> >    Either that or just always use the per-thread DRBG for the current
>     thread, and don't bother to do per-SSL at all.
> There is appeal to isolating each SSL connection so that an adversary
> can't use information it has about *it's* connection to attack another.
> Granted, this might not be practical, but still...
> _______________________________________________
> openssl-project mailing list
> openssl-project@openssl.org
> https://mta.openssl.org/mailman/listinfo/openssl-project