Re: [TLS] [lamps] [EXTERNAL] Re: Q: Creating CSR for encryption-only cert?

2022-11-20 Thread Nico Williams
On Thu, Oct 06, 2022 at 05:09:21PM +, John Gray wrote:
> For a use case like an HSM or TPM where private keys can never leave
> rules out option 1 (plus who wants to send their private key anyway
> unless it is for server backup or escrow purposes).  Option 3 would
> work but is bad for CT log spamming.  Option 2 of using an encrypted
> challenge seems like the best choice, but obviously causes a 2nd round
> trip.  There doesn’t seem to be an ideal solution.

Apart from amortizing the cost of the extra round trip, I don't see an
acceptable alternative to (2).

When you have already done a two round trip PoP enrollment of _some
other_ encryption-only public key, you can just send new certificates
for encryption-only public keys encrypted to the first one, thus
amortizing the cost of the first two round trip enrollment by turning
subsequent enrollments into one round trip affairs.

In TPM parlance you'd enroll a machine and the public key of a
decrypt-only, restricted primary, fixedTPM private key, complete with a
two round-trip proof of possession protocol, and thereafter you would
just send new certificates for new decrypt-only, possibly-fixedTPM keys,
via TPM2_MakeCredential() with the new key as the activation object and
the original key as the target.  You don't need a TPM to make a protocol
like that work, though one might want to use a TPM for this to secure
the first key.

Nico
-- 


Re: Goodbye

2020-07-03 Thread Nico Williams
On Fri, Jul 03, 2020 at 05:45:22PM +, Jordan Brown wrote:
> On 7/3/2020 6:03 AM, Marc Roos wrote:
> > Also hypocrite of Akamai, looking at the composition of the executive team.
> 
> I think it's pretty clear that Rich was speaking as himself, not as a
> representative of Akamai.

Hi Jordan,

It's pretty clear that Rich was insinuating that the OMC are a bunch of
racists, and that they should be canceled.  He didn't _say_ it, but
strongly implied it, or at least that was the message received by some.
It's fair to suppose it was the intended message.  And Rich specifically
referred to his employer, not in his signature as you refer to yours,
but in the text of his post.  No, Rich did not say his words and actions
represent Akamai, but still, he referred to Akamai specifically.  More
about this below.

The OMC is staffed by real people, with real families, and the
disagreement here did not rise to the point where one should dox the OMC
members and imply that their employers should terminate their
employment.  They do not deserve what Rich implies.  I know one of them
well, one who voted against Rich's PR, and I assure you he's no racist.

Rich didn't care to be polite.  No!  He meant to _provoke_, and not just
replies like Marc's, but actual action against the OMC's members.  I
won't call out Akamai on account of Rich's behavior because I don't want
to do what he did, but I don't blame anyone else for doing it -- Rich
made it fair game by doing it first.  But we must call out behavior like
Rich's -- it's simply not acceptable.

Nico
-- 


Re: [openssl-users] Changing malloc/debug stuff

2015-12-18 Thread Nico Williams
On Thu, Dec 17, 2015 at 09:28:28AM +, Salz, Rich wrote:
> I want to change the memory alloc/debug things.
> 
> Right now there are several undocumented functions to allow you to
> swap-out the malloc/realloc/free routines, wrappers that call those
> routines, debug versions of those wrappers, and functions to set the
> set-options versions of those functions.  Yes, really :)  Is anyone
> using that stuff?

This is another one of those things that isn't easy to deal with sanely
the way OpenSSL is actually used (i.e., by other libraries as well as by
apps).

> I want to change the model so that there are three wrappers around
> malloc/realloc/free, and that the only thing you can do is change that
> wrapper.  This is vastly simpler and easier to understand.  I also
> documented it.  A version can be found at
> https://github.com/openssl/openssl/pull/450

This seems much more sane.

Nico
-- 


Re: [openssl-users] [openssl-dev] Changing malloc/debug stuff

2015-12-17 Thread Nico Williams
On Thu, Dec 17, 2015 at 08:16:50PM +, Salz, Rich wrote:
> > > https://github.com/openssl/openssl/pull/450
> > 
> > This seems much more sane.
> 
> I'll settle for less insane :)

That is, I think, the best you can do.  Some allocations might have
taken place by the time a wrapper or alternative allocator is
installed, in which case something bad will happen.  In the case of
alternative allocators the something bad is "it blows up", while in the
case of a wrapper the something bad is "some state/whatever will be
off".

A fully sane approach would be to have every allocated object internally
point to its destructor, and then always destroy by calling that
destructor instead of a global one.  (Or call a global one that knows
how to find the object's private destructor pointer, and then calls
that.)  If you wish, something more OO-ish.  But for many allocations
that's not possible because they aren't "objects" in the sense that
matters.  You could always wrap allocations so that they have room at
the front for a pointer to the corresponding destructor, then return the
address just past that pointer, but this would be heavyweight for many
allocations.  So, all in all, I like and prefer your approach.
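A minimal sketch of that wrapping idea (nothing OpenSSL-specific, just
the shape of it): reserve a slot for the destructor in a small header
and hand back the memory just past it.

    #include <stdlib.h>

    typedef void (*destructor_fn)(void *obj);

    struct alloc_hdr {
        destructor_fn dtor;   /* this object's private destructor */
    };

    /* Allocate sz payload bytes with a destructor recorded up front.
     * (A real version would pad the header to max_align_t.) */
    static void *alloc_with_dtor(size_t sz, destructor_fn dtor)
    {
        struct alloc_hdr *h = malloc(sizeof(*h) + sz);

        if (h == NULL)
            return NULL;
        h->dtor = dtor;
        return h + 1;         /* caller only ever sees the payload */
    }

    /* The one global "free": look up and run the object's own destructor. */
    static void destroy(void *obj)
    {
        if (obj != NULL) {
            struct alloc_hdr *h = (struct alloc_hdr *)obj - 1;

            if (h->dtor != NULL)
                h->dtor(obj); /* release the object's internals */
            free(h);
        }
    }

The per-allocation pointer is exactly the overhead objected to above,
which is why the simpler wrapper-only approach wins.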

Nico
-- 


Re: [openssl-users] [openssl-dev] OpenSSL support on Solaris 11 (built on Solaris 10)

2015-06-16 Thread Nico Williams
On Tue, Jun 16, 2015 at 12:51:31PM +0530, Atul Thosar wrote:
 Currently, we build OpenSSL v0.9.8zc on Solaris 10 (SunOS, sun4u, sparc)
 and it works well on Solaris 10 platform. We use Sun Studio 12 compiler.
 
 We would like to run it on Solaris 11.2 (SunOS, sun4v, sparc) platform w/o
 changing the build platform. I mean we will continue to build OpenSSL on
 Solaris 10 and run it on Solaris 11.
 
 Has anyone encounter such situation?  Appreciate any help/pointers if this
 mechanism will work?

Historically the approach you describe has worked quite well with
Solaris because the ABI is quite stable.  This is particularly the case
if you use only features that are extremely unlikely to be removed in a
new minor release (Solaris 12 would be a minor release).

(You should not build on anything older than S10 to run on S10 or later,
mostly due to subtle changes in how snprintf() works.)

But you should read the ABI compatibility promises that Oracle makes and
decide for yourself.

Nico
-- 


Re: [openssl-users] [openssl-dev] OpenSSL support on Solaris 11 (built on Solaris 10)

2015-06-16 Thread Nico Williams
I should add that you should read all the release notes of every update
and check if your product would be affected.


Re: [openssl-users] [openssl-dev] Replacing RFC2712 (was Re: Kerberos)

2015-05-13 Thread Nico Williams

We're closer.

On Wed, May 13, 2015 at 07:10:10PM +0200, Jakob Bohm wrote:
 On 13/05/2015 17:46, Nico Williams wrote:
 On Wed, May 13, 2015 at 12:03:33PM +0200, Jakob Bohm wrote:
 On 12/05/2015 21:45, Nico Williams wrote:
 On Tue, May 12, 2015 at 08:23:34PM +0200, Jakob Bohm wrote:
 How about the following simplifications for the new
 extension, lets call  it GSS-2 (at least in this e-mail).
 
 1. GSS (including SASL/GS2) is always done via the SPNego
 GSS mechanism, which provides standard handling of
 mechanism negotiation (including round-trip optimizations),
 and is already its own standard (complete with workarounds
 for historic bugs in the dominant implementation...).
 
 SASL/GS2 and SPNEGO are incompatible.
 
 How?  I thought SPNEGO encapsulated and negotiated
 arbitrary GSS mechanisms.
 
 The problem is that negotiating twice is bad (for various reasons), and
 SASL has non-GSS mechanisms, so negotiating SASL mechanisms, then GSS is
 a two-level negotiation that is fraught with peril, therefore forbidden.

 Ok, having not studied the standard SASL in GSS
 specification, I presumed each GSS-encapsulated SASL
 mechanism would have its own GSS mechanism OID in
 some systematic way, leaving just one negotiation.

SASL/GS2 is the other way around: GSS in SASL.

The idea is that you can have GSS as SASL mechanisms in a way that sucks
less than the original GSS-in-SASL bridge in RFC (that added an
extra round-trip), and which makes it easy to add mechanisms like SCRAM
as both, a GSS and a SASL mechanism.

I'm perfectly happy to drop SASL though.

 To me the key benefit of SPNEGO is the existence of
 already battle tested negotiation code readily available
 in many/most current GSS implementations.  It is one less
 thing to design and implement wrong.
 
 It's quite complex owing to having been underspecified in the first
 place then having grown a number of bug workarounds over the years.

 Yes, but it is now a mature protocol, and I was trying
 to avoid creating yet another near identical
 handshake protocol.

The only complication in a negotiation mechanism is protecting the
negotiation.  Since the TLS handshakes are ultimately integrity-
protected, there's no complication at all to having the client send a
list of mechanisms and the server pick one (the client can even send an
optimistic choice's initial context token).  In fact, it's much nicer
than SPNEGO in many ways; if at all possible one should avoid SPNEGO.

Among other things, not using SPNEGO means that it will be much easier
to implement this protocol without extensions to GSS (extensions would
be needed only to optimize it).

 In your protocol the client already sent a SPNEGO initial security
 context token.  A response is required, as GSS context establishment
 token exchanges are strictly synchronous.
 
 As written, I had forgotten about the Finished
 messages.  Thus the point was to simply delay the
 server GSS response (2. GSS leg) to just after
 switching on the encryption, later in the same
 round of messages.  The 3. leg (second client to
 server GSS token) would then follow etc.

We could extend GSS (see below) to support late channel binding, but
since a mechanism might not be able to do it, this protocol would have
to fall back on MIC tokens to complete the channel binding, in some
cases at a cost of one more round trip.

 With PROT_READY there should be no need for an extra round-trip.
 
 Depends a lot on the mechanism.  Some GSS mechanisms
 (other than Kerberos IV/V) cannot use their MIC until
 they have received a later token from the other end,
 but can incorporate binding data earlier than that.  I
 think GSS-SRP-6a has that property.

Kerberos in particular supports PROT_READY.  There is no Kerberos IV GSS
mechanism, FYI.  I'd never heard of GSS-SRP-6a; do you have a reference?

 6. If the GSS mechanism preferred by the client requires the
 authenticated hash value to be known before sending the
 first GSS leg, then the client shall simply abstain from
 including that first leg in the first leg SPNego message
 if sent in the client hello extension.
 If we're doing a MIC exchange then we don't need to know the channel
 binding at initial security context token production time.
 However the early channel binding might save a leg.
 
 You mean late.  Your idea seems to be to expose knowledge of the latest
 point at which a mechanism can begin to use the channel binding, so as
 to delay giving it the channel binding until we know it.  That would be
 a significant change to GSS, and often it won't help (e.g., Kerberos,
 the mechanism of interest in this thread).

 The idea would be if an implementation (not the protocol
 extension specification as such) is blessed with a
 non-standard GSS option to provide the channel binding
 after the 1. leg, but not with the early MIC use ability
 of Kerberos, then the protocol extension should not prevent
 it from taking advantage of this to do the channel binding
 before the 2. leg, rather than

Re: [openssl-users] [openssl-dev] Replacing RFC2712 (was Re: Kerberos)

2015-05-13 Thread Nico Williams
On Wed, May 13, 2015 at 12:03:33PM +0200, Jakob Bohm wrote:
 On 12/05/2015 21:45, Nico Williams wrote:
 On Tue, May 12, 2015 at 08:23:34PM +0200, Jakob Bohm wrote:
 How about the following simplifications for the new
 extension, lets call  it GSS-2 (at least in this e-mail).
 
 1. GSS (including SASL/GS2) is always done via the SPNego
 GSS mechanism, which provides standard handling of
 mechanism negotiation (including round-trip optimizations),
 and is already its own standard (complete with workarounds
 for historic bugs in the dominant implementation...).
 SASL/GS2 and SPNEGO are incompatible.

 How?  I thought SPNEGO encapsulated and negotiated
 arbitrary GSS mechanisms.

The problem is that negotiating twice is bad (for various reasons), and
SASL has non-GSS mechanisms, so negotiating SASL mechanisms, then GSS is
a two-level negotiation that is fraught with peril, therefore forbidden.

 To me the key benefit of SPNEGO is the existence of
 already battle tested negotiation code readily available
 in many/most current GSS implementations.  It is one less
 thing to design and implement wrong.

It's quite complex owing to having been underspecified in the first
place then having grown a number of bug workarounds over the years.

It'd be much easier to send a list of mechanism OIDs in the ClientHello,
have the server announce a choice in its response, and have the first
GSS token sent as early application data in the same flight as the
client's Finished message (assuming traditional TLS handshakes here),
with GSS channel binding.  When the client knows what mechanism they
want they could send the initial context token in the ClientHello (if
it's not too large) and use MIC tokens for channel binding.
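A sketch of the server's side of that choice, assuming the offered OIDs
have already been parsed out of the ClientHello into an array (the
extension itself is hypothetical):

    #include <gssapi/gssapi.h>
    #include <stddef.h>

    /* Pick the first client-offered mechanism that the local GSS
     * implementation also supports; client order expresses preference. */
    static gss_OID pick_mech(gss_OID_desc *client_mechs, size_t n_client)
    {
        OM_uint32 maj, min;
        gss_OID_set ours = GSS_C_NO_OID_SET;
        gss_OID chosen = GSS_C_NO_OID;
        size_t i;
        int present;

        maj = gss_indicate_mechs(&min, &ours);
        if (GSS_ERROR(maj))
            return GSS_C_NO_OID;
        for (i = 0; i < n_client && chosen == GSS_C_NO_OID; i++) {
            maj = gss_test_oid_set_member(&min, &client_mechs[i], ours, &present);
            if (!GSS_ERROR(maj) && present)
                chosen = &client_mechs[i];
        }
        (void) gss_release_oid_set(&min, &ours);
        return chosen;
    }

The chosen OID would go back in the server's response, and the client's
optimistic initial context token (if it guessed the same mechanism) can
simply be used as-is.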

 The ALPN approach is to do the mechanism negotiation via ALPN.  This is
 much better than SPNEGO in general.

 However I strongly suspect that using ALPN will cause
 practical conflicts with early HTTP/2 implementations
 and early ALPN implementations, as such early
 implementations are likely to only cater to that
 single use of ALPN.

Perhaps so.  I would prefer to optimize the GSS flights as well too.

 3. The TLS server (if it supports and allows the extension)
 responds with a 0 byte TLS extension GSS-2 to confirm
 support.
 Well, presumably the first response GSS token should go here.
 No, see below.

In your protocol the client already sent a SPNEGO initial security
context token.  A response is required, as GSS context establishment
token exchanges are strictly synchronous.

 5. In the last legs, the GSS mechanism is told to (mutually
 if possible) authenticate some already defined hash of the
 TLS handshake, thereby protecting the key exchange.  Other
 than the round trip saving for the first 2 legs, this is
 what distinguishes GSS-2 from simply doing application level
 GSS over a TLS connection.  Any GSS negotiated keys are not
 used beyond this authentication of the TLS key exchange.
 
 This is the MIC exchange I mention above.

 Yep, however as this entails extra round trips, it is
 not the only option.

With PROT_READY there should be no need for an extra round-trip.

 6. If the GSS mechanism preferred by the client requires the
 authenticated hash value to be known before sending the
 first GSS leg, then the client shall simply abstain from
 including that first leg in the first leg SPNego message
 if sent in the client hello extension.
 
 If we're doing a MIC exchange then we don't need to know the channel
 binding at initial security context token production time.

 However the early channel binding might save a leg.

You mean late.  Your idea seems to be to expose knowledge of the latest
point at which a mechanism can begin to use the channel binding, so as
to delay giving it the channel binding until we know it.  That would be
a significant change to GSS, and often it won't help (e.g., Kerberos,
the mechanism of interest in this thread).

 7. If the client wants encryption of the first GSS leg, it
 can either abstain from including that leg in the first
 SPNego GSS leg, or it can send a 0-byte first leg and then
 send the real first SPNego leg in the first encrypted client
 to server record, with the server responding with the second
 leg in the first encrypted server to client record as before
 (but no longer in the same round trip as the second half of
 the TLS handshake).
 
 With the ALPN approach this is a given.
 
 However if the first leg need not be encrypted and
 need not know the channel binding, it can be sent a
 round earlier. This can (I hope) be decided on a per
 mechanism basis, thus if a GSS mechanism need not know
 its channel binding until the second leg,
 implementations that can provide the binding to the
 GSS layer later can take advantage of it.

No, this can't be decided on a per-mechanism basis, not without first
modifying GSS significantly.

 9. When the GSS-2 extension is negotiated, TLS
 implementations SHOULD allow anonymous (unauthenticated)
 cipher suites even

Re: [openssl-users] [openssl-dev] Replacing RFC2712 (was Re: Kerberos)

2015-05-13 Thread Nico Williams

I wonder if we could do this in the KITTEN WG list.  Maybe not every
extension to TLS needs to be treated as a TLS WG work item...  We should
ask the security ADs.

Nico
-- 


Re: [openssl-users] [openssl-dev] Replacing RFC2712 (was Re: Kerberos)

2015-05-12 Thread Nico Williams
I should add that I prefer a protocol that optimizes the GSS round trips
over one that doesn't, though that means using SPNEGO for negotiation
(when negotiation is desired).


Re: [openssl-users] Testing OpenSSL based solution

2015-05-12 Thread Nico Williams
On Tue, May 12, 2015 at 06:10:39PM +, Salz, Rich wrote:
 You can't easily have test vectors for DSA signatures since they
 include a random.  Any test vector would have to include the random,
 and any API would have to be able to accept the random as part of the
 sign API.  Verification should be okay.

It'd be nice to have derandomized *DSA forms for OpenSSL.

CFRG is on the case, thankfully, so eventually there should be a
derandomized ECC signature scheme in OpenSSL.  (Assuming the consensus
ends up being in favor of having a deterministic, state-less signature
scheme.)

Nico
-- 


Re: [openssl-users] [openssl-dev] Replacing RFC2712 (was Re: Kerberos)

2015-05-12 Thread Nico Williams
On Tue, May 12, 2015 at 08:23:34PM +0200, Jakob Bohm wrote:
 How about the following simplifications for the new
 extension, lets call  it GSS-2 (at least in this e-mail).
 
 1. GSS (including SASL/GS2) is always done via the SPNego
 GSS mechanism, which provides standard handling of
 mechanism negotiation (including round-trip optimizations),
 and is already its own standard (complete with workarounds
 for historic bugs in the dominant implementation...).

SASL/GS2 and SPNEGO are incompatible.

The ALPN approach is to do the mechanism negotiation via ALPN.  This is
much better than SPNEGO in general.

We don't have to use the ALPN approach, and we don't have to support
SASL.  But see below.

 2. The TLS client always begins by sending the first
 GSS/SPNego leg in a (new) TLS extension GSS-2.

This is incompatible with doing channel binding the GSS way.  Instead
we'd have to exchange MICs of the channel binding when the GSS context
is fully established.  (This is fine, of course, and not a criticism,
just pointing this out.)
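For concreteness, the MIC-of-the-channel-binding idea is just this
(sketch; cb/cb_len would come from the TLS stack, e.g. a tls-unique
value):

    #include <gssapi/gssapi.h>
    #include <stddef.h>

    /* After the context is fully established, each side MICs the channel
     * binding data and sends the MIC token to the peer for verification. */
    static int make_cb_mic(gss_ctx_id_t ctx, const void *cb, size_t cb_len,
                           gss_buffer_t mic_out)
    {
        OM_uint32 maj, min;
        gss_buffer_desc msg;

        msg.value = (void *)cb;
        msg.length = cb_len;
        maj = gss_get_mic(&min, ctx, GSS_C_QOP_DEFAULT, &msg, mic_out);
        return GSS_ERROR(maj) ? -1 : 0;
    }

    static int check_cb_mic(gss_ctx_id_t ctx, const void *cb, size_t cb_len,
                            gss_buffer_t peer_mic)
    {
        OM_uint32 maj, min;
        gss_buffer_desc msg;

        msg.value = (void *)cb;
        msg.length = cb_len;
        maj = gss_verify_mic(&min, ctx, &msg, peer_mic, NULL);
        return GSS_ERROR(maj) ? -1 : 0;
    }

If both MICs verify, both ends know they completed the GSS exchange over
the same TLS channel, which is the property early channel binding would
otherwise give.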

 3. The TLS server (if it supports and allows the extension)
 responds with a 0 byte TLS extension GSS-2 to confirm
 support.

Well, presumably the first response GSS token should go here.

 4. The second and subsequent legs of the GSS handshake are
 sent as the sole contents of the first encrypted records,
 actual application data is not sent until the GSS handshake
 succeeds.  Note that the first encrypted server to client
 record (containing the second leg) can be sent in the same
 protocol round trip as the second half of the TLS
 handshake.  It is an open design issue if these TLS records
 should be tagged as application records or key exchange
 records.

This is just as in the ALPN approach.  They should be tagged as
application records so that the implementation can be either at the
application layer or in the TLS library.

 5. In the last legs, the GSS mechanism is told to (mutually
 if possible) authenticate some already defined hash of the
 TLS handshake, thereby protecting the key exchange.  Other
 than the round trip saving for the first 2 legs, this is
 what distinguishes GSS-2 from simply doing application level
 GSS over a TLS connection.  Any GSS negotiated keys are not
 used beyond this authentication of the TLS key exchange.

This is the MIC exchange I mention above.

 6. If the GSS mechanism preferred by the client requires the
 authenticated hash value to be known before sending the
 first GSS leg, then the client shall simply abstain from
 including that first leg in the first leg SPNego message
 if sent in the client hello extension.

If we're doing a MIC exchange then we don't need to know the channel
binding at initial security context token production time.

 7. If the client wants encryption of the first GSS leg, it
 can either abstain from including that leg in the first
 SPNego GSS leg, or it can send a 0-byte first leg and then
 send the real first SPNego leg in the first encrypted client
 to server record, with the server responding with the second
 leg in the first encrypted server to client record as before
 (but no longer in the same round trip as the second half of
 the TLS handshake).

With the ALPN approach this is a given.

 8. If the GSS mechanism reports failure, the TLS connection
 SHALL be aborted with a specified alert.

Yes.

 9. When the GSS-2 extension is negotiated, TLS
 implementations SHOULD allow anonymous (unauthenticated)
 cipher suites even if they would not otherwise do so,
 however they MUST be able to combine the GSS-2 extension
 with any and all of the cipher suites and TLS versions they
 otherwise implement.  For instance, if an implementation of
 the GSS-2 extension is somehow bolted on to a fully
 patched OpenSSL 1.0.0 library (via generic extension
 mechanisms), then that combination would support it with
 TLS 1.0 only, and TLS 1.3 capable implementations would be
 negotiating TLS 1.0 when doing GSS-2 with such an
 implementation.

If only GSS mechanisms that provide integrity protection or better are
used, then this is fine.

 10. Session resumption and Session renegotiation shall have
 the same semantics for the GSS authentication result as
 they do for certificate validation results done in the
 same handshakes.

Yes.

 11. NPN and ALPN are neither required nor affected by GSS-2
 and operate as they would with any other TLS mechanisms,
 such as certificates.

NPN is out of the question now.

You're missing a status message for authorization (GSS authentication
might complete, but authorization fail), though this is not strictly
necessary: the server can simply close the connection, including sending
an alert about this (or not) just before closing the connection.

Nico
-- 


Re: [openssl-users] [openssl-dev] Replacing RFC2712 (was Re: Kerberos)

2015-05-11 Thread Nico Williams
On Mon, May 11, 2015 at 04:42:49PM +, Viktor Dukhovni wrote:
 On Mon, May 11, 2015 at 11:25:33AM -0500, Nico Williams wrote:
 
   - If you don't want to depend on server certs, use anon-(EC)DH
 ciphersuites.
  
 Clients and servers must reject[*] TLS connections using such a
 ciphersuite but not using a GSS-authenticated application protocol.
 
 [*] Except when employing unauthenticated encrypted communication
 to mitigate passive monitoring (opportunistic security).

As this would be replacing RFC2712, it's not opportunistic to begin with :)


[openssl-users] Replacing RFC2712 (was Re: Kerberos)

2015-05-11 Thread Nico Williams
On Fri, May 08, 2015 at 10:57:52PM -0500, Nico Williams wrote:
 I should have mentioned NPN and ALPN too.
 [...]

A few more details:

 - If you don't want to depend on server certs, use anon-(EC)DH
   ciphersuites.

   Clients and servers must reject TLS connections using such a
   ciphersuite but not using a GSS-authenticated application protocol.

 - The protocol MUST use GSS channel binding to TLS.

 - Use SASL/GS2 instead of plain GSS and you get to use an authzid
   (optional) and you get a builtin authorization status result message
   at no extra cost, and all while still using GSS.

You get to optimize only the mechanism negotiation, and you get TLS w/
Kerberos (and others) and without PKIX (if you don't want it).

See RFCs 7301, 5801, 5056, and 5929 (but note that the TLS session hash
extension is required).
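The "GSS channel binding to TLS" item above would look roughly like this
on the initiator side (sketch; cb_data is assumed to be an RFC 5929-style
binding value obtained from the TLS stack, and the mechanism is assumed
to have been chosen elsewhere):

    #include <gssapi/gssapi.h>
    #include <stddef.h>
    #include <string.h>

    /* First initiator call: the TLS binding rides in the application_data
     * field of the channel-bindings argument.  Later calls in the loop
     * would pass the same bindings plus the peer's reply token. */
    static OM_uint32 start_gss(gss_name_t target, const void *cb_data,
                               size_t cb_len, gss_ctx_id_t *ctx,
                               gss_buffer_t out_token)
    {
        OM_uint32 maj, min;
        struct gss_channel_bindings_struct cb;

        memset(&cb, 0, sizeof(cb));             /* no address bindings */
        cb.application_data.value = (void *)cb_data;
        cb.application_data.length = cb_len;

        *ctx = GSS_C_NO_CONTEXT;
        maj = gss_init_sec_context(&min, GSS_C_NO_CREDENTIAL, ctx, target,
                                   GSS_C_NO_OID,
                                   GSS_C_MUTUAL_FLAG | GSS_C_INTEG_FLAG, 0,
                                   &cb, GSS_C_NO_BUFFER, NULL, out_token,
                                   NULL, NULL);
        return maj;
    }

The acceptor passes the same binding value to gss_accept_sec_context(),
which is what makes replaying the tokens over a different TLS connection
fail.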

Nico
-- 


Re: [openssl-users] Kerberos

2015-05-08 Thread Nico Williams
On Fri, May 08, 2015 at 05:17:29PM -0400, Nathaniel McCallum wrote:
 I agree that the current situation is not sustainable. I was only
 hoping to start a conversation about how to improve the situation.

RFC2712 uses Authenticator, which is an ASN.1 type quite clearly NOT
intended for use outside RFC1510 because it isn't a PDU.  RFC2712
unnecessarily constructed its own AP-REQ that's different from the
RFC1510 (now 4120) AP-REQ.

This is bad for a variety of reasons, not the least of which are
complicating Kerberos APIs and/or RFC2712 implementations (which might
have to parse out the Authenticator and Ticket from a plain AP-REQ).

I also notice that the EncryptedPreMasterSecret is under-specified (is
it a Kerberos EncryptedData?  who knows?).

RFC2712 could be replaced with a properly-done protocol that uses
Kerberos in the full TLS handshake (i.e., not in session resumption).
This would be the lowest-effort fix.

A generic GSS-in-TLS extension would require much more energy (see
below).

 For instance, there is this: http://tls-kdh.arpa2.net/

Yes, it'd be nice to add PFS to the Kerberos AP exchange, and we just
might get there, but adding Kerberos and/or GSS to TLS is a very
different undertaking.

 I don't see any reason this couldn't be expanded to do GSSAPI.

Well, that's difficult because GSS has arbitrary round trips...

You're not the first to want this, see for example here:

https://tools.ietf.org/html/draft-santesson-tls-gssapi-01
https://tools.ietf.org/html/draft-williams-tls-app-sasl-opt-04

And more if you consider other efforts like False Start and look past
GSS/SASL.  Probably many more than I know of then...

Two main design axes:

1) When does the GSS context token exchange begin, and how is channel binding
done.

 - no GSS mech negotiation, first GSS context token goes in TLS
   ClientHello;

   (channel binding done via MIC tokens or GSS_Pseudo_random() output
   exchanges)

or (e.g., if the client needs to negotiate mechs)

 - TLS ClientHello carries client mechList, server announces a mech in
   its handshake message, first GSS context token goes in second client
   handshake flight with normal channel binding

(Both options could be specified, with clients choosing as desired.)


2) How many GSS context tokens can be exchanged and who is responsible
for continuing past the traditional TLS handshake.

 - one round trip only

or

 - arbitrary round trips continued by TLS or by the application

The first order of business is to decide on whether or not to support
multiple round trips (IMO we must; what's the point if not?).

The second is to decide whether or not additional context token round
trips are to be done by the application, both as to how they appear on
the wire and how they appear in the API.
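Whoever ends up carrying the tokens, the initiator side is the standard
GSS loop; a sketch, with the transport abstracted behind caller-supplied
callbacks (send_tok/recv_tok) so it could live in either the TLS library
or the application:

    #include <gssapi/gssapi.h>

    typedef int (*token_cb)(void *arg, gss_buffer_t tok);

    /* Run gss_init_sec_context() until the mechanism stops asking for
     * more round trips.  Error/cleanup paths are trimmed for brevity. */
    static OM_uint32 run_initiator(gss_name_t target, token_cb send_tok,
                                   token_cb recv_tok, void *arg)
    {
        OM_uint32 maj, min;
        gss_ctx_id_t ctx = GSS_C_NO_CONTEXT;
        gss_buffer_desc in = GSS_C_EMPTY_BUFFER, out = GSS_C_EMPTY_BUFFER;

        do {
            maj = gss_init_sec_context(&min, GSS_C_NO_CREDENTIAL, &ctx,
                                       target, GSS_C_NO_OID,
                                       GSS_C_MUTUAL_FLAG, 0,
                                       GSS_C_NO_CHANNEL_BINDINGS, &in, NULL,
                                       &out, NULL, NULL);
            if (out.length != 0) {
                if (send_tok(arg, &out) != 0)
                    return GSS_S_FAILURE;
                (void) gss_release_buffer(&min, &out);
            }
            if ((maj & GSS_S_CONTINUE_NEEDED) && recv_tok(arg, &in) != 0)
                return GSS_S_FAILURE;
        } while (!GSS_ERROR(maj) && (maj & GSS_S_CONTINUE_NEEDED));

        return maj;
    }

The wire/API question above is then only about what send_tok and
recv_tok are: TLS handshake messages, application records, or
application code.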

The third is to decide whether GSS mechanism negotiation is supported,
and whether it can be optimized away when it's not needed.

The fourth is to decide whether SASL (with SASL/GS2 to get GSS) isn't
better, since if we're going to spend a pair of flights in negotiation,
we might as well let server-talks-first SASL mechs get a leg up on GSS.
Remember, SASL can do GSS just fine via SASL/GS2 [RFC5801].

 But maybe this mailing list isn't the right place for such a
 discussion.

Well, TLS WG would be the right forum, but they are busy with TLS 1.3.
Some of us could get together elsewhere, probably not here.

 Perhaps the right question to ask is how much interest there would be
 in improving this situation in the TLS WG and whether or not OpenSSL
 would have interest in implementing such a project.

My impression is: none, because TLS WG is too busy at this time, and in
the past it has been very difficult to get the necessary level of
implementor effort.  Past performance is not always a predictor of
future performance.

It would help if GSS had better, less niche mechanisms.  For example: if
Kerberos had PKCROSS (based on DANE, say), that would help.  Or if ABFAB
went viral.  But for now everyone in the TLS world is happy _enough_
with WebPKI for server (should be service, but hey) authentication and
bearer tokens for user authentication.

Part of the problem is that HTTP authentication schemes (whether in HTTP
proper or not) have no real binding to TLS, and HTTP is basically a
routable (and usually routed) protocol anyways, which complicates
everything.  But HTTPS is the main consumer of TLS.  One might think
that adding user authentication options to TLS would be desirable for
HTTP applications, but again, the routing inherent to HTTP means that
routing must pass along user authentication information, but this isn't
always easy.  And HTTP is stateless and so doesn't deal well with
needing continuation of authentication exchanges, so bearer tokens it
basically kinda has to be, so that better mechanisms lose their appeal.

If the main consumer of GSS-in-TLS were to be something other than HTTP,
well, great, but still, HTTPS is the biggest consumer (next is SMTP)...
And it's easier then to 

Re: [openssl-users] [openssl-dev] Kerberos

2015-05-08 Thread Nico Williams

I should have mentioned NPN and ALPN too.

A TLS application could use ALPN to negotiate the use of a variant of
the real application protocol, with the variant starting with a
channel-bound GSS context token exchange.

The ALPN approach can optimize the GSS mechanism negotiation, at the
price of a cartesian explosion of {app protocols} x {GSS mechs}.  A
variant based on the same idea could avoid the cartesian explosion.  But
hey, TLS is the land of cartesian explosions; when in Rome...

The ALPN approach would be my preference here.  With TLS libraries
implementing the GSS context exchange, naturally.  The result would be
roughly what you seem to have in mind.
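A sketch of the ALPN plumbing with OpenSSL (the protocol identifiers
here are invented for illustration and not registered anywhere):

    #include <openssl/ssl.h>

    /* Hypothetical ALPN identifiers: a GSS-prefixed variant of the real
     * application protocol, plus the plain protocol as a fallback. */
    static const unsigned char alpn_list[] =
        "\x08" "imap-gss"    /* made-up id: "IMAP, GSS exchange first" */
        "\x04" "imap";

    /* Client: offer both, preferring the GSS variant (0 means success). */
    static int offer_gss_variant(SSL_CTX *ctx)
    {
        return SSL_CTX_set_alpn_protos(ctx, alpn_list, sizeof(alpn_list) - 1);
    }

    /* Server: pick the first client-offered protocol we also know. */
    static int alpn_select_cb(SSL *ssl, const unsigned char **out,
                              unsigned char *outlen, const unsigned char *in,
                              unsigned int inlen, void *arg)
    {
        unsigned char *sel;

        (void)ssl; (void)arg;
        if (SSL_select_next_proto(&sel, outlen, alpn_list,
                                  sizeof(alpn_list) - 1, in,
                                  inlen) == OPENSSL_NPN_NEGOTIATED) {
            *out = sel;
            return SSL_TLSEXT_ERR_OK;
        }
        return SSL_TLSEXT_ERR_NOACK;
    }

The server installs the callback with SSL_CTX_set_alpn_select_cb(ctx,
alpn_select_cb, NULL); when the GSS-prefixed variant wins, the first
application data in each direction is the channel-bound GSS token
exchange described above.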

If we ask TLS WG, I strongly suspect that we'll be asked to look at ALPN
first.

I should add that I also would like to see the RFC4121 Kerberos GSS
mechanism gain PFS, independently of TLS gaining GSS.

Nico
-- 


Re: not fork-safe if pids wrap

2013-08-23 Thread Nico Williams
On Fri, Aug 23, 2013 at 1:12 AM, Patrick Pelletier
c...@funwithsoftware.org wrote:
 On 8/22/13 12:46 PM, Nico Williams wrote:
 The parent might be multi-threaded, leading to the risk that a thread
 in the parent and the child will obtain the same PRNG outputs until
 the parent thread that fork()ed completes the re-seeding.

 That's a good point; I hadn't thought of that.

You were optimizing prematurely, and you know what they say about that :)

Running atfork handlers when you're going to exec may sound silly and
inefficient, but you don't really know that the child will exec(), not
when writing library code (like OpenSSL's).  Trying to avoid it is
premature optimization.  To be fair, all processes ought to exec() or
exit() soon after starting on the child-side of fork(), but we all
know many programs that fork() and just continue as if nothing.
fork() is evil, but few know it.

 Also, it's not a requirement that pthread_atfork() require -lpthread.
 It's entirely possible (and on Solaris 10 and up, for example, it is
 in fact so) that pthread_atfork() is in libc.

 That actually makes much more sense, since pthread_atfork() really has
 nothing to do with threads.  But at least on Linux, pthread_atfork() is part
 of -lpthread.

Well, to be fair in Solaris 10 and up all of pthreads is in libc.
POSIX doesn't specify what libraries should provide what functions.

 If you are going to exec() anyways you should have used vfork(), or
 better! you should have used posix_spawn() or equivalent.

 On Linux, posix_spawn still ends up calling the atfork handlers, if (and
 only if) you specify any file_actions.  I was actually in an unfortunate
 situation recently where the atfork handlers of a closed-source OpenGL
 library were causing crashes, so to run an external process with output
 redirection, I had to posix_spawn /bin/sh (with no file_actions), and then
 give /bin/sh a shell command to perform the output redirections and then
 exec the program I really wanted to run.  Ugly!

This is all OT but, are you sure?  Looking at recent-ish glibc it
looks like vfork() is used when: a) you set the POSIX_SPAWN_USEVFORK
attribute (not portable) or b) you didn't set sigmask, sigdef,
schedparam, scheduler, pgroup, or resetids.  Whereas fork() always
runs the handlers.  But perhaps your libc does something else.

Anyways, that is obviously a bug in OpenGL, and also OT :(

Nico
--


Re: DLL hell

2013-08-22 Thread Nico Williams
FYI, in a few weeks I'll have some time to actually implement and
submit patches.  I'll attempt to identify useful points for automatic
self-initialization (any hints as to commonly used first calls, not
counting the callback setters, would be welcomed).  I'll also have to
spend some time with the build system to autodetect correct build
options.  The rest should be simple enough.

I do plan to stay away from RNG and fork-safety issues, except for
re-seeding or perturbing the PRNG on the child-side of fork().

I'm not sure how to test, except for, perhaps, building two sets of
shared objects with different SONAMEs so as to be able to load two
instances of the same libraries (and their callers) in one process.
Some ideas as to what to test for would be nice.  I was thinking of
instrumenting OpenSSL entry points with calls to a utility that uses
dladdr() and stack walkers to ensure that each {caller, OpenSSL
instance} pair are always paired up correctly and no caller
inappropriately calls a different OpenSSL instance.  Would such a test
need to be in-tree and build-time configurable?  But this seems more
like a test of the run-time linker loader than a test of OpenSSL.
Ideas on how to test are welcomed.

That's the plan.  If any of you see anything wrong with it, please
save me time and effort by letting me know.

Nico
--


Re: not fork-safe if pids wrap

2013-08-22 Thread Nico Williams
On Thu, Aug 22, 2013 at 1:00 AM, Patrick Pelletier
c...@funwithsoftware.org wrote:
 On 8/21/13 8:55 AM, Nico Williams wrote:

 OpenSSL should use pthread_atfork() and mix in more /dev/urandom into
 its pool in the child-side of the fork().  Only a child-side handler
 is needed, FYI, unless there's locks to acquire and release, in which
 case you also need a pre-fork and parent-side handlers, or unless
 fork() is just a good excuse to add entropy to the pool on the parent
 side anyways :)


 Yeah, it seems like a good excuse.  Actually, it probably makes more sense
 to only add entropy on the parent side, since the parent is likely to live
 longer, and there's a good chance the child is just going to exec() anyway,
 in which case adding entropy to it will have been for nothing.

The parent might be multi-threaded, leading to the risk that a thread
in the parent and the child will obtain the same PRNG outputs until
the parent thread that fork()ed completes the re-seeding.  This would
be bad, unless there's locking around the PRNG, in which case there'd
be no problem.  (Perhaps each thread should maintain its own PRNG,
seeded separately, to avoid the need for locking.)

 The downside with using pthread_atfork() is that it requires pthreads. Since
 you pointed out there are still non-threaded libcs (at least for static
 linking), that would be an issue.

That's OK.  If you want to build OpenSSL for static linking w/o
pthreads you should get to, just don't upgrade the process model to
threaded later on.  It'd be best to not support static,
single-threaded process models that can upgrade to multi-threaded, but
that may not be possible, in which case the right answer is to require
consumers to run in or upgrade to threaded process models.

Also, it's not a requirement that pthread_atfork() require -lpthread.
It's entirely possible (and on Solaris 10 and up, for example, it is
in fact so) that pthread_atfork() is in libc.

 Most other libraries I've seen handle this by saving the pid in a static
 variable, and then comparing the current pid to it.  This has the advantage
 of not needing pthreads, and also of only adding the entropy to the child if
 it is actually needed (i. e. it doesn't exec after fork).

Optimizing getpid() using pthread_atfork() is nice, though decent
getpid() implementations effectively do so anyways.
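The pid-check idiom being described is just this (sketch; a real version
would need a lock or an atomic around the static):

    #include <sys/types.h>
    #include <unistd.h>
    #include <openssl/rand.h>

    static pid_t seeded_pid;            /* 0: not yet seeded */

    /* Called lazily from the random-number paths: reseed only when the
     * pid we last seeded under no longer matches. */
    static void maybe_reseed(void)
    {
        pid_t now = getpid();

        if (now != seeded_pid) {
            RAND_poll();                /* or stir in /dev/urandom bytes */
            seeded_pid = now;
        }
    }

The quoted caveat that follows is about precisely this check: a wrapped
pid can turn it into a false negative.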

 The only thing that bothers me about doing the pid check is that in theory
 it could still fail, although it's a really unlikely case. Imagine a parent
 process that forks a child process.  The child doesn't generate any random
 numbers, so the reseed doesn't happen in the child.  The parent dies, and
 then the child forks a grandchild.  In an incredibly rare and unlucky case,
 the grandchild could have the same pid as the original parent, and then the
 grandchild wouldn't detect it had forked.

Right, it's best to re-seed on the child-side of fork().

If you are going to exec() anyways you should have used vfork(), or
better! you should have used posix_spawn() or equivalent.

Use of fork() presents many problems, not the least of which is a
performance problem in multi-threaded processes with very large heaps
and high page dirtying rates, such as Java programs.  vfork() helps to
some degree, but the correct and portable thing to do is spawn,
specifically posix_spawn(), which avoids all child-side-of-fork
re-initialization problems by definition.
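A minimal posix_spawn() sketch for the common run-a-child-with-redirected-
output case (the command, arguments, and path here are examples only):

    #include <fcntl.h>
    #include <spawn.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    extern char **environ;

    /* Spawn "ls -l" with stdout sent to logpath; no child-side of fork()
     * ever runs in this process, so there is nothing to re-initialize. */
    static int spawn_redirected(const char *logpath)
    {
        pid_t pid;
        int rc, status;
        char *argv[] = { "ls", "-l", NULL };
        posix_spawn_file_actions_t fa;

        posix_spawn_file_actions_init(&fa);
        posix_spawn_file_actions_addopen(&fa, 1, logpath,
                                         O_WRONLY | O_CREAT | O_TRUNC, 0644);
        rc = posix_spawnp(&pid, "ls", &fa, NULL, argv, environ);
        posix_spawn_file_actions_destroy(&fa);
        if (rc != 0)
            return rc;                  /* posix_spawn returns an errno */
        return waitpid(pid, &status, 0) == pid ? 0 : -1;
    }

(As noted earlier in the thread, glibc may still run atfork handlers
when file actions are used, so this sidesteps re-initialization hazards
in the caller rather than atfork handlers as such.)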

Nico
--


Re: not fork-safe if pids wrap

2013-08-22 Thread Nico Williams
On Thu, Aug 22, 2013 at 2:46 PM, Nico Williams n...@cryptonector.com wrote:
 Use of fork() presents many problems, not the least of which is a
 performance problem in multi-threaded processes with very large heaps
 and high page dirtying rates, such as Java programs.  [...]

Also, obviously, web browsers.


Re: not fork-safe if pids wrap (was Re: DLL hell)

2013-08-21 Thread Nico Williams
On Wed, Aug 21, 2013 at 2:19 AM, Patrick Pelletier
c...@funwithsoftware.org wrote:
 An easy way to work around this, if you don't mind linking against pthreads,
 is to do this at the start of your application, after initializing OpenSSL:

 typedef void (*voidfunc) (void);

 if (ENGINE_get_default_RAND () == NULL)
   pthread_atfork (NULL, (voidfunc) RAND_poll, (voidfunc) RAND_poll);

This is a pretty standard thing to do, and Solaris' libpkcs11 does it
(not to add entropy but to re-initialize, since PKCS#11 requires all
session and object handles to no longer be usable on the child-side of
fork()).

 But, of course, this ought to eventually be fixed in OpenSSL itself. (By
 using the pid-comparison trick that libottery uses, rather than just mixing
 in the pid.)  I'm happy to submit a patch, if we think there's a good chance
 it would be considered?

OpenSSL should use pthread_atfork() and mix in more /dev/urandom into
its pool in the child-side of the fork().  Only a child-side handler
is needed, FYI, unless there's locks to acquire and release, in which
case you also need a pre-fork and parent-side handlers, or unless
fork() is just a good excuse to add entropy to the pool on the parent
side anyways :)
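A sketch of that child-side handler (assumptions: /dev/urandom is
available; errors are mostly ignored for brevity):

    #include <fcntl.h>
    #include <pthread.h>
    #include <unistd.h>
    #include <openssl/rand.h>

    /* Child-side of fork(): stir fresh /dev/urandom bytes into the pool.
     * The 0.0 entropy estimate makes this a "mix in", not a reseed claim. */
    static void prng_atfork_child(void)
    {
        unsigned char buf[32];
        int fd = open("/dev/urandom", O_RDONLY);

        if (fd >= 0) {
            if (read(fd, buf, sizeof(buf)) == (ssize_t)sizeof(buf))
                RAND_add(buf, sizeof(buf), 0.0);
            close(fd);
        }
    }

    static void install_prng_atfork(void)
    {
        /* Only the child handler is needed, per the above. */
        pthread_atfork(NULL, NULL, prng_atfork_child);
    }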

Nico
--


Re: not fork-safe if pids wrap (was Re: DLL hell)

2013-08-21 Thread Nico Williams
On Wed, Aug 21, 2013 at 5:41 AM, Ben Laurie b...@links.org wrote:
 Something needs to be done, but won't this re-introduce the problem of
 /dev/random starvation, leading to more use of /dev/urandom (on platforms
 where this is a problem)?

 Mixing in the time seems like a safer solution that should also fix the
 problem. Possibly only when the PID changes.

Stirring in time and PID seems like just a fail-safe.  Some bytes from
/dev/urandom should also be added -- it won't hang once seeded (or
ever on Linux, but hopefully a simple service can be added by users to
seed urandom from random).  Provided one read from /dev/random has
been done I think perturbing the pool with time + PID + urandom should
suffice.

Nico
--


Re: Encumbered EC crypto algorithms in openssl?

2013-08-17 Thread Nico Williams
On Sat, Aug 17, 2013 at 8:49 PM, Scott Doty scott+open...@sonic.net wrote:
 That's actually a handy reference, for in looking at Curve25519, I came
 across...

 http://cr.yp.to/ecdh/patents.html

That's half the point, yes.  It'd be all of the point if Curve25519
didn't also rock perf-wise.


Re: DLL hell

2013-08-16 Thread Nico Williams
On Thu, Aug 15, 2013 at 11:51:05PM -0700, Patrick Pelletier wrote:
 Oh.  Is there any reason not to blow that away, or at least build-time
 select which to use?
 
 I'm in agreement with you; I just don't think you're going to get
 the OpenSSL folks on board.  They'll probably say something like we
 want to be totally agnostic to threading library without
 acknowledging that pthreads and Windows threads cover the vast
 majority of modern mainstream operating systems.

Ah...  I need OpenSSL developers to consider this.  Would that mean
re-posting to the openssl-dev list?

 Great.  I was hoping that the response wouldn't be something like no
 way, we need these callback setting functions for XYZ reasons or,
 worse, no way.
 
 Unfortunately, I think the response will be that.  (The OpenSSL
 folks just haven't weighed in on this thread yet.)  That's why I was

I'm ever an optimist and I fail to see any reason to not make
initialization automatic and safe on all major platforms, keeping the
old callback setters as no-ops and as fallbacks in cases where
build-time configuration specifically requires that those setters not be
no-ops.

The alternative has to be don't *EVER* use OpenSSL from a library, or
always link with and initialize OpenSSL in every program that might -no
matter how indirectly- use an OpenSSL-using library, and *clearly* that
can't be what the OpenSSL devs want, or if it is, then it's clearly way
too late.

 floating the idea of writing an unofficial companion library that
 would smooth over these rough spots and provide a batteries
 included approach to people who want it, without having to convince
 the OpenSSL project to change the core library, which I think would
 be an uphill battle at best.

That can't really work unless *every* OpenSSL-using library used it, or
unless we specifically go for using symbol interposition (which means
dynamic linking, FYI, so it'd not work for statically-linked builds).

I'd like to get authoritative answers to my questions before considering
alternatives.

Nico
-- 


Re: DLL hell

2013-08-16 Thread Nico Williams
On Fri, Aug 16, 2013 at 02:44:23PM +, Viktor Dukhovni wrote:
 On Fri, Aug 16, 2013 at 07:17:22AM -0700, Thomas J. Hruska wrote:
  I think a lot of the init logic heralds from the original SSLeay
  days. There seems to be intent that initialization is supposed to
  happen in main() in the application and libraries shouldn't be
  calling initialization routines in OpenSSL.
 
 This is a big problem, when main() has no knowledge of OpenSSL,
 rather OpenSSL is used indirectly via an intermediate library, that
 may even be dynamically loaded (e.g. Java dynamically loading
 GSSAPI, with Heimdal's GSS library using OpenSSL).

Right!

 Now it is certainly not appropriate for other libraries to call
 OpenSSL one-time initialization functions.  The result is a mess.

Exactly.

 Therefore, it is probably time to consider moving the OpenSSL
 library initialization code into OpenSSL itself, with the set of
 ciphers and digests to initialize by default as well as the thread
 locking mechanism chosen at compile time.

But would patches for this be welcomed?

Nico
-- 


Re: Encumbered EC crypto algorithms in openssl?

2013-08-16 Thread Nico Williams
If only we could agree to use DJB's Curve25519...


DLL hell

2013-08-15 Thread Nico Williams
Hi, I'm sorry if this has all been discussed extensively before.  A
brief search for DLL hell in the archives turns up disappointingly
(and surprisingly) little.  I do see a thread with messages from my
erstwhile colleagues at Sun/Oracle, so I know it's been discussed,
e.g., here: http://www.mail-archive.com/openssl-dev@openssl.org/msg27453.html
.  Recent developments, like Android's failure to properly initialize
OpenSSL's PRNG make me think it's time to table (in the British sense)
the issue once more.

To summarize the rest of this long post (please forgive me):

There should be no need for run-time, global initialization of
OpenSSL (or any sub-system of it) by applications.  Moreover, any
existing functions for such initialization should become no-ops or do
their work just once, and in a thread-safe way.  As much configuration
of threading and other system libraries should be done at build-time
as possible; as little should be done at run-time as possible.  Even
for special-purpose OSes.


OpenSSL requires too much explicit and global run-time initialization
of it by applications in ways that preclude (or at least used to and
still might cause problems for) multiple distinct callers of OpenSSL
in the same process.  And yet the layered software systems we've built
over the past decade result in just that happening.  Typical examples
include: applications that use TLS and Kerberos, PAM applications
(each module might need to use OpenSSL, and so might the application),
systems running without nscd (the name service switch modules may need
OpenSSL, and so might the app), and so on.

Are there best practices for dealing with this other than don't do
that?  (We *really* can't not do that: horse, barn door, and all
that.)

More to the point: why are there *any* initialization requirements at
all?  I can understand the need for it on some embedded systems, but
on modern general-purpose operating systems there should always be:

 - a single threading library to depend on for locking and which includes:

 - a way to perform one-time initialization (e.g., POSIX's
pthread_once(), Win32's InitOnceExecuteOnce()); a sketch of this
follows the list;

 - a decent source of entropy (but even if you must gather some on
your own, measuring jitter or what have you, one-time initialization
should suffice).
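A sketch of what the one-time-initialization piece looks like with
pthread_once() (lib_global_init() is a hypothetical stand-in for
OpenSSL's setup work):

    #include <pthread.h>

    static pthread_once_t lib_once = PTHREAD_ONCE_INIT;

    static void lib_global_init(void)
    {
        /* allocate locks, register algorithms, seed the PRNG, ... */
    }

    /* Every public entry point calls this; the first caller, from
     * whichever thread and whichever library, pays for setup exactly once. */
    static void lib_ensure_init(void)
    {
        (void) pthread_once(&lib_once, lib_global_init);
    }

With this shape there is nothing left for applications (or intermediate
libraries) to call explicitly, which is the point being argued here.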

I realize that in the past there have been a multiplicity of threading
libraries (e.g., Solaris' native vs. POSIX threads, Linux's
LinuxThreads vs. NPTL).  But in practice these have either been
unified or one has been deprecated -- for the latter case you could
have OpenSSL build options for special cases, but the OpenSSL
distributed by distros should be linked with the standard/dominant
thread library.  I also realize that on special-purpose systems the
builder of OpenSSL may have to provide a thread library one way or
another, but even then doing so at build-time is easier than at
run-time.

Another source of DLL hell besides multiple initializations in one
process has been multiple dependents in one process depending on
different versions of OpenSSL that are not ABI compatible with each
other.  There's less that OpenSSL can do about this (besides the
obvious be ABI backwards-compatible going forward), but distros can
do some things, like use -B direct (Solaris) or versioned symbols
(Linux), and this can be documented.

(To be fair there are still problems with multiple instances of a
library in a process, on POSIX systems anyways, because of things like
POSIX file locking being insane (I'm referring to the
drop-locks-on-first-close nonsense), so that two libraries (of similar
or different pedigree -- doesn't matter) using POSIX file locking to
synchronize access to a shared file will end up stepping on each
other's toes if they run in the same process concurrently.  But by and
large ensuring that accidental interposition (see previous paragraph)
doesn't happen should suffice for many cases.  In any case, OpenSSL
thankfully appears not to use POSIX file locking for anything, thank
goodness.)

Note the self-contradicting text in the FAQ:


* Is OpenSSL thread-safe?

Yes (with limitations: an SSL connection may not concurrently be used
by multiple threads).  On Windows and many Unix systems, OpenSSL
automatically uses the multi-threaded versions of the standard
libraries.  If your platform is not one of these, consult the INSTALL
file.

Multi-threaded applications must provide two callback functions to
OpenSSL by calling CRYPTO_set_locking_callback() and
CRYPTO_set_id_callback(), for all versions of OpenSSL up to and
including 0.9.8[abc...]. As of version 1.0.0, CRYPTO_set_id_callback()
and associated APIs are deprecated by CRYPTO_THREADID_set_callback()
and friends. This is described in the threads(3) manpage.


Huh?  Which is it?  Must apps call CRYPTO_THREADID_set_callback() even
though [o]n Windows and many Unix systems, OpenSSL automatically uses
the multi-threaded versions of the standard libraries?  Why?  One
would think that the 

Re: DLL hell

2013-08-15 Thread Nico Williams
On Thu, Aug 15, 2013 at 10:58 PM, Patrick Pelletier
c...@funwithsoftware.org wrote:
 On 8/15/13 10:24 AM, Nico Williams wrote:
 .  Recent developments, like Android's failure to properly initialize
 OpenSSL's PRNG make me think it's time to table (in the British sense)
 the issue once more.

 Can you point to any article or post which explains exactly what the OpenSSL
 half of the Android issue was?  (I understand the Harmony SecureRandom
 issue, but that's a separate thing.)  OpenSSL is supposed to call RAND_poll
 on the first call to RAND_bytes, and RAND_poll knows how to seed from
 /dev/urandom on systems that have it, which should include Android.  Neither
 I nor others speculating on Google+ could figure out why this wasn't the
 case, and why explicit seeding would have been necessary:

 https://plus.google.com/+AndroidDevelopers/posts/YxWzeNQMJS2

Hmm, I've only read the article linked from there:
http://android-developers.blogspot.com/2013/08/some-securerandom-thoughts.html

Not enough info there :(  I don't really feel like finding the
relevant OpenJDK JCA code, nor the Android derivative of it.  It'd be
easier to ask them for more details.

If they were wrong about OpenSSL in this respect and the problem was
truly specific to Android then I apologize for spreading a falsehood.

 Figuring out the right sequence of initialization functions to call, even
 when just one application is using OpenSSL, has not been entirely clear.  In
 particular, see my rambling talk-page discourse on the OpenSSL Wiki about
 what is and isn't necessary in order to get ENGINEs initialized, and how it
 depends upon some build-time #defines in a non-obvious way:

I'll take a look.  For now it seems there should be no need to set any
thread-related callbacks, no?  Or if they are needed, we should make
them no-ops on OSes with thread libraries.

  [...]
 Thank you!  I'm glad I'm not the only one who feels this is a big problem.
 It's something I've expressed concern about in the past, albeit in a
 parenthesized paragraph about halfway through a long, far-reaching rant:

 http://lists.randombit.net/pipermail/cryptography/2012-October/003388.html

I've ranted myself about this privately many a time.  I... just never
got involved.  I felt this [possibly incorrectly attributed to
OpenSSL] event was the straw that broke the camel's back for me.

I think that a crypto library should have no worse
initialization/finalization/thread safety/fork safety semantics than
libpkcs11 in Solaris/Illumos, which: a) thread-safely ref-counts
C_Initialize()/C_Finalize() calls, b) leaves locking around objects to
the app, c) re-initializes (and loses all objects) on the child-side
of fork(), d) no thread/lock callback setters.  (It's necessary to
finalize all objects that refer to sessions for crypto coprocessors,
TPMs, tokens, as well as any other stateful objects on the child side
of fork(), either that or establish new sessions.  It shouldn't be
necessary to finalize key objects, say, but hey, it's what PKCS#11
requires.)  Better yet: implied initialization (using pthread_once()).

 * using the OpenSSL Ruby Gem, while also using another Ruby Gem that depends
 on OpenSSL indirectly

 * using OpenSSL directly while also using libevent's optional integration
 with OpenSSL

I'm sure there's many more :(

 Huh?  Which is it?  Must apps call CRYPTO_THREADID_set_callback() even
 though [o]n Windows and many Unix systems, OpenSSL automatically uses
 the multi-threaded versions of the standard libraries?  Why?  One
 would think that the multi-threaded versions of the standard
 libraries on such OSes would provide all that OpenSSL needs.  I don't
 get it.


 I think what it means is that We link against thread-safe versions of the
 standard library (which implies there are thread-unsafe versions of the
 standard library, which I think might be true on Windows, but I'm pretty
 sure there isn't any thread-unsafe version of the standard library on Linux
 or OS X) so that, for example, errno doesn't get clobbered by multiple
 threads.  But we still don't call any threading functions to make OpenSSL
 itself threadsafe, so you'll have to provide this boilerplate yourself in
 every OpenSSL application you write.

Well, *static* libc.a's tend to not be very thread-safe.  Solaris 10
and Linux still today both have a number of process models:

 - dynamically linked, linked with a threading library
 - dynamically linked, NOT linked with a threading library
 - linked with static libc, linked with a threading library
 - linked with static libc, NOT linked with a threading library

and interesting things happen in the "not linked with a thread
library" cases when an object loaded with dlopen() does link with a
thread library (even more interesting if the program is statically
linked with libc).  The dlopen() upgrade to threaded model case
happens when you use the name service switch without an nscd daemon or
PAM or...

A lot of people seem to love static linking