On 22 Oct 2015, at 6:03 PM, Stefan Eissing <stefan.eiss...@greenbytes.de> wrote:

> This is all true and correct - as long as all this happens in a single 
> thread. If you have multiple threads and create sub pools for each from a 
> main pool, each and every create and destroy of these sub-pools, plus any 
> action on the main pool must be mutex protected. I found out. 

Normally if you’ve created a thread from a main pool, you need to register a 
cleanup for that thread on the main pool with apr_pool_pre_cleanup_register(). 
In this cleanup, you signal the thread to shut down gracefully and then call 
apr_thread_join() to wait for the thread to exit; after that the rest of the 
pool can be cleaned up.

The “pre” is key to this - the cleanup must run before the subpool is cleared.
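A minimal sketch of that pattern, assuming a hypothetical worker struct (the 
names here are illustrative, not httpd or mod_http2 API):

```c
#include <apr_pools.h>
#include <apr_thread_proc.h>
#include <apr_atomic.h>

/* Illustrative worker state; the struct and field names are assumptions. */
typedef struct {
    apr_thread_t *thread;
    volatile apr_uint32_t shutdown;         /* 1 = please stop */
} worker_t;

/* Runs *before* the pool's subpools and memory are torn down,
   so the thread is gone before anything it uses disappears. */
static apr_status_t worker_pre_cleanup(void *data)
{
    worker_t *w = data;
    apr_status_t thread_rv;

    apr_atomic_set32(&w->shutdown, 1);      /* ask for a graceful stop */
    apr_thread_join(&thread_rv, w->thread); /* wait until it has exited */
    return APR_SUCCESS;
}

/* After creating the thread from main_pool:
 *     apr_pool_pre_cleanup_register(main_pool, w, worker_pre_cleanup);
 */
```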

> Similar with buckets. When you create a bucket in one thread, you may not 
> destroy it in another - *while* the bucket_allocator is being used. 
> bucket_allocators are not thread-safe, which means bucket_brigades are not, 
> which means that all buckets from the same brigade must only be used inside a 
> single thread.

“…inside a single thread at a time”.

The event MPM is an example of this in action.

A connection is handled by an arbitrary thread until that connection must poll. 
At that point it goes back into the pool of connections, and when ready is 
given to another arbitrary thread. In this case the threads are handled “above” 
the connections, so the destruction of a connection doesn’t impact a thread.

> This means for example that, even though mod_http2 manages the pool lifetime 
> correctly, it cannot pass a response bucket from a request pool in thread A 
> for writing onto the  main connection in thread B, *as long as* the response 
> is not complete and thread A is still producing more buckets with the same 
> allocator. etc. etc.
> 
> That is what I mean with not-thread-safe.

In this case you have different allocators, and so must pass the buckets over.
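One way to do that hand-over, sketched as a hypothetical helper (not mod_http2 
code): read the bucket in the producing thread, then create a fresh heap bucket 
on the receiving brigade’s allocator, so the copy belongs entirely to the 
consuming thread:

```c
#include <apr_buckets.h>

/* Hypothetical helper: copy one bucket's data into a brigade that
   uses a different allocator owned by another thread. */
static apr_status_t pass_bucket(apr_bucket *src, apr_bucket_brigade *dst_bb)
{
    const char *data;
    apr_size_t len;
    apr_bucket *copy;
    apr_status_t rv;

    rv = apr_bucket_read(src, &data, &len, APR_BLOCK_READ);
    if (rv != APR_SUCCESS) {
        return rv;
    }
    /* Heap buckets copy the data, so 'copy' is owned entirely by
       dst_bb's allocator and may be destroyed in the other thread. */
    copy = apr_bucket_heap_create(data, len, NULL, dst_bb->bucket_alloc);
    APR_BRIGADE_INSERT_TAIL(dst_bb, copy);
    return APR_SUCCESS;
}
```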

Remember that being lock free is a feature, not a bug. As soon as you add 
mutexes you add delay and slow everything down, because every contending thread 
must stop until the lock is released.

A more efficient way of handling this is to use some kind of IPC so that the 
requests signal the master connection and go “I’ve got data for you”, after 
which the requests don’t touch that data until the master has said “I’ve got 
it, feel free to send more”. That IPC could be a series of mutexes, or a socket 
of some kind. Anything that gets rid of a global lock.

That doesn’t mean request processing must stop dead: that request just gets put 
aside, and the thread is free to work on another request.

I’m basically describing the event MPM.

Regards,
Graham
—