> On 07.10.2015 at 16:09, Graham Leggett <minf...@sharp.fm> wrote:
> 
> On 07 Oct 2015, at 3:43 PM, Stefan Eissing <stefan.eiss...@greenbytes.de> 
> wrote:
> 
>> Having just had time to look at which test cases fail: I see that static 
>> resources via HTTP/2 seem to work fine; however, my tests with a proxy and/or 
>> rewrite in between fail with high likelihood. 
>> 
>> Any hint at what exactly I might have to look for, any hint about what 
>> actually has changed, would be appreciated.
> 
> In terms of requests you seem to be doing the right thing. You create a 
> request with h2_task_create_request(), which is in turn called from 
> h2_task_process_request(), which then calls ap_process_request(), which 
> handles the request and then passes an EOR bucket down to the end of the chain.
> 
> I think the problems start once the above is done - are you performing any 
> sort of manual cleanup of requests, connections, or pools that are parents of 
> connections? If you do, you're probably destroying the request before it has 
> finished going over the network.
> 
> A request is started and you then forget about it; when the core processes 
> the EOR bucket, the request will disappear on its own at some future date.
> 
> Can you describe how cleanups occur in the http2 world?

In http2 land, requests happen on "pseudo" connections: connections properly 
created by ap_run_create_connection(), but with their own filters of type 
AP_FTYPE_PROTOCOL and AP_FTYPE_NETWORK, registered by mod_h2. 
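
Roughly sketched (the filter names and functions below are made up for 
illustration, not the actual mod_h2 symbols), the idea is to give the pseudo 
connection its own protocol- and network-level output filters so its data 
never reaches the real socket directly:

    /* Sketch only, not mod_h2 code: own output filters for a slave
     * (pseudo) connection created via ap_run_create_connection(). */
    #include "httpd.h"
    #include "util_filter.h"

    static apr_status_t h2_slave_out(ap_filter_t *f, apr_bucket_brigade *bb)
    {
        /* in mod_h2 this would hand the data to the multiplexer; here it
         * is only a stub so the sketch is complete */
        (void)f; (void)bb;
        return APR_SUCCESS;
    }

    static void h2_register_filters(apr_pool_t *pool)
    {
        (void)pool;
        ap_register_output_filter("H2_SLAVE_PROTO", h2_slave_out, NULL,
                                  AP_FTYPE_PROTOCOL);
        ap_register_output_filter("H2_SLAVE_NET", h2_slave_out, NULL,
                                  AP_FTYPE_NETWORK);
    }

    static void h2_setup_slave(conn_rec *slave)
    {
        /* attach the filters to the pseudo connection */
        ap_add_output_filter("H2_SLAVE_PROTO", NULL, NULL, slave);
        ap_add_output_filter("H2_SLAVE_NET", NULL, NULL, slave);
    }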

These filters copy the data (or move the files) from the processing thread 
into the h2 multiplexer (h2_mplx), where the master connection thread 
reads it and sends it out to the client, properly framed for HTTP/2.
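
As a rough illustration of that hand-off (the struct and function names are 
hypothetical, not the real h2_mplx API), the multiplexer can be thought of as 
a mutex-protected per-stream buffer that the slave side appends to and the 
master side drains:

    /* Hypothetical sketch of the hand-off into the multiplexer. Data is
     * copied under a mutex so no APR bucket ever crosses a thread. */
    #include "apr_thread_mutex.h"
    #include "apr_buckets.h"

    typedef struct {
        apr_thread_mutex_t *lock;   /* guards all fields below */
        apr_bucket_brigade *out;    /* data waiting for the master thread */
    } mplx_stream;

    static apr_status_t mplx_append(mplx_stream *s,
                                    const char *data, apr_size_t len)
    {
        apr_status_t rv = apr_thread_mutex_lock(s->lock);
        if (rv != APR_SUCCESS) {
            return rv;
        }
        /* copy the bytes; the source bucket stays on the slave side */
        rv = apr_brigade_write(s->out, NULL, NULL, data, len);
        apr_thread_mutex_unlock(s->lock);
        return rv;
    }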

Memory-wise, master, multiplexer and slave connections have separate apr_pool 
hierarchies, because any other arrangement runs into multi-threading issues 
(see the sketch after this list):
- master connection: mpm-assigned thread, pool provided by the core
- multiplexer: everything protected by a mutex, child pools for every h2 stream
- slave connection: child pools of the h2_workers assigned to them
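
A minimal sketch of that pool layout; the variable names are illustrative and, 
in the real server, the master connection pool is handed in by the core/MPM:

    /* Illustrative only: how the separate pool hierarchies relate. */
    #include "apr_pools.h"

    static void setup_pools(apr_pool_t *worker_pool)
    {
        apr_pool_t *mplx_pool;    /* multiplexer root, everything behind a mutex */
        apr_pool_t *stream_pool;  /* one child pool per h2 stream */
        apr_pool_t *slave_pool;   /* slave connection, child of an h2_worker pool */

        /* separate hierarchies: the multiplexer does not hang off the
         * master connection pool, so each thread stays in its own tree */
        apr_pool_create(&mplx_pool, NULL);
        apr_pool_create(&stream_pool, mplx_pool);
        apr_pool_create(&slave_pool, worker_pool);
    }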

Due to the non-multithreadability of apr_buckets, no buckets are ever moved 
across threads: non-meta buckets are read, meta buckets are deleted. That 
should work fine for EOR buckets, as all data has already been copied by the 
time they arrive.
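
A minimal sketch of that discipline inside a slave-side output filter, using 
the hypothetical mplx_stream/mplx_append from the sketch above (the real 
mod_h2 code differs):

    /* Sketch: consume a brigade on the slave side without letting any
     * bucket cross a thread boundary. Non-meta buckets are read and
     * their bytes copied; meta buckets (EOS, EOR, FLUSH, ...) are
     * simply deleted. */
    static apr_status_t consume_brigade(mplx_stream *s, apr_bucket_brigade *bb)
    {
        while (!APR_BRIGADE_EMPTY(bb)) {
            apr_bucket *b = APR_BRIGADE_FIRST(bb);

            if (!APR_BUCKET_IS_METADATA(b)) {
                const char *data;
                apr_size_t len;
                apr_status_t rv = apr_bucket_read(b, &data, &len,
                                                  APR_BLOCK_READ);
                if (rv != APR_SUCCESS) {
                    return rv;
                }
                rv = mplx_append(s, data, len);
                if (rv != APR_SUCCESS) {
                    return rv;
                }
            }
            apr_bucket_delete(b);
        }
        return APR_SUCCESS;
    }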

One special case is implemented for file buckets. If the number of already open 
files is not "too high", apr_file_setaside() is used to re-register the file 
handle cleanup with the stream pool instead of the slave connection pool, and a 
new file bucket is written.
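
A rough sketch of that setaside, assuming a stream pool and a stream-owned 
output brigade (names illustrative, error handling trimmed):

    /* Sketch: move a file bucket's handle to the stream pool so it
     * outlives the slave connection. Illustrative only. */
    #include "apr_buckets.h"
    #include "apr_file_io.h"

    static apr_status_t setaside_file_bucket(apr_bucket *b,
                                             apr_pool_t *stream_pool,
                                             apr_bucket_brigade *stream_bb)
    {
        apr_bucket_file *fb;
        apr_file_t *fd = NULL;
        apr_status_t rv;

        if (!APR_BUCKET_IS_FILE(b)) {
            return APR_EINVAL;
        }
        fb = b->data;
        /* re-register the file handle cleanup with the stream pool
         * instead of the slave connection pool */
        rv = apr_file_setaside(&fd, fb->fd, stream_pool);
        if (rv != APR_SUCCESS) {
            return rv;
        }
        /* write a fresh file bucket that now belongs to the stream */
        apr_brigade_insert_file(stream_bb, fd, b->start, b->length,
                                stream_pool);
        return APR_SUCCESS;
    }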

So, data/files can and will live long after the slave connection has gone away 
and all its pools have been reclaimed. This is desired and is even the ideal 
case, as streaming out a file can then be done solely from the master 
connection, which can interleave many streams using only a single thread.

Stream pool destruction is synchronized on two conditions: 
1. the slave connection is done and no longer writing to it
2. the h2 stream has been written out to the client or is otherwise closed
Only after both 1 and 2 have happened is this memory reclaimed (sketched below).
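
A minimal illustration of that two-condition hand-shake, assuming a 
hypothetical per-stream state guarded by the multiplexer mutex (not the 
actual mod_h2 code):

    /* Hypothetical sketch: the stream pool is only destroyed once both
     * the slave connection and the client-facing side are done with it. */
    #include "apr_pools.h"
    #include "apr_thread_mutex.h"

    typedef struct {
        apr_thread_mutex_t *lock;
        apr_pool_t *pool;        /* the stream's memory */
        int slave_done;          /* condition 1 */
        int client_done;         /* condition 2 */
    } stream_state;

    static void stream_maybe_destroy(stream_state *st)
    {
        int destroy;

        apr_thread_mutex_lock(st->lock);
        destroy = st->slave_done && st->client_done;
        apr_thread_mutex_unlock(st->lock);

        if (destroy) {
            apr_pool_destroy(st->pool);  /* reclaim stream memory */
        }
    }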

Transient buckets are used heavily on the master connection, as the way data 
buckets are generated does not suit the coalescing SSL filter as it is 
currently designed. Instead, each master connection has a max-size buffer in 
which frames are assembled and then chunked into nicely sized transient 
buckets for passing down the network filters.
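
Roughly, the pattern looks like this (a sketch, not the actual frame assembly 
code; buf stands for the per-connection frame buffer):

    /* Sketch: pass an assembled frame down the master connection's
     * filter chain as a transient bucket over a reused buffer. */
    #include "httpd.h"
    #include "util_filter.h"
    #include "apr_buckets.h"

    static apr_status_t pass_frame(conn_rec *master, apr_bucket_brigade *bb,
                                   const char *buf, apr_size_t len)
    {
        apr_bucket *b = apr_bucket_transient_create(buf, len,
                                                    master->bucket_alloc);
        APR_BRIGADE_INSERT_TAIL(bb, b);

        /* the buffer must stay valid until the brigade is consumed,
         * which is why it is sent down the chain right away */
        return ap_pass_brigade(master->output_filters, bb);
    }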

This is h2 bucket/pool handling in a nutshell.

//Stefan

