Right, I understand. If I'm not mistaken, check_pipeline_flush already tests whether the input filters hold any data.
It doesn't quite work like that. check_pipeline_flush (via the EATCRLF get_brigade call) only does anything if there are stray CRLFs after a request; it tells you nothing about whether another request is pending. (In fact, EATCRLF will actually read data from the socket into core_input_filter's buffer, so it directly causes the subsequent poll() to misbehave.) For example, when mod_ssl is active, an EATCRLF call always returns ENOTIMPL. So check_pipeline_flush doesn't always work as expected, and the EATCRLF check isn't enough to determine whether there is any 'held' data.
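To make the limitation concrete, here is a hedged sketch (not copied from the httpd source; the function name probe_stray_crlfs is invented for illustration) of what an EATCRLF probe through the filter chain looks like, and why its return value says nothing about a pending request:

```c
#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

/* Hypothetical probe, in the style of check_pipeline_flush's
 * EATCRLF call.  AP_MODE_EATCRLF only consumes blank CRLF lines
 * left between pipelined requests. */
static apr_status_t probe_stray_crlfs(conn_rec *c)
{
    apr_bucket_brigade *bb =
        apr_brigade_create(c->pool, c->bucket_alloc);

    /* Success here does NOT mean "no request pending" -- the mode
     * simply has nothing to report about buffered request data.
     * A connection filter that can't implement the mode (mod_ssl's
     * I/O filter, for one) returns APR_ENOTIMPL outright.  And
     * core_input_filter may pull fresh socket data into its own
     * buffer to satisfy the probe, so a later poll() on the raw
     * socket won't see that the data is already waiting. */
    return ap_get_brigade(c->input_filters, bb,
                          AP_MODE_EATCRLF, APR_NONBLOCK_READ, 0);
}
```

This fragment only compiles against the httpd headers, so treat it as a reading aid rather than drop-in code.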
Not if we modify it so that the worker thread doesn't hand the connection back to the event thread when there is more data in the input filters. That's what I meant by "react appropriately"; sorry if I wasn't clear.
The problem is that there is no reliable way to determine whether there is more data in the input filters without actually invoking a read. Connection-level filters like mod_ssl would have to be rewritten to be async. A SPECULATIVE read with APR_NONBLOCK_READ comes the closest to achieving the goal, but I expect mod_ssl isn't going to work quite right with non-blocking reads.
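For comparison with the EATCRLF probe above, a speculative non-blocking probe might look like the sketch below (again hypothetical; the helper name filters_hold_data is invented, and it assumes the httpd filter API):

```c
#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

/* Hypothetical check: does any input filter hold buffered data?
 * AP_MODE_SPECULATIVE returns buffered bytes without consuming
 * them, so a later normal read still sees the same data; with
 * APR_NONBLOCK_READ the call returns EAGAIN rather than blocking
 * when nothing is buffered. */
static int filters_hold_data(conn_rec *c)
{
    apr_bucket_brigade *bb =
        apr_brigade_create(c->pool, c->bucket_alloc);
    apr_status_t rv;

    rv = ap_get_brigade(c->input_filters, bb,
                        AP_MODE_SPECULATIVE, APR_NONBLOCK_READ, 1);

    /* Success with a non-empty brigade means some filter (core,
     * mod_ssl, ...) is holding data; EAGAIN means nothing is
     * buffered right now.  As noted above, this still relies on
     * every connection filter handling a non-blocking SPECULATIVE
     * read correctly, which mod_ssl may not. */
    return (rv == APR_SUCCESS && !APR_BRIGADE_EMPTY(bb));
}
```

This is the closest approximation under current filter semantics, not a guarantee; it is only as reliable as the least async-aware filter in the chain.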
Trying to support both 'slow' *and* 'fast' connections will, I think, require changes outside the scope of the MPM. This is why I'd prefer branching 2.3 and working on it there: these changes are likely to snowball. Plus, this effort dovetails with rethinking how filters work. -- justin
