On 19 Feb 2018, at 11:14 AM, Stefan Eissing <stefan.eiss...@greenbytes.de> 
wrote:

> If I understand your gist correctly, this would allow HTTP/2 processing to 
> return to the main (async) event loop more often. Which would be great.
> 
> In the case of HTTP/2, it would be even cooler to trigger the 
> (re-)processing of an AGAIN connection from another thread. The use
> case is: the H2 main loop has started a request and awaits response 
> HEADERS/DATA *or* incoming data from the client.
> 
> now: timed wait on a condition with read checks on the main connection at 
> dynamic intervals
> then: return AGAIN (READ mode) to the event loop; new HEADERS/DATA from the 
> request triggers re-process_connection.

This is the problem I want to solve: I want to be able to run multiple 
connections and allow them to yield to each other.

I want to give our hooks the option to bite off and process data in chunks 
they’re in control of. Right now, you call the handler hook and it’s a 
one-shot deal: the handler must finish whatever it wants to do, and only 
return once it is done.
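
Roughly, I’m thinking of something like the sketch below. It is entirely 
hand-wavy: the AGAIN status and its value, the per-connection state stashing 
and the chunking logic are all made up for illustration, not existing API.

/*
 * Sketch only: AGAIN, its value, and the chunk logic are hypothetical,
 * for discussion of the idea, not existing httpd API.
 */
#include "httpd.h"
#include "http_config.h"
#include "http_connection.h"

#define AGAIN (-42)   /* hypothetical "not finished, call me again" status */

module AP_MODULE_DECLARE_DATA example_module;

typedef struct {
    int chunks_done;   /* progress stashed between invocations */
    int finished;
} chunk_state;

static int chunked_process_connection(conn_rec *c)
{
    chunk_state *st = ap_get_module_config(c->conn_config, &example_module);

    if (st == NULL) {
        st = apr_pcalloc(c->pool, sizeof(*st));
        ap_set_module_config(c->conn_config, &example_module, st);
    }

    /* Bite off one chunk of work instead of running to completion;
     * what "one chunk" means is entirely up to the hook. */
    st->chunks_done++;
    st->finished = (st->chunks_done >= 10);

    if (!st->finished) {
        /* Hand control back to the event loop; it would re-invoke this
         * hook when the connection is ready, or when another thread
         * asks for it, per the HTTP/2 use case above. */
        return AGAIN;
    }
    return OK;
}

static void register_hooks(apr_pool_t *p)
{
    ap_hook_process_connection(chunked_process_connection,
                               NULL, NULL, APR_HOOK_MIDDLE);
}

module AP_MODULE_DECLARE_DATA example_module = {
    STANDARD20_MODULE_STUFF,
    NULL, NULL, NULL, NULL, NULL,
    register_hooks
};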

Yes, in most cases our handlers generate data and pass it to the filter 
chain, which then handles the async data flow, but it would be nice if they 
weren’t forced to do that.
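
For contrast, today’s pattern looks roughly like this (just the handler 
function, module boilerplate omitted, names are placeholders): the handler 
generates everything in one call and pushes it down the output filter chain.

#include <string.h>
#include "httpd.h"
#include "http_config.h"
#include "http_protocol.h"
#include "util_filter.h"
#include "apr_buckets.h"

static int example_handler(request_rec *r)
{
    apr_bucket_brigade *bb;
    apr_status_t rv;

    if (strcmp(r->handler, "example")) {
        return DECLINED;
    }
    ap_set_content_type(r, "text/plain");

    /* Generate everything up front... */
    bb = apr_brigade_create(r->pool, r->connection->bucket_alloc);
    apr_brigade_puts(bb, NULL, NULL, "hello\n");
    APR_BRIGADE_INSERT_TAIL(bb,
        apr_bucket_eos_create(r->connection->bucket_alloc));

    /* ...and pass it down the filter chain, which takes care of the
     * async data flow from here. The handler itself cannot yield. */
    rv = ap_pass_brigade(r->output_filters, bb);
    if (rv != APR_SUCCESS) {
        return HTTP_INTERNAL_SERVER_ERROR;
    }
    return OK;
}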

Regards,
Graham
—
