On Wed, Jul 3, 2019 at 9:23 AM Stefan Eissing
<[email protected]> wrote:
>
> > On 02.07.2019 at 14:00, Eric Covener <[email protected]> wrote:
> >
> > I think a successful exit of CONN_STATE_WRITE_COMPLETION would
> > conceptually lead to keepalive state so that would be plausible.  And
> > a failed one would go into lingering close.
>
> That is also how I understand it.

I think WRITE_COMPLETION, contrary to what its name suggests, can also
handle _read_ completion by using/setting CONN_SENSE_WANT_READ.
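
For reference, the write-completion branch in event.c already honors that
sense flag when deciding what to poll for; roughly (paraphrasing from
memory, the real code is inline in process_socket() and names are
approximate):

    /* Sketch: a connection kept in WRITE_COMPLETION is polled according
     * to cs->pub.sense; CONN_SENSE_WANT_READ flips the poll direction
     * from writability to readability, and the hint is then reset so it
     * only applies once.
     */
    if (cs->pub.sense == CONN_SENSE_WANT_READ) {
        cs->pfd.reqevents = APR_POLLIN;
    }
    else {
        cs->pfd.reqevents = APR_POLLOUT;
    }
    cs->pfd.reqevents |= APR_POLLHUP | APR_POLLERR;
    cs->pub.sense = CONN_SENSE_DEFAULT;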

> With the current mpm architecture, there are only two situations where
> connection processing by h2 returns to the mpm:
> 1. when all streams (requests) have been handled and the connection is
> really in KeepAlive
> 2. when the flow-control windows of existing streams are exhausted; the
> only thing that can unlock this state is new frames from the client.
>
> My understanding is that the mpm either times out the connection or
> receives client data, which then makes it invoke connection processing
> again. Here, the KeepAliveTimeout seems to always apply, which is wrong
> for situation 2 described above.

So you'd want Timeout, right?
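
That Timeout is the fixed timeout attached to the write completion queue;
roughly, the per-vhost queue setup in event.c looks like this (sketch from
memory, macro/field names approximate):

    /* Sketch: each timeout queue carries one fixed timeout applied to
     * all of its entries (plain APR_RINGs, head insertion/expiry in
     * O(1)), so the queue a connection lands in decides which directive
     * governs it: write_completion_q -> Timeout, keepalive_q ->
     * KeepAliveTimeout.
     */
    sc->wc_q = TO_QUEUE_MAKE(pconf, s->timeout, NULL);
    sc->ka_q = TO_QUEUE_MAKE(pconf, s->keep_alive_timeout, NULL);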

> > An option in this area is trunk-only PT_USER callback in event which
> > frees you from shoehorning into the states and just lets you get
> > called back when a socket is readable.
> > This is what's used in mod_proxy_wstunnell in the opt-in async mode.
>
> Hmm, I need to look at that. Thanks for the pointer.

That (but it's trunk only and quite a big change for 2.4.x), or, as said
above, CONN_SENSE_WANT_READ.
However, the event MPM's queues (write_completion_q, keepalive_q, ...)
handle _fixed_ timeouts only (using fast/dumb APR_RINGs in O(1)), so
CONN_SENSE_WANT_READ could work only if h2 needs Timeout instead of
KeepAliveTimeout.
If a "dynamic" timeout is needed, PT_USER is the only option I think.

So possibly something along the lines of the attached patch would be
enough, modulo CONN_SENSE_WANT_READ, which you know better than I do
where/how to set ;)


Regards,
Yann.
Index: server/mpm/event/event.c
===================================================================
--- server/mpm/event/event.c	(revision 1862460)
+++ server/mpm/event/event.c	(working copy)
@@ -1153,10 +1153,11 @@ read_request:
         else if (ap_filter_should_yield(c->output_filters)) {
             pending = OK;
         }
-        if (pending == OK) {
+        if (pending == OK || (pending == DECLINED &&
+                              cs->pub.sense == CONN_SENSE_WANT_READ)) {
             /* Still in WRITE_COMPLETION_STATE:
-             * Set a write timeout for this connection, and let the
-             * event thread poll for writeability.
+             * Set a read/write timeout for this connection, and let the
+             * event thread poll for read/writeability.
              */
             cs->queue_timestamp = apr_time_now();
             notify_suspend(cs);
Index: modules/http2/h2_conn.c
===================================================================
--- modules/http2/h2_conn.c	(revision 1862460)
+++ modules/http2/h2_conn.c	(working copy)
@@ -231,6 +231,7 @@ apr_status_t h2_conn_run(conn_rec *c)
             case H2_SESSION_ST_BUSY:
             case H2_SESSION_ST_WAIT:
                 c->cs->state = CONN_STATE_WRITE_COMPLETION;
+                c->cs->sense = CONN_SENSE_WANT_READ; /* TODO: set appropriately */
                 break;
             case H2_SESSION_ST_CLEANUP:
             case H2_SESSION_ST_DONE:
Index: modules/http2/h2_task.c
===================================================================
--- modules/http2/h2_task.c	(revision 1862460)
+++ modules/http2/h2_task.c	(working copy)
@@ -665,8 +665,10 @@ static apr_status_t h2_task_process_request(h2_tas
          * will result in a segfault immediately instead
          * of nondeterministic failures later.
          */
-        if (cs) 
+        if (cs) {
             cs->state = CONN_STATE_WRITE_COMPLETION;
+            cs->sense = CONN_SENSE_WANT_READ; /* TODO: set appropriately */
+        }
         r = NULL;
     }
     else if (!r) {
