On 21/05/2023 at 17:50, Yann Ylavic wrote:
On Sun, May 21, 2023 at 4:22 PM Yann Ylavic <ylavic....@gmail.com> wrote:

On Sun, May 21, 2023 at 2:07 PM Christophe JAILLET
<christophe.jail...@wanadoo.fr> wrote:

On 19/08/2021 at 15:43, yla...@apache.org wrote:

@@ -604,7 +604,7 @@ static apr_status_t send_brigade_nonbloc
            */
           if (nbytes > sconf->flush_max_threshold
                   && next != APR_BRIGADE_SENTINEL(bb)
-                && !is_in_memory_bucket(next)) {
+                && next->length && !is_in_memory_bucket(next)) {
               (void)apr_socket_opt_set(s, APR_TCP_NOPUSH, 1);
               rv = writev_nonblocking(s, bb, ctx, nbytes, nvec, c);
               if (rv != APR_SUCCESS) {

The comment above this code is:
          /* Flush above max threshold, unless the brigade still contains in
           * memory buckets which we want to try writing in the same pass (if
           * we are at the end of the brigade, the write will happen outside
           * the loop anyway).
           */

With the added next->length test, IIUC, we will *also* keep processing the bucket *after* the metadata. So we could accumulate above flush_max_threshold for no good reason (i.e. without processing data that is already in memory).

Is that what is expected?

Buckets with ->length == 0 don't contain data (in memory or not), as
opposed to ->length == -1 (unknown/on-read data length) for instance.
No special action is needed for empty buckets at the network level,
they just need to be destroyed when consumed (e.g. for EOR buckets to
terminate/cleanup the underlying request), so the loop simply ignores
them until the flush (i.e. writev/sendfile will delete them finally).
So presumably we can't accumulate above flush_max_threshold just by not
flushing before empty buckets? More precisely, we won't continue to
accumulate once flush_max_threshold is reached, but we might flush
more than that if the last accumulated bucket crosses the threshold
(though that's independent of empty buckets).

Hm, re-reading your question and the code I think I replied
orthogonally (and not accurately) here.
What the code actually does is coalesce in-memory buckets (including
empty ones, trivially) regardless of flush_max_threshold, precisely
because they are in memory already and we want to try sending them on
the network (up to its actual capacity) to release memory in httpd as
much as possible. Ignoring ->length == 0 basically means that metadata
buckets are treated as in-memory buckets (which they really are), so
maybe it would be clearer to add the ->length == 0 test directly in
is_in_memory_bucket() after all..

Anyway, I see what you mean now: when we are at flush_max_threshold and
the next bucket is metadata, ignoring/continuing past that bucket in
the next iteration without re-checking flush_max_threshold is wrong.

Yes, that's my point.

I'll apply my patch and propose both for backport.

Thanks for the review and feedback.

CJ

And you are very right, good catch! Your patch looks good to me too.

This commit (r1892450) did not make it to 2.4.x it seems, maybe with
your fix it could now :) That's a good optimization I think..


Regards,
Yann.
