Hi Dylan,

On Thu, Dec 15, 2016 at 12:59:32PM +0700, Dylan Jay wrote:
> Still no one else has a use case for a maxconn based on when the end of
> headers is received, rather than when the connection is closed?
> This is still an issue for us and we haven't found any load balancer that
> can handle it. It leaves our servers under-utilised.
> 
> Basically we want to set maxconn = 20 (or some large number) and
> maxconn_processing = 1 (where a request counts against maxconn_processing
> until it has finished returning its response headers).

I understand how this could be useful to you, though I also think a
software architecture issue on your side makes this use case quite
unique. That said, such a feature could still make sense, for example
for those using fd-passing between their dynamic frontend and the
static server, as well as for those using sendfile().

What I think is that we could implement a new per-server and per-backend
counter of requests still being actively processed, with a configurable
maximum, so that we can also decide to queue/dequeue on such events. But
there are different degrees of "active", and some people will want to be
even finer by only counting as active the requests whose POST body has
been completely sent. People using WAFs may find this useful, for
example: you send a large request, you know it will not be processed
until completely sent, and you know it's not completed until you receive
the response headers.
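
Just to illustrate the idea ("max-active" does not exist today, it's
only a placeholder name for such a limit), the configuration could look
like this:

    backend app
        # "max-active" is hypothetical, not an existing haproxy keyword
        server srv1 192.168.0.11:8080 maxconn 20 max-active 1
        server srv2 192.168.0.12:8080 maxconn 20 max-active 1

Each server would still accept up to 20 concurrent connections, but only
one request at a time that has not yet returned its response headers;
requests beyond that limit would be queued and dequeued just like with
maxconn today.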

We would also need to implement a new LB algorithm, e.g. "least-active",
as a complement to leastconn, to pick the most suitable server.
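
As a rough sketch, assuming a hypothetical "least-active" keyword
modeled on leastconn:

    backend app
        # "least-active" is hypothetical, not an existing balance algorithm
        balance least-active
        server srv1 192.168.0.11:8080
        server srv2 192.168.0.12:8080

Each new request would then go to the server currently processing the
fewest not-yet-responded requests, the same way leastconn considers
established connections.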

I'm seeing a similar use case when using haproxy with distcc: the data
transfer time is not negligible compared to the compilation time, and if
I could parse the protocol, I'd need to take the real processing time
into account as well.

Now the obvious question is: who is interested in working on
implementing this? Are you willing to take a look at it and implement it
yourself, possibly with some help from the rest of us?

Best regards,
Willy
