Hello,

Please take a look at this quote:

"*Least Connected*

*With least connected load balancing, nginx won’t forward any traffic to a
busy server. This method is useful when operations on the application
servers take longer to complete. Using this method helps to avoid overload
situations, because nginx doesn't pass any requests to servers which are
already under load.*"

See this article:
https://futurestud.io/tutorials/nginx-load-balancing-advanced-configuration

I'm using this nginx feature and it has helped me a lot. I have an
application with two routes: the first one is fast, because it just
processes common atomic CRUD operations; the second is a bit slow,
because it spends a long time on heavy work such as report generation.
So I start two instances of my application and configure two proxies in
my upstream. Normally all requests go to the first proxy, but when a
request triggers report generation and blocks that server, the
following requests are redirected to the second proxy. After a timeout
(about 10 seconds), nginx checks whether the first proxy is responsive
again and the requests come back to it.
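
Roughly, the upstream part of my configuration looks like this (the
upstream name and ports below are just placeholders, not my real
setup):

upstream app_backend {
    least_conn;                # nginx's "least connected" method
    server 127.0.0.1:8081;     # first application instance
    server 127.0.0.1:8082;     # second application instance
}

server {
    listen 80;

    location / {
        proxy_pass http://app_backend;
    }
}

I suspect the roughly 10-second recovery I observe comes from the
default fail_timeout=10s on the server directives, but I haven't
confirmed that.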

Does MHD offer something like this in its threading models? If so,
which flags do I need to pass to get the following behavior: normally
all users are handled by the first thread using MHD's event-driven
mode; however, when someone triggers report generation and blocks that
thread, MHD checks on each new request whether the route is still
blocked and, if so, creates a new thread (also event-driven) and
redirects the next requests to it.

I don't know if I was clear in my explanation, but in short I'm trying
to achieve something like nginx's "Least Connected", handling the
requests purely with MHD.
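
To make the question more concrete, here is roughly how I picture
starting the daemon. This is only a sketch against the current
libmicrohttpd API; the port and pool size are arbitrary, and whether
the thread pool distributes new connections by per-thread load is
exactly what I don't know:

#include <microhttpd.h>
#include <stdio.h>
#include <string.h>

/* Single handler for both routes; the "slow" route (report
   generation) would block the thread serving this connection. */
static enum MHD_Result
handle_request (void *cls, struct MHD_Connection *connection,
                const char *url, const char *method,
                const char *version, const char *upload_data,
                size_t *upload_data_size, void **req_cls)
{
  static const char page[] = "OK";
  struct MHD_Response *response;
  enum MHD_Result ret;

  response = MHD_create_response_from_buffer (strlen (page),
                                              (void *) page,
                                              MHD_RESPMEM_PERSISTENT);
  ret = MHD_queue_response (connection, MHD_HTTP_OK, response);
  MHD_destroy_response (response);
  return ret;
}

int
main (void)
{
  /* Internal event loop plus a pool of worker threads; connections
     are spread over the pool, but I don't know whether MHD takes
     each thread's current load into account. */
  struct MHD_Daemon *daemon =
    MHD_start_daemon (MHD_USE_INTERNAL_POLLING_THREAD,
                      8080,               /* arbitrary port */
                      NULL, NULL,
                      &handle_request, NULL,
                      MHD_OPTION_THREAD_POOL_SIZE, (unsigned int) 2,
                      MHD_OPTION_END);
  if (NULL == daemon)
    return 1;
  (void) getchar ();    /* keep the daemon running */
  MHD_stop_daemon (daemon);
  return 0;
}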

--
Silvio Clécio
