Hi list

I have an HAProxy setup which looks something like this:

frontend fast
  bind 0.0.0.0:8000
  default_backend fast_servers

frontend slow
  bind 0.0.0.0:8001
  default_backend slow_servers

backend fast_servers
  server fast0 127.0.0.1:9000 maxconn 1 check
  server fast1 127.0.0.1:9001 maxconn 1 check
  server fast2 127.0.0.1:9002 maxconn 1 check
  server fast3 127.0.0.1:9003 maxconn 1 check

backend slow_servers
  server slow0 127.0.0.1:9000 maxconn 1 check
  server slow1 127.0.0.1:9001 maxconn 1 check


My frontend web server sends requests that are expected to complete quickly
to the fast pool (port 8000), and requests that are expected to take some
time to process to the slow pool (port 8001). I use maxconn 1 to ensure
that, wherever possible, requests are never queued behind some other
slow-moving request.

In order to avoid idle backend servers, I would like to share some of the
backend servers (ports 9000 and 9001 here) between the fast and slow pools.
My problem is that when a slow request is being processed by the
slow_servers pool, fast_servers will still send requests to the same
backend instance, sometimes resulting in a long delay before processing can
complete.

Is there a way of configuring HAProxy to respect 'maxconn 1' across backend
pools, so that if a slow request is being processed by slow_servers, the
fast_servers pool will also respect this and not send a request there?
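For what it's worth, the only workaround I've come up with so far is to
merge the shared servers into a single backend used by both frontends, so
that maxconn 1 is enforced in one place (server names below are just for
illustration):

backend shared_servers
  balance leastconn
  server shared0 127.0.0.1:9000 maxconn 1 check
  server shared1 127.0.0.1:9001 maxconn 1 check

But that loses the fast/slow split entirely, so fast requests can still
queue behind slow ones, which is exactly what I'm trying to avoid.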

Thanks
