Hi Mark,

Mark Staudinger wrote on 08.08.2017:

> Hi Folks,
>
> I have a multi-tenant HAProxy set-up loosely as follows
>
> frontend main
>    bind ip:port
>    various options
>    ACLs to match domains (client1, client2, etc)
>    use_backend client1 if client1
>    use_backend client2 if client2

How about adding a defaults section with some default-server lines here?

http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4

###
A "defaults" section sets default parameters for all other sections following
its declaration. Those default parameters are reset by the next "defaults"
section. See below for the list of parameters which can be set in a "defaults"
section. The name is optional but its use is encouraged for better readability.
###

http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4.2-default-server


For example:
http://git.haproxy.org/?p=haproxy.git;a=blob;f=examples/acl-content-sw.cfg;h=1872789ac2d1198f4321e77c0dad4f382cc8f206;hb=HEAD
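
Applied to your snippet it could look roughly like this. This is only a sketch: the timeouts, hostnames, ACL matches and health-check URIs are placeholders I made up, and the weight/maxconn/check values are just copied from your mail:

defaults
   mode http
   timeout connect 5s
   timeout client  30s
   timeout server  30s
   # every "server" line in the backends below inherits these settings,
   # so they only have to be written once
   default-server weight 50 maxconn 1000 check inter 20s

frontend main
   bind ip:port                                   # as in your frontend
   acl client1 hdr(host) -i client1.example.com   # placeholder ACLs
   acl client2 hdr(host) -i client2.example.com
   use_backend client1 if client1
   use_backend client2 if client2

backend client1
   # client-specific health check; URI and Host header are placeholders
   option httpchk GET /healthcheck HTTP/1.1\r\nHost:\ client1.example.com
   server cache1 10.10.10.10:80
   server cache2 10.10.10.20:80

backend client2
   option httpchk GET /healthcheck HTTP/1.1\r\nHost:\ client2.example.com
   server cache1 10.10.10.10:80
   server cache2 10.10.10.20:80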


> backend client1
>    various options
>    option httpchk with customized domain/URL as health check target
>    custom ACLs
>    server cache1 10.10.10.10:80 weight 50 maxconn 1000 check inter 20s
>    server cache2 10.10.10.20:80 weight 50 maxconn 1000 check inter 20s
>
> backend client2
>    various options
>    option httpchk with customized domain/URL as health check target
>    custom ACLs
>    server cache1 10.10.10.10:80 weight 50 maxconn 1000 check inter 20s
>    server cache2 10.10.10.20:80 weight 50 maxconn 1000 check inter 20s


> As you can see each of the "cacheX" servers is used in multiple places, as
> these servers are also multi-tenant.

> The per-client backends are utilized both for providing custom ACLs, as
> well as providing a client-specific health check to each cache server, for
> example in the event that a given cache server is not yet seeded for a  
> given client.

> However, what is missing in this scenario is the ability to set  
> global/aggregate limits per cache server, so as to fine-tune the amount of
> active/queued connections across all backends to a given cache server.

> I'm sure I could shim this by creating a local "listen" block for each  
> cache server and using that IP/port instead of going direct to the server
> - but there are some significant drawbacks to that method:

> * additional logging (currently I am feeding the clientX backend logs to a
> parser)
> * disassociation of the "real" aggregate backend counters on the shim from
> the properly-named "clientX" backend counters
> * disassociation of the session state log portion from the shim to the  
> clientX backend log
> * using a separate shim for each cache server would be necessary to  
> preserve the health check status, yet this method wouldn't allow a request
> to be redistributed if maxconn/maxqueue has been exceeded, as the  
> connection would already have been made and a 503 issued.
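
Just to be sure I understand the shim you mean: something like one listen block per cache server, e.g. (bind address/port, health-check URI and the maxconn/maxqueue values are made up):

listen shim_cache1
   bind 127.0.0.1:8001
   mode http
   option httpchk GET /healthcheck
   # the aggregate limit for cache1 across all tenants would live here
   server cache1 10.10.10.10:80 maxconn 1000 maxqueue 100 check inter 20s

and the clientX backends would then point at 127.0.0.1:8001 instead of 10.10.10.10:80?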

> Any thoughts on how to best achieve the goal of being able to set proper
> maxconn/maxqueue limits when an individual server is used across multiple
> backends as in this scenario?

> Best Regards,
> Mark

-- 
Best Regards
Aleks

