On 21 July 2014 11:03, Szelcsányi Gábor <[email protected]> wrote:
> Thank you for looking into this. I cannot reproduce it with 1.5-dev24. If
> I set the bind-process option in the backend section too (same values as
> the frontend), the problem does not occur with 1.5.2. This could be a
> solution for me.
>
> In my case I didn't have to pin backends to a process.
>
> Regards,
> Gabor
>
>
> On Sun, Jul 20, 2014 at 7:34 PM, Pavlos Parissis <
> [email protected]> wrote:
>
>> On 18/07/2014 08:33 μμ, Szelcsányi Gábor wrote:
>> > Hi,
>> >
>> > I've been reading the documentation and searching the mailing list, but
>> > one thing is not clear to me. I have nbproc 2, two frontends each pinned
>> > to a separate CPU core, and one backend per frontend. The bind-process
>> > option of these backends is inherited from their parent frontend. So,
>> > are both processes supposed to health check the backend servers, or
>> > should only the designated process do that?
>> >
>> > example:
>> >
>> > nbproc 2
>> > cpu-map 1 0
>> > cpu-map 2 1
>> > ...
>> >
>> > frontend frn1
>> >     bind 10.0.0.10:80 process 1 name frn1
>> >     bind-process 1
>> >     ...
>> >     default_backend bck1
>> >
>> > frontend frn2
>> >     bind 10.0.0.10:81 process 2 name frn2
>> >     bind-process 2
>> >     ...
>> >     default_backend bck2
>> >
>> > backend bck1
>> >     option httpchk HEAD /healthcheck HTTP/1.1\r\n
>> >     ...
>> >     server srv1 10.0.0.1:80 maxconn 5000 weight 50 check inter 5s fall 2 rise 1 slowstart 15s
>> >     server srv2 10.0.0.2:80 maxconn 5000 weight 50 check inter 5s fall 2 rise 1 slowstart 15s
>> >
>> > backend bck2
>> >     option httpchk HEAD /healthcheck HTTP/1.1\r\n
>> >     ...
>> >     server srv3 10.0.0.3:80 maxconn 5000 weight 50 check inter 5s fall 2 rise 1 slowstart 15s
>> >     server srv4 10.0.0.4:80 maxconn 5000 weight 50 check inter 5s fall 2 rise 1 slowstart 15s
>> >
>> > So the question is: should both haproxy processes send health check
>> > queries to srv1 and srv2, or is only the first process designated to do
>> > this? In my setup I see traffic from both processes. If I set up 6 or
>> > more pinned frontends with different backends, the health checks can
>> > saturate the backend servers. I thought only the right process should
>> > check the status; the rest could never send traffic to those servers
>> > anyway. Am I wrong, or am I just missing something?
>> >
>> > I'm using 1.5.2 stable (released 2014/07/12).
>> > HA-Proxy version 1.5.2 2014/07/12
>> > Copyright 2000-2014 Willy Tarreau <[email protected]>
>> >
>> > Build options :
>> >   TARGET  = linux26
>> >   CPU     = generic
>> >   CC      = gcc
>> >   CFLAGS  = -O2 -g -fno-strict-aliasing
>> >   OPTIONS = USE_LINUX_SPLICE=1 USE_LINUX_TPROXY=1 USE_LIBCRYPT=1
>> > USE_GETADDRINFO=1 USE_ZLIB=1 USE_EPOLL=1 USE_CPU_AFFINITY=1
>> > USE_OPENSSL=1 USE_STATIC_PCRE=1 USE_TFO=1
>> >
>> >
>> > Regards,
>> > Gabor
>>
>>
>> I can't reproduce the behavior you describe. Below is the test conf I
>> used, where I set a different User-Agent for the health check on each
>> backend in order to make it easier to see whether process 2 sends checks
>> to foo-server1.
>>
>> nbproc 2
>> cpu-map 1 0
>> cpu-map 2 1
>>
>> frontend main
>>     bind *:80
>>     bind-process 1
>>     default_backend foo
>>
>> backend foo
>>     default-server inter 10s
>>     option httpchk GET / HTTP/1.1\r\nHost:\ foo.example.com\r\nUser-Agent:\ HAProxy
>>     server foo-server1 21.229.28.251:80 check
>>
>>
>> frontend main2
>>     bind *:81
>>     bind-process 2
>>     default_backend foo2
>>
>> backend foo2
>>     default-server inter 10s
>>     option httpchk GET / HTTP/1.1\r\nHost:\ foo.example.com\r\nUser-Agent:\ HAProxy2
>>     server foo-server2 20.229.28.252:80 check
>>
>>
>> # haproxy -vv
>> HA-Proxy version 1.5.2 2014/07/12
>> Copyright 2000-2014 Willy Tarreau <[email protected]>
>>
>> Build options :
>>   TARGET  = linux2628
>>   CPU     = generic
>>   CC      = gcc
>>   CFLAGS  =
>>   OPTIONS = USE_LINUX_TPROXY=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1
>> USE_PCRE=1
>>
>> Default settings :
>>   maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200
>>
>> Encrypted password support via crypt(3): yes
>> Built with zlib version : 1.2.3
>> Compression algorithms supported : identity, deflate, gzip
>> Built with OpenSSL version : OpenSSL 1.0.0-fips 29 Mar 2010
>> Running on OpenSSL version : OpenSSL 1.0.0-fips 29 Mar 2010
>> OpenSSL library supports TLS extensions : yes
>> OpenSSL library supports SNI : yes
>> OpenSSL library supports prefer-server-ciphers : yes
>> Built with PCRE version : 7.8 2008-09-05
>> PCRE library supports JIT : no (USE_PCRE_JIT not set)
>> Built with transparent proxy support using: IP_TRANSPARENT
>> IPV6_TRANSPARENT IP_FREEBIND
>>
>> Available polling systems :
>>       epoll : pref=300, test result OK
>>        poll : pref=200, test result OK
>>      select : pref=150, test result OK
>> Total: 3 (3 usable), will use epoll.
>>
>>
>> Cheers,
>> Pavlos
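
[Editor's note] The workaround Gabor describes at the top of the thread
(setting bind-process in each backend to the same value as its owning
frontend, so that only that process runs the health checks) would look
roughly like this against his example config. This is a sketch, not a
tested configuration; the addresses and proxy names are taken from his
original message:

    # Sketch: pin each backend to the process of its frontend, so only
    # that process performs the health checks for the backend's servers.
    backend bck1
        bind-process 1    # same value as "bind-process 1" in frontend frn1
        option httpchk HEAD /healthcheck HTTP/1.1\r\n
        server srv1 10.0.0.1:80 maxconn 5000 weight 50 check inter 5s fall 2 rise 1 slowstart 15s
        server srv2 10.0.0.2:80 maxconn 5000 weight 50 check inter 5s fall 2 rise 1 slowstart 15s

    backend bck2
        bind-process 2    # same value as "bind-process 2" in frontend frn2
        option httpchk HEAD /healthcheck HTTP/1.1\r\n
        server srv3 10.0.0.3:80 maxconn 5000 weight 50 check inter 5s fall 2 rise 1 slowstart 15s
        server srv4 10.0.0.4:80 maxconn 5000 weight 50 check inter 5s fall 2 rise 1 slowstart 15s

With this in place each backend exists only in its designated process,
so srv1/srv2 should receive checks from process 1 only and srv3/srv4
from process 2 only, rather than from every process.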

