Hi, we have now upgraded to FreeBSD 9.3-RELEASE-p9 and HAProxy to version 1.5.11. The statistics for outgoing bytes to the backend are still not being updated (they stay at 0). (Screenshot: http://puu.sh/g0WQ4/7365665bf7.png) Is there any advice on how we can get them working again? When we deployed these servers about 6 months ago on FreeBSD 9.2 together with HAProxy 1.4 everything worked fine, so I suspect this is an HAProxy or FreeBSD issue.
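To rule out a rendering problem in the stats web page, the raw counters can also be read from the admin socket configured further down in this thread (`stats socket /var/run/haproxy.sock level admin`). A minimal sketch, assuming `socat` is installed and that socket path; the sample CSV line is purely hypothetical:

```shell
# Dump raw statistics from the admin socket (requires socat; the socket
# path matches the 'stats socket' line in haproxy.conf):
#   echo "show stat" | socat stdio /var/run/haproxy.sock
#
# 'show stat' emits CSV; field 1 is pxname, field 2 is svname, and
# field 10 is bout (bytes out). Parsing a hypothetical sample line:
sample='KAFKA_BACKEND,KAFKA_PRIMARY,0,0,0,1,,42,67890,12345'
echo "$sample" | awk -F, '{ print $1 "/" $2 " bout=" $10 }'
```

If `bout` grows on the socket but stays 0 in the web page, the counters themselves are fine and only the stats page is at fault.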
Best regards,
Tobi

On Tue, Feb 10, 2015 at 10:56 AM, Tobias Feldhaus <[email protected]> wrote:
>
> On Thu, Feb 5, 2015 at 9:38 PM, Pavlos Parissis <[email protected]> wrote:
>
>> On 04/02/2015 11:38 a.m., Tobias Feldhaus wrote:
>> > Hi,
>> >
>> > Refreshing the page did not help (the number of seconds the PRIMARY
>> > backend was considered "down" kept increasing, but neither the byte
>> > counters nor the color changed).
>> >
>> > [deploy@haproxy-tracker-one /var/log] /usr/local/sbin/haproxy -vv
>> > HA-Proxy version 1.5.10 2014/12/31
>> > Copyright 2000-2014 Willy Tarreau <[email protected]>
>> >
>> > Build options :
>> >   TARGET  = freebsd
>> >   CPU     = generic
>> >   CC      = cc
>> >   CFLAGS  = -O2 -pipe -fstack-protector -fno-strict-aliasing -DFREEBSD_PORTS
>> >   OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_STATIC_PCRE=1 USE_PCRE_JIT=1
>> >
>> > Default settings :
>> >   maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200
>> >
>> > Encrypted password support via crypt(3): yes
>> > Built with zlib version : 1.2.8
>> > Compression algorithms supported : identity, deflate, gzip
>> > Built with OpenSSL version : OpenSSL 0.9.8za-freebsd 5 Jun 2014
>> > Running on OpenSSL version : OpenSSL 0.9.8za-freebsd 5 Jun 2014
>> > OpenSSL library supports TLS extensions : yes
>> > OpenSSL library supports SNI : yes
>> > OpenSSL library supports prefer-server-ciphers : yes
>> > Built with PCRE version : 8.35 2014-04-04
>> > PCRE library supports JIT : yes
>> > Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY
>> >
>> > Available polling systems :
>> >     kqueue : pref=300,  test result OK
>> >       poll : pref=200,  test result OK
>> >     select : pref=150,  test result OK
>> > Total: 3 (3 usable), will use kqueue.
>> >
>> > ----- haproxy.conf -----
>> >
>> > global
>> >     daemon
>> >     stats socket /var/run/haproxy.sock level admin
>> >     log /var/run/log local0 notice
>> >
>> > defaults
>> >     mode http
>> >     stats enable
>> >     stats hide-version
>> >     stats uri /lbstats
>> >     log global
>> >
>> > frontend LBSTATS *:8888
>> >     mode http
>> >
>> > frontend KAFKA *:8090
>> >     mode tcp
>> >     default_backend KAFKA_BACKEND
>> >
>> > backend KAFKA_BACKEND
>> >     mode tcp
>> >     log global
>> >     option tcplog
>> >     option dontlog-normal
>> >     option httpchk GET /
>>
>> httpchk in tcp mode? Have you managed to load HAProxy with this setting
>> without getting an error like
>> [ALERT] 035/213450 (17326) : Unable to use proxy 'foo_com' with wrong
>> mode, required: http, has: tcp.
>> [ALERT] 035/213450 (17326) : You may want to use 'mode http'.
>
> The Kafka v0.6 service speaks only TCP and does not allow HAProxy to
> check it directly. (HAProxy does not check whether data is _really_
> flowing through the sockets, i.e. it does not speak the Kafka
> protocol.) This is why we run a local app on the machine that checks
> Kafka's functionality and reports it to HAProxy on port 9093. Is there
> a better way of doing this?
>
>> >     server KAFKA_PRIMARY kafka-primary.acc:9092 check port 9093 inter 2000 rise 302400 fall 5
>>
>> rise 302400!! Are you sure? HAProxy will have to wait 302400 * 2 seconds
>> before it detects the server up
>
> This is intended: we fail over only in very rare cases and want to
> avoid 'flickering' between the two systems at all costs.
>
>> >     server KAFKA_SECONDARY kafka-overflow.acc:9092 check port 9093 inter 2000 rise 2 fall 5 backup
>>
>> I can't reproduce your problem even when I use your server settings,
>> but in http mode for the backend.
>>
>> Cheers,
>> Pavlos

> Best regards,
> Tobi
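For reference, the local checker described in the thread could be as small as a shell loop. This is only a sketch under stated assumptions (`nc` is available, and a plain TCP connect to port 9092 counts as "healthy"); `kafka_is_healthy` is a placeholder for whatever the real app does, which presumably speaks the Kafka protocol:

```shell
#!/bin/sh
# Sketch of a local health responder: answer HAProxy's
# 'option httpchk GET /' probe on port 9093 with a 200 only while
# Kafka looks healthy. kafka_is_healthy is a hypothetical stand-in;
# the real checker verifies Kafka at the protocol level, not just
# with a TCP connect.
kafka_is_healthy() {
    nc -z -w 2 kafka-primary.acc 9092
}

while true; do
    if kafka_is_healthy; then
        # serve one check request, then loop
        printf 'HTTP/1.0 200 OK\r\n\r\n' | nc -l 9093
    else
        sleep 2
    fi
done
```

While the check port answers with a 200 status line, HAProxy counts the probe as passed; when the responder stops listening, the probe fails and `fall` takes effect.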

