Re: Haproxy 1.7.10 and 1.8.6 periodically sigfault
Sorry, but I have another issue with HA-Proxy version 1.8.12 2018/06/27. After setting a weight I get these errors:

Jul 2 10:47:15 v54-haproxy-site-1 kernel: traps: haproxy[25876] general protection ip:7f2401d621ad sp:7ffd7e0f7d00 error:0 in libc-2.17.so[7f2401d29000+1c3000]
Jul 2 10:47:15 v54-haproxy-site-1 haproxy: [ALERT] 182/071650 (25875) : Current worker 25876 exited with code 139
Jul 2 10:47:15 v54-haproxy-site-1 haproxy: [ALERT] 182/071650 (25875) : exit-on-failure: killing every workers with SIGTERM
Jul 2 10:47:15 v54-haproxy-site-1 haproxy: [WARNING] 182/071650 (25875) : All workers exited. Exiting... (139)
Jul 2 10:47:15 v54-haproxy-site-1 systemd: haproxy.service: main process exited, code=exited, status=139/n/a
Jul 2 10:47:15 v54-haproxy-site-1 systemd: Unit haproxy.service entered failed state.
Jul 2 10:47:15 v54-haproxy-site-1 systemd: haproxy.service failed.
Jul 2 10:47:15 v54-haproxy-site-1 systemd: haproxy.service holdoff time over, scheduling restart.

The weight was set after the update to 1.8.12:

set server site-api/hz30 weight 10

Best regards,
Alexey Gordeev

On Sat, Jun 30, 2018 at 1:55 PM, Aleksey Gordeev wrote:
> We have 2 haproxy servers (1.8.9 + 1.7.10) with the same traffic, an API
> for a mobile application (different domains - one service).
>
> Today both servers segfaulted at the same time. I am having difficulties
> creating a dump and need some time to figure out how to do it. The strange
> thing is that they restarted at the same moment, so I think the problem is
> in the traffic. I understand that you can't help me without a dump. I will
> try to spend some more time on creating one, and will also install the
> latest versions.
> > server 1 (1.7) > > HA-Proxy version 1.7.10 2018/01/02 > > Jun 30 05:48:52 hz20 kernel: haproxy[13965]: segfault at 1957ff6 ip > 7f9abaa56dfd sp 7ffe9a1efdc8 error 4 in libc-2.17.so[7f9aba905000+ > 1b8000] > Jun 30 05:48:52 hz20 haproxy-systemd-wrapper: haproxy-systemd-wrapper: > exit, haproxy RC=0 > Jun 30 05:48:52 hz20 systemd: haproxy-quizzland.service holdoff time over, > scheduling restart. > Jun 30 05:48:52 hz20 systemd: Starting HAProxy Load Balancer... > Jun 30 05:48:52 hz20 systemd: Started HAProxy Load Balancer. > Jun 30 05:49:05 hz20 kernel: haproxy[13377]: segfault at 147cff5 ip > 7fa5ca207df3 sp 7ffc53e59c08 error 4 in libc-2.17.so[7fa5ca0b6000+ > 1b8000] > Jun 30 05:49:05 hz20 haproxy-systemd-wrapper: haproxy-systemd-wrapper: > exit, haproxy RC=0 > Jun 30 05:49:06 hz20 systemd: haproxy-quizzland.service holdoff time over, > scheduling restart. > Jun 30 05:49:06 hz20 systemd: Starting HAProxy Load Balancer... > Jun 30 05:49:06 hz20 systemd: Started HAProxy Load Balancer. > Jun 30 05:49:07 hz20 kernel: TCP: request_sock_TCP: Possible SYN flooding > on port 443. Sending cookies. Check SNMP counters. > Jun 30 05:49:09 hz20 kernel: haproxy[13452]: segfault at d8bff4 ip > 7ff61e717e11 sp 7ffccbeb8dd8 error 4 in libc-2.17.so[7ff61e5c6000+ > 1b8000] > Jun 30 05:49:09 hz20 haproxy-systemd-wrapper: haproxy-systemd-wrapper: > exit, haproxy RC=0 > Jun 30 05:49:10 hz20 systemd: haproxy-quizzland.service holdoff time over, > scheduling restart. > Jun 30 05:49:10 hz20 systemd: Starting HAProxy Load Balancer... > Jun 30 05:49:10 hz20 systemd: Started HAProxy Load Balancer. > Jun 30 05:49:12 hz20 kernel: haproxy[13479]: segfault at 1720ff4 ip > 7f9d5b23edfd sp 7fff7510c118 error 4 in libc-2.17.so[7f9d5b0ed000+ > 1b8000] > > Haproxy 1.8 > > HA-Proxy version 1.8.9-2b5ef6-34 2018/06/11 > Copyright 2000-2018 Willy Tarreau > > Jun 30 05:40:01 v54-haproxy-quizzland-1 systemd: Stopping User Slice of > root. 
> Jun 30 05:49:19 v54-haproxy-quizzland-1 kernel: haproxy[17738]: segfault > at e71ff6 ip 7fc2997054e1 sp 7fff3912a718 error 4 in libc-2.17.so > [7fc2995aa000+1c3000] > Jun 30 05:49:19 v54-haproxy-quizzland-1 haproxy: [ALERT] 180/022828 > (17736) : Current worker 17738 exited with code 139 > Jun 30 05:49:19 v54-haproxy-quizzland-1 haproxy: [ALERT] 180/022828 > (17736) : exit-on-failure: killing every workers with SIGTERM > Jun 30 05:49:19 v54-haproxy-quizzland-1 haproxy: [WARNING] 180/022828 > (17736) : All workers exited. Exiting... (139) > Jun 30 05:49:19 v54-haproxy-quizzland-1 systemd: haproxy.service: main > process exited, code=exited, status=139/n/a > Jun 30 05:49:19 v54-haproxy-quizzland-1 systemd: Unit haproxy.service > entered failed state. > Jun 30 05:49:19 v54-haproxy-quizzland-1 systemd: haproxy.service failed. > Jun 30 05:49:19 v54-haproxy-quizzland-1 systemd: haproxy.service holdoff > time over, scheduling restart. > Jun 30 05:49:19 v54-haproxy-quizzland-1 systemd: Starting HAProxy Load > Balancer... > Jun 30 05:49:19 v54-haproxy-quizzland-1
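The crash above followed a `set server site-api/hz30 weight 10` issued on the runtime API. For reference, a minimal sketch of sending that command over the stats socket; the socket path and the use of socat are assumptions, so match whatever your `stats socket` line in the global section actually declares:

```shell
# Sketch: change a server's weight at runtime over the HAProxy stats socket.
# /var/run/haproxy.sock is an assumed path -- adjust to your "stats socket" setting.
SOCK="${HAPROXY_SOCK:-/var/run/haproxy.sock}"
CMD="set server site-api/hz30 weight 10"

if [ -S "$SOCK" ]; then
    # socat bridges stdin/stdout to the UNIX socket
    echo "$CMD" | socat stdio "unix-connect:$SOCK"
    # read the weight back to confirm the change
    echo "get weight site-api/hz30" | socat stdio "unix-connect:$SOCK"
else
    echo "no admin socket at $SOCK" >&2
fi
```

The socket must be declared with `level admin` for `set server` to be accepted; a read-only socket will reject the command.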
Re: Haproxy 1.7.10 and 1.8.6 periodically sigfault
: [WARNING] 180/054932 (1489) : All workers exited. Exiting... (139)
Jun 30 05:49:34 v54-haproxy-quizzland-1 systemd: haproxy.service: main process exited, code=exited, status=139/n/a
Jun 30 05:49:34 v54-haproxy-quizzland-1 systemd: Unit haproxy.service entered failed state.
Jun 30 05:49:34 v54-haproxy-quizzland-1 systemd: haproxy.service failed.

Best regards,
Alexey Gordeev

On Tue, Jun 26, 2018 at 10:53 PM, Willy Tarreau wrote:
> Hello Aleksey,
>
> On Tue, Jun 26, 2018 at 04:27:04PM +0300, Aleksey Gordeev wrote:
> > Hello, I have this fault again:
> >
> > Jun 26 09:08:51 v54-haproxy-quizzland-1 kernel: TCP: request_sock_TCP: Possible SYN flooding on port 443. Sending cookies. Check SNMP counters.
> > Jun 26 09:09:31 v54-haproxy-quizzland-1 kernel: haproxy[1016]: segfault at df7ff6 ip 7fec6d1694e6 sp 7ffc9d9c5888 error 4 in libc-2.17.so[7fec6d00e000+1c3000]
> > Jun 26 09:09:31 v54-haproxy-quizzland-1 haproxy: [ALERT] 172/023009 (1014) : Current worker 1016 exited with code 139
> > Jun 26 09:09:31 v54-haproxy-quizzland-1 haproxy: [ALERT] 172/023009 (1014) : exit-on-failure: killing every workers with SIGTERM
> > Jun 26 09:09:31 v54-haproxy-quizzland-1 haproxy: [WARNING] 172/023009 (1014) : All workers exited. Exiting... (139)
> > Jun 26 09:09:31 v54-haproxy-quizzland-1 systemd: haproxy.service: main process exited, code=exited, status=139/n/a
> > Jun 26 09:09:31 v54-haproxy-quizzland-1 systemd: Unit haproxy.service entered failed state.
> > Jun 26 09:09:31 v54-haproxy-quizzland-1 systemd: haproxy.service failed.
> >
> > It's strange, but CentOS didn't create a dump. I will try to find the reason.
>
> The dump is intercepted by all their crap like an "abrt" service and
> stuff like this :-( Good luck catching it! Last time I tried, it took
> me no less than a full afternoon to figure out where they were
> sequestrating it.
>
> However, have you checked the updates first instead of wasting your time?
> Your versions have 1 critical and 7 major bugs for 1.8.6, and 1 major bug > for 1.7.10. Both of them are affected by a crash when trying to read from > a closed socket, so it could be one candidate. Please at least update to > the latest version to avoid this : > > http://www.haproxy.org/bugs/bugs-1.8.6.html > http://www.haproxy.org/bugs/bugs-1.7.10.html > > Cheers, > Willy >
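Regarding the intercepted dumps Willy mentions: on CentOS 7 the usual culprit is abrt rewriting `kernel.core_pattern` so that cores are piped into its own spool instead of written to disk. A quick, read-only sketch of how to check this; the abrt spool path mentioned in the comment is the stock CentOS default, so an assumption for any given box:

```shell
# Check where the kernel sends core dumps. A leading "|" means a helper
# process (abrt's hook on CentOS) intercepts them instead of writing a file;
# intercepted crashes usually end up under /var/spool/abrt.
PATTERN=$(cat /proc/sys/kernel/core_pattern)
case "$PATTERN" in
    \|*) echo "cores are piped to a helper: $PATTERN" ;;
    *)   echo "cores land as plain files:   $PATTERN" ;;
esac

# Core size limit for this shell; 0 means no core is written at all.
ulimit -c
```

If the helper is in the way, pointing `kernel.core_pattern` at a plain path (e.g. `sysctl -w kernel.core_pattern=/var/crash/core.%e.%p`) makes the kernel write ordinary core files again.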
Haproxy 1.7.10 and 1.8.6 periodically sigfault
Once a week it restarts with a sigfault:

Jun 21 17:04:29 v54 kernel: haproxy[303]: segfault at 2670ff8 ip 7f0c375824e1 sp 7ffd2f8d7528 error 4 in libc-2.17.so[7f0c37427000+1c3000]
Jun 21 17:04:29 v54 haproxy: [ALERT] 169/070052 (302) : Current worker 303 exited with code 139
Jun 21 17:04:29 v54 haproxy: [ALERT] 169/070052 (302) : exit-on-failure: killing every workers with SIGTERM
Jun 21 17:04:29 v54 haproxy: [WARNING] 169/070052 (302) : All workers exited. Exiting... (139)

It's difficult to create a dump because it's a rare case. We use 5 instances of haproxy, and only this one fails. The main difference is that it handles huge JSON payloads: the other hosts serve web sites, this one serves an API.

Best regards,
Alexey Gordeev

haproxy-global.cfg
Description: Binary data

haproxy.cfg
Description: Binary data
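Since the crash is rare, it may be worth enabling core dumps ahead of time so the next occurrence is captured. A sketch of a systemd drop-in; the unit name `haproxy.service` matches the logs above, and the drop-in path follows the standard systemd override convention:

```ini
# /etc/systemd/system/haproxy.service.d/coredump.conf
# Allow the haproxy workers to write core files of unlimited size.
[Service]
LimitCORE=infinity
```

After `systemctl daemon-reload && systemctl restart haproxy`, a core is still only written if `kernel.core_pattern` points somewhere writable rather than at an interceptor like abrt.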
Re: Haproxy 1.7.10 constantly restarting
Thank you for the answer, and sorry for the stupid question. Found it: I forgot about letsencrypt. haproxy restarts when it renews certificates.

Best regards,
Alexey Gordeev

On Mon, Mar 12, 2018 at 12:23 AM, Vincent Bernat <ber...@luffy.cx> wrote:
> ❦ 11 March 2018 07:19 -0400, Aleksey Gordeev <gord...@thyn.ru> :
>
> > I'm sorry if this question is not suitable. Please point me to the
> > correct channel to contact.
> >
> > It started about a month ago. I have separate instances of the same
> > haproxy version. One of them restarts every 2 or 3 days.
> >
> > I have only this in the log:
> >
> > Mar 11 06:43:21 systemd[1]: Stopping HAProxy Load Balancer...
> > Mar 11 06:43:21 haproxy-systemd-wrapper[10939]: haproxy-systemd-wrapper: SIGTERM -> 10942.
> > Mar 11 06:43:21 haproxy-systemd-wrapper[10939]: haproxy-systemd-wrapper: exit, haproxy RC=0
> > Mar 11 06:43:21 systemd[1]: Starting HAProxy Load Balancer...
> > Mar 11 06:43:21 systemd[1]: Started HAProxy Load Balancer.
> > Mar 11 06:43:21 haproxy-systemd-wrapper[19642]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy
>
> This seems to happen because something is just restarting haproxy. Maybe
> logrotate? "rgrep haproxy /etc" may give a clue.
> --
> Write clearly - don't be too clever.
>     - The Elements of Programming Style (Kernighan & Plauger)
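Vincent's `rgrep haproxy /etc` is the generic form of the search; here the trigger turned out to be the letsencrypt renewal. A slightly wider hedged sweep over the usual restart triggers; the directory names are common defaults and are not guaranteed to exist on every system:

```shell
# Look under common trigger locations for anything that mentions haproxy:
# cron jobs, logrotate rules, and letsencrypt/certbot renewal hooks.
HITS=$(grep -rl haproxy /etc/cron.d /etc/cron.daily /etc/logrotate.d \
       /etc/letsencrypt 2>/dev/null || true)
echo "${HITS:-nothing in the scanned directories mentions haproxy}"
```

Cross-checking `journalctl -u haproxy` around the restart timestamps also shows whether the stop was an orderly external SIGTERM, as in the log above, or a crash.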
Haproxy 1.7.10 constantly restarting
I'm sorry if this question is not suitable. Please point me to the correct channel to contact.

It started about a month ago. I have separate instances of the same haproxy version. One of them restarts every 2 or 3 days.

I have only this in the log:

Mar 11 06:43:21 systemd[1]: Stopping HAProxy Load Balancer...
Mar 11 06:43:21 haproxy-systemd-wrapper[10939]: haproxy-systemd-wrapper: SIGTERM -> 10942.
Mar 11 06:43:21 haproxy-systemd-wrapper[10939]: haproxy-systemd-wrapper: exit, haproxy RC=0
Mar 11 06:43:21 systemd[1]: Starting HAProxy Load Balancer...
Mar 11 06:43:21 systemd[1]: Started HAProxy Load Balancer.
Mar 11 06:43:21 haproxy-systemd-wrapper[19642]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy

I'm not very good with Linux, so what additional info can I provide?

cat /proc/sys/net/ipv4/ip_local_port_range
1024 65535

ss -s
Total: 4535 (kernel 5526)
TCP: 7563 (estab 4341, closed 2934, orphaned 227, synrecv 0, timewait 2931/0), ports 0

Transport Total IP IPv6
*         5526  -  -
RAW       0     0  0
UDP       10    8  2
TCP       4629  4627 2
INET      4639  4635 4
FRAG      0     0  0

haproxy -vv
HA-Proxy version 1.7.10 2018/01/02
Copyright 2000-2018 Willy Tarreau

Build options :
  TARGET = linux2628
  CPU = generic
  CC = gcc
  CFLAGS = -O2 -g -fno-strict-aliasing -DTCP_USER_TIMEOUT=18
  OPTIONS = USE_LINUX_TPROXY=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_PCRE=1 USE_PCRE_JIT=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.2n 7 Dec 2017
Running on OpenSSL version : OpenSSL 1.0.2n 7 Dec 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : yes
Built without Lua support
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
      epoll : pref=300, test result OK
       poll : pref=200, test result OK
     select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
  [COMP] compression
  [TRACE] trace
  [SPOE] spoe

Best regards,
Gordeev A.D.

haproxy-main1
Description: Binary data

haproxy-global.cfg
Description: Binary data
Re: SOLVED! (Was: 400 error on cookie string)
Hi, I'm "cas"; my name is Aleksey Gordeev. I was using my company email. Please set me as the reporter.

2017-01-05 22:17 GMT+03:00 Willy Tarreau <w...@1wt.eu>:
> Small update on this: Axel Reinhold faced an apparently different
> issue on an SVN server until we noticed the requests were sent in
> small chunks cut before the CRLF, experiencing the same problem.
> He could confirm the patch fixes the problem for him as well, so
> I'm going to merge the patch now.
>
> "cas", if you want to be credited as the reporter of the issue, you
> need to raise your hand very quickly now, because once the patch is
> merged it will be too late.
>
> Best regards,
> Willy