Sorry, but I have another issue.

HA-Proxy version 1.8.12 2018/06/27
After setting a server weight I get these errors:

Jul  2 10:47:15 v54-haproxy-site-1 kernel: traps: haproxy[25876] general protection ip:7f2401d621ad sp:7ffd7e0f7d00 error:0 in libc-2.17.so[7f2401d29000+1c3000]
Jul  2 10:47:15 v54-haproxy-site-1 haproxy: [ALERT] 182/071650 (25875) : Current worker 25876 exited with code 139
Jul  2 10:47:15 v54-haproxy-site-1 haproxy: [ALERT] 182/071650 (25875) : exit-on-failure: killing every workers with SIGTERM
Jul  2 10:47:15 v54-haproxy-site-1 haproxy: [WARNING] 182/071650 (25875) : All workers exited. Exiting... (139)
Jul  2 10:47:15 v54-haproxy-site-1 systemd: haproxy.service: main process exited, code=exited, status=139/n/a
Jul  2 10:47:15 v54-haproxy-site-1 systemd: Unit haproxy.service entered failed state.
Jul  2 10:47:15 v54-haproxy-site-1 systemd: haproxy.service failed.
Jul  2 10:47:15 v54-haproxy-site-1 systemd: haproxy.service holdoff time over, scheduling restart.

The weight was set with this command, after updating to 1.8.12:

set server site-api/hz30 weight 10
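
For reference, the command is sent to the runtime API over the stats socket with socat, roughly like this (the socket path here is only an example; the real path is whatever the "stats socket" line in the global section defines):

    # send the weight change (socket path is illustrative)
    echo "set server site-api/hz30 weight 10" | socat stdio unix-connect:/var/run/haproxy.sock

    # read back the server state to confirm the new weight was applied
    echo "show servers state site-api" | socat stdio unix-connect:/var/run/haproxy.sock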





Best regards, Alexey Gordeev

On Sat, Jun 30, 2018 at 1:55 PM, Aleksey Gordeev <gordeev...@gmail.com>
wrote:

> We have two HAProxy servers (1.8.9 and 1.7.10) carrying the same traffic: an
> API for a mobile application (different domains, one service).
>
> Today both servers segfaulted at the same time. I am having difficulties
> creating a core dump and need some time to figure out how to do it. The
> strange thing is that they restarted at the same moment, so I think the
> problem is in the traffic. I understand that you can't help me without a
> dump; I will try to spend more time on getting one, and also install the
> latest versions.
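>
> To get a usable core the next time this happens, this is roughly what I plan
> to try (a sketch assuming CentOS 7 defaults and the stock haproxy.service
> unit; the paths and the service name may differ on our machines):
>
>     # allow the haproxy service to write unlimited-size core files
>     mkdir -p /etc/systemd/system/haproxy.service.d
>     printf '[Service]\nLimitCORE=infinity\n' > /etc/systemd/system/haproxy.service.d/coredump.conf
>
>     # write cores to a known directory instead of piping them to abrt,
>     # and allow dumps from a process that dropped privileges
>     mkdir -p /var/crash
>     sysctl -w kernel.core_pattern=/var/crash/core.%e.%p
>     sysctl -w fs.suid_dumpable=2
>
>     # note: if haproxy runs chrooted, the dump directory may need to
>     # exist inside the chroot as well
>     systemctl daemon-reload
>     systemctl restart haproxy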
>
> Server 1 (1.7):
>
> HA-Proxy version 1.7.10 2018/01/02
>
> Jun 30 05:48:52 hz20 kernel: haproxy[13965]: segfault at 1957ff6 ip
> 00007f9abaa56dfd sp 00007ffe9a1efdc8 error 4 in libc-2.17.so[7f9aba905000+
> 1b8000]
> Jun 30 05:48:52 hz20 haproxy-systemd-wrapper: haproxy-systemd-wrapper:
> exit, haproxy RC=0
> Jun 30 05:48:52 hz20 systemd: haproxy-quizzland.service holdoff time over,
> scheduling restart.
> Jun 30 05:48:52 hz20 systemd: Starting HAProxy Load Balancer...
> Jun 30 05:48:52 hz20 systemd: Started HAProxy Load Balancer.
> Jun 30 05:49:05 hz20 kernel: haproxy[13377]: segfault at 147cff5 ip
> 00007fa5ca207df3 sp 00007ffc53e59c08 error 4 in libc-2.17.so[7fa5ca0b6000+
> 1b8000]
> Jun 30 05:49:05 hz20 haproxy-systemd-wrapper: haproxy-systemd-wrapper:
> exit, haproxy RC=0
> Jun 30 05:49:06 hz20 systemd: haproxy-quizzland.service holdoff time over,
> scheduling restart.
> Jun 30 05:49:06 hz20 systemd: Starting HAProxy Load Balancer...
> Jun 30 05:49:06 hz20 systemd: Started HAProxy Load Balancer.
> Jun 30 05:49:07 hz20 kernel: TCP: request_sock_TCP: Possible SYN flooding
> on port 443. Sending cookies.  Check SNMP counters.
> Jun 30 05:49:09 hz20 kernel: haproxy[13452]: segfault at d8bff4 ip
> 00007ff61e717e11 sp 00007ffccbeb8dd8 error 4 in libc-2.17.so[7ff61e5c6000+
> 1b8000]
> Jun 30 05:49:09 hz20 haproxy-systemd-wrapper: haproxy-systemd-wrapper:
> exit, haproxy RC=0
> Jun 30 05:49:10 hz20 systemd: haproxy-quizzland.service holdoff time over,
> scheduling restart.
> Jun 30 05:49:10 hz20 systemd: Starting HAProxy Load Balancer...
> Jun 30 05:49:10 hz20 systemd: Started HAProxy Load Balancer.
> Jun 30 05:49:12 hz20 kernel: haproxy[13479]: segfault at 1720ff4 ip
> 00007f9d5b23edfd sp 00007fff7510c118 error 4 in libc-2.17.so[7f9d5b0ed000+
> 1b8000]
>
> Server 2 (1.8):
>
> HA-Proxy version 1.8.9-2b5ef6-34 2018/06/11
> Copyright 2000-2018 Willy Tarreau <wi...@haproxy.org>
>
> Jun 30 05:40:01 v54-haproxy-quizzland-1 systemd: Stopping User Slice of
> root.
> Jun 30 05:49:19 v54-haproxy-quizzland-1 kernel: haproxy[17738]: segfault
> at e71ff6 ip 00007fc2997054e1 sp 00007fff3912a718 error 4 in libc-2.17.so
> [7fc2995aa000+1c3000]
> Jun 30 05:49:19 v54-haproxy-quizzland-1 haproxy: [ALERT] 180/022828
> (17736) : Current worker 17738 exited with code 139
> Jun 30 05:49:19 v54-haproxy-quizzland-1 haproxy: [ALERT] 180/022828
> (17736) : exit-on-failure: killing every workers with SIGTERM
> Jun 30 05:49:19 v54-haproxy-quizzland-1 haproxy: [WARNING] 180/022828
> (17736) : All workers exited. Exiting... (139)
> Jun 30 05:49:19 v54-haproxy-quizzland-1 systemd: haproxy.service: main
> process exited, code=exited, status=139/n/a
> Jun 30 05:49:19 v54-haproxy-quizzland-1 systemd: Unit haproxy.service
> entered failed state.
> Jun 30 05:49:19 v54-haproxy-quizzland-1 systemd: haproxy.service failed.
> Jun 30 05:49:19 v54-haproxy-quizzland-1 systemd: haproxy.service holdoff
> time over, scheduling restart.
> Jun 30 05:49:19 v54-haproxy-quizzland-1 systemd: Starting HAProxy Load
> Balancer...
> Jun 30 05:49:19 v54-haproxy-quizzland-1 systemd: Started HAProxy Load
> Balancer.
> Jun 30 05:49:32 v54-haproxy-quizzland-1 kernel: haproxy[1439]: segfault at
> 14ddff6 ip 00007f96fc38e4e6 sp 00007ffcc6d89b78 error 4 in libc-2.17.so
> [7f96fc233000+1c3000]
> Jun 30 05:49:32 v54-haproxy-quizzland-1 haproxy: [ALERT] 180/054919 (1436)
> : Current worker 1439 exited with code 139
> Jun 30 05:49:32 v54-haproxy-quizzland-1 haproxy: [ALERT] 180/054919 (1436)
> : exit-on-failure: killing every workers with SIGTERM
> Jun 30 05:49:32 v54-haproxy-quizzland-1 haproxy: [WARNING] 180/054919
> (1436) : All workers exited. Exiting... (139)
> Jun 30 05:49:32 v54-haproxy-quizzland-1 systemd: haproxy.service: main
> process exited, code=exited, status=139/n/a
> Jun 30 05:49:32 v54-haproxy-quizzland-1 systemd: Unit haproxy.service
> entered failed state.
> Jun 30 05:49:32 v54-haproxy-quizzland-1 systemd: haproxy.service failed.
> Jun 30 05:49:32 v54-haproxy-quizzland-1 systemd: haproxy.service holdoff
> time over, scheduling restart.
> Jun 30 05:49:32 v54-haproxy-quizzland-1 systemd: Starting HAProxy Load
> Balancer...
> Jun 30 05:49:32 v54-haproxy-quizzland-1 systemd: Started HAProxy Load
> Balancer.
> Jun 30 05:49:34 v54-haproxy-quizzland-1 kernel: haproxy[1490]: segfault at
> 1ff6ff4 ip 00007fd1d55594d7 sp 00007ffc24363578 error 4 in libc-2.17.so
> [7fd1d53fe000+1c3000]
> Jun 30 05:49:34 v54-haproxy-quizzland-1 haproxy: [ALERT] 180/054932 (1489)
> : Current worker 1490 exited with code 139
> Jun 30 05:49:34 v54-haproxy-quizzland-1 haproxy: [ALERT] 180/054932 (1489)
> : exit-on-failure: killing every workers with SIGTERM
> Jun 30 05:49:34 v54-haproxy-quizzland-1 haproxy: [WARNING] 180/054932
> (1489) : All workers exited. Exiting... (139)
> Jun 30 05:49:34 v54-haproxy-quizzland-1 systemd: haproxy.service: main
> process exited, code=exited, status=139/n/a
> Jun 30 05:49:34 v54-haproxy-quizzland-1 systemd: Unit haproxy.service
> entered failed state.
> Jun 30 05:49:34 v54-haproxy-quizzland-1 systemd: haproxy.service failed.
>
> Best regards, Alexey Gordeev
>
> On Tue, Jun 26, 2018 at 10:53 PM, Willy Tarreau <w...@1wt.eu> wrote:
>
>> Hello Aleksey,
>>
>> On Tue, Jun 26, 2018 at 04:27:04PM +0300, Aleksey Gordeev wrote:
>> > Hello, I have this fault again:
>> >
>> > Jun 26 09:08:51 v54-haproxy-quizzland-1 kernel: TCP: request_sock_TCP: Possible SYN flooding on port 443. Sending cookies.  Check SNMP counters.
>> > Jun 26 09:09:31 v54-haproxy-quizzland-1 kernel: haproxy[1016]: segfault at df7ff6 ip 00007fec6d1694e6 sp 00007ffc9d9c5888 error 4 in libc-2.17.so[7fec6d00e000+1c3000]
>> > Jun 26 09:09:31 v54-haproxy-quizzland-1 haproxy: [ALERT] 172/023009 (1014) : Current worker 1016 exited with code 139
>> > Jun 26 09:09:31 v54-haproxy-quizzland-1 haproxy: [ALERT] 172/023009 (1014) : exit-on-failure: killing every workers with SIGTERM
>> > Jun 26 09:09:31 v54-haproxy-quizzland-1 haproxy: [WARNING] 172/023009 (1014) : All workers exited. Exiting... (139)
>> > Jun 26 09:09:31 v54-haproxy-quizzland-1 systemd: haproxy.service: main process exited, code=exited, status=139/n/a
>> > Jun 26 09:09:31 v54-haproxy-quizzland-1 systemd: Unit haproxy.service entered failed state.
>> > Jun 26 09:09:31 v54-haproxy-quizzland-1 systemd: haproxy.service failed.
>> >
>> > It's strange, but CentOS didn't create a dump. I will try to find the
>> > reason.
>>
>> The dump is intercepted by all their crap like an "abrt" service and
>> stuff like this :-(  Good luck catching it! Last time I tried, it took
>> me no less than a full afternoon to figure out where they were
>> sequestering it.
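>>
>> For what it's worth, on a stock CentOS 7 box you can at least check whether
>> abrt is the one grabbing the cores, and peek at its usual dump location
>> (the paths below are the common defaults, not verified on your machines):
>>
>>     cat /proc/sys/kernel/core_pattern   # a pipe to abrt-hook-ccpp means abrt intercepts cores
>>     ls /var/spool/abrt/                 # abrt's default dump directory
>>     abrt-cli list                       # if the abrt-cli package is installed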
>>
>> However, have you checked the updates first instead of wasting your time?
>> Your versions have 1 critical and 7 major bugs for 1.8.6, and 1 major bug
>> for 1.7.10. Both of them are affected by a crash when trying to read from
>> a closed socket, so it could be one candidate. Please at least update to
>> the latest version to avoid this:
>>
>>     http://www.haproxy.org/bugs/bugs-1.8.6.html
>>     http://www.haproxy.org/bugs/bugs-1.7.10.html
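>>
>> It's also worth double-checking that the binary you end up running really
>> is the new one after the upgrade (plain commands, nothing assumed here
>> beyond -vv):
>>
>>     haproxy -vv | head -n 3     # version and build options of the installed binary
>>     systemctl status haproxy    # check the service was restarted after the upgrade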
>>
>> Cheers,
>> Willy
>>
>
>

Attachment: haproxy.cfg
Description: Binary data

Attachment: haproxy-global.cfg
Description: Binary data
