Re: TCP mode and ultra short lived connection

2021-02-11 Thread Максим Куприянов
Thank you very much, Willy! Turning off abortonclose (it was enabled globally) for this particular section really helped :) -- Best regards, Maksim. On Tue, Feb 9, 2021 at 17:46, Willy Tarreau wrote: > Hi guys, > > > > I faced a problem dealing with an l4 (tcp mode) haproxy-based proxy over > > >
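
For reference, a minimal sketch of disabling abortonclose in one proxy section while it stays enabled in the defaults; the section name, port, and server address here are hypothetical:

    listen graphite_in
        mode tcp
        bind :2003
        no option abortonclose    # override the option inherited from defaults, for this section only
        server carbon1 10.0.0.1:2003 check

The "no option" prefix reverses an option inherited from a defaults section, which matches the fix described above.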

Re: TCP mode and ultra short lived connection

2021-02-08 Thread Максим Куприянов
delay to the client right after connection or after sending data - everything works as expected. Clients differ, so TC could possibly be the only option. But maybe there is a better way. On Tue, Feb 9, 2021 at 02:12, Lukas Tribus wrote: > Hello, > > On Mon, 8 Feb 2021 at 18:14, Максим Куприянов

Re: TCP mode and ultra short lived connection

2021-02-08 Thread Максим Куприянов
s/13049828/fin-vs-rst-in-tcp-connections > > RST is much better for short-lived connections. > > On Mon, Feb 8, 2021 at 22:17, Максим Куприянов wrote: > >> Hi! >> >> I faced a problem dealing with an l4 (tcp mode) haproxy-based proxy over >> Graphite's component

TCP mode and ultra short lived connection

2021-02-08 Thread Максим Куприянов
Hi! I faced a problem dealing with an l4 (tcp mode) haproxy-based proxy in front of Graphite's metrics-receiving component, with clients that connect just to send one or two Graphite metrics and disconnect right after. It looks like this: 1. Client connects to haproxy (SYN/SYN-ACK/ACK)

Re: HTTP/2 streams – how they're balanced?

2021-01-22 Thread Максим Куприянов
Thank you, Willy! I will give the new version a try if it could help :) On Fri, Jan 22, 2021 at 11:43, Willy Tarreau wrote: > Hi Maksim, > > On Thu, Jan 21, 2021 at 09:27:33PM +0300, Максим Куприянов wrote: > > Hi! > > > > Can anyone please explain or point out in the documentation how streams > in

Re: HTTP/2 streams – how they're balanced?

2021-01-21 Thread Максим Куприянов
:17355 proto h2 On Thu, Jan 21, 2021 at 21:27, Максим Куприянов wrote: > Hi! > > Can anyone please explain or point out in the documentation how streams in > an HTTP/2 connection are balanced? > > Right now I have haproxy=2.1.4 with an http/2 balancer configured and it > seems to me th

HTTP/2 streams – how they're balanced?

2021-01-21 Thread Максим Куприянов
Hi! Can anyone please explain or point out in the documentation how streams in an HTTP/2 connection are balanced? Right now I have haproxy=2.1.4 with an http/2 balancer configured, and it seems to me that gRPC requests over a single connection from a client are always forwarded to the same backend
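
A minimal sketch of the kind of setup being described: an end-to-end HTTP/2 proxy suitable for gRPC. The addresses, certificate path, and backend port (taken from the ":17355 proto h2" fragment quoted above) are assumptions:

    frontend grpc_fe
        mode http
        bind :443 ssl crt /etc/haproxy/cert.pem alpn h2,http/1.1
        default_backend grpc_be

    backend grpc_be
        mode http
        balance roundrobin
        server s1 10.0.0.1:17355 proto h2 check
        server s2 10.0.0.2:17355 proto h2 check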

Re: haproxy=2.0.5: A bogus APPCTX is spinning and refuses to die

2019-09-16 Thread Максим Куприянов
Created an issue: https://github.com/haproxy/haproxy/issues/277 On Fri, Sep 6, 2019 at 12:36, Максим Куприянов wrote: > Hi everybody! > > Any news on this issue? Maybe you need some more detailed info? I'm still > getting these errors on instances with high request rates. > > On Thu, Aug 29

Re: haproxy=2.0.5: A bogus APPCTX is spinning and refuses to die

2019-09-06 Thread Максим Куприянов
Hi everybody! Any news on this issue? Maybe you need some more detailed info? I'm still getting these errors on instances with high request rates. On Thu, Aug 29, 2019 at 14:21, Максим Куприянов wrote: > Hi! > > Sometimes on reload of 2.0.5 I get this in the logs: > A bogus APPCTX [0x

haproxy=2.0.5: A bogus APPCTX is spinning and refuses to die

2019-08-29 Thread Максим Куприянов
Hi! Sometimes on reload of 2.0.5 I get this in the logs: A bogus APPCTX [0x7fc1a06ff0e0] is spinning at 122591 calls per second and refuses to die, aborting now! Please report this error to developers [strm=0x557eb7f4e630 src=xxx fe=yyy be=yyy dst= rqf=c48202 rqa=0 rpf=80048202 rpa=0 sif=EST,200040

haproxy=2.0.3: SIGABRT in task_run_applet

2019-08-08 Thread Максим Куприянов
Hi! From 3 to 4 times per day haproxy=2.0.3 dies with a SIGABRT. The config is huge, more than 1000 backends. Backtrace follows: Program terminated with signal SIGABRT, Aborted. #0 0x7febeb888428 in raise () from /lib/x86_64-linux-gnu/libc.so.6 [Current thread is 1 (Thread 0x7febe37fe700 (LWP

feature request: http-check with backend weight control possibility

2019-08-05 Thread Максим Куприянов
Hi! It would be nice to add some backend weight control option to http-checks. For example, a backend could add an X-WEIGHT http header to its health-check responses, and haproxy could use it instead of a separate haproxy-agent instance on a backend to control backend weight, or even
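
For comparison, the existing mechanism this request refers to is the agent check, where an external agent process on the backend host reports a weight. A minimal sketch with hypothetical addresses and ports:

    backend be_app
        # the agent listening on 10.0.0.1:9999 may answer e.g. "50%" to scale this server's weight
        server s1 10.0.0.1:80 weight 100 check agent-check agent-port 9999 agent-inter 5s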

haproxy=2.0.3: ereq counter grow in tcp-mode since haproxy=2.0

2019-07-24 Thread Максим Куприянов
Hi! I've noticed that since moving from 1.9.8 to the 2.0 branch of haproxy, the ereq counter of frontend tcp-mode sections began to grow. I had zeroes in that counter before haproxy 2.0; now the number of "error requests" is much higher. Example: listen sample.service:1234 bind ipv6@xxx:yyy mode

haproxy=2.0.1: socket leak

2019-06-28 Thread Максим Куприянов
Hi! I found out that in some situations, under a high rate of incoming connections, haproxy=2.0.1 starts leaking sockets. It looks like haproxy doesn't close connections to its backends after a request is finished (FIN received from the client), thus leaving its server-side sockets in CLOSE_WAIT state. As an
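
Not the confirmed cause or fix for this report, but half-closed connections can at least be bounded with the fin timeouts; a sketch with hypothetical values:

    defaults
        timeout client 30s
        timeout server 30s
        timeout client-fin 5s    # cap how long a half-closed client connection may linger
        timeout server-fin 5s    # likewise on the server side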

Re: haproxy 2.0: SIGSEGV in ssl_subscribe

2019-06-25 Thread Максим Куприянов
Hi Olivier, Thank you for the patches. I've built a new binary and now it works fine. On Tue, Jun 25, 2019 at 15:23, Olivier Houchard wrote: > Hi Maksim, > > On Tue, Jun 25, 2019 at 01:29:24PM +0300, Максим Куприянов wrote: > > Hi! > > > > Got SIGSEGV in ssl_subscribe funct

haproxy 2.0: SIGSEGV in ssl_subscribe

2019-06-25 Thread Максим Куприянов
Hi! Got a SIGSEGV in the ssl_subscribe function. Happens multiple times per day. Haproxy was built from trunk with commits up to: http://git.haproxy.org/?p=haproxy-2.0.git;a=commit;h=9eae8935663bc0b27c23018e8cc24ae9a3e31732 Program terminated with signal SIGSEGV, Segmentation fault. #0 ssl_subscribe

Re: SD-termination cause

2019-06-03 Thread Максим Куприянов
Hi Willy! Sorry for bothering you, but do you have any news about this case? On Thu, May 23, 2019 at 10:35, Willy Tarreau wrote: > Hi Maksim, > > On Thu, May 23, 2019 at 10:00:19AM +0300, Максим Куприянов wrote: > > 2nd session (from haproxy to ssl-enabled backend A, dumped with tshark > for > > better

Re: SD-termination cause

2019-05-23 Thread Максим Куприянов
Hi, Willy! This kind of error only happens in proxy sections with ssl-enabled backends ('ssl verify none' in server lines). In order to find out what really happens from the network point of view, I added one plain-http backend to one of the proxy sections. Then I captured the situation when a request
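
A sketch of the debugging setup described here: one TLS server and one plain-HTTP server in the same backend, so the two packet captures can be compared. Names and addresses are hypothetical:

    backend be_mixed
        server s_ssl   10.0.0.1:443 ssl verify none check
        server s_clear 10.0.0.2:80 check    # plain-http server added for capture comparison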

SD-termination cause

2019-05-21 Thread Максим Куприянов
Hi! I've run into a weird problem: many connections fail with SD status in the log, and I have no idea how to discover the source of the problem. From the client's point of view it looks like this: * Client (located on the same machine as haproxy) successfully opens a connection to haproxy over

Re: 1.9.6: SIGFPE in fwrr_update_position

2019-04-23 Thread Максим Куприянов
Hi! It seems to me there is something wrong with this patch: for some reason the process stops responding, with 100% CPU used by all threads. Backtrace: (gdb) thread apply all bt Thread 4 (Thread 0x7fdf68c9c700 (LWP 615744)): #0 0x564fc9a61990 in fwrr_update_server_weight (srv=0x564fcb5014b0) at

Re: 1.9.6: SIGFPE in fwrr_update_position

2019-04-15 Thread Максим Куприянов
Hi Willy! Actually I don't think this is a CPU fault. The reason is that I have the same cores, with non-zero divisors, on 4 more hardware servers with different CPU models. So I agree it is likely another thread's activity. The one thing unique to these servers is that all of them use haproxy-agent to set up weights

Re: 1.9.6: SIGFPE in fwrr_update_position

2019-04-11 Thread Максим Куприянов
Hello Willy! I hope some cores are still available; I will search for them tomorrow. But since they could contain some sensitive information, it's not a good idea to share them right here on the mailing list. So could you please give me a personal email address where I could send the link

Re: 1.9.6: SIGFPE in fwrr_update_position

2019-04-10 Thread Максим Куприянов
Hi! Any news about the cause of these faults? I should mention that some of our backends set their weights with the help of the haproxy agent. Could that be the reason? On Thu, Apr 4, 2019 at 14:22, Максим Куприянов wrote: > Hi, everybody! > > Got multiple incidents of failure with 1.9.

1.9.6: SIGFPE in fwrr_update_position

2019-04-04 Thread Максим Куприянов
Hi, everybody! Got multiple incidents of failure with 1.9.6: Core was generated by `/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy'. Program terminated with signal SIGFPE, Arithmetic exception. #0 0x559afb73c533 in fwrr_update_position (grp=0x559afbd9fb68,

1.9.5: SIGSEGV in wake_srv_chk under heavy load

2019-03-28 Thread Максим Куприянов
Hi! We unexpectedly got a spike of client requests to our site, and under that heavy load to ssl-protected backends haproxy=1.9.5 fell over :( Backtrace: Core was generated by `/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy'. Program terminated with signal SIGSEGV,

Re: 1.9.5, SIGABRT

2019-03-27 Thread Максим Куприянов
Hi! Thank you very much, I'll test your patch and write back tomorrow. On Wed, Mar 27, 2019 at 16:50, William Lallemand wrote: > On Wed, Mar 27, 2019 at 02:24:23PM +0100, William Lallemand wrote: > > > > On Wed, Mar 27, 2019 at 01:59:59PM +0300, Максим Куприянов wrote: > >

Re: 1.9.5, SIGABRT

2019-03-27 Thread Максим Куприянов
server 1h option redispatch option dontlognull On Wed, Mar 27, 2019 at 16:26, William Lallemand wrote: > > On Wed, Mar 27, 2019 at 01:59:59PM +0300, Максим Куприянов wrote: > > Hi, everybody! Got a core on 1.9.5. > > > > Hello, > > How did it happen? Can you reproduc

1.9.5, SIGABRT

2019-03-27 Thread Максим Куприянов
Hi, everybody! Got a core on 1.9.5. Core was generated by `/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid'. Program terminated with signal SIGABRT, Aborted. #0 0x7fe164fd0428 in raise () from /lib/x86_64-linux-gnu/libc.so.6 (gdb) thread apply all bt Thread 1

segfault in eb32sc_lookup_ge (1.9.4)

2019-03-24 Thread Максим Куприянов
Hi! I caught 2 segfaults on different machines. Both look the same: haproxy[483437]: segfault at 8 ip 55d7185283fa sp 7f257955f5b8 error 4 in haproxy[55d7183b1000+1d7000] Unfortunately I don't have core files, and their configs are too big and complex to share, but I figured out the

Re: haproxy=1.8.5 stuck in thread syncing

2018-05-24 Thread Максим Куприянов
Hi, Christopher! Could you tell me whether these patches will be backported to haproxy 1.8? 2018-04-11 20:06 GMT+03:00 Максим Куприянов <maxim.kupriya...@gmail.com>: > Hi! > > Thank you very much for the patches. Looks like they helped. > > 2018-03-29 14:25 GMT+05:00 Chr

Re: haproxy=1.8.5 stuck in thread syncing

2018-04-11 Thread Максим Куприянов
Hi! Thank you very much for the patches. Looks like they helped. 2018-03-29 14:25 GMT+05:00 Christopher Faulet <cfau...@haproxy.com>: > On 28/03/2018 at 14:16, Максим Куприянов wrote: > >> Hi! >> >> I'm sorry, but the configuration is too huge to share (over 1

Re: haproxy=1.8.5 stuck in thread syncing

2018-03-28 Thread Максим Куприянов
On 28/03/2018 at 09:36, Максим Куприянов wrote: > >> Hi! >> >> Yesterday one of our haproxies (1.8.5) with nbthread=8 set in its config >> got stuck with 800% CPU usage. Some responses were served successfully but many >> of them just timed out. perf top showed this: >

haproxy=1.8.5 stuck in thread syncing

2018-03-28 Thread Максим Куприянов
Hi! Yesterday one of our haproxies (1.8.5) with nbthread=8 set in its config got stuck with 800% CPU usage. Some responses were served successfully but many of them just timed out. perf top showed this: 59.19% [.] thread_enter_sync 32.68% [.] fwrr_get_next_server We made a core and here is a

Re: segfault in haproxy=1.8.4

2018-03-25 Thread Максим Куприянов
Hi! It's been almost 2 weeks since I installed the patch, and there have been no segfaults since. It seems that the problem is fixed now. Thank you! 2018-03-19 23:16 GMT+03:00 William Dauchy : > On Mon, Mar 19, 2018 at 08:41:16PM +0100, Willy Tarreau wrote: > > For me,

Re: segfault in haproxy=1.8.4

2018-03-14 Thread Максим Куприянов
Hi, Christopher! Thank you very much for the patch. I'll apply it to my canary host today, but it will take a week or even more to make sure that no crashes occur. Either way, I'll write you back. 2018-03-14 23:56 GMT+03:00 Christopher Faulet : > On 07/03/2018 at 09:58, Christopher

Re: segfault in haproxy=1.8.4

2018-03-05 Thread Максим Куприянов
Hi Willy! I have 2 more haproxy servers with exactly the same configuration and load. Both have threads compiled in but not enabled in the config (no nbthread). And there are no segfaults at all. So I'm sure everything is fine without threads. Haproxy's config file itself is way too large to find out

segfault in haproxy=1.8.4

2018-03-05 Thread Максим Куприянов
Hi! I have a backtrace for a segfault in haproxy=1.8.4 with 4 threads. It usually happens under heavy load. Can you take a look? Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". Core was generated by `/usr/sbin/haproxy -f /etc/haproxy/haproxy-market.cfg -p

Re: segfault error 6 in haproxy=1.8-3 (pendconn_grab_from_px)

2018-01-17 Thread Максим Куприянов
sp 7ffe59ed65b8 error 7 in haproxy[55bbf3492000+169000] Both lead us to init_task: $ /usr/bin/addr2line -e /usr/sbin/haproxy -fCi 0x10b3af init_task ??:? 2018-01-17 13:12 GMT+03:00 Максим Куприянов <maxim.kupriya...@gmail.com>: > Hi! > > I have multiple instances of hap

segfault error 6 in haproxy=1.8-3 (pendconn_grab_from_px)

2018-01-17 Thread Максим Куприянов
Hi! I have multiple instances of haproxy=1.8-3 running with nbthread=2 and more. Those instances sometimes fail with an error like this in dmesg: haproxy[22287]: segfault at 5574915aab8a ip 55828ce3adea sp 7ffc105c8fb0 error 6 in haproxy[55828cd2f000+169000] Instances with no

Re: 1.8.0 stuck in write(threads_sync_pipe[1], "S", 1)

2017-12-04 Thread Максим Куприянов
Hi! Everything seems fine. Haproxy is still alive, so your patch solves the problem. Thank you! Maxim 2017-12-02 13:22 GMT+03:00 Максим Куприянов <maxim.kupriya...@gmail.com>: > Hi! > > Thank you for such a quick response. I'll apply the patch and leave one > instance of 1

Re: 1.8.0 stuck in write(threads_sync_pipe[1], "S", 1)

2017-12-02 Thread Максим Куприянов
Hi! Thank you for such a quick response. I'll apply the patch and leave one instance of 1.8 under load till Monday. Then I'll write you back.

1.8.0 stuck in write(threads_sync_pipe[1], "S", 1)

2017-12-01 Thread Максим Куприянов
Hi! Tonight all of my haproxy 1.8.0 instances stopped answering. They didn't forward traffic and didn't even answer over the socket. They're compiled with threads, but threads are not enabled in their configs (no nbthread option). All of them are stuck in the same place: # strace -f -p 831919 Process

Re: Sticky-table contents is not distributed among peers

2017-11-29 Thread Максим Куприянов
I'm doing something completely wrong and there is a better way. What I really want is an acl for backend selection based on the request-per-second rate of connections across the whole location, with many haproxy installations. How can I achieve this? 2017-11-29 19:39 GMT+05:00 Максим Куприянов <
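
One way this is commonly sketched: a stick-table tracking a single shared key, replicated through a peers section, with an acl on the tracked rate. All names, addresses, and thresholds below are hypothetical:

    peers mypeers
        peer hap1 192.0.2.1:1024
        peer hap2 192.0.2.2:1024

    backend rate_table
        stick-table type string size 1 expire 1m peers mypeers store http_req_rate(10s)

    frontend fe
        bind :80
        # every request tracks the same constant key, so the table holds one global rate
        http-request track-sc0 str(global) table rate_table
        acl too_fast sc0_http_req_rate(rate_table) gt 1000
        use_backend be_overflow if too_fast
        default_backend be_main

Note that peers replicate entries rather than summing them, so counters pushed by different nodes overwrite each other; this may be exactly the behaviour being asked about in this thread.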

Sticky-table contents is not distributed among peers

2017-11-29 Thread Максим Куприянов
Hi! First of all I'd like to thank you for such great software as Haproxy. It is really one of the best open-source projects, and I've been a happy user for many years :) But now I need help with troubleshooting. Recently I tried to use distributed stick-tables, but for some reason they're

Re: Haproxy 1.7 and Ipv6-only hosts

2017-01-10 Thread Максим Куприянов
Hi Willy, Baptiste! The patch is working for me. Thank you very much for the help! 2017-01-06 21:53 GMT+03:00 Willy Tarreau : > Hi Baptiste, Maxim, > > On Wed, Dec 28, 2016 at 02:04:44PM +0100, Baptiste wrote: > > On Fri, Dec 23, 2016 at 5:21 PM, Willy Tarreau wrote: > > >

Haproxy 1.7 and Ipv6-only hosts

2016-12-23 Thread Максим Куприянов
Hi! Since I installed haproxy 1.7.1 over 1.6.10, it has stopped working with ipv6-only backends (no A record in DNS at all, only AAAA), even with USE_GETADDRINFO=1 set. Haproxy says that it 'could not resolve address' and exits in the parsing phase. The problem is in the function
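
For runtime (rather than startup-only) resolution of such hosts, a resolvers section with an IPv6 preference is the usual sketch; the nameserver address and hostname below are hypothetical:

    resolvers mydns
        nameserver dns1 192.0.2.53:53
        hold valid 10s

    backend be_v6only
        server s1 ipv6only.example.com:443 resolvers mydns resolve-prefer ipv6 check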