Re: HAProxy graceful restart old process not going away
Hi Willy,

On Fri, Aug 1, 2014 at 10:49 AM, Willy Tarreau wrote:
> Hi Stefan,
>
> On Thu, Jul 24, 2014 at 03:32:30PM +0200, Stefan Majer wrote:
> > Hi Willy,
> >
> > coming back to this old thread.
> > We still have the problem that from time to time, after doing a
> >
> > # service haproxy reload
> >
> > which actually does
> >
> > # /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid -sf
> >
> > the old process persists and we end up having more than one haproxy process.
> > The old process will never end (even after days) until we forcefully kill it.
> >
> > We issue a haproxy reload every time a configuration change happens, which
> > can be quite often, say 50-100 times a day.
> >
> > To nail this problem down, we are finally able to reproduce this behavior
> > easily!
> >
> > We do the following commands on a recent Ubuntu, CentOS, RHEL, whatever.
> > We installed haproxy 1.5.2 and 1.4.25; same effect.
> > We reload haproxy in parallel by executing:
> >
> > # service haproxy reload & service haproxy reload &
> >
> > Repeat this a few times (5-10) and you will see:
> >
> > # ps -ef | grep haproxy
> > haproxy 3855 1 0 12:34 ? 00:00:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -D -sf 3797
> > haproxy 3950 1 0 12:35 ? 00:00:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -D -sf 3932
> >
> > I know it is not recommended to reload an already reloading process, but I
> > want to make clear that this is a potential source of confusion.
> > I don't know if it is possible to check whether there is already a reload
> > in progress and return silently?
>
> But you realize that this is completely expected? You're asking multiple
> processes in parallel to signal the same old one that it must be leaving,
> and then to all start in parallel.

Of course. We are aware that we need to prevent a parallel reload by
serializing configuration generation and reloading of the haproxy process.

> There would be a solution to avoid this: it consists in disabling the
> SO_REUSEPORT option on the listening sockets, so that only one process
> gets the listening ports and the other ones fail and leave. The problem
> is that it would make the reloads more noticeable because you'd get a
> short period of time with no port bound.
>
> We could also think about grabbing a lock on the pid file, but that would
> make life harder for people working with minimal environments where locks
> are not implemented. Also, it would require keeping the lock on the file
> for all the process' life, which is not really nice either. Additionally,
> not everyone uses pidfiles anyway...

One solution might be to have one master process which handles all the
configuration parsing and child management, and the children get restarted
once the master has told them to. This is the way nginx works. What do you
think?

> How large is your configuration? With small configs, haproxy can start in
> a few milliseconds. Here on my laptop, a small 20-line config takes 2 ms
> to start, and a huge one (300000 backends) takes 3 seconds, so that's 10
> microseconds per backend. I really doubt that even an excited user could
> manage to cause conflicts during a startup, especially when you restart
> it 100 times a day at most :-/

Our configuration is not that big; we currently have ~200 frontends and
~1000 backends configured. But as I understand it, the old process does not
die before it has handed all sockets over to the new daemon, and this may
take some time if some of the sockets are processing long-running sessions.
So the overall reload may take from a few seconds up to possibly minutes.

> Regards,
> Willy

Since we know how to act on this situation, we got back a very stable and
performant load balancer. Thanks for that!

Greetings
Stefan

--
Stefan Majer
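Serializing the reloads from the calling side, as discussed above, can be done by taking an exclusive lock around the reload command. The following is a minimal sketch assuming flock(1) from util-linux is available; the `serialized_reload` helper name, the lock file path, and the 30-second timeout are illustrative choices, not part of haproxy or its init scripts.

```shell
#!/bin/sh
# Sketch: run the reload command under an exclusive lock so that two
# concurrent "service haproxy reload" invocations queue up instead of
# racing each other. Assumes flock(1) from util-linux is available.

serialized_reload() {
    # "$@" is the actual reload command to run while holding the lock.
    lockfile="${LOCKFILE:-/var/run/haproxy-reload.lock}"
    (
        # Wait up to 30 seconds for any in-flight reload to finish.
        flock -w 30 9 || { echo "reload already in progress" >&2; exit 1; }
        "$@"
    ) 9>"$lockfile"
}

# In an init script this would wrap the usual invocation, e.g.:
#   serialized_reload /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -D \
#       -p /var/run/haproxy.pid -sf "$(cat /var/run/haproxy.pid)"
```

With this in place, a second reload started while the first one is still binding simply waits for the lock instead of signalling a process that is already being replaced.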
Re: HAProxy graceful restart old process not going away
Hi Stefan,

On Thu, Jul 24, 2014 at 03:32:30PM +0200, Stefan Majer wrote:
> Hi Willy,
>
> coming back to this old thread.
> We still have the problem that from time to time, after doing a
>
> # service haproxy reload
>
> which actually does
>
> # /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid -sf
>
> the old process persists and we end up having more than one haproxy process.
> The old process will never end (even after days) until we forcefully kill it.
>
> We issue a haproxy reload every time a configuration change happens, which
> can be quite often, say 50-100 times a day.
>
> To nail this problem down, we are finally able to reproduce this behavior
> easily!
>
> We do the following commands on a recent Ubuntu, CentOS, RHEL, whatever.
> We installed haproxy 1.5.2 and 1.4.25; same effect.
> We reload haproxy in parallel by executing:
>
> # service haproxy reload & service haproxy reload &
>
> Repeat this a few times (5-10) and you will see:
>
> # ps -ef | grep haproxy
> haproxy 3855 1 0 12:34 ? 00:00:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -D -sf 3797
> haproxy 3950 1 0 12:35 ? 00:00:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -D -sf 3932
>
> I know it is not recommended to reload an already reloading process, but I
> want to make clear that this is a potential source of confusion.
> I don't know if it is possible to check whether there is already a reload
> in progress and return silently?

But you realize that this is completely expected? You're asking multiple
processes in parallel to signal the same old one that it must be leaving
and then to all start in parallel.

There would be a solution to avoid this: it consists in disabling the
SO_REUSEPORT option on the listening sockets, so that only one process
gets the listening ports and the other ones fail and leave. The problem
is that it would make the reloads more noticeable because you'd get a
short period of time with no port bound.

We could also think about grabbing a lock on the pid file, but that would
make life harder for people working with minimal environments where locks
are not implemented. Also, it would require keeping the lock on the file
for all the process' life, which is not really nice either. Additionally,
not everyone uses pidfiles anyway...

How large is your configuration? With small configs, haproxy can start in
a few milliseconds. Here on my laptop, a small 20-line config takes 2 ms
to start, and a huge one (300000 backends) takes 3 seconds, so that's 10
microseconds per backend. I really doubt that even an excited user could
manage to cause conflicts during a startup, especially when you restart
it 100 times a day at most :-/

Regards,
Willy
Re: HAProxy graceful restart old process not going away
Hi Steven,

the actual config is:

global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL).
    ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  5
    timeout server  5
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

This is a totally idle server which actually does no load balancing at all.

Hope this helps
Stefan Majer

On Thu, Jul 24, 2014 at 4:06 PM, Steven Le Roux wrote:
> Hi,
>
> do you have set timeout on keep alive?
>
> can you share the template you're using for your configuration?
>
> On Thu, Jul 24, 2014 at 3:32 PM, Stefan Majer wrote:
> > Hi Willy,
> >
> > coming back to this old thread.
> > We still have the problem that from time to time, after doing a
> >
> > # service haproxy reload
> > which actually does
> > # /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid -sf
> > the old process persists and we end up having more than one haproxy process.
> > The old process will never end (even after days) until we forcefully kill it.
> >
> > We issue a haproxy reload every time a configuration change happens, which
> > can be quite often, say 50-100 times a day.
> >
> > To nail this problem down, we are finally able to reproduce this behavior
> > easily!
> >
> > We do the following commands on a recent Ubuntu, CentOS, RHEL, whatever.
> > We installed haproxy 1.5.2 and 1.4.25; same effect.
> > We reload haproxy in parallel by executing:
> > # service haproxy reload & service haproxy reload &
> >
> > Repeat this a few times (5-10) and you will see:
> > # ps -ef | grep haproxy
> > haproxy 3855 1 0 12:34 ? 00:00:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -D -sf 3797
> > haproxy 3950 1 0 12:35 ? 00:00:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -D -sf 3932
> >
> > I know it is not recommended to reload an already reloading process, but I
> > want to make clear that this is a potential source of confusion.
> > I don't know if it is possible to check whether there is already a reload
> > in progress and return silently?
> >
> > Our actual production haproxy -vv looks like:
> >
> > HA-Proxy version 1.5.2 2014/07/12
> > Copyright 2000-2014 Willy Tarreau
> >
> > Build options :
> >   TARGET  = linux2628
> >   CPU     = generic
> >   CC      = gcc
> >   CFLAGS  = -O2 -g -fno-strict-aliasing
> >   OPTIONS = USE_LINUX_TPROXY=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_PCRE=1
> >
> > Default settings :
> >   maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200
> >
> > Encrypted password support via crypt(3): yes
> > Built with zlib version : 1.2.3
> > Compression algorithms supported : identity, deflate, gzip
> > Built with OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
> > Running on OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
> > OpenSSL library supports TLS extensions : yes
> > OpenSSL library supports SNI : yes
> > OpenSSL library supports prefer-server-ciphers : yes
> > Built with PCRE version : 7.8 2008-09-05
> > PCRE library supports JIT : no (USE_PCRE_JIT not set)
> > Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
> >
> > Available polling systems :
> >   epoll  : pref=300, test result OK
> >   poll   : pref=200, test result OK
> >   select : pref=150, test result OK
> > Total: 3 (3 usable), will use epoll.
> >
> > Hope this clarifies some issues in the list with the same effect. We will
> > now ensure from the calling side that no parallel reload will be triggered
> > to prevent this situation.
> >
> > Greetings
> > Stefan Majer
> >
> > On Thu, Jan 30, 2014 at 10:07 AM, Willy Tarreau wrote:
> >> Hi Stefan,
> >>
> >> On Thu, Jan 30, 2014 at 09:46:12AM +0100, Stefan Majer wrote:
> >> > Hi Willy,
> >> >
> >> > we see the same effect in our environment here as well.
> >> > We are not sure if this is related to a still open Websocket connection.
> >> >
> >> > Do you think that a
> >> >
> >> > timeout
Re: HAProxy graceful restart old process not going away
Hi,

do you have set timeout on keep alive?

can you share the template you're using for your configuration?

On Thu, Jul 24, 2014 at 3:32 PM, Stefan Majer wrote:
> Hi Willy,
>
> coming back to this old thread.
> We still have the problem that from time to time, after doing a
>
> # service haproxy reload
> which actually does
> # /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid -sf
> the old process persists and we end up having more than one haproxy process.
> The old process will never end (even after days) until we forcefully kill it.
>
> We issue a haproxy reload every time a configuration change happens, which
> can be quite often, say 50-100 times a day.
>
> To nail this problem down, we are finally able to reproduce this behavior
> easily!
>
> We do the following commands on a recent Ubuntu, CentOS, RHEL, whatever.
> We installed haproxy 1.5.2 and 1.4.25; same effect.
> We reload haproxy in parallel by executing:
> # service haproxy reload & service haproxy reload &
>
> Repeat this a few times (5-10) and you will see:
> # ps -ef | grep haproxy
> haproxy 3855 1 0 12:34 ? 00:00:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -D -sf 3797
> haproxy 3950 1 0 12:35 ? 00:00:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -D -sf 3932
>
> I know it is not recommended to reload an already reloading process, but I
> want to make clear that this is a potential source of confusion.
> I don't know if it is possible to check whether there is already a reload
> in progress and return silently?
>
> Our actual production haproxy -vv looks like:
>
> HA-Proxy version 1.5.2 2014/07/12
> Copyright 2000-2014 Willy Tarreau
>
> Build options :
>   TARGET  = linux2628
>   CPU     = generic
>   CC      = gcc
>   CFLAGS  = -O2 -g -fno-strict-aliasing
>   OPTIONS = USE_LINUX_TPROXY=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_PCRE=1
>
> Default settings :
>   maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200
>
> Encrypted password support via crypt(3): yes
> Built with zlib version : 1.2.3
> Compression algorithms supported : identity, deflate, gzip
> Built with OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
> Running on OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
> OpenSSL library supports TLS extensions : yes
> OpenSSL library supports SNI : yes
> OpenSSL library supports prefer-server-ciphers : yes
> Built with PCRE version : 7.8 2008-09-05
> PCRE library supports JIT : no (USE_PCRE_JIT not set)
> Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
>
> Available polling systems :
>   epoll  : pref=300, test result OK
>   poll   : pref=200, test result OK
>   select : pref=150, test result OK
> Total: 3 (3 usable), will use epoll.
>
> Hope this clarifies some issues in the list with the same effect. We will
> now ensure from the calling side that no parallel reload will be triggered
> to prevent this situation.
>
> Greetings
> Stefan Majer
>
> On Thu, Jan 30, 2014 at 10:07 AM, Willy Tarreau wrote:
>> Hi Stefan,
>>
>> On Thu, Jan 30, 2014 at 09:46:12AM +0100, Stefan Majer wrote:
>> > Hi Willy,
>> >
>> > we see the same effect in our environment here as well.
>> > We are not sure if this is related to a still open Websocket connection.
>> >
>> > Do you think that a
>> >
>> > timeout tunnel 1h  # timeout to use with WebSocket and CONNECT
>> >
>> > in the configuration will help to terminate these processes after the
>> > specified timeout.
>>
>> It's not exactly this. The timeout will ensure that dead or idle
>> connections will eventually get killed. But active connections will
>> not be killed as long as there is traffic flowing on them.
>>
>> Haproxy only quits after the last session terminates. So for sure,
>> if it is maintained alive because of dead connections, this will
>> help. But if there is regular traffic on active connections, it
>> will not be enough.
>>
>> Regards,
>> Willy
>
> --
> Stefan Majer

--
Steven Le Roux
Jabber-ID : ste...@jabber.fr
0x39494CCB 2FF7 226B 552E 4709 03F0 6281 72D7 A010 3949 4CCB
Re: HAProxy graceful restart old process not going away
Hi Willy,

coming back to this old thread.
We still have the problem that from time to time, after doing a

# service haproxy reload

which actually does

# /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid -sf

the old process persists and we end up having more than one haproxy process.
The old process will never end (even after days) until we forcefully kill it.

We issue a haproxy reload every time a configuration change happens, which
can be quite often, say 50-100 times a day.

To nail this problem down, we are finally able to reproduce this behavior
easily!

We do the following commands on a recent Ubuntu, CentOS, RHEL, whatever.
We installed haproxy 1.5.2 and 1.4.25; same effect.
We reload haproxy in parallel by executing:

# service haproxy reload & service haproxy reload &

Repeat this a few times (5-10) and you will see:

# ps -ef | grep haproxy
haproxy 3855 1 0 12:34 ? 00:00:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -D -sf 3797
haproxy 3950 1 0 12:35 ? 00:00:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -D -sf 3932

I know it is not recommended to reload an already reloading process, but I
want to make clear that this is a potential source of confusion.
I don't know if it is possible to check whether there is already a reload
in progress and return silently?

Our actual production haproxy -vv looks like:

HA-Proxy version 1.5.2 2014/07/12
Copyright 2000-2014 Willy Tarreau

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing
  OPTIONS = USE_LINUX_TPROXY=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.3
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 7.8 2008-09-05
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
  epoll  : pref=300, test result OK
  poll   : pref=200, test result OK
  select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.

Hope this clarifies some issues in the list with the same effect. We will
now ensure from the calling side that no parallel reload will be triggered
to prevent this situation.

Greetings
Stefan Majer

On Thu, Jan 30, 2014 at 10:07 AM, Willy Tarreau wrote:
> Hi Stefan,
>
> On Thu, Jan 30, 2014 at 09:46:12AM +0100, Stefan Majer wrote:
> > Hi Willy,
> >
> > we see the same effect in our environment here as well.
> > We are not sure if this is related to a still open Websocket connection.
> >
> > Do you think that a
> >
> > timeout tunnel 1h  # timeout to use with WebSocket and CONNECT
> >
> > in the configuration will help to terminate these processes after the
> > specified timeout.
>
> It's not exactly this. The timeout will ensure that dead or idle
> connections will eventually get killed. But active connections will
> not be killed as long as there is traffic flowing on them.
>
> Haproxy only quits after the last session terminates. So for sure,
> if it is maintained alive because of dead connections, this will
> help. But if there is regular traffic on active connections, it
> will not be enough.
>
> Regards,
> Willy

--
Stefan Majer
Re: HAProxy graceful restart old process not going away
Hi Wei,

On Tue, Feb 11, 2014 at 09:12:19PM +0000, Wei Kong wrote:
> Hi Willy,
>
> Will -st kill all connections whether or not there are still active
> transactions being processed?

Yes, -st is done precisely to kill the old process after the new one has
taken over. We will be able to improve this in the future so that the
restart terminates all idle connections, but at the moment we have no list
of idle connections yet.

Regards,
Willy
RE: HAProxy graceful restart old process not going away
Hi Willy,

Will -st kill all connections whether or not there are still active
transactions being processed?

Thanks,
Wei

On Tue, Jan 28, 2014 at 11:28 PM, Willy Tarreau <w...@1wt.eu> wrote:
> On Tue, Jan 28, 2014 at 10:16:39PM +0000, Wei Kong wrote:
> > Thanks. Looks like it is websocket connections for us too. So is killing
> > the process the only way?
>
> It depends whether you're willing to kill your websocket connections or
> not. At some point they will disappear, since the old process does not
> accept any new connections. However, I understand it can be long,
> especially with some setups using 24h as the timeout, resulting in dead
> clients maintaining their connection for an artificially long time!
>
> There was a feature I wanted to implement for client-side HTTP keep-alive
> which would consist in reducing the keep-alive timeout and disabling
> keep-alive for new requests over existing connections, so that these ones
> would vanish much faster. Maybe we could do something like this for
> existing tunnels. It's not very easy if we want to consider existing
> silent connections.
>
> If you really don't care about the old connections, just use -st instead
> of -sf when reloading, and once the new process takes over, the old one
> will go away even if it has some remaining connections.
>
> Willy
Re: HAProxy graceful restart old process not going away
Hi Stefan,

On Thu, Jan 30, 2014 at 09:46:12AM +0100, Stefan Majer wrote:
> Hi Willy,
>
> we see the same effect in our environment here as well.
> We are not sure if this is related to a still open Websocket connection.
>
> Do you think that a
>
> timeout tunnel 1h  # timeout to use with WebSocket and CONNECT
>
> in the configuration will help to terminate these processes after the
> specified timeout.

It's not exactly this. The timeout will ensure that dead or idle
connections will eventually get killed. But active connections will
not be killed as long as there is traffic flowing on them.

Haproxy only quits after the last session terminates. So for sure,
if it is maintained alive because of dead connections, this will
help. But if there is regular traffic on active connections, it
will not be enough.

Regards,
Willy
Re: HAProxy graceful restart old process not going away
Hi Willy,

we see the same effect in our environment here as well.
We are not sure if this is related to a still open Websocket connection.

Do you think that a

timeout tunnel 1h  # timeout to use with WebSocket and CONNECT

in the configuration will help to terminate these processes after the
specified timeout. If there is a chance, I will give it a try in our
production.

Greetings
Stefan

On Tue, Jan 28, 2014 at 11:28 PM, Willy Tarreau wrote:
> On Tue, Jan 28, 2014 at 10:16:39PM +0000, Wei Kong wrote:
> > Thanks. Looks like it is websocket connections for us too. So is killing
> > the process the only way?
>
> It depends whether you're willing to kill your websocket connections or
> not. At some point they will disappear, since the old process does not
> accept any new connections. However, I understand it can be long,
> especially with some setups using 24h as the timeout, resulting in dead
> clients maintaining their connection for an artificially long time!
>
> There was a feature I wanted to implement for client-side HTTP keep-alive
> which would consist in reducing the keep-alive timeout and disabling
> keep-alive for new requests over existing connections, so that these ones
> would vanish much faster. Maybe we could do something like this for
> existing tunnels. It's not very easy if we want to consider existing
> silent connections.
>
> If you really don't care about the old connections, just use -st instead
> of -sf when reloading, and once the new process takes over, the old one
> will go away even if it has some remaining connections.
>
> Willy

--
Stefan Majer
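For reference, `timeout tunnel` belongs in a defaults (or per-proxy) section. A minimal sketch of where it would sit; only the tunnel line comes from this thread, the surrounding values are illustrative:

```
defaults
    mode    http
    timeout connect 5s
    timeout client  30s
    timeout server  30s
    # Applied once a connection switches to tunnel mode (WebSocket,
    # CONNECT); it replaces the client/server timeouts for that session.
    timeout tunnel  1h
```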
Re: HAProxy graceful restart old process not going away
On Tue, Jan 28, 2014 at 10:16:39PM +0000, Wei Kong wrote:
> Thanks. Looks like it is websocket connections for us too. So is killing
> the process the only way?

It depends whether you're willing to kill your websocket connections or
not. At some point they will disappear, since the old process does not
accept any new connections. However, I understand it can be long,
especially with some setups using 24h as the timeout, resulting in dead
clients maintaining their connection for an artificially long time!

There was a feature I wanted to implement for client-side HTTP keep-alive
which would consist in reducing the keep-alive timeout and disabling
keep-alive for new requests over existing connections, so that these ones
would vanish much faster. Maybe we could do something like this for
existing tunnels. It's not very easy if we want to consider existing
silent connections.

If you really don't care about the old connections, just use -st instead
of -sf when reloading, and once the new process takes over, the old one
will go away even if it has some remaining connections.

Willy
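The difference between the two takeover modes can be sketched as a small helper that builds the reload command line. With -sf the old process finishes its remaining sessions before exiting; with -st it is told to exit immediately, dropping whatever connections remain (e.g. idle websockets). The `build_reload_cmd` function and `HAPROXY_BIN` variable are illustrative names, not part of haproxy:

```shell
#!/bin/sh
# Sketch: assemble a soft (-sf) or hard (-st) haproxy reload command.
# -sf: the old process drains existing sessions, then exits.
# -st: the old process is terminated at once, dropping remaining sessions.

build_reload_cmd() {
    mode=$1                              # "soft" or "hard"
    pidfile=${2:-/var/run/haproxy.pid}
    case $mode in
        soft) sig=-sf ;;
        hard) sig=-st ;;
        *) echo "usage: build_reload_cmd soft|hard [pidfile]" >&2; return 1 ;;
    esac
    # The new process passes the old PID after -sf/-st so it knows whom
    # to signal once it has bound the listening sockets.
    echo "${HAPROXY_BIN:-/usr/sbin/haproxy} -f /etc/haproxy/haproxy.cfg -D -p $pidfile $sig $(cat "$pidfile")"
}

# Typical use in an init script (this line would actually run the command):
#   eval "$(build_reload_cmd hard)"
```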
Re: HAProxy graceful restart old process not going away
Thanks. Looks like it is websocket connections for us too. So is killing
the process the only way?

Thanks,
Wei

On 1/27/14, 11:47 PM, "k simon" wrote:
> We got a similar problem, then captured the traffic and found it was
> caused by websocket connections. So we had to kill the old process
> manually after finishing the graceful restart.
>
> On 28/1/14, 2:37 PM, Willy Tarreau wrote:
> > On Mon, Jan 27, 2014 at 11:24:46PM +0000, Wei Kong wrote:
> > > We use
> > >
> > > /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid -sf
> > >
> > > in production to gracefully restart haproxy. But sometimes we notice
> > > that the old haproxy process takes a long time to go away, and if we
> > > make multiple updates it results in multiple haproxy processes for a
> > > long time. How can we make sure the old haproxy goes away in a
> > > reasonable amount of time?
> >
> > Maybe you have long transfers going on, or long keep-alive timeouts ?
> >
> > Willy
Re: HAProxy graceful restart old process not going away
We got a similar problem, then captured the traffic and found it was caused
by websocket connections. So we had to kill the old process manually after
finishing the graceful restart.

On 28/1/14, 2:37 PM, Willy Tarreau wrote:
> On Mon, Jan 27, 2014 at 11:24:46PM +0000, Wei Kong wrote:
> > We use
> >
> > /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid -sf
> >
> > in production to gracefully restart haproxy. But sometimes we notice
> > that the old haproxy process takes a long time to go away, and if we
> > make multiple updates it results in multiple haproxy processes for a
> > long time. How can we make sure the old haproxy goes away in a
> > reasonable amount of time?
>
> Maybe you have long transfers going on, or long keep-alive timeouts ?
>
> Willy
Re: HAProxy graceful restart old process not going away
On Mon, Jan 27, 2014 at 11:24:46PM +0000, Wei Kong wrote:
> We use
>
> /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid -sf
>
> in production to gracefully restart haproxy. But sometimes we notice that
> the old haproxy process takes a long time to go away, and if we make
> multiple updates it results in multiple haproxy processes for a long time.
> How can we make sure the old haproxy goes away in a reasonable amount of
> time?

Maybe you have long transfers going on, or long keep-alive timeouts ?

Willy
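Long keep-alive timeouts keep the old process alive after a reload for up to the full timeout, since it only exits once its last session ends. A minimal sketch of explicit, short keep-alive settings; the directive names are standard haproxy keywords, the values are purely illustrative:

```
defaults
    mode http
    # Bound how long an idle keep-alive connection may wait for its next
    # request; short values let a reloaded (old) process drain quickly.
    timeout http-keep-alive 10s
    timeout client          30s
    timeout server          30s
```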