On 9/19/22 at 17:44, Tim Düsterhus wrote:
Hi

recently our HAProxy nodes started handling long-running HTTP
connections (similar to WebSockets). This causes old workers to stay
around for several days after a reload.

This isn't too bad from a memory perspective: we have sufficient RAM to
keep the old processes around until the connections die naturally. It is
much worse from a CPU perspective: the old workers appear to still
perform health checks and DNS lookups, which takes precious resources
away from the active workers.

My understanding is that the old workers will only handle existing
connections and will therefore never need to connect to a backend
(server) again. It should thus not be necessary to waste CPU on DNS
lookups and health checks for a stopping worker.

Am I missing something here?


That is not exactly how it works. When HAProxy is reloaded, it stops accepting new connections and closes idle HTTP connections on the server side as well as on the client side. However, on the client side, a connection is only considered idle if at least one request was processed on it, so totally inactive clients are not closed. These clients may still perform a request. Connections to servers are not blocked in old workers.
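
To make the CPU cost concrete, here is a minimal sketch of a backend that uses both health checks and runtime DNS resolution; the names "mydns", "be_app" and "app.example.com" and the addresses are hypothetical placeholders, not taken from this thread:

    resolvers mydns
        # nameserver used for runtime resolution of server hostnames
        nameserver ns1 192.0.2.53:53
        hold valid 10s

    backend be_app
        # active HTTP health check against each server
        option httpchk GET /health
        # "check" and "resolvers" are what triggers periodic work
        server srv1 app.example.com:8080 check resolvers mydns

With such a configuration, the "check" and "resolvers" keywords are precisely the mechanisms that keep ticking in old workers after a reload, which matches the CPU usage described above.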

This is especially true when the "idle-close-on-response" option is enabled. In that case, idle client connections are not closed and may still try to perform a last request. This means all backend mechanisms must keep running (load-balancing, redispatch, L7 retries, health checks, DNS resolution, ...).
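
For reference, a minimal sketch of where that option is set, assuming an otherwise ordinary HTTP proxy (the mode and timeouts are illustrative, not from this thread):

    defaults
        mode http
        timeout connect 5s
        timeout client 30s
        timeout server 30s
        # during a soft-stop, do not close idle client connections;
        # let them send one last request, then close after the response
        option idle-close-on-response

With this option set, the stopping workers must keep the whole backend machinery alive until those idle clients finally send their last request or time out.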


--
Christopher Faulet
