Hi Eduardo,

On Thu, May 23, 2019 at 10:09:55AM -0300, Eduardo Doria Lima wrote:
> Hi Aleks,
> I don't understand what you means with "local host". But could be nice if
> new process get data of old process.

That's exactly the principle. A peers section contains a number of peers,
including the local one. For example, let's say you have 4 haproxy nodes;
all of them will have the exact same section :

   peers my-cluster
       peer node1
       peer node2
       peer node3
       peer node4
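
For reference, a backend then points its stick-table at that peers
section so the table is shared and survives reloads. The table
parameters below are just an illustration, adjust them to your case :

   backend app
       stick-table type ip size 200k expire 30m peers my-cluster
       stick on src

Each node pushes its local updates to the other peers and learns theirs
in return.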

When haproxy starts, it checks whether one of the peers has the same name
as the local machine; if so, it considers it the local peer and will
try to synchronize the full tables with it. In practice this means that
the old process connects to the new one to teach it everything.
When your peers don't hold the local machine's name, you can force the
local peer name on the command line using -L, e.g. "-L node3".

Also, be sure to properly reload, not restart! A restart (-st) will
kill the old process without leaving it a chance to resynchronize! A
reload (-sf) will tell it to finish its work then quit, and among its
work there's the resync job ;-)
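
Concretely, assuming your config is at /etc/haproxy/haproxy.cfg and the
pid file at /run/haproxy.pid (adjust the paths for your setup), a proper
reload looks like this :

   haproxy -f /etc/haproxy/haproxy.cfg -L node3 -sf $(cat /run/haproxy.pid)

The new process starts, the old one (whose pid is passed via -sf) is told
to finish its work, and during that window the peers protocol lets the
old process push its tables to the new one.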

> As I said to João Morais, we "solve" this problem adding a sidecar HAProxy
> (another container in same pod) only to store the sticky-table of main
> HAProxy. In my opinion it's a resource waste, but this is best solution now.

That's a shame because the peers naturally support not losing tables on
reload, so indeed your solution is way more complex!

Hoping this helps,
