Re: Sticky-table persistence in a Kubernetes environment

2019-05-23 Thread Willy Tarreau
Hi Eduardo, On Thu, May 23, 2019 at 10:09:55AM -0300, Eduardo Doria Lima wrote: > Hi Aleks, > > I don't understand what you mean with "local host". But it could be nice if > the new process got the data of the old process. That's exactly the principle. A peers section contains a number of peers, including the
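
A minimal sketch of the idea (names and addresses are hypothetical, not from the thread): each HAProxy lists all peers, including itself, and identifies its own entry by the name given with -L (or, by default, its hostname). On a soft reload, the new process contacts the old one through this section and pulls a full copy of the stick-table, so entries survive the reload.

    peers mypeers
        # the entry matching the -L name (or hostname) is the local peer
        peer hap1 192.168.0.10:1024
        peer hap2 192.168.0.11:1024

    backend app
        # entries in this table are replicated to the peers above
        stick-table type ip size 200k expire 30m peers mypeers
        stick on src
        server s1 10.0.0.1:8080 check
        server s2 10.0.0.2:8080 check

The first node would be started as "haproxy -f /etc/haproxy/haproxy.cfg -L hap1", the second with "-L hap2".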

Re: Sticky-table persistence in a Kubernetes environment

2019-05-23 Thread Eduardo Doria Lima
Hi Aleks, I don't understand what you mean with "local host". But it could be nice if the new process got the data of the old process. As I said to João Morais, we "solve" this problem by adding a sidecar HAProxy (another container in the same pod) only to store the sticky-table of the main HAProxy. In my opinion it's a
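
A hedged sketch of that sidecar arrangement (peer names and ports are made up for illustration): both containers share the pod's network namespace, so they can peer over 127.0.0.1. The sidecar keeps a synchronized copy of the table, and the main process re-learns it from the sidecar after a restart.

    # shared by both containers; the main process runs with -L main,
    # the sidecar with -L sidecar
    peers pod_peers
        peer main    127.0.0.1:10000
        peer sidecar 127.0.0.1:10001

    backend app
        stick-table type ip size 100k expire 30m peers pod_peers
        stick on src

Note that the sidecar also needs a stick-table definition referencing the same peers section, otherwise it has nowhere to store the replicated entries.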

Re: Sticky-table persistence in a Kubernetes environment

2019-05-23 Thread Aleksandar Lazic
Hi Eduardo. Thu May 23 14:30:46 GMT+02:00 2019 Eduardo Doria Lima: > Hi Aleks, > "First, why do you restart all haproxies at the same time instead of using > rolling updates?" > We restart all HAProxys at the same time because they watch the Kubernetes API. > The ingress (

Re: Sticky-table persistence in a Kubernetes environment

2019-05-23 Thread Eduardo Doria Lima
Hi Aleks, "First, why do you restart all haproxies at the same time instead of using rolling updates?" We restart all HAProxys at the same time because they watch the Kubernetes API. The ingress (https://github.com/jcmoraisjr/haproxy-ingress) does this automatically. I was talking with the ingress creator João

Re: Sticky-table persistence in a Kubernetes environment

2019-05-22 Thread Aleksandar Lazic
Hi Eduardo. That's a pretty interesting question, at least for me. First, why do you restart all haproxies at the same time instead of using rolling updates? https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/ Maybe you can add an init container to update the peers in the
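
A hedged sketch of what such an init container might render into the configuration (pod names and addresses are hypothetical): a peers section listing every ingress replica, with each pod started with -L set to its own entry, so that during a rolling update new pods can resync the table from the replicas still running.

    # generated at pod start-up, e.g. from the replicas' addresses
    peers ingress_peers
        peer ingress-0 10.244.1.5:1024
        peer ingress-1 10.244.2.7:1024
        peer ingress-2 10.244.3.9:1024

Each instance would then start as "haproxy -L ingress-0 ..." and so on, matching its own peer name.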