Hi Aleks,

"First why do you restart all haproxies at the same  time and don't use
rolling updates ?"

We restart all HAProxy instances at the same time because they all watch the
Kubernetes API; the ingress controller
(https://github.com/jcmoraisjr/haproxy-ingress) does this automatically. I
discussed with the ingress author, João Morais, the possibility of adding a
random delay before each reload, but we agreed it is not a 100% safe way to
preserve the table.
The ingress doesn't use rolling updates because reloading HAProxy is faster
than killing the whole Pod. At least I think so; I will find out more about
this.

"Maybe you can add a init container to update the peers in the current
running haproxy pod's  with socket commands, if possible."

The problem is not updating the peers; we can do that. The problem is that
all the peers reload at the same time.
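
Just to make the idea concrete, the setup we have in mind looks roughly like
the sketch below (all names, addresses and sizes are placeholders, not our
real configuration): a peers section shared by the ingress instances plus one
standalone HAProxy that never reloads, and a backend stick table keyed on the
JSESSIONID cookie.

    peers session-peers
        # each instance would start with "-L <its-peer-name>"
        peer ingress-1    10.0.0.11:10000
        peer ingress-2    10.0.0.12:10000
        peer table-keeper 10.0.0.100:10000  # standalone instance, never reloads

    backend bk_app
        # map JSESSIONID values to servers; the table is shared via the peers above
        stick-table type string len 64 size 20k expire 30m peers session-peers
        stick on req.cook(JSESSIONID)
        stick store-response res.cook(JSESSIONID)
        server app-1 10.0.1.21:8080 check
        server app-2 10.0.1.22:8080 check

The "table-keeper" entry is the separate, never-reloading instance we were
considering, so the table has somewhere to survive while all the ingress
instances reload at once.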

"* how often happen such a restart?"

Not too often, but often enough to affect some users when it happens.

"* how many entries are in the tables?"

I don't know exactly; maybe between a thousand and ten thousand.
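
If it helps to get a real number, the admin socket you linked (section 9.3)
can report it. Assuming the stats socket is enabled in the ingress pods (and
reachable with socat, for example), the first command below lists every stick
table with its configured size and the number of entries currently in use,
and the second dumps the entries of one table (the table name here is just an
example):

    show table
    show table bk_app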


Thanks!

Att,
Eduardo



On Wed, 22 May 2019 at 16:10, Aleksandar Lazic <al-hapr...@none.at>
wrote:

> Hi Eduardo.
>
> That's a pretty interesting question, at least for me.
>
> First, why do you restart all haproxies at the same time instead of using
> rolling updates?
>
> https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/
>
> Maybe you can add an init container to update the peers in the currently
> running haproxy pods with socket commands, if possible.
>
> https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
>
> http://cbonte.github.io/haproxy-dconv/1.9/management.html#9.3
>
> I agree with you that the peers possibility would be nice.
>
> Some other questions:
>
> * how often does such a restart happen?
> * how many entries are in the tables?
>
> I don't see anything wrong with using a "quorum" server. This is a pretty
> common solution, even in containerized setups.
>
> Regards
> Aleks
>
> Wed May 22 15:36:10 GMT+02:00 2019 Eduardo Doria Lima <
> eduardo.l...@trt20.jus.br>:
>
> Hi,
>
> I'm using HAProxy to support a system that was initially developed for
> Apache (AJP) and JBoss. Now we are migrating its infrastructure to a
> Kubernetes cluster with HAProxy as the ingress (load balancer).
>
> The big problem is that this system depends strictly on the JSESSIONID.
> Some internal requests made in JavaScript or Angular don't respect browser
> cookies and send only the original JBoss JSESSIONID value.
>
> Because of this we need a stick table to map JSESSIONID values. But in a
> cluster environment (https://github.com/jcmoraisjr/haproxy-ingress)
> HAProxy has many instances, and these instances don't have fixed IPs; they
> are volatile.
>
> Also, in a Kubernetes cluster everything is constantly changing, and any
> change triggers a reload of all HAProxy instances. So we lose the stick
> table.
>
> Even we use "peers" feature as described in this issue (
> https://github.com/jcmoraisjr/haproxy-ingress/issues/296) by me, we don't
> know if table will persist because all instances will reload in the same
> time.
>
> We thought about using a separate HAProxy server only to hold this table.
> That HAProxy would never reload. But I'm not comfortable using an HAProxy
> instance only for this.
>
> I would appreciate your help. Thanks!
>
>
> Att,
> Eduardo
>
>
