Hi Aleks,

I don't understand what you mean by "local host". But it would be nice if
the new process could get the data from the old process.
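
If you mean declaring the local instance itself as a peer, so that on a soft
reload the old process hands the stick table over to the new one, I imagine
something like this (just a sketch; the peer name and port are only examples):

    peers ingress-peers
        # the peer name must match this instance's hostname (or the name
        # passed with the -L command line option) so it is treated as local
        peer haproxy-ingress-1 127.0.0.1:10000

    backend app
        stick-table type string len 52 size 100k expire 30m peers ingress-peers

Is that what you have in mind?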

As I said to João Morais, we "solved" this problem by adding a sidecar HAProxy
(another container in the same pod) only to store the stick table of the main
HAProxy. In my opinion it's a waste of resources, but it's the best solution for now.
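
For reference, our sidecar setup looks roughly like this (simplified; the
names, ports and table sizes are only illustrative):

    # main (ingress) instance, reloaded on every Kubernetes change
    peers sticky-peers
        peer haproxy-ingress 127.0.0.1:10000   # this instance (local peer)
        peer haproxy-sidecar 127.0.0.1:10001   # sidecar in the same pod, never reloaded

    backend app
        stick-table type string len 52 size 100k expire 30m peers sticky-peers
        # map the JSESSIONID issued by JBoss to the server that answered
        stick store-response res.cook(JSESSIONID)
        stick match req.cook(JSESSIONID)
        server app-1 10.0.0.1:8080 check

The sidecar only declares the same peers section and a backend with the same
name and stick-table definition, so it always keeps a copy of the table and
pushes it back to the main instance after a reload.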

I know João doesn't have time to implement the peers part now. But I'm running
some tests, and if they are successful I can open a pull request.


Regards,
Eduardo

On Thu, May 23, 2019 at 09:40, Aleksandar Lazic <al-hapr...@none.at>
wrote:

>
> Hi Eduardo.
>
> Thu May 23 14:30:46 GMT+02:00 2019 Eduardo Doria Lima:
>
> > Hi Aleks,
> >
> > "First, why do you restart all haproxies at the same time and don't use
> > rolling updates?"
> >
> > We restart all HAProxys at the same time because they watch the Kubernetes
> > API. The ingress ( https://github.com/jcmoraisjr/haproxy-ingress ) does
> > this automatically. I was talking with the ingress creator João Morais
> > about the possibility of using a random value for the restart, but we
> > agreed it's not 100% safe for keeping the table. The ingress doesn't use
> > rolling updates because it's faster to reload HAProxy than to kill the
> > entire Pod. I think. I will find out more about this.
>
> João, Baptiste and I talked about this topic at the kubeconf here, and the
> suggestion was to add the "local host" in the peers section.
> When a restart happens, the new haproxy process asks the old haproxy process
> for the data.
>
> I don't know when João will have the time to implement the peers part.
>
> Regards
> Aleks
>
> > "Maybe you can add an init container to update the peers in the currently
> > running haproxy pods with socket commands, if possible."
> >
> > The problem is not updating the peers, we can do this. The problem is that
> > all the peers reload at the same time.
> >
> > "* how often does such a restart happen?"
> >
> > Not too often, but enough to affect some users when it occurs.
> >
> > "* how many entries are in the tables?"
> >
> > I don't know exactly, maybe between a thousand and ten thousand.
> >
> > Thanks!
> > Regards, Eduardo
> >
> > On Wed, May 22, 2019 at 16:10, Aleksandar Lazic <al-hapr...@none.at>
> > wrote:
> >
> >>
> >> Hi Eduardo.
> >>
> >> That's a pretty interesting question, at least for me.
> >>
> >> First, why do you restart all haproxies at the same time and don't use
> >> rolling updates?
> >>
> >> https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/
> >>
> >> Maybe you can add an init container to update the peers in the currently
> >> running haproxy pods with socket commands, if possible.
> >>
> >> https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
> >>
> >> http://cbonte.github.io/haproxy-dconv/1.9/management.html#9.3
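> >>
> >> For example, something like this might work against the admin socket
> >> (untested; it assumes the socket is exposed at /var/run/haproxy.sock and
> >> the backend holding the table is called "app"):
> >>
> >>   # dump the current stick table
> >>   echo "show table app" | socat stdio /var/run/haproxy.sock
> >>
> >>   # seed one entry, mapping a JSESSIONID value to server id 1
> >>   echo "set table app key <jsessionid> data.server_id 1" | socat stdio /var/run/haproxy.sock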
> >>
> >> Agree with you that the peers possibility would be nice.
> >>
> >> Some other questions are:
> >>
> >> * how often does such a restart happen?
> >> * how many entries are in the tables?
> >>
> >> I don't see anything wrong with using a "quorum" server. This is a pretty
> >> common solution even in containerized setups.
> >>
> >> Regards
> >> Aleks
> >>
> >> Wed May 22 15:36:10 GMT+02:00 2019 Eduardo Doria Lima
> >> <eduardo.l...@trt20.jus.br>:
> >>
> >>> Hi,
> >>>
> >>> I'm using HAProxy to support a system that was initially developed for
> >>> Apache (AJP) and JBoss. Now we are migrating its infrastructure to a
> >>> Kubernetes cluster with HAProxy as ingress (load balancer).
> >>>
> >>> The big problem is that this system depends strictly on the JSESSIONID.
> >>> Some internal requests made in Javascript or Angular don't respect
> >>> browser cookies and send requests only with the original JBoss
> >>> JSESSIONID value.
> >>>
> >>> Because of this we need a stick table to map JSESSIONID values. But in a
> >>> cluster environment ( https://github.com/jcmoraisjr/haproxy-ingress )
> >>> HAProxy has many instances, and these instances don't have fixed IPs,
> >>> they are volatile.
> >>>
> >>> Also, in a Kubernetes cluster everything is in constant change, and any
> >>> change means a reload of all HAProxy instances. So we lose the stick
> >>> table.
> >>>
> >>> Even if we use the "peers" feature as described in this issue
> >>> ( https://github.com/jcmoraisjr/haproxy-ingress/issues/296 ), opened by
> >>> me, we don't know if the table will persist because all instances reload
> >>> at the same time.
> >>>
> >>> We thought about using a separate HAProxy server only to cache this
> >>> table. This HAProxy would never reload. But I'm not comfortable using an
> >>> HAProxy server instance only for this.
> >>>
> >>> I would appreciate your help. Thanks!
> >>>
> >>> Regards,
> >>> Eduardo
> >>>
> >>
> >
>
>
>
