Hi Eduardo.

That's a pretty interesting question, at least for me.

First, why do you restart all HAProxy instances at the same time instead of 
using rolling updates?
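Since haproxy-ingress normally runs as a Deployment, a rolling update could be 
expressed in the manifest roughly like this (a sketch only; the field values 
are assumptions to adjust for your setup):

```yaml
# Rolling-update strategy sketch for the haproxy-ingress Deployment.
# With these settings Kubernetes replaces one HAProxy pod at a time,
# so the remaining pods keep serving (and keep their stick tables)
# while each new pod comes up.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # take down at most one pod at a time
      maxSurge: 1         # allow one extra pod during the rollout
```

That way there is always at least one running instance that still holds the 
table and can sync it to the new peers as they start.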


Maybe you can add an init container that updates the peers in the currently 
running HAProxy pods via socket commands, if that is possible.
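To give an idea of what those socket commands look like, here is a sketch 
using the HAProxy runtime API over the admin socket (the socket path and the 
table name "bk_app" are assumptions; adjust to your configuration):

```shell
# Inspect the current contents of the stick table:
echo "show table bk_app" | socat stdio /var/run/haproxy.sock

# Inject an entry into the table via the runtime API
# (the key placeholder stands for a real JSESSIONID value):
echo "set table bk_app key <session-id> data.server_id 1" \
  | socat stdio /var/run/haproxy.sock
```

An init container (or sidecar) could read entries from the old pods this way 
and replay them into the new one before it starts taking traffic.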



I agree with you that support for peers would be nice.

Some other questions:

* How often does such a restart happen?
* How many entries are in the tables?

I don't see anything wrong with using a dedicated "quorum" server. This is a 
pretty common solution, even in containerized setups.
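For reference, wiring the volatile ingress instances to a stable, 
never-reloaded peers server could look roughly like this (a sketch only; the 
names, addresses, and table definition are assumptions, and each peer name 
must match the hostname of its HAProxy instance, or be set with the -L flag):

```haproxy
# Peers section shared by all instances; "quorum-haproxy" is the
# dedicated server that never reloads and therefore keeps the table.
peers mypeers
    peer quorum-haproxy 10.0.0.10:1024
    peer ingress-1      10.0.0.11:1024

backend bk_app
    # Map JSESSIONID values to servers and replicate via the peers.
    stick-table type string len 52 size 100k expire 30m peers mypeers
    stick on req.cook(JSESSIONID)
    stick store-response res.cook(JSESSIONID)
    server app1 10.0.1.21:8080
    server app2 10.0.1.22:8080
```

When an ingress instance reloads, it can re-learn the table from the quorum 
peer instead of starting empty.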


Wed May 22 15:36:10 GMT+02:00 2019 Eduardo Doria Lima 

> Hi,
> I'm using HAProxy to support a system that was initially developed for Apache 
> (AJP) and JBoss. Now we are migrating its infrastructure to a Kubernetes 
> cluster with HAProxy as ingress (load balancer).
> The big problem is that this system depends strictly on the JSESSIONID. Some 
> internal requests made in JavaScript or Angular don't respect browser cookies 
> and send requests only with the original JBoss JSESSIONID value.
> Because of this we need a stick table to map JSESSIONID values. But in a 
> cluster environment ( https://github.com/jcmoraisjr/haproxy-ingress ) HAProxy 
> has many instances, and these instances don't have fixed IPs; they are 
> volatile.
> Also, in a Kubernetes cluster everything is in constant change, and any 
> change triggers a reload of all HAProxy instances. So we lose the stick table.
> Even if we use the "peers" feature as described by me in this issue ( 
> https://github.com/jcmoraisjr/haproxy-ingress/issues/296 ), we don't know if 
> the table will persist, because all instances reload at the same time.
> We thought about using a separate HAProxy server only to cache this table. 
> This HAProxy would never reload. But I'm not comfortable using an HAProxy 
> server instance only for this.
> I'd appreciate it if you could help me. Thanks!
> Regards,
> Eduardo
