Re: Sticky-table persistence in a Kubernetes environment

2019-05-23 Thread Willy Tarreau
Hi Eduardo,

On Thu, May 23, 2019 at 10:09:55AM -0300, Eduardo Doria Lima wrote:
> Hi Aleks,
> 
> I don't understand what you mean by "local host". But it would be nice if
> the new process got the data of the old process.

That's exactly the principle. A peers section contains a number of peers,
including the local one. For example, let's say you have 4 haproxy nodes;
all of them will have the exact same section:

   peers my-cluster
       peer node1 10.0.0.1:1200
       peer node2 10.0.0.2:1200
       peer node3 10.0.0.3:1200
       peer node4 10.0.0.4:1200

When you start haproxy, it checks whether there is a peer with the same name
as the local machine; if so, it considers it the local peer and will try to
synchronize the full tables with it. Normally this means that the old process
connects to the new one to teach it everything. When your peers don't hold
the same names as your hosts, you can force the local peer name on the
command line using -L, e.g. "-L node3".

Also, be sure to properly reload, not restart! A restart (-st) will
kill the old process without leaving it a chance to resynchronize! A
reload (-sf) will tell it to finish its work and then quit, and among its
work there's the resync job ;-)
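
For illustration, such a reload typically looks like the sketch below (the
paths and the local peer name here are hypothetical, adjust them to your
setup):

   # the new process takes over the listeners, while the old process is
   # asked to finish its work (including the resync) before exiting;
   # -L forces the local peer name used to match the peers section
   haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid \
           -L node3 -sf $(cat /run/haproxy.pid)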

> As I said to João Morais, we "solved" this problem by adding a sidecar HAProxy
> (another container in the same pod) only to store the sticky-table of the main
> HAProxy. In my opinion it's a waste of resources, but it's the best solution for now.

That's a shame, because the peers naturally support not losing tables on
reload, so your solution is indeed way more complex than it needs to be!

Hoping this helps,
Willy



Re: Sticky-table persistence in a Kubernetes environment

2019-05-23 Thread Eduardo Doria Lima
Hi Aleks,

I don't understand what you mean by "local host". But it would be nice if
the new process got the data of the old process.

As I said to João Morais, we "solved" this problem by adding a sidecar HAProxy
(another container in the same pod) only to store the sticky-table of the main
HAProxy. In my opinion it's a waste of resources, but it's the best solution for now.

I know João doesn't have time to implement the peers part now. But I'm trying
to run some tests, and if they are successful I can make a pull request.


Regards,
Eduardo



Re: Sticky-table persistence in a Kubernetes environment

2019-05-23 Thread Aleksandar Lazic


Hi Eduardo.

Thu May 23 14:30:46 GMT+02:00 2019, Eduardo Doria Lima wrote:

> Hi Aleks,
> "First, why do you restart all haproxies at the same time instead of using
> rolling updates?"
> We restart all HAProxys at the same time because they watch the Kubernetes
> API. The ingress ( https://github.com/jcmoraisjr/haproxy-ingress ) does this
> automatically. I was talking with the ingress creator João Morais about the
> possibility of using a random delay before restarting, but we agree it's not
> 100% safe for keeping the table. The ingress doesn't use rolling updates
> because it's faster to reload HAProxy than to kill the entire Pod. I think.
> I will find out more about this.

João, Baptiste and I talked about this topic at the KubeCon here, and the
suggestion was to add the "local host" in the peers section.
When a restart happens, the new haproxy process asks the old haproxy process
for the data.

I don't know when João will have the time to implement the peers part.

Regards
 Aleks

> "Maybe you can add a init container to update the peers in the current 
> running haproxy pod's with socket commands, if possible."
 > The problem is not update the peers, we can do this. The problem is all the 
 > peers reload at same time.
 > "* how often happen such a restart?"
 > Not to much, but enough to affect some users when it occurs.
 >
 > "* how many entries are in the tables?"
 > I don't know exactly, maybe between thousand and ten thousand.
 >
 > Thanks!
 > Att, Eduardo
 >
 >
 >
 > Em qua, 22 de mai de 2019 às 16:10, Aleksandar Lazic < al-hapr...@none.at [] 
 > > escreveu:
 >
 >>
 >> Hi Eduardo.
 >>
 >> That's a pretty interesting question, at least for me.
 >>
 >> First why do you restart all haproxies at the same time and don't use 
 >> rolling updates ?
 >>
 >> https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/ 
 >> [https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/]
 >>
 >> Maybe you can add a init container to update the peers in the current 
 >> running haproxy pod's with socket commands, if possible.
 >>
 >> https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ 
 >> [https://kubernetes.io/docs/concepts/workloads/pods/init-containers/]
 >>
 >> http://cbonte.github.io/haproxy-dconv/1.9/management.html#9.3 
 >> [http://cbonte.github.io/haproxy-dconv/1.9/management.html#9.3]
 >>
 >> Agree with you that peers possibility would be nice.
 >>
 >> Some other questions are.
 >>
 >> * how often happen such a restart?
 >> * how many entries are in the tables?
 >>
 >> I don't see anything wrong to use a "quorum" Server. This is a pretty 
 >> common solution even on contained setups.
 >>
 >> Regards
 >> Aleks
 >>
 >> Wed May 22 15:36:10 GMT+02:00 2019 Eduardo Doria Lima < 
 >> eduardo.l...@trt20.jus.br [] >:
 >>
 >>> Hi,
 >>> I'm using HAProxy to support a system that was initially developed for 
 >>> Apache (AJP) and JBoss. Now we are migrating it's infrastructure to a 
 >>> Kubernetes cluster with HAProxy as ingress (load balancer).
 >>> The big problem is this system depends strict to JSESSIONID. Some internal 
 >>> requests made in Javascript or Angular don't respect browser cookies and 
 >>> send requests only with original Jboss JSESSIONID value.
 >>> Because of this we need a sticky-table to map JSESSIONID values. But in a 
 >>> cluster environment ( https://github.com/jcmoraisjr/haproxy-ingress 
 >>> [https://github.com/jcmoraisjr/haproxy-ingress] ) HAProxy has many 
 >>> instances and this instances don't have fixed IP, they are volatile.
 >>> Also, in Kubernetes cluster everything is in constant change and any 
 >>> change is a reload of all HAProxy instances. So, we lost the sticky-table.
 >>> Even we use "peers" feature as described in this issue ( 
 >>> https://github.com/jcmoraisjr/haproxy-ingress/issues/296 
 >>> [https://github.com/jcmoraisjr/haproxy-ingress/issues/296] ) by me, we 
 >>> don't know if table will persist because all instances will reload in the 
 >>> same time.
 >>> We thought to use a separate HAProxy server only to cache this table. This 
 >>> HAProxy will never reload. But I'm not comfortable to use a HAProxy server 
 >>> instance only for this.
 >>> I appreciate if you help me. Thanks!
 >>>
 >>> Att,
 >>> Eduardo
 >>>
 >>
 >





Re: Sticky-table persistence in a Kubernetes environment

2019-05-23 Thread Eduardo Doria Lima
Hi Aleks,

"First why do you restart all haproxies at the same  time and don't use
rolling updates ?"

We restarts all HAProxys at the same time because they watch Kubernetes
API. The ingress (https://github.com/jcmoraisjr/haproxy-ingress) do this
automatic. I was talking with ingress creator João Morais about the
possibility of use a random value to restart but we agree it's not 100%
secure to keep the table.
The ingress don't use rolling update because it's fast to realod HAProxy
than kill entire Pod. I think. I will find more about this.

"Maybe you can add a init container to update the peers in the current
running haproxy pod's  with socket commands, if possible."

The problem is not update the peers, we can do this. The problem is all the
peers reload at same time.

"* how often happen such a restart?"

Not to much, but enough to affect some users when it occurs.

"* how many entries are in the tables?"

I don't know exactly, maybe between thousand and ten thousand.


Thanks!

Regards,
Eduardo





Re: Sticky-table persistence in a Kubernetes environment

2019-05-22 Thread Aleksandar Lazic

Hi Eduardo.

That's a pretty interesting question, at least for me.

First, why do you restart all haproxies at the same time instead of using
rolling updates?

https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/

Maybe you can add an init container to update the peers in the currently
running haproxy pods with socket commands, if possible.

https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

http://cbonte.github.io/haproxy-dconv/1.9/management.html#9.3
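
For reference, the stick-table part of that socket API looks roughly like the
sketch below (the socket path and the table name are hypothetical):

   # dump the current entries of a table, then drop a single key by hand
   echo "show table bk_app" | socat stdio /var/run/haproxy.sock
   echo "clear table bk_app key abc123" | socat stdio /var/run/haproxy.sock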

I agree with you that the peers possibility would be nice.

Some other questions:

* How often does such a restart happen?
* How many entries are in the tables?

I don't see anything wrong with using a "quorum" server. This is a pretty
common solution even in container setups.

Regards
 Aleks




Sticky-table persistence in a Kubernetes environment

2019-05-22 Thread Eduardo Doria Lima
Hi,

I'm using HAProxy to support a system that was initially developed for
Apache (AJP) and JBoss. Now we are migrating its infrastructure to a
Kubernetes cluster with HAProxy as the ingress (load balancer).

The big problem is that this system depends strictly on JSESSIONID. Some
internal requests made in JavaScript or Angular don't respect browser cookies
and send requests only with the original JBoss JSESSIONID value.

Because of this we need a sticky-table to map JSESSIONID values. But in a
cluster environment (https://github.com/jcmoraisjr/haproxy-ingress) HAProxy
has many instances, and these instances don't have fixed IPs; they are
volatile.

Also, in a Kubernetes cluster everything is in constant change, and any change
means a reload of all HAProxy instances. So we lose the sticky-table.

Even if we use the "peers" feature as I described in this issue
(https://github.com/jcmoraisjr/haproxy-ingress/issues/296), we don't know if
the table will persist, because all instances will reload at the same time.
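
For reference, the kind of configuration involved would look roughly like the
sketch below (the peer names, addresses and table parameters are made up):

   peers my-cluster
       peer ingress-a 10.0.0.1:1024
       peer ingress-b 10.0.0.2:1024

   backend app
       # the table is shared through the peers section, so its contents can
       # survive a reload as long as there is a peer to resync from
       stick-table type string len 64 size 100k expire 30m peers my-cluster
       stick on req.cook(JSESSIONID)
       stick store-response res.cook(JSESSIONID)
       server s1 10.1.0.11:8080 check
       server s2 10.1.0.12:8080 check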

We thought of using a separate HAProxy server only to hold this table. That
HAProxy would never reload. But I'm not comfortable with using an HAProxy
server instance only for this.

I would appreciate any help. Thanks!


Regards,
Eduardo