Question regarding haproxy backend behaviour

2018-04-22 Thread Moemen MHEDHBI
Hi


On 18/04/2018 21:46, Ayush Goyal wrote:
> Hi
>
> Thanks Igor/Moemen for your response. I hadn't considered frontend
> queuing, although I am not sure where to measure it. I have wound down
> the benchmark infrastructure for the time being and it would take me some
> time to replicate it again to provide additional stats. In the
> meantime, I am attaching a sample log of 200 lines from the benchmarks
> on one of the haproxy servers.
>

Sorry for the late reply. In order to explain the stats you were seeing
let us get back to your first question:
>  1. How are the nginx_backend connections being terminated to serve the new
>  connections?

As mentioned in the previous answer, the backend connection can be terminated
when the server decides to close the connection, due to an HAProxy timeout,
or when the client terminates the connection.
But in keep-alive mode, when the server closes the connection, HAProxy
won't close the client-side connection. So unless the client asks to close
the connection (in keep-alive mode the client keeps the connection open for
further requests), you will see more connections on the frontend side than
on the backend side.
You can use "option forceclose", which ensures that HAProxy actively closes
the connection on both sides after each request; you will then see that the
frontend and backend connection counts are much closer.
Frontend connections may still be a little higher because, in general (when
HAProxy and the servers are in the same site), the latency on the frontend
side is higher than on the backend side.
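
For example, a minimal sketch of where it could go, reusing the defaults
section from your posted configuration (adjust to taste):

```
defaults
    mode http
    option forwardfor
    option forceclose      # HAProxy actively closes both sides after each request
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    timeout tunnel 30m
    timeout client-fin 5s
```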

> Reading the logs, however, I could see that both srv_queue and
> backend_queue are 0. One detail that you may notice when reading the logs,
> which I had omitted earlier for the sake of simplicity, is that the
> nginx_ssl_fe frontend is bound on 2 processes to split cpu load. So instead
> of this:
>
>> frontend nginx_ssl_fe
>>         bind *:8443 ssl 
>>         maxconn 10
>>         bind-process 2
>
> it has:
>
>>         bind-process 2 3
>
> In these logs the haproxy ssl_sess_id_router frontend is handling 21k
> frontend connections, and each of the two nginx_ssl_fe processes is
> handling approx 10k frontend connections, for a total of ~20k frontend
> connections. This is just one node; with the other nodes behaving
> similarly, the frontend connections total ~63k for the ssl_sess_id_router
> frontend and ~60k across all nginx_ssl_fe frontends. Nginx is still
> handling only 32k connections from nginx_backend.
>
> Please let me know if you need more info.
>
> Thanks,
> Ayush Goyal
>    
>  
>
> On Tue, Apr 17, 2018 at 10:03 PM Moemen MHEDHBI wrote:
>
> Hi
>
>
> On 16/04/2018 12:04, Igor Cicimov wrote:
>>
>>
>> On Mon, 16 Apr 2018 6:09 pm Ayush Goyal wrote:
>>
>> Hi Moemen,
>>
>> Thanks for your response. But I think I need to clarify a few
>> things here. 
>>
>> On Mon, Apr 16, 2018 at 4:33 AM Moemen MHEDHBI wrote:
>>
>> Hi
>>
>>
>> On 12/04/2018 19:16, Ayush Goyal wrote:
>>> Hi,
>>>
>>> I have a question regarding haproxy backend connection
>>> behaviour. We have following setup:
>>>
>>>   +---------+     +-------+
>>>   | haproxy |---->| nginx |
>>>   +---------+     +-------+
>>>
>>> We use a haproxy cluster for ssl off-loading and then
>>> load balance request to
>>> nginx cluster. We are currently benchmarking this setup
>>> with 3 nodes for haproxy
>>> cluster and 1 nginx node. Each haproxy node has two
>>> frontend/backend pair. First
>>> frontend is a router for ssl connection which
>>> redistributes request to the second 
>>> frontend in the haproxy cluster. The second frontend is
>>> for ssl handshake and 
>>> routing requests to nginx servers. Our configuration is
>>> as follows:
>>>
>>> ```
>>> global
>>>     maxconn 10
>>>     user haproxy
>>>     group haproxy
>>>     nbproc 2
>>>     cpu-map 1 1
>>>     cpu-map 2 2
>>>
>>> defaults
>>>     mode http
>>>     option forwardfor
>>>     timeout connect 5s
>>>     timeout client 30s
>>>     timeout server 30s
>>>     timeout tunnel 30m
>>>     timeout client-fin 5s
>>>
>>> frontend ssl_sess_id_router
>>>         bind *:443
>>>         bind-process 1
>>>         mode tcp
>>>         maxconn 10
>>>         log global
>>>         option tcp-smart-accept
>>>         option 

Re: Question regarding haproxy backend behaviour

2018-04-18 Thread Ayush Goyal
Hi

Thanks Igor/Moemen for your response. I hadn't considered frontend queuing,
although I am not sure where to measure it. I have wound down the benchmark
infrastructure for the time being and it would take me some time to replicate
it again to provide additional stats. In the meantime, I am attaching a
sample log of 200 lines from the benchmarks on one of the haproxy servers.

Reading the logs, however, I could see that both srv_queue and backend_queue
are 0. One detail that you may notice when reading the logs, which I had
omitted earlier for the sake of simplicity, is that the nginx_ssl_fe frontend
is bound on 2 processes to split cpu load. So instead of this:

frontend nginx_ssl_fe
        bind *:8443 ssl 
        maxconn 10
        bind-process 2

it has:

        bind-process 2 3
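
For clarity, a minimal sketch of the adjusted layout; the nbproc/cpu-map
lines for the third process are assumed here, since running the frontend on
processes 2 and 3 requires at least 3 worker processes:

```
global
    nbproc 3           # assumed: raised from 2 so that process 3 exists
    cpu-map 1 1
    cpu-map 2 2
    cpu-map 3 3        # assumed: pin the extra process to its own core

frontend nginx_ssl_fe
    bind *:8443 ssl    # certificate arguments elided as in the original post
    bind-process 2 3   # the frontend now runs on processes 2 and 3
```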

In these logs the haproxy ssl_sess_id_router frontend is handling 21k frontend
connections, and each of the two nginx_ssl_fe processes is handling approx 10k
frontend connections, for a total of ~20k frontend connections. This is just
one node; with the other nodes behaving similarly, the frontend connections
total ~63k for the ssl_sess_id_router frontend and ~60k across all
nginx_ssl_fe frontends. Nginx is still handling only 32k connections from
nginx_backend.

Please let me know if you need more info.

Thanks,
Ayush Goyal



On Tue, Apr 17, 2018 at 10:03 PM Moemen MHEDHBI wrote:

> Hi
>
> On 16/04/2018 12:04, Igor Cicimov wrote:
>
>
>
> On Mon, 16 Apr 2018 6:09 pm Ayush Goyal  wrote:
>
>> Hi Moemen,
>>
>> Thanks for your response. But I think I need to clarify a few things
>> here.
>>
>> On Mon, Apr 16, 2018 at 4:33 AM Moemen MHEDHBI 
>> wrote:
>>
>>> Hi
>>>
>>> On 12/04/2018 19:16, Ayush Goyal wrote:
>>>
>>> Hi,
>>>
>>> I have a question regarding haproxy backend connection behaviour. We
>>> have following setup:
>>>
>>>   +---------+     +-------+
>>>   | haproxy |---->| nginx |
>>>   +---------+     +-------+
>>>
>>> We use a haproxy cluster for ssl off-loading and then load balance
>>> request to
>>> nginx cluster. We are currently benchmarking this setup with 3 nodes for
>>> haproxy
>>> cluster and 1 nginx node. Each haproxy node has two frontend/backend
>>> pair. First
>>> frontend is a router for ssl connection which redistributes request to
>>> the second
>>> frontend in the haproxy cluster. The second frontend is for ssl
>>> handshake and
>>> routing requests to nginx servers. Our configuration is as follows:
>>>
>>> ```
>>> global
>>> maxconn 10
>>> user haproxy
>>> group haproxy
>>> nbproc 2
>>> cpu-map 1 1
>>> cpu-map 2 2
>>>
>>> defaults
>>> mode http
>>> option forwardfor
>>> timeout connect 5s
>>> timeout client 30s
>>> timeout server 30s
>>> timeout tunnel 30m
>>> timeout client-fin 5s
>>>
>>> frontend ssl_sess_id_router
>>> bind *:443
>>> bind-process 1
>>> mode tcp
>>> maxconn 10
>>> log global
>>> option tcp-smart-accept
>>> option splice-request
>>> option splice-response
>>> default_backend ssl_sess_id_router_backend
>>>
>>> backend ssl_sess_id_router_backend
>>> bind-process 1
>>> mode tcp
>>> fullconn 5
>>> balance roundrobin
>>> ..
>>> option tcp-smart-connect
>>> server lbtest01 :8443 weight 1 check send-proxy
>>> server lbtest02 :8443 weight 1 check send-proxy
>>> server lbtest03 :8443 weight 1 check send-proxy
>>>
>>> frontend nginx_ssl_fe
>>> bind *:8443 ssl 
>>> maxconn 10
>>> bind-process 2
>>> option tcp-smart-accept
>>> option splice-request
>>> option splice-response
>>> option forwardfor
>>> reqadd X-Forwarded-Proto:\ https
>>> timeout client-fin 5s
>>> timeout http-request 8s
>>> timeout http-keep-alive 30s
>>> default_backend nginx_backend
>>>
>>> backend nginx_backend
>>> bind-process 2
>>> balance roundrobin
>>> http-reuse safe
>>> option tcp-smart-connect
>>> option splice-request
>>> option splice-response
>>> timeout tunnel 30m
>>> timeout http-request 8s
>>> timeout http-keep-alive 30s
>>> server testnginx :80  weight 1 check
>>> ```
>>>
>>> The nginx node has nginx with 4 workers and 8192 max clients, therefore
>>> the max
>>> number of connection it can accept is 32768.
>>>
>>> For benchmark, we are generating ~3k new connections per second where
>>> each
>>> connection makes 1 http request and then holds the connection for next 30
>>> seconds. This results in a high established connection on the first
>>> frontend,
>>> ssl_sess_id_router, ~25k per haproxy node (Total ~77k connections on 3
>>> haproxy
>>> nodes). The second frontend (nginx_ssl_fe) receives the same number of
>>> connection on the frontend. On nginx node, we see that active connections
>>> 

Re: Question regarding haproxy backend behaviour

2018-04-17 Thread Moemen MHEDHBI
Hi


On 16/04/2018 12:04, Igor Cicimov wrote:
>
>
> On Mon, 16 Apr 2018 6:09 pm Ayush Goyal wrote:
>
> Hi Moemen,
>
> Thanks for your response. But I think I need to clarify a few
> things here. 
>
> On Mon, Apr 16, 2018 at 4:33 AM Moemen MHEDHBI wrote:
>
> Hi
>
>
> On 12/04/2018 19:16, Ayush Goyal wrote:
>> Hi,
>>
>> I have a question regarding haproxy backend connection
>> behaviour. We have following setup:
>>
>>   +---------+     +-------+
>>   | haproxy |---->| nginx |
>>   +---------+     +-------+
>>
>> We use a haproxy cluster for ssl off-loading and then load
>> balance request to
>> nginx cluster. We are currently benchmarking this setup with
>> 3 nodes for haproxy
>> cluster and 1 nginx node. Each haproxy node has two
>> frontend/backend pair. First
>> frontend is a router for ssl connection which redistributes
>> request to the second 
>> frontend in the haproxy cluster. The second frontend is for
>> ssl handshake and 
>> routing requests to nginx servers. Our configuration is as
>> follows:
>>
>> ```
>> global
>>     maxconn 10
>>     user haproxy
>>     group haproxy
>>     nbproc 2
>>     cpu-map 1 1
>>     cpu-map 2 2
>>
>> defaults
>>     mode http
>>     option forwardfor
>>     timeout connect 5s
>>     timeout client 30s
>>     timeout server 30s
>>     timeout tunnel 30m
>>     timeout client-fin 5s
>>
>> frontend ssl_sess_id_router
>>         bind *:443
>>         bind-process 1
>>         mode tcp
>>         maxconn 10
>>         log global
>>         option tcp-smart-accept
>>         option splice-request
>>         option splice-response
>>         default_backend ssl_sess_id_router_backend
>>
>> backend ssl_sess_id_router_backend
>>         bind-process 1
>>         mode tcp
>>         fullconn 5
>>         balance roundrobin
>>         ..
>>         option tcp-smart-connect
>>         server lbtest01 :8443 weight 1 check send-proxy
>>         server lbtest02 :8443 weight 1 check send-proxy
>>         server lbtest03 :8443 weight 1 check send-proxy
>>
>> frontend nginx_ssl_fe
>>         bind *:8443 ssl 
>>         maxconn 10
>>         bind-process 2
>>         option tcp-smart-accept
>>         option splice-request
>>         option splice-response
>>         option forwardfor
>>         reqadd X-Forwarded-Proto:\ https
>>         timeout client-fin 5s
>>         timeout http-request 8s
>>         timeout http-keep-alive 30s
>>         default_backend nginx_backend
>>
>> backend nginx_backend
>>         bind-process 2
>>         balance roundrobin
>>         http-reuse safe
>>         option tcp-smart-connect
>>         option splice-request
>>         option splice-response
>>         timeout tunnel 30m
>>         timeout http-request 8s
>>         timeout http-keep-alive 30s
>>         server testnginx :80  weight 1 check
>> ```
>>
>> The nginx node has nginx with 4 workers and 8192 max clients,
>> therefore the max
>> number of connection it can accept is 32768.
>>
>> For benchmark, we are generating ~3k new connections per
>> second where each
>> connection makes 1 http request and then holds the connection
>> for next 30
>> seconds. This results in a high established connection on the
>> first frontend,
>> ssl_sess_id_router, ~25k per haproxy node (Total ~77k
>> connections on 3 haproxy
>> nodes). The second frontend (nginx_ssl_fe) receives the same
>> number of
>> connection on the frontend. On nginx node, we see that active
>> connections
>> increase to ~32k.
>>
>> Our understanding is that haproxy should keep a 1:1
>> connection mapping for each
>> new connection in frontend/backend. But there is a connection
>> count mismatch
>> between haproxy and nginx (Total 77k connections in all 3
>> haproxy for both
>> frontends vs 32k connections in nginx made by nginx_backend),
>> We are still not
>> facing any major 5xx or connection errors. We are assuming
>> that this is
>> happening because haproxy is terminating old idle ssl
>> connections to serve the
>> new 

Re: Question regarding haproxy backend behaviour

2018-04-16 Thread Igor Cicimov
On Mon, 16 Apr 2018 6:09 pm Ayush Goyal  wrote:

> Hi Moemen,
>
> Thanks for your response. But I think I need to clarify a few things here.
>
> On Mon, Apr 16, 2018 at 4:33 AM Moemen MHEDHBI 
> wrote:
>
>> Hi
>>
>> On 12/04/2018 19:16, Ayush Goyal wrote:
>>
>> Hi,
>>
>> I have a question regarding haproxy backend connection behaviour. We have
>> following setup:
>>
>>   +---------+     +-------+
>>   | haproxy |---->| nginx |
>>   +---------+     +-------+
>>
>> We use a haproxy cluster for ssl off-loading and then load balance
>> request to
>> nginx cluster. We are currently benchmarking this setup with 3 nodes for
>> haproxy
>> cluster and 1 nginx node. Each haproxy node has two frontend/backend
>> pair. First
>> frontend is a router for ssl connection which redistributes request to the
>>  second
>> frontend in the haproxy cluster. The second frontend is for ssl
>> handshake and
>> routing requests to nginx servers. Our configuration is as follows:
>>
>> ```
>> global
>> maxconn 10
>> user haproxy
>> group haproxy
>> nbproc 2
>> cpu-map 1 1
>> cpu-map 2 2
>>
>> defaults
>> mode http
>> option forwardfor
>> timeout connect 5s
>> timeout client 30s
>> timeout server 30s
>> timeout tunnel 30m
>> timeout client-fin 5s
>>
>> frontend ssl_sess_id_router
>> bind *:443
>> bind-process 1
>> mode tcp
>> maxconn 10
>> log global
>> option tcp-smart-accept
>> option splice-request
>> option splice-response
>> default_backend ssl_sess_id_router_backend
>>
>> backend ssl_sess_id_router_backend
>> bind-process 1
>> mode tcp
>> fullconn 5
>> balance roundrobin
>> ..
>> option tcp-smart-connect
>> server lbtest01 :8443 weight 1 check send-proxy
>> server lbtest02 :8443 weight 1 check send-proxy
>> server lbtest03 :8443 weight 1 check send-proxy
>>
>> frontend nginx_ssl_fe
>> bind *:8443 ssl 
>> maxconn 10
>> bind-process 2
>> option tcp-smart-accept
>> option splice-request
>> option splice-response
>> option forwardfor
>> reqadd X-Forwarded-Proto:\ https
>> timeout client-fin 5s
>> timeout http-request 8s
>> timeout http-keep-alive 30s
>> default_backend nginx_backend
>>
>> backend nginx_backend
>> bind-process 2
>> balance roundrobin
>> http-reuse safe
>> option tcp-smart-connect
>> option splice-request
>> option splice-response
>> timeout tunnel 30m
>> timeout http-request 8s
>> timeout http-keep-alive 30s
>> server testnginx :80  weight 1 check
>> ```
>>
>> The nginx node has nginx with 4 workers and 8192 max clients, therefore
>> the max
>> number of connection it can accept is 32768.
>>
>> For benchmark, we are generating ~3k new connections per second where each
>> connection makes 1 http request and then holds the connection for next 30
>> seconds. This results in a high established connection on the first
>> frontend,
>> ssl_sess_id_router, ~25k per haproxy node (Total ~77k connections on 3
>> haproxy
>> nodes). The second frontend (nginx_ssl_fe) receives the same number of
>> connection on the frontend. On nginx node, we see that active connections
>> increase to ~32k.
>>
>> Our understanding is that haproxy should keep a 1:1 connection mapping
>> for each
>> new connection in frontend/backend. But there is a connection count
>> mismatch
>> between haproxy and nginx (Total 77k connections in all 3 haproxy for both
>> frontends vs 32k connections in nginx made by nginx_backend), We are
>> still not
>> facing any major 5xx or connection errors. We are assuming that this is
>> happening because haproxy is terminating old idle ssl connections to
>> serve the
>> new ones. We have following questions:
>>
>> 1. How are the nginx_backend connections being terminated to serve the new
>> connections?
>>
>> Connections are usually terminated when the client receives the whole
>> response. Closing the connection can be initiated by the client, the
>> server, or HAProxy (timeouts, etc.).
>>
>
> Client connections are kept alive here for 30 seconds from the client side.
> Various timeout values in both nginx and haproxy are sufficiently high, of
> the order of 60 seconds. Still, what we are observing is that nginx is
> closing the connection after 7-14 seconds to serve new client requests. I am
> not sure why nginx or haproxy would close existing keep-alive connections to
> serve new requests when the timeouts are sufficiently high.
>
>> 2. Why is haproxy not terminating connections on the frontend to keep them
>> at 32k for a 1:1 mapping?
>>
>> I think there is no 1:1 mapping between the number of connections in
>> haproxy and nginx. This is because you are chaining the two front/back pairs
>> in haproxy, 

Re: Question regarding haproxy backend behaviour

2018-04-16 Thread Ayush Goyal
Hi Moemen,

Thanks for your response. But I think I need to clarify a few things here.

On Mon, Apr 16, 2018 at 4:33 AM Moemen MHEDHBI  wrote:

> Hi
>
> On 12/04/2018 19:16, Ayush Goyal wrote:
>
> Hi,
>
> I have a question regarding haproxy backend connection behaviour. We have
> following setup:
>
>   +---------+     +-------+
>   | haproxy |---->| nginx |
>   +---------+     +-------+
>
> We use a haproxy cluster for ssl off-loading and then load balance request
> to
> nginx cluster. We are currently benchmarking this setup with 3 nodes for
> haproxy
> cluster and 1 nginx node. Each haproxy node has two frontend/backend pair.
> First
> frontend is a router for ssl connection which redistributes request to the
>  second
> frontend in the haproxy cluster. The second frontend is for ssl handshake
> and
> routing requests to nginx servers. Our configuration is as follows:
>
> ```
> global
> maxconn 10
> user haproxy
> group haproxy
> nbproc 2
> cpu-map 1 1
> cpu-map 2 2
>
> defaults
> mode http
> option forwardfor
> timeout connect 5s
> timeout client 30s
> timeout server 30s
> timeout tunnel 30m
> timeout client-fin 5s
>
> frontend ssl_sess_id_router
> bind *:443
> bind-process 1
> mode tcp
> maxconn 10
> log global
> option tcp-smart-accept
> option splice-request
> option splice-response
> default_backend ssl_sess_id_router_backend
>
> backend ssl_sess_id_router_backend
> bind-process 1
> mode tcp
> fullconn 5
> balance roundrobin
> ..
> option tcp-smart-connect
> server lbtest01 :8443 weight 1 check send-proxy
> server lbtest02 :8443 weight 1 check send-proxy
> server lbtest03 :8443 weight 1 check send-proxy
>
> frontend nginx_ssl_fe
> bind *:8443 ssl 
> maxconn 10
> bind-process 2
> option tcp-smart-accept
> option splice-request
> option splice-response
> option forwardfor
> reqadd X-Forwarded-Proto:\ https
> timeout client-fin 5s
> timeout http-request 8s
> timeout http-keep-alive 30s
> default_backend nginx_backend
>
> backend nginx_backend
> bind-process 2
> balance roundrobin
> http-reuse safe
> option tcp-smart-connect
> option splice-request
> option splice-response
> timeout tunnel 30m
> timeout http-request 8s
> timeout http-keep-alive 30s
> server testnginx :80  weight 1 check
> ```
>
> The nginx node has nginx with 4 workers and 8192 max clients, therefore
> the max
> number of connection it can accept is 32768.
>
> For benchmark, we are generating ~3k new connections per second where each
> connection makes 1 http request and then holds the connection for next 30
> seconds. This results in a high established connection on the first
> frontend,
> ssl_sess_id_router, ~25k per haproxy node (Total ~77k connections on 3
> haproxy
> nodes). The second frontend (nginx_ssl_fe) receives the same number of
> connection on the frontend. On nginx node, we see that active connections
> increase to ~32k.
>
> Our understanding is that haproxy should keep a 1:1 connection mapping for
> each
> new connection in frontend/backend. But there is a connection count
> mismatch
> between haproxy and nginx (Total 77k connections in all 3 haproxy for both
> frontends vs 32k connections in nginx made by nginx_backend), We are still
> not
> facing any major 5xx or connection errors. We are assuming that this is
> happening because haproxy is terminating old idle ssl connections to serve
> the
> new ones. We have following questions:
>
> 1. How are the nginx_backend connections being terminated to serve the new
> connections?
>
> Connections are usually terminated when the client receives the whole
> response. Closing the connection can be initiated by the client, the
> server, or HAProxy (timeouts, etc.).
>

Client connections are kept alive here for 30 seconds from the client side.
Various timeout values in both nginx and haproxy are sufficiently high, of
the order of 60 seconds. Still, what we are observing is that nginx is
closing the connection after 7-14 seconds to serve new client requests. I am
not sure why nginx or haproxy would close existing keep-alive connections to
serve new requests when the timeouts are sufficiently high.

> 2. Why is haproxy not terminating connections on the frontend to keep them
> at 32k for a 1:1 mapping?
>
> I think there is no 1:1 mapping between the number of connections in
> haproxy and nginx. This is because you are chaining the two front/back pairs
> in haproxy, so when the client establishes 1 connection with haproxy you
> will see 2 established connections in the haproxy stats. This explains why
> the number of connections in haproxy is double the number in nginx.
>

I want to 

Re: Question regarding haproxy backend behaviour

2018-04-15 Thread Moemen MHEDHBI
Hi


On 12/04/2018 19:16, Ayush Goyal wrote:
> Hi,
>
> I have a question regarding haproxy backend connection behaviour. We
> have following setup:
>
>   +---------+     +-------+
>   | haproxy |---->| nginx |
>   +---------+     +-------+
>
> We use a haproxy cluster for ssl off-loading and then load balance
> request to
> nginx cluster. We are currently benchmarking this setup with 3 nodes
> for haproxy
> cluster and 1 nginx node. Each haproxy node has two frontend/backend
> pair. First
> frontend is a router for ssl connection which redistributes request to
> the second 
> frontend in the haproxy cluster. The second frontend is for ssl
> handshake and 
> routing requests to nginx servers. Our configuration is as follows:
>
> ```
> global
>     maxconn 10
>     user haproxy
>     group haproxy
>     nbproc 2
>     cpu-map 1 1
>     cpu-map 2 2
>
> defaults
>     mode http
>     option forwardfor
>     timeout connect 5s
>     timeout client 30s
>     timeout server 30s
>     timeout tunnel 30m
>     timeout client-fin 5s
>
> frontend ssl_sess_id_router
>         bind *:443
>         bind-process 1
>         mode tcp
>         maxconn 10
>         log global
>         option tcp-smart-accept
>         option splice-request
>         option splice-response
>         default_backend ssl_sess_id_router_backend
>
> backend ssl_sess_id_router_backend
>         bind-process 1
>         mode tcp
>         fullconn 5
>         balance roundrobin
>         ..
>         option tcp-smart-connect
>         server lbtest01 :8443 weight 1 check send-proxy
>         server lbtest02 :8443 weight 1 check send-proxy
>         server lbtest03 :8443 weight 1 check send-proxy
>
> frontend nginx_ssl_fe
>         bind *:8443 ssl 
>         maxconn 10
>         bind-process 2
>         option tcp-smart-accept
>         option splice-request
>         option splice-response
>         option forwardfor
>         reqadd X-Forwarded-Proto:\ https
>         timeout client-fin 5s
>         timeout http-request 8s
>         timeout http-keep-alive 30s
>         default_backend nginx_backend
>
> backend nginx_backend
>         bind-process 2
>         balance roundrobin
>         http-reuse safe
>         option tcp-smart-connect
>         option splice-request
>         option splice-response
>         timeout tunnel 30m
>         timeout http-request 8s
>         timeout http-keep-alive 30s
>         server testnginx :80  weight 1 check
> ```
>
> The nginx node has nginx with 4 workers and 8192 max clients,
> therefore the max
> number of connection it can accept is 32768.
>
> For benchmark, we are generating ~3k new connections per second where each
> connection makes 1 http request and then holds the connection for next 30
> seconds. This results in a high established connection on the first
> frontend,
> ssl_sess_id_router, ~25k per haproxy node (Total ~77k connections on 3
> haproxy
> nodes). The second frontend (nginx_ssl_fe) receives the same number of
> connection on the frontend. On nginx node, we see that active connections
> increase to ~32k.
>
> Our understanding is that haproxy should keep a 1:1 connection mapping
> for each
> new connection in frontend/backend. But there is a connection count
> mismatch
> between haproxy and nginx (Total 77k connections in all 3 haproxy for both
> frontends vs 32k connections in nginx made by nginx_backend), We are
> still not
> facing any major 5xx or connection errors. We are assuming that this is
> happening because haproxy is terminating old idle ssl connections to
> serve the
> new ones. We have following questions:
>
> 1. How are the nginx_backend connections being terminated to serve the new
> connections?
Connections are usually terminated when the client receives the whole
response. Closing the connection can be initiated by the client, the server,
or HAProxy (timeouts, etc.).
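
For reference, here are some of the timeouts from your posted configuration
that can make HAProxy itself close a connection; the comments are just a
rough reading of each directive:

```
defaults
    timeout connect 5s            # abort if the server does not accept the connection in time
    timeout client 30s            # close on client-side inactivity
    timeout server 30s            # close on server-side inactivity
    timeout client-fin 5s         # shorter inactivity timeout once the client has half-closed

frontend nginx_ssl_fe
    timeout http-request 8s       # close if a complete request is not received in time
    timeout http-keep-alive 30s   # close if no new request arrives on an idle keep-alive connection
```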

> 2. Why is haproxy not terminating connections on the frontend to keep them
> at 32k for a 1:1 mapping?
I think there is no 1:1 mapping between the number of connections in
haproxy and nginx. This is because you are chaining the two front/back
pairs in haproxy, so when the client establishes 1 connection with
haproxy you will see 2 established connections in the haproxy stats. This
explains why the number of connections in haproxy is double the number in
nginx.
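
To make the chaining concrete, here is an abbreviated excerpt of your
configuration with comments marking where each hop is counted (addresses and
certificate arguments elided as in your post):

```
frontend ssl_sess_id_router             # hop 1: the client's TCP connection is counted here
    bind *:443
    default_backend ssl_sess_id_router_backend

backend ssl_sess_id_router_backend      # opens a second connection to :8443 on one of the nodes...
    server lbtest01 :8443 weight 1 check send-proxy

frontend nginx_ssl_fe                   # hop 2: ...which is counted again here
    bind *:8443 ssl
    default_backend nginx_backend

backend nginx_backend                   # only this hop reaches nginx, so nginx sees one connection
    server testnginx :80 weight 1 check
```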

> Thanks
> Ayush Goyal

-- 
Moemen MHEDHBI



Question regarding haproxy backend behaviour

2018-04-12 Thread Ayush Goyal
Hi,

I have a question regarding haproxy backend connection behaviour. We have the
following setup:

  +---------+     +-------+
  | haproxy |---->| nginx |
  +---------+     +-------+

We use a haproxy cluster for ssl off-loading and then load balance requests
to an nginx cluster. We are currently benchmarking this setup with 3 nodes
for the haproxy cluster and 1 nginx node. Each haproxy node has two
frontend/backend pairs. The first frontend is a router for ssl connections
which redistributes requests to the second frontend in the haproxy cluster.
The second frontend handles the ssl handshake and routes requests to the
nginx servers. Our configuration is as follows:

```
global
maxconn 10
user haproxy
group haproxy
nbproc 2
cpu-map 1 1
cpu-map 2 2

defaults
mode http
option forwardfor
timeout connect 5s
timeout client 30s
timeout server 30s
timeout tunnel 30m
timeout client-fin 5s

frontend ssl_sess_id_router
bind *:443
bind-process 1
mode tcp
maxconn 10
log global
option tcp-smart-accept
option splice-request
option splice-response
default_backend ssl_sess_id_router_backend

backend ssl_sess_id_router_backend
bind-process 1
mode tcp
fullconn 5
balance roundrobin
..
option tcp-smart-connect
server lbtest01 :8443 weight 1 check send-proxy
server lbtest02 :8443 weight 1 check send-proxy
server lbtest03 :8443 weight 1 check send-proxy

frontend nginx_ssl_fe
bind *:8443 ssl 
maxconn 10
bind-process 2
option tcp-smart-accept
option splice-request
option splice-response
option forwardfor
reqadd X-Forwarded-Proto:\ https
timeout client-fin 5s
timeout http-request 8s
timeout http-keep-alive 30s
default_backend nginx_backend

backend nginx_backend
bind-process 2
balance roundrobin
http-reuse safe
option tcp-smart-connect
option splice-request
option splice-response
timeout tunnel 30m
timeout http-request 8s
timeout http-keep-alive 30s
server testnginx :80  weight 1 check
```

The nginx node runs nginx with 4 workers and 8192 max clients, so the maximum
number of connections it can accept is 32768 (4 x 8192).

For the benchmark, we are generating ~3k new connections per second, where
each connection makes 1 http request and then holds the connection open for
the next 30 seconds. This results in a high number of established connections
on the first frontend, ssl_sess_id_router: ~25k per haproxy node (~77k
connections in total across the 3 haproxy nodes). The second frontend
(nginx_ssl_fe) receives the same number of connections. On the nginx node, we
see that active connections increase to ~32k.

Our understanding is that haproxy should keep a 1:1 connection mapping
between frontend and backend for each new connection. But there is a
connection count mismatch between haproxy and nginx (a total of ~77k
connections across all 3 haproxy nodes for both frontends versus 32k
connections to nginx from nginx_backend), yet we are still not facing any
major 5xx or connection errors. We are assuming that this is happening
because haproxy is terminating old idle ssl connections to serve the new
ones. We have the following questions:

1. How are the nginx_backend connections being terminated to serve the new
connections?
2. Why is haproxy not terminating connections on the frontend to keep them at
32k for a 1:1 mapping?

Thanks
Ayush Goyal