Hi,

I just put a service that handles around 400-4K mixed HTTP/HTTPS requests
per second behind HAProxy. The endpoint with the highest request rate
takes a POST with a 1-8 KB JSON body. I used HAProxy mostly for SSL
offloading, since there is only one server. Immediately I started seeing
log lines like the one below for that service in the HAProxy logs:

Nov 15 15:07:59 localhost.localdomain haproxy[22773]: 41.75.220.204:24716
[15/Nov/2017:15:06:59.772] http med-api/med1722 0/0/0/-1/60000 502 204 3047
- - SH-- 1736/1736/13/13/0 0/0 "POST count HTTP/1.1"

The requests were timing out after 60 seconds on the application side.
After that I started logging requests with latency above a certain
threshold and saw that many requests were stuck waiting to read the
request POST body from HAProxy. I did some research and found the option
http-buffer-request. I set it on my backend and the timeouts disappeared.
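
For reference, this is roughly what the change looked like. The backend and
server names are the ones from my log line above; the server address and port
here are placeholders, not my real config:

```
backend med-api
    option http-buffer-request
    server med1722 10.0.0.1:8080
```

With this option HAProxy waits for the whole request body (up to its buffer
size) before forwarding the request to the server.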
However, my application server still spends a lot of time polling for
request data compared to NGINX, where this is almost non-existent. My
application is written in Go, and I can monitor the number of goroutines
(lightweight threads) grouped by their tasks. If I proxy the traffic
through NGINX, the number of goroutines waiting for request data is barely
observable, but when I pass the traffic through HAProxy this number rises
to around 1200.
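
In case it helps to see what I mean by "grouped by their tasks", here is a
minimal sketch of the kind of per-task goroutine counting I do. The task name
"read-body" and the counter type are simplified placeholders, not my real
instrumentation:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// taskCounter tracks how many goroutines are currently inside each named task.
type taskCounter struct {
	counts sync.Map // task name -> *int64
}

// enter records that a goroutine has started the given task and returns a
// function that must be called (e.g. via defer) when the task finishes.
func (t *taskCounter) enter(task string) func() {
	v, _ := t.counts.LoadOrStore(task, new(int64))
	n := v.(*int64)
	atomic.AddInt64(n, 1)
	return func() { atomic.AddInt64(n, -1) }
}

// snapshot returns the current count for every task.
func (t *taskCounter) snapshot() map[string]int64 {
	out := map[string]int64{}
	t.counts.Range(func(k, v interface{}) bool {
		out[k.(string)] = atomic.LoadInt64(v.(*int64))
		return true
	})
	return out
}

func main() {
	var tc taskCounter
	done := tc.enter("read-body") // a handler starts reading the request body
	fmt.Println(tc.snapshot()["read-body"]) // 1 while the task is in flight
	done()
	fmt.Println(tc.snapshot()["read-body"]) // 0 after it completes
}
```

Exposing this snapshot on a debug endpoint is what lets me see the number of
goroutines stuck waiting for request data climb under HAProxy.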

I am just trying to understand the difference in behaviour between NGINX
and HAProxy, and whether there is any setting I can tweak to fix this
issue. I hope everything is clear; this is my first question on the
mailing list, so please let me know if I can make my question clearer.

HAProxy 1.7.8
nbproc 30
2 processes for HTTP
28 processes for HTTPS
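
The process split is done roughly like this (frontend names, ports, and the
certificate path below are placeholders, not my real config):

```
global
    nbproc 30

frontend fe_http
    bind-process 1-2
    bind :80

frontend fe_https
    bind-process 3-30
    bind :443 ssl crt /etc/haproxy/cert.pem
```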

Default section options from my config:
    log global
    mode http
    option dontlognull
    option log-separate-errors
    option http-buffer-request
    timeout connect 5s
    timeout client 30s
    timeout server 60s
    timeout http-keep-alive 4s

Omer
