Hi Willy,

This would explain the 503s:
  # change a 503 response into a 204 (a friendly decline)
  errorfile 503 /etc/haproxy/errors/204.http

  acl is_disable path_beg /getuid/rogue-ad-exchange
  # http-request deny defaults to 403; change it to a 503,
  # which is a masked 204 since HAProxy doesn't have a 204 errorfile.
  http-request deny deny_status 503 if is_disable
backend robotstxt
  errorfile 503 /etc/haproxy/errors/200.robots.http
backend crossdomainxml
  errorfile 503 /etc/haproxy/errors/200.crossdomain.http
backend emptygif
  errorfile 503 /etc/haproxy/errors/200.emptygif.http
Basically I use 503 when I want to block a sender in a friendly way (i.e.
making them believe we just declined the transaction), and to host three
tiny files: robots.txt, crossdomain.xml and empty.gif.
It felt excessive to set up redundant webservers for a total of 703 bytes
of files, and it also felt wasteful to serve them from the Java backend. So
I cheated with HAProxy's errorfiles.
So I don't think the 503s cause retries for our clients; it's just me
abusing HAProxy.
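For reference, the errorfiles are just raw HTTP responses. The 204 one
looks roughly like this (writing from memory, the exact headers may
differ):

  HTTP/1.0 204 No Content
  Cache-Control: no-cache
  Connection: close

and the robots.txt one:

  HTTP/1.0 200 OK
  Content-Type: text/plain
  Cache-Control: no-cache
  Connection: close

  User-agent: *
  Disallow: /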

We receive transactional requests (ad exchanges sending us requests), and
also real browsers connecting to us when cookie syncing.
For the transactional traffic we want keep-alive, so those clients send
multiple HTTP requests per connection.
For the browser clients we want to close the connection after each
request+response, so their backend has "option forceclose". That would
explain the short connections.
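So roughly like this (backend and server names made up for the example):

  backend browser_clients
    # browsers: one request+response per connection, then close
    option forceclose
    server sync1 10.0.0.10:8080 check

  backend transactional
    # ad exchanges: keep-alive, many requests per connection
    option http-keep-alive
    server bidder1 10.0.0.20:8080 check

(And as I understand it, in 2.x "option forceclose" was removed and
"option httpclose" now behaves the way forceclose used to.)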
Currently we have "http-reuse safe" in the defaults section and "http-reuse
never" in a tcp mode listener that forwards all :443 traffic to another set
of HAProxies with more cores that do the TLS termination. This is to not
mess up the X-Forwarded-For headers.
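Roughly like this (listener name and addresses made up):

  defaults
    http-reuse safe

  listen tls_passthrough
    mode tcp
    bind :443
    http-reuse never
    server tls1 10.0.0.30:443

plus the regular mode http backends, which pick up "http-reuse safe" from
the defaults.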

I will try "http-reuse always" in the defaults, but not in the tcp mode
listener, as we rely on X-Forwarded-For.
Even if I get better performance, it still wouldn't answer why HAProxy's
CPU usage would increase in v2.0 compared to v1.7 with the same config.
And assuming "http-reuse always" helps performance in 2.0, it's not fair to
compare a better-tuned v2.0 against a less tuned v1.7.


On Wed, Jul 24, 2019 at 8:07 PM Willy Tarreau <w...@1wt.eu> wrote:

> Hi Elias,
> On Wed, Jul 24, 2019 at 11:01:22AM +0200, Elias Abacioglu wrote:
> > Hi Lukas,
> >
> > 2.0.3 still has the same issue, after 1-3 minutes it goes to using 100%
> of
> > it's available cores.
> > I've created a new strace file. Will send it to you and Willy.
> Thanks for testing. I've looked at your trace. I'm not seeing any abnormal
> behaviour there. However I'm seeing lots of 503 responses returned by the
> server. Could it be that your client retries on 503, leading to an increase
> of the load ? It could also possibly explain why this happens after some
> time (i.e. if the servers start to fall after some time).
> Also I'm seeing that you're having a lot of short connections. Maybe you're
> accumulating a large number of idle connections to the backend servers.
> Could you please try to add "http-reuse always" to your backend(s) to see
> if that improves the situation ?
> Thanks,
> Willy
