Hi Fred,

On Fri, Sep 28, 2012 at 07:07:10AM +0200, Fred Leeflang wrote:
> FWIW, I've just done an iperf test from my own server at home (connected 
> with 100/100 fiber)

lucky man :-)

> to the 'real' interface of the loadbalancer and I'm 
> getting a throughput of 92.5Mbits/s. (I've done this test to eliminate 
> the possibility of any other virtualisation confusing things)

What happens sometimes is that conntrack is loaded with its default settings
in the hypervisor, which limits the connection rate to a very low value once
the tracking table fills up. The bitrate of established connections is not
affected, of course.
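If you want to rule this out, a quick check on the hypervisor could look like
this (a sketch, assuming a Linux hypervisor with the nf_conntrack module
loaded; these are the standard sysctl names):

```shell
# Compare the number of currently tracked connections to the table limit
sysctl net.netfilter.nf_conntrack_count
sysctl net.netfilter.nf_conntrack_max

# If count approaches max during a test, raise the limit, e.g.:
#   sysctl -w net.netfilter.nf_conntrack_max=262144
```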

> I've also done an iperf test from the second lb's interface to the first 
> lb's interface (both are on separate physical machines) and this results 
> in a throughput of 941Mbits/s.

OK so at least we can say that the physical network works well.

In your logs I'm seeing that your nginx server responds in roughly 50-100 ms,
and that you have at most around 10 concurrent connections on the frontend.
This means at most around 100-200 connections per second. It would thus be
possible that you're limited there (or by the number of concurrent
connections siege sends).
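As a back-of-the-envelope check, that figure is just Little's law applied to
the numbers read from your logs (10 concurrent connections, 50-100 ms per
response):

```python
# Little's law: max request rate = concurrency / response time
concurrency = 10                  # concurrent frontend connections (from the logs)
for resp_time in (0.050, 0.100):  # 50-100 ms per response
    rate = concurrency / resp_time
    print(f"{rate:.0f} connections/s at {resp_time * 1000:.0f} ms")
```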

> The haproxy version is 1.4.8

Hmmm this is bad :

  $ git log --pretty=oneline v1.4.8..v1.4.22 | egrep -c BUG\|CRIT
  76

In short, 76 bugs are known to have been fixed after the version you're
using. Some of them were important and could cause such issues (eg:
incorrect chunk size calculation on chunked responses). So it is indeed
possible that your outdated version is the cause of the issues.

I suggest two things :
  - run a test with siege aimed at the server which is just after haproxy
    (nginx ?) to see if performance is better without haproxy
  - update your haproxy to the latest stable version in your branch (1.4.22)
    to get all known fixes, and check again.
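Concretely, that could look like this (the URL is a placeholder for your
setup; siege's -c/-t options are its standard concurrency/duration flags):

```shell
# 1) Bypass haproxy: aim siege directly at the backend server (nginx?)
siege -c 10 -t 60S http://backend-server/

# 2) After upgrading haproxy, confirm which version is actually running
haproxy -vv
```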

If nothing here helps, then a tcpdump on the siege host would help.
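For example (interface name and address are placeholders for your setup):

```shell
# Capture full packets between the siege host and the LB for later analysis
tcpdump -i eth0 -s 0 -w siege-lb.pcap host <lb-address> and port 80
```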

Regards,
Willy