I don't see where you can turn off Nagle for HAProxy. All I could find
was a change in the source code back in 2006 where Willy took OUT the
TCP_NODELAY flag.

 

After HAProxy, the stack is identical for both tests. See below:

 

Test 1 ("Direct Test")

Internet --> Firewall (w/ NAT) --> IIS --> static .html/.css/.png/etc

 

Test 2 ("HAProxy Test")

Internet --> Firewall (w/ NAT) --> HAProxy --> IIS --> static
.html/.css/.png/etc

 

So, even if Nagle were enabled in another part of the stack, it's
enabled in BOTH tests, so it shouldn't be a contributing factor. The
only exception would be some adverse interaction between HAProxy
itself and Nagle being enabled before or after it. That seems
far-fetched to me, but again, I'm not a network engineer (I just play
one on TV).

 

David

 

From: Wout Mertens [mailto:[email protected]] 
Sent: Monday, November 14, 2011 12:51 AM
To: John Marrett
Cc: David Prothero; [email protected]
Subject: Re: HAProxy performance issues

 

I agree with John, and want to add that Nagle can apply to every TCP
hop in the path. You may want to go through your full application
stack to see where the Nagle algorithm might be applied and whether
you can turn it off.
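For illustration (not HAProxy-specific): turning Nagle off is done per
socket with the TCP_NODELAY option, so any layer in the stack that owns
a TCP socket (stunnel, the application server, etc.) would do the
equivalent of this minimal Python sketch:

```python
import socket

# Minimal sketch: disable Nagle on a TCP socket via TCP_NODELAY.
# Any component that creates its own TCP sockets can do the same.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Read the option back; non-zero means Nagle is now off for this socket.
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
s.close()
```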

 

Wout.

 

On Nov 13, 2011, at 5:12, John Marrett wrote:

What I said still applies. The only difference is that there is
(substantially) less overhead on a new http connection compared to
https.

Keepalive is quite likely to be the reason for the substantial
performance deviation between your two tests.
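To make the keep-alive point concrete, here's a minimal local sketch
(Python, with a throwaway demo server, not your actual stack): with
HTTP keep-alive, several requests reuse one TCP connection; without
it, every request pays a fresh TCP (and, for https, TLS) handshake,
and that cost grows with round-trip latency:

```python
import http.client
import http.server
import threading

# Throwaway local server for the demo (hypothetical, not IIS/HAProxy).
class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 enables keep-alive by default
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# One persistent connection, two requests: a single TCP handshake total.
conn = http.client.HTTPConnection("127.0.0.1", port)
for _ in range(2):
    conn.request("GET", "/")
    resp = conn.getresponse()
    print(resp.status, resp.read())
conn.close()
server.shutdown()
```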

-JohnF

On 2011 11 12 22:46, "David Prothero" <[email protected]> wrote:

Thanks for that tip. I will keep an eye out for that when we begin our
SSL performance testing. Currently, however, the delay is with regular
http connections directly to haproxy.

David

Wout Mertens <[email protected]> wrote:

On Nov 11, 2011, at 17:43, David Prothero wrote:

The local test showed a very small (and more than acceptable) overhead
of 7ms for the entire page load (all 29 requests) when going through
HAProxy. However, tests from longer distances over various IPs showed
an overhead that seemed to be proportional to the amount of latency in
the connection. Typical overhead times we are seeing from various
locations (both enterprise and consumer grade connections) are
around 200-400ms.

 

Delay values in multiples of 200ms are typically caused by the Nagle
algorithm interacting with TCP delayed ACKs. Try adding

 

socket=l:TCP_NODELAY=1

socket=r:TCP_NODELAY=1


to your stunnel configuration.

 

Wout.

 
