Hi Krishna,

On Wed, Nov 11, 2015 at 12:31:42PM +0530, Krishna Kumar (Engineering) wrote:
> Thanks Baptiste. My configuration file is very basic:
> 
> global
>         maxconn 100
> 
> defaults
>         mode http
>         option http-keep-alive
>         option splice-response
>         option clitcpka
>         option srvtcpka
>         option tcp-smart-accept
>         option tcp-smart-connect
>         timeout connect 60s
>         timeout client 1800s
>         timeout server 1800s
>         timeout http-request 1800s
>         timeout http-keep-alive 1800s
> 
> frontend private-frontend
>         maxconn 100
>         mode http
>         bind IP1:80
>         default_backend private-backend
> 
> backend private-backend
>         http-reuse always
>         server IP2 IP2:80 maxconn 10
> As described by you, I did the following tests:
> 
> 1. Telnet to the HAProxy IP, and then run each of the following tests:
> 
> A.  Serial: Run wget; sleep 0.5 (8 times in a row). tcpdump shows that
>     when each wget finishes, the client closes the connection and haproxy
>     sends an RST to the single backend. The next wget opens a new
>     connection to haproxy, and in turn to the server upon request.

That's expected. To be clear on one point so that there is no doubt: we
don't have connection pools for now, we can only share *existing*
connections. So once your last client connection closes, you don't have
any server connections anymore and the next request creates new ones.
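For reference, the reuse policy is set per backend. A hedged sketch of the
backend from the configuration quoted above, with the documented reuse modes
summarized as comments (the mode descriptions paraphrase the 1.6 docs):

```
backend private-backend
        # never      - one server connection per client connection (default)
        # safe       - the first request of a session gets its own
        #              connection; later requests may reuse an idle one
        # aggressive - reuse idle connections already proven reusable
        # always     - reuse any idle connection, even unproven ones
        http-reuse always
        server IP2 IP2:80 maxconn 10
```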

> B.  Parallel: Run 8 wgets in parallel. Each opens a new connection to get
>     a 128-byte file. Again, 8 separate connections are opened to the
>     backend server.

But are they *really* processed in parallel? If the file is only 128 bytes,
I can easily imagine that the connections are opened and closed immediately.
Please keep in mind that wget doesn't work like a browser *at all*: a
browser keeps connections alive, while wget fetches one object and closes.
That's a huge difference, because the browser's connection remains reusable
while wget's does not.
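To make the contrast concrete (curl is used here purely as an example of a
client that reuses one connection for several fetches; IP1 and the file
names are placeholders):

```
# one process per object: each connection dies when its wget exits
wget http://IP1/file1 ; wget http://IP1/file2

# several objects over a single kept-alive connection, closer to a browser
curl http://IP1/file1 http://IP1/file2
```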

> C.  Run "wget -i <file-containing 5 files>". wget uses keepalive and does
>     not close the connection. Here, wget opens only 1 connection to
>     haproxy, and haproxy opens 1 connection to the backend, over which
>     wget transfers the 5 files one after the other. Behavior is identical
>     to 1.5.12 (same config file, except without the reuse directive).

OK. That's a better test.

> D.  Run 5 "wget -i <file-containing 5 files>" in parallel. 5 connections
>     are opened by the 5 wgets, and 5 connections are opened by haproxy to
>     the single server; finally, all are closed by RSTs.

Is wget advertising HTTP/1.1 in the request? If not, that could
explain why they're not merged: we only merge connections from
HTTP/1.1-compliant clients. Also, we keep private any connection
which sees a 401 or 407 status code, so that authentication doesn't
mix up with other clients and we remain compatible with broken
auth schemes like NTLM, which violate HTTP. There are other criteria
that mark a connection private:
  - proxy protocol used to the server
  - SNI sent to the server
  - source IP binding to the client's IP address
  - source IP binding to anything dynamic (eg: header)
  - 401/407 received on a server connection.
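For illustration, server lines like the following would each mark their
connections private and so defeat reuse (each line is an alternative, not a
combined configuration; IP2 and the values are placeholders, the keywords
come from the standard server keyword set):

```
backend private-backend
        # proxy protocol sent to the server:
        server IP2 IP2:80 send-proxy
        # SNI sent to the server:
        server IP2 IP2:443 ssl sni req.hdr(host)
        # source binding to the client's IP (transparent proxying):
        server IP2 IP2:80 source 0.0.0.0 usesrc clientip
```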

> I also modified step #1 above to do a telnet, followed by a GET in telnet
> to actually open a server connection, and then ran the other tests. I
> still don't see connection reuse having any effect.

How did you make your test? What exact request did you type?
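For reference, a request typed into telnet needs to advertise HTTP/1.1 and
carry a Host header for the connection to be eligible for merging; something
like the following (the blank line ends the request, and in HTTP/1.1 the
connection stays open by default; /file and IP1 are placeholders):

```
GET /file HTTP/1.1
Host: IP1

```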

