Re: anyone doing this: internet - http proxy with KA - haproxy?

2010-01-03 Thread Hank A. Paulson
I tried the version from git and it worked ok - not sure if the client side 
has KA running, so it may not help the problem.


The throughput was fine and stayed high during the run, but memory use 
increased linearly until all RAM was consumed - recompiled again and tried 
with splice-auto on to see if that would help, and got the same result after 
about 3 hours: all RAM used.



On 1/2/10 3:34 PM, Willy Tarreau wrote:

[ for an unknown reason, this mail failed to reach the list, trying again ]

Hi Hank,

I should have read the ML before responding to you privately :-)

On Sat, Jan 02, 2010 at 02:24:27AM -0800, Hank A. Paulson wrote:

I have a site with 90% of the traffic coming from a few client IPs that are
300ms or so away. Their gateway software doesn't seem to be dealing with
thousands of connections very well, and we can't take advantage of large TCP
windows because the connection is over after one response.

So I am thinking of trying to put something that will maintain a few long
connections with that far away client IP and see if that improves things.

Anyone have any suggestions for http proxies with keep alive that I can put
in front of haproxy? Anyone doing this? config suggestions?


You can simply download the very latest snapshot (not yet available in the
snapshot directory, you'll have to extract it from GIT):

http://haproxy.1wt.eu/git?p=haproxy.git;a=snapshot;sf=tgz

Then replace option httpclose with option http-server-close and you'll
have keep-alive on the client side. It also supports pipelining, which
further reduces latency when your clients support it too.
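A minimal sketch of what that configuration change could look like (the
frontend/backend names, addresses and timeouts below are illustrative
placeholders, not taken from the thread):

```
# sketch: client-side keep-alive in haproxy 1.4-dev
defaults
    mode http
    option http-server-close   # keep-alive to the client, close to the server
    timeout connect  5s
    timeout client  30s
    timeout server  30s

frontend www
    bind :80
    default_backend apps

backend apps
    server app1 192.0.2.10:8080
```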

Best regards,
Willy





Re: [PATCH] [BUG] Healthchecks: get a proper error code if connection cannot be completed immediately

2010-01-03 Thread Willy Tarreau
On Sun, Jan 03, 2010 at 01:45:07AM +0100, Krzysztof Piotr Oledzki wrote:
 From 89baa36c17701a2abd6de629b3e0f360c194c6dc Mon Sep 17 00:00:00 2001
 From: Krzysztof Piotr Oledzki o...@ans.pl
 Date: Sat, 2 Jan 2010 22:03:01 +0100
 Subject: [BUG] Healthchecks: get a proper error code if connection cannot be 
 completed immediately
 
 In case of a non-blocking socket used for connecting to a remote
 server (not localhost), the error reported by the health check
 was most of the time one of EINPROGRESS/EAGAIN/EALREADY.
 
 This patch adds a getsockopt(..., SO_ERROR, ...) call so now
 the proper error message is reported.
 ---

Applied, thanks Krzysztof!
Willy




[ANNOUNCE] haproxy 1.4-dev5 with keep-alive :-)

2010-01-03 Thread Willy Tarreau
Hi all,

Yes that's it, it's not a joke !

 -- Keep-alive support is now functional on the client side. --

From now on, we support 4 different modes :

  - tunnel mode (the default one), where we only look at
the first request and first response then let everything
be exchanged as if it were pure data. I wanted to get rid
of that one but I realized that many people are running
their static servers that way as a workaround for lack of
keep-alive. So let it live a bit longer.

  - server-close : this new mode is enabled via option http-server-close
and maintains keep-alive on the client side while closing connections
on the server side. It's comparable to what apache 1.3 or nginx do.

  - close : this one is still enabled via option httpclose but now
tracks the whole request too in order to keep in sync till the end
(previously it considered everything as data).

  - force-close : enabled via option forceclose, like httpclose, but
it also enforces connection close at the end of each request. In
the past its use was discouraged because it relied on a trick
consisting in closing the request as soon as the response started
to come back. Now it really waits for both channels to complete
before closing. It will probably become the new httpclose later
as it is what it should be.
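In configuration terms, the four modes above map onto these option keywords
(a sketch; at most one of them would normally be active in a given section):

```
# (no option)                # tunnel: only the first request/response is parsed
# option http-server-close   # server-close: client keep-alive, server closes
# option httpclose           # close: full request tracked, connection closed
# option forceclose          # force-close: actively closes both sides when done
```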

Technically, the code could also support plain end-to-end keep-alive,
but there are still some issues to work on first, starting with how
to steal a sleeping session in case of starvation. Also, the code to
reinitialize a request is quite awful right now and I should say I
don't like it, so most likely there will be some changes later which
will benefit full keep-alive.

Some minor improvements have also been performed. The redirects are
now emitted as HTTP/1.1 with a content-length and maintain keep-alive
if the request asked to do so and the response is relative to the same
site (begins with a slash).
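For example, a relative redirect rule such as the following (a hypothetical
rule, not one from the announcement) can now be answered in HTTP/1.1 with a
Content-Length while keeping the client connection alive:

```
# a same-site redirect target begins with a slash, so keep-alive is preserved
acl old_album path_beg /old-album
redirect location /album.html if old_album
```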

The HTTP parser has been enhanced and fixed a lot to correctly support
keep-alive. Despite previous blind efforts, it was far from being
pipelining-compatible. Now pipelining works for both requests and
responses (provided the server responds fast enough for its response
to be merged with a former one).

Pipelining is really what reduces perceived latency over slow networks,
and what saves CPU cycles, as we reduce the number of system calls that
way. The code is currently capable of 1 million pipelined requests and
responses per second on one core of my Phenom 3 GHz (this is done by
matching an ACL and performing a keep-alive redirect). But performance
drops as soon as we make system calls. In server-close mode, I've got
numbers varying between 40000 and 82000 requests/s. I have to re-run the
tests with a stabilized lab to get more reliable numbers.

The end-user feeling has improved nicely. The following page contains
110 small images and is served about twice as fast with keep-alive and
pipelining enabled when doing a soft refresh (Ctrl-R) :

   http://www.ant-computing.com/album.html

This is due to the fact that the browser can fill packets with requests
and that haproxy fills packets with multiple short responses (304 not
modified), resulting in a low number of overall packets exchanged over
the net.

Please note that there have been several issues during the development
of this feature, and while we have apparently fixed everything we found,
it is still possible that some of them remain. So use it with care.

Another important point concerns the logs. When multiple requests are
processed over the same connection, they will look similar in the logs
because there is no indication of keep-alive right now. However, the
accept date is reset to the completion date of the last request, so
that the next request time corresponds to the time the browser took
to send a new request.

The keep-alive timeout is bound to the http-request timeout right now,
but I'm thinking about adding a new timeout for this one, so that we
can lower it even more (eg: a few tens or hundreds of milliseconds).
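Until such a dedicated timeout exists, the knob in question is the existing
one (the value here is purely illustrative):

```
defaults
    timeout http-request 5s   # currently also bounds the keep-alive wait
```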

Please note that there have been quite a number of fixes since last
version. Maybe some of those fixes were for bugs introduced since,
but if you're experiencing issues with 1.4-dev4, you should give this
one a try. Oh BTW I've removed the limits on the number of config
files and reqadd/rspadd statements some users were complaining about.

Right now I don't have anything else to say. You can grab the sources
here as usual :

 http://haproxy.1wt.eu/download/1.4/src/

As usual, have fun and please report any positive or negative experience !

Willy




Re: haproxy GIT errors with keepalive connections since last commits

2010-01-03 Thread Willy Tarreau
On Mon, Jan 04, 2010 at 12:18:38AM +0100, Cyril Bonté wrote:
 Hi Willy,
 
 On Sunday, January 3, 2010 at 13:11:08, Willy Tarreau wrote:
I've updated the running version on the site so that it's
fixed now.
   
   I confirm I don't have the problem anymore ;)
 
 I'm sorry to say that another problem has appeared (I didn't see it 2 or 3 
 hours ago). It looks like long responses are truncated.
 For example, try to access this link:
 http://haproxy.1wt.eu/git?p=haproxy.git;a=commit;h=1f44589b7109b07f8845b430fa031a6762923c03
 
 The page is cut in the middle.

you're damn right :-(

The page left the site that way before being forwarded by the second haproxy.
I'm investigating.

Thanks!
Willy




Re: haproxy GIT errors with keepalive connections since last commits

2010-01-03 Thread Willy Tarreau
On Mon, Jan 04, 2010 at 12:18:38AM +0100, Cyril Bonté wrote:
 I'm sorry to say that another problem has appeared (I didn't see it 2 or 3 
 hours ago). It looks like long responses are truncated.
 For example, try to access this link:
 http://haproxy.1wt.eu/git?p=haproxy.git;a=commit;h=1f44589b7109b07f8845b430fa031a6762923c03
 
 The page is cut in the middle.

OK this is fixed now. For an unknown reason I forgot to disable
automatic closing in some forwarding states. This appeared clearly
with chunked encoding, depending on timing races, as you could see
here.

This will appear in this night's snapshot. If we don't find any
other issue to fix within a few days, I'll release -dev6 with
this patch.

Thanks,
Willy