That's more of an issue with the site than a (proxy based) load
balancer - the LB would be doing the exact same thing as the client.
WT Precisely not, and that's the problem. The proxy cannot ask the user
WT whether they want to retry sensitive requests, and it cannot precisely
WT know what is at
Hi Ross,
On Wed, Jan 13, 2010 at 12:12:04PM -0500, Ross West wrote:
I can see a small confusion here because I've used the wrong
terminology. Proxy is not the correct term, as there are actual proxy
devices out there (e.g. Squid) which are generally visible to the
client/server and shouldn't
I'll enter in this conversation as I've used (successfully) a load
balancer which did server-side keep-alive a while ago.
WT Hmmm that's different. There are issues with the HTTP protocol
WT itself making this extremely difficult. When you're keeping a
WT connection alive in order to send a
On Tue, Jan 12, 2010 at 11:56:38AM +0300, Dmitry Sivachenko wrote:
Imagine the following scenario: we have a large number of requests from
different clients. Each client sends requests rarely, so there is no need
for keep-alive between the client and haproxy.
OK I see your usage pattern now. I know three
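At the time of this thread haproxy could not yet do this, but for reference the pattern Dmitry describes (no client-side keep-alive, pooled server-side connections) became expressible later: `http-reuse` was added in haproxy 1.6. A hypothetical sketch, with made-up names and addresses:

```
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

backend be_web
    # haproxy 1.6+ only: keep idle server-side connections open and
    # let requests from any client reuse them
    http-reuse safe
    server srv1 192.0.2.10:80
```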
Hi Ross,
first, thanks for bringing your experience here, it's much appreciated.
On Tue, Jan 12, 2010 at 10:27:09AM -0500, Ross West wrote:
I'll enter in this conversation as I've used (successfully) a load
balancer which did server-side keep-alive a while ago.
WT Hmmm that's different.
WT It's not only a matter of caching the request to replay it, it is that
WT you're simply not allowed to. I know a guy who ordered a book at a
WT large well-known site. His order was processed twice. Maybe there is
WT something on this site which grants itself the right to replay a user's
WT
On Tue, Jan 12, 2010 at 07:01:52PM -0500, Ross West wrote:
WT It's not only a matter of caching the request to replay it, it is that
WT you're simply not allowed to. I know a guy who ordered a book at a
WT large well-known site. His order was processed twice. Maybe there is
WT something on
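The restriction Willy describes is reflected in haproxy's own retry model: `retries` (optionally with `option redispatch`) only covers failures to establish the server connection; once a request has been forwarded to a server, haproxy never replays it automatically, precisely to avoid duplicating non-idempotent requests such as an order. A minimal illustrative fragment:

```
defaults
    mode http
    timeout connect 5s
    # retry the *connection attempt* up to 3 times; a request that has
    # already been sent to a server is never resent
    retries 3
    # on connection failure, allow dispatching to another server
    option redispatch
```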
On Mon, Jan 04, 2010 at 12:13:49AM +0100, Willy Tarreau wrote:
Hi all,
Yes that's it, it's not a joke!
-- Keep-alive support is now functional on the client side. --
Hello!
Are there any plans to implement server-side HTTP keep-alive?
I mean I want clients connecting to haproxy NOT to
Hi Hank,
as I suspected, the problem was with the header captures. They were not
released before being erased when clearing the session for a new keep-alive
request. I have fixed it in git if you want to try the snapshot again:
Definitely haproxy process, nothing else runs on there and the older version
remains stable for days/weeks:
F S UID    PID   PPID C PRI NI ADDR SZ      WCHAN  STIME TTY TIME     CMD
1 S nobody 15547 1    18 80 0  -    1026097 epoll_ 10:54 ?   00:54:30 /usr/sbin/haproxy14d5 -D -f
On Tue, Jan 05, 2010 at 23:42:46, Willy Tarreau wrote:
On Tue, Jan 05, 2010 at 11:14:32PM +0100, Cyril Bonté wrote:
Well, eventually, after several different tests, that's OK for me.
A short http-request timeout (a few seconds at most) will prevent the
accumulation of ESTABLISHED connections
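The idea above can be written directly: `timeout http-request` bounds how long haproxy waits for a complete request on a connection, so idle connections are closed instead of piling up in ESTABLISHED state. Values here are illustrative only:

```
defaults
    mode http
    # give the client a few seconds at most to send a complete request;
    # connections that exceed this are closed rather than accumulating
    timeout http-request 5s
    timeout client       30s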
Hi Cyril,
On Wed, Jan 06, 2010 at 08:58:17PM +0100, Cyril Bonté wrote:
On Tue, Jan 05, 2010 at 23:42:46, Willy Tarreau wrote:
On Tue, Jan 05, 2010 at 11:14:32PM +0100, Cyril Bonté wrote:
Well, eventually, after several different tests, that's OK for me.
A short http-request timeout
Hi Willy,
On Tue, Jan 05, 2010 at 06:15:54, Willy Tarreau wrote:
The only suspected remaining issue, reported by Cyril, turned out after
some tests not to be one. I could reproduce the same behaviour, but the
close_wait connections were the ones pending in the system, which got
delayed due
On Tue, Jan 05, 2010 at 11:14:32PM +0100, Cyril Bonté wrote:
Hi Willy,
On Tue, Jan 05, 2010 at 06:15:54, Willy Tarreau wrote:
The only suspected remaining issue, reported by Cyril, turned out after
some tests not to be one. I could reproduce the same behaviour, but the
close_wait
On 1/4/10 9:15 PM, Willy Tarreau wrote:
On Mon, Jan 04, 2010 at 07:05:48PM -0800, Hank A. Paulson wrote:
On 1/4/10 2:43 PM, Willy Tarreau wrote:
- Maybe this new timeout should have a default value to prevent infinite
keep-alive connections.
- For this timeout, haproxy could display a warning
Hi Willy,
I didn't have much time for tests but I can make some first feedbacks.
On Mon, Jan 04, 2010 at 00:13:49, Willy Tarreau wrote:
The keep-alive timeout is bound to the http-request timeout right now.
In practice, this is the minimum value between the http-request timeout and
the
Hi Cyril,
On Mon, Jan 04, 2010 at 10:21:40PM +0100, Cyril Bonté wrote:
Hi Willy,
I didn't have much time for tests but I can make some first feedbacks.
That's fortunate, because I spent the day again chasing some remaining
CLOSE_WAIT issues. I finally found, to my horror, that the response parser
On Tue, Jan 05, 2010 at 12:20:22AM +0100, Cyril Bonté wrote:
On Mon, Jan 04, 2010 at 23:43:14, Willy Tarreau wrote:
Hi Cyril,
On Mon, Jan 04, 2010 at 10:21:40PM +0100, Cyril Bonté wrote:
Hi Willy,
I didn't have much time for tests but I can make some first feedbacks.
That's
On 1/4/10 2:43 PM, Willy Tarreau wrote:
- Maybe this new timeout should have a default value to prevent infinite
keep-alive connections.
- For this timeout, haproxy could display a warning (at startup) if the value
is greater than the client timeout.
In fact I think that using http-request
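For reference, the dedicated timeout discussed here eventually shipped as `timeout http-keep-alive` in haproxy 1.4; when it is unset, the http-request timeout applies instead, which matches the suggestion above. A sketch that keeps it below the client timeout, in line with the proposed warning (values illustrative):

```
defaults
    mode http
    timeout client 30s
    # idle time allowed between two requests on a kept-alive connection;
    # kept below 'timeout client' so it can actually take effect
    timeout http-keep-alive 10s
    # also bounds the wait for a complete request
    timeout http-request    10s
```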
On Mon, Jan 04, 2010 at 07:05:48PM -0800, Hank A. Paulson wrote:
On 1/4/10 2:43 PM, Willy Tarreau wrote:
- Maybe this new timeout should have a default value to prevent infinite
keep-alive connections.
- For this timeout, haproxy could display a warning (at startup) if the
value is greater
Hi all,
Yes that's it, it's not a joke!
-- Keep-alive support is now functional on the client side. --
From now on, we support 4 different modes :
- tunnel mode (the default one), where we only look at
the first request and first response then let everything
be exchanged as if it
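For reference, the four modes announced for haproxy 1.4 map onto configuration options roughly as follows (tunnel mode is simply the default, with none of these options set); the alternatives are shown commented out:

```
defaults
    mode http
    # tunnel mode (default): no option needed; only the first
    # request/response pair is analysed

    # passive close: add "Connection: close" in both directions
    #option httpclose

    # forced close: actively close once the response is transferred
    #option forceclose

    # server close: keep-alive with the client, close toward the server
    #option http-server-close
```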