On 05/10/2014 09:44 AM, Willy Tarreau wrote:
Now the bind-process mess is fixed so that we now support per-listener
process binding using the "process" bind keyword, which ensures that
we won't need to change the config format during the stable release if
we want to slightly improve it. And that
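A minimal sketch of what per-listener process binding with the "process" bind keyword might look like (frontend name, ports, and process counts are illustrative, not from the thread):

```
global
    nbproc 4

frontend web
    # each listener pinned to a specific worker process
    bind :8080 process 1
    bind :8081 process 2
```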
On 05/07/2014 12:35 PM, Vincent Bernat wrote:
❦ 7 mai 2014 11:15 +0200, Willy Tarreau :
haproxy does not include DTrace probes by any chance right? :)
No, and I have no idea how this works either. But if you feel like it
can provide some value and be done without too much effort, feel fre
On 04/25/2014 05:17 AM, Willy Tarreau wrote:
No, unfortunately. The stats report the *real* events and this is
extremely important. I regret that your boss will have to live with
the reality that his servers are facing!
It would be nice if there were some way from the stats socket to break
thi
http://comments.gmane.org/gmane.comp.web.haproxy/5856
'''Since we have not yet reworked the ACLs to rely on the pattern
subsystem, it's still not possible to make use of "hdr_ip(X-f-f,-1)" as
we do on the "balance" or "source" keywords.'''
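For reference, a sketch of how hdr_ip is used on the "source" keyword as the quote describes (the backend name and header are illustrative; this form requires a tproxy-capable build):

```
backend app
    # take the last occurrence of X-Forwarded-For as the client
    # source address for transparent proxying
    source 0.0.0.0 usesrc hdr_ip(X-Forwarded-For,-1)
```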
Is the ACL rework mentioned for hdr_ip and a specifi
We are running 1.4.24 for an application that sees almost entirely small
http requests. We have the following timeouts:
timeout client 7s
timeout server 4s
timeout connect 4s
timeout http-request 7s
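Collected into a config fragment, those timeouts would sit in a defaults section along these lines (the section placement is an assumption; the values are from the message):

```
defaults
    mode http
    timeout client       7s   # client-side inactivity
    timeout server       4s   # inactivity waiting on the server
    timeout connect      4s   # max time to establish the server connection
    timeout http-request 7s   # max time to receive a complete request;
                              # its expiry produces the 408 / cR log flag
```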
There are a significant number of cR/http-408 responses in the
On 2013-12-17 03:14, Annika Wickert wrote:
> - accesslist for statssocket or ldap authentication for stats socket
For ldap auth I presume you mean the web ui. You could accomplish this
today by proxying through httpd (or equivalent).
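If plain HTTP basic auth (rather than LDAP) is enough, the stats page supports it natively; a minimal sketch, with placeholder address and credentials:

```
listen stats
    bind :8404
    mode http
    stats enable
    stats uri /
    stats auth admin:changeme   # basic auth built into the stats page
```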
On 12/15/2013 09:41 PM, Willy Tarreau wrote:
- several hash algorithms are provided, and it is possible to select them
per backend. This high quality work was done at Tumblr by Bhaskar Maddala.
This is an interesting development. How do the included functions
compare to the currently
On 12/04/2013 11:24 AM, Chris Burroughs wrote:
On 12/03/2013 04:07 PM, Chris Burroughs wrote:
This could just be me not being adept at email patches. Sorry if this
is obvious but is this supposed to apply against 1.4 or 1.5?
To answer my own question this applies against 1.5. I'm not
On 12/09/2013 05:02 PM, James Hogarth wrote:
To answer: yes, it is against 1.5 ... The caveats are that peers don't work,
and the session table and load balancing can get messed up due to the lack
of shared information between processes, but if you just need to utilise
multiple stat sockets and the re
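A sketch of the kind of multi-process split being discussed, subject to the caveats above (frontend names and ports are illustrative):

```
global
    nbproc 2

frontend fe_a
    bind-process 1      # this frontend handled only by process 1
    bind :8001

frontend fe_b
    bind-process 2      # this one only by process 2
    bind :8002
```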
On 12/04/2013 02:10 AM, Willy Tarreau wrote:
> We happen to have another CPU we purchased to be good with highly
> threaded Java apps: Intel Xeon CPU E5-2670 0 @ 2.60GHz
>
> It also has a L2 cache per core. This CPU has performed significantly
> better in both "many" and "a few" threaded workloads.
On 12/03/2013 04:07 PM, Chris Burroughs wrote:
This could just be me not being adept at email patches. Sorry if this
is obvious but is this supposed to apply against 1.4 or 1.5?
To answer my own question this applies against 1.5. I'm not sure of the
feasibility or desirabili
On 11/28/2013 03:10 AM, Annika Wickert wrote:
Is this a normal behaviour?
http://imgur.com/I7sRWy2
A graph of similar behavior at nbproc=3. Anecdotally the variance seems
to be higher under lower loads.
Hi James,
This could just be me not being adept at email patches. Sorry if this
is obvious but is this supposed to apply against 1.4 or 1.5?
This is also something that I think we would likely find very helpful in
1.4.
On 11/20/2013 11:32 AM, Avatar wrote:
Oh, we've been waiting for this for so long. A really delightful thing.
Thanks.
On Mon, Nov 18, 2013 at 9:49 PM, James Hogarth wrote:
Hi all,
We've been looking at improving the behaviour
On 11/26/2013 07:25 AM, Chris Burroughs wrote:
As far as I can tell from AMD docs and Vincent's handy /sys trick, each
of the 6 cores has a fully independent L2 cache, and the chip has a
single shared L3 cache.
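Given that topology (a private L2 per core, one shared L3), worker processes can be pinned to specific cores with cpu-map in the global section; a sketch, with illustrative core numbers for this 6-core part:

```
global
    nbproc 3
    # pin each process to its own core so each keeps a private L2,
    # while all three share the chip-wide L3
    cpu-map 1 0
    cpu-map 2 1
    cpu-map 3 2
```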
I'm not sure I'm following the part about the "same part of the
On 11/23/2013 04:13 AM, Willy Tarreau wrote:
This is 25% user and 75% system. It's on the high side for the user, since
you generally get between 15 and 25% user for 75-85% system, but since you
have logs enabled it's not really surprising, so yes, it's in the norm. You
should be able to slightly
I am currently trying to migrate a somewhat over-complicated and
over-provisioned setup to something simpler and more efficient. The
application serves lots of small HTTP requests (often 204 or 3xx), and
gets requests from all sorts of oddly behaving clients or
over-aggressive pre-connecting
diff --git a/doc/configuration.txt b/doc/configuration.txt
index f2043a1..0217139 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -659,7 +659,7 @@ nopoll
nosepoll
Disables the use of the "speculative epoll" event polling system on Linux. It
is equivalent to the command-lin
Using 1.4.24 without USE_LINUX_SPLICE=1 results in an error when setting
'defaults option splice-auto', and maxpipes is reported as zero. This
makes sense.
With USE_LINUX_SPLICE=1 I get maxpipes == maxconn/4 as the docs say.
However, "current pipes" is always listed as 0/0, even under load.
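For context, a sketch of the splice-related settings involved, assuming a build compiled with USE_LINUX_SPLICE=1 as described:

```
# build: make TARGET=linux26 USE_LINUX_SPLICE=1
global
    maxconn 4096        # maxpipes defaults to maxconn/4 when unset

defaults
    option splice-auto  # let haproxy decide when splicing is worthwhile
```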
On 11/18/2013 05:18 AM, Willy Tarreau wrote:
The typical case is a client establishing a connection, not sending anything
over it then closing (or expiring with the time out). When you run with very
short timeouts, it can be caused by requests typed by hand and by network losses.
When you see the
I'm trying to track down a problem with "cR" and "CR" timeouts with
haproxy 1.4. A service I thought was nice and stable turns out to have
# 4xx/# 2xx ~= 0.2 according to hatop (which pulls in data from the
stats socket). Close to 20% client timeouts is far higher than I expected.
The logs l
A variety of nicely formatted mirrors of the docs used to be at:
https://code.google.com/p/haproxy-docs
But all such urls are now returning 403. I'm not sure if they are
"official" or not, but does anyone know what happened to them?
On 02/01/2013 03:07 AM, Cyril Bonté wrote:
> Hi Willy,
>
> Le 01/02/2013 08:44, Willy Tarreau a écrit :
>> I have some vague memories of someone here reporting this on the tomcat
>> ML, resulting in a fix one or two years ago, but I may confuse with
>> something else. Maybe you should experiment w
On 01/31/2013 08:55 AM, Willy Tarreau wrote:
> This one has everything needed, transfer-encoding: chunked specifies the
> size so the connection can stay alive.
>
>> But responses from haproxy still closed with either http-server-close or
>> http-pretend-keepalive set still close the connection.
I'm using haproxy 1.4.17 if that's relevant. I tried replacing
http-server-close with http-pretend-keepalive, which as far as I can
tell had no effect on client-side keepalive behaviour. Responses still
looked something like this:
< HTTP/1.1 200 OK
< Server: Apache-Coyote/1.1
< Cache-Control: ma
eep-alive so the http-pretend-keepalive was added a while ago
> to handle servers like that. Does that work better?
>
> -Bryan
>
>
>
> On Wed, Jan 30, 2013 at 1:36 PM, Chris Burroughs
> wrote:
>
>> From curl:
>> < HTTP/1.1 200 OK
>> < Server:
ote:
> If you're asking for keep-alive from client to haproxy and no keep alive
> from haproxy to server, then that's what the http-server-close option
> provides.
>
> What makes you think that keep alive is not working?
>
> -Bryan
>
>
> On Wed, Jan 30, 20
We are using haproxy with tproxy to front of our various web services.
Most of them are very short lived one-off requests, so we have generally
optimised for closing everything quickly and getting out of the way. We
have a new case where we would like client keep-alives, while maintaining
our traditi
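The usual way to get client-side keep-alive while still closing the server side per request is http-server-close; a minimal sketch:

```
defaults
    mode http
    # keep the client connection alive, but close the server
    # connection after each response
    option http-server-close
```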
On 09/08/2011 02:20 AM, Willy Tarreau wrote:
> On Tue, Sep 06, 2011 at 07:01:44PM -0400, Chris Burroughs wrote:
>> > On 09/01/2011 09:04 PM, Chris Burroughs wrote:
>>> > > I've looked at the source code and I think that's what's going on, but
>
On 09/01/2011 09:04 PM, Chris Burroughs wrote:
> I've looked at the source code and I think that's what's going on, but
> it has been a while since I've read C networking code.
If someone is in a particularly explanatory mood, I'm also trying to
figure out ho
I'm trying to figure out what exactly the httpclose/forceclose is doing
when it forces "the closing of the outgoing server channel as soon as
the server begins to reply and only if the request buffer is empty". Is
it sending a RST?
I've looked at the source code and I think that's what's going on