On 7/16/09 9:51 PM, Willy Tarreau wrote:
Hi,
On Thu, Jul 16, 2009 at 03:52:16AM -0700, Hank A. Paulson wrote:
I have a machine with 1 GB RAM and a Core 2 Duo processor running only
haproxy (and necessary system processes), including rsyslog sending haproxy
logs to other machines. No iptables.
FYI, I think the default unit for times in haproxy is ms, not seconds, so these
are not correct, AFAIK:
clitimeout 60000000 # 16.6 hrs, not 60 seconds
srvtimeout 30000000 # 8.33 hrs, not 30 seconds
contimeout 4000000 # 1.11 hrs, not 4 seconds
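If the intent was 60/30/4 seconds, the corrected lines would be (default unit is milliseconds; the hour figures come from reading the oversized values as ms):

```
clitimeout 60000   # 60 seconds
srvtimeout 30000   # 30 seconds
contimeout 4000    # 4 seconds
# recent haproxy versions also accept explicit unit suffixes, e.g. "clitimeout 60s"
```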
On 8/12/09 10:22 PM, Nelson Serafica wrote:
I was also getting a lot of 502s from Varnish; I had hoped that by ignoring them
they would go away. I don't want to check if they are still there because I think
it might be too soon.
Let me know if you need more tcpdumps, I might be able to supply them (if, of
course, they are still there).
On 8/2
It was .18 then .19 and I just switched to 20 a couple days ago, so I will
check later tonight.
On 8/31/09 12:01 PM, Willy Tarreau wrote:
Hello,
On Mon, Aug 31, 2009 at 11:30:16AM -0700, Hank A. Paulson wrote:
I was also getting a lot of 502s from Varnish, I had hoped by ignoring them
they
if you use haproxy with app-generated-cookie based balancing, it will continue
to send requests with that cookie to that backend as long as that cookie
exists and that backend is up - afaik.
If you look at the cookie in a browser tool, what is the expiration time?
If it is not, as long as you w
what cookies the app uses, and what their expiry date is. But what about
source IP persistence as well? How do we configure the timeout for that?
Thanks,
James
On 3 Sep 2009, at 17:47, Hank A. Paulson wrote:
if you use haproxy with app-generated-cookie based balancing, it will
continue to sen
s and iptable_filter with a standard Centos and iptables. Even
dm_multipath and others that you are not interested in would be expected...
-Original Message-
From: Hank A. Paulson [mailto:h...@spamproof.nospammail.net]
Sent: Thursday, September 03, 2009 1:02 PM
To: HAproxy Mailing Lists
Su
I have:
reqidel ^Cookie
or
reqidel ^Cookie:
or
reqidel ^Cookie:\
or
reqidel ^Cookie:.*
in one backend but the requests are arriving at the servers in that backend
with Cookie headers.
I tried the following:
* I recompiled with and without PCRE
* changed the reqidel lines with and without "^", ":
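For what it's worth, a minimal backend that strips the Cookie header might look like this (the backend name, server name, and address are placeholders, not from the original config):

```
backend bk_nocookie
    option httpclose      # make sure every request gets processed, not just the first
    reqidel ^Cookie:      # case-insensitive match on the header name
    server srv1 192.0.2.10:80
```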
Yes, I have httpclose everywhere since I forgot it once.
Doing some testing and it seems to freak out when I do
reqidel Accept
or something like that. I made a bunch of changes and now it is working, but
there is something about one of the Accept regexes that it didn't seem to like.
I am not sur
On 9/17/09 1:17 PM, David Birdsong wrote:
I'm having trouble with the syntax of the hdr* matching for creating ACLs.
I know that my control flow is correct. By changing the acl
definition to a simple src ip, I get the desired change of backends.
i.e. this will send traffic to img backends (
I'm having trouble with the syntax of the hdr* matching for creating
ACLs.
I know that my control flow is correct. By changing the acl
definition to a simple src ip, I get the desired change of backends.
i.e. this will send traffic to img backends (desired)
# acl from_vnsh hdr_sub -i ng
On 9/17/09 3:18 PM, Willy Tarreau wrote:
Hi Marc,
On Thu, Sep 17, 2009 at 10:49:44AM -0400, Marc wrote:
I didn't specify a TARGET option. This was before the Makefile was changed
to prevent that mistake.
this should not cause an issue either. Can you check if the CPU is
spent in user or syst
I think you are getting a new cookie because of how your openais/etc is
failing over, not haproxy. haproxy doesn't create the cookie, it just passes it
along.
drbd/openais doesn't have a way to maintain the same tcp connection id across
failovers (AFAIK), so when you fail over you get a new tcp
On 9/21/09 1:28 PM, Stefan wrote:
Hello
Am Montag 21 September 2009 20:59:01 schrieb Hank A. Paulson:
I think you are getting a new cookie because of how your openais/etc is
failing over, not haproxy. haproxy doesn't create the cookie, it just passes
it along.
drbd/openais doesn't
Screen shot of some different states of the new "last check field" from
Krzysztof Olędzki
http://imgur.com/LabZH - shows response time and http status
looks great! Thanks to all 1.4 developers and sponsors for their efforts.
$ haproxy -vv
HA-Proxy version 1.4-dev3 2009/09/23
Copyright 2000-200
On 9/29/09 10:41 AM, David Birdsong wrote:
On Tue, Sep 29, 2009 at 10:30 AM, Willy Tarreau wrote:
Many dual-core opterons had no synchronization for their internal timestamp
counters, so those were often disabled and replaced with slow external clock
sources, resulting in poor network performan
On 10/7/09 3:21 PM, Ben Fyvie wrote:
So what our problem really comes down to is why doesn't mongrel quietly stop
receiving requests after monit issues the initial "kill". (FYI - it is our
understanding that calling "mongrel stop" also issues a "kill" command so
there is no nicer way to ask it to
A couple of guesses you might look at -
I have found the stats page to show deceptively low numbers at times.
You might want to check the http log stats that show the
global/frontend/backend queue numbers around the time of those requests. My guess
is that the cases where you are seeing 3 second ti
...@1wt.eu]
Sent: Wednesday, October 14, 2009 12:38 PM
To: Jonah Horowitz
Cc: Hank A. Paulson; haproxy@formilux.org
Subject: Re: Problems with long connect times
Hi Jonah,
On Wed, Oct 14, 2009 at 12:31:07AM -0700, Jonah Horowitz wrote:
driver: tg3
version: 3.98
firmware-version: 5721-v3.55a
bus-info
For the code you are developing, if you make the interface general enough so
that parameters can be added or removed that would be good.
Telnet/text/memcached style protocols seem popular to allow easy
debugging/monitoring.
So if your protocol says a machine has to send a load info bundle like:
On 10/19/09 12:46 PM, Willy Tarreau wrote:
On Mon, Oct 19, 2009 at 12:25:00PM -0700, Hank A. Paulson wrote:
at what point does haproxy begin queuing requests for a backend?
Is it at sum(maxconn * wght)
or sum(maxconn) ignoring wght
^^^ This one precisely
or is it at fullconn even if
I come not to praise SPDY, but to bury it...
On 11/19/09 11:58 PM, Karsten Elfenbein wrote:
Hi,
I still don't see any real advantage of spdy over http/1.1 with pipelining.
google addresses some of those objections in their docs, example:
(HTTP pipelining helps, but still enforces only a FIF
I have a site with 90% of the traffic from a few client IPs that are 300ms or
so away, their gateway software doesn't seem to be dealing with thousands of
connections very well and we can't take advantage of large tcp windows because
the connection is over after one response.
So I am thinking
02:24:27AM -0800, Hank A. Paulson wrote:
I have a site with 90% of the traffic from a few client IPs that are 300ms
or so away, their gateway software doesn't seem to be dealing with
thousands of connections very well and we can't take advantage of large tcp
windows because the connecti
On 1/4/10 2:43 PM, Willy Tarreau wrote:
- Maybe this new timeout should have a default value to prevent infinite
keep-alive connections.
- For this timeout, haproxy could display a warning (at startup) if the value
is greater than the client timeout.
In fact I think that using http-request by
On 1/4/10 9:15 PM, Willy Tarreau wrote:
On Mon, Jan 04, 2010 at 07:05:48PM -0800, Hank A. Paulson wrote:
On 1/4/10 2:43 PM, Willy Tarreau wrote:
- Maybe this new timeout should have a default value to prevent infinite
keep-alive connections.
- For this timeout, haproxy could display a warning
-0800, Hank A. Paulson wrote:
Using git 034550b7420c24625a975f023797d30a14b80830
"[BUG] stats: show UP/DOWN status also in tracking servers" 6 hours ago...
I am still seeing continuous memory consumption (about 1+ GB/hr) at 50-60
Mbps even after the number of connections has stabilized:
O
I wanted to report after using 1.4-dev6 for several sites for a couple days
that the results seem very good.
One site was peaking at over 150 Mbps and over 65 million hits past couple of
days, during that time memory use stayed steady between 1.5-2.5 GB and went
down when load went down.
On
On 1/27/10 9:42 PM, Willy Tarreau wrote:
Hi,
On Wed, Jan 27, 2010 at 04:15:30PM +0100, Franco Imboden wrote:
Hi,
I have a question concerning a failover scenario of a webservice where no
cookies are supported.
as long as nothing happens, all requests should be routed to side A. If side
You have selinux on, so it may be unhappy with some part of haproxy - the
directory it uses, the socket listeners, etc. Turn it off (if you can) until
you get everything working ok. Turning it off requires a reboot.
To see if it is on:
# sestatus
google for how to turn it off
I would back off
On 7 February 2010 02:06, Hank A. Paulson wrote:
You have selinux on, so it may be unhappy with some part of haproxy
- the directory it uses, the socket listeners, etc. Turn it off (if
you can) until you get everything working ok. Turn
For reference, one of the sites I have is F12, 60-90 million hits/day 60-120+
Mbps logging via rsyslogd to 2 different logging servers (with separate error
logging turned on) and it runs fine on a Xen VM with 5 GB and 1 VCPU - always
below 80% CPU load.
Fedora 8 is hundreds of years old, but i
And pcre static is a separate package for Fedora/RedHat.
For RH/Fedora:
yum install pcre-devel pcre
make TARGET=linux26 ARCH=x86_64 USE_REGPARM=1 USE_PCRE=1
then you can do:
./haproxy -vv
to make sure you have several poll options, just having select seems strange.
Also, nbproc should be 1
On 3/30/10 11:49 PM, Willy Tarreau wrote:
On Wed, Mar 31, 2010 at 02:17:37AM -0400, Geoffrey Mina wrote:
There was nothing between the two but a switch... although, disabling the
Windows firewall on the IIS server seems to have fixed the problem! I don't
have much experience with the built in w
On 1/16/10 5:46 PM, Bart van der Schans wrote:
Hi,
A few days ago there's was some interest in munin plugins for haproxy.
I have written a few plugins in perl. The code is fairly straightforward
and should be quite easy to adjust to your needs. The four attached
plugins are:
- haproxy_check_durat
You probably have to monitor the log with a log watching tool and have it run
the script. Or use the haproxy socket to monitor and trigger the script.
On 4/10/10 7:31 AM, Gullin, Daniel wrote:
Yes I know, but mean that I got active/backup on the webfarm. I have one
webserver that is active and
On 4/12/10 8:25 AM, Dirk Taggesell wrote:
On Mon, Apr 12, 2010 at 12:07 AM, David Birdsong
wrote:
It's expected that consistent hashing won't provide an even
distribution to your backends. You'll need to adjust the weights of
each server to even out the traffic if you want a smooth distrib
Hi,
A few more troubleshooting ideas:
Also, if you dig www.wildfalcon.com / wildfalcon.com, does it resolve to the
IPs on the haproxy box?
If you telnet to www.wildfalcon.com 80 on the haproxy box does it work?
If so then if you do tcpdump for port 53 and watch that during the haproxy
start u
On 5/17/10 10:24 PM, Willy Tarreau wrote:
On Mon, May 17, 2010 at 07:42:03PM -0700, Hank A. Paulson wrote:
I have some sites running a similar set up - Xen domU, keepalived,
fedora not RHEL and they get 50+ million hits per day with pretty
fast response. you might want to use the "log sep
s in if you have a portal that people sign into and then
have a menu/navbar that they can choose different services that should be
going to different front/backends.
On 5/18/10 3:49 PM, Chih Yin wrote:
On Mon, May 17, 2010 at 11:11 PM, Hank A. Paulson wrote:
On 5/18/10 7:45 PM, Hank A. Paulson wrote:
I am wondering if the
previous admin set up more than one copy of haproxy and that is why
several services are redirected to the same machine - like glassfish
prod there is no other reference to port 4850 in this config, so what is
running on port 4850
That is because haproxy does not _yet_ parlez ssl so it can't see the http
level attributes to route requests with them.
But there is good news and bad news related to that - since I am from the
future I can tell you that haproxy will have ssl encrypt/decrypt capabilities
added in version 1.8.
I got this error hit via the haproxy socket, I noticed that there are a few
hits when searching for it, all related to corrupt headers with lighttpd and
people seem to be assuming it is lighttpd's fault but in the case I received,
it is clear that there are some junk characters at the beginning
On 6/27/10 9:55 PM, Willy Tarreau wrote:
Hi Hank,
On Sun, Jun 27, 2010 at 02:12:35PM -0700, Hank A. Paulson wrote:
I got this error hit via the haproxy socket, I noticed that there are
a few hits when searching for it, all related to corrupt headers with
lighttpd and people seem to be assuming
I have an old URL that needs to be redir'd to a new url in several situations
but I can't see how to avoid looping.
acl hdr_sub(host) host1
acl hdr_sub(host) host2
acl oldurl path_beg /oldblah
acl newurl path_beg /newu
redirect location http://host1/newu if oldurl host2
The problem is if I ha
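As a sketch of what I mean (the acl names here are mine; host1/host2 and the paths are from above), guarding the redirect with a negated acl is one way to avoid the loop:

```
acl on_host2 hdr_sub(host) host2
acl oldurl   path_beg /oldblah
acl newurl   path_beg /newu
# only redirect requests that are not already at the new URL
redirect location http://host1/newu if oldurl on_host2 !newurl
```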
Also, what is the rule for multiple if conditions? I am missing it if it is in
the docs.
reqirep blah if a b or c
is that:
(a and b) or c
or
a and (b or c)
or
something else :)
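A hedged illustration (A, B, C, and bk_x are placeholder names): as far as I know haproxy has no parentheses in conditions; "and" is implicit between terms and "or" separates groups, so:

```
# "if A B or C" is evaluated as (A and B) or C
use_backend bk_x if A B or C

# to express A and (B or C), repeat A in each group:
use_backend bk_x if A B or A C
```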
Can the file based lists be used for any matches or just IPs?
I think something else is going on - do you have iptables running?
Is it redir-ing from 80 to 7000 or something?
What if you just run haproxy on web01 and apache/blah on web02 and comment out
web01 in your haproxy config so you have just one path to debug to start.
On 8/4/10 9:44 PM, Willy Ta
You are going to need something like mysql proxy for that.
haproxy can only look at the attributes of a connection (port, whether there is
data available from the client (aka did the client speak first), etc.),
and in the case of http, the headers - but it can't look at the full contents of
the whole data str
On 8/17/10 2:16 PM, Turlapati, Sreenivasa wrote:
Thxs a lot.
Sorry, if I am misguiding you.
I am just curious to know, when HAProxy is set at TCP mode, we want to
scan or glance over the incoming request for a particular string say
'XYZ'. If the incoming request contains the 'XYZ' string, route
On 8/17/10 4:13 PM, Cyril Bonté wrote:
Le mercredi 18 août 2010 00:32:55, Turlapati, Sreenivasa a écrit :
Hi,
We are not trying to load balance across our databases.
We got simple client - server architecture.
Here is the brief description . when the application users logged into the
Front En
No, or not in a stock version - are all the POSTs in your system convertible?
I don't think you can write a general POST->GET converter because there are
options with POSTs that are not available with GET - my favorite, "100
Continue" - is that available to use with GET as far as the protocol de
What did the haproxy stats web page show during the test?
How long was each test run? Many people seem to run ab for only a few seconds.
Was tomcat "doing" anything for the test urls? I am a bit shocked you got 3700
rps from tomcat. Most apps I have seen on it fail at much lower rps.
Raise the max c
I have a site that is using a url based stickiness:
example.com/something/:delim:K:delim:unique_session_info_here:delim::delim:/more_path_stuff...
right now the delim is ( and as far as I know that doesn't need to be encoded.
The problem is haproxy 1.4 can't seem to see that in the URL for appse
Anyone have any ideas about how much effort it would take to add
substring of url_path as a stick pattern? Current stick patterns are noted in
section 7.8 of the 1.4.8 manual but don't currently include substring :(
On 10/13/10 4:01 PM, Hank A. Paulson wrote:
I have a site that is using
Copied a working 1.4.8 config to a Fedora 14 box with Fedora compiled 1.4.8
haproxy and it says unknown option 'splice-auto'.
Is that correct?
# service haproxy restart
[ALERT] 292/190016 (3644) : parsing [/etc/haproxy/haproxy.cfg:2] : unknown
option 'splice-auto'.
[ALERT] 292/190016 (3644) : E
Thanks, as soon as I hit enter, I realized the problem - since I just cut and
paste the make line I don't think about it that much.
On 10/21/10 12:00 PM, Cyril Bonté wrote:
Hi Hank,
Le jeudi 21 octobre 2010 20:41:11, Hank A. Paulson a écrit :
Copied a working 1.4.8 config to a Fedora 1
I don't have benchmarks, but have sites running haproxy on Xen VMs with apache
on Xen VMs and can pump 120 Mbps and 80 million hits a day through one haproxy
VM and that is with haproxy doing rsysloging of all requests to 2 remote
rsyslog servers on top of the serving of requests with some layer
Just a guess, but is there something that might be doing reverse dns lookups
for each request when using haproxy? I find when I turn on tcpdump on port 53
on a firewall or router, I and others are surprised at how much reverse lookup
traffic there is going on in any given environment.
On 10/26
Can someone give a request/response level example of how request-learn works?
I can't understand how, if haproxy has not seen the cookie going out, it can tell
where to direct it when it comes in.
Thanks.
If you are trying to failover "only" an IP address(es) and haproxy - do
yourself a huge favor and just use keepalived. It is fast and painless to set
up and maintain.
http://www.keepalived.org/
People spend a lot of time trying to get everything "fully" automated and then
it often doesn't perf
Where is the rest of your haproxy config - if you are talking to port 443 on
your tomcat servers...
If you have the 2 backend servers and you want haproxy to talk to the
encrypted/ssl ports on them (and you want your end users to see the certs that
are on the tomcat servers) then the only
A few ideas that you might or might not want to consider:
* As another poster just mentioned you might consider ICP but they suggested
having all your squids talk to one master squid. I would instead maybe do this:
Currently, my understanding of your layout:
haproxy -> hashed_url -> squid X
Accept-* headers talk about what the ends of the connection want in terms of
page content. What is allowed in the headers themselves is a different part of
the spec, not spec'd by the content of a header but by the spec itself.
Many HTTP/1.1 header field values consist of words separated by LWS
I have a possible bug, I have a backend I want strip all the X-* headers off
the requests. But I found that if I did:
reqidel ^X
reqidel ^Via:\
or
reqdel ^x-.*:\
reqdel ^Via
or similar
haproxy [1.4.8 (Fedora package version) and hand compiled 1.4.9 version both
using pcre] both would not re
1 - With recent CPUs (Intel 5300/5400/5500/5600 and AMD 6100) the set of optimal
compiler settings for optimizations :) is not something anyone can keep up
with - not to mention different versions of gcc that understand none, some or
all of the features of these CPUs. -march=native allows gcc to ta
Looks good in my limited test cases, headers are gone regardless of ordering
of del statements, but in your notes:
"but since headers were the last header processing, the issue
remained unnoticed."
You mean "cookies were the last", right?
On 11/27/10 10:14 PM, Willy Tarreau wrote:
Hi Cyril an
Please see the thread:
"need help figuring out a sticking method"
I asked about this; Willy says there are issues figuring out a workable
config syntax for 'regex to pull the URL/URI substring' but (I think) that
coding the functionality is not technically super-difficult, just not enough
hand
On 12/19/10 9:46 PM, Willy Tarreau wrote:
Hi Craig,
On Thu, Dec 16, 2010 at 11:47:51PM +0100, Craig wrote:
A typical use-case is a special server from your cluster that fulfills a
special maintenance task; I guess it's a common use-case. Any opinions
on this?
How would this work if you have
Me too, 1 or 2 per day usually - but my server rejects them and then the
maillist server complains that msgs to me are bouncing:
Some messages to you could not be delivered. If you're seeing this
message it means things are back to normal, and it's merely for your
information.
Here is the list
On 6/30/10 9:50 PM, Willy Tarreau wrote:
On Wed, Jun 30, 2010 at 08:53:19PM -0700, Bryan Talbot wrote:
See section 7.7: AND is implicit.
7.7. Using ACLs to form conditions
--
Some actions are only performed upon a valid condition. A condition is a
combination o
Ah, the old "use A more than once, dummy" approach - I did not see that coming.
Thanks :)
On 1/7/11 3:04 PM, Bryan Talbot wrote:
Doesn't this work?
... if A B1 or A B2 or A B3 or A B4
-Bryan
On Fri, Jan 7, 2011 at 7:16 AM, Hank A. Paulson wrote:
I think this covers the most cases, I am not sure if the "-i" is needed or not:
acl acl_aspc url_dom -i autos-prestige-collection HTTP_URL_ABS
acl acl_aspc hdr_dom(Host) -i autos-prestige-collection
use_backend aspc if acl_aspc
On 1/12/11 11:38 AM, Bryan Talbot wrote:
I think the problem is
I got a segfault at start up when parsing a config that uses pattern files.
Same config runs under 1.4.10
Commenting out that line prevents the segfault.
Sending more info directly to Willy
On 3/8/11 2:18 PM, Willy Tarreau wrote:
Hi,
I'm announcing haproxy 1.4.12. I know I did not take the ti
Is ICY really listening on localhost:3128 ?
If you telnet directly to that, does it work?
On 3/9/11 6:36 PM, David Young wrote:
Hi folks,
First-time poster here - we've been working on implementing haproxy to
perform load balancing between our backend squid proxies.
I stumbled across an issue
Not necessarily a solution to this performance issue, but I was thinking about
how to get to that next level of performance for haproxy.
Here is an idea I had that is a bit far out. Supermicro and others now have
GPU servers - TESLA from NVIDIA, etc. A project from Korea has used these
"GPGPU"
I recently found this resource:
http://www.countryipblocks.net/
on the day they say they are closing due to lack of donations. :(
I thought other hap users might be interested in this use case and will
hopefully think about donating, too.
For one site targeting users in several countries au, nz
I can't see what I am missing here.
Any help is appreciated
Jun 14 02:00:00 localhost haproxy[3052]: 10.101.1.2:2892
[14/Jun/2011:02:00:00.088] w wi/wi-9 35/111/-1/-1/146 503 212 W=9 - CQ--
202/202/27/18/0 1/0 {w.x.y.z|Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US)
AppleWebKit/534.7 (KHTML, l
Try adding:
option httplog
under your listen. I am not sure what haproxy does if you say tcplog after
saying httplog, so you want to make sure you have httplog since those log entries
provide more info. Run with "option httplog" on the listen during the busy
time and post some examples of th
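A minimal listen section with http logging, as a sketch (the section name, addresses, and server are placeholders):

```
listen www :80
    mode http
    option httplog
    log 127.0.0.1 local0
    server srv1 192.0.2.10:80
```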
On 7/11/11 11:22 AM, James Bardin wrote:
On Mon, Jul 11, 2011 at 2:18 PM, Alexander Hollerith
wrote:
Thank you very much for pointing me into that direction. I think that definitely answers
my question. Since haproxy itself might keep more than one process alive after dealing
with an "-sf" (
On 7/18/11 5:25 PM, Dmitriy Samsonov wrote:
My final task is to handle DDoS attacks with flexible and robust
filter available. Haproxy is already helping me to stay alive under
~8-10k DDoS bots (I'm using two servers and DNS RR in production), but
attackers are not sleeping and I'm expecting atta
I think the problem here is that the EC2 way of doing automatic server
replacement is directly opposite to normal and sane patterns of doing server
changes in other environments. So someone on EC2 only is thinking this is a
process to hook into and use and others, like Willy, are thinking wtf? why
I am going around again about cookie-less sessions and just want to double
check that nothing works for them :)
In 1.5 there is the stick on url param option, but afaict this and everything
else won't work in a situation where you have two things:
1 - clients that don't support cookies.
2 - se
Sorry, I meant working with balance url_param hashing
On 8/5/11 2:13 PM, Hank A. Paulson wrote:
I am going around again about cookie-less sessions and just want to double
check that nothing works for them :)
In 1.5 there is the stick on url param option, but afaict this and everything
else
On 8/5/11 3:01 PM, Baptiste wrote:
Hi Hank
Actually stick on URL param should work with client which does not
support cookies.
is the first reply a 30[12] ?
So you are saying that stick on URL param reads the outgoing 302 and saves the
URL param from that in the stick table in 1.5? If so, grea
On 8/6/11 12:32 AM, Willy Tarreau wrote:
Hi Baptiste,
On Sat, Aug 06, 2011 at 09:24:08AM +0200, Baptiste wrote:
On Sat, Aug 6, 2011 at 8:51 AM, Hank A. Paulson
wrote:
On 8/5/11 3:01 PM, Baptiste wrote:
Hi Hank
Actually stick on URL param should work with client which does not
support
I was wondering if acls that I create in the frontend should be available in
backends, too? I was getting errors when I tried but the error disappeared
when I either moved the reqadd/rspadd to the frontend or if I used a
predefined acl like LOCALHOST.
Thanks.
Does hdr_cnt not work, or am I just completely unable to get an example that
works? I can't imagine it doesn't work, but I have tried _many_ examples
and nothing seems to work (maybe it is 40+ hrs):
acl hdrcnttest hdr_cnt gt 0
reqadd x-has-host:\ YES if hdrcnttest
acl hdrcnttest hdr_cnt
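In case it helps, my understanding (I'm not certain of this) is that hdr_cnt wants the header name in parentheses, e.g.:

```
# count occurrences of the Host header in the request
acl has_host hdr_cnt(Host) gt 0
reqadd x-has-host:\ YES if has_host
```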
t 3:06 PM, Hank A. Paulson
wrote:
does hdr_cnt not work or am I just completely unable to get an example that
works? I can't imagine it doesn't work but I have tried _many_ - some
examples and nothing seems to work (maybe it is 40+ hrs):
acl hdrcnttest hdr_cnt gt 0
reqadd x-has-host:\ Y
On 9/9/11 2:13 AM, Willy Tarreau wrote:
Hi Hank,
On Thu, Sep 08, 2011 at 07:12:29PM -0700, Hank A. Paulson wrote:
Whether I have the rules in the backend or the front does not seem to
make a difference - I tried some rules in front and back and neither
worked.
Maybe I am missing something
can you provide some valid examples of using http_req_first,
acl aclX http_req_first
or
use_backend beX if http_req_first
does not seem to work for me in 1.4.17
Thanks.
You can get weird results like this sometimes if you don't use option httpclose or
any other http closing option on http backends. You should paste your config.
Maybe there should be a warning, if there is not already, for that situation -
maybe just when running "-c".
On 9/19/11 5:46 AM, Christoph
uri /
Le 20/09/11 01:27, « Hank A. Paulson » a
écrit :
You can get weird results like this sometimes if you don't use http-close
or
any other http closing option on http backends. You should paste your
config.
Maybe there should be a warning, if there is not already, for that
situation -
On 10/2/11 1:57 PM, Slawek wrote:
On 02/10/2011 21:44, Cyril Bonté wrote:
I've following setup: haproxy-public -> varnish -> haproxy-director,
where haproxy-public monitors haproxy-director.
I noticed that haproxy frequently fails to intercept URI specified in
monitor-uri and passes that reques
I am not sure if these counts are exceeding the "never" threshold
500 when haproxy encounters an unrecoverable internal error, such as a
memory allocation failure, which should never happen
I am not sure what I can do to troubleshoot this since it is in prod :(
Is there a way to set
On 10/3/11 12:19 PM, Brane F. Gračnar wrote:
On Monday 03 of October 2011 20:09:17 Hank A. Paulson wrote:
I am not sure if these counts are exceeding the "never" threshold
500 when haproxy encounters an unrecoverable internal error, such as a
memory allocation fail