I like the default message. If you want to suppress it, then you can use -q.
Having some standard output that can be suppressed with -q is also
fairly standard for UNIX commands.
On Mon, Nov 13, 2023 at 4:07 AM William Lallemand
wrote:
>
> On Mon, Nov 13, 2023 at 09:52:57AM +0100, Baptiste
I agree with defaulting to alpn h2,http/1.1 sooner (don't wait for 2.9),
and even 2.6 would be fine IMO. It wouldn't be a new feature for 2.6,
only a non-breaking (AFAIK) default change...
I would have concerns making QUIC default for 443 ssl (especially
prior to 2.8), but you are not suggesting that
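For anyone who wants that behavior explicitly today, a bind line along these lines does it (section names and cert path are placeholders):

```
frontend fe_https
    # advertise h2 first, with http/1.1 as the fallback - what the proposed default would do
    bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
    default_backend be_app
```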
Assuming no direct access to apache servers, does anyone know if
haproxy would by default protect against these vulnerabilities?
What exactly is needed to reproduce the poor performance issue with openssl
3? I was able to test 20k req/sec with it using k6 to simulate 16k users
over a WAN. The k6 box did have openssl 1. Probably could have sustained
more, but that's all I need right now. Openssl v1 tested a little faster,
The SYN-ACK tracking works in transparent mode with haproxy. I have set up
haproxy to rebind all connections before and basically proxy the internet
(and use NAT for udp). That said, I assume the point of DSR is that it's
not always going to take the same path and that is where the real issue
is.
That's what, 50s? You are probably doing connection pooling and it's using
LRU instead of actually cycling through connections. At least that is what
I have seen node typically do.
Instead of 50 seconds, try:
timeout client 12h
timeout server 12h
You might want to enable
Not positive it's the only use case, but I have a number of UDP ports also
open, so I ran tcpdump on them and they are all talking to syslog. Seems to
line up at about 1 per CPU on a couple of machines I checked.
On Fri, Aug 5, 2022 at 7:19 PM Shawn Heisey wrote:
> I am running haproxy in a couple of
Here is your answer:
Layer7 wrong status, code: 401, info: "Unauthorized"
Your health check is not providing the required credentials and is failing.
You can either fix that, or as you only have one backend, you might want to
remove the check as it's not gaining you much with only one backend.
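As a sketch of the first option (backend name, URL, IP, and the base64
credentials are placeholders, and the http-check send syntax assumes
haproxy 2.2+):

```
backend be_app
    option httpchk
    # send the credentials the app expects with each probe
    http-check send meth GET uri /health hdr Authorization "Basic dXNlcjpwYXNz"
    http-check expect status 200
    server app1 10.0.0.10:8080 check
```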
On
http-request deny deny_status 405 if { url_sub -i "\$\{jndi:" or
hdr_sub(user-agent) -i "\$\{jndi:" }
was not catching the bad traffic. I think the escapes were causing issues
in the matching.
The following did work:
http-request deny deny_status 405 if { url_sub -i -f
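In other words, moving the substrings into a pattern file sidesteps the
quoting problem entirely. A minimal sketch (the file path is hypothetical;
the file holds one plain substring per line, e.g. ${jndi: with no quoting):

```
# in the frontend; /etc/haproxy/jndi.pat is an illustrative path
http-request deny deny_status 405 if { url_sub -i -f /etc/haproxy/jndi.pat }
http-request deny deny_status 405 if { hdr_sub(user-agent) -i -f /etc/haproxy/jndi.pat }
```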
If you want them to all use the same outgoing IP, you could place them
behind a NAT router instead of using outgoing proxy server.
That said, if you do want to use haproxy, I think you will want to use
"usesrc client" in the haproxy config, and the haproxy server will also need
the prerouting
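For reference, the haproxy side of that looks roughly like this (names and
IP are placeholders; it also needs kernel TPROXY support and the usual
iptables/ip rule setup on the haproxy box):

```
backend be_app
    # connect to the backend using the client's source IP
    source 0.0.0.0 usesrc client
    server app1 10.0.0.10:80 check
```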
Sounds like the biggest part of hot restarts is the cost of leaving the old
process running, as they have a lot of long-running TCP connections, and if
you do a lot of restarts the memory requirements build up. Not much of an
issue for short-lived http requests (although it would be nice if keep
A couple of possible options...
You could use tcp-request inspect-delay to delay the response a number of
seconds (and accept it quickly if it is legitimate traffic).
You could use redirects, which will have the clients do more requests
(possibly with the inspect delays).
That said, it would be useful to
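A rough sketch of the first idea (names, the delay, and the whitelist file
are placeholders):

```
frontend fe_http
    bind :80
    # hold suspect connections for up to 10s before forwarding
    tcp-request inspect-delay 10s
    # let known-good sources through immediately
    tcp-request content accept if { src -f /etc/haproxy/good-ips.lst }
    default_backend be_app
```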
CentOS 6 isn't EOL until the end of the month, so there are a couple more
weeks left.
There is at least one place to pay for support through 2024.
($3/month/server)
Might be good to keep it for a bit past EOL, as I know when migrating
services sometimes I'll throw a proxy server on the old
I could be wrong, but I think he is stating that if you have that
allowed, it can be used to get a direct connection to the backend,
bypassing any routing or acls you have in the load balancer, so if
some endpoints are blocked, or internal only, they could potentially
be accessed this way.
For
Look into module rpaf for apache along with option forwardfor in haproxy
and no need for routing changes, or you can setup haproxy as a transparent
proxy (source usesrc client) and not change apache but would require
routing changes on the apache servers.
-Original Message-
From: Simon
You could setup the acls so they all goto one backend, and thus limit the
number of connections on that backend to something low like 1. Not exactly
rate limit, but at most 1 connection to server them all...
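Something along these lines (names, path, and IP are placeholders):

```
frontend fe_http
    bind :80
    acl limited path_beg /expensive
    use_backend be_throttled if limited
    default_backend be_app

backend be_throttled
    # at most 1 concurrent connection to the server; extras queue up
    server app1 10.0.0.10:80 maxconn 1
```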
-Original Message-
From: hapr...@serverphorums.com
I tend to have a really large rise and a small fall, like fall 2 and rise 9
(or 99 or higher would be good if you want to ensure it stays down long
enough to trigger). That way servers stay dead for a while, but can go down
quickly.
Anyways, so that it shows in my monitoring system I have this in my zabbix
cfg
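On a server line that looks something like this (IP and interval are
placeholders):

```
# down after 2 failed checks, back up only after 9 consecutive passes
server app1 10.0.0.10:80 check inter 2s fall 2 rise 9
```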
There are all sorts of kernel tuning parameters under /proc that can make
a big difference, not to mention what type of virtual NIC you have in the
VM. Are they running the same kernel version and Gentoo release? Have
you compared sysctl.conf (or whatever Gentoo uses to customize settings in
There is a brief time between the switchover from the old process to the
new where new connections can not be accepted. Better to mark the backend
servers down without switching processes. (Several ways to do that).
If the refused connections concern you, and you can't avoid restarting
haproxy,
Been using haproxy for some time. but have not used it with SSL yet.
What is the best option to implement SSL? There seem to be several
options, some requiring 1.5 (which isn't exactly ideal, as 1.5 isn't
considered stable yet).
I do need to route based on the incoming request, so decode
Newer versions of stunnel probably perform better.
-Original Message-
From: Brane F. Gračnar [mailto:brane.grac...@tsmedia.si]
Sent: Tuesday, December 13, 2011 5:21 PM
To: David Prothero
Cc: John Lauro; haproxy@formilux.org
Subject: Re: SSL best option for new deployments
On 12/13
Also, how large is large? 4GB?
-Original Message-
From: Baptiste [mailto:bed...@gmail.com]
Sent: Friday, October 28, 2011 5:48 PM
To: Justin Rice
Cc: haproxy@formilux.org
Subject: Re: HAProxy and Downloading Large Files
hi,
What do HAProxy logs report you when the error
I suggest you use balance leastconn instead of roundrobin. That way the
weights affect the ratios, but they are not locked in. If a server clears
connections faster than the others, it will get more requests... if it
falls behind it will get fewer...
Given that multiple factors impact how many
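A minimal sketch (names, IPs, and weights are placeholders):

```
backend be_app
    balance leastconn
    # weights bias the choice, but a server that clears connections
    # faster will naturally pick up more of the load
    server fast 10.0.0.10:80 weight 20 check
    server slow 10.0.0.11:80 weight 10 check
```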
Is there an easy way to have haproxy log the host with the uri instead of
just the relative uri? I have some 503 errors, and they are going to
virtual hosts on the backend and I am having some trouble tracking them
down, and the uri isn't specific enough as it is common among multiple
hosts.
Works great. I have several pairs of vm haproxy servers in transparent mode
and running heartbeat to take over the shared IP.
-Original Message-
From: Jason J. W. Williams [mailto:jasonjwwilli...@gmail.com]
Sent: Tuesday, September 27, 2011 3:46 PM
To: haproxy@formilux.org
...
-Original Message-
From: Jason J. W. Williams [mailto:jasonjwwilli...@gmail.com]
Sent: Tuesday, September 27, 2011 6:13 PM
To: John Lauro
Cc: haproxy@formilux.org
Subject: Re: TPROXY + Hearbeat
Hey John,
Thanks for the quick response. That's great to know. So both the VIPs
and the shared IP
Thanks, that worked.
-Original Message-
From: Baptiste [mailto:bed...@gmail.com]
Sent: Tuesday, September 27, 2011 6:02 PM
To: John Lauro
Cc: haproxy@formilux.org
Subject: Re: Log host info with uri
You might want to use capture request header host len 64
cheers
On Tue, Sep 27
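For reference, in context that is (frontend name is a placeholder); the
captured value then shows up in braces in the HTTP log line next to the URI:

```
frontend fe_http
    bind :80
    mode http
    capture request header Host len 64
    default_backend be_app
```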
be done.
-Original Message-
From: Jason J. W. Williams [mailto:jasonjwwilli...@gmail.com]
Sent: Tuesday, September 27, 2011 8:03 PM
To: John Lauro
Subject: Re: TPROXY + Hearbeat
Hey John,
Thank you for the giving me more detail. I really appreciate it.
We're moving from a pair
Are you using connection tracking with iptables? If so, you might want to
consider using a more basic configuration without connection tracking.
What does your iptables configuration look like?
From: Joe Torsitano [mailto:jtorsit...@weatherforyou.com]
Sent: Saturday, December 19,
Is there some simple configuration option(s) staring me in the face
that I'm missing, or is this more complex than it seems on the surface?
In terms of some simple configuration option...
Why not just have 3 active? If one is down, its load will automatically be
routed to the other two.
I think haproxy can only do header manipulation with HTTP. In other words,
rspadd will not work with mode tcp.
You should be able to have your PHP script add the custom header.
Postfix can handle a lot of outgoing mail. If you don't mind me asking, I'm
just curious how much mail are you
to tune apache. As others have mentioned, telling
the crawlers to behave themselves or totally ignore the wiki with a robots
file is probably best.
-Original Message-
From: Wout Mertens [mailto:wout.mert...@gmail.com]
Sent: Monday, November 16, 2009 7:31 AM
To: John Lauro
Cc: haproxy
I would probably do that sort of throttling at the OS level with iptables,
etc...
That said, before that I would investigate why the wiki is so slow...
Something probably isn't configured right if it chokes with only a few
simultaneous accesses. I mean, unless it's an embedded server with under
I see two potential issues (which may or may not be important for you).
1. Non http 1.1 clients may have trouble (ie: they don't send the host
on the URL request, or if they are not really http but using port 80).
2. Back tracking if you get a complaint from some website (ie: RIAA
You could run mode tcp if you set up haproxy in transparent mode.
From: Dirk Taggesell [mailto:dirk.tagges...@googlemail.com]
Sent: Wednesday, October 28, 2009 9:03 AM
To: haproxy@formilux.org
Subject: Backend sends 204, haproxy sends 502
Hi all,
I want to load balance a new server
the client's
IP with source haproxyinterfaceip usesrc client
Might be good if the transparent mode documentation had a reference to usesrc.
From: Dirk Taggesell [mailto:dirk.tagges...@googlemail.com]
Sent: Wednesday, October 28, 2009 9:48 AM
To: John Lauro
Subject: Re: Backend sends 204, haproxy
You mention the loopback interface. You could be running out of port numbers
for the connections.
What's your /proc/sys/net/ipv4/ip_local_port_range?
What does netstat -s | grep -i listen show on the server?
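If the port range is the problem, it can be widened, e.g. (the exact bounds
are a judgment call):

```
# /etc/sysctl.conf (then sysctl -p), or sysctl -w for a live change
net.ipv4.ip_local_port_range = 1024 65535
```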
-Original Message-
From: David Birdsong [mailto:david.birds...@gmail.com]
under netstat -s
-Original Message-
From: David Birdsong [mailto:david.birds...@gmail.com]
Sent: Wednesday, October 21, 2009 7:07 AM
To: John Lauro
Cc: haproxy
Subject: Re: slow tcp handshake
On Wed, Oct 21, 2009 at 3:51 AM, John Lauro
john.la...@covenanteyes.com wrote:
You mention
Sorry to report, from 1.3.21:
Oct 13 23:36:43 haf1a kernel: haproxy[25428]: segfault at 19 ip
0041620f sp 7381ef60 error 4 in haproxy[40+3d000]
(I know, kind of old, as we were running 1.3.18 on this box, so not sure
which version the problem started)
Compiled with:
make
: Krzysztof Olędzki [mailto:o...@ans.pl]
Sent: Wednesday, October 14, 2009 5:54 AM
To: John Lauro
Cc: haproxy@formilux.org
Subject: Re: [ANNOUNCE] haproxy 1.4-dev4 and 1.3.21
On 2009-10-14 10:47, John Lauro wrote:
Sorry to report, from 1.3.21:
Oct 13 23:36:43 haf1a kernel: haproxy[25428
It works well. Don't forget divider=10 for even better performance.
From: geoffreym...@gmail.com [mailto:geoffreym...@gmail.com]
Sent: Friday, September 25, 2009 9:45 PM
To: haproxy@formilux.org
Subject: HAProxy - Virtual Server + CentOS/RHEL 5.3
Hello,
Does anyone know of any reason
...@gmail.com]
Sent: Friday, September 25, 2009 10:17 PM
To: John Lauro; geoffreym...@gmail.com
Cc: haproxy@formilux.org
Subject: Re: RE: HAProxy - Virtual Server + CentOS/RHEL 5.3
I have no idea what divider=10 is... but I'm sure I'll figure it out as I
start getting myself familiar with HAProxy
service iptables stop
should take care of it in CentOS.
Although your lsmod doesn't make sense. It should be showing ip_conntrack
and ip_tables and iptable_filter with a standard CentOS and iptables. Even
dm_multipath and others that you are not interested in would be expected...
# cat /proc/net/nf_conntrack | wc -l
50916
# service iptables stop
(it was never started)
# cat /proc/net/nf_conntrack | wc -l
65358
This is Fedora, sorry, not CentOS.
the only other thing running is keepalived to manage the ip address for
haproxy.
On 9/3/09 10:16 AM, John Lauro wrote
I don't think you can easily have two health checks. You could also do port
forwarding with iptables or inetd/xinetd and run the health check on a
different port. Stop the forwarding when you want maintenance mode.
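The haproxy side would then just check the side port (IP and port are
placeholders), with iptables or xinetd forwarding that port to the real
service; stop the forward to drop into maintenance mode:

```
# health check goes to 8081 (the forwarded port), real traffic to 80
server app1 10.0.0.10:80 check port 8081
```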
-Original Message-
From: Matt [mailto:mattmora...@gmail.com]
Sent:
The biggest issue probably is that you are using cookies that will tie a
client to a server. For me, I noticed it often takes over 3 days to get the
first 80% of the traffic off if I mark a server as soft down, as people never
reboot or close the browser. If you have more random traffic and less
Is there something I need to change in my config? I set it as leastconn
to balance traffic but it isn't balancing. Can haproxy know to transfer
requests to another backend server when one has much more traffic than
the others?
It cannot transfer to a different server when you tie the
Do you have haproxy between your web servers and the 3rd party? If not (ie:
only to your servers), perhaps that is what you should do. Trying to throttle
the maximum connections to your web servers sounds pointless, given that it
doesn't correlate well with the traffic to the third party
Nearly an extra 0.1s seems high, but to be fair it doesn't appear you did
much of a test:
Number of clients running queries: 1
Average number of queries per client: 0
Simulating only 1 client, I wouldn’t expect any performance improvement, and
without doing any queries, you
(Ignore the previous message that had this response replying to the wrong
message.)
I set mine to alert if the queue is ever non-0, and for my graphs I just use
current sessions, and also total connections (graphed as delta / sec) for
connection rate.
I assume you normally have a queue during busy times
Bind only to the port (not a specific address), so it doesn't matter if
additional addresses are added or removed.
From: Daniel Gentleman [mailto:dani...@chegg.com]
Sent: Thursday, July 23, 2009 6:13 PM
To: haproxy@formilux.org
Subject: HAProxy and FreeBSD CARP failover
Hi list.
I'd like to set up a redundant
Are you certain there is no issue with the web server? I have seen (years
ago, prior to my use of haproxy) apache produce strange problems like this
for IE that Firefox was able to cope with, if its access_log file reached
2GB. On a busy server, that is easily reached in days or sooner, and
This is what I use to reload:
haproxy -D -f /etc/lb-transparent-slave.cfg -sf $(pidof haproxy)
(Which looks up the process id with pidof instead of reading it from a pid
file, but that shouldn't matter.)
The main problem is you are (-st) terminating (aborting) existing
connections instead of (-sf) finishing
It's a little different config than I have, but it looks ok to me.
What's haproxy -vv give?
I have:
[r...@haf1 etc]# haproxy -vv
HA-Proxy version 1.3.15.7 2008/12/04
Copyright 2000-2008 Willy Tarreau w...@1wt.eu
Build options :
TARGET = linux26
CPU = generic
CC = gcc
That would be nice, but I don’t think so (at least not completely). Using
“balance leastconn” will give the faster servers a little more as they will
clear their connections quicker.
From: Sihem [mailto:stfle...@yahoo.fr]
Sent: Wednesday, March 18, 2009 6:26 AM
To: haproxy@formilux.org
Mine don't appear to have that much difference. Are any of the servers
down, or maybe reaching their session limits? What do your retr and redis
look like?
From: Sun Yijiang [mailto:sunyiji...@gmail.com]
Sent: Tuesday, March 17, 2009 3:18 AM
To: kuan...@mail.51.com
Cc: haproxy@formilux.org
Not built into Haproxy, but you can use heartbeat or keepalived along with
haproxy for IP takeover on a pair of physical boxes (or VMs).
From: Scott Pinhorne [mailto:scott.pinho...@voxit.co.uk]
Sent: Tuesday, March 17, 2009 10:52 AM
To: haproxy@formilux.org
Subject: Multiple Proxies
Hi
You need to explain a little more, as I am not understanding something.
Perhaps what you mean by VIP?
If they share the same single VIP at the same time, then why would you use
round-robin DNS? Round-robin is for multiple IP addresses...?
Also, if you do a virtual IP like Microsoft Windows does
Sorry for the off topic question, so feel free to reply directly. Can
anyone recommend a BGP package for linux? I have little experience with
BGP, and on the plus side I mainly just need to advertise a net (so a
simple default route for outgoing is all I need in the local routing table).
There
I still don't understand why people stick to heartbeat for things
as simple as moving an IP address. Heartbeat is more of a clustering
solution, with abilities to perform complex tasks.
When it comes to just moving an IP address between two machines and doing
nothing else, the VRRP protocol is
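For example, a minimal keepalived config doing just that (interface, router
id, priority, and VIP are placeholders):

```
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.0.2.10/24
    }
}
```

The backup box gets the same config with state BACKUP and a lower priority.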
- net.netfilter.nf_conntrack_max = 265535
- net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
= this proves that netfilter is indeed running on this machine
and might be responsible for session drops. 265k sessions is
very low for the large time_wait. It limits to
Put a - in front of the path in syslogd.conf.
Ie:
local0.*    -/mnt/log/haproxy_0.log
local1.*    -/mnt/log/haproxy_1.log
local2.*    -/mnt/log/haproxy_2.log
local3.*    -/mnt/log/haproxy_3.log
local4.*    -/mnt/log/haproxy_4.log
local5.*    -/mnt/log/haproxy_5.log
That will help a lot with your load.
Hello,
Running mode tcp in case that makes a difference for any comments, as I know
there are others options for http.
I need to preserve for auditing the IP address of the clients and be able to
associate it with a session. One problem, it appears the client IP and port
are logged,
Hello,
I am considering using haproxy with mysql. Basically one server, and one
backup server. Has anyone used haproxy with mysql? What were your
experiences (good and bad)? What values do you use for timeouts, etc.?
Thank you.
I am not sure it would be called a bad idea, just not an effective one...
don't expect it to help much when an ISP is down for only an hour. Most
clients do not honor low TTL values, especially if they are revisiting the
site without closing the browser.
I would like to hear anyone using