Re: Should we change the -c output?

2023-11-13 Thread John Lauro
I like the default message.  If you want to suppress it, then you can use -q.
Having some standard output that can be suppressed with -q is also
fairly standard for UNIX commands.



On Mon, Nov 13, 2023 at 4:07 AM William Lallemand
 wrote:
>
> On Mon, Nov 13, 2023 at 09:52:57AM +0100, Baptiste wrote:
> > On Thu, Nov 9, 2023 at 5:00 PM William Lallemand 
> > wrote:
> >
> > > Hello,
> > >
> > > haproxy -c seems to be too verbose in the systemd logs by
> > > showing "Configuration file is valid" for every reload.
> > >
> > > Is there anyone against removing this message by default?
> > > This will still output the alerts and warnings if any exist, but the
> > > "Configuration file is valid" message will only be displayed in
> > > combination with -V.
> > >
> > > People tend to use the return code of the command and not the output,
> > > but I prefer to ask.
> > >
> > > Change will only be applied starting from 2.9. Patch attached.
> > >
> > > --
> > > William Lallemand
> > >
> >
> > Hi William,
> >
> > I've been using this message for 13 years while manually checking confs :)
> > I think it may impact admins / devs who run these manual checks, but not
> > too hard as we all look for "ERROR" or "WARNING" by default.
> > I think it's "ok" to change this. I will just miss it :D
> >
> > Baptiste
>
> That's what I thought too, and I like it since it's a little bit more
> like a UNIX command, which displays nothing when everything is correct.
>
> I pushed the patch, thanks!
>
> --
> William Lallemand
>
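For reference, the exit-code-driven check the thread mentions typically looks like this in a reload script (paths are examples; -q suppresses the informational message, so only the exit code matters):

```shell
# Validate the configuration quietly; the exit code alone
# tells us whether the file is valid.
if haproxy -c -q -f /etc/haproxy/haproxy.cfg; then
    systemctl reload haproxy
else
    echo "haproxy configuration check failed" >&2
    exit 1
fi
```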



Re: Opinions desired on HTTP/2 config simplification

2023-04-15 Thread John Lauro
I agree with defaulting to alpn h2,http/1.1 sooner (don't wait for 2.9);
even 2.6 would be fine IMO.  It wouldn't be a new feature for 2.6,
only a non-breaking (AFAIK) default change...

I would have concerns about making QUIC the default for 443 ssl (especially
prior to 2.8), but you are not suggesting that anyway.


On Sat, Apr 15, 2023 at 5:33 AM Willy Tarreau  wrote:
>
> Hi everyone,
>
> I was discussing with Tristan a few hours ago about the widespread
> deployment of H2 and H3, with Cloudflare showing that H1 only accounts
> for less than 7% of their traffic and H3 getting close to 30% [1],
> and the fact that on the opposite yesterday I heard someone say "we
> still have not tried H2, so H3..." (!).
>
> Tristan said something along the lines of "if only proxies would enable
> it by default by now", which resonated to me like when we decided to
> switch some defaults on (keep-alive, http-reuse, threads, etc).
>
> And it's true that at the beginning there was not even a question about
> enabling H2 by default on the edge, but nowadays it's as reliable as H1
> and used by virtually everyone, yet it still requires admins to know
> about this TLS-specific extension called "ALPN" and the exact syntax of
> its declaration, in order to enable H2 over TLS, while it's already on
> by default for clear traffic.
>
> Thus you're seeing me coming with my question: does anyone have any
> objection against turning "alpn h2,http/1.1" on by default for HTTP
> frontends, and "alpn h3" by default for QUIC frontends, and have a new
> "no-alpn" option to explicitly turn off ALPN negotiation on HTTP
> frontends e.g. for debugging ? This would mean that it would no longer
> be necessary to know the ALPN strings to configure these protocols. I
> have not looked at the code but I think it should not be too difficult.
> ALPN is always driven by the client anyway so the option states what we
> do with it when it's presented, thus it will not make anything magically
> fail.
>
> And if we change this default, do you prefer that we do it for 2.8 that
> will be an LTS release and most likely to be shipped with next year's
> LTS distros, or do you prefer that we skip this one and start with 2.9,
> hence postpone to LTS distros of 2026 ?
>
> Even if I didn't share my feelings, some would consider that I'm
> trying to influence their opinion, so I'll share them anyway :-)  I
> think that with the status change from "experimental-but-supported" to
> "production" for QUIC in 2.8, having to manually and explicitly deal
> with 3 HTTP versions in modern configs while the default (h1) only
> corresponds to 7% of what clients prefer is probably an indicator that
> it's the right moment to simplify these a little bit. But I'm open to
> any argument in any direction.
>
> It would be nice to be able to decide (and implement a change if needed)
> before next week's dev8, so that it leaves some time to collect feedback
> before end of May, so please chime in!
>
> Thanks!
> Willy
>
> [1] https://radar.cloudflare.com/adoption-and-usage
>
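Until such a default exists, the explicit declarations the thread refers to look roughly like this (certificate path and backend name are placeholders):

```haproxy
frontend https-in
    mode http
    # today H2 over TLS must be enabled explicitly via the ALPN extension
    bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
    # and H3 likewise on the QUIC listener
    bind quic4@:443 ssl crt /etc/haproxy/site.pem alpn h3
    default_backend app
```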



CVE-2023-25690 and CVE-2023-27522

2023-03-22 Thread John Lauro
Assuming no direct access to apache servers, does anyone know if
haproxy would by default protect against these vulnerabilities?



Re: Followup on openssl 3.0 note seen in another thread

2022-12-15 Thread John Lauro
What exactly is needed to reproduce the poor performance issue with openssl
3?  I was able to test 20k req/sec with it using k6 to simulate 16k users
over a wan.  The k6 box did have openssl 1.  Probably could have sustained
more, but that's all I need right now.  Openssl v1 tested a little faster,
but within 10%.  Wasn't trying to max out my tests as that should be over
4x the needed performance.

Not doing H3, and the backends are send-proxy-v2.
Default libs on Alma linux on arm.
# rpm -qa | grep openssl
openssl-pkcs11-0.4.11-7.el9.aarch64
xmlsec1-openssl-1.2.29-9.el9.aarch64
openssl-libs-3.0.1-43.el9_0.aarch64
openssl-3.0.1-43.el9_0.aarch64
openssl-devel-3.0.1-43.el9_0.aarch64

This is the first box I setup with EL9 and thus openssl-3.  Might it only
be an issue when ssl is used to the backends?

On Thu, Dec 15, 2022 at 11:50 PM Willy Tarreau  wrote:

> On Thu, Dec 15, 2022 at 08:58:29PM -0700, Shawn Heisey wrote:
> > I'm sure the performance issue has been brought to the attention of the
> > OpenSSL project ... what did they have to say about the likelihood and
> > timeline for providing a fix?
>
> They're still working on it for 3.1. 3.1-alpha is "less worse" than
> 3.0 but still far behind 1.1.1 in our tests.
>
> > Is there an article or bug filed I can read for more information?
>
> There's this issue that centralizes the status of the most important
> regression reports:
>
>   https://github.com/openssl/openssl/issues/17627#issuecomment-1060123659
>
> We've also planned to issue an article to summarize our observations
> about this before users are hit too strong, but it will take some
> time to collect all info and write it down. But it's definitely a big
> problem for users who upgrade to latest LTS distros that shipped 3.0
> without testing it (though I can't blame distros, it's not the package
> maintainers' job to run performance tests on what they maintain) :-(
>
> My personal feeling is that this disaster combined with the stubborn
> refusal to support the QUIC crypto API that is mandatory for any
> post-2021 HTTP agent basically means that OpenSSL is not part of the
> future of web environments and that it's urgent to find alternatives,
> just like all other projects are currently seeking. And with http-based
> products forced to abandon OpenSSL, it's unlikely that their performance
> issues will be relevant in the future so it should get even worse over
> time by lack of testing and exposure. It's sad, because before the QUIC
> drama, we hoped to spend some time helping them improve their performance
> by reducing the locking abuse. Now the project has gone too far in the
> wrong direction for anything to be doable anymore, and I doubt that
> anyone has the energy to fork 1.1.1 and restart from a mostly clean
> state. But anyway, a solution must be found for the next batch of LTS
> distros so that users can jump from 20.x to 24.x and skip 22.x.
>
> There's currently a great momentum around WolfSSL that was already
> adopted by Apache, Curl, and Ngtcp2 (which is the QUIC stack that
> powers most HTTP/3-compatible agents). Its support on haproxy is
> making fast progress thanks to the efforts on the two sides, and it's
> pleasant to speak to people who care about performance. I'd bet we'll
> find it packaged in a usable state long before OpenSSL finally changes
> their mind on QUIC and reaches distros in a usable state. That's a
> perfect (though sad) example of the impact of design by committee!
>
>https://www.openssl.org/policies/omc-bylaws.html#OMC
>https://en.wikipedia.org/wiki/Design_by_committee
>
> Everything was written...
> Willy
>
>


Re: dsr and haproxy

2022-11-07 Thread John Lauro
The SYN-ACK tracking works in transparent mode with haproxy.  I have set up
haproxy to rebind all connections before and basically proxy the internet
(and use NAT for udp).  That said, I assume the point of DSR is that it's
not always going to take the same path and that is where the real issue
is.  Haproxy can handle an initial SYN-ACK man in the middle, but moving
the end point would be the problem.
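The transparent-mode setup mentioned above (which still terminates TCP, unlike DSR) is sketched below; addresses and ports are examples, and it additionally requires TPROXY kernel support plus the matching firewall/routing rules on the haproxy box:

```haproxy
listen tcp-fwd
    bind :8000 transparent
    mode tcp
    # connect to the server using the client's own source address;
    # return traffic must therefore route back through this box
    source 0.0.0.0 usesrc clientip
    server s1 192.0.2.10:8000
```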

On Mon, Nov 7, 2022 at 2:12 AM Willy Tarreau  wrote:

> On Fri, Nov 04, 2022 at 05:33:40PM +0100, Lukas Tribus wrote:
> > On Fri, 4 Nov 2022 at 16:50, Szabo, Istvan (Agoda)
> >  wrote:
> > >
> > > Yeah, that's why I'm curious anybody ever made it work somehow?
> >
> > Perhaps I should have been clearer.
> >
> > It's not supported because it's not possible.
> >
> > Haproxy the OSS uses the socket API, haproxy cannot forward IP packets
> > arbitrarily, which is required for DRS.
>
> And in fact it goes beyond the API. The first and foremost reason is
> that if you want to intercept TCP and work on contents, you have to
> accept an incoming connection first. For this you need to respond to
> a SYN with a SYN-ACK that holds a locally-chosen sequence number.
> Then assuming the connection is validated and you're going to pass
> it to a server, while you could imagine replicating the SYN sequence
> number from the client, the server will choose its own sequence number
> for the SYN-ACK, which will not match the one you chose previously,
> and as such if you send the server's response directly to the client,
> this last one will never understand this traffic because it's shifted
> by the difference between the two sequence numbers.
>
> Some large hosting platforms had worked around this in the late 90s
> and early 2000s by prepending a header to TCP segments sent to the
> server, containing all the front connection's parameters (a bit like
> the proxy protocol), and the servers' TCP stack was heavily modified
> to use the parameters presented in this header to create the connection,
> including ports, sequence numbers, options etc that the server had to
> use. For obvious reasons such servers ought never be exposed to the
> net or it would have been trivial to DoS them or even to hijack their
> connections! I remember that others had proposed TCP extensions to
> tell a peer to skip a range of sequence numbers to make this possible
> (i.e. "I'm sending you a 1.3 GB hole then the data comes") as a way to
> splice a server connection to an already accepted one. But similarly
> this totally disappeared because it was hackish and totally insecure.
>
> > This is a hard no, not a "we do not support this configuration because
> > nobody ever tried it and we can't guarantee it will work".
>
> Definitely ;-)
>
> Cheers,
> Willy
>
>


Re: TCP connections resets during backend transfers

2022-10-20 Thread John Lauro
That's what, 50s?  You are probably doing connection pooling and it's using
LRU instead of actually cycling through connections.  At least that is what
I have seen node typically do.

Instead of 50 seconds, try:
timeout client  12h
timeout server  12h

You might want to enable logging on haproxy and general logging on maria.  If 
you see what I have seen in the past, you will notice that most of the SQL 
requests come through one connection, then next highest from a second, and 
so-on until you get to a connection that is mostly idle.
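A minimal sketch of the suggested change, assuming the usual mariadb port and placeholder backend addresses:

```haproxy
listen mariadb
    bind 127.0.0.1:3306
    mode tcp
    option tcpka                 # TCP keep-alives help detect dead peers
    timeout client  12h          # keep long-idle pooled connections open
    timeout server  12h
    server db1 192.0.2.11:3306 check
    server db2 192.0.2.12:3306 check backup
```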

From: Artur 
Sent: Tuesday, October 18, 2022 5:15 AM
To: haproxy 
Subject: TCP connections resets during backend transfers

Hello,

While renewing node.js servers and a galera cluster (mariadb) I'm
seeing an unexpected behaviour on TCP connections between the node.js
application and mariadb.
There are a lot of connection resets during transfers on the backend side.

My previous (working) setup was based on Debian 10, mariadb 10.5,
node.js 16 (and some dependencies) and haproxy 2.6.
I had a server running several node.js processes and a 3-node galera
mariadb cluster.
To provide some HA, I configured haproxy as a TCP proxy for mariadb
connections.
The usual setup is :
node.js -> haproxy -> mariadb
node.js application uses a connection pool to maintain several open
connections to database server that may be idle for a long time.
The timeouts are adjusted in haproxy to avoid disconnecting idle
connections.
This setup worked just fine on old servers.

Then I've setup new servers on Debian 11: a new mariadb galera cluster
(10.6), a new node.js application server (no real changes in node.js
software versions there) and haproxy (2.6.6 currently).
The global setup of all of this is quite the same as before but not
exactly the same. I tried however to be as close as possible to the old
setup.
Now, once I started the node.js application, the database connections
are established and after about 20 minutes I start to see application
warnings about lost connections to database.
On the haproxy stats page I can see a lot of 'connection resets during
transfers' on the backend side.
On database side I can see idle processes that stay there even if I
close node.js application or restart haproxy. These have to timeout or
be killed to disappear. As if there was no communication any more
between haproxy and mariadb (on these tcp connections).
At the same moment other database connections are established or
continue to function. Maybe something related to idle connections ?

If it may help : all these servers are VMs in OVH public cloud and
communications between servers are established through a private vlan in
the same datacenter.

If I remove haproxy from workflow (node.js -> mariadb) I cannot see any
error anymore. But I don't understand why it worked fine before and is
working this way right now...
Any help is welcome.

My current haproxy setup :

global
   log /dev/log  local0
   log /dev/log  local1 notice
   chroot /var/lib/haproxy
   stats socket /run/haproxy/admin.sock mode 660 level admin
   stats timeout 30s
   user haproxy
   group haproxy
   daemon

   # Default SSL material locations
   ca-base /etc/ssl/certs
   crt-base /etc/ssl/private

   # See:
https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
   ssl-default-bind-ciphers
ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
   ssl-default-bind-ciphersuites
TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
   ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets

   ssl-dh-param-file /etc/haproxy/ssl/dhparams.pem
   tune.ssl.default-dh-param 2048

   maxconn 5

   #nosplice

defaults
   log global
   option dontlognull
   option dontlog-normal
   timeout connect 5000
   timeout client  5
   timeout server  5

   #option tcpka

   errorfile 400 /etc/haproxy/errors/400.http
   errorfile 403 /etc/haproxy/errors/403.http
   errorfile 408 /etc/haproxy/errors/408.http
   errorfile 500 /etc/haproxy/errors/500.http
   errorfile 502 /etc/haproxy/errors/502.http
   errorfile 503 /etc/haproxy/errors/503.http
   errorfile 504 /etc/haproxy/errors/504.http

   option splice-auto
   option splice-request
   option splice-response

frontend db3_front
   bind 127.0.1.1:3306
   mode tcp
   # haproxy client connection timeout is 1 second longer than the
default mariadb wait_timeout 

Re: haproxy listening on lots of UDP ports

2022-08-05 Thread John Lauro
Not positive it's the only use case, but I have a number of udp ports also open
so ran tcpdump on them and they are all talking to syslog. Seems to line up
about 1 per cpu on a couple of machines I checked.

On Fri, Aug 5, 2022 at 7:19 PM Shawn Heisey  wrote:

> I am running haproxy in a couple of places.  It is listening on multiple
> seemingly random high UDP ports.
>
> The one running "2.6.2-ce3023-30 2022/08/03" has the following ports.
> This server is in AWS.  The first three lines are expected:
>
> elyograg@bilbo:/var/log$ sudo lsof -Pn -i | grep haproxy
> haproxy   1928967root6u  IPv4 2585012  0t0 UDP *:443
> haproxy   1928967root7u  IPv4 2585013  0t0 TCP *:80
> (LISTEN)
> haproxy   1928967root8u  IPv4 2585014  0t0 TCP *:443
> (LISTEN)
> haproxy   1928967root   16u  IPv4 2587974  0t0 UDP *:57183
> haproxy   1928967root   17u  IPv4 2585855  0t0 UDP *:60746
>
> The one running "2.7-dev2-f9d4a7-78 2022/08/05" is in my basement and
> has the following ports.  The first four lines are expected.  There are
> a lot more UDP ports active on this one.
>
> elyograg@smeagol:~/git/lucene-solr$ sudo lsof -Pn -i | grep haproxy
> haproxy   1469717  root6u  IPv4 14230127 0t0  UDP
> 192.168.217.170:443
> haproxy   1469717  root7u  IPv4 14230128 0t0  TCP *:8983
> (LISTEN)
> haproxy   1469717  root8u  IPv4 14230129 0t0  TCP *:80
> (LISTEN)
> haproxy   1469717  root9u  IPv4 14230130 0t0  TCP *:443
> (LISTEN)
> haproxy   1469717  root   46u  IPv4 14242826 0t0  UDP *:45727
> haproxy   1469717  root   47u  IPv4 14212730 0t0  UDP *:40101
> haproxy   1469717  root   49u  IPv4 14209917 0t0  UDP *:34584
> haproxy   1469717  root   50u  IPv4 14212920 0t0  UDP *:55409
> haproxy   1469717  root   51u  IPv4 14209875 0t0  UDP *:46192
> haproxy   1469717  root   52u  IPv4 14229139 0t0  UDP *:36370
> haproxy   1469717  root   53u  IPv4 14209916 0t0  UDP *:50898
> haproxy   1469717  root   55u  IPv4 14242839 0t0  UDP *:45456
> haproxy   1469717  root   56u  IPv4 14242890 0t0  UDP *:37717
> haproxy   1469717  root   57u  IPv4 14240387 0t0  UDP *:45547
> haproxy   1469717  root   58u  IPv4 14240302 0t0  UDP *:33960
> haproxy   1469717  root   60u  IPv4 14240885 0t0  UDP *:42145
>
> These extra ports are not exposed to the world.  The external firewalls
> are locked down pretty well.  And the hosts also have firewalls (ufw)
> that are similarly restricted.
>
> What are these ports for?  They are not in the haproxy config files.  I
> did try searching for an explanation, and didn't find anything.
>
> Thanks,
> Shawn
>
>
>


Re: HAProxy thinks Plex is down when it's not

2022-02-19 Thread John Lauro
Here is your answer:
Layer7 wrong status, code: 401, info: "Unauthorized"

Your health check is not providing the required credentials and is failing.
You can either fix that, or as you only have one backend, you might want to
remove the check since it's gaining you little with only one server.
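One way to fix it, assuming a 401 still proves the service is answering (the endpoint, address, and port here are examples), is to tell the health check to accept that status:

```haproxy
backend plex-http
    mode http
    option httpchk GET /identity
    # treat 200 or 401 as healthy: an Unauthorized reply still
    # proves the server is answering HTTP
    http-check expect status 200,401
    server plex 192.168.1.10:32400 check
```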

On Sat, Feb 19, 2022 at 11:47 AM Moutasem Al Khnaifes <
moutasem.al-khnai...@web.de> wrote:

> ### Detailed Description of the Problem
>
> I use HAProxy to get access to NextCloud and Plex from outside the
> network. but for some reason HAProxy thinks that Plex is down, and the
> status page is inaccessible
>
>
> ### Expected Behavior
>
> going to nextcloud.domain.com and plex.domain.com should redirect me to
> each service respectively. however, only NextCloud is accessible:
> ```
> Feb 19 16:18:21 localserver systemd[1]: Started HAProxy Load Balancer.
> Feb 19 16:18:21 localserver haproxy[30087]: Proxy show-403 started.
> Feb 19 16:18:21 localserver haproxy[30087]: Proxy letsencrypt started.
> Feb 19 16:18:21 localserver haproxy[30087]: Proxy letsencrypt started.
> Feb 19 16:18:21 localserver haproxy[30087]: Proxy nextcloud-http started.
> Feb 19 16:18:21 localserver haproxy[30087]: Proxy nextcloud-http started.
> Feb 19 16:18:21 localserver haproxy[30087]: Proxy plex-http started.
> Feb 19 16:18:21 localserver haproxy[30087]: Proxy plex-http started.
> Feb 19 16:18:22 localserver haproxy[30088]: [WARNING] 049/161822 (30088) :
> Server plex-http/plex is DOWN, reason: Layer7 wron>
> Feb 19 16:18:22 localserver haproxy[30088]: [ALERT] 049/161822 (30088) :
> backend 'plex-http' has no server available!
> ```
> trying to access Plex and the Status Page will always be redirected to an
> error page:
> ```
> 503 Service Unavailable
> No server is available to handle this request.
> ```
>
>
> ### Steps to Reproduce the Behavior
>
> 1. Run NextCloud Snap on port 81
> 2. Run Plex on port 32400
> 3. Use Haproxy with SSL termination
>
>
> ### Do you have any idea what may have caused this?
>
> Plex is failing the Health Check preformed by HAProxy even when it is
> running
> I can not see why the Status Page is inaccessible
>
> ### Do you have an idea how to solve the issue?
>
> 1. Haproxy assumes always service is available
> 2. HAProxy preforms different Health Check on Service
>
> ### What is your configuration?
>
> ```haproxy
> global
> log /dev/loglocal0
> log /dev/loglocal1 notice
> chroot /var/lib/haproxy
> stats socket /var/lib/haproxy/admin.sock mode 660 level admin
> expose-fd listeners
> stats timeout 30s
> user haproxy
> group haproxy
> daemon
>
>
> # Default SSL material locations
> ca-base /etc/ssl/certs
> crt-base /etc/ssl/private
>
> # See:
> https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
> ssl-default-bind-ciphers xxx
> ssl-default-bind-ciphersuites xxx
> ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
>
> defaults
> log global
> modehttp
> option  httplog
> option  dontlognull
> timeout connect 5000
> timeout client  50
> timeout server  50
> errorfile 400 /etc/haproxy/errors/400.http
> errorfile 403 /etc/haproxy/errors/403.http
> errorfile 408 /etc/haproxy/errors/408.http
> errorfile 500 /etc/haproxy/errors/500.http
> errorfile 502 /etc/haproxy/errors/502.http
> errorfile 503 /etc/haproxy/errors/503.http
> errorfile 504 /etc/haproxy/errors/504.http
>
> frontend http
> bind :::443 ssl crt /etc/haproxy/ssl-certs/cert.pem
> reqadd X-Forwarded-Proto:\ https
>
> acl letsencrypt-req path_beg /.well-known/acme-challenge/
> use_backend letsencrypt if letsencrypt-req
>
> acl path_dav path_beg /.well-known/caldav || path_beg
> /.well-known/carddav
> redirect location "https://nextcloud.domain.com/remote.php/dav"
> if path_dav
>
> acl host_nextcloud hdr(host) -i nextcloud.domain.com
> use_backend nextcloud-http if host_nextcloud
>
> acl host_plex hdr(host) -i plex.domain.com
> use_backend plex-http if host_plex
>
> default_backend show-403
>
> listen  stats
> bind localhost:1936
> modehttp
> log global
>
> maxconn 10
>
> clitimeout  100s
> srvtimeout  100s
> contimeout  100s
> timeout queue   100s
>
> stats enable
> stats hide-version
> stats refresh 30s
> stats show-node
> stats auth admin:password
> stats uri  /haproxy?stats
>
> backend show-403
> mode http
> http-request deny deny_status 403
>
> backend letsencrypt
> mode http
> server letsencrypt localhost:10500
>
> backend nextcloud-http
> mode http
> balance roundrobin
> option forwardfor
>

Re: Blocking log4j CVE with HAProxy

2021-12-13 Thread John Lauro
http-request deny deny_status 405 if { url_sub -i "\$\{jndi:" or
hdr_sub(user-agent) -i "\$\{jndi:" }
was not catching the bad traffic.  I think the escapes were causing issues
in the matching.

The following did work:
http-request deny deny_status 405 if { url_sub -i -f
/etc/haproxy/bad_header.lst }
http-request deny deny_status 405 if { hdr_sub(user-agent)
-i -f /etc/haproxy/bad_header.lst }
and in bad_header.lst
${jndi:

That said, this is still incomplete as it is only checking some headers,
and I am sure some incoming bad data will be from POST data which is more
difficult to intercept with haproxy...

(and the request is still logged by haproxy, so if you feed your haproxy
log to log4j it will not help against that...)


Re: Setup HAProxy as a Forward Proxy for SMTP

2021-05-06 Thread John Lauro
If you want them to all use the same outgoing IP, you could place them
behind a NAT router instead of using an outgoing proxy server.

That said, if you do want to use haproxy, I think you will want to use the
"usesrc client" in the haproxy config, and the haproxy server will also need
the prerouting and divert firewall rules and routing for the configured IPs
that will be proxied to come into the haproxy servers.

That would provide some stats and management over NAT, but I'm not sure it's
really any better than having those IPs configured in a NAT router, other
than PowerMTA would have control of the client-side IP vs the NAT server
deciding which to use from the pool.

On Thu, May 6, 2021 at 5:02 PM Brizz Bane  wrote:

> No.  PowerMTA would not be the last hop, because then it would be using
> the IPs that the PowerMTA Server is on.
>
> I am wanting PowerMTA -> HAProxy -> t...@gmail.com
>
> From the article:
>
> This allows customers to deploy all their source IPs on an external proxy
> server instead of being deployed on the individual PowerMTA nodes. The
> internal PowerMTA nodes will route their email through the correct source
> IP deployed on proxy node via the use of proxy protocol.
>
> Sorry if you have received this multiple times.  I'm not sure how to
> reply to the messages and have them show up in the mailing list.
>
>
> On Thu, May 6, 2021 at 2:13 AM Baptiste  wrote:
>
>> Hi,
>>
>> From the first link, I understand you're trying to do the following:
>> user MUA ==> HAProxy ==> fleet of power MTA ==> Internet  ==>
>> destination MTA
>>
>> Is this correct?
>>
>> Baptiste
>>
>>
>> On Thu, May 6, 2021 at 5:13 AM Brizz Bane  wrote:
>>
>>> I am wanting to set up HAProxy to act as a proxy for PowerMTA.  I do not
>>> want a reverse or load balancing setup, so what I'm wanting to do is
>>> atypical and I've not found much online.
>>>
>>> Here are a couple links describing PowerMTA's integration with HAProxy:
>>>
>>>
>>> https://www.sparkpost.com/docs/tech-resources/pmta-50-features/#outbound-proxy-support
>>>
>>> https://www.postmastery.com/powermta-5-0-using-a-proxy-for-email-delivery/
>>>
>>> I have searched for hours and asked everywhere that I could think of.
>>> I've not made any progress.
>>>
>>> How can I go about doing this?  If you need any more information please
>>> let me know.  ANY help or guidance would be greatly appreciated.
>>>
>>> Thank you,
>>>
>>> brizz
>>>
>>


Re: About the 'Hot Restarts' of haproxy

2021-04-13 Thread John Lauro
Sounds like the biggest part of hot restarts is the cost of leaving the old
process running as they have a lot of long running TCP connections, and if
you do a lot of restarts the memory requirements build up.  Not much of an
issue for short-lived http requests (although it would be nice if keep-alive
connections weren't kept open on the old haproxy processes so they could die
quicker).

On Tue, Apr 13, 2021 at 6:25 AM Willy Tarreau  wrote:

> On Tue, Apr 13, 2021 at 01:31:12AM +, Rmrf99 wrote:
> > In this Slack engineering blog post:
> https://slack.engineering/migrating-millions-of-concurrent-websockets-to-envoy/
> >
> > they replace HAProxy with Envoy for **Hot Restart**, just curious does
> > HAProxy new version will have similar approach? or have better
> solution(in
> > the future).
>
> So in fact it's not for hot restarts, since we've supported that even
> before envoy was born, it's in order to introduce new servers at run
> time. We do have some ongoing work on this, and some significant parts
> are already available with experimental support:
>
> https://github.com/haproxy/haproxy/issues/1136
>
> Willy
>
>


Re: Disable client keep-alive using ACL

2020-11-18 Thread John Lauro
A couple of possible options...
You could use tcp-request inspect-delay to delay the response a number of
seconds (and accept it quickly if it's legitimate traffic).
You could use redirects which will have the clients do more requests
(Possibly with the inspect delays).

That said, it would be useful to force a client connection closed at times,
but there are ways to protect the backends and slow some clients without
completely blocking them.
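A rough sketch of the rate-based redirect idea (the threshold, paths, and certificate are made up for illustration):

```haproxy
frontend fe_web
    bind :443 ssl crt /etc/haproxy/site.pem
    mode http
    stick-table type ip size 100k expire 10m store http_req_rate(10s)
    http-request track-sc0 src
    # instead of denying outright, bounce heavy clients through an
    # extra redirect round-trip, which slows them down
    http-request redirect location /retry if { sc_http_req_rate(0) gt 100 }
    default_backend app
```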

On Wed, Nov 18, 2020 at 3:14 AM Tim Düsterhus, WoltLab GmbH <
duester...@woltlab.com> wrote:

> Lukas,
>
>
> The reason is that we want to avoid outright blocking with e.g. a 429
> Too Many Requests, because that could affect legitimate traffic. Forcing
> the client to re-establish the connection should not be noticeable for a
> properly implemented client, other than an increased latency.
>
> I'm aware that this will be more costly for us as well, but we have
> plenty of spare capacity at the load balancers.
>
>
>


Re: do we want to keep CentOS 6 builds?

2020-11-15 Thread John Lauro
CentOS 6 isn't EOL until the end of the month, so there are a couple more
weeks left.

There is at least one place to pay for support through 2024.
($3/month/server)

Might be good to keep it for a bit past EOL, as I know when migrating
services sometimes I'll throw a proxy server on the old server pointing to
the new one...  and there will likely be some that don't make the Nov 30th
deadline to retire all CentOS 6 servers.


On Sun, Nov 15, 2020 at 11:15 AM Илья Шипицин  wrote:

> Hello,
>
> we still run cirrus-ci builds.
> CentOS 6 is EOL.
>
> should we drop it?
>
> Ilya
>


Re: http2 smuggling

2020-09-11 Thread John Lauro
I could be wrong, but I think he is stating that if you have that
allowed, it can be used to get a direct connection to the backend,
bypassing any routing or acls you have in the load balancer, so if
some endpoints are blocked, or internal only, they could potentially
be accessed this way.
For example, if you have something like:
  acl is_restrict path_sub /.git/
  http-request deny if is_restrict !is_safe_ip

The acl could be bypassed by using the method to connect directly to a backend.

That's not to say it's a security flaw in haproxy, but a potential
misconfiguration that could allow traffic you thought was blocked by
the proxy.
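Fleshing the fragment above out into a self-contained (hypothetical) form, with a placeholder trusted range and certificate:

```haproxy
frontend fe
    bind :443 ssl crt /etc/haproxy/site.pem
    mode http
    acl is_safe_ip  src 192.0.2.0/24    # trusted admin range (example)
    acl is_restrict path_sub /.git/
    http-request deny if is_restrict !is_safe_ip
    default_backend app
```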


On Fri, Sep 11, 2020 at 2:07 AM Willy Tarreau  wrote:
>
> Hi Igor,
>
> On Fri, Sep 11, 2020 at 01:55:10PM +1000, Igor Cicimov wrote:
> > Should we be worried?
> >
> > https://portswigger.net/daily-swig/http-request-smuggling-http-2-opens-a-new-attack-tunnel
>
> But this stuff is total nonsense. Basically the guy is complaining
> that the products he tested work exactly as desired, designed and
> documented!
>
> The principle of the upgrade at the gateway level precisely is to say
> "OK both the client and the server want to speak another protocol you
> agreed upon, let me retract" and let them talk over a tunnel. That's
> exactly what is needed to support WebSocket for example. The simple
> fact that he found that many proxies/gateways work like this should
> ring a bell about the intended behavior!
>
> In addition there is zero smuggling here as there is no desynchronisation.
> It's just a tunnel between the client and the server, both agreeing to
> do so. It does *exactly* the same as if the client had connected to the
> server using a CONNECT method and the server had returned 200. So there
> is absolutely no attack nor whatever here, just a connection that remains
> dedicated to a client and a server till the end.
>
> Sadly, as usual after people discover protocols during the summer, some
> journalists will surely want to make noise about this to put some bread
> on their table...
>
> Thanks for the link anyway I had a partial laugh; partial only because
> it makes useless noise.
>
> Cheers,
> Willy
>



RE: recording stats

2012-02-16 Thread John Lauro
Look into module rpaf for apache along with option forwardfor in haproxy,
and no routing changes are needed.  Alternatively, you can set up haproxy
as a transparent proxy (source usesrc client) without changing apache, but
that requires routing changes on the apache servers.
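A rough sketch of both approaches (backend names, server names and addresses are placeholders):

```
# Option 1: no routing changes; apache's mod_rpaf reads the header
backend be_apache
    option forwardfor          # adds X-Forwarded-For with the client IP
    server web1 10.0.0.11:80 check

# Option 2: transparent proxy; the real client IP becomes the source
# address, so the apache servers must route replies back via haproxy
backend be_apache_tproxy
    source 0.0.0.0 usesrc clientip
    server web1 10.0.0.11:80 check
```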

 -Original Message-
 From: Simon Ollivier [mailto:solliv...@aic.fr]
 Sent: Thursday, February 16, 2012 6:49 AM
 To: haproxy@formilux.org
 Subject: recording stats
 
 Hi,
 I'm using HAPROXY in order to load balance my 2 http servers.
 On theses servers i want to get the apache access_log files with the
 real IP source address (not the haproxy server address)
 I have not found any solution.
 I have the full haproxy logs (real ip src - real ip dest), so thats
 good, but i would like to use an analyzer. I installed awstats but it
 couldn't find any visitors. Is it a format probleme?
 
 Thank you!




RE: Rate limit spider / bots ?

2012-02-14 Thread John Lauro
You could set up the acls so they all go to one backend, and then limit the
number of connections on that backend to something low like 1.  Not exactly a
rate limit, but at most 1 connection to serve them all...
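For example (the frontend/backend names are hypothetical, and the user-agent substrings are just illustrative), something along these lines funnels the matched bots into a backend capped at one concurrent connection:

```
frontend fe_web
    bind :80
    acl is_bot hdr_sub(User-Agent) -i yandexbot baiduspider
    use_backend be_bots if is_bot
    default_backend be_app

backend be_bots
    # same server as the normal backend, but at most one
    # concurrent bot request at a time
    server web1 10.0.0.11:80 maxconn 1
```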



 -Original Message-
 From: hapr...@serverphorums.com [mailto:hapr...@serverphorums.com]
 Sent: Monday, February 13, 2012 4:15 PM
 To: haproxy@formilux.org
 Subject: Rate limit spider / bots ?

 Hi folks



 Been using haproxy for a while now and love it load balancing apache and
 nginx web server clusters and I am glad to have stumbled across this forum
 :)



 The question I have is, is it possible to rate limit spider and bots by
 user agent from haproxy level ? i.e. rate limit yandex and baidu bots ?



 thanks

 ---
 posted at http://www.serverphorums.com
 http://www.serverphorums.com/read.php?10,445347,445347#msg-445347




RE: Understanding how to use rise and fall settings on a dead server

2012-02-02 Thread John Lauro
I tend to use a really large rise and a small fall, like fall 2 and rise 9
(or 99 or higher would be good if you want to ensure it stays down long
enough to trigger).  That way they stay dead for awhile, but can go down
quickly.

 

Anyways, so that it shows in my monitoring system I have this in my zabbix
cfg on all my load balancers and trigger an alert if it is ever 0:

UserParameter=proxysrvrsdown,echo "show stat" | /usr/local/bin/socat
/var/lib/haproxy-stat stdio | grep -c DOWN

 

So, if a backend is flapping (and it could be the web server and not the
nic), I will get the flapping as alerts from my network monitoring.

 

 

Personally, if you think a backend should stay down when down, I would
recommend having the backend do its own self checks and shoot itself in
the head if it detects problems, so that it will stay down.  That said, if
you have enough backends, having a high rise could be a good idea.
However, be warned that if the problem is on the load balancer side or a
global network hiccup rather than one really bad machine, all backends
could incorrectly be marked as down.  So you really don't want them to
stay down for too long.
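On a server line that could look something like this (the address is a placeholder; the inter/fall/rise numbers are just the ones discussed above):

```
# fail after 2 bad checks (~4s at inter 2000), but require 99 good
# checks (~3.3 minutes) before the server is considered up again
server web1 10.0.0.11:80 check inter 2000 fall 2 rise 99
```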

 

 

 

From: j...@dashtickets.com [mailto:j...@dashtickets.com] On Behalf Of John
Clegg
Sent: Wednesday, February 01, 2012 10:29 PM
To: haproxy@formilux.org
Subject: Understanding how to use rise and fall settings on a dead server

 

Hi

 

I'm trying to understand how to ensure that a backend server which is
failing and classified as dead stays dead.

 

I've just had an instance on another server which is using another
load-balancer where the NIC was intermittently failing and it caused the
load-balancer to flap constantly.

 

I would like to set a threshold so that if the back-end service fails, it
stays dead and needs to be manually re-added to the load-balancer.

 

I'm trying to understand how the rise and fall settings (plus other config
settings) can achieve this, or if there is another approach.

 

Any ideas would be appreciated.

 

Regards

 

John 

 

-- 

 

John Clegg

Dash Tickets 

http://www.dashtickets.co.nz

 



RE: VM vs bare metal and threading

2012-01-13 Thread John Lauro
There are all sorts of kernel tuning parameters under /proc that can make
a big difference, not to mention what type of virtual NIC you have in the
VM.  Are they running the same kernel version and Gentoo release?  Have
you compared sysctl.conf (or whatever gento uses to customize settings in
/proc)?

Generally I prefer to run haproxy (and only haproxy) in 1-CPU VMs (fewer
CPUs mean lower latency from the VM scheduler); my only exception is if I
also want SSL and the load is higher.  When dealing with high rates and
larger numbers of connections, make sure you don't go low on RAM.  Haproxy
gets exponentially worse as it starts to swap, in fact running swapoff -a
isn't a bad idea, especially for bench testing... and it takes a lot more
RAM to support 8000 connections/sec than 300.

In summary, check RAM, and /proc tuning.
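A quick sketch of those checks (run as root; which sysctls matter most is workload-dependent, these are just common suspects to compare between the VM and the bare-metal box):

```
# make sure haproxy can never be pushed into swap during a bench
swapoff -a

# compare these between the VM and the bare-metal box
sysctl net.ipv4.tcp_tw_reuse \
       net.ipv4.ip_local_port_range \
       net.core.somaxconn \
       net.core.netdev_max_backlog
```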


 -Original Message-
 From: Matt Banks [mailto:mattba...@gmail.com]
 Sent: Friday, January 13, 2012 2:40 PM
 To: haproxy@formilux.org
 Subject: VM vs bare metal and threading
 
 All,
 
 I'm not sure what the issue is here, but I wanted to know if there was
an
 easy explanation for this.
 
 We've been doing some load testing of HAProxy and have found the
following:
 
 HAProxy (both 1.4.15 and 1.4.19 builds) running under Gentoo in a 2 vCPU
VM
 (Vsphere 4.x) running on a box with a Xeon x5675 (3.06 GHz current gen
 Westmere) maxes out (starts throwing 50x errors) at around a session
rate
 of 3500.
 
 However, copies of the same binaries pointed at the same backend servers
on
 a Gentoo box (bare metal) with 2x E5405 (2.00GHz - Q4,2007 launch) top
out
 at a session rate of around 8000 - at which point the back end servers
 start to fall over. And that HAProxy machine is doing LOTS of other
things
 at the same time.
 
 Here's the reason for the query: We're not sure why, but the bare metal
box
 seems to be balancing the load better across cpu's. (We're using the
same
 config file, so nbproc is set to 1 for both setups). Most of our HAProxy
 setups aren't really getting hit hard enough to tell if multiple CPU's
are
 being used or not as their session rates typically stay around 300-400.
 
 We know it's not virtualization in general because we have a virtual
 machine in the production version of this system that achieves higher
 numbers on lesser hardware.
 
 Just wondering if there is somewhere we should start looking.
 
 TIA.
 matt



RE: Need help with HAProxy

2012-01-12 Thread John Lauro
There is a brief time between the switchover from the old process to the
new where new connections can not be accepted.  Better to mark the backend
servers down without switching processes.  (Several ways to do that).



If the refused connections concern you, and you can't avoid reloading
haproxy, one option is to put up a firewall rule to block syn packets
while haproxy reloads, and then unblock.  That way clients will retry the
connection in about 3 seconds instead of being refused.
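A sketch of that reload wrapper (the port, config path and sleep duration are assumptions to adapt; it must run as root):

```
# silently drop new SYNs so clients retry instead of being refused
iptables -I INPUT -p tcp --dport 80 --syn -j DROP

haproxy -f /etc/haproxy.cfg -p /var/run/haproxy.pid \
        -sf $(cat /var/run/haproxy.pid)

sleep 1   # give the new process time to bind

# remove the rule again; blocked clients retransmit within ~3s
iptables -D INPUT -p tcp --dport 80 --syn -j DROP
```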







From: Mahawar, Manisha (contractor)
[mailto:manisha.maha...@twc-contractor.com]
Sent: Thursday, January 12, 2012 5:03 PM
To: haproxy@formilux.org
Subject: Need help with HAProxy



I am using HAProxy on RedHat 5.5 and have below configuration.

global
daemon
maxconn 1024
log  127.0.0.1  local1 info

defaults
log global
balance roundrobin
mode http
retries 3
option redispatch
timeout connect 30ms
timeout client  30ms
timeout server  30ms

listen epgs
bind *:80
server server1 127.0.0.1:8080 maxconn 1 check
server server2 epg.local.com:8080 maxconn 1 check
stats uri /stats

I started firing 5000 request to HAProxy using JMeter. While JMeter is
firing the request I removed the server2 from configuration file and fired
haproxy -f /etc/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat
/var/run/haproxy.pid) command. I noticed 2 connection refused errors in
JMeter log.

2012/01/11 03:31:31 ERROR - jmeter.protocol.http.sampler.HTTPJavaImpl:
readResponse:  java.net.ConnectException: Connection refused
2012/01/11 03:31:31 ERROR - jmeter.protocol.http.sampler.HTTPJavaImpl:
readResponse: java.net.ConnectException: Connection refused
2012/01/11 03:31:31 ERROR - jmeter.protocol.http.sampler.HTTPJavaImpl:
Cause: java.net.ConnectException: Connection refused
2012/01/11 03:31:31 ERROR - jmeter.protocol.http.sampler.HTTPJavaImpl:
Cause: java.net.ConnectException: Connection refused

Do you know which configuration I should use to make HAProxy not drop any
requests, and am I verifying it correctly?

Thanks for your help in advance.

Manisha








SSL best option for new deployments

2011-12-13 Thread John Lauro
Been using haproxy for some time, but have not used it with SSL yet.

 

What is the best option to implement SSL?  There seems to be several
options, some requiring 1.5 (which isn't exactly ideal as 1.5 isn't
considered stable yet).

 

I do need to route based on the incoming request, so SSL must be decoded
before haproxy sees it, as opposed to the mode tcp options.  Also, I would
like persistent connections between the client and haproxy, but allow
different back-ends per request.

 

I do need to preserve the IP address of the original client.  So either
transparent mode (is that possible when going through stunnel or another
terminator with haproxy on the same box), or X-Forwarded-For or similar
added.

 

 



RE: SSL best option for new deployments

2011-12-13 Thread John Lauro
Interesting.

Found this with google comparing the two (only a few months old):
http://vincent.bernat.im/en/blog/2011-ssl-benchmark.html

In summary, performance appears to be close as long as you only have 1 core,
but stud scales better with multiple cores.  However, as noted in the
replies, newer versions of stunnel probably perform better.




 -Original Message-
 From: Brane F. Gračnar [mailto:brane.grac...@tsmedia.si]
 Sent: Tuesday, December 13, 2011 5:21 PM
 To: David Prothero
 Cc: John Lauro; haproxy@formilux.org
 Subject: Re: SSL best option for new deployments

 On 12/13/2011 10:43 PM, David Prothero wrote:
  I've been using stunnel with the X-Forwarded-For patch. Is stud
 preferable to stunnel for some reason?

Stunnel usually uses a thread-per-connection architecture - as you
probably know, this programming model has serious scaling issues. Stud is
single-threaded and runs as a single-master/multiple-workers process,
meaning that it can efficiently utilize the power of multi-core cpus without
the context-switching overhead resulting from hundreds (possibly
thousands) of threads competing for a cpu time slice.

 Stud is implemented on top of libev, one of the most efficient event
 loops available.

It also uses much less memory than stunnel (openssl >= 1.x.x).

 Best regards, Brane



RE: HAProxy and Downloading Large Files

2011-10-28 Thread John Lauro
Also, how large is large?  4GB?



 -Original Message-
 From: Baptiste [mailto:bed...@gmail.com]
 Sent: Friday, October 28, 2011 5:48 PM
 To: Justin Rice
 Cc: haproxy@formilux.org
 Subject: Re: HAProxy and Downloading Large Files
 
 hi,
 
 What do HAProxy logs report you when the error occur?
 What version of HAPRoxy are you running?
 
 Regards
 
 
 On Fri, Oct 28, 2011 at 11:02 PM, Justin Rice jrice0...@gmail.com
wrote:
  To all,
  I am having issues concerning downloading large files from one of my
web
  apps. TCP mode works just fine. The requirements, however, call for
HTTP
  mode - which does not work. Has anyone ever had this problem before?
Is
 this
  a timeout issue? Thanks for your time and suggestions in advance.
  -Justin




RE: HAProxy 1.4.8 Tunning for load balancing 3 servers

2011-09-29 Thread John Lauro
I suggest you use balance leastconn instead of roundrobin.  That way the
weights affect the ratios, but they are not locked in.  If a server clears
connections faster than the others, it will get more requests...  if it
falls behind it will get fewer...

Given that multiple factors (such as the number of cores) impact how many
req/s a server can do, it's possible one server is faster at low loads with
low-latency responses, while under heavy load a different server might have
higher latency but handle more simultaneous connections and be faster in
total.
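Applied to the config quoted below, that just means swapping the balance line, roughly:

```
listen web-farm 0.0.0.0:80
    mode http
    balance leastconn
    server small  192.168.1.100:80 weight 1 check inter 2000 rise 2 fall 5
    server medium 192.168.1.101:80 weight 2 check inter 2000 rise 2 fall 5
    server big    192.168.1.102:80 weight 8 check inter 2000 rise 2 fall 5
```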



 -Original Message-
 From: Ivan Hernandez [mailto:ihernan...@kiusys.com]
 Sent: Wednesday, September 28, 2011 7:08 AM
 To: haproxy@formilux.org
 Subject: HAProxy 1.4.8 Tunning for load balancing 3 servers
 
 Hello,
 
 I have 3 webservers: a little old one that can handle 350req/s, a middle
 one that handles 1700req/s and a bigger one that handles 2700req/s in
 tests with the apache benchmark tool with 128 simultaneous connections.
 So I decided to put haproxy as a load balancer on another server so I can
 (theoretically) reach up to 4500req/s.
 
 I worked for a while trying many different configurations, but the
 system seems to be limited to the speed of the fastest server in the
 cluster.  If I take 1 or 2 servers out of the cluster, the haproxy
 performance is always the same as the fastest server in the cluster.
 Of course, the load of each individual server goes down, which means that
 requests are distributed between them, but speed doesn't go up.
 
 So, here I copy my config in case it has some obvious error:
 
 Thanks !
 Ivan
 
 global
  log 127.0.0.1local0
  log 127.0.0.1local1 notice
  maxconn 8192
  user haproxy
  group haproxy
 
 defaults
  log global
  retries 3
  maxconn 8192
  contimeout 5000
  clitimeout 5
  srvtimeout 5
 
 listen web-farm 0.0.0.0:80
  mode http
  option httpclose
  option abortonclose
  balance roundrobin
  server small 192.168.1.100:80 weight 1 check inter 2000 rise 2 fall 5
  server medium 192.168.1.101:80 weight 2 check inter 2000 rise 2 fall 5
  server big 192.168.1.102:80 weight 8 check inter 2000 rise 2 fall 5




Log host info with uri

2011-09-27 Thread John Lauro
Is there an easy way to have haproxy log the host with the uri instead of
just the relative uri?  I have some 503 errors going to virtual hosts on
the backend, and I am having some trouble tracking them down, as the uri
isn't specific enough: it is common among multiple hosts.  I'm sure this
can be done; I'm just having trouble figuring it out at the moment.
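One way to do this is to capture the Host header so it shows up in the log lines, e.g. (the frontend name is a placeholder):

```
frontend fe_web
    # captured request headers appear in braces in the HTTP log line,
    # just before the quoted request line containing the URI
    capture request header Host len 64
```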

 

 

 



RE: TPROXY + Hearbeat

2011-09-27 Thread John Lauro
Works great.  I have several pairs of vm haproxy servers in transparent mode 
and running heartbeat to take over the shared IP.


 -Original Message-
 From: Jason J. W. Williams [mailto:jasonjwwilli...@gmail.com]
 Sent: Tuesday, September 27, 2011 3:46 PM
 To: haproxy@formilux.org
 Subject: TPROXY + Hearbeat

 Hello,

 Is anyone running redundant HAProxy servers that use TPROXY for
 transparent proxying (preserve source IP) and use Heartbeat for
 failover of VIPs and shared interface IPs? We're curious if you run
 into issues due to combination of shared IPs and TPROXY? Thank you in
 advance.

 -J




RE: TPROXY + Hearbeat

2011-09-27 Thread John Lauro
As an example setup for some of systems:
My haresources file has:
hawebcl1  IPaddr2::xx.xx.xx.77/24/eth0

Actual IPs are xx.xx.xx.78 and xx.xx.xx.79 on the haproxy boxes.

The real gateway is .1.

So both haproxy hosts have the mangle setup for tproxy, gateway as .1, 
etc...
All the backend servers have .77 as their default gateway instead of .1.

I leave haproxy running on both.  It means both constantly poll the backend
servers, but why bother having heartbeat start/stop it...


The only minor annoying part is that you must specify the unique IP on the
source lines in the haproxy config, which makes it slightly harder to keep
them in sync.
IE:
source  xx.xx.xx.78 usesrc client
If you have heartbeat stop/start haproxy you could probably just use the 
shared IP for a common config file.

Both haproxys (active and passive) and all backend servers can access the 
internet fine for updates/etc.  All outgoing traffic relays through the 
active haproxy box just like incoming traffic, but that's not a problem... 
That's for those set up on public IPs.


We have some servers setup in multiple datacenters setup behind an anycast 
network.  For those it's setup much the same, except the backend servers 
have a 2nd NIC with a private IP address, and we then use policy based 
routing on each backend server so that originating outgoing traffic from 
those go to a separate NAT server, and traffic from the haproxy go back via 
that...  Have to do the split because of the anycast, as we have to 
originate from a regular public IP instead of one from the anycast ip...

You could probably do it with NAT for outgoing tied to source IP of the 
private NAT, but haven't tried that and doubt running NAT on the server 
running haproxy would be a good idea for anything but light load...
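For reference, the "mangle setup for tproxy" mentioned above usually looks something like this (run as root; the fwmark and table numbers are the conventional ones from the TPROXY examples, adapt as needed):

```
# deliver marked packets locally
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

# mark packets belonging to sockets haproxy bound transparently
iptables -t mangle -N DIVERT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
```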


 -Original Message-
 From: Jason J. W. Williams [mailto:jasonjwwilli...@gmail.com]
 Sent: Tuesday, September 27, 2011 6:13 PM
 To: John Lauro
 Cc: haproxy@formilux.org
 Subject: Re: TPROXY + Hearbeat

 Hey John,

 Thanks for the quick response. That's great to know. So both the VIPs
 and the shared IP your backends use as their default gateway fail over
 well?

 Is your HAProxy pair the actual network boundary box between the
 subnets, or is it just the default gateway for your backends and the
 pair relay off the real subnet gateway? (any issues with utility
 traffic originating from the backend servers like package updates
 running through HAProxy pair as the default gw?)

 Thank you so much for your help!

 -J

 On Tue, Sep 27, 2011 at 4:09 PM, John Lauro john.la...@covenanteyes.com
 wrote:
  Works great.  I have several pairs of vm haproxy servers in transparent
 mode
  and running heartbeat to take over the shared IP.
 
 
  -Original Message-
  From: Jason J. W. Williams [mailto:jasonjwwilli...@gmail.com]
  Sent: Tuesday, September 27, 2011 3:46 PM
  To: haproxy@formilux.org
  Subject: TPROXY + Hearbeat
 
  Hello,
 
  Is anyone running redundant HAProxy servers that use TPROXY for
  transparent proxying (preserve source IP) and use Heartbeat for
  failover of VIPs and shared interface IPs? We're curious if you run
  into issues due to combination of shared IPs and TPROXY? Thank you in
  advance.
 
  -J
 
 



RE: Log host info with uri

2011-09-27 Thread John Lauro
Thanks, that worked.

 -Original Message-
 From: Baptiste [mailto:bed...@gmail.com]
 Sent: Tuesday, September 27, 2011 6:02 PM
 To: John Lauro
 Cc: haproxy@formilux.org
 Subject: Re: Log host info with uri

 You might want to use "capture request header host len 64"

 cheers

 On Tue, Sep 27, 2011 at 11:46 PM, John Lauro
 john.la...@covenanteyes.com wrote:
  Is there an easy way to have haproxy log the host with the uri instead
of
  just the relative uri?  I have some 503 errors, and they are going to
  virtual hosts on the backend and I am having some trouble tracking
them
  down…  and the uri isn’t specific enough as it is common among
multiple
  hosts…  I’m sure this can be done, just having trouble figuring it out
at
  the moment…
 
 
 
 
 
 



RE: TPROXY + Hearbeat

2011-09-27 Thread John Lauro
Sorry, should have been "like" instead of "link", and the next sentence 
didn't make much sense as-is...

In summary, I meant to say that it is a relatively simple setup as long as 
everything is using standard public IPs.


If you are doing NAT, or anycast it is a little more complex setup, but can 
be done.


 -Original Message-
 From: Jason J. W. Williams [mailto:jasonjwwilli...@gmail.com]
 Sent: Tuesday, September 27, 2011 8:03 PM
 To: John Lauro
 Subject: Re: TPROXY + Hearbeat

 Hey John,

  Thank you for the giving me more detail. I really appreciate it.
 We're moving from a pair of A10 hardware load balancers to a dedicated
 hosting environment where we don't have easy access to appliances or
 the privilege of laying out the network. So our hope was to do exactly
 what you're doing with HAProxy.

 Just one other question, what do you mean by ...just link incoming
 traffic, but not a problem...  That for those setup on public ips.

 Thank you again!

 -J


 On Tue, Sep 27, 2011 at 5:39 PM, John Lauro john.la...@covenanteyes.com
 wrote:
  As an example setup for some of systems:
  My haresources file has:
  hawebcl1  IPaddr2::xx.xx.xx.77/24/eth0
 
  Actual IPs are xx.xx.xx.78 and xx.xx.xx.79 on the haproxy boxes.
 
  The real gateway is .1.
 
  So both haproxy hosts have the mangle setup for tproxy, gateway as .1,
  etc...
  All the backend servers have .77 as their default gateway instead of .1.
 
  I leave haproxy running on both.  It means both constantly poll the
 backend
  servers, but why both having heartbeat start/stop it...
 
 
  Only minor annoying part is you must specify the unique IP on the source
  lines in haproxy config which makes it slightly harder to keep them in
 sync.
  IE:
  source  xx.xx.xx.78 usesrc client
  If you have heartbeat stop/start haproxy you could probably just use the
  shared IP for a common config file.
 
  Both haproxys (active and passive) and all backend servers can access 
  the
  internet fine for updates/etc.  All outgoing traffic relays through the
  active haproxy box just link incoming traffic, but not a problem... 
  That
  for those setup on public ips.
 
 
  We have some servers setup in multiple datacenters setup behind an
 anycast
  network.  For those it's setup much the same, except the backend servers
  have a 2nd NIC with a private IP address, and we then use policy based
  routing on each backend server so that originating outgoing traffic from
  those go to a separate NAT server, and traffic from the haproxy go back
 via
  that...  Have to do the split because of the anycast, as we have to
  originate from a regular public IP instead of one from the anycast ip...
 
  You could probably do it with NAT for outgoing tied to source IP of the
  private NAT, but haven't tried that and doubt running NAT on the server
  running haproxy would be a good idea for anything but light load...
 
 
  -Original Message-
  From: Jason J. W. Williams [mailto:jasonjwwilli...@gmail.com]
  Sent: Tuesday, September 27, 2011 6:13 PM
  To: John Lauro
  Cc: haproxy@formilux.org
  Subject: Re: TPROXY + Hearbeat
 
  Hey John,
 
  Thanks for the quick response. That's great to know. So both the VIPs
  and the shared IP your backends use as their default gateway fail over
  well?
 
  Is your HAProxy pair the actual network boundary box between the
  subnets, or is it just the default gateway for your backends and the
  pair relay off the real subnet gateway? (any issues with utility
  traffic originating from the backend servers like package updates
  running through HAProxy pair as the default gw?)
 
  Thank you so much for your help!
 
  -J
 
  On Tue, Sep 27, 2011 at 4:09 PM, John Lauro
 john.la...@covenanteyes.com
  wrote:
   Works great.  I have several pairs of vm haproxy servers in
 transparent
  mode
   and running heartbeat to take over the shared IP.
  
  
   -Original Message-
   From: Jason J. W. Williams [mailto:jasonjwwilli...@gmail.com]
   Sent: Tuesday, September 27, 2011 3:46 PM
   To: haproxy@formilux.org
   Subject: TPROXY + Hearbeat
  
   Hello,
  
   Is anyone running redundant HAProxy servers that use TPROXY for
   transparent proxying (preserve source IP) and use Heartbeat for
   failover of VIPs and shared interface IPs? We're curious if you run
   into issues due to combination of shared IPs and TPROXY? Thank you 
   in
   advance.
  
   -J
  
  
 



RE: Doesn't work for a very few visitors

2009-12-19 Thread John Lauro
Are you using connection tracking with iptables?  If so, you might want to
consider using a more basic configuration without connection tracking.

 

What does your iptables configuration look like?
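If dropping the firewall entirely isn't an option, one middle ground is to exempt just the proxied traffic from connection tracking in the raw table (port 80 here is an assumption; newer iptables spells the target `-j CT --notrack`):

```
# skip connection tracking for web traffic in both directions
iptables -t raw -A PREROUTING -p tcp --dport 80 -j NOTRACK
iptables -t raw -A OUTPUT     -p tcp --sport 80 -j NOTRACK
```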

 

 

 

From: Joe Torsitano [mailto:jtorsit...@weatherforyou.com] 
Sent: Saturday, December 19, 2009 4:25 PM
To: Willy Tarreau
Cc: haproxy@formilux.org
Subject: Re: Doesn't work for a very few visitors

 

Hi Willy,

I have been using iptables on the HAProxy servers.  Luckily I found a couple
of willing test subjects who were having the problem, and shutting off
iptables seemed to correct it (they could then see the sites).  I use a
pretty basic iptables configuration just to restrict access to SSH and close
off all unused ports.  What is it about iptables that HAProxy doesn't get
along with?  Is there an iptables or other firewall configuration that will
work with HAProxy or do I just have to pretty much leave the server HAProxy
is running on wide open?

Thanks for the information.


-- 
Joe Torsitano




On Fri, Dec 18, 2009 at 11:04 PM, Willy Tarreau w...@1wt.eu wrote:

On Fri, Dec 18, 2009 at 05:00:38PM -0800, Joe Torsitano wrote:
 Hi Willy,

 What's strange is traffic still appears normal, and is, for probably at
 least 99% of the visitors.  Logged traffic remains about normal (hundreds
of
 thousands of visitors a day).  I just get a few e-mails asking why the
site
 has been down for days or when it will be back.  But I cannot recreate the
 problem.  And I know there are probably people who just don't e-mail and,
 unfortunately, don't come back.

yes, very possible unfortunately.

 Here is the config file with the IP addresses changed, pretty much the
 default that comes with it...

A few questions that come to mind :
- What version are you running by the way (haproxy -vv) ?
 Several cases of truncated responses were observed between
 1.3.16 and 1.3.18, and sometimes a 502 response could be
 sent if the server closed too fast before 1.3.19. So please
 ensure you're on 1.3.22. More info here about the bugs in
 your version :

   http://haproxy.1wt.eu/knownbugs-1.3.html

- Have you tried to look for client errors in the logs ?

- Have you tried to look in the logs if you could find some of
 the complainers' traces ? Most often, you can check for the
 same class-B or class-C addresses as the IP that posted the
 mail, and try to isolate the accesses by taking the access
 time into account.

- are you sure that 2000 concurrent connections are enough ?
 You may check that in the logs too, as there is a field
 with connection counts.

- I'm seeing there is no option httpclose below. Could you
 try to add it in the defaults section and see if it changes
 anything ? Before doing that, please check that you don't
 have iptables enabled on your haproxy machine.

I'm also thinking about something else. You said that when
you don't go through haproxy you don't get any complaint.
Are your systems configured similarly ? I mean, the very
low rate of problems could very well be caused by some TCP
settings which are incompatible with a minority of users
running behind a buggy router/firewall.

In order to check this, you could run the following command
on each server (including the one with haproxy) :

   $ sysctl -a | fgrep net.ipv4.tcp

Please verify if tcp_ecn and tcp_window_scaling are at the
same values. If not, start by setting tcp_ecn to 0 on
the haproxy server. Then later you can try to similarly
disable tcp_window_scaling, though this one is far less
likely because it's enabled almost everywhere.

Also check with "ip route" and "ip address" on all servers
if you don't see a different MTU value on the default
route. It's possible that a small part of your clients
are still running a misconfigured PPPoE ADSL line and
can't send/receive full packets. There are still some
large sites who deal with that by setting their MTU to
1492 or even 1452 on the external interface. But this
is less likely.

Regards,
Willy








RE: Standby/backup with 2 nodes

2009-12-03 Thread John Lauro
 Is there some simple configuration option(s) staring me in the face
 that I'm missing, or is this more complex than it seems on the surface?
 

In terms of some simple configuration option...
Why not just have all 3 active?  If one is down, its load will automatically
be routed to the other two.


If you really want no more than two active, you could load balance
internally to two different virtual servers, each with a single active
server and a backup server, and just make their backup server the same.
Still, I'm not sure why you would want to do that instead of just 3 active
servers...
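A sketch of that two-pools-with-a-shared-backup layout (backend names and addresses are made up):

```
backend be_pool_a
    server web1  10.0.0.11:80 check
    server spare 10.0.0.13:80 check backup

backend be_pool_b
    server web2  10.0.0.12:80 check
    server spare 10.0.0.13:80 check backup
```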





RE: Load balancing PostFix Mail Out

2009-12-02 Thread John Lauro
I think haproxy can only do header manipulation with HTTP.  In other words,
rspadd will not work with mode tcp.

You should be able to have your PHP script add the custom header.


Postfix can handle a lot of outgoing mail.  If you don't mind me asking, I'm
just curious how much mail you are sending out that you need to route it
through a load balancer?  Or is it more for redundancy, with multiple servers
feeding a redundant mail gateway?





 -Original Message-
 From: boris17...@gmail.com [mailto:boris17...@gmail.com]
 Sent: Wednesday, December 02, 2009 11:33 PM
 To: haproxy@formilux.org
 Subject: Load balancing PostFix Mail Out
 
 I'm currently trying to load balance mail out from a postfix MTA.
 It works nicely, but I have some questions (maybe a feature request).
 I want to add a custom header (this one is just a test) but
 rspadd seems to work only on http.
   rspadd  X-HAProxyRocks:\ Yes\0
 
 Our mail sender is a php script. I tried to use HAProxy directly from
 the PHP scripts (using fgets for sending emails) but it doesn't work
 (maybe it's a script problem; the script does not seem to like haproxy
 as an SMTP server).
 
 Now my solution is to set:
 php script -> Postfix localhost:25 -> HAProxy localhost:10025 -> a lot
 of postfix.
 
 Maybe I've done some bad stuff somewhere.
 I just upgraded (not tested yet) to 1.3.22 because I see a changelog
 with fgets (haproxy fgets on google).
 
 I think my php script waits for a normal smtp response, but HAProxy is
 waiting to get the response from the postfix backend server.
 
 My configuration file is quite normal:
 ===
 frontend LB
  mode tcp
  bind 127.0.0.1:10025
  default_backend PostFix
 backend PostFix
  mode tcp
  contimeout  3000
  srvtimeout  3000
  option  redispatch
  retries 5
  balance roundrobin
  option  smtpchk EHLO hi
  rspadd  X-HAProxyRocks:\ Yes\0
  server  ara01   87.98.142.XX:25weight  30
 check   inter 20
  server  ara02   94.23.155.XX:25weight  30
 check   inter 20
  server  ara03   91.121.218.XX:25weight  30
   check   inter 20
 ===
 For now it works nicely with a Postfix frontend using haproxy as relay
 host. Maybe I can simplify it.
 
 Sorry for my crappy English, teachers' fault ;)
 
 Kind regards, a HAProxy lover.
 
 Other question: I want a HAProxy logo, are there official logos?
 
 No virus found in this incoming message.
 Checked by AVG - www.avg.com
 Version: 8.5.426 / Virus Database: 270.14.86/2533 - Release Date:
 12/02/09 19:43:00




RE: Preventing bots from starving other users?

2009-11-16 Thread John Lauro
Oops, my bad...  It's actually tc and not iptables.  Google "tc qdisc"
for some info.

You could allow your local ips go unrestricted, and throttle all other IPs
to 512kb/sec for example.

What software is the wiki running on?  I assume it's not running under apache or
there would be some ways to tune apache.  As others have mentioned, telling
the crawlers to behave themselves or to totally ignore the wiki via a robots
file is probably best.
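
As a rough configuration sketch of what such tc-based throttling could look
like (the interface name, rates, and exempt subnet below are hypothetical,
not from this thread): the local subnet goes unrestricted while all other
destinations are capped at 512kbit.

```shell
# Egress shaping sketch: HTB root qdisc, default class capped at 512kbit.
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:10 htb rate 100mbit
tc class add dev eth0 parent 1: classid 1:20 htb rate 512kbit ceil 512kbit
# Exempt the local subnet by steering it into the unrestricted class.
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip dst 192.168.0.0/16 flowid 1:10
```

Note this shapes egress only; it needs root and a real interface name.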

 -Original Message-
 From: Wout Mertens [mailto:wout.mert...@gmail.com]
 Sent: Monday, November 16, 2009 7:31 AM
 To: John Lauro
 Cc: haproxy@formilux.org
 Subject: Re: Preventing bots from starving other users?
 
 Hi John,
 
 On Nov 15, 2009, at 8:29 PM, John Lauro wrote:
 
  I would probably do that sort of throttling at the OS level with
 iptables,
  etc...
 
 Hmmm How? I don't want to throw away the requests, just queue them.
 Looking for iptables rate limiting it seems that you can only drop the
 request.
 
 Then again:
 
  That said, before that I would investigate why the wiki is so slow...
  Something probably isn't configured right if it chokes with only a
 few
  simultaneous accesses.  I mean, unless it's embedded server with
 under 32MB
  of RAM, the hardware should be able to handle that...
 
 Yeah, it's running pretty old software on a pretty old server. It
 should be upgraded but that is a fair bit of work; I was hoping that a
 bit of configuration could make the situation fair again...
 
 Thanks,
 
 Wout.
 




RE: Preventing bots from starving other users?

2009-11-15 Thread John Lauro
I would probably do that sort of throttling at the OS level with iptables,
etc...

That said, before that I would investigate why the wiki is so slow...
Something probably isn't configured right if it chokes with only a few
simultaneous accesses.  I mean, unless it's an embedded server with under 32MB
of RAM, the hardware should be able to handle that...


 -Original Message-
 From: Wout Mertens [mailto:wout.mert...@gmail.com]
 Sent: Sunday, November 15, 2009 9:57 AM
 To: haproxy@formilux.org
 Subject: Preventing bots from starving other users?
 
 Hi there,
 
 I was wondering if HAProxy helps in the following situation:
 
 - We have a wiki site which is quite slow
 - Regular users don't have many problems
 - We also get crawled by a search bot, which creates many concurrent
 connections, more than the hardware can handle
 - Therefore, service is degraded and users usually have their browsers
 time out on them
 
 Given that we can't make the wiki faster, I was thinking that we could
 solve this by having a per-source-IP queue, which made sure that a
 given source IP cannot have more than e.g. 3 requests active at the
 same time. Requests beyond that would get queued.
 
 
 Is this possible?
 
 Thanks,
 
 Wout.
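
For readers on later HAProxy versions (1.5 and up), the per-source-IP cap
asked about above can be sketched directly in the proxy with a stick-table.
Note this rejects rather than queues excess connections, and all names below
are hypothetical:

```
frontend wiki
    bind :80
    stick-table type ip size 100k expire 30s store conn_cur
    tcp-request connection track-sc0 src
    tcp-request connection reject if { sc0_conn_cur gt 3 }
    default_backend wiki_servers
```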
 




RE: Using HAProxy In Place of WCCP

2009-11-04 Thread John Lauro
I see two potential issues (which may or may not be important for you).

 

1.   Non-HTTP/1.1 clients may have trouble (ie: they don't send the Host
header with the request, or they are not really HTTP but merely use port 80).

2.   Tracking back a complaint from some website (ie: an RIAA complaint) to
determine who accessed what is going to be nearly impossible.
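
For reference, the interception setup asked about below can be sketched
roughly like this (addresses, ports, and names are hypothetical):

```
# On the firewall/router, something like:
#   iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 \
#       -j REDIRECT --to-port 8080
# Then a plain HTTP listen section balancing to the caches:
listen cache_farm
    bind :8080
    mode http
    balance roundrobin
    server cache1 10.0.0.11:3128 check
    server cache2 10.0.0.12:3128 check
```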

 

 

 

From: d...@opteqint.net [mailto:d...@opteqint.net] On Behalf Of Dave
Sent: Wednesday, November 04, 2009 6:13 AM
To: haproxy@formilux.org
Subject: Using HAProxy In Place of WCCP

 

Hi all,
 I'm busy investigating using HAProxy to balance traffic to a cache farm, in
an environment which doesn't have WCCP. Are there any issues with attempting
to use HAProxy to intercept internet traffic, and redirect it to a farm of
caches as opposed to the default usage of HAProxy? 

My anticipated setup would be to have a listen group on, say, port 8080,
redirect port 80 traffic to HAProxy using a firewall, and have it then send
that HTTP traffic to a farm of cache devices. It seems like this should be
pretty simple to set up, using the same type of setup you would use for just
balancing a group of http servers?

Is anyone using this, or have you heard of it being used in such a way? I
don't currently see any issues.

Thanks in advance for your help
Dave




RE: Backend sends 204, haproxy sends 502

2009-10-28 Thread John Lauro
You could run mode tcp if you set up haproxy in transparent mode.

 

From: Dirk Taggesell [mailto:dirk.tagges...@googlemail.com] 
Sent: Wednesday, October 28, 2009 9:03 AM
To: haproxy@formilux.org
Subject: Backend sends 204, haproxy sends 502

 

Hi all,

I want to load balance a new server application that generally sends
http code 204 - to save bandwidth and to avoid client-side caching.
In fact it only exchanges cookie data, thus no real content is delivered
anyway.

When requests are made via haproxy, the backend - as intended - delivers
a code 204 but haproxy instead turns it into a code 502. Unfortunately I
cannot use tcp mode because the server app needs the client's IP
address. Is there something else I can do?

Request directly to the appserver:

bash-3.2$ curl --verbose http://cm01.example.com:8000/c
* About to connect() to cm01.example.com port 8000 (#0)
*   Trying 22.33.44.55... connected
* Connected to cm01.example.com (22.33.44.55) port 8000 (#0)
> GET /c HTTP/1.1
> User-Agent: curl/7.19.6 (i386-apple-darwin9.8.0) libcurl/7.19.6
OpenSSL/0.9.8k zlib/1.2.3
> Host: cm01.example.com:8000
> Accept: */*
>
< HTTP/1.1 204 No Content
< Date: Wed, 28 Oct 2009 11:56:44 GMT
< Server: Jetty/5.1.11RC0 (Linux/2.6.21.7-2.fc8xen amd64 java/1.6.0_16)
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< Set-Cookie: pid=08f0b764185;Path=/;Domain=.example.com;Expires=Thu,
16-Oct-59 11:56:44 GMT
< Connection: close
* Closing connection #0

The above is how it is intended to look. And now via haproxy:

bash-3.2$ curl --verbose http://cm01.example.com/c
* About to connect() to cm01.example.com port 80 (#0)
*   Trying 22.33.44.55... connected
* Connected to cm01.example.com (22.33.44.55) port 80 (#0)
> GET /c HTTP/1.1
> User-Agent: curl/7.19.6 (i386-apple-darwin9.8.0) libcurl/7.19.6
OpenSSL/0.9.8k zlib/1.2.3
> Host: cm01.example.com
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 502 Bad Gateway
< Cache-Control: no-cache
< Connection: close
< Content-Type: text/html

<html><body><h1>502 Bad Gateway</h1>
The server returned an invalid or incomplete response.
</body></html>
* Closing connection #0




RE: Backend sends 204, haproxy sends 502

2009-10-28 Thread John Lauro
Oops, my bad.  That's technically right.  I was burned by this terminology
too.  What is considered transparent mode is actually good if you want to
proxy the world instead of your servers, and it can be combined with usesrc.

 

Anyway, what I should have said was that you can make HAProxy present the
client's IP with: source <haproxy interface ip> usesrc client

 

Might be good if the transparent mode had a reference to usesrc..
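
A minimal sketch of such a usesrc setup (this assumes a kernel with TPROXY
support, haproxy built with USE_LINUX_TPROXY, and matching routing rules;
names and addresses below are hypothetical):

```
backend app
    mode tcp
    source 0.0.0.0 usesrc clientip
    server srv1 10.0.0.21:8000 check
```

With usesrc clientip, return traffic from the server must route back through
the haproxy box, which is why the extra routing setup is required.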

 

 

From: Dirk Taggesell [mailto:dirk.tagges...@googlemail.com] 
Sent: Wednesday, October 28, 2009 9:48 AM
To: John Lauro
Subject: Re: Backend sends 204, haproxy sends 502

 

 

On Wed, Oct 28, 2009 at 2:05 PM, John Lauro john.la...@covenanteyes.com
wrote:

You could run mode tcp if you setup haproxy in transparent mode .

The docs say: "Note that contrary to a common belief, this option does
NOT make HAProxy present the client's IP to the server when establishing
the connection."

Which makes sense. And it doesn't work.

 




RE: slow tcp handshake

2009-10-21 Thread John Lauro
You mention the loopback interface.  You could be running out of port numbers
for the connections.
What's your /proc/sys/net/ipv4/ip_local_port_range?


What does netstat -s | grep -i list show on the server?



 -Original Message-
 From: David Birdsong [mailto:david.birds...@gmail.com]
 Sent: Wednesday, October 21, 2009 6:36 AM
 To: haproxy
 Subject: slow tcp handshake
 
 This isn't haproxy related, but this list is so knowledgable on
 network problems.
 
 I'm troubleshooting our slow webserver and I've drilled down to a TCP
 handshake taking up to 10 seconds.  This handshake doesn't actually
 really start until the client sends it's 3rd syn.  The first 2 syn's
 are completely ignored, the 3rd is ACKed a full 10 seconds after the
 first syn is sent.  After this, read times are fast.
 
 This happens over the loopback interface.
 
 Can an app get backed up in it's listen queue and affect some sort of
 syn queue, or will the kernel handle the handshake irrespective of the
 server's listen queue?
 
 I've searched all over the internets, and I'm plumb out of ideas.
 
 syn_cookies are disabled
 ip_tables unloaded
 /proc/sys/net/ipv4/tcp_max_syn_backlog was set to 1024 and active
 connections to the server never rose above 960, so thought this may be
 it...but i doubled it and it had no affect
 
 
 Fedora 8 2.6.26.8-57.fc8
 Web server is lighttpd
 




RE: slow tcp handshake

2009-10-21 Thread John Lauro
You could bump your range up.  It might help if you have a high connection
rate and not just a high number of connections.

echo 1024 61000 > /proc/sys/net/ipv4/ip_local_port_range


Good that nothing shows, as most 0 values are not printed.  You could check
for anything else that looks strange under netstat -s

 -Original Message-
 From: David Birdsong [mailto:david.birds...@gmail.com]
 Sent: Wednesday, October 21, 2009 7:07 AM
 To: John Lauro
 Cc: haproxy
 Subject: Re: slow tcp handshake
 
 On Wed, Oct 21, 2009 at 3:51 AM, John Lauro
 john.la...@covenanteyes.com wrote:
  You mention loopback interface.  You could be running out of port
 numbers to
  for the connections.
  What's your /proc/sys/net/ipv4/ip_local_port_range?
 cat /proc/sys/net/ipv4/ip_local_port_range
 32768 61000
 
 
 
 
  What's netstat -s | grep -i list    show on the server?
 nothing at all, no list to match on that output
 
 
 
 
 also, i've disabled tcp_sack with no effect
 
  -Original Message-
  From: David Birdsong [mailto:david.birds...@gmail.com]
  Sent: Wednesday, October 21, 2009 6:36 AM
  To: haproxy
  Subject: slow tcp handshake
 
  This isn't haproxy related, but this list is so knowledgable on
  network problems.
 
  I'm troubleshooting our slow webserver and I've drilled down to a
 TCP
  handshake taking up to 10 seconds.  This handshake doesn't actually
  really start until the client sends it's 3rd syn.  The first 2 syn's
  are completely ignored, the 3rd is ACKed a full 10 seconds after the
  first syn is sent.  After this, read times are fast.
 
  This happens over the loopback interface.
 
  Can an app get backed up in it's listen queue and affect some sort
 of
  syn queue, or will the kernel handle the handshake irrespective of
 the
  server's listen queue?
 
  I've searched all over the internets, and I'm plumb out of ideas.
 
  syn_cookies are disabled
  ip_tables unloaded
  /proc/sys/net/ipv4/tcp_max_syn_backlog was set to 1024 and active
  connections to the server never rose above 960, so thought this may
 be
  it...but i doubled it and it had no affect
 
 
  Fedora 8 2.6.26.8-57.fc8
  Web server is lighttpd
 
 
 
 
 
 




RE: slow tcp handshake

2009-10-21 Thread John Lauro
You may also want to check ulimit -n prior to running your server.  It may
default to 1024 on your distro, and if lighttpd doesn't automatically
increase it for you, that could be your problem.
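
A quick way to check the limit before starting the server (the value to raise
it to below is an assumption, not from this thread):

```shell
# Print the current soft limit on open file descriptors for this shell.
# Busy proxies and web servers can exhaust the common default of 1024.
current=$(ulimit -n)
echo "open-file limit: $current"
# To raise it for daemons started from this shell (often needs root):
#   ulimit -n 65536
```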

 -Original Message-
 From: David Birdsong [mailto:david.birds...@gmail.com]
 Sent: Wednesday, October 21, 2009 7:07 AM
 To: John Lauro
 Cc: haproxy
 Subject: Re: slow tcp handshake
 
 On Wed, Oct 21, 2009 at 3:51 AM, John Lauro
 john.la...@covenanteyes.com wrote:
  You mention loopback interface.  You could be running out of port
 numbers to
  for the connections.
  What's your /proc/sys/net/ipv4/ip_local_port_range?
 cat /proc/sys/net/ipv4/ip_local_port_range
 32768 61000
 
 
 
 
  What's netstat -s | grep -i list    show on the server?
 nothing at all, no list to match on that output
 
 
 
 
 also, i've disabled tcp_sack with no effect
 
  -Original Message-
  From: David Birdsong [mailto:david.birds...@gmail.com]
  Sent: Wednesday, October 21, 2009 6:36 AM
  To: haproxy
  Subject: slow tcp handshake
 
  This isn't haproxy related, but this list is so knowledgable on
  network problems.
 
  I'm troubleshooting our slow webserver and I've drilled down to a
 TCP
  handshake taking up to 10 seconds.  This handshake doesn't actually
  really start until the client sends it's 3rd syn.  The first 2 syn's
  are completely ignored, the 3rd is ACKed a full 10 seconds after the
  first syn is sent.  After this, read times are fast.
 
  This happens over the loopback interface.
 
  Can an app get backed up in it's listen queue and affect some sort
 of
  syn queue, or will the kernel handle the handshake irrespective of
 the
  server's listen queue?
 
  I've searched all over the internets, and I'm plumb out of ideas.
 
  syn_cookies are disabled
  ip_tables unloaded
  /proc/sys/net/ipv4/tcp_max_syn_backlog was set to 1024 and active
  connections to the server never rose above 960, so thought this may
 be
  it...but i doubled it and it had no affect
 
 
  Fedora 8 2.6.26.8-57.fc8
  Web server is lighttpd
 
 
 
 
 
 




RE: [ANNOUNCE] haproxy 1.4-dev4 and 1.3.21

2009-10-14 Thread John Lauro
Sorry to report, from 1.3.21:  
Oct 13 23:36:43 haf1a kernel: haproxy[25428]: segfault at 19 ip
0041620f sp 7381ef60 error 4 in haproxy[40+3d000]


(I know, kind of old, as we were running 1.3.18 on this box, so I'm not sure
in which version the problem started)


Compiled with:
make TARGET=linux26 USE_LINUX_TPROXY=1

Seems to crash fairly quickly on the standby box too, which only generates
its own traffic for checks, so it should be easy to reproduce.


 -Original Message-
 From: Willy Tarreau [mailto:w...@1wt.eu]
 Sent: Monday, October 12, 2009 1:52 AM
 To: haproxy@formilux.org
 Subject: [ANNOUNCE] haproxy 1.4-dev4 and 1.3.21
 
 Hello,
 
 OK they're both released today : 1.3.21 and 1.4-dev4
 
 1.3.21 contains all the minor fixes and improvements I talked
 about last week, plus a few ACLs that were missing (ability to
 match on a backend's queue length).
 
 I'd like it if distro maintainers would update to 1.3.21, as some
 of them are still in 1.3.19 which contains the bug that can cause
 a crash on missing timeout. Also, given we have not fixed a single
 major bug between .20 and .21, I think it proves that we have reached
 a high level of stability and newer 1.3 versions should become rare.
 Even 1.4 remains very stable, which is nice, considering the amount
 of changes it has received. I think that the internal architecture
 changes have helped a lot to get rid of many tricks that were needed
 to get something to work in old versions.
 
 1.4-dev4 has received some eye-candy updates to the stats page,
 mostly coming from Krzysztof. Precise server health status is now
 reported there, which can be very convenient for finding why a server
 is seen as down. Stats can be reported per listening socket, which
 is very convenient when you have multiple ISP accesses and are able
 to create one bind line for each of them. Take a look at the demo
 page, I have split IPv4 and IPv6 for instance.
 
 Also, we now have the ability to clear the stats without restarting,
 as well as to change a server's weight live without restarting (which
 includes setting its weight to zero to disable it).
 
 Another change concerns the load-balancing algorithms. They have
 been reorganized and a new hashing method was implemented: consistent
 hashing[1]. This was already discussed several months ago, but I was
 against it since it would cost a lot of CPU. I finally found how to
 implement it with trees so that it's cheap. The advantage of this
 hashing method is that you can add or remove servers with limited
 redistribution. This is mainly used for caches, where we don't want
 a cache failure to suddenly redistribute all the objects to caches
 which don't have them. The hash is not as smooth as the old one,
 but still not bad at all. The hashing method can be selected using
 the new hash-type keyword. This rework was the opportunity to
 reintroduce the old static round-robin algorithm, which has the
 advantage over the dynamic one to support more than 4000 servers
 (I know some people already have close to 1000 servers in a single
 backend).
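
As a configuration sketch, the consistent hashing described above is selected
with the hash-type keyword (backend and server names below are hypothetical):

```
backend caches
    balance uri
    hash-type consistent
    server c1 10.0.0.31:3128 check
    server c2 10.0.0.32:3128 check
```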
 
 Next development version should focus a bit on QoS and on improved
 detection of failures.
 
 As usual, sources, doc and binaries for 1.3 are available here :
 
http://haproxy.1wt.eu/download/1.3/
 
 And sources for 1.4 are available here :
 
http://haproxy.1wt.eu/download/1.4/
 
 
 Happy update,
 Willy
 
 [1] http://www.spiteful.com/2008/03/17/programmers-toolbox-part-3-
 consistent-hashing/
 
 




RE: [ANNOUNCE] haproxy 1.4-dev4 and 1.3.21

2009-10-14 Thread John Lauro
valgrind ./haproxy -f /etc/lb.cfg
==8149== Memcheck, a memory error detector
==8149== Copyright (C) 2002-2009, and GNU GPL'd, by Julian Seward et al.
==8149== Using Valgrind-3.5.0 and LibVEX; rerun with -h for copyright info
==8149== Command: ./haproxy -f /etc/lb.cfg
==8149== 
[WARNING] 286/084451 (8149) : [./haproxy.main()] Cannot raise FD limit to
40055.
==8149== Invalid read of size 1
==8149==at 0x41620F: uxst_event_accept (proto_uxst.c:469)
==8149==by 0x42DB38: _do_poll (ev_sepoll.c:532)
==8149==by 0x4021C6: run_poll_loop (haproxy.c:926)
==8149==by 0x403843: main (haproxy.c:1203)
==8149==  Address 0x19 is not stack'd, malloc'd or (recently) free'd
==8149== 
==8149== 
==8149== Process terminating with default action of signal 11 (SIGSEGV):
dumping core
==8149==  Access not within mapped region at address 0x19
==8149==at 0x41620F: uxst_event_accept (proto_uxst.c:469)
==8149==by 0x42DB38: _do_poll (ev_sepoll.c:532)
==8149==by 0x4021C6: run_poll_loop (haproxy.c:926)
==8149==by 0x403843: main (haproxy.c:1203)
==8149==  If you believe this happened as a result of a stack
==8149==  overflow in your program's main thread (unlikely but
==8149==  possible), you can try to increase the size of the
==8149==  main thread stack using the --main-stacksize= flag.
==8149==  The main thread stack size used in this run was 8388608.
==8149== 
==8149== HEAP SUMMARY:
==8149== in use at exit: 3,664,267 bytes in 159 blocks
==8149==   total heap usage: 289 allocs, 130 frees, 3,669,374 bytes
allocated
==8149== 
==8149== LEAK SUMMARY:
==8149==definitely lost: 0 bytes in 0 blocks
==8149==indirectly lost: 0 bytes in 0 blocks
==8149==  possibly lost: 193,987 bytes in 104 blocks
==8149==still reachable: 3,470,280 bytes in 55 blocks
==8149== suppressed: 0 bytes in 0 blocks
==8149== Rerun with --leak-check=full to see details of leaked memory
==8149== 
==8149== For counts of detected and suppressed errors, rerun with: -v
==8149== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 4 from 4)


 -Original Message-
 From: Krzysztof Olędzki [mailto:o...@ans.pl]
 Sent: Wednesday, October 14, 2009 5:54 AM
 To: John Lauro
 Cc: haproxy@formilux.org
 Subject: Re: [ANNOUNCE] haproxy 1.4-dev4 and 1.3.21
 
 On 2009-10-14 10:47, John Lauro wrote:
  Sorry to report, from 1.3.21:
  Oct 13 23:36:43 haf1a kernel: haproxy[25428]: segfault at 19 ip
  0041620f sp 7381ef60 error 4 in haproxy[40+3d000]
 
 
  (I know, kind of old, as we were running 1.3.18 on this box, so not
 sure
  which version the problem started)
 
 
  Compiled with:
  make TARGET=linux26 USE_LINUX_TPROXY=1
 
  Seems to crash on the standby box too fairly quickly which only
 generates
  it's own traffic for checks, so it should be easy to reproduce.
 
 Would it be possible to compile haproxy with -g (normally enabled), and
 run non-stripped binary with valgrind[1]. If the bug is trivial it
 should immediately show where the problem is.
 
 [1] http://valgrind.org/
 
 Best regards,
 
   Krzysztof Olędzki
 




RE: HAProxy - Virtual Server + CentOS/RHEL 5.3

2009-09-25 Thread John Lauro
It works well.  Don't forget divider=10 for even better performance.  

 

From: geoffreym...@gmail.com [mailto:geoffreym...@gmail.com] 
Sent: Friday, September 25, 2009 9:45 PM
To: haproxy@formilux.org
Subject: HAProxy - Virtual Server + CentOS/RHEL 5.3

 

Hello,
Does anyone know of any reason why HAProxy wouldn't work well on a virtual
install of either CentOS or RHEL 5.3? 5.3 is built on a 2.6.18 kernel.

Thanks,
Geoff 




RE: RE: HAProxy - Virtual Server + CentOS/RHEL 5.3

2009-09-25 Thread John Lauro
Actually it's a setting for the kernel so it behaves better in a virtual
machine, not a HAProxy setting.  Add it to the kernel lines in
/boot/grub/grub.conf and reboot.  Your virtual installs will idle with much
less CPU usage.
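
A sketch of what the grub.conf kernel line might look like (the kernel
version and root device below are hypothetical; divider=10 is the relevant
addition):

```
# /boot/grub/grub.conf
kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00 divider=10
```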

 

 

 

From: geoffreym...@gmail.com [mailto:geoffreym...@gmail.com] 
Sent: Friday, September 25, 2009 10:17 PM
To: John Lauro; geoffreym...@gmail.com
Cc: haproxy@formilux.org
Subject: Re: RE: HAProxy - Virtual Server + CentOS/RHEL 5.3

 

I have no idea what divider=10 is... but I'm sure I'll figure it out as I
start getting myself familiar with HAProxy. :)

Thanks,
Geoff

On Sep 25, 2009 9:54pm, John Lauro john.la...@covenanteyes.com wrote:
 
 
 
 
 
 It works well.  Don't forget divider=10 for even better performance.

 From: geoffreym...@gmail.com [mailto:geoffreym...@gmail.com]
 Sent: Friday, September 25, 2009 9:45 PM
 To: haproxy@formilux.org
 Subject: HAProxy - Virtual Server + CentOS/RHEL 5.3

 Hello,
 Does anyone know of any reason why HAProxy wouldn't work well on a virtual
 install of either CentOS or RHEL 5.3? 5.3 is built on a 2.6.18 kernel.

 Thanks,
 Geoff




RE: nf_conntrack: table full, dropping packet.

2009-09-03 Thread John Lauro
service iptables stop
should take care of it in Centos.


Although your lsmod doesn't make sense.  It should be showing ip_conntrack
and ip_tables and iptable_filter with a standard Centos and iptables.  Even
dm_multipath and others that you are not interested in would be expected...



 -Original Message-
 From: Hank A. Paulson [mailto:h...@spamproof.nospammail.net]
 Sent: Thursday, September 03, 2009 1:02 PM
 To: HAproxy Mailing Lists
 Subject: nf_conntrack: table full, dropping packet.
 
 Does anyone know how to get rid of/turn off/kill/remove/exorcise
 netfilter
 and/or conntrack?
 I don't use iptables and it seems to cause a lot of overhead.
 
 Does it require a custom compiled kernel?
 I am using CentOS and Fedora standard precompiled kernels right now.
 
 Thank you for any help in this frustrating matter.
 
 # lsmod | grep -i ip
 ipv6  290320  20
 
 sysctl -a | grep -i netfilter
 net.netfilter.nf_conntrack_generic_timeout = 12
 net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 12
 net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 12
 net.netfilter.nf_conntrack_tcp_timeout_established = 2000
 net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 12
 net.netfilter.nf_conntrack_tcp_timeout_close_wait = 12
 net.netfilter.nf_conntrack_tcp_timeout_last_ack = 12
 net.netfilter.nf_conntrack_tcp_timeout_time_wait = 10
 net.netfilter.nf_conntrack_tcp_timeout_close = 8
 net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 30
 net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 30
 net.netfilter.nf_conntrack_tcp_loose = 1
 net.netfilter.nf_conntrack_tcp_be_liberal = 0
 net.netfilter.nf_conntrack_tcp_max_retrans = 3
 net.netfilter.nf_conntrack_udp_timeout = 12
 net.netfilter.nf_conntrack_udp_timeout_stream = 18
 net.netfilter.nf_conntrack_icmp_timeout = 8
 net.netfilter.nf_conntrack_acct = 1
 net.netfilter.nf_conntrack_max = 1048576
 net.netfilter.nf_conntrack_count = 7645
 net.netfilter.nf_conntrack_buckets = 16384
 net.netfilter.nf_conntrack_checksum = 1
 net.netfilter.nf_conntrack_log_invalid = 0
 net.netfilter.nf_conntrack_expect_max = 256
 
 
 




RE: nf_conntrack: table full, dropping packet.

2009-09-03 Thread John Lauro
I haven't used fedora much recently.  It looks like it's compiled into the
kernel instead of as a module with fedora, so I think you would have to build
a custom kernel to disable the connection tracking.  (or switch distros)
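
Two hedged options, depending on how the kernel was built (the paths and rule
placement below are sketches, not something tested on the poster's system):

```shell
# If nf_conntrack is built as a module, keep it from loading at all:
echo "blacklist nf_conntrack" >> /etc/modprobe.d/blacklist.conf

# If it is built into the kernel, a raw-table NOTRACK rule at least skips
# connection tracking for all packets (requires iptables and root):
iptables -t raw -A PREROUTING -j NOTRACK
iptables -t raw -A OUTPUT -j NOTRACK
```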


 -Original Message-
 From: Hank A. Paulson [mailto:h...@spamproof.nospammail.net]
 Sent: Thursday, September 03, 2009 2:15 PM
 To: 'HAproxy Mailing Lists'
 Subject: Re: nf_conntrack: table full, dropping packet.
 
 # lsmod
 Module  Size  Used by
 xen_netfront   19808  0
 pcspkr  2848  0
 xen_blkfront   12404  2
 
 # cat /proc/net/nf_conntrack | wc -l
 50916
 
 # service iptables stop
 (it was never started)
 
 # cat /proc/net/nf_conntrack | wc -l
 65358
 
 This is Fedora, sorry, not CentOS.
 
 the only other thing running is keepalived to manage the ip address for
 haproxy.
 
 On 9/3/09 10:16 AM, John Lauro wrote:
  service iptables stop
  should take care of it in Centos.
 
 
  Although your lsmod doesn't make sense.  It should be showing
 ip_conntrack
  and ip_tables and iptable_filter with a standard Centos and iptables.
 Even
  dm_multipath and others that you are not interested in would be
 expected...
 
 
 
  -Original Message-
  From: Hank A. Paulson [mailto:h...@spamproof.nospammail.net]
  Sent: Thursday, September 03, 2009 1:02 PM
  To: HAproxy Mailing Lists
  Subject: nf_conntrack: table full, dropping packet.
 
  Does anyone know how to get rid of/turn off/kill/remove/exorcise
  netfilter
  and/or conntrack?
  I don't use iptables and it seems to cause a lot of overhead.
 
  Does it require a custom compiled kernel?
  I am using CentOS and Fedora standard precompiled kernels right now.
 
  Thank you for any help in this frustrating matter.
 
  # lsmod | grep -i ip
  ipv6  290320  20
 
  sysctl -a | grep -i netfilter
  net.netfilter.nf_conntrack_generic_timeout = 12
  net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 12
  net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 12
  net.netfilter.nf_conntrack_tcp_timeout_established = 2000
  net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 12
  net.netfilter.nf_conntrack_tcp_timeout_close_wait = 12
  net.netfilter.nf_conntrack_tcp_timeout_last_ack = 12
  net.netfilter.nf_conntrack_tcp_timeout_time_wait = 10
  net.netfilter.nf_conntrack_tcp_timeout_close = 8
  net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 30
  net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 30
  net.netfilter.nf_conntrack_tcp_loose = 1
  net.netfilter.nf_conntrack_tcp_be_liberal = 0
  net.netfilter.nf_conntrack_tcp_max_retrans = 3
  net.netfilter.nf_conntrack_udp_timeout = 12
  net.netfilter.nf_conntrack_udp_timeout_stream = 18
  net.netfilter.nf_conntrack_icmp_timeout = 8
  net.netfilter.nf_conntrack_acct = 1
  net.netfilter.nf_conntrack_max = 1048576
  net.netfilter.nf_conntrack_count = 7645
  net.netfilter.nf_conntrack_buckets = 16384
  net.netfilter.nf_conntrack_checksum = 1
  net.netfilter.nf_conntrack_log_invalid = 0
  net.netfilter.nf_conntrack_expect_max = 256
 
 
 
 
 




RE: take servers out of the pool

2009-08-20 Thread John Lauro
I don't think you can easily have two health checks.  You could instead do
port forwarding with iptables or inetd/xinetd and run the health check on a
different port.  Stop the forwarding when you want maintenance mode.
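
A sketch of that approach (ports, addresses, and server names below are
hypothetical): expose a dedicated check port while the server should receive
traffic, and drop the forward to enter maintenance mode.

```
# While in service, forward the check port to the app:
#   iptables -t nat -A PREROUTING -p tcp --dport 8081 -j REDIRECT --to-port 8080
# For maintenance, delete the rule (-D) so the check fails while the app
# itself stays up for existing users.
# haproxy side, checking the forwarded port instead of the service port:
backend app
    server app1 10.0.0.41:8080 check port 8081 inter 2000
```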



 -Original Message-
 From: Matt [mailto:mattmora...@gmail.com]
 Sent: Thursday, August 20, 2009 4:23 AM
 To: haproxy@formilux.org
 Subject: Re: take servers out of the pool
 
 In this case I am not load balancing to apache or anything else where
 I can touch/remove a file, I am load balancing directly to the http
 application which doesn't serve any local files, it's a jetty app.
 
 On LVS I could touch a file on the LB or use the ipvsadm command to
 drop a servers weight to 0.  However i'm working in EC2 hence my use
 of haproxy.
 
 Could I run apache on the LB and have two health checks? one for a uri
 on the backend ( option httpchk server1/myapp ) and another with
 disable-on-404 that's pointing to a file on the LB (localhost/server1)
 for maintenance?
 
 Thanks,
 
 Matt
 
 2009/8/20 Willy Tarreau w...@1wt.eu:
  On Wed, Aug 19, 2009 at 11:18:08PM +0200, Magnus Hansen wrote:
  Very true...
  There are some nice examples in the docs.
  You could also use the persist option to keep old users on the
 server
  while new ones go to other servers.
  I use that to make sure i dont kick users..
 
  better use the http-check disable-on-404 now, as it allows you to
 set
  a server's weight to zero based on a reply to a health-check. If I
 can
  find some time (joke) I'll update the architecture manual with
 examples
  using this, and possibly with simple scripts to move a file on the
 server
  to perform various maintenance operations.
 
  Willy
 
 
 




RE: Balancing bytes in/out

2009-08-13 Thread John Lauro
The biggest issue is probably that you are using cookies, which tie a
client to a server.  I noticed it often takes over 3 days to move the
first 80% of the traffic off a server after marking it as soft down,
because people never reboot or close their browsers.  If you have more
random traffic and fewer regular visitors it might not take so long.

Leastconn is more accurate for how the servers look at that moment in time.
However, it does not take into account clients that closed the session but
still hold a cookie for a server, and it is only used for clients that don't
already have a cookie.  Round robin or one of the other methods might work
better, but it can take close to a week to know whether it's any better or
not, since a large number of your users are already tied to a specific server
with a cookie.  I used to use roundrobin and I think it worked better with
cookies, but currently I don't have any traffic requiring cookies, so
leastconn has been working better.

 -Original Message-
 From: Nelson Serafica [mailto:ntseraf...@gmail.com]
 Sent: Thursday, August 13, 2009 1:23 AM
 To: HAproxy Mailing Lists
 Subject: Balancing bytes in/out
 
 I have installed and run haproxy for almost 3 months now and found no
 problem with it.  It's just that I noticed in my MRTG that the traffic
 is not the same.  HAPROXY has SERVER A and SERVER B as backend web
 servers.  In the mrtg, SERVER B has a lot more traffic compared to
 SERVER A.  I have enabled stats and I notice that Bytes IN/OUT are not
 nearly equal (see attachment).  Here is the config of my haproxy:
 
 listen  WEB_SERVERS 1.2.3.4:80
   mode http
   clitimeout  6 # 16.6 Hrs.
   srvtimeout  3 # 8.33 Hrs.
   contimeout  4000  # 1.11 Hrs.
   balance leastconn
   option forwardfor
   option  httpclose
   cookie igx insert nocache indirect
   server WEBA 2.3.4.5:80 cookie igx1 maxconn 2500 inter 1000
 fastinter 200 fall 2 check
   server WEBB 2.3.4.6:80 cookie igx2 maxconn 2500 inter 1000
 fastinter 200 fall 2 check
   stats uri   /my_stats
   stats realm Global\ statistics
   stats auth  stats:password
 global
   maxconn 1 # Total Max Connections.
   log 127.0.0.1   local0
   log 127.0.0.1   local1 notice
   daemon
   nbproc  1 # Number of processes
   userhaproxy
   group   haproxy
   chroot  /var/chroot/haproxy
 defaults
   log global
   option  httplog
   modetcp
   clitimeout  6 # 16.6 Hrs.
   srvtimeout  3 # 8.33 Hrs.
   contimeout  4000  # 1.11 Hrs.
   retries 3
   option  redispatch
   option  httpclose
 
 Is there something I need to change in my config?  I set it to leastconn
 to balance the traffic but it isn't balanced.  Can haproxy transfer
 requests to the other backend when it knows one has much more traffic
 than the other?
 
 




RE: Balancing bytes in/out

2009-08-13 Thread John Lauro
 
 Is there something I need to change in my config?  I set it to leastconn
 to balance the traffic but it isn't balanced.  Can haproxy transfer
 requests to the other backend when it knows one has much more traffic
 than the other?
 
It cannot transfer to a different server once you tie the client down to a
server with a cookie.  If you can narrow down which traffic actually has to
be tied to a specific server, I think you could skip the cookie for the other
traffic and get better balance.
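A sketch of that split (the /app path is hypothetical; server addresses reuse the earlier config): send only the traffic that needs stickiness through a cookie-inserting backend and let the rest balance freely.

```
frontend web 1.2.3.4:80
    mode http
    acl needs_sticky path_beg /app
    use_backend sticky if needs_sticky
    default_backend stateless

backend sticky
    balance leastconn
    cookie igx insert nocache indirect
    server WEBA 2.3.4.5:80 cookie igx1 check
    server WEBB 2.3.4.6:80 cookie igx2 check

backend stateless
    balance leastconn
    server WEBA 2.3.4.5:80 check
    server WEBB 2.3.4.6:80 check
```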




RE: Connection limiting & Sorry servers

2009-08-10 Thread John Lauro
Do you have haproxy between your web servers and the 3rd party?  If not (ie: 
only to your servers), perhaps that is what you should do.  Trying to throttle 
the maximum connections to your web servers sounds pointless given that it 
doesn't correlate well with the traffic to the third party servers.

If you need to rate limit the connections per second, you could always do that 
with iptables on linux, or pf on bsd, etc...  but it sounds like it's something 
the third party needs to fix.
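For the rate-limiting option, a sketch of what that can look like with iptables on linux (the 10/s and burst figures are placeholders; tune them to whatever the third party allows):

```shell
# accept at most ~10 new connections/s to port 80, with a burst of 20
iptables -A INPUT -p tcp --dport 80 --syn -m limit --limit 10/s --limit-burst 20 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 --syn -j DROP
```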


 -Original Message-
 From: Boštjan Merčun [mailto:bostjan.mer...@dhimahi.com]
 Sent: Monday, August 10, 2009 9:32 AM
 To: Willy Tarreau
 Cc: haproxy@formilux.org
 Subject: Re: Connection limiting & Sorry servers
 
 On Wed, 2009-08-05 at 18:26 +0200, Willy Tarreau wrote:
  On Wed, Aug 05, 2009 at 05:52:50PM +0200, Bo??tjan Mer??un wrote:
   Hi Willy
  
   On Mon, 2009-08-03 at 09:21 +0200, Willy Tarreau wrote:
  
why are you saying that ? Except for rare cases of huge bugs, a
 server
is not limited in requests per second. At full speed, it will
 simply use
100% of the CPU, which is why you bought it after all. When a
 server dies,
it's almost always because a limited resource has been exhausted,
 and most
often this resource is memory. In some cases, it may be other
 limits such
as sockets, file descriptors, etc... which cause some unexpected
 exceptions
not to be properly caught.
  
   We have a problem that our servers open connections to some 3rd
 party,
   and if we get too many users at the same time, they get too many
   connections.
 
  So you're agreeing that the problem comes from too many
 connections. This
  is exactly what maxconn is solving.
 
 The whole story is like this: during the process on our servers, we
 have to open a few connections to some 3rd party for every user before
 the user's process finishes.
 If any of the connections is unsuccessful, so is everything that user
 did before that (unless he tries again and eventually succeeds).
 The 3rd party limits total concurrent connections and connections per
 second.
 The number of connections that users make to the 3rd party depends on
 what users do on our pages. User can just browse the site for 10
 minutes
 and open no connections or he can finish his process in a minute and
 open more then 10 connections during that time.
 
 As you probably see, my problem is the difference between the user,
 that
 comes the check the site and the user that knows exactly what he wants
 on the site.
 
 The factor is at least 20 (probably more) which means that one setting
 is not good for all scenarios, either it will be to high and users will
 flood the 3rd patry with too many connections or few users will be able
 to browse the site and the rest will wait even though server will be
 sleeping.
 
 I know that these problems should be solved on different levels like
 application, 3rd party connection limiting etc... but the problem is
 actually more of political nature and what I am trying to do is just
 solving the current situation with the tools and options I have. One of
 them is HAProxy and it's connection limiting and with it I would like
 to
 help myself as much as I can.
 
 I hope that clarified my situation a bit.
 
 I will not be able to test anything for a week or more likely two, but
 I
 will continue as soon as possible and if I come to any useful
 conclusions, I will also notify the list.
 
 Thank you again and best regards
 
 
 Bostjan
 
 




RE: HAProxy and MySQL

2009-08-07 Thread John Lauro
Nearly an extra 0.1 seconds seems high, but to be fair it doesn't appear you did 
much of a test:

Number of clients running queries: 1
Average number of queries per client: 0

 

Simulating only 1 client, I wouldn't expect any performance improvement, and 
without doing any queries, you are only benchmarking connection time?  Sorry, 
not really familiar with mysqlslap.  I wouldn't be surprised by it being 
slower, but would expect more like a 0.001 second difference instead of over 
0.1 seconds slower.  I have thought about using haproxy for load balancing 
mysql, but haven't implemented it yet.

 

I am not an haproxy expert, but I think "option persist" doesn't apply in mode 
tcp.  How do the numbers look if you actually benchmark multiple queries and 
simulate multiple clients?
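For example, a mysqlslap invocation along these lines would exercise real queries from many clients at once (flags from the mysqlslap manual; the counts here are arbitrary):

```shell
mysqlslap -u root -p -h 192.168.100.254 \
    --concurrency=50 --iterations=5 \
    --auto-generate-sql --number-of-queries=1000
```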

 

 

 

From: Evgeniy Sudyr [mailto:eject.in...@gmail.com] 
Sent: Friday, August 07, 2009 10:54 AM
To: haproxy@formilux.org
Subject: HAProxy and MySQL

 

Hi, I'm trying to use HAProxy as round robin load balancer for 2 MySQL servers. 
I'm using mysqlslap for benchmarking.

At the moment I have found that the load balanced connection is the slowest.  I 
would like an explanation from HAProxy experts as to why.

There is my config:
cat /etc/haproxy/haproxy.cfg

global
maxconn 2000
pidfile /var/run/haproxy.pid
user _haproxy
group _haproxy

defaults
retries 3
maxconn 2000
contimeout  5000
clitimeout  5
srvtimeout  5

listen MySQL 192.168.100.254:3306
mode tcp
balance roundrobin
option  persist
server mysql1 192.168.100.131:3306
server mysql2 192.168.100.132:3306


There is my test results:

HA Proxy Load balancer QUERY1

mysqlslap -u root --password=password -a -h 192.168.100.254
Benchmark
Average number of seconds to run all queries: 0.125 seconds
Minimum number of seconds to run all queries: 0.125 seconds
Maximum number of seconds to run all queries: 0.125 seconds
Number of clients running queries: 1
Average number of queries per client: 0

HA Proxy Load balancer QUERY2

mysqlslap -u root --password=password -a -h 192.168.100.254
Benchmark
Average number of seconds to run all queries: 0.125 seconds
Minimum number of seconds to run all queries: 0.125 seconds
Maximum number of seconds to run all queries: 0.125 seconds
Number of clients running queries: 1
Average number of queries per client: 0

 MySQL SERVER1

mysqlslap -u root --password=password -a -h 192.168.100.131
Benchmark
Average number of seconds to run all queries: 0.015 seconds
Minimum number of seconds to run all queries: 0.015 seconds
Maximum number of seconds to run all queries: 0.015 seconds
Number of clients running queries: 1
Average number of queries per client: 0

 MySQL SERVER2
mysqlslap -u root --password=password -a -h 192.168.100.132
Benchmark
Average number of seconds to run all queries: 0.015 seconds
Minimum number of seconds to run all queries: 0.015 seconds
Maximum number of seconds to run all queries: 0.015 seconds
Number of clients running queries: 1
Average number of queries per client: 0


---
Thanks!
Evgeniy Sudyr




RE: Stats as counters for graphing

2009-08-07 Thread John Lauro
(ignore previos message that had this response replying to wrong message.)

 

I set mine to alert if the queue is ever non-zero, and for my graphs I just use
current sessions, plus total connections (graphed as delta / sec) for the
connection rate.

 

I assume you normally have a queue during busy times if you want to graph
it?
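A minimal sketch of the delta approach: poll the cumulative counter (e.g. the stot column of the CSV stats page, reachable by appending ;csv to the stats URI) on a fixed interval and divide the difference by the interval length. The sample numbers below are made up.

```shell
#!/bin/sh
# rate PREV CUR INTERVAL -> average events per second over the interval
rate() { echo $(( ($2 - $1) / $3 )); }

# e.g. stot was 120 on the previous poll and 300 now, 60s apart:
rate 120 300 60   # prints 3
```

Any graphing tool that stores the raw value as a counter (MRTG, rrdtool with a COUNTER data source) effectively does this division for you.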

 

 

From: Karl Pietri [mailto:k...@slideshare.com] 
Sent: Friday, August 07, 2009 1:51 PM
To: HAProxy
Subject: Stats as counters for graphing

 

Is there any way to get the # of queued connections for a backend as a
counter so that i can graph it properly?  as far as i can tell at any random
time i pull the stats it gives me the current number, which isn't that
useful, i need to get the # that happened since the last time i pulled the
stats.

 

If there isn't a way. feature request? :)

 

-Karl




RE: HAProxy and FreeBSD CARP failover

2009-07-23 Thread John Lauro
Only bind to the port so it doesn’t matter if additional addresses are added or 
removed.  
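In haproxy terms that means listening on 0.0.0.0 (addresses below are placeholders):

```
# binds port 80 on every local address, so haproxy can start
# even before CARP has brought the shared IP up on this node
listen web 0.0.0.0:80
    mode http
    balance roundrobin
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
```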

 

From: Daniel Gentleman [mailto:dani...@chegg.com] 
Sent: Thursday, July 23, 2009 6:13 PM
To: haproxy@formilux.org
Subject: HAProxy and FreeBSD CARP failover

 

Hi list. 
I'd like to set up a redundant HAProxy server using CARP failover in FreeBSD so 
the spare server will automatically snatch up the listen IP and balance out 
our server farm. I can get HAProxy configured, but it won't actually start 
unless the IP is already bound to the box. Suggestions?

(latest haproxy-devel from FreeBSD ports)

---Daniel




RE: An IE bug and HAproxy?

2009-05-29 Thread John Lauro
Are you certain there is no issue with the web server?  Years ago, prior to my
use of haproxy, I saw apache produce strange problems like this for IE (which
Firefox was able to cope with) when its access_log file reached 2GB.  On a
busy server that is easily reached in days or sooner, and typically the log is
only rotated weekly by default.  Have you tried bypassing haproxy to whichever
server(s) you are certain you are seeing the issue on?


 none of which can be seen between Stunnel and the browser. Of course
 the
 problem could also be Stunnel? But a further piece of data points to
 HAproxy: one of the web servers is installed in the same machine as
 HAproxy
 and that specific server doesn't seem to suffer from this same problem.
 If
 the culprit was Stunnel I should imagine that there couldn't be any
 differences between servers as all the data it gets is from HAproxy.
 These
 computers have firewall (Shorewall), but firewalls were temporary not
 in use
 when this data was collected. And as the capture shows, the ACKs sent by
 the machine B hosting the web server were received on the computer A
 hosting HAproxy & Stunnel.
 
 This is admittedly a difficult bug to trace as IE works OK most of the
 time
 and so the problem is difficult to reproduce. It is also possible that
 this
 has nothing to do with IE and it has only been a coindicence that IE is
 affected.
 
 I am soon leaving for a trip for but will join the discussion when I am
 back. You all have a nice early summer.




RE: reloading haproxy

2009-05-14 Thread John Lauro
This is what I use to reload:

haproxy -D -f /etc/lb-transparent-slave.cfg -sf $(pidof haproxy)

(Which uses pidof to look up the process id instead of reading it from a file,
but that shouldn't matter.)
The main problem is you are (-st) terminating (aborting) existing
connections instead of (-sf) finishing them.



 -Original Message-
 From: Adrian Moisey [mailto:adr...@careerjunction.co.za]
 Sent: Thursday, May 14, 2009 4:33 AM
 To: haproxy@formilux.org
 Subject: reloading haproxy
 
 Hi
 
 I am currently testing HAProxy for deployment in our live environment.
 I have HAProxy setup to load balance between 4 web servers and I'm
 using
 ab (apache bench) for testing throughput.
 
 I am trying to get the haproxy reloading working, but it doesn't seem
 to
 work.
 
 I start up a few ab's and then run:
 /usr/sbin/haproxy -f /etc/haproxy.cfg -D -p /var/run/haproxy.pid -st
 `cat /var/run/haproxy.pid`
 
 I see the new haproxy take over the old one, but my apache bench fails
 with the following error message:
 apr_socket_recv: Connection reset by peer (104)
 
 
 I thought that the new haproxy would take over the old one without any
 issues, but that doesn't seem to be the case.
 
 Does anyone know how do reload haproxy without affecting the client?
 --
 Adrian Moisey
 Acting Systems Designer | CareerJunction | Better jobs. More often.
 Web: www.careerjunction.co.za | Email: adr...@careerjunction.co.za
 Phone: +27 21 818 8621 | Mobile: +27 82 858 7830 | Fax: +27 21 818 8855
 




RE: Transparent proxy

2009-05-11 Thread John Lauro
It's a little different config than I have, but it looks ok to me.

 

What's haproxy -vv give?

I have:

[r...@haf1 etc]# haproxy -vv
HA-Proxy version 1.3.15.7 2008/12/04
Copyright 2000-2008 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = linux26
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g
  OPTIONS = USE_LINUX_TPROXY=1

(I know, I am a little behind, but if it's not broke.)

 

When you say "haproxy says 503", I assume it doesn't actually say that, but
that's what a web browser gets back from it?

 

I assume the web servers have the haproxy's private IP address as their
default route?  If they are going to some other device as a NAT gateway,
that will not work.

Do they show a SYN_RECV or ESTABLISHED connection from the public client
trying to connect?

 

 

From: Carlo Granisso [mailto:c.grani...@dnshosting.it] 
Sent: Monday, May 11, 2009 7:06 AM
To: haproxy@formilux.org
Subject: Transparent proxy

 

Hello everybody, I have a problem with haproxy (1.3.17) and kernel 2.6.29

 

I have successfully recompiled my kernel with TPROXY modules and installed
haproxy (compiled from source with tproxy option enabled) and installed
iptables 1.4.3 (that have tproxy patch).

Now I can't use the transparent proxy function: if I leave this line in
haproxy.cfg: source 0.0.0.0 usesrc clientip, haproxy says 503 - Service
unavailable.

If I comment out the line, everything work fine (without transparent proxy).

 

My situation:

 

haproxy with two ethernet device: first one for public IP, sceond one for
private IP (192.168.XX.XX)

two web server with one ethernet for each one connected to my private
network.

 

 

 

Have you got ideas or you can provide me examples?

 

 

Thanks,

 

 

Carlo




RE: weights

2009-03-18 Thread John Lauro
That would be nice, but I don't think so (at least not completely).  Using 
"balance leastconn" will give the faster servers a little more, as they will 
clear their connections quicker.

 

From: Sihem [mailto:stfle...@yahoo.fr] 
Sent: Wednesday, March 18, 2009 6:26 AM
To: haproxy@formilux.org
Subject: weights

 

Hello!
I would like to know whether it is possible to dynamically change the weight of 
a server depending on its response time. 
Thanks!
sihem

 



RE: The gap between ``Total'' and ``LbTot'' in stats page

2009-03-17 Thread John Lauro
Mine don't appear to have that much difference.  Are any of the servers
down, or maybe reaching their session limits?  What do your retr and redis
columns look like?

 

From: Sun Yijiang [mailto:sunyiji...@gmail.com] 
Sent: Tuesday, March 17, 2009 3:18 AM
To: kuan...@mail.51.com
Cc: haproxy@formilux.org
Subject: Re: The gap between ``Total'' and ``LbTot'' in stats page

 

Yeah, that's clear, thanks.  I just wonder why ``LbTot'' is much smaller
than ``Total''.

2009/3/17 FinalBSD final...@gmail.com

check it here: http://haproxy.1wt.eu/download/1.3/doc/configuration.txt

30. lbtot: total number of times a server was selected





On Tue, Mar 17, 2009 at 1:56 PM, Sun Yijiang sunyiji...@gmail.com wrote:

Hi you guys,

I noticed that there's a huge gap between ``Total'' and ``LbTot'' numbers in
the stats page.  LbTot is only about 25% of Total sessions for backend
server.  Is this the normal case?  What do they mean exactly?  I've read the
source code for a while but could not find a clear answer.

Thanks in advance.


Steve

 

 



RE: Multiple Proxies

2009-03-17 Thread John Lauro
Not built into Haproxy, but you can use heartbeat or keepalived along with
haproxy for IP takeover on a pair of physical boxes (or VMs).
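A minimal keepalived sketch of that IP takeover (interface, router id, and the shared address are placeholders):

```
vrrp_instance VI_1 {
    state MASTER            # BACKUP on the second box
    interface eth0
    virtual_router_id 51
    priority 100            # use a lower priority on the backup
    virtual_ipaddress {
        192.168.0.100
    }
}
```

Clients point at 192.168.0.100 and follow the address to whichever box currently holds it.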

 

From: Scott Pinhorne [mailto:scott.pinho...@voxit.co.uk] 
Sent: Tuesday, March 17, 2009 10:52 AM
To: haproxy@formilux.org
Subject: Multiple Proxies

 

Hi All

 

I am using haproxy to load balance/failover on a  couple of my dev HTTP
servers and it works really well.

I would like to introduce hardware redundancy for the haproxy server, is
this possible with the software?

 

Best Regards

Scott Pinhorne

 

Tel: 0845 862 0371

 


 

http://www.voxit.co.uk

 


RE: Multiple Proxies

2009-03-17 Thread John Lauro
You need to explain a little more, as I am not understanding something.
Perhaps what you mean by VIP?

If they share the same single VIP at the same time, then why would you use
round-robin DNS?  Round-robin is for multiple IP addresses...?

Also, if you do a virtual IP like Microsoft Windows does for its multicast
load balancing, that is just plain nasty to your network infrastructure if
you have anything besides those servers on the same subnet, and IMHO it
really doesn't scale well...


If you meant a different VIP instead of one bound to each server, I could
understand that.  However, 50% of the clients will feel the hit when first
connecting if a server is down.



 -Original Message-
 From: news [mailto:n...@ger.gmane.org] On Behalf Of Jan-Frode Myklebust
 Sent: Tuesday, March 17, 2009 2:53 PM
 To: haproxy@formilux.org
 Subject: Re: Multiple Proxies
 
 I would use one VIP bound to each server, and use round-robin DNS to
 distribute the load over them. And with cookies for pinning it
 shouldn't
 matter to the clients which VIP it reaches.
 
 
-jf





OT: BGP

2009-03-15 Thread John Lauro
Sorry for the off-topic question, so feel free to reply directly.  Can
anyone recommend a BGP package for linux?  I have little experience with
BGP, and on the plus side I mainly just need to advertise a net (so a
simple default route for outgoing is all I need in the local routing table).

 

There appears to be several choices, such as bird, zebra, vyatta, etc.  and
I have no idea which might be best.  Any recommendations?

 

 

Thanks for any recommendations.



RE: load balancer and HA

2009-03-06 Thread John Lauro
 I still don't understand why people stick to heartbeat for things
 as simple as moving an IP address. Heartbeat is more of a clustering
 solution, with abilities to perform complex tasks.
 
 When it comes to just move an IP address between two machines an do
 nothing else, the VRRP protocol is really better. It's what is
 implemented in keepalived. Simple, efficient and very reliable.

One reason, heartbeat is standard in many distributions (ie: RHEL, Centos)
and vrrp and keepalived are not.  It might be overkill for just IP
addresses, but being supported in the base OS is a plus that shouldn't be
discounted.  If you have to support heartbeat on other servers, using
heartbeat for places you have to share resources is easier than using vrrp
for some and heartbeat on others.






RE: measuring haproxy performance impact

2009-03-06 Thread John Lauro
   - net.netfilter.nf_conntrack_max = 265535
   - net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
 = this proves that netfilter is indeed running on this machine
and might be responsible for session drops. 265k sessions is
very low for the large time_wait. It limits to about 2k
sessions/s, including local connections on loopback, etc...
 
 You should then increase nf_conntrack_max and nf_conntrack_buckets
 to about nf_conntrack_max/16, and reduce
 nf_conntrack_tcp_timeout_time_wait
 to about 30 seconds.
 

Minor nit...
He has:  net.netfilter.nf_conntrack_count = 0
Which, if I am not mistaken, indicates connection tracking is in the
kernel but not being used (no firewall rules are triggering it).
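Following the quoted advice, the tuning would look something like this in /etc/sysctl.conf (values are illustrative; size the table to your RAM, and note the bucket count is typically a module parameter rather than a sysctl):

```
net.netfilter.nf_conntrack_max = 1048576
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 30
```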






RE: Logging & rsyslogd

2009-02-16 Thread John Lauro
Put a - in front of the path in syslogd.conf.

Ie:

local0.*    -/mnt/log/haproxy_0.log
local1.*    -/mnt/log/haproxy_1.log
local2.*    -/mnt/log/haproxy_2.log
local3.*    -/mnt/log/haproxy_3.log
local4.*    -/mnt/log/haproxy_4.log
local5.*    -/mnt/log/haproxy_5.log

 

That will help a lot with your load.  Without the -, syslog is asked to sync
the disk after every entry.

 

 

Also, levels are incremental.  Debug includes info, info includes crit,
notice includes warning, etc..

 

So, just list one level (ie: info) in your Haproxy config, and if you want
to break it out by level then do it in your syslogd.conf.
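So on the haproxy side a single facility and level is enough, something like:

```
global
    # everything at "info" and above goes to local0; split it out
    # per level in syslogd.conf if desired
    log 127.0.0.1 local0 info
```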

 

 

 

 

From: Mayur B [mailto:may...@gmail.com] 
Sent: Monday, February 16, 2009 5:14 PM
To: haproxy@formilux.org
Subject: Logging & rsyslogd

 

Hello,

After enabling logging in rsyslogd and haproxy.cfg using the following
lines, our CPU utilization has gone through the roof. It used to be at 5-10%
utilization, and it is now above 95% most of the time. 

rsyslogd.conf:
# Save HA-Proxy logs
local0.*    /mnt/log/haproxy_0.log
local1.*    /mnt/log/haproxy_1.log
local2.*    /mnt/log/haproxy_2.log
local3.*    /mnt/log/haproxy_3.log
local4.*    /mnt/log/haproxy_4.log
local5.*    /mnt/log/haproxy_5.log






Client IPs logging and/or transparent

2009-01-30 Thread John Lauro
Hello,

 

Running mode tcp in case that makes a difference for any comments, as I know
there are others options for http.

 

I need to preserve the IP address of the clients for auditing and be able to
associate it with a session.  One problem: the client IP and port are logged,
but only the final server is logged, not the source port of the outgoing
connection.  In theory, assuming ntp is in sync, I should be able to tie the
logs together if I had the port number used for the outgoing connection.  Is
there some way to turn this on, or am I just missing it from the logged line?

 

The other option appears to be to set haproxy up to act transparently.  This
appears to be rather involved and sparse on details.  Based on examples I
found for using squid with it, it appears to be more involved than just
updating the kernel.  If anyone can post some hints on their setup with
haproxy (sample config files and sample iptables rules, or are they not
required?) that would be great.  If there is a yum repository with a patched
kernel and the other bits ready to install, that would be even better.

 

In some ways it looks rather messy to setup and support, but IP tracking is
important.

 

 

 



haproxy for mysql?

2009-01-28 Thread John Lauro
Hello,

 

I am considering using haproxy with mysql.  Basically one server, and one
backup server.  Has anyone used haproxy with mysql?  What were your
experiences (good and bad)?  What values do you use for timeouts, etc.?  

 

Thank you.



RE: Geographic loadbalancing

2009-01-26 Thread John Lauro
I am not sure it would be called a bad idea, just not an effective one...
don't expect it to help much when an ISP is down for only an hour.  Most
clients do not honor low TTL values, especially if they are revisiting the
site without closing the browser.


I would like to hear from anyone using anycast with TCP.  What if two servers
are equal distance?  Wouldn't you have a fair chance of 50% of packets going
each way, killing stateful TCP connections?  The more servers out there
advertising the same IP, the more likely you will have cases of equal
distance...  or will packets typically go to the same server each time (or
at least for several minutes) even if the costs are the same?



-Original Message-
From: tayssir.j...@googlemail.com [mailto:tayssir.j...@googlemail.com] On
Behalf Of Tayssir John Gabbour
Sent: Monday, January 26, 2009 10:39 AM
To: haproxy@formilux.org
Subject: Geographic loadbalancing

Hi,

I'm considering geographically loadbalancing a website (where people
order stuff) in case our ISP has a big network problem for an hour. Is
this within HAProxy's scope?

A bit more context:

I'm told that anycast is the natural solution, but I find little on
the net (or in books) on this. (Though there's more info on geodns,
which I'm told is like a poor man's anycast.)

I thought I could use a DNS server which polls server health (only
serving addresses that are up), but I'm told this is a bad idea for
reasons I don't yet grasp.

Any ideas?


Thanks!
Tayssir