Re: HAProxy-1.8 sometimes sends a shorter html when using multithread function

2021-05-24 Thread Ryan O'Hara
On Tue, May 18, 2021 at 12:00 PM Willy Tarreau  wrote:

> Hi Ryan,
>
> On Tue, May 18, 2021 at 10:54:11AM -0500, Ryan O'Hara wrote:
> > > > I confirmed haproxy's log message corresponded to the
> > > > shorter html, the following line is one of them.
> > > >
> > > > 2021-03-23T15:54:46.869626+09:00 lb01 [err] haproxy[703957]:
> > > > 192.168.1.73:60572 [23/Mar/2021:15:54:46.868] vs_http01
> > > rs_http01web/web01
> > > > 0/0/0/0/0 200 2896 - - SD-- 12/12/7/2/0 0/0 "GET
> > > > /content01.html?x=170026897 HTTP/1.1"
> > >
> >
> > This is exactly the same problem as I reported on the mailing list a
> couple
> > weeks ago. I accidentally replied off-list to Kazuhiro but will
> continue
> > the conversation here.
>
> Excellent, that will be one less issue to chase!
>
> > > > So I commented out "fdtab[fd].ev &= FD_POLL_STICKY;"
> > > > from both haproxy-1.8.25 and haproxy-1.8.30,
> > > > then the behavior is resolved.
> > >
> > > This is very strange. I could almost have understood the opposite, i.e.
> > > remove an activity report before leaving so that we don't wake up, but
> > > here *not* removing the flags indicates that we're leaving the FD
> > > reports for another call.
> > >
> > > > I don't know why this commit resolves the behavior,
> > > > I just tried and ran the test.
> > >
> > > What I *suspect* is happening is the following:
> > >
> > >   - the server closes with a TCP reset
> > >   - sometimes for an unknown reason we do not process the event
> > > before leaving this function
> > >   - we then flush the activity flags
> > >   - the reset flag is still there, reported as POLLERR, forcing
> > > an immediate subsequent wakeup
> > >   - the I/O handler sees POLLERR without POLLIN and concludes this is
> > > a hard error and doesn't even bother to try to read, resulting in
> > > the loss of the pending segments.
> > >
> >
> > In my original message, I included a portion of the tcpdump and the RST
> > packet is being sent by haproxy to the server. I never see a TCP reset
> > from the server itself.
>
> Ah yes, seeing it now. Thanks, this will help!
>
> > Under wireshark, I can see that the HTTP response is a total of 3
> segments,
> > and as far as I can tell all three segments were received by haproxy.
>
> In fact what you know from a trace is that they're received by the NIC,
> you can know they're received by the TCP stack when you see them ACKed,
> and you know they're received by haproxy if you're seeing haproxy pass
> them on the other side. I mean, in most cases something that arrives
> to the stack will reach haproxy, but there are some situations where this
> will not happen, such as if haproxy closes before, or if some errors
> are detected early and prevent it from reading. The difference is subtle
> but it explains how/why some error flags may be reported indicating an
> error at the transport layer, with the transport layer including the
> lower layers of haproxy itself.
>

Ah yes. I should have recognized this.
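
A minimal sketch of the suspected failure mode above, assuming a plain epoll
loop on non-blocking sockets (hypothetical names, not haproxy's actual fd
handler): the robust pattern is to drain readable data before treating an
error report as fatal, because the kernel keeps already-received segments in
the socket buffer even after an RST arrives.

#include <errno.h>
#include <stdint.h>
#include <sys/epoll.h>
#include <unistd.h>

/* Drain pending data before honoring EPOLLERR. If the peer sent its last
 * segments immediately followed by an RST, those bytes still sit in the
 * kernel receive buffer; aborting on EPOLLERR without reading first is
 * what loses the tail of the response. */
static int io_handler(int fd, uint32_t events, char *buf, size_t len)
{
    if (events & (EPOLLIN | EPOLLERR | EPOLLHUP)) {
        ssize_t ret;

        while ((ret = read(fd, buf, len)) > 0)
            ; /* forward these bytes downstream */
        if (ret == 0)
            return 0;  /* clean EOF: everything was delivered */
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return -1; /* report the hard error only now */
    }
    return 1; /* nothing fatal: keep polling */
}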



> > Pardon if this next bit doesn't make sense, but the third and final
> segment
> > is shown under wireshark as the HTTP response itself. In other words:
> >
> > Segment #1: 2896 bytes
> > Segment #2: 1448 bytes
> > Segment #3: 1059 bytes <- this final frame also includes the HTTP
> response
> > header
>
> No that's something common and weird about wireshark/tshark, it logs the
> status code on the last segment. Someone once told me why he found that
> convenient but I since forgot, considering all the cases where I find this
> misleading! Otherwise just use tcpdump, it doesn't try to be smart and to
> report lies, it will not cheat on you.
>
> So in your trace we can see that the stack ACKs receipt of the first two
> segments (2896 bytes of data), then the 3rd segment (1448). Then haproxy
> closes "cleanly" (something it shouldn't do on this side, at the risk of
> leaving TIME_WAIT sockets). Maybe it found an HTTP close in the response,
> but regardless it should not close like this.
>

It definitely did have connection close, but I agree it should not behave
this way.



> I'm not sure I understand this "HTTP 1125" response, whether it's the last
> bytes of response or a late retransmit for any reason, and as tshark is
> still unable to display sequence and ack numbers on *all* packets, it's
> impossible to reliably know what this cor

Re: HAProxy-1.8 sometimes sends a shorter html when using multithread function

2021-05-18 Thread Ryan O'Hara
On Tue, May 18, 2021 at 5:21 AM Willy Tarreau  wrote:

> Hello,
>
> On Mon, May 17, 2021 at 09:47:10AM +0900, Kazuhiro Takenaka wrote:
> > Hello
> >
> > This is my first post to this mailing list.
> > I am not good at using English, so feel free to ask me
> > if my text is hard to understand.
>
> Rest assured that the majority of participants here (me included) do
> not have English as their native language, so you're not special on
> this point. And I had absolutely no problem understanding all your
> explanations, your English is better than you seem to think :-)
>
> > I noticed haproxy-1.8 sometimes sent incomplete htmls to
> > clients when running haproxy with the attached haproxy.cfg
> > that uses multithread function.
> >
> > # I also attached content01.html and check.html that
> > # are deployed on backend servers.
> > # content01.html is used in the confirmation test
> > # described below, check.html is for health check
> > # purpose.
> >
> > In this case, the client receives a shorter html.
> >
> > I confirmed haproxy's log message corresponded to the
> > shorter html, the following line is one of them.
> >
> > 2021-03-23T15:54:46.869626+09:00 lb01 [err] haproxy[703957]:
> > 192.168.1.73:60572 [23/Mar/2021:15:54:46.868] vs_http01
> rs_http01web/web01
> > 0/0/0/0/0 200 2896 - - SD-- 12/12/7/2/0 0/0 "GET
> > /content01.html?x=170026897 HTTP/1.1"
>

This is exactly the same problem as I reported on the mailing list a couple
weeks ago. I accidentally replied to off-list to Kazuhiro but will continue
the conversation here.

So it indicates an immediate close or error that is being detected on
> the server side just after two TCP segments. This can have some
> importance because depending how the server's TCP stack is tuned, it
> is very likely that it will only send two segments before growing its
> window, leaving a short blank period after them which can allow haproxy
> to detect an error.
>

This is interesting.


> (...)
> > The following list is the frequency of abnormal access
> > when a total of 10 million accesses are made in 20 parallel
> > curl processes.
> >
> > status_code bytes_read occurrence
> > 200 4344   1
> => exactly 3 segments
> > 200 2896   9
> => exactly 2 segments
> > 200 1448   6
> => exactly 1 segment
>
> It makes me think about a server which closes with a TCP reset after
> the data. It then becomes extremely timing-dependent and could explain
> why the size is a multiple of the number of segments.
>
> > 408 2162
> >
> > The following line is the log messages
> > in the case of 408.
> >
> > 2021-03-23T16:02:42.444084+09:00 lb01 [err] haproxy[703957]:
> > 192.168.1.73:37052 [23/Mar/2021:16:02:32.472] vs_http01
> vs_http01/
> > -1/-1/-1/-1/1 408 212 - - cR-- 14/14/0/0/0 0/0 ""
>
> So the client's request didn't make it into haproxy.
>
> > When I first met this behavior, I used haproxy-1.8.25 shipped
> > with RHEL8.3. So I obtained haproxy-1.8.30 from http://git.haproxy.org/
> > and built it, ran the test and got the result of the same sort.
> >
> > This behavior didn't happen without using multithread function.
>
> So that definitely indicates a race condition somewhere.
>

In my case it was also reported that nbthread must be set in order to
trigger the HTTP response status 200 with SD-- termination state, along
with a shortened response size. Everything about this bug seems exactly
in line with what I am seeing.


> > Next, I tried on haproxy-2.0.0 and confirmed it ran normally
> > without this behavior.
> >
> > Then I picked up several versions of haproxy
> > between 1.8.0 and 2.0.0, built them, tested them,
> > and found the commit below resolved this behavior.
> >
> > ===
> > commit 524344b4e0434b86d83869ef051f98d24505c08f
> > Author: Olivier Houchard 
> > Date:   Wed Sep 12 17:12:53 2018 +0200
> >
> > MEDIUM: connections: Don't reset the polling flags in
> conn_fd_handler().
> >
> > Resetting the polling flags at the end of conn_fd_handler()
> shouldn't be
> > needed anymore, and it will create problem when we won't handle
> > send/recv
> > from conn_fd_handler() anymore.
> >
> > diff --git a/src/connection.c b/src/connection.c
> > index ab32567b..e303f2c3 100644
> > --- a/src/connection.c
> > +++ b/src/connection.c
> > @@ -203,9 +203,6 @@ void conn_fd_handler(int fd)
> > conn->mux->wake(conn) < 0)
> > return;
> >
> > -   /* remove the events before leaving */
> > -   fdtab[fd].ev &= FD_POLL_STICKY;
> > -
> > /* commit polling changes */
> > conn->flags &= ~CO_FL_WILL_UPDATE;
> > conn_cond_update_polling(conn);
> > ===
> >
> > So I commented out "fdtab[fd].ev &= FD_POLL_STICKY;"
> > from both haproxy-1.8.25 and haproxy-1.8.30,
> > then the behavior is resolved.
>
> This is very strange. I could almost have understood the opposite, i.e.
> remove an 

Random SD termination state

2021-05-03 Thread Ryan O'Hara
For the past few weeks I have been trying to understand a problem that was
brought to my attention when running a simple ab test through haproxy to a
single Apache HTTP server. Attached are the config file and excerpts of the
tcpdump.

This is a simple setup with 3 VMs:

- Client:  10.15.85.151
- HAProxy: 10.15.85.152
- Server:  10.15.85.153

The HAProxy node is running haproxy-1.8.30, but the issue was originally
reported with haproxy-1.8.27. Please note that I did not write this config
file. I am using what was provided to me as a way to reproduce the problem
locally.

With httpd running on the server, a 5kb file of zeros is created on said
server:

# dd if=/dev/zero of=/var/www/html/5kb_dummy.html bs=1k count=5

On the client, run 'ab' to perform several HTTP requests:

# ab -n 10 -c 1 http://10.15.85.152/5kb_dummy.html

Occasionally I will see in the haproxy logs that a session was terminated
with "SD" and a HTTP status of 200. Also, these responses seem truncated,
which you might expect since the termination state is "SD". This is a
static, 5kb file so we would expect to see log entries that have bytes_read
(with headers) to be 5403. Here is a log entry:

Apr 27 13:18:36 localhost haproxy[28516]: 10.15.85.151:39308
[27/Apr/2021:13:18:36.277] mesa mesa-http/mesa-virt-13 0/0/6/11/17 200 4344
- - SD-- 2000/2000/1998/1997/0 0/0 "GET /5kb_dummy.html HTTP/1.0"
10.15.85.152:59600

Here we have response code 200, termination state "SD" and bytes_read 4344.
The timing values seem good. Note the custom format here just adds the
backend source address and port, which makes it easier to find interesting
packets in the pcap.
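
For reference, a sketch of such a format line, assuming haproxy's default HTTP
log format with the backend connection appended (%bi/%bp are the backend
source address and port; the exact string used in this setup isn't shown):

log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %{+Q}r %bi:%bp"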

Using tcpdump on the haproxy node and analyzing with wireshark, I filtered
with:

(ip.addr == 10.15.85.151 && tcp.port == 39308) || (ip.addr == 10.15.85.152
&& tcp.port == 59600)
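
The equivalent capture on the haproxy node could be taken with something like
the following (the interface name is an assumption):

# tcpdump -i eth0 -s 0 -w haproxy.pcap 'host 10.15.85.151 or host 10.15.85.153'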

Around time 13:18:36.277 is frame 1005899. This is where haproxy establishes a
connection with the backend, as expected. Everything seems normal until the
server sends the HTTP response, which seems to cause haproxy to send a RST
to the server. That seems odd. Then I am not sure what happens, but this
seems to be the case for every packet capture I've analyzed in which the SD
problem occurred. How I read the capture, just the interesting frames:

1003310 - client connects to haproxy
1003634 - client sends HTTP GET request to haproxy
1005899 - haproxy connects to server
1006909 - haproxy sends HTTP GET request to server
1008524 - haproxy sends [FIN,ACK] to server

I am assuming the FIN+ACK is haproxy telling the server it is finished sending
(ie. half-closed). Continuing:

1008541 - server sends HTTP response to haproxy

If I drill down into this frame, I can see that there are 3 reassembled TCP
segments for a total of 5403 bytes, which is correct.

1008543 - haproxy sends [RST] to the server

Then it seems the connection between the client and haproxy is closed by
both ends and we never see a response sent to the client. The way I
understand the log entry is that bytes_read is the number of bytes sent to
the client. What happened here?
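
For what it's worth, the half-close sequence above can be sketched with plain
sockets (a hypothetical function, not haproxy internals). shutdown(fd,
SHUT_WR) sends the FIN while leaving the read side open; closing the socket
while unread data remains in the receive buffer is what makes the TCP stack
emit an RST instead of a normal FIN, matching frame 1008543:

#include <sys/socket.h>
#include <unistd.h>

/* A client that is done sending but still expects a response. */
static void half_close_then_read(int fd, char *buf, size_t len)
{
    ssize_t r;

    shutdown(fd, SHUT_WR);       /* frame 1008524: FIN,ACK "done sending" */
    while ((r = read(fd, buf, len)) > 0)
        ;                        /* frame 1008541: response segments */
    close(fd);                   /* closing *before* reading everything
                                  * would instead trigger the RST */
}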

Now would be a good time to point out a few other observations:

* Every time I see a session terminated with SD, the number of active
connections and frontend connections is 2000, which also happens to be
maxconn for this config. Not sure if this is significant or not.
* Using "show errors" with the stats socket shows nothing.
* A colleague of mine can recreate this much more frequently than I can,
despite the fact that we believe we have identical test environments. This
does not happen often for me. I may have to run 'ab' on the client 10+
times before I trigger this issue.

Other things I've tried:

* Running strace on haproxy. Not a good idea. The performance penalty is so
huge that it is impossible to reproduce the issue.
* Running haproxy in debug mode. Somewhat helpful but everything appeared
normal.
* Attempted to debug httpd. I have no experience with this and found
nothing useful.

Also, I was informed that this only happens when nbthread is set (8 in this
case). While that seems to be true in my own testing, I don't think it is a
factor. Removing 'nbthread 8' from this configuration just slows everything
down so much that httpd isn't being hit with as many requests per second as
it would when nbthread is set. I could be wrong about this. I definitely
need to spend some time looking into request rate differences in this case.

Appreciate any comments and/or suggestions. I am happy to provide more
information if needed.

Ryan
1003310   9.446910 10.15.85.151 → 10.15.85.152 TCP 74 [TCP Port numbers reused] 
39308 → 80 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=626203413 
TSecr=0 WS=128
1003311   9.446917 10.15.85.152 → 10.15.85.151 TCP 74 80 → 39308 [SYN, ACK] 
Seq=0 Ack=1 Win=28960 Len=0 MSS=1460 SACK_PERM=1 TSval=2614875547 
TSecr=626203413 WS=128
1003329   9.447032 10.15.85.151 → 10.15.85.152 TCP 66 39308 → 80 [ACK] Seq=1 
Ack=1 Win=29312 Len=0 TSval=626203413 

Re: [ANNOUNCE] haproxy-2.2.0

2020-07-16 Thread Ryan O'Hara
On Fri, Jul 10, 2020 at 3:26 PM Илья Шипицин  wrote:

> instead of disabling Lua support, is it possible to build against Lua-5.3 ?
> I recall there's Lua-5.3 on Fedora-33
>

Right. I saw the same message, but it does not work. I sent a message to
the Lua maintainer for Fedora last Friday and he sent a patch that I will
pass along in a new thread. I feel bad for hijacking this 2.2.0
announcement thread! Stay tuned.

Ryan


Re: [ANNOUNCE] haproxy-2.2.0

2020-07-10 Thread Ryan O'Hara
On Thu, Jul 9, 2020 at 2:24 PM Tim Düsterhus  wrote:

> Ryan,
>
> Am 09.07.20 um 20:34 schrieb Ryan O'Hara:
> > I'm currently packaging this for Fedora. It seems to build just fine on
> > Fedora 32 and rawhide. Are there any new build options or dependencies to
> be
> > aware of? I'm looking at the Makefile now and nothing jumps out at me.
> That
> > said, I am totally capable of missing something.
> >
>
> I've just run `git diff v2.2-dev0..v2.3-dev0 -- Makefile`. The only
> thing I'm seeing that might be of interest to you is HLUA_PREPEND_PATH /
> HLUA_PREPEND_CPATH if you plan to ship any HAProxy specific Lua
> libraries that don't make sense in the global Lua library path.
>

Good thing we're talking about Lua, because I just noticed that rawhide
(and Fedora 33) are using Lua 5.4 and that will not work with haproxy. I'm
investigating. Worst case scenario ... I will have to disable Lua support
in F33/rawhide.

Ryan


Re: [ANNOUNCE] haproxy-2.2.0

2020-07-09 Thread Ryan O'Hara
On Tue, Jul 7, 2020 at 12:41 PM Willy Tarreau  wrote:

> Hi,
>
> HAProxy 2.2.0 was released on 2020/07/07. It added 24 new commits
> after version 2.2-dev12.
>

This is great. Thank you to all who contributed to this release.

I'm currently packaging this for Fedora. It seems to build just fine on
Fedora 32 and rawhide. Are there any new build options or dependencies to be
aware of? I'm looking at the Makefile now and nothing jumps out at me. That
said, I am totally capable of missing something.

Ryan


Re: [PATCH] BUG/MINOR: systemd: Wait for network to be online

2020-06-15 Thread Ryan O'Hara
I posted this patch to start some discussion here. I'm not the first to
notice this problem but I was, until now, hesitant to change the systemd
service file. The reason for this was that waiting for
network-online.target could delay boot time. Please see systemd network
target docs here [1].

As stated in the commit message, the common reason I was asked to change
this in RHEL/Fedora was due to attempting to bind to a non-existent IP
address, but that can be overcome with 'option transparent'. However I
recently was notified that DNS resolution will fail when haproxy starts if
the network is not fully online. Thus I suggested the patch.

I also found this discussion [2], but noticed that the upstream service
file had not been modified.

Cheers,
Ryan

[1] https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/
[2] https://discourse.haproxy.org/t/haproxy-fails-on-restart/3469/10
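
For users who want this behavior before a patched package lands, the same
change can be applied locally as a standard systemd drop-in (a sketch; the
drop-in file name is arbitrary):

# /etc/systemd/system/haproxy.service.d/wait-online.conf
[Unit]
After=network-online.target
Wants=network-online.target

followed by 'systemctl daemon-reload' so the override takes effect.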

On Mon, Jun 15, 2020 at 12:03 PM Ryan O'Hara  wrote:

> Change systemd service file to wait for network to be completely
> online. This solves two problems:
>
> If haproxy is configured to bind to IP address(es) that are not yet
> assigned, haproxy would previously fail. The workaround is to use
> "option transparent".
>
> If haproxy is configured to use a resolver to resolve servers via DNS,
> haproxy would previously fail due to the fact that the network is not
> fully online yet. This is the most compelling reason for this patch.
>
> Signed-off-by: Ryan O'Hara 
> ---
>  contrib/systemd/haproxy.service.in | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/contrib/systemd/haproxy.service.in b/contrib/systemd/
> haproxy.service.in
> index 9b7c3d1bb..05fc59579 100644
> --- a/contrib/systemd/haproxy.service.in
> +++ b/contrib/systemd/haproxy.service.in
> @@ -1,6 +1,7 @@
>  [Unit]
>  Description=HAProxy Load Balancer
> -After=network.target
> +After=network-online.target
> +Wants=network-online.target
>
>  [Service]
>  EnvironmentFile=-/etc/default/haproxy
> --
> 2.25.1
>
>
>


[PATCH] BUG/MINOR: systemd: Wait for network to be online

2020-06-15 Thread Ryan O'Hara
Change systemd service file to wait for network to be completely
online. This solves two problems:

If haproxy is configured to bind to IP address(es) that are not yet
assigned, haproxy would previously fail. The workaround is to use
"option transparent".

If haproxy is configured to use a resolver to resolve servers via DNS,
haproxy would previously fail due to the fact that the network is not
fully online yet. This is the most compelling reason for this patch.

Signed-off-by: Ryan O'Hara 
---
 contrib/systemd/haproxy.service.in | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/contrib/systemd/haproxy.service.in 
b/contrib/systemd/haproxy.service.in
index 9b7c3d1bb..05fc59579 100644
--- a/contrib/systemd/haproxy.service.in
+++ b/contrib/systemd/haproxy.service.in
@@ -1,6 +1,7 @@
 [Unit]
 Description=HAProxy Load Balancer
-After=network.target
+After=network-online.target
+Wants=network-online.target
 
 [Service]
 EnvironmentFile=-/etc/default/haproxy
-- 
2.25.1




Re: HAProxy 2.0.10 and 2.1.0 RPM's

2019-12-16 Thread Ryan O'Hara
On Tue, Nov 26, 2019 at 9:20 PM Willy Tarreau  wrote:

>
> Indeed that looks good. We'll need to include Ryan in this discussion,
> he's the maintainer of the official RPMs for RHEL. I'm purposely not CCing
> him as I know he's very busy this week, but I sense that we're starting to
> see the light at the end of the tunnel here.
>

Indeed, I have been busy. Then on vacation. Then back in time for another
deadline. I'm always open to hear feedback on the RHEL and Fedora packages
(I maintain Fedora packages as well as RHEL). Thanks to Julien for creating
those copr builds. That makes my life so much easier!

A couple items about Fedora/RHEL packages. For Fedora, I try to be
responsive for CVEs, minor release rebases, and enabling features (eg.
Prometheus support). Lesser things like minor spec file changes usually get
a much lower priority. Note that we do not rebase to a new, major release
within a stable release. For example, if Fedora 31 has haproxy-2.0.x, it
will never have haproxy-2.1.x. That is what copr is for. For RHEL, it is
even more restrictive. I don't make the rules, just wanted to explain this
since I get a lot of email asking about these things. Cheers.

Ryan


Re: HAProxy 2.0.10 and 2.1.0 RPM's

2019-12-16 Thread Ryan O'Hara
On Tue, Nov 26, 2019 at 2:40 PM Russell Eason  wrote:

> Hello,
>
> Fedora upstream added it
> https://src.fedoraproject.org/rpms/haproxy/c/45c57ba71174f308a5f59569bac0598bb31ef767
> , and can be seen as far back as F24 here
> https://src.fedoraproject.org/rpms/haproxy/blob/f24/f/haproxy.spec . LUA
> support is in the RHEL 8 version of HAProxy, but not in 7 (yet?).
>

Sorry for the late reply. I believe the reason that RHEL7 lacks LUA support
is because the required version of LUA is not available in RHEL7. Enabling
LUA support in haproxy at build time is easy, but if the underlying bits
aren't available, there is nothing I can do. Cheers.

Ryan


Re: haproxy-1.8 in Fedora

2018-01-05 Thread Ryan O'Hara
On Fri, Jan 5, 2018 at 3:12 PM, Aleksandar Lazic <al-hapr...@none.at> wrote:

> Hi Ryan.
>
> -- Original Message --
> From: "Ryan O'Hara" <roh...@redhat.com>
> To: haproxy@formilux.org
> Sent: 05.01.2018 17:19:15
> Subject: haproxy-1.8 in Fedora
>
> Just wanted to inform Fedora users that haproxy-1.8.3 is now in the master
>> branch and built for Rawhide. I will not be updating haproxy to 1.8 in
>> current stable releases of Fedora since I received some complaints about
>> doing major updates (eg. 1.6 to 1.7) in previous stable releases. That
>> said, the source rpm will build on Fedora 27. If there is enough interest,
>> I can build haproxy-1.8 in copr and provide a repository for current stable
>> Fedora releases.
>>
> I don't know what 'copr' is, but how about adding haproxy 1.8 into the
> software collection, similar to nginx 1.8 and apache httpd 2.4?
>
> The customer then is able to use haproxy 1.8 with the software collection
> subscription.


Which software collection are you referring to? Fedora? CentOS? RHEL?
Either way, it is something that we have discussed and are planning to do
for the next release of RHSCL, but we've not had any requests for other
collections.

You can learn more about copr here [1] and here [2]. Basically I can take
my package and build for specific releases, create a repo for the built
package(s), etc. Useful for builds that aren't included in a certain
release.

Ryan

[1] https://copr.fedorainfracloud.org/
[2] https://developer.fedoraproject.org/deployment/copr/about.html


>
>

>> Ryan
>>
> Best regards
> aleks
>
>


haproxy-1.8 in Fedora

2018-01-05 Thread Ryan O'Hara
Just wanted to inform Fedora users that haproxy-1.8.3 is now in the master
branch and built for Rawhide. I will not be updating haproxy to 1.8 in
current stable releases of Fedora since I received some complaints about
doing major updates (eg. 1.6 to 1.7) in previous stable releases. That
said, the source rpm will build on Fedora 27. If there is enough interest,
I can build haproxy-1.8 in copr and provide a repository for current stable
Fedora releases.

Ryan


[PATCH 2/2] Fix compiler warnings in halog.c

2017-12-15 Thread Ryan O'Hara
There were several unused variables in halog.c that each caused a
compiler warning [-Wunused-but-set-variable]. This patch simply
removes the declaration of said vairables and any instance where the
unused variable was assigned a value.
---
 contrib/halog/halog.c | 25 -
 1 file changed, 8 insertions(+), 17 deletions(-)

diff --git a/contrib/halog/halog.c b/contrib/halog/halog.c
index fc336b4d..a7248173 100644
--- a/contrib/halog/halog.c
+++ b/contrib/halog/halog.c
@@ -466,7 +466,7 @@ int convert_date(const char *field)
 {
unsigned int h, m, s, ms;
unsigned char c;
-   const char *b, *e;
+   const char *e;
 
h = m = s = ms = 0;
e = field;
@@ -481,7 +481,6 @@ int convert_date(const char *field)
}
 
/* hour + ':' */
-   b = e;
while (1) {
c = *(e++) - '0';
if (c > 9)
@@ -492,7 +491,6 @@ int convert_date(const char *field)
goto out_err;
 
/* minute + ':' */
-   b = e;
while (1) {
c = *(e++) - '0';
if (c > 9)
@@ -503,7 +501,6 @@ int convert_date(const char *field)
goto out_err;
 
/* second + '.' or ']' */
-   b = e;
while (1) {
c = *(e++) - '0';
if (c > 9)
@@ -516,7 +513,6 @@ int convert_date(const char *field)
/* if there's a '.', we have milliseconds */
if (c == (unsigned char)('.' - '0')) {
/* millisecond second + ']' */
-   b = e;
while (1) {
c = *(e++) - '0';
if (c > 9)
@@ -539,7 +535,7 @@ int convert_date_to_timestamp(const char *field)
 {
unsigned int d, mo, y, h, m, s;
unsigned char c;
-   const char *b, *e;
+   const char *e;
time_t rawtime;
static struct tm * timeinfo;
static int last_res;
@@ -626,7 +622,6 @@ int convert_date_to_timestamp(const char *field)
}
 
/* hour + ':' */
-   b = e;
while (1) {
c = *(e++) - '0';
if (c > 9)
@@ -637,7 +632,6 @@ int convert_date_to_timestamp(const char *field)
goto out_err;
 
/* minute + ':' */
-   b = e;
while (1) {
c = *(e++) - '0';
if (c > 9)
@@ -648,7 +642,6 @@ int convert_date_to_timestamp(const char *field)
goto out_err;
 
/* second + '.' or ']' */
-   b = e;
while (1) {
c = *(e++) - '0';
if (c > 9)
@@ -690,10 +683,10 @@ void truncated_line(int linenum, const char *line)
 
 int main(int argc, char **argv)
 {
-   const char *b, *e, *p, *time_field, *accept_field, *source_field;
+   const char *b, *p, *time_field, *accept_field, *source_field;
const char *filter_term_code_name = NULL;
const char *output_file = NULL;
-   int f, last, err;
+   int f, last;
struct timer *t = NULL;
struct eb32_node *n;
struct url_stat *ustat = NULL;
@@ -945,7 +938,7 @@ int main(int argc, char **argv)
}
}
 
-   e = field_stop(time_field + 1);
+   field_stop(time_field + 1);
/* we have field TIME_FIELD in [time_field]..[e-1] */
p = time_field;
f = 0;
@@ -969,17 +962,15 @@ int main(int argc, char **argv)
}
}
 
-   e = field_stop(time_field + 1);
+   field_stop(time_field + 1);
/* we have field TIME_FIELD in [time_field]..[e-1], 
let's check only the response time */
 
p = time_field;
-   err = 0;
f = 0;
while (!SEP(*p)) {
tps = str2ic(p);
if (tps < 0) {
tps = -1;
-   err = 1;
}
if (++f == 4)
break;
@@ -1706,7 +1697,7 @@ void filter_count_ip(const char *source_field, const char 
*accept_field, const c
 void filter_graphs(const char *accept_field, const char *time_field, struct 
timer **tptr)
 {
struct timer *t2;
-   const char *e, *p;
+   const char *p;
int f, err, array[5];
 
if (!time_field) {
@@ -1717,7 +1708,7 @@ void filter_graphs(const char *accept_field, const char 
*time_field, struct time
}
}
 
-   e = field_stop(time_field + 1);
+   field_stop(time_field + 1);
/* we have field TIME_FIELD in [time_field]..[e-1] */
 
p = time_field;
-- 
2.14.2
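
For context, a trivial reproduction of the warning class this patch silences
(a hypothetical snippet; gcc -Wall enables -Wunused-but-set-variable since
GCC 4.6):

/* demo.c: warning: variable 'b' set but not used */
int scan(const char *e)
{
	const char *b;  /* declared ...                               */
	b = e;          /* ... and assigned, but never read, so both
	                 * the declaration and the assignment can go */
	return *e != 0;
}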




[PATCH 1/2] Fix compiler warning in iprange.c

2017-12-15 Thread Ryan O'Hara
The declaration of main() in iprange.c did not specify a type, causing
a compiler warning [-Wimplicit-int]. This patch simply declares main()
to be type 'int' and calls exit(0) at the end of the function.
---
 contrib/iprange/iprange.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/contrib/iprange/iprange.c b/contrib/iprange/iprange.c
index 91690c77..abae0076 100644
--- a/contrib/iprange/iprange.c
+++ b/contrib/iprange/iprange.c
@@ -111,7 +111,7 @@ static void usage(const char *argv0)
"\n", argv0);
 }
 
-main(int argc, char **argv)
+int main(int argc, char **argv)
 {
char line[MAXLINE];
int l, lnum;
@@ -198,4 +198,5 @@ main(int argc, char **argv)
convert_range(sa, da, he, NULL);
}
}
+   exit(0);
 }
-- 
2.14.2




Re: Config file compatibility between 1.5 and 1.6

2016-01-06 Thread Ryan O'Hara
On Wed, Jan 06, 2016 at 03:11:12PM +0100, Baptiste wrote:
> By the way, there are no 'appsession' any more :)

Thanks, Baptiste!

Ryan




Re: Config file compatibility between 1.5 and 1.6

2016-01-06 Thread Ryan O'Hara
On Wed, Jan 06, 2016 at 09:16:14AM +0100, Pavlos Parissis wrote:
> 
> 
> > On 06/01/2016 08:49 AM, Baptiste wrote:
> > On Tue, Jan 5, 2016 at 7:46 PM, Ryan O'Hara <roh...@redhat.com> wrote:
> >>
> >> Are there any known incompatibilities between a config file for
> >> haproxy version 1.5 and 1.6? Specifically, is there anything that is
> >> valid in 1.5 that is no longer valid in 1.6? I'm asking because I am
> >> considering a rebase of haproxy 1.6 in Fedora/RHEL but need to avoid
> >> such issues. If I recall, I rebased from 1.4 to 1.5 in Fedora many
> >> months back and a user ran into a problem in this regard. Any
> >> information is greatly appreciated!
> >>
> >> Ryan
> >>
> >>
> > 
> > Hi Ryan,
> > 
> > My answer won't be exhaustive, sorry about that. Hopefully, other
> > people may help.
> > 
> > I think the configuration parser is less permissive. IE, 2 frontends
> > or 2 backends can't have the same name.
> > The configuration where the listening IP:port address is set on the
> > 'frontend' line is not allowed anymore.
> > 
> > More ALERT may also be triggered when the configuration parser doesn't
> > understand a keyword while those keywords used to be silently ignored.
> > (check alertif_too_many_args_idx() ).
> > 
> > So by definition, many configurations may be broken.
> > 
> 
> It depends on the configuration. I have migrated 1.5 installations to
> 1.6 with zero configuration problems, but my configurations were quite
> simple.
> 
> People with complex configuration or configuration which was created on
> 1.4 and silently copied to 1.5 may see issues on 1.6.

This is precisely why I am asking on the upstream mailing list. If
there are known incompatibilities, this is the place to ask! :)

> The spec file can run a configuration check (-f  -c) after installation
> and print a warning if the configuration is invalid.

I'm well aware, but I'm the package maintainer so I don't have all the
config files to test. :) Thanks!

Ryan




Config file compatibility between 1.5 and 1.6

2016-01-05 Thread Ryan O'Hara

Are there any known incompatibilities between a config file for
haproxy version 1.5 and 1.6? Specifically, is there anything that is
valid in 1.5 that is no longer valid in 1.6? I'm asking because I am
considering a rebase of haproxy 1.6 in Fedora/RHEL but need to avoid
such issues. If I recall, I rebased from 1.4 to 1.5 in Fedora many
months back and a user ran into a problem in this regard. Any
information is greatly appreciated!

Ryan




HAProxy 1.6 in Fedora/Rawhide

2015-10-30 Thread Ryan O'Hara

I've built HAProxy 1.6.1 for Rawhide (Fedora 24), but I'm not
currently planning to add this to Fedora 23. If there is enough
interest, I will gladly provide HAProxy 1.6.1 packages for Fedora 23,
but they will most likely not be pushed into the updates
repository. Long story there.

Anyway, just wanted to let you know that builds are working and if
there are any Fedora users that would like these packages, just send
me an email. Cheers!

Ryan




man page for haproxy.cfg

2015-01-31 Thread Ryan O'Hara

I've been asked to provide a man page for haproxy.cfg, which would be
a massive endeavor. Since Cyril has done such an excellent job
generating the HTML documentation, how difficult would it be to grok
this into man page format? Has anyone done it?

Ryan




Re: no-sslv3 option not working

2014-10-21 Thread Ryan O'Hara
On Tue, Oct 21, 2014 at 04:56:31PM +0200, Thomas Heil wrote:
 Hi,
 
 On 21.10.2014 16:26, John Leach wrote:
  Hi,
 
  I'm trying to disable sslv3 with the no-sslv3 bind option, but it's
  not working.
 
  The option is accepted and the restart is successful, but sslv3 is still
  accepted:
 
  $ openssl s_client -ssl3 -connect localhost:443
 
   New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
   Server public key is 1024 bit
   Secure Renegotiation IS supported
   Compression: NONE
   Expansion: NONE
   SSL-Session:
   Protocol  : SSLv3
   Cipher: DHE-RSA-AES256-SHA
   Session-ID:
  D74EC1760F565669B7CD8D21636D05AABC9E047DAC94133E62240B3824EB8176
   Session-ID-ctx:
   Master-Key:
  11417200F033C2B542B4FA3A7DC3C00214EFE92C7709FD406014D047D75DBA40573447ED5808962211AF323860367DEE
   Key-Arg   : None
   PSK identity: None
   PSK identity hint: None
   SRP username: None
   Start Time: 1413900818
 
  double checked with nmap.
 
  Tested with haproxy 1.5.3 and 1.5.4 on Ubuntu 14.10, Fedora 20 and Centos 7.
 
  Config is as simple as:
 
 
frontend myfrontend
  bind 0.0.0.0:443 ssl crt /etc/haproxy/mycert.pem no-sslv3
  default_backend mybackend
  reqadd X-Forwarded-Proto:\ https
 I've checked your config on CentOS 7 with the official version 1.5.2 and
 it works.

I also tried 1.5.2 on RHEL7 and it also works.

Ryan

 --
 # openssl s_client -ssl3 -connect 127.0.0.1:443
 CONNECTED(0003)
 139825192679328:error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert
 handshake failure:s3_pkt.c:1257:SSL alert number 40
 139825192679328:error:1409E0E5:SSL routines:SSL3_WRITE_BYTES:ssl
 handshake failure:s3_pkt.c:596:
 ---
 no peer certificate available
 ---
 No client certificate CA names sent
 ---
 SSL handshake has read 7 bytes and written 0 bytes
 ---
 New, (NONE), Cipher is (NONE)
 Secure Renegotiation IS NOT supported
 Compression: NONE
 Expansion: NONE
 SSL-Session:
 Protocol  : SSLv3
 Cipher: 
 Session-ID:
 Session-ID-ctx:
 Master-Key:
 Key-Arg   : None
 Krb5 Principal: None
 PSK identity: None
 PSK identity hint: None
 Start Time: 1413903320
 Timeout   : 7200 (sec)
 Verify return code: 0 (ok)
 ---
 
 
 
  I've also tried disabling tls too, and that seems to have no effect either.
 
  Lots of people are recommending this as a fix against the POODLE vuln,
  so it's quite critical! Any thoughts?
 Could you post haproxy -vv?
 Where does your package come from? Did you compile it yourself?
 
  Thanks,
 
  John.
  --
  http://brightbox.com
 
 
 
 
 cheers
 thomas
 



Re: active/passive stick-table not sticky

2014-10-13 Thread Ryan O'Hara
On Mon, Oct 13, 2014 at 08:13:29PM +0200, Benjamin Vetter wrote:
 On 13.10.2014 16:54, Baptiste wrote:
 On Sun, Oct 12, 2014 at 6:47 PM, Benjamin Vetter vet...@flakks.com wrote:
 Hi,
 
 i'm using the example from
 http://blog.haproxy.com/2014/01/17/emulating-activepassing-application-clustering-with-haproxy/
 with haproxy 1.5.4 for a 3 node mysql+galera setup to implement
 active/passive'ness.
 
 global
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
maxconn 8192
uid 99
gid 99
debug
   stats socket /tmp/haproxy
 
 defaults
log global
mode http
option tcplog
option dontlognull
retries 3
maxconn 8192
timeout connect 5000
timeout client 30
timeout server 30
 
 listen mysql-active-passive 0.0.0.0:3309
stick-table type ip size 1
stick on dst
mode tcp
balance roundrobin
option httpchk
server db01 192.168.0.11:3306 check port 9200 inter 12000 rise 3 fall 3
 on-marked-down shutdown-sessions
server db02 192.168.0.12:3306 check port 9200 inter 12000 rise 3 fall 3
 on-marked-down shutdown-sessions backup
server db03 192.168.0.13:3306 check port 9200 inter 12000 rise 3 fall 3
 on-marked-down shutdown-sessions backup
 
 I tested the stickiness via this tiny ruby script, which simply connects and
 asks the node for its stored ip address:
 
 require "mysql2"
 
 loop do
    begin
      mysql2 = Mysql2::Client.new(:port => 3309, :host => "192.168.0.10",
                                  :username => "username")
      puts mysql2.query("show variables like '%wsrep_sst_rec%'").to_a
      mysql2.close
    rescue
      # Nothing
    end
 end
 
 First, everything's fine. On first run, stick-table gets updated:
 
 # table: mysql-active-passive, type: ip, size:1, used:1
 0x1c90224: key=192.168.0.10 use=0 exp=0 server_id=1
 
 Then i shutdown 192.168.0.11. Again, everything's fine, as the stick table
 gets updated to:
 
 # table: mysql-active-passive, type: ip, size:1, used:1
 0x1c90224: key=192.168.0.10 use=0 exp=0 server_id=2
 
 and all connections now go to db02.
 
 Then i restart/repair 192.168.0.11, the stick table stays as is (fine), such
 that all connections should still go to db02.
 However, the output of my script now starts to say:
 
 ...
 {Variable_name=wsrep_sst_receive_address, Value=192.168.0.11}
 {Variable_name=wsrep_sst_receive_address, Value=192.168.0.11}
 {Variable_name=wsrep_sst_receive_address, Value=192.168.0.11}
 {Variable_name=wsrep_sst_receive_address, Value=192.168.0.11}
 {Variable_name=wsrep_sst_receive_address, Value=192.168.0.11}
 {Variable_name=wsrep_sst_receive_address, Value=192.168.0.11}
 {Variable_name=wsrep_sst_receive_address, Value=192.168.0.11}
 {Variable_name=wsrep_sst_receive_address, Value=192.168.0.12}
 {Variable_name=wsrep_sst_receive_address, Value=192.168.0.11}
 {Variable_name=wsrep_sst_receive_address, Value=192.168.0.11}
 {Variable_name=wsrep_sst_receive_address, Value=192.168.0.11}
 {Variable_name=wsrep_sst_receive_address, Value=192.168.0.11}
 {Variable_name=wsrep_sst_receive_address, Value=192.168.0.11}
 {Variable_name=wsrep_sst_receive_address, Value=192.168.0.12}
 {Variable_name=wsrep_sst_receive_address, Value=192.168.0.11}
 {Variable_name=wsrep_sst_receive_address, Value=192.168.0.11}
 {Variable_name=wsrep_sst_receive_address, Value=192.168.0.11}
 {Variable_name=wsrep_sst_receive_address, Value=192.168.0.12}
 ...
 
 such that sometimes the connection goes to db01 and sometimes to db02.
 Do you know what the problem is?
 
 Thanks
Benjamin
 
 
 
 
 Hi Benjamin,
 
 Could you remove the 'backup' keyword from your server lines and run
 the same test?
 
 Baptiste
 
 
 
 
 Ok, after more testing and digging into the haproxy source it's more
 or less clear that size 1 is the problem - in contrast to what the
 blog post says.

You hit exactly the same issue I ran into several months back. I think
Willy responded on the mailing list that using a stick table size of
'2' is the solution, but it seems you've figured this out on your own.

 Every new client connection requires a slot in the stick table no
 matter if the new session/stick table entry will match the already
 existing stick table entry or not. Thus, if the stick table is full
 already (very likely for size 1), haproxy removes the single
 already existing entry. As a consequence, you need to have the
 size parameter at least as large as the number of client
 connections you're going to expect.

Interesting. I was recently looking at a rather large stick-table that
had 'stick on dst' and saw that there was never more than one
entry. Could you expand on this or provide reference to the code in
question? I quite like stick-tables for hot-standby emulation since
they don't have failback like 'backup' servers do.

 This is IMHO a bit counter-intuitive, but however ... with large
 size parameter it's working as expected.

How large?

Cheers.
Ryan
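
Based on Willy's advice referenced above, a sketch of the corrected config
from this thread (only the stick-table size changes; 200k is an arbitrary
illustrative value, and the health-check timing options are omitted for
brevity):

listen mysql-active-passive 0.0.0.0:3309
   stick-table type ip size 200k
   stick on dst
   mode tcp
   balance roundrobin
   server db01 192.168.0.11:3306 check port 9200 on-marked-down shutdown-sessions
   server db02 192.168.0.12:3306 check port 9200 on-marked-down shutdown-sessions backup
   server db03 192.168.0.13:3306 check port 9200 on-marked-down shutdown-sessions backup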




Re: Binaries for HAProxy.

2014-07-16 Thread Ryan O'Hara
On Wed, Jul 16, 2014 at 09:07:48AM -0500, Kuldip Madnani wrote:
 My Linux Distribution is :
 
 Red Hat Enterprise Linux Server release 6.3 (Santiago)

HAProxy is not included in RHEL 6.3. You will need RHEL 6.4 with Load
Balancer AddOn or RHEL7.

Ryan

 On Wed, Jul 16, 2014 at 9:03 AM, Mathew Levett mat...@loadbalancer.org
 wrote:
 
  Hi Kuldip,
 
  I think you may need to provide a little more information, it may be that
  your Linux distribution may already have haproxy in their repository.
   However the information supplied does not really show what you're running.
  Do you know the distribution name?
 
   If it's Debian then something like 'apt-get install haproxy' may be all you
   need; RedHat-based distros may use yum, so 'yum install haproxy'. However,
   it's also not that hard to compile the latest version from source, and it
   is well documented in the download file.
 
  Usually on a list like this you need to supply as much information as
  possible so the people here can help.
 
  Kind Regards,
 
  Mathew
 
 
  On 16 July 2014 14:50, Kuldip Madnani k.madnan...@gmail.com wrote:
 
  Hi,
 
   Where can I find the compiled binaries for haproxy? My system
  configuration is this :
 
  $ uname -a
  Linux  2.6.32-279.22.1.el6.x86_64 #1 SMP Sun Jan 13 09:21:40 EST 2013
  x86_64 x86_64 x86_64 GNU/Linux
 
   Thanks & Regards,
  Kuldip
 
 
 



Re: Binaries for HAProxy.

2014-07-16 Thread Ryan O'Hara
,
  from include/common/cfgparse.h:29,
  from src/haproxy.c:61:
 include/types/server.h:207: error: expected specifier-qualifier-list before
 'SSL_CTX'
 In file included from src/haproxy.c:90:
 include/proto/listener.h: In function 'bind_conf_alloc':
 include/proto/listener.h:130: error: 'struct bind_conf' has no member named
 'file'
 include/proto/listener.h:131: error: 'struct bind_conf' has no member named
 'line'
 include/proto/listener.h:133: error: 'struct bind_conf' has no member named
 'by_fe'
 include/proto/listener.h:133: error: 'struct bind_conf' has no member named
 'by_fe'
 include/proto/listener.h:133: error: 'struct bind_conf' has no member named
 'by_fe'
 include/proto/listener.h:133: error: 'struct bind_conf' has no member named
 'by_fe'
 include/proto/listener.h:133: error: 'struct bind_conf' has no member named
 'by_fe'
 include/proto/listener.h:135: error: 'struct bind_conf' has no member named
 'arg'
 include/proto/listener.h:137: error: 'struct bind_conf' has no member named
 'ux'
 include/proto/listener.h:138: error: 'struct bind_conf' has no member named
 'ux'
 include/proto/listener.h:139: error: 'struct bind_conf' has no member named
 'ux'
 include/proto/listener.h:141: error: 'struct bind_conf' has no member named
 'listeners'
 include/proto/listener.h:141: error: 'struct bind_conf' has no member named
 'listeners'
 include/proto/listener.h:141: error: 'struct bind_conf' has no member named
 'listeners'
 In file included from src/haproxy.c:107:
 include/proto/ssl_sock.h: At top level:
 include/proto/ssl_sock.h:46: error: expected declaration specifiers or
 '...' before 'SSL_CTX'
 src/haproxy.c:153: error: 'MAX_WBITS' undeclared here (not in a function)
 src/haproxy.c: In function 'display_build_opts':
 src/haproxy.c:254: error: expected ')' before 'ZLIB_VERSION'
 src/haproxy.c:272: error: expected ')' before 'OPENSSL_VERSION_TEXT'
 src/haproxy.c:274: warning: implicit declaration of function
 'SSLeay_version'
 src/haproxy.c:274: error: 'SSLEAY_VERSION' undeclared (first use in this
 function)
 src/haproxy.c:274: error: (Each undeclared identifier is reported only once
 src/haproxy.c:274: error: for each function it appears in.)
 src/haproxy.c:275: error: 'OPENSSL_VERSION_NUMBER' undeclared (first use in
 this function)
 src/haproxy.c:275: warning: implicit declaration of function 'SSLeay'
 src/haproxy.c:308: warning: implicit declaration of function 'pcre_version'
 src/haproxy.c:308: warning: format '%s' expects type 'char *', but argument
 2 has type 'int'
 src/haproxy.c: In function 'deinit':
 src/haproxy.c:1188: error: 'struct bind_conf' has no member named 'by_fe'
 src/haproxy.c:1188: error: 'struct bind_conf' has no member named 'by_fe'
 src/haproxy.c:1188: error: 'struct bind_conf' has no member named 'by_fe'
 src/haproxy.c:1188: warning: left-hand operand of comma expression has no
 effect
 src/haproxy.c:1188: error: 'struct bind_conf' has no member named 'by_fe'
 src/haproxy.c:1188: error: 'struct bind_conf' has no member named 'by_fe'
 src/haproxy.c:1188: error: 'struct bind_conf' has no member named 'by_fe'
 src/haproxy.c:1196: error: 'struct bind_conf' has no member named 'file'
 src/haproxy.c:1197: error: 'struct bind_conf' has no member named 'arg'
 src/haproxy.c:1198: error: 'struct bind_conf' has no member named 'by_fe'
 src/haproxy.c:1198: warning: type defaults to 'int' in declaration of
 '__ret'
 src/haproxy.c:1198: error: 'struct bind_conf' has no member named 'by_fe'
 src/haproxy.c:1198: error: 'struct bind_conf' has no member named 'by_fe'
 src/haproxy.c:1198: error: 'struct bind_conf' has no member named 'by_fe'
 src/haproxy.c:1198: error: 'struct bind_conf' has no member named 'by_fe'
 src/haproxy.c:1198: error: 'struct bind_conf' has no member named 'by_fe'
 make: *** [src/haproxy.o] Error 1
 
 
 On Wed, Jul 16, 2014 at 9:18 AM, Ryan O'Hara roh...@redhat.com wrote:
 
  On Wed, Jul 16, 2014 at 09:07:48AM -0500, Kuldip Madnani wrote:
   My Linux Distribution is :
  
   Red Hat Enterprise Linux Server release 6.3 (Santiago)
 
  HAProxy is not included in RHEL 6.3. You will need RHEL 6.4 with Load
  Balancer AddOn or RHEL7.
 
  Ryan
 
   On Wed, Jul 16, 2014 at 9:03 AM, Mathew Levett mat...@loadbalancer.org
   wrote:
  
Hi Kuldip,
   
I think you may need to provide a little more information, it may be
  that
your Linux distribution may already have haproxy in their repository.
 However the information supplied does not really show what you're
  running.
Do you know the distribution name?
   
 If it's Debian then something like 'apt-get install haproxy' may be all you
 need; RedHat-based distros may use yum, so 'yum install haproxy'. However,
 it's also not that hard to compile the latest version from source and is
 well documented in the download file.
   
Usually on a list like this you need to supply as much information as
possible so the people here can help.
   
Kind Regards,
   
Mathew

Re: [ANNOUNCE] haproxy-1.5.0

2014-06-20 Thread Ryan O'Hara
On Fri, Jun 20, 2014 at 07:14:39AM +0200, Willy Tarreau wrote:
 On Fri, Jun 20, 2014 at 03:35:55AM +0300, Eliezer Croitoru wrote:
  On 06/19/2014 10:54 PM, Willy Tarreau wrote:
  Don't forget to offer a beer to your distro packagers who make your life
  easier. It's hard to list them all, but if you don't build from sources,
  you're likely running a package made and maintained by one of these people 
  :
 - debian: Vincent Bernat, Apollon Oikonomopoulos, Prach Pongpanich
 - Fedora: Ryan O'hara
 - OpenSuSE: Marcus Rückert
 - other? just report yourself!
  Congrats!!
  
  And with a question:
  Who is the maintainer of CentOS RPMs?
 
 I could be wrong, but my understanding is that Ryan's packages are used
 in RHEL as well, so probably you have them automatically in CentOS ?

That is correct. The latest RHEL release has haproxy 1.4.24, so that
is what will be in CentOS.

 If nobody will build it for CentOS in the next month or two I might
 build it.
 
 Please double-check with Ryan first to ensure there's no double work.

If you want haproxy 1.5 on CentOS in the near future, there are other means
to build packages. I could build 1.5 against el6 in copr if you like.

Ryan




Re: [ANNOUNCE] haproxy-1.5.0

2014-06-20 Thread Ryan O'Hara
On Fri, Jun 20, 2014 at 07:58:48PM +0200, Thomas Heil wrote:
 On 20.06.2014 18:07, Ryan O'Hara wrote:
  On Fri, Jun 20, 2014 at 07:14:39AM +0200, Willy Tarreau wrote:
  On Fri, Jun 20, 2014 at 03:35:55AM +0300, Eliezer Croitoru wrote:
  On 06/19/2014 10:54 PM, Willy Tarreau wrote:
  Don't forget to offer a beer to your distro packagers who make your life
  easier. It's hard to list them all, but if you don't build from sources,
  you're likely running a package made and maintained by one of these 
  people 
  :
- debian: Vincent Bernat, Apollon Oikonomopoulos, Prach Pongpanich
- Fedora: Ryan O'hara
- OpenSuSE: Marcus Rückert
- other? just report yourself!
  Congrats!!
 
  And with a question:
  Who is the maintainer of CentOS RPMs?
  I could be wrong, but my understanding is that Ryan's packages are used
  in RHEL as well, so probably you have them automatically in CentOS ?
  That is correct. The latest RHEL release has haproxy 1.4.24, so that
  is what will be in CentOS.
 What needs to be done to upgrade it to 1.5.0? I think the official way
 could save a lot of
 time because there is no need for everybody to build his own RPM.

Are we talking about Centos? The official way to upgrade in Centos is
to pull updates from Centos repos. Centos isn't like Fedora where
there are builds/updates being done at any given time. Centos tracks
RHEL.

I recommended an el6 copr build so that Centos users who wanted 1.5.0
packages soon could get them from a common repo.

If nobody will build it for CentOS in the next month or two I might
build it.
  Please double-check with Ryan first to ensure there's no double work.
  If you want haproxy 1.5 on CentOS in the near future, there are other means 
  to build
  packages. I could build 1.5 against el6 in copr if you like.
 What do you mean by against el7 in copr ?

I'm talking about el6 only, not el7. You can read up on Copr here:

https://fedorahosted.org/copr/

Cheers.

Ryan




Re: [ANNOUNCE] haproxy-1.5.0

2014-06-19 Thread Ryan O'Hara
On Thu, Jun 19, 2014 at 09:54:29PM +0200, Willy Tarreau wrote:
 Hi everyone,
 
 The list has been unusually silent today, just as if everyone was waiting
 for something to happen :-)
 
 Today is a great day, the reward of 4 years of hard work. I'm announcing the
 release of HAProxy 1.5.0.

Congratulations! Excellent work. Fedora packages should hit testing
repos very soon.

Ryan




Re: [ANNOUNCE] haproxy-1.5-dev26 (and hopefully last)

2014-05-28 Thread Ryan O'Hara
On Wed, May 28, 2014 at 08:43:10PM +0200, Vincent Bernat wrote:
  ❦ 28 May 2014 18:11 +0200, Willy Tarreau w...@1wt.eu :
 
  Feedback welcome as usual,
 
 When compiling with -Werror=format-security (which is a common setting
 on a Debian-based distribution), we get:
 
 src/dumpstats.c:3059:4: error: format not a string literal and no format 
 arguments [-Werror=format-security]
 chunk_appendf(trash, srv_hlt_st[1]); /* DOWN (agent) */
 ^

I'm getting the same error when building against Fedora rawhide.

Ryan
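
The conventional fix for this warning class, for reference (a sketch, not
necessarily what was committed), is to keep the format string a literal and
pass the table entry as an argument:

chunk_appendf(trash, "%s", srv_hlt_st[1]);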



 srv_hlt_st[1] is "DOWN %s/%s", so this is not even a false positive. I
 suppose this should be srv_hlt_st[0] but then it's better to just write
 "DOWN" (since it avoids the warning).
 
 It leads me to the next chunk of code:
 
   chunk_appendf(trash,
 srv_hlt_st[state],
 (ref->state != SRV_ST_STOPPED) ?
 (ref->check.health - ref->check.rise + 1) : (ref->check.health),
 (ref->state != SRV_ST_STOPPED) ?
 (ref->check.fall) : (ref->check.rise));
 
 Not all members of srv_hlt_st have "%s/%s". I cannot say for sure how
 chunk_appendf works. Is it the caller or the callee that cleans up? I
 suppose that because of ..., this is automatically the caller, so the
 additional arguments are harmless.
 -- 
 panic("esp: what could it be... I wonder...");
   2.2.16 /usr/src/linux/drivers/scsi/esp.c
 



Re: Recommended strategy for running 1.5 in production

2014-04-17 Thread Ryan O'Hara
On Wed, Apr 16, 2014 at 11:12:07PM +0100, Kobus Bensch wrote:
 I use haproxy on centos. So I build a RPM i then use in spacewalk to
 first roll out to test, then post testing to production.

I can add el6 to my copr build if you need an rpm build. Currently I'm
only building 1.5-dev22 in copr for F20, but it shouldn't be too much
trouble to add el6.

Ryan

 On 16/04/2014 17:14, pablo platt wrote:
 An official Ubuntu dev repo will also make testing easier.
 It's much easier to use apt-get than building from source and
 figuring out command line options.
 
 
 On Wed, Apr 16, 2014 at 7:05 PM, Philipp
 e1c1bac6253dc54a1e89ddc046585...@posteo.net wrote:
 
 On 16.04.2014 17:40, Willy Tarreau wrote:
 
 I think you summarized very well how to carefully use a
 development
 version in prod. That requires a bit of care, but with that
 you can
 get both nice features and quick fixes.
 
 
 Indeed :)
 
 After 1.5 is released, I'd like to switch to a faster and more
 regular
 release cycle with less constraints on the features.
 
 
 And with above said: I, personally, give a rats a** if a version
 is called
 alpha, rc123, -dev or whatever fancy version string it has.
 
 Test the thing and find out the hairy bits after it hits
 production :-)
 
 I was sooo often burned by "oh, finally a release" and then it was worse
 than the RC before the actual release whatsoever.
 
 My kudos to Willy and the other developers of haproxy, awesome work
 overall AND in the nitbits :-).
 
 
 
 -- 
 Kobus Bensch
 Senior Systems Administrator
 Address:  22  24 | Frederick Sanger Road | Guildford | Surrey | GU2 7YD
 DDI:  0207 871 3958
 Tel:  0207 871 3890
 Email: kobus.ben...@trustpayglobal.com
 
 -- 
 
 
 Trustpay Global Limited is an authorised Electronic Money
 Institution regulated by the Financial Conduct Authority
 registration number 900043. Company No 07427913 Registered in
 England and Wales with registered address 130 Wood Street, London,
 EC2V 6DL, United Kingdom.
 
 For further details please visit our website at www.trustpayglobal.com.
 
 The information in this email and any attachments are confidential
 and remain the property of Trustpay Global Ltd unless agreed by
 contract. It is intended solely for the person to whom or the entity
 to which it is addressed. If you are not the intended recipient you
 may not use, disclose, copy, distribute, print or rely on the
 content of this email or its attachments. If this email has been
 received by you in error please advise the sender and delete the
 email from your system. Trustpay Global Ltd does not accept any
 liability for any personal view expressed in this message.





haproxy 1.5 builds for fedora/rawhide

2014-03-11 Thread Ryan O'Hara

For those interested, I have built haproxy-1.5-dev22 for Fedora. The
packages are located in a copr repo since the distribution repos still
contain version 1.4. The project and repos can be found here:

http://copr.fedoraproject.org/coprs/rohara/haproxy/

The SRPM can be found here:

http://rohara.fedorapeople.org/haproxy/haproxy-1.5-0.3.dev22.fc20.src.rpm

Feel free to contact me if you have questions.

Ryan




Re: haproxy-systemd-wrapper spawning multiple processes

2014-02-15 Thread Ryan O'Hara
On Sun, Feb 16, 2014 at 10:08:31AM +0900, Marc-Antoine Perennou wrote:
 Hi,
 
 On 16 February 2014 01:51, Ryan O'Hara roh...@redhat.com wrote:
 
  I started tinkering with haproxy-systemd-wrapper recently and noticed
  that I get two haproxy processes when I start:
 
  # systemctl start haproxy
  # systemctl status haproxy
  haproxy.service - HAProxy Load Balancer
 Loaded: loaded (/usr/lib/systemd/system/haproxy.service; disabled)
 Active: active (running) since Sat 2014-02-15 10:39:20 CST; 1s ago
   Main PID: 10065 (haproxy-systemd)
 CGroup: /system.slice/haproxy.service
 ├─10065 /usr/sbin/haproxy-systemd-wrapper -f
 /etc/haproxy/haproxy.cfg -p /run/haproxy
 ├─10066 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p
 /run/haproxy.pid -Ds
 └─10067 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p
 /run/haproxy.pid -Ds
 
  That doesn't seem right. A quick look at the haproxy processes shows an
  interesting parent/child relationship:
 
  # ps -C haproxy -o pid,ppid
PID  PPID
  10066 10065
  10067 10066
 
  Can someone explain what is going on here? I'm using 1.5-dev22 and the
  systemd service file from the source.
 
 
 Here is how haproxy works (correct me if I'm wrong, it's not all that
 fresh in my mind):
 - the main haproxy process is started
 - it forks as many child processes as asked in its configuration file
 - it goes away letting only the worker child processes

Right. This seems pretty standard.

 First thing I did was to make it wait for the worker child processes
 instead of leaving, that's what -Ds is for. This is in order to avoid
 the double fork which would happen because of what I'll describe just
 below.
 
 Here is how haproxy reloads its configuration:
 - A new haproxy is spawned with the pids of the old workers
 - The new haproxy tells the old workers not to listen anymore, and to
 exit when they have finished dealing with their current requests
 - The new haproxy spawns its own workers and starts listening
 - The old haproxy quits eventually when it has dealt with all pending requests
 
 systemd doesn't like this behaviour at all as the main process
 completely goes away, replaced by a brand new one, just for a
 *reload*.
 
 The easiest way to get it working without having to rework the core
 behaviour of haproxy was to put a wrapper around it, which spawns
 haproxy, listens for a signal which systemd emits on reload, and
 spawns a new haproxy when this signal is received. This way, the main
 process never changes and systemd can reload gracefully.

I understand. I went back and read the description you provided when
you submitted the patch. I just wasn't expecting the main haproxy
process to _not_ exit. The more I think about this, the more it makes
sense. My initial assumption was that the MAINPID (in systemd) would
be the pid of haproxy-systemd-wrapper, the main haproxy would spawn the
workers and exit, and a 'systemctl reload' could signal the workers.
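
For reference, the service unit shipped with the haproxy source looks
roughly like this (the paths and the exact reload signal may vary between
versions, so treat this as a sketch rather than the canonical unit):

[Unit]
Description=HAProxy Load Balancer
After=network.target

[Service]
# The wrapper itself stays alive as MAINPID; it forks the real haproxy.
ExecStart=/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
# On reload, signal the wrapper; it re-executes haproxy with -sf so the
# old workers finish their pending requests and then exit.
ExecReload=/bin/kill -USR2 $MAINPID

[Install]
WantedBy=multi-user.target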

 This is why you get
 
 haproxy-systemd-wrapper -> main haproxy process -> haproxy worker.
 
 haproxy-systemd-wrapper waits for the main haproxy process to exit to
 avoid zombies. The main haproxy process exits when all its workers are
 done.

It has been a while since I dealt with this, but can't you double fork
to avoid zombies? Is it a double fork that causes problems for systemd?

  Thanks.
  Ryan
 
 
 Hope that helps and sounds right.
 
 Marc-Antoine

It does help. Thank you.

Ryan




Re: 'packet of death' in 1.5-dev21.x86_64.el6_4

2014-02-07 Thread Ryan O'Hara
On Fri, Feb 07, 2014 at 07:23:42PM +0100, Lukas Tribus wrote:
 Hi,
 
 
  Not a problem ... our Head of IS did a detailed write-up on our
  investigation process and findings at his blog if you are interested:
 
  http://blog.tinola.com/?e=36
 
 Thanks, that's really interesting and very detailed.

Indeed.

 Someone from Red Hat really should take a look at this. Most likely
 EAI_NODATA is not defined in the libc, that's why upgrading libc
 helps and upgrading libkrb5 doesn't. So the real problem is that
 getaddrinfo() returns an error code unknown to the libc (other
 applications than libkrb5 may suffer from problems as well; although
 they probably don't abort()).

I've passed along the information to the appropriate
people. Interesting that it is fixed in CentOS 6.5; it would be great to
know how it was fixed. I took a quick look at krb5-libs and glibc and
nothing jumped out at me.

Ryan

 Looks like EAI_NODATA is deprecated, and it's already removed from
 FreeBSD, for example, in favor of EAI_NONAME [1].
 
 
 As for the workaround: you should be able to disable the kerberos
 ciphers in the HAProxy configuration, so that you can continue to run
 it in a chroot. Or maybe compiling with -DEAI_NODATA=EAI_NONAME would
 help? (See the sketch below this quote.)
 
 What are those ciphers anyway (openssl ciphers -v 'LOW')? I don't
 seem to have them here on ubuntu ...
 
 
 
 [1] http://krbdev.mit.edu/rt/Ticket/History.html?id=5518  
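
For anyone who wants to try that -DEAI_NODATA workaround, here is a
minimal sketch of the build command (this assumes haproxy's Makefile
passes DEFINE straight through to the compiler; untested):

# rebuild haproxy, mapping the deprecated error code onto EAI_NONAME
make TARGET=linux26 USE_OPENSSL=1 DEFINE="-DEAI_NODATA=EAI_NONAME"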
   



Re: RabbitMQ-HAProxy raising a exception.

2014-02-06 Thread Ryan O'Hara
On Thu, Feb 06, 2014 at 02:05:07PM -0600, Kuldip Madnani wrote:
 Hi,
 
 I am trying to connect to my RabbitMQ cluster through HAProxy. When connected
 directly to the RabbitMQ nodes it works fine, but when connected through HAProxy
 it raises the following exception:

What are your client/server timeouts?

Ryan

 com.rabbitmq.client.ShutdownSignalException: connection error; reason:
 java.io.EOFException
 at
 com.rabbitmq.client.impl.AMQConnection.startShutdown(AMQConnection.java:678)
 at com.rabbitmq.client.impl.AMQConnection.shutdown(AMQConnection.java:668)
 at
 com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:546)
 Caused by: java.io.EOFException
 at java.io.DataInputStream.readUnsignedByte(DataInputStream.java:290)
 at com.rabbitmq.client.impl.Frame.readFrom(Frame.java:95)
 at
 com.rabbitmq.client.impl.SocketFrameHandler.readFrame(SocketFrameHandler.java:131)
 at
 com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:515)
 
 What could be the reason? I see the RabbitMQ guys say in many forums to check
 it with HAProxy.
 
 Thanks & Regards,
 Kuldip Madnani



Re: RabbitMQ-HAProxy raising a exception.

2014-02-06 Thread Ryan O'Hara
On Thu, Feb 06, 2014 at 02:15:31PM -0600, Kuldip Madnani wrote:
 I have the following setting for HAProxy and no settings in client for
 connectionFactory:
 
 defaults
 log global
 mode tcp
 option  tcplog
 option  dontlognull
 retries 3
 option  redispatch
 maxconn 4096
 timeout connect 5s # default 5 second time out if a backend is not found
 timeout client 300s
 timeout server 300s

OK. 300s is more than enough.

 # Entries for rabbitmq_CLUSTER6 Listener
 #--#
 listen rabbitmq_CLUSTER6   *:5678
 mode   tcp
 maxconn 8092
 option allbackups
 balance roundrobin
 server LISTENER_rabbitmq_CLUSTER6_zldv3697_vci_att_com_5672 zldv3697.XXX.XXX.com:5672 weight 10 check inter 5000 rise 2 fall 3
 ##
 
 Could these values be causing the java.io.EOFException?

I have no idea. My first thought was that your connections were timing
out and the application didn't handle it well.

I don't think this is an haproxy issue. I have haproxy working in
front of a RabbitMQ cluster and have not hit any problems. The
configuration I am using can be found here:

http://openstack.redhat.com/RabbitMQ
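
In case that page moves, here is a minimal sketch of what such a
configuration might look like (the hostnames and timeout values are
illustrative, not taken from that page):

listen rabbitmq *:5672
    mode tcp
    balance roundrobin
    # AMQP connections are long-lived, so keep the timeouts generous
    timeout client 900s
    timeout server 900s
    server rabbit01 rabbit01.example.com:5672 check inter 5s rise 2 fall 3
    server rabbit02 rabbit02.example.com:5672 check inter 5s rise 2 fall 3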

Ryan

 Thanks & Regards,
 Kuldip Madnani
 
 
 



Re: Question about logging in HAProxy

2014-02-04 Thread Ryan O'Hara
On Tue, Feb 04, 2014 at 02:05:24PM -0600, Kuldip Madnani wrote:
 Hi,
 
 I want to redirect the logs generated by HAProxy into a specific file. I
 read that in the global section's log option I can put a file location
 instead of an IP address. I tried that setting but it didn't work for me;
 I also enabled TCP logging in my listener, but no luck. Could anybody tell
 me if I am missing something? Here is my configuration:
 global
 
 log /opt/app/workload/haproxy/log/haproxy.log syslog info
 

On my systems (which use rsyslog) I do this:

log /dev/log local0

Then I create /etc/rsyslog.d/haproxy.conf, which contains:

local0.* /var/log/haproxy

And everything gets logged there.

 listen rabbitmq_perfCluster   *:5693
 mode   tcp
 maxconn 32000
 option allbackups
 option tcplog
 option logasap
 log global
 balance roundrobin

Interesting. I just finished setting up haproxy for a RabbitMQ
cluster. Feel free to contact me off-list to share your experience on
this endeavor.

Ryan




Re: Question about logging in HAProxy

2014-02-04 Thread Ryan O'Hara
On Tue, Feb 04, 2014 at 11:44:47PM +0100, Willy Tarreau wrote:
 Hi Ryan,
 
 On Tue, Feb 04, 2014 at 04:00:14PM -0600, Ryan O'Hara wrote:
  On my systems (which use rsyslog) I do this:
  
  log /dev/log local0
  
  Then I create /etc/rsyslog.d/haproxy.conf, which contains:
  
  local0.* /var/log/haproxy
  
  And everything gets logged there.
 
 Just a minor point here, when you're dealing with a proxy which is used
 in contexts of high load (thousands to tens of thousands of requests per
 second), the unix socket's log buffers are too small on many systems, and
 many log messages are dropped. Thus on these systems, logging over UDP is
 preferred (which requires setting up the syslog server to listen on UDP,
 and preferably only on localhost).

Good to know. I don't generate enough log traffic in my
test/development environment to hit this, but I can definitely see
why UDP would be preferred. Thanks for the tip.
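
For example, a minimal UDP setup might look something like this (assuming
rsyslog with the imudp module; 514 and local0 are just the usual defaults):

# /etc/rsyslog.d/haproxy.conf -- accept syslog over UDP on localhost only
$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
local0.* /var/log/haproxy

# haproxy.cfg, global section -- log over UDP instead of the unix socket
log 127.0.0.1:514 local0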

Ryan

 Best regards,
 Willy
 



Use one backend server at a time

2014-01-30 Thread Ryan O'Hara

I'd like to define a proxy (tcp mode) that has multiple backend
servers yet only uses one at a time. In other words, traffic comes
into the frontend and is redirected to one backend server. Should that
server fail, another is chosen.

I realize this might be an odd thing to do with haproxy, and if you're
thinking that simple VIP failover (i.e. keepalived) is better suited
for this, you are correct. Long story.

I've gotten fairly close to achieving this behavior by having all my
backend servers declared 'backup' and not using 'allbackups'. The only
caveat is that these backup servers have a preference based on the
order they are defined. Say my severs are defined in the backend like
this:

server foo-01 ... backup
server foo-02 ... backup
server foo-03 ... backup

If foo-01 is up, all traffic will go to it. When foo-01 is down, all
traffic will go to foo-02. When foo-01 comes back online, traffic goes
back to foo-01. Ideally the active backend server would change only
when it failed. Besides, this solution is rather ugly.

Is there a better way?

Ryan



Re: Use one backend server at a time

2014-01-30 Thread Ryan O'Hara
On Thu, Jan 30, 2014 at 07:14:30PM +0100, PiBa-NL wrote:
 I'm not 100% sure, but if I remember what I read correctly, it was
 done using a 'stick on dst' stick-table.
 
 That way the stick-table will make sure all traffic goes to a single
 server, and only when it fails will another server be put in the
 stick-table, which will only have one entry.

Yes. That sounds accurate.

 
 You might want to test what happens when the haproxy configuration is
 reloaded. But if you configure 'peers', the new haproxy process
 should still have the same 'active' backend.
 
 P.S. That is, if I'm not mixing stuff up...

This blog has something very close to what I'd like to deploy:

http://blog.exceliance.fr/2014/01/17/emulating-activepassing-application-clustering-with-haproxy/

The only difference is that I'd like to have more than just one
backup. I'll try to find some time to experiment in the next few days.

Thanks.
Ryan





Re: Use one backend server at a time

2014-01-30 Thread Ryan O'Hara
On Thu, Jan 30, 2014 at 08:03:37PM +0100, PiBa-NL wrote:
 can you double-check that the stick-table fills properly with the socket
 commands, and that you are running with nbproc 1?

It appears that 1.4 does support 'show table' via the stats
socket. Yes, nbproc is 1.
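
For reference, inspecting the table over the stats socket looks something
like this (the output line is illustrative, not captured from a live run):

$ echo "show table http-servers" | socat stdio /tmp/haproxy
# table: http-servers, type: ip, size:1, used:1
0x1d2a3b0: key=192.168.122.101 use=0 exp=0 server_id=1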

 can you post the (anonymized) config you're currently using?

No need to anonymize, I'm running this all in a few kvm instances.

---

global
daemon
stats socket /tmp/haproxy

defaults
mode http
option http-server-close
timeout connect 5s
timeout client 10s
timeout server 10s

frontend http-vip
bind 192.168.122.101:80
default_backend http-servers

backend http-servers
stick-table type ip size 1
stick on dst
server node-02 192.168.122.102:80 check
server node-03 192.168.122.103:80 check backup
server node-04 192.168.122.104:80 check backup

---

Thanks for the assistance.

Ryan


 Ryan O'Hara wrote on 30-1-2014 19:50:
 On Thu, Jan 30, 2014 at 07:39:29PM +0100, PiBa-NL wrote:
 This should (I expect) work with any number of backup servers, as
 long as you only need 1 active.
 Yes, it appears this is exactly what I want. A quick test shows that
 failback is still occurring. Not sure why. Once my primary fails,
 the first backup gets the traffic as expected. Once the primary comes
 back online, it services all requests again.
 
 I'm using 1.4 and my configuration is nearly identical to the example
 shown in the blog, sans the peers.
 
 Ryan
 
 
 