HAProxy on FreeBSD 8.3 with transparent proxying (TProxy?)

2013-04-17 Thread PiBa-NL
20:45:12.800179 IP 192.168.1.50.3588 > 192.168.0.40.81: Flags [P.], ack 2403, win 64260, length 297
20:45:12.800753 IP 192.168.0.40.81 > 192.168.1.50.3588: Flags [.], ack 580, win 65223, length 1260
20:45:12.800871 IP 192.168.0.40.81 > 192.168.1.50.3588: Flags [P.], ack 580, win 65223, length 151
20:45:12.801488 IP 192.168.1.50.3588 > 192.168.0.40.81: Flags [.], ack 3814, win 64260, length 0


See below my configuration of HAProxy:

global
    maxconn 300
    log /var/run/log local6 debug
    stats socket /tmp/haproxy.socket level admin
    nbproc 1
    chroot /var/empty
    daemon

frontend test_pb3
    bind 192.168.1.22:81
    mode http
    log global
    option dontlognull
    maxconn 444
    timeout client 3
    default_backend pb3TEST_http

backend pb3TEST_http
    mode http
    timeout connect 3
    timeout server 3
    retries 3
    option httpchk OPTIONS /
    source 192.168.0.117 usesrc clientip
    server pb3_srv 192.168.0.40:81 check inter 1 weight 1

Could someone give me advice on what might need to change, what to test 
or how i could proceed further with making it work ?


Thanks in advance,
PiBa-NL





Re: HAProxy on FreeBSD 8.3 with transparent proxying (TProxy?)

2013-04-17 Thread PiBa-NL

Hi Baptiste,

Thanks for your reply. I understand that the traffic must pass through 
the router/HAProxy box, and for that part I can confirm that the routing 
of packets is working as it should.


To explain my setup a little more:
Webserver (Win7) 192.168.0.40/24 on an OPT1/DMZ interface, using gateway 
192.168.0.117

HAProxy on FreeBSD 8.3 (pfSense 2.1), which performs routing between the 
2 networks, has 2 interfaces (that matter):

DMZ interface has 192.168.0.117/24
LAN interface has 192.168.1.1/24

ClientPC (WinXP) 192.168.1.50/24 on the LAN, gateway 192.168.1.1

(P.S. To make the picture complete and a little more complicated: the 
FreeBSD and ClientPC machines run within 'VMware Workstation' on the 
Win7 machine. But I'm positive the networks are correctly separated 
(using a 'LAN segment' between FreeBSD and the ClientPC) and traffic 
does flow through the FreeBSD machine.)
All traffic moving between the ClientPC and the Webserver 'must' go 
through the FreeBSD machine, which routes the traffic; I have confirmed 
with Wireshark and tcpdump on all machines that traffic moves correctly 
through the FreeBSD machine.


So I have traced the packets of the connection as follows:
1 - ClientPC browser 'connects' to HAProxy and waits for the webpage (ok)
2 - HAProxy sends a SYN packet to the Webserver with a spoofed source IP (ok)
3 - The Webserver sends its SYN-ACK response for the spoofed source IP to its 
gateway 192.168.0.117 (and the MAC address using that IP) (expected)
4 - FreeBSD passes the SYN-ACK on to the ClientPC (this is what should not 
happen)
## HAProxy waits and retries while a few timeouts occur, and eventually, 
after about 2 minutes, the browser is served the 503 page "No server is 
available to handle this request". ##


So it seems to me that HAProxy should somehow receive the SYN-ACK packet 
in step 4; I'm not sure whether it should listen/bind on all local IPs, 
or whether the traffic should be passed back to a HAProxy port using, 
for example, a NAT rule.


PiBa-NL


On 17-4-2013 21:21, Baptiste wrote:

Hi,

In order to work in transparent proxy mode, the server's default
gateway must be the HAProxy server.
Or at least, the traffic from the server must pass through the haproxy
box before reaching the client.

Even if HAProxy spoofs the client IP, it is HAProxy which initiates
the TCP connection, so if the server tries to reach the client directly,
the client will refuse the connection.

Baptiste


On Wed, Apr 17, 2013 at 8:01 PM, PiBa-NL piba.nl@gmail.com wrote:

I forgot to mention I'm using HAProxy 1.5-dev18.

Hello HAProxy developers/users,

I would like to be able to run HAProxy transparently on FreeBSD 8.3.
This would be both for my own usage and also to make it available to a
larger public by including it in a 'haproxy-devel' package for pfSense.

However when trying to use it i get the error:
[ALERT] 104/235847 (72477) : parsing [/var/etc/haproxy.cfg:34] : 'usesrc'
not allowed here because support for TPROXY was not compiled in.

From what I read, it seems it should be possible.
For example, the Makefile contains the following:
ifeq ($(TARGET),freebsd)
   USE_TPROXY = implicit
which suggests it is supposed to be 'supported'.

I've also tried the USE_LINUX_TPROXY=yes compile flag, but this returns 2
undeclared variables: SOL_IP and SOL_IPV6. I've tried declaring them with
substitute values like IP_BINDANY, or the value 6 (which could stand for
the TCP protocol type), or 0. Though the source did then compile, the
end result was still that either an error was returned to the browser that
no backend was available, together with the following debug error:
[ALERT] 104/235129 (17380) : Cannot bind to tproxy source address before
connect() for backend pb3TEST_http. Aborting.
Or I don't get a response at all and HAProxy seems to be waiting for
'something' to happen..

Could it be that something is not fully supported in HAProxy together with
FreeBSD to allow transparent proxying? Or am I looking at the wrong side of
the problem, and would I need to compile the FreeBSD kernel with tproxy
support? I believe that would be natively supported in version 8, but I
might be wrong on that..


If I add, after setsockopt(fd, SOL_IP, IP_TRANSPARENT, &one, sizeof(one)),
this line:
setsockopt(fd, SOL_IP, IP_FREEBIND, &one, sizeof(one));
it removes the error about 'Cannot bind to tproxy source address...' and
packets do seem to be sent to the proper destination, except the connection
never establishes..

The browser running on 192.168.1.50 contacts HAProxy on its IP:port
http://192.168.1.22:81/
HAProxy then forwards the traffic to the server 192.168.0.40:81, which
according to the status page is L7OK/200 in 0ms.

Also, the reply packets get routed back to the original client PC (Wireshark
confirmed that), and they seem not to get intercepted by HAProxy when
passing through the 'FreeBSD router', which I think is supposed to happen.

But when performing a tcpdump on the interface in the 192.168.0.117 network
only

Re: HAProxy on FreeBSD 8.3 with transparent proxying (TProxy?)

2013-04-18 Thread PiBa-NL

Hi All / Baptiste,

It seems I have found an error in my initial email, sorry for that. It 
should have mentioned IP_BINDANY in the example line of code instead of 
IP_FREEBIND. Still, my problem remains the same.


Returning SYN-ACK packets are not received/intercepted by HAProxy, and 
the browser gets a "No server is available to handle this request." 
after exactly 2 minutes.


At least two changes seem to be needed for compiling/using the 
full-transparent-proxy feature on FreeBSD.

In the file compat.h (or anywhere that fits better) add:

#ifndef SOL_IP
#define SOL_IP    0
#define SOL_IPV6  0
#endif

In the file proto_tcp.c change:

	if (flags && ip_transp_working) {
		if (
#ifdef IP_BINDANY
		    setsockopt(fd, SOL_IP, IP_BINDANY, &one, sizeof(one)) == 0 ||
#endif
		    setsockopt(fd, SOL_IP, IP_TRANSPARENT, &one, sizeof(one)) == 0
		    || setsockopt(fd, SOL_IP, IP_FREEBIND, &one, sizeof(one)) == 0)
		{
			foreign_ok = 1;
		}
		else
			ip_transp_working = 0;
	}

This makes it work for sending the SYN connection request to the 
backend server... However, the SYN-ACK response, though it is passing 
through the HAProxy server, is not received by HAProxy.


A tcpdump with MAC addresses on the LAN side of the FreeBSD router shows 
that .1.50 contacts HAProxy on .1.22:81 and gets a SYN-ACK reply back, 
but after that also SYN-ACK packets from the backend server. As can be 
seen from the MAC addresses, all received traffic comes through the 
FreeBSD machine:
21:03:06.018602 00:0c:29:0d:89:90 > 00:0c:29:b5:f0:fe, ethertype IPv4 (0x0800), length 62: 192.168.1.50.1177 > 192.168.1.22.81: Flags [S], seq 252352307, win 64240, options [mss 1460,nop,nop,sackOK], length 0
21:03:06.018735 00:0c:29:b5:f0:fe > 00:0c:29:0d:89:90, ethertype IPv4 (0x0800), length 62: 192.168.1.22.81 > 192.168.1.50.1177: Flags [S.], seq 2295999072, ack 252352308, win 65228, options [mss 1460,sackOK,eol], length 0
21:03:06.019066 00:0c:29:0d:89:90 > 00:0c:29:b5:f0:fe, ethertype IPv4 (0x0800), length 60: 192.168.1.50.1177 > 192.168.1.22.81: Flags [.], ack 1, win 64240, length 0
21:03:06.020429 00:0c:29:0d:89:90 > 00:0c:29:b5:f0:fe, ethertype IPv4 (0x0800), length 336: 192.168.1.50.1177 > 192.168.1.22.81: Flags [P.], ack 1, win 64240, length 282
21:03:06.020517 00:0c:29:b5:f0:fe > 00:0c:29:0d:89:90, ethertype IPv4 (0x0800), length 54: 192.168.1.22.81 > 192.168.1.50.1177: Flags [.], ack 283, win 65418, length 0
21:03:06.021205 00:0c:29:b5:f0:fe > 00:0c:29:0d:89:90, ethertype IPv4 (0x0800), length 74: 192.168.0.40.81 > 192.168.1.50.49496: Flags [S.], seq 2271887215, ack 4020735996, win 8192, options [mss 1260,nop,wscale 8,sackOK,TS val 144738 ecr 92963], length 0
21:03:09.026308 00:0c:29:b5:f0:fe > 00:0c:29:0d:89:90, ethertype IPv4 (0x0800), length 74: 192.168.0.40.81 > 192.168.1.50.49496: Flags [S.], seq 2271887215, ack 4020735996, win 8192, options [mss 1260,nop,wscale 8,sackOK,TS val 145038 ecr 92963], length 0
21:03:15.025375 00:0c:29:b5:f0:fe > 00:0c:29:0d:89:90, ethertype IPv4 (0x0800), length 70: 192.168.0.40.81 > 192.168.1.50.49496: Flags [S.], seq 2271887215, ack 4020735996, win 8192, options [mss 1260,sackOK,TS val 145638 ecr 92963], length 0


Those last three packets should not go towards the browser PC..



Re: Block url in https

2013-04-24 Thread PiBa-NL
If you're using HAProxy 1.5-dev17 or later, you could also give it a try 
with 'SNI':


use-server www if { req_ssl_sni -i www.example.com }
server www 192.168.0.1:443 weight 0
use-server mail if { req_ssl_sni -i mail.example.com }
server mail 192.168.0.1:587 weight 0
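
Note that for req_ssl_sni to see anything in a TCP-mode frontend, an inspect delay is normally also needed so HAProxy waits for the TLS ClientHello before routing; a minimal sketch (the 5s value is an assumption):

```
tcp-request inspect-delay 5s
tcp-request content accept if { req_ssl_hello_type 1 }
```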

Or use SSL termination to remove the encryption and then be able to use 
'http' mode processing:


bind :443 ssl crt /etc/haproxy/site.pem
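
With such a terminating bind, a hedged sketch of how the original "block public.mydomain.com" question could then be handled in http mode (frontend/backend names are assumptions, and note that http-request deny returns a 403 rather than the 503 asked for):

```
frontend secure_https
    bind *:443 ssl crt /etc/haproxy/site.pem
    mode http
    # once decrypted, the Host header is visible and can be matched
    acl is_public hdr(host) -i public.mydomain.com
    http-request deny if is_public
    default_backend bck_http
```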

-PiBa-NL

On 24-4-2013 18:35, Bryan Talbot wrote:
Since the traffic passing through your port 443 is presumably 
encrypted, by design the proxy can't do anything with the contents, 
including reading them.


-Bryan



On Wed, Apr 24, 2013 at 7:57 AM, Matthieu Boret (mbore...@gmail.com) wrote:


Hi,

I am trying to block a URL (public.mydomain.com) over https, but this
doesn't work. If it's possible, I would like to redirect to a 503
error page.

frontend unsecured
  bind *:80
  mode http
  redirect scheme https

frontend secure_tcp
  mode tcp
  bind *:443 name https
  reqideny ^public
  default_backend bck_tcp


Thanks


Matthieu






Re: API/Programmatic Interface

2013-04-24 Thread PiBa-NL

Hi Dave,

Some of those are possible; see '9.2. Unix Socket commands': 
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.2


For example these, and their disable/get counterparts:

enable server <backend>/<server>
  (http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9-enable%20server)
set weight <backend>/<server> <weight>[%]
  (http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9-set%20weight)


But I think it won't allow you to add new servers, nor change the 
load-balancing algorithm.
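
As a sketch, such commands can be sent to the stats socket with socat (the socket path matches the "stats socket ... level admin" line quoted earlier in this thread; the backend/server names are assumptions):

```
echo "disable server pb3TEST_http/pb3_srv" | socat stdio UNIX-CONNECT:/tmp/haproxy.socket
echo "set weight pb3TEST_http/pb3_srv 50%" | socat stdio UNIX-CONNECT:/tmp/haproxy.socket
echo "show stat" | socat stdio UNIX-CONNECT:/tmp/haproxy.socket
```

This requires the socket to be declared with "level admin", as in the config above.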


Greets PiBa-NL

On 24-4-2013 20:15, Dave Ariens wrote:


Hi subs,

I've looked around online and in the official site's add-ons and other 
solutions areas, but I can't find mention of a programmatic interface 
for managing an HAProxy instance.


What are other users doing to address this?

A few top user stories are:

- Modifying server status

- Adding/removing pool members

- Changing load balancing algorithm

- Adjusting weight

- more?

--

Dave Ariens

Platform Architect, Cloud Automation

BlackBerry Infrastructure Engineering

dariens (at) BlackBerry.com

+1-519-888-7465 x76792

-
This transmission (including any attachments) may contain confidential 
information, privileged material (including material protected by the 
solicitor-client or other applicable privileges), or constitute 
non-public information. Any use of this information by anyone other 
than the intended recipient is prohibited. If you have received this 
transmission in error, please immediately reply to the sender and 
delete this information from your system. Use, dissemination, 
distribution, or reproduction of this transmission by unintended 
recipients is not authorized and may be unlawful. 




Re: Client ip gets lost after a request being passed through two haproxies?

2013-04-25 Thread PiBa-NL

Hey Wei Kong,

You're probably using 'option forwardfor' 
(http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4-option%20forwardfor), 
right?


Think for a second about how that option works:
- HAProxy B receives a connection from the client IP and adds a header 
to the HTTP traffic: X-Forwarded-For: c.l.i.ent
- Then HAProxy A receives a connection from HAProxy B and adds another 
X-Forwarded-For: b.b.b.b header.
- Now Nginx receives the connection from HAProxy A, and the message might 
contain 2 X-Forwarded-For headers, of which only the last header is 
used (as it should be).


So now you know why it happens (if my assumptions are correct). The 
solution is simple: don't let HAProxy A add another X-Forwarded-For 
header when HAProxy B makes the connection.


So either:
- remove the option from HAProxy A
- configure an 'except <network>' on HAProxy A's forwardfor option

Or possibly this might also work:
- use send-proxy and accept-proxy for the connection between HAProxy B and 
HAProxy A
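
For the 'except' route, a minimal sketch of HAProxy A's backend (the network and names are assumptions; the except argument should match the address HAProxy B connects from):

```
backend bk_nginx
    mode http
    # only add X-Forwarded-For when the connection does NOT come from
    # HAProxy B's network, so the header B already added is preserved
    option forwardfor except 192.168.0.0/24
    server nginx1 10.0.0.10:80
```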


PiBa-NL

On 25-4-2013 19:29, Wei Kong wrote:

Hi,

We have an HAProxy (A) in front of Nginx and it has been working great; 
we can get the client IP without any problem.

HAProxy A -> Nginx

Recently we added another HAProxy (B) in front of the first HAProxy (A); 
from that point on, we noticed that the client IP becomes the new 
HAProxy's (HAProxy B) IP instead.



HAProxy B (new one) -> HAProxy A -> Nginx

Is there a known issue passing the client IP through more than one HAProxy?

Wei




Re: HAProxy on FreeBSD 8.3 with transparent proxying (TProxy?)

2013-04-26 Thread PiBa-NL

Hi Willy,

Sorry for the weird syntax. I made the text 'bold', but that seems to 
have come out differently...


Anyway, I hope the 'patch' below is something you can work with.
As for renaming CONFIG_HAP_LINUX_TPROXY to something different: that would 
require everyone who builds HAProxy with this feature on a regular basis 
to change their build flags, so I don't think it should be 
renamed/removed. Also, while adding another flag would serve clarity, I 
don't think it really adds that much ease of use, and it would require 
new make scripts and several other changes throughout where transparent 
proxying is implemented.


I've changed the defines a little more, to be (I think) 'best compatible' 
under any circumstances.


Is this something that you could 'apply'?:

--- workoriginal/haproxy-1.5-dev18/include/common/compat.h  2013-04-26 19:36:15.0 +
+++ work/haproxy-1.5-dev18/include/common/compat.h  2013-04-26 20:32:15.0 +

@@ -81,7 +81,16 @@
 #endif

 /* On Linux, IP_TRANSPARENT and/or IP_FREEBIND generally require a kernel patch */
+/* On FreeBSD, IP_BINDANY is supported from FreeBSD 8 and up */
 #if defined(CONFIG_HAP_LINUX_TPROXY)
+  #if defined(BSD) && defined(IP_BINDANY) && defined(IPV6_BINDANY)
+    /* FreeBSD defines */
+    #define SOL_IP   IPPROTO_IP
+    #define SOL_IPV6 IPPROTO_IPV6
+    #define IP_TRANSPARENT   IP_BINDANY
+    #define IPV6_TRANSPARENT IPV6_BINDANY
+  #endif
+
 #if !defined(IP_FREEBIND)
 #define IP_FREEBIND 15
 #endif /* !IP_FREEBIND */



On 26-4-2013 8:33, Willy Tarreau wrote:

Hi,

On Fri, Apr 26, 2013 at 12:55:23AM +0200, PiBa-NL wrote:

Hi All / Developers,

It seems I have transparent proxying working now on FreeBSD 8.3 with
HAProxy 1.5-dev18 + a small modification.
I needed to add a firewall forwarding rule to forward the traffic to the
localhost for socket processing.

Could a developer please make the following change?

/* Add the following on line 33 of include/common/compat.h */
#ifdef IP_BINDANY
  /* FreeBSD defines */
  #define SOL_IP         IPPROTO_IP
  #define SOL_IPV6       IPPROTO_IPV6
  #define IP_TRANSPARENT IP_BINDANY
#endif

It's quite hard to exactly understand what needs to be changed with such
a syntax, could you please send a standard patch ? For this, just do a
diff -urN between the original source directory and the modified one.

Also I'm wondering whether we should define USE_FREEBSD_TPROXY instead of
USE_LINUX_TPROXY for this. Maybe we should rename CONFIG_HAP_LINUX_TPROXY
to CONFIG_HAP_FULL_TPROXY and adapt it depending on the OS.


After this, haproxy can be successfully compiled on FreeBSD 8.3 with the
USE_LINUX_TPROXY=yes build option, and transparent proxying works when
the fwd firewall rule is made active.

On my pfSense 2.1 system the following worked to load ipfw and add the
fwd rule in ipfw:
/sbin/kldload ipfw
/sbin/sysctl net.inet.ip.pfil.inbound=pf net.inet6.ip6.pfil.inbound=pf net.inet.ip.pfil.outbound=pf net.inet6.ip6.pfil.outbound=pf
/sbin/sysctl net.link.ether.ipfw=1
ipfw_context -a haproxy
ipfw_context -s haproxy
ipfw_context -a haproxy -n em0
ipfw -x haproxy add 20 fwd localhost tcp from IP-BACKEND-SERVER 80 to any in recv em0

(This firewall rule should actually also check for the correct 'uid' of
the haproxy process, to still allow directly contacting the backend
server, but I could not get that part to work; that is not a HAProxy
issue though, so it should get fixed elsewhere.) Ideally it should also
be possible with 'pf' instead of 'ipfw', but that is still something I'm
trying to investigate..

Maybe such information should go into a dedicated file in the doc/ directory.


If this is not the correct way to fix/change this for FreeBSD could
someone please advice on what is.?
Thanks in advance.

PiBa-NL

Thanks,
Willy






Re: HAProxy on FreeBSD 8.3 with transparent proxying (TProxy?)

2013-04-26 Thread PiBa-NL

Hi Willy / Lukas,

It seems to me OpenBSD doesn't support the IP_BINDANY flag:
http://www.openbsd.org/cgi-bin/cvsweb/~checkout~/src/sys/netinet/in.h

While FreeBSD does:
http://svnweb.freebsd.org/base/head/sys/netinet/in.h?view=markup

But then again, neither of them supports SOL_IP, so I would expect 
compilation to simply fail when trying to compile with the 
USE_LINUX_TPROXY option.
The combination, I think, is unlikely to cause problems for other 
currently working builds/systems.


If you want, I can probably come up with a combination that makes it work 
for FreeBSD with a special USE_FREEBSD_TPROXY make option.


Or go for 'fully automatic inclusion' depending on available flags, 
which I think is even 'nicer', but probably needs more testing to 
confirm it works properly.

I would be willing to make these changes. Is this the way to go?

Thanks for reviewing my proposed changes so far.
PiBa-NL

On 26-4-2013 22:40, Willy Tarreau wrote:

Hi Lukas,

On Fri, Apr 26, 2013 at 10:26:33PM +0200, Lukas Tribus wrote:

Hi,

throwing in my two cents here, based on a few uneducated guesses reading
the Makefile, etc. Feel free to disagree/correct/shout at me :)

Thanks for sharing your thoughts, I feel less alone sometimes when I can
discuss choices.


(actually I wrote this before Willy answered)



As for renaming the CONFIG_HAP_LINUX_TPROXY to something different would
require everyone that on a regular basis builds HAProxy with this
feature to change their build flags..

The name CONFIG_HAP_LINUX_TPROXY or USE_LINUX_TPROXY suggests this is for
Linux. Implementing compatibility changes for FreeBSD in those flags is
misleading, whether they are internal (like CONFIG_HAP_LINUX_TPROXY) or
external (USE_LINUX_TPROXY).

I think we should avoid that.

OK, that was my first impression as well. It looks unpolished but can
be understood sometimes during a transition, but not for the final state.


would require new make scripts and several other changes though-out where
transparent proxying is implemented.

If you built haproxy before on FreeBSD and used transparent proxying, then
it probably didn't work at all (as in your case), or those values had been
defined in another place, like by libc or by a manual definition.

Either way, we don't break anything that currently works by introducing a
new build flag. So you would only have to adjust your make line if you
actually need it, and I think that does less long-term harm than defining
those things under the USE_LINUX_TPROXY/CONFIG_HAP_LINUX_TPROXY hat.



Also I'm wondering whether we should define USE_FREEBSD_TPROXY instead of
USE_LINUX_TPROXY for this. Maybe we should rename CONFIG_HAP_LINUX_TPROXY
to CONFIG_HAP_FULL_TPROXY and adapt it depending on the OS.

Yes, that makes more sense to me.

We should probably clarify the condition with OpenBSD. I assume those
defines are the same for all BSD flavors? So should we introduce a more
generic flag like USE_BSD_TPROXY instead to avoid a USE flag for every
BSD or does the difference between them justify a per-BSD USE flag?

OK I have reviewed the existing code a bit. Most of the usages of
CONFIG_HAP_LINUX_TPROXY are :
   1) define the appropriate flags when they're not defined (kernel newer
  than libc)
   2) enable parsing the option
   3) enable the setsockopt calls (one of which is wrong for FBSD).

So what I'm thinking about is to change that :

   1) solely rely on the various per-OS flags to decide whether or not
  transparent proxy is supported (eg: IP_FREEBIND, IP_TRANSPARENT,
  IP_BINDANY, ...). That way we don't need an OS-specific option
  for something that automatically comes with the OS and that can
  be detected using a #ifdef and is enabled using a config setting
  anyway (eg: the transparent or usesrc keyword).

   2) keep CONFIG_HAP_LINUX_TPROXY to force setting the values on linux
  when they're not set (as it initially was made for)

   3) only implement the setsockopt() that have their appropriate define.

   4) report in the -vv output what options are supported.

Thus it will become trivial to add support for other OSes (I believe
OpenBSD also supports it).

What do you think about this ?

Regards,
Willy





Re: HAProxy on FreeBSD 8.3 with transparent proxying (TProxy?)

2013-04-26 Thread PiBa-NL

Hi Willy,

I'll give it a try and send the patch as an attachment, though I'm not 
100% comfortable with the code. I think I can do it.

It will take me a few days though..

Thanks so far.

On 26-4-2013 23:12, Willy Tarreau wrote:

On Fri, Apr 26, 2013 at 11:03:00PM +0200, PiBa-NL wrote:

Hi Willy / Lukas,

It seems to me OpenBSD doesn't support the IP_BINDANY flag:
http://www.openbsd.org/cgi-bin/cvsweb/~checkout~/src/sys/netinet/in.h

it seems it has, but differently:

http://unix.derkeiler.com/Mailing-Lists/FreeBSD/net/2008-07/msg00399.html


While FreeBSD does:
http://svnweb.freebsd.org/base/head/sys/netinet/in.h?view=markup

But then again, neither of them supports SOL_IP, so I would expect
compilation to simply fail when trying to compile with the
USE_LINUX_TPROXY option.

Which is exactly the reason I don't want to remap these things which
are linux-specific, and instead use the proper call depending on the
available flags. Eg something like this :

#if defined(SOL_IP) && defined(IP_TRANSPARENT)
 /* linux */
 ret = setsockopt(fd, SOL_IP, IP_TRANSPARENT, &one, sizeof(one));
#elif defined(IPPROTO_IP) && defined(IP_BINDANY)
 /* freebsd */
 ret = setsockopt(fd, IPPROTO_IP, IP_BINDANY, &one, sizeof(one));
#elif defined(SOL_SOCKET) && defined(SO_BINDANY)
 /* openbsd */
 ret = setsockopt(fd, SOL_SOCKET, SO_BINDANY, &one, sizeof(one));
#else
 /* unsupported platform */
 ret = -1;
#endif


The combination i think is unlikely to cause problems for other
currently working builds/systems..

If you want i can probably come up with a combination that makes it work
for FreeBSD with a special USE_FREEBSD_TPROXY make option.

No, really I think something like above is much better for the long
term. It's more work to adapt existing code first but will pay in the
long term, even in the short term if it allows us to support OpenBSD
at the same time.


Or go for the 'full automatic inclusion' depending on available flags.
Which i think is even 'nicer'. But probably needs more testing to
confirm proper working..
I would be willing to make these changes. Is this the way to go?

As you like, if you feel comfortable with changing the way the current
code works (the linux-specific one), feel free to try, otherwise I can
do it over the week-end, and then a second patch derived from yours will
bring in support for FreeBSD then OpenBSD if someone here is able to
test it.


Thanks for reviewing my proposed changes sofar.

you're welcome :-)

Willy






Re: HAProxy on FreeBSD 8.3 with transparent proxying (TProxy?)

2013-04-27 Thread PiBa-NL

Hi Willy,

It seems the changes were easier than I expected, assuming I've done 
them 'correctly'...

I generated 2 patch files:
- "FreeBSD IP_BINDANY git diff.patch", generated with a git diff (against 
a hopefully relatively recent source tree) (I couldn't get it to fetch 
http://git.1wt.eu/git/haproxy.git ..)
- "FreeBSD IP_BINDANY diff -urN.patch", generated with diff -urN (against 
the 'port source')


I hope one of them can be used by you.
Please take a look and comment if something is amiss.

Greetings
PiBa-NL


diff -urN workoriginal/haproxy-1.5-dev18/include/common/compat.h work/haproxy-1.5-dev18/include/common/compat.h
--- workoriginal/haproxy-1.5-dev18/include/common/compat.h  2013-04-26 19:36:15.0 +
+++ work/haproxy-1.5-dev18/include/common/compat.h  2013-04-27 14:56:27.0 +
@@ -93,6 +93,15 @@
 #endif /* !IPV6_TRANSPARENT */
 #endif /* CONFIG_HAP_LINUX_TPROXY */
 
+#if (defined(SOL_IP)       && defined(IP_TRANSPARENT)) \
+ || (defined(SOL_IPV6)     && defined(IPV6_TRANSPARENT)) \
+ || (defined(SOL_IP)       && defined(IP_FREEBIND)) \
+ || (defined(IPPROTO_IP)   && defined(IP_BINDANY)) \
+ || (defined(IPPROTO_IPV6) && defined(IPV6_BINDANY)) \
+ || (defined(SOL_SOCKET)   && defined(SO_BINDANY))
+  #define HAP_TRANSPARENT
+#endif
+
+
 /* We'll try to enable SO_REUSEPORT on Linux 2.4 and 2.6 if not defined.
  * There are two families of values depending on the architecture. Those
  * are at least valid on Linux 2.4 and 2.6, reason why we'll rely on the
diff -urN workoriginal/haproxy-1.5-dev18/include/types/connection.h work/haproxy-1.5-dev18/include/types/connection.h
--- workoriginal/haproxy-1.5-dev18/include/types/connection.h   2013-04-26 19:36:15.0 +
+++ work/haproxy-1.5-dev18/include/types/connection.h   2013-04-27 14:56:30.0 +
@@ -219,7 +219,7 @@
 char *iface_name;                    /* bind interface name or NULL */
 struct port_range *sport_range;      /* optional per-server TCP source ports */
 struct sockaddr_storage source_addr; /* the address to which we want to bind for connect() */
-#if defined(CONFIG_HAP_CTTPROXY) || defined(CONFIG_HAP_LINUX_TPROXY)
+#if defined(CONFIG_HAP_CTTPROXY) || defined(HAP_TRANSPARENT)
 struct sockaddr_storage tproxy_addr; /* non-local address we want to bind to for connect() */
 char *bind_hdr_name; /* bind

Re: SSL offloading configuration

2013-04-30 Thread PiBa-NL

Hi Chris,

That seems possible already?
If you already have SSL offloading configured, all you need to add is
the 'ssl' option to your backend servers.


-- 
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2 
--

*ssl http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5-ssl*

This option enables SSL ciphering on outgoing connections to the server. At
the moment, server certificates are not checked, so this is prone to man in
the middle attacks. The real intended use is to permit SSL communication
with software which cannot work in other modes over networks that would
otherwise be considered safe enough for clear text communications. When this
option is used, health checks are automatically sent in SSL too unless there
is a port  http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#port or 
an addr  http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#addr directive 
indicating the check should be sent to a
different location. See the check-ssl  
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#check-ssl option to 
force SSL health checks.

--

On 30-4-2013 14:47, Chris Sarginson wrote:

Hi,

Are there any plans to allow HAProxy to take the traffic that it can 
now SSL offload, perform header analysis, and then use an SSL 
encrypted connection to the backend server?


I have a situation where I need to be able to use ACLs against SSL 
encrypted traffic, but then continue passing the traffic to the 
backend over an encrypted connection.  This is specifically a security 
concern, rather than an issue with poor code.


Cheers
Chris





Re: Transparent TCP LoadBalancing on FreeBSD

2013-05-02 Thread PiBa-NL

Hi ZeN  Willy,

To use transparent proxying on FreeBSD you currently need to compile 
with USE_LINUX_TPROXY=yes.

And make a few changes to the source code (else it won't compile).
As a quick and dirty fix you could (manually?) apply this patch [1]:
http://marc.info/?l=haproxy&m=136700170314757&w=2


For the better/cleaner fix this one should be usable [2]:
http://marc.info/?l=haproxy&m=136707895800761&w=2 , which is what I
would like to get committed to the main HAProxy source tree.

@Willy could you take a look at the patch attached to that mail [2] ?

Greets,
PiBa-NL

On 2-5-2013 5:13, ZeN wrote:

Dear Users,
sorry if I open a new thread,
but I really want to solve this problem.
I managed to compile haproxy via ports with TPROXY:

haproxy -vv
HA-Proxy version 1.5-dev18 2013/04/03
Copyright 2000-2013 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -pipe -fno-strict-aliasing -DFREEBSD_PORTS
  OPTIONS = USE_TPROXY=1 USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 
USE_PCRE=1


Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.7
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 0.9.8y 5 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes


but when I started the service with the 'source 0.0.0.0 usesrc
clientip' option, haproxy won't start, with this message:


parsing [/usr/local/etc/haproxy.conf:28] : 'usesrc' not allowed here 
because support for TPROXY was not compiled in.


What should I do to make haproxy compile with the transparent option?



Rgds

ZeN






Re: TProxy debugging

2013-05-07 Thread PiBa-NL

Hi Eduard,

I'm not sure about your iptables rules, as I use pf/ipfw on FreeBSD myself...
But to me it looks like those last 4 [SYN] packets should have shown up in
a packet capture on your webserver, unless they are re-routed elsewhere.


You could try a different IP in the source option :
  source 0.0.0.0 usesrc clientip

Could you also remove all special packet re-routing/divert rules from
the haproxy box? And check again whether the webserver then receives a
SYN from the client IP and sends back a SYN-ACK to the HAProxy server?


It still won't work then, because the HAProxy process won't actually
receive the SYN-ACK, but it should show up on the LAN interface of that
machine.


Then the remaining issue is how to write the proper redirect rule for 
the 'return traffic' coming from the webserver and point it to the 
'local machine'..


As for the iptables rules, some other folks can probably help better. But I
hope this helps with the 'debugging' a bit :).
Also I found it useful to start haproxy with the -d -V parameters to
show on-screen what happens (it told me it couldn't bind to a non-local IP
on my first tries).


Greets
PiBa-NL


Re: HAProxy on FreeBSD 8.3 with transparent proxying (TProxy?)

2013-05-08 Thread PiBa-NL

Hi Willy,

Could you please let me know what your findings are about the proposed
patch?
Does it need some more work, is it implemented wrongly, or would it help
if I send my current haproxy.cfg file?


If I need to change something please let me know, thanks.

Thanks for your time,
PiBa-NL

On 3-5-2013 18:03, Willy Tarreau wrote:
Hi,
sorry, I missed it.
I'll give it a look and merge it if it's OK.
Thanks, Willy

On 27-4-2013 18:08, PiBa-NL wrote:

Hi Willy,

I generated 2 patch files:
-FreeBSD IP_BINDANY git diff.patch generated with a git diff
(against a hopefully relatively recent source tree) (I couldn't get it
to fetch http://git.1wt.eu/git/haproxy.git ..)
-FreeBSD IP_BINDANY diff -urN.patch generated with diff -urN 
(against the 'port source')


I hope one of them can be used by you.
Please take a look and comment if something is amiss.

Greetings
PiBa-NL
diff -urN workoriginal/haproxy-1.5-dev18/include/common/compat.h 
work/haproxy-1.5-dev18/include/common/compat.h
--- workoriginal/haproxy-1.5-dev18/include/common/compat.h  2013-04-26 
19:36:15.0 +
+++ work/haproxy-1.5-dev18/include/common/compat.h  2013-04-27 
14:56:27.0 +
@@ -93,6 +93,15 @@
 #endif /* !IPV6_TRANSPARENT */
 #endif /* CONFIG_HAP_LINUX_TPROXY */

+#if (defined(SOL_IP) && defined(IP_TRANSPARENT)) \
+ || (defined(SOL_IPV6) && defined(IPV6_TRANSPARENT)) \
+ || (defined(SOL_IP) && defined(IP_FREEBIND)) \
+ || (defined(IPPROTO_IP) && defined(IP_BINDANY)) \
+ || (defined(IPPROTO_IPV6) && defined(IPV6_BINDANY)) \
+ || (defined(SOL_SOCKET) && defined(SO_BINDANY))
+  #define HAP_TRANSPARENT
+#endif
+
 /* We'll try to enable SO_REUSEPORT on Linux 2.4 and 2.6 if not defined.
  * There are two families of values depending on the architecture. Those
  * are at least valid on Linux 2.4 and 2.6, reason why we'll rely on the
diff -urN workoriginal/haproxy-1.5-dev18/include/types/connection.h 
work/haproxy-1.5-dev18/include/types/connection.h
--- workoriginal/haproxy-1.5-dev18/include/types/connection.h   2013-04-26 
19:36:15.0 +
+++ work/haproxy-1.5-dev18/include/types/connection.h   2013-04-27 
14:56:30.0 +
@@ -219,7 +219,7 @@
char *iface_name;/* bind interface name or NULL */
struct port_range *sport_range;  /* optional per-server TCP source 
ports */
struct sockaddr_storage source_addr; /* the address to which we want to 
bind for connect() */
-#if defined(CONFIG_HAP_CTTPROXY) || defined(CONFIG_HAP_LINUX_TPROXY)
+#if defined(CONFIG_HAP_CTTPROXY) || defined(HAP_TRANSPARENT)
struct sockaddr_storage tproxy_addr; /* non-local address we want to 
bind to for connect() */
char *bind_hdr_name; /* bind to this header name if 
defined */
int bind_hdr_len;/* length of the name of the 
header above */
diff -urN workoriginal/haproxy-1.5-dev18/src/backend.c 
work/haproxy-1.5-dev18/src/backend.c
--- workoriginal/haproxy-1.5-dev18/src/backend.c2013-04-26 
19:36:15.0 +
+++ work/haproxy-1.5-dev18/src/backend.c2013-04-27 14:56:32.0 
+
@@ -884,7 +884,7 @@
  */
 static void assign_tproxy_address(struct session *s)
 {
-#if defined(CONFIG_HAP_CTTPROXY) || defined(CONFIG_HAP_LINUX_TPROXY)
+#if defined(CONFIG_HAP_CTTPROXY) || defined(HAP_TRANSPARENT)
	struct server *srv = objt_server(s->target);
struct conn_src *src;

diff -urN workoriginal/haproxy-1.5-dev18/src/cfgparse.c 
work/haproxy-1.5-dev18/src/cfgparse.c
--- workoriginal/haproxy-1.5-dev18/src/cfgparse.c   2013-04-26 
19:36:15.0 +
+++ work/haproxy-1.5-dev18/src/cfgparse.c   2013-04-27 14:56:33.0 
+
@@ -4535,8 +4535,8 @@
cur_arg += 2;
while (*(args[cur_arg])) {
if (!strcmp(args[cur_arg], "usesrc")) {  /* address to use outside */
-#if defined(CONFIG_HAP_CTTPROXY) || defined(CONFIG_HAP_LINUX_TPROXY)
-#if !defined(CONFIG_HAP_LINUX_TPROXY)
+#if defined(CONFIG_HAP_CTTPROXY) || defined(HAP_TRANSPARENT)
+#if !defined(HAP_TRANSPARENT)
if (!is_addr(newsrv->conn_src.source_addr)) {
Alert("parsing [%s:%d] : '%s' requires an explicit '%s' address.\n",
      file, linenum, "usesrc", "source");
@@ -4625,7 +4625,7 @@
newsrv->conn_src.opts |= CO_SRC_TPROXY_ADDR;
}
global.last_checks |= LSTCHK_NETADM;
-#if !defined(CONFIG_HAP_LINUX_TPROXY)
+#if !defined(HAP_TRANSPARENT)
global.last_checks |= LSTCHK_CTTPROXY;
 #endif
cur_arg += 2;
@@ -4635,7 +4635,7

Re: HAProxy on FreeBSD 8.3 with transparent proxying (TProxy?)

2013-05-08 Thread PiBa-NL

Hi Willy,

If you make some changes to what you think/know is better, and break the
change into two parts, that is fine for me.


About calling setsockopt multiple times: I thought 'ret |=' would not
evaluate the call behind it if ret is already 1, but I'm not absolutely
sure about that.
I didn't think of starting an if statement with '0 ||', which might save
a clock tick or two, so it would be better anyway instead of having a
variable assignment in between.
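To make the behaviour concrete, here is a small self-contained sketch (probe() is a hypothetical stand-in for the setsockopt() calls, not HAProxy code) showing that '|=' always evaluates its right-hand side, while a '0 || ...' chain stops at the first success:

```c
#include <assert.h>

/* Each probe() stands in for one of the setsockopt() calls discussed
 * above. It counts how often it is called and always "succeeds". */
static int calls;

static int probe(void)
{
	calls++;
	return 0; /* 0 == success, like setsockopt() */
}

/* The 'ret |=' pattern: |= is a plain compound assignment, NOT a
 * short-circuit operator, so every probe runs even after a success. */
static int try_all(void)
{
	int ret = 0;

	ret |= probe() == 0;
	ret |= probe() == 0;
	return ret;
}

/* The 'if (0 || ...)' pattern: || short-circuits, so probing stops
 * at the first success. */
static int try_until_success(void)
{
	return (0
	        || probe() == 0
	        || probe() == 0);
}
```

So with '|=' both calls run; with '||' only the first one does.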


Thanks; could you let me know when it's ready? Then I'll give it another
compile check on FreeBSD, and provide a little 'documentation' on how I
configured the 'ipfw' firewall/NAT to make it work.


p.s.
Ive spotted a issue in my patch with the IPv6 part where i forgot about 
the OpenBSD part (SOL_SOCKET  SO_BINDANY) should probably be added 
there also.


PiBa-NL

On 8-5-2013 20:18, Willy Tarreau wrote:

Hi,

On Wed, May 08, 2013 at 07:34:19PM +0200, PiBa-NL wrote:

Hi Willy,

Could you please let me know what your findings are about the proposed
patch?

I was on it this afternoon (didn't have time earlier) :-)

I haven't finished reviewing it yet, because I was trying to figure if
there would be an easy way to merge the CTTPROXY mode into the other
transparent proxy options, but I'm not sure that's really useful.

Also I found one issue here :

+   int ret = 0;
+   #if defined(SOL_IP) && defined(IP_TRANSPARENT)
+   ret |= setsockopt(fd, SOL_IP, IP_TRANSPARENT, &one, sizeof(one)) == 0;
+   #endif
+   #if defined(SOL_IP) && defined(IP_FREEBIND)
+   ret |= setsockopt(fd, SOL_IP, IP_FREEBIND, &one, sizeof(one)) == 0;
+   #endif
+   #if defined(IPPROTO_IP) && defined(IP_BINDANY)
+   ret |= setsockopt(fd, IPPROTO_IP, IP_BINDANY, &one, sizeof(one)) == 0;
+   #endif
+   #if defined(SOL_SOCKET) && defined(SO_BINDANY)
+   ret |= setsockopt(fd, SOL_SOCKET, SO_BINDANY, &one, sizeof(one)) == 0;
+   #endif
+   if (ret)

As you can see, if we have multiple defines, we'll call setsockopt multiple
times, which we don't want. I was thinking about something like this instead :

if (0
#if cond1
 || setsockopt(fd, SOL_IP, IP_TRANSPARENT, &one, sizeof(one)) == 0
#endif
#if cond2
 || setsockopt(fd, IPPROTO_IP, IP_BINDANY, &one, sizeof(one)) == 0
#endif
)
...

I'm still on it right now, to ensure we don't break anything.


Does it need some more work, is it implemented wrongly, or would it help
if i send my current haproxy.cfg file?

If i need to change something please let me know, thanks.

I do not think so, I can easily perform the changes above myself, I won't
harass you with another iteration. Overall it's good but since we're
changing many things at once, I'm cautious. I'd prefer to break it in
two BTW :
   1) change existing code to support CONFIG_HAP_TRANSPARENT everywhere
   2) add FreeBSD support

But if that's OK for you, I'll simply perform the small adjustments
before merging it.

Cheers,
Willy






Re: HAProxy on FreeBSD 8.3 with transparent proxying (TProxy?)

2013-05-09 Thread PiBa-NL

Hi Willy,

Thanks, the patches look good, and when applied separately all compile 
without issues on FreeBSD. (Except when using the USE_LINUX_TPROXY flag, 
but that shouldn't be used on FreeBSD anyway.)

And transparent proxying works correctly on FreeBSD as was expected.

I've included a transparent_proxy.cfg which could be added to the
'examples' folder of HAProxy, though I'm not sure anyone would find it
there.
It also includes quite a few 'supposedly' better configuration hints
that I'm currently unable to verify, but which might still help someone
in the future.


As for Linux, I think some iptables rules are needed. Maybe someone can
add those to the example.
If you don't want to include it in its current form, or want to reformat
the whole thing, I have no problem with that.


Thanks,
Pieter Baauw

On 8-5-2013 23:54, Willy Tarreau wrote:

OK here's what I came up with. There are 3 patches :

   - 0001 : reorganize flags processing
   - 0002 : add support for freebsd
   - 0003 : add support for openbsd

Please review and test if you can. At least it seems OK on linux here.
I have written all the commit messages. Feel free to change them if you
want, as they're made under your name. If you want to provide additional
doc, let's just add a 4th patch on top of this.

The code is not quite beautiful, but that's always the price to pay
when playing with ifdefs, and there are already a large number of them
in the same functions anyway.

Also, if you could provide a real name for the commits, it would be nice!

Thanks!
Willy



#
# This is an example of how to configure HAProxy to be used as a 'full 
transparent proxy' for a single backend server.
#
# Note that to actually make this work, extra firewall/NAT rules are required.
# Also HAProxy needs to be compiled with support for this; in HAProxy 1.5-dev19
# you can check if this is the case with 'haproxy -vv'.
#

global
frontend MyFrontend
bind192.168.1.22:80
default_backend TransparentBack_http

backend TransparentBack_http
modehttp
source 0.0.0.0 usesrc client
server  MyWebServer 192.168.0.40:80

#
# To create the NAT rules perform the following:
#
# ### (FreeBSD 8) ###
# --- Step 1 ---
# ipfw is needed to get 'reply traffic' back to the HAProxy process, this can 
be achieved by configuring a rule like this:
#   fwd localhost tcp from 192.168.0.40 80 to any in recv em0
#
# The following would be even better, but this did not seem to work on the
# pfSense 2.1 distribution of FreeBSD 8.3:
#   fwd 127.0.0.1:80 tcp from any 80 to any in recv ${outside_iface} uid 
${proxy_uid}
#
# If only 'pf' is currently used, some additional steps are needed to load and
# configure ipfw:
# You need to configure this to always run on startup:
#
# /sbin/kldload ipfw
# /sbin/sysctl net.inet.ip.pfil.inbound=pf net.inet6.ip6.pfil.inbound=pf 
net.inet.ip.pfil.outbound=pf net.inet6.ip6.pfil.outbound=pf
# /sbin/sysctl net.link.ether.ipfw=1
# ipfw add 10 fwd localhost tcp from 192.168.0.40 80 to any in recv em0
#
# the above does the following:
# - load the ipfw kernel module
# - set pf as the outer firewall to keep control of routing packets, for example
#   to route them to a non-default gateway
# - enable ipfw
# - set a rule to catch reply traffic on em0 coming from the webserver
#
# --- Step 2 ---
# To also make the client connection transparent it is possible to redirect
# incoming requests to HAProxy with a pf rule:
#   rdr on em1 proto tcp from any to 192.168.0.40 port 80 -> 192.168.1.22
# here em1 is the interface that faces the clients, and traffic that is
# originally sent straight to the webserver is redirected to HAProxy
#
# ### (FreeBSD 9) (OpenBSD 4.4) ###
#   pf supports divert-reply, which is probably better suited for the job
# above than ipfw.
#

Re: Haproxy SSL Termination question

2013-05-15 Thread PiBa-NL

Hi Joe,
Sounds like you need the 'ssl' option for your backend server.

This option enables SSL ciphering on outgoing connections to the server.

Just below the source option: 
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5-source

(#5-ssl links to the wrong part of the help: to 'bind' instead of 'server')
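As a sketch (the frontend/backend names, addresses and certificate path are made up for illustration), terminating client SSL on HAProxy and re-encrypting towards the backend could look like this:

```
frontend fe_https
    # terminate the client's SSL here; 'crt' points at a hypothetical pem file
    bind 192.168.1.22:443 ssl crt /path/to/cert.pem
    mode http
    default_backend be_reencrypt

backend be_reencrypt
    mode http
    # 'ssl' on the server line re-encrypts towards the server, so the
    # backend still sees an https connection on port 443
    server app1 192.168.0.40:443 ssl
```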

On 15-5-2013 23:14, Joseph Hardeman wrote:

Hi Everyone,

I am in need of a little help. Currently I need to send traffic to a
haproxy setup and terminate the SSL certificate there, which I have
working. But until I can get a backend application changed (it redirects
to a login page when it gets an https request), is there any way I can
connect to the backend server(s) over port 443, so it fakes it to the
server and the page redirection continues to work? At least until we can
get the code updated to use, say, port 8443 on the server instead of
443?


Just curious and thought I would ask the experts out there. :-)

Thanks in advance.

Joe





Re: Configuring different backends using ACL

2013-06-19 Thread PiBa-NL

Hi Ahmed,
A small question/clarification request:
what happens when you browse directly to the jboss backend, like this:
http://jbosserver/jboss/ ? Do you get this same 'not found'?


So haproxy is then forwarding the request like it should, but what I
think you want is that haproxy will forward a request for
http://haproxy/jboss/index.html to http://jbossserver/index.html , is
that indeed what you want/expect? I'm not sure that's actually possible
(also, links sent in a response would not point to the subfolder).


greets PiBa-NL
On 19-6-2013 23:26, Lukas Tribus wrote:

Hi Ahmed!



Any suggestions?

Post the complete configuration, including default, global and all backend
sections (checking mode http, httpclose, etc). If that doesn't lead to any
conclusion, we will need you to start haproxy in debug mode, capture the
request and post it on the list, so we can confront it with the
configuration.


Regards,

Lukas   





Re: GIT clone fails, how to proceed?

2013-06-22 Thread PiBa-NL

Hi Lukas,

Thanks, that works indeed.

Maybe it's worth mentioning this URL on the website's main page, where
the links to the latest versions are also present?
I find it strange that the 'normal' git repository (though slow) is
unable to clone correctly, but I guess that's not so important if there
is a good workaround / secondary up-to-date repository.


PiBa-NL

On 22-6-2013 1:22, Lukas Tribus wrote:

Hi!


When trying to clone the repository it always seems to fail (there have
been more reports of this in emails/IRC from other users).
It also seems to take ages before it fails.

I'm using the formilux mirror, which is mentioned in the README:
git clone http://master.formilux.org/git/people/willy/haproxy.git/

Its up-to-date, reliable and fast.


Lukas   





Re: Does the transparent can't work in FreeBSD?

2013-07-09 Thread PiBa-NL

Hi Jinge,

I'm not exactly sure how this is supposed to work, but I did manage to get
transparent proxying for the server side working (the server is presented
with a connection from the original client IP). This works with haproxy
1.5-dev19 on FreeBSD 8.3 with the help of some ipfw fwd rules.


Your config also seems to be working (I used some parts thereof to test).

It did require the following ipfw rule for me:
ipfw add 90 fwd localhost tcp from any to any  in recv em1
Actually on pfSense it also needs '-x haproxy' as it is a bit
customized. And because I run 'ipfw' combined with 'pf', I also needed
to configure pf with floating 'pass on match' rules to allow the
'strange traffic' that pf cannot handle.


If you have FreeBSD 9, however, you might want to look into the divert-to
rules that pf can make. That might make things simpler, if it turns out to work.


Please report back your required settings (config if it changes) when 
you manage to get it working.


Greetings PiBa-NL

On 9-7-2013 12:55, jinge wrote:

Hi,all!


We use haproxy and FreeBSD for our cache system. And we want to use
the transparent option
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4-option%20transparent
for some compatibility things.

But we found it doesn't work. Here is the config, which worked on Ubuntu.


frontend tcp-in
bind :
mode tcp
log global
option tcplog

#distinguish HTTP and non-HTTP
tcp-request inspect-delay 30s
tcp-request content accept if HTTP

default_backend Direct


backend Direct
mode tcp
log global
option tcplog
no option httpclose
no option http-server-close
no option accept-invalid-http-response
option transparent


Can anyone tell me whether FreeBSD cannot support 'transparent' here,
or whether my config is not correct? And how to make transparent mode
work right.


Thanks!


Regards
Jinge







Re: FreeBSD with options transparent not working.

2013-07-11 Thread PiBa-NL

Hi Jinge,

What version of FreeBSD do you run? What firewall does it use, pf or ipfw?
What does 'haproxy -vv' show? (version/transparent options)

Can you write a little about the network topology and what isn't working 
about it?

For example like this:
ClientMachine = 172.16.1.100/24
Haproxy LAN1 = 172.16.1.1/24
Haproxy LAN2 = 192.168.1.1/24
Server1 = 192.168.1.101/24
Now ClientMachine sends a TCP request to 192.168.1.101. This request is
routed through the haproxy machine, which functions as a 'router', but
the request is also intercepted by the machine's firewall (make sure NOT
to use a standard port-forward rule, as it will change the destination
IP) and redirected to the haproxy process, which determines it's not
HTTP, and then sends the traffic on to Server1 using 'option transparent'.
The question then is: does Server1 ever receive a SYN packet (check with
tcpdump/wireshark)?

Does HAProxy show all backends as 'available' in the stats page?

Does the client machine use the proper IP (so NOT the haproxy IP) for
connecting to Server1, and is traffic routed through the haproxy machine?


Is this what doesn't currently work?
Or is the trouble that the nginx machines are not being connected with
the original client IP?


There are 3 different HAProxy options called or referred to as
'transparent', which also makes it a bit difficult to see which option
you're asking about:

A- option transparent (for sending connection to original destination)
B- source 0.0.0.0 usesrc clientip (for sending client-IP to the backend 
servers)

C- bind transparent (for binding to a nonlocal (CARP?) IP address)

I'm sure C is not what you're asking about, but I'm unclear whether your
current issue is with A or B.
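To illustrate where each of these lives in a config, here is a sketch with made-up proxy names (the addresses reuse the example topology above); it is not a complete working setup:

```
# A - backend-level: connect to the client's original destination address
backend Direct
    mode tcp
    option transparent

# B - per-backend source: present the client's IP to the server
backend Spoofed
    mode http
    source 0.0.0.0 usesrc clientip
    server srv1 192.168.1.101:80

# C - bind-level: accept connections on a non-local (e.g. CARP) address
frontend fe
    bind 172.16.1.1:80 transparent
```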


Could you try to make the smallest possible haproxy configuration that
still exhibits the problem you currently experience?


Greets PiBa-NL

On 11-7-2013 14:38, Baptiste wrote:

So the problem might be in the way you compiled HAProxy or you have
configured your OS.
Unfortunately, I can't help on FreeBSD :'(

Baptiste

On Thu, Jul 11, 2013 at 11:55 AM, jinge altman87...@gmail.com wrote:

Hi, Baptiste!

But I just tested with this and found it made no difference.



Regards
Jinge



On 2013-7-11, at 5:35 PM, Baptiste bed...@gmail.com wrote:


Hi Jinge,

Could you update your source statement to:
source 0.0.0.0 usesrc clientip

And let us know if that fixed your issue.

Baptiste


On Thu, Jul 11, 2013 at 11:25 AM, jinge altman87...@gmail.com wrote:

Hi,all!

We use HAProxy for our web system. And there is a statement that non-HTTP
traffic will go to the backend 'Direct', which is client-side transparent
proxying. Here is the config. But we found that the Direct backend is not
working. Can anyone tell me: is there any problem in my config, or is
there any tuning needed on my FreeBSD?

global
   pidfile /var/run/haproxy.pid
   maxconn 20
maxpipes 5
   daemon
   stats socket /tmp/haproxy.sock
   nbproc 4
   spread-checks 5
tune.rcvbuf.client 16384
tune.rcvbuf.server 16384
tune.sndbuf.client 32768
   tune.sndbuf.server 16384

defaults
#TCP SECTION
   maxconn 20
backlog 32768
   timeout connect 5s
   timeout client 60s
   timeout server 60s
   timeout queue 60s
   timeout check 10s
   timeout http-request 15s
   timeout http-keep-alive 1s
timeout tunnel 3600s
   option tcpka


#HTTP SECTION
   hash-type consistent
   option accept-invalid-http-request
   option accept-invalid-http-response
   option redispatch
   option http-server-close
   option http-pretend-keepalive
   retries 2
   option httplog
no option checkcache

#SYSTEM SECTION
   option dontlog-normal
   option dontlognull
   option log-separate-errors


# frontend ##
frontend tcp-in
   bind :
   mode tcp
   log global
option tcplog

tcp-request inspect-delay 30s
tcp-request content accept if HTTP

   use_backend NginxCluster if HTTP
   default_backend Direct

backend NginxCluster
   mode http
   option abortonclose
   balance uri whole
   log global
   source 0.0.0.0
   server ngx1 192.168.10.1:80 weight 20 check inter 5s maxconn 1
   server ngx2 192.168.10.2:80 weight 20 check inter 5s maxconn 1
   server ngx3 192.168.10.3:80 weight 20 check inter 5s maxconn 1

backend Direct
   mode tcp
   log global
option tcplog
no option httpclose
no option http-server-close
no option accept-invalid-http-response
no option http-pretend-keepalive
option transparent








Regards
Jinge








Re: Does the transparent can't work in FreeBSD?

2013-07-12 Thread PiBa-NL

Hi Jinge,

Nice that you have it working with ipfw.

I have no hands-on experience with FreeBSD 9 and those divert-to rules.
Reading their explanation led me to expect it should be able to work,
and resolve the issue of needing 2 firewalls (pf & ipfw) simultaneously.


As Joris also writes, you should probably not redirect all traffic that
flows from any to any, but only that which was originally already going
to the proper destination port.


So possibly something like this:  pass in quick on vlan64 inet proto tcp 
from any to any port  divert-to 127.0.0.1 port 


Whether this can actually work, I currently do not know; my only FreeBSD 9
pf knowledge is from reading its manual, so I can't help with that.

If you do manage to get the divert-to working please do share it with us.

Greets PiBa-NL

On 12-7-2013 7:37, jinge wrote:

Hi PiBa-NL,

I just followed your advice and found my pf config is not correct:

rdr on vlan64 proto tcp from any to any -> 127.0.0.1 port 

And I changed to ipfw and fwd, and then it works correctly.

ipfw add fwd 127.0.0.1, tcp from any to any via vlan64 in

And you told me I can use pf's divert-to, but after a test I found it
doesn't work. Here is the config:


pass in quick on vlan64 inet proto tcp from any to any divert-to 
127.0.0.1 port 


So can you tell me the right config?
Thank you.



Regards
Jinge



On 2013-7-11, at 12:07 PM, jinge altman87...@gmail.com wrote:



Hi PiBa-NL,


Thanks for your reply!
And I will follow your advice!



Regards
Jinge



On 2013-7-10, at 4:25 AM, PiBa-NL piba.nl@gmail.com wrote:



Hi Jinge,

I'm not exactly sure how this is supposed to work, but I did manage to
get transparent proxying for the server side working (the server is
presented with a connection from the original client IP). This works
with haproxy 1.5-dev19 on FreeBSD 8.3 with the help of some ipfw fwd rules.


Your config also seems to be working (I used some parts thereof to
test).


It did require the following ipfw rule for me:
ipfw add 90 fwd localhost tcp from any to any  in recv em1
Actually on pfSense it also needs '-x haproxy' as it is a bit
customized. And because I run 'ipfw' combined with 'pf', I also
needed to configure pf with floating 'pass on match' rules to allow
the 'strange traffic' that pf cannot handle.


If you however have FreeBSD 9 you might want to look into the 
divert-to rules that pf can make. Might make stuff simpler if it 
turns out to work..


Please report back your required settings (config if it changes) 
when you manage to get it working.


Greetings PiBa-NL

Op 9-7-2013 12:55, jinge schreef:

Hi,all!


We use haproxy and FreeBSD for our cache system. And we want to use
the transparent option
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4-option%20transparent
for some compatibility things.

But we found it doesn't work. Here is the config, which worked on Ubuntu.


frontend tcp-in
bind :
mode tcp
log global
option tcplog

#distinguish HTTP and non-HTTP
tcp-request inspect-delay 30s
tcp-request content accept if HTTP

default_backend Direct


backend Direct
mode tcp
log global
option tcplog
no option httpclose
no option http-server-close
no option accept-invalid-http-response
option transparent


Can anyone tell me whether FreeBSD cannot support 'transparent'
here, or whether my config is not correct? And how to make transparent
mode work right.


Thanks!


Regards
Jinge













Re: Apache logs and source IP

2013-11-13 Thread PiBa-NL

Also Hi,

Remember that for https connections no forwardfor header will be
added (unless you offload the SSL on or before haproxy).

Also, I don't understand why you have an ACL in the https frontend. The
only bind is on port 443, so if that frontend is contacted it will
always pass the ACL for dst_port 443.

#-
frontend PROD_webfarm_https
#-
   bind 10.2.0.101:443
   mode tcp
   acl is_port_443 dst_port 443
   use_backend PROD_https if is_port_443
   default_backend PROD_http
   maxconn 4000

(or is that some higher HAProxy logic/failsafe I'm missing?)
Greets PiBa-NL


Re: Can't clone repository; hangs

2013-11-28 Thread PiBa-NL
I haven't tried that one recently, but I found
http://master.formilux.org/git/people/willy/haproxy.git to be a bit
faster, and it at least completes the clone without an error. Maybe it helps.

Greets PiBa-NL

Pawel wrote on 28-11-2013 4:33:



On Nov 27, 2013, at 5:24 PM, Charles Strahan 
charles.c.stra...@gmail.com wrote:



Hello,

Just a heads up, I can't clone the repo. This hangs:

 git clone http://git.1wt.eu/git/haproxy.git/


It's just really slow for the first clone, but it works. I ran it with 
strace -f, and left it overnight :)




-Charles




Re: example of agent-check ?

2013-12-27 Thread PiBa-NL

Simon Drake wrote on 27-12-2013 17:07:

Would it be possible to post an example showing the correct haproxy
config to use with the agent-check?

By the way, I saw the mailing list post recently about the changes to
the agent-check, using state and percentage, and I think that's the
right way to go.

For me this config works:
server MyServer 192.168.0.40:80 check inter 5000 agent-check agent-inter  agent-port 2123 weight 32


I've tried a few small tests with it, and while bringing a server to
'down' or 'drain' seemed to work, I was missing the 'up' keyword; only
100% seems to bring a server back alive. So if you're monitoring
100% minus CPU usage and sending that straight back via the agent, a
server with 99% CPU capacity available won't come back up.
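As an illustration of what such an agent might answer on its agent-port, here is a hedged sketch (agent_reply() is a made-up helper, not part of HAProxy): a weight percentage, optionally preceded by a state keyword such as "down" or "drain", terminated by a newline. The exact keyword set accepted depends on the HAProxy version in use.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build the reply line an external agent could send back to HAProxy:
 * either "<pct>%\n" or "<state> <pct>%\n". Returns the number of
 * characters snprintf() would have written. */
static int agent_reply(char *buf, size_t len, const char *state, int pct)
{
	if (state && *state)
		return snprintf(buf, len, "%s %d%%\n", state, pct);
	return snprintf(buf, len, "%d%%\n", pct);
}
```

A process listening on the agent-port (2123 in the config above) would write such a line to each connection and close it.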


Re: HA-Proxy version 1.5-dev21-51437d2 2013/12/29 sticky ssl sessons are not working in my environment

2014-01-03 Thread PiBa-NL

Hi,

I have been wondering if/how I could persist SSL sessions between
servers myself, if I ever need it.
I found the concept of an SSL session ID rather promising; then, after
looking into how to use it and its reliability, I found some articles
saying it might not be wise:


https://www.f5.com/pdf/white-papers/cookies-sessions-persistence-wp.pdf
"SSL persistence was suddenly rendered inoperable" (in IE5)

http://docwiki.cisco.com/wiki/Secure_Sockets_Layer_Persistence_Configuration_Example
"SSL stickiness is generally deemed unreliable."
"SSL stickiness is not recommended due to the numerous limitations that 
can break client persistence"


Also, one might need to enforce on the backend webservers that SSLv2 is 
NOT used (SSLv2 places the session ID within the encrypted data, 
according to the Cisco doc).


So now the question arises: do current browsers (including on mobile 
devices) all keep sending the same SSL session ID properly during an 
hour of 'shopping'? Or do the described issues no longer apply to recent 
software/devices?
I'm a bit doubtful whether using the SSL session ID is actually a good 
way to enforce persistence. Or should one avoid it with SSL offloading, 
even though that puts more processing on a single haproxy instance, which 
is more difficult to scale up, and puts all certificates in a 'central' 
place, which has some other security considerations..
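For reference, the usual sketch for sticking on the SSL session ID without offloading is a TCP-mode stick table keyed on the ClientHello, roughly as below; this matches the `stick on payload_lv(43,1)` line mentioned earlier, while the backend name, table sizes, timeouts, and server addresses are placeholders:

```haproxy
backend bk_https_passthrough
    mode tcp
    balance roundrobin
    # wait for the ClientHello so payload_lv(43,1) can see the session ID
    tcp-request inspect-delay 5s
    acl clienthello req_ssl_hello_type 1
    tcp-request content accept if clienthello
    stick-table type binary len 32 size 30k expire 30m
    stick on payload_lv(43,1) if clienthello
    server web1 192.0.2.10:443 check
    server web2 192.0.2.11:443 check
```

Note this inherits all the session-ID reliability caveats from the articles above: a client that renegotiates or rotates its session ID loses stickiness.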


I'm afraid there are more questions than answers in my mail, but at 
least it's some stuff to think about..
For myself it's currently not a problem. In my small test with 1.5dev21, 
SSL persistence seemed to work OK, though with only 1 client PC, with 2 
browsers running and hitting F5 a lot, so hardly representative of a 
real environment...

Still, I thought what I found might be useful for others.

Greets PiBa-NL

Lukas Tribus schreef op 3-1-2014 22:41:

Hi,


Hello ,

Many thanks for your reply. This thing is even stranger: I downloaded and
compiled several versions of HAProxy 1.5.x and the result was always the
same.

I experimented with following versions

At first I tested with
http://haproxy.1wt.eu/download/1.5/src/devel/haproxy-1.5-dev21.tar.gz

After that I tested with these:
http://haproxy.1wt.eu/download/1.5/src/devel/haproxy-1.5-dev20.tar.gz
http://haproxy.1wt.eu/download/1.5/src/devel/haproxy-1.5-dev18.tar.gz
http://haproxy.1wt.eu/download/1.5/src/devel/haproxy-1.5-dev17.tar.gz

The latest downloaded was haproxy-ss-LATEST.tar.gz from
http://haproxy.1wt.eu/download/1.5/src/snapshot/

Every time the result was the same.

Well, your make line looks very specific; what's the reason you set those
CFLAGS manually but, on the other hand, don't use a specific TARGET?

I suggest you give this a try:
make clean; make TARGET=linux2628 CPU=native USE_PCRE=1 USE_OPENSSL=1 \
USE_ZLIB=1

With your custom make line, you are not using epoll, falling back to
the slower poll().

This shouldn't make any difference regarding the ssl affinity though.


Regarding that, your configuration looks OK, and you have tested
different releases, which makes me think the issue may not be in haproxy.

How do you know HAProxy doesn't maintain the correct affinity? Are
you tcpdumping the frontend traffic? Are you sure your backend servers
have a session cache enabled and working?


Regards,

Lukas   





Re: HAProxy 1.5

2014-01-09 Thread PiBa-NL

Hi

If you need it badly then start using it (after validating/testing with 
your configuration, which you should do anyway).


The name 'release' won't mean there won't be any bugs left. As for the 
current 1.5devX releases: lots of people use them in production 
environments and they are in general very stable.


As for testing with your current configuration, you should actually start 
doing that a.s.a.p., so if you do find problems they can still 
be fixed before the release is called 'final'.


As for a date, that would be "1.5 (ETA 2013/12/31)" (see the roadmap in 
git); besides that one, and maybe some other estimations, you probably 
won't get the actual date until it's really ready. And it will be 
ready when it's ready. As you might have read, the 'release' is coming 
closer every day, as most major features are now implemented, and only a 
few development builds will probably be made for some final bug-fix 
checks...


Greets PiBa-NL
Find below part of the 1.5dev20 release mail from Willy, as the 
mailing list archives do not contain it (it was followed a day later by 
dev21 to fix a small but annoying issue):



I expect to release 1.5-final around January and mostly focus on chasing
bugs till there. So I'd like to set a feature freeze. I know it doesn't
mean much considering that we won't stop contribs. But I don't want to
merge another large patch set before the release. Ideally there will not
be any dev21 version. Reality probably is that we'll have to issue one
because people will inevitably report annoying bugs that were not reported
in snapshots.



Kobus Bensch schreef op 9-1-2014 17:58:

Hi

Have you got a date for the final release of 1.5? There are a few 
features in 1.5 we badly need.


Thanks

Kobus






how to use ASPSESSIONID with stick-table?

2014-01-09 Thread PiBa-NL

Hi,

While reading about stickiness, it seems there are quite a few options:
*TCP*
1- balance source
2- stick on src
*SSL*
3- stick on payload_lv(43,1) if clienthello
*HTTP/SSLoffloading*
4- cookie cookie
5- stick on req.cook(cookie)
6- appsession cookie

But while the last 3 options can all use a 'normal' cookie, it seems 
only appsession can process an ASPSESSIONID=.


The stick-table can be synchronized between multiple haproxy 
instances and also has the ability to 'survive' a reload of the 
configuration, and an inserted cookie doesn't need any in-memory table to 
be matched to the correct backend. Only 'appsession' will lose all 
information needed to successfully persist a client to a single backend, 
and isn't able to sync.


I've read that appsession will be deprecated [1]; will this happen 
anytime 'soon'? And if so, what can be configured to match the way it 
finds and handles cookies?


As far as I could see, req.cook(cookie) cannot match on the prefix of 
a cookie name the way appsession and capture cookie are able to do. Or 
is there another, more generic option I overlooked? I did see cook_beg, 
but that only checks the prefix of the value, not the name.


The question is because I want to change the configuration web GUI in a 
pfSense haproxy-devel package and want to include some easy-to-configure 
persistence options, but I want to know if there is an alternative to 
appsession so everything can be done with either cookies 
and stick-tables.


[1] 
http://serverfault.com/questions/550910/haproxy-appsession-vs-cookie-precedence
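For a cookie with a fixed, known name, the stick-table route would look roughly like the sketch below (the cookie name 'SRVID', the table sizes, and the server addresses are placeholders); note that it does not solve the ASPSESSIONID name-prefix problem described above, and that the table could additionally be synced between instances via a peers section:

```haproxy
backend bk_app
    mode http
    balance roundrobin
    # learn the cookie value the server hands out, then match it on requests
    stick-table type string len 64 size 100k expire 30m
    stick store-response res.cook(SRVID)
    stick match req.cook(SRVID)
    server app1 192.0.2.21:80 check
    server app2 192.0.2.22:80 check
```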




Re: example of agent-check ?

2014-01-11 Thread PiBa-NL
Weird.. Attached is the C# 'program' I used as an agent for a small 
test (compile it with csc.exe); maybe something is wrong with my test 
implementation?

When I press ` (the key beside the 1) it sends 0%, and in the haproxy 
stats the server is shown as DRAIN.
When I then press 9 for 90% it stays in DRAIN; only when I press 0 for 
100% does the server come back up again.

Maybe someone can take a look?

*My HAProxy version:*

HA-Proxy version 1.5-dev21-6b07bf7 +2013/12/17
Copyright 2000-2013 Willy Tarreau w...@1wt.eu
Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -pipe -fno-strict-aliasing -DFREEBSD_PORTS
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_STATIC_PCRE=1

*The backend using the check:*

backend S_tcp
	mode tcp
	balance static-rr
	timeout connect 3
	timeout server 3
	retries 3
	fullconn 
	server asd 192.168.0.40:82  check inter 2000 agent-check 
agent-inter 2000 agent-port 2123  weight 100


Malcolm Turnbull schreef op 11-1-2014 20:30:

Sorry only just got around to looking at this and updating my blog entry:

Yes the important bit missing was agent-check

But my testing with Dev21 seems to bring the servers back fine with
any percentage reading i.e. 10% 75% etc. Please let me know if anyone
else is having an issue, thanks.

server Win2008R2 192.168.64.50:3389  weight 100  check agent-check
agent-port  inter 2000  rise 2  fall 3 minconn 0  maxconn 0
on-marked-down shutdown-sessions



On 27 December 2013 22:44, PiBa-NL piba.nl@gmail.com wrote:

Simon Drake schreef op 27-12-2013 17:07:



Would it be possible to post an example showing the correct haproxy config
to use with the agent-check.

By the way, I saw the mailing list post recently about the changes to the
agent-check, using state and percentage, and I think that's the right way
to go.

For me this config works:
 server MyServer 192.168.0.40:80  check inter 5000 agent-check
agent-inter  agent-port 2123  weight 32

I've tried a few small tests with it, and while bringing a server to 'down'
or 'drain' seemed to work, I was missing the 'up' keyword; only 100% seems
to bring a server back alive. So if you're monitoring (100 - %CPUusage) and
sending that one-to-one back through the agent, a server with 99% cpu capacity
available won't come back up..





//
// HAProxy agent check test program
//
// compile with:
// C:\Windows\Microsoft.NET\Framework\v4.0.30319\csc.exe /out:haproxy_agent_service.exe haproxy_agent_service.cs

using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Text;

class healthchecker{
	public static void Main(){
		Console.WriteLine("Starting");
		bool running = true;
		TcpListener server = null;
		try
		{
			Int32 port = 2123;
			IPAddress localAddr = IPAddress.Parse("0.0.0.0");
			server = new TcpListener(localAddr, port);

			// Start listening for client requests.
			server.Start();
			// Buffer for reading data
			Byte[] bytes = new Byte[256];
			String data = null;

			string message = "55%\n";
			// Enter the listening loop.
			while(running)
			{
				Console.Write("Waiting for a connection... ");

				// Perform a blocking call to accept requests.
				// You could also use server.AcceptSocket() here.
				TcpClient client = null;
				try{
					using(client = server.AcceptTcpClient()){
						Console.WriteLine("Connected!");
						ConsoleKeyInfo cki;
						while (Console.KeyAvailable)
						{
							cki = Console.ReadKey();
							Console.WriteLine("Key PRESSED: {0}", cki.Key);

							if (cki.Key == ConsoleKey.D)
								message = "drain\n";
							if (cki.Key == ConsoleKey.S)
								message = "stopped\n";
							if (cki.Key == ConsoleKey.U)
								message = "up\n";
							if (cki.Key == ConsoleKey.Oem3)
								message = "0%\n";
							if (cki.Key == ConsoleKey.D1)
								message = "10%\n";
							if (cki.Key == ConsoleKey.D2)
								message = "20%\n";
							if (cki.Key == ConsoleKey.D3)
								message = "30%\n";
							if (cki.Key == ConsoleKey.D4)
								message = "40%\n";
							if (cki.Key == ConsoleKey.D5)
								message = "50%\n";
							if (cki.Key == ConsoleKey.D6)
								message = "60%\n";
							if (cki.Key == ConsoleKey.D7)
								message = "70%\n";
							if (cki.Key == ConsoleKey.D8)
								message = "80%\n";
							if (cki.Key == ConsoleKey.D9)
								message = "90%\n";
							if (cki.Key == ConsoleKey.D0)
								message = "100%\n";
							if (cki.Key == ConsoleKey.Q)
								running = false;
						}
						data = null;

						// Get a stream object for reading and writing
						NetworkStream stream = client.GetStream();

						int i;

						ASCIIEncoding uniEncoding = new ASCIIEncoding();
						stream.Write(uniEncoding.GetBytes(message), 0, message.Length);
						stream.Flush();
						Console.WriteLine("Wrote {0} bytes: {1}", message.Length, message);

						// Shutdown and end connection

Re: example of agent-check ?

2014-01-12 Thread PiBa-NL

OK, it seems my trouble came from using 'balance static-rr'.

Indeed, when using the unix socket to set a weight to 50%, it tells you:
"Backend is using a static LB algorithm and only accepts weights '0%' 
and '100%'."

So that explains my issue.

And the manual states "Each server is used in turns, according to their 
weights.", which led me to think it should support weights... while I 
should also have read the next sentence: "changing a server's weight 
on the fly will have no effect for static-rr".

Sorry for the noise.

Steve Howard schreef op 12-1-2014 2:36:

On 1/11/14, Malcolm Turnbull malcolm@... wrote:

Sorry only just got around to looking at this and updating my blog entry:

Yes the important bit missing was agent-check

But my testing with Dev21 seems to bring the servers back fine with
any percentage reading i.e. 10% 75% etc. Please let me know if anyone
else is having an issue, thanks.

server Win2008R2 192.168.64.50:3389  weight 100  check agent-check
agent-port  inter 2000  rise 2  fall 3 minconn 0  maxconn 0
on-marked-down shutdown-sessions

I just tested this today and can confirm that bringing a backend server back
in works for me with 100%, 99%, etc.

Also, down reliably takes the server out of the backend, and only a
percentage such as 50% brings it back in.

For those struggling, I will say that the status string returned from the
socket on the backend server must have a newline terminator, or perhaps a
carriage return.  Simon's example using echo does this by default.

I was testing with a simple python socket server, and couldn't get anything
to work. As soon as I used client.send("down" + '\n'), everything worked.
I had to add debug statements to src/checks.c to find this.

Depending on the software you use to return the status to the agent, it may
be worth checking if you are having issues.

Regards,

Steve









Re: Error 400

2014-01-13 Thread PiBa-NL
When using the backends on port 443, do you have the 'ssl' keyword on 
the server line?

http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2-ssl

Also, can you share your complete (anonymized) haproxy configuration file?
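For completeness, re-encrypting to an HTTPS backend in 1.5 looks roughly like this sketch (hypothetical addresses; 'verify none' disables certificate checking and is only reasonable for testing):

```haproxy
backend bk_apache_ssl
    mode http
    # 'ssl' makes haproxy speak TLS to the backend instead of plain HTTP
    server apache1 10.11.115.201:443 ssl verify none check
    server apache2 10.11.115.202:443 ssl verify none check
```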

Kobus Bensch schreef op 13-1-2014 12:27:

A few more observations:

Hi

My environment looks like this:

Haproxy 1.5 (also tried 1.4 with stunnel) ===> Apache1 / Apache2

Each apache server uses AJP to forward traffic to a tomcat server in a 
1-to-1 relationship, from port 443 on the apache to 7000 on the tomcat 
server.


If I set up haproxy in TCP mode, then it load balances correctly. If I 
directly connect to the individual apache servers, then it works.

If I however change haproxy to HTTP mode, then I get the following 
in the logs:

== ssl_request_log ==
[13/Jan/2014:10:19:16 +] 10.11.115.114 - - GET / 562

== ssl_access_log ==
10.11.115.114 - - [13/Jan/2014:10:19:16 +] GET / 400 562


On the browser I get:

502 Bad Gateway
The server returned an invalid or incomplete response.

I have tried to set the following haproxy global parameters with no 
effect:


tune.bufsize
tune.http.maxhdr

ADDED:
If I change my backend servers to plain HTTP on port 80, then all 
works as expected. Is this expected behaviour, where the LB will accept 
SSL on 443 and can then only forward to apache servers on HTTP port 
80? Is it not possible in HTTP mode to accept SSL on the LB and then 
forward that traffic to the backend apache servers, also on HTTPS port 
443?

ADDED END:

All my servers are Centos6 64bit.

Further searching on the internet does not really give any solutions. 
Please can you help?


Thanks

Kobus






Re: Difference frontend/backend to listen?

2014-01-16 Thread PiBa-NL

Hi Florian,

Found a minor difference, not sure if it is the issue:
 - The 9000 backend checks up.php versus check.php.
 - Also, I don't think http-send-name-header does anything in 'tcp' mode.

If that's not it, maybe someone else has a clue. :)
p.s. You might want to configure a stats page to see if the servers are 
properly checked as 'up' by haproxy.


Greets PiBa-NL

Florian Engelmann schreef op 16-1-2014 12:29:
Hi, 



I have two configurations that should do the same. One is based on a 
frontend/backend layout; the second does it with just 'listen'. The 
'listen' configuration is working fine, but the frontend/backend one 
causes a problem on the backend. It looks like some request string is 
missing, because the customer application is not able to resolve some 
lookup which seems to be header-related. Is anybody able to tell me what 
the difference is between these two configurations?




global
log /dev/log local6
#log /dev/log local6 notice
chroot /var/lib/haproxy
user haproxy
group haproxy
daemon
maxconn 5
stats socket /var/run/haproxy.sock mode 600 level admin
stats timeout 2m

defaults
log global
mode http
option dontlognull
option dontlog-normal
retries 2
option redispatch
timeout connect 5000
timeout client 5
timeout server 12
timeout http-request 5000
timeout http-keep-alive 5000
option http-server-close
http-check disable-on-404
http-send-name-header X-Target-Server
default-server minconn 1024 maxconn 4096
monitor-net 192.168.xxx.xxx/32

listen application9000 0.0.0.0:9000
  balance leastconn
  mode tcp
  option tcplog
  option httpchk GET /check.php HTTP/1.0
  http-check expect string Hello World
  server xcmsphp01.xxx 10.0.4.4:9000 check port 80
  server xcmsphp02.xxx 10.0.4.7:9000 check port 80
  server xcmsphp03.xxx 10.0.4.3:9000 check port 80

listen application80 0.0.0.0:80
  balance roundrobin
  option forwardfor
  option  httplog
  monitor-uri /haproxymon
  option httpchk GET /index.html HTTP/1.1\r\nHost:\ 
monitoring\r\nConnection:\ close

  http-check expect string Welcome home
  acl site_dead nbsrv lt 2
  monitor fail  if site_dead
  server xcmsfrontend01.xxx 10.2.2.1:80 check
  server xcmsfrontend02.xxx 10.2.2.2:80 check




global
log /dev/log local6
#log /dev/log local6 notice
chroot /var/lib/haproxy
user haproxy
group haproxy
daemon
maxconn 5
stats socket /var/run/haproxy.sock mode 600 level admin
stats timeout 2m

defaults
log global
mode http
option dontlognull
option dontlog-normal
retries 2
option redispatch
timeout connect 5000
timeout client 5
timeout server 12
timeout http-request 5000
timeout http-keep-alive 5000
default-server minconn 1024 maxconn 4096
monitor-net 192.168.xxx.xxx/32


frontend http-in
  bind *:80
  option http-server-close
  option forwardfor
  option httplog
  monitor-uri /haproxymon
  acl site_dead nbsrv(http-out) lt 2
  monitor fail if site_dead
  default_backend http-out

backend http-out
  balance roundrobin
  http-send-name-header X-Target-Server
  option forwardfor
  option http-server-close
  #option accept-invalid-http-response
  http-check disable-on-404
  option httpchk GET /index.html HTTP/1.1\r\nHost:\ 
monitoring\r\nConnection:\ close

  http-check expect string Welcome home
  server xcmsfrontend01.xxx 10.2.2.1:80 check
  server xcmsfrontend02.xxx 10.2.2.2:80 check

frontend php-in
  bind *:9000
  mode tcp
  option tcplog
  http-send-name-header X-Target-Server

Won't work in TCP mode.

default_backend php-out

backend php-out
  balance leastconn
  mode tcp
  option tcplog
  http-send-name-header X-Target-Server

Won't work in TCP mode.

option httpchk GET /up.php HTTP/1.0

Should this be check.php vs. up.php?

http-check expect string Hello World
  server xcmsphp01.xxx 10.0.4.4:9000 check port 80
  server xcmsphp02.xxx 10.0.4.7:9000 check port 80
  server xcmsphp03.xxx 10.0.4.3:9000 check port 80



Regards,
Florian






issue with acl pattern -m match on a string starting with space or containing a comma, with 1.5-dev21

2014-01-16 Thread PiBa-NL

Hi,

Using HAProxy 1.5-dev21, I'm having trouble getting it to match my 
user-agent with an ACL that uses -m pattern matching..


The browser is Chrome 31.0.1650.63, which sends the user-agent string:

Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like 
Gecko) Chrome/32.0.1700.76 Safari/537.36


My test ACLs, of which only ACL21 and ACL31 are matched, with the result 
below:

*ACLexact*= A
*ACLbeg*= B, 1
*ACLend*= C, 1

I would expect at least the 2 ACLbeg ACLs and ACL2 to also be matched; 
also, I don't understand why ACL32 is not matched, as the leading space 
seems to be correctly escaped.


The acl's used/tried..:

reqadd ACLexact:\ A
reqadd ACLbeg:\ B
reqadd ACLend:\ C
acl ACL1 hdr(User-Agent) Mozilla/5.0\ (Windows\ NT\ 6.1;\ WOW64)\ 
AppleWebKit/537.36\ (KHTML\,\ like\ Gecko)\ Chrome/32.0.1700.76\ 
Safari/537.36

reqadd ACLexact:\ 1 if ACL1
acl ACL2 hdr(User-Agent) -m str Mozilla/5.0\ (Windows\ NT\ 6.1;\ WOW64)\ 
AppleWebKit/537.36\ (KHTML\,\ like\ Gecko)\ Chrome/32.0.1700.76\ 
Safari/537.36

reqadd ACLexact:\ 2 if ACL2

acl ACL21 hdr(User-Agent) -m beg Mozilla/5.0\ (Windows\ NT\ 6.1;\ 
WOW64)\ AppleWebKit/537.36\ (KHTML

reqadd ACLbeg:\ 1 if ACL21
acl ACL22 hdr(User-Agent) -m beg Mozilla/5.0\ (Windows\ NT\ 6.1;\ 
WOW64)\ AppleWebKit/537.36\ (KHTML,

reqadd ACLbeg:\ 2 if ACL22
acl ACL23 hdr(User-Agent) -m beg Mozilla/5.0\ (Windows\ NT\ 6.1;\ 
WOW64)\ AppleWebKit/537.36\ (KHTML\,

reqadd ACLbeg:\ 3 if ACL23

acl ACL31 hdr(User-Agent) -m end like\ Gecko)\ Chrome/32.0.1700.76\ 
Safari/537.36

reqadd ACLend:\ 1 if ACL31
acl ACL32 hdr(User-Agent) -m end \ like\ Gecko)\ Chrome/32.0.1700.76\ 
Safari/537.36

reqadd ACLend:\ 2 if ACL32
acl ACL33 hdr(User-Agent) -m end ,\ like\ Gecko)\ Chrome/32.0.1700.76\ 
Safari/537.36

reqadd ACLend:\ 3 if ACL33
acl ACL34 hdr(User-Agent) -m end \,\ like\ Gecko)\ Chrome/32.0.1700.76\ 
Safari/537.36

reqadd ACLend:\ 4 if ACL34


HAPROXY Version used:
HA-Proxy version 1.5-dev21-6b07bf7 +2013/12/17
Copyright 2000-2013 Willy Tarreau w...@1wt.eu
Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -pipe -fno-strict-aliasing -DFREEBSD_PORTS
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_STATIC_PCRE=1

Did I do something wrong, or can you give it a test? Thanks.

Thanks for the great product!
Greets PiBa-NL



Re: issue with acl pattern -m match on a string starting with space or containing a comma, with 1.5-dev21

2014-01-17 Thread PiBa-NL

Hi,
Indeed, req.fhdr(x) works for this. I should (again) have read the manual 
better.


Though the proper section is a bit harder to find; a search for 
keyword  doesn't give any results. Nevertheless, I should r.t.fine.m., 
as it is very complete and correct for pretty much every option possible.

I knew that the comma didn't need escaping, but started to try it anyway 
because it didn't seem to work, and so started to have a few doubts..


Sorry for the noise and thanks, again.
PiBa-NL

Thierry FOURNIER schreef op 17-1-2014 11:25:

Hi,

First, you must not escape the comma character.

The fetch method "hdr" splits multi-value headers before the pattern
matching operation. A User-Agent header containing a comma is
processed like two headers:

Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML

and

like Gecko) Chrome/32.0.1700.76 Safari/537.36

If you want to apply an ACL to the full value of the header, you must use
req.fhdr ('f' like 'full'). The following configuration runs as expected:

acl ACL2 req.fhdr(User-Agent) -m str Mozilla/5.0\ (Windows\ NT\ 6.1;\ 
WOW64)\ AppleWebKit/537.36\ (KHTML,\ like\ Gecko)\ Chrome/32.0.1700.76\ 
Safari/537.36
reqadd ACLexact:\ 2 if ACL2

acl ACL21 req.fhdr(User-Agent) -m beg Mozilla/5.0\ (Windows\ NT\ 6.1;\ 
WOW64)\ AppleWebKit/537.36\ (KHTML
reqadd ACLbeg:\ 1 if ACL21
acl ACL22 req.fhdr(User-Agent) -m beg Mozilla/5.0\ (Windows\ NT\ 6.1;\ 
WOW64)\ AppleWebKit/537.36\ (KHTML,
reqadd ACLbeg:\ 2 if ACL22

acl ACL31 req.fhdr(User-Agent) -m end like\ Gecko)\ Chrome/32.0.1700.76\ 
Safari/537.36
reqadd ACLend:\ 1 if ACL31
acl ACL32 req.fhdr(User-Agent) -m end \ like\ Gecko)\ Chrome/32.0.1700.76\ 
Safari/537.36
reqadd ACLend:\ 2 if ACL32
acl ACL33 req.fhdr(User-Agent) -m end ,\ like\ Gecko)\ Chrome/32.0.1700.76\ 
Safari/537.36
reqadd ACLend:\ 3 if ACL33


Thierry


On Thu, 16 Jan 2014 20:54:50 +0100
PiBa-NL piba.nl@gmail.com wrote:


Hi,

Using HAProxy 1.5-dev21, I'm having trouble getting it to match my
user-agent with an ACL that uses -m pattern matching..

The browser is Chrome 31.0.1650.63 which sends useragent string:

Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like
Gecko) Chrome/32.0.1700.76 Safari/537.36

My test ACLs, of which only ACL21 and ACL31 are matched with the result
below:
*ACLexact*= A
*ACLbeg*= B, 1
*ACLend*= C, 1

I would expect at least the 2 ACLbeg ACLs and ACL2 to also be matched;
also, I don't understand why ACL32 is not matched, as the leading space
seems to be correctly escaped.

The acl's used/tried..:

reqadd ACLexact:\ A
reqadd ACLbeg:\ B
reqadd ACLend:\ C
acl ACL1 hdr(User-Agent) Mozilla/5.0\ (Windows\ NT\ 6.1;\ WOW64)\
AppleWebKit/537.36\ (KHTML\,\ like\ Gecko)\ Chrome/32.0.1700.76\
Safari/537.36
reqadd ACLexact:\ 1 if ACL1
acl ACL2 hdr(User-Agent) -m str Mozilla/5.0\ (Windows\ NT\ 6.1;\ WOW64)\
AppleWebKit/537.36\ (KHTML\,\ like\ Gecko)\ Chrome/32.0.1700.76\
Safari/537.36
reqadd ACLexact:\ 2 if ACL2

acl ACL21 hdr(User-Agent) -m beg Mozilla/5.0\ (Windows\ NT\ 6.1;\
WOW64)\ AppleWebKit/537.36\ (KHTML
reqadd ACLbeg:\ 1 if ACL21
acl ACL22 hdr(User-Agent) -m beg Mozilla/5.0\ (Windows\ NT\ 6.1;\
WOW64)\ AppleWebKit/537.36\ (KHTML,
reqadd ACLbeg:\ 2 if ACL22
acl ACL23 hdr(User-Agent) -m beg Mozilla/5.0\ (Windows\ NT\ 6.1;\
WOW64)\ AppleWebKit/537.36\ (KHTML\,
reqadd ACLbeg:\ 3 if ACL23

acl ACL31 hdr(User-Agent) -m end like\ Gecko)\ Chrome/32.0.1700.76\
Safari/537.36
reqadd ACLend:\ 1 if ACL31
acl ACL32 hdr(User-Agent) -m end \ like\ Gecko)\ Chrome/32.0.1700.76\
Safari/537.36
reqadd ACLend:\ 2 if ACL32
acl ACL33 hdr(User-Agent) -m end ,\ like\ Gecko)\ Chrome/32.0.1700.76\
Safari/537.36
reqadd ACLend:\ 3 if ACL33
acl ACL34 hdr(User-Agent) -m end \,\ like\ Gecko)\ Chrome/32.0.1700.76\
Safari/537.36
reqadd ACLend:\ 4 if ACL34


HAPROXY Version used:
HA-Proxy version 1.5-dev21-6b07bf7 +2013/12/17
Copyright 2000-2013 Willy Tarreau w...@1wt.eu
Build options :
TARGET  = freebsd
CPU = generic
CC  = cc
CFLAGS  = -O2 -pipe -fno-strict-aliasing -DFREEBSD_PORTS
OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_STATIC_PCRE=1

Did I do something wrong, or can you give it a test? Thanks.

Thanks for the great product!
Greets PiBa-NL






Re: Difference frontend/backend to listen?

2014-01-17 Thread PiBa-NL

Hi Florian,

The only advice I have left is to configure a 'stats' listen section and 
syslog.


Besides that, you should try to perform some tcpdump/wireshark traffic 
captures to see what kind of traffic / headers / content is passing 
along, as I'm not aware of any differences between frontend/backend and 
listen sections besides the obvious, and to me it seems like the changes 
in the defaults section and the frontends/backends should result in the 
same behavior. Using the traffic dumps you could then try to compare the 
two and spot any differences.


B.t.w., are you replacing the config on the same machine and then 
testing? Or is one running production and the other under a 
test.domain.name? Maybe the backend sends a different reply if the 
request Host header is different?


Greets PiBa-NL

Florian Engelmann schreef op 17-1-2014 13:43:

Hi PiBa-NL,



Found a minor difference, not sure if it is the issue:
  - The 9000 backend checks up.php versus check.php.
  - Also, I don't think http-send-name-header does anything in 'tcp' 
mode.


If thats not it, maybe someone else has a clue. :)
p.s. You might want to configure a stats page to see if servers are
properly checked as 'up' by haproxy.

Greets PiBa-NL


Thank you for your feedback! Sorry, that was my fault: both do use 
check.php; the up.php is an old one, just a copy/paste error.


I still have no idea what makes the difference. I was not able to find 
a chapter in the HAProxy documentation describing the difference of 
listen vs. frontend/backend. Does an article about that topic exist? 
I am really interested in understanding this issue. I do not know what 
the application does exactly, but all I know is: it only works if 
I use the listen configuration.


Regards,
Florian



Florian Engelmann schreef op 16-1-2014 12:29:

Hi,

I got two configurations the should do the same. One is based on a
frontend/backend layout the second does it with just listen. The
listen configuratiuon is working fine but the fontend/backend causes a
problem on the backend. It looks like some request string is missing
because the customer application is not able to resolve some lookup
which seems to be header related. Is anybody able to tell me whats the
difference in these two configurations?



global
log /dev/log local6
#log /dev/log local6 notice
chroot /var/lib/haproxy
user haproxy
group haproxy
daemon
maxconn 5
stats socket /var/run/haproxy.sock mode 600 level admin
stats timeout 2m

defaults
log global
mode http
option dontlognull
option dontlog-normal
retries 2
option redispatch
timeout connect 5000
timeout client 5
timeout server 12
timeout http-request 5000
timeout http-keep-alive 5000
option http-server-close
http-check disable-on-404
http-send-name-header X-Target-Server
default-server minconn 1024 maxconn 4096
monitor-net 192.168.xxx.xxx/32

listen application9000 0.0.0.0:9000
  balance leastconn
  mode tcp
  option tcplog
  option httpchk GET /check.php HTTP/1.0
  http-check expect string Hello World
  server xcmsphp01.xxx 10.0.4.4:9000 check port 80
  server xcmsphp02.xxx 10.0.4.7:9000 check port 80
  server xcmsphp03.xxx 10.0.4.3:9000 check port 80

listen application80 0.0.0.0:80
  balance roundrobin
  option forwardfor
  option  httplog
  monitor-uri /haproxymon
  option httpchk GET /index.html HTTP/1.1\r\nHost:\
monitoring\r\nConnection:\ close
  http-check expect string Welcome home
  acl site_dead nbsrv lt 2
  monitor fail  if site_dead
  server xcmsfrontend01.xxx 10.2.2.1:80 check
  server xcmsfrontend02.xxx 10.2.2.2:80 check




global
log /dev/log local6
#log /dev/log local6 notice
chroot /var/lib/haproxy
user haproxy
group haproxy
daemon
maxconn 5
stats socket /var/run/haproxy.sock mode 600 level admin
stats timeout 2m

defaults
log global
mode http
option dontlognull
option dontlog-normal
retries 2
option redispatch
timeout connect 5000
timeout client 5
timeout server 12
timeout http-request 5000
timeout http-keep-alive 5000
default-server minconn 1024 maxconn 4096
monitor-net 192.168.xxx.xxx/32


frontend http-in
  bind *:80
  option http-server-close
  option forwardfor
  option httplog
  monitor-uri /haproxymon
  acl site_dead nbsrv(http-out) lt 2
  monitor fail if site_dead
  default_backend http-out

backend http-out
  balance roundrobin
  http-send-name-header X-Target-Server
  option forwardfor
  option http-server-close
  #option accept-invalid-http-response
  http-check disable-on-404
  option httpchk GET /index.html HTTP/1.1\r\nHost:\
monitoring\r\nConnection:\ close
  http-check expect

Re: File uploads (multipart/form-data POST ) and transparent mode fail

2014-01-18 Thread PiBa-NL

Hi Magnus,

I have integrated that 'transparent' option into the pfSense (FreeBSD 8.3) 
haproxy-devel package, and can confirm that there is an issue when 
sending a large POST. For your information, the config below does not 
contain the 'Transparent ClientIP' option, which would read 
"source 0.0.0.0 usesrc clientip".


Also, on pfSense the main firewall is 'pf', but to get 'transparent' 
traffic working it was needed to also load and configure part of ipfw 
in the background (this is also done for the captive portal).
This is so HAProxy gets to see the TCP traffic, and to prevent replies 
from being routed out the WAN interface.


The solution is to configure a floating rule like this:
Action: Pass
Quick: YES
Interface: DMZ (the one pointing to your server..)
Direction: Out
Protocol: TCP
Source: ANY
Destination: Server-IP
Destination port: Server-PORT
State Type: sloppy state
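In pf.conf terms, that floating rule corresponds roughly to the following one-liner (the `$dmz_if`, `$server_ip`, and `$server_port` macros are placeholders for your own values):

```pf
pass out quick on $dmz_if proto tcp from any to $server_ip port $server_port keep state (sloppy)
```

The sloppy state is the important part: with asymmetric transparent-proxy flows, pf would otherwise drop packets whose sequence numbers it has not seen both ways.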

I'll try to see if I can automate that, and if not, at least put a 
warning note that this kind of rule must be added for it to work properly.


Can you confirm this indeed solves the issue?

Thanks PiBa-NL

Magnus Thomé schreef op 18-1-2014 9:32:
Last night, during a couple of hours, I took the time to read through 
the whole documentation from start to finish (instead of just doing 
keyword searches in it). But I really can't find anything.

I set up "option forceclose" (and also "option forwardfor", which is 
unrelated) just to see if anything happened, but nope.


A wild guess from me as a total noob is that something is divided into 
64-kbyte chunks (a buffer, a particular set of packets or whatnot), 
and the first 64 kbytes sent go through OK but the second and further 
chunks go astray. I've scratched my head wondering if the webserver, or 
possibly the pfsense box, has anything set in connection with 64 kB, 
and of course I also looked for anything like that in the haproxy 
documentation. There are no problems sending items larger than 64 kB in 
the other, normal direction, to the browsers. Will setting a cookie 
help?



ANY help or pointers in some direction would be deeply appreciated.


/Magnus Thomé




On Fri, Jan 17, 2014 at 4:50 PM, Magnus Thomé magn...@gmail.com wrote:


I've really really searched for answers, both in the mailing list
archives and google but haven't been able to find anything. Would
deeply appreciate any help!

I'm running pfsense 2.1 with the only extra package installed
being haproxy-devel 1.5-dev19 pkg v 0.6

EVERYTHING works great but one single thing:

When doing an HTTP file upload with a FORM multipart/form-data POST
to any server behind the firewall, it only works with very small
files, approx. 60 kB max. With slightly larger files I get a
timeout page after a while, and with even larger files I get
nothing at all.

It seems that when Transparent ClientIP is enabled and set to
DMZ, the file uploads fail, and with Transparent ClientIP disabled
all works perfectly as it should. I do need the transparent mode,
though.


Is there a setting somewhere I've missed?


Thanks in advance for any possible help


/Magnus




--
The config created by pfsense GUI looks like this:


global
stats socket /tmp/haproxy.socket level admin
uid 80
gid 80
nbproc  1
chroot  /var/empty
daemon

frontend SRV-WEB1-merged
    bind 83.250.27.152:80
    default_backend SRV-WEB1_http
    mode http
    log global
    option dontlognull
    timeout client 3
    acl 0_rejsa.nu hdr_end(host) -i rejsa.nu
    use_backend SRV-WEB1_http if 0_rejsa.nu
    acl 1_rejsa.se hdr_end(host) -i rejsa.se
    use_backend SRV-WEB1_http if 1_rejsa.se
    acl 2_tystpc.nu hdr_end(host) -i tystpc.nu
    use_backend SRV-WEB2_http if 2_tystpc.nu
    acl 3_tystpc.se hdr_end(host) -i tystpc.se
    use_backend SRV-WEB2_http if 3_tystpc.se

backend SRV-WEB1_http
    mode http
    balance roundrobin
    timeout connect 3
    timeout server 3
    retries 3
    option httpchk
    server SRV-WEB1 192.168.2.2:80

Re: File uploads (multipart/form-data POST ) and transparent mode fail

2014-01-19 Thread PiBa-NL

Hi Magnus,

I'm currently in the process of automating the creation of this rule. It 
needs a little more testing, and together with some other new features I 
was already busy with, I think it will be ready in a week or so; it will 
be part of the pfSense package version 1.5-dev21 pkg v 0.7.


As for the current 'workaround', you can probably make an alias with all 
IPs you want to affect and use that in the floating rule. I haven't 
tested it, but I can't think of a reason why that wouldn't work.
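Sketching that alias idea in pf.conf syntax (illustrative only; the table name and addresses are made up, and this is untested, as the mail itself says):

```
# hypothetical table holding every backend server the rule should cover
table <dmz_servers> { 192.168.2.2, 192.168.2.3 }
pass out quick on $dmz_if proto tcp from any to <dmz_servers> \
    keep state (sloppy)
```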


Greets PiBa-NL

Magnus Thomé wrote on 19-1-2014 9:45:

Yee!


THANK YOU!!!


Works perfectly :-D


I guess I can set the floating rule Destination: Server-IP to a 
range of IP addresses? Or should I make one floating rule per server IP?




/Magnus



PS:

 For your information the config below does not contain the 
Transparent ClientIP


Yup. It was turned off so visitors could upload :-)







On Sat, Jan 18, 2014 at 11:51 PM, PiBa-NL piba.nl@gmail.com wrote:


Hi Magnus,

I have integrated that 'transparent' option into the
pfSense(FreeBSD8.3) haproxy-devel package.
And can confirm that there is an issue when sending a large POST.
For your information the config below does not contain the
Transparent ClientIP option.. Which would read source 0.0.0.0
usesrc clientip..

Also on pfSense the main firewall is 'pf' , but to get
'transparent' traffic working it was needed to in the background
also load and configure part of ipfw.. (this is also done for
captive portal..)
This so HAProxy gets to see the tcp traffic, and prevent replies
from being routed out the wan interface..

The solution is to configure a floating rule like this:
Action: Pass
Quick: YES
Interface: DMZ (the one pointing to your server..)
Direction: Out
Protocol: TCP
Source: ANY
Destination: Server-IP
Destination: Server-PORT
State Type: sloppy state

Ill try and see if i can automate that, and if not at least put a
warning note that this kind of rule must be added for it to work
properly.

Can you confirm this indeed solves the issue?

Thanks PiBa-NL

Magnus Thomé wrote on 18-1-2014 9:32:

Last night during a couple of hours I took the time to read
through the whole documentation from start to finish (instead of
just doing keyword searches in it). But I really can't find
anything.

I set up option forceclose (and also option forwardfor which is
unrelated) just to see if anything happened but nope.

I wild guess from me as a total noob is that something is divided
into 64kbyte chunks,being that a buffer, a particular set of
packets or whatnot, and the first time 64kbytes is sent it goes
through ok but the second and further chunks go astray. I've
scratched my head wondering if the webserveror or possibly the
pfsense box has anything set in connection with 64kB and of
course also looked for anything like that in the haproxy
documentation. There are no problems sending items larger than
64kB in the other normal direction, to the browsers. Will
setting a cookie help?


ANY help or pointers in some direction would be deeply appreciated


/Magnus Thomé




On Fri, Jan 17, 2014 at 4:50 PM, Magnus Thomé magn...@gmail.com wrote:

I've really really searched for answers, both in the mailing
list archives and google but haven't been able to find
anything. Would deeply appreciate any help!

I'm running pfsense 2.1 with the only extra package installed
being haproxy-devel 1.5-dev19 pkg v 0.6

EVERYTHING works great but one single thing:

When doing a HTTP file upload with a FORM multipart/form-data
POST to any server behind the firewall it only works with
very small files, aprox max 60kbyte. With slightly larger
files I get a timeout page after a while and with even larger
files I get nothing at all.

It seems that when Transparent ClientIP is enabled and set
to DMZ the file uploads fail and with Transparent ClientIP
disabled all works perfectly as it should. I do need the
transparent mode though.


Is there a setting somewhere I've missed?


Thanks in advance for any possible help


/Magnus




--
The config created by pfsense GUI looks like this:


global
stats socket /tmp/haproxy.socket level admin
uid 80
gid 80
nbproc  1
chroot  /var/empty
daemon

frontend SRV-WEB1-merged
bind 83.250.27.152:80

Re: Healthcheck via https

2014-01-20 Thread PiBa-NL

just4hapr...@t-online.de wrote on 20-1-2014 20:59:

 server backend_server_1 server_ip:1 check-ssl
 server backend_server_2 server_ip:1 check-ssl

Try it like this:
 server backend_server_1 server_ip:1 check check-ssl
 server backend_server_2 server_ip:1 check check-ssl



Re: limit sticky connection count?

2014-01-21 Thread PiBa-NL

Hi Michael,

It seems like you keep wanting to use hdr(host); can I ask why?
You seem to explain that you just want to stick 'src' to a 'server', so 
why configure hdr(host)?


Can you try this config? (P.s. 60 seconds isn't very long..)

listen http
  balance roundrobin
  stick-table type ip size 100 expire 60s
  stick on src
  server www01 127.0.0.1 check observe layer7
  server www02 127.0.0.2 check observe layer7
  server www03 127.0.0.3 check observe layer7

Greets PiBa-NL

Michael Johnson - MJ wrote on 21-1-2014 21:57:
Is there a way to limit the number of sticky connections to a single 
server?
I would like to have all traffic to a given virtual host tend to end 
up on the same backend server, but also allow the traffic to spread to 
multiple servers if more than say 50 connections are already stuck to 
a given server.  I do need sessions from individual ips to stay stuck, 
just new sessions from new ips to stick to a different host.  I've 
considered a few options, but none of them seem to work.


The first option I tried was:

hash-type consistent
listen http
  balance hdr(host)
  stick-table type ip size 100 expire 60s
  stick on src
  server www01 127.0.0.1 check observe layer7
  server www02 127.0.0.2 check observe layer7
  server www03 127.0.0.3 check observe layer7

The problem with this is that all requests for 'host' seem to always 
route to the same backend node unless it fails out entirely and then 
it all moves to another single node.  Perhaps that shouldn't surprise 
me, but it did.


I then considered this setup:

listen http
  balance roundrobin
  stick-table type ip size 100 expire 60s
  stick on hdr(host)
  server www01 127.0.0.1 check observe layer7
  server www02 127.0.0.2 check observe layer7
  server www03 127.0.0.3 check observe layer7

This also doesn't work and seems to result in basically the same thing 
as the first.


I've been reading through the docs, and I can't seem to come up with 
any way to do what I am trying to accomplish.  I'm guessing that is 
because there is not.  But I figured I would throw this out to the 
list before I scrap my plan and take a different route.  Thanks!


--
Michael Johnson - MJ





Re: Use one backend server at a time

2014-01-30 Thread PiBa-NL
I'm not 100% sure, but if I remember correctly something I read, it was 
like using a 'stick on dst' stick-table.


That way the stick-table will make sure all traffic goes to a single 
server, and only when it fails will another server be put into the 
stick-table, which will only ever have one entry.


You might want to test what happens when the haproxy configuration is 
reloaded.. But if you configure 'peers', the new haproxy process should 
still have the same 'active' backend..


P.s. that is, if I'm not mixing stuff up...

Ryan O'Hara wrote on 30-1-2014 17:42:

I'd like to define a proxy (tcp mode) that has multiple backend
servers yet only uses one at a time. In other words, traffic comes
into the frontend and is redirected to one backend server. Should that
server fail, another is chosen.

I realize this might be an odd thing to do with haproxy, and if you're
thinking that simple VIP failover (ie. keepalived) is better suited
for this, you are correct. Long story.

I've gotten fairly close to achieving this behavior by having all my
backend servers declared 'backup' and not using 'allbackups'. The only
caveat is that these backup servers have a preference based on the
order they are defined in. Say my servers are defined in the backend like
this:

server foo-01 ... backup
server foo-02 ... backup
server foo-03 ... backup

If foo-01 is up, all traffic will go to it. When foo-01 is down, all
traffic will go to foo-02. When foo-01 comes back online, traffic goes
back to foo-01. Ideally the active backend server would change only when it
failed. Besides, this solution is rather ugly.

Is there a better way?

Ryan






Re: Use one backend server at a time

2014-01-30 Thread PiBa-NL

OK, found it again, in the part about Automatic failover without failback:
http://blog.exceliance.fr/2014/01/17/emulating-activepassing-application-clustering-with-haproxy/
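A minimal sketch of that technique (my illustration based on the linked article, not a config from this thread; the backend name, server names and addresses are placeholders):

```
backend active_passive
    mode tcp
    # a single-slot table: 'dst' is identical for every connection, so
    # only the currently elected server is ever stored; the entry is
    # replaced only when that server fails (failover without failback)
    stick-table type ip size 1
    stick on dst
    server app1 192.168.0.11:80 check
    server app2 192.168.0.12:80 check
```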

PiBa-NL wrote on 30-1-2014 19:14:
Im not 100% sure but if i remember something i read correctly it was 
like using a stick on dst stick-table.


That way the sticktable will make sure all traffic go's to a single 
server, and only when it fails another server will be put in the 
sticktable that will only have 1 entry.


You might want to test what happens when haproxy configuration is 
reloaded.. But if you configure 'peers' the new haproxy process should 
still have the same 'active' backend..


p.s. That is if im not mixing stuff up...

Ryan O'Hara wrote on 30-1-2014 17:42:

I'd like to define a proxy (tcp mode) that has multiple backend
servers yet only uses one at a time. In other words, traffic comes
into the frontend and is redirected to one backend server. Should that
server fail, another is chosen.

I realize this might be an odd thing to do with haproxy, and if you're
thinking that simple VIP failover (ie. keepalived) is better suited
for this, you are correct. Long story.

I've gotten fairly close to achieving this behavior by having all my
backend servers declared 'backup' and not using 'allbackups'. The only
caveat is that these backup servers have a preference based on the
order they are defined in. Say my servers are defined in the backend like
this:

server foo-01 ... backup
server foo-02 ... backup
server foo-03 ... backup

If foo-01 is up, all traffic will go to it. When foo-01 is down, all
traffic will go to foo-02. When foo-01 comes back online, traffic goes
back to foo-01. Ideally the active backend server would change only when it
failed. Besides, this solution is rather ugly.

Is there a better way?

Ryan








Re: Use one backend server at a time

2014-01-30 Thread PiBa-NL
This should (I expect) work with any number of backup servers, as long 
as you only need one active.


Ryan O'Hara wrote on 30-1-2014 19:34:

On Thu, Jan 30, 2014 at 07:14:30PM +0100, PiBa-NL wrote:

Im not 100% sure but if i remember something i read correctly it was
like using a stick on dst stick-table.

That way the sticktable will make sure all traffic go's to a single
server, and only when it fails another server will be put in the
sticktable that will only have 1 entry.

Yes. That sounds accurate.


You might want to test what happens when haproxy configuration is
reloaded.. But if you configure 'peers' the new haproxy process
should still have the same 'active' backend..

p.s. That is if im not mixing stuff up...

This blog has something very close to what I'd like to deploy:

http://blog.exceliance.fr/2014/01/17/emulating-activepassing-application-clustering-with-haproxy/

The only difference is that I'd like to have more than just one
backup. I'll try to find some time to experiment in the next few days.

Thanks.
Ryan



Ryan O'Hara wrote on 30-1-2014 17:42:

I'd like to define a proxy (tcp mode) that has multiple backend
servers yet only uses one at a time. In other words, traffic comes
into the frontend and is redirected to one backend server. Should that
server fail, another is chosen.

I realize this might be an odd thing to do with haproxy, and if you're
thinking that simple VIP failover (ie. keepalived) is better suited
for this, you are correct. Long story.

I've gotten fairly close to achieving this behavior by having all my
backend servers declared 'backup' and not using 'allbackups'. The only
caveat is that these backup servers have a preference based on the
order they are defined in. Say my servers are defined in the backend like
this:

server foo-01 ... backup
server foo-02 ... backup
server foo-03 ... backup

If foo-01 is up, all traffic will go to it. When foo-01 is down, all
traffic will go to foo-02. When foo-01 comes back online, traffic goes
back to foo-01. Ideally the active backend server would change only when it
failed. Besides, this solution is rather ugly.

Is there a better way?

Ryan








Re: SSL load-balancing across multiple HAProxy instances

2014-02-14 Thread PiBa-NL

I think this is the issue in the mode http frontend:

req_ssl_hello_type : integer (deprecated)
this will not work with bind lines having the 'ssl' option


Patrick Hemmer wrote on 14-2-2014 22:34:
You haven't told it to use SSL when talking to the servers listening 
on :4443. By default haproxy is going to use non-SSL TCP.


Add the `ssl` option to both of your `server` parameters.

http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2-ssl

-Patrick
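Applied to the backend shown further down in this thread, Patrick's suggestion would turn the two server lines into the following (a sketch of the suggested change only, not a tested config):

```
server test1 192.168.1.5:4443 ssl
server test2 192.168.1.10:4443 ssl
```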



From: m...@hawknetdesigns.com
Sent: 2014-02-14 16:21:02 E
To: haproxy@formilux.org
Subject: SSL load-balancing across multiple HAProxy instances


Hi all,

I'm working on a load-balanced instance using HAProxy, Varnish, and 
back-end web servers.


I've successfully tested the new SSL termination feature using dev 
build 1.5-dev22-1a34d57 2014/02/03, and it works well; however, I 
want to load-balance the SSL termination feature across more than one 
HAProxy instance, like so:


Main HAProxy instance on 192.168.1.5, secondary on 192.168.1.10

Varnish servers on 192.168.1.20 and 192.168.1.30

Previously, I was terminating SSL on the single active HAProxy 
(192.168.1.5), and speaking HTTP to the Varnish back-ends.  This 
works well.


What I'd like to do is

Request comes in to HAProxy on port 443.  Request is then load 
balanced to the two HAProxy servers in tcp mode to 192.168.1.5:4443 
and 192.168.1.10:4443 - maintaining SSL mode until it terminates at 
port 4443.


An example config (just the relevant sections) would be this:

listen ssl_relay
bind 192.168.1.5:443
mode tcp
option socket-stats
#option ssl-hello-chk
tcp-request inspect-delay 5s
tcp-request content accept if { req_ssl_hello_type 1 }
tcp-request content accept if { req_ssl_hello_type 1 }
default_backend test

frontend incoming
bind 192.168.1.5:80
mode http
log global
option forwardfor
bind 192.168.1.5:4443 no-sslv3 ssl crt /certs/haproxy.pem crt 
/certs/ ciphers RC4-SHA:AES128-SHA:AES256-SHA

mode http
log global
option forwardfor
tcp-request inspect-delay 5s
tcp-request content accept if { req_ssl_hello_type 1 }


backend test
mode tcp
balance roundrobin
# maximum SSL session ID length is 32 bytes.
stick-table type binary len 32 size 30k expire 30m

acl clienthello req_ssl_hello_type 1
acl serverhello rep_ssl_hello_type 2

# use tcp content accepts to detects ssl client and server 
hello.

tcp-request inspect-delay 5s
tcp-request content accept if clienthello

# no timeout on response inspect delay by default.
tcp-response content accept if serverhello

# SSL session ID (SSLID) may be present on a client or server 
hello.
# Its length is coded on 1 byte at offset 43 and its value 
starts

# at offset 44.

# Match and learn on request if client hello.
stick on payload_lv(43,1) if clienthello

# Learn on response if server hello.
stick store-response payload_lv(43,1) if serverhello

server test1 192.168.1.5:4443
server test2 192.168.1.10:4443

http works, and I receive requests on port 443, but this is all I get 
from the HAProxy log:


:ssl_relay.accept(0006)=0009 from [192.168.1.2:50496]
:test.clireq[0009:]:
:test.clicls[0009:]
:test.closed[0009:]

It appears that HAProxy is not speaking or passing through SSL to the 
frontend on port 4443.


curl -i https://192.168.1.5/
curl: (35) error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown 
protocol


So... what am I missing?

Cheers,
-=Mark









Re: AW: Keeping statistics after a reload

2014-02-28 Thread PiBa-NL

Hi Andreas,

It's not like your question was wrong; probably there is just no 
good/satisfying short answer to this, and it was overrun by other mails...


As far as I know, it is not possible to keep this kind of information 
persisted in haproxy itself when a config restart is needed.


The -sf only makes sure old connections will be nicely closed when they 
are 'done'.


I have 'heard' of statistics-gathering tools that use the haproxy unix 
stats socket to query the stats and store the information in a separate 
database; that way you could get continued statistics after the config is 
changed. I don't have any examples of how to do this, or the name of 
such a tool in mind, though. Googling for haproxy monitoring 
quickly shows some commercial tools that have haproxy plugins and 
would probably provide answers to the questions you have.
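As an illustration of that approach (my sketch, not from the original mail; it assumes socat is installed and that haproxy was started with the admin stats socket path used elsewhere in this archive):

```
# dump the per-proxy/per-server counters as CSV from the running
# haproxy; archiving each snapshot externally preserves history
# across config reloads
echo "show stat" | socat stdio /tmp/haproxy.socket > stats.csv
```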


Maybe others on the list do use programs/scripts/tools to also keep 
historical/cumulative data for haproxy and can share their experience 
with it?


Greets PiBa-NL

Andreas Mock wrote on 28-2-2014 16:33:

Hi all,

the list is normally really responsive. In this case nobody
gave an answer. So, I don't know whether my question was such a
stupid one that nobody wanted to answer.

So, I bring it up again in the hope that someone will answer:
is there a way to reload the configuration without losing
current statistics? Or is this conceptually not possible?

Best regards
Andreas Mock

-Original message-
From: Andreas Mock [mailto:andreas.m...@drumedar.de]
Sent: Monday, 24 February 2014 16:36
To: haproxy@formilux.org
Subject: Keeping statistics after a reload

Hi all,

is there a way to reload a haproxy config without resetting the
statistics shown on the stats page?

I used

haproxy -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)

to make such a reload. But after that all statistics are reset.

Best regards
Andreas Mock








Re: inspecting incoming tcp content

2014-03-03 Thread PiBa-NL

Hi,

I'm not sure whether this is the exact issue Anup was having, and maybe 
I'm hijacking his thread (if so, I'm sorry for that), but when trying to 
check how it works I am also having difficulties getting it to work as I 
expected it to.


I'm using HAProxy v1.5dev21 on FreeBSD 8.3.

I've written the following in a frontend, which checks for a GET web 
request to determine which backend to use; this works:

mode tcp
tcp-request inspect-delay 5s
acl PAYLOADcheck req.payload(0,3) -m bin 474554
use_backend web_80_tcp if PAYLOADcheck
tcp-request content accept if PAYLOADcheck

However when changing the match line to the following it fails:
acl PAYLOADcheck req.payload(0,3) -m str GET
or
acl PAYLOADcheck req.payload(0,3) -m sub GET
or
acl PAYLOADcheck req.payload(0,3) -m reg -i GET

The req.payload returns a piece of 'binary' data, but the 'compatibility 
matrix' seems to say that converting for use with sub/reg/others should 
not be an issue.


Then the next step is of course to not match only the first 3 characters 
but some content further in the 'middle' of the data stream..


Am I missing something? Or might there be an issue with the implementation?

This is currently only to find out if and how that req.payload check can 
be used. Of course, using 'mode http' would be much better for this 
purpose when running HTTP traffic, but that isn't the purpose of this 
question..


I've spoken on IRC with mculp, who was trying something similar but 
couldn't get it to work either, and I've seen a previous question 
http://comments.gmane.org/gmane.comp.web.haproxy/11942 which seems to 
have gone without a final solution as well.


So the question is, is this possible or might there be some issues in 
'converting' the checks?

Thanks for your time.

Greets PiBa-NL

Baptiste wrote on 28-2-2014 10:57:

Hi,

and where is your problem exactly?

Baptiste

On Tue, Feb 25, 2014 at 7:39 AM, anup katariya anup.katar...@gmail.com wrote:

Hi,

I wanted to inspect the incoming tcp request. I wanted to do something like the below:

payload(0, 100) matched with a string like 49=ABC.

Thanks,
Anup








Re: inspecting incoming tcp content

2014-03-04 Thread PiBa-NL

OK, it seems to work now knowing this, though it has some side effects.

I could now match param=TEST using the following acl:
acl PAYLOADcheck req.payload(0,0) -m reg -i 706172616d3D54455354

Case-insensitive matching works 'perfectly', but only for the hex code 
(see the D and d above); it doesn't match different cases of the letters 
themselves, which is what one would probably expect. So even though I 
use -i, if I use the word TEST in lower case it doesn't match anymore.


There might be a workaround for that with the ',lower' converter (I 
didn't confirm whether that is applied before the hex conversion).


Also the current documentation gives several examples which indicate a 
different working:


On systems where the regex library is much slower when using -i, it is 
possible to convert the sample to lowercase before matching, like this : 
acl script_tag payload(0,500),lower -m reg script


This doesn't work for detecting the text script, as its hex 
equivalent would have to be there instead; also, if fewer than 500 bytes 
are sent in the initial request, it doesn't match at all.


So it seems this part of the manual could use a little more 
clarification (praise, though, for the overall completeness/clarity of 
the manual!).
If the implementation now changes to match the manual, and possibly an 
additional 'tohex' option is added, that would be great. As it is used in 
mode tcp, the option should certainly exist to match binary/hex values 
that cannot easily be expressed as normal text. So the original design 
implementation does make sense, just not for 'textual' protocols.


Thanks for investigating.
PiBa-NL

Willy Tarreau wrote on 4-3-2014 17:28:

On Tue, Mar 04, 2014 at 04:51:56PM +0100, Thierry FOURNIER wrote:

The 'bin' match gets the configuration string 474554 and converts it to
the binary sequence GET. The 'str' match gets the configuration string
GET and uses it as is.

The fetch req.payload() returns binary content. When you try to
match with the 'str' method, the binary content is converted to a string. The
converter produces a string representing the hexadecimal content: 474554.

If you write

acl PAYLOADcheck req.payload(0,3) -m str 474554

The system works perfectly.

This behavior is not intuitive. Maybe it can be change later.

Indeed, thank you for diagnosing this. Originally we chose to cast
bin to str as hex dump because it was only used in stick tables. But
now that we support other storage and usages, it becomes less and
less natural. I think we'll change this before the final release so
that bin automatically casts to str as-is and we'll add a tohex
converter for people who want to explicitly convert a bin to an hex
string.

Willy
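For intuition only (plain Python, not haproxy code): the bin-to-str cast described above renders the sample bytes as a hex dump, which is why the literal pattern GET fails to match while its hex form does:

```python
# The three payload bytes "GET" become the six-character hex string
# "474554" when dumped, so a `-m str` pattern must be written in hex
# under the 1.5-dev behavior discussed in this thread.
payload = b"GET"
hex_dump = payload.hex()
print(hex_dump)  # -> 474554
```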






Re: recent test for dev22 on BSD

2014-03-20 Thread PiBa-NL
Hi Simon,

1- pf divert-reply,
The issue with pf, I think, is the following text in /sbin/pfctl/parse.y:
'yyerror(divert-reply has no meaning in FreeBSD pf(4));'
I was surprised to find it, even though the option is listed on the man pages.

So it sounds like it is not possible with a current native FreeBSD install.
In any case, I haven't been successful in writing a firewall rule that contains
divert-reply and can be validated by pfctl. I do think that would be the
best solution; reading the text in the man page, it seems exactly what is
needed: catching reply packets for a nonlocal socket.

Were you able to write such a firewall rule?

I'm now trying to manually merge an old patch I found online:
http://lists.freebsd.org/pipermail/freebsd-net/2009-June/022166.html
It seems some parts are present already, while others are 'missing' in the
current sources...
I'm not sure that will end up working, as some bitflags are already used
for other options and I am not sure whether that's resolvable by me.

2- crash on FreeBSD,
I've not seen it myself, but another user did report on FreeBSD 8.3
(pfSense 2.1) that he also experiences crashes with dev20 and dev22:
https://forum.pfsense.org/index.php?topic=73927.0

About 3 and 4, I have no clue..

Greets
PiBa-NL


k simon wrote on 20-3-2014 16:12:
 Hi lists,
   I tested dev22 on FreeBSD 10-stable recently, and found:
 1. ipfw fwd works well with dev22+tproxy. It has a nice guide in
 /usr/local/share/examples.
 But pf's divert-to and divert-reply can't work with haproxy. Maybe
 haproxy does not use getsockname(2) and setsockopt(2).

 2. There is an issue with option http-server-close: haproxy crashed
 after a while, whenever it was set on a frontend or backend.

 3. It sometimes stalled with tcp-smart-connect and tcp-smart-accept;
 when I removed them, it worked normally. But I am not sure about it.

 4. dev22 can be compiled on DragonflyBSD, but it silently stalls.



 Regards
 Simon





Re: Multiple/non-standard ssl ports on one frontend?

2014-06-03 Thread PiBa-NL

Justin Rush wrote on 3-6-2014 18:19:

Hi Manfred,


On Tue, Jun 3, 2014 at 11:12 AM, Manfred Hollstein mhollst...@t-online.de wrote:



Can you try if curl -k http://proxy.prod:8080/health works? If
I'm not mistaken, https:// implicitly uses port 443, but I don't know
how the explicit :8080 might interfere with that.


As I expected, this gets an empty reply:

$ curl -k http://app.prod:8080/health
curl: (52) Empty reply from server

So, haproxy is definitely listening on 8080 and expecting an SSL client.

(Also, I just realized my sanitizing above was wrong, my curl commands 
are all to app.prod, not proxy.prod.)


Can you give it a try with:
use_backend ssl_app if { hdr_sub(host) -i app.prod:8080 }

I think I've seen, at least with SNI requests, that the (non-standard) 
port is part of the SNI name indication; I'm not sure how plain HTTP is 
handled.


Greets PiBa-NL



failing health checks, when using unix sockets, with ssl serverbinding, 1.5.3

2014-08-16 Thread PiBa-NL

Hi haproxy-list,

I have some strange results trying to use unix sockets to connect 
backends to frontends.

I'm using 1.5.3 on FreeBSD 8.3. (pfSense)

With the config below, the result I get is that srv1, 2, 3 and 5 are 
serving requests correctly (I can put all the others into maintenance mode 
and the stats keep working).


And srv4 is down because of lastchk: L6TOUT. It seems to me this 
behavior is inconsistent?


If anyone could confirm whether this is indeed a problem in haproxy, or 
tell me if there is a reason for this, please let me know.


The config below is just what I narrowed it down to, to have an 
easy-to-reproduce issue while finding out why I was having trouble 
forwarding a TCP backend to an SSL-offloading frontend.
What I wanted to have is a TCP frontend using SNI to forward connections 
to the proper backends, and a default backend that does SSL offloading 
and then uses the host header to send the requests to the proper 
backend. The purpose would be to minimize the load on haproxy itself, 
while maximizing supported clients (XP and older mobile devices).


Thanks in advance.
PiBa-NL

global
    daemon
    gid 80
    ssl-server-verify none
    tune.ssl.default-dh-param 1024
    chroot /tmp/haproxy_chroot

defaults
    timeout connect 3
    timeout server 3

frontend 3in1
    bind 0.0.0.0:800
    mode tcp
    timeout client 3
    default_backend local84_tcp

backend local84_tcp
    mode tcp
    retries 3
    option httpchk GET /
    server srv1 127.0.0.1:1000 send-proxy check inter 1000
    server srv2 /stats1000.socket send-proxy check inter 1000
    server srv3 127.0.0.1:1001 send-proxy ssl check inter 1000 check-ssl
    server srv4 /stats1001.socket send-proxy ssl check inter 1000 check-ssl
    server srv5 /stats1001.socket send-proxy ssl

frontend stats23
    bind 0.0.0.0:1000 accept-proxy
    bind /tmp/haproxy_chroot/stats1000.socket accept-proxy
    bind 0.0.0.0:1001 accept-proxy ssl crt /var/etc/haproxy/stats23.85.pem
    bind /tmp/haproxy_chroot/stats1001.socket accept-proxy ssl crt /var/etc/haproxy/stats23.85.pem
    mode http
    timeout client 3
    default_backend stats_http

backend stats_http
    mode http
    retries 3
    stats enable
    stats uri /
    stats admin if TRUE
    stats refresh 1






failing health checks, when using unix sockets, with ssl serverbinding, 1.5.3

2014-08-23 Thread PiBa-NL
Resending, as I didn't see a reply so far; I think it got lost between the 
other conversations.
It would be nice if someone could tell me what the problem might be: my 
config, or something in haproxy. Thanks.


Hi haproxy-list,

I have some strange results trying to use unix sockets to connect
backends to frontends.
I'm using 1.5.3 on FreeBSD 8.3. (pfSense)

With the config below, the result I get is that srv1, 2, 3 and 5 are
serving requests correctly (I can put all the others into maintenance mode
and the stats keep working).

And srv4 is down because of lastchk: L6TOUT. It seems to me this
behavior is inconsistent?

If anyone could confirm whether this is indeed a problem in haproxy, or
tell me if there is a reason for this, please let me know.

The config below is just what i narrowed it down to to have an easy to
reproduce issue to find why i was having trouble forwarding a tcp
backend to a ssl offloading frontend..
What i wanted to have is a TCP frontend using SNI to forward connections
to the proper backends. And have a defaultbackend that does
SSLoffloading, and then uses host header to send the requests to the
proper backend. The purpose would be to minimize the load on haproxy
itself, while maximizing supported clients (XP and older mobile devices).

Thanks in advance.
PiBa-NL

global
    daemon
    gid 80
    ssl-server-verify none
    tune.ssl.default-dh-param 1024
    chroot /tmp/haproxy_chroot

defaults
    timeout connect 3
    timeout server 3

frontend 3in1
    bind 0.0.0.0:800
    mode tcp
    timeout client 3
    default_backend local84_tcp

backend local84_tcp
    mode tcp
    retries 3
    option httpchk GET /
    server srv1 127.0.0.1:1000 send-proxy check inter 1000
    server srv2 /stats1000.socket send-proxy check inter 1000
    server srv3 127.0.0.1:1001 send-proxy ssl check inter 1000 check-ssl
    server srv4 /stats1001.socket send-proxy ssl check inter 1000 check-ssl
    server srv5 /stats1001.socket send-proxy ssl

frontend stats23
    bind 0.0.0.0:1000 accept-proxy
    bind /tmp/haproxy_chroot/stats1000.socket accept-proxy
    bind 0.0.0.0:1001 accept-proxy ssl crt /var/etc/haproxy/stats23.85.pem
    bind /tmp/haproxy_chroot/stats1001.socket accept-proxy ssl crt /var/etc/haproxy/stats23.85.pem
    mode http
    timeout client 3
    default_backend stats_http

backend stats_http
    mode http
    retries 3
    stats enable
    stats uri /
    stats admin if TRUE
    stats refresh 1








Re: smtp cluster with haproxy

2014-08-27 Thread PiBa-NL

Hi Fraj,

Please define 'doesn't work'?

With the config you attached you should get an error while starting
haproxy, something like this:

haproxy -c -f /tmp/Fraj_haproxy.cfg
[ALERT] 238/191742 (98396) : parsing [/tmp/Fraj_haproxy.cfg:10] : 
'server smtp1' unknown keyword 'option'. Registered keywords : .etc etc


You should replace 'option smtpchk' on the server line with 'check'; the 
'option smtpchk' in the section will already make sure the checks are 
done using SMTP.

option smtpchk
server  smtp1 192.168.1.38:10024  maxconn 100  check

If it then still doesn't work I would add a listen section for stats to 
check whether haproxy does 'see' the servers as 'up'.
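For example, a minimal (untested) sketch of such a stats section, with a placeholder port:

```haproxy
# Hypothetical stats page, just to verify haproxy 'sees' the servers as up.
listen stats
    bind *:9000            # placeholder port; pick any free one
    mode http
    stats enable
    stats uri /
    stats refresh 10s      # auto-refresh the page every 10 seconds
```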


Greets PiBa-NL

Fraj KALLEL wrote on 27-8-2014 18:05:



hello,

I use postfix 2.11 and haproxy 1.5 to set up an SMTP cluster.
My configuration is attached, and the cluster doesn't work.

Do you have any idea how to solve this problem?

i have 3 servers:
192.168.1.32 haproxy server
192.168.1.38 smtp server1
192.168.1.39 smtp server2

192.168.1.40 virtual ip

You will find my configuration files in the attachments.

Sincerely yours,
Fraj KALLEL






tcp-request content track-sc2 with if statement doesn't work?

2014-09-06 Thread PiBa-NL

Hi list,

Inspired by a blog post about WordPress brute-force protection [0], I'm 
trying to use the same kind of method in a frontend/backend configuration.
I did change the method from POST to GET for easier testing, but that 
doesn't matter for retrieving the gpc counter, does it?


So i was trying to use this:
tcp-request content track-sc1  base32+src  if METH_GET login

It however doesn't seem to work using HAProxy 1.5.3: the acl containing 
"sc1_get_gpc0 gt 0" never seems to get the correct gpc0 value, even 
though I have examined the stick-table and the gpc0 value there is 
increasing.

If i change it to the following it starts working:
tcp-request content track-sc1  base32+src

Even though the use_backend in both cases checks those first criteria:
acl flagged_as_abuser sc1_get_gpc0 gt 0
use_backend pb3_453_http if METH_GET wp_login flagged_as_abuser

Am I doing something wrong, is the blog outdated, or was a bug 
introduced somewhere?


If more information is needed, perhaps -vv output or the full config, let 
me know; thanks for any reply.


p.s. did anyone get my other emails a while back? [1]

Kind regards,
PiBa-NL

[0] 
http://blog.haproxy.com/2013/04/26/wordpress-cms-brute-force-protection-with-haproxy/

[1] http://marc.info/?l=haproxym=140821298806125w=2



Re: tcp-request content track-sc2 with if statement doesn't work?

2014-09-07 Thread PiBa-NL

Baptiste wrote on 7-9-2014 17:13:

On Sun, Sep 7, 2014 at 2:55 PM, PiBa-NL piba.nl@gmail.com wrote:

Hi Baptiste,

Thanks, that indeed fixes my issue, with the following:
   tcp-request inspect-delay 10s
   tcp-request content track-sc1  base32+src  if METH_GET wp_login
   tcp-request content accept if HTTP

I didn't think about inspect-delay because both frontend and backend are
using 'mode http', and I only used to use inspect-delay with frontends in
tcp mode. Though maybe the 'tcp-request' keyword should have given me that
hint. The 'accept' must be below the 'track-sc1' to make it work.
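Putting the pieces of this thread together, the working order would be roughly the sketch below (untested as a whole; the 'wp_login' acl and the backend name come from the earlier mails and are assumed to be defined elsewhere):

```haproxy
frontend pb3_front
    # give the content rules time to inspect the request
    tcp-request inspect-delay 10s
    # track first; the 'accept' must come after the 'track-sc1' rule
    tcp-request content track-sc1 base32+src if METH_GET wp_login
    tcp-request content accept if HTTP
    acl flagged_as_abuser sc1_get_gpc0 gt 0
    use_backend pb3_453_http if METH_GET wp_login flagged_as_abuser
```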

Could you perhaps also add this to the blog article, or should I post a
comment under it so other people don't fall into the same mistake?

Thanks,
PiBa-NL

Baptiste wrote on 7-9-2014 11:38:


On Sat, Sep 6, 2014 at 9:16 PM, PiBa-NL piba.nl@gmail.com wrote:

Hi list,

Inspired by a blog about wordpress bruteforce protection [0] , i'm trying
to
use this same kind of method in a frontend/backend configuration.
I did change the method from POST to GET, for easier testing, but that
doesn't matter for retrieving the gpc counter, does it?

So i was trying to use this:
tcp-request content track-sc1  base32+src  if METH_GET login

It however doesn't seem to work using HAProxy 1.5.3, the acl containing
sc1_get_gpc0 gt 0 never seems to get the correct gpc0 value, even
though i
have examined the stick-table and the gpc0 value there is increasing.
If i change it to the following it starts working:
tcp-request content track-sc1  base32+src

Even though the use_backend in both cases checks those first criteria:
acl flagged_as_abuser sc1_get_gpc0 gt 0
use_backend pb3_453_http if METH_GET wp_login
flagged_as_abuser

Am i doing something wrong, is the blog outdated, or was a bug introduced
somewhere?

If more information perhaps -vv or full config is needed let me know,
thanks for any reply.

p.s. did anyone get my other emails a while back? [1]

Kind regards,
PiBa-NL

[0]

http://blog.haproxy.com/2013/04/26/wordpress-cms-brute-force-protection-with-haproxy/
[1] http://marc.info/?l=haproxym=140821298806125w=2


Hi,

Please let us know if you have the following configuration lines (or
equivalent) before your tracking rule:
tcp-request inspect-delay 10s
tcp-request accept if HTTP

Baptiste



Hi,

Article updated.

Baptiste

Hi Baptiste,

Thanks, however there are now 2 issues with that.
- The 'accept' must be below the 'track-sc1' to make it work (at least 
in my tests).
- There is a syntax error: the 'content' keyword is missing; it should read 
'tcp-request content accept if HTTP'.


In the backend I didn't seem to need the inspect-delay, probably because 
the frontend has already filled the buffers, since it is in 'http' mode.


Thanks,
PiBa-NL



Re: About the ssl check

2014-09-15 Thread PiBa-NL

Zebra wrote on 16-9-2014 2:58:

Hi,all

  I configure one back-end using tcp mode, and I want to SSH to the 
server(s) behind the back-end, just for testing. So I used check-ssl to 
enable an SSL check.


backend ssh_servers
mode tcp
server server2 192.168.10.95:22 check-ssl  check inter 5s fall 
1 maxconn 32000


But this always fails; why is that?

Looking forward to your reply. Thanks!

SSH != SSL.
SSH uses a protocol that is not compatible with a normal SSL connection.
I don't think a health check for an SSH connection currently exists in 
haproxy.
Maybe you could configure one with 'option tcp-check' and your own 
send/expect values. I'm not sure that works, but the manual does 
mention SSH there, so it might.
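For example, an untested sketch of such a check, matching the SSH banner (this assumes the server greets with an 'SSH-2.0-...' identification string, as OpenSSH does):

```haproxy
backend ssh_servers
    mode tcp
    option tcp-check
    tcp-check connect port 22
    # an SSH server sends its version banner right after the connection opens
    tcp-check expect rstring ^SSH-2\.0
    server server2 192.168.10.95:22 check inter 5s fall 1 maxconn 32000
```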





Re: About the health check

2014-09-15 Thread PiBa-NL

Zebra wrote on 16-9-2014 3:08:

Hi,all

  I configure the backend with one server and want to make a health 
check for it using tcp. The configuration is as below.


backend httpservers
  option tcp-check
This actually makes it perform tests on a higher layer: "Perform health 
checks using tcp-check send/expect sequences".
If you remove the 'option tcp-check' from the config it will probably do 
a plain layer-4 check.



  server server2 192.168.10.95:22 check inter 5s fall 1 maxconn 32000

  But I find the log output  below:

Sep 16 01:03:34 localhost haproxy[30429]: Health check for server 
httpservers/server2 succeeded, reason: Layer7 check passed, code: 0, 
info: (tcp-check), check duration: 0ms, status: 1/1 UP.


  I could not understand why the Layer 7 check passed, for I think 
tcp-check only works on Layer 4.


  Could you tell me more about this ?


Looking forward to your reply, thanks!








Re: 回复: About the health check

2014-09-16 Thread PiBa-NL

Hi Zebra,

I think it stops after the 3-way handshake because your configuration is 
not using any send/expect values, so after the connection is made it is 
immediately done 'checking' the layer 7 part. Something like this would 
be the proper way to use tcp-check:


option tcp-check
tcp-check send PING\r\n
tcp-check expect string +PONG
tcp-check send info\ replication\r\n
tcp-check expect string role:master
tcp-check send QUIT\r\n
tcp-check expect string +OK

Have you tried removing that 'option tcp-check' from your configuration 
like I wrote before? It should then default to a simple layer-4 
3-way-handshake check.


Zebra wrote on 16-9-2014 3:53:

Hi, PiBa-NL

  Thank you for your reply .
  But I used tcpdump and found the check only tries to make one tcp 
three-way handshake, and even the tcp ACK packet is not sent.

  This is the result :

  root@ubuntuforhaproxy:/home# tcpdump -lnvvvXei eth0 tcp port 22 and 
src 192.168.10.95 or dst 192.168.10.95
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 
65535 bytes
01:52:21.188205 fa:16:3e:29:d8:8e > fa:16:3e:05:d6:dd, ethertype IPv4 
(0x0800), length 74: (tos 0x0, ttl 64, id 46206, offset 0, flags [DF], 
proto TCP (6), length 60)
192.168.10.94.60528 > 192.168.10.95.22: Flags [S], cksum 0x963c 
(incorrect - 0xa91a), seq 1728571217, win 29200, options [mss 
1460,sackOK,TS val 146297647 ecr 0,nop,wscale 7], length 0

0x:  4500 003c b47e 4000 4006 f02f c0a8 0a5e  E...~@.@../...^
0x0010:  c0a8 0a5f ec70 0016 6707 e751    ..._.p..g..Q
0x0020:  a002 7210 963c  0204 05b4 0402 080a  ..r
0x0030:  08b8 532f   0103 0307  ..S/
01:52:21.189789 fa:16:3e:05:d6:dd > fa:16:3e:29:d8:8e, ethertype IPv4 
(0x0800), length 74: (tos 0x0, ttl 64, id 0, offset 0, flags [DF], 
proto TCP (6), length 60)
192.168.10.95.22 > 192.168.10.94.60528: Flags [S.], cksum 0x7eeb 
(correct), seq 952013707, ack 1728571218, win 28960, options [mss 
1460,sackOK,TS val 146298380 ecr 146297647,nop,wscale 7], length 0

0x:  4500 003c  4000 4006 a4ae c0a8 0a5f  E@.@.._
0x0010:  c0a8 0a5e 0016 ec70 38be 938b 6707 e752  ...^...p8...g..R
0x0020:  a012 7120 7eeb  0204 05b4 0402 080a  ..q.~...
0x0030:  08b8 560c 08b8 532f 0103 0307  ..V...S/
01:52:21.189819 fa:16:3e:29:d8:8e > fa:16:3e:05:d6:dd, ethertype IPv4 
(0x0800), length 54: (tos 0x0, ttl 64, id 878, offset 0, flags [DF], 
proto TCP (6), length 40)
192.168.10.94.60528 > 192.168.10.95.22: Flags [R], cksum 0xdef1 
(correct), seq 1728571218, win 0, length 0

0x:  4500 0028 036e 4000 4006 a154 c0a8 0a5e  E..(.n@.@..T...^
0x0010:  c0a8 0a5f ec70 0016 6707 e752    ..._.p..g..R
0x0020:  5004  def1   P...


-- Original message --
From: PiBa-NL
Sent: Tuesday, September 16, 2014, 9:31 AM
To: Zebra; haproxy
Subject: Re: About the health check
Zebra wrote on 16-9-2014 3:08:
 Hi,all

   I configure the backend with one server and want to make the health
 check for it using tcp.And the configuration as below.

 backend httpservers
   option tcp-check
This actually makes it perform tests on a higher layer: Perform health
checks using tcp-check send/expect sequences
If you remove the option tcp-check from the config it will probably do
layer4.

   server server2 192.168.10.95:22 check inter 5s fall 1 maxconn 32000

   But I find the log output  below:

 Sep 16 01:03:34 localhost haproxy[30429]: Health check for server
 httpservers/server2 succeeded, reason: Layer7 check passed, code: 0,
 info: (tcp-check), check duration: 0ms, status: 1/1 UP.

   I could not understand why Layer 7 check passed for I think the
 tcp-check only work for Layer 4.

   Could you tell me more about this ?


 Looking forward to your reply, thanks!









Re: tcp-check not checking

2014-09-19 Thread PiBa-NL

Hi Dennis,

'option tcp-check' requires additional send/expect rules to actually perform 
L7 checks.
For a simple L4 check, remove the line completely or add: 'tcp-check connect'.
You might also want to look at 'option httpchk', which is friendlier for basic 
HTTP checks.
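For instance, a hedged sketch of that backend using 'option httpchk' instead (untested; the checked URI is whatever the servers actually answer):

```haproxy
backend back-api
    mode http
    balance roundrobin
    # layer-7 check: the server is only marked up if it answers this request
    option httpchk GET /
    server web1 10.2.0.224:80 check
    server web2 10.2.0.254:80 check
```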

Greets PiBa-NL

Dennis Jacobfeuerborn wrote on 19-9-2014 19:45:

Hi,
I just configured the load-balancing for systems that are yet to be
installed, yet according to the tcp-check of haproxy these systems are
all available. This is the backend config I'm using right now:

backend back-api
 bind-process 1
 option tcp-check
 mode http
 balance roundrobin

 stick-table type ip size 100k expire 20m
 stick on src
 server web1 10.2.0.224:80 check
 server web2 10.2.0.254:80 check
 server web3 10.2.0.223:80 check
 server web4 10.2.0.253:80 check
 server web5 10.2.0.222:80 check
 server web6 10.2.0.252:80 check

When I look at the stats page all servers are marked active and LastChk
says "Layer7 check passed: (tcp-check)" even though none of the servers
are online yet.

Does anyone know the reason for this?

Regards,
   Dennis






Re: Session sticking to backup server

2014-09-29 Thread PiBa-NL
Take a look at 'non-stick' and/or 'on-marked-up 
shutdown-backup-sessions'; they might help with your issue.


Another option could be to remove the backup server from your config 
and serve the static page with 'errorfile 503 
/etc/haproxy/errorfiles/503sorry.http'.
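A rough, untested sketch combining those suggestions (server names and addresses are placeholders):

```haproxy
backend back-web
    mode http
    # once web1/web2 come back up, kill the sessions still stuck on the backup
    server web1 10.0.0.1:80 check on-marked-up shutdown-backup-sessions
    server web2 10.0.0.2:80 check on-marked-up shutdown-backup-sessions
    server sorry 10.0.0.9:80 check backup
```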


Dennis Jacobfeuerborn wrote on 29-9-2014 4:23:

Hi,
I just noticed something unexpected in a setup. After introducing a backup
server it seems that my session sticks to the backup server even when
the live servers are back online.
Is there something special that needs to be done to tell haproxy that as
soon as a live server is back online it should send everyone to those
server(s) again instead of keeping them on the backup servers (which in
my case just serve a static maintenance page)?

Regards,
   Dennis






[PATCH] DOC: httplog does not support 'no'

2014-12-11 Thread PiBa-NL

[PATCH] DOC: httplog does not support 'no'

Modified: doc/configuration.txt

doc/configuration.txt | 6 ++
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index aa6baab..5dc3afa 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -4475,10 +4475,8 @@ option httplog [ clf ]

   This option may be set either in the frontend or the backend.

-  If this option has been enabled in a defaults section, it can be disabled
-  in a specific instance by prepending the "no" keyword before it. Specifying
-  only "option httplog" will automatically clear the 'clf' mode if it was set
-  by default.
+  Specifying only "option httplog" will automatically clear the 'clf' mode
+  if it was set by default.

   See also :  section 8 about logging.





Re: tcp-check for IMAP SSL ?

2015-01-01 Thread PiBa-NL

Yosef Amir wrote on 1-1-2015 at 13:57:

Hi ,
I have servers that listen for plain IMAP on port 143 and servers that 
listen for IMAP over SSL on port 443.
I have successfully tested HAProxy with tcp-check proxying to IMAP 
servers listening on port 143.
I don't know how to configure 'option tcp-check' on HAProxy when proxying 
to IMAP servers working over SSL only.

Any idea ?
listen IMAP_PLAIN
mode tcp
   bind :143 name VVM_PLAIN
balance roundrobin
tcp-check connect port 143
option tcp-check
tcp-check expect string  *\ OK\ IMAP4\ server\ ready\ (Multi\ 
Media\ IP\ Store)

   server MIPS1 1.1.1.1 check
   server MIPS2 2.2.2.2 check
listen IMAP_SSL
mode tcp
bind :443 name VVM_SSL
balance roundrobin
tcp-check connect port 443

Maybe try the 'ssl' keyword as below (I have not tested it at all):
tcp-check connect port 443 ssl

option tcp-check
tcp-check expect string  ?
server MIPS3 3.3.3.3 check
server MIPS4 4.4.4.4 check
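Assembled, the suggested check for the SSL listener might look like the untested sketch below. The expected banner string is a placeholder (it depends on what the IMAP server actually sends), and 'tcp-check connect ... ssl' needs an SSL-enabled haproxy build:

```haproxy
listen IMAP_SSL
    mode tcp
    bind :443 name VVM_SSL
    balance roundrobin
    option tcp-check
    # open an SSL connection to the server, then check its greeting
    tcp-check connect port 443 ssl
    tcp-check expect string OK\ IMAP4       # placeholder banner
    server MIPS3 3.3.3.3 check
    server MIPS4 4.4.4.4 check
```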
Thanks
Amir Yosef
_ _
“This e-mail message may contain confidential, commercial or 
privileged information that constitutes proprietary information of 
Comverse Inc. or its subsidiaries. If you are not the intended 
recipient of this message, you are hereby notified that any review, 
use or distribution of this information is absolutely prohibited and 
we request that you delete all copies and contact us by e-mailing to: 
secur...@comverse.com. Thank You.”




Re: haproxy and multiple ports

2015-02-06 Thread PiBa-NL

Nick Couchman wrote on 6-2-2015 at 23:52:

It's hard to figure out exactly how to phrase what I'm trying to do, but I essentially 
need a configuration for HAProxy where I can pin the load-balancing of one 
front-end port to another one, so that both go to the same back-end port.  Here's what 
I'm trying to do...I'm using HAProxy to load-balance RDP connections.  I also have a 
piece of software that goes between the RDP client and the RDP server that provides USB 
pass-through.  So, the initial connection happens on port 3389, but then I need the 
client to also open a connection on another port - let's say 4000 - to the exact same 
back-end host.  Is this possible in HAProxy?

Thanks!
-Nick


This looks like a similar problem; perhaps the solution will work for you too:
http://blog.haproxy.com/2011/07/14/send-users-to-the-same-server-for-imap-and-smtp/
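Applied to the RDP case, the technique from that article would look roughly like the untested sketch below (names and addresses are placeholders): the port-4000 backend sticks on the table that the port-3389 backend fills, so a client reaches the same host on both ports.

```haproxy
backend rdp_3389
    mode tcp
    balance roundrobin
    stick-table type ip size 10k expire 8h
    stick on src
    server term1 10.0.0.1:3389 check
    server term2 10.0.0.2:3389 check

backend rdp_usb_4000
    mode tcp
    balance roundrobin
    # look up (and store) the client in the table owned by rdp_3389
    stick on src table rdp_3389
    server term1 10.0.0.1:4000 check
    server term2 10.0.0.2:4000 check
```

Note that the server lists in both backends must match, since the stick-table entry points at a server of the table's owner backend.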



Re: Delaying requests with Lua

2015-06-18 Thread PiBa-NL

Thing to check: what happens with concurrent connection requests?
My guess is that with 10 concurrent requests it might take up to 20 
seconds (worst case for 10 connections) for some requests, instead of the 
expected maximum of 2 seconds.


Thierry FOURNIER wrote on 18-6-2015 at 19:35:

Hi,

You can do this with Lua. It's very easy.

First, you create a Lua file containing the following code. The name of
this Lua file is file.lua.

function delay_request(txn)
   core.msleep(1000 + txn.f.rand(1000))
end

Second, you configure haproxy to load this file. In the global
section:

lua-load file.lua

In your frontend (or backend)

http-request lua delay_request if { ... your condition ... }

Note that I didn't test this configuration, I'm just giving the main
lines. Please share your results, it's maybe interesting for everyone.

Thierry



On Thu, 18 Jun 2015 17:55:31 +0200
bjun...@gmail.com bjun...@gmail.com wrote:


Hi,

i want to delay specific requests and i want to have a random delay
for every request (for example in a range from 1000ms - 2000ms)


As an ugly hack, you can use the following (with a static value):


  tcp-request inspect-delay 2000ms
  tcp-request content accept if WAIT_END


I think i can achieve random delays with Lua. Does anyone have a
example how this can be realized with Lua ?



Thanks in advance !



---
Bjoern






Re: Delaying requests with Lua

2015-06-18 Thread PiBa-NL
Ok, I didn't realize the msleep was coming from haproxy itself; the 
'core.' prefix should have made me think twice before sending that mail.
Thanks for clearing it up; of course actual results from Bjoern 
will still be interesting to hear as well :).


Thierry wrote on 18-6-2015 at 21:12:

On Thu, 18 Jun 2015 20:27:07 +0200
PiBa-NL piba.nl@gmail.com wrote:


Thing to check, what happens to concurrent connection requests?
My guess is with 10 concurrent requests it might take up to 20
seconds(worst case for 10 connections) for some requests instead of the
expected max 2..

Note that we don't use the sleep from the standard Lua API, but the
sleep from the HAProxy Lua API.

The Lua sleep is not a real sleep. It has the behavior of a sleep only
in the Lua code. Its real behavior is to block the request, set a task
timeout and hand control back to the HAProxy scheduler.

So, during the sleep, HAProxy is not blocked and continues to process
other connections. Same behavior for the TCP access: it seems to be
blocked in the Lua code, but HAProxy is not blocked.


Thierry FOURNIER wrote on 18-6-2015 at 19:35:

Hi,

You can do this with Lua. Its very easy.

First, you create a lua file containing the following code. The name of
this Lua file is file.lua.

 function delay_request(txn)
core.msleep(1000 + txn.f.rand(1000))
 end

Second, you configura haproxy for loading ths file. In the global
section:

 lua-load file.lua

In your frontend (or backend)

 http-request lua delay_request if { ... your condition ... }

Note that I didn't test this configuration, I'm just giving the main
lines. Please share your results, it's maybe interesting for everyone.

Thierry



On Thu, 18 Jun 2015 17:55:31 +0200
bjun...@gmail.com bjun...@gmail.com wrote:


Hi,

i want to delay specific requests and i want to have a random delay
for every request (for example in a range from 1000ms - 2000ms)


As an ugly hack, you can use the following (with a static value):


   tcp-request inspect-delay 2000ms
   tcp-request content accept if WAIT_END


I think i can achieve random delays with Lua. Does anyone have a
example how this can be realized with Lua ?



Thanks in advance !



---
Bjoern








Re: Lua testcase.. some 'random' data returned when loading a image.. 1.6dev2

2015-06-19 Thread PiBa-NL
Ok, I changed the Lua implementation a little; I'm still seeing 'different' 
images/colors appear when requesting the page repeatedly with a single 
browser though. Assuming the 'penguinsimage' content doesn't change or 
move in memory, I suspect that txn.res:send is somehow subject to random 
buffer content overwriting.

function myinit(txn)
    local f = io.open("/var/etc/haproxy/Penguins.jpg", "rb")
    penguinsimage = f:read("*all")
    f:close()
end
core.register_init(myinit)

function hello_world(txn)
    txn.res:send(penguinsimage)
    txn:close()
end



Thierry FOURNIER wrote on 19-6-2015 at 14:22:

On Fri, 19 Jun 2015 02:05:50 +0200
PiBa-NL piba.nl@gmail.com wrote:


Hi guys,

I'm sure i am abusing lua for completely wrong thing here.
But i do not understand why the result isn't at least consistent..

Ive got a Pinguïns.jpg of 759kB (Default Windows 7 example image)..
And have the configuration listed below.
When requesting the image from a browser the top of the image looks
normal, but further down it starts morphing, or becomes complete garbage..
Increasing bufsize to 30 makes the image show normal when reading
and writing the whole image in 1 variable.. Though the buffer is still
less than half of the image size.?.

What im wondering though is how can it be that the results with smaller
bufsizes vary..?? Is it overflowing some memory? With 1.6dev1 it would
crash after a few requests, dev2 seems to have fixed that.. Though the
random behavior is still strange.. I would expect every time the same
image is send it to be cut short at buffsize, or perhaps just work if
lua might use its own memory buffers not limited to haproxy's buffsize?

Is it a bug? Or just a wrong lua script?
Should files / sockets be closed by the script? I get an error if i
do..(attempt to use a closed file)


Hi, thank you for the report. I'm currently trying to reproduce it;
I'm not finished.

I see two errors in your code:

First, io.* are real blocking functions. HAProxy is really
blocked waiting for the data access. It is very bad to use them.
However, the file access as you wrote it should work.

Second, your variable f is not declared as local. This variable is
kept between two requests. Maybe that is the reason for your problem.

To serve a static file with Lua, the best practice is to load the full
file during initialisation and store it in a global variable.
At runtime, you then use only the variable and not the file.

Thierry



global
#tune.bufsize 30
  tune.lua.session-timeout  10
  tune.lua.task-timeout 10
  lua-load /var/etc/haproxy/hello_world.lua
listen proxy
  bind :10001
  tcp-request content lua hello_image

function hello_image(txn)
    txn.res:send("HTTP/1.1 200 OK\n\n")
    f = io.open("/var/etc/haproxy/Penguins.jpg", "rb")
    local block = 5000
    while true do
        local bytes = f:read(block)
        if not bytes then break end
        txn.res:send(bytes)
        --core.msleep(25)  -- changes behavior to be somewhat more successful
    end
    txn:close()
    --f:close()  -- [ALERT] 168/232309 (74397) : Lua function
    -- 'hello_image': runtime error: /var/etc/haproxy/hello_world.lua:8:
    -- attempt to use a closed file.
end

root@OPNsense:/usr/ports/net/haproxy-devel # haproxy -vv
HA-Proxy version 1.6-dev2-ad90f0d 2015/06/17
Copyright 2000-2015 Willy Tarreau wi...@haproxy.org

Build options :
TARGET  = freebsd
CPU = generic
CC  = cc
CFLAGS  = -O2 -pipe -fstack-protector -fno-strict-aliasing
-DFREEBSD_PORTS
OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1
USE_STATIC_PCRE=1 USE_PCRE_JIT=1

Default settings :
maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity(identity),
deflate(deflate), raw-deflate(deflate), gzip(gzip)
Built with OpenSSL version : OpenSSL 1.0.2b 11 Jun 2015
Running on OpenSSL version : OpenSSL 1.0.2b 11 Jun 2015
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.35 2014-04-04
PCRE library supports JIT : yes
Built with Lua version : Lua 5.3.0
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY

Available polling systems :
   kqueue : pref=300,  test result OK
 poll : pref=200,  test result OK
   select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.










Lua testcase.. some 'random' data returned when loading a image.. 1.6dev2

2015-06-18 Thread PiBa-NL

Hi guys,

I'm sure I am abusing Lua for a completely wrong thing here,
but I do not understand why the result isn't at least consistent.

I've got a Pinguïns.jpg of 759kB (the default Windows 7 example image) 
and have the configuration listed below.
When requesting the image from a browser, the top of the image looks 
normal, but further down it starts morphing, or becomes complete garbage.
Increasing bufsize to 30 makes the image show normally when reading 
and writing the whole image in one variable, even though the buffer is still 
less than half of the image size?

What I'm wondering, though, is how the results can vary with smaller 
bufsizes. Is it overflowing some memory? With 1.6dev1 it would 
crash after a few requests; dev2 seems to have fixed that, though the 
random behavior is still strange. I would expect the image to be cut 
short at bufsize the same way every time it is sent, or perhaps to just 
work, if Lua uses its own memory buffers not limited to haproxy's bufsize.

Is it a bug? Or just a wrong Lua script?
Should files / sockets be closed by the script? I get an error 
("attempt to use a closed file") if I do.


global
#tune.bufsize 30
tune.lua.session-timeout  10
tune.lua.task-timeout 10
lua-load /var/etc/haproxy/hello_world.lua
listen proxy
bind :10001
tcp-request content lua hello_image

function hello_image(txn)
    txn.res:send("HTTP/1.1 200 OK\n\n")
    f = io.open("/var/etc/haproxy/Penguins.jpg", "rb")
    local block = 5000
    while true do
        local bytes = f:read(block)
        if not bytes then break end
        txn.res:send(bytes)
        --core.msleep(25)  -- changes behavior to be somewhat more successful
    end
    txn:close()
    --f:close()  -- [ALERT] 168/232309 (74397) : Lua function
    -- 'hello_image': runtime error: /var/etc/haproxy/hello_world.lua:8:
    -- attempt to use a closed file.
end

root@OPNsense:/usr/ports/net/haproxy-devel # haproxy -vv
HA-Proxy version 1.6-dev2-ad90f0d 2015/06/17
Copyright 2000-2015 Willy Tarreau wi...@haproxy.org

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -pipe -fstack-protector -fno-strict-aliasing 
-DFREEBSD_PORTS
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 
USE_STATIC_PCRE=1 USE_PCRE_JIT=1


Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity(identity), 
deflate(deflate), raw-deflate(deflate), gzip(gzip)

Built with OpenSSL version : OpenSSL 1.0.2b 11 Jun 2015
Running on OpenSSL version : OpenSSL 1.0.2b 11 Jun 2015
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.35 2014-04-04
PCRE library supports JIT : yes
Built with Lua version : Lua 5.3.0
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY

Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.





Re: Receiving HTTP responses to TCP pool

2015-06-16 Thread PiBa-NL

It's a feature: a frontend running in tcp mode can use a backend running 
in http mode.
If you want both to use tcp, either put that in the defaults, or specify 
it for both frontend and backend.


It seems to me your assumption that the backend automatically takes over 
the mode from the frontend is wrong. Perhaps a documentation change 
could prevent this configuration mistake from being made by others, by 
clarifying that setting the mode in the frontend does not mean it doesn't 
need to also be set in the backend.
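In other words, a minimal corrected sketch of the config from this thread (only the relevant lines):

```haproxy
defaults
    mode http              # stays the default for the http frontends

frontend mainfrontend
    bind *:25
    mode tcp               # overrides the default for this frontend only
    default_backend mxpool

backend mxpool
    mode tcp               # must be set here too; the backend inherits its
                           # mode from defaults, not from the frontend
    balance roundrobin
    server mailparser-xxx 172.0.0.51:25 check port 25
```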


CJ Ess wrote on 16-6-2015 at 23:33:
I think that nails the problem. So if it's not just me, then the 
question is whether this is intended behavior or a bug. If it's 
intended then I don't think it's entirely clear from the documentation 
that 'mode tcp' only works under certain circumstances. If we confirm 
that it's a bug then I'd be willing to see if I can track it down and 
fix it.



On Tue, Jun 16, 2015 at 4:39 PM, PiBa-NL piba.nl@gmail.com wrote:


Which does not prevent the backend from using mode http as the
defaults section sets.

CJ Ess wrote on 16-6-2015 at 22:36:

mode tcp is already present in mainfrontend definition below
the bind statement


On Mon, Jun 15, 2015 at 3:05 PM, PiBa-NL piba.nl@gmail.com wrote:

CJ Ess wrote on 15-6-2015 at 20:52:

This one has me stumped - I'm trying to proxy SMTP
connections however I'm getting an HTTP response when I try
to connect to port 25 (even though I've done mode tcp).

This is the smallest subset that reproduced the problem - I
can make this work by doing mode tcp in the default
section and then doing mode http in all of the http
frontends (not shown). But doing 'mode http' as default and
then 'mode tcp' in the smtp frontend definition seems to not
work and I'm not certain why.

global
  daemon
  maxconn 10240
  log 127.0.0.1 local0
  log 127.0.0.1 local1 notice
  stats socket /var/run/haproxy.sock user root group root
mode 600 level admin
  stats timeout 2m

defaults
  log global
  modehttp
  timeout client 30s
  timeout server 30s
  timeout connect 4s
  option  socket-stats

frontend mainfrontend
  bind *:25
  mode tcp
  maxconn 10240
  option smtpchk EHLO example.com
  default_backend mxpool

backend mxpool

add:
mode tcp

  balance roundrobin
  server mailparser-xxx 172.0.0.51:25 check port 25 weight 20 maxconn 10240
  server mailparser-yyy 172.0.0.67:25 check port 25 weight 20 maxconn 10240











Re: Receiving HTTP responses to TCP pool

2015-06-16 Thread PiBa-NL
Which does not prevent the backend from using mode http as the defaults 
section sets.


CJ Ess wrote on 16-6-2015 at 22:36:
mode tcp is already present in mainfrontend definition below the 
bind statement



On Mon, Jun 15, 2015 at 3:05 PM, PiBa-NL piba.nl@gmail.com wrote:


CJ Ess wrote on 15-6-2015 at 20:52:

This one has me stumped - I'm trying to proxy SMTP connections
however I'm getting an HTTP response when I try to connect to
port 25 (even though I've done mode tcp).

This is the smallest subset that reproduced the problem - I can
make this work by doing mode tcp in the default section and
then doing mode http in all of the http frontends (not shown).
But doing 'mode http' as default and then 'mode tcp' in the smtp
frontend definition seems to not work and I'm not certain why.

global
  daemon
  maxconn 10240
  log 127.0.0.1 local0
  log 127.0.0.1 local1 notice
  stats socket /var/run/haproxy.sock user root group root mode
600 level admin
  stats timeout 2m

defaults
  log global
  mode http
  timeout client 30s
  timeout server 30s
  timeout connect 4s
  option  socket-stats

frontend mainfrontend
  bind *:25
  mode tcp
  maxconn 10240
  option smtpchk EHLO example.com
  default_backend mxpool

backend mxpool

add:
mode tcp

  balance roundrobin
  server mailparser-xxx 172.0.0.51:25 check port 25 weight 20 maxconn 10240
  server mailparser-yyy 172.0.0.67:25 check port 25 weight 20 maxconn 10240








Re: HAProxy Stats and SSL Problems

2015-06-15 Thread PiBa-NL

Matthew Cox wrote on 15-6-2015 at 20:05:

Hello,

I've been trying to diagnose an odd issue with HAProxy (1.5.x) 
statistics and SSL. I'm seeing clients having problems with the SSL 
negotiation. When digging with openssl, there seems to be a clear text 
http 1.x response which causes the negotiation to fail:


$ openssl s_client -debug -connect lb.com:44300
CONNECTED(00000003)
write to 0x7f96a3504c70 [0x7f96a3804200] (130 bytes => 130 (0x82))
0000 - 80 80 01 03 01 00 57 00-00 00 20 00 00 39 00 00   ..W... ..9..
0010 - 38 00 00 35 00 00 16 00-00 13 00 00 0a 07 00 c0   8..5
0020 - 00 00 33 00 00 32 00 00-2f 00 00 9a 00 00 99 00   ..3..2../...
0030 - 00 96 03 00 80 00 00 05-00 00 04 01 00 80 00 00   
0040 - 15 00 00 12 00 00 09 06-00 40 00 00 14 00 00 11   .@..
0050 - 00 00 08 00 00 06 04 00-80 00 00 03 02 00 80 00   
0060 - 00 ff 79 2a 0a d7 d8 37-c8 50 b6 f7 c3 8e ce 96   ..y*...7.P..
0070 - cf 2b d9 b8 92 c5 6f 1f-74 7f c0 d1 22 46 71 7a   .+o.t...Fqz
0080 - e2 b4 ..
read from 0x7f96a3504c70 [0x7f96a3809800] (7 bytes => 7 (0x7))
0000 - 48 54 54 50 2f 31 2e  HTTP/1.
1371:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown 
protocol:/SourceCache/OpenSSL098/OpenSSL098-52.20.2/src/ssl/s23_clnt.c:618:


$ telnet lb.com 44300
Trying X.X.X.X...
Connected to X.X.X.X.
Escape character is '^]'.
GET /
HTTP/1.0 403 Forbidden
Cache-Control: no-cache
Connection: close
Content-Type: text/html

<html><body><h1>403 Forbidden</h1>
Request forbidden by administrative rules.
</body></html>


The proxy log doesn't have anything that helps me understand what's 
going on:



Jun 15 16:47:44 lb.com haproxy[430]: X.X.X.X:55877 
[15/Jun/2015:16:47:44.967] stats stats/NOSRV -1/-1/-1/-1/0 400 187 - 
- PR-- 0/0/0/0/3 0/0 "<BADREQ>"



The pertinent configuration sections are:


global
log 127.0.0.1 local1 info
maxconn 10240
chroot /usr/share/haproxy
user haproxy
group haproxy
daemon

# local stats sockets for read access - change operator to 
admin for r/w

stats socket /var/run/haproxy/haproxy.sock mode 0600 level operator

# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private

# Default ciphers to use on SSL-enabled listening sockets.
ssl-default-bind-ciphers 
ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS

# Set global SSL bind options
ssl-default-bind-options no-sslv3 no-tls-tickets

tune.ssl.default-dh-param 2048

ssl-server-verify none

defaults
log   global
mode  http
optionhttplog
optiondontlognull
retries   3
optionredispatch
maxconn   10240

# Mime types from here:
# 
http://blogs.alfresco.com/wp/developer/2013/11/13/haproxy-for-alfresco/

# and here
# http://serverfault.com/questions/575744/nginx-mime-types-and-gzip
compression algo gzip
compression type text/plain text/html text/html;charset=utf-8 
text/css text/javascript application/json


listen stats :44300

Remove the port like:
listen stats

bind *:44300 ssl crt /etc/ssl/private/the.pem.withkey.pem
mode http
http-request deny if !{ ssl_fc }
stats enable
stats refresh 5s
stats uri /stats
stats realm proxies
stats show-node
stats show-legends
option httplog
option contstats
acl auth_ok_stats http_auth(users_stats)
http-request auth if !auth_ok_stats
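What the suggestion above points at: 'listen stats :44300' declares an implicit
plain-text bind on :44300 in addition to the explicit 'bind *:44300 ssl ...'
line, so some connections land on the socket that does not speak TLS, which
matches the clear-text 'HTTP/1.' bytes openssl received. A sketch of the
corrected section (certificate path as in the original):

```
listen stats
    # a single bind line, so every connection is handled by the TLS socket
    bind *:44300 ssl crt /etc/ssl/private/the.pem.withkey.pem
    mode http
    stats enable
    stats uri /stats
```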


Does anyone have any insight?

Thank you in advance,
Matt





Re: Receiving HTTP responses to TCP pool

2015-06-15 Thread PiBa-NL

CJ Ess wrote on 15-6-2015 at 20:52:
This one has me stumped - I'm trying to proxy SMTP connections however 
I'm getting an HTTP response when I try to connect to port 25 (even 
though I've done mode tcp).


This is the smallest subset that reproduced the problem - I can make 
this work by doing mode tcp in the default section and then doing 
mode http in all of the http frontends (not shown). But doing 'mode 
http' as default and then 'mode tcp' in the smtp frontend definition 
seems to not work and I'm not certain why.


global
  daemon
  maxconn 10240
  log 127.0.0.1 local0
  log 127.0.0.1 local1 notice
  stats socket /var/run/haproxy.sock user root group root mode 600 
level admin

  stats timeout 2m

defaults
  log global
  mode http
  timeout client 30s
  timeout server 30s
  timeout connect 4s
  option  socket-stats

frontend mainfrontend
  bind *:25
  mode tcp
  maxconn 10240
  option smtpchk EHLO example.com
  default_backend mxpool

backend mxpool

add:
mode tcp

  balance roundrobin
  server mailparser-xxx 172.0.0.51:25 check port 25 weight 20 maxconn 10240
  server mailparser-yyy 172.0.0.67:25 check port 25 weight 20 maxconn 10240






[PATCH] DOC: match several lua configuration option names to those implemented in code

2015-08-16 Thread PiBa-NL

Hi,
I've found some inconsistencies in the documentation; patch attached.
Could you take a look and merge it? Thanks.
Regards,
PiBa-NL
From 007f377f637dbafc47cb77f6650e4df55e08b608 Mon Sep 17 00:00:00 2001
From: Pieter Baauw <piba.nl@gmail.com>
Date: Sun, 16 Aug 2015 15:26:24 +0200
Subject: [PATCH] DOC: match several lua configuration option names to those
 implemented in code

---
 doc/configuration.txt |  2 +-
 doc/lua-api/index.rst | 18 +++++++++---------
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 424b31d..83f337d 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -1456,7 +1456,7 @@ It is possible to send email alerts when the state of 
servers changes.
 If configured email alerts are sent to each mailer that is configured
 in a mailers section. Email is sent to mailers using SMTP.
 
-mailer <mailersect>
+mailers <mailersect>
   Creates a new mailer list with the name <mailersect>. It is an
   independent section which is referenced by one or more proxies.
 
diff --git a/doc/lua-api/index.rst b/doc/lua-api/index.rst
index 26641ba..8671f3e 100644
--- a/doc/lua-api/index.rst
+++ b/doc/lua-api/index.rst
@@ -514,7 +514,7 @@ Channel class
   If the buffer cant receive more data, a 'nil' value is returned.
 
   :param class_channel channel: The manipulated Channel.
-  :returns: a string containig the avalaiable line or nil.
+  :returns: a string containing the available line or nil.
 
 .. js:function:: Channel.set(channel, string)
 
@@ -579,21 +579,21 @@ HTTP class
 
This class contain all the HTTP manipulation functions.
 
-.. js:function:: HTTP.req_get_header(http)
+.. js:function:: HTTP.req_get_headers(http)
 
   Returns an array containing all the request headers.
 
   :param class_http http: The related http object.
   :returns: array of headers.
-  :see: HTTP.res_get_header()
+  :see: HTTP.res_get_headers()
 
-.. js:function:: HTTP.res_get_header(http)
+.. js:function:: HTTP.res_get_headers(http)
 
   Returns an array containing all the response headers.
 
   :param class_http http: The related http object.
   :returns: array of headers.
-  :see: HTTP.req_get_header()
+  :see: HTTP.req_get_headers()
 
 .. js:function:: HTTP.req_add_header(http, name, value)
 
@@ -661,9 +661,9 @@ HTTP class
   :param class_http http: The related http object.
   :param string name: The header name.
   :param string value: The header value.
-  :see: HTTP.req_set_header()
+  :see: HTTP.req_rep_header()
 
-.. js:function:: HTTP.req_replace_header(http, name, regex, replace)
+.. js:function:: HTTP.req_rep_header(http, name, regex, replace)
 
   Matches the regular expression in all occurrences of header field name
   according to regex, and replaces them with the replace argument. The
@@ -674,9 +674,9 @@ HTTP class
   :param string name: The header name.
   :param string regex: The match regular expression.
   :param string replace: The replacement value.
-  :see: HTTP.res_replace_header()
+  :see: HTTP.res_rep_header()
 
-.. js:function:: HTTP.res_replace_header(http, name, regex, string)
+.. js:function:: HTTP.res_rep_header(http, name, regex, string)
 
   Matches the regular expression in all occurrences of header field name
   according to regex, and replaces them with the replace argument. The
-- 
1.9.5.msysgit.1



[PATCH] MINOR cfgparse: Correct the mailer warning text to show the right names to the user

2015-08-16 Thread PiBa-NL

Hi Guys,

Patch attached to correct the mailer warning text to show the right 
names to the user.


Regards,
PiBa-NL
From aa2cccdf5e95d2850692ec8189fc9ed20a586575 Mon Sep 17 00:00:00 2001
From: Pieter Baauw <piba.nl@gmail.com>
Date: Mon, 17 Aug 2015 00:45:05 +0200
Subject: [PATCH] MINOR cfgparse: Correct the mailer warning text to show the
 right names to the user

---
 src/cfgparse.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index 98ccd5d..34d029b 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -7319,9 +7319,9 @@ int check_config_validity()
 		if (curproxy->email_alert.set) {
 			if (!(curproxy->email_alert.mailers.name && curproxy->email_alert.from && curproxy->email_alert.to)) {
 				Warning("config : 'email-alert' will be ignored for %s '%s' (the presence any of "
-					"'email-alert from', 'email-alert level' 'email-alert mailer', "
-					"'email-alert hostname', or 'email-alert to' "
-					"requrires each of 'email-alert from', 'email-alert mailer' and 'email-alert' "
+					"'email-alert from', 'email-alert level' 'email-alert mailers', "
+					"'email-alert myhostname', or 'email-alert to' "
+					"requires each of 'email-alert from', 'email-alert mailers' and 'email-alert to' "
 					"to be present).\n",
 					proxy_type_str(curproxy), curproxy->id);
 				err_code |= ERR_WARN;
-- 
1.9.5.msysgit.1



Fwd: request for comment - [PATCH] MEDIUM: mailer: retry sending a mail up to 3 times

2015-08-04 Thread PiBa-NL

bump?
 Doorgestuurd bericht 
Onderwerp: 	request for comment - [PATCH] MEDIUM: mailer: retry sending 
a mail up to 3 times

Datum:  Sun, 26 Jul 2015 21:08:41 +0200
Van:PiBa-NL piba.nl@gmail.com
Aan:HAproxy Mailing Lists haproxy@formilux.org


Hi guys,

I've created a small patch that will retry sending a mail 3 times if it
fails the first time.
It seems to work in my limited testing.

HOWEVER:
- I have not checked for memory leaks or sockets not being closed properly
(I don't know how to..)
- Is setting the current and last steps to NULL the proper way to reset the
step of rule evaluation?
- CO_FL_ERROR is set when there is a connection error.. this seems to be
the proper check.
- But "check->conn->flags & 0xFF" is a bit of a guess from observing the
flags when it could connect but the server did not respond properly.. is
there a better way?
- I used the 'fall' variable to track the number of retries.. should I
have created a separate 'retries' variable?

Thanks for any feedback you can give me.

Best regards,
PiBa-NL



From c5110d981cf0d2c070e88331eede15b0b16e80df Mon Sep 17 00:00:00 2001
From: Pieter Baauw <piba.nl@gmail.com>
Date: Sun, 26 Jul 2015 20:47:27 +0200
Subject: [PATCH] MEDIUM: mailer: retry sending a mail up to 3 times

Currently only 1 connection attempt (syn packet) was sent; this patch increases 
that to 3 attempts. This makes it less likely the mail is lost due to a 
single lost packet.
---
 src/checks.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/src/checks.c b/src/checks.c
index e386bee..cfcb1ee 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -1408,7 +1408,7 @@ static struct task *server_warmup(struct task *t)
  *
  * It can return one of :
  *  - SF_ERR_NONE if everything's OK and tcpcheck_main() was not called
- *  - SF_ERR_UP if if everything's OK and tcpcheck_main() was called
+ *  - SF_ERR_UP if everything's OK and tcpcheck_main() was called
  *  - SF_ERR_SRVTO if there are no more servers
  *  - SF_ERR_SRVCL if the connection was refused by the server
  *  - SF_ERR_PRXCOND if the connection has been limited by the proxy (maxconn)
@@ -3053,6 +3053,7 @@ static struct task *process_email_alert(struct task *t)
 			LIST_DEL(&alert->list);
 
 			check->state |= CHK_ST_ENABLED;
+			check->fall = 0;
 		}
 
 	}
@@ -3060,6 +3061,16 @@ static struct task *process_email_alert(struct task *t)
 	process_chk(t);
 
 	if (!(check->state & CHK_ST_INPROGRESS) && check->tcpcheck_rules) {
+		if ((check->conn->flags & CO_FL_ERROR) || // connection failed, try again
+		    (check->conn->flags & 0xFF) // did not reach the 'normal end', try again
+		   ) {
+			if (check->fall < 3) {
+				check->current_step = NULL;
+				check->last_started_step = NULL;
+				check->fall++;
+				return t;
+			}
+		}
 		struct email_alert *alert;
 
 		alert = container_of(check->tcpcheck_rules, typeof(*alert), tcpcheck_rules);
-- 
1.9.5.msysgit.1




Re: health checks with SNI/virtual hosts

2015-07-23 Thread PiBa-NL
I believe you need 1.6-dev3 for that: 
http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#5.2-sni
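For reference, 'option httpchk' takes a method, URI and HTTP version rather
than a full URL, so the hostname has to travel in the check request itself
(and, for the TLS layer, via the server-level 'sni' parameter from the link
above; HAProxy 1.8 later added 'check-sni' specifically for health checks). A
sketch using the hostname and addresses from the question:

```
backend foo
    mode http
    # send the vhost name inside the health-check request
    option httpchk GET / HTTP/1.1\r\nHost:\ my.URL.here
    server foo1 192.168.123.123:443 ssl verify none check
    server foo2 192.168.123.124:443 ssl verify none check
```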


Jim Gronowski wrote on 23-7-2015 at 23:20:


I’m trying to do health checks on a site that is served with SNI – so 
going directly to the IP generates a 404 – the backend server is 
looking for the hostname to determine which site to send it to.


Is it correct to put the full URL in the httpchk section, like so?

___

backend foo

mode http

option forwardfor

balance roundrobin

option httplog

option httpchk https://my.URL.here/

cookie BEserver insert

server foo1 192.168.123.123:443 ssl cookie foo1 maxconn 5000 
check


server foo1 192.168.123.124:443 ssl cookie foo2 maxconn 5000 
check


___

It’s not clear to me if that will actually check each backend server, 
or if it will only try to fetch that URL, which may not be the correct 
backend.


Please let me know if I can provide any additional information.  Thank 
you!


Jim



Ditronics, LLC email disclaimer:
This communication, including attachments, is intended only for the 
exclusive use of addressee and may contain proprietary, confidential, 
or privileged information. Any use, review, duplication, disclosure, 
dissemination, or distribution is strictly prohibited. If you were not 
the intended recipient, you have received this communication in error. 
Please notify sender immediately by return e-mail, delete this 
communication, and destroy any copies.






[PATCH] BUG/MINOR: mailer: DATA part must be terminated with CRLF.CRLF

2015-07-22 Thread PiBa-NL

Hi Willy,

Please check the attached patch, which solves not being able to send a mail to 
an Exchange server, as discussed in the previous mail thread.

http://marc.info/?l=haproxym=143708032708431w=2

Is it correct like this?

Thanks for the great software :).

Regards,
Pieter
From 50b34a494a9cd40536454591234f46d8d5e1abfb Mon Sep 17 00:00:00 2001
From: Pieter Baauw <piba.nl@gmail.com>
Date: Wed, 22 Jul 2015 19:51:54 +0200
Subject: [PATCH] BUG/MEDIUM: mailer: DATA part must be terminated with 
CRLF.CRLF

The dot is sent in the wrong place.
As defined in https://www.ietf.org/rfc/rfc2821.txt 'the character sequence 
CRLF.CRLF ends the mail text'
---
 src/checks.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/checks.c b/src/checks.c
index 2179d4f..e386bee 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -3243,8 +3243,8 @@ static int enqueue_one_email_alert(struct email_alertq *q, const char *msg)
 		"Subject: [HAproxy Alert] ", msg, "\n",
 		"\n",
 		msg, "\n",
-		".\r\n",
 		"\r\n",
+		".\r\n",
 		NULL
 	};
 
-- 
1.9.5.msysgit.1
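For illustration, the on-wire shape of a correct DATA phase per the RFC cited
in the patch: the lone dot must be the very last line sent, after the final
CRLF of the body (hypothetical dialogue, C = client, S = server):

```
C: DATA
S: 354 Start mail input; end with <CRLF>.<CRLF>
C: Subject: [HAproxy Alert] ...
C:
C: ...message body...
C:
C: .          (terminator; before the patch the blank line followed the dot)
S: 250 OK
```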



Re: Mailer does not work

2015-07-16 Thread PiBa-NL

It looks to me as if the dot is sent in the wrong place.
The attached patch would fix that.

https://www.ietf.org/rfc/rfc2821.txt
"the character sequence CRLF.CRLF ends the mail text."

Could you guys take a look?

mlist wrote on 15-7-2015 at 14:23:

At the end of each SMTP session, we see a packet with Reset + Acknowledge bits 
set:

tcp.flags = RST + ACK

Roberto


-Original Message-
From: Baptiste [mailto:bed...@gmail.com]
Sent: Wednesday, 15 July 2015 12:01
To: mlist
Cc: haproxy@formilux.org
Subject: Re: Mailer does not work

On Wed, Jul 15, 2015 at 9:48 AM, mlist ml...@apsystems.it wrote:

We compiled from source haproxy-1.6-dev2.tar.gz. The new mailers mechanism
does not seem to work; we configured it as in the manual:



mailers apsmailer1

mailer smtp1 mailserver ip:10025



…

…



backend somebackend_https

mode http

balance roundrobin

…

email-alert mailers apsmailer1

email-alert from <from mail>

email-alert to <to mail>

   email-alert level info

…



We see in haproxy.log server status change:

Jul 15 09:42:00 localhost.localdomain haproxy[3342]: Server …/server1 is
UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup
servers online. 0 sessions requeued, 0 total in queue.

Jul 15 09:42:00 localhost.localdomain haproxy[3342]: Server …/server1 is
UP, reason: Layer6 check passed, check duration: 1ms. 2 active and 0 backup
servers online. 0 sessions requeued, 0 total in queue.



But no mail alerts are sent, no error or warning logged about sending mail.



haproxy -f /etc/haproxy/haproxy.cfg -c

does not return any error. All seems to be right, but mail alerts are not
sent.


Roberto


Hi Roberto,

Could you please take a tcpdump on port 10025 and confirm HAProxy
tries to get connected to the SMTP server?

Baptiste



From ba3e1593b313752e0a3ff54aae06b18ba17c1435 Mon Sep 17 00:00:00 2001
From: Pieter Baauw
Date: Thu, 16 Jul 2015 22:38:19 +0200
Subject: [PATCH] mailer, DATA part must be terminated with CRLF.CRLF

---
 src/checks.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/checks.c b/src/checks.c
index 2179d4f..e386bee 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -3243,8 +3243,8 @@ static int enqueue_one_email_alert(struct email_alertq *q, const char *msg)
 		"Subject: [HAproxy Alert] ", msg, "\n",
 		"\n",
 		msg, "\n",
-		".\r\n",
 		"\r\n",
+		".\r\n",
 		NULL
 	};
 
-- 
1.9.5.msysgit.1



Re: lua, changing response-body in http pages 'supported' ?

2015-10-24 Thread PiBa-NL

Hi Thierry, haproxy-list,

On 19-10-2015 at 11:24, thierry.fourn...@arpalert.org wrote:

On Mon, 19 Oct 2015 01:31:42 +0200
PiBa-NL <piba.nl@gmail.com> wrote:


Hi Thierry,

On 18-10-2015 at 21:37, thierry.fourn...@arpalert.org wrote:

On Sun, 18 Oct 2015 00:07:13 +0200
PiBa-NL <piba.nl@gmail.com> wrote:


Hi haproxy list,

For testing purposes I am trying to 'modify' a response of a webserver,
but am only having limited success. Is this supposed to work?
As a more useful goal than the current LAL to TST replacement, I imagine
rewriting absolute links on a webpage could be possible, which is
sometimes problematic with 'dumb' web applications..

Or is it outside of the current scope of implemented functionality? If
so, is it on the 'lua todo list'?

I tried for example a configuration like below. And get several
different results in the browser.
-Sometimes i get 4 times TSTA
-Sometimes i see after the 8th TSTA- Connection: keep-alive << this
happens most of the time..
-Sometimes i get 9 times TSTA + STOP << this would be the desired
outcome (only seen very few times..)

Probably due to the response-buffer being filled differently due to
'timing'..

The "connection: keep-alive" text is probably from the actual server
reply, which is 'appended' behind the response generated by my lua
script? However, shouldn't the .done() prevent that from being sent to
the client?

I've tried putting a loop into the lua script to call res:get() multiple
times, but that didn't seem to work..

Also, to properly modify a page I would need to know all changes before
sending the headers with changed content-length back to the client..

Can someone confirm this is or isn't (reliably) possible? Or how this
can be scripted in lua differently?

Hello,

Your script replace 3 bytes by 3 bytes, this must run with HTTP, but if
your replacement change the length of the response, you can have some
difficulties with clients, or with keepalive.

Yes, I started with replacing with the same number of bytes to avoid some
of the possible troubles caused by changing the length.. And as seen in
the haproxy.cfg, it is configured with 'mode http'.

res:get() returns the current content of the response buffer.
Maybe it does not contain the full response. You must execute a loop with
regular "core.yield()" to give the hand back to HAProxy and wait for new

Calling yield does allow to 'wait' for more data to come in.. No
guarantee that it only takes 1 yield for data to 'grow'..

[info] 278/055943 (77431) : luahttpresponse Content-Length XYZ: 14115
[info] 278/055943 (77431) : luahttpresponse SIZE: 2477
[info] 278/055943 (77431) : luahttpresponse LOOP
[info] 278/055943 (77431) : luahttpresponse SIZE: 6221
[info] 278/055943 (77431) : luahttpresponse LOOP
[info] 278/055943 (77431) : luahttpresponse SIZE: 7469
[info] 278/055943 (77431) : luahttpresponse LOOP
[info] 278/055943 (77431) : luahttpresponse SIZE: 7469
[info] 278/055943 (77431) : luahttpresponse LOOP
[info] 278/055943 (77431) : luahttpresponse SIZE: 7469
[info] 278/055943 (77431) : luahttpresponse LOOP
[info] 278/055943 (77431) : luahttpresponse SIZE: 7469
[info] 278/055943 (77431) : luahttpresponse LOOP
[info] 278/055943 (77431) : luahttpresponse SIZE: 7469
[info] 278/055943 (77431) : luahttpresponse LOOP
[info] 278/055943 (77431) : luahttpresponse SIZE: 7469
[info] 278/055943 (77431) : luahttpresponse LOOP
[info] 278/055943 (77431) : luahttpresponse SIZE: 7469
[info] 278/055943 (77431) : luahttpresponse LOOP
[info] 278/055943 (77431) : luahttpresponse SIZE: 8717
[info] 278/055943 (77431) : luahttpresponse LOOP
[info] 278/055943 (77431) : luahttpresponse SIZE: 14337
[info] 278/055943 (77431) : luahttpresponse DONE?: 14337


data. When all the data are read, res:get() returns an error.

Not sure when/how this error would happen? The result of res:get only
seems to get bigger while the webserver is sending the response..
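A polling loop of the kind that produces a log like the one above could be
sketched as follows (an assumption-laden sketch against the 1.6 Lua API, not
the exact script used here; the Content-Length comparison is approximate
because the channel data also includes the response headers):

```lua
core.register_action("rewrite-body", { "http-res" }, function(txn)
    -- announced body length; the final channel size is a bit larger
    -- because the headers are still in the buffer
    local clen = tonumber(txn.sf:res_fhdr("Content-Length")) or 0
    local body = txn.res:get()
    while body and string.len(body) < clen do
        core.yield()            -- hand control back to HAProxy, wait for data
        body = txn.res:get()
    end
    if body then
        -- same-length replacement, as in the original test
        local newbody = string.gsub(body, "LAL", "TST")
        txn.res:set(newbody)
    end
end)
```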

The res:send() is dangerous because it sends data directly to the client
without the rest of the haproxy analysis. Maybe it is the cause of your
problem.

Try to use res:set().

Ok tried that, new try with function below.

The difficulty is that another "res:get()" returns the same data as
those you put.

I don't know if you can modify an http response greater than one
buffer.

Would be nice if that was somehow possible. But my current lua script
cannot..

The function res:close() closes the connection even if HAProxy wants to
keep the connection alive. I suggest that you don't use this function.

It seems txn.res:close() does not exist? txn:done()

I reproduced the error message using curl. By default curl tries
to transfer data with keepalive, and it is not happy if all the
announced data are not transferred.

 < Connection: keep-alive
 curl: (18) transfer closed with outstanding read data remaining

It seems that I reproduced a bug. I'm looking into it.

Ok if you can create a patch, let me know. Happy to test if it solves
som

Re: [LUA] Lua advanced documentation

2015-10-28 Thread PiBa-NL

On 28-10-2015 at 9:28, Thierry FOURNIER wrote:

Hi List,

I wrote a Lua advanced documentation. This explain the Lua integration
in HAProxy, the reason of some choices. Some traps and Lua code with
advanced comments.

This doc is not terminated, but I want to release a first version. I
will fill the missing points later.

Unfortunately I have some difficulties writing in English; if anyone
wants to correct my doc, it will be welcome.

Thank you,
Thierry

Hi Thierry,

Thanks for the doc !
I've changed a few words here and there; updated doc attached.
Probably there is more to correct for the more native English 
speakers and writers.


I haven't tried to check it myself, but I didn't see in either of the 
documents how often a function from core.register_task is called. Or 
should it contain a loop + sleep? Perhaps a small example could be added?


Regards
PiBa-NL
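For what it's worth, a function registered with core.register_task appears to
run just once when HAProxy starts, so a periodic job has to loop and sleep by
itself. A hypothetical sketch:

```lua
core.register_task(function()
    -- the task body runs once, so loop forever and sleep between iterations
    while true do
        core.Debug("periodic task tick\n")
        core.msleep(5000)  -- non-blocking sleep, 5 seconds
    end
end)
```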

HAProxy is a powerful load balancer. It embeds many options and many
configuration styles in order to give a solution to many load balancing
problems. However, HAProxy is not universal and some special or specific
problems do not have a solution with the native software.

This text is not a full explanation of the Lua syntax.

This text is not a replacement of the HAProxy Lua API documentation. The API
documentation can be found at the project root, in the documentation directory. 
The goal of this text is to discover how Lua is implemented in HAProxy and using
it efficiently.

However, this can be read by Lua beginners. Some examples are detailed.

Why a scripting language in HAProxy
===================================

HAProxy 1.5 made it possible to do many things using samples, but some people
want more: combining the results of sample fetches, programming conditions and
loops, which is not possible. Sometimes people implement these functionalities
in patches which have no meaning outside their network. These people must
maintain these patches, or worse we must integrate them in the HAProxy
mainstream.

Their need is to have an embedded programming language in order to no longer
modify the HAProxy source code, but to write their own control code. Lua is
encountered very often in the software industry, and in some open source
projects. It is easy to understand, efficient, light without external
dependencies, and leaves the resource control to the implementation. Its design
is close to the HAProxy philosophy which uses components for what they do
perfectly.

The HAProxy control block allows one to take a decision based on the comparison
between samples and patterns. The samples are extracted using fetch functions
easily extensible, and are used by actions which are also extensible. It seems
natural to allow Lua to give samples, modify them, and to be an action target.
So, Lua uses the same entities as the configuration language. This is the most
natural and reliable way for the Lua integration. So, the Lua engine allows one
to add new sample fetch functions, new converter functions and new actions.
These new entities can access the existing sample fetches and converters,
allowing one to extend them without rewriting them.

The writing of the first Lua functions shows that implementing complex concepts
like protocol analysers is easy and can be extended to full services. It appears
that these services are not easy to implement with the HAProxy configuration
model, which is based on four steps: fetch, convert, compare and action. HAProxy
is extended with a notion of services which are a formalisation of the existing
services like stats, cli and peers. The service is an autonomous entity with a
behaviour pattern close to that of an external client or server. The Lua engine
inherits from this new service and offers new possibilities for writing
services.

This scripting language is useful for testing new features as proof of concept.
Later, if there is general interest, the proof of concept could be integrated
with C language in the HAProxy core.

The HAProxy Lua integration also provides a simple way of distributing Lua
packages. The final user needs only to install the Lua file, load it in HAProxy
and follow the attached documentation.

Design and technical things
===========================

Lua is integrated in the HAProxy event driven core. We want to preserve the
fast processing of HAProxy. To ensure this, we implement some technical concepts
between HAProxy and the Lua library.

The following paragraph also describes the interactions between Lua and HAProxy
from a technical point of view.

Prerequisite
---

Reading the following documentation links are required to understand the
current paragraph:

   HAProxy doc: http://cbonte.github.io/haproxy-dconv/configuration-1.6.html
   Lua API: http://www.lua.org/manual/5.3/
   HAProxy API: http://www.arpalert.org/src/haproxy-lua-api/1.6/index.html
   Lua guide:   http://www.lua.org/pil/

more about Lua choice
---------------------

Lua language is very easy to extend

Re: Echo server in Lua

2015-11-10 Thread PiBa-NL

By the way, if the sole purpose of the frontend is to echo the IP back to the
client, you should probably also check the 'use-service' applet syntax; I don't
know if that could be faster for your purpose.
Then another thing to check would be whether you want to use the tcp or http 
service mode. A TCP service could be almost 1 line of Lua code, and I would 
expect it to be a bit faster.


http://www.arpalert.org/src/haproxy-lua-api/1.6/index.html#haproxy-lua-hello-world
Instead of sending 'hello world' you could send the client-ip..
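A sketch of what that could look like as a registered HTTP service instead of a
raw res:send() action (1.6 Lua API; the service name is illustrative):

```lua
core.register_service("echo-ip", "http", function(applet)
    local addr = applet.sf:src()          -- client address as a string
    applet:set_status(200)
    applet:add_header("content-type", "text/plain")
    applet:add_header("content-length", string.len(addr))
    applet:start_response()
    applet:send(addr)
end)
```

In a frontend this would be wired up with 'http-request use-service lua.echo-ip'.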

On 10-11-2015 at 23:46, Thrawn wrote:

OK, some explanation seems in order :).

I ran ab with concurrency 1000 and a total of 30000 requests, against 
each server, 5 times, plus one run each with 150000 requests (sum of 
the previous 5 tests).
For Apache+PHP, this typically resulted in 5-15ms response time for 
99% of requests, with the remaining few either taking tens of seconds 
or eventually disconnecting with an error.
For HAProxy+Lua, 99% response times were 1ms, or sometimes 2ms, with 
the last few taking about 200ms. So, HAProxy worked much better (of 
course).


However, on the larger run (150k), HAProxy too had a small percentage 
of disconnections (apr_socket_recv: Connection reset by peer). I've 
been able to reproduce this with moderate consistency whenever I push 
it beyond about 35000 total requests. It's still a better error rate 
than PHP, but I'd like to understand why the errors are occurring. For 
all I know, it's a problem with ab.


I've also tried a couple of runs with 150000 requests but concurrency 
only 100, and neither server had trouble serving that, although 
interestingly, PHP is slightly more consistent: 99% within 4-5ms, then 
about 200ms for the last few, whereas HAProxy returns 99% within 1-2ms 
and 1800ms for the last few.


The box is just my workstation, 8 cores and 16GB RAM, running Ubuntu 
15.10, with no special tuning.


Any ideas on why the HAProxy tests showed disconnections or occasional 
slow response times at high loads?




On Wednesday, 11 November 2015, 8:29, Baptiste  wrote:


On Tue, Nov 10, 2015 at 10:46 PM, Thrawn
> wrote:

> OK, I've set this up locally, and tested it against PHP using ab.
>
> HAProxy was consistently faster (99% within 1ms, vs 5-15ms for PHP), 
but at
> request volumes over about 35000, with concurrency 1000, it 
consistently had
> a small percentage of socket disconnections. PHP had timeouts - or 
very long
> response times - and disconnections at pretty much any request 
volume with
> that concurrency, but I'm wondering where the errors stem from, or 
even if

> it's a limitation of ab.
>
> HAProxy config:
>
> global
>maxconn 4096
>daemon
>nbproc 1
>stats socket localhost:9461 level admin
>chroot /etc/haproxy/jail
>user haproxy
>group haproxy
>lua-load /etc/haproxy/jail/echo.lua
>
> defaults
>log 127.0.0.1 local0
>mode http
>timeout client 6
>timeout server 6
>timeout connect 6
>option forwardfor
>balance roundrobin
>option abortonclose
>maxconn 20
>
> frontend echo
>bind 127.0.1.1:1610
>timeout client 1
>mode http
>http-request lua.echo
>
> Lua:
> core.register_action("echo", { "http-req" }, function (txn)
>local buffer = txn.f:src()
>txn.res:send("HTTP/1.0 200 OK\r\nServer:
> haproxy-lua/echo\r\nContent-Type: text/html\r\nContent-Length: " ..
> buffer:len() .. "\r\nConnection: close\r\n\r\n" .. buffer)
>txn:done()
> end)
>

Hi Thrawn

I'm sorry, but I don't understand any of your benchmarks!
If you could at least give an explanation before running each ab,
this may help.

Furthermore, you don't share anything about your hardware environment
nor the tuning you did on each box.
So it's impossible to help you.

At least, I can say that Lua seems to perform very well :)

Baptiste






[PATCH] DOC: lua-api/index.rst small example fixes, spelling correction.

2015-11-08 Thread PiBa-NL

Hi List, Willy,

Attached some small example fixes, spelling correction.
Hope its ok like this :).

Regards,
PiBa-NL
From fdecc44b9bf94bfaceb9d0335ea3a185e575cd86 Mon Sep 17 00:00:00 2001
From: Pieter Baauw <piba.nl@gmail.com>
Date: Sun, 8 Nov 2015 16:38:08 +0100
Subject: [PATCH] DOC: lua-api/index.rst small example fixes, spelling
 correction.

---
 doc/lua-api/index.rst | 24 
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/doc/lua-api/index.rst b/doc/lua-api/index.rst
index c216d12..60c9725 100644
--- a/doc/lua-api/index.rst
+++ b/doc/lua-api/index.rst
@@ -406,7 +406,7 @@ Core class
 
 .. code-block:: lua
 
-  core.register_service("hello-world", "http" }, function(applet)
+  core.register_service("hello-world", "http", function(applet)
  local response = "Hello World !"
  applet:set_status(200)
  applet:add_header("content-length", string.len(response))
@@ -430,7 +430,7 @@ Core class
   Register a function executed after the configuration parsing. This is useful
   to check any parameters.
 
-  :param fuction func: is the Lua function called to work as initializer.
+  :param function func: is the Lua function called to work as initializer.
 
   The prototype of the Lua function used as argument is:
 
@@ -449,7 +449,7 @@ Core class
   main scheduler starts. For example this type of tasks can be executed to
   perform complex health checks.
 
-  :param fuction func: is the Lua function called to work as initializer.
+  :param function func: is the Lua function called to work as initializer.
 
   The prototype of the Lua function used as argument is:
 
@@ -561,7 +561,7 @@ Converters class
   * applying hash on input string (djb2, crc32, sdbm, wt6),
   * format date,
   * json escape,
-  * extracting prefered language comparing two lists,
+  * extracting preferred language comparing two lists,
   * turn to lower or upper chars,
   * deal with stick tables.
 
@@ -595,7 +595,7 @@ Channel class
   If the buffer cant receive more data, a 'nil' value is returned.
 
   :param class_channel channel: The manipulated Channel.
-  :returns: a string containig all the avalaible data or nil.
+  :returns: a string containing all the available data or nil.
 
 .. js:function:: Channel.get(channel)
 
@@ -605,7 +605,7 @@ Channel class
   If the buffer cant receive more data, a 'nil' value is returned.
 
   :param class_channel channel: The manipulated Channel.
-  :returns: a string containig all the avalaible data or nil.
+  :returns: a string containing all the available data or nil.
 
 .. js:function:: Channel.getline(channel)
 
@@ -628,7 +628,7 @@ Channel class
 
   :param class_channel channel: The manipulated Channel.
   :param string string: The data which will sent.
-  :returns: an integer containing the amount of butes copyed or -1.
+  :returns: an integer containing the amount of bytes copied or -1.
 
 .. js:function:: Channel.append(channel, string)
 
@@ -640,7 +640,7 @@ Channel class
 
   :param class_channel channel: The manipulated Channel.
   :param string string: The data which will sent.
-  :returns: an integer containing the amount of butes copyed or -1.
+  :returns: an integer containing the amount of bytes copied or -1.
 
 .. js:function:: Channel.send(channel, string)
 
@@ -649,21 +649,21 @@ Channel class
 
   :param class_channel channel: The manipulated Channel.
   :param string string: The data which will sent.
-  :returns: an integer containing the amount of butes copyed or -1.
+  :returns: an integer containing the amount of bytes copied or -1.
 
 .. js:function:: Channel.get_in_length(channel)
 
   This function returns the length of the input part of the buffer.
 
   :param class_channel channel: The manipulated Channel.
-  :returns: an integer containing the amount of avalaible bytes.
+  :returns: an integer containing the amount of available bytes.
 
 .. js:function:: Channel.get_out_length(channel)
 
   This function returns the length of the output part of the buffer.
 
   :param class_channel channel: The manipulated Channel.
-  :returns: an integer containing the amount of avalaible bytes.
+  :returns: an integer containing the amount of available bytes.
 
 .. js:function:: Channel.forward(channel, int)
 
@@ -1359,7 +1359,7 @@ AppletHTTP class
   This is an hello world sample code:
 
 .. code-block:: lua
-  core.register_service("hello-world", "http" }, function(applet)
+  core.register_service("hello-world", "http", function(applet)
  local response = "Hello World !"
  applet:set_status(200)
  applet:add_header("content-length", string.len(response))
-- 
1.9.5.msysgit.1



Re: [PATCH] MEDIUM: mailer: try sending a mail up to 3 times

2015-11-08 Thread PiBa-NL

Forgot to include list, sorry.

On 8-11-2015 at 17:33, PiBa-NL wrote:

Hi Ben, Willy, Simon,

Ben, thanks for the review.
Hoping the 'release pressure' has cleared for Willy, I'm resending the 
patch now, with your comments incorporated.


CC, to Simon as maintainer of mailers part so he can give approval (or 
not).


The original reservations I had when sending this patch still apply. 
See the "HOWEVER" part in the bottom mail.


Hoping it might get merged to improve mailer reliability, so no 
'server down' email gets lost.

Thanks everyone for your time :) .

Regards,
PiBa-NL

On 22-9-2015 at 16:43, Ben Cabot wrote:

Hi PiBa-NL,

Malcolm has asked me to take a look at this. While I don't know
enough to answer the questions about the design and implementation,
I have tested the patch. In my testing it works well and I have a
couple of comments.

I had a warning when building, struct email_alert *alert; should be
before process_chk(t); or gcc moans (Warning: ISO C90 forbids mixed
declarations and code).
I've moved the struct to the top of the if statement, where it was before 
my patch. I expect that to fix the warning.


It makes 4 attempts in total to send the mail, where I believe it 
should be 3?

If the total desired attempts is 3, it looks like "if (check->fall < 3)
{ " should be "if (check->fall < 2)" with "check->fall++;" inside the
if statement. I may be wrong, I've only briefly looked.
Yes, it did '3 retries'. I've changed it to make a total of '3 attempts', 
which is more like the normal 3 SYN packets sent when opening a failing 
connection.

While testing this I've realised it would also be nice to log when the
email fails to send after 3 attempts but that is a job for another
day.

Thanks for submitting this as its helpful for us, also for helping
with my patch.  I am still waiting for Willy to come back to me about
mine as well. As he is in the middle of a release I expect he is very
busy at the moment so I'll wait a while before giving him a poke and
following up. Hopefully I've been of some help to you.

Thanks for testing!

Kind Regards,
Ben

On 4 August 2015 at 20:35, PiBa-NL <piba.nl@gmail.com> wrote:

bump?
 Forwarded message 
Subject:  request for comment - [PATCH] MEDIUM: mailer: retry 
sending

a mail up to 3 times
Date:  Sun, 26 Jul 2015 21:08:41 +0200
From:   PiBa-NL <piba.nl@gmail.com>
To:   HAproxy Mailing Lists <haproxy@formilux.org>



Hi guys,

I've created a small patch that will retry sending a mail up to 3 times
if it fails the first time.
It seems to work in my limited testing..

HOWEVER:
- I have not checked for memory leaks or sockets not being closed properly
(I don't know how to..)
- Is setting the current and last steps to null the proper way to reset the
step of rule evaluation?
- CO_FL_ERROR is set when there is a connection error.. this seems to be
the proper check.
- But check->conn->flags & 0xFF is a bit of a guess from observing the
flags when it could connect but the server did not respond properly.. is

there another, better way?
- I used the 'fall' variable to track the number of retries.. should I
have created a separate 'retries' variable?

Thanks for any feedback you can give me.

Best regards,
PiBa-NL













Re: [PATCH] MEDIUM: mailer: try sending a mail up to 3 times

2015-11-08 Thread PiBa-NL
Forgot to include list, sorry. And then the attachment dropped off.. 
Resending.


From 18fd2740b7c9f511e03afe9ebb8237f6a640a141 Mon Sep 17 00:00:00 2001
From: Pieter Baauw <piba.nl@gmail.com>
Date: Sun, 26 Jul 2015 20:47:27 +0200
Subject: [PATCH] MEDIUM: mailer: retry sending a mail up to 3 times

Currently only 1 connection attempt (SYN packet) was made; this patch increases 
that to 3 attempts. This makes it less likely the mail is lost due to a 
single lost packet.
---
 src/checks.c | 14 +-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/src/checks.c b/src/checks.c
index e77926a..335eb9a 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -1408,7 +1408,7 @@ static struct task *server_warmup(struct task *t)
  *
  * It can return one of :
  *  - SF_ERR_NONE if everything's OK and tcpcheck_main() was not called
- *  - SF_ERR_UP if if everything's OK and tcpcheck_main() was called
+ *  - SF_ERR_UP if everything's OK and tcpcheck_main() was called
  *  - SF_ERR_SRVTO if there are no more servers
  *  - SF_ERR_SRVCL if the connection was refused by the server
  *  - SF_ERR_PRXCOND if the connection has been limited by the proxy (maxconn)
@@ -3065,6 +3065,7 @@ static struct task *process_email_alert(struct task *t)
LIST_DEL(&alert->list);
 
check->state |= CHK_ST_ENABLED;
+   check->fall = 0;
}
 
}
@@ -3074,6 +3075,17 @@ static struct task *process_email_alert(struct task *t)
if (!(check->state & CHK_ST_INPROGRESS) && check->tcpcheck_rules) {
struct email_alert *alert;
 
+   if ((check->conn->fla

Re: [PATCH] MEDIUM: mailer: try sending a mail up to 3 times

2015-11-16 Thread PiBa-NL

Hi Willy,
On 16-11-2015 at 7:20, Willy Tarreau wrote:

Hi Pieter,

On Mon, Nov 16, 2015 at 12:13:50AM +0100, PiBa-NL wrote:

-but check->conn->flags & 0xFF is a bit of a guess from observing the
flags when it could connect but the server did not respond
properly.. is there another, better way?

This one is ugly. First, you should never use the numeric values
when there are defines or enums because if these flags happen to
change or move to something else, your value will not be spotted
and will not be converted.

Agreed it was ugly, but I could not find the enum-based equivalent for
that value.. Now I think it's only checking 1 bit of it, but that seems
to work alright too.

You could have ORed all the respective flags but even so it didn't
really make sense to have all of them.


Thus I'm attaching two proposals that I'd like you to test, the
first one verifies if the connection was established or not. The
second one checks if we've reached the last rule (implying all
of them were executed).

From my tests both work as you describe:
v1 retries the connection part, v2 also retries if the mail sending
did not complete normally.
I think v2 would be the preferred solution.

OK fine, thanks for the test.


Though looking through my tcpdumps again I do see it tries to connect
with 3 different client ports; that's not how a normal TCP socket would
retry, right?

Do not confuse haproxy and the TCP stack, that's important. Dropped
*packets* are dealt with by the TCP stack, which retransmits them over
the same connection, thus the same ports. When haproxy retries, it does
not manipulate packets; it retries failed connections, ie the ones that
TCP failed to fix (eg: multiple dropped packets at the TCP level causing
a connection to fail for whatever reason, such as a blocked source port
or a dead link requiring a new connection to be attempted via a different
path).


But I think that's not so much a problem. It does still make
me wonder a little what happens if a packet is lost in the middle of a
TCP connection: will it resend like a normal TCP connection? It's
difficult to test though..

Haproxy doesn't affect how TCP works. We never see packets, the TCP stack
guarantees a reliable connection below us.
But without the patch only 1 SYN packet is sent; shouldn't the normal 
TCP stack always send 3 SYN packets when a connection is not getting 
established?
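[Editor's note, not from the mails: the SYN retransmission count is a kernel setting, not something HAProxy controls. On Linux it is the `net.ipv4.tcp_syn_retries` sysctl; FreeBSD's stack retransmits SYNs as well, through its own knobs. An illustrative Linux sysctl.conf fragment:]

```
# Linux sysctl (illustrative): how many times an unanswered SYN is
# retransmitted by the kernel before connect() fails.
net.ipv4.tcp_syn_retries = 6
```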



If you can apply the v2 patch, I think that solves most issues that one
or two lost packets can cause in any case.

Now I'm having doubts, because I think your motivation to develop this
patch was more related to some fears of dropped *packets* (that are
already properly handled below us) than any real world issue.
My initial motivation was to get 3 SYN packets, but if the TCP 
connection is handled through the normal stack, I don't understand why 
only 1 is being sent without the patch; and because only 1 SYN packet is 
being sent, I am not sure other ACK / PSH-ACK packets would be retried 
by the normal stack either (I have not yet tested if that is the case).


Could you please confirm exactly what case you wanted to cover here ?

Thanks,
Willy


Regards
PiBa-NL



Re: [PATCH] MEDIUM: mailer: try sending a mail up to 3 times

2015-11-15 Thread PiBa-NL

Hi Willy,
On 15-11-2015 at 8:48, Willy Tarreau wrote:

Pieter,

I'm just seeing this part in your description while merging
the patch :

On Sun, Nov 08, 2015 at 07:19:21PM +0100, PiBa-NL wrote:

HOWEVER.
-i have not checked for memoryleaks, sockets not being closed properly
(i dont know how to..)

I'm not worried for this one, at first glance it looks OK.


-is setting current and last steps to null the proper way to reset the
step of rule evaluation?

Yes that's apparently it.


-CO_FL_ERROR is set when there is a connection error.. this seems to be
the proper check.

Indeed.


-but check->conn->flags & 0xFF is a bit of a guess from observing the
flags when it could connect but the server did not respond
properly.. is there another, better way?

This one is ugly. First, you should never use the numeric values
when there are defines or enums because if these flags happen to
change or move to something else, your value will not be spotted
and will not be converted.
Agreed it was ugly, but I could not find the enum-based equivalent for 
that value.. Now I think it's only checking 1 bit of it, but that seems 
to work alright too.


Second, normally you would just need "!CONNECTED", as this flag
is set once the connection is established. Note that I understood
from the commit message that you wanted to cover from connection
issues, but the code makes me think that you wanted to retry even
if the connection was properly established but the mail could not
be completely delivered.

I was thinking the state might include 'more' than only !CONNECTED.


Thus I'm attaching two proposals that I'd like you to test, the
first one verifies if the connection was established or not. The
second one checks if we've reached the last rule (implying all
of them were executed).

From my tests both work as you describe:
v1 retries the connection part, v2 also retries if the mail sending 
did not complete normally.

I think v2 would be the preferred solution.

Though looking through my tcpdumps again I do see it tries to connect 
with 3 different client ports; that's not how a normal TCP socket would 
retry, right? But I think that's not so much a problem. It does still make 
me wonder a little what happens if a packet is lost in the middle of a 
TCP connection: will it resend like a normal TCP connection? It's 
difficult to test though..


If you can apply the v2 patch, I think that solves most issues that one 
or two lost packets can cause in any case.

Thanks,
Willy


Thanks,
PiBa-NL



Re: Echo server in Lua

2015-11-11 Thread PiBa-NL

Hi Thrawn,

I tried these configs, and there doesn't seem to be much if any 
difference. The tcp one might even be the slowest in my limited 
virtualized tests, but only by a few milliseconds..


frontend lua-replyip
    bind 192.168.0.120:9010
    mode http
    http-request use-service lua.lua-replyip
frontend lua-replyip-copy
    bind 192.168.0.120:9011
    mode tcp
    tcp-request content use-service lua.lua-replyip-tcp
frontend lua-replyip-httpreq
    bind 192.168.0.120:9012
    mode http
    http-request lua.lua-replyip-http-req

core.register_service("lua-replyip", "http", function(applet)
    local response = applet.f:src()
    applet:set_status(200)
    applet:add_header("Server", "haproxy-lua/echo")
    applet:add_header("content-length", string.len(response))
    applet:add_header("content-type", "text/plain")
    applet:start_response()
    applet:send(response)
end)

core.register_service("lua-replyip-tcp", "tcp", function(applet)
    local buffer = applet.f:src()
    applet:send("HTTP/1.0 200 OK\r\nServer: haproxy-lua/echo\r\nContent-Type: text/html\r\nContent-Length: " .. buffer:len() .. "\r\nConnection: close\r\n\r\n" .. buffer)
end)

core.register_action("lua-replyip-http-req", { "http-req" }, function (txn)
    local buffer = txn.f:src()
    txn.res:send("HTTP/1.0 200 OK\r\nServer: haproxy-lua/echo\r\nContent-Type: text/html\r\nContent-Length: " .. buffer:len() .. "\r\nConnection: close\r\n\r\n" .. buffer)
    txn:done()
end)
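[Editor's sketch: for completeness, the services above have to be loaded in the global section before the frontends can reference them. A minimal fragment, assuming the Lua code is saved as /etc/haproxy/echo.lua (hypothetical path):]

```
global
    # hypothetical path to the file holding the three services above
    lua-load /etc/haproxy/echo.lua
```

A quick manual check, assuming the lua-replyip frontend above, would be `curl http://192.168.0.120:9010/`, which should return the client IP.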


On 11-11-2015 at 3:07, Thrawn wrote:
Hmm...I seem to be able to set up something in TCP mode, and it 
returns the expected response via curl, but its performance is awful. 
I must be doing something wrong?


Lua:

core.register_action("tcp-echo", {"tcp-req"}, function (txn)
local buffer = txn.f:src()
txn.res:send("HTTP/1.0 200 OK\r\nServer: 
haproxy-lua/echo\r\nContent-Type: text/html\r\nContent-Length: " .. 
buffer:len() .. "\r\nConnection: close\r\n\r\n" ..

missing the appending of 'buffer' at the end of the line above?

txn:done()
end)

I couldn't find a way for a TCP applet to retrieve the client IP 
address; suggestions are welcome.


HAProxy config:

frontend tcp-echo
bind 127.0.2.1:1610
timeout client 1
mode tcp
    tcp-request content lua.tcp-echo

Testing this with ab frequently hangs and times out even at tiny loads 
(10 requests with concurrency 3).




On Wednesday, 11 November 2015, 10:19, PiBa-NL <piba.nl@gmail.com> 
wrote:



By the way, if the sole purpose of the frontend is to echo the IP back to the 
client, you should probably also check the 'use-service' applet syntax; I 
don't know if that could be faster for your purpose.
Another thing to check would be whether you want to use the tcp or 
http service mode. A TCP service could be almost 1 line of Lua code, 
and I kind of expect it to be a bit faster.


http://www.arpalert.org/src/haproxy-lua-api/1.6/index.html#haproxy-lua-hello-world
Instead of sending 'hello world' you could send the client-ip..

Op 10-11-2015 om 23:46 schreef Thrawn:

OK, some explanation seems in order :).

I ran ab with concurrency 1000 and a total of 30,000 requests, against 
each server, 5 times, plus one run each with 150,000 requests (the sum of 
the previous 5 tests).
For Apache+PHP, this typically resulted in 5-15ms response time for 
99% of requests, with the remaining few either taking tens of seconds 
or eventually disconnecting with an error.
For HAProxy+Lua, 99% response times were 1ms, or sometimes 2ms, with 
the last few taking about 200ms. So, HAProxy worked much better (of 
course).


However, on the larger run (150k), HAProxy too had a small percentage 
of disconnections (apr_socket_recv: Connection reset by peer). I've 
been able to reproduce this with moderate consistency whenever I push 
it beyond about 35000 total requests. It's still a better error rate 
than PHP, but I'd like to understand why the errors are occurring. 
For all I know, it's a problem with ab.


I've also tried a couple of runs with 150,000 requests but concurrency 
only 100, and neither server had trouble serving that, although 
interestingly, PHP is slightly more consistent: 99% within 4-5ms, 
then about 200ms for the last few, whereas HAProxy returns 99% within 
1-2ms and 1800ms for the last few.


The box is just my workstation, 8 cores and 16GB RAM, running Ubuntu 
15.10, with no special tuning.


Any ideas on why the HAProxy tests showed disconnections or 
occasional slow response times at high loads?




On Wednesday, 11 November 2015, 8:29, Baptiste <bed...@gmail.com> wrote:



On Tue, Nov 10, 2015 at 10:46 PM, Thrawn
<shell_layer-git...@yahoo.com.au> wrote:

> OK,

Re: LUA, 'retry' failed requests

2015-11-02 Thread PiBa-NL

On 2-11-2015 at 10:03, Thierry FOURNIER wrote:

On Sat, 31 Oct 2015 21:22:14 +0100
PiBa-NL <piba.nl@gmail.com> wrote:


Hi Thierry, haproxy-list,


Hi Pieter,

Hi Thierry,




I've created another possibly interesting lua script, and it works :)
(mostly). (on my test machine..)

When I visit the 192.168.0.120:9003 website I always see the 'Hello
World' page, so in that regard this is usable. It is left to the browser
to send the request again; not sure how safe this is in regard to
mutations being sent twice. It should probably check for POST requests
and then just return the error without replacing it with a redirect..
Not sure if that would catch all problem cases..

I've created a Lua service that counts how many requests are made, and
returns an error status every 5th request.
Second, there is a Lua response script that checks the status, and
replaces it with a redirect if it sees the faulty status 500.
This currently results in the connection being closed and reopened,
probably due to the txn.res:send()?

Though I am still struggling with what is and isn't supposed to be possible.
For example, the scripts below are running in 'mode http' and mostly just
changing 'headers'.
I expected to be able to simply read the status by calling
txn.f:status(), but this always seems to result in 'null'.
Manually parsing a duplicate of the response buffer works but seems ugly..

txn.f:status()  <  it doesn't return the actual status.


This is a bug which I reproduce. Can you try the attached patches?

With the patches it works without my 'workaround', thanks.



txn.res:set()  < if used in place of send() causes 30 second delay


This function puts data in the input part of the response buffer. This
new data follows the HAProxy stream when the Lua script is finished.
That is your case.

I can't reproduce this behaviour; I suppose that it's because I work
locally, and I'm not impacted by the network latency.
Even when I bind everything to 0.0.0.0 and use 127.0.0.1 to query the 
9003 port, it still waits for the timeout to strike..
I'm not sure why it doesn't happen in your setup.. Of course I'm running 
on FreeBSD, but I don't expect that to affect this..




txn.done()  < dumps core. (I'm not sure when to call it? The script
below seems to match the description that this function has.)


I can't reproduce it either, for the same reasons, I guess.
Please note that both set() and done() need to be uncommented for the 
dump to happen, with the 5th request.


Not sure if it helps, but the backtrace of the dump is below (would 'bt full' 
be more useful?):

(gdb) bt
#0  0x000801a76bb5 in memmove () from /lib/libc.so.7
#1  0x00417523 in buffer_insert_line2 (b=0x8024a,
pos=0x8024a0035 "\r\n\ncontent-type: text/plain\r\ncontent-length: 
394\r\n\r\nError 5\r\nversion\t\n[HTTP/1.1]\t\nf\t\n   0\t\n   
[userdata: 0x802683a68]\t\nsc\t\n   0\t\n   [userdata: 
0x802683be8]\t\nc\t\n 0\t\n   [userdata: 0x802683b68]\t\nheader"...,

str=0x58c695 "Connection: keep-alive", len=22) at src/buffer.c:126
#2  0x0047b3a5 in http_header_add_tail2 (msg=0x8024bb290, 
hdr_idx=0x8024bb280, text=0x58c695 "Connection: keep-alive", len=22)

at src/proto_http.c:595
#3  0x0047f943 in http_change_connection_header 
(txn=0x8024bb280, msg=0x8024bb290, wanted=8388608) at src/proto_http.c:2079
#4  0x004900fd in http_process_res_common (s=0x802485600, 
rep=0x802485650, an_bit=262144, px=0x8024de000) at src/proto_http.c:6882
#5  0x004d6c90 in process_stream (t=0x8024ab710) at 
src/stream.c:1918

#6  0x00420588 in process_runnable_tasks () at src/task.c:238
#7  0x0040ce0e in run_poll_loop () at src/haproxy.c:1559
#8  0x0040dcb2 in main (argc=4, argv=0x7fffeb00) at 
src/haproxy.c:1912





Am I trying to do it wrong?

P.S. Is 'health checking' using Lua possible? The redis example looks
like a health 'ping'.. It could possibly be much, much more flexible than
the tcp-check send / tcp-check expect routines..


It is not possible. You can write a task which does something (like an
HTTP request) and reuse the result in the request-processing Lua code,
but this task cannot set the status of the server.
The doc does say for core.register_task(func): "For example this type of 
tasks can be executed to perform complex health checks." If I 
understand correctly, that means we can perform health checks with it, 
but the results of such a check cannot be used to change a server's 
health? Probably the text near this register_task could use some 
tweaking then?
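[Editor's sketch, not from the thread, of what register_task *can* do today: a background task that opens a TCP connection every few seconds and records the result in a Lua variable, which request-processing code could then read. The address and interval are illustrative; as discussed above, it cannot mark a server up or down.]

```lua
-- Sketch only: a background "health ping" via core.register_task.
-- Runs inside HAProxy's Lua engine, not under a standalone interpreter.
local last_check_ok = false

core.register_task(function()
    while true do
        local s = core.tcp()
        if s:connect("192.168.0.40", 81) then
            last_check_ok = true    -- connect succeeded
        else
            last_check_ok = false   -- connect failed
        end
        s:close()
        core.sleep(5)               -- wait 5 seconds between checks
    end
end)
```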



Thierry



I'm currently testing with HA-Proxy version 1.7-dev0-e4c4b7d and the
following configuration files:

### haproxy.cfg ###
global
  maxconn 6000
  lua-load /var/etc/haproxy/luascript_lua-count5error
defaults
  timeout connect 3
  timeout server 3
  timeout client 3
  modeh

Re: LUA, 'retry' failed requests

2015-11-05 Thread PiBa-NL

Hi Thierry,
On 5-11-2015 at 8:08, Thierry FOURNIER wrote:

Hi,

Now, because of you, I have my own FreeBSD installed :). I can
reproduce the segfault. I suppose that OSes other than FreeBSD are
impacted too, but luckily it's not visible.
OK, thanks for installing and trying on FreeBSD :) I suppose some OSes 
are more likely to catch/evade a problem than others.


I encountered an easy compilation error. So can you test the two attached
patches? The first one fixes the compilation issue, and the second one
fixes the segfault.
The first patch removes the warning below, though it always built fine as far 
as I could tell. Anyway, fewer warnings is better.

include/common/mini-clist.h:114:9: warning: 'LIST_PREV' macro redefined
#define LIST_PREV(lh, pt, el) (LIST_ELEM((lh)->p, pt, el))

I confirm the second patch fixes the core dump.

Thanks as always!

Regards,
PiBa-NL


Thierry


On Mon, 2 Nov 2015 20:50:01 +0100
PiBa-NL <piba.nl@gmail.com> wrote:


Op 2-11-2015 om 10:03 schreef Thierry FOURNIER:

On Sat, 31 Oct 2015 21:22:14 +0100
PiBa-NL <piba.nl@gmail.com> wrote:


Hi Thierry, haproxy-list,

Hi Pieter,

Hi Thierry,



I've created another possibly interesting lua script, and it works :)
(mostly). (on my test machine..)

When i visit the 192.168.0.120:9003 website i always see the 'Hello
World' page. So in that regard this is usable, it is left to the browser
to send the request again, not sure how safe this is in regard to
mutations being send twice. It should probably check for POST requests
and then just return the error without replacing it with a redirect..
Not sure if that would catch all problem cases..

Ive created a lua service that counts how many requests are made, and
returns a error status every 5th request.
Second there is a lua response script that checks the status, and
replaces it by a redirect if it sees the faulty status 500.
This does currently result in the connection being closed and reopened
probably due to the txn.res:send().?.

Though i am still struggling with what is and isn't supposed to be possible.
For example the scripts below are running in 'mode http' and mostly just
changing 'headers'.
I expected to be able to simply read the status by calling
txn.f:status() but this always seems to result in 'null'.
Manually parsing the response buffer duplicate works but seems ugly..

txn.f:status()  <  it doesnt result in the actual status.

This is a bug wich I reproduce. Can you try the attached patches ?

With the patches it works without my 'workaround', thanks.

txn.res:set()  < if used in place of send() causes 30 second delay

This function put data in the input part of the response buffer. This
new data follows the HAProxy stream when the Lua script is finished.
It is your case.

I can't reproduce this behaviour, I suppose that its because I work
locally, and I'm not impacted by the network latency.

Even when i bind everything to 0.0.0.0 and use 127.0.0.1 to query the
9003 port it still waits for the timeout to strike..
I'm not sure why it doesn't happen in your setup.. Of course i'm running
on FreeBSD, but i don't expect that to affect this..



txn.done()  < dumps core. (im not sure when ever to call it? the script
below seems to match the description that this function has.?.)

I can't reproduce too, for the same reasons, I guess.

Please note that both set() and done() need to be uncommented for the
dump to happen, with the 5th request.

Not sure if it helps, but backtrace of the dump below (would 'bt full'
be more useful?):
(gdb) bt
#0  0x000801a76bb5 in memmove () from /lib/libc.so.7
#1  0x00417523 in buffer_insert_line2 (b=0x8024a,
  pos=0x8024a0035 "\r\n\ncontent-type: text/plain\r\ncontent-length:
394\r\n\r\nError 5\r\nversion\t\n[HTTP/1.1]\t\nf\t\n   0\t\n
[userdata: 0x802683a68]\t\nsc\t\n   0\t\n   [userdata:
0x802683be8]\t\nc\t\n 0\t\n   [userdata: 0x802683b68]\t\nheader"...,
  str=0x58c695 "Connection: keep-alive", len=22) at src/buffer.c:126
#2  0x0047b3a5 in http_header_add_tail2 (msg=0x8024bb290,
hdr_idx=0x8024bb280, text=0x58c695 "Connection: keep-alive", len=22)
  at src/proto_http.c:595
#3  0x0047f943 in http_change_connection_header
(txn=0x8024bb280, msg=0x8024bb290, wanted=8388608) at src/proto_http.c:2079
#4  0x004900fd in http_process_res_common (s=0x802485600,
rep=0x802485650, an_bit=262144, px=0x8024de000) at src/proto_http.c:6882
#5  0x004d6c90 in process_stream (t=0x8024ab710) at
src/stream.c:1918
#6  0x00420588 in process_runnable_tasks () at src/task.c:238
#7  0x0040ce0e in run_poll_loop () at src/haproxy.c:1559
#8  0x0040dcb2 in main (argc=4, argv=0x7fffeb00) at
src/haproxy.c:1912


Am I trying to do it wrong?

p.s. Is 'health checking' using lua possible? The redis example looks
like a health 'ping'.. It could possibly be much more flexible than
the tcp-check send / tcp-check expect routines..

It is not p

LUA, 'retry' failed requests

2015-10-31 Thread PiBa-NL

Hi Thierry, haproxy-list,

I've created another possibly interesting lua script, and it works :) 
(mostly). (on my test machine..)


When i visit the 192.168.0.120:9003 website i always see the 'Hello 
World' page. So in that regard this is usable, it is left to the browser 
to send the request again, not sure how safe this is in regard to 
mutations being sent twice. It should probably check for POST requests
and then just return the error without replacing it with a redirect.. 
Not sure if that would catch all problem cases..
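Such a guard could be sketched like this; it is only an illustration, and the txn.f:method() fetch and the set_priv(nil) reset are my own assumptions, not tested code:

```lua
-- Hypothetical sketch: only remember the URL for retry when the request
-- is idempotent (GET/HEAD); never arm the redirect for POST and friends.
-- Assumes txn.f:method() exposes HAProxy's 'method' sample fetch.
core.register_action("retrystorerequest", { "http-req" }, function(txn)
    local m = txn.f:method()
    if m == "GET" or m == "HEAD" then
        txn:set_priv(txn.f:url())  -- considered safe to replay
    else
        txn:set_priv(nil)          -- a mutation: let the error through
    end
end)
```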


I've created a Lua service that counts how many requests are made, and
returns an error status every 5th request.
Second there is a lua response script that checks the status, and 
replaces it by a redirect if it sees the faulty status 500.
This does currently result in the connection being closed and reopened 
probably due to the txn.res:send().?.


Though i am still struggling with what is and isn't supposed to be possible.
For example the scripts below are running in 'mode http' and mostly just 
changing 'headers'.
I expected to be able to simply read the status by calling 
txn.f:status() but this always seems to result in 'null'.

Manually parsing the response buffer duplicate works but seems ugly..

txn.f:status()  <  it doesn't result in the actual status.
txn.res:set()  < if used in place of send() causes 30 second delay
txn.done()  < dumps core. (I'm not sure when to call it? the script
below seems to match the description of this function.)


Am I trying to do it wrong?

p.s. Is 'health checking' using lua possible? The redis example looks
like a health 'ping'.. It could possibly be much more flexible than
the tcp-check send / tcp-check expect routines..
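For comparison, the tcp-check routines mentioned here look roughly like this in the configuration; a sketch based on the documented Redis health check (the backend and server names are placeholders):

```
backend redis_bk
    option tcp-check
    tcp-check connect
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
    server redis1 192.0.2.10:6379 check inter 1s
```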


I'm currently testing with HA-Proxy version 1.7-dev0-e4c4b7d and the 
following configuration files:


### haproxy.cfg ###
global
    maxconn 6000
    lua-load /var/etc/haproxy/luascript_lua-count5error
defaults
    timeout connect 3
    timeout server 3
    timeout client 3
    mode http
    log global
frontend TEST-lua-count
    bind 192.168.0.120:9002
    option http-keep-alive
    http-request use-service lua.lua-count

frontend TEST-lua-retry-serverror
    bind 192.168.0.120:9003
    option http-keep-alive
    http-request lua.retrystorerequest
    http-response lua.retryerrors
    default_backend hap_9002_http_ipvANY

backend hap_9002_http_ipvANY
    mode http
    retries 3
    server 192.168.0.120_9002 192.168.0.120:9002

### luascript_lua-count5error ###
core.register_action("retryerrors" , { "http-res" }, function(txn)
local clientip = txn.f:src()
txn:Info("  LUA client " .. clientip)

local s = txn.f:status() -- doesn't work?
if s == nil then
core.Info("LUA txn.f:status() RETURNED: nil, fallback needed ??")
local req = txn.res:dup()
local statusstr = string.sub(req, 10, 13)
s = tonumber(statusstr)
end
core.Info("LUA status " .. s)

if s ~= 200 then
txn:Info("LUA REDIRECT IT ! " .. s)

local url = txn:get_priv()
local response = ""
response = response .. "HTTP/1.1 302 Moved\r\n"
response = response .. "Location: " .. url .."\r\n"
response = response .. "\r\n"

txn.res:send(response)
--txn.res:set(response) -- causes 30 second delay..
--txn:done() --dumps core..
end
end);

core.register_action("retrystorerequest" , { "http-req" }, function(txn)
local url = txn.f:url()
txn:set_priv(url);
end);

core.register_service("lua-count", "http", function(applet)
   if test == nil then
  test = 0
   end
   test = test + 1
   local response = ""
   if test % 5 == 0 then
  applet:set_status(500)
  response = "Error " .. test
   else
  applet:set_status(200)
  response = "Hello World !" .. test
   end
   applet:add_header("content-length", string.len(response))
   applet:add_header("content-type", "text/plain")
   applet:start_response()
   applet:send(response)
end)




Re: lua, changing response-body in http pages 'supported' ?

2015-10-18 Thread PiBa-NL

Hi Thierry,

On 18-10-2015 at 21:37, thierry.fourn...@arpalert.org wrote:

On Sun, 18 Oct 2015 00:07:13 +0200
PiBa-NL <piba.nl@gmail.com> wrote:


Hi haproxy list,

For testing purposes I am trying to 'modify' a response of a webserver,
but am only having limited success. Is this supposed to work?
As a more useful goal than the current LAL to TST replacement, I imagine
rewriting absolute links on a webpage could be possible, which is
sometimes problematic with 'dumb' web applications..

Or is it outside of the current scope of implemented functionality? If
so, is it on the 'lua todo list'?

I tried for example a configuration like below, and get several
different results in the browser.
- Sometimes I get 4 times TSTA
- Sometimes I see after the 8th TSTA: Connection: keep-alive << this
happens most of the time..
- Sometimes I get 9 times TSTA + STOP << this would be the desired
outcome (only seen very few times..)

Probably due to the response-buffer being filled differently due to
'timing'..

The "connection: keep-alive" text is probably from the actual server
reply which is 'appended' behind the response generated by my lua
script.?. However shouldn't the .done() prevent that from being send to
the client?

I've tried putting a loop into the lua script to call res:get() multiple
times, but that didn't seem to work..

Also, to properly modify a page I would need to know all changes before
sending the headers with a changed content-length back to the client..

Can someone confirm this is or isn't (reliably) possible? Or how this
can be scripted in lua differently?


Hello,

Your script replaces 3 bytes with 3 bytes, so this must work with HTTP,
but if your replacement changes the length of the response, you can have
some difficulties with clients, or with keepalive.
Yes, I started with replacing the same number of bytes to avoid some
of the possible troubles caused by changing the length.. And as seen in
the haproxy.cfg, it is configured with 'mode http'.


The res:get() returns the current content of the response buffer.
Maybe it does not contain the full response. You must execute a loop with
regular "core.yield()" to give the hand back to HAProxy and wait for new
Calling yield does allow waiting for more data to come in.. No
guarantee that it only takes one yield for the data to 'grow'..
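Such a loop might look like the sketch below; the header parsing and length arithmetic are my own assumptions about how to decide when the buffer is complete, not a confirmed API contract:

```lua
-- Hypothetical sketch: keep yielding until the response buffer holds the
-- headers plus the full body announced by Content-Length.
core.register_action("wait-full-response", { "http-res" }, function(txn)
    local buf = txn.res:get()
    local hdr_end = string.find(buf, "\r\n\r\n", 1, true)
    local expected = tonumber(string.match(buf, "[Cc]ontent%-[Ll]ength: (%d+)"))
    if hdr_end == nil or expected == nil then
        return  -- chunked or headerless response: give up
    end
    -- headers occupy bytes 1..hdr_end+3; the body adds 'expected' bytes
    while string.len(buf) < hdr_end + 3 + expected do
        core.yield()         -- hand control back to HAProxy
        buf = txn.res:get()  -- re-read; the buffer may have grown
    end
end)
```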


[info] 278/055943 (77431) : luahttpresponse Content-Length XYZ: 14115
[info] 278/055943 (77431) : luahttpresponse SIZE: 2477
[info] 278/055943 (77431) : luahttpresponse LOOP
[info] 278/055943 (77431) : luahttpresponse SIZE: 6221
[info] 278/055943 (77431) : luahttpresponse LOOP
[info] 278/055943 (77431) : luahttpresponse SIZE: 7469
[info] 278/055943 (77431) : luahttpresponse LOOP
[info] 278/055943 (77431) : luahttpresponse SIZE: 7469
[info] 278/055943 (77431) : luahttpresponse LOOP
[info] 278/055943 (77431) : luahttpresponse SIZE: 7469
[info] 278/055943 (77431) : luahttpresponse LOOP
[info] 278/055943 (77431) : luahttpresponse SIZE: 7469
[info] 278/055943 (77431) : luahttpresponse LOOP
[info] 278/055943 (77431) : luahttpresponse SIZE: 7469
[info] 278/055943 (77431) : luahttpresponse LOOP
[info] 278/055943 (77431) : luahttpresponse SIZE: 7469
[info] 278/055943 (77431) : luahttpresponse LOOP
[info] 278/055943 (77431) : luahttpresponse SIZE: 7469
[info] 278/055943 (77431) : luahttpresponse LOOP
[info] 278/055943 (77431) : luahttpresponse SIZE: 8717
[info] 278/055943 (77431) : luahttpresponse LOOP
[info] 278/055943 (77431) : luahttpresponse SIZE: 14337
[info] 278/055943 (77431) : luahttpresponse DONE?: 14337


data. When all the data are read, res:get() returns an error.
Not sure when/how this error would happen? The result of res:get() only
seems to get bigger while the webserver is sending the response..


The res:send() is dangerous because it sends data directly to the client
without the rest of the haproxy analysis. Maybe it is the cause of your
problem.

Try to use res:set().

Ok tried that, new try with function below.


The difficulty is that another "res:get()" returns the same data that
you put.

I don't known if you can modify an http response greater than one
buffer.
Would be nice if that was somehow possible. But my current lua script 
cannot..


The function res:close() closes the connection even if HAProxy wants to
keep the connection alive. I suggest that you don't use this function.

It seems txn.res:close() does not exist? txn:done()


I reproduce the error message using curl. By default curl tries
to transfer data with keepalive, and it is not happy if all the
announced data is not transferred.

Connection: keep-alive curl: (18) transfer closed with outstanding
read data remaining

It seems that I reproduce a bug. I'm looking into it.
Ok, if you can create a patch, let me know. Happy to test if it solves
some of the issues I see.


Thierry


This function seems to work for responses up to +-15KB.
Sometimes the number of loops it runs is different, and it seems kinda 
i

Re: core dump, lua service, 1.6-dev6 ss-20150930

2015-10-11 Thread PiBa-NL

Hi All,

On 7-10-2015 at 0:31, PiBa-NL wrote:

Hi Thierry,
On 6-10-2015 at 9:47, Thierry FOURNIER wrote:

On Mon, 5 Oct 2015 21:04:08 +0200
PiBa-NL <piba.nl@gmail.com> wrote:


Hi Thierry,

Hi Pieter,



With or without "option http-server-close" does not seem to make any
difference.


Sure, it is only an answer to the Cyril keep alive problem. I encounter
again the keepalive problem :(

The HAProxy applets (services) can't directly use keepalive. The
service sends its response with an "internal" Connection: close. If you
activate the debug, you will see the header "connection: close".

You must configure HAProxy to use keepalive between the frontend and
the client.
Ok, well without further specific configuration it is keeping
connections alive, but as that is the default that's ok.


Adding an empty backend does seem to resolve the problem, stats also
show

the backend handling connections and tracking its 2xx http result
session totals when configured like this.:

frontend http_frt
mode http
bind :801
http-request use-service lua.hello-world
default_backend http-lua-service
backend http-lua-service
mode http


I can't reproduce the problem with the last dev version. But, I
recognize the backtrace; I already encountered the same. I believe that
was fixed in dev6 :(

Using dev7 I can still reproduce it..

I try to bench with my http injector, and I try with ab with and
without keep alive. I try also to stress the admin page, and I can't
reproduce the problem.

Argh, I see a major difference: you use FreeBSD. I don't have the
environment for testing it. I must install a VM.



On 5-10-2015 at 16:06, Thierry FOURNIER wrote:

Hi,

I will process this email later. While waiting, I propose that you set
"option http-server-close". Actually, the "services" don't support
keepalive themselves, but HAProxy does this job.

The "option http-server-close" expectes a server-close from the 
service

stream. The front of HAProxy maintains the keep-alive between the
client and the haproxy.

This method embeds a limitation: if some servers are declared in the
backend, "option http-server-close" forbids the keepalive between
haproxy and the server.

Can you test with this option ?

Thierry



On Thu, 1 Oct 2015 23:00:45 +0200
Cyril Bonté <cyril.bo...@free.fr> wrote:


Hi,

On 01/10/2015 at 20:52, PiBa-NL wrote:

Hi List,

With the config below, while running 'siege' I get a core dump within a
few hundred requests.. Viewing the stats page from a chrome browser
while siege is running seems to crash it sooner..

Is below enough to find the cause? Anything else I should try?
This is embarrassing because with your configuration, I currently 
can't

reproduce a segfault but I can reproduce another issue with HTTP
keep-alive requests !

(details below)


Using the haproxy snapshot from: 1.6-dev6 ss-20150930
Or perhaps I just did compile it wrong?..
make NO_CHECKSUM=yes clean debug=1 reinstall WITH_DEBUG=yes

global
   stats socket /tmp/hap.socket level admin
   maxconn 6
   lua-load /haproxy/brute/hello.lua

defaults
   timeout client 1
   timeout connect 1
   timeout server 1

frontend HAProxyLocalStats
   bind :2300 name localstats
   mode http
   stats enable
   stats refresh 1000
   stats admin if TRUE
   stats uri /
frontend http_frt
 bind :801
 mode http
 http-request use-service lua.hello-world

Here, if I use :
$ ab -n100 -c1 -k http://127.0.0.1:801/
There will be a 1ms delay after the first request.

Or with another test case :
echo -ne "GET / HTTP/1.1\r\nHost: localhost\r\n\r\nGET /
HTTP/1.1\r\nHost: localhost\r\n\r\n"| nc localhost 801
HTTP/1.1 200 OK
content-type: text/plain
Transfer-encoding: chunked

d
Hello World !
0

=> only 1 response

Now, if I change "frontend http_frt" to "listen http_frt", I get the
awaited behaviour.

The second test case with "listen" :
echo -ne "GET / HTTP/1.1\r\nHost: localhost\r\n\r\nGET /
HTTP/1.1\r\nHost: localhost\r\n\r\n"| nc localhost 801
HTTP/1.1 200 OK
content-type: text/plain
Transfer-encoding: chunked

d
Hello World !
0

HTTP/1.1 200 OK
content-type: text/plain
Transfer-encoding: chunked

d
Hello World !
0

=> the 2 responses are returned


core.register_service("hello-world", "http", function(applet)
  local response = "Hello World !"
  applet:set_status(200)
  applet:add_header("content-type", "text/plain")
  applet:start_response()
  applet:send(response)
end )

(gdb) bt full
#0  0x000801a2da75 in memcpy () from /lib/libc.so.7
No symbol table info available.
#1  0x00417388 in buffer_slow_realign (buf=0x7d3c90) at
src/buffer.c:166
   block1 = -3306
   block2 = 0
#2  0x00480c42 in http_wait_for_request (s=0x80247d

Re: core dump, lua service, 1.6-dev6 ss-20150930

2015-10-12 Thread PiBa-NL

Hi Willy,

On 12-10-2015 at 7:28, Willy Tarreau wrote:

Hi Pieter,

On Mon, Oct 12, 2015 at 01:22:48AM +0200, PiBa-NL wrote:

#1  0x00417388 in buffer_slow_realign (buf=0x7d3c90) at
src/buffer.c:166
   block1 = -3306
   block2 = 0

I'm puzzled by this above, no block should have a negative size.


#2  0x00480c42 in http_wait_for_request (s=0x80247d600,
req=0x80247d610, an_bit=4)
   at src/proto_http.c:2686
   cur_idx = -6336
   sess = (struct session *) 0x80241e400
   txn = (struct http_txn *) 0x802bb2140
   msg = (struct http_msg *) 0x802bb21a0
   ctx = {line = 0x2711079 , idx = 3, val = 0, vlen = 7, tws = 0, del = 33, prev = 0}

And this above, similarly cur_idx shouldn't be negative.


Seems that buffer_slow_realign() isn't used regularly during normal
haproxy operation, and it crashes the first time that specific function
gets called.
Reproduction is pretty consistent with chrome browser refreshing stats
every second.
Then starting: wrk -c 200 -t 2 -d 10 http://127.0.0.1:801/
I tried adding some Alert(); items in the code to see what parameters
are set at what step, but am not understanding the exact internals of
that code..

This negative bh=-7800 is not supposed to be there, I think? It is from
one of the dprintf statements; how are those supposed to generate output?..
[891069718] http_wait_for_request: stream=0x80247d600 b=0x80247d610,
exp(r,w)=0,0 bf=00c08200 bh=-7800 analysers=34

Anything else i can check or provide to help get this fixed?

Best regards,
PiBa-NL

Just a little 'bump' to this issue..

Anyone know when/how this buffer_slow_realign() is suppose to work?

Yes, it's supposed to be used only when a request or response is wrapped
in the request or response buffer. It uses memcpy(), hence the "slow"
aspect of the realign.


I suspect it either contains a bug, or is called with bogus parameters..

It's very sensitive to the consistency of the buffer being realigned. So
errors such as buf->i + buf->o > buf->size, or buf->p > buf->data + buf->size,
or buf->p < buf->data etc... can lead to crashes. But these must never happen
at all otherwise it proves that there's a bug somewhere else.

Here since block1 is -3306 and block2 = 0, I suspect that they were assigned
at line 159 from buf->i, which definitely means that the buffer was already
corrupted.


How can we/i determine which it is?

The difficulty consists in finding what can lead to a corrupted buffer :-/
In the past we had such issues when trying to forward more data than was
available in the buffer, due to option send-name-header. I wouldn't be
surprized that it can happen here on corner cases when building a message
from Lua if the various message pointers are not all perfectly correct.


Even though with a small change in the config (adding a backend) I can't
reproduce it, that doesn't mean there isn't a problem with the function..
As the whole function doesn't seem to get called in that circumstance..

It could be related to an uninitialized variable somewhere as well. You
can try to start haproxy with "-dM" to see if it makes the issues 100%

-dM doesn't seem to make much difference, if any, in this case..

reproducible or not. This poisons all buffers (fills them with a constant
byte 0x50 after malloc) so that we don't rely on an uninitialized zero byte
somewhere.

Regards,
Willy

Been running some more tests with the information that req->buf->i 
should be >= 0.


What I find is that after one request I already see rqh=-103; it seems
like the initial request size, which in this case is 103 bytes, is
subtracted twice? It does not immediately crash, but if this is already
a sign of 'corruption' then the cause should be a little easier to find..


@Willy can you confirm this indicates the problem is already heading
toward a crash? Even though in the last line it restores to 0..


See the full output below; I replaced the DPRINTF statements already in
the code of stream.c with Alert..


Thanks in advance,
PiBa-NL

root@OPNsense:/usr/ports/net/haproxy-devel # haproxy -f 
/var/haproxy.config -d

[ALERT] 277/063055 (61489) : SSLv3 support requested but unavailable.
Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result FAILED
Total: 3 (2 usable), will use kqueue.
Using kqueue() as the polling mechanism.
:http_frt.accept(0006)=0008 from [127.0.0.1:62358]
[ALERT] 277/063058 (61489) : [910446771] process_stream:1655: 
task=0x80244b9b0 s=0x80247d600, sfl=0x0080, rq=0x80247d610, 
rp=0x80247d650, exp(r,w)=0,0 rqf=00d08000 rpf=8000 rqh=0 rqt=0 rph=0 
rpt=0 cs=7 ss=0, cet=0x0 set=0x0 retr=0
[ALERT] 277/063058 (61489) : [910446771] http_wait_for_request: 
stream=0x80247d600 b=0x80247d610, exp(r,w)=0,0 bf=00d08000 bh=0 analysers=34
[ALERT] 277/063058 (61489) : [910446772] process_stream:1655: 
task=0x80244b9

Re: core dump, lua service, 1.6-dev6 ss-20150930

2015-10-05 Thread PiBa-NL

Hi Thierry,

With or without "option http-server-close" does not seem to make any 
difference.


Adding an empty backend does seem to resolve the problem, stats also show
the backend handling connections and tracking its 2xx http result 
session totals when configured like this.:


frontend http_frt
  mode http
  bind :801
  http-request use-service lua.hello-world
  default_backend http-lua-service
backend http-lua-service
  mode http

On 5-10-2015 at 16:06, Thierry FOURNIER wrote:

Hi,

I will process this email later. While waiting, I propose that you set
"option http-server-close". Actually, the "services" don't support
keepalive themselves, but HAProxy does this job.

The "option http-server-close" expectes a server-close from the service
stream. The front of HAProxy maintains the keep-alive between the
client and the haproxy.

This method embeds a limitation: if some servers are declared in the
backend, "option http-server-close" forbids the keepalive between
haproxy and the server.

Can you test with this option ?

Thierry



On Thu, 1 Oct 2015 23:00:45 +0200
Cyril Bonté <cyril.bo...@free.fr> wrote:


Hi,

On 01/10/2015 at 20:52, PiBa-NL wrote:

Hi List,

With the config below, while running 'siege' I get a core dump within a
few hundred requests.. Viewing the stats page from a chrome browser
while siege is running seems to crash it sooner..

Is below enough to find the cause? Anything else I should try?

This is embarrassing because with your configuration, I currently can't
reproduce a segfault but I can reproduce another issue with HTTP
keep-alive requests !

(details below)


Using the haproxy snapshot from: 1.6-dev6 ss-20150930
Or perhaps I just did compile it wrong?..
make NO_CHECKSUM=yes clean debug=1 reinstall WITH_DEBUG=yes

global
  stats socket /tmp/hap.socket level admin
  maxconn 6
  lua-load /haproxy/brute/hello.lua

defaults
  timeout client 1
  timeout connect 1
  timeout server 1

frontend HAProxyLocalStats
  bind :2300 name localstats
  mode http
  stats enable
  stats refresh 1000
  stats admin if TRUE
  stats uri /
frontend http_frt
bind :801
mode http
http-request use-service lua.hello-world

Here, if I use :
$ ab -n100 -c1 -k http://127.0.0.1:801/
There will be a 1ms delay after the first request.

Or with another test case :
echo -ne "GET / HTTP/1.1\r\nHost: localhost\r\n\r\nGET /
HTTP/1.1\r\nHost: localhost\r\n\r\n"| nc localhost 801
HTTP/1.1 200 OK
content-type: text/plain
Transfer-encoding: chunked

d
Hello World !
0

=> only 1 response

Now, if I change "frontend http_frt" to "listen http_frt", I get the
awaited behaviour.

The second test case with "listen" :
echo -ne "GET / HTTP/1.1\r\nHost: localhost\r\n\r\nGET /
HTTP/1.1\r\nHost: localhost\r\n\r\n"| nc localhost 801
HTTP/1.1 200 OK
content-type: text/plain
Transfer-encoding: chunked

d
Hello World !
0

HTTP/1.1 200 OK
content-type: text/plain
Transfer-encoding: chunked

d
Hello World !
0

=> the 2 responses are returned


core.register_service("hello-world", "http", function(applet)
 local response = "Hello World !"
 applet:set_status(200)
 applet:add_header("content-type", "text/plain")
 applet:start_response()
 applet:send(response)
end )

(gdb) bt full
#0  0x000801a2da75 in memcpy () from /lib/libc.so.7
No symbol table info available.
#1  0x00417388 in buffer_slow_realign (buf=0x7d3c90) at
src/buffer.c:166
  block1 = -3306
  block2 = 0
#2  0x00480c42 in http_wait_for_request (s=0x80247d600,
req=0x80247d610, an_bit=4)
  at src/proto_http.c:2686
  cur_idx = -6336
  sess = (struct session *) 0x80241e400
  txn = (struct http_txn *) 0x802bb2140
  msg = (struct http_msg *) 0x802bb21a0
  ctx = {line = 0x2711079 , idx
= 3, val = 0, vlen = 7, tws = 0,
del = 33, prev = 0}
#3  0x004d55b1 in process_stream (t=0x80244b390) at
src/stream.c:1759
  max_loops = 199
  ana_list = 52
  ana_back = 52
  flags = 4227584
  srv = (struct server *) 0x0
  s = (struct stream *) 0x80247d600
  sess = (struct session *) 0x80241e400
  rqf_last = 8397312
  rpf_last = 2248179715
  rq_prod_last = 7
  rq_cons_last = 9
  rp_cons_last = 7
  rp_prod_last = 0
  req_ana_back = 8192
  req = (struct channel *) 0x80247d610
  res = (struct channel *) 0x80247d650
  si_f = (struct stream_interface *) 0x80247d7f8
  si_b = (struct stream_interface *) 0x80247d818
#4  0x0041fe78 in process_runnable_tasks () at src/task.c:238
  t = (struct task *) 0x80244b390
  max_processed = 0
#5  0x0040cc4e in run_poll_loop () at src/ha

Re: core dump, lua service, 1.6-dev6 ss-20150930

2015-10-12 Thread PiBa-NL

Hi Willy,

On 12-10-2015 at 23:06, Willy Tarreau wrote:

Hi Pieter,

On Mon, Oct 12, 2015 at 10:29:05PM +0200, PiBa-NL wrote:

Been running some more tests with the information that req->buf->i
should be >= 0.

What I find is that after one request I already see rqh=-103; it seems
like the initial request size, which in this case is 103 bytes, is
subtracted twice? It does not immediately crash, but if this is already
a sign of 'corruption' then the cause should be a little easier to
find..

Oh yes definitely, good catch!


@Willy can you confirm this indicates the problem could be in progress
of heading to a crash? Even though in the last line it restores to 0..

Absolutely. Everytime we're trying to track such a painful bug, I end
up looking for initial symptoms, like this one. In general the first
corruption is minor and undetected but once it's seeded its disease,
the problem will definitely occur.


See the full output below; I replaced the DPRINTF statements already in
the code of stream.c with Alert..

Thanks in advance,
PiBa-NL

root@OPNsense:/usr/ports/net/haproxy-devel # haproxy -f
/var/haproxy.config -d
[ALERT] 277/063055 (61489) : SSLv3 support requested but unavailable.
Available polling systems :
  kqueue : pref=300,  test result OK
poll : pref=200,  test result OK
  select : pref=150,  test result FAILED
Total: 3 (2 usable), will use kqueue.
Using kqueue() as the polling mechanism.
:http_frt.accept(0006)=0008 from [127.0.0.1:62358]
[ALERT] 277/063058 (61489) : [910446771] process_stream:1655:
task=0x80244b9b0 s=0x80247d600, sfl=0x0080, rq=0x80247d610,
rp=0x80247d650, exp(r,w)=0,0 rqf=00d08000 rpf=8000 rqh=0 rqt=0 rph=0
rpt=0 cs=7 ss=0, cet=0x0 set=0x0 retr=0
[ALERT] 277/063058 (61489) : [910446771] http_wait_for_request:
stream=0x80247d600 b=0x80247d610, exp(r,w)=0,0 bf=00d08000 bh=0
analysers=34
[ALERT] 277/063058 (61489) : [910446772] process_stream:1655:
task=0x80244b9b0 s=0x80247d600, sfl=0x0080, rq=0x80247d610,
rp=0x80247d650, exp(r,w)=0,0 rqf=0002 rpf=8000 rqh=103 rqt=0
rph=0 rpt=0 cs=7 ss=0, cet=0x0 set=0x0 retr=0
[ALERT] 277/063058 (61489) : [910446772] http_wait_for_request:
stream=0x80247d600 b=0x80247d610, exp(r,w)=0,0 bf=00808002 bh=103
analysers=34
:http_frt.clireq[0008:]: GET / HTTP/1.1
:http_frt.clihdr[0008:]: Host: 127.0.0.1:801
:http_frt.clihdr[0008:]: Accept: */*
:http_frt.clihdr[0008:]: User-Agent: fetch libfetch/2.0
:http_frt.clihdr[0008:]: Connection: close
[ALERT] 277/063058 (61489) : [910446772] process_switching_rules:
stream=0x80247d600 b=0x80247d610, exp(r,w)=0,0 bf=00808002 bh=103
analysers=00
[ALERT] 277/063058 (61489) : [910446772] sess_prepare_conn_req:
sess=0x80247d600 rq=0x80247d610, rp=0x80247d650, exp(r,w)=0,0
rqf=00808002 rpf=8000 rqh=0 rqt=103 rph=0 rpt=0 cs=7 ss=1
[ALERT] 277/063058 (61489) : [910446772] process_stream:1655:
task=0x80244b9b0 s=0x80247d600, sfl=0x048a, rq=0x80247d610,
rp=0x80247d650, exp(r,w)=910456772,0 rqf=00808200 rpf=8023 rqh=0
rqt=0 rph=97 rpt=0 cs=7 ss=7, cet=0x0 set=0x0 retr=0
:http_frt.srvrep[0008:]: HTTP/1.1 200 OK
:http_frt.srvhdr[0008:]: content-length: 13
:http_frt.srvhdr[0008:]: content-type: text/plain
:http_frt.srvhdr[0008:]: Connection: close
[ALERT] 277/063058 (61489) : [910446773] process_stream:1655:
task=0x80244b9b0 s=0x80247d600, sfl=0x048a, rq=0x80247d610,
rp=0x80247d650, exp(r,w)=910456773,0 rqf=00808200 rpf=8004a221 rqh=-103
rqt=0 rph=0 rpt=0 cs=7 ss=7, cet=0x0 set=0x0 retr=0

Yep so here rqh is in fact req->buf->i and as you noticed it's been
decremented a second time.

I'm seeing this which I find suspicious in hlua.c :

   5909
   5910  /* skip the requests bytes. */
   5911  bo_skip(si_oc(si), strm->txn->req.eoh + 2);

First I don't understand why "eoh+2", I suspect that it's for the CRLF
in which case it's wrong since it can be a lone LF. Second, I'm not
seeing sov being reset afterwards. Could you please just add this
after this line :

strm->txn->req.next -= strm->txn->req.sov;
strm->txn->req.sov = 0;

This did not seem to resolve the issue.


That's equivalent to what we're doing when dealing with a redirect (http.c:4258
if you're curious) since we also have to "eat" the request. There may be a few
other corner cases, the use-service mechanism is fairly new and puts its feet
in a place where things used to work just because they were trained to... But
it's a terribly powerful thing to have so we must fix it even if it needs a
few -stable cycles.

Thanks!
Willy


If you have any other idea where it might go wrong, please let me know :)
I'll try and dig a little further tomorrow evening.

Regards,
PiBa-NL



Re: core dump, lua service, 1.6-dev6 ss-20150930

2015-10-06 Thread PiBa-NL

Hi Thierry,
On 6-10-2015 at 9:47, Thierry FOURNIER wrote:

On Mon, 5 Oct 2015 21:04:08 +0200
PiBa-NL <piba.nl@gmail.com> wrote:


Hi Thierry,

Hi Pieter,



With or without "option http-server-close" does not seem to make any
difference.


Sure, it is only an answer to the Cyril keep alive problem. I encounter
again the keepalive problem :(

The HAProxy applets (services) can't directly use keepalive. The
service sends its response with an "internal" Connection: close. If you
activate the debug, you will see the header "connection: close".

You must configure HAProxy to use keepalive between the frontend and
the client.
Ok, well without further specific configuration it is keeping
connections alive, but as that is the default that's ok.



Adding an empty backend does seem to resolve the problem, stats also show
the backend handling connections and tracking its 2xx http result
session totals when configured like this.:

frontend http_frt
mode http
bind :801
http-request use-service lua.hello-world
default_backend http-lua-service
backend http-lua-service
mode http


I can't reproduce the problem with the last dev version. But, I
recognize the backtrace; I already encountered the same. I believe that
was fixed in dev6 :(

Using dev7 I can still reproduce it..

I try to bench with my http injector, and I try with ab with and
without keep alive. I try also to stress the admin page, and I can't
reproduce the problem.

Argh, I see a major difference: you use FreeBSD. I don't have the
environment for testing it. I must install a VM.



On 5-10-2015 at 16:06, Thierry FOURNIER wrote:

Hi,

I will process this email later. While waiting, I propose that you set
"option http-server-close". Actually, the "services" don't support
keepalive themselves, but HAProxy does this job.

The "option http-server-close" expectes a server-close from the service
stream. The front of HAProxy maintains the keep-alive between the
client and the haproxy.

This method embeds a limitation: if some servers are declared in the
backend, "option http-server-close" forbids the keepalive between
haproxy and the server.

Can you test with this option ?

Thierry



On Thu, 1 Oct 2015 23:00:45 +0200
Cyril Bonté <cyril.bo...@free.fr> wrote:


Hi,

On 01/10/2015 at 20:52, PiBa-NL wrote:

Hi List,

With the config below, while running 'siege' I get a core dump within a
few hundred requests.. Viewing the stats page from a chrome browser
while siege is running seems to crash it sooner..

Is below enough to find the cause? Anything else I should try?

This is embarrassing because with your configuration, I currently can't
reproduce a segfault but I can reproduce another issue with HTTP
keep-alive requests !

(details below)


Using the haproxy snapshot from: 1.6-dev6 ss-20150930
Or perhaps I just did compile it wrong?..
make NO_CHECKSUM=yes clean debug=1 reinstall WITH_DEBUG=yes

global
   stats socket /tmp/hap.socket level admin
   maxconn 6
   lua-load /haproxy/brute/hello.lua

defaults
   timeout client 1
   timeout connect 1
   timeout server 1

frontend HAProxyLocalStats
   bind :2300 name localstats
   mode http
   stats enable
   stats refresh 1000
   stats admin if TRUE
   stats uri /
frontend http_frt
 bind :801
 mode http
 http-request use-service lua.hello-world

Here, if I use :
$ ab -n100 -c1 -k http://127.0.0.1:801/
There will be a 1ms delay after the first request.

Or with another test case :
echo -ne "GET / HTTP/1.1\r\nHost: localhost\r\n\r\nGET /
HTTP/1.1\r\nHost: localhost\r\n\r\n"| nc localhost 801
HTTP/1.1 200 OK
content-type: text/plain
Transfer-encoding: chunked

d
Hello World !
0

=> only 1 response

Now, if I change "frontend http_frt" to "listen http_frt", I get the
awaited behaviour.

The second test case with "listen" :
echo -ne "GET / HTTP/1.1\r\nHost: localhost\r\n\r\nGET /
HTTP/1.1\r\nHost: localhost\r\n\r\n"| nc localhost 801
HTTP/1.1 200 OK
content-type: text/plain
Transfer-encoding: chunked

d
Hello World !
0

HTTP/1.1 200 OK
content-type: text/plain
Transfer-encoding: chunked

d
Hello World !
0

=> the 2 responses are returned


-- hello.lua: minimal HTTP applet returning a fixed plain-text body
core.register_service("hello-world", "http", function(applet)
  local response = "Hello World !"
  applet:set_status(200)                           -- response status line
  applet:add_header("content-type", "text/plain")  -- response header
  applet:start_response()                          -- send status + headers
  applet:send(response)                            -- send the body
end )

(gdb) bt full
#0  0x000801a2da75 in memcpy () from /lib/libc.so.7
No symbol table info available.
#1  0x00417388 in buffer_slow_realign (buf=0x7d3c90) at
src/buffer.c:166
   block1 = -3306
   block2 = 0
#2  0x00480c42 in http_wait_for_request (s=0x80247d600,
req=0x80247d610, an_bit=4)
   at src/pro

Re: core dump, lua service, 1.6-dev6 ss-20150930

2015-10-13 Thread PiBa-NL

Hi Willy, Thierry, others,

On 13-10-2015 at 18:29, Willy Tarreau wrote:

Hi again :-)

On Tue, Oct 13, 2015 at 06:10:33PM +0200, Willy Tarreau wrote:

I can't reproduce either unfortunately. I'm seeing some other minor
issues related to how the closed input is handled and showing that
pipelining doesn't work (only the first request is handled) but that's
all I'm seeing, I'm sorry.

I've tried injecting on stats in parallel to the other frontend, I've
tried with close and keep-alive etc... I tried to change the poller
just in case you would be facing a race condition, no way :-(

In general it's good to keep in mind that buffer_slow_realign() is
called to realign wrapped requests, so that normally means that
pipelining is needed. But even then for now I can't succeed.

As usual, sending an e-mail scares the bug and it starts to shake the
white flag :-)

So by configuring the buffer size to 1 and sending large 8kB requests,
I'm seeing random behaviour. First, most of the time I end up with a
stuck session which never ends (no expiration timer set). And from time
to time it may crash. This time it was not in buffer_slow_realign() but
in buffer_insert_line2(), though the problem is the same :

(gdb) up
#2  0x0046e094 in http_header_add_tail2 (msg=0x7ce628, hdr_idx=0x7ce5c8, 
text=0x53b339 "Connection: close", len=17) at src/proto_http.c:595
595 bytes = buffer_insert_line2(msg->chn->buf, msg->chn->buf->p + 
msg->eoh, text, len);

(gdb) p msg->eoh
$6 = 8057
(gdb) p *msg->chn->buf
$7 = {p = 0x7f8e7b44bf9e "3456789.123456789\n", 'P' ..., size = 10008, 
i = 0, o = 8058, data = 0x7f8e7b44a024 "GET /1234567"}

(gdb) p msg->chn->buf->p - msg->chn->buf->data
$8 = 8058

As one may notice, since p is already 8kB from the beginning of the buffer
(hence 2kB from the end), writing at p + eoh is definitely wrong. Here we're
having a problem that msg->eoh is wrong or buf->p is wrong.

My opinion here is that buf->p is the wrong one, since we're dealing with a
8kB request, so it should definitely have been realigned. Or maybe it was
stripped and removed from the request buffer with HTTP processing still
enabled.

All this part is still totally unclear to me I'm afraid. I suggest that we
don't rush too fast on lua services and try to fix that during the stable
cycle. I don't want to postpone the release any further for something that
was added very recently and that is not causing any regression to existing
configs.

Best regards,
willy

OK, got some good news here :) .. the 1.6.0 release no longer has the error
I encountered.


The commit below fixed the issue already.
--
CLEANUP: cli: ensure we can never double-free error messages
http://git.haproxy.org/?p=haproxy.git;a=commit;h=6457d0fac304b7bba3e8af13501bf5ecf82bfa67
--

I was still testing with 1.6-dev7; the fix above came the day after.
Probably you're testing with HEAD, which is why it doesn't happen for you.
Using snapshots or HEAD is not as easy as just following dev releases,
so I usually stick to those unless I have reason to believe a newer
version might fix it already. I should have tested again sooner, sorry..
(I actually did test the latest snapshot at the moment when I first reported
the issue..)


Anyway, we both burned a few more hours on this than was probably needed.

One more issue gone :)

Thanks for the support!

PiBa-NL



Re: haproxy management web service ?

2015-11-18 Thread PiBa-NL
Technically it's possible to bind the stats socket on a TCP port, IIRC; do
make sure to either bind it on 127.0.0.1 or firewall it properly.
I have no clue whether those admin tools can use a TCP connection to perform
their administration tasks..
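For reference, a sketch of what that could look like in the global section (address and port here are placeholders, not from this thread; the ipv4@ address prefix is supported by the stats socket directive in recent HAProxy versions):

```
global
    # expose the admin socket on loopback TCP instead of a unix socket;
    # keep it on 127.0.0.1 or firewall it, since "level admin" allows
    # changing the running configuration
    stats socket ipv4@127.0.0.1:9999 level admin
```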


On 18-11-2015 at 17:03, Pavlos Parissis wrote:


On 18/11/2015 04:41 PM, Ed Young wrote:

Pavlos,

I did mean a web service, but basically what I need is a programmatic
way to manage haproxy, and if it isn't a web service, I can always wrap
it with a web service.

My apologies for my python inexperience, so I need some clarification:
haproxyadmin is a python library that uses the stats socket provided by
haproxy, yes?

Yes.


haproxytool is a python tool that uses the haproxyadmin library, yes?

Yes.


haproxyadmin must be installed on the same server as haproxy, yes?

Yes.


haproxytool must be installed on the same server as haproxy, yes?

Yes.


Cheers,
Pavlos






haproxy resolvers "nameserver: can't connect socket" (on FreeBSD)

2015-09-06 Thread PiBa-NL

Hi guys,

Hoping someone can shed some light on what I might be doing wrong?
Or is there something in FreeBSD that might be causing the trouble with 
the new resolvers options?


Thanks in advance.
PiBa-NL

haproxy -f /var/haproxy.cfg -d
[ALERT] 248/222758 (22942) : SSLv3 support requested but unavailable.
Note: setting global.maxconn to 2000.
Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result FAILED
Total: 3 (2 usable), will use kqueue.
Using kqueue() as the polling mechanism.
[ALERT] 248/222808 (22942) : Starting [globalresolvers/googleA] 
nameserver: can't connect socket.



defaults
    mode http
    timeout connect 3
    timeout server 3
    timeout client 3

resolvers globalresolvers
    nameserver googleA 8.8.8.8:53
    resolve_retries 3
    timeout retry 1s
    hold valid 10s

listen www
    bind 0.0.0.0:80
    log global
    server googlesite www.google.com:80 check inter 1000 resolvers globalresolvers



# uname -a
FreeBSD OPNsense.localdomain 10.1-RELEASE-p18 FreeBSD 10.1-RELEASE-p18 
#0 71275cd(stable/15.7): Sun Aug 23 20:32:26 CEST 2015 
root@sensey64:/usr/obj/usr/src/sys/SMP  amd64


# haproxy -vv
[ALERT] 248/221747 (72984) : SSLv3 support requested but unavailable.
HA-Proxy version 1.6-dev4-b7ce424 2015/09/03
Copyright 2000-2015 Willy Tarreau <wi...@haproxy.org>

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -pipe -fstack-protector -fno-strict-aliasing 
-DFREEBSD_PORTS
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_LUA=1 
USE_STATIC_PCRE=1 USE_PCRE_JIT=1


Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")

Built with OpenSSL version : OpenSSL 1.0.2d 9 Jul 2015
Running on OpenSSL version : OpenSSL 1.0.2d 9 Jul 2015
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.37 2015-04-28
PCRE library supports JIT : yes
Built with Lua version : Lua 5.3.0
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY

Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.




Re: haproxy resolvers, DNS query not send / result NXDomain not expected

2015-09-07 Thread PiBa-NL

On 7-9-2015 at 23:06, Baptiste wrote:

On Mon, Sep 7, 2015 at 10:12 PM, PiBa-NL <piba.nl@gmail.com> wrote:

Hi Remi and Baptiste / haproxy users,

Thanks for the quick fix for the socket issues.

HAProxy now starts successfully and sends some DNS requests successfully.
However, the google backend server immediately goes down.
Not sure if it's more or less the same issue reported by Conrad. Tried his
fix, but that did not seem to solve the issue.

See below some tcpdump results with the original haproxy code + Remi's patch.

The googlesite server is marked down almost immediately after starting. It
does not seem to understand the 'NXDomain' reply?
The testsite2 server does not send DNS queries; should it not send a DNS
query every 10 seconds?

Or maybe I'm misinterpreting the 'hold valid' description?
Perhaps you guys could take another look?

Thanks in advance, best regards,
PiBa-NL

Same environment as before (p.s. if you want to test it yourself, it's quite
easy to install the OPNsense iso into a VirtualBox machine; that's how I'm
testing it).
# uname -a
FreeBSD OPNsense.localdomain 10.1-RELEASE-p18 FreeBSD 10.1-RELEASE-p18 #0
71275cd(stable/15.7): Sun Aug 23 20:32:26 CEST 2015
root@sensey64:/usr/obj/usr/src/sys/SMP  amd64
# haproxy -v
[ALERT] 249/200618 (55609) : SSLv3 support requested but unavailable.
HA-Proxy version 1.6-dev4-b7ce424 2015/09/03
Copyright 2000-2015 Willy Tarreau <wi...@haproxy.org>

global
    maxconn 100
defaults
    mode http
    timeout connect 3
    timeout server 3
    timeout client 3
resolvers globalresolvers
    nameserver googleA 8.8.8.8:53
    resolve_retries 3
    timeout retry 1s
    hold valid 10s
listen www
    bind 0.0.0.0:81
    log global
    server googlesite www.google.com:80 check inter 2000 resolvers globalresolvers
    server testsite2 nu.nl:80 check inter 2000 resolvers globalresolvers

19:42:53.843549 IP 192.168.0.112.44128 > 8.8.8.8.53: 46758+ ?
www.google.com. (32)
19:42:53.859410 IP 8.8.8.8.53 > 192.168.0.112.44128: 46758 1/0/0 
2a00:1450:4013:c01::93 (60)
19:42:53.859929 IP 192.168.0.112.42866 > 8.8.8.8.53: 57888+ A? nu.nl. (23)
19:42:53.877414 IP 8.8.8.8.53 > 192.168.0.112.42866: 57888 1/0/0 A
62.69.166.254 (39)
19:42:53.877693 IP 192.168.0.112.54655 > 8.8.8.8.53: 983+ ? nu.nl. (23)
19:42:53.894598 IP 8.8.8.8.53 > 192.168.0.112.54655: 983 0/1/0 (89)
19:42:55.907078 IP 192.168.0.112.53716 > 8.8.8.8.53: 21069+ ANY?
www.google.com:80. (35)
19:42:55.924236 IP 8.8.8.8.53 > 192.168.0.112.53716: 21069 NXDomain 0/1/0
(110)
19:42:59.923338 IP 192.168.0.112.53716 > 8.8.8.8.53: 52649+ ANY?
www.google.com:80. (35)
19:42:59.940424 IP 8.8.8.8.53 > 192.168.0.112.53716: 52649 NXDomain 0/1/0
(110)
19:43:03.937163 IP 192.168.0.112.53716 > 8.8.8.8.53: 5746+ ANY?
www.google.com:80. (35)
19:43:03.955002 IP 8.8.8.8.53 > 192.168.0.112.53716: 5746 NXDomain 0/1/0
(110)
19:43:07.957851 IP 192.168.0.112.53716 > 8.8.8.8.53: 32478+ ANY?
www.google.com:80. (35)
19:43:07.973450 IP 8.8.8.8.53 > 192.168.0.112.53716: 32478 NXDomain 0/1/0
(110)
19:43:11.977145 IP 192.168.0.112.53716 > 8.8.8.8.53: 48547+ ANY?
www.google.com:80. (35)
19:43:11.994878 IP 8.8.8.8.53 > 192.168.0.112.53716: 48547 NXDomain 0/1/0
(110)
19:43:16.013370 IP 192.168.0.112.53716 > 8.8.8.8.53: 24088+ ANY?
www.google.com:80. (35)
19:43:16.01 IP 8.8.8.8.53 > 192.168.0.112.53716: 24088 NXDomain 0/1/0
(110)
19:43:20.025739 IP 192.168.0.112.53716 > 8.8.8.8.53: 52900+ ANY?
www.google.com:80. (35)
19:43:20.041989 IP 8.8.8.8.53 > 192.168.0.112.53716: 52900 NXDomain 0/1/0
(110)
19:43:24.038682 IP 192.168.0.112.53716 > 8.8.8.8.53: 28729+ ANY?
www.google.com:80. (35)
19:43:24.055154 IP 8.8.8.8.53 > 192.168.0.112.53716: 28729 NXDomain 0/1/0
(110)
19:43:28.060200 IP 192.168.0.112.53716 > 8.8.8.8.53: 27289+ ANY?
www.google.com:80. (35)
19:43:28.076947 IP 8.8.8.8.53 > 192.168.0.112.53716: 27289 NXDomain 0/1/0
(110)
19:43:32.077052 IP 192.168.0.112.53716 > 8.8.8.8.53: 54796+ ANY?
www.google.com:80. (35)
19:43:32.092108 IP 8.8.8.8.53 > 192.168.0.112.53716: 54796 NXDomain 0/1/0
(110)
19:43:36.094322 IP 192.168.0.112.53716 > 8.8.8.8.53: 4256+ ANY?
www.google.com:80. (35)
19:43:36.111877 IP 8.8.8.8.53 > 192.168.0.112.53716: 4256 NXDomain 0/1/0
(110)
19:43:40.117106 IP 192.168.0.112.53716 > 8.8.8.8.53: 7297+ ANY?
www.google.com:80. (35)
19:43:40.132362 IP 8.8.8.8.53 > 192.168.0.112.53716: 7297 NXDomain 0/1/0
(110)
19:43:44.138071 IP 192.168.0.112.53716 > 8.8.8.8.53: 46840+ ANY?
www.google.com:80. (35)
19:43:44.154351 IP 8.8.8.8.53 > 192.168.0.112.53716: 46840 NXDomain 0/1/0
(110)
19:43:48.157131 IP 192.168.0.112.53716 > 8.8.8.8.53: 13717+ ANY?
www.google.com:80. (35)
19:43:48.173579 IP 8.8.8.8.53 > 192.168.0.112.53716: 13717 NXDomain 0/1/0
(110)
19:43:52.175307 IP 192.168.0.112.53716 > 8.8.8.8.5

Re: haproxy resolvers, DNS query not send / result NXDomain not expected

2015-09-08 Thread PiBa-NL

On 8-9-2015 at 17:39, Baptiste wrote:

Hi Piba,

Finally, Willy fixed it in a different (and smarter) way:
http://git.haproxy.org/?p=haproxy.git;a=commit;h=07101d5a162a125232d992648a8598bfdeee3f3f

Baptiste

Hi Baptiste,

Just compiled the latest snapshot + the list of patches from today, and now
it works well for me.


For some reason it failed to start one time while testing; I suspect due
to DNS not replying properly at that moment. That produced the following
message:
[ALERT] 250/181319 (92688) : parsing [/var/haproxy.cfg:27] : 'server
testsite2' : invalid address: 'nu.nl' in 'nu.nl:80'
Tried again a few seconds later, and it started without issue. I suppose
init-addr could have solved that, but I read it's on the todo list (at
least when dev3 was announced). Anyway, I could not reproduce the issue
after starting a tcpdump to see what was going on. Probably nothing.
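For what it's worth, the init-addr server keyword that later landed in HAProxy 1.7 addresses exactly this startup failure; a sketch of how it could be applied to the server line from the config above (illustrative only, since it did not exist in 1.6-dev):

```
listen www
    bind 0.0.0.0:81
    # try the last known address, then a libc lookup, and finally start
    # with no address at all instead of aborting when DNS is down at boot
    server testsite2 nu.nl:80 check resolvers globalresolvers init-addr last,libc,none
```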


p.s.
Don't forget to add the patch "get_addr_len(>addr)" Remi
created ;) it's not yet in today's list of DNS patches. But maybe I'm just
a bit too eager now :).


Keep up the good work!
Thanks.
PiBa-NL


