Re: HAproxy tproxy problem when try to make transparent proxy

2010-03-20 Thread Willy Tarreau
On Sat, Mar 20, 2010 at 02:23:29AM +0100, Daniele Genetti wrote:
 I verified the default gw and it seems correct.
 I also added the suggested rules, but nothing changed.
 The 503 Service Unavailable error still persists.
 
 So now I tried the following test.
 
 1) Without transparent proxy
 on HAPROXY_SERVER:
  netstat -ctnup | grep 192.168.1.20:80 (OK, established connection shown)
 on WEB_SERVER:
  netstat -ctnup | grep 192.168.1.21:80 (OK, established connection shown)
 
 2) With transparent proxy activated
 on HAPROXY_SERVER:
  netstat -ctnup | grep 192.168.1.20:80 (OK, established connection shown)
 on WEB_SERVER:
  netstat -ctnup | grep 192.168.1.21:80 (nothing shown)
 
 So there is probably a forwarding problem... am I right?

No, you're not watching the same connections. I'm assuming that 192.168.1.20
is your web server and 192.168.1.21 is your haproxy server. In transparent
mode, the web server will see the client's IP address as the source, not the
haproxy server. So you must use exactly the same grep on both sides.
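For example (192.168.1.2 below is just a hypothetical client address, replace
it with your real one):

  # run the very same command on the haproxy server and on the web server;
  # in transparent mode both should show the client's address as the source:
  netstat -ctnup | grep 192.168.1.2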

Also, be sure not to test from 127.0.0.1, otherwise it will not work. But
what I find strange in your case is that if the connection appears established
on the haproxy server, that means that everything is correct, including routing
of the return packets. Otherwise you would see a SYN_SENT state.

 Anyone maybe have an idea to resolve this issue?

Please simplify the test first. Disable health checks on the server. That
way we'll know that health checks are not seeing the server as down. Next
step is to ensure that you're sending the request from a machine that must
be routed back via the haproxy server, so it must not be on the same local
net as your web server. If you still don't see any progress, please take a
tcpdump capture on both sides (haproxy server and web server).
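For example (only a sketch; the server line and the interface name are
assumptions to adapt to your configuration):

  # in your backend, temporarily drop the "check" keyword:
  #   server web1 192.168.0.2:80        (instead of: server web1 192.168.0.2:80 check)
  # then capture the HTTP traffic on both machines:
  tcpdump -n -i eth0 port 80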

Regards,
Willy




Re: queued health checks?

2010-03-20 Thread Holger Just
Hi Greg,

On 2010-03-20 6:52 AM, Greg Gard wrote:
 i remember somewhere in the archives mention of a plan to make health
 checks get queued like any other request. did that happen in the 1.4.x
 branch with all the work on health checks? i searched the archives,
 but didn't turn up what i remembered. my use case is rails/mongrel
 with maxconn = 1, so i don't want health checks getting sent to a
 mongrel that might be serving a request, or more critically, having a
 request puke because haproxy sent a health check to the same server it
 just sent a client request to.

The haproxy + mongrel topic has been discussed several times in the past.

I think your health checks should not be a problem, provided you know and
configure your haproxy well. Yes, mongrels are only able to process one
request at a time, but they are also able to queue requests themselves
without puking on the second one. This results in the health checks being
queued together with the regular requests on the mongrels.

IMHO that should be no problem if you configure your check timeouts
accordingly, such that they can tolerate a rather long regular request
queued in front of the health check (depending on your actual setup).
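Something along these lines in the backend would express that (just a sketch;
names, ports and timeouts are made up, and "timeout check" needs a 1.4 version
that supports it):

  backend rails
      timeout check 30s   # generous enough to wait behind one long regular request
      server mongrel1 127.0.0.1:8000 maxconn 1 check inter 5s rise 2 fall 3
      server mongrel2 127.0.0.1:8001 maxconn 1 check inter 5s rise 2 fall 3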

I think you should always check all your mongrels, as they have the
tendency to just die if something goes wrong. If you only check one, you
might not detect some of the failed mongrels.

For health checks of Rails apps we use a simple dedicated controller
inside the Rails app (one that depends on all the initializers) which just
performs a SELECT true; against the database. This works really well for
us, although we do not use mongrels anymore but glassfish+jruby. As this
check is rather fast, it should not lead to major issues even on mongrels.
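In case it helps, this is roughly how such an endpoint can be wired in and
tested (the /health path and the port are made-up examples, not our real ones):

  # check the controller by hand first:
  curl -i http://127.0.0.1:8000/health
  # then point haproxy's HTTP health check at it in the backend:
  #   option httpchk GET /health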

--Holger



Re: HAproxy tproxy problem when try to make transparent proxy

2010-03-20 Thread L. Alberto Giménez
On 03/20/2010 08:27 PM, Daniele Genetti wrote:

 So there is something that doesn't allow communication in transparent
 mode...
 Where is the barrier? Hmm...

Hi,

Sorry to insist on this, but are you *completely* sure that your
routing is properly set up so that transparent mode can work? This kind of
error is almost always related to routing issues.

Please check the following (a few commands to verify each point are sketched
below):

* You have tproxy enabled in your kernel
* You have haproxy compiled with tproxy support
* Your backend servers must *not* be able to reach the clients directly
  (i.e., they have the haproxy box as their default gateway and *no other*
  gateways)
* The same goes for the clients (not mandatory, but if they can reach the
  servers directly, it may cause trouble)
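Something along these lines (interface names are assumptions; adapt them to
your setup):

  grep TPROXY /boot/config-$(uname -r)   # kernel built with TPROXY support?
  haproxy -vv | grep -i TPROXY           # haproxy built with USE_LINUX_TPROXY=1?
  # on each backend server the default route must point to the haproxy box,
  # and there must be no other route back to the clients:
  ip route show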


Best regards,
L. Alberto Giménez



Re: HAproxy tproxy problem when try to make transparent proxy

2010-03-20 Thread Daniele Genetti

Hello,

L. Alberto Giménez wrote:

Please check that:

* You have tproxy enabled in your kernel
* You have haproxy compiled with tproxy support
* Your backend servers must *not* be able to reach the clients directly
  (i.e., they have the haproxy box as their default gateway and *no other*
  gateways)
* The same goes for the clients (not mandatory, but if they can reach the
  servers directly, it may cause trouble)
Like I wrote before, I use Ubuntu Server 9.10, with kernel 2.6.31 and
iptables 1.4.4, so with built-in tproxy support (if I'm not wrong).

And I compiled haproxy by hand with the correct parameters, I think...

 lsmod
[...]
nf_tproxy_core          2428  1  xt_socket,[permanent]
[...]

 haproxy -vv
HA-Proxy version 1.4.2 2010/03/17
Copyright 2000-2010 Willy Tarreau w...@1wt.eu
Build options :
 TARGET  = linux26
 CPU = i686
 CC  = gcc
 CFLAGS  = -O2 -march=i686 -g
 OPTIONS = USE_LINUX_TPROXY=1 USE_STATIC_PCRE=1
[...]

The client can't see the backend server directly.
 ping -c 1 192.168.0.2
PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.
From 192.168.1.2 icmp_seq=1 Destination Host Unreachable
--- 192.168.0.2 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

The backend server can't see the clients directly.
 ping -c 1 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
*From 192.168.1.21 icmp_seq=1 Destination Host Unreachable* (not From
192.168.0.2 as expected)

--- 192.168.1.2 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

So, incredible... I found the trick... Alberto, you saved my mind... :-)
On the backend server I have a 2nd ethernet card configured with 192.168.1.21.
The cable is unplugged, but I forgot to disable the interface (what a fool I am...).
So every time, the backend tried to reach the client through that route.
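For anyone hitting the same thing, this is roughly how such a stray route can
be spotted and removed (eth1 is an assumed name for that second card):

  ip route get 192.168.1.2   # shows which interface the reply to the client leaves from
  ip link set eth1 down      # or: ifconfig eth1 down, until the card is configured properly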

Many times the errors are in the simplest things.

Thanks, thank you very much.. Really!

Daniele




Re: queued health checks?

2010-03-20 Thread Greg Gard
thanks holger,

i did some research and was able to find more on mongrel and queuing, so
that helps to clarify. i am unsure what i will do regarding checking in
the end, as we have some long-running requests that are frankly the bane
of my existence and complicate load balancing. we need to refactor as part
of the solution.

just to be complete, are there any plans to have health checks get queued?

On Sat, Mar 20, 2010 at 8:14 AM, Holger Just w...@meine-er.de wrote:
 Hi Greg,

 On 2010-03-20 6:52 AM, Greg Gard wrote:
 i remember somewhere in the archives mention of a plan to make health
 checks get queued like any other request. did that happen in the 1.4.x
 branch with all the work on health checks? i searched the archives,
 but didn't turn up what i remembered. my use case is rails/mongrel
 with maxconn = 1, so i don't want health checks getting sent to a
 mongrel that might be serving a request, or more critically, having a
 request puke because haproxy sent a health check to the same server it
 just sent a client request to.

 The haproxy + mongrel topic has been discussed several times in the past.

 I think your health checks should not be a problem, provided you know and
 configure your haproxy well. Yes, mongrels are only able to process one
 request at a time, but they are also able to queue requests themselves
 without puking on the second one. This results in the health checks being
 queued together with the regular requests on the mongrels.

 IMHO that should be no problem if you configure your check timeouts
 accordingly, such that they can tolerate a rather long regular request
 queued in front of the health check (depending on your actual setup).

 I think you should always check all your mongrels, as they have the
 tendency to just die if something goes wrong. If you only check one, you
 might not detect some of the failed mongrels.

 For health checks of Rails apps we use a simple dedicated controller
 inside the Rails app (one that depends on all the initializers) which just
 performs a SELECT true; against the database. This works really well for
 us, although we do not use mongrels anymore but glassfish+jruby. As this
 check is rather fast, it should not lead to major issues even on mongrels.

 --Holger





-- 
greg gard, psyd
www.carepaths.com