Trouble starting haproxy on Debian8 (systemd)

2015-11-16 Thread SL
Hi,

Today I cloned a server which is running haproxy 1.6 on a debian8 server
without problems.  However, for some reason the clone refuses to start
haproxy (IPs and hostname have been updated of course).

Probably I'm missing something simple, but I can't think what it is.  I
have very little experience with systemd, so am now at a loss as to what's
wrong. I'm hoping someone here might be able to suggest something...


Nov 16 15:25:50 x systemd[1]: Starting HAProxy Load Balancer...
Nov 16 15:25:50 x systemd[1]: haproxy.service: control process exited,
code=exited status=2
Nov 16 15:25:50 x systemd[1]: Failed to start HAProxy Load Balancer.
Nov 16 15:25:50 x systemd[1]: Unit haproxy.service entered failed state.
Nov 16 15:25:51 x sudo[3843]: pam_unix(sudo:session): session closed
for user root
Nov 16 15:25:51 x systemd[1]: haproxy.service holdoff time over,
scheduling restart.
Nov 16 15:25:51 x systemd[1]: Stopping HAProxy Load Balancer...
Nov 16 15:25:51 x systemd[1]: Starting HAProxy Load Balancer...
Nov 16 15:25:51 x systemd[1]: haproxy.service: control process exited,
code=exited status=2
Nov 16 15:25:51 x systemd[1]: Failed to start HAProxy Load Balancer.
Nov 16 15:25:51 x systemd[1]: Unit haproxy.service entered failed state.
Nov 16 15:25:51 x systemd[1]: haproxy.service holdoff time over,
scheduling restart.
Nov 16 15:25:51 x systemd[1]: Stopping HAProxy Load Balancer...
Nov 16 15:25:51 x systemd[1]: Starting HAProxy Load Balancer...
Nov 16 15:25:51 x systemd[1]: haproxy.service: control process exited,
code=exited status=2
Nov 16 15:25:51 x systemd[1]: Failed to start HAProxy Load Balancer.
Nov 16 15:25:51 x systemd[1]: Unit haproxy.service entered failed state.
Nov 16 15:25:51 x systemd[1]: haproxy.service holdoff time over,
scheduling restart.
Nov 16 15:25:51 x systemd[1]: Stopping HAProxy Load Balancer...
Nov 16 15:25:51 x systemd[1]: Starting HAProxy Load Balancer...
Nov 16 15:25:51 x systemd[1]: haproxy.service: control process exited,
code=exited status=2
Nov 16 15:25:51 x systemd[1]: Failed to start HAProxy Load Balancer.
Nov 16 15:25:51 x systemd[1]: Unit haproxy.service entered failed state.
Nov 16 15:25:51 x systemd[1]: haproxy.service holdoff time over,
scheduling restart.
Nov 16 15:25:51 x systemd[1]: Stopping HAProxy Load Balancer...
Nov 16 15:25:51 x systemd[1]: Starting HAProxy Load Balancer...
Nov 16 15:25:51 x systemd[1]: haproxy.service: control process exited,
code=exited status=2
Nov 16 15:25:51 x systemd[1]: Failed to start HAProxy Load Balancer.
Nov 16 15:25:51 x systemd[1]: Unit haproxy.service entered failed state.
Nov 16 15:25:52 x systemd[1]: haproxy.service holdoff time over,
scheduling restart.
Nov 16 15:25:52 x systemd[1]: Stopping HAProxy Load Balancer...
Nov 16 15:25:52 x systemd[1]: Starting HAProxy Load Balancer...
Nov 16 15:25:52 x systemd[1]: haproxy.service start request repeated
too quickly, refusing to start.
Nov 16 15:25:52 x systemd[1]: Failed to start HAProxy Load Balancer.
Nov 16 15:25:52 x systemd[1]: Unit haproxy.service entered failed state.
Nov 16 15:29:25 x systemd[1]: Starting Cleanup of Temporary
Directories...

If I try /etc/init.d/haproxy start, then I get:


[] Starting haproxy (via systemctl): haproxy.service
Job for haproxy.service failed. See 'systemctl status haproxy.service' and
'journalctl -xn' for details.

journalctl -xn returns 'No journal files were found.'

 'systemctl status haproxy.service' returns:

 haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/lib/systemd/system/haproxy.service; enabled)
   Active: failed (Result: start-limit) since Mon 2015-11-16 15:49:27 GMT;
1min 39s ago
 Docs: man:haproxy(1)
   file:/usr/share/doc/haproxy/configuration.txt.gz
  Process: 3751 ExecStartPre=/usr/local/sbin/haproxy -f ${CONFIG} -c -q
(code=exited, status=2)



Any ideas about where to go from here?

Thank you!


Re: appsession replacement in 1.6

2015-11-16 Thread Sylvain Faivre

Hi Aleks,

On 11/10/2015 10:56 PM, Aleksandar Lazic wrote:

Dear Sylvain Faivre,

On 10-11-2015 12:48, Sylvain Faivre wrote:

On 11/10/2015 12:00 AM, Aleksandar Lazic wrote:

Hi Sylvain Faivre.

On 09-11-2015 17:31, Sylvain Faivre wrote:


[snipp]


So, I've got this so far :

backend http

  stick-table type string len 24 size 10m expire 1h peers prod

  stick on urlp(JSESSIONID,;)
  stick on cookie(JSESSIONID)


Does this seem right ?
The help for "stick on" says it defines a request pattern, so I guess
this would not match a JSESSIONID cookie or URL parameter set in the
reply?


I have no Java server here to test these commands, but with them
haproxy does not warn you about any config errors ;-).

###
backend dest01
   mode http

   stick-table type string len 24 size 10m expire 1h peers prod

   stick on urlp(JSESSIONID,;)
   stick on cookie(JSESSIONID)

   stick store-response cookie(JSESSIONID)
#  stick store-response res.hdr(JSESSIONID,;)

   stick store-request cookie(JSESSIONID)
   stick store-request urlp(JSESSIONID,;)

   server srv_dest01 dest01.com:80
###

I have not seen a good option to read the JSESSIONID from the response
header in case it is not in a cookie.
Does anyone have an idea?!

Could you please post a full response header created by the app or
appserver when it has detected that the client does not allow cookies?


[snipp]


In fact, the server sends the JSESSIONID as a cookie even if the
client does not support cookies, *and* it adds the JSESSIONID as a URL
parameter in all links, so this should be all right.


It would be helpful to see the full response.
Maybe some appservers behave differently.


As far as I know, there is no way for the server to detect if the client 
has cookies enabled, by looking only at the first request from that client.


According to a quick Google search, the most common ways to detect 
cookies support are either to use Javascript (so client-side check) or 
to reply with a redirect response with the cookie set, then when 
processing the redirected URL, check if the client sent the cookie along 
with the request (so this case will be covered by the proposed HAproxy 
settings).


I don't feel comfortable giving our application server version on a 
public list, but I will send it to you in private.


Here are the headers from a client request and server reply, with a 
brand new profile on the client (with cookies disabled) :


- request :
GET /front/url1.do?m=booking=FR HTTP/1.1
Host: redacted.host.domain
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:42.0) 
Gecko/20100101 Firefox/42.0

Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive

- reply :
HTTP/1.1 200 OK
Date: Mon, 16 Nov 2015 15:25:41 GMT
Content-Type: text/html;charset=ISO-8859-15
Set-Cookie: JSESSIONID=uNmYNvgUME5-8LYPzimsCg__.8d15fc; Path=/front
Vary: Accept-Encoding
Content-Encoding: gzip
Transfer-Encoding: chunked

And here is a URL sample from the body of the reply. You will notice 
that the jsessionid is there twice, first after a semicolon and 
second after a question mark. I am not sure whether this comes from the 
application server or from our custom code.


<script src="https://redacted.host.domain/front/url2.do;jsessionid=uNmYNvgUME5-8LYPzimsCg__.8d15fc?jsessionid=uNmYNvgUME5-8LYPzimsCg__.8d15fc&langcode=FR"
language="JavaScript">






I just tried your config on a test server, and the session IDs are
correctly recorded in the table, whether the client accepts cookies or
not.


Perfect.


I still have some test cases to run, I will check this next week and
report back if needed.


Oh yes please tell us the results so that we can add this as migration
example for appsession.


OK, I will. This will not go into production yet, we still need to run 
it on a test environment for at least 3 weeks...


Best regards.
Sylvain



Re: Trouble starting haproxy on Debian8 (systemd)

Hmm, it seems to be working again now - not sure what has changed, but will
investigate.  Thanks for the response, and sorry for wasting your time.


On 16 November 2015 at 17:02, Marco Corte  wrote:

> Hi!
>
> Does haproxy start manually? Is it only a systemd issue?
>
>
> On 16/11/2015 16:51, SL wrote:
>
>> systemctl status haproxy.service
>>
>
> systemctl status haproxy.service -l
>
>
>
> .marcoc
>
>


Re: Trouble starting haproxy on Debian8 (systemd)


Hi!

Does haproxy start manually? Is it only a systemd issue?


On 16/11/2015 16:51, SL wrote:

systemctl status haproxy.service


systemctl status haproxy.service -l



.marcoc



Re: [LUA] statistics aggregation

On Mon, Nov 16, 2015 at 1:25 PM, Thierry FOURNIER
 wrote:
> Hi list,
>
> Here are the first useful Lua scripts:
>
>http://www.arpalert.org/haproxy-scripts.html
>
> One of these allows aggregating the statistics of many HAProxy
> instances. It permits redistributing the data in CSV format via HTTP,
> or reusing the aggregated values through sample fetches.
>
> Thierry
>

Great, amazing 

Baptiste



simply copy mapped value into acl

Hi,
I'm trying to figure out the best way to match a source IP against an IP
mapping file and make decisions based on that. What I'm doing now is this:

acl acl_is_xx src,map_ip() -m str xx
acl acl_is_yy src,map_ip() -m str yy

http-request set-header X-Test wasxx if acl_is_xx ...
http-request set-header X-Test wasyy if acl_is_yy ...

While this works, my problem is that it requires two map look-ups. What
I would really like to do is this (pseudo code):

acl acl_value src,map_ip() -m copy
http-request set-header X-Test wasxx if acl_value==xx
http-request set-header X-Test wasyy if acl_value==yy

That way only one look-up in the map would be needed, and the
different cases could then be determined with simple string matches.

As far as I can tell, though, ACLs only allow matching, not a
straightforward copy like I tried to express with the "-m copy" above.

Is there an alternative way to express something like this?

Regards,
  Dennis
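[Editor's note: one way to avoid the double look-up in HAProxy 1.6 is to store the map result in a transaction variable once, then match the variable with string ACLs. This is a sketch only; the map file path, frontend name, and variable name are illustrative, not from the question:]

```
frontend fe_main
    # One map look-up, stored for the rest of the transaction
    http-request set-var(txn.mapped) src,map_ip(/etc/haproxy/ips.map)

    # String-match the stored value as often as needed
    acl acl_is_xx var(txn.mapped) -m str xx
    acl acl_is_yy var(txn.mapped) -m str yy

    http-request set-header X-Test wasxx if acl_is_xx
    http-request set-header X-Test wasyy if acl_is_yy
```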




[LUA] statistics aggregation

Hi list,

Here are the first useful Lua scripts:

   http://www.arpalert.org/haproxy-scripts.html

One of these allows aggregating the statistics of many HAProxy
instances. It permits redistributing the data in CSV format via HTTP,
or reusing the aggregated values through sample fetches.

Thierry



Re: [PATCH] MEDIUM: mailer: try sending a mail up to 3 times

Hi Pieter,

On Mon, Nov 16, 2015 at 07:14:33PM +0100, PiBa-NL wrote:
> >>But i think thats not so much a problem, it does still make
> >>me wonder a little what happens if a packet is lost in the middle of a
> >>tcp connection, will it resend like a normal tcp connection? Its
> >>difficult to test though..
> >Haproxy doesn't affect how TCP works. We never see packets, the TCP stack
> >guarantees a reliable connection below us.
> But without the patch only 1 SYN packet is sent; shouldn't the normal
> TCP stack always send 3 SYN packets when a connection is not getting
> established?

Absolutely, so my guess is that it's purely a timeout issue, probably that
the default mailer timeout is too short to allow a retransmit to be sent.
The default SYN retransmit to an unknown target is 3s usually, maybe we're
running with a 3s (or lower) timeout here ? I don't know how they're set,
I haven't looked at this part recently I must confess.

> >>If you can apply the v2 patch i think that solves most issues that one
> >>or two lost packets can cause in any case.
> >Now I'm having doubts, because I think your motivation to develop this
> >patch was more related to some fears of dropped *packets* (that are
> >already properly handled below us) than any real world issue.
> My initial motivation was to get 3 SYN packets, but if the TCP
> connection is handled through the normal stack I don't understand why
> only 1 is being sent without the patch, and because only 1 SYN packet is
> being sent I am not sure other ACK / PSH-ACK packets would be retried
> by the normal stack either (I have not yet tested whether that is the case).

I agree with you since we don't know the timeout value nor what it applies
to (connection or anything). Thus I think that we should first find and
change that value, and maybe after that take your patch to permit real
retries in case of connection failures.

Thanks,
Willy




Re: [PATCH] MEDIUM: mailer: try sending a mail up to 3 times


Hi Willy,
Op 16-11-2015 om 7:20 schreef Willy Tarreau:

Hi Pieter,

On Mon, Nov 16, 2015 at 12:13:50AM +0100, PiBa-NL wrote:

-but check->conn->flags & 0xFF is a bit of a guess from observing the
flags when it could connect but the server did not respond
properly. Is there another, better way?

This one is ugly. First, you should never use the numeric values
when there are defines or enums because if these flags happen to
change or move to something else, your value will not be spotted
and will not be converted.

Agreed it was ugly, but I could not find the enum-based equivalent for
that value. Now I think it's only checking 1 bit of it, but that seems
to work alright too.

You could have ORed all the respective flags but even so it didn't
really make sense to have all of them.


Thus I'm attaching two proposals that I'd like you to test, the
first one verifies if the connection was established or not. The
second one checks if we've reached the last rule (implying all
of them were executed).

From my tests both work as you describe.
v1 retries the connection part; v2 also retries if the mail sending
did not complete normally.
I think v2 would be the preferred solution.

OK fine, thanks for the test.


Though looking through my tcpdumps again, I do see it tries to connect
with 3 different client ports; that's not how a normal TCP socket would
retry, right?

Do not confuse haproxy and the TCP stack, that's important. Dropped
*packets* are dealt with by the TCP stack, which retransmits them over
the same connection, thus the same ports. When haproxy retries, it does
not manipulate packets; it retries failed connections, i.e. the ones that
TCP failed to fix (eg: multiple dropped packets at the TCP level causing
a connection to fail for whatever reason, such as a blocked source port
or a dead link requiring a new connection to be attempted via a different
path).


But I think that's not so much of a problem; it does still make
me wonder a little what happens if a packet is lost in the middle of a
TCP connection, will it resend like a normal TCP connection? It's
difficult to test though.

Haproxy doesn't affect how TCP works. We never see packets, the TCP stack
guarantees a reliable connection below us.
But without the patch only 1 SYN packet is sent; shouldn't the normal
TCP stack always send 3 SYN packets when a connection is not getting
established?



If you can apply the v2 patch i think that solves most issues that one
or two lost packets can cause in any case.

Now I'm having doubts, because I think your motivation to develop this
patch was more related to some fears of dropped *packets* (that are
already properly handled below us) than any real world issue.
My initial motivation was to get 3 SYN packets, but if the TCP
connection is handled through the normal stack I don't understand why
only 1 is being sent without the patch, and because only 1 SYN packet is
being sent I am not sure other ACK / PSH-ACK packets would be retried
by the normal stack either (I have not yet tested whether that is the case).


Could you please confirm exactly what case you wanted to cover here ?

Thanks,
Willy


Regards
PiBa-NL



Re: Trouble starting haproxy on Debian8 (systemd)

 ❦ 16 November 2015 16:51 +0100, SL  :

> Process: 3751 ExecStartPre=/usr/local/sbin/haproxy -f ${CONFIG} -c -q
> (code=exited, status=2)

Execute this command manually. This is the one checking if the
configuration is correct.
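Concretely, something like this (a sketch; ${CONFIG} is not expanded in
the log, so the config path below is an assumption — on Debian it is
usually /etc/haproxy/haproxy.cfg):

```shell
# Re-run the ExecStartPre check by hand; haproxy's -c flag parses the
# configuration and exits 0 if it is valid, non-zero on a fatal error,
# printing the error to stderr (drop the -q from the unit file to see
# warnings too).
check_config() {
    "$1" -f "$2" -c
}
# Example invocation (binary path taken from the status output above,
# config path assumed):
# check_config /usr/local/sbin/haproxy /etc/haproxy/haproxy.cfg
```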
-- 
Every cloud engenders not a storm.
-- William Shakespeare, "Henry VI"



Re: appsession replacement in 1.6


Hi Sylvain.

On 16-11-2015 17:06, Sylvain Faivre wrote:

Hi Aleks,

On 11/10/2015 10:56 PM, Aleksandar Lazic wrote:

Dear Sylvain Faivre,


[snipp]


It would be helpful to see the full response.
Maybe some appservers behave differently.


As far as I know, there is no way for the server to detect if the
client has cookies enabled, by looking only at the first request from
that client.

According to a quick Google search, the most common ways to detect
cookies support are either to use Javascript (so client-side check) or
to reply with a redirect response with the cookie set, then when
processing the redirected URL, check if the client sent the cookie
along with the request (so this case will be covered by the proposed
HAproxy settings).


Yes. That's also my experience.


I don't feel comfortable giving our application server version on a
public list, but I will send it to you in private.


thanks.


Here are the headers from a client request and server reply, with a
brand new profile on the client (with cookies disabled) :

- request :
GET /front/url1.do?m=booking=FR HTTP/1.1
Host: redacted.host.domain
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:42.0)
Gecko/20100101 Firefox/42.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive

- reply :
HTTP/1.1 200 OK
Date: Mon, 16 Nov 2015 15:25:41 GMT
Content-Type: text/html;charset=ISO-8859-15
Set-Cookie: JSESSIONID=uNmYNvgUME5-8LYPzimsCg__.8d15fc; Path=/front
Vary: Accept-Encoding
Content-Encoding: gzip
Transfer-Encoding: chunked

And here is a URL sample from the body of the reply. You will notice
that the jsessionid is there twice, first after a semicolon and
second after a question mark. I am not sure whether this comes from the
application server or from our custom code.

<script src="https://redacted.host.domain/front/url2.do;jsessionid=uNmYNvgUME5-8LYPzimsCg__.8d15fc?jsessionid=uNmYNvgUME5-8LYPzimsCg__.8d15fc&langcode=FR"
language="JavaScript">


thanks.

As described here

http://git.haproxy.org/?p=haproxy-1.6.git;a=blob;f=doc/configuration.txt;h=45d1aacfbe0d2d53193f7956a0dd03e5f8151ea6;hb=HEAD#l5043

option http-buffer-request

maybe you should stick on the header ;-)
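
Applied to the backend sketched earlier in the thread, that suggestion
might look like this (untested; whether buffering and the res.cook fetch
fit depends on where the appserver puts the ID — check the 1.6 docs for
the exact fetch names):

```
backend dest01
    mode http
    # Wait for the whole request before evaluating rules (see doc link above)
    option http-buffer-request

    stick-table type string len 24 size 10m expire 1h peers prod

    stick on cookie(JSESSIONID)
    stick on urlp(JSESSIONID,;)
    # Learn the ID from the Set-Cookie header in the response
    stick store-response res.cook(JSESSIONID)

    server srv_dest01 dest01.com:80
```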

[snipp]


Oh yes please tell us the results so that we can add this as migration
example for appsession.


OK, I will. This will not go into production yet, we still need to run
it on a test environment for at least 3 weeks...


Thanks.

Best regards
Aleks



Re: HAProxy and backend on the same box

Hello All,

After having a look at iptables, I was able to solve this issue.

I added the following line to iptables:

iptables -t mangle -A OUTPUT -s 192.168.20.10 -p tcp  -j DIVERT

thanks much,

Regards,
-Abdul Jaleel

On Mon, Nov 16, 2015 at 3:31 PM, jaleel  wrote:

> Hello All,
>
> Need help regarding the iptables
>
> For the packet coming from network, I set the iptables as following
>
> iptables -t mangle -N DIVERT
> iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
> iptables -t mangle -A DIVERT -j MARK --set-mark 1
> iptables -t mangle -A DIVERT -j ACCEPT
>
> ip rule add fwmark 1 lookup 100
> ip route add local 0.0.0.0/0 dev lo table 100
>
> For the packet generated locally, I think I need to set the mangle table
> in the OUTPUT chain so that HAProxy will capture locally generated packets
> as well.
>
> How do I create the OUTPUT chain mangle table?
>
> Regards,
> -Abdul jaleel K
>
> On Fri, Nov 13, 2015 at 1:12 PM, Aleksandar Lazic 
> wrote:
>
>> Hi.
>>
>> But do you really think this is a haproxy Problem?
>>
>> On 13-11-2015 08:38, Aleksandar Lazic wrote:
>>
>>> On 13-11-2015 06:14, jaleel wrote:
>>>
 It works if HAProxy and the backend are in different boxes, but when both
 are in the same box it doesn't work

>>>
>>> Maybe because the iptables rules are different for 'localhost' than
>>> for external traffic.
>>>
>>> Please take a look at the picture
>>>
>>>
>>> https://ixquick-proxy.com/do/spg/show_picture.pl?l=english=1=http%3A%2F%2Ferlerobotics.gitbooks.io%2Ferle-robotics-introduction-to-linux-networking%2Fcontent%2Fsecurity%2Fimg9%2Fiptables.gif=5ac7f7d4aa8327c04f456b9db2362108
>>>
>>
>> or this one
>>
>> http://inai.de/images/nf-packet-flow.png
>>
>> from this site
>>
>>
>> http://serverfault.com/questions/345111/iptables-target-to-route-packet-to-specific-interface
>>
>>
>> and the document for this Picture.
>>>
>>>
>>> https://erlerobotics.gitbooks.io/erle-robotics-introduction-to-linux-networking/content/security/introduction_to_iptables.html
>>>
>>> I think you should add some lines into the postrouting table
>>>
>>> BR Aleks
>>>
>>> On Fri, Nov 13, 2015 at 1:56 AM, Igor Cicimov
  wrote:

 On 13/11/2015 1:04 AM, "jaleel"  wrote:
>
>>
>> Hello,
>>
>> I am trying to setup the following for deployment
>>
>> I have 2 servers.
>> server1: eth0:10.200.2.211 (255.255.252.0)
>> eth1: 192.168.10.10 (255.255.255.0)
>> server2: eth0: 10.200.2.242 (255.255.252.0)
>> eth1: 192.168.20.10 (255.255.255.0)
>>
>> VRRP between server1 and server2 eth0. VRIP is 10.200.3.84
>>
>>
>> my haproxy config:
>> --
>> listen  ingress_traffic 10.200.3.84:7000 [1]
>> mode tcp
>> source 0.0.0.0 usesrc clientip
>> balance roundrobin
>> server server1 192.168.10.10:9001 [2]
>> server server2 192.168.20.10:9001 [3]
>>
>> Iptables:
>> ---
>> iptables -t mangle -N DIVERT
>> iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
>> iptables -t mangle -A DIVERT -j MARK --set-mark 1
>> iptables -t mangle -A DIVERT -j ACCEPT
>>
>> ip rule add fwmark 1 lookup 100
>> ip route add local 0.0.0.0/0 [4] dev lo table 100
>>
>>
>> Now 10.200.2.211 is the master and owns VRIP 10.200.3.84
>>
>> When traffic comes to 10.200.3.84:7000 [1], the routing to server2
>>
> is successful and end-to-end communication is fine. But the response
> from server1 (192.168.10.10:9001 [2]) is not reaching HAProxy.
>
>>
>> I cannot have 3rd box for HAProxy alone.
>>
>> Any suggestions
>>
>> Thank you
>> -Abdul Jaleel
>>
>>
>> The backends need to have haproxy set as gateway.
>



 Links:
 --
 [1] http://10.200.3.84:7000
 [2] http://192.168.10.10:9001
 [3] http://192.168.20.10:9001
 [4] http://0.0.0.0/0

>>>
>


Re: HAProxy and backend on the same box

Hello All,

Need help regarding the iptables

For the packet coming from network, I set the iptables as following

iptables -t mangle -N DIVERT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT

ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

For the packet generated locally, I think I need to set the mangle table in
the OUTPUT chain so that HAProxy will capture locally generated packets as well.

How do I create the OUTPUT chain mangle table?

Regards,
-Abdul jaleel K

On Fri, Nov 13, 2015 at 1:12 PM, Aleksandar Lazic 
wrote:

> Hi.
>
> But do you really think this is a haproxy Problem?
>
> On 13-11-2015 08:38, Aleksandar Lazic wrote:
>
>> On 13-11-2015 06:14, jaleel wrote:
>>
>>> It works if HAProxy and the backend are in different boxes, but when both
>>> are in the same box it doesn't work
>>>
>>
>> Maybe because the iptables rules are different for 'localhost' than
>> for external traffic.
>>
>> Please take a look at the picture
>>
>>
>> https://ixquick-proxy.com/do/spg/show_picture.pl?l=english=1=http%3A%2F%2Ferlerobotics.gitbooks.io%2Ferle-robotics-introduction-to-linux-networking%2Fcontent%2Fsecurity%2Fimg9%2Fiptables.gif=5ac7f7d4aa8327c04f456b9db2362108
>>
>
> or this one
>
> http://inai.de/images/nf-packet-flow.png
>
> from this site
>
>
> http://serverfault.com/questions/345111/iptables-target-to-route-packet-to-specific-interface
>
>
> and the document for this Picture.
>>
>>
>> https://erlerobotics.gitbooks.io/erle-robotics-introduction-to-linux-networking/content/security/introduction_to_iptables.html
>>
>> I think you should add some lines into the postrouting table
>>
>> BR Aleks
>>
>> On Fri, Nov 13, 2015 at 1:56 AM, Igor Cicimov
>>>  wrote:
>>>
>>> On 13/11/2015 1:04 AM, "jaleel"  wrote:

>
> Hello,
>
> I am trying to setup the following for deployment
>
> I have 2 servers.
> server1: eth0:10.200.2.211 (255.255.252.0)
> eth1: 192.168.10.10 (255.255.255.0)
> server2: eth0: 10.200.2.242 (255.255.252.0)
> eth1: 192.168.20.10 (255.255.255.0)
>
> VRRP between server1 and server2 eth0. VRIP is 10.200.3.84
>
>
> my haproxy config:
> --
> listen  ingress_traffic 10.200.3.84:7000 [1]
> mode tcp
> source 0.0.0.0 usesrc clientip
> balance roundrobin
> server server1 192.168.10.10:9001 [2]
> server server2 192.168.20.10:9001 [3]
>
> Iptables:
> ---
> iptables -t mangle -N DIVERT
> iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
> iptables -t mangle -A DIVERT -j MARK --set-mark 1
> iptables -t mangle -A DIVERT -j ACCEPT
>
> ip rule add fwmark 1 lookup 100
> ip route add local 0.0.0.0/0 [4] dev lo table 100
>
>
> Now 10.200.2.211 is the master and owns VRIP 10.200.3.84
>
> When traffic comes to 10.200.3.84:7000 [1], the routing to server2
>
 is successful and end-to-end communication is fine. But the response
 from server1 (192.168.10.10:9001 [2]) is not reaching HAProxy.

>
> I cannot have 3rd box for HAProxy alone.
>
> Any suggestions
>
> Thank you
> -Abdul Jaleel
>
>
> The backends need to have haproxy set as gateway.

>>>
>>>
>>>
>>> Links:
>>> --
>>> [1] http://10.200.3.84:7000
>>> [2] http://192.168.10.10:9001
>>> [3] http://192.168.20.10:9001
>>> [4] http://0.0.0.0/0
>>>
>>