Re: HAproxy FreeBSD no Logging?

2010-03-31 Thread Joe P.H. Chiang
Hi,
Yes, I'm using the net/haproxy port.
Yes, the log file exists.
I've tried your setup, and it's still not logging.

I'm using haproxy 1.4.2; I wonder if that has anything to do with the
logging. I'm going to downgrade to 1.3.x and see if that makes any difference.

On Wed, Mar 31, 2010 at 10:53 PM, Ross West  wrote:

>
> JPHC> I'm having trouble logging my haproxy on FreeBSD 7.2, HA-Proxy version 1.4.2
> JPHC> 2010/03/17
>
> Are you using the net/haproxy port?
>
> Make sure the log files exist and/or use the "-C" option (create
> non-existent log files) for syslogd.
>
> Here's an example that works on my test system:
>
> -= /etc/rc.conf
> syslogd_enable="YES"
> syslogd_flags="-b localhost -C"
> -=
>
> -= /usr/local/etc/haproxy.conf
> global
>daemon  # set to daemonize
>log 127.0.0.1:514 local1 debug  # syslog logging
> -=
>
> -= /etc/syslog.conf
> local1.*    /var/log/haproxy.log
> -=
>
> Doing a /usr/local/etc/rc.d/haproxy reload generates a bunch of log
> entries nicely for each config section.  You might want to turn down
> debug mode though.  :-)
>
>
> R.


-- 
Thanks,
Joe


Re: Haproxy monitoring with munin

2010-03-31 Thread Willy Tarreau
On Wed, Mar 31, 2010 at 07:43:26AM -0700, Hank A. Paulson wrote:
> On 1/16/10 5:46 PM, Bart van der Schans wrote:
> >Hi,
> >
> >A few days ago there was some interest in munin plugins for haproxy.
> >I have written a few plugins in Perl. The code is fairly straightforward
> >and should be quite easy to adjust to your needs. The four attached
> >plugins are:
> >
> >- haproxy_check_duration: monitor the duration of the health checks per 
> >server
> >- haproxy_errors: monitor the rate of 5xx response headers per backend
> >- haproxy_sessions: monitors the rate of (tcp) sessions per backend
> >- haproxy_volume: monitors the bps in and out per backend
> >
> >To use them you'll have to add something like the following to your
> >munin-node config:
> >
> >[haproxy*]
> >user haproxy
> >env.socket /var/run/haproxy.sock
> >
> >The user should have rights to read and write to the unix socket and
> >env.socket should point to the haproxy stats socket.
> >
> >For debugging, the "dump" command line option can be quite useful. It
> >prints the complete %hastats data structure containing all the info
> >read from the socket with "show stat".  I can set up some
> >sourceforge/github thingie which will make it easier to share
> >patches/updates/additions/etc. if people are interested.
> >
> >Regards,
> >Bart
> 
> I noticed the 1.4.4 version of Munin complains:
> Service 'haproxy_errors' exited with status 1/0.
> 
> The normal (non-error) exit paths seem to require exit 0 not exit 1
 
Also, be careful when monitoring 5xx responses. 501 and 505
can be triggered by the client, so they are false positives.

Willy




Re: Passing XMPP/Jabber through haproxy?

2010-03-31 Thread Willy Tarreau
On Wed, Mar 31, 2010 at 06:33:33PM -0400, Morgan Aldridge wrote:
> On Tue, Mar 2, 2010 at 2:25 PM, Morgan Aldridge
>  wrote:
> >
> > I'm running haproxy 1.3.15.7 under OpenBSD 4.6 macppc and am having
> > issues passing XMPP/Jabber through haproxy to Mac OS X 10.5 Server's
> > jabberd. My /etc/haproxy/haproxy.cfg contains the following:
> >
> >    listen xmpp_proxy
> >        bind *:5222
> >        bind *:5223
> >        bind *:5269
> >        mode tcp
> >        balance roundrobin
> >        server server_xmpp 10.0.1.3 check
> >
> > Client connections disconnect after a minute or two, whether there's
> > an active discussion or not, but there don't seem to be any other
> > negative issues (e.g. s2s works correctly and such). I have tried both
> > "mode tcp" and "mode http".
> >
> > The only reference I've seen to people load-balancing XMPP/Jabber is
> >  and
> > nothing's jumping out at me regarding that configuration and mine.
> 
> I've tried adding 'option tcpka', to no avail. Obviously, increasing
> 'srvtimeout' (which is set to '5' in my defaults) helps, but
> shouldn't 'option tcpka' prevent that need? Or, is there a value I
> should set 'srvtimeout' to which will be higher than the number of
> milliseconds between keep-alive packets?

No, "option tcpka" only enables TCP-level keep-alives. The application
layer cannot be aware of that. It just ensures that all components
along the whole chain see traffic and don't close the connection (eg:
firewalls). By default, TCP keepalives are quite rare, generally one
every 2 hours.

I'm sorry, but I know absolutely nothing about your protocol, so it's
hard to help with your specific case. What I can suggest, however, is
to ensure that your "clitimeout" and "srvtimeout" are equal, because
when either of the two strikes, the connection will be terminated.
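Purely as an illustration (the 300000 ms figure is hypothetical, not a value
taken from this thread), keeping both timeouts equal in the listen section
quoted above could look like this:

    listen xmpp_proxy
        bind *:5222
        bind *:5223
        bind *:5269
        mode tcp
        balance roundrobin
        option tcpka
        clitimeout 300000   # 5 minutes of client-side inactivity tolerated
        srvtimeout 300000   # kept equal to clitimeout, as discussed above
        server server_xmpp 10.0.1.3 check

Whatever value is chosen simply needs to exceed the longest silence expected
between XMPP packets in either direction.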

Some application-level protocols provide keep-alives (e.g. some remote
desktop protocols, SSH, etc.). I don't know if your protocol supports
that. If it does, it would probably be a good idea to enable them,
because the issue you're having with haproxy timing out will be true
for any other component along the chain between the client and the
server. For instance, some HTTP proxies reduce their timeouts under
high loads, and clients passing through them using a CONNECT method
may experience frequent disconnections. I've already seen sub-second
timeouts after one exchange in each direction (agreed, that was a bit
extreme and HTTPS did not work very well)!

Just a few hints...
Willy




Re: http-server-close problem (also with Tomcat backend)

2010-03-31 Thread Willy Tarreau
Hi Oscar,

On Wed, Mar 31, 2010 at 07:15:34PM +0200, Óscar Frías Barranco wrote:
> I have been looking at the Tomcat source code and it looks like there
> is an easy fix.

That's good news !

> Here is the class where the logic is implemented:
> http://svn.apache.org/repos/asf/tomcat/trunk/java/org/apache/coyote/http11/Http11Processor.java
> 
> And this is the patch that I have generated:
> 
> Index: java/org/apache/coyote/http11/Http11Processor.java
> ===
> --- java/org/apache/coyote/http11/Http11Processor.javaTue Mar 09
> 18:09:50 CET 2010
> +++ java/org/apache/coyote/http11/Http11Processor.javaTue Mar 09
> 18:09:50 CET 2010
> @@ -1547,7 +1547,7 @@
>  (outputFilters[Constants.IDENTITY_FILTER]);
>  contentDelimitation = true;
>  } else {
> -if (entityBody && http11 && keepAlive) {
> +if (entityBody && http11) {
>  outputBuffer.addActiveFilter
>  (outputFilters[Constants.CHUNKED_FILTER]);
>  contentDelimitation = true;

Indeed, even though I generally don't read Java code, this one
looks obviously right :-)

> I would like to send this to the Tomcat mailing list, but if we want this
> change to be implemented I think we must explain what the benefits are of
> using chunked encoding even when not using keep-alive.
> Willy, could you help me with this ?

Yes, I will try :-)

Chunked transfer-encoding is an alternative to content-length, for
use when the content-length cannot initially be determined. While
it is mandatory to have either of them to support keep-alive, its
use is not restricted to keep-alive and one useful immediate benefit
of using it is to allow a client to distinguish a connection abort
from a complete response in order to avoid storing truncated data.

Another useful case is when a reverse-proxy is installed in front
of the server, and this reverse proxy tries to maintain keep-alive
connections with the clients and intends to close the connections
with the servers (like apache 1.3, haproxy, and I think nginx).
The lack of content-length and chunked encoding prevents the proxy
from keeping client connections alive. The "connection: close"
sent by the proxy to the server only indicates that the proxy will
send just one request to the server, not that it does not care
about the response length. The same is true when that proxy caches.
Without a content-length or chunked encoding, the cache could
store and distribute truncated responses believing they are complete,
while this would not happen with chunked encoding because the cache
would be able to know it has not seen the end of the response.
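To make that concrete, here is a purely illustrative exchange (not taken from
any real server in this thread): a complete chunked response always ends with
a zero-sized chunk, so a client or cache that never receives it knows the
transfer was cut short rather than complete.

    HTTP/1.1 200 OK
    Transfer-Encoding: chunked
    Connection: close

    1a
    abcdefghijklmnopqrstuvwxyz
    0

With neither a Content-Length nor chunked encoding, the same response
truncated by a dropped connection would be indistinguishable from a
complete one.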

Hoping this helps !
Anyway, if Cyril's workaround works, I'll merge it because Tomcat
is probably not the only component affected by that. That way we'd
be able to enable it when needed.

Willy




Re: HAProxy for persistent TCP connections

2010-03-31 Thread Geoffrey Mina
Great.  I guess I wasn't clear on what those actually did.  I will try
increasing those numbers and see if things get better.

Thanks!
Geoff

On Wed, Mar 31, 2010 at 6:03 PM, Cyril Bonté  wrote:

> Hi,
>
> On Wednesday, March 31, 2010 at 23:33:05, Geoffrey Mina wrote:
> > Willy,
> > I know you said that HAProxy would work just fine with persistent TCP
> > connections, unfortunately I am not seeing that behavior. We are
> > establishing a socket connection and sending application level "heart
> beat"
> > messages every 60 seconds.
>
> Isn't it every 30 seconds, with a "BEAT" message coming from the backend ?
>
> > I am seeing that HAProxy is shutting down my
> > connection after a period of time.  Attached is a pcap file of the
> > shutdown... from the HAProxy server.  I have also included my
> configuration
> > below.
>
> If it's 30 seconds, then "srvtimeout  3" is not a good value, as
> HAProxy can close the connection nearly at the same time the packet is
> received.
> Can you try with a value a little bigger than the heartbeat interval ?
> And if it's 60 seconds, this timeout is definitely too low.
>
> --
> Cyril Bonté
>


Re: Passing XMPP/Jabber through haproxy?

2010-03-31 Thread Morgan Aldridge
On Tue, Mar 2, 2010 at 2:25 PM, Morgan Aldridge
 wrote:
>
> I'm running haproxy 1.3.15.7 under OpenBSD 4.6 macppc and am having
> issues passing XMPP/Jabber through haproxy to Mac OS X 10.5 Server's
> jabberd. My /etc/haproxy/haproxy.cfg contains the following:
>
>    listen xmpp_proxy
>        bind *:5222
>        bind *:5223
>        bind *:5269
>        mode tcp
>        balance roundrobin
>        server server_xmpp 10.0.1.3 check
>
> Client connections disconnect after a minute or two, whether there's
> an active discussion or not, but there don't seem to be any other
> negative issues (e.g. s2s works correctly and such). I have tried both
> "mode tcp" and "mode http".
>
> The only reference I've seen to people load-balancing XMPP/Jabber is
>  and
> nothing's jumping out at me regarding that configuration and mine.

I've tried adding 'option tcpka', to no avail. Obviously, increasing
'srvtimeout' (which is set to '5' in my defaults) helps, but
shouldn't 'option tcpka' prevent that need? Or, is there a value I
should set 'srvtimeout' to which will be higher than the number of
milliseconds between keep-alive packets?

Morgan
---
http://www.makkintosshu.com/



Re: HAProxy for persistent TCP connections

2010-03-31 Thread Cyril Bonté
Hi,

On Wednesday, March 31, 2010 at 23:33:05, Geoffrey Mina wrote:
> Willy,
> I know you said that HAProxy would work just fine with persistent TCP
> connections, unfortunately I am not seeing that behavior. We are
> establishing a socket connection and sending application level "heart beat"
> messages every 60 seconds.

Isn't it every 30 seconds, with a "BEAT" message coming from the backend ?

> I am seeing that HAProxy is shutting down my
> connection after a period of time.  Attached is a pcap file of the
> shutdown... from the HAProxy server.  I have also included my configuration
> below.

If it's 30 seconds, then "srvtimeout  3" is not a good value, as HAProxy 
can close the connection nearly at the same time the packet is received.
Can you try with a value a little bigger than the heartbeat interval ?
And if it's 60 seconds, this timeout is definitely too low.
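Purely as an illustration (the numbers below are hypothetical, not taken from
the attached configuration), "a little bigger than the heartbeat interval"
would mean something like:

    defaults
        mode        http
        contimeout  4000
        clitimeout  70000   # comfortably above a 60-second application heartbeat
        srvtimeout  70000   # keep both sides consistent

If the heartbeat really is every 30 seconds, anything above 30000 ms plus
some margin would do.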

-- 
Cyril Bonté



Re: HAProxy for persistent TCP connections

2010-03-31 Thread Geoffrey Mina
Willy,
I know you said that HAProxy would work just fine with persistent TCP
connections; unfortunately, I am not seeing that behavior. We are
establishing a socket connection and sending application-level "heart beat"
messages every 60 seconds.  I am seeing that HAProxy is shutting down my
connection after a period of time.  Attached is a pcap file of the
shutdown... from the HAProxy server.  I have also included my configuration
below.

Any ideas on what's up here?


global
    maxconn 4096    # Total Max Connections. This is dependent on ulimit
    daemon
    nbproc 4        # Number of processing cores. A dual dual-core Opteron is 4 cores, for example.
    log 127.0.0.1 local0 debug

defaults
    mode        http
    clitimeout  15
    srvtimeout  3
    contimeout  4000
    log global
    #option tcplog
    #option httplog
    #option httpclose   # Disable Keepalive


listen services X.X.X.131:1312
    mode tcp
    balance roundrobin  # Load Balancing algorithm
    option tcpka
    retries 3
    ## Define your servers to balance
    server rs-webserver1 X.X.X.217:1312
    server rs-webserver2 X.X.X.216:1312
    server rs-webserver3 X.X.X.136:1312
    server rs-webserver4 X.X.X.220:1312
    server rs-webserver5 X.X.X.126:1312


listen stats :8080
    mode http
    stats uri /



Thanks!
Geoff

On Tue, Mar 9, 2010 at 6:03 PM, Willy Tarreau  wrote:

> On Tue, Mar 09, 2010 at 05:58:07PM -0500, Geoffrey Mina wrote:
> > Great.  None of this should be an issue.  My application sends its
> > own keepalive/heartbeat packets every 30-60 seconds.  So, it sounds
> > like the timeout will only kick in if there is no activity on the
> > socket, correct?  If that's the case, then I'll probably have fairly
> > short timeout settings, to ensure we don't have a bunch of garbage
> > connections up.
>
> yes indeed that's better that way. And it's nice to see that some
> people still think about implementing application level keep-alives !
>
> Willy
>
>


test.pcap
Description: Binary data


Re: Slow loading

2010-03-31 Thread Amanda Machutta
Thanks to everyone for their help in getting this up and running. After
replacing the Windows firewall, all is working beautifully. Great product!

-- Amanda

On Wed, Mar 31, 2010 at 2:49 AM, Willy Tarreau  wrote:

> On Wed, Mar 31, 2010 at 02:17:37AM -0400, Geoffrey Mina wrote:
> > There was nothing between the two but a switch... although, disabling the
> > Windows firewall on the IIS server seems to have fixed the problem!  I
> don't
> > have much experience with the built in windows firewall... but apparently
> > it's not happy about something.
>
> well then either the windows firewall is terribly buggy or the switch
> is having fun with the TTL (layer3 switch maybe ?), because it is not
> normal to have the TTL decrease by one if nothing sits between the two
> machines.
>
> > I think we'll switch over to a third party firewall application.
>
> That's a safer bet :-)
>
> > Thanks for the help!  You guys rock.
>
> You're welcome!
> Willy
>
>


-- 
   ´¨)   __o
 .·´  .·´¨)¸.·´¨)  _'\< .
(¸.·´ (¸.·´ (¸.·´¨¨  Amanda ¨¨( * )  (   )


Re: http-server-close problem (also with Tomcat backend)

2010-03-31 Thread Óscar Frías Barranco
> > In any case, if you consider that this Tomcat behavior is buggy we could
> > report the issue to Tomcat team and maybe they can fix it.
>
> If we're certain that it's just "Connection: close" which automatically
> disables use of chunked encoding, then yes it's a buggy behaviour and maybe
> the developers may be interested in fixing it. Sending a "connection:
> close"
> header is a valid choice for a client, it just indicates to the server that
> it does not intent to send anything else on the same connection after the
> first request. It is perfectly valid and should not disable use of any form
> of encoding in the response.
>


I have been looking at the Tomcat source code and it looks like there
is an easy fix.

Here is the class where the logic is implemented:
http://svn.apache.org/repos/asf/tomcat/trunk/java/org/apache/coyote/http11/Http11Processor.java

And this is the patch that I have generated:

Index: java/org/apache/coyote/http11/Http11Processor.java
===
--- java/org/apache/coyote/http11/Http11Processor.javaTue Mar 09
18:09:50 CET 2010
+++ java/org/apache/coyote/http11/Http11Processor.javaTue Mar 09
18:09:50 CET 2010
@@ -1547,7 +1547,7 @@
 (outputFilters[Constants.IDENTITY_FILTER]);
 contentDelimitation = true;
 } else {
-if (entityBody && http11 && keepAlive) {
+if (entityBody && http11) {
 outputBuffer.addActiveFilter
 (outputFilters[Constants.CHUNKED_FILTER]);
 contentDelimitation = true;



I would like to send this to the Tomcat mailing list, but if we want this
change to be implemented I think we must explain what the benefits are of
using chunked encoding even when not using keep-alive.
Willy, could you help me with this ?

Thank you,
Oscar


RE: Love Haproxy

2010-03-31 Thread James Harris
Well, they do, and it has been a fun project to work with. I was wondering if
there is a contact I could reach out to for some real tuning questions?

Thanks,
James 

-Original Message-
From: Willy Tarreau [mailto:w...@1wt.eu] 
Sent: Tuesday, March 30, 2010 10:03 PM
To: James Harris
Cc: haproxy@formilux.org
Subject: Re: Love Haproxy

Hello James,

On Tue, Mar 30, 2010 at 11:04:13AM -0700, James Harris wrote:
> Hello,
> 
> I just wanted to take this time to thank haproxy for the solid work. I have
> enjoyed the solution and followed it for some time. I am currently at
> kontera.com and we have been using it in our primary topology for the last
> two years. I love the project and think you guys rock.

Such encouraging messages are always appreciated. In turn, I'd say
that I'm happy that we now have a real (though small) community
working on the project and spending time helping users and proposing
nice improvements to the project. It is their continued work in very
different environments which makes the component really solid. So they
all deserve your thanks :-)

Best regards,
Willy




Re: HAproxy FreeBSD no Logging?

2010-03-31 Thread Ross West

JPHC> I'm having trouble logging my haproxy on FreeBSD 7.2, HA-Proxy version 1.4.2
JPHC> 2010/03/17

Are you using the net/haproxy port?

Make sure the log files exist and/or use the "-C" option (create
non-existent log files) for syslogd.

Here's an example that works on my test system:

-= /etc/rc.conf
syslogd_enable="YES"
syslogd_flags="-b localhost -C"
-=

-= /usr/local/etc/haproxy.conf
global
daemon  # set to daemonize
log 127.0.0.1:514 local1 debug  # syslog logging
-=

-= /etc/syslog.conf
local1.*    /var/log/haproxy.log
-=

Doing a /usr/local/etc/rc.d/haproxy reload generates a bunch of log
entries nicely for each config section.  You might want to turn down
debug mode though.  :-)
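If entries still don't show up, one way to isolate syslogd from haproxy
(assuming the stock FreeBSD logger(1) utility; this check is not from the
original message) is to inject a test message at the same facility over UDP
and look at the target file:

    # hypothetical check, run on the haproxy host itself
    logger -h 127.0.0.1 -p local1.debug "haproxy syslog test"
    tail /var/log/haproxy.log

If the test line appears but haproxy's messages never do, the problem is on
the haproxy side (the "log" line or a missed reload); if it doesn't appear,
syslogd isn't accepting UDP messages as configured.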


R.










Re: Haproxy monitoring with munin

2010-03-31 Thread Hank A. Paulson

On 1/16/10 5:46 PM, Bart van der Schans wrote:

Hi,

A few days ago there was some interest in munin plugins for haproxy.
I have written a few plugins in Perl. The code is fairly straightforward
and should be quite easy to adjust to your needs. The four attached
plugins are:

- haproxy_check_duration: monitor the duration of the health checks per server
- haproxy_errors: monitor the rate of 5xx response headers per backend
- haproxy_sessions: monitors the rate of (tcp) sessions per backend
- haproxy_volume: monitors the bps in and out per backend

To use them you'll have to add something like the following to your
munin-node config:

[haproxy*]
user haproxy
env.socket /var/run/haproxy.sock

The user should have rights to read and write to the unix socket and
env.socket should point to the haproxy stats socket.

For debugging, the "dump" command line option can be quite useful. It
prints the complete %hastats data structure containing all the info
read from the socket with "show stat".  I can set up some
sourceforge/github thingie which will make it easier to share
patches/updates/additions/etc. if people are interested.

Regards,
Bart


I noticed the 1.4.4 version of Munin complains:
Service 'haproxy_errors' exited with status 1/0.

The normal (non-error) exit paths seem to require exit 0, not exit 1.




Re: http-server-close problem (also with Tomcat backend)

2010-03-31 Thread Willy Tarreau
On Wed, Mar 31, 2010 at 11:47:43AM +0200, Óscar Frías Barranco wrote:
> In many cases Tomcat cannot send the content length because it is serving
> dynamic content and reporting the content length would require buffering all
> the output before serving it.

That's precisely where chunked encoding is used. For instance, a server
which is sending gzipped data will send "Transfer-Encoding: chunked", send
small chunks one after the other, and indicate the real end of the object
with a zero-sized chunk.

> If you are closing the connection, I do not see any specific advantage of
> using chunked encoding.  You could have an undetected incomplete response
> also when using chunked encoding.

You could have detected it with chunked encoding; you cannot detect it
without chunked encoding. The problem we're facing here is precisely that
the Tomcat server is sending neither a content-length NOR chunked encoding.

> In any case, if you consider that this Tomcat behavior is buggy we could
> report the issue to Tomcat team and maybe they can fix it.

If we're certain that it's just "Connection: close" which automatically
disables use of chunked encoding, then yes it's a buggy behaviour and
the developers may be interested in fixing it. Sending a "connection: close"
header is a valid choice for a client, it just indicates to the server that
it does not intend to send anything else on the same connection after the
first request. It is perfectly valid and should not disable use of any form
of encoding in the response.

> As I explained in a previous email, we are using "http-server-close" to
> force "forwardfor" option to include the "X-Forwared-For" header in all the
> requests.

This is a very valid concern too !

Regards,
Willy




Re: Slow loading

2010-03-31 Thread Hank A. Paulson

On 3/30/10 11:49 PM, Willy Tarreau wrote:

On Wed, Mar 31, 2010 at 02:17:37AM -0400, Geoffrey Mina wrote:

There was nothing between the two but a switch... although, disabling the
Windows firewall on the IIS server seems to have fixed the problem!  I don't
have much experience with the built in windows firewall... but apparently
it's not happy about something.


well then either the windows firewall is terribly buggy or the switch
is having fun with the TTL (layer3 switch maybe ?), because it is not
normal to have the TTL decrease by one if nothing sits between the two
machines.


I think we'll switch over to a third party firewall application.


That's a safer bet :-)


Thanks for the help!  You guys rock.


You're welcome!
Willy


2.6.18-164.el5xen

If they are using a domU on Xen then there is either a bridge or another
forwarding mechanism on the dom0 routing traffic to the VM. That might be
causing the TTL decrement; the default is a bridge, and I don't know whether
bridges normally decrement the TTL.


iptables and/or conntrack on the dom0 and/or the domU might be culprits in the
disappearing packet? I guess not in this case, but I'd watch them...


I turn off iptables completely on the dom0 and domU esp. when trying to 
troubleshoot.


Some people find slow IO with Xen:
http://lists.xensource.com/archives/html/xen-users/2009-11/msg00206.html



Re: http-server-close problem (also with Tomcat backend)

2010-03-31 Thread Óscar Frías Barranco
> > As a quick and dirty test, I've applied the following patch.
> > Note this is maybe not OK for production, it's a first look on the
> problem, so be careful (I only took some minutes on a tomcat server with
> gzip compression, which removes the Content-Length when "Connection: close"
> is provided).
> >
> > This modification modifies the request part but not the response one,
> which should :
> > - let the "keep-alived" connection go to the backend server
> > - and then close this "keep-alived" request after the response is
> received.
>
> I think this is an excellent idea. From a protocol point of view,
> it is very dirty because you ask the other side to maintain a connection
> open longer, but it should definitely do the trick.
>
> If this works for the persons who got the issue, I'd rather add a
> specific option for this, because doing it by default will reduce
> performance of properly working servers and cause more packets to
> be exchanged on the network. For instance, nginx can benefit from
> the close, as it knows how to send the FIN immediately after the
> last response, without waiting for haproxy to close the connection
> on it (but nginx was certainly designed by someone who reads RFC,
> which is almost never the case in JAVA environments unfortunately).
>


OK, then please let us know when this option is implemented so that we can
test it in our environment.

Thank you,
Oscar


Re: http-server-close problem (also with Tomcat backend)

2010-03-31 Thread Óscar Frías Barranco
On Wed, Mar 31, 2010 at 06:37, Willy Tarreau  wrote:

> It says that if a message does not have any content length NOR chunked
> transfer encoding, THEN the only way to detect the end is the close.
> Chunked transfer encoding requires HTTP version 1.1, that's all. There's
> nothing wrong in using chunked encoding even in close mode, quite the
> opposite instead. Specifying the message length in the response is very
> important because it is the only way for the client to know whether the
> connection was aborted early or the response was complete. This is needed
> to avoid the incomplete loading and caching of objects.
>
> So I find it very strange that some servers disable chunked encoding
> when close is specified. What I suspect is that internally they don't
> support the "connection" header and simulate an 1.0 HTTP version when
> they see a close. That would explain why they refrain from sending
> the length in response (though it would be a buggy behaviour).
>


In many cases Tomcat cannot send the content length because it is serving
dynamic content and reporting the content length would require buffering all
the output before serving it.
If you are closing the connection, I do not see any specific advantage of
using chunked encoding.  You could have an undetected incomplete response
also when using chunked encoding.
In any case, if you consider that this Tomcat behavior is buggy we could
report the issue to Tomcat team and maybe they can fix it.



>
> > Why do you want to use the http-server-close option ?
> > Why not directly use the Keep-Alive ability of the Tomcat Connector and
> > specify 'no option httpclose' in haproxy ? This way, haproxy should act
> > transparently, using the keep-alive ability of the Tomcat connector.
>
> Maybe because he wants to inspect and log all the requests.
>

As I explained in a previous email, we are using "http-server-close" to
force the "forwardfor" option to include the "X-Forwarded-For" header in
all the requests.
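As a minimal sketch of that kind of setup (haproxy 1.4 syntax; the section
names and addresses are illustrative, not taken from this thread):

    defaults
        mode http
        option http-server-close   # close the server side after each response
        option forwardfor          # add X-Forwarded-For to every request

    frontend www
        bind :80
        default_backend tomcats

    backend tomcats
        server tomcat1 192.168.0.10:8080 check

With http-server-close, each client request is processed and forwarded
individually, so forwardfor can add the header to every request rather than
only to the first one on a kept-alive connection.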

Regards,
Oscar