Re: client side keep-alive (http-server-close vs httpclose)

2011-10-26 Thread Baptiste
Hi,

In order to process layer 7 manipulation (which is what you want to
achieve) for *each* request, you must enable http mode on your
frontend/backend and enable the option http-server-close.
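A minimal sketch of what that could look like (section names, addresses and
the server line are placeholders for illustration, not taken from anyone's
real config):

```
# hypothetical minimal setup: L7 processing on every request,
# client-side keep-alive preserved, server side closed per request
frontend www
    bind :80
    mode http
    option http-server-close
    default_backend app

backend app
    mode http
    option http-server-close
    server app1 10.0.0.1:8080 check
```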

cheers

On Thu, Oct 27, 2011 at 12:21 AM, Vivek Malik  wrote:
> The documentation also says
>
> In HTTP mode, it is possible to rewrite, add or delete some of the request and
> response headers based on regular expressions. It is also possible to block a
> request or a response if a particular header matches a regular expression,
> which is enough to stop most elementary protocol attacks, and to protect
> against information leak from the internal network. But there is a limitation
> to this : since HAProxy's HTTP engine does not support keep-alive, only headers
> passed during the first request of a TCP session will be seen. All subsequent
> headers will be considered data only and not analyzed. Furthermore, HAProxy
> never touches data contents, it stops analysis at the end of headers.
>
> The above confuses me about keep-alive. Please clarify whether this
> applies in http mode.
>
> On Wed, Oct 26, 2011 at 6:15 PM, Vincent Bernat  wrote:
>>
>> OoO On this cloudy night of Thursday, October 27, 2011, around 00:02, Vivek
>> Malik said:
>>
>> > We have been using haproxy in production for around 6 months while
>> > using httpclose. We use functions like reqidel, reqadd to manipulate
>> > request headers and use_backend to route a request to a specific
>> > backend.
>>
>> > We run websites which often make ajax calls and load JavaScript and
>> > CSS files from the server. Thinking about keep-alive, I think it
>> > would be desirable to keep client-side keep-alive so that clients
>> > can reuse connections to load images, JavaScript, CSS and make ajax
>> > calls over it.
>>
>> > From a haproxy request processing and manipulation perspective, is
>> > there a difference between http-server-close and httpclose? Would
>> > reqadd/reqidel/use_backend work on subsequent requests during
>> > client-side keep-alive too?
>>
>> Yes. From the documentation:
>>
>> ,
>> | By default HAProxy operates in a tunnel-like mode with regards to persistent
>> | connections: for each connection it processes the first request and forwards
>> | everything else (including additional requests) to selected server. Once
>> | established, the connection is persisted both on the client and server
>> | sides. Use "option http-server-close" to preserve client persistent connections
>> | while handling every incoming request individually, dispatching them one after
>> | another to servers, in HTTP close mode. Use "option httpclose" to switch both
>> | sides to HTTP close mode. "option forceclose" and "option
>> | http-pretend-keepalive" help working around servers misbehaving in HTTP close
>> | mode.
>> `
>> --
>> Vincent Bernat ☯ http://vincent.bernat.im
>>
>> Make sure input cannot violate the limits of the program.
>>            - The Elements of Programming Style (Kernighan & Plauger)
>
>



Re: client side keep-alive (http-server-close vs httpclose)

2011-10-26 Thread Vivek Malik
The documentation also says

In HTTP mode, it is possible to rewrite, add or delete some of the request and
response headers based on regular expressions. It is also possible to block a
request or a response if a particular header matches a regular expression,
which is enough to stop most elementary protocol attacks, and to protect
against information leak from the internal network. But there is a limitation
to this : since HAProxy's HTTP engine does not support keep-alive, only headers
passed during the first request of a TCP session will be seen. All subsequent
headers will be considered data only and not analyzed. Furthermore, HAProxy
never touches data contents, it stops analysis at the end of headers.


The above confuses me about keep-alive. Please clarify whether this
applies in http mode.


On Wed, Oct 26, 2011 at 6:15 PM, Vincent Bernat  wrote:

> OoO On this cloudy night of Thursday, October 27, 2011, around 00:02, Vivek
> Malik said:
>
> > We have been using haproxy in production for around 6 months while
> > using httpclose. We use functions like reqidel, reqadd to manipulate
> > request headers and use_backend to route a request to a specific
> > backend.
>
> > We run websites which often make ajax calls and load JavaScript and
> > CSS files from the server. Thinking about keep-alive, I think it
> > would be desirable to keep client-side keep-alive so that clients
> > can reuse connections to load images, JavaScript, CSS and make ajax
> > calls over it.
>
> > From a haproxy request processing and manipulation perspective, is
> > there a difference between http-server-close and httpclose? Would
> > reqadd/reqidel/use_backend work on subsequent requests during
> > client-side keep-alive too?
>
> Yes. From the documentation:
>
> ,
> | By default HAProxy operates in a tunnel-like mode with regards to persistent
> | connections: for each connection it processes the first request and forwards
> | everything else (including additional requests) to selected server. Once
> | established, the connection is persisted both on the client and server
> | sides. Use "option http-server-close" to preserve client persistent connections
> | while handling every incoming request individually, dispatching them one after
> | another to servers, in HTTP close mode. Use "option httpclose" to switch both
> | sides to HTTP close mode. "option forceclose" and "option
> | http-pretend-keepalive" help working around servers misbehaving in HTTP close
> | mode.
> `
> --
> Vincent Bernat ☯ http://vincent.bernat.im
>
> Make sure input cannot violate the limits of the program.
>            - The Elements of Programming Style (Kernighan & Plauger)
>


Re: client side keep-alive (http-server-close vs httpclose)

2011-10-26 Thread Vincent Bernat
OoO On this cloudy night of Thursday, October 27, 2011, around 00:02, Vivek
Malik said:

> We have been using haproxy in production for around 6 months while
> using httpclose. We use functions like reqidel, reqadd to manipulate
> request headers and use_backend to route a request to a specific
> backend.

> We run websites which often make ajax calls and load JavaScript and
> CSS files from the server. Thinking about keep-alive, I think it
> would be desirable to keep client-side keep-alive so that clients
> can reuse connections to load images, JavaScript, CSS and make ajax
> calls over it.

> From a haproxy request processing and manipulation perspective, is
> there a difference between http-server-close and httpclose? Would
> reqadd/reqidel/use_backend work on subsequent requests during
> client-side keep-alive too?

Yes. From the documentation:

,
| By default HAProxy operates in a tunnel-like mode with regards to persistent
| connections: for each connection it processes the first request and forwards
| everything else (including additional requests) to selected server. Once
| established, the connection is persisted both on the client and server
| sides. Use "option http-server-close" to preserve client persistent connections
| while handling every incoming request individually, dispatching them one after
| another to servers, in HTTP close mode. Use "option httpclose" to switch both
| sides to HTTP close mode. "option forceclose" and "option
| http-pretend-keepalive" help working around servers misbehaving in HTTP close
| mode.
`
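In config terms, the difference boils down to which of these options you set
(a sketch; the defaults section is illustrative):

```
# client keeps its connection; each request is handled individually and
# dispatched to a server in close mode
defaults
    mode http
    option http-server-close

# versus: switch BOTH client and server sides to close mode
#    option httpclose
```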
-- 
Vincent Bernat ☯ http://vincent.bernat.im

Make sure input cannot violate the limits of the program.
- The Elements of Programming Style (Kernighan & Plauger)



client side keep-alive (http-server-close vs httpclose)

2011-10-26 Thread Vivek Malik
We have been using haproxy in production for around 6 months while using
httpclose. We use functions like reqidel, reqadd to manipulate request
headers and use_backend to route a request to a specific backend.
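For context, the kind of rules in question look roughly like this (the header
names, ACL and backend names here are made up for illustration, not from the
real config):

```
frontend www
    mode http
    option httpclose
    # delete any incoming X-Forwarded-For header (regex match on the request)
    reqidel ^X-Forwarded-For:.*
    # add a custom header to every request (spaces must be backslash-escaped)
    reqadd X-Proxied:\ true
    # route API traffic to its own backend
    acl is_api path_beg /api
    use_backend api_servers if is_api
    default_backend web_servers
```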

We run websites which often make ajax calls and load JavaScript and CSS
files from the server. Thinking about keep-alive, I think it would be
desirable to keep client-side keep-alive so that clients can reuse
connections to load images, JavaScript, CSS and make ajax calls over it.

From a haproxy request processing and manipulation perspective, is there a
difference between http-server-close and httpclose? Would
reqadd/reqidel/use_backend work on subsequent requests during client-side
keep-alive too?

I tried running some tests and I was able to reqadd/reqidel and use_backend
while using http-server-close, but I wanted to check with the group before
pushing the change to production.

Also, what's a good keep-alive value for a web server? I was thinking around
10 seconds, which would give slow clients (including mobile) enough time to
process an HTML document and initiate requests for additional resources.
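If 10 seconds turns out to be reasonable, it would presumably be set along
these lines (a sketch; timeout http-keep-alive governs the idle time between
requests on a kept-alive client connection):

```
defaults
    mode http
    option http-server-close
    # allow up to 10s of idle time between requests on a client connection
    timeout http-keep-alive 10s
    # time allowed to receive a complete request (value is illustrative)
    timeout http-request    10s
```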

Thanks,
Vivek


Re: Haproxy with stunnel and a session cookie service.

2011-10-26 Thread Sean Patronis
Our session service deals with persistence... Our application is written
in such a way that it does not matter which backend it is connected to
(i.e. we do not need HAProxy to keep the persistence).  Currently,
HAProxy is set to round-robin (default).


Also of note, in the lab, we are only connecting to one backend.  So 
there is always persistence with one server.


I will setup some more logging and see what I come up with.

Thanks


On 10/26/2011 10:50 AM, Baptiste wrote:

Hi,

how do you achieve session persistence in HAProxy configuration?
What load-balancing algorithm do you use?
Can you configure HAProxy to log your session cookie then show us some
log lines?

cheers



On Wed, Oct 26, 2011 at 2:57 PM, Sean Patronis  wrote:


We are in the process of converting most of our HAProxy usage to be http
balanced (instead of TCP).

In our lab we are using stunnel to decrypt our https traffic to http
which then gets piped to haproxy.  We also have a load balanced session
service that stunnel/HAProxy also serves, which uses cookies for our
sessions (the session service generates/maintains the cookies).

Whenever we are using stunnel/HAProxy to decrypt and balance our session
service, something bad seems to happen to the session cookie and our
session service returns an error.  If we use just straight http
balancing without stunnel in the loop, our session service works fine.
Also, if we just use HAProxy and tcp balancing, our session service
works fine (the current way our production service works).

What gives? Is there some special config in HAProxy or stunnel that I am
missing?


Thanks.



Re: Haproxy with stunnel and a session cookie service.

2011-10-26 Thread Baptiste
Hi,

how do you achieve session persistence in HAProxy configuration?
What load-balancing algorithm do you use?
Can you configure HAProxy to log your session cookie then show us some
log lines?
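Cookie logging can be done with something along these lines (the cookie name
"SESSIONID" is a placeholder for whatever your session service actually uses):

```
frontend www
    mode http
    # httplog is needed so the capture shows up in the log lines
    option httplog
    # log the first 32 bytes of the session cookie, request and response side
    capture cookie SESSIONID len 32
```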

cheers



On Wed, Oct 26, 2011 at 2:57 PM, Sean Patronis  wrote:
>
>
> We are in the process of converting most of our HAProxy usage to be http
> balanced (instead of TCP).
>
> In our lab we are using stunnel to decrypt our https traffic to http
> which then gets piped to haproxy.  We also have a load balanced session
> service that stunnel/HAProxy also serves, which uses cookies for our
> sessions (the session service generates/maintains the cookies).
>
> Whenever we are using stunnel/HAProxy to decrypt and balance our session
> service, something bad seems to happen to the session cookie and our
> session service returns an error.  If we use just straight http
> balancing without stunnel in the loop, our session service works fine.
> Also, if we just use HAProxy and tcp balancing, our session service
> works fine (the current way our production service works).
>
> What gives? Is there some special config in HAProxy or stunnel that I am
> missing?
>
>
> Thanks.



Haproxy with stunnel and a session cookie service.

2011-10-26 Thread Sean Patronis
We are in the process of converting most of our HAProxy usage to be http
balanced (instead of TCP).

In our lab we are using stunnel to decrypt our https traffic to http
which then gets piped to haproxy.  We also have a load balanced session
service that stunnel/HAProxy also serves, which uses cookies for our
sessions (the session service generates/maintains the cookies).

Whenever we are using stunnel/HAProxy to decrypt and balance our session
service, something bad seems to happen to the session cookie and our
session service returns an error.  If we use just straight http
balancing without stunnel in the loop, our session service works fine.
Also, if we just use HAProxy and tcp balancing, our session service
works fine (the current way our production service works).

What gives? Is there some special config in HAProxy or stunnel that I am
missing?
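For reference, our stunnel setup is essentially the stock decrypt-and-forward
arrangement, something like this (ports and the certificate path are
illustrative, not our real config):

```
; stunnel in server mode: terminate HTTPS and forward plain HTTP to haproxy
cert = /etc/stunnel/server.pem

[https]
accept  = 0.0.0.0:443
connect = 127.0.0.1:80
```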


Thanks.

Re: Timeout values

2011-10-26 Thread carlo flores
I can appreciate having to keep a slow application layer highly available
via long timeouts, but as a suggestion:

a) keep lots of available sockets open and think about the TIME_WAIT
sysctl reuse/recycle variables (tcp_tw_reuse / tcp_tw_recycle)

And

b) think about creating a simple page (in whatever language and environment
your application uses) that returns 200 OK for the healthcheck on the
backend servers. With your current parameters, figure a standard 2 fails and
5 ok recovers for a backend health check, at 30s intervals with a 40s
timeout, and you're down a lot longer than you need to be.
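Concretely, pointing the check at a dedicated lightweight page might look
like this (the URL, server name and interval are hypothetical):

```
backend app
    mode http
    # hit a cheap dedicated page instead of a heavy application URL
    option httpchk GET /healthcheck
    # mark down after 2 failures, back up after 5 successes; check every 5s
    server app1 10.0.0.1:8080 check inter 5s fall 2 rise 5
```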

Just ideas. But if you're dealing with such a slow application layer, I
doubt posting large timeout values will lead to anyone's approval -- more
likely Baptiste's reasonable response -- or to sympathy and a call for a
faster application layer via knowns and optimization/iteration -- the
dev/ops response.

But I hope this helps,
C.

On Tuesday, October 25, 2011, Erik Torlen wrote:
> Hi,
>
> I would like to get feedback on these timeout values.
>
>timeout http-request40s
>timeout queue   1m
>timeout connect 120s
>timeout client  1m
>timeout server  1m
>timeout http-keep-alive 40s
>timeout check   40s
>
> I have done a lot of different loadtests with different values, using stud
> in front of haproxy and the backend on separate instances in the cloud
> (meaning there is higher latency than normal against the backend).
>
> Can't see any big difference in the loadtest results when having these
> timeouts fairly high. I guess that really low values will affect the
> loadtest results more.
>
> /E
>
>