Re: Keep alive with haproxy & stud

2011-10-26 Thread Baptiste
Hi Erik,

You just need to enable option httplog in your HAProxy frontend.
It is verbose and provides useful information for troubleshooting.
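
For example, a sketch of what this could look like (frontend name, addresses
and backend are placeholders, not from your setup):

```
global
    log 127.0.0.1 local0          # send logs to a local syslog daemon

frontend fe_http
    bind :80
    mode http
    log global                    # inherit the global log target
    option httplog                # detailed HTTP format: timers, termination state, status
    default_backend bk_web
```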

cheers


On Tue, Oct 25, 2011 at 10:52 PM, Erik Torlen
erik.tor...@apicasystem.com wrote:
 Hi,

 I will continue testing in a few days and see how the results turn out.
 We have made a lot of changes, so we'll see how it goes.

 All of the results include the details of the response time from the loadtest.
 Any recommendations on the logging we can use to get more information on what 
 is happening on the server side?
 We are currently just using syslog.

 /E

 -Original Message-
 From: Willy Tarreau [mailto:w...@1wt.eu]
 Sent: den 14 oktober 2011 23:16
 To: Erik Torlen
 Cc: haproxy@formilux.org
 Subject: Re: Keep alive with haproxy & stud

 Hi Erik,

 On Sat, Oct 08, 2011 at 06:40:49PM +, Erik Torlen wrote:
 Hi,

 I see different results on the keep alive using http vs https.

 Loadtest against https (through stud) gives me around 69% keep alive 
efficiency (using 3-20 objects per connection in different tests). When testing
 through http directly against haproxy I get 99% keep alive with the same 
 loadtest scripts.

 I have tried changing timeouts and different modes (http-pretend-keepalive 
 etc) but still no improvement.

 Anyone that knows how to improve this and why it's happening?

 If you're testing directly and then via stud and seeing different results, then
 none of the haproxy options (pretend-keepalive, ...) will have any effect.
 It is very possible that timeouts were too low but that would mean you
 were using insanely low timeouts (eg: a few ms). It is also possible
 that the tool you used for the test can't run as many https concurrent
 connections as it runs http connections, and that it closes some of them
 by itself. And it is also possible that there are a few issues with stud.
 While it performs well, it's still young and it is possible that some
 pathological corner cases remain. Haproxy experienced this in its early
 age too. You need to enable logging everywhere and get more precise stats
 from your load testing tool (eg: all response times, not just an average).

 Regards,
 Willy






Re: Timeout values

2011-10-26 Thread Baptiste
Hi Erik,

What's your purpose here?
Depending on your load test and your haproxy configuration, the queue
timeout might generate 503 responses.
The other ones relate to the behavior you want for your web platform.
Basically, all the values you added seem too high.
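
For comparison, here is a sketch of what more conservative values could look
like; these particular numbers are my own illustration, not a recommendation
from this thread, so tune them to your platform's latency:

```
    # Illustrative values only -- adjust to your own application
    timeout http-request    10s    # full request headers must arrive within 10s
    timeout queue           30s    # give up on queued connections before clients do
    timeout connect         5s     # TCP connect to a healthy backend should be quick
    timeout client          30s
    timeout server          30s
    timeout http-keep-alive 10s    # idle time allowed between client requests
    timeout check           10s
```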


Cheers


On Tue, Oct 25, 2011 at 11:02 PM, Erik Torlen
erik.tor...@apicasystem.com wrote:
 Hi,

 I would like to get feedback on these timeout values.

    timeout http-request    40s
    timeout queue           1m
    timeout connect         120s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 40s
    timeout check           40s

 I have done a lot of different loadtests with different values using stud in
 front of haproxy and backend on separate instances
 in the cloud (meaning there is higher latency than normal against backend).

 Can't see any big difference in the loadtest results when having these timeouts
 fairly high. I guess that really low values will affect
 the loadtest result more.

 /E





Re: Timeout values

2011-10-26 Thread carlo flores
I can appreciate having to keep a slow application layer highly available
via long timeouts, but as a suggestion:

a) keep lots of sockets available and look at the TIME_WAIT sysctl
reuse/recycle variables

And

b) think about creating a simple page (in whatever language and environment
your application uses) that returns 200 OK for the healthcheck on the
backend servers. With your current parameters, figure a standard 2 fails and
5 ok recovers for a backend health check, at 30s increments with a 40s
timeout, and you're down a lot longer than you need to be.
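
Suggestion (b) could look roughly like this in the backend; the backend name,
address, URL and check timing below are placeholders of my own, not from the
original configuration:

```
backend bk_app
    # dedicated cheap page that just returns 200 OK, instead of a full app request
    option httpchk GET /healthcheck
    timeout check 5s
    # inter 5s / fall 2: a dead server is flagged within roughly 10s
    # rise 2: it comes back after two consecutive passing checks
    server app1 10.0.0.1:80 check inter 5s fall 2 rise 2
```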

Just ideas. But if you're dealing with such a slow application layer, I
doubt posting large timeout values will lead to anyone's approval -- more
likely Baptiste's reasonable response -- or sympathy, and a call for a faster
application layer via knowns and optimization/iteration -- the dev/ops
response.

But I hope this helps,
C.

On Tuesday, October 25, 2011, Erik Torlen erik.tor...@apicasystem.com
wrote:
 Hi,

 I would like to get feedback on these timeout values.

    timeout http-request    40s
    timeout queue           1m
    timeout connect         120s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 40s
    timeout check           40s

 I have done a lot of different loadtests with different values using stud
in front of haproxy and backend on separate instances
 in the cloud (meaning there is higher latency than normal against
backend).

 Can't see any big difference in the loadtest results when having these
timeouts fairly high. I guess that really low values will affect
 the loadtest result more.

 /E




Haproxy with stunnel and a session cookie service.

2011-10-26 Thread Sean Patronis



We are in the process of converting most of our HAProxy usage to be http
balanced (instead of TCP).

In our lab we are using stunnel to decrypt our https traffic to http,
which then gets piped to haproxy.  We also have a load balanced session
service that stunnel/HAProxy also serves, which uses cookies for our
sessions (the session service generates/maintains the cookies).

Whenever we are using stunnel/HAProxy to decrypt and balance our session
service, something bad seems to happen to the session cookie and our
session service returns an error.  If we use just straight http
balancing without stunnel in the loop, our session service works fine.
Also, if we just use HAProxy and tcp balancing, our session service
works fine (the current way our production service works).

What gives? Is there some special config in HAProxy or stunnel that I am
missing?


Thanks.





Re: Haproxy with stunnel and a session cookie service.

2011-10-26 Thread Baptiste
Hi,

How do you achieve session persistence in your HAProxy configuration?
What load-balancing algorithm do you use?
Can you configure HAProxy to log your session cookie then show us some
log lines?
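
As a sketch, cookie logging can be enabled with "capture cookie" in the
frontend; "SESSIONID" and the other names below are placeholders for whatever
your session service actually uses:

```
frontend fe_http
    bind :80
    mode http
    option httplog
    # log the first 32 bytes of the session cookie in each log line
    capture cookie SESSIONID= len 32
    default_backend bk_session
```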

cheers



On Wed, Oct 26, 2011 at 2:57 PM, Sean Patronis spatro...@add123.com wrote:


 We are in the process of converting most of our HAProxy usage to be http
 balanced (instead of TCP).

 In our lab we are using stunnel to decrypt our https traffic to http
 which then gets piped to haproxy.  We also have a load balanced session
 service that stunnel/HAProxy also serves, which uses cookies for our
 sessions (the session service generates/maintains the cookies).

 Whenever we are using stunnel/HAProxy to decrypt and balance our session
 service, something bad seems to happen to the session cookie and our
 session service returns an error.  If we use just straight http
 balancing without stunnel in the loop, our session service works fine.
 Also, if we just use HAProxy and tcp balancing, our session service
 works fine (the current way our production service works).

 What gives? Is there some special config in HAProxy or stunnel that I am
 missing?


 Thanks.







Re: Haproxy with stunnel and a session cookie service.

2011-10-26 Thread Sean Patronis
Our session service deals with persistence... Our application is written
in such a way that it does not matter which backend it is connected to
(i.e. we do not need HAProxy to keep the persistence).  Currently,
HAProxy is set to round-robin (the default).


Also of note, in the lab, we are only connecting to one backend.  So 
there is always persistence with one server.


I will setup some more logging and see what I come up with.

Thanks


On 10/26/2011 10:50 AM, Baptiste wrote:

Hi,

how do you achieve session persistence in HAProxy configuration?
What load-balancing algorithm do you use?
Can you configure HAProxy to log your session cookie then show us some
log lines?

cheers



On Wed, Oct 26, 2011 at 2:57 PM, Sean Patronisspatro...@add123.com  wrote:


We are in the process of converting most of our HAProxy usage to be http
balanced (instead of TCP).

In our lab we are using stunnel to decrypt our https traffic to http
which then gets piped to haproxy.  We also have a load balanced session
service that stunnel/HAProxy also serves, which uses cookies for our
sessions (the session service generates/maintains the cookies).

Whenever we are using stunnel/HAProxy to decrypt and balance our session
service, something bad seems to happen to the session cookie and our
session service returns an error.  If we use just straight http
balancing without stunnel in the loop, our session service works fine.
Also, if we just use HAProxy and tcp balancing, our session service
works fine (the current way our production service works).

What gives? Is there some special config in HAProxy or stunnel that I am
missing?


Thanks.











client side keep-alive (http-server-close vs httpclose)

2011-10-26 Thread Vivek Malik
We have been using haproxy in production for around 6 months while using
httpclose. We use functions like reqidel, reqadd to manipulate request
headers and use_backend to route a request to a specific backend.

We run websites which often have ajax calls and load javascripts and css
files from the server. Thinking about keep alive, I think it would be
desired to keep client side keep alive so that they can reuse connections to
load images, javascript, css and make ajax calls over it.

From a haproxy request processing and manipulation perspective, is there a
difference between http-server-close and httpclose? Would
reqadd/reqidel/use_backend work on subsequent requests during client-side
keep alive too?

I tried running some tests and I was able to reqadd/reqidel and use_backend
while using http-server-close, but I wanted to check with the group before
pushing the change to production.

Also, what's a good keep alive value for a web server? I was thinking around
10 seconds, which would give slow clients (including mobile) enough time to
process an html document and initiate requests for additional resources.

Thanks,
Vivek


Re: client side keep-alive (http-server-close vs httpclose)

2011-10-26 Thread Vincent Bernat
OoO On this cloudy Thursday night of 27 October 2011, around 00:02, Vivek
Malik vivek.ma...@gmail.com said:

 We have been using haproxy in production for around 6 months while
 using httpclose. We use functions like reqidel, reqadd to manipulate
 request headers and use_backend to route a request to a specific
 backend.

 We run websites which often have ajax calls and load javascripts and
 css files from the server. Thinking about keep alive, I think it
 would be desired to keep client side keep alive so that they can
 reuse connections to load images, javascript, css and make ajax calls
 over it.

 From a haproxy request processing and manipulating perspective, Is
 there a difference between http-server-close and httpclose? Would
 reqadd/reqidel/use_backend work on subsequent requests during client
 side keep alive too?

Yes. From the documentation:

,----
| By default HAProxy operates in a tunnel-like mode with regards to persistent
| connections: for each connection it processes the first request and forwards
| everything else (including additional requests) to selected server. Once
| established, the connection is persisted both on the client and server
| sides. Use option http-server-close to preserve client persistent connections
| while handling every incoming request individually, dispatching them one after
| another to servers, in HTTP close mode. Use option httpclose to switch both
| sides to HTTP close mode. option forceclose and option
| http-pretend-keepalive help working around servers misbehaving in HTTP close
| mode.
`----
-- 
Vincent Bernat ☯ http://vincent.bernat.im

Make sure input cannot violate the limits of the program.
- The Elements of Programming Style (Kernighan & Plauger)



Re: client side keep-alive (http-server-close vs httpclose)

2011-10-26 Thread Baptiste
Hi,

In order to process layer 7 manipulation (what you want to
achieve) for *each* request, you must enable http mode on your
frontend/backend and enable option http-server-close.
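
A minimal sketch of such a setup; the backend names, the header being deleted,
and the ACL are placeholders of my own, not from your configuration:

```
frontend fe_web
    bind :80
    mode http
    option http-server-close     # keep the client side alive, close the server side
    timeout http-keep-alive 10s  # idle time allowed between client requests
    reqidel ^X-Internal-Auth:    # header manipulation now applies to every request
    acl is_static path_beg /static
    use_backend bk_static if is_static
    default_backend bk_app

backend bk_app
    mode http
    server app1 10.0.0.1:8080 check

backend bk_static
    mode http
    server static1 10.0.0.2:8080 check
```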

cheers

On Thu, Oct 27, 2011 at 12:21 AM, Vivek Malik vivek.ma...@gmail.com wrote:
 The documentation also says

 In HTTP mode, it is possible to rewrite, add or delete some of the request and
 response headers based on regular expressions. It is also possible to block a
 request or a response if a particular header matches a regular expression,
 which is enough to stop most elementary protocol attacks, and to protect
 against information leak from the internal network. But there is a limitation
 to this: since HAProxy's HTTP engine does not support keep-alive, only headers
 passed during the first request of a TCP session will be seen. All subsequent
 headers will be considered data only and not analyzed. Furthermore, HAProxy
 never touches data contents, it stops analysis at the end of headers.

 The above confuses me about keep-alive. Please suggest if this applies in
 http mode.

 On Wed, Oct 26, 2011 at 6:15 PM, Vincent Bernat ber...@luffy.cx wrote:

 OoO On this cloudy Thursday night of 27 October 2011, around 00:02, Vivek
 Malik vivek.ma...@gmail.com said:

  We have been using haproxy in production for around 6 months while
  using httpclose. We use functions like reqidel, reqadd to manipulate
  request headers and use_backend to route a request to a specific
  backend.

  We run websites which often have ajax calls and load javascripts and
  css files from the server. Thinking about keep alive, I think it
  would be desired to keep client side keep alive so that they can
  reuse connections to load images, javascript, css and make ajax calls
  over it.

  From a haproxy request processing and manipulating perspective, Is
  there a difference between http-server-close and httpclose? Would
  reqadd/reqidel/use_backend work on subsequent requests during client
  side keep alive too?

 Yes. From the documentation:

 ,----
 | By default HAProxy operates in a tunnel-like mode with regards to persistent
 | connections: for each connection it processes the first request and forwards
 | everything else (including additional requests) to selected server. Once
 | established, the connection is persisted both on the client and server
 | sides. Use option http-server-close to preserve client persistent connections
 | while handling every incoming request individually, dispatching them one after
 | another to servers, in HTTP close mode. Use option httpclose to switch both
 | sides to HTTP close mode. option forceclose and option
 | http-pretend-keepalive help working around servers misbehaving in HTTP close
 | mode.
 `----
 --
 Vincent Bernat ☯ http://vincent.bernat.im

 Make sure input cannot violate the limits of the program.
            - The Elements of Programming Style (Kernighan & Plauger)