Re: Binding by Hostname

2010-04-21 Thread Willy Tarreau
On Wed, Apr 21, 2010 at 11:05:45AM +0100, Laurie Young wrote:
 The websocket has to be kept open even when there is no data transfer for some
 time. To verify that the connection is still open, the server attempts to send
 data, even if there has been a period of inactivity. This does expose us to
 dirty connections, but it is required for the functionality we need from
 websockets, and we have no choice but to accept this - or find a different
 solution.

I'm perfectly aware of this requirement, I understand it.

 Normal HTTP connections on the other hand can be re-established by the
 client when it wants to pull some more data from the server, so we can close
 it after a period of inactivity. Meaning we can set a lower timeout to kill
 off dirty connections.

What I'd like to make clear is that a dirty connection from a given client
to your unique frontend might as well be for the websocket or the HTTP
part. There's no difference. The probability of a client getting
disconnected does not depend on the URL it asks for. Also, in case of a
random network outage, you'll get a *lot* more dirty websocket connections
than HTTP connections, because the former last longer while the latter are
short-lived (a few tens of milliseconds to a few seconds at most).

So trying to optimise your config to correctly handle 0.1% of your dirty
connections while 99.9% of them will be on websockets makes no sense.

 Unfortunately I am still no closer to knowing if HAProxy can do this :-(

No, you can't change the client-side timeout on the backend. However, the
server timeout still applies in case of inactivity.
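To illustrate the point, here is a minimal config sketch of where those timeouts live (the section names, addresses, and timeout values are made up for illustration; only the directive placement reflects what is said above):

```
defaults
    mode http
    timeout connect 5s

frontend ft_web
    bind :80
    # The client-side inactivity timeout is set on the frontend and is
    # shared by all traffic, HTTP and websocket alike; it cannot be
    # changed per backend.
    timeout client 30s
    default_backend bk_app

backend bk_app
    # The server-side inactivity timeout is per backend, so a
    # long-lived websocket backend can get a larger value here.
    timeout server 300s
    server app1 192.0.2.10:80
```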

Regards,
Willy




acls and httpclose

2010-04-21 Thread Angelo Höngens

Hey, I read somewhere on the list that when you use keepalives, only the
first request in the connection is matched to an acl, and then the other
requests in the connection are not evaluated.

I noticed this behavior as well. As an experiment I set up a large
config, where I select one out of 325 backends, based on one out of 8000
host headers. I noticed that only the first request in a connection is
matched to a backend, and the rest follows to the same backend, even
though the host header is different. With the httpclose option,
everything works as it should.

My question is: is this behavior by design, or is this a work-in-progress?

I want to use haproxy for content switching on a large scale (lot of
acls, lot of backends), but with httpclose on haproxy uses 25% cpu,
without httpclose haproxy uses 5% cpu. So I'd rather not use httpclose
if I don't have to..
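For reference, a minimal sketch of this kind of Host-based content switching (the backend names and domains are hypothetical, not taken from the actual 325-backend config):

```
frontend ft_main
    bind :80
    # Without this, only the first request of a keep-alive connection
    # is evaluated against the ACLs below; later requests on the same
    # connection follow the first request's backend.
    option httpclose
    acl host_site1 hdr(host) -i www.example-one.com
    acl host_site2 hdr(host) -i www.example-two.com
    use_backend bk_site1 if host_site1
    use_backend bk_site2 if host_site2
    default_backend bk_default
```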

-- 


With kind regards,


Angelo Höngens
systems administrator

MCSE on Windows 2003
MCSE on Windows 2000
MS Small Business Specialist
--
NetMatch
tourism internet software solutions

Ringbaan Oost 2b
5013 CA Tilburg
+31 (0)13 5811088
+31 (0)13 5821239

a.hong...@netmatch.nl
www.netmatch.nl
--





Re: acls and httpclose

2010-04-21 Thread Willy Tarreau
On Wed, Apr 21, 2010 at 10:55:22PM +0200, Łukasz Jagiełło wrote:
 2010/4/21 Angelo Höngens a.hong...@netmatch.nl:
  Hey, I read somewhere on the list that when you use keepalives, only the
  first request in the connection is matched to an acl, and then the other
  requests in the connection are not evaluated.
 
  I noticed this behavior as well. As an experiment I set up a large
  config, where I select one out of 325 backends, based on one out of 8000
  host headers. I noticed that only the first request in a connection is
  matched to a backend, and the rest follows to the same backend, even
  though the host header is different. With the httpclose option,
  everything works as it should.
 
  My question is: is this behavior by design, or is this a work-in-progress?
 
 From: http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
 
 As stated in section 1, HAProxy does not yet support the HTTP keep-alive
   mode. So by default, if a client communicates with a server in this mode, it
   will only analyze, log, and process the first request of each connection. To
   workaround this limitation, it is possible to specify option httpclose. It
   will check if a Connection: close header is already set in each direction,
   and will add one if missing. Each end should react to this by actively
   closing the TCP connection after each transfer, thus resulting in a switch to
   the HTTP close mode. Any Connection header different from close will also
   be removed.
 
 So looks like everything works as it should.

Version 1.4 supports keep-alive on the client side (use option
http-server-close instead of httpclose).
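In config terms that is a one-line change (a sketch; the frontend name and bind line are illustrative):

```
frontend ft_main
    bind :80
    # Keeps the client-side connection alive but closes the server
    # side after each response, so every request on a keep-alive
    # connection is re-evaluated against the ACLs, at a lower cost
    # than forcing a full close with option httpclose.
    option http-server-close
```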

  I want to use haproxy for content switching on a large scale (lot of
  acls, lot of backends), but with httpclose on haproxy uses 25% cpu,
  without httpclose haproxy uses 5% cpu. So I'd rather not use httpclose
  if I don't have to..
 
 Also looks ok: with httpclose, haproxy has more work to do, so the CPU
 usage is higher too.

In fact it's not much more work for haproxy, but for the system: doing a
connect is more expensive than sending one packet. However, if you
observe such large differences, I conclude that you're transferring very
small objects, so that the connect/close overhead becomes predominant.

My observations are that http-server-close is about twice as fast as
httpclose, so you could save about half of the CPU usage here.

Willy