Re: HAProxy high SLAB_CACHE

2012-05-07 Thread Willy Tarreau
Hi,

On Wed, May 02, 2012 at 06:44:30PM +0530,   wrote:
 Hi Team,
 
 Configured HAProxy for a bunch of web servers. It was working smoothly until
 one fine day I found that memory utilisation on the server where haproxy is
 running keeps getting higher every day. The biggest chunk of the memory
 consumption is slab_cache, which is using 90% to 95% of total memory.
 The server started using swap and performance degraded.
 
 Running Below configuration
 
 OS - Debian 6.0.4 squeeze
 RAM - 6GB
 CPU - Pentium D 3.00GHz
 HAProxy - HA-Proxy version 1.4.19 2012/01/07
 
 #free -m
              total   used   free  shared  buffers  cached
 Mem:          5959   5780    178       0        3      26
 -/+ buffers/cache:   5749    209
 Swap:         5119    102   5017

Could you please send the output of "ps aux" on this machine so we can see
whether a process is leaking memory? I've never seen haproxy leak memory, but
we never know, maybe you have something special in your configuration. Could
you also post it, BTW? Please remove any sensitive information from it (admin
password and/or IP addresses).
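
In the meantime, something like this should show what the slab cache is made
of (standard tools, nothing haproxy-specific, adapt as needed):

  # top slab consumers, sorted by cache size
  slabtop -o | head -20

  # overall slab usage
  grep -i slab /proc/meminfo

  # processes sorted by resident memory
  ps aux --sort=-rss | head -15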

Thanks,
Willy




Re: HAProxy Hardware LB

2012-05-07 Thread Willy Tarreau
Hi Sebastian,

On Wed, May 02, 2012 at 01:07:20PM +0200, Sebastian Fohler wrote:
 Hi,
 
 I'm trying to build a small load-balancing machine which fits into a
 small 19" rack-mountable case.
 Are there any experiences with some specific hardware, for example Atom
 boards or something similar?
 Can someone recommend anything in particular?

I'm well placed to say it's quite hard to find nice hardware for this. There
are two important points to consider:
  - the number of NICs
  - the type of NICs

The number of NICs depends on what you want to do. Two NICs are not that hard
to find, but more than that becomes complex and removes a lot of choice.

The type of NIC is very important too. Atoms are very commonly sold with
Realtek chips, which are the lowest quality you will ever find. Not only
will the CPU usage remain high at moderate traffic rates, but you'll surely
experience those annoying lockups at high rates that you really don't want
to see in a LB.

Now depending on the amount of processing power you need, ALIX boards from
PC Engines might be excellent. They're running on a 500 MHz Geode, are totally
fanless (and even without a heatsink), but are limited to 10/100 interfaces
and 256 MB of RAM. There are some nice Atom boards at Commell but they come
with a fan and need some airflow to keep cool. And they're not necessarily
cheap depending on your budget.

Hoping this helps,
Willy




Re: Randomly wrong backend on http request

2012-05-07 Thread Willy Tarreau
Hi Baptiste,

On Thu, May 03, 2012 at 09:50:39PM +0200, Baptiste wrote:
 When using HAProxy with the option http-server-close or forceclose, haproxy
 will close the TCP connection on either the server side or on both the client
 and server sides after each request, which is not compatible with websocket.

I'm sorry to say it again but this is wrong. It *IS* compatible with WebSocket
because haproxy switches to tunnel mode when it sees the WS handshake and it
keeps the connection open for as long as there is traffic. So there is never
any need to change the close options for websocket, it works out of the box
where HTTP already works. If you find a situation where it fails, then it's
a bug which must be fixed (and I'm not aware of any such).
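
For what it's worth, here is a minimal sketch (hypothetical names and
addresses, just to illustrate that no special close option is needed):

  frontend ft_web
      bind :80
      mode http
      option http-server-close
      acl is_ws hdr(Upgrade) -i WebSocket
      use_backend bk_ws if is_ws
      default_backend bk_http

  backend bk_ws
      mode http
      server ws1 192.168.0.10:8000

  backend bk_http
      mode http
      server web1 192.168.0.20:80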

However, I agree with you for the rest of the explanations :-)

Willy




Re: TCP reverse proxy

2012-05-07 Thread Willy Tarreau
Hi Emmanuel,

On Fri, Apr 20, 2012 at 09:02:07AM +0200, Emmanuel Bézagu wrote:
 As haproxy already accepts to reverse-proxy SSL and SSH, would it be
 possible to support protocols such as OpenVPN, tinc or XMPP?

Haproxy will work with any TCP-based protocol which does not report
addresses or ports inside the payload. For instance, it works well
on SSH, SMTP, LDAP, RDP, PeSIT, SSL, etc... but not on FTP, most RPC,
etc... In general, any protocol which can easily be translated will
work. I think this is the case for all those above, but you might
prefer testing to be sure.
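
For instance, a plain TCP proxy for XMPP would look roughly like this
(hypothetical addresses, adjust to your setup):

  listen xmpp
      bind :5222
      mode tcp
      balance roundrobin
      server xmpp1 192.168.0.11:5222 check
      server xmpp2 192.168.0.12:5222 check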

Regards,
Willy




Re: haproxy reload causes trouble

2012-05-07 Thread Willy Tarreau
Hi Stefan,

On Tue, Apr 24, 2012 at 08:31:34AM +0200, Stefan Majer wrote:
 Hi,
 
 while reloading haproxy on a decently loaded installation I got the
 following problem:
 2 out of 5 configured proxies stopped accepting traffic after the
 service haproxy reload.
 This machine is processing ~800 conn/sec; it is a Xen virtual machine
 running RHEL 6.2, and haproxy is version 1.4.18 built from source. This
 haproxy sits behind an nginx instance which forwards to haproxy on
 localhost, with a different port per proxy configuration.
 
 Reloading is done automatically on every configuration change, and this has
 worked well for the last month with multiple reloads every day. But yesterday
 we ran into this situation.
 
 relevant log output during reload:
 [root@lb log]# grep -ve GET -e POST -e HEAD -e PROP -e OPTIONS haproxy.log
 Apr 23 16:24:15 localhost haproxy[16721]: Pausing proxy stats.
 Apr 23 16:24:15 localhost haproxy[16721]: Pausing proxy proxy1.
 Apr 23 16:24:15 localhost haproxy[16721]: Pausing proxy proxy2.
 Apr 23 16:24:15 localhost haproxy[27195]: Proxy stats started.
 Apr 23 16:24:15 localhost haproxy[27195]: Proxy proxy1 started.
 Apr 23 16:24:15 localhost haproxy[27192]: Proxy proxy2 started.
 Apr 23 16:24:17 localhost haproxy[16721]: Enabling proxy stats.
 Apr 23 16:24:17 localhost haproxy[16721]: Port  busy while trying
 to enable proxy stats.
 Apr 23 16:24:17 localhost haproxy[16721]: Enabling proxy proxy1.
 Apr 23 16:24:17 localhost haproxy[16721]: Port 30005 busy while trying
 to enable proxy proxy1.
 Apr 23 16:24:17 localhost haproxy[16721]: Port 30001 busy while trying
 to enable proxy proxy2.

I suspect that your source port range covers some of your listening ports,
which generally is not a good idea at high rates, as it is always possible
that a few connections remain in time-wait from time to time.

Please check /proc/sys/net/ipv4/ip_local_port_range to see if your listening
ports are in the range, and if so, either you have to change the listening
ports, or you have to shrink the range to ensure the issue can never happen.
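
For example (the exact bounds below are just an illustration, adapt them so
the range stays clear of your 30001-30005 listeners):

  # check the current ephemeral source port range
  cat /proc/sys/net/ipv4/ip_local_port_range

  # shrink it so it no longer overlaps the listening ports
  echo "32768 61000" > /proc/sys/net/ipv4/ip_local_port_range

  # to make it persistent, set net.ipv4.ip_local_port_range in /etc/sysctl.conf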

I see nothing special in your config, it's very clean and well ordered.

Regards,
Willy




could haproxy call redis for a result?

2012-05-07 Thread S Ahmed
I'm sure this isn't possible but it would be cool if it is.

My backend services write to redis, and if a client reaches a certain
threshold, I want to hard drop all further requests until x minutes have
passed.

Would it be possible that, for each request, haproxy performs a lookup in
redis, and if a 0 is returned, drops the request completely (hard drop),
and if it is 1, continues processing?


Re: Randomly wrong backend on http request

2012-05-07 Thread Baptiste
On Mon, May 7, 2012 at 10:42 PM, Willy Tarreau w...@1wt.eu wrote:
 Hi Baptiste,

 On Thu, May 03, 2012 at 09:50:39PM +0200, Baptiste wrote:
 When using HAProxy with the option http-server-close or forceclose, haproxy
 will close the TCP connection on either the server side or on both the client
 and server sides after each request, which is not compatible with websocket.

 I'm sorry to say it again but this is wrong. It *IS* compatible with WebSocket
 because haproxy switches to tunnel mode when it sees the WS handshake and it
 keeps the connection open for as long as there is traffic. So there is never
 any need to change the close options for websocket, it works out of the box
 where HTTP already works. If you find a situation where it fails, then it's
 a bug which must be fixed (and I'm not aware of any such).

 However, I agree with you for the rest of the explanations :-)

 Willy


Hi,

I guess it deserves an article on the blog :)

cheers



Re: could haproxy call redis for a result?

2012-05-07 Thread Baptiste
On Tue, May 8, 2012 at 12:26 AM, S Ahmed sahmed1...@gmail.com wrote:
 I'm sure this isn't possible but it would be cool if it is.

 My backend services write to redis, and if a client reaches a certain
 threshold, I want to hard drop all further requests until x minutes have
 passed.

 Would it be possible that, for each request, haproxy performs a lookup in
 redis, and if a 0 is returned, drops the request completely (hard drop),
 and if it is 1, continues processing?




It would introduce latency in the request processing.
Why would you need such a way of serving your requests?

By the way, this is not doable with HAProxy.
Well, at least, not out of the box :)
Depending on your needs, you could hack some dirty scripts which can
sync your redis DB with HAProxy server status through the stats
socket.
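
As a rough sketch (assuming the stats socket is enabled at admin level, and
with hypothetical backend/server names), such a script could do things like:

  # in haproxy.cfg:
  #   global
  #       stats socket /var/run/haproxy.sock level admin

  # take a server out of rotation, then put it back later
  echo "disable server bk_app/srv1" | socat stdio /var/run/haproxy.sock
  echo "enable server bk_app/srv1"  | socat stdio /var/run/haproxy.sock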

cheers



ACLs that depend on cookie values

2012-05-07 Thread Malcolm Handley
I'd like to write an ACL that compares the integer value of a cookie
with a constant. (My goal is to be able to block percentiles of our
users if we have more traffic than we can handle, so I want to block a
request if the cookie's value is, say, less than 25.)

I understand that I can do something like
hdr_sub(cookie) -i regular expression
but that doesn't let me treat the value as an integer and compare it.

I also know about
hdr_val(header)
but that gives me the entire value of the cookie header, not just the
value of a particular cookie.

Is there any way that I can do this?



Re: could haproxy call redis for a result?

2012-05-07 Thread S Ahmed
I agree it will add overhead for each call.

Well, would there be a way for me to somehow tell haproxy from my application
to block a particular URL, and then send another API call to allow traffic
from that URL again?

It would be really cool to have an API I could do this from.

I know haproxy has rate limiting as per:
http://blog.serverfault.com/2010/08/26/1016491873/

But I'm wondering if one could have more control over it, like, say, when you
have multiple haproxy servers and you want to sync them, or simply when the
application layer needs to decide when to drop a URL's connections and when
to accept them.

On Mon, May 7, 2012 at 7:39 PM, Baptiste bed...@gmail.com wrote:

 On Tue, May 8, 2012 at 12:26 AM, S Ahmed sahmed1...@gmail.com wrote:
  I'm sure this isn't possible but it would be cool if it is.
 
  My backend services write to redis, and if a client reaches a certain
  threshold, I want to hard drop all further requests until x minutes have
  passed.
 
  Would it be possible that, for each request, haproxy performs a lookup in
  redis, and if a 0 is returned, drops the request completely (hard drop),
  and if it is 1, continues processing?
 
 


 It would introduce latency in the request processing.
 Why would you need such a way of serving your requests?

 By the way, this is not doable with HAProxy.
 Well, at least, not out of the box :)
 Depending on your needs, you could hack some dirty scripts which can
 sync your redis DB with HAProxy server status through the stats
 socket.

 cheers