Re: lots and lots of request errors - cR 408

2012-04-04 Thread fred hu
A browser will open up to 6 connections to your server, but not all of
them will be used.
That is one source of 408s.
Another source is that the client's request is larger than 1460 bytes, due
to cookies or something else, so the request is split into two packets and
the latter is lost on the network.

In both cases, however, 10% 408s is extremely high. I found about 1% 408s
in my production env.

Try changing timeout http-request from 1 to 2 and see what
happens.
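For reference, timeout http-request is set in the defaults (or frontend)
section; a minimal sketch (values are illustrative, not a recommendation):

```
defaults
    mode http
    # maximum time allowed for the client to send a complete request;
    # too small a value turns slow or split requests into cR 408s
    timeout http-request 2s
    timeout client  30s
    timeout server  30s
```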
On 2012-4-5, at 12:20 AM, Alon M a...@datonics.com wrote:

 Hello,
 we started to use haproxy in our production environment.
 The requests come to a layer 4 load balancer - ldirector - which forwards
 them to haproxy, which forwards them to a Tomcat on the same server.

 Our connection handling inside Tomcat is fast - ~10 milliseconds - and we
 want to delegate the WAN connection handling to haproxy instead of a
 Tomcat thread, thus getting better throughput.

 We are getting a lot of HTTP request errors; about 10% of the requests
 time out on the client side. Increasing timeout http-request does not
 help; the connection just terminates after the max time has passed.

 echo "show errors" | socat unix-connect:/tmp/haproxy stdio is clear.

 below are a few error records from the log :

 Apr  4 14:01:38 localhost.localdomain haproxy[22451]: 95.206.61.53:62766
 [04/Apr/2012:14:01:28.855] pm pm/NOSRV -1/-1/-1/-1/10001 408 212 - - cR--
 13/13/2/0/0 0/0 BADREQ
 Apr  4 14:01:39 localhost.localdomain haproxy[22451]: 67.197.4.199:55973
 [04/Apr/2012:14:01:29.025] pm pm/NOSRV -1/-1/-1/-1/1 408 212 - - cR--
 12/12/3/0/0 0/0 BADREQ

 here are some details -

 Linux tapp4.ny 2.6.18-274.3.1.el5 #1 SMP Tue Sep 6 20:13:52 EDT 2011 x86_64
 x86_64 x86_64 GNU/Linux


 [root@tapp4 haproxy-1.4.20]# more /etc/issue
 CentOS release 5.5 (Final)
 Kernel \r on an \m


 [root@tapp4 haproxy-1.4.20]# /usr/sbin/haproxy  -vv
 HA-Proxy version 1.4.20 2012/03/10
 Copyright 2000-2012 Willy Tarreau w...@1wt.eu

 Build options :
  TARGET  = linux26
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing
  OPTIONS =

 Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

 Encrypted password support via crypt(3): yes

 Available polling systems :
 sepoll : pref=400,  test result OK
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
 Total: 4 (4 usable), will use sepoll.

 any help would be much appreciated .

 Alon

Can we perform rsp_add for specified hosts?

2012-03-30 Thread fred hu
Hi, All

I need to insert a Set-Cookie response header for requests to a specific host.

I started with the following two lines of configuration:

acl rsptest hdr_beg(host) -i www.google.com
rspadd Set-Cookie:\ GOOGLEID=123456 if rsptest

But all I got was the following warning:

acl 'rsptest' involves some volatile request-only criteria which will be
ignored.

So, does haproxy support rspadd for specific hosts, and if so, how?
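The warning appears because rspadd runs during response processing, when
request-only criteria such as hdr_beg(host) are no longer available. One
possible workaround (a sketch; the backend names and addresses are made
up) is to route matching requests to a dedicated backend and attach the
rspadd there:

```
frontend www
    bind :80
    acl is_google hdr_beg(host) -i www.google.com
    use_backend bk_google if is_google
    default_backend bk_default

backend bk_google
    # every response leaving this backend gets the cookie
    rspadd Set-Cookie:\ GOOGLEID=123456
    server s1 10.0.0.1:8080

backend bk_default
    server s1 10.0.0.1:8080
```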

-- 
*Fred Hu*
*Best Regards*


Is this a problem of stable version 1.4.20?

2012-03-26 Thread fred hu
Hi

Actually, this problem was not introduced in this version; it has existed
for a long time.

The scenario is using haproxy to load-balance HTTP requests across
different cache servers based on a URI hash.

This works well for normal requests. But when end users access via an HTTP
proxy, the URI changes.

The first line of a normal request:
GET /1.jpg HTTP/1.1

The first line of a request via some http proxy:
GET http://www.google.com/1.jpg HTTP/1.1

The result is that the same object is hashed to two different cache
servers [A and B].
But this is not the worst case, since only a little storage is wasted.

The worst case is if the web administrator updates the original 1.jpg and
removes 1.jpg from cache server A.
The out-of-date object stored on cache server B will never be
removed/updated.

I am not sure whether this is a bug, but I guess it might be a problem.
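One possible workaround (a sketch, untested; the regex may need
tightening) is to rewrite the absolute-form request line back to the
relative form before the URI hash is computed, e.g. with reqrep:

```
backend cache
    balance uri
    # strip the "http://host" prefix from absolute-form request lines,
    # so both request forms hash to the same server
    reqrep ^([^\ ]*)\ http://[^/]*(/.*) \1\ \2
    server cache1 10.0.0.1:80
    server cache2 10.0.0.2:80
```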



-- 
*Fred Hu*
*Best Regards*


Is there any method to block malicious clients

2012-03-13 Thread fred hu
Hi, All

We have been using haproxy for LB since 2009.

Recently we encountered some malicious clients sending requests to the
same URL at an especially high rate (>100 r/s, lasting for some minutes).
Is there any way to block such users while continuing to serve the normal
clients? (Surely we have no idea of a malicious user's IP before (s)he
attacks.)
I read the configuration manual and found the fe_sess_rate/be_sess_rate
ACLs, but they seem to apply to all clients.

So, my question here is: can we detect and block a malicious user based on
his request rate?
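For what it's worth, per-source request-rate tracking was being added in
the haproxy 1.5 development branch around this time; a sketch of the idea
(syntax from 1.5-dev, thresholds illustrative, untested):

```
frontend web
    bind :80
    # count HTTP requests per source IP over a 10s sliding window
    stick-table type ip size 200k expire 2m store http_req_rate(10s)
    tcp-request connection track-sc1 src
    # 1000 requests per 10s window is roughly 100 r/s
    acl abuser sc1_http_req_rate gt 1000
    tcp-request content reject if abuser
    default_backend app
```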

Thx!

-- 
*Fred Hu*
*Best Regards*