Re: About si_conn_send_loop()

2013-10-11 Thread Willy Tarreau
Hi Godbach,

On Wed, Oct 09, 2013 at 11:28:32PM +0800, Godbach wrote:
 Hi Willy,
 
 It seems that the loop will only be executed once.
 
 The related code is as below (src/stream_interface.c):
  static int si_conn_send_loop(struct connection *conn)
  {
  ...
  while (!(conn->flags & (CO_FL_ERROR | CO_FL_SOCK_WR_SH | ...))) {
  ...
  ret = conn->xprt->snd_buf(conn, chn->buf, send_flag);
  if (ret <= 0)
  break;
 
  chn->flags |= CF_WRITE_PARTIAL;
 
  if (!chn->buf->o) {
  ...
  }
 
  /* if some data remain in the buffer, it's only because the
  * system buffers are full, so we don't want to loop again.
  */
  break;
  } /* while */
 
  if (conn->flags & CO_FL_ERROR)
  return -1;
 
  return 0;
  }
 Since there is an unconditional 'break' in the last line of the while
 loop, the loop will
 only be executed once. It just looks like an 'if', as below:
  - while (!(conn->flags & (CO_FL_ERROR | CO_FL_SOCK_WR_SH | ...))) {
  + if (!(conn->flags & (CO_FL_ERROR | CO_FL_SOCK_WR_SH | ...))) {

Yes I remember about this change, it happened one year ago in the
following commit :

  ed7f836 BUG/MINOR: stream_interface: don't loop over ->snd_buf()

In 1.4 and 1.5 before the connection rework, the loop was used to
call send() over each buffer's half (when data was wrapped around
at the end). Now the transport protocol's send() function does the
job and we don't need to loop anymore.
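As an aside, the "two send() calls per buffer" situation can be shown with a toy ring buffer: when pending output wraps past the end of storage, it forms two contiguous chunks, and a send function working on raw pointers has to be called once per chunk. This is only a self-contained sketch; the field names loosely imitate haproxy's buffers, but none of this is the real code.

```c
#include <assert.h>
#include <stddef.h>

/* Toy ring buffer: fixed storage, <o> pending output bytes ending just
 * before offset <p> (the next input position). Not haproxy's struct
 * buffer, just an illustration of wrapped output data.
 */
struct toy_buf {
    char   data[16];
    size_t p;   /* offset of the byte following the pending output */
    size_t o;   /* number of pending output bytes */
};

/* Return the length of the first contiguous chunk of pending output and
 * set <ptr> to its start. When the output wraps around the end of the
 * storage, the returned length is smaller than <o> and a second call
 * (after consuming the first chunk) is needed for the remainder: this
 * is why the pre-rework code had to loop over snd_buf().
 */
static size_t first_chunk(const struct toy_buf *b, const char **ptr)
{
    size_t start = (b->p + sizeof(b->data) - b->o) % sizeof(b->data);
    size_t len = b->o;

    if (start + len > sizeof(b->data))
        len = sizeof(b->data) - start;  /* truncate at the wrap point */
    *ptr = b->data + start;
    return len;
}
```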

However, I left the code as is because not having to replace the
breaks by gotos etc... reduced the amount of changes at the time
(we were at a stage where the new connections were not completely
stabilized yet).

Now it's OK. Feel free to send a cleanup patch if you want (and
change the comment at the top of the loop).

Best regards,
Willy




Re: Load Balancer Replacement

2013-10-11 Thread Willy Tarreau
On Thu, Oct 10, 2013 at 04:05:47PM +, Shervey, William E wrote:
 Just reading some of your work online that discusses replacing load balancers
 with haproxy.
 
 It looks like a great solution.  http://haproxy.1wt.eu/
 
 Unfortunately I am simply not smart enough to weed through the architecture
 details to decide whether the tool will work in my environment.
 
 Perhaps you can provide some insight for me.
 
 My servers are VM's and they run Windows Server 2008 R2.
 
 I am currently using load balancers and they sometimes introduce latency that
 causes primary servers to failover to secondary servers.

That sounds quite strange. Even the worst load balancer in the world
should not induce as much latency as a properly tuned VM, so if your
workload is highly sensitive to latency, well, remove the VMs first...
Or maybe your load balancer has a bug that is fixed in a more recent
version ?

 I'm wondering if haproxy or one of the other suggestions in the Other
 Solutions section would work for me?

It really depends on what causes the issues you're facing with your LB.
If your LB has a bug, it's possible that replacing it will help. If it's
just because it runs in a VM and the hypervisor is inducing huge latencies
or dropping packets, it won't help.

You can try to set up haproxy on a dedicated box to run some tests if
you want, it's easy enough and will only cost you a small amount of time.
If you're not experienced with Linux to set up a test machine, feel free
to go to http://www.exceliance.fr/en/ and request an evaluation version
of the ALOHA. It's much easier to set up. The eval version will be
limited to a few connections per second but that will very likely be
sufficient to test the latency effects you're observing.

Best regards,
Willy




Re: Assistance with Redirect

2013-10-11 Thread Willy Tarreau
Hi Errol,

On Mon, Oct 07, 2013 at 06:13:16AM -0400, Errol Neal wrote:
 Hi all. I sent this earlier last week. Still having some trouble getting it 
 to work. 
 I'm just trying to redirect all requests to http://www.$domain.com. 
 Based on some googling, here is what I have:
 
 acl host_www  hdr_beg(host) -i www.
 reqirep ^Host:\ (.*)$ Host:\ www.\1 if !host_www
 
 Then I try to redirect..
 
 redirect code 301 prefix / if host_www
 
 But the above redirect line seems to generate a loop. 
 Can anyone share any thoughts?

Yes, the problem is that the second request will match host_www
and cause the redirect as well. If you're doing that in your
frontend, I can suggest the following trick. Use a backend
dedicated to redirects and send the traffic there when you
see that the request lacks www. :

   frontend foo
   bind :80
   use_backend redirect if !{ hdr_beg(host) -i www. }
   ...

   backend redirect
   reqirep ^Host:\ (.*)$ Host:\ www.\1
   redirect code 301 prefix /

That way the choice is taken when parsing the request and
the redirect is not conditioned on the rewrite.
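For what it's worth, if only one domain is involved, the rewrite can be skipped entirely by redirecting to a fixed prefix (example.com below is just a placeholder for the real domain):

```
   frontend foo
   bind :80
   acl host_www hdr_beg(host) -i www.
   redirect code 301 prefix http://www.example.com if !host_www
   ...
```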

Willy




Re: Page allocation failure

2013-10-11 Thread Willy Tarreau
Hi Jens,

On Mon, Oct 07, 2013 at 11:23:38AM +, Jens Dueholm Christensen wrote:
 A bit of followup (again, sorry for top-posting)..
 
 I began looking around for the first line in the stack trace
 __alloc_pages_nodemask+0x757/0x8d0
 and found a thread on the Linux kernel mailing list:
 http://lkml.indiana.edu/hypermail/linux/kernel/1305.3/01761.html.

Good catch!

 This thread left me wondering a bit, and since I'm not a C[++] programmer,
 I've never dealt with handling memory allocation and the intricacies it
 involves, so I began wondering what and how the order was affecting memory
 allocation..

The order is the base-2 log of the number of consecutive pages the system
tried to allocate at once. So order 0 is 4kB, order 1 is 8kB, order 2 is
16kB etc...
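In code form (a trivial sketch assuming the usual 4 kB x86 page size):

```c
#include <assert.h>

#define PAGE_SIZE 4096UL   /* typical x86 page size, assumed here */

/* An order-n allocation asks for 2^n contiguous pages. */
static unsigned long order_size(unsigned int order)
{
    return PAGE_SIZE << order;
}
```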

 All the errors I've seen in our logs are logged as order-1 failures, and as
 far as I can understand an order-1 allocation error is not necessarily a dead
 end.

That said it's rare. It could mean that your memory is highly fragmented
due to other workloads on the same machine.

 According to
 https://www.kernel.org/doc/gorman/html/understand/understand009.html there
 should be a fallback to a lower-order allocation when a higher-order
 allocation is requested and fails. 

That's indeed sometimes possible (I don't know the exact conditions for
this to happen). That said, the worst that can happen in your case is
that some outgoing packets are dropped and will have to be retransmitted.
If it happens once in a while it might be OK, but it's not pleasant to
have a system log full of errors. Did you try to report the issue to the
distro ? It's important to report bugs, as bugs not reported do not exist.

 According to the same Linux kernel thread, some network drivers are buggy, and
 since this machine contains 6 Broadcom NetXtreme II BCM5709 1000Base-T
 interfaces I am currently looking into upgrading the driver.

Are you running with jumbo frames or is your MTU set to 1500 ? I used to
experience allocation failures with jumbo frames in the past on the e1000
driver, it could be a similar issue here. Maybe your driver allocates send
buffers by large chunks, I don't know.

Regards,
Willy




Re: About si_conn_send_loop()

2013-10-11 Thread Godbach

On 2013/10/11 14:10, Willy Tarreau wrote:

Hi Godbach,

Yes I remember about this change, it happened one year ago in the
following commit :

   ed7f836 BUG/MINOR: stream_interface: don't loop over ->snd_buf()

In 1.4 and 1.5 before the connection rework, the loop was used to
call send() over each buffer's half (when data was wrapped around
at the end). Now the transport protocol's send() function does the
job and we don't need to loop anymore.

However, I left the code as is because not having to replace the
breaks by gotos etc... reduced the amount of changes at the time
(we were at a stage where the new connections were not completely
stabilized yet).

Now it's OK. Feel free to send a cleanup patch if you want (and
change the comment at the top of the loop).

Best regards,
Willy




Hi Willy,

Thank you.

Yes, if 'while' is replaced by 'if', the breaks should be processed 
carefully.


I will send a cleanup patch later. In my opinion, the function name may 
also be changed from si_conn_send_loop() to si_conn_send(), and the code 
in the two lines which call this function will also be modified.


--
Best Regards,
Godbach



Re: Delays from HAProxy

2013-10-11 Thread Willy Tarreau
On Fri, Oct 11, 2013 at 10:59:51AM -0400, Andy M. wrote:
 I looked at my pcap file again.  It looks really weird.  My HAProxy gets
 the GET request, and sends the response.  Then the client resends the GET
 request, and there seems to be a lot of tcp_retransmission and dup ack
 packets.  Here is a picture of one request to my haproxy:
 
 http://i.imgur.com/r3oz6lz.png
 
 Any clue what would cause that problem?

Yes, a typical packet loss between you and the client.

 I tried to change the max_syn_backlog, and somaxconn values to both
 10240/20480 and 262144/262144; neither seems to have solved the problem.

Here it's not the SYN backlog since it's the HTTP request that is retransmitted.
It is possible that your network interface has a defect. It happened to me once,
in a batch of 10 NICs, 3 had defective RAM chips which would randomly corrupt
outgoing packets. Try to use another interface or another switch port.

 Conntrack is not loaded, I checked this a while ago, and I am not using
 anything that would load it.  Here are the commands below.  It also doesn't
 look like anything is being dropped.  The interface I am using is bond1.

Great, since you're using bonding, it's easy to switch to the other NIC and
see if it works better.

From your stats, I'm assuming you're not running with both NICs attached to
the same bond in round robin. I was just checking, because doing so would
expose you to a high probability of reordering packets, which some firewalls
don't accept and will block, causing the client to retransmit.

I think now you need to sniff closer to the client to see where the packets
are lost. If you can make a span on your switch to check if it correctly
receives them, that will help you.

Willy




Proxy Protocol Patch for HAProxy 1.4

2013-10-11 Thread Charles-antoine Guillat-Guignard
Hello,

After testing the proxy protocol feature to balance SMTP connections to
a Postfix 2.10 farm, I have to say it is doing nicely, using HAProxy
1.5-dev19.

Thank you for this very welcome feature.

But I was wondering, is the proxy protocol patch for the current stable
version (1.4.24) publicly available? I have been looking for the patch,
but could not find it, only mentions of it.

Thank you in advance.

Regards,

Charles-Antoine





Re: Huge performance issues with Haproxy + SSL

2013-10-11 Thread steve
Baptiste bedis9@... writes:

 
 Hi Steve,
 
 Can you send us your configuration (anonymised if required).
 We also need your sysctls (at least the one you've modified).
 
 Baptiste
 
 On Fri, Oct 11, 2013 at 4:43 AM, steve blogad69@... wrote:
  I have been working on troubleshooting Haproxy 1.5-dev19 with SSL for
  the last day or so, on CentOS 6.4 64-bit.
 
  Latest OpenSSL 1.0.1e compiled, recompiled haproxy with: make -s
  TARGET=linux2628 USE_EPOLL=1 USE_OPENSSL=1 ARCH=x86_64 clean all
 
  SSL cert wild card, plus godaddy intermediate and our key.
 
  Our current set of issues we are seeing:
  * Massive amounts of connection refused when running the test with SSL
  * Very high usage of CPU on this 8-core, 32 GB box with 100 GB SSD and
  1 Gb NIC
  * Maybe 1/4 the amount of traffic we can push through, compared to a
  non-SSL test
 
  We are using Jmeter to load test and blazemeter to do up to 40k jmeter
  threads for a full hour.
 
  Here is a list of the errors that are spit back after the test is done
  Response codes
 
  response code / count / response message
  400 / 29 / Bad request
 
  Non HTTP response code: javax.net.ssl.SSLPeerUnverifiedException /
  86069 / Non HTTP response message: peer not authenticated
 
  Non HTTP response code: org.apache.http.conn.HttpHostConnectException /
  27229 / Non HTTP response message: Connection to https://.com: refused
 
  Non HTTP response code: java.net.SocketException /
  88 / Non HTTP response message: Connection reset
 
  4122 Precondition Failed
  Non HTTP response code: org.apache.http.NoHttpResponseException /
  270 / Non HTTP response message: The target server failed to respond
 
  So this is what we are facing; we are not haproxy experts and think we
  have taken it to the best of what we understand about haproxy config and
  settings.
 
  Special note: we do not have a web site on the backend, it's a user
  server for an upcoming game we are working on, so the stack is quite
  simple: haproxy -> node.js -> db and back.
 
  Json data is posted to the user server and returned.
 
 
 
 
 
Word of warning: we are not haproxy experts, so we are not 100% sure if in our 
config we have a proper setting to handle 40k requests a second.. so bear 
with us..


global
log /dev/log local0 #notice
maxconn 31500
#tune.bufsize 128000
user netcom
group netcom
pidfile /home/netcom/haproxy.pid
daemon
#nbproc 7
#debug
#quiet

defaults
log global
#mode http
mode tcp
### Options ###
#option httplog
option tcplog
#option logasap
option dontlog-normal
#option dontlognull
option redispatch
#option httpchk GET /?method=echo HTTP/1.1
option tcp-smart-accept
option tcp-smart-connect
#option http-server-close
#option httpclose
#option forceclose
### load balance strategy ###
#balance leastconn
balance roundrobin
### Other ###
retries 5
maxconn 31500
backlog 10
### Timeouts ###
#timeout client  25s
timeout client  60s
#timeout connect  5s
timeout connect 60s
#timeout server  25s
timeout server  60s
timeout tunnel  3600s
timeout http-keep-alive  1s
#timeout http-request  15s
timeout http-request  60s
#timeout queue   30s
timeout queue   30s
timeout tarpit  60s

listen stats *:1212
mode http
stats enable
stats show-node
stats show-desc AquaProxy
stats realm  AquaProxy\ Statistics
stats auth   xxx:xxx
stats refresh 5s
stats uri /

## HTTP ##
frontend http-in
bind *:
acl user_request url_reg method=user.register
use_backend user_group_http if user_request
default_backend other_group_http

backend user_group_http
stick-table type ip size 200k expire 30m
stick on src
server n2 x.195: maxconn 3500 check port 8097 inter 2000
server n10 x.197: maxconn 3500 check port 8097 inter 2000
server n13 x.199: maxconn 3500 check port 8097 inter 2000
server n15 x.201: maxconn 3500 check port 8097 inter 2000
server n21 x.202: maxconn 3500 check port 8097 inter 2000

backend other_group_http
stick-table type ip size 200k expire 30m
stick on src
server n3 x.196: maxconn 3500 check port 8097 inter 2000
server n11 x.198: maxconn 3500 check port 8097 inter 2000
server n14 x.200: maxconn 3500 check port 8097 inter 2000
server n22 x.203: maxconn 3500 check port 8097 inter 2000

## HTTPS ##
frontend https-in
bind *:
acl user_request url_reg method=user.register

Re: Huge performance issues with Haproxy + SSL

2013-10-11 Thread steve
Willy Tarreau w at 1wt.eu writes:

 
 On Fri, Oct 11, 2013 at 02:43:35AM +, steve wrote:
   I have been working on troubleshooting Haproxy 1.5-dev19 with SSL for
   the last day or so, on CentOS 6.4 64-bit.
   
   Latest OpenSSL 1.0.1e compiled, recompiled haproxy with: make -s
   TARGET=linux2628 USE_EPOLL=1 USE_OPENSSL=1 ARCH=x86_64 clean all
  
  SSL cert wild card, plus godaddy intermediate and our key.
  
  Our current set of issues we are seeing:
   * Massive amounts of connection refused when running the test with SSL
   * Very high usage of CPU on this 8-core, 32 GB box with 100 GB SSD and
   1 Gb NIC
   * Maybe 1/4 the amount of traffic we can push through, compared to a
   non-SSL test
  
  We are using Jmeter to load test and blazemeter to do up to 40k jmeter 
  threads for a full hour.
 
 Are you sure your haproxy settings support these 40k concurrent 
connections ?

That's a good question; I posted the config above.. we are still getting our 
feet wet with this system.
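For reference, holding 40k concurrent connections would at minimum need the maxconn limits raised above that, plus a matching kernel backlog; the sketch below is only illustrative sizing, not a tested recommendation:

```
global
    maxconn 45000        # > 40k expected concurrent connections

defaults
    maxconn 45000        # per-proxy limit must also be raised

# and on the kernel side, something like:
#   sysctl -w net.core.somaxconn=45000
#   sysctl -w net.ipv4.tcp_max_syn_backlog=45000
```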

 
  Here is a list of the errors that are spit back after the test is done
  Response codes
  
   response code / count / response message
   400 / 29 / Bad request
 
  This means SSL could pass through, but that it's the tester which is
 sending bad requests. Quite concerning in fact, because from this point
 it's permitted to doubt everything else...
 
  Non HTTP response code: javax.net.ssl.SSLPeerUnverifiedException
  86069 Non HTTP response message: peer not authenticated
 
 Possibly aborted handshakes.
 
  Non HTTP response code: org.apache.http.conn.HttpHostConnectException 
27229 
  Non HTTP response message: Connection to https://.com: refused
 
 Huh ? did you stop and restart haproxy during the test ? Are you sure
 the connectivity between the client and haproxy is OK ? A connection
 refused can only happen when either the server is stopped or when there
 is one component between the client and the server which explicitly sends
 RST packets (eg: a firewall), or some ICMP admin prohibited packets (eg:
 a router).
 
This might have been the case; we are about to run another load test and make 
sure nothing else is done during this 1-hour test.

  Non HTTP response code: java.net.SocketException
  88 Non HTTP response message: Connection reset
 
  4122 Precondition Failed
   Non HTTP response code: org.apache.http.NoHttpResponseException / 270 /
   Non HTTP response message: The target server failed to respond
  
   So this is what we are facing; we are not haproxy experts and think we
   have taken it to the best of what we understand about haproxy config and
   settings.
 
   Special note: we do not have a web site on the backend, it's a user
   server for an upcoming game we are working on, so the stack is quite
   simple: haproxy -> node.js -> db and back.
 
  Json data is posted to the user server and returned.
 
 OK. Anyway this is like a website and it must work!
 
 You need to check haproxy's logs to see if it *receives* the requests that
 are reported to fail, or if it logs failed handshakes.
 
  Since you're reporting a high CPU usage, it is also possible that the
 client renegotiates a new key for each request, which might or might not
 match what you expect from your target. For example, if each of your
 clients does only one request and leaves, this is OK. But if you have
 only 40k concurrent clients which do a number of requests, they will
 only negotiate once at the beginning of their session.
 
 

So.. if I understand: an IP comes in, it creates a new registration for a user 
(if they are not already in the game), data is returned to the client; after a 
few seconds or so a game update is passed to the user server to store the 
latest changes and a reply is sent back, then it's rinse and repeat for this 
user.

Ideally we only need one session for this.. instead of 1 for registration and 
1 for each game state or call until they finish.. ?  Is that what you're 
getting at? Because I *think* we might be doing one for everyone.. not sure 
though.

We are also looking at terminating the SSL on the backends instead of on 
haproxy, and passing it through via TCP mode in haproxy, which might spread 
the SSL load a bit more. However, with our load tests we only have a few IPs, 
so it seems that only certain backend servers get hit since it's from the 
same IP.. ?

thanks!

 Regards,
 Willy
 
 






A sharp eye: excellent vision

2013-10-11 Thread deceased07
You can restore your vision without taking time off work! Effortlessly and
with satisfaction! http://s.coop/1td2r 


Re: Need help with 1.5 crashing when browser refreshed

2013-10-11 Thread Kevin
My initial builds were done using HomeBrew for both 1.4.24 and 1.5dev19. 

It is configured with the following arguments to make:
TARGET=generic USE_KQUEUE=1 USE_POLL=1 USE_PCRE=1

The 1.5dev19 settings add to those:
USE_OPENSSL=1 USE_ZLIB=1 ADDLIB=-lcrypto

When I did my test compiles I duplicated those parameters. I read somewhere 
that the OSX makefile didn't work, so I didn't spend any time trying it, 
since it seemed the homebrew options worked fine with 1.4.24 in my testing.

Here is the -vv output from the currently working version.

HA-Proxy version 1.5-dev19 2013/06/17
Copyright 2000-2013 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = generic
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing
  OPTIONS = USE_ZLIB=1 USE_POLL=1 USE_KQUEUE=1 USE_OPENSSL=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): no
Built with zlib version : 1.2.5
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 0.9.8y 5 Feb 2013
Running on OpenSSL version : OpenSSL 0.9.8y 5 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built without PCRE support (using libc's regex instead)

Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.


Here is the -vv from the originally compiled version that exhibits the bug.

HA-Proxy version 1.5-dev19 2013/06/17
Copyright 2000-2013 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = generic
  CPU = generic
  CC  = cc
  CFLAGS  = 
  OPTIONS = USE_ZLIB=1 USE_POLL=1 USE_KQUEUE=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): no
Built with zlib version : 1.2.5
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 0.9.8r 8 Feb 2011
Running on OpenSSL version : OpenSSL 0.9.8y 5 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.33 2013-05-28
PCRE library supports JIT : no (USE_PCRE_JIT not set)

Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.



- Kevin

On Oct 11, 2013, at 12:58 AM, Willy Tarreau w...@1wt.eu wrote:

 Hi Kevin,
 
 On Thu, Oct 10, 2013 at 08:28:07PM -0500, Kevin wrote:
 So after some more troubleshooting, the problem seems to be related to PCRE.
 If I compile without it I don't see the problem. In 1.4 there does not
 appear to be any issue using PCRE.
 
 OK, thanks for tracking this down. I'm not seeing changes specific to
 PCRE in 1.5 except the support for the JIT version which is not enabled
 by default (you need USE_PCRE_JIT for this).
 
 Just a quick question, are you using the GNU make file (Makefile) or
 the OSX Makefile (Makefile.osx) to build haproxy ?
 
 Could you please send the complete output of haproxy -vv ?
 
 It's also possible that you're hitting a completely different bug
 that is triggered by the use of PCRE but not related to it (eg: a
 use-after-free or something like this).
 
 Thanks,
 Willy
 
 
 




HTTP and send-proxy

2013-10-11 Thread jinge
Hi all!


I want to use the haproxy PROXY protocol for our use case, to send our clients' 
IP addresses to the peer haproxy. But after I configured send-proxy and 
accept-proxy in the configuration, the web request never gets a successful 
response; the 503 error is always there.

The configuration is below:
ha-L0.conf
--
# frontend ##
frontend tcp-in
bind 192.168.137.41:2220 
bind 192.168.132.41:2221 
bind 192.168.133.41: 
mode tcp
log global
option tcplog

#distingush HTTP and non-HTTP
tcp-request inspect-delay 30s
tcp-request content accept if HTTP

#ACL DEFINE 
acl squid_incompatiable-Host hdr_reg(Host) -f 
/usr/local/etc/acl-define.d/squid_incompatiable-Host.txt
acl direct-dstip dst -f /usr/local/etc/acl-define.d/direct_out-dst.txt
#ACL DEFINE of websocket
acl missing_host hdr_cnt(Host) eq 0
acl QQClient hdr(User-Agent) -i QQClient
acl has_range hdr_cnt(Range) gt 0

#ACTION 
use_backend Direct if !HTTP 
use_backend Direct if HTTP_1.1 missing_host
use_backend Direct if direct-dstip
use_backend Direct if METH_CONNECT 
use_backend Direct if QQClient 
default_backend HAL1


backend HAL1
mode http
log global
source 0.0.0.0
server ha2-l1-n1  localhost:3330 send-proxy

ha-L1.conf
--
# frontend ##
frontend localhostlister
bind localhost:3330 accept-proxy
mode http

#ACL DEFINE 
acl direct-dstip dst -f /usr/local/etc/acl-define.d/direct_out-dst.txt
#ACL DEFINE of websocket
acl is_websocket hdr(Upgrade) -i WebSocket
acl is_websocket hdr_beg(Host) -i ws
acl missing_host hdr_cnt(Host) eq 0
acl QQClient hdr(User-Agent) -i QQClient
acl has_range hdr_cnt(Range) gt 0

#ACTION 
use_backend NginxClusterWebsockets if is_websocket
default_backend SquidCluster

backend SquidCluster
mode http
option forwardfor header X-Client
balance uri whole
log global
acl mgmt-src src  -f /usr/local/etc/acl-define.d/mgmt-src.txt

errorfile 502 /usr/local/etc/errorfiles/504.http
acl is_internal_error status ge 500
rspideny . if  is_internal_error !mgmt-src

rspidel ^via:.* unless mgmt-src
rspidel ^x-cache:* unless mgmt-src
rspidel ^x-cache-lookup:* unless mgmt-src
rspidel ^X-Ecap:* unless mgmt-src
source 0.0.0.0 
option httpchk GET http://192.168.172.4/check.txt
server sq-L1-n1a x.x.x.x:3129   weight 20 check inter 5s maxconn 1


And using the haproxy -d argument, we found that ha0 seems to never send the 
message to ha1:


0090:HAL1.clireq[0019:]: GET http://www.taobao.com/ HTTP/1.1
0090:HAL1.clihdr[0019:]: User-Agent: curl/7.26.0
0090:HAL1.clihdr[0019:]: Host: www.taobao.com
0090:HAL1.clihdr[0019:]: Accept: */*
0090:HAL1.clihdr[0019:]: Proxy-Connection: Keep-Alive
008d:HAL1.clicls[000e:001a]
008d:HAL1.closed[000e:001a]

Can anyone help with what the problem is here?



---
Regards
Jinge






RE;RE; Sample\ Order

2013-10-11 Thread christine



Hello, wish you a nice day!
We are an LED maker. Our product range includes work lights, LED strips, and
light bars. We are pleased to enclose some for your reference. If you are
interested in our products, please contact me for a quotation.




Re: Need help with 1.5 crashing when browser refreshed

2013-10-11 Thread Willy Tarreau
Hi Kevin,

On Fri, Oct 11, 2013 at 07:39:32PM -0500, Kevin wrote:
 My initial builds were done using HomeBrew for both 1.4.24 and 1.5dev19. 
 
 It is configured with the following arguments to make:
 TARGET=generic USE_KQUEUE=1 USE_POLL=1 USE_PCRE=1
 
 The 1.5dev19 settings add to those:
 USE_OPENSSL=1 USE_ZLIB=1 ADDLIB=-lcrypto
 
 When I did my test compiles I duplicated those parameters. I read
 somewhere that the OSX makefile didn't work, so I didn't spend any time
 trying it, since it seemed the homebrew options worked fine with 1.4.24
 in my testing.

OK.

 Here is the -vv output from the currently working version.
 
 HA-Proxy version 1.5-dev19 2013/06/17
 Copyright 2000-2013 Willy Tarreau w...@1wt.eu
 
 Build options :
   TARGET  = generic
   CPU = generic
   CC  = gcc
   CFLAGS  = -O2 -g -fno-strict-aliasing
   OPTIONS = USE_ZLIB=1 USE_POLL=1 USE_KQUEUE=1 USE_OPENSSL=1
 
 Default settings :
   maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200
 
 Encrypted password support via crypt(3): no
 Built with zlib version : 1.2.5
 Compression algorithms supported : identity, deflate, gzip
 Built with OpenSSL version : OpenSSL 0.9.8y 5 Feb 2013
 Running on OpenSSL version : OpenSSL 0.9.8y 5 Feb 2013
 OpenSSL library supports TLS extensions : yes
 OpenSSL library supports SNI : yes
 OpenSSL library supports prefer-server-ciphers : yes
 Built without PCRE support (using libc's regex instead)
 
 Available polling systems :
  kqueue : pref=300,  test result OK
poll : pref=200,  test result OK
  select : pref=150,  test result OK
 Total: 3 (3 usable), will use kqueue.
 
 
 Here is the -vv from the originally compiled version that exhibits the bug.
 
 HA-Proxy version 1.5-dev19 2013/06/17
 Copyright 2000-2013 Willy Tarreau w...@1wt.eu
 
 Build options :
   TARGET  = generic
   CPU = generic
   CC  = cc
   CFLAGS  = 
   OPTIONS = USE_ZLIB=1 USE_POLL=1 USE_KQUEUE=1 USE_OPENSSL=1 USE_PCRE=1
 
 Default settings :
   maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200
 
 Encrypted password support via crypt(3): no
 Built with zlib version : 1.2.5
 Compression algorithms supported : identity, deflate, gzip
 Built with OpenSSL version : OpenSSL 0.9.8r 8 Feb 2011
 Running on OpenSSL version : OpenSSL 0.9.8y 5 Feb 2013
 OpenSSL library supports TLS extensions : yes
 OpenSSL library supports SNI : yes
 OpenSSL library supports prefer-server-ciphers : yes
 Built with PCRE version : 8.33 2013-05-28
 PCRE library supports JIT : no (USE_PCRE_JIT not set)
 
 Available polling systems :
  kqueue : pref=300,  test result OK
poll : pref=200,  test result OK
  select : pref=150,  test result OK
 Total: 3 (3 usable), will use kqueue.

So everything is normal but it crashes. At this time, I'm inclined to
believe the following causes in order of most to least likely :

  - bug in libpcre 8.33

  - bug in how haproxy uses libpcre which is revealed by 8.33

  - general bug in haproxy that is revealed on your platform when pcre
is used

For the last point, you could attempt something: run haproxy with the -dM
argument. It enables memory poisoning, which consists in filling all
structures with a byte before using them. This immediately catches
pointers that are dereferenced before being initialized. You may want
to test with and without libpcre. Maybe it will crash from the very
first request when using libpcre now, proving there is something wrong
in our code.
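The idea behind -dM can be sketched in a few lines; this is a standalone illustration with an arbitrary poison byte, not haproxy's actual pool code:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define POISON_BYTE 0x50   /* arbitrary fill value for this sketch */

/* Allocate <size> bytes pre-filled with a poison pattern. A structure
 * obtained this way never contains a plausible leftover pointer, so a
 * field read before being initialized yields 0x5050... and typically
 * crashes on the very first dereference instead of much later.
 */
static void *poisoned_alloc(size_t size)
{
    void *p = malloc(size);

    if (p)
        memset(p, POISON_BYTE, size);
    return p;
}
```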

Thanks,
Willy