RE: Apache translates 500 to 502 from haproxy
Well, all the problems, both the original one that we hit a couple of months ago and the current one, come down to one thing: Apache expects some request/response to be read by the downstream haproxy (and its backends), which refuses to do so due to some error condition and instead abruptly sends back an error status such as 404, 502, or 401. Haproxy seems to send a correct response back to Apache, as we have seen before; it is Apache that misinterprets it. I definitely need to reproduce this problem in a test environment and see what the real cause could be. I will keep you posted.

Thanks
Sachin

-Original Message-
From: Willy Tarreau [mailto:w...@1wt.eu]
Sent: Tuesday, September 20, 2011 10:31 AM
To: Sachin Shetty
Cc: 'Cassidy, Bryan'; haproxy@formilux.org; 'Amrit Jassal'
Subject: Re: Apache translates 500 to 502 from haproxy

Hi Sachin,

On Mon, Sep 19, 2011 at 01:47:28PM +0530, Sachin Shetty wrote:
> Hey Willy,
> So we are now hit by the side effect of this fix, i.e. disabling httpclose.
> Two problems:
>
> 1. Entries in the log are missing; I guess you already warned me about it.
> Do you think that if we disable keep-alive in our Apache fronting haproxy,
> this problem will go away?

Yes, it will solve this issue at least. BTW, with what I saw in your trace, I really see no reason why http-server-close would not work, because the server advertises a correct Content-Length, so haproxy should wait for both streams to synchronize. Are you sure you had http-server-close in both the frontend and the backend, and that you didn't have any remains of forceclose nor httpclose? Just in doubt, if you're willing to make a new test, I'm interested in a new trace :-)

> 2. Related to the first, but an interesting one.
> - A request comes to haproxy; as configured, after waiting in the haproxy
> queue for 10 seconds because no free backend connection is available, it
> sends a 503 back, logged correctly in both haproxy and Apache.
> - The client retries, I think with keep-alive over the same connection, and
> it sees a 400 status back.
> Now this request is nowhere in the haproxy logs, so there is no way to see
> what happened in haproxy and who really dropped the ball. The connection
> never made it to the backend cherrypy server, since it logs each request it
> receives.

When you see the 400, is it the standard haproxy response or is it Apache's? If it is haproxy's, you should see it in its logs, which doesn't seem to be the case. It is possible that the client (or Apache?) continues to send a bit of the remaining POST data before the request, and that this confuses the next hop (Apache or haproxy). That's just a guess, of course.

Cheers,
Willy
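[Editor's note] Willy's point about lingering close options can be illustrated with a minimal configuration sketch. All names, addresses, and timeouts below are made up for illustration; the key idea is that http-server-close is set once in the defaults section so it applies to both the frontend and the backend, with no httpclose or forceclose left anywhere:

```
defaults
    mode http
    # Close the server-side connection after each response while keeping
    # the client-side connection alive. Do NOT combine this with a leftover
    # "option httpclose" or "option forceclose" in any section.
    option http-server-close
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind 0.0.0.0:80
    default_backend app

backend app
    # The server must advertise a correct Content-Length (or use chunked
    # encoding) for haproxy to synchronize both streams cleanly.
    server app1 10.0.0.10:8080 check
```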
Re: haproxy at amazon
On Tuesday 20 of September 2011 02:02:27, Dean Hiller wrote:
> We are running haproxy at Amazon, running some load tests, and we seem to be
> hitting some bottleneck between haproxy and the webservers, or in haproxy
> itself. How can you tell when haproxy is maxed out? Will CPU hit 100% or is
> it some other characteristic? Our CPU is at 4%, and I only have 10
> webservers and 10 clients; each of my 10 clients generates about 1000
> requests/second on a socket, and each one is stateless and independent of
> the others; no session is saved at all.

If you configure it correctly (haproxy bound to some specific core, network interrupts bound to some other core sharing an L2 cache with the haproxy core), you should see 100% on the haproxy core (70% system, 30% user if running in L7 mode with few ACLs and rewrites) and around 25% on the core servicing network interrupts. In full HTTP tunneling mode you should see both cores saturated at 100%. You should also check traffic on your haproxy host in both directions using tcpdump.

Regards,
Brane
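[Editor's note] The binding Brane describes can be sketched with standard Linux tools. This is an assumption-laden example, not a recipe: the core numbers, the IRQ number (24), and the shared-L2 layout are all hypothetical and must be checked against the actual machine (/proc/interrupts, CPU topology), and irqbalance will undo the IRQ setting unless stopped.

```
# Pin the running haproxy process to core 1 (assumes a single haproxy pid).
taskset -pc 1 "$(pidof haproxy)"

# Steer the NIC interrupt (hypothetically IRQ 24) to core 0.
# The value is a CPU bitmask: 0x1 = CPU0. Requires root.
echo 1 > /proc/irq/24/smp_affinity

# Watch per-core utilization to confirm the split Brane describes.
mpstat -P ALL 1
```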
Re: Caching
Hi,

What do you mean when you say "running -c"? Here's my config file. Thanks for your help.

Christophe

global
    log 192.168.0.2 local0
    log 127.0.0.1 local1 notice
    maxconn 10240

defaults
    log global
    option dontlognull
    retries 2
    timeout client 35s
    timeout server 35s
    timeout connect 5s
    timeout http-keep-alive 10s

listen WebPlayer-Farm 192.168.0.2:80
    mode http
    option httplog
    balance source
    #balance leastconn
    option forwardfor
    stats enable
    option http-server-close
    server Player1 192.168.0.10:80 check
    server Player2 192.168.0.11:80 check
    server Player3 192.168.0.12:80 check
    server Player4 192.168.0.13:80 check
    server Player5 192.168.0.14:80 check
    option httpchk HEAD /checkcf.cfm HTTP/1.0

listen WebPlayer-Farm-SSL 192.168.0.2:443
    mode tcp
    option ssl-hello-chk
    balance source
    server Player1 192.168.0.10:443 check
    server Player2 192.168.0.11:443 check
    server Player3 192.168.0.12:443 check
    server Player4 192.168.0.13:443 check
    server Player5 192.168.0.14:443 check

listen Manager-Farm 192.168.0.2:81
    mode http
    option httplog
    balance source
    option forwardfor
    stats enable
    option http-server-close
    server Manager1 192.168.0.60:80 check
    server Manager2 192.168.0.61:80 check
    option httpchk HEAD /testcf/checkcf.cfm HTTP/1.0

listen Manager-Farm-SSL 192.168.0.2:444
    mode tcp
    option ssl-hello-chk
    balance source
    server Manager1 192.168.0.60:443 check
    server Manager2 192.168.0.61:443 check

listen info 192.168.0.2:90
    mode http
    balance source
    stats uri /

On 20/09/11 01:27, "Hank A. Paulson" h...@spamproof.nospammail.net wrote:

> You can get weird results like this sometimes if you don't use httpclose or
> any other HTTP closing option on HTTP backends. You should paste your
> config. Maybe there should be a warning, if there is not already, for that
> situation, maybe just when running -c.
>
> On 9/19/11 5:46 AM, Christophe Rahier wrote:
> > I don't use Apache but IIS. I tried to disable caching on IIS but the
> > problem is still there. There's no proxy; all requests are sent from
> > pfSense.
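[Editor's note] The "-c" Hank mentions is haproxy's configuration-check mode: it parses and validates the configuration file, printing warnings and errors, without starting the proxy. A quick sketch (the file path is an assumption):

```
# Validate the configuration; prints warnings such as missing close
# options, then exits without binding any listeners.
haproxy -c -f /etc/haproxy/haproxy.cfg
```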
Christophe

On 19/09/11 13:45, "Baptiste" bed...@gmail.com wrote:

> Hi Christophe,
>
> HAProxy is *only* a reverse proxy. There are no caching functions in it.
> Have you tried browsing your backend servers directly? Could it be related
> to your browser's cache?
>
> Cheers
>
> On Mon, Sep 19, 2011 at 1:39 PM, Christophe Rahier
> christo...@qualifio.com wrote:
> > Hi,
> >
> > Is there a caching system in HAProxy? In fact, we find that when we put
> > new files online (CSS, for example), they are not served immediately; it
> > usually takes about ten minutes.
> >
> > Thank you in advance for your help.
> >
> > Christophe
Rewriting server errors, rspdeny challenges
We have various services that expose internal errors, which I am trying to mask with haproxy. The only keyword I can find that can look at the response at all is rspdeny. The documentation says:

    It is easier, faster and more powerful to use ACLs to write access
    policies. Rspdeny should be avoided in new designs.

But is there any way to block responses without using rspdeny?

Next issue: rspdeny is not able to look at the URL:

    acl is-gif path_end .gif
    acl is-internal-error status ge 500
    rspdeny . if is-gif is-internal-error

    [WARNING] 262/123207 (2466) : parsing [haproxy.cfg:123] : acl 'is-gif'
    involves some volatile request-only criteria which will be ignored.

I can split into separate backends depending on the URL, but then it starts getting complicated with setting proper maxconn values etc., so I'd rather avoid that. Is there a better way?

- Finn Arne
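[Editor's note] The backend-split workaround Finn Arne mentions can be sketched as follows (names and addresses are made up): request-time criteria such as path_end go in the frontend to pick the backend, and the backend then applies rspdeny against the response status line, which rspdeny can match. A matched response is blocked and replaced by haproxy's own error page, hiding the internal error text.

```
frontend www
    bind 0.0.0.0:80
    mode http
    # Request-time criteria belong in the frontend ...
    acl is-gif path_end .gif
    use_backend gifs if is-gif
    default_backend app

backend gifs
    mode http
    # ... and response-time filtering in the backend: deny any 5xx
    # response by matching the status line (spaces in the regex must be
    # backslash-escaped).
    rspdeny ^HTTP/1\.[01]\ 5[0-9][0-9]
    server app1 10.0.0.10:8080 check

backend app
    mode http
    server app1 10.0.0.10:8080 check
```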