Re: Question about TCP balancing
On Wed, Aug 05, 2009 at 06:30:39AM +0200, Willy Tarreau wrote:

    frontend my_front
        acl near_usable nbsrv(near) ge 2
        acl far_usable  nbsrv(far)  ge 2
        use_backend near if near_usable
        use_backend far  if far_usable
        # otherwise error

    backend near
        balance roundrobin
        server near1 1.1.1.1 check
        server near2 1.1.1.2 check
        server near3 1.1.1.3 check

    backend far
        balance roundrobin
        server far1 2.1.1.1 check
        server far2 2.1.1.2 check
        server far3 2.1.1.3 check

Aha, I had already come to such a solution and noticed that it only works in HTTP mode. Since I do not actually want to parse HTTP-specific information, I want to stay in TCP mode (but still use ACLs with nbsrv). So I should stick with 1.4 for that purpose, right? Or does HTTP mode act like TCP mode unless I actually use something HTTP-specific? In other words, will the above configuration (used in HTTP mode) actually try to parse HTTP headers (and waste CPU cycles doing so)? Thanks.
Re: 1.4 dev 1 under FreeBSD 7.2 and gmake-3.81_3 error when compiling
Hi Willy, The same thing also happens in backend.c and proto_tcp.c... I've added the types.h include before tcp.h and that fixed it. But then this error popped up:

    gmake USE_PCRE=1 TARGET=freebsd
    gcc -Iinclude -Wall -O2 -g -DTPROXY -DENABLE_POLL -DENABLE_KQUEUE -DUSE_PCRE \
        -I/usr/local/include -DCONFIG_HAPROXY_VERSION=\"1.4-dev1\" \
        -DCONFIG_HAPROXY_DATE=\"2009/07/27\" -c -o src/proto_tcp.o src/proto_tcp.c
    src/proto_tcp.c: In function 'tcp_bind_listener':
    src/proto_tcp.c:256: error: 'SOL_TCP' undeclared (first use in this function)
    src/proto_tcp.c:256: error: (Each undeclared identifier is reported only once
    src/proto_tcp.c:256: error: for each function it appears in.)
    gmake: *** [src/proto_tcp.o] Error 1

I just changed SOL_TCP to 6 (the protocol number, as stated in the man page) and it compiled... It seems FreeBSD 7.2 no longer has SOL_TCP at all (not even in the sources); they've got a system using /etc/protocols instead. From the setsockopt man page:

    To manipulate options at any other level the protocol number of the
    appropriate protocol controlling the option is supplied.  For example,
    to indicate that an option is to be interpreted by the TCP protocol,
    level should be set to the protocol number of TCP; see getprotoent(3).

Willy Tarreau wrote: Hi, On Tue, Aug 04, 2009 at 01:10:52AM +0200, Andrew Azarov wrote: Hi, While compiling the 1.4 dev 1 snapshot I encountered this error:

    fw# gmake USE_PCRE=1 TARGET=freebsd
    (...)
    gcc -Iinclude -Wall -O2 -g -DTPROXY -DENABLE_POLL -DENABLE_KQUEUE -DUSE_PCRE \
        -I/usr/local/include -DCONFIG_HAPROXY_VERSION=\"1.4-dev1\" \
        -DCONFIG_HAPROXY_DATE=\"2009/07/27\" -c -o src/stream_sock.o src/stream_sock.c
    In file included from src/stream_sock.c:19:
    /usr/include/netinet/tcp.h:40: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'tcp_seq'
    /usr/include/netinet/tcp.h:50: error: expected specifier-qualifier-list before 'u_short'
    /usr/include/netinet/tcp.h:175: error: expected specifier-qualifier-list before 'u_int8_t'
    gmake: *** [src/stream_sock.o] Error 1

Quite strange, it means they provide a header which does not resolve its own dependencies. Could you please try to add the following line before #include <netinet/tcp.h> in src/stream_sock.c:

    #include <sys/types.h>

I think it should fix the issue. If so, I'll add it to the source. Thanks! Willy
Re: disable-on-404
Kent Noonan wrote: Hello all... I am working on a new setup and am having an issue that, I will admit, is probably me misreading the docs. We have a couple of other load balancing solutions, so I am not new to the concept; this is just our first use of haproxy. I have 5 backend servers and I am trying to configure things so they can be taken out of service based on the availability of a file. Here is the relevant config section:

    backend application-servers
        balance roundrobin
        appsession session_id len 32 timeout 1h
        mode http
        http-check disable-on-404
        option httpchk /alive.htm
        server bb-app1 10.200.35.1:80 check
        server bb-app2 10.200.35.2:80 check
        server bb-app3 10.200.35.3:80 check
        server bb-app4 10.200.35.4:80 check
        server bb-app5 10.200.35.5:80 check

What I am trying to do: if a file exists at the URI /alive.htm, the server is available. If that file gets deleted off the server, that server is taken out of service for new connections while still allowing existing connections. For some reason this isn't working; looking at the stats page, all servers show UP even though only one of them has the /alive.htm file on it. I am running 1.3.19 on a 64-bit architecture. Can anyone see what I am missing, or have any other words of wisdom to get this working for me? Thanks, Kent

I was using

    option httpchk HEAD /nagios.htm HTTP/1.0\r\nHost: 127.0.0.1

and no special http-check disable-on-404, and it worked fine, in 1.3.15.4 at least.
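For comparison, a variant of Kent's backend using Benoit's HEAD-based probe might look like the sketch below (the space after "Host:" is backslash-escaped, as haproxy's config parser splits arguments on unescaped spaces; whether to keep disable-on-404 depends on wanting the drain-on-404 behaviour rather than an outright DOWN):

```
backend application-servers
    mode http
    balance roundrobin
    # HEAD keeps the probe cheap; a 2xx/3xx answer marks the server UP
    option httpchk HEAD /alive.htm HTTP/1.0\r\nHost:\ 127.0.0.1
    # with this, a 404 on the check puts the server in soft-stop:
    # existing sessions continue, no new connections are sent to it
    http-check disable-on-404
    server bb-app1 10.200.35.1:80 check
    server bb-app2 10.200.35.2:80 check
```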
Re: implementing delay...
Hi Willy, 1.4 seems not to work; for example I have the following:

    frontend xx :80
        mode http
        tcp-request inspect-delay 30s
        acl to_delay hdr_reg Opera.*
        tcp-request content accept if to_delay WAIT_END

Maybe it is because of the capture cookie or redirect which I have further down? P.S.: Sorry for the William. Willy Tarreau wrote: Hi Andrew, On Mon, Aug 03, 2009 at 02:18:21AM +0200, Andrew Azarov wrote: BTW, with 1.4 you could already do this without additional work, because in 1.4 you can use HTTP ACLs in TCP rules. You just need to wait for a whole HTTP request first. It would look like this:

    1 frontend XXX
    2     mode http
    3     tcp-request inspect-delay 30s
    4     tcp-request content reject if ! HTTP
    5     acl to_delay hdr_sub(user-agent) -i Opera
    6     ( as many other tests as you want )
    7     tcp-request content accept if to_delay WAIT_END

What this does:
3) set the maximum TCP content inspection delay to the pause you want to induce.
4) wait for a complete HTTP request. As long as the TCP contents look like HTTP but the request is not complete, the rule will block and tell the caller that it wants more data. If it sees that it's not valid HTTP protocol, it will drop the request. If it sees a complete HTTP request, it will pass on to the next rules.
5) write as many tests as you want for to_delay, indicating which criteria will introduce a delay. You can also write as many ACLs as you want and combine them later.
7) we accept the request if the to_delay ACL matches as well as the implicit WAIT_END ACL. The trick here is that WAIT_END only matches once the timeout strikes, so it will only be evaluated if to_delay is also validated, and will cause a pause for those requests. If the to_delay ACL does not match, the next rules are evaluated. Once there aren't any more, we get an automatic accept, which is what happens for other traffic.
I really encourage you to give it a try, as it may help you solve your issues without wasting time on code, and it will offer you a much more flexible mechanism too. Regards, Willy
Re: implementing delay...
Hi Willy, Yes, I tried it with "! HTTP" and with "!HTTP_1.1 !HTTP_1.0" - it just gives a blank page on all of the websites... BRG, Andrew

Willy Tarreau wrote: On Wed, Aug 05, 2009 at 04:08:29PM +0200, Andrew Azarov wrote: Hi Willy, 1.4 seems not to work; for example I have the following:

    frontend xx :80
        mode http
        tcp-request inspect-delay 30s
        acl to_delay hdr_reg Opera.*
        tcp-request content accept if to_delay WAIT_END

Maybe it is because of the capture cookie or redirect which I have further down? No, there's no reason. However, I notice that you did not put this line:

    tcp-request content reject if ! HTTP

It is important because it pauses the evaluation as long as there is not a complete HTTP request. I think that in your case the evaluation passes through because the incomplete request does not match. So please try again. If you don't get it to work, I'll try to find some time to test it. P.S.: Sorry for the William - as I said, no problem :-) Cheers, Willy
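Putting the two messages in this thread together, the full rule set under test would be something like the sketch below (the bind address, delay and Opera ACL are from Andrew's snippet, the reject line is the one Willy asks for):

```
frontend xx
    bind :80
    mode http
    tcp-request inspect-delay 30s
    # hold evaluation until a complete HTTP request is buffered;
    # non-HTTP traffic is dropped
    tcp-request content reject if ! HTTP
    acl to_delay hdr_reg Opera.*
    # WAIT_END only matches when the inspect-delay expires, so
    # matching requests are paused for up to 30s before acceptance
    tcp-request content accept if to_delay WAIT_END
```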
Re: Connection limiting Sorry servers
Hi Willy, On Mon, 2009-08-03 at 09:21 +0200, Willy Tarreau wrote:

Why are you saying that? Except for rare cases of huge bugs, a server is not limited in requests per second. At full speed, it will simply use 100% of the CPU, which is why you bought it after all. When a server dies, it's almost always because a limited resource has been exhausted, and most often this resource is memory. In some cases, it may be other limits such as sockets, file descriptors, etc., which cause some unexpected exceptions not to be properly caught.

We have a problem in that our servers open connections to some 3rd party, and if we get too many users at the same time, they get too many connections.

I'm well aware of the problem; many sites have the same one. The queueing mechanism in haproxy was developed exactly for that. The first user was a gaming site which went from 50 req/s to 1 req/s on patch days. They too thought their servers could not handle that, while it was just a matter of concurrent connections once again. By enabling the queueing mechanism, they could sustain the 1 req/s with only a few hundred concurrent connections.

If that is the case, I will try the same and only limit max connections and see what happens. If that actually works, I will have a much simpler situation to handle. Thank you for now, you have been very helpful. Best regards, Bostjan
reqrep/general regex issue
Running the latest 1.3.x and have several reqrep lines in my config. No issues with rewriting /foo/(.*), but I just want to rewrite /foo to /fubar/foo, and the regex that I *think* should work is not doing the job. Any help appreciated. -dave
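One detail that often trips up reqrep: it matches against the entire HTTP request line ("GET /foo HTTP/1.1"), not the URI alone, so the method and version must be captured and re-emitted. A pattern along those lines (an untested sketch, not Dave's actual regex) would be:

```
# Match "METHOD /foo HTTP/x.x" where the URI is exactly /foo
# (the space after /foo prevents /foo/bar from matching too),
# and rewrite the URI to /fubar/foo.
reqrep ^([^\ ]*)\ /foo(\ .*) \1\ /fubar/foo\2
```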
Re: implementing delay...
On Wed, Aug 05, 2009 at 04:31:09PM +0200, Andrew Azarov wrote: Hi Willy, Yes, I tried it with "! HTTP" and with "!HTTP_1.1 !HTTP_1.0" - it just gives a blank page on all of the websites... OK, I will check. Regards, Willy
Re: Connection limiting Sorry servers
On Wed, Aug 05, 2009 at 05:52:50PM +0200, Boštjan Merčun wrote: Hi Willy, On Mon, 2009-08-03 at 09:21 +0200, Willy Tarreau wrote:

Why are you saying that? Except for rare cases of huge bugs, a server is not limited in requests per second. At full speed, it will simply use 100% of the CPU, which is why you bought it after all. When a server dies, it's almost always because a limited resource has been exhausted, and most often this resource is memory. In some cases, it may be other limits such as sockets, file descriptors, etc., which cause some unexpected exceptions not to be properly caught.

We have a problem in that our servers open connections to some 3rd party, and if we get too many users at the same time, they get too many connections.

So you're agreeing that the problem comes from too many connections. This is exactly what maxconn solves.

I'm well aware of the problem; many sites have the same one. The queueing mechanism in haproxy was developed exactly for that. The first user was a gaming site which went from 50 req/s to 1 req/s on patch days. They too thought their servers could not handle that, while it was just a matter of concurrent connections once again. By enabling the queueing mechanism, they could sustain the 1 req/s with only a few hundred concurrent connections.

If that is the case, I will try the same and only limit max connections and see what happens. If that actually works, I will have a much simpler situation to handle.

I bet so ;-) Willy
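The queueing setup Willy refers to is just a per-server maxconn plus a queue timeout. A sketch with illustrative names and numbers (the backend name, addresses and limits here are examples, not Bostjan's config):

```
backend app
    balance roundrobin
    # a request queued longer than this is returned a 503
    timeout queue 30s
    # never send more than 100 concurrent connections to a server;
    # excess requests wait in haproxy's queue instead of piling up
    # on the server (and on its 3rd-party connections)
    server app1 10.0.0.1:80 check maxconn 100
    server app2 10.0.0.2:80 check maxconn 100
```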
Re: 1.3.19 binaries for Solaris
Hi Marco, On Wed, Aug 05, 2009 at 11:50:14AM +0200, Marco Cunha wrote: Hi Willy, hi all, I've tried to download the solaris binaries for 1.3.19 from the website but it seems they're not there yet. Are they being phased out ? no, it's just that I recently moved and have not yet unpacked all my boxes. My Sun is still in a box and I don't have all the connectors at hand to easily plug it and run a build on it. I'll see if I can sort that out next week-end. Regards, Willy
Re: disable-on-404
On Wed, Aug 05, 2009 at 02:55:54PM +0200, Benoit wrote: Kent Noonan wrote: Hello all... I am working on a new setup and am having an issue that, I will admit, is probably me misreading the docs. We have a couple of other load balancing solutions, so I am not new to the concept; this is just our first use of haproxy. I have 5 backend servers and I am trying to configure things so they can be taken out of service based on the availability of a file. Here is the relevant config section:

    backend application-servers
        balance roundrobin
        appsession session_id len 32 timeout 1h
        mode http
        http-check disable-on-404
        option httpchk /alive.htm
        server bb-app1 10.200.35.1:80 check
        server bb-app2 10.200.35.2:80 check
        server bb-app3 10.200.35.3:80 check
        server bb-app4 10.200.35.4:80 check
        server bb-app5 10.200.35.5:80 check

What I am trying to do: if a file exists at the URI /alive.htm, the server is available. If that file gets deleted off the server, that server is taken out of service for new connections while still allowing existing connections. For some reason this isn't working; looking at the stats page, all servers show UP even though only one of them has the /alive.htm file on it. I am running 1.3.19 on a 64-bit architecture. Can anyone see what I am missing, or have any other words of wisdom to get this working for me? Thanks, Kent

I was using

    option httpchk HEAD /nagios.htm HTTP/1.0\r\nHost: 127.0.0.1

and no special http-check disable-on-404, and it worked fine, in 1.3.15.4 at least.

And coincidentally, I've been using it today too on 1.3.19 and it worked. Are you sure it's not because of your appsession cookies that you still see traffic on your servers? Willy
Re: reqrep/general regex issue
On Wed, Aug 05, 2009 at 12:08:12PM -0400, Dave Pascoe wrote: Running the latest 1.3.x and have several reqrep lines in my config. No issues with rewriting /foo/(.) but I just want to rewrite: /foo to /fubar/foo and the regex that I *think* should work is not doing the job. Any help appreciated. could you post the faulty regex ? Willy