Question about TCP balancing

2009-08-03 Thread Dmitry Sivachenko
Hello!

I am trying to set up haproxy 1.3.19 to use it as
a TCP load balancer.

Relevant portion of config looks like:

listen  test 0.0.0.0:17000
mode tcp
balance roundrobin
server  srv1 srv1:17100 check inter 2
server  srv2 srv2:17100 check inter 2
server  srv3 srv3:17100 check inter 2

Now imagine the situation that all 3 backends are down
(no program listen on 17100 port, OS responds with Connection Refused).

In that situation haproxy still listens on port 17000 and closes connections
immediately:
 telnet localhost 17000
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connection closed by foreign host.

Is it possible to configure haproxy so it will stop listening on the port
when all backends are down, so that clients will receive
Connection Refused as if nothing were listening on the TCP port at all?

Thanks in advance!



Re: Question about TCP balancing

2009-08-04 Thread Dmitry Sivachenko
Hello!

Thanks for clarification.

I have another question then (trying to solve my problem in a different way).

I want to set up the following configuration.
I have 2 sets of servers (backends): let's call one set NEAR (n1, n2, n3)
and the other set FAR (f1, f2, f3).

I want to spread incoming requests between the NEAR servers only
while they are alive, and move the load to the FAR servers when the NEAR set is down.

Is it possible to set up such a configuration?

I read the manual but did not find such a solution...

Thanks in advance!
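[Editor's note: one way to express this failover, sketched with hypothetical server names and ports, is haproxy's "backup" server keyword; by default only the first backup is used once all normal servers are down, so "option allbackups" is needed to spread load over all FAR servers:]

```
listen near-far 0.0.0.0:17000
    mode tcp
    balance roundrobin
    option allbackups
    server n1 n1:17100 check
    server n2 n2:17100 check
    server n3 n3:17100 check
    server f1 f1:17100 check backup
    server f2 f2:17100 check backup
    server f3 f3:17100 check backup
```

Backup servers receive traffic only once every non-backup server is down, which matches "move load to FAR when the NEAR set is down", though unlike an nbsrv-based ACL it cannot express a threshold such as "at least 2 NEAR servers alive".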


On Mon, Aug 03, 2009 at 09:46:47PM +0200, Willy Tarreau wrote:
 No it's not, and it's not only a configuration issue, it's an OS
 limitation. The only way to achieve this is to stop listening to
 the port then listen again to re-enable the port. On some OSes, it
 is possible. On other ones, you have to rebind (and sometimes close
 then recreate a new socket). But once your process has dropped
 privileges, you can't always rebind if the port is < 1024 for
 instance.
 
 So instead of having various behaviours for various OSes, it's
 better to make them behave similarly.
 
 I have already thought about adding an OS-specific option to do
 that, but I have another problem with that. Imagine that your
 servers are down. You stop listening to the port. At the same time,
 someone else starts listening (eg: you start a new haproxy without
 checking the first one, or an FTP transfer uses this port, ...).
 What should be done when the servers are up again ? Haproxy will
 not be able to get its port back because someone else owns it.
 
 So, by lack of a clean and robust solution, I prefer not to
 experiment in this area.
 



Re: Question about TCP balancing

2009-08-05 Thread Dmitry Sivachenko
On Wed, Aug 05, 2009 at 06:30:39AM +0200, Willy Tarreau wrote:
 frontend my_front
   acl near_usable nbsrv(near) ge 2
   acl far_usable  nbsrv(far)  ge 2
   use_backend near if near_usable
   use_backend far  if far_usable
   # otherwise error
 
 backend near
   balance roundrobin
   server near1 1.1.1.1 check
   server near2 1.1.1.2 check
   server near3 1.1.1.3 check
 
 backend far
   balance roundrobin
   server far1  2.1.1.1 check
   server far2  2.1.1.2 check
   server far3  2.1.1.3 check
 

Aha, I had already come to such a solution and noticed that it works only
in HTTP mode.
Since I do not actually want to parse HTTP-specific information,
I want to stay in TCP mode (but still use ACLs with nbsrv).

So I should stick with 1.4 for that purpose, right?

Or does HTTP mode act like TCP mode unless I actually use
something HTTP-specific?
In other words, will the above configuration (used in HTTP mode)
actually try to parse HTTP headers (and waste CPU cycles on that)?

Thanks.




Re: Question about TCP balancing

2009-08-06 Thread Dmitry Sivachenko
On Thu, Aug 06, 2009 at 12:03:25AM +0200, Willy Tarreau wrote:
 On Wed, Aug 05, 2009 at 12:01:34PM +0400, Dmitry Sivachenko wrote:
  On Wed, Aug 05, 2009 at 06:30:39AM +0200, Willy Tarreau wrote:
   frontend my_front
 acl near_usable nbsrv(near) ge 2
 acl far_usable  nbsrv(far)  ge 2
 use_backend near if near_usable
 use_backend far  if far_usable
 # otherwise error
   
   backend near
 balance roundrobin
 server near1 1.1.1.1 check
 server near2 1.1.1.2 check
 server near3 1.1.1.3 check
   
   backend far
 balance roundrobin
 server far1  2.1.1.1 check
 server far2  2.1.1.2 check
 server far3  2.1.1.3 check
   
  
  Aha, I already came to such a solution and noticed it works only
  in HTTP mode.
  Since I actually do not want to parse HTTP-specific information,
  I want to stay in TCP mode (but still use ACL with nbsrv).
  
  So I should stick with 1.4 for that purpose, right?
 
 exactly. However, keep in mind that 1.4 is development, and if
 you upgrade frequently, it may break some day. So you must be
 careful.
 

Okay, what is the estimated release date of the 1.4 branch?



Compilation of haproxy-1.4-dev2 on FreeBSD

2009-08-24 Thread Dmitry Sivachenko
Hello!

Please consider the following patches. They are required to
compile haproxy-1.4-dev2 on FreeBSD.

Summary:
1) include <sys/types.h> before <netinet/tcp.h>
2) Use IPPROTO_TCP instead of SOL_TCP
(they are both defined as 6, the TCP protocol number)

Thanks!


--- src/backend.c.orig  2009-08-24 14:49:04.0 +0400
+++ src/backend.c   2009-08-24 14:49:19.0 +0400
@@ -17,6 +17,7 @@
 #include <syslog.h>
 #include <string.h>
 #include <ctype.h>
+#include <sys/types.h>
 
 #include <netinet/tcp.h>

--- src/stream_sock.c.orig  2009-08-24 14:45:15.0 +0400
+++ src/stream_sock.c   2009-08-24 14:46:19.0 +0400
@@ -16,12 +16,12 @@
 #include <stdio.h>
 #include <stdlib.h>
 
-#include <netinet/tcp.h>
-
 #include <sys/socket.h>
 #include <sys/stat.h>
 #include <sys/types.h>
 
+#include <netinet/tcp.h>
+
 #include <common/compat.h>
 #include <common/config.h>
 #include <common/debug.h>


--- src/proto_tcp.c.orig    2009-08-24 14:50:03.0 +0400
+++ src/proto_tcp.c 2009-08-24 14:55:45.0 +0400
@@ -18,14 +18,14 @@
 #include <string.h>
 #include <time.h>
 
-#include <netinet/tcp.h>
-
 #include <sys/param.h>
 #include <sys/socket.h>
 #include <sys/stat.h>
 #include <sys/types.h>
 #include <sys/un.h>
 
+#include <netinet/tcp.h>
+
 #include <common/cfgparse.h>
 #include <common/compat.h>
 #include <common/config.h>
@@ -253,7 +253,7 @@ int tcp_bind_listener(struct listener *l
 #endif
 #ifdef TCP_MAXSEG
 	if (listener->maxseg) {
-		if (setsockopt(fd, SOL_TCP, TCP_MAXSEG,
+		if (setsockopt(fd, IPPROTO_TCP, TCP_MAXSEG,
			       &listener->maxseg, sizeof(listener->maxseg)) == -1) {
			msg = "cannot set MSS";
			err |= ERR_WARN;




TCP log format question

2009-08-26 Thread Dmitry Sivachenko
Hello!

I am running haproxy-1.4-dev2 with the following
configuration (excerpt):

global
    log /var/run/log local0
    user www
    group www
    daemon

defaults
    log global
    mode tcp
    balance roundrobin
    maxconn 2000
    option abortonclose
    option allbackups
    option httplog
    option dontlog-normal
    option dontlognull
    option redispatch
    option tcplog
    retries 2

frontend M-front
bind 0.0.0.0:17306
mode tcp
acl M-acl nbsrv(M-native) ge 5
use_backend M-native if M-acl
default_backend M-foreign

backend M-native
mode tcp
server ms1 ms1:17306 check maxconn 100 maxqueue 1 weight 100
server ms2 ms2:17306 check maxconn 100 maxqueue 1 weight 100
...

backend M-foreign
mode tcp
server ms3 ms3:17306 check maxconn 100 maxqueue 1 weight 100
server ms4 ms4:17306 check maxconn 100 maxqueue 1 weight 100

Note that both the frontend and the 2 backends are running in TCP mode.

In my log file I see the following lines:
Aug 26 18:19:50 balancer0-00 haproxy[66301]: A.B.C.D:28689 
[26/Aug/2009:18:19:50.034] M-front M-native/ms1 -1/1/0/-1/3 -1 339 - - CD-- 
0/0/0/0/0 0/0 BADREQ

1) What does BADREQ mean? I see no description of that field in the
documentation of the TCP log format.
2) Why are *all* requests being logged?
(note "option dontlog-normal" in the defaults section).
How should I change the configuration to log only important events
(errors) and not log the fact that a connection was made and served?

Thanks in advance!



Re: TCP log format question

2009-08-27 Thread Dmitry Sivachenko
On Thu, Aug 27, 2009 at 06:39:51AM +0200, Willy Tarreau wrote:
 I'm seeing that you have both tcplog and httplog. Since they
 both add a set of flags, the union of both is enabled which means
 httplog to me. I should add a check for this so that tcplog disables
 httplog.
 
  In my log file I see the following lines:
  Aug 26 18:19:50 balancer0-00 haproxy[66301]: A.B.C.D:28689 
  [26/Aug/2009:18:19:50.034] M-front M-native/ms1 -1/1/0/-1/3 -1 339 - - CD-- 
  0/0/0/0/0 0/0 BADREQ
  
  1) What does BADREQ mean? I see no description of that field in
  documentation of TCP log format.
 
 this is because of option httplog.

Aha, I see; I had the impression that 'option httplog' would be
ignored in TCP mode.

I removed it and BADREQ disappeared from the log.


 
  2) Why *all* requests are being logged? 
  (note option dontlog-normal in default section).
  How should I change configuration to log only important events
  (errors) and do not log the fact connection was made and served?
 
 Hmmm dontlog-normal only works in HTTP mode.

Ok, I see, though it is completely unclear after reading the manual.
This should probably be mentioned explicitly.

 Could you please
 explain what type of normal connections you would want to log
 and what type you would not want to log ? It could help making
 a choice of implementation of dontlog-normal for tcplog.
 

I want to log exactly what manual states:
###
Setting this option ensures that
normal connections, those which experience no error, no timeout, no retry nor
redispatch, will not be logged.
###

... but for TCP mode proxy.

I mean I want to see in the logs only those connections that were redispatched,
timed out, etc.

Thanks!



Re: Backend Server UP/Down Debugging?

2009-08-31 Thread Dmitry Sivachenko
On Sun, Aug 30, 2009 at 04:58:16PM +0200, Krzysztof Oledzki wrote:
 
 
 On Sun, 30 Aug 2009, Willy Tarreau wrote:
 
  On Sun, Aug 30, 2009 at 04:18:58PM +0200, Krzysztof Oledzki wrote:
  I think you wanted to put HCHK_STATUS_L57OK here, not OKD since we're
  in the 2xx/3xx state and not 404 disable. Or maybe I misunderstood the
  OKD status ?
 
  OKD means we have Layer5-7 data available, like for example an HTTP code.
  Several times I found that some of my servers were misconfigured and were
  returning a 3xx code redirecting to a page-not-found webpage instead of
  doing a proper health-check, so I think it is good to know what the
  response was, even if it was OK (2xx/3xx).
 
  Ah OK that makes sense now. It's a good idea to note that data is
  available, for later when we want to capture it whole. Indeed, I'd
  like to reuse the same capture principle as is used in proxies for
  errors. It does not take *that* much space and is so much useful
  already that we ought to implement it soon there too !
 
 OK, I found where your confusion comes from - the diff was incomplete, 
 there was no include/types/checks.h file that explains how 
 HCHK_STATUS_L57OK differs from HCHK_STATUS_L57OKD and also makes it 
 possible to compile the code. :(
 
 Dmitry, could you please use this patch instead? ;)
 

Okay, thank you.



Re: redispatch optimization

2009-08-31 Thread Dmitry Sivachenko
On Mon, Aug 31, 2009 at 03:39:35PM +0200, Krzysztof Oledzki wrote:
  PS: another important suggestion is to make that delay a tunable
  parameter (like "timeout connect", etc), rather than hardcoding
  1000ms in the code.
 
 Why would you like to change the value? I found 1s very well chosen.

In our environment we have a program asking the balancer and expecting results
to be returned very fast (say, in 0.5 second maximum).

So I want to ask one server in the backend and, if it is not responding,
re-ask another one immediately (or even the same one once again, assuming
that just the first TCP SYN packet was lost and the server is
running normally).  So I use a low connect timeout (say, 30ms) and if the
connection fails I retry the same one once more.

After all, we can use the 1 second default and allow that value to be
customized when needed.


 
 
 
  --- work/haproxy-1.4-dev2/src/session.c 2009-08-10 00:57:09.0 +0400
  +++ /tmp/session.c  2009-08-31 14:28:26.0 +0400
  @@ -306,7 +306,11 @@ int sess_update_st_cer(struct session *s
   	si->err_type = SI_ET_CONN_ERR;
   
   	si->state = SI_ST_TAR;
  +	if (s->srv && s->conn_retries == 0 && (s->be->options & PR_O_REDISP)) {
  +		si->exp = tick_add(now_ms, MS_TO_TICKS(0));
  +	} else {
   	si->exp = tick_add(now_ms, MS_TO_TICKS(1000));
  +	}
   	return 0;
   }
   return 0;
 
 
 There is no value in adding 0ms; also, the SI_ST_TAR assignment should be
 moved inside the condition, I think. Not sure if that is enough.
 

Okay, it is probably an ugly implementation (though it works), because I
still don't completely understand the code.
Feel free to re-implement it in a better way; just take the idea.

Thanks.



Re: [PATCH] [MINOR] CSS HTML fun

2009-10-13 Thread Dmitry Sivachenko
On Mon, Oct 12, 2009 at 11:39:54PM +0200, Krzysztof Piotr Oledzki wrote:
 From 6fc49b084ad0f4513c36418dfac1cf1046af66da Mon Sep 17 00:00:00 2001
 From: Krzysztof Piotr Oledzki o...@ans.pl
 Date: Mon, 12 Oct 2009 23:09:08 +0200
 Subject: [MINOR] CSS  HTML fun
 
 This patch makes stats page about 30% smaller and
 CSS 2.1 + HTML 4.01 Transitional compliant.
 
 There should be no visible differences.
 
 Changes:
  - add missing </ul>

End tag for <ul> is optional according to
http://www.w3.org/TR/html401/struct/lists.html#edef-UL



Re: [PATCH] [MINOR] CSS HTML fun

2009-10-13 Thread Dmitry Sivachenko
On Tue, Oct 13, 2009 at 02:16:12PM +0200, Benedikt Fraunhofer wrote:
 Hello,
 
 2009/10/13 Dmitry Sivachenko mi...@cavia.pp.ru:
 
  End tag for <ul> is optional according to
 
 really? Something new to me :)
 

OMG, sorry, I am blind.

Forget about that.



Re: [ANNOUNCE] haproxy 1.4-dev5 with keep-alive :-)

2010-01-11 Thread Dmitry Sivachenko
On Mon, Jan 04, 2010 at 12:13:49AM +0100, Willy Tarreau wrote:
 Hi all,
 
 Yes that's it, it's not a joke !
 
  -- Keep-alive support is now functional on the client side. --
 

Hello!

Are there any plans to implement server-side HTTP keep-alive?

I mean I want clients connecting to haproxy NOT to use keep-alive,
but to utilize keep-alive between haproxy and the backend servers.

Thanks!



Re: haproxy-1.4.3 and keep-alive status

2010-04-26 Thread Dmitry Sivachenko
On Thu, Apr 08, 2010 at 11:58:25AM +0200, Willy Tarreau wrote:
  3) I have sample configuration running with option http-server-close and 
  without option httpclose set.
  
  I observe the following at haproxy side:
  
  Request comes:
  
  GET /some-url HTTP/1.1
  Host: host.pp.ru
  User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.2) Gecko/20100326 Firefox/3.6.2
  Accept: */*
  Accept-Language: en-us,ru;q=0.7,en;q=0.3
  Accept-Encoding: gzip,deflate
  Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
  Keep-Alive: 115
  Connection: keep-alive
  
  So the client requests keep-alive.  I suppose that haproxy should send the
  request to the backend with Connection: close (because http-server-close is
  set) but send the response to the client with keep-alive enabled.
 
 Exactly.
 
  But that does not happen:
  
  HTTP/1.1 200 OK
  Date: Thu, 08 Apr 2010 08:41:52 GMT
  Expires: Thu, 08 Apr 2010 08:42:52 GMT
  Content-Type: text/javascript; charset=utf-8
  Connection: Close
  
  jsonp1270715696732(["a", ["ab", "and", "a2", "ac", "are", "a a", "ad", "a b", "a1", "about"]])
  
  
  Why haproxy responds to client with Connection: Close?
 
 Because the server did not provide the information required to make keep-alive
 possible. In your case, there is no "content-length" nor any "transfer-encoding"
 header, so the only way the client has to find the end of the response is the
 closure of the connection.
 
 Exactly the same issue was identified on Tomcat and Jetty. They did not use
 transfer-encoding when the client announces that it intends to close. The
 Tomcat team was cooperative and recently agreed to improve that. In the mean
 time, we have released haproxy 1.4.4 which includes a workaround for this:
 combine "option http-pretend-keepalive" with "option http-server-close" and
 your server will believe you're doing keep-alive and may try to send a more
 appropriate response. At least this works with Jetty and Tomcat, though there
 is nothing mandatory in this area.
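[Editor's note: to illustrate the point above, a response can only be kept alive if the client can tell where the body ends; illustrative raw responses:]

```
# keep-alive possible: body delimited by Content-Length
HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 5

hello

# keep-alive possible: body delimited by chunked transfer coding
HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked

5
hello
0

# keep-alive impossible: end of body = close of the connection
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close

hello
```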
 

Hello!

Here is a sample HTTP session with my (hand-made) server.

1) GET /some-url HTTP/1.1
Host: hots.pp.ru
User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.3) Gecko/20100414 Firefox/3.6.3
Accept: text/javascript, application/javascript, */*
Accept-Language: en-us,ru;q=0.7,en;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive

2) 
HTTP/1.1 200 OK
Date: Mon, 26 Apr 2010 11:34:19 GMT
Expires: Mon, 26 Apr 2010 11:35:19 GMT
Content-Type: text/javascript; charset=utf-8
Connection: Keep-Alive
Transfer-Encoding: chunked

some data

tcpdump analysis of several subsequent requests shows that HTTP keep-alive works
in my case.

When I put that server behind haproxy (version 1.4.4) I see the following:


1) GET some URL HTTP/1.1
Host: host.pp.ru
User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.3) Gecko/20100414 Firefox/3.6.3
Accept: text/javascript, application/javascript, */*
Accept-Language: en-us,ru;q=0.7,en;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive

2) 
HTTP/1.1 200 OK
Date: Mon, 26 Apr 2010 11:45:01 GMT
Expires: Mon, 26 Apr 2010 11:46:01 GMT
Content-Type: text/javascript; charset=utf-8
Connection: Close

some data

I have
mode http
option http-server-close
option http-pretend-keepalive

in my config (tried both with and without http-pretend-keepalive).

Can you please explain in more detail what the server does wrong and why
haproxy adds the Connection: Close header
(and why Firefox successfully uses HTTP keep-alive with the same server
without haproxy)?

Thanks in advance!



X-Forwarded-For header

2011-03-24 Thread Dmitry Sivachenko
Hello!

With "option forwardfor", haproxy adds an X-Forwarded-For header at the end
of the header list.

But according to wikipedia:
http://en.wikipedia.org/wiki/X-Forwarded-For

and other HTTP proxies (say, nginx),
there is a standard format to specify several intermediate IP addresses:
X-Forwarded-For: client1, proxy1, proxy2

Why don't you use this standard procedure to add the client IP?
(I mean: if X-Forwarded-For already exists in the request headers, modify
its value with the client IP instead of creating another header with the same name.)

Thanks!



Re: X-Forwarded-For header

2011-03-25 Thread Dmitry Sivachenko
On Thu, Mar 24, 2011 at 09:12:46PM +0100, Willy Tarreau wrote:
 Hello Dmitry,
 
 On Thu, Mar 24, 2011 at 05:28:13PM +0300, Dmitry Sivachenko wrote:
  Hello!
  
  With option forwardfor, haproxy adds X-Forwarded-For header at the end
  of header list.
  
  But according to wikipedia:
  http://en.wikipedia.org/wiki/X-Forwarded-For
  
  and other HTTP proxies (say, nginx)
  there is standard format to specify several intermediate IP addresses:
  X-Forwarded-For: client1, proxy1, proxy2
  
  Why don't you use these standard procedure to add client IP?
 
 Because these are not the standards. Standards are defined by RFCs, not
 by Wikipedia :-)


I meant more like a de-facto standard, sorry for the confusion.
The format with a single comma-delimited X-Forwarded-For header is just more common.


 
 We already got this question anyway. The short answer is that both forms
 are strictly equivalent, and any intermediary is free to fold multiple
 header lines into a single one with values delimited by commas. Your
 application will not notice the difference (otherwise it's utterly
 broken and might possibly be susceptible to many vulnerabilities such as
 request smuggling attacks).
 


Okay, thanks for the explanation.



haproxy-1.4.20 crashes

2012-05-15 Thread Dmitry Sivachenko

Hello!

I am using haproxy-1.4.20 on FreeBSD-9.
It was running without any problems for a long time, but after recent
changes in the configuration it began to crash from time to time.


GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain 
conditions.

Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as amd64-marcel-freebsd...
Core was generated by `haproxy'.
Program terminated with signal 10, Bus error.
Reading symbols from /lib/libcrypt.so.5...done.
Loaded symbols for /lib/libcrypt.so.5
Reading symbols from /lib/libc.so.7...done.
Loaded symbols for /lib/libc.so.7
Reading symbols from /libexec/ld-elf.so.1...done.
Loaded symbols for /libexec/ld-elf.so.1
#0  0x00455183 in tcpv4_connect_server (si=0x8062132e8, be=0x8014eb000,
    srv=0x8013d9400, srv_addr=0x806213470, from_addr=0x806213480)
    at src/proto_tcp.c:422
422             EV_FD_SET(fd, DIR_WR);  /* for connect status */
(gdb) bt
#0  0x00455183 in tcpv4_connect_server (si=0x8062132e8, be=0x8014eb000,
    srv=0x8013d9400, srv_addr=0x806213470, from_addr=0x806213480)
    at src/proto_tcp.c:422
#1  0x004449f7 in connect_server (s=0x806213200) at src/backend.c:921
#2  0x00457d98 in sess_update_stream_int (s=0x806213200, si=0x8062132e8)
    at src/session.c:374
#3  0x0045a5e7 in process_session (t=0x8057fcb40) at src/session.c:1403
#4  0x0040b1e3 in process_runnable_tasks (next=0x7fffdaac)
    at src/task.c:234
#5  0x004047a3 in run_poll_loop () at src/haproxy.c:983
#6  0x00404f61 in main (argc=6, argv=0x7fffdb88) at src/haproxy.c:1264
(gdb)

Is it a known issue?
If not, I can provide more information (config, core image, etc).

Thanks in advance!



Dump of invalid requests

2012-10-20 Thread Dmitry Sivachenko

Hello!

I am using haproxy-1.4.22.
Now I can see the last invalid request haproxy rejected with a "Bad Request"
return code with the following command:

$ echo "show errors" | socat stdio unix-connect:/tmp/haproxy.stats

1) The request seems to be truncated at a 16k boundary.  With very large
GET requests I do not see the tail of the URL string and (more importantly)
the following HTTP headers.  I am running with tune.bufsize=32768.  Is it
possible to tune haproxy to dump the whole request?


2) The command above shows *the last* rejected request.  In some cases
this complicates debugging; it would be convenient to see dumps of all
rejected requests for later analysis.  Is it possible to enable logging
of these dumps to a file or syslog?


Thanks in advance!



Re: Dump of invalid requests

2012-10-20 Thread Dmitry Sivachenko

On 10/20/12 11:49 PM, Willy Tarreau wrote:

 Hello Dmitry,
 
 On Sat, Oct 20, 2012 at 10:13:47PM +0400, Dmitry Sivachenko wrote:
 
  Hello!
  
  I am using haproxy-1.4.22.
  Now I can see the last invalid request haproxy rejected with a "Bad Request"
  return code with the following command:
  $ echo "show errors" | socat stdio unix-connect:/tmp/haproxy.stats
  
  1) The request seems to be truncated at a 16k boundary.  With very large
  GET requests I do not see the tail of the URL string and (more importantly)
  the following HTTP headers.  I am running with tune.bufsize=32768.  Is it
  possible to tune haproxy to dump the whole request?
 
 It always dumps the whole request. What you're describing is a request
 too large to fit in a buffer. It is invalid by definition since haproxy
 cannot parse it fully. If you absolutely need to pass that large a
 request, you can increase tune.bufsize and limit tune.maxrewrite to
 1024, it will be more than enough. But be careful, a website running
 with that large requests will 1) not be accessible by everyone for the
 same reason (some proxies will block the request) and 2) will be
 extremely slow for users with a limited uplink or via 3G/GPRS.


As I wrote in my original e-mail, I use tune.bufsize=32768.  I did not
tweak tune.maxrewrite though.
I will try to decrease maxrewrite to 1024 and see if 'show errors' will
dump more than 16k of URL.


I don't fully understand its meaning though.
If I need to match up to 25k size requests using the "reqrep" directive, will
tune.bufsize=32768 and tune.maxrewrite=1024 be enough for that?

I am aware of problems 1) and 2) but we have some special service here
at work which requires such large URLs.


  2) The command above shows *the last* rejected request.  In some cases
  this complicates debugging; it would be convenient to see dumps of all
  rejected requests for later analysis.  Is it possible to enable logging
  of these dumps to a file or syslog?
 
 No, because haproxy does not access any file once started, and syslog
 normally does not support messages larger than 1024 chars.
 
 What is problematic with only the last request ? Can't you connect
 more often to dump it ? There is an event number in the dump for that
 exact purpose, that way you know if you have already seen it or not.


The problem is that you never know when the next invalid request will arrive,
so it is possible to miss one no matter how often you poll for new errors.

Since most requests should fit even into a 1024-byte buffer, it would be nice
to dump at least the first 1024 bytes via syslog for debugging.





Re: Dump of invalid requests

2012-10-21 Thread Dmitry Sivachenko

On 10/21/12 12:06 AM, Willy Tarreau wrote:

On Sun, Oct 21, 2012 at 12:01:10AM +0400, Dmitry Sivachenko wrote:

As I wrote in my original e-mail, I use tune.bufsize=32768.  I did not
tweak tune.maxrewrite though.
I will try to decrease maxrewrite to 1024 and see if 'show errors' will
dump more that 16k of URL.

I don't fully understand it's meaning though.
If I need to match up to 25k size requests using reqrep directive, will
tune.bufsize=32768 and tune.maxrewrite=1024 be enough for that?

Yes. The max request that can be read at once is bufsize-maxrewrite. And
since maxrewrite defaults to bufsize/2, I think you were limited to 16k
which is in the same range as your request.
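[Editor's note: plugging the thread's numbers into that formula (simple arithmetic, no haproxy specifics assumed beyond the quoted defaults):]

```
largest parsable request = tune.bufsize - tune.maxrewrite

default : 32768 - 16384 (bufsize/2) = 16384 bytes  -> the observed 16k cut-off
proposed: 32768 -  1024             = 31744 bytes  -> enough for ~25k requests
```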




Please consider the following patch for configuration.txt to clarify the
meaning of bufsize, maxrewrite and the size of HTTP request that can be
processed.

Thanks.

--- configuration.txt.orig  2012-08-14 11:09:31.0 +0400
+++ configuration.txt   2012-10-21 18:08:01.0 +0400
@@ -683,6 +683,8 @@
   statistics, and values larger than default size will increase memory usage,
   possibly causing the system to run out of memory. At least the global maxconn
   parameter should be decreased by the same factor as this one is increased.
+  If HTTP request is larger than tune.bufsize - tune.maxrewrite, haproxy will
+  return HTTP 400 (Bad Request) error.
 
 tune.chksize <number>
   Sets the check buffer size to this size (in bytes). Higher values may help
 
@@ -4346,8 +4348,8 @@
   # replace "www.mydomain.com" with "www" in the host name.
   reqirep ^Host:\ www.mydomain.com   Host:\ www
 
-  See also: "reqadd", "reqdel", "rsprep", section 6 about HTTP header
-    manipulation, and section 7 about ACLs.
+  See also: "reqadd", "reqdel", "rsprep", "tune.bufsize", section 6 about
+    HTTP header manipulation, and section 7 about ACLs.
 
 reqtarpit <search> [{if | unless} cond]


Need more info on compression

2012-11-22 Thread Dmitry Sivachenko
Hello!

I was reading the docs about HTTP compression support in -dev13 and it is a bit
unclear to me how it works.

Imagine I have:
compression algo gzip
compression type text/html text/javascript text/xml text/plain

in defaults section.

What will haproxy do if:
1) the backend server does NOT support compression;
2) the backend server does support compression;
3) the backend server does support compression and these two
compression* lines are absent from the haproxy config?

I think documentation needs to clarify things a bit.

In return, I am attaching a small patch which fixes 2 typos.

Thanks!
--- configuration.txt.orig  2012-11-22 04:11:33.0 +0400
+++ configuration.txt   2012-11-22 19:58:46.0 +0400
@@ -1887,7 +1887,7 @@
 offload  makes haproxy work as a compression offloader only (see notes).
 
   The currently supported algorithms are :
-identity  this is mostly for debugging, and it was useful for developping
+identity  this is mostly for debugging, and it was useful for developing
   the compression feature. Identity does not apply any change on
   data.
 
@@ -1901,7 +1901,7 @@
   This setting is only available when support for zlib was built
   in.
 
-  Compression will be activated depending of the Accept-Encoding request
+  Compression will be activated depending on the Accept-Encoding request
   header. With identity, it does not take care of that header.
 
   The offload setting makes haproxy remove the Accept-Encoding header to


Re: Need more info on compression

2012-11-28 Thread Dmitry Sivachenko
On 24.11.2012 18:25, Willy Tarreau wrote:
 Hi Dmitry,
 
 On Thu, Nov 22, 2012 at 08:03:26PM +0400, Dmitry Sivachenko wrote:
 Hello!

 I was reading docs about HTTP compression support in -dev13 and it is a bit
 unclear to me how it works.

 Imagine I have:
 compression algo gzip
 compression type text/html text/javascript text/xml text/plain

 in defaults section.

 What will haproxy do if:
 1) backend server does NOT support compression;
 
 Haproxy will compress the matching responses.
 
 2) backend server does support compression;
 
 You have two possibilities :
   - either you just have the lines above, and the server will see
 the Accept-Encoding header from the client and will compress
 the response ; in this case, haproxy will see the compressed
 response and will not compress again ;
 
   - or you also have a compression offload line. In this case,
 haproxy will remove the Accept-Encoding header before passing
 the request to the server. The server will then *not* compress,
 and haproxy will compress the response. This is what I'm doing
 at home because the compressing server is bogus and sometimes
 emits wrong chunked encoded data!
 
 3) backend server does support compression and there is no these two
 compression* lines in haproxy config.
 
 Then haproxy's normal behaviour remains unchanged, the server compresses
 if it wants to and haproxy transfers the response unmodified.
 
 I think documentation needs to clarify things a bit.
 
 Possibly, however I don't know what to clarify nor how, it's always
 difficult to guess how people will understand a doc :-(
 
 Could you please propose some changes ? I would be happy to improve
 the doc if it helps people understand it.
 


Thank you very much for the explanation.

Please consider the attached patch; I hope it will clarify haproxy's behavior a
bit.

--- configuration.txt.orig  2012-11-26 06:11:05.0 +0400
+++ configuration.txt   2012-11-28 17:45:25.0 +0400
@@ -1903,16 +1903,23 @@
 
   Compression will be activated depending on the Accept-Encoding request
   header. With identity, it does not take care of that header.
+  If backend servers support HTTP compression, these directives
+  will be no-op: haproxy will see the compressed response and will not
+  compress again. If backend servers do not support HTTP compression and
+  there is Accept-Encoding header in request, haproxy will compress the
+  matching response.
 
   The offload setting makes haproxy remove the Accept-Encoding header to
   prevent backend servers from compressing responses. It is strongly
   recommended not to do this because this means that all the compression work
   will be done on the single point where haproxy is located. However in some
   deployment scenarios, haproxy may be installed in front of a buggy gateway
-  and need to prevent it from emitting invalid payloads. In this case, simply
-  removing the header in the configuration does not work because it applies
-  before the header is parsed, so that prevents haproxy from compressing. The
-  offload setting should then be used for such scenarios.
+  with a broken HTTP compression implementation that cannot be turned off.
+  In that case haproxy can be used to prevent the gateway from emitting
+  invalid payloads. Simply removing the header in the configuration does
+  not work, because removal applies before the header is parsed, which
+  would also prevent haproxy itself from compressing. The offload setting
+  should then be used for such scenarios.
 
   Compression is disabled when:
 * the server is not HTTP/1.1.


Re: [ANNOUNCE] haproxy-1.5-dev16

2012-12-24 Thread Dmitry Sivachenko
Hello!

After update from -dev15, the following stats listener:

listen stats9 :30009
mode http
stats enable
stats uri /
stats show-node
stats show-legends

returns 503/Service unavailable.

With -dev15 it shows statistics page.


On 24.12.2012, at 19:51, Willy Tarreau w...@1wt.eu wrote:

 Hi all,
 
 Here comes 1.5-dev16. Thanks to the amazing work Sander Klein and John
 Rood have done at Picturae ICT ( http://picturae.com/ ) we could finally
 spot the freeze bug after one week of restless digging ! This bug was
 amazingly hard to reproduce in general and would only affect POST requests
 under certain circumstances that I never could reproduce despite many
 efforts. It is likely that other users were affected too but did not
 notice it because end users did not complain (I'm thinking about webmail
 and file sharing environments for example).
 
 During this week of code review and testing, around 10 other minor to medium
 bugs related to the polling changes could be fixed.
 
 Another nasty bug was fixed on SSL. It happens that OpenSSL maintains a
 global error stack that must constantly be flushed (surely they never heard
 how errno works). The result is that some SSL errors could cause another SSL
 session to break as a side effect of this error. This issue was reported by
 J. Maurice (wiz technologies) who first encountered it when playing with the
 tests on ssllabs.com.
 
 Another bug present since 1.4 concerns the premature close of the response
 when the server responds before the end of a POST upload. This happens when
 the server responds with a redirect or with a 401, sometimes the client would
 not get the response. This has been fixed.
 
 Krzysztof Rutecki reported some issues on client certificate checks, because
 the check for the presence of the certificate applies to the connection and
 not just to the session. So this does not match upon session resumption. Thus
 another ssl_c_used ACL was added to check for such sessions.
 
 Among the other nice additions, it is now possible to log the result of any
 sample fetch method using %[]. This allows to log SSL certificates for 
 example.
 And similarly, passing such information to HTTP headers was implemented too,
 as http-request add-header and http-request set-header, using the same
 format as the logs. This also becomes useful for combining headers !
 
 Some people have been asking for logging the amount of uploaded data from the
 client to the server, so this is now available as the %U log-format tag.
 Some other log-format tags were deprecated and replaced with easier to remind
 ones. The old ones still work but emit a warning suggesting the replacement.
 
 And last, the stats HTML version was improved to present detailed information
 using hover tips instead of title attributes, allowing multi-line details on
 the page. The result is nicer, more readable and more complete.
 
 The changelog is short enough to append it here after the usual links :
 
Site index   : http://haproxy.1wt.eu/
Sources  : http://haproxy.1wt.eu/download/1.5/src/devel/
Changelog: http://haproxy.1wt.eu/download/1.5/src/CHANGELOG
Cyril's HTML doc : 
 http://cbonte.github.com/haproxy-dconv/configuration-1.5.html
 
 At the moment, nobody broke the latest snapshots, so I think we're getting
 closer to something stable to base future work on.
 
 Thanks!
 Willy
 
 --
 Changelog from 1.5-dev15 to 1.5-dev16:
  - BUG/MEDIUM: ssl: Prevent ssl error from affecting other connections.
  - BUG/MINOR: ssl: error is not reported if it occurs simultaneously with 
 peer close detection.
  - MINOR: ssl: add fetch and acl ssl_c_used to check if current SSL session 
 uses a client certificate.
  - MINOR: contrib: make the iprange tool grep for addresses
  - CLEANUP: polling: gcc doesn't always optimize constants away
  - OPTIM: poll: optimize fd management functions for low register count CPUs
  - CLEANUP: poll: remove a useless double-check on fdtab[fd].owner
  - OPTIM: epoll: use a temp variable for intermediary flag computations
  - OPTIM: epoll: current fd does not count as a new one
  - BUG/MINOR: poll: the I/O handler was called twice for polled I/Os
  - MINOR: http: make resp_ver and status ACLs check for the presence of a 
 response
  - BUG/MEDIUM: stream-interface: fix possible stalls during transfers
  - BUG/MINOR: stream_interface: don't return when the fd is already set
  - BUG/MEDIUM: connection: always update connection flags prior to computing 
 polling
  - CLEANUP: buffer: use buffer_empty() instead of buffer_len()==0
  - BUG/MAJOR: stream_interface: fix occasional data transfer freezes
  - BUG/MEDIUM: stream_interface: fix another case where the reader might not 
 be woken up
  - BUG/MINOR: http: don't abort client connection on premature responses
  - BUILD: no need to clean up when making git-tar
  - MINOR: log: add a tag for amount of bytes uploaded from client to server
  - BUG/MEDIUM: log: fix possible 

Re: [ANNOUNCE] haproxy-1.5-dev16

2012-12-26 Thread Dmitry Sivachenko

On 26.12.2012, at 1:03, Willy Tarreau w...@1wt.eu wrote:
 
 
 This fix is still wrong, as it only accepts one add-header rule, so
 please use the other fix posted in this thread by seri0528 instead.
 


Thanks a lot! Works now.




compress only if response size is big enough

2013-02-07 Thread Dmitry Sivachenko
Hello!

It would be nice to add a parameter such as min_compress_size, so that haproxy
compresses an HTTP response only if the response body is bigger than that
value.

Compressing small payloads can actually increase their size, so it is useless.
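The size-increase effect on tiny payloads is easy to demonstrate (an illustrative Python snippet, unrelated to haproxy's own code):

```python
import gzip

# A tiny body: the gzip container (header + CRC trailer) alone costs
# roughly 20 bytes, so compressing it makes the payload bigger.
payload = b"ok"
compressed = gzip.compress(payload)
print(len(payload), len(compressed))  # the compressed form is larger
```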

Thanks.


Re: compress only if response size is big enough

2013-03-02 Thread Dmitry Sivachenko
Hello!

What do you guys think?

I meant something similar to nginx's  gzip_min_length.


On 07.02.2013, at 15:56, Dmitry Sivachenko trtrmi...@gmail.com wrote:

 Hello!
 
 It would be nice to add some parameter min_compress_size.
 So haproxy will compress HTTP response only if response size is bigger than 
 that value.
 
 Because compressing small data can lead to size increase and is useless.
 
 Thanks.




haproxy dumps core when unable to resolve host names

2013-03-15 Thread Dmitry Sivachenko
Hello!

I am using haproxy-1.5-dev17.  I use hostnames in my config file rather than 
IPs.
If DNS is not working, haproxy will dump core on start or config check.

How to repeat:
Put some fake stuff in /etc/resolv.conf so resolver does not work.

Run haproxy -c -f /path/to/haproxy.conf:

/tmp# ./haproxy -c -f ./haproxy.conf
Segmentation fault (core dumped)

# ./haproxy -vv
HA-Proxy version 1.5-dev17 2012/12/28
Copyright 2000-2012 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -pipe -O2 -fno-strict-aliasing -pipe -DFREEBSD_PORTS
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_STATIC_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.7
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 0.9.8x 10 May 2012
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes

Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.




Re: haproxy dumps core when unable to resolve host names

2013-03-15 Thread Dmitry Sivachenko

On 15.03.2013, at 15:54, Willy Tarreau w...@1wt.eu wrote:

 Hi Dmitry,
 
 On Fri, Mar 15, 2013 at 03:25:10PM +0400, Dmitry Sivachenko wrote:
 Hello!
 
 I am using haproxy-1.5-dev17.  I use hostnames in my config file rather than 
 IPs.
 If DNS is not working, haproxy will dump core on start or config check.
 
 How to repeat:
 Put some fake stuff in /etc/resolv.conf so resolver does not work.
 
 Run haproxy -c -f /path/to/haproxy.conf:
 
 /tmp# ./haproxy -c -f ./haproxy.conf
 Segmentation fault (core dumped)
 
 This is a known issue with GETADDRINFO which was fixed in a
 recent snapshot :
 
  commit 58ea039115f3faaf29529e0df97f4562436fdd09
  Author: Sean Carey ca...@basho.com
  Date:   Fri Feb 15 23:39:18 2013 +0100
 
BUG/MEDIUM: config: fix parser crash with bad bind or server address
 
If an address is improperly formated on a bind or server address
and haproxy is built for using getaddrinfo, then a crash may occur
upon the call to freeaddrinfo().
 
Thanks to Jon Meredith for helping me patch this for SmartOS,
I am not a C/GDB wizard.
 
 I think you'd better update to latest snapshot until we emit dev18.
 



Ah, okay, thanks!




compile warning

2013-05-22 Thread Dmitry Sivachenko
Hello!

When compiling the latest haproxy snapshot on FreeBSD-9 I get the following 
warning:

cc -Iinclude -Iebtree -Wall -O2 -pipe -O2 -fno-strict-aliasing -pipe \
   -DFREEBSD_PORTS -DTPROXY -DCONFIG_HAP_CRYPT -DUSE_GETADDRINFO -DUSE_ZLIB \
   -DENABLE_POLL -DENABLE_KQUEUE -DUSE_OPENSSL -DUSE_PCRE -I/usr/local/include \
   -DCONFIG_HAPROXY_VERSION=\"1.5-dev18\" -DCONFIG_HAPROXY_DATE=\"2013/04/03\" \
   -c -o src/ev_kqueue.o src/ev_kqueue.c
In file included from include/types/listener.h:33,
 from include/types/global.h:29,
 from src/ev_kqueue.c:30:
include/common/mini-clist.h:141:1: warning: LIST_PREV redefined
In file included from /usr/include/sys/event.h:32,
 from src/ev_kqueue.c:21:
/usr/include/sys/queue.h:426:1: warning: this is the location of the previous 
definition

JFYI.


Re: compile warning

2013-05-23 Thread Dmitry Sivachenko

On 23.05.2013, at 11:22, joris dedieu joris.ded...@gmail.com wrote:

 
 For my part I can't reproduce it.
 
 $ uname -a
 FreeBSD mailhost2 9.1-RELEASE-p3 FreeBSD 9.1-RELEASE-p3 #0: Mon Apr 29
 18:27:25 UTC 2013
 r...@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64
 
 $ cc -v
 Using built-in specs.
 Target: amd64-undermydesk-freebsd
 Configured with: FreeBSD/amd64 system compiler
 Thread model: posix
 gcc version 4.2.1 20070831 patched [FreeBSD]
 
 
 rm src/ev_kqueue.o; cc -Iinclude -Iebtree -Wall -Werror -O2 -pipe -O2
 -fno-strict-aliasing -pipe -DFREEBSD_PORTS -DTPROXY -DCONFIG_HAP_CRYPT
 -DUSE_GETADDRINFO -DUSE_ZLIB -DENABLE_POLL -DENABLE_KQUEUE
 -DUSE_OPENSSL -DUSE_PCRE -I/usr/local/include
 -DCONFIG_HAPROXY_VERSION=\"1.5-dev18\"
 -DCONFIG_HAPROXY_DATE=\"2013/04/03\" -c -o src/ev_kqueue.o
 src/ev_kqueue.c
 
 Doesn't produce any warning with haproxy-ss-20130515.
 
 Could you please tell me how to reproduce it ?
 


Update to FreeBSD-9-STABLE if you want to reproduce it.

This change was MFC'd to 9/stable after 9.1-RELEASE:
http://svnweb.freebsd.org/base/stable/9/sys/sys/queue.h?view=log




Re: RES: RES: RES: RES: RES: RES: RES: RES: High CPU Usage (HaProxy)

2013-11-05 Thread Dmitry Sivachenko
On 05 Nov 2013, at 19:33, Fred Pedrisa fredhp...@hotmail.com wrote:

 
 However, in FreeBSD we can't do that IRQ Assigning, like we can on linux.
 (As far I know).
 


JFYI: you can assign IRQs to CPUs via cpuset -x irq
(I can’t tell you if it is “like on linux” or not though).




ACL based on request parameter using POST method

2014-01-30 Thread Dmitry Sivachenko
Hello!

(haproxy-1.5-dev21)


Using urlp() I can match a specific parameter value and dispatch requests to
different backends based on that value:

acl PARAM1 urlp(test) 1
use_backend BE1-back if PARAM1
acl PARAM2 urlp(test) 2
use_backend BE2-back if PARAM2

It works when I pass the parameter using the GET method:
curl 'http://localhost:2/do?test=1'

But it does not work when I pass the same parameter using the POST method:
curl -d test=1  'http://localhost:2/do'

Is there any way to write ACLs on request parameters regardless of the method,
so that it works with both GET and POST?

Thanks!


Re: ACL based on request parameter using POST method

2014-01-30 Thread Dmitry Sivachenko

On 30 Jan 2014, at 19:30, Baptiste bed...@gmail.com wrote:

 Hu Dmitry,
 
 In Post, the parameters are in the body.
 You may be able to match them using the payload ACLs (HAProxy 1.5 only).
 


Hello,

I tried
acl PARAM1 payload(0,500) -m sub test=1
use_backend BE1-back if PARAM1


and it does not match
(I test with curl -d test=1 http://...)
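One approach commonly suggested for matching a POST body in 1.5 is to add an inspect delay so the body is actually buffered before the ACL is evaluated. The sketch below is an untested assumption based on that idea, not a confirmed fix for the case above; the frontend name, port, and backend names are illustrative:

```
frontend www
    bind :20000
    mode http
    # Without an inspect delay, the POST body may not be buffered yet
    # when the payload() ACL runs, so it never matches.
    tcp-request inspect-delay 5s
    tcp-request content accept if WAIT_END
    acl PARAM1 payload(0,500) -m sub test=1
    use_backend BE1-back if PARAM1
    default_backend BE2-back
```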






balance leastconn does not honor weight?

2014-03-06 Thread Dmitry Sivachenko
Hello!

I am using haproxy-1.5.22.

In a single backend I have servers with different weight configured: 16, 24, 32 
(proportional to the number of CPU cores).
Most of the time they respond very fast.

When I use balance leastconn, I see in the stats web interface that they all
receive approximately equal numbers of connections (Sessions -> Total).
Shouldn't the leastconn algorithm also honor the weight of each backend
(picking the backend with the minimal connections/weight ratio)?

Thanks.


Re: balance leastconn does not honor weight?

2014-03-07 Thread Dmitry Sivachenko

On 06 Mar 2014, at 19:29, Dmitry Sivachenko trtrmi...@gmail.com wrote:

 Hello!
 
 I am using haproxy-1.5.22.
 
 In a single backend I have servers with different weight configured: 16, 24, 
 32 (proportional to the number of CPU cores).
 Most of the time they respond very fast.
 
 When I use balance leastconn, I see in the stats web interface that they all 
 receive approximately equal number of connections (Sessions-Total).
 Shouldn't leastconn algorithm also honor weights of each backend (to pick a 
 backend with minimal Connections/weight value)?
 
 Thanks.


I mean that with balance leastconn, I expect the following behavior:
-- In the ideal situation, when all backends respond equally fast, it should be
effectively like balance roundrobin *honoring the specified weights*;
-- When one of the backends becomes slow for some reason, it should get fewer
requests, based on the number of active connections.

Now it behaves almost this way, but without honoring the specified weights.
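The selection rule being requested can be sketched in a few lines (illustrative Python; this is NOT haproxy's actual code, just the connections/weight idea from the question above):

```python
def pick_server(servers):
    """Weighted least-connection: pick the server with the smallest
    active_connections / weight ratio.

    servers: list of (name, active_conns, weight) tuples.
    """
    return min(servers, key=lambda s: s[1] / s[2])

# Three servers weighted 16/24/32, all carrying 8 active connections:
farm = [("srv1", 8, 16), ("srv2", 8, 24), ("srv3", 8, 32)]
print(pick_server(farm)[0])  # srv3 -- same load, highest weight wins
```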





Re: balance leastconn does not honor weight?

2014-03-07 Thread Dmitry Sivachenko

On 07 Mar 2014, at 12:25, Willy Tarreau w...@1wt.eu wrote:

 Hi Dmitry,
 
 On Fri, Mar 07, 2014 at 12:16:32PM +0400, Dmitry Sivachenko wrote:
 
 On 06 Mar 2014, at 19:29, Dmitry Sivachenko trtrmi...@gmail.com
 wrote:
 
 Hello!
 
 I am using haproxy-1.5.22.
 
 In a single backend I have servers with different weight configured: 16, 
 24, 32 (proportional to the number of CPU cores).
 Most of the time they respond very fast.
 
 When I use balance leastconn, I see in the stats web interface that they 
 all receive approximately equal number of connections (Sessions-Total).
 Shouldn't leastconn algorithm also honor weights of each backend (to pick a 
 backend with minimal Connections/weight value)?
 
 Thanks.
 
 I mean that with balance leastconn, I expect the following behavior:
 -- In the ideal situation, when all backends respond equally fast, it should
 be effectively like balance roundrobin *honoring the specified weights*;
 -- When one of the backends becomes slow for some reason, it should get fewer
 requests, based on the number of active connections.
 
 Now it behaves almost this way, but without honoring the specified weights.
 
 We cannot honnor both at the same time. Most products I've tested don't
 *even* do the round robin on equal connection counts while we do. I'm just
 restating the point I made in another thread on the same subject : leastconn
 is about balancing the active number of connections, not the total number of
 connections.


Yes, I understand that.

But when the backends are not equal, it would be nice to be able to balance
the number of *active* connections in proportion to each server's weight.

Otherwise I am forced to maintain a pool of servers with identical hardware
for leastconn to work, and that is not always simple.


Re: balance leastconn does not honor weight?

2014-03-07 Thread Dmitry Sivachenko

On 07 Mar 2014, at 13:02, Willy Tarreau w...@1wt.eu wrote:

 On Fri, Mar 07, 2014 at 01:01:04PM +0400, Dmitry Sivachenko wrote:
 Now it behaves almost this way but without  honoring specified weights.
 
 We cannot honnor both at the same time. Most products I've tested don't
 *even* do the round robin on equal connection counts while we do. I'm just
 restating the point I made in another thread on the same subject : leastconn
 is about balancing the active number of connections, not the total number of
 connections.
 
 
 Yes, I understand that.
 
 But in situation when backends are not equal, it would be nice to have an
 ability to specify weight to balance number of *active* connections
 proportional to backend's weight.
 
 It's not a problem of option but of algorithm unfortunately.
 
 Otherwise I am forced to maintain a pool of backends with equal hardware for
 leastconn to work, but it is not always simple.
 
 I really don't understand. I really think you're using leastconn while
 you'd prefer to use roundrobin then.
 


I will explain: imagine a backend server that mmap()s a lot of data needed to
process a request.
On startup, the data is read from disk into RAM and the server responds fast
(roundrobin works fine).

Now imagine that at some moment part of that mmap()ed memory is freed for
other needs.

When the next requests arrive, the server must read the missing pages back
from disk. That takes time, and the server becomes very slow for a while.
I don't want it to be flooded with requests until it starts to respond fast
again. It looks like leastconn would fit this situation.

But 99.9% of the time, when all servers respond equally fast, I want to
balance the load between them in proportion to their CPU counts (so I need
weights).




Re: balance leastconn does not honor weight?

2014-03-07 Thread Dmitry Sivachenko

On 07 Mar 2014, at 14:53, Baptiste bed...@gmail.com wrote:

 Hi All,
 
  When the next requests arrive, the server must read missing pages back
  from disk. It takes time, and the server becomes very slow for a while.
  I don't want it to be flooded with requests until it starts to respond
  fast again. It looks like leastconn would fit this situation.
 
 If one server is answering at 1s per request while the other one at
 1ms in a farm of 2 servers, then server 2 will process 1000 more
 requests per second than server 1 thanks to leastconn...
 This is what you want.



Yes, provided that most of the time they both answer in 1ms, and that the farm
has not 2 but 50 servers.
If one is ill, its load will spread over the remaining 49... not so scary.

I am reading about maxconn as suggested; it is probably what I need, but for
now I am failing to understand the documentation :)


Re: Patch with some small memory usage fixes

2014-04-29 Thread Dmitry Sivachenko
Hello,

 if (groups) free(groups);

I think these checks are redundant, because according to free(3):
-- If ptr is NULL, no action occurs.


On 29 Apr 2014, at 3:00, Dirkjan Bussink d.buss...@gmail.com wrote:

 Hi all,
 
 When building HAProxy using the Clang Static Analyzer, it found a few cases 
 of invalid memory usage and leaks. I’ve attached a patch to fix these cases.
 
 — 
 Regards,
 
 Dirkjan Bussink
 
 0001-Fix-a-few-memory-usage-errors.patch




Re: haproxy 1.5-dev24: 100% CPU Load or Core Dumped

2014-05-10 Thread Dmitry Sivachenko

On 07 May 2014, at 18:24, Emeric Brun eb...@exceliance.fr wrote:
 
 Hi All,
 
 I suspect FreeBSD to not support process shared mutex (supported in both 
 linux and solaris).
 
 I've just made a patch to add errors check on mutex init, and to fallback on 
 SSL private session cache in error case.


Hello,

BTW, nginx does support a shared SSL session cache on FreeBSD (probably by
other means).
Maybe it is worth borrowing their method rather than falling back to the
private cache?


Re: Some thoughts about redispatch

2014-05-11 Thread Dmitry Sivachenko
Hello,

Thanks for your efforts on stabilizing the -dev version; it looks rather solid
now.

Let me try to revive an old topic in the hope of getting rid of an old local
patch I must use for production builds.

Thanks :)



On 28 Nov 2012, at 18:10, Dmitry Sivachenko trtrmi...@gmail.com wrote:

 Hello!
 
 If haproxy can't send a request to the backend server, it will retry the same
 backend 'retries' times waiting 1 second between retries, and if 'option
 redispatch' is used, the last retry will go to another backend.
 
 There is (I think very common) usage scenario when
 1) all requests are independent of each other and all backends are equal, so
 there is no need to try to route requests to the same backend (if it failed, 
 we
 will try dead one again and again while another backend could serve the 
 request
 right now)
 
 2) there is response time policy for requests and 1 second wait time is just
 too long (all requests are handled faster than 500ms and client software will
 not wait any longer).
 
 I propose to introduce new parameters in config file:
 1) redispatch always: when set, haproxy will always retry different backend
 after connection to the first one fails.
 2) Allow to override 1 second wait time between redispatches in config file
 (including the value of 0 == immediate).
 
 Right now I use the attached patch to overcome these restrictions.  It is ugly
 hack right now, but if you could include it into distribution in better form
 with tuning via config file I think everyone would benefit from it.
 
 Thanks.
 redispatch.txt




Re: Some thoughts about redispatch

2014-05-11 Thread Dmitry Sivachenko
Looks like the attachment got stripped; attaching it again for real so it is
easy to understand what I am talking about.

--- session.c.orig  2012-11-22 04:11:33.0 +0400
+++ session.c   2012-11-22 16:15:04.0 +0400
@@ -877,7 +877,7 @@ static int sess_update_st_cer(struct ses
 * bit to ignore any persistence cookie. We won't count a retry nor a
 * redispatch yet, because this will depend on what server is selected.
 */
-   if (objt_server(s->target) && si->conn_retries == 0 &&
+   if (objt_server(s->target) &&
    s->be->options & PR_O_REDISP && !(s->flags & SN_FORCE_PRST)) {
    sess_change_server(s, NULL);
    if (may_dequeue_tasks(objt_server(s->target), s->be))
@@ -903,7 +903,7 @@ static int sess_update_st_cer(struct ses
    si->err_type = SI_ET_CONN_ERR;
 
    si->state = SI_ST_TAR;
-   si->exp = tick_add(now_ms, MS_TO_TICKS(1000));
+   si->exp = tick_add(now_ms, MS_TO_TICKS(0));
    return 0;
    }
    return 0;

On 12 May 2014, at 0:31, Dmitry Sivachenko trtrmi...@gmail.com wrote:

 Hello,
 
 Thanks for your efforts on stabilizing the -dev version; it looks rather
 solid now.
 
 Let me try to revive an old topic in the hope of getting rid of an old local
 patch I must use for production builds.
 
 Thanks :)
 
 
 
 On 28 Nov 2012, at 18:10, Dmitry Sivachenko trtrmi...@gmail.com wrote:
 
 Hello!
 
 If haproxy can't send a request to the backend server, it will retry the same
 backend 'retries' times waiting 1 second between retries, and if 'option
 redispatch' is used, the last retry will go to another backend.
 
 There is (I think very common) usage scenario when
 1) all requests are independent of each other and all backends are equal, so
 there is no need to try to route requests to the same backend (if it failed, 
 we
 will try dead one again and again while another backend could serve the 
 request
 right now)
 
 2) there is response time policy for requests and 1 second wait time is just
 too long (all requests are handled faster than 500ms and client software will
 not wait any longer).
 
 I propose to introduce new parameters in config file:
 1) redispatch always: when set, haproxy will always retry different backend
 after connection to the first one fails.
 2) Allow to override 1 second wait time between redispatches in config file
 (including the value of 0 == immediate).
 
 Right now I use the attached patch to overcome these restrictions.  It is 
 ugly
 hack right now, but if you could include it into distribution in better form
 with tuning via config file I think everyone would benefit from it.
 
 Thanks.
 redispatch.txt
 



Re: Some thoughts about redispatch

2014-05-26 Thread Dmitry Sivachenko
On 28 Nov 2012, at 18:10, Dmitry Sivachenko trtrmi...@gmail.com wrote:

 Hello!
 
 If haproxy can't send a request to the backend server, it will retry the same
 backend 'retries' times waiting 1 second between retries, and if 'option
 redispatch' is used, the last retry will go to another backend.
 
 There is (I think very common) usage scenario when
 1) all requests are independent of each other and all backends are equal, so
 there is no need to try to route requests to the same backend (if it failed, 
 we
 will try dead one again and again while another backend could serve the 
 request
 right now)
 
 2) there is response time policy for requests and 1 second wait time is just
 too long (all requests are handled faster than 500ms and client software will
 not wait any longer).
 
 I propose to introduce new parameters in config file:
 1) redispatch always: when set, haproxy will always retry different backend
 after connection to the first one fails.
 2) Allow to override 1 second wait time between redispatches in config file
 (including the value of 0 == immediate).
 
 Right now I use the attached patch to overcome these restrictions.  It is ugly
 hack right now, but if you could include it into distribution in better form
 with tuning via config file I think everyone would benefit from it.
 
 Thanks.
 redispatch.txt



On 26 May 2014, at 18:21, Willy Tarreau w...@1wt.eu wrote:
 I think it definitely makes some sense. Probably not in its exact form but
 as something to work on. In fact, I think we should only apply the 1s retry
 delay when remaining on the same server, and avoid as much as possible to
 remain on the same server. For hashes or when there's a single server, we
 have no choice, but when doing round robin for example, we can pick another
 one. This is especially true for static servers or ad servers for example
 where fastest response time is preferred over sticking to the same server.
 


Yes, that was exactly my point. In many situations it is better to ask another
server immediately to get the fastest response, rather than trying to stick to
the same server as much as possible.


 
 Thanks,
 Willy
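For reference, the existing knobs that this discussion revolves around look roughly like the sketch below ("redispatch always" and a configurable inter-retry delay are the *proposals* in this thread, not current options; the timeout values are examples):

```
defaults
    mode http
    retries 3
    option redispatch        # last retry may go to another server
    timeout connect 100ms    # per the thread, also bounds the retry delay
    timeout client  10s
    timeout server  10s
```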



Re: Some thoughts about redispatch

2014-05-28 Thread Dmitry Sivachenko
On 28 May 2014, at 11:13, Willy Tarreau w...@1wt.eu wrote:

 Hi Dmitry,
 
 So I worked a bit on this subject. It's far from obvious. The problem
 is that at the moment where we decide of the 1s delay before a retry, we
 don't know if we'll end up on the same server or not.
 
 Thus I'm thinking about this :
 
  - if the connection is persistent (cookie, etc...), apply the current 
retry mechanism, as we absolutely don't want to break application
sessions ;


I agree.


 
  - otherwise, we redispatch starting on the first retry as you suggest. But
then we have two possibilities for the delay before reconnecting. If the
server farm has more than 1 server and the balance algorithm is not a hash
nor first, then we don't apply the delay because we expect to land on a
different server with a high probability. Otherwise we keep the delay
because we're almost certain to land on the same server.
 
 This way it continues to silently mask occasional server restarts and is
 optimally efficient in stateless farms when there's a possibility to quickly
 pick another server. Do you see any other point that needs specific care ?



I would export that magic 1 second as a configuration parameter (with 0 
meaning no delay).
After all, we could fail to connect not only because of server restart, but 
also because a switch or a router dropped a packet.
Other than that, sounds good.

Thanks!


Re: Some thoughts about redispatch

2014-05-28 Thread Dmitry Sivachenko

On 28 May 2014, at 12:49, Willy Tarreau w...@1wt.eu wrote:

 On Wed, May 28, 2014 at 12:35:17PM +0400, Dmitry Sivachenko wrote:
 - otherwise, we redispatch starting on the first retry as you suggest. But
   then we have two possibilities for the delay before reconnecting. If the
   server farm has more than 1 server and the balance algorithm is not a hash
   nor first, then we don't apply the delay because we expect to land on a
   different server with a high probability. Otherwise we keep the delay
   because we're almost certain to land on the same server.
 
 This way it continues to silently mask occasional server restarts and is
 optimally efficient in stateless farms when there's a possibility to quickly
 pick another server. Do you see any other point that needs specific care ?
 
 
 
 I would export that magic 1 second as a configuration parameter (with 0
 meaning no delay).
 
 I'm not sure we need to add another tunable just for this.


Okay.


 
 After all, we could fail to connect not only because of server restart, but
 also because a switch or a router dropped a packet.
 
 No, because a dropped packet is already handled by the TCP stack. Here the
 haproxy retry is really about retrying after an explicit failure (server
 responded that the port was closed). Also, the typical TCP retransmit
 interval for dropped packets in the network stack is 3s, so we're already
 3 times as fast as the TCP stack. I don't think it's reasonable to always
 kill this delay when retrying on the same server. We used to have that in
 the past and people were complaining that we were hammering servers for no
 reason, since there's little chance that a server which is not started will
 suddenly be ready in the next 100 microseconds.
 

I mean that with timeout connect=100ms (a good value for a local network IMO),
we are far below the TCP retransmit timeout; and when a switch drops a packet,
it drops randomly, so it may well transmit the next one even if we retry
immediately.

If we have a tunable (with a default of 1 second), people will have more
freedom in some situations.


Re: Some thoughts about redispatch

2014-05-28 Thread Dmitry Sivachenko
On 28 May 2014, at 13:06, Willy Tarreau w...@1wt.eu wrote:

 
 OK but then you make an interesting point with your very low timeout connect.
 What about using the min of timeout connect and 1s then ? Thus you can simply
 use your lower timeout connect as this new timeout. Would that be OK for you ?
 


Sounds reasonable (provided we are talking only about redispatch to the same 
server, not to the other one).


Re: Some thoughts about redispatch

2014-05-28 Thread Dmitry Sivachenko

On 28 May 2014, at 11:13, Willy Tarreau w...@1wt.eu wrote:

 
  - otherwise, we redispatch starting on the first retry as you suggest. But
then we have two possibilities for the delay before reconnecting. If the
server farm has more than 1 server and the balance algorithm is not a hash
nor first, then we don't apply the delay because we expect to land on a
different server with a high probability. 


BTW, I thought that with option redispatch we would *always* retry on another
server (if several servers are configured in the backend and the balance
algorithm is leastconn or round-robin).
Why do you say "with a high probability" here?


Re: [ANNOUNCE] haproxy-1.5-dev26 (and hopefully last)

2014-05-31 Thread Dmitry Sivachenko

On 29 May 2014, at 3:04, Willy Tarreau w...@1wt.eu wrote:
 
 Yes it does but it doesn't change its verdict. The test is really bogus I
 think :
 
    const char fmt[]   = "blah"; printf(fmt);  => OK
    const char *fmt    = "blah"; printf(fmt);  => KO
    const char * const fmt = "blah"; printf(fmt);  => KO
    const char fmt[][5] = { "blah" }; printf(fmt[0]);  => KO
 
 This is the difference between the first one and the last one which makes
 me say the test is bogus, because it's exactly the same.
 
 And worst thing is that I guess they added this check for people who
 mistakenly use printf(string). And as usual, they don't provide an easy
 way to say don't worry it's not an error, it's on purpose... This
 compiler is becoming more and more irritating, soon we'll have more
 lines of workarounds than useful lines of code.
 
 Worse in fact, the workaround is simple, it consists in removing the
 __attribute__((printf)) on the declaration line of chunk_appendf(),
 and thus *really* opening the door to real scary bugs.
 
 OK so I'll add a dummy argument to shut it up :-(



Just for reference: clang also warns here:

cc -Iinclude -Iebtree -Wall -O2 -pipe -fno-strict-aliasing   -DFREEBSD_PORTS
-DTPROXY -DCONFIG_HAP_CRYPT -DUSE_GETADDRINFO -DUSE_ZLIB  -DENABLE_POLL 
-DENABLE_KQUEUE -DUSE_OPENSSL   -DCONFIG_HAPROXY_VERSION=\"1.5-dev26-2e85840\" 
-DCONFIG_HAPROXY_DATE=\"2014/05/28\" -c -o src/dumpstats.o src/dumpstats.c
src/dumpstats.c:3059:26: warning: format string is not a string literal
  (potentially insecure) [-Wformat-security]
chunk_appendf(trash, srv_hlt_st[1]); /* DOWN (agent) */
  ^


FreeBSD clang version 3.4.1 (tags/RELEASE_34/dot1-final 208032) 20140512
Target: x86_64-unknown-freebsd10.0
Thread model: posix




Re: Some thoughts about redispatch

2014-06-16 Thread Dmitry Sivachenko

On 13 June 2014, at 20:00, Willy Tarreau w...@1wt.eu wrote:

 
 Done! I've just pushed this. In the end I preferred not to apply this
 principle to leastconn since there are some situations where leastconn
 farms can be highly unbalanced (after a server restart) so killing the
 delay could result in hammering the new fresh server harder.
 
 So this is what happens now :
 
 - if we're on a round-robin farm with more than 1 active server and the
   connection is not persistent, then we redispatch upon the first retry
   since we don't care at all about the server that we randomly picked.
 
 - when redispatching, we kill the delay if the farm is in RR with more
   than one active server.
 
 - the delay is always bound by the connect timeout so that sub-second
   timeouts will lead to shorter retries even for other cases.
 
 I just realized during my tests that this way you can have a retries
 value set to the number of servers and scan your whole farm looking for
 a server. Yeah this is ugly :-)
 


Hello,

after some tests it looks fine.

Thank you very much for implementing this!
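
For later readers, the behaviour Willy describes above interacts with the usual
retry settings. A minimal illustrative sketch (the values are examples, not
taken from the thread):

```
defaults
    mode http
    option redispatch       # redispatch to another server starting from the first retry
    retries 4               # with redispatch on a RR farm, up to 4 servers may be tried
    timeout connect 100ms   # the inter-retry delay is bounded by this timeout
```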




keep-alive on server side

2014-06-20 Thread Dmitry Sivachenko
Hello!

Is it possible to use HTTP keep-alive between haproxy and the backend even if 
the client does not use it?
The client closes its connection, but haproxy still maintains an open connection 
to the backend (subject to some timeout) and re-uses it when a new request arrives.

It would save some time on new connection setup between haproxy and the backend, 
which can be useful when the server responds very fast (and the connection rate 
is high).

Thanks.
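
For reference, server-side connection reuse independent of the client later
appeared in haproxy 1.6 as the `http-reuse` directive. A hedged sketch (backend
and server names are illustrative):

```
backend be_app
    mode http
    http-reuse safe        # share idle server-side connections across client requests
    server app1 app1:8080 check
```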


Feature request: redispatch-on-5xx

2014-06-23 Thread Dmitry Sivachenko
Hello!

One more thing which can be very useful in some setups: if a backend server 
returns an HTTP 5xx status code, it would be nice to have the ability to retry 
the same request on another server before reporting an error to the client (when 
you know for sure the same request can be sent multiple times without side effects).

Is it possible to make some configuration switch to allow such retries?

Thanks.
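
For later readers: this feature request was eventually addressed by the
`retry-on` keyword in haproxy 2.0. A hedged sketch, only safe when requests are
idempotent, as noted above:

```
defaults
    mode http
    option redispatch    # allow a retry to go to a different server
    retries 3
    retry-on 5xx         # haproxy 2.0+: also retry when the server returned a 5xx
```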


haproxy dumps core on reload

2014-08-02 Thread Dmitry Sivachenko
Hello,

I am running haproxy-1.5.2 on FreeBSD-10.  After some time of running, when I 
try to reload it (haproxy -sf oldpid), the old process dumps core on exit.
I experienced this with -dev21 as well and ignored it, hoping it was due to the 
old snapshot.  It happens only after some time of running; if I reload 
immediately after startup it does not crash.

I can send config file if necessary.

Core was generated by `haproxy'.
Program terminated with signal 11, Segmentation fault.
Reading symbols from /lib/libcrypt.so.5...done.
Loaded symbols for /lib/libcrypt.so.5
Reading symbols from /lib/libz.so.6...done.
Loaded symbols for /lib/libz.so.6
Reading symbols from /usr/lib/libssl.so.7...done.
Loaded symbols for /usr/lib/libssl.so.7
Reading symbols from /lib/libcrypto.so.7...done.
Loaded symbols for /lib/libcrypto.so.7
Reading symbols from /lib/libc.so.7...done.
Loaded symbols for /lib/libc.so.7
Reading symbols from /libexec/ld-elf.so.1...done.
Loaded symbols for /libexec/ld-elf.so.1
#0  0x004652b8 in process_session (t=0x813762860) at src/session.c:658
658 bref-ref = s-list.n;
(gdb) bt
#0  0x004652b8 in process_session (t=0x813762860) at src/session.c:658
#1  0x004125e1 in process_runnable_tasks (next=0x7fffe9b4)
at src/task.c:237
#2  0x004087ee in run_poll_loop () at src/haproxy.c:1304
#3  0x0040903d in main (argc=value optimized out, 
argv=value optimized out) at src/haproxy.c:1638


number of usable servers

2014-08-20 Thread Dmitry Sivachenko
Hello!

nbsrv() returns the number of usable servers for the backend *excluding* servers 
marked as backup.

Is there any way to get the number of usable servers for the backend 
*including* backup ones?

Thanks!
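
One hedged workaround (the backend names here are hypothetical): duplicate the
servers, without the `backup` keyword, in a backend that carries no traffic and
point `nbsrv()` at it. Since that backend has no backup servers, they are all
counted:

```
backend be_all                        # never receives traffic; used only for counting
    server s1 s1:80 track be_main/s1
    server b1 b1:80 track be_main/b1  # 'backup' in be_main, a plain server here

# nbsrv(be_all) now counts active and backup servers together
```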


Strange memory usage

2014-10-12 Thread Dmitry Sivachenko
Hello!

I am using haproxy-1.5.4 on FreeBSD-10.

Upon startup, it looks like this:
  PID USERNAME  THR PRI NICE   SIZERES STATE   C   TIMEWCPU COMMAND
 8459 www 1  370 86376K 28824K CPU16  16   0:16  26.56% haproxy

(about 80MB RES)

After few days of running, it looks like this:

  PID USERNAME  THR PRI NICE   SIZERES STATE   C   TIMEWCPU COMMAND
82720 www 1  360   244M   108M CPU29  29  29.2H  26.95% haproxy

(244MB RES).  

When I do a reload, I see that the old process is in the swread state for some 
time, and swap usage decreases by about 150MB when the old process finishes.

Does this mean there is a memory leak somewhere?  What additional information 
could I provide to help debug it?

Thanks!


Re: Strange memory usage

2014-10-13 Thread Dmitry Sivachenko

On 13 Oct 2014, at 14:37, Lukas Tribus luky...@hotmail.com wrote:

 Hi Dmitry,
 
 
 
 I am using haproxy-1.5.4 on FreeBSD-10.
 
 Upon startup, it looks like this:
 PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
 8459 www 1 37 0 86376K 28824K CPU16 16 0:16 26.56% haproxy
 
 (about 80MB RES)
 
 Its 80MB SIZE and 28M RES here.
 
 
 
 PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
 82720 www 1 36 0 244M 108M CPU29 29 29.2H 26.95% haproxy
 
 (244MB RES).
 
 Its 244M SIZE and 108M RES. So 108M of real RAM used here.
 
 


Yes, I am sorry, I meant SIZE.


 
 When I do reload, I see that old process is in swread state for some time, 
 and
 swap usage decreases for about 150MB when old process finishes.
 
 Does it mean memory leak is somewhere? Any additional information I could
 provide will be useful?
 
 Share you configuration, especially maxconn related stuff, the output of

defaults
log global
mode    tcp
balance roundrobin
maxconn 1
option  abortonclose
option  allbackups
#option  dontlog-normal
#option  dontlognull
option  redispatch
option  tcplog
#option  log-separate-errors
option socket-stats
retries 4
timeout check 500ms
timeout client 15s
timeout connect 100ms
timeout http-keep-alive 3s
timeout http-request 5s
timeout queue 1s
timeout server 15s
fullconn 3000
default-server inter 5s downinter 1s fastinter 500ms fall 3 rise 1 slowstart
 60s maxqueue 1 minconn 5 maxconn 150

I can send you full config in private e-mail if necessary.


 haproxy -vv


HA-Proxy version 1.5.4 2014/09/02
Copyright 2000-2014 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -pipe -pipe -g -DFREEBSD_PORTS
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_STATIC_PCRE=1 USE_PC1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1i-freebsd 6 Aug 2014
Running on OpenSSL version : OpenSSL 1.0.1i-freebsd 6 Aug 2014
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.33 2013-05-28
PCRE library supports JIT : yes
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY

Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.

 and possibly show info;show stat;show pools
 from the unix admin socket.
 

show info:

Name: HAProxy
Version: 1.5.4
Release_date: 2014/09/02
Nbproc: 1
Process_num: 1
Pid: 32459
Uptime: 4d 6h09m46s
Uptime_sec: 367786
Memmax_MB: 0
Ulimit-n: 131218
Maxsock: 131218
Maxconn: 65500
Hard_maxconn: 65500
CurrConns: 508
CumConns: 517986272
CumReq: 602369265
MaxSslConns: 0
CurrSslConns: 16
CumSslConns: 452700
Maxpipes: 0
PipesUsed: 0
PipesFree: 0
ConnRate: 2611
ConnRateLimit: 0
MaxConnRate: 3965
SessRate: 2611
SessRateLimit: 0
MaxSessRate: 3965
SslRate: 4
SslRateLimit: 0
MaxSslRate: 33
SslFrontendKeyRate: 2
SslFrontendMaxKeyRate: 34
SslFrontendSessionReuse_pct: 50
SslBackendKeyRate: 0
SslBackendMaxKeyRate: 0
SslCacheLookups: 74867
SslCacheMisses: 60826
CompressBpsIn: 0
CompressBpsOut: 0
CompressBpsRateLim: 0
ZlibMemUsage: 0
MaxZlibMemUsage: 0
Tasks: 1550
Run_queue: 1
Idle_pct: 55


show pools on freshly started process:

Dumping pools usage. Use SIGQUIT to flush them.
  - Pool pipe (32 bytes) : 19 allocated (608 bytes), 5 used, 3 users [SHARED]
  - Pool capture (64 bytes) : 0 allocated (0 bytes), 0 used, 1 users [SHARED]
  - Pool channel (80 bytes) : 766 allocated (61280 bytes), 672 used, 1 users 
[SHARED]
  - Pool task (112 bytes) : 1426 allocated (159712 bytes), 1378 used, 1 users 
[SHARED]
  - Pool uniqueid (128 bytes) : 0 allocated (0 bytes), 0 used, 1 users [SHARED]
  - Pool connection (320 bytes) : 424 allocated (135680 bytes), 360 used, 1 
users [SHARED]
  - Pool hdr_idx (416 bytes) : 383 allocated (159328 bytes), 335 used, 1 users 
[SHARED]
  - Pool session (864 bytes) : 385 allocated (332640 bytes), 337 used, 1 users 
[SHARED]
  - Pool requri (1024 bytes) : 51 allocated (52224 bytes), 22 used, 1 users 
[SHARED]
  - Pool buffer (32800 bytes) : 766 allocated (25124800 bytes), 672 used, 1 
users [SHARED]
Total: 10 pools, 26026272 bytes allocated, 22818112 used.


show pools after few days of uptime:
Dumping pools usage. Use SIGQUIT to flush them.
  - Pool pipe (32 bytes) : 961 allocated (30752 bytes), 5 used, 3 users [SHARED]
  - Pool capture (64 bytes) : 0 allocated (0 bytes), 0 used, 1 users [SHARED]
  - Pool channel (80 bytes) : 4136 allocated (330880 bytes), 648 used, 1 users 
[SHARED]
  - Pool task (112 bytes) : 3109 allocated (348208 bytes), 1367 

Re: building haproxy with lua support

2015-03-17 Thread Dmitry Sivachenko

On 17 March 2015, at 13:17, Thierry FOURNIER tfourn...@haproxy.com wrote:
 
 On Tue, 17 Mar 2015 08:38:23 +0100
 Baptiste bed...@gmail.com wrote:
 
 On Tue, Mar 17, 2015 at 1:51 AM, Joe Williams williams@gmail.com wrote:
 List,
 
 I seem to be running into issues building haproxy with lua support using
 HEAD. Any thoughts?
 
 joe@ubuntu:~/haproxy$ make DEBUG=-ggdb CFLAGS=-O0 TARGET=linux2628
 USE_LUA=yes LUA_LIB=/opt/lua53/lib/ LUA_INC=/opt/lua53/include/ LDFLAGS=-ldl
 snip
 /opt/lua53/lib//liblua.a(loadlib.o): In function `lookforfunc':
 loadlib.c:(.text+0x502): undefined reference to `dlsym'
 loadlib.c:(.text+0x549): undefined reference to `dlerror'
 loadlib.c:(.text+0x576): undefined reference to `dlopen'
 loadlib.c:(.text+0x5ed): undefined reference to `dlerror'
 /opt/lua53/lib//liblua.a(loadlib.o): In function `gctm':
 loadlib.c:(.text+0x781): undefined reference to `dlclose'
 collect2: error: ld returned 1 exit status
 make: *** [haproxy] Error 1
 
 joe@ubuntu:~/haproxy$ /opt/lua53/bin/lua -v
 Lua 5.3.0  Copyright (C) 1994-2015 Lua.org, PUC-Rio
 
 Thanks!
 
 -Joe
 
 
 Thank you,
 
 In fact I build with SSL activated, and libssl is already
 linked with the dl library, so I don't see this compilation error.
 
 It is fixed, the patch is in attachment.


This patch will break FreeBSD (and other OSes) which do not have libdl.


Re: Balancing requests and backup servers

2015-02-27 Thread Dmitry Sivachenko

On 27 Feb 2015, at 2:56, Baptiste bed...@gmail.com wrote:
 
 On Thu, Feb 26, 2015 at 3:58 PM, Dmitry Sivachenko trtrmi...@gmail.com 
 wrote:
 Hello!
 
 Given the following configuration
 
 backend BC
 option allbackups
 server s1 maxconn 30 check
 server s2 maxconn 30 check
 server s3 maxconn 30 check
 server b1 maxconn 30 check backup
 server b2 maxconn 30 check backup
 
 imagine that s1, s2 and s3 have 30 active sessions and (tcp) checks succeed.
 
 
 Hi Dmitry.
 
 Let me answer inline:
 
 1) subsequent requests will be balanced between b1 and b2 because s1, s2 and 
 s3 reached their maxconn
 
 nope, they'll be queued on the backend until one of the servers has a free slot
 b1 and b2 will be used when ALL of s1, s2 and s3 are operationally DOWN.


Okay, then how can I achieve the described setup?
I want to balance requests between s1, s2, s3 while they have fewer than N 
active sessions and route the extra requests to b1 and b2.



 
 2) nbsrv(BC) will be still equal to 3 because checks for s1, s2 and s3 still 
 succeed
 
 nope, nbsrv is 5, since b1 and b2 should be counted as well.
 

In fact backup servers do NOT count in nbsrv(); I am not sure whether that is a 
bug or a feature.




Re: Balancing requests and backup servers

2015-02-27 Thread Dmitry Sivachenko

On 27 Feb 2015, at 11:52, Baptiste bed...@gmail.com wrote:
 
 On Fri, Feb 27, 2015 at 9:02 AM, Dmitry Sivachenko trtrmi...@gmail.com 
 wrote:
 
 On 27 Feb 2015, at 2:56, Baptiste bed...@gmail.com wrote:
 
 On Thu, Feb 26, 2015 at 3:58 PM, Dmitry Sivachenko trtrmi...@gmail.com 
 wrote:
 Hello!
 
 Given the following configuration
 
 backend BC
 option allbackups
 server s1 maxconn 30 check
 server s2 maxconn 30 check
 server s3 maxconn 30 check
 server b1 maxconn 30 check backup
 server b2 maxconn 30 check backup
 
 imagine that s1, s2 and s3 have 30 active sessions and (tcp) checks 
 succeed.
 
 
 Hi Dmitry.
 
 Let me answer inline:
 
 1) subsequent requests will be balanced between b1 and b2 because s1, s2 
 and s3 reached it's maxconn
 
 nope, they'll be queued on the backend until one of the server has a free 
 slot
 b1 and b2 will be used when ALL s1, s2 and s3 will be operationnaly DOWN.
 
 
 Okay, then how can I achieve the described setup?
 I want to balance requests between s1, s2, s3 until they have less than N 
 active sessions and route extra requests to b1 and b2.
 
 
 Two solutions:
 
 - use balance first load-balancing algorithm and remove the backup keyword
 - create 2 backends, one with 3 servers, one with two, use the 'queue'
 fetch to get the number of queued request on backend1 and route to
 backend 2 if the number is greater than 0.
 


BTW, what if I have maxqueue 1 in default-server?
If the queue is full for all servers, will that backend use the backup servers?
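
Baptiste's second solution above (two backends plus the `queue` fetch) can be
sketched roughly like this; the backend names and ports are illustrative:

```
frontend fe
    bind *:80
    # overflow to the far farm as soon as requests start queueing on the near farm
    use_backend bk_far if { queue(bk_near) gt 0 }
    default_backend bk_near

backend bk_near
    server s1 s1:80 maxconn 30 check
    server s2 s2:80 maxconn 30 check
    server s3 s3:80 maxconn 30 check

backend bk_far
    server b1 b1:80 maxconn 30 check
    server b2 b2:80 maxconn 30 check
```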


Re: Question on distribution not according to backend weight

2015-05-08 Thread Dmitry Sivachenko

On 8 May 2015, at 4:54, Frank Schubert f.schub...@gmail.com wrote:
 
 Hi,
 
 first of all let me thank you for an amazing piece of software. I like 
 working with haproxy a lot!
 
 My problem: The weight setting for a backend seems to be ignored when the max 
 concurrent session setting is reached. I was expecting the connection to get 
 queued for this backend but it seems to flip over to the host that has 
 connections available.
 
 I simplified my setup to 2 backend smtp servers, one with weight 100, the 
 other with weight 1. The max connection setting is set to 2. I'm opening 
 multiple SMTP connections simultaneously to this haproxy server. Attached 
 screenshot from haproxy stats shows that backend with weight 1 gets way too 
 many sessions.
 
 Increasing max concurrent sessions to 5 or more seem to prevent this 
 behavior, but I'm not totally sure about this.
 
 I would like to have only a small fraction (100:1) of requests go to the 
 backend with the lower weight and wonder how to do this correctly. It's more 
 important to me to have a defined distribution of connections going to 
 backends than answering requests as quickly as possible regardless of what 
 backend is used.
 
 haproxy-distribution-ignores-weight.jpg
 ​


This screenshot also illustrates the incorrect Max Request Rate calculation I 
reported 2 years ago:
http://www.serverphorums.com/read.php?10,623596




Re: Build failure of 1.6 and openssl 0.9.8

2015-10-22 Thread Dmitry Sivachenko

> On 22 Oct 2015, at 10:44, Willy Tarreau <w...@1wt.eu> wrote:
> 
> Hello Dmitry,
> 
> On Thu, Oct 22, 2015 at 10:40:45AM +0300, Dmitry Sivachenko wrote:
>> 1.6.1 still does not build with OpenSSL < 1.0:
>> 
>> src/ssl_sock.o: In function `ssl_sock_do_create_cert':
>> ssl_sock.c:(.text+0x295b): undefined reference to 
>> `EVP_PKEY_get_default_digest_nid'
>> Makefile:760: recipe for target 'haproxy' failed
>> 
>> So is it intended behavior?
> 
> It's neither intended nor not intended, it's just that I was waiting for
> Marcus' confirmation that the patch fixed the issue for him, and forgot
> about this patch while waiting for a response. Can you confirm on your
> side that the patch fixes the issue for you ? If so I'm willing to merge
> the fix immediately. I prefer to be careful because on my side openssl
> 0.9.8 doesn't break so I want to be sure that there isn't a second level
> of breakage after this one.
> 


Aha, no problem; I thought it was supposed to be fixed before 1.6.1.

I tried a patch in this thread 
(0002-BUILD-ssl-fix-build-error-introduced-in-commit-7969a.patch).

It does fix the build error (FreeBSD-9, OpenSSL 0.9.8q).  Though there are the 
following warnings:

src/ssl_sock.c: In function 'ssl_sock_load_cert_chain_file':
src/ssl_sock.c:1623: warning: dereferencing type-punned pointer will break 
strict-aliasing rules
src/ssl_sock.c:1636: warning: dereferencing type-punned pointer will break 
strict-aliasing rules
src/ssl_sock.c: In function 'ssl_sock_srv_verifycbk':
src/ssl_sock.c:2264: warning: dereferencing type-punned pointer will break 
strict-aliasing rules
src/ssl_sock.c:2278: warning: dereferencing type-punned pointer will break 
strict-aliasing rules





Re: Build failure of 1.6 and openssl 0.9.8

2015-10-22 Thread Dmitry Sivachenko

> On 19 Oct 2015, at 17:29, Willy Tarreau wrote:
> 
> Hi Christopher,
> 
> On Mon, Oct 19, 2015 at 03:05:05PM +0200, Christopher Faulet wrote:
>> Damned! I generated a huge amount of disturbances with my patches! Really 
>> sorry for that.
> 
> Shit happens sometimes. I had my hours of fame with option
> http-send-name-header merged in 1.4-stable years ago, and that was so badly
> designed that it still managed to cause a lot of trouble during 1.6-dev.
> 
>> Add a #ifdef to check the OpenSSL version seems to be a good fix. I 
>> don't know if there is a workaround to do the same than 
>> EVP_PKEY_get_default_digest_nid() for old OpenSSL versions.
> 
> I was unsure how the code was supposed to work given that two blocks
> were replaced by two others and I was unsure whether there was a
> dependence. So as long as we can fall back to the pre-patch behaviour
> I'm perfectly fine.
> 
>> This function is used to get default signature digest associated to the 
>> private key used to sign generated X509 certificates. It is called when 
>> the private key differs than EVP_PKEY_RSA, EVP_PKEY_DSA and EVP_PKEY_EC. 
>> It should be enough for most of cases (maybe all cases ?).
> 
> OK great.
> 
>> By the way, I attached a patch to fix the bug.
> 
> Thank you. Marcus, can you confirm that it's OK for you with this fix so
> that I can merge it ?



Hello,

1.6.1 still does not build with OpenSSL < 1.0:

src/ssl_sock.o: In function `ssl_sock_do_create_cert':
ssl_sock.c:(.text+0x295b): undefined reference to 
`EVP_PKEY_get_default_digest_nid'
Makefile:760: recipe for target 'haproxy' failed


So is it intended behavior?


Re: Build failure of 1.6 and openssl 0.9.8

2015-10-22 Thread Dmitry Sivachenko

> On 22 Oct 2015, at 13:54, Marcus Rueckert <da...@web.de> wrote:
> 
> On 2015-10-22 13:38:45 +0300, Dmitry Sivachenko wrote:
>> I see this warnings with gcc-4.2.1 (shipped with FreeBSD-9), but no warnings 
>> with clang 3.6.1.
>> I see a lot of such warnings with gcc48, but it seems expected according to 
>> comments in Makefile:
>>  Compiler-specific flags that may be used to disable some negative over-
>> # optimization or to silence some warnings. -fno-strict-aliasing is needed 
>> with
>> # gcc >= 4.4.
> 
> 4.3.4 on SLES 11 SP 4
> 4.8.3 on openSUSE 13.2
> 5.1.1 on openSUSE Tumbleweed
> 
> https://build.opensuse.org/package/show/server:http/haproxy (succeeded
> links on the right side)


There is  -fno-strict-aliasing option in your build logs.


Re: Build failure of 1.6 and openssl 0.9.8

2015-10-22 Thread Dmitry Sivachenko

> On 22 Oct 2015, at 14:12, Marcus Rueckert <da...@web.de> wrote:
> 
> On 2015-10-22 13:59:09 +0300, Dmitry Sivachenko wrote:
>>> On 22 Oct 2015, at 13:54, Marcus Rueckert <da...@web.de> wrote:
>>> 
>>> On 2015-10-22 13:38:45 +0300, Dmitry Sivachenko wrote:
>>>> I see this warnings with gcc-4.2.1 (shipped with FreeBSD-9), but no 
>>>> warnings with clang 3.6.1.
>>>> I see a lot of such warnings with gcc48, but it seems expected according 
>>>> to comments in Makefile:
>>>>  Compiler-specific flags that may be used to disable some negative 
>>>> over-
>>>> # optimization or to silence some warnings. -fno-strict-aliasing is needed 
>>>> with
>>>> # gcc >= 4.4.
>>> 
>>> 4.3.4 on SLES 11 SP 4
>>> 4.8.3 on openSUSE 13.2
>>> 5.1.1 on openSUSE Tumbleweed
>>> 
>>> https://build.opensuse.org/package/show/server:http/haproxy (succeeded
>>> links on the right side)
>> 
>> 
>> There is  -fno-strict-aliasing option in your build logs.
> 
> But it is set by the upstream Makefile. so unless you break the CFLAGS
> of the makefile. shouldnt you have that too?
> 


I override the CFLAGS variable during make invocation (because otherwise the 
build system does not respect the CFLAGS environment variable), as well as CC 
(FreeBSD does not have "gcc" at all).





Re: Build failure of 1.6 and openssl 0.9.8

2015-10-22 Thread Dmitry Sivachenko

> On 22 Oct 2015, at 13:14, Marcus Rueckert wrote:
> 
> 3. i can not reproduce the strict alias warnings.
> 

I see these warnings with gcc-4.2.1 (shipped with FreeBSD-9), but no warnings 
with clang 3.6.1.
I see a lot of such warnings with gcc48, but that seems expected according to 
the comments in the Makefile:
# Compiler-specific flags that may be used to disable some negative over-
# optimization or to silence some warnings. -fno-strict-aliasing is needed with
# gcc >= 4.4.




Re: Build failure of 1.6 and openssl 0.9.8

2015-10-22 Thread Dmitry Sivachenko

> On 22 Oct 2015, at 11:45, Willy Tarreau <w...@1wt.eu> wrote:
> 
> On Thu, Oct 22, 2015 at 11:31:01AM +0300, Dmitry Sivachenko wrote:
>> 
>>> On 22 Oct 2015, at 10:44, Willy Tarreau <w...@1wt.eu> wrote:
>>> 
>>> Hello Dmitry,
>>> 
>>> On Thu, Oct 22, 2015 at 10:40:45AM +0300, Dmitry Sivachenko wrote:
>>>> 1.6.1 still does not build with OpenSSL < 1.0:
>>>> 
>>>> src/ssl_sock.o: In function `ssl_sock_do_create_cert':
>>>> ssl_sock.c:(.text+0x295b): undefined reference to 
>>>> `EVP_PKEY_get_default_digest_nid'
>>>> Makefile:760: recipe for target 'haproxy' failed
>>>> 
>>>> So is it intended behavior?
>>> 
>>> It's neither intended nor not intended, it's just that I was waiting for
>>> Marcus' confirmation that the patch fixed the issue for him, and forgot
>>> about this patch while waiting for a response. Can you confirm on your
>>> side that the patch fixes the issue for you ? If so I'm willing to merge
>>> the fix immediately. I prefer to be careful because on my side openssl
>>> 0.9.8 doesn't break so I want to be sure that there isn't a second level
>>> of breakage after this one.
>>> 
>> 
>> 
>> Aha, no problem, I thought it is supposed to be fixed before 1.6.1.
>> 
>> I tried a patch in this thread 
>> (0002-BUILD-ssl-fix-build-error-introduced-in-commit-7969a.patch).
>> 
>> It does fix the build error (FreeBSD-9, OpenSSL 0.9.8q).  Though there is 
>> the following warning:
>> 
>> src/ssl_sock.c: In function 'ssl_sock_load_cert_chain_file':
>> src/ssl_sock.c:1623: warning: dereferencing type-punned pointer will break 
>> strict-aliasing rules
>> src/ssl_sock.c:1636: warning: dereferencing type-punned pointer will break 
>> strict-aliasing rules
>> src/ssl_sock.c: In function 'ssl_sock_srv_verifycbk':
>> src/ssl_sock.c:2264: warning: dereferencing type-punned pointer will break 
>> strict-aliasing rules
>> src/ssl_sock.c:2278: warning: dereferencing type-punned pointer will break 
>> strict-aliasing rules
> 
> Do you have other patches applied ? Here these line numbers only match
> closing braces so I have no idea what they correspond to :-/
> 

No, this is haproxy-1.6.1 tarball + this patch applied.

BTW, by default FreeBSD uses -fno-strict-aliasing, so this warning was most 
likely present before and I just did not see it; I suppose it is not a problem.

Also:

src/stick_table.c: In function 'smp_to_stkey':
src/stick_table.c:490: warning: dereferencing type-punned pointer will break 
strict-aliasing rules





Re: [patch] Enable USE_CPU_AFFINITY by default on FreeBSD

2015-11-04 Thread Dmitry Sivachenko

> On 04 Nov 2015, at 23:09, Renato Botelho  wrote:
> 
> Change is being used in pfSense and also was added to FreeBSD ports tree.
> 
> Should I send a separate patch for 1.6 branch?
> 
> Thanks
> 
> <0001-Enable-USE_CPU_AFFINITY-by-default-on-FreeBSD.patch>


I would also add USE_GETADDRINFO by default; we use it unconditionally in the 
FreeBSD ports tree too.




Re: About maxconn and minconn

2015-10-08 Thread Dmitry Sivachenko

> On 7 Oct 2015, at 16:18, Dmitry Sivachenko <trtrmi...@gmail.com> wrote:
> 
> Hello,
> 
> I am using haproxy-1.5.14 and sometimes I see the following errors in the log:
> 
> Oct  7 08:33:03 srv1 haproxy[77565]: unix:1 [07/Oct/2015:08:33:02.428] 
> MT-front MT_RU_EN-back/ 0/1000/-1/-1/1000 503 212 - - sQ-- 
> 125/124/108/0/0 0/28 "POST /some/url HTTP/1.1"
> (many similar at one moment)
> 
> Common part in these errors is "1000" in Tw and Tt, and "sQ--" termination 
> state.
> 
> Here is the relevant part on my config (I can post more if needed):
> 
> defaults
>balance roundrobin
>maxconn 1
>timeout queue 1s
>fullconn 3000
>default-server inter 5s downinter 1s fastinter 500ms fall 3 rise 1 
> slowstart 60s maxqueue 1 minconn 5 maxconn 150
> 
> backend MT_RU_EN-back
>mode http
>timeout server 30s
>server mt1-34 mt1-34:19016 track MT-back/mt1-34 weight 38
>server mt1-35 mt1-35:19016 track MT-back/mt1-35 weight 38
>
> 
> So this error log indicates that the request was sitting in the queue for 
> timeout queue==1s and its turn did not come.
> 
> In the stats web interface for MT_RU_EN-back backend I see the following 
> numbers:
> 
> Sessions: limit=3000, max=126 (for the whole backend)
> Limit=150, max=5 or 6 (for each server)


I also forgot to mention the "Queue" values from the stats web interface:
Queue max = 0 for all servers
Queue limit = 1 for all servers (as configured in default-server)
So according to the stats, the queue was never used.


Right under the servers list there is a "Backend" line, which has a value of 
"29" in the "Queue Max" column.
What does it mean?


> 
> If I understand minconn/maxconn meaning right, each server should accept up 
> to min(150, 3000/18) connections
> 
> So according to stats the load were far from limits.
> 
> What can be the cause of such errors?
> 
> Thanks!




Re: About maxconn and minconn

2015-10-08 Thread Dmitry Sivachenko

> On 8 Oct 2015, at 11:03, Baptiste wrote:
> 
> Hi Dmitry,
> 
> 
> 
> Now the question is why such situation. Simply because your queue
> management is improperly setup (either increase minconn and or
> decrease fullconn) and combined to a server which might be quite slow
> to answer leading HAProxy to use queues.
> 

What do you mean by "improperly setup"?  From the stats I provided I got the 
impression that no limits were reached that would cause a request to end up in 
the waiting queue.

Or am I wrong?

(I will send you full config and logs in private soon)




Re: About maxconn and minconn

2015-10-08 Thread Dmitry Sivachenko

> On 8 Oct 2015, at 3:51, Igor Cicimov wrote:
> 
> 
> The only thing I can think of is you have left net.core.somaxconn = 128, try 
> increasing it to 4096 lets say to match your planned capacity of 3000
> 


I forgot to mention that I am using FreeBSD; I don't think it has a similar 
sysctl.


Re: About maxconn and minconn

2015-10-08 Thread Dmitry Sivachenko

> On 8 Oct 2015, at 15:30, Daren Sefcik <dsef...@hightechhigh.org> wrote:
> 
> How about kern.ipc.somaxconn


I have this set to 4096, and when it overflows it prints a line in the log 
("Listen queue overflow...").

I have no such errors in the logs.

Moreover, connections sitting in the socket accept queue are not seen by 
haproxy, so haproxy cannot count this time and trigger timeouts.



> 
> On Thu, Oct 8, 2015 at 5:22 AM, Dmitry Sivachenko <trtrmi...@gmail.com> wrote:
> 
> > On 8 Oct 2015, at 3:51, Igor Cicimov <ig...@encompasscorporation.com> wrote:
> >
> >
> > The only thing I can think of is you have left net.core.somaxconn = 128, 
> > try increasing it to 4096 lets say to match your planned capacity of 3000
> >
> 
> 
> I forgot to mention that I am using FreeBSD, I don't think it has similar 
> sysctl.
> 




About maxconn and minconn

2015-10-07 Thread Dmitry Sivachenko
Hello,

I am using haproxy-1.5.14 and sometimes I see the following errors in the log:

Oct  7 08:33:03 srv1 haproxy[77565]: unix:1 [07/Oct/2015:08:33:02.428] MT-front 
MT_RU_EN-back/ 0/1000/-1/-1/1000 503 212 - - sQ-- 125/124/108/0/0 0/28 
"POST /some/url HTTP/1.1"
(many similar at one moment)

Common part in these errors is "1000" in Tw and Tt, and "sQ--" termination 
state.

Here is the relevant part on my config (I can post more if needed):

defaults
balance roundrobin
maxconn 1
timeout queue 1s
fullconn 3000
default-server inter 5s downinter 1s fastinter 500ms fall 3 rise 1 
slowstart 60s maxqueue 1 minconn 5 maxconn 150

backend MT_RU_EN-back
mode http
timeout server 30s
server mt1-34 mt1-34:19016 track MT-back/mt1-34 weight 38
server mt1-35 mt1-35:19016 track MT-back/mt1-35 weight 38


So this error log indicates that the request was sitting in the queue for 
timeout queue==1s and its turn did not come.

In the stats web interface for MT_RU_EN-back backend I see the following 
numbers:

Sessions: limit=3000, max=126 (for the whole backend)
Limit=150, max=5 or 6 (for each server)

If I understand the minconn/maxconn meaning right, each server should accept up 
to min(150, 3000/18) connections.

So according to the stats, the load was far from the limits.

What can be the cause of such errors?

Thanks!
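
For reference, with minconn set the effective per-server limit is dynamic, as
described in the haproxy documentation for the server keywords. A rough worked
example with the values from this config (my arithmetic, not from the thread):

```
# dynamic per-server limit when minconn is set:
#   limit = max(minconn, maxconn * backend_sessions / fullconn), capped at maxconn
# with minconn 5, maxconn 150, fullconn 3000 and ~126 concurrent backend sessions:
#   150 * 126 / 3000 = 6.3  ->  roughly max(5, 6) = 6 connections per server
# which matches the "max=5 or 6 (for each server)" seen in the stats page; the
# remaining requests are queued, bounded by maxqueue 1 and timeout queue 1s.
```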


TCP_NODELAY in tcp mode

2015-08-27 Thread Dmitry Sivachenko
Hello,

we have a client-server application which establishes a long-lived TCP 
connection and generates a lot of small request-response packets which need to 
be processed very fast.
Setting TCP_NODELAY on the sockets speeds things up by about 3 times.

Now I want to put haproxy in the middle so it balances traffic between 
several servers.

Something like 

defaults
 mode tcp

frontend shard0-front
 bind *:9000
 default_backend shard0-back

backend shard0-back
 server srv1 srv1:3456 check
 server srv2 srv2:3456 check

In such a configuration the application slows down significantly.  I suspect 
that setting TCP_NODELAY on the frontend and backend sockets would help, as it 
did without haproxy involved.  Is there any parameter which allows me to set 
the TCP_NODELAY option?

Thanks!


Re: TCP_NODELAY in tcp mode

2015-08-28 Thread Dmitry Sivachenko

On 28 Aug 2015, at 12:12, Lukas Tribus luky...@hotmail.com wrote:
 
 Hello,
 
 The flag TCP_NODELAY is unconditionally set on each TCP (ipv4/ipv6)
 connections between haproxy and the server, and beetwen the client and
 haproxy.
 
 That may be true, however HAProxy uses MSG_MORE to disable and
 enable Nagle based on the individual situation.
 
 Use option http-no-delay [1] to disable Nagle unconditionally.


This option requires HTTP mode, but I must use TCP mode because our protocol is 
not HTTP (some custom protocol over TCP)


 
 
 
 Regards,
 
 Lukas
 
 
 [1] 
 http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4-option%20http-no-delay





Re: TCP_NODELAY in tcp mode

2015-08-28 Thread Dmitry Sivachenko

 On 28 Aug 2015, at 12:18, Lukas Tribus luky...@hotmail.com wrote:
 
 Use option http-no-delay [1] to disable Nagle unconditionally.
 
 
 This option requires HTTP mode, but I must use TCP mode because our
 protocol is not HTTP (some custom protocol over TCP)
 
 Ok, you may be hitting a bug. Can you provide haproxy -vv output?
 


What do you mean?  I get the following warning when trying to use this option 
in tcp backend/frontend:

[WARNING] 239/121424 (71492) : config : 'option http-no-delay' ignored for 
frontend 'shard0-front' as it requires HTTP mode.
[WARNING] 239/121424 (71492) : config : 'option http-no-delay' ignored for 
backend 'shard0-back' as it requires HTTP mode.

So it is clear that this option is intended for HTTP mode only.  For reference:

HA-Proxy version 1.5.11 2015/01/31
Copyright 2000-2015 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -pipe -O2 -fno-strict-aliasing -pipe -fstack-protector 
-DFREEBSD_PORTS
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_STATIC_PCRE=1 
USE_PCRE_JIT=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1l-freebsd 15 Jan 2015
Running on OpenSSL version : OpenSSL 1.0.1l-freebsd 15 Jan 2015
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.35 2014-04-04
PCRE library supports JIT : yes
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY

Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.





Re: haproxy resolvers "nameserver: can't connect socket" (on FreeBSD)

2015-09-07 Thread Dmitry Sivachenko

> On 7 Sep 2015, at 9:36, Lukas Tribus  wrote:
> 
> 
> 
> Best would be to strace this, but this is Freebsd amd64,
> so that doesn't work. Can you trace the syscalls with
> the strace equivalent at least?


It fails that way:

socket(PF_INET,SOCK_DGRAM,17) = 4 (0x4)
connect(4,{ AF_INET 8.8.8.8:53 },128) ERR#22 'Invalid argument'

3rd argument for connect() looks wrong for ipv4:

ERRORS
 The connect() system call fails if:

 [EINVAL]   The namelen argument is not a valid length for the
address family.




Re: haproxy resolvers "nameserver: can't connect socket" (on FreeBSD)

2015-09-07 Thread Dmitry Sivachenko

> On 7 Sep 2015, at 1:46, PiBa-NL  wrote:
> 
> Hi guys,
> 
> Hoping someone can shed some light on what i might be doing wrong?
> Or is there something in FreeBSD that might be causing the trouble with the 
> new resolvers options?
> 
> Thanks in advance.
> PiBa-NL
> 
> haproxy -f /var/haproxy.cfg -d
> [ALERT] 248/222758 (22942) : SSLv3 support requested but unavailable.
> Note: setting global.maxconn to 2000.
> Available polling systems :
> kqueue : pref=300,  test result OK
>   poll : pref=200,  test result OK
> select : pref=150,  test result FAILED


Also interesting: why does the test for select show FAILED here, while in your 
haproxy -vv output below the same test result is OK?


> Total: 3 (2 usable), will use kqueue.
> Using kqueue() as the polling mechanism.
> [ALERT] 248/222808 (22942) : Starting [globalresolvers/googleA] nameserver: 
> can't connect socket.
> 
> 
> defaults
>modehttp
>timeout connect3
>timeout server3
>timeout client3
> 
> resolvers globalresolvers
>nameserver googleA 8.8.8.8:53
>resolve_retries   3
>timeout retry 1s
>hold valid   10s
> 
> listen www
>bind 0.0.0.0:80
>logglobal
>servergooglesite www.google.com:80 check inter 1000 resolvers 
> globalresolvers
> 
> 
> # uname -a
> FreeBSD OPNsense.localdomain 10.1-RELEASE-p18 FreeBSD 10.1-RELEASE-p18 #0 
> 71275cd(stable/15.7): Sun Aug 23 20:32:26 CEST 2015 
> root@sensey64:/usr/obj/usr/src/sys/SMP  amd64
> 
> # haproxy -vv
> [ALERT] 248/221747 (72984) : SSLv3 support requested but unavailable.
> HA-Proxy version 1.6-dev4-b7ce424 2015/09/03
> Copyright 2000-2015 Willy Tarreau 
> 
> Build options :
>  TARGET  = freebsd
>  CPU = generic
>  CC  = cc
>  CFLAGS  = -O2 -pipe -fstack-protector -fno-strict-aliasing -DFREEBSD_PORTS
>  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_LUA=1 
> USE_STATIC_PCRE=1 USE_PCRE_JIT=1
> 
> Default settings :
>  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200
> 
> Encrypted password support via crypt(3): yes
> Built with zlib version : 1.2.8
> Compression algorithms supported : identity("identity"), deflate("deflate"), 
> raw-deflate("deflate"), gzip("gzip")
> Built with OpenSSL version : OpenSSL 1.0.2d 9 Jul 2015
> Running on OpenSSL version : OpenSSL 1.0.2d 9 Jul 2015
> OpenSSL library supports TLS extensions : yes
> OpenSSL library supports SNI : yes
> OpenSSL library supports prefer-server-ciphers : yes
> Built with PCRE version : 8.37 2015-04-28
> PCRE library supports JIT : yes
> Built with Lua version : Lua 5.3.0
> Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY
> 
> Available polling systems :
> kqueue : pref=300,  test result OK
>   poll : pref=200,  test result OK
> select : pref=150,  test result OK
> Total: 3 (3 usable), will use kqueue.
> 
> 




Re: TCP_NODELAY in tcp mode

2015-09-08 Thread Dmitry Sivachenko

> On 30 Aug 2015, at 22:29, Willy Tarreau  wrote:
> 
> On Fri, Aug 28, 2015 at 11:40:18AM +0200, Lukas Tribus wrote:
 Ok, you may be hitting a bug. Can you provide haproxy -vv output?
 
>>> 
>>> 
>>> What do you mean? I get the following warning when trying to use this
>>> option in tcp backend/frontend:
>> 
>> Yes I know (I didn't realize you are using tcp mode). I don't mean the
>> warning is the bug, I mean the tcp mode is supposed to not cause any
>> delays by default, if I'm not mistaken.
> 
> You're not mistaken, tcp_nodelay is unconditional in TCP mode and MSG_MORE
> is not used there since we never know if more data follows. In fact there's
> only one case where it can happen, it's when data wrap at the end of the
> buffer and we want to send them together.
> 


Hello,

yes, you are right, the problem is not TCP_NODELAY.  I performed some testing:

Under low network load, passing a TCP connection through haproxy adds almost 
zero overhead.
When the load grows, at some point haproxy starts to slow things down.

In our testing scenario the application establishes a long-lived TCP connection 
to the server and sends many small requests.
The typical traffic at which adding haproxy in the middle causes a measurable 
slowdown is ~30MB/sec, ~100kpps.

haproxy process CPU usage is about 15-20%.


Re: halog Makefile

2015-09-08 Thread Dmitry Sivachenko

> On 8 Sep 2015, at 17:26, Dmitry Sivachenko <trtrmi...@gmail.com> wrote:
> 
> Hello,
> 
> Can you please apply the following patch to contrib/halog/Makefile not to 
> override $CC (on some systems compiler is called "cc", not "gcc"):
> 
> --- Makefile.orig   2015-09-08 17:15:37.423168000 +0300
> +++ Makefile2015-09-08 17:14:57.938196000 +0300
> @@ -1,7 +1,7 @@
> EBTREE_DIR = ../../ebtree
> INCLUDE  = -I../../include -I$(EBTREE_DIR)
> 
> -CC   = gcc
> +CC   ?= gcc
> 
> # note: it is recommended to also add -fomit-frame-pointer on i386
> OPTIMIZE = -O3
> 


Oh, please forget this: the same CC is used in all makefiles, so it is simpler to 
redefine CC in the make invocation.
Sorry for the noise.


About CPU usage

2015-09-10 Thread Dmitry Sivachenko
Hello,

I have haproxy-1.5.11 with a single frontend passing requests to a single backend 
in TCP mode (sample config).
The application establishes several long-lived TCP connections and sends a lot of 
small requests over them.

In my test case I have 2 simultaneous TCP connections producing about 3 MB/sec, 
20 kpps input (as shown by netstat on backend machine) and approximately the 
same output.

haproxy process consumes about 20% of CPU core (I have a machine with 2 Intel 
Xeon E5-2650 v2 @ 2.60GHz).

In my understanding, such CPU usage is rather high for this relatively low load.

I tried both FreeBSD and Linux and see similar results (I am interested in 
FreeBSD though).

Anything obvious I can tune?

Thanks.


Re: TCP_NODELAY in tcp mode

2015-09-11 Thread Dmitry Sivachenko

> On 8 Sep 2015, at 18:33, Willy Tarreau <w...@1wt.eu> wrote:
> 
> Hi Dmitry,
> 
> On Tue, Sep 08, 2015 at 05:25:33PM +0300, Dmitry Sivachenko wrote:
>> 
>>> On 30 Aug 2015, at 22:29, Willy Tarreau <w...@1wt.eu> wrote:
>>> 
>>> On Fri, Aug 28, 2015 at 11:40:18AM +0200, Lukas Tribus wrote:
>>>>>> Ok, you may be hitting a bug. Can you provide haproxy -vv output?
>>>>>> 
>>>>> 
>>>>> 
>>>>> What do you mean? I get the following warning when trying to use this
>>>>> option in tcp backend/frontend:
>>>> 
>>>> Yes I know (I didn't realize you are using tcp mode). I don't mean the
>>>> warning is the bug, I mean the tcp mode is supposed to not cause any
>>>> delays by default, if I'm not mistaken.
>>> 
>>> You're not mistaken, tcp_nodelay is unconditional in TCP mode and MSG_MORE
>>> is not used there since we never know if more data follows. In fact there's
>>> only one case where it can happen, it's when data wrap at the end of the
>>> buffer and we want to send them together.
>>> 
>> 
>> 
>> Hello,
>> 
>> yes, you are right, the problem is not TCP_NODELAY.  I performed some 
>> testing:
>> 
>> Under low network load, passing TCP connection through haproxy involves 
>> almost zero overhead.
>> When load grows, at some point haproxy starts to slow things down.
>> 
>> In our testing scenario the application establishes long-lived TCP 
>> connection to server and sends many small requests.
>> Typical traffic at which adding haproxy in the middle causes measurable 
>> slowdown is ~30MB/sec, ~100kpps.
> 
> This is not huge, it's smaller than what can be achieved in pure HTTP mode,
> where I could achieve about 180k req/s end-to-end, which means at least 
> 180kpps
> in both directions on both sides, so 360kpps in each direction.
> 


For reference: I tracked this down to a FreeBSD-specific problem:
https://lists.freebsd.org/pipermail/freebsd-net/2015-September/043314.html

Thanks all for your help.




Address selection policy in dual-stack environments

2015-09-29 Thread Dmitry Sivachenko
Hello,

in case when a machine has both A and AAAA records, there is an address selection 
policy algorithm which determines which address to use first.
https://www.freebsd.org/cgi/man.cgi?query=ip6addrctl=8

I use it in "prefer ipv4" form, to use ipv4 first when available.

All programs like ssh work as expected.

In haproxy, backends are always resolved to IPv6, even when there is an IPv4 
address.

Is it possible to make it to respect address selection policy?

Thanks.


Re: Address selection policy in dual-stack environments

2015-09-30 Thread Dmitry Sivachenko

> On 29 Sep 2015, at 23:06, Willy Tarreau <w...@1wt.eu> wrote:
> 
> On Tue, Sep 29, 2015 at 10:59:15PM +0300, Dmitry Sivachenko wrote:
>>> I *think* that getaddrinfo() provides this. You can try to build by
>>> adding USE_GETADDRINFO=1 to your makefile. It's not enabled by default
>>> because there are numerous bogus implementations on various systems.
>>> If it works for you it could be the best solution as other programs
>>> which work are likely using it. I don't know if it's safe to enable
>>> it by default on FreeBSD.
>>> 
>> 
>> 
>> I do have this enabled:
>> 
>> Build options :
>>  TARGET  = freebsd
>>  CPU = generic
>>  CC  = cc
>>  CFLAGS  = -O2 -pipe -O2 -fno-strict-aliasing -pipe -fstack-protector 
>> -DFREEBSD_PORTS
>>  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_STATIC_PCRE=1 
>> USE_PCRE_JIT=1
> 
> Then I have no idea how other programs retrieve the information allowing
> them to respect your system-global choices :-(


The following patch fixes the problem for me:

--- standard.c.orig 2015-09-30 13:28:52.688425000 +0300
+++ standard.c  2015-09-30 13:29:00.826968000 +0300
@@ -599,7 +599,7 @@ static struct sockaddr_storage *str2ip(c
memset(&hints, 0, sizeof(hints));
hints.ai_family = sa->ss_family ? sa->ss_family : AF_UNSPEC;
hints.ai_socktype = SOCK_DGRAM;
-   hints.ai_flags = AI_PASSIVE;
+   hints.ai_flags = 0;
hints.ai_protocol = 0;
 
if (getaddrinfo(str, NULL, &hints, &result) == 0) {



The FreeBSD manual page for getaddrinfo() does not specify how AI_PASSIVE is 
treated when the hostname parameter is non-NULL (and this parameter is always 
non-NULL in standard.c:str2ip()).
https://www.freebsd.org/cgi/man.cgi?query=getaddrinfo=3

The Linux manual page explicitly states that "If node is not NULL, then the 
AI_PASSIVE flag is ignored."

So this change should be harmless for Linux.

What do you think?
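To illustrate the effect, here is a standalone sketch using the same hints setup as str2ip() after the change (assumes the environment can resolve "localhost"; not the actual haproxy code):

```c
#include <assert.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints, *result;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_DGRAM;
    /* ai_flags = 0: we are resolving a peer name, not binding a
     * listening socket, so AI_PASSIVE does not belong here.  With
     * AI_PASSIVE, FreeBSD ignores the address selection policy. */
    hints.ai_flags = 0;

    int err = getaddrinfo("localhost", NULL, &hints, &result);
    assert(err == 0 && result != NULL);

    /* The first result is the address the policy prefers. */
    printf("family=%s\n",
           result->ai_family == AF_INET  ? "inet"  :
           result->ai_family == AF_INET6 ? "inet6" : "other");
    freeaddrinfo(result);
    return 0;
}
```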


Re: Linux or FreeBSD ?

2015-09-30 Thread Dmitry Sivachenko

> On 30 Sep 2015, at 16:05, Arnall  wrote:
> 
> Hi Eveyone,
> 
> just a simple question, is FreeBSD a good choice for Haproxy ?
> Our Haproxy runs under Debian for years, but the new IT want to put it under 
> FreeBSD.
> Any cons ?
> 
> Thanks.
> 



Should be roughly the same I think.


Re: Address selection policy in dual-stack environments

2015-10-01 Thread Dmitry Sivachenko

> On 30 Sep 2015, at 23:28, Willy Tarreau  wrote:
> 
> 
> I think that you did a good job and that you're perfectly right. I even
> checked on one of my older systems and the text was the same in 2008.
> 
> Could you please write a commit message describing the initial issue
> and copying your analysis above so that we don't lose the elements.
> Please tag it as a bug so that we backport it to 1.5 as well.



When the first parameter to getaddrinfo() is not NULL (it is always non-NULL in 
str2ip()), Linux ignores the AI_PASSIVE value in ai_flags.
On FreeBSD, when AI_PASSIVE is specified and the hostname parameter is not NULL, 
getaddrinfo() ignores the local address selection policy, always returning the 
AAAA record first.
Pass zero ai_flags to behave correctly on FreeBSD; this change should be a no-op 
on Linux.





standard.c.patch
Description: Binary data


Re: Address selection policy in dual-stack environments

2015-09-29 Thread Dmitry Sivachenko

> On 29 Sep 2015, at 21:26, Willy Tarreau <w...@1wt.eu> wrote:
> 
> Hi Dmitry,
> 
> On Tue, Sep 29, 2015 at 08:08:51PM +0300, Dmitry Sivachenko wrote:
>> Hello,
>> 
>> in case when machine has both A and  records, there is an address 
>> selection policy algorithm which determines which address to use first.
>> https://www.freebsd.org/cgi/man.cgi?query=ip6addrctl=8
>> 
>> I use it in "prefer ipv4" form, to use ipv4 first when available.
>> 
>> All programs like ssh work as expected.
>> 
>> In haproxy backends are resolved always to ipv6, even when there is an ipv4 
>> address.
>> 
>> Is it possible to make it to respect address selection policy?
> 
> I *think* that getaddrinfo() provides this. You can try to build by
> adding USE_GETADDRINFO=1 to your makefile. It's not enabled by default
> because there are numerous bogus implementations on various systems.
> If it works for you it could be the best solution as other programs
> which work are likely using it. I don't know if it's safe to enable
> it by default on FreeBSD.
> 


I do have this enabled:

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -pipe -O2 -fno-strict-aliasing -pipe -fstack-protector 
-DFREEBSD_PORTS
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_STATIC_PCRE=1 
USE_PCRE_JIT=1






Re: http-ignore-probes produces a warning in tcp frontend

2016-02-04 Thread Dmitry Sivachenko

> On 04 Feb 2016, at 07:04, Willy Tarreau <w...@1wt.eu> wrote:
> 
> Hello Dmitry,
> 
> On Thu, Jan 28, 2016 at 05:31:58PM +0300, Dmitry Sivachenko wrote:
>> Hello,
>> 
>> I have an option http-ignore-probes in defaults section.
>> When I declare frontend in "tcp" mode, I get the following warning:
>> 
>> [WARNING] 027/172718 (18281) : config : 'option http-ignore-probes' ignored
>> for frontend 'MYTEST-front' as it requires HTTP mode.
>> 
>> In defaults section I have other http-specific options (e.g.
>> http-keep-alive), which does not produce a warning in tcp backend.
>> Is it intended?  It looks logical to produce such a warning only if
>> http-specific option is used directly in tcp backend and silently ignore it
>> when used in defaults.
> 
> There's no difference between having the option in defaults or explicitly
> in the section itself. You should see defaults as templates for next
> sections. The error here is that http-keep-alive should also produce a
> warning. But I think I know why it doesn't, most options are handled by
> a generic parser which checks the proxy mode, and a few other more
> complex ones are implemented "by hand" and do not necessarily run such
> checks.
> 
> It's a very bad practise to mix TCP and HTTP proxies with the same defaults
> sections. This probably is something we should document better in the doc.
> A good practise is to have one (or several) defaults sections for HTTP mode
> and then other defaults sections for TCP mode. And most often you don't even
> have the same timeouts, log settings etc.
> 


Thanks for the explanation!

I just realized that there can be multiple defaults sections, so your arguments 
look valid.




Re: Using operators in ACLs

2016-02-24 Thread Dmitry Sivachenko

> On 24 Feb 2016, at 14:07, Willy Tarreau <w...@1wt.eu> wrote:
> 
> On Wed, Feb 24, 2016 at 01:36:39PM +0300, Dmitry Sivachenko wrote:
>> I do have "mode http" (I intentionally put it here with a comment).
>> Will it work only for tcp-mode frontend?
>> Or should I use tcp-request for tcp frontend and http-request for http 
>> frontend?
> 
> Both tcp-request and http-request will work in your HTTP frontend. My point
> is that if your frontend is in HTTP mode, you won't be able to direct the
> traffic to a TCP backend, the config parser will reject this.


Ah, yes, I see.  Thanks for the explanation.


Re: Using operators in ACLs

2016-02-24 Thread Dmitry Sivachenko

> On 24 Feb 2016, at 01:02, Willy Tarreau <w...@1wt.eu> wrote:
> 
> Hi Dmitry,
> 
> On Fri, Feb 19, 2016 at 05:58:47PM +0300, Dmitry Sivachenko wrote:
>> Hello,
>> 
>> I want to define ACL which will evaluate to true if a current number of 
>> connections to a particular backend is greater than a number of usable 
>> servers in that backend multiplied on some constant:
>> 
>> be_conn(BACK) > nbsrv(BACK) * N
>> 
>> So far I came up with the following solution:
>> 
>> frontend FRONT
>>mode http  # can be either http or tcp here
>>tcp-request content set-var(sess.nb) nbsrv(BACK)  # I use tcp-request 
>> here (not http-request) so it works for both http and tcp mode backends
>> acl my_acl be_conn(BACK),div(sess.nb) gt 10  #  "N" is 10 here
>> 
>> 
>> So I must use set-var here because div() accepts either a number or a 
>> variable.
>> 
>> Is this a good solution for my problem or can it be done better?
> 
> It currently is the only available solution, and I'm glad that you spotted
> it because support for variables in arithmetic operators was added in great
> part to permit such things.
> 
> I do have one comment regarding your comment about tcp-request vs
> http-request. What you say is valid only if you don't have "mode http"
> in your frontend, but I assume that you simplified the config so that
> it's easy to understand here.
> 


I do have "mode http" (I intentionally put it here with a comment).  Will it 
work only for tcp-mode frontend?

Or should I use tcp-request for tcp frontend and http-request for http frontend?




Using operators in ACLs

2016-02-19 Thread Dmitry Sivachenko
Hello,

I want to define ACL which will evaluate to true if a current number of 
connections to a particular backend is greater than a number of usable servers 
in that backend multiplied on some constant:

be_conn(BACK) > nbsrv(BACK) * N

So far I came up with the following solution:

frontend FRONT
mode http  # can be either http or tcp here
tcp-request content set-var(sess.nb) nbsrv(BACK)  # I use tcp-request here 
(not http-request) so it works for both http and tcp mode backends
acl my_acl be_conn(BACK),div(sess.nb) gt 10  #  "N" is 10 here


So I must use set-var here because div() accepts either a number or a variable.

Is this a good solution for my problem or can it be done better?

Thanks!


Re: Incorrect counters in stats interface

2016-09-08 Thread Dmitry Sivachenko

> On 07 Sep 2016, at 23:12, David Birdsong  wrote:
> 
> Queue Cur is a gauge and so not representative of historical values.
> 
> Queue Max of zero is telling though.
> 
> In addition to queue timeout, there are other ways haproxy can synthesize an 
> http response on behalf of the backend server. Check for connection errors.
> 


Also, awk '{if ($11 >= 500) print};' gives me no matches from haproxy.log, but the 
counter for the backend's Total Sessions 5xx responses does increase.


Incorrect counters in stats interface

2016-09-07 Thread Dmitry Sivachenko
Hello,

(sorry for reposting, but I do not see my e-mail in ML archive, so I assume it 
was blocked due to screenshots in attachments.  I replace them with links now).

I am using haproxy-1.6.9.

In web stats interface, I mouse-over backend's Total Sessions counter (1728 in 
my case), and I see HTTP 5xx responses=46
(see screenshot: https://people.freebsd.org/~demon/scr1.png)

Then I mouse-over each server's Total sessions counter and none has positive 
number of HTTP 5xx responses (see second screenshot: 
https://people.freebsd.org/~demon/scr2.png).

Is it a bug or I misunderstand these counters?

Thanks!


About tune.vars.reqres-max-size

2016-09-21 Thread Dmitry Sivachenko
Hello,

after reading documentation about

tune.vars.global-max-size 
tune.vars.reqres-max-size 
tune.vars.sess-max-size 
tune.vars.txn-max-size 

I see no default values here.  Can you clarify please?

Also it is not obvious to me whether the tune.vars.reqres-max-size limit applies 
to each request individually or is a total limit for all req.XXX variables.

Thanks.


Re: selecting backend based in server's load

2016-09-19 Thread Dmitry Sivachenko
 
> On 19 Sep 2016, at 23:42, Dmitry Sivachenko <trtrmi...@gmail.com> wrote:
> 
> Hello,
> 
> imagine the following configuration:
> 
> frontend F1
> use_backend BACKUP_B1 if B1_IS_FULL
> default_backend B1
> 
> backend B1
> server s_i
> ...
> server s_j
> 
> backend BACKUP_B1
> server b_i
> ...
> server b_j
> 
> -
> frontend F2
> use_backend BACKUP_B2 if B2_IS_FULL
> default_backend B2
> 
> backend B2
> server s_k
> ...
> server s_m
> 
> backend BACKUP_B2
> server b_k
> ...
> server b_m
> --
> <...>
> 
> So basically I have a number of backends B1 ... Bn which use different 
> subsets of the same server pool s_1 ... s_N.
> Each backend has a "BACKUP_" backend pair, which should be used only when each 
> server in primary backend has more than a defined number of active sessions 
> (each server may have active sessions via different backends: B1, B2, ..., 
> Bn).
> 
> What is the easiest way to define Bn_IS_FULL acl?
> 
> So far I came up with the following solution: in each frontend Fn section write:
> 
> tcp-request content set-var(sess.s_1_conn) srv_conn(B1/s_1)
> tcp-request content set-var(sess.s_1_conn) srv_conn(B2/s_1),add(sess.s_1_conn)
> # <...> repeat last line for each backend which has s_1.  We will have total 
> number of active connections to s_1
> 
> # Repeat the above block for each server s_2, ..., s_N
> 
> #Then define acl, assume the max number of active sessions is 7:
> acl F1_IS_FULL var(sess.s_1_conn) ge 7 var(sess.s_2_conn) ge 7 <...>
> 
> but it looks ugly, we need to replicate the same logic in each frontend and 
> use a lot of code to count sessions.  There should probably be a simpler way 
> to track down the total number of active sessions for a server which 
> participates in several backends.
> 


BTW, it would be convenient to be able to have one "super" backend containing all 
servers:
backend SUPER_B
server s1
...
server sN

and let other backends reference these servers, similar to what we can do with 
health checks ("track SUPER_B/s1"):

backend B1
server s_1 SUPER_B/s_1



As another benefit, this would allow the balance algorithm to take into account 
the connections each server receives via different backends.




srv_conn vs be_conn

2016-09-20 Thread Dmitry Sivachenko
Hello,

I have few questions:

1) in documentation about srv_conn we have:
---
Returns an integer value corresponding to the number of currently established 
connections on the designated server, possibly including the connection being 
evaluated. 


What does "including the connection being evaluated" mean?

2) is it true that be_conn(B) == sum(srv_conn(B/srv)) for each srv in backend B?

3) Does srv_conn(srv) equal what I see in Sessions->Current on the haproxy stats 
page for that server?

Thanks in advance.


selecting backend based in server's load

2016-09-19 Thread Dmitry Sivachenko
Hello,

imagine the following configuration:

frontend F1
use_backend BACKUP_B1 if B1_IS_FULL
default_backend B1

backend B1
server s_i
...
server s_j

backend BACKUP_B1
server b_i
...
server b_j

-
frontend F2
use_backend BACKUP_B2 if B2_IS_FULL
default_backend B2

backend B2
server s_k
...
server s_m

backend BACKUP_B2
server b_k
...
server b_m
--
<...>

So basically I have a number of backends B1 ... Bn which use different subsets 
of the same server pool s_1 ... s_N.
Each backend has a "BACKUP_" backend pair, which should be used only when each 
server in the primary backend has more than a defined number of active sessions 
(each server may have active sessions via different backends: B1, B2, ..., Bn).

What is the easiest way to define Bn_IS_FULL acl?

So far I came up with the following solution: in each frontend Fn section write:

tcp-request content set-var(sess.s_1_conn) srv_conn(B1/s_1)
tcp-request content set-var(sess.s_1_conn) srv_conn(B2/s_1),add(sess.s_1_conn)
# <...> repeat last line for each backend which has s_1.  We will have total 
number of active connections to s_1

# Repeat the above block for each server s_2, ..., s_N

#Then define acl, assume the max number of active sessions is 7:
acl F1_IS_FULL var(sess.s_1_conn) ge 7 var(sess.s_2_conn) ge 7 <...>

but it looks ugly: we need to replicate the same logic in each frontend and use a 
lot of code to count sessions.  There should be a simpler way to track the total 
number of active sessions for a server which participates in several backends.

Thanks in advance.


Re: Incorrect counters in stats interface

2016-09-07 Thread Dmitry Sivachenko

> On 07 Sep 2016, at 21:10, PiBa-NL <piba.nl@gmail.com> wrote:
> 
> Hi Dmitry,
> Op 7-9-2016 om 15:54 schreef Dmitry Sivachenko:
>> Hello,
>> 
>> (sorry for reposting, but I do not see my e-mail in ML archive, so I assume 
>> it was blocked due to screenshots in attachments.  I replace them with links 
>> now).
>> 
>> I am using haproxy-1.6.9.
>> 
>> In web stats interface, I mouse-over backend's Total Sessions counter (1728 
>> in my case), and I see HTTP 5xx responses=46
>> (see screenshot: https://people.freebsd.org/~demon/scr1.png)
>> 
>> Then I mouse-over each server's Total sessions counter and none has positive 
>> number of HTTP 5xx responses (see second screenshot: 
>> https://people.freebsd.org/~demon/scr2.png).
>> 
>> Is it a bug or I misunderstand these counters?
>> 
>> Thanks!
> 
> In a case if all servers are down (or very busy).
> 
> A request could be queued and then timeout, so haproxy itself will return for 
> example a 503, while none of the servers ever returned anything for that 
> specific request.
> 
> I'm not saying this is the exact scenario you see, but it might explain it..
> 


In "Queue" section I have all zeroes in Cur and Max.




Re: Backend per-server rate limiting

2016-09-28 Thread Dmitry Sivachenko

> On 28 Sep 2016, at 10:49, Stephan Müller  wrote:
> 
> Hi,
> 
> i want to configure a rate limit (say 100 http req/sec) for each backend 
> server like this:
> 
> listen front
>  bind :80
>  balance leastconn
>  server srv1 127.0.0.1:8000 limit 100
>  server srv2 127.0.0.2:8000 limit 100
> 
> As far i can see rate limiting is only supported for frontends [1].
> However,a long time ago, someone asked about the same question [2]. The 
> proposed solution was a multi tier load balancing having an extra proxy per 
> backend server, like this:
> 
> listen front
>  bind :80
>  balance leastconn
>  server srv1 127.0.0.1:8000 maxconn 100 track back1/srv
>  server srv2 127.0.0.2:8000 maxconn 100 track back2/srv
> 
>   listen back1
>  bind 127.0.0.1:8000
>  rate-limit 10
>  server srv 192.168.0.1:80 check
> 
>   listen back2
>  bind 127.0.0.2:8000
>  rate-limit 10
>  server srv 192.168.0.2:80 check
> 
> Is there a better (new) way to do that? The old thread mentioned, its on the 
> roadmap for 1.6.
> 


As far as I understand, "track" only affects health checks; otherwise, servers 
with the same name in different backends work independently.
So the servers in your first frontend (:80) will have no rate limit.




Re: lua support does not build on FreeBSD

2016-12-14 Thread Dmitry Sivachenko

> On 14 Dec 2016, at 16:24, David CARLIER  wrote:
> 
> Hi,
> 
> I ve made a small patch for 1.8 branch though. Does it suit ? (ie I
> made all the fields available, not sure if would be useful one day).
> 

Well, I was not sure what this s6_addr32 is used for and whether it is possible to 
avoid its usage (since it is Linux-specific).
If not, then this is probably the correct solution.




lua support does not build on FreeBSD

2016-12-13 Thread Dmitry Sivachenko
Hello,

I am unable to build haproxy-1.7.x on FreeBSD:

cc -Iinclude -Iebtree -Wall -O2 -pipe -O2 -fno-strict-aliasing -pipe  
-fstack-protector   -DFREEBSD_PORTS-DTPROXY -DCONFIG_HAP_CRYPT 
-DUSE_GETADDRINFO -DUSE_ZLIB  -DENABLE_POLL -DENABLE_KQUEUE -DUSE_CPU_AFFINITY 
-DUSE_OPENSSL  -DUSE_LUA -I/usr/local/include/lua53 -DUSE_DEVICEATLAS 
-I/place/WRK/ports/net/haproxy/work/deviceatlas-enterprise-c-2.1 -DUSE_PCRE 
-I/usr/local/include -DUSE_PCRE_JIT  -DCONFIG_HAPROXY_VERSION=\"1.7.1\" 
-DCONFIG_HAPROXY_DATE=\"2016/12/13\" -c -o src/hlua_fcn.o src/hlua_fcn.c
src/hlua_fcn.c:1019:27: error: no member named 's6_addr32' in 'struct in6_addr'
if (((addr1->addr.v6.ip.s6_addr32[0] & addr2->addr.v6.ma...
  ~ ^
src/hlua_fcn.c:1019:62: error: no member named 's6_addr32' in 'struct in6_addr'
  ...if (((addr1->addr.v6.ip.s6_addr32[0] & addr2->addr.v6.mask.s6_addr32[0]...
~~~ ^
src/hlua_fcn.c:1020:27: error: no member named 's6_addr32' in 'struct in6_addr'
 (addr2->addr.v6.ip.s6_addr32[0] & addr1->addr.v6.ma...
  ~ ^
src/hlua_fcn.c:1020:62: error: no member named 's6_addr32' in 'struct in6_addr'
  ...(addr2->addr.v6.ip.s6_addr32[0] & addr1->addr.v6.mask.s6_addr32[0])) &&
   ~~~ ^
src/hlua_fcn.c:1021:27: error: no member named 's6_addr32' in 'struct in6_addr'
((addr1->addr.v6.ip.s6_addr32[1] & addr2->addr.v6.ma...
  ~ ^
src/hlua_fcn.c:1021:62: error: no member named 's6_addr32' in 'struct in6_addr'
  ...((addr1->addr.v6.ip.s6_addr32[1] & addr2->addr.v6.mask.s6_addr32[1]) ==
~~~ ^
src/hlua_fcn.c:1022:27: error: no member named 's6_addr32' in 'struct in6_addr'
 (addr2->addr.v6.ip.s6_addr32[1] & addr1->addr.v6.ma...
  ~ ^
src/hlua_fcn.c:1022:62: error: no member named 's6_addr32' in 'struct in6_addr'
  ...(addr2->addr.v6.ip.s6_addr32[1] & addr1->addr.v6.mask.s6_addr32[1])) &&
   ~~~ ^
src/hlua_fcn.c:1023:27: error: no member named 's6_addr32' in 'struct in6_addr'
((addr1->addr.v6.ip.s6_addr32[2] & addr2->addr.v6.ma...
  ~ ^
src/hlua_fcn.c:1023:62: error: no member named 's6_addr32' in 'struct in6_addr'
  ...((addr1->addr.v6.ip.s6_addr32[2] & addr2->addr.v6.mask.s6_addr32[2]) ==
~~~ ^
src/hlua_fcn.c:1024:27: error: no member named 's6_addr32' in 'struct in6_addr'
 (addr2->addr.v6.ip.s6_addr32[2] & addr1->addr.v6.ma...
  ~ ^
src/hlua_fcn.c:1024:62: error: no member named 's6_addr32' in 'struct in6_addr'
  ...(addr2->addr.v6.ip.s6_addr32[2] & addr1->addr.v6.mask.s6_addr32[2])) &&
   ~~~ ^
src/hlua_fcn.c:1025:27: error: no member named 's6_addr32' in 'struct in6_addr'
((addr1->addr.v6.ip.s6_addr32[3] & addr2->addr.v6.ma...
  ~ ^
src/hlua_fcn.c:1025:62: error: no member named 's6_addr32' in 'struct in6_addr'
  ...((addr1->addr.v6.ip.s6_addr32[3] & addr2->addr.v6.mask.s6_addr32[3]) ==
~~~ ^
src/hlua_fcn.c:1026:27: error: no member named 's6_addr32' in 'struct in6_addr'
 (addr2->addr.v6.ip.s6_addr32[3] & addr1->addr.v6.ma...
  ~ ^
src/hlua_fcn.c:1026:62: error: no member named 's6_addr32' in 'struct in6_addr'
  ...(addr2->addr.v6.ip.s6_addr32[3] & addr1->addr.v6.mask.s6_addr32[3]))) {
   ~~~ ^
16 errors generated.




In netinet6/in6.h I see:

#ifdef _KERNEL  /* XXX nonstandard */
#define s6_addr8  __u6_addr.__u6_addr8
#define s6_addr16 __u6_addr.__u6_addr16
#define s6_addr32 __u6_addr.__u6_addr32
#endif


So it seems that the s6_addr32 macro is defined only when this header is included 
during a kernel build, not in userland.
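Since FreeBSD only exposes s6_addr32 under _KERNEL, userland code has to reach the
32-bit words of an in6_addr another way. Below is a minimal sketch of one portable
approach, going through the standard s6_addr byte array with memcpy. The helper names
in6_word and in6_net_match are hypothetical (not from haproxy); in6_net_match only
mirrors the cross-masked comparison that the hlua_fcn.c errors above suggest, and is
not the patch that was actually applied:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <netinet/in.h>

/* Read the i-th 32-bit word of an in6_addr without relying on the
 * non-standard s6_addr32 member. memcpy sidesteps alignment and
 * strict-aliasing concerns and compiles to a plain load on most
 * targets. */
static uint32_t in6_word(const struct in6_addr *a, int i)
{
    uint32_t w;
    memcpy(&w, &a->s6_addr[i * 4], sizeof(w));
    return w;
}

/* Cross-masked network comparison in the spirit of the failing code:
 * for each 32-bit word, (ip1 & mask2) must equal (ip2 & mask1). */
static int in6_net_match(const struct in6_addr *ip1, const struct in6_addr *m1,
                         const struct in6_addr *ip2, const struct in6_addr *m2)
{
    int i;

    for (i = 0; i < 4; i++)
        if ((in6_word(ip1, i) & in6_word(m2, i)) !=
            (in6_word(ip2, i) & in6_word(m1, i)))
            return 0;
    return 1;
}
```

This compiles on FreeBSD and Linux alike because it touches only the
POSIX-mandated s6_addr member.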





Re: lua support does not build on FreeBSD

2016-12-23 Thread Dmitry Sivachenko

> On 23 Dec 2016, at 19:07, thierry.fourn...@arpalert.org wrote:
> 
> Ok, thanks Willy.
> 
> The new patch is attached. David, can you test the FreeBSD build?
> The patch is tested and validated for Linux.



Yes, it does fix the FreeBSD build.



> 
> Thierry
> 
> 
> On Fri, 23 Dec 2016 14:50:38 +0100
> Willy Tarreau  wrote:
> 
>> On Fri, Dec 23, 2016 at 02:37:13PM +0100, thierry.fourn...@arpalert.org 
>> wrote:
>>> Thanks Willy for the idea. I will write a patch ASAP, but why a 32-bit
>>> cast and not a 64-bit cast?
>> 
>> First because existing code uses this already and it works. Second because
>> the 64-bit check might be more expensive for 32-bit platforms than the
>> double 32-bit check is for 64-bit platforms (though that's still to be
>> verified in the assembly code, as some compilers manage to assign register
>> pairs correctly).
>> 
>> Willy
>> 
> <0001-BUILD-lua-build-failed-on-FreeBSD.patch>
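Willy's point about register pairs can be illustrated with the two comparison
styles side by side. This is a hypothetical sketch, not haproxy code: eq128_u32
compares a 16-byte address as four 32-bit words, eq128_u64 as two 64-bit words;
on a 32-bit platform the latter forces the compiler to manage 64-bit values in
register pairs, which is the cost he mentions:

```c
#include <stdint.h>
#include <string.h>

/* Compare two 128-bit addresses word by word, 32 bits at a time. */
static int eq128_u32(const uint8_t a[16], const uint8_t b[16])
{
    uint32_t wa, wb;
    int i;

    for (i = 0; i < 4; i++) {
        memcpy(&wa, a + i * 4, sizeof(wa));
        memcpy(&wb, b + i * 4, sizeof(wb));
        if (wa != wb)
            return 0;
    }
    return 1;
}

/* Same comparison, 64 bits at a time: fewer iterations, but on a
 * 32-bit target each uint64_t compare costs a register pair. */
static int eq128_u64(const uint8_t a[16], const uint8_t b[16])
{
    uint64_t wa, wb;
    int i;

    for (i = 0; i < 2; i++) {
        memcpy(&wa, a + i * 8, sizeof(wa));
        memcpy(&wb, b + i * 8, sizeof(wb));
        if (wa != wb)
            return 0;
    }
    return 1;
}
```

Whether the 64-bit variant actually wins on a given target is, as Willy says,
something to verify in the generated assembly rather than assume.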




Re: Problems with haproxy 1.7.3 on FreeBSD 11.0-p8

2017-03-19 Thread Dmitry Sivachenko

> On 19 Mar 2017, at 14:40, Willy Tarreau  wrote:
> 
> Hi,
> 
> On Sat, Mar 18, 2017 at 01:12:09PM +0100, Willy Tarreau wrote:
>> OK here's a temporary patch. It includes a revert of the previous one and
>> adds a condition for the wake-up. At least it passes all my tests, including
>> those involving synchronous connection reports.
>> 
>> I'm not merging it yet as I'm wondering whether a reliable definitive
>> solution should be done once for all (and backported) by addressing the
>> root cause instead of constantly working around its consequences.
> 
> And here come two patches as a replacement for this temporary one. They
> are safer and have been done after thorough code review. I spotted a
> few tens of dirty corner cases that accumulated over the years due
> to the unclear meaning of the CO_FL_CONNECTED flag. They'll have to be
> addressed, but the current patches protect against these corner cases.
> They survived all tests involving delayed connections and checks with
> and without all handshake combinations, with tcp (immediate and delayed
> requests and responses) and http (immediate, delayed requests and responses
> and pipelining).
> 
> I'm resending the first one you already got, Dmitry, to make things easier
> to follow for everyone. These three are to be applied on top of 1.7.3. I
> still have a few other issues to deal with regarding 1.7 before doing a
> new release (hopefully by the beginning of this week).



Thanks a lot!

I just incorporated the latest fixes to FreeBSD ports tree.

