Fix haproxy build on recent FreeBSD

2024-02-28 Thread Dmitry Sivachenko
Hello!

Recently FreeBSD has moved some things out from libc to libsys (see e.g. https://www.mail-archive.com/dev-commits-src-all@freebsd.org/msg50353.html), so haproxy stopped compiling with an "ld: error: undefined symbol: __elf_aux_vector" error.

Brooks Davis suggested the attached patch to fix that.

Please consider including it into the tree.

Thanks.

patch-src_tools.c
Description: Binary data
Begin forwarded message:

From: Brooks Davis
Subject: Re: About libc/libsys split
Date: 28 February 2024, 21:59:36 GMT+3
To: Dmitry Sivachenko

The attached patch is a more correct solution.  haproxy is wrong to use __elf_aux_vector as it is not a public interface (it is exposed so that rtld can update it.)

-- Brooks
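For readers without the attachment: below is a minimal sketch (not the attached patch) of the public interface, elf_aux_info(3), which can replace direct use of __elf_aux_vector.

/* Sketch only: FreeBSD's public way to read auxiliary-vector entries is
 * elf_aux_info(3), instead of walking the private __elf_aux_vector symbol
 * that now lives in libsys. */
#include <elf.h>
#include <sys/auxv.h>
#include <stdio.h>

int main(void)
{
    int pagesize = 0;

    if (elf_aux_info(AT_PAGESZ, &pagesize, sizeof(pagesize)) == 0)
        printf("AT_PAGESZ = %d\n", pagesize);
    else
        printf("AT_PAGESZ not available\n");
    return 0;
}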

Re: [ANNOUNCE] haproxy-2.4.9

2021-11-25 Thread Dmitry Sivachenko



> On 25 Nov 2021, at 13:29, Amaury Denoyelle  wrote:
> 
> Dmitry, the patches that Willy provided you should fix the issue. Now,
> do you need a 2.4.10 to be emitted early with it or is it possible for
> you to keep the patches in your tree so we can have a more substantial
> list of change for a new version ?
> 

As for me, there is no hurry: I'll add the patches to the FreeBSD ports collection.




Re: [ANNOUNCE] haproxy-2.4.9

2021-11-25 Thread Dmitry Sivachenko


> On 25 Nov 2021, at 13:09, Willy Tarreau  wrote:
> 
> Please try the two attached patches. They re-backport something that
> we earlier failed to backport that simplifies the ugly ifdefs everywhere
> that virtually break every single backport related to SSL.
> 
> For me they work with/without SSL and with older versions (tested as far
> as 0.9.8).
> 
> Thanks,
> Willy
> <0001-CLEANUP-servers-do-not-include-openssl-compat.patch><0002-CLEANUP-server-always-include-the-storage-for-SSL-se.patch>


These two patches do fix the build.

Thanks!


Re: [ANNOUNCE] haproxy-2.4.9

2021-11-25 Thread Dmitry Sivachenko
On 24 Nov 2021, at 12:57, Christopher Faulet  wrote:
> 
> 
> Hi,
> 
> HAProxy 2.4.9 was released on 2021/11/23. It added 36 new commits
> after version 2.4.8.
> 


Hello,

version 2.4.9 fails to build with OpenSSL turned off:

 src/server.c:207:51: error: no member named 'ssl_ctx' in 'struct server'
if (srv->mux_proto || srv->use_ssl != 1 || !srv->ssl_ctx.alpn_str) {
~~~  ^
src/server.c:241:37: error: no member named 'ssl_ctx' in 'struct server'
const struct ist alpn = ist2(srv->ssl_ctx.alpn_str,
 ~~~  ^
src/server.c:242:37: error: no member named 'ssl_ctx' in 'struct server'
 srv->ssl_ctx.alpn_len);
 ~~~  ^

Version 2.4.8 builds fine.





Re: [ANNOUNCE] haproxy-2.5-dev7

2021-09-12 Thread Dmitry Sivachenko



> On 12 Sep 2021, at 13:06, Willy Tarreau  wrote:
> 
> Hi,
> 
> HAProxy 2.5-dev7 was released on 2021/09/12. It added 39 new commits
> after version 2.5-dev6.



Hello,

there is a new warning in the -dev branch (on FreeBSD):

admin/halog/fgets2.c:38:30: warning: '__GLIBC__' is not defined, evaluates to 0 
[-Wundef]
#if defined(__x86_64__) &&  (__GLIBC__ > 2 || (__GLIBC__ == 2 && 
__GLIBC_MINOR__ >= 15))
 ^
admin/halog/fgets2.c:38:48: warning: '__GLIBC__' is not defined, evaluates to 0 
[-Wundef]
#if defined(__x86_64__) &&  (__GLIBC__ > 2 || (__GLIBC__ == 2 && 
__GLIBC_MINOR__ >= 15))

This looks like a Linux-specific condition.
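A hedged guess at the kind of guard that would silence -Wundef here (assuming the glibc version check is only meant for Linux builds):

/* Sketch only: test that __GLIBC__ is defined before comparing its value,
 * so -Wundef does not fire on FreeBSD where no glibc macros exist. */
#if defined(__x86_64__) && defined(__GLIBC__) && \
    (__GLIBC__ > 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ >= 15))
/* fast implementation */
#else
/* portable fallback */
#endif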

Thanks.


Re: [ANNOUNCE] haproxy-2.3.7

2021-03-19 Thread Dmitry Sivachenko



> On 19 Mar 2021, at 19:13, Willy Tarreau  wrote:
> 
> 
> Grrr... And C compiler authors are still wondering why people hate C
> when it's that their compilers are this pedantic :-(
> 
> Could you please try to append a 'U' after "0x8000" in 
> include/haproxy/task-t.h,
> like this:
> 
>  #define TASK_F_USR1   0x8000U  /* preserved user flag 1, 
> application-specific, def:0 */
> 
> It will mark it unsigned and hopefull make it happy again. If so, we'll
> merge it.
> 


No, it does not; only the message has changed slightly:

src/mux_h2.c:4032:49: warning: implicit conversion from 'unsigned int' to 
'unsigned short' changes value from 4294934527 to 32767 [-Wconstant-conversion]
HA_ATOMIC_AND(>wait_event.tasklet->state, ~TASK_F_USR1);
~~~^
include/haproxy/atomic.h:270:62: note: expanded from macro 'HA_ATOMIC_AND'
#define HA_ATOMIC_AND(val, flags)__atomic_and_fetch(val, flags, 
__ATOMIC_SEQ_CST)
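For illustration only (this is not a proposed haproxy patch): the tasklet state field is 16 bits wide, so ~TASK_F_USR1 is truncated when it reaches the atomic builtin whether or not the constant carries the 'U' suffix; an explicit cast is one way to make clang accept the narrowing.

/* Standalone reproduction of the warning and one way to silence it. */
#include <stdio.h>

#define TASK_F_USR1 0x8000U

int main(void)
{
    unsigned short state = 0xffff;

    /* __atomic_and_fetch(&state, ~TASK_F_USR1, __ATOMIC_SEQ_CST);
     *   -> clang: implicit conversion changes value from 4294934527 to 32767 */
    __atomic_and_fetch(&state, (unsigned short)~TASK_F_USR1, __ATOMIC_SEQ_CST);

    printf("state = %#x\n", state);   /* prints 0x7fff */
    return 0;
}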




Re: [ANNOUNCE] haproxy-2.3.7

2021-03-16 Thread Dmitry Sivachenko



> On 16 Mar 2021, at 18:01, Christopher Faulet  wrote:
> 
> Hi,
> 
> HAProxy 2.3.7 was released on 2021/03/16. It added 62 new commits
> after version 2.3.6.
> 
> This release is mainly about two subjects : The fix of bugs into the
> resolvers part, mainly revealed since the last release and several
> scalability improvements backported from the development version.
> 

<...>

Hello,

among other things, this version also introduces a new warning (clang version 11.0.0):

src/mux_h2.c:4032:49: warning: implicit conversion from 'int' to 'unsigned 
short' changes value from -32769 to 32767 [-Wconstant-conversion]
HA_ATOMIC_AND(>wait_event.tasklet->state, ~TASK_F_USR1);
~~~^
include/haproxy/atomic.h:270:62: note: expanded from macro 'HA_ATOMIC_AND'
#define HA_ATOMIC_AND(val, flags)__atomic_and_fetch(val, flags, 
__ATOMIC_SEQ_CST)



src/mux_fcgi.c:3477:51: warning: implicit conversion from 'int' to 'unsigned 
short' changes value from -32769 to 32767 [-Wconstant-conversion]
HA_ATOMIC_AND(>wait_event.tasklet->state, ~TASK_F_USR1);
~^
include/haproxy/atomic.h:270:62: note: expanded from macro 'HA_ATOMIC_AND'
#define HA_ATOMIC_AND(val, flags)__atomic_and_fetch(val, flags, 
__ATOMIC_SEQ_CST)



src/mux_h1.c:2456:49: warning: implicit conversion from 'int' to 'unsigned 
short' changes value from -32769 to 32767 [-Wconstant-conversion]
HA_ATOMIC_AND(>wait_event.tasklet->state, ~TASK_F_USR1);
~~~^
include/haproxy/atomic.h:270:62: note: expanded from macro 'HA_ATOMIC_AND'
#define HA_ATOMIC_AND(val, flags)__atomic_and_fetch(val, flags, 
__ATOMIC_SEQ_CST)





fetching layer 7 samples with tcp mode frontend

2020-11-13 Thread Dmitry Sivachenko
Hello!

Consider the following config excerpt:

frontend test-fe
mode tcp
use_backend test-be1 if { path -i -m end /set }

What is the meaning of the "path" sample in a frontend working in TCP mode?

We experimented with haproxy-1.5.18 on Linux, sending HTTP requests with a path ending in "/set", and found that this condition sometimes matches and sometimes does not.  So the behaviour appears random.

Is this expected?  At first glance, I'd expect a warning or even an error when parsing such a config.
What am I missing?
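For reference, here is a hedged sketch of what I guess is the intended way to make L7 samples reliable in a TCP-mode frontend (waiting for the request to be buffered before the ACL is evaluated; the 5s delay is an arbitrary example). Please correct me if that assumption is wrong:

frontend test-fe
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if HTTP
    use_backend test-be1 if { path -i -m end /set }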

Thanks.


Re: [ANNOUNCE] haproxy-2.0.16

2020-07-18 Thread Dmitry Sivachenko



> On 18 Jul 2020, at 12:40, Илья Шипицин  wrote:
> 
> What is freebsd version?
> 

It was 13.0-CURRENT, but after you asked I also tried 12.1-STABLE (the clang version is also 10.0.0): I get the same warnings and error.




Re: [ANNOUNCE] haproxy-2.0.16

2020-07-18 Thread Dmitry Sivachenko



> On 17 Jul 2020, at 17:34, Christopher Faulet  wrote:
> 
> 
> Hi,
> 
> HAProxy 2.0.16 was released on 2020/07/17. It added 45 new commits after 
> version
> 2.0.15.

Hello,

Here are the new compile problems since 2.0.14 (FreeBSD/amd64, clang version 10.0.0):

1) new warnings:

src/log.c:1692:10: warning: logical not is only applied to the left hand side 
of this comparison [-Wlogical-not-parentheses]
while (HA_SPIN_TRYLOCK(LOGSRV_LOCK, >lock) != 0) {
   ^   ~~
include/common/hathreads.h:1026:33: note: expanded from macro 'HA_SPIN_TRYLOCK'
#define HA_SPIN_TRYLOCK(lbl, l) !pl_try_s(l)
^
src/log.c:1692:10: note: add parentheses after the '!' to evaluate the 
comparison first
include/common/hathreads.h:1026:33: note: expanded from macro 'HA_SPIN_TRYLOCK'
#define HA_SPIN_TRYLOCK(lbl, l) !pl_try_s(l)
^
src/log.c:1692:10: note: add parentheses around left hand side expression to 
silence this warning
while (HA_SPIN_TRYLOCK(LOGSRV_LOCK, >lock) != 0) {
   ^
   (  )
include/common/hathreads.h:1026:33: note: expanded from macro 'HA_SPIN_TRYLOCK'
#define HA_SPIN_TRYLOCK(lbl, l) !pl_try_s(l)
^
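A hedged sketch of the kind of one-line change that should silence this first warning without changing behaviour (the '!' already binds before '!=', so the parentheses only make the intent explicit):

/* Sketch only: parenthesizing the expansion keeps the current semantics and
 * stops clang's -Wlogical-not-parentheses from firing at call sites. */
#define HA_SPIN_TRYLOCK(lbl, l) (!pl_try_s(l))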


2) a compile error (can be fixed by including <sys/types.h>):

ebtree/ebtree.c:43:2: error: use of undeclared identifier 'ssize_t'; did you 
mean 'sizeof'?
ssize_t ofs = -len;
^~~
sizeof
ebtree/ebtree.c:43:10: error: use of undeclared identifier 'ofs'
ssize_t ofs = -len;
^
ebtree/ebtree.c:47:13: error: use of undeclared identifier 'ofs'
diff = p1[ofs] - p2[ofs];
  ^
ebtree/ebtree.c:47:23: error: use of undeclared identifier 'ofs'
diff = p1[ofs] - p2[ofs];
^
ebtree/ebtree.c:48:22: error: use of undeclared identifier 'ofs'
} while (!diff && ++ofs);




compile issues on FreeBSD/i386

2020-06-20 Thread Dmitry Sivachenko
Hello!

I am trying to compile haproxy-2.2-dev10 on FreeBSD-12/i386 (i386 is important 
here) with clang version 9.0.1.

I get the following linker error:

  LD  haproxy
ld: error: undefined symbol: __atomic_fetch_add_8
>>> referenced by backend.c
>>>   src/backend.o:(assign_server)
>>> referenced by backend.c
>>>   src/backend.o:(assign_server)
>>> referenced by backend.c
>>>   src/backend.o:(assign_server_and_queue)
>>> referenced by backend.c
>>>   src/backend.o:(assign_server_and_queue)
>>> referenced by backend.c
>>>   src/backend.o:(assign_server_and_queue)
>>> referenced by backend.c
>>>   src/backend.o:(assign_server_and_queue)
>>> referenced by backend.c
>>>   src/backend.o:(connect_server)
>>> referenced by backend.c
>>>   src/backend.o:(connect_server)
>>> referenced by backend.c
>>>   src/backend.o:(connect_server)
>>> referenced by backend.c
>>>   src/backend.o:(srv_redispatch_connect)
>>> referenced 233 more times

ld: error: undefined symbol: __atomic_store_8

For some time we have been applying the following patch to build on FreeBSD/i386:

--- include/common/hathreads.h.orig 2018-02-17 18:17:22.21940 +
+++ include/common/hathreads.h  2018-02-17 18:18:44.598422000 +
@@ -104,7 +104,7 @@ extern THREAD_LOCAL unsigned long tid_bit; /* The bit 
 /* TODO: thread: For now, we rely on GCC builtins but it could be a good idea 
to
  * have a header file regrouping all functions dealing with threads. */
 
-#if defined(__GNUC__) && (__GNUC__ < 4 || __GNUC__ == 4 && __GNUC_MINOR__ < 7) 
&& !defined(__clang__)
+#if (defined(__GNUC__) && (__GNUC__ < 4 || __GNUC__ == 4 && __GNUC_MINOR__ < 
7) && !defined(__clang__)) || (defined(__clang__) && defined(__i386__))
 /* gcc < 4.7 */
 
 #define HA_ATOMIC_ADD(val, i)__sync_add_and_fetch(val, i)

(it is from an older -dev version but still applies to include/haproxy/atomic.h and fixes the build).

If this patch is correct for i386, maybe we could include it in the haproxy sources?

PS: with that patch applied I get the following warning, which may be meaningful:

src/stick_table.c:3462:12: warning: result of comparison 'unsigned long' > 
4294967295 is always false [-Wtautological-type-limit-compare]
val > 0x)
~~~ ^ ~~
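For what it's worth, a small standalone illustration of why clang flags this on i386 (unsigned long is 32 bits there) and how such a check is usually guarded; this is only an illustration, not a patch for stick_table.c:

/* On 32-bit longs, a comparison against 0xffffffff can never be true. */
#include <limits.h>
#include <stdio.h>

int main(void)
{
    unsigned long val = ULONG_MAX;

#if ULONG_MAX > 0xffffffffUL
    if (val > 0xffffffffUL)   /* meaningful only when long is 64 bits */
        puts("value does not fit in 32 bits");
#else
    (void)val;
    puts("unsigned long is 32 bits here; the check would always be false");
#endif
    return 0;
}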

Thanks.


Re: upcoming 2.0 release: freebsd-11 seem to be broken ?

2019-05-29 Thread Dmitry Sivachenko



> On 29 May 2019, at 13:31, Илья Шипицин  wrote:
> 
> 
> 
> ср, 29 мая 2019 г. в 15:25, Dmitry Sivachenko :
> 
> 
> > On 26 May 2019, at 23:40, Илья Шипицин  wrote:
> > 
> > Hello,
> > 
> > I added freebsd-11 to cirrus-ci
> > 
> > https://cirrus-ci.com/task/5162023978008576
> > 
> > should we fix it before 2.0 release ?
> > 
> 
> 
> BTW, latest -dev release does not build at all on FreeBSD (I tried 
> FreeBSD-12):
> 
> cc -Iinclude -Iebtree -Wall -Wextra  -O2 -pipe  -fstack-protector 
> -fno-strict-aliasing   -fno-strict-aliasing -Wdeclaration-after-statement 
> -fwrapv  -Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare 
> -Wno-unused-parameter  -Wno-ignored-qualifiers  
> -Wno-missing-field-initializers -Wno-implicit-fallthrough   -Wtype-limits 
> -Wshift-negative-value   -Wnull-dereference   -DFREEBSD_PORTS 
> -DUSE_KQUEUE-DUSE_PCRE -DUSE_PCRE_JIT   -DUSE_POLL  -DUSE_THREAD  
> -DUSE_REGPARM -DUSE_STATIC_PCRE  -DUSE_TPROXY   -DUSE_LIBCRYPT   
> -DUSE_GETADDRINFO -DUSE_OPENSSL   -DUSE_ACCEPT4  -DUSE_ZLIB  
> -DUSE_CPU_AFFINITY  -DCONFIG_REGPARM=3 -I/usr/include -DUSE_PCRE 
> -I/usr/local/include  -DCONFIG_HAPROXY_VERSION=\"2.0-dev4\" 
> -DCONFIG_HAPROXY_DATE=\"2019/05/22\" -c -o src/wdt.o src/wdt.c
> src/wdt.c:63:13: error: no member named 'si_int' in 'struct __siginfo'
> thr = si->si_int;
>   ~~  ^
> src/wdt.c:105:7: error: use of undeclared identifier 'SI_TKILL'
> case SI_TKILL:
>  ^
> 2 errors generated.
> gmake[2]: *** [Makefile:830: src/wdt.o] Error 1
> 
> it was fixed right after 2.0-dev4.
> please try current master
> 
>  


Ah, okay.

Thanks!


Re: upcoming 2.0 release: freebsd-11 seem to be broken ?

2019-05-29 Thread Dmitry Sivachenko



> On 26 May 2019, at 23:40, Илья Шипицин  wrote:
> 
> Hello,
> 
> I added freebsd-11 to cirrus-ci
> 
> https://cirrus-ci.com/task/5162023978008576
> 
> should we fix it before 2.0 release ?
> 


BTW, latest -dev release does not build at all on FreeBSD (I tried FreeBSD-12):

cc -Iinclude -Iebtree -Wall -Wextra  -O2 -pipe  -fstack-protector 
-fno-strict-aliasing   -fno-strict-aliasing -Wdeclaration-after-statement 
-fwrapv  -Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare 
-Wno-unused-parameter  -Wno-ignored-qualifiers  -Wno-missing-field-initializers 
-Wno-implicit-fallthrough   -Wtype-limits -Wshift-negative-value   
-Wnull-dereference   -DFREEBSD_PORTS -DUSE_KQUEUE-DUSE_PCRE 
-DUSE_PCRE_JIT   -DUSE_POLL  -DUSE_THREAD  -DUSE_REGPARM -DUSE_STATIC_PCRE  
-DUSE_TPROXY   -DUSE_LIBCRYPT   -DUSE_GETADDRINFO -DUSE_OPENSSL   -DUSE_ACCEPT4 
 -DUSE_ZLIB  -DUSE_CPU_AFFINITY  -DCONFIG_REGPARM=3 -I/usr/include 
-DUSE_PCRE -I/usr/local/include  -DCONFIG_HAPROXY_VERSION=\"2.0-dev4\" 
-DCONFIG_HAPROXY_DATE=\"2019/05/22\" -c -o src/wdt.o src/wdt.c
src/wdt.c:63:13: error: no member named 'si_int' in 'struct __siginfo'
thr = si->si_int;
  ~~  ^
src/wdt.c:105:7: error: use of undeclared identifier 'SI_TKILL'
case SI_TKILL:
 ^
2 errors generated.
gmake[2]: *** [Makefile:830: src/wdt.o] Error 1




Re: [ANNOUNCE] haproxy-1.9-dev3

2018-09-29 Thread Dmitry Sivachenko


> On 29 Sep 2018, at 21:41, Willy Tarreau  wrote:
> 
> Ah, a small change is that we now build with -Wextra after having addressed
> all warnings reported up to gcc 7.3 and filtered a few useless ones.

Hello,

here are some warnings from clang version 6.0.0:

cc -Iinclude -Iebtree -Wall -Wextra  -O2 -pipe  -fstack-protector 
-fno-strict-aliasing  -fno-strict-aliasing -Wdeclaration-after-statement 
-fwrapv -fno-strict-overflow  -Wno-address-of-packed-member -Wno-unused-label 
-Wno-sign-compare -Wno-unused-parameter  -Wno-ignored-qualifiers  
-Wno-missing-field-initializers -Wno-implicit-fallthrough -Wtype-limits 
-Wshift-negative-value   -Wnull-dereference   -DFREEBSD_PORTS-DTPROXY 
-DCONFIG_HAP_CRYPT -DUSE_GETADDRINFO -DUSE_ZLIB  -DENABLE_POLL -DENABLE_KQUEUE 
-DUSE_CPU_AFFINITY -DUSE_ACCEPT4 -DCONFIG_REGPARM=3 -DUSE_THREAD -DUSE_OPENSSL 
-I/usr/include -DUSE_PCRE -I/usr/local/include -DUSE_PCRE_JIT  
-DCONFIG_HAPROXY_VERSION=\"1.9-dev3\" -DCONFIG_HAPROXY_DATE=\"2018/09/29\" -c 
-o src/cfgparse.o src/cfgparse.c
src/cfgparse.c:5131:34: warning: implicit conversion from 'int' to 'char' 
changes value from 130 to -126 [-Wconstant-conversion]

curproxy->check_req[5] = 130;

   ~ ^~~
src/cfgparse.c:5157:33: warning: implicit conversion from 'int' to 'char' 
changes value from 128 to -128 [-Wconstant-conversion]
curproxy->check_req[5] 
= 128;
   
~ ^~~


cc -Iinclude -Iebtree -Wall -Wextra  -O2 -pipe  -fstack-protector 
-fno-strict-aliasing  -fno-strict-aliasing -Wdeclaration-after-statement 
-fwrapv -fno-strict-overflow  -Wno-address-of-packed-member -Wno-unused-label 
-Wno-sign-compare -Wno-unused-parameter  -Wno-ignored-qualifiers  
-Wno-missing-field-initializers -Wno-implicit-fallthrough -Wtype-limits 
-Wshift-negative-value   -Wnull-dereference   -DFREEBSD_PORTS-DTPROXY 
-DCONFIG_HAP_CRYPT -DUSE_GETADDRINFO -DUSE_ZLIB  -DENABLE_POLL -DENABLE_KQUEUE 
-DUSE_CPU_AFFINITY -DUSE_ACCEPT4 -DCONFIG_REGPARM=3 -DUSE_THREAD -DUSE_OPENSSL 
-I/usr/include -DUSE_PCRE -I/usr/local/include -DUSE_PCRE_JIT  
-DCONFIG_HAPROXY_VERSION=\"1.9-dev3\" -DCONFIG_HAPROXY_DATE=\"2018/09/29\" -c 
-o src/stick_table.o src/stick_table.c
src/stick_table.c:2018:14: warning: equality comparison with extraneous 
parentheses [-Wparentheses-equality]
if ((stkctr == ))
 ~~~^
src/stick_table.c:2018:14: note: remove extraneous parentheses around the 
comparison to silence this warning
if ((stkctr == ))
~   ^~
src/stick_table.c:2018:14: note: use '=' to turn this equality comparison into 
an assignment
if ((stkctr == ))
^~


cc -Iinclude -Iebtree -Wall -Wextra  -O2 -pipe  -fstack-protector 
-fno-strict-aliasing  -fno-strict-aliasing -Wdeclaration-after-statement 
-fwrapv -fno-strict-overflow  -Wno-address-of-packed-member -Wno-unused-label 
-Wno-sign-compare -Wno-unused-parameter  -Wno-ignored-qualifiers  
-Wno-missing-field-initializers -Wno-implicit-fallthrough -Wtype-limits 
-Wshift-negative-value   -Wnull-dereference   -DFREEBSD_PORTS-DTPROXY 
-DCONFIG_HAP_CRYPT -DUSE_GETADDRINFO -DUSE_ZLIB  -DENABLE_POLL -DENABLE_KQUEUE 
-DUSE_CPU_AFFINITY -DUSE_ACCEPT4 -DCONFIG_REGPARM=3 -DUSE_THREAD -DUSE_OPENSSL 
-I/usr/include -DUSE_PCRE -I/usr/local/include -DUSE_PCRE_JIT  
-DCONFIG_HAPROXY_VERSION=\"1.9-dev3\" -DCONFIG_HAPROXY_DATE=\"2018/09/29\" -c 
-o src/mux_h2.o src/mux_h2.c
src/mux_h2.c:3532:195: warning: implicit conversion from enumeration type 'enum 
h1m_state' to different enumeration type 'enum h1_state' [-Wenum-conversion]
  ...= %d bytes out (%u in, st=%s, ep=%u, es=%s, h2cws=%d h2sws=%d) data=%u", 
h2c->st0, h2s->id, size+9, (unsigned int)total, h1_msg_state_str(h1m->state), 
h1m->err_pos, h1_ms...

   ~^


cc -Iinclude -Iebtree -Wall -Wextra  -O2 -pipe  -fstack-protector 
-fno-strict-aliasing  -fno-strict-aliasing -Wdeclaration-after-statement 
-fwrapv -fno-strict-overflow  -Wno-address-of-packed-member -Wno-unused-label 
-Wno-sign-compare -Wno-unused-parameter  -Wno-ignored-qualifiers  
-Wno-missing-field-initializers -Wno-implicit-fallthrough -Wtype-limits 
-Wshift-negative-value   -Wnull-dereference   -DFREEBSD_PORTS-DTPROXY 
-DCONFIG_HAP_CRYPT -DUSE_GETADDRINFO -DUSE_ZLIB  -DENABLE_POLL -DENABLE_KQUEUE 
-DUSE_CPU_AFFINITY -DUSE_ACCEPT4 -DCONFIG_REGPARM=3 -DUSE_THREAD -DUSE_OPENSSL 
-I/usr/include -DUSE_PCRE -I/usr/local/include -DUSE_PCRE_JIT  
-DCONFIG_HAPROXY_VERSION=\"1.9-dev3\" -DCONFIG_HAPROXY_DATE=\"2018/09/29\" -c 
-o src/peers.o src/peers.c
src/peers.c:253:16: warning: implicit conversion 

Re: Fix building haproxy 1.8.5 with LibreSSL 2.6.4

2018-04-16 Thread Dmitry Sivachenko

> On 07 Apr 2018, at 17:38, Emmanuel Hocdet  wrote:
> 
> 
> Hi Andy
> 
>> Le 31 mars 2018 à 16:43, Andy Postnikov  a écrit :
>> 
>> I used to rework previous patch from Alpinelinux to build with latest stable 
>> libressl
>> But found no way to run tests with openssl which is primary library as I see
>> Is it possible to accept the patch upstream or get review on it? 
>> 
>> 
> 
> 
> @@ -2208,7 +2223,7 @@
> #else
>   cipher = SSL_CIPHER_find(ssl, cipher_suites);
> #endif
> - if (cipher && SSL_CIPHER_get_auth_nid(cipher) == 
> NID_auth_ecdsa) {
> + if (cipher && SSL_CIPHER_is_ECDSA(cipher)) {
>   has_ecdsa = 1;
>   break;
>   }
> 
> No, it’s a regression in lib compatibility.
> 


Hello,

it would be nice if you could come to an acceptable solution and finally merge LibreSSL support.
There have been several attempts to propose LibreSSL support in the past, and every time the discussion died with no result.

Thanks :)





Re: haproxy-1.8 build failure on FreeBSD/i386 (clang)

2018-02-21 Thread Dmitry Sivachenko

> On 21 Feb 2018, at 16:33, David CARLIER  wrote:
> 
> Might be irrelevant idea, but is it not possible to detect it via simple code 
> test into the Makefile eventually ?


Did you mean configure?  :)


Re: Fix building without NPN support

2018-02-18 Thread Dmitry Sivachenko

> On 15 Feb 2018, at 17:58, Bernard Spil  wrote:
> 
> On 2018-02-15 15:03, Lukas Tribus wrote:
>> Hello,
>> On 15 February 2018 at 13:42, Bernard Spil  wrote:
>>> Hello HAProxy maintainers,
>>> https://github.com/Sp1l/haproxy/tree/20180215-fix-no-NPN
>>> Fix build with OpenSSL without NPN capability
>>> OpenSSL can be built without NEXTPROTONEG support by passing
>>> -no-npn to the configure script. This sets the
>>> OPENSSL_NO_NEXTPROTONEG flag in opensslconf.h
>>> Since NEXTPROTONEG is now considered deprecated, it is superseeded
>>> by ALPN (Application Layer Protocol Next), HAProxy should allow
>>> building withough NPN support.
>>> Git diff attached for your consideration.
>> Please don't remove npn config parsing (no ifdefs in "ssl_bind_kw
>> ssl_bind_kws" and "bind_kw_list bind_kws"). ssl_bind_parse_npn returns
>> a fatal configuration error when npn is configured and the library
>> doesn't support it.
>> "library does not support TLS NPN extension" is a better error message
>> than something like "npn is not a valid keyword".
>> Otherwise I agree, thanks for the patch!
>> cheers,
>> lukas
> 
> Hi Lukas,
> 
> Agree. Updated patch attached.
> 
> Bernard.


Is this patch good, Lukas?
Any plans to integrate it?




Re: haproxy-1.8 build failure on FreeBSD/i386 (clang)

2018-02-17 Thread Dmitry Sivachenko
> On 14 February 2018 at 11:09, Dmitry Sivachenko <trtrmi...@gmail.com> wrote:
>> What about this change?
>> 
>> --- work/haproxy-1.8.4/include/common/hathreads.h   2018-02-08 
>> 13:05:15.0 +
>> +++ /tmp/hathreads.h2018-02-14 11:06:25.031422000 +
>> @@ -104,7 +104,7 @@ extern THREAD_LOCAL unsigned long tid_bi
>> /* TODO: thread: For now, we rely on GCC builtins but it could be a good 
>> idea to
>>  * have a header file regrouping all functions dealing with threads. */
>> 
>> -#if defined(__GNUC__) && (__GNUC__ < 4 || __GNUC__ == 4 && __GNUC_MINOR__ < 
>> 7) && !defined(__clang__)
>> +#if (defined(__GNUC__) && (__GNUC__ < 4 || __GNUC__ == 4 && __GNUC_MINOR__ 
>> < 7) && !defined(__clang__)) || (defined(__clang__) && defined(__i386__))
>> /* gcc < 4.7 */
>> 
>> #define HA_ATOMIC_ADD(val, i)__sync_add_and_fetch(val, i)
>> 
>> 


> On 14 Feb 2018, at 14:13, David CARLIER <devne...@gmail.com> wrote:
> Whatever works best for you. Regards.


Well, I wonder whether this is worth including in the haproxy sources?


Re: haproxy-1.8 build failure on FreeBSD/i386 (clang)

2018-02-14 Thread Dmitry Sivachenko

> On 12 Feb 2018, at 17:37, David CARLIER  wrote:
> 
> I think I m the one behing this relatively recent change ... why not adding 
> in the condition the architecture ? e.g. !defined(__clang__) && 
> !defined(__i386__) ... something like this...
> 
> Hope it is useful.
> 


What about this change?

--- work/haproxy-1.8.4/include/common/hathreads.h   2018-02-08 
13:05:15.0 +
+++ /tmp/hathreads.h2018-02-14 11:06:25.031422000 +
@@ -104,7 +104,7 @@ extern THREAD_LOCAL unsigned long tid_bi
/* TODO: thread: For now, we rely on GCC builtins but it could be a good idea to
 * have a header file regrouping all functions dealing with threads. */

-#if defined(__GNUC__) && (__GNUC__ < 4 || __GNUC__ == 4 && __GNUC_MINOR__ < 7) 
&& !defined(__clang__)
+#if (defined(__GNUC__) && (__GNUC__ < 4 || __GNUC__ == 4 && __GNUC_MINOR__ < 
7) && !defined(__clang__)) || (defined(__clang__) && defined(__i386__))
/* gcc < 4.7 */

#define HA_ATOMIC_ADD(val, i)__sync_add_and_fetch(val, i)




Re: haproxy-1.8 build failure on FreeBSD/i386 (clang)

2018-02-11 Thread Dmitry Sivachenko

> On 11 Feb 2018, at 13:49, Franco Fichtner <fra...@opnsense.org> wrote:
> 
> Hi,
> 
>> On 11. Feb 2018, at 7:05 AM, Dmitry Sivachenko <trtrmi...@gmail.com> wrote:
>> 
>> src/proto_http.c:(.text+0x1209): undefined reference to 
>> `__atomic_fetch_add_8'
> 
> I believe this is a problem with older Clang versions not defining 8-bit
> operations like __atomic_fetch_add_8 on 32-bit.  This particularly affects
> FreeBSD 11.1 on i386 with LLVM 4.0.0.



I get the same error on FreeBSD-current/i386 (clang 5.0.1):

/usr/bin/ld: error: undefined symbol: __atomic_fetch_add_8
>>> referenced by src/proto_http.c
>>>   src/proto_http.o:(http_perform_server_redirect)

/usr/bin/ld: error: undefined symbol: __atomic_fetch_add_8
>>> referenced by src/proto_http.c
>>>   src/proto_http.o:(http_wait_for_request)

<...>


haproxy-1.8 build failure on FreeBSD/i386 (clang)

2018-02-10 Thread Dmitry Sivachenko
Hello,

haproxy-1.8 does not build on FreeBSD/i386 (clang):

src/proto_http.o: In function `http_perform_server_redirect':
src/proto_http.c:(.text+0x1209): undefined reference to `__atomic_fetch_add_8'
src/proto_http.o: In function `http_wait_for_request':
src/proto_http.c:(.text+0x275a): undefined reference to `__atomic_fetch_add_8'
src/proto_http.c:(.text+0x2e2c): undefined reference to `__atomic_fetch_add_8'
src/proto_http.c:(.text+0x2e48): undefined reference to `__atomic_fetch_add_8'
src/proto_http.c:(.text+0x30bb): undefined reference to `__atomic_fetch_add_8'
src/proto_http.o:src/proto_http.c:(.text+0x3184): more undefined references to 
`__atomic_fetch_add_8' follow
src/time.o: In function `tv_update_date':
src/time.c:(.text+0x631): undefined reference to `__atomic_compare_exchange_8'


In include/common/hathreads.h you have (line 107):
#if defined(__GNUC__) && (__GNUC__ < 4 || __GNUC__ == 4 && __GNUC_MINOR__ < 7) &
& !defined(__clang__)


Why do you exclude clang here?  If I remove !defined(__clang__), it builds fine 
but produces a number of similar warnings:


In file included from src/compression.c:29:
In file included from include/common/cfgparse.h:30:
include/proto/proxy.h:116:2: warning: variable '__new' is uninitialized when
  used within its own initialization [-Wuninitialized]
HA_ATOMIC_UPDATE_MAX(>fe_counters.cps_max,
^~
include/common/hathreads.h:172:55: note: expanded from macro
  'HA_ATOMIC_UPDATE_MAX'
while (__old < __new && !HA_ATOMIC_CAS(val, &__old, __new)); \
 ~~~^~
include/common/hathreads.h:128:26: note: expanded from macro 'HA_ATOMIC_CAS'
typeof((new)) __new = (new);   \
  ~^~~


What is the proper fix for that?  Maybe removing !defined(__clang__)?

Thanks!


Re: [ANNOUNCE] haproxy-1.8-rc1 : the last mile

2017-11-03 Thread Dmitry Sivachenko

> On 01 Nov 2017, at 02:20, Willy Tarreau  wrote:
> 
> Hi all!
> 


Hello,

several new warnings from clang, some look meaningful:

cc -Iinclude -Iebtree -Wall  -O2 -pipe  -fstack-protector -fno-strict-aliasing  
-fno-strict-aliasing -Wdeclaration-after-statement -fwrapv  
-Wno-address-of-packed-member -Wno-null-dereference -Wno-unused-label   
-DFREEBSD_PORTS-DTPROXY -DCONFIG_HAP_CRYPT -DUSE_GETADDRINFO -DUSE_ZLIB  
-DENABLE_POLL -DENABLE_KQUEUE -DUSE_CPU_AFFINITY -DUSE_ACCEPT4 
-DCONFIG_REGPARM=3 -DUSE_THREAD -DUSE_OPENSSL  -DUSE_PCRE -I/usr/local/include 
-DUSE_PCRE_JIT  -DCONFIG_HAPROXY_VERSION=\"1.8-rc1-901f75c\" 
-DCONFIG_HAPROXY_DATE=\"2017/10/31\" -c -o src/standard.o src/standard.c
src/server.c:875:14: warning: address of array 'check->desc' will always
  evaluate to 'true' [-Wpointer-bool-conversion]
if (check->desc)
~~  ~~~^~~~
src/server.c:914:14: warning: address of array 'check->desc' will always
  evaluate to 'true' [-Wpointer-bool-conversion]
if (check->desc)
~~  ~~~^~~~
src/server.c:958:14: warning: address of array 'check->desc' will always
  evaluate to 'true' [-Wpointer-bool-conversion]
if (check->desc)
~~  ~~~^~~~
src/cfgparse.c:5044:34: warning: implicit conversion from 'int' to 'char'
  changes value from 130 to -126 [-Wconstant-conversion]
  ...curproxy->check_req[5] = 130;
~ ^~~
src/cfgparse.c:5070:33: warning: implicit conversion from 'int' to 'char'
  changes value from 128 to -128 [-Wconstant-conversion]
  ...curproxy->check_req[5] = 128;
~ ^~~



cc -Iinclude -Iebtree -Wall  -O2 -pipe  -fstack-protector -fno-strict-aliasing  
-fno-strict-aliasing -Wdeclaration-after-statement -fwrapv  
-Wno-address-of-packed-member -Wno-null-dereference -Wno-unused-label   
-DFREEBSD_PORTS-DTPROXY -DCONFIG_HAP_CRYPT -DUSE_GETADDRINFO -DUSE_ZLIB  
-DENABLE_POLL -DENABLE_KQUEUE -DUSE_CPU_AFFINITY -DUSE_ACCEPT4 
-DCONFIG_REGPARM=3 -DUSE_THREAD -DUSE_OPENSSL  -DUSE_PCRE -I/usr/local/include 
-DUSE_PCRE_JIT  -DCONFIG_HAPROXY_VERSION=\"1.8-rc1-901f75c\" 
-DCONFIG_HAPROXY_DATE=\"2017/10/31\" -c -o src/sample.o src/sample.c
src/peers.c:255:16: warning: implicit conversion from 'int' to 'char' changes
  value from 133 to -123 [-Wconstant-conversion]
*msg_type = PEER_MSG_STKT_UPDATE_TIMED;
  ~ ^~
src/peers.c:257:16: warning: implicit conversion from 'int' to 'char' changes
  value from 134 to -122 [-Wconstant-conversion]
*msg_type = PEER_MSG_STKT_INCUPDATE_TIMED;
  ~ ^
src/peers.c:261:16: warning: implicit conversion from 'int' to 'char' changes
  value from 128 to -128 [-Wconstant-conversion]
*msg_type = PEER_MSG_STKT_UPDATE;
  ~ ^~~~
src/peers.c:263:16: warning: implicit conversion from 'int' to 'char' changes
  value from 129 to -127 [-Wconstant-conversion]
*msg_type = PEER_MSG_STKT_INCUPDATE;
  ~ ^~~
src/peers.c:450:11: warning: implicit conversion from 'int' to 'char' changes
  value from 130 to -126 [-Wconstant-conversion]
msg[1] = PEER_MSG_STKT_DEFINE;
   ~ ^~~~
src/peers.c:486:11: warning: implicit conversion from 'int' to 'char' changes
  value from 132 to -124 [-Wconstant-conversion]
msg[1] = PEER_MSG_STKT_ACK;
   ~ ^



cc -Iinclude -Iebtree -Wall  -O2 -pipe  -fstack-protector -fno-strict-aliasing  
-fno-strict-aliasing -Wdeclaration-after-statement -fwrapv  
-Wno-address-of-packed-member -Wno-null-dereference -Wno-unused-label   
-DFREEBSD_PORTS-DTPROXY -DCONFIG_HAP_CRYPT -DUSE_GETADDRINFO -DUSE_ZLIB  
-DENABLE_POLL -DENABLE_KQUEUE -DUSE_CPU_AFFINITY -DUSE_ACCEPT4 
-DCONFIG_REGPARM=3 -DUSE_THREAD -DUSE_OPENSSL  -DUSE_PCRE -I/usr/local/include 
-DUSE_PCRE_JIT  -DCONFIG_HAPROXY_VERSION=\"1.8-rc1-901f75c\" 
-DCONFIG_HAPROXY_DATE=\"2017/10/31\" -c -o src/freq_ctr.o src/freq_ctr.c
src/mux_h2.c:1734:15: warning: implicit conversion from enumeration type
  'enum h2_ss' to different enumeration type 'enum h2_cs'
  [-Wenum-conversion]
h2c->st0 = H2_SS_ERROR;
 ~ ^~~
src/mux_h2.c:2321:15: warning: implicit conversion from enumeration type
  'enum h2_ss' to different enumeration type 'enum h2_cs'
  [-Wenum-conversion]
h2c->st0 = H2_SS_ERROR;
 ~ ^~~
src/mux_h2.c:2435:15: warning: implicit conversion from enumeration type
  'enum h2_ss' to different enumeration type 'enum h2_cs'
  [-Wenum-conversion]
h2c->st0 = H2_SS_ERROR;
   

Re: FreeBSD CPU Affinity

2017-08-17 Thread Dmitry Sivachenko

> On 16 Aug 2017, at 18:32, Olivier Houchard  wrote:
> 
> 
> 
> I think I know what's going on.
> Can you try the attached patch ?
> 
> Thanks !
> 
> Olivier
> <0001-MINOR-Fix-CPU-usage-on-FreeBSD.patch>


Also, it would probably be correct to check the return code from cpuset_setaffinity() and treat a failure as a fatal error, aborting haproxy startup.

It is better to get an error message at startup than to guess why it does not work as expected.
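Something along these lines (a sketch only; apply_cpu_mask is a name I made up, and the exact level/which arguments depend on where haproxy applies the mask):

/* Sketch: abort startup when the kernel refuses the CPU mask instead of
 * silently continuing without affinity. */
#include <sys/param.h>
#include <sys/cpuset.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void apply_cpu_mask(const cpuset_t *mask)
{
    if (cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_PID, -1,
                           sizeof(*mask), mask) == -1) {
        fprintf(stderr, "cpuset_setaffinity() failed: %s\n", strerror(errno));
        exit(1);
    }
}

int main(void)
{
    cpuset_t mask;

    CPU_ZERO(&mask);
    CPU_SET(0, &mask);   /* pin to CPU 0, just as an example */
    apply_cpu_mask(&mask);
    return 0;
}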


Re: FreeBSD CPU Affinity

2017-08-16 Thread Dmitry Sivachenko

> On 16 Aug 2017, at 17:40, Mark Staudinger <mark.staudin...@nyi.net> wrote:
> 
> On Wed, 16 Aug 2017 10:35:05 -0400, Dmitry Sivachenko <trtrmi...@gmail.com> 
> wrote:
> 
>> Hello,
>> 
>> are you installing haproxy from FreeBSD ports?
>> 
>> I just tried your configuration and it works as you expect.
>> 
>> If you are building haproxy by hand, add the USE_CPU_AFFINITY=1 parameter to 
>> make manually.  The FreeBSD port does that for you.
>> 
>> 
>> 
> 
> 
> Hi Dmitry,
> 
> I am running (for now) a locally compiled from source version.
> 
> Build options :
>  TARGET  = freebsd
>  CPU = generic
>  CC  = clang
>  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv
>  OPTIONS = USE_CPU_AFFINITY=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 
> USE_STATIC_PCRE=1 USE_PCRE_JIT=1



Strange.  I am testing on FreeBSD-10-stable though.

Maybe you could add a return code check for cpuset_setaffinity() and log a possible error?


Re: FreeBSD CPU Affinity

2017-08-16 Thread Dmitry Sivachenko

> On 16 Aug 2017, at 17:24, Mark Staudinger  wrote:
> 
> Hi Folks,
> 
> Running HAProxy-1.7.8 on FreeBSD-11.0.  Working with nbproc=2 to separate 
> HTTP and HTTPS portions of the config.


Hello,

are you installing haproxy from FreeBSD ports?

I just tried your configuration and it works as you expect.

If you are building haproxy by hand, add the USE_CPU_AFFINITY=1 parameter to make manually.  The FreeBSD port does that for you.






Re: Fix building haproxy with recent LibreSSL

2017-07-04 Thread Dmitry Sivachenko

> On 04 Jul 2017, at 11:04, Willy Tarreau <w...@1wt.eu> wrote:
> 
> Hi Dmitry,
> 
> [CCing Bernard, the  patch's author]
> 
> On Mon, Jul 03, 2017 at 12:34:52AM +0300, Dmitry Sivachenko wrote:
>> Hello,
>> 
>> can you please take a look at proposed patch to fix build of haproxy with
>> recent version of LibreSSL?
>> 
>> https://www.mail-archive.com/haproxy@formilux.org/msg25819.html
> 
> 
> Do you know if the patch applies to 1.8 (it was mangled so I didn't try).


Sorry, hit reply too fast: no, one chunk fails against 1.8-dev2 (the one dealing with #ifdef SSL_CTX_get_tlsext_status_arg; it requires analysis because it is not a simple surrounding-context change).




Re: Fix building haproxy with recent LibreSSL

2017-07-04 Thread Dmitry Sivachenko

> On 04 Jul 2017, at 11:04, Willy Tarreau <w...@1wt.eu> wrote:
> 
> Hi Dmitry,
> 
> [CCing Bernard, the  patch's author]
> 
> On Mon, Jul 03, 2017 at 12:34:52AM +0300, Dmitry Sivachenko wrote:
>> Hello,
>> 
>> can you please take a look at proposed patch to fix build of haproxy with
>> recent version of LibreSSL?
>> 
>> https://www.mail-archive.com/haproxy@formilux.org/msg25819.html
> 
> I personally have no opinion on this one, as long as it doesn't break the
> build for other versions. Do you see the problem on your FreeBSD builds ?
> Do you know if the patch applies to 1.8 (it was mangled so I didn't try).
> We could relatively easily apply Bernard's patch as his description can
> be used as a commit message.



On FreeBSD it does fix the build (though a new warning appears which I can't explain due to my lack of SSL knowledge):

src/ssl_sock.c:803:2: warning: incompatible integer to pointer conversion
 assigning to 'void (*)(void)' from 'long' [-Wint-conversion]
   SSL_CTX_get_tlsext_status_cb(ctx, );
   ^~~~
src/ssl_sock.c:801:6: note: expanded from macro 'SSL_CTX_get_tlsext_status_cb'
 ...= SSL_CTX_ctrl(ctx,SSL_CTRL_GET_TLSEXT_STATUS_REQ_CB,0, (void (**)(void))cb)
^ ~~
1 warning generated.


The patch was taken from OpenBSD, so in general it should be fine.

A review from some SSL-aware people on your side would be nice.




Fix building haproxy with recent LibreSSL

2017-07-02 Thread Dmitry Sivachenko
Hello,

can you please take a look at the proposed patch to fix the build of haproxy with a recent version of LibreSSL?

https://www.mail-archive.com/haproxy@formilux.org/msg25819.html

Thanks.


Re: How to add custom options to CFLAGS

2017-06-07 Thread Dmitry Sivachenko

> On 07 Jun 2017, at 11:41, Willy Tarreau <w...@1wt.eu> wrote:
> 
> Hi Dmitry,
> 
> On Sun, Jun 04, 2017 at 02:54:23PM +0300, Dmitry Sivachenko wrote:
>>> Same here, and it's important not to create confusion on the way
>>> CFLAGS are computed.
>>> 
>>> By the way, usually if I need to add some specific flags (eg #define),
>>> I do it via DEFINE or SMALL_OPTS. If I want to change the optimization
>>> options, I use CPU_CFLAGS or CPU_CFLAGS..
>>> 
>>> So maybe you already have what you need and only the documentation needs
>>> to be improved.
>>> 
>> 
>> FreeBSD ports collection has a rule for CFLAGS customisation: ports framework
>> sets CFLAGS environment and expects it to be used during compilation.
>> 
>> Usually people use it to specify different -On options and other
>> optimisations they want.
>> 
>> So strictly speaking it is not CPU-specific, but rather environment-specific.
> 
> I agree on the general principle, it just happens that for a very long time
> I've had to deal with broken compilers on various CPUs that were producing
> bogus code at certain optimization levels, which is what made the optimization
> level end up in the CPU-specific CFLAGS. Good memories are gcc 3.0.4 on PARISC
> and pgcc on i586.
> 
> While things have significantly evolved since then, there are still certain
> flags which directly affect optimization and which have a different behaviour
> on various architectures (-mcpu, -mtune, -march, -mregparm). Given that in
> general you want to change them when you change the optimization level
> (typically to produce debuggable code), I tend to think it continues to make
> sense to have all of them grouped together.


I see.


> 
>> Right now I see only -O2 in CPU_CFLAGS, so it can be used for that purpose.
>> 
>> If the consensus will be to use CPU_CFLAGS for my purpose, it's OK, I will
>> switch to it.
> 
> If that's OK for you, I indeed would rather avoid touching that sensitive 
> area,
> though we can always extend it but I prefer the principle of least surprize.
> You can probably just do run make "CPU_CFLAGS=$CFLAGS" and achieve exactly 
> what
> you want.
> 

Yes, that is what I was talking about.  I'll stick to that approach then.

Thanks!




Re: How to add custom options to CFLAGS

2017-06-04 Thread Dmitry Sivachenko

> On 04 Jun 2017, at 14:37, Willy Tarreau <w...@1wt.eu> wrote:
> 
> On Sat, Jun 03, 2017 at 10:36:04AM +0200, Aleksandar Lazic wrote:
>> Hi Dmitry Sivachenko,
>> 
>> Dmitry wrote on:
>> 
>>> Hello,
>> 
>>> Right now we have in the Makefile:
>> 
>>>  Common CFLAGS
>>> # These CFLAGS contain general optimization options, CPU-specific 
>>> optimizations
>>> # and debug flags. They may be overridden by some distributions which 
>>> prefer to
>>> # set all of them at once instead of playing with the CPU and DEBUG 
>>> variables.
>>> CFLAGS = $(ARCH_FLAGS) $(CPU_CFLAGS) $(DEBUG_CFLAGS) $(SPEC_CFLAGS)
>> 
>>> So you explicitly suggest to override CFLAGS if someone want to add
>>> custom options here (say, tune optimisations).
>> 
>>> But this way now mandatory -fwrap will be lost.  Or one must remember not 
>>> to loose it.
>>> This is not convenient.
>> 
>>> I propose to add some means to inherit CFLAGS defined in haproxy's
>>> Makefile, but allow to customise it via additional options passed via 
>>> environment, example attached.
>> 
>>> What do you think?
>> 
>>> (another way would be to add $(CUSTOM_CFLAGS) at the end of CFLAGS 
>>> assignment).
>> 
>> Personally I would prefer the CUSTOM_CFLAGS way.
> 
> Same here, and it's important not to create confusion on the way
> CFLAGS are computed.
> 
> By the way, usually if I need to add some specific flags (eg #define),
> I do it via DEFINE or SMALL_OPTS. If I want to change the optimization
> options, I use CPU_CFLAGS or CPU_CFLAGS..
> 
> So maybe you already have what you need and only the documentation needs
> to be improved.
> 

The FreeBSD ports collection has a rule for CFLAGS customisation: the ports framework sets CFLAGS in the environment and expects it to be used during compilation.

Usually people use it to specify different -On options and other optimisations 
they want.

So strictly speaking it is not CPU-specific, but rather environment-specific, which is exactly the case the comment near CFLAGS describes.
Right now I see only -O2 in CPU_CFLAGS, so it can be used for that purpose.

If the consensus is to use CPU_CFLAGS for my purpose, that's OK, I will switch to it.

Thanks.


How to add custom options to CFLAGS

2017-06-03 Thread Dmitry Sivachenko
Hello,

Right now we have in the Makefile:

 Common CFLAGS
# These CFLAGS contain general optimization options, CPU-specific optimizations
# and debug flags. They may be overridden by some distributions which prefer to
# set all of them at once instead of playing with the CPU and DEBUG variables.
CFLAGS = $(ARCH_FLAGS) $(CPU_CFLAGS) $(DEBUG_CFLAGS) $(SPEC_CFLAGS)

So you explicitly suggest overriding CFLAGS if someone wants to add custom options here (say, to tune optimisations).

But this way the now-mandatory -fwrapv will be lost, or one must remember not to lose it.
This is not convenient.

I propose to add a means to inherit the CFLAGS defined in haproxy's Makefile, while allowing them to be customised via additional options passed through the environment; example attached.

What do you think?

(another way would be to add $(CUSTOM_CFLAGS) at the end of CFLAGS assignment).
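That alternative would look roughly like this (sketch only; CUSTOM_CFLAGS is the name suggested above, not an existing variable):

# Keep haproxy's own flags first and let the caller append extras.
CFLAGS = $(ARCH_FLAGS) $(CPU_CFLAGS) $(DEBUG_CFLAGS) $(SPEC_CFLAGS) $(CUSTOM_CFLAGS)

# invoked as, for example:
#   gmake TARGET=freebsd CUSTOM_CFLAGS="-O2 -pipe"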

Thanks.

--- Makefile.orig   2017-06-03 10:48:38.897518000 +0300
+++ Makefile2017-06-03 10:48:58.640446000 +0300
@@ -205,7 +205,7 @@ ARCH_FLAGS= $(ARCH_FLAGS.$(ARCH)
 # These CFLAGS contain general optimization options, CPU-specific optimizations
 # and debug flags. They may be overridden by some distributions which prefer to
 # set all of them at once instead of playing with the CPU and DEBUG variables.
-CFLAGS = $(ARCH_FLAGS) $(CPU_CFLAGS) $(DEBUG_CFLAGS) $(SPEC_CFLAGS)
+CFLAGS := $(ARCH_FLAGS) $(CPU_CFLAGS) $(DEBUG_CFLAGS) $(SPEC_CFLAGS) $(CFLAGS)
 
  Common LDFLAGS
 # These LDFLAGS are used as the first "ld" options, regardless of any library


[PATCH]: CLEANUP/MINOR: retire obsoleted USE_GETSOCKNAME build option

2017-05-11 Thread Dmitry Sivachenko
Hello,

this is a patch to remove the obsolete USE_GETSOCKNAME build option.

Thanks!



0001-CLEANUP-MINOR-retire-USE_GETSOCKNAME-build-option.patch
Description: Binary data


USE_GETSOCKNAME obsoleted?

2017-05-10 Thread Dmitry Sivachenko
Hello,

in the Makefile I see some logic around the USE_GETSOCKNAME define.
But as far as I can see, the sources use getsockname() unconditionally.

Is this an obsolete define that should be removed from the Makefile?

Thanks.



Re: Problems with haproxy 1.7.3 on FreeBSD 11.0-p8

2017-03-19 Thread Dmitry Sivachenko

> On 19 Mar 2017, at 14:40, Willy Tarreau  wrote:
> 
> Hi,
> 
> On Sat, Mar 18, 2017 at 01:12:09PM +0100, Willy Tarreau wrote:
>> OK here's a temporary patch. It includes a revert of the previous one and
>> adds a condition for the wake-up. At least it passes all my tests, including
>> those involving synchronous connection reports.
>> 
>> I'm not merging it yet as I'm wondering whether a reliable definitive
>> solution should be done once for all (and backported) by addressing the
>> root cause instead of constantly working around its consequences.
> 
> And here come two patches as a replacement for this temporary one. They
> are safer and have been done after throrough code review. I spotted a
> small tens of dirty corner cases having accumulated over the years due
> to the unclear meaning of the CO_FL_CONNECTED flag. They'll have to be
> addressed, but the current patches protect against these corner cases.
> They survived all tests involving delayed connections and checks with
> and without all handshake combinations, with tcp (immediate and delayed
> requests and responses) and http (immediate, delayed requests and responses
> and pipelining).
> 
> I'm resending the first one you already got Dmitry to make things easier
> to follow for everyone. These three are to be applied on top of 1.7.3. I
> still have a few other issues to deal with regarding 1.7 before doing a
> new release (hopefully by the beginning of this week).



Thank a lot!

I just incorporated the latest fixes into the FreeBSD ports tree.


Re: Problems with haproxy 1.7.3 on FreeBSD 11.0-p8

2017-03-14 Thread Dmitry Sivachenko

> On 15 Mar 2017, at 00:17, Willy Tarreau  wrote:
> 
> Matthias,
> 
> I could finally track the problem down to a 5-year old bug in the
> connection handler. It already used to affect Unix sockets but it
> requires so rare a set of options and even then its occurrence rate
> is so low that probably nobody noticed it yet.
> 
> I'm attaching the patch to be applied on top of 1.7.3 which fixes it,
> it will be merged into next version.
> 
> Dmitry, you may prefer to take this one than to revert the previous
> one from your ports, especially considering that a few connect()
> immediately succeed over the loopback on FreeBSD and that it was
> absolutely needed to trigger the bug (as well as the previously fixed
> one, which had less impact).
> 

Thanks!

I committed your patch to FreeBSD ports.




Re: Problems with haproxy 1.7.3 on FreeBSD 11.0-p8

2017-03-12 Thread Dmitry Sivachenko

> On 12 Mar 2017, at 11:34, Matthias Fechner  wrote:
> 
> 
> I checked the port again and there is one patch applied to haproxy, but
> it is a different file, so it should not cause the patch to fail, but
> maybe can cause other problems.
> --- src/hlua_fcn.c.orig 2016-12-17 13:58:44.786067000 +0300
> +++ src/hlua_fcn.c  2016-12-17 13:59:17.551256000 +0300
> @@ -39,6 +39,12 @@ static int class_listener_ref;
> 
> #define STATS_LEN (MAX((int)ST_F_TOTAL_FIELDS, (int)INF_TOTAL_FIELDS))
> 
> +#if defined(__FreeBSD__) || defined(__NetBSD__) || defined(__OpenBSD__)
> +#define s6_addr8   __u6_addr.__u6_addr8
> +#define s6_addr16  __u6_addr.__u6_addr16
> +#define s6_addr32  __u6_addr.__u6_addr32
> +#endif
> +
> static struct field stats[STATS_LEN];
> 
> int hlua_checkboolean(lua_State *L, int index)
> 
> 


I removed this patch from the ports tree.
It was needed for the previous version to compile and should not cause any problems.
Now it has become obsolete.


Re: HaProxy Hang

2017-03-03 Thread Dmitry Sivachenko

> On 03 Mar 2017, at 19:36, David King <king.c.da...@googlemail.com> wrote:
> 
> Thanks for the response!
> Thats interesting, i don't suppose you have the details of the other issues?


The first report is
https://www.mail-archive.com/haproxy@formilux.org/msg25060.html
and the second one is
https://www.mail-archive.com/haproxy@formilux.org/msg25067.html

(in the same thread)



> 
> Thanks
> Dave 
> 
> On 3 March 2017 at 14:15, Dmitry Sivachenko <trtrmi...@gmail.com> wrote:
> 
> > On 03 Mar 2017, at 17:07, David King <king.c.da...@googlemail.com> wrote:
> >
> > Hi All
> >
> > Hoping someone will be able to help, we're running a bit of an interesting 
> > setup
> >
> > we have 3 HAProxy nodes running freebsd 11.0 , each host runs 4 jails, each 
> > running haproxy, but only one of the jails is under any real load
> >
> 
> 
> If my memory does not fail me, this is the third report of a haproxy hang on FreeBSD, and all these reports are about FreeBSD-11.
> 
> I wonder if anyone experiences this issue on FreeBSD-10?
> 
> I am running a rather heavily loaded haproxy cluster on FreeBSD-10 (version 1.6.9 to be specific) and have never experienced any hangs (knock on wood).
>  
> 




Re: lua support does not build on FreeBSD

2016-12-23 Thread Dmitry Sivachenko

> On 23 Dec 2016, at 19:07, thierry.fourn...@arpalert.org wrote:
> 
> Ok, thanks Willy.
> 
> The news path in attachment. David, can you test the FreeBSD build ?
> The patch is tested and validated for Linux.



Yes, it does fix the FreeBSD build.



> 
> Thierry
> 
> 
> On Fri, 23 Dec 2016 14:50:38 +0100
> Willy Tarreau  wrote:
> 
>> On Fri, Dec 23, 2016 at 02:37:13PM +0100, thierry.fourn...@arpalert.org 
>> wrote:
>>> thanks Willy for the idea. I will write a patch ASAP, but. why a 32bits
>>> cast and not a 64 bit cast ?
>> 
>> First because existing code uses this already and it works. Second because
>> the 64-bit check might be more expensive for 32-bit platforms than the
>> double 32-bit check is for 64-bit platforms (though that's still to be
>> verified in the assembly code, as some compilers manage to assign register
>> pairs correctly).
>> 
>> Willy
>> 
> <0001-BUILD-lua-build-failed-on-FreeBSD.patch>




Re: lua support does not build on FreeBSD

2016-12-14 Thread Dmitry Sivachenko

> On 14 Dec 2016, at 16:24, David CARLIER  wrote:
> 
> Hi,
> 
> I ve made a small patch for 1.8 branch though. Does it suit ? (ie I
> made all the fields available, not sure if would be useful one day).
> 

Well, I was not sure what this s6_addr32 is used for and whether it is possible to avoid its usage (since it is Linux-specific).
If not, then this is probably the correct solution.




lua support does not build on FreeBSD

2016-12-13 Thread Dmitry Sivachenko
Hello,

I am unable to build haproxy-1.7.x on FreeBSD:

cc -Iinclude -Iebtree -Wall -O2 -pipe -O2 -fno-strict-aliasing -pipe  
-fstack-protector   -DFREEBSD_PORTS-DTPROXY -DCONFIG_HAP_CRYPT 
-DUSE_GETADDRINFO -DUSE_ZLIB  -DENABLE_POLL -DENABLE_KQUEUE -DUSE_CPU_AFFINITY 
-DUSE_OPENSSL  -DUSE_LUA -I/usr/local/include/lua53 -DUSE_DEVICEATLAS 
-I/place/WRK/ports/net/haproxy/work/deviceatlas-enterprise-c-2.1 -DUSE_PCRE 
-I/usr/local/include -DUSE_PCRE_JIT  -DCONFIG_HAPROXY_VERSION=\"1.7.1\" 
-DCONFIG_HAPROXY_DATE=\"2016/12/13\" -c -o src/hlua_fcn.o src/hlua_fcn.c
src/hlua_fcn.c:1019:27: error: no member named 's6_addr32' in 'struct in6_addr'
if (((addr1->addr.v6.ip.s6_addr32[0] & addr2->addr.v6.ma...
  ~ ^
src/hlua_fcn.c:1019:62: error: no member named 's6_addr32' in 'struct in6_addr'
  ...if (((addr1->addr.v6.ip.s6_addr32[0] & addr2->addr.v6.mask.s6_addr32[0]...
~~~ ^
src/hlua_fcn.c:1020:27: error: no member named 's6_addr32' in 'struct in6_addr'
 (addr2->addr.v6.ip.s6_addr32[0] & addr1->addr.v6.ma...
  ~ ^
src/hlua_fcn.c:1020:62: error: no member named 's6_addr32' in 'struct in6_addr'
  ...(addr2->addr.v6.ip.s6_addr32[0] & addr1->addr.v6.mask.s6_addr32[0])) &&
   ~~~ ^
src/hlua_fcn.c:1021:27: error: no member named 's6_addr32' in 'struct in6_addr'
((addr1->addr.v6.ip.s6_addr32[1] & addr2->addr.v6.ma...
  ~ ^
src/hlua_fcn.c:1021:62: error: no member named 's6_addr32' in 'struct in6_addr'
  ...((addr1->addr.v6.ip.s6_addr32[1] & addr2->addr.v6.mask.s6_addr32[1]) ==
~~~ ^
src/hlua_fcn.c:1022:27: error: no member named 's6_addr32' in 'struct in6_addr'
 (addr2->addr.v6.ip.s6_addr32[1] & addr1->addr.v6.ma...
  ~ ^
src/hlua_fcn.c:1022:62: error: no member named 's6_addr32' in 'struct in6_addr'
  ...(addr2->addr.v6.ip.s6_addr32[1] & addr1->addr.v6.mask.s6_addr32[1])) &&
   ~~~ ^
src/hlua_fcn.c:1023:27: error: no member named 's6_addr32' in 'struct in6_addr'
((addr1->addr.v6.ip.s6_addr32[2] & addr2->addr.v6.ma...
  ~ ^
src/hlua_fcn.c:1023:62: error: no member named 's6_addr32' in 'struct in6_addr'
  ...((addr1->addr.v6.ip.s6_addr32[2] & addr2->addr.v6.mask.s6_addr32[2]) ==
~~~ ^
src/hlua_fcn.c:1024:27: error: no member named 's6_addr32' in 'struct in6_addr'
 (addr2->addr.v6.ip.s6_addr32[2] & addr1->addr.v6.ma...
  ~ ^
src/hlua_fcn.c:1024:62: error: no member named 's6_addr32' in 'struct in6_addr'
  ...(addr2->addr.v6.ip.s6_addr32[2] & addr1->addr.v6.mask.s6_addr32[2])) &&
   ~~~ ^
src/hlua_fcn.c:1025:27: error: no member named 's6_addr32' in 'struct in6_addr'
((addr1->addr.v6.ip.s6_addr32[3] & addr2->addr.v6.ma...
  ~ ^
src/hlua_fcn.c:1025:62: error: no member named 's6_addr32' in 'struct in6_addr'
  ...((addr1->addr.v6.ip.s6_addr32[3] & addr2->addr.v6.mask.s6_addr32[3]) ==
~~~ ^
src/hlua_fcn.c:1026:27: error: no member named 's6_addr32' in 'struct in6_addr'
 (addr2->addr.v6.ip.s6_addr32[3] & addr1->addr.v6.ma...
  ~ ^
src/hlua_fcn.c:1026:62: error: no member named 's6_addr32' in 'struct in6_addr'
  ...(addr2->addr.v6.ip.s6_addr32[3] & addr1->addr.v6.mask.s6_addr32[3]))) {
   ~~~ ^
16 errors generated.




In netinet6/in6.h I see:

#ifdef _KERNEL  /* XXX nonstandard */
#define s6_addr8  __u6_addr.__u6_addr8
#define s6_addr16 __u6_addr.__u6_addr16
#define s6_addr32 __u6_addr.__u6_addr32
#endif


So it seems that the s6_addr32 macro is defined only when this header is included during a kernel build.
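For reference, a sketch of the kind of userland mapping that works around this (mapping the BSD union members onto the Linux-style name):

/* Sketch only: map the glibc-style s6_addr32 name onto the BSD union
 * members so code written against Linux headers builds on FreeBSD. */
#include <sys/types.h>
#include <netinet/in.h>
#include <stdio.h>

#if defined(__FreeBSD__) || defined(__NetBSD__) || defined(__OpenBSD__)
#define s6_addr32 __u6_addr.__u6_addr32
#endif

int main(void)
{
    struct in6_addr a = IN6ADDR_LOOPBACK_INIT;

    printf("%08x %08x %08x %08x\n",
           a.s6_addr32[0], a.s6_addr32[1], a.s6_addr32[2], a.s6_addr32[3]);
    return 0;
}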





Re: Backend per-server rate limiting

2016-09-28 Thread Dmitry Sivachenko

> On 28 Sep 2016, at 10:49, Stephan Müller  wrote:
> 
> Hi,
> 
> i want to configure a rate limit (say 100 http req/sec) for each backend 
> server like this:
> 
> listen front
>  bind :80
>  balance leastconn
>  server srv1 127.0.0.1:8000 limit 100
>  server srv2 127.0.0.2:8000 limit 100
> 
> As far i can see rate limiting is only supported for frontends [1].
> However,a long time ago, someone asked about the same question [2]. The 
> proposed solution was a multi tier load balancing having an extra proxy per 
> backend server, like this:
> 
> listen front
>  bind :80
>  balance leastconn
>  server srv1 127.0.0.1:8000 maxconn 100 track back1/srv
>  server srv2 127.0.0.2:8000 maxconn 100 track back2/srv
> 
>   listen back1
>  bind 127.0.0.1:8000
>  rate-limit 10
>  server srv 192.168.0.1:80 check
> 
>   listen back2
>  bind 127.0.0.2:8000
>  rate-limit 10
>  server srv 192.168.0.2:80 check
> 
> Is there a better (new) way to do that? The old thread mentioned, its on the 
> roadmap for 1.6.
> 


As far as I understand, "track" only affects health checks.  Otherwise, servers with the same name in different backends work independently.
So the servers in your first frontend (:80) will have no rate limit.




About tune.vars.reqres-max-size

2016-09-21 Thread Dmitry Sivachenko
Hello,

after reading documentation about

tune.vars.global-max-size 
tune.vars.reqres-max-size 
tune.vars.sess-max-size 
tune.vars.txn-max-size 

I see no default values here.  Can you clarify please?

Also, it is not obvious to me whether the tune.vars.reqres-max-size limit applies to each request being processed or is a total limit across all req.XXX variables.

Thanks.


srv_conn vs be_conn

2016-09-20 Thread Dmitry Sivachenko
Hello,

I have few questions:

1) In the documentation about srv_conn we have:
---
Returns an integer value corresponding to the number of currently established 
connections on the designated server, possibly including the connection being 
evaluated. 


What does "including the connection being evaluated" mean?

2) Is it true that be_conn(B) == sum(srv_conn(B/srv)) over all srv in backend B?

3) Does srv_conn(srv) equal what I see in Sessions->Current on the haproxy stats page for that server?

Thanks in advance.


Re: selecting backend based in server's load

2016-09-19 Thread Dmitry Sivachenko
 
> On 19 Sep 2016, at 23:42, Dmitry Sivachenko <trtrmi...@gmail.com> wrote:
> 
> Hello,
> 
> imagine the following configuration:
> 
> frontend F1
> use_backend BACKUP_B1 if B1_IS_FULL
> default_backend B1
> 
> backend B1
> server s_i
> ...
> server s_j
> 
> backend BACKUP_B1
> server b_i
> ...
> server b_j
> 
> -
> frontend F2
> use_backend BACKUP_B2 if B2_IS_FULL
> default_backend B2
> 
> backend B2
> server s_k
> ...
> server s_m
> 
> backend BACKUP_B2
> server b_k
> ...
> server b_m
> --
> <...>
> 
> So basically I have a number of backends B1 ... Bn which use different 
> subsets of the same server pool s_1 ... s_N.
> Each backend have "BACKUP_" backend pair, which should be used only when each 
> server in primary backend has more than a defined number of active sessions 
> (each server may have active sessions via different backends: B1, B2, ..., 
> Bn).
> 
> What is the easiest way to define Bn_IS_FULL acl?
> 
> So far I came with the following solution: in each frontend Fn section write:
> 
> tcp-request content set-var(sess.s_1_conn) srv_conn(B1/s_1)
> tcp-request content set-var(sess.s_1_conn) srv_conn(B2/s_1),add(sess.s_1_conn)
> # <...> repeat last line for each backend which has s_1.  We will have total 
> number of active connections to s_1
> 
> # Repeat the above block for each server s_2, ..., s_N
> 
> #Then define acl, assume the max number of active sessions is 7:
> acl F1_IS_FULL var(sess.s_1_conn) ge 7 var(sess.s_2_conn) ge 7 <...>
> 
> but it looks ugly, we need to replicate the same logic in each frontend and 
> use a lot of code to count sessions.  There should probably be a simpler way 
> to track down the total number of active sessions for a server which 
> participates in several backends.
> 


BTW, it would be convenient to be able to have one "super"-backend containing all servers:
backend SUPER_B
server s1
...
server sN

and let other backends reference these servers, similarly to what we can do with health checks ("track SUPER_B/s1"):

backend B1
server s_1 SUPER_B/s_1



As another benefit, this would allow the balance algorithm to take into account the connections each server receives via different backends.




selecting backend based in server's load

2016-09-19 Thread Dmitry Sivachenko
Hello,

imagine the following configuration:

frontend F1
use_backend BACKUP_B1 if B1_IS_FULL
default_backend B1

backend B1
server s_i
...
server s_j

backend BACKUP_B1
server b_i
...
server b_j

-
frontend F2
use_backend BACKUP_B2 if B2_IS_FULL
default_backend B2

backend B2
server s_k
...
server s_m

backend BACKUP_B2
server b_k
...
server b_m
--
<...>

So basically I have a number of backends B1 ... Bn which use different subsets of the same server pool s_1 ... s_N.
Each backend has a "BACKUP_" backend pair, which should be used only when every server in the primary backend has more than a defined number of active sessions (each server may have active sessions via different backends: B1, B2, ..., Bn).

What is the easiest way to define the Bn_IS_FULL ACL?

So far I have come up with the following solution: in each frontend Fn section, write:

tcp-request content set-var(sess.s_1_conn) srv_conn(B1/s_1)
tcp-request content set-var(sess.s_1_conn) srv_conn(B2/s_1),add(sess.s_1_conn)
# <...> repeat last line for each backend which has s_1.  We will have total 
number of active connections to s_1

# Repeat the above block for each server s_2, ..., s_N

#Then define acl, assume the max number of active sessions is 7:
acl F1_IS_FULL var(sess.s_1_conn) ge 7 var(sess.s_2_conn) ge 7 <...>

but it looks ugly, we need to replicate the same logic in each frontend and use 
a lot of code to count sessions.  There should probably be a simpler way to 
track down the total number of active sessions for a server which participates 
in several backends.
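
To make the amount of boilerplate concrete, here is the full pattern written out
for just two servers (s_1, s_2) shared by B1 and B2.  Note that I split the
threshold check into one named acl per server so that the conditions can be
ANDed in the use_backend rule:

frontend F1
    mode http
    # total sessions of s_1 across both backends that use it
    tcp-request content set-var(sess.s_1_conn) srv_conn(B1/s_1)
    tcp-request content set-var(sess.s_1_conn) srv_conn(B2/s_1),add(sess.s_1_conn)
    # same for s_2
    tcp-request content set-var(sess.s_2_conn) srv_conn(B1/s_2)
    tcp-request content set-var(sess.s_2_conn) srv_conn(B2/s_2),add(sess.s_2_conn)
    # B1 is "full" only if every server it uses is above the threshold (7 here)
    acl s_1_full var(sess.s_1_conn) ge 7
    acl s_2_full var(sess.s_2_conn) ge 7
    use_backend BACKUP_B1 if s_1_full s_2_full
    default_backend B1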

Thanks in advance.


Re: Incorrect counters in stats interface

2016-09-08 Thread Dmitry Sivachenko

> On 07 Sep 2016, at 23:12, David Birdsong  wrote:
> 
> Queue Cur is a gauge and so not representative of historical values.
> 
> Queue Max of zero is telling though.
> 
> In addition to queue timeout, there are other ways haproxy can synthesize an 
> http response on behalf of the backend server. Check for connection errors.
> 


Also, awk '{if ($11 >= 500) print};' gives me no matches from haproxy.log, but 
the counter for the backend's Total Sessions 5xx responses does increase.


Re: Incorrect counters in stats interface

2016-09-07 Thread Dmitry Sivachenko

> On 07 Sep 2016, at 21:10, PiBa-NL <piba.nl@gmail.com> wrote:
> 
> Hi Dmitry,
> Op 7-9-2016 om 15:54 schreef Dmitry Sivachenko:
>> Hello,
>> 
>> (sorry for reposting, but I do not see my e-mail in ML archive, so I assume 
>> it was blocked due to screenshots in attachments.  I replace them with links 
>> now).
>> 
>> I am using haproxy-1.6.9.
>> 
>> In web stats interface, I mouse-over backend's Total Sessions counter (1728 
>> in my case), and I see HTTP 5xx responses=46
>> (see screenshot: https://people.freebsd.org/~demon/scr1.png)
>> 
>> Then I mouse-over each server's Total sessions counter and none has positive 
>> number of HTTP 5xx responses (see second screenshot: 
>> https://people.freebsd.org/~demon/scr2.png).
>> 
>> Is it a bug or I misunderstand these counters?
>> 
>> Thanks!
> 
> In a case if all servers are down (or very busy).
> 
> A request could be queued and then timeout, so haproxy itself will return for 
> example a 503, while none of the servers ever returned anything for that 
> specific request.
> 
> I'm not saying this is the exact scenario you see, but it might explain it..
> 


In "Queue" section I have all zeroes in Cur and Max.




Incorrect counters in stats interface

2016-09-07 Thread Dmitry Sivachenko
Hello,

(sorry for reposting, but I did not see my e-mail in the ML archive, so I assume 
it was blocked due to the screenshots in attachments.  I have replaced them with 
links now).

I am using haproxy-1.6.9.

In the web stats interface, I mouse over the backend's Total Sessions counter 
(1728 in my case), and I see HTTP 5xx responses=46
(see screenshot: https://people.freebsd.org/~demon/scr1.png)

Then I mouse over each server's Total sessions counter and none has a positive 
number of HTTP 5xx responses (see second screenshot: 
https://people.freebsd.org/~demon/scr2.png).

Is it a bug or I misunderstand these counters?

Thanks!


Re: Using operators in ACLs

2016-02-24 Thread Dmitry Sivachenko

> On 24 Feb 2016, at 14:07, Willy Tarreau <w...@1wt.eu> wrote:
> 
> On Wed, Feb 24, 2016 at 01:36:39PM +0300, Dmitry Sivachenko wrote:
>> I do have "mode http" (I intentionally put it here with a comment).
>> Will it work only for tcp-mode frontend?
>> Or should I use tcp-request for tcp frontend and http-request for http 
>> frontend?
> 
> Both tcp-request and http-request will work in your HTTP frontend. My point
> is that if your frontend is in HTTP mode, you won't be able to direct the
> traffic to a TCP backend, the config parser will reject this.


Ah, yes, I see.  Thanks for the explanation.


Re: Using operators in ACLs

2016-02-24 Thread Dmitry Sivachenko

> On 24 Feb 2016, at 01:02, Willy Tarreau <w...@1wt.eu> wrote:
> 
> Hi Dmitry,
> 
> On Fri, Feb 19, 2016 at 05:58:47PM +0300, Dmitry Sivachenko wrote:
>> Hello,
>> 
>> I want to define ACL which will evaluate to true if a current number of 
>> connections to a particular backend is greater than a number of usable 
>> servers in that backend multiplied on some constant:
>> 
>> be_conn(BACK) > nbsrv(BACK) * N
>> 
>> So far I came up with the following solution:
>> 
>> frontend FRONT
>>mode http  # can be either http or tcp here
>>tcp-request content set-var(sess.nb) nbsrv(BACK)  # I use tcp-request 
>> here (not http-request) so it works for both http and tcp mode backends
>>acl my_acl be_conn(BACK),div(sess.nb) gt 10  #  "N" in 10 here
>> 
>> 
>> So I must use set-var here because div() accepts either a number or a 
>> variable.
>> 
>> Is this a good solution for my problem or can it be done better?
> 
> It currently is the only available solution, and I'm glad that you spotted
> it because support for variables in arithmetic operators was added in great
> part to permit such things.
> 
> I do have one comment regarding your comment about tcp-request vs
> http-request. What you say is valid only if you don't have "mode http"
> in your frontend, but I assume that you simplified the config so that
> it's easy to understand here.
> 


I do have "mode http" (I intentionally put it here with a comment).  Will it 
work only for tcp-mode frontend?

Or should I use tcp-request for tcp frontend and http-request for http frontend?




Using operators in ACLs

2016-02-19 Thread Dmitry Sivachenko
Hello,

I want to define an ACL which will evaluate to true if the current number of 
connections to a particular backend is greater than the number of usable servers 
in that backend multiplied by some constant:

be_conn(BACK) > nbsrv(BACK) * N

So far I came up with the following solution:

frontend FRONT
mode http  # can be either http or tcp here
tcp-request content set-var(sess.nb) nbsrv(BACK)  # I use tcp-request here 
(not http-request) so it works for both http and tcp mode backends
acl my_acl be_conn(BACK),div(sess.nb) gt 10  # "N" is 10 here


So I must use set-var here because div() accepts either a number or a variable.
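
For completeness, a rough sketch of how I plan to plug this into the frontend
(OVERFLOW is a hypothetical backend that would absorb the extra load):

frontend FRONT
    mode http
    tcp-request content set-var(sess.nb) nbsrv(BACK)
    acl my_acl be_conn(BACK),div(sess.nb) gt 10
    use_backend OVERFLOW if my_acl
    default_backend BACK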

Is this a good solution for my problem or can it be done better?

Thanks!


Re: http-ignore-probes produces a warning in tcp frontend

2016-02-04 Thread Dmitry Sivachenko

> On 04 Feb 2016, at 07:04, Willy Tarreau <w...@1wt.eu> wrote:
> 
> Hello Dmitry,
> 
> On Thu, Jan 28, 2016 at 05:31:58PM +0300, Dmitry Sivachenko wrote:
>> Hello,
>> 
>> I have an option http-ignore-probes in defaults section.
>> When I declare frontend in "tcp" mode, I get the following warning:
>> 
>> [WARNING] 027/172718 (18281) : config : 'option http-ignore-probes' ignored
>> for frontend 'MYTEST-front' as it requires HTTP mode.
>> 
>> In defaults section I have other http-specific options (e.g.
>> http-keep-alive), which does not produce a warning in tcp backend.
>> Is it intended?  It looks logical to produce such a warning only if
>> http-specific option is used directly in tcp backend and silently ignore it
>> when used in defaults.
> 
> There's no difference between having the option in defaults or explicitly
> in the section itself. You should see defaults as templates for next
> sections. The error here is that http-keep-alive should also produce a
> warning. But I think I know why it doesn't, most options are handled by
> a generic parser which checks the proxy mode, and a few other more
> complex ones are implemented "by hand" and do not necessarily run such
> checks.
> 
> It's a very bad practise to mix TCP and HTTP proxies with the same defaults
> sections. This probably is something we should document better in the doc.
> A good practise is to have one (or several) defaults sections for HTTP mode
> and then other defaults sections for TCP mode. And most often you don't even
> have the same timeouts, log settings etc.
> 


Thanks for the explanation!

I just realized that there can be multiple defaults sections, so your arguments 
look valid.
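
For the record, a minimal sketch of the layout I will switch to (a new defaults
section simply replaces the previous one for the proxies declared after it; the
backend definitions are omitted and the names are made up):

# defaults for HTTP proxies
defaults
    mode http
    option http-ignore-probes
    timeout client 15s

frontend http-front
    bind :8080
    default_backend B-http

# defaults for TCP proxies
defaults
    mode tcp
    timeout client 15s

frontend tcp-front
    bind :9000
    default_backend B-tcp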




Re: [patch] Enable USE_CPU_AFFINITY by default on FreeBSD

2015-11-04 Thread Dmitry Sivachenko

> On 04 Nov 2015, at 23:09, Renato Botelho  wrote:
> 
> Change is being used in pfSense and also was added to FreeBSD ports tree.
> 
> Should I send a separate patch for 1.6 branch?
> 
> Thanks
> 
> <0001-Enable-USE_CPU_AFFINITY-by-default-on-FreeBSD.patch>


I would also enable USE_GETADDRINFO by default; we use it unconditionally in the 
FreeBSD ports tree too.




Re: Build failure of 1.6 and openssl 0.9.8

2015-10-22 Thread Dmitry Sivachenko

> On 22 окт. 2015 г., at 10:44, Willy Tarreau <w...@1wt.eu> wrote:
> 
> Hello Dmitry,
> 
> On Thu, Oct 22, 2015 at 10:40:45AM +0300, Dmitry Sivachenko wrote:
>> 1.6.1 still does not build with OpenSSL < 1.0:
>> 
>> src/ssl_sock.o: In function `ssl_sock_do_create_cert':
>> ssl_sock.c:(.text+0x295b): undefined reference to 
>> `EVP_PKEY_get_default_digest_nid'
>> Makefile:760: recipe for target 'haproxy' failed
>> 
>> So is it intended behavior?
> 
> It's neither intended nor not intended, it's just that I was waiting for
> Marcus' confirmation that the patch fixed the issue for him, and forgot
> about this patch while waiting for a response. Can you confirm on your
> side that the patch fixes the issue for you ? If so I'm willing to merge
> the fix immediately. I prefer to be careful because on my side openssl
> 0.9.8 doesn't break so I want to be sure that there isn't a second level
> of breakage after this one.
> 


Aha, no problem, I thought it was supposed to be fixed before 1.6.1.

I tried a patch in this thread 
(0002-BUILD-ssl-fix-build-error-introduced-in-commit-7969a.patch).

It does fix the build error (FreeBSD-9, OpenSSL 0.9.8q).  Though there is the 
following warning:

src/ssl_sock.c: In function 'ssl_sock_load_cert_chain_file':
src/ssl_sock.c:1623: warning: dereferencing type-punned pointer will break 
strict-aliasing rules
src/ssl_sock.c:1636: warning: dereferencing type-punned pointer will break 
strict-aliasing rules
src/ssl_sock.c: In function 'ssl_sock_srv_verifycbk':
src/ssl_sock.c:2264: warning: dereferencing type-punned pointer will break 
strict-aliasing rules
src/ssl_sock.c:2278: warning: dereferencing type-punned pointer will break 
strict-aliasing rules





Re: Build failure of 1.6 and openssl 0.9.8

2015-10-22 Thread Dmitry Sivachenko

> On 19 окт. 2015 г., at 17:29, Willy Tarreau  wrote:
> 
> Hi Christopher,
> 
> On Mon, Oct 19, 2015 at 03:05:05PM +0200, Christopher Faulet wrote:
>> Damned! I generated a huge amount of disturbances with my paches! Really 
>> sorry for that.
> 
> Shit happens sometimes. I had my hours of fame with option
> http-send-name-header merged in 1.4-stable years ago, and that was so badly
> designed that it still managed to cause a lot of trouble during 1.6-dev.
> 
>> Add a #ifdef to check the OpenSSL version seems to be a good fix. I 
>> don't know if there is a workaround to do the same than 
>> EVP_PKEY_get_default_digest_nid() for old OpenSSL versions.
> 
> I was unsure how the code was supposed to work given that two blocks
> were replaced by two others and I was unsure whether there was a
> dependence. So as long as we can fall back to the pre-patch behaviour
> I'm perfectly fine.
> 
>> This function is used to get default signature digest associated to the 
>> private key used to sign generated X509 certificates. It is called when 
>> the private key differs than EVP_PKEY_RSA, EVP_PKEY_DSA and EVP_PKEY_EC. 
>> It should be enough for most of cases (maybe all cases ?).
> 
> OK great.
> 
>> By the way, I attached a patch to fix the bug.
> 
> Thank you. Marcus, can you confirm that it's OK for you with this fix so
> that I can merge it ?



Hello,

1.6.1 still does not build with OpenSSL < 1.0:

src/ssl_sock.o: In function `ssl_sock_do_create_cert':
ssl_sock.c:(.text+0x295b): undefined reference to 
`EVP_PKEY_get_default_digest_nid'
Makefile:760: recipe for target 'haproxy' failed


So is it intended behavior?


Re: Build failure of 1.6 and openssl 0.9.8

2015-10-22 Thread Dmitry Sivachenko

> On 22 окт. 2015 г., at 13:54, Marcus Rueckert <da...@web.de> wrote:
> 
> On 2015-10-22 13:38:45 +0300, Dmitry Sivachenko wrote:
>> I see this warnings with gcc-4.2.1 (shipped with FreeBSD-9), but no warnings 
>> with clang 3.6.1.
>> I see a lot of such warnings with gcc48, but it seems expected according to 
>> comments in Makefile:
>>  Compiler-specific flags that may be used to disable some negative over-
>> # optimization or to silence some warnings. -fno-strict-aliasing is needed 
>> with
>> # gcc >= 4.4.
> 
> 4.3.4 on SLES 11 SP 4
> 4.8.3 on openSUSE 13.2
> 5.1.1 on openSUSE Tumbleweed
> 
> https://build.opensuse.org/package/show/server:http/haproxy (succeeded
> links on the right side)


There is a -fno-strict-aliasing option in your build logs.


Re: Build failure of 1.6 and openssl 0.9.8

2015-10-22 Thread Dmitry Sivachenko

> On 22 окт. 2015 г., at 14:12, Marcus Rueckert <da...@web.de> wrote:
> 
> On 2015-10-22 13:59:09 +0300, Dmitry Sivachenko wrote:
>>> On 22 окт. 2015 г., at 13:54, Marcus Rueckert <da...@web.de> wrote:
>>> 
>>> On 2015-10-22 13:38:45 +0300, Dmitry Sivachenko wrote:
>>>> I see this warnings with gcc-4.2.1 (shipped with FreeBSD-9), but no 
>>>> warnings with clang 3.6.1.
>>>> I see a lot of such warnings with gcc48, but it seems expected according 
>>>> to comments in Makefile:
>>>>  Compiler-specific flags that may be used to disable some negative 
>>>> over-
>>>> # optimization or to silence some warnings. -fno-strict-aliasing is needed 
>>>> with
>>>> # gcc >= 4.4.
>>> 
>>> 4.3.4 on SLES 11 SP 4
>>> 4.8.3 on openSUSE 13.2
>>> 5.1.1 on openSUSE Tumbleweed
>>> 
>>> https://build.opensuse.org/package/show/server:http/haproxy (succeeded
>>> links on the right side)
>> 
>> 
>> There is  -fno-strict-aliasing option in your build logs.
> 
> But it is set by the upstream Makefile. so unless you break the CFLAGS
> of the makefile. shouldnt you have that too?
> 


I override the CFLAGS variable during make invocation (because otherwise the 
build system does not respect the CFLAGS environment variable), as well as the 
CC variable (FreeBSD does not have "gcc" at all).





Re: Build failure of 1.6 and openssl 0.9.8

2015-10-22 Thread Dmitry Sivachenko

> On 22 окт. 2015 г., at 13:14, Marcus Rueckert  wrote:
> 
> 3. i can not reproduce the strict alias warnings.
> 

I see these warnings with gcc-4.2.1 (shipped with FreeBSD-9), but no warnings 
with clang 3.6.1.
I see a lot of such warnings with gcc48, but it seems expected according to the 
comments in the Makefile:
# Compiler-specific flags that may be used to disable some negative over-
# optimization or to silence some warnings. -fno-strict-aliasing is needed with
# gcc >= 4.4.




Re: Build failure of 1.6 and openssl 0.9.8

2015-10-22 Thread Dmitry Sivachenko

> On 22 окт. 2015 г., at 11:45, Willy Tarreau <w...@1wt.eu> wrote:
> 
> On Thu, Oct 22, 2015 at 11:31:01AM +0300, Dmitry Sivachenko wrote:
>> 
>>> On 22 ??. 2015 ??., at 10:44, Willy Tarreau <w...@1wt.eu> wrote:
>>> 
>>> Hello Dmitry,
>>> 
>>> On Thu, Oct 22, 2015 at 10:40:45AM +0300, Dmitry Sivachenko wrote:
>>>> 1.6.1 still does not build with OpenSSL < 1.0:
>>>> 
>>>> src/ssl_sock.o: In function `ssl_sock_do_create_cert':
>>>> ssl_sock.c:(.text+0x295b): undefined reference to 
>>>> `EVP_PKEY_get_default_digest_nid'
>>>> Makefile:760: recipe for target 'haproxy' failed
>>>> 
>>>> So is it intended behavior?
>>> 
>>> It's neither intended nor not intended, it's just that I was waiting for
>>> Marcus' confirmation that the patch fixed the issue for him, and forgot
>>> about this patch while waiting for a response. Can you confirm on your
>>> side that the patch fixes the issue for you ? If so I'm willing to merge
>>> the fix immediately. I prefer to be careful because on my side openssl
>>> 0.9.8 doesn't break so I want to be sure that there isn't a second level
>>> of breakage after this one.
>>> 
>> 
>> 
>> Aha, no problem, I thought it is supposed to be fixed before 1.6.1.
>> 
>> I tried a patch in this thread 
>> (0002-BUILD-ssl-fix-build-error-introduced-in-commit-7969a.patch).
>> 
>> It does fix the build error (FreeBSD-9, OpenSSL 0.9.8q).  Though there is 
>> the following warning:
>> 
>> src/ssl_sock.c: In function 'ssl_sock_load_cert_chain_file':
>> src/ssl_sock.c:1623: warning: dereferencing type-punned pointer will break 
>> strict-aliasing rules
>> src/ssl_sock.c:1636: warning: dereferencing type-punned pointer will break 
>> strict-aliasing rules
>> src/ssl_sock.c: In function 'ssl_sock_srv_verifycbk':
>> src/ssl_sock.c:2264: warning: dereferencing type-punned pointer will break 
>> strict-aliasing rules
>> src/ssl_sock.c:2278: warning: dereferencing type-punned pointer will break 
>> strict-aliasing rules
> 
> Do you have other patches applied ? Here these line numbers only match
> closing braces so I have no idea what they correspond to :-/
> 

No, this is haproxy-1.6.1 tarball + this patch applied.

BTW, by default FreeBSD uses -fno-strict-aliasing, so this warning was most 
likely there before and I just did not see it; I suppose it is not a problem.

Also:

src/stick_table.c: In function 'smp_to_stkey':
src/stick_table.c:490: warning: dereferencing type-punned pointer will break 
strict-aliasing rules





Re: About maxconn and minconn

2015-10-08 Thread Dmitry Sivachenko

> On 7 окт. 2015 г., at 16:18, Dmitry Sivachenko <trtrmi...@gmail.com> wrote:
> 
> Hello,
> 
> I am using haproxy-1.5.14 and sometimes I see the following errors in the log:
> 
> Oct  7 08:33:03 srv1 haproxy[77565]: unix:1 [07/Oct/2015:08:33:02.428] 
> MT-front MT_RU_EN-back/ 0/1000/-1/-1/1000 503 212 - - sQ-- 
> 125/124/108/0/0 0/28 "POST /some/url HTTP/1.1"
> (many similar at one moment)
> 
> Common part in these errors is "1000" in Tw and Tt, and "sQ--" termination 
> state.
> 
> Here is the relevant part on my config (I can post more if needed):
> 
> defaults
>balance roundrobin
>maxconn 1
>timeout queue 1s
>fullconn 3000
>default-server inter 5s downinter 1s fastinter 500ms fall 3 rise 1 
> slowstart 60s maxqueue 1 minconn 5 maxconn 150
> 
> backend MT_RU_EN-back
>mode http
>timeout server 30s
>server mt1-34 mt1-34:19016 track MT-back/mt1-34 weight 38
>server mt1-35 mt1-35:19016 track MT-back/mt1-35 weight 38
>
> 
> So this error log indicates that request was sitting in the queue for timeout 
> queue==1s and his turn did not come.
> 
> In the stats web interface for MT_RU_EN-back backend I see the following 
> numbers:
> 
> Sessions: limit=3000, max=126 (for the whole backend)
> Limit=150, max=5 or 6 (for each server)


I also forgot to mention the "Queue" values from the stats web interface:
Queue max = 0 for all servers
Queue limit = 1 for all servers (as configured in default-server)
So according to the stats the servers' queues were never used.


Right under the servers list there is a "Backend" line, which has the value "29" 
in the "Queue Max" column.
What does it mean?


> 
> If I understand minconn/maxconn meaning right, each server should accept up 
> to min(150, 3000/18) connections
> 
> So according to stats the load were far from limits.
> 
> What can be the cause of such errors?
> 
> Thanks!




Re: About maxconn and minconn

2015-10-08 Thread Dmitry Sivachenko

> On 8 окт. 2015 г., at 11:03, Baptiste  wrote:
> 
> Hi Dmitry,
> 
> 
> 
> Now the question is why such situation. Simply because your queue
> management is improperly setup (either increase minconn and or
> decrease fullconn) and combined to a server which might be quite slow
> to answer leading HAProxy to use queues.
> 

What do you mean by "improperly set up"?  From the stats I provided I got the 
impression that no limits were reached that would push a request into the 
waiting queue.

Or am I wrong?
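
Thinking about it more, here is a rough sketch of how the dynamic per-server
limit might be computed when minconn and fullconn are set (this formula is my
assumption, I have not verified it in the code):

    effective maxconn = max(minconn, maxconn * backend_sessions / fullconn)
                      = max(5, 150 * 126 / 3000) = max(5, 6.3) ~= 6

which would actually match the "max=5 or 6" sessions I see per server, so maybe
the servers were effectively limited well below 150 after all.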

(I will send you full config and logs in private soon)




Re: About maxconn and minconn

2015-10-08 Thread Dmitry Sivachenko

> On 8 окт. 2015 г., at 3:51, Igor Cicimov  
> wrote:
> 
> 
> The only thing I can think of is you have left net.core.somaxconn = 128, try 
> increasing it to 4096 lets say to match your planned capacity of 3000
> 


I forgot to mention that I am using FreeBSD; I don't think it has a similar 
sysctl.


Re: About maxconn and minconn

2015-10-08 Thread Dmitry Sivachenko

> On 8 окт. 2015 г., at 15:30, Daren Sefcik <dsef...@hightechhigh.org> wrote:
> 
> How about kern.ipc.somaxconn


I have this set to 4096, and when it overflows the kernel prints a line in the 
log ("Listen queue overflow...").

I see no such errors in the logs.

Moreover, connections sitting in the socket accept queue are not seen by haproxy, 
so haproxy cannot count this time towards its timeouts.



> 
> On Thu, Oct 8, 2015 at 5:22 AM, Dmitry Sivachenko <trtrmi...@gmail.com> wrote:
> 
> > On 8 окт. 2015 г., at 3:51, Igor Cicimov <ig...@encompasscorporation.com> 
> > wrote:
> >
> >
> > The only thing I can think of is you have left net.core.somaxconn = 128, 
> > try increasing it to 4096 lets say to match your planned capacity of 3000
> >
> 
> 
> I forgot to mention that I am using FreeBSD, I don't think it has similar 
> sysctl.
> 




About maxconn and minconn

2015-10-07 Thread Dmitry Sivachenko
Hello,

I am using haproxy-1.5.14 and sometimes I see the following errors in the log:

Oct  7 08:33:03 srv1 haproxy[77565]: unix:1 [07/Oct/2015:08:33:02.428] MT-front 
MT_RU_EN-back/ 0/1000/-1/-1/1000 503 212 - - sQ-- 125/124/108/0/0 0/28 
"POST /some/url HTTP/1.1"
(many similar at one moment)

The common part in these errors is "1000" in Tw and Tt, and the "sQ--" 
termination state.

Here is the relevant part on my config (I can post more if needed):

defaults
balance roundrobin
maxconn 1
timeout queue 1s
fullconn 3000
default-server inter 5s downinter 1s fastinter 500ms fall 3 rise 1 
slowstart 60s maxqueue 1 minconn 5 maxconn 150

backend MT_RU_EN-back
mode http
timeout server 30s
server mt1-34 mt1-34:19016 track MT-back/mt1-34 weight 38
server mt1-35 mt1-35:19016 track MT-back/mt1-35 weight 38


So this error log indicates that the request was sitting in the queue for the 
whole timeout queue==1s and its turn did not come.

In the stats web interface for MT_RU_EN-back backend I see the following 
numbers:

Sessions: limit=3000, max=126 (for the whole backend)
Limit=150, max=5 or 6 (for each server)

If I understand the minconn/maxconn meaning right, each server should accept up 
to min(150, 3000/18) connections.

So according to the stats the load was far from the limits.

What can be the cause of such errors?

Thanks!


Re: Address selection policy in dual-stack environments

2015-10-01 Thread Dmitry Sivachenko

> On 30 сент. 2015 г., at 23:28, Willy Tarreau  wrote:
> 
> 
> I think that you did a good job and that you're perfectly right. I even
> checked on one of my older systems and the text was the same in 2008.
> 
> Could you please write a commit message describing the initial issue
> and copying your analysis above so that we don't lose the elements.
> Please tag it as a bug so that we backport it to 1.5 as well.



When the first parameter to getaddrinfo() is not NULL (it is always not NULL in 
str2ip()), on Linux the AI_PASSIVE value for ai_flags is ignored.
On FreeBSD, when AI_PASSIVE is specified and the hostname parameter is not NULL, 
getaddrinfo() ignores the local address selection policy, always returning the 
AAAA record.
Pass zero ai_flags to behave correctly on FreeBSD; this change should be a no-op 
for Linux.





standard.c.patch
Description: Binary data


Re: Address selection policy in dual-stack environments

2015-09-30 Thread Dmitry Sivachenko

> On 29 сент. 2015 г., at 23:06, Willy Tarreau <w...@1wt.eu> wrote:
> 
> On Tue, Sep 29, 2015 at 10:59:15PM +0300, Dmitry Sivachenko wrote:
>>> I *think* that getaddrinfo() provides this. You can try to build by
>>> adding USE_GETADDRINFO=1 to your makefile. It's not enabled by default
>>> because there are numerous bogus implementations on various systems.
>>> If it works for you it could be the best solution as other programs
>>> which work are likely using it. I don't know if it's safe to enable
>>> it by default on FreeBSD.
>>> 
>> 
>> 
>> I do have this enabled:
>> 
>> Build options :
>>  TARGET  = freebsd
>>  CPU = generic
>>  CC  = cc
>>  CFLAGS  = -O2 -pipe -O2 -fno-strict-aliasing -pipe -fstack-protector 
>> -DFREEBSD_PORTS
>>  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_STATIC_PCRE=1 
>> USE_PCRE_JIT=1
> 
> Then I have no idea how other programs retrieve the information allowing
> them to respect your system-global choices :-(


The following patch fixes the problem for me:

--- standard.c.orig 2015-09-30 13:28:52.688425000 +0300
+++ standard.c  2015-09-30 13:29:00.826968000 +0300
@@ -599,7 +599,7 @@ static struct sockaddr_storage *str2ip(c
memset(&hints, 0, sizeof(hints));
hints.ai_family = sa->ss_family ? sa->ss_family : AF_UNSPEC;
hints.ai_socktype = SOCK_DGRAM;
-   hints.ai_flags = AI_PASSIVE;
+   hints.ai_flags = 0;
hints.ai_protocol = 0;
 
if (getaddrinfo(str, NULL, &hints, &result) == 0) {



The FreeBSD manual page for getaddrinfo() is unclear about how to treat 
AI_PASSIVE when the hostname parameter is non-NULL (and this parameter is always 
non-NULL in standard.c:str2ip()).
https://www.freebsd.org/cgi/man.cgi?query=getaddrinfo=3

The Linux manual page explicitly states that "If node is not NULL, then the 
AI_PASSIVE flag is ignored."

So this change should be harmless for Linux.

What do you think?


Re: Linux or FreeBSD ?

2015-09-30 Thread Dmitry Sivachenko

> On 30 сент. 2015 г., at 16:05, Arnall  wrote:
> 
> Hi Eveyone,
> 
> just a simple question, is FreeBSD a good choice for Haproxy ?
> Our Haproxy runs under Debian for years, but the new IT want to put it under 
> FreeBSD.
> Any cons ?
> 
> Thanks.
> 



Should be roughly the same I think.


Address selection policy in dual-stack environments

2015-09-29 Thread Dmitry Sivachenko
Hello,

In the case when a machine has both A and AAAA records, there is an address 
selection policy algorithm which determines which address to use first.
https://www.freebsd.org/cgi/man.cgi?query=ip6addrctl=8

I use it in "prefer ipv4" form, to use ipv4 first when available.

All programs like ssh work as expected.

In haproxy, backends are always resolved to IPv6, even when there is an IPv4 
address.

Is it possible to make it respect the address selection policy?

Thanks.


Re: Address selection policy in dual-stack environments

2015-09-29 Thread Dmitry Sivachenko

> On 29 сент. 2015 г., at 21:26, Willy Tarreau <w...@1wt.eu> wrote:
> 
> Hi Dmitry,
> 
> On Tue, Sep 29, 2015 at 08:08:51PM +0300, Dmitry Sivachenko wrote:
>> Hello,
>> 
>> in case when machine has both A and  records, there is an address 
>> selection policy algorithm which determines which address to use first.
>> https://www.freebsd.org/cgi/man.cgi?query=ip6addrctl=8
>> 
>> I use it in "prefer ipv4" form, to use ipv4 first when available.
>> 
>> All programs like ssh work as expected.
>> 
>> In haproxy backends are resolved always to ipv6, even when there is an ipv4 
>> address.
>> 
>> Is it possible to make it to respect address selection policy?
> 
> I *think* that getaddrinfo() provides this. You can try to build by
> adding USE_GETADDRINFO=1 to your makefile. It's not enabled by default
> because there are numerous bogus implementations on various systems.
> If it works for you it could be the best solution as other programs
> which work are likely using it. I don't know if it's safe to enable
> it by default on FreeBSD.
> 


I do have this enabled:

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -pipe -O2 -fno-strict-aliasing -pipe -fstack-protector 
-DFREEBSD_PORTS
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_STATIC_PCRE=1 
USE_PCRE_JIT=1






Re: TCP_NODELAY in tcp mode

2015-09-11 Thread Dmitry Sivachenko

> On 8 сент. 2015 г., at 18:33, Willy Tarreau <w...@1wt.eu> wrote:
> 
> Hi Dmitry,
> 
> On Tue, Sep 08, 2015 at 05:25:33PM +0300, Dmitry Sivachenko wrote:
>> 
>>> On 30 ??. 2015 ??., at 22:29, Willy Tarreau <w...@1wt.eu> wrote:
>>> 
>>> On Fri, Aug 28, 2015 at 11:40:18AM +0200, Lukas Tribus wrote:
>>>>>> Ok, you may be hitting a bug. Can you provide haproxy -vv output?
>>>>>> 
>>>>> 
>>>>> 
>>>>> What do you mean? I get the following warning when trying to use this
>>>>> option in tcp backend/frontend:
>>>> 
>>>> Yes I know (I didn't realize you are using tcp mode). I don't mean the
>>>> warning is the bug, I mean the tcp mode is supposed to not cause any
>>>> delays by default, if I'm not mistaken.
>>> 
>>> You're not mistaken, tcp_nodelay is unconditional in TCP mode and MSG_MORE
>>> is not used there since we never know if more data follows. In fact there's
>>> only one case where it can happen, it's when data wrap at the end of the
>>> buffer and we want to send them together.
>>> 
>> 
>> 
>> Hello,
>> 
>> yes, you are right, the problem is not TCP_NODELAY.  I performed some 
>> testing:
>> 
>> Under low network load, passing TCP connection through haproxy involves 
>> almost zero overhead.
>> When load grows, at some point haproxy starts to slow things down.
>> 
>> In our testing scenario the application establishes long-lived TCP 
>> connection to server and sends many small requests.
>> Typical traffic at which adding haproxy in the middle causes measurable 
>> slowdown is ~30MB/sec, ~100kpps.
> 
> This is not huge, it's smaller than what can be achieved in pure HTTP mode,
> where I could achieve about 180k req/s end-to-end, which means at least 
> 180kpps
> in both directions on both sides, so 360kpps in each direction.
> 


For reference: I tracked this down to be FreeBSD-specific problem:
https://lists.freebsd.org/pipermail/freebsd-net/2015-September/043314.html

Thanks all for your help.




About CPU usage

2015-09-10 Thread Dmitry Sivachenko
Hello,

I have a haproxy-1.5.11 with a single frontend passing requests to a single 
backend in TCP mode (sample config).
The application establishes several long-lived TCP connections and sends a lot 
of small requests over them.

In my test case I have 2 simultaneous TCP connections producing about 3 MB/sec, 
20 kpps input (as shown by netstat on backend machine) and approximately the 
same output.

haproxy process consumes about 20% of CPU core (I have a machine with 2 Intel 
Xeon E5-2650 v2 @ 2.60GHz).

In my understanding such CPU usage is rather high for the relatively low load.

I tried both FreeBSD and Linux and see similar results (I am interested in 
FreeBSD though).

Anything obvious I can tune?

Thanks.


Re: TCP_NODELAY in tcp mode

2015-09-08 Thread Dmitry Sivachenko

> On 30 авг. 2015 г., at 22:29, Willy Tarreau  wrote:
> 
> On Fri, Aug 28, 2015 at 11:40:18AM +0200, Lukas Tribus wrote:
 Ok, you may be hitting a bug. Can you provide haproxy -vv output?
 
>>> 
>>> 
>>> What do you mean? I get the following warning when trying to use this
>>> option in tcp backend/frontend:
>> 
>> Yes I know (I didn't realize you are using tcp mode). I don't mean the
>> warning is the bug, I mean the tcp mode is supposed to not cause any
>> delays by default, if I'm not mistaken.
> 
> You're not mistaken, tcp_nodelay is unconditional in TCP mode and MSG_MORE
> is not used there since we never know if more data follows. In fact there's
> only one case where it can happen, it's when data wrap at the end of the
> buffer and we want to send them together.
> 


Hello,

yes, you are right, the problem is not TCP_NODELAY.  I performed some testing:

Under low network load, passing a TCP connection through haproxy involves almost 
zero overhead.
When the load grows, at some point haproxy starts to slow things down.

In our testing scenario the application establishes a long-lived TCP connection 
to the server and sends many small requests.
Typical traffic at which adding haproxy in the middle causes a measurable 
slowdown is ~30MB/sec, ~100kpps.

haproxy process CPU usage is about 15-20%.


Re: halog Makefile

2015-09-08 Thread Dmitry Sivachenko

> On 8 сент. 2015 г., at 17:26, Dmitry Sivachenko <trtrmi...@gmail.com> wrote:
> 
> Hello,
> 
> Can you please apply the following patch to contrib/halog/Makefile not to 
> override $CC (on some systems compiler is called "cc", not "gcc"):
> 
> --- Makefile.orig   2015-09-08 17:15:37.423168000 +0300
> +++ Makefile2015-09-08 17:14:57.938196000 +0300
> @@ -1,7 +1,7 @@
> EBTREE_DIR = ../../ebtree
> INCLUDE  = -I../../include -I$(EBTREE_DIR)
> 
> -CC   = gcc
> +CC   ?= gcc
> 
> # note: it is recommended to also add -fomit-frame-pointer on i386
> OPTIMIZE = -O3
> 


Oh, please forget this: the same CC is used in all makefiles, so it is simpler 
to redefine CC on the make command line.
Sorry for the noise.


Re: haproxy resolvers "nameserver: can't connect socket" (on FreeBSD)

2015-09-07 Thread Dmitry Sivachenko

> On 7 сент. 2015 г., at 9:36, Lukas Tribus  wrote:
> 
> 
> 
> Best would be to strace this, but this is Freebsd amd64,
> so that doesn't work. Can you trace the syscalls with
> the strace equivalent at least?


It fails this way:

socket(PF_INET,SOCK_DGRAM,17)= 4 (0x4)
connect(4,{ AF_INET 8.8.8.8:53 },128)ERR#22 'Invalid argument'

The 3rd argument to connect() looks wrong for IPv4:

ERRORS
 The connect() system call fails if:

 [EINVAL]   The namelen argument is not a valid length for the
address family.




Re: haproxy resolvers "nameserver: can't connect socket" (on FreeBSD)

2015-09-07 Thread Dmitry Sivachenko

> On 7 сент. 2015 г., at 1:46, PiBa-NL  wrote:
> 
> Hi guys,
> 
> Hoping someone can shed some light on what i might be doing wrong?
> Or is there something in FreeBSD that might be causing the trouble with the 
> new resolvers options?
> 
> Thanks in advance.
> PiBa-NL
> 
> haproxy -f /var/haproxy.cfg -d
> [ALERT] 248/222758 (22942) : SSLv3 support requested but unavailable.
> Note: setting global.maxconn to 2000.
> Available polling systems :
> kqueue : pref=300,  test result OK
>   poll : pref=200,  test result OK
> select : pref=150,  test result FAILED


Also interesting is why you got "test result FAILED" for select here, while in 
your haproxy -vv output below this test result is OK.


> Total: 3 (2 usable), will use kqueue.
> Using kqueue() as the polling mechanism.
> [ALERT] 248/222808 (22942) : Starting [globalresolvers/googleA] nameserver: 
> can't connect socket.
> 
> 
> defaults
>modehttp
>timeout connect3
>timeout server3
>timeout client3
> 
> resolvers globalresolvers
>nameserver googleA 8.8.8.8:53
>resolve_retries   3
>timeout retry 1s
>hold valid   10s
> 
> listen www
>bind 0.0.0.0:80
>logglobal
>servergooglesite www.google.com:80 check inter 1000 resolvers 
> globalresolvers
> 
> 
> # uname -a
> FreeBSD OPNsense.localdomain 10.1-RELEASE-p18 FreeBSD 10.1-RELEASE-p18 #0 
> 71275cd(stable/15.7): Sun Aug 23 20:32:26 CEST 2015 
> root@sensey64:/usr/obj/usr/src/sys/SMP  amd64
> 
> # haproxy -vv
> [ALERT] 248/221747 (72984) : SSLv3 support requested but unavailable.
> HA-Proxy version 1.6-dev4-b7ce424 2015/09/03
> Copyright 2000-2015 Willy Tarreau 
> 
> Build options :
>  TARGET  = freebsd
>  CPU = generic
>  CC  = cc
>  CFLAGS  = -O2 -pipe -fstack-protector -fno-strict-aliasing -DFREEBSD_PORTS
>  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_LUA=1 
> USE_STATIC_PCRE=1 USE_PCRE_JIT=1
> 
> Default settings :
>  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200
> 
> Encrypted password support via crypt(3): yes
> Built with zlib version : 1.2.8
> Compression algorithms supported : identity("identity"), deflate("deflate"), 
> raw-deflate("deflate"), gzip("gzip")
> Built with OpenSSL version : OpenSSL 1.0.2d 9 Jul 2015
> Running on OpenSSL version : OpenSSL 1.0.2d 9 Jul 2015
> OpenSSL library supports TLS extensions : yes
> OpenSSL library supports SNI : yes
> OpenSSL library supports prefer-server-ciphers : yes
> Built with PCRE version : 8.37 2015-04-28
> PCRE library supports JIT : yes
> Built with Lua version : Lua 5.3.0
> Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY
> 
> Available polling systems :
> kqueue : pref=300,  test result OK
>   poll : pref=200,  test result OK
> select : pref=150,  test result OK
> Total: 3 (3 usable), will use kqueue.
> 
> 




Re: TCP_NODELAY in tcp mode

2015-08-28 Thread Dmitry Sivachenko

 On 28 авг. 2015 г., at 12:12, Lukas Tribus luky...@hotmail.com wrote:
 
 Hello,
 
 The flag TCP_NODELAY is unconditionally set on each TCP (ipv4/ipv6)
 connections between haproxy and the server, and beetwen the client and
 haproxy.
 
 That may be true, however HAProxy uses MSG_MORE to disable and
 enable Nagle based on the individual situation.
 
 Use option http-no-delay [1] to disable Nagle unconditionally.


This option requires HTTP mode, but I must use TCP mode because our protocol is 
not HTTP (some custom protocol over TCP)


 
 
 
 Regards,
 
 Lukas
 
 
 [1] 
 http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4-option%20http-no-delay





Re: TCP_NODELAY in tcp mode

2015-08-28 Thread Dmitry Sivachenko

 On 28 авг. 2015 г., at 12:18, Lukas Tribus luky...@hotmail.com wrote:
 
 Use option http-no-delay [1] to disable Nagle unconditionally.
 
 
 This option requires HTTP mode, but I must use TCP mode because our
 protocol is not HTTP (some custom protocol over TCP)
 
 Ok, you may be hitting a bug. Can you provide haproxy -vv output?
 


What do you mean?  I get the following warning when trying to use this option 
in tcp backend/frontend:

[WARNING] 239/121424 (71492) : config : 'option http-no-delay' ignored for 
frontend 'shard0-front' as it requires HTTP mode.
[WARNING] 239/121424 (71492) : config : 'option http-no-delay' ignored for 
backend 'shard0-back' as it requires HTTP mode.

So it is clear that this option is intended for HTTP mode only.  For reference:

HA-Proxy version 1.5.11 2015/01/31
Copyright 2000-2015 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -pipe -O2 -fno-strict-aliasing -pipe -fstack-protector 
-DFREEBSD_PORTS
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_STATIC_PCRE=1 
USE_PCRE_JIT=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1l-freebsd 15 Jan 2015
Running on OpenSSL version : OpenSSL 1.0.1l-freebsd 15 Jan 2015
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.35 2014-04-04
PCRE library supports JIT : yes
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY

Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.





TCP_NODELAY in tcp mode

2015-08-27 Thread Dmitry Sivachenko
Hello,

we have a client-server application which establishes a long-lived TCP 
connection and generates a lot of small request-response packets which need to 
be processed very fast.
Setting TCP_NODELAY on the sockets speeds things up by about 3 times.

Now I want to put haproxy in the middle so it balances traffic between several 
servers.

Something like:

defaults
 mode tcp

frontend shard0-front
 bind *:9000
 default_backend shard0-back

backend shard0-back
 server srv1 srv1:3456 check
 server srv2 srv2:3456 check

In such a configuration the application slows down significantly.  I suspect 
that setting the TCP_NODELAY option on the frontend's and backend's sockets 
would help, as it did without haproxy involved.  Is there any parameter which 
allows me to set the TCP_NODELAY option?

Thanks!


Re: Question on distribution not according to backend weight

2015-05-08 Thread Dmitry Sivachenko

 On 8 мая 2015 г., at 4:54, Frank Schubert f.schub...@gmail.com wrote:
 
 Hi,
 
 first of all let me thank you for an amazing piece of software. I like 
 working with haproxy a lot!
 
 My problem: The weight setting for a backend seems to be ignored when the max 
 concurrent session setting is reached. I was expecting the connection to get 
 queued for this backend but it seems to flip over to the host that has 
 connections available.
 
 I simplified my setup to 2 backend smtp servers, one with weight 100, the 
 other with weight 1. The max connection setting is set to 2. I'm opening 
 multiple SMTP connections simultaneously to this haproxy server. Attached 
 screenshot from haproxy stats shows that backend with weight 1 gets way too 
 many sessions.
 
 Increasing max concurrent sessions to 5 or more seem to prevent this 
 behavior, but I'm not totally sure about this.
 
 I would like to have only a small fraction (100:1) of requests go to the 
 backend with the lower weight and wonder how to do this correctly. It's more 
 important to me to have a defined distribution of connections going to 
 backends than answering requests as quickly as possible regardless of what 
 backend is used.
 
 haproxy-distribution-ignores-weight.jpg
 ​


This screenshot also illustrates the incorrect Max Request Rate calculation I 
reported 2 years ago:
http://www.serverphorums.com/read.php?10,623596




Re: building haproxy with lua support

2015-03-17 Thread Dmitry Sivachenko

 On 17 марта 2015 г., at 13:17, Thierry FOURNIER tfourn...@haproxy.com wrote:
 
 On Tue, 17 Mar 2015 08:38:23 +0100
 Baptiste bed...@gmail.com wrote:
 
 On Tue, Mar 17, 2015 at 1:51 AM, Joe Williams williams@gmail.com wrote:
 List,
 
 I seem to be running into issues building haproxy with lua support using
 HEAD. Any thoughts?
 
 joe@ubuntu:~/haproxy$ make DEBUG=-ggdb CFLAGS=-O0 TARGET=linux2628
 USE_LUA=yes LUA_LIB=/opt/lua53/lib/ LUA_INC=/opt/lua53/include/ LDFLAGS=-ldl
 snip
 /opt/lua53/lib//liblua.a(loadlib.o): In function `lookforfunc':
 loadlib.c:(.text+0x502): undefined reference to `dlsym'
 loadlib.c:(.text+0x549): undefined reference to `dlerror'
 loadlib.c:(.text+0x576): undefined reference to `dlopen'
 loadlib.c:(.text+0x5ed): undefined reference to `dlerror'
 /opt/lua53/lib//liblua.a(loadlib.o): In function `gctm':
 loadlib.c:(.text+0x781): undefined reference to `dlclose'
 collect2: error: ld returned 1 exit status
 make: *** [haproxy] Error 1
 
 joe@ubuntu:~/haproxy$ /opt/lua53/bin/lua -v
 Lua 5.3.0  Copyright (C) 1994-2015 Lua.org, PUC-Rio
 
 Thanks!
 
 -Joe
 
 
 Thank you,
 
 In fact I build with the SSL activated, and the libssl is already
 linked with thz dl library, so I don't sew this compilation error.
 
 It is fixed, the patch is in attachment.


This patch will break FreeBSD (and other OSes) which do not have libdl.


Re: Balancing requests and backup servers

2015-02-27 Thread Dmitry Sivachenko

 On 27 февр. 2015 г., at 2:56, Baptiste bed...@gmail.com wrote:
 
 On Thu, Feb 26, 2015 at 3:58 PM, Dmitry Sivachenko trtrmi...@gmail.com 
 wrote:
 Hello!
 
 Given the following configuration
 
 backend BC
 option allbackups
 server s1 maxconn 30 check
 server s2 maxconn 30 check
 server s3 maxconn 30 check
 server b1 maxconn 30 check backup
 server b2 maxconn 30 check backup
 
 imagine that s1, s2 and s3 have 30 active sessions and (tcp) checks succeed.
 
 
 Hi Dmitry.
 
 Let me answer inline:
 
 1) subsequent requests will be balanced between b1 and b2 because s1, s2 and 
 s3 reached it's maxconn
 
 nope, they'll be queued on the backend until one of the server has a free slot
 b1 and b2 will be used when ALL s1, s2 and s3 will be operationnaly DOWN.


Okay, then how can I achieve the described setup?
I want to balance requests between s1, s2, s3 until they have fewer than N 
active sessions and route extra requests to b1 and b2.



 
 2) nbsrv(BC) will be still equal to 3 because checks for s1, s2 and s3 still 
 succeed
 
 nope, nbsrv is 5, since b1 and b2 should be counted as well.
 

In fact backup servers do NOT count in nbsrv(); I am not sure if it is a bug or 
a feature.




Re: Balancing requests and backup servers

2015-02-27 Thread Dmitry Sivachenko

 On 27 февр. 2015 г., at 11:52, Baptiste bed...@gmail.com wrote:
 
 On Fri, Feb 27, 2015 at 9:02 AM, Dmitry Sivachenko trtrmi...@gmail.com 
 wrote:
 
 On 27 февр. 2015 г., at 2:56, Baptiste bed...@gmail.com wrote:
 
 On Thu, Feb 26, 2015 at 3:58 PM, Dmitry Sivachenko trtrmi...@gmail.com 
 wrote:
 Hello!
 
 Given the following configuration
 
 backend BC
 option allbackups
 server s1 maxconn 30 check
 server s2 maxconn 30 check
 server s3 maxconn 30 check
 server b1 maxconn 30 check backup
 server b2 maxconn 30 check backup
 
 imagine that s1, s2 and s3 have 30 active sessions and (tcp) checks 
 succeed.
 
 
 Hi Dmitry.
 
 Let me answer inline:
 
 1) subsequent requests will be balanced between b1 and b2 because s1, s2 
 and s3 reached it's maxconn
 
 nope, they'll be queued on the backend until one of the server has a free 
 slot
 b1 and b2 will be used when ALL s1, s2 and s3 will be operationnaly DOWN.
 
 
 Okay, then how can I achieve the described setup?
 I want to balance requests between s1, s2, s3 until they have less than N 
 active sessions and route extra requests to b1 and b2.
 
 
 Two solutions:
 
 - use balance first load-balancing algorithm and remove the backup keyword
 - create 2 backends, one with 3 servers, one with two, use the 'queue'
 fetch to get the number of queued request on backend1 and route to
 backend 2 if the number is greater than 0.
 


BTW, what if I have maxqueue 1 in default-server?
If the queue is full for all servers, will that backend use the backup servers?
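
In the meantime, here is a rough sketch of the second suggestion as I understand
it (two separate backends and a frontend switching on the queue() fetch; the
names and addresses are made up):

frontend F
    use_backend BC_spill if { queue(BC_main) gt 0 }
    default_backend BC_main

backend BC_main
    server s1 s1:8080 maxconn 30 check
    server s2 s2:8080 maxconn 30 check
    server s3 s3:8080 maxconn 30 check

backend BC_spill
    server b1 b1:8080 maxconn 30 check
    server b2 b2:8080 maxconn 30 check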


Re: Strange memory usage

2014-10-13 Thread Dmitry Sivachenko

On 13 окт. 2014 г., at 14:37, Lukas Tribus luky...@hotmail.com wrote:

 Hi Dmitry,
 
 
 
 I am using haproxy-1.5.4 on FreeBSD-10.
 
 Upon startup, it looks like this:
 PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
 8459 www 1 37 0 86376K 28824K CPU16 16 0:16 26.56% haproxy
 
 (about 80MB RES)
 
 Its 80MB SIZE and 28M RES here.
 
 
 
 PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
 82720 www 1 36 0 244M 108M CPU29 29 29.2H 26.95% haproxy
 
 (244MB RES).
 
 Its 244M SIZE and 108M RES. So 108M of real RAM used here.
 
 


Yes, I am sorry, I meant SIZE.


 
 When I do reload, I see that old process is in swread state for some time, 
 and
 swap usage decreases for about 150MB when old process finishes.
 
 Does it mean memory leak is somewhere? Any additional information I could
 provide will be useful?
 
 Share you configuration, especially maxconn related stuff, the output of

defaults
log global
modetcp
balance roundrobin
maxconn 1
option  abortonclose
option  allbackups
#option  dontlog-normal
#option  dontlognull
option  redispatch
option  tcplog
#option  log-separate-errors
option socket-stats
retries 4
timeout check 500ms
timeout client 15s
timeout connect 100ms
timeout http-keep-alive 3s
timeout http-request 5s
timeout queue 1s
timeout server 15s
fullconn 3000
default-server inter 5s downinter 1s fastinter 500ms fall 3 rise 1 slowstart
 60s maxqueue 1 minconn 5 maxconn 150

I can send you full config in private e-mail if necessary.


 haproxy -vv


HA-Proxy version 1.5.4 2014/09/02
Copyright 2000-2014 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -pipe -pipe -g -DFREEBSD_PORTS
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_STATIC_PCRE=1 USE_PCRE_JIT=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1i-freebsd 6 Aug 2014
Running on OpenSSL version : OpenSSL 1.0.1i-freebsd 6 Aug 2014
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.33 2013-05-28
PCRE library supports JIT : yes
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY

Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.

  a


 nd possibly show info;show stat;show pools
 from the unix admin socket.
 

show info:

Name: HAProxy
Version: 1.5.4
Release_date: 2014/09/02
Nbproc: 1
Process_num: 1
Pid: 32459
Uptime: 4d 6h09m46s
Uptime_sec: 367786
Memmax_MB: 0
Ulimit-n: 131218
Maxsock: 131218
Maxconn: 65500
Hard_maxconn: 65500
CurrConns: 508
CumConns: 517986272
CumReq: 602369265
MaxSslConns: 0
CurrSslConns: 16
CumSslConns: 452700
Maxpipes: 0
PipesUsed: 0
PipesFree: 0
ConnRate: 2611
ConnRateLimit: 0
MaxConnRate: 3965
SessRate: 2611
SessRateLimit: 0
MaxSessRate: 3965
SslRate: 4
SslRateLimit: 0
MaxSslRate: 33
SslFrontendKeyRate: 2
SslFrontendMaxKeyRate: 34
SslFrontendSessionReuse_pct: 50
SslBackendKeyRate: 0
SslBackendMaxKeyRate: 0
SslCacheLookups: 74867
SslCacheMisses: 60826
CompressBpsIn: 0
CompressBpsOut: 0
CompressBpsRateLim: 0
ZlibMemUsage: 0
MaxZlibMemUsage: 0
Tasks: 1550
Run_queue: 1
Idle_pct: 55


show pools on freshly started process:

Dumping pools usage. Use SIGQUIT to flush them.
  - Pool pipe (32 bytes) : 19 allocated (608 bytes), 5 used, 3 users [SHARED]
  - Pool capture (64 bytes) : 0 allocated (0 bytes), 0 used, 1 users [SHARED]
  - Pool channel (80 bytes) : 766 allocated (61280 bytes), 672 used, 1 users 
[SHARED]
  - Pool task (112 bytes) : 1426 allocated (159712 bytes), 1378 used, 1 users 
[SHARED]
  - Pool uniqueid (128 bytes) : 0 allocated (0 bytes), 0 used, 1 users [SHARED]
  - Pool connection (320 bytes) : 424 allocated (135680 bytes), 360 used, 1 
users [SHARED]
  - Pool hdr_idx (416 bytes) : 383 allocated (159328 bytes), 335 used, 1 users 
[SHARED]
  - Pool session (864 bytes) : 385 allocated (332640 bytes), 337 used, 1 users 
[SHARED]
  - Pool requri (1024 bytes) : 51 allocated (52224 bytes), 22 used, 1 users 
[SHARED]
  - Pool buffer (32800 bytes) : 766 allocated (25124800 bytes), 672 used, 1 
users [SHARED]
Total: 10 pools, 26026272 bytes allocated, 22818112 used.


show pools after few days of uptime:
Dumping pools usage. Use SIGQUIT to flush them.
  - Pool pipe (32 bytes) : 961 allocated (30752 bytes), 5 used, 3 users [SHARED]
  - Pool capture (64 bytes) : 0 allocated (0 bytes), 0 used, 1 users [SHARED]
  - Pool channel (80 bytes) : 4136 allocated (330880 bytes), 648 used, 1 users 
[SHARED]
  - Pool task (112 bytes) : 3109 allocated (348208 bytes), 1367 

Strange memory usage

2014-10-12 Thread Dmitry Sivachenko
Hello!

I am using haproxy-1.5.4 on FreeBSD-10.

Upon startup, it looks like this:
  PID USERNAME  THR PRI NICE   SIZERES STATE   C   TIMEWCPU COMMAND
 8459 www 1  370 86376K 28824K CPU16  16   0:16  26.56% haproxy

(about 80MB RES)

After few days of running, it looks like this:

  PID USERNAME  THR PRI NICE   SIZERES STATE   C   TIMEWCPU COMMAND
82720 www 1  360   244M   108M CPU29  29  29.2H  26.95% haproxy

(244MB RES).  

When I do a reload, I see that the old process is in the swread state for some 
time, and swap usage decreases by about 150MB when the old process finishes.

Does it mean there is a memory leak somewhere?  Is there any additional 
information I could provide that would be useful?

Thanks!


number of usable servers

2014-08-20 Thread Dmitry Sivachenko
Hello!

nbsrv() returns the number of usable servers for the backend *excluding* servers 
marked as backup.

Is there any way to get the number of usable servers for the backend 
*including* backup ones?

Thanks!


haproxy dumps core on reload

2014-08-02 Thread Dmitry Sivachenko
Hello,

I am running haproxy-1.5.2 on FreeBSD-10.  After some time of running, when I 
try to reload it (haproxy -sf oldpid) the old process dumps core on exit.
I experienced that with -dev21 as well and ignored it in the hope that it was 
due to the old snapshot.  This happens only after some time of running; if I 
reload it immediately after startup it does not crash.

I can send config file if necessary.

Core was generated by `haproxy'.
Program terminated with signal 11, Segmentation fault.
Reading symbols from /lib/libcrypt.so.5...done.
Loaded symbols for /lib/libcrypt.so.5
Reading symbols from /lib/libz.so.6...done.
Loaded symbols for /lib/libz.so.6
Reading symbols from /usr/lib/libssl.so.7...done.
Loaded symbols for /usr/lib/libssl.so.7
Reading symbols from /lib/libcrypto.so.7...done.
Loaded symbols for /lib/libcrypto.so.7
Reading symbols from /lib/libc.so.7...done.
Loaded symbols for /lib/libc.so.7
Reading symbols from /libexec/ld-elf.so.1...done.
Loaded symbols for /libexec/ld-elf.so.1
#0  0x004652b8 in process_session (t=0x813762860) at src/session.c:658
658 bref-ref = s-list.n;
(gdb) bt
#0  0x004652b8 in process_session (t=0x813762860) at src/session.c:658
#1  0x004125e1 in process_runnable_tasks (next=0x7fffe9b4)
at src/task.c:237
#2  0x004087ee in run_poll_loop () at src/haproxy.c:1304
#3  0x0040903d in main (argc=value optimized out, 
argv=value optimized out) at src/haproxy.c:1638


Feature request: redispatch-on-5xx

2014-06-23 Thread Dmitry Sivachenko
Hello!

One more thing which can be very useful in some setups: if a backend server 
returns an HTTP 5xx status code, it would be nice to be able to retry the same 
request on another server before reporting an error to the client (when you know 
for sure the same request can be sent multiple times without side effects).
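
Something along these lines, in purely hypothetical syntax just to illustrate
the idea (as far as I know no such keyword exists today):

backend B
    option redispatch
    retries 3
    # hypothetical: also treat an HTTP 5xx response as a retryable failure
    retry-on-response 5xx
    server s1 s1:8080 check
    server s2 s2:8080 check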

Is it possible to make some configuration switch to allow such retries?

Thanks.


keep-alive on server side

2014-06-20 Thread Dmitry Sivachenko
Hello!

Is it possible to use HTTP keep-alive between haproxy and the backend even if 
the client does not use it?
The client closes its connection, but haproxy still maintains an open connection 
to the backend (based on some timeout) and re-uses it when a new request arrives.

It would save some time on new connection setup between haproxy and the backend 
and can be useful in the case when the server responds very fast (and the 
connection rate is high).

Thanks.


Re: Some thoughts about redispatch

2014-06-16 Thread Dmitry Sivachenko

On 13 июня 2014 г., at 20:00, Willy Tarreau w...@1wt.eu wrote:

 
 Done! I've just pushed this. In the end I preferred not to apply this
 principle to leastconn since there are some situations where leastconn
 farms can be highly unbalanced (after a server restart) so killing the
 delay could result in hammering the new fresh server harder.
 
 So this is what happens now :
 
 - if we're on a round-robin farm with more than 1 active server and the
   connection is not persistent, then we redispatch upon the first retry
   since we don't care at all about the server that we randomly picked.
 
 - when redispatching, we kill the delay if the farm is in RR with more
   than one active server.
 
 - the delay is always bound by the connect timeout so that sub-second
   timeouts will lead to shorter retries even for other cases.
 
 I just realized during my tests that this way you can have a retries
 value set to the number of servers and scan your whole farm looking for
 a server. Yeah this is ugly :-)
 


Hello,

after some tests it looks fine.
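
For reference, the part of my configuration that this behaviour applies to looks
roughly like this (trimmed to the relevant directives):

defaults
    balance roundrobin
    option redispatch
    retries 4
    timeout connect 100ms
    timeout queue 1s

so with round-robin backends and several active servers the retry should now go
to a different server without the extra delay.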

Thank you very much for implementing this!




Re: [ANNOUNCE] haproxy-1.5-dev26 (and hopefully last)

2014-05-31 Thread Dmitry Sivachenko

On 29 мая 2014 г., at 3:04, Willy Tarreau w...@1wt.eu wrote:
 
 Yes it does but it doesn't change its verdict. The test is really bogus I
 think :
 
   const char fmt[]   = blah; printf(fmt);  = OK
   const char *fmt= blah; printf(fmt);  = KO
   const char * const fmt = blah; printf(fmt);  = KO
   const char fmt[][5] = { blah }; printf(fmt[0]);  = KO
 
 This is the difference between the first one and the last one which makes
 me say the test is bogus, because it's exactly the same.
 
 And worst thing is that I guess they added this check for people who
 mistakenly use printf(string). And as usual, they don't provide an easy
 way to say don't worry it's not an error, it's on purpose... This
 compiler is becoming more and more irritating, soon we'll have more
 lines of workarounds than useful lines of code.
 
 Worse in fact, the workaround is simple, it consists in removing the
 __attribute__((printf)) on the declaration line of chunk_appendf(),
 and thus *really* opening the door to real scary bugs.
 
 OK so I'll add a dummy argument to shut it up :-(



Just for reference: clang also warns here:

cc -Iinclude -Iebtree -Wall -O2 -pipe -fno-strict-aliasing   -DFREEBSD_PORTS
-DTPROXY -DCONFIG_HAP_CRYPT -DUSE_GETADDRINFO -DUSE_ZLIB  -DENABLE_POLL 
-DENABLE_KQUEUE -DUSE_OPENSSL   -DCONFIG_HAPROXY_VERSION=\1.5-dev26-2e85840\ 
-DCONFIG_HAPROXY_DATE=\2014/05/28\ -c -o src/dumpstats.o src/dumpstats.c
src/dumpstats.c:3059:26: warning: format string is not a string literal
  (potentially insecure) [-Wformat-security]
chunk_appendf(&trash, srv_hlt_st[1]); /* DOWN (agent) */
  ^


FreeBSD clang version 3.4.1 (tags/RELEASE_34/dot1-final 208032) 20140512
Target: x86_64-unknown-freebsd10.0
Thread model: posix




Re: Some thoughts about redispatch

2014-05-28 Thread Dmitry Sivachenko
On 28 мая 2014 г., at 11:13, Willy Tarreau w...@1wt.eu wrote:

 Hi Dmitry,
 
 So worked a bit on this subject. It's far from being obvious. The problem
 is that at the moment where we decide of the 1s delay before a retry, we
 don't know if we'll end up on the same server or not.
 
 Thus I'm thinking about this :
 
  - if the connection is persistent (cookie, etc...), apply the current 
retry mechanism, as we absolutely don't want to break application
sessions ;


I agree.


 
  - otherwise, we redispatch starting on the first retry as you suggest. But
then we have two possibilities for the delay before reconnecting. If the
server farm has more than 1 server and the balance algorithm is not a hash
nor first, then we don't apply the delay because we expect to land on a
different server with a high probability. Otherwise we keep the delay
because we're almost certain to land on the same server.
 
 This way it continues to silently mask occasional server restarts and is
 optimally efficient in stateless farms when there's a possibility to quickly
 pick another server. Do you see any other point that needs specific care ?



I would export that magic 1 second as a configuration parameter (with 0 
meaning no delay).
After all, we could fail to connect not only because of server restart, but 
also because a switch or a router dropped a packet.
Other than that, sounds good.

Thanks!


Re: Some thoughts about redispatch

2014-05-28 Thread Dmitry Sivachenko

On 28 May 2014, at 12:49, Willy Tarreau w...@1wt.eu wrote:

 On Wed, May 28, 2014 at 12:35:17PM +0400, Dmitry Sivachenko wrote:
 - otherwise, we redispatch starting on the first retry as you suggest. But
   then we have two possibilities for the delay before reconnecting. If the
   server farm has more than 1 server and the balance algorithm is not a hash
   nor first, then we don't apply the delay because we expect to land on a
   different server with a high probability. Otherwise we keep the delay
   because we're almost certain to land on the same server.
 
 This way it continues to silently mask occasional server restarts and is
 optimally efficient in stateless farms when there's a possibility to quickly
 pick another server. Do you see any other point that needs specific care ?
 
 
 
 I would export that magic 1 second as a configuration parameter (with 0
 meaning no delay).
 
 I'm not sure we need to add another tunable just for this.


Okay.


 
 After all, we could fail to connect not only because of server restart, but
 also because a switch or a router dropped a packet.
 
 No, because a dropped packet is already handled by the TCP stack. Here the
 haproxy retry is really about retrying after an explicit failure (server
 responded that the port was closed). Also, the typical TCP retransmit
 interval for dropped packets in the network stack is 3s, so we're already
 3 times as fast as the TCP stack. I don't think it's reasonable to always
 kill this delay when retrying on the same server. We used to have that in
 the past and people were complaining that we were hammering servers for no
 reason, since there's little chance that a server which is not started will
 suddenly be ready in the next 100 microseconds.
 

I mean that with timeout connect=100ms (a good value for a local network IMO), we 
are far below the TCP retransmit timeout, and if a switch drops a packet (it drops 
randomly), it can transmit the next one even if we retry immediately.

If we have a tunable (let's make the default 1 second), people will have more 
freedom in some situations.


Re: Some thoughts about redispatch

2014-05-28 Thread Dmitry Sivachenko
On 28 May 2014, at 13:06, Willy Tarreau w...@1wt.eu wrote:

 
 OK but then you make an interesting point with your very low timeout connect.
 What about using the min of timeout connect and 1s then ? Thus you can simply
 use your lower timeout connect as this new timeout. Would that be OK for you ?
 


Sounds reasonable (provided we are talking only about redispatch to the same 
server, not to the other one).


Re: Some thoughts about redispatch

2014-05-28 Thread Dmitry Sivachenko

On 28 May 2014, at 11:13, Willy Tarreau w...@1wt.eu wrote:

 
  - otherwise, we redispatch starting on the first retry as you suggest. But
then we have two possibilities for the delay before reconnecting. If the
server farm has more than 1 server and the balance algorithm is not a hash
nor first, then we don't apply the delay because we expect to land on a
different server with a high probability. 


BTW, I thought that with option redispatch we would *always* retry on another 
server (if there are several servers configured in the backend and the balance 
algorithm is leastconn or round-robin).
Why do you say "with a high probability" here?


Re: Some thoughts about redispatch

2014-05-26 Thread Dmitry Sivachenko
On 28 Nov 2012, at 18:10, Dmitry Sivachenko trtrmi...@gmail.com wrote:

 Hello!
 
 If haproxy can't send a request to the backend server, it will retry the same
 backend 'retries' times, waiting 1 second between retries, and if 'option
 redispatch' is used, the last retry will go to another backend.
 
 There is a (I think very common) usage scenario when
 1) all requests are independent of each other and all backends are equal, so
 there is no need to try to route requests to the same backend (if it failed,
 we will try the dead one again and again while another backend could serve the
 request right now)
 
 2) there is a response-time policy for requests and a 1 second wait time is just
 too long (all requests are handled faster than 500ms and the client software will
 not wait any longer).
 
 I propose to introduce new parameters in the config file:
 1) redispatch always: when set, haproxy will always retry a different backend
 after the connection to the first one fails.
 2) Allow overriding the 1 second wait time between redispatches in the config file
 (including the value of 0 == immediate).
 
 Right now I use the attached patch to overcome these restrictions.  It is an ugly
 hack, but if you could include it into the distribution in a better form, with
 tuning via the config file, I think everyone would benefit from it.
 
 Thanks.
 redispatch.txt



On 26 May 2014, at 18:21, Willy Tarreau w...@1wt.eu wrote:
 I think it definitely makes some sense. Probably not in its exact form but
 as something to work on. In fact, I think we should only apply the 1s retry
 delay when remaining on the same server, and avoid as much as possible to
 remain on the same server. For hashes or when there's a single server, we
 have no choice, but when doing round robin for example, we can pick another
 one. This is especially true for static servers or ad servers for example
 where fastest response time is preferred over sticking to the same server.
 


Yes, that was exactly my point.  In many situations it is better to ask another 
server immediately to get the fastest response rather than trying to stick to the 
same server as much as possible.


 
 Thanks,
 Willy



Re: Some thoughts about redispatch

2014-05-11 Thread Dmitry Sivachenko
Hello,

thanks for your efforts on stabilizing the -dev version, it looks rather solid now.

Let me try to revive an old topic in the hope of getting rid of my old local patch 
I must use for production builds.

Thanks :)



On 28 Nov 2012, at 18:10, Dmitry Sivachenko trtrmi...@gmail.com wrote:

 Hello!
 
 If haproxy can't send a request to the backend server, it will retry the same
 backend 'retries' times, waiting 1 second between retries, and if 'option
 redispatch' is used, the last retry will go to another backend.
 
 There is a (I think very common) usage scenario when
 1) all requests are independent of each other and all backends are equal, so
 there is no need to try to route requests to the same backend (if it failed,
 we will try the dead one again and again while another backend could serve the
 request right now)
 
 2) there is a response-time policy for requests and a 1 second wait time is just
 too long (all requests are handled faster than 500ms and the client software will
 not wait any longer).
 
 I propose to introduce new parameters in the config file:
 1) redispatch always: when set, haproxy will always retry a different backend
 after the connection to the first one fails.
 2) Allow overriding the 1 second wait time between redispatches in the config file
 (including the value of 0 == immediate).
 
 Right now I use the attached patch to overcome these restrictions.  It is an ugly
 hack, but if you could include it into the distribution in a better form, with
 tuning via the config file, I think everyone would benefit from it.
 
 Thanks.
 redispatch.txt




Re: Some thoughts about redispatch

2014-05-11 Thread Dmitry Sivachenko
Looks like the attachment got stripped, attaching it now for real so it is easy to 
understand what I am talking about.

--- session.c.orig  2012-11-22 04:11:33.0 +0400
+++ session.c   2012-11-22 16:15:04.0 +0400
@@ -877,7 +877,7 @@ static int sess_update_st_cer(struct ses
 	 * bit to ignore any persistence cookie. We won't count a retry nor a
 	 * redispatch yet, because this will depend on what server is selected.
 	 */
-	if (objt_server(s->target) && si->conn_retries == 0 &&
+	if (objt_server(s->target) &&
 	    s->be->options & PR_O_REDISP && !(s->flags & SN_FORCE_PRST)) {
 		sess_change_server(s, NULL);
 		if (may_dequeue_tasks(objt_server(s->target), s->be))
@@ -903,7 +903,7 @@ static int sess_update_st_cer(struct ses
 		si->err_type = SI_ET_CONN_ERR;
 
 		si->state = SI_ST_TAR;
-		si->exp = tick_add(now_ms, MS_TO_TICKS(1000));
+		si->exp = tick_add(now_ms, MS_TO_TICKS(0));
 		return 0;
 	}
 	return 0;

On 12 May 2014, at 0:31, Dmitry Sivachenko trtrmi...@gmail.com wrote:

 Hello,
 
 thanks for your efforts on stabilizing the -dev version, it looks rather solid 
 now.
 
 Let me try to revive an old topic in the hope of getting rid of my old local patch 
 I must use for production builds.
 
 Thanks :)
 
 
 
 On 28 Nov 2012, at 18:10, Dmitry Sivachenko trtrmi...@gmail.com wrote:
 
 Hello!
 
 If haproxy can't send a request to the backend server, it will retry the same
 backend 'retries' times, waiting 1 second between retries, and if 'option
 redispatch' is used, the last retry will go to another backend.
 
 There is a (I think very common) usage scenario when
 1) all requests are independent of each other and all backends are equal, so
 there is no need to try to route requests to the same backend (if it failed,
 we will try the dead one again and again while another backend could serve the
 request right now)
 
 2) there is a response-time policy for requests and a 1 second wait time is just
 too long (all requests are handled faster than 500ms and the client software will
 not wait any longer).
 
 I propose to introduce new parameters in the config file:
 1) redispatch always: when set, haproxy will always retry a different backend
 after the connection to the first one fails.
 2) Allow overriding the 1 second wait time between redispatches in the config file
 (including the value of 0 == immediate).
 
 Right now I use the attached patch to overcome these restrictions.  It is an ugly
 hack, but if you could include it into the distribution in a better form, with
 tuning via the config file, I think everyone would benefit from it.
 
 Thanks.
 redispatch.txt
 



Re: haproxy 1.5-dev24: 100% CPU Load or Core Dumped

2014-05-10 Thread Dmitry Sivachenko

On 07 May 2014, at 18:24, Emeric Brun eb...@exceliance.fr wrote:
 
 Hi All,
 
 I suspect FreeBSD does not support process-shared mutexes (supported in both 
 Linux and Solaris).
 
 I've just made a patch to add error checks on mutex init, and to fall back on 
 the private SSL session cache in that case.


Hello,

BTW, nginx does support a shared SSL session cache on FreeBSD (probably by other 
means).
Maybe it is worth borrowing their method rather than falling back to a private 
cache?
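
For what it's worth, the check Emeric describes can be illustrated with a small
standalone C sketch: try to initialise a process-shared mutex and detect at run
time whether the platform supports it, so the code can fall back to a private
cache on failure. This is only an illustration, not the actual haproxy patch.

#include <pthread.h>
#include <stdio.h>

/* Returns 1 if a process-shared mutex can be initialised, 0 otherwise. */
static int shared_mutex_supported(void)
{
    pthread_mutexattr_t attr;
    pthread_mutex_t mtx;
    int ret;

    if (pthread_mutexattr_init(&attr) != 0)
        return 0;
    ret = pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    if (ret == 0)
        ret = pthread_mutex_init(&mtx, &attr);
    pthread_mutexattr_destroy(&attr);
    if (ret == 0)
        pthread_mutex_destroy(&mtx);
    return ret == 0;
}

int main(void)
{
    printf("process-shared mutexes %ssupported\n",
           shared_mutex_supported() ? "" : "NOT ");
    return 0;
}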


Re: Patch with some small memory usage fixes

2014-04-29 Thread Dmitry Sivachenko
Hello,

 if (groups) free(groups);

I think these checks are redundant, because according to free(3):
-- If ptr is NULL, no action occurs.


On 29 Apr 2014, at 3:00, Dirkjan Bussink d.buss...@gmail.com wrote:

 Hi all,
 
 When building HAProxy using the Clang Static Analyzer, it found a few cases 
 of invalid memory usage and leaks. I’ve attached a patch to fix these cases.
 
 — 
 Regards,
 
 Dirkjan Bussink
 
 0001-Fix-a-few-memory-usage-errors.patch




Re: balance leastconn does not honor weight?

2014-03-07 Thread Dmitry Sivachenko

On 06 March 2014, at 19:29, Dmitry Sivachenko trtrmi...@gmail.com wrote:

 Hello!
 
 I am using haproxy-1.5.22.
 
 In a single backend I have servers with different weight configured: 16, 24, 
 32 (proportional to the number of CPU cores).
 Most of the time they respond very fast.
 
 When I use balance leastconn, I see in the stats web interface that they all 
 receive an approximately equal number of connections (Sessions->Total).
 Shouldn't the leastconn algorithm also honor the weights of each backend (to pick a 
 backend with the minimal connections/weight value)?
 
 Thanks.


I mean that with balance leastconn, I expect the following behavior:
-- In an ideal situation, when all backends respond equally fast, it should be 
effectively like balance roundrobin *honoring the specified weights*;
-- When one of the backends becomes slow for some reason, it should get fewer 
requests based on the number of active connections.

Now it behaves almost this way, but without honoring the specified weights.
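
A simplified C sketch of the selection rule described above (pick the server with
the lowest connections-to-weight ratio). The structure and function names are
illustrative, not the real haproxy internals; weights are assumed to be positive.

#include <stdio.h>

struct srv {
    const char *name;
    int cur_conns;
    int weight;          /* e.g. 16, 24, 32 -- proportional to CPU cores */
};

static struct srv *pick_leastconn(struct srv *list, int n)
{
    struct srv *best = NULL;
    int i;

    for (i = 0; i < n; i++) {
        /* compare cur/weight without floating point:
         * a/wa < b/wb  <=>  a*wb < b*wa  (weights are positive) */
        if (!best ||
            list[i].cur_conns * best->weight <
            best->cur_conns * list[i].weight)
            best = &list[i];
    }
    return best;
}

int main(void)
{
    struct srv farm[] = {
        { "s1", 10, 16 },
        { "s2", 10, 24 },
        { "s3", 10, 32 },
    };

    /* with equal connection counts, the highest-weight server is picked */
    printf("next: %s\n", pick_leastconn(farm, 3)->name);
    return 0;
}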




