Re: [ANNOUNCE] haproxy-2.5-dev7

2021-09-12 Thread Dmitry Sivachenko



> On 12 Sep 2021, at 13:06, Willy Tarreau  wrote:
> 
> Hi,
> 
> HAProxy 2.5-dev7 was released on 2021/09/12. It added 39 new commits
> after version 2.5-dev6.



Hello,

there is a new warning in -dev branch (on FreeBSD):

admin/halog/fgets2.c:38:30: warning: '__GLIBC__' is not defined, evaluates to 0 [-Wundef]
#if defined(__x86_64__) &&  (__GLIBC__ > 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ >= 15))
                             ^
admin/halog/fgets2.c:38:48: warning: '__GLIBC__' is not defined, evaluates to 0 [-Wundef]
#if defined(__x86_64__) &&  (__GLIBC__ > 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ >= 15))

This looks like a Linux-specific condition.

Thanks.


Re: [ANNOUNCE] haproxy-2.4.9

2021-11-25 Thread Dmitry Sivachenko
On 24 Nov 2021, at 12:57, Christopher Faulet  wrote:
> 
> 
> Hi,
> 
> HAProxy 2.4.9 was released on 2021/11/23. It added 36 new commits
> after version 2.4.8.
> 


Hello,

version 2.4.9 fails to build with OpenSSL turned off:

 src/server.c:207:51: error: no member named 'ssl_ctx' in 'struct server'
if (srv->mux_proto || srv->use_ssl != 1 || !srv->ssl_ctx.alpn_str) {
~~~  ^
src/server.c:241:37: error: no member named 'ssl_ctx' in 'struct server'
const struct ist alpn = ist2(srv->ssl_ctx.alpn_str,
 ~~~  ^
src/server.c:242:37: error: no member named 'ssl_ctx' in 'struct server'
 srv->ssl_ctx.alpn_len);
 ~~~  ^

Version 2.4.8 builds fine.





Re: [ANNOUNCE] haproxy-2.4.9

2021-11-25 Thread Dmitry Sivachenko


> On 25 Nov 2021, at 13:09, Willy Tarreau  wrote:
> 
> Please try the two attached patches. They re-backport something that
> we earlier failed to backport that simplifies the ugly ifdefs everywhere
> that virtually break every single backport related to SSL.
> 
> For me they work with/without SSL and with older versions (tested as far
> as 0.9.8).
> 
> Thanks,
> Willy
> <0001-CLEANUP-servers-do-not-include-openssl-compat.patch><0002-CLEANUP-server-always-include-the-storage-for-SSL-se.patch>


These two patches do fix the build.

Thanks!


Re: [ANNOUNCE] haproxy-2.4.9

2021-11-25 Thread Dmitry Sivachenko



> On 25 Nov 2021, at 13:29, Amaury Denoyelle  wrote:
> 
> Dmitry, the patches that Willy provided you should fix the issue. Now,
> do you need a 2.4.10 to be emitted early with it or is it possible for
> you to keep the patches in your tree so we can have a more substantial
> list of change for a new version ?
> 

As for me, there is no hurry: I'll add the patches to the FreeBSD ports collection.




compile issues on FreeBSD/i386

2020-06-20 Thread Dmitry Sivachenko
Hello!

I am trying to compile haproxy-2.2-dev10 on FreeBSD-12/i386 (i386 is important 
here) with clang version 9.0.1.

I get the following linker error:

  LD  haproxy
ld: error: undefined symbol: __atomic_fetch_add_8
>>> referenced by backend.c
>>>   src/backend.o:(assign_server)
>>> referenced by backend.c
>>>   src/backend.o:(assign_server)
>>> referenced by backend.c
>>>   src/backend.o:(assign_server_and_queue)
>>> referenced by backend.c
>>>   src/backend.o:(assign_server_and_queue)
>>> referenced by backend.c
>>>   src/backend.o:(assign_server_and_queue)
>>> referenced by backend.c
>>>   src/backend.o:(assign_server_and_queue)
>>> referenced by backend.c
>>>   src/backend.o:(connect_server)
>>> referenced by backend.c
>>>   src/backend.o:(connect_server)
>>> referenced by backend.c
>>>   src/backend.o:(connect_server)
>>> referenced by backend.c
>>>   src/backend.o:(srv_redispatch_connect)
>>> referenced 233 more times

ld: error: undefined symbol: __atomic_store_8

For some time we have been applying the following patch to build on FreeBSD/i386:

--- include/common/hathreads.h.orig 2018-02-17 18:17:22.21940 +
+++ include/common/hathreads.h  2018-02-17 18:18:44.598422000 +
@@ -104,7 +104,7 @@ extern THREAD_LOCAL unsigned long tid_bit; /* The bit 
 /* TODO: thread: For now, we rely on GCC builtins but it could be a good idea to
  * have a header file regrouping all functions dealing with threads. */
 
-#if defined(__GNUC__) && (__GNUC__ < 4 || __GNUC__ == 4 && __GNUC_MINOR__ < 7) && !defined(__clang__)
+#if (defined(__GNUC__) && (__GNUC__ < 4 || __GNUC__ == 4 && __GNUC_MINOR__ < 7) && !defined(__clang__)) || (defined(__clang__) && defined(__i386__))
 /* gcc < 4.7 */
 
 #define HA_ATOMIC_ADD(val, i)__sync_add_and_fetch(val, i)

(it is from older -dev but still applies to include/haproxy/atomic.h and fixes 
the build).

If this patch is correct for i386, maybe we could include it in the haproxy sources?

PS: with that patch applied I get the following warning, which may make sense:

src/stick_table.c:3462:12: warning: result of comparison 'unsigned long' > 4294967295 is always false [-Wtautological-type-limit-compare]
val > 0x)
~~~ ^ ~~

Thanks.


Re: [ANNOUNCE] haproxy-2.0.16

2020-07-18 Thread Dmitry Sivachenko



> On 17 Jul 2020, at 17:34, Christopher Faulet  wrote:
> 
> 
> Hi,
> 
> HAProxy 2.0.16 was released on 2020/07/17. It added 45 new commits after
> version 2.0.15.

Hello,

Here are the new compile problems since 2.0.14 (FreeBSD/amd64, clang version 10.0.0):

1) new warnings:

src/log.c:1692:10: warning: logical not is only applied to the left hand side of this comparison [-Wlogical-not-parentheses]
while (HA_SPIN_TRYLOCK(LOGSRV_LOCK, &logsrv->lock) != 0) {
       ^                                           ~~
include/common/hathreads.h:1026:33: note: expanded from macro 'HA_SPIN_TRYLOCK'
#define HA_SPIN_TRYLOCK(lbl, l) !pl_try_s(l)
                                ^
src/log.c:1692:10: note: add parentheses after the '!' to evaluate the comparison first
include/common/hathreads.h:1026:33: note: expanded from macro 'HA_SPIN_TRYLOCK'
#define HA_SPIN_TRYLOCK(lbl, l) !pl_try_s(l)
                                ^
src/log.c:1692:10: note: add parentheses around left hand side expression to silence this warning
while (HA_SPIN_TRYLOCK(LOGSRV_LOCK, &logsrv->lock) != 0) {
       ^
       (                                         )
include/common/hathreads.h:1026:33: note: expanded from macro 'HA_SPIN_TRYLOCK'
#define HA_SPIN_TRYLOCK(lbl, l) !pl_try_s(l)
                                ^


2) compile error (can be fixed by including <sys/types.h>):

ebtree/ebtree.c:43:2: error: use of undeclared identifier 'ssize_t'; did you mean 'sizeof'?
ssize_t ofs = -len;
^~~
sizeof
ebtree/ebtree.c:43:10: error: use of undeclared identifier 'ofs'
ssize_t ofs = -len;
^
ebtree/ebtree.c:47:13: error: use of undeclared identifier 'ofs'
diff = p1[ofs] - p2[ofs];
  ^
ebtree/ebtree.c:47:23: error: use of undeclared identifier 'ofs'
diff = p1[ofs] - p2[ofs];
^
ebtree/ebtree.c:48:22: error: use of undeclared identifier 'ofs'
} while (!diff && ++ofs);




Re: [ANNOUNCE] haproxy-2.0.16

2020-07-18 Thread Dmitry Sivachenko



> On 18 Jul 2020, at 12:40, Илья Шипицин  wrote:
> 
> What is freebsd version?
> 

It was 13.0-CURRENT, but after you asked I also tried 12.1-STABLE (clang 
version is also 10.0.0):  the same warnings/error.




fetching layer 7 samples with tcp mode frontend

2020-11-13 Thread Dmitry Sivachenko
Hello!

Consider the following config excerpt:

frontend test-fe
mode tcp
use_backend test-be1 if { path -i -m end /set }

What is the meaning of the "path" sample in a frontend working in TCP mode?

We experimented with haproxy-1.5.18 on Linux, sending HTTP queries with a path
ending in "/set", and found that this condition sometimes hits, sometimes not.
So the behaviour is random.

Is this expected?  At first glance, I'd expect a warning or even an error when
parsing such a config.
What am I missing?

Thanks.
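For context (an assumption about the cause, not something stated in the thread): in TCP mode the frontend evaluates such a rule as soon as the connection is accepted, without waiting for a full HTTP request, so a layer-7 fetch like "path" only matches when enough request bytes happen to have arrived already. A hedged sketch of the usual workaround on versions that support tcp-request content inspection (the directives below are assumptions, check your version's documentation):

```
frontend test-fe
    mode tcp
    # wait up to 5s for request bytes so layer-7 fetches have data to inspect
    tcp-request inspect-delay 5s
    tcp-request content accept if HTTP
    use_backend test-be1 if { path -i -m end /set }
```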


Re: [ANNOUNCE] haproxy-2.3.7

2021-03-16 Thread Dmitry Sivachenko



> On 16 Mar 2021, at 18:01, Christopher Faulet  wrote:
> 
> Hi,
> 
> HAProxy 2.3.7 was released on 2021/03/16. It added 62 new commits
> after version 2.3.6.
> 
> This release is mainly about two subjects: the fix of bugs in the
> resolvers part, mainly revealed since the last release, and several
> scalability improvements backported from the development version.
> 

<...>

Hello,

among other things, this version also introduces a new warning (clang version 11.0.0):

src/mux_h2.c:4032:49: warning: implicit conversion from 'int' to 'unsigned short' changes value from -32769 to 32767 [-Wconstant-conversion]
HA_ATOMIC_AND(&h2c->wait_event.tasklet->state, ~TASK_F_USR1);
~~~^
include/haproxy/atomic.h:270:62: note: expanded from macro 'HA_ATOMIC_AND'
#define HA_ATOMIC_AND(val, flags)    __atomic_and_fetch(val, flags, __ATOMIC_SEQ_CST)


src/mux_fcgi.c:3477:51: warning: implicit conversion from 'int' to 'unsigned short' changes value from -32769 to 32767 [-Wconstant-conversion]
HA_ATOMIC_AND(&fconn->wait_event.tasklet->state, ~TASK_F_USR1);
~^
include/haproxy/atomic.h:270:62: note: expanded from macro 'HA_ATOMIC_AND'
#define HA_ATOMIC_AND(val, flags)    __atomic_and_fetch(val, flags, __ATOMIC_SEQ_CST)


src/mux_h1.c:2456:49: warning: implicit conversion from 'int' to 'unsigned short' changes value from -32769 to 32767 [-Wconstant-conversion]
HA_ATOMIC_AND(&h1c->wait_event.tasklet->state, ~TASK_F_USR1);
~~~^
include/haproxy/atomic.h:270:62: note: expanded from macro 'HA_ATOMIC_AND'
#define HA_ATOMIC_AND(val, flags)    __atomic_and_fetch(val, flags, __ATOMIC_SEQ_CST)





Re: [ANNOUNCE] haproxy-2.3.7

2021-03-19 Thread Dmitry Sivachenko



> On 19 Mar 2021, at 19:13, Willy Tarreau  wrote:
> 
> 
> Grrr... And C compiler authors are still wondering why people hate C
> when it's that their compilers are this pedantic :-(
> 
> Could you please try to append a 'U' after "0x8000" in include/haproxy/task-t.h,
> like this:
> 
>  #define TASK_F_USR1   0x8000U  /* preserved user flag 1, application-specific, def:0 */
> 
> It will mark it unsigned and hopefully make it happy again. If so, we'll
> merge it.
> 


No, it does not; the message has just changed slightly:

src/mux_h2.c:4032:49: warning: implicit conversion from 'unsigned int' to 'unsigned short' changes value from 4294934527 to 32767 [-Wconstant-conversion]
HA_ATOMIC_AND(&h2c->wait_event.tasklet->state, ~TASK_F_USR1);
~~~^
include/haproxy/atomic.h:270:62: note: expanded from macro 'HA_ATOMIC_AND'
#define HA_ATOMIC_AND(val, flags)    __atomic_and_fetch(val, flags, __ATOMIC_SEQ_CST)




Question about TCP balancing

2009-08-03 Thread Dmitry Sivachenko
Hello!

I am trying to setup haproxy 1.3.19 to use it as
TCP load balancer.

Relevant portion of config looks like:

listen  test 0.0.0.0:17000
mode tcp
balance roundrobin
server  srv1 srv1:17100 check inter 2
server  srv2 srv2:17100 check inter 2
server  srv3 srv3:17100 check inter 2

Now imagine the situation where all 3 backends are down
(no program listens on port 17100, the OS responds with Connection Refused).

In that situation haproxy still listens on its port and closes connections
immediately:
> telnet localhost 17101
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connection closed by foreign host.

Is it possible to configure haproxy so that it stops listening on the port
when all backends are down?  Then clients would receive
Connection Refused as if nothing were listening on the TCP port at all.

Thanks in advance!



Re: Question about TCP balancing

2009-08-04 Thread Dmitry Sivachenko
Hello!

Thanks for clarification.

I have another question then (trying to solve my problem in a different way).

I want to setup the following configuration.
I have 2 sets of servers (backends): let's call one set NEAR (n1, n2, n3)
and another set FAR (f1, f2, f3).

I want to spread incoming requests between the NEAR servers only
while they are alive, and move the load to the FAR servers in case the NEAR set is down.

Is it possible to setup such configuration?

I read the manual but did not find such a solution...

Thanks in advance!


On Mon, Aug 03, 2009 at 09:46:47PM +0200, Willy Tarreau wrote:
> No it's not, and it's not only a configuration issue, it's an OS
> limitation. The only way to achieve this is to stop listening to
> the port then listen again to re-enable the port. On some OSes, it
> is possible. On other ones, you have to rebind (and sometimes close
> then recreate a new socket). But once your process has dropped
> privileges, you can't always rebind if the port is <1024 for
> instance.
> 
> So instead of having various behaviours for various OSes, it's
> better to make them behave similarly.
> 
> I have already thought about adding an OS-specific option to do
> that, but I have another problem with that. Imagine that your
> servers are down. You stop listening to the port. At the same time,
> someone else starts listening (eg: you start a new haproxy without
> checking the first one, or an FTP transfer uses this port, ...).
> What should be done when the servers are up again ? Haproxy will
> not be able to get its port back because someone else owns it.
> 
> So, by lack of a clean and robust solution, I prefer not to
> experiment in this area.
> 



Re: Question about TCP balancing

2009-08-05 Thread Dmitry Sivachenko
On Wed, Aug 05, 2009 at 06:30:39AM +0200, Willy Tarreau wrote:
> frontend my_front
>   acl near_usable nbsrv(near) ge 2
>   acl far_usable  nbsrv(far)  ge 2
>   use_backend near if near_usable
>   use_backend far  if far_usable
>   # otherwise error
> 
> backend near
>   balance roundrobin
>   server near1 1.1.1.1 check
>   server near2 1.1.1.2 check
>   server near3 1.1.1.3 check
> 
> backend far
>   balance roundrobin
>   server far1  2.1.1.1 check
>   server far2  2.1.1.2 check
>   server far3  2.1.1.3 check
> 

Aha, I already came to such a solution and noticed that it works only
in HTTP mode.
Since I do not actually want to parse HTTP-specific information,
I want to stay in TCP mode (but still use an ACL with nbsrv).

So I should stick with 1.4 for that purpose, right?

Or does HTTP mode act like TCP mode unless I actually use
something HTTP-specific?
In other words, will the above configuration (used in HTTP mode)
actually try to parse HTTP headers (and waste CPU cycles on that)?

Thanks.




Re: Question about TCP balancing

2009-08-06 Thread Dmitry Sivachenko
On Thu, Aug 06, 2009 at 12:03:25AM +0200, Willy Tarreau wrote:
> On Wed, Aug 05, 2009 at 12:01:34PM +0400, Dmitry Sivachenko wrote:
> > On Wed, Aug 05, 2009 at 06:30:39AM +0200, Willy Tarreau wrote:
> > > frontend my_front
> > >   acl near_usable nbsrv(near) ge 2
> > >   acl far_usable  nbsrv(far)  ge 2
> > >   use_backend near if near_usable
> > >   use_backend far  if far_usable
> > >   # otherwise error
> > > 
> > > backend near
> > >   balance roundrobin
> > >   server near1 1.1.1.1 check
> > >   server near2 1.1.1.2 check
> > >   server near3 1.1.1.3 check
> > > 
> > > backend far
> > >   balance roundrobin
> > >   server far1  2.1.1.1 check
> > >   server far2  2.1.1.2 check
> > >   server far3  2.1.1.3 check
> > > 
> > 
> > Aha, I already came to such a solution and noticed it works only
> > in HTTP mode.
> > Since I actually do not want to parse HTTP-specific information,
> > I want to stay in TCP mode (but still use ACL with nbsrv).
> > 
> > So I should stick with 1.4 for that purpose, right?
> 
> exactly. However, keep in mind that 1.4 is development, and if
> you upgrade frequently, it may break some day. So you must be
> careful.
> 

Okay, what is the estimated release date of the 1.4 branch?



Compilation of haproxy-1.4-dev2 on FreeBSD

2009-08-24 Thread Dmitry Sivachenko
Hello!

Please consider the following patches. They are required to
compile haproxy-1.4-dev2 on FreeBSD.

Summary:
1) include <netinet/in.h> before <netinet/tcp.h>
2) Use IPPROTO_TCP instead of SOL_TCP
(they are both defined as 6, TCP protocol number)

Thanks!


--- src/backend.c.orig  2009-08-24 14:49:04.0 +0400
+++ src/backend.c   2009-08-24 14:49:19.0 +0400
@@ -17,6 +17,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 

--- src/stream_sock.c.orig  2009-08-24 14:45:15.0 +0400
+++ src/stream_sock.c   2009-08-24 14:46:19.0 +0400
@@ -16,12 +16,12 @@
 #include 
 #include 
 
-#include 
-
 #include 
 #include 
 #include 
 
+#include 
+
 #include 
 #include 
 #include 


--- src/proto_tcp.c.orig2009-08-24 14:50:03.0 +0400
+++ src/proto_tcp.c 2009-08-24 14:55:45.0 +0400
@@ -18,14 +18,14 @@
 #include 
 #include 
 
-#include 
-
 #include 
 #include 
 #include 
 #include 
 #include 
 
+#include 
+
 #include 
 #include 
 #include 
@@ -253,7 +253,7 @@ int tcp_bind_listener(struct listener *l
 #endif
 #ifdef TCP_MAXSEG
if (listener->maxseg) {
-   if (setsockopt(fd, SOL_TCP, TCP_MAXSEG,
+   if (setsockopt(fd, IPPROTO_TCP, TCP_MAXSEG,
   &listener->maxseg, sizeof(listener->maxseg)) == -1) {
msg = "cannot set MSS";
err |= ERR_WARN;




TCP log format question

2009-08-26 Thread Dmitry Sivachenko
Hello!

I am running haproxy-1.4-dev2 with the following
configuration (excerpt):

global
log /var/run/log local0
user www
group www
daemon
defaults
log global
mode tcp
balance roundrobin
maxconn 2000
option abortonclose
option allbackups
option httplog
option dontlog-normal
option dontlognull
option redispatch
option tcplog
retries 2

frontend M-front
bind 0.0.0.0:17306
mode tcp
acl M-acl nbsrv(M-native) ge 5
use_backend M-native if M-acl
default_backend M-foreign

backend M-native
mode tcp
server ms1 ms1:17306 check maxconn 100 maxqueue 1 weight 100
server ms2 ms2:17306 check maxconn 100 maxqueue 1 weight 100
<...>

backend M-foreign
mode tcp
server ms3 ms3:17306 check maxconn 100 maxqueue 1 weight 100
server ms4 ms4:17306 check maxconn 100 maxqueue 1 weight 100

Note that both frontend and 2 backends are running in TCP mode.

In my log file I see the following lines:
Aug 26 18:19:50 balancer0-00 haproxy[66301]: A.B.C.D:28689
[26/Aug/2009:18:19:50.034] M-front M-native/ms1 -1/1/0/-1/3 -1 339 - - CD--
0/0/0/0/0 0/0 "<BADREQ>"

1) What does "<BADREQ>" mean? I see no description of that field in the
documentation of the TCP log format.
2) Why are *all* requests being logged?
(note option dontlog-normal in the defaults section).
How should I change the configuration to log only important events
(errors) and not log the fact that a connection was made and served?

Thanks in advance!



Re: TCP log format question

2009-08-27 Thread Dmitry Sivachenko
On Thu, Aug 27, 2009 at 06:39:51AM +0200, Willy Tarreau wrote:
> I'm seeing that you have both "tcplog" and "httplog". Since they
> both add a set of flags, the union of both is enabled which means
> httplog to me. I should add a check for this so that tcplog disables
> httplog.
> 
> > In my log file I see the following lines:
> > Aug 26 18:19:50 balancer0-00 haproxy[66301]: A.B.C.D:28689 
> > [26/Aug/2009:18:19:50.034] M-front M-native/ms1 -1/1/0/-1/3 -1 339 - - CD-- 
> > 0/0/0/0/0 0/0 ""
> > 
> > 1) What does "" mean? I see no description of that field in
> > documentation of TCP log format.
> 
> this is because of "option httplog".

Aha, I see, I had the impression 'option httplog' would be
ignored in TCP mode.

I removed it and "<BADREQ>" disappeared from the log.


> 
> > 2) Why *all* requests are being logged? 
> > (note option dontlog-normal in default section).
> > How should I change configuration to log only important events
> > (errors) and do not log the fact connection was made and served?
> 
> Hmmm dontlog-normal only works in HTTP mode.

Ok, I see, though it is completely unclear after reading the manual.
This should probably be explicitly mentioned.

> Could you please
> explain what type of normal connections you would want to log
> and what type you would not want to log ? It could help making
> a choice of implementation of dontlog-normal for tcplog.
> 

I want to log exactly what manual states:
###
Setting this option ensures that
normal connections, those which experience no error, no timeout, no retry nor
redispatch, will not be logged.
###

... but for TCP mode proxy.

I mean I want to see in the logs only those connections that were redispatched,
timed out, etc.

Thanks!



Re: Backend Server UP/Down Debugging?

2009-08-27 Thread Dmitry Sivachenko
On Thu, Aug 27, 2009 at 08:45:23AM +0200, Krzysztof Oledzki wrote:
> > On Wed, Aug 26, 2009 at 02:00:42PM -0700, Jonah Horowitz wrote:
> >> I'm watching my servers on the back end and occasionally they flap.  
> >> I'm wondering if there is a way to see why they are taken out of 
> >> service.  I'd like to see the actual response, or at least an HTTP status 
> >> code.
> >
> > right now it's not archived. I would like to keep a local copy of
> > the last request sent and response received which caused a state
> > change, but that's not implemented yet. I wanted to clean up the
> > stats socket first, but now I realize that we could keep at least
> > some info (eg: HTTP status, timeout, ...) in the server struct
> > itself and report it in the log. Nothing of that is performed right
> > now, so you may have to tcpdump at best :-(
> 
> As always, I have a patch for that, solving it nearly exactly like you 
> described it. ;) However for the last half year I have been rather silent, 
> mostly because it is very important time in my private life, so I think 
> I'm partially excused. ;) I know that there are some unfinished tasks (acl 
> for example) so I'll try to push ASAP, maybe starting from the easier 
> patches, like this one. The rest will have to wait until I get back from 
> honeymoon.


I see flapping servers in my logs too and also have no clue why
haproxy disables them.

If you have a patch to log the reason why the particular server
was disabled, I'd love to test it (I run 1.4-dev2).

Thanks.



Re: TCP log format question

2009-08-28 Thread Dmitry Sivachenko
> On Thu, Aug 27, 2009 at 06:39:51AM +0200, Willy Tarreau wrote:
> Hmmm dontlog-normal only works in HTTP mode.
> 

BTW, the manual explicitly states that 'option dontlog-normal'
works in TCP mode. See section 8.2.2 "TCP Log format":
#
Successful connections will
not be logged if "option dontlog-normal" is specified in the frontend.
#





lua support does not build on FreeBSD

2016-12-13 Thread Dmitry Sivachenko
Hello,

I am unable to build haproxy-1.7.x on FreeBSD:

cc -Iinclude -Iebtree -Wall -O2 -pipe -O2 -fno-strict-aliasing -pipe  
-fstack-protector   -DFREEBSD_PORTS-DTPROXY -DCONFIG_HAP_CRYPT 
-DUSE_GETADDRINFO -DUSE_ZLIB  -DENABLE_POLL -DENABLE_KQUEUE -DUSE_CPU_AFFINITY 
-DUSE_OPENSSL  -DUSE_LUA -I/usr/local/include/lua53 -DUSE_DEVICEATLAS 
-I/place/WRK/ports/net/haproxy/work/deviceatlas-enterprise-c-2.1 -DUSE_PCRE 
-I/usr/local/include -DUSE_PCRE_JIT  -DCONFIG_HAPROXY_VERSION=\"1.7.1\" 
-DCONFIG_HAPROXY_DATE=\"2016/12/13\" -c -o src/hlua_fcn.o src/hlua_fcn.c
src/hlua_fcn.c:1019:27: error: no member named 's6_addr32' in 'struct in6_addr'
if (((addr1->addr.v6.ip.s6_addr32[0] & addr2->addr.v6.ma...
  ~ ^
src/hlua_fcn.c:1019:62: error: no member named 's6_addr32' in 'struct in6_addr'
  ...if (((addr1->addr.v6.ip.s6_addr32[0] & addr2->addr.v6.mask.s6_addr32[0]...
~~~ ^
src/hlua_fcn.c:1020:27: error: no member named 's6_addr32' in 'struct in6_addr'
 (addr2->addr.v6.ip.s6_addr32[0] & addr1->addr.v6.ma...
  ~ ^
src/hlua_fcn.c:1020:62: error: no member named 's6_addr32' in 'struct in6_addr'
  ...(addr2->addr.v6.ip.s6_addr32[0] & addr1->addr.v6.mask.s6_addr32[0])) &&
   ~~~ ^
src/hlua_fcn.c:1021:27: error: no member named 's6_addr32' in 'struct in6_addr'
((addr1->addr.v6.ip.s6_addr32[1] & addr2->addr.v6.ma...
  ~ ^
src/hlua_fcn.c:1021:62: error: no member named 's6_addr32' in 'struct in6_addr'
  ...((addr1->addr.v6.ip.s6_addr32[1] & addr2->addr.v6.mask.s6_addr32[1]) ==
~~~ ^
src/hlua_fcn.c:1022:27: error: no member named 's6_addr32' in 'struct in6_addr'
 (addr2->addr.v6.ip.s6_addr32[1] & addr1->addr.v6.ma...
  ~ ^
src/hlua_fcn.c:1022:62: error: no member named 's6_addr32' in 'struct in6_addr'
  ...(addr2->addr.v6.ip.s6_addr32[1] & addr1->addr.v6.mask.s6_addr32[1])) &&
   ~~~ ^
src/hlua_fcn.c:1023:27: error: no member named 's6_addr32' in 'struct in6_addr'
((addr1->addr.v6.ip.s6_addr32[2] & addr2->addr.v6.ma...
  ~ ^
src/hlua_fcn.c:1023:62: error: no member named 's6_addr32' in 'struct in6_addr'
  ...((addr1->addr.v6.ip.s6_addr32[2] & addr2->addr.v6.mask.s6_addr32[2]) ==
~~~ ^
src/hlua_fcn.c:1024:27: error: no member named 's6_addr32' in 'struct in6_addr'
 (addr2->addr.v6.ip.s6_addr32[2] & addr1->addr.v6.ma...
  ~ ^
src/hlua_fcn.c:1024:62: error: no member named 's6_addr32' in 'struct in6_addr'
  ...(addr2->addr.v6.ip.s6_addr32[2] & addr1->addr.v6.mask.s6_addr32[2])) &&
   ~~~ ^
src/hlua_fcn.c:1025:27: error: no member named 's6_addr32' in 'struct in6_addr'
((addr1->addr.v6.ip.s6_addr32[3] & addr2->addr.v6.ma...
  ~ ^
src/hlua_fcn.c:1025:62: error: no member named 's6_addr32' in 'struct in6_addr'
  ...((addr1->addr.v6.ip.s6_addr32[3] & addr2->addr.v6.mask.s6_addr32[3]) ==
~~~ ^
src/hlua_fcn.c:1026:27: error: no member named 's6_addr32' in 'struct in6_addr'
 (addr2->addr.v6.ip.s6_addr32[3] & addr1->addr.v6.ma...
  ~ ^
src/hlua_fcn.c:1026:62: error: no member named 's6_addr32' in 'struct in6_addr'
  ...(addr2->addr.v6.ip.s6_addr32[3] & addr1->addr.v6.mask.s6_addr32[3]))) {
   ~~~ ^
16 errors generated.




In netinet6/in6.h I see:

#ifdef _KERNEL  /* XXX nonstandard */
#define s6_addr8  __u6_addr.__u6_addr8
#define s6_addr16 __u6_addr.__u6_addr16
#define s6_addr32 __u6_addr.__u6_addr32
#endif


So it seems that the s6_addr32 macro is defined only when this header is included
during a kernel build.





Re: lua support does not build on FreeBSD

2016-12-14 Thread Dmitry Sivachenko

> On 14 Dec 2016, at 16:24, David CARLIER  wrote:
> 
> Hi,
> 
> I ve made a small patch for 1.8 branch though. Does it suit ? (ie I
> made all the fields available, not sure if would be useful one day).
> 

Well, I was not sure what this s6_addr32 is used for and whether it is possible to
avoid its usage (since it is Linux-specific).
If not, then this is probably the correct solution.




Re: lua support does not build on FreeBSD

2016-12-23 Thread Dmitry Sivachenko

> On 23 Dec 2016, at 19:07, thierry.fourn...@arpalert.org wrote:
> 
> Ok, thanks Willy.
> 
> The new patch is attached. David, can you test the FreeBSD build?
> The patch is tested and validated for Linux.



Yes, it does fix FreeBSD build.



> 
> Thierry
> 
> 
> On Fri, 23 Dec 2016 14:50:38 +0100
> Willy Tarreau  wrote:
> 
>> On Fri, Dec 23, 2016 at 02:37:13PM +0100, thierry.fourn...@arpalert.org 
>> wrote:
>>> thanks Willy for the idea. I will write a patch ASAP, but. why a 32bits
>>> cast and not a 64 bit cast ?
>> 
>> First because existing code uses this already and it works. Second because
>> the 64-bit check might be more expensive for 32-bit platforms than the
>> double 32-bit check is for 64-bit platforms (though that's still to be
>> verified in the assembly code, as some compilers manage to assign register
>> pairs correctly).
>> 
>> Willy
>> 
> <0001-BUILD-lua-build-failed-on-FreeBSD.patch>




Re: HaProxy Hang

2017-03-03 Thread Dmitry Sivachenko

> On 03 Mar 2017, at 17:07, David King  wrote:
> 
> Hi All
> 
> Hoping someone will be able to help, we're running a bit of an interesting 
> setup
> 
> we have 3 HAProxy nodes running freebsd 11.0 , each host runs 4 jails, each 
> running haproxy, but only one of the jails is under any real load
> 


If my memory does not fail me, this is the third report of a haproxy hang on FreeBSD, 
and all these reports are about FreeBSD-11.

I wonder if anyone experiences this issue with FreeBSD-10?

I am running a rather heavily loaded haproxy cluster on FreeBSD-10 (version 1.6.9 
to be specific) and have never experienced any hangs (knock on wood).
 


Re: HaProxy Hang

2017-03-03 Thread Dmitry Sivachenko

> On 03 Mar 2017, at 19:36, David King  wrote:
> 
> Thanks for the response!
> Thats interesting, i don't suppose you have the details of the other issues?


First report is 
https://www.mail-archive.com/haproxy@formilux.org/msg25060.html
Second one
https://www.mail-archive.com/haproxy@formilux.org/msg25067.html

(in the same thread)



> 
> Thanks
> Dave 
> 
> On 3 March 2017 at 14:15, Dmitry Sivachenko  wrote:
> 
> > On 03 Mar 2017, at 17:07, David King  wrote:
> >
> > Hi All
> >
> > Hoping someone will be able to help, we're running a bit of an interesting 
> > setup
> >
> > we have 3 HAProxy nodes running freebsd 11.0 , each host runs 4 jails, each 
> > running haproxy, but only one of the jails is under any real load
> >
> 
> 
> If my memory does not fail me, this is the third report of a haproxy hang on FreeBSD 
> and all these reports are about FreeBSD-11.
> 
> I wonder if anyone experiences this issue with FreeBSD-10?
> 
> I am running a rather heavily loaded haproxy cluster on FreeBSD-10 (version 1.6.9 
> to be specific) and have never experienced any hangs (knock on wood).
>  
> 




Re: Problems with haproxy 1.7.3 on FreeBSD 11.0-p8

2017-03-12 Thread Dmitry Sivachenko

> On 12 Mar 2017, at 11:34, Matthias Fechner  wrote:
> 
> 
> I checked the port again and there is one patch applied to haproxy, but
> it is a different file, so it should not cause the patch to fail, but
> maybe can cause other problems.
> --- src/hlua_fcn.c.orig 2016-12-17 13:58:44.786067000 +0300
> +++ src/hlua_fcn.c  2016-12-17 13:59:17.551256000 +0300
> @@ -39,6 +39,12 @@ static int class_listener_ref;
> 
> #define STATS_LEN (MAX((int)ST_F_TOTAL_FIELDS, (int)INF_TOTAL_FIELDS))
> 
> +#if defined(__FreeBSD__) || defined(__NetBSD__) || defined(__OpenBSD__)
> +#define s6_addr8   __u6_addr.__u6_addr8
> +#define s6_addr16  __u6_addr.__u6_addr16
> +#define s6_addr32  __u6_addr.__u6_addr32
> +#endif
> +
> static struct field stats[STATS_LEN];
> 
> int hlua_checkboolean(lua_State *L, int index)
> 
> 


I removed this patch from the ports.
It was needed for a previous version to compile and should not cause any problems.
Now it has become obsolete.


Re: Problems with haproxy 1.7.3 on FreeBSD 11.0-p8

2017-03-14 Thread Dmitry Sivachenko

> On 15 Mar 2017, at 00:17, Willy Tarreau  wrote:
> 
> Matthias,
> 
> I could finally track the problem down to a 5-year old bug in the
> connection handler. It already used to affect Unix sockets but it
> requires so rare a set of options and even then its occurrence rate
> is so low that probably nobody noticed it yet.
> 
> I'm attaching the patch to be applied on top of 1.7.3 which fixes it,
> it will be merged into next version.
> 
> Dmitry, you may prefer to take this one than to revert the previous
> one from your ports, especially considering that a few connect()
> immediately succeed over the loopback on FreeBSD and that it was
> absolutely needed to trigger the bug (as well as the previously fixed
> one, which had less impact).
> 

Thanks!

I committed your patch to FreeBSD ports.




Re: Problems with haproxy 1.7.3 on FreeBSD 11.0-p8

2017-03-17 Thread Dmitry Sivachenko

> On 17 Mar 2017, at 12:04, Willy Tarreau  wrote:
> 
> Hi Dmitry,
> 
> On Wed, Mar 15, 2017 at 12:45:54AM +0300, Dmitry Sivachenko wrote:
>> I committed your patch to FreeBSD ports.
> 
> I was just reported an undesired side effect of this patch with smtp
> in clear without proxy-proto :-(
> 
> [...]



Okay, thanks for the information.  I have no other complaints so far, so I will wait a 
bit for an update.


Re: Problems with haproxy 1.7.3 on FreeBSD 11.0-p8

2017-03-19 Thread Dmitry Sivachenko

> On 19 Mar 2017, at 14:40, Willy Tarreau  wrote:
> 
> Hi,
> 
> On Sat, Mar 18, 2017 at 01:12:09PM +0100, Willy Tarreau wrote:
>> OK here's a temporary patch. It includes a revert of the previous one and
>> adds a condition for the wake-up. At least it passes all my tests, including
>> those involving synchronous connection reports.
>> 
>> I'm not merging it yet as I'm wondering whether a reliable definitive
>> solution should be done once for all (and backported) by addressing the
>> root cause instead of constantly working around its consequences.
> 
> And here come two patches as a replacement for this temporary one. They
> are safer and have been done after thorough code review. I spotted a
> few tens of dirty corner cases having accumulated over the years due
> to the unclear meaning of the CO_FL_CONNECTED flag. They'll have to be
> addressed, but the current patches protect against these corner cases.
> They survived all tests involving delayed connections and checks with
> and without all handshake combinations, with tcp (immediate and delayed
> requests and responses) and http (immediate, delayed requests and responses
> and pipelining).
> 
> I'm resending the first one you already got Dmitry to make things easier
> to follow for everyone. These three are to be applied on top of 1.7.3. I
> still have a few other issues to deal with regarding 1.7 before doing a
> new release (hopefully by the beginning of this week).



Thanks a lot!

I just incorporated the latest fixes into the FreeBSD ports tree.


USE_GETSOCKNAME obsoleted?

2017-05-10 Thread Dmitry Sivachenko
Hello,

in the Makefile I see some logic around the USE_GETSOCKNAME define.
But as far as I can see, the sources use getsockname() unconditionally.

Is this an obsolete define which should be removed from the Makefile?

Thanks.



[PATCH]: CLEANUP/MINOR: retire obsoleted USE_GETSOCKNAME build option

2017-05-11 Thread Dmitry Sivachenko
Hello,

this is a patch to retire the obsolete USE_GETSOCKNAME build option.

Thanks!



0001-CLEANUP-MINOR-retire-USE_GETSOCKNAME-build-option.patch
Description: Binary data


How to add custom options to CFLAGS

2017-06-03 Thread Dmitry Sivachenko
Hello,

Right now we have in the Makefile:

#### Common CFLAGS
# These CFLAGS contain general optimization options, CPU-specific optimizations
# and debug flags. They may be overridden by some distributions which prefer to
# set all of them at once instead of playing with the CPU and DEBUG variables.
CFLAGS = $(ARCH_FLAGS) $(CPU_CFLAGS) $(DEBUG_CFLAGS) $(SPEC_CFLAGS)

So you explicitly suggest overriding CFLAGS if someone wants to add custom 
options here (say, to tune optimisations).

But this way the now-mandatory -fwrapv will be lost, or one must remember not 
to lose it.
This is not convenient.

I propose adding some means to inherit the CFLAGS defined in haproxy's 
Makefile, while allowing customisation via additional options passed through 
the environment; example attached.

What do you think?

(another way would be to add $(CUSTOM_CFLAGS) at the end of CFLAGS assignment).

Thanks.

--- Makefile.orig   2017-06-03 10:48:38.897518000 +0300
+++ Makefile    2017-06-03 10:48:58.640446000 +0300
@@ -205,7 +205,7 @@ ARCH_FLAGS= $(ARCH_FLAGS.$(ARCH)
 # These CFLAGS contain general optimization options, CPU-specific optimizations
 # and debug flags. They may be overridden by some distributions which prefer to
 # set all of them at once instead of playing with the CPU and DEBUG variables.
-CFLAGS = $(ARCH_FLAGS) $(CPU_CFLAGS) $(DEBUG_CFLAGS) $(SPEC_CFLAGS)
+CFLAGS := $(ARCH_FLAGS) $(CPU_CFLAGS) $(DEBUG_CFLAGS) $(SPEC_CFLAGS) $(CFLAGS)
 
#### Common LDFLAGS
 # These LDFLAGS are used as the first "ld" options, regardless of any library


Re: How to add custom options to CFLAGS

2017-06-04 Thread Dmitry Sivachenko

> On 04 Jun 2017, at 14:37, Willy Tarreau  wrote:
> 
> On Sat, Jun 03, 2017 at 10:36:04AM +0200, Aleksandar Lazic wrote:
>> Hi Dmitry Sivachenko,
>> 
>> Dmitry wrote on:
>> 
>>> Hello,
>> 
>>> Right now we have in the Makefile:
>> 
>>>  Common CFLAGS
>>> # These CFLAGS contain general optimization options, CPU-specific 
>>> optimizations
>>> # and debug flags. They may be overridden by some distributions which 
>>> prefer to
>>> # set all of them at once instead of playing with the CPU and DEBUG 
>>> variables.
>>> CFLAGS = $(ARCH_FLAGS) $(CPU_CFLAGS) $(DEBUG_CFLAGS) $(SPEC_CFLAGS)
>> 
>>> So you explicitly suggest to override CFLAGS if someone want to add
>>> custom options here (say, tune optimisations).
>> 
>>> But this way now mandatory -fwrap will be lost.  Or one must remember not 
>>> to loose it.
>>> This is not convenient.
>> 
>>> I propose to add some means to inherit CFLAGS defined in haproxy's
>>> Makefile, but allow to customise it via additional options passed via 
>>> environment, example attached.
>> 
>>> What do you think?
>> 
>>> (another way would be to add $(CUSTOM_CFLAGS) at the end of CFLAGS 
>>> assignment).
>> 
>> Personally I would prefer the CUSTOM_CFLAGS way.
> 
> Same here, and it's important not to create confusion on the way
> CFLAGS are computed.
> 
> By the way, usually if I need to add some specific flags (eg #define),
> I do it via DEFINE or SMALL_OPTS. If I want to change the optimization
> options, I use CPU_CFLAGS or CPU_CFLAGS..
> 
> So maybe you already have what you need and only the documentation needs
> to be improved.
> 

The FreeBSD ports collection has a rule for CFLAGS customisation: the ports 
framework sets the CFLAGS environment variable and expects it to be used during 
compilation.

Usually people use it to specify different -On options and other optimisations 
they want.

So strictly speaking it is not CPU-specific, but rather environment-specific; 
exactly what the comment near CFLAGS is about: "".
Right now I see only -O2 in CPU_CFLAGS, so it can be used for that purpose.

If the consensus is to use CPU_CFLAGS for my purpose, that's OK, I will 
switch to it.

Thanks.


Re: How to add custom options to CFLAGS

2017-06-07 Thread Dmitry Sivachenko

> On 07 Jun 2017, at 11:41, Willy Tarreau  wrote:
> 
> Hi Dmitry,
> 
> On Sun, Jun 04, 2017 at 02:54:23PM +0300, Dmitry Sivachenko wrote:
>>> Same here, and it's important not to create confusion on the way
>>> CFLAGS are computed.
>>> 
>>> By the way, usually if I need to add some specific flags (eg #define),
>>> I do it via DEFINE or SMALL_OPTS. If I want to change the optimization
>>> options, I use CPU_CFLAGS or CPU_CFLAGS..
>>> 
>>> So maybe you already have what you need and only the documentation needs
>>> to be improved.
>>> 
>> 
>> FreeBSD ports collection has a rule for CFLAGS customisation: ports framework
>> sets CFLAGS environment and expects it to be used during compilation.
>> 
>> Usually people use it to specify different -On options and other
>> optimisations they want.
>> 
>> So strictly speaking it is not CPU-specific, but rather environment-specific.
> 
> I agree on the general principle, it just happens that for a very long time
> I've had to deal with broken compilers on various CPUs that were producing
> bogus code at certain optimization levels, which is what made the optimization
> level end up in the CPU-specific CFLAGS. Good memories are gcc 3.0.4 on PARISC
> and pgcc on i586.
> 
> While things have significantly evolved since then, there are still certain
> flags which directly affect optimization and which have a different behaviour
> on various architectures (-mcpu, -mtune, -march, -mregparm). Given that in
> general you want to change them when you change the optimization level
> (typically to produce debuggable code), I tend to think it continues to make
> sense to have all of them grouped together.


I see.


> 
>> Right now I see only -O2 in CPU_CFLAGS, so it can be used for that purpose.
>> 
>> If the consensus will be to use CPU_CFLAGS for my purpose, it's OK, I will
>> switch to it.
> 
> If that's OK for you, I indeed would rather avoid touching that sensitive
> area, though we can always extend it, but I prefer the principle of least
> surprise. You can probably just run make "CPU_CFLAGS=$CFLAGS" and achieve
> exactly what you want.
> 

Yes, that is what I was talking about.  I'll stick to that approach then.

Thanks!




Fix building haproxy with recent LibreSSL

2017-07-02 Thread Dmitry Sivachenko
Hello,

can you please take a look at proposed patch to fix build of haproxy with 
recent version of LibreSSL?

https://www.mail-archive.com/haproxy@formilux.org/msg25819.html

Thanks.


Re: Fix building haproxy with recent LibreSSL

2017-07-04 Thread Dmitry Sivachenko

> On 04 Jul 2017, at 11:04, Willy Tarreau  wrote:
> 
> Hi Dmitry,
> 
> [CCing Bernard, the  patch's author]
> 
> On Mon, Jul 03, 2017 at 12:34:52AM +0300, Dmitry Sivachenko wrote:
>> Hello,
>> 
>> can you please take a look at proposed patch to fix build of haproxy with
>> recent version of LibreSSL?
>> 
>> https://www.mail-archive.com/haproxy@formilux.org/msg25819.html
> 
> I personally have no opinion on this one, as long as it doesn't break the
> build for other versions. Do you see the problem on your FreeBSD builds ?
> Do you know if the patch applies to 1.8 (it was mangled so I didn't try).
> We could relatively easily apply Bernard's patch as his description can
> be used as a commit message.



On FreeBSD it does fix the build (though a new warning appears which I can't 
explain due to my lack of SSL knowledge):

src/ssl_sock.c:803:2: warning: incompatible integer to pointer conversion
 assigning to 'void (*)(void)' from 'long' [-Wint-conversion]
   SSL_CTX_get_tlsext_status_cb(ctx, &callback);
   ^~~~
src/ssl_sock.c:801:6: note: expanded from macro 'SSL_CTX_get_tlsext_status_cb'
 ...= SSL_CTX_ctrl(ctx,SSL_CTRL_GET_TLSEXT_STATUS_REQ_CB,0, (void (**)(void))cb)
^ ~~
1 warning generated.


The patch was taken from OpenBSD, so in general it should be fine.

A review from some SSL-aware guys on your side would be nice.




Re: Fix building haproxy with recent LibreSSL

2017-07-04 Thread Dmitry Sivachenko

> On 04 Jul 2017, at 11:04, Willy Tarreau  wrote:
> 
> Hi Dmitry,
> 
> [CCing Bernard, the  patch's author]
> 
> On Mon, Jul 03, 2017 at 12:34:52AM +0300, Dmitry Sivachenko wrote:
>> Hello,
>> 
>> can you please take a look at proposed patch to fix build of haproxy with
>> recent version of LibreSSL?
>> 
>> https://www.mail-archive.com/haproxy@formilux.org/msg25819.html
> 
> 
> Do you know if the patch applies to 1.8 (it was mangled so I didn't try).


Sorry, hit reply too fast:  no, one chunk fails against 1.8-dev2 (the one 
dealing with #ifdef SSL_CTX_get_tlsext_status_arg; it requires analysis because 
it is not a simple surrounding-context change).




Re: FreeBSD CPU Affinity

2017-08-16 Thread Dmitry Sivachenko

> On 16 Aug 2017, at 17:24, Mark Staudinger  wrote:
> 
> Hi Folks,
> 
> Running HAProxy-1.7.8 on FreeBSD-11.0.  Working with nbproc=2 to separate 
> HTTP and HTTPS portions of the config.


Hello,

are you installing haproxy from FreeBSD ports?

I just tried your configuration and it works as you expect.

If you are building haproxy by hand, add the USE_CPU_AFFINITY=1 parameter to 
make manually.  The FreeBSD port does that for you.






Re: FreeBSD CPU Affinity

2017-08-16 Thread Dmitry Sivachenko

> On 16 Aug 2017, at 17:40, Mark Staudinger  wrote:
> 
> On Wed, 16 Aug 2017 10:35:05 -0400, Dmitry Sivachenko  
> wrote:
> 
>> Hello,
>> 
>> are you installing haproxy from FreeBSD ports?
>> 
>> I just tried your configuration and it works as you expect.
>> 
>> If you are building haproxy by hand, add the USE_CPU_AFFINITY=1 parameter to 
>> make manually.  The FreeBSD port does that for you.
>> 
>> 
>> 
> 
> 
> Hi Dmitry,
> 
> I am running (for now) a locally compiled from source version.
> 
> Build options :
>  TARGET  = freebsd
>  CPU = generic
>  CC  = clang
>  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv
>  OPTIONS = USE_CPU_AFFINITY=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 
> USE_STATIC_PCRE=1 USE_PCRE_JIT=1



Strange.  I am testing on FreeBSD-10-stable though.

Maybe you could add a return-code check for cpuset_setaffinity() and log the 
possible error?


Re: FreeBSD CPU Affinity

2017-08-17 Thread Dmitry Sivachenko

> On 16 Aug 2017, at 18:32, Olivier Houchard  wrote:
> 
> 
> 
> I think I know what's going on.
> Can you try the attached patch ?
> 
> Thanks !
> 
> Olivier
> <0001-MINOR-Fix-CPU-usage-on-FreeBSD.patch>


Also, it would probably be correct to check the return code from 
cpuset_setaffinity() and treat failure as a fatal error, aborting haproxy startup.

It is better to get an error message on start rather than guess why it does not 
work as expected.


Re: [ANNOUNCE] haproxy-1.8-rc1 : the last mile

2017-11-03 Thread Dmitry Sivachenko

> On 01 Nov 2017, at 02:20, Willy Tarreau  wrote:
> 
> Hi all!
> 


Hello,

several new warnings from clang, some look meaningful:

cc -Iinclude -Iebtree -Wall  -O2 -pipe  -fstack-protector -fno-strict-aliasing  
-fno-strict-aliasing -Wdeclaration-after-statement -fwrapv  
-Wno-address-of-packed-member -Wno-null-dereference -Wno-unused-label   
-DFREEBSD_PORTS-DTPROXY -DCONFIG_HAP_CRYPT -DUSE_GETADDRINFO -DUSE_ZLIB  
-DENABLE_POLL -DENABLE_KQUEUE -DUSE_CPU_AFFINITY -DUSE_ACCEPT4 
-DCONFIG_REGPARM=3 -DUSE_THREAD -DUSE_OPENSSL  -DUSE_PCRE -I/usr/local/include 
-DUSE_PCRE_JIT  -DCONFIG_HAPROXY_VERSION=\"1.8-rc1-901f75c\" 
-DCONFIG_HAPROXY_DATE=\"2017/10/31\" -c -o src/standard.o src/standard.c
src/server.c:875:14: warning: address of array 'check->desc' will always
  evaluate to 'true' [-Wpointer-bool-conversion]
if (check->desc)
~~  ~~~^~~~
src/server.c:914:14: warning: address of array 'check->desc' will always
  evaluate to 'true' [-Wpointer-bool-conversion]
if (check->desc)
~~  ~~~^~~~
src/server.c:958:14: warning: address of array 'check->desc' will always
  evaluate to 'true' [-Wpointer-bool-conversion]
if (check->desc)
~~  ~~~^~~~
src/cfgparse.c:5044:34: warning: implicit conversion from 'int' to 'char'
  changes value from 130 to -126 [-Wconstant-conversion]
  ...curproxy->check_req[5] = 130;
~ ^~~
src/cfgparse.c:5070:33: warning: implicit conversion from 'int' to 'char'
  changes value from 128 to -128 [-Wconstant-conversion]
  ...curproxy->check_req[5] = 128;
~ ^~~



cc -Iinclude -Iebtree -Wall  -O2 -pipe  -fstack-protector -fno-strict-aliasing  
-fno-strict-aliasing -Wdeclaration-after-statement -fwrapv  
-Wno-address-of-packed-member -Wno-null-dereference -Wno-unused-label   
-DFREEBSD_PORTS-DTPROXY -DCONFIG_HAP_CRYPT -DUSE_GETADDRINFO -DUSE_ZLIB  
-DENABLE_POLL -DENABLE_KQUEUE -DUSE_CPU_AFFINITY -DUSE_ACCEPT4 
-DCONFIG_REGPARM=3 -DUSE_THREAD -DUSE_OPENSSL  -DUSE_PCRE -I/usr/local/include 
-DUSE_PCRE_JIT  -DCONFIG_HAPROXY_VERSION=\"1.8-rc1-901f75c\" 
-DCONFIG_HAPROXY_DATE=\"2017/10/31\" -c -o src/sample.o src/sample.c
src/peers.c:255:16: warning: implicit conversion from 'int' to 'char' changes
  value from 133 to -123 [-Wconstant-conversion]
*msg_type = PEER_MSG_STKT_UPDATE_TIMED;
  ~ ^~
src/peers.c:257:16: warning: implicit conversion from 'int' to 'char' changes
  value from 134 to -122 [-Wconstant-conversion]
*msg_type = PEER_MSG_STKT_INCUPDATE_TIMED;
  ~ ^
src/peers.c:261:16: warning: implicit conversion from 'int' to 'char' changes
  value from 128 to -128 [-Wconstant-conversion]
*msg_type = PEER_MSG_STKT_UPDATE;
  ~ ^~~~
src/peers.c:263:16: warning: implicit conversion from 'int' to 'char' changes
  value from 129 to -127 [-Wconstant-conversion]
*msg_type = PEER_MSG_STKT_INCUPDATE;
  ~ ^~~
src/peers.c:450:11: warning: implicit conversion from 'int' to 'char' changes
  value from 130 to -126 [-Wconstant-conversion]
msg[1] = PEER_MSG_STKT_DEFINE;
   ~ ^~~~
src/peers.c:486:11: warning: implicit conversion from 'int' to 'char' changes
  value from 132 to -124 [-Wconstant-conversion]
msg[1] = PEER_MSG_STKT_ACK;
   ~ ^



cc -Iinclude -Iebtree -Wall  -O2 -pipe  -fstack-protector -fno-strict-aliasing  
-fno-strict-aliasing -Wdeclaration-after-statement -fwrapv  
-Wno-address-of-packed-member -Wno-null-dereference -Wno-unused-label   
-DFREEBSD_PORTS-DTPROXY -DCONFIG_HAP_CRYPT -DUSE_GETADDRINFO -DUSE_ZLIB  
-DENABLE_POLL -DENABLE_KQUEUE -DUSE_CPU_AFFINITY -DUSE_ACCEPT4 
-DCONFIG_REGPARM=3 -DUSE_THREAD -DUSE_OPENSSL  -DUSE_PCRE -I/usr/local/include 
-DUSE_PCRE_JIT  -DCONFIG_HAPROXY_VERSION=\"1.8-rc1-901f75c\" 
-DCONFIG_HAPROXY_DATE=\"2017/10/31\" -c -o src/freq_ctr.o src/freq_ctr.c
src/mux_h2.c:1734:15: warning: implicit conversion from enumeration type
  'enum h2_ss' to different enumeration type 'enum h2_cs'
  [-Wenum-conversion]
h2c->st0 = H2_SS_ERROR;
 ~ ^~~
src/mux_h2.c:2321:15: warning: implicit conversion from enumeration type
  'enum h2_ss' to different enumeration type 'enum h2_cs'
  [-Wenum-conversion]
h2c->st0 = H2_SS_ERROR;
 ~ ^~~
src/mux_h2.c:2435:15: warning: implicit conversion from enumeration type
  'enum h2_ss' to different enumeration type 'enum h2_cs'
  [-Wenum-conversion]
h2c->st0 = H2_SS_ERROR;

Re: upcoming 2.0 release: freebsd-11 seem to be broken ?

2019-05-29 Thread Dmitry Sivachenko



> On 26 May 2019, at 23:40, Илья Шипицин  wrote:
> 
> Hello,
> 
> I added freebsd-11 to cirrus-ci
> 
> https://cirrus-ci.com/task/5162023978008576
> 
> should we fix it before 2.0 release ?
> 


BTW, latest -dev release does not build at all on FreeBSD (I tried FreeBSD-12):

cc -Iinclude -Iebtree -Wall -Wextra  -O2 -pipe  -fstack-protector 
-fno-strict-aliasing   -fno-strict-aliasing -Wdeclaration-after-statement 
-fwrapv  -Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare 
-Wno-unused-parameter  -Wno-ignored-qualifiers  -Wno-missing-field-initializers 
-Wno-implicit-fallthrough   -Wtype-limits -Wshift-negative-value   
-Wnull-dereference   -DFREEBSD_PORTS -DUSE_KQUEUE-DUSE_PCRE 
-DUSE_PCRE_JIT   -DUSE_POLL  -DUSE_THREAD  -DUSE_REGPARM -DUSE_STATIC_PCRE  
-DUSE_TPROXY   -DUSE_LIBCRYPT   -DUSE_GETADDRINFO -DUSE_OPENSSL   -DUSE_ACCEPT4 
 -DUSE_ZLIB  -DUSE_CPU_AFFINITY  -DCONFIG_REGPARM=3 -I/usr/include 
-DUSE_PCRE -I/usr/local/include  -DCONFIG_HAPROXY_VERSION=\"2.0-dev4\" 
-DCONFIG_HAPROXY_DATE=\"2019/05/22\" -c -o src/wdt.o src/wdt.c
src/wdt.c:63:13: error: no member named 'si_int' in 'struct __siginfo'
thr = si->si_int;
  ~~  ^
src/wdt.c:105:7: error: use of undeclared identifier 'SI_TKILL'
case SI_TKILL:
 ^
2 errors generated.
gmake[2]: *** [Makefile:830: src/wdt.o] Error 1




Re: upcoming 2.0 release: freebsd-11 seem to be broken ?

2019-05-29 Thread Dmitry Sivachenko



> On 29 May 2019, at 13:31, Илья Шипицин  wrote:
> 
> 
> 
> ср, 29 мая 2019 г. в 15:25, Dmitry Sivachenko :
> 
> 
> > On 26 May 2019, at 23:40, Илья Шипицин  wrote:
> > 
> > Hello,
> > 
> > I added freebsd-11 to cirrus-ci
> > 
> > https://cirrus-ci.com/task/5162023978008576
> > 
> > should we fix it before 2.0 release ?
> > 
> 
> 
> BTW, latest -dev release does not build at all on FreeBSD (I tried 
> FreeBSD-12):
> 
> cc -Iinclude -Iebtree -Wall -Wextra  -O2 -pipe  -fstack-protector 
> -fno-strict-aliasing   -fno-strict-aliasing -Wdeclaration-after-statement 
> -fwrapv  -Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare 
> -Wno-unused-parameter  -Wno-ignored-qualifiers  
> -Wno-missing-field-initializers -Wno-implicit-fallthrough   -Wtype-limits 
> -Wshift-negative-value   -Wnull-dereference   -DFREEBSD_PORTS 
> -DUSE_KQUEUE-DUSE_PCRE -DUSE_PCRE_JIT   -DUSE_POLL  -DUSE_THREAD  
> -DUSE_REGPARM -DUSE_STATIC_PCRE  -DUSE_TPROXY   -DUSE_LIBCRYPT   
> -DUSE_GETADDRINFO -DUSE_OPENSSL   -DUSE_ACCEPT4  -DUSE_ZLIB  
> -DUSE_CPU_AFFINITY  -DCONFIG_REGPARM=3 -I/usr/include -DUSE_PCRE 
> -I/usr/local/include  -DCONFIG_HAPROXY_VERSION=\"2.0-dev4\" 
> -DCONFIG_HAPROXY_DATE=\"2019/05/22\" -c -o src/wdt.o src/wdt.c
> src/wdt.c:63:13: error: no member named 'si_int' in 'struct __siginfo'
> thr = si->si_int;
>   ~~  ^
> src/wdt.c:105:7: error: use of undeclared identifier 'SI_TKILL'
> case SI_TKILL:
>  ^
> 2 errors generated.
> gmake[2]: *** [Makefile:830: src/wdt.o] Error 1
> 
> it was fixed right after 2.0-dev4.
> please try current master
> 
>  


Ah, okay.

Thanks!


haproxy-1.8 build failure on FreeBSD/i386 (clang)

2018-02-10 Thread Dmitry Sivachenko
Hello,

haproxy-1.8 does not build on FreeBSD/i386 (clang):

src/proto_http.o: In function `http_perform_server_redirect':
src/proto_http.c:(.text+0x1209): undefined reference to `__atomic_fetch_add_8'
src/proto_http.o: In function `http_wait_for_request':
src/proto_http.c:(.text+0x275a): undefined reference to `__atomic_fetch_add_8'
src/proto_http.c:(.text+0x2e2c): undefined reference to `__atomic_fetch_add_8'
src/proto_http.c:(.text+0x2e48): undefined reference to `__atomic_fetch_add_8'
src/proto_http.c:(.text+0x30bb): undefined reference to `__atomic_fetch_add_8'
src/proto_http.o:src/proto_http.c:(.text+0x3184): more undefined references to 
`__atomic_fetch_add_8' follow
src/time.o: In function `tv_update_date':
src/time.c:(.text+0x631): undefined reference to `__atomic_compare_exchange_8'


In include/common/hathreads.h you have (line 107):
#if defined(__GNUC__) && (__GNUC__ < 4 || __GNUC__ == 4 && __GNUC_MINOR__ < 7) && !defined(__clang__)


Why do you exclude clang here?  If I remove !defined(__clang__), it builds fine 
but produces a number of similar warnings:


In file included from src/compression.c:29:
In file included from include/common/cfgparse.h:30:
include/proto/proxy.h:116:2: warning: variable '__new' is uninitialized when
  used within its own initialization [-Wuninitialized]
HA_ATOMIC_UPDATE_MAX(&fe->fe_counters.cps_max,
^~
include/common/hathreads.h:172:55: note: expanded from macro
  'HA_ATOMIC_UPDATE_MAX'
while (__old < __new && !HA_ATOMIC_CAS(val, &__old, __new)); \
 ~~~^~
include/common/hathreads.h:128:26: note: expanded from macro 'HA_ATOMIC_CAS'
typeof((new)) __new = (new);   \
  ~^~~


What is the proper fix for that?  Maybe removing !defined(__clang__)?

Thanks!


Re: haproxy-1.8 build failure on FreeBSD/i386 (clang)

2018-02-11 Thread Dmitry Sivachenko

> On 11 Feb 2018, at 13:49, Franco Fichtner  wrote:
> 
> Hi,
> 
>> On 11. Feb 2018, at 7:05 AM, Dmitry Sivachenko  wrote:
>> 
>> src/proto_http.c:(.text+0x1209): undefined reference to 
>> `__atomic_fetch_add_8'
> 
> I believe this is a problem with older Clang versions not defining 8-byte
> operations like __atomic_fetch_add_8 on 32-bit.  This particularly affects
> FreeBSD 11.1 on i386 with LLVM 4.0.0.



I get the same error on FreeBSD-current/i386 (clang 5.0.1):

/usr/bin/ld: error: undefined symbol: __atomic_fetch_add_8
>>> referenced by src/proto_http.c
>>>   src/proto_http.o:(http_perform_server_redirect)

/usr/bin/ld: error: undefined symbol: __atomic_fetch_add_8
>>> referenced by src/proto_http.c
>>>   src/proto_http.o:(http_wait_for_request)

<...>


Re: haproxy-1.8 build failure on FreeBSD/i386 (clang)

2018-02-14 Thread Dmitry Sivachenko

> On 12 Feb 2018, at 17:37, David CARLIER  wrote:
> 
> I think I'm the one behind this relatively recent change ... why not add
> the architecture to the condition? e.g. !defined(__clang__) &&
> !defined(__i386__) ... something like this...
> 
> Hope it is useful.
> 


What about this change?

--- work/haproxy-1.8.4/include/common/hathreads.h   2018-02-08 13:05:15.0 +
+++ /tmp/hathreads.h   2018-02-14 11:06:25.031422000 +
@@ -104,7 +104,7 @@ extern THREAD_LOCAL unsigned long tid_bi
/* TODO: thread: For now, we rely on GCC builtins but it could be a good idea to
 * have a header file regrouping all functions dealing with threads. */

-#if defined(__GNUC__) && (__GNUC__ < 4 || __GNUC__ == 4 && __GNUC_MINOR__ < 7) && !defined(__clang__)
+#if (defined(__GNUC__) && (__GNUC__ < 4 || __GNUC__ == 4 && __GNUC_MINOR__ < 7) && !defined(__clang__)) || (defined(__clang__) && defined(__i386__))
/* gcc < 4.7 */

#define HA_ATOMIC_ADD(val, i)__sync_add_and_fetch(val, i)




Re: haproxy-1.8 build failure on FreeBSD/i386 (clang)

2018-02-17 Thread Dmitry Sivachenko
> On 14 February 2018 at 11:09, Dmitry Sivachenko  wrote:
>> What about this change?
>> 
>> --- work/haproxy-1.8.4/include/common/hathreads.h   2018-02-08 13:05:15.0 +
>> +++ /tmp/hathreads.h   2018-02-14 11:06:25.031422000 +
>> @@ -104,7 +104,7 @@ extern THREAD_LOCAL unsigned long tid_bi
>> /* TODO: thread: For now, we rely on GCC builtins but it could be a good 
>> idea to
>>  * have a header file regrouping all functions dealing with threads. */
>> 
>> -#if defined(__GNUC__) && (__GNUC__ < 4 || __GNUC__ == 4 && __GNUC_MINOR__ < 7) && !defined(__clang__)
>> +#if (defined(__GNUC__) && (__GNUC__ < 4 || __GNUC__ == 4 && __GNUC_MINOR__ < 7) && !defined(__clang__)) || (defined(__clang__) && defined(__i386__))
>> /* gcc < 4.7 */
>> 
>> #define HA_ATOMIC_ADD(val, i)__sync_add_and_fetch(val, i)
>> 
>> 


> On 14 Feb 2018, at 14:13, David CARLIER  wrote:
> Whatever works best for you. Regards.


Well, I wonder if this is worth including in the haproxy source?


Re: Fix building without NPN support

2018-02-18 Thread Dmitry Sivachenko

> On 15 Feb 2018, at 17:58, Bernard Spil  wrote:
> 
> On 2018-02-15 15:03, Lukas Tribus wrote:
>> Hello,
>> On 15 February 2018 at 13:42, Bernard Spil  wrote:
>>> Hello HAProxy maintainers,
>>> https://github.com/Sp1l/haproxy/tree/20180215-fix-no-NPN
>>> Fix build with OpenSSL without NPN capability
>>> OpenSSL can be built without NEXTPROTONEG support by passing
>>> -no-npn to the configure script. This sets the
>>> OPENSSL_NO_NEXTPROTONEG flag in opensslconf.h
>>> Since NEXTPROTONEG is now considered deprecated, it is superseded
>>> by ALPN (Application-Layer Protocol Negotiation), HAProxy should allow
>>> building without NPN support.
>>> Git diff attached for your consideration.
>> Please don't remove npn config parsing (no ifdefs in "ssl_bind_kw
>> ssl_bind_kws" and "bind_kw_list bind_kws"). ssl_bind_parse_npn returns
>> a fatal configuration error when npn is configured and the library
>> doesn't support it.
>> "library does not support TLS NPN extension" is a better error message
>> than something like "npn is not a valid keyword".
>> Otherwise I agree, thanks for the patch!
>> cheers,
>> lukas
> 
> Hi Lukas,
> 
> Agree. Updated patch attached.
> 
> Bernard.


Is this patch good, Lukas?
Any plans to integrate it?




Re: haproxy-1.8 build failure on FreeBSD/i386 (clang)

2018-02-21 Thread Dmitry Sivachenko

> On 21 Feb 2018, at 16:33, David CARLIER  wrote:
> 
> Might be an irrelevant idea, but would it not be possible to detect it via a
> simple code test in the Makefile eventually?


Did you mean configure?  :)


Re: Fix building haproxy 1.8.5 with LibreSSL 2.6.4

2018-04-16 Thread Dmitry Sivachenko

> On 07 Apr 2018, at 17:38, Emmanuel Hocdet  wrote:
> 
> 
> Hi Andy
> 
>> Le 31 mars 2018 à 16:43, Andy Postnikov  a écrit :
>> 
>> I used to rework previous patch from Alpinelinux to build with latest stable 
>> libressl
>> But found no way to run tests with openssl which is primary library as I see
>> Is it possible to accept the patch upstream or get review on it? 
>> 
>> 
> 
> 
> @@ -2208,7 +2223,7 @@
> #else
>   cipher = SSL_CIPHER_find(ssl, cipher_suites);
> #endif
> - if (cipher && SSL_CIPHER_get_auth_nid(cipher) == 
> NID_auth_ecdsa) {
> + if (cipher && SSL_CIPHER_is_ECDSA(cipher)) {
>   has_ecdsa = 1;
>   break;
>   }
> 
> No, it’s a regression in lib compatibility.
> 


Hello,

it would be nice if you could come to an acceptable solution and finally merge 
LibreSSL support.
There were several attempts to propose LibreSSL support in the past, and every 
time the discussion died with no result.

Thanks :)





Re: [ANNOUNCE] haproxy-1.9-dev3

2018-09-29 Thread Dmitry Sivachenko


> On 29 Sep 2018, at 21:41, Willy Tarreau  wrote:
> 
> Ah, a small change is that we now build with -Wextra after having addressed
> all warnings reported up to gcc 7.3 and filtered a few useless ones.

Hello,

here are some warnings from clang version 6.0.0:

cc -Iinclude -Iebtree -Wall -Wextra  -O2 -pipe  -fstack-protector 
-fno-strict-aliasing  -fno-strict-aliasing -Wdeclaration-after-statement 
-fwrapv -fno-strict-overflow  -Wno-address-of-packed-member -Wno-unused-label 
-Wno-sign-compare -Wno-unused-parameter  -Wno-ignored-qualifiers  
-Wno-missing-field-initializers -Wno-implicit-fallthrough -Wtype-limits 
-Wshift-negative-value   -Wnull-dereference   -DFREEBSD_PORTS-DTPROXY 
-DCONFIG_HAP_CRYPT -DUSE_GETADDRINFO -DUSE_ZLIB  -DENABLE_POLL -DENABLE_KQUEUE 
-DUSE_CPU_AFFINITY -DUSE_ACCEPT4 -DCONFIG_REGPARM=3 -DUSE_THREAD -DUSE_OPENSSL 
-I/usr/include -DUSE_PCRE -I/usr/local/include -DUSE_PCRE_JIT  
-DCONFIG_HAPROXY_VERSION=\"1.9-dev3\" -DCONFIG_HAPROXY_DATE=\"2018/09/29\" -c 
-o src/cfgparse.o src/cfgparse.c
src/cfgparse.c:5131:34: warning: implicit conversion from 'int' to 'char'
changes value from 130 to -126 [-Wconstant-conversion]
        curproxy->check_req[5] = 130;
                               ~ ^~~
src/cfgparse.c:5157:33: warning: implicit conversion from 'int' to 'char'
changes value from 128 to -128 [-Wconstant-conversion]
        curproxy->check_req[5] = 128;
                               ~ ^~~


cc -Iinclude -Iebtree -Wall -Wextra  -O2 -pipe  -fstack-protector 
-fno-strict-aliasing  -fno-strict-aliasing -Wdeclaration-after-statement 
-fwrapv -fno-strict-overflow  -Wno-address-of-packed-member -Wno-unused-label 
-Wno-sign-compare -Wno-unused-parameter  -Wno-ignored-qualifiers  
-Wno-missing-field-initializers -Wno-implicit-fallthrough -Wtype-limits 
-Wshift-negative-value   -Wnull-dereference   -DFREEBSD_PORTS-DTPROXY 
-DCONFIG_HAP_CRYPT -DUSE_GETADDRINFO -DUSE_ZLIB  -DENABLE_POLL -DENABLE_KQUEUE 
-DUSE_CPU_AFFINITY -DUSE_ACCEPT4 -DCONFIG_REGPARM=3 -DUSE_THREAD -DUSE_OPENSSL 
-I/usr/include -DUSE_PCRE -I/usr/local/include -DUSE_PCRE_JIT  
-DCONFIG_HAPROXY_VERSION=\"1.9-dev3\" -DCONFIG_HAPROXY_DATE=\"2018/09/29\" -c 
-o src/stick_table.o src/stick_table.c
src/stick_table.c:2018:14: warning: equality comparison with extraneous 
parentheses [-Wparentheses-equality]
if ((stkctr == &tmpstkctr))
 ~~~^
src/stick_table.c:2018:14: note: remove extraneous parentheses around the 
comparison to silence this warning
if ((stkctr == &tmpstkctr))
~   ^~
src/stick_table.c:2018:14: note: use '=' to turn this equality comparison into 
an assignment
if ((stkctr == &tmpstkctr))
^~


cc -Iinclude -Iebtree -Wall -Wextra  -O2 -pipe  -fstack-protector 
-fno-strict-aliasing  -fno-strict-aliasing -Wdeclaration-after-statement 
-fwrapv -fno-strict-overflow  -Wno-address-of-packed-member -Wno-unused-label 
-Wno-sign-compare -Wno-unused-parameter  -Wno-ignored-qualifiers  
-Wno-missing-field-initializers -Wno-implicit-fallthrough -Wtype-limits 
-Wshift-negative-value   -Wnull-dereference   -DFREEBSD_PORTS-DTPROXY 
-DCONFIG_HAP_CRYPT -DUSE_GETADDRINFO -DUSE_ZLIB  -DENABLE_POLL -DENABLE_KQUEUE 
-DUSE_CPU_AFFINITY -DUSE_ACCEPT4 -DCONFIG_REGPARM=3 -DUSE_THREAD -DUSE_OPENSSL 
-I/usr/include -DUSE_PCRE -I/usr/local/include -DUSE_PCRE_JIT  
-DCONFIG_HAPROXY_VERSION=\"1.9-dev3\" -DCONFIG_HAPROXY_DATE=\"2018/09/29\" -c 
-o src/mux_h2.o src/mux_h2.c
src/mux_h2.c:3532:195: warning: implicit conversion from enumeration type 'enum 
h1m_state' to different enumeration type 'enum h1_state' [-Wenum-conversion]
  ...= %d bytes out (%u in, st=%s, ep=%u, es=%s, h2cws=%d h2sws=%d) data=%u", 
h2c->st0, h2s->id, size+9, (unsigned int)total, h1_msg_state_str(h1m->state), 
h1m->err_pos, h1_ms...

   ~^


cc -Iinclude -Iebtree -Wall -Wextra  -O2 -pipe  -fstack-protector 
-fno-strict-aliasing  -fno-strict-aliasing -Wdeclaration-after-statement 
-fwrapv -fno-strict-overflow  -Wno-address-of-packed-member -Wno-unused-label 
-Wno-sign-compare -Wno-unused-parameter  -Wno-ignored-qualifiers  
-Wno-missing-field-initializers -Wno-implicit-fallthrough -Wtype-limits 
-Wshift-negative-value   -Wnull-dereference   -DFREEBSD_PORTS-DTPROXY 
-DCONFIG_HAP_CRYPT -DUSE_GETADDRINFO -DUSE_ZLIB  -DENABLE_POLL -DENABLE_KQUEUE 
-DUSE_CPU_AFFINITY -DUSE_ACCEPT4 -DCONFIG_REGPARM=3 -DUSE_THREAD -DUSE_OPENSSL 
-I/usr/include -DUSE_PCRE -I/usr/local/include -DUSE_PCRE_JIT  
-DCONFIG_HAPROXY_VERSION=\"1.9-dev3\" -DCONFIG_HAPROXY_DATE=\"2018/09/29\" -c 
-o src/peers.o src/peers.c
src/peers.c:253:16: 

Re: Backend Server UP/Down Debugging?

2009-08-31 Thread Dmitry Sivachenko
On Sun, Aug 30, 2009 at 04:58:16PM +0200, Krzysztof Oledzki wrote:
> 
> 
> On Sun, 30 Aug 2009, Willy Tarreau wrote:
> 
> > On Sun, Aug 30, 2009 at 04:18:58PM +0200, Krzysztof Oledzki wrote:
> >>> I think you wanted to put HCHK_STATUS_L57OK here, not OKD since we're
> >>> in the 2xx/3xx state and not 404 disable. Or maybe I misunderstood the
> >>> OKD status ?
> >>
> >> OKD means we have Layer5-7 data avalible, like for example http code.
> >> Several times I found that some of my servers were misconfigured and were
> >> returning a 3xx code redirecting to a page-not-found webpage instead of
> >> doing a proper healt-check, so I think it is good to know what was the
> >> response, even if it was OK (2xx/3xx).
> >
> > Ah OK that makes sense now. It's a good idea to note that data is
> > available, for later when we want to capture it whole. Indeed, I'd
> > like to reuse the same capture principle as is used in proxies for
> > errors. It does not take *that* much space and is so much useful
> > already that we ought to implement it soon there too !
> 
> OK, I found where your confusion comes from - the diff was incomplete, 
> there was no include/types/checks.h file that explains how 
> HCHK_STATUS_L57OK differs from HCHK_STATUS_L57OKD and also makes it 
> possible to compile the code. :(
> 
> Dmitry, could you please use this patch instead? ;)
> 

Okay, thank you.



redispatch optimization

2009-08-31 Thread Dmitry Sivachenko
Hello!

If we are running with 'option redispatch' and the
'retries' parameter set to some positive value,
the behaviour is as follows:

###
In order to avoid immediate reconnections to a server which is restarting,
  a turn-around timer of 1 second is applied before a retry occurs.

When "option redispatch" is set, the last retry may be performed on another
server even if a cookie references a different server.
###

While it makes sense to wait for some time (1 second)
before attempting another connection to the same
server, there is no reason to wait 1 second before
attempting the last connection to another server
(with option redispatch).  It is just a waste of one
second.

Please consider the following patch to attempt the last try
immediately.  (If our main server does not respond, there is
no reason to assume another one can't answer right now.)

PS: another important suggestion is to make that delay a tunable
parameter (like timeout.connect, etc.), rather than hardcoding
1000 ms in the code.

Thanks in advance.


--- work/haproxy-1.4-dev2/src/session.c 2009-08-10 00:57:09.0 +0400
+++ /tmp/session.c  2009-08-31 14:28:26.0 +0400
@@ -306,7 +306,11 @@ int sess_update_st_cer(struct session *s
si->err_type = SI_ET_CONN_ERR;
 
si->state = SI_ST_TAR;
+   if (s->srv && s->conn_retries == 0 && s->be->options & PR_O_REDISP) {
+   si->exp = tick_add(now_ms, MS_TO_TICKS(0));
+   } else {
si->exp = tick_add(now_ms, MS_TO_TICKS(1000));
+   }
return 0;
}
return 0;



Re: redispatch optimization

2009-08-31 Thread Dmitry Sivachenko
On Mon, Aug 31, 2009 at 03:39:35PM +0200, Krzysztof Oledzki wrote:
> > PS: another important suggestion is to make that delay tunable
> > parameter (like timeout.connect, etc), rather than hardcode
> > 1000ms in code.
> 
> Why would you like to change the value? I found 1s very well chosen.

In our environment we have programs querying the balancer and expecting
results to be returned very fast (say, in 0.5 seconds maximum).

So I want to ask one server in the backend and, if it is not responding,
re-ask another one immediately (or even the same one once again, assuming
that just the first TCP setup packet was lost and the server is
running normally).  So I use a low connect timeout (say, 30 ms) and if the
connection fails I retry the same one once more.

After all, we can keep the 1 second default and allow customizing that
value when needed.


> 
> 
> 
> > --- work/haproxy-1.4-dev2/src/session.c 2009-08-10 00:57:09.0 +0400
> > +++ /tmp/session.c  2009-08-31 14:28:26.0 +0400
> > @@ -306,7 +306,11 @@ int sess_update_st_cer(struct session *s
> >si->err_type = SI_ET_CONN_ERR;
> >
> >si->state = SI_ST_TAR;
> > +   if (s->srv && s->conn_retries == 0 && s->be->options & PR_O_REDISP) 
> > {
> > +   si->exp = tick_add(now_ms, MS_TO_TICKS(0));
> > +   } else {
> >si->exp = tick_add(now_ms, MS_TO_TICKS(1000));
> > +   }
> >return 0;
> >}
> >return 0;
> 
> 
> There is no value in adding 0ms, also SI_ST_TAR should be moved inside the 
> condition I think, not sure if it is enough.
> 

Okay, it is probably an ugly implementation (though it works), because I
still don't completely understand the code.
Feel free to re-implement it in a better way, just grab the idea.

Thanks.



Re: [PATCH 1/2] [MEDIUM] Collect & show information about last health check

2009-09-07 Thread Dmitry Sivachenko
On Sat, Sep 05, 2009 at 07:00:05PM +0200, Willy Tarreau wrote:
> However, I found that it was hard to understand the status codes in
> the HTML stats page. Some people are already complaining about columns
> they don't trivially understand, but here I think that status codes
> are close to cryptic. Also, while it is not *that* hard to tell
> which one means what when you can compare all of them, they must be
> unambiguous when found individually.
> 

Please consider "title" attribute, which may be used for most HTML elements.
Browser displays it as popup hint when you put mouse cursor over that element.

It consumes zero space on display, but can provide helpful
information when needed.

Example:

<table>
  <tr>
    <td title="aaa title">aaa</td>
    <td title="bbb title">bbb</td>
  </tr>
</table>

Put mouse cursor on "aaa" table cell and you will see "aaa title" hint.



Re: [PATCH] [MINOR] CSS & HTML fun

2009-10-13 Thread Dmitry Sivachenko
On Mon, Oct 12, 2009 at 11:39:54PM +0200, Krzysztof Piotr Oledzki wrote:
> >From 6fc49b084ad0f4513c36418dfac1cf1046af66da Mon Sep 17 00:00:00 2001
> From: Krzysztof Piotr Oledzki 
> Date: Mon, 12 Oct 2009 23:09:08 +0200
> Subject: [MINOR] CSS & HTML fun
> 
> This patch makes stats page about 30% smaller and
> "CSS 2.1" + "HTML 4.01 Transitional" compliant.
> 
> There should be no visible differences.
> 
> Changes:
>  - add missing 

End tag for UL is optional according to 
http://www.w3.org/TR/html401/struct/lists.html#edef-UL



Re: [PATCH] [MINOR] CSS & HTML fun

2009-10-13 Thread Dmitry Sivachenko
On Tue, Oct 13, 2009 at 02:16:12PM +0200, Benedikt Fraunhofer wrote:
> Hello,
> 
> 2009/10/13 Dmitry Sivachenko :
> 
> > End tag for UL is optional according to
> 
> really? Something new to me :)
> 

OMG, sorry, I am blind.

Forget about that.



Re: [ANNOUNCE] haproxy 1.4-dev5 with keep-alive :-)

2010-01-11 Thread Dmitry Sivachenko
On Mon, Jan 04, 2010 at 12:13:49AM +0100, Willy Tarreau wrote:
> Hi all,
> 
> Yes that's it, it's not a joke !
> 
>  -- Keep-alive support is now functional on the client side. --
> 

Hello!

Are there any plans to implement server-side HTTP keep-alive?

I mean I want client connecting to haproxy NOT to use keep-alive,
but to utilize keep-alive between haproxy and backend servers.

Thanks!



Re: [ANNOUNCE] haproxy 1.4-dev5 with keep-alive :-)

2010-01-12 Thread Dmitry Sivachenko
On Mon, Jan 11, 2010 at 09:08:03PM +0100, Willy Tarreau wrote:
> 
> > I mean I want client connecting to haproxy NOT to use keep-alive,
> > but to utilize keep-alive between haproxy and backend servers.
> 
> Hmmm that's different. There are issues with the HTTP protocol
> itself making this extremely difficult. When you're keeping a
> connection alive in order to send a second request, you never
> know if the server will suddenly close or not. If it does, then
> the client must retransmit the request because only the client
> knows if it takes a risk to resend or not. An intermediate
> equipemnt is not allowed to do so because it might send two
> orders for one request.
> 
> The problem is, the clients are already aware of this and happily
> replay a request after the first one in case of unexpected session
> termination. But they never do this if the session terminates during
> the first request.
> 
> So by doing what you describe, your clients would regularly get some
> random server errors when a server closes a connection it does not
> want to sustain anymore before haproxy has a chance to detect it.
> 
> Another issue is that there are (still) some buggy applications which
> believe that all the requests from a same session were initiated by
> the same client. So such a feature must be used with extreme care.
> 

Imagine the following scenario: we have a large number of requests from
different clients.  Each client sends requests rarely, so there is no need for
keep-alive between the client and haproxy.

haproxy forwards requests to a number of backends, and each request fetches a
rather small amount of data.  Now we have a rather high packet rate on the
proxy server, and maintaining keep-alive between haproxy and the backends
should reduce it greatly (eliminating the connection setup/shutdown sequence;
all data fits into one or two data packets).

This is like a (dynamic) pool of connections from haproxy to the backends:
each request from a client goes via one of the already existing connections to
a backend (if no connection is available, a new one is established).

I understand your arguments about edge cases with broken clients etc., but
this is what the config file is for :)  You can enable/disable features
depending on the situation.

What do you think about it?
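The pooling idea described above can be sketched in Python. This is a toy illustration only, not haproxy code; `connect` is a hypothetical caller-supplied factory standing in for a real TCP connect.

```python
class ConnectionPool:
    """Toy sketch of a per-backend connection pool: reuse an idle
    connection when one exists, otherwise establish a new one."""

    def __init__(self, connect):
        self._connect = connect        # hypothetical factory, e.g. TCP connect
        self._idle = {}                # backend address -> list of idle conns

    def acquire(self, backend):
        # Reuse an existing idle connection to this backend when possible...
        idle = self._idle.get(backend)
        if idle:
            return idle.pop()
        # ...otherwise establish a new one.
        return self._connect(backend)

    def release(self, backend, conn):
        # Return the connection for reuse instead of closing it,
        # avoiding the setup/shutdown packets on the next request.
        self._idle.setdefault(backend, []).append(conn)
```

Releasing a connection back to the pool is what eliminates the per-request connection setup/shutdown traffic mentioned above.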



haproxy-1.4.3 and keep-alive status

2010-04-08 Thread Dmitry Sivachenko
Hello!

I am testing version 1.4.3 of haproxy and I am getting a bit confused about
the status of HTTP keep-alive support.

1) Is server-side HTTP keep-alive supported at all?
The existence of "option http-server-close" makes me believe that it is
enabled unless that option is used.

2) Is it true that client-side HTTP keep-alive is also enabled by default
unless "option httpclose" is used?

3) I have a sample configuration running with "option http-server-close" set
and without "option httpclose".

I observe the following at haproxy side:

Request comes:

GET / HTTP/1.1
Host: host.pp.ru
User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.2) 
Gecko/20100326 Firefox/3.6.2
Accept: */*
Accept-Language: en-us,ru;q=0.7,en;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive

So the client requests keep-alive.  I suppose that haproxy should send the
request to the backend with Connection: close (because http-server-close is
set) but send the response to the client with keep-alive enabled.  But that
does not happen:

HTTP/1.1 200 OK
Date: Thu, 08 Apr 2010 08:41:52 GMT
Expires: Thu, 08 Apr 2010 08:42:52 GMT
Content-Type: text/javascript; charset=utf-8
Connection: Close

jsonp1270715696732(["a", ["ab", "and", "a2", "ac", "are", "a a", "ad", "a b", 
"a1", "about"]])


Why does haproxy respond to the client with Connection: Close?

Thanks in advance!



Re: haproxy-1.4.3 and keep-alive status

2010-04-26 Thread Dmitry Sivachenko
On Thu, Apr 08, 2010 at 11:58:25AM +0200, Willy Tarreau wrote:
> > 3) I have sample configuration running with option http-server-close and 
> > without option httpclose set.
> > 
> > I observe the following at haproxy side:
> > 
> > Request comes:
> > 
> > GET / HTTP/1.1
> > Host: host.pp.ru
> > User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.2) 
> > Gecko/20100326 Firefox/3.6.2
> > Accept: */*
> > Accept-Language: en-us,ru;q=0.7,en;q=0.3
> > Accept-Encoding: gzip,deflate
> > Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
> > Keep-Alive: 115
> > Connection: keep-alive
> > 
> > So client requests keep-alive.  I suppose that haproxy should send request 
> > to 
> > backend with Connection: close (because http-server-close is set) but
> > send response to client with keep-alive enabled.
> 
> Exactly.
> 
> > But that does not happen:
> > 
> > HTTP/1.1 200 OK
> > Date: Thu, 08 Apr 2010 08:41:52 GMT
> > Expires: Thu, 08 Apr 2010 08:42:52 GMT
> > Content-Type: text/javascript; charset=utf-8
> > Connection: Close
> > 
> > jsonp1270715696732(["a", ["ab", "and", "a2", "ac", "are", "a a", "ad", "a 
> > b", "a1", "about"]])
> > 
> > 
> > Why haproxy responds to client with Connection: Close?
> 
> Because the server did not provide information required to make the keep-alive
> possible. In your case, there is no "content-length" nor any 
> "transfer-encoding"
> header, so the only way the client has to find the response end, is the 
> closure
> of the connection.
> 
> An exactly similar issue was identified on Tomcat and Jetty. They did not use
> transfer-encoding when the client announces it intends to close. The Tomcat
> team was cooperative and recently agreed to improve that. In the mean time,
> we have released haproxy 1.4.4 which includes a workaround for this : combine
> "option http-pretend-keepalive" with "option http-server-close" and your 
> server
> will believe you're doing keep-alive and may try to send a more appropriate
> response. At least this works with Jetty and Tomcat, though there is nothing
> mandatory in this area.
> 

Hello!

Here is a sample HTTP session with my (hand-made) server.

1) GET / HTTP/1.1
Host: hots.pp.ru
User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.3) Gecko/2010041
4 Firefox/3.6.3
Accept: text/javascript, application/javascript, */*
Accept-Language: en-us,ru;q=0.7,en;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive

2) 
HTTP/1.1 200 OK
Date: Mon, 26 Apr 2010 11:34:19 GMT
Expires: Mon, 26 Apr 2010 11:35:19 GMT
Content-Type: text/javascript; charset=utf-8
Connection: Keep-Alive
Transfer-Encoding: chunked



tcpdump analysis of several subsequent requests shows that HTTP keep-alive works
in my case.

When I put that server behind haproxy (version 1.4.4) I see the following:


1) GET  HTTP/1.1
Host: host.pp.ru
User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.3) Gecko/2010041
4 Firefox/3.6.3
Accept: text/javascript, application/javascript, */*
Accept-Language: en-us,ru;q=0.7,en;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive

2) 
HTTP/1.1 200 OK
Date: Mon, 26 Apr 2010 11:45:01 GMT
Expires: Mon, 26 Apr 2010 11:46:01 GMT
Content-Type: text/javascript; charset=utf-8
Connection: Close



I have
mode http
option http-server-close
option http-pretend-keepalive

in my config (tried both with and without http-pretend-keepalive).

Can you please explain in more detail what the server does wrong and why
haproxy adds the Connection: Close header
(and why Firefox successfully uses HTTP keep-alive with the same server
without haproxy)?

Thanks in advance!



Re: haproxy-1.4.3 and keep-alive status

2010-04-26 Thread Dmitry Sivachenko
On Mon, Apr 26, 2010 at 03:41:03PM +0200, Cyril Bonté wrote:
> Hi Dmitry,
> 
> Le lundi 26 avril 2010 13:57:12, Dmitry Sivachenko a écrit :
> > When I put that server behind haproxy (version 1.4.4) I see the following:
> > 
> > 
> > 1) GET  HTTP/1.1
> > Host: host.pp.ru
> > User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.3) 
> > Gecko/2010041
> > 4 Firefox/3.6.3
> > Accept: text/javascript, application/javascript, */*
> > Accept-Language: en-us,ru;q=0.7,en;q=0.3
> > Accept-Encoding: gzip,deflate
> > Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
> > Keep-Alive: 115
> > Connection: keep-alive
> > 
> > 2) 
> > HTTP/1.1 200 OK
> > Date: Mon, 26 Apr 2010 11:45:01 GMT
> > Expires: Mon, 26 Apr 2010 11:46:01 GMT
> > Content-Type: text/javascript; charset=utf-8
> > Connection: Close
> > 
> > 
> > 
> > I have
> > mode http
> > option http-server-close
> > option http-pretend-keepalive
> > 
> > in my config (tried both with and without http-pretend-keepalive).
> > 
> > Can you please explain in more detail what server makes wrong and why 
> > haproxy
> > adds Connection: Close header
> > (and why Firefox successfully uses HTTP keep-alive with the same server 
> > without
> > haproxy).
> 
> HAProxy can't accept the connection to be keep-alived as it doesn't provide a 
> Content-Length (nor the communication allows chunked transfer).
> Try to add a Content-Length header equal to your data length and Keep-Alive 
> should be accepted.
> 

Okay, I'll try, thanks for the suggestion.

By the way: why does haproxy behave that way?  What is the technical problem
with allowing HTTP keep-alive with chunked transfer?  AFAIK the last chunk is
specially formed to indicate the end of data, so haproxy should see the end of
the transmitted data even without a Content-Length header.
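For reference, the chunked framing in question can be sketched in Python. This is a toy encoder illustrating the RFC 2616 wire format, not haproxy code: each chunk is prefixed with its hex size, and a zero-size chunk marks the end of the body, which is how a receiver can find the response end without a Content-Length header.

```python
def chunk_encode(parts):
    """Encode a list of byte strings as an HTTP/1.1 chunked body."""
    out = b""
    for part in parts:
        # Each chunk: hex size, CRLF, data, CRLF.
        out += b"%x\r\n" % len(part) + part + b"\r\n"
    # Last chunk: size 0, then a final CRLF, marking the end of the body.
    return out + b"0\r\n\r\n"
```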



X-Forwarded-For and option http-server-close

2011-03-21 Thread Dmitry Sivachenko
Hello!

We are using haproxy version 1.4.
We are trying to set up an HTTP mode backend with support for HTTP keep-alive
between clients and haproxy.
For that reason we add "option http-server-close" to the backend configuration.
But we also want to pass the real client IP address in the X-Forwarded-For
header.  For that reason we add "option forwardfor" to the backend
configuration.

What we observe is that these two options do not work together.
If we enable HTTP keep-alive, haproxy stops adding the X-Forwarded-For header.

On the other hand, if we set up nginx as the proxy server instead of haproxy,
we get both HTTP keep-alive and a correct X-Forwarded-For header.

What are the reasons for not allowing both features to work together?



X-Forwarded-For header

2011-03-24 Thread Dmitry Sivachenko
Hello!

With "option forwardfor", haproxy adds an X-Forwarded-For header at the end
of the header list.

But according to Wikipedia:
http://en.wikipedia.org/wiki/X-Forwarded-For

and other HTTP proxies (say, nginx),
there is a standard format to specify several intermediate IP addresses:
X-Forwarded-For: client1, proxy1, proxy2

Why don't you use this standard procedure to add the client IP?
(I mean, if X-Forwarded-For already exists in the request headers, append the
client IP to its value and do not create another header with the same name.)

Thanks!
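The comma-folding convention described above can be sketched in Python. This is an illustration only (headers modeled as a list of name/value pairs); real proxy header handling is more involved.

```python
def fold_xff(headers, client_ip):
    """Append the client address to an existing X-Forwarded-For header
    instead of emitting a second header with the same name."""
    for i, (name, value) in enumerate(headers):
        if name.lower() == "x-forwarded-for":
            # Fold into the existing header, comma-delimited.
            headers[i] = (name, value + ", " + client_ip)
            return headers
    # No existing header: add a fresh one.
    headers.append(("X-Forwarded-For", client_ip))
    return headers
```

As Willy notes in his reply, the folded single-header form and multiple header lines are semantically equivalent per the HTTP specification.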



Re: X-Forwarded-For header

2011-03-25 Thread Dmitry Sivachenko
On Thu, Mar 24, 2011 at 09:12:46PM +0100, Willy Tarreau wrote:
> Hello Dmitry,
> 
> On Thu, Mar 24, 2011 at 05:28:13PM +0300, Dmitry Sivachenko wrote:
> > Hello!
> > 
> > With "option forwardfor", haproxy adds X-Forwarded-For header at the end
> > of header list.
> > 
> > But according to wikipedia:
> > http://en.wikipedia.org/wiki/X-Forwarded-For
> > 
> > and other HTTP proxies (say, nginx)
> > there is standard format to specify several intermediate IP addresses:
> > X-Forwarded-For: client1, proxy1, proxy2
> > 
> > Why don't you use these standard procedure to add client IP?
> 
> Because these are not the standards. Standards are defined by RFCs, not
> by Wikipedia :-)


I meant more like a "de facto standard", sorry for the confusion.
The format with a single comma-delimited X-Forwarded-For is just more common.


> 
> We already got this question anyway. The short answer is that both forms
> are strictly equivalent, and any intermediary is free to fold multiple
> header lines into a single one with values delimited by commas. Your
> application will not notice the difference (otherwise it's utterly
> broken and might possibly be sensible to many vulnerabilities such as
> request smugling attacks).
> 


Okay, thanks for the explanation.



haproxy-1.4.20 crashes

2012-05-15 Thread Dmitry Sivachenko

Hello!

I am using haproxy-1.4.20 on FreeBSD-9.
It was running without any problems for a long time, but after recent
changes in the configuration it began to crash from time to time.


GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain 
conditions.

Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "amd64-marcel-freebsd"...
Core was generated by `haproxy'.
Program terminated with signal 10, Bus error.
Reading symbols from /lib/libcrypt.so.5...done.
Loaded symbols for /lib/libcrypt.so.5
Reading symbols from /lib/libc.so.7...done.
Loaded symbols for /lib/libc.so.7
Reading symbols from /libexec/ld-elf.so.1...done.
Loaded symbols for /libexec/ld-elf.so.1
#0  0x00455183 in tcpv4_connect_server (si=0x8062132e8, be=0x8014eb000,
    srv=0x8013d9400, srv_addr=0x806213470, from_addr=0x806213480)
    at src/proto_tcp.c:422
422 EV_FD_SET(fd, DIR_WR);  /* for connect status */
(gdb) bt
#0  0x00455183 in tcpv4_connect_server (si=0x8062132e8, be=0x8014eb000,
    srv=0x8013d9400, srv_addr=0x806213470, from_addr=0x806213480)
    at src/proto_tcp.c:422
#1  0x004449f7 in connect_server (s=0x806213200) at src/backend.c:921
#2  0x00457d98 in sess_update_stream_int (s=0x806213200, si=0x8062132e8)
    at src/session.c:374
#3  0x0045a5e7 in process_session (t=0x8057fcb40) at src/session.c:1403
#4  0x0040b1e3 in process_runnable_tasks (next=0x7fffdaac)
    at src/task.c:234
#5  0x004047a3 in run_poll_loop () at src/haproxy.c:983
#6  0x00404f61 in main (argc=6, argv=0x7fffdb88) at src/haproxy.c:1264
(gdb)

Is it a known issue?
If not, I can provide more information (config, core image, etc).

Thanks in advance!



Dump of invalid requests

2012-10-20 Thread Dmitry Sivachenko

Hello!

I am using haproxy-1.4.22.
Now I can see the last invalid request haproxy rejected with Bad Request 
return code with the following command:

$ echo "show errors" | socat stdio unix-connect:/tmp/haproxy.stats

1) The request seems to be truncated at the 16k boundary.  With very large 
GET requests I do not see the tail of the URL string and (more importantly) 
the following HTTP headers.  I am running with tune.bufsize=32768.  Is it 
possible to tune haproxy to dump the whole request?


2) The command above shows *the last* rejected request.  In some cases 
this complicates debugging; it would be convenient to see dumps of all 
rejected requests for later analysis.  Is it possible to enable logging 
of these dumps to a file or syslog?


Thanks in advance!



Re: Dump of invalid requests

2012-10-20 Thread Dmitry Sivachenko

On 10/20/12 11:49 PM, Willy Tarreau wrote:

Hello Dmitry,

On Sat, Oct 20, 2012 at 10:13:47PM +0400, Dmitry Sivachenko wrote:

Hello!

I am using haproxy-1.4.22.
Now I can see the last invalid request haproxy rejected with Bad Request
return code with the following command:
$ echo "show errors" | socat stdio unix-connect:/tmp/haproxy.stats

1) The request seems to be truncated at 16k boundary.  With very large
GET requests I do not see the tail of URL string and (more important)
the following HTTP headers.  I am running with tune.bufsize=32768. Is it
possible to tune haproxy to dump the whole request?

It always dumps the whole request. What you're describing is a request
too large to fit in a buffer. It is invalid by definition since haproxy
cannot parse it fully. If you absolutely need to pass that large a
request, you can increase tune.bufsize and limit tune.maxrewrite to
1024, it will be more than enough. But be careful, a website running
with that large requests will 1) not be accessible by everyone for the
same reason (some proxies will block the request) and 2) will be
extremely slow for users with a limited uplink or via 3G/GPRS.


As I wrote in my original e-mail, I use tune.bufsize=32768.  I did not 
tweak tune.maxrewrite though.
I will try to decrease maxrewrite to 1024 and see if 'show errors' will 
dump more than 16k of the URL.

I don't fully understand its meaning though.
If I need to match up to 25k-sized requests using the reqrep directive, will
tune.bufsize=32768 and tune.maxrewrite=1024 be enough for that?

I am aware of problems 1) and 2) but we have some special service here 
at work which requires that large URLs.



2) The command above shows *the last* rejected request.  In some cases
it complicates debugging, it would be convenient to see dumps of all
rejected requests for later analysis.  Is it possible to enable logging
of these dumps to a file or syslog?

No, because haproxy does not access any file once started, and syslog
normally does not support messages larger than 1024 chars.

What is problematic with only the last request ? Can't you connect
more often to dump it ? There is an event number in the dump for that
exact purpose, that way you know if you have already seen it or not.




The problem is that you never know when the next invalid request will arrive,
so it is possible to miss one no matter how often you poll for new errors.


Since most requests should fit even into a 1024-byte buffer, it would be nice
to dump at least the first 1024 bytes via syslog for debugging.
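The polling described in this exchange could be scripted around the stats socket; below is a rough shell sketch. The file paths and the dedup-by-content approach are my own assumptions (haproxy's dump carries an event number that could be compared instead).

```shell
#!/bin/sh
# archive_if_new DUMP STATEFILE LOGFILE
# Append DUMP to LOGFILE only if it differs from the previously seen dump,
# so repeated polls of an unchanged error are not logged twice.
archive_if_new() {
    cur=$1; state=$2; log=$3
    last=$(cat "$state" 2>/dev/null)
    if [ -n "$cur" ] && [ "$cur" != "$last" ]; then
        printf '%s\n' "$cur" >> "$log"
        printf '%s' "$cur" > "$state"
    fi
}

# A cron job would feed it the live dump, e.g. (socket path from this thread):
# archive_if_new "$(echo 'show errors' | socat stdio unix-connect:/tmp/haproxy.stats)" \
#     /var/tmp/last-error /var/log/haproxy-errors.dump
```

Polling this way still cannot guarantee catching every rejected request, which is the limitation the message above points out.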






Re: Dump of invalid requests

2012-10-21 Thread Dmitry Sivachenko

On 10/21/12 12:06 AM, Willy Tarreau wrote:

On Sun, Oct 21, 2012 at 12:01:10AM +0400, Dmitry Sivachenko wrote:

As I wrote in my original e-mail, I use tune.bufsize=32768.  I did not
tweak tune.maxrewrite though.
I will try to decrease maxrewrite to 1024 and see if 'show errors' will
dump more that 16k of URL.

I don't fully understand it's meaning though.
If I need to match up to 25k size requests using reqrep directive, will
tune.bufsize=32768 and tune.maxrewrite=1024 be enough for that?

Yes. The max request that can be read at once is bufsize-maxrewrite. And
since maxrewrite defaults to bufsize/2, I think you were limited to 16k
which is in the same range as your request.
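The sizing rule stated above (the largest parsable request is tune.bufsize - tune.maxrewrite) translates into a global section like this sketch, using the values discussed in this thread:

```
global
    # Requests up to bufsize - maxrewrite = 32768 - 1024 = 31744 bytes
    # can be parsed; larger ones are rejected with HTTP 400 (Bad Request).
    tune.bufsize    32768
    tune.maxrewrite 1024
```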




Please consider the following patch for configuration.txt to clarify the
meaning of bufsize, maxrewrite and the size of HTTP request which can be
processed.

Thanks.

--- configuration.txt.orig  2012-08-14 11:09:31.0 +0400
+++ configuration.txt   2012-10-21 18:08:01.0 +0400
@@ -683,6 +683,8 @@
   statistics, and values larger than default size will increase memory usage,
   possibly causing the system to run out of memory. At least the global maxconn
   parameter should be decreased by the same factor as this one is increased.
+  If HTTP request is larger than tune.bufsize - tune.maxrewrite, haproxy will
+  return HTTP 400 (Bad Request) error.

 tune.chksize <number>
   Sets the check buffer size to this size (in bytes). Higher values may help

@@ -4346,8 +4348,8 @@
  # replace "www.mydomain.com" with "www" in the host name.
  reqirep ^Host:\ www.mydomain.com   Host:\ www

-  See also: "reqadd", "reqdel", "rsprep", section 6 about HTTP header
-manipulation, and section 7 about ACLs.
+  See also: "reqadd", "reqdel", "rsprep", "tune.bufsize", section 6 about
+HTTP header manipulation, and section 7 about ACLs.


 reqtarpit <search> [{if | unless} <cond>]



option accept-invalid-http-request

2012-10-24 Thread Dmitry Sivachenko

Hello!

I am running haproxy-1.4.22 with option accept-invalid-http-request turned on 
(the default).


It seems that haproxy successfully validates requests with an unencoded '%' 
character in them:


http://some.host.net/api/v1/do_smth?lang=en-ru&text=100%%20Pure%20Mulberry%20Queen

(note the unencoded % after 100).

I see such requests in my backend's log.  I expect haproxy to return HTTP 400 
(Bad Request) in such cases.


Is it a bug or am I missing something?

Thanks!



Re: option accept-invalid-http-request

2012-10-24 Thread Dmitry Sivachenko

On 24.10.2012 19:13, Jonathan Matthews wrote:

On 24 October 2012 16:03, Dmitry Sivachenko  wrote:

Hello!

I am running haproxy-1.4.22 with option accept-invalid-http-request turned
on (the default).


Do you actually mean "off" here?



Yes, sorry.





It seems that haproxy successfully validates requests with unencoded '%'
characted in it:

http://some.host.net/api/v1/do_smth?lang=en-ru&text=100%%20Pure%20Mulberry%20Queen

(note unencoded % after 100).

I see such requests in my backend's log.  I expect haproxy return HTTP 400
(Bad Request) in such cases.

Is it a bug or am I missing something?


Percentage signs are valid in URIs. Your application could be doing
/anything/ with them; HAProxy doesn't know what.
I don't /believe/ it's a validating parser's job to disallow these -
it sounds like you want more of a WAF.



Well, at least from Wikipedia:
http://en.wikipedia.org/wiki/Percent-encoding#Percent-encoding_the_percent_character

Because the percent ("%") character serves as the indicator for percent-encoded 
octets, it must be percent-encoded as "%25" for that octet to be used as data 
within a URI.


When haproxy encounters, say, an unencoded whitespace character, it returns
HTTP 400.  Why should '%' be an exception?
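The validity check being argued for can be sketched in Python. This is an illustration of the percent-encoding rule quoted from Wikipedia above, not haproxy's actual parser.

```python
import re

def has_invalid_percent(uri):
    """Flag any '%' that is not followed by two hex digits, i.e. a
    percent sign used as data without being encoded as '%25'."""
    return re.search(r"%(?![0-9A-Fa-f]{2})", uri) is not None
```

On the URL from this thread, the bare '%' after "100" would be flagged, while a properly encoded "%25" would pass.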






Need more info on compression

2012-11-22 Thread Dmitry Sivachenko
Hello!

I was reading the docs about HTTP compression support in -dev13 and it is a
bit unclear to me how it works.

Imagine I have:
compression algo gzip
compression type text/html text/javascript text/xml text/plain

in the defaults section.

What will haproxy do if:
1) the backend server does NOT support compression;
2) the backend server does support compression;
3) the backend server does support compression and these two
compression* lines are absent from the haproxy config?

I think the documentation needs to clarify things a bit.

In return, I am attaching a small patch which fixes 2 typos.

Thanks!
--- configuration.txt.orig  2012-11-22 04:11:33.0 +0400
+++ configuration.txt   2012-11-22 19:58:46.0 +0400
@@ -1887,7 +1887,7 @@
 offload  makes haproxy work as a compression offloader only (see notes).
 
   The currently supported algorithms are :
-identity  this is mostly for debugging, and it was useful for developping
+identity  this is mostly for debugging, and it was useful for developing
   the compression feature. Identity does not apply any change on
   data.
 
@@ -1901,7 +1901,7 @@
   This setting is only available when support for zlib was built
   in.
 
-  Compression will be activated depending of the Accept-Encoding request
+  Compression will be activated depending on the Accept-Encoding request
   header. With identity, it does not take care of that header.
 
   The "offload" setting makes haproxy remove the Accept-Encoding header to


Re: -dev13 dumps core on reload

2012-11-22 Thread Dmitry Sivachenko
On 23.11.2012 11:18, Willy Tarreau wrote:
> I'd be interested in knowing if your config enables compression, because
> that's an area where we very recently introduced new pools, so there could
> be a relation.
> 


It does, but it does not matter: when I comment compression out, it also dumps
core.  And it is not related to graceful restart (haproxy -sf); it dumps core
on normal exit too.

I have setups with and without SSL too: the coredump does not depend on SSL.



Re: Need more info on compression

2012-11-28 Thread Dmitry Sivachenko
On 24.11.2012 18:25, Willy Tarreau wrote:
> Hi Dmitry,
> 
> On Thu, Nov 22, 2012 at 08:03:26PM +0400, Dmitry Sivachenko wrote:
>> Hello!
>>
>> I was reading docs about HTTP compression support in -dev13 and it is a bit
>> unclear to me how it works.
>>
>> Imagine I have:
>> compression algo gzip
>> compression type text/html text/javascript text/xml text/plain
>>
>> in defaults section.
>>
>> What will haproxy do if:
>> 1) backend server does NOT support compression;
> 
> Haproxy will compress the matching responses.
> 
>> 2) backend server does support compression;
> 
> You have two possibilities :
>   - either you just have the lines above, and the server will see
> the Accept-Encoding header from the client and will compress
> the response ; in this case, haproxy will see the compressed
> response and will not compress again ;
> 
>   - or you also have a "compression offload" line. In this case,
> haproxy will remove the "Accept-Encoding" header before passing
> the request to the server. The server will then *not* compress,
> and haproxy will compress the response. This is what I'm doing
> at home because the compressing server is bogus and sometimes
> emits wrong chunked encoded data!
> 
>> 3) backend server does support compression and there is no these two
>> compression* lines in haproxy config.
> 
> Then haproxy's normal behaviour remains unchanged, the server compresses
> if it wants to and haproxy transfers the response unmodified.
> 
>> I think documentation needs to clarify things a bit.
> 
> Possibly, however I don't know what to clarify nor how, it's always
> difficult to guess how people will understand a doc :-(
> 
> Could you please propose some changes ? I would be happy to improve
> the doc if it helps people understand it.
> 


Thank you very much for the explanation.

Please consider the attached patch; I hope it will clarify haproxy's behavior
a bit.

--- configuration.txt.orig  2012-11-26 06:11:05.0 +0400
+++ configuration.txt   2012-11-28 17:45:25.0 +0400
@@ -1903,16 +1903,23 @@
 
   Compression will be activated depending on the Accept-Encoding request
   header. With identity, it does not take care of that header.
+  If backend servers support HTTP compression, these directives
+  will be a no-op: haproxy will see the compressed response and will not
+  compress again. If backend servers do not support HTTP compression and
+  there is an Accept-Encoding header in the request, haproxy will compress
+  the matching response.
 
   The "offload" setting makes haproxy remove the Accept-Encoding header to
   prevent backend servers from compressing responses. It is strongly
   recommended not to do this because this means that all the compression work
   will be done on the single point where haproxy is located. However in some
   deployment scenarios, haproxy may be installed in front of a buggy gateway
-  and need to prevent it from emitting invalid payloads. In this case, simply
-  removing the header in the configuration does not work because it applies
-  before the header is parsed, so that prevents haproxy from compressing. The
-  "offload" setting should then be used for such scenarios.
+  with a broken HTTP compression implementation which can't be turned off.
+  In that case haproxy can be used to prevent that gateway from emitting
+  invalid payloads. Simply removing the header in the configuration does
+  not work, because the removal applies before the header is parsed, which
+  would prevent haproxy itself from compressing. The "offload" setting
+  should then be used for such scenarios.
 
   Compression is disabled when:
 * the server is not HTTP/1.1.
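
For reference, the behavior described in the patch above can be sketched as a
configuration (a hedged example; the backend name and server address are
hypothetical):

```haproxy
backend static-be
    mode http
    # haproxy compresses matching responses when the client sent
    # Accept-Encoding and the server did not already compress them:
    compression algo gzip
    compression type text/html text/plain text/css
    # uncommenting the next line strips Accept-Encoding from requests,
    # so a (possibly buggy) server never compresses and haproxy does:
    # compression offload
    server web1 192.0.2.10:80
```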


Some thoughts about redispatch

2012-11-28 Thread Dmitry Sivachenko
Hello!

If haproxy can't send a request to the backend server, it will retry the same
backend 'retries' times, waiting 1 second between retries, and if 'option
redispatch' is used, the last retry will go to another backend.

There is a (I think very common) usage scenario where
1) all requests are independent of each other and all backends are equal, so
there is no need to try to route requests to the same backend (if it failed, we
would try the dead one again and again while another backend could serve the
request right now);

2) there is a response-time policy for requests, and a 1-second wait time is
just too long (all requests are handled faster than 500ms and the client
software will not wait any longer).

I propose to introduce new parameters in the config file:
1) "redispatch always": when set, haproxy will always retry a different backend
after the connection to the first one fails;
2) allow overriding the 1-second wait time between redispatches in the config
file (including the value of 0 == immediate).

Right now I use the attached patch to overcome these restrictions.  It is an
ugly hack, but if you could include it in the distribution in a better form,
with tuning via the config file, I think everyone would benefit from it.

Thanks.
--- session.c.orig  2012-11-22 04:11:33.0 +0400
+++ session.c   2012-11-22 16:15:04.0 +0400
@@ -877,7 +877,7 @@ static int sess_update_st_cer(struct ses
 * bit to ignore any persistence cookie. We won't count a retry nor a
 * redispatch yet, because this will depend on what server is selected.
 */
-   if (objt_server(s->target) && si->conn_retries == 0 &&
+   if (objt_server(s->target) &&
s->be->options & PR_O_REDISP && !(s->flags & SN_FORCE_PRST)) {
sess_change_server(s, NULL);
if (may_dequeue_tasks(objt_server(s->target), s->be))
@@ -903,7 +903,7 @@ static int sess_update_st_cer(struct ses
si->err_type = SI_ET_CONN_ERR;
 
si->state = SI_ST_TAR;
-   si->exp = tick_add(now_ms, MS_TO_TICKS(1000));
+   si->exp = tick_add(now_ms, MS_TO_TICKS(0));
return 0;
}
return 0;
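
For context, the stock mechanism this proposal wants to tune looks roughly like
this in a configuration (a hedged sketch; names and addresses are made up):

```haproxy
backend pool
    # retry a failed connection up to 3 times (with a 1s pause between
    # attempts); with "option redispatch", the last retry may pick
    # another server instead of hammering the same one
    retries 3
    option redispatch
    server s1 192.0.2.1:80 check
    server s2 192.0.2.2:80 check
```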


unlink local sockets upon exit

2012-12-11 Thread Dmitry Sivachenko
Hello!

Why doesn't haproxy unlink local sockets (the stats socket, and other local
sockets when frontends are bound to local UNIX sockets) upon exit?

Is there any special reason not to do it?

Thanks!


Re: [ANNOUNCE] haproxy-1.5-dev16

2012-12-24 Thread Dmitry Sivachenko
Hello!

After update from -dev15, the following stats listener:

listen stats9 :30009
mode http
stats enable
stats uri /
stats show-node
stats show-legends

returns 503/Service unavailable.

With -dev15 it shows statistics page.


On 24.12.2012, at 19:51, Willy Tarreau  wrote:

> Hi all,
> 
> Here comes 1.5-dev16. Thanks to the amazing work Sander Klein and John
> Rood have done at Picturae ICT ( http://picturae.com/ ) we could finally
> spot the freeze bug after one week of restless digging ! This bug was
> amazingly hard to reproduce in general and would only affect POST requests
> under certain circumstances that I never could reproduce despite many
> efforts. It is likely that other users were affected too but did not
> notice it because end users did not complain (I'm thinking about webmail
> and file sharing environments for example).
> 
> During this week of code review and testing, around 10 other minor to medium
> bugs related to the polling changes could be fixed.
> 
> Another nasty bug was fixed on SSL. It happens that OpenSSL maintains a
> global error stack that must constantly be flushed (surely they never heard
> how errno works). The result is that some SSL errors could cause another SSL
> session to break as a side effect of this error. This issue was reported by
> J. Maurice (wiz technologies) who first encountered it when playing with the
> tests on ssllabs.com.
> 
> Another bug present since 1.4 concerns the premature close of the response
> when the server responds before the end of a POST upload. This happens when
> the server responds with a redirect or with a 401, sometimes the client would
> not get the response. This has been fixed.
> 
> Krzysztof Rutecki reported some issues on client certificate checks, because
> the check for the presence of the certificate applies to the connection and
> not just to the session. So this does not match upon session resumption. Thus
> another ssl_c_used ACL was added to check for such sessions.
> 
> Among the other nice additions, it is now possible to log the result of any
> sample fetch method using %[]. This allows to log SSL certificates for 
> example.
> And similarly, passing such information to HTTP headers was implemented too,
> as "http-request add-header" and "http-request set-header", using the same
> format as the logs. This also becomes useful for combining headers !
> 
> Some people have been asking for logging the amount of uploaded data from the
> client to the server, so this is now available as the %U log-format tag.
> Some other log-format tags were deprecated and replaced with easier to remind
> ones. The old ones still work but emit a warning suggesting the replacement.
> 
> And last, the stats HTML version was improved to present detailed information
> using hover tips instead of title attributes, allowing multi-line details on
> the page. The result is nicer, more readable and more complete.
> 
> The changelog is short enough to append it here after the usual links :
> 
>Site index   : http://haproxy.1wt.eu/
>Sources  : http://haproxy.1wt.eu/download/1.5/src/devel/
>Changelog: http://haproxy.1wt.eu/download/1.5/src/CHANGELOG
>Cyril's HTML doc : 
> http://cbonte.github.com/haproxy-dconv/configuration-1.5.html
> 
> At the moment, nobody broke the latest snapshots, so I think we're getting
> closer to something stable to base future work on.
> 
> Thanks!
> Willy
> 
> --
> Changelog from 1.5-dev15 to 1.5-dev16:
>  - BUG/MEDIUM: ssl: Prevent ssl error from affecting other connections.
>  - BUG/MINOR: ssl: error is not reported if it occurs simultaneously with 
> peer close detection.
>  - MINOR: ssl: add fetch and acl "ssl_c_used" to check if current SSL session 
> uses a client certificate.
>  - MINOR: contrib: make the iprange tool grep for addresses
>  - CLEANUP: polling: gcc doesn't always optimize constants away
>  - OPTIM: poll: optimize fd management functions for low register count CPUs
>  - CLEANUP: poll: remove a useless double-check on fdtab[fd].owner
>  - OPTIM: epoll: use a temp variable for intermediary flag computations
>  - OPTIM: epoll: current fd does not count as a new one
>  - BUG/MINOR: poll: the I/O handler was called twice for polled I/Os
>  - MINOR: http: make resp_ver and status ACLs check for the presence of a 
> response
>  - BUG/MEDIUM: stream-interface: fix possible stalls during transfers
>  - BUG/MINOR: stream_interface: don't return when the fd is already set
>  - BUG/MEDIUM: connection: always update connection flags prior to computing 
> polling
>  - CLEANUP: buffer: use buffer_empty() instead of buffer_len()==0
>  - BUG/MAJOR: stream_interface: fix occasional data transfer freezes
>  - BUG/MEDIUM: stream_interface: fix another case where the reader might not 
> be woken up
>  - BUG/MINOR: http: don't abort client connection on premature responses
>  - BUILD: no need to clean up when making git-tar
>  - MINOR: log: add a tag for am

Re: [ANNOUNCE] haproxy-1.5-dev16

2012-12-26 Thread Dmitry Sivachenko

On 26.12.2012, at 1:03, Willy Tarreau  wrote:
>> 
> 
> This fix is still wrong, as it only accepts one add-header rule, so
> please use the other fix posted in this thread by "seri0528" instead.
> 


Thanks a lot! Works now.




max sessions rate on stat page

2012-12-27 Thread Dmitry Sivachenko
Hello!

Every time I reload the haproxy process, I see on the stats page a "max session
rate" for backends and frontends that is 2-3 times higher than the "current
session rate".

This happens even if I open the stats page a few seconds after the reload, and
it is 100% reproducible.

It seems haproxy calculates these numbers incorrectly during its first second
of running.


compress only if response size is big enough

2013-02-07 Thread Dmitry Sivachenko
Hello!

It would be nice to add some parameter so that haproxy compresses an HTTP
response only if the response size is bigger than that value.

Because compressing small data can lead to a size increase and is useless.

Thanks.


Re: compress only if response size is big enough

2013-03-02 Thread Dmitry Sivachenko
Hello!

What do you guys think?

I meant something similar to nginx's  gzip_min_length.
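
For reference, the nginx directive mentioned above works like this (a sketch;
the threshold value is arbitrary):

```nginx
gzip on;
# skip compression for responses whose Content-Length is below 1024 bytes
gzip_min_length 1024;
```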


On 07.02.2013, at 15:56, Dmitry Sivachenko  wrote:

> Hello!
> 
> It would be nice to add some parameter .
> So haproxy will compress HTTP response only if response size is bigger than 
> that value.
> 
> Because compressing small data can lead to size increase and is useless.
> 
> Thanks.




haproxy dumps core when unable to resolve host names

2013-03-15 Thread Dmitry Sivachenko
Hello!

I am using haproxy-1.5-dev17.  I use hostnames in my config file rather than 
IPs.
If DNS is not working, haproxy will dump core on start or config check.

How to repeat:
Put some fake stuff in /etc/resolv.conf so resolver does not work.

Run haproxy -c -f :

/tmp# ./haproxy -c -f ./haproxy.conf
Segmentation fault (core dumped)

# ./haproxy -vv
HA-Proxy version 1.5-dev17 2012/12/28
Copyright 2000-2012 Willy Tarreau 

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -pipe -O2 -fno-strict-aliasing -pipe -DFREEBSD_PORTS
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_STATIC_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.7
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 0.9.8x 10 May 2012
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes

Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.




Re: haproxy dumps core when unable to resolve host names

2013-03-15 Thread Dmitry Sivachenko

On 15.03.2013, at 15:54, Willy Tarreau  wrote:

> Hi Dmitry,
> 
> On Fri, Mar 15, 2013 at 03:25:10PM +0400, Dmitry Sivachenko wrote:
>> Hello!
>> 
>> I am using haproxy-1.5-dev17.  I use hostnames in my config file rather than 
>> IPs.
>> If DNS is not working, haproxy will dump core on start or config check.
>> 
>> How to repeat:
>> Put some fake stuff in /etc/resolv.conf so resolver does not work.
>> 
>> Run haproxy -c -f :
>> 
>> /tmp# ./haproxy -c -f ./haproxy.conf
>> Segmentation fault (core dumped)
> 
> This is a known issue with GETADDRINFO which was fixed in a
> recent snapshot :
> 
>  commit 58ea039115f3faaf29529e0df97f4562436fdd09
>  Author: Sean Carey 
>  Date:   Fri Feb 15 23:39:18 2013 +0100
> 
>BUG/MEDIUM: config: fix parser crash with bad bind or server address
> 
>If an address is improperly formated on a bind or server address
>and haproxy is built for using getaddrinfo, then a crash may occur
>upon the call to freeaddrinfo().
> 
>Thanks to Jon Meredith for helping me patch this for SmartOS,
>I am not a C/GDB wizard.
> 
> I think you'd better update to latest snapshot until we emit dev18.
> 



Ah, okay, thanks!




compile warning

2013-05-22 Thread Dmitry Sivachenko
Hello!

When compiling the latest haproxy snapshot on FreeBSD-9 I get the following 
warning:

cc -Iinclude -Iebtree -Wall -O2 -pipe -O2 -fno-strict-aliasing -pipe -DFREEBSD_PORTS -DTPROXY -DCONFIG_HAP_CRYPT -DUSE_GETADDRINFO -DUSE_ZLIB -DENABLE_POLL -DENABLE_KQUEUE -DUSE_OPENSSL -DUSE_PCRE -I/usr/local/include -DCONFIG_HAPROXY_VERSION=\"1.5-dev18\" -DCONFIG_HAPROXY_DATE=\"2013/04/03\" -c -o src/ev_kqueue.o src/ev_kqueue.c
In file included from include/types/listener.h:33,
 from include/types/global.h:29,
 from src/ev_kqueue.c:30:
include/common/mini-clist.h:141:1: warning: "LIST_PREV" redefined
In file included from /usr/include/sys/event.h:32,
 from src/ev_kqueue.c:21:
/usr/include/sys/queue.h:426:1: warning: this is the location of the previous 
definition

JFYI.


Re: compile warning

2013-05-23 Thread Dmitry Sivachenko

On 23.05.2013, at 11:22, joris dedieu  wrote:

> 
> For my part I can't reproduce it.
> 
> $ uname -a
> FreeBSD mailhost2 9.1-RELEASE-p3 FreeBSD 9.1-RELEASE-p3 #0: Mon Apr 29
> 18:27:25 UTC 2013
> r...@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64
> 
> $ cc -v
> Using built-in specs.
> Target: amd64-undermydesk-freebsd
> Configured with: FreeBSD/amd64 system compiler
> Thread model: posix
> gcc version 4.2.1 20070831 patched [FreeBSD]
> 
> 
> rm src/ev_kqueue.o; cc -Iinclude -Iebtree -Wall -Werror -O2 -pipe -O2
> -fno-strict-aliasing -pipe -DFREEBSD_PORTS -DTPROXY -DCONFIG_HAP_CRYPT
> -DUSE_GETADDRINFO -DUSE_ZLIB -DENABLE_POLL -DENABLE_KQUEUE
> -DUSE_OPENSSL -DUSE_PCRE -I/usr/local/include
> -DCONFIG_HAPROXY_VERSION=\"1.5-dev18\"
> -DCONFIG_HAPROXY_DATE=\"2013/04/03\" -c -o src/ev_kqueue.o
> src/ev_kqueue.c
> 
> Doesn't produce any warning with haproxy-ss-20130515.
> 
> Could you please tell me how to reproduce it ?
> 


Update to FreeBSD-9-STABLE if you want to reproduce it.

This change was MFC'd to 9/stable after 9.1-RELEASE:
http://svnweb.freebsd.org/base/stable/9/sys/sys/queue.h?view=log




Re: RES: RES: RES: RES: RES: RES: RES: RES: High CPU Usage (HaProxy)

2013-11-05 Thread Dmitry Sivachenko
On 05 Nov 2013, at 19:33, Fred Pedrisa  wrote:

> 
> However, in FreeBSD we can't do that IRQ Assigning, like we can on linux.
> (As far I know).
> 


JFYI: you can assign IRQs to CPUs via cpuset -x 
(I can’t tell you if it is “like on linux” or not though).




ACL based on request parameter using POST method

2014-01-30 Thread Dmitry Sivachenko
Hello!

(haproxy-1.5-dev21)


Using urlp() I can match specific parameter value and dispatch request to 
different backends based on that value:

acl PARAM1 urlp(test) 1
use_backend BE1-back if PARAM1
acl PARAM2 urlp(test) 2
use_backend BE2-back if PARAM2

It works if I specify that parameter using GET method:
curl 'http://localhost:2/do?test=1'

But it does not work if I specify the same parameter using POST method:
curl -d test=1  'http://localhost:2/do'

Is there any way to make ACLs using request parameters regardless of method, so 
that it works with both GET and POST?

Thanks!


Re: ACL based on request parameter using POST method

2014-01-30 Thread Dmitry Sivachenko

On 30 Jan 2014, at 19:30, Baptiste  wrote:

> Hi Dmitry,
> 
> In Post, the parameters are in the body.
> You may be able to match them using the payload ACLs (HAProxy 1.5 only).
> 


Hello,

I tried
acl PARAM1 payload(0,500) -m sub test=1
use_backend BE1-back if PARAM1


and it does not match
(I test with curl -d test=1 http://...)
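
For the record, the payload ACL above usually fails because the request body
has not arrived yet when the ACL is evaluated. If memory serves, later haproxy
versions (1.6+) can buffer the body first; a hedged sketch of that approach
(backend and parameter names taken from the thread above):

```haproxy
# "option http-buffer-request" and "req.body_param" are 1.6+ features:
# haproxy waits for the full body before evaluating rules, so form
# parameters sent via POST become matchable just like query parameters.
option http-buffer-request
acl PARAM1 req.body_param(test) 1
use_backend BE1-back if PARAM1
```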






balance leastconn does not honor weight?

2014-03-06 Thread Dmitry Sivachenko
Hello!

I am using haproxy-1.5.22.

In a single backend I have servers with different weights configured: 16, 24,
32 (proportional to the number of CPU cores).
Most of the time they respond very fast.

When I use balance leastconn, I see in the stats web interface that they all
receive an approximately equal number of connections (Sessions->Total).
Shouldn't the leastconn algorithm also honor the weight of each server (picking
the server with the minimal connections/weight ratio)?

Thanks.
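
The setup in question can be sketched as follows (hypothetical addresses):

```haproxy
backend pool
    balance leastconn
    # weights proportional to the servers' CPU core counts
    server s1 192.0.2.1:80 weight 16
    server s2 192.0.2.2:80 weight 24
    server s3 192.0.2.3:80 weight 32
```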


Re: balance leastconn does not honor weight?

2014-03-07 Thread Dmitry Sivachenko

On 06 Mar 2014, at 19:29, Dmitry Sivachenko  wrote:

> Hello!
> 
> I am using haproxy-1.5.22.
> 
> In a single backend I have servers with different weight configured: 16, 24, 
> 32 (proportional to the number of CPU cores).
> Most of the time they respond very fast.
> 
> When I use balance leastconn, I see in the stats web interface that they all 
> receive approximately equal number of connections (Sessions->Total).
> Shouldn't leastconn algorithm also honor weights of each backend (to pick a 
> backend with minimal Connections/weight value)?
> 
> Thanks.


I mean that with balance leastconn, I expect the following behavior:
-- in the ideal situation, when all servers respond equally fast, it should
effectively act like balance roundrobin *honoring the specified weights*;
-- when one of the servers becomes slow for some reason, it should get fewer
requests, based on the number of active connections.

Right now it behaves almost this way, but without honoring the specified
weights.





Re: balance leastconn does not honor weight?

2014-03-07 Thread Dmitry Sivachenko

On 07 Mar 2014, at 12:25, Willy Tarreau  wrote:

> Hi Dmitry,
> 
> On Fri, Mar 07, 2014 at 12:16:32PM +0400, Dmitry Sivachenko wrote:
>> 
>> On 06 Mar 2014, at 19:29, Dmitry Sivachenko  wrote:
>> 
>>> Hello!
>>> 
>>> I am using haproxy-1.5.22.
>>> 
>>> In a single backend I have servers with different weight configured: 16, 
>>> 24, 32 (proportional to the number of CPU cores).
>>> Most of the time they respond very fast.
>>> 
>>> When I use balance leastconn, I see in the stats web interface that they 
>>> all receive approximately equal number of connections (Sessions->Total).
>>> Shouldn't leastconn algorithm also honor weights of each backend (to pick a 
>>> backend with minimal Connections/weight value)?
>>> 
>>> Thanks.
>> 
>> I mean that with balance leastconn, I expect the following behavior:
>> -- In ideal situation, when all backends respond equally fast, it should be
>> effectively like balance roundrobin *honoring specified weights*;
>> -- When one of the backends becomes slow for some reason, it should get less
>> request based on the number of active connections
>> 
>> Now it behaves almost this way but without  "honoring specified weights".
> 
> We cannot honnor both at the same time. Most products I've tested don't
> *even* do the round robin on equal connection counts while we do. I'm just
> restating the point I made in another thread on the same subject : leastconn
> is about balancing the active number of connections, not the total number of
> connections.


Yes, I understand that.

But when the backends are not equal, it would be nice to be able to specify a
"weight" so that the number of *active* connections is balanced proportionally
to each server's weight.

Otherwise I am forced to maintain a pool of backends with identical hardware
for leastconn to work, and that is not always simple.


Re: balance leastconn does not honor weight?

2014-03-07 Thread Dmitry Sivachenko

On 07 Mar 2014, at 13:02, Willy Tarreau  wrote:

> On Fri, Mar 07, 2014 at 01:01:04PM +0400, Dmitry Sivachenko wrote:
>>>> Now it behaves almost this way but without  "honoring specified weights".
>>> 
>>> We cannot honnor both at the same time. Most products I've tested don't
>>> *even* do the round robin on equal connection counts while we do. I'm just
>>> restating the point I made in another thread on the same subject : leastconn
>>> is about balancing the active number of connections, not the total number of
>>> connections.
>> 
>> 
>> Yes, I understand that.
>> 
>> But in situation when backends are not equal, it would be nice to have an
>> ability to specify "weight" to balance number of *active* connections
>> proportional to backend's weight.
> 
> It's not a problem of option but of algorithm unfortunately.
> 
>> Otherwise I am forced to maintain a pool of backends with equal hardware for
>> leastconn to work, but it is not always simple.
> 
> I really don't understand. I really think you're using leastconn while
> you'd prefer to use roundrobin then.
> 


I will explain: imagine a backend server which mmap()s a lot of data needed to
process a request.
On startup, the data is read from disk into RAM and the server responds fast
(roundrobin works fine).

Now imagine that at some moment part of that mmap()ed memory is freed for
other needs.

When the next request(s) arrive, the server must read the missing pages back
from disk.  That takes time.  The server becomes very slow for a while.
I don't want it to be flooded with requests until it starts to respond fast
again.  It looks like leastconn would fit this situation.

But 99.9% of the time, when all servers respond equally fast, I want to be able
to balance the load between them proportionally to their CPU counts (so I need
weights).




Re: balance leastconn does not honor weight?

2014-03-07 Thread Dmitry Sivachenko

On 07 Mar 2014, at 14:53, Baptiste  wrote:

> Hi All,
> 
> "When next request(s) arrive, server must to read missing pages back
> from disk.  It takes time.  Server becomes very slow for some time.
> I don't want it to be flooded by requests until it starts to respond
> fast again.  It looks like leastconn would fit this situation."
> 
> If one server is answering at 1s per request while the other one at
> 1ms in a farm of 2 servers, then server 2 will process 1000 more
> requests per second than server 1 thanks to leastconn...
> This is what you want.



Yes, provided that most of the time they both answer in 1ms, and also that the
farm has not 2 but 50 servers.
If one is ill, its load will spread over the remaining 49...  not so scary.

I am in the process of reading about maxconn as suggested; it is probably what
I need, but for now I am failing to understand the documentation :)

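The maxconn approach suggested in this thread can be sketched as follows
(hedged; all names and values are illustrative): each server accepts a bounded
number of concurrent connections, and excess requests queue in haproxy instead
of flooding a momentarily slow server.

```haproxy
backend pool
    balance roundrobin
    # requests beyond a server's maxconn wait in haproxy's queue for up
    # to "timeout queue" before being redispatched or rejected
    timeout queue 500ms
    server s1 192.0.2.1:80 weight 16 maxconn 50
    server s2 192.0.2.2:80 weight 32 maxconn 100
```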

Re: Patch with some small memory usage fixes

2014-04-28 Thread Dmitry Sivachenko
Hello,

> if (groups) free(groups);

I think these checks are redundant, because according to free(3):
-- If ptr is NULL, no action occurs.


On 29 Apr 2014, at 3:00, Dirkjan Bussink  wrote:

> Hi all,
> 
> When building HAProxy using the Clang Static Analyzer, it found a few cases 
> of invalid memory usage and leaks. I’ve attached a patch to fix these cases.
> 
> — 
> Regards,
> 
> Dirkjan Bussink
> 
> <0001-Fix-a-few-memory-usage-errors.patch>




Re: haproxy 1.5-dev24: 100% CPU Load or Core Dumped

2014-05-10 Thread Dmitry Sivachenko

On 07 May 2014, at 18:24, Emeric Brun  wrote:
> 
> Hi All,
> 
> I suspect FreeBSD to not support process shared mutex (supported in both 
> linux and solaris).
> 
> I've just made a patch to add errors check on mutex init, and to fallback on 
> SSL private session cache in error case.


Hello,

BTW, nginx does support a shared SSL session cache on FreeBSD (probably by
other means).
Maybe it is worth borrowing their method rather than falling back to a private
cache?

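For reference, the nginx mechanism mentioned above: its shared session cache
lives in shared memory allocated before the workers fork and uses nginx's own
locking, rather than process-shared pthread mutexes (a sketch; sizes are
illustrative):

```nginx
ssl_session_cache shared:SSL:10m;   # ~10 MB shared across worker processes
ssl_session_timeout 10m;
```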

Re: Some thoughts about redispatch

2014-05-11 Thread Dmitry Sivachenko
Hello,

thanks for your efforts on stabilizing the -dev version; it looks rather solid
now.

Let me try to revive an old topic in the hope of getting rid of an old local
patch that I must use for production builds.

Thanks :)



On 28 Nov 2012, at 18:10, Dmitry Sivachenko  wrote:

> Hello!
> 
> If haproxy can't send a request to the backend server, it will retry the same
> backend 'retries' times waiting 1 second between retries, and if 'option
> redispatch' is used, the last retry will go to another backend.
> 
> There is (I think very common) usage scenario when
> 1) all requests are independent of each other and all backends are equal, so
> there is no need to try to route requests to the same backend (if it failed, 
> we
> will try dead one again and again while another backend could serve the 
> request
> right now)
> 
> 2) there is response time policy for requests and 1 second wait time is just
> too long (all requests are handled faster than 500ms and client software will
> not wait any longer).
> 
> I propose to introduce new parameters in config file:
> 1) "redispatch always": when set, haproxy will always retry different backend
> after connection to the first one fails.
> 2) Allow to override 1 second wait time between redispatches in config file
> (including the value of 0 == immediate).
> 
> Right now I use the attached patch to overcome these restrictions.  It is ugly
> hack right now, but if you could include it into distribution in better form
> with tuning via config file I think everyone would benefit from it.
> 
> Thanks.
> 




Re: Some thoughts about redispatch

2014-05-11 Thread Dmitry Sivachenko
Looks like the attachment got stripped; attaching it now for real so it is easy
to understand what I am talking about.

--- session.c.orig  2012-11-22 04:11:33.0 +0400
+++ session.c   2012-11-22 16:15:04.0 +0400
@@ -877,7 +877,7 @@ static int sess_update_st_cer(struct ses
 * bit to ignore any persistence cookie. We won't count a retry nor a
 * redispatch yet, because this will depend on what server is selected.
 */
-   if (objt_server(s->target) && si->conn_retries == 0 &&
+   if (objt_server(s->target) &&
s->be->options & PR_O_REDISP && !(s->flags & SN_FORCE_PRST)) {
sess_change_server(s, NULL);
if (may_dequeue_tasks(objt_server(s->target), s->be))
@@ -903,7 +903,7 @@ static int sess_update_st_cer(struct ses
si->err_type = SI_ET_CONN_ERR;
 
si->state = SI_ST_TAR;
-   si->exp = tick_add(now_ms, MS_TO_TICKS(1000));
+   si->exp = tick_add(now_ms, MS_TO_TICKS(0));
        return 0;
}
return 0;

On 12 May 2014, at 0:31, Dmitry Sivachenko  wrote:

> Hello,
> 
> thanks for your efforts on stabilizing -dev version, it looks rather solid 
> now.
> 
> Let me try to revive an old topic in hope to get rid of my old local patch I 
> must use for production builds.
> 
> Thanks :)
> 
> 
> 
> On 28 Nov 2012, at 18:10, Dmitry Sivachenko  wrote:
> 
>> Hello!
>> 
>> If haproxy can't send a request to the backend server, it will retry the same
>> backend 'retries' times waiting 1 second between retries, and if 'option
>> redispatch' is used, the last retry will go to another backend.
>> 
>> There is (I think very common) usage scenario when
>> 1) all requests are independent of each other and all backends are equal, so
>> there is no need to try to route requests to the same backend (if it failed, 
>> we
>> will try dead one again and again while another backend could serve the 
>> request
>> right now)
>> 
>> 2) there is response time policy for requests and 1 second wait time is just
>> too long (all requests are handled faster than 500ms and client software will
>> not wait any longer).
>> 
>> I propose to introduce new parameters in config file:
>> 1) "redispatch always": when set, haproxy will always retry different backend
>> after connection to the first one fails.
>> 2) Allow to override 1 second wait time between redispatches in config file
>> (including the value of 0 == immediate).
>> 
>> Right now I use the attached patch to overcome these restrictions.  It is 
>> ugly
>> hack right now, but if you could include it into distribution in better form
>> with tuning via config file I think everyone would benefit from it.
>> 
>> Thanks.
>> 
> 



Re: Some thoughts about redispatch

2014-05-26 Thread Dmitry Sivachenko
On 28 Nov 2012, at 18:10, Dmitry Sivachenko  wrote:

> Hello!
> 
> If haproxy can't send a request to the backend server, it will retry the same
> backend 'retries' times waiting 1 second between retries, and if 'option
> redispatch' is used, the last retry will go to another backend.
> 
> There is (I think very common) usage scenario when
> 1) all requests are independent of each other and all backends are equal, so
> there is no need to try to route requests to the same backend (if it failed, 
> we
> will try dead one again and again while another backend could serve the 
> request
> right now)
> 
> 2) there is response time policy for requests and 1 second wait time is just
> too long (all requests are handled faster than 500ms and client software will
> not wait any longer).
> 
> I propose to introduce new parameters in config file:
> 1) "redispatch always": when set, haproxy will always retry different backend
> after connection to the first one fails.
> 2) Allow to override 1 second wait time between redispatches in config file
> (including the value of 0 == immediate).
> 
> Right now I use the attached patch to overcome these restrictions.  It is ugly
> hack right now, but if you could include it into distribution in better form
> with tuning via config file I think everyone would benefit from it.
> 
> Thanks.
> 



On 26 May 2014, at 18:21, Willy Tarreau  wrote:
> I think it definitely makes some sense. Probably not in its exact form but
> as something to work on. In fact, I think we should only apply the 1s retry
> delay when remaining on the same server, and avoid as much a possible to
> remain on the same server. For hashes or when there's a single server, we
> have no choice, but when doing round robin for example, we can pick another
> one. This is especially true for static servers or ad servers for example
> where fastest response time is preferred over sticking to the same server.
> 


Yes, that was exactly my point.  In many situations it is better to ask another
server immediately to get the fastest response rather than trying to stick to
the same server as much as possible.


> 
> Thanks,
> Willy



Re: Some thoughts about redispatch

2014-05-28 Thread Dmitry Sivachenko
On 28 May 2014, at 11:13, Willy Tarreau  wrote:

> Hi Dmitry,
> 
> So worked a bit on this subject. It's far from being obvious. The problem
> is that at the moment where we decide of the 1s delay before a retry, we
> don't know if we'll end up on the same server or not.
> 
> Thus I'm thinking about this :
> 
>  - if the connection is persistent (cookie, etc...), apply the current 
>retry mechanism, as we absolutely don't want to break application
>sessions ;


I agree.


> 
>  - otherwise, we redispatch starting on the first retry as you suggest. But
>then we have two possibilities for the delay before reconnecting. If the
>server farm has more than 1 server and the balance algorithm is not a hash
>nor "first", then we don't apply the delay because we expect to land on a
>different server with a high probability. Otherwise we keep the delay
>because we're almost certain to land on the same server.
> 
> This way it continues to silently mask occasional server restarts and is
> optimally efficient in stateless farms when there's a possibility to quickly
> pick another server. Do you see any other point that needs specific care ?



I would export that magic "1 second" as a configuration parameter (with 0 
meaning no delay).
After all, we could fail to connect not only because of a server restart, but
also because a switch or a router dropped a packet.
Other than that, sounds good.

Thanks!


Re: Some thoughts about redispatch

2014-05-28 Thread Dmitry Sivachenko

On 28 May 2014, at 12:49, Willy Tarreau  wrote:

> On Wed, May 28, 2014 at 12:35:17PM +0400, Dmitry Sivachenko wrote:
>>> - otherwise, we redispatch starting on the first retry as you suggest. But
>>>   then we have two possibilities for the delay before reconnecting. If the
>>>   server farm has more than 1 server and the balance algorithm is not a hash
>>>   nor "first", then we don't apply the delay because we expect to land on a
>>>   different server with a high probability. Otherwise we keep the delay
>>>   because we're almost certain to land on the same server.
>>> 
>>> This way it continues to silently mask occasional server restarts and is
>>> optimally efficient in stateless farms when there's a possibility to quickly
>>> pick another server. Do you see any other point that needs specific care ?
>> 
>> 
>> 
>> I would export that magic "1 second" as a configuration parameter (with 0
>> meaning no delay).
> 
> I'm not sure we need to add another tunable just for this.


Okay.


> 
>> After all, we could fail to connect not only because of server restart, but
>> also because a switch or a router dropped a packet.
> 
> No, because a dropped packet is already handled by the TCP stack. Here the
> haproxy retry is really about retrying after an explicit failure (server
> responded that the port was closed). Also, the typical TCP retransmit
> interval for dropped packets in the network stack is 3s, so we're already
> 3 times as fast as the TCP stack. I don't think it's reasonable to always
> kill this delay when retrying on the same server. We used to have that in
> the past and people were complaining that we were hammering servers for no
> reason, since there's little chance that a server which is not started will
> suddenly be ready in the next 100 microseconds.
> 

I mean that with timeout connect=100ms (a good value for a local network IMO),
we are far below the TCP retransmit timeout, and if a switch drops a packet
(drops are random), it can deliver the next one even if we retry immediately.

If we have a tunable (with a default of 1 second), people will have more
freedom in some situations.


Re: Some thoughts about redispatch

2014-05-28 Thread Dmitry Sivachenko
On 28 May 2014, at 13:06, Willy Tarreau  wrote:

> 
> OK but then you make an interesting point with your very low timeout connect.
> What about using the min of timeout connect and 1s then ? Thus you can simply
> use your lower timeout connect as this new timeout. Would that be OK for you ?
> 


Sounds reasonable (provided we are talking only about redispatch to the same
server, not to another one).

