Re: Link error building haproxy-1.9.7

2019-05-09 Thread Chris Packham
On 9/05/19 9:50 PM, William Lallemand wrote:
> On Thu, May 09, 2019 at 03:52:45AM +, Chris Packham wrote:
>> Hi,
>>
>> I'm encountering the following linker error when building haproxy-1.9.7
>>
>> make CC=arm-softfloat-linux-gnueabi USE_OPENSSL=1
>> ...
>> LD  haproxy
>>   
>> /usr/bin/../lib/gcc/arm-softfloat-linux-gnueabi/8.3.0/../../../../arm-softfloat-linux-gnueabi/bin/ld:
>> src/fd.o: in function `fd_rm_from_fd_list':
>>haproxy-1.9.7/src/fd.c:267: undefined reference to `__ha_cas_dw'
>>collect2: error: ld returned 1 exit status
>>Makefile:994: recipe for target 'haproxy' failed
>>make: *** [haproxy] Error 1
>>
>> Eyeballing the code I think it's because USE_THREAD is not defined and
>> __ha_cas_dw is only defined when USE_THREAD is defined
>>
>>
> 
> HAProxy is not supposed to build without a TARGET argument. I can't reproduce
> your problem; what is your complete make line?
> 

Here's the full make invocation (MUA wrapped unfortunately)

make -j32 -l16 CC=arm-unknown-linux-gnueabihf-gcc 
LD=arm-unknown-linux-gnueabihf-gcc 
DESTDIR=output/armv7/haproxy/new/install PREFIX=/usr CFLAGS="-O2 -g2 
-mtune=cortex-a9 -march=armv7-a -mabi=aapcs-linux 
--sysroot=output/armv7/haproxy/staging" 
LDFLAGS=--sysroot=output/armv7/haproxy/staging USE_OPENSSL=1 
SSL_INC=output/armv7/haproxy/staging/usr/include 
SSL_LIB=output/armv7/haproxy/staging/usr/lib TARGET=linux26
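The undefined `__ha_cas_dw` reference above is the classic shape of a helper that is only compiled under one build flag (here USE_THREAD) while a caller references it unconditionally. A minimal, self-contained sketch of that pattern — not HAProxy's actual code; `struct dw` and `cas_dw()` are invented for illustration:

```c
#include <assert.h>
#include <stddef.h>

/* A two-pointer value, like the {pointer, counter} pairs used by
 * lock-free linked lists, which need a double-word CAS to update. */
struct dw {
    void *a;
    void *b;
};

#ifdef USE_THREAD
/* Threaded build: a real implementation needs a genuine double-word
 * atomic compare-and-swap (e.g. cmpxchg16b on x86-64). If this is the
 * only definition and USE_THREAD is unset, any unconditional caller
 * produces an "undefined reference" at link time, as in the log above. */
int cas_dw(struct dw *ptr, struct dw *expected, const struct dw *desired);
#else
/* Single-threaded build: no concurrency, so a plain
 * compare-then-store is equivalent. */
static inline int cas_dw(struct dw *ptr, struct dw *expected,
                         const struct dw *desired)
{
    if (ptr->a == expected->a && ptr->b == expected->b) {
        *ptr = *desired;     /* values matched: perform the swap */
        return 1;
    }
    *expected = *ptr;        /* mismatch: report the current value */
    return 0;
}
#endif
```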





Re: [PATCH] wurfl device detection build fixes and dummy library

2019-05-09 Thread Willy Tarreau
Hi Max,

I'll respond on some points here.

On Thu, May 09, 2019 at 06:03:58PM +0200, Massimiliano Bellomi wrote:
> Hi Christopher,
> 
> here Massimiliano, from Scientiamobile Engineering team.
> 
> We started working on your suggestions.
> 
> Doing this, I noticed that *send_log()* seems not to work when called inside
> a module's init function.

Indeed. There's a historical distinction between logs (which are only for
runtime traffic) and alerts/warnings that are only for startup (typically
config errors). So while you should use send_log() and friends to report
critical events (it's already done for you at the end of the stream for
regular activity), you should only use ha_warning() to report startup
warnings that can be avoided by a functionally equivalent configuration,
and ha_alert() to report alerts corresponding to a refusal to start.
Typically you'll use a warning to indicate that an old keyword is
deprecated and supposed to be replaced by something else, but was still
honored for the user, and you'll use an alert for something wrong, that
makes no sense, or that cannot work.
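The convention above can be sketched with stand-in stubs. The real ha_warning()/ha_alert() live in the HAProxy sources and behave differently (this version just prefixes stderr), and `check_keyword()` is invented purely for the example:

```c
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-ins for HAProxy's startup reporting helpers. */
static void ha_warning(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    fprintf(stderr, "[WARNING] ");
    vfprintf(stderr, fmt, ap);
    va_end(ap);
}

static void ha_alert(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    fprintf(stderr, "[ALERT]   ");
    vfprintf(stderr, fmt, ap);
    va_end(ap);
}

/* Invented example of the convention: parse a config keyword at startup.
 * Returns 0 on success, 1 on a recoverable warning, 2 on a fatal error
 * (i.e. a refusal to start). */
static int check_keyword(const char *kw)
{
    if (strcmp(kw, "tune.newname") == 0)
        return 0;
    if (strcmp(kw, "tune.oldname") == 0) {
        /* deprecated but functionally equivalent: warn, keep going */
        ha_warning("'%s' is deprecated, use 'tune.newname' instead\n", kw);
        return 1;
    }
    /* unknown keyword: cannot work, refuse to start */
    ha_alert("unknown keyword '%s'\n", kw);
    return 2;
}
```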

> Has anything changed in the latest version of HA (it works in 1.7) that
> we need to pay attention to?

I find it strange that logs used to work upon startup in 1.7; that could
have been just an accident, such as a startup race depending on the
declaration order. Logs were quite a bit simpler back then and probably
didn't require as much initialization, so maybe the socket was
initialized during parsing and everything was immediately usable.

Hoping this helps,
Willy



haproxy stopped balancing after about 2 weeks

2019-05-09 Thread ericr
A couple of weeks ago I installed haproxy on our server running FreeBSD
11.0-RELEASE-p16. (Yes, I know it's an old version of the OS; I'm going to
upgrade it as soon as I solve my haproxy problem.)

Haproxy is supposed to load balance between 2 web servers running apache.
haproxy ran fine and balanced well for about 2 weeks, and then it stopped
sending client connections to the second web server.

It still does health checks to both servers just fine, and reports L7OK/200
at every check for both servers. I've tried using both roundrobin and
leastconn, with no luck.  I've restarted haproxy several times, and
rebooted the server it's running on, and the behavior doesn't change.
I'm out of ideas, does anyone have suggestions for fixing this (or
improving my config in general)?

Here's my config file:


# global holds defaults, global variables, etc.
global
daemon
user haproxy
group haproxy
log /dev/log local0
stats socket /var/run/haproxy/admin.sock user haproxy group haproxy mode 660 level admin

# https://www.haproxy.com/blog/multithreading-in-haproxy/
maxconn 2048 # max connections we handle at once
nbproc 1 # number of haproxy processes to start
nbthread 4 # max threads, 1 per CPU core

# cpu map = number of cpu cores
cpu-map all 0-3

ssl-default-bind-ciphers "EECDH+ECDSA+AESGCM ECDH+aRSA+AESGCM
EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256
EECDH+aRSA+RC4 EECDH EDH+aRSA RC4"
ssl-default-bind-options ssl-min-ver TLSv1.2

defaults
timeout connect 30s
timeout client 600s
timeout server 30s
log global
mode http

stats enable
stats uri /haproxy?stats
stats realm Statistics
stats auth REMOVED
stats refresh 10s

# frontend holds info about the public face of the site
frontend vi-gate2.docbasedirect.com
bind XXX.XX.XX.XXX:80
bind XXX.XX.XX.XXX:443 ssl crt "/usr/local/etc/2019-www-prod-SSL.crt"
http-request redirect scheme https if !{ ssl_fc }
default_backend web_servers
option httplog

# info about backend servers
backend web_servers
balance leastconn
cookie phpsessid insert indirect nocache
option httpchk HEAD /

default-server check maxconn 2048

server vi-www3 10.3.3.10:8080 cookie phpsessid inter 120s
server vi-www4 10.3.3.11:8080 cookie phpsessid inter 120s

email-alert mailers vi-mailer
email-alert from REMOVED
email-alert to REMOVED

mailers vi-mailer
mailer localhost 127.0.0.1:25
mailer vi-backup2 10.3.3.100:25


Thanks!

-- ericr
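[Editor's note, not part of the original thread: one detail worth flagging in the configuration above. With `cookie <name> insert`, the per-server `cookie <value>` argument is what gets written into the cookie, and it must be unique per server for persistence to distribute clients. Both server lines use the same value (`phpsessid`), so every client that has received the cookie will stick to the first matching server, which would produce exactly the "stopped balancing" symptom once most clients carry the cookie. A sketch of the usual shape, with invented server values:]

```
backend web_servers
    balance leastconn
    # the cookie *name* goes on the backend line...
    cookie SRVID insert indirect nocache
    option httpchk HEAD /

    default-server check maxconn 2048

    # ...and each server gets its own distinct cookie *value*
    server vi-www3 10.3.3.10:8080 cookie www3 inter 120s
    server vi-www4 10.3.3.11:8080 cookie www4 inter 120s
```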


Re: [1.9 HEAD] HAProxy using 100% CPU

2019-05-09 Thread Maciej Zdeb
Hi again,

I have bad news, HAProxy 1.9.7-35b44da still looping :/

gdb session:
h2_process_mux (h2c=0x1432420) at src/mux_h2.c:2609
2609            list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb) n
2610                    if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
(gdb)
2609            list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2610                    if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
(gdb)
2609            list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2613                    if (!LIST_ISEMPTY(&h2s->sending_list))
(gdb)
2619                    if (!h2s->send_wait) {
(gdb)
2620                            LIST_DEL_INIT(&h2s->list);
(gdb)
2609            list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2610                    if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
(gdb)
2609            list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2610                    if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
(gdb)
2609            list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2613                    if (!LIST_ISEMPTY(&h2s->sending_list))
(gdb)
2619                    if (!h2s->send_wait) {
(gdb)
2620                            LIST_DEL_INIT(&h2s->list);
(gdb)
2609            list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2610                    if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
(gdb)
2609            list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2610                    if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
(gdb)
2609            list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb)
2613                    if (!LIST_ISEMPTY(&h2s->sending_list))
(gdb)
2619                    if (!h2s->send_wait) {
(gdb)
2620                            LIST_DEL_INIT(&h2s->list);
(gdb)
2609            list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb) p *h2s
$1 = {cs = 0x1499030, sess = 0x819580 , h2c = 0x1432420, h1m
= {state = H1_MSG_DONE, flags = 29, curr_len = 0, body_len = 111976, next =
411, err_pos = -1, err_state = 0}, by_id = {node = {branches = {b =
{0x13dcf50,
  0x15b3120}}, node_p = 0x125ab90, leaf_p = 0x15b3121, bit = 1, pfx
= 0}, key = 11}, id = 11, flags = 28675, mws = 977198, errcode =
H2_ERR_NO_ERROR, st = H2_SS_CLOSED, status = 200, body_len = 0, rxbuf =
{size = 0, area = 0x0,
data = 0, head = 0}, wait_event = {task = 0x15077a0, handle = 0x0,
events = 0}, recv_wait = 0x0, send_wait = 0x0, list = {n = 0x15b31a8, p =
0x15b31a8}, sending_list = {n = 0x15b31b8, p = 0x15b31b8}}
(gdb) p *h2s_back
$2 = {cs = 0x1499030, sess = 0x819580 , h2c = 0x1432420, h1m
= {state = H1_MSG_DONE, flags = 29, curr_len = 0, body_len = 111976, next =
411, err_pos = -1, err_state = 0}, by_id = {node = {branches = {b =
{0x13dcf50,
  0x15b3120}}, node_p = 0x125ab90, leaf_p = 0x15b3121, bit = 1, pfx
= 0}, key = 11}, id = 11, flags = 28675, mws = 977198, errcode =
H2_ERR_NO_ERROR, st = H2_SS_CLOSED, status = 200, body_len = 0, rxbuf =
{size = 0, area = 0x0,
data = 0, head = 0}, wait_event = {task = 0x15077a0, handle = 0x0,
events = 0}, recv_wait = 0x0, send_wait = 0x0, list = {n = 0x15b31a8, p =
0x15b31a8}, sending_list = {n = 0x15b31b8, p = 0x15b31b8}}
(gdb) p *h2c
$3 = {conn = 0x17e3310, st0 = H2_CS_FRAME_H, errcode = H2_ERR_NO_ERROR,
flags = 0, streams_limit = 100, max_id = 13, rcvd_c = 0, rcvd_s = 0, ddht =
0x1e99a40, dbuf = {size = 0, area = 0x0, data = 0, head = 0}, dsi = 13, dfl
= 4,
  dft = 8 '\b', dff = 0 '\000', dpl = 0 '\000', last_sid = -1, mbuf = {size
= 16384, area = 0x1e573a0 "", data = 13700, head = 0}, msi = -1, mfl = 0,
mft = 0 '\000', mff = 0 '\000', miw = 65535, mws = 10159243, mfs = 16384,
  timeout = 2, shut_timeout = 2, nb_streams = 2, nb_cs = 3,
nb_reserved = 0, stream_cnt = 7, proxy = 0xb85fc0, task = 0x126aa30,
streams_by_id = {b = {0x125ab91, 0x0}}, send_list = {n = 0x15b31a8, p =
0x125ac18}, fctl_list = {
n = 0x14324f8, p = 0x14324f8}, sending_list = {n = 0x1432508, p =
0x1432508}, buf_wait = {target = 0x0, wakeup_cb = 0x0, list = {n =
0x1432528, p = 0x1432528}}, wait_event = {task = 0x1420fa0, handle = 0x0,
events = 1}}
(gdb) p list
$4 = (int *) 0x0
(gdb) n
2610                    if (h2c->st0 >= H2_CS_ERROR || h2c->flags & H2_CF_MUX_BLOCK_ANY)
(gdb) n
2609            list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb) p *h2s
$5 = {cs = 0x1499030, sess = 0x819580 , h2c = 0x1432420, h1m
= {state = H1_MSG_DONE, flags = 29, curr_len = 0, body_len = 111976, next =
411, err_pos = -1, err_state = 0}, by_id = {node = {branches = {b =
{0x13dcf50,
  0x15b3120}}, node_p = 0x125ab90, leaf_p = 0x15b3121, bit = 1, pfx
= 0}, key = 11}, id = 11, flags = 28675, mws = 977198, errcode =
H2_ERR_NO_ERROR, st = H2_SS_CLOSED, status = 200, body_len = 0, rxbuf =
{size = 0, area = 0x0,
data = 0, head = 0}, wait_event = {task = 0x15077a0, handle = 0x0,
events = 0}, recv_wait = 0x0, send_wait = 0x0, list = {n = 0x15b31a8, p =
0x15b31a8}, sending_list = {n = 

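For readers following the trace: `list_for_each_entry_safe` protects against deleting the *current* entry by pre-fetching the next one, but it cannot protect against an entry being re-inserted behind the iterator, which is how a walk over `send_list` can spin forever. A minimal self-contained model of the idiom — a simplified circular intrusive list in the spirit of HAProxy's `struct list` / `LIST_*` macros, not the real implementation:

```c
#include <stddef.h>

/* Circular doubly-linked intrusive list, simplified for illustration. */
struct list {
    struct list *n;
    struct list *p;
};

#define LIST_INIT(l)    do { (l)->n = (l)->p = (l); } while (0)
#define LIST_ISEMPTY(l) ((l)->n == (l))

static void list_add_tail(struct list *head, struct list *el)
{
    el->p = head->p;
    el->n = head;
    head->p->n = el;
    head->p = el;
}

/* Unlink an element and re-point it at itself so a later
 * LIST_ISEMPTY() on it is true. Leaving a detached or freed element
 * still linked into the list is exactly the kind of corruption the
 * thread above is chasing. */
static void list_del_init(struct list *el)
{
    el->p->n = el->n;
    el->n->p = el->p;
    LIST_INIT(el);
}

struct item {
    int id;
    struct list list;
};

/* Walk the list, detaching every element. Caching the next pointer
 * before unlinking (the "safe" part of the idiom) is what makes
 * deletion mid-walk legal. */
static int drain(struct list *head)
{
    struct list *cur = head->n, *next;
    int count = 0;

    while (cur != head) {
        next = cur->n;          /* fetch before unlinking */
        list_del_init(cur);
        count++;
        cur = next;
    }
    return count;
}
```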
Re: List operations: Auto responders

2019-05-09 Thread Willy Tarreau
On Thu, May 09, 2019 at 06:18:59PM +0200, Tim Düsterhus wrote:
> Willy,
> 
> Am 26.04.19 um 21:23 schrieb Tim Düsterhus:
> > I can confirm that the header is present now, thanks.
> 
> As an update from my side: I can't remember receiving any more
> out-of-office autoresponders after April, 26th, despite sending quite a
> few emails.

Good point, same here.

> So it appears to have worked, thanks again!

Yes it seems, thanks for the hint ;-)

Willy



Re: List operations: Auto responders

2019-05-09 Thread Tim Düsterhus
Willy,

Am 26.04.19 um 21:23 schrieb Tim Düsterhus:
> I can confirm that the header is present now, thanks.

As an update from my side: I can't remember receiving any more
out-of-office autoresponders after April, 26th, despite sending quite a
few emails.

So it appears to have worked, thanks again!

Best regards
Tim Düsterhus



Re: [PATCH] wurfl device detection build fixes and dummy library

2019-05-09 Thread Massimiliano Bellomi
Hi Christopher,

here Massimiliano, from Scientiamobile Engineering team.

We started working on your suggestions.

Doing this, I noticed that *send_log()* seems not to work when called inside
a module's init function.

e.g.
*send_log(NULL, LOG_NOTICE, "WURFL: Loading module v.%s\n",
HA_WURFL_MODULE_VERSION);*
doesn't produce any syslog message, while if called inside our fetch
functions, it works perfectly.

Digging quickly into this, the reason is that *send_log()* returns early
because logline == NULL (when called inside init).
Are we doing something wrong?
Has anything changed in the latest version of HA (it works in 1.7) that
we need to pay attention to?

Thank you in advance for any suggestion

Regards,
-Max

On Wed, Apr 24, 2019 at 6:11 PM Christopher Faulet 
wrote:

> Le 23/04/2019 à 11:10, Willy Tarreau a écrit :
> > Hi Paul,
> >
> > On Fri, Apr 19, 2019 at 06:45:22PM +0200, Paul Stephen Borile wrote:
> >> Hi Willy,
> >>
> >> fine for me, thanks for the adjustments and no problem backporting this
> to
> >> 1.9.
> >> I also confirm that the contact email address is working correctly.
> >
> > Fine thank you. I could finish the polishing (add USE_WURFL to the list
> > of known and reported build options in the makefile) and I've
> reintegrated
> > the code now.
> >
> > You probably don't see the value in having the dummy library, but for
> > developers it's invaluable. I have now updated my build script to build
> > with USE_WURFL=1 by default so that I'll now see if anything causes
> > warnings or errors there if we touch structures that are used by your
> > code.
> >
> > It would be really awesome if Device Atlas and 51Degrees could do the
> > same, as the build coverage becomes much better with very little effort
> > for everyone. David, Ben, if you read this, please have a look at
> > contrib/wurfl to get an idea of what is sufficient to have your code
> > always built by anyone. Patches welcome :-)
> >
> Hi Paul,
>
> I quickly reviewed the wurfl integration. I tested it with the dummy
> library. It really made my tests easier, many thanks.
>
> First, I have a segfault when I use the sample fetch "wurfl-get" at line
> 521. It happens when I try to retrieve an unknown data (I mean not
> listed in wurfl-information-list). Here is my config:
>
>  global
>  ...
>  wurfl-data-file /usr/share/wurfl/wurfl.zip
>  wurfl-information-list wurfl_id model_name
>  ...
>
>  frontend http
>  ...
>  http-request set-header X-WURFL-Properties
> %[wurfl-get(wurfl_id,is_tablet)] # is_tablet is not in the list
>  
>
> Then, at the beginning of the wurfl sample fetches, the channel validity
> must be checked by calling the macro CHECK_HTTP_MESSAGE_FIRST(). Otherwise,
> some processing can be performed on empty buffers or uninitialized data.
>
> Finally, the function ha_wurfl_retrieve_header() is not HTX aware. Take
> a look at other HTTP sample fetches in src/http_fetch.c.
>
> Just a suggestion. It could be cool to call it from the dummy library,
> in wurfl_lookup(). This way, we will be able to test this part.
>
> Regards,
> --
> Christopher Faulet
>
>

-- 
Massimiliano Bellomi
Senior Software Engineer
Scientiamobile Italy -  massimili...@scientiamobile.com +39 338 6990288
Milano Office : +39 02 620227260
skype: massimiliano.bellomi


Did you know your society can be made more secure and convenient without any additional infrastructure?

2019-05-09 Thread Rohit Jindal



Re: [1.9 HEAD] HAProxy using 100% CPU

2019-05-09 Thread Maciej Zdeb
I'm happy to help! :) Checking Olivier's patch.

Thanks!

czw., 9 maj 2019 o 14:34 Willy Tarreau  napisał(a):

> On Thu, May 09, 2019 at 02:31:58PM +0200, Maciej Zdeb wrote:
> > What a bad luck :D I must have compiled it just before you pushed that
> > change (segfault above is from haproxy 1.9.7-9b8ac0f).
>
> Great, so there's still some hope. I really appreciate your help and
> feedback here, such issues are extremely difficult to track down and
> even to reproduce and your help is invaluable here.
>
> Cheers,
> Willy
>


Re: [1.9 HEAD] HAProxy using 100% CPU

2019-05-09 Thread Willy Tarreau
On Thu, May 09, 2019 at 02:31:58PM +0200, Maciej Zdeb wrote:
> What a bad luck :D I must have compiled it just before you pushed that
> change (segfault above is from haproxy 1.9.7-9b8ac0f).

Great, so there's still some hope. I really appreciate your help and
feedback here, such issues are extremely difficult to track down and
even to reproduce and your help is invaluable here.

Cheers,
Willy



Re: [1.9 HEAD] HAProxy using 100% CPU

2019-05-09 Thread Willy Tarreau
On Thu, May 09, 2019 at 02:19:26PM +0200, Maciej Zdeb wrote:
> Hi Willy,
> 
> I've built 1.9 from head, unfortunately something is wrong, right now I've
> got segfault:
> 
> Core was generated by `/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p
> /var/run/haproxy.pid -D -sf 75'.
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0  0x00484ab4 in h2_process_mux (h2c=0x1cda990) at
> src/mux_h2.c:2609
> 2609            list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
> (gdb) bt f
> #0  0x00484ab4 in h2_process_mux (h2c=0x1cda990) at
> src/mux_h2.c:2609
> h2s = 0x2
  ^

So now we have the proof that the memory is corrupted! Olivier thinks
he found a possible culprit one hour ago, which could cause a detached
stream to stay referenced, and thus to corrupt the lists. I'm unsure
to what extent it can cause the presence of a dead stream in this list,
but from the beginning we're seeing wrong stuff there in your reports
and it affects the same list :-/  For reference the commit ID in 1.9
is 07a9f0265 ("BUG/MEDIUM: h2: Make sure we set send_list to NULL in
h2_detach()."). I hope that's not what you're running so that there's
still some hope. Care to double-check ?

Thanks,
Willy



Re: [1.9 HEAD] HAProxy using 100% CPU

2019-05-09 Thread Maciej Zdeb
Hi Willy,

I've built 1.9 from head, unfortunately something is wrong, right now I've
got segfault:

Core was generated by `/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p
/var/run/haproxy.pid -D -sf 75'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00484ab4 in h2_process_mux (h2c=0x1cda990) at
src/mux_h2.c:2609
2609            list_for_each_entry_safe(h2s, h2s_back, &h2c->send_list, list) {
(gdb) bt f
#0  0x00484ab4 in h2_process_mux (h2c=0x1cda990) at
src/mux_h2.c:2609
h2s = 0x2
h2s_back = 
#1  h2_send (h2c=0x1cda990) at src/mux_h2.c:2747
flags = 
conn = 0x2d12190
done = 0
sent = 0
#2  0x0048d488 in pool_free (ptr=0x1cdaa68, pool=0x0) at
include/common/memory.h:327
No locals.
#3  __b_drop (buf=) at include/common/buffer.h:112
No locals.
#4  b_drop (buf=0x1cda9b8) at include/common/buffer.h:119
No locals.
#5  b_free (buf=0x1cda9b8) at include/common/buffer.h:125
No locals.
#6  h2_release_buf (h2c=0x1cda990, bptr=0x1cda9b8) at src/mux_h2.c:407
bptr = 0x1cda9b8
#7  h2_process (h2c=0x1cda990) at src/mux_h2.c:2889
conn = 0xfffdd7b8
#8  0x004bf425 in run_thread_poll_loop (data=0x0) at
src/haproxy.c:2710
ptif = 
ptdf = 
start_lock = 0
#9  0x0001 in ?? ()
No symbol table info available.
#10 0x0041fb86 in main (argc=, argv=0x7ffe885af938)
at src/haproxy.c:3354
tids = 0x1924900
threads = 0xfffc
i = 
old_sig = {__val = {0, 140199073187936, 140199073187072,
140199071033905, 0, 140199073173696, 1, 0, 18446603340516163585,
140199073187072, 1, 140199073235400, 0, 140199073236256, 0, 24}}
blocked_sig = {__val = {1844674406710583, 18446744073709551615
}}
err = 
retry = 
limit = {rlim_cur = 100, rlim_max = 100}
errmsg = "\000\000\000\000\000\000\000\000n\000\000\000w", '\000'
,
"p1K\243\202\177\000\000\000\000\000\000\000\000\000\000`'\\\242\202\177\000\000\030\000\000\000\000\000\000\000\200f4\001\000\000\000\000>\001\000\024\000\000\000\000
\352J\243\202\177\000\000`\221\065\001\000\000\000\000\340*(\242\202\177\000\000\370\371Z\210"
pidfd = 

czw., 9 maj 2019 o 10:51 Willy Tarreau  napisał(a):

> Hi Maciej,
>
> I've just pushed a number of fixes into 1.9-master, including the one
> I was talking about, if you want to try again.
>
> Cheers,
> Willy
>


Re: Link error building haproxy-1.9.7

2019-05-09 Thread William Lallemand
On Thu, May 09, 2019 at 03:52:45AM +, Chris Packham wrote:
> Hi,
> 
> I'm encountering the following linker error when building haproxy-1.9.7
> 
>make CC=arm-softfloat-linux-gnueabi USE_OPENSSL=1
>...
>LD  haproxy
>  
> /usr/bin/../lib/gcc/arm-softfloat-linux-gnueabi/8.3.0/../../../../arm-softfloat-linux-gnueabi/bin/ld:
>  
> src/fd.o: in function `fd_rm_from_fd_list':
>   haproxy-1.9.7/src/fd.c:267: undefined reference to `__ha_cas_dw'
>   collect2: error: ld returned 1 exit status
>   Makefile:994: recipe for target 'haproxy' failed
>   make: *** [haproxy] Error 1
> 
> Eyeballing the code I think it's because USE_THREAD is not defined and 
> __ha_cas_dw is only defined when USE_THREAD is defined
> 
> 

HAProxy is not supposed to build without a TARGET argument. I can't reproduce
your problem; what is your complete make line?

-- 
William Lallemand



Re: haproxy-1.9 sanitizers finding

2019-05-09 Thread Willy Tarreau
On Wed, May 08, 2019 at 01:04:36PM +0500,  ??? wrote:
> I would like to run sanitizers before new 1.9 release is out

OK I've pushed 1.9-master with the last pending fixes. I guess it will
break on libressl, but it passes the sanitize=address (except the one
you already reported in libssl.so.1).

Willy



Re: [1.9 HEAD] HAProxy using 100% CPU

2019-05-09 Thread Willy Tarreau
Hi Maciej,

I've just pushed a number of fixes into 1.9-master, including the one
I was talking about, if you want to try again.

Cheers,
Willy