Re: HAProxy keeps using outdated IPs when backend (ELB) address changes

2018-08-27 Thread Igor Cicimov
Hi Daniel,

On Tue, Aug 28, 2018 at 1:46 AM Daniel Schneller <
daniel.schnel...@centerdevice.com> wrote:

> Hi!
>
> There seems to be some kind of problem when backend servers (in this case
> ELBs) change their IP
> addresses.
>
> At some point, apparently, the ELB behind the DNS name in my config
> changed its address(es).
> Lots of haproxys we use as sidecars on our application servers failed
> their health checks afterwards with
> L4 timeouts. For testing, I reloaded haproxy on one of them, and the error
> went away.
>
> The resolvers section has two servers: a local dnsmasq and the AWS VPC DNS
> server at the "magic"
> address 169.254.169.253.
>
> On a different instance I captured some traffic. The pcap shows the DNS
> queries and responses
> for the backend server name going to both 127.0.0.1:53 and
> 169.254.169.253:53. Both servers
> reply with the same answers, carrying the current IPs. Those are the same
> as shown by dig in a shell.
> (10.205.100.120 and 10.205.100.61).
>
> However, haproxy apparently still uses an old address 10.205.100.53 that
> the ELB probably had at
> some point -- hard to tell after the fact. In the pcap I can see "ICMP Host
> Unreachable" responses for
> attempts to connect to 10.205.100.53 on all the ports my backends specify.
>
> At first I suspected length issues, but the responses are just 174 bytes
> long.
> If needed, I can provide the pcap privately.
>
> This can be a real fun-killer when all the sidecars suddenly lose
> connection across tens of VMs...
>
> Am I missing something in my config, or is this an actual (maybe known?)
> bug?
>
> Configuration, dig output, and version info below.
>
> Kind regards,
>
> Daniel
>
>
>
>
> dig output (actual name is a little longer,
> I cut off the name for brevity and privacy).
> -
> [aws:staging-staging] root:~# dig @169.254.169.253
> loadbalancer-internal.private
>
> ; <<>> DiG 9.9.5-3ubuntu0.17-Ubuntu <<>> @169.254.169.253
> loadbalancer-internal.private
> ; (1 server found)
> ;; global options: +cmd
> ;; Got answer:
> ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 501
> ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
>
> ;; OPT PSEUDOSECTION:
> ; EDNS: version: 0, flags:; udp: 4096
> ;; QUESTION SECTION:
> ;loadbalancer-internal.private. IN A
>
> ;; ANSWER SECTION:
> loadbalancer-internal.private. 11 IN A 10.205.100.120
> loadbalancer-internal.private. 11 IN A 10.205.100.61
>
> ;; Query time: 0 msec
> ;; SERVER: 169.254.169.253#53(169.254.169.253)
> ;; WHEN: Mon Aug 27 15:51:48 CEST 2018
> ;; MSG SIZE  rcvd: 141
>
>
>
> [aws:staging-staging] root:~# dig @127.0.0.1 loadbalancer-internal.private
>
> ; <<>> DiG 9.9.5-3ubuntu0.17-Ubuntu <<>> @127.0.0.1
> loadbalancer-internal.private
> ; (1 server found)
> ;; global options: +cmd
> ;; Got answer:
> ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20706
> ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0
>
> ;; QUESTION SECTION:
> ;loadbalancer-internal.private. IN A
>
> ;; ANSWER SECTION:
> loadbalancer-internal.private. 5 IN A 10.205.100.61
> loadbalancer-internal.private. 5 IN A 10.205.100.120
>
> ;; Query time: 0 msec
> ;; SERVER: 127.0.0.1#53(127.0.0.1)
> ;; WHEN: Mon Aug 27 15:51:54 CEST 2018
> ;; MSG SIZE  rcvd: 130
> -
>
>
>
> haproxy.cfg (there are more proxies in the real thing, but they are all
> the same,
> just for different ports):
> -
> global
>   log /dev/log len 350 local1 info
>   log-tag haproxy
>   stats socket /var/run/haproxy.stat user haproxy group haproxy mode 600
> level admin
>   chroot /var/lib/haproxy
>   user haproxy
>   group haproxy
>   hard-stop-after 30s
>
>
> defaults
>   mode tcp
>   log global
>   option tcplog
>   option dontlognull
>   option http-keep-alive
>   timeout http-request 10s
>   timeout queue 1m
>   timeout connect 5s
>   timeout client 2m
>   timeout server 2m
>   timeout http-keep-alive 10s
>   timeout check 5s
>   retries 3
>   maxconn 2000
>
> resolvers default
>   nameserver local 127.0.0.1:53
>   nameserver aws 169.254.169.253:53
>
>
Maybe try tuning the "hold valid" parameter, see
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#5.3.2
The default value is 30s, so setting it to 1s would make more sense when the
backend IPs change often.
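Concretely, that would mean adding a single line to the resolvers section quoted above (sketch only; the 1s value is just the suggestion above, so pick whatever matches how often the ELB addresses actually move):

```
resolvers default
  nameserver local 127.0.0.1:53
  nameserver aws 169.254.169.253:53
  hold valid 1s
```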



>
> listen rabbitmq
>   bind 127.0.0.1:5671
>   option dontlog-normal
>   server lb-internal loadbalancer-internal.private:5671 resolvers default
> check addr loadbalancer-internal.private port 5671
> -
>
>
>
> Log:
> 
> ...
> Aug 27 16:49:09 xxx haproxy[2090]: 127.0.0.1:35891
> [27/Aug/2018:16:49:09.031] rabbitmq rabbitmq/ -1/-1/0 0 SC 0/0/0/0/0
> 0/0
> ...
> 
>
>
>
> Version info:
> -
> [aws:staging-staging] root:~# haproxy -vvv
> HA-Proxy version 1.7.11-1ppa1~trusty 2018/04/30
> Copyright 2000-2018 Willy Tarreau 
>
> Build options :
>   TARGET  = linux2628
>   CPU = generic
>   CC  = gcc
>   CFLAGS  

RE: URL rewrite

2018-08-27 Thread Norman Branitsky
Your examples are all correct.

-Original Message-
From: Tim Düsterhus  
Sent: Monday, August 27, 2018 6:22 PM
To: Norman Branitsky ; haproxy 

Subject: Re: URL rewrite

Norman,

Am 27.08.2018 um 23:45 schrieb Norman Branitsky:
> I need to rewrite my URLs according to the following pattern:
> cloud.example.com/?query
> becomes:
> .cloud.example.com/main?query
> 
> HAProxy will terminate SSL - I have a wildcard certificate for 
> *.cloud.example.com.
> As the target servers are running Docker Enterprise, I do not need DNS 
> entries for every possible instance of  as Docker EE will handle 
> this internally.
> Is there a way to do this?
> 

What exactly do you mean by rewrite? Do you want that when a user requests

https://cloud.example.com/?query

in their web browser it gets proxied to a backend running at

https://.cloud.example.com/main?query

? Is it possible that a path follows that you need to preserve:

https://cloud.example.com///?query
to
https://.cloud.example.com/main//?query

?

Best regards
Tim Düsterhus



Re: URL rewrite

2018-08-27 Thread Tim Düsterhus
Norman,

Am 27.08.2018 um 23:45 schrieb Norman Branitsky:
> I need to rewrite my URLs according to the following pattern:
> cloud.example.com/?query
> becomes:
> .cloud.example.com/main?query
> 
> HAProxy will terminate SSL - I have a wildcard certificate for 
> *.cloud.example.com.
> As the target servers are running Docker Enterprise, I do not need DNS 
> entries for
> every possible instance of  as Docker EE will handle this 
> internally.
> Is there a way to do this?
> 

What exactly do you mean by rewrite? Do you want that when a user requests

https://cloud.example.com/?query

in their web browser it gets proxied to a backend running at

https://.cloud.example.com/main?query

? Is it possible that a path follows that you need to preserve:

https://cloud.example.com///?query
to
https://.cloud.example.com/main//?query

?

Best regards
Tim Düsterhus



URL rewrite

2018-08-27 Thread Norman Branitsky
I need to rewrite my URLs according to the following pattern:
cloud.example.com/?query
becomes:
.cloud.example.com/main?query

HAProxy will terminate SSL - I have a wildcard certificate for 
*.cloud.example.com.
As the target servers are running Docker Enterprise, I do not need DNS entries 
for
every possible instance of  as Docker EE will handle this 
internally.
Is there a way to do this?
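An untested sketch of one way this could be expressed, assuming the tenant name is the first path segment (the archive stripped the placeholder from the mail, so `tenant1` is purely illustrative, as are the `fe_cloud`/`docker_ee` names and the certificate path):

```
frontend fe_cloud
  mode http
  bind :443 ssl crt /etc/haproxy/wildcard.cloud.example.com.pem
  # extract the first path segment, e.g. /tenant1?query -> tenant1
  http-request set-var(req.tenant) path,field(2,/)
  # rewrite Host to <tenant>.cloud.example.com; set-path keeps the query string
  http-request set-header Host %[var(req.tenant)].cloud.example.com
  http-request set-path /main
  default_backend docker_ee
```

The `docker_ee` backend would then route on the rewritten Host header, which Docker EE resolves internally as described above.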


Re: lua script, 200% cpu usage with nbthread 3 - haproxy hangs - __spin_lock - HA-Proxy version 1.9-dev1-e3faf02 2018/08/25

2018-08-27 Thread PiBa-NL

Hi Frederic, Oliver,

Thanks for your investigations :).
I've made a little reg-test (files attached). It's probably not 'correct'
to commit as-is, but should be enough to get a reproduction, I hope.


Changing it to nbthread 1 makes it work every time (that I tried).

The test actually seems to show a variety of issues.
## Every once in a while it takes about 7 seconds to run a test, during
which CPU usage is high:


     c0    7.6 HTTP rx timeout (fd:5 7500 ms)

## But most of the time, it just doesn't finish with a correct result
(I've also seen haproxy dump core while testing). There is of
course the option that I did something wrong in the lua as well...


Does the test itself work for you guys? (with nbthread 1)

Did I do something crazy in the lua code? I do have several loops,
but I don't think that's where it 'hangs'?


Regards,

PiBa-NL (Pieter)

Luacurl = {}
Luacurl.__index = Luacurl
setmetatable(Luacurl, {
__call = function (cls, ...)
return cls.new(...)
end,
})
function Luacurl.new(server, port, ssl)
local self = setmetatable({}, Luacurl)
self.sockconnected = false
self.server = server
self.port = port
self.ssl = ssl
self.cookies = {}
return self
end

function Luacurl:get(method,url,headers,data)
core.Info("MAKING SOCKET")
if self.sockconnected == false then
  self.sock = core.tcp()
  if self.ssl then
local r = self.sock:connect_ssl(self.server,self.port)
  else
local r = self.sock:connect(self.server,self.port)
  end
  self.sockconnected = true
end
core.Info("SOCKET MADE")
local request = method.." "..url.." HTTP/1.1"
if data ~= nil then
request = request .. "\r\nContent-Length: "..string.len(data)
end
if headers ~= nil then
for h,v in pairs(headers) do
request = request .. "\r\n"..h..": "..v
end
end
local cookstring = ""
for cook,cookval in pairs(self.cookies) do
cookstring = cookstring .. cook.."="..cookval.."; "
end
if string.len(cookstring) > 0 then
request = request .. "\r\nCookie: "..cookstring
end

request = request .. "\r\n\r\n"
if data and string.len(data) > 0 then
request = request .. data
end
--print(request)
core.Info("SENDING REQUEST")
self.sock:send(request)

--  core.Info("PROCESSING RESPONSE")
return processhttpresponse(self.sock)
end

function processhttpresponse(socket)
local res = {}
core.Info("1")
res.status = socket:receive("*l")
core.Info("2")

if res.status == nil then
core.Info(" processhttpresponse RECEIVING status: NIL")
return res
end
core.Info(" processhttpresponse RECEIVING status:"..res.status)
res.headers = {}
res.headerslist = {}
repeat
core.Info("3")
local header = socket:receive("*l")
if header == nil then
return "error"
end
local valuestart = header:find(":")
if valuestart ~= nil then
local head = header:sub(1,valuestart-1)
local value = header:sub(valuestart+2)
table.insert(res.headerslist, {head,value})
res.headers[head] = value
end
until header == ""
local bodydone = false
if res.headers["Connection"] ~= nil and res.headers["Connection"] == "close" then
--  core.Info("luacurl processresponse with connection:close")
res.body = ""
repeat
core.Info("4")
local d = socket:receive("*a")
if d ~= nil then
res.body = res.body .. d
end
until d == nil or d == 0
bodydone = true
end
if bodydone == false and res.headers["Content-Length"] ~= nil then
res.contentlength = tonumber(res.headers["Content-Length"])
if res.contentlength == nil then
  core.Warning("res.contentlength ~NIL = "..res.headers["Content-Length"])
end
--  core.Info("luacur, contentlength="..res.contentlength)
res.body = ""
repeat
local d = socket:receive(res.contentlength)
if d == nil then
--  core.Info("luacurl, ERROR?: received NIL, expecting "..res.contentlength.." bytes, only got "..string.len(res.body).." so far")
return
else
res.body = res.body..d
--

Re: HTTP response sent in TCP FIN packet - Haproxy 1.8.13 on Ubuntu 16.04

2018-08-27 Thread Pieter Thysebaert
Thanks both,

that explains it!

I was confused because we have a production HAproxy system that also has
"mode http" in the defaults but sends an empty FIN ACK in this case.
That is because that HAProxy is SSL-enabled, however, so it makes sense now!

Kind regards,
Pieter

On Mon, Aug 27, 2018 at 5:28 PM Willy Tarreau  wrote:

> Hi Aleks, Pieter,
>
> On Mon, Aug 27, 2018 at 04:26:29PM +0200, Aleksandar Lazic wrote:
> > Hi.
> >
> > Am 27.08.2018 um 15:03 schrieb Pieter Thysebaert:
> (...)
> > > defaults
> > > log global
> > > mode    http
> >
> > The default mode is http.
> > When you change to tcp no http error message will be send.
>
> Sure but that's very likely not the goal here :-) The 400 is returned
> by default if an invalid or incomplete response is sent on the socket.
> If it is desired that an empty connection is not accounted as an error
> for example because you have a monitoring system sending probes to check
> the ports, then it's possible to do it by adding this line to the frontend
> :
>
>  option http-ignore-probes
>
> However, be careful as it also means that requests truncated due to MTU
> issues caused by VPNs will not be detected. It really depends on the
> environment. I'd say as a rule of thumb, do not use this option unless
> you're annoyed by logs of empty requests or the error response causes
> trouble to a picky client.
>
> Regards,
> Willy
>


HAProxy keeps using outdated IPs when backend (ELB) address changes

2018-08-27 Thread Daniel Schneller
Hi!

There seems to be some kind of problem when backend servers (in this case ELBs) 
change their IP
addresses.

At some point, apparently, the ELB behind the DNS name in my config changed its
address(es).
Lots of haproxys we use as sidecars on our application servers failed their 
health checks afterwards with
L4 timeouts. For testing, I reloaded haproxy on one of them, and the error went 
away.

The resolvers section has two servers: a local dnsmasq and the AWS VPC DNS 
server at the "magic"
address 169.254.169.253.

On a different instance I captured some traffic. The pcap shows the DNS queries 
and responses
for the backend server name going to both 127.0.0.1:53 and 169.254.169.253:53. 
Both servers
reply with the same answers, carrying the current IPs. Those are the same as 
shown by dig in a shell.
(10.205.100.120 and 10.205.100.61).

However, haproxy apparently still uses an old address 10.205.100.53 that the
ELB probably had at
some point -- hard to tell after the fact. In the pcap I can see "ICMP Host
Unreachable" responses for
attempts to connect to 10.205.100.53 on all the ports my backends specify.

At first I suspected length issues, but the responses are just 174 bytes long.
If needed, I can provide the pcap privately.

This can be a real fun-killer when all the sidecars suddenly lose connection 
across tens of VMs...

Am I missing something in my config, or is this an actual (maybe known?) bug?

Configuration, dig output, and version info below.

Kind regards,

Daniel




dig output (actual name is a little longer,
I cut off the name for brevity and privacy).
-
[aws:staging-staging] root:~# dig @169.254.169.253 loadbalancer-internal.private

; <<>> DiG 9.9.5-3ubuntu0.17-Ubuntu <<>> @169.254.169.253 
loadbalancer-internal.private
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 501
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;loadbalancer-internal.private. IN A

;; ANSWER SECTION:
loadbalancer-internal.private. 11 IN A 10.205.100.120
loadbalancer-internal.private. 11 IN A 10.205.100.61

;; Query time: 0 msec
;; SERVER: 169.254.169.253#53(169.254.169.253)
;; WHEN: Mon Aug 27 15:51:48 CEST 2018
;; MSG SIZE  rcvd: 141



[aws:staging-staging] root:~# dig @127.0.0.1 loadbalancer-internal.private

; <<>> DiG 9.9.5-3ubuntu0.17-Ubuntu <<>> @127.0.0.1 
loadbalancer-internal.private
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20706
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;loadbalancer-internal.private. IN A

;; ANSWER SECTION:
loadbalancer-internal.private. 5 IN A 10.205.100.61
loadbalancer-internal.private. 5 IN A 10.205.100.120

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Mon Aug 27 15:51:54 CEST 2018
;; MSG SIZE  rcvd: 130
-



haproxy.cfg (there are more proxies in the real thing, but they are all the
same,
just for different ports):
-
global
  log /dev/log len 350 local1 info
  log-tag haproxy
  stats socket /var/run/haproxy.stat user haproxy group haproxy mode 600 level 
admin
  chroot /var/lib/haproxy
  user haproxy
  group haproxy
  hard-stop-after 30s


defaults
  mode tcp
  log global
  option tcplog
  option dontlognull
  option http-keep-alive
  timeout http-request 10s
  timeout queue 1m
  timeout connect 5s
  timeout client 2m
  timeout server 2m
  timeout http-keep-alive 10s
  timeout check 5s
  retries 3
  maxconn 2000

resolvers default
  nameserver local 127.0.0.1:53
  nameserver aws 169.254.169.253:53


listen rabbitmq
  bind 127.0.0.1:5671
  option dontlog-normal
  server lb-internal loadbalancer-internal.private:5671 resolvers default check 
addr loadbalancer-internal.private port 5671
-



Log:

...
Aug 27 16:49:09 xxx haproxy[2090]: 127.0.0.1:35891 [27/Aug/2018:16:49:09.031] 
rabbitmq rabbitmq/ -1/-1/0 0 SC 0/0/0/0/0 0/0
...




Version info:
-
[aws:staging-staging] root:~# haproxy -vvv
HA-Proxy version 1.7.11-1ppa1~trusty 2018/04/30
Copyright 2000-2018 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -g -O2 -fPIE -fstack-protector --param=ssp-buffer-size=4 -Wformat 
-Werror=format-security -D_FORTIFY_SOURCE=2
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 
USE_PCRE=1 USE_PCRE_JIT=1 USE_NS=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.1f 6 Jan 2014
Running 

Re: HTTP response sent in TCP FIN packet - Haproxy 1.8.13 on Ubuntu 16.04

2018-08-27 Thread Willy Tarreau
Hi Aleks, Pieter,

On Mon, Aug 27, 2018 at 04:26:29PM +0200, Aleksandar Lazic wrote:
> Hi.
> 
> Am 27.08.2018 um 15:03 schrieb Pieter Thysebaert:
(...)
> > defaults
> >     log global
> >     mode    http
> 
> The default mode is http.
> When you change to tcp, no http error message will be sent.

Sure but that's very likely not the goal here :-) The 400 is returned
by default if an invalid or incomplete response is sent on the socket.
If it is desired that an empty connection is not accounted as an error
for example because you have a monitoring system sending probes to check
the ports, then it's possible to do it by adding this line to the frontend :

 option http-ignore-probes

However, be careful as it also means that requests truncated due to MTU
issues caused by VPNs will not be detected. It really depends on the
environment. I'd say as a rule of thumb, do not use this option unless
you're annoyed by logs of empty requests or the error response causes
trouble to a picky client.

Regards,
Willy
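For reference, the option would sit in the frontend of the config quoted earlier in this thread; a sketch only, reusing Pieter's frontend and backend names:

```
frontend app
    bind :444
    mode http
    # silently ignore connections that open and close without sending
    # a complete request (e.g. monitoring probes), instead of logging
    # them and returning a 400 error page
    option http-ignore-probes
    default_backend python
```

As Willy notes above, this also hides genuinely truncated requests, so it is a trade-off rather than a default to reach for.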



Re: [PATCH 2/2] MINOR: Add srv_conn_free sample fetch

2018-08-27 Thread Patrick Hemmer


On 2018/8/22 04:05, Willy Tarreau wrote:
> On Thu, Aug 09, 2018 at 06:46:29PM -0400, Patrick Hemmer wrote:
>> This adds the 'srv_conn_free([/])' sample fetch. This fetch
>> provides the number of available connections on the designated server.
> Fine with this as well, though just like with the previous one, I
> disagree with this special case of -1 and would rather only count
> the really available connections (i.e. 0 if it's not possible to
> use the server).
>
> Willy

Adjusted from previous submission to handle dynamic maxconn, maxconn <
currconn, and cleanup documentation note.

-Patrick
From 2e3a908f229a1fcc11381c602aa131284b165a63 Mon Sep 17 00:00:00 2001
From: Patrick Hemmer 
Date: Thu, 14 Jun 2018 18:01:35 -0400
Subject: [PATCH] MINOR: Add srv_conn_free sample fetch

This adds the 'srv_conn_free([/])' sample fetch. This fetch
provides the number of available connections on the designated server.
---
 doc/configuration.txt | 19 ---
 src/backend.c | 28 
 2 files changed, 44 insertions(+), 3 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 6eec8c10b..513ef0c49 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -13690,7 +13690,8 @@ be_conn_free([]) : integer
   servers are also not included, unless all other servers are down. If no
   backend name is specified, the current one is used. But it is also possible
   to check another backend. It can be used to use a specific farm when the
-  nominal one is full. See also the "be_conn" and "connslots" criteria.
+  nominal one is full. See also the "be_conn", "connslots", and "srv_conn_free"
+  criteria.
 
   OTHER CAVEATS AND NOTES: if any of the server maxconn, or maxqueue is 0
   (meaning unlimited), then this fetch clearly does not make sense, in which
@@ -13908,8 +13909,20 @@ srv_conn([/]) : integer
   evaluated. If  is omitted, then the server is looked up in the
   current backend. It can be used to use a specific farm when one server is
   full, or to inform the server about our view of the number of active
-  connections with it. See also the "fe_conn", "be_conn" and "queue" fetch
-  methods.
+  connections with it. See also the "fe_conn", "be_conn", "queue", and
+  "srv_conn_free" fetch methods.
+
+srv_conn_free([/]) : integer
+  Returns an integer value corresponding to the number of available connections
+  on the designated server, possibly including the connection being evaluated.
+  The value does not include queue slots. If  is omitted, then the
+  server is looked up in the current backend. It can be used to use a specific
+  farm when one server is full, or to inform the server about our view of the
+  number of active connections with it. See also the "be_conn_free" and
+  "srv_conn" fetch methods.
+
+  OTHER CAVEATS AND NOTES: If the server maxconn is 0, then this fetch clearly
+  does not make sense, in which case the value returned will be -1.
 
 srv_is_up([/]) : boolean
   Returns true when the designated server is UP, and false when it is either
diff --git a/src/backend.c b/src/backend.c
index 01bd4b161..5a22b0fd0 100644
--- a/src/backend.c
+++ b/src/backend.c
@@ -1886,6 +1886,33 @@ smp_fetch_srv_conn(const struct arg *args, struct sample 
*smp, const char *kw, v
return 1;
 }
 
+/* set temp integer to the number of available connections on the server in 
the backend.
+ * Accepts exactly 1 argument. Argument is a server, other types will lead to
+ * undefined behaviour.
+ */
+static int
+smp_fetch_srv_conn_free(const struct arg *args, struct sample *smp, const char 
*kw, void *private)
+{
+   unsigned int maxconn;
+
+   smp->flags = SMP_F_VOL_TEST;
+   smp->data.type = SMP_T_SINT;
+
+   if (args->data.srv->maxconn == 0) {
+   /* one active server is unlimited, return -1 */
+   smp->data.u.sint = -1;
+   return 1;
+   }
+
+   maxconn = srv_dynamic_maxconn(args->data.srv);
+   if (maxconn > args->data.srv->cur_sess)
+   smp->data.u.sint = maxconn - args->data.srv->cur_sess;
+   else
+   smp->data.u.sint = 0;
+
+   return 1;
+}
+
 /* set temp integer to the number of connections pending in the server's queue.
  * Accepts exactly 1 argument. Argument is a server, other types will lead to
  * undefined behaviour.
@@ -1945,6 +1972,7 @@ static struct sample_fetch_kw_list smp_kws = {ILH, {
{ "nbsrv", smp_fetch_nbsrv,  ARG1(1,BE),  NULL, 
SMP_T_SINT, SMP_USE_INTRN, },
{ "queue", smp_fetch_queue_size, ARG1(1,BE),  NULL, 
SMP_T_SINT, SMP_USE_INTRN, },
{ "srv_conn",  smp_fetch_srv_conn,   ARG1(1,SRV), NULL, 
SMP_T_SINT, SMP_USE_INTRN, },
+   { "srv_conn_free", smp_fetch_srv_conn_free,  ARG1(1,SRV), NULL, 
SMP_T_SINT, SMP_USE_INTRN, },
{ "srv_id",smp_fetch_srv_id, 0,   NULL, 
SMP_T_SINT, SMP_USE_SERVR, },
{ "srv_is_up",
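As an illustration of how the new fetch might be used in a configuration (not part of the patch; the backend and server names are invented):

```
# sketch: overflow to a second farm once the primary server has no free
# connection slots; per the patch, srv_conn_free returns -1 when the
# server's maxconn is 0 (unlimited), so the condition never matches then
frontend fe_main
  bind :8080
  use_backend bk_spillover if { srv_conn_free(bk_primary/srv1) eq 0 }
  default_backend bk_primary
```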

Re: [PATCH 2/2] MINOR: Add srv_conn_free sample fetch

2018-08-27 Thread Willy Tarreau
On Mon, Aug 27, 2018 at 10:27:57AM -0400, Patrick Hemmer wrote:
> Adjusted from previous submission to handle dynamic maxconn, maxconn <
> currconn, and cleanup documentation note.

Applied, thanks Patrick.

Willy



Re: HTTP response sent in TCP FIN packet - Haproxy 1.8.13 on Ubuntu 16.04

2018-08-27 Thread Aleksandar Lazic
Hi.

Am 27.08.2018 um 15:03 schrieb Pieter Thysebaert:
> Hi,
> 
> I am running HA-Proxy version 1.8.13-1ppa1~xenial 2018/08/01 on
> Linux backendsc01 4.15.0-1021-azure #21~16.04.1-Ubuntu SMP (in Azure).
> 
> This is a simple test setup; HAproxy is listening on port 444 (no SSL), the
> backend is a Python SimpleHTTPServer on port 8000 on localhost.
> 
> When I setup a TCP connection (nc -zw30 127.0.0.1 444), the packet capture 
> shows
> the expected SYN, SYN ACK, ACK then FIN, ACK, FIN ACK sequence, however:
> 
> The FIN ACK packet sent back by HAProxy includes the content of the error-400
> page (even if no HTTP client was used to connect in the first place) followed 
> by
> a RST from the client - see Wireshark screenshot.
> 
> I am looking for the configuration items in the haproxy.cfg / system
> configuration items that enable this behaviour.  How would I get back a plain
> FIN ACK from the server and not trigger a client RST in this case (TCP
> connection setup, no HTTP request sent)? 

That's expected.

> For reference, my HAProxy config (that shows this behaviour):
> global

[snipp]

> defaults
>     log global
>     mode    http

The default mode is http.
When you change to tcp, no http error message will be sent.

>     option  httplog

[snipp]

>     errorfile 400 /etc/haproxy/errors/400.http
> 
> backend python
>     mode http

Here also http mode.
What does the pythonserver send back when you don't send a valid http request?

Does the behavior change when you change to tcp?

>     server pythonserver 127.0.0.1:8000
> 
> frontend app
>     bind :444
>     default_backend python
> 
> 
> Kind regards,
> Pieter

Best regards
Aleks



ca-file with verify required and multiple root ca's

2018-08-27 Thread Coen Rosdorff
Hi all,

We have a customer who wants to protect a site with client certificates.
However, the client certificates are created with two different root CAs.

If we configure one CA cert in the ca-file everything works great.
When I add the second CA, clients with a cert from the first CA are still
allowed, but clients with certificates from the second CA are refused.
If I change the order of the CA certificates, it's just the other way around.

Example of our configuration:

-

frontend frontend_with_ca
mode http
bind 10.11.12.13:443 ssl crt-list
/etc/haproxy/crt-list-frontend_with_ca transparent no-tlsv10 no-tlsv11
ca-file /etc/haproxy/trusted_ca.pem verify required

-


Is it possible to allow client certificates from two different root CAs
in one frontend?

We are using HA-Proxy version 1.8.12 from IUS.


Thanks in advance!

Kind regards,
Coen


Re: lua script, 200% cpu usage with nbthread 3 - haproxy hangs - __spin_lock - HA-Proxy version 1.9-dev1-e3faf02 2018/08/25

2018-08-27 Thread Frederic Lecaille

On 08/27/2018 03:09 PM, Olivier Houchard wrote:

On Mon, Aug 27, 2018 at 02:29:42PM +0200, Frederic Lecaille wrote:

On 08/27/2018 01:33 PM, Olivier Houchard wrote:

Hi Pieter,

On Sat, Aug 25, 2018 at 10:00:04PM +0200, PiBa-NL wrote:

Hi List, Thierry, Olivier,

Using a lua-socket with connect_ssl and haproxy running with nbthread 3..
results in haproxy hanging with 3 threads for me.

This while using both 1.9-7/30 version (with the 2 extra patches from
Olivier avoiding 100% on a single thread.) and also a build of today's
snapshot: HA-Proxy version 1.9-dev1-e3faf02 2018/08/25

Below info is at the bottom of the mail:
- haproxy -vv
- gdb backtraces

This one is easy to reproduce after just a few calls to the lua function
with the lua code I'm writing on a test-box. So if a 'simple' config that
makes a reproduction is desired I can likely come up with one.
Same lua code with nbthread 1 seems to work properly.

Is below info (the stack traces) enough to come up with a fix? If not,
let me know and I'll try to make a small reproduction of it.


root@freebsd11:~ # haproxy -vv
HA-Proxy version 1.9-dev1-e3faf02 2018/08/25
Copyright 2000-2018 Willy Tarreau 

Build options :
    TARGET  = freebsd
    CPU = generic
    CC  = cc
    CFLAGS  = -DDEBUG_THREAD -DDEBUG_MEMORY -pipe -g -fstack-protector
-fno-strict-aliasing -fno-strict-aliasing -Wdeclaration-after-statement
-fwrapv -fno-strict-overflow -Wno-address-of-packed-member
-Wno-null-dereference -Wno-unused-label -DFREEBSD_PORTS -DFREEBSD_PORTS
    OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_CPU_AFFINITY=1 USE_ACCEPT4=1
USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_STATIC_PCRE=1 USE_PCRE_JIT=1

Default settings :
    maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with network namespace support.
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"),
raw-deflate("deflate"), gzip("gzip")
Built with PCRE version : 8.40 2017-01-11
Running on PCRE version : 8.40 2017-01-11
PCRE library supports JIT : yes
Built with multi-threading support.
Encrypted password support via crypt(3): yes
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY
Built with Lua version : Lua 5.3.4
Built with OpenSSL version : OpenSSL 1.0.2k-freebsd  26 Jan 2017
Running on OpenSSL version : OpenSSL 1.0.2k-freebsd  26 Jan 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2

Available polling systems :
   kqueue : pref=300,  test result OK
     poll : pref=200,  test result OK
   select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.

Available multiplexer protocols :
(protocols marked as  cannot be specified using 'proto' keyword)
      : mode=TCP|HTTP   side=FE|BE
    h2 : mode=HTTP   side=FE

Available filters :
      [TRACE] trace
      [COMP] compression
      [SPOE] spoe

root@freebsd11:~ # /usr/local/bin/gdb81 --pid 39649
GNU gdb (GDB) 8.1 [GDB v8.1 for FreeBSD]
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later

This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-portbld-freebsd11.1".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
.
Find the GDB manual and other documentation resources online at:
.
For help, type "help".
Type "apropos word" to search for commands related to "word".
Attaching to process 39649
Reading symbols from /usr/local/sbin/haproxy...done.
[New LWP 101651 of process 39649]
[New LWP 101652 of process 39649]
Reading symbols from /lib/libcrypt.so.5...(no debugging symbols
found)...done.
Reading symbols from /lib/libz.so.6...(no debugging symbols found)...done.
Reading symbols from /lib/libthr.so.3...(no debugging symbols found)...done.
Reading symbols from /usr/lib/libssl.so.8...(no debugging symbols
found)...done.
Reading symbols from /lib/libcrypto.so.8...(no debugging symbols
found)...done.
Reading symbols from /usr/local/lib/liblua-5.3.so...(no debugging symbols
found)...done.
Reading symbols from /lib/libm.so.5...(no debugging symbols found)...done.
Reading symbols from /lib/libc.so.7...(no debugging symbols found)...done.
Reading symbols from /libexec/ld-elf.so.1...(no debugging symbols
found)...done.
[Switching to LWP 101650 of process 39649]
0x000801e11e3a in _kevent () from /lib/libc.so.7
(gdb) info thread
    Id   Target Id Frame
* 1    LWP 101650 of process 39649 0x000801e11e3a in _kevent () from
/lib/libc.so.7
    2    LWP 101651 of process 39649 0x00437b92 in __spin_lock
(lbl=LUA_LOCK, 

Re: lua script, 200% cpu usage with nbthread 3 - haproxy hangs - __spin_lock - HA-Proxy version 1.9-dev1-e3faf02 2018/08/25

2018-08-27 Thread Olivier Houchard
On Mon, Aug 27, 2018 at 02:29:42PM +0200, Frederic Lecaille wrote:
> On 08/27/2018 01:33 PM, Olivier Houchard wrote:
> > Hi Pieter,
> > 
> > On Sat, Aug 25, 2018 at 10:00:04PM +0200, PiBa-NL wrote:
> > > Hi List, Thierry, Olivier,
> > > 
> > > Using a Lua socket with connect_ssl and haproxy running with nbthread 3
> > > results in haproxy hanging with 3 threads for me.
> > > 
> > > This is while using both the 1.9 build from 7/30 (with the 2 extra patches from
> > > Olivier avoiding 100% CPU usage on a single thread) and also a build of today's
> > > snapshot: HA-Proxy version 1.9-dev1-e3faf02 2018/08/25
> > > 
> > > Below info is at the bottom of the mail:
> > > - haproxy -vv
> > > - gdb backtraces
> > > 
> > > This one is easy to reproduce after just a few calls to the Lua function
> > > with the Lua code I'm writing on a test box, so if a 'simple' config that
> > > makes a reproduction is desired I can likely come up with one.
> > > Same Lua code with nbthread 1 seems to work properly.
> > > 
> > > Is the below info (the stack traces) enough to come up with a fix? If not,
> > > let me know and I'll try to make a small reproduction of it.
> > > 
> > > 
> > > root@freebsd11:~ # haproxy -vv
> > > HA-Proxy version 1.9-dev1-e3faf02 2018/08/25
> > > Copyright 2000-2018 Willy Tarreau 
> > > 
> > > Build options :
> > >    TARGET  = freebsd
> > >    CPU = generic
> > >    CC  = cc
> > >    CFLAGS  = -DDEBUG_THREAD -DDEBUG_MEMORY -pipe -g -fstack-protector
> > > -fno-strict-aliasing -fno-strict-aliasing -Wdeclaration-after-statement
> > > -fwrapv -fno-strict-overflow -Wno-address-of-packed-member
> > > -Wno-null-dereference -Wno-unused-label -DFREEBSD_PORTS -DFREEBSD_PORTS
> > >    OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_CPU_AFFINITY=1 USE_ACCEPT4=1
> > > USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_STATIC_PCRE=1 USE_PCRE_JIT=1
> > > 
> > > Default settings :
> > >    maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
> > > 
> > > Built with network namespace support.
> > > Built with zlib version : 1.2.11
> > > Running on zlib version : 1.2.11
> > > Compression algorithms supported : identity("identity"), 
> > > deflate("deflate"),
> > > raw-deflate("deflate"), gzip("gzip")
> > > Built with PCRE version : 8.40 2017-01-11
> > > Running on PCRE version : 8.40 2017-01-11
> > > PCRE library supports JIT : yes
> > > Built with multi-threading support.
> > > Encrypted password support via crypt(3): yes
> > > Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY
> > > Built with Lua version : Lua 5.3.4
> > > Built with OpenSSL version : OpenSSL 1.0.2k-freebsd  26 Jan 2017
> > > Running on OpenSSL version : OpenSSL 1.0.2k-freebsd  26 Jan 2017
> > > OpenSSL library supports TLS extensions : yes
> > > OpenSSL library supports SNI : yes
> > > OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
> > > 
> > > Available polling systems :
> > >   kqueue : pref=300,  test result OK
> > >     poll : pref=200,  test result OK
> > >   select : pref=150,  test result OK
> > > Total: 3 (3 usable), will use kqueue.
> > > 
> > > Available multiplexer protocols :
> > > (protocols marked as <default> cannot be specified using 'proto' keyword)
> > >    <default> : mode=TCP|HTTP   side=FE|BE
> > >    h2 : mode=HTTP   side=FE
> > > 
> > > Available filters :
> > >      [TRACE] trace
> > >      [COMP] compression
> > >      [SPOE] spoe
> > > 
> > > root@freebsd11:~ # /usr/local/bin/gdb81 --pid 39649
> > > GNU gdb (GDB) 8.1 [GDB v8.1 for FreeBSD]
> > > Copyright (C) 2018 Free Software Foundation, Inc.
> > > License GPLv3+: GNU GPL version 3 or later
> > > 
> > > This is free software: you are free to change and redistribute it.
> > > There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
> > > and "show warranty" for details.
> > > This GDB was configured as "x86_64-portbld-freebsd11.1".
> > > Type "show configuration" for configuration details.
> > > For bug reporting instructions, please see:
> > > .
> > > Find the GDB manual and other documentation resources online at:
> > > .
> > > For help, type "help".
> > > Type "apropos word" to search for commands related to "word".
> > > Attaching to process 39649
> > > Reading symbols from /usr/local/sbin/haproxy...done.
> > > [New LWP 101651 of process 39649]
> > > [New LWP 101652 of process 39649]
> > > Reading symbols from /lib/libcrypt.so.5...(no debugging symbols
> > > found)...done.
> > > Reading symbols from /lib/libz.so.6...(no debugging symbols found)...done.
> > > Reading symbols from /lib/libthr.so.3...(no debugging symbols 
> > > found)...done.
> > > Reading symbols from /usr/lib/libssl.so.8...(no debugging symbols
> > > found)...done.
> > > Reading symbols from /lib/libcrypto.so.8...(no debugging symbols
> > > found)...done.
> > > Reading symbols from /usr/local/lib/liblua-5.3.so...(no 

Re: lua script, 200% cpu usage with nbthread 3 - haproxy hangs - __spin_lock - HA-Proxy version 1.9-dev1-e3faf02 2018/08/25

2018-08-27 Thread Frederic Lecaille

On 08/27/2018 01:33 PM, Olivier Houchard wrote:

Hi Pieter,

On Sat, Aug 25, 2018 at 10:00:04PM +0200, PiBa-NL wrote:

Hi List, Thierry, Olivier,

[quoted problem description, haproxy -vv output, and gdb startup banner trimmed; identical to the previous message]
[Switching to LWP 101650 of process 39649]
0x000801e11e3a in _kevent () from /lib/libc.so.7
(gdb) info thread
   Id   Target Id Frame
* 1    LWP 101650 of process 39649 0x000801e11e3a in _kevent () from
/lib/libc.so.7
   2    LWP 101651 of process 39649 0x00437b92 in __spin_lock
(lbl=LUA_LOCK, l=0x8cf1d8 , func=0x62a781
"hlua_ctx_resume",
     file=0x62a328 "src/hlua.c", line=1070) at include/common/hathreads.h:731
   3    
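
For readers unfamiliar with the frame shown above: `__spin_lock(lbl=LUA_LOCK, ...)` at include/common/hathreads.h:731 is a labeled debug spin lock. A minimal sketch of that general pattern (names and fields here are invented for illustration, NOT HAProxy's real implementation):

```c
#include <assert.h>
#include <stdatomic.h>

/* Illustrative sketch only -- not HAProxy's hathreads.h code. The
 * backtrace above shows thread 2 spinning in __spin_lock(lbl=LUA_LOCK)
 * called from hlua_ctx_resume(); a debug spin lock of this general
 * shape busy-waits forever if the holder never releases it. */
enum lock_label { LUA_LOCK, OTHER_LOCK };

struct dbg_spinlock {
    atomic_flag taken;      /* the lock word itself */
    const char *last_func;  /* debug aid: last successful taker */
    int         last_line;
};

static void dbg_spin_lock(struct dbg_spinlock *l, enum lock_label lbl,
                          const char *func, int line)
{
    (void)lbl;  /* a real debug build would keep per-label wait stats */
    while (atomic_flag_test_and_set_explicit(&l->taken, memory_order_acquire))
        ;       /* busy-wait: a deadlocked thread burns a full core here */
    l->last_func = func;
    l->last_line = line;
}

static void dbg_spin_unlock(struct dbg_spinlock *l)
{
    atomic_flag_clear_explicit(&l->taken, memory_order_release);
}
```

Because the wait loop never blocks in the kernel, each thread stuck in it accounts for a full core, which is consistent with the 200% CPU in the subject line when more than one thread is stuck.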

Re: lua script, 200% cpu usage with nbthread 3 - haproxy hangs - __spin_lock - HA-Proxy version 1.9-dev1-e3faf02 2018/08/25

2018-08-27 Thread Olivier Houchard
Hi Pieter,

On Sat, Aug 25, 2018 at 10:00:04PM +0200, PiBa-NL wrote:
> Hi List, Thierry, Olivier,
> 
> Using a lua-socket with connect_ssl and haproxy running with nbthread 3..
> results in haproxy hanging with 3 threads for me.
> 
> This while using both 1.9-7/30 version (with the 2 extra patches from
> Olivier avoiding 100% on a single thread.) and also a build of today's
> snapshot: HA-Proxy version 1.9-dev1-e3faf02 2018/08/25
> 
> [remainder of quoted message trimmed; haproxy -vv output and gdb session identical to the messages above]

Re: lua script, 200% cpu usage with nbthread 3 - haproxy hangs - __spin_lock - HA-Proxy version 1.9-dev1-e3faf02 2018/08/25

2018-08-27 Thread Frederic Lecaille

On 08/25/2018 10:00 PM, PiBa-NL wrote:

Hi List, Thierry, Olivier,


Hi,

Using a lua-socket with connect_ssl and haproxy running with nbthread 
3.. results in haproxy hanging with 3 threads for me.


If your configuration is simple, do not hesitate to provide it. Perhaps 
we will be able to write a reg test file to reproduce this bug.
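
A sketch of the kind of minimal reproduction config being asked for, based only on what the report states (nbthread 3 plus a Lua socket using connect_ssl). All names, paths, and ports below are placeholders, and the whole thing is untested; it is not Pieter's actual setup:

```haproxy
# Hypothetical minimal config: nbthread 3 plus a Lua service that
# opens an SSL socket, the combination reported to hang.
global
    nbthread 3
    lua-load /tmp/repro.lua

listen repro-fe
    mode http
    bind :8000
    http-request use-service lua.repro

# /tmp/repro.lua (placeholder path) could contain something like:
#
#   core.register_service("repro", "http", function(applet)
#       local sock = core.tcp()
#       sock:connect_ssl("127.0.0.1", 8443)   -- any local TLS listener
#       sock:close()
#       applet:set_status(200)
#       applet:start_response()
#   end)
```

The commented-out Lua body uses the documented HAProxy Lua API (`core.register_service`, `core.tcp()`, `Socket.connect_ssl()`); whether this exact combination triggers the hang would still need verifying.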