Re: [PATCH] MEDIUM: reset lua transaction between http requests

2018-08-24 Thread Willy Tarreau
On Fri, Aug 24, 2018 at 11:19:26PM +0200, Thierry Fournier wrote:
> Technically, the patch works, but the reset of the Lua session was done
> in the stream. This reset is done in proto_http.c, so I'm not sure
> that's the correct place for it.
> 
> If proto_http is the only place which knows about the end of one
> request and the start of the next, then it is the right place; otherwise ... ?

Got it, thanks for the explanation. The transaction reset code has moved
a few times already since the filters were introduced. Now the call to
http_end_txn_clean_session() is made from the filters (flt_end_analyze),
which do this regardless of HTTP, as soon as there's a transaction reset.
I think it could make sense to place this call there and to remove the
existing one from stream.c, but I wouldn't experiment with this for a
1.8 backport. Thus I'll take Patrick's patch as-is, that's safer.

Thanks,
Willy



Re: [PATCH 1/2] MINOR: add be_conn_free sample fetch

2018-08-24 Thread Willy Tarreau
On Fri, Aug 24, 2018 at 06:18:23PM -0400, Patrick Hemmer wrote:
> > I disagree with making a special case above for maxconn 0. In fact for me
> > it just means that such a server cannot accept connections, so it simply
> > doesn't count in the sum, just as if it were not present.
> On a server, maxconn=0 means unlimited, not that it can't accept
> connections. If we return 0, how will the caller differentiate between
> unlimited, and no free connections?

Ah OK, I didn't remember about this one; then it completely makes sense.
Thanks for refreshing my memory on this one. It deserves a comment because
it's not obvious ;-)  The current comment says the configuration is stupid,
while in fact it's just that this server is not limited. I think I
still don't agree much with reporting -1, even though I'm the one who
set it up this way for connslots, which probably means I changed my mind
regarding this. But I'm not seeing any better value that can easily be
checked, so that's probably the least bad solution.

> Also made 2 additional changes. One is to handle dynamic maxconn. The
> other is to handle the case where the maxconn is adjusted (via stats
> socket) to be less than the number of currently active connections,
> which would result in the value wrapping.

Good point. I'll adjust the doc then since it still says that it doesn't
handle dynamic maxconn. Just let me know if you agree, and I'll do it
myself to save you from having to respin a patch.

> Will update the srv_conn_free fetch with similar changes pending outcome
> of this one.

OK, thanks.

Willy



Re: [PATCH 1/2] MINOR: add be_conn_free sample fetch

2018-08-24 Thread Patrick Hemmer


On 2018/8/22 04:04, Willy Tarreau wrote:
> Hi Patrick,
>
> On Thu, Aug 09, 2018 at 06:46:28PM -0400, Patrick Hemmer wrote:
>> This adds the sample fetch 'be_conn_free([<backend>])'. This sample fetch
>> provides the total number of unused connections across available servers
>> in the specified backend.
> Thanks for writing this one, I recently figured I needed the same for my
> build farm :-)
>
>> +be_conn_free([<backend>]) : integer
>> +  Returns an integer value corresponding to the number of available 
>> connections
>> +  across available servers in the backend. Queue slots are not included. 
>> Backup
>> +  servers are also not included, unless all other servers are down. If no
>> +  backend name is specified, the current one is used. But it is also 
>> possible
>> +  to check another backend. It can be used to use a specific farm when the
>> +  nominal one is full. See also the "be_conn" and "connslots" criteria.
>> +
>> +  OTHER CAVEATS AND NOTES: at this point in time, the code does not take 
>> care
>> +  of dynamic connections. Also, if any of the server maxconn, or maxqueue 
>> is 0,
>> +  then this fetch clearly does not make sense, in which case the value 
>> returned
>> +  will be -1.
> I disagree with making a special case above for maxconn 0. In fact for me
> it just means that such a server cannot accept connections, so it simply
> doesn't count in the sum, just as if it were not present.
On a server, maxconn=0 means unlimited, not that it can't accept
connections. If we return 0, how will the caller differentiate between
unlimited, and no free connections?
This is the same behavior provided by the `connslots` fetch, and is also
where I stole the doc snippet from.

>
>> +px = iterator->proxy;
>> +if (!srv_currently_usable(iterator) || ((iterator->flags & SRV_F_BACKUP) &&
>> +(px->srv_act ||
>> +(iterator != px->lbprm.fbck && !(px->options & PR_O_USE_ALL_BK)
>> +continue;
> Please slightly reorder the condition to improve the indent above for
> better readability, for example :
>
>   if (!srv_currently_usable(iterator) ||
>   ((iterator->flags & SRV_F_BACKUP) &&
>(px->srv_act || (iterator != px->lbprm.fbck && !(px->options & PR_O_USE_ALL_BK)
>
> After checking, I'm OK with the condition :-)
>
>> +if (iterator->maxconn == 0) {
>> +/* configuration is stupid */
>> +smp->data.u.sint = -1;  /* FIXME: stupid value! */
>> +return 1;
>> +}
>> +
>> +smp->data.u.sint += (iterator->maxconn - iterator->cur_sess);
> Here I'd simply suggest this to replace the whole block :
>
>   if (iterator->maxconn > iterator->cur_sess)
>   smp->data.u.sint += (iterator->maxconn - iterator->cur_sess);
>
> And then it can properly count available connections through all
> available servers, regardless of their individual configuration.
>
> Otherwise I'm fine with this patch.
>
> Thanks,
> Willy
Also made 2 additional changes. One is to handle dynamic maxconn. The
other is to handle the case where the maxconn is adjusted (via stats
socket) to be less than the number of currently active connections,
which would result in the value wrapping.

Will update the srv_conn_free fetch with similar changes pending outcome
of this one.

-Patrick
From 151d032b9cb396df47435742c03817818878f5af Mon Sep 17 00:00:00 2001
From: Patrick Hemmer 
Date: Thu, 14 Jun 2018 17:10:27 -0400
Subject: [PATCH 1/2] MINOR: add be_conn_free sample fetch

This adds the sample fetch 'be_conn_free([<backend>])'. This sample fetch
provides the total number of unused connections across available servers in the
specified backend.
---
 doc/configuration.txt | 15 ++-
 src/backend.c | 41 +
 2 files changed, 55 insertions(+), 1 deletion(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 6e33f5994..f65efce95 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -13682,7 +13682,20 @@ be_conn([<backend>]) : integer
   possibly including the connection being evaluated. If no backend name is
   specified, the current one is used. But it is also possible to check another
   backend. It can be used to use a specific farm when the nominal one is full.
-  See also the "fe_conn", "queue" and "be_sess_rate" criteria.
+  See also the "fe_conn", "queue", "be_conn_free", and "be_sess_rate" criteria.
+
+be_conn_free([<backend>]) : integer
+  Returns an integer value corresponding to the number of available connections
+  across available servers in the backend. Queue slots are not included. Backup
+  servers are also not included, unless all other servers are down. If no
+  backend name is specified, the current one is used. But it is also possible
+  to check another backend. It can be used to 

Re: [PATCH] MEDIUM: lua: Add stick table support for Lua

2018-08-24 Thread Adis Nezirovic
Thierry,

Something for Monday :-)

Latest version of the patch in attachment:

- Filter table format is flattened/simplified
- I've tried to address filter table format error messages
  (what is the error, and which filter entry is wrong)
- Fixed one bug: stktable_get_data_type() return value can be < 0
  (i.e. filter table contains unknown data column)

I've run a few unit tests of my Lua code using stick tables, hitting
all methods and exercising both regular return values and filter table errors.
So far so good; it all looks fine on my side.

Best regards,
Adis
>From 6b702ff6f12f919ba4d2f42a7962fa2345272382 Mon Sep 17 00:00:00 2001
From: Adis Nezirovic 
Date: Fri, 13 Jul 2018 12:18:33 +0200
Subject: [PATCH] MEDIUM: lua: Add stick table support for Lua.

This adds support for accessing stick tables from Lua. The supported
operations are reading general table info, lookup by string/IP key, and
dumping the table.

Similar to "show table", a data filter is available during dump, and as
an improvement over "show table" it's possible to use up to 4 filter
expressions instead of just one (with implicit AND clause binding the
expressions). Dumping with/without filters can take a long time for
large tables, and should be used sparingly.
---
 doc/lua-api/index.rst |  72 
 include/types/hlua.h  |   1 +
 src/hlua_fcn.c| 399 ++
 3 files changed, 472 insertions(+)

diff --git a/doc/lua-api/index.rst b/doc/lua-api/index.rst
index 0c79766e..9a2e039a 100644
--- a/doc/lua-api/index.rst
+++ b/doc/lua-api/index.rst
@@ -852,6 +852,10 @@ Proxy class
   Contain a table with the attached servers. The table is indexed by server
   name, and each server entry is an object of type :ref:`server_class`.
 
+.. js:attribute:: Proxy.stktable
+
+  Contains a stick table object attached to the proxy.
+
 .. js:attribute:: Proxy.listeners
 
   Contain a table with the attached listeners. The table is indexed by listener
@@ -2489,6 +2493,74 @@ AppletTCP class
   :see: :js:func:`AppletTCP.unset_var`
   :see: :js:func:`AppletTCP.set_var`
 
+StickTable class
+================
+
+.. js:class:: StickTable
+
+  **context**: task, action, sample-fetch
+
+  This class can be used to access the HAProxy stick tables from Lua.
+
+.. js:function:: StickTable.info()
+
+  Returns stick table attributes as a Lua table. See the HAProxy documentation
+  for "stick-table" for canonical info, or check out the example below.
+
+  :returns: Lua table
+
+  Assume our table has IPv4 key and gpc0 and conn_rate "columns":
+
+.. code-block:: lua
+
+  {
+    expire=<int>,   # Value in ms
+    size=<int>,     # Maximum table size
+    used=<int>,     # Actual number of entries in table
+    data={          # Data columns, with types as key, and periods as values
+                    # (-1 if type is not rate counter)
+      conn_rate=<int>,
+      gpc0=-1
+    },
+    length=<int>,   # max string length for string table keys, key length
+                    # otherwise
+    nopurge=<bool>, # purge oldest entries when table is full
+    type="ip"       # can be "ip", "ipv6", "integer", "string", "binary"
+  }
+
+.. js:function:: StickTable.lookup(key)
+
+   Returns the stick table entry for the given <key>.
+
+   :param string key: Stick table key (IP addresses and strings are supported)
+   :returns: Lua table
+
+.. js:function:: StickTable.dump([filter])
+
+   Returns all entries in the stick table. An optional filter can be used
+   to extract entries with specific data values. The filter is a table with
+   valid comparison operators as keys, followed by data type name and value
+   pairs. Check out the HAProxy docs for "show table" for more details. For
+   reference, the supported operators are:
+ "eq", "ne", "le", "lt", "ge", "gt"
+
+   For large tables, execution of this function can take a long time (by
+   HAProxy standards). That's also true when a filter is used, so take care
+   and measure the impact.
+
+   :param table filter: Stick table filter
+   :returns: Stick table entries (table)
+
+   See below for an example filter with several comparison entries.
+   (The maximum number of filter entries is 4, defined in the source code.)
+
+.. code-block:: lua
+
+local filter = {
+  {"gpc0", "gt", 30}, {"gpc1", "gt", 20}, {"conn_rate", "le", 10}
+}
+
+
 External Lua libraries
 ==
 
diff --git a/include/types/hlua.h b/include/types/hlua.h
index 5a8173f3..2e453351 100644
--- a/include/types/hlua.h
+++ b/include/types/hlua.h
@@ -25,6 +25,7 @@
 #define CLASS_SERVER   "Server"
 #define CLASS_LISTENER "Listener"
 #define CLASS_REGEX"Regex"
+#define CLASS_STKTABLE "StickTable"
 
 struct stream;
 
diff --git a/src/hlua_fcn.c b/src/hlua_fcn.c
index cebce224..6f6d8380 100644
--- a/src/hlua_fcn.c
+++ b/src/hlua_fcn.c
@@ -30,6 +30,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /* Contains the class reference of the concat object. */
 static int class_concat_ref;
@@ -37,7 +38,9 @@ static int class_proxy_ref;
 static int 

Re: [PATCH] MEDIUM: reset lua transaction between http requests

2018-08-24 Thread Thierry Fournier



> On 24 Aug 2018, at 04:19, Willy Tarreau  wrote:
> 
> Hi Thierry,
> 
> On Thu, Aug 23, 2018 at 09:37:43AM +0200, Thierry Fournier wrote:
>> Hi,
>> 
>> Your patch makes sense, that's the right approach, but I have a doubt about
>> the place to use for doing the reinitialization.
>> 
>> I added Willy to this thread in order to get his advice regarding http2.
>> 
>> Before the 1.8 the Lua context was reinitialized with the stream because
>> the stream was reinitialized between each http request in a keepalive
>> session.
>> 
>> With http2 I guess that this behavior changes. So, Willy, do you have
>> an opinion on the place to use to perform the Lua reinit ?
> 
> Oh with H2 it's even simpler, streams are distinct from each other
> so we don't reuse them and the issue doesn't exist :-)
> 
> Does this mean I should take Patrick's patch ?


Technically, the patch works, but the reset of the Lua session was done
in the stream. This reset is done in proto_http.c, so I'm not sure
that's the correct place for this reset.

If proto_http is the only place which knows about the end of one
request and the start of the next, then it is the right place; otherwise ... ?

I don't know.

A+
Thierry


Re: Issue with TCP splicing

2018-08-24 Thread haproxy

private keys


The private key definitely needs to be revoked. I attached the key to show
how easy it is to extract it from a coredump. You can download the
certificate from https://crt.sh/?d=511527676 and verify the signature of
message.txt using:


openssl smime -verify -in message.txt.p7s -content message.txt -inform PEM -noverify 511527676.crt


Private-Key: (2048 bit)
modulus:
00:b8:94:de:cc:4f:9a:a5:2b:d5:56:4f:62:3c:c1:
78:75:e4:ed:b8:f5:1d:2c:d3:27:2a:01:de:39:72:
4d:ef:54:db:d7:a2:e2:e3:ed:a5:6c:36:f4:fc:d0:
1f:3e:07:20:e6:b7:e3:4b:43:70:63:99:d1:df:58:
bb:1e:c1:b4:61:81:48:38:da:00:8f:0f:62:f8:d3:
86:70:bc:3b:d4:0d:ad:ce:b6:53:a3:fe:0e:fb:d6:
d0:bf:13:e9:b7:a3:7c:2c:10:06:41:6b:15:aa:81:
41:89:23:7d:32:27:a7:74:50:94:8e:15:67:0b:5c:
ad:51:f1:58:24:d8:88:02:62:32:e9:de:a7:5b:8a:
cc:ff:fc:d1:9b:f5:6e:17:2f:bc:0f:a6:d5:9f:26:
f4:a8:f7:48:9a:37:3f:22:f1:8f:77:70:38:28:96:
d7:6f:af:2d:de:74:32:2c:e5:21:6b:df:0b:41:b8:
f2:d6:5c:91:17:70:56:ad:6d:71:e4:b1:a5:2a:65:
ca:51:f4:ec:b6:fb:8b:d1:f3:bf:cf:19:83:d5:d9:
61:03:c1:87:7d:8d:27:4e:f4:d6:e4:5d:4f:56:cc:
02:c3:b1:73:63:24:38:2f:e1:5c:19:94:c4:6d:40:
82:43:ef:6d:15:98:04:73:47:d7:c2:c9:11:46:e4:
3c:0f
publicExponent: 65537 (0x10001)
privateExponent:
18:94:fa:f7:0a:c2:f5:ac:58:c5:1d:dd:5f:6a:04:
b8:ee:bc:1a:1d:ca:bc:e5:82:19:be:15:f2:60:9e:
b0:79:04:ae:3b:2b:2c:5f:c1:e0:1f:91:90:f9:c6:
af:64:13:a5:a6:67:c6:e6:3c:59:87:6a:c3:eb:f5:
3f:ab:5c:72:7f:dd:36:75:12:0d:fb:66:9a:ec:d0:
c2:c2:ce:d4:f6:dd:66:e2:31:51:6d:cc:61:0d:c2:
cf:2f:bf:b8:8d:35:44:48:fe:0c:48:4e:a2:5e:84:
73:d7:1e:1d:47:da:ad:4a:ed:fd:de:2b:d2:ff:8c:
b5:95:06:c0:21:76:3b:9a:ce:06:86:88:4f:b2:6a:
2f:e0:84:79:d0:e4:cd:6c:8a:cf:33:3e:fd:43:da:
e3:63:c0:d6:11:c0:12:ec:2f:85:7d:f8:23:67:b7:
6d:5d:c6:d6:2e:99:28:7d:2b:40:6e:4f:f5:d5:55:
b9:01:97:4b:d4:08:14:2d:71:19:9e:e4:0b:f3:0f:
6e:a2:4a:9f:fe:fb:34:37:d4:b7:e3:ce:45:c9:c4:
41:07:69:45:71:37:30:c7:fc:3b:1e:49:bd:7a:c4:
f3:02:82:55:6c:a5:de:47:62:f1:a8:09:16:61:05:
8c:df:3f:62:6c:fb:5a:28:36:2f:70:f0:ff:28:dd:
81
prime1:
00:c0:3c:af:12:53:99:c5:0a:f7:32:7e:f7:74:5d:
d6:67:a8:f2:6a:03:f4:97:28:e6:e8:ab:e6:54:35:
b9:d7:e9:2d:11:df:76:01:0f:6a:af:91:9d:8a:b1:
79:ae:45:8e:b9:23:a0:f3:35:2f:65:a2:8c:d2:5e:
8f:ba:53:86:53:96:b1:5d:10:e1:57:90:31:47:d5:
e9:b3:62:17:72:c8:23:ab:d7:ea:c4:7c:67:32:63:
0f:ef:f0:d5:30:04:7d:09:e5:da:5d:4d:d6:32:3e:
9c:f1:4c:95:f9:f9:c8:63:15:5d:ed:bf:fa:a5:19:
41:8b:fd:39:61:a5:5e:e3:99
prime2:
00:f5:ce:23:2e:12:a7:c4:13:ae:70:95:ad:88:34:
43:bc:3d:76:c2:e1:45:d6:21:8a:80:7e:50:44:a8:
cf:76:46:4c:9a:dd:6d:f4:06:f2:f2:91:aa:53:45:
43:eb:5c:00:87:7e:d9:02:42:66:2e:1d:08:79:fa:
3b:2f:bb:e0:10:bb:26:d5:db:6c:7a:19:9a:4d:be:
f1:26:02:b2:93:3c:67:46:92:09:9c:d9:6b:82:d3:
0b:b4:e1:63:d8:1c:e9:4c:77:50:b6:1d:50:09:d8:
79:a5:6e:94:6a:4a:d0:3b:de:e1:db:44:5b:80:76:
4f:f7:13:05:3b:3e:35:e5:e7
exponent1:
63:39:ef:94:22:1a:e9:1e:73:e2:58:af:1a:1d:a5:
a1:f4:0e:cc:b2:25:fa:30:5e:a0:12:ba:dd:14:ae:
4c:c8:4b:3f:42:7d:02:a7:16:86:71:3f:44:6b:bf:
47:39:18:26:70:41:8f:c8:10:23:01:f8:76:4d:e1:
1a:68:2a:99:d2:da:d2:12:f8:7d:de:2b:d1:cc:94:
c8:c7:05:1b:76:3b:13:64:6c:05:e7:c0:cc:bd:5d:
68:98:83:32:39:de:e0:d1:08:19:c9:27:9a:df:be:
da:be:91:5b:6a:97:08:ad:ea:c1:e1:aa:5a:b5:e2:
a3:83:9d:ae:cd:51:61:61
exponent2:
00:9a:d4:72:a2:75:cb:c9:1d:60:96:b8:21:6b:97:
08:47:8d:2b:be:8b:69:92:fc:e3:a2:16:6e:77:21:
22:34:ed:09:19:cf:7a:8f:e8:c4:a5:78:8d:a2:10:
12:3d:31:61:7f:f7:ad:b7:d7:9d:47:54:b0:5f:2c:
f8:95:13:b1:8a:b8:68:38:f3:12:fc:42:1e:48:f4:
8a:2f:98:29:65:c6:f9:82:a1:40:7e:d5:10:fc:81:
f5:70:c5:3c:40:07:ce:08:85:6b:88:9b:24:2c:5f:
78:18:75:73:f5:14:14:e0:71:7f:30:bf:79:27:8c:
de:c7:d1:ea:4c:ab:de:05:67
coefficient:
79:98:31:aa:49:d9:02:cb:2b:c5:f6:a3:33:32:ca:
97:a1:12:28:6b:e5:9a:48:6e:47:bf:01:59:7c:e6:
a1:78:8e:dc:cf:f4:69:b7:9b:f9:f3:5b:84:98:cd:
2f:1f:71:7b:e8:10:e0:55:f4:c0:f1:59:5c:72:05:
aa:af:96:56:68:53:e1:9e:25:84:f9:fc:a9:2b:29:
61:60:42:55:a9:05:3f:0c:db:0b:f1:a6:62:cb:69:
e1:c3:c4:35:ed:fc:94:4e:24:16:f4:66:7a:03:5e:
e0:8e:af:50:21:63:cd:f4:ae:fe:9e:da:07:0b:e9:
8c:7c:7b:fa:c3:60:a5:ba

certificate is compromised
-BEGIN PKCS7-
MIIIkAYJKoZIhvcNAQcCoIIIgTCCCH0CAQExDzANBglghkgBZQMEAgEFADALBgkq
hkiG9w0BBwGgggV/MIIFezCCBGOgAwIBAgIJALUFEZ7ciuLlMA0GCSqGSIb3DQEB
CwUAMIG0MQswCQYDVQQGEwJVUzEQMA4GA1UECBMHQXJpem9uYTETMBEGA1UEBxMK
U2NvdHRzZGFsZTEaMBgGA1UEChMRR29EYWRkeS5jb20sIEluYy4xLTArBgNVBAsT
JGh0dHA6Ly9jZXJ0cy5nb2RhZGR5LmNvbS9yZXBvc2l0b3J5LzEzMDEGA1UEAxMq
R28gRGFkZHkgU2VjdXJlIENlcnRpZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTE4
MDYwNzEyMjQ0OFoXDTIwMDYyODEzMDAzMVowTDEhMB8GA1UECxMYRG9tYWluIENv
bnRyb2wgVmFsaWRhdGVkMScwJQYDVQQDEx5wYWNrZXRmZW5jZS5jc2NoaWMtY2hv
Y3MucWMuY2EwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC4lN7MT5ql

Re: [PATCH] BUG/MINOR: lua: Bad HTTP client request duration.

2018-08-24 Thread Willy Tarreau
On Fri, Aug 24, 2018 at 05:58:22PM +0200, Tim Düsterhus wrote:
> Willy,
> 
> Am 24.08.2018 um 14:53 schrieb Willy Tarreau:
> > Applied, thank you Fred.
> > 
> 
> While you updated the commit reference in the commit message of the
> reg-test patch you forgot to do so in the reg-test itself. It still
> refers to 7b6cc52784526c32efda44b873a4258d3ae0b8c7 (which does not
> exist, because the hash changed when applying).

Ah yes good catch, thank you.

> Frederic: That's why I believe that reg-tests should be provided in the
> commit fixing the issue, instead of being provided in a separate commit.

I generally agree with this principle so that it also eases backporting
of the tests. The main use I'm seeing of regtests related to bugs is to
test if a backport works as expected. We're still exploring this area,
I predict we'll go back and forth between different methods until we
figure which one is the most suited to everyone's use case.

Cheers,
Willy



Re: [PATCH] BUG/MINOR: lua: Bad HTTP client request duration.

2018-08-24 Thread Tim Düsterhus
Willy,

Am 24.08.2018 um 14:53 schrieb Willy Tarreau:
> Applied, thank you Fred.
> 

While you updated the commit reference in the commit message of the
reg-test patch you forgot to do so in the reg-test itself. It still
refers to 7b6cc52784526c32efda44b873a4258d3ae0b8c7 (which does not
exist, because the hash changed when applying).

Frederic: That's why I believe that reg-tests should be provided in the
commit fixing the issue, instead of being provided in a separate commit.

Best regards
Tim Düsterhus



Re: [PATCH] MEDIUM: reset lua transaction between http requests

2018-08-24 Thread Tim Düsterhus
Willy,

Am 24.08.2018 um 04:19 schrieb Willy Tarreau:
> Oh with H2 it's even simpler, streams are distinct from each other
> so we don't reuse them and the issue doesn't exist :-)
> 
> Does this mean I should take Patrick's patch ?
> 

If you do so: Don't forget to squash in my reg-test (with the minor
change as requested by Frederic).

Best regards
Tim Düsterhus



Re: [PATCH] DOC: typo fix in the configuration manual.

2018-08-24 Thread Arnaud Brousseau
Bump :)

I'm still seeing this typo in the development branch's configuration manual [0].
The original patch [1] should still apply, let me know if you need
anything else.

Cheers,
Arnaud.

[0]: https://github.com/haproxy/haproxy/blob/master/doc/configuration.txt#L16133
[1]: https://www.mail-archive.com/haproxy@formilux.org/msg30587.html

Le dim. 1 juil. 2018 à 20:36, Arnaud Brousseau
 a écrit :
>
> This commit fixes a typo in the description of what the "Tr" timer is
> (Section 8.2.3: "HTTP log format")
> ---
>  doc/configuration.txt | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/doc/configuration.txt b/doc/configuration.txt
> index e901d7ee..06229a91 100644
> --- a/doc/configuration.txt
> +++ b/doc/configuration.txt
> @@ -16028,7 +16028,7 @@ Detailed fields description :
>- "TR" is the total time in milliseconds spent waiting for a full HTTP
>  request from the client (not counting body) after the first byte was
>  received. It can be "-1" if the connection was aborted before a complete
> -request could be received or the a bad request was received. It should
> +request could be received or a bad request was received. It should
>  always be very small because a request generally fits in one single 
> packet.
>  Large times here generally indicate network issues between the client and
>  haproxy or requests being typed by hand. See "Timers" below for
> more details.
> --
> 2.14.3 (Apple Git-98)



Re: HTTP/2 issues and segfaults with current 1.9-dev [7ee465]

2018-08-24 Thread Willy Tarreau
On Fri, Aug 24, 2018 at 04:25:02PM +0200, Frederic Lecaille wrote:
> Here is a reg testing file for this bug.

Thanks, applied.

Willy



Re: HTTP/2 issues and segfaults with current 1.9-dev [7ee465]

2018-08-24 Thread Frederic Lecaille

On 08/22/2018 04:32 AM, Willy Tarreau wrote:

On Wed, Aug 22, 2018 at 12:46:47AM +0200, Cyril Bonté wrote:

Le 22/08/2018 à 00:40, Cyril Bonté a écrit :

Hi again Willy,

Le 21/08/2018 à 22:55, Cyril Bonté a écrit :

Thanks for the diag. I don't remember changing anything around the proxy
protocol, but it's possible that something subtle changed. Also it's not
on the regular send/receive path so maybe I overlooked certain parts and
broke it by accident when changing the buffers.

Same here, if you have a small reproducer it will really help.


I try to find a configuration that could help identify the issue,
but currently I fail (timings seems to have a role). I let you know
once I have a good reproducer.


OK, I have a small reproducer that triggers the issue quite often on my
laptop:
      global
      log /dev/log len 2048 local7 debug err

      nbproc 4

      defaults
      mode http
      log global
      option log-health-checks

      listen ssl-offload-http
      bind :4443 ssl crt localhost.pem ssl no-sslv3 alpn h2,http/1.1
      bind-process 2-4

      server http abns@http send-proxy

      listen http
      bind-process 1
      bind abns@http accept-proxy name ssl-offload-http

      option forwardfor

then execute several H2 requests on the same connection, for example:
$ curl -k -d 'x=x' $(printf 'https://localhost:4443/%s ' {1..8})
503 Service Unavailable
No server is available to handle this request.

503 Service Unavailable
No server is available to handle this request.

503 Service Unavailable
No server is available to handle this request.

curl: (16) Error in the HTTP2 framing layer
503 Service Unavailable
No server is available to handle this request.

curl: (16) Error in the HTTP2 framing layer
curl: (16) Error in the HTTP2 framing layer
503 Service Unavailable
No server is available to handle this request.


In the logs, I can see:
Aug 22 00:34:19 asus-wifi haproxy[12623]: 127.0.0.1:37538
[22/Aug/2018:00:34:19.459] http http/ 0/-1/-1/-1/0 503 212 - -
SC-- 1/1/0/0/0 0/0 "POST /1 HTTP/1.1"
Aug 22 00:34:19 asus-wifi haproxy[12625]: 127.0.0.1:37538
[22/Aug/2018:00:34:19.458] ssl-offload-http~ ssl-offload-http/http
0/0/0/0/0 503 212 - -  1/1/0/0/0 0/0 "POST /1 HTTP/1.1"
Aug 22 00:34:19 asus-wifi haproxy[12623]: 127.0.0.1:37538
[22/Aug/2018:00:34:19.459] http http/ 0/-1/-1/-1/0 503 212 - -
SC-- 1/1/0/0/0 0/0 "POST /2 HTTP/1.1"
Aug 22 00:34:19 asus-wifi haproxy[12625]: 127.0.0.1:37538
[22/Aug/2018:00:34:19.459] ssl-offload-http~ ssl-offload-http/http
0/0/0/0/0 503 212 - -  1/1/0/0/0 0/0 "POST /2 HTTP/1.1"
Aug 22 00:34:19 asus-wifi haproxy[12623]: 127.0.0.1:37538
[22/Aug/2018:00:34:19.460] http http/ 0/-1/-1/-1/0 503 212 - -
SC-- 1/1/0/0/0 0/0 "POST /3 HTTP/1.1"
Aug 22 00:34:19 asus-wifi haproxy[12625]: 127.0.0.1:37538
[22/Aug/2018:00:34:19.459] ssl-offload-http~ ssl-offload-http/http
0/0/0/0/0 503 212 - -  1/1/0/0/0 0/0 "POST /3 HTTP/1.1"
Aug 22 00:34:19 asus-wifi haproxy[12623]: PROXY SIG ERROR
X-Forwarded-For: 127.0.0.1
Aug 22 00:34:19 asus-wifi haproxy[12623]: unix:1
[22/Aug/2018:00:34:19.460] http/ssl-offload-http: Received something
which does not look like a PROXY protocol header
Aug 22 00:34:19 asus-wifi haproxy[12625]: 127.0.0.1:37538
[22/Aug/2018:00:34:19.459] ssl-offload-http~ ssl-offload-http/http
0/0/0/-1/0 -1 0 - - SD-- 1/1/0/0/0 0/0 "POST /4 HTTP/1.1"
Aug 22 00:34:19 asus-wifi haproxy[12623]: PROXY SIG ERROR unix:1
[22/Aug/2018:00:34:19.460] http/ssl-offload-http
Aug 22 00:34:19 asus-wifi haproxy[12623]: unix:1
[22/Aug/2018:00:34:19.461] http/ssl-offload-http: Received something
which does not look like a PROXY protocol header
Aug 22 00:34:20 asus-wifi haproxy[12623]: 127.0.0.1:37542
[22/Aug/2018:00:34:20.462] http http/ 0/-1/-1/-1/0 503 212 - -
SC-- 1/1/0/0/0 0/0 "POST /5 HTTP/1.1"
Aug 22 00:34:20 asus-wifi haproxy[12625]: 127.0.0.1:37542
[22/Aug/2018:00:34:19.460] ssl-offload-http~ ssl-offload-http/http
0/0/1002/0/1002 503 212 - -  1/1/0/0/1 0/0 "POST /5 HTTP/1.1"
Aug 22 00:34:20 asus-wifi haproxy[12623]: PROXY SIG ERROR
X-Forwarded-For: 127.0.0.1
Aug 22 00:34:20 asus-wifi haproxy[12623]: unix:1
[22/Aug/2018:00:34:20.463] http/ssl-offload-http: Received something
which does not look like a PROXY protocol header
Aug 22 00:34:20 asus-wifi haproxy[12625]: 127.0.0.1:37542
[22/Aug/2018:00:34:20.463] ssl-offload-http~ ssl-offload-http/http
0/0/0/-1/0 -1 0 - - SD-- 1/1/0/0/0 0/0 "POST /6 HTTP/1.1"
Aug 22 00:34:20 asus-wifi haproxy[12623]: PROXY SIG ERROR unix:1
[22/Aug/2018:00:34:20.463] http/ssl-offload-http
Aug 22 00:34:20 asus-wifi haproxy[12623]: unix:1
[22/Aug/2018:00:34:20.469] http/ssl-offload-http: Received something
which does not look like a PROXY protocol header
Aug 22 00:34:20 asus-wifi haproxy[12625]: 127.0.0.1:37546
[22/Aug/2018:00:34:20.469] ssl-offload-http~ ssl-offload-http/http
0/0/0/-1/0 -1 0 - - SD-- 1/1/0/0/0 0/0 "POST /7 HTTP/1.1"
Aug 22 00:34:20 asus-wifi haproxy[12623]: 127.0.0.1:37550

Re: [PATCH] BUG/MINOR: lua: Bad HTTP client request duration.

2018-08-24 Thread Willy Tarreau
On Fri, Aug 24, 2018 at 08:42:20AM +0200, Frederic Lecaille wrote:
> Here is a patch to fix the issue reported by Patrick in this thread (BUG: Tw
> is negative with lua sleep
> https://www.mail-archive.com/haproxy@formilux.org/msg30474.html).
(...)

Applied, thank you Fred.

Willy



Re: Clarification re Timeouts and Session State in the Logs

2018-08-24 Thread Aleksandar Lazic
Hi.

Am 24.08.2018 um 11:04 schrieb Daniel Schneller:
> Hi!
> 
> Thanks for that input. I would like to understand what's going before making
> changes. :)

There is a how-it-works.txt document:
http://git.haproxy.org/?p=haproxy-1.8.git;a=blob;f=doc/design-thoughts/how-it-works.txt;h=2d1cb89a059e477469b2f980e970c22f4af6da66;hb=d804e5e6b76bfd34576305ff33fe32aacb1fa5b7

Well, that's tricky, as timeouts are implemented as "ticks" in haproxy.

for example:

http://git.haproxy.org/?p=haproxy-1.8.git;a=blob;f=src/backend.c;hb=d804e5e6b76bfd34576305ff33fe32aacb1fa5b7#l1224

###
1224 /* set connect timeout */
1225 s->si[1].exp = tick_add_ifset(now_ms, s->be->timeout.connect);
###

The ticks are explained here
http://git.haproxy.org/?p=haproxy-1.8.git;a=blob;f=include/common/ticks.h;hb=d804e5e6b76bfd34576305ff33fe32aacb1fa5b7

There is a scheduler which handles the run queue and all this timeouts and other
tasks in haproxy.

More below.

> Cheers,
> Daniel
> 
> 
>> On 24. Aug 2018, at 00:56, Igor Cicimov > > wrote:
>>
>> Hi Daniel,
>>
>> We had similar issue in 2015, and the answer was: server timeout was too
>> short. Simple.
>>
>> On Thu, 23 Aug 2018 9:56 pm Daniel Schneller
>> > >
>> wrote:
>>
>> Friendly bump. 
>> I'd volunteer to do some documentation amendments once I understand the
>> issue better :D
>>
>>> On 21. Aug 2018, at 16:17, Daniel Schneller
>>> >> > wrote:
>>>
>>> Hi!
>>>
>>> I am trying to wrap my head around an issue we are seeing where there 
>>> are
>>> many HTTP 504 responses sent out to clients.
>>>
>>> I suspect that due to a client bug they stop sending data midway during
>>> the data phase of the request, but they keep the connection open.
>>>
>>> What I see in the haproxy logs is a 504 response with termination flags
>>> "sHNN". 
>>> That I read as haproxy getting impatient (timeout server) waiting for
>>> response headers from the backend. The backend, not having seen the
>>> complete request yet, can't really answer at this point, of course.
>>> I am wondering though, why it is that I don't see a termination
>>> state indicating a client problem.
>>>
>>> So my question (for now ;-)) boils down to these points:
>>>  
>>> 1) When does the server timeout actually start counting? Am I right to
>>> assume it is from the last moment the server sent or (in this case)
>>> received some data?

Let's say it like this: if the connection is established and no event
occurs before the timeout expires, then the timeout is triggered.

>>> 2) If both "timeout server" and "timeout client" are set to the same
>>> value, and the input stalls (after the headers) longer than that, is it
>>> just that the implementation is such that the server side timeout "wins"
>>> when it comes to setting the termination flags? 

As far as I understand haproxy, the client will win, as its tick starts earlier.

client -> timeout client -> server -> timeout server
The client stalls, and it is the first element in the chain.

>>> 3) If I set the client timeout shorter than the server timeout and
>>> produced this situation, should I then see a cD state?  If so, would I 
>>> be
>>> right to assume that if the server were now to stall, the log could 
>>> again
>>> be misleading in telling me that the client timeout expired first?

cD, yes.
You should also see sD alongside cD, and some unusual timing values.

https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#8.4

>>> I understand it is difficult to tell "who's to blame" for an inactivity
>>> timeout without knowledge about the content or final size of the request
>>> -- I just need some clarity on how the read the logs :)
>>>
>>>
>>> Thanks!
>>> Daniel

Best regards
Aleks

>>> -- 
>>> Daniel Schneller
>>> Principal Cloud Engineer
>>>
>>> CenterDevice GmbH
>>> Rheinwerkallee 3
>>> 53227 Bonn
>>> www.centerdevice.com 
>>>
>>> __
>>> Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina, Michael
>>> Rosbach, Handelsregister-Nr.: HRB 18655, HR-Gericht: Bonn,
>>> USt-IdNr.: DE-815299431
>>>
>>> Diese E-Mail einschließlich evtl. beigefügter Dateien
>>> enthält vertrauliche und/oder rechtlich geschützte Informationen.
>>> Wenn Sie nicht der richtige Adressat sind oder diese E-Mail
>>> irrtümlich erhalten haben, informieren Sie bitte sofort den Absender
>>> und löschen Sie diese E-Mail und evtl. beigefügter Dateien umgehend. Das
>>> unerlaubte Kopieren, Nutzen oder Öffnen evtl. beigefügter Dateien
>>> sowie die unbefugte Weitergabe dieser E-Mail ist nicht gestattet.
>>>
>>>
>>
> 






Timeouts on 1.8.13

2018-08-24 Thread Kilian Ries
Hi,


we are seeing timeouts with haproxy 1.8.13 running in HTTP load-balancing mode. 
We are running under high load / traffic and about 10-20k active frontend 
sessions. With haproxy 1.8.12 we can't reproduce the timeouts and it seems to be 
fine.


Timeouts occur if we are running in threaded mode (1 process and 20 threads), 
or even if we are running with nbproc 20 and zero threads! But nbproc runs more 
stably than threaded mode, which means we see fewer timeouts. As I said, on 1.8.12 
we don't see any timeouts, whether we are running with threads or with nbproc.


It seems that we can reproduce the issue even in one of our low-traffic setups 
(200-500 active frontend sessions), but there the error occurs much less often than 
in the high-traffic setup.


If you need any more details, feel free to ask.


Greets,

Kilian


Re: Clarification re Timeouts and Session State in the Logs

2018-08-24 Thread Daniel Schneller
Hi!

Thanks for that input. I would like to understand what's going on before making 
changes. :)

Cheers,
Daniel


> On 24. Aug 2018, at 00:56, Igor Cicimov  
> wrote:
> 
> Hi Daniel,
> 
> We had similar issue in 2015, and the answer was: server timeout was too 
> short. Simple.
> 
> On Thu, 23 Aug 2018 9:56 pm Daniel Schneller 
>  > wrote:
> Friendly bump.
> I'd volunteer to do some documentation amendments once I understand the issue 
> better :D
> 
>> On 21. Aug 2018, at 16:17, Daniel Schneller 
>> > > wrote:
>> 
>> Hi!
>> 
>> I am trying to wrap my head around an issue we are seeing where there are 
>> many HTTP 504 responses sent out to clients.
>> 
>> I suspect that due to a client bug they stop sending data midway during the 
>> data phase of the request, but they keep the connection open.
>> 
>> What I see in the haproxy logs is a 504 response with termination flags 
>> "sHNN".
>> That I read as haproxy getting impatient (timeout server) waiting for 
>> response headers from the backend. The backend, not having seen the complete 
>> request yet, can't really answer at this point, of course.
>> I am wondering, though, why I don't see a termination state indicating a 
>> client problem.
>> 
>> So my question (for now ;-)) boils down to these points:
>> 
>> 1) When does the server timeout actually start counting? Am I right to 
>> assume it is from the last moment the server sent or (in this case) received 
>> some data?
>> 
>> 2) If both "timeout server" and "timeout client" are set to the same value, 
>> and the input stalls (after the headers) longer than that, is it just that 
>> the implementation is such that the server side timeout "wins" when it comes 
>> to setting the termination flags?
>> 
>> 3) If I set the client timeout shorter than the server timeout and produced 
>> this situation, should I then see a cD state?  If so, would I be right to 
>> assume that if the server were now to stall, the log could again be 
>> misleading in telling me that the client timeout expired first?
>> 
>> I understand it is difficult to tell "who's to blame" for an inactivity 
>> timeout without knowledge about the content or final size of the request -- 
>> I just need some clarity on how to read the logs :)
>> 
>> 
>> Thanks!
>> Daniel
>> 
>> 
>> 
>> 
>> --
>> Daniel Schneller
>> Principal Cloud Engineer
>> 
>> CenterDevice GmbH
>> Rheinwerkallee 3
>> 53227 Bonn
>> www.centerdevice.com 
>> 
>> __
>> Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina, Michael Rosbach, 
>> Handelsregister-Nr.: HRB 18655, HR-Gericht: Bonn, USt-IdNr.: DE-815299431
>> 
>> 
>> 
> 





Re: Issue with TCP splicing

2018-08-24 Thread Lukas Tribus
Hello Julien,


On Thu, 23 Aug 2018 at 20:49, Julien Semaan  wrote:
>
> Hi Olivier,
>
> Sorry for the delay, obtaining the core dump from a production environment 
> was a bit tricky.
>
> So, I have attached the core dump to this email. I hope this will help you 
> identify the issue.

The executable is also needed to analyze the core dump, so please share it
as well. Also, you probably want to be more careful about sending around
core dumps: they can contain all kinds of private data, like private keys
and unencrypted HTTP data. By sending it to the list, you shared it with
everyone; usually you'd just unicast it to whoever is working with you on
this crash.
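For reference, a typical gdb session for such a core dump might look like the sketch below. The paths are placeholders; the binary must be the exact one that produced the core, ideally with debug symbols not stripped.

```
# Placeholder paths -- use the exact haproxy binary that produced the core.
gdb /usr/sbin/haproxy /var/crash/core.haproxy

# Then, inside gdb:
#   (gdb) bt full               # full backtrace of the faulting thread
#   (gdb) info threads          # list all threads
#   (gdb) thread apply all bt   # backtrace of every thread
```

Running the build with debug symbols (`-g`, no `strip`) is what makes the backtraces readable.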


Regards,
Lukas



Re: [PATCH] BUG/MINOR: lua: Bad HTTP client request duration.

2018-08-24 Thread Frederic Lecaille

On 08/24/2018 08:42 AM, Frederic Lecaille wrote:
Here is a patch to fix the issue reported by Patrick in this thread 
(BUG: Tw is negative with lua sleep 
https://www.mail-archive.com/haproxy@formilux.org/msg30474.html).


Note that I provide a reg testing file to test both HTTP and TCP LUA 
applet callbacks used when registering an HTTP or TCP service.


I did not manage to reproduce any issue similar to the one reported by 
Patrick about HTTP.


I meant, I did not reproduce any issue *in TCP mode*, similar to the one 
reported by Patrick about HTTP.




[PATCH] BUG/MINOR: lua: Bad HTTP client request duration.

2018-08-24 Thread Frederic Lecaille
Here is a patch to fix the issue reported by Patrick in this thread 
(BUG: Tw is negative with lua sleep 
https://www.mail-archive.com/haproxy@formilux.org/msg30474.html).


Note that I provide a reg testing file to test both HTTP and TCP LUA 
applet callbacks used when registering an HTTP or TCP service.


I did not manage to reproduce any issue similar to the one reported by 
Patrick about HTTP. Note that when we modify the struct logs->tv_request 
field in src/hlua.c, only the %TR, %Tq and %Tw timing events may be affected, 
and among these three, only %Tw is valid for TCP mode.



Fred.
From 410ed466774f40c221afb1a8224d64bc21c4407c Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Fr=C3=A9d=C3=A9ric=20L=C3=A9caille?= 
Date: Thu, 23 Aug 2018 18:06:35 +0200
Subject: [PATCH 2/2] REGTEST/MINOR: Add reg testing files.

Reg testing files for a LUA bug fixed by 7b6cc527 commit.
---
 reg-tests/lua/b1.lua |  8 +
 reg-tests/lua/b1.vtc | 80 
 2 files changed, 88 insertions(+)
 create mode 100644 reg-tests/lua/b1.lua
 create mode 100644 reg-tests/lua/b1.vtc

diff --git a/reg-tests/lua/b1.lua b/reg-tests/lua/b1.lua
new file mode 100644
index ..2c2ab1dc
--- /dev/null
+++ b/reg-tests/lua/b1.lua
@@ -0,0 +1,8 @@
+core.register_service("foo.http", "http", function(applet)
+core.msleep(10)
+applet:start_response()
+end)
+
+core.register_service("foo.tcp", "tcp", function(applet)
+   applet:send("HTTP/1.1 200 OK\r\nTransfer-encoding: chunked\r\n\r\n0\r\n\r\n")
+end)
diff --git a/reg-tests/lua/b1.vtc b/reg-tests/lua/b1.vtc
new file mode 100644
index ..a7a2d699
--- /dev/null
+++ b/reg-tests/lua/b1.vtc
@@ -0,0 +1,80 @@
+# commit 7b6cc52784526c32efda44b873a4258d3ae0b8c7
+# BUG/MINOR: lua: Bad HTTP client request duration.
+#
+# HTTP LUA applet callback should not update the date on which the HTTP client requests
+# arrive. This was done just after the LUA applet has completed its job.
+#
+# This patch simply removes the affected statement. The same fix has been applied
+# to the TCP LUA applet callback.
+#
+# To reproduce this issue, as reported by Patrick Hemmer, implement an HTTP LUA applet
+# which sleeps a bit before replying:
+#
+#   core.register_service("foo", "http", function(applet)
+#   core.msleep(100)
+#   applet:set_status(200)
+#   applet:start_response()
+#   end)
+#
+# This had the consequence of logging the %TR field with approximately the same value as
+# the LUA sleep time.
+
+varnishtest "LUA bug"
+
+feature ignore_unknown_macro
+
+syslog Slog {
+recv notice
+expect ~ "haproxy\\[[0-9]*\\]: Proxy f1 started"
+
+recv notice
+expect ~ "haproxy\\[[0-9]*\\]: Proxy f2 started"
+
+recv info
+expect ~ "haproxy\\[[0-9]*\\]: Ta=[0-9]* Tc=[0-9]* Td=[0-9]* Th=[0-9]* Ti=[0-9]* Tq=[0-9]* TR=[0-9]* Tr=[0-9]* Tt=[0-9]* Tw=[0-9]*$"
+
+recv info
+expect ~ "haproxy\\[[0-9]*\\]: Tc=[0-9]* Td=[0-9]* Th=[0-9]* Tt=[0-9]* Tw=[0-9]*$"
+} -start
+
+haproxy h1 -conf {
+global
+lua-load ${testdir}/b1.lua
+
+defaults
+timeout client 1s
+timeout server 1s
+timeout connect 1s
+
+frontend f1
+mode http
+bind "fd@${f1}"
+log ${Slog_addr}:${Slog_port} daemon
+log-format Ta=%Ta\ Tc=%Tc\ Td=%Td\ Th=%Th\ Ti=%Ti\ Tq=%Tq\ TR=%TR\ Tr=%Tr\ Tt=%Tt\ Tw=%Tw
+default_backend b1
+
+backend b1
+mode http
+http-request use-service lua.foo.http
+
+frontend f2
+mode tcp
+bind "fd@${f2}"
+log ${Slog_addr}:${Slog_port} daemon
+log-format Tc=%Tc\ Td=%Td\ Th=%Th\ Tt=%Tt\ Tw=%Tw
+
+tcp-request inspect-delay 1s
+tcp-request content use-service lua.foo.tcp
+} -start
+
+client c1 -connect "${h1_f1_sock}" {
+txreq
+rxresp
+} -run
+
+client c2 -connect "${h1_f2_sock}" {
+txreq
+rxresp
+} -run
+
+syslog Slog -wait
-- 
2.11.0

From 7b6cc52784526c32efda44b873a4258d3ae0b8c7 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Fr=C3=A9d=C3=A9ric=20L=C3=A9caille?= 
Date: Wed, 18 Jul 2018 14:25:26 +0200
Subject: [PATCH 1/2] BUG/MINOR: lua: Bad HTTP client request duration.

The HTTP LUA applet callback should not update the date on which the HTTP client
request arrives. It was being updated just after the LUA applet had completed its job.

This patch simply removes the affected statement. The same fix has been applied
to the TCP LUA applet callback.

To reproduce this issue, as reported by Patrick Hemmer, implement an HTTP LUA applet
which sleeps a bit before replying:

  core.register_service("foo", "http", function(applet)
  core.msleep(100)
  applet:set_status(200)
  applet:start_response()
  end)

This had the consequence of logging the %TR field with approximately the same value as
the LUA sleep time.

Thank you to Patrick Hemmer for having reported this issue.

Must be backported to 1.8, 1.7 and 1.6.
---
 src/hlua.c | 6 +-
 1 file changed, 1 insertion(+), 5