Re: [PATCH] MEDIUM: reset lua transaction between http requests

2018-08-23 Thread Willy Tarreau
Hi Thierry,

On Thu, Aug 23, 2018 at 09:37:43AM +0200, Thierry Fournier wrote:
> Hi,
> 
> Your patch makes sense; that's the right approach, but I have a doubt
> about the place to use for doing the reinitialization.
> 
> I'm adding Willy to this thread in order to get his advice on http2.
> 
> Before 1.8, the Lua context was reinitialized with the stream, because
> the stream was reinitialized between each http request in a keep-alive
> session.
> 
> With http2 I guess this behavior changed. So, Willy, do you have
> an opinion on where to perform the Lua reinit?

Oh with H2 it's even simpler, streams are distinct from each other
so we don't reuse them and the issue doesn't exist :-)

Does this mean I should take Patrick's patch ?

Willy



Re: Clarification re Timeouts and Session State in the Logs

2018-08-23 Thread Igor Cicimov
Hi Daniel,

We had a similar issue in 2015, and the answer was: the server timeout was
too short. Simple.

On Thu, 23 Aug 2018 9:56 pm Daniel Schneller <
daniel.schnel...@centerdevice.com> wrote:

> Friendly bump.
> I'd volunteer to do some documentation amendments once I understand the
> issue better :D
>
> On 21. Aug 2018, at 16:17, Daniel Schneller <
> daniel.schnel...@centerdevice.com> wrote:
>
> Hi!
>
> I am trying to wrap my head around an issue we are seeing where there are
> many HTTP 504 responses sent out to clients.
>
> I suspect that due to a client bug they stop sending data midway during
> the data phase of the request, but they keep the connection open.
>
> What I see in the haproxy logs is a 504 response with termination flags
> "sHNN".
> That I read as haproxy getting impatient (timeout server) waiting for
> response headers from the backend. The backend, not having seen the
> complete request yet, can't really answer at this point, of course.
> I am wondering, though, why it is that I don't see a termination
> state indicating a client problem.
>
> So my question (for now ;-)) boils down to these points:
>
> 1) When does the server timeout actually start counting? Am I right to
> assume it is from the last moment the server sent or (in this case)
> received some data?
>
> 2) If both "timeout server" and "timeout client" are set to the same
> value, and the input stalls (after the headers) longer than that, is it
> just that the implementation is such that the server side timeout "wins"
> when it comes to setting the termination flags?
>
> 3) If I set the client timeout shorter than the server timeout and
> produced this situation, should I then see a cD state?  If so, would I be
> right to assume that if the server were now to stall, the log could again
> be misleading in telling me that the client timeout expired first?
>
> I understand it is difficult to tell "who's to blame" for an inactivity
> timeout without knowledge about the content or final size of the request --
> I just need some clarity on how to read the logs :)
>
>
> Thanks!
> Daniel
>
>
>
>
> --
> Daniel Schneller
> Principal Cloud Engineer
>
> CenterDevice GmbH
> Rheinwerkallee 3
> 53227 Bonn
> www.centerdevice.com
>
> __
> Management: Dr. Patrick Peschlow, Dr. Lukas Pustina, Michael
> Rosbach; Commercial Register No.: HRB 18655; Register Court: Bonn;
> VAT ID: DE-815299431
>
> This e-mail, including any attached files, may contain confidential
> and/or legally protected information. If you are not the intended
> recipient, or have received this e-mail in error, please inform the
> sender immediately and delete this e-mail and any attached files.
> Unauthorized copying, use or opening of attached files, as well as
> unauthorized forwarding of this e-mail, is not permitted.
>
>
>
>
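For readers who land on this thread with the same question: the interacting knobs can be sketched in a minimal config (the values are assumptions, for illustration only):

```
defaults
    mode http
    timeout connect       5s   # establishing the TCP connection to the server
    timeout client       30s   # max inactivity while waiting for data from the client
    timeout server       30s   # max inactivity while waiting for the server to make progress
    timeout http-request 10s   # time allowed to receive the complete request *headers*
```

Note that "timeout http-request" covers only the request headers by default (it extends to the body only with "option http-buffer-request", added in 1.6), so a client stalling mid-body tends to be observed as a server-side timeout (sH..), which matches the logs described above.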


RE: Docker Swarm configuration

2018-08-23 Thread Norman Branitsky
Actually, items 2 and 3 below are what I want:
If the hostname is "ucp.mydomain.com", then "reencrypt", i.e. https -> https;
else normal SSL termination ("edge"), i.e. https -> http.

-Original Message-
From: Aleksandar Lazic  
Sent: Thursday, August 23, 2018 4:30 PM
To: Norman Branitsky ; haproxy@formilux.org; 
haproxy 
Subject: RE: Docker Swarm configuration

Yes, it looks complicated, but you only need the edge one, as I understood your 
requirement.

TCP -> HTTP only if some name
 -> else go further with TCP

And it's great that this is possible with this software ;-)))

Regards
Aleks 


-------- Original Message --------
From: Norman Branitsky 
Sent: 23 August 2018 21:59:03 CEST
To: Aleksandar Lazic , "haproxy@formilux.org" 
, haproxy 
Subject: RE: Docker Swarm configuration

Looking at the openshift router definition, I can see it implements what I want:

   2. If termination is type 'edge': This is https -> http.  Create a 
be_edge_http: backend.
  Incoming https traffic is terminated and sent as http to the pods.

   3. If termination is type 'reencrypt': This is https -> https.  Create a 
be_secure: backend.
Incoming https traffic is terminated and then sent as https to the pods.

BUT wow! Is this implementation complicated!

-Original Message-
From: Aleksandar Lazic  
Sent: Thursday, August 23, 2018 3:25 PM
To: haproxy@formilux.org; Norman Branitsky ; 
haproxy 
Subject: Re: Docker Swarm configuration

Hi.

How about using the following setup?

frontend tcp
  mode tcp
  bind :443

  use_backend default

backend default
  mode http
  # a backend cannot bind; point it at a local http frontend instead
  server local 127.0.0.1:444

  ...

You can take a look into the openshift router for a more detailed solution.

https://github.com/openshift/origin/blob/master/images/router/haproxy/conf/haproxy-config.template#L183

Regards
Aleks
  


-------- Original Message --------
From: Norman Branitsky 
Sent: 23 August 2018 20:56:31 CEST
To: haproxy 
Subject: Docker Swarm configuration

My plan was to by default terminate SSL and send http traffic to the worker 
servers on port 88 while traffic with a "ucp.mydomain.com" header would be 
passed thru as https to the UCP management servers on port 8443.
Docker Enterprise Manager nodes insist on seeing incoming commands as https and 
require an SSL certificate and key to configure correctly.
Problem is, the only way I know to pass thru https traffic without terminating 
the SSL is to use mode tcp.
But mode tcp can only listen on specific ports - it can't see http headers to 
detect the "ucp" hostname, so how do I select the correct backend?
I could make the ucp frontend listen on a different port e.g. 444 and direct to 
8443 but that seems klutzy.
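One way to do this without a second port: a "mode tcp" frontend can still inspect the SNI field of the TLS ClientHello and route on it. A hedged sketch of the idea — the server names, certificate path and loopback port are assumptions, not taken from the thread:

```
frontend fe_443
    mode tcp
    bind :443
    # wait briefly for the TLS ClientHello so the SNI is available
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }

    # pass UCP traffic through untouched (still https end to end)
    use_backend be_ucp if { req_ssl_sni -i ucp.mydomain.com }
    default_backend be_edge

backend be_ucp
    mode tcp
    server ucp1 ucp-node1:8443

backend be_edge
    mode tcp
    # loop back into a local frontend that terminates TLS
    server edge 127.0.0.1:8444

frontend fe_edge
    mode http
    bind 127.0.0.1:8444 ssl crt /etc/haproxy/site.pem
    default_backend be_workers

backend be_workers
    mode http
    server worker1 worker1:88
```

This is essentially the openshift router pattern mentioned elsewhere in this thread, reduced to the "edge or passthrough" decision.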


RE: Docker Swarm configuration

2018-08-23 Thread Aleksandar Lazic
Yes, it looks complicated, but you only need the edge one, as I understood your 
requirement.

TCP -> HTTP only if some name
 -> else go further with TCP

And it's great that this is possible with this software ;-)))

Regards
Aleks 


-------- Original Message --------
From: Norman Branitsky 
Sent: 23 August 2018 21:59:03 CEST
To: Aleksandar Lazic , "haproxy@formilux.org" 
, haproxy 
Subject: RE: Docker Swarm configuration

Looking at the openshift router definition, I can see it implements what I want:

   2. If termination is type 'edge': This is https -> http.  Create a 
be_edge_http: backend.
  Incoming https traffic is terminated and sent as http to the pods.

   3. If termination is type 'reencrypt': This is https -> https.  Create a 
be_secure: backend.
Incoming https traffic is terminated and then sent as https to the pods.

BUT wow! Is this implementation complicated!

-Original Message-
From: Aleksandar Lazic  
Sent: Thursday, August 23, 2018 3:25 PM
To: haproxy@formilux.org; Norman Branitsky ; 
haproxy 
Subject: Re: Docker Swarm configuration

Hi.

How about using the following setup?

frontend tcp
  mode tcp
  bind :443

  use_backend default

backend default
  mode http
  # a backend cannot bind; point it at a local http frontend instead
  server local 127.0.0.1:444

  ...

You can take a look into the openshift router for a more detailed solution.

https://github.com/openshift/origin/blob/master/images/router/haproxy/conf/haproxy-config.template#L183

Regards
Aleks
  


-------- Original Message --------
From: Norman Branitsky 
Sent: 23 August 2018 20:56:31 CEST
To: haproxy 
Subject: Docker Swarm configuration

My plan was to by default terminate SSL and send http traffic to the worker 
servers on port 88 while traffic with a "ucp.mydomain.com" header would be 
passed thru as https to the UCP management servers on port 8443.
Docker Enterprise Manager nodes insist on seeing incoming commands as https and 
require an SSL certificate and key to configure correctly.
Problem is, the only way I know to pass thru https traffic without terminating 
the SSL is to use mode tcp.
But mode tcp can only listen on specific ports - it can't see http headers to 
detect the "ucp" hostname, so how do I select the correct backend?
I could make the ucp frontend listen on a different port e.g. 444 and direct to 
8443 but that seems klutzy.



Re: HAProxy dynamic server address based off of variable

2018-08-23 Thread Aleksandar Lazic
Hi.

First of all, I suggest updating haproxy via the rpm from
https://haproxy.hongens.nl/.

I don't think that's possible, but I can be wrong on this.

Can't you use DNS for this, as a name can be used as the server address?

Regards
Aleks


-------- Original Message --------
From: Phani Gopal Achanta 
Sent: 23 August 2018 22:10:18 CEST
To: haproxy@formilux.org
Subject: HAProxy dynamic server address based off of variable

I want to dynamically route to a server by making use of the request
variable ssl_fc_sni.
Is it possible to configure the haproxy server statement to use a variable for
the server address and/or port?
Example:
backend citest3bk_spice_default
   server compute1 %[ssl_fc_sni] ssl verify required crt server.pem
ca-file ca.pem force-tlsv12 weight 0

Currently, I get an error parsing [/etc/haproxy/haproxy.cfg:157] : 'server
compute1' : invalid address: '%[ssl_fc_sni]' in '%[ssl_fc_sni]'
I am running on Centos 7.5 with HAProxy 1.5.18
Thanks
Phani


HAProxy dynamic server address based off of variable

2018-08-23 Thread Phani Gopal Achanta
I want to dynamically route to a server by making use of the request
variable ssl_fc_sni.
Is it possible to configure the haproxy server statement to use a variable for
the server address and/or port?
Example:
backend citest3bk_spice_default
   server compute1 %[ssl_fc_sni] ssl verify required crt server.pem
ca-file ca.pem force-tlsv12 weight 0

Currently, I get an error parsing [/etc/haproxy/haproxy.cfg:157] : 'server
compute1' : invalid address: '%[ssl_fc_sni]' in '%[ssl_fc_sni]'
I am running on Centos 7.5 with HAProxy 1.5.18
Thanks
Phani
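A variable server address is indeed not supported by the server directive; the usual workaround is to map SNI values to statically defined backends. A hedged sketch — the map file path, backend names and ports are assumptions, and dynamic use_backend expressions require a newer release than the 1.5.18 mentioned above (1.6 or later, if memory serves):

```
frontend fe_spice
    mode tcp
    bind :5900 ssl crt server.pem ca-file ca.pem verify required force-tlsv12
    # look up the presented SNI in a map file; fall back to be_default
    use_backend %[ssl_fc_sni,lower,map(/etc/haproxy/sni.map,be_default)]

backend be_default
    mode tcp
    server compute1 compute1.internal:5900
```

where /etc/haproxy/sni.map would contain lines of the form "compute1.example.com be_compute1", one statically defined backend per routable SNI value.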


RE: Docker Swarm configuration

2018-08-23 Thread Norman Branitsky
Looking at the openshift router definition, I can see it implements what I want:

   2. If termination is type 'edge': This is https -> http.  Create a 
be_edge_http: backend.
  Incoming https traffic is terminated and sent as http to the pods.

   3. If termination is type 'reencrypt': This is https -> https.  Create a 
be_secure: backend.
Incoming https traffic is terminated and then sent as https to the pods.

BUT wow! Is this implementation complicated!

-Original Message-
From: Aleksandar Lazic  
Sent: Thursday, August 23, 2018 3:25 PM
To: haproxy@formilux.org; Norman Branitsky ; 
haproxy 
Subject: Re: Docker Swarm configuration

Hi.

How about using the following setup?

frontend tcp
  mode tcp
  bind :443

  use_backend default

backend default
  mode http
  # a backend cannot bind; point it at a local http frontend instead
  server local 127.0.0.1:444

  ...

You can take a look into the openshift router for a more detailed solution.

https://github.com/openshift/origin/blob/master/images/router/haproxy/conf/haproxy-config.template#L183

Regards
Aleks
  


-------- Original Message --------
From: Norman Branitsky 
Sent: 23 August 2018 20:56:31 CEST
To: haproxy 
Subject: Docker Swarm configuration

My plan was to by default terminate SSL and send http traffic to the worker 
servers on port 88 while traffic with a "ucp.mydomain.com" header would be 
passed thru as https to the UCP management servers on port 8443.
Docker Enterprise Manager nodes insist on seeing incoming commands as https and 
require an SSL certificate and key to configure correctly.
Problem is, the only way I know to pass thru https traffic without terminating 
the SSL is to use mode tcp.
But mode tcp can only listen on specific ports - it can't see http headers to 
detect the "ucp" hostname, so how do I select the correct backend?
I could make the ucp frontend listen on a different port e.g. 444 and direct to 
8443 but that seems klutzy.


Re: Docker Swarm configuration

2018-08-23 Thread Aleksandar Lazic
Hi.

How about using the following setup?

frontend tcp
  mode tcp
  bind :443

  use_backend default

backend default
  mode http
  # a backend cannot bind; point it at a local http frontend instead
  server local 127.0.0.1:444

  ...

You can take a look into the openshift router for a more detailed solution.

https://github.com/openshift/origin/blob/master/images/router/haproxy/conf/haproxy-config.template#L183

Regards
Aleks
  


-------- Original Message --------
From: Norman Branitsky 
Sent: 23 August 2018 20:56:31 CEST
To: haproxy 
Subject: Docker Swarm configuration

My plan was to by default terminate SSL and send http traffic to the worker 
servers on port 88 while traffic with a "ucp.mydomain.com" header
would be passed thru as https to the UCP management servers on port 8443.
Docker Enterprise Manager nodes insist on seeing incoming commands as https and 
require an SSL certificate and key to configure correctly.
Problem is, the only way I know to pass thru https traffic without terminating 
the SSL is to use mode tcp.
But mode tcp can only listen on specific ports - it can't see http headers to 
detect the "ucp" hostname,
so how do I select the correct backend?
I could make the ucp frontend listen on a different port e.g. 444 and direct to 
8443 but that seems klutzy.



Docker Swarm configuration

2018-08-23 Thread Norman Branitsky
My plan was to by default terminate SSL and send http traffic to the worker 
servers on port 88 while traffic with a "ucp.mydomain.com" header
would be passed thru as https to the UCP management servers on port 8443.
Docker Enterprise Manager nodes insist on seeing incoming commands as https and 
require an SSL certificate and key to configure correctly.
Problem is, the only way I know to pass thru https traffic without terminating 
the SSL is to use mode tcp.
But mode tcp can only listen on specific ports - it can't see http headers to 
detect the "ucp" hostname,
so how do I select the correct backend?
I could make the ucp frontend listen on a different port e.g. 444 and direct to 
8443 but that seems klutzy.


Re: [PATCH] MEDIUM: lua: Add stick table support for Lua

2018-08-23 Thread Adis Nezirovic
On Thu, Aug 23, 2018 at 03:43:59PM +0200, Willy Tarreau wrote:
> Does this mean I should merge Adis' patch or do you want to verify
> other things ? Just let me know.

Willy,

I'll submit a new patch later today with simplified filter definitions, and
then we can ask Thierry for a final ack on the patch.

Best regards,
Adis



Re: [PATCH] REGTEST/MINOR

2018-08-23 Thread Willy Tarreau
On Thu, Aug 23, 2018 at 09:01:45AM +0200, Frederic Lecaille wrote:
> Hi ML,
> 
> Here are two patches for haproxy reg testing.
(...)

Applied, thanks Fred!
Willy



Re: [PATCH] DOC: Fix spelling error in configuration doc

2018-08-23 Thread Willy Tarreau
On Thu, Aug 23, 2018 at 02:11:27PM +0200, Jens Bissinger wrote:
> Fix spelling error in logging section of configuration doc.

now applied, thank you!
Willy



Re: [PATCH] MEDIUM: lua: Add stick table support for Lua

2018-08-23 Thread Willy Tarreau
Hi Thierry,

On Thu, Aug 23, 2018 at 10:53:15AM +0200, Thierry Fournier wrote:
(...)
> Ok, it sounds good. I think this kind of syntax is easily understandable
> and it allows a good way of filtering values.

Does this mean I should merge Adis' patch or do you want to verify
other things ? Just let me know.

Thanks,
Willy



[PATCH] DOC: Fix spelling error in configuration doc

2018-08-23 Thread Jens Bissinger
Fix spelling error in logging section of configuration doc.
---
 doc/configuration.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 4e66aad8..6e33f599 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -16414,11 +16414,11 @@ Please refer to the table below for currently defined 
variables :
   | S | %sslc| ssl_ciphers (ex: AES-SHA) | string  |
   | S | %sslv| ssl_version (ex: TLSv1)   | string  |
   |   | %t   | date_time  (with millisecond resolution)  | date|
   | H | %tr  | date_time of HTTP request | date|
   | H | %trg | gmt_date_time of start of HTTP request| date|
-  | H | %trl | locla_date_time of start of HTTP request  | date|
+  | H | %trl | local_date_time of start of HTTP request  | date|
   |   | %ts  | termination_state | string  |
   | H | %tsc | termination_state with cookie status  | string  |
   +---+--+---+-+
 
 R = Restrictions : H = mode http only ; S = SSL only
-- 
2.15.2 (Apple Git-101.1)




Re: [PATCH] MEDIUM: lua: Add stick table support for Lua

2018-08-23 Thread Thierry Fournier
Hi

[...]

>> I'm also missing the relation between operators and between the contents
>> of operators, i.e. AND or OR. Here is how I understand your example:
>> 
>> +local filter = {
>> +  lt={{"gpc0", 1}, {"gpc1", 2}},
>> +  gt={{"conn_rate", 3}},
>> +  eq={{"conn_cur", 4}}
>> +}
>> 
>> Are you sure that this syntax is a good format? Maybe something like the
>> following, with the operator as an argument between the two operands; lines
>> are implicitly OR, and columns are AND:
>> 
>> +local filter = {
>> +  {{"gpc0", "lt", 1}, {"gpc1", "lt", 2}},
>> +  {{"conn_rate", "gt", 3}},
>> +  {{"conn_cur", "eq", 4}}
>> +}
> Actually, I was playing with some other ideas, and it was useful to be
> able to "preselect" filter operators.
> However, the CLI doesn't even support more than one; maybe we don't need
> to complicate things too much. Maybe we can simplify to this:
> 
>  local filter = {
>{"gpc0", "lt", 1},
>{"gpc1", "lt", 2},
>{"conn_rate", "gt", 3},
>{"conn_cur", "eq", 4}
>  }
> 
> The default operator would be AND, and we would not support other
> operators (to keep things simple), e.g. an example use case for the
> filter would be to filter on gpc0 > X AND gpc1 > Y.
> 
> If this sounds good, I can update/simplify the code.


Ok, it sounds good. I think this kind of syntax is easily understandable
and it allows a good way of filtering values.
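To make the agreed format concrete, a hedged usage sketch — the backend name "b1", the table columns and the response plumbing are assumptions, and dump() plus the stktable attribute are taken from the patch under discussion, not from released documentation:

```lua
-- sketch: filter entries are {column, operator, value} and are implicitly ANDed
core.register_service("st_dump", "http", function(applet)
    local filter = {
        {"gpc0", "lt", 1},
        {"conn_rate", "gt", 3}
    }
    -- assumes a stick table attached to backend "b1"
    local t = core.backends["b1"].stktable
    local body = ""
    for key, entry in pairs(t:dump(filter)) do
        body = body .. key .. "\n"
    end
    applet:set_status(200)
    applet:add_header("content-length", string.len(body))
    applet:start_response()
    applet:send(body)
end)
```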


>> Idea for a future extension: maybe it would be safer to compile the
>> stick table filter during the initialisation of the Lua code, to avoid
>> runtime errors?
> I'm returning runtime errors since it can be easy to mix up data from
> the client side (most probably the data would come as a JSON table, then
> be transformed to a Lua table).


ok

[...]


>> Line 274 of your patch, I don't see any HA_SPIN_LOCK(STK_TABLE_LOCK, ...).
>> I don't know threads very well, so maybe it is useless, maybe not.
> hlua_stktable_lookup() uses stktable_lookup_key(), which does have locks,
> so I guess that it should be fine then?


sure !


[...]


>> L.365, 369: The user doesn't have context about the error. Is it the
>> first entry of the table, the second? Which operator doesn't exist?
>> 
>> L.380, 384: Which line is wrong?
> Yes, it is somewhat cryptic. I've tried to avoid returning user-supplied
> data in the error messages. We can revisit this if/when we change the
> filter table format.


ok


>> L.431: You release the lock, so the next element relative to the current
>> "n" can disappear and ebmb_next() can return wrong memory.
> I was under the impression that we only have to acquire the lock and increment
> ref_cnt (so we can be sure our current node n is not deleted).
> ebmb_next() is called only while we're holding the lock, on the first and every
> other iteration, i.e.
> 
>  HA_SPIN_LOCK(STK_TABLE_LOCK, &t->lock);
>  eb = ebmb_first(&t->keys);
>  for (n = eb; n; n = ebmb_next(n)) {
>  ...
>  ts->ref_cnt++;
>  HA_SPIN_UNLOCK(STK_TABLE_LOCK, &t->lock);
>  ...
> 
>  HA_SPIN_LOCK(STK_TABLE_LOCK, &t->lock);
>  }
> 
> Or did I not get your point?


ok, you're probably right.


Thierry





Re: BUG: Tw is negative with lua sleep

2018-08-23 Thread Thierry Fournier



> On 22 Aug 2018, at 06:00, Patrick Hemmer  wrote:
> 
> 
> 
> On 2018/7/18 09:03, Frederic Lecaille wrote:
>> Hello Patrick, 
>> 
>> On 07/17/2018 03:59 PM, Patrick Hemmer wrote: 
>>> Ping? 
>>> 
>>> -Patrick 
>>> 
>>> On 2018/6/22 15:10, Patrick Hemmer wrote: 
 When using core.msleep in lua, the %Tw metric is a negative value. 
 
 For example with the following config: 
 haproxy.cfg: 
 global 
 lua-load /tmp/haproxy.lua 
 
 frontend f1 
 mode http 
 bind :8000 
 default_backend b1 
 log 127.0.0.1:1234 daemon 
 log-format Ta=%Ta\ Tc=%Tc\ Td=%Td\ Th=%Th\ Ti=%Ti\ Tq=%Tq\ 
 TR=%TR\ Tr=%Tr\ Tt=%Tt\ Tw=%Tw 
 
 backend b1 
 mode http 
 http-request use-service lua.foo 
 
 haproxy.lua: 
 core.register_service("foo", "http", function(applet) 
 core.msleep(100) 
 applet:set_status(200) 
 applet:start_response() 
 end) 
 
 The log contains: 
 Ta=104 Tc=0 Td=0 Th=0 Ti=0 Tq=104 TR=104 Tr=104 Tt=104 Tw=-104 
 
 ^ TR also looks wrong, as it did not take 104ms to receive the full 
 request. 
 
 This is built from the commit before current master: d8fd2af 
 
 -Patrick 
>>> 
>> 
>> The patch attached to this mail fixes this issue, at least for the %TR field.
>> 
>> But I am not sure at all that it is correct, or that there are no remaining issues.
>> For instance the LUA tcp callback also updates the tv_request log field.
>> 
>> So, let's wait for Thierry's validation.


Hi,

Applets should be considered as servers independent from HAProxy, so an applet
should not change HAProxy information like the log times.

I guess that your patch works, and the function hlua_applet_tcp_fct() should
follow the same approach.

Unfortunately I do not have free time to test all of these changes.

Thierry


>> Regards. 
>> 
> 
> Any update on this?
> 
> -Patrick
> 




Re: [PATCH] MEDIUM: reset lua transaction between http requests

2018-08-23 Thread Thierry Fournier
Hi,

Your patch makes sense; that's the right approach, but I have a doubt
about the place to use for doing the reinitialization.

I'm adding Willy to this thread in order to get his advice on http2.

Before 1.8, the Lua context was reinitialized with the stream, because
the stream was reinitialized between each http request in a keep-alive
session.

With http2 I guess this behavior changed. So, Willy, do you have
an opinion on where to perform the Lua reinit?

Thierry


> On 22 Aug 2018, at 16:09, Patrick Hemmer  wrote:
> 
> Not sure if this is the right approach, but this addresses the issue for me.
> This should be backported to 1.8.
> 
> -Patrick
> <0001-MEDIUM-reset-lua-transaction-between-http-requests.patch>




Re: Multiple connections to the same URL with H2

2018-08-23 Thread Ing. Andrea Vettori
Hello, enabling the http log on one of our frontends, I found this in the haproxy
logs when I see the 'broken pipe' error in the application log. Note that the two
calls are separated by just 11 milliseconds.

[23/Aug/2018:08:38:46.731] b2b~ b2b-ssl-servers/web3 0/0/1/-1/7 -1 0 - - CDVN 
36/20/0/0/0 0/0 "GET /reparto/ELE HTTP/1.1"
[23/Aug/2018:08:38:46.742] b2b~ b2b-ssl-servers/web3 0/0/0/73/75 200 31114 - - 
--VN 36/20/0/1/0 0/0 "GET /reparto/ELE HTTP/1.1"

Looking at the manual for termination states, I found:

> CD   The client unexpectedly aborted during data transfer. This can be
>   caused by a browser crash, by an intermediate equipment between the
>   client and haproxy which decided to actively break the connection,
>   by network routing issues between the client and haproxy, or by a
>   keep-alive session between the server and the client terminated 
> first
>   by the client.

Since this happens only since we enabled H2 on haproxy, can we assume that it 
is caused by the client closing the connection? Or could it be related to how 
haproxy handles H2 and converts it to multiple http1 requests?


Thanks
—
Ing. Andrea Vettori
Responsabile Sistemi Informativi

[PATCH] REGTEST/MINOR

2018-08-23 Thread Frederic Lecaille

Hi ML,

Here are two patches for haproxy reg testing.

Note that we have recently added a new feature to varnishtest so that 
commands can be sent to the CLI without running a shell, socat, etc. 
(https://varnish-cache.org/docs/trunk/reference/vtc.html#haproxy). This 
breaks the reg-tests/spoe/h0.vtc reg test case (fixed by the first patch).


We can send commands to the CLI as follows:

haproxy h1 -conf {...}

haproxy h1 -cli {
send "show info"
expect ~ "something to expect"
} -wait


With the 2nd patch we changed the prefix of the VTC files from the 'h' 
letter to 'b' (as in "bug") and added a new LEVEL 4 to run all the VTC 
files prefixed with the 'b' letter. These VTC files are in relation with 
the bugs they help to reproduce (and prevent from coming back).


So, from now on, if you run reg-tests, as there are no more LEVEL 1 
(default LEVEL) VTC files, no VTC file will be run.


To run the LEVEL 4 tests (VTC files for already fixed bugs) you will have to 
use a command such as:


$ 
VARNISHTEST_PROGRAM=~/src/varnish-cache-trunk/bin/varnishtest/varnishtest 
LEVEL=4 make reg-tests



Fred.
From 35486065bf63a47d17ba65b94d7b42b08d958abb Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Fr=C3=A9d=C3=A9ric=20L=C3=A9caille?= 
Date: Wed, 22 Aug 2018 10:41:33 +0200
Subject: [PATCH 2/2] REGTEST/MINOR: Add a new class of regression testing
 files.

Add LEVEL #4 regression testing files which is dedicated to
VTC files in relation with bugs they help to reproduce.
At the date of this commit, all VTC files are LEVEL 4 VTC files.
---
 Makefile | 7 +++
 reg-tests/log/{h0.vtc => b0.vtc} | 0
 reg-tests/lua/{h0.lua => b0.lua} | 0
 reg-tests/lua/{h0.vtc => b0.vtc} | 2 +-
 reg-tests/seamless-reload/{h0.vtc => b0.vtc} | 0
 reg-tests/spoe/{h0.vtc => b0.vtc}| 0
 reg-tests/ssl/{h0.vtc => b0.vtc} | 0
 reg-tests/stick-table/{h0.vtc => b0.vtc} | 0
 8 files changed, 8 insertions(+), 1 deletion(-)
 rename reg-tests/log/{h0.vtc => b0.vtc} (100%)
 rename reg-tests/lua/{h0.lua => b0.lua} (100%)
 rename reg-tests/lua/{h0.vtc => b0.vtc} (97%)
 rename reg-tests/seamless-reload/{h0.vtc => b0.vtc} (100%)
 rename reg-tests/spoe/{h0.vtc => b0.vtc} (100%)
 rename reg-tests/ssl/{h0.vtc => b0.vtc} (100%)
 rename reg-tests/stick-table/{h0.vtc => b0.vtc} (100%)

diff --git a/Makefile b/Makefile
index 817161f7..c4923995 100644
--- a/Makefile
+++ b/Makefile
@@ -999,6 +999,11 @@ update-version:
 	echo "$(SUBVERS)" > SUBVERS
 	echo "$(VERDATE)" > VERDATE
 
+# Target to run the regression testing script files.
+# LEVEL 1 scripts are dedicated to pure haproxy compliance tests (prefixed with 'h' letter).
+# LEVEL 2 scripts are slow scripts (prefixed with 's' letter).
+# LEVEL 3 scripts are low interest scripts (prefixed with 'l' letter).
+# LEVEL 4 scripts are in relation with bugs they help to reproduce (prefixed with 'b' letter).
 reg-tests:
 	@if [ ! -x "$(VARNISHTEST_PROGRAM)" ]; then \
 		echo "Please make the VARNISHTEST_PROGRAM variable point to the location of the varnishtest program."; \
@@ -1011,6 +1016,8 @@ reg-tests:
 	   EXPR='s*.vtc'; \
 	elif [ $$LEVEL = 3 ] ; then \
 	   EXPR='l*.vtc'; \
+	elif [ $$LEVEL = 4 ] ; then \
+	   EXPR='b*.vtc'; \
 	fi ; \
 	if [ -n "$$EXPR" ] ; then \
 	   find reg-tests -type f -name "$$EXPR" -print0 | \
diff --git a/reg-tests/log/h0.vtc b/reg-tests/log/b0.vtc
similarity index 100%
rename from reg-tests/log/h0.vtc
rename to reg-tests/log/b0.vtc
diff --git a/reg-tests/lua/h0.lua b/reg-tests/lua/b0.lua
similarity index 100%
rename from reg-tests/lua/h0.lua
rename to reg-tests/lua/b0.lua
diff --git a/reg-tests/lua/h0.vtc b/reg-tests/lua/b0.vtc
similarity index 97%
rename from reg-tests/lua/h0.vtc
rename to reg-tests/lua/b0.vtc
index 2b2ffb0e..4229eeb0 100644
--- a/reg-tests/lua/h0.vtc
+++ b/reg-tests/lua/b0.vtc
@@ -40,7 +40,7 @@ server s1 -repeat 2 {
 
 haproxy h1 -conf {
 global
-lua-load ${testdir}/h0.lua
+lua-load ${testdir}/b0.lua
 
 frontend fe1
 mode http
diff --git a/reg-tests/seamless-reload/h0.vtc b/reg-tests/seamless-reload/b0.vtc
similarity index 100%
rename from reg-tests/seamless-reload/h0.vtc
rename to reg-tests/seamless-reload/b0.vtc
diff --git a/reg-tests/spoe/h0.vtc b/reg-tests/spoe/b0.vtc
similarity index 100%
rename from reg-tests/spoe/h0.vtc
rename to reg-tests/spoe/b0.vtc
diff --git a/reg-tests/ssl/h0.vtc b/reg-tests/ssl/b0.vtc
similarity index 100%
rename from reg-tests/ssl/h0.vtc
rename to reg-tests/ssl/b0.vtc
diff --git a/reg-tests/stick-table/h0.vtc b/reg-tests/stick-table/b0.vtc
similarity index 100%
rename from reg-tests/stick-table/h0.vtc
rename to reg-tests/stick-table/b0.vtc
-- 
2.11.0

>From