Would you be interested in helping us integrate Alexa voice control into HAProxy?

2017-03-31 Thread Malcolm Turnbull
Willy et al.

Would you be interested in helping us integrate Alexa voice control
into HAProxy?
Maybe we could rewrite the whole thing in Lua?

Initially I was sceptical but I'm now really impressed with what our
customers are achieving with our new product:

https://www.loadbalancer.org/blog/5988
https://www.loadbalancer.org/products/hardware/enterprise-val

It's awesome: 92.3% of the time it does exactly what you ask, and you
can also control the office music with it!

I hope you are as excited as I am by the possibilities.




-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)330 380 1064
http://www.loadbalancer.org/



Re: OpenSSL engine and async support

2017-03-31 Thread Grant Zhang

Hi Emeric,

Sorry for my delayed reply.


On 03/28/2017 01:47 AM, Emeric Brun wrote:



This is an Atom C2518, and it seems that --disable-prf has cut the performance
in half. We should receive an 8920 soon.


After stopping the injection, the haproxy process continues to steal CPU while
doing nothing (top shows ~50% of one core, mainly in user):

Hmm, an idle haproxy process with qat enabled consumes about 5% of a core in
my test. 50% is too much :-(

In theory it should not consume anything anymore if it has nothing to do,
so maybe the 5% you observed will help understand what is happening.

I've just noticed 50% cpu usage directly at start-up if we enable the engine
(with or without ssl-async):

global
    tune.ssl.default-dh-param 2048
    ssl-engine qat
#   ssl-async

listen gg
    mode http
    bind 0.0.0.0:9443 ssl crt /root/2048.pem ciphers AES
    redirect location
Somehow I cannot reproduce the CPU usage issue with the above config.
In my test, when haproxy is idle, pidstat shows about 4% CPU usage.
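(The figures below come from pidstat; a sketch of the invocation, assuming a
single haproxy process and a 3-second interval — the columns are time, PID,
%usr, %system, %guest, %CPU, CPU and command:)

    pidstat -u -p $(pidof haproxy) 3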


11:49:14 PM    359247    3.33    1.33    0.00    4.67    1  haproxy_nodebug
11:49:17 PM    359247    3.33    1.33    0.00    4.67    1  haproxy_nodebug
11:49:20 PM    359247    2.67    1.33    0.00    4.00    1  haproxy_nodebug

When it is under load test, the CPU usage jumps to ~100% (single-process mode):

11:51:26 PM    359247   85.67   21.67    0.00  107.33    8  haproxy_nodebug

I am not sure whether it is the different hardware (C2000 vs. 895X) or
some difference in software. Just some things to check:
* your kernel version (I tested with 4.4/4.7/4.9 without problems),
and qat driver version?

* openssl version (1.1.0b-e?)
* are you using the latest QAT_Engine (https://github.com/01org/QAT_Engine)?
* I assume you use qat_contig_mem kernel module?
* are you using the following config file for your c2000 card? 
https://github.com/01org/QAT_Engine/blob/master/qat/config/c2xxx/multi_process_optimized/c2xxx_qa_dev0.conf
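(A quick way to gather most of that, as a sketch assuming standard tooling;
the engine and module names are the ones discussed above:)

    uname -r                 # kernel version
    openssl version          # expecting 1.1.0b-e
    lsmod | grep qat         # qat driver and qat_contig_mem modules loaded?
    openssl engine -t qat    # does the qat engine load and report available?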


Thanks,

Grant



Re: client connections being held open, despite option forceclose

2017-03-31 Thread Patrick Kaeding
Thanks Lukas, that makes sense. I will give this a shot and see what I can
come up with.

Thanks,
Patrick

On Fri, Mar 31, 2017 at 11:18 AM, Lukas Tribus  wrote:

> Hello,
>
>
> Am 31.03.2017 um 19:59 schrieb Patrick Kaeding:
>
>> Okay, thanks Holger!  We were hitting the maxconn limit, which is what
>> sparked this investigation. When we were at that limit, the discrepancy
>> between frontend and backend was higher than when I could observe it above
>> (we restarted HAProxy to re-establish the connections and start anew).
>>
>> I also realized that my `netstat` command above isn't quite right, since
>> it is counting connections in the TIME_WAIT state, while HAProxy would only
>> be concerned with ESTABLISHED connections, right?
>>
>> So is the solution to just increase the maxconn (and/or add more HAProxy
>> nodes)?
>>
>
> No, increasing maxconn seems like hiding the problem to me.
> There is no good answer to this, unless you know with certainty what those
> connections are about. TCPdumping those idle sessions and analyzing the
> behavior may be needed.
>
> Note that "option forceclose" may not have a positive effect. If a browser
> sees that the server does not support keepalive, it may more aggressively
> pre-connect which is the opposite of what you are trying to achieve.
>
> I would suggest you transition the configuration to a keep-alive
> configuration with short timeouts (like timeout http-keep-alive 1s [1]),
> instead of working "against" the browsers.
> Also, closing idle pre-connect sessions with a short "timeout
> http-request" [2] may also help limit the number of maxconn slots those
> browsers block.
>
>
>
> Regards,
>
> Lukas
>
>
> [1] https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#timeout%20http-keep-alive
> [2] https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-timeout%20http-request
>



-- 
Patrick Kaeding
pkaed...@launchdarkly.com


Re: log-format & defaults section bug ?

2017-03-31 Thread Willy Tarreau
On Fri, Mar 31, 2017 at 08:04:34PM +0200, de Lafond Guillaume wrote:
> Hello,
> 
> Please find the patches in a better format. 

Great, thank you Guillaume, both patches applied.

Willy



Re: ssl & default_backend

2017-03-31 Thread Lukas Tribus

Hello Antonio,


Am 31.03.2017 um 19:36 schrieb Antonio Trujillo Carmona:

On 30/03/17 at 10:51:58, Antonio Trujillo Carmona wrote:


I'm trying to use haproxy for balancing Citrix.

I tried with:

acl aplicaciones req_ssl_sni -i aplicaciones.gra.sas.junta-andalucia.es
acl citrixsf req_ssl_sni -i ssiiprovincial.hvn.sas.junta-andalucia.es

use_backend CitrixSF-SSL if citrixsf
use_backend SevidoresWeblogic-12c-Balanceador-SSL
default_backend CitrixSF-SSL

The goal is that Wpx, which can't use SNI, are redirected to CitrixSF-SSL.


You did not tell us what Wpx is. We also don't know your complete 
configuration.


Please post the complete configuration and the output of haproxy -vv.





I tried commenting out the acl req_ssl_sni lines (right now, I have no Wpx
to test with) but I receive: Error 404 Not Found.


With that statement I don't know which of the above lines you commented. Can
you explain?

Haproxy never generates a "404 Not Found" message; this comes from one of
your backends.





The issue of getting a different result when redirected from a use_backend
versus from default_backend occurs on all equipment: Windows XP, 7, or even
Linux.

I can't understand it.


I don't understand what you are saying. I suggest you explain in a few
sentences what you expect from haproxy, and then explain what the actual
result is.



Lukas




Re: client connections being held open, despite option forceclose

2017-03-31 Thread Lukas Tribus

Hello,


Am 31.03.2017 um 19:59 schrieb Patrick Kaeding:
Okay, thanks Holger!  We were hitting the maxconn limit, which is what 
sparked this investigation. When we were at that limit, the 
discrepancy between frontend and backend was higher than when I could 
observe it above (we restarted HAProxy to re-establish the connections 
and start anew).


I also realized that my `netstat` command above isn't quite right, 
since it is counting connections in the TIME_WAIT state, while HAProxy 
would only be concerned with ESTABLISHED connections, right?


So is the solution to just increase the maxconn (and/or add more 
HAProxy nodes)?


No, increasing maxconn seems like hiding the problem to me.
There is no good answer to this, unless you know with certainty what 
those connections are about. TCPdumping those idle sessions and 
analyzing the behavior may be needed.


Note that "option forceclose" may not have a positive effect. If a 
browser sees that the server does not support keepalive, it may more 
aggressively pre-connect which is the opposite of what you are trying to 
achieve.


I would suggest you transition the configuration to a keep-alive
configuration with short timeouts (like timeout http-keep-alive 1s [1]), 
instead of working "against" the browsers.
Also, closing idle pre-connect sessions with a short "timeout
http-request" [2] may also help limit the number of maxconn slots
those browsers block.
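
Something like this, as a minimal sketch (the timeout values are examples to
tune against your traffic, not recommendations):

    defaults
        mode http
        option http-keep-alive
        timeout http-keep-alive 1s
        timeout http-request 5s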




Regards,

Lukas


[1]
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#timeout%20http-keep-alive
[2]
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-timeout%20http-request




Re: log-format & defaults section bug ?

2017-03-31 Thread de Lafond Guillaume
Hello,

Please find the patches in a better format. 



0001-DOC-log-format-tcplog-httplog-update.patch
Description: Binary data


0002-MINOR-config-parsing-add-warning-when-log-format-tcp.patch
Description: Binary data


Thank you Willy ;-)

-- 
de Lafond Guillaume


>>> Maybe we should emit a warning about the conflict as the log-format is
>>> silently ignored in this case, or do not allow at all to have "option
>>> httplog" (or "option tcplog") with "log-format" in the same section.
>> 
>> It's difficult to do this due to the fact that we inherit settings from
>> the defaults section. We don't want a frontend to emit a warning when
>> it uses option httplog while it had inherited the log-format from the
>> defaults section. However I think we could add such a warning when the
>> defaults section contains both since we know that by definition it does
>> not inherit the settings from anywhere else.
>> 
>> Are you interested in trying to implement this ? I think it can be done
>> in cfgparse.c where "httplog" is checked. There are already checks for
>> a previous logformat_string in order to free it, I think that we can
>> insert a check before this so that if curproxy == &defproxy and
>> logformat_string is set, then we warn that a previous log-format was
>> already specified in this section and will be overridden. The same
>> test should be added in "tcplog" and in "log-format". In fact you
>> just need to search "logformat_string", there are not that many.

> This patch does not catch the same thing in a frontend like :
> 
> frontend MyFrontend
>   bind127.0.0.1:8087
>   log-format "%Tt"
>   option httplog
>   log-format "%Tt %Tr"
>   default_backend TransparentBack_http
> 
> To handle this in a frontend, I think I have to create a new variable in the 
> proxy struct that could be used to track previous declarations inside a 
> "proxy"?
> 
> in pseudo code : 
> 
> ...
> if (config_directive_line == "") {
>   if (curproxy->tmp_config[curproxy->cap][""])
>   {
>  previous_logformat = curproxy->tmp_config[curproxy->cap][""];
>  oldfile = previous_logformat[0];
>  oldlinenum = previous_logformat[1];
>  oldlogtype = previous_logformat[2];
>   Warning("parsing [%s:%d]: '%s' overrides previous '%s' 
> (%s:%d).\n", file, linenum, , oldlogformat, oldfile, oldlinenum);   
>   }
>   curproxy->conf.logformat_string = ...;
>   curproxy->tmp_config[curproxy->cap][""] = array(file, linenum, 
> "");
> }
> 
> 
> If curproxy->tmp_config is an array, we may use it to check other config 
> variables that should not be duplicated in the same "proxy", like :
> 
> frontend MyFrontend
> mode tcp
>
> mode http
>   ...
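
(As a side note, a minimal C sketch of the defaults-section check Willy
describes above, assuming the 1.7 cfgparse.c context — curproxy, defproxy and
Warning() — with placement and message text indicative only:)

    /* where "option httplog"/"option tcplog" is parsed in cfgparse.c:
     * in the defaults section nothing is inherited, so a pre-existing
     * log-format string means both were set in this very section. */
    if (curproxy == &defproxy && curproxy->conf.logformat_string) {
        Warning("parsing [%s:%d] : 'option httplog' overrides a previous "
                "'log-format' in this 'defaults' section.\n",
                file, linenum);
    }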



Re: client connections being held open, despite option forceclose

2017-03-31 Thread Patrick Kaeding
Okay, thanks Holger!  We were hitting the maxconn limit, which is what
sparked this investigation. When we were at that limit, the discrepancy
between frontend and backend was higher than when I could observe it above
(we restarted HAProxy to re-establish the connections and start anew).

I also realized that my `netstat` command above isn't quite right, since it
is counting connections in the TIME_WAIT state, while HAProxy would only be
concerned with ESTABLISHED connections, right?
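
(For what it's worth, filtering on the state removes the TIME_WAIT skew; a
sketch based on the same netstat invocation:)

    netstat -an | grep 10.10.7.135:443 | grep ESTABLISHED | wc -l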

So is the solution to just increase the maxconn (and/or add more HAProxy
nodes)?

On Fri, Mar 31, 2017 at 10:00 AM, Holger Just  wrote:

> Hi Patrick,
>
> Patrick Kaeding wrote:
> > I have one frontend, listening on port 443, and two backends, which send
> > traffic to either port 5050 or 5051.  The haproxy stats screen is
> > showing many more frontend connections than backend (in one case, 113k
> > on the frontend, 97k on one backend, and 3k on the other backend).
>
> Most browsers nowadays speculatively create more than one connection to
> the server (HAProxy in this case) to use them for parallel downloading
> of assets.
>
> Now, such a connection to the frontend will only result in a connection
> to the backend once the full HTTP request has been received and parsed
> by HAProxy. Since some of these speculative connections will just sit
> idle and will eventually get closed without having received any data,
> the number of frontend-connections is almost always higher than the sum
> of backend-connections.
>
> In addition to that, you might observe more connections accepted by the
> kernel than are shown in HAProxy's frontend. This is due to the fact
> that a new connection is only forwarded to HAProxy from the kernel once
> it is fully established and HAProxy has actively accepted it.
>
> If you are running against your maxconn or generally on high load, some
> connections might be accepted by the kernel already but not yet handled
> by HAProxy.
>
> Cheers,
> Holger
>



-- 
Patrick Kaeding
pkaed...@launchdarkly.com


Re: ssl & default_backend

2017-03-31 Thread Antonio Trujillo Carmona

On 30/03/17 at 10:51:58, Antonio Trujillo Carmona wrote:


I'm trying to use haproxy for balancing Citrix.

I tried with:

acl aplicaciones req_ssl_sni -i aplicaciones.gra.sas.junta-andalucia.es
acl citrixsf req_ssl_sni -i ssiiprovincial.hvn.sas.junta-andalucia.es

use_backend CitrixSF-SSL if citrixsf
use_backend SevidoresWeblogic-12c-Balanceador-SSL
default_backend CitrixSF-SSL

The goal is that Wpx, which can't use SNI, are redirected to CitrixSF-SSL.
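
(One detail worth noting in the configuration above: the second use_backend
line has no 'if' condition, so it matches unconditionally and the
default_backend line can never be reached. A sketch of the presumably
intended form, assuming the 'aplicaciones' acl was meant to guard it:)

    use_backend SevidoresWeblogic-12c-Balanceador-SSL if aplicaciones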

I tried commenting out the acl req_ssl_sni lines (right now, I have no Wpx
to test with) but I receive: Error 404 Not Found.

Why?

Thanks in advance.

--

*Antonio Trujillo Carmona*

*Network and systems technician.*

*Subdirección de Tecnologías de la Información y Comunicaciones*

Servicio Andaluz de Salud. Consejería de Salud de la Junta de Andalucía


_antonio.trujillo.s...@juntadeandalucia.es [1]_

Tel. +34 670947670 (747670)


The issue of getting a different result when redirected from a use_backend
versus from default_backend occurs on all equipment: Windows XP, 7, or even
Linux.

I can't understand it.

Links:
--
[1] mailto:_antonio.trujillo.s...@juntadeandalucia.es



Re: client connections being held open, despite option forceclose

2017-03-31 Thread Holger Just
Hi Patrick,

Patrick Kaeding wrote:
> I have one frontend, listening on port 443, and two backends, which send
> traffic to either port 5050 or 5051.  The haproxy stats screen is
> showing many more frontend connections than backend (in one case, 113k
> on the frontend, 97k on one backend, and 3k on the other backend).

Most browsers nowadays speculatively create more than one connection to
the server (HAProxy in this case) to use them for parallel downloading
of assets.

Now, such a connection to the frontend will only result in a connection
to the backend once the full HTTP request has been received and parsed
by HAProxy. Since some of these speculative connections will just sit
idle and will eventually get closed without having received any data,
the number of frontend-connections is almost always higher than the sum
of backend-connections.

In addition to that, you might observe more connections accepted by the
kernel than are shown in HAProxy's frontend. This is due to the fact
that a new connection is only forwarded to HAProxy from the kernel once
it is fully established and HAProxy has actively accepted it.

If you are running against your maxconn or generally on high load, some
connections might be accepted by the kernel already but not yet handled
by HAProxy.

Cheers,
Holger



Re: client connections being held open, despite option forceclose

2017-03-31 Thread Patrick Kaeding
Sorry, I forgot to mention that we are running HAProxy 1.7.3-1ppa1~xenial,
released 2017/03/01, on Ubuntu 16.04, in EC2.

On Fri, Mar 31, 2017 at 8:19 AM, Patrick Kaeding 
wrote:

> Hi all
>
> I am trying to determine the cause of an issue where the number of
> frontend connections is much higher than the number of backend connections.
>
> I have one frontend, listening on port 443, and two backends, which send
> traffic to either port 5050 or 5051.  The haproxy stats screen is showing
> many more frontend connections than backend (in one case, 113k on the
> frontend, 97k on one backend, and 3k on the other backend).
>
> netstat confirms that there are more connections to the frontend than to
> the sum of the backends (but it shows a higher number than haproxy reports):
>
> ubuntu@ip-10-10-7-135:~$ netstat -an |grep 5050|wc -l
> 2718
> ubuntu@ip-10-10-7-135:~$ netstat -an |grep 5051|wc -l
> 88413
> ubuntu@ip-10-10-7-135:~$ netstat -an |grep 10.10.7.135:443|wc -l
> 170442
>
> My first thought was that the connection to the clients was being kept
> open to be reused for more HTTP/1.1 requests, but we have 'option
> forceclose' in the defaults section of the haproxy config.
>
> Any ideas?
>
> --
> Patrick Kaeding
> pkaed...@launchdarkly.com
>



-- 
Patrick Kaeding
pkaed...@launchdarkly.com


client connections being held open, despite option forceclose

2017-03-31 Thread Patrick Kaeding
Hi all

I am trying to determine the cause of an issue where the number of frontend
connections is much higher than the number of backend connections.

I have one frontend, listening on port 443, and two backends, which send
traffic to either port 5050 or 5051.  The haproxy stats screen is showing
many more frontend connections than backend (in one case, 113k on the
frontend, 97k on one backend, and 3k on the other backend).

netstat confirms that there are more connections to the frontend than to
the sum of the backends (but it shows a higher number than haproxy reports):

ubuntu@ip-10-10-7-135:~$ netstat -an |grep 5050|wc -l
2718
ubuntu@ip-10-10-7-135:~$ netstat -an |grep 5051|wc -l
88413
ubuntu@ip-10-10-7-135:~$ netstat -an |grep 10.10.7.135:443|wc -l
170442

My first thought was that the connection to the clients was being kept open
to be reused for more HTTP/1.1 requests, but we have 'option forceclose' in
the defaults section of the haproxy config.

Any ideas?

-- 
Patrick Kaeding
pkaed...@launchdarkly.com


Re: [Patches] TLS methods configuration reworked

2017-03-31 Thread Emmanuel Hocdet
> On 31 March 2017 at 11:02, Emeric Brun wrote:
>
> Hi Emmanuel,
>
> On 03/30/2017 07:44 PM, Emmanuel Hocdet wrote:
>> The right patch series ...
>>
>>> On 30 March 2017 at 19:00, Emmanuel Hocdet wrote:
>>>
>>> Hi Emeric, Willy
>>>
>>> Rework of the patch series to match the default-server requirement and
>>> the talk with Willy. It should be easier to follow.
>>>
>>>> On 27 March 2017 at 16:15, Emeric Brun wrote:
>>>>
>>>> Hi Manu,
>>>>
>>>>> What kind of API and dependency? To generate haproxy configuration?
>>>>> Generating min-tlsv10 or ssl-min tlsv10 will not change anything.
>>>>
>>>> For a word-based API parser/generator, as the one embedded in haproxy's
>>>> hardware proxy, it does :)
>>>
>>> I pushed the min-/max- parameter at the end to easily change with your needs.
>
> This is not what I expected. The haproxy appliances' API parser is word based.
>
> To be clear, it is much easier to maintain if there are only two attributes,
> 'ssl-min-ver' and 'ssl-max-ver', with a set of possible values (sslv3, tlsv10, ...)
>
> So what I expect is only two configuration keywords, 'ssl-max-ver' and
> 'ssl-min-ver', which should take an argument containing the protocol version
> in string format, i.e.:
>
> bind 0.0.0.0:443 ssl crt my.pem ssl-min-ver tlsv10 ssl-max-ver tlsv13

Yes, I delayed this change (lack of time).

The last patch adds 'ssl-min-ver' and 'ssl-max-ver' with argument SSLv3,
TLSv1.0, TLSv1.1, TLSv1.2 or TLSv1.3.

Manu

0006-MEDIUM-ssl-add-ssl-min-ver-and-ssl-max-ver-parameter.patch
Description: Binary data


Re: 100% cpu usage with compression in haproxy.cfg

2017-03-31 Thread Willy Tarreau
On Fri, Mar 31, 2017 at 02:47:36PM +0200, Cyril Bonté wrote:
> Hi,
> 
> > De: "Willy Tarreau" 
> > À: "Cyril Bonté" 
> > Cc: "Christopher Faulet" , haproxy@formilux.org, 
> > nos...@mrietzler.de
> > Envoyé: Vendredi 31 Mars 2017 14:44:41
> > Objet: Re: 100% cpu usage with compression in haproxy.cfg
> > 
> > Hi guys,
> > 
> > On Thu, Mar 30, 2017 at 12:12:44PM +0200, Cyril Bonté wrote:
> > > From my first tests, it fixes the issue.
> > > This morning, I had the issue on 3 connections. I've applied the
> > > patches on
> > > this instance, let's wait 24h to see if it happens again.
> > 
> > Now that the 24h observation period is elapsed, I consider the
> > patches
> > fine and I've merged them. I'll probably have to issue 1.7.5 very
> > soon...
> 
> Yes, since the patches are applied, I've not seen any new issue.

Thanks for testing and confirming!

Willy



Re: [PATCH] BUG/MEDIUM: buffers: Fix how input/output data are injected into buffers

2017-03-31 Thread Willy Tarreau
On Fri, Mar 31, 2017 at 02:45:15PM +0200, Christopher Faulet wrote:
> > These situations cause trouble when not using the proper arithmetics. Either
> > all the computations are made without wrapping, or all are made with 
> > wrapping.
> > Any mix of the two causes issues.
> > 
> 
> Yes, of course. I implicitly considered them as special cases of the
> wrapping ones. From the moment you use pointers (bi_ptr/bi_end and
> bo_ptr/bo_end), it is easier. But, it never hurts to mention it :)

In fact it depends how the wrapping is detected. If you just want to measure
a length, wrapping can become cumbersome. For example, detecting this case is
trivial without wrapping :

 buf->p+buf->i == buf->data+buf->size

But it's difficult with wrapping as it's more or less bi_end(p) == data &&
i > 0. Detecting that you've reached the right side of the buffer when
reading one char at a time also makes this not always convenient. In fact
it sometimes happens when you need to add an extra hypothetical length to
the equation to check if a string can be appended.
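
(For readers following along, the wrapping helper in question, bi_end() from
include/common/buffer.h, is essentially the following:)

    /* returns a pointer to the end of the input data, wrapping back to the
     * start of the buffer when p + i runs past the end */
    static inline char *bi_end(const struct buffer *b)
    {
        char *ret = b->p + b->i;

        if (ret >= b->data + b->size)
            ret -= b->size;
        return ret;
    }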

So there are many places where we purposely don't use them (no need to
spend cycles taking care of wrapping when just comparing two pointers,
or where not wrapping leaves less cases to consider) so I preferred to
mention it here so that other contributors reading this right now or
reading the archives later know that these two methods must never be
mixed and that these special cases can easily become traps.

Willy



Re: configuration.txt questions

2017-03-31 Thread Willy Tarreau
On Fri, Mar 31, 2017 at 02:59:37PM +0300, Jarno Huuskonen wrote:
> On Fri, Mar 31, Jarno Huuskonen wrote:
> > First I'm attaching a patch that corrects ]) order for urlp_val
> > and adds 'Example:' string to
> > src_clr_gpc0,src_inc_gpc0,sc2_clr_gpc0,sc2_inc_gpc0,ssl_c_sha1
> > (I assume that Example: is what generates the example formatting in
> > html/dconv documentation).
> 
> This time with correct attachment(patch).

Now merged, and fixed the mangled subject line / commit message :

 Subject: [PATCH] DOC: urlp_val missing ) DOC:
   src_clr_gpc0,src_inc_gpc0,sc2_clr_gpc0,sc2_inc_gpc0,ssl_c_sha1 Example:
   string.

:-)

Thanks!
Willy




Re: 100% cpu usage with compression in haproxy.cfg

2017-03-31 Thread Cyril Bonté
Hi,

> De: "Willy Tarreau" 
> À: "Cyril Bonté" 
> Cc: "Christopher Faulet" , haproxy@formilux.org, 
> nos...@mrietzler.de
> Envoyé: Vendredi 31 Mars 2017 14:44:41
> Objet: Re: 100% cpu usage with compression in haproxy.cfg
> 
> Hi guys,
> 
> On Thu, Mar 30, 2017 at 12:12:44PM +0200, Cyril Bonté wrote:
> > From my first tests, it fixes the issue.
> > This morning, I had the issue on 3 connections. I've applied the
> > patches on
> > this instance, let's wait 24h to see if it happens again.
> 
> Now that the 24h observation period is elapsed, I consider the
> patches
> fine and I've merged them. I'll probably have to issue 1.7.5 very
> soon...

Yes, since the patches are applied, I've not seen any new issue.

Cheers !
Cyril



Re: [PATCH] BUG/MEDIUM: buffers: Fix how input/output data are injected into buffers

2017-03-31 Thread Christopher Faulet

Le 31/03/2017 à 14:26, Willy Tarreau a écrit :

On Fri, Mar 31, 2017 at 11:29:43AM +0200, Christopher Faulet wrote:

Willy,

I tagged this patch as a bug, but I haven't found a way to hit it for now. It
can be backported or not, as you wish.


Thanks Christopher. I don't know either how to trigger it since the only
problematic case I've found is the one where input wraps, which doesn't
happen when we're processing data. However I agree that leaving such a bug
behind us is scary and a future fix might rely on this to work correctly
so I'd rather backport the fix anyway.

I checked your solution and for me it works fine in all situations. By
the way, just FYI, there aren't 3 cases to consider for a buffer, but at
least 5; here are the two additional ones (which your patch properly handles):
  - input may not wrap but end exactly at the end of the buffer, making
buf->p+buf->i == buf->data+buf->size, but bi_end() == buf->data. This
is a common error case when computing input lengths.

  - the output data may end at the end of the buffer and the input be placed
at the beginning, causing buf->p to equal buf->data. Similarly it's a
common error case when computing output data length.

These situations cause trouble when not using the proper arithmetics. Either
all the computations are made without wrapping, or all are made with wrapping.
Any mix of the two causes issues.



Yes, of course. I implicitly considered them as special cases of the 
wrapping ones. From the moment you use pointers (bi_ptr/bi_end and 
bo_ptr/bo_end), it is easier. But, it never hurts to mention it :)


Thanks
--
Christopher Faulet



Re: 100% cpu usage with compression in haproxy.cfg

2017-03-31 Thread Willy Tarreau
Hi guys,

On Thu, Mar 30, 2017 at 12:12:44PM +0200, Cyril Bonté wrote:
> From my first tests, it fixes the issue.
> This morning, I had the issue on 3 connections. I've applied the patches on
> this instance, let's wait 24h to see if it happens again.

Now that the 24h observation period is elapsed, I consider the patches
fine and I've merged them. I'll probably have to issue 1.7.5 very soon...

Thanks very much!
Willy



Re: [PATCH] BUG/MINOR: http: Fix conditions to clean up a txn and to handle the next request

2017-03-31 Thread Willy Tarreau
On Fri, Mar 31, 2017 at 11:36:22AM +0200, Christopher Faulet wrote:
> Willy,
> 
> Another fix (with some cleanup in other patches). The first one (and probably
> the second one) can be backported. But I don't know if this is mandatory. It
> is really tricky to find conditions where it could be a problem.

Thanks. I'd rather not touch this area for now in stable releases given
the surprises we've had recently, and maybe we'll backport them later if
we have to deal with yet another bug involving how the end of the
transaction is handled.

Cheers
Willy



Re: [PATCH] BUG/MEDIUM: buffers: Fix how input/output data are injected into buffers

2017-03-31 Thread Willy Tarreau
On Fri, Mar 31, 2017 at 11:29:43AM +0200, Christopher Faulet wrote:
> Willy,
> 
> I tagged this patch as a bug, but I haven't found a way to hit it for now. It
> can be backported or not, as you wish.

Thanks Christopher. I don't know either how to trigger it since the only
problematic case I've found is the one where input wraps, which doesn't
happen when we're processing data. However I agree that leaving such a bug
behind us is scary and a future fix might rely on this to work correctly
so I'd rather backport the fix anyway.

I checked your solution and for me it works fine in all situations. By
the way, just FYI, there aren't 3 cases to consider for a buffer, but at
least 5; here are the two additional ones (which your patch properly handles):
  - input may not wrap but end exactly at the end of the buffer, making
buf->p+buf->i == buf->data+buf->size, but bi_end() == buf->data. This
is a common error case when computing input lengths.

  - the output data may end at the end of the buffer and the input be placed
at the beginning, causing buf->p to equal buf->data. Similarly it's a
common error case when computing output data length.

These situations cause trouble when not using the proper arithmetics. Either
all the computations are made without wrapping, or all are made with wrapping.
Any mix of the two causes issues.

Thanks!
Willy



Re: configuration.txt questions

2017-03-31 Thread Jarno Huuskonen
On Fri, Mar 31, Jarno Huuskonen wrote:
> First I'm attaching a patch that corrects ]) order for urlp_val
> and adds 'Example:' string to
> src_clr_gpc0,src_inc_gpc0,sc2_clr_gpc0,sc2_inc_gpc0,ssl_c_sha1
> (I assume that Example: is what generates the example formatting in
> html/dconv documentation).

This time with correct attachment(patch).

-Jarno

-- 
Jarno Huuskonen
From ce4ac377ee917cb66c8ffb123e08d4ddf6d611cd Mon Sep 17 00:00:00 2001
From: Jarno Huuskonen 
Date: Thu, 30 Mar 2017 09:19:45 +0300
Subject: [PATCH] DOC: urlp_val missing ) DOC:
 src_clr_gpc0,src_inc_gpc0,sc2_clr_gpc0,sc2_inc_gpc0,ssl_c_sha1 Example:
 string.

---
 doc/configuration.txt | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 09aaf1d..0ba2b02 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -13383,6 +13383,7 @@ sc2_clr_gpc0([]) : integer
   typically used as a second ACL in an expression in order to mark a connection
   when a first ACL was verified :
 
+  Example:
 # block if 5 consecutive requests continue to come faster than 10 sess
 # per second, and reset the counter as soon as the traffic slows down.
 acl abuse sc0_http_req_rate gt 10
@@ -13483,6 +13484,7 @@ sc2_inc_gpc0([]) : integer
   return 1. This is typically used as a second ACL in an expression in order
   to mark a connection when a first ACL was verified :
 
+  Example:
 acl abuse sc0_http_req_rate gt 10
 acl kill  sc0_inc_gpc0 gt 0
 tcp-request connection reject if abuse kill
@@ -13585,6 +13587,7 @@ src_clr_gpc0([]) : integer
   second ACL in an expression in order to mark a connection when a first ACL
   was verified :
 
+  Example:
 # block if 5 consecutive requests continue to come faster than 10 sess
 # per second, and reset the counter as soon as the traffic slows down.
 acl abuse src_http_req_rate gt 10
@@ -13667,6 +13670,7 @@ src_inc_gpc0([]) : integer
   This is typically used as a second ACL in an expression in order to mark a
   connection when a first ACL was verified :
 
+  Example:
 acl abuse src_http_req_rate gt 10
 acl kill  src_inc_gpc0 gt 0
 tcp-request connection reject if abuse kill
@@ -13870,6 +13874,7 @@ ssl_c_sha1 : binary
   Note that the output is binary, so if you want to pass that signature to the
   server, you need to encode it in hex or base64, such as in the example below:
 
+  Example:
  http-request set-header X-SSL-Client-SHA1 %[ssl_c_sha1,hex]
 
 ssl_c_sig_alg : string
@@ -14833,7 +14838,7 @@ url_param([[,]]) : string
   # match http://example.com/foo;JSESSIONID=some_id
   stick on urlp(JSESSIONID,;)
 
-urlp_val([[,])] : integer
+urlp_val([[,]]) : integer
   See "urlp" above. This one extracts the URL parameter  in the request
   and converts it to an integer value. This can be used for session stickiness
   based on a user ID for example, or with ACLs to match a page number or price.
-- 
1.8.3.1



HAproxy reload

2017-03-31 Thread Preeti Saini
Hi,

I am currently using haproxy and we need to make it production ready.
I have a few questions.

Currently we are maintaining a single haproxy file, with one frontend, some
acl rules and multiple backends. I want to split the backends into multiple
files. Is it possible to selectively include backends? Or do we always need
to have a backend if we have an entry in an acl?

Is there a good way to reload the config files if we add or modify only a
particular backend file?

What about running multiple instances of the same haproxy, where each
application maintains its own config and reloads it independently? Is that a
good practice?
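
(One way to do the split, as a sketch: haproxy accepts -f several times, and
since 1.7 a -f pointing at a directory loads the files it contains in lexical
order, so per-backend files can live in their own directory. The paths below
are hypothetical; -sf performs a soft reload against the old process:)

    haproxy -f /etc/haproxy/haproxy.cfg -f /etc/haproxy/conf.d/ \
            -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)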

I would appreciate your advice on this.

Regards,
Preeti


[PATCH] BUG/MINOR: http: Fix conditions to clean up a txn and to handle the next request

2017-03-31 Thread Christopher Faulet

Willy,

Another fix (with some cleanup in other patches). The first one (and 
probably the second one) can be backported, but I don't know if this is 
mandatory. It is really tricky to find conditions where it could be a 
problem.


Thanks
--
Christopher Faulet
From 0cdf21fc9678ce44eddf10efb4fdaf737e2115b3 Mon Sep 17 00:00:00 2001
From: Christopher Faulet 
Date: Tue, 28 Mar 2017 11:51:33 +0200
Subject: [PATCH 1/4] BUG/MINOR: http: Fix conditions to clean up a txn and to
 handle the next request

To finish an HTTP transaction and to start the new one, we check, among other
things, that there is enough space in the response buffer to eventually inject a
message during the parsing of the next request. Because these messages can reach
the maximum buffer size, it is mandatory to have an empty response
buffer. Remaining input data are trimmed during the txn cleanup (in
http_reset_txn), so we just need to check that the output data were flushed.

The current implementation depends on channel_congested, which checks whether the
reserved area is available. That's of course not good enough. There are other
tests on the response buffer in http_wait_for_request. But conditions to move on
are almost the same. So, we can imagine scenarios where some output data
remaining in the response buffer during the request parsing prevent any message
injection.

To fix this bug, we just wait until output data are flushed before cleaning up
the HTTP txn (ie. s->res.buf->o == 0). In addition, in http_reset_txn we realign
the response buffer (note the buffer is empty at this step).

Thanks to these changes, there is no more need to set CF_EXPECT_MORE on the
response channel in http_end_txn_clean_session. And more importantly, there is no
more need to check the response buffer state in http_wait_for_request. This
removes a workaround on response analysers to handle HTTP pipelining.

This patch can be backported in 1.7, 1.6 and 1.5.
---
 src/proto_http.c | 48 +++-
 src/stream.c | 12 
 2 files changed, 7 insertions(+), 53 deletions(-)

diff --git a/src/proto_http.c b/src/proto_http.c
index 2d97fbe..24d034a 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -2649,29 +2649,6 @@ int http_wait_for_request(struct stream *s, struct channel *req, int an_bit)
 buffer_slow_realign(req->buf);
 		}
 
-		/* Note that we have the same problem with the response ; we
-		 * may want to send a redirect, error or anything which requires
-		 * some spare space. So we'll ensure that we have at least
-		 * maxrewrite bytes available in the response buffer before
-		 * processing that one. This will only affect pipelined
-		 * keep-alive requests.
-		 */
-		if ((txn->flags & TX_NOT_FIRST) &&
-		unlikely(!channel_is_rewritable(&s->res) ||
-			 bi_end(s->res.buf) < b_ptr(s->res.buf, txn->rsp.next) ||
-			 bi_end(s->res.buf) > s->res.buf->data + s->res.buf->size - global.tune.maxrewrite)) {
-			if (s->res.buf->o) {
-if (s->res.flags & (CF_SHUTW|CF_SHUTW_NOW|CF_WRITE_ERROR|CF_WRITE_TIMEOUT))
-	goto failed_keep_alive;
-/* don't let a connection request be initiated */
-channel_dont_connect(req);
-s->res.flags &= ~CF_EXPECT_MORE; /* speed up sending a previous response */
-s->res.flags |= CF_WAKE_WRITE;
-s->res.analysers |= an_bit; /* wake us up once it changes */
-return 0;
-			}
-		}
-
 		if (likely(msg->next < req->buf->i)) /* some unparsed data are available */
 			http_msg_analyzer(msg, &txn->hdr_idx);
 	}
@@ -5292,20 +5269,6 @@ void http_end_txn_clean_session(struct stream *s)
 		s->res.flags |= CF_NEVER_WAIT;
 	}
 
-	/* if the request buffer is not empty, it means we're
-	 * about to process another request, so send pending
-	 * data with MSG_MORE to merge TCP packets when possible.
-	 * Just don't do this if the buffer is close to be full,
-	 * because the request will wait for it to flush a little
-	 * bit before proceeding.
-	 */
-	if (s->req.buf->i) {
-		if (s->res.buf->o &&
-		!buffer_full(s->res.buf, global.tune.maxrewrite) &&
-		bi_end(s->res.buf) <= s->res.buf->data + s->res.buf->size - global.tune.maxrewrite)
-			s->res.flags |= CF_EXPECT_MORE;
-	}
-
 	/* we're removing the analysers, we MUST re-enable events detection.
 	 * We don't enable close on the response channel since it's either
 	 * already closed, or in keep-alive with an idle connection handler.
@@ -5686,13 +5649,13 @@ int http_resync_states(struct stream *s)
 		 * possibly killing the server connection and reinitialize
 		 * a fresh-new transaction, but only once we're sure there's
 		 * enough room in the request and response buffer to process
-		 * another request. The request buffer must not hold any
-		 * pending output data and the request buffer must not have
-		 * output data occupying the reserve.
+		 * another request. They must not hold any pending output data
+		 * and the response buffer must be realigned
+		 * (realign is done in http_end_txn_clean_session).
 		

[PATCH] BUG/MEDIUM: buffers: Fix how input/output data are injected into buffers

2017-03-31 Thread Christopher Faulet

Willy,

I tagged this patch as a bug, but I haven't found a way to hit it for now. 
It can be backported or not, as you wish.


--
Christopher Faulet
From 4ffdfbed993eaeb6c777c148e1eb6a712bfc9e18 Mon Sep 17 00:00:00 2001
From: Christopher Faulet 
Date: Wed, 29 Mar 2017 11:58:28 +0200
Subject: [PATCH] BUG/MEDIUM: buffers: Fix how input/output data are injected
 into buffers

The function buffer_contig_space is buggy and could lead to pernicious bugs
(never hit until now, AFAIK). This function should return the number of bytes
that can be written into the buffer at once (without wrapping).

First, this function is used to inject input data (bi_putblk) and to inject
output data (bo_putblk and bo_inject). But there is no context. So it cannot
decide where the contiguous space should be placed. For input data, it should be
after bi_end(buf) (ie, buf->p + buf->i modulo wrapping calculation). For output
data, it should be after bo_end(buf) (ie, buf->p) and input data are assumed to
not exist (else there is no space at all).

Then, considering we need to inject input data, this function does not always
return the right value. And when we need to inject output data, we must be sure
to have no input data at all (buf->i == 0), else the result can also be wrong
(but this is the caller's responsibility, so everything should be fine here).

The buffer can be in 3 different states:

 1) no wrapping

           <---- o ----><----- i ---->
  +------+-----------+------------+------+
  |      |ooooooooooo|iiiiiiiiiiii|      |
  +------+-----------+------------+------+
                     ^            ^      ^
                     p            l      r

 2) input wrapping

  ...-- i -->          <---- o ----><-- i --...
  +---------+----------+-----------+-----------+
  |iiiiiiiii|          |ooooooooooo|iiiiiiiiiii|
  +---------+----------+-----------+-----------+
            ^                      ^           ^
            l                      p           r

 3) output wrapping

  ...-- o --><---- i ---->          <-- o --...
  +---------+------------+----------+---------+
  |ooooooooo|iiiiiiiiiiii|          |ooooooooo|
  +---------+------------+----------+---------+
            ^            ^          ^
            p            l          r

buffer_contig_space returns (r - l). Cases 1 and 3 are correctly
handled. But for the second case, r is wrong: it points to the buffer's end
(buf->data + buf->size). It should be bo_end(buf) (ie, buf->p - buf->o).

To fix the bug, the function has been split. Now, bi_contig_space and
bo_contig_space should be used to know the contiguous space available to insert,
respectively, input data and output data. For bo_contig_space, input data are
assumed to not exist. And the right version is used, depending on what we want
to do.

In addition, to clarify the buffer's API, buffer_realign does not return a value
anymore. So it has the same API as buffer_slow_realign.

This patch can be backported in 1.7, 1.6 and 1.5.
---
 include/common/buffer.h | 60 +++--
 src/channel.c   |  6 ++---
 2 files changed, 41 insertions(+), 25 deletions(-)

diff --git a/include/common/buffer.h b/include/common/buffer.h
index ce3eb40..3a6dfd7 100644
--- a/include/common/buffer.h
+++ b/include/common/buffer.h
@@ -156,6 +156,41 @@ static inline int bo_contig_data(const struct buffer *b)
 	return b->o;
 }
 
+/* Return the amount of bytes that can be written into the input area at once
+ * including reserved space which may be overwritten (this is the caller
+ * responsibility to know if the reserved space is protected or not).
+*/
+static inline int bi_contig_space(const struct buffer *b)
+{
+	const char *left, *right;
+
+	left  = bi_end(b);
+	right = bo_ptr(b);
+
+	if (left >= right)
+		right = b->data + b->size;
+
+	return (right - left);
+}
+
+/* Return the amount of bytes that can be written into the output area at once
+ * including reserved space which may be overwritten (this is the caller
+ * responsibility to know if the reserved space is protected or not). Input data
+ * are assumed to not exist.
+*/
+static inline int bo_contig_space(const struct buffer *b)
+{
+	const char *left, *right;
+
+	left  = bo_end(b);
+	right = bo_ptr(b);
+
+	if (left >= right)
+		right = b->data + b->size;
+
+	return (right - left);
+}
+
 /* Return the buffer's length in bytes by summing the input and the output */
 static inline int buffer_len(const struct buffer *buf)
 {
@@ -226,21 +261,6 @@ static inline int buffer_contig_area(const struct buffer *buf, const char *start
 	return count;
 }
 
-/* Return the amount of bytes that can be written into the buffer at once,
- * including reserved space which may be overwritten.
- */
-static inline int buffer_contig_space(const struct buffer *buf)
-{
-	const char *l

Minor HTTP patches

2017-03-31 Thread Christopher Faulet

Hi Willy,

Following my recent patches on HTTP/1.0 responses without content-length 
when the compression filter is enabled, here are 2 small patches. The first 
one is a small code cleanup and the second one adds handy debug messages.


Thanks,
--
Christopher Faulet
From c998beb94b02be0f07adc950400c495f60776b91 Mon Sep 17 00:00:00 2001
From: Christopher Faulet 
Date: Thu, 30 Mar 2017 11:21:53 +0200
Subject: [PATCH 1/2] MINOR: http: remove useless check on HTTP_MSGF_XFER_LEN
 for the request
X-Bogosity: Ham, tests=bogofilter, spamicity=0.00, version=1.2.4

The flag HTTP_MSGF_XFER_LEN is always set for an HTTP request because we always
know the body length. So there is no need to do checks on it.
---
 src/proto_http.c | 39 +++
 1 file changed, 15 insertions(+), 24 deletions(-)

diff --git a/src/proto_http.c b/src/proto_http.c
index 487c0fc..a00ea78 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -3122,7 +3122,7 @@ int http_wait_for_request(struct stream *s, struct channel *req, int an_bit)
 	/* set TE_CHNK and XFER_LEN only if "chunked" is seen last */
 	while (http_find_header2("Transfer-Encoding", 17, req->buf->p, &txn->hdr_idx, &ctx)) {
 		if (ctx.vlen == 7 && strncasecmp(ctx.line + ctx.val, "chunked", 7) == 0)
-			msg->flags |= (HTTP_MSGF_TE_CHNK | HTTP_MSGF_XFER_LEN);
+			msg->flags |= HTTP_MSGF_TE_CHNK;
 		else if (msg->flags & HTTP_MSGF_TE_CHNK) {
 			/* chunked not last, return badreq */
 			goto return_bad_req;
@@ -3158,7 +3158,7 @@ int http_wait_for_request(struct stream *s, struct channel *req, int an_bit)
 			goto return_bad_req; /* already specified, was different */
 		}
 
-		msg->flags |= HTTP_MSGF_CNT_LEN | HTTP_MSGF_XFER_LEN;
+		msg->flags |= HTTP_MSGF_CNT_LEN;
 		msg->body_len = msg->chunk_len = cl;
 	}
 
@@ -4256,8 +4256,7 @@ static int http_apply_redirect_rule(struct redirect_rule *rule, struct stream *s
 	/* let's log the request time */
 	s->logs.tv_request = now;
 
-	if ((req->flags & HTTP_MSGF_XFER_LEN) &&
-	((!(req->flags & HTTP_MSGF_TE_CHNK) && !req->body_len) || (req->msg_state == HTTP_MSG_DONE)) &&
+	if (((!(req->flags & HTTP_MSGF_TE_CHNK) && !req->body_len) || (req->msg_state == HTTP_MSG_DONE)) &&
 	((txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_SCL ||
 	 (txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_KAL)) {
 		/* keep-alive possible */
@@ -4847,22 +4846,20 @@ int http_process_request(struct stream *s, struct channel *req, int an_bit)
 		req->analysers |= AN_REQ_HTTP_BODY;
 	}
 
-	if (msg->flags & HTTP_MSGF_XFER_LEN) {
-		req->analysers &= ~AN_REQ_FLT_XFER_DATA;
-		req->analysers |= AN_REQ_HTTP_XFER_BODY;
+	req->analysers &= ~AN_REQ_FLT_XFER_DATA;
+	req->analysers |= AN_REQ_HTTP_XFER_BODY;
 #ifdef TCP_QUICKACK
-		/* We expect some data from the client. Unless we know for sure
-		 * we already have a full request, we have to re-enable quick-ack
-		 * in case we previously disabled it, otherwise we might cause
-		 * the client to delay further data.
-		 */
-		if ((sess->listener->options & LI_O_NOQUICKACK) &&
-		cli_conn && conn_ctrl_ready(cli_conn) &&
-		((msg->flags & HTTP_MSGF_TE_CHNK) ||
-		 (msg->body_len > req->buf->i - txn->req.eoh - 2)))
-			setsockopt(cli_conn->t.sock.fd, IPPROTO_TCP, TCP_QUICKACK, &one, sizeof(one));
+	/* We expect some data from the client. Unless we know for sure
+	 * we already have a full request, we have to re-enable quick-ack
+	 * in case we previously disabled it, otherwise we might cause
+	 * the client to delay further data.
+	 */
+	if ((sess->listener->options & LI_O_NOQUICKACK) &&
+	cli_conn && conn_ctrl_ready(cli_conn) &&
+	((msg->flags & HTTP_MSGF_TE_CHNK) ||
+	 (msg->body_len > req->buf->i - txn->req.eoh - 2)))
+		setsockopt(cli_conn->t.sock.fd, IPPROTO_TCP, TCP_QUICKACK, &one, sizeof(one));
 #endif
-	}
 
 	/*
 	 * OK, that's finished for the headers. We have done what we *
@@ -4871,12 +4868,6 @@ int http_process_request(struct stream *s, struct channel *req, int an_bit)
 	req->analyse_exp = TICK_ETERNITY;
 	req->analysers &= ~an_bit;
 
-	/* if the server closes the connection, we want to immediately react
-	 * and close the socket to save packets and syscalls.
-	 */
-	if (!(req->analysers & AN_REQ_HTTP_XFER_BODY))
-		s->si[1].flags |= SI_FL_NOHALF;
-
 	s->logs.tv_request = now;
 	/* OK let's go on with the BODY now */
 	return 1;
-- 
2.9.3

From 776963e253b229f8e4749d8c4e22bd01dea6c7aa Mon Sep 17 00:00:00 2001
From: Christopher Faulet 
Date: Thu, 30 Mar 2017 11:33:44 +0200
Subject: [PATCH 2/2] MINOR: http: Add debug messages when HTTP body analyzers
 are called
X-Bogosity: Ham, tests=bogofilter, spamicity=0.00, version=1.2.4

---
 src/proto_http.c | 25 +
 1 file changed, 25 insertions(+)

diff --git a/src/proto_http.c b/src/proto_http.c
index a00ea78..2d97fbe 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -5630,6 +5630,13 @@ int http_resync_states(struct stream *s)

Re: stick-table, show table, use field

2017-03-31 Thread Arnall

Thanks Bryan!

I had searched the management guides, but got stuck at "show table  [ 
data.   ] | [ key  ]" :)


BTW the doc says 2 things :
1] "their size in maximum possible number of entries, and the number of 
entries currently in use."


it seems that's, in reality, the size of the table in bytes, not really 
a number of entries. Size is still the same when adding or removing 
datatype.
stick-table size 50m => sized:  52428800 no matter the number of 
data_type per key.


2] their type (currently zero, always IP)

I have "type: string" when I set "type string". Maybe I misunderstand the 
sentence?


echo "show table " | sudo socat stdio  /run/haproxy/admin.sock
# table: web_plain, type: ip, size:52428800, used:0
# table: dummy_stick_table, type: string, size:52428800, used:0
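
(For the record, the same socket also accepts key and data filters; a
hypothetical example, assuming the table stores conn_cur:)

    echo "show table web_plain data.conn_cur gt 0" | sudo socat stdio /run/haproxy/admin.sock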

Thanks

Le 30/03/2017 à 22:50, Bryan Talbot a écrit :


On Mar 30, 2017, at 10:19 AM, Arnall wrote:


Hello everyone,

when using socat to show a stick-table i have lines like this :

# table: dummy_table, type: ip, size:52428800, used:33207

0x7f202f800720: key=aaa.bbb.ccc.ddd use=0 exp=599440 gpc0=0 
conn_rate(5000)=19 conn_cur=0 http_req_rate(1)=55


../...

I understand all the fields except 2 :

used:33207

use=0

I found nothing in the doc, any idea ?




I believe that these are documented in the management guides and not 
the config guides.


https://cbonte.github.io/haproxy-dconv/1.6/management.html#9.2-show%20table

Here, I think that ‘used’ for the table is the number of entries that 
currently exist in the table, and ‘use’ for an entry is the number of 
sessions that concurrently match that entry.


-Bryan





Re: [Patches] TLS methods configuration reworked

2017-03-31 Thread Emeric Brun
Hi Emmanuel,

On 03/30/2017 07:44 PM, Emmanuel Hocdet wrote:
> The right patch series ...
> 
>> Le 30 mars 2017 à 19:00, Emmanuel Hocdet  a écrit :
>>
>> Hi Emeric, Willy
>>
>> Rework of the patch series to match the default-server requirement and the 
>> talk with Willy.
>> It should be easier to follow.
>>
>>
>>> Le 27 mars 2017 à 16:15, Emeric Brun  a écrit :
>>>
>>> Hi Manu,
>>>

 What kind of API and dependency? To generate haproxy configuration?
 Generating min-tlsv10 or ssl-min tlsv10 will not change anything.
>>>
>>> For word based API parser/generator, as the one embedded in haproxy's 
>>> hardware proxy, it does :)

>>
>> I pushed the min-/max- parameter at the end to easily change with your needs.
>>
> 

This is not what I expected. The haproxy appliances' API parser is word based.

To be clear, it is much easier to maintain if there are only two attributes, 
'ssl-min-ver' and 'ssl-max-ver', with a set of possible values (sslv3, tlsv10, ...)

So what I expect is only two configuration keywords, 'ssl-max-ver' and 
'ssl-min-ver', which should take an argument containing the protocol version in 
string format:

i.e.

bind 0.0.0.0:443 ssl crt my.pem ssl-min-ver tlsv10 ssl-max-ver tlsv13


R,
Emeric



configuration.txt questions

2017-03-31 Thread Jarno Huuskonen
Hi,

Here's couple of questions/suggestions for configuration.txt.

First I'm attaching a patch that corrects ]) order for urlp_val
and adds 'Example:' string to
src_clr_gpc0,src_inc_gpc0,sc2_clr_gpc0,sc2_inc_gpc0,ssl_c_sha1
(I assume that Example: is what generates the example formatting in
html/dconv documentation).

The html/dconv documentation doesn't seem to correctly parse some
fetches/converters:
7.3.1 converters:
set-var()
unset-var()

7.3.2
ipv4()
ipv6()

I was thinking that it would be nice to have a keyword matrix for
fetches / converters (similar to what's already available for proxy
keywords: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.1)
Maybe this could be as simple as a table with the keyword name and a link
(html/dconv) to a more detailed description. Something like:
converters:
| add  | and| base64 |
| bool | bytes([,]) | cpl|
...
layer 4 fetches:
| be_id| dst  | dst_conn   |
| dst_is_local | dst_port | fc_rtt() |
...
layer 5 fetches:
...

This would make it faster/easier to scan what converters/fetches
are available. Thoughts ?

Some fetches are deprecated (for example cook), is the cook() ACL also
deprecated ?

Do these ACLs have same meaning ?
cook / req.cook / req.cook -m str
cook_beg / req.cook -m beg
cook_dir / req.cook -m dir
cook_dom / req.cook -m dom
cook_end / req.cook -m end
cook_len / req.cook -m len
cook_reg / req.cook -m reg
cook_sub / req.cook -m sub

If they do, then does it make sense to have both formats in
configuration.txt ?
(http://cbonte.github.io/haproxy-dconv/1.8/configuration.html#7.3.6-req.cook)
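
(For concreteness, a hypothetical pair of rules that the question treats as
equivalent:)

    acl lang_en cook_beg(lang) en
    acl lang_en req.cook(lang) -m beg en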

-Jarno

-- 
Jarno Huuskonen
From 2a7aea981d8ea482830317f6caf6b0fc93c90dbc Mon Sep 17 00:00:00 2001
From: Jarno Huuskonen 
Date: Thu, 30 Mar 2017 09:19:45 +0300
Subject: [PATCH] DOC: urlp_val missing ), src_inc_gpc0/src_clr_gpc0 add
 Example: string.
X-Bogosity: Ham, tests=bogofilter, spamicity=0.00, version=1.2.4

---
 doc/configuration.txt | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 09aaf1d..795f5ed 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -13583,8 +13583,9 @@ src_clr_gpc0([]) : integer
   designated stick-table, and returns its previous value. If the address is not
   found, an entry is created and 0 is returned. This is typically used as a
   second ACL in an expression in order to mark a connection when a first ACL
-  was verified :
+  was verified.
 
+  Example:
 # block if 5 consecutive requests continue to come faster than 10 sess
 # per second, and reset the counter as soon as the traffic slows down.
 acl abuse src_http_req_rate gt 10
@@ -13665,8 +13666,9 @@ src_inc_gpc0([]) : integer
   designated stick-table, and returns its new value. If the address is not
   found, an entry is created and 1 is returned. See also sc0/sc2/sc2_inc_gpc0.
   This is typically used as a second ACL in an expression in order to mark a
-  connection when a first ACL was verified :
+  connection when a first ACL was verified.
 
+  Example:
 acl abuse src_http_req_rate gt 10
 acl kill  src_inc_gpc0 gt 0
 tcp-request connection reject if abuse kill
@@ -14833,7 +14835,7 @@ url_param([[,]]) : string
   # match http://example.com/foo;JSESSIONID=some_id
   stick on urlp(JSESSIONID,;)
 
-urlp_val([[,])] : integer
+urlp_val([[,]]) : integer
   See "urlp" above. This one extracts the URL parameter  in the request
   and converts it to an integer value. This can be used for session stickiness
   based on a user ID for example, or with ACLs to match a page number or price.
-- 
1.8.3.1