Re: WHY they are different when checking concurrent limit?

2015-11-09 Thread Zhou,Qingzhi
Hi:
Thanks very much.
But I think we can use listener_full instead of limit_listener if we want
to wake up the listener when a connection is closed, like at the
beginning of listener_accept:

 if (unlikely(l->nbconn >= l->maxconn)) {
     listener_full(l);
     return;
 }


WHY not use listener_full here?

Thanks,
zhou



On 15/11/10 at 3:30 PM, "Willy Tarreau"  wrote:

>Hi,
>
>On Mon, Nov 09, 2015 at 12:46:57PM +, Zhou,Qingzhi wrote:
>> if (unlikely(actconn >= global.maxconn) && !(l->options & LI_O_UNLIMITED)) {
>>     limit_listener(l, &global_listener_queue);
>>     task_schedule(global_listener_queue_task, tick_add(now_ms, 1000)); /* try again in 1 second */
>>     return;
>> }
>>
>> if (unlikely(p && p->feconn >= p->maxconn)) {
>>     limit_listener(l, &p->listener_queue);
>>     return;
>> }
>>
>> My question is: why is task_schedule not called again here? Any purpose?
>> To my knowledge, if the upper limit is reached, we should re-schedule the
>> task with an expiry time, and the listener will wake up when the task runs.
>
>No, because if we're limited by the frontend itself, then after we disable
>the listener, we will automatically be woken up once a connection is
>released there. It's when the global maxconn is reached that we want to
>reschedule, because there are some situations where we cannot reliably
>detect whether certain connections impacting global.maxconn have been
>released (e.g. outgoing peers connections and Lua cosockets here).
>
>Regards,
>Willy
>



Re: WHY they are different when checking concurrent limit?

2015-11-09 Thread Willy Tarreau
Hi,

On Mon, Nov 09, 2015 at 12:46:57PM +, Zhou,Qingzhi wrote:
> if (unlikely(actconn >= global.maxconn) && !(l->options & LI_O_UNLIMITED)) {
>     limit_listener(l, &global_listener_queue);
>     task_schedule(global_listener_queue_task, tick_add(now_ms, 1000)); /* try again in 1 second */
>     return;
> }
> 
> if (unlikely(p && p->feconn >= p->maxconn)) {
>     limit_listener(l, &p->listener_queue);
>     return;
> }
> 
> My question is: why is task_schedule not called again here? Any purpose?
> To my knowledge, if the upper limit is reached, we should re-schedule the
> task with an expiry time, and the listener will wake up when the task runs.

No, because if we're limited by the frontend itself, then after we disable
the listener, we will automatically be woken up once a connection is
released there. It's when the global maxconn is reached that we want to
reschedule, because there are some situations where we cannot reliably
detect whether certain connections impacting global.maxconn have been
released (e.g. outgoing peers connections and Lua cosockets here).

Regards,
Willy




Re: MINOR: Makefile deviceatlas

2015-11-09 Thread Willy Tarreau
Thanks David,

merged into 1.7 and 1.6.

Willy




Re: [PATCH] MEDIUM: mailer: try sending a mail up to 3 times

2015-11-09 Thread Willy Tarreau
On Tue, Nov 10, 2015 at 09:52:21AM +0900, Simon Horman wrote:
> I would slightly prefer if there was a more substantial comment in
> process_email_alert() noting that retry occurs 3 times.

Good point, I'll add this when merging.

> But regardless:
> Acked-by: Simon Horman 

Thanks Simon!

Willy




Re: [PATCH] MEDIUM: mailer: try sending a mail up to 3 times

2015-11-09 Thread Simon Horman
On Mon, Nov 09, 2015 at 11:11:53AM +0100, Willy Tarreau wrote:
> Hi Pieter,
> 
> > Hi Ben, Willy, Simon,
> > 
> > Ben, thanks for the review.
> > Hoping 'release pressure' has cleared for Willy, I'm resending the 
> > patch now, with your comments incorporated.
> > 
> > CC, to Simon as maintainer of mailers part so he can give approval (or 
> > not).
> > 
> > The original reservations I had when sending this patch still apply. 
> > See the "HOWEVER." part in the bottom mail.
> > 
> > Hoping it might get merged to improve mailer reliability, so no 
> > 'server down' email gets lost.
> > Thanks everyone for your time :) .
> 
> Looks good to me. Just waiting for Simon's approval.

I would slightly prefer if there was a more substantial comment in
process_email_alert() noting that retry occurs 3 times. But regardless:

Acked-by: Simon Horman 




RE: HAProxy with multiple CRL's

2015-11-09 Thread Harvan, Michael P
Thank you for the information. The Root CA did indeed have a CRL. I added that 
to my combined CRL, in the order local CRL, external root CRL, external 
intermediate CRL and everything worked.

Thanks again,
Mike

-Original Message-
From: Toft Alex (HEALTH AND SOCIAL CARE INFORMATION CENTRE) 
[mailto:alex.t...@hscic.gov.uk] 
Sent: Saturday, November 07, 2015 11:51 AM
To: haproxy@formilux.org
Subject: EXTERNAL: Re: HAProxy with multiple CRL's

That error message is somewhat unhelpful, as a colleague discovered recently.

HAProxy will check the chain right the way up, so CA>Cert needs handling 
differently to CA>IntermediateCA>Cert which is generally what you'll get from a 
commercial CA. In the latter situation the IntermediateCA will also have a CRL 
Distribution Point attribute and you need that CRL too (technically it's called 
an ARL and generated much less frequently). Root CA certs at the top of the 
chain should NOT have a CRLDP, but some people do make that mistake.

"ssl_sock_bind_verifycbk" in ssl_sock.c is where the verification is performed 
by handing off to OpenSSL. Sure enough, the OpenSSL error code is there c/o 
"err = X509_STORE_CTX_get_error(x_store)" but is never logged or otherwise used 
outside this function; instead the somewhat more generic message you received 
is output. General premise is that every link in the CA chain is checked by 
OpenSSL; if the cert has a CRLDP it will try to verify it ­ even for a badly 
configured root CA.

If you want to get rid of ARL checking higher up the chain but retain normal 
CRL checking, it's a very quick tweak. In ssl_sock.c, after these lines…

 /* check if CA error needs to be ignored */
 if (depth > 0) {

Add something like this:

 if (err == X509_V_ERR_UNABLE_TO_GET_CRL) {  /* error code 3 */
  // Uncomment the line below to output debug to stdout
  // printf("ARL could not be checked in the client CA chain at depth %d - activating hideously dirty hack :)\n", depth);
  ERR_clear_error();
  return 1;
 }


If the depth is greater than 0, you're verifying the revocation status of a CA 
certificate. If the error code is 3, it corresponds to 
X509_V_ERR_UNABLE_TO_GET_CRL as defined in OpenSSL's x509_vfy.h.

Alex T


From:  "Harvan, Michael P" 
Date:  Friday, 6 November 2015 at 21:03
To:  "haproxy@formilux.org" 
Subject:  HAProxy with multiple CRL's


Hi. I would like to configure HAProxy to allow multiple CRLs.


First, for testing I created my own CA. I created a server cert and signed it. 
I created a client cert and signed it. I created a CRL.


I set up HAProxy like this:
bind *:443 ssl crt server.crt ca-file my_ca.crt crl-file my_ca.crl

That worked fine. The SSL connection prompted me for a cert signed by the CA 
present in the ca.crt file. I could give it a valid cert, an expired cert, and a 
revoked cert, and they all worked as expected.

Then I tried integrating with an external CA for which I have a valid client 
cert, the CA cert, and the CA CRL. I concatenated the CA certs into a combined.crt 
file. Then I concatenated the CRL files into a combined.crl file, even though I 
have read posts that say that invalidates the CRL. There are other posts that 
say it should work.


My HAProxy config is now:
bind *:443 ssl crt server.crt ca-file combined.crt crl-file combined.crl

The interface will accept a client cert signed by my own CA. If I don't specify 
a CRL, it will also accept a client cert signed by the external CA.
But if I specify the crl-file, it will not accept the client cert from the 
external CA.

I tried using just the external CA cert and the external CRL:
bind *:443 ssl crt server.crt ca-file external.crt crl-file external.crl

That will not work either. The error in both cases is "SSL client CA chain 
cannot be verified", but I only get that if I specify the crl-file.

Any help is appreciated! Thanks.

Mike










Re: appsession replacement in 1.6

2015-11-09 Thread Aleksandar Lazic

Hi Sylvain Faivre.

Am 09-11-2015 17:31, schrieb Sylvain Faivre:

Hi,

Sorry I'm late on this discussion, following this thread :
https://marc.info/?l=haproxy&m=143345620219498&w=2

We are using appsession with HAProxy 1.5 like this:


Thanks ;-)


backend http
appsession JSESSIONID len 24 timeout 1h request-learn

We would like to be able to do the same thing with HAProxy 1.6.
If possible, we'd like to catch the JSESSIONID in cookies and
URL parameters, either in the request or in the response.

I tried to use the info posted previously by Aleksandar, but I
encountered several problems :

- using "stick on" in frontend section fails with :

'stick' ignored because frontend 'web' has no backend capability.


- using "stick store-response urlp(JSESSIONID,;)" in backend section
fails with :
'stick': fetch method 'urlp' extracts information from 'HTTP request 
headers', none of which is available for 'store-response'.



So, I've got this so far :

backend http

  stick-table type string len 24 size 10m expire 1h peers prod

  stick on urlp(JSESSIONID,;)
  stick on cookie(JSESSIONID)


Does this seem right ?
The help for "stick on" says it defines a request pattern, so I guess
this would not match a JSESSIONID cookie or URL parameter set in the
reply?


I have no Java server here to test these commands, but with them 
haproxy does not warn you about any config errors ;-).


###
backend dest01
  mode http

  stick-table type string len 24 size 10m expire 1h peers prod

  stick on urlp(JSESSIONID,;)
  stick on cookie(JSESSIONID)

  stick store-response cookie(JSESSIONID)
#  stick store-response res.hdr(JSESSIONID,;)

  stick store-request cookie(JSESSIONID)
  stick store-request urlp(JSESSIONID,;)

  server srv_dest01 dest01.com:80
###

I have not seen a good option to read the JSESSIONID from the response 
header in case it is not in a cookie.

Does anyone have an idea?!

Could you please post a full response header created by the app or 
appserver when it has detected that the client does not allow cookies?


cheers
Aleks
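One possible direction for the response side, sketched here untested: HAProxy 1.6's res.cook sample fetch reads a cookie value from the server's Set-Cookie response header, which may fit the store-response slot above. This is an assumption-laden sketch (the backend and server names are placeholders from the earlier example), not a verified config:

```
backend dest01
  mode http

  stick-table type string len 24 size 10m expire 1h peers prod

  # request side: URL parameter or request cookie
  stick on urlp(JSESSIONID,;)
  stick on cookie(JSESSIONID)

  # response side: the ID the server sets via Set-Cookie
  stick store-response res.cook(JSESSIONID)

  server srv_dest01 dest01.com:80
```

An ID returned only in some other response header (not Set-Cookie) is still not covered here; res.hdr combined with a converter would be the direction to explore for that case.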



Re: Debug mode not working?!

2015-11-09 Thread Aleksandar Lazic

Hi Nenad,

Am 09-11-2015 22:52, schrieb Nenad Merdanovic:

Hello Aleksandar,


Okay, after removing accept-proxy from

bind *:${HTTP_BIND_PORT} accept-proxy tfo

it now works as expected.


If you are using 'accept-proxy', HAProxy expects the payload to start
with a PROXY protocol header.

http://www.haproxy.org/download/1.6/doc/proxy-protocol.txt


Full Ack.

Maybe it would be helpful to write out something like:

PROXY protocol expected but not found

in src/connection.c:conn_recv_proxy() or in conn_fd_handler() when no 
PROXY protocol header arrives.


BR Aleks




Re: Debug mode not working?!

2015-11-09 Thread Nenad Merdanovic
Hello Aleksandar,

> Okay, after removing accept-proxy from
> 
> bind *:${HTTP_BIND_PORT} accept-proxy tfo
> 
> it now works as expected.

If you are using 'accept-proxy', HAProxy expects the payload to start
with a PROXY protocol header.

http://www.haproxy.org/download/1.6/doc/proxy-protocol.txt

Regards,
Nenad



Re: Debug mode not working?!

2015-11-09 Thread Aleksandar Lazic



Am 09-11-2015 22:21, schrieb Willy Tarreau:

On Mon, Nov 09, 2015 at 10:15:46PM +0100, Aleksandar Lazic wrote:


...
epoll_wait(3, {}, 200, 1000)= 0
epoll_wait(3, {{EPOLLIN, {u32=5, u64=5}}}, 200, 1000) = 1
accept4(5, {sa_family=AF_INET, sin_port=htons(52310),
sin_addr=inet_addr("127.0.0.1")}, [16], SOCK_NONBLOCK) = 7
setsockopt(7, SOL_TCP, TCP_NODELAY, [1], 4) = 0
accept4(5, 0x7ffca18022c0, [128], SOCK_NONBLOCK) = -1 EAGAIN (Resource
temporarily unavailable)
recvfrom(7, "GET / HTTP/1.1\r\nUser-Agent: curl/7.22.0
(x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4
libidn/1.23 librtmp/2.3\r\nHost: 127.0.0.1:7992\r\nAccept: 
*/*\r\n\r\n",

16384, MSG_PEEK, NULL, NULL) = 166
close(7)= 0
epoll_wait(3, {}, 200, 1000)= 0
...



It was aborted very early; I think it didn't even become a session,
though I could be wrong. You need a session for a minimum of debugging
to work.

(...)

Other terminal.


curl -vk http://127.0.0.1:7992/
* About to connect() to 127.0.0.1 port 7992 (#0)
*   Trying 127.0.0.1... connected
>GET / HTTP/1.1
>User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0
>OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
>Host: 127.0.0.1:7992
>Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection #0
curl: (56) Recv failure: Connection reset by peer



Confirmed here.


Okay, after removing accept-proxy from

bind *:${HTTP_BIND_PORT} accept-proxy tfo

it now works as expected.

Using epoll() as the polling mechanism.
:http-in.accept(0005)=0007 from [127.0.0.1:53420]
[3995514114] process_stream:1662: task=0xa27410 s=0xa59600, 
sfl=0x0080, rq=0xa59610, rp=0xa59650, exp(r,w)=0,0 rqf=00908002 
rpf=8000 rqh=166 rqt=0 rph=0 rpt=0 cs=7 ss=0, cet=0x0 set=0x0 retr=0
[3995514114] tcp_inspect_request: stream=0xa59600 b=0xa59610, 
exp(r,w)=0,0 bf=00908002 bh=166 analysers=36
[3995514114] http_wait_for_request: stream=0xa59600 b=0xa59610, 
exp(r,w)=0,0 bf=00908002 bh=166 analysers=34

:http-in.clireq[0007:]: GET / HTTP/1.1
:http-in.clihdr[0007:]: User-Agent: curl/7.22.0 
(x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 
libidn/1.23 librtmp/2.3

:http-in.clihdr[0007:]: Host: 127.0.0.1:7992
:http-in.clihdr[0007:]: Accept: */*
[3995514114] http_process_req_common: stream=0xa59600 b=0xa59610, 
exp(r,w)=0,0 bf=00908002 bh=166 analysers=30
[3995514114] process_switching_rules: stream=0xa59600 b=0xa59610, 
exp(r,w)=0,0 bf=04908002 bh=166 analysers=00
[3995514114] http_process_req_common: stream=0xa59600 b=0xa59610, 
exp(r,w)=0,0 bf=04908002 bh=166 analysers=280
[3995514114] http_process_request: stream=0xa59600 b=0xa59610, 
exp(r,w)=0,0 bf=04908002 bh=166 analysers=200
[3995514114] sess_prepare_conn_req: sess=0xa59600 rq=0xa59610, 
rp=0xa59650, exp(r,w)=0,0 rqf=0492 rpf=8000 rqh=0 rqt=194 rph=0 
rpt=0 cs=7 ss=1

assign_server : s=0xa59600
[3995514114] sess_update_stream_int: sess=0xa59600 rq=0xa59610, 
rp=0xa59650, exp(r,w)=0,0 rqf=0492 rpf=8000 rqh=0 rqt=194 rph=0 
rpt=0 cs=7 ss=4

assign_server_address : s=0xa59600
[3995514114] queuing with exp=3995519114 req->rex=3995544114 req->wex=0 
req->ana_exp=0 rep->rex=0 rep->wex=0, si[0].exp=0, si[1].exp=3995519114, 
cs=7, ss=5
[3995514115] process_stream:1662: task=0xa27410 s=0xa59600, 
sfl=0x04ce, rq=0xa59610, rp=0xa59650, exp(r,w)=3995544114,0 
rqf=00840300 rpf=8050 rqh=0 rqt=0 rph=0 rpt=0 cs=7 ss=7, cet=0x0 
set=0x0 retr=3
[3995514115] http_wait_for_response: stream=0xa59600 b=0xa59650, 
exp(r,w)=0,0 bf=80508000 bh=0 analysers=6
[3995514115] queuing with exp=3995544115 req->rex=0 req->wex=0 
req->ana_exp=0 rep->rex=3995544115 rep->wex=0, si[0].exp=0, si[1].exp=0, 
cs=7, ss=7
[3995514115] process_stream:1662: task=0xa27410 s=0xa59600, 
sfl=0x04ce, rq=0xa59610, rp=0xa59650, exp(r,w)=0,0 rqf=0084 
rpf=8002 rqh=0 rqt=0 rph= rpt=0 cs=7 ss=7, cet=0x0 set=0x0 
retr=3
[3995514115] http_wait_for_response: stream=0xa59600 b=0xa59650, 
exp(r,w)=0,0 bf=80008002 bh= analysers=6

:.srvrep[0007:0008]: HTTP/1.1 200 OK
:.srvhdr[0007:0008]: Server: nginx/1.9.6
:.srvhdr[0007:0008]: Date: Mon, 09 Nov 2015 21:28:45 GMT
:.srvhdr[0007:0008]: Content-Type: text/html
:.srvhdr[0007:0008]: Content-Length: 3095
:.srvhdr[0007:0008]: Last-Modified: Wed, 18 Jan 2012 
10:17:45 GMT

:.srvhdr[0007:0008]: Connection: keep-alive
:.srvhdr[0007:0008]: ETag: "4f169c49-c17"
:.srvhdr[0007:0008]: Accept-Ranges: bytes
[3995514115] http_process_res_common: stream=0xa59600 b=0xa59650, 
exp(r,w)=0,0 bf=80008002 bh=3309 analysers=4
[3995514115] tcp_inspect_request: stream=0xa59600 b=0xa59610, 
exp(r,w)=0,0 bf=00c08000 bh=0 analysers=36
[3995514115] queuing with exp=3995519115 req->rex=3995544115 req->wex=0 
req->ana_exp=3995519115 rep->rex=0 rep->wex=3995544115, si[0].exp=0,

Re: Debug mode not working?!

2015-11-09 Thread Willy Tarreau
On Mon, Nov 09, 2015 at 10:15:46PM +0100, Aleksandar Lazic wrote:
> 
> ...
> epoll_wait(3, {}, 200, 1000)= 0
> epoll_wait(3, {{EPOLLIN, {u32=5, u64=5}}}, 200, 1000) = 1
> accept4(5, {sa_family=AF_INET, sin_port=htons(52310), 
> sin_addr=inet_addr("127.0.0.1")}, [16], SOCK_NONBLOCK) = 7
> setsockopt(7, SOL_TCP, TCP_NODELAY, [1], 4) = 0
> accept4(5, 0x7ffca18022c0, [128], SOCK_NONBLOCK) = -1 EAGAIN (Resource 
> temporarily unavailable)
> recvfrom(7, "GET / HTTP/1.1\r\nUser-Agent: curl/7.22.0 
> (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 
> libidn/1.23 librtmp/2.3\r\nHost: 127.0.0.1:7992\r\nAccept: */*\r\n\r\n", 
> 16384, MSG_PEEK, NULL, NULL) = 166
> close(7)= 0
> epoll_wait(3, {}, 200, 1000)= 0
> ...
> 

It was aborted very early; I think it didn't even become a session,
though I could be wrong. You need a session for a minimum of debugging
to work.

(...)
> Other terminal.
> 
> 
> curl -vk http://127.0.0.1:7992/
> * About to connect() to 127.0.0.1 port 7992 (#0)
> *   Trying 127.0.0.1... connected
> >GET / HTTP/1.1
> >User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 
> >OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> >Host: 127.0.0.1:7992
> >Accept: */*
> >
> * Recv failure: Connection reset by peer
> * Closing connection #0
> curl: (56) Recv failure: Connection reset by peer
> 

Confirmed here.

Willy




Re: Debug mode not working?!

2015-11-09 Thread Aleksandar Lazic



Am 09-11-2015 11:34, schrieb Willy Tarreau:

Hi Aleks,

On Sun, Nov 08, 2015 at 04:24:29PM +0100, Aleksandar Lazic wrote:

Hi.

Today I tried to debug haproxy as in the old days ;-), but I was not
able to see the communication on stderr.

I'm sure I have missed something in the past on the list about how to
see the output.


I use it every day and I just retested, it still works for me. Are you
sure you don't have another instance still listening to the same port
and receiving the traffic ? It already happened to me a few times,
reason why I'm asking :-)


Thanks. I also thought that, but no: the request reaches the right 
instance.


export MONITOR_BIND_PORT=7991 && export HTTP_BIND_PORT=7992  && export 
HTTPS_BIND_PORT=7993 && strace -fveall -s1024 haproxy-1.6.2/haproxy -f 
haproxy.conf -d -V



...
epoll_wait(3, {}, 200, 1000)= 0
epoll_wait(3, {{EPOLLIN, {u32=5, u64=5}}}, 200, 1000) = 1
accept4(5, {sa_family=AF_INET, sin_port=htons(52310), 
sin_addr=inet_addr("127.0.0.1")}, [16], SOCK_NONBLOCK) = 7

setsockopt(7, SOL_TCP, TCP_NODELAY, [1], 4) = 0
accept4(5, 0x7ffca18022c0, [128], SOCK_NONBLOCK) = -1 EAGAIN (Resource 
temporarily unavailable)
recvfrom(7, "GET / HTTP/1.1\r\nUser-Agent: curl/7.22.0 
(x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 
libidn/1.23 librtmp/2.3\r\nHost: 127.0.0.1:7992\r\nAccept: */*\r\n\r\n", 
16384, MSG_PEEK, NULL, NULL) = 166

close(7)= 0
epoll_wait(3, {}, 200, 1000)= 0
...


Other terminal.


curl -vk http://127.0.0.1:7992/
* About to connect() to 127.0.0.1 port 7992 (#0)
*   Trying 127.0.0.1... connected

GET / HTTP/1.1
User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 
OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3

Host: 127.0.0.1:7992
Accept: */*


* Recv failure: Connection reset by peer
* Closing connection #0
curl: (56) Recv failure: Connection reset by peer


gcc --version
gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3
Copyright (C) 2011 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is 
NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR 
PURPOSE.



Willy




appsession replacement in 1.6

2015-11-09 Thread Sylvain Faivre

Hi,

Sorry I'm late on this discussion, following this thread :
https://marc.info/?l=haproxy&m=143345620219498&w=2

We are using appsession with HAProxy 1.5 like this:

backend http
appsession JSESSIONID len 24 timeout 1h request-learn

We would like to be able to do the same thing with HAProxy 1.6.
If possible, we'd like to catch the JSESSIONID in cookies and URL 
parameters, either in the request or in the response.


I tried to use the info posted previously by Aleksandar, but I 
encountered several problems :


- using "stick on" in frontend section fails with :
> 'stick' ignored because frontend 'web' has no backend capability.

- using "stick store-response urlp(JSESSIONID,;)" in backend section 
fails with :
> 'stick': fetch method 'urlp' extracts information from 'HTTP request 
headers', none of which is available for 'store-response'.



So, I've got this so far :

backend http

  stick-table type string len 24 size 10m expire 1h peers prod

  stick on urlp(JSESSIONID,;)
  stick on cookie(JSESSIONID)


Does this seem right ?
The help for "stick on" says it defines a request pattern, so I guess 
this would not match a JSESSIONID cookie or URL parameter set in the reply?



Regards,
Sylvain




WHY they are different when checking concurrent limit?

2015-11-09 Thread Zhou,Qingzhi

Hi guys:
I’m reading the source code of version 1.6.2, in function listener_accept:

/* Note: if we fail to allocate a connection because of configured
 * limits, we'll schedule a new attempt at worst 1 second later. If we
 * fail due to system limits or temporary resource shortage, we try
 * again 100ms later in the worst case.
 */
while (max_accept--) {
    struct sockaddr_storage addr;
    socklen_t laddr = sizeof(addr);

    if (unlikely(actconn >= global.maxconn) && !(l->options & LI_O_UNLIMITED)) {
        limit_listener(l, &global_listener_queue);
        task_schedule(global_listener_queue_task, tick_add(now_ms, 1000)); /* try again in 1 second */
        return;
    }

    if (unlikely(p && p->feconn >= p->maxconn)) {
        limit_listener(l, &p->listener_queue);   /* <--- here is my question */
        return;
    }

My question is: why is task_schedule not called again here? Any purpose?
To my knowledge, if the upper limit is reached, we should re-schedule the task 
with an expiry time, and the listener will wake up when the task runs.

With great thanks,
Zhou


Re:

2015-11-09 Thread Hoggins!


Le 09/11/2015 11:39, Willy Tarreau a écrit :
> On Sat, Nov 07, 2015 at 09:55:39PM +0100, Baptiste wrote:
>> Hi,
>>
>> This is an english mailing list!
> And I just checked, he is *not* subscribed to the list!
>
> Willy
>
>

Yup. I wouldn't be surprised if haproxy@formilux.org was used by bots as
a From: address to send spam, hence his rude exasperated message.






Re:

2015-11-09 Thread Willy Tarreau
On Sat, Nov 07, 2015 at 09:55:39PM +0100, Baptiste wrote:
> Hi,
> 
> This is an english mailing list!

And I just checked, he is *not* subscribed to the list!

Willy




Re: Debug mode not working?!

2015-11-09 Thread Willy Tarreau
Hi Aleks,

On Sun, Nov 08, 2015 at 04:24:29PM +0100, Aleksandar Lazic wrote:
> Hi.
> 
> Today I tried to debug haproxy as in the old days ;-), but I was not 
> able to see the communication on stderr.
> 
> I'm sure I have missed something in the past on the list about how to 
> see the output.

I use it every day and I just retested, it still works for me. Are you
sure you don't have another instance still listening to the same port
and receiving the traffic ? It already happened to me a few times,
reason why I'm asking :-)

Willy




Re: [PATCH] MEDIUM: mailer: try sending a mail up to 3 times

2015-11-09 Thread Willy Tarreau
Hi Pieter,

> Hi Ben, Willy, Simon,
> 
> Ben, thanks for the review.
> Hoping 'release pressure' has cleared for Willy, I'm resending the 
> patch now, with your comments incorporated.
> 
> CC, to Simon as maintainer of mailers part so he can give approval (or 
> not).
> 
> The original reservations I had when sending this patch still apply. 
> See the "HOWEVER." part in the bottom mail.
> 
> Hoping it might get merged to improve mailer reliability, so no 
> 'server down' email gets lost.
> Thanks everyone for your time :) .

Looks good to me. Just waiting for Simon's approval.

Willy




Re: [PATCH] DOC: lua-api/index.rst small example fixes, spelling correction.

2015-11-09 Thread Willy Tarreau
Hi Pieter,

On Sun, Nov 08, 2015 at 04:44:18PM +0100, PiBa-NL wrote:
> Hi List, Willy,
> 
> Attached are some small example fixes and a spelling correction.
> Hope it's ok like this :).

Applied, thanks!

Willy