Re: [PATCH] Support statistics in multi-process mode

2015-09-14 Thread Philipp Kolmann

Hi Willy,

On 09/14/15 12:17, Willy Tarreau wrote:

OK I now found a moment to spare some time on your patch. During my
first lecture I didn't understand that it relied on SIGUSR2 to
aggregate counters. I'm seeing several issues with that approach :


I never intended to make it look as though the patch was mine. The original 
mail is from Hiep Nguyen, hie...@vccloud.vn (CCed).


I just wanted to re-raise the topic, since Hiep's mail seemed to have 
been drowned out and I am interested in this feature.


@Hiep: Please look at Willy's suggestions.

Thanks
Philipp

--
---
DI Mag. Philipp Kolmann  mail: kolm...@zid.tuwien.ac.at
Technische Universitaet Wien  web: www.zid.tuwien.ac.at
Zentraler Informatikdienst (ZID) tel: +43(1)58801-42011
Wiedner Hauptstr. 8-10, A-1040 Wien  DVR: 0005886
---




Cannot enable a config "disabled" server via socket command

2015-09-14 Thread Ayush Goyal
Hi,

We are testing haproxy 1.6-dev4. We have added a server in a backend as
disabled, but we are not able to bring it up using a socket command.

Our backend conf looks like this:

=cut
backend apiservers
server api101 localhost:1234   maxconn 128 weight 1 check
server api102 localhost:1235 disabled  maxconn 128 weight 1 check
server api103 localhost:1236 disabled  maxconn 128 weight 1 check
=cut

But when I run the "enable apiservers/api103" command, the server stays in
MAINT mode. Disabling and enabling servers that are not marked "disabled",
like api101, works properly.

Enabling a config-"disabled" server works correctly with haproxy 1.5. Can
you confirm whether it's a bug in 1.6-dev4?

Thanks,
Ayush Goyal


Re: [PATCH] Support statistics in multi-process mode

2015-09-14 Thread Aleksandar Lazic

Hi.

Am 14-09-2015 12:17, schrieb Willy Tarreau:

Hi Philipp,



[snipped]

What I'd like to have instead would be a per-proxy shared memory segment
for stats in addition to the per-process one, that is updated using
atomic operations each time other stats are updated. The max are a bit
tricky as you need to use a compare-and-swap operation but that's no big
deal. Please note that before doing this, it would be wise to move all
existing stats to a few dedicated structures so that in each proxy, server,
listener and so on we could simply have something like this :

struct proxy_stats *local;
struct proxy_stats *global;

As you guessed it local would be allocated in per process while global
would be shared between all of them.

Another benefit would be that we could improve the current sample fetch
functions which already look at some stats and use the global ones. That's
even more important for maxconn where it would *open* the possibility to
monitor the global connection count and not just the per-process one (but
there are other things to do prior to this being possible, such as
inter-process calls). However without inter-process calls we could decide
that we can slightly overbook the queue by up to one connection max per
process and that could be reasonably acceptable while waiting for a longterm
multi-threaded approach though.


I have been following uwsgi for some time:

https://uwsgi-docs.readthedocs.org/

I have now implemented uwsgi for some CGIs; yes, such things still 
exist ;-)


I like their stats solution, which is similar to what you suggest, as far 
as I have understood your solution properly.


https://uwsgi-docs.readthedocs.org/en/latest/StatsServer.html

As far as I understand their solution, they use a registry pattern 
similar to the one that already exists in haproxy.


https://github.com/unbit/uwsgi/search?utf8=%E2%9C%93=uwsgi_register_stats_pusher
http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/proto_http.c;h=eb3582bd77be9fb96a0babb0e5390c276c77e50e;hb=HEAD#l13043

How about 'just' adding a stats_register hook to the modules?

For example in proto_http
http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/proto_http.c;h=eb3582bd77be9fb96a0babb0e5390c276c77e50e;hb=HEAD#l13066


13066 static void __http_protocol_init(void)
13067 {
13068 acl_register_keywords(&acl_kws);
13069 sample_register_fetches(&sample_fetch_keywords);
13070 sample_register_convs(&sample_conv_kws);

  stats_register_global_and_or_local($global|$proxy,...);

13071 http_req_keywords_register(&http_req_actions);
13072 http_res_keywords_register(&http_res_actions);
13073 }
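Such a registry hook could be sketched in C like this (all names here, including stats_pusher, stats_register_pusher and stats_push_all, are hypothetical illustrations in the spirit of acl_register_keywords(), not existing haproxy API):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stats registry: modules append themselves to a linked
 * list at init time, and the stats code walks the list later. */
struct stats_pusher {
	const char *name;          /* module name, for reporting */
	void (*push)(void *ctx);   /* called when stats are collected */
	struct stats_pusher *next;
};

static struct stats_pusher *stats_pushers;

/* register at module init, as __http_protocol_init() does for keywords */
static void stats_register_pusher(struct stats_pusher *p)
{
	p->next = stats_pushers;
	stats_pushers = p;
}

/* walk all registered pushers; returns how many were called */
static int stats_push_all(void *ctx)
{
	int n = 0;
	for (struct stats_pusher *p = stats_pushers; p; p = p->next, n++)
		p->push(ctx);
	return n;
}

/* demo pusher used below */
static int demo_calls;
static void demo_push(void *ctx)
{
	(void)ctx;
	demo_calls++;
}
```

A module would then only need to declare a static struct stats_pusher and call stats_register_pusher() from its init function.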


From my point of view the benefit is that the stats server is inside haproxy 
and the data could be stored in an efficient way.


Maybe there could be a standalone instance with 'mode stats', like the other 
modes.


http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#4-mode

Best regards
Aleks



Accepting both, SSL- and non-SSL connections when acting as SSL end point

2015-09-14 Thread Martin Schmid

Hello list

I'm quite new to haproxy, and I've managed to use it with SSL passthrough 
and as SSL termination.
I've also started looking into the code to find the answers or solutions 
to what I want to achieve.


I have OpenVPN and HTTPS running on the same port. This can be done with 
several setups, of which using the OpenVPN port-sharing feature is the 
easiest.


But now I need to know the remote IP addresses in order to be able to 
lock out abusive access to the web server. HTTPS used to be unharmed by 
exploitative access, but now it's becoming a problem. With HTTP, I can 
reduce the traffic by locking out IP addresses using fail2ban. With 
HTTPS, I cannot see the IP address, so there is no way to lock them out 
selectively.
Any tool that does the backend switching cannot add an X-Forwarded-For 
HTTP header and be the SSL end point at the same time. Haproxy seems to 
be the only tool that might be able to handle both.


Looking at the code of haproxy, it seems to me that once I configure a 
bind with ssl, it just drops all connections that do not begin with an SSL 
handshake.
However, it seems to be feasible to alter the code in order to fall back 
to a non-SSL connection if the handshake fails.


Has any of you already tried to accomplish this, or am I missing a 
detail that makes it impossible?



Regards

Martin




Re: [ANNOUNCE] haproxy-1.6-dev5

2015-09-14 Thread PiBa-NL

Op 14-9-2015 om 13:37 schreef Willy Tarreau:

Hi all,

we've fixed several bugs since -dev4 so in order to encourage people to
safely test the code, here comes -dev5.


Hi Willy,

As always it's nice to have a new -dev release when some fixes have been 
added.

Though I think a day or two of heads-up would be nice before every dev release,
as this time a patch is missing which I was hoping to see in dev5:
'haproxy resolvers "nameserver: can't connect socket" (on FreeBSD)'

Should I re-send that patch from Remi with a commit message and a PATCH 
subject? I thought Baptiste was going to do that.


Just let me know, thanks.

Thanks,
PiBa-NL



RE: rate limiting according to "total time" - possible ?

2015-09-14 Thread Roland RoLaNd
That's exactly what I wanted!!
Thank you, Willy.


> Date: Mon, 14 Sep 2015 07:38:08 +0200
> From: w...@1wt.eu
> To: r_o_l_a_...@hotmail.com
> CC: haproxy@formilux.org
> Subject: Re: rate limiting according to "total time" - possible ?
> 
> Hi Roland,
> 
> On Fri, Sep 11, 2015 at 05:11:11PM +0300, Roland RoLaNd wrote:
> > hello
> > i have haproxy directing traffic to a number of backends.
> > these backends can auto scale upon traffic; my goal is to change "maxconn" 
> > depending on "total time" or "backend time" that a request took to 
> > complete.
> > for example:
> > if totaltime < 1 second: maxconn = 1000
> > if totaltime < 2 seconds: maxconn = 500
> > etc...
> > 
> > the goal is to hold connections in queue till backend auto scaling is in 
> > effect.
> > 
> > Can i do the above scenario within haproxy config or a cron that checks 
> > haproxy socket/totaltime and act accordingly is a better idea?
> > 
> > do you have an alternative advice for me to accomplish that goal ?
> 
> I could be wrong, but I think you're trying to re-implement by hand the
> dynamic rate limiting you can get using minconn, maxconn and fullconn. It
> dynamically increases or decreases the effective per-server maxconn
> depending on the total number of connections on all servers in the
> backend so that queues decrease when connection count increases.
> 
> Willy
> 
> 
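As a hedged illustration of Willy's point (the backend name, addresses and numbers below are hypothetical): haproxy scales each server's effective connection limit between minconn and maxconn according to how close the backend's total connection count is to fullconn, so queues shrink automatically as load rises.

```haproxy
# Sketch only: dynamic per-server limits via minconn/maxconn/fullconn.
backend apiservers
    fullconn 1000                      # load at which servers reach full maxconn
    server api101 10.0.0.1:8080 minconn 50 maxconn 500 check
    server api102 10.0.0.2:8080 minconn 50 maxconn 500 check
```

With this, at low backend load each server accepts around minconn concurrent connections and queues the rest; as total load approaches fullconn, the effective limit grows toward maxconn.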
  

Re: [PATCH] Support statistics in multi-process mode

2015-09-14 Thread Willy Tarreau
On Mon, Sep 14, 2015 at 01:07:45PM +0200, Philipp Kolmann wrote:
> Hi Willy,
> 
> On 09/14/15 12:17, Willy Tarreau wrote:
> >OK I now found a moment to spare some time on your patch. During my
> >first lecture I didn't understand that it relied on SIGUSR2 to
> >aggregate counters. I'm seeing several issues with that approach :
> 
> I never intended to make it look as though the patch was mine. The original 
> mail is from Hiep Nguyen, hie...@vccloud.vn (CCed).

OK, sorry for the confusion. I indeed saw a different name in the From
header, but the displayed name was "root", which led me to think it was
probably sent from a development server or something.

> I just wanted to re-raise the topic, since Hiep's mail seemed to have 
> been drowned out and I am interested in this feature.

Thanks for the clarification.

Best regards,
Willy




Re: Accepting both, SSL- and non-SSL connections when acting as SSL end point

2015-09-14 Thread PiBa-NL

Op 14-9-2015 om 14:32 schreef Martin Schmid:

Hello list

I'm quite new to haproxy, and I've managed to use it with SSL passthrough 
and as SSL termination.
I've also started looking into the code to find the answers or 
solutions to what I want to achieve.


I have OpenVPN and HTTPS running on the same port. This can be done 
with several setups, of which using the OpenVPN port-sharing feature is 
the easiest.


But now I need to know the remote IP addresses in order to be able to 
lock out abusive access to the web server. HTTPS used to be unharmed 
by exploitative access, but now it's becoming a problem. With HTTP, I 
can reduce the traffic by locking out IP addresses using fail2ban. With 
HTTPS, I cannot see the IP address, so there is no way to lock them 
out selectively.
Any tool that does the backend switching cannot add an X-Forwarded-For 
HTTP header and be the SSL end point at the same time. Haproxy seems 
to be the only tool that might be able to handle both.


Looking at the code of haproxy, it seems to me that once I configure a 
bind with ssl, it just drops all connections that do not begin with an 
SSL handshake.
However, it seems to be feasible to alter the code in order to fall 
back to a non-SSL connection if the handshake fails.


Has any of you already tried to accomplish this, or am I missing a 
detail that makes it impossible?



Regards

Martin



Hi Martin,

Not sure if this will work with OpenVPN, but you could try it.
This mail might interest you: 
http://marc.info/?l=haproxy=132375969032305=2


First split the TCP traffic out to different backends depending on data sent 
by the client.
Then possibly feed it from a backend server back into a second frontend 
where you handle the SSL offloading if desired, while using the proxy 
protocol to keep the client-IP information, and namespaces or unix sockets 
for the connection between the two.


Again, I have not tested it, but this seems like it could be a way to 
configure it with the current options.
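The two-hop layout described above could be sketched like this (a hedged, untested configuration; all names, paths, ports and certificate locations are hypothetical; TLS detection relies on req_ssl_hello_type, and the proxy protocol carries the client IP between the hops):

```haproxy
# Hop 1: split SSL vs non-SSL traffic arriving on one port.
frontend fe_mux
    mode tcp
    bind :443
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend be_to_ssl if { req_ssl_hello_type 1 }
    default_backend be_openvpn          # no TLS hello seen: assume OpenVPN

backend be_to_ssl
    mode tcp
    # loop back into the SSL frontend over a unix socket, keeping the
    # client IP via the proxy protocol
    server ssl_hop unix@/var/run/haproxy-ssl.sock send-proxy

# Hop 2: SSL offloading with the real client IP available.
frontend fe_ssl
    mode http
    bind unix@/var/run/haproxy-ssl.sock ssl crt /etc/haproxy/site.pem accept-proxy
    option forwardfor                   # now X-Forwarded-For carries the real IP
    default_backend be_web

backend be_openvpn
    mode tcp
    server vpn 127.0.0.1:1194

backend be_web
    mode http
    server web 127.0.0.1:8080 check
```

With the real client IP restored at the second frontend, fail2ban can act on the web server's logs as it does for plain HTTP.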


Regards,
PiBa-NL



sticky session not always sticky

2015-09-14 Thread Yves Van Wert
Hi list,

we have a backend configuration that uses sticky sessions based on a
cookie.  This works well for 99.99% of all requests. The problem is with the
0.01% of requests where the client is switched to another server.  Is there
any way I can debug this?  The backend server is not reported to be DOWN.
I can't seem to find any reason why a session would switch servers.  It's
not like all sessions on that backend move to another server.

backend weblogic-forms
mode http
balance roundrobin
cookie SERVERID insert indirect nocache
option httpclose
option forwardfor
option httpchk HEAD /check.txt HTTP/1.0
option log-health-checks
server ias03 10.64.0.81:  cookie ias03 check inter 3000 rise 5
fall 6 weight 40
server ias04 10.64.0.82:  cookie ias04 check inter 3000 rise 5
fall 6 weight 10
server ias05 10.64.0.181: cookie ias05 check inter 3000 rise 5
fall 6 weight 40
server ias06 10.64.0.182: cookie ias06 check inter 3000 rise 5
fall 6 weight 10

the haproxy version we are using is :

/usr/sbin/haproxy -v
HA-Proxy version 1.5.11 2015/01/31
Copyright 2000-2015 Willy Tarreau 

thank you
Yves


Re: sticky session not always sticky

2015-09-14 Thread Willy Tarreau
Hi Yves,

On Mon, Sep 14, 2015 at 04:30:22PM +0200, Yves Van Wert wrote:
> Hi list,
> 
> we have a backend configuration that uses sticky sessions based on a
> cookie.  This works well for 99.99% of all requests. The problem is with the
> 0.01% of requests where the client is switched to another server.  Is there
> any way I can debug this?  The backend server is not reported to be DOWN.
> I can't seem to find any reason why a session would switch servers.  It's
> not like all sessions on that backend move to another server.
> 
> backend weblogic-forms
> mode http
> balance roundrobin
> cookie SERVERID insert indirect nocache
> option httpclose
> option forwardfor
> option httpchk HEAD /check.txt HTTP/1.0
> option log-health-checks
> server ias03 10.64.0.81:  cookie ias03 check inter 3000 rise 5
> fall 6 weight 40
> server ias04 10.64.0.82:  cookie ias04 check inter 3000 rise 5
> fall 6 weight 10
> server ias05 10.64.0.181: cookie ias05 check inter 3000 rise 5
> fall 6 weight 40
> server ias06 10.64.0.182: cookie ias06 check inter 3000 rise 5
> fall 6 weight 10

I'm pretty sure the cause is only "option httpclose" which is also called
"passive close" : haproxy only adds "Connection: close" in both directions
and lets the other ends close by themselves. If they're deciding not to
close and to pass extra information, they will remain connected. It could
for example happen with some broken proxies which aggregate outgoing
connections from multiple clients without properly watching the Connection
header. Please just replace "option httpclose" with "option http-server-close"
and I'd bet the problem will disappear.
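Applied to the configuration quoted above, the change amounts to a single line (an untested sketch; only the relevant part of the backend is shown):

```haproxy
backend weblogic-forms
    mode http
    balance roundrobin
    cookie SERVERID insert indirect nocache
    option http-server-close   # active close toward the server,
                               # replaces "option httpclose" (passive close)
    option forwardfor
    option httpchk HEAD /check.txt HTTP/1.0
```

With http-server-close, haproxy actively closes the server-side connection after each response instead of merely asking both ends to close, so a misbehaving intermediary can no longer keep reusing one connection for several clients.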

> the haproxy version we are using is :
> 
> /usr/sbin/haproxy -v
> HA-Proxy version 1.5.11 2015/01/31
> Copyright 2000-2015 Willy Tarreau 

1.5.14 has been available for quite some time to fix several bugs, and 1.5.15
will be available shortly to fix even more. Once your problem is solved, please
consider upgrading.

Regards,
Willy





Re: [ANNOUNCE] haproxy-1.6-dev5

2015-09-14 Thread Baptiste
On Mon, Sep 14, 2015 at 9:12 PM, Willy Tarreau  wrote:
> On Mon, Sep 14, 2015 at 09:08:49PM +0200, PiBa-NL wrote:
>> Op 14-9-2015 om 18:48 schreef Willy Tarreau:
>> >BTW as a general rule, patches being merged are ACKed to their authors
>> >or rejected, so if you don't get a response, simply consider it lost.
>> I didn't send a patch as such; Remi did send a 'diff --git' but
>> without a commit message to put into the haproxy repository, after which
>> Baptiste then wrote he would submit it after confirmation that it
>> solved the issue, which I gave.
>
> Thanks for the explanation, I indeed missed all this exchange I guess.
>
>> Anyway it's not that important I suppose,
>> otherwise we / you could always issue another dev release.
>
> OK so Baptiste will catch it when he has time and forward it to me
> once he's OK with it.
>
>> Also the patch was added to the FreeBSD ports repository, so it should come
>> through with the binary packages built from there. That will
>> solve my 'problem' for the moment.
>
> OK fine. Thanks!
> Willy
>


Willy,

The issue is related to the connect() call used to establish the UDP connection.
Currently I use sizeof() to get the length of the address structure,
and Remi suggested using get_addr_len() instead.

Pieter confirmed Remi's suggestion fixes the issue.
I can reproduce the issue in a FreeBSD VM I have on my computer. I'll
show you tomorrow at the office.
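The distinction matters because some systems (FreeBSD among them) reject a connect() length larger than the address family's sockaddr size, so passing sizeof(struct sockaddr_storage) fails for AF_INET. A hypothetical helper in the spirit of get_addr_len() could look like this (a sketch, not the actual haproxy function):

```c
#include <assert.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Return the length to pass to connect()/bind() for the address family
 * actually stored in the sockaddr_storage, rather than the storage size. */
static socklen_t addr_len(const struct sockaddr_storage *ss)
{
	switch (ss->ss_family) {
	case AF_INET:
		return sizeof(struct sockaddr_in);
	case AF_INET6:
		return sizeof(struct sockaddr_in6);
	default:
		return 0;	/* unknown family: caller must handle */
	}
}
```

The call site then becomes connect(fd, (struct sockaddr *)&ss, addr_len(&ss)) instead of using sizeof(ss).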

Baptiste



Re: [ANNOUNCE] haproxy-1.6-dev5

2015-09-14 Thread Willy Tarreau
On Mon, Sep 14, 2015 at 10:46:00PM +0200, Baptiste wrote:
> The issue is related to the connect() call used to establish the UDP 
> connection.
> Currently I use sizeof() to get the length of the address structure,
> and Remi suggested using get_addr_len() instead.
> 
> Pieter confirmed Remi's suggestion fixes the issue.
> I can reproduce the issue in a FreeBSD VM I have on my computer. I'll
> show you tomorrow at the office.

OK fine.

Willy




Re: Multiple log entries with exact same Tq, Tc, Tr and Tt

2015-09-14 Thread Dave Stern
Willy,

Thank you so much for your response. We are not running in a VM, although
our backend DB is, but I can't see how that would be relevant. This is a
basic haproxy install via apt on Ubuntu. I parsed the log myself to make it
easier to read, but I realize I didn't include some important fields. Below
are the entries with no fields removed and some values modified for privacy.
I've also included our log-format config.

Thanks again for your input.

Dave


log-format [%t]\ %ID\ ac=%ac\ fc=%fc\ bc=%bc\ bq=%bq\ sc=%sc\ sq=%sq\
rc=%rc\ Tq=%Tq\ Tw=%Tw\ Tc=%Tc\ Tr=%Tr\ Tt=%Tt\ tsc=%tsc\ cip=%ci:%cp\
req=%{+Q}r\ ST=%ST\ H=%H\ bs=%b:%s\ hdrs="%hr"\ v=3\ reqB=%U\ resB=%B


Sep 10 20:00:00  haproxy[10928]: [10/Sep/2015:19:59:57.070]
3690005E:9268_0A954CD4:01BB_55F1E13D_61AF:2AB0 ac=2266 fc=2265 bc=1058 bq=0
sc=383 sq=0 rc=0 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 tsc= cip=
1.1.1.1:37480 req="POST /path/hidden HTTP/1.1" ST=200 H=
bs=db_production_read:production-04 hdrs="{query_1}" v=3 reqB=834 resB=241
Sep 10 20:00:00  haproxy[10928]: [10/Sep/2015:19:59:57.070]
3690005E:9269_0A954CD4:01BB_55F1E13D_61B0:2AB0 ac=2266 fc=2265 bc=1057 bq=0
sc=303 sq=0 rc=0 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 tsc= cip=
1.1.1.1:37481 req="POST /path/hidden HTTP/1.1" ST=200 H=
bs=db_production_read:production-02 hdrs="{query_2}" v=3 reqB=910 resB=249
Sep 10 20:00:00  haproxy[10928]: [10/Sep/2015:19:59:57.070]
3690005E:926A_0A954CD4:01BB_55F1E13D_61B1:2AB0 ac=2266 fc=2265 bc=1056 bq=0
sc=311 sq=0 rc=0 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 tsc= cip=
1.1.1.1:37482 req="POST /path/hidden HTTP/1.1" ST=200 H=
bs=db_production_read:production-03 hdrs="{query_3}" v=3 reqB=963 resB=247
Sep 10 20:00:00  haproxy[10928]: [10/Sep/2015:19:59:57.070]
3690005E:926C_0A954CD4:01BB_55F1E13D_61B2:2AB0 ac=2266 fc=2265 bc=1055 bq=0
sc=382 sq=0 rc=0 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 tsc= cip=
1.1.1.1:37484 req="POST /path/hidden HTTP/1.1" ST=200 H=
bs=db_production_read:production-04 hdrs="{query_4}" v=3 reqB=1372 resB=304
Sep 10 20:00:00  haproxy[10928]: [10/Sep/2015:19:59:57.070]
3690005E:926B_0A954CD4:01BB_55F1E13D_61B3:2AB0 ac=2266 fc=2265 bc=1054 bq=0
sc=302 sq=0 rc=0 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 tsc= cip=
1.1.1.1:37483 req="POST /path/hidden HTTP/1.1" ST=200 H=
bs=db_production_read:production-02 hdrs="{query_5}" v=3 reqB=1229 resB=291
Sep 10 20:00:00  haproxy[10928]: [10/Sep/2015:19:59:57.070]
3690005E:926D_0A954CD4:01BB_55F1E13D_61B4:2AB0 ac=2266 fc=2265 bc=1053 bq=0
sc=310 sq=0 rc=0 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 tsc= cip=
1.1.1.1:37485 req="POST /path/hidden HTTP/1.1" ST=200 H=
bs=db_production_read:production-03 hdrs="{query_6}" v=3 reqB=1402 resB=314
Sep 10 20:00:00  haproxy[10928]: [10/Sep/2015:19:59:57.070]
3690005E:926E_0A954CD4:01BB_55F1E13D_61B5:2AB0 ac=2266 fc=2265 bc=1052 bq=0
sc=62 sq=0 rc=0 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 tsc= cip=
1.1.1.1:37486 req="POST /path/hidden HTTP/1.1" ST=200 H=
bs=db_production_read:production-05 hdrs="{query_7}" v=3 reqB=1419 resB=292
Sep 10 20:00:00  haproxy[10928]: [10/Sep/2015:19:59:57.070]
36CDCE43:9270_0A954CD4:01BB_55F1E13D_61B6:2AB0 ac=2266 fc=2265 bc=1051 bq=0
sc=309 sq=0 rc=0 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 tsc= cip=
3.3.3.3:37488 req="POST /path/hidden HTTP/1.1" ST=200 H=
bs=db_production_read:production-03 hdrs="{query_8}" v=3 reqB=935 resB=274
Sep 10 20:00:00  haproxy[10928]: [10/Sep/2015:19:59:57.070]
3690005E:926F_0A954CD4:01BB_55F1E13D_61B7:2AB0 ac=2266 fc=2265 bc=383 bq=0
sc=384 sq=0 rc=0 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 tsc= cip=
1.1.1.1:37487 req="POST /path/hidden HTTP/1.1" ST=200 H=
bs=db_production_write:production-01 hdrs="{query_9}" v=3 reqB=626 resB=1928
Sep 10 20:00:00  haproxy[10928]: [10/Sep/2015:19:59:57.070]
3690005E:9270_0A954CD4:01BB_55F1E13D_61B8:2AB0 ac=2266 fc=2265 bc=1050 bq=0
sc=381 sq=0 rc=0 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 tsc= cip=
1.1.1.1:37488 req="POST /path/hidden HTTP/1.1" ST=200 H=
bs=db_production_read:production-04 hdrs="{query_10}" v=3 reqB=669 resB=352
Sep 10 20:00:00  haproxy[10928]: [10/Sep/2015:19:59:57.070]
3690005E:9271_0A954CD4:01BB_55F1E13D_61B9:2AB0 ac=2266 fc=2265 bc=1049 bq=0
sc=301 sq=0 rc=0 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 tsc= cip=
1.1.1.1:37489 req="POST /path/hidden HTTP/1.1" ST=200 H=
bs=db_production_read:production-02 hdrs="{query_11}" v=3 reqB=1879 resB=413
Sep 10 20:00:00  haproxy[10928]: [10/Sep/2015:19:59:57.070]
3690005E:9272_0A954CD4:01BB_55F1E13D_61BA:2AB0 ac=2266 fc=2265 bc=1048 bq=0
sc=308 sq=0 rc=0 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 tsc= cip=
1.1.1.1:37490 req="POST /path/hidden HTTP/1.1" ST=200 H=
bs=db_production_read:production-03 hdrs="{query_12}" v=3 reqB=1899 resB=413
Sep 10 20:00:00  haproxy[10928]: [10/Sep/2015:19:59:57.070]
3690005E:9273_0A954CD4:01BB_55F1E13D_61BB:2AB0 ac=2266 fc=2265 bc=382 bq=0
sc=383 sq=0 rc=0 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 tsc= cip=
1.1.1.1:37491 req="POST /path/hidden HTTP/1.1" ST=200 H=

Re: [ANNOUNCE] haproxy-1.6-dev5

2015-09-14 Thread PiBa-NL

Op 14-9-2015 om 18:48 schreef Willy Tarreau:

BTW as a general rule, patches being merged are ACKed to their authors
or rejected, so if you don't get a response, simply consider it lost.
I didn't send a patch as such; Remi did send a 'diff --git' but 
without a commit message to put into the haproxy repository, after which 
Baptiste then wrote he would submit it after confirmation that it 
solved the issue, which I gave. Anyway it's not that important I suppose; 
otherwise we / you could always issue another dev release.


Also the patch was added to the FreeBSD ports repository, so it should come 
through with the binary packages built from there. That will 
solve my 'problem' for the moment.


Thanks for your reply.
PiBa-NL



Re: [ANNOUNCE] haproxy-1.6-dev5

2015-09-14 Thread Willy Tarreau
On Mon, Sep 14, 2015 at 09:08:49PM +0200, PiBa-NL wrote:
> Op 14-9-2015 om 18:48 schreef Willy Tarreau:
> >BTW as a general rule, patches being merged are ACKed to their authors
> >or rejected, so if you don't get a response, simply consider it lost.
> I didn't send a patch as such; Remi did send a 'diff --git' but 
> without a commit message to put into the haproxy repository, after which 
> Baptiste then wrote he would submit it after confirmation that it 
> solved the issue, which I gave.

Thanks for the explanation, I indeed missed all this exchange I guess.

> Anyway it's not that important I suppose, 
> otherwise we / you could always issue another dev release.

OK so Baptiste will catch it when he has time and forward it to me
once he's OK with it.

> Also the patch was added to the FreeBSD ports repository, so it should come 
> through with the binary packages built from there. That will 
> solve my 'problem' for the moment.

OK fine. Thanks!
Willy




Re: Two things: proxy protocol v2 example and a missing article.

2015-09-14 Thread Willy Tarreau
Hi Eliezer,

On Fri, Sep 11, 2015 at 01:45:23PM +0300, Eliezer Croitoru wrote:
> Hey List,
> 
> I am writing a proxy protocol parser in golang and I need some help.
> I am looking for couple proxy protocol v2 examples for testing purposes.
> I am looking for couple strings which I can throw at my parser.
> The first thing to do is just run a haproxy and dump the strings but I 
> think it's missing from the docs of v2 compared to v1.(from what I was 
> reading)

I don't understand, what is missing from the docs exactly ?

Also indeed, it's very simple to run haproxy to get the protocol on your
input. Just use "send-proxy" to get protocol v1, and "send-proxy-v2" to
get protocol v2. You can also try other programs such as stunnel, stud
or squid which are able to send it as well.

Note that the example code in the protocol documentation has been used as
a basis for a number of implementations (which is why bugs were reported).
So you can also try to start from that.
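As a hedged illustration (ports and addresses hypothetical), a configuration to capture both protocol versions on the wire toward the parser under test could look like:

```haproxy
# Point haproxy at the parser under test to capture real PROXY bytes.
listen send_v1
    mode tcp
    bind :8001
    server parser 127.0.0.1:9001 send-proxy      # v1: human-readable line, e.g.
                                                 # "PROXY TCP4 192.168.0.1 192.168.0.11 56324 443\r\n"

listen send_v2
    mode tcp
    bind :8002
    server parser 127.0.0.1:9002 send-proxy-v2   # v2: binary header starting with the
                                                 # 12-byte signature \x0D\x0A\x0D\x0A\x00\x0D\x0A\x51\x55\x49\x54\x0A
```

Connecting through either listener and dumping the first bytes received by the parser yields ready-made test vectors for both versions.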

> Another issue is that
(...)

Personal advice : you should never ever mix two different questions in a
single e-mail, that's the best way to never get a response to half of
them because most people willing to help only look for unreplied e-mails.

Regards,
Willy




Re: AW: Delete response headers unless condition give me a warning

2015-09-14 Thread Alex
Ricardo F  writes:

> 
> Hello,
> Thanks for the responses. At least I found that my thoughts were correct.
> 
> If you have enough time, Willy, I will gladly read the explanation; but if
> someone is in the same situation as me, here is another workaround:
> 
>         rspidel ^X-[abcdefghiklmnopqrstuvwxyz].*
> 
> All letters except "J" ;)
> 
> 
> Regards,
> 
> Ricardo F.
> 
> > From: andreas.mock-jie1t0cjddqb1svskn2...@public.gmane.org
> > To: bedis9-re5jqeeqqe8avxtiumw...@public.gmane.org; w...@1wt.eu
> > CC: rikr_-pkbjnfxxiarbdgjk7y7...@public.gmane.org;
> > haproxy-jklxk3lifipg9huczpv...@public.gmane.org
> > Subject: AW: Delete response headers unless condition give me a warning
> > Date: Fri, 13 Sep 2013 11:18:08 +
> >
> > Me too, I love hearing dirty things.
> > That doesn't mean I'll do them...
> >
> > Best regards
> > Andreas Mock
> >
> > -Original Message-
> > From: Baptiste [mailto:bedis9-re5jqeeqqe8avxtiumw...@public.gmane.org]
> > Sent: Thursday, 12 September 2013 22:53
> > To: Willy Tarreau
> > Cc: Ricardo F; haproxy@formilux.org
> > Subject: Re: Delete response headers unless condition give me a warning
> >
> > > > If this is the case, I do have a solution that we elaborated a few weeks
> > > > ago when discussing with Bertrand (who confirmed it worked). A very dirty
> > > > one. I'd prefer to explain it only if absolutely needed though!
> > > >
> > > > Willy
> > >
> > > Actually, I'm interested :)
> > >
> > > Baptiste
> 
> 
Why not write: rspidel ^X-[^j] ?

Regards



Re: [PATCH] Support statistics in multi-process mode

2015-09-14 Thread Willy Tarreau
Hi Philipp,

OK I now found a moment to spare some time on your patch. During my
first lecture I didn't understand that it relied on SIGUSR2 to
aggregate counters. I'm seeing several issues with that approach :

  - the time to scan all the proxies can be huge. Some people on this
list run with more than 5 backends. And the processing is
serialized in shm_proxy_update() so that means that even if they
run 32 processes on 32 cores, they'll have to wait 32 times the
time it takes to process 5 backends.

  - stats cannot easily be summed this way. The max are never valid
this way, because max(A+B) is somewhere between max(max(A), max(B)) and
max(A)+max(B). For this reason it's important to update stats
in real time using atomic operations.

  - the use of semaphores should *really* be avoided, as they create
huge trouble in production because they last after the process'
death. Here for example if you send a SIGUSR2 to your processes,
find that they're taking too much time to aggregate and decide
to kill the culprit, the other ones will be stuck forever and
when you decide to kill and restart the service, the old IPC is
still there until you reach the moment where there are no more
left and admins reboot the system because nowadays nobody thinks
to run "ipcs -a" then "ipcrm". I'd rather welcome a solution
involving atomic operations and/or mutexes even if it's limited
to a few architectures/operating systems. Note that the shared_ctx
used for SSL actually does that.

  - you have a process that remains stuck for up to 100ms per process
in the update_shm_proxy() function! By this time it's not processing
any traffic and is waiting for nothing, you must never ever do such
a thing. Imagine if someone sends a signal even just once a minute
to collect stats, you'll have big holes in your traffic graphs!

What I'd like to have instead would be a per-proxy shared memory segment
for stats in addition to the per-process one, that is updated using
atomic operations each time other stats are updated. The max are a bit
tricky as you need to use a compare-and-swap operation but that's no big
deal. Please note that before doing this, it would be wise to move all
existing stats to a few dedicated structures so that in each proxy, server,
listener and so on we could simply have something like this :

struct proxy_stats *local;
struct proxy_stats *global;

As you guessed it local would be allocated in per process while global
would be shared between all of them.

Another benefit would be that we could improve the current sample fetch
functions which already look at some stats and use the global ones. That's
even more important for maxconn where it would *open* the possibility to
monitor the global connection count and not just the per-process one (but
there are other things to do prior to this being possible, such as
inter-process calls). However without inter-process calls we could decide
that we can slightly overbook the queue by up to one connection max per
process and that could be reasonably acceptable while waiting for a longterm
multi-threaded approach though.
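The compare-and-swap update for the max described above can be sketched with C11 atomics as follows (the struct and function names are illustrative, not existing haproxy code; the real shared segment would of course live in shared memory):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Hedged sketch of a shared stats structure updated lock-free by
 * several processes. */
struct proxy_stats {
	_Atomic uint64_t cum_conn;   /* cumulative connections: plain atomic add */
	_Atomic uint64_t max_conn;   /* peak concurrent connections: CAS loop    */
};

static void stats_account_conn(struct proxy_stats *st, uint64_t cur_conn)
{
	/* counters that only grow need a single atomic add */
	atomic_fetch_add(&st->cum_conn, 1);

	/* raise max_conn to cur_conn unless another process already
	 * stored a larger value meanwhile */
	uint64_t old = atomic_load(&st->max_conn);
	while (cur_conn > old &&
	       !atomic_compare_exchange_weak(&st->max_conn, &old, cur_conn))
		; /* 'old' is refreshed by the failed CAS; retry */
}
```

Because each process pushes its own current value through the CAS loop as it happens, the shared max is exact at all times, unlike a periodic aggregation of per-process maxima.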

Best regards,
Willy