Re: a question on load balancing algorithm

2014-09-03 Thread Baptiste
On Thu, Sep 4, 2014 at 2:25 AM, Steven Le Roux  wrote:
> Hi,
>
> You can either play with :
>
>  - balance url_param ld
>
>  - sticky table,  and then set the param you want to stick on :
>   stick on urlp(ld) table ...
>
>  - appsession ld len 3 timeout  request-learn mode query-string
>
> w/ something like :
>
> server server_A ... cookie lb=100 weight 10
> server server_A ...  weight 10
>
>
> But I think the simplest configuration for you here is to use
> multiple backends, if you can:
>
> frontend App
>   ...
>   acl url_ld100 url_sub ld=100
>   use_backend bk_ld100 if url_ld100
>   default_backend default
>
> backend bk_ld100
>   ...
>   balance roundrobin
>   server server_A  check
>   server server_B  check backup
>
>
> backend default
>   ...
>   balance roundrobin
>   server server_A  check
>   server server_B  check
>
>
> It means that when your frontend parses ld=100 in the URL query string,
> it matches the acl url_ld100 and then uses a dedicated backend: bk_ld100.
> In this backend, only server A serves requests, since server B is defined
> as a backup server. If A goes down, B will answer requests. That is why
> you need "check" on your servers.
>
> If ld=100 is not matched, your request will end up in the default backend,
> which will round-robin between A and B.
>
>
> Regards,
>
> On Wed, Sep 3, 2014 at 11:47 PM, S. Zhou  wrote:
>> We are thinking of the following LB algorithm but we are not sure if current
>> HAProxy supports it:
>>given a http request, LB should always forward it to a certain backend
>> server (say Server A) based on its http parameter (e.g. request with
>> parameter "Id=100" always go to server A). The only exception is: when the
>> designated server (e.g. Server A) is down, then the request should be
>> forwarded to another (fixed) server (e.g. Server B).
>>
>> Thanks
>>
>
>
>
> --
> Steven Le Roux
> Jabber-ID : ste...@jabber.fr
> 0x39494CCB 
> 2FF7 226B 552E 4709 03F0  6281 72D7 A010 3949 4CCB
>


Hi Guys,

You can do this directly in your backend using the use-server
directive and some ACLs such as urlp and srv_is_up.
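For illustration, a minimal sketch (backend name, server names and
addresses are made up, untested):

  backend bk_app
      acl ld100 urlp(ld) 100
      acl a_up srv_is_up(server_A)
      use-server server_A if ld100 a_up
      use-server server_B if ld100 !a_up
      server server_A 192.0.2.1:80 check
      server server_B 192.0.2.2:80 check

Requests matching a use-server rule bypass the load-balancing algorithm;
everything else is balanced as usual.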

Baptiste



Re: tcp-request content track-sc2 with if statement doesn't work?

2014-09-07 Thread Baptiste
On Sat, Sep 6, 2014 at 9:16 PM, PiBa-NL  wrote:
> Hi list,
>
> Inspired by a blog about wordpress bruteforce protection [0] , i'm trying to
> use this same kind of method in a frontend/backend configuration.
> I did change the method from POST to GET, for easier testing, but that
> doesn't matter for retrieving the gpc counter, does it?
>
> So i was trying to use this:
> tcp-request content track-sc1  base32+src  if METH_GET login
>
> It however doesn't seem to work using HAProxy 1.5.3, the acl containing
> "sc1_get_gpc0 gt 0" never seems to get the correct gpc0 value, even though i
> have examined the stick-table and the gpc0 value there is increasing.
> If i change it to the following it starts working:
> tcp-request content track-sc1  base32+src
>
> Even though the use_backend in both cases checks those first criteria:
> acl flagged_as_abuser  sc1_get_gpc0 gt 0
> use_backend  pb3_453_http  if METH_GET wp_login flagged_as_abuser
>
> Am i doing something wrong, is the blog outdated, or was a bug introduced
> somewhere?
>
> If more information perhaps -vv or full config is needed let me know,
> thanks for any reply.
>
> p.s. did anyone get my other emails a while back? [1]
>
> Kind regards,
> PiBa-NL
>
> [0]
> http://blog.haproxy.com/2013/04/26/wordpress-cms-brute-force-protection-with-haproxy/
> [1] http://marc.info/?l=haproxy&m=140821298806125&w=2
>


Hi,

Please let us know if you have the following configuration lines (or
equivalent) before your tracking rule:
  tcp-request inspect-delay 10s
  tcp-request content accept if HTTP

Baptiste



Re: tcp-request content track-sc2 with if statement doesn't work?

2014-09-07 Thread Baptiste
On Sun, Sep 7, 2014 at 2:55 PM, PiBa-NL  wrote:
> Hi Baptiste,
>
> Thanks that fixes my issue indeed with the following:
>   tcp-request inspect-delay 10s
>   tcp-request content track-sc1  base32+src  if METH_GET wp_login
>   tcp-request content accept if HTTP
>
> I didn't think about inspect-delay because both frontend and backend are
> using 'mode http', and i only used to use inspect-delay with frontends using
> tcp mode. Though maybe the 'tcp-request' prefix should have given me that
> hint. The 'accept' must be below the 'track-sc1' to make it work.
>
> Could you perhaps also add this to the blog article, or should I post a
> comment under it so other people don't fall into the same trap?
>
> Thanks,
> PiBa-NL
>
> Baptiste schreef op 7-9-2014 11:38:
>
>> On Sat, Sep 6, 2014 at 9:16 PM, PiBa-NL  wrote:
>>>
>>> Hi list,
>>>
>>> Inspired by a blog about wordpress bruteforce protection [0] , i'm trying
>>> to
>>> use this same kind of method in a frontend/backend configuration.
>>> I did change the method from POST to GET, for easier testing, but that
>>> doesn't matter for retrieving the gpc counter, does it?
>>>
>>> So i was trying to use this:
>>> tcp-request content track-sc1  base32+src  if METH_GET login
>>>
>>> It however doesn't seem to work using HAProxy 1.5.3, the acl containing
>>> "sc1_get_gpc0 gt 0" never seems to get the correct gpc0 value, even
>>> though i
>>> have examined the stick-table and the gpc0 value there is increasing.
>>> If i change it to the following it starts working:
>>> tcp-request content track-sc1  base32+src
>>>
>>> Even though the use_backend in both cases checks those first criteria:
>>> acl flagged_as_abuser  sc1_get_gpc0 gt 0
>>> use_backend  pb3_453_http  if METH_GET wp_login
>>> flagged_as_abuser
>>>
>>> Am i doing something wrong, is the blog outdated, or was a bug introduced
>>> somewhere?
>>>
>>> If more information perhaps -vv or full config is needed let me know,
>>> thanks for any reply.
>>>
>>> p.s. did anyone get my other emails a while back? [1]
>>>
>>> Kind regards,
>>> PiBa-NL
>>>
>>> [0]
>>>
>>> http://blog.haproxy.com/2013/04/26/wordpress-cms-brute-force-protection-with-haproxy/
>>> [1] http://marc.info/?l=haproxy&m=140821298806125&w=2
>>>
>>
>> Hi,
>>
>> Please let us know if you have the following configuration lines (or
>> equivalent) before your tracking rule:
>>tcp-request inspect-delay 10s
>>tcp-request content accept if HTTP
>>
>> Baptiste
>
>

Hi,

Article updated.

Baptiste



Re: HAProxy equivalent to ProxyPreserveHost off

2014-09-08 Thread Baptiste
On Sat, Sep 6, 2014 at 6:07 PM, Diana Hsu (ditsai)  wrote:
> Hi Willy,
>
> We are migrating applications from siteA to siteB, the technology used to
> run the applications is different in these 2 sites.
>
> All the applications in siteA share the same domain sitea.sample.com, but
> each application has its unique uri (ex:  /proxy1, /proxy2, /proxy3 ..).
> We want to migrate applications in siteA to start using new technology that
> is offered in siteB, however migrated applications need to keep existing
> domain name sitea.sample.com.
>
> siteB is managed by the different group and we have no privilege to change
> siteB's configuration.
> Each application provisioned in siteB is configured to have its own vanity
> name (ex:  abc.sample.com, xyz.sample.com, ..) which is the CNAME(alias) of
> siteb.sample.com.
> siteB's HAProxy uses Host header matching to route the requests to the
> proper backend application servers, for example:
>
> acl ABC hdr_dom(host) -i abc.sample.com
> use_backend SFARM-ABC if ABC
>
> How can we configure in siteA's HAProxy to meet above requirement?  Should
> we configure HAProxy as reverse-proxy in siteA?
>
>
> Thanks,
> Diana
>


Hi,

Please avoid HTML mails :)

It's up to you to map URI /proxy1 of site A to the CNAME of site B.
Then, in your configuration, you could do:
 acl sitea_ABC path_beg -i /abc
 acl siteb_ABC hdr_dom(host) -i abc.sample.com
 use_backend SFARM-ABC if sitea_ABC || siteb_ABC

Well, this is my understanding of your problem!

Baptiste



Re: Session stickiness on multi-process haproxy with ssl

2014-09-09 Thread Baptiste
On Tue, Sep 9, 2014 at 4:01 PM,   wrote:
> Hello,
>
> I have HAProxy 1.5.4 installed on Debian Wheezy x64. My configuration file
> is attached. I want session stickiness, so I use the appsession attribute,
> but I have a serious performance issue with SSL. Initially I didn't use the
> nbproc parameter, and haproxy could only serve 50 reqs/sec at 100% CPU,
> using only one core of an 8-core virtual machine. This is very low
> performance for my expectations, so I considered using nbproc=8, but then,
> as I have read, I can't have correct session stickiness.
> Is it expected that haproxy has initially (with 1 process) so low
> performance with ssl?
> Do I necessarily have to choose between performance and stickiness in my
> case, because I can't give up on either. Is there an alternative for
> session stickiness in multi-process haproxy?
>
> Kind regards,
> Evie


Hi Evie,

How big is your SSL key?
What type of web application are you load-balancing, and what type of
clients have access to your application?
Can you explain why you forced that cipher list?
(ssl-default-bind-ciphers)

Also, you're using httpclose mode; maybe using 'option http-keep-alive'
would help a bit.

Can you check whether your conntrack table is full? (using dmesg)

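For reference, a quick way to check (these sysctls assume the netfilter
conntrack module is loaded on your kernel):

  dmesg | grep conntrack
  sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max

If the count is close to the max, the kernel logs "nf_conntrack: table
full, dropping packet" and new connections are silently dropped.
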
You can also use log-format to log the TLS version, negotiated cipher and
SSL session ID.
If the SSL session ID changes all the time for a single user, it means
you're not resuming SSL sessions and are spending your time computing keys.

Baptiste



Re: Session stickiness on multi-process haproxy with ssl

2014-09-09 Thread Baptiste
On Tue, Sep 9, 2014 at 4:47 PM,   wrote:
>> On Tue, Sep 9, 2014 at 4:01 PM,   wrote:
>>> Hello,
>>>
>>> I have HAproxy 1.5.4 installed in Debian Wheezy x64. My configuration
>>> file
>>> is attached. I want session stickiness so i use appsession attribute but
>>> I
>>> have a serious performance issue with ssl. Initially I didn't use nbproc
>>> parameter and haproxy could only serve 50reqs/sec with 100% cpu using
>>> only
>>> one core in a 8-core virtual machine. This is very low performance for
>>> my
>>> expectations, so I considered using nbproc=8 but then, as I have read, I
>>> can't have correct session stickiness.
>>> Is it expected that haproxy has initially (with 1 process) so low
>>> performance with ssl?
>>> Do I necessarily have to choose between performance and stickiness in my
>>> case, because I can't give up on either. Is there an alternative for
>>> session stickiness in multi-process haproxy?
>>>
>>> Kind regards,
>>> Evie
>>
>>
>> Hi Evie,
>>
>> how big is your SSL key size???
>
> My key is 2048-bit.
>
>> What type of web application are you load-balancing and what type of
>> clients have access to your application?
>
> Apache2 webservers are used as backends that serve a django-based site
> with user authentication.
>
>> Can you explain us the reason of the cipher you forced?
>> (ssl-default-bind-ciphers)
>>
>> Also, you're using httpclose mode, maybe using http-keep-alive' would
>> help a bit.
>>
> I tested http-keep-alive and a simple cipher such as RC4-SHA suitable for
> my key but saw no difference.
>
>> can you check if your conntrack table is full? (using dmesg)
>>
>> you can also use log-format and log TLS version, negotiated cipher and
>> SSL session ID.
>> If SSL session ID change all the time for a single user, it means
>> you're not resuming SSL session and spend your time computing keys.
>>
> How can I check if ssl session id changes? Can I override this with a
> proxy config if it happens?
>
> Thanks
>

Please keep the ML in Cc :)

You can use the log-format directive below, in your frontend, to log
SSL-related information:
 log-format %ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\
%CC\ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\
{%sslv/%sslc/%[ssl_fc_sni]/%[ssl_fc_session_id]}\
"%[capture.req.method]\ %[capture.req.hdr(0)]%[capture.req.uri]\
HTTP/1.1"

then try to anonymize the logs and post some lines as an attachment.

Baptiste



Re: SSL timing information?

2014-09-09 Thread Baptiste
On Tue, Sep 9, 2014 at 11:37 PM, Shawn Heisey  wrote:
> On 9/3/2014 4:40 PM, Shawn Heisey wrote:
>> I am having some problems with SSL negotiation taking a really long
>> time.  There were 20 seconds between client hello and server hello on
>> one session noticed with a packet capture, 28 seconds on another.
>> Currently that connection is being handled by a load balancer based on
>> the LVS-NAT solution - the linux kernel.
>
> Did anyone have any ideas on this?  See the original message (2014/09/03
> at 22:40 UTC) for full details.  I'm having very long SSL negotiation
> with a load balancer other than haproxy, hoping haproxy will fix it, but
> the logging available won't tell me whether it's fixed or not.
>
> I am having a different problem specifically with haproxy that I will
> put in another email thread.
>
> Thanks,
> Shawn
>
>

Hi Shawn,

Please explain how your LB layers are architected.
Also, if you're able to easily reproduce the problem outside of
production, a tcpdump + strace of HAProxy may help.
Share them privately if you want.

Baptiste



Re: Session maintenance with weights

2014-09-11 Thread Baptiste
On Thu, Sep 11, 2014 at 4:56 AM, Prashanth Ganesh
 wrote:
> Hi
>
> I have a scenario where i have two tomcat servers A and B behind the
> haproxy, now one of the app servers have a new version of the war and the
> other tomcat has a old version of the war file.So at a point of time we will
> have only the server A active which has a set of users inside it , after a
> while we enable the server B now any new request that comes should go only
> to app server B and the old users should still be at app server A. How
> should this be done in haproxy , i have enabled session persistence using
> appsession . Is there a way that this could be achieved. I tried setting the
> weight as 0 to ther server which should not participate in loadbalancing and
> a higher weight to the other. As a result the session persistence does not
> get maintained and the sessions from the old app get shifted to the new one.
> Please could you help with this.
>
> --
> Regards
>
>
> Prashanth Ganesh
> Linux Server Administrator
> Unmetric, Chennai
>
>


Hi,

Without your configuration, it's hard to explain HAProxy's behavior.
That said, if you want to force everyone to fail over to a new server,
it's better to disable the old server (this can be done in the conf file
or on the stats socket).
Turning a server's weight to 0 will move only NEW users to the
other server: users with persistence information pointing to server A
will keep being directed to that server.
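For example, through the stats socket (this assumes a line such as
'stats socket /var/run/haproxy.sock level admin' in your global section;
backend and server names below are illustrative):

  echo "disable server bk_app/server_A" | socat stdio /var/run/haproxy.sock

In the configuration file, the equivalent is adding the 'disabled' keyword
to the server line.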

Baptiste



Re: Session stickiness on multi-process haproxy with ssl

2014-09-12 Thread Baptiste
> Ok, I can see it now, thanks. I will try to find out when my django app
> is actually using ssl_fc_session_id, but I still haven't understood how
> an empty ssl_fc_session_id is related to haproxy's low performance.
>
> Evie
>

hi Evie,

When no SSL session ID is sent, the server has to compute a new key,
whereas it can resume a previous session when the session ID is provided
by the client.
More info here:
http://blog.haproxy.com/2011/09/16/benchmarking_ssl_performance/

Keep in mind that on a single core of a modern CPU, you can compute
around 700 keys per second (with a 2048-bit private key), while you can
do 12000 TPS (TLS 1.2) when the SSL session ID is provided.

So SSL performance depends on your workload. If your service is a
webservice where clients connect and then disappear, you'll get a few
hundred req/s. If you host a website with many objects, you'll get many
thousands of req/s.

Baptiste



Re: Is it possible to query the status of a server and use it in an ACL?

2014-09-12 Thread Baptiste
On Thu, Sep 11, 2014 at 5:56 PM, Rainer Duffner  wrote:
> Hi,
>
> I want to take the status of a server of a given backend and use it in
> another backend or in the frontend.
> If that possible?
> I though there might be something simular to
> "nbsrv()" - but I haven't found anything.



Hi Rainer,

There is an acl called 'srv_is_up' which should do the trick for you.
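As an illustration (backend and server names here are hypothetical):

  frontend fe_main
      acl main_up srv_is_up(bk_main/server_A)
      use_backend bk_main if main_up
      use_backend bk_fallback if !main_up

srv_is_up accepts an optional backend/ prefix, so it can reference a
server declared in any backend, not just the current one.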

Baptiste



Re: read ACL to block ip's from file to prevent DDoS?

2014-09-15 Thread Baptiste
On Mon, Sep 15, 2014 at 9:08 PM, Marc Cortinas Val
 wrote:
> Hello,
>  First of all, congratulations: I think modifying an ACL at runtime,
> without reloading the whole daemon configuration, is a big HIT.
>  On the other hand, I applied an ipabuser ACL with a keymap, managing it
> with socat, and it works fine, but it is NOT permanent when the daemon
> is restarted.
>
>  Is there an option for this? I'm not sure - do you know of one?
>
>  Furthermore, I'm interested in dynamic ACLs. What are they? Could you
> explain more?
>
> Thanks in advance,
> Marc
>
>

Hi Marc,

I would recommend loading your IPs from a file.
Then, when you update HAProxy's running content using socat, simply
also append (or delete) the IP in the flat file.
That way, at the next reload, HAProxy will load the IPs and stay in sync.
Another way would be to dump the ACL content into the file from time to time.

Dynamic ACL is the ability to update ACL content using HAProxy's stats
socket, as you're currently doing!
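As a sketch (the file path and address below are illustrative):

  frontend fe_main
      acl abuser src -f /etc/haproxy/abusers.lst
      tcp-request connection reject if abuser

Then, to ban a new IP at runtime and keep it across reloads:

  echo "add acl /etc/haproxy/abusers.lst 192.0.2.10" | socat stdio /var/run/haproxy.sock
  echo "192.0.2.10" >> /etc/haproxy/abusers.lst

The first command updates the running process; the second makes the entry
survive the next reload.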

Baptiste



Re: MariaDB 5.5.39 + HAProxy v1.5.3

2014-09-16 Thread Baptiste
On Tue, Sep 16, 2014 at 3:52 PM, Hoggins!  wrote:
> Wow. First post, first mistake, I hit the button twice after having
> corrected.
> Sorry, *this* is the current config I have problems with.
>
> On 16/09/2014 15:48, Hoggins! wrote:
>> Hello list,
>>
>> This is my first post. I checked a bit online to find a solution to my
>> problem, but it seems that not many people are having the same issues.
>>
>> I use a very simple loadbalancing between two synchronized (Galera)
>> MariaDB nodes. Here is the HAProxy config file :
>>
>>
>>  SNIP ---
>> global
>>   log 127.0.0.1 local0 notice
>> user haproxy
>> group haproxy
>>
>> defaults
>> log global
>> retries 3
>> option dontlognull
>> option redispatch
>> maxconn 1024
>> timeout connect 5000
>> timeout server 5
>> timeout client 5
>>
>> listen mysql-cluster
>> bind 127.0.0.1:3306
>> mode tcp
>> option tcpka
>> option mysql-check user haproxy_check
>> balance roundrobin
>> server db-1 db-1.network.hoggins.fr:3306 check
>> server db-2 db-2.network.hoggins.fr:3306 check
>>
>> listen httpstats
>> bind 127.0.0.1:8080
>> mode http
>> stats enable
>> stats uri /
>>
>> ---
>>
>> When I connect to HAProxy as a MySQL host, and perform a simple request,
>> everything is fine :
>>
>> MariaDB [(none)]> show variables like '%wsrep_node_name%';
>> +-+---+
>> | Variable_name   | Value |
>> +-+---+
>> | wsrep_node_name | db1   |
>> +-+---+
>> 1 row in set (0.00 sec)
>>
>> If I issue the same command after a few seconds, here is what I get :
>>
>> MariaDB [(none)]> show variables like '%wsrep_node_name%';
>> ERROR 2006 (HY000): MySQL server has gone away
>> No connection. Trying to reconnect...
>> Connection id:743181
>> Current database: *** NONE ***
>>
>> +-+---+
>> | Variable_name   | Value |
>> +-+---+
>> | wsrep_node_name | db1   |
>> +-+---+
>> 1 row in set (0.00 sec)
>>
>> Does it have something to do with my MariaDB setup ? Connecting directly
>> to any of my nodes is normal, I can stay "connected" as long as I wish
>> without having the "MySQL server has gone away" message.
>>
>>
>>
>> If someone has a lead to a solution...
>>
>> Thanks in advance !
>>
>>
>>
>
>

Hi,

Are you able to reproduce the behavior in a test environment?
If so, please turn log level to 'info' then report here the logs of
the sequence you ran above.

Baptiste



Re: haproxy sending RSTs to backend-servers

2014-09-18 Thread Baptiste
On Thu, Sep 18, 2014 at 10:50 AM, Rainer Duffner  wrote:
> Hi,
>
> I've configured nginx+haproxy in front of a couple of IIS servers.
> NGINX terminates SSL.
>
> configuration is as following:
>
> global
>   log /var/run/log   local5
>   log /var/run/log   local1 notice
>   #log loghostlocal0 info
>   maxconn 4096
>   #debug
>   #quiet
>   user www
>   group www
>   daemon
>
> defaults
>   log global
>   modehttp
>   retries 2
>   timeout client 50s
>   timeout connect 5s
>   timeout server 50s
>   option dontlognull
>   option forwardfor
>   option httplog
>   option redispatch
>   balance  leastconn
>   http-check expect string server_up
>   http-check disable-on-404
>   default-server minconn 50 maxconn 100
>
> # Set up application listeners here.
>
> frontend app-main-prod
>   mode http
>   bind 0.0.0.0:8000
>   maxconn 2000
>   default_backend app-main-prod-back
>
> frontend app-import
>   mode http
>   bind 0.0.0.0:8001
>   maxconn 2000
>   default_backend app-import-back
>
> frontend app-images
>   mode http
>   bind 0.0.0.0:8002
>   maxconn 2000
>   default_backend app-images-back
>
>
> backend app-main-prod-back
>   balance leastconn
>   fullconn 2000
>   mode http
>   option httpchk GET /healthcheck.aspx HTTP/1.1\r\nHost:\
> www.app.ch\r\nConnection:\ close
>   cookie SERVERID insert indirect nocache
>   server appsrv-one  192.168.69.17:80 weight 1 maxconn 1000 check cookie s1
>   server appsrv-two  192.168.69.18:80 weight 1 maxconn 1000 check cookie s2
>
> backend app-import-back
>   balance leastconn
>   fullconn 2000
>   mode http
>   #option httpchk GET /healthcheck.aspx HTTP/1.1\r\nHost:\
> import.app.ch\r\nConnection:\ close
>   server appsrv-import-one 192.168.69.32:80 weight 1 maxconn 1000 check
>   #server appsrv-import-two 192.168.69.33:80 weight 1 maxconn 1000 check
>
> backend app-images-back
>   balance leastconn
>   fullconn 2000
>   mode http
>   option httpchk GET /healthcheck.aspx HTTP/1.1\r\nHost:\
> images.app.ch\r\nConnection:\ close
>   server appsrv-images-one 192.168.69.41:80 weight 1 maxconn 1000 check
>   #server appsrv-images-two 192.168.69.42:80 weight 1 maxconn 1000 check
>
>
> listen admin 0.0.0.0:22002
>   mode http
>   stats uri /
>
>
>
> What happens is that it mostly works, but in wireshark I see a lot
> of RSTs being sent from the haproxy server to the backend IIS servers.
> This doesn't make sense and is probably the reason I see so many 50x in
> the logs, and why gateway errors are occasionally shown to users
> because nginx can't find any live servers...
>
> Can anyone see any obvious error in the config?
>
>

Hi Rainer,

HAProxy uses RST to close connections on the server side to allow fast
reuse of the source port.
So this behavior is expected and normal.

That said, 50x errors are not normal...
Can you tell us what is generating those errors?
Can you share your HAProxy logs showing these errors?

Baptiste



Re: change the size of a stick-table at run time

2014-09-18 Thread Baptiste
On 18 Sep 2014 21:43, "Tobias Gunkel"  wrote:
>
> Hello,
>
> I want to change the size of a stick-table at run time, but I found no
suitable management command in the docs.
> If I change the size in the config file, the stick-table gets flushed at
reload, but I need to preserve its content.
>
> The only way I can think of at the moment is to dump the content manually
before reload (show table) and refill it immediately afterwards (set table).
> But maybe there is a more elegant way...?
>
>
> Best regards,
> Tobi

Hi tobi,

Do you have a 'peer' section in haproxy's configuration file?
Also, why don't you simply set up a stick table size big enough for your
needs?

Baptiste


Re: change the size of a stick-table at run time

2014-09-18 Thread Baptiste
On Thu, Sep 18, 2014 at 11:23 PM, Tobias Gunkel  wrote:
>
>
> Hi Baptiste,
>>
>> Do you have a 'peer' section in haproxy's configuration file?
>
> No. I thought this is for replicating stick-tables between remote servers,
> but I have only one server.
> But I'll have a look at the docs anyways, maybe this will fit my needs.
>
>> Also, why don't you simply set up a stick table size big enough for your
>> needs?
>
> I use stick-tables to limit access to a site based on the client ip address,
> independently from connection rate etc.
> So the size of the stick-table represents the pool of available slots to the
> site, which I eventually want to increase later at run time (if the server
> load allows it).
>
> -Tobi
>

Hi Tobias,

Please don't forget to Cc the ML, unless there is private information
in your email.
A 'peers' section is also used to synchronize data between HAProxy
processes when HAProxy is reloaded.
You should use it: create a 'peers' section and don't forget to reference
it with the 'peers' directive on the stick-table definition.
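A minimal sketch (names are illustrative; the local peer name must match
the host name, or the name passed with the -L command line option):

  peers mypeers
      peer lb1 127.0.0.1:1024

  backend bk_app
      stick-table type ip size 200k expire 30m peers mypeers
      stick on src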

Concerning the stick-table size, there are ACLs which allow you to
retrieve the number of entries in a table, so you can take an allow or
deny decision based on this.
That said, you may have to reload HAProxy to update the number of
entries the ACL should allow.
Not sure there is a simple solution there.
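Something along these lines, with a hypothetical limit of 1000 entries
(the table name is illustrative):

  acl pool_full table_cnt(bk_app) ge 1000
  http-request deny if pool_full

The limit being hardcoded in the ACL is precisely why a reload is needed
to change it.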

Baptiste



Re: use_backend map failing me

2014-09-19 Thread Baptiste
On Fri, Sep 19, 2014 at 3:09 PM, Klavs Klavsen  wrote:
> dooh.. point to correct file and things work.. :)
>

Hi,

I like your config :)

Baptiste



Re: Mix option httpchk and ssl-hello-chk

2014-09-22 Thread Baptiste
On Mon, Sep 22, 2014 at 3:33 PM, Kevin COUSIN  wrote:
> Hi list,
>
> Can I mix the option httpchk and ssl-hello-chk to check the health of an 
> HTTPS website ?
>
> Thanks a lot
>
> 
>
>Kevin C.
>

Hi Kevin,

No, you can't.

It would be easier to answer if you shared your backend configuration!
That said, you can have a look at the check-ssl option, which may help.
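For illustration (address, port and URI are placeholders), a check-ssl
based setup could look like:

  backend bk_https
      option httpchk GET /health HTTP/1.1\r\nHost:\ www.example.com
      server s1 192.0.2.20:443 check check-ssl

This runs the regular HTTP health check over an SSL connection (it
requires HAProxy built with OpenSSL), instead of the bare handshake done
by ssl-hello-chk.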

Baptiste



Re: shellshock and haproxy

2014-09-30 Thread Baptiste
On Mon, Sep 29, 2014 at 2:36 PM, Thomas Heil
 wrote:
> Hi,
>
> To mitigate the shellshock attack we added two lines in our frontends.
>
> --
> frontend fe_80
> --
> reqideny  ^[^:]+:\s*\(\s*\)
> reqideny  ^[^:]+:\s+.*?(<<[^<]+){5,}
> --
>
> and checked this via
>
> --
>  curl --referer "x() { :; }; ping 127.0.0.1" http://my-haproxy-url/
>  curl --referer "true < <http://my-haproxy-url/
> --
>
> Any hints or further sugestions?
>
> cheers
> thomas
>
>
>

Hi Thomas,

Thanks for the tips.
I blogged about it with some differences:
http://blog.haproxy.com/2014/09/30/mitigating-the-shellshock-vulnerability-with-haproxy/

Baptiste



Re: Server Sent Events on iOS

2014-09-30 Thread Baptiste
On Mon, Sep 29, 2014 at 9:15 PM, William Lewis  wrote:

> Hi all
>
> I have a problem with a website which uses Server-Sent Events where the
> long lived connection for the Server Events seems to be blocking other
> resources from loading on iOS clients only and only when I have haproxy
> between client and server.
>
> This is my test case.
>
>  * Create a node express app which serves a html page which subscribes to
> an EventSource and asynchronously adds 200 300x100px images to the DOM
>  * Node app is configured to serve resources with 500ms delay to reliably
> reproduce the problem
>  * Configure basic haproxy between node app and client
>  * Reset cache on iOS device and connect to server
>
> Expected result
>
>  * Client open 5 simultaneous http connections to the server
>  * 1 connection is blocked listening for events from the EventSource
>  * The remaining 4 connections are used to download the 200 images
>
> Actual Result
>
>  * Connection to EventSource is established and events start to be logged
> to the console
>  * Images start to download on the page
>  * Several of the images get blocked and never load
>
>
> Clearing the device cache and connecting directly to the server, all
> resources load, although the loading pattern of images is significantly
> different.
>
> If anyone has any ideas I would greatly appreciate any suggestions??
>
>
> Sources and config included below.
>
> ** index.html*
>
> 
> 
> 
> img {
> width: 30px;
> height: 10px;
> border-style: solid;
> border-color: black;
> border-width: 1px;
> }
> 
> 
> 
> 
> var source = new EventSource('/events');
>
> source.onmessage = function(e) {
> console.log(e.data);
> }
>
> var body = document.querySelectorAll('body');
> var createImage = function(i) {
> var element = document.createElement('img');
> element.src = '/' + i + '.png';
>
> body[0].appendChild(element);
> }
>
> window.setTimeout(function() {
> for (var i = 1; i < 200; i++) {
> createImage(i);
> }
> }, 1000);
> 
> 
> 
>
> ** app.js*
>
> var express = require('express');
> var app = express();
>
> app.get('/events', function(req, res) {
>
> // let request last as long as possible
> req.socket.setTimeout(Infinity);
>
> var messageCount = 0;
>
> res.writeHead(200, {
> 'Content-Type': 'text/event-stream',
> 'Cache-Control': 'no-cache',
> 'Connection': 'keep-alive'
> });
> res.write('\n');
>
> var timeout;
>
> var emitEvent = function() {
> res.write('id:' + ++messageCount + '\n');
> res.write('data:' + new Date().getTime() + '\n\n');
>
> timeout = setTimeout(emitEvent, 3000);
> }
>
> req.on("close", function() {
> clearTimeout(timeout);
> });
>
> emitEvent();
>
> });
>
> var staticHandler = express.static(__dirname + '/public');
>
> app.use(function serveStatic(req, res, next) {
> setTimeout(function() {
> staticHandler(req, res, next);
> }, 500);
> });
>
> var server = app.listen(3000, function() {
> console.log('Listening on port %d', server.address().port);
> });
>
>
> ** haproxy config*
>
> global
> daemon
> quiet
> maxconn 1024
> pidfile haproxy.pid
> log 127.0.0.1   local0
> log 127.0.0.1   local1 notice
>
> defaults
> log global
>
> balance roundrobin
> mode http
>
>
> frontend external
> bind :80
> default_backend test
>
> backend test
> server test localhost:3000
>
>
>
>
>

Hi William,

Could you please turn on 'option httplog' and provide us with the logs
reported by HAProxy?
Also, which version of HAProxy are you running?

Baptiste


Re: source based loadbalancing hash algorithm

2014-09-30 Thread Baptiste
On Thu, Sep 25, 2014 at 3:45 PM, Gerd Müller  wrote:
> Hi list,
>
> we want to stress test our system. We have 8 nodes behind the haproxy and 8
> servers in front to generate the requests. Since we are using source based
> loadbalancing, I would like to know how the hash is built so I can give the
> requesting servers the proper IPs.
>
> Thank you,
>
> Gerd

Hi Gerd,

What's your problem exactly?
What do you want to test: performance, the hash distribution, etc.?

Baptiste



Re: sending traffic to one backend server based on which another backend server sticky session

2014-09-30 Thread Baptiste
On Sat, Sep 27, 2014 at 1:33 AM, Joseph Hardeman  wrote:
> So I need to send a remote visitor to one specific server on another
> port/backend, based on the first backend server they logged in to. It's
> really the same server, just with different IPs.
>
> Is this possible?
>
> Joe
>

Hi Joseph,

This is possible with the dev version of HAProxy, using a common
stick table shared between your two farms.
Server order is very important: each server and its peer
must be declared in the same position in each farm.
That should do the trick.
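Roughly like this (addresses, ports and names are only a guess at your
setup, untested):

  backend bk_login
      stick-table type ip size 200k expire 1h
      stick on src
      server s1 192.0.2.30:8080 check
      server s2 192.0.2.31:8080 check

  backend bk_other_port
      stick on src table bk_login
      server s1 192.0.2.30:8081 check
      server s2 192.0.2.31:8081 check

The matching between farms relies on the server IDs, hence the
requirement that the servers be declared in the same order in both
backends.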

Baptiste



Re: Please remove me

2014-09-30 Thread Baptiste
Please send a mail to  haproxy+unsubscr...@formilux.org

Baptiste

On Tue, Sep 30, 2014 at 4:13 PM, Sparr  wrote:
> From this list



Re: shellshock and haproxy

2014-09-30 Thread Baptiste
I'm going to update the article as well :)

Baptiste



Re: Server Sent Events on iOS

2014-09-30 Thread Baptiste
William,

HAProxy says it was trying to establish a connection to the server.
(I have not yet taken a look at the pcap files.)

Please add some timeouts in your defaults section:
timeout connect 3s
timeout server 30s
timeout client 30s

then run the test again, and let us know the logs generated by HAProxy.

Baptiste


On Tue, Sep 30, 2014 at 2:14 PM, William Lewis  wrote:
> Hi Baptiste / Benjamin,
>
> I've attached haproxy log, and 3 pcap files
>
> The test with haproxy ended with me killing the node process, so the event 
> source request terminated and the hanging resource requests 503'd as shown at 
> the end of the log.
>
> Looking at the tcpdumps.
>
> 1. With haproxy
>
> * You can see that there are 6 concurrent http connections between iOS and 
> haproxy.
> * In the first connection stream you can see the initial document, followed 
> by the event stream
> * Then you can see the client has used http pipelining (pretty dumb considering
> the browser should know this connection is occupied) to send requests for
> /21.png /22.png /23.png (the hanging resources)
> * The first connection stream carried on responding with data from the event 
> source and the stuck resources are eventually 503'd when the node app is 
> killed
>
> 2. Without haproxy
>
> * This time there are 12 distinct http connections that have been made
> between iOS and node
> * Again in the first connection stream you see the initial document followed 
> by the event stream and the pipelined requests for the same resources that go 
> stuck above
> * However this time after the next event is emitted by the event stream, the 
> connection is terminated and carries on with a new connection
> * And you see this in the browser console, but the event stream carries on 
> seamlessly
>
>
>
>
> * The requests that were pipelined in that request get dealt with in other
> streams, e.g. /21.png is in stream 8
>
>
> I am by no means an expert at analysing tcpdumps or at how http pipelining
> is supposed to work, but it looks to me that without haproxy in the middle,
> node has managed to identify that requests are stuck in an http pipeline and
> reset the connection to allow the browser to continue. Is there any way to
> achieve the same with haproxy?
>
> On 30 Sep 2014, at 12:21, Benjamin Lee  wrote:
>
>> On 30 September 2014 18:54, Baptiste  wrote:
>>>
>>>
>>>
>>> On Mon, Sep 29, 2014 at 9:15 PM, William Lewis  wrote:
>>>>
>>>> Hi all
>>>>
>>>> I have a problem with a website which uses Server-Sent Events where the 
>>>> long lived connection for the Server Events seems to be blocking other 
>>>> resources from loading on iOS clients only and only when I have haproxy 
>>>> between client and server.
>>>>
>>>> This is my test case.
>>>>
>>>> * Create a node express app which serves a html page which subscribes to 
>>>> an EventSource and asynchronously adds 200 300x100px images to the DOM
>>>> * Node app is configured to serve resources with 500ms delay to reliably 
>>>> reproduce the problem
>>>> * Configure basic haproxy between node app and client
>>>> * Reset cache on iOS device and connect to server
>>>>
>>>> Expected result
>>>>
>>>> * Client open 5 simultaneous http connections to the server
>>>> * 1 connection is blocked listening for events from the EventSource
>>>> * The remaining 4 connections are used to download the 200 images
>>>>
>>>> Actual Result
>>>>
>>>> * Connection to EventSource is established and events start to be logged 
>>>> to the console
>>>> * Images start to download on the page
>>>> * Several of the images get blocked and never load
>>>>
>>>>
>>>> Clearing the device cache and connecting directly to the server, all 
>>>> resources load, although the loading pattern of images is significantly 
>>>> different.
>>>>
>>>> If anyone has any ideas I would greatly appreciate any suggestions??
>>>>
>>>>
>>>> Sources and config included below.
>>>>
>>>> * index.html
>>>>
>>>> 
>>>> 
>>>>
>>>>img {
>>>>width: 30px;
>>>>height: 10px;
>>>>border-style: solid;
>>>>border-color: black;
>>>>border-width: 1px;
>>>>}

Re: Forcing an HTTP close in certain cases

2014-10-01 Thread Baptiste
On Wed, Oct 1, 2014 at 2:07 PM, David Pollak
 wrote:
> Howdy,
>
> I'm using HAProxy to choose among a series of dynamically allocated HTTP
> backends. Basically, a user goes to URL A and clicks on the "start my
> service" link. A new browser window/tab is popped up and they get the new
> service/URL in the tab.
>
> Basically, got to /service click on a link, get a new browser window at
> /special/x where the  piece is routed to the dynamically created
> service for that user.
>
> On the back end, the service is created in my cluster and I update
> haproxy.cfg and do a "service haproxy reload".
>
> The issue I seem to be facing is that the browser has a keep-alive'd
> connection to my server so the http request goes to the old HAProxy
> instance.
>
> Is there a way to selectively force close the keep-alive for just the
> browser that connects to the /special/x URL? Or maybe insert an
> intermediate redirect URL that forces the close so the browser is forced to
> re-establish a connection to the new HAProxy instance?
>
> Thanks,
>
> David
>

Hi David,

I'm the person behind @haproxy_tech twitter account :)

Could you please post your (anonymized) configuration and tell us
which version of HAProxy you're using?
With this information, I'll be able to understand what happens
exactly and give you an accurate response.

Baptiste



Re: maxconn question

2014-10-02 Thread Baptiste
Hi Lukas, Diana,

> - haproxy is globally limited to 10240 connections

Actually, HAProxy is limited to 10240 "incoming" connections.
From the documentation: "Proxies will stop accepting connections when
this limit is reached."

Baptiste



Re: Forcing an HTTP close in certain cases

2014-10-02 Thread Baptiste
On Wed, Oct 1, 2014 at 9:10 PM, David Pollak
 wrote:
> Baptiste,
>
> Thanks for your help. I'm using HAProxy 1.5.4 and here's my config:
>
> global
> log /dev/log local0
> log /dev/log local1 notice
> chroot /var/lib/haproxy
> stats socket /run/haproxy/admin.sock mode 660 level admin
> stats timeout 30s
> user haproxy
> group haproxy
> daemon
>
> defaults
> log global
> mode http
> option forwardfor
> option http-server-close
> # option forceclose
> option httplog
> option dontlognull
> timeout connect 50
> timeout client  500
> timeout server  500
> errorfile 400 /etc/haproxy/errors/400.http
> errorfile 403 /etc/haproxy/errors/403.http
> errorfile 408 /etc/haproxy/errors/408.http
> errorfile 500 /etc/haproxy/errors/500.http
> errorfile 502 /etc/haproxy/errors/502.http
> errorfile 503 /etc/haproxy/errors/503.http
> errorfile 504 /etc/haproxy/errors/504.http
>
>
> frontend www-http
>bind *:80
>use_backend Jetty-close if { path_beg /redirect }
>default_backend Jetty
>
>
> backend Jetty
> server jetty localhost:8080
>
> backend Jetty-close
> option forceclose
> server jetty localhost:8080
>

Hi David,

Your configuration looks correct.
If it doesn't work, it means you're hitting this bug:
http://git.haproxy.org/?p=haproxy-1.5.git;a=commit;h=2e47a3ab11188239abadb6bba7bd901d764aa4fb
Please give the latest 1.5 git version of HAProxy a try; it should work.

Baptiste



Re: connection resets during transfers

2014-10-08 Thread Baptiste
On Wed, Oct 8, 2014 at 12:51 PM, Glenn Elliott
 wrote:
>
> Hi All,
>
>
>
> I am in the process of migrating from ultramonkey (lvs & heartbeat) to 
> haproxy 1.5.4 for our environment. I have been really impressed with haproxy 
> so far particularly the ssl offload feature and the Layer 7 flexibility for 
> our jboss apps.
>
>
>
> One of the VIPS that I have moved to haproxy is our exchange 2013 environment 
> which is running in tcp mode (expecting approx 1500 concurrent connections on 
> this VIP). I don't have any application/user issues yet but I wanted to get a 
> handle on the haproxy stats page and particularly the 'resp errors' on the 
> backend servers. The total 'resp error' count for the backend is 249 but when 
> I hover over the cell it tells me 'connection resets during transfer 314 
> client, 597 server'. This doesn't seem to add up?
>
>
>
> I assume this counter is accumulative?
>
>
>
> As a rule of thumb what sort of percentage would I be concerned with when 
> looking at this figure?
>
>
>
>
>
>
>
>
> My config snippets are:
>
>
>
> defaults
>
> log global
>
> mode http
>
> option  tcplog
>
> option  dontlognull
>
> option  redispatch
>
> retries 3
>
> timeout http-request15s
>
> timeout queue   30s
>
> timeout connect 5s
>
> timeout client  5m
>
> timeout server  5m
>
> timeout http-keep-alive 1s
>
> timeout check   10s
>
> timeout tarpit  1m
>
> backlog 1
>
> maxconn 2000
>
>
>
>
>
> #-
>
> # exchange vip
>
> #-
>
> frontend  exchange
>
> bind 192.168.1.172:443
>
> bind 192.168.1.172:25
>
> bind 192.168.1.172:80
>
> bind 192.168.1.172:587
>
> bind 192.168.1.172:995
>
> mode tcp
>
> maxconn 1
>
>
>
> default_backend exchange-backend
>
>
>
> #-
>
> # exchange backend
>
> #-
>
> backend exchange-backend
>
> mode tcp
>
> option ssl-hello-chk
>
> balance roundrobin
>
> server  exch01 exch01 maxconn 5000 check port 443 inter 15s
>
> server  exch02 exch02 maxconn 5000 check port 443 inter 15s
>
> server  exch03 exch03 maxconn 5000 check port 443 inter 15s
>
> server  exch04 exch04 maxconn 5000 check port 443 inter 15s
>
>
>
>
>
> Thanks very much for your time!
>
>
>
> Rgds,
>
>
>
> Glenn Elliott.
>
>
> __
> For the purposes of protecting the integrity and security of the SVHA network 
> and the information held on it, all emails to and from any email address on 
> the "svha.org.au" domain (or any other domain of St Vincent's Health 
> Australia Limited or any of its related bodies corporate) (an "SVHA Email 
> Address") will pass through and be scanned by the Symantec.cloud anti virus 
> and anti spam filter service. These services may be provided by Symantec from 
> locations outside of Australia and, if so, this will involve any email you 
> send to or receive from an SVHA Email Address being sent to and scanned in 
> those locations.



Hi Glenn,

It means either the client or the server purposely closed the
connection (with an RST) during the DATA phase (after the handshake,
since you're in TCP mode).
Have a look at your logs and search for the 'SD' or 'CD' termination
flags to find out on which side the problem occurred.

If you want or need to dig further, you may have to enrich the
generated log line, or split your configuration into one
frontend/backend pair per service.
That way, you'll know on which TCP port (hence which service) those
errors are generated.
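For instance, a per-service split could look like this (ports taken from the posted config; frontend/backend names are assumptions):

```haproxy
# Sketch: one frontend/backend pair per Exchange service,
# so each gets its own stats row and error counters.
frontend ft_exchange_smtp
    bind 192.168.1.172:25
    mode tcp
    default_backend bk_exchange_smtp

frontend ft_exchange_https
    bind 192.168.1.172:443
    mode tcp
    default_backend bk_exchange_https
```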

Note that you can get some configuration templates for HAProxy and
Exchange 2013 from our appliance documentation:
http://haproxy.com/static/media/uploads/eng/resources/aloha_load_balancer_appnotes_0065_exchange_2013_deployment_guide_en.pdf

Baptiste



Re: SNI in logs

2014-10-12 Thread Baptiste
On Fri, Oct 10, 2014 at 5:54 AM, Eugene Istomin  wrote:
> Hello,
>
>
>
> can we log SNI headers (req_ssl_sni) or generally, SNI availability
> (ssl_fc_has_sni) the same way we log SSL version (%sslv)?
>
> ---
>
> Best regards,
>
> Eugene Istomin
>
>

Hi Eugene,

You can log sni information using the following sample fetch on a
log-format directive: %[ssl_fc_sni]
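As an illustrative sketch (the frontend name, certificate path, and the rest of the format string are assumptions), it could look like:

```haproxy
frontend ft_https
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    mode http
    # append the SNI value (empty when the client sent none)
    # to a custom log line
    log-format "%ci:%cp [%t] %ft %b/%s %ST %B %sslv %[ssl_fc_sni]"
    default_backend bk_web
```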

Baptiste



Re: HAProxy in TCP mode, but with L7 checks

2014-10-12 Thread Baptiste
On Sun, Oct 12, 2014 at 1:34 PM, Hoggins!  wrote:
> Hello list,
>
> This must be a stupid question, but I'm still wondering, because this
> would help me : I would like to perform some load-balancing between two
> HTTP / HTTPS backends. The HTTP operations do not pose a problem, and
> it's actually working absolutely fine, based on L7 checks (specific web
> page that returns OK when all the applicative checks are performed).
>
> Because the underneath application often switches from HTTP to HTTPS, I
> couldn't find a better way to balance it than to use TCP load-balancing
> to achieve this : the HTTP / HTTPS switch is handled by the application
> itself.
>
> Also, I use some websockets that I would like to load-balance.
>
> Anyway, here is the question : for my TCP mode sections, I would like to
> know if it's possible for HAProxy to take decisions based on L7 tests. I
> hope my question is clear, I'm fairly new to this and it might be a very
> fuzzy setup for an expert point of view.
>
> Thanks for your help.
>


Hi Hoggins,

Just use 'option httpchk' on your TCP backend.
Then you have two options:
- tell HAProxy that the check itself should be encrypted: see 'check-ssl'
- tell HAProxy to run the check against an unencrypted port: see 'port'
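A minimal sketch of both options (addresses, ports and the check URI are assumptions):

```haproxy
backend bk_app
    mode tcp
    option httpchk GET /health
    # option 1: send the HTTP check over TLS to the HTTPS port
    server app1 192.0.2.10:443 check check-ssl
    # option 2: send the HTTP check in clear text to a dedicated port
    server app2 192.0.2.11:443 check port 8080
```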

Baptiste



Re: sharepoint 2013

2014-10-13 Thread Baptiste
Hi,

This is basically an NTLM question.
So HAProxy must be run in either tunnel mode (1.4 and 1.5) or
http-keep-alive mode (1.5 only).

Baptiste


On Mon, Oct 13, 2014 at 4:43 PM, Nicolas ZEDDE
 wrote:
> Hi,
>
> I faced your problem and had to remove the http-close / http-server-close
> option in the backend section for the Sharepoint sites.
>
> Hope this helps.
>
> Regards,
>
> Nicolas ZEDDE
>
> From: Richard Bassler [mailto:richard.bass...@rsli.com]
> Sent: Monday, October 13, 2014 4:35 PM
> To: haproxy@formilux.org
> Subject: sharepoint 2013
>
>
>
> I have a working sharepoint 2013 installation using windows integrated
> authentication.
>
>
>
> I am attempting to put haproxy in front of the web server farms. When I do,
> I am not getting proper authentication with the windows integrated
> authentication. Has anyone successfully used haproxy with windows integrated
> authentication and sharepoint?
>
>
>
> As a note, I have other haproxy installation working with no problem on
> anonymous sites.
>
>
>
> Thanks.
>
>
>
> "CONFIDENTIALITY AND PROPRIETARY INFORMATION NOTICE: This email, including
> attachments, is covered by the Electronic Communications Privacy Act (18
> U.S.C. 2510-2521) and contains confidential information belonging to the
> sender which may be legally privileged. The information is intended only for
> the use of the individual or entity to which it is addressed. If you are not
> the intended recipient, you are hereby notified that any disclosure,
> copying, distribution or the taking of any action in reliance of the
> contents of this information is strictly prohibited. If you have received
> this electronic transmission in error, please immediately notify the sender
> by return e-mail and delete this message from your computer or arrange for
> the return of any transmitted information."



Re: active/passive stick-table not sticky

2014-10-13 Thread Baptiste
On Sun, Oct 12, 2014 at 6:47 PM, Benjamin Vetter  wrote:
> Hi,
>
> i'm using the example from
> http://blog.haproxy.com/2014/01/17/emulating-activepassing-application-clustering-with-haproxy/
> with haproxy 1.5.4 for a 3 node mysql+galera setup to implement
> active/passive'ness.
>
> global
>   log 127.0.0.1 local0
>   log 127.0.0.1 local1 notice
>   maxconn 8192
>   uid 99
>   gid 99
>   debug
>   stats socket /tmp/haproxy
>
> defaults
>   log global
>   mode http
>   option tcplog
>   option dontlognull
>   retries 3
>   maxconn 8192
>   timeout connect 5000
>   timeout client 30
>   timeout server 30
>
> listen mysql-active-passive 0.0.0.0:3309
>   stick-table type ip size 1
>   stick on dst
>   mode tcp
>   balance roundrobin
>   option httpchk
>   server db01 192.168.0.11:3306 check port 9200 inter 12000 rise 3 fall 3
> on-marked-down shutdown-sessions
>   server db02 192.168.0.12:3306 check port 9200 inter 12000 rise 3 fall 3
> on-marked-down shutdown-sessions backup
>   server db03 192.168.0.13:3306 check port 9200 inter 12000 rise 3 fall 3
> on-marked-down shutdown-sessions backup
>
> I tested the stickyness via this tiny ruby script, which simply connects and
> asks the node for its stored ip address:
>
> require "mysql2"
>
> loop do
>   begin
> mysql2 = Mysql2::Client.new(:port => 3309, :host => "192.168.0.10",
> :username => "username")
> puts mysql2.query("show variables like '%wsrep_sst_rec%'").to_a
> mysql2.close
>   rescue
> # Nothing
>   end
> end
>
> First, everything's fine. On first run, stick-table gets updated:
>
> # table: mysql-active-passive, type: ip, size:1, used:1
> 0x1c90224: key=192.168.0.10 use=0 exp=0 server_id=1
>
> Then i shutdown 192.168.0.11. Again, everything's fine, as the stick table
> gets updated to:
>
> # table: mysql-active-passive, type: ip, size:1, used:1
> 0x1c90224: key=192.168.0.10 use=0 exp=0 server_id=2
>
> and all connections now go to db02.
>
> Then i restart/repair 192.168.0.11, the stick table stays as is (fine), such
> that all connections should still go to db02.
> However, the output of my script now starts to say:
>
> ...
> {"Variable_name"=>"wsrep_sst_receive_address", "Value"=>"192.168.0.11"}
> {"Variable_name"=>"wsrep_sst_receive_address", "Value"=>"192.168.0.11"}
> {"Variable_name"=>"wsrep_sst_receive_address", "Value"=>"192.168.0.11"}
> {"Variable_name"=>"wsrep_sst_receive_address", "Value"=>"192.168.0.11"}
> {"Variable_name"=>"wsrep_sst_receive_address", "Value"=>"192.168.0.11"}
> {"Variable_name"=>"wsrep_sst_receive_address", "Value"=>"192.168.0.11"}
> {"Variable_name"=>"wsrep_sst_receive_address", "Value"=>"192.168.0.11"}
> {"Variable_name"=>"wsrep_sst_receive_address", "Value"=>"192.168.0.12"}
> {"Variable_name"=>"wsrep_sst_receive_address", "Value"=>"192.168.0.11"}
> {"Variable_name"=>"wsrep_sst_receive_address", "Value"=>"192.168.0.11"}
> {"Variable_name"=>"wsrep_sst_receive_address", "Value"=>"192.168.0.11"}
> {"Variable_name"=>"wsrep_sst_receive_address", "Value"=>"192.168.0.11"}
> {"Variable_name"=>"wsrep_sst_receive_address", "Value"=>"192.168.0.11"}
> {"Variable_name"=>"wsrep_sst_receive_address", "Value"=>"192.168.0.12"}
> {"Variable_name"=>"wsrep_sst_receive_address", "Value"=>"192.168.0.11"}
> {"Variable_name"=>"wsrep_sst_receive_address", "Value"=>"192.168.0.11"}
> {"Variable_name"=>"wsrep_sst_receive_address", "Value"=>"192.168.0.11"}
> {"Variable_name"=>"wsrep_sst_receive_address", "Value"=>"192.168.0.12"}
> ...
>
> such that sometimes the connection goes to db01 and sometimes to db02.
> Do you know what the problem is?
>
> Thanks
>   Benjamin
>
>


Hi Benjamin,

Could you remove the 'backup' keyword from your server lines and run
the same test?

Baptiste



Re: sharepoint 2013

2014-10-14 Thread Baptiste
On Mon, Oct 13, 2014 at 7:20 PM, Richard Bassler
 wrote:
> I had removed the http-close option and I was using tunneling. I had "check"
> set but I had no page that was anonymous.
>
> I removed the "check" option in the backend and the server no longer failed
> on authentication.
> I need to make a special anonymous page to check health or figure out how to
> add authentication into the backend "check".
>

No, the dirty trick consists in accepting a 401 answer as valid, using
the 'http-check expect' directive.
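A sketch of that trick (server address and check URI are assumptions):

```haproxy
backend bk_sharepoint
    mode http
    option httpchk GET /
    # NTLM-protected sites answer 401 to an unauthenticated probe;
    # accept that response as proof the server is alive
    http-check expect status 401
    server sp1 192.0.2.21:80 check
```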

Baptiste



Re: Issues with HTTP CONNECT proxying

2014-10-15 Thread Baptiste
On Wed, Oct 15, 2014 at 8:57 AM, Jason J. W. Williams
 wrote:
> Are there any known issues with using HAProxy to load balance forward
> proxies? I'm seeing an issue where when I put HAProxy in front of the
> forward proxies, the connection just hangs after the forward proxy
> replies "200 Connection Established".
>
> All other HTTP methods work fine. And if I connect directly from a
> browser like Firefox to the forward proxies, HTTP CONNECT works fine.
>
> Is there something HAProxy is expecting besides the 200 Connection 
> Established?
>
> Thank you in advance.
>
> -J
>

There is not enough information here to help you.
Can you post your HAProxy logs and your configuration as well?

Can you also give this option a try in your frontend section:
option http-use-proxy-header

Baptiste



Re: Just had a thought about the poodle issue....

2014-10-20 Thread Baptiste
> Is something like this also possible with SNI or strict-SNI enabled? I would
> like to issue a message when a browser doesn't support SNI.
>
> Sander
>

Hi Sander,

Yes, you can.
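One hedged sketch of how this could be done with the ssl_fc_has_sni fetch (backend names are assumptions; note that with 'strict-sni' the handshake of a non-SNI client is rejected before any HTTP message can be served, so a default certificate is needed):

```haproxy
frontend ft_https
    # a default certificate lets non-SNI clients complete the handshake
    bind :443 ssl crt /etc/haproxy/certs/
    mode http
    acl client_sent_sni ssl_fc_has_sni
    # route clients without SNI support to a backend serving a notice page
    use_backend bk_no_sni_notice unless client_sent_sni
    default_backend bk_web
```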

Baptiste



Re: multiple installations on the same macine

2014-10-29 Thread Baptiste
On Tue, Oct 28, 2014 at 8:13 PM, Lukas Tribus  wrote:
>> One reason for that would be to separate the flow and configuration of
>> different systems.
>> If i use the same installation for multiple systems (propelled by
>> different teams and agendas)
>> then each time the config file is touched, all systems are likely to be
>> affected and hence the changes would need to be tested against all
>> their requirements.
>> However if i use the same server but different instances, my changes to
>> the configuration would be impacting only the corresponding system.
>
> If thats the issue then I would suggest to install it to different boxes or 
> VMs.
>
> Otherwise, HAproxy can be started or installed multiple times without
> any problem, but you will have to adjust configurations, init-scripts, etc, 
> for
> example to use unique PID files.
>
> You probably also want different chroot paths.
>
>
> Lukas
>
>

Note that if you do so, HAProxy can collect information from
environment variables.
This may help :)

Baptiste



Re: the order of evaluation of acl's

2014-10-29 Thread Baptiste
On Tue, Oct 28, 2014 at 5:42 PM, Conrad Hoffmann  wrote:
> Hi,
>
> On 10/24/2014 02:12 PM, jeff saremi wrote:
>> What is the order of evaluation of 'and's and 'or's in a use_backend clause?
>>
>> This is what the docs say:
>>  [!]acl1 [!]acl2 ... [!]acln  { or [!]acl1 [!]acl2 ... [!]acln } ...
>>
>> and apparently i cannot use paranthesis to group them. However i need to 
>> write something like the following:
>> use_backend some_backend if ( ( acl1 acl2) or (acl3 acl4) ) or acl5
>
> Why not just break it down into several lines:
>
> use_backend some_backend if acl1 acl2
> use_backend some_backend if acl3 acl4
> use_backend some_backend if acl5
>
> Especially if you care about the order of execution, this concern is
> much more explicitly expressed this way.
>
> Regards,
> Conrad
> --
> Conrad Hoffmann
> Traffic Engineer
>
> SoundCloud Ltd. | Rheinsberger Str. 76/77, 10115 Berlin, Germany
>
> Managing Director: Alexander Ljung | Incorporated in England & Wales
> with Company No. 6343600 | Local Branch Office | AG Charlottenburg |
> HRB 110657B
>

I agree with Conrad.
Just adding a piece of information here:
HAProxy evaluates the use_backend rules in the order they are written,
so the first matching rule wins.

Baptiste



Re: Running multiple haproxy instances to use multiple cores efficiently

2014-10-29 Thread Baptiste
On Mon, Oct 27, 2014 at 7:41 PM, Chris Allen  wrote:
> We're running haproxy on a 2x4 core Intel E5-2609 box. At present haproxy is
> running on
> a single core and saturating that core at about 15,000 requests per second.
>
> Our application has four distinct front-ends (listening on four separate
> ports) so it would be
> very easy for us to run four haproxy instances, each handling one of the
> four front-ends.
>
> This should then allow us to use four of our eight cores. However we won't
> be able to tie hardware
> interrupts to any particular core.
>
> Is this arrangement likely to give us a significant performance boost? Or
> are we heading for trouble because
> we can't tie interrupts to any particular core?
>
> Any advice would be much appreciated. Many thanks,
>
> Chris.
>
>

Hi Chris,

You can use nbproc, cpu-map and bind-process keywords to startup
multiple processes and bind frontends and backends to multiple CPU
cores.
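A hedged sketch for the setup described (four frontends on four ports; names, ports and core numbers are assumptions):

```haproxy
global
    nbproc 4
    # pin each process to its own CPU core
    cpu-map 1 0
    cpu-map 2 1
    cpu-map 3 2
    cpu-map 4 3

frontend ft_app1
    bind :8001
    bind-process 1
    default_backend bk_app1

frontend ft_app2
    bind :8002
    bind-process 2
    default_backend bk_app2
```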

Baptiste



Re: Running multiple haproxy instances to use multiple cores efficiently

2014-10-29 Thread Baptiste
> If a backend is used only by 1 FE and that FE is bound to a certain CPU(s),
> do we still need to bind the backend to the same CPU(s) set ?
>
>
> Cheers,
> Pavlos

Yes, this is a requirement, but HAProxy will perform the binding automatically.

Baptiste



Re: Wrestling with rewrites

2014-10-29 Thread Baptiste
On Wed, Oct 29, 2014 at 11:07 AM, M. Lebbink  wrote:
> Hi list,
>
> I've been wrestling with rewrite rules withing haproxy and httpd, but I
> can't find the docs I would like to read.
>
> I keep reading examples with all sorts of rules containing hdr_dom(host) &
> hdr_beg(host). But I can't find
> any description of what is actually contained in these values.
>
> Does anyone have a link or pdf listing and describing these headers?
>
> All I want to do is check if there is a specific string in the URL for one
> of the backends and if it is present, rewrite
> that rule by replacing parts of it to create something the httpd server will
> actually understand.
>
>
> Michiel
>


Hi,

What you want to do is a reqirep conditioned by an ACL on the path.
The doc:
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#reqirep

The example:
  reqirep ^([^\ :]*)\ /static/(.*) \1\ /\2 if { path_beg /static/ }

The doc about path acl:
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#path

other ACLs are defined in the same chapter.

Baptiste



Re: Wrestling with rewrites

2014-10-29 Thread Baptiste
On Wed, Oct 29, 2014 at 4:02 PM, M. Lebbink  wrote:
> Hi Baptiste,
>
> Thank you for your response, it helped somewhat. but I'm starting to
> think I'm to stupid for this
>
> My issue
> I have multiple websites on multiple servers running behind 2 haproxy
> servers.
>
> One of the websites servers photo's using fotoplayer. In order for people to
> link to photo's I get
> the following request:
>sub.domain.com/?folder=Harmelen%2F2014-09-27%2F&file=_DSC0001.jpg
>
> Using https rewrite rules:
>RewriteCond %{QUERY_STRING} ^(.*)?folder=(.*)&file=(.*).jpg(.*)$
>RewriteRule / /%1%2slides/%3.jpg
>
> I can rebuild this to
>sub.domain.com/Harmelen%2F2014-09-27/slides/_DSC0001.jpg
>
> But my webserver is not understanding the %2F and issues a html 404.
> Resubmitting the same request directly onto the
> server will produce the requested photo (tried playing with
> AllowEncodedSlashes but that does not help).
>
> So, I thought, let HAproxy 1.5.x do the url rebuild I have tried all
> sorts of combinations and itterations of this:
>acl has_jpg path_sub jpg
>reqirep ^([^\ ]*)\ /?folder=(.*)&file=(.*).jpg(.*) \1\2/slides/\3.jpg if
> has_jpg
>
> But sofar, no dice
>
> Any hint's or tips on getting this to actually work?

Well, your rewrite rule breaks the HTTP protocol :)
You may want to try:
  acl has_jpg path_sub jpg
  reqirep ^([^\ ]*)\ /?folder=(.*)&file=(.*).jpg(.*)
\1\2/slides/\3.jpg\4 if has_jpg

=> you're missing the \4, which tells your rule to copy the data after
the file extension. It should be something like " HTTP/1.1".

Now that the HTTP protocol is fixed in your reqirep rule, you should
be able to test it and update it accordingly.
You should also update your acl, because yours matches on the path,
which obviously is '/' in your case.
You want to match anywhere in the URL, I guess, so give url_sub a try.

Baptiste



Re: Check cookie with backend1 before forwarding to backend2

2014-10-29 Thread Baptiste
On Tue, Oct 28, 2014 at 2:23 PM, Julian Pawlowski  wrote:
> On 28.10.2014, at 10:13, Julian Pawlowski  wrote:
>> I was wondering if there is a way to have HAproxy check for an existing 
>> Cookie the client sends and have it verify with a specific backend (say 
>> backend1). Based on that backends HTTP error code (e.g. 200 or 403), HAproxy 
>> should allow forwarding to backend2. Of course this would need to be checked 
>> for every request but as this is not a high traffic site that wouldn't be an 
>> issue.
>
> Okay I guess I made some progress. Maybe that helps for somebody else to give 
> me a helping hand in completing this.
>
> My primary backend application now once sends a customized header like these 
> after the user was successfully authorized:
>
> X-APPNAME-AllowUser: APPSESSION=lkjhgsadkfjhsadjfhg
> X-APPNAME-Validity: 
> Location: /backend2
>
> I think I can now add some ACLs in the HAproxy configuration of my primary 
> backend:
>
> acl allowAPPNAMEUserres.hdr(X-APPNAME-AllowUser) -m found
> acl disallowAPPNAMEUser res.hdr(X-APPNAME-DisallowUser) -m found
> http-response set-map(/var/lib/haproxy/appname_user_sessions.lst) 
> %[res.hdr(X-APPNAME-AllowUser)] %[res.hdr(X-APPNAME-Validity)] if 
> allowAPPNAMEUser
> http-response del-map(/var/lib/haproxy/appname_user_sessions.lst) 
> %[res.hdr(X-APPNAME-DisallowUser)] if disallowAPPNAMEUser
>
> I tried these but don't know if they are actually working cause I wasn't able 
> to get anything about it from the logfiles.
> Also the map files are not written, even though I created empty files and 
> ensured r/w access for the HAproxy daemon user.
>
> For /backend2, I think I might just need to add an ACL to my frontend similar 
> to this but I'm not sure:
> use_backend bk_backend2 if { 
> hdr_sub(cookie),map_str(/var/lib/haproxy/appname_user_sessions.lst) -m found }
>
> About session expiration: I think I cannot have HAproxy make any cleanups on 
> it's own beside using the info should a user explicitly use the logout 
> function via the primary backend (X-APPNAME-DisallowUser ...). The plan is to 
> have a cronjob running cleaning the appname_user_sessions.lst based on the 
> second column I added from X-APPNAME-Validity header.
>
> However, I'm still stuck into this somehow.
> Any help would be very much appreciated (it's for an OpenSource project if 
> that counts).
>
>
> Many thanks in advance.
>
> Julian


Hi Julian,

This is doable with HAProxy 1.6-dev.
You have to store the cookie in a stick table when the server
generates it, then match the client's cookie against the same table on
subsequent requests.

Baptiste



Webinar about HAProxy and SSL

2014-10-29 Thread Baptiste
Hey guys,

Just to let you know, HAProxy Technologies (in partnership with SSL247)
is going to host a webinar about the impact of SSL on web applications.
Of course, many tips on how to deploy and configure HAProxy will be
shared during this session.

Agenda of the session:
Introduction to SSL/TLS

Role of an SSL certificate
Levels of authentication
Options for certificates
Certificate ordering process
Certificate chains
Algorithms & encryption
Concrete examples
TLS & IPv4 exhaustion
Impact on performance
Impact on the client side
Impact on web applications
Impact on 'SSL offloading'
Sensitive data protection
Impact on SEO
Optimal HTTPS usage


To register:

- session in english: Nov 13th, 5pm CEST
http://www.haproxy.com/company/events/event-registration-form/

- session in french: 13 novembre, 14h:
http://www.haproxy.com/fr/company/ev%C3%A9nements/participer-%C3%A0-un-%C3%A9v%C3%A8nement-1/

Baptiste



Re: change backend for an existing connection?

2014-10-29 Thread Baptiste
On Fri, Oct 24, 2014 at 4:37 PM, Ian Cooke  wrote:
> Hi,
>
> Can haproxy change the backend for an existing session?  I have a stateless
> client/server and I thought 'redispatch' did what I want but it seems that's
> only for the initial connection.  What I'd like is for a client that's
> already been connected to maintain the frontend's connection but change the
> session's backend server if the one it's connected to goes down.
>
> Thanks,
> Ian

Hi,

Maybe 'http-server-close' is the option you're looking for.
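A minimal sketch of the relevant options (a hedged example, not Ian's actual configuration):

```haproxy
defaults
    mode http
    # close the server-side connection after each response, so every
    # new request can be routed to a currently healthy server
    option http-server-close
    # allow a failed connection attempt to be re-dispatched elsewhere
    option redispatch
    retries 3
```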

Baptiste



Re: Thank you to cbonte

2014-10-29 Thread Baptiste
On Fri, Oct 24, 2014 at 1:35 AM, Jason J. W. Williams
 wrote:
> Just wanted to say thank you to cbonte for the searchable version of
> the docs at http://cbonte.github.io/haproxy-dconv/
>
> They're fantastic. Thank you for putting the effort into making that 
> interface.
>
> -J
>

+1 !!!
Thank you Cyril :)

Baptiste



Re: Health check and flapping

2015-10-05 Thread Baptiste
Hi,

I've not forgotten you, I'm just running out of time.

Baptiste


On Tue, Sep 29, 2015 at 5:43 PM,   wrote:
> Le 2015-08-28 16:40, Baptiste a écrit :
>>
>> Le 28 août 2015 15:45,  a écrit :
>>  >
>>  > Hello,
>>  >
>>  > We have tcp-check configured on some backends, which works fine,
>> except when service is flapping.
>>  >
>>  > If the backend server is in transitional state, for example
>> transitionally DOWN (going up), the counter is not reset to 0 if
>> tcp-check give a KO state between some OK state. The result is that if
>> the service is flapping, backend become up for a few seconds quite
>> often, even if all OK state are not consecutives.
>>  >
>>  > Example of sequence with rise 3:
>>  >
>>  > KO -> 0/3
>>  > KO -> 0/3
>>  > OK -> 1/3
>>  > KO -> 1/3 <- should back to 0/3
>>  > KO -> 1/3
>>  > KO -> 1/3
>>  > OK -> 2/3
>>  > KO -> 2/3
>>  > KO -> 2/3
>>  > OK -> 3/3 -> Server UP
>>  >
>>  > Is there a way to configure the counter to reset itself in case of
>> flapping ?
>>  >
>>  > Thanks.
>>
>> Hi there,
>>
>> Thanks for reporting this behavior.
>>
>> I'll have a look and come back to you.
>>
>> Baptiste
>
>
> Hello,
>
> Are you able to reproduce on your side ?
>
> Thanks



Re: [PATCH 1/1] MINOR: cli: Dump all resolvers stats if no resolver

2015-10-05 Thread Baptiste
Hi,

No problem for me about the feature itself.
That said, a few things should be changed in the code:

- use of proxy_find_by_name() instead of parsing the proxy list
- the following statement is hardly readable: "if
(appctx->ctx.resolvers.ptr != NULL && appctx->ctx.resolvers.ptr !=
presolvers) continue;"
Please write "continue" on a new line.

Please repost an updated patch and I'll give it a try before final approval.

Baptiste


On Sun, Oct 4, 2015 at 11:00 AM, Willy Tarreau  wrote:
> At first glance it seems OK. Baptiste, can you please quickly check
> and let me know if you don't see any issue there so that I can merge
> it ?
>
> Willy
>
> On Fri, Oct 02, 2015 at 03:41:06PM -0500, Andrew Hayworth wrote:
>> Hi all -
>>
>> Below is a patch for the 'show stat resolvers' cli command. It changes
>> the command such that it will show all resolvers configured, if you do
>> not specify a resolver id.
>>
>> I found this useful for debugging, and plan on using it in the hatop tool.
>>
>> Let me know if you have any feedback on it!
>>
>> Thanks -
>>
>> Andrew Hayworth
>> --
>>
>> >From c4061d948d21cabb95f093b5d9655c9d226724af Mon Sep 17 00:00:00 2001
>> From: Andrew Hayworth 
>> Date: Fri, 2 Oct 2015 20:33:01 +
>> Subject: [PATCH 1/1] MINOR: cli: Dump all resolvers stats if no resolver
>>  section is given
>>
>> This commit adds support for dumping all resolver stats. Specifically
>> if a command 'show stats resolvers' is issued withOUT a resolver section
>> id, we dump all known resolver sections. If none are configured, a
>> message is displayed indicating that.
>> ---
>>  doc/configuration.txt |  6 +++--
>>  src/dumpstats.c   | 72 
>> +++
>>  2 files changed, 42 insertions(+), 36 deletions(-)
>>
>> diff --git a/doc/configuration.txt b/doc/configuration.txt
>> index 3102516..e519662 100644
>> --- a/doc/configuration.txt
>> +++ b/doc/configuration.txt
>> @@ -16043,8 +16043,10 @@ show stat [  ]
>>  A similar empty line appears at the end of the second block (stats) so 
>> that
>>  the reader knows the output has not been truncated.
>>
>> -show stat resolvers 
>> -  Dump statistics for the given resolvers section.
>> +show stat resolvers []
>> +  Dump statistics for the given resolvers section, or all resolvers sections
>> +  if no section is supplied.
>> +
>>For each name server, the following counters are reported:
>>  sent: number of DNS requests sent to this server
>>  valid: number of DNS valid responses received from this server
>> diff --git a/src/dumpstats.c b/src/dumpstats.c
>> index 1a39258..ea3f49a 100644
>> --- a/src/dumpstats.c
>> +++ b/src/dumpstats.c
>> @@ -1166,23 +1166,19 @@ static int stats_sock_parse_request(struct
>> stream_interface *si, char *line)
>>   if (strcmp(args[2], "resolvers") == 0) {
>> struct dns_resolvers *presolvers;
>>
>> -   if (!*args[3]) {
>> - appctx->ctx.cli.msg = "Missing resolver section identifier.\n";
>> - appctx->st0 = STAT_CLI_PRINT;
>> - return 1;
>> -   }
>> -
>> -   appctx->ctx.resolvers.ptr = NULL;
>> -   list_for_each_entry(presolvers, &dns_resolvers, list) {
>> - if (strcmp(presolvers->id, args[3]) == 0) {
>> -   appctx->ctx.resolvers.ptr = presolvers;
>> -   break;
>> +   if (*args[3]) {
>> + appctx->ctx.resolvers.ptr = NULL;
>> + list_for_each_entry(presolvers, &dns_resolvers, list) {
>> +   if (strcmp(presolvers->id, args[3]) == 0) {
>> + appctx->ctx.resolvers.ptr = presolvers;
>> + break;
>> +   }
>> + }
>> + if (appctx->ctx.resolvers.ptr == NULL) {
>> +   appctx->ctx.cli.msg = "Can't find that resolvers section\n";
>> +   appctx->st0 = STAT_CLI_PRINT;
>> +   return 1;
>>   }
>> -   }
>> -   if (appctx->ctx.resolvers.ptr == NULL) {
>> - appctx->ctx.cli.msg = "Can't find resolvers section.\n";
>> - appctx->st0 = STAT_CLI_PRINT;
>> - return 1;
>> }
>>
>> appctx->st2 = STAT_ST_INIT;
>> @@ -6402,24 +6398,32 @@ static int
>> stats_dump_resolvers_to_buffer(struct stream_interface *si)
>> /* fall through */
>

Re: NOSRV error

2015-10-05 Thread Baptiste
On Mon, Oct 5, 2015 at 5:24 PM, Kevin COUSIN  wrote:
> Hi,
>
> - Mail original -
>> De: "Conrad Hoffmann" 
>> À: "Kevin COUSIN" , haproxy@formilux.org
>> Envoyé: Lundi 5 Octobre 2015 15:49:36
>> Objet: Re: NOSRV error
>
>> Hi,
>>
>> (comments inline)
>>
>> On 10/05/2015 03:23 PM, Kevin COUSIN wrote:
>>> Hi list
>>>
>>
>> This usually means that there is no server in the backend because they were
>> either misconfigured or taken out of the rotation, e.g. due to failed
>> health checks.
>>
>
> We disabled server tests to debug.

Kevin, bear in mind that checks are never the problem, but they are
the solution ;)


>>
>> Not sure what exactly you want to achieve here. If you want to loadbalance
>> on TCP level, HAProxy doesn't need to know anything about any TLS parameters.
>
>
> It's a lab HAproxy instance, the ssl ciphers options are for some other Layer 
> 7 LB configuration.
>>>
>>> I got the certificate on my server If I use openssl s_client.
>>
>> Can you elaborate on this? Are you connecting with s_client to haproxy or
>> to your server?
>> Can you confirm that you want you web server to do the actual TLS handshake
>> and not HAProxy?
>
> I'm connecting to my server with openssl, from the haproxy (to check if SSL 
> certificate is installed on the target).
>
> Yes, we want the backend server to do the TLS handshake.
>
> We try to LB the Citrix Broker :
>
> User -> Citrix Netscaler Gateway -> HAproxy --> Citrix Brokers
>
> We used the Windows NLB between Citrix NS Gateway and Citrix Brokers and we 
> want to replace it with HAproxy.
> With the HTTP frontend, we can see "HTTP/XML 479 POST /Scripts/CtxSTA.dll 
> HTTP/1.1". It doesn't work with HTTPS, the Netscaler gateway seems to close 
> the connection with FIN,ACK.


Why mix HAProxy in between Citrix products?

As Conrad said, there are no servers available for your connection. You
should first investigate why the Citrix brokers reject the traffic.

Baptiste



Re: [PATCH 1/1] MINOR: cli: Dump all resolvers stats if no resolver

2015-10-05 Thread Baptiste
Andrew,

My apologies about the proxy_find_by_name function, I was not in the
right context!
Tested and approved.

Willy, you can apply :)

Thanks a lot for your contribution, Andrew.

Baptiste


On Mon, Oct 5, 2015 at 5:47 PM, Andrew Hayworth
 wrote:
> On Mon, Oct 5, 2015 at 7:24 AM, Baptiste  wrote:
>> Hi,
>>
>> No problem for me about the feature itself.
>> That said, a few things should be changed in the code:
>>
>> - use of proxy_find_by_name() instead of parsing the proxy list
>
> I'm fairly certain that 'proxy_find_by_name' does not search the
> dns_resolvers list (both from reading the code, and from empirically
> testing it). Notably, this looping-through-the-list behavior was
> already present in src/dumpstats.c before I touched it, and we also do
> the same thing when parsing the config files. I _do_ believe we should
> have a nice function for finding a resolvers section by name (either
> 'resolver_find_by_name' or by extending 'proxy_find_by_name'), but I
> don't think this is the commit to do that right before a release.
>
>> - the following statement is hardly readable: "if
>> (appctx->ctx.resolvers.ptr != NULL && appctx->ctx.resolvers.ptr !=
>> presolvers) continue;"
>> Please write "continue" on a new line.
>
> Done.
>
>>
>> Please repost an updated patch and I'll give it a try before final approval.
>>
>> Baptiste
>
> Updated patch below:
>
> From 190fee509a81755a8be3d9281c2edd7d3f72ff19 Mon Sep 17 00:00:00 2001
> From: Andrew Hayworth 
> Date: Fri, 2 Oct 2015 20:33:01 +
> Subject: [PATCH] MINOR: cli: Dump all resolvers stats if no resolver section
>  is given
>
> This commit adds support for dumping all resolver stats. Specifically
> if a command 'show stats resolvers' is issued withOUT a resolver section
> id, we dump all known resolver sections. If none are configured, a
> message is displayed indicating that.
> ---
>  doc/configuration.txt |  6 +++--
>  src/dumpstats.c   | 73 
> +++
>  2 files changed, 43 insertions(+), 36 deletions(-)
>
> diff --git a/doc/configuration.txt b/doc/configuration.txt
> index 3102516..e519662 100644
> --- a/doc/configuration.txt
> +++ b/doc/configuration.txt
> @@ -16043,8 +16043,10 @@ show stat [  ]
>  A similar empty line appears at the end of the second block (stats) so 
> that
>  the reader knows the output has not been truncated.
>
> -show stat resolvers 
> -  Dump statistics for the given resolvers section.
> +show stat resolvers []
> +  Dump statistics for the given resolvers section, or all resolvers sections
> +  if no section is supplied.
> +
>For each name server, the following counters are reported:
>  sent: number of DNS requests sent to this server
>  valid: number of DNS valid responses received from this server
> diff --git a/src/dumpstats.c b/src/dumpstats.c
> index bdfb7e3..ff44120 100644
> --- a/src/dumpstats.c
> +++ b/src/dumpstats.c
> @@ -1166,23 +1166,19 @@ static int stats_sock_parse_request(struct
> stream_interface *si, char *line)
>   if (strcmp(args[2], "resolvers") == 0) {
>   struct dns_resolvers *presolvers;
>
> - if (!*args[3]) {
> - appctx->ctx.cli.msg = "Missing resolver section identifier.\n";
> - appctx->st0 = STAT_CLI_PRINT;
> - return 1;
> - }
> -
> - appctx->ctx.resolvers.ptr = NULL;
> - list_for_each_entry(presolvers, &dns_resolvers, list) {
> - if (strcmp(presolvers->id, args[3]) == 0) {
> - appctx->ctx.resolvers.ptr = presolvers;
> - break;
> + if (*args[3]) {
> + appctx->ctx.resolvers.ptr = NULL;
> + list_for_each_entry(presolvers, &dns_resolvers, list) {
> + if (strcmp(presolvers->id, args[3]) == 0) {
> + appctx->ctx.resolvers.ptr = presolvers;
> + break;
> + }
> + }
> + if (appctx->ctx.resolvers.ptr == NULL) {
> + appctx->ctx.cli.msg = "Can't find that resolvers section\n";
> + appctx->st0 = STAT_CLI_PRINT;
> + return 1;
>   }
> - }
> - if (appctx->ctx.resolvers.ptr == NULL) {
> - appctx->ctx.cli.msg = "Can't find resolvers section.\n";
> - appctx->st0 = STAT_CLI_PRINT;
> - return 1;
>   }
>
>   appctx->st2 = STAT_ST_INIT;
> @@ -6400,24 +6396,33 @@ static int
> stats_dump_resolvers_to_buffer(struct stream_interface *si)
>   /* fall through */
>
>   case STAT_ST_LIST:
> - presolvers = appctx->ctx.resolvers.ptr;
> - chunk_appendf(&trash, "Resolvers section %s\n", presolvers->id);
> - list_for_each_entry(pnameserver, &presolvers->nameserver_lis

Re: About maxconn and minconn

2015-10-08 Thread Baptiste
Hi Dmitry,

It says what it says: you configured HAProxy to manage queues to
protect your servers. During your workload, a request remained in the
queue for too long (1s), so HAProxy simply returned an error.

Now the question is why this happens. Most likely your queue
management is improperly set up (either increase minconn and/or
decrease fullconn), combined with a server which might be quite slow
to answer, leading HAProxy to use its queues.

Or you met a bug :)

We need the full configuration and the log lines around the sQ event
(right before and right after) so we can help.

Baptiste




On Wed, Oct 7, 2015 at 3:18 PM, Dmitry Sivachenko  wrote:
> Hello,
>
> I am using haproxy-1.5.14 and sometimes I see the following errors in the log:
>
> Oct  7 08:33:03 srv1 haproxy[77565]: unix:1 [07/Oct/2015:08:33:02.428] 
> MT-front MT_RU_EN-back/ 0/1000/-1/-1/1000 503 212 - - sQ-- 
> 125/124/108/0/0 0/28 "POST /some/url HTTP/1.1"
> (many similar at one moment)
>
> Common part in these errors is "1000" in Tw and Tt, and "sQ--" termination 
> state.
>
> Here is the relevant part on my config (I can post more if needed):
>
> defaults
> balance roundrobin
> maxconn 1
> timeout queue 1s
> fullconn 3000
> default-server inter 5s downinter 1s fastinter 500ms fall 3 rise 1 
> slowstart 60s maxqueue 1 minconn 5 maxconn 150
>
> backend MT_RU_EN-back
> mode http
> timeout server 30s
> server mt1-34 mt1-34:19016 track MT-back/mt1-34 weight 38
> server mt1-35 mt1-35:19016 track MT-back/mt1-35 weight 38
> 
>
> So this error log indicates that request was sitting in the queue for timeout 
> queue==1s and his turn did not come.
>
> In the stats web interface for MT_RU_EN-back backend I see the following 
> numbers:
>
> Sessions: limit=3000, max=126 (for the whole backend)
> Limit=150, max=5 or 6 (for each server)
>
> If I understand minconn/maxconn meaning right, each server should accept up 
> to min(150, 3000/18) connections
>
> So according to stats the load were far from limits.
>
> What can be the cause of such errors?
>
> Thanks!
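For reference, the dynamic per-server limit implied by minconn/maxconn/fullconn can be sketched as follows. This is an approximation of the documented behaviour (the limit grows linearly from minconn toward maxconn as the backend load approaches fullconn), not HAProxy's exact code:

```python
def dynamic_maxconn(minconn: int, maxconn: int,
                    backend_sessions: int, fullconn: int) -> int:
    """Approximate HAProxy's dynamic per-server connection limit.

    The effective limit follows the backend load: minconn at zero
    load, scaling up to maxconn when the backend reaches fullconn
    concurrent sessions.
    """
    if backend_sessions >= fullconn:
        return maxconn
    return max(minconn, maxconn * backend_sessions // fullconn)

# With the numbers from the config above (minconn 5, maxconn 150,
# fullconn 3000) and ~126 concurrent backend sessions, each server
# only accepts about 6 connections, consistent with the "max=5 or 6"
# per-server figure in the stats, which would explain requests
# queueing well below the static limits.
print(dynamic_maxconn(5, 150, 126, 3000))
```

In other words, the observed load was not "far from limits" at all: the effective per-server limit was near minconn, not maxconn.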



Re: Haproxy dropping request

2015-10-09 Thread Baptiste
Wonderful,

Please tell afbbank to change their password!

Baptiste


On Fri, Oct 9, 2015 at 3:26 PM, Bosco Mutunga  wrote:
> Hi,
>
> I’m experiencing a strange issue whereby Haproxy completely hangs when it 
> receives a certain request, i have confirmed that the request is received 
> through the following tcpdump, but it does not appear in the haproxy logs, 
> neither is it forwarded.
>
> 09:24:05.853373 IP (tos 0x0, ttl 58, id 62847, offset 0, flags [DF], proto 
> TCP (6), length 299)
> ZMTESTGUI.59564 > ip-172-31-6-24.eu-west-1.compute.internal.8000: Flags 
> [P.], cksum 0x2261 (correct), seq 532:779, ack 947, win 129, options 
> [nop,nop,TS val 1098390234 ecr 1169684055], length 247
> E..+..@.:...
> ..@.}+."a.
> Ax..E..WPOST /mtn/zm/consumer/register HTTP/1.1
> Authorization: Basic YWZiYmFuazpFFmMxMjl0NTYh
> Content-Length: 1049
> Content-Type: text/xml; charset=UTF-8
> Host: 172.31.6.24:8000
> Connection: Keep-Alive
> User-Agent: Apache-HttpClient/4.2.3 (java 1.5)
>
>
> 09:24:05.853440 IP (tos 0x0, ttl 58, id 62848, offset 0, flags [DF], proto 
> TCP (6), length 1101)
> ZMTESTGUI.59564 > ip-172-31-6-24.eu-west-1.compute.internal.8000: Flags 
> [P.], cksum 0x12a8 (correct), seq 779:1828, ack 947, win 129, options 
> [nop,nop,TS val 1098390234 ecr 1169684055], length 1049
> E..M..@.:..z
> ..@.},
> Ax..E..W
>  xmlns:ns2="http://www.ericsson.com/em/emm/sp/backend"; 
> xmlns:ns3="http://www.ericsson.com/em/emm/sp/frontend";>
> FRI:260969524530/MSISDN
> 
> Andrea
> Oxenham
> 
> FEMA
> 1989-10-07
> PASS
> 1ABCD1
> 2026-10-16
> en
> 
> 
> HOME
> false
> false
> 
> 
> Lusaka
> LUSAKA
> Lusaka
> LUSAKA
> ZM
> 
> 
> 
> 
> 
>
>
> Of interest to note is the newline at the end of the body, that’s what makes 
> the content-length add up to 1049, is there any reason why this request is 
> being dropped.?



Re: Haproxy dropping request

2015-10-09 Thread Baptiste
Cool :)
OK, we need the configuration and the log lines relative to this POST.

Baptiste

On Fri, Oct 9, 2015 at 3:43 PM, Bosco Mutunga  wrote:
> Those are not the actual credentials, any idea what might be wrong?
>
>> On 9 Oct 2015, at 16:40, Baptiste  wrote:
>>
>> Wonderfull,
>>
>> Please tell afbbank to change their password !
>>
>> Baptiste
>>
>>
>> On Fri, Oct 9, 2015 at 3:26 PM, Bosco Mutunga  
>> wrote:
>>> Hi,
>>>
>>> I’m experiencing a strange issue whereby Haproxy completely hangs when it 
>>> receives a certain request, i have confirmed that the request is received 
>>> through the following tcpdump, but it does not appear in the haproxy logs, 
>>> neither is it forwarded.
>>>
>>> 09:24:05.853373 IP (tos 0x0, ttl 58, id 62847, offset 0, flags [DF], proto 
>>> TCP (6), length 299)
>>>ZMTESTGUI.59564 > ip-172-31-6-24.eu-west-1.compute.internal.8000: Flags 
>>> [P.], cksum 0x2261 (correct), seq 532:779, ack 947, win 129, options 
>>> [nop,nop,TS val 1098390234 ecr 1169684055], length 247
>>> E..+..@.:...
>>> ..@.}+."a.
>>> Ax..E..WPOST /mtn/zm/consumer/register HTTP/1.1
>>> Authorization: Basic YWZiYmFuazpFFmMxMjl0NTYh
>>> Content-Length: 1049
>>> Content-Type: text/xml; charset=UTF-8
>>> Host: 172.31.6.24:8000
>>> Connection: Keep-Alive
>>> User-Agent: Apache-HttpClient/4.2.3 (java 1.5)
>>>
>>>
>>> 09:24:05.853440 IP (tos 0x0, ttl 58, id 62848, offset 0, flags [DF], proto 
>>> TCP (6), length 1101)
>>>ZMTESTGUI.59564 > ip-172-31-6-24.eu-west-1.compute.internal.8000: Flags 
>>> [P.], cksum 0x12a8 (correct), seq 779:1828, ack 947, win 129, options 
>>> [nop,nop,TS val 1098390234 ecr 1169684055], length 1049
>>> E..M..@.:..z
>>> ..@.},
>>> Ax..E..W
>>> >> xmlns:ns2="http://www.ericsson.com/em/emm/sp/backend"; 
>>> xmlns:ns3="http://www.ericsson.com/em/emm/sp/frontend";>
>>>FRI:260969524530/MSISDN
>>>
>>>Andrea
>>>Oxenham
>>>
>>>FEMA
>>>1989-10-07
>>>PASS
>>>1ABCD1
>>>2026-10-16
>>>en
>>>
>>>
>>>HOME
>>>false
>>>false
>>>
>>>
>>>Lusaka
>>>LUSAKA
>>>Lusaka
>>>LUSAKA
>>>ZM
>>>
>>>
>>>
>>>
>>> 
>>>
>>>
>>> Of interest to note is the newline at the end of the body, that’s what 
>>> makes the content-length add up to 1049, is there any reason why this 
>>> request is being dropped.?
>



Re: Try request again if response body is empty?

2015-10-10 Thread Baptiste
On Sun, Oct 11, 2015 at 5:29 AM, Shawn Heisey  wrote:
> On 10/10/2015 12:31 AM, Willy Tarreau wrote:
>> Is the response closed when this happens (eg: server crash) ? If so,
>> we could add some sample fetches to detect that the request or response
>> channels are closed in case that could help. This is trivial to do, but
>> it will only be reliable if the close is immediately encountered, so it
>> still depends on the timing.
>
> We don't really understand why it happens, though we have been able to
> track down an exception that we *think* is related.  It's a common
> problem seen with servlet containers: "java.lang.IllegalStateException:
> Cannot forward after response has been committed" and
> "java.lang.IllegalStateException: Cannot call sendError() after the
> response has been committed".  The underlying cause virtually every time
> these exceptions occur is programmer error.  The difficult part is
> tracking it down and getting a fix deployed.


Usually, all the app servers serve the same code, so if the app on
serverA is buggy, then the app on serverB must end up with the same
result.
This type of replay may make sense in an A/B testing mode, but
slightly revisited.
I.e. use farm A until a failure occurs, then replay on farm B, where
we have a more verbose (hence much slower) version of the app, or a
"fixed" version under testing, etc.

In such case, I would agree it may make sense.

Baptiste



Re: Interactive stats socket broken on master

2015-10-12 Thread Baptiste
On Mon, Oct 12, 2015 at 12:06 AM, Willy Tarreau  wrote:
> On Sat, Oct 10, 2015 at 08:55:44PM -0500, Andrew Hayworth wrote:
>> Bump -
>>
>> I don't mind maintaining my own HAProxy package, but it seems bad to
>> release a major version with the interactive stats socket broken. Any
>> thoughts on the patch?
>
> Has anyone else tested it ? Since the beginning of the thread I must
> confess it's unclear to me as Jesse reported the issue, you said that
> your patch works for you then Jesse asks whether we should merge it.
> Jesse, have you tested it as well, so that we ensure you're facing
> the same issue ?
>
> Andrew BTW, your patch looks good and seems to do what you described
> in the message, I'm just asking to be sure that it addresses Jesse's
> bug as well.
>
> Last point guys, please keep in mind that not everybody reads all
> e-mails, so when you want to have a patch integrated, clearly mark
> it in the subject and don't leave it pending at the trail of a
> thread like this.
>
> Thanks,
> Willy
>


Hi all,

I confirm the attached patch fixes the issue: natively on my computer
(Willy, I was wrong this morning, I still had the bug) and also when
HAProxy runs in Docker.

Baptiste



Re: How to configure frontend/backend for SSL OR Non SSL Backend?

2015-10-12 Thread Baptiste
Hi Daren,

Do you want/need to decipher the traffic when using SSL?

Baptiste

On Mon, Oct 12, 2015 at 4:24 PM, Daren Sefcik  wrote:
> I am probably totally overlooking something but how do I configure a
> frontend/backend to pass to the same server for both SSL and Non SSL
> requests?  We have server that require ssl for some applications but most of
> the time not.
>



Re: How to configure frontend/backend for SSL OR Non SSL Backend?

2015-10-12 Thread Baptiste
So basically, here is what you want to do:
peers mypeers
 # read the doc for the info to store here

frontend ftapp
 bind :80
 bind :443
 mode tcp
 default_backend bkapp

backend bkapp
 mode tcp
 stick-table type ip size 10k peers mypeers
 stick on src
 server s1 a.b.c.d check port 80
 server s2 a.b.c.e check port 80


Baptiste


On Mon, Oct 12, 2015 at 4:40 PM, Daren Sefcik  wrote:
> humm...not sure I know how to answer that...we have servers that require SSL
> for some requests and not for others. I am not needing to do anything other
> than pass the traffic along, not doing any inspection or verifying of cert
> or anything. I tried to setup a frontend with 2 servers in the backend, one
> with 443 and the other with 80 but that didn't seem to work, like it would
> pick the wrong one to send to.
>
> On Mon, Oct 12, 2015 at 7:29 AM, Baptiste  wrote:
>>
>> Hi Daren,
>>
>> Do you want/need to decipher the traffic when using SSL?
>>
>> Baptiste
>>
>> On Mon, Oct 12, 2015 at 4:24 PM, Daren Sefcik 
>> wrote:
>> > I am probably totally overlooking something but how do I configure a
>> > frontend/backend to pass to the same server for both SSL and Non SSL
>> > requests?  We have server that require ssl for some applications but
>> > most of
>> > the time not.
>> >
>
>



Re: rebalance sessions when re-restarting server

2015-10-12 Thread Baptiste
Hi Stephen,

You have to wait for either the client or the server to close the
connection.
As you said, the sessions don't end, so the problem is by design in
your application.
Baptiste


On Mon, Oct 12, 2015 at 5:59 PM, Walsh, Stephen
 wrote:
> Hi all,
>
>
>
> We are using HaProxy in trial. We use it as a TCP Load Balancer for SSL
> connections.
>
> These sessions don’t end and are persistent.
>
>
>
> However when a node is restarted all sessions are moved the other nodes and
> never come back to the restarted one.
>
> How can we rebalance these backend nodes without restarting Ha Proxy?
>
>
>
>
>
> Regards
>
> Stephen W
>
> This email (including any attachments) is proprietary to Aspect Software,
> Inc. and may contain information that is confidential. If you have received
> this message in error, please do not read, copy or forward this message.
> Please notify the sender immediately, delete it from your system and destroy
> any copies. You may not further disclose or distribute this email or its
> attachments.



Re: [ANNOUNCE] haproxy-1.6.0 now released!

2015-10-13 Thread Baptiste
Great, amazing!
Looking forward to 1.7!

Baptiste


[blog] What's new in HAProxy 1.6

2015-10-14 Thread Baptiste
Hey,

I summarized what's new in HAProxy 1.6 with some configuration
examples in a blog post to help quick adoption of new features:
http://blog.haproxy.com/2015/10/14/whats-new-in-haproxy-1-6/

Baptiste



Re: Squid Backend Health Checks

2015-10-14 Thread Baptiste
Hi Daren,

What type of errors are reported?

Baptiste

On Wed, Oct 14, 2015 at 8:19 AM, Daren Sefcik  wrote:
> I followed Willy's advice from this post
>
> http://www.mail-archive.com/haproxy@formilux.org/msg05171.html
>
> but seem to get a lot of health check errors and (false) Dwntme. Is there a
> newer or better way to do health checks or am I missing something?
>
> TIA..
>
> BTW, nice work on 1.6, am looking forward to trying it out soon...
>
>
> Here is my relevant code, http://10.1.4.105:9090 is the CARP address my
> clienst also use as the proxy ip to use. I tried using the local IP and had
> the same problems.
>
> listen check-responder
> bind *:9090
> mode http
> monitor-uri /
> timeout client 5000
> timeout connect 5000
> timeout server 5000
>
>
> backend HTPL_WEB_PROXY_http_ipvANY
> mode http
> stick-table type ip size 50k expire 5m
> stick on src
> balance roundrobin
> timeout connect 5
> timeout server 5
> retries 3
> option httpchk GET http://10.1.4.105:9090/ HTTP/1.0
> server HTPL-PROXY-01_10.1.4.103 10.1.4.103:3128 cookie HTPLPROXY01 check
> inter 3  weight 175 maxconn 1500 fastinter 1000 fall 5
> server HTPL-PROXY-02_10.1.4.104 10.1.4.104:3128 cookie HTPLPROXY02 check
> inter 3  weight 175 maxconn 1500 fastinter 1000 fall 5
> server HTPL-PROXY-03_10.1.4.107 10.1.4.107:3128 cookie HTPLPROXY03 check
> inter 3  weight 100 maxconn 1500 fastinter 1000 fall 5
> server HTPL-PROXY-04_10.1.4.108 10.1.4.108:3128 cookie HTPLPROXY04 check
> inter 3  weight 200 maxconn 1500 fastinter 1000 fall 5
> server HTHPL-PROXY-02_10.1.4.101 10.1.4.101:3128 cookie HTHPLPROXY02 check
> inter 3  weight 150 maxconn 1500 fastinter 1000 fall 5
> server HTHPL-PROXY-03_10.1.4.102 10.1.4.102:3128 cookie HTHPLPROXY03 check
> inter 3  weight 125 maxconn 1000 fastinter 1000 fall 5
>
>



Re: Unexpected error messages

2015-10-14 Thread Baptiste
On Wed, Oct 14, 2015 at 3:03 PM, Krishna Kumar (Engineering)
 wrote:
> Hi all,
>
> We are occasionally getting these messages (about 25 errors/per occurrence,
> 1 occurrence per hour) in the *error* log:
>
> 10.xx.xxx.xx:60086 [14/Oct/2015:04:21:25.048] Alert-FE
> Alert-BE/10.xx.xx.xx 0/5000/1/32/+5033 200 +149 - - --NN 370/4/1/0/+1
> 0/0 {10.xx.x.xxx||367||} {|||432} "POST /fk-alert-service/nsca
> HTTP/1.1"
> 10.xx.xxx.xx:60046 [14/Oct/2015:04:21:19.936] Alert-FE
> Alert-BE/10.xx.xx.xx 0/5000/1/21/+5022 200 +149 - - --NN 302/8/2/0/+1
> 0/0 {10.xx.x.xxx||237||} {|||302} "POST /fk-alert-service/nsca
> HTTP/1.1"
> ...
>
> We are unsure what errors were seen at the client. What could possibly be the
> reason for these? Every error line has retries value as "+1", as seen above. 
> The
> specific options in the configuration are (HAProxy v1.5.12):
>
> 1. "retries 1"
> 2. "option redispatch"
> 3. "option logasap"
> 4. "timeout connect 5000", server and client timeouts are high - 300s
> 5. Number of backend servers is 7.
> 6. ulimit is 512K
> 7. balance is "roundrobin"
>
> Thank you for any leads/insights.
>
> Regards,
> - Krishna Kumar
>

Hi Krishna,

First, I don't understand how "retries 1" and "option redispatch"
work together in your case.
I mean, redispatch is supposed to be applied at 'retries - 1'...

So basically, what may be happening:
- because of logasap, HAProxy does not wait until the end of the
session to generate the log line
- this log is in error because a connection was attempted (and failed)
on a server

You should not set any ulimit; let HAProxy do the job for you.

Baptiste
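A minimal sketch of the options being discussed, using the values from the report above (section layout is illustrative): with "option logasap" the log line is emitted as soon as the request is forwarded, before the session completes, which is why the affected timers and byte counts in the log lines above carry a '+' prefix.

```haproxy
defaults
    mode http
    retries 1
    # re-dispatch a failed connection to another server
    option redispatch
    # log as soon as possible; incomplete timers get a '+' prefix
    option logasap
    timeout connect 5000
```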



Re: HAProxy 1.6 and HAProxy EE

2015-10-14 Thread Baptiste
Hi Jonathan,

First, we don't speak about "license", since HAPEE is open source. We
speak about "subscription".

Second, please send your HAPEE related questions to
supp...@haproxy.com directly :)
When writing to support, send the list of backports you'd like and
we'll answer you quickly.

Be aware that we'll soon release a new version of HAPEE (1.5r2), which
will embed more backports without impacting the reliability of the
solution.

Note the HAPEE contracts also cover your deployment of HAProxy
Community (until you can migrate to HAPEE). We simply can't engage our
SLAs on this version.

Baptiste



On Thu, Oct 15, 2015 at 1:18 AM, Jonathan Winer  wrote:
> Hi - we currently run a licensed (Business) version of HAPEE, but are
> interested in some of the new capabilities of 1.6.  What options do we have
> to 'upgrade' to 1.6 but keep our current support license?
>
> Thanks.
>
> --
> Jonathan Winer
>
> Director of R&D - Digital Marketing Services
>
> __
>
> MyWebGrocer
>
> Champlain Mill
> 20 Winooski Falls Way, 5th Floor
> Winooski, VT 05404
>
> D: 802.654.9743 F: 802.654.9699
>
> + MyWebGrocer.com
>
>



Re: DNS resolvers issue with haproxy 1.6

2015-10-15 Thread Baptiste
On Thu, Oct 15, 2015 at 10:24 AM, Øyvind Johnsen  wrote:
> Hi all,
>
> We are running HAProxy on our Docker / Swarm / Weave cluster also featuring 
> Weave-DNS for service discovery between the containers in the cluster. We are 
> deploying fairly often to the cluster for both dev and stage environments and 
> was very happy to see the DNS Resolvers feature introduced with HAProxy 1.6. 
> Problem is that I cannot seem to get this feature to work with our setup. 
> HAProxy does never pick up a DNS change as it is supposed to, so when a 
> container is redeployed the backend will go down whenever the container gets 
> assigned a new IP from Weave.
>
> Weave-DNS is available on every node in the cluster on IP 172.17.42.1 and I 
> can resolve all the internal DNS names using the resolver at this address to 
> the correct IP from inside the container running HAProxy. The DNS changes 
> immediately when a container is redeployed and gets assigned a new IP.
>
> A simplified and anonymised version of our HAProxy config:
>
> defaults
> log global
> option httplog
> option dontlognull
> option log-health-checks
> option httpchk
> mode http
> option http-server-close
> timeout connect 7s
> timeout client 10s
> timeout server 10s
> timeout check 5s
>
> resolvers weave-dns
> nameserver dns1 172.17.42.1:53
> timeout retry 1s
> hold valid 10s
>
> frontend http-in
> bind *:80
> acl acl_domain1 hdr(host) -i domain1.io
> use_backend backend_domain1 if acl_domain1
>
> acl acl_domain2 hdr(host) -i domain2.io
> use_backend backend_domain2 if acl_domain2
>
> frontend https-in
> bind *:443 ssl crt /data/ssl-certs/
> reqadd X-Forwarded-Proto:\ https
>
> acl acl_domain1 hdr(host) -i domain1.io
> use_backend backend_domain1 if acl_domain1
>
> acl acl_domain2 hdr(host) -i domain2.io
> use_backend backend_domain2 if acl_domain2
>
> backend backend_domain1
> server domain1-server domain1.weave.local:80 check inter 1000 resolvers 
> weave-dns resolve-prefer ipv4
>
> backend backend_domain2
> server domain2-server domain2.weave.local:80 check inter 1000 resolvers 
> weave-dns resolve-prefer ipv4
>
> Is there any reason why the server check should not pick up the DNS change 
> and update HAProxy with the new IP so the backend continue to work when we do 
> a redeploy?
>
>
> I also encountered another issue when trying to upgrade to the final 1.6.0 
> version. The server is using two wildcard certificates in the folder 
> specified in the config. When running the ssllabs.com SSL test on the server 
> at domain2 (the cert that is not the default one, but using SNI) then HAProxy 
> segfaults and dies completely. This behaviour is not observed on neither of 
> the 1.6.0-devX builds.



Hi Øyvind,

Please repost your SSL question in a new thread with an appropriate subject.
Next time, avoid mixing two very different topics in the same thread.

Have you enabled the stats socket in your global section?
If not, please enable it.
Then run "show stat resolvers" and report the output of the command here.

A packet capture of a few DNS packets would be much appreciated.

Baptiste



Re: DNS resolvers issue with haproxy 1.6

2015-10-15 Thread Baptiste
On Thu, Oct 15, 2015 at 11:02 AM, Øyvind Johnsen  wrote:
> Sorry about the mixing of topics. I will repost the SSL question when I am
> done investigating the DNS topic which currently is the deal breaker :)

Thanks a lot!


> I did some DNS packet sniffing and it seems the problem is that haproxy does
> a type=ANY request to DNS for the domain names, and weave-DNS then replies
> with "No such name"... if I check with nslookup, then I get the same
> behaviour for type=ANY requests. The DNS will only answer with the IP for
> type=A requests.

Please send me the packet capture. I need to understand what the
server answered.
Actually, HAProxy is already supposed to fail over to either A or AAAA,
then to AAAA or A, if no valid response is received or in case of some
errors returned by the DNS server.
More information here:
http://cbonte.github.io/haproxy-dconv/snapshot/configuration-1.6.html#5.3.2

I'll see what happens with your DNS server and how we could work
around it in HAProxy.

Baptiste



>
> Is there any way to tune this kind of behaviour in the resolvers section of
> HAProxy now?
>
> Best regards
>
> Øyvind Johnsen
> System Admin
> +47 99242547
> +852 67157472
>
> On Thu, Oct 15, 2015 at 10:58 AM, Baptiste  wrote:
>>
>> On Thu, Oct 15, 2015 at 10:24 AM, Øyvind Johnsen 
>> wrote:
>> > Hi all,
>> >
>> > We are running HAProxy on our Docker / Swarm / Weave cluster also
>> > featuring Weave-DNS for service discovery between the containers in the
>> > cluster. We are deploying fairly often to the cluster for both dev and 
>> > stage
>> > environments and was very happy to see the DNS Resolvers feature introduced
>> > with HAProxy 1.6. Problem is that I cannot seem to get this feature to work
>> > with our setup. HAProxy never picks up a DNS change as it is supposed
>> > to, so when a container is redeployed the backend will go down whenever the
>> > container gets assigned a new IP from Weave.
>> >
>> > Weave-DNS is available on every node in the cluster on IP 172.17.42.1
>> > and I can resolve all the internal DNS names using the resolver at this
>> > address to the correct IP from inside the container running HAProxy. The 
>> > DNS
>> > changes immediately when a container is redeployed and gets assigned a new
>> > IP.
>> >
>> > A simplified and anonymised version of our HAProxy config:
>> >
>> > defaults
>> > log global
>> > option httplog
>> > option dontlognull
>> > option log-health-checks
>> > option httpchk
>> > mode http
>> > option http-server-close
>> > timeout connect 7s
>> > timeout client 10s
>> > timeout server 10s
>> > timeout check 5s
>> >
>> > resolvers weave-dns
>> > nameserver dns1 172.17.42.1:53
>> > timeout retry 1s
>> > hold valid 10s
>> >
>> > frontend http-in
>> > bind *:80
>> > acl acl_domain1 hdr(host) -i domain1.io
>> > use_backend backend_domain1 if acl_domain1
>> >
>> > acl acl_domain2 hdr(host) -i domain2.io
>> > use_backend backend_domain2 if acl_domain2
>> >
>> > frontend https-in
>> > bind *:443 ssl crt /data/ssl-certs/
>> > reqadd X-Forwarded-Proto:\ https
>> >
>> > acl acl_domain1 hdr(host) -i domain1.io
>> > use_backend backend_domain1 if acl_domain1
>> >
>> > acl acl_domain2 hdr(host) -i domain2.io
>> > use_backend backend_domain2 if acl_domain2
>> >
>> > backend backend_domain1
>> > server domain1-server domain1.weave.local:80 check inter 1000
>> > resolvers weave-dns resolve-prefer ipv4
>> >
>> > backend backend_domain2
>> > server domain2-server domain2.weave.local:80 check inter 1000
>> > resolvers weave-dns resolve-prefer ipv4
>> >
>> > Is there any reason why the server check should not pick up the DNS
>> > change and update HAProxy with the new IP so the backend continue to work
>> > when we do a redeploy?
>> >
>> >
>> > I also encountered another issue when trying to upgrade to the final
>> > 1.6.0 version. The server is using two wildcard certificates in the folder
>> > specified in the config. When running the ssllabs.com SSL test on the 
>> > server
>> > at domain2 (the cert that is not the default one, but using SNI) then
>> > HAProxy segfaults and dies completely. This behaviour is not observed on
>> > either of the 1.6.0-devX builds.
>>
>>
>>
>> Hi Oyvind,
>>
>> Please repost your SSL question in a new thread with an appropriate
>> subject.
>> Next time avoid mixing 2 very different topics in the same thread.
>>
>> Have you enabled stats socket in your global section?
>> If not, please enable it.
>> Then run a "show stat resolvers" and report here the output of the
>> command.
>>
>> A packet capture of a few DNS packets would be much appreciated.
>>
>> Baptiste
>
>
>



[call to comment] HAProxy's DNS resolution default query type

2015-10-15 Thread Baptiste
Hey guys,

By default, HAProxy tries to resolve server IPs using an ANY query
type, then fails over to the resolve-prefer type, then to the
remaining type.
So ANY -> A -> AAAA, or ANY -> AAAA -> A.
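That default chain can be sketched as a small lookup loop (an illustration only — the function names and structure are mine, not HAProxy's C code):

```python
# Sketch of the default query-type failover chain described above.
# Not HAProxy's implementation; names are invented for illustration.

def failover_chain(prefer):
    """Query types tried in order for a given resolve-prefer setting."""
    if prefer == "ipv4":
        return ["ANY", "A", "AAAA"]
    return ["ANY", "AAAA", "A"]     # default: prefer IPv6

def resolve(name, prefer, query):
    """Walk the chain; query(name, qtype) returns a list of records
    (possibly empty) or raises on an error such as NX or a timeout."""
    for qtype in failover_chain(prefer):
        try:
            records = query(name, qtype)
        except Exception:
            continue                # error: fail over to the next type
        if records:
            return qtype, records
    return None, []                 # chain exhausted
```

The weave-DNS problem is visible here: a server that answers ANY with NX raises an error, and whether the loop keeps going (as this sketch does) or stops is exactly the design point under discussion.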

In some cases, the ANY query type is ignored or the response contains no
records, which leads HAProxy to try the next query type.
Today, Øyvind reported that the weave DNS server actually answers with an
NX response, preventing HAProxy from failing over to the next query type
(this is by design).

Jan, a fellow HAProxy user, already reported to me that ANY query types
are less and less in fashion (for many reasons I'm not going to develop
here).

Among the many ways to fix this issue, the one below has my preference:
a new resolvers section directive (a flag, in this case) which prevents
HAProxy from sending an ANY query type to the nameservers in this
section, i.e. "option dont-send-any-qtype".
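For illustration, here is how the proposed flag might sit in a resolvers section (hypothetical syntax — this directive does not exist, it is only the suggestion above):

```
resolvers weave-dns
    nameserver dns1 172.17.42.1:53
    timeout retry 1s
    hold valid 10s
    option dont-send-any-qtype   # proposed: start at A/AAAA, skip ANY
```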

Another option would be to make HAProxy fail over to the next query type
in case of an NX response.
This would also cover the case where a server returns an NX because no
AAAA records exist.

Any comments are welcome.

Baptiste



Re: Resolvable host names in backend server throw invalid address error

2015-10-15 Thread Baptiste
Le 16 oct. 2015 06:27, "Mark Betz"  a écrit :
>
> Hi, I have a hopefully quick question about setting up backends for
resolvable internal service addresses.
>
> We are putting together a cluster on Google Container Engine (kubernetes)
and have haproxy deployed in a container based on Ubuntu 14.04 LTS.
>
> Our backend server specifications are declared using an internal
resolvable service name. For example:
>
> logdata-svc
> logdata-svc.default.svc.cluster.local
>
> Both of these names correctly resolve to an internal IP address in the
range 10.xxx.xxx.xxx, as shown by installing dnsutils into the container
and running nslookup on the name prior to starting haproxy:
>
> Name: logdata-svc.default.svc.cluster.local
> Address: 10.179.xxx.xxx
>
> However regardless of whether I use the short form or fqdn haproxy fails
to start, emitting the following to stdout:
>
> [ALERT] 288/041651 (52) : parsing [/etc/haproxy/haproxy.cfg:99] : 'server
logdata-service' : invalid address: 'logdata-svc.default.svc.cluster.local'
in 'logdata-svc.default.svc.cluster.local:1'
>
> We can use IPV4 addresses in the config, but if we do so we would be
giving up a certain amount of flexibility and resilience obtained from the
kubedns service name resolution layer.
>
> Anything we can do here? Thanks!
>
> --
> Mark Betz
> Sr. Software Engineer
> icitizen
>
> Email: mark.b...@icitizen.com
> Twitter: @markbetz

Hi,

Weird. Configuration parsing is failing, which means it's a libc/system
problem.
Is your resolv.conf properly set up and the server responsive?
Can you run a tcpdump at haproxy's startup on your raw container (no
dnsutils installed)?

Baptiste
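Since the "invalid address" alert comes from parse-time resolution through the libc, a quick cross-check is to run the same libc call from inside the container. This is a standalone Python sketch (mine, not part of HAProxy) of roughly the lookup the config parser performs:

```python
import socket

def libc_resolves(host):
    """Addresses the libc resolver yields for `host`, or [] on failure --
    roughly the lookup haproxy's config parser performs at startup."""
    try:
        infos = socket.getaddrinfo(host, 80, proto=socket.IPPROTO_TCP)
    except socket.gaierror:
        return []
    return sorted({info[4][0] for info in infos})

# A name that comes back empty here will also trip haproxy's
# "invalid address" check at configuration parse time.
print(libc_resolves("logdata-svc.default.svc.cluster.local"))
```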


Re: Looking for help about "req.body" logging

2015-10-16 Thread Baptiste
Le 16 oct. 2015 10:46, "Alberto Zaccagni" <
alberto.zacca...@lazywithclass.com> a écrit :
>
> Hello,
>
> Sorry for the repost, but it's really not clear to me how to use this
feature: "Processing of HTTP request body" in
http://blog.haproxy.com/2015/10/14/whats-new-in-haproxy-1-6/, can it be
used to log the body of a request?
>
> I am trying to use it like this in both my HTTP and HTTPS frontends:
>
> option http-buffer-request
> log-format "%[req.body]"
>
> The error I get is "'log-format' : sample fetch  may not be
reliably used here because it needs 'HTTP request headers' which is not
available here.", where should I be using it?
> Does that mean that we cannot log req.body at all or that I have to
enable another option before trying to use it?
>
> Any hint or help is much appreciated.
> Thank you.
>
> Cheers

Have you turned on 'mode http'?

Baptiste


Re: Unexpected error messages

2015-10-16 Thread Baptiste
Is your problem fixed?

We may emit a warning for such configuration.

Baptiste
Le 15 oct. 2015 07:34, "Krishna Kumar (Engineering)" <
krishna...@flipkart.com> a écrit :

> Hi Baptiste,
>
> Thank you for the advice and solution, I didn't realize retries had to be
> >1.
>
> Regards,
> - Krishna Kumar
>
> On Wed, Oct 14, 2015 at 7:51 PM, Baptiste  wrote:
> > On Wed, Oct 14, 2015 at 3:03 PM, Krishna Kumar (Engineering)
> >  wrote:
> >> Hi all,
> >>
> >> We are occasionally getting these messages (about 25 errors/per
> occurrence,
> >> 1 occurrence per hour) in the *error* log:
> >>
> >> 10.xx.xxx.xx:60086 [14/Oct/2015:04:21:25.048] Alert-FE
> >> Alert-BE/10.xx.xx.xx 0/5000/1/32/+5033 200 +149 - - --NN 370/4/1/0/+1
> >> 0/0 {10.xx.x.xxx||367||} {|||432} "POST /fk-alert-service/nsca
> >> HTTP/1.1"
> >> 10.xx.xxx.xx:60046 [14/Oct/2015:04:21:19.936] Alert-FE
> >> Alert-BE/10.xx.xx.xx 0/5000/1/21/+5022 200 +149 - - --NN 302/8/2/0/+1
> >> 0/0 {10.xx.x.xxx||237||} {|||302} "POST /fk-alert-service/nsca
> >> HTTP/1.1"
> >> ...
> >>
> >> We are unsure what errors were seen at the client. What could possibly
> be the
> >> reason for these? Every error line has retries value as "+1", as seen
> above. The
> >> specific options in the configuration are (HAProxy v1.5.12):
> >>
> >> 1. "retries 1"
> >> 2. "option redispatch"
> >> 3. "option logasap"
> >> 4. "timeout connect 5000", server and client timeouts are high - 300s
> >> 5. Number of backend servers is 7.
> >> 6. ulimit is 512K
> >> 7. balance is "roundrobin"
> >>
> >> Thank you for any leads/insights.
> >>
> >> Regards,
> >> - Krishna Kumar
> >>
> >
> > Hi Krishna,
> >
> > First, I don't understand how the "retries 1" and the "redispatch"
> > works together in your case.
> > I mean, redispatch is supposed to be applied at 'retries - 1'...
> >
> > So basically, what may be happening:
> > - because of logasap, HAProxy does not wait until the end of the
> > session to generate the log line
> > - this log is in error because a connection was attempted (and failed)
> > on a server
> >
> > You should not setup any ulimit and let HAProxy do the job for you.
> >
> > Baptiste
>


Re: haproxy + ipsec -> general socket error

2015-10-16 Thread Baptiste
Have you 'tuned' your sysctls?

Baptiste
Le 16 oct. 2015 14:56, "wbmtfrdlxm"  a écrit :

> what linux distribution are you using?
>
> light traffic is simulating 100 users browsing a website, simple http
> requests. we have 2 backend nodes and after a while, both of them become
> unavailable. after lowering or stopping traffic, everything goes back to
> normal.
> without ipsec, no problem at all.
>
>
>  On Fri, 16 Oct 2015 14:40:51 +0200 *Jarno
> Huuskonen>* wrote 
>
> Hi,
>
> On Fri, Oct 16, wbmtfrdlxm wrote:
> > when using ipsec on the backend side, this error pops up in the haproxy
> log from time to time:
> >
> > Layer4 connection problem, info: "General socket error (No buffer space
> available)
>
> We're using ipsec(libreswan) on backend, but I haven't seen any problems
> with ipsec (just checked logs for past few months).
>
> > we have tried both strongswan and libreswan, error is still the same.
> there is nothing strange in the ipsec logs, connection seems stable. but as
> soon as we start generating some light traffic, haproxy loses connectivity
> with the backend nodes.
> > we are running centos 7, standard repositories.
>
> What's light traffice for you ? Have you tried w/out ipsec (does it
> work w/out problems) ?
>
> -Jarno
>
> --
> Jarno Huuskonen
>
>
>
>


Re: [call to comment] HAProxy's DNS resolution default query type

2015-10-20 Thread Baptiste
Hi all,

Thanks a lot for your feedback. Really valuable.
I'll discuss with Willy the best approach for the change.

Baptiste


On Mon, Oct 19, 2015 at 11:50 PM, Andrew Hayworth
 wrote:
> Hi all -
>
> Just to chime in, we just got bit by this in production. Our dns
> resolver (unbound) does not follow CNAMES -> A records when you send
> an ANY query type. This is by design, so I can't just configure it
> differently (and ripping out our DNS resolver is not immediately
> feasible).
>
> I therefore vote to stop sending the ANY query type, and instead rely
> on A and AAAA queries. I don't have any comments on behavior regarding
> NX behavior.
>
> NB: There is also support amongst some bigger internet companies to
> fully deprecate this query type:
> https://blog.cloudflare.com/deprecating-dns-any-meta-query-type/
>
> On Thu, Oct 15, 2015 at 12:49 PM, Lukas Tribus  wrote:
>>> I second this opinion. Removing ANY altogether would be the best case.
>>>
>>> In reality, I think it should use the OS's resolver libraries which
>>> in turn will honor whatever the admin has configured for preference
>>> order at the base OS level.
>>>
>>>
>>> As a sysadmin, one should reasonably expect that tweaking the
>>> preference knob at the OS level should affect most (and ideally, all)
>>> applications they are running rather than having to manually fiddle
>>> knobs at the OS and various application levels.
>>> If there is some discussion and *good* reasons to ignore the OS
>>> defaults, I feel this should likely be an *optional* config option
>>> in haproxy.cfg ie "use OS resolver, unless specifically told not to
>>> for $reason)
>>
>> Its exactly like you are saying.
>>
>> I don't think there is any doubt that HAproxy will bypass OS level
>> resolvers, since you are statically configuring DNS server IPs in the
>> haproxy configuration file.
>>
>> When you don't configure any resolvers, HAproxy does use libc's
>> gethostbyname() or getaddrinfo(), but both are fundamentally broken.
>>
>> That's why some applications have to implement their own resolvers
>> (including nginx).
>>
>> First of all the OS resolver doesn't provide the TTL value. So you would
>> have to guess or use fixed TTL values. Second, both calls are blocking,
>> which is a big no-go for any event-loop based application (for this
>> reason, it can only be queried at startup, not while the application
>> is running).
>>
>> Just configure a hostname without resolver parameters, and haproxy
>> will resolve your hostnames at startup via OS (and then maintain those
>> IP's).
>>
>>
>> Applications either have to implement a resolver on their own (haproxy,
>> nginx), or use yet another external library, like getdnsapi [1].
>>
>>
>> The point is: there is a reason for this implementation, and you can
>> fallback to OS resolvers without any problems (just with their drawbacks).
>>
>>
>>
>>
>> Regards,
>>
>> Lukas
>>
>>
>> [1] https://getdnsapi.net/
>>
>
>
>
> --
> - Andrew Hayworth



Re: [PATCH] MEDIUM: dns: Don't use the ANY query type

2015-10-20 Thread Baptiste
Hi Andrew,

There is a bug repeated twice in your code.
In both dns_reset_resolution() and trigger_resolution(), you use
"resolution->resolver_family_priority" before it is positioned. This
may lead to using the last resolution->resolver_family_priority, which
may be different than the server one.
Please move the line "resolution->resolver_family_priority =
s->resolver_family_priority;" before using the value stored in it.

Apart from this, it looks good.

Baptiste


On Tue, Oct 20, 2015 at 12:39 AM, Andrew Hayworth
 wrote:
> The ANY query type is weird, and some resolvers don't 'do the legwork'
> of resolving useful things like CNAMEs. Given that upstream resolver
> behavior is not always under the control of the HAProxy administrator,
> we should not use the ANY query type. Rather, we should use A or AAAA
> according to either the explicit preferences of the operator, or the
> implicit default (AAAA/IPv6).
>
> - Andrew Hayworth
>
> From 8ed172424cbd79197aacacd1fd89ddcfa46e213d Mon Sep 17 00:00:00 2001
> From: Andrew Hayworth 
> Date: Mon, 19 Oct 2015 22:29:51 +
> Subject: [PATCH] MEDIUM: dns: Don't use the ANY query type
>
> Basically, it's ill-defined and shouldn't really be used going forward.
> We can't guarantee that resolvers will do the 'legwork' for us and
> actually resolve CNAMES when we request the ANY query-type. Case in point
> (obfuscated, clearly):
>
>   PRODUCTION! ahaywo...@secret-hostname.com:~$
>   dig @10.11.12.53 ANY api.somestartup.io
>
>   ; <<>> DiG 9.8.4-rpz2+rl005.12-P1 <<>> @10.11.12.53 ANY api.somestartup.io
>   ; (1 server found)
>   ;; global options: +cmd
>   ;; Got answer:
>   ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62454
>   ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 4, ADDITIONAL: 0
>
>   ;; QUESTION SECTION:
>   ;api.somestartup.io.IN  ANY
>
>   ;; ANSWER SECTION:
>   api.somestartup.io. 20  IN  CNAME
> api-somestartup-production.ap-southeast-2.elb.amazonaws.com.
>
>   ;; AUTHORITY SECTION:
>   somestartup.io.   166687  IN  NS  ns-1254.awsdns-28.org.
>   somestartup.io.   166687  IN  NS  
> ns-1884.awsdns-43.co.uk.
>   somestartup.io.   166687  IN  NS  ns-440.awsdns-55.com.
>   somestartup.io.   166687  IN  NS  ns-577.awsdns-08.net.
>
>   ;; Query time: 1 msec
>   ;; SERVER: 10.11.12.53#53(10.11.12.53)
>   ;; WHEN: Mon Oct 19 22:02:29 2015
>   ;; MSG SIZE  rcvd: 242
>
> HAProxy can't handle that response correctly.
>
> Rather than try to build in support for resolving CNAMEs presented
> without an A record in an answer section (which may be a valid
> improvement further on), this change just skips ANY record types
> altogether. A and AAAA are much more well-defined and predictable.
>
> Notably, this commit preserves the implicit "Prefer IPV6 behavior."
> ---
>  include/types/dns.h |  3 ++-
>  src/checks.c|  6 +-
>  src/dns.c   |  6 +-
>  src/server.c| 18 +++---
>  4 files changed, 19 insertions(+), 14 deletions(-)
>
> diff --git a/include/types/dns.h b/include/types/dns.h
> index f8edb73..ea1a9f9 100644
> --- a/include/types/dns.h
> +++ b/include/types/dns.h
> @@ -161,7 +161,8 @@ struct dns_resolution {
>   unsigned int last_status_change; /* time of the latest DNS
> resolution status change */
>   int query_id; /* DNS query ID dedicated for this resolution */
>   struct eb32_node qid; /* ebtree query id */
> - int query_type; /* query type to send. By default DNS_RTYPE_ANY */
> + int query_type;
> + /* query type to send. By default DNS_RTYPE_A or DNS_RTYPE_AAAA
> depending on resolver_family_priority */
>   int status; /* status of the resolution being processed RSLV_STATUS_* */
>   int step; /* */
>   int try; /* current resolution try */
> diff --git a/src/checks.c b/src/checks.c
> index ade2428..d3cd567 100644
> --- a/src/checks.c
> +++ b/src/checks.c
> @@ -2214,7 +2214,11 @@ int trigger_resolution(struct server *s)
>   resolution->query_id = query_id;
>   resolution->qid.key = query_id;
>   resolution->step = RSLV_STEP_RUNNING;
> - resolution->query_type = DNS_RTYPE_ANY;
> + if (resolution->resolver_family_priority == AF_INET) {
> + resolution->query_type = DNS_RTYPE_A;
> + } else {
> + resolution->query_type = DNS_RTYPE_AAAA;
> + }
>   resolution->try = resolvers->resolve_retries;
>   resolution->try_cname = 0;
>   resolution->nb_responses = 0;
> diff --git a/src/dns.c b/src/dns.c
> index 7f71ac7..53b65ab 100644
> --- a/sr

Re: [PATCH] MEDIUM: dns: Don't use the ANY query type

2015-10-20 Thread Baptiste
> Also, we will have to address the issue that a server may just use
> a single address family, therefore we have to fall back between A
> and AAAA, because an NX on an AAAA query doesn't mean there are no
> A records.

Hi Lukas,

I do agree on this point.
A simple option in the resolvers section to instruct HAProxy not to
give up on NX and to fail over to the next family:
 option on-nx-try-next-family

The magic should happen in snr_resolution_error_cb().

Baptiste



Re: [PATCH] MEDIUM: dns: Don't use the ANY query type

2015-10-20 Thread Baptiste
Hi Andrew,

I've updated your patch quickly so Willy can integrate it.
I've also updated the commit message to follow Lukas's recommendations.

Baptiste

On Tue, Oct 20, 2015 at 2:26 PM, Baptiste  wrote:
> Hi Andrew,
>
> There is a bug repeated twice in your code.
> In both dns_reset_resolution() and trigger_resolution(), you use
> "resolution->resolver_family_priority" before it is positioned. This
> may lead to using the last resolution->resolver_family_priority, which
> may be different than the server one.
> Please move the line "resolution->resolver_family_priority =
> s->resolver_family_priority;" before using the value stored in it.
>
> Appart this, it looks good.
>
> Baptiste
>
>
> On Tue, Oct 20, 2015 at 12:39 AM, Andrew Hayworth
>  wrote:
>> The ANY query type is weird, and some resolvers don't 'do the legwork'
>> of resolving useful things like CNAMEs. Given that upstream resolver
>> behavior is not always under the control of the HAProxy administrator,
>> we should not use the ANY query type. Rather, we should use A or AAAA
>> according to either the explicit preferences of the operator, or the
>> implicit default (AAAA/IPv6).
>>
>> - Andrew Hayworth
>>
>> From 8ed172424cbd79197aacacd1fd89ddcfa46e213d Mon Sep 17 00:00:00 2001
>> From: Andrew Hayworth 
>> Date: Mon, 19 Oct 2015 22:29:51 +
>> Subject: [PATCH] MEDIUM: dns: Don't use the ANY query type
>>
>> Basically, it's ill-defined and shouldn't really be used going forward.
>> We can't guarantee that resolvers will do the 'legwork' for us and
>> actually resolve CNAMES when we request the ANY query-type. Case in point
>> (obfuscated, clearly):
>>
>>   PRODUCTION! ahaywo...@secret-hostname.com:~$
>>   dig @10.11.12.53 ANY api.somestartup.io
>>
>>   ; <<>> DiG 9.8.4-rpz2+rl005.12-P1 <<>> @10.11.12.53 ANY api.somestartup.io
>>   ; (1 server found)
>>   ;; global options: +cmd
>>   ;; Got answer:
>>   ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62454
>>   ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 4, ADDITIONAL: 0
>>
>>   ;; QUESTION SECTION:
>>   ;api.somestartup.io.IN  ANY
>>
>>   ;; ANSWER SECTION:
>>   api.somestartup.io. 20  IN  CNAME
>> api-somestartup-production.ap-southeast-2.elb.amazonaws.com.
>>
>>   ;; AUTHORITY SECTION:
>>   somestartup.io.   166687  IN  NS  
>> ns-1254.awsdns-28.org.
>>   somestartup.io.   166687  IN  NS  
>> ns-1884.awsdns-43.co.uk.
>>   somestartup.io.   166687  IN  NS  ns-440.awsdns-55.com.
>>   somestartup.io.   166687  IN  NS  ns-577.awsdns-08.net.
>>
>>   ;; Query time: 1 msec
>>   ;; SERVER: 10.11.12.53#53(10.11.12.53)
>>   ;; WHEN: Mon Oct 19 22:02:29 2015
>>   ;; MSG SIZE  rcvd: 242
>>
>> HAProxy can't handle that response correctly.
>>
>> Rather than try to build in support for resolving CNAMEs presented
>> without an A record in an answer section (which may be a valid
>> improvement further on), this change just skips ANY record types
>> altogether. A and AAAA are much more well-defined and predictable.
>>
>> Notably, this commit preserves the implicit "Prefer IPV6 behavior."
>> ---
>>  include/types/dns.h |  3 ++-
>>  src/checks.c|  6 +-
>>  src/dns.c   |  6 +-
>>  src/server.c| 18 +++---
>>  4 files changed, 19 insertions(+), 14 deletions(-)
>>
>> diff --git a/include/types/dns.h b/include/types/dns.h
>> index f8edb73..ea1a9f9 100644
>> --- a/include/types/dns.h
>> +++ b/include/types/dns.h
>> @@ -161,7 +161,8 @@ struct dns_resolution {
>>   unsigned int last_status_change; /* time of the latest DNS
>> resolution status change */
>>   int query_id; /* DNS query ID dedicated for this resolution */
>>   struct eb32_node qid; /* ebtree query id */
>> - int query_type; /* query type to send. By default DNS_RTYPE_ANY */
>> + int query_type;
>> + /* query type to send. By default DNS_RTYPE_A or DNS_RTYPE_AAAA
>> depending on resolver_family_priority */
>>   int status; /* status of the resolution being processed RSLV_STATUS_* */
>>   int step; /* */
>>   int try; /* current resolution try */
>> diff --git a/src/checks.c b/src/checks.c
>> index ade2428..d3cd567 100644
>> --- a/src/checks.c
>> +++ b/src/checks.c
>> @@ -2214,7 +2214,11 @@ int t

Re: [PATCH] MEDIUM: dns: Don't use the ANY query type

2015-10-20 Thread Baptiste
On Tue, Oct 20, 2015 at 9:09 PM, Lukas Tribus  wrote:
>> I don't know. I'm always only focused on the combination of user-visible
>> changes and risks of bugs (which are user-visible changes btw). So if we
>> can do it without breaking too much code, then it can be backported. What
>> we have now is something which is apparently insufficient to some users
>> so we can improve the situation. I wouldn't want to remove prefer-* or
>> change the options behavior or whatever for example.
>
> Ok, if we don't remove existing prefer-* keywords a 1.6 backport sounds
> possible without user visible breakage, great.
>
> lukas

Ok, just to make it clear, let me write a few conf examples:
- server home-v4 home-v4.mydomain check resolve-prefer ipv4
 => A then AAAA (failover on NX)
- server home-v4 home-v4.mydomain check v4only
 => A only (stop on NX)

If both 'resolve-prefer ipv[46]' and 'v[46]only' are set, in whatever
combination, then v[46]only applies, but configuration parsing may
return a warning.

So we don't break compatibility with current code and way of working!
Brilliant guys :)

Baptiste
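Those proposed semantics are small enough to pin down in code (a sketch of the rules discussed above — 'v4only'/'v6only' are still only a suggestion at this point, not implemented keywords):

```python
def query_plan(prefer=None, only=None):
    """Return (query types to try in order, config-warning flag) under
    the proposed rules: 'only' wins over 'prefer'; combining both warns."""
    warn = prefer is not None and only is not None
    if only == "ipv4":
        return ["A"], warn            # stop on NX
    if only == "ipv6":
        return ["AAAA"], warn
    if prefer == "ipv4":
        return ["A", "AAAA"], warn    # fail over on NX
    return ["AAAA", "A"], warn        # implicit default: prefer IPv6
```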



Re: [PATCH] MEDIUM: dns: Don't use the ANY query type

2015-10-21 Thread Baptiste
On Wed, Oct 21, 2015 at 1:24 PM, Lukas Tribus  wrote:
> Hi Robin,
>
>
>> Hey guys,
>>
>> Actually when you get an NXDOMAIN reply you can just stop resolving that
>> domain. Basically there are 2 types of "negative" replies in DNS:
>>
>> NODATA: basically this is when you don't get an error (NOERROR in dig),
>> but not the actual data you are looking for. You might have gotten some
>> CNAME data but no A or AAAA record (depending on what you wanted
>> obviously). This means that the actual domain name does exist, but
>> doesn't have data of the type you requested. The term NODATA is used in
>> DNS RFC's but it doesn't actually have its own error code.
>>
>> NXDOMAIN: This is denoted by the NXDOMAIN error code. It means that
>> either the domain you requested itself or the last target domain from a
>> CNAME does not exist at all (IE no data whatsoever) and there also isn't
>> a wildcard available that matches it. So if you asked for an A record,
>> getting an NXDOMAIN means there also won't be an AAAA record.
>>
>> The above explanation is a bit of an over simplification cause there are
>> also things like empty non-terminals which also don't have any data, but
>> instead of an NXDOMAIN actually return a NODATA (in most cases, there
>> are some authoritative servers that don't do it properly). But the end
>> result is that you can pretty much say that when you get NXDOMAIN, there
>> really is nothing there for you so you can just stop looking (at least
>> at that the current server).
>
> Thanks for clarifying, I didn't know about this. Good thing we didn't
> implement anything yet.
>
> Baptiste, whats the current behavior when an empty response with
> NOERROR is received?
>
> Regards,
>
> Lukas


Hi,

This is already handled: I detect responses without an NX code and with
no answer records (DNS_RESP_ANCOUNT_ZERO), or with no answer record
corresponding to the query (DNS_RESP_NO_EXPECTED_RECORD).

And of course, both cases above trigger a query type failover.

Baptiste
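Robin's taxonomy lines up with the codes Baptiste mentions; the decision can be sketched as a small classifier (a standalone illustration — the return strings merely echo haproxy's DNS_RESP_* names, this is not its code):

```python
NXDOMAIN = 3   # DNS RCODE for "name does not exist"

def classify(rcode, answers, wanted_type):
    """Classify a DNS response; `answers` is a list of (rtype, data)."""
    if rcode == NXDOMAIN:
        return "NXDOMAIN"            # nothing there at all: stop looking
    if not answers:
        return "ANCOUNT_ZERO"        # NODATA: fail over to next query type
    if all(rtype != wanted_type for rtype, _ in answers):
        return "NO_EXPECTED_RECORD"  # e.g. only a CNAME: fail over too
    return "OK"
```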



Re: Looking for help about "req.body" logging

2015-10-21 Thread Baptiste
Hi,

I guess this is because the sample applies to a request element while
logging happens after the response has been sent, so the data is no
longer available.
Look for capture in this page
http://blog.haproxy.com/2015/10/14/whats-new-in-haproxy-1-6/ and use
it to capture the body at the request time and log it.

If it still doesn't work, post your configuration.

Baptiste


On Wed, Oct 21, 2015 at 5:31 PM, Alberto Zaccagni
 wrote:
> Did anyone succeed in logging req.body?
> If so I would likely appreciate an example / some hints / a pointer into the
> docs, even though I've looked into this last one and could not find how to
> do it.
>
> Thank you
>
> Alberto
>
> On Fri, 16 Oct 2015 at 10:40 Alberto Zaccagni
>  wrote:
>>
>> Yes, I did turn it on. Or so I think, please have a look at my
>> configuration file:
>> https://gist.github.com/lazywithclass/d255bb4d2086b07be178
>>
>> Thank you
>>
>> Alberto
>>
>>
>> On Fri, 16 Oct 2015 at 10:12 Baptiste  wrote:
>>>
>>>
>>> Le 16 oct. 2015 10:46, "Alberto Zaccagni"
>>>  a écrit :
>>> >
>>> > Hello,
>>> >
>>> > Sorry for the repost, but it's really not clear to me how to use this
>>> > feature: "Processing of HTTP request body" in
>>> > http://blog.haproxy.com/2015/10/14/whats-new-in-haproxy-1-6/, can it be 
>>> > used
>>> > to log the body of a request?
>>> >
>>> > I am trying to use it like this in both my HTTP and HTTPS frontends:
>>> >
>>> > option http-buffer-request
>>> > log-format "%[req.body]"
>>> >
>>> > The error I get is "'log-format' : sample fetch  may not be
>>> > reliably used here because it needs 'HTTP request headers' which is not
>>> > available here.", where should I be using it?
>>> > Does that mean that we cannot log req.body at all or that I have to
>>> > enable another option before trying to use it?
>>> >
>>> > Any hint or help is much appreciated.
>>> > Thank you.
>>> >
>>> > Cheers
>>>
>>> Have you turned on 'mode http'?
>>>
>>> Baptiste



Re: Looking for help about "req.body" logging

2015-10-22 Thread Baptiste
You might have missed the most important part of my previous mail, so
I'm repeating it again:
"Look for capture in this page
http://blog.haproxy.com/2015/10/14/whats-new-in-haproxy-1-6/ and use
it to capture the body at the request time and log it."

Use req.body in the example and log capture.req.hdr in your
log-format, and you're done.
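Putting both pieces together, a frontend doing this could look roughly as follows (a sketch against HAProxy 1.6's declared-capture syntax; the capture length and log format are illustrative choices, not requirements):

```
frontend http-in
    bind *:80
    mode http
    option http-buffer-request
    declare capture request len 4096
    http-request capture req.body id 0
    log-format "%ci:%cp [%t] %ft body=%{+Q}[capture.req.hdr(0)]"
```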

Baptiste


On Thu, Oct 22, 2015 at 9:09 AM, Alberto Zaccagni
 wrote:
> Hello Baptiste,
>
> I've read both the 1.6 announcement and the docs about this feature, but I
> could not get it to work; I know I'm doing something wrong, I just don't
> know what.
> I've posted my configuration in the previous email, here is it:
> https://gist.github.com/lazywithclass/d255bb4d2086b07be178
>
> Thanks for your help
>
> Alberto
>
>
> On Wed, 21 Oct 2015, 9:58 p.m. Baptiste  wrote:
>>
>> Hi,
>>
>> I guess this is because the sample applies to a request element while
>> logging happens after the response has been sent, so data is not
>> available anymore.
>> Look for capture in this page
>> http://blog.haproxy.com/2015/10/14/whats-new-in-haproxy-1-6/ and use
>> it to capture the body at the request time and log it.
>>
>> If it still doesn't work, post your configuration.
>>
>> Baptiste
>>
>>
>> On Wed, Oct 21, 2015 at 5:31 PM, Alberto Zaccagni
>>  wrote:
>> > Did anyone succeed in logging req.body?
>> > If so I would likely appreciate an example / some hints / a pointer into
>> > the
>> > docs, even though I've looked into this last one and could not find how
>> > to
>> > do it.
>> >
>> > Thank you
>> >
>> > Alberto
>> >
>> > On Fri, 16 Oct 2015 at 10:40 Alberto Zaccagni
>> >  wrote:
>> >>
>> >> Yes, I did turn it on. Or so I think, please have a look at my
>> >> configuration file:
>> >> https://gist.github.com/lazywithclass/d255bb4d2086b07be178
>> >>
>> >> Thank you
>> >>
>> >> Alberto
>> >>
>> >>
>> >> On Fri, 16 Oct 2015 at 10:12 Baptiste  wrote:
>> >>>
>> >>>
>> >>> Le 16 oct. 2015 10:46, "Alberto Zaccagni"
>> >>>  a écrit :
>> >>> >
>> >>> > Hello,
>> >>> >
>> >>> > Sorry for the repost, but it's really not clear to me how to use
>> >>> > this
>> >>> > feature: "Processing of HTTP request body" in
>> >>> > http://blog.haproxy.com/2015/10/14/whats-new-in-haproxy-1-6/, can it
>> >>> > be used
>> >>> > to log the body of a request?
>> >>> >
>> >>> > I am trying to use it like this in both my HTTP and HTTPS frontends:
>> >>> >
>> >>> > option http-buffer-request
>> >>> > log-format "%[req.body]"
>> >>> >
>> >>> > The error I get is "'log-format' : sample fetch  may not
>> >>> > be
>> >>> > reliably used here because it needs 'HTTP request headers' which is
>> >>> > not
>> >>> > available here.", where should I be using it?
>> >>> > Does that mean that we cannot log req.body at all or that I have to
>> >>> > enable another option before trying to use it?
>> >>> >
>> >>> > Any hint or help is much appreciated.
>> >>> > Thank you.
>> >>> >
>> >>> > Cheers
>> >>>
>> >>> Have you turned on 'mode http'?
>> >>>
>> >>> Baptiste



Re: HAproxy version 1.5 on centos 6.5

2015-10-22 Thread Baptiste
Hi,

Either download the right RPM for your operating system version or
install it from source.

Baptiste

On Thu, Oct 22, 2015 at 10:00 AM, Wilence Yao  wrote:
> Hi,
> I am a software developer from China. HAProxy is widely used in our company
> and it helps keep our system stable and available. Thank you very much for
> your efforts.
> To make our system more stable and highly available, our engineers combine
> haproxy and keepalived to protect against a single point of failure in the
> load balancer. It's very exciting to know haproxy peers synchronize sessions.
>
> Unfortunately, our most  production environments are centos 6.5. Rpm
> installation output:
>
>>>>
> $ rpm -ivh haproxy-1.5.14-3.1.x86_64.rpm
>
> warning: haproxy-1.5.14-3.1.x86_64.rpm: Header V3 RSA/SHA256 Signature, key
> ID 8e1431d5: NOKEY
>
> error: Failed dependencies:
>
> libc.so.6(GLIBC_2.14)(64bit) is needed by haproxy-1.5.14-1.fc22.x86_64
>
> libc.so.6(GLIBC_2.15)(64bit) is needed by haproxy-1.5.14-1.fc22.x86_64
>
> libpcre.so.1()(64bit) is needed by haproxy-1.5.14-1.fc22.x86_64
>
> systemd is needed by haproxy-1.5.14-1.fc22.x86_64
>>>>
> Because of the systemd dependency, we just can't install haproxy v1.5 on
> CentOS 6.5.
>
> Do you have any solution or idea about this problem?
>
>
> Thanks for any response.
>
> Best Regards.
>
>
> Wilence Yao



Re: Multiplexing multiple services behind one agent (feature suggestion; patch attached)

2015-10-22 Thread Baptiste
On Thu, Oct 22, 2015 at 3:59 AM, James Brown  wrote:
> Hello haproxy@:
>
> My name is James Brown; I wrote a small piece of software called hacheck
> (https://github.com/Roguelazer/hacheck) which is designed to be a healthcheck
> proxy for decentralized load balancer control (remove a node from a load
> balancer without knowing where the load balancers are; helpful once you
> start to have a truly, stupidly large number of load balancers).
>
> I am interested in using agent-checks instead of co-opting the existing
> httpchk mechanism; unfortunately, it looks like there's no convenient way to
> multiplex multiple services onto a single agent-port and reasonably
> disambiguate them. For example, it'd be great if I could have a server which
> runs one agent-check responder and can `MAINT` any of a dozen (or a hundred)
> different services running on this box.
>
> I've attached a small patch which adds a new server parameter (agent-send),
> a static string that will be sent to the agent on every check.
> This allows me to generate configs that look like
>
> backend foo
> server web1 10.1.2.1:8001 agent-check agent-port 3334 agent-send 
> "foo/web1\n"
> server web2 10.1.2.2:8001 agent-check agent-port 3334 agent-send 
> "foo/web2\n"
>
> backend bar
> server web1 10.1.2.1:8002 agent-check agent-port 3334 agent-send 
> "bar/web1\n"
> server web2 10.1.2.2:8002 agent-check agent-port 3334 agent-send 
> "bar/web2\n"
>
> And have a single service (running on port 3334) which can easily MAINT or
> UP either "foo" or "bar" depending on the value that it receives.
>
> The patch seems to work in my limited testing (that is to say, HAProxy sends
> the string and doesn't segfault or leak infinite amounts of RAM).
>
> Does this sound useful to anyone else? Is it worth upstreaming the patch? I
> welcome your thoughts.
> --
> James Brown
> Engineer
> EasyPost

Hi James,

This is interesting.
That said, I'm suggesting an improvement: use log-format variables.

So your configuration would become:

backend foo
  default-server agent-send "%b/%s\n"
  server web1 10.1.2.1:8001 agent-check agent-port 3334
  server web2 10.1.2.2:8001 agent-check agent-port 3334

Baptiste
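For illustration, here is a minimal sketch (in Python, not part of HAProxy) of the multiplexing agent James describes, assuming HAProxy delivers the configured agent-send string as a single "<backend>/<server>\n" line and expects one status word back. The service names, port number, and states below are purely illustrative.

```python
import socketserver
import threading

# Hypothetical per-service state table: "maint" drains a service,
# "up" keeps it in rotation. Keys match the agent-send strings
# ("<backend>/<server>") from the configuration above.
SERVICE_STATE = {
    "foo/web1": "up",
    "foo/web2": "up",
    "bar/web1": "maint",
    "bar/web2": "up",
}

def respond(key: str) -> str:
    """Map one agent-send line to one agent-check status line."""
    return SERVICE_STATE.get(key.strip(), "up") + "\n"

class AgentHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # HAProxy connects, sends the agent-send string, reads our reply.
        line = self.rfile.readline().decode("ascii")
        self.wfile.write(respond(line).encode("ascii"))

def serve(port: int = 3334) -> socketserver.TCPServer:
    """Start the agent responder in a background thread."""
    srv = socketserver.TCPServer(("0.0.0.0", port), AgentHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv
```

A single such responder on port 3334 can then MAINT or UP any backend/server pair, without co-opting the httpchk mechanism.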



Re: Looking for help about "req.body" logging

2015-10-22 Thread Baptiste
To share this information with everyone, here is what Alberto might have done:

global
 log 127.0.0.1:514 local0 info

defaults
 log global
 log-format "body: %[capture.req.hdr(0)]"
 mode http

frontend f
 declare capture request len 8192 # id=0 to store request body
 bind 127.0.0.1:8001
 http-request capture req.body id 0
 default_backend b

backend b
 server s 127.0.0.1:8000


Log generated when a body is sent:
 localhost haproxy[25012]: body: foo:bar
and when no body are sent:
 localhost haproxy[25012]: body: -

Baptiste


On Thu, Oct 22, 2015 at 10:30 AM, Alberto Zaccagni
 wrote:
> Sorry for skipping over that part, I thought I've understood what the
> example in http://blog.haproxy.com/2015/10/14/whats-new-in-haproxy-1-6/
> meant, but I did not.
> I now get it and it worked, thanks Baptiste.
>
> Alberto
>
> On Thu, 22 Oct 2015 at 08:53 Baptiste  wrote:
>>
>> You might have missed the most important part of my previous mail, so
>> I'm repeating it again:
>> "Look for capture in this page
>> http://blog.haproxy.com/2015/10/14/whats-new-in-haproxy-1-6/ and use
>> it to capture the body at the request time and log it."
>>
>> Use req.body in the example and log the capture.req.hdr in your
>> log-format and you're done.
>>
>> Baptiste
>>
>>
>> On Thu, Oct 22, 2015 at 9:09 AM, Alberto Zaccagni
>>  wrote:
>> > Hello Baptiste,
>> >
>> > I've read both the 1.6 announcement and the docs about this feature, but I
>> > could not get it to work. I know I'm doing something wrong, I just don't
>> > know what.
>> > I've posted my configuration in the previous email, here is it:
>> > https://gist.github.com/lazywithclass/d255bb4d2086b07be178
>> >
>> > Thanks for your help
>> >
>> > Alberto
>> >
>> >
>> > On Wed, 21 Oct 2015, 9:58 p.m. Baptiste  wrote:
>> >>
>> >> Hi,
>> >>
>> >> I guess this is because the sample applies to a request element while
>> >> logging happens after the response has been sent, so data is not
>> >> available anymore.
>> >> Look for capture in this page
>> >> http://blog.haproxy.com/2015/10/14/whats-new-in-haproxy-1-6/ and use
>> >> it to capture the body at the request time and log it.
>> >>
>> >> If it still doesn't work, post your configuration.
>> >>
>> >> Baptiste
>> >>
>> >>
>> >> On Wed, Oct 21, 2015 at 5:31 PM, Alberto Zaccagni
>> >>  wrote:
>> >> > Did anyone succeeded in logging req.body?
>> >> > If so I would likely appreciate an example / some hints / a pointer
>> >> > into
>> >> > the
>> >> > docs, even though I've looked into this last one and could not find
>> >> > how
>> >> > to
>> >> > do it.
>> >> >
>> >> > Thank you
>> >> >
>> >> > Alberto
>> >> >
>> >> > On Fri, 16 Oct 2015 at 10:40 Alberto Zaccagni
>> >> >  wrote:
>> >> >>
>> >> >> Yes, I did turn it on. Or so I think, please have a look at my
>> >> >> configuration file:
>> >> >> https://gist.github.com/lazywithclass/d255bb4d2086b07be178
>> >> >>
>> >> >> Thank you
>> >> >>
>> >> >> Alberto
>> >> >>
>> >> >>
>> >> >> On Fri, 16 Oct 2015 at 10:12 Baptiste  wrote:
>> >> >>>
>> >> >>>
>> >> >>> Le 16 oct. 2015 10:46, "Alberto Zaccagni"
>> >> >>>  a écrit :
>> >> >>> >
>> >> >>> > Hello,
>> >> >>> >
>> >> >>> > Sorry for the repost, but it's really not clear to me how to use
>> >> >>> > this
>> >> >>> > feature: "Processing of HTTP request body" in
>> >> >>> > http://blog.haproxy.com/2015/10/14/whats-new-in-haproxy-1-6/, can
>> >> >>> > it
>> >> >>> > be used
>> >> >>> > to log the body of a request?
>> >> >>> >
>> >> >>> > I am trying to use it like this in both my HTTP and HTTPS
>> >> >>> > frontends:
>> >> >>> >
>> >> >>> > option http-buffer-request
>> >> >>> > log-format "%[req.body]"
>> >> >>> >
>> >> >>> > The error I get is "'log-format' : sample fetch  may
>> >> >>> > not
>> >> >>> > be
>> >> >>> > reliably used here because it needs 'HTTP request headers' which
>> >> >>> > is
>> >> >>> > not
>> >> >>> > available here.", where should I be using it?
>> >> >>> > Does that mean that we cannot log req.body at all or that I have
>> >> >>> > to
>> >> >>> > enable another option before trying to use it?
>> >> >>> >
>> >> >>> > Any hint or help is much appreciated.
>> >> >>> > Thank you.
>> >> >>> >
>> >> >>> > Cheers
>> >> >>>
>> >> >>> Have you turned on 'mode http'?
>> >> >>>
>> >> >>> Baptiste



Re: no free ports && tcp_timestamps

2015-10-22 Thread Baptiste
Hi Luca,

It seems your clients are closing connections instead of the servers,
leading HAProxy to run out of free ports to connect from on the
server side.
Actually, the "source" directive should help fix your issue, since
you would then have 64K ports per client IP to connect to servers
instead of relying only on the HAProxy box's local IP and its single
set of 64K ports.
That said, the source directive may be improved by using "clientip"
instead of "client".
More information here:
  
http://cbonte.github.io/haproxy-dconv/snapshot/configuration-1.6.html#source%20%28Alphabetically%20sorted%20keywords%20reference%29

more about source port exhaustion (applied to mysql):
  
http://blog.haproxy.com/2012/12/12/haproxy-high-mysql-request-rate-and-tcp-source-port-exhaustion/

Don't forget to set sysctl net.ipv4.ip_local_port_range

Baptiste
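A back-of-the-envelope sketch of the arithmetic behind source-port exhaustion (the figures are illustrative defaults, not measurements from Luca's boxes): each finished connection holds its source port in TIME_WAIT, so one source IP can only sustain roughly port_range / time_wait new connections per second towards a given destination ip:port.

```python
def max_conn_rate(port_range: int, time_wait_s: int = 60) -> float:
    """Sustained new connections/sec one source IP can open to a single
    destination ip:port before exhausting ephemeral ports."""
    return port_range / time_wait_s

# Typical default net.ipv4.ip_local_port_range = 32768..61000
default_ports = 61000 - 32768              # 28232 usable ports
# After widening, e.g. net.ipv4.ip_local_port_range = 1024..65535
widened_ports = 65535 - 1024               # 64511 usable ports

print(round(max_conn_rate(default_ports)))   # -> 471
print(round(max_conn_rate(widened_ports)))   # -> 1075
```

With "usesrc clientip", this budget applies per client IP instead of per HAProxy box, which is why the source directive relieves the "no free ports" errors.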

On Thu, Oct 22, 2015 at 1:56 PM, luca boncompagni  wrote:
> Hi to all,
> On my production server running on fedora 20 and haproxy 1.5.2:
>
> Linux prod-lb01.prod 3.15.10-200.fc20.x86_64 #1 SMP Thu Aug 14 15:39:24 UTC
> 2014 x86_64 x86_64 x86_64 GNU/Linux
> [root@prod-lb01 ~]# rpm -qa | grep hapro
> haproxy-debuginfo-1.5.2-1.fc20.x86_64
> haproxy-1.5.2-1.fc20.x86_64
>
> after disabling tcp_timestamps for security reasons
> (http://www.forensicswiki.org/wiki/TCP_timestamps):
>
> [root@prod-lb01 ~]# echo 0 > /proc/sys/net/ipv4/tcp_timestamps
>
> I get a lot of "no free ports" in the log and clients receive a connection
> reset:
>
> [root@prod-lb01 ~]# wc -l /var/log/haproxy/haproxy-20151021.log
> 841275 /var/log/haproxy/haproxy-20151021.log
> [root@prod-lb01 ~]# grep -c Connect /var/log/haproxy/haproxy-20151021.log
> 29091
> [root@prod-lb01 ~]# grep Connect /var/log/haproxy/haproxy-20151021.log   |
> grep -c 'Oct 21 14:57:11 '
> 19
>
> My configuration set the retries number to 18:
>
> defaults
> mode tcp
> log global
> option  dontlognull
> option  tcplog
> option  redispatch
> timeout connect 10s
> timeout client 3600s
> timeout client-fin 60s
> timeout server 3600s
> #timeout server-fin 60s
> maxconn 2
> # Set retries needed with balance source to avoid connection errors on
> the client side
> # With: "check inter 10s fastinter 2s fall 3" and considering every
> retry waits 1 second:
> # set retries >= inter + fastinter * fall = 10 + 2 * 3 = 16
> retries 18
> default-server inter 10s fastinter 2s fall 3
>
> frontend ssl
> bind 192.168.1.4:443
> bind 192.168.2.10:443
> default_backend ssl
>
> backend ssl
> balance source
> source  0.0.0.0 usesrc client
> option  allbackups
> server  web01 192.168.1.21:4443 check
> server  web02 192.168.1.22:4443 check
> server  web03 192.168.1.23:4443 check
> server  web04 192.168.1.24:4443 check
> server  sorry01 192.168.1.31:4443 backup check
> server  sorry02 192.168.1.32:4443 backup check
>
> I upgraded to the fedora 21 and haproxy 1.5.14:
>
> Linux prod-lb02.prod 4.1.5-100.fc21.x86_64 #1 SMP Tue Aug 11 00:24:23 UTC
> 2015 x86_64 x86_64 x86_64 GNU/Linux
> [root@prod-lb02 ~]# rpm -qa | grep hapro
> haproxy-1.5.14-1.fc21.x86_64
>
> and I get the same rate of errors.
>
> If I re-enable tcp timestamps:
>
> [root@prod-lb01 ~]# echo 1 > /proc/sys/net/ipv4/tcp_timestamps
>
> everything works well in both versions of Fedora.
>
> Do you have any idea about a resolution?
>
> Luca



Re: DNS resolution problem on 1.6.1-1ppa1~trusty

2015-10-27 Thread Baptiste
On Tue, Oct 27, 2015 at 11:44 AM, Ben Tisdall  wrote:
> Hi and thanks for a great load balancer. We're developing a much more
> complex proxy ruleset and being able to switch back to haproxy now
> that it supports DNS resolution was a huge relief!
>
> Unfortunately DNS resolution is not doing what I expect given the
> configuration. When the downstream ELB that the server points to
> switches IP addresses, the backend fails with an L4 timeout on the
> check. DNS queries are being made, see:
> https://gist.github.com/btisdall/31b57b57fee19dc79637
>
> This is the output of "show stat resolvers":
>
> Resolvers section aws
>  nameserver aws_0:
>   sent: 2892976
>   valid: 2887729
>   update: 0
>   cname: 0
>   cname_error: 0
>   any_err: 0
>   nx: 0
>   timeout: 0
>   refused: 0
>   other: 0
>   invalid: 2887729
>   too_big: 0
>   truncated: 0
>   outdated: 0
>
> Note that the "valid" and "invalid" counts increase in lockstep.
> Switching to "resolve-prefer ipv4" had no effect on this.
>
> Config
> =
>
> resolvers aws
>   nameserver aws_0 10.111.0.2:53
>
> # ...
>
> server myserver some-server.example.com:80 check resolvers aws
>
> Build Options
> ==
>
> HA-Proxy version 1.6.1 2015/10/20
> Copyright 2000-2015 Willy Tarreau 
>
> Build options :
>   TARGET  = linux2628
>   CPU = generic
>   CC  = gcc
>   CFLAGS  = -g -O2 -fstack-protector --param=ssp-buffer-size=4
> -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2
>   OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1
>
> Default settings :
>   maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
>
> Encrypted password support via crypt(3): yes
> Built with zlib version : 1.2.8
> Compression algorithms supported : identity("identity"),
> deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
> Built with OpenSSL version : OpenSSL 1.0.1f 6 Jan 2014
> Running on OpenSSL version : OpenSSL 1.0.1f 6 Jan 2014
> OpenSSL library supports TLS extensions : yes
> OpenSSL library supports SNI : yes
> OpenSSL library supports prefer-server-ciphers : yes
> Built with PCRE version : 8.31 2012-07-06
> PCRE library supports JIT : no (USE_PCRE_JIT not set)
> Built with Lua version : Lua 5.3.1
> Built with transparent proxy support using: IP_TRANSPARENT
> IPV6_TRANSPARENT IP_FREEBIND
>
> Available polling systems :
>   epoll : pref=300,  test result OK
>poll : pref=200,  test result OK
>  select : pref=150,  test result OK
> Total: 3 (3 usable), will use epoll.
>
> Regards,
>
> --
> Ben
>


Hi Ben,

I can't reproduce the problem with the git version.
I'll try with 1.6.1, but the DNS code is supposed to be the same in
both versions for now.

I've set up the following Amazon lab:
- 1 instance with HAProxy running, pointing to 1 ELB
- 1 ELB instance taking traffic from the haproxy above and
load-balancing haproxy's stats page from the same server
- 1 instance to inject traffic on the ELB to force it to change its IP
address after a few minutes

HTTP stream is like: public > haproxy:8080 > elb:80 > haproxy:80
It works like a charm.
I triggered a DNS change on the ELB by massively injecting traffic, and
here is the output of the DNS stats:

Resolvers section aws
 nameserver aws1:
  sent: 95
  valid: 95
  update: 1
  cname: 0
  cname_error: 0
  any_err: 0
  nx: 0
  timeout: 0
  refused: 0
  other: 0
  invalid: 0
  too_big: 0
  truncated: 0
  outdated: 0


Here is my configuration:

global
 daemon
 log 127.0.0.1:514 local0 info
 stats socket /tmp/socket level admin
 stats timeout 10m

resolvers aws
 nameserver aws1 172.31.0.2:53

defaults HTTP
 mode http
 timeout client 10s
 timeout connect 4s
 timeout server 10s

frontend f
 bind :8080
 default_backend b

backend b
 server s ${LBNAME}:80 check resolvers aws resolve-prefer ipv4

frontend s
 bind :80
 stats enable
 stats uri /stats
 stats show-legends
 http-request redirect location /stats if { path / }



Please capture a real pcap using tcpdump and send it to me privately.

You also seem to use a CNAME which points to your ELB amazon name.
Could you let me know how you set this up, so I can try to reproduce
the issue in my lab?

Maybe the CNAME parsing is broken.

Baptiste



Re: DNS resolution problem on 1.6.1-1ppa1~trusty

2015-10-27 Thread Baptiste
On Wed, Oct 28, 2015 at 12:13 AM, Baptiste  wrote:
> On Tue, Oct 27, 2015 at 11:44 AM, Ben Tisdall  
> wrote:
>> Hi and thanks for a great load balancer. We're developing a much more
>> complex proxy ruleset and being able to switch back to haproxy now
>> that it supports DNS resolution was a huge relief!
>>
>> Unfortunately DNS resolution is not doing what I expect given the
>> configuration. When the downstream ELB that the server points to
>> switches IP addresses, the backend fails with an L4 timeout on the
>> check. DNS queries are being made, see:
>> https://gist.github.com/btisdall/31b57b57fee19dc79637
>>
>> This is the output of "show stat resolvers":
>>
>> Resolvers section aws
>>  nameserver aws_0:
>>   sent: 2892976
>>   valid: 2887729
>>   update: 0
>>   cname: 0
>>   cname_error: 0
>>   any_err: 0
>>   nx: 0
>>   timeout: 0
>>   refused: 0
>>   other: 0
>>   invalid: 2887729
>>   too_big: 0
>>   truncated: 0
>>   outdated: 0
>>
>> Note that  "valid" and "invalid" counts increase in exact step.
>> Switching to "resolve-prefer ipv4" had no effect on this.
>>
>> Config
>> =
>>
>> resolvers aws
>>   nameserver aws_0 10.111.0.2:53
>>
>> # ...
>>
>> server myserver some-server.example.com:80 check resolvers aws
>>
>> Build Options
>> ==
>>
>> HA-Proxy version 1.6.1 2015/10/20
>> Copyright 2000-2015 Willy Tarreau 
>>
>> Build options :
>>   TARGET  = linux2628
>>   CPU = generic
>>   CC  = gcc
>>   CFLAGS  = -g -O2 -fstack-protector --param=ssp-buffer-size=4
>> -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2
>>   OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1
>>
>> Default settings :
>>   maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
>>
>> Encrypted password support via crypt(3): yes
>> Built with zlib version : 1.2.8
>> Compression algorithms supported : identity("identity"),
>> deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
>> Built with OpenSSL version : OpenSSL 1.0.1f 6 Jan 2014
>> Running on OpenSSL version : OpenSSL 1.0.1f 6 Jan 2014
>> OpenSSL library supports TLS extensions : yes
>> OpenSSL library supports SNI : yes
>> OpenSSL library supports prefer-server-ciphers : yes
>> Built with PCRE version : 8.31 2012-07-06
>> PCRE library supports JIT : no (USE_PCRE_JIT not set)
>> Built with Lua version : Lua 5.3.1
>> Built with transparent proxy support using: IP_TRANSPARENT
>> IPV6_TRANSPARENT IP_FREEBIND
>>
>> Available polling systems :
>>   epoll : pref=300,  test result OK
>>poll : pref=200,  test result OK
>>  select : pref=150,  test result OK
>> Total: 3 (3 usable), will use epoll.
>>
>> Regards,
>>
>> --
>> Ben
>>
>
>
> Hi Ben,
>
> I can't reproduce the problem with git version.
> I'll try with 1.6.1, but DNS code is supposed to be the same between
> both versions for now.
>
> I've set up the following Amazon lab:
> - 1 instance with HAProxy running, pointing to 1 ELB
> - 1 ELB instance taking traffic from the haproxy above and
> load-balancing haproxy's stats page from the same server
> - 1 instance to inject traffic on the ELB to force it to change its IP
> address after a few minutes
>
> HTTP stream is like: public > haproxy:8080 > elb:80 > haproxy:80
> It works like a charm.
> I triggered a DNS change on the ELB by massively injecting traffic and
> here is the output of DNS stats:
>
> Resolvers section aws
>  nameserver aws1:
>   sent: 95
>   valid: 95
>   update: 1
>   cname: 0
>   cname_error: 0
>   any_err: 0
>   nx: 0
>   timeout: 0
>   refused: 0
>   other: 0
>   invalid: 0
>   too_big: 0
>   truncated: 0
>   outdated: 0
>
>
> Here is my configuration:
>
> global
>  daemon
>  log 127.0.0.1:514 local0 info
>  stats socket /tmp/socket level admin
>  stats timeout 10m
>
> resolvers aws
>  nameserver aws1 172.31.0.2:53
>
> defaults HTTP
>  mode http
>  timeout client 10s
>  timeout connect 4s
>  timeout server 10s
>
> frontend f
>  bind :8080
>  default_backend b
>
> backend b
>  server s ${LBNAME}:80 check resolvers aws resolve-prefer ipv4
>
> frontend s
>  bind :80
>  stats enable
>  stats uri /stats
>  stats show-legends
>  http-request redirect location /stats if { path / }
>
>
>
> Please take a real pcap file using tcpdump and send it to me privately.
>
> You also seem to use a CNAME which points to your ELB amazon name.
> Could you let me know how you setup this, so I can try to reproduce
> the issue in my lab?
>
> Maybe the CNAME parsing is broken.
>
> Baptiste


Ok, I used my personal domain name to create a CNAME pointing to my
internal ELB name, and I can now reproduce the problem:
Resolvers section aws
 nameserver aws1:
  sent: 10485
  valid: 10469
  update: 0
  cname: 0
  cname_error: 0
  any_err: 0
  nx: 12
  timeout: 0
  refused: 0
  other: 0
  invalid: 10469
  too_big: 0
  truncated: 0
  outdated: 0

Now, let's dig in there :)

Baptiste



Re: HA Proxy - packet capture functionality

2015-10-27 Thread Baptiste
Hi Javier,

Are you aware of HAProxy logs and its termination states?
It says on which side (client / server) a problem occurred, as well as
what type of problem.
Maybe analyzing the logs will save you the tedious job of
analyzing a packet capture.

Baptiste


On Tue, Oct 27, 2015 at 4:18 PM, Javier Torres  wrote:
> Hello,
>
>
>
> I’m currently working toward troubleshooting an application that is using HA
> Proxy for load balancing and would like to leverage this great tool to help
> correct the problem.  We would like to know how we can turn on packet
> capture tool?
>
>
>
> We’re working to understand whether the issue is network or application
> related and would like to gather some packet captures in order to understand
> the behavior.
>
>
>
> Ideally, we would like to filter the capture with the source ip address of
> the remote site.
>
>
>
> Can you kindly advise how we can achieve this?
>
>
>
> Thank you in advance.
>
>
>
> Rgds,
>
> Javier



Re: DNS resolution problem on 1.6.1-1ppa1~trusty

2015-10-27 Thread Baptiste
Ben,

I found a couple of bugs:
#1 an incomplete end of processing when the queried hostname can't be
found in the response. This leads to the query loop you may have
observed.
#2 an error in the way we parse CNAME responses, leading to returning an
error when validating a CNAME (this triggers bug #1).

Please find attached a couple of patches; give them a try and
report whether you still have the issue.

Baptiste
From 67687363df5e2b5c82f12ecf2c560d22f9da795c Mon Sep 17 00:00:00 2001
From: Baptiste Assmann 
Date: Wed, 28 Oct 2015 02:03:32 +0100
Subject: [PATCH 1/2] BUG/MAJOR: dns: DNS response packet not matching queried
 hostname may lead to a loop

The status DNS_UPD_NAME_ERROR returned by dns_get_ip_from_response and
which means the queried name can't be found in the response was
improperly processed (fell into the default case).
This led to a loop where HAProxy simply resent a new query as soon as
it got a response with this status.

This should be backported into 1.6 branch
---
 src/server.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/server.c b/src/server.c
index dcc5961..0e0cab3 100644
--- a/src/server.c
+++ b/src/server.c
@@ -2603,6 +2603,7 @@ int snr_resolution_cb(struct dns_resolution *resolution, struct dns_nameserver *
 			}
 			goto stop_resolution;
 
+		case DNS_UPD_NAME_ERROR:
 		case DNS_UPD_SRVIP_NOT_FOUND:
 			goto save_ip;
 
-- 
2.5.0

From c5f95cda9cf66db99d6088af4ecf82568a4602b4 Mon Sep 17 00:00:00 2001
From: Baptiste Assmann 
Date: Wed, 28 Oct 2015 02:10:02 +0100
Subject: [PATCH 2/2] BUG/MINOR: dns: unable to parse CNAMEs response

A bug lay in the parsing of DNS CNAME responses, leading HAProxy to
think the CNAME was improperly resolved in the response.

This should be backported into 1.6 branch
---
 src/dns.c | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/src/dns.c b/src/dns.c
index 53b65ab..e28e2a9 100644
--- a/src/dns.c
+++ b/src/dns.c
@@ -628,8 +628,11 @@ int dns_get_ip_from_response(unsigned char *resp, unsigned char *resp_end,
 		else
 			ptr = reader;
 
-		if (cname && memcmp(ptr, cname, cnamelen))
-			return DNS_UPD_NAME_ERROR;
+		if (cname) {
+			if (memcmp(ptr, cname, cnamelen)) {
+				return DNS_UPD_NAME_ERROR;
+			}
+		}
 		else if (memcmp(ptr, dn_name, dn_name_len))
 			return DNS_UPD_NAME_ERROR;
 
-- 
2.5.0



Re: DNS resolution problem on 1.6.1-1ppa1~trusty

2015-10-28 Thread Baptiste
On Wed, Oct 28, 2015 at 11:36 AM, Ben Tisdall  wrote:
> On Wed, Oct 28, 2015 at 10:15 AM, Ben Tisdall  
> wrote:
>>
>> Thanks Baptiste, will get on this today.
>>
>
> Ok, this is in the test environment now, and the "other" counter now
> increments in step with "valid", e.g.:
>
> Resolvers section aws
>  nameserver aws_0:
>   sent: 208
>   valid: 104
>   update: 0
>   cname: 0
>   cname_error: 0
>   any_err: 0
>   nx: 0
>   timeout: 0
>   refused: 0
>   other: 104
>   invalid: 0
>   too_big: 0
>   truncated: 0
>   outdated: 0
>
> We'll get some (system-wide) load and regression testing done.
>
> --
> Ben

Have you forced resolution to ipv4 only?
If not, could you give it a try?

Baptiste



Re: DNS resolution problem on 1.6.1-1ppa1~trusty

2015-10-28 Thread Baptiste
Now, you can simply use whatever tool (ab, httperf, wrk, etc.)
hosted on a third-party VM to inject traffic on the ELB IP directly.
After a few minutes (less than 5), the ELB service will be moved
automatically to another instance, causing its IP to change.
On HAProxy's stats socket, you should see the 'update' counter
incremented to 1.
Of course, traffic load-balanced by HAProxy should follow as well.

Baptiste


On Wed, Oct 28, 2015 at 2:05 PM, Ben Tisdall  wrote:
> On Wed, Oct 28, 2015 at 1:55 PM, Baptiste  wrote:
>
>>
>> Have you forced resolution to ipv4 only?
>> if not, could you give it a try?
>>
>
> Right, with "resolve-prefer ipv4":
>
> Resolvers section aws
>  nameserver aws_0:
>   sent: 11
>   valid: 11
>   update: 0
>   cname: 0
>   cname_error: 0
>   any_err: 0
>   nx: 0
>   timeout: 0
>   refused: 0
>   other: 0
>   invalid: 0
>   too_big: 0
>   truncated: 0
>   outdated: 0
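To watch the ELB address flip independently of haproxy during such a test, a small resolver loop like the following is handy (an illustrative helper, not part of HAProxy; the ELB name in the comment is hypothetical):

```python
import socket
import time

def resolve(name: str, port: int = 80) -> list:
    """Return the sorted set of IPs the name currently resolves to."""
    infos = socket.getaddrinfo(name, port, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

def watch(name: str, interval_s: int = 5) -> None:
    """Poll the name and print the address set every time it changes."""
    last = None
    while True:
        ips = resolve(name)
        if ips != last:
            print(f"{name} -> {ips}")
            last = ips
        time.sleep(interval_s)

# watch("internal-elb.example.com")  # hypothetical ELB name
```

When the printed set changes, the resolvers 'update' counter on the stats socket should increment in step.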



Re: DNS resolution problem on 1.6.1-1ppa1~trusty

2015-10-28 Thread Baptiste
Great, thanks for confirming!

Baptiste

On Wed, Oct 28, 2015 at 4:13 PM, Ben Tisdall  wrote:
> On Wed, Oct 28, 2015 at 3:04 PM, Baptiste  wrote:
>> Now, you can simply use whatever tool (ab, httperf, wrk, etc...)
>> hosted on a third party VM to inject traffic on ELB IP directly.
>> After a few minutes (less than 5), ELB service will be moved
>> automatically to an other instance, leading IP to change.
>> On HAProxy stat socket, you should see the 'update' counter to be
>> incremented to 1.
>> Of course, traffic load-balanced by HAProxy should followup as well.
>>
>
> Ok, I forced an address change as you described (good tip btw) and
> sure enough the "update" counter incremented by 1 and the proxy
> continued to function.
>
> --
> Ben



Re: DNS resolution problem on 1.6.1-1ppa1~trusty

2015-10-28 Thread Baptiste
Jesse

On Wed, Oct 28, 2015 at 5:25 PM, Jesse Hathaway  wrote:
> On Tue, Oct 27, 2015 at 8:18 PM, Baptiste  wrote:
>> #2 an error in the way we parse CNAME responses, leading to return an
>> error when validating a CNAME (this triggers bug #1).
>
> How does your patch for this issue change the logic? It appears
> functionally the same to me.

Good catch, forget about patch 1. It was 2 AM when I
wrote it :'(...
I wanted to apply the same code as DNS_UPD_NO_IP_FOUND, and increment
the OTHER error.

Actually, the bug was triggered because the status of the resolution
was never updated in this very particular case (first DNS response,
can't find the requested name in the response), which led the code to
resend a packet, creating a loop.

Ben, could you apply the patch below instead of 0001:

diff --git a/src/server.c b/src/server.c
index dcc5961..c92623d 100644
--- a/src/server.c
+++ b/src/server.c
@@ -2620,6 +2620,17 @@ int snr_resolution_cb(struct dns_resolution *resolution, struct dns_nameserver *
 			}
 			goto stop_resolution;
 
+		case DNS_UPD_NAME_ERROR:
+			/* if this is not the last expected response, we ignore it */
+			if (resolution->nb_responses < nameserver->resolvers->count_nameservers)
+				return 0;
+			/* update resolution status to OTHER error type */
+			if (resolution->status != RSLV_STATUS_OTHER) {
+				resolution->status = RSLV_STATUS_OTHER;
+				resolution->last_status_change = now_ms;
+			}
+			goto stop_resolution;
+
 		default:
 			goto invalid;


I'll also test it in our amazon lab later tonight.
Then I'll ask Willy to merge them.


Jesse, thanks again for catching this!


Baptiste



Re: DNS resolution problem on 1.6.1-1ppa1~trusty

2015-10-28 Thread Baptiste
On Wed, Oct 28, 2015 at 6:39 PM, Ben Tisdall  wrote:
> On Wed, Oct 28, 2015 at 6:28 PM, Ben Tisdall  wrote:
>> On Wed, Oct 28, 2015 at 6:00 PM, Baptiste  wrote:
>>>
>>> Ben, could you apply the patch below instead of 0001:
>>>
>>> [snip]
>
> That patch is proving problematic to apply; to save me guessing, can
> you provide it as an attachment please?

Hi Ben,

Here you go.

Baptiste
From c96ec88f274689f5dd5b7efd403fccbc8837e748 Mon Sep 17 00:00:00 2001
From: Baptiste Assmann 
Date: Wed, 28 Oct 2015 02:03:32 +0100
Subject: [PATCH 1/2] BUG/MAJOR: dns: first DNS response packet not matching
 queried hostname may lead to a loop

The status DNS_UPD_NAME_ERROR returned by dns_get_ip_from_response and
which means the queried name can't be found in the response was
improperly processed (fell into the default case).
This led to a loop where HAProxy simply resent a new query as soon as
it got a response with this status, in the only case where such a
response is the very first one received by the process.

This should be backported into 1.6 branch
---
 src/server.c | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/src/server.c b/src/server.c
index dcc5961..c92623d 100644
--- a/src/server.c
+++ b/src/server.c
@@ -2620,6 +2620,17 @@ int snr_resolution_cb(struct dns_resolution *resolution, struct dns_nameserver *
 			}
 			goto stop_resolution;
 
+		case DNS_UPD_NAME_ERROR:
+			/* if this is not the last expected response, we ignore it */
+			if (resolution->nb_responses < nameserver->resolvers->count_nameservers)
+				return 0;
+			/* update resolution status to OTHER error type */
+			if (resolution->status != RSLV_STATUS_OTHER) {
+				resolution->status = RSLV_STATUS_OTHER;
+				resolution->last_status_change = now_ms;
+			}
+			goto stop_resolution;
+
 		default:
 			goto invalid;
 
-- 
2.5.0



Re: DNS resolution problem on 1.6.1-1ppa1~trusty

2015-10-28 Thread Baptiste
On Wed, Oct 28, 2015 at 7:04 PM, Jesse Hathaway  wrote:
> On Wed, Oct 28, 2015 at 12:00 PM, Baptiste  wrote:
>> Good catch, forget about patch 1, It was 2AM in the morning when I
>> wrote it :'(...
>> I wanted to apply the same code as DNS_UPD_NO_IP_FOUND, and increment
>> the OTHER error.
>
> That is interesting, but I was asking about the second patch,
> 0002-BUG-MINOR-dns-unable-to-parse-CNAMEs-response.patch

Ah, ok!
Anyway, your mail made me read my patches and find the ugly thing in
the other patch :)

So, when you write
		if (cname && memcmp(ptr, cname, cnamelen))
			return DNS_UPD_NAME_ERROR;
		else if (memcmp(ptr, dn_name, dn_name_len))
			return DNS_UPD_NAME_ERROR;

you compare cname against the name in the current record only if cname is set.
In Ben's case, cname was set and the ptr/cname comparison matched,
hence memcmp returned 0.
Since memcmp returned 0, HAProxy then checked the next condition and
compared ptr to dn_name, which returned DNS_UPD_NAME_ERROR
since we're evaluating a CNAME: ptr points to the CNAME while
dn_name points to the queried name.

Basically, the code parsed the first response record, the CNAME, then
returned an error because the value of the cname no longer matched
the name in the A record.

With the code below, when cname is set, there is no chance ptr is
compared with dn_name:
		if (cname) {
			if (memcmp(ptr, cname, cnamelen)) {
				return DNS_UPD_NAME_ERROR;
			}
		}
		else if (memcmp(ptr, dn_name, dn_name_len))
			return DNS_UPD_NAME_ERROR;

Baptiste
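The fixed control flow can be modelled in a few lines of Python (a sketch of the matching rule only, not HAProxy's actual response parser; the record tuples and names are illustrative): once a CNAME has been seen, each following record's owner name is compared against the CNAME target only, never against the originally queried name.

```python
def validate_records(query_name, records):
    """records: list of (owner, rtype, rdata) tuples in answer order.
    Returns ("OK", ips) or ("NAME_ERROR", []) following the corrected
    cname-vs-dn_name comparison above."""
    cname = None
    ips = []
    for owner, rtype, rdata in records:
        # mirror of: if (cname) memcmp(ptr, cname); else memcmp(ptr, dn_name)
        expected = cname if cname is not None else query_name
        if owner != expected:
            return "NAME_ERROR", []
        if rtype == "CNAME":
            cname = rdata        # subsequent records must carry this owner
        elif rtype == "A":
            ips.append(rdata)
    return "OK", ips
```

With the old '&&' form, a CNAME-plus-A answer always tripped the dn_name comparison on the A record, which is exactly the loop Ben hit.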



Re: 1.6.1: Backend DNS Resolution Error

2015-10-28 Thread Baptiste
On Wed, Oct 28, 2015 at 9:23 PM, Susheel Jalali
 wrote:
> Dear Baptiste, Ben and Jesse,
>
> We have been facing the same issue: the HAProxy backend is not able to
> resolve the right Web servers using our local DNS.  We applied Baptiste’s
> updated patches to server.c and dns.c and re-installed with a clean
> make/make install.  Still the same erroneous result.
>
> Hope the following test results help resolve the DNS issue.  If the
> issue is already resolved, please let us know what we could be missing?
>
> In our case, Product1, Product2, Product3 are being served by the same
> Web server and same port.  The logs show that the request to backend
> “product1” is getting redirected to product1.local.domain, default
> backend and to other backend servers (product2.local.domain,
> product3.local.domain).
>
> Here are the HAProxy configuration, relevant “info” logs and the patches
> applied.  We would appreciate any pointers.
>
> Patches applied:
> 0001 (UPDATED): http://marc.info/?l=haproxy&m=144605173426350&w=2
> 0002: http://marc.info/?l=haproxy&m=144605551527649&q=p3
>
> +++
> HAProxy Configuration
> +++
> global
> [..]
>
> defaults
> [..]
>
> frontend webapps-frontend
>  #Product1
>  acl host_https req.hdr(Host) 
>  acl path_subdomain_p1 path_beg -i /Product1
>  use_backend subdomain_p1-backend if host_https path_subdomain_p1
>
>  #Product2
>  acl host_https req.hdr(Host) 
>  acl path_subdomain_p2 path_beg -i /Product2
>  use_backend subdomain_p2-backend if host_https path_subdomain_p2
>
>  default_backend webapps-backend
>
> backend webapps-backend
>  server server-id DefaultProductServer.internal.domain:80 check
>
> backend subdomain_p1-backend
>  http-request set-header Host 
>
>  reqirep ^([^\ ]*)\ /Product1/*([^\ ]*)\ (.*)$   \1\ /\2\ \3
>  rspirep ^(Location:)\ (https?://([^/]*))/(.*)$Location:\
> /Product1/\3
>
>  server  :80 check resolvers
> haproxy-dns
>
> backend subdomain_p2-backend
>  [..]
>
> resolvers HAProxy-dns
>   nameserver dnsserver 10.10.10.1:53
>   resolve_retries   3
>   timeout retry 1s
>   hold valid   10s
>
> ++
> Logs:  Info
> ++
> Oct 28 14:51:41 localhost haproxy[12167]: 192.168.100.153 - -
> [28/Oct/2015:19:51:41 +] "GET /Product1/ HTTP/1.1" 302 235 "" ""
> 49936 640 "webapps-frontend~" "subdomain_p1-backend" "Product1.prod0" 33
> 0 0 5 38  1 1 0 0 0 0 0 "" ""
>
> Oct 28 14:51:41 localhost haproxy[12167]: 192.168.100.153 - -
> [28/Oct/2015:19:51:41 +] "GET /Product1/ HTTP/1.1" 302 235 "" ""
> 49936 640 "webapps-frontend~" "subdomain_p1-backend" "Product1.prod0" 33
> 0 0 5 38  1 1 0 0 0 0 0 "" ""
>
> Oct 28 14:51:41 localhost haproxy[12167]: 192.168.100.153 - -
> [28/Oct/2015:19:51:41 +] "GET
> /Product1/interface/login/login_frame.php?site=default HTTP/1.1" 200
> 1159 "" "" 49936 678 "webapps-frontend~" "subdomain_p1-backend"
> "Product1.prod0" 4 0 0 15 19  1 1 0 1 0 0 0 "" ""
>
> Oct 28 14:51:41 localhost haproxy[12167]: 192.168.100.153 - -
> [28/Oct/2015:19:51:41 +] "GET
> /Product1/interface/login/login_frame.php?site=default HTTP/1.1" 200
> 1159 "" "" 49936 678 "webapps-frontend~" "subdomain_p1-backend"
> "Product1.prod0" 4 0 0 15 19  1 1 0 1 0 0 0 "" ""
>
> Oct 28 14:51:41 localhost haproxy[12167]: 192.168.100.153 - -
> [28/Oct/2015:19:51:41 +] "GET /interface/themes/style_oemr.css
> HTTP/1.1" 404 283 "" "" 49936 698 "webapps-frontend~" "webapps-backend"
> "DefaultProductServer.prod0" 22 0 1 6 29  3 3 0 1 0 0 0 "" ""
>
> Oct 28 14:51:41 localhost haproxy[12167]: 192.168.100.153 - -
> [28/Oct/2015:19:51:41 +] "GET /interface/themes/style_oemr.css
> HTTP/1.1" 404 283 "" "" 49936 698 "webapps-frontend~" "webapps-backend"
> "DefaultProductServer.prod0" 22 0 1 6 29  3 3 0 1 0 0 0 "" ""
>
> Oct 28 14:51:41 localhost haproxy[12167]: 192.168.100.153 - -
> [28/Oct/2015:19:51:41 +] "GET /interface/login/login_title.php
> HTTP/1.1" 404 283 "" "" 49936 727 "webapps-frontend~" "webapps-backend"
> "DefaultProductServer.prod0" 3 0 1 2 6 -

Re: stick tables and url_param + post headers - counter‏

2015-10-28 Thread Baptiste
On Sun, Oct 25, 2015 at 12:22 PM, Roland RoLaNd  wrote:
>
> Hello,
>
> I am trying to rate limit requests depending on their specific identifier
> which is sent either as a post header or a query string parameter.
>
> Below is my starting config (am i mistaken to be using this ? )
>
> stick-table type string len 70 size 5M expire 1m store
> gpc0_rate(60s),conn_cnt,conn_cur,conn_rate(60s),sess_cnt,sess_rate(60s),http_req_rate(60s)
>
>  stick on url_param(uid)
>
>
> my hope is to use a throttled backend, if connections within 1 minute from
> the same UID (query string or post header) exceeds  30
>
> I have the same setup working with IPs though im finding it a bit tricky to
> do the same with qs/headers
>
>
> Any advice on the right direction? i am not confident with the above
> counters
>


Hi,

You can have multiple "stick on" lines. They will be processed in the
order you write them, and the first match stops the processing.
This means all of the following cases will work:
- client sending the url parameter only
- client sending the HTTP header only
- client sending both URL parameter and HTTP header


stick-table type string len 70 size 5M expire 1m store
gpc0_rate(60s),conn_cnt,conn_cur,conn_rate(60s),sess_cnt,sess_rate(60s),http_req_rate(60s)

 stick on url_param(uid)
 stick on req.hdr(UID)


We use this type of configuration to maintain persistence on
JSESSIONID cookie which may be found either in a Cookie or in a url
parameter.
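For reference, a minimal sketch of that JSESSIONID setup (backend name,
server addresses and table sizes are placeholders, not a tested
configuration):

backend bk_app
  # one table keyed on the session id, wherever it comes from
  stick-table type string len 52 size 1m expire 30m
  # first match wins: try the URL parameter first, then the cookie
  stick on url_param(JSESSIONID)
  stick on cookie(JSESSIONID)
  server app1 192.168.0.11:8080 check
  server app2 192.168.0.12:8080 check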

Baptiste



Re: DNS resolution problem on 1.6.1-1ppa1~trusty

2015-10-30 Thread Baptiste
On Fri, Oct 30, 2015 at 12:53 PM, Ben Tisdall  wrote:
> On Thu, Oct 29, 2015 at 1:43 PM, Ben Tisdall  wrote:
>
>> Sorry, I'm misinterpreting the test results, please ignore that. One
>> ELB address has remained the same today so it's likely HAProxy has
>> been using that and has not needed to update.
>
> Ok, finally observed some more ELB address changes (2, the other
> may've escaped me somehow):
>
> Resolvers section aws
>  nameserver aws_0:
>   sent: 18528
>   valid: 18527
>   update: 3
>   cname: 0
>   cname_error: 0
>   any_err: 0
>   nx: 0
>   timeout: 0
>   refused: 0
>   other: 0
>   invalid: 0
>   too_big: 0
>   truncated: 0
>   outdated: 1
>
> Proxy is proxying.
>
> --
> Ben


Hi Ben,

Thanks a lot for confirming!
I managed to run it in my lab as well a couple of hours ago to confirm
the problem is fixed.

I sent patches to Willy, and they have been integrated a few minutes ago.
You can git pull ; make clean ; make [...]

Baptiste



Re: DNS resolution problem on 1.6.1-1ppa1~trusty

2015-10-30 Thread Baptiste
On Fri, Oct 30, 2015 at 2:10 PM, Lukas Tribus  wrote:
>> I sent patches to Willy, and they have been integrated a few minutes ago.
>> You can git pull ; make clean ; make [...]
>
> Unless you use haproxy-1.6, in that case you have to wait for the backport
> and the git push, which has not happened yet.
>
> Lukas


True :)
I'm cutting edge: "HAProxy version 1.7-dev0-e4c4b7-18".

Baptiste



Re: GET HAPROXY HOST INFO VIA Api/JSON

2015-11-01 Thread Baptiste
Yes, using jq:
http://infiniteundo.com/post/99336704013/convert-csv-to-json-with-jq
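The jq filter in that post does the job on the command line; for
completeness, the same CSV-to-JSON step is only a few lines of Python.
This is a sketch: the real "show stat" output has many more columns than
this trimmed-down sample.

```python
import csv
import io
import json

def haproxy_csv_to_json(raw):
    # The header line of "show stat" output starts with "# ";
    # strip that prefix so csv.DictReader sees plain column names.
    if raw.startswith("# "):
        raw = raw[2:]
    rows = list(csv.DictReader(io.StringIO(raw)))
    return json.dumps(rows, indent=2)

# A trimmed-down stats payload for illustration:
sample = (
    "# pxname,svname,status\n"
    "webapps-frontend,FRONTEND,OPEN\n"
    "webapps-backend,server-id,UP\n"
)
print(haproxy_csv_to_json(sample))
```

Piping the stats socket output (e.g. via socat) into such a script gives
you a JSON array, one object per proxy/server line.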

Baptiste

On Mon, Nov 2, 2015 at 6:36 AM, Melvil JA  wrote:
> Is it possible to get haproxy host information via api/json  other than via
> ui and csv ?
>
> --
> Thanks
>
> who/melvil
>
> _
> The information contained in this communication is intended solely for the
> use of the individual or entity to whom it is addressed and others
> authorized to receive it. It may contain confidential or legally privileged
> information. If you are not the intended recipient you are hereby notified
> that any disclosure, copying, distribution or taking any action in reliance
> on the contents of this information is strictly prohibited and may be
> unlawful. If you have received this communication in error, please notify us
> immediately by responding to this email and then delete it from your system.
> The firm is neither liable for the proper and complete transmission of the
> information contained in this communication nor for any delay in its
> receipt.



Re: haproxy + exim + sni

2015-11-01 Thread Baptiste
On Mon, Nov 2, 2015 at 2:16 AM, Matt Bryant  wrote:
> All,
>
> exim supports SNI for multidomain certs off one running instance and can get
> that working ok ... but now trying to put that behind a haproxy LB ... can
> this be done ??? Is there a way that haproxy can forward the SNI information
> on in the connection it makes ?? So far I seem to just get the default cert
>  or do I need to terminate the SSL at haproxy .. would rather not since
> it means more config and more places to the put the cert ..(to aupport
> starttls etc the cert has to be on the mailserver anyhow).
>
> rgds
>
> Matt Bryant
> --
> m...@the-bryants.net


Hi Matt,

Yes, you have the server-side 'sni' keyword:
http://cbonte.github.io/haproxy-dconv/snapshot/configuration-1.6.html#sni

(HAProxy 1.6 only)
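A hedged sketch of how that could be wired up, assuming TLS is
terminated on the frontend and re-encrypted toward Exim (addresses,
names and certificate path are placeholders):

frontend smtps-in
  mode tcp
  bind :465 ssl crt /etc/haproxy/certs/
  default_backend bk_exim

backend bk_exim
  mode tcp
  # present the client's original SNI value to Exim (1.6+ only)
  server exim1 192.168.0.10:465 ssl sni ssl_fc_sni verify none

If you would rather not terminate TLS on HAProxy at all, a plain "mode
tcp" passthrough forwards the client's handshake, SNI included,
untouched, at the cost of losing layer-7 visibility.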

Baptiste



Re: GET HAPROXY HOST INFO VIA Api/JSON

2015-11-02 Thread Baptiste
I was delivering the "quick" answer so you can have this feature right now :)

This is a need we have identified, and I have already talked to Willy about it.
There is technically nothing against such a feature; we just need time
or resources to develop it.
If you want to contribute, write a function similar to csv_enc() in
src/standard.c.

Baptiste


On Mon, Nov 2, 2015 at 8:44 AM, Melvil JA  wrote:
> CSV to json conversion,there are lots of options.
> Is it possible to get direct restful output from haproxy.
> Suppoes i have hundreds of servers..is it good to parse hundreds of csv o/p
> to json ?
>
> On Mon, Nov 2, 2015 at 1:07 PM, Baptiste  wrote:
>>
>> Yes, using jq:
>> http://infiniteundo.com/post/99336704013/convert-csv-to-json-with-jq
>>
>> Baptiste
>>
>> On Mon, Nov 2, 2015 at 6:36 AM, Melvil JA 
>> wrote:
>> > Is it possible to get haproxy host information via api/json  other than
>> > via
>> > ui and csv ?
>> >
>> > --
>> > Thanks
>> >
>> > who/melvil
>> >
>> > _
>> > The information contained in this communication is intended solely for
>> > the
>> > use of the individual or entity to whom it is addressed and others
>> > authorized to receive it. It may contain confidential or legally
>> > privileged
>> > information. If you are not the intended recipient you are hereby
>> > notified
>> > that any disclosure, copying, distribution or taking any action in
>> > reliance
>> > on the contents of this information is strictly prohibited and may be
>> > unlawful. If you have received this communication in error, please
>> > notify us
>> > immediately by responding to this email and then delete it from your
>> > system.
>> > The firm is neither liable for the proper and complete transmission of
>> > the
>> > information contained in this communication nor for any delay in its
>> > receipt.
>
>
>
>
> --
> Thanks
>
> who/melvil
>
> _
> The information contained in this communication is intended solely for the
> use of the individual or entity to whom it is addressed and others
> authorized to receive it. It may contain confidential or legally privileged
> information. If you are not the intended recipient you are hereby notified
> that any disclosure, copying, distribution or taking any action in reliance
> on the contents of this information is strictly prohibited and may be
> unlawful. If you have received this communication in error, please notify us
> immediately by responding to this email and then delete it from your system.
> The firm is neither liable for the proper and complete transmission of the
> information contained in this communication nor for any delay in its
> receipt.



Re: Multiple nameservers with the same ID is allowed

2015-11-02 Thread Baptiste
On Fri, Oct 30, 2015 at 3:22 PM, Pavlos Parissis
 wrote:
> Hi,
>
> Following resolver section passes configuration check
> resolvers mydns1
> nameserver ns1 8.8.8.8:53
> nameserver ns1 8.8.4.4:53
> resolve_retries   3
> timeout retry 1s
> hold valid   10s
>
> IMHO: allowing same ID for 2 different objects, which have stats attached to
> them, may not be the best approach here. Since, HAProxy doesn't allow more
> than one resolver sections with same name, I would say for consistency
> reasons should do the same for nameserver parameters within the same
> resolver section.
>
> If IDs for nameserver are different then you can fetch stats per nameserver:
> echo 'show stat resolvers mydns1 ns1'|socat /run/haproxy/admin1.sock stdio
>
> Cheers,
> Pavlos
>


Hi Pavlos,

I agree with you.
If you think you can contribute this, feel free; otherwise, let me know
and I'll do it.
If we go in this direction, we may also add an 'id' keyword to force the
nameserver uuid, like on the server line.

We may backport only the uuid part to 1.6, since the other changes could
break existing configurations.
I'll discuss this point with Willy.
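In the meantime, the unambiguous form of the section is simply to give
each nameserver its own name:

resolvers mydns1
  nameserver ns1 8.8.8.8:53
  nameserver ns2 8.8.4.4:53
  resolve_retries  3
  timeout retry    1s
  hold valid       10s

Per-nameserver stats then work as expected, e.g.:
echo 'show stat resolvers mydns1 ns2' | socat /run/haproxy/admin1.sock stdio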

Baptiste



Re: Echo server in Lua

2015-11-03 Thread Baptiste
On Tue, Nov 3, 2015 at 5:53 AM, Thrawn  wrote:
> Now that HAProxy has Lua support, I'm looking at the possibility of setting
> up an echo server, which simply responds with the observed remote address of
> the client (currently implemented in PHP as <?php echo $_SERVER['REMOTE_ADDRESS']; ?>).
>
>
> Does anyone have a suggestion of the most efficient possible implementation 
> of this? If possible, it should handle millions of clients polling it 
> regularly, so speed is essential.
>
>
> Thanks
>

Hi,

content of echo.lua file:
-- a simple echo server
-- it generates a response whose body contains the client IP address
core.register_action("echo", { "http-req" }, function(txn)
    local buffer = ""
    local response = ""

    buffer = txn.f:src()

    response = response .. "HTTP/1.0 200 OK\r\n"
    response = response .. "Server: haproxy-lua/echo\r\n"
    response = response .. "Content-Type: text/html\r\n"
    response = response .. "Content-Length: " .. buffer:len() .. "\r\n"
    response = response .. "Connection: close\r\n"
    response = response .. "\r\n"
    response = response .. buffer

    txn.res:send(response)
    txn:done()
end)

content of haproxy's configuration:

global
  log 127.0.0.1 local0
  lua-load echo.lua

frontend echo
  bind *:10004
  mode http
  http-request lua.echo


Don't forget to set up timeouts, etc.

Baptiste


