Re: [PATCH] CONTRIB: contrib/prometheus-exporter: Add a Prometheus exporter for HAProxy

2019-02-08 Thread Pavlos Parissis
On 8/2/19 11:11 a.m., Willy Tarreau wrote:
> Hi Christopher,
> 
> 
> On Thu, Feb 07, 2019 at 10:09:52PM +0100, Christopher Faulet wrote:
>> Hi,
>>
>> This patch adds a new component in contrib. It is a Prometheus exporter for
>> HAProxy.
> (...)
> 
> Thanks for doing this. After reading the whole patch, I measure how
> uninteresting an experience this must have been! After all, using C
> to produce yet-another-format is akin to using python to write
> yet-another-http-proxy :-)
> 
> I totally agree with your approach of placing it under contrib/. After
> all we've already done the same with other extensions explicitly
> targeting products (mod_security, mod_defender, systemd, ...).
> We support standards, not products. And products which respect standards
> are naturally supported via the standards, so there is indeed no reason
> for opening a new can of worms by inviting $PRODUCT_OF_THE_MONTH into
> src/ especially when these products change faster than our maintenance
> cycle.
> 
> In my opinion the right place for a stats exporter is: outside. However
> I'm well aware that our export formats are not necessarily friendly to
> such exporters. For example, the fact that prometheus uses this funny
> ordering forces a gateway to keep all metrics in memory before being
> able to dump them. It's not cool either. We could have a long-term
> approach consisting in trying to implement multiple tree walk methods
> combined with a few formats so that implementing external exporters in
> various languages becomes trivial. In this case such tools could provide
> high quality agents to collect our metrics by default without having to
> work around some limitations or constraints.
> 
> This is probably a design discussion to have for the long term here on
> the list: what are the main stats export mechanisms desired in the field.
> I can imagine that most agents will want to poll haproxy and dump the
> whole stats once in a while, some will rely on it to send a full dump
> once in a while (this may already become an issue during reloads), some
> might possibly want to subscribe to change notification of certain
> metrics, or receive a diff from the previous dump once in a while. And
> for all these variations I guess we may have to implement 2 or 3 dump
> styles :
>   - Front -> Back -> Server -> metric
>   - { Front, Back, Server } -> metric
>   - metric: { front, back, server }
> 
> I don't know if I'm completely off or not, but I do think that those who
> have experience with such tools should definitely join the discussion to
> share their observations and deployment difficulties met in the field.
> 


There are mainly two ways to get metrics out of software:
1. Push, where the foobar software uploads stats to a remote/local entry point.
Graphite is one of the most used systems for this.
2. Pull, where the metrics pipeline/infra scrapes the foobar software to fetch metrics.

The above is the easy part of getting metrics; the most challenging part is the
data types (counters, gauges, summaries) and the format.

Graphite has a very simple format and data type; you send strings over TCP
connections:

metricname value timestamp(epoch)

where metricname looks like an FS tree:

loadbalancers.edge.lb-01.haproxy.frontend.www_haproxy_org.lbtot 124 1549661217
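The plaintext protocol is simple enough that a push sketch fits in a few lines of Python (the host, port and metric path below are illustrative, not taken from any real deployment):

```python
import socket
import time

def graphite_line(metric, value, timestamp=None):
    """Format one metric in Graphite's plaintext protocol:
    '<metricname> <value> <epoch>'."""
    if timestamp is None:
        timestamp = int(time.time())
    return f"{metric} {value} {timestamp}"

def push_to_graphite(lines, host="127.0.0.1", port=2003):
    """Send newline-terminated metric lines over a single TCP connection."""
    payload = "".join(line + "\n" for line in lines).encode("ascii")
    with socket.create_connection((host, port)) as sock:
        sock.sendall(payload)
```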

Prometheus uses the pull method and it is a bit more complicated.
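On the pull side, Prometheus scrapes an HTTP endpoint serving its text exposition format, where the FS-tree hierarchy moves into labels and every metric carries type metadata. A sample (metric and label names here are illustrative, not any exporter's actual output):

```
# HELP haproxy_frontend_current_sessions Current number of active sessions.
# TYPE haproxy_frontend_current_sessions gauge
haproxy_frontend_current_sessions{frontend="www_haproxy_org"} 124
```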

You also have "proxy" systems like telegraf/fluentd that can work with either
method and build a bridge between the foobar software and any metrics pipeline.
All those "proxy" systems allow you to write any transformer you want, so the
options are countless.

I have to agree with you that supporting all possible combinations is quite hard.
Some software supports both, some only one of them. More and more, you see new
software ship with "instrumentation" out of the box using the pull method and
defaulting to the Prometheus model.

I personally find the CSV data we get out of the stats socket easy to use. I can
easily write software to support both models. I have written one tool to support
the push method, using Graphite as the metrics pipeline, and it would be trivial
to write an exporter for Prometheus, or for another X system in two years.
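As a sketch of how little code that takes: the "show stat" CSV starts with a "# "-prefixed header line and each row carries a trailing comma, so a minimal parser (assuming that layout) is a few lines of Python:

```python
def parse_haproxy_stats(csv_text):
    """Parse the CSV produced by 'show stat' on HAProxy's stats socket.

    The first line starts with '# ' followed by the column names, and
    every row ends with a trailing comma."""
    header, _, body = csv_text.partition("\n")
    fields = header.lstrip("# ").rstrip(",").split(",")
    rows = []
    for line in body.splitlines():
        if line:
            values = line.rstrip(",").split(",")
            rows.append(dict(zip(fields, values)))
    return rows
```

From there, turning each row into Graphite lines or a Prometheus page is just string formatting.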

I prefer the foobar software to give me raw data and let me decide how I will
use it. I don't want any kind of aggregation at the source of the metrics, or
any other funny things, which can cause issues when I do aggregation at the
store or during visualization.

My 2cents,
Pavlos







Re: Does anyone *really* use 51d or WURFL ?

2019-02-08 Thread Willy Tarreau
Hi Ben,

On Tue, Feb 05, 2019 at 01:37:59PM +, Ben Shillito wrote:
> Hi Willy,
> 
> I have attached two patches.
> 
> One is the threading change which maps the threading flag in 51Degrees to the
> one in HAProxy. There are also some changes in the 51d.c module code to make
> everything thread safe.
> 
> The other is a minor bug in the multiple header matching when using the Hash
> Trie API.

Thanks and sorry for the delay, I thought I had already picked them up. Now merged.

Thanks,
Willy



Re: Require info on ACL for rate limiting on per URL basis.

2019-02-08 Thread Marco Corte

On 2019-02-08 14:46, Badari Prasad wrote:


Can I get some references for URL-based rate limiting, so that I can
build on this?


Hi!

I found these two posts very valuable:

https://www.haproxy.com/blog/introduction-to-haproxy-stick-tables/
https://www.haproxy.com/blog/application-layer-ddos-attack-protection-with-haproxy/

Ciao!

.marcoc
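To give a flavour of the stick-table approach those posts describe, a rough, untested per-URL sketch could look like this; the table size, rate window and thresholds are placeholders to adapt:

```
frontend fe_api
    bind :80
    # track request rate per URL path over a 10s window
    stick-table type string len 64 size 100k expire 30s store http_req_rate(10s)
    http-request track-sc0 path
    acl hot_path path -i /api/v1/client1/transfer_data
    # ~1000 tps over a 10s window = 10000 requests
    http-request deny deny_status 429 if hot_path { sc_http_req_rate(0) gt 10000 }
    default_backend be_api
```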



Re: http-use-htx and IIS

2019-02-08 Thread Willy Tarreau
Hi Marco,

On Fri, Feb 08, 2019 at 02:20:53PM +0100, Marco Corte wrote:
> On 2019-02-07 17:50, Marco Corte wrote:
> > Hello!
> > 
> > I am testing haproxy version 1.9.4 on Ubuntu 18.04.
> > 
> > With the "option http-use-htx", haproxy shows a strange behaviour when
> > the real server is IIS and if the users' browsers try to do a POST.
> > 
> 
> I activated two frontend/backend pairs on the same haproxy instance,
> forwarding to the same real server 10.64.44.74:82.
> 
> bind 10.64.44.112:443 -> no option http-use-htx -> server 10.64.44.74:82
> bind 10.64.44.112:444 -> option http-use-htx -> server 10.64.44.74:82
> 
(..)
> Two minutes after the POST, the real server logs a "400" error (because a
> timeout is reached, I guess).
> The fact that the real server is waiting for some data also matches with the
> haproxy logs that have a "SD" state at disconnection.
> 
> It is difficult to anonymize the packet content and I do not want to
> generate a WOT here posting the whole packet capture in clear.
> If someone is interested, I can do a tcpdump and send it to him/her.

Could you please give a few extra indications like :
  - the approximate size of the POST request
  - the approximate size of the response (if any)
  - the request headers haproxy sends to IIS
  - the response headers haproxy receives from IIS

You can run haproxy in debug mode (-d) and you'll get all these at once,
it will significantly help figure out where to search.

Thanks,
Willy



Require info on ACL for rate limiting on per URL basis.

2019-02-08 Thread Badari Prasad
Hi,
 I am a novice with HAProxy and was checking whether it can support rate
limiting on a per-URL basis.
I did check some examples and documentation, but the amount of info is
overwhelming.

My backend server exposes URLs, say:
1) /api/v1/{client_name}/transfer_data
Ex: /api/v1/client1/transfer_data or  /api/v1/client2/transfer_data
 2) /api/v1/{client_name}/user_data
 Ex: /api/v1/client1/user_data or /api/v1/client2/user_data

where client1 and client2 are client identifiers which are known ahead of
time at haproxy.

I would want to configure 1000 tps for the URL /api/v1/client1/transfer_data,
500 tps for /api/v1/client2/user_data, and so on.

I did try out some config but it did not help much (based on this link:
https://jve.linuxwall.info/ressources/taf/haproxy-aws/#id28 )

Can I get some references for URL-based rate limiting, so that I can build
on this?

Thanks in advance.
 Badari


Re: http-use-htx and IIS

2019-02-08 Thread Marco Corte

On 2019-02-07 17:50, Marco Corte wrote:

Hello!

I am testing haproxy version 1.9.4 on Ubuntu 18.04.

With the "option http-use-htx", haproxy shows a strange behaviour when
the real server is IIS and if the users' browsers try to do a POST.



I activated two frontend/backend pairs on the same haproxy instance, 
forwarding to the same real server 10.64.44.74:82.


bind 10.64.44.112:443 -> no option http-use-htx -> server 10.64.44.74:82
bind 10.64.44.112:444 -> option http-use-htx -> server 10.64.44.74:82

I captured the communication from 10.64.44.112 to the real server 
10.64.44.74:82: the traffic generated by haproxy in the two cases is 
different.


This is (part of) the capture of a working POST (length 561+527)

12:53:21.973969 IP 10.64.44.112.34706 > 10.64.44.74.82: Flags [P.], seq 
1:562, ack 1, win 229, options [nop,nop,TS val 2416540384 ecr 
3320899791], length 561
12:53:21.974484 IP 10.64.44.112.34706 > 10.64.44.74.82: Flags [P.], seq 
562:1089, ack 1, win 229, options [nop,nop,TS val 2416540385 ecr 
3320899791], length 527
12:53:21.974602 IP 10.64.44.74.82 > 10.64.44.112.34706: Flags [.], ack 
1089, win 2081, options [nop,nop,TS val 3320899793 ecr 2416540384], 
length 0

... and the communication continues ...


When "option http-use-htx" is active, haproxy opens the connection to 
the real server, sends the headers and nothing more (length 444+133).


12:51:19.833831 IP 10.64.44.112.34678 > 10.64.44.74.82: Flags [P.], seq 
148880094:148880538, ack 1910718319, win 1167, options [nop,nop,TS val 
2416418245 ecr 3320745060], length 444
12:51:19.834437 IP 10.64.44.112.34678 > 10.64.44.74.82: Flags [P.], seq 
444:577, ack 1, win 1167, options [nop,nop,TS val 2416418245 ecr 
3320745060], length 133
12:51:19.834583 IP 10.64.44.74.82 > 10.64.44.112.34678: Flags [.], ack 
577, win 2081, options [nop,nop,TS val 3320777652 ecr 2416418245], 
length 0


... and the communication hangs here.

Two minutes after the POST, the real server logs a "400" error (because 
a timeout is reached, I guess).
The fact that the real server is waiting for some data also matches with 
the haproxy logs that have a "SD" state at disconnection.


It is difficult to anonymize the packet content and I do not want to 
generate a WOT here posting the whole packet capture in clear.

If someone is interested, I can do a tcpdump and send it to him/her.

Thank you again

.marcoc



Re: Using server-template for DNS resolution

2019-02-08 Thread Igor Cicimov
Hi Baptise,

On Fri, Feb 8, 2019 at 6:10 PM Baptiste  wrote:

>
>
> On Fri, Feb 8, 2019 at 6:09 AM Igor Cicimov <
> ig...@encompasscorporation.com> wrote:
>
>> On Fri, Feb 8, 2019 at 2:29 PM Igor Cicimov <
>> ig...@encompasscorporation.com> wrote:
>>
>>> Hi,
>>>
>>> I have a Jetty frontend exposed for couple of ActiveMQ servers behind
>>> SSL terminating Haproxy-1.8.18. They share same storage and state via lock
>>> file and there is only one active AMQ at any given time. I'm testing this
>>> now with dynamic backend using Consul DNS resolution:
>>>
>>> # dig +short @127.0.0.1 -p 8600 activemq.service.consul
>>> 10.140.4.122
>>> 10.140.3.171
>>>
>>> # dig +short @127.0.0.1 -p 8600 _activemq._tcp.service.consul SRV
>>> 1 1 61616 ip-10-140-4-122.node.dc1.consul.
>>> 1 1 61616 ip-10-140-3-171.node.dc1.consul.
>>>
>>> The backends status, the current "master":
>>>
>>> root@ip-10-140-3-171:~/configuration-management# netstat -tuplen | grep java
>>> tcp   0   0 0.0.0.0:8161    0.0.0.0:*   LISTEN   503   13749196   17256/java
>>> tcp   0   0 0.0.0.0:61616   0.0.0.0:*   LISTEN   503   13749193   17256/java
>>>
>>> and the "slave":
>>>
>>> root@ip-10-140-4-122:~# netstat -tuplen | grep java
>>>
>>> So the service ports are not available on the second one.
>>>
>>> This is the relevant part of the HAP config that I think might be of
>>> interest:
>>>
>>> global
>>> server-state-base /var/lib/haproxy
>>> server-state-file hap_state
>>>
>>> defaults
>>> load-server-state-from-file global
>>> default-server init-addr last,libc,none
>>>
>>> listen amq
>>> bind ... ssl crt ...
>>> mode http
>>>
>>> option prefer-last-server
>>>
>>> # when this is on the backend is down
>>> #option tcp-check
>>>
>>> default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s
>>> maxconn 25 maxqueue 256 weight 100
>>>
>>> # working but both show as up
>>> server-template amqs 2 activemq.service.consul:8161 check
>>>
>>> # working old static setup
>>> #server ip-10-140-3-171 10.140.3.171:8161 check
>>> #server ip-10-140-4-122 10.140.4.122:8161 check
>>>
>>> This is working but the thing is I see both servers as UP in the HAP
>>> console:
>>> [image: amqs.png]
>>> Is this normal for this kind of setup or I'm doing something wrong?
>>>
>>> Another observation, when I have tcp check enabled like:
>>>
>>> option tcp-check
>>>
>>> the way I had it with the static lines like:
>>>
>>> server ip-10-140-3-171 10.140.3.171:8161 check
>>> server ip-10-140-4-122 10.140.4.122:8161 check
>>>
>>> then both servers show as down.
>>> Thanks in advance for any kind of input.
>>> Igor
>>>
>>> Ok, the state has changed now, I have correct state on one haproxy:
>>
>> [image: amqs_hap1.png]
>> but on the second the whole backend is down:
>>
>> [image: amqs_hap2.png]
>> I confirmed via telnet that I can connect to port 8161 to the running amq
>> server from both haproxy servers.
>>
>>
>
>
> Hi Igor,
>
> You're using the libc resolver function at startup time to resolve your
> backend; this is not the recommended integration with Consul.
> You will find some good explanations in this blog article:
>
> https://www.haproxy.com/fr/blog/haproxy-and-consul-with-dns-for-service-discovery/
>
> Basically, you should first create a "resolvers" section, in order to
> allow HAProxy to perform DNS resolution at runtime too.
>
> resolvers consul
>   nameserver consul 127.0.0.1:8600
>   accepted_payload_size 8192
>
> Then, you need to adjust your server-template line, like this:
> server-template amqs 10 _activemq._tcp.service.consul resolvers consul
> resolve-prefer ipv4 check
>
> In the example above, I am using on purpose the SRV records, because
> HAProxy supports it and it will use all information available in the
> response to update server's IP, weight and port.
>
> I hope this will help you.
>
> Baptiste
>

All sorted now. For the record and those interested here is my setup:

Haproxy:


global
server-state-base /var/lib/haproxy
server-state-file hap_state

defaults
load-server-state-from-file global
default-server init-addr last,libc,none

resolvers consul
nameserver consul 127.0.0.1:8600
accepted_payload_size 8192
resolve_retries   30
timeout resolve   1s
timeout retry 2s
hold valid30s
hold other30s
hold refused  30s
hold nx   30s
hold timeout  30s
hold obsolete 30s

listen jetty
bind _port_ ssl crt ...
mode http

option forwardfor except 127.0.0.1 header X-Forwarded-For
option http-ignore-probes
option prefer-last-server

default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s
maxconn 25 maxqueue 256 weight 100
server-template jettys 2 _jetty._tcp.service.consul resolvers consul
resolve-prefer ipv4 check

Consul:
---
{
  "services": [
{

Re: possible use of unitialized value in v2.0-dev0-274-g1a0fe3be

2019-02-08 Thread Ricardo Nabinger Sanchez
On Wed, 6 Feb 2019 19:12:31 +0100
Tim Düsterhus  wrote:

> Line 4398 is missing here, it appends a marker (empty string) to mark
> the end of the array.
> 
> > ...
> > 
> > 4450 /* look for the Host header and place it in :authority 
> > */
> > 4451 auth = ist2(NULL, 0);
> > 4452 for (hdr = 0; hdr < sizeof(list)/sizeof(list[0]); 
> > hdr++) {
> > 4453 if (isteq(list[hdr].n, ist("")))
> > // (here, assume the condition is false, so control keeps in this block...) 
> >  
> 
> We established that `list` is an array without holes terminated by an
> empty string.
> 
> Thus either:
> 1. the condition is false, in which case the value must be initialized,
> or
> 2. the condition is true, in which case the loop is exited.
> 
> Thus I believe this is a false-positive.

Thank you for checking this out and, by extension, Willy for his set of
replies.  I missed the marker (and apparently, so did Clang).

Cheers,

-- 
Ricardo Nabinger Sanchez http://www.taghos.com.br/
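For readers following along, the sentinel-terminated pattern Tim describes can be sketched with a simplified stand-in for HAProxy's struct ist; this is illustrative only, not the actual h2 code, and find_hdr is a made-up helper:

```c
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for HAProxy's struct ist: a pointer/length pair. */
struct ist { const char *ptr; size_t len; };

/* Look up a header by name in a list whose end is marked by an
 * empty-string sentinel.  Because the loop exits on the sentinel,
 * every entry it inspects before that point was necessarily
 * initialized when the list was filled. */
static const char *find_hdr(const struct ist *list, const char *name)
{
    for (size_t i = 0; list[i].len != 0; i++) {
        if (list[i].len == strlen(name) &&
            memcmp(list[i].ptr, name, list[i].len) == 0)
            return list[i].ptr;
    }
    return NULL; /* hit the sentinel: header absent */
}
```

A static analyzer that misses the sentinel check can conclude a later read is uninitialized, which is exactly the false positive discussed above.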




Re: [PATCH] CONTRIB: contrib/prometheus-exporter: Add a Prometheus exporter for HAProxy

2019-02-08 Thread Willy Tarreau
Hi Christopher,


On Thu, Feb 07, 2019 at 10:09:52PM +0100, Christopher Faulet wrote:
> Hi,
> 
> This patch adds a new component in contrib. It is a Prometheus exporter for
> HAProxy.
(...)

Thanks for doing this. After reading the whole patch, I measure how
uninteresting an experience this must have been! After all, using C
to produce yet-another-format is akin to using python to write
yet-another-http-proxy :-)

I totally agree with your approach of placing it under contrib/. After
all we've already done the same with other extensions explicitly
targeting products (mod_security, mod_defender, systemd, ...).
We support standards, not products. And products which respect standards
are naturally supported via the standards, so there is indeed no reason
for opening a new can of worms by inviting $PRODUCT_OF_THE_MONTH into
src/ especially when these products change faster than our maintenance
cycle.

In my opinion the right place for a stats exporter is: outside. However
I'm well aware that our export formats are not necessarily friendly to
such exporters. For example, the fact that prometheus uses this funny
ordering forces a gateway to keep all metrics in memory before being
able to dump them. It's not cool either. We could have a long-term
approach consisting in trying to implement multiple tree walk methods
combined with a few formats so that implementing external exporters in
various languages becomes trivial. In this case such tools could provide
high quality agents to collect our metrics by default without having to
work around some limitations or constraints.

This is probably a design discussion to have for the long term here on
the list: what are the main stats export mechanisms desired in the field.
I can imagine that most agents will want to poll haproxy and dump the
whole stats once in a while, some will rely on it to send a full dump
once in a while (this may already become an issue during reloads), some
might possibly want to subscribe to change notification of certain
metrics, or receive a diff from the previous dump once in a while. And
for all these variations I guess we may have to implement 2 or 3 dump
styles :
  - Front -> Back -> Server -> metric
  - { Front, Back, Server } -> metric
  - metric: { front, back, server }
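As a toy illustration of why the third ordering is the costly one for a gateway, here is a sketch in Python (the row shapes and names are made up, not HAProxy output):

```python
from collections import defaultdict

# Toy rows in the natural dump order: (object kind, object name, metric, value).
ROWS = [
    ("frontend", "www",      "scur", 42),
    ("backend",  "app",      "scur", 17),
    ("server",   "app/srv1", "scur", 9),
]

def dump_object_major(rows):
    """{ Front, Back, Server } -> metric: can be streamed row by row."""
    return [f"{kind}.{name}.{metric} {value}"
            for kind, name, metric, value in rows]

def dump_metric_major(rows):
    """metric -> { front, back, server }: the Prometheus-style ordering,
    which forces buffering every row before the first line goes out."""
    by_metric = defaultdict(list)
    for kind, name, metric, value in rows:
        by_metric[metric].append((kind, name, value))
    return [f'{metric}{{kind="{kind}",name="{name}"}} {value}'
            for metric in sorted(by_metric)
            for kind, name, value in by_metric[metric]]
```

The first style emits as it walks; the second cannot emit anything until the whole walk is done, which is the in-memory cost mentioned above.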

I don't know if I'm completely off or not, but I do think that those who
have experience with such tools should definitely join the discussion to
share their observations and deployment difficulties met in the field.

In the mean time I think your patch should be merged, you'll get more
feedback on it this way, and it's not a critical part like HTX that
you want to be certain to be perfect before merging :-)

Thanks,
Willy