Re: Chaining haproxy instances for a migration scenario

2015-09-11 Thread Baptiste
On Fri, Sep 11, 2015 at 10:41 AM, Tim Verhoeven
 wrote:
> Hello everyone,
>
> I'm mostly passive on this list but a happy haproxy user for more than 2
> years.
>
> Now, we are going to migrate our platform to a new provider (and new
> hardware) in the coming months and I'm looking for a way to avoid a one-shot
> migration.
>
> So I've been doing some googling and it should be possible to use the proxy
> protocol to send traffic from one haproxy instance (at the old site) to
> another haproxy instance (at the new site). Then, at the new site, the
> haproxy instance there would just accept the traffic as if it came directly
> from the internet.
>
> Is that how it works? Is that possible?
>
> Ideally the traffic between the 2 haproxy instances would be encrypted with
> TLS to avoid having to set up a VPN.
>
> Now I haven't found any examples of this kind of setup, so any pointers on
> how to set this up would be really appreciated.
>
> Thanks,
> Tim


Hi Tim,

Your use case is an interesting scenario for a blog article :)

Regarding your questions: simply update the app backend of the current
site in order to add a new 'server' that would be the HAProxy of the
new site:

backend myapp
 [...]
 server app1 ...
 server app2 ...
 server newhaproxy [IP]:8443 check ssl send-proxy-v2 ca-file /etc/haproxy/myca.pem crt /etc/haproxy/client.pem

ca-file: used to validate the certificate presented by the server against
your own CA (or, DANGEROUSLY, use "ssl-server-verify none" in your
global section)
crt: lets you present a client certificate when connecting to the
other HAProxy

On the newhaproxy (in the new instance):

frontend fe_myapp
 bind :80
 bind :443 ssl crt server.pem
 bind :8443 ssl crt server.pem accept-proxy



You can play with the weight on the current site to send a few requests to
the newhaproxy box and increase this weight once you're confident.
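
For example, a gradual shift could look like this (a minimal sketch; the
app1/app2 lines and the exact weights are placeholders):

backend myapp
 [...]
 server app1 ... weight 100
 server app2 ... weight 100
 server newhaproxy [IP]:8443 weight 10 check ssl send-proxy-v2 ca-file /etc/haproxy/myca.pem crt /etc/haproxy/client.pem

With the stats socket enabled at admin level, the weight can then be raised
at runtime with "set weight myapp/newhaproxy 50" and so on, until all the
traffic goes to the new site.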

Baptiste



Re: Chaining haproxy instances for a migration scenario

2015-09-11 Thread bjun...@gmail.com
2015-09-11 10:55 GMT+02:00 Baptiste :

> [...]

Hi Tim,

I'm having a similar use case (a smooth migration from 1.5 to 1.6). I've
recently blogged about this:


http://godevops.net/2015/09/07/testing-new-haproxy-versions-with-some-sort-of-ab-testing/


-
Best Regards / Mit freundlichen Grüßen

Bjoern


Chaining haproxy instances for a migration scenario

2015-09-11 Thread Tim Verhoeven
Hello everyone,

I'm mostly passive on this list but a happy haproxy user for more than 2
years.

Now, we are going to migrate our platform to a new provider (and new
hardware) in the coming months and I'm looking for a way to avoid a
one-shot migration.

So I've been doing some googling and it should be possible to use the
proxy protocol to send traffic from one haproxy instance (at the old site)
to another haproxy instance (at the new site). Then, at the new site, the
haproxy instance there would just accept the traffic as if it came directly
from the internet.

Is that how it works? Is that possible?

Ideally the traffic between the 2 haproxy instances would be encrypted with
TLS to avoid having to set up a VPN.

Now I haven't found any examples of this kind of setup, so any pointers on
how to set this up would be really appreciated.

Thanks,
Tim


Re: Issue with Haproxy Reload

2015-09-11 Thread Joseph Lynch
Hi Rajeev,

> We are using HAProxy on top of a Mesos cluster. We are doing dynamic reloads
> of HAProxy based on Marathon events (50-100 times a day). We have nearly
> 300 applications running on Mesos (300 virtual hosts in HAProxy).

That should be very doable; for context we reload HAProxy thousands of
times per day and have around the same number of services. We do
leverage our improvements to https://github.com/airbnb/synapse to
minimize the number of reloads we have to do, but marathon is good at
making us reload. Just curious: how do you have HAProxy deployed? Is it
running on a centralized machine somewhere, or on every host?

> When we do dynamic reloads, HAProxy takes a long time to reload; we observed
> that for 50 applications it takes 30-40 seconds to reload HAProxy.

This seems very surprising to me unless you're doing something like
SSL. Can you post a portion of your config?

> We have a single config file for HAProxy; when we do a reload, all the
> applications (frontends) get reloaded, and this causes downtime for all
> applications. Is there any way to reduce the downtime and the impact on
> end users?
>
> We tried this scenario:
> "http://engineeringblog.yelp.com/2015/04/true-zero-downtime-haproxy-reloads.html"
>
> With this, if a user sends requests during the reload, the requests are
> queued and served after the reload.

Full disclaimer, I wrote that post, and I'm not sure that it will be
all that useful to you if your clients are external or your reloads
take > 30s. "The largest drawback is that this works only for outgoing
links and not for incoming traffic." It would theoretically not be
hard to extend to incoming traffic using ifb but I haven't worked on
actually proving out that solution. If the reload takes > 30s that
technique simply won't work (you'll be buffering connections for 30s,
and likely dropping them). If the 30s reloads are unavoidable you will
likely want to consider one of the alternative strategies mentioned in
the post. For example, you can just drop SYNs, since the 1s penalty
isn't that big of a deal (you will still see 30s+ of unavailability),
use nginx/haproxy to route in front of haproxy (can be a bit confusing
and hard to work with), or build something similar to
http://inside.unbounce.com/product-dev/haproxy-reloads/ (be aware that
you pay the conntrack cost with a solution like that).
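
For the "haproxy in front of haproxy" option, the front tier can be a tiny
TCP-mode instance that you never reload; a minimal sketch (ports and names
are made up):

listen tier1
 bind :80
 mode tcp
 server tier2_a 127.0.0.1:8001 check
 server tier2_b 127.0.0.1:8002 check backup

You then only ever reload the tier-2 instances (put one in maintenance,
reload it, bring it back), so clients always have a live listener to land on.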

> But if we do multiple reloads one after another, old HAProxy processes
> persist even after reloading the HAProxy service, and this is causing a
> serious issue.
>
> root  7816  0.1  0.0  20024  3028 ?  Ss  03:52  0:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -D -sf 6778
> root  7817  0.0  0.0  20024  3148 ?  Ss  03:52  0:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -D -sf 6778
>
> Is there any way to stop the previous process once it has finished serving
> its requests?

That is expected behaviour afaik. Those processes are likely still
alive because there are still open connections held against them. How
long is the longest timeout on your backend servers? This is common
with long-lived TCP mode backends, but those apps are often resilient
to losing the TCP connection, so you may just be able to kill the old
haproxy instances (it's what we do).

> Can we separate the configurations per frontend, like in Nginx, so that
> only the affected apps are impacted if there is a change in a backend?

I mean there is nothing that stops you from running multiple haproxy
instances that bind to different ports. I think the right place to
start though is figuring out why reloading takes so long, which can
probably be figured out by looking at the config.
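
If you do split, the simplest shape is one config file, pid file and set of
ports per group of frontends; a rough sketch (file names and ports are
hypothetical):

# /etc/haproxy/group-a.cfg
frontend fe_group_a
 bind :8080
 default_backend be_group_a

# reloaded independently with:
# haproxy -f /etc/haproxy/group-a.cfg -p /var/run/haproxy-a.pid -D -sf $(cat /var/run/haproxy-a.pid)

A change to the apps in group B then only triggers a reload of the group-B
instance.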

Good luck,
-Joey



Build failure in current master HEAD

2015-09-11 Thread Conrad Hoffmann
Hi,

I cannot build the current dev's master HEAD (ec3c37d) because of this error:

> In file included from include/proto/proto_http.h:26:0,
>  from src/stick_table.c:26:
> include/types/action.h:102:20: error: field ‘re’ has incomplete type
> struct my_regex re;/* used by replace-header and replace-value */
> ^
> Makefile:771: recipe for target 'src/stick_table.o' failed
> make: *** [src/stick_table.o] Error 1

The struct act_rule defined in action.h embeds a full struct my_regex
without #include-ing regex.h. Neither gcc 5.2.0 nor clang 3.6.2 allows this.

Cheers,
Conrad
-- 
Conrad Hoffmann
Traffic Engineer

SoundCloud Ltd. | Rheinsberger Str. 76/77, 10115 Berlin, Germany

Managing Director: Alexander Ljung | Incorporated in England & Wales
with Company No. 6343600 | Local Branch Office | AG Charlottenburg |
HRB 110657B



Multiple log entries with exact same Tq, Tc, Tr and Tt

2015-09-11 Thread Dave Stern
We run haproxy 1.5.14 on Ubuntu 14.04 in AWS. It load balances connections
from Heroku to our backend, a Neo4j cluster of multiple instances. It also
terminates SSL and handles auth. Connections to the backend are over
private network space within our AWS network and via unencrypted HTTP.

During a recent event with heavy load, we saw log entries with the same
repeating pattern: multiple lines with the exact same Tq, Tc, Tr and Tt
values, but with different requests from different clients routed to
different backend pool members. We're not sure what's causing this or whether
it relates to the performance issues we saw during the event. Here are some
log lines with excerpted fields. We send a custom header with the query name
for metrics, which I've replaced with generic entries. I also replaced the
real client IPs with 1.1.1.1, 2.2.2.2, etc.


Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=1.1.1.1:37480
bs=db_production_read:production-04 hdrs="{query_1}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=1.1.1.1:37481
bs=db_production_read:production-02 hdrs="{query_2}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=1.1.1.1:37482
bs=db_production_read:production-03 hdrs="{query_3}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=1.1.1.1:37484
bs=db_production_read:production-04 hdrs="{query_4}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=1.1.1.1:37483
bs=db_production_read:production-02 hdrs="{query_5}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=1.1.1.1:37485
bs=db_production_read:production-03 hdrs="{query_6}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=1.1.1.1:37486
bs=db_production_read:production-05 hdrs="{query_7}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=2.2.2.2:37488
bs=db_production_read:production-03 hdrs="{query_8}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=1.1.1.1:37487
bs=db_production_write:production-01 hdrs="{query_9}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=1.1.1.1:37488
bs=db_production_read:production-04 hdrs="{query_10}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=1.1.1.1:37489
bs=db_production_read:production-02 hdrs="{query_11}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=1.1.1.1:37490
bs=db_production_read:production-03 hdrs="{query_12}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=1.1.1.1:37491
bs=db_production_write:production-01 hdrs="{query_9}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=1.1.1.1:37492
bs=db_production_read:production-04 hdrs="{query_13}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=3.3.3.3:53958
bs=db_production_read:production-02 hdrs="{query_10}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=3.3.3.3:53959
bs=db_production_read:production-03 hdrs="{query_11}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=3.3.3.3:53960
bs=db_production_read:production-04 hdrs="{query_12}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=2.2.2.2:37489
bs=db_production_write:production-01 hdrs="{query_9}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=1.1.1.1:37493
bs=db_production_write:production-01 hdrs="{query_9}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=3.3.3.3:53961
bs=db_production_write:production-01 hdrs="{query_9}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=2.2.2.2:37490
bs=db_production_write:production-01 hdrs="{query_9}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=3.3.3.3:53962
bs=db_production_write:production-01 hdrs="{query_14}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=2.2.2.2:37491
bs=db_production_write:production-01 hdrs="{query_9}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=3.3.3.3:53963
bs=db_production_write:production-01 hdrs="{query_9}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=1.1.1.1:37495
bs=db_production_write:production-01 hdrs="{query_9}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=3.3.3.3:53964
bs=db_production_write:production-01 hdrs="{query_9}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=3.3.3.3:53965
bs=db_production_read:production-02 hdrs="{query_13}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=1.1.1.1:37496
bs=db_production_write:production-01 hdrs="{query_9}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=1.1.1.1:37498
bs=db_production_read:production-04 hdrs="{query_15}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=2.2.2.2:37492
bs=db_production_read:production-02 hdrs="{query_14}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=1.1.1.1:37499
bs=db_production_write:production-01 hdrs="{query_13}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=2.2.2.2:37493
bs=db_production_read:production-03 hdrs="{query_10}"
Sep 10 20:00:00 Tq=1060 Tw=0 Tc=1204 Tr=922 Tt=3186 cip=2.2.2.2:37495
bs=db_production_read:production-02 hdrs="{query_12}"

The logs repeat like this with different sets of entries with 

Re: TCP_NODELAY in tcp mode

2015-09-11 Thread Willy Tarreau
Hi Dmitry,

On Fri, Sep 11, 2015 at 01:58:42PM +0300, Dmitry Sivachenko wrote:
> For reference: I tracked this down to a FreeBSD-specific problem:
> https://lists.freebsd.org/pipermail/freebsd-net/2015-September/043314.html
> 
> Thanks all for your help.

Thanks for the update. What I'm seeing in your description looks
very much like equivalent issues we used to face with softirq on
Linux, so it's possible that you're in the worst case where work
cannot be aggregated but comes with a huge overhead. Also maybe
you have pf or something like this eating some extra CPU. I can't
be specific, I don't use FreeBSD myself, but like Linux it's a
modern and performant OS so I think you'll come to a solution.

I don't know if you can pin processes to CPUs but there could be
interesting tests to run regarding how processes and interrupts
are pinned.
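
On the haproxy side, pinning would look like this (a minimal sketch, assuming
a multi-process setup and a build with CPU affinity support; whether cpu-map
behaves the same on FreeBSD is something to verify):

global
 nbproc 4
 cpu-map 1 0
 cpu-map 2 1
 cpu-map 3 2
 cpu-map 4 3

The NIC interrupt/queue pinning then has to be done on the OS side so that
network interrupts and haproxy processes don't fight over the same cores.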

Also, if your NIC supports multiple queues, you'll need to check
how interrupts are delivered. It is possible that you're facing a
scalability issue in the network driver or stack, maybe in a case
where too few sockets are used or when packets get highly reordered.

Cheers,
Willy




rate limiting according to "total time" - possible ?

2015-09-11 Thread Roland RoLaNd
Hello,
I have haproxy directing traffic to a number of backends.
These backends can auto-scale with traffic; my goal is to change "maxconn"
depending on the "total time" or "backend time" that a request took to complete.
For example:
if totaltime < 1 second: maxconn = 1000
if totaltime < 2 seconds: maxconn = 500
etc...

The goal is to hold connections in the queue until backend auto-scaling takes effect.
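
For reference, the queueing part by itself is just per-server maxconn plus a
queue timeout; a minimal static sketch, with made-up addresses and numbers:

backend app
 timeout queue 10s
 server app1 10.0.0.1:8080 check maxconn 500
 server app2 10.0.0.2:8080 check maxconn 500

Requests beyond maxconn wait in the backend queue for up to "timeout queue"
before being rejected.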

Can I do the above within the haproxy config, or is a cron job that checks
the total time via the haproxy socket and acts accordingly a better idea?

Do you have any alternative advice for accomplishing this goal?
Thanks in advance
  

Re: Client Affinity in HAProxy with MQTT Broker

2015-09-11 Thread Sourav Das
Hi Baptiste,

Thanks a lot for the quick reply.
Please find the snapshot of the TCP dump for MQTT publish message.

[image: Inline image 1]


Please note that the MQTT messages are carried on top of TCP. Also, as
mentioned in the previous mail, I am trying to load balance the traffic based
on the topic in the MQTT message (3rd line from the end), which means
that all the traffic corresponding to a particular topic should be
forwarded to one specific server only.

So, considering the above requirement, I have the following questions:

1. Is MQTT already supported as an out-of-the-box feature?
2. Is there a configurable hook/plugin through which the above can be achieved?
3. If no configuration/plugins are available, how easy would it be to add
the feature/code needed to retrieve payload information from TCP packets
and parse it as per MQTT?
4. What are the main modules I should look into in order to implement this,
in case it is not yet implemented?


Please let me know for any further clarification.

Regards,
Sourav

On Thu, Sep 10, 2015 at 7:58 PM, Baptiste  wrote:

> On Thu, Sep 10, 2015 at 4:05 PM, Sourav Das 
> wrote:
> > Hi,
> >
> > I have been going through the HAProxy documentation for my work which
> deals
> > with scaling and load balancing for MQTT Brokers.
> >
> > However, I could not find any configuration regarding the Client Affinity
> > where the routing of the MQTT traffic is done based on the topic present
> in
> > the MQTT message. As MQTT is also carried over TCP, is it possible to
> use a
> > pre-configured hook in HAProxy so that the traffic can be routed to the
> > appropriate server based on the MQTT topic.
> >
> > At present I am not able to find out any hook which enables this to be
> done.
> > I am a bit curious to know whether the support of MQTT is planned in
> future
> > releases of HAProxy.
> >
> >
> > Please let me know if this makes sense.
> >
> > Regards,
> > Sourav
>
>
> Hi Sourav,
>
> This would be doable only if the information can be retrieved from the
> payload of the first request sent by the client.
> could you provide more information about how MQTT protocol works? Is
> there any server banner?
> A simple TCP dump containing an example of the message you want to
> route would be appreciated and allow us to deliver you an accurate
> answer.
>
> Baptiste
>


Re: TCP_NODELAY in tcp mode

2015-09-11 Thread Dmitry Sivachenko

> On 8 Sep 2015, at 18:33, Willy Tarreau  wrote:
> 
> Hi Dmitry,
> 
> On Tue, Sep 08, 2015 at 05:25:33PM +0300, Dmitry Sivachenko wrote:
>> 
>>> On 30 Aug 2015, at 22:29, Willy Tarreau  wrote:
>>> 
>>> On Fri, Aug 28, 2015 at 11:40:18AM +0200, Lukas Tribus wrote:
>> Ok, you may be hitting a bug. Can you provide haproxy -vv output?
>> 
> 
> 
> What do you mean? I get the following warning when trying to use this
> option in tcp backend/frontend:
 
 Yes I know (I didn't realize you are using tcp mode). I don't mean the
 warning is the bug, I mean the tcp mode is supposed to not cause any
 delays by default, if I'm not mistaken.
>>> 
>>> You're not mistaken, tcp_nodelay is unconditional in TCP mode and MSG_MORE
>>> is not used there since we never know if more data follows. In fact there's
>>> only one case where it can happen, it's when data wrap at the end of the
>>> buffer and we want to send them together.
>>> 
>> 
>> 
>> Hello,
>> 
>> yes, you are right, the problem is not TCP_NODELAY.  I performed some 
>> testing:
>> 
>> Under low network load, passing TCP connection through haproxy involves 
>> almost zero overhead.
>> When load grows, at some point haproxy starts to slow things down.
>> 
>> In our testing scenario the application establishes long-lived TCP 
>> connection to server and sends many small requests.
>> Typical traffic at which adding haproxy in the middle causes measurable 
>> slowdown is ~30MB/sec, ~100kpps.
> 
> This is not huge, it's smaller than what can be achieved in pure HTTP mode,
> where I could achieve about 180k req/s end-to-end, which means at least 
> 180kpps
> in both directions on both sides, so 360kpps in each direction.
> 


For reference: I tracked this down to a FreeBSD-specific problem:
https://lists.freebsd.org/pipermail/freebsd-net/2015-September/043314.html

Thanks all for your help.




Two things: proxy protocol v2 example and a missing article.

2015-09-11 Thread Eliezer Croitoru

Hey List,

I am writing a proxy protocol parser in golang and I need some help.
I am looking for a couple of proxy protocol v2 examples for testing purposes,
i.e. a couple of strings which I can throw at my parser.
The obvious first step is to just run haproxy and dump the strings, but
example payloads seem to be missing from the v2 docs compared to v1 (from
what I was reading).
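
One way to get real v2 bytes to test against is to let haproxy generate them
itself; a minimal sketch (addresses and ports are arbitrary):

listen gen_proxy_v2
 bind 127.0.0.1:8080
 mode tcp
 server sink 127.0.0.1:9000 send-proxy-v2

Anything listening on 127.0.0.1:9000 (or a tcpdump on loopback) will then see
the binary v2 header, which starts with the 12-byte signature
\x0D\x0A\x0D\x0A\x00\x0D\x0A\x51\x55\x49\x54\x0A, in front of the forwarded
payload.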



Another issue: I have been working on some FreeBSD things, and I have seen
in the haproxy docs at:

http://www.haproxy.org/#docs

that the article:
Benchmarking HAProxy under VMware : Ubuntu vs
FreeBSD [http://equima.pfpfree.net/2010/benchmarking-haproxy-ubuntu-vs-freebsd/]

is missing from the origin server.
I am looking for this article and for any FreeBSD tuning article that applies
to running haproxy on FreeBSD under high load.
I noticed that the default settings of FreeBSD 10.1 don't really give the
admin what is needed to use it as an HAProxy box, and I was wondering whether
others have notes about tuning FreeBSD.


Thanks,
Eliezer



Re: Client Affinity in HAProxy with MQTT Broker

2015-09-11 Thread Baptiste
Hi Sourav,

Thanks a lot for the mail and the screenshot.
That said, usually, when we ask for a capture, we mean a pcap file, not a
png one :)
It's fine, I have the information I need.

I also used this documentation:
http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc398718037


Please note that the MQTT messages are carried on top of TCP. Also, as
> mentioned in previous mail, I am trying to load balance the traffic based
> on the topic mentioned in the MQTT message( 3rd line from end ) which means
> that all the traffic corresponding to a particular topic should be
> forwarded to a specific server only.
>

That's doable only under the following conditions:
- the client speaks first (no server banner)
- the information is available in the first session buffer
- the information is always at the same place

It seems MQTT meets the three rules above, so we may be able to do something.

So considering above requirement, I have the following questions :
>
> 1. Is MQTT already supported as an out of box feature?
>

No, MQTT is not supported out of the box.


> 2. Is there a configurable hook/plugin through which above can be achieved?
>

You might be able to code an MQTT protocol parser in Lua.
You could even do MQTT routing to a specific farm based on the topic, but
load balancing is another story.


> 3. If no configuration/plugins are available, how easy it would be to add
> the feature/code in order to retrieve payload information from TCP packets
> and parse it as per MQTT?
>

Very complicated, I'm afraid...


> 4. What are main modules which I should look into in order to implement
> the same in case not yet implemented?
>
>
Well, as I mentioned above, there might be some stuff we can do.
Please check
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#req.payload_lv

Following your capture, this fetch can be used to retrieve the whole topic:
req.payload_lv(1,1) => it fetches content from the TCP payload buffer whose
length is encoded at offset 1 over 1 byte, with the data stored right after
the length.

This fetch has a corresponding ACL to match against static patterns.
Imagine 3 topics (topic1, topic2, topic3); then you could do:

frontend mqtt
[...]
 use_backend bk_topic1 if { req.payload_lv(1,1),lower topic1 }
 use_backend bk_topic2 if { req.payload_lv(1,1),lower topic2 }
 use_backend bk_topic3 if { req.payload_lv(1,1),lower topic3 }

backend bk_topic1
[...]

backend bk_topic2
[...]

backend bk_topic3
[...]

The same could be applied in a single backend, using the use-server
statement.
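
For example, a minimal sketch of the single-backend variant (server addresses
are placeholders):

backend bk_mqtt
 [...]
 use-server srv_topic1 if { req.payload_lv(1,1),lower topic1 }
 use-server srv_topic2 if { req.payload_lv(1,1),lower topic2 }
 use-server srv_topic3 if { req.payload_lv(1,1),lower topic3 }
 server srv_topic1 10.0.0.1:1883 check
 server srv_topic2 10.0.0.2:1883 check
 server srv_topic3 10.0.0.3:1883 check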

I hope this helps and is enough.

Be aware that once the TCP session has been forwarded to a server, all
subsequent messages are going to be forwarded to that server, regardless of
the topics set in later messages over the same connection.
To be routed again, a client must send its next PUBLISH message over a new
TCP connection.

Baptiste



On Thu, Sep 10, 2015 at 7:58 PM, Baptiste  wrote:

> [...]


Re: Build failure in current master HEAD

2015-09-11 Thread Thierry FOURNIER
Thank you. I'm submitting the patch.

Thierry

On Fri, 11 Sep 2015 10:38:22 +0200
Conrad Hoffmann  wrote:

> [...]


