MQTT client id in Haproxy

2017-12-01 Thread CJ
Hi All,

Is it possible to get the client id in HAProxy in TCP mode, just like in HTTP mode?

I'm trying to get the MQTT client id in HAProxy TCP mode.

Similar case in nginx:

https://www.nginx.com/blog/nginx-plus-iot-load-balancing-mqtt/
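
A minimal sketch of one way this became possible in later HAProxy versions
(2.4+ added MQTT converters; the frontend, map file, and backend names here
are illustrative):

  frontend mqtt_in
    mode tcp
    bind :1883
    tcp-request inspect-delay 10s
    # only let well-formed MQTT CONNECT packets through
    tcp-request content reject unless { req.payload(0,0),mqtt_is_valid }
    # pull the client identifier out of the CONNECT packet and route on it
    use_backend %[req.payload(0,0),mqtt_field_value(connect,client_identifier),map(/etc/haproxy/mqtt-clients.map,bk_default)]

  backend bk_default
    mode tcp
    server broker1 10.0.0.10:1883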


Regards,

CJ


Re: Rate limiting w/o 429s

2016-08-05 Thread CJ Ess
Not the tarpit feature - that denies access to the content with a 500
status. I don't want to kill the request, just delay it.
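
A minimal sketch of one way to delay rather than deny, assuming HAProxy
1.6+ built with Lua (the action name, threshold, and 2s delay are all
illustrative):

  -- delay.lua: hold matching requests without killing them
  core.register_action("delay_request", { "http-req" }, function(txn)
    core.msleep(2000) -- pause 2s before the request is dispatched
  end)

and in haproxy.cfg:

  global
    lua-load /etc/haproxy/delay.lua

  frontend fe_web
    stick-table type ip size 100k expire 1m store http_req_rate(10s)
    http-request track-sc0 src
    http-request lua.delay_request if { sc_http_req_rate(0) gt 100 }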


On Fri, Aug 5, 2016 at 8:57 PM, Dennis Jacobfeuerborn  wrote:

> On 05.08.2016 19:11, CJ Ess wrote:
> > So I know I can use Haproxy to send 429s when a given request rate is
> > exceeded.
> >
> > I have a case where the "user" is mostly screen scrapers and click bots,
> > so if I return a 429 they'll just turn around and re-request until
> > successful - I can't expect them to voluntarily manage their request rate
> > or do any sort of back-off when requests fail. So instead I want to keep
> > the connections open and the requests alive, and just delay dispatching
> > them to an upstream backend. Is there any way I can do something like
> > this? I'm open to suggestions of alternative ways to achieve the same
> > effect.
> >
>
> It sounds like the tarpit functionality is what you want:
> https://cbonte.github.io/haproxy-dconv/1.6/configuration.html#reqtarpit
>
> Regards,
>   Dennis
>
>
>


Rate limiting w/o 429s

2016-08-05 Thread CJ Ess
So I know I can use Haproxy to send 429s when a given request rate is
exceeded.

I have a case where the "user" is mostly screen scrapers and click bots, so
if I return a 429 they'll just turn around and re-request until successful
- I can't expect them to voluntarily manage their request rate or do any
sort of back-off when requests fail. So instead I want to keep the
connections open and the requests alive, and just delay dispatching them to
an upstream backend. Is there any way I can do something like this? I'm open
to suggestions of alternative ways to achieve the same effect.


Re: Problem w/ connection reuse for haproxy backends

2016-07-22 Thread CJ Ess
Thanks Willy and Nenand!


On Fri, Jul 22, 2016 at 5:44 AM, Willy Tarreau  wrote:

> Hi Pavlos,
>
> On Fri, Jul 22, 2016 at 12:33:07AM +0200, Pavlos Parissis wrote:
> > On 21/07/2016 10:30 , Willy Tarreau wrote:
> > > Hi,
> > >
> > > On Thu, Jul 21, 2016 at 02:33:05PM -0400, CJ Ess wrote:
> > >> I think I'm overlooking something simple, could someone spot check me?
> > >>
> > >> What I want to do is to pool connections on my http backend - keep
> HAProxy
> > >> from opening a new connection to the same backend if there is an
> > >> established connection that is idle.
> > >>
> > >> My haproxy version is 1.5.18
> > > (...)
> > >
> > >> There is more than enough traffic going through the backend that if a
> > >> connection is idle, there will be a request that could use it (within
> > >> ms, should never hit the 5s or 75s timeouts), however in every case the
> > >> connection just sits idle for five seconds then closes.
> > >>
> > >> Am I missing something simple to enable this behavior?
> > >
> > > Yes, you're missing the "http-reuse" directive which was introduced in
> > > 1.6. Be careful when doing this (read the doc carefully), as some
> > > servers still tend to confuse requests and connections and could do
> > > some funny stuff there.
> >
> > Can you elaborate a bit more on this?
> > Which servers? Nginx/Apache and under which conditions ?
>
> Some application servers (or some components) tend to tie some
> incoming parameters to the connection instead of the request. There
> used to be a lot of confusion regarding this when keep-alive was
> brought to HTTP because it was the era where reverse proxies would
> not even exist so there was no doubt that a connection always comes
> from a client. Unfortunately some bad designs were introduced due to
> this. The most widely known certainly is NTLM, which violates HTTP
> since it assumes that all requests coming over a connection belong to
> the same client. HAProxy detects this by marking a connection "private"
> as soon as it sees a 401 or 407 on it, and will not share it with any
> other client. But regardless of this, you'll find dirty applications
> which assign a cookie only after the 2nd or 3rd request over a given
> connection. Some will only emit a response cookie on the first response
> so the next requests will never get a cookie. Other ones will only check
> the X-Forwarded-For header when the connection establishes and will use
> this value for all requests from the connection, resulting in wrong logs
> and/or possibly rules. Others will simply take a decision on the first
> request of a connection and not check the remaining ones (like haproxy
> used to do up to version 1.3 and can still do when forced in tunnel
> mode).
>
> Most often the application components which break these HTTP principles
> are the ones which do not support a load balancer. But sometimes some
> of them work when you install a load balancer working in tunnel mode
> in front of them (like haproxy up to 1.3 by default).
>
> A rule of thumb is that if your application only works when you have
> "option prefer-last-server", then your application certainly is at
> risk.
>
> This problem has been widely discussed inside the IETF HTTP working
> group and is known as "requests must work in isolation". It's been
> quite well documented over the years and normally all modern components
> are safe. But if you connect to a good old dirty thing developed in the
> early 2000, be careful! Similarly, when using 3rd party apache modules
> developed by people doing a quick and dirty thing, be prepared to
> discover the hard way that they never read an RFC in their life...
>
> Cheers,
> Willy
>
>


Problem w/ connection reuse for haproxy backends

2016-07-21 Thread CJ Ess
I think I'm overlooking something simple, could someone spot check me?

What I want to do is to pool connections on my http backend - keep HAProxy
from opening a new connection to the same backend if there is an
established connection that is idle.

My haproxy version is 1.5.18

In my defaults section I have:

  timeout http-request 5s
  timeout http-keep-alive 75s

In my backend definition I have:

  mode http
  option http-keep-alive
  balance roundrobin

What I see in the packet capture is:

  The client makes an http/1.1 request w/ no Connection header (haproxy ->
backend)
  The server gives an an http/1.1 response with content-length and no
connection header (backend -> haproxy)
  5s later Haproxy closes the connection

There is more than enough traffic going through the backend that if a
connection is idle, there will be a request that could use it (within ms,
should never hit the 5s or 75s timeouts), however in every case the
connection just sits idle for five seconds then closes.

Am I missing something simple to enable this behavior?
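
For reference, the fix that came out of the replies was the http-reuse
directive introduced in 1.6; a minimal sketch of the backend with it
(server names illustrative):

  backend be_app
    mode http
    option http-keep-alive
    http-reuse safe   # let requests reuse idle server-side connections
    server app1 10.0.0.1:80
    server app2 10.0.0.2:80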


Re: HAProxy error log

2016-07-15 Thread CJ Ess
From the global section:

  log 127.0.0.1 local0
  log 127.0.0.1 local1 err

From the defaults section:

  log global
  option httplog
  option log-separate-errors

From the rsyslog config:

$Umask 
$FileCreateMode 0644
local1.* -/var/log/haproxy/haproxy_errors.log

No network capture available.


On Fri, Jul 15, 2016 at 3:45 PM, Cyril Bonté  wrote:

> On 15/07/2016 at 19:35, CJ Ess wrote:
>
>> I think I gave the relevant details but here is a sample (with hostname,
>> frontend name, backend name, server name, user agent, x-forwarded-for
>> chain, and url path changed). I have thousands of these (identical
>> frontends, backends, method, and url) to a pool of identical servers.
>> Each of these appears in the error log, however they all have in common
>> a 200 result code and -'s for state flags. I don't know why these would
>> appear in the error log.
>>
>> 2016-07-15T13:00:19-04:00 hostname haproxy[116593]: 127.0.0.1:62401
>> [15/Jul/2016:13:00:16.191] frontend_name
>> backend_name/server_name 790/0/953/1155/2898 200 135 - - ----
>> 17847/17845/5414/677/0 0/0 {user_agent|x-forwarded-for} "POST
>> /services/x HTTP/1.1"
>>
>
> Well this log line is a beginning, but some details are missing:
> - the haproxy configuration
> - the syslog server configuration/rules
> - a network capture showing that logs are sent at an error level
>
>
>
>> On Fri, Jul 15, 2016 at 1:24 PM, Cyril Bonté wrote:
>>
>> On 15/07/2016 at 17:46, CJ Ess wrote:
>>
>> I've got thousands of errors showing up in my haproxy error.log
>> but I'm
>> not sure why, the requests being logged there have a 200 result
>> code and
>> the session state flags are all -'s. However it's primarily
>> requests to a
>> particular backend being logged there. What can I do to diagnose?
>>
>>
>> At least, provide some details.
>>
>>
>>
>> My HAProxy version is 1.5.18
>>
>>
>>
>> --
>> Cyril Bonté
>>
>>
>>
>
> --
> Cyril Bonté
>


Re: HAProxy error log

2016-07-15 Thread CJ Ess
I think I gave the relevant details but here is a sample (with hostname,
frontend name, backend name, server name, user agent, x-forwarded-for
chain, and url path changed). I have thousands of these (identical
frontends, backends, method, and url) to a pool of identical servers. Each
of these appears in the error log, however they all have in common a 200
result code and -'s for state flags. I don't know why these would appear in
the error log.

2016-07-15T13:00:19-04:00 hostname haproxy[116593]: 127.0.0.1:62401
[15/Jul/2016:13:00:16.191] frontend_name backend_name/server_name
790/0/953/1155/2898 200 135 - - ---- 17847/17845/5414/677/0 0/0
{user_agent|x-forwarded-for} "POST /services/x HTTP/1.1"

On Fri, Jul 15, 2016 at 1:24 PM, Cyril Bonté  wrote:

> On 15/07/2016 at 17:46, CJ Ess wrote:
>
>> I've got thousands of errors showing up in my haproxy error.log but I'm
>> not sure why, the requests being logged there have a 200 result code and
>> the session state flags are all -'s. However it's primarily requests to a
>> particular backend being logged there. What can I do to diagnose?
>>
>
> At least, provide some details.
>
>
>
> My HAProxy version is 1.5.18
>>
>>
>
> --
> Cyril Bonté
>


HAProxy error log

2016-07-15 Thread CJ Ess
I've got thousands of errors showing up in my haproxy error.log but I'm not
sure why, the requests being logged there have a 200 result code and the
session state flags are all -'s. However it's primarily requests to a
particular backend being logged there. What can I do to diagnose?

My HAProxy version is 1.5.18


Re: compression with Transfer-Encoding: chunked

2016-06-30 Thread CJ Ess
Are they http/1.1 requests?

On Thu, Jun 30, 2016 at 11:58 AM, Richert, Tim  wrote:

> Hello there,
>
> I've been working with haproxy for some time now and it's doing a
> fantastic job! Thank you for all your development and all your efforts in
> this great piece of software!
>
> I am trying to implement compression and it's working just fine with fixed
> Content-Lengths.
> But it seems that compression isn't working when Transfer-Encoding is
> 'chunked'.
>
> Documentation indicates this as well:
> Compression is disabled when:
> [..]
>   * response header "Transfer-Encoding" contains "chunked" (Temporary
>     Workaround)
> [..]
>
> But the next condition is:
>   * response contain neither a "Content-Length" header nor a
> "Transfer-Encoding" whose last value is "chunked"
>
> If I understand that second part correctly, then compression should kick
> in.
>
> I couldn't find more details on the first part and why there is a
> workaround and if it's still in place.
> Changelog tells me:
> 2014/04/23 : 1.5-dev23
> - MAJOR: http: re-enable compression on chunked encoding
>
> I am currently using
> HA-Proxy version 1.6.4 2016/03/13
> Copyright 2000-2016 Willy Tarreau 
>
> Build options :
>   TARGET  = linux2628
>   CPU = generic
>   CC  = gcc
>   CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
>   OPTIONS = USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1
>
> Default settings :
>   maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
>
> Encrypted password support via crypt(3): yes
> Built with zlib version : 1.2.7
> Compression algorithms supported : identity("identity"),
> deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
> Built with OpenSSL version : OpenSSL 1.0.2h  3 May 2016
> Running on OpenSSL version : OpenSSL 1.0.2h  3 May 2016
> OpenSSL library supports TLS extensions : yes
> OpenSSL library supports SNI : yes
> OpenSSL library supports prefer-server-ciphers : yes
> Built with PCRE version : 8.32 2012-11-30
> PCRE library supports JIT : no (USE_PCRE_JIT not set)
> Built with Lua version : Lua 5.3.2
> Built with transparent proxy support using: IP_TRANSPARENT
> IPV6_TRANSPARENT IP_FREEBIND
>
> Available polling systems :
>   epoll : pref=300,  test result OK
>poll : pref=200,  test result OK
>  select : pref=150,  test result OK
> Total: 3 (3 usable), will use epoll.
>
> Should compression be working with Transfer-Encoding: chunked? If not: Is
> there some workaround I could use?
>
> Thank you in advance!
> Tim
>
>


Re: Healthchecks with many nbprocs

2016-06-20 Thread CJ Ess
We have pools of Haproxy talking to pools of Nginx servers with php-fpm
backends. We were seeing 50-60 health checks per second, all of which had
to be serviced by the php-fpm process and which almost always returned the
same result except for the rare memory or NIC failure. So we used
Nginx's cache feature with a 1 second TTL in front of our application's
health check endpoint so that the first request will actually hit the
backend and the other health check requests queue up behind the first
(fastcgi_cache_lock). We set a 250ms timeout on the lock so that health
checks don't queue forever (fastcgi_cache_lock_timeout).
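
A minimal sketch of that nginx side (zone name, paths, and the php-fpm
socket are illustrative, not from our actual config):

  # at http{} level:
  fastcgi_cache_path /var/cache/nginx/health levels=1 keys_zone=health:1m;

  # in the server{} that fronts php-fpm:
  location = /health {
    fastcgi_cache health;
    fastcgi_cache_key "health";              # one shared entry for all checks
    fastcgi_cache_valid 200 1s;              # serve the cached result for 1s
    fastcgi_cache_lock on;                   # checks queue behind the first miss
    fastcgi_cache_lock_timeout 250ms;        # but never queue forever
    fastcgi_pass unix:/var/run/php-fpm.sock;
    include fastcgi_params;
  }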

On Mon, Jun 20, 2016 at 7:44 AM, Daniel Ylitalo 
wrote:

> Hi!
>
> I haven't found anything about this topic anywhere so I was hoping someone
> in the mailinglist has done this in the past :)
>
> We are at the size where we need to round-robin tcp balance our incoming
> web traffic with pf to two haproxy servers both running with nbproc 28 for
> http load balancing, however, this leads to 56 healthchecks being done each
> second against our web nodes which hammers them quite hard.
>
> How exactly are you guys solving this issue? Because at this size, the
> healthchecks kind of starts eating more cpu than they are helpful.
>
> --
> Daniel Ylitalo
> System & Network manager
>
> about.mytaste.com
>
>
>
> "Experience is something you earn just right after you screwed up and were
> really in need of it"
>
>


Re: HTTP Keep Alive : Limit number of sessions in a connection

2016-06-08 Thread CJ Ess
I personally don't have a need to limit requests on the haproxy side at the
moment; I just thought I'd try to help Manas make his case. He's basically
saying that he wants the option to close the client connection after the
nth request, and that seems pretty reasonable to me. Maybe it would help him
with DDOS or to manage the number of ports used by the server. If one
server becomes particularly loaded, then forcing the clients to reconnect
gives his load balancer an opportunity to move load around to less utilized
servers. I'm just speculating.


Re: HTTP Keep Alive : Limit number of sessions in a connection

2016-06-08 Thread CJ Ess
I can only speak for 1.5.x but when haproxy issues an error (not to be
confused with passing through an error from the upstream, but haproxy
itself issuing the error due to acl rules or whatever) it just sends the
error file (or the built-in error text) as a blob and closes the
connection. In my case I have nginx in front of haproxy and it rewrites the
error response adding a content-length header and changing the connection:
close header to connection: keep-alive so that the client doesn't have its
connection closed, but the next request from nginx to haproxy will either
be routed through another idle connection or a new connection to haproxy
will be made.

With the graceful stop I believe we're waiting for the clients to stop
sending us traffic and go away - which most of the time they do in
seconds-minutes. I have a lot of bot activity so I generally only get ~50
requests per connection before I deny something and that closes the
connection. Though I have some servers that emit continuous streams of
data, and doing a graceful restart or shutdown of them basically never
really succeeds because it can take months for the clients to be
interrupted and close their connections. For those servers I have to
specifically chop the connections to force the old haproxy processes to die
off and the clients to reconnect to the new ones.

On Wed, Jun 8, 2016 at 3:13 PM, Lukas Tribus  wrote:

> Hi,
>
>
> Am 08.06.2016 um 20:51 schrieb CJ Ess:
>
>> I'm terminating connections with nginx, then I have a pool of upstream
>> connections from nginx to haproxy where I allow unlimited keep-alive
>> requests between nginx and haproxy per connection. The only times the
>> connections close is when haproxy sends an error response, because it
>> always closes the connection (I don't know why, just because I get a
>> non-2xx/3xx response it doesn't mean the connection as a whole is bad).
>>
>
> Does this happen in haproxy 1.6.3+ or 1.5.16+ as well?
>
>
> If I had haproxy terminating the connections directly then I would like a
>> graceful way to bring those conversations to an end, even if its just
>> waiting for the existing connections to time out or max out the number of
>> requests.
>>
>
> Why would a graceful stop not work for that use case? It covers this exact
> use case and is way more reliable than some max amount of time or number of
> request.
>
>
>
> Lukas
>


Re: HTTP Keep Alive : Limit number of sessions in a connection

2016-06-08 Thread CJ Ess
Nginx for instance allows you to limit the number of keep-alive requests
that a client can send on an existing connection - after which the client
connection is closed.
http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests
Apache has something similar
http://httpd.apache.org/docs/2.4/mod/core.html#maxkeepaliverequests.
Just pointing these out so you can see that it's a common feature.

I'm terminating connections with nginx, then I have a pool of upstream
connections from nginx to haproxy where I allow unlimited keep-alive
requests between nginx and haproxy per connection. The only times the
connections close is when haproxy sends an error response, because it
always closes the connection (I don't know why, just because I get a
non-2xx/3xx response it doesn't mean the connection as a whole is bad). If
I had haproxy terminating the connections directly then I would like a
graceful way to bring those conversations to an end, even if its just
waiting for the existing connections to time out or max out the number of
requests.





On Tue, Jun 7, 2016 at 3:45 PM, Lukas Tribus  wrote:

> Am 07.06.2016 um 21:32 schrieb Manas Gupta:
>
>> Hi Lukas,
>> My understanding was that soft-stop will cater to new connections.
>>
>
> That would mean soft stopping doesn't have any effect at all, basically.
>
> No, that's not the case, but either way your hardware load balancer
> would've already stopped sending you new connections, isn't that correct?
>
>
>
> I am looking for a way to gracefully close current/established
>> keep-alive connections after a certain number of sessions have been
>> served by issuing a FIN or HTTP Header Connection:close
>>
>
> You mean after a certain number of *requests* have been served; no, that
> is not supported and it would be a lot less reliable than the proposed
> solution.
>
>
>
> Lukas
>
>


Re: haproxy and pcre

2016-04-29 Thread CJ Ess
Thank you! I think you just made the case for me. =)


On Fri, Apr 29, 2016 at 1:45 AM, Willy Tarreau  wrote:

> On Thu, Apr 28, 2016 at 02:36:44PM -0400, CJ Ess wrote:
> > I'm wanting to make a case for compiling haproxy against a modern version
> > of pcre (supporting jit) when we roll out the next release to my day job.
> > Anyone have some numbers handy that show the benefit of doing so?
>
> Please check these two commits, the first one links to a benchmark which
> led to implement JIT, and the second one reports numbers with and without
> pcre_study() :
>
>7035132 ("MEDIUM: regex: Use PCRE JIT in acl")
>de89871 ("MEDIUM: regex: Use pcre_study always when PCRE is used,
> regardless of JIT")
>
> Willy
>
>


haproxy and pcre

2016-04-28 Thread CJ Ess
I'm wanting to make a case for compiling haproxy against a modern version
of pcre (supporting jit) when we roll out the next release to my day job.
Anyone have some numbers handy that show the benefit of doing so?
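
A minimal sketch of such a build, assuming the 1.5/1.6-era Makefile (the
PCRE path is illustrative; USE_PCRE_JIT and PCREDIR are real Makefile
variables):

  make TARGET=linux2628 USE_OPENSSL=1 USE_ZLIB=1 \
       USE_PCRE=1 USE_PCRE_JIT=1 PCREDIR=/opt/pcre-8.38

  # verify the result:
  ./haproxy -vv | grep -i jit   # expect "PCRE library supports JIT : yes"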


Re: Maybe I've found an haproxy bug?

2016-04-26 Thread CJ Ess
That sounds like the issue exactly, the solution seems to be to upgrade.
Thanks for the pointer!

On Tue, Apr 26, 2016 at 6:12 PM, Cyril Bonté  wrote:

> Hi,
>
>
> On 26/04/2016 23:41, CJ Ess wrote:
>
>> Maybe I've found an haproxy bug? I am wondering if anyone else can
>> reproduce this -
>>
>> You'll need to send two requests w/ keep-alive:
>>
>> curl -v -v -v http://127.0.0.1/something http://127.0.0.1/
>>
>> On my system the first request returns a 404 error (but I've also seen
>> this with 200 responses - the 404 was highly customized with a chunked
>> response body, and the 200 also had a chunked response body, but I don't
>> know that the chunked encoding is relevant or not), and the second
>> request returns a 504 error (gateway timeout) - in this case haproxy is
>> timing out the connection after 15 seconds.
>>
>> When you run curl the first request will happen just fine (you'll get
>> the 404 response) and the second request will time out, at which point
>> the connection will close with no response of any sort.
>>
>> (curl tries to be smart and will resend the request after the connection
>> closes, but it does note the connection dies)
>>
>> I'm using Haproxy 1.5.12 and can reproduce this at will.
>>
>
> I'm not sure I follow your explanation (lacking details), but it looks like
> a behaviour that has already been modified in 1.5.16 with this commit :
>
> http://www.haproxy.org/git?p=haproxy-1.5.git;a=commit;h=ef8a113d59e89b2214adf7ab9f9b0b75905a7050
>
> Please upgrade and retry.
>
> --
> Cyril Bonté
>


Maybe I've found an haproxy bug?

2016-04-26 Thread CJ Ess
Maybe I've found an haproxy bug? I am wondering if anyone else can
reproduce this -

You'll need to send two requests w/ keep-alive:

curl -v -v -v  http://127.0.0.1/something http://127.0.0.1/

On my system the first request returns a 404 error (but I've also seen this
with 200 responses - the 404 was highly customized with a chunked response
body, and the 200 also had a chunked response body, but I don't know whether
the chunked encoding is relevant or not), and the second request returns a
504 error (gateway timeout) - in this case haproxy is timing out the
connection after 15 seconds.

When you run curl the first request will happen just fine (you'll get the
404 response) and the second request will time out, at which point the
connection will close with no response of any sort.

(curl tries to be smart and will resend the request after the connection
closes, but it does note the connection dies)

I'm using Haproxy 1.5.12 and can reproduce this at will.


Re: HAProxy rejecting requests w/ extended characters in their URLs as bad

2016-04-19 Thread CJ Ess
That will work for now, but in the future it would be nice to have an option
to allow non-control utf-8 characters in the URI without enabling all of the
other stuff.
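
A minimal sketch of the workaround suggested below (frontend name
illustrative); note the option relaxes more than just the URI byte checks,
so scope it to the frontends that need it:

  frontend fe_web
    mode http
    # accept requests with raw 8-bit (e.g. UTF-8) bytes in the URI
    option accept-invalid-http-request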


On Mon, Apr 18, 2016 at 4:59 PM, PiBa-NL  wrote:

> On 18-4-2016 at 22:47 CJ Ess wrote:
>
> This is using HAProxy 1.5.12 - I've noticed an issue where HAProxy is
>> sometimes rejecting requests with a 400 code when the URL string contains
>> extended characters. Nginx is fronting HAProxy and has passed them through
>> as valid requests, and just eyeballing them they look ok to me.
>>
>> An example is a German URL with 0xc3 0x95 contained in the URL
>>
>> A second example is a Latin URL with 0xc3 0xa7 contained in the URL
>>
>> A third example is an Asian URL with 0xe6 0xac 0xa1 0xe3 contained in the
>> URL (and many more so I may or may not have complete characters in the
>> example)
>>
>> I don't know the encoding these characters are part of, there are no
>> hints in the other headers.
>>
>> Any idea what I can do to have haproxy accept these?
>>
> Have you tried:
> http://cbonte.github.io/haproxy-dconv/snapshot/configuration-1.6.html#4.2-option%20accept-invalid-http-request
> Though technically the requests are invalid, and should be fixed/avoided
> if possible
>


HAProxy rejecting requests w/ extended characters in their URLs as bad

2016-04-18 Thread CJ Ess
This is using HAProxy 1.5.12 - I've noticed an issue where HAProxy is
sometimes rejecting requests with a 400 code when the URL string contains
extended characters. Nginx is fronting HAProxy and has passed them through
as valid requests, and just eyeballing them they look ok to me.

An example is a German URL with 0xc3 0x95 contained in the URL

A second example is a Latin URL with 0xc3 0xa7 contained in the URL

A third example is an Asian URL with 0xe6 0xac 0xa1 0xe3 contained in the
URL (and many more so I may or may not have complete characters in the
example)

I don't know the encoding these characters are part of, there are no hints
in the other headers.

Any idea what I can do to have haproxy accept these?


Re: KA-BOOM! Hit MaxConn despite higher setting in config file

2016-04-04 Thread CJ Ess
Funny you should mention that - I pushed out the revised config and
immediately got warnings about session usage from our monitoring. Turns out
you need maxconn defined in global for the hard limit and in defaults for
the soft limit. I'm still not completely clear why the global maxconn is
different from the default maxconn - I almost think it would make more
sense to have different keywords. But I'll write it off as a learning
experience in our transition to using keepalives.
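
A minimal sketch of the distinction as I now understand it (numbers
illustrative):

  global
    maxconn 65535    # hard per-process limit; also drives ulimit-n/maxsock

  defaults
    maxconn 65000    # soft per-frontend default; keep it under the global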


On Mon, Apr 4, 2016 at 1:44 PM, Cyril Bonté  wrote:

> Hi,
>
> On 04/04/2016 19:14, CJ Ess wrote:
>
>> Moving the setting to global worked perfectly AND it upped the ulimit-n
>> to a more appropriate value:
>>
>
> I feel uncomfortable with the "Moving the setting" part.
> Did you really MOVE the maxconn declaration from defaults (or
> listen/frontend) to the global section ? Or did you ADD one to the global
> section ?
>
> This is important, as the effect is not the same at all ;-)
>
>
>> ...
>> Ulimit-n: 131351
>> Maxsock: 131351
>> Maxconn: 65535
>> Hard_maxconn: 65535
>> ...
>>
>> So we'll write this down as a learning experience. We recently
>> transitioned from doing one request per connection to using keep-alives
>> to the fullest, so I suspect that we've always had this problem but just
>> never saw it because our connections turned over so quickly.
>>
>>
>> On Sun, Apr 3, 2016 at 3:59 AM, Baptiste wrote:
>>
>>
>>  > On 3 Apr 2016 03:45, "CJ Ess" wrote:
>>  >
>>  > Oops, that is important - I have both the maxconn and fullconn
>> settings in the defaults section.
>>  >
>>  >> On Sat, Apr 2, 2016 at 4:37 PM, PiBa-NL wrote:
>>  >>
>>  >> On 2-4-2016 at 22:32 CJ Ess wrote:
>>  >>>
>>  >>> So in my config file I have:
>>  >>>
>>  >>> maxconn 65535
>>  >>
>>  >> Where do you have that maxconn setting? In frontend , global, or
>> both.?
>>  >>
>>  >>> fullconn 64511
>>  >>>
>>  >>> However, "show info" still has a maxconn 2000 limit and that
>> caused a blow up because I exceeded the limit =(
>>  >>>
>>  >>> So my questions are 1)  is there a way to raise maxconn without
>> restarting haproxy with the -P parameter (can I add -P when I do a
>> reload?) 2) Are there any other related gotchas I need to take care
>> of?
>>  >>>
>>  >>> I notice that ulimit-n and maxsock both show 4495 despite
>> "ulimit -n" for the user showing 65536 (which is probably half of
>> what I really want since each "session" is going to consume two
>> sockets)
>>  >>>
>>  >>> I'm using haproxy 1.5.12
>>  >>>
>>  >>
>>  >
>>
>> So add a maxconn in your global section.
>> Your process is limited by default to 2000 connections forwarded.
>>
>> Baptiste
>>
>>
>>
>
> --
> Cyril Bonté
>


Re: KA-BOOM! Hit MaxConn despite higher setting in config file

2016-04-04 Thread CJ Ess
Moving the setting to global worked perfectly AND it upped the ulimit-n to
a more appropriate value:

...
Ulimit-n: 131351
Maxsock: 131351
Maxconn: 65535
Hard_maxconn: 65535
...

So we'll write this down as a learning experience. We recently transitioned
from doing one request per connection to using keep-alives to the fullest,
so I suspect that we've always had this problem but just never saw it
because our connections turned over so quickly.


On Sun, Apr 3, 2016 at 3:59 AM, Baptiste  wrote:

>
> On 3 Apr 2016 03:45, "CJ Ess" wrote:
> >
> > Oops, that is important - I have both the maxconn and fullconn settings
> in the defaults section.
> >
> > On Sat, Apr 2, 2016 at 4:37 PM, PiBa-NL  wrote:
> >>
> >> On 2-4-2016 at 22:32 CJ Ess wrote:
> >>>
> >>> So in my config file I have:
> >>>
> >>> maxconn 65535
> >>
> >> Where do you have that maxconn setting? In frontend , global, or both.?
> >>
> >>> fullconn 64511
> >>>
> >>> However, "show info" still has a maxconn 2000 limit and that caused a
> blow up because I exceeded the limit =(
> >>>
> >>> So my questions are 1)  is there a way to raise maxconn without
> restarting haproxy with the -P parameter (can I add -P when I do a reload?)
> 2) Are there any other related gotchas I need to take care of?
> >>>
> >>> I notice that ulimit-n and maxsock both show 4495 despite "ulimit -n"
> for the user showing 65536 (which is probably half of what I really want
> since each "session" is going to consume two sockets)
> >>>
> >>> I'm using haproxy 1.5.12
> >>>
> >>
> >
>
> So add a maxconn in your global section.
> Your process is limited by default to 2000 connections forwarded.
>
> Baptiste
>


Re: KA-BOOM! Hit MaxConn despite higher setting in config file

2016-04-02 Thread CJ Ess
I'm on Linux so I think that /etc/security/limits.d
and /etc/security/limits.conf are where I would change the default settings
for a user - however the ulimit-n setting in haproxy is a fraction of what
the user's current ulimit -n is, and I'm not sure why.


On Sat, Apr 2, 2016 at 4:46 PM, PiBa-NL  wrote:

> On 2-4-2016 at 22:32 CJ Ess wrote:
>
> So in my config file I have:
>
> maxconn 65535
> fullconn 64511
>
> However, "show info" still has a maxconn 2000 limit and that caused a blow
> up because I exceeded the limit =(
>
> So my questions are 1)  is there a way to raise maxconn without restarting
> haproxy with the -P parameter (can I add -P when I do a reload?) 2) Are
> there any other related gotchas I need to take care of?
>
> I notice that ulimit-n and maxsock both show 4495 despite "ulimit -n" for
> the user showing 65536 (which is probably half of what I really want since
> each "session" is going to consume two sockets)
>
> As for ulimit-n, on FreeBSD I need to set these two system flags:
> kern.maxfiles and kern.maxfilesperproc. What OS are you using?
>
>
> I'm using haproxy 1.5.12
>
>
>


Re: KA-BOOM! Hit MaxConn despite higher setting in config file

2016-04-02 Thread CJ Ess
Oops, that is important - I have both the maxconn and fullconn settings in
the defaults section.

On Sat, Apr 2, 2016 at 4:37 PM, PiBa-NL  wrote:

> On 2-4-2016 at 22:32 CJ Ess wrote:
>
>> So in my config file I have:
>>
>> maxconn 65535
>>
> Where do you have that maxconn setting? In frontend , global, or both.?
>
> fullconn 64511
>>
>> However, "show info" still has a maxconn 2000 limit and that caused a
>> blow up because I exceeded the limit =(
>>
>> So my questions are 1)  is there a way to raise maxconn without
>> restarting haproxy with the -P parameter (can I add -P when I do a reload?)
>> 2) Are there any other related gotchas I need to take care of?
>>
>> I notice that ulimit-n and maxsock both show 4495 despite "ulimit -n" for
>> the user showing 65536 (which is probably half of what I really want since
>> each "session" is going to consume two sockets)
>>
>> I'm using haproxy 1.5.12
>>
>>
>


KA-BOOM! Hit MaxConn despite higher setting in config file

2016-04-02 Thread CJ Ess
So in my config file I have:

maxconn 65535
fullconn 64511

However, "show info" still has a maxconn 2000 limit and that caused a blow
up because I exceeded the limit =(

So my questions are 1)  is there a way to raise maxconn without restarting
haproxy with the -P parameter (can I add -P when I do a reload?) 2) Are
there any other related gotchas I need to take care of?

I notice that ulimit-n and maxsock both show 4495 despite "ulimit -n" for
the user showing 65536 (which is probably half of what I really want since
each "session" is going to consume two sockets)

I'm using haproxy 1.5.12


HAProxy keepalives and max-keep-alive-queue

2016-03-19 Thread CJ Ess
So at long last, I'm getting to use keep-alives with HAProxy!

I'm terminating http/ssl/spdy with Nginx and then passing the connections
to HAProxy via an upstream pool. I've verified by packet capture that
connection reuse between clients, Nginx, and HAProxy is occurring.

So I'd like to keep the connection between Nginx and Haproxy alive for a
while, so I've set "timeout http-keep-alive" to a high value, is there
anything else I should do?

As I propagate keep-alives deeper into our stack (we've always closed every
connection after every request, so this is new territory for us) I would
like to reuse existing sessions to the backend servers if they are
available (idle), but I don't want to wait for an existing session to
become available. It looks like I would need to tweak the
"max-keep-alive-queue" value to achieve this, but I don't see a lot of
information about it - can anyone advise me what I should do here?

If it helps, the backend servers are relatively close to the haproxy
servers; let's say for argument that the penalty for opening a new
connection over reusing an existing one is 1ms - and that eliminating the
1ms penalty would reduce my total request latency by 50%, so it's worth it.
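
A minimal sketch of what I have in mind, assuming 1.5's backend keyword
and my reading that a value of 0 means "never wait for a busy keep-alive
connection" (treat that as an assumption to verify against the docs):

  backend be_app
    mode http
    option http-keep-alive
    max-keep-alive-queue 0   # rebalance instead of queueing on a busy conn
    server app1 10.0.0.1:80 maxconn 200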


Re: Keep-alive causing latency spike

2016-02-27 Thread CJ Ess
The HAProxy docs say:

- "Tt" is the total time in milliseconds elapsed between the accept and the
last close.

So I could see that not being what I want. If Tt is the total time between
accept() and close() for the frontend connection, then the question becomes
what is the correct way to calculate the latency of the individual requests
sent through the connection? Tw+Tc+Tr? I think I'd want to avoid Tq, but I
could also see it being significant for POST requests, maybe others?


On Sat, Feb 27, 2016 at 3:40 PM, Skarbek, John  wrote:

> On February 27, 2016 at 15:26:31, CJ Ess (zxcvbn4...@gmail.com) wrote:
>
> Hey folks, I could use some help figuring this one out. My environment
> looks like this:
>
> (client) <-> (nginx) <-> (haproxy 1.5) <-> (backend server pools all with
> (nginx -> phpfpm))
>
> Where the client is a browser or bot, nginx is terminating
> http/https/spdy, haproxy has the business and routing logic, and the
> backend pools are running nginx+php-fpm.
>
> Traditionally we have closed the connection after every request (a)
> between the client and nginx (b) between nginx and haproxy and (c) between
> haproxy and the backend servers.
>
> Some of it was unintentional, some of it was intentionally working around
> issues with php-fpm servicing multiple requests from the same connection
> (which I assume is some sort of application programming issue).
>
> So I made changes to enable keep-alive connections between the client and
> nginx - no problems.
>
> Then I made changes to enable keep-alive connections between nginx and
> haproxy, and I've got problems. I'm seeing a 25% increase in latency,
> whereas I expected no change or a slight decrease. So either something is going
> on that I don't understand or I'm not measuring the latency right and
> haven't noticed before because all the connection closing hid the issue.
>
> The way I am monitoring the request latency is by averaging the Tt field
> from the haproxy logs by second.
>
> Does `Tt` include the time it takes for the session to close?  If that’s
> the case and you enabled keepalives, I would think that number would
> naturally increase.  My theory would be that you are using the wrong values
> to measure latency.
>
>
>
> My HAProxy config looks like this:
>
> global
>   daemon
>   maxconn 81920
>
> defaults
>   log global
>   timeout http-request 5s
>   timeout client 15s
>   timeout server 15s
>   timeout connect 4s
>   option forwardfor except 127.0.0.1
>   option httplog
>   option redispatch
>   option log-separate-errors
>   retries 2
>
> frontend myfrontend
>   bind 127.0.0.1:8080 defer-accept
>   backlog 65536
>   mode http
>   option http-keep-alive
>   log 127.0.0.1 local0
>   log 127.0.0.1 local1 err
>
>   default_backend mypool
>
> backend mypool
>   mode http
>   option http-server-close
>   balance roundrobin
>
>
> My NGinx config looks like this:
>
> ...
> upstream haproxy {
> server 127.0.0.1:8080;
> keepalive 1; # also tried, 16, 32, 256, saw the same latency spike
> with all
>  }
> ...
> location / {
>proxy_http_version 1.1;
>proxy_set_header Connection "";
>proxy_set_header Host $host;
>proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
>proxy_buffering off;
>    proxy_pass http://haproxy;
>  }
>
>
> Any help or advice appreciated!
>
>
>


Keep-alive causing latency spike

2016-02-27 Thread CJ Ess
Hey folks, I could use some help figuring this one out. My environment
looks like this:

(client) <-> (nginx) <-> (haproxy 1.5) <-> (backend server pools all with
(nginx -> phpfpm))

Where the client is a browser or bot, nginx is terminating http/https/spdy,
haproxy has the business and routing logic, and the backend pools are
running nginx+php-fpm.

Traditionally we have closed the connection after every request (a) between
the client and nginx (b) between nginx and haproxy and (c) between haproxy
and the backend servers.

Some of it was unintentional, some of it was intentionally working around
issues with php-fpm servicing multiple requests from the same connection
(which I assume is some sort of application programming issue).

So I made changes to enable keep-alive connections between the client and
nginx - no problems.

Then I made changes to enable keep-alive connections between nginx and
haproxy, and I've got problems. I'm seeing a 25% increase in latency,
whereas I expected no change or a slight decrease. So either something is going
on that I don't understand or I'm not measuring the latency right and
haven't noticed before because all the connection closing hid the issue.

The way I am monitoring the request latency is by averaging the Tt field
from the haproxy logs by second.

My HAProxy config looks like this:

global
  daemon
  maxconn 81920

defaults
  log global
  timeout http-request 5s
  timeout client 15s
  timeout server 15s
  timeout connect 4s
  option forwardfor except 127.0.0.1
  option httplog
  option redispatch
  option log-separate-errors
  retries 2

frontend myfrontend
  bind 127.0.0.1:8080 defer-accept
  backlog 65536
  mode http
  option http-keep-alive
  log 127.0.0.1 local0
  log 127.0.0.1 local1 err

  default_backend mypool

backend mypool
  mode http
  option http-server-close
  balance roundrobin


My NGinx config looks like this:

...
upstream haproxy {
server 127.0.0.1:8080;
keepalive 1; # also tried, 16, 32, 256, saw the same latency spike with
all
 }
...
location / {
   proxy_http_version 1.1;
   proxy_set_header Connection "";
   proxy_set_header Host $host;
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   proxy_buffering off;
   proxy_pass http://haproxy;
 }


Any help or advice appreciated!


Migrating haproxy 1.5 monitor-uri

2016-02-24 Thread CJ Ess
I need to migrate to a different URL for our haproxy health checks, and it
would be really helpful if I could respond to multiple URLs as part of the
transition, or could create an empty 200 response.
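
A minimal sketch of the classic 1.5-era trick for this - a server-less
backend whose 503 error page is replaced by a hand-written 200 response
(paths and ACL names illustrative):

  frontend fe_web
    acl is_health path /old-health /new-health
    use_backend be_health if is_health

  backend be_health
    # no servers: every request falls through to the 503 error handling,
    # which we override with a file containing a literal 200 response
    errorfile 503 /etc/haproxy/health-200.http

where /etc/haproxy/health-200.http contains:

  HTTP/1.0 200 OK
  Content-Type: text/plain
  Connection: close

  OK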


Round robin w/o closing backend connections?

2016-02-16 Thread CJ Ess
So let's say that I don't want HAProxy to close the connections to my
backend servers - they can stay active and be available for keepalives -
but I do want every request from the frontend to go to a different backend
via round robin. The idea being that it keeps one frontend connection from
monopolizing a single backend while not forcing me to close the backend
connection after every request. Is there a way I can do that with any
version of HAProxy?
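
This looks like what the http-reuse directive added in 1.6 is for; a
minimal sketch, assuming 1.6+ (server names illustrative):

  backend be_app
    mode http
    balance roundrobin
    # decouple frontend and server connections: each request is balanced
    # independently and may reuse any idle server-side connection
    http-reuse aggressive
    server app1 10.0.0.1:80
    server app2 10.0.0.2:80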


Re: Reloading haproxy without dropping connections

2016-01-22 Thread CJ Ess
The yelp solution I can't do because it requires a newer kernel than I have
access to, but the unbounce solution is interesting, I may be able to work
up something around that.
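
A minimal sketch of the SYN-drop fencing idea from the unbounce post,
relying on clients retransmitting dropped SYNs (port and sleep are
illustrative):

  iptables -I INPUT -p tcp --dport 80 --syn -j DROP   # new SYNs wait in retransmit
  sleep 0.2
  haproxy -f /etc/haproxy/haproxy.cfg -sf $(cat /var/run/haproxy.pid)
  iptables -D INPUT -p tcp --dport 80 --syn -j DROP   # retransmits reach the new process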



On Fri, Jan 22, 2016 at 4:07 AM, Pedro Mata-Mouros  wrote:

> Hi,
>
> Haven’t had the chance to implement this yet, but maybe these links can
> get you started:
>
>
> http://engineeringblog.yelp.com/2015/04/true-zero-downtime-haproxy-reloads.html
> http://inside.unbounce.com/product-dev/haproxy-reloads/
>
> It’d be cool to have a sort of “officially endorsed” way of achieving this.
>
> Best,
>
> Pedro.
>
>
>
> On 22 Jan 2016, at 00:38, CJ Ess  wrote:
>
> One of our sore points with HAProxy has been that when we do a reload
> there is a ~100ms gap where neither the old nor the new HAProxy processes accept
> any requests. See attached graphs. I assume that during this time any
> connections received to the port are dropped. Is there anything we can do
> so that the old process keeps accepting requests until the new process is
> completely initialized and starts accepting connections on its own?
>
> I've looked into fencing the restart with iptable commands to blackhole
> TCP SYNs, and I've looked into the huptime utility though I'm not sure
> overloading libc functions is the best approach long term. Any other
> solutions?
>
>
>
>
>
>


Re: Looking for stick table example

2016-01-19 Thread CJ Ess
Thanks to both of you for the example and the pointer about the error file!
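
Putting the two suggestions together, a minimal sketch (the threshold,
error file path, and server are illustrative):

  backend be_perflog
    stick-table type integer size 1 expire 1m store http_req_rate(1s)
    tcp-request content track-sc0 be_id()
    acl enough_log_data sc_http_req_rate(0) gt 100
    # deny sends a 403 by default, so ship an error file whose
    # status line actually says 429 Too Many Requests
    errorfile 403 /etc/haproxy/errors/429.http
    http-request deny if enough_log_data
    server logger1 10.0.0.20:80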


On Tue, Jan 19, 2016 at 6:41 PM, Baptiste  wrote:

> To return the 429 response code, use the "errorfile 403" directive and
> replace the 403 code in the file by 429.
>
> Baptiste
>
>
>
> On Wed, Jan 20, 2016 at 12:30 AM, Chad Lavoie  wrote:
> > Greetings,
> >
> > I'd use the following four lines in a backend:
> > stick-table type integer size 1 expire 1m store http_req_rate(1s)
> > tcp-request content track-sc0 be_id()
> > acl enough_log_data sc_http_req_rate(0) gt 2
> > http-request deny if enough_log_data
> >
> > If you want to use that in a frontend instead just replace the be_id with
> > fe_id.  That function doesn't really serve any purpose other than to
> > return a static value to be stored in the stick table.
> >
> > That returns a 403 error when the limit is exceeded... I don't think
> > there is a good way to return a 429 response without making it
> > substantially more complicated.
> >
> > - Chad
> >
> >
> > On 01/19/2016 05:47 PM, CJ Ess wrote:
> >>
> >> I'm looking to limit requests per second on a per-backend basis (not per
> >> IP or per url, just per second).
> >>
> >> The backend itself just forwards requests w/ samples of performance data
> >> to a logging backend - beyond X per second we have all the samples we
> need
> >> and can discard the rest (pref HTTP code 429) so as not to overload the
> >> logger process.
> >>
> >> Anyone have a quick example how to do that?
> >>
> >>
> >
> >
>


Looking for stick table example

2016-01-19 Thread CJ Ess
I'm looking to limit requests per second on a per-backend basis (not per IP
or per url, just per second).

The backend itself just forwards requests w/ samples of performance data to
a logging backend - beyond X per second we have all the samples we need and
can discard the rest (pref HTTP code 429) so as not to overload the logger
process.

Anyone have a quick example how to do that?


Re: Issue with http-response add-header and ACLs

2015-10-01 Thread CJ Ess
Cyril, that makes perfect sense but I wouldn't have thought of it. Thank
you for pointing me in the right direction!


On Thu, Oct 1, 2015 at 4:39 PM, Cyril Bonté  wrote:

> Hi,
>
> On 01/10/2015 20:56, CJ Ess wrote:
>
>> So I am trying to set some new rules - since I don't have anything handy
>> to echo requests back to me, I'm using http-response add-header so I can
>> verify my rules work with curl.
>>
>> Added to haproxy.cfg:
>>
>> acl test_origin  hdr(X-TEST-IP) -m ip -f /etc/haproxy/acl/test.acl
>> http-response add-header X-Test test
>> http-response add-header X-Test internal if test_origin
>> #http-request deny if test_origin
>> Added to /etc/haproxy/acl/test.acl
>>
>> 127.0.0.3
>>
>> I expect that when I do: curl -vvv -H "X-TEST-IP: 127.0.0.3"
>> http://127.0.0.1:4089/
>>
>> That I would get a response that included two X-Test headers - however I
>> am only seeing the first one. "X-Test: test".
>>
>> If I uncomment the "deny" rule then the request will be denied, so I
>> believe the acl is working.
>>
>> If I change the "if test_origin" to "if !test_origin" then I'll see the
>> second header, so I think the if is being parsed at least.
>>
>
> You're trying to apply an acl on a request header during the response
> processing, hence such header is not available anymore in the buffer.
>
> You should look at the warning during haproxy init, you'll probably have :
> "acl 'test_origin' will never match because it only involves keywords that
> are incompatible with 'backend http-response header rule'"
>
> With the 1.6 dev branch, you can use variables to store the request value
> in the session :
>   http-request set-var(sess.X_TEST_IP) hdr(X-TEST-IP)
>   acl test_origin var(sess.X_TEST_IP) -m ip -f /etc/haproxy/acl/test.acl
>
> During the request processing, the header is stored at the session scope,
> which will be available during the response processing.
>
>
> --
> Cyril Bonté
>


Issue with http-response add-header and ACLs

2015-10-01 Thread CJ Ess
So I am trying to set some new rules - since I don't have anything handy to
echo requests back to me, I'm using http-response add-header so I can
verify my rules work with curl.

Added to haproxy.cfg:

acl test_origin  hdr(X-TEST-IP) -m ip -f /etc/haproxy/acl/test.acl
http-response add-header X-Test test
http-response add-header X-Test internal if test_origin
#http-request deny if test_origin

Added to /etc/haproxy/acl/test.acl

127.0.0.3

I expect that when I do: curl -vvv -H "X-TEST-IP: 127.0.0.3"
http://127.0.0.1:4089/

That I would get a response that included two X-Test headers - however I am
only seeing the first one. "X-Test: test".

If I uncomment the "deny" rule then the request will be denied, so I
believe the acl is working.

If I change the "if test_origin" to "if !test_origin" then I'll see the
second header, so I think the if is being parsed at least.

However I don't know why I'm not seeing the header in the case above.


Re: Frontend closes w/ chunked encoding?

2015-09-17 Thread CJ Ess
Update: Someone pointed out to me that the requests to haproxy are forced
to HTTP/1.0, but the response is HTTP/1.1 w/ chunked encoding. So the
question now is whether haproxy will accept the chunked encoding to keep the
frontend connection alive when there isn't a content-length header, and
whether it will still do that if the client request on the frontend is
http/1.0 and the server response is http/1.1.


On Thu, Sep 17, 2015 at 4:43 PM, CJ Ess  wrote:

> We've noticed that our front-end connections to haproxy are closing after
> talking to a backend running php-fpm. The php-fpm backend is not sending a
> content-length header, but is using chunked encoding which encodes lengths
> of the chunks and should be enough to keep the connection alive for another
> request. How does HAProxy handle this situation?
>
>
>
>


Frontend closes w/ chunked encoding?

2015-09-17 Thread CJ Ess
We've noticed that our front-end connections to haproxy are closing after
talking to a backend running php-fpm. The php-fpm backend is not sending a
content-length header, but is using chunked encoding which encodes lengths
of the chunks and should be enough to keep the connection alive for another
request. How does HAProxy handle this situation?


Re: IP address ACLs

2015-08-16 Thread CJ Ess
Sounds good. If I use the external file, will HAProxy reload it if the
modification timestamp changes? Or do I need to explicitly send a reload
signal?
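
For reference, a minimal sketch of the runtime alternative - as far as I
can tell the file is only read at startup, but entries can be changed live
over the stats socket (paths illustrative; add/del acl exist since 1.5):

  # haproxy.cfg loads the file once at start:
  acl blocked_ips src -f /etc/haproxy/acl/blocked.acl
  http-request deny if blocked_ips

  # update the in-memory ACL without a reload (not written back to the file):
  echo "add acl /etc/haproxy/acl/blocked.acl 203.0.113.7" | socat stdio /var/run/haproxy.sock
  echo "del acl /etc/haproxy/acl/blocked.acl 203.0.113.7" | socat stdio /var/run/haproxy.sock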


On Sat, Aug 15, 2015 at 3:39 AM, Baptiste  wrote:

> Hi,
>
> there is no performance difference between loading from a file and
> declaring the entries directly in the config file.
> That said, if you have multiple ACLs with the same name loading many
> IPs, then you'll perform as many lookups as you have ACLs... While
> loading content from a file would perform a single lookup.
> Anyway, there should not be any noticeable performance impact, since
> IP lookup is very quick in HAProxy (a few hundred nanoseconds in a
> tree of 1,000,000 IPs).
>
> Concerning comments, any string after a hash '#' is considered a
> comment and not loaded into the ACL.
>
> Baptiste
>
>
> On Sat, Aug 15, 2015 at 8:28 AM, Nathan Williams 
> wrote:
> > We use a file for about 40 cidr blocks, and don't have any problems with
> > load speed. Presumably large means more than that, though.
> >
> > We use comments as well, but they have to be at the beginning of their
> own
> > line, not tagged on after the address.
> >
> >
> > On Fri, Aug 14, 2015, 9:09 PM CJ Ess  wrote:
> >>
> >> When doing a large number of IP based ACLs in HAProxy, is it more
> >> efficient to load the ACLs from a file with the -f argument? Or is it
> >> just as good to use multiple ACL statements in the cfg file?
> >>
> >> If I did use a file with the -f parameter, is it possible to put
> comments
> >> in the file?
> >>
> >
>


IP address ACLs

2015-08-14 Thread CJ Ess
When doing a large number of IP based ACLs in HAProxy, is it more efficient
to load the ACLs from a file with the -f argument? Or is it just as good to
use multiple ACL statements in the cfg file?

If I did use a file with the -f parameter, is it possible to put comments
in the file?


Re: HTTP/2 -- is support required on the back end?

2015-06-24 Thread CJ Ess
http/2 takes how web sites have been architected for the last decade and
turns it upside down, so I suspect it will take a while to really take
hold. On haproxy's roadmap http/2 is in the uncategorized section. =P Also,
many people think the TLS overhead that browsers have forced on http/2 is
wasteful on the backend. So I'm actually hoping to terminate http/2 with
the first thing that supports it reasonably well (Apache Traffic Server and
Apache mod_h2 look like leading candidates, h2o is still too immature for a
production site IMO) then use haproxy to talk http/1.1 to backends. I'm
hoping that might also ease the transition between the architectural
differences because I can serve an http/2 optimized structure to those
clients and use haproxy to map the destinations back to http/1.1 backends
while still keeping the existing structures in place for http/1 clients.
When haproxy does finally gain http/2 support then maybe it can terminate
directly and save the extra hop. If it supports http/2 to backends then I'm
hoping it will have the option to connect directly (without tls).


On Wed, Jun 24, 2015 at 12:26 PM, Shawn Heisey  wrote:

> When http/2 support lands in haproxy, will http/2 support also be
> required on the back end to take advantage of it?
>
> I'm hoping that I can leverage http/2 without immediate support on the
> back end.  I would expect that the LAN connection between haproxy and
> the back end servers will be fast enough that the single http/2
> connection can be used on the Internet-facing side with multiple
> http/1.1 connections on the back end, but I don't know if that kind of
> isolation will be possible.  We do have plans to upgrade the back end to
> support http/2, but that may happen a lot slower than I would like.
>
> The back end servers for haproxy are Apache, with Tomcat behind those,
> so I have similar concerns there.  Apache has http/2 support now, but
> Tomcat is lagging behind.
>
> Thanks,
> Shawn
>
>


Re: LB as a first row of defence against DDoS

2015-06-24 Thread CJ Ess
Someone posted a link to a really tricked out anti-ddos haproxy config not
long ago, it might be interesting to you:

https://github.com/analytically/haproxy-ddos

On Wed, Jun 24, 2015 at 11:51 AM, Shawn Heisey  wrote:

> On 6/18/2015 4:32 PM, Shawn Heisey wrote:
> > On 6/17/2015 9:29 PM, Krishna Kumar (Engineering) wrote:
> >> Referring to Baptiste's excellent blog on "Use a lb as a first row of
> >> defense
> >> against DDoS" @
> >>
> >>
> http://blog.haproxy.com/2012/02/27/use-a-load-balancer-as-a-first-row-of-defense-against-ddos/
> >>
> >> I am not able to find a follow up, if it was written, on combining
> >> configuration
> >> examples to improve protection. Is there either another article
> explaining
> >> how to combine the configuration settings to protect against multiple
> >> types of
> >> DoS attacks, else, how would one do this?
> >
> > We have a very good query here.
> >
> > I would like to see an example config that combines all of these
> > techniques together in the same config that has (as an example) 10 front
> > ends and 30 back ends, rather than seeing each technique in isolation on
> > a very limited config.  Looking at the examples, I can't see how to
> > combine multiple techniques, especially if I want to apply it to a large
> > config.
>
> I was going to comment on the blog post so the author would see the
> request to put together a complete config with multiple front ends and
> back ends, with all of them using every one of the DDOS techniques
> included on the blog post.  Unfortunately the blog has an unhelpful
> combination of settings -- new user registration is disabled, and login
> is required to comment.
>
> I believe that the author is active on this list, so I hope that they
> are watching, and can help fill in the gaps for those of us who are less
> familiar with how to use haproxy's advanced features.
>
> Thanks,
> Shawn
>
>
>


Re: Receiving HTTP responses to TCP pool

2015-06-16 Thread CJ Ess
I think that nails the problem. So if it's not just me, then the question is
whether this is intended behavior or a bug. If it's intended, then I don't
think it's entirely clear from the documentation that 'mode tcp' only works
under certain circumstances. If we confirm that it's a bug, then I'd be
willing to see if I can track it down and fix it.


On Tue, Jun 16, 2015 at 4:39 PM, PiBa-NL  wrote:

>  Which does not prevent the backend from using mode http as the defaults
> section sets.
>
> CJ Ess wrote on 16-6-2015 at 22:36:
>
> "mode tcp" is already present in mainfrontend definition below the bind
> statement
>
>
> On Mon, Jun 15, 2015 at 3:05 PM, PiBa-NL  wrote:
>
>  CJ Ess wrote on 15-6-2015 at 20:52:
>>
>> This one has me stumped - I'm trying to proxy SMTP connections however
>> I'm getting an HTTP response when I try to connect to port 25 (even though
>> I've done mode tcp).
>>
>>  This is the smallest subset that reproduced the problem - I can make
>> this work by doing "mode tcp" in the default section and then doing "mode
>> http" in all of the http frontends (not shown). But doing 'mode http' as
>> default and then 'mode tcp' in the smtp frontend definition seems to not
>> work and I'm not certain why.
>>
>>  global
>>   daemon
>>   maxconn 10240
>>   log 127.0.0.1 local0
>>   log 127.0.0.1 local1 notice
>>   stats socket /var/run/haproxy.sock user root group root mode 600 level
>> admin
>>   stats timeout 2m
>>
>>  defaults
>>   log global
>>   mode http
>>   timeout client 30s
>>   timeout server 30s
>>   timeout connect 4s
>>   option  socket-stats
>>
>>  frontend mainfrontend
>>   bind *:25
>>   mode tcp
>>   maxconn 10240
>>   option smtpchk EHLO example.com
>>   default_backend mxpool
>>
>>  backend mxpool
>>
>>  add:
>> mode tcp
>>
>>balance roundrobin
>>   server mailparser-xxx 172.0.0.51:25 check port 25 weight 20 maxconn
>> 10240
>>   server mailparser-yyy 172.0.0.67:25 check port 25 weight 20 maxconn
>> 10240
>>
>>
>>
>
>


Re: Receiving HTTP responses to TCP pool

2015-06-16 Thread CJ Ess
"mode tcp" is already present in mainfrontend definition below the bind
statement


On Mon, Jun 15, 2015 at 3:05 PM, PiBa-NL  wrote:

>  CJ Ess schreef op 15-6-2015 om 20:52:
>
> This one has me stumped - I'm trying to proxy SMTP connections however I'm
> getting an HTTP response when I try to connect to port 25 (even though I've
> done mode tcp).
>
>  This is the smallest subset that reproduced the problem - I can make
> this work by doing "mode tcp" in the default section and then doing "mode
> http" in all of the http frontends (not shown). But doing 'mode http' as
> default and then 'mode tcp' in the smtp frontend definition seems to not
> work and I'm not certain why.
>
>  global
>   daemon
>   maxconn 10240
>   log 127.0.0.1 local0
>   log 127.0.0.1 local1 notice
>   stats socket /var/run/haproxy.sock user root group root mode 600 level
> admin
>   stats timeout 2m
>
>  defaults
>   log global
>   mode http
>   timeout client 30s
>   timeout server 30s
>   timeout connect 4s
>   option  socket-stats
>
>  frontend mainfrontend
>   bind *:25
>   mode tcp
>   maxconn 10240
>   option smtpchk EHLO example.com
>   default_backend mxpool
>
>  backend mxpool
>
> add:
> mode tcp
>
>balance roundrobin
>   server mailparser-xxx 172.0.0.51:25 check port 25 weight 20 maxconn
> 10240
>   server mailparser-yyy 172.0.0.67:25 check port 25 weight 20 maxconn
> 10240
>
>
>


Receiving HTTP responses to TCP pool

2015-06-15 Thread CJ Ess
This one has me stumped - I'm trying to proxy SMTP connections, but I'm
getting an HTTP response when I try to connect to port 25 (even though I've
done mode tcp).

This is the smallest subset that reproduced the problem - I can make this
work by doing "mode tcp" in the default section and then doing "mode http"
in all of the http frontends (not shown). But doing 'mode http' as default
and then 'mode tcp' in the smtp frontend definition seems to not work and
I'm not certain why.

global
  daemon
  maxconn 10240
  log 127.0.0.1 local0
  log 127.0.0.1 local1 notice
  stats socket /var/run/haproxy.sock user root group root mode 600 level
admin
  stats timeout 2m

defaults
  log global
  mode http
  timeout client 30s
  timeout server 30s
  timeout connect 4s
  option  socket-stats

frontend mainfrontend
  bind *:25
  mode tcp
  maxconn 10240
  option smtpchk EHLO example.com
  default_backend mxpool

backend mxpool
  balance roundrobin
  server mailparser-xxx 172.0.0.51:25 check port 25 weight 20 maxconn 10240
  server mailparser-yyy 172.0.0.67:25 check port 25 weight 20 maxconn 10240


Re: VM Power Control/Elasticity

2015-05-12 Thread CJ Ess
You can't add or remove hosts to a pool without doing a reload - you can
change the weights, mark them up and down, but not add or remove.
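
For the operations that are possible at runtime, the stats socket is the
usual interface. A sketch, assuming a socket at /var/run/haproxy.sock and a
backend/server named mxpool/mailparser-xxx:

echo "set weight mxpool/mailparser-xxx 0" | socat stdio /var/run/haproxy.sock
echo "disable server mxpool/mailparser-xxx" | socat stdio /var/run/haproxy.sock
echo "enable server mxpool/mailparser-xxx" | socat stdio /var/run/haproxy.sock

Adding a genuinely new server line still means editing the config and
reloading.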

On Mon, May 11, 2015 at 1:00 PM, Nick Couchman 
wrote:

> I was wondering if it is possible or there's a recommended way to deal
> with dynamic capacity expansion for a given back-end.  I searched through
> the documentation some and didn't see anything obvious, so figured I would
> ask here.
>
> Basically, I would like a way to trigger a script when the number of
> active connections gets within a certain range of the total available
> connections for the available backend servers.  I would like this script to
> be able to do something like trigger a script or command that creates or
> powers on or off a virtual machine that is already added to or could be
> dynamically added to or removed from the back-end.  The basic scenario is
> this:
> - Back-end starts with 10 configured systems or 10 connections each.
> - 5 of the 10 systems are powered on by default, with the other 5 down.
> - Users connect, and close to 40/50 available connections.
> - HAProxy detects the connection limit and triggers a script that starts
> up the 6th VM.
> - Once HAProxy detects that the 6th VM is running, the number of available
> connections moves to 60.
> - Users continue to connect and close to 50/60, triggering another power
> event.
>
> I'd also like the reverse of that to happen:
> - Users begin to disconnect and connections drop to 40/60.
> - HAProxy triggers another script to stop one of the the configured
> back-end systems that has zero connections.
>
> Is this possible?  Or on the roadmap?  Or something that, while not
> implemented directly in the HAProxy configuration could be done some other
> way - some periodic polling of HAProxy some other way?
>
> Thanks,
> Nick
>
>
>


Re: [PATCH] HAProxy 1.6 Compile clean with -DDEBUG_FULL -DDEBUG_AUTH

2015-05-02 Thread CJ Ess
Done. I checked, and all the other debug options build quietly as well.

--- a/src/auth.c
+++ b/src/auth.c
@@ -218,11 +218,14 @@ check_user(struct userlist *ul, const char *user,
const char *pass)
 {

struct auth_users *u;
+#ifdef DEBUG_AUTH
+   struct auth_groups_list *agl;
+#endif
const char *ep;

 #ifdef DEBUG_AUTH
-   fprintf(stderr, "req: userlist=%s, user=%s, pass=%s, group=%s\n",
-   ul->name, user, pass, group);
+   fprintf(stderr, "req: userlist=%s, user=%s, pass=%s\n",
+   ul->name, user, pass);
 #endif

for (u = ul->users; u; u = u->next)
diff --git a/src/stream.c b/src/stream.c
index 12b6f9d..72cbb08 100644
--- a/src/stream.c
+++ b/src/stream.c
@@ -737,10 +737,10 @@ static void sess_update_stream_int(struct stream *s)
DPRINTF(stderr,"[%u] %s: sess=%p rq=%p, rp=%p, exp(r,w)=%u,%u
rqf=%08x rpf=%08x rqh=%d rqt=%d rph=%d rpt=%d cs=%d ss=%d\n",
now_ms, __FUNCTION__,
s,
-   req, s->rep,
+   req, &s->res,
req->rex, s->res.wex,
req->flags, s->res.flags,
-   req->buf->i, req->buf->o, s->res.buf->i, s->res.buf->o,
s->si[0].state, req->cons->state);
+   req->buf->i, req->buf->o, s->res.buf->i, s->res.buf->o,
s->si[0].state, s->si[1].state);

if (si->state == SI_ST_ASS) {
/* Server assigned to connection request, we have to try to
connect now */
@@ -931,10 +931,10 @@ static void sess_prepare_conn_req(struct stream *s)
DPRINTF(stderr,"[%u] %s: sess=%p rq=%p, rp=%p, exp(r,w)=%u,%u
rqf=%08x rpf=%08x rqh=%d rqt=%d rph=%d rpt=%d cs=%d ss=%d\n",
now_ms, __FUNCTION__,
s,
-   s->req, s->rep,
+   &s->req, &s->res,
s->req.rex, s->res.wex,
s->req.flags, s->res.flags,
-   s->req.buf->i, s->req.buf->o, s->res.buf->i, s->res.buf->o,
s->si[0].state, s->req.cons->state);
+   s->req.buf->i, s->req.buf->o, s->res.buf->i, s->res.buf->o,
s->si[0].state, s->si[1].state);

if (si->state != SI_ST_REQ)
return;


On Fri, May 1, 2015 at 1:22 AM, Willy Tarreau  wrote:

> Hi,
>
> On Thu, Apr 30, 2015 at 01:47:30PM -0400, CJ Ess wrote:
> > diff --git a/src/auth.c b/src/auth.c
> > index 42c0808..6973136 100644
> > --- a/src/auth.c
> > +++ b/src/auth.c
> > @@ -218,11 +218,12 @@ check_user(struct userlist *ul, const char *user,
> > const char *pass)
> >  {
> >
> > struct auth_users *u;
> > +   struct auth_groups_list *agl;
> > const char *ep;
> >
> >  #ifdef DEBUG_AUTH
>
> Above, could you please move the variable declaration after the #ifdef
> so that we don't get a build warning in the non-debug case ?
>
> Otherwise it looks fine to me.
>
> Thanks,
> Willy
>
>


[PATCH] HAProxy 1.6 Compile clean with -DDEBUG_FULL -DDEBUG_AUTH

2015-04-30 Thread CJ Ess
diff --git a/src/auth.c b/src/auth.c
index 42c0808..6973136 100644
--- a/src/auth.c
+++ b/src/auth.c
@@ -218,11 +218,12 @@ check_user(struct userlist *ul, const char *user,
const char *pass)
 {

struct auth_users *u;
+   struct auth_groups_list *agl;
const char *ep;

 #ifdef DEBUG_AUTH
-   fprintf(stderr, "req: userlist=%s, user=%s, pass=%s, group=%s\n",
-   ul->name, user, pass, group);
+   fprintf(stderr, "req: userlist=%s, user=%s, pass=%s\n",
+   ul->name, user, pass);
 #endif

for (u = ul->users; u; u = u->next)
diff --git a/src/stream.c b/src/stream.c
index 12b6f9d..72cbb08 100644
--- a/src/stream.c
+++ b/src/stream.c
@@ -737,10 +737,10 @@ static void sess_update_stream_int(struct stream *s)
DPRINTF(stderr,"[%u] %s: sess=%p rq=%p, rp=%p, exp(r,w)=%u,%u
rqf=%08x rpf=%08x rqh=%d rqt=%d rph=%d rpt=%d cs=%d ss=%d\n",
now_ms, __FUNCTION__,
s,
-   req, s->rep,
+   req, &s->res,
req->rex, s->res.wex,
req->flags, s->res.flags,
-   req->buf->i, req->buf->o, s->res.buf->i, s->res.buf->o,
s->si[0].state, req->cons->state);
+   req->buf->i, req->buf->o, s->res.buf->i, s->res.buf->o,
s->si[0].state, s->si[1].state);

if (si->state == SI_ST_ASS) {
/* Server assigned to connection request, we have to try to
connect now */
@@ -931,10 +931,10 @@ static void sess_prepare_conn_req(struct stream *s)
DPRINTF(stderr,"[%u] %s: sess=%p rq=%p, rp=%p, exp(r,w)=%u,%u
rqf=%08x rpf=%08x rqh=%d rqt=%d rph=%d rpt=%d cs=%d ss=%d\n",
now_ms, __FUNCTION__,
s,
-   s->req, s->rep,
+   &s->req, &s->res,
s->req.rex, s->res.wex,
s->req.flags, s->res.flags,
-   s->req.buf->i, s->req.buf->o, s->res.buf->i, s->res.buf->o,
s->si[0].state, s->req.cons->state);
+   s->req.buf->i, s->req.buf->o, s->res.buf->i, s->res.buf->o,
s->si[0].state, s->si[1].state);

if (si->state != SI_ST_REQ)
return;


Re: Choosing backend based on constant

2015-04-30 Thread CJ Ess
Perhaps this is more what you are looking for?
https://github.com/smarterclayton/haproxy-map-route-example
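
If the constant never comes from traffic at all, another option (HAProxy
1.5 or later) is to set it in the environment the process starts with and
pick the backend with the env() fetch in a dynamic use_backend rule - the
variable and backend names here are made up:

# started as: HOSTGROUP=foo haproxy -f /etc/haproxy/haproxy.cfg
frontend fe_main
  bind *:80
  use_backend ha_backend_%[env(HOSTGROUP)]

backend ha_backend_foo
  ...

backend ha_backend_bar
  ...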

On Thu, Apr 30, 2015 at 11:43 AM, Veiko Kukk  wrote:

> I'd like to manually add that constant string into the configuration, not to
> get it from the traffic. It would help to reduce differences in the haproxy
> configuration file between server groups and make migration between
> groups easier.
>
> Best regards,
> Veiko
>
>
> On 30/04/15 18:06, Baptiste wrote:
>
>> On Thu, Apr 30, 2015 at 11:49 AM, Veiko Kukk 
>> wrote:
>>
>>> Hi everybody
>>>
>>> I'd like to simplify my haproxy configuration management by using almost
>>> identical configurations for different groups of haproxy installations
>>> that
>>> use different backends based on string comparision. The only difference
>>> in
>>> haproxy configuration files of different groups would be that string.
>>>
>>> The configuration logic would be something like this (not syntactically
>>> correct for haproxy, I know, but should show what I wish to accomplish):
>>>
>>> constant = foo # first hostgroup configuration
>>> constant = bar # second hostgroup configuration
>>>
>>> # common configuration for all hostgroups
>>> use_backend ha_backend_foo if constant == foo
>>> use_backend ha_backend_bar if constant == bar
>>> ...
>>>
>>> I wonder how to specify that string and form acl to use in 'use_backend'
>>> statement?
>>>
>>> Thanks in advance,
>>> Veiko
>>>
>>
>>
>> Hi Veiko,
>>
>> The question is how do you set your constant, what piece of
>> information do you use from the traffic or whatever?
>> Then we may help you.
>>
>> Baptiste
>>
>>
>


Re: Choosing backend based on constant

2015-04-30 Thread CJ Ess
You can use stick tables to create sticky sessions based on origin IP,
cookies, and things like that; you'll need HAProxy 1.5 or better to do it.
If you google for "haproxy sticky sessions" you'll find a number of
examples. Here are a couple of stand-outs:

http://blog.haproxy.com/2012/03/29/load-balancing-affinity-persistence-sticky-sessions-what-you-need-to-know/
http://stackoverflow.com/questions/27094501/haproxy-1-5-8-how-do-i-configure-cookie-based-stickiness
http://serverfault.com/questions/652911/implementing-tcp-sticky-sessions-with-haproxy-to-handle-ssl-pass-through-traffic
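
For the common case, cookie insertion alone is enough and needs no stick
table at all. A minimal sketch with made-up names:

backend be_app
  balance roundrobin
  cookie SERVERID insert indirect nocache
  server app1 10.0.0.1:8080 check cookie app1
  server app2 10.0.0.2:8080 check cookie app2

haproxy sets the SERVERID cookie on the first response and pins later
requests that carry it to the same server; the stick-table approaches in
the links above are for when you need to key on something the application
already sends.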


Show outgoing headers when full debug enabled

2015-04-27 Thread CJ Ess
When you run HAProxy in full debugging mode there is a debug_hdrs() call
that displays all of the http headers read from the frontend. I'd also like
to be able to see the headers being sent to the backend.

So far I haven't pinpointed where the headers are being sent from so that I
can add another debug_hdrs() call. Anyone point me to the right place?


Re: SEGV capturing tcp traffic

2015-04-25 Thread CJ Ess
Very cool! Thank you!

On Sat, Apr 25, 2015 at 10:33 AM, Baptiste  wrote:

> Hi,
>
> I reported this issue to Willy already and latest snapshot includes a fix:
>
> http://git.haproxy.org/?p=haproxy.git;a=commit;h=e91ffd093e548aa08d7ccb835fd261f3d71ffb17
>
> run a git pull or git clone ;)
>
> Baptiste
>
>
> On Fri, Apr 24, 2015 at 5:58 PM, CJ Ess  wrote:
> > Its possible that I'm doing this wrong, I don't see many examples of
> working
> > with tcp streams, but this combination seems to SEGV haproxy 1.6
> > consistently.
> >
> > The idea is to capture the first 32 bytes of a TCP stream and use it to
> make
> > a sticky session. What I've done is this:
> >
> > frontend fe_capture
> > mode tcp
> > bind *:9048
> > default_backend be_capture
> >
> > backend be_capture
> > mode tcp
> > balance roundrobin
> > tcp-request inspect-delay 5s
> > tcp-request content accept
> > stick-table type binary len 32 size 30k expire 30m
> > stick on payload(0,32)
> > server test9050 127.0.0.1:9050 weight 1 check observe layer4
> > server test9051 127.0.0.1:9051 weight 1 check observe layer4
> >
> > And to test it I do this:
> >
> > curl -v http://127.0.0.1:9048/
> > (And I'm not really doing all this to look at http, this is just an
> example
> > that demonstrates the issue)
> >
> >
>


SEGV capturing tcp traffic

2015-04-24 Thread CJ Ess
It's possible that I'm doing this wrong, since I don't see many examples of
working with tcp streams, but this combination seems to SEGV haproxy 1.6
consistently.

The idea is to capture the first 32 bytes of a TCP stream and use it to
make a sticky session. What I've done is this:

frontend fe_capture
mode tcp
bind *:9048
default_backend be_capture

backend be_capture
mode tcp
balance roundrobin
tcp-request inspect-delay 5s
tcp-request content accept
stick-table type binary len 32 size 30k expire 30m
stick on payload(0,32)
server test9050 127.0.0.1:9050 weight 1 check observe layer4
server test9051 127.0.0.1:9051 weight 1 check observe layer4

And to test it I do this:

curl -v http://127.0.0.1:9048/
(And I'm not really doing all this to look at http, this is just an example
that demonstrates the issue)


Re: Access control for stats page

2015-04-21 Thread CJ Ess
Very cool, thank you for the snippets!

On Tue, Apr 21, 2015 at 6:55 PM, Neil - HAProxy List <
maillist-hapr...@iamafreeman.com> wrote:

> here are some relevant snips.
> I run this with the same address as the service.
>
> frontend SSL
> ...
> acl url_hastats url_beg /hastats
> acl location_trusted src 123.123.123.0/24
> acl magic_cookie_trusted hdr_sub(cookie)
> magicforthissiteonly=foobar_SHA1value_etc
> use_backend hastats if url_hastats location_trusted
> use_backend hastats if url_hastats magic_cookie_trusted
> http-request deny if url_hastats
> ...
>
> backend hastats
> mode http
> stats uri /hastats
> stats realm Service\ Loadbalancer
> stats show-desc url.domain: Service Loadbalancer running on hostname (config version)
> stats show-legends
> stats auth admin:password
> stats admin if TRUE
>
>
> On 21 April 2015 at 21:04, Neil - HAProxy List <
> maillist-hapr...@iamafreeman.com> wrote:
>
>> Hello
>>
>> Yep there is
>>
>> Have a frontend
>>
>> Send say /hastats to a hastats backend
>>
>> have the backend have its stats URL be /hastats too
>>
>> Set the acls in the frontend
>>
>> I'll post a config example in a bit.
>>
>> Neil
>> On 21 Apr 2015 20:09, "CJ Ess"  wrote:
>>
>>> Is there a way to setup an ACL for the haproxy stats page? We do have
>>> authentication set up for the URL, but we would feel better if we could
>>> limit access to a white list of local networks. Is there a way to do that?
>>>
>>>
>


Access control for stats page

2015-04-21 Thread CJ Ess
Is there a way to set up an ACL for the haproxy stats page? We do have
authentication set up for the URL, but we would feel better if we could
limit access to a white list of local networks. Is there a way to do that?


Re: Stick tables and counters persistence

2015-04-17 Thread CJ Ess
Do you have an example of what that looks like? Am I literally adding
127.0.0.1 as a peer?
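
From what I can tell, the minimal form is a peers section whose peer name
matches the local hostname (or whatever is passed with -L), pointing at
127.0.0.1, plus a stick-table that references it - names below are
placeholders:

peers mypeers
  peer myhostname 127.0.0.1:1024

backend be_app
  stick-table type ip size 100k expire 30m peers mypeers

On reload, the new process connects to the old one over that local peer
link and pulls the table contents across before taking over.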


On Fri, Apr 17, 2015 at 12:26 AM, Dennis Jacobfeuerborn <
denni...@conversis.de> wrote:

> On 17.04.2015 02:12, Igor Cicimov wrote:
> > Hi all,
> >
> > Just a quick one, are the stick tables and counters persisted on haproxy
> > 1.5.11 reload/restart?
>
> With nbproc=1 yes as long as you use a peers section that contains the
> local host as an entry.
>
> Regards,
>   Dennis
>
>
>
>


Long ACLs

2015-04-14 Thread CJ Ess
What is the best way to deal with long ACLs in HAProxy? For instance,
Amazon EC2 has around 225 address blocks, so if I wanted to direct requests
originating from EC2 to a particular backend, that's a lot of CIDRs to
manage and compare against. Any suggestions on how best to approach a
situation like this?
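
For reference, one approach that keeps the config readable is to load the
CIDRs from a file, which haproxy holds in a tree for fast matching - the
path here is made up:

frontend fe_www
  acl from_ec2 src -f /etc/haproxy/ec2-cidrs.lst
  use_backend be_ec2 if from_ec2

with ec2-cidrs.lst containing one CIDR per line. The list can then be
regenerated from Amazon's published ranges without touching the main
config, though a reload is still needed to pick up changes.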


Re: Achieving Zero Downtime Restarts at Yelp

2015-04-14 Thread CJ Ess
I think the gold standard for graceful restarts is nginx - it will start a
new instance (which could even be a new binary), send the accept fds to the
new instance, then the original instance will stop accepting new requests
and allow the existing connections to drain off. The whole process is
controlled by signals, and you can even decide there is a problem with the
new instance and have the old one resume taking traffic. I love it because
I can bounce nginx all day long and no one notices. I could see haproxy
having the same ability when nbproc = 1, but it's not exactly a two-weekend
project.
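
For reference, the usual nginx binary-upgrade sequence looks roughly like
this (pid file location assumed):

kill -USR2 $(cat /var/run/nginx.pid)          # new master (new binary) inherits the listen fds
kill -WINCH $(cat /var/run/nginx.pid.oldbin)  # old workers stop accepting and drain
kill -QUIT $(cat /var/run/nginx.pid.oldbin)   # retire the old master once the new one looks good
# to back out instead: HUP the old master and QUIT the new one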


On Mon, Apr 13, 2015 at 1:24 PM, Joseph Lynch  wrote:

> Hello,
>
> I published an article today on Yelp's engineering blog (
> http://engineeringblog.yelp.com/2015/04/true-zero-downtime-haproxy-reloads.html)
> that shows a technique we use for low latency, zero downtime restarts of
> HAProxy. This solves the "when I restart HAProxy some of my clients get
> RSTs" problems that can occur. We built it to solve the RSTs in our
> internal load balancing, so there is a little more work to be done to
> modify the method to work with external traffic, which I talk about in the
> post.
>
> The solution basically consists of using Linux queuing disciplines to
> delay SYN packets for the duration of the restart. It can definitely be
> improved by further tuning the qdiscs or replacing the iptables mangle with
> a u8/u32 tc filter, but I decided it was better to talk about the idea and
> if the community likes it, then we can optimize it further.
>
> -Joey
>


[PATCH] Configurable http result codes for http-request deny

2015-04-07 Thread CJ Ess
This is my first time submitting a modification to haproxy, so I would
appreciate feedback.

We've been experimenting with using the stick tables feature in Haproxy to
do rate limiting by IP at the edge. We know from past experience that we
will need to maintain a whitelist because schools and small ISPs (in
particular) have a habit of proxying a significant number of requests
through a handful of addresses without providing x-forwarded-for to
differentiate between actual origins. My employer has a strict "we talk to
our customers" policy (what a unique concept!) so when we do rate limit
someone we want to return a custom error page which explains in a positive
way why we are not serving the requested page and how our support group will
be happy to add them to the white list if they contact us.

This patch adds support for error codes 429 and 405 to Haproxy and a
"deny_status XXX" option to "http-request deny" where you can specify which
code is returned with 403 being the default. We really want to do this the
"haproxy way" and hope to have this patch included in the mainline. We'll
be happy to address any feedback on how this is implemented.
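
With the patch applied, usage looks like this (table size, threshold and
whitelist path are invented for the example):

frontend fe_www
  stick-table type ip size 1m expire 10m store http_req_rate(10s)
  http-request track-sc0 src
  http-request deny deny_status 429 if { sc0_http_req_rate gt 100 } !{ src -f /etc/haproxy/whitelist.lst }

combined with an "errorfile 429 /etc/haproxy/errors/429.http" line pointing
at the custom page.
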
diff --git a/doc/configuration.txt b/doc/configuration.txt
index 9a04200..daba1b9 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -2612,7 +2612,8 @@ errorfile  
  yes   |yes   |   yes  |   yes
   Arguments :
 is the HTTP status code. Currently, HAProxy is capable of
-  generating codes 200, 400, 403, 408, 500, 502, 503, and 504.
+  generating codes 200, 400, 403, 405, 408, 429, 500, 502, 503, and
+  504.
 
 designates a file containing the full HTTP response. It is
   recommended to follow the common practice of appending ".http" to
diff --git a/include/types/proto_http.h b/include/types/proto_http.h
index 5a4489d..d649fdd 100644
--- a/include/types/proto_http.h
+++ b/include/types/proto_http.h
@@ -309,7 +309,9 @@ enum {
HTTP_ERR_200 = 0,
HTTP_ERR_400,
HTTP_ERR_403,
+   HTTP_ERR_405,
HTTP_ERR_408,
+   HTTP_ERR_429,
HTTP_ERR_500,
HTTP_ERR_502,
HTTP_ERR_503,
@@ -417,6 +419,7 @@ struct http_req_rule {
struct list list;
struct acl_cond *cond; /* acl condition to meet */
unsigned int action;   /* HTTP_REQ_* */
+   short deny_status; /* HTTP status to return to user 
when denying */
int (*action_ptr)(struct http_req_rule *rule, struct proxy *px, struct 
session *s, struct http_txn *http_txn);  /* ptr to custom action */
union {
struct {
@@ -484,6 +487,7 @@ struct http_txn {
unsigned int flags; /* transaction flags */
enum http_meth_t meth;  /* HTTP method */
/* 1 unused byte here */
+   short rule_deny_status; /* HTTP status from rule when denying */
short status;   /* HTTP status from the server, 
negative if from proxy */
 
char *uri;  /* first line if log needed, NULL 
otherwise */
diff --git a/src/proto_http.c b/src/proto_http.c
index 611a8c1..989d399 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -131,7 +131,9 @@ const int http_err_codes[HTTP_ERR_SIZE] = {
[HTTP_ERR_200] = 200,  /* used by "monitor-uri" */
[HTTP_ERR_400] = 400,
[HTTP_ERR_403] = 403,
+   [HTTP_ERR_405] = 405,
[HTTP_ERR_408] = 408,
+   [HTTP_ERR_429] = 429,
[HTTP_ERR_500] = 500,
[HTTP_ERR_502] = 502,
[HTTP_ERR_503] = 503,
@@ -163,6 +165,14 @@ static const char *http_err_msgs[HTTP_ERR_SIZE] = {
"\r\n"
"403 Forbidden\nRequest forbidden by 
administrative rules.\n\n",
 
+   [HTTP_ERR_405] =
+   "HTTP/1.0 405 Method Not Allowed\r\n"
+   "Cache-Control: no-cache\r\n"
+   "Connection: close\r\n"
+   "Content-Type: text/html\r\n"
+   "\r\n"
+   "405 Method Not Allowed\nA request was made of a 
resource using a request method not supported by that 
resource\n\n",
+
[HTTP_ERR_408] =
"HTTP/1.0 408 Request Time-out\r\n"
"Cache-Control: no-cache\r\n"
@@ -171,6 +181,14 @@ static const char *http_err_msgs[HTTP_ERR_SIZE] = {
"\r\n"
"408 Request Time-out\nYour browser didn't send a 
complete request in time.\n\n",
 
+   [HTTP_ERR_429] =
+   "HTTP/1.0 429 Too Many Requests\r\n"
+   "Cache-Control: no-cache\r\n"
+   "Connection: close\r\n"
+   "Content-Type: text/html\r\n"
+   "\r\n"
+   "429 Too Many Requests\nYou have sent too many 
requests in a given amount of time.\n\n",
+
[HTTP_ERR_500] =
"HTTP/1.0 500 Server Error\r\n"
"Cache-Control: no-cache\r\n"
@@ -3408,10 +3426,12 @@ resume_execution:
return HTTP_RULE_RES_STOP;
 
case HTTP_REQ_ACT_DENY:
+   txn->rule_deny

Re: SPDY with Apache mod_spdy

2015-01-27 Thread CJ Ess
I'm under the impression that Haproxy doesn't speak SPDY natively, so the
best it can do is pass it through to a backend that does. If you use nginx
to terminate ssl and spdy, then you can use all the features of haproxy.


On Tue, Jan 27, 2015 at 1:21 PM, Erwin Schliske  wrote:

> Hello,
>
> actually I have the task to setup a system with Haproxy balancing a Apache
> with mod_spdy enabled. I don't have a problem with ssl-offloading, but I
> cannot find out how to serve spdy enabled clients. I have tried several
> howtos like
>
>
> http://www.igvita.com/2012/10/31/simple-spdy-and-npn-negotiation-with-haproxy/
>
> My config is:
>
> listen spdytest
>   modetcp
>   bind  X.X.X.X:443 ssl crt /etc/haproxy/ssl/example.com.pem
> no-sslv3 npn spdy/2
>   server   backend1 10.X.X.X:1443 ssl
>
> All tutorials I've found use Nginx as webserver, which can serve spdy
> without ssl. But this is not the case with Apache mod_spdy. It needs https
> as proto.
>
> Does someone have a hint what I'm doing wrong?
>
>
> Thanks.
>
>
> Regards,
> Erwin
>


Stick tables, good guys, bad guys, and NATs

2015-01-26 Thread CJ Ess
I am upgrading my environment from haproxy 1.3/1.4 to haproxy 1.5, but am
not yet using any of the newer features.

I'm intrigued with using the stick table facilities in haproxy 1.5 to help
mitigate the impact of malicious users and that seems to be a common goal -
however I haven't seen any discussion about large groups of users behind
NATs and firewalls (businesses, universities, mobile, etc.). Has anyone
found a happy medium between these two concerns, aside from whitelisting
and letting the blocks age out over time?

One thought I had, in a virtual hosting environment, was to use a stick
table to track the number of requests by Host header, and direct requests
to a different backend (with dedicated resources) once requests for a
particular vhost cross a threshold, rejoining the common pool once the
traffic dies down. Has anyone been successful with a similar setup?
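
A sketch of that vhost idea, with the threshold and names pulled out of the
air:

frontend fe_www
  bind *:80
  stick-table type string len 64 size 100k expire 10m store http_req_rate(60s)
  http-request track-sc0 hdr(host)
  use_backend be_isolated if { sc0_http_req_rate gt 1000 }
  default_backend be_shared

Once a host's request rate drops back under the threshold the condition
stops matching and its traffic returns to the shared pool.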


Linux & tcp splice

2014-06-26 Thread CJ Ess
Hello,

I am looking for some really solid information on using the tcp splicing
features present in recent HAProxy and Linux builds. Most of the
information I'm finding with Google is from 2009/2010.

I'm looking for stuff like:
- Which kernel options are required (it appears that patching is no longer
needed)
- What needs to be configured in the HAProxy config file
- Any performance tuning tips for either component on a box that will have
heavy traffic
- Any troubleshooting tips that might come in handy for this configuration
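
A starting point, for what it's worth: no kernel patching or special build
options should be needed on anything recent (splice() has been in mainline
since 2.6.17, though the haproxy docs historically recommended 2.6.27.x or
later because of early kernel bugs), and on the haproxy side the relevant
knobs are:

global
  maxpipes 1024          # cap on pipes reserved for splicing (default is maxconn/4)

defaults
  option splice-auto     # let haproxy enable splicing when it looks profitable
  # or force it per direction:
  # option splice-request
  # option splice-response

"nosplice" in the global section (or starting haproxy with -dS) disables
splicing again, which is handy when troubleshooting.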