Not the tarpit feature; that denies access to the content with a 500
status. I don't want to kill the request, just delay it.
On Fri, Aug 5, 2016 at 8:57 PM, Dennis Jacobfeuerborn <denni...@conversis.de
> wrote:
> On 05.08.2016 19:11, CJ Ess wrote:
So I know I can use Haproxy to send 429s when a given request rate is
exceeded.
I have a case where the "user" is mostly screen scrapers and click bots, so
if I return a 429 they'll just turn around and re-request until successful
- I can't expect them to voluntarily manage their request rate or
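Something along the lines of what I have in mind (a sketch only; names and thresholds are made up) would combine a stick-table request rate with tcp-request inspect-delay, so flagged clients just sit in the delay instead of getting an error:

```
frontend fe_http
    bind :80
    stick-table type ip size 100k expire 30s store http_req_rate(10s)
    tcp-request connection track-sc0 src
    acl abuser sc0_http_req_rate gt 20
    # non-abusers are accepted immediately; abusers wait out the delay
    tcp-request inspect-delay 5s
    tcp-request content accept if !abuser
    tcp-request content accept if WAIT_END
    default_backend be_app
```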
I think I'm overlooking something simple, could someone spot check me?
What I want to do is to pool connections on my http backend - keep HAProxy
from opening a new connection to the same backend if there is an
established connection that is idle.
My haproxy version is 1.5.18
In my defaults
available.
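A sketch of what I'm after, assuming the 1.6+ "http-reuse" keyword (which doesn't exist in my 1.5.18):

```
backend be_app
    # http-reuse is HAProxy 1.6+ only; 1.5 cannot hand an idle
    # server-side connection to a different client connection
    http-reuse safe
    option http-keep-alive
    server app1 10.0.0.1:80 maxconn 100
```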
On Fri, Jul 15, 2016 at 3:45 PM, Cyril Bonté <cyril.bo...@free.fr> wrote:
> Le 15/07/2016 à 19:35, CJ Ess a écrit :
>
>> I think I gave the relevant details but here is a sample (with hostname,
>> frontend name, backend name, server name, user agent, x-forwarded-for
/2898 200 135 - - 17847/17845/5414/677/0 0/0
{user_agent|x-forwarded-for} "POST /services/x HTTP/1.1"
On Fri, Jul 15, 2016 at 1:24 PM, Cyril Bonté <cyril.bo...@free.fr> wrote:
> Le 15/07/2016 à 17:46, CJ Ess a écrit :
>
I've got thousands of errors showing up in my haproxy error.log but I'm not
sure why; the requests being logged there have a 200 result code and the
session state flags are all -'s. However, it's primarily requests to a
particular backend being logged there. What can I do to diagnose?
My HAProxy
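One thing I can try is the stats socket's "show errors" dump, assuming a socket like this is configured (path is hypothetical):

```
global
    stats socket /var/run/haproxy.sock mode 600 level admin

# then, from a shell:
#   echo "show errors" | socat stdio /var/run/haproxy.sock
```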
Are they http/1.1 requests?
On Thu, Jun 30, 2016 at 11:58 AM, Richert, Tim wrote:
> Hello there,
>
> I've been working with haproxy for some time now and it's doing a
> fantastic job! Thank you for all your development and all your efforts in
> this great piece of software!
We have pools of Haproxy talking to pools of Nginx servers with php-fpm
backends. We were seeing 50-60 health checks per second, all of which had
to be serviced by the php-fpm process and which almost always returned the
same result except for the rare memory or NIC failure. So we used
Nginx's
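The haproxy side of that looks roughly like this (path and names are made up); the check URL is a static location answered by Nginx directly, so php-fpm never sees it:

```
backend be_php
    # /haproxy-health is served statically by Nginx itself
    option httpchk GET /haproxy-health
    http-check expect status 200
    server web1 10.0.0.1:80 check inter 2s fall 3 rise 2
```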
I personally don't have a need to limit requests on the haproxy side at the
moment, I just thought I'd try to help Manas make his case. He's basically
saying that he wants the option to close the client connection after the
nth request, and that seems pretty reasonable to me. Maybe it would help him
. For those servers I have to
specifically chop the connections to force the old haproxy processes to die
off and the clients to reconnect to the new ones.
On Wed, Jun 8, 2016 at 3:13 PM, Lukas Tribus <lu...@gmx.net> wrote:
> Hi,
>
>
> Am 08.06.2016 um 20:51 schrieb CJ Ess:
Nginx for instance allows you to limit the number of keep-alive requests
that a client can send on an existing connection - after which the client
connection is closed.
http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests
Apache has something similar
Thank you! I think you just made the case for me. =)
On Fri, Apr 29, 2016 at 1:45 AM, Willy Tarreau <w...@1wt.eu> wrote:
> On Thu, Apr 28, 2016 at 02:36:44PM -0400, CJ Ess wrote:
> > I'm wanting to make a case for compiling haproxy against a modern version
> > of pcre (su
I'm wanting to make a case for compiling haproxy against a modern version
of pcre (supporting jit) when we roll out the next release to my day job.
Anyone have some numbers handy that show the benefit of doing so?
That sounds like the issue exactly, the solution seems to be to upgrade.
Thanks for the pointer!
On Tue, Apr 26, 2016 at 6:12 PM, Cyril Bonté <cyril.bo...@free.fr> wrote:
> Hi,
>
>
> Le 26/04/2016 23:41, CJ Ess a écrit :
>
Maybe I've found an haproxy bug? I am wondering if anyone else can
reproduce this -
You'll need to send two requests w/ keep-alive:
curl -v -v -v http://127.0.0.1/something http://127.0.0.1/
On my system the first request returns a 404 error (but I've also seen this
with 200 responses - the
That will work for now; in the future it would be nice to have an option to
allow non-control utf-8 characters in the URI without enabling all of the
other stuff.
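The workaround in question is presumably "option accept-invalid-http-request", along these lines:

```
frontend fe_http
    # relaxes request parsing so 8-bit characters in the URI no longer
    # draw a 400, at the cost of loosening the other sanity checks too
    option accept-invalid-http-request
```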
On Mon, Apr 18, 2016 at 4:59 PM, PiBa-NL <piba.nl@gmail.com> wrote:
> Op 18-4-2016 om 22:47 schreef CJ Ess:
>
>
This is using HAProxy 1.5.12 - I've noticed an issue where HAProxy is
sometimes rejecting requests with a 400 code when the URL string contains
extended characters. Nginx is fronting HAProxy and has passed them through
as valid requests and just eyeballing them they look ok to me.
An example
than the default maxconn - I almost think it would make more
sense to have different keywords. But I'll write it off as a learning
experience in our transition to using keepalives.
On Mon, Apr 4, 2016 at 1:44 PM, Cyril Bonté <cyril.bo...@free.fr> wrote:
> Hi,
>
> Le 04/04/201
to using keep-alives to the fullest,
so I suspect that we've always had this problem but just never saw it
because our connections turned over so quickly.
On Sun, Apr 3, 2016 at 3:59 AM, Baptiste <bed...@gmail.com> wrote:
>
> Le 3 avr. 2016 03:45, "CJ Ess" <zxcvbn4...@gmail.co
-NL <piba.nl@gmail.com> wrote:
> Op 2-4-2016 om 22:32 schreef CJ Ess:
>
> So in my config file I have:
>
> maxconn 65535
> fullconn 64511
>
> However, "show info" still has a maxconn 2000 limit and that caused a blow
> up because I exceeded the limit =(
Oops, that is important - I have both the maxconn and fullconn settings in
the defaults section.
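For anyone hitting the same thing: the 2000 shown by "show info" is the process-wide limit from the global section, which a maxconn in defaults does not touch. Roughly:

```
global
    # process-wide connection ceiling; defaults to 2000 if unset
    maxconn 65535

defaults
    # per-proxy cap, independent of the global one
    maxconn 64511
```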
On Sat, Apr 2, 2016 at 4:37 PM, PiBa-NL <piba.nl@gmail.com> wrote:
> Op 2-4-2016 om 22:32 schreef CJ Ess:
>
>> So in my config file I have:
>>
>> maxconn 65535
So in my config file I have:
maxconn 65535
fullconn 64511
However, "show info" still has a maxconn 2000 limit and that caused a blow
up because I exceeded the limit =(
So my questions are 1) is there a way to raise maxconn without restarting
haproxy with the -P parameter (can I add -P when I
So at long last, I'm getting to use keep-alives with HAProxy!
I'm terminating http/ssl/spdy with Nginx and then passing the connections
to HAProxy via an upstream pool. I've verified by packet capture that
connection reuse between clients, Nginx, and HAProxy is occurring.
So I'd like to keep the
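The HAProxy side of the setup is roughly this sketch (the Nginx upstream also needs a "keepalive" directive and proxy_http_version 1.1 for reuse to happen):

```
defaults
    mode http
    option http-keep-alive
    timeout http-keep-alive 10s
    timeout client 30s
    timeout server 30s
```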
7, 2016 at 15:26:31, CJ Ess (zxcvbn4...@gmail.com) wrote:
>
> Hey folks, I could use some help figuring this one out. My environment
> looks like this:
>
> (client) <-> (nginx) <-> (haproxy 1.5) <-> (backend server pools all with
> (nginx -> phpfpm))
>
Hey folks, I could use some help figuring this one out. My environment
looks like this:
(client) <-> (nginx) <-> (haproxy 1.5) <-> (backend server pools all with
(nginx -> phpfpm))
Where the client is a browser or bot, nginx is terminating http/https/spdy,
haproxy has the business and routing
I need to migrate to a different URL for our haproxy health checks, and it
would be really helpful if I could respond to multiple URLs as part of the
transition, or could create an empty 200 response.
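For the empty-200 half of this, monitor-uri looks like the obvious candidate; it answers a single URI from HAProxy itself without involving any backend (path is made up):

```
frontend fe_http
    bind :80
    # HAProxy answers this path itself with a 200, no backend call;
    # note monitor-uri matches exactly one URI
    monitor-uri /haproxy-health
```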
So let's say that I don't want HAProxy to close the connections to my
backend servers - they can stay active and be available for keepalives -
but I do want every request from the frontend to go to a different backend
via round robin. The idea being that it keeps one frontend connection from
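The config shape I have in mind (a sketch; server names made up) - by default each request on a kept-alive connection is balanced independently, as long as prefer-last-server is not set:

```
backend be_app
    balance roundrobin
    option http-keep-alive
    # do NOT set "option prefer-last-server" here - with it, requests
    # from one client connection would stick to one server
    server app1 10.0.0.1:80
    server app2 10.0.0.2:80
```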
> …the “officially endorsed” way of achieving this.
>
> Best,
>
> Pedro.
>
>
>
> On 22 Jan 2016, at 00:38, CJ Ess <zxcvbn4...@gmail.com> wrote:
>
> One of our sore points with HAProxy has been that when we do a reload
> there is a ~100ms gap where neither the old or new HAproxy
doesn't really serve any purpose other than to
> return
> > a static value to be stored in the stick table.
> >
> > That returns a 403 error when the limit is exceeded... I don't think
> there
> > is a good way to return a 429 response without making it substantially
> more
So I am trying to set some new rules - since I don't have anything handy to
echo requests back to me, I'm using http-response add-header so I can
verify my rules work with curl.
Added to haproxy.cfg:
acl test_origin hdr(X-TEST-IP) -m ip -f /etc/haproxy/acl/test.acl
http-response add-header
Cyril, that makes perfect sense but I wouldn't have thought of it. Thank
you for pointing me in the right direction!
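For the archives, my understanding of the gotcha: in an http-response rule, hdr() matches the *response* headers, not the request's. A sketch of one workaround (names invented; set-var needs HAProxy 1.6+) is to evaluate the request-side ACL at request time and stash the result:

```
frontend fe_http
    acl test_origin hdr(X-TEST-IP) -m ip -f /etc/haproxy/acl/test.acl
    # remember the request-time result in a transaction variable
    http-request set-var(txn.test) always_true if test_origin
    http-response add-header X-Test-Hit yes if { var(txn.test) -m bool }
```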
On Thu, Oct 1, 2015 at 4:39 PM, Cyril Bonté <cyril.bo...@free.fr> wrote:
> Hi,
>
> Le 01/10/2015 20:56, CJ Ess a écrit :
>
We've noticed that our front-end connections to haproxy are closing after
talking to a backend running php-fpm. The php-fpm backend is not sending a
content-length header, but is using chunked encoding which encodes lengths
of the chunks and should be enough to keep the connection alive for
, and if it
will still do that if the client request from the frontend is http/1.0 and
the server response is http/1.1.
On Thu, Sep 17, 2015 at 4:43 PM, CJ Ess <zxcvbn4...@gmail.com> wrote:
> We've noticed that our front-end connections to haproxy are closing after
> talking to a backend running php-
for about 40 cidr blocks, and don't have any problems with
load speed. Presumably large means more than that, though.
We use comments as well, but they have to be at the beginning of their
own
line, not tagged on after the address.
On Fri, Aug 14, 2015, 9:09 PM CJ Ess zxcvbn4...@gmail.com
When doing a large number of IP based ACLs in HAProxy, is it more efficient
to load the ACLs from a file with the -f argument? Or is just as good to
use multiple ACL statements in the cfg file?
If I did use a file with the -f parameter, is it possible to put comments
in the file?
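To make the comment rule concrete (file path invented):

```
# /etc/haproxy/acl/networks.acl
# a comment line - must start the line; trailing comments
# after an address are not allowed
10.0.0.0/8
192.168.0.0/16

# in haproxy.cfg:
#   acl from_internal src -f /etc/haproxy/acl/networks.acl
```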
Someone posted a link to a really tricked out anti-ddos haproxy config not
long ago, it might be interesting to you:
https://github.com/analytically/haproxy-ddos
On Wed, Jun 24, 2015 at 11:51 AM, Shawn Heisey hapr...@elyograg.org wrote:
On 6/18/2015 4:32 PM, Shawn Heisey wrote:
On 6/17/2015
http/2 takes how web sites have been architected for the last decade and
turns it upside down, so I suspect it will take a while to really take
hold. On haproxy's roadmap http/2 is in the uncategorized section. =P Also
many people think that the TLS overhead that browsers have forced on http/2
be
willing to see if I can track it down and fix it.
On Tue, Jun 16, 2015 at 4:39 PM, PiBa-NL piba.nl@gmail.com wrote:
Which does not prevent the backend from using mode http as the defaults
section sets.
CJ Ess schreef op 16-6-2015 om 22:36:
mode tcp is already present in mainfrontend definition below the bind
statement
On Mon, Jun 15, 2015 at 3:05 PM, PiBa-NL piba.nl@gmail.com wrote:
CJ Ess schreef op 15-6-2015 om 20:52:
This one has me stumped - I'm trying to proxy SMTP connections however I'm
getting an HTTP response
This one has me stumped - I'm trying to proxy SMTP connections however I'm
getting an HTTP response when I try to connect to port 25 (even though I've
done mode tcp).
This is the smallest subset that reproduced the problem - I can make this
work by doing mode tcp in the default section and then
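For reference, the minimal working shape (addresses made up) - the key is that both the frontend and the backend end up in mode tcp:

```
frontend fe_smtp
    bind :25
    mode tcp
    default_backend be_smtp

backend be_smtp
    mode tcp
    server mx1 10.0.0.25:25
```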
You can't add or remove hosts to a pool without doing a reload - you can
change the weights, mark them up and down, but not add or remove.
On Mon, May 11, 2015 at 1:00 PM, Nick Couchman nick.couch...@seakr.com
wrote:
I was wondering if it is possible or there's a recommended way to deal
with
1, 2015 at 1:22 AM, Willy Tarreau w...@1wt.eu wrote:
Hi,
On Thu, Apr 30, 2015 at 01:47:30PM -0400, CJ Ess wrote:
diff --git a/src/auth.c b/src/auth.c
index 42c0808..6973136 100644
--- a/src/auth.c
+++ b/src/auth.c
@@ -218,11 +218,12 @@ check_user(struct userlist *ul, const char *user
Perhaps this is more what you are looking for?
https://github.com/smarterclayton/haproxy-map-route-example
On Thu, Apr 30, 2015 at 11:43 AM, Veiko Kukk vk...@xvidservices.com wrote:
I'd like to manually add that constant string into configuration, not to
get it from the traffic. It would help
diff --git a/src/auth.c b/src/auth.c
index 42c0808..6973136 100644
--- a/src/auth.c
+++ b/src/auth.c
@@ -218,11 +218,12 @@ check_user(struct userlist *ul, const char *user,
const char *pass)
{
struct auth_users *u;
+ struct auth_groups_list *agl;
const char *ep;
#ifdef
You can use stick tables to create sticky sessions based on origin IP,
cookies, and things like that; you'll need HAProxy 1.5 or better to do it.
If you google for haproxy sticky sessions you'll find a number of
examples. Here are a couple stand-outs:
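And for reference, a minimal cookie-insert sketch of the same idea (names made up):

```
backend be_app
    balance roundrobin
    # HAProxy inserts a SRV cookie naming the chosen server; later
    # requests carrying that cookie go back to the same server
    cookie SRV insert indirect nocache
    server app1 10.0.0.1:80 cookie app1
    server app2 10.0.0.2:80 cookie app2
```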
When you run HAProxy in full debugging mode there is a debug_hdrs() call
that displays all of the http headers read from the frontend, I'd also like
to be able to see the headers being sent to the backend.
So far I haven't pinpointed where the headers are being sent from so that I
can add another
;)
Baptiste
On Fri, Apr 24, 2015 at 5:58 PM, CJ Ess zxcvbn4...@gmail.com wrote:
It's possible that I'm doing this wrong, I don't see many examples of
working
with tcp streams, but this combination seems to SEGV haproxy 1.6
consistently.
The idea is to capture the first 32 bytes of a TCP
It's possible that I'm doing this wrong, I don't see many examples of
working with tcp streams, but this combination seems to SEGV haproxy 1.6
consistently.
The idea is to capture the first 32 bytes of a TCP stream and use it to
make a sticky session. What I've done is this:
frontend fe_capture
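Roughly this shape of config (a reconstruction; the original was cut off here):

```
frontend fe_capture
    mode tcp
    bind :4000
    default_backend be_capture

backend be_capture
    mode tcp
    # wait for payload to arrive, then stick on its first 32 bytes
    tcp-request inspect-delay 5s
    tcp-request content accept if WAIT_END
    stick-table type binary len 32 size 30k expire 30m
    stick on payload(0,32)
    server s1 10.0.0.1:4000
```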
Is there a way to setup an ACL for the haproxy stats page? We do have
authentication set up for the URL, but we would feel better if we could
limit access to a white list of local networks. Is there a way to do that?
in a bit.
Neil
On 21 Apr 2015 20:09, CJ Ess zxcvbn4...@gmail.com wrote:
Is there a way to setup an ACL for the haproxy stats page? We do have
authentication set up for the URL, but we would feel better if we could
limit access to a white list of local networks. Is there a way to do that?
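Something like this is the kind of whitelist I mean (example networks):

```
listen stats
    bind :8404
    stats enable
    stats uri /stats
    acl local_nets src 127.0.0.0/8 10.0.0.0/8 192.168.0.0/16
    http-request deny if !local_nets
```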
Do you have an example of what that looks like? Am I literally adding
127.0.0.1 as a peer?
On Fri, Apr 17, 2015 at 12:26 AM, Dennis Jacobfeuerborn
denni...@conversis.de wrote:
On 17.04.2015 02:12, Igor Cicimov wrote:
Hi all,
Just a quick one, are the stick tables and counters persisted
What is the best way to deal with long ACLs with HAProxy. For instance
Amazon EC2 has around 225 address blocks. So if I wanted to direct requests
originating from EC2 to a particular backend, that's a lot of CIDRs to
manage and compare against. Any suggestions how best to approach a
situation like
I think the gold standard for graceful restarts is nginx - it will start a
new instance (could be a new binary), send the accept fd's to the new
instance, then the original instance will stop accepting new requests and
allow the existing connections to drain off. The whole process is
controlled by
This is my first time submitting a modification to haproxy, so I would
appreciate feedback.
We've been experimenting with using the stick tables feature in Haproxy to
do rate limiting by IP at the edge. We know from past experience that we
will need to maintain a whitelist because schools and
I'm under the impression that Haproxy doesn't speak SPDY natively, so the
best it can do is pass it through to a backend that does. If you use nginx
to terminate ssl and spdy, then you can use all the features of haproxy.
On Tue, Jan 27, 2015 at 1:21 PM, Erwin Schliske erwin.schli...@sevenval.com