Thanks!

This is exactly what I was looking for. We're already planning on upgrading
to 1.4 soon, so we'll try out the tuning parameters then.  Thanks so much
for your help.

Tim

On Sat, Sep 11, 2010 at 9:24 AM, Cyril Bonté <[email protected]> wrote:

> Hi,
>
> On Friday, September 10, 2010 at 19:07:39, Timothy Garnett wrote:
> > Hi all,
> >
> > We use an nginx -> haproxy (1.3.20 currently) -> mongrels setup to serve
> > our website, and recently we noticed an issue where haproxy returns a
> > 400 error for requests with very long headers (for us, typically a long
> > URL, a long referrer, or a combination of both).  I checked the change
> > logs and didn't see anything that might address this in newer versions.
> > While I haven't narrowed down the exact limit yet, it's around 6500
> > bytes.  This occurs only when the config file has mode http; in mode tcp
> > the request goes through fine.  So perhaps it's related to some buffer
> > limit in the HTTP parsing?  If so, is the limit documented somewhere,
> > and is there a setting to increase it?  While long URLs are generally
> > not a good idea, we don't always have full control over the referrer,
> > and for a couple of reasons we'd like to support very long URLs in a few
> > parts of our site.
>
> By default, HAProxy uses a 16K buffer and reserves half of it for rewrites
> (to add headers to the requests, etc.), which means HAProxy only accepts 8K
> for parsing request headers. If a request exceeds this 8K, it fails with a
> 400 HTTP status.
>
> With HAProxy 1.3, to modify these values you'll have to recompile HAProxy.
> Have a look at BUFSIZE and MAXREWRITE in the Makefile and in the file
> include/common/defaults.h (this is where they are documented).
>
> Since HAProxy 1.4, you don't need to recompile HAProxy; instead you can
> use the tune.bufsize and tune.maxrewrite keywords.
>
> --
> Cyril Bonté
>
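For reference, on 1.4 the tuning Cyril describes goes in the global section
of the configuration file. A minimal sketch (the values below are
illustrative, not recommendations; check them against your workload):

```
global
    # Total per-connection buffer size in bytes (default 16384).
    tune.bufsize 32768
    # Bytes reserved for header rewrites. Roughly bufsize minus
    # maxrewrite is what remains for parsing the incoming request
    # (by default, maxrewrite is half of bufsize).
    tune.maxrewrite 1024
```

With these example values, roughly 31K would be available for request
headers instead of the default 8K. On 1.3 the equivalent change means
rebuilding, e.g. passing extra defines to make (hedged: the exact variable
name may differ between versions; see the Makefile itself), along the lines
of `make TARGET=linux26 DEFINE="-DBUFSIZE=32768"`.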
