Hi Timothy,

On Fri, Sep 10, 2010 at 03:43:50PM -0400, Timothy Garnett wrote:
> Hi Andrew,
>
> Thanks for the response. My nginx is configured with large buffers, and I've
> verified that the problem still occurs when targeting haproxy directly
> (skipping nginx) and does not occur when nginx targets the mongrels directly,
> skipping haproxy. So it does appear to be haproxy that's returning the 400
> status.
>
> Some test cases:
> Very long URL (~6500 characters), no Referrer => 400 returned
> Long URL (~4000 characters), short Referrer => no problem
> Long URL (~4000 characters), long Referrer (4000 characters) => 400 returned
> Short URL, long Referrer (4000 characters) => no problem
>
> So the issue seems to be related to some combination of the size of the
> request and headers.
Indeed, the request buffer holds the whole request (method, URI, version, and
headers). As Cyril explained, the default size in 1.3 is 8 kB, which matches
your observations. You can recompile haproxy to increase that limit, but you
must be very careful.

At 8 kB, your site is already very slow, because visitors have to push that
amount of data for each request. Long URLs are the worst offender (after
cookies), because the Referrer is sent with every image request. So let's say
you have a page with 30 images and a large URL (~8k chars, headers included):
your users will have to upload 240 kB before the page loads. On a basic
512/128 kbps ADSL line, the upload alone will take about 15 seconds!

Also, you should consider that haproxy switched from 4 to 8 kB by default a
long time ago because one site had an issue with too-large cookies. Nobody
complains about the 8 kB default anymore, and in fact even Apache's limit is
8 kB per line. That indicates you're getting very close to the dark area that
nobody explores and where anything can happen... It's very likely that some
proxies between your visitors and your site will experience trouble too.

Regards,
Willy
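PS: a quick back-of-the-envelope sketch of the arithmetic above, for anyone
who wants to check it. The 8 kB per request, 30 images, and 128 kbps uplink
figures come from the mail; treating "kbps" as 1000 bits per second is my
assumption:

```python
# Rough upload estimate for a page whose every request carries ~8 kB of
# headers (long URL + long Referrer), as described in the mail above.

request_headers_bytes = 8 * 1024    # ~8 kB of request headers per request
image_requests = 30                 # images on the page, each resends headers
uplink_bits_per_sec = 128 * 1000    # 128 kbps ADSL upload channel

total_upload_bytes = request_headers_bytes * image_requests
upload_seconds = (total_upload_bytes * 8) / uplink_bits_per_sec

print(f"total upload: {total_upload_bytes // 1024} kB")  # -> 240 kB
print(f"upload time:  {upload_seconds:.1f} s")           # -> ~15.4 s
```

So the "240 kB / about 15 seconds" figures in the mail line up: 30 requests
at 8 kB each is 240 kB, and pushing that through a 128 kbps uplink takes a
bit over 15 seconds before the images can even start downloading.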

