Hi Seth,

On Sat, Jan 29, 2011 at 11:52:16PM +1100, Seth Yates wrote:
> Hi,
>
> We're getting quite a few BADREQ entries in the logs, and we're thinking its
> because some clients are accumulating a lot of cookies. Is HAPROXY
> returning 400 Bad Request when a request or cookie or header size is
> exceeded?
Yes, a request must fit in a buffer in order to be processed. The default
buffer size is 16kB, with one half reserved for rewrite purposes, which
leaves you with 8kB max per request. You can change that in your global
section using tune.bufsize and tune.maxrewrite (the latter being the
reserved size). You can for instance just reduce maxrewrite to 1024 so
that you'll have a limit of 15kB per request.

> If so, is there a way to turn off these checks?

It's not a check per se, it's a limitation by design. Parsing and
processing HTTP requires some memory, and all products have limits.

> We're using the
> latest haproxy downloaded from the 1wt website. Here's a tcpdump of one of
> the sessions:
(...)
> seq 1:1381, ack 1, win 4140, length 1380
> GET /serve?p=3&n=4d4406...
(...)
> Referer: http://xxxx.xxxxxx...
> Cookie: cv-%21%21%21%21%...
> seq 1381:2761, ack 1, win 4140, length 1380
> seq 2761:4141, ack 1, win 4140, length 1380
> seq 4141:5521, ack 1, win 4140, length 1380
> seq 5521:6901, ack 1, win 4140, length 1380
> seq 6901:8281, ack 1, win 4140, length 1380
> ack 8281, win 183, length 0

See above: your client is sending more than 8kB of data, of which approx
7kB are for the cookie alone. The request line and the referrer are very
large too.

The problem your site's visitors will face with this is very slow access
to your site. All requests will carry a large path and a large referrer,
and above all an extremely large cookie. If your site has 20 images to
display, the 8kB above will have to be posted 20 times, which means
160 kB of requests to upload from a slow client. With a 512/128 ADSL
line, this means that the line is saturated for 10 seconds before the
page can load. If your site has to be accessed from smartphones, it will
be even worse, because the upload speed will be even smaller, and the
amount of uploaded data will force the client to wait for ACKs to be
sent every two packets or so, resulting in the RTT being added many
times to the download time.
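For reference, the global section described above would look something
like this (the values are just the illustrative ones from the explanation,
adjust to your needs) :

    global
        tune.bufsize    16384   # total per-request buffer (default)
        tune.maxrewrite 1024    # reserved for rewrites; leaves ~15kB
                                # of the buffer for the request itself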
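To make the arithmetic behind the 10-second figure explicit, here's a
quick back-of-the-envelope sketch (assuming the numbers above: 20 objects
per page, ~8kB of headers per request, 128 kbit/s of upstream) :

```python
# Rough upload-time estimate for the scenario described above.
requests = 20             # objects on the page, each needing a request
request_size_kb = 8       # kB of headers per request, cookie-dominated
uplink_kbit = 128         # upstream half of a 512/128 ADSL line

total_kb = requests * request_size_kb   # 160 kB to upload
uplink_kb_per_s = uplink_kbit / 8.0     # 16 kB/s
seconds = total_kb / uplink_kb_per_s    # 10 seconds of saturated uplink

print(seconds)  # -> 10.0
```

And that's before counting the RTT added by ACK clocking on slow links.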
You should really find a way to reduce these cookies and URLs. A 1kB
cookie should already be considered the worst tolerable case, and URLs
should be a lot shorter to avoid them appearing multiple times in
referrers (or use POST instead of GET).

Also, while haproxy (and many other products) has a per-request size
limit, others such as Apache have a per-line size limit. Apache limits
each header line to 8kB. That means your cookie is about to be rejected
by Apache too, and as such by many other products, because Apache is
often considered a reference for setting limits: what does not pass
through it has no reason to pass through anything else, given that it's
everywhere.

Hoping this helps,
Willy

