Hi Kristjan,

On Mon, Jul 29, 2013 at 02:29:01PM +0300, Kristjan Koppel wrote:
> Hi,
>
> We are seeing strange behavior from HAProxy (v1.4.24) when requesting a
> very large HTML page (~257MB, all in a single chunk) through a simple
> HTTP proxy. The client gets an empty response with a 200 status code and
> the respective log line looks like this:
>
> Jul 29 08:42:49 webserver haproxy[2359]: 127.0.0.1:39894
> [29/Jul/2013:08:42:39.640] ngg-bo ngg-bo/server2-bo2 0/0/0/9617/9617 200
> 8192 - - PD-- 329/1/1/1/0 0/0 "GET /large.html HTTP/1.1"
>
> Here's the last time it worked:
>
> Jul 28 09:59:27 webserver haproxy[2359]: 127.0.0.1:54744
> [28/Jul/2013:09:59:00.644] ngg-bo ngg-bo/server1-bo1 0/0/0/8521/27138 200
> 267276880 - - ---- 625/0/0/0/0 0/0 "GET /large.html HTTP/1.1"
>
> When I try the same with lynx, it works fine (HTTP 1.0 and no chunked
> transfer encoding):
>
> Jul 29 09:29:52 webserver haproxy[2359]: 127.0.0.1:53273
> [29/Jul/2013:09:29:24.313] ngg-bo ngg-bo/server1-bo1 0/0/0/5094/27787 200
> 270136010 - - ---- 427/0/0/0/0 0/0 "GET /large.html HTTP/1.0"
>
> I found the following by sending "show errors" to the stats socket:
>
> [29/Jul/2013:08:42:49.256] backend ngg-bo (#44) : invalid response
>   src 127.0.0.1, session #131745047, frontend ngg-bo (#44), server server2-bo2 (#6)
>   HTTP internal state 29, buffer flags 0x00100012, event #617
>   response length 7761 bytes, error at position 8:
>
>   00000  1019f1e4\r\n
>   00010  <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"\n
>   00073    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">\n
>   00139  <html xmlns="http://www.w3.org/1999/xhtml">\n
>   ...
>
> The HTML page is dynamically generated and has been growing over time.
> Looks like it broke after going past the 256MB mark, but I couldn't find
> any such limit documented anywhere. We solved the problem by reducing
> the HTML size down to 60MB, but I'd feel better if I knew why this
> happened and if there is an actual limit to the response size in
> HAProxy. Any insight into this would be much appreciated.
>
> Relevant parts from the HAProxy configuration:
>
> defaults
>     log global
>     mode http
>     option httplog
>     retries 3
>     option redispatch
>     maxconn 10000
>     balance leastconn
>     timeout connect 4s
>     timeout client 31s
>     timeout server 31s
>
> listen ngg-bo 127.0.0.1:1530
>     server server1-bo1 server1:1531 check
>     server server1-bo2 server1:1532 check
>     server server2-bo1 server2:1531 check
>     server server2-bo2 server2:1532 check
>     timeout client 301s
>     timeout server 301s
>     appsession NGG_BACKOFFICE len 32 timeout 90m request-learn
>
> I'll be glad to provide any additional information if I can.
This limitation is in the code and has been there since chunked encoding was introduced in 1.4-dev5; it is directly related to the way the chunk size is parsed. The shortcoming was addressed in 1.5-dev with the attached patch. That patch was not backported at the time because I wanted an observation period before doing so, and of course I simply forgot to merge it afterwards. Please use it, confirm that it's OK for you, and I'll backport it.

Regards,
Willy
commit 431946e9617572d2813bd5a8f5a51ce36f841ea3
Author: Willy Tarreau <[email protected]>
Date:   Fri Feb 24 19:20:12 2012 +0100

    MEDIUM: increase chunk-size limit to 2GB-1

    Since commit 115acb97, chunk size was limited to 256MB. There is no
    reason for such a limit and the comment on the code suggests a missing
    zero. However, increasing the limit past 2 GB causes trouble due to
    some 32-bit subtracts in various computations becoming negative (eg:
    buffer_max_len). So let's limit the chunk size to 2 GB - 1 max.

diff --git a/src/proto_http.c b/src/proto_http.c
index cfbebd9..5efc7ec 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -2150,7 +2150,7 @@ int http_parse_chunk_size(struct buffer *buf, struct http_msg *msg)
 			break;
 		if (++ptr >= end)
 			ptr = buf->data;
-		if (chunk & 0xF000000) /* overflow will occur */
+		if (chunk & 0xF8000000) /* integer overflow will occur if result >= 2GB */
 			goto error;
 		chunk = (chunk << 4) + c;
 	}
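For anyone curious about why the new mask yields exactly 2GB-1, below is a
minimal standalone sketch of the same accumulation loop. This is not
HAProxy's actual parser: hex2i() and parse_chunk_size() here are simplified
stand-ins written purely for illustration.

/* Standalone sketch of the hex chunk-size parse loop, for illustration
 * only (not HAProxy code).
 */
#include <stdio.h>

/* Value of one hex digit, or -1 if the character is not a hex digit. */
static int hex2i(int c)
{
	if (c >= '0' && c <= '9')
		return c - '0';
	if (c >= 'A' && c <= 'F')
		return c - 'A' + 10;
	if (c >= 'a' && c <= 'f')
		return c - 'a' + 10;
	return -1;
}

/* Parses a hex chunk size into *res; returns 0 on success, -1 on
 * overflow. The guard runs before each "chunk = (chunk << 4) + c":
 * if any of bits 27-31 are already set (mask 0xF8000000), the shift
 * would reach bit 31 and the signed int would go negative, so the
 * largest accepted value is 0x7FFFFFFF (2GB-1). The old 0xF000000
 * mask instead fired on bits 24-27, capping accepted values at
 * 0x0FFFFFFF, i.e. 256MB-1.
 */
static int parse_chunk_size(const char *s, int *res)
{
	int chunk = 0;
	int c;

	while ((c = hex2i(*s++)) >= 0) {
		if (chunk & 0xF8000000) /* overflow if result >= 2GB */
			return -1;
		chunk = (chunk << 4) + c;
	}
	*res = chunk;
	return 0;
}

int main(void)
{
	int size;

	/* The chunk header from the error dump above (~257MB): rejected
	 * by the old mask, accepted by the new one. */
	if (parse_chunk_size("1019f1e4", &size) == 0)
		printf("chunk size: %d bytes\n", size);

	/* 2GB: still rejected, as it cannot fit in a signed 32-bit int. */
	if (parse_chunk_size("80000000", &size) < 0)
		printf("80000000 rejected (would overflow)\n");
	return 0;
}

Running this shows that "1019f1e4" (270,135,780 bytes) now parses fine,
which matches the failure mode in the report: the page started breaking
right after it grew past 256MB.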

