Re: Headers Size
Thanks for all the suggestions, I'm going to try them. Do you think that this value of BUFSIZE may cause some security or performance issues in haproxy? Regards, Héctor Paz

On Fri, Jan 8, 2010 at 1:27 AM, Willy Tarreau w...@1wt.eu wrote: If you don't have too much traffic or can try it by hand, start it in debug mode using -d on the command line, and redirect its output to a file. It will dump all the headers it receives, which can be a huge amount of output. If you're in production and cannot do that, your best friend is tcpdump, especially if you know that the problem only affects one of your customers and you can filter on their IP address. Simply don't forget to dump full packets (-s0).
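For reference, a capture along these lines should do what Willy describes (the address and port here are hypothetical placeholders, not from the thread):

    tcpdump -s0 -w headers.pcap host 192.0.2.10 and port 80

-s0 captures full packets instead of the default truncated snaplen, -w writes the raw capture to a file for later inspection, and the host/port filter limits the capture to the affected customer's traffic.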
Re: [ANNOUNCE] haproxy 1.4-dev6 : many fixes
I wanted to report, after using 1.4-dev6 for several sites for a couple of days, that the results seem very good. One site was peaking at over 150 Mbps and over 65 million hits over the past couple of days; during that time memory use stayed steady between 1.5 and 2.5 GB and went down when load went down. On 1/7/10 11:05 PM, Willy Tarreau wrote: Hi all, well, some of you have encountered issues with 1.4-dev5 with sessions left in the CLOSE_WAIT state or with memory leaks.
Re: Headers Size
On Sat, Jan 09, 2010 at 06:24:46PM +, Hector Danniel Paz Trillo wrote: Thanks for all the suggestions, I'm going to try them. Do you think that this value of BUFSIZE may cause some security or performance issues in haproxy? No, in fact it may even improve performance to have larger buffers. However, it can become a concern when you're running with tens of thousands of concurrent connections, because it will then use more memory. Regards, Willy
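To put a rough number on the memory concern: if I remember the internals correctly, each session carries a request and a response buffer of BUFSIZE bytes, so 20,000 concurrent connections at a 32 kB BUFSIZE would need on the order of 20,000 x 2 x 32 kB ≈ 1.3 GB for buffers alone (illustrative figures, not from the thread), and doubling BUFSIZE roughly doubles that total. For a compile-time change, something along these lines should work, assuming the Makefile's DEFINE hook for extra preprocessor defines:

    make TARGET=linux26 DEFINE="-DBUFSIZE=32768"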
Re: [ANNOUNCE] haproxy 1.4-dev6 : many fixes
On Sat, Jan 09, 2010 at 11:03:16AM -0800, Hank A. Paulson wrote: I wanted to report, after using 1.4-dev6 for several sites for a couple of days, that the results seem very good. One site was peaking at over 150 Mbps and over 65 million hits over the past couple of days; during that time memory use stayed steady between 1.5 and 2.5 GB and went down when load went down. Excellent, thanks a lot for your report Hank! BTW, if you're running with many concurrent connections causing that amount of memory to be consumed, you may want to try building with dlmalloc (check the Makefile for that). It makes extensive use of mmap() and is able to release much more unused memory than the libc's malloc. This is particularly appreciated during soft restarts, when you need to make two processes coexist. Regards, Willy
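For reference, the dlmalloc build would look something along these lines, assuming the Makefile's dlmalloc hooks (variable names from memory, so double-check them against your Makefile):

    make TARGET=linux26 USE_DLMALLOC=1 DLMALLOC_SRC=src/dlmalloc.c

where DLMALLOC_SRC points at a copy of Doug Lea's malloc source, downloaded separately if it is not already present in the tree.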
Re: [ANNOUNCE] haproxy 1.4-dev6 : many fixes
Hi Willy,

On Saturday, January 9, 2010, at 14:01:36, Willy Tarreau wrote: (...) One thing I suspect would be that we simply fail to free lots of allocated memory and that the last pool_alloc() returns NULL due to lack of memory, hence the segfault. But I also suspect that we *may* end up corrupting some lists or pools if we reuse some data across two consecutive requests. Anyway I've committed the fix.

This doesn't directly concern this issue, but I've tried to follow all the pool_alloc2/pool_free2 calls in the code to track memory leaks. I've found one which only happens when memory is already exhausted while allocating a new appsession cookie:

--- haproxy-1.4-dev6/src/proto_http.c   2010-01-10 00:14:47.0 +0100
+++ haproxy-1.4-dev6-freememory/src/proto_http.c    2010-01-10 00:15:16.0 +0100
@@ -5954,6 +5954,7 @@
     if ((asession->sessid = pool_alloc2(apools.sessid)) == NULL) {
         Alert("Not enough Memory process_srv():asession->sessid:malloc().\n");
         send_log(t->be, LOG_ALERT, "Not enough Memory process_srv():asession->sessid:malloc().\n");
+        t->be->htbl_proxy.destroy(asession);
         return;
     }
     memcpy(asession->sessid, t->sessid, t->be->appsession_len);
@@ -5963,6 +5964,7 @@
     if ((asession->serverid = pool_alloc2(apools.serverid)) == NULL) {
         Alert("Not enough Memory process_srv():asession->sessid:malloc().\n");
         send_log(t->be, LOG_ALERT, "Not enough Memory process_srv():asession->sessid:malloc().\n");
+        t->be->htbl_proxy.destroy(asession);
         return;
     }
     asession->serverid[0] = '\0';

--
Cyril Bonté
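For context on the class of leak the patch fixes: when a later allocation fails partway through building a compound object, everything allocated for it so far must be torn down before returning, which is what the added htbl_proxy.destroy() calls do here. A minimal self-contained C sketch of the same pattern, with hypothetical names that are not haproxy's API:

    #include <stdlib.h>

    struct session_entry {
        char *sessid;
        char *serverid;
    };

    /* Hypothetical illustration: if any later allocation fails, everything
     * allocated so far for the entry must be released before returning,
     * otherwise the partially-built entry is leaked. */
    struct session_entry *entry_create(size_t sessid_len, size_t serverid_len)
    {
        struct session_entry *e = calloc(1, sizeof(*e));
        if (e == NULL)
            return NULL;

        e->sessid = malloc(sessid_len);
        if (e->sessid == NULL) {
            free(e);               /* tear down the partial object */
            return NULL;
        }

        e->serverid = malloc(serverid_len);
        if (e->serverid == NULL) {
            free(e->sessid);       /* release everything allocated so far */
            free(e);
            return NULL;
        }
        e->serverid[0] = '\0';
        return e;
    }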