Re: Possible memory leak.

2014-07-10 Thread Amos Jeffries
On 11/07/2014 6:10 a.m., Eliezer Croitoru wrote:
> OK, so I started this reverse proxy for a bandwidth-testing site, and it
> seems odd that it is using more than 400MB when the only differences in
> the config are maximum_object_size_in_memory set to 150MB and StoreID.
> 
> I have extracted mgr:mem and mgr:info at these urls:
> http://www1.ngtech.co.il/paste/1169/raw/
> http://www1.ngtech.co.il/paste/1168/raw/
> 
> A top snapshot:
> http://www1.ngtech.co.il/paste/1170/raw/
> 
> The default setting is 256MB for the RAM cache, and this instance is
> RAM-only.
> squid.conf at:
> http://www1.ngtech.co.il/paste/1171/raw/
> 
> I started the machine 29 days ago; Squid has been up for less than that.
> 
> Any direction is welcome, but tests cannot be run on this machine
> directly for now.
> 
> Eliezer

This may be what Martin Sperl is reporting in the squid-users thread
"squid: Memory utilization higher than expected since moving from 3.3 to
3.4 and Vary: working"

What I'm trying to get from him there is a series of mgr:mem reports
over time, to see if any particular object type is growing unusually,
plus mgr:filedescriptors in case it's a side effect of the hung
connections Christos identified recently.
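
Something like the loop below should collect those (an untested sketch;
adjust the host, port, and interval, and add the cache manager password
if one is configured):

    # Grab timestamped mgr:mem and mgr:filedescriptors snapshots every
    # 10 minutes, so growth between reports is easy to diff later.
    while true; do
        ts=$(date +%Y%m%d-%H%M%S)
        squidclient -h 127.0.0.1 -p 3128 mgr:mem > mem-$ts.txt
        squidclient -h 127.0.0.1 -p 3128 mgr:filedescriptors > fd-$ts.txt
        sleep 600
    done

Comparing the per-pool allocation columns across a few of those mem-*
files should show which pool, if any, keeps growing.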

If we are lucky enough that Squid was built with valgrind support,
there should be a valgrind leak trace available in one of the info and
mem reports. This will only catch real leaks though, not objects held
active by ref-counting.
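
For reference, enabling that is a build-time option, roughly as below (a
sketch from memory; check ./configure --help for the exact flag on your
version):

    # Rebuild with valgrind instrumentation compiled in, using the
    # usual configure options plus the valgrind one, then run Squid in
    # no-daemon mode under valgrind to get a full leak trace on exit.
    ./configure --with-valgrind-debug
    make && make install
    valgrind --leak-check=full squid -N -f /etc/squid/squid.conf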

Amos



Re: Possible memory leak.

2014-07-10 Thread Eliezer Croitoru
I think I can build a Squid RPM with valgrind support for this purpose
and others.


Well, now we have a bug report for the issue, and it's good that we can
test it.
Note that this server has a 99% hit ratio, since it serves fewer than 30
files, about half of which should be downloaded as a TCP_MEM_HIT.
StoreID strips any trace of query parameters from the URL, redirecting
Squid to fetch the same object for all the requests that abuse the URL
query part to avoid caching; a sketch of such a helper is below.
(Why would anyone testing their bandwidth against a specific server want
to fetch it from spinning disks instead of RAM?)
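
The helper boils down to something like this (a minimal sketch rather
than the exact helper used here; the script path below is made up, and
it assumes the helper runs without concurrency, i.e. no channel-ID
field on each input line):

    #!/bin/sh
    # Minimal StoreID helper: reply with a store-id that has the query
    # string stripped, so /file.bin?x=1 and /file.bin?x=2 both map to
    # the same cache object.
    while read -r url rest; do
        echo "OK store-id=${url%%\?*}"
    done

wired into squid.conf along the lines of:

    store_id_program /usr/local/bin/strip-query.sh
    store_id_children 5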


Eliezer

On 07/11/2014 06:32 AM, Amos Jeffries wrote:

This may be what Martin Sperl is reporting in the squid-users thread
"squid: Memory utilization higher than expected since moving from 3.3 to
3.4 and Vary: working"

What I'm trying to get from him there is a series of mgr:mem reports
over time, to see if any particular object type is growing unusually,
plus mgr:filedescriptors in case it's a side effect of the hung
connections Christos identified recently.

If we are lucky enough that Squid was built with valgrind support,
there should be a valgrind leak trace available in one of the info and
mem reports. This will only catch real leaks though, not objects held
active by ref-counting.

Amos





Re: Possible memory leak.

2014-07-11 Thread Eliezer Croitoru

Well, there is definitely a memory leak.
I was almost sure I had filed a bug, and then found out that I had filed
it under the wrong bug report.

For now I have the mgr reports at:
http://www1.ngtech.co.il/squid/3.4.5-leak/

Eliezer


Re: Possible memory leak.

2014-07-20 Thread Eliezer Croitoru

I want to verify the issue I have seen:
Right now the server is at about 286 MB of resident memory.
The issue is that the server memory usage was more than 800MB, with
these things in mind:

1 - The whole web server is 600 MB
2 - 150MB is the maximum object size in memory (there is no disk cache)
3 - the memory cache of the server (cache_mem) is the default of 256MB.

I cannot think of a setting that would lead this server to consume more
than 400MB, even if a single 10-byte file is fetched with a different
query parameter every time.


If the sum of all the requests to the proxy is 30k, I do not see how it
would still lead to 900MB of RAM used by Squid.


If I am mistaken (which could very easily be the case), then I want to
understand what to look for in the mgr interface to see whether the
memory usage is reasonable or not.

(I know it's a lot to ask but still)

Thanks,
Eliezer

On 07/10/2014 09:10 PM, Eliezer Croitoru wrote:

OK, so I started this reverse proxy for a bandwidth-testing site, and it
seems odd that it is using more than 400MB when the only differences in
the config are maximum_object_size_in_memory set to 150MB and StoreID.



Eliezer




Re: Possible memory leak.

2014-07-20 Thread Alex Rousskov
On 07/20/2014 09:27 AM, Eliezer Croitoru wrote:
> I want to verify the issue I have seen:
> Right now the server is at about 286 MB of resident memory.
> The issue is that the server memory usage was more than 800MB, with
> these things in mind:
> 1 - The whole web server is 600 MB
> 2 - 150MB is the maximum object size in memory (there is no disk cache)
> 3 - the memory cache of the server (cache_mem) is the default of 256MB.
> 
> I cannot think of a setting that would lead this server to consume more
> than 400MB, even if a single 10-byte file is fetched with a different
> query parameter every time.
> 
> If the sum of all the requests to the proxy is 30k, I do not see how it
> would still lead to 900MB of RAM used by Squid.
> 
> If I am mistaken (which could very easily be the case), then I want to
> understand what to look for in the mgr interface to see whether the
> memory usage is reasonable or not.

I would start with the total amount of memory accounted for by Squid
(a.k.a. pooled memory). IIRC, that is reported in mgr:info and mgr:mem.
There are several possibilities:

A) The Squid-reported pooled memory amount matches your expectations,
but Squid consumes much more RAM than what is reported. Thus, the extra
memory is not pooled, and a lot more work is needed to identify
unreported memory consumers.

B) The Squid-reported pooled memory matches the total Squid memory
consumption. You are lucky! Look for major memory consumers in mgr:mem
output and match that against your model of where the memory should go.

C) The Squid-reported pooled memory amount does not match your
expectations, and Squid consumes much more RAM than what is reported.
First, study the reported memory to adjust your expectations or find
some memory consumers that should not be there. Then go to (A) or (B).
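
A quick way to put the two numbers side by side (a sketch; the exact
report labels can differ a little between Squid versions):

    # Squid's own accounting, from the cache manager:
    squidclient -h 127.0.0.1 mgr:info | grep -i accounted
    # The OS view of the same process (total and resident size, in KB):
    ps -C squid -o pid=,vsz=,rss=

If the accounted total is far below the RSS that ps reports, you are in
case (A) or (C).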


HTH,

Alex.



Re: Possible memory leak.

2014-07-20 Thread Marcus Kool

Eliezer,

It is important to know which malloc implementation is used, which
means knowing the OS/distro and the version of glibc/malloc.

malloc on 64-bit CentOS 6.x uses memory-mapped memory for allocations
of 128 KB or larger, and it uses multiple (I can't find how many) 64MB
arena segments, and many more when threads are used.

I also suggest collecting the total memory size _and_ the resident
memory size. The resident memory size is usually significantly smaller
than the total memory size, which can be explained by the 64MB segments
being only partially used.

If you use CentOS, I recommend setting
   export MALLOC_ARENA_MAX=1   # should work well
and/or
   export MALLOC_MMAP_THRESHOLD_=4100100100   # glibc spelling; no experience whether this works
and running the test again.
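
To capture both sizes over time, something along these lines would do
(a sketch; if several squid processes show up, watch the worker's row):

    # Log total (VSZ) and resident (RSS) sizes in KB for every squid
    # process once a minute, so growth can be compared across the test.
    while true; do
        date >> squid-mem.log
        ps -C squid -o pid=,vsz=,rss= >> squid-mem.log
        sleep 60
    done

pmap -x <pid> also lists the individual mappings if you want to see
where the virtual size actually lives; the large anonymous blocks are
the 64MB arena segments mentioned above.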

Marcus


On 07/20/2014 12:27 PM, Eliezer Croitoru wrote:

I want to verify the issue I have seen:
Right now the server is at about 286 MB of resident memory.
The issue is that the server memory usage was more than 800MB, with
these things in mind:
1 - The whole web server is 600 MB
2 - 150MB is the maximum object size in memory (there is no disk cache)
3 - the memory cache of the server (cache_mem) is the default of 256MB.

I cannot think of a setting that would lead this server to consume more
than 400MB, even if a single 10-byte file is fetched with a different
query parameter every time.

If the sum of all the requests to the proxy is 30k, I do not see how it
would still lead to 900MB of RAM used by Squid.

If I am mistaken (which could very easily be the case), then I want to
understand what to look for in the mgr interface to see whether the
memory usage is reasonable or not.
(I know it's a lot to ask but still)

Thanks,
Eliezer

On 07/10/2014 09:10 PM, Eliezer Croitoru wrote:

OK, so I started this reverse proxy for a bandwidth-testing site, and it
seems odd that it is using more than 400MB when the only differences in
the config are maximum_object_size_in_memory set to 150MB and StoreID.



Eliezer