On May 24, 2017, at 2:52 PM, Dunkin, Nick <[email protected]> wrote:

Hi again,

This is great stuff, but it leads me to believe that I've totally overestimated 
my ram_cache.size setting, and in fact totally misunderstood the parameter.

Let me see if I understand what you’ve explained:

If I expect 5 of my ioBufAllocators to be in use during normal activity, then 
potentially I could see memory allocated to the level of (5 x ram_cache.size)?  
Because each ioBufAllocator is bounded by ram_cache.size?
No, not really. I guess my example, which was intended as a worst-case example, 
may have confused things :)
Let's differentiate between:
"Allocated" chunks: IO buffer chunks that have been allocated by the 
ioBufAllocator, but not all of them are actually being used in the RAM cache.
"In-use" chunks: chunks that are in use in the RAM cache; these are a subset of 
the "allocated" chunks.
"Free" chunks: the difference between the two above. When the ioBufAllocator 
needs a chunk from a particular size pool, it will first try to get one from the 
free list; only if none is available are new chunks allocated from memory.
Allocated chunks = In-use chunks + Free chunks

When a buffer chunk is "de-alloced" from the RAM cache, it is put back into the 
free chunk pool.
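
To illustrate, here is a rough sketch of how one of these per-size pools behaves 
(simplified C++, not the actual ATS implementation; the names are made up):

    // Rough sketch of one per-size chunk pool (illustration only, not ATS code).
    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    struct ChunkPool {
      std::size_t chunk_size = 0;          // e.g. 1 MB for one of the pools
      std::vector<void *> free_list;       // "free" chunks
      std::size_t allocated = 0;           // chunks ever obtained from the OS
      std::size_t in_use    = 0;           // chunks currently held by the RAM cache

      void *alloc() {
        void *chunk;
        if (!free_list.empty()) {          // reuse a free chunk if one is available
          chunk = free_list.back();
          free_list.pop_back();
        } else {                           // only then allocate new memory; "allocated" grows
          chunk = std::malloc(chunk_size);
          ++allocated;
        }
        ++in_use;
        return chunk;
      }

      void dealloc(void *chunk) {          // freed chunks go back on the free list, not to the OS
        --in_use;
        free_list.push_back(chunk);
      }
    };
    // Invariant: allocated == in_use + free_list.size()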

The RAM cache size parameter limits the total "in-use" chunks, i.e. the sum 
total size of the "in-use" chunks across all 15 pools. In general your traffic 
pattern should settle into a steady-state "plateau" so that the "allocated" 
chunks don't need to keep growing. But yes, the sum total size of allocated 
chunks can be >= the ram_cache.size parameter, so it is best to keep some 
headroom in RAM.
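
As a purely illustrative example, on a box with 128 GB of RAM you might size the 
RAM cache below physical memory in records.config rather than giving it 
everything (the exact number here is made up):

    CONFIG proxy.config.cache.ram_cache.size INT 103079215104
    # ~96 GB in bytes, leaving headroom for allocator free lists and everything else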

I remember there was a way to dump the mem pool information to traffic.out - 
maybe someone on the list can help.
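
If I recall correctly it was a records.config setting along these lines (the 
value is the dump interval in seconds) - please double-check the name, I may be 
misremembering it:

    CONFIG proxy.config.dump_mem_info_frequency INT 60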

Hope this doesn’t confuse things more :)

(Terminology I used above may not reflect what’s in the code)





In which case I need to reduce, or tune, my ram_cache.size by a factor of 5?

I have a large ram_cache.size (100 GB), assuming it was allocated as one large 
reserve of memory, so I assume this understanding is naive?

Thanks again for all your assistance,

Nick

From: "Kapil Sharma (kapsharm)" <[email protected]<mailto:[email protected]>>
Reply-To: 
"[email protected]<mailto:[email protected]>" 
<[email protected]<mailto:[email protected]>>
Date: Wednesday, May 24, 2017 at 11:29 AM
To: "[email protected]<mailto:[email protected]>" 
<[email protected]<mailto:[email protected]>>
Subject: Re: Understanding ioBufAllocator behaviour

On plateauing - not necessarily; we do see the memory consumption increasing 
continuously in our deployments as well. It depends on the pattern of segment 
sizes over time.

ATS uses power-of-2 allocators for its memory pools - there are 15 of them, 
ranging from 128 bytes to 2 MB if my memory serves me right - and these are per 
thread! ATS will choose the optimal allocator for each segment.
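
Roughly, the pool selection works like this (a simplified sketch based on my 
description above, not the real ATS code):

    #include <cstddef>

    // Simplified: pools of 128 B, 256 B, ..., 2 MB (15 pools, indices 0..14).
    std::size_t pick_pool_index(std::size_t needed) {
      std::size_t chunk = 128;                 // smallest pool
      std::size_t index = 0;
      while (chunk < needed && index < 14) {   // stop at the largest (2 MB) pool
        chunk <<= 1;                           // next power of two
        ++index;
      }
      return index;                            // smallest pool whose chunks fit the request
    }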

As Alan mentioned, once chunks are allocated, they are never freed.

Here is a totally artificial example just to make the point (please correct me 
if my understanding is flawed):
* If the traffic pattern is such that initially only the 2 MB allocator is used, 
ATS will keep allocating 2 MB chunks until the RAM cache limit (let's say it is 
64 GB) is reached.
* Now the traffic pattern changes (smaller fragment requests) and only the 1 MB 
allocator is used; ATS will now keep allocating 1 MB chunks, again capping at 
64 GB. But in the end ATS would have allocated 128 GB, well over the RAM cache 
size limit (see the back-of-the-envelope numbers below)…
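
Back-of-the-envelope for that example (illustration only):

    #include <cstdio>

    int main() {
      const long long GB = 1024LL * 1024 * 1024;
      const long long MB = 1024LL * 1024;
      const long long ram_cache_limit = 64 * GB;

      long long chunks_2m = ram_cache_limit / (2 * MB);  // phase 1: 32768 chunks of 2 MB
      long long chunks_1m = ram_cache_limit / (1 * MB);  // phase 2: 65536 chunks of 1 MB

      // The 2 MB chunks stay on their free list, so total allocated memory is the sum:
      long long allocated = chunks_2m * 2 * MB + chunks_1m * 1 * MB;
      std::printf("allocated = %lld GB, in-use capped at %lld GB\n",
                  allocated / GB, ram_cache_limit / GB);  // prints 128 GB vs. 64 GB
    }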


In the past there was a prototype of reclaimable buffer support added to ATS, 
but I believe it was removed in 7.0? Also there is recent discussion of adding 
jemalloc?



On May 24, 2017, at 11:01 AM, Alan Carroll <[email protected]> wrote:

One issue is that memory never moves between the iobuf sizes. Once a chunk of 
memory is used for a specific iobuf slot, it's there forever. But unless 
something is leaking, the total size should eventually plateau, certainly 
within less than a day if you have a basically constant load. There will be 
some growth due to blocks being kept in thread-local allocation pools, but 
again that should level off in less time than you've been running.


On Wednesday, May 24, 2017, 9:50:39 AM CDT, Dunkin, Nick <[email protected]> wrote:

Hi Alan,



This is 7.0.0



I only see this behavior on ioBufAllocator[0], [4] and [5].  The other 
ioBufAllocators’ usage looks as I would expect (i.e. allocated goes up then 
flat), so I was thinking it was more likely something to do with my 
configuration or use-case.



I’d also just like to understand, at a high level, how the ioBufAllocators are 
used.



Thanks,



Nick



From: Alan Carroll <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Wednesday, May 24, 2017 at 10:33 AM
To: "[email protected]<mailto:[email protected]>" 
<[email protected]<mailto:[email protected]>>
Subject: Re: Understanding ioBufAllocator behaviour



Honestly it sounds like a leak. Can you specify which version of Traffic Server 
this is?




On Wednesday, May 24, 2017, 8:22:46 AM CDT, Dunkin, Nick <[email protected]> wrote:

Hi



I have a load test that I’ve been running for a number of days now.  I’m using 
the memory dump logging in traffic.out and I’m trying to understand how Traffic 
Server allocates and reuses memory.  I’m still quite new to Traffic Server.



Nearly all of the memory traces look as I would expect, i.e. memory is 
allocated and reused over the lifetime of the test.  However my readings from 
ioBufAllocator[0] show a continual increase in allocated AND used.  I am 
attaching a graph.  (FYI – This graph covers approximately 3 days of continual 
load test.)



I would have expected to start seeing reuse in ioBufAllocator[0] by now, like I 
do in the other ioBufAllocators.  Can someone help me understand what I'm seeing?



Many thanks,



Nick Dunkin



Nick Dunkin

Principal Engineer

o:   678.258.4071

e:   [email protected]<mailto:[email protected]>

4375 River Green Pkwy # 100, Duluth, GA 30096, USA



