On 11/30/12 4:07 AM, Tomasz Kuzemko wrote:
Hello,
I have set up ATS with a disk cache on 2x 9TB raw disks and I see constant resident memory usage of 21GB of RAM right after starting. Initially I thought this was the automatic RAM cache using all that space, because according to the documentation it should use around 1MB per 1GB of disk storage, which comes to around 18GB of RAM. But also according to the docs I should be able to put a hard limit on the RAM cache with this option:

CONFIG proxy.config.cache.ram_cache.size INT -1

but no matter what value I set here, it doesn't affect the initial memory usage. Is this by design? If so, does that mean my maximum disk store size is limited by RAM size?

Seems about right. Every disk cache object consumes 10 bytes of RAM for its "directory entry" (necessary for fast lookups). The default config allocates one directory entry for every 8000 bytes of storage. So, quick math: 2 x 9TB / 8000 is roughly 2.25 billion entries, and at 10 bytes each that's about 22.5GB (~21GiB), which matches what you're seeing.
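
To make the arithmetic explicit, here is a minimal sketch of that calculation in Python (the 10-byte entry size and 8000-byte default average object size are the figures from above; everything else is just illustrative):

  # Rough estimate of cache-directory RAM for the ATS disk cache,
  # using the numbers from this thread (not an exact model of the code).
  DIR_ENTRY_BYTES = 10           # RAM per directory entry
  AVG_OBJECT_SIZE = 8000         # default bytes of storage per directory entry
  disk_bytes = 2 * 9 * 10**12    # 2x 9TB raw disks

  entries = disk_bytes // AVG_OBJECT_SIZE
  dir_ram_bytes = entries * DIR_ENTRY_BYTES
  print(f"{entries / 1e9:.2f}B entries, ~{dir_ram_bytes / 2**30:.1f} GiB of RAM")
  # -> 2.25B entries, ~21.0 GiB of RAM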

Your config can hold roughly 1B HTTP objects in the cache (it varies depending on the number of alternates etc.). If that is far more than you expect (typically because your average object size is much larger than 16KB), you can change this in records.config, as shown below. Doubling the average object size to 16000 would cut that memory consumption in half (so, roughly 10.5GB), and so on and so forth.
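
If I remember the knob correctly, it is proxy.config.cache.min_average_object_size; something along these lines in records.config (double the 8000-byte default), followed by a restart:

  CONFIG proxy.config.cache.min_average_object_size INT 16000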

Note that 10 bytes per object is a very small footprint. It's a key benefit of the ATS cache vs e.g. the Squid cache (which consumes 60 or so bytes per object, last I checked). Nginx doesn't have this at all, since it just creates a file on disk for each object (but imagine having 1 billion inodes on the disk, good times doing a full fsck on that ;).

I hope that helps.

-- Leif
