On 7/1/13 3:41 PM, Scott Harris wrote:

Hi,

We are currently looking at upgrading from some Blue Coat web caches to ATS, and are trying to find out whether ATS has a hard limit on the maximum number of stored objects? (Not single-object size — I know there is a setting to control that.)

I ask because our Blue Coats have a hard limit of 32 million objects.



It's a calculation based on the size of your disk cache and the setting proxy.config.cache.min_average_object_size from records.config. Basically, you get one "directory entry" in memory for every 8 KB of disk cache (the default average object size). Every cached object consumes at least one directory entry, but a large object can consume more.
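For reference, this is the records.config line you would tune (8000 bytes shown here as the default; a larger value means fewer directory entries for the same disk):

```
CONFIG proxy.config.cache.min_average_object_size INT 8000
```

Remember that changing it effectively reformats the cache, as noted below.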

Example: 2 TB disk, default settings -> ~275 million directory entries.

This also happens to consume 275MM * 10 bytes = ~2.5 GB of RAM just for the directory entries. The easiest way to think of our directory entries is as inodes: there's a fixed number of them, and you can tweak that number via the config above, but doing so is the same as doing an "mkfs" (i.e., it blows away the cache).
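The arithmetic above can be sketched as a quick back-of-the-envelope calculation. This assumes one directory entry per min_average_object_size bytes of disk cache and roughly 10 bytes of RAM per entry, as described in this thread; the function names are just for illustration, not part of ATS:

```python
# Rough sizing of the ATS cache directory, per the figures in this thread:
# one directory entry per min_average_object_size bytes of cache,
# ~10 bytes of RAM per entry.

def directory_entries(disk_cache_bytes, min_average_object_size=8000):
    """Approximate number of directory entries for a given disk cache size."""
    return disk_cache_bytes // min_average_object_size

def directory_ram_bytes(entries, bytes_per_entry=10):
    """Approximate RAM consumed just by the directory entries."""
    return entries * bytes_per_entry

if __name__ == "__main__":
    cache = 2 * 1024**4                      # 2 TB disk cache
    entries = directory_entries(cache)       # ~275 million entries
    ram = directory_ram_bytes(entries)       # ~2.5 GiB of RAM
    print(f"{entries:,} entries, ~{ram / 1024**3:.2f} GiB RAM")
```

Plugging in 2 TB with the 8000-byte default reproduces the ~275 million entries and ~2.5 GB of RAM quoted above.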

Note that more directory entries consume a bit more disk bandwidth as well as RAM, since the directory is periodically synced to (and read from) disk. So a huge number of unused directory entries is a waste too, but you don't want to risk running out of them either.


-- leif
