Hi,
On 19/08/14 14:43, Thomas Mueller muel...@adobe.com wrote:
Limiting the cache size by number of entries doesn't make sense. It is a
sure way to run into out-of-memory errors, exactly because the sizes of
documents vary a lot.
I agree with Thomas. I think there must always be a way to tell the
Hi,
If we need a limit on the number of entries for some other (internal)
reason, like the consistency check, then I understand. If we later find a
way to speed up the consistency check (or if we don't need it, which I
would prefer), then this is no longer needed. But I also don't know how to
limit by
why is 256 MB -- the default value -- sufficient/insufficient
We don't know. But how do you know that a cache of 10'000 entries is
sufficient? Especially if each entry can be either 1 KB or 1 MB or 20 MB:
10'000 entries could then mean anywhere from roughly 10 MB to 200 GB of
memory. The available memory can be divided into different areas, and each
component is given a
Hi Vikas,
Sizing the cache can be done by either the number of entries or the size
taken by the cache. Currently in Oak we limit by size; however, as you
mentioned, limiting by count is more deterministic. We use Guava Cache, and
it supports either limiting by size or by number of entries, i.e. the
two policies are exclusive.
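For reference, this is roughly what the two (mutually exclusive) Guava options look like. The CacheBuilder and Weigher calls are the actual Guava API; the CachedDoc value type and its sizeInBytes() accessor are only made up for the example, they are not Oak's real classes:

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;
    import com.google.common.cache.Weigher;

    public class CachePolicyExample {

        // Hypothetical stand-in for a cached document; only its size matters here.
        static final class CachedDoc {
            final byte[] data;
            CachedDoc(byte[] data) { this.data = data; }
            int sizeInBytes() { return data.length; }
        }

        public static void main(String[] args) {
            // Size-based limit (what Oak does today): cap the total weight of all
            // entries, where the weigher reports each entry's approximate size.
            Cache<String, CachedDoc> bySize = CacheBuilder.newBuilder()
                    .maximumWeight(256L * 1024 * 1024)   // ~256 MB
                    .weigher(new Weigher<String, CachedDoc>() {
                        @Override
                        public int weigh(String key, CachedDoc doc) {
                            return doc.sizeInBytes();
                        }
                    })
                    .build();

            // Count-based limit: a fixed number of entries, regardless of their size.
            Cache<String, CachedDoc> byCount = CacheBuilder.newBuilder()
                    .maximumSize(10000)
                    .build();

            // Calling both maximumSize() and maximumWeight() on the same builder
            // throws an IllegalStateException, which is why the two are exclusive.
        }
    }

Oak's current size-based limit corresponds to the first variant; the count-based limit discussed here would be the second.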
Hi,
Limiting the cache size by number of entries doesn't make sense. It is a
sure way to run into out-of-memory errors, exactly because the sizes of
documents vary a lot.
as you mentioned, limiting by count is more deterministic.
How, or in what way, is it more deterministic?
the sysadmin can be provided with a rough idea of the number of (frequently
used) repo nodes, which they can use to set the cache size.
Hi Thomas,
On Tue, Aug 19, 2014 at 6:13 PM, Thomas Mueller muel...@adobe.com wrote:
How, or in what way, is it more deterministic?
Missed providing some context there, so here are the details. Currently
we limit the cache by the total size it takes. Now, given a system where
you have, say, 32 GB RAM
the sysadmin can be provided with a rough idea of the number of (frequently
used) repo nodes, which they can use to set the cache size.
I can't follow you, sorry. How would a sysadmin possibly know the number
of frequently used nodes? And why would he know that, and not the amount
of memory?
We use Guava Cache, and it supports either limiting by size or by number
of entries, i.e. the two policies are exclusive.
Hmm, I totally missed this point.
So, at a minimum, if you can provide a patch which allows the admin to
choose between the two, it would allow us to experiment and later see
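Just to sketch what such an admin-facing switch could look like (the property name "documentCache.limitBy" and the Sized interface are invented for illustration, not a proposal for the actual configuration keys):

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;
    import com.google.common.cache.Weigher;

    final class DocumentCacheBuilder {

        // Stand-in for the cached value type; sizeInBytes() is hypothetical.
        interface Sized {
            int sizeInBytes();
        }

        // A real patch would wire the choice into Oak's existing configuration
        // instead of a system property.
        static Cache<String, Sized> build(long maxEntries, long maxBytes) {
            boolean limitByCount = "count".equals(
                    System.getProperty("documentCache.limitBy", "size"));
            if (limitByCount) {
                return CacheBuilder.newBuilder()
                        .maximumSize(maxEntries)
                        .build();
            }
            return CacheBuilder.newBuilder()
                    .maximumWeight(maxBytes)
                    .weigher(new Weigher<String, Sized>() {
                        @Override
                        public int weigh(String key, Sized value) {
                            return value.sizeInBytes();
                        }
                    })
                    .build();
        }
    }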
Hi,
We were struggling for the past couple of weeks with severe performance
issues on AEM6/Oak/MongoNS -- fortunately the issue was due to the VM we
were using. So, all seems well for now.
BUT, during the investigation, one of the things that we were worried
about was the document cache missing hits... we tried
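I don't know what exactly was tried there, but for anyone who wants to measure this: a Guava cache can report hit/miss counters if stats recording is turned on. Whether and how Oak exposes such counters for the document cache is a separate question; the snippet below is plain Guava:

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;
    import com.google.common.cache.CacheStats;

    public class CacheStatsExample {
        public static void main(String[] args) {
            Cache<String, String> cache = CacheBuilder.newBuilder()
                    .maximumSize(1000)
                    .recordStats()          // without this, stats() reports all zeros
                    .build();

            cache.put("a", "1");
            cache.getIfPresent("a");        // hit
            cache.getIfPresent("b");        // miss

            CacheStats stats = cache.stats();
            System.out.println("hits=" + stats.hitCount()
                    + " misses=" + stats.missCount()
                    + " hitRate=" + stats.hitRate());
        }
    }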
Hello Vikas,
On 18/08/2014 12:05, Vikas Saurabh wrote:
Hi,
...
specify a cache size. While I agree that the cache should be a memory
hog... the entry size of a document in the cache is quite variable in
nature, and as an admin I can make guesses about JCR nodes and their
access patterns. Document
we can probably have both, and the cache respects whichever constraint hits
first (sort of a min(byte limit, entry limit)).
First of all, I don't know the MongoNS implementation details, so I can be
wrong. I'd rather keep the size in bytes, as it gives me much more control
over the memory I have and what I
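Coming back to the "whichever constraint hits first" idea: Guava won't let you set maximumSize and maximumWeight on the same builder, but as far as I can tell you can approximate having both limits with a single weight-based limit by giving every entry a minimum weight of maxBytes / maxEntries. A rough sketch (the Doc class and its sizeInBytes() method are invented for the example):

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;
    import com.google.common.cache.Weigher;

    public class CombinedLimitSketch {

        // Stand-in for the cached document; sizeInBytes() is made up for the example.
        static final class Doc {
            final byte[] data;
            Doc(byte[] data) { this.data = data; }
            int sizeInBytes() { return data.length; }
        }

        static Cache<String, Doc> build(long maxBytes, long maxEntries) {
            // Give every entry a weight of at least maxBytes / maxEntries. Then:
            //   total real bytes <= total weight <= maxBytes            (byte limit)
            //   n entries * (maxBytes / maxEntries) <= maxBytes  =>  n <= maxEntries
            // so eviction kicks in as soon as the first of the two limits is hit.
            final int floor = (int) Math.max(1L, maxBytes / maxEntries);
            return CacheBuilder.newBuilder()
                    .maximumWeight(maxBytes)
                    .weigher(new Weigher<String, Doc>() {
                        @Override
                        public int weigh(String key, Doc doc) {
                            return Math.max(doc.sizeInBytes(), floor);
                        }
                    })
                    .build();
        }
    }

The accounting is approximate (Guava may evict slightly before the limits are reached), but it does give "evict when the first of the two limits is hit" semantics with one weigher.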