>> the sysadmin can be provided with a rough idea of the number of
>> (frequently used) repo nodes, using which the sysadmin can update the
>> cache size.
>
> I can't follow you, sorry. How would a sysadmin possibly know the number
> of frequently used nodes? And why would he know that, and not the amount
> of memory? And why wouldn't he worry about running into out-of-memory
> errors?
>
> Even for off-heap caches, I think it's still important to limit the
> memory. Even though you don't get an out-of-memory exception, you would
> still run out of physical memory, at which point the system would get
> extremely slow (virtual memory thrashing).
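
Agreed that the memory has to be capped either way. Just to have
something concrete in the thread, here is a minimal sketch of a
size-bounded cache -- a plain Guava cache with a byte-size weigher,
which is only an assumption for the example, not how Oak actually
wires its document cache:

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;

    public class BoundedDocumentCache {
        public static void main(String[] args) {
            long maxBytes = 256L * 1024 * 1024; // the 256MB default under discussion

            // Weight entries by approximate byte size so the cap tracks
            // memory rather than entry count; Guava evicts once the total
            // weight exceeds maxBytes.
            Cache<String, byte[]> cache = CacheBuilder.newBuilder()
                    .maximumWeight(maxBytes)
                    .weigher((String path, byte[] doc) -> doc.length)
                    .recordStats()
                    .build();

            cache.put("/content/site/page", new byte[4096]);
            cache.getIfPresent("/content/site/page");    // hit
            cache.getIfPresent("/content/site/missing"); // miss

            // The hit rate is what a sysadmin would watch before choosing
            // between "increase cache size" and "talk to the app team".
            System.out.printf("hit rate: %.2f, misses: %d%n",
                    cache.stats().hitRate(), cache.stats().missCount());
        }
    }

With maximumWeight the cap is expressed in bytes rather than entries,
and recordStats() is what exposes the hit/miss numbers a sysadmin
would actually be watching.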

What I meant was that there was no way for me to guess a good number
for the document cache (e.g. why is 256MB -- the default value --
sufficient or insufficient) even though I knew what type of load I (as
application engineer) planned to put on an author instance. I
understand that memory usage is the bottom line and the sysadmin must
configure that too -- but from a sysadmin's point of view, what should
the course of action be when seeing a lot of cache misses: (a) notify
the application team, or (b) increase the cache size? Yes, at the end
of the day there would be a balance between these two options -- but
from the app engineer's point of view, I have no idea what cache size
is useful/sufficient, or even how to map a given size in bytes to the
kind of access I'd plan on this repository, which pretty much
nullifies option (a). I don't know for sure about general deployments,
but in our case the engineering team does recommend heap size and
other JVM settings (and possibly tweak levels) to the sysadmin team --
I thought that's how setups are usually done.
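
For concreteness, this is the kind of back-of-envelope mapping I'd
want to be able to do. Every input below is a made-up number the app
team would have to estimate; nothing here is measured from Oak:

    // Hypothetical sizing inputs -- none of these come from Oak itself.
    public class CacheSizeEstimate {
        public static void main(String[] args) {
            long hotNodes = 500_000;   // nodes the app expects to touch repeatedly
            long avgNodeBytes = 1_024; // rough serialized size per cached node
            double headroom = 1.3;     // slack for keys, overhead, access skew

            long suggested = (long) (hotNodes * avgNodeBytes * headroom);
            // ~634 MB for these inputs, vs. the 256MB default -- i.e. whether
            // the default is sufficient depends entirely on the planned
            // working set.
            System.out.printf("suggested cache size: ~%d MB%n",
                    suggested / (1024 * 1024));
        }
    }

If something like that were documented per cache, option (a) would
actually be actionable.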

Thanks,
Vikas
