[ https://issues.apache.org/jira/browse/JCR-937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12518405 ]

Jukka Zitting commented on JCR-937:
-----------------------------------

There's a minimum size (default 128kB) per cache that overrides the global 
maximum memory setting once you start having large numbers of sessions. Each 
session is in effect guaranteed at least a small slice of memory for caching.

Do you have an idea how many sessions you have open concurrently?
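
As a rough illustration of that interaction, here is some plain Java arithmetic
using the default values mentioned above; the class and session counts are
illustrative only, not Jackrabbit API:

    // Sketch: the arithmetic behind the per-cache minimum vs. the global limit.
    public class CacheFloorEstimate {
        public static void main(String[] args) {
            long maxMemory = 16L * 1024 * 1024;   // global CacheManager limit (default 16MB)
            long minPerCache = 128L * 1024;       // guaranteed minimum per cache (default 128kB)

            // Each open session holds at least one item state cache, so the
            // guaranteed floor grows linearly with the number of sessions.
            for (int sessions : new int[] {50, 128, 500, 2000}) {
                long floor = sessions * minPerCache;
                System.out.printf("%d sessions -> at least %d kB guaranteed (global max %d kB)%n",
                        sessions, floor / 1024, maxMemory / 1024);
            }
            // Beyond 16MB / 128kB = 128 caches, the guaranteed minimums alone
            // exceed the configured global maximum.
        }
    }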

> CacheManager max memory size
> ----------------------------
>
>                 Key: JCR-937
>                 URL: https://issues.apache.org/jira/browse/JCR-937
>             Project: Jackrabbit
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 1.3
>            Reporter: Xiaohua Lu
>            Priority: Minor
>
> I have run into OutOfMemory errors a couple of times with Jackrabbit 
> (clustered, 4 nodes, each with a 1GB heap). 
> After adding some debugging to the CacheManager, I noticed that maxMemorySize 
> (defaults to 16MB) is not really honored during the resizeAll check. Each 
> individual MLRUItemStateCache seems to honor its own size, but the total 
> number/size of the MLRUItemStateCache instances is not bounded. If you add 
> print statements for totalMemoryUsed and unusedMemory, you can see that 
> totalMemoryUsed is more than 16MB and unusedMemory is negative. 
> The other problem we noticed during profiling is that there are a lot of 
> other in-memory objects that consume memory but are not covered by the 
> CacheManager's cache control. One example is CachingHierarchyManager, which 
> consumed 58MB out of 242MB through its use of PathMap. If CacheManager's 
> maxSize could control the total cache size used by Jackrabbit, that would be 
> easier from a management perspective. (By the way, upper_limit in 
> CachingHierarchyManager is hardcoded and can't be controlled from outside.)
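
For anyone hitting the same limits, one possible workaround is to tune the
cache manager programmatically. The sketch below is an assumption about the
core API (RepositoryImpl.getCacheManager() and the setMaxMemory /
setMaxMemoryPerCache / setMinMemoryPerCache setters on
org.apache.jackrabbit.core.state.CacheManager); check that these accessors
exist in your Jackrabbit version before relying on them. Note that this only
affects the caches CacheManager already manages, not the PathMap held by
CachingHierarchyManager.

    import javax.jcr.Repository;
    import org.apache.jackrabbit.core.RepositoryImpl;
    import org.apache.jackrabbit.core.state.CacheManager;

    // Sketch only: the accessor names are assumptions and may differ between releases.
    public class CacheTuning {
        public static void tune(Repository repository) {
            if (repository instanceof RepositoryImpl) {
                CacheManager manager = ((RepositoryImpl) repository).getCacheManager();
                manager.setMaxMemory(32L * 1024 * 1024);        // raise the global cap to 32MB
                manager.setMaxMemoryPerCache(4L * 1024 * 1024); // cap any single cache at 4MB
                manager.setMinMemoryPerCache(64L * 1024);       // lower the per-cache floor to 64kB
            }
        }
    }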

