[ https://issues.apache.org/jira/browse/JCR-937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12537255 ]
Thomas Mueller commented on JCR-937:
------------------------------------

Hi,

I agree the cache size calculation is not accurate. The question is, how exact does it need to be? I don't think that the original reporter of the problem used long strings: the problem was too many caches (hundreds, if not thousands). The reason for that many caches is not known; to find it we need a reproducible test case.

> CacheManager max memory size
> ----------------------------
>
>                 Key: JCR-937
>                 URL: https://issues.apache.org/jira/browse/JCR-937
>             Project: Jackrabbit
>          Issue Type: Bug
>          Components: jackrabbit-core
>    Affects Versions: 1.3
>            Reporter: Xiaohua Lu
>            Assignee: Thomas Mueller
>            Priority: Minor
>         Attachments: CacheManagerTest.java
>
>
> I have run into OutOfMemoryError a couple of times with Jackrabbit (a cluster of 4 nodes, each with a 1 GB heap).
> After adding some debug output to the CacheManager, I noticed that maxMemorySize (default 16 MB) is not really honored during the resizeAll check. Each individual MLRUItemStateCache seems to honor its own size, but the total number/size of the MLRUItemStateCache instances is not limited. If you add print statements for totalMemoryUsed and unusedMemory, you can see that totalMemoryUsed is more than 16 MB and unusedMemory is negative.
> The other problem we noticed during profiling is that there are a lot of other in-memory objects that consume memory but are not included in the CacheManager's cache control. One example is CachingHierarchyManager, which consumed 58 MB out of 242 MB through its use of PathMap. If CacheManager's maxSize could control the total cache size used by Jackrabbit, that would be easier from a management perspective. (By the way, upper_limit in CachingHierarchyManager is hardcoded and can't be controlled from the outside.)

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
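For illustration, here is a minimal sketch of the kind of global budget the report expects resizeAll to enforce: each cache reports its memory use, and the manager redistributes a shared 16 MB maximum proportionally so that the per-cache limits never sum to more than the global limit. The class and field names (SimpleCacheManager, Cache, MAX_TOTAL) are hypothetical and simplified, not Jackrabbit's actual CacheManager/MLRUItemStateCache API.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical sketch of a global cache budget. Every registered cache
 * reports how much memory it currently uses; resizeAll() then hands each
 * cache a proportional share of the global maximum, so the per-cache
 * limits always sum to at most MAX_TOTAL.
 */
class SimpleCacheManager {
    static final long MAX_TOTAL = 16 * 1024 * 1024; // 16 MB global budget

    static class Cache {
        long memoryUsed; // bytes this cache currently consumes
        long maxMemory;  // per-cache limit assigned by the manager
        Cache(long used) { this.memoryUsed = used; }
    }

    final List<Cache> caches = new ArrayList<>();

    /** Shrink each cache's limit to its proportional share of MAX_TOTAL. */
    void resizeAll() {
        long total = 0;
        for (Cache c : caches) {
            total += c.memoryUsed;
        }
        if (total <= MAX_TOTAL) {
            return; // under budget, nothing to shrink
        }
        for (Cache c : caches) {
            // proportional share of the global budget
            c.maxMemory = MAX_TOTAL * c.memoryUsed / total;
        }
    }
}
```

With this scheme, adding hundreds of caches still cannot push the combined limit above 16 MB; the accuracy of the scheme then depends entirely on how accurately each cache estimates its own memoryUsed, which is exactly the estimation question raised in the comment above.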