[ https://issues.apache.org/jira/browse/JCR-937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12537293 ]
Thomas Mueller commented on JCR-937:
------------------------------------

> extend InternalValue and BlobFileValue with a much more accurate estimation
> of the retained memory

Yes, that would be very good. I don't think it would affect performance much.
(A rough sketch of what such an estimate could look like is below, after the
issue details.)

> cache this size in the MLRUItemStateCache itself

I am not sure, but maybe the objects can change (so the size can change) while
they are in the cache. (The second sketch below illustrates the accounting
drift I mean.)

> CacheManager max memory size
> ----------------------------
>
>                 Key: JCR-937
>                 URL: https://issues.apache.org/jira/browse/JCR-937
>             Project: Jackrabbit
>          Issue Type: Bug
>          Components: jackrabbit-core
>    Affects Versions: 1.3
>            Reporter: Xiaohua Lu
>            Assignee: Thomas Mueller
>            Priority: Minor
>         Attachments: CacheManagerTest.java
>
>
> I have run into OutOfMemory a couple of times with Jackrabbit (cluster, 4
> nodes, each with a 1 GB heap).
> After adding some debug output to the CacheManager, I noticed that
> maxMemorySize (default 16 MB) is not really honored during the resizeAll
> check. Each individual MLRUItemStateCache seems to honor its size, but the
> total number/size of the MLRUItemStateCache instances is not limited. If you
> add print statements for totalMemoryUsed and unusedMemory, you can see that
> totalMemoryUsed is more than 16 MB and unusedMemory is negative.
> The other problem we noticed during profiling is that there are a lot of
> other in-memory objects that consume memory but are not included in the
> CacheManager's cache control. One example is CachingHierarchyManager, which
> consumed 58 MB out of 242 MB through its use of PathMap. If CacheManager's
> maxSize could control the total cache size used by Jackrabbit, that would be
> easier from a management perspective. (By the way, upper_limit in
> CachingHierarchyManager is hardcoded and can't be controlled from outside.)
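Just to make the first point concrete, here is a minimal sketch of the kind of
per-value estimate meant above. The class and method names (ValueSizeEstimator,
estimateString, and so on) and the per-object overhead constants are
assumptions for illustration only, not the actual InternalValue / BlobFileValue
API; the point is just that each value type can report its retained bytes from
data it already holds, so computing the estimate should be cheap.

    // Illustrative sketch only: not the actual Jackrabbit InternalValue /
    // BlobFileValue API. Overhead constants are rough assumptions.
    public final class ValueSizeEstimator {

        private static final int OBJECT_HEADER = 16; // assumed per-object overhead
        private static final int REFERENCE = 8;      // assumed reference size
        private static final int CHAR = 2;           // size of a Java char

        /** Estimate the retained size of a string value in bytes. */
        public static long estimateString(String value) {
            // String wrapper + backing char array + the characters themselves.
            return OBJECT_HEADER + REFERENCE
                    + OBJECT_HEADER + (long) value.length() * CHAR;
        }

        /** Estimate the retained size of a binary value kept in memory. */
        public static long estimateInMemoryBinary(byte[] data) {
            return OBJECT_HEADER + REFERENCE + OBJECT_HEADER + data.length;
        }

        /** Estimate the retained size of a binary value spooled to a temp file. */
        public static long estimateSpooledBinary() {
            // Only the in-memory part counts: the wrapper and the file
            // reference, not the bytes on disk.
            return OBJECT_HEADER + 2 * REFERENCE;
        }
    }

With per-value estimates like these, an item state could sum up its values and
report a total, and that total is the number the cache accounting would use.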
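And to illustrate the second point (why a size recorded at insert time can be a
problem): if the cache remembers the size an entry had when it was put in, and
the entry grows or shrinks while it is cached, the cache's running total no
longer matches reality. The names below (SizeTrackingCache, MemoryAware) are
made up for this sketch and are not the MLRUItemStateCache API.

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative sketch only: not the MLRUItemStateCache API.
    interface MemoryAware {
        long getMemorySize();
    }

    final class SizeTrackingCache<K, V extends MemoryAware> {

        private final Map<K, V> entries = new HashMap<K, V>();
        // Size of each entry as it was when it went into the cache.
        private final Map<K, Long> recordedSizes = new HashMap<K, Long>();
        private long accountedTotal;

        void put(K key, V value) {
            long size = value.getMemorySize();
            entries.put(key, value);
            recordedSizes.put(key, Long.valueOf(size));
            accountedTotal += size;
        }

        void remove(K key) {
            V value = entries.remove(key);
            Long recorded = recordedSizes.remove(key);
            if (value != null && recorded != null) {
                accountedTotal -= recorded.longValue();
                // If the entry changed size while cached, the difference is
                // never reflected in accountedTotal:
                long drift = value.getMemorySize() - recorded.longValue();
                if (drift != 0) {
                    System.out.println("accounting drift for " + key + ": " + drift + " bytes");
                }
            }
        }

        long getAccountedTotal() {
            return accountedTotal;
        }
    }

If the cached states are effectively immutable while cached, recording the size
once is fine and saves recomputing it on every access; if they can be modified
in place, the cache would either have to re-read the size on access or be told
about the change.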