Hi.
I wrote a procedure that removes a node, and it throws an OutOfMemoryError with
-Xmx128m:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at org.apache.commons.collections.map.AbstractHashedMap.ensureCapacity(AbstractHashedMap.java:611)
    at org.apache.commons.collections.map.AbstractHashedMap.checkCapacity(AbstractHashedMap.java:591)
    at org.apache.commons.collections.map.AbstractHashedMap.addMapping(AbstractHashedMap.java:496)
    at org.apache.commons.collections.map.AbstractHashedMap.put(AbstractHashedMap.java:284)
    at org.apache.commons.collections.map.AbstractReferenceMap.put(AbstractReferenceMap.java:256)
    at org.apache.jackrabbit.core.state.ItemStateMap.put(ItemStateMap.java:74)
    at org.apache.jackrabbit.core.state.ItemStateReferenceCache.cache(ItemStateReferenceCache.java:122)
    at org.apache.jackrabbit.core.state.LocalItemStateManager.getPropertyState(LocalItemStateManager.java:136)
    at org.apache.jackrabbit.core.state.LocalItemStateManager.getItemState(LocalItemStateManager.java:174)
    at org.apache.jackrabbit.core.state.XAItemStateManager.getItemState(XAItemStateManager.java:260)
    at org.apache.jackrabbit.core.state.SessionItemStateManager.getItemState(SessionItemStateManager.java:200)
    at org.apache.jackrabbit.core.ItemManager.getItemData(ItemManager.java:390)
    at org.apache.jackrabbit.core.ItemManager.getItem(ItemManager.java:336)
    at org.apache.jackrabbit.core.ItemManager.getItem(ItemManager.java:615)
    at org.apache.jackrabbit.core.NodeImpl.onRemove(NodeImpl.java:650)
    at org.apache.jackrabbit.core.NodeImpl.onRemove(NodeImpl.java:636)
    at org.apache.jackrabbit.core.NodeImpl.onRemove(NodeImpl.java:636)
    at org.apache.jackrabbit.core.NodeImpl.onRemove(NodeImpl.java:636)
    at org.apache.jackrabbit.core.NodeImpl.onRemove(NodeImpl.java:636)
    at org.apache.jackrabbit.core.NodeImpl.removeChildNode(NodeImpl.java:586)
    at org.apache.jackrabbit.core.ItemImpl.internalRemove(ItemImpl.java:887)
    at org.apache.jackrabbit.core.ItemImpl.remove(ItemImpl.java:959)
    at com.myapplication.whatever...
The node to remove may be huge (for example, all the data stored for one user).
In my case it contains a hierarchy of 40,000+ nodes.
Also note that since my application is concurrent and I use a
TransientRepository, there is always another session open.
So I suspect the cache/replica system, which keeps the pending changes in
memory until session.save() is called.
The only idea I have is a recursive algorithm that calls save() each time a
node is removed, but I would prefer to keep my data consistent for the other
users at all times. Something like the sketch below:
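This is a rough, untested sketch of what I mean (the names BatchedRemover,
removeBatched and BATCH_SIZE are mine, and the batch size is just a guess;
only hasNodes(), getNodes(), nextNode(), remove() and Session.save() are
standard JCR calls):

    import javax.jcr.Node;
    import javax.jcr.RepositoryException;
    import javax.jcr.Session;

    public class BatchedRemover {

        // Guessed threshold: small enough that one batch of transient
        // item states fits in a 128m heap.
        private static final int BATCH_SIZE = 500;

        private int pending = 0;

        // Removes the subtree rooted at 'node' bottom-up and calls
        // session.save() every BATCH_SIZE removals, so the transient
        // states never accumulate for the whole 40,000+ node hierarchy.
        public void removeBatched(Session session, Node node)
                throws RepositoryException {
            // Remove the children first; re-fetch the iterator each
            // pass instead of iterating while removing.
            while (node.hasNodes()) {
                removeBatched(session, node.getNodes().nextNode());
            }
            node.remove();
            if (++pending >= BATCH_SIZE) {
                session.save(); // flush the pending removals to the store
                pending = 0;
            }
        }
    }

The drawback is exactly what worries me: between two save() calls the other
sessions can see the subtree half removed, so it trades consistency for
memory.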
Any idea?
Frank