Hello Johannes,

does your implementation already solve this?

java.util.ConcurrentModificationException
        at java.util.Hashtable$Enumerator.next(Hashtable.java:980)
        at org.mmbase.util.LRUHashtable$LRUEntrySetIterator.next(LRUHashtable.java:465)
        at java.util.AbstractMap.containsKey(AbstractMap.java:144)
        at org.mmbase.module.core.MMObjectBuilder.getNodes(MMObjectBuilder.java:1084)
        at org.mmbase.module.core.MMObjectNode.getRealNodesFromBuilder(MMObjectNode.java:1704)
        at org.mmbase.module.core.MMObjectNode.getRealNodes(MMObjectNode.java:1682)
        at org.mmbase.module.core.MMObjectNode.getRelatedNodes(MMObjectNode.java:1615)
        at org.mmbase.bridge.implementation.BasicNode.getRelatedNodes(BasicNode.java:1135)
        at org.mmbase.bridge.implementation.BasicNode.getRelatedNodes(BasicNode.java:1152)


Or do I have to fix this myself?

The problem is that the Cache does not delegate all Map methods to the wrapped implementation. It falls back on the default AbstractMap implementations, which are neither synchronized nor optimized for performance: AbstractMap.containsKey(), for example, iterates over the whole entry set instead of doing a direct lookup.
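To illustrate the point (this is a minimal sketch, not the actual MMBase Cache code): a cache that extends AbstractMap but only overrides put() and entrySet() inherits AbstractMap's containsKey(), which walks an iterator over the backing table and can throw ConcurrentModificationException under concurrent writes. Explicitly delegating containsKey() to the wrapped Hashtable uses its synchronized, constant-time lookup instead.

```java
import java.util.AbstractMap;
import java.util.Hashtable;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the fix: delegate Map methods to the wrapped
// Hashtable rather than inheriting AbstractMap's iterating defaults.
class DelegatingCache<K, V> extends AbstractMap<K, V> {
    private final Hashtable<K, V> backing = new Hashtable<K, V>();

    @Override
    public V put(K key, V value) {
        return backing.put(key, value);
    }

    @Override
    public Set<Map.Entry<K, V>> entrySet() {
        return backing.entrySet();
    }

    // Without this override, AbstractMap.containsKey() iterates over
    // entrySet() -- O(n), unsynchronized, and prone to
    // ConcurrentModificationException. Delegation makes it O(1) and
    // synchronized via Hashtable.
    @Override
    public boolean containsKey(Object key) {
        return backing.containsKey(key);
    }
}
```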

Nico



Johannes Verelst wrote:
Hi all,

You probably all use the MMBase caching layer, and configure the cache
sizes in your 'caches.xml'. We also do that, but we are running out of
memory fast, because objects (mainly nodelists) are not flushed often
enough. The problem with caching is, well, it saves objects in memory.
This problem will only get worse with Ernst's project, because that
will mean that even fewer objects are flushed from cache and your
caches remain fuller.

The current cache implementation is org.mmbase.util.LRUHashtable,
but since MMBase 1.8 this implementation no longer has to be used: you
can specify your own cache implementation if you want. That is just
what I did, based on the popular OSCache caching code. The nice thing
about OSCache is that you can specify the number of items cached in
memory, but also allow OSCache to overflow the rest to disk. This makes no
sense whatsoever for your nodecache of course, but your multilevel
cache and nodelist cache are good candidates to store on disk. Maybe
your blobcache too?
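For reference, the memory/disk split is driven by OSCache's own configuration. A sketch of an oscache.properties along these lines (property names are from the OSCache 2.x documentation; the values here are purely illustrative):

```properties
# Keep at most 1000 entries in memory, evicted LRU.
cache.memory=true
cache.capacity=1000
cache.algorithm=com.opensymphony.oscache.base.algorithm.LRUCache

# Overflow entries that no longer fit in memory to disk.
cache.persistence.class=com.opensymphony.oscache.plugins.diskpersistence.DiskPersistenceListener
cache.persistence.overflow.only=true
cache.path=/tmp/oscache
```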

I've now written my code as an application in the 'applications'
directory, but I also needed to change some things in the core:
- updated the Cache.java and CacheImplementationInterface.java files
to allow a cache implementation to be configured
- updated MMObjectNode to make it serializable. I also implemented a
'readObject()' and 'writeObject()' for this, so other classes
referenced from MMObjectNode (mainly MMObjectBuilder) do not need to
be serializable.
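The readObject()/writeObject() idea can be sketched as follows (names here are illustrative placeholders, not the actual MMObjectNode code): the builder reference is marked transient so it is never written to the stream, and is re-resolved from a serializable key when the node is read back in.

```java
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical sketch: a node that serializes without dragging its
// (non-serializable) builder along, restoring it on deserialization.
class Node implements Serializable {
    private static final long serialVersionUID = 1L;

    private transient String builder; // stands in for the builder reference
    private final String builderName; // serializable key used to restore it
    private final int number;

    Node(String builderName, int number) {
        this.builderName = builderName;
        this.number = number;
        this.builder = lookupBuilder(builderName);
    }

    // Placeholder for a registry lookup (e.g. by builder name).
    private static String lookupBuilder(String name) {
        return "builder:" + name;
    }

    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject(); // writes builderName and number only
    }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        builder = lookupBuilder(builderName); // restore transient reference
    }

    String getBuilder() { return builder; }
    int getNumber() { return number; }
}
```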

My code depends on the 'oscache-2.2.jar' library, and in order to keep
the MMBase core as clean as possible I chose to create an application.
Do you all agree or should I move my code (and the oscache-2.2.jar
dependency) to the core?

I did change some core classes, so I might need to call an official
hack, but I'd much rather just hear opinions before I propose one. Or
maybe I can shove it in under some project?

Johannes
_______________________________________________
Developers mailing list
[email protected]
http://lists.mmbase.org/mailman/listinfo/developers

