Hi all,

You probably all use the MMBase caching layer, and configure the cache
sizes in your 'caches.xml'. We also do that, but we are running out of
memory fast, because objects (mainly nodelists) are not flushed often
enough. The problem with caching is, well, that it keeps objects in
memory. This problem will only get worse with Ernst's project, since it
will mean that even fewer objects are flushed from the cache, so your
caches stay fuller even longer.

The current cache implementation is the org.mmbase.util.LRUHashTable,
but since MMBase 1.8 you are no longer tied to it: you can specify your
own cache implementation if you want. That is just what I did, based on
the popular OSCache library. The nice thing about OSCache is that you
can specify the number of items cached in memory, and also allow
OSCache to overflow the rest to disk. This makes no sense whatsoever
for your nodecache, of course, but your multilevel cache and nodelist
cache are good candidates for storing on disk. Maybe your blobcache
too?
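For reference, disk overflow in OSCache 2.x is configured through an
oscache.properties file along these lines (the capacity and path below
are made-up example values, not the ones from my application):

```properties
# Keep at most 1000 entries in memory (example value)...
cache.memory=true
cache.capacity=1000
# ...and spill overflowing entries to disk instead of dropping them.
cache.persistence.class=com.opensymphony.oscache.plugins.diskpersistence.DiskPersistenceListener
cache.persistence.overflow.only=true
cache.path=/var/cache/mmbase
```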

I've now written my code as an application in the 'applications'
directory, but I also needed to change some things in the core:
- updated the Cache.java and CacheImplementationInterface.java files
to allow a cache to be configured
- updated MMObjectNode to make it serializable. I implemented custom
'readObject()' and 'writeObject()' methods for this, so other classes
referenced from MMObjectNode (mainly MMObjectBuilder) do not need to
be serializable.
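To illustrate the pattern (this is a minimal self-contained sketch with
made-up 'Node' and 'Builder' classes, not the actual MMObjectNode code):
the non-serializable reference is marked transient, writeObject() stores
only a name for it, and readObject() resolves that name back to the
live object.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for MMObjectBuilder: heavyweight, not
// Serializable, and resolvable by name from a registry.
class Builder {
    static final Map<String, Builder> REGISTRY = new HashMap<>();
    final String name;
    Builder(String name) {
        this.name = name;
        REGISTRY.put(name, this);
    }
}

// Hypothetical stand-in for MMObjectNode. The builder reference is
// transient, so default serialization skips it; writeObject() stores
// only the builder's name, and readObject() looks it up again.
class Node implements Serializable {
    private static final long serialVersionUID = 1L;
    transient Builder builder;  // skipped by default serialization
    String value;

    Node(Builder builder, String value) {
        this.builder = builder;
        this.value = value;
    }

    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();    // writes the serializable fields ('value')
        out.writeUTF(builder.name);  // plus just the builder's name
    }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        builder = Builder.REGISTRY.get(in.readUTF());  // re-attach by name
    }
}

public class SerializationDemo {
    public static void main(String[] args) throws Exception {
        Builder people = new Builder("people");
        Node node = new Node(people, "hello");

        // Round-trip through a byte array, as a disk-backed cache would.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(node);
        out.close();

        Node copy = (Node) new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray())).readObject();
        System.out.println(copy.value + " / " + copy.builder.name);
    }
}
```

This way only the node's own fields end up on disk, and the builder is
shared again after deserialization instead of being duplicated.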

My code depends on the 'oscache-2.2.jar' library, and in order to keep
the MMBase core as clean as possible I chose to create an application.
Do you all agree, or should I move my code (and the oscache-2.2.jar
dependency) into the core?

I did change some core classes, so I might need to call an official
hack, but I'd much rather hear some opinions first before I propose
one. Or maybe I can slip it in under some existing project?

Johannes
_______________________________________________
Developers mailing list
[email protected]
http://lists.mmbase.org/mailman/listinfo/developers
