Hi guys,

I have read about the submission of DirectMemory to the ASF and I really
appreciate it. Since I have known Simone for some time and he asked me
whether I want to participate, here are the first things on my mind. ;)

I'm working for an enterprise content management vendor in Germany, and
some years ago I started reading about things like BigMemory, OrientDB,
and how to build file caches.
From the beginning of DirectMemory on GitHub I was interested in this
project, because I had the same idea. So I started to play around with it,
and for me the best part of DirectMemory was the concept of staging:
automatically/manually moving an object from a normal live object, to
serialized on-heap, to off-heap. With the last big change this concept was
killed, if I'm right (I just had a short look into the GitHub repo).
At the moment I'm not sure what the aim of DirectMemory is. Is the idea to
be a pluggable layer for other cache implementations, or is it "another"
cache implementation? BigMemory and also Elastic Memory (by Hazelcast) are
mostly used for caching, to keep frequently used data out of the garbage
collector's reach.

I still think that the staging mechanism was the USP, and you should not
forget it.

I talked with Simone about the outstanding migration and the code you want
to bring to the ASF. At the moment I'm not sure about it, because after the
last commit you are close to a pool which delivers slices of a ByteBuffer.
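Roughly what I mean by that, again with made-up names rather than the real API: one big direct buffer carved into slices by a bump pointer, which is useful plumbing but not yet a staged cache.

```java
import java.nio.ByteBuffer;

// Minimal sketch of "a pool which delivers slices of a ByteBuffer":
// one large direct slab, handed out as fixed-size views that share
// the slab's memory. Illustrative only, not the DirectMemory API.
public class SlicePool {
    private final ByteBuffer slab;

    public SlicePool(int capacity) {
        slab = ByteBuffer.allocateDirect(capacity);
    }

    // Returns a view of the next `size` bytes of the slab, or null
    // when the slab is exhausted. No free/reclaim in this sketch.
    public synchronized ByteBuffer allocate(int size) {
        if (slab.remaining() < size) return null;
        int oldLimit = slab.limit();
        slab.limit(slab.position() + size);
        ByteBuffer slice = slab.slice();       // view of [position, limit)
        slab.position(slab.limit());           // bump past the slice
        slab.limit(oldLimit);                  // restore the full window
        return slice;
    }

    public synchronized int remaining() { return slab.remaining(); }
}
```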

Don't get me wrong, this is a fine first step, but there is room for a lot
of improvement, especially in the design of the APIs and the
implementations themselves (staging was done in nearly one class ;)).
It would be nice to get some more information about what the aim for 1.0 is.


Just some thoughts, maybe a little rough. :) I would be happy to bring some
thoughts/requirements from the enterprise business into the project (and
also commits, if you want :)).

Bye and Best Regards,
Daniel