We run on VMs where even "local" disk is on a SAN, so anything that stays
in memory (e.g. ConcurrentHashMap) will be faster than anything that gets
written to disk (e.g. ChronicleMap), even if the latter eliminates garbage
collection.  So this isn't a drop-in replacement we'd welcome unless which
implementation to use were configurable.

Tim
On Mar 20, 2015 12:17 AM, "Kevin Burton" <bur...@spinn3r.com> wrote:

> This is interesting.
>
> Right now AMQ5 uses ConcurrentHashMap and for large heaps this has some
> obvious GC issues.
>
> 1.  You have to keep a LARGE % of the JVM memory free for large GCs.
>
> 2.  All the objects get promoted into the old generation, screwing up GC
> and wasting cycles on every GC.
>
> Then there’s this.
>
>
> http://www.javacodegeeks.com/2015/03/creating-millions-of-objects-with-zero-garbage.html
>
> http://openhft.github.io/Chronicle-Map/apidocs/net/openhft/chronicle/map/ChronicleMapBuilder.html
>
> This should allow a drop-in replacement for CHMs with off-heap memory.
>
> --
>
> Founder/CEO Spinn3r.com
> Location: *San Francisco, CA*
> blog: http://burtonator.wordpress.com
> … or check out my Google+ profile
> <https://plus.google.com/102718274791889610666/posts>
> <http://spinn3r.com>
>
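The drop-in claim above rests on ChronicleMap implementing the same
java.util.concurrent.ConcurrentMap interface as ConcurrentHashMap. A minimal
sketch of what that swap looks like, assuming code is written against the
interface rather than a concrete class (the ChronicleMapBuilder wiring is
shown commented out since it needs the Chronicle-Map dependency on the
classpath, and its sizing parameters here are illustrative, not recommended
values):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class MapSwap {
    // Code written against the ConcurrentMap interface doesn't care which
    // implementation backs it -- that is what would make an off-heap map a
    // candidate drop-in replacement.
    static long increment(ConcurrentMap<String, Long> counts, String key) {
        // merge() atomically inserts 1 or adds 1 to the existing value
        return counts.merge(key, 1L, Long::sum);
    }

    public static void main(String[] args) {
        // On-heap today:
        ConcurrentMap<String, Long> counts = new ConcurrentHashMap<>();

        // Hypothetical off-heap alternative (requires net.openhft:chronicle-map;
        // entries/averageKeySize values below are placeholders):
        // ConcurrentMap<String, Long> counts =
        //     ChronicleMapBuilder.of(String.class, Long.class)
        //         .entries(1_000_000)
        //         .averageKeySize(32)
        //         .create();

        increment(counts, "msg");
        increment(counts, "msg");
        System.out.println(counts.get("msg")); // prints 2
    }
}
```

The caveat is that only the interface is a drop-in: ChronicleMap serializes
keys and values off-heap, so identity semantics and per-operation costs differ
from an on-heap ConcurrentHashMap even when the calling code compiles unchanged.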
