This is interesting. Right now AMQ5 uses ConcurrentHashMap, and for large heaps this has some obvious GC issues:

1. You have to keep a LARGE % of the JVM memory free for full GCs.
2. All the objects get promoted into the old generation, which screws up GC and wastes cycles on every collection.

Then there's this:

http://www.javacodegeeks.com/2015/03/creating-millions-of-objects-with-zero-garbage.html
http://openhft.github.io/Chronicle-Map/apidocs/net/openhft/chronicle/map/ChronicleMapBuilder.html

This should allow a drop-in replacement for CHMs with off-heap memory.
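To make it concrete, here's a minimal sketch of what the drop-in could look like, assuming the ChronicleMapBuilder API from the linked Javadoc (the exact sizing hints like averageKeySize/averageValueSize vary between Chronicle Map versions, and the key/value types and counts below are just placeholders):

    import java.util.concurrent.ConcurrentMap;

    import net.openhft.chronicle.map.ChronicleMap;
    import net.openhft.chronicle.map.ChronicleMapBuilder;

    public class OffHeapMapSketch {

        public static void main(String[] args) {
            // Build an off-heap map; entries are serialized outside the Java heap,
            // so they never get promoted into the old generation and add no GC pressure.
            ChronicleMap<String, String> offHeapMap = ChronicleMapBuilder
                    .of(String.class, String.class)
                    .entries(1_000_000)      // expected number of entries (placeholder)
                    .averageKeySize(32)      // sizing hints for variable-length types
                    .averageValueSize(256)
                    .create();

            // ChronicleMap implements ConcurrentMap, so it can stand in for a
            // ConcurrentHashMap behind the same interface.
            ConcurrentMap<String, String> map = offHeapMap;
            map.put("message-id-1", "payload");
            System.out.println(map.get("message-id-1"));

            offHeapMap.close();              // release the off-heap memory
        }
    }

Since the returned map is just a ConcurrentMap, the calling code wouldn't need to change; the trade-off is serialization cost on each get/put instead of GC cost on the heap.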