Dmitriy, maybe this is because you don't know the code base well enough? ;)
Val,

There are a few things I can name right away, without any investigation:

1. Memory allocation: Unsafe.allocateMemory is inherently not parallel (it ultimately delegates to malloc, which can serialize concurrent allocations), while SnapTree, after snapshotting, switches into copy-on-write mode, which allocates heavily. In multithreaded operations, especially on large caches, this becomes a bottleneck and you will see it easily in the hot spots.

2. Memory deallocation: since SnapTree is a concurrent data structure with manual memory management, it is protected by GridUnsafeGuard, which is implemented as a linked queue. I'm not sure this is the most efficient and scalable algorithm; it may make sense to revisit it in the future. Needless to say, Unsafe.freeMemory has the same scalability issues as allocateMemory.

3. We always have to deserialize values in order to access their fields (I hope this can be improved after the new marshaller is merged).

4. Cache values are accessed through the swap API.

Sergi

2015-09-17 7:27 GMT+03:00 Alexey Kuznetsov <[email protected]>:

> Igniters,
>
> As I understand, "javadevmtl" prepared some code.
> Maybe someone experienced could run it under a Java profiler; it might
> give some useful information about where the slowdown is.
>
> --
> Alexey Kuznetsov
> GridGain Systems
> www.gridgain.com
>
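To make the allocation point concrete, here is a minimal, self-contained microbenchmark sketch. This is not Ignite code; the block size, iteration count, and thread counts are arbitrary choices for illustration. It hammers Unsafe.allocateMemory/freeMemory from a varying number of threads so any serialization in the allocator shows up in the per-run timings:

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class AllocBench {
    public static void main(String[] args) throws Exception {
        // sun.misc.Unsafe is not public API; obtain it via reflection.
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        final Unsafe unsafe = (Unsafe) f.get(null);

        final int totalAllocs = 200_000; // arbitrary; raise for steadier numbers

        for (int threads : new int[] {1, 2, 4, 8}) {
            final int perThread = totalAllocs / threads;
            Thread[] workers = new Thread[threads];

            long start = System.nanoTime();
            for (int i = 0; i < threads; i++) {
                workers[i] = new Thread(() -> {
                    for (int j = 0; j < perThread; j++) {
                        // Small, tree-node-sized native block, freed immediately.
                        long addr = unsafe.allocateMemory(64);
                        unsafe.freeMemory(addr);
                    }
                });
                workers[i].start();
            }
            for (Thread t : workers)
                t.join();

            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.println(threads + " thread(s): " + ms + " ms");
        }
    }
}
```

If allocation scaled perfectly, the wall-clock time per run would drop roughly in proportion to the thread count; flat or growing timings point at contention inside the allocator.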
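For the deallocation point, here is a simplified sketch of the deferred-free idea behind a guard of this kind. This is NOT the actual GridUnsafeGuard implementation; the class and method names are made up for illustration. Frees are queued while any reader is inside a critical section and are executed only once the reader count drops to zero (assuming, as such schemes do, that an entry is unlinked from the data structure before its free is queued, so new readers can no longer reach it):

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

/** Illustrative deferred-free guard; not the real GridUnsafeGuard. */
class UnsafeGuardSketch {
    private final AtomicInteger readers = new AtomicInteger();
    private final ConcurrentLinkedQueue<Runnable> deferred =
        new ConcurrentLinkedQueue<>();

    /** Enter a critical section: pointers read inside stay valid. */
    void begin() {
        readers.incrementAndGet();
    }

    /** Leave the critical section; last reader out drains pending frees. */
    void end() {
        if (readers.decrementAndGet() == 0) {
            Runnable free;
            while ((free = deferred.poll()) != null)
                free.run(); // actually release the native memory here
        }
    }

    /**
     * Called instead of an immediate Unsafe.freeMemory while concurrent
     * readers may still hold pointers into the entry being removed.
     */
    void releaseLater(Runnable free) {
        deferred.offer(free);
    }
}
```

The queue itself is the scalability concern raised above: every removal funnels through one shared concurrent queue, so under heavy multithreaded load the guard can contend even when the tree operations themselves do not.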
