Thanks Matt, 

You nailed the problem. We have a scenario where a bunch of hot keys are
accessed concurrently (about 1.2 billion requests/day, consistently around
150 concurrent users, with peaks of 10x).

For us, this cache is meant to be strictly read-only, and we never need to copy
since the data is never dirtied. To help with the problem, I moved all values
into off-heap memory, kept only the keys on-heap, and disabled eagerTtl.

We are still seeing performance problems compared to using Ehcache with the
BigMemoryGo off-heap implementation, but the changes above made it usable.

Just to give you performance numbers:

<bean class="org.apache.ignite.configuration.CacheConfiguration"
              p:name="myCache"
              p:cacheMode="LOCAL"
              p:memoryMode="ONHEAP_TIERED"
              p:offHeapMaxMemory="#{256 * 1024 * 1024}"
              p:evictionPolicy-ref="max50Elements"
              p:expiryPolicyFactory-ref="2HourTTL"
              p:statisticsEnabled="true"
              p:managementEnabled="true"
              p:swapEnabled="false"/> 

~17 seconds per request with the thread blocking (the ONHEAP_TIERED configuration above)

<bean class="org.apache.ignite.configuration.CacheConfiguration"
          p:name="myCache"
          p:cacheMode="LOCAL"
          p:atomicityMode="ATOMIC"
          p:memoryMode="OFFHEAP_VALUES"
          p:offHeapMaxMemory="#{256 * 1024 * 1024}"
          p:statisticsEnabled="true"
          p:managementEnabled="true"
          p:swapEnabled="false"
          p:eagerTtl="false"
          p:expiryPolicyFactory-ref="2HourTTL"/>

Median time: 64 ms, 90th percentile: 120 ms, max time: 1.2 seconds

The max was with a cold cache.
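For reference, here is roughly what that second configuration looks like when
built programmatically with the Ignite Java API. This is just a sketch: the
CreatedExpiryPolicy and the String/byte[] key and value types are my
assumptions, since the Spring config above only references the 2HourTTL factory
bean by name.

import java.util.concurrent.TimeUnit;

import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMemoryMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class OffHeapValuesCacheSketch {
    public static void main(String[] args) {
        // Assumed key/value types; adjust to the real ones.
        CacheConfiguration<String, byte[]> cfg = new CacheConfiguration<>("myCache");

        cfg.setCacheMode(CacheMode.LOCAL);
        cfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);

        // Keys stay on-heap, values live off-heap (capped at 256 MB).
        cfg.setMemoryMode(CacheMemoryMode.OFFHEAP_VALUES);
        cfg.setOffHeapMaxMemory(256L * 1024 * 1024);

        cfg.setSwapEnabled(false);

        // Expire entries lazily on access instead of via the background cleanup thread.
        cfg.setEagerTtl(false);

        // Assumption: the 2HourTTL bean is a creation-based expiry policy.
        cfg.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.HOURS, 2)));

        cfg.setStatisticsEnabled(true);
        cfg.setManagementEnabled(true);

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<String, byte[]> cache = ignite.getOrCreateCache(cfg);

            cache.put("hot-key", new byte[] {1, 2, 3});
            System.out.println(cache.get("hot-key").length);
        }
    }
}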

Turning off copying should probably put it in the same ballpark as BigMemory,
since that option was called out in their documentation.
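
Assuming the knob in question is Ignite's copyOnRead flag, it would just be one
extra setting on the configuration sketched above (or p:copyOnRead="false" in
the Spring bean):

// Minimal sketch: skip defensive value copies on read.
// Only safe here because this cache is strictly read-only.
CacheConfiguration<String, byte[]> cfg = new CacheConfiguration<>("myCache");
cfg.setCopyOnRead(false);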

A complete read-only option that only blocks when evicting (if that is even
needed) might provide even more headroom.


