We currently have a cache whose key is the record type and whose value is a map 
from field id to field. To update this cache, which has persistence enabled, we 
need to atomically load the value map for a key, add the new fields to that map, 
and write the map back to the cache. This can be done with invokeAll and a 
CacheEntryProcessor (a rough sketch is below). However, when I test under a 
higher load (100k records with 50 fields each), I run into an OOM exception, 
which I will post below. The reported cause of the exception is a failure to 
find a page to evict. However, even with the DataRegion's eviction threshold set 
to 0.5 and the page eviction mode set to RANDOM_2_LRU, I still get the same error.
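
For reference, here is a rough sketch of the entry-processor update I am 
describing. RecordType, FieldId, and Field are placeholders for our own classes; 
the invokeAll/CacheEntryProcessor shape is the relevant part.

import java.util.HashMap;
import java.util.Map;
import javax.cache.processor.MutableEntry;
import org.apache.ignite.cache.CacheEntryProcessor;

// Merges a batch of new fields into the existing field map for a record type.
public class FieldMergeProcessor
        implements CacheEntryProcessor<RecordType, Map<FieldId, Field>, Void> {
    private final Map<FieldId, Field> newFields;

    public FieldMergeProcessor(Map<FieldId, Field> newFields) {
        this.newFields = newFields;
    }

    @Override public Void process(MutableEntry<RecordType, Map<FieldId, Field>> entry,
        Object... args) {
        // Load the current map (or start a fresh one), add the new fields,
        // and write the whole map back so the update happens atomically.
        Map<FieldId, Field> fields =
            entry.exists() ? new HashMap<>(entry.getValue()) : new HashMap<>();
        fields.putAll(newFields);
        entry.setValue(fields);
        return null;
    }
}

// Invoked per batch of keys, e.g.:
//   cache.invokeAll(keysToUpdate, new FieldMergeProcessor(fieldsToAdd));

In our real code each key gets its own set of new fields, but the call shape is 
the same.
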
This leads me to two main questions:

1. Why does eviction fail to find a page even with a lower threshold and page 
eviction enabled? Is the threshold somehow never being reached? Are non-data 
pages, such as metadata and index pages, taken into account when determining 
whether the threshold has been reached? (My data region configuration is 
sketched after question 2.)

2. We don't have this issue when using IgniteDataStreamer to write large 
amounts of data to the cache; we just can't get transactional support at the 
same time. Why is this OOME a problem with regular cache puts but not with 
IgniteDataStreamer? I would expect any issue with checkpointing and eviction to 
affect IgniteDataStreamer as well.
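
For completeness, here is roughly the configuration from question 1 and the 
bulk-load path from question 2 in one sketch. The region name, max size, cache 
name, and the data types are placeholders; the eviction threshold and page 
eviction mode are the values I mentioned above.

import java.util.Map;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.configuration.DataPageEvictionMode;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class LoadSketch {
    // Data region with persistence, a 0.5 eviction threshold, and RANDOM_2_LRU.
    static IgniteConfiguration config() {
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setName("recordRegion")                    // placeholder name
            .setPersistenceEnabled(true)
            .setMaxSize(2L * 1024 * 1024 * 1024)        // placeholder; ours differs
            .setEvictionThreshold(0.5)
            .setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);

        return new IgniteConfiguration().setDataStorageConfiguration(
            new DataStorageConfiguration().setDefaultDataRegionConfiguration(region));
    }

    // The bulk-load path that does NOT hit the OOME: plain IgniteDataStreamer.
    static void bulkLoad(Ignite ignite, Map<RecordType, Map<FieldId, Field>> data) {
        try (IgniteDataStreamer<RecordType, Map<FieldId, Field>> streamer =
                 ignite.dataStreamer("recordCache")) {  // placeholder cache name
            streamer.allowOverwrite(true);
            data.forEach(streamer::addData);
        }
    }
}
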
