Hi,

I’m wondering what guarantees, if any, write-behind mode actually provides when 
the volume of incoming writes exceeds what it can flush to the backing store.

For example, suppose I configure a cache like this:


final CacheConfiguration<Long, byte[]> rccfg = new CacheConfiguration<>();
rccfg.setBackups(2);
rccfg.setManagementEnabled(true);
rccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC); // wait until backups are written
rccfg.setCacheMode(CacheMode.PARTITIONED);
rccfg.setEvictionPolicy(new LruEvictionPolicy<Long, byte[]>());
rccfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
rccfg.setLoadPreviousValue(false);
rccfg.setName("Resources-UserStore");
rccfg.setOffHeapMaxMemory(512 * 1024 * 1024); // in bytes
rccfg.setReadFromBackup(true);
rccfg.setStartSize(5000);
rccfg.setCacheStoreFactory(new ResourceCacheStoreFactory(fDiskStorage));
rccfg.setReadThrough(true);
rccfg.setWriteThrough(true);
rccfg.setWriteBehindEnabled(true);
rccfg.setWriteBehindFlushFrequency(2 * 60 * 1000); // in millis
rccfg.setWriteBehindFlushSize(100);
rccfg.setSwapEnabled(false);
rccfg.setRebalanceBatchSize(2 * 1024 * 1024);
rccfg.setRebalanceThrottle(200); // in millis
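
I also noticed that CacheConfiguration exposes a couple of further write-behind knobs. As far as I can tell from the Javadoc (I haven’t verified how they behave under overload), they control the number of flusher threads and the batch size handed to the store:

```java
// Additional write-behind tuning options on CacheConfiguration
// (names per the Ignite Javadoc; behaviour under overload not verified):
rccfg.setWriteBehindFlushThreadCount(4); // threads draining the write-behind buffer
rccfg.setWriteBehindBatchSize(512);      // max entries per CacheStore.writeAll() batch
```

But nothing in the docs I found spells out what happens when even these can’t keep pace.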

What happens if writes keep coming in but the write-behind threads can’t keep 
up, either because the store is too slow to accept them all or because the 
flush size and flush frequency would have to be violated? Will it start losing 
data if memory fills up and entries need to be evicted, or does it increase 
the flush frequency and/or block the writers?

PS: it took me a while to figure out that I had to enable write-through in 
order to enable write-behind .. that seemed counterintuitive to me.

Cheers,
Erik Vanherck