Hi there,
We are evaluating Geode as a backend DB for a service that requires high 
availability, geo-replication, and good throughput.
Our data set is large enough that a full in-memory deployment is probably not 
cost efficient, so I would like to know whether there is any information about 
performance when disk overflow is used (say, only 10% of the data fits in main 
memory). Is disk overflow designed to support this case, or is it more of a 
safety mechanism until the cluster is scaled out and rebalanced?
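
To make the scenario concrete, I am thinking of a partitioned region created 
with one of the overflow shortcuts, which evicts entries to disk under heap 
pressure (the region name below is just a placeholder):

```
gfsh> create region --name=mydata --type=PARTITION_OVERFLOW
```

With this type, most of the keyspace would end up on disk and only the hot ~10% 
would stay in heap, which is exactly the access pattern I am asking about.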
Thanks in advance,
/evaristo
