"*- I have 300GB of data in DB. Will this be the same in Ignite?* No, data size on disk is not a direct 1-to-1 mapping in memory. As a very rough estimate, it can be about 2.5/3 times size on disk excluding indexes and any other overhead."
I am getting approximately these numbers; the 1.3 is a 30% overhead because of indices.

2017-05-11 0:59 GMT+02:00 Denis Magda <[email protected]>:

> Hi,
>
> Honestly, it's unclear why you use multipliers like 2.5 and 1.3 in the
> formula.
>
> Please refer to this capacity guide to make up a rough estimation:
> https://apacheignite.readme.io/docs/capacity-planning
>
> --
> Denis
>
> On May 10, 2017, at 8:35 AM, Guillermo Ortiz <[email protected]> wrote:
>
> What is the reason that Ignite needs so much space to store data in
> memory?
> For example, if your dataset is about 4TB and you are going to use
> backups (1 replica), the final size is about
> 4TB x 2.5 x 1.3 (indices) x 2 (backup) = 26TB, more or less. So it could be
> 28TB once you add the memory the system needs to operate.
>
> It seems insane to need that much memory in many cases. How does Ignite
> store data in a way that takes 3 times the size on disk? Is it possible to
> reduce this?
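
For reference, a minimal back-of-the-envelope sketch of the estimate discussed in this thread. The 2.5x in-memory expansion, the 1.3x index overhead, and the 4 TB dataset are the rough figures from the messages above, not official Ignite constants, so treat them as assumptions to adjust for your own data model:

// Rough capacity estimate using the multipliers quoted in this thread.
// All factors are assumptions from the discussion, not official values.
public class CapacityEstimate {
    public static void main(String[] args) {
        double dataOnDiskTb = 4.0;   // raw dataset size on disk (4 TB, as in the example above)
        double memoryOverhead = 2.5; // rough disk-to-memory expansion factor
        double indexOverhead = 1.3;  // ~30% extra for indexes
        int copies = 2;              // primary + 1 backup replica

        double totalTb = dataOnDiskTb * memoryOverhead * indexOverhead * copies;
        System.out.printf("Estimated in-memory footprint: %.1f TB%n", totalTb);
        // Prints ~26.0 TB; add headroom (roughly 2 TB in the example) for the JVM, OS and Ignite internals.
    }
}

The capacity planning guide linked by Denis is the better starting point for a real estimate; this snippet only reproduces the arithmetic behind the 26-28 TB figure mentioned above.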
