Ilya,
Thanks for your quick response. I have gone through the capacity-planning
link you shared.
1,000,000,000 total objects (records)
1,024 bytes per object (1 KB)
0 backups
1 node

Total number of objects x object size (primary copy only, since backups are
set to 0; I should probably remove the backup property from the XML config):
1,000,000,000 x 1,024 = 1,024,000,000,000 bytes (1,024,000 MB)

No indexes are used. I know that if they were, they would add roughly 30%
more overhead.

Approximate additional memory required by the platform:
300 MB x 1 (number of nodes in the cluster) = 300 MB

Total size:
1,024,000 + 300 = 1,024,300 MB

Hence the anticipated total memory consumption would be just over ~1,024.3 GB.
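To sanity-check the arithmetic above, here is a quick Java sketch of the same
capacity-planning formula (the numbers are from this thread; the class and
variable names are just for illustration):

```java
// Back-of-envelope memory estimate for 1 billion 1 KB records,
// following the capacity-planning formula discussed in this thread.
public class CapacityEstimate {
    public static void main(String[] args) {
        long objects = 1_000_000_000L; // total records
        long objectSize = 1_024L;      // bytes per record (1 KB)
        int backups = 0;               // no backup copies
        int nodes = 1;                 // single-node cluster

        // Primary copies plus backup copies of the data.
        long dataBytes = objects * objectSize * (1 + backups);
        long dataMb = dataBytes / 1_000_000L;  // decimal MB, as used above

        // ~300 MB of platform overhead per node.
        long platformMb = 300L * nodes;
        long totalMb = dataMb + platformMb;

        System.out.println("Data:  " + dataMb + " MB");   // 1024000 MB
        System.out.println("Total: " + totalMb + " MB");  // 1024300 MB
    }
}
```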


My use case is to load the full 1 billion subscriber records into the data
grid (Apache Ignite) and read them from there, with no disk swapping once the
data is loaded in memory.
Please let me know whether my calculation is correct or whether I need to add
more memory. I have a single-node cluster as of now, and I am not using any
indexes or backups.
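For reference, a sketch of the Spring XML configuration I have in mind, with
backups set to 0 and a pure in-memory data region sized to fit the estimate
(bean classes and properties follow Ignite's public configuration API; the
cache name and region size are my own choices, not from the docs):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <!-- Keep everything in RAM: no native persistence, so no disk swapping -->
                    <property name="persistenceEnabled" value="false"/>
                    <!-- ~1.1 TB region to hold the ~1,024.3 GB estimate with some headroom -->
                    <property name="maxSize" value="#{1100L * 1024 * 1024 * 1024}"/>
                </bean>
            </property>
        </bean>
    </property>
    <property name="cacheConfiguration">
        <bean class="org.apache.ignite.configuration.CacheConfiguration">
            <property name="name" value="subscriberCache"/>
            <!-- 0 is the default, so this property could simply be removed -->
            <property name="backups" value="0"/>
        </bean>
    </property>
</bean>
```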

Regards
Navneet



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
