Just out of curiosity, how are you planning to load 1 TB of data into the cache: using the DataStreamer or a CacheStore? What is the expected time to load the cache? Since you are not keeping backups, how are you going to handle the situation when a node crashes? This can happen in a production environment, so what is the acceptable downtime for you?
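The load-time question can be bounded with a back-of-envelope extrapolation from the figures in the next message (roughly 40 GB loaded in 28-30 minutes). This is only a rough sketch under the assumption that throughput scales linearly; real load time depends heavily on the loading mechanism (IgniteDataStreamer is typically much faster than per-entry puts through a CacheStore), network, and marshalling:

```python
# Back-of-envelope load-time estimate; numbers taken from the thread,
# linear scaling is an assumption, not a measurement.
REPORTED_GB = 40.0        # ~40 GB loaded (15-20 GB primary plus one backup)
REPORTED_MINUTES = 29.0   # midpoint of the reported 28-30 minute range
TARGET_GB = 1024.3        # the ~1 TB estimate discussed below

throughput_gb_per_min = REPORTED_GB / REPORTED_MINUTES
estimated_hours = TARGET_GB / throughput_gb_per_min / 60.0

print(f"~{throughput_gb_per_min:.2f} GB/min "
      f"-> ~{estimated_hours:.1f} hours for ~1 TB")
# prints: ~1.38 GB/min -> ~12.4 hours for ~1 TB
```

At that rate a full reload after a node crash would take on the order of half a day, which is why the backup-count and downtime questions above matter.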
To load 15-20 GB of data (from Oracle tables) into different caches, which comes to around 40 GB with backup count 1, my application takes 28-30 minutes. Please share your results if possible.

Thanks,
Prasad

On Wed 6 Mar, 2019, 2:58 PM Navneet Kumar <[email protected]> wrote:

> Ilya,
> Thanks for your quick response. I have gone through the capacity planning
> link shared by you.
>
> 1,000,000,000 total objects (records)
> 1,024 bytes per object (1 KB)
> 0 backups
> 4 nodes
>
> Total number of objects x object size (primary copy only, since backup is
> set to 0; better to remove the backup property from the XML):
> 1,000,000,000 x 1,024 = 1,024,000,000,000 bytes (1,024,000 MB)
>
> No indexes are used. I know that if they were used, they would add about
> 30% more overhead.
>
> Approximate additional memory required by the platform:
> 300 MB x 1 (number of nodes in the cluster) = 300 MB
>
> Total size:
> 1,024,000 + 300 = 1,024,300 MB
>
> Hence the anticipated total memory consumption would be just over ~1,024.3
> GB.
>
> So what I want to know is this: my use case is to load the full 1 billion
> subscriber records into the data grid (Apache Ignite) and read from there,
> with no disk swapping once the data is loaded in memory. Let me know
> whether my calculation is correct or whether I need to add more memory. I
> have a single-node cluster as of now, and I am not using any indexes or
> backups.
>
> Regards,
> Navneet
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
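The arithmetic in the quoted message can be checked with a short sketch. It assumes decimal megabytes (1 MB = 10^6 bytes) and the 300 MB per-node platform overhead figure from the quoted capacity-planning estimate; the 30% index overhead is shown only as an optional term, since the message says no indexes are used:

```python
# Capacity estimate reproducing the quoted calculation.
OBJECTS = 1_000_000_000        # total records
BYTES_PER_OBJECT = 1_024       # 1 KB per object
BACKUPS = 0                    # no backup copies
NODES = 1                      # single-node cluster, per the message
PLATFORM_MB_PER_NODE = 300     # per-node platform overhead from the estimate

# Primary (and backup, if any) data volume in decimal MB.
data_mb = OBJECTS * BYTES_PER_OBJECT * (1 + BACKUPS) // 10**6

# Add the fixed per-node platform overhead.
total_mb = data_mb + PLATFORM_MB_PER_NODE * NODES

# Optional: what the total would look like with the ~30% index overhead.
with_indexes_mb = int(total_mb * 1.3)

print(data_mb, total_mb)   # 1024000 1024300  -> ~1,024.3 GB
```

The numbers match the quoted total of 1,024,300 MB (~1,024.3 GB). Note that this is a lower bound for a production setup: with backup count 0 a single node crash loses that node's partitions outright, and raising the backup count multiplies the data term accordingly.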
