Starting with Ignite 2.0, data is always stored off-heap, meaning you can easily store and process terabytes of data in the cluster.
Java heap is needed only for the operational needs of your application, not as data storage. Start with a value around a couple of gigabytes and increase it if the workload requires more.

Denis

On Saturday, July 15, 2017, Lucky <[email protected]> wrote:
>
> OK, I see.
> But my data is more than 80 GB, and it will rise to 120 GB over the next
> 3 years, and I want my data to always reside off-heap.
> My Ignite version is 2.0.
> Any suggestions?
>
> Thanks.
> Lucky
>
>
> At 2017-07-14 21:27:21, "Andrey Mashenkov" <[email protected]> wrote:
>
> Hi Lucky,
>
> It is bad practice to set the JVM heap >= 32 GB. You can usually expect
> performance degradation for heaps > 10 GB due to heavy GC.
> It also makes no sense to use different Xms and Xmx values on the server
> side, as the JVM never returns freed memory to the OS.
>
> From Ignite 2.x onward there is no need for a huge heap, as all your data
> always resides off-heap.
> For 1.x versions, you can add more nodes with smaller heaps or use the
> OffHeap memory mode.
>
> See the recommended settings [1].
>
> [1] https://apacheignite.readme.io/docs/jvm-and-system-tuning
>
> On Thu, Jul 13, 2017 at 5:02 AM, Lucky <[email protected]> wrote:
>
>> Actually, my settings are Xms = 40G, Xmx = 120G,
>> but I still get the error message.
>> Are there other parameters for the H2 console?
>>
>> Thanks.
>> Lucky
>>
>>
>> At 2017-07-12 16:53:56, "Humphrey" <[email protected]> wrote:
>> >You can modify the ignite.sh script and increase the Xms and Xmx values.
>> >
>> >--
>> >View this message in context:
>> >http://apache-ignite-users.70518.x6.nabble.com/Requested-array-size-exceeds-VM-limit-tp14708p14710.html
>> >Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>
> --
> Best regards,
> Andrey V. Mashenkov
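[Editor's note] To pin down "let my data always reside off-heap" in Ignite 2.x terms: the off-heap cap is set on a memory policy, not via -Xmx. Below is a minimal sketch against the Ignite 2.1+ API; the region name "bigRegion" and the 120 GB figure are illustrative (taken from Lucky's stated data size). Note the API moved around within 2.x: in 2.0 the limit was a single size property, and from 2.3 these classes were renamed DataStorageConfiguration / DataRegionConfiguration.

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.MemoryConfiguration;
import org.apache.ignite.configuration.MemoryPolicyConfiguration;

public class OffHeapConfigSketch {
    public static void main(String[] args) {
        // Memory policy capping the off-heap region (names are illustrative).
        MemoryPolicyConfiguration plc = new MemoryPolicyConfiguration();
        plc.setName("bigRegion");
        plc.setMaxSize(120L * 1024 * 1024 * 1024); // 120 GB off-heap cap

        MemoryConfiguration memCfg = new MemoryConfiguration();
        memCfg.setMemoryPolicies(plc);
        memCfg.setDefaultMemoryPolicyName("bigRegion");

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setMemoryConfiguration(memCfg);

        // Data written to caches in this region now lives off-heap,
        // independent of the JVM heap size.
        Ignition.start(cfg);
    }
}
```

This keeps the heap/off-heap concerns separate: the JVM heap stays small for application work, while the memory policy governs where the 80-120 GB of cache data actually lives.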

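[Editor's note] The heap-sizing advice in the thread (equal Xms/Xmx, a few GB rather than tens of GB) can be applied through the JVM_OPTS environment variable that ignite.sh picks up, rather than by editing the script as Humphrey suggests. A sketch, assuming a default Ignite 2.x distribution layout; the 4 GB figure is an example starting point, not a recommendation for every workload:

```shell
# Equal -Xms/-Xmx so the JVM never has to grow (or try to return) heap;
# G1 is the collector the Ignite tuning guide commonly suggests.
export JVM_OPTS="-Xms4g -Xmx4g -XX:+UseG1GC"

# Start the node; data itself resides off-heap, so the heap stays small.
./bin/ignite.sh
```

If GC pressure shows up under load, increase the heap in small steps instead of jumping to 40-120 GB values, which is what triggered the heavy-GC warning earlier in the thread.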