I’m running a set of four server applications on a local system to simulate
a cluster.



Each of the servers has the following memory configuration set:



        public override void ConfigureRaptorGrid(IgniteConfiguration cfg)
        {
            // Set to the minimum advised memory for the Ignite grid JVM (512 MB)
            cfg.JvmInitialMemoryMb = 512;

            // Set the JVM max to 1 GB
            cfg.JvmMaxMemoryMb = 1 * 1024;

            // Don't permit the Ignite node to use more than 1 GB RAM
            // (handy when running locally...)
            cfg.MemoryConfiguration = new MemoryConfiguration()
            {
                SystemCacheMaxSize = (long)1 * 1024 * 1024 * 1024
            };
        }
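
For context, my (possibly wrong) understanding is that JvmInitialMemoryMb/JvmMaxMemoryMb only control the JVM heap (-Xms/-Xmx), and that SystemCacheMaxSize only caps Ignite's internal system cache region rather than the default data region where cache entries live. The sketch below shows the kind of additional capping I have been wondering about, assuming the Ignite 2.x MemoryPolicyConfiguration API; the "Capped_Region" name and the 1 GB figure are purely illustrative:

            // Sketch only: additionally cap the default data region's off-heap memory.
            // Assumes Apache.Ignite.Core.Cache.Configuration.MemoryPolicyConfiguration
            // (Ignite 2.1 - 2.3 era API) is what applies here.
            cfg.MemoryConfiguration = new MemoryConfiguration()
            {
                SystemCacheMaxSize = (long)1 * 1024 * 1024 * 1024,
                DefaultMemoryPolicyName = "Capped_Region",
                MemoryPolicies = new[]
                {
                    new MemoryPolicyConfiguration
                    {
                        Name = "Capped_Region",           // illustrative name only
                        InitialSize = 256L * 1024 * 1024, // start at 256 MB...
                        MaxSize = 1L * 1024 * 1024 * 1024 // ...and never grow past 1 GB
                    }
                }
            };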



The screenshot below is from the Windows 10 Task Manager with the Commit Size
column included. As can be seen, the four identical servers are using very
large and wildly varying commit sizes. Some Googling suggests this is due to
the JVM allocating the largest contiguous block of virtual memory it can, but
I would not have expected this size to be larger than the configured memory
for the JVM (1 GB, plus memory from the wider process it is running in, though
this is only a few hundred MB at most).





The result is that my local system reports ~50-60 GB of committed memory on a
system with 16 GB of physical RAM, and I don’t think it likes it!
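
To compare the operating system's view with the JVM's own view, I imagine something like the following could be used to dump the local node's committed-memory figures (a sketch only, assuming a started IIgnite instance and the standard IClusterMetrics properties):

            // Sketch: report the JVM's own committed-memory figures for the local
            // node, for comparison with the Commit Size column in Task Manager.
            // "ignite" is assumed to be a started IIgnite instance.
            var metrics = ignite.GetCluster().GetLocalNode().GetMetrics();

            Console.WriteLine($"Heap committed:     {metrics.HeapMemoryCommitted:N0} bytes");
            Console.WriteLine($"Heap maximum:       {metrics.HeapMemoryMaximum:N0} bytes");
            Console.WriteLine($"Non-heap committed: {metrics.NonHeapMemoryCommitted:N0} bytes");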



Is there a way to configure the Ignite JVM to be a better citizen with
respect to the committed size it requests from the host operating system?
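
For example, is it a matter of passing extra options through to the node's JVM? Something along these lines is the kind of knob I have in mind (purely illustrative flags; I don't know whether either of these would actually reduce the commit size):

            // Purely illustrative: pass additional flags to the Ignite node's JVM
            // via IgniteConfiguration.JvmOptions (requires System.Collections.Generic).
            cfg.JvmOptions = new List<string>
            {
                "-XX:MaxDirectMemorySize=256m", // example flag only
                "-XX:+UseCompressedOops"        // example flag only
            };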



Thanks,

Raymond.
