Hi, I have a question about memory usage.
My cluster has one combined master/data node and three data nodes, each with a 6 GB heap (roughly half of the machine's RAM). I have 250 shards, with replicas, and about 600 GB of data in total. After I start the cluster, it runs fine for about a week; then it begins to fail because free memory drops below 10%. Restarting all the nodes fixes it, and free memory climbs back to around 40%, but the failures return a week after the restart.
I suspect some data is staying in memory for a long time even when it is no longer used. Is there any configuration to control this? Should I flush indices or clear caches periodically?
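For context, this is roughly what I mean by flushing and clearing caches: Elasticsearch exposes REST endpoints for both, which a periodic cron job could call. A sketch, assuming the default HTTP port 9200 on localhost:

```shell
# Flush all indices: commit in-memory data and clear the transaction log
curl -XPOST 'http://localhost:9200/_flush'

# Clear caches (e.g. fielddata and filter caches) across all indices
curl -XPOST 'http://localhost:9200/_cache/clear'
```

I am not sure whether calling these regularly is recommended practice or just masks the underlying problem, which is part of my question.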
Thanks,
Umutcan
