I ran '_cache/clear', which cleaned up the fielddata and id_cache, and JVM memory usage dropped from ~10.5GB to ~5GB.
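For anyone else hitting this, the calls I used looked roughly like the sketch below. The parameter names (`field_data`, `id`) and the node-stats form are from the 0.90.x-era API as I recall it, and the host/port are assumed, so double-check against your version's docs:

```shell
#!/bin/sh
# Host assumed; override with e.g. ES=http://myhost:9200
ES=${ES:-http://localhost:9200}

# Clear the fielddata and id caches on all indices
# (the `|| echo` keeps the script going if the node is unreachable)
curl -s -XPOST "$ES/_cache/clear?field_data=true&id=true" \
  || echo "node not reachable at $ES"

# Compare heap usage before/after via node stats
curl -s "$ES/_nodes/stats?jvm=true&pretty=true" \
  || echo "node not reachable at $ES"
```

Watching `jvm.mem.heap_used_in_bytes` in the stats output before and after the clear is how I saw the ~10.5GB -> ~5GB drop.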
Shouldn't ES itself clear these caches when JVM memory usage gets really high? I see the GC count kept increasing, but not much memory was reclaimed until I ran _cache/clear.

On Thursday, February 13, 2014 10:19:28 AM UTC-5, Ankush Jhalani wrote:
>
> We have a single-node, 12GB, 16-core ES instance into which 12 threads
> are bulk indexing into a 12-shard index. Each thread sends requests
> ranging from a few KB to a couple of megabytes. The bulk thread pool
> queue_size is increased from the default 50 to 100.
>
> With v0.90.11, we are noticing that JVM memory usage keeps growing
> slowly and doesn't go down; GC runs frequently but doesn't free up much
> memory. From the debug logs, it seems segment merges are happening.
> However, even after we stop indexing, the instance stays busy doing
> segment merges for many hours. Sample gist from hot threads I ran a
> couple of minutes apart: https://gist.github.com/ajhalani/8976792.
> Even after 16 hours with little use of the machine, JVM memory usage is
> about 80% (CMS should run at 75%) and node stats show GC is running
> very frequently.
>
> If we don't stop indexing, eventually after indexing 60-70GB the
> instance goes out of memory. This seems like a memory leak; we didn't
> face this issue with 0.90.7 (though we were probably using a 6-thread
> process for bulk indexing).

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/f53af0c2-3d30-4059-a044-54213f1a32f3%40googlegroups.com.
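For reference, the bulk thread pool change described in the quoted setup would look something like this in elasticsearch.yml. This is a sketch using the 0.90.x-era `threadpool.*` setting names; verify against your version before relying on it:

```yaml
# elasticsearch.yml (0.90.x-era setting names assumed)
# Raise the bulk queue from the default 50 to 100, per the setup above.
threadpool.bulk.queue_size: 100
```

Note that a larger queue only buffers more pending bulk requests in memory; it doesn't add indexing capacity, so it can make heap pressure worse under sustained load.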
