We have a single-node ES instance (12GB heap, 16 cores) into which 12 threads 
are bulk indexing into a 12-shard index. Each thread sends requests ranging 
from a few KB to a couple of megabytes. The bulk thread pool queue_size has 
been increased from the default 50 to 100.
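For reference, a minimal sketch of the queue_size change, assuming the 0.90.x-era `threadpool.bulk.queue_size` setting in elasticsearch.yml (setting name from memory; verify against your version's thread pool docs):

```yaml
# elasticsearch.yml — bulk thread pool tuning (0.90.x-style setting name, verify for your version)
threadpool.bulk.queue_size: 100   # default is 50; requests beyond this are rejected
```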

With v0.90.11, we are noticing that JVM memory usage keeps growing slowly 
and doesn't go down; GC runs frequently but doesn't free up much memory. 
From the debug logs, it seems segment merges are happening. However, even 
after we stop indexing, the instance stays busy doing segment merges for 
many hours. Here is a sample gist from hot threads runs a couple of minutes 
apart: https://gist.github.com/ajhalani/8976792. Even after 16 hours with 
little activity on the machine, JVM memory usage is about 80% (CMS should 
kick in at 75%), and node stats show that GC is running very frequently.

If we don't stop indexing, the instance eventually runs out of memory after 
indexing 60-70GB. This looks like a memory leak; we didn't face this issue 
with 0.90.7 (though we were probably using a 6-thread process for bulk 
indexing then).

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/a1819d5f-caa3-4ac4-886f-5b560eada87a%40googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.