My application continuously writes bulk updates to an Elasticsearch index (index size: ~200,000 docs, 35 MB; shards: 3*2; segment count ~35). My cluster has 3 nodes, each with 32 GB RAM and ES_HEAP_SIZE=16g, running Elasticsearch 1.3.4. I have set `index.merge.scheduler.max_thread_count: 1` because the nodes use spinning disks.
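For reference, this is roughly how the relevant per-node configuration looks (a sketch; the heap size is set via the environment, the merge setting via elasticsearch.yml):

```yaml
# elasticsearch.yml (each node) – sketch of the settings mentioned above
index.merge.scheduler.max_thread_count: 1   # limit concurrent merge threads on spinning disks

# set in the environment / init script, not in this file:
# ES_HEAP_SIZE=16g
```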

Unfortunately I often get OutOfMemory errors on every node after merges, and I have to restart Elasticsearch before any bulk requests work again:

[12:17:49,716][INFO ][index.engine.internal ] [cluster1] [v-2014-week41][1] now throttling indexing: numMergesInFlight=4, maxNumMerges=3
[12:17:49,716][INFO ][index.engine.internal ] [cluster1] [v-2014-week41][0] now throttling indexing: numMergesInFlight=4, maxNumMerges=3
[12:17:49,719][INFO ][index.engine.internal ] [cluster1] [v-2014-week41][0] stop throttling indexing: numMergesInFlight=2, maxNumMerges=3
[12:17:49,727][INFO ][index.engine.internal ] [cluster1] [v-2014-week41][1] stop throttling indexing: numMergesInFlight=2, maxNumMerges=3
... (hundreds of similar log entries follow, up to this one:)
[12:31:25,299][INFO ][index.engine.internal ] [cluster1] [v-2014-week41][1] stop throttling indexing: numMergesInFlight=2, maxNumMerges=3
[12:32:21,810][DEBUG][action.bulk ] [cluster1] [v-2014-week41][0], node[02934K_ySZKEaQ3S1Hv9SA], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.bulk.BulkShardRequest@320ade50]
java.lang.OutOfMemoryError: PermGen space
[12:32:24,776][WARN ][action.bulk ] [cluster1] Failed to send response for bulk/shard
java.lang.OutOfMemoryError: PermGen space
...

What can I do?
Should I increase ES_HEAP_SIZE?
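In case it helps with diagnosis, this is how I inspect JVM memory on the nodes (a sketch, assuming the cluster listens on localhost:9200; note the PermGen pool is reported under non-heap, not heap):

```shell
# Per-node JVM memory stats: heap_used, non-heap usage (PermGen on Java 7), GC counts
curl -s 'http://localhost:9200/_nodes/stats/jvm?pretty'
```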

--
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/54364518.9010505%40gmail.com.