Elasticsearch is set up as a single-node instance on a machine with 60G of RAM 
and 32 cores at 2.6GHz. I am actively indexing historic data with Logstash. It 
worked well up to ~300 million documents (search and indexing were both doing 
fine), but all of a sudden Elasticsearch fails to start and stay up: it runs 
for a few minutes, during which I can query it, then dies with an 
out-of-memory error. I monitor memory and at least 12G is still free when it 
fails. I had set ES_HEAP_SIZE to 31G and then reduced it to 28G, 24G, and 18G; 
I get the same error every time (see dump below).
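
For reference, this is roughly how I watch memory while Elasticsearch runs 
(a plain free, nothing fancy):

# free -g
# watch -n 5 'free -m'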

*My security limits are as follows (this is a test/POC server, hence the 
"root" user):*

root   soft    nofile          65536
root   hard    nofile          65536
root   -       memlock         unlimited
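
As a sanity check, the effective limits in the shell that launches 
Elasticsearch can be confirmed with ulimit; if the limits.conf entries above 
are actually applied, these should report 65536 and unlimited:

# ulimit -n        # max open files
# ulimit -l        # max locked memory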

*ES settings*
config]# grep -v "^#" elasticsearch.yml | grep -v "^$"
 bootstrap.mlockall: true

*echo $ES_HEAP_SIZE*
18432m
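
The variable is exported before launching from the tarball, along these lines 
(if running as a packaged service, the same variable would instead be set in 
/etc/sysconfig/elasticsearch or /etc/default/elasticsearch, depending on the 
package):

# export ES_HEAP_SIZE=18432m
# bin/elasticsearch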

---DUMP----

# bin/elasticsearch
[2014-05-04 13:30:12,653][INFO ][node                     ] [Sabretooth] 
version[1.1.1], pid[19309], build[f1585f0/2014-04-16T14:27:12Z]
[2014-05-04 13:30:12,653][INFO ][node                     ] [Sabretooth] 
initializing ...
[2014-05-04 13:30:12,669][INFO ][plugins                  ] [Sabretooth] 
loaded [], sites []
[2014-05-04 13:30:15,390][INFO ][node                     ] [Sabretooth] 
initialized
[2014-05-04 13:30:15,390][INFO ][node                     ] [Sabretooth] 
starting ...
[2014-05-04 13:30:15,531][INFO ][transport                ] [Sabretooth] 
bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address 
{inet[/10.109.136.59:9300]}
[2014-05-04 13:30:18,553][INFO ][cluster.service          ] [Sabretooth] 
new_master 
[Sabretooth][eocFkTYMQnSTUar94A2vHw][ip-10-109-136-59][inet[/10.109.136.59:9300]], 
reason: zen-disco-join (elected_as_master)
[2014-05-04 13:30:18,579][INFO ][discovery                ] [Sabretooth] 
elasticsearch/eocFkTYMQnSTUar94A2vHw
[2014-05-04 13:30:18,790][INFO ][http                     ] [Sabretooth] 
bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address 
{inet[/10.109.136.59:9200]}
[2014-05-04 13:30:19,976][INFO ][gateway                  ] [Sabretooth] 
recovered [278] indices into cluster_state
[2014-05-04 13:30:19,984][INFO ][node                     ] [Sabretooth] 
started
OpenJDK 64-Bit Server VM warning: Attempt to protect stack guard pages 
failed.
OpenJDK 64-Bit Server VM warning: Attempt to deallocate stack guard pages 
failed.
OpenJDK 64-Bit Server VM warning: INFO: 
os::commit_memory(0x00000007f7c70000, 196608, 0) failed; error='Cannot 
allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 196608 bytes for 
committing reserved memory.
# An error report file with more information is saved as:
# /tmp/jvm-19309/hs_error.log



----
*User untergeek on #logstash told me that I have hit the maximum number of 
indices a single node can handle (the dump above shows 278 indices being 
recovered, and each open index consumes file handles and off-heap memory for 
its shards). Here are my questions:*

   1. Can I move half of my indices to a new node? If yes, how do I do that 
   without compromising the indices? (See the sketch after this list for 
   what I have in mind.)
   2. Logstash creates one index per day, and I want two years of data to 
   stay indexable. Can I combine multiple indices into one, e.g. one index 
   per month? That way I would never have more than 24 indices.
   3. How many nodes are ideal for 24 months of data at ~1.5G a day?
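
For question 1, this is the kind of thing I have in mind, assuming index-level 
shard allocation filtering is the right tool for it; the index name below is a 
made-up example of Logstash's default daily naming, and 10.109.136.59 is this 
node's IP from the dump above. Please correct me if this is the wrong approach:

# Start a second node with the same cluster.name so it joins this cluster;
# shards should then rebalance automatically. To actively push one index off
# the old node:
curl -XPUT 'localhost:9200/logstash-2014.01.01/_settings' -d '{
  "index.routing.allocation.exclude._ip" : "10.109.136.59"
}'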
