So I tried to move some of the indexes to a new node.
I moved one index first, and when the elasticsearch master saw the data node,
it decided "hey, I don't have all the indexes that are on the other
node, so let's copy them back":
*MASTER: *
[2014-05-05 17:59:43,158][INFO ][gateway.local.state.meta ] [node1] auto
importing dangled indices [logstash-2013.12.31/OPEN] from
[[node2][aV-5pfthS8aEGlHRlAgFwA][ip-10-63-24-14][inet[/10.63.24.14:9300]]{master=false}]
*NODE: *
[2014-05-05 17:59:33,974][INFO ][gateway.local.state.meta ] [node2]
[logstash-2013.12.31] dangling index, exists on local file system, but not
in cluster metadata, scheduling to delete in [2h], auto import to cluster
state [YES]
If I set gateway.local.auto_import_dangled=no, it says it will
delete the indexes in 2 hours (the default), which I don't want either.
So we are back to square one. I can't seem to achieve the most
"fundamental" distributed property of elasticsearch if it insists on keeping
all indexes on all nodes, and elasticsearch cannot start because it now has
all the indexes on one node, which is too many.
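
For reference, these are the two settings involved, as I understand them for 1.x (a sketch for elasticsearch.yml; the names or defaults may differ in other versions, so check the docs for yours). The "closed" value is supposed to import the dangling index into the cluster state but keep it closed, which avoids both the auto-copy and the 2h deletion:

```
# Whether dangling indices found on disk are imported into the cluster state.
# Accepts yes / no / closed (closed = import, but keep the index closed).
gateway.local.auto_import_dangled: closed

# How long a dangling index that is not imported is kept on disk
# before deletion (defaults to 2h).
gateway.local.dangling_timeout: 48h
```
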
On Sunday, May 4, 2014 10:13:09 AM UTC-4, Nish wrote:
>
> elasticsearch is set up as a single-node instance on a machine with 60G of
> RAM and 32 cores at 2.6GHz. I am actively indexing historic data with
> logstash. It worked well with ~300 million documents (search and indexing
> were doing ok), but all of a sudden es fails to start and stay up. It runs
> for a few minutes and I can query it, but then it fails with an
> out-of-memory error. I monitor memory, and at least 12G is available when it
> fails. I had set ES_HEAP_SIZE to 31G and then reduced it to 28, 24 and 18,
> with the same error every time (see dump below).
>
> *My security limits are as follows (this is a test/POC server, hence the
> "root" user): *
>
> root soft nofile 65536
> root hard nofile 65536
> root - memlock unlimited
>
> *ES settings *
> config]# grep -v "^#" elasticsearch.yml | grep -v "^$"
> bootstrap.mlockall: true
>
> *echo $ES_HEAP_SIZE*
> 18432m
>
> ---DUMP----
>
> # bin/elasticsearch
> [2014-05-04 13:30:12,653][INFO ][node ] [Sabretooth]
> version[1.1.1], pid[19309], build[f1585f0/2014-04-16T14:27:12Z]
> [2014-05-04 13:30:12,653][INFO ][node ] [Sabretooth]
> initializing ...
> [2014-05-04 13:30:12,669][INFO ][plugins ] [Sabretooth]
> loaded [], sites []
> [2014-05-04 13:30:15,390][INFO ][node ] [Sabretooth]
> initialized
> [2014-05-04 13:30:15,390][INFO ][node ] [Sabretooth]
> starting ...
> [2014-05-04 13:30:15,531][INFO ][transport ] [Sabretooth]
> bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/
> 10.109.136.59:9300]}
> [2014-05-04 13:30:18,553][INFO ][cluster.service ] [Sabretooth]
> new_master
> [Sabretooth][eocFkTYMQnSTUar94A2vHw][ip-10-109-136-59][inet[/10.109.136.59:9300]],
>
> reason: zen-disco-join (elected_as_master)
> [2014-05-04 13:30:18,579][INFO ][discovery ] [Sabretooth]
> elasticsearch/eocFkTYMQnSTUar94A2vHw
> [2014-05-04 13:30:18,790][INFO ][http ] [Sabretooth]
> bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/
> 10.109.136.59:9200]}
> [2014-05-04 13:30:19,976][INFO ][gateway ] [Sabretooth]
> recovered [278] indices into cluster_state
> [2014-05-04 13:30:19,984][INFO ][node ] [Sabretooth]
> started
> OpenJDK 64-Bit Server VM warning: Attempt to protect stack guard pages
> failed.
> OpenJDK 64-Bit Server VM warning: Attempt to deallocate stack guard pages
> failed.
> OpenJDK 64-Bit Server VM warning: INFO:
> os::commit_memory(0x00000007f7c70000, 196608, 0) failed; error='Cannot
> allocate memory' (errno=12)
> #
> # There is insufficient memory for the Java Runtime Environment to
> continue.
> # Native memory allocation (malloc) failed to allocate 196608 bytes for
> committing reserved memory.
> # An error report file with more information is saved as:
> # /tmp/jvm-19309/hs_error.log
>
>
>
> ----
> *user untergeek on #logstash told me that I have reached the maximum number
> of indices for a single node. Here are my questions: *
>
>    1. Can I move half of my indexes to a new node? If yes, how do I do
>    that without compromising the indexes?
>    2. Logstash makes 1 index per day and I want to have 2 years of data
>    indexable. Can I combine multiple indexes into one, say one index per
>    month? That would mean I never have more than 24 indexes.
>    3. How many nodes are ideal for 24 months of data at ~1.5G a day?
>
>
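
On question 1, the approach I ended up trying (a sketch, assuming a running cluster with the new node joined under the name "node2"; substitute your own node and index names) is shard allocation filtering, which asks the cluster to relocate shards over the network rather than copying files by hand:

```
# Pin this index to node2; the cluster relocates its shards there.
curl -XPUT 'http://localhost:9200/logstash-2013.12.31/_settings' -d '{
  "index.routing.allocation.include._name": "node2"
}'

# Watch the relocation progress.
curl 'http://localhost:9200/_cat/shards/logstash-2013.12.31?v'
```

On question 2, going forward you can get monthly instead of daily indices by changing the index pattern in the logstash elasticsearch output to something like `index => "logstash-%{+YYYY.MM}"`; existing daily indices would still need to be reindexed into the monthly ones.
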
--
You received this message because you are subscribed to the Google Groups
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To view this discussion on the web visit
https://groups.google.com/d/msgid/elasticsearch/5171155f-1223-45bc-a8f1-0e7f8d7c3646%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.