Running with a heap larger than 32GB brings a lot of inefficiencies into
play: the JVM can no longer use compressed object pointers, so every Java
reference doubles in size, and your GC pause times will increase (as you
are seeing).
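
In the standard startup config that would look something like this (a
sketch, assuming the usual ES_HEAP_SIZE environment variable is how you
size the heap):

```shell
# Keep the heap at or below ~31g so compressed pointers stay enabled;
# leave the rest of the RAM to the OS filesystem cache, which Lucene
# relies on heavily for search performance.
export ES_HEAP_SIZE=31g
```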

When you shut a node down, the shards it was hosting go into an unassigned
state and the cluster will try to reallocate them elsewhere. If the node
then rejoins the cluster, those shards are still initialised and
reallocated, even if they end up back on the same node. This is done to
keep the cluster state consistent and your shards evenly distributed.

If you are just restarting a node for whatever reason, you can
set cluster.routing.allocation.disable_allocation to true, restart the
node, and when it comes back it will simply reinitialise and reopen the
shards it already has on local disk, which is a much faster recovery. Make
sure you set it back to false once things are green again.
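
For example, via the cluster settings API (assuming a node reachable on
localhost:9200; adjust the host to suit your setup):

```shell
# Disable shard allocation before restarting the node (transient
# settings are lost if the whole cluster restarts)
curl -XPUT localhost:9200/_cluster/settings -d '{
  "transient": { "cluster.routing.allocation.disable_allocation": true }
}'

# ... restart the node and wait for it to rejoin ...

# Re-enable allocation once the node is back and shards are reopened
curl -XPUT localhost:9200/_cluster/settings -d '{
  "transient": { "cluster.routing.allocation.disable_allocation": false }
}'
```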


Ideally you need to add more nodes to your cluster: you could split those
two machines into 4 VMs/containers, or just add more physical nodes. Just
remember my first comment though.

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: [email protected]
web: www.campaignmonitor.com


On 26 December 2013 00:13, Prometheus WillSurvive <
[email protected]> wrote:

> For the last couple of days I have been looking through the forums and
> the blogs for help or clues about large indexes and heap usage similar to
> our use case.
>
> Unfortunately I didn't find a solution that helps my case. Here is the
> setup:
>
> We have two test servers, each with:
>
> 128 gig Ram
> 24 core xeon cpu
> 3TB disk
>
> We have 10 indexes, each with 5 shards and 0 replicas. Each index is
> around 120 gig in size; two of them are 190 gig. The index mapping has a
> parent-child relation.
>
> The ES heap is 80 gig.
>
>
> Our main problem is starting the ES server. When ES starts to open the
> indexes it always requires a recovery process, even when we cleanly shut
> down the ES server beforehand.
>
> While ES is recovering the index shards, the heap keeps growing.
> Especially when ES starts to recover/initialise the 190 gig index, the
> heap is almost full and it goes into an endless GC cycle.
>
> Why does ES use so much heap to open / recover / initialise shards?
>
> Why does ES not release the heap it used after the shards have been
> successfully opened?
>
> What is the mechanism behind the index initialisation process?
>
> Why does ES recover the indexes every time?
>
> What would you suggest?
>
> thanks
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/5c1a453c-04d5-4f99-ab0a-d9bd743e3b34%40googlegroups.com
> .
> For more options, visit https://groups.google.com/groups/opt_out.
>
