For the last couple of days I have been searching the forums and blogs for 
help or clues about large indexes and heap usage in a case similar to ours.

Unfortunately I haven't found a solution that helps. Here is the setup:

We have two test servers, each with:

128 GB RAM
24-core Xeon CPU
3 TB disk

We have 10 indexes, each with 5 shards and 0 replicas. Each index is around 
120 GB in size; two of them are about 190 GB. The index mapping has a 
parent-child relation.

The ES heap is set to 80 GB.
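
For reference, this is roughly how we confirm the heap size and watch heap 
usage on each node. It is only a minimal sketch using the Python requests 
library, and localhost:9200 is just an assumption about where the test node 
listens:

import requests

# Fetch per-node JVM stats from the node (assumed to be on localhost:9200).
stats = requests.get("http://localhost:9200/_nodes/stats/jvm").json()

for node_id, node in stats["nodes"].items():
    mem = node["jvm"]["mem"]
    heap_used_gb = mem["heap_used_in_bytes"] / 1024.0 ** 3
    heap_max_gb = mem["heap_max_in_bytes"] / 1024.0 ** 3
    print("%s: heap %.1f / %.1f GB" % (node["name"], heap_used_gb, heap_max_gb))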


Our main problem is starting the ES server. When ES starts to open the 
indexes, it always goes through the recovery process, even though we shut 
the server down cleanly beforehand.

While ES is recovering the index shards, heap usage keeps climbing. 
Especially when ES starts to recover/initialize one of the 190 GB indexes, 
the heap is almost full and the node goes into an endless GC cycle.
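
For what it's worth, while the node is starting up we watch the recovery 
progress with something like the rough sketch below (again using the Python 
requests library, with localhost:9200 assumed):

import time
import requests

# Poll the cat recovery API on the node (assumed localhost:9200) to see
# which shards are still recovering while the heap climbs. Stop with Ctrl-C.
while True:
    recovery = requests.get("http://localhost:9200/_cat/recovery?v").text
    print(recovery)
    time.sleep(30)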

Why does ES use so much heap to open/recover/initialize shards?

Why doesn't ES release the heap it used after the shards have been 
successfully opened?

What is the mechanism behind the index initialization process?

Why does ES recover the indexes every time?

What would you suggest?

thanks
  
