ES version 1.5.2
Arch Linux on Amazon EC2
Of the available 16 GB, 8 GB is heap (mlocked). Memory consumption is 
continuously increasing (225 MB per day). 
The total number of documents is around 800k (roughly 500 MB). 

cat /proc/meminfo has:

> Slab:          3424728 kB
> SReclaimable:  3407256 kB

curl -XGET 'http://localhost:9200/_nodes/stats/jvm?pretty'

> "heap_used_in_bytes" : 5788779888,
> "heap_used_percent" : 67,
> "heap_committed_in_bytes" : 8555069440,
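
For comparison, the same node stats API should also report the OS-level 
memory numbers (same metric filtering as above; I believe the metric name 
is "os" on 1.x, but worth double-checking against the 1.5 docs):

curl -XGET 'http://localhost:9200/_nodes/stats/os?pretty'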

slabtop

>     OBJS   ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
> 17750313 17750313 100%    0.19K 845253       21   3381012K dentry
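
To double-check that it really is the dentry cache that keeps growing while 
ES runs, I could log the object count over time with something like this 
(assuming root access, since /proc/slabinfo is usually root-only):

  while true; do
    date +%T
    sudo grep '^dentry ' /proc/slabinfo | awk '{print "  dentry objects:", $2}'
    sleep 600
  done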
 

So I think the continuous increase in memory usage is because of the slab 
usage; if I restart ES, the slab memory is freed. I see that ES still has 
some free heap available, but the Elastic documentation says:

> Lucene is designed to leverage the underlying OS for caching in-memory 
> data structures. Lucene segments are stored in individual files. Because 
> segments are immutable, these files never change. This makes them very 
> cache friendly, and the underlying OS will happily keep hot segments 
> resident in memory for faster access.

My question is: should I add more nodes, or increase the RAM of each node 
so that Lucene can use as much memory as it wants? How significant would 
the performance difference be if I choose to upgrade the ES machines to 
have more RAM?

Or can I make some optimizations that decrease the slab usage, or clean 
the slab memory partially?
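
If cleaning it partially is an option, one thing I could try (assuming root 
on the node) is asking the kernel to drop the reclaimable dentry/inode 
caches, or to reclaim them more aggressively, without restarting ES:

  # drop reclaimable dentries and inodes once (2 = dentries/inodes only;
  # 3 would also drop the page cache, which would hurt Lucene's file caching)
  sync
  echo 2 | sudo tee /proc/sys/vm/drop_caches

  # make the kernel prefer reclaiming dentry/inode caches under memory
  # pressure (default is 100; higher values reclaim sooner)
  sudo sysctl -w vm.vfs_cache_pressure=200

My understanding is that this only touches reclaimable kernel caches and 
does not affect the mlocked heap, but I am not sure whether it is a good 
idea on a busy node.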

