Hi Mark,

Thanks. I understand that caching makes ES perform better and that this is normal. What I don't understand is the unusual size of the dentry cache for the amount of data I have (it grows by roughly 200+ MB per day). I don't see this behaviour on our ELK ES cluster, which holds many times the data this one does. Does that mean an unusual number of segments is being created? Is there something that needs to be optimized? The only difference is that we take hourly snapshots directly to S3. Is it possible that the S3 paths also end up as dentry objects? Could the number of snapshots have something to do with it? (I know that keeping too many snapshots makes snapshotting slower.) Note that when I restart ES, most of the cache is cleared (maybe the OS reclaims it once it sees that the parent process has stopped). I've put the diagnostics I'm running below the quoted message.

On Monday, May 4, 2015 at 4:17:40 PM UTC+5:30, Pradeep Reddy wrote:
>
> ES version 1.5.2, Arch Linux on Amazon EC2.
> Of the available 16 GB, 8 GB is heap (mlocked). Memory consumption is
> continuously increasing (~225 MB per day).
> Total number of documents is around 800k, about 500 MB.
>
> cat /proc/meminfo shows:
>
>     Slab:          3424728 kB
>     SReclaimable:  3407256 kB
>
> curl -XGET 'http://localhost:9200/_nodes/stats/jvm?pretty'
>
>     "heap_used_in_bytes" : 5788779888,
>     "heap_used_percent" : 67,
>     "heap_committed_in_bytes" : 8555069440,
>
> slabtop:
>
>     OBJS      ACTIVE    USE   OBJ SIZE  SLABS   OBJ/SLAB  CACHE SIZE  NAME
>     17750313  17750313  100%  0.19K     845253  21        3381012K    dentry
>
> So I think the continuous increase in memory usage comes from the slab;
> if I restart ES, the slab memory is freed. I see that ES still has some
> free heap available, but the Elastic documentation says:
>
>> Lucene is designed to leverage the underlying OS for caching in-memory
>> data structures. Lucene segments are stored in individual files. Because
>> segments are immutable, these files never change. This makes them very
>> cache friendly, and the underlying OS will happily keep hot segments
>> resident in memory for faster access.
>
> My question is: should I add more nodes, or increase the RAM of each
> node, to let Lucene use as much memory as it wants? How significant
> would the performance difference be if I upgraded the ES machines to
> have more RAM?
>
> Or, can I make some optimizations that decrease the slab usage, or clean
> the slab memory partially?
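To track the growth itself, I'm sampling the slab counters periodically; a minimal sketch using plain Linux tooling, nothing ES-specific:

    # Reclaimable slab is the number to watch; dentries live here
    grep -E 'Slab|SReclaimable' /proc/meminfo

    # One-shot slabtop, sorted by cache size, to confirm dentry is still the top consumer
    slabtop -o -s c | head -n 15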
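To clear the dentries without restarting ES, the kernel can be told to drop its reclaimable slab directly. This only discards clean cache entries, so it should be safe, but the caches will be repopulated as files are touched again (run as root):

    # Flush dirty pages first, then drop dentries and inodes
    # (2 = slab caches only; 3 would also drop the page cache, which would hurt Lucene)
    sync
    echo 2 > /proc/sys/vm/drop_caches

    # Optionally bias the kernel toward reclaiming dentries/inodes sooner (default is 100)
    sysctl -w vm.vfs_cache_pressure=200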
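To test the snapshot theory, I'm correlating the dentry growth with the snapshot and segment counts; a sketch, where s3_repo is a placeholder for our actual repository name:

    # List every snapshot in the repository; hourly snapshots add up quickly
    curl -XGET 'http://localhost:9200/_snapshot/s3_repo/_all?pretty'

    # Per-shard segment counts; an unusually high number would point at merging instead
    curl -XGET 'http://localhost:9200/_cat/segments?v'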
I understand that caching makes ES perform better, and it's normal. What I don't understand is the unusual size of dentry objects (dentry size increase at about 200+ mb per day?) for the data size I have. There isn't this behaviour on the ELK ES where I have many times of data compared to this. Does that mean there are unusual no of segments being created?, is there something that needs to be optimized? The only thing that is different is that we take hourly snapshots to S3 directly, is it possible that the S3 paths are also part of dentry objects? is it possible that the no of snapshots has some thing to do with? (I know that having too many no of snapshots will make snapshotting slower). Note that when I restart the ES it gets cleared(most of it, may be OS clears up this cache once it sees that the parent process has been stopped). On Monday, May 4, 2015 at 4:17:40 PM UTC+5:30, Pradeep Reddy wrote: > > ES version 1.5.2 > Arch Linux on Amazon EC2 > of the available 16 GB, 8 GB is heap (mlocked). Memory consumption is > continuously increasing (225 MB per day). > Total no of documents is around 800k, 500 MB. > > cat /proc/meminfo has >> >> Slab: 3424728 kB > > SReclaimable: 3407256 kB >> > > > > curl -XGET 'http://localhost:9200/_nodes/stats/jvm?pretty' >> >> "heap_used_in_bytes" : 5788779888, >> "heap_used_percent" : 67, >> "heap_committed_in_bytes" : 8555069440, >> >> > slabtop > OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE > NAME > 17750313 17750313 100% 0.19K 845253 21 3381012K dentry > > > So the continuous increase in memory usage is because of the slab usage I > think, If I restart ES, then slab memory is freed. I see that ES still has > some free heap available, but from elastic documentation > >> Lucene is designed to leverage the underlying OS for caching in-memory >> data structures. Lucene segments are stored in individual files. Because >> segments are immutable, these files never change. This makes them very >> cache friendly, and the underlying OS will happily keep hot segments >> resident in memory for faster access. >> > > My question is, should I add more nodes or increase the ram of each node > to let lucene use as much memory as it wants ? how significant performance > difference will be there if I choose to upgrade ES machines to have more > RAM. > > Or, can I make some optimizations that decreases the slab usage or clean > slab memory partially? > > > -- Please update your bookmarks! We moved to https://discuss.elastic.co/ --- You received this message because you are subscribed to the Google Groups "elasticsearch" group. To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/2d460ca2-bd9a-45d6-a421-5b4b35d812aa%40googlegroups.com. For more options, visit https://groups.google.com/d/optout.