Hello,

I'm running an Elasticsearch node on a FreeBSD server, on top of ZFS storage.
So far I've assumed that ES is smart and manages its own cache, so I've
disabled the primary cache for data, leaving only metadata cacheable. The last
thing I want is data cached twice, once in the ZFS ARC and a second time in
the application's own cache. I've also disabled compression:

$ zfs get compression,primarycache,recordsize  zdata/elasticsearch
NAME                 PROPERTY      VALUE         SOURCE
zdata/elasticsearch  compression   off           local
zdata/elasticsearch  primarycache  metadata      local
zdata/elasticsearch  recordsize    128K          default
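
For reference, those properties were set with plain zfs set commands on the
dataset shown above:

$ zfs set compression=off zdata/elasticsearch
$ zfs set primarycache=metadata zdata/elasticsearch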

It's a general-purpose server (web, MySQL, mail, ELK, etc.). I'm not looking
for the absolute best ES performance; I'm looking for the best use of my
resources. I have 16 GB of RAM, and I plan to put a cap on the ARC size (it
currently consumes 8.2 GB) so I can mlockall the ES memory. But I don't think
I'll go the RAM-only storage route
(<http://jprante.github.io/applications/2012/07/26/Mmap-with-Lucene.html>), as
I'm running only one node.
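
Concretely, I have something like this in mind; the 4G cap is only a
placeholder and not a tuned value, and the paths are the FreeBSD port's
defaults as far as I know:

# /boot/loader.conf -- cap the ARC at boot (placeholder value)
vfs.zfs.arc_max="4G"

# /usr/local/etc/elasticsearch/elasticsearch.yml -- lock the ES heap in RAM
bootstrap.mlockall: true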

How can I estimate the amount of memory I should allocate to the ES process?
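
For context, here's how I'm currently watching the JVM heap usage, assuming
the node listens on localhost:9200:

$ curl -s 'localhost:9200/_nodes/stats/jvm?pretty' | grep heap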

Should I switch primarycache=all back on, even though ES already caches data
itself?

What is the best ZFS record/block size to accommodate Elasticsearch/Lucene
I/Os?
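
If a smaller record size turns out to be the recommendation, I assume I would
apply it like this (8K is just an example value), keeping in mind that
recordsize only affects files written after the change:

$ zfs set recordsize=8K zdata/elasticsearch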

Thanks,
Patrick
