No one?
On 13 May 2014, at 07:39, Patrick Proniewski <[email protected]> wrote:

> Hello,
> 
> I'm running an Elasticsearch node on a FreeBSD server, on top of ZFS storage. 
> So far I've assumed that ES is smart and manages its own cache, so I've 
> disabled the primary cache for data, leaving only metadata cacheable. The last 
> thing I want is data cached twice: once in the ZFS ARC and a second 
> time in the application's own cache. I've also disabled compression:
> 
> $ zfs get compression,primarycache,recordsize  zdata/elasticsearch
> NAME                 PROPERTY      VALUE         SOURCE
> zdata/elasticsearch  compression   off           local
> zdata/elasticsearch  primarycache  metadata      local
> zdata/elasticsearch  recordsize    128K          default
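> For anyone reproducing this, the properties above were set with ordinary 
> zfs set commands on the dataset shown (a sketch, not a recommendation):

```shell
# Disable compression and restrict the primary cache (ARC) to metadata only
# for the Elasticsearch dataset.
zfs set compression=off zdata/elasticsearch
zfs set primarycache=metadata zdata/elasticsearch
# recordsize was left at its 128K default; it could be tuned later, e.g.:
# zfs set recordsize=8K zdata/elasticsearch
```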
> 
> It's a general-purpose server (web, MySQL, mail, ELK, etc.). I'm not looking 
> for the absolute best ES performance; I'm looking for the best use of my 
> resources. I have 16 GB of RAM, and I plan to put a limit on the ARC size 
> (currently consuming 8.2 GB of RAM) so I can mlockall the ES memory. But I 
> don't think I'll go the RAM-only storage route 
> (<http://jprante.github.io/applications/2012/07/26/Mmap-with-Lucene.html>) as 
> I'm running only one node.
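> Concretely, the plan would look something like this — the values are 
> placeholders, not recommendations; vfs.zfs.arc_max is the standard FreeBSD 
> tunable and bootstrap.mlockall the ES 1.x setting:

```shell
# /boot/loader.conf -- cap the ZFS ARC on FreeBSD (value is a placeholder)
vfs.zfs.arc_max="4G"

# elasticsearch.yml -- lock the ES heap in RAM (ES 1.x setting):
#   bootstrap.mlockall: true

# And the heap size itself, e.g. in the ES start script or environment:
# ES_HEAP_SIZE=4g
```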
> 
> How can I estimate the amount of memory I must allocate to ES process?
> 
> Should I switch primarycache=all back on despite ES already caching data?
> 
> What is the best ZFS record/block size to accommodate Elasticsearch/Lucene 
> I/O?
> 
> Thanks,
> Patrick


-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.