Does anyone know whether Linux file caching (the page cache) is
compartmentalized per Docker container, and whether cached pages are
accounted for in a container's memory limit?
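
For what it's worth, my (possibly wrong) understanding is that the cgroup
memory controller tracks page cache per cgroup. A minimal way to check from
inside a container, assuming the cgroup v1 paths Docker typically uses:

    # Page cache charged to this container's cgroup (the cgroup v1 layout
    # is an assumption about the host):
    grep -w cache /sys/fs/cgroup/memory/memory.stat
    # usage_in_bytes includes that cache; the limit is enforced against it:
    cat /sys/fs/cgroup/memory/memory.usage_in_bytes
    cat /sys/fs/cgroup/memory/memory.limit_in_bytes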

The particular context of this question is Elasticsearch:
https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html#_give_less_than_half_your_memory_to_lucene

"Lucene is designed to leverage the underlying OS for caching in-memory
data structures.
<https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html#id-1.10.4.11.9.4.1>

<https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html#id-1.10.4.11.9.4.2>Lucene
segments are stored in individual files. Because segments are immutable,
these files never change. This makes them very cache friendly, and the
underlying OS will happily keep hot segments resident in memory for faster
access."

So the question is: if I want to reserve 4GB of heap (via JVM options) for
Elasticsearch running in a container, plus 4GB of page cache for Lucene
performance, do I set the container's memory limit to 8GB, or do I try to
ensure that the host the container runs on has 4GB of RAM free outside the
container? The two options would look something like the sketch below.
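
As a concrete sketch of the two options (the image name and the
ES_JAVA_OPTS variable are assumptions about how the container is launched;
substitute whatever the actual deployment uses):

    # Option A: account for the page cache inside the container's limit,
    # i.e. 4GB heap + 4GB expected cache = 8GB:
    docker run --memory=8g -e ES_JAVA_OPTS="-Xms4g -Xmx4g" elasticsearch

    # Option B: limit the container to the heap alone and rely on 4GB of
    # host RAM staying free for the page cache outside the container:
    docker run --memory=4g -e ES_JAVA_OPTS="-Xms4g -Xmx4g" elasticsearch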