Does anyone know whether the Linux page cache is compartmentalized per Docker
container, i.e. accounted against each container's memory limit?

The particular context of this question is Elasticsearch:

"Lucene is designed to leverage the underlying OS for caching in-memory
data structures.

Segments are stored in individual files. Because segments are immutable,
these files never change. This makes them very cache friendly, and the
underlying OS will happily keep hot segments resident in memory for faster
access."

So the question is: if I want to reserve 4GB (via JVM options) for
Elasticsearch running in a container, plus 4GB of file cache for Lucene
performance, do I give the container an 8GB limit, or try to ensure that
the host has 4GB of RAM free outside the container?
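For what it's worth, Docker memory limits are enforced by the kernel's memory cgroup controller, and that controller does charge page cache to the cgroup whose process faulted the pages in (the cache is reclaimable under pressure, before an OOM kill). A minimal sketch for checking the charge from inside a container, assuming the standard cgroup v1/v2 mount points; the helper name is mine:

```python
from pathlib import Path

def container_page_cache_bytes():
    """Return page-cache bytes charged to the current cgroup, or None.

    Inside a Docker container these files reflect the container's own
    memory cgroup, so the value is cache counted against its limit.
    """
    candidates = (
        # cgroup v2: the "file" counter in the unified hierarchy
        (Path("/sys/fs/cgroup/memory.stat"), "file"),
        # cgroup v1: the "cache" counter in the memory controller
        (Path("/sys/fs/cgroup/memory/memory.stat"), "cache"),
    )
    for path, key in candidates:
        if path.is_file():
            for line in path.read_text().splitlines():
                name, _, value = line.partition(" ")
                if name == key:
                    return int(value)
    return None  # no memory cgroup stats visible (e.g. non-Linux host)

print(container_page_cache_bytes())
```

If that's accurate, the practical answer would be to size the container limit for heap plus desired cache (8GB here), since cache charged to the container does not come out of "free" host memory.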
