Hi Joachim.

Joachim Fahrner - 11.08.17, 17:43:
> Linux uses all available memory for caching of filesystems. When copying
> large files to slow network filesystems (nfs, smb, sshfs, davfs) it
> takes a long time until such allocated memory becomes free. When these
> network filesystems saturate memory linux becomes very unresponsive. It
> can take minutes to start applications.
> Is there a way to limit memory usage of network filesystems?

Not specifically for network filesystems, but for all filesystems.


- /proc/sys/vm/dirty_background_bytes (and ratio… but I'd set it in bytes)
- sysctl
- /etc/sysctl.d
- Kernel documentation: sysctl/vm.txt
- If you want to get rid of caches for specific files or run a workload in a 
way that tells the kernel not to cache, see package nocache
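A quick sketch of how to query and set those knobs at runtime (the byte values are just examples, and note that the *_bytes and *_ratio variants are mutually exclusive):

```shell
# Query the current writeback thresholds:
sysctl vm.dirty_background_bytes vm.dirty_bytes

# Set them at runtime (as root). Writing a *_bytes knob zeroes the
# corresponding *_ratio knob, and vice versa.
sysctl -w vm.dirty_background_bytes=$((1 * 1024 * 1024 * 1024))   # 1 GiB
sysctl -w vm.dirty_bytes=$((2 * 1024 * 1024 * 1024))              # 2 GiB
```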

Linus once recommended setting this to about what your storage can write out 
in 2-3 seconds, AFAIR. I concur. I'd set this to no more than what it can 
write in about 4-5 seconds. That is enough for Linux to optimize write 
ordering for rotating disks. Linus said the usual threshold of 10% was chosen 
in times when computers had way less memory. Unfortunately the default value 
was never replaced by some kind of autotuning.

I use this for example:

martin@merkaba:~> cat /etc/sysctl.d/vm.conf 
# Only cache what my SSDs can write in a few seconds
vm.dirty_background_bytes = 1073741824
vm.dirty_bytes = 2147483648
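Such a drop-in file can be loaded without a reboot; a sketch, run as root (in this example vm.dirty_bytes is simply twice vm.dirty_background_bytes):

```shell
# Re-read all sysctl configuration, including /etc/sysctl.d/*.conf:
sysctl --system

# Or load just this one file:
sysctl -p /etc/sysctl.d/vm.conf
```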

Especially when local storage reads are faster than NFS writes, it is crucial 
to reduce this to a sane value. This helps especially well in combination with 
rsync, which calls sync() on each file after writing it. I once found a 2x 
speedup between a large and a low dirty memory threshold. A large threshold 
also tends to create bursty writeout: dirty memory piles up, then has to be 
written out all at once, then piles up again, and so on.
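You can watch that pile-up-and-drain pattern yourself during a large copy, via the kernel's dirty and writeback counters (standard fields in /proc/meminfo):

```shell
# Refresh the dirty/writeback counters every second while the copy runs:
watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'
```

With a low threshold the Dirty figure stays bounded; with the default it can grow to a sizable fraction of RAM before writeout kicks in.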

This is the short version (lol!). I teach this in my Linux performance 
analysis & tuning courses. Beware of old pages about this. They may contain 
misleading recommendations.

Dng mailing list