Hi,

Recently I've been setting up unprivileged LXC containers on an older server that has 6GB of physical RAM. While the containers are running, I occasionally see OOM errors in the host's syslog when the kernel kills a process inside one of the containers. Some investigation turned up a discrepancy between the (available) memory reported on the physical host and the memory reported inside a container:
host:~$ free
              total        used        free      shared  buff/cache   available
Mem:        6097956     1544404      150772       45716     4402780     4211020
Swap:       8388604      324800     8063804

container:~$ free
              total        used        free      shared  buff/cache   available
Mem:        6097956      222932     5875024       45716      696520     5875024
Swap:       8388604      324800     8063804

I can reliably trigger the OOM killer by compiling the Linux kernel inside a container. However, if I set a cgroup memory limit of 4GB in the container's config (see the P.S. at the end of this mail), the build completes successfully without triggering the OOM killer:

container-4GB:~$ free
              total        used        free      shared  buff/cache   available
Mem:        4194304      186636     4007668       45716      670260     4007668
Swap:       8388604      324800     8063804

I've not used LXC before on a host with this little physical RAM, so perhaps this is a known issue that I simply haven't run into before. Is the difference in available RAM as seen from within a container and on the physical host intentional? Other than setting cgroup limits explicitly in each container's config, is there some other way of alleviating the OOM errors I'm seeing?

This system is running Debian stretch (currently the "testing" distribution), with a 64-bit 4.7.4-grsec kernel, lxc 2.0.4 and lxcfs 2.0.3 from the stretch repository, and systemd 231.

Thanks for any assistance,
Mathias
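
P.S. The 4GB limit mentioned above is a single line in the container's config. A minimal sketch of what I mean, assuming the usual cgroup v1 memory controller key and the default unprivileged config location of ~/.local/share/lxc/<name>/config (the path and key names may differ on other setups):

  # cap this container's memory cgroup at 4 GiB
  lxc.cgroup.memory.limit_in_bytes = 4G

If swap accounting is enabled in the kernel (it may require swapaccount=1 on the kernel command line), memory.memsw.limit_in_bytes can be set the same way to cap memory plus swap.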