What kind of memory savings did you get using UKSM? Did you run it on an LXD server?
I finally got KSM working as expected on my containers using the ksm_preload technique (thanks Fajar!). Unfortunately, the RAM savings are not as high as I expected. I have about 50 containers, each running an identical nginx and php7 stack. I modified the startup scripts for nginx and php7 to use the “ksm-wrapper” tool (which uses the ksm_preload technique) and restarted all the containers. After everything settled down, the best I could get was about 150MB of saved memory, much less than I expected.

I looked into UKSM, but that requires a custom kernel (either build your own or use one of the pre-built ones, e.g. pf-kernel). Since these are production servers, I am very hesitant to use anything but a standard kernel.

-Ron

On Jun 6, 2017, at 5:13 AM, Andreas Freudenberg <lxc-us...@licomonch.net> wrote:

Hi,

as stated by Tomasz, KSM will only work for applications which support it. If you want KSM for all applications, you could try UKSM [1]. Worked pretty well here.

AF

[1] http://kerneldedup.org/en/projects/uksm/introduction/

On 05.06.2017 at 03:18, Tomasz Chmielewski wrote:

> KSM only works with applications which support it:
>
> "KSM only operates on those areas of address space which an application has advised to be likely candidates for merging, by using the madvise(2) system call: int madvise(addr, length, MADV_MERGEABLE)."
>
> This means that doing:
>
> echo 1 > /sys/kernel/mm/ksm/run
>
> will be enough for KVM, but will not do anything for applications like bash, nginx, apache, php-fpm and so on.
>
> Please refer to "Enabling memory deduplication libraries in containers" on https://openvz.org/KSM_(kernel_same-page_merging) - you will have to use the ksm_preload technique mentioned there. I haven't personally used it with LXD.
>
> Tomasz Chmielewski
> https://lxadm.com
>
> On Monday, June 05, 2017 09:48 JST, Ron Kelley <rkelley...@gmail.com> wrote:
>
>> Thanks Fajar.
>>
>> This is on-site with our own physical servers, storage, etc.
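[Editorial aside: the ksm_preload technique referenced above boils down to LD_PRELOAD injection, so that allocations get madvise(MADV_MERGEABLE) without patching the application. A minimal sketch follows; the `ksm_wrap` name, the `KSM_PRELOAD_SO` variable, and the library path are illustrative assumptions, not part of the ksm_preload project.]

```shell
# Sketch of the ksm_preload trick: LD_PRELOAD interposes the allocator
# so allocations are marked MADV_MERGEABLE and become candidates for
# ksmd merging. The library path below is an assumption; adjust it to
# wherever you built https://github.com/unbrice/ksm_preload.
ksm_wrap() {
    # Run any command with the preload library injected.
    LD_PRELOAD="${KSM_PRELOAD_SO:-/usr/local/lib/ksm_preload.so}" "$@"
}

# Example use in an init script (the host still needs ksmd enabled via
# "echo 1 > /sys/kernel/mm/ksm/run"):
#   ksm_wrap /usr/sbin/nginx
```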
>> The goal is to get as many containers per server as possible. While our servers have lots of RAM, we need to come up with a long-term scaling plan and hope KSM can help us scale beyond the standard numbers.
>>
>> As for the openvz link: I read that a few times but don't get any positive results using those methods. This leads me to believe either (a) LXD does not support KSM, or (b) the applications are not registering with the KSM part of the kernel.
>>
>> I am going to run through some tests this week to see if I can get KSM working outside the LXD environment, then try to replicate the same tests inside LXD.
>>
>> Thanks again for the feedback.
>>
>>> On Jun 4, 2017, at 6:15 PM, Fajar A. Nugraha <l...@fajar.net> wrote:
>>>
>>> On Sun, Jun 4, 2017 at 11:16 PM, Ron Kelley <rkelley...@gmail.com> wrote:
>>>
>>> (Reviving the thread about Container Scaling: https://lists.linuxcontainers.org/pipermail/lxc-users/2016-May/011607.html)
>>>
>>> We have hit critical mass with LXD 2.12 and I need to get Kernel Samepage Merging (KSM) working as soon as possible. All my research has come to a dead end, and I am reaching out to the group at large for suggestions.
>>>
>>> Background: We have 5 host servers, each running Ubuntu 16.04 (4.4.0-57-generic) with 8G RAM, 20G swap, and 50 containers (identical configs on every server: nginx and php 7).
>>>
>>> Is this a cloud, or an on-site setup?
>>>
>>> For cloud, there are a lot of options that could get you running with MUCH more memory, which would save you lots of headaches getting KSM to work. My favorite is EC2 spot instances on AWS.
>>>
>>> On another note, I now set up most of my hosts with no swap, since performance plummets whenever swap is used. YMMV.
>>>
>>> I am trying to get KSM working since each container is an identical replica of the others (other than hostname/IP).
>>> I have read a ton of information on the ‘net about Ubuntu and KSM, yet I can’t seem to get any pages to share on the host. I am not sure if this is a KSM config issue or if LXD won’t allow KSM between containers.
>>>
>>> Here is what I have done thus far:
>>> ----------------------------------
>>> * Installed the ksmtuned utility and verified ksmd is running on each host.
>>> * Created the ksm_preload and ksm-wrapper tools per this site: https://github.com/unbrice/ksm_preload
>>> * Created 50 identical Ubuntu 16.04 containers running nginx.
>>> * Modified the nginx startup script in each container to include the ksm_preload.so library; no issues running nginx.
>>>
>>> (Note: since I could not find a ksm_preload package for Ubuntu, I had to use the ksm-wrapper tool listed above.)
>>>
>>> All the relevant files under /sys/kernel/mm/ksm still show 0 (pages_shared, pages_sharing, etc.) regardless of what I do.
>>>
>>> Can anyone (@stgraber @brauner) confirm whether KSM is supported with LXD? If so, what is the “magic” to make it work? We really want to get 2-3x more sites per container if possible.
>>>
>>> Have you read https://openvz.org/KSM_(kernel_same-page_merging)? Some info might be relevant. For example, it mentions something which you did not write about:
>>>
>>> To start ksmd, issue
>>> [root@HN ~]# echo 1 > /sys/kernel/mm/ksm/run
>>>
>>> Also see the section about Tuning and Caveats.
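[Editorial aside: for interpreting the counters mentioned above, pages_sharing is the number of duplicate pages now backed by a single shared copy, so a rough savings estimate is pages_sharing times the page size. A minimal sketch; the helper name and sample numbers are illustrative (38,400 merged 4 KiB pages happens to work out to roughly the 150MB figure reported later in the thread).]

```shell
# Rough KSM savings estimate: each page counted in pages_sharing is
# approximately one page of RAM saved. Helper name is illustrative.
ksm_saved_kib() {
    # $1 = pages_sharing, $2 = page size in bytes
    echo $(( $1 * $2 / 1024 ))
}

# On a live host (assumes CONFIG_KSM=y, so /sys/kernel/mm/ksm exists):
#   ksm_saved_kib "$(cat /sys/kernel/mm/ksm/pages_sharing)" "$(getconf PAGESIZE)"
ksm_saved_kib 38400 4096   # 38400 merged pages at 4 KiB -> prints 153600 (KiB, i.e. ~150 MiB)
```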
>>>
>>> --
>>> Fajar
>>>
>>> _______________________________________________
>>> lxc-users mailing list
>>> lxc-users@lists.linuxcontainers.org
>>> http://lists.linuxcontainers.org/listinfo/lxc-users