I did try setting all of the items mentioned in production-setup.md. To
start with, a few of the recommended values don't seem reasonable:
max_user_instances defaults to 128, and we were able to see a difference
at 256, but not at 1024. Setting it to 1M seems silly.
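(For anyone reproducing this, and assuming the knob in question is the
fs.inotify.max_user_instances sysctl that production-setup.md refers to,
the experiment looks roughly like this; the sysctl.d file name is just
an example:)

    # check the current value
    sysctl fs.inotify.max_user_instances

    # bump it at runtime
    sudo sysctl -w fs.inotify.max_user_instances=256

    # persist across reboots (file name is arbitrary)
    echo 'fs.inotify.max_user_instances = 256' | \
        sudo tee /etc/sysctl.d/60-lxd-limits.conf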

I'll also note that my kernel memory consumption went up significantly
with those settings. By 12 containers I was already over 2GB of kernel
memory (whereas before, I peaked around 1.4GB of kernel memory at 19
containers).
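(In case anyone wants to reproduce these numbers, the slab totals in
/proc/meminfo are a simple way to watch this kind of growth; a sketch,
not necessarily the exact measurement I took:)

    # total kernel slab usage, split into reclaimable/unreclaimable
    grep -E '^(Slab|SReclaimable|SUnreclaim):' /proc/meminfo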

It seems to come down to a much larger number of "kmalloc-64" entries.
I'm not sure where those are coming from, but there are enough objects
that the OBJS count overflows the standard column widths in the slabtop
output.
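(To get around the column overflow, the raw counts in /proc/slabinfo
can be sorted directly; a rough sketch, needs root:)

    # caches sorted by active object count; kmalloc-64 ends up
    # near the top on the affected machine
    sudo sort -rn -k2 /proc/slabinfo | head -n 20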

With all of those set, I did get more containers after rebooting my
machine. (After just logging out and back in again, I actually went down
to a maximum of 18 containers.)

At 22 containers I hit 3.8GB of kernel memory. I'm letting it continue
to run to see where it gets to.

I also made sure to change the LXD backend pool to ZFS instead of just
using the plain disk backend (using zfs-dkms for the Trusty kernel).
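(For reference, on this era of LXD the switch looks roughly like the
following; the pool name and loop-file size are placeholders, not the
exact values I used:)

    # after installing zfs-dkms, point LXD at a ZFS pool
    sudo lxd init --auto --storage-backend=zfs \
        --storage-create-loop=50 --storage-pool=lxd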

Given that I was previously using a btrfs filesystem and LXD is now
using ZFS, that might also be a factor in how many containers I could
run. Certainly in the initial reports "btrfs_inode_*" was near the top
of the kmem output, and now it's all kmalloc and dentry. Maybe that's a
side effect of dkms?
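(One way to compare the two setups: btrfs and dentry caches show up in
the generic slab allocator, while ZFS, via SPL, accounts for most of its
memory separately. A sketch, needs root for slabinfo:)

    # fs-related caches in the generic slab allocator
    sudo grep -E '^(btrfs|dentry|kmalloc-64)' /proc/slabinfo

    # ZFS/SPL keeps its own cache accounting here
    cat /proc/spl/kmem/slab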

I did end up hitting 30 containers at 4.6GB of kernel memory before
go-lxc-run.sh wanted to start cleaning up old containers. So I'll patch
that out and see how far I get.

