This bug was fixed in the package avahi - 0.6.32~rc+dfsg-1ubuntu2.1

avahi (0.6.32~rc+dfsg-1ubuntu2.1) xenial; urgency=medium

  * d/p/0001-Remove-default-rlimit-nproc-3.patch,
  * d/p/0002-Remove-default-rlimits-from-avahi-daemon.conf.patch:
    - Remove the overly restrictive default rlimit settings from
      avahi-daemon.conf, which can cause avahi to fail to start due to
      too many running processes, or to crash when it runs out of
      memory. (LP: #1661869)

 -- Trent Lloyd <>  Thu, 15 Mar 2018 10:16:57

** Changed in: avahi (Ubuntu Xenial)
       Status: Fix Committed => Fix Released

** Changed in: avahi (Ubuntu Trusty)
       Status: Fix Committed => Fix Released


  maas install fails inside of a 16.04 lxd container due to avahi

Status in MAAS:
Status in avahi package in Ubuntu:
  Fix Released
Status in lxd package in Ubuntu:
Status in avahi source package in Trusty:
  Fix Released
Status in lxd source package in Trusty:
Status in avahi source package in Xenial:
  Fix Released
Status in lxd source package in Xenial:
Status in avahi source package in Artful:
  Fix Released
Status in lxd source package in Artful:

Bug description:
  [Original Description]
  The bug and its workaround are clearly described in this mailing list thread:

  I'm trying to install MAAS in a LXD container, but that's failing due
  to avahi package install problems.  I'm tagging all packages here.

  Avahi sets a number of rlimits on startup, including a cap on the
  maximum number of processes (nproc=2) and limits on memory usage.
  These limits are hit in a number of cases - specifically, the maximum
  process limit is hit if you run lxd containers in 'privileged' mode,
  so that avahi has the same uid in multiple containers, and large
  networks can trigger the memory limit.

  The fix is to remove these default rlimits completely from the
  configuration file.
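  The change the patches make can be sketched as follows. The
  [rlimits] values shown here are illustrative of the stock
  avahi-daemon.conf (exact values may differ between releases), and the
  sed invocation is simply an equivalent of deleting the section by
  hand:

  ```shell
  # Hypothetical excerpt of /etc/avahi/avahi-daemon.conf before the fix;
  # written to a scratch file so nothing on the system is touched.
  cat > avahi-daemon.conf.example <<'EOF'
  [server]
  use-ipv4=yes
  use-ipv6=yes

  [rlimits]
  rlimit-core=0
  rlimit-data=4194304
  rlimit-fsize=0
  rlimit-nofile=768
  rlimit-stack=4194304
  rlimit-nproc=3
  EOF

  # The fix drops the whole [rlimits] section, so avahi-daemon no longer
  # calls setrlimit() on itself at startup.
  sed -i '/^\[rlimits\]/,/^$/d' avahi-daemon.conf.example

  cat avahi-daemon.conf.example
  ```

  After the section is removed, avahi-daemon inherits whatever limits
  the system or container already imposes, rather than adding its own.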


   * Avahi is unable to start inside of containers without UID namespace
  isolation because an rlimit on the maximum number of processes is set
  by default to 2.  When a container launches Avahi, the total number of
  Avahi processes on the system across all containers exceeds this
  limit and Avahi is killed.  The package also fails at install time,
  rather than at runtime, due to a failure to start the daemon.
   * Some users also have issues with the maximum memory allocation
  causing Avahi to exit on networks with a large number of services, as
  the memory limit was quite small (4MB).  Refer to LP: #1638345.
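  The enforcement mechanism behind both failure modes is setrlimit(2).
  A minimal sketch of how a self-imposed rlimit kills later allocations
  (avahi itself is C code; this Python illustration is mine, and it uses
  RLIMIT_NOFILE rather than RLIMIT_NPROC because nproc counts every
  process owned by the uid system-wide, which is exactly why it is hard
  to reason about in isolation - and why shared-uid containers hit it):

  ```python
  import os
  import resource

  def exhaust_under_limit(max_fds=8):
      """Fork a child, cap its RLIMIT_NOFILE the way avahi's startup
      capped nproc/memory, and show the cap aborting further allocation."""
      pid = os.fork()
      if pid == 0:
          # Child: lower soft and hard limits (lowering needs no privilege).
          resource.setrlimit(resource.RLIMIT_NOFILE, (max_fds, max_fds))
          fds = []
          try:
              while True:
                  fds.append(os.open("/dev/null", os.O_RDONLY))
          except OSError:
              os._exit(42)  # EMFILE: the rlimit was enforced
          os._exit(0)       # not reached
      _, status = os.waitpid(pid, 0)
      return os.WEXITSTATUS(status)

  if __name__ == "__main__":
      print(exhaust_under_limit())  # prints 42: the limit is hit
  ```

  With nproc the same enforcement happens at fork() time, and because
  the count is per-uid rather than per-process, a second avahi-daemon
  in another privileged container pushes the first over its own limit.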

  [Test Case]

   * setup lxd (apt install lxd, lxd init, get working networking)
   * lxc launch ubuntu:16.04 avahi-test --config security.privileged=true
   * lxc exec avahi-test sudo apt install avahi-daemon

  This will fail if the parent host has avahi-daemon installed; if it
  does not, you can set up a second container (avahi-test2) and install
  avahi there.  That should then fail, as the issue requires two copies
  of avahi-daemon in the same uid namespace before the limit is hit.

  [Regression Potential]

   * The fix removes all rlimits configured by avahi on startup.
  Setting these limits (on memory usage, running process count, etc.)
  was an extra step avahi took that most programs do not.  It is
  possible that an unknown bug, previously hidden by Avahi crashing,
  will now consume significant system resources because the limits are
  no longer in place.  However, I believe this risk is small: the
  change has been shipping upstream for many months, I have not seen
  any reports of new problems, and it has fixed a number of existing
  crashes.

   * The main case this may not fix is users who have modified their
  avahi-daemon.conf file - but it will fix new installs, and most
  existing installs, as most users do not modify the file.  Users who
  have modified it may be prompted on upgrade to replace the file.

  [Other Info]

   * This change already exists upstream in 0.7, which is in bionic.
  An SRU is required for artful, xenial and trusty.
