[Sts-sponsors] [Bug 1752683] Update Released

2018-04-19 Thread Łukasz Zemczak
The verification of the Stable Release Update for apache2 has completed
successfully and the package has now been released to -updates.
Subsequently, the Ubuntu Stable Release Updates Team is being
unsubscribed and will not receive messages about this bug report.  In
the event that you encounter a regression using the package from
-updates please report a new bug using ubuntu-bug and tag the bug report
regression-update so we can easily find any regressions.

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1752683

Title:
  race condition on rmm for module ldap (ldap cache)

Status in Apache2 Web Server:
  Fix Released
Status in apache2 package in Ubuntu:
  Fix Released
Status in apache2 source package in Trusty:
  Fix Released
Status in apache2 source package in Xenial:
  Fix Released
Status in apache2 source package in Artful:
  Fix Released
Status in apache2 source package in Bionic:
  Fix Released

Bug description:
  [Impact]

   * Apache users of the LDAP module may hit this when running multiple
  threads with shared memory enabled for the APR memory allocator (the
  default in Ubuntu).

  [Test Case]

   * Configure Apache to use the LDAP module (e.g. for authentication) and
  wait for the race condition to happen.
   * The analysis below was made from a dump taken in a production environment.
   * The bug has been reported multiple times upstream over the past 10 years.

  [Regression Potential]

   * The ldap module has a broken locking mechanism when using APR memory
  management; without this fix it would remain broken and race conditions
  could still occur.
   * A faulty fix could break the ldap module.
   * The patch has been accepted upstream and will be part of the next
  released version.

  [Other Info]
   
  ORIGINAL CASE DESCRIPTION:

  Problem summary:

  apr_rmm_init() performs relocatable memory management (rmm)
  initialization.

  It is used in mod_auth_digest and util_ldap_cache.

  The dump that was brought to my attention showed the following call
  sequence:

  - util_ldap_compare_node_copy()
  - util_ald_strdup()
  - apr_rmm_calloc()
  - find_block_of_size()

  It had a "cache->rmm_addr" with no lock at "find_block_of_size()":

  cache->rmm_addr->lock { type = apr_anylock_none }

  And an invalid "next" offset (out of rmm->base->firstfree).

  This rmm_addr was initialized with NULL as a locking mechanism:

  From apr-util:

  apr_rmm_init()

  if (!lock) {                            /* lock: 2nd argument to apr_rmm_init() */
      nulllock.type = apr_anylock_none;   /* <-- found in the dump */
      nulllock.lock.pm = NULL;
      lock = &nulllock;
  }

  From apache:

  # mod_auth_digest

  sts = apr_rmm_init(&client_rmm,
     NULL, /* no lock, we'll do the locking ourselves */
     apr_shm_baseaddr_get(client_shm),
     shmem_size, ctx);

  # util_ldap_cache

  result = apr_rmm_init(&st->cache_rmm, NULL,
    apr_shm_baseaddr_get(st->cache_shm), size,
    st->pool);

  It appears that the ldap module chose to use "rmm" for memory allocation,
  using the shared memory approach, but without explicitly defining a lock
  for it. Without one, it is up to the caller to guarantee locking for rmm
  synchronization (just like mod_auth_digest does, using global mutexes).

  Because of that, there was a race condition in "find_block_of_size" and a call
  touching "rmm->base->firstfree", possibly "move_block()", in a multi-threaded
  apache environment, since there were no lock guarantees inside rmm logic (lock
  was "apr_anylock_none" and the locking calls don't do anything).

  In find_block_of_size:

  apr_rmm_off_t next = rmm->base->firstfree;

  We have:

  rmm->base->firstfree
   Decimal:356400
   Hex:0x57030

  But "next" turned into:

  Name : next
   Decimal:8320808657351632189
   Hex:0x737973636970653d

  Causing:

  struct rmm_block_t *blk = (rmm_block_t*)((char*)rmm->base + next);

  if (blk->size == size)

  to segfault.

  Upstream bugs:

  https://bz.apache.org/bugzilla/show_bug.cgi?id=58483
  https://bz.apache.org/bugzilla/show_bug.cgi?id=60296
  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=814980#15

To manage notifications about this bug go to:
https://bugs.launchpad.net/apache2/+bug/1752683/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1752683] Re: race condition on rmm for module ldap (ldap cache)

2018-04-19 Thread Launchpad Bug Tracker
This bug was fixed in the package apache2 - 2.4.27-2ubuntu4

---
apache2 (2.4.27-2ubuntu4) artful; urgency=medium

  * Avoid crashes, hangs and loops by fixing mod_ldap locking: (LP: #1752683)
- added debian/patches/util_ldap_cache_lock_fix.patch

 -- Rafael David Tinoco   Fri, 02 Mar 2018 02:14:42 +

** Changed in: apache2 (Ubuntu Artful)
   Status: Fix Committed => Fix Released



[Sts-sponsors] [Bug 1752683] Re: race condition on rmm for module ldap (ldap cache)

2018-04-19 Thread Launchpad Bug Tracker
This bug was fixed in the package apache2 - 2.4.7-1ubuntu4.19

---
apache2 (2.4.7-1ubuntu4.19) trusty; urgency=medium

  * Avoid crashes, hangs and loops by fixing mod_ldap locking: (LP: #1752683)
- added debian/patches/util_ldap_cache_lock_fix.patch

 -- Rafael David Tinoco   Fri, 02 Mar 2018 01:48:33 +



[Sts-sponsors] [Bug 1752683] Re: race condition on rmm for module ldap (ldap cache)

2018-04-19 Thread Launchpad Bug Tracker
This bug was fixed in the package apache2 - 2.4.18-2ubuntu3.7

---
apache2 (2.4.18-2ubuntu3.7) xenial; urgency=medium

  * Avoid crashes, hangs and loops by fixing mod_ldap locking: (LP: #1752683)
- added debian/patches/util_ldap_cache_lock_fix.patch

 -- Rafael David Tinoco   Thu, 01 Mar 2018 18:29:12 +

** Changed in: apache2 (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

** Changed in: apache2 (Ubuntu Trusty)
   Status: Fix Committed => Fix Released



[Sts-sponsors] [Bug 1754073] Re: Instances on Apache CloudStack using KVM hypervisor are not detected as virtual machines

2018-04-19 Thread Karify DevOps
I have tested this new package on trusty and it now works as intended.

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1754073

Title:
  Instances on Apache CloudStack using KVM hypervisor are not detected
  as virtual machines

Status in Landscape Client:
  Fix Committed
Status in landscape-client package in Ubuntu:
  Fix Released
Status in landscape-client source package in Trusty:
  Fix Committed
Status in landscape-client source package in Xenial:
  Fix Committed
Status in landscape-client source package in Artful:
  Fix Committed

Bug description:
  [Impact]

   * This issue affects users of Apache CloudStack instances: the hypervisor
     type is not detected, and clients are reported as running on a physical
     machine instead of on KVM.

   * This fix extends the DMI vendor mapping so that clouds customizing the
     sys_vendor, chassis_vendor or bios_vendor values (e.g. CloudStack,
     DigitalOcean) are still detected as KVM instances.

  [Test Case]

  The issue can be reproduced on libvirt/kvm.

  uvt-kvm create vm
  virsh edit vm

  
  ...
  <os>
    <smbios mode='sysinfo'/>
  </os>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>Apache Software Foundation</entry>
      <entry name='product'>CloudStack KVM Hypervisor</entry>
    </system>
  </sysinfo>
  <cpu mode='custom' match='exact'>
    <model>core2duo</model>
    <vendor>Intel</vendor>
  </cpu>

  virsh destroy vm && virsh start vm
  uvt-kvm ssh vm --insecure
  sudo landscape-config --log-level=debug -a devel --silent -t testclient
  # will fail registering, but that's not relevant to the vm-type detection
  grep vm-info /var/log/landscape/broker.log
  # expected output is "KVM", and will be empty because of this bug

  [Regression Potential]

   * Like the previous update, this change is local and only affects vm-type
     detection, which should be low-risk.

   * Since we extend the current detection to fields we were not previously
     looking at, one risk is falsely detecting clients as running on KVM.
     This is why we took care to verify the opposite scenarios in addition
     to making sure the existing unit tests pass. Were such a regression to
     occur, its user impact would be low: a client detected as a VM can use
     either a physical or a VM license, whereas the opposite (the situation
     caused by the bug fixed here) is not true.

  [Other Info]

   * AWS and DigitalOcean instances have been fixed slightly differently in
     the previous SRU, but we wanted to avoid repeating this for every other
     cloud, thus extending the DMI field lookup instead of adding yet another
     mapping value.

  [Original Description]

  Instances running on Apache CloudStack with KVM as the hypervisor are
  not detected as virtual machines by landscape-client. They register
  with a Full license instead of a Virtual one.

  Information from the client:
  Ubuntu 14.04.5 LTS
  ---
  landscape-client 14.12-0ubuntu6.14.04.2
  ---
  # cat /sys/class/dmi/id/sys_vendor
  Apache Software Foundation
  ---
  lscpu:
  Architecture:  x86_64
  CPU op-mode(s):32-bit, 64-bit
  Byte Order:Little Endian
  CPU(s):1
  On-line CPU(s) list:   0
  Thread(s) per core:1
  Core(s) per socket:1
  Socket(s): 1
  NUMA node(s):  1
  Vendor ID: GenuineIntel
  CPU family:6
  Model: 42
  Stepping:  1
  CPU MHz:   2299.998
  BogoMIPS:  4599.99
  Hypervisor vendor: KVM
  Virtualization type:   full
  L1d cache: 32K
  L1i cache: 32K
  L2 cache:  4096K
  NUMA node0 CPU(s): 0
  ---

To manage notifications about this bug go to:
https://bugs.launchpad.net/landscape-client/+bug/1754073/+subscriptions



[Sts-sponsors] [Bug 1750013] Re: systemd-logind: memory leaks on session's connections (trusty-only)

2018-04-19 Thread Guilherme G. Piccoli
As mentioned by Dan in the above comments, some failures in autopkgtest
like:

autopkgtest [14:44:48]: test ubuntu-regression-suite: [---
Source Package Version: 4.4.0-1017.17
Running Kernel Version: 3.13.0-145.194
ERROR: running version does not match source package

are "expected" due to LP #1723223.

Other than that, the proposed package passed in all other tests.
Thanks,

Guilherme

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1750013

Title:
  systemd-logind: memory leaks on session's connections (trusty-only)

Status in systemd package in Ubuntu:
  Fix Released
Status in systemd source package in Trusty:
  Fix Committed
Status in systemd source package in Xenial:
  Fix Released
Status in systemd source package in Artful:
  Fix Released
Status in systemd source package in Bionic:
  Fix Released

Bug description:
  Below is the SRU request form. Please refer to the Original Description
  for a more comprehensive explanation of the problem observed.

  
  [Impact] 

   * The systemd-logind tool leaks memory for each session connected. The
   issue happens only in the systemd shipped in Trusty (14.04).

   * Three issues observed:
    - systemd-logind leaks entire sessions, i.e., sessions are not
  freed after they are closed. To fix that, we proactively add
  closed sessions to the systemd garbage collector (gc).
  Also, part of the fix is to make the cgmanager package a dependency. Refer
  to comment #1 for a more thorough explanation of the issue and the fix.

    - a small memory leak was observed in the session creation logic of
  systemd-logind. The fix is the addition of an appropriate
  free() call. Refer to comment #2 for more details on the issue and fix.

    - another small memory leak was observed in the cgmanager glue code of
  systemd-logind; this code is only present in this specific Ubuntu
  release of the package, due to the necessary compatibility layer with the
  upstart init system. The fix is to properly call free() in two
  functions. Refer to comment #3 for a deeper exposition of the issue and
  the fix.

  
  [Test Case]

   * The basic test-case is to run the following loop from a remote machine:
 while true; do ssh  "whoami"; done

   * It's possible to watch the increase in memory consumption of the
 "systemd-logind" process on the target machine. One can use the
 "ps uax" command to check the RSS of the process, or count its
 anonymous pages in /proc//smaps.

  
  [Regression Potential] 

   * Since the fixes are small and not intrusive, the potential for
 regressions is low. More regression considerations are in comments #1, #2
 and #3 for each fix.

   * A potential small regression is performance-related, since we now add
 sessions to the garbage collector proactively.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1750013/+subscriptions



[Sts-sponsors] [Bug 1750013] Update Released

2018-04-19 Thread Łukasz Zemczak
The verification of the Stable Release Update for systemd has completed
successfully and the package has now been released to -updates.
Subsequently, the Ubuntu Stable Release Updates Team is being
unsubscribed and will not receive messages about this bug report.  In
the event that you encounter a regression using the package from
-updates please report a new bug using ubuntu-bug and tag the bug report
regression-update so we can easily find any regressions.



[Sts-sponsors] [Bug 1750013] Re: systemd-logind: memory leaks on session's connections (trusty-only)

2018-04-19 Thread Launchpad Bug Tracker
This bug was fixed in the package systemd - 204-5ubuntu20.28

---
systemd (204-5ubuntu20.28) trusty; urgency=medium

  * logind: fix memleaks in session's free path and cgmanager glue code
(LP: #1750013)

 -- gpicc...@canonical.com (Guilherme G. Piccoli)  Tue, 03 Apr 2018 13:38:08 +

** Changed in: systemd (Ubuntu Trusty)
   Status: Fix Committed => Fix Released
