[Sts-sponsors] [Bug 1752683] Re: race condition on rmm for module ldap (ldap cache)

2018-03-29 Thread Dan Streetman
Once the LP: #1466926 apache2 SRU in xenial-proposed is either promoted
or removed, I'll upload this to the affected releases.

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1752683

Title:
  race condition on rmm for module ldap (ldap cache)

Status in Apache2 Web Server:
  New
Status in apache2 package in Ubuntu:
  Fix Released
Status in apache2 source package in Trusty:
  In Progress
Status in apache2 source package in Xenial:
  In Progress
Status in apache2 source package in Artful:
  In Progress
Status in apache2 source package in Bionic:
  Fix Released

Bug description:
  [Impact]

   * Apache users of the ldap module may face this when using multiple
     threads with shared memory enabled for the apr memory allocator (the
     default in Ubuntu).

  [Test Case]

   * Configure apache to use the ldap module (e.g. for authentication) and
     wait for the race condition to happen.
   * The analysis was made from a dump of a production environment.
   * The bug has been reported multiple times upstream over the past 10 years.

  [Regression Potential]

   * The ldap module has a broken locking mechanism when using apr memory
     management.
   * The ldap module could continue to have a broken locking mechanism.
   * Race conditions could still exist.
   * A bad patch could break the ldap module.
   * The patch is upstream and will be part of the next released version.

  [Other Info]
   
  ORIGINAL CASE DESCRIPTION:

  Problem summary:

  apr_rmm_init() initializes relocatable memory management (rmm).

  It is used in: mod_auth_digest and util_ldap_cache

  From the dump that was brought to my knowledge, the call sequence was:

  - util_ldap_compare_node_copy()
  - util_ald_strdup()
  - apr_rmm_calloc()
  - find_block_of_size()

  The dump showed a "cache->rmm_addr" with no lock at "find_block_of_size()":

  cache->rmm_addr->lock { type = apr_anylock_none }

  And an invalid "next" offset (out of rmm->base->firstfree).

  This rmm_addr was initialized with NULL as a locking mechanism:

  From apr-util:

  apr_rmm_init()

  if (!lock) {                            <-- 2nd argument to apr_rmm_init()
      nulllock.type = apr_anylock_none;   <--- found in the dump
      nulllock.lock.pm = NULL;
      lock = &nulllock;
  }

  From apache:

  # mod_auth_digest

  sts = apr_rmm_init(&client_rmm,
                     NULL, /* no lock, we'll do the locking ourselves */
                     apr_shm_baseaddr_get(client_shm),
                     shmem_size, ctx);

  # util_ldap_cache

  result = apr_rmm_init(&st->cache_rmm, NULL,
                        apr_shm_baseaddr_get(st->cache_shm), size,
                        st->pool);

  It appears that the ldap module chose to use "rmm" for memory allocation,
  using the shared memory approach, but without explicitly defining a lock
  for it. Without one, it's up to the caller to guarantee that there are
  locks for rmm synchronization (just like mod_auth_digest does, using
  global mutexes).
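
  As an illustration (a sketch of the apr_anylock mechanism referred to
  above, not the actual util_ldap_cache patch; the helper name is made up),
  a caller can pass an explicit lock to apr_rmm_init() instead of NULL so
  that rmm serializes its own free-list operations:

      #include <apr_anylock.h>
      #include <apr_pools.h>
      #include <apr_rmm.h>
      #include <apr_shm.h>
      #include <apr_thread_mutex.h>

      /* Sketch: build an rmm over shared memory with an explicit thread
       * mutex, so find_block_of_size()/move_block() run under a lock. */
      static apr_status_t rmm_init_locked(apr_rmm_t **rmm, apr_shm_t *shm,
                                          apr_size_t size, apr_pool_t *pool)
      {
          apr_anylock_t lock;
          apr_status_t rv;

          lock.type = apr_anylock_threadmutex;
          rv = apr_thread_mutex_create(&lock.lock.tm,
                                       APR_THREAD_MUTEX_DEFAULT, pool);
          if (rv != APR_SUCCESS)
              return rv;

          /* Passing &lock avoids the apr_anylock_none fallback shown above;
           * apr_rmm_init() copies the lock descriptor. */
          return apr_rmm_init(rmm, &lock, apr_shm_baseaddr_get(shm),
                              size, pool);
      }

  Note that a thread mutex only covers threads within a single httpd
  process; across processes a proc mutex (apr_anylock_procmutex) or an
  external global mutex, as mod_auth_digest does, would be needed.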

  Because of that, there was a race condition between "find_block_of_size"
  and a call touching "rmm->base->firstfree", possibly "move_block()", in a
  multi-threaded apache environment, since there were no lock guarantees
  inside the rmm logic (the lock was "apr_anylock_none" and the locking
  calls don't do anything).

  In find_block_of_size:

  apr_rmm_off_t next = rmm->base->firstfree;

  We have:

  rmm->base->firstfree
   Decimal: 356400
   Hex: 0x57030

  But "next" turned into:

  Name: next
   Decimal: 8320808657351632189
   Hex: 0x737973636970653d

  Causing:

  struct rmm_block_t *blk = (rmm_block_t *)((char *)rmm->base + next);

  if (blk->size == size)

  to segfault.
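
  To make the failure mode concrete, here is a small self-contained sketch
  (toy code, not apr's implementation) of the same pattern: an offset-based
  free-list walk with no lock and no bounds check, where a single clobbered
  offset turns into a wild pointer on the next allocation:

      #include <stddef.h>
      #include <stdint.h>

      /* Toy relocatable allocator, mirroring the shape described above:
       * blocks are addressed by offsets from a shared base pointer. */
      struct toy_block { uint64_t size; uint64_t next; };
      struct toy_base  { uint64_t firstfree; };  /* offset of first free block */

      /* Same walk as find_block_of_size(): base + offset, unvalidated and
       * unlocked.  If a racing writer clobbers firstfree, blk lands far
       * outside the shared segment and the blk->size read segfaults. */
      static struct toy_block *toy_find(struct toy_base *base, uint64_t want)
      {
          uint64_t next = base->firstfree;

          while (next) {
              struct toy_block *blk =
                  (struct toy_block *)((char *)base + next);
              if (blk->size == want)
                  return blk;
              next = blk->next;
          }
          return NULL;
      }

  With a valid offset such as 0x57030 the computed pointer stays inside the
  shared segment; with the clobbered value above it points far outside the
  mapping, which matches the crash seen in the dump.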

  Upstream bugs:

  https://bz.apache.org/bugzilla/show_bug.cgi?id=58483
  https://bz.apache.org/bugzilla/show_bug.cgi?id=60296
  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=814980#15

To manage notifications about this bug go to:
https://bugs.launchpad.net/apache2/+bug/1752683/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1754073] Re: Instances on Apache CloudStack using KVM hypervisor are not detected as virtual machines

2018-03-29 Thread Eric Desrochers
Sponsored for A/X/T

Please note this is the mandatory QA process that proposed packages have to
pass:
https://wiki.ubuntu.com/LandscapeUpdates

- Eric

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1754073

Title:
  Instances on Apache CloudStack using KVM hypervisor are not detected
  as virtual machines

Status in Landscape Client:
  Fix Committed
Status in landscape-client package in Ubuntu:
  Fix Released

Bug description:
  [Impact]

   * This issue affects users of Apache CloudStack instances by failing to
     detect the hypervisor type and reporting the clients as running on
     a physical machine instead of on KVM.

   * This fix extends the DMI vendor mapping, so that clouds customizing
     sys_vendor, chassis_vendor, or bios_vendor values (e.g. CloudStack,
     DigitalOcean) still get detected as KVM instances; a sketch of the
     idea follows below.
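
  As a rough illustration of that lookup (landscape-client itself is written
  in Python; this C sketch and its helper names are hypothetical, not the
  client's actual code), the idea is to read several /sys/class/dmi/id
  vendor fields and map known cloud vendor strings to KVM:

      #include <stdio.h>
      #include <string.h>

      /* Hypothetical sketch: read one DMI field from sysfs into buf. */
      static int read_dmi(const char *field, char *buf, size_t len)
      {
          char path[128];
          FILE *f;

          snprintf(path, sizeof(path), "/sys/class/dmi/id/%s", field);
          f = fopen(path, "r");
          if (!f)
              return -1;
          if (!fgets(buf, (int)len, f)) {
              fclose(f);
              return -1;
          }
          fclose(f);
          buf[strcspn(buf, "\n")] = '\0';  /* strip trailing newline */
          return 0;
      }

      /* Check sys_vendor, chassis_vendor and bios_vendor (not just
       * sys_vendor) against vendor strings known to mean "KVM guest". */
      const char *detect_vm_type(void)
      {
          static const char *fields[] =
              { "sys_vendor", "chassis_vendor", "bios_vendor" };
          static const char *kvm_vendors[] =
              { "QEMU", "Apache Software Foundation", "DigitalOcean" };
          char value[256];
          size_t i, j;

          for (i = 0; i < sizeof(fields) / sizeof(fields[0]); i++) {
              if (read_dmi(fields[i], value, sizeof(value)) != 0)
                  continue;
              for (j = 0; j < sizeof(kvm_vendors) / sizeof(kvm_vendors[0]); j++) {
                  if (strstr(value, kvm_vendors[j]))
                      return "kvm";
              }
          }
          return "";  /* nothing matched: report as physical */
      }

  With CloudStack's "Apache Software Foundation" sys_vendor, or a customized
  chassis_vendor/bios_vendor, a lookup like this still classifies the
  instance as KVM.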

  [Test Case]

  The issue can be reproduced on libvirt/kvm.

  uvt-kvm create vm
  virsh edit vm

  
  ...
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>Apache Software Foundation</entry>
      <entry name='product'>CloudStack KVM Hypervisor</entry>
    </system>
  </sysinfo>
  ...
  <cpu>
    <model>core2duo</model>
    <vendor>Intel</vendor>
  </cpu>
  ...

  virsh destroy vm && virsh start vm
  uvt-kvm ssh vm --insecure
  sudo landscape-config --log-level=debug -a devel --silent -t testclient
  # will fail registering, but that's not relevant to the vm-type detection
  grep vm-info /var/log/landscape/broker.log
  # expected output is "KVM", and will be empty because of this bug

  [Regression Potential]

   * Like the previous update, this change is local and only affects vm-type
     detection, which should be low-risk.

   * Since we extend the current detection to fields we were not previously
     looking at, one of the risks is to falsely detect clients as running
     on KVM. This is why we took care to verify opposite scenarios in
     addition to making sure the existing unit tests pass. Were such a
     regression to occur, it would have a low user impact: a client detected
     as a VM can use either a physical or a VM license, whereas the opposite
     (due to the bug fixed here) is not true.

  [Other Info]

   * AWS and DigitalOcean instances have been fixed slightly differently in
     the previous SRU, but we wanted to avoid repeating this for every other
     cloud, thus extending the DMI field lookup instead of adding yet another
     mapping value.

  [Original Description]

  Instances running on an Apache CloudStack that is using KVM as a
  hypervisor are not detected as virtual machines by the landscape-client.
  They are using a Full license to register instead of a Virtual one.

  Information from the client:
  Ubuntu 14.04.5 LTS
  ---
  landscape-client 14.12-0ubuntu6.14.04.2
  ---
  # cat /sys/class/dmi/id/sys_vendor
  Apache Software Foundation
  ---
  lscpu:
  Architecture:          x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Byte Order:            Little Endian
  CPU(s):                1
  On-line CPU(s) list:   0
  Thread(s) per core:    1
  Core(s) per socket:    1
  Socket(s):             1
  NUMA node(s):          1
  Vendor ID:             GenuineIntel
  CPU family:            6
  Model:                 42
  Stepping:              1
  CPU MHz:               2299.998
  BogoMIPS:              4599.99
  Hypervisor vendor:     KVM
  Virtualization type:   full
  L1d cache:             32K
  L1i cache:             32K
  L2 cache:              4096K
  NUMA node0 CPU(s):     0
  ---

To manage notifications about this bug go to:
https://bugs.launchpad.net/landscape-client/+bug/1754073/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1752683] Re: race condition on rmm for module ldap (ldap cache)

2018-03-29 Thread Eric Desrochers
Sponsored for bionic.

The SRU will be able to start as soon as the package moves to bionic-release.

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1752683

Title:
  race condition on rmm for module ldap (ldap cache)

Status in Apache2 Web Server:
  New
Status in apache2 package in Ubuntu:
  In Progress
Status in apache2 source package in Trusty:
  New
Status in apache2 source package in Xenial:
  New
Status in apache2 source package in Artful:
  New
Status in apache2 source package in Bionic:
  In Progress


To manage notifications about this bug go to:
https://bugs.launchpad.net/apache2/+bug/1752683/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1752683] Re: race condition on rmm for module ldap (ldap cache)

2018-03-29 Thread Eric Desrochers
Sponsored for bionic.

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1752683

Title:
  race condition on rmm for module ldap (ldap cache)

Status in Apache2 Web Server:
  New
Status in apache2 package in Ubuntu:
  In Progress
Status in apache2 source package in Trusty:
  New
Status in apache2 source package in Xenial:
  New
Status in apache2 source package in Artful:
  New
Status in apache2 source package in Bionic:
  In Progress


To manage notifications about this bug go to:
https://bugs.launchpad.net/apache2/+bug/1752683/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1752683] Re: race condition on rmm for module ldap (ldap cache)

2018-03-29 Thread Eric Desrochers
I couldn't talk to cpaelzer, but after double-checking, the only release
cpaelzer seems to be working on for apache2 at the moment is Xenial.

The bug he is working on was fixed in Bionic a while ago, so I don't think
he'll need an extra upload in bionic (at least for this particular bug).

# Bionic : debian/changelog
  * mpm_event: Fix "scoreboard full" errors. Closes: #834708 LP: #1466926

 -- Stefan Fritsch   Wed, 21 Dec 2016 23:46:06 +0100

With that being said, I think I can safely say our way is clear for
bionic.

Note that it would be best to re-evaluate the situation (especially for
Xenial) before proceeding with the SRU.

- Eric

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1752683

Title:
  race condition on rmm for module ldap (ldap cache)

Status in Apache2 Web Server:
  New
Status in apache2 package in Ubuntu:
  In Progress
Status in apache2 source package in Trusty:
  New
Status in apache2 source package in Xenial:
  New
Status in apache2 source package in Artful:
  New
Status in apache2 source package in Bionic:
  In Progress


To manage notifications about this bug go to:
https://bugs.launchpad.net/apache2/+bug/1752683/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1752683] Re: race condition on rmm for module ldap (ldap cache)

2018-03-29 Thread Rafael David Tinoco
** Changed in: apache2 (Ubuntu Trusty)
   Status: New => In Progress

** Changed in: apache2 (Ubuntu Xenial)
   Status: New => In Progress

** Changed in: apache2 (Ubuntu Artful)
   Status: New => In Progress

** Changed in: apache2 (Ubuntu Trusty)
 Assignee: (unassigned) => Rafael David Tinoco (inaddy)

** Changed in: apache2 (Ubuntu Xenial)
 Assignee: (unassigned) => Rafael David Tinoco (inaddy)

** Changed in: apache2 (Ubuntu Artful)
 Assignee: (unassigned) => Rafael David Tinoco (inaddy)

** Changed in: apache2 (Ubuntu Trusty)
   Importance: Undecided => Medium

** Changed in: apache2 (Ubuntu Xenial)
   Importance: Undecided => Medium

** Changed in: apache2 (Ubuntu Artful)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1752683

Title:
  race condition on rmm for module ldap (ldap cache)

Status in Apache2 Web Server:
  New
Status in apache2 package in Ubuntu:
  In Progress
Status in apache2 source package in Trusty:
  In Progress
Status in apache2 source package in Xenial:
  In Progress
Status in apache2 source package in Artful:
  In Progress
Status in apache2 source package in Bionic:
  In Progress


To manage notifications about this bug go to:
https://bugs.launchpad.net/apache2/+bug/1752683/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1754073] [NEW] Instances on Apache CloudStack using KVM hypervisor are not detected as virtual machines

2018-03-29 Thread Launchpad Bug Tracker
You have been subscribed to a public bug by Eric Desrochers (slashd):

[Impact]

 * This issue affects users of Apache CloudStack instances by failing to
   detect the hypervisor type and reporting the clients as running on
   a physical machine instead of on KVM.

 * This fix extends the DMI vendor mapping, so that clouds customizing
   sys_vendor, chassis_vendor, or bios_vendor values (e.g. CloudStack,
   DigitalOcean) still get detected as KVM instances.

[Test Case]

The issue can be reproduced on libvirt/kvm.

uvt-kvm create vm
virsh edit vm


...
<sysinfo type='smbios'>
  <system>
    <entry name='manufacturer'>Apache Software Foundation</entry>
    <entry name='product'>CloudStack KVM Hypervisor</entry>
  </system>
</sysinfo>
...
<cpu>
  <model>core2duo</model>
  <vendor>Intel</vendor>
</cpu>
...

virsh destroy vm && virsh start vm
uvt-kvm ssh vm --insecure
sudo landscape-config --log-level=debug -a devel --silent -t testclient
# will fail registering, but that's not relevant to the vm-type detection
grep vm-info /var/log/landscape/broker.log
# expected output is "KVM", and will be empty because of this bug

[Regression Potential]

 * Like the previous update, this change is local and only affects vm-type
   detection, which should be low-risk.

 * Since we extend the current detection to fields we were not previously
   looking at, one of the risks is to falsely detect clients as running
   on KVM. This is why we took care to verify opposite scenarios in
   addition to making sure the existing unit tests pass. Were such a
   regression to occur, it would have a low user impact: a client detected
   as a VM can use either a physical or a VM license, whereas the opposite
   (due to the bug fixed here) is not true.

[Other Info]

 * AWS and DigitalOcean instances have been fixed slightly differently in
   the previous SRU, but we wanted to avoid repeating this for every other
   cloud, thus extending the DMI field lookup instead of adding yet another
   mapping value.

[Original Description]

Instances running on an Apache CloudStack that is using KVM as a
hypervisor are not detected as virtual machines by the landscape-client.
They are using a Full license to register instead of a Virtual one.

Information from the client:
Ubuntu 14.04.5 LTS
---
landscape-client 14.12-0ubuntu6.14.04.2
---
# cat /sys/class/dmi/id/sys_vendor
Apache Software Foundation
---
lscpu:
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                1
On-line CPU(s) list:   0
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 42
Stepping:              1
CPU MHz:               2299.998
BogoMIPS:              4599.99
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              4096K
NUMA node0 CPU(s):     0
---

** Affects: landscape-client
 Importance: High
 Assignee: Simon Poirier (simpoir)
 Status: Fix Committed

** Affects: landscape-client (Ubuntu)
 Importance: Medium
 Assignee: Simon Poirier (simpoir)
 Status: Fix Committed


** Tags: lds-squad patch
-- 
Instances on Apache CloudStack using KVM hypervisor are not detected as virtual 
machines
https://bugs.launchpad.net/bugs/1754073
You received this bug notification because you are a member of STS Sponsors, 
which is subscribed to the bug report.

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1752683] Re: race condition on rmm for module ldap (ldap cache)

2018-03-29 Thread Launchpad Bug Tracker
This bug was fixed in the package apache2 - 2.4.29-1ubuntu4

---
apache2 (2.4.29-1ubuntu4) bionic; urgency=medium

  * Avoid crashes, hangs and loops by fixing mod_ldap locking: (LP: #1752683)
- added debian/patches/util_ldap_cache_lock_fix.patch

 -- Rafael David Tinoco   Fri, 02 Mar 2018 02:19:31 +

** Changed in: apache2 (Ubuntu Bionic)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1752683

Title:
  race condition on rmm for module ldap (ldap cache)

Status in Apache2 Web Server:
  New
Status in apache2 package in Ubuntu:
  Fix Released
Status in apache2 source package in Trusty:
  In Progress
Status in apache2 source package in Xenial:
  In Progress
Status in apache2 source package in Artful:
  In Progress
Status in apache2 source package in Bionic:
  Fix Released


To manage notifications about this bug go to:
https://bugs.launchpad.net/apache2/+bug/1752683/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp


[Sts-sponsors] [Bug 1752683] Re: race condition on rmm for module ldap (ldap cache)

2018-03-29 Thread Eric Desrochers
Automatic testing went fine in Bionic.

# Excuses... page
- https://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html

apache2 (2.4.29-1ubuntu3 to 2.4.29-1ubuntu4)
Maintainer: Ubuntu Developers
0 days old

Valid candidate


-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1752683

Title:
  race condition on rmm for module ldap (ldap cache)

Status in Apache2 Web Server:
  New
Status in apache2 package in Ubuntu:
  Fix Released
Status in apache2 source package in Trusty:
  In Progress
Status in apache2 source package in Xenial:
  In Progress
Status in apache2 source package in Artful:
  In Progress
Status in apache2 source package in Bionic:
  Fix Released


To manage notifications about this bug go to:
https://bugs.launchpad.net/apache2/+bug/1752683/+subscriptions

-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp