> Why does this create an impact to qemu users on Bionic? For example,
is it that there's particular hardware where this is always the case?
What's the actual *user* use case that's broken here, as distinct from a
technical explanation of the root cause of the bug?

Sorry, I've updated the description to clarify that this causes
affected qemu instances to fail to set up their networking, making them
unusable.

> Can you detail what steps would be carried out to test this even if
you can't do it yourself?

Setting up DPDK is complex and certainly outside the scope of a bug test
case. I've updated the description to suggest a possible way to increase
the number of mem regions.
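
For reference, here is a rough sketch of the kind of guest I would
expect to hit the limit (the paths, PCI addresses and image name below
are placeholders, not taken from an actual reproducer):

  # guest RAM must be a shared, file-backed region for vhost-user;
  # the vfio-pci passthrough devices are only there to push the mem
  # region count past the vhost-user limit of 8
  qemu-system-x86_64 -enable-kvm -m 4096 -smp 2 \
    -object memory-backend-file,id=mem0,size=4096M,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem0 \
    -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user0 \
    -netdev type=vhost-user,id=net0,chardev=char0,vhostforce=on \
    -device virtio-net-pci,netdev=net0 \
    -device vfio-pci,host=0000:81:00.0 \
    -device vfio-pci,host=0000:81:00.1 \
    -drive file=bionic.img,format=qcow2

If vhost-user ends up with more than 8 regions at init time, the netdev
setup fails and the guest comes up without that nic.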

** Description changed:

  [impact]
  
  the impact is the same as bug 1886704: the qemu vhost-user driver fails
  to init. see that bug for more details on the impact.
  
  Because the vhost-user driver cannot dictate how many mem regions are
  present in the qemu guest, it may calculate more than 8 regions at
  driver initialization time; this api limitation then causes the qemu
  instance that is attempting to add/initialize a new vhost-user
  interface (nic) to fail, leaving the qemu instance unable to use the
  nic. Typically, this means a qemu instance that is supposed to connect
  to DPDK-OVS is unable to, has broken/missing networking, and in most
  cases is unusable.
  
  [test case]
  
- start a qemu guest with at least one vhost-user interface, and more than
- 8 discontiguous memory regions. The vhost-user device will fail to init
- due to exceeding its max memory region limit.
+ start a qemu guest with at least one vhost-user interface (e.g. using
+ DPDK-OVS), and more than 8 discontiguous memory regions. This might
+ happen when using multiple PCI passthrough devices in combination with
+ vhost-user interface(s). The vhost-user device will fail to init due to
+ exceeding its max memory region limit.
  
  As I don't have a DPDK setup to reproduce this, I am relying on the
  reporter of this bug to test and verify.
  
  [regression potential]
  
  as this causes vhost-user to ignore some mem regions, any regression
  would likely involve problems with the vhost-user interface: possibly
  failure to init or configure the interface, or problems while using
  it.
  
  [scope]
  
  this is needed for bionic.
  
  this is fixed upstream by commits
  9e2a2a3e083fec1e8059b331e3998c0849d779c1 and
  988a27754bbbc45698f7acb54352e5a1ae699514, which are first included in
  v2.12.0 and v3.0.0, respectively, so this is fixed in focal and later.
  
  I am not proposing this for xenial at this time, as there is more
  context difference and higher regression potential, and no one has
  reported needing this fix on xenial.
  
  [other info]
  
  this is closely related to bug 1886704, but that bug is specifically
  about the 8 mem region limit of the vhost-user api. This bug doesn't
  attempt to fix that limitation (as that would require a new extension
  of the vhost-user api to increase the max mem regions); it only
  backports existing upstream patches that fix the vhost region
  calculations and allow the vhost-user driver to indicate which mem
  regions it doesn't need to use, so those are ignored, in order to keep
  the total under the vhost-user limit.

https://bugs.launchpad.net/bugs/1887525

Title:
  qemu vhost-user should ignore irrelevant mem regions because it has
  limit of 8 regions
