Verified with kernel 3.13.0-109-generic on an AWS instance; all NVMe
drives are initialized.

** Tags removed: verification-needed-trusty
** Tags added: verification-done-trusty

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1656381

Title:
  Xen MSI setup code incorrectly re-uses cached pirq

Status in linux package in Ubuntu:
  In Progress
Status in linux source package in Trusty:
  Fix Committed
Status in linux source package in Xenial:
  Fix Released
Status in linux source package in Yakkety:
  Fix Released
Status in linux source package in Zesty:
  In Progress

Bug description:
  [Impact]

  This bug fixes the root problem reported in bug 1648449, so that
  bug's description can mostly be reused here:

  On an Amazon AWS instance that has NVMe drives, the NVMe drives fail
  to initialize, and so aren't usable by the system. If one of the NVMe
  drives contains the root filesystem, the instance won't boot.

  [Test Case]

  Boot an AWS instance with multiple NVMe drives. All except the first
  will fail to initialize, and errors will appear in the system log (if
  the system boots at all). With a patched kernel, all NVMe drives are
  initialized and enumerated, and they work properly.
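
  The following Python sketch (not part of the original report) can be
  used as a quick pass/fail check for this test case.  It assumes that
  each working drive exposes a namespace block device matching
  /dev/nvme*n1, and that the tester supplies the expected number of
  drives as the first argument:

    import glob
    import sys

    # Hypothetical expected drive count; pass the number of NVMe
    # drives the instance was launched with as the first argument.
    expected = int(sys.argv[1]) if len(sys.argv) > 1 else 2

    # Each initialized NVMe controller exposes at least one namespace
    # block device such as /dev/nvme0n1, /dev/nvme1n1, ...
    found = sorted(glob.glob('/dev/nvme*n1'))
    print('found %d NVMe namespace(s): %s'
          % (len(found), ', '.join(found)))

    if len(found) < expected:
        print('FAIL: only %d of %d NVMe drives initialized'
              % (len(found), expected))
        sys.exit(1)
    print('PASS: all NVMe drives initialized')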

  [Regression Potential]

  Patching the Xen MSI setup function may cause problems with other PCI
  devices that use MSI/MSI-X interrupts on a Xen guest.
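
  As a rough regression check, the MSI/MSI-X interrupt counters of the
  other PCI devices on the guest can be compared before and after
  installing the patched kernel.  The Python sketch below is only
  illustrative; it assumes MSI entries in /proc/interrupts carry an
  "msi" label (on Xen guests they typically show up as xen-pirq-msi or
  xen-pirq-msi-x):

    # List MSI/MSI-X interrupt lines from /proc/interrupts with their
    # total counts, so output from an unpatched and a patched kernel
    # can be diffed.
    with open('/proc/interrupts') as f:
        lines = f.readlines()

    ncpus = len(lines[0].split())      # header row: one column per CPU
    for line in lines[1:]:
        fields = line.split()
        if 'msi' not in line.lower():  # assumption: MSI entries are
            continue                   # labelled e.g. xen-pirq-msi
        irq = fields[0].rstrip(':')
        counts = [int(x) for x in fields[1:1 + ncpus] if x.isdigit()]
        name = ' '.join(fields[1 + ncpus:])
        print('IRQ %s: %d interrupts (%s)' % (irq, sum(counts), name))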

  Note that this patch restores correct behavior for guests running
  under Xen 4.5 or later hypervisors - specifically, Xen hypervisors
  with qemu 2.1.0 or later.  For Xen hypervisors with qemu 2.0.0 or
  earlier, this patch causes a regression.  With an Ubuntu hypervisor,
  Vivid or later qemu is patched, as is UCA Kilo or later qemu; Trusty
  qemu and UCA Icehouse qemu are not patched - see bug 1657489.

  [Other Info]

  The patch from bug 1648449 was only a workaround that changed the
  NVMe driver so it does not trigger this Xen bug.  However, there have
  been reports (bug 1626894) of that patch causing non-Xen systems with
  NVMe drives to stop working.  So, the best thing to do is to revert
  the workaround patch (and its regression-fix patch from bug 1651602),
  restoring the original NVMe driver code, and apply the real Xen patch
  that fixes the problem.  That should restore functionality for
  non-Xen systems, and should allow Xen systems with multiple NVMe
  controllers to work.

  Upstream discussion:
  https://lists.xen.org/archives/html/xen-devel/2017-01/msg00447.html

  Related: bug 1657489 ("qemu-xen: free all the pirqs for msi/msix when
  driver unload")

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1656381/+subscriptions
