** Description changed:

- Description to be updated
+ [Impact]
+ * NVMe controllers could potentially fail to send an interrupt, especially
+ due to bugs in virtual devices (which are common these days - qemu has its
+ own NVMe virtual device, and so does AWS). This is a difficult situation
+ to debug, because the NVMe driver only reports the request timeout, not
+ the reason.
  
- [Impact]
+ * The upstream patch proposed for SRU here, 7776db1ccc12
+ ("NVMe/pci: Poll CQ on timeout"), was designed to provide more information
+ in these cases by proactively polling the CQEs on request timeouts, to
+ check whether the specific request was completed and some issue (probably a
+ missed interrupt) prevented the driver from noticing, or whether the request
+ really wasn't completed, which indicates more severe issues.
  
-  * 1
+ * Besides being quite useful for debugging, this patch could help to
+ mitigate issues in cloud environments like AWS, in case there is jitter in
+ request completion and the I/O timeout is set to a low value, or even
+ in case of atypical bugs in the virtual NVMe controller. With this patch,
+ if polling succeeds the NVMe driver will continue working instead of
+ attempting a controller reset procedure, which may lead to failures in the
+ rootfs - refer to https://launchpad.net/bugs/1788035.
+ 
  
  [Test Case]
  
-  * 2
+ * It's a bit tricky to artificially create a missed-interrupt situation;
+ one idea was to implement a small hack in the qemu NVMe virtual device
+ that, given a trigger from the guest kernel, induces the virtual device to
+ skip an interrupt. The hack patch is present in comment #1 below.
+ 
+ * To trigger such a hack from the guest kernel, all that is needed is to
+ issue a raw admin command from nvme-cli: "nvme admin-passthru -o 0xff /dev/nvme0".
+ After that, just perform some I/O to see one of the requests aborting - one
+ could use dd for this goal, like "dd if=/dev/zero of=/dev/nvme0n1 count=5".
+ 
  
  [Regression Potential]
  
-  * 3
+ * There are no clear risks in adding such a polling mechanism to the NVMe
+ driver; one issue that was never reported but could happen with this patch
+ is that the device could be in a bad state IRQ-wise that a reset would fix,
+ but the patch could cause all requests to be completed via polling, which
+ prevents the adapter reset. This is, however, a very hypothetical situation,
+ which would also happen in the mainline kernel (since it has the patch).

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1807393

Title:
  nvme - Polling on timeout

Status in linux package in Ubuntu:
  Confirmed
Status in linux source package in Xenial:
  Confirmed

Bug description:
  [Impact]
  * NVMe controllers could potentially fail to send an interrupt, especially
  due to bugs in virtual devices (which are common these days - qemu has its
  own NVMe virtual device, and so does AWS). This is a difficult situation
  to debug, because the NVMe driver only reports the request timeout, not
  the reason.

  * The upstream patch proposed for SRU here, 7776db1ccc12
  ("NVMe/pci: Poll CQ on timeout"), was designed to provide more information
  in these cases by proactively polling the CQEs on request timeouts, to
  check whether the specific request was completed and some issue (probably a
  missed interrupt) prevented the driver from noticing, or whether the
  request really wasn't completed, which indicates more severe issues.

  * Besides being quite useful for debugging, this patch could help to
  mitigate issues in cloud environments like AWS, in case there is jitter in
  request completion and the I/O timeout is set to a low value, or even
  in case of atypical bugs in the virtual NVMe controller. With this patch,
  if polling succeeds the NVMe driver will continue working instead of
  attempting a controller reset procedure, which may lead to failures in the
  rootfs - refer to https://launchpad.net/bugs/1788035.
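
  The decision the patch adds to the timeout handler can be illustrated
  with a toy sketch (shell, with the CQ modeled as a list of completed
  command ids - simplified stand-ins, not the real C driver code):

```shell
#!/bin/sh
# Toy model of the idea behind 7776db1ccc12: on a request timeout, scan the
# completion queue (CQ) for the timed-out command id before resorting to a
# controller reset.  "$1" = timed-out command id, "$2" = space-separated
# command ids present in the (simulated) CQ.
handle_timeout() {
    cid="$1"; cq="$2"
    for entry in $cq; do
        if [ "$entry" = "$cid" ]; then
            # The request did complete; only the interrupt was missed, so
            # complete it via polling and keep the controller running.
            echo "polled completion for cid $cid, skipping reset"
            return 0
        fi
    done
    # Genuinely stuck request: escalate to a controller reset as before.
    echo "cid $cid not in CQ, resetting controller"
    return 1
}

handle_timeout 7 "3 7 11"        # missed interrupt: polling finds cid 7
handle_timeout 9 "3 7 11" || :   # real stall: reset path is still taken
```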

  
  [Test Case]

  * It's a bit tricky to artificially create a missed-interrupt situation;
  one idea was to implement a small hack in the qemu NVMe virtual device
  that, given a trigger from the guest kernel, induces the virtual device to
  skip an interrupt. The hack patch is present in comment #1 below.

  * To trigger such a hack from the guest kernel, all that is needed is to
  issue a raw admin command from nvme-cli: "nvme admin-passthru -o 0xff /dev/nvme0".
  After that, just perform some I/O to see one of the requests aborting - one
  could use dd for this goal, like "dd if=/dev/zero of=/dev/nvme0n1 count=5".
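
  The test-case steps above can be sketched as a small script. This is
  only a sketch: it assumes the qemu hack patch from comment #1 is applied
  to the virtual NVMe device and that the guest sees the disk as
  /dev/nvme0. By default it only prints the commands (dry run); set
  NVME_TEST_LIVE=1 inside the prepared guest to actually execute them.

```shell
#!/bin/sh
# Reproduction sketch for the missed-interrupt test case.  Dry run by
# default; NVME_TEST_LIVE=1 executes the commands for real (requires the
# hacked qemu NVMe device and root in the guest).
run() {
    if [ "${NVME_TEST_LIVE:-0}" = "1" ]; then
        "$@"                      # live mode: really run the command
    else
        echo "would run: $*"      # dry run: just show the command
    fi
}

# 1) Arm the hack: the opcode-0xff admin passthru is the trigger that makes
#    the hacked virtual device skip the next interrupt.
run nvme admin-passthru -o 0xff /dev/nvme0

# 2) Generate a little I/O; with the hack armed, one request should time
#    out.  On a kernel with the patch, the request should then be completed
#    via CQ polling instead of triggering a controller reset.
run dd if=/dev/zero of=/dev/nvme0n1 count=5

# 3) Inspect the kernel log for the timeout / polled-completion messages.
run dmesg
```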

  
  [Regression Potential]

  * There are no clear risks in adding such a polling mechanism to the NVMe
  driver; one issue that was never reported but could happen with this patch
  is that the device could be in a bad state IRQ-wise that a reset would fix,
  but the patch could cause all requests to be completed via polling, which
  prevents the adapter reset. This is, however, a very hypothetical situation,
  which would also happen in the mainline kernel (since it has the patch).

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1807393/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to     : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp
