Re: [PATCH V2] nvme-pci: assign separate irq vectors for adminq and ioq0

2018-03-12 Thread jianchao.wang
Hi Keith Thanks for your precious time for testing and reviewing. I will send out V3 next. Sincerely Jianchao On 03/13/2018 02:59 AM, Keith Busch wrote: > Hi Jianchao, > > The patch tests fine on all hardware I had. I'd like to queue this up > for the next 4.16-rc. Could you send a v3 with the

Re: [PATCH V2] nvme-pci: assign separate irq vectors for adminq and ioq0

2018-03-12 Thread Keith Busch
Hi Jianchao, The patch tests fine on all hardware I had. I'd like to queue this up for the next 4.16-rc. Could you send a v3 with the cleanup changes Andy suggested and a changelog aligned with Ming's insights? Thanks, Keith

Re: [PATCH V2] nvme-pci: assign separate irq vectors for adminq and ioq0

2018-03-12 Thread Ming Lei
On Fri, Mar 09, 2018 at 10:24:45AM -0700, Keith Busch wrote: > On Thu, Mar 08, 2018 at 08:42:20AM +0100, Christoph Hellwig wrote: > > > > So I suspect we'll need to go with a patch like this, just with a way > > better changelog. > > I have to agree this is required for that use case. I'll run

Re: [PATCH V2] nvme-pci: assign separate irq vectors for adminq and ioq0

2018-03-09 Thread Keith Busch
On Thu, Mar 08, 2018 at 08:42:20AM +0100, Christoph Hellwig wrote: > > So I suspect we'll need to go with a patch like this, just with a way > better changelog. I have to agree this is required for that use case. I'll run some quick tests and propose an alternate changelog. Longer term, the

Re: [PATCH V2] nvme-pci: assign separate irq vectors for adminq and ioq0

2018-03-07 Thread Christoph Hellwig
On Thu, Mar 01, 2018 at 09:10:42AM -0700, Keith Busch wrote: > On Thu, Mar 01, 2018 at 11:03:30PM +0800, Ming Lei wrote: > > If all CPUs for the 1st IRQ vector of admin queue are offline, then I > > guess NVMe can't work any more. > > Yikes, with respect to admin commands, it appears you're right

Re: [PATCH V2] nvme-pci: assign separate irq vectors for adminq and ioq0

2018-03-01 Thread jianchao.wang
Hi Christoph Thanks for your kindly response and directive On 03/01/2018 12:47 AM, Christoph Hellwig wrote: > Note that we originally allocates irqs this way, and Keith changed > it a while ago for good reasons. So I'd really like to see good > reasons for moving away from this, and some

Re: [PATCH V2] nvme-pci: assign separate irq vectors for adminq and ioq0

2018-03-01 Thread jianchao.wang
Hi Keith Thanks for your kindly directive and precious time for this. On 03/01/2018 11:15 PM, Keith Busch wrote: > On Thu, Mar 01, 2018 at 06:05:53PM +0800, jianchao.wang wrote: >> When the adminq is free, ioq0 irq completion path has to invoke nvme_irq >> twice, one for itself, >> one for

Re: [PATCH V2] nvme-pci: assign separate irq vectors for adminq and ioq0

2018-03-01 Thread jianchao.wang
Hi Andy Thanks for your precious time for this and kindly reminding. On 02/28/2018 11:59 PM, Andy Shevchenko wrote: > On Wed, Feb 28, 2018 at 5:48 PM, Jianchao Wang wrote: >> Currently, adminq and ioq0 share the same irq vector. This is >> unfair for both amdinq and ioq0. >> - For adminq,

Re: [PATCH V2] nvme-pci: assign separate irq vectors for adminq and ioq0

2018-03-01 Thread Keith Busch
On Thu, Mar 01, 2018 at 11:03:30PM +0800, Ming Lei wrote: > If all CPUs for the 1st IRQ vector of admin queue are offline, then I > guess NVMe can't work any more. Yikes, with respect to admin commands, it appears you're right if your system allows offlining CPU0. > So looks it is a good idea to
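
Background for the failure mode discussed here: with managed IRQ affinity, a vector whose entire CPU set has gone offline is shut down, so an admin queue served only by that vector stops receiving completion interrupts. A minimal, illustrative sketch of how such a situation could be detected (not code from this thread; pci_irq_get_affinity() and cpumask_intersects() are standard kernel helpers):

#include <linux/cpumask.h>
#include <linux/pci.h>

/*
 * Sketch: return true if the vector's affinity mask still contains at
 * least one online CPU. If it does not, a queue bound solely to this
 * vector will no longer see interrupts once the vector is shut down.
 */
static bool vector_has_online_cpu(struct pci_dev *pdev, int vec)
{
        const struct cpumask *mask = pci_irq_get_affinity(pdev, vec);

        if (!mask)              /* no affinity information; assume usable */
                return true;

        return cpumask_intersects(mask, cpu_online_mask);
}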

Re: [PATCH V2] nvme-pci: assign separate irq vectors for adminq and ioq0

2018-03-01 Thread Keith Busch
On Thu, Mar 01, 2018 at 06:05:53PM +0800, jianchao.wang wrote: > When the adminq is free, ioq0 irq completion path has to invoke nvme_irq > twice, one for itself, > one for adminq completion irq action. Let's be a little more careful on the terminology when referring to spec defined features:
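
For context on the double nvme_irq() invocation Keith refers to: when two queues register handlers on the same vector with IRQF_SHARED, the interrupt core calls every handler on that line for each interrupt, and the handler with no pending completions simply returns IRQ_NONE. A driver-agnostic sketch (my_queue, my_poll_cq and register_shared_vector are made-up names, not nvme-pci code):

#include <linux/interrupt.h>
#include <linux/pci.h>
#include <linux/types.h>

struct my_queue {
        int dummy;              /* completion queue state would live here */
};

/* hypothetical helper: reap completions, return true if any were found */
static bool my_poll_cq(struct my_queue *q)
{
        return false;           /* a real driver would scan its completion queue */
}

static irqreturn_t my_queue_irq(int irq, void *data)
{
        struct my_queue *q = data;

        if (my_poll_cq(q))
                return IRQ_HANDLED;

        /* nothing for this queue; the other handler on the line may claim it */
        return IRQ_NONE;
}

/*
 * Both queues registered on vector 0: every interrupt on this line runs
 * both handlers, which is the extra invocation discussed above.
 */
static int register_shared_vector(struct pci_dev *pdev,
                                  struct my_queue *adminq,
                                  struct my_queue *ioq0)
{
        int irq = pci_irq_vector(pdev, 0);
        int ret;

        ret = request_irq(irq, my_queue_irq, IRQF_SHARED, "adminq", adminq);
        if (ret)
                return ret;
        return request_irq(irq, my_queue_irq, IRQF_SHARED, "ioq0", ioq0);
}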

Re: [PATCH V2] nvme-pci: assign separate irq vectors for adminq and ioq0

2018-03-01 Thread Ming Lei
On Wed, Feb 28, 2018 at 05:47:26PM +0100, Christoph Hellwig wrote: > Note that we originally allocates irqs this way, and Keith changed > it a while ago for good reasons. So I'd really like to see good > reasons for moving away from this, and some heuristics to figure > out which way to use.

Re: [PATCH V2] nvme-pci: assign separate irq vectors for adminq and ioq0

2018-03-01 Thread jianchao.wang
Hi sagi Thanks for your kindly response. On 03/01/2018 05:28 PM, Sagi Grimberg wrote: > >> Note that we originally allocates irqs this way, and Keith changed >> it a while ago for good reasons.  So I'd really like to see good >> reasons for moving away from this, and some heuristics to figure

Re: [PATCH V2] nvme-pci: assign separate irq vectors for adminq and ioq0

2018-03-01 Thread Sagi Grimberg
Note that we originally allocates irqs this way, and Keith changed it a while ago for good reasons. So I'd really like to see good reasons for moving away from this, and some heuristics to figure out which way to use. E.g. if the device supports more irqs than I/O queues your scheme might

Re: [PATCH V2] nvme-pci: assign separate irq vectors for adminq and ioq0

2018-02-28 Thread Christoph Hellwig
Note that we originally allocates irqs this way, and Keith changed it a while ago for good reasons. So I'd really like to see good reasons for moving away from this, and some heuristics to figure out which way to use. E.g. if the device supports more irqs than I/O queues your scheme might always
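
One possible shape for the heuristic being asked for here, purely as a sketch (the thread does not settle on this): reserve a dedicated admin vector only when the device exposes more vectors than there are I/O queues, and keep the current sharing otherwise. pci_msix_vec_count() is the standard way to query how many MSI-X vectors the function advertises:

#include <linux/pci.h>

/*
 * Sketch of a policy: returns the number of pre_vectors to reserve for
 * the admin queue. 1 means a dedicated admin vector, 0 means keep
 * sharing vector 0 between the admin queue and the first I/O queue.
 */
static unsigned int choose_admin_pre_vectors(struct pci_dev *pdev,
                                             unsigned int nr_io_queues)
{
        int nr_vecs = pci_msix_vec_count(pdev);

        /* spare vectors beyond the I/O queues: give adminq its own */
        if (nr_vecs > 0 && (unsigned int)nr_vecs > nr_io_queues)
                return 1;

        return 0;
}

The returned value would then feed struct irq_affinity.pre_vectors in the allocation call, as in the sketch further down the thread.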

Re: [PATCH V2] nvme-pci: assign separate irq vectors for adminq and ioq0

2018-02-28 Thread Andy Shevchenko
On Wed, Feb 28, 2018 at 5:48 PM, Jianchao Wang wrote: > Currently, adminq and ioq0 share the same irq vector. This is > unfair for both amdinq and ioq0. > - For adminq, its completion irq has to be bound on cpu0. It > just has only one hw queue, it is unreasonable to do this. > - For ioq0,
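
For readers skimming the thread, a minimal sketch of the allocation scheme the patch description above argues for: reserve vector 0 for the admin queue via irq_affinity pre_vectors so that the managed affinity spreading only covers the I/O queues. The function name and the exact min/max arguments are illustrative assumptions, not the actual nvme-pci code:

#include <linux/interrupt.h>
#include <linux/pci.h>

static int alloc_separate_admin_vector(struct pci_dev *pdev,
                                       unsigned int nr_io_queues)
{
        struct irq_affinity affd = {
                .pre_vectors = 1,       /* vector 0: admin queue, not affinity-managed */
        };
        int nr_vecs;

        /* one vector for the admin queue plus up to one per I/O queue */
        nr_vecs = pci_alloc_irq_vectors_affinity(pdev, 2, nr_io_queues + 1,
                        PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
        if (nr_vecs < 0)
                return nr_vecs;

        /* I/O queue i then uses vector i, i.e. pci_irq_vector(pdev, i) */
        return nr_vecs;
}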