Hi Keith
Thank you for taking the time to test and review this.
I will send out V3 next.
Sincerely
Jianchao
On 03/13/2018 02:59 AM, Keith Busch wrote:
> Hi Jianchao,
>
> The patch tests fine on all hardware I had. I'd like to queue this up
> for the next 4.16-rc. Could you send a v3 with the cleanup changes Andy
> suggested and a changelog aligned with Ming's insights?
Hi Jianchao,
The patch tests fine on all hardware I had. I'd like to queue this up
for the next 4.16-rc. Could you send a v3 with the cleanup changes Andy
suggested and a changelog aligned with Ming's insights?
Thanks,
Keith
On Fri, Mar 09, 2018 at 10:24:45AM -0700, Keith Busch wrote:
> On Thu, Mar 08, 2018 at 08:42:20AM +0100, Christoph Hellwig wrote:
> >
> > So I suspect we'll need to go with a patch like this, just with a way
> > better changelog.
>
> I have to agree this is required for that use case. I'll run some
> quick tests and propose an alternate changelog.
On Thu, Mar 08, 2018 at 08:42:20AM +0100, Christoph Hellwig wrote:
>
> So I suspect we'll need to go with a patch like this, just with a way
> better changelog.
I have to agree this is required for that use case. I'll run some
quick tests and propose an alternate changelog.
Longer term, the curr
On Thu, Mar 01, 2018 at 09:10:42AM -0700, Keith Busch wrote:
> On Thu, Mar 01, 2018 at 11:03:30PM +0800, Ming Lei wrote:
> > If all CPUs for the 1st IRQ vector of admin queue are offline, then I
> > guess NVMe can't work any more.
>
> Yikes, with respect to admin commands, it appears you're right if your
> system allows offlining CPU0.
Hi Christoph
Thank you for your kind response and guidance.
On 03/01/2018 12:47 AM, Christoph Hellwig wrote:
> Note that we originally allocated irqs this way, and Keith changed
> it a while ago for good reasons. So I'd really like to see good
> reasons for moving away from this, and some heuristics to figure
> out which way to use.
Hi Keith
Thank you for your guidance and for taking the time on this.
On 03/01/2018 11:15 PM, Keith Busch wrote:
> On Thu, Mar 01, 2018 at 06:05:53PM +0800, jianchao.wang wrote:
>> When the adminq is free, ioq0 irq completion path has to invoke nvme_irq
>> twice, one for itself,
>> one for adminq completion irq action.
Hi Andy
Thank you for your time and the kind reminder.
On 02/28/2018 11:59 PM, Andy Shevchenko wrote:
> On Wed, Feb 28, 2018 at 5:48 PM, Jianchao Wang wrote:
>> Currently, adminq and ioq0 share the same irq vector. This is
>> unfair for both adminq and ioq0.
>> - For adminq, its completion irq has to be bound on cpu0. It
>> just has only one hw queue, it is unreasonable to do this.
On Thu, Mar 01, 2018 at 11:03:30PM +0800, Ming Lei wrote:
> If all CPUs for the 1st IRQ vector of admin queue are offline, then I
> guess NVMe can't work any more.
Yikes, with respect to admin commands, it appears you're right if your
system allows offlining CPU0.
> So looks it is a good idea to
On Thu, Mar 01, 2018 at 06:05:53PM +0800, jianchao.wang wrote:
> When the adminq is free, the ioq0 irq completion path has to invoke
> nvme_irq twice: once for itself and once for the adminq completion
> irq action.
Let's be a little more careful on the terminology when referring to spec
defined features: the
On Wed, Feb 28, 2018 at 05:47:26PM +0100, Christoph Hellwig wrote:
> Note that we originally allocated irqs this way, and Keith changed
> it a while ago for good reasons. So I'd really like to see good
> reasons for moving away from this, and some heuristics to figure
> out which way to use. E.g.
Hi Sagi
Thank you for your kind response.
On 03/01/2018 05:28 PM, Sagi Grimberg wrote:
>
>> Note that we originally allocated irqs this way, and Keith changed
>> it a while ago for good reasons. So I'd really like to see good
>> reasons for moving away from this, and some heuristics to figure
>>
Note that we originally allocated irqs this way, and Keith changed
it a while ago for good reasons. So I'd really like to see good
reasons for moving away from this, and some heuristics to figure
out which way to use. E.g. if the device supports more irqs than
I/O queues your scheme might always
On Wed, Feb 28, 2018 at 5:48 PM, Jianchao Wang wrote:
> Currently, adminq and ioq0 share the same irq vector. This is
> unfair for both adminq and ioq0.
> - For adminq, its completion irq has to be bound on cpu0. It
>just has only one hw queue, it is unreasonable to do this.
> - For ioq0, wh
Currently, adminq and ioq0 share the same irq vector. This is
unfair for both adminq and ioq0.
- For adminq, its completion irq has to be bound on cpu0. It
  has only one hw queue, so it is unreasonable to do this.
- For ioq0, when the irq fires for io completion, the adminq irq
  action on t