On Wed, Oct 09, 2019 at 09:05:26AM -0700, Bart Van Assche wrote:
> On 10/9/19 2:32 AM, Ming Lei wrote:
> > @@ -354,7 +354,8 @@ void scsi_device_unbusy(struct scsi_device *sdev, struct scsi_cmnd *cmd)
> >     if (starget->can_queue > 0)
> >             atomic_dec(&starget->target_busy);
> > -   atomic_dec(&sdev->device_busy);
> > +   if (!blk_queue_nonrot(sdev->request_queue))
> > +           atomic_dec(&sdev->device_busy);
> >   }
> 
> Hi Ming,
> 
> Does this patch impact the meaning of the queue_depth sysfs attribute (see
> also sdev_store_queue_depth()) and also the queue depth ramp up/down
> mechanism (see also scsi_handle_queue_ramp_up())? Have you considered to

This patch only stops tracking in-flight SCSI commands per LUN, so it
doesn't change the meaning of sdev->queue_depth; that limit is simply
bypassed for SSDs.
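
For illustration, the submit side has to mirror the hunk above. A
simplified sketch (not the exact change in this series; device_blocked
and starvation handling are omitted):

static inline bool scsi_dev_queue_ready(struct request_queue *q,
					struct scsi_device *sdev)
{
	unsigned int busy;

	/* SSD: neither check nor account the per-LUN queue depth */
	if (blk_queue_nonrot(sdev->request_queue))
		return true;

	busy = atomic_inc_return(&sdev->device_busy) - 1;
	if (busy >= sdev->queue_depth)
		goto out_dec;

	return true;

out_dec:
	atomic_dec(&sdev->device_busy);
	return false;
}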

> enable/disable busy tracking per LUN depending on whether or not
> sdev->queue_depth < shost->can_queue?

Yeah, we could do that, but it isn't in conflict with this patch. :-)
And it could be done even when sdev->queue_depth < shost->can_queue.

Usually sdev->queue_depth is applied for the following reasons:

1) the limit (not a hard one) throttles dispatch to the LUN, which may
improve IO merging at the cost of IO latency.

2) fair dispatch among LUNs.

This patch just tries not to apply the per-LUN queue depth for SSDs,
because:

1) fair dispatch is already done by blk-mq when allocating driver tags

2) IO merging doesn't play a big role for SSDs, while IO latency can be
increased by applying a per-LUN queue depth.

3) NVMe doesn't apply a per-namespace queue depth

> 
> The megaraid and mpt3sas drivers read sdev->device_busy directly. Is the
> current version of this patch compatible with these drivers?

For megaraid, sdev->device_busy is checked when choosing the reply
queue; that approach shouldn't be better than using the default managed
IRQ mapping. mpt3sas does something similar in
_base_get_high_iops_msix_index().
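
Roughly speaking (an illustrative paraphrase, not the actual driver
code; the helper and threshold names below are placeholders):

#define HIGH_IOPS_QD	8	/* placeholder threshold, not a real symbol */

static u16 pick_reply_queue(struct scsi_cmnd *scmd, u16 default_msix,
			    u16 high_iops_msix)
{
	/* busy LUN: steer the completion to a dedicated high-IOPS queue */
	if (atomic_read(&scmd->device->device_busy) > HIGH_IOPS_QD)
		return high_iops_msix;

	/* otherwise keep the default managed-IRQ mapping */
	return default_msix;
}

Once device_busy is no longer maintained for SSDs, such a check always
reads zero and degenerates to the default mapping, which shouldn't make
things worse.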

The only effect may be in scsih_dev_reset(): a 'SUCCESS' message could
be logged even though there are still in-flight commands on the LUN,
but that can be fixed easily.


Thanks,
Ming
