Re: [PATCH 2/3] nvme/pci: Remove cq_vector check in IO path

2018-01-02 Thread Keith Busch
On Sun, Dec 31, 2017 at 02:30:09PM +0200, Sagi Grimberg wrote:
> Not sure if stealing bios from requests is a better design. Note that
> we do exactly this in the other transports (nvme_[rdma|loop|fc]_is_ready).

Well, we're currently failing requests that could succeed if we were
able to back them out for re-entry. While such scenarios are uncommon, I
think we can handle them better than ending them in failure.
 
> I think it would be better to stick to a coherent behavior across
> the nvme subsystem. If this conditional is really buying us a
> measurable performance gain, I think we should apply it to the other
> transports as well (although in fabrics we're a bit different because
> we have a dedicated connect command that enters .queue_rq).

We should be able to remove all the state checks in the IO path because
the handlers putting the controller in an IO-incapable state should be
able to quiesce the queues before transitioning to that state.

This is not a significant gain compared to the other two patches in this
series, but the sum of all the little things becomes meaningful.

We'd need to remove this check anyway if we're considering exclusively
polled queues since they wouldn't have a CQ vector.
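
As a minimal sketch of the ordering that argument assumes (the helper name
is made up; the blk-mq calls are the real interfaces):

#include <linux/blk-mq.h>

/*
 * Sketch only: a handler that makes the controller IO-incapable quiesces
 * first, so ->queue_rq never runs against a dead hw context.
 */
static void example_make_queue_io_incapable(struct request_queue *q)
{
	/* block new ->queue_rq calls and wait for in-flight ones to finish */
	blk_mq_quiesce_queue(q);

	/* ... transition to the IO-incapable state, free the cq vector ... */

	/* resume dispatch only once the queue can take IO again */
	blk_mq_unquiesce_queue(q);
}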


Re: [PATCH 2/3] nvme/pci: Remove cq_vector check in IO path

2017-12-31 Thread Sagi Grimberg



>>>> Awesome!
>>>>
>>>> Reviewed-by: Sagi Grimberg 
>>>
>>> Wait, aren't we also unquiescing the queues in nvme_dev_disable?
>>>
>>> Doesn't that rely on the queues being suspended, so that queue_rq
>>> will fail there?
>>
>> We don't seem to have any other check as far as I can tell.
>
> Ok, we currently do need this check in the submission side to flush
> entered requests on invalidated hw contexts. We didn't have a way to
> back out entered requests before, so that's why this request-killing
> check still exists. We can steal bios to back out requests now, so I
> think we should do that instead of failing them.
>
> I'll rework the series a bit to make that possible, plus add the
> removal of the submission-side nvme_process_cq, and resend.

Not sure if stealing bios from requests is a better design. Note that
we do exactly this in the other transports (nvme_[rdma|loop|fc]_is_ready).

I think it would be better to stick to a coherent behavior across
the nvme subsystem. If this conditional is really buying us a
measurable performance gain, I think we should apply it to the other
transports as well (although in fabrics we're a bit different because
we have a dedicated connect command that enters .queue_rq).
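
Roughly, those transport checks gate ->queue_rq on a per-queue "live" flag; a
simplified paraphrase with made-up names, not the actual rdma/loop/fc code:

#include <linux/blk-mq.h>

/*
 * Illustrative stand-ins only: while a queue is not live, only the
 * dedicated Fabrics connect command is allowed through ->queue_rq;
 * everything else is retried later.
 */
enum { EXAMPLE_Q_LIVE = 0 };

struct example_queue {
	unsigned long flags;
};

static bool example_is_connect_cmd(struct request *rq);	/* hypothetical helper */

static blk_status_t example_queue_is_ready(struct example_queue *queue,
					   struct request *rq)
{
	if (likely(test_bit(EXAMPLE_Q_LIVE, &queue->flags)))
		return BLK_STS_OK;
	if (example_is_connect_cmd(rq))
		return BLK_STS_OK;		/* let the connect through */
	return BLK_STS_RESOURCE;		/* have blk-mq retry later */
}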


Re: [PATCH 2/3] nvme/pci: Remove cq_vector check in IO path

2017-12-29 Thread Keith Busch
On Fri, Dec 29, 2017 at 10:48:14AM +0100, Christoph Hellwig wrote:
> On Wed, Dec 27, 2017 at 11:01:48PM +0200, Sagi Grimberg wrote:
> >
> >> Awesome!
> >>
> >> Reviewed-by: Sagi Grimberg 
> >
> > Wait, aren't we also unquiescing the queues in nvme_dev_disable?
> >
> > Doesn't that rely on the queues being suspended, so that queue_rq
> > will fail there?
> 
> We don't seem to have any other check as far as I can tell.

Ok, we currently do need this check in the submission side to flush
entered requests on invalidated hw contexts. We didn't have a way to
back out entered requests before, so that's why this request-killing
check still exists. We can steal bios to back out requests now, so I
think we should do that instead of failing them.

I'll rework the series a bit to make that possible, plus add the
removal of the submission-side nvme_process_cq, and resend.
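
A minimal sketch of that back-out, assuming a caller-provided requeue list
and a made-up helper name; blk_steal_bios() and blk_mq_end_request() are the
real interfaces:

#include <linux/bio.h>
#include <linux/blk-mq.h>

/*
 * Detach the bios so they can be resubmitted later, then complete the
 * now-empty request cleanly instead of failing it.
 */
static void example_back_out_request(struct request *req,
				     struct bio_list *requeue_list)
{
	blk_steal_bios(requeue_list, req);	/* move req's bio chain onto the list */
	blk_mq_end_request(req, BLK_STS_OK);	/* request completes without error */
	/* the caller later resubmits the stolen bios, e.g. from a work item */
}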


Re: [PATCH 2/3] nvme/pci: Remove cq_vector check in IO path

2017-12-29 Thread Christoph Hellwig
On Wed, Dec 27, 2017 at 11:01:48PM +0200, Sagi Grimberg wrote:
>
>> Awesome!
>>
>> Reviewed-by: Sagi Grimberg 
>
> Wait, aren't we also unquiescing the queues in nvme_dev_disable?
>
> Doesn't that rely on the queues being suspended, so that queue_rq
> will fail there?

We don't seem to have any other check as far as I can tell.


Re: [PATCH 2/3] nvme/pci: Remove cq_vector check in IO path

2017-12-27 Thread Sagi Grimberg



> Awesome!
>
> Reviewed-by: Sagi Grimberg 

Wait, aren't we also unquiescing the queues in nvme_dev_disable?

Doesn't that rely on the queues being suspended, so that queue_rq
will fail there?


Re: [PATCH 2/3] nvme/pci: Remove cq_vector check in IO path

2017-12-25 Thread Sagi Grimberg

Awesome!

Reviewed-by: Sagi Grimberg 


Re: [PATCH 2/3] nvme/pci: Remove cq_vector check in IO path

2017-12-21 Thread Jens Axboe
On 12/21/17 1:46 PM, Keith Busch wrote:
> This is a micro-optimization removing an unnecessary check for a disabled
> queue. We no longer need this check because blk-mq provides the ability
> to quiesce queues, which nvme uses, and the doorbell registers are never
> unmapped as long as requests are active.

Reviewed-by: Jens Axboe 

-- 
Jens Axboe



[PATCH 2/3] nvme/pci: Remove cq_vector check in IO path

2017-12-21 Thread Keith Busch
This is a micro-optimization removing an unnecessary check for a disabled
queue. We no longer need this check because blk-mq provides the ability
to quiesce queues, which nvme uses, and the doorbell registers are never
unmapped as long as requests are active.

Signed-off-by: Keith Busch 
---
 drivers/nvme/host/pci.c | 11 ++---------
 1 file changed, 2 insertions(+), 9 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index df5550ce0531..0be5124a3a49 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -887,11 +887,6 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 	}
 
 	spin_lock_irq(&nvmeq->q_lock);
-	if (unlikely(nvmeq->cq_vector < 0)) {
-		ret = BLK_STS_IOERR;
-		spin_unlock_irq(&nvmeq->q_lock);
-		goto out_cleanup_iod;
-	}
 	__nvme_submit_cmd(nvmeq, &cmnd);
 	blk_mq_start_request(req);
 	nvme_process_cq(nvmeq);
@@ -923,11 +918,9 @@ static inline void nvme_ring_cq_doorbell(struct nvme_queue *nvmeq)
 {
 	u16 head = nvmeq->cq_head;
 
-	if (likely(nvmeq->cq_vector >= 0)) {
-		if (nvme_dbbuf_update_and_check_event(head, nvmeq->dbbuf_cq_db,
+	if (nvme_dbbuf_update_and_check_event(head, nvmeq->dbbuf_cq_db,
 					      nvmeq->dbbuf_cq_ei))
-			writel(head, nvmeq->q_db + nvmeq->dev->db_stride);
-	}
+		writel(head, nvmeq->q_db + nvmeq->dev->db_stride);
 }
 
 static inline void nvme_handle_cqe(struct nvme_queue *nvmeq,
-- 
2.13.6
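
For reference, with the second hunk applied nvme_ring_cq_doorbell reduces to
(reconstructed from the hunk above):

static inline void nvme_ring_cq_doorbell(struct nvme_queue *nvmeq)
{
	u16 head = nvmeq->cq_head;

	/* ring the completion doorbell unless the shadow doorbell says we can skip it */
	if (nvme_dbbuf_update_and_check_event(head, nvmeq->dbbuf_cq_db,
					      nvmeq->dbbuf_cq_ei))
		writel(head, nvmeq->q_db + nvmeq->dev->db_stride);
}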