Re: [PATCH v4 7/8] libnvdimm: Adding blk-mq support to the pmem driver

2017-08-11, Dan Williams
On Fri, Aug 11, 2017 at 3:57 AM, Christoph Hellwig wrote:
> On Mon, Aug 07, 2017 at 09:39:53AM -0700, Dave Jiang wrote:
>> Adding blk-mq support to the pmem driver in addition to the direct bio
>> support.
>
> Can you explain why this is only done for pmem and not btt and nd_blk?

I'm skeptical that we can get a DMA offload benefit for those drivers,
since we need to do metadata updates and block-window management after
each I/O. Whether those drivers would benefit from an mq conversion is
a separate question, but the driving motivation for the conversion has
been that it's easier to send a 'struct request' to a DMA engine than
a bio.
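
For context, a minimal sketch of why a merged 'struct request' is the more
convenient unit for a dmaengine offload than a bio: the whole request
collapses to a scatterlist in one call, and each segment can then be handed
to the DMA channel. This is illustrative only, not the posted patch;
pmem_submit_dma(), the pmem_base argument, and the write-only direction are
assumptions, and completion handling and error unwinding are omitted.

#include <linux/blkdev.h>
#include <linux/blk-mq.h>
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
#include <linux/scatterlist.h>

/*
 * Illustrative sketch (write case): map one merged request to a scatterlist
 * and queue a memcpy per segment on a dmaengine channel.  Assumes the pmem
 * region is DMA-addressable at its physical address.
 */
static int pmem_submit_dma(struct dma_chan *chan, struct request *req,
			   struct scatterlist *sgl, phys_addr_t pmem_base)
{
	struct dma_async_tx_descriptor *tx;
	struct scatterlist *sg;
	/* byte offset of this request inside the pmem region */
	dma_addr_t dst = pmem_base + (blk_rq_pos(req) << 9);
	int i, nents;

	/* One call covers every segment the block layer merged for us. */
	nents = blk_rq_map_sg(req->q, req, sgl);
	if (nents <= 0)
		return -EINVAL;

	nents = dma_map_sg(chan->device->dev, sgl, nents, DMA_TO_DEVICE);
	if (!nents)
		return -ENOMEM;

	for_each_sg(sgl, sg, nents, i) {
		tx = dmaengine_prep_dma_memcpy(chan, dst, sg_dma_address(sg),
					       sg_dma_len(sg),
					       DMA_PREP_INTERRUPT);
		if (!tx)
			return -ENOMEM;
		dmaengine_submit(tx);
		dst += sg_dma_len(sg);
	}
	dma_async_issue_pending(chan);
	return 0;
}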


Re: [PATCH v4 7/8] libnvdimm: Adding blk-mq support to the pmem driver

2017-08-11, Dave Jiang


On 08/11/2017 03:57 AM, Christoph Hellwig wrote:
> On Mon, Aug 07, 2017 at 09:39:53AM -0700, Dave Jiang wrote:
>> Adding blk-mq support to the pmem driver in addition to the direct bio
>> support.
> 
> Can you explain why this is only done for pmem and not btt and nd_blk?

Only because I have not gotten to them yet. I can follow up once the
pmem implementation is sorted out.

> 
>> This allows for hardware offloading via DMA engines. By default
>> the bio method will be enabled. The blk-mq support can be turned on via
>> module parameter queue_mode=1.
> 
> Any way to auto-discover the right mode?  Also I'd actually much
> prefer for this to be a separate driver instead of having two entirely
> different I/O paths in the same module.

Ok. I'll look into implementing a separate driver.
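
For reference, the queue_mode switch described in the changelog would
typically follow the standard module-parameter pattern; this is a sketch
under that assumption, and the enum names are illustrative rather than
taken from the patch.

#include <linux/module.h>
#include <linux/moduleparam.h>

/* Illustrative names; the changelog only documents queue_mode=1 for blk-mq. */
enum pmem_queue_mode {
	PMEM_Q_BIO = 0,		/* default: direct bio (make_request) path */
	PMEM_Q_MQ  = 1,		/* blk-mq path, eligible for DMA offload   */
};

static int queue_mode = PMEM_Q_BIO;
module_param(queue_mode, int, 0444);
MODULE_PARM_DESC(queue_mode, "I/O path: 0=bio (default), 1=blk-mq");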

> 
>> +struct pmem_cmd {
>> +	struct request *rq;
>> +};
> 
> There is no point in having private data that just has a struct
> request backpointer.  Just pass the request along.

ok

> 
>>  
>> -static void pmem_release_queue(void *q)
>> +static void pmem_release_queue(void *data)
>>  {
>> -	blk_cleanup_queue(q);
>> +	struct pmem_device *pmem = (struct pmem_device *)data;
> 
> No need for the cast.

ok
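
Applied to the hunk above, the fix is simply to assign the void pointer
directly; a sketch assuming the patch keeps a request_queue pointer (here
pmem->q) in struct pmem_device.

/* devm action callback: a void * assigns to any pointer type without a cast */
static void pmem_release_queue(void *data)
{
	struct pmem_device *pmem = data;

	blk_cleanup_queue(pmem->q);	/* pmem->q assumed added by this patch */
}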

> 
>> +static int pmem_handle_cmd(struct pmem_cmd *cmd)
> 
> Please merge this into the queue_rq handler.
> 
>> +	struct request *req = cmd->rq;
>> +	struct request_queue *q = req->q;
>> +	struct pmem_device *pmem = q->queuedata;
>> +	struct nd_region *nd_region = to_region(pmem);
>> +	struct bio_vec bvec;
>> +	struct req_iterator iter;
>> +	int rc = 0;
>> +
>> +	if (req->cmd_flags & REQ_FLUSH)
>> +		nvdimm_flush(nd_region);
> 
> For a blk-mq driver you need to check for REQ_OP_FLUSH, e.g.:
> 
> 	switch (req_op(req)) {
> 	case REQ_OP_FLUSH:
> 		ret = nvdimm_flush(nd_region);
> 		break;
> 	case REQ_OP_READ:
> 		ret = pmem_do_io(req, false);
> 		break;
> 	case REQ_OP_WRITE:
> 		ret = pmem_do_io(req, true);
> 		if (req->cmd_flags & REQ_FUA)
> 			nvdimm_flush(nd_region);
> 		break;
> 	default:
> 		ret = BLK_STS_NOTSUPP;
> 		break;
> 	}

ok
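
Putting the two review comments together (merge pmem_handle_cmd() into the
.queue_rq handler and switch on req_op()), the handler might look roughly
like the following. A hedged sketch against the 4.13-era blk-mq API;
pmem_do_io() is a hypothetical stand-in for the actual copy routine.

static blk_status_t pmem_queue_rq(struct blk_mq_hw_ctx *hctx,
				  const struct blk_mq_queue_data *bd)
{
	struct request *req = bd->rq;	/* request used directly, no pmem_cmd */
	struct pmem_device *pmem = req->q->queuedata;
	struct nd_region *nd_region = to_region(pmem);
	blk_status_t ret = BLK_STS_OK;

	blk_mq_start_request(req);

	switch (req_op(req)) {
	case REQ_OP_FLUSH:
		nvdimm_flush(nd_region);
		break;
	case REQ_OP_READ:
		ret = pmem_do_io(req, false);	/* hypothetical helper */
		break;
	case REQ_OP_WRITE:
		ret = pmem_do_io(req, true);	/* hypothetical helper */
		if (req->cmd_flags & REQ_FUA)
			nvdimm_flush(nd_region);
		break;
	default:
		ret = BLK_STS_NOTSUPP;
		break;
	}

	blk_mq_end_request(req, ret);
	return BLK_STS_OK;
}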

> 
>> +	pmem->tag_set.queue_depth = 64;
> 
> Where does this magic number come from?

It's pretty much arbitrary and just a placeholder, really. It probably
should be 1 since the CPU copy is synchronous I/O. If I'm doing a
separate driver I'll merge this patch into the next one and deal with
that then.
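
For the synchronous CPU-copy case mentioned above, the tag set setup might
look like the fragment below (queue depth of 1, blocking dispatch). The
field values, pmem_mq_ops, and pmem->q are assumptions for illustration,
not the posted code.

	pmem->tag_set.ops = &pmem_mq_ops;		/* .queue_rq = pmem_queue_rq */
	pmem->tag_set.nr_hw_queues = 1;
	pmem->tag_set.queue_depth = 1;			/* sync CPU copy: one in flight */
	pmem->tag_set.numa_node = dev_to_node(dev);
	pmem->tag_set.cmd_size = 0;			/* no per-request private data */
	pmem->tag_set.flags = BLK_MQ_F_BLOCKING;	/* queue_rq may sleep while copying */

	rc = blk_mq_alloc_tag_set(&pmem->tag_set);
	if (rc)
		return rc;

	pmem->q = blk_mq_init_queue(&pmem->tag_set);
	if (IS_ERR(pmem->q)) {
		blk_mq_free_tag_set(&pmem->tag_set);
		return PTR_ERR(pmem->q);
	}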

Thanks for reviewing, Christoph!



Re: [PATCH v4 7/8] libnvdimm: Adding blk-mq support to the pmem driver

2017-08-11, Christoph Hellwig
On Mon, Aug 07, 2017 at 09:39:53AM -0700, Dave Jiang wrote:
> Adding blk-mq support to the pmem driver in addition to the direct bio
> support.

Can you explain why this is only done for pmem and not btt and nd_blk?

> This allows for hardware offloading via DMA engines. By default
> the bio method will be enabled. The blk-mq support can be turned on via
> module parameter queue_mode=1.

Any way to auto-discover the right mode?  Also I'd actually much
prefer for this to be a separate driver instead of having two entirely
different I/O paths in the same module.

> +struct pmem_cmd {
> +	struct request *rq;
> +};

There is no point in having private data that just has a struct
request backpointer.  Just pass the request along.

>  
> -static void pmem_release_queue(void *q)
> +static void pmem_release_queue(void *data)
>  {
> -	blk_cleanup_queue(q);
> +	struct pmem_device *pmem = (struct pmem_device *)data;

No need for the cast.

> +static int pmem_handle_cmd(struct pmem_cmd *cmd)

Please merge this into the queue_rq handler.

> +	struct request *req = cmd->rq;
> +	struct request_queue *q = req->q;
> +	struct pmem_device *pmem = q->queuedata;
> +	struct nd_region *nd_region = to_region(pmem);
> +	struct bio_vec bvec;
> +	struct req_iterator iter;
> +	int rc = 0;
> +
> +	if (req->cmd_flags & REQ_FLUSH)
> +		nvdimm_flush(nd_region);

For a blk-mq driver you need to check for REQ_OP_FLUSH, e.g.:

	switch (req_op(req)) {
	case REQ_OP_FLUSH:
		ret = nvdimm_flush(nd_region);
		break;
	case REQ_OP_READ:
		ret = pmem_do_io(req, false);
		break;
	case REQ_OP_WRITE:
		ret = pmem_do_io(req, true);
		if (req->cmd_flags & REQ_FUA)
			nvdimm_flush(nd_region);
		break;
	default:
		ret = BLK_STS_NOTSUPP;
		break;
	}

> +	pmem->tag_set.queue_depth = 64;

Where does this magic number come from?