Re: [PATCH rfc 02/30] nvme-rdma: Don't alloc/free the tagset on reset

2017-07-10 Thread James Smart

On 6/19/2017 12:18 AM, Christoph Hellwig wrote:

>> +static void nvme_rdma_destroy_admin_queue(struct nvme_rdma_ctrl *ctrl, bool remove)
>>  {
>> +	nvme_rdma_stop_queue(&ctrl->queues[0]);
>> +	if (remove) {
>> +		blk_cleanup_queue(ctrl->ctrl.admin_connect_q);
>> +		blk_cleanup_queue(ctrl->ctrl.admin_q);
>> +		blk_mq_free_tag_set(&ctrl->admin_tag_set);
>> +		nvme_rdma_dev_put(ctrl->device);
>> +	}
>> +
>>  	nvme_rdma_free_qe(ctrl->queues[0].device->dev, &ctrl->async_event_sqe,
>>  			sizeof(struct nvme_command), DMA_TO_DEVICE);
>> +	nvme_rdma_free_queue(&ctrl->queues[0]);

> I don't like the calling convention.  We only have two callers
> anyway.  So I'd much rather only keep the code inside the if above
> in the new nvme_rdma_destroy_admin_queue that is only called at shutdown
> time, and opencode the calls to nvme_rdma_stop_queue, nvme_rdma_free_qe
> and nvme_rdma_free_queue in the callers.



Any chance you can make the organization like what I did with FC and
avoid all the "new" and "remove" flags?


e.g. code blocks (roughly sketched below) for:
- allocation/initialization for the controller and the tag sets. 
Basically initial allocation/creation of everything that would be the 
os-facing side of the controller.
- an association (or call it a session) create. Basically everything
that makes the link-side ties to the subsystem and creates the
controller and its connections. Does admin queue creation, controller
init, io queue creation, and enablement of the blk-mq queues as it
does so.
- an association teardown. Basically everything that stops the blk-mq 
queues and tears down the link-side ties to the controller.
- a final controller teardown, which removes it from the system. 
Everything that terminates the os-facing side of the controller.
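
Transplanted to RDMA, the same shape might look roughly like this (names
below are purely illustrative, not existing nvme-rdma symbols; nvme-fc
uses nvme_fc_create_association()/nvme_fc_delete_association() for the
middle two pieces):

	/*
	 * 1. One-time, OS-facing allocation: ctrl struct, admin/io tag sets,
	 *    request queues.  Created once, freed only at final delete.
	 */
	static int nvme_rdma_alloc_ctrl_resources(struct nvme_rdma_ctrl *ctrl);

	/*
	 * 2. Association create: admin queue setup, controller init, io queue
	 *    creation, unquiescing the blk-mq queues.  Runs at create time and
	 *    again on every reset/reconnect.
	 */
	static int nvme_rdma_create_association(struct nvme_rdma_ctrl *ctrl);

	/*
	 * 3. Association teardown: quiesce the blk-mq queues and tear down the
	 *    link-side queues.  Runs on reset, error recovery and delete.
	 */
	static void nvme_rdma_delete_association(struct nvme_rdma_ctrl *ctrl);

	/*
	 * 4. Final teardown: free tag sets and request queues, remove the
	 *    controller from the system.
	 */
	static void nvme_rdma_free_ctrl_resources(struct nvme_rdma_ctrl *ctrl);

Reset then becomes delete_association + create_association, and delete
becomes delete_association + free_ctrl_resources, with no flags threaded
through.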


-- james






Re: [PATCH rfc 02/30] nvme-rdma: Don't alloc/free the tagset on reset

2017-06-19 Thread Christoph Hellwig
> We can do that, but this tries to eliminate duplicate code as
> much as possible. It's not like the convention is unprecedented...

It's fairly nasty to follow.  OTOH I like your overall cleanup,
so I guess I shouldn't complain about the initial patches too much
but just possibly do another pass after you are done...


Re: [PATCH rfc 02/30] nvme-rdma: Don't alloc/free the tagset on reset

2017-06-19 Thread Sagi Grimberg



On 19/06/17 10:18, Christoph Hellwig wrote:

>> +static void nvme_rdma_destroy_admin_queue(struct nvme_rdma_ctrl *ctrl, bool remove)
>>  {
>> +	nvme_rdma_stop_queue(&ctrl->queues[0]);
>> +	if (remove) {
>> +		blk_cleanup_queue(ctrl->ctrl.admin_connect_q);
>> +		blk_cleanup_queue(ctrl->ctrl.admin_q);
>> +		blk_mq_free_tag_set(&ctrl->admin_tag_set);
>> +		nvme_rdma_dev_put(ctrl->device);
>> +	}
>> +
>>  	nvme_rdma_free_qe(ctrl->queues[0].device->dev, &ctrl->async_event_sqe,
>>  			sizeof(struct nvme_command), DMA_TO_DEVICE);
>> +	nvme_rdma_free_queue(&ctrl->queues[0]);


> I don't like the calling convention.  We only have two callers
> anyway.  So I'd much rather only keep the code inside the if above
> in the new nvme_rdma_destroy_admin_queue that is only called at shutdown
> time, and opencode the calls to nvme_rdma_stop_queue, nvme_rdma_free_qe
> and nvme_rdma_free_queue in the callers.


We can do that, but this tries to eliminate duplicate code as
much as possible. It's not like the convention is unprecedented...
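
Concretely, the intent is (rough sketch only, not the literal call sites
from the patch): the reset path passes false so the tag set and admin
request queues survive, and only the delete/shutdown path passes true:

	/* reset/reconnect: tear down and rebuild only the link-side state,
	 * keeping the tag set and the admin request queues around
	 */
	nvme_rdma_destroy_admin_queue(ctrl, false);
	error = nvme_rdma_configure_admin_queue(ctrl, false);

	/* delete/shutdown: additionally free the tag set and request queues */
	nvme_rdma_destroy_admin_queue(ctrl, true);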


>> -static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl)
>> +static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl, bool new)


> PCIe is just checking for a non-null admin_q.


Which I don't like very much :)


> But I think we should
> just split this into two functions, one for the shared code at the end
> and one just for the first-time setup, with the nvme_rdma_init_queue
> call open coded.


We can split, but I'm less keen on the idea of open-coding
nvme_rdma_init_queue at the call-sites.


>>  	error = nvmf_connect_admin_queue(&ctrl->ctrl);
>> @@ -1596,6 +1601,8 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl)
>>  
>>  	set_bit(NVME_RDMA_Q_LIVE, &ctrl->queues[0].flags);
>>  
>> +	blk_mq_start_stopped_hw_queues(ctrl->ctrl.admin_q, true);
>> +


> Where does this come from?


Spilled in I guess...


Re: [PATCH rfc 02/30] nvme-rdma: Don't alloc/free the tagset on reset

2017-06-19 Thread Christoph Hellwig
> +static void nvme_rdma_destroy_admin_queue(struct nvme_rdma_ctrl *ctrl, bool remove)
>  {
> +	nvme_rdma_stop_queue(&ctrl->queues[0]);
> +	if (remove) {
> +		blk_cleanup_queue(ctrl->ctrl.admin_connect_q);
> +		blk_cleanup_queue(ctrl->ctrl.admin_q);
> +		blk_mq_free_tag_set(&ctrl->admin_tag_set);
> +		nvme_rdma_dev_put(ctrl->device);
> +	}
> +
>  	nvme_rdma_free_qe(ctrl->queues[0].device->dev, &ctrl->async_event_sqe,
>  			sizeof(struct nvme_command), DMA_TO_DEVICE);
> +	nvme_rdma_free_queue(&ctrl->queues[0]);

I don't like the calling convention.  We only have two callers
anyway.  So I'd much rather only keep the code inside the if above
in the new nvme_rdma_destroy_admin_queue that is only called at shutdown
time, and opencode the calls to nvme_rdma_stop_queue, nvme_rdma_free_qe
and nvme_rdma_free_queue in the callers.
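
Something along these lines (just a sketch to illustrate; error handling
and the surrounding reset/shutdown code omitted):

	static void nvme_rdma_destroy_admin_queue(struct nvme_rdma_ctrl *ctrl)
	{
		/* only the shutdown-time teardown lives here now */
		blk_cleanup_queue(ctrl->ctrl.admin_connect_q);
		blk_cleanup_queue(ctrl->ctrl.admin_q);
		blk_mq_free_tag_set(&ctrl->admin_tag_set);
		nvme_rdma_dev_put(ctrl->device);
	}

	/* both callers open-code the per-teardown steps themselves: */
	nvme_rdma_stop_queue(&ctrl->queues[0]);
	nvme_rdma_free_qe(ctrl->queues[0].device->dev, &ctrl->async_event_sqe,
			sizeof(struct nvme_command), DMA_TO_DEVICE);
	nvme_rdma_free_queue(&ctrl->queues[0]);

	/* ... and only the shutdown path additionally calls: */
	nvme_rdma_destroy_admin_queue(ctrl);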

> -static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl)
> +static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl, bool new)

PCIe is just checking for a non-null admin_q.  But I think we should
just split this into two functions, one for the shared code at the end
and one just for the first-time setup, with the nvme_rdma_init_queue
call open coded.
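
I.e. roughly this split (sketch only; the names are made up):

	/*
	 * First-time-only setup: allocate the admin tag set, admin_q and
	 * admin_connect_q -- the pieces that must survive a reset.
	 */
	static int nvme_rdma_alloc_admin_resources(struct nvme_rdma_ctrl *ctrl);

	/*
	 * Shared tail, run on create *and* on every reset/reconnect:
	 * connect the admin queue, read the capabilities, init the
	 * controller, mark queue 0 live.
	 */
	static int nvme_rdma_start_admin_queue(struct nvme_rdma_ctrl *ctrl);

	/*
	 * Both callers open-code the nvme_rdma_init_queue() call for queue 0,
	 * the create path additionally calls nvme_rdma_alloc_admin_resources(),
	 * and both finish with nvme_rdma_start_admin_queue().
	 */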

>  	error = nvmf_connect_admin_queue(&ctrl->ctrl);
> @@ -1596,6 +1601,8 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl)
>  
>  	set_bit(NVME_RDMA_Q_LIVE, &ctrl->queues[0].flags);
>  
> +	blk_mq_start_stopped_hw_queues(ctrl->ctrl.admin_q, true);
> +

Where does this come from?