Re: [PATCH v3 01/13] mpt3sas: Update MPI Header

2017-08-08 Thread J Freyensee


Looks like your header has a white space error:

Applying: mpt3sas: Update MPI Header
.git/rebase-apply/patch:1452: new blank line at EOF.
+

Also, FYI, this project has a lot of sparse errors that appear to have
existed before your patchset.  Since your patchset touches a few of the
files that have sparse warnings (such as mpt3sas_base.c, mpt3sas_scsih.c,
etc.), you may want to investigate fixing them.


[mainline-linux]$ make C=1
  CHK include/config/kernel.release
  CHK include/generated/uapi/linux/version.h
  CHK include/generated/utsrelease.h
  CHK include/generated/bounds.h
  CHK include/generated/timeconst.h
  CHK include/generated/asm-offsets.h
  CALL    scripts/checksyscalls.sh
  CHK scripts/mod/devicetable-offsets.h
  CHK include/generated/compile.h
  AR  drivers/scsi/mpt3sas/built-in.o
  CHECK   drivers/scsi/mpt3sas/mpt3sas_base.c
drivers/scsi/mpt3sas/mpt3sas_base.c:861:42: warning: incorrect type in assignment (different base types)
drivers/scsi/mpt3sas/mpt3sas_base.c:861:42:    expected unsigned short [unsigned] [usertype] Event
drivers/scsi/mpt3sas/mpt3sas_base.c:861:42:    got restricted __le16 [usertype] Event
drivers/scsi/mpt3sas/mpt3sas_base.c:862:49: warning: incorrect type in assignment (different base types)
drivers/scsi/mpt3sas/mpt3sas_base.c:862:49:    expected unsigned int [unsigned] [usertype] EventContext
drivers/scsi/mpt3sas/mpt3sas_base.c:862:49:    got restricted __le32 [usertype] EventContext
drivers/scsi/mpt3sas/mpt3sas_base.c:1080:64: warning: incorrect type in argument 2 (different address spaces)
drivers/scsi/mpt3sas/mpt3sas_base.c:1080:64:    expected void volatile [noderef] <asn:2>*addr
drivers/scsi/mpt3sas/mpt3sas_base.c:1080:64:    got unsigned long long [usertype] *
drivers/scsi/mpt3sas/mpt3sas_base.c:1129:52: warning: incorrect type in argument 2 (different address spaces)
drivers/scsi/mpt3sas/mpt3sas_base.c:1129:52:    expected void volatile [noderef] <asn:2>*addr
drivers/scsi/mpt3sas/mpt3sas_base.c:1129:52:    got unsigned long long [usertype] *
drivers/scsi/mpt3sas/mpt3sas_base.c:1519:36: warning: incorrect type in assignment (different base types)
drivers/scsi/mpt3sas/mpt3sas_base.c:1519:36:    expected unsigned long long [unsigned] [long] [long long] [usertype] <noident>
drivers/scsi/mpt3sas/mpt3sas_base.c:1519:36:    got restricted __le64 [usertype] <noident>
drivers/scsi/mpt3sas/mpt3sas_base.c:1532:37: warning: incorrect type in assignment (different base types)
drivers/scsi/mpt3sas/mpt3sas_base.c:1532:37:    expected unsigned long long [unsigned] [long] [long long] [usertype] <noident>
drivers/scsi/mpt3sas/mpt3sas_base.c:1532:37:    got restricted __le64 [usertype] <noident>
drivers/scsi/mpt3sas/mpt3sas_base.c:1552:45: warning: incorrect type in assignment (different base types)
drivers/scsi/mpt3sas/mpt3sas_base.c:1552:45:    expected unsigned long long [unsigned] [long] [long long] [usertype] <noident>
drivers/scsi/mpt3sas/mpt3sas_base.c:1552:45:    got restricted __le64 [usertype] <noident>
drivers/scsi/mpt3sas/mpt3sas_base.c:1565:45: warning: incorrect type in assignment (different base types)
drivers/scsi/mpt3sas/mpt3sas_base.c:1565:45:    expected unsigned long long [unsigned] [long] [long long] [usertype] <noident>
drivers/scsi/mpt3sas/mpt3sas_base.c:1565:45:    got restricted __le64 [usertype] <noident>
drivers/scsi/mpt3sas/mpt3sas_base.c:1575:36: warning: incorrect type in assignment (different base types)
drivers/scsi/mpt3sas/mpt3sas_base.c:1575:36:    expected unsigned long long [unsigned] [long] [long long] [usertype] <noident>
drivers/scsi/mpt3sas/mpt3sas_base.c:1575:36:    got restricted __le64 [usertype] <noident>
drivers/scsi/mpt3sas/mpt3sas_base.c:1594:5: warning: symbol 'base_mod64' was not declared. Should it be static?
drivers/scsi/mpt3sas/mpt3sas_base.c:1717:36: warning: incorrect type in assignment (different base types)
drivers/scsi/mpt3sas/mpt3sas_base.c:1717:36:    expected unsigned long long [unsigned] [long] [long long] [usertype] <noident>
drivers/scsi/mpt3sas/mpt3sas_base.c:1717:36:    got restricted __le64 [usertype] <noident>
drivers/scsi/mpt3sas/mpt3sas_base.c:1722:28: warning: incorrect type in assignment (different base types)
drivers/scsi/mpt3sas/mpt3sas_base.c:1722:28:    expected unsigned long long [unsigned] [long] [long long] [usertype] <noident>
drivers/scsi/mpt3sas/mpt3sas_base.c:1722:28:    got restricted __le64 [usertype] <noident>
drivers/scsi/mpt3sas/mpt3sas_base.c:1620:1: warning: symbol 'base_make_prp_nvme' was not declared. Should it be static?
drivers/scsi/mpt3sas/mpt3sas_base.c:1761:21: warning: incorrect type in assignment (different base types)
drivers/scsi/mpt3sas/mpt3sas_base.c:1761:21:    expected unsigned int [unsigned] [usertype] data_length
drivers/scsi/mpt3sas/mpt3sas_base.c:1761:21:    got restricted __le32 [usertype] <noident>
drivers/scsi/mpt3sas/mpt3sas_base.c:2771:32: warning: cast removes address space of expression
drivers/scsi/mpt3sas/mpt3sas_base.c:3064:16: warning: incorrect type in argument 1 (different base types)
drivers/scsi/mpt3sas/mpt3sas_base.c:3064:16:    expected unsigned long [unsigned]

Re: [PATCH v4] add u64 number parser

2016-09-26 Thread J Freyensee
On Sun, 2016-09-25 at 09:14 -0700, James Smart wrote:
> add u64 number parser
> 
> Reverted back to version 2 of the patch.  This adds the interface
> using existing logic. Comments from the prior reviewers to move to
> kasprintf were rejected by Linus.
> 
> Signed-off-by: James Smart 

Acked-by: Jay Freyensee 

--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 5/5] nvme-fabrics: Add FC LLDD loopback driver to test FC host and target transport

2016-08-02 Thread J Freyensee
On Fri, 2016-07-22 at 17:23 -0700, James Smart wrote:

A couple comments.

> Add FC LLDD loopback driver to test FC host and target transport within
> nvme-fabrics
> 
> To aid in the development and testing of the lower-level api of the FC
> transport, this loopback driver has been created to act as if it were a
> FC hba driver supporting both the host interfaces as well as the target
> interfaces with the nvme FC transport.
> 
> 
> Signed-off-by: James Smart 
> 
> ---

snip...

> +int
> +fcloop_fcp_op(struct nvmet_fc_target_port *tgtport,
> +			struct nvmefc_tgt_fcp_req *tgt_fcpreq)
> +{
> +	struct fcloop_fcpreq *tfcp_req =
> +		container_of(tgt_fcpreq, struct fcloop_fcpreq, tgt_fcp_req);
> +	struct nvmefc_fcp_req *fcpreq = tfcp_req->fcpreq;
> +	u32 rsplen = 0, xfrlen = 0;
> +	int fcp_err = 0;
> +	u8 op = tgt_fcpreq->op;
> +
> +	switch (op) {
> +	case NVMET_FCOP_WRITEDATA:
> +		xfrlen = tgt_fcpreq->transfer_length;
> +		fcloop_fcp_copy_data(op, tgt_fcpreq->sg, fcpreq->first_sgl,
> +					tgt_fcpreq->offset, xfrlen);
> +		fcpreq->transferred_length += xfrlen;
> +		break;
> +
> +	case NVMET_FCOP_READDATA:
> +	case NVMET_FCOP_READDATA_RSP:
> +		xfrlen = tgt_fcpreq->transfer_length;
> +		fcloop_fcp_copy_data(op, tgt_fcpreq->sg, fcpreq->first_sgl,
> +					tgt_fcpreq->offset, xfrlen);
> +		fcpreq->transferred_length += xfrlen;
> +		if (op == NVMET_FCOP_READDATA)
> +			break;
> +
> +		/* Fall-Thru to RSP handling */
> +
> +	case NVMET_FCOP_RSP:
> +		rsplen = ((fcpreq->rsplen < tgt_fcpreq->rsplen) ?
> +				fcpreq->rsplen : tgt_fcpreq->rsplen);
> +		memcpy(fcpreq->rspaddr, tgt_fcpreq->rspaddr, rsplen);
> +		if (rsplen < tgt_fcpreq->rsplen)
> +			fcp_err = -E2BIG;
> +		fcpreq->rcv_rsplen = rsplen;
> +		fcpreq->status = 0;
> +		tfcp_req->status = 0;
> +		break;
> +
> +	case NVMET_FCOP_ABORT:
> +		tfcp_req->status = NVME_SC_FC_TRANSPORT_ABORTED;
> +		break;
> +
> +	default:
> +		fcp_err = -EINVAL;
> +		break;
> +	}
> +
> +	tgt_fcpreq->transferred_length = xfrlen;
> +	tgt_fcpreq->fcp_error = fcp_err;
> +	tgt_fcpreq->done(tgt_fcpreq);
> +
> +	if ((!fcp_err) && (op == NVMET_FCOP_RSP ||
> +			op == NVMET_FCOP_READDATA_RSP ||
> +			op == NVMET_FCOP_ABORT))
> +		schedule_work(&tfcp_req->work);
> +
> +	return 0;

if this function returns an 'int', why would it always return 0 and not
the fcp_err values (if there is an error)?
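For illustration, a stripped-down sketch of what I mean (the stub and op encoding are made up; whether the transport's callers actually act on a non-zero `.fcp_op` return is something to confirm):

```c
#include <errno.h>

/* Simplified model of the switch above: unknown ops yield -EINVAL,
 * and the function returns that error instead of unconditionally
 * returning 0, so a caller can notice the failure. */
static int fcloop_fcp_op_sketch(int op)
{
	int fcp_err = 0;

	switch (op) {
	case 1:	/* e.g. NVMET_FCOP_WRITEDATA: would transfer data */
		break;
	default:
		fcp_err = -EINVAL;
		break;
	}

	/* ... done() callback would run here, as in the real code ... */
	return fcp_err;		/* instead of: return 0; */
}
```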

> +}
> +
> +void
> +fcloop_ls_abort(struct nvme_fc_local_port *localport,
> + struct nvme_fc_remote_port *remoteport,
> + struct nvmefc_ls_req *lsreq)
> +{
> +}
> +
> +void
> +fcloop_fcp_abort(struct nvme_fc_local_port *localport,
> + struct nvme_fc_remote_port *remoteport,
> + void *hw_queue_handle,
> + struct nvmefc_fcp_req *fcpreq)
> +{
> +}
> +
> +
> +struct nvme_fc_port_template fctemplate = {
> + .create_queue   = fcloop_create_queue,
> + .delete_queue   = fcloop_delete_queue,
> + .ls_req = fcloop_ls_req,
> + .fcp_io = fcloop_fcp_req,
> + .ls_abort   = fcloop_ls_abort,
> + .fcp_abort  = fcloop_fcp_abort,
> +
> + .max_hw_queues  = 1,
> + .max_sgl_segments = 256,
> + .max_dif_sgl_segments = 256,
> + .dma_boundary = 0x,

Between here and "struct nvmet_fc_target_template tgttemplate" they are
assigning the same magic values to the same variable names, so why not
have these values as #defines for a tad easier maintainability?
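Something like the following is the shape I mean (macro and struct names here are hypothetical, just to show both templates drawing from one set of constants):

```c
/* Shared limits for both the host and target templates, so the two
 * stay in sync if one ever changes (names made up for illustration). */
#define FCLOOP_MAX_HW_QUEUES		1
#define FCLOOP_MAX_SGL_SEGMENTS		256
#define FCLOOP_MAX_DIF_SGL_SEGMENTS	256

struct port_template_sketch {	/* stand-in for the two template structs */
	int max_hw_queues;
	int max_sgl_segments;
	int max_dif_sgl_segments;
};

static const struct port_template_sketch fctemplate_sketch = {
	.max_hw_queues        = FCLOOP_MAX_HW_QUEUES,
	.max_sgl_segments     = FCLOOP_MAX_SGL_SEGMENTS,
	.max_dif_sgl_segments = FCLOOP_MAX_DIF_SGL_SEGMENTS,
};

static const struct port_template_sketch tgttemplate_sketch = {
	.max_hw_queues        = FCLOOP_MAX_HW_QUEUES,
	.max_sgl_segments     = FCLOOP_MAX_SGL_SEGMENTS,
	.max_dif_sgl_segments = FCLOOP_MAX_DIF_SGL_SEGMENTS,
};
```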

> + /* sizes of additional private data for data structures */
> + .local_priv_sz  = sizeof(struct fcloop_lport),
> + .remote_priv_sz = sizeof(struct fcloop_rport),
> + .lsrqst_priv_sz = sizeof(struct fcloop_lsreq),
> + .fcprqst_priv_sz = sizeof(struct fcloop_fcpreq),
> +};
> +
> +struct nvmet_fc_target_template tgttemplate = {
> + .xmt_ls_rsp = fcloop_xmt_ls_rsp,
> + .fcp_op = fcloop_fcp_op,
> +
> + .max_hw_queues  = 1,
> + .max_sgl_segments = 256,
> + .max_dif_sgl_segments = 256,
> + .dma_boundary = 0x,
> +

see above comment.

> + /* optional features */
> + .target_features = NVMET_FCTGTFEAT_READDATA_RSP,
> +
> + /* sizes of additional private data for data structures */
> + .target_priv_sz = sizeof(struct fcloop_tgtport),
> +};
> +
> +static ssize_t
> +fcloop_create_local_port(struct device *dev, struct device_attribute *attr,
> +		const char *buf, size_t count)
> +{
> + 

Re: [PATCH 4/5] nvme-fabrics: Add target FC transport support

2016-08-01 Thread J Freyensee
On Fri, 2016-07-22 at 17:23 -0700, James Smart wrote:

A few comments.

> Add nvme-fabrics target FC transport support
> 
> Implements the FC-NVME T11 definition of how nvme fabric capsules are
> performed on an FC fabric. Utilizes a lower-layer API to FC host adapters
> to send/receive FC-4 LS operations and perform the FCP transactions
> necessary to perform an FCP IO request for NVME.
> 
> The T11 definitions for FC-4 Link Services are implemented which create
> NVMeOF connections.  Implements the hooks with nvmet layer to pass NVME
> commands to it for processing and posting of data/response back to the
> host via the different connections.
> 
> 
snip
.
.
.

> +static void
> +nvmet_fc_free_target_queue(struct nvmet_fc_tgt_queue *queue)
> +{
> +	struct nvmet_fc_tgtport *tgtport = queue->assoc->tgtport;
> +	unsigned long flags;
> +
> +	/*
> +	 * beware: nvmet layer hangs waiting for a completion if
> +	 * connect command failed
> +	 */
> +	flush_workqueue(queue->work_q);
> +	if (queue->connected)
> +		nvmet_sq_destroy(&queue->nvme_sq);

I was wondering if there is any way for this FC target layer to fake-send
an NVMe completion to the nvmet layer to prevent an nvmet-layer hang
(I'm assuming the nvmet layer hangs because it never receives a connect
completion upon failure here), and then signal teardown of the sq.

Or, alternatively, call nvmet_ctrl_fatal_error() if connect fails, as a
trial/alternative to letting the nvmet layer hang?


> +	spin_lock_irqsave(&tgtport->lock, flags);
> +	queue->assoc->queues[queue->qid] = NULL;
> +	spin_unlock_irqrestore(&tgtport->lock, flags);
> +	nvmet_fc_destroy_fcp_iodlist(tgtport, queue);
> +	destroy_workqueue(queue->work_q);
> +	kfree(queue);
> +}
> +
> +static struct nvmet_fc_tgt_queue *
> +nvmet_fc_find_target_queue(struct nvmet_fc_tgtport *tgtport,
> +			u64 connection_id)
> +{
> +	struct nvmet_fc_tgt_assoc *assoc;
> +	u64 association_id = nvmet_fc_getassociationid(connection_id);
> +	u16 qid = nvmet_fc_getqueueid(connection_id);
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&tgtport->lock, flags);
> +	list_for_each_entry(assoc, &tgtport->assoc_list, a_list) {
> +		if (association_id == assoc->association_id) {
> +			spin_unlock_irqrestore(&tgtport->lock, flags);
> +			return assoc->queues[qid];
> +		}
> +	}
> +	spin_unlock_irqrestore(&tgtport->lock, flags);
> +	return NULL;
> +}

snip
.
.

> +
> +/**
> + * nvme_fc_register_targetport - transport entry point called by an
> + *                              LLDD to register the existence of a
> + *                              local NVME subsystem FC port.
> + * @pinfo:     pointer to information about the port to be registered
> + * @template:  LLDD entrypoints and operational parameters for the port
> + * @dev:       physical hardware device node port corresponds to. Will be
> + *             used for DMA mappings
> + * @tgtport_p: pointer to a local port pointer. Upon success, the


looks like the variable tgtport_p does not exist (or it's now called
portptr)?

> + *             routine will allocate a nvme_fc_local_port structure and
> + *             place its address in the local port pointer. Upon failure,
> + *             local port pointer will be set to 0.

And I think the description is wrong, looks like the code does the more
correct thing, set *portptr = NULL, not 0.


snip.
.
.
.
> +/*
> + * Actual processing routine for received FC-NVME LS Requests from the LLD
> + */
> +void
> +nvmet_fc_handle_ls_rqst(struct nvmet_fc_tgtport *tgtport,
> +			struct nvmet_fc_ls_iod *iod)
> +{
> +	struct fcnvme_ls_rqst_w0 *w0 =
> +			(struct fcnvme_ls_rqst_w0 *)iod->rqstbuf;
> +
> +	iod->lsreq->nvmet_fc_private = iod;
> +	iod->lsreq->rspbuf = iod->rspbuf;
> +	iod->lsreq->rspdma = iod->rspdma;
> +	iod->lsreq->done = nvmet_fc_xmt_ls_rsp_done;
> +	/* Be preventative. handlers will later set to valid length */
> +	iod->lsreq->rsplen = 0;
> +
> +	iod->assoc = NULL;
> +
> +	/*
> +	 * handlers:
> +	 *   parse request input, set up nvmet req (cmd, rsp, execute)
> +	 *   and format the LS response
> +	 * if non-zero returned, then no further action taken on the LS
> +	 * if zero:
> +	 *   valid to call nvmet layer if execute routine set
> +	 *   iod->rspbuf contains ls response
> +	 */
> +	switch (w0->ls_cmd) {
> +	case FCNVME_LS_CREATE_ASSOCIATION:
> +		/* Creates Association and initial Admin Queue/Connection */
> +		nvmet_fc_ls_create_association(tgtport, iod);
> +		break;
> +	case FCNVME_LS_CREATE_CONNECTION:
> +		/* Creates an IO Queue/Connection */
> +		nvmet_fc_ls_create_connection(tgtport, iod);
> +		break;
> +	case FCNVME_LS_DISCONNECT:
> +		/* 

Re: [PATCH 1/5] nvme-fabrics: Add FC transport FC-NVME definitions

2016-07-29 Thread J Freyensee
On Mon, 2016-07-25 at 10:56 +0200, Johannes Thumshirn wrote:
> On Fri, Jul 22, 2016 at 05:23:55PM -0700, James Smart wrote:
> > 
> > nvme-fabrics: Add FC transport FC-NVME definitions:
> > - Formats for Cmd, Data, Rsp IUs
> > - Formats FC-4 LS definitions
> > 
> > 
> > Signed-off-by: James Smart 
> 
> Acked-by: Johannes Thumshirn 


Acked-by: Jay Freyensee 

> 


Re: [PATCH 2/5] nvme-fabrics: Add FC transport LLDD api definitions

2016-07-29 Thread J Freyensee
On Mon, 2016-07-25 at 11:12 +0200, Johannes Thumshirn wrote:
> On Fri, Jul 22, 2016 at 05:23:56PM -0700, James Smart wrote:
> > 
> > nvme-fabrics: Add FC transport LLDD api definitions:
> > 
> > Host:
> > -LLDD registration with the host transport
> > -registering host ports (local ports) and target ports seen on
> >fabric (remote ports)
> > -Data structures and call points for FC-4 LS's and FCP IO requests
> > 
> > Target:
> > -LLDD registration with the target transport
> > -registering nvme subsystem ports (target ports)
> > -Data structures and call points for reception of FC-4 LS's and
> >FCP IO requests, and callbacks to perform data and rsp transfers
> >for the io.
> > 
> > 
> > Signed-off-by: James Smart 
> 
> Acked-by: Johannes Thumshirn 

Acked-by: Jay Freyensee 

> 


Re: [PATCH 3/5] nvme-fabrics: Add host FC transport support

2016-07-29 Thread J Freyensee
On Fri, 2016-07-22 at 17:23 -0700, James Smart wrote:

A couple of minor comments:


> Add nvme-fabrics host FC transport support
> 
> Implements the FC-NVME T11 definition of how nvme fabric capsules are
> performed on an FC fabric. Utilizes a lower-layer API to FC host adapters
> to send/receive FC-4 LS operations and FCP operations that comprise NVME
> over FC operation.
> 
> The T11 definitions for FC-4 Link Services are implemented which create
> NVMeOF connections.  Implements the hooks with blk-mq to then submit admin
> and io requests to the different connections.
> 
> 
> Signed-off-by: James Smart 

snip...

> + /* TODO:
> +  * assoc_rqst->assoc_cmd.cntlid = cpu_to_be16(?);
> +  * strncpy(assoc_rqst->assoc_cmd.hostid, ?,
> +  *  min(FCNVME_ASSOC_HOSTID_LEN, NVMF_NQN_SIZE));
> +  * strncpy(assoc_rqst->assoc_cmd.hostnqn, ?,
> +  *  min(FCNVME_ASSOC_HOSTNQN_LEN, NVMF_NQN_SIZE];
> +  */

What is the TODO here?

more snip...


> +
> +static int
> +nvme_fc_init_queue(struct nvme_fc_ctrl *ctrl, int idx, size_t queue_size)
> +{
> +	struct nvme_fc_queue *queue;
> +
> +	queue = &ctrl->queues[idx];
> +	memset(queue, 0, sizeof(*queue));
> +	queue->ctrl = ctrl;
> +	queue->qnum = idx;
> +	atomic_set(&queue->csn, 1);
> +	queue->dev = ctrl->dev;
> +
> +	if (idx > 0)
> +		queue->cmnd_capsule_len = ctrl->ctrl.ioccsz * 16;
> +	else
> +		queue->cmnd_capsule_len = sizeof(struct nvme_command);
> +
> +	queue->queue_size = queue_size;
> +
> +	/*
> +	 * Considered whether we should allocate buffers for all SQEs
> +	 * and CQEs and dma map them - mapping their respective entries
> +	 * into the request structures (kernel vm addr and dma address)
> +	 * thus the driver could use the buffers/mappings directly.
> +	 * It only makes sense if the LLDD would use them for its
> +	 * messaging api. It's very unlikely most adapter api's would use
> +	 * a native NVME sqe/cqe. More reasonable if FC-NVME IU payload
> +	 * structures were used instead. For now - just pass the
> +	 * sqe/cqes to the driver and let it deal with it. We'll figure
> +	 * out if the FC-NVME IUs make sense later.
> +	 */
> +
> +	return 0;

Slightly confused.  It looks like nvme_fc_configure_admin_queue() and
nvme_fc_init_io_queues() check this function for an error return, but
nvme_fc_init_queue() never returns anything but 0.  Should it return an
error?  Do the comments above imply that this function could change in
the future such that it would return something other than 0?

more more snip...

> +
> +static int
> +nvme_fc_init_io_queues(struct nvme_fc_ctrl *ctrl)
> +{
> +	int i, ret;
> +
> +	for (i = 1; i < ctrl->queue_count; i++) {
> +		ret = nvme_fc_init_queue(ctrl, i, ctrl->ctrl.sqsize);
> +		if (ret) {
> +			dev_info(ctrl->ctrl.device,
> +				"failed to initialize i/o queue %d: %d\n",
> +				i, ret);
> +		}
> +	}
> +
> +	return 0;

Right now as-is nvme_fc_init_queue() will always return 0, but this
function is hard-coded to return 0.  Independent of what
nvme_fc_init_queue() returns, this function should be returning 'ret'
as "nvme_fc_create_io_queues()" has code to check if this function
fails:

> +static int
> +nvme_fc_create_io_queues(struct nvme_fc_ctrl *ctrl)
> +{
.
.
.
> +	dev_info(ctrl->ctrl.device, "creating %d I/O queues.\n",
> +			opts->nr_io_queues);
> +
> +	ret = nvme_fc_init_io_queues(ctrl);
> +	if (ret)
> +		return ret;
> +
.
.
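A sketch of the propagation I mean (the stub stands in for nvme_fc_init_queue(), which today always returns 0; queue count and failure point are invented for the example):

```c
#define QUEUE_COUNT 4

/* Stub: pretend queue 2 fails to initialize. */
static int init_queue_stub(int idx)
{
	return (idx == 2) ? -1 : 0;
}

/* Mirrors nvme_fc_init_io_queues(), but returns ret on failure
 * instead of being hard-coded to return 0, so the caller's
 * "if (ret) return ret;" check can actually fire. */
static int init_io_queues_sketch(void)
{
	int i, ret;

	for (i = 1; i < QUEUE_COUNT; i++) {
		ret = init_queue_stub(i);
		if (ret)
			return ret;	/* propagate the error */
	}
	return 0;
}
```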

more more more snip...

> +static int
> +nvme_fc_start_fcp_op(struct nvme_fc_ctrl *ctrl, struct nvme_fc_queue *queue,
> +		struct nvme_fc_fcp_op *op, u32 data_len,
> +		enum nvmefc_fcp_datadir io_dir)
> +{
> +	struct nvme_fc_cmd_iu *cmdiu = &op->cmd_iu;
> +	struct nvme_command *sqe = &cmdiu->sqe;
> +	u32 csn;
> +	int ret;
> +
> +	/* format the FC-NVME CMD IU and fcp_req */
> +	cmdiu->connection_id = cpu_to_be64(queue->connection_id);
> +	csn = atomic_inc_return(&queue->csn);
> +	cmdiu->csn = cpu_to_be32(csn);
> +	cmdiu->data_len = cpu_to_be32(data_len);
> +	switch (io_dir) {
> +	case NVMEFC_FCP_WRITE:
> +		cmdiu->flags = FCNVME_CMD_FLAGS_WRITE;
> +		break;
> +	case NVMEFC_FCP_READ:
> +		cmdiu->flags = FCNVME_CMD_FLAGS_READ;
> +		break;
> +	case NVMEFC_FCP_NODATA:
> +		cmdiu->flags = 0;
> +		break;
> +	}
> +	op->fcp_req.payload_length = data_len;
> +	op->fcp_req.io_dir = io_dir;
> +	op->fcp_req.transferred_length = 0;
> +	op->fcp_req.rcv_rsplen = 0;
> +	op->fcp_req.status = 0;
> +
> +	/*
> +	 * validate per fabric rules, set fields mandated by fabric spec
> +	 * as well as those by FC-NVME spec.
> +	 */
> +
> +