[PATCH] scsi: bnx2fc: Fix NULL dereference in error handling

2018-10-31 Thread Dan Carpenter
If "interface" is NULL then we can't release it and trying to will only
lead to an Oops.
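
For context, here is a minimal standalone sketch (not the driver code itself) of the unwind-label pattern at issue: each cleanup label must only undo work that has already succeeded, so an allocation failure has to jump past its own release step. All helper names below are hypothetical stand-ins for the bnx2fc calls.

#include <errno.h>
#include <stddef.h>

struct iface { int dummy; };

/* Hypothetical helpers standing in for the bnx2fc create/release calls. */
extern struct iface *iface_create(void);
extern void iface_release(struct iface *ifp);
extern int iface_setup(struct iface *ifp);
extern void netdev_put_ref(void);

static int example_create(void)
{
	struct iface *ifp;
	int rc;

	ifp = iface_create();
	if (!ifp) {
		rc = -ENOMEM;
		goto netdev_err;	/* nothing to release yet */
	}

	rc = iface_setup(ifp);
	if (rc)
		goto ifput_err;		/* interface exists, so release it */

	return 0;

ifput_err:
	iface_release(ifp);		/* jumping here with ifp == NULL would oops */
netdev_err:
	netdev_put_ref();
	return rc;
}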

Fixes: aea71a024914 ("[SCSI] bnx2fc: Introduce interface structure for each vlan interface")
Signed-off-by: Dan Carpenter 
---
 drivers/scsi/bnx2fc/bnx2fc_fcoe.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
index cd160f2ec75d..bcd30e2374f1 100644
--- a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+++ b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
@@ -2364,7 +2364,7 @@ static int _bnx2fc_create(struct net_device *netdev,
if (!interface) {
printk(KERN_ERR PFX "bnx2fc_interface_create failed\n");
rc = -ENOMEM;
-   goto ifput_err;
+   goto netdev_err;
}
 
if (is_vlan_dev(netdev)) {
-- 
2.11.0



[PATCH v4 4/5] qla2xxx_nvmet: Add SysFS node for FC-NVMe Target

2018-10-31 Thread Himanshu Madhani
From: Anil Gurumurthy 

This patch adds a SysFS node for NVMe Target configuration.
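
As background, here is a minimal sketch of how a write-only binary sysfs node of this kind is typically wired up in a SCSI LLD. The attribute name and 0200 mode mirror the patch below; the registration helper shown is the generic sysfs API rather than the exact qla2xxx call site, and the function names are illustrative.

#include <linux/device.h>
#include <linux/sysfs.h>
#include <scsi/scsi_host.h>

/* Sketch only: a write-only (0200) binary attribute named "nvmet". */
static ssize_t example_write_nvmet(struct file *filp, struct kobject *kobj,
				   struct bin_attribute *bin_attr,
				   char *buf, loff_t off, size_t count)
{
	/* Recover the Scsi_Host from the embedded device kobject. */
	struct Scsi_Host *shost = dev_to_shost(container_of(kobj,
					struct device, kobj));

	(void)shost;	/* here: enable target mode, request an ISP abort, ... */
	return count;
}

static struct bin_attribute example_nvmet_attr = {
	.attr = { .name = "nvmet", .mode = 0200 },
	.size = 0,
	.write = example_write_nvmet,
};

/* Typically created once per host from the driver's sysfs setup path. */
static int example_create_nvmet_node(struct Scsi_Host *shost)
{
	return sysfs_create_bin_file(&shost->shost_dev.kobj,
				     &example_nvmet_attr);
}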

Signed-off-by: Anil Gurumurthy 
Signed-off-by: Himanshu Madhani 
---
 drivers/scsi/qla2xxx/qla_attr.c | 33 +
 drivers/scsi/qla2xxx/qla_gs.c   |  2 +-
 drivers/scsi/qla2xxx/qla_init.c |  3 ++-
 3 files changed, 36 insertions(+), 2 deletions(-)

diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
index 0bb9ac6ece92..678aff5ca947 100644
--- a/drivers/scsi/qla2xxx/qla_attr.c
+++ b/drivers/scsi/qla2xxx/qla_attr.c
@@ -13,6 +13,7 @@
 #include 
 
 static int qla24xx_vport_disable(struct fc_vport *, bool);
+extern void qlt_set_mode(struct scsi_qla_host *vha);
 
/* SYSFS attributes --------------------------------------------------- */
 
@@ -631,6 +632,37 @@ static struct bin_attribute sysfs_sfp_attr = {
 };
 
 static ssize_t
+qla2x00_sysfs_write_nvmet(struct file *filp, struct kobject *kobj,
+   struct bin_attribute *bin_attr,
+   char *buf, loff_t off, size_t count)
+{
+   struct scsi_qla_host *vha = shost_priv(dev_to_shost(container_of(kobj,
+   struct device, kobj)));
+   struct qla_hw_data *ha = vha->hw;
+   scsi_qla_host_t *base_vha = pci_get_drvdata(ha->pdev);
+
+   ql_log(ql_log_info, vha, 0x706e,
+   "Bringing up target mode!! vha:%p\n", vha);
+   qlt_op_target_mode = 1;
+   qlt_set_mode(base_vha);
+   set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags);
+   qla2xxx_wake_dpc(vha);
+   qla2x00_wait_for_hba_online(vha);
+
+   return count;
+}
+
+static struct bin_attribute sysfs_nvmet_attr = {
+   .attr = {
+   .name = "nvmet",
+   .mode = 0200,
+   },
+   .size = 0,
+   .write = qla2x00_sysfs_write_nvmet,
+};
+
+
+static ssize_t
 qla2x00_sysfs_write_reset(struct file *filp, struct kobject *kobj,
struct bin_attribute *bin_attr,
char *buf, loff_t off, size_t count)
@@ -943,6 +975,7 @@ static struct sysfs_entry {
{ "issue_logo", _issue_logo_attr, },
{ "xgmac_stats", _xgmac_stats_attr, 3 },
{ "dcbx_tlv", _dcbx_tlv_attr, 3 },
+   { "nvmet", _nvmet_attr, },
{ NULL },
 };
 
diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c
index ea55ed972eed..cfc6818952a0 100644
--- a/drivers/scsi/qla2xxx/qla_gs.c
+++ b/drivers/scsi/qla2xxx/qla_gs.c
@@ -698,7 +698,7 @@ qla2x00_rff_id(scsi_qla_host_t *vha, u8 type)
return (QLA_SUCCESS);
 
return qla_async_rffid(vha, &vha->d_id, qlt_rff_id(vha),
-   FC4_TYPE_FCP_SCSI);
+   type);
 }
 
 static int qla_async_rffid(scsi_qla_host_t *vha, port_id_t *d_id,
diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
index 665964d4f0c4..36d67230c3b1 100644
--- a/drivers/scsi/qla2xxx/qla_init.c
+++ b/drivers/scsi/qla2xxx/qla_init.c
@@ -5526,7 +5526,8 @@ qla2x00_configure_fabric(scsi_qla_host_t *vha)
 * will be newer than discovery_gen. */
qlt_do_generation_tick(vha, &discovery_gen);
 
-   if (USE_ASYNC_SCAN(ha)) {
+   if (USE_ASYNC_SCAN(ha) && !(vha->flags.nvmet_enabled)) {
+   /* If NVME target mode is enabled, go through regular scan */
rval = qla24xx_async_gpnft(vha, FC4_TYPE_FCP_SCSI,
NULL);
if (rval)
-- 
2.12.0



[PATCH v4 5/5] qla2xxx: Update driver version to 11.00.00.00-k

2018-10-31 Thread Himanshu Madhani
Signed-off-by: Himanshu Madhani 
---
 drivers/scsi/qla2xxx/qla_version.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/scsi/qla2xxx/qla_version.h b/drivers/scsi/qla2xxx/qla_version.h
index 12bafff71a1a..0d58aa629c08 100644
--- a/drivers/scsi/qla2xxx/qla_version.h
+++ b/drivers/scsi/qla2xxx/qla_version.h
@@ -7,9 +7,9 @@
 /*
  * Driver version
  */
-#define QLA2XXX_VERSION  "10.00.00.11-k"
+#define QLA2XXX_VERSION  "11.00.00.00-k"
 
-#define QLA_DRIVER_MAJOR_VER   10
+#define QLA_DRIVER_MAJOR_VER   11
 #define QLA_DRIVER_MINOR_VER   0
 #define QLA_DRIVER_PATCH_VER   0
 #define QLA_DRIVER_BETA_VER0
-- 
2.12.0



[PATCH v4 2/5] qla2xxx_nvmet: Add files for FC-NVMe Target support

2018-10-31 Thread Himanshu Madhani
From: Anil Gurumurthy 

This patch adds the files needed to enable NVMe Target support.
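
For orientation, here is a condensed sketch of the LLDD-side handshake with the kernel's FC-NVMe target core that these files implement: register a target port with a template of callbacks, and on teardown unregister it and wait for the targetport_delete callback. This mirrors the completion used by qla_nvmet_targetport_delete() in the file below; the helper names and numeric values in the sketch are illustrative, not the driver's actual ones.

#include <linux/completion.h>
#include <linux/device.h>
#include <linux/types.h>
#include <linux/nvme-fc-driver.h>

/* Illustrative private data carried behind the nvmet_fc target port. */
struct example_tgtport_priv {
	struct completion tport_del;
};

static void example_targetport_delete(struct nvmet_fc_target_port *tgtport)
{
	struct example_tgtport_priv *priv = tgtport->private;

	/* Let the teardown path know the core has finished with the port. */
	complete(&priv->tport_del);
}

static struct nvmet_fc_target_template example_tgt_template = {
	.targetport_delete = example_targetport_delete,
	/* .xmt_ls_rsp, .fcp_op, .fcp_abort, ... are filled in by the real driver */
	.max_hw_queues	   = 8,			/* illustrative values */
	.max_sgl_segments  = 128,
	.dma_boundary	   = 0xFFFFFFFF,
	.target_priv_sz	   = sizeof(struct example_tgtport_priv),
};

static int example_register(struct device *dev, u64 wwnn, u64 wwpn, u32 did,
			    struct nvmet_fc_target_port **out)
{
	struct nvmet_fc_port_info pinfo = {
		.node_name = wwnn,
		.port_name = wwpn,
		.port_id   = did,
	};

	return nvmet_fc_register_targetport(&pinfo, &example_tgt_template,
					    dev, out);
}

static void example_unregister(struct nvmet_fc_target_port *tgtport)
{
	struct example_tgtport_priv *priv = tgtport->private;

	init_completion(&priv->tport_del);
	nvmet_fc_unregister_targetport(tgtport);
	/* Wait until the core invokes targetport_delete for this port. */
	wait_for_completion(&priv->tport_del);
}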

Signed-off-by: Anil Gurumurthy 
Signed-off-by: Giridhar Malavali 
Signed-off-by: Darren Trapp 
Signed-off-by: Himanshu Madhani 
---
 drivers/scsi/qla2xxx/qla_nvmet.c | 795 +++
 drivers/scsi/qla2xxx/qla_nvmet.h | 129 +++
 2 files changed, 924 insertions(+)
 create mode 100644 drivers/scsi/qla2xxx/qla_nvmet.c
 create mode 100644 drivers/scsi/qla2xxx/qla_nvmet.h

diff --git a/drivers/scsi/qla2xxx/qla_nvmet.c b/drivers/scsi/qla2xxx/qla_nvmet.c
new file mode 100644
index ..caf72d5d627b
--- /dev/null
+++ b/drivers/scsi/qla2xxx/qla_nvmet.c
@@ -0,0 +1,795 @@
+/*
+ * QLogic Fibre Channel HBA Driver
+ * Copyright (c)  2003-2017 QLogic Corporation
+ *
+ * See LICENSE.qla2xxx for copyright and licensing details.
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include "qla_nvme.h"
+#include "qla_nvmet.h"
+
+static void qla_nvmet_send_resp_ctio(struct qla_qpair *qpair,
+   struct qla_nvmet_cmd *cmd, struct nvmefc_tgt_fcp_req *rsp);
+static void qla_nvmet_send_abts_ctio(struct scsi_qla_host *vha,
+   struct abts_recv_from_24xx *abts, bool flag);
+
+/*
+ * qla_nvmet_targetport_delete -
+ * Invoked by the nvmet to indicate that the target port has
+ * been deleted
+ */
+static void
+qla_nvmet_targetport_delete(struct nvmet_fc_target_port *targetport)
+{
+   struct qla_nvmet_tgtport *tport = targetport->private;
+
+   if (!IS_ENABLED(CONFIG_NVME_TARGET_FC))
+   return;
+
+   complete(&tport->tport_del);
+}
+
+/*
+ * qlt_nvmet_ls_done -
+ * Invoked by the firmware interface to indicate the completion
+ * of an LS cmd
+ * Free all associated resources of the LS cmd
+ */
+static void qlt_nvmet_ls_done(void *ptr, int res)
+{
+   struct srb *sp = ptr;
+   struct srb_iocb *nvme = &sp->u.iocb_cmd;
+   struct nvmefc_tgt_ls_req *rsp = nvme->u.nvme.desc;
+   struct qla_nvmet_cmd *tgt_cmd = nvme->u.nvme.cmd;
+
+   if (!IS_ENABLED(CONFIG_NVME_TARGET_FC))
+   return;
+
+   ql_dbg(ql_dbg_nvme, sp->vha, 0x11001,
+   "%s: sp %p vha %p, rsp %p, cmd %p\n", __func__,
+   sp, sp->vha, nvme->u.nvme.desc, nvme->u.nvme.cmd);
+
+   rsp->done(rsp);
+
+   /* Free tgt_cmd */
+   kfree(tgt_cmd->buf);
+   kfree(tgt_cmd);
+   qla2x00_rel_sp(sp);
+}
+
+/*
+ * qla_nvmet_ls_rsp -
+ * Invoked by the nvme-t to complete the LS req.
+ * Prepare and send a response CTIO to the firmware.
+ */
+static int
+qla_nvmet_ls_rsp(struct nvmet_fc_target_port *tgtport,
+   struct nvmefc_tgt_ls_req *rsp)
+{
+   struct qla_nvmet_cmd *tgt_cmd =
+   container_of(rsp, struct qla_nvmet_cmd, cmd.ls_req);
+   struct scsi_qla_host *vha = tgt_cmd->vha;
+   struct srb_iocb   *nvme;
+   int rval = QLA_FUNCTION_FAILED;
+   srb_t *sp;
+
+   ql_dbg(ql_dbg_nvme + ql_dbg_buffer, vha, 0x11002,
+   "Dumping the NVMET-LS response buffer\n");
+   ql_dump_buffer(ql_dbg_nvme + ql_dbg_buffer, vha, 0x2075,
+   (uint8_t *)rsp->rspbuf, rsp->rsplen);
+
+   /* Alloc SRB structure */
+   sp = qla2x00_get_sp(vha, NULL, GFP_ATOMIC);
+   if (!sp) {
+   ql_log(ql_log_info, vha, 0x11003, "Failed to allocate SRB\n");
+   return -ENOMEM;
+   }
+
+   sp->type = SRB_NVMET_LS;
+   sp->done = qlt_nvmet_ls_done;
+   sp->vha = vha;
+   sp->fcport = tgt_cmd->fcport;
+
+   nvme = &sp->u.iocb_cmd;
+   nvme->u.nvme.rsp_dma = rsp->rspdma;
+   nvme->u.nvme.rsp_len = rsp->rsplen;
+   nvme->u.nvme.exchange_address = tgt_cmd->atio.u.pt_ls4.exchange_address;
+   nvme->u.nvme.nport_handle = tgt_cmd->atio.u.pt_ls4.nport_handle;
+   nvme->u.nvme.vp_index = tgt_cmd->atio.u.pt_ls4.vp_index;
+
+   nvme->u.nvme.cmd = tgt_cmd; /* To be freed */
+   nvme->u.nvme.desc = rsp; /* Call back to nvmet */
+
+   rval = qla2x00_start_sp(sp);
+   if (rval != QLA_SUCCESS) {
+   ql_log(ql_log_warn, vha, 0x11004,
+   "qla2x00_start_sp failed = %d\n", rval);
+   return rval;
+   }
+
+   return 0;
+}
+
+/*
+ * qla_nvmet_fcp_op -
+ * Invoked by the nvme-t to complete the IO.
+ * Prepare and send a response CTIO to the firmware.
+ */
+static int
+qla_nvmet_fcp_op(struct nvmet_fc_target_port *tgtport,
+   struct nvmefc_tgt_fcp_req *rsp)
+{
+   struct qla_nvmet_cmd *tgt_cmd =
+   container_of(rsp, struct qla_nvmet_cmd, cmd.fcp_req);
+   struct scsi_qla_host *vha = tgt_cmd->vha;
+
+   if (!IS_ENABLED(CONFIG_NVME_TARGET_FC))
+   return 0;
+
+   /* Prepare and send CTIO 82h */
+   qla_nvmet_send_resp_ctio(vha->qpair, tgt_cmd, rsp);
+
+   return 0;
+}
+
+/*
+ * qla_nvmet_fcp_abort_done
+ * free up the used resources
+ */
+static void qla_nvmet_fcp_abort_done(void *ptr, int res)
+{
+   srb_t *sp = ptr;
+
+   

[PATCH v4 1/5] qla2xxx_nvmet: Add FC-NVMe Target Link Service request handling

2018-10-31 Thread Himanshu Madhani
From: Anil Gurumurthy 

This patch provides Link Service pass-through handling in the driver.
The feature is implemented mainly by the firmware; the driver exercises
it through an IOCB interface.
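
For context, this is the general shape of driving such an exchange through the driver's SRB machinery, condensed from the usage visible in this series; the helper below and its completion callback are illustrative, not part of the patch.

#include "qla_def.h"

/* Illustrative completion callback; the real driver releases the LS
 * buffers and the SRB here. */
static void example_ls_done(void *ptr, int res)
{
	srb_t *sp = ptr;

	qla2x00_rel_sp(sp);
}

/* Condensed sketch: queue an NVMET LS exchange as an SRB. */
static int example_queue_nvmet_ls(struct scsi_qla_host *vha,
				  fc_port_t *fcport)
{
	srb_t *sp;
	int rval;

	sp = qla2x00_get_sp(vha, fcport, GFP_ATOMIC);	/* allocate an SRB */
	if (!sp)
		return -ENOMEM;

	sp->type = SRB_NVMET_LS;	/* new SRB type added by this patch */
	sp->done = example_ls_done;

	/* qla2x00_start_sp() builds the IOCB and, for SRB_NVMET_LS, calls
	 * qla_nvmet_ls() to fill in the pass-through request. */
	rval = qla2x00_start_sp(sp);
	if (rval != QLA_SUCCESS) {
		qla2x00_rel_sp(sp);
		return -EIO;
	}

	return 0;
}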

Signed-off-by: Anil Gurumurthy 
Signed-off-by: Giridhar Malavali 
Signed-off-by: Darren Trapp 
Signed-off-by: Himanshu Madhani 
---
 drivers/scsi/qla2xxx/qla_dbg.c|  1 +
 drivers/scsi/qla2xxx/qla_dbg.h|  2 ++
 drivers/scsi/qla2xxx/qla_def.h|  3 +++
 drivers/scsi/qla2xxx/qla_gbl.h|  7 +++
 drivers/scsi/qla2xxx/qla_iocb.c   |  8 +++-
 drivers/scsi/qla2xxx/qla_target.c | 11 +++
 6 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/qla2xxx/qla_dbg.c b/drivers/scsi/qla2xxx/qla_dbg.c
index c7533fa7f46e..ed9c228f7d11 100644
--- a/drivers/scsi/qla2xxx/qla_dbg.c
+++ b/drivers/scsi/qla2xxx/qla_dbg.c
@@ -67,6 +67,7 @@
  * | Target Mode Management  |   0xf09b   | 0xf002 |
  * |  || 0xf046-0xf049  |
  * | Target Mode Task Management  |  0x1000d  ||
+ * | NVME|   0x11000  ||
  * --
  */
 
diff --git a/drivers/scsi/qla2xxx/qla_dbg.h b/drivers/scsi/qla2xxx/qla_dbg.h
index 8877aa97d829..4ad97923e40b 100644
--- a/drivers/scsi/qla2xxx/qla_dbg.h
+++ b/drivers/scsi/qla2xxx/qla_dbg.h
@@ -367,6 +367,8 @@ ql_log_qp(uint32_t, struct qla_qpair *, int32_t, const char *fmt, ...);
 #define ql_dbg_tgt_tmr 0x1000 /* Target mode task management */
 #define ql_dbg_tgt_dif  0x0800 /* Target mode dif */
 
+#define ql_dbg_nvme 0x0400 /* NVME Target */
+
 extern int qla27xx_dump_mpi_ram(struct qla_hw_data *, uint32_t, uint32_t *,
uint32_t, void **);
 extern int qla24xx_dump_ram(struct qla_hw_data *, uint32_t, uint32_t *,
diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
index 26b93c563f92..a37a4d2261e2 100644
--- a/drivers/scsi/qla2xxx/qla_def.h
+++ b/drivers/scsi/qla2xxx/qla_def.h
@@ -515,6 +515,9 @@ struct srb_iocb {
 #define SRB_PRLI_CMD   21
 #define SRB_CTRL_VP22
 #define SRB_PRLO_CMD   23
+#define SRB_NVME_ELS_RSP 24
+#define SRB_NVMET_LS   25
+#define SRB_NVMET_FCP  26
 
 enum {
TYPE_SRB,
diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h
index 3673fcdb033a..2946c65812cd 100644
--- a/drivers/scsi/qla2xxx/qla_gbl.h
+++ b/drivers/scsi/qla2xxx/qla_gbl.h
@@ -119,6 +119,13 @@ void qla_do_iidma_work(struct scsi_qla_host *vha, fc_port_t *fcport);
 int qla2x00_reserve_mgmt_server_loop_id(scsi_qla_host_t *);
 void qla_rscn_replay(fc_port_t *fcport);
 
+
+/*
+ * Used by FC-NVMe Target
+ */
+int qla_nvmet_ls(srb_t *sp, void *rsp_pkt);
+int qlt_send_els_resp(srb_t *sp, void *pkt);
+
 /*
  * Global Data in qla_os.c source file.
  */
diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c
index 032635321ad6..70cd55884842 100644
--- a/drivers/scsi/qla2xxx/qla_iocb.c
+++ b/drivers/scsi/qla2xxx/qla_iocb.c
@@ -2113,7 +2113,7 @@ __qla2x00_alloc_iocbs(struct qla_qpair *qpair, srb_t *sp)
req_cnt = 1;
handle = 0;
 
-   if (sp && (sp->type != SRB_SCSI_CMD)) {
+   if (sp && (sp->type != SRB_SCSI_CMD) && (sp->type != SRB_NVMET_FCP)) {
/* Adjust entry-counts as needed. */
req_cnt = sp->iocbs;
}
@@ -3491,6 +3491,9 @@ qla2x00_start_sp(srb_t *sp)
case SRB_NVME_LS:
qla_nvme_ls(sp, pkt);
break;
+   case SRB_NVMET_LS:
+   qla_nvmet_ls(sp, pkt);
+   break;
case SRB_ABT_CMD:
IS_QLAFX00(ha) ?
qlafx00_abort_iocb(sp, pkt) :
@@ -3516,6 +3519,9 @@ qla2x00_start_sp(srb_t *sp)
case SRB_PRLO_CMD:
qla24xx_prlo_iocb(sp, pkt);
break;
+   case SRB_NVME_ELS_RSP:
+   qlt_send_els_resp(sp, pkt);
+   break;
default:
break;
}
diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
index c4504740f0e2..e15ea80916c1 100644
--- a/drivers/scsi/qla2xxx/qla_target.c
+++ b/drivers/scsi/qla2xxx/qla_target.c
@@ -445,6 +445,17 @@ static bool qlt_24xx_atio_pkt_all_vps(struct scsi_qla_host *vha,
return false;
 }
 
+int qlt_send_els_resp(srb_t *sp, void *pkt)
+{
+   return 0;
+}
+
+int
+qla_nvmet_ls(srb_t *sp, void *rsp_pkt)
+{
+   return 0;
+}
+
 void qlt_response_pkt_all_vps(struct scsi_qla_host *vha,
struct rsp_que *rsp, response_t *pkt)
 {
-- 
2.12.0



[PATCH v4 0/5] qla2xxx: Add FC-NVMe Target support

2018-10-31 Thread Himanshu Madhani
Hi Martin, 

This series adds support for FC-NVMe Target.

Patch #1 adds infrastructure to support FC-NVMe Target Link Service processing.
Patch #2 adds new qla_nvmet.[ch] files for FC-NVMe Target support.
Patch #3 has the bulk of the changes that add hooks into the common code
 infrastructure and adds support for FC-NVMe Target LS4 processing via the
 Purex path.
Patch #4 adds a SysFS hook to enable NVMe Target for the port.

Please apply them to 4.21/scsi-queue at your earliest convenience.

Changes from v3 -> v4
o Rebased Series on current 4.20/scsi-queue 
o Removed NVMET_FCTGTFEAT_{CMD|OPDONE}_IN_ISR as per James Smart's review 
comment. 

Changes from v2 -> v3
o Reordered patches so that each patch compiles individually and is bisectable.

Changes from v1 -> v2
o Addressed all comments from Bart.
o Consolidated Patch 1 and Patch 2 into single patch.
o Fixed smatch warning reported by the kbuild automation.
o NVMe Target mode is exclusive at the moment: the Cavium driver does not
  support FCP Target and NVMe Target at the same time. This will be fixed in
  later updates.
 
Thanks,
Himanshu 

Anil Gurumurthy (4):
  qla2xxx_nvmet: Add FC-NVMe Target Link Service request handling
  qla2xxx_nvmet: Add files for FC-NVMe Target support
  qla2xxx_nvmet: Add FC-NVMe Target handling
  qla2xxx_nvmet: Add SysFS node for FC-NVMe Target

Himanshu Madhani (1):
  qla2xxx: Update driver version to 11.00.00.00-k

 drivers/scsi/qla2xxx/Makefile  |   3 +-
 drivers/scsi/qla2xxx/qla_attr.c|  33 ++
 drivers/scsi/qla2xxx/qla_dbg.c |   1 +
 drivers/scsi/qla2xxx/qla_dbg.h |   2 +
 drivers/scsi/qla2xxx/qla_def.h |  35 +-
 drivers/scsi/qla2xxx/qla_fw.h  | 263 ++
 drivers/scsi/qla2xxx/qla_gbl.h |  24 +-
 drivers/scsi/qla2xxx/qla_gs.c  |  16 +-
 drivers/scsi/qla2xxx/qla_init.c|  49 +-
 drivers/scsi/qla2xxx/qla_iocb.c|   8 +-
 drivers/scsi/qla2xxx/qla_isr.c | 112 -
 drivers/scsi/qla2xxx/qla_mbx.c | 101 +++-
 drivers/scsi/qla2xxx/qla_nvme.h|  33 --
 drivers/scsi/qla2xxx/qla_nvmet.c   | 831 +++
 drivers/scsi/qla2xxx/qla_nvmet.h   | 129 +
 drivers/scsi/qla2xxx/qla_os.c  |  75 ++-
 drivers/scsi/qla2xxx/qla_target.c  | 977 -
 drivers/scsi/qla2xxx/qla_target.h  |  90 
 drivers/scsi/qla2xxx/qla_version.h |   4 +-
 19 files changed, 2711 insertions(+), 75 deletions(-)
 create mode 100644 drivers/scsi/qla2xxx/qla_nvmet.c
 create mode 100644 drivers/scsi/qla2xxx/qla_nvmet.h

-- 
2.12.0



[PATCH v4 3/5] qla2xxx_nvmet: Add FC-NVMe Target handling

2018-10-31 Thread Himanshu Madhani
From: Anil Gurumurthy 

This patch adds the following code in the driver to
support FC-NVMe Target:

- Updated ql2xnvmeenable to allow FC-NVMe Target operation
- Added Link Service Request handling for NVMe Target
- Added passthru IOCB for LS4 request
- Added CTIO for sending response to FW
- Added FC4 Registration for FC-NVMe Target
- Added PUREX IOCB support for login processing in FC-NVMe Target mode
- Added Continuation IOCB for PUREX
- Added Session creation with PUREX IOCB in FC-NVMe Target mode
- To enable FC-NVMe Target mode, load the driver with ql2xnvmeenable=2
  (a minimal sketch of this gating is shown below)
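
For illustration, here is a minimal sketch of how such a module-parameter gate can look. Only the ql2xnvmeenable name and the "=2 enables target mode" semantics are taken from this description; the default value, description text, and helper name are hypothetical.

#include <linux/module.h>
#include <linux/moduleparam.h>

/* 0 - NVMe off, 1 - FC-NVMe initiator, 2 - FC-NVMe target (as described above). */
static int ql2xnvmeenable = 1;
module_param(ql2xnvmeenable, int, 0444);
MODULE_PARM_DESC(ql2xnvmeenable,
	"Enables FC-NVMe support: 0 - off, 1 - initiator, 2 - target (assumed wording)");

/* Hypothetical helper: probe/init paths would check this before registering
 * an nvmet_fc target port or enabling PUREX handling. */
static inline bool example_nvmet_requested(void)
{
	return ql2xnvmeenable == 2;
}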

Signed-off-by: Anil Gurumurthy 
Signed-off-by: Giridhar Malavali 
Signed-off-by: Darren Trapp 
Signed-off-by: Himanshu Madhani 
---
 drivers/scsi/qla2xxx/Makefile |   3 +-
 drivers/scsi/qla2xxx/qla_def.h|  32 +-
 drivers/scsi/qla2xxx/qla_fw.h | 263 ++
 drivers/scsi/qla2xxx/qla_gbl.h|  21 +-
 drivers/scsi/qla2xxx/qla_gs.c |  14 +-
 drivers/scsi/qla2xxx/qla_init.c   |  46 +-
 drivers/scsi/qla2xxx/qla_isr.c| 112 -
 drivers/scsi/qla2xxx/qla_mbx.c| 101 +++-
 drivers/scsi/qla2xxx/qla_nvme.h   |  33 --
 drivers/scsi/qla2xxx/qla_nvmet.c  |  36 ++
 drivers/scsi/qla2xxx/qla_os.c |  75 ++-
 drivers/scsi/qla2xxx/qla_target.c | 988 --
 drivers/scsi/qla2xxx/qla_target.h |  90 
 13 files changed, 1731 insertions(+), 83 deletions(-)

diff --git a/drivers/scsi/qla2xxx/Makefile b/drivers/scsi/qla2xxx/Makefile
index 17d5bc1cc56b..ec924733c10e 100644
--- a/drivers/scsi/qla2xxx/Makefile
+++ b/drivers/scsi/qla2xxx/Makefile
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: GPL-2.0
 qla2xxx-y := qla_os.o qla_init.o qla_mbx.o qla_iocb.o qla_isr.o qla_gs.o \
qla_dbg.o qla_sup.o qla_attr.o qla_mid.o qla_dfs.o qla_bsg.o \
-   qla_nx.o qla_mr.o qla_nx2.o qla_target.o qla_tmpl.o qla_nvme.o
+   qla_nx.o qla_mr.o qla_nx2.o qla_target.o qla_tmpl.o qla_nvme.o \
+   qla_nvmet.o
 
 obj-$(CONFIG_SCSI_QLA_FC) += qla2xxx.o
 obj-$(CONFIG_TCM_QLA2XXX) += tcm_qla2xxx.o
diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
index a37a4d2261e2..9e2e2d9ddb30 100644
--- a/drivers/scsi/qla2xxx/qla_def.h
+++ b/drivers/scsi/qla2xxx/qla_def.h
@@ -480,6 +480,10 @@ struct srb_iocb {
uint32_t dl;
uint32_t timeout_sec;
struct  list_head   entry;
+   uint32_t exchange_address;
+   uint16_t nport_handle;
+   uint8_t vp_index;
+   void *cmd;
} nvme;
struct {
u16 cmd;
@@ -490,7 +494,11 @@ struct srb_iocb {
struct timer_list timer;
void (*timeout)(void *);
 };
-
+struct srb_nvme_els_rsp {
+   dma_addr_t dma_addr;
+   void *dma_ptr;
+   void *ptr;
+};
 /* Values for srb_ctx type */
 #define SRB_LOGIN_CMD  1
 #define SRB_LOGOUT_CMD 2
@@ -518,6 +526,8 @@ struct srb_iocb {
 #define SRB_NVME_ELS_RSP 24
 #define SRB_NVMET_LS   25
 #define SRB_NVMET_FCP  26
+#define SRB_NVMET_ABTS 27
+#define SRB_NVMET_SEND_ABTS28
 
 enum {
TYPE_SRB,
@@ -548,10 +558,13 @@ typedef struct srb {
int rc;
int retry_count;
struct completion comp;
+   struct work_struct nvmet_comp_work;
+   uint16_t comp_status;
union {
struct srb_iocb iocb_cmd;
struct bsg_job *bsg_job;
struct srb_cmd scmd;
+   struct srb_nvme_els_rsp snvme_els;
} u;
void (*done)(void *, int);
void (*free)(void *);
@@ -2276,6 +2289,15 @@ struct qlt_plogi_ack_t {
void*fcport;
 };
 
+/* NVMET */
+struct qlt_purex_plogi_ack_t {
+   struct list_headlist;
+   struct __fc_plogi rcvd_plogi;
+   port_id_t   id;
+   int ref_count;
+   void*fcport;
+};
+
 struct ct_sns_desc {
struct ct_sns_pkt   *ct_sns;
dma_addr_t  ct_sns_dma;
@@ -3238,6 +3260,7 @@ enum qla_work_type {
QLA_EVT_SP_RETRY,
QLA_EVT_IIDMA,
QLA_EVT_ELS_PLOGI,
+   QLA_EVT_NEW_NVMET_SESS,
 };
 
 
@@ -4232,6 +4255,7 @@ typedef struct scsi_qla_host {
uint32_tqpairs_req_created:1;
uint32_tqpairs_rsp_created:1;
uint32_tnvme_enabled:1;
+   uint32_tnvmet_enabled:1;
} flags;
 
atomic_tloop_state;
@@ -4277,6 +4301,7 @@ typedef struct scsi_qla_host {
 #define N2N_LOGIN_NEEDED   30
 #define IOCB_WORK_ACTIVE   31
 #define SET_ZIO_THRESHOLD_NEEDED 32
+#define NVMET_PUREX33
 
unsigned long   pci_flags;
 #define PFLG_DISCONNECTED  0   /* PCI device removed */
@@ -4317,6 +4342,7 @@ typedef struct scsi_qla_host {
uint8_t fabric_node_name[WWN_SIZE];
 
struct  

[Bug 201583] New: 4.19.0 mpt3sas generates I/O Error while spinning up drives

2018-10-31 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=201583

Bug ID: 201583
   Summary: 4.19.0 mpt3sas generates I/O Error while spinning up
drives
   Product: SCSI Drivers
   Version: 2.5
Kernel Version: 4.19.0
  Hardware: All
OS: Linux
  Tree: Mainline
Status: NEW
  Severity: high
  Priority: P1
 Component: Other
  Assignee: scsi_drivers-ot...@kernel-bugs.osdl.org
  Reporter: j...@jki.io
Regression: No

Created attachment 279277
  --> https://bugzilla.kernel.org/attachment.cgi?id=279277&action=edit
dmesg output

Since upgrading to 4.19 I'm getting I/O Errors while the drives are spinning
up.
I'm using btrfs on top of dmcrypt with raid10.
I attached the complete dmesg log for the error.

-- 
You are receiving this mail because:
You are watching the assignee of the bug.


Performance improvements & regressions in storage IO tests in Linux Kernel 4.19

2018-10-31 Thread Rajender M
As part of VMware performance regression testing for Linux kernel upstream
releases, comparing Linux kernel 4.19-rc4 against Linux kernel 4.18 GA we
observed both latency improvements of up to 60% and CPU cost regressions of up
to 23% in our storage tests. Details can be found in the table below.

After performing a bisect between 4.18 GA and 4.19-rc4 we identified the root
cause of this behavior as the change that switched the SCSI stack from a
single-queue to a multi-queue model. The details of the change are:

##
scsi: core: switch to scsi-mq by default
It has been more than one year since we tried to change the default from legacy 
to multi queue in SCSI with commit c279bd9e406 ("scsi: default to scsi-mq"). 
But due to issues with suspend/resume and performance problems it had been 
reverted again with commit cbe7dfa26eee ("Revert "scsi: default to scsi-mq"").

In the meantime there have been a substantial amount of performance 
improvements and suspend/resume got fixed as well, thus we can re-enable 
scsi-mq without a significant performance penalty.

Author    : Johannes Thumshirn   2018-07-04 10:53:56 +0200
Committer : Martin K. Petersen   2018-07-10 22:42:47 -0400

For more details refer to this link: http://url/ragk

Change Hash: d5038a13eca72fb216c07eb717169092e92284f1
Author: Johannes Thumshirn, 2018-07-04 10:53:56
##


1. Test Environment
Below are the details of our test environment:

ESX: vSphere 6.7 GA
GOS: RHEL7.5
VM type: Single-VM with 8 vDisks
vSCSI controllers: lsisas 
Kernel: 4.18GA and 4.19RC4
Backend device 1: local SATA SSD (exposed through P420 controller)
Backend device 2: FC-8G (connected to EMC VNX 5100 array)
Benchmark: ioblazer 
Block size: 4k & 64k
Access pattern: sequential read & sequential write   
OIO: 16oio/vdisk (16*8 = 128oio)
Metrics: Throughput (IOPS), Latency (ms) & CPU cost (CPIO - cycles per I/O)

2. Test Execution
We created a RHEL 7.5 VM and attached 8 data disks (vdisks), either as RDMs
("raw" disks) on the FC SAN or as VMDKs on the local SSD. After running these
tests with a Linux 4.18 GA kernel to get the baseline results, we rebooted the
VM and upgraded to the Linux 4.19-rc4 kernel. We then re-ran the ioblazer
benchmark with the above configs and measured the throughput, latency, and CPU
cost (CPIO) of sequential reads & writes for 4k & 64k block sizes.

3. Performance Results
The following are the performance numbers comparing the previous commit,
fc21ae8927f391b6e3944f82e417355da5d06a83 (Hash A), against Johannes' commit,
d5038a13eca72fb216c07eb717169092e92284f1 (Hash B), for the tests executed on
the local SSD.

-
Test Name  Metric  Hash A Hash B
 Difference (in %)
-
4k seq-read cpucost    64411   65527
   -1.73
  latency 0.563    
0.4511  24.82
  throughput  164342 161167 
    -1.97
 
4k seq write    cpucost    68199   73034
   -7.09
  latency 0.5147  
0.399    28.92
  throughput  181057 181634 
    0.31
    
64k seq read   cpucost    86799   106143
 -22.28
  latency 1.436    
0.902    59.16
  throughput  78573   78741 
  0.21
    
64k seq write cpucost    85403   101037 
    -18.3
  latency 2.407    
1.494    61.1
  throughput  48565   48582 
  0.03
-

Note:
- For cpucost and latency, lower is better. For throughput, higher is better.
- We ran the above tests for 5 iterations each for both the previous change
  (Hash A) and the problem change (Hash B) and got consistent numbers across
  iterations.
- The above performance data is from the local SSD backend device, in which we
  see