Re: [PATCH] scsi/eh: fix hang adding ehandler wakeups after decrementing host_busy
Pavel,

It turns out that the error handler on our systems was not getting woken up
for a different reason. I submitted a patch earlier today that fixes the
issue I was seeing (I CCed you on the patch).

Before I got my hands on the failing system and was able to root cause it, I
was fairly sure that your patch would fix our issue, because after examining
the code paths I couldn't find any other reason the error handler would not
get woken up. I tried to force the bug that your patch fixes by compiling in
some mdelay()s at a key place or two in the SCSI code, but I was never able
to make it fail that way. With my patch, several systems that previously
failed in 10 minutes or less ran successfully for many days.

Thanks,
Stuart

On 11/9/2017 8:54 AM, Pavel Tikhomirov wrote:
>> Are there any issues with this patch
>> (https://patchwork.kernel.org/patch/9938919/) that Pavel Tikhomirov
>> submitted back in September? I am willing to help if there's anything I
>> can do to help get it accepted.
>
> Hi, Stuart, I asked James Bottomley about the patch status offlist and it
> seems that the problem is that the patch lacks testing and review. I would
> highly appreciate a review from your side and from anyone else who wants
> to participate!
>
> And if you can confirm that the patch solves the problem in your
> environment with no side effects, please add a "Tested-by:" tag as well.
>
> Thanks, Pavel
>
> On 09/05/2017 03:54 PM, Pavel Tikhomirov wrote:
>> We have a problem with SCSI EH on several of our nodes.
>> Imagine the following order of execution of two threads:
>>
>> CPU1 scsi_eh_scmd_add            CPU2 scsi_host_queue_ready
>>
>> /* shost->host_busy == 1 initially */
>>
>>                                  if (shost->shost_state == SHOST_RECOVERY)
>>                                          /* does not get here */
>>                                          return 0;
>>
>> lock(shost->host_lock);
>> shost->shost_state = SHOST_RECOVERY;
>>
>>                                  busy = shost->host_busy++;
>>                                  /* host->can_queue == 1 initially,
>>                                   * busy == 1 - go to starved label */
>>                                  lock(shost->host_lock) /* wait */
>>
>> shost->host_failed++;
>> /* shost->host_busy == 2, shost->host_failed == 1 */
>> call scsi_eh_wakeup(shost) {
>>         if (host_busy == host_failed) {
>>                 /* does not get here */
>>                 wake_up_process(shost->ehandler)
>>         }
>> }
>> unlock(shost->host_lock)
>>
>>                                  /* acquire lock */
>>                                  shost->host_busy--;
>>
>> Finally we never wake up scsi_error_handler, and all commands arriving
>> afterwards hang: we are stuck in a never-ending recovery state with no
>> one left to wake the handler.
>>
>> So the SCSI disks on this host become unresponsive and all bio on the
>> node hangs. (We trigger this problem when SCSI commands to a DVD drive
>> time out.)
>>
>> The main idea of the fix is to attempt the wakeup every time we
>> decrement host_busy or increment host_failed (the latter is already OK).
>>
>> Now the very *last* of the busy threads to take host_lock after
>> decrementing host_busy will see all writes to the host's shost_state,
>> host_busy and host_failed completed, thanks to the memory barriers
>> implied by spin_lock/unlock, so at the moment busy == failed we will
>> trigger the wakeup in at least one thread.
>> (That is why the recovery and failed checks are done under the lock.)
>>
>> Signed-off-by: Pavel Tikhomirov
>> ---
>>  drivers/scsi/scsi_lib.c | 21 +++++++++++++++++----
>>  1 file changed, 17 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
>> index f6097b89d5d3..6c99221d60aa 100644
>> --- a/drivers/scsi/scsi_lib.c
>> +++ b/drivers/scsi/scsi_lib.c
>> @@ -320,12 +320,11 @@ void scsi_device_unbusy(struct scsi_device *sdev)
>>  	if (starget->can_queue > 0)
>>  		atomic_dec(&starget->target_busy);
>>
>> +	spin_lock_irqsave(shost->host_lock, flags);
>>  	if (unlikely(scsi_host_in_recovery(shost) &&
>> -		     (shost->host_failed || shost->host_eh_scheduled))) {
>> -		spin_lock_irqsave(shost->host_lock, flags);
>> +		     (shost->host_failed || shost->host_eh_scheduled)))
>>  		scsi_eh_wakeup(shost);
>> -		spin_unlock_irqrestore(shost->host_lock, flags);
>> -	}
>> +	spin_unlock_irqrestore(shost->host_lock, flags);
>>
>>  	atomic_dec(&sdev->device_busy);
>>  }
>> @@ -1503,6 +1502,13 @@ static inline int scsi_host_queue_ready(struct request_queue *q,
>>  	spin_unlock_irq(shost->host_lock);
>>  out_dec:
>>  	atomic_dec(&shost->host_busy);
>> +
>> +	spin_lock_irq(shost->host_lock);
>> +	if (unlikely(scsi_host_in_recovery(shost) &&
>> +		     (shost->host_failed || shost->host_eh_scheduled)))
>> +		scsi_eh_wakeup(shost);
>> +	spin_unlock_irq(shost->host_lock);
>> +
>>  	return 0;
>>  }
>> @@ -1964,6 +1970,13 @@ static blk_status_t scsi_queue_rq(struct blk_mq_hw_ctx *hctx,
>>  out_dec_host_busy:
>>  	atomic_dec(&shost->host_busy);
>> +
>> +
Re: [PATCH 0/13] scsi: arcmsr: add some driver options and support new adapter ARC-1884
On Mon, 2017-11-20 at 22:03 -0500, Martin K. Petersen wrote:
> Ching,
>
> > The following patches apply to Martin's 4.15/scsi-queue.
>
> Applied to 4.16/scsi-queue. Thank you!

Hi Martin,

Thank you for the response. It is very good news that these patches have
been applied to 4.16/scsi-queue. We would also greatly appreciate it if you
could spend a little time reviewing our driver's patches. Providing a
better driver is a goal we share with you, and we will keep working toward
it.

Thanks, Ching
Re: [PATCH 0/3] Some fixes to aacraid
Guilherme, > This series presents 3 small fixes for aacraid driver. The most > important is the crash prevention, IMHO. Applied to 4.15/scsi-fixes. Thank you! -- Martin K. Petersen Oracle Linux Engineering
Re: [PATCH] scsi: aacraid: remove unused variable managed_request_id
Colin, > Variable managed_request_id is being assigned but it is never read, > hence it is redundant and can be removed. Cleans up clang warning: Applied to 4.16/scsi-queue. -- Martin K. Petersen Oracle Linux Engineering
Re: [PATCH] bfa: remove unused pointer 'port'
Colin, > The pointer 'port' is being assigned but it is never read, hence it is > redundant and can be removed. Cleans up clang warning: > > drivers/scsi/bfa/bfad_attr.c:505:2: warning: Value stored to 'port' > is never read Applied to 4.16/scsi-queue. Thanks, Colin! -- Martin K. Petersen Oracle Linux Engineering
Re: [PATCH] scsi: st.c: fix kernel-doc mismatch
Randy, > Fix kernel-doc function name and comments in st.c::read_ns_show(): > change us to ns to match the function name. Applied to 4.16/scsi-queue. Thanks! -- Martin K. Petersen Oracle Linux Engineering
Re: [PATCH] scsi: fix another I2O typo
Randy, > Correct another typo I20 to I2O. Applied to 4.16/scsi-queue, thank you! -- Martin K. Petersen Oracle Linux Engineering
Re: [PATCH] scsi: csiostor: remove unneeded DRIVER_LICENSE #define
Greg, > There is no need to #define the license of the driver, just put it in > the MODULE_LICENSE() line directly as a text string. > > This allows tools that check that the module license matches the source > code license to work properly, as there is no need to unwind the > unneeded dereference, especially when the string is defined in a .h file > far away from the .c file it is used in. Applied to 4.16/scsi-queue, thanks! -- Martin K. Petersen Oracle Linux Engineering
Re: [PATCH 3/3] scsi: 3w-9xxx: rework lock timeouts
Arnd, > The TW_IOCTL_GET_LOCK ioctl uses do_gettimeofday() to check whether a > lock has expired. This can misbehave due to a concurrent > settimeofday() call, as it is based on 'real' time, and it will > overflow in y2038 on 32-bit architectures, producing unexpected > results when used across the overflow time. > > This changes it to using monotonic time, using ktime_get() to simplify > the code. Applied to 4.16/scsi-queue, thanks! -- Martin K. Petersen Oracle Linux Engineering
Re: [PATCH 2/3] scsi: 3ware: use 64-bit times for FW time sync
Arnd, > The calculation of the number of seconds since Sunday 00:00:00 > overflows in 2106, meaning that we instead will return the seconds > since Wednesday 06:28:16 afterwards. > > Using 64-bit time stamps avoids this slight inconsistency, and the > deprecated do_gettimeofday(), replacing it with the simpler > ktime_get_real_seconds(). Applied to 4.16/scsi-queue. -- Martin K. Petersen Oracle Linux Engineering
Re: [PATCH 1/3] scsi: 3ware: fix 32-bit time calculations
Arnd,

> In twl_aen_queue_event/twa_aen_queue_event, we use do_gettimeofday() to
> read the lower 32 bits of the current time in seconds, to pass them to
> the TW_IOCTL_GET_NEXT_EVENT ioctl or the 3ware_aen_read sysfs file.
>
> This will overflow on all architectures in the year 2106; there is not
> much we can do about that without breaking the ABI. User space has 90
> years to learn to deal with it, so that's probably ok.
>
> I'm changing it to use ktime_get_real_seconds(), with a comment to
> document what happens and when.

Applied to 4.16/scsi-queue.

--
Martin K. Petersen	Oracle Linux Engineering
Re: [PATCH 0/7] scsi: bfa: do_gettimeofday removal
Arnd,

> The bfa driver is one of the main users of do_gettimeofday(), a function
> that I'm trying to remove as part of the y2038 cleanup.
>
> The timestamps are all used in slightly different ways, so this has
> turned into a rather long series for doing something that should be
> simple.
>
> The last patch in the series ("scsi: bfa: use 64-bit times in
> bfa_aen_entry_s ABI") is one that needs to be reviewed very carefully,
> and it can be skipped if the maintainers prefer to leave the 32-bit ABI
> unchanged; the rest are hopefully fairly straightforward.

Applied to 4.16/scsi-queue, thanks! Will drop #7 if something breaks.

--
Martin K. Petersen	Oracle Linux Engineering
Re: [PATCH V8 6/7] sd_zbc: Initialize device request queue zoned data
Damien,

> Initialize the seq_zones_bitmap, seq_zones_wlock and nr_zones fields of
> the disk request queue on disk revalidate. As the seq_zones_bitmap and
> seq_zones_wlock allocations are identical, introduce the helper
> sd_zbc_alloc_zone_bitmap(). Using this helper, reallocate the bitmaps
> whenever the disk capacity (number of zones) changes.

Reviewed-by: Martin K. Petersen

--
Martin K. Petersen	Oracle Linux Engineering
Re: [PATCH] scsi: ppa: mark expected switch fall-throughs
Gustavo A., > In preparation to enabling -Wimplicit-fallthrough, mark switch cases > where we are expecting to fall through. Applied to 4.16/scsi-queue. -- Martin K. Petersen Oracle Linux Engineering
Re: [PATCH] fnic: use 64-bit timestamps
> struct timespec is deprecated since it overflows in 2038 on 32-bit > architectures, so we should use timespec64 consistently. > > I'm slightly adapting the format strings here, to make sure we print > the nanoseconds with the correct number of leading zeroes. Satish: Please review/test. Thank you! -- Martin K. Petersen Oracle Linux Engineering
Re: [PATCH] scsi: hpsa: remove an unecessary NULL check
> device->scsi3addr[] is an array, not a pointer, so it can't be NULL. > I've removed the check. Microsemi folks, please review. Thanks! -- Martin K. Petersen Oracle Linux Engineering
Re: [PATCH] scsi: bnx2i: bnx2i_hwi: use swap macro in bnx2i_send_iscsi_nopout
Gustavo A., > Make use of the swap macro and remove unnecessary variable tmp. > This makes the code easier to read and maintain. Applied to 4.16/scsi-queue. Thanks! -- Martin K. Petersen Oracle Linux Engineering
[PATCH] scsi: st.c: fix kernel-doc mismatch
From: Randy Dunlap

Fix kernel-doc function name and comments in st.c::read_ns_show(): change
"us" to "ns" to match the function name.

Signed-off-by: Randy Dunlap
---
 drivers/scsi/st.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- lnx-414.orig/drivers/scsi/st.c
+++ lnx-414/drivers/scsi/st.c
@@ -4712,7 +4712,7 @@ static ssize_t read_byte_cnt_show(struct
 static DEVICE_ATTR_RO(read_byte_cnt);
 
 /**
- * read_us_show - return read us - overall time spent waiting on reads in ns.
+ * read_ns_show - return read ns - overall time spent waiting on reads in ns.
  * @dev: struct device
  * @attr: attribute structure
  * @buf: buffer to return formatted data in
[PATCH] target-core: don't use "const char*" for a buffer that is written to
From: Rasmus Villemoes

iscsi_parse_pr_out_transport_id launders the const away via a call to
strstr(), and then modifies the buffer (writing a nul byte) through the
return value. It's cleaner to be honest and simply declare the parameter as
"char *", fixing up the call chain, and allowing us to drop the cast in the
return statement. Amusingly, the two current callers found it necessary to
cast a non-const pointer to a const.

Signed-off-by: Rasmus Villemoes
---
 drivers/target/target_core_fabric_lib.c | 6 +++---
 drivers/target/target_core_internal.h   | 2 +-
 drivers/target/target_core_pr.c         | 4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/target/target_core_fabric_lib.c b/drivers/target/target_core_fabric_lib.c
index 508da345b73f..71a80257a052 100644
--- a/drivers/target/target_core_fabric_lib.c
+++ b/drivers/target/target_core_fabric_lib.c
@@ -273,7 +273,7 @@ static int iscsi_get_pr_transport_id_len(
 static char *iscsi_parse_pr_out_transport_id(
 	struct se_portal_group *se_tpg,
-	const char *buf,
+	char *buf,
 	u32 *out_tid_len,
 	char **port_nexus_ptr)
 {
@@ -356,7 +356,7 @@ static char *iscsi_parse_pr_out_transport_id(
 		}
 	}
 
-	return (char *)&buf[4];
+	return &buf[4];
 }
 
 int target_get_pr_transport_id_len(struct se_node_acl *nacl,
@@ -405,7 +405,7 @@ int target_get_pr_transport_id(struct se_node_acl *nacl,
 }
 
 const char *target_parse_pr_out_transport_id(struct se_portal_group *tpg,
-		const char *buf, u32 *out_tid_len, char **port_nexus_ptr)
+		char *buf, u32 *out_tid_len, char **port_nexus_ptr)
 {
 	u32 offset;
 
diff --git a/drivers/target/target_core_internal.h b/drivers/target/target_core_internal.h
index 18e3eb16e756..cada158cf1c2 100644
--- a/drivers/target/target_core_internal.h
+++ b/drivers/target/target_core_internal.h
@@ -101,7 +101,7 @@ int target_get_pr_transport_id(struct se_node_acl *nacl,
 	struct t10_pr_registration *pr_reg, int *format_code,
 	unsigned char *buf);
 const char *target_parse_pr_out_transport_id(struct se_portal_group *tpg,
-	const char *buf, u32 *out_tid_len, char **port_nexus_ptr);
+	char *buf, u32 *out_tid_len, char **port_nexus_ptr);
 
 /* target_core_hba.c */
 struct se_hba *core_alloc_hba(const char *, u32, u32);
diff --git a/drivers/target/target_core_pr.c b/drivers/target/target_core_pr.c
index dd2cd8048582..09941d1ae6c1 100644
--- a/drivers/target/target_core_pr.c
+++ b/drivers/target/target_core_pr.c
@@ -1597,7 +1597,7 @@ core_scsi3_decode_spec_i_port(
 		dest_rtpi = tmp_lun->lun_rtpi;
 
 		i_str = target_parse_pr_out_transport_id(tmp_tpg,
-				(const char *)ptr, _len, _ptr);
+				ptr, _len, _ptr);
 		if (!i_str)
 			continue;
@@ -3285,7 +3285,7 @@ core_scsi3_emulate_pro_register_and_move(struct se_cmd *cmd, u64 res_key,
 		goto out;
 	}
 	initiator_str = target_parse_pr_out_transport_id(dest_se_tpg,
-			(const char *)&buf[24], _tid_len, _ptr);
+			&buf[24], _tid_len, _ptr);
 	if (!initiator_str) {
 		pr_err("SPC-3 PR REGISTER_AND_MOVE: Unable to locate"
 			" initiator_str from Transport ID\n");
-- 
2.11.0
[PATCH v3 00/17] lpfc updates for 11.4.0.5
This patch set provides a number of bug fixes and additions to the driver.

The patches were cut against Martin's 4.15/scsi-queue tree. There are no
outside dependencies and they are expected to be pulled via Martin's tree.

v2: Rework patch 1 per review
    Add signed-off-by's on other patches
v3: Rework patches 11 and 13 per review
    Add signed-off-by's on other patches

James Smart (17):
  lpfc: FLOGI failures are reported when connected to a private loop.
  lpfc: Expand WQE capability of every NVME hardware queue
  lpfc: Handle XRI_ABORTED_CQE in soft IRQ
  lpfc: Fix crash after bad bar setup on driver attachment
  lpfc: Fix NVME LS abort_xri
  lpfc: Raise maximum NVME sg list size for 256 elements
  lpfc: Driver fails to detect direct attach storage array
  lpfc: Fix display for debugfs queInfo
  lpfc: Adjust default value of lpfc_nvmet_mrq
  lpfc: Fix ndlp ref count for pt2pt mode issue RSCN
  lpfc: Linux LPFC driver does not process all RSCNs
  lpfc: correct port registrations with nvme_fc
  lpfc: Correct driver deregistrations with host nvme transport
  lpfc: Fix crash during driver unload with running nvme traffic
  lpfc: Fix driver handling of nvme resources during unload
  lpfc: small sg cnt cleanup
  lpfc: update driver version to 11.4.0.5

 drivers/scsi/lpfc/lpfc.h         |   4 +-
 drivers/scsi/lpfc/lpfc_attr.c    |  13 ++-
 drivers/scsi/lpfc/lpfc_crtn.h    |   2 +
 drivers/scsi/lpfc/lpfc_ct.c      |  19 +++
 drivers/scsi/lpfc/lpfc_debugfs.c |  16 +--
 drivers/scsi/lpfc/lpfc_disc.h    |   2 +
 drivers/scsi/lpfc/lpfc_els.c     |  71 ++--
 drivers/scsi/lpfc/lpfc_hbadisc.c |  29 +++--
 drivers/scsi/lpfc/lpfc_hw4.h     |   6 +-
 drivers/scsi/lpfc/lpfc_init.c    | 243 +--
 drivers/scsi/lpfc/lpfc_nvme.c    | 230 +---
 drivers/scsi/lpfc/lpfc_nvme.h    |   5 +-
 drivers/scsi/lpfc/lpfc_nvmet.c   |  13 ++-
 drivers/scsi/lpfc/lpfc_nvmet.h   |   4 +
 drivers/scsi/lpfc/lpfc_sli.c     | 169 ++-
 drivers/scsi/lpfc/lpfc_sli4.h    |  10 +-
 drivers/scsi/lpfc/lpfc_version.h |   2 +-
 17 files changed, 587 insertions(+), 251 deletions(-)

-- 
2.13.1
[PATCH v3 01/17] lpfc: FLOGI failures are reported when connected to a private loop.
When the HBA is connected to a private loop, the driver reports FLOGI
loop-open failure as a functional error. This is an expected condition, so
mark loop-open failure as a warning instead of an error.

Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
Reviewed-by: Hannes Reinecke
---
 drivers/scsi/lpfc/lpfc_els.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
index b14f7c5653cd..c81cdc637e64 100644
--- a/drivers/scsi/lpfc/lpfc_els.c
+++ b/drivers/scsi/lpfc/lpfc_els.c
@@ -1030,30 +1030,31 @@ lpfc_cmpl_els_flogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
 
 stop_rr_fcf_flogi:
 	/* FLOGI failure */
-	lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
-			"2858 FLOGI failure Status:x%x/x%x TMO:x%x "
-			"Data x%x x%x\n",
-			irsp->ulpStatus, irsp->un.ulpWord[4],
-			irsp->ulpTimeout, phba->hba_flag,
-			phba->fcf.fcf_flag);
+	if (!(irsp->ulpStatus == IOSTAT_LOCAL_REJECT &&
+	      ((irsp->un.ulpWord[4] & IOERR_PARAM_MASK) ==
+	       IOERR_LOOP_OPEN_FAILURE)))
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+				"2858 FLOGI failure Status:x%x/x%x "
+				"TMO:x%x Data x%x x%x\n",
+				irsp->ulpStatus, irsp->un.ulpWord[4],
+				irsp->ulpTimeout, phba->hba_flag,
+				phba->fcf.fcf_flag);
 
 	/* Check for retry */
 	if (lpfc_els_retry(phba, cmdiocb, rspiocb))
 		goto out;
 
-	/* FLOGI failure */
-	lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
-			 "0100 FLOGI failure Status:x%x/x%x TMO:x%x\n",
-			 irsp->ulpStatus, irsp->un.ulpWord[4],
-			 irsp->ulpTimeout);
-
-	/* If this is not a loop open failure, bail out */
 	if (!(irsp->ulpStatus == IOSTAT_LOCAL_REJECT &&
 	      ((irsp->un.ulpWord[4] & IOERR_PARAM_MASK) ==
 	       IOERR_LOOP_OPEN_FAILURE)))
 		goto flogifail;
 
+	lpfc_printf_vlog(vport, KERN_WARNING, LOG_ELS,
+			 "0150 FLOGI failure Status:x%x/x%x TMO:x%x\n",
+			 irsp->ulpStatus, irsp->un.ulpWord[4],
+			 irsp->ulpTimeout);
+
 	/* FLOGI failed, so there is no fabric */
 	spin_lock_irq(shost->host_lock);
 	vport->fc_flag &= ~(FC_FABRIC | FC_PUBLIC_LOOP);
-- 
2.13.1
[PATCH v3 04/17] lpfc: Fix crash after bad bar setup on driver attachment
In test cases where an instance of the driver is detached and reattached,
the driver will crash on reattachment. There is a compound if statement
that will skip over the bar setup if the pci_resource_start call is not
successful. The driver erroneously returns success for its bar setup in
this scenario even though the bars aren't properly configured.

Rework the offending code segment for proper initialization steps. If the
pci_resource_start call fails, -ENOMEM is now returned.

Sample stack:

rport-5:0-10: blocked FC remote port time out: removing rport
BUG: unable to handle kernel NULL pointer dereference at (null)
... lpfc_sli4_wait_bmbx_ready+0x32/0x70 [lpfc]
...
RIP: 0010: ... lpfc_sli4_wait_bmbx_ready+0x32/0x70 [lpfc]
Call Trace:
 ... lpfc_sli4_post_sync_mbox+0x106/0x4d0 [lpfc]
 ... ? __alloc_pages_nodemask+0x176/0x420
 ... ? __kmalloc+0x2e/0x230
 ... lpfc_sli_issue_mbox_s4+0x533/0x720 [lpfc]
 ... ? mempool_alloc+0x69/0x170
 ... ? dma_generic_alloc_coherent+0x8f/0x140
 ... lpfc_sli_issue_mbox+0xf/0x20 [lpfc]
 ... lpfc_sli4_driver_resource_setup+0xa6f/0x1130 [lpfc]
 ... ? lpfc_pci_probe_one+0x23e/0x16f0 [lpfc]
 ... lpfc_pci_probe_one+0x445/0x16f0 [lpfc]
 ... local_pci_probe+0x45/0xa0
 ... work_for_cpu_fn+0x14/0x20
 ... process_one_work+0x17a/0x440

Cc: # 4.12+
Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
Reviewed-by: Hannes Reinecke
---
 drivers/scsi/lpfc/lpfc_init.c | 84 ++-
 1 file changed, 51 insertions(+), 33 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
index 745aff753396..fc9f91327724 100644
--- a/drivers/scsi/lpfc/lpfc_init.c
+++ b/drivers/scsi/lpfc/lpfc_init.c
@@ -9437,44 +9437,62 @@ lpfc_sli4_pci_mem_setup(struct lpfc_hba *phba)
 		lpfc_sli4_bar0_register_memmap(phba, if_type);
 	}
 
-	if ((if_type == LPFC_SLI_INTF_IF_TYPE_0) &&
-	    (pci_resource_start(pdev, PCI_64BIT_BAR2))) {
-		/*
-		 * Map SLI4 if type 0 HBA Control Register base to a kernel
-		 * virtual address and setup the registers.
-		 */
-		phba->pci_bar1_map = pci_resource_start(pdev, PCI_64BIT_BAR2);
-		bar1map_len = pci_resource_len(pdev, PCI_64BIT_BAR2);
-		phba->sli4_hba.ctrl_regs_memmap_p =
-				ioremap(phba->pci_bar1_map, bar1map_len);
-		if (!phba->sli4_hba.ctrl_regs_memmap_p) {
-			dev_printk(KERN_ERR, &pdev->dev,
-			   "ioremap failed for SLI4 HBA control registers.\n");
+	if (if_type == LPFC_SLI_INTF_IF_TYPE_0) {
+		if (pci_resource_start(pdev, PCI_64BIT_BAR2)) {
+			/*
+			 * Map SLI4 if type 0 HBA Control Register base to a
+			 * kernel virtual address and setup the registers.
+			 */
+			phba->pci_bar1_map = pci_resource_start(pdev,
+								PCI_64BIT_BAR2);
+			bar1map_len = pci_resource_len(pdev, PCI_64BIT_BAR2);
+			phba->sli4_hba.ctrl_regs_memmap_p =
+					ioremap(phba->pci_bar1_map,
+						bar1map_len);
+			if (!phba->sli4_hba.ctrl_regs_memmap_p) {
+				dev_err(&pdev->dev,
+					"ioremap failed for SLI4 HBA "
+					"control registers.\n");
+				error = -ENOMEM;
+				goto out_iounmap_conf;
+			}
+			phba->pci_bar2_memmap_p =
+					phba->sli4_hba.ctrl_regs_memmap_p;
+			lpfc_sli4_bar1_register_memmap(phba);
+		} else {
+			error = -ENOMEM;
 			goto out_iounmap_conf;
 		}
-		phba->pci_bar2_memmap_p = phba->sli4_hba.ctrl_regs_memmap_p;
-		lpfc_sli4_bar1_register_memmap(phba);
 	}
 
-	if ((if_type == LPFC_SLI_INTF_IF_TYPE_0) &&
-	    (pci_resource_start(pdev, PCI_64BIT_BAR4))) {
-		/*
-		 * Map SLI4 if type 0 HBA Doorbell Register base to a kernel
-		 * virtual address and setup the registers.
-		 */
-		phba->pci_bar2_map = pci_resource_start(pdev, PCI_64BIT_BAR4);
-		bar2map_len = pci_resource_len(pdev, PCI_64BIT_BAR4);
-		phba->sli4_hba.drbl_regs_memmap_p =
-				ioremap(phba->pci_bar2_map, bar2map_len);
-		if (!phba->sli4_hba.drbl_regs_memmap_p) {
-			dev_printk(KERN_ERR, &pdev->dev,
-
[PATCH v3 09/17] lpfc: Adjust default value of lpfc_nvmet_mrq
The current default for async hw receive queues is 1, which presents issues
under heavy load, as the number of queues influences the available async
receive buffer limits. Raise the default to either the current hw limit
(16) or the number of hw queues configured (the io channel value).

Revise the attribute definition for mrq to better reflect what we do for hw
queues. E.g. 0 means default to optimal (# of cpus), non-zero specifies a
specific limit. Before this change, mrq=0 meant target mode was disabled.
As 0 now has a different meaning, rework the if tests to use the better
nvmet_support check.

Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
Reviewed-by: Hannes Reinecke
---
 drivers/scsi/lpfc/lpfc_attr.c    | 11 --
 drivers/scsi/lpfc/lpfc_debugfs.c |  2 +-
 drivers/scsi/lpfc/lpfc_init.c    | 47
 drivers/scsi/lpfc/lpfc_nvmet.h   |  4
 4 files changed, 42 insertions(+), 22 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
index 4dcd129ca901..598e07f43912 100644
--- a/drivers/scsi/lpfc/lpfc_attr.c
+++ b/drivers/scsi/lpfc/lpfc_attr.c
@@ -3361,12 +3361,13 @@ LPFC_ATTR_R(suppress_rsp, 1, 0, 1,
 
 /*
  * lpfc_nvmet_mrq: Specify number of RQ pairs for processing NVMET cmds
+ * lpfc_nvmet_mrq = 0  driver will calculate optimal number of RQ pairs
  * lpfc_nvmet_mrq = 1  use a single RQ pair
  * lpfc_nvmet_mrq >= 2  use specified RQ pairs for MRQ
  *
  */
 LPFC_ATTR_R(nvmet_mrq,
-	    1, 1, 16,
+	    LPFC_NVMET_MRQ_AUTO, LPFC_NVMET_MRQ_AUTO, LPFC_NVMET_MRQ_MAX,
 	    "Specify number of RQ pairs for processing NVMET cmds");
 
 /*
@@ -6357,6 +6358,9 @@ lpfc_nvme_mod_param_dep(struct lpfc_hba *phba)
 			phba->cfg_nvmet_fb_size = LPFC_NVMET_FB_SZ_MAX;
 		}
 
+		if (!phba->cfg_nvmet_mrq)
+			phba->cfg_nvmet_mrq = phba->cfg_nvme_io_channel;
+
 		/* Adjust lpfc_nvmet_mrq to avoid running out of WQE slots */
 		if (phba->cfg_nvmet_mrq > phba->cfg_nvme_io_channel) {
 			phba->cfg_nvmet_mrq = phba->cfg_nvme_io_channel;
@@ -6364,10 +6368,13 @@ lpfc_nvme_mod_param_dep(struct lpfc_hba *phba)
 				"6018 Adjust lpfc_nvmet_mrq to %d\n",
 				phba->cfg_nvmet_mrq);
 		}
+		if (phba->cfg_nvmet_mrq > LPFC_NVMET_MRQ_MAX)
+			phba->cfg_nvmet_mrq = LPFC_NVMET_MRQ_MAX;
+
 	} else {
 		/* Not NVME Target mode.  Turn off Target parameters. */
 		phba->nvmet_support = 0;
-		phba->cfg_nvmet_mrq = 0;
+		phba->cfg_nvmet_mrq = LPFC_NVMET_MRQ_OFF;
 		phba->cfg_nvmet_fb_size = 0;
 	}
 
diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
index 4df5a21bd93b..b7f57492aefc 100644
--- a/drivers/scsi/lpfc/lpfc_debugfs.c
+++ b/drivers/scsi/lpfc/lpfc_debugfs.c
@@ -3213,7 +3213,7 @@ lpfc_idiag_cqs_for_eq(struct lpfc_hba *phba, char *pbuffer,
 		return 1;
 	}
 
-	if (eqidx < phba->cfg_nvmet_mrq) {
+	if ((eqidx < phba->cfg_nvmet_mrq) && phba->nvmet_support) {
 		/* NVMET CQset */
 		qp = phba->sli4_hba.nvmet_cqset[eqidx];
 		*len = __lpfc_idiag_print_cq(qp, "NVMET CQset", pbuffer, *len);
 
diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
index fc9f91327724..7a06f23a3baf 100644
--- a/drivers/scsi/lpfc/lpfc_init.c
+++ b/drivers/scsi/lpfc/lpfc_init.c
@@ -7939,8 +7939,12 @@ lpfc_sli4_queue_verify(struct lpfc_hba *phba)
 		phba->cfg_fcp_io_channel = io_channel;
 	if (phba->cfg_nvme_io_channel > io_channel)
 		phba->cfg_nvme_io_channel = io_channel;
-	if (phba->cfg_nvme_io_channel < phba->cfg_nvmet_mrq)
-		phba->cfg_nvmet_mrq = phba->cfg_nvme_io_channel;
+	if (phba->nvmet_support) {
+		if (phba->cfg_nvme_io_channel < phba->cfg_nvmet_mrq)
+			phba->cfg_nvmet_mrq = phba->cfg_nvme_io_channel;
+	}
+	if (phba->cfg_nvmet_mrq > LPFC_NVMET_MRQ_MAX)
+		phba->cfg_nvmet_mrq = LPFC_NVMET_MRQ_MAX;
 
 	lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 			"2574 IO channels: irqs %d fcp %d nvme %d MRQ: %d\n",
@@ -8454,13 +8458,15 @@ lpfc_sli4_queue_destroy(struct lpfc_hba *phba)
 	/* Release NVME CQ mapping array */
 	lpfc_sli4_release_queue_map(&phba->sli4_hba.nvme_cq_map);
 
-	lpfc_sli4_release_queues(&phba->sli4_hba.nvmet_cqset,
-				 phba->cfg_nvmet_mrq);
+	if (phba->nvmet_support) {
+		lpfc_sli4_release_queues(&phba->sli4_hba.nvmet_cqset,
+					 phba->cfg_nvmet_mrq);
 
-	lpfc_sli4_release_queues(&phba->sli4_hba.nvmet_mrq_hdr,
-
[PATCH v3 08/17] lpfc: Fix display for debugfs queInfo
Display for lpfc/fnX/iDiag/queInfo isn't formatted correctly. Corrected the
format strings for the queue info debug messages.

Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
Reviewed-by: Hannes Reinecke
---
 drivers/scsi/lpfc/lpfc_debugfs.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
index 2bf5ad3b1512..4df5a21bd93b 100644
--- a/drivers/scsi/lpfc/lpfc_debugfs.c
+++ b/drivers/scsi/lpfc/lpfc_debugfs.c
@@ -3246,7 +3246,7 @@ __lpfc_idiag_print_eq(struct lpfc_queue *qp, char *eqtype,
 
 	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
 			"\n%s EQ info: EQ-STAT[max:x%x noE:x%x "
-			"bs:x%x proc:x%llx eqd %d]\n",
+			"cqe_proc:x%x eqe_proc:x%llx eqd %d]\n",
 			eqtype, qp->q_cnt_1, qp->q_cnt_2, qp->q_cnt_3,
 			(unsigned long long)qp->q_cnt_4, qp->q_mode);
 	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
@@ -3366,6 +3366,12 @@ lpfc_idiag_queinfo_read(struct file *file, char __user *buf, size_t nbytes,
 	if (len >= max_cnt)
 		goto too_big;
 
+	qp = phba->sli4_hba.hdr_rq;
+	len = __lpfc_idiag_print_rqpair(qp, phba->sli4_hba.dat_rq,
+			"ELS RQpair", pbuffer, len);
+	if (len >= max_cnt)
+		goto too_big;
+
 	/* Slow-path NVME LS response CQ */
 	qp = phba->sli4_hba.nvmels_cq;
 	len = __lpfc_idiag_print_cq(qp, "NVME LS",
@@ -3383,12 +3389,6 @@ lpfc_idiag_queinfo_read(struct file *file, char __user *buf, size_t nbytes,
 	if (len >= max_cnt)
 		goto too_big;
 
-	qp = phba->sli4_hba.hdr_rq;
-	len = __lpfc_idiag_print_rqpair(qp, phba->sli4_hba.dat_rq,
-			"RQpair", pbuffer, len);
-	if (len >= max_cnt)
-		goto too_big;
-
 	goto out;
 }
-- 
2.13.1
[PATCH v3 16/17] lpfc: small sg cnt cleanup
The logic for sg_seg_cnt is a bit convoluted. This patch tries to clean up
a couple of areas, especially around the +2 and +1 logic.

This patch:

- cleans up the lpfc_sg_seg_cnt attribute to specify a real minimum rather
  than making the minimum be whatever the default is.

- removes the hardcoding of +2 (for the number of elements we use in a sgl
  for cmd iu and rsp iu) and +1 (an additional entry to compensate for
  nvme's reduction of io size based on a possible partial page) logic in sg
  list initialization. In the case where the +1 logic is referenced in host
  and target io checks, use the values set in the transport template as
  that value was properly set.

There can certainly be more done in this area and it will be addressed in
a combined host/target driver effort.

Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
Reviewed-by: Hannes Reinecke
---
 drivers/scsi/lpfc/lpfc.h       |  1 +
 drivers/scsi/lpfc/lpfc_attr.c  |  2 +-
 drivers/scsi/lpfc/lpfc_init.c  | 19 ++-
 drivers/scsi/lpfc/lpfc_nvme.c  |  3 ++-
 drivers/scsi/lpfc/lpfc_nvmet.c |  2 +-
 5 files changed, 19 insertions(+), 8 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
index 46a89bdff8e4..dd2191c83052 100644
--- a/drivers/scsi/lpfc/lpfc.h
+++ b/drivers/scsi/lpfc/lpfc.h
@@ -55,6 +55,7 @@ struct lpfc_sli2_slim;
 #define LPFC_MAX_SG_SLI4_SEG_CNT_DIF 128 /* sg element count per scsi cmnd */
 #define LPFC_MAX_SG_SEG_CNT_DIF 512	/* sg element count per scsi cmnd  */
 #define LPFC_MAX_SG_SEG_CNT	4096	/* sg element count per scsi cmnd */
+#define LPFC_MIN_SG_SEG_CNT	32	/* sg element count per scsi cmnd */
 #define LPFC_MAX_SGL_SEG_CNT	512	/* SGL element count per scsi cmnd */
 #define LPFC_MAX_BPL_SEG_CNT	4096	/* BPL element count per scsi cmnd */
 #define LPFC_MAX_NVME_SEG_CNT	256	/* max SGL element cnt per NVME cmnd */
 
diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
index 598e07f43912..74d6fe984df4 100644
--- a/drivers/scsi/lpfc/lpfc_attr.c
+++ b/drivers/scsi/lpfc/lpfc_attr.c
@@ -5135,7 +5135,7 @@ LPFC_ATTR(delay_discovery, 0, 0, 1,
  * this parameter will be limited to 128 if BlockGuard is enabled under SLI4
  * and will be limited to 512 if BlockGuard is enabled under SLI3.
  */
-LPFC_ATTR_R(sg_seg_cnt, LPFC_DEFAULT_SG_SEG_CNT, LPFC_DEFAULT_SG_SEG_CNT,
+LPFC_ATTR_R(sg_seg_cnt, LPFC_MIN_SG_SEG_CNT, LPFC_DEFAULT_SG_SEG_CNT,
 	    LPFC_MAX_SG_SEG_CNT, "Max Scatter Gather Segment Count");
 
diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
index c466ceb43bc9..92dc865ca52c 100644
--- a/drivers/scsi/lpfc/lpfc_init.c
+++ b/drivers/scsi/lpfc/lpfc_init.c
@@ -5812,6 +5812,7 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
 	struct lpfc_mqe *mqe;
 	int longs;
 	int fof_vectors = 0;
+	int extra;
 	uint64_t wwn;
 
 	phba->sli4_hba.num_online_cpu = num_online_cpus();
@@ -5867,13 +5868,21 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
 	 */
 
 	/*
+	 * 1 for cmd, 1 for rsp, NVME adds an extra one
+	 * for boundary conditions in its max_sgl_segment template.
+	 */
+	extra = 2;
+	if (phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME)
+		extra++;
+
+	/*
 	 * It doesn't matter what family our adapter is in, we are
 	 * limited to 2 Pages, 512 SGEs, for our SGL.
 	 * There are going to be 2 reserved SGEs: 1 FCP cmnd + 1 FCP rsp
 	 */
 	max_buf_size = (2 * SLI4_PAGE_SIZE);
-	if (phba->cfg_sg_seg_cnt > LPFC_MAX_SGL_SEG_CNT - 2)
-		phba->cfg_sg_seg_cnt = LPFC_MAX_SGL_SEG_CNT - 2;
+	if (phba->cfg_sg_seg_cnt > LPFC_MAX_SGL_SEG_CNT - extra)
+		phba->cfg_sg_seg_cnt = LPFC_MAX_SGL_SEG_CNT - extra;
 
 	/*
 	 * Since lpfc_sg_seg_cnt is module param, the sg_dma_buf_size
@@ -5906,14 +5915,14 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
 		 */
 		phba->cfg_sg_dma_buf_size = sizeof(struct fcp_cmnd) +
 				sizeof(struct fcp_rsp) +
-				((phba->cfg_sg_seg_cnt + 2) *
+				((phba->cfg_sg_seg_cnt + extra) *
 				sizeof(struct sli4_sge));
 
 		/* Total SGEs for scsi_sg_list */
-		phba->cfg_total_seg_cnt = phba->cfg_sg_seg_cnt + 2;
+		phba->cfg_total_seg_cnt = phba->cfg_sg_seg_cnt + extra;
 
 		/*
-		 * NOTE: if (phba->cfg_sg_seg_cnt + 2) <= 256 we only
+		 * NOTE: if (phba->cfg_sg_seg_cnt + extra) <= 256 we only
 		 * need to post 1 page for the SGL.
 		 */
 	}
 
diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
index 50bbc61bfe5d..ce2186673dad 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.c
+++
[PATCH v3 12/17] lpfc: correct port registrations with nvme_fc
The driver currently registers any remote port that has NVME support. It should only be registering target ports. Register only target ports. Signed-off-by: Dick KennedySigned-off-by: James Smart Reviewed-by: Hannes Reinecke --- drivers/scsi/lpfc/lpfc_hbadisc.c | 20 drivers/scsi/lpfc/lpfc_nvme.c| 3 ++- 2 files changed, 14 insertions(+), 9 deletions(-) diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c index 31773e481264..9f4936911c4b 100644 --- a/drivers/scsi/lpfc/lpfc_hbadisc.c +++ b/drivers/scsi/lpfc/lpfc_hbadisc.c @@ -4176,12 +4176,14 @@ lpfc_nlp_state_cleanup(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, if (ndlp->nlp_fc4_type & NLP_FC4_NVME) { vport->phba->nport_event_cnt++; - if (vport->phba->nvmet_support == 0) - /* Start devloss */ - lpfc_nvme_unregister_port(vport, ndlp); - else + if (vport->phba->nvmet_support == 0) { + /* Start devloss if target. */ + if (ndlp->nlp_type & NLP_NVME_TARGET) + lpfc_nvme_unregister_port(vport, ndlp); + } else { /* NVMET has no upcall. */ lpfc_nlp_put(ndlp); + } } } @@ -4205,11 +4207,13 @@ lpfc_nlp_state_cleanup(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, ndlp->nlp_fc4_type & NLP_FC4_NVME) { if (vport->phba->nvmet_support == 0) { /* Register this rport with the transport. -* Initiators take the NDLP ref count in -* the register. +* Only NVME Target Rports are registered with +* the transport. */ - vport->phba->nport_event_cnt++; - lpfc_nvme_register_port(vport, ndlp); + if (ndlp->nlp_type & NLP_NVME_TARGET) { + vport->phba->nport_event_cnt++; + lpfc_nvme_register_port(vport, ndlp); + } } else { /* Just take an NDLP ref count since the * target does not register rports. diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c index db1ed426f7e6..d3ada630b427 100644 --- a/drivers/scsi/lpfc/lpfc_nvme.c +++ b/drivers/scsi/lpfc/lpfc_nvme.c @@ -2473,7 +2473,8 @@ lpfc_nvme_unregister_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) /* Sanity check ndlp type. 
Only call for NVME ports. Don't * clear any rport state until the transport calls back. */ - if (ndlp->nlp_type & (NLP_NVME_TARGET | NLP_NVME_INITIATOR)) { + + if (ndlp->nlp_type & NLP_NVME_TARGET) { init_completion(>rport_unreg_done); /* No concern about the role change on the nvme remoteport. -- 2.13.1
[PATCH v3 10/17] lpfc: Fix ndlp ref count for pt2pt mode issue RSCN
pt2pt ndlp ref count prematurely goes to 0. A reference was removed that should only be removed if connected to a switch, not if in point-to-point mode. Add a mode check before removing the reference. Signed-off-by: Dick Kennedy Signed-off-by: James Smart Reviewed-by: Hannes Reinecke --- drivers/scsi/lpfc/lpfc_els.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c index 532cd4b49c5d..911066c9612d 100644 --- a/drivers/scsi/lpfc/lpfc_els.c +++ b/drivers/scsi/lpfc/lpfc_els.c @@ -2956,8 +2956,8 @@ lpfc_issue_els_scr(struct lpfc_vport *vport, uint32_t nportid, uint8_t retry) /* This will cause the callback-function lpfc_cmpl_els_cmd to * trigger the release of node. */ - - lpfc_nlp_put(ndlp); + if (!(vport->fc_flag & FC_PT2PT)) + lpfc_nlp_put(ndlp); return 0; } -- 2.13.1
[PATCH v3 14/17] lpfc: Fix crash during driver unload with running nvme traffic
When the driver is unloading, the nvme transport could be in the process of submitting new requests, sending abort requests to terminate associations, or making LS-related requests. The driver's abort and request entry points are currently ignorant of the unloading state and start the requests even though the infrastructure needed to complete them is being torn down. Change the entry points for new requests to check whether the driver is unloading and, if so, reject the requests. Abort routines check unloading, and if so, noop the request. An abort is noop'd because the teardown paths are already aborting/terminating the io outstanding at the time the teardown initiated. Signed-off-by: Dick Kennedy Signed-off-by: James Smart Reviewed-by: Hannes Reinecke --- drivers/scsi/lpfc/lpfc_nvme.c | 14 ++ drivers/scsi/lpfc/lpfc_nvmet.c | 11 +++ 2 files changed, 25 insertions(+) diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c index 3aa3b889b4cf..9b231c88ca8b 100644 --- a/drivers/scsi/lpfc/lpfc_nvme.c +++ b/drivers/scsi/lpfc/lpfc_nvme.c @@ -423,6 +423,9 @@ lpfc_nvme_ls_req(struct nvme_fc_local_port *pnvme_lport, if (vport->load_flag & FC_UNLOADING) return -ENODEV; + if (vport->load_flag & FC_UNLOADING) + return -ENODEV; + ndlp = lpfc_findnode_did(vport, pnvme_rport->port_id); if (!ndlp || !NLP_CHK_NODE_ACT(ndlp)) { lpfc_printf_vlog(vport, KERN_ERR, LOG_NODE | LOG_NVME_IOERR, @@ -538,6 +541,9 @@ lpfc_nvme_ls_abort(struct nvme_fc_local_port *pnvme_lport, vport = lport->vport; phba = vport->phba; + if (vport->load_flag & FC_UNLOADING) + return; + ndlp = lpfc_findnode_did(vport, pnvme_rport->port_id); if (!ndlp) { lpfc_printf_vlog(vport, KERN_ERR, LOG_NVME_ABTS, @@ -1273,6 +1279,11 @@ lpfc_nvme_fcp_io_submit(struct nvme_fc_local_port *pnvme_lport, goto out_fail; } + if (vport->load_flag & FC_UNLOADING) { + ret = -ENODEV; + goto out_fail; + } + /* Validate pointers.
*/ if (!pnvme_lport || !pnvme_rport || !freqpriv) { lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_IOERR | LOG_NODE, @@ -1500,6 +1511,9 @@ lpfc_nvme_fcp_abort(struct nvme_fc_local_port *pnvme_lport, vport = lport->vport; phba = vport->phba; + if (vport->load_flag & FC_UNLOADING) + return; + /* Announce entry to new IO submit field. */ lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_ABTS, "6002 Abort Request to rport DID x%06x " diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c index 84cf1b9079f7..2b50aecc2722 100644 --- a/drivers/scsi/lpfc/lpfc_nvmet.c +++ b/drivers/scsi/lpfc/lpfc_nvmet.c @@ -635,6 +635,9 @@ lpfc_nvmet_xmt_ls_rsp(struct nvmet_fc_target_port *tgtport, if (phba->pport->load_flag & FC_UNLOADING) return -ENODEV; + if (phba->pport->load_flag & FC_UNLOADING) + return -ENODEV; + lpfc_printf_log(phba, KERN_INFO, LOG_NVME_DISC, "6023 NVMET LS rsp oxid x%x\n", ctxp->oxid); @@ -721,6 +724,11 @@ lpfc_nvmet_xmt_fcp_op(struct nvmet_fc_target_port *tgtport, goto aerr; } + if (phba->pport->load_flag & FC_UNLOADING) { + rc = -ENODEV; + goto aerr; + } + #ifdef CONFIG_SCSI_LPFC_DEBUG_FS if (ctxp->ts_cmd_nvme) { if (rsp->op == NVMET_FCOP_RSP) @@ -823,6 +831,9 @@ lpfc_nvmet_xmt_fcp_abort(struct nvmet_fc_target_port *tgtport, if (phba->pport->load_flag & FC_UNLOADING) return; + if (phba->pport->load_flag & FC_UNLOADING) + return; + lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS, "6103 NVMET Abort op: oxri x%x flg x%x ste %d\n", ctxp->oxid, ctxp->flag, ctxp->state); -- 2.13.1
[PATCH v3 11/17] lpfc: Linux LPFC driver does not process all RSCNs
During RSCN storms, the driver does not rediscover some targets. The driver marks some RSCN as to be handled after the ones it's working on. The driver missed processing some deferred RSCN. Move where the driver checks for deferred RSCNs and initiate deferred RSCN handling if the flag was set. Also revise nport state within the RSCN confirm routine. Add some state data to a possible debug print to aid future debugging. Signed-off-by: Dick KennedySigned-off-by: James Smart --- v3: per review, fix weird data element that was a combination of multiple fields. The log print now has each field. --- drivers/scsi/lpfc/lpfc_ct.c | 19 +++ drivers/scsi/lpfc/lpfc_els.c | 4 +--- drivers/scsi/lpfc/lpfc_hbadisc.c | 7 +-- 3 files changed, 25 insertions(+), 5 deletions(-) diff --git a/drivers/scsi/lpfc/lpfc_ct.c b/drivers/scsi/lpfc/lpfc_ct.c index 33417681f5d4..0990f81524cd 100644 --- a/drivers/scsi/lpfc/lpfc_ct.c +++ b/drivers/scsi/lpfc/lpfc_ct.c @@ -685,6 +685,25 @@ lpfc_cmpl_ct_cmd_gid_ft(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, lpfc_els_flush_rscn(vport); goto out; } + + spin_lock_irq(shost->host_lock); + if (vport->fc_flag & FC_RSCN_DEFERRED) { + vport->fc_flag &= ~FC_RSCN_DEFERRED; + spin_unlock_irq(shost->host_lock); + + /* +* Skip processing the NS response +* Re-issue the NS cmd +*/ + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, +"0151 Process Deferred RSCN Data: x%x x%x\n", +vport->fc_flag, vport->fc_rscn_id_cnt); + lpfc_els_handle_rscn(vport); + + goto out; + } + spin_unlock_irq(shost->host_lock); + if (irsp->ulpStatus) { /* Check for retry */ if (vport->fc_ns_retry < LPFC_MAX_NS_RETRY) { diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c index 911066c9612d..71ec580f46a3 100644 --- a/drivers/scsi/lpfc/lpfc_els.c +++ b/drivers/scsi/lpfc/lpfc_els.c @@ -1675,6 +1675,7 @@ lpfc_plogi_confirm_nport(struct lpfc_hba *phba, uint32_t *prsp, /* Two ndlps cannot have the same did on the nodelist */ ndlp->nlp_DID = keepDID; + lpfc_nlp_set_state(vport, 
ndlp, keep_nlp_state); if (phba->sli_rev == LPFC_SLI_REV4 && active_rrqs_xri_bitmap) memcpy(ndlp->active_rrqs_xri_bitmap, @@ -6177,9 +6178,6 @@ lpfc_els_rcv_rscn(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb, lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL); /* send RECOVERY event for ALL nodes that match RSCN payload */ lpfc_rscn_recovery_check(vport); - spin_lock_irq(shost->host_lock); - vport->fc_flag &= ~FC_RSCN_DEFERRED; - spin_unlock_irq(shost->host_lock); return 0; } lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL, diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c index 3468257bda02..31773e481264 100644 --- a/drivers/scsi/lpfc/lpfc_hbadisc.c +++ b/drivers/scsi/lpfc/lpfc_hbadisc.c @@ -5837,9 +5837,12 @@ __lpfc_find_node(struct lpfc_vport *vport, node_filter filter, void *param) if (filter(ndlp, param)) { lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE, "3185 FIND node filter %p DID " -"Data: x%p x%x x%x\n", +"ndlp %p did x%x flg x%x st x%x " +"xri x%x type x%x rpi x%x\n", filter, ndlp, ndlp->nlp_DID, -ndlp->nlp_flag); +ndlp->nlp_flag, ndlp->nlp_state, +ndlp->nlp_xri, ndlp->nlp_type, +ndlp->nlp_rpi); return ndlp; } } -- 2.13.1
[PATCH v3 06/17] lpfc: Raise maximum NVME sg list size for 256 elements
Raise the maximum NVME sg list size allowed to 256 elements. Signed-off-by: Dick Kennedy Signed-off-by: James Smart Reviewed-by: Hannes Reinecke --- drivers/scsi/lpfc/lpfc.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h index 7219b6ce5dc7..46a89bdff8e4 100644 --- a/drivers/scsi/lpfc/lpfc.h +++ b/drivers/scsi/lpfc/lpfc.h @@ -57,7 +57,7 @@ struct lpfc_sli2_slim; #define LPFC_MAX_SG_SEG_CNT 4096 /* sg element count per scsi cmnd */ #define LPFC_MAX_SGL_SEG_CNT 512 /* SGL element count per scsi cmnd */ #define LPFC_MAX_BPL_SEG_CNT 4096 /* BPL element count per scsi cmnd */ -#define LPFC_MAX_NVME_SEG_CNT 128 /* max SGL element cnt per NVME cmnd */ +#define LPFC_MAX_NVME_SEG_CNT 256 /* max SGL element cnt per NVME cmnd */ #define LPFC_MAX_SGE_SIZE 0x8000 /* Maximum data allowed in a SGE */ #define LPFC_IOCB_LIST_CNT 2250 /* list of IOCBs for fast-path usage. */ -- 2.13.1
[PATCH v3 05/17] lpfc: Fix NVME LS abort_xri
Performing an LS abort results in the following message being seen: 0603 Invalid CQ subtype 6: 0300 2202 0016 d005 and the associated exchange is not properly freed. The code did not recognize the exchange type that was aborted, thus it was not properly handled. Correct by adding the NVME LS ELS type to the exchange types that are recognized. Signed-off-by: Dick Kennedy Signed-off-by: James Smart Reviewed-by: Hannes Reinecke --- drivers/scsi/lpfc/lpfc_sli.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c index 4b76db19ef73..c1c7df607604 100644 --- a/drivers/scsi/lpfc/lpfc_sli.c +++ b/drivers/scsi/lpfc/lpfc_sli.c @@ -12814,6 +12814,7 @@ lpfc_sli4_sp_handle_abort_xri_wcqe(struct lpfc_hba *phba, spin_unlock_irqrestore(&phba->hbalock, iflags); workposted = true; break; + case LPFC_NVME_LS: /* NVME LS uses ELS resources */ case LPFC_ELS: cq_event = lpfc_cq_event_setup( phba, wcqe, sizeof(struct sli4_wcqe_xri_aborted)); -- 2.13.1
[PATCH v3 03/17] lpfc: Handle XRI_ABORTED_CQE in soft IRQ
XRI_ABORTED_CQE completions were not being handled in the fast path. They were being queued and deferred to the lpfc worker thread for processing. This is an artifact of the driver design prior to moving queue processing out of the isr and into a workq element. Now that queue processing is already in a deferred context, remove this artifact and process them directly. Signed-off-by: Dick KennedySigned-off-by: James Smart Reviewed-by: Hannes Reinecke --- drivers/scsi/lpfc/lpfc.h | 1 - drivers/scsi/lpfc/lpfc_hbadisc.c | 2 - drivers/scsi/lpfc/lpfc_init.c| 8 drivers/scsi/lpfc/lpfc_sli.c | 97 +++- drivers/scsi/lpfc/lpfc_sli4.h| 2 - 5 files changed, 35 insertions(+), 75 deletions(-) diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h index 231302273257..7219b6ce5dc7 100644 --- a/drivers/scsi/lpfc/lpfc.h +++ b/drivers/scsi/lpfc/lpfc.h @@ -705,7 +705,6 @@ struct lpfc_hba { * capability */ #define HBA_NVME_IOQ_FLUSH 0x8 /* NVME IO queues flushed. */ -#define NVME_XRI_ABORT_EVENT 0x10 uint32_t fcp_ring_in_use; /* When polling test if intr-hndlr active*/ struct lpfc_dmabuf slim2p; diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c index d9a03beb76a4..3468257bda02 100644 --- a/drivers/scsi/lpfc/lpfc_hbadisc.c +++ b/drivers/scsi/lpfc/lpfc_hbadisc.c @@ -640,8 +640,6 @@ lpfc_work_done(struct lpfc_hba *phba) lpfc_handle_rrq_active(phba); if (phba->hba_flag & FCP_XRI_ABORT_EVENT) lpfc_sli4_fcp_xri_abort_event_proc(phba); - if (phba->hba_flag & NVME_XRI_ABORT_EVENT) - lpfc_sli4_nvme_xri_abort_event_proc(phba); if (phba->hba_flag & ELS_XRI_ABORT_EVENT) lpfc_sli4_els_xri_abort_event_proc(phba); if (phba->hba_flag & ASYNC_EVENT) diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c index 52c039e9f4a4..745aff753396 100644 --- a/drivers/scsi/lpfc/lpfc_init.c +++ b/drivers/scsi/lpfc/lpfc_init.c @@ -5954,9 +5954,6 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba) INIT_LIST_HEAD(>sli4_hba.lpfc_abts_nvme_buf_list); 
INIT_LIST_HEAD(>sli4_hba.lpfc_abts_nvmet_ctx_list); INIT_LIST_HEAD(>sli4_hba.lpfc_nvmet_io_wait_list); - - /* Fast-path XRI aborted CQ Event work queue list */ - INIT_LIST_HEAD(>sli4_hba.sp_nvme_xri_aborted_work_queue); } /* This abort list used by worker thread */ @@ -9199,11 +9196,6 @@ lpfc_sli4_cq_event_release_all(struct lpfc_hba *phba) /* Pending ELS XRI abort events */ list_splice_init(>sli4_hba.sp_els_xri_aborted_work_queue, ); - if (phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME) { - /* Pending NVME XRI abort events */ - list_splice_init(>sli4_hba.sp_nvme_xri_aborted_work_queue, -); - } /* Pending asynnc events */ list_splice_init(>sli4_hba.sp_asynce_work_queue, ); diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c index 2ad444ce7529..4b76db19ef73 100644 --- a/drivers/scsi/lpfc/lpfc_sli.c +++ b/drivers/scsi/lpfc/lpfc_sli.c @@ -12318,41 +12318,6 @@ void lpfc_sli4_fcp_xri_abort_event_proc(struct lpfc_hba *phba) } /** - * lpfc_sli4_nvme_xri_abort_event_proc - Process nvme xri abort event - * @phba: pointer to lpfc hba data structure. - * - * This routine is invoked by the worker thread to process all the pending - * SLI4 NVME abort XRI events. 
- **/ -void lpfc_sli4_nvme_xri_abort_event_proc(struct lpfc_hba *phba) -{ - struct lpfc_cq_event *cq_event; - - /* First, declare the fcp xri abort event has been handled */ - spin_lock_irq(>hbalock); - phba->hba_flag &= ~NVME_XRI_ABORT_EVENT; - spin_unlock_irq(>hbalock); - /* Now, handle all the fcp xri abort events */ - while (!list_empty(>sli4_hba.sp_nvme_xri_aborted_work_queue)) { - /* Get the first event from the head of the event queue */ - spin_lock_irq(>hbalock); - list_remove_head(>sli4_hba.sp_nvme_xri_aborted_work_queue, -cq_event, struct lpfc_cq_event, list); - spin_unlock_irq(>hbalock); - /* Notify aborted XRI for NVME work queue */ - if (phba->nvmet_support) { - lpfc_sli4_nvmet_xri_aborted(phba, - _event->cqe.wcqe_axri); - } else { - lpfc_sli4_nvme_xri_aborted(phba, - _event->cqe.wcqe_axri); - }
[PATCH v3 17/17] lpfc: update driver version to 11.4.0.5
Update the driver version to 11.4.0.5 Signed-off-by: Dick Kennedy Signed-off-by: James Smart Reviewed-by: Hannes Reinecke --- drivers/scsi/lpfc/lpfc_version.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/scsi/lpfc/lpfc_version.h b/drivers/scsi/lpfc/lpfc_version.h index e0181371af09..cc2f5cec98c5 100644 --- a/drivers/scsi/lpfc/lpfc_version.h +++ b/drivers/scsi/lpfc/lpfc_version.h @@ -20,7 +20,7 @@ * included with this package. * ***/ -#define LPFC_DRIVER_VERSION "11.4.0.4" +#define LPFC_DRIVER_VERSION "11.4.0.5" #define LPFC_DRIVER_NAME "lpfc" /* Used for SLI 2/3 */ -- 2.13.1
[PATCH v3 13/17] lpfc: Correct driver deregistrations with host nvme transport
The driver's interaction with the host nvme transport has been incorrect for a while. The driver did not wait for the unregister callbacks (waited only 5 jiffies). Thus the driver may remove objects that may be referenced by subsequent abort commands from the transport, and the actual unregister callback was effectively a noop. This was especially problematic if the driver was unloaded. The driver now waits for the unregister callbacks, as it should, before continuing with teardown. Signed-off-by: Dick KennedySigned-off-by: James Smart --- v3: per review: clear NLP_WAIT_FOR_UNREG in all cases --- drivers/scsi/lpfc/lpfc_disc.h | 2 + drivers/scsi/lpfc/lpfc_nvme.c | 116 +++--- drivers/scsi/lpfc/lpfc_nvme.h | 2 + 3 files changed, 114 insertions(+), 6 deletions(-) diff --git a/drivers/scsi/lpfc/lpfc_disc.h b/drivers/scsi/lpfc/lpfc_disc.h index f9a566eaef04..5a7547f9d8d8 100644 --- a/drivers/scsi/lpfc/lpfc_disc.h +++ b/drivers/scsi/lpfc/lpfc_disc.h @@ -134,6 +134,8 @@ struct lpfc_nodelist { struct lpfc_scsicmd_bkt *lat_data; /* Latency data */ uint32_t fc4_prli_sent; uint32_t upcall_flags; +#define NLP_WAIT_FOR_UNREG0x1 + uint32_t nvme_fb_size; /* NVME target's supported byte cnt */ #define NVME_FB_BIT_SHIFT 9/* PRLI Rsp first burst in 512B units. */ }; diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c index d3ada630b427..3aa3b889b4cf 100644 --- a/drivers/scsi/lpfc/lpfc_nvme.c +++ b/drivers/scsi/lpfc/lpfc_nvme.c @@ -154,6 +154,10 @@ lpfc_nvme_localport_delete(struct nvme_fc_local_port *localport) { struct lpfc_nvme_lport *lport = localport->private; + lpfc_printf_vlog(lport->vport, KERN_INFO, LOG_NVME, +"6173 localport %p delete complete\n", +lport); + /* release any threads waiting for the unreg to complete */ complete(>lport_unreg_done); } @@ -946,10 +950,19 @@ lpfc_nvme_io_cmd_wqe_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pwqeIn, freqpriv->nvme_buf = NULL; /* NVME targets need completion held off until the abort exchange -* completes. 
+* completes unless the NVME Rport is getting unregistered. */ - if (!(lpfc_ncmd->flags & LPFC_SBUF_XBUSY)) + if (!(lpfc_ncmd->flags & LPFC_SBUF_XBUSY) || + ndlp->upcall_flags & NLP_WAIT_FOR_UNREG) { + /* Clear the XBUSY flag to prevent double completions. +* The nvme rport is getting unregistered and there is +* no need to defer the IO. +*/ + if (lpfc_ncmd->flags & LPFC_SBUF_XBUSY) + lpfc_ncmd->flags &= ~LPFC_SBUF_XBUSY; + nCmd->done(nCmd); + } spin_lock_irqsave(>hbalock, flags); lpfc_ncmd->nrport = NULL; @@ -2234,6 +2247,47 @@ lpfc_nvme_create_localport(struct lpfc_vport *vport) return ret; } +/* lpfc_nvme_lport_unreg_wait - Wait for the host to complete an lport unreg. + * + * The driver has to wait for the host nvme transport to callback + * indicating the localport has successfully unregistered all + * resources. Since this is an uninterruptible wait, loop every ten + * seconds and print a message indicating no progress. + * + * An uninterruptible wait is used because of the risk of transport-to- + * driver state mismatch. + */ +void +lpfc_nvme_lport_unreg_wait(struct lpfc_vport *vport, + struct lpfc_nvme_lport *lport) +{ +#if (IS_ENABLED(CONFIG_NVME_FC)) + u32 wait_tmo; + int ret; + + /* Host transport has to clean up and confirm requiring an indefinite +* wait. Print a message if a 10 second wait expires and renew the +* wait. This is unexpected. +*/ + wait_tmo = msecs_to_jiffies(LPFC_NVME_WAIT_TMO * 1000); + while (true) { + ret = wait_for_completion_timeout(>lport_unreg_done, + wait_tmo); + if (unlikely(!ret)) { + lpfc_printf_vlog(vport, KERN_ERR, LOG_NVME_IOERR, +"6176 Lport %p Localport %p wait " +"timed out. Renewing.\n", +lport, vport->localport); + continue; + } + break; + } + lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_IOERR, +"6177 Lport %p Localport %p Complete Success\n", +lport, vport->localport); +#endif +} + /** * lpfc_nvme_destroy_localport - Destroy lpfc_nvme bound to nvme transport. * @pnvme: pointer to lpfc nvme data structure. 
@@ -2268,7 +2322,11 @@ lpfc_nvme_destroy_localport(struct lpfc_vport *vport) */ init_completion(>lport_unreg_done); ret =
[PATCH v3 15/17] lpfc: Fix driver handling of nvme resources during unload
During driver unload, the driver may crash due to NULL pointers. The NULL pointers were due to the driver not protecting itself sufficiently during some of the teardown paths. Additionally, the driver was not waiting for and cleaning up nvme io resources. As such, the driver wasn't making the callbacks to the transport, stalling the transport's association teardown. This patch waits for io cleanup before tearing down and adds checks for possible NULL pointers. Cc:# 4.12+ Signed-off-by: Dick Kennedy Signed-off-by: James Smart Reviewed-by: Hannes Reinecke --- drivers/scsi/lpfc/lpfc_crtn.h | 2 + drivers/scsi/lpfc/lpfc_init.c | 18 drivers/scsi/lpfc/lpfc_nvme.c | 96 ++- 3 files changed, 105 insertions(+), 11 deletions(-) diff --git a/drivers/scsi/lpfc/lpfc_crtn.h b/drivers/scsi/lpfc/lpfc_crtn.h index 7e300734b345..dac33900bf17 100644 --- a/drivers/scsi/lpfc/lpfc_crtn.h +++ b/drivers/scsi/lpfc/lpfc_crtn.h @@ -254,6 +254,8 @@ void lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctxp); int lpfc_nvmet_rcv_unsol_abort(struct lpfc_vport *vport, struct fc_frame_header *fc_hdr); +void lpfc_sli_flush_nvme_rings(struct lpfc_hba *phba); +void lpfc_nvme_wait_for_io_drain(struct lpfc_hba *phba); void lpfc_sli4_build_dflt_fcf_record(struct lpfc_hba *, struct fcf_record *, uint16_t); int lpfc_sli4_rq_put(struct lpfc_queue *hq, struct lpfc_queue *dq, diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c index 7a06f23a3baf..c466ceb43bc9 100644 --- a/drivers/scsi/lpfc/lpfc_init.c +++ b/drivers/scsi/lpfc/lpfc_init.c @@ -10136,6 +10136,16 @@ lpfc_sli4_xri_exchange_busy_wait(struct lpfc_hba *phba) int fcp_xri_cmpl = 1; int els_xri_cmpl = list_empty(&phba->sli4_hba.lpfc_abts_els_sgl_list); + /* Driver just aborted IOs during the hba_unset process. Pause +* here to give the HBA time to complete the IO and get entries +* into the abts lists. +*/ + msleep(LPFC_XRI_EXCH_BUSY_WAIT_T1 * 5); + + /* Wait for NVME pending IO to flush back to transport.
*/ + if (phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME) + lpfc_nvme_wait_for_io_drain(phba); + if (phba->cfg_enable_fc4_type & LPFC_ENABLE_FCP) fcp_xri_cmpl = list_empty(>sli4_hba.lpfc_abts_scsi_buf_list); @@ -11659,6 +11669,10 @@ lpfc_sli4_prep_dev_for_reset(struct lpfc_hba *phba) /* Flush all driver's outstanding SCSI I/Os as we are to reset */ lpfc_sli_flush_fcp_rings(phba); + /* Flush the outstanding NVME IOs if fc4 type enabled. */ + if (phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME) + lpfc_sli_flush_nvme_rings(phba); + /* stop all timers */ lpfc_stop_hba_timers(phba); @@ -11690,6 +11704,10 @@ lpfc_sli4_prep_dev_for_perm_failure(struct lpfc_hba *phba) /* Clean up all driver's outstanding SCSI I/Os */ lpfc_sli_flush_fcp_rings(phba); + + /* Flush the outstanding NVME IOs if fc4 type enabled. */ + if (phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME) + lpfc_sli_flush_nvme_rings(phba); } /** diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c index 9b231c88ca8b..50bbc61bfe5d 100644 --- a/drivers/scsi/lpfc/lpfc_nvme.c +++ b/drivers/scsi/lpfc/lpfc_nvme.c @@ -88,6 +88,9 @@ lpfc_nvme_create_queue(struct nvme_fc_local_port *pnvme_lport, struct lpfc_nvme_qhandle *qhandle; char *str; + if (!pnvme_lport->private) + return -ENOMEM; + lport = (struct lpfc_nvme_lport *)pnvme_lport->private; vport = lport->vport; qhandle = kzalloc(sizeof(struct lpfc_nvme_qhandle), GFP_KERNEL); @@ -140,6 +143,9 @@ lpfc_nvme_delete_queue(struct nvme_fc_local_port *pnvme_lport, struct lpfc_nvme_lport *lport; struct lpfc_vport *vport; + if (!pnvme_lport->private) + return; + lport = (struct lpfc_nvme_lport *)pnvme_lport->private; vport = lport->vport; @@ -1265,13 +1271,29 @@ lpfc_nvme_fcp_io_submit(struct nvme_fc_local_port *pnvme_lport, struct lpfc_nvme_buf *lpfc_ncmd; struct lpfc_nvme_rport *rport; struct lpfc_nvme_qhandle *lpfc_queue_info; - struct lpfc_nvme_fcpreq_priv *freqpriv = pnvme_fcreq->private; + struct lpfc_nvme_fcpreq_priv *freqpriv; #ifdef CONFIG_SCSI_LPFC_DEBUG_FS 
uint64_t start = 0; #endif + /* Validate pointers. LLDD fault handling with transport does +* have timing races. +*/ lport = (struct lpfc_nvme_lport *)pnvme_lport->private; + if (unlikely(!lport)) { + ret = -EINVAL; + goto out_fail; + } + vport = lport->vport; + + if
[PATCH v3 02/17] lpfc: Expand WQE capability of every NVME hardware queue
Hardware queues are a fast staging area to push commands into the adapter. The adapter should drain them extremely quickly. However, under heavy io load, the host cpu is pushing commands faster than the drain rate of the adapter causing the driver to resource busy commands. Enlarge the hardware queue (wq & cq) to support a larger number of queue entries (4x the prior size) before backpressure. Enlarging the queue requires larger contiguous buffers (16k) per logical page for the hardware. This changed calling sequences that were expecting 4K page sizes that now must pass a parameter with the page sizes. It also required use of a new version of an adapter command that can vary the page size values. Signed-off-by: Dick KennedySigned-off-by: James Smart Reviewed-by: Hannes Reinecke --- drivers/scsi/lpfc/lpfc_hw4.h | 6 +++- drivers/scsi/lpfc/lpfc_init.c | 67 ++-- drivers/scsi/lpfc/lpfc_nvme.h | 3 +- drivers/scsi/lpfc/lpfc_sli.c | 71 +-- drivers/scsi/lpfc/lpfc_sli4.h | 8 +++-- 5 files changed, 112 insertions(+), 43 deletions(-) diff --git a/drivers/scsi/lpfc/lpfc_hw4.h b/drivers/scsi/lpfc/lpfc_hw4.h index 2b145966c73f..73c2f6971d2b 100644 --- a/drivers/scsi/lpfc/lpfc_hw4.h +++ b/drivers/scsi/lpfc/lpfc_hw4.h @@ -1122,6 +1122,7 @@ struct cq_context { #define LPFC_CQ_CNT_2560x0 #define LPFC_CQ_CNT_5120x1 #define LPFC_CQ_CNT_1024 0x2 +#define LPFC_CQ_CNT_WORD7 0x3 uint32_t word1; #define lpfc_cq_eq_id_SHIFT22 /* Version 0 Only */ #define lpfc_cq_eq_id_MASK 0x00FF @@ -1129,7 +1130,7 @@ struct cq_context { #define lpfc_cq_eq_id_2_SHIFT 0 /* Version 2 Only */ #define lpfc_cq_eq_id_2_MASK 0x #define lpfc_cq_eq_id_2_WORD word1 - uint32_t reserved0; + uint32_t lpfc_cq_context_count; /* Version 2 Only */ uint32_t reserved1; }; @@ -1193,6 +1194,9 @@ struct lpfc_mbx_cq_create_set { #define lpfc_mbx_cq_create_set_arm_SHIFT 31 #define lpfc_mbx_cq_create_set_arm_MASK0x0001 #define lpfc_mbx_cq_create_set_arm_WORDword2 +#define lpfc_mbx_cq_create_set_cq_cnt_SHIFT16 +#define 
lpfc_mbx_cq_create_set_cq_cnt_MASK 0x7FFF +#define lpfc_mbx_cq_create_set_cq_cnt_WORD word2 #define lpfc_mbx_cq_create_set_num_cq_SHIFT0 #define lpfc_mbx_cq_create_set_num_cq_MASK 0x #define lpfc_mbx_cq_create_set_num_cq_WORD word2 diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c index 4ffdde5808ee..52c039e9f4a4 100644 --- a/drivers/scsi/lpfc/lpfc_init.c +++ b/drivers/scsi/lpfc/lpfc_init.c @@ -7964,10 +7964,10 @@ static int lpfc_alloc_nvme_wq_cq(struct lpfc_hba *phba, int wqidx) { struct lpfc_queue *qdesc; - int cnt; - qdesc = lpfc_sli4_queue_alloc(phba, phba->sli4_hba.cq_esize, - phba->sli4_hba.cq_ecount); + qdesc = lpfc_sli4_queue_alloc(phba, LPFC_NVME_PAGE_SIZE, + phba->sli4_hba.cq_esize, + LPFC_NVME_CQSIZE); if (!qdesc) { lpfc_printf_log(phba, KERN_ERR, LOG_INIT, "0508 Failed allocate fast-path NVME CQ (%d)\n", @@ -7976,8 +7976,8 @@ lpfc_alloc_nvme_wq_cq(struct lpfc_hba *phba, int wqidx) } phba->sli4_hba.nvme_cq[wqidx] = qdesc; - cnt = LPFC_NVME_WQSIZE; - qdesc = lpfc_sli4_queue_alloc(phba, LPFC_WQE128_SIZE, cnt); + qdesc = lpfc_sli4_queue_alloc(phba, LPFC_NVME_PAGE_SIZE, + LPFC_WQE128_SIZE, LPFC_NVME_WQSIZE); if (!qdesc) { lpfc_printf_log(phba, KERN_ERR, LOG_INIT, "0509 Failed allocate fast-path NVME WQ (%d)\n", @@ -7996,8 +7996,9 @@ lpfc_alloc_fcp_wq_cq(struct lpfc_hba *phba, int wqidx) uint32_t wqesize; /* Create Fast Path FCP CQs */ - qdesc = lpfc_sli4_queue_alloc(phba, phba->sli4_hba.cq_esize, - phba->sli4_hba.cq_ecount); + qdesc = lpfc_sli4_queue_alloc(phba, LPFC_DEFAULT_PAGE_SIZE, + phba->sli4_hba.cq_esize, + phba->sli4_hba.cq_ecount); if (!qdesc) { lpfc_printf_log(phba, KERN_ERR, LOG_INIT, "0499 Failed allocate fast-path FCP CQ (%d)\n", wqidx); @@ -8008,7 +8009,8 @@ lpfc_alloc_fcp_wq_cq(struct lpfc_hba *phba, int wqidx) /* Create Fast Path FCP WQs */ wqesize = (phba->fcp_embed_io) ? 
LPFC_WQE128_SIZE : phba->sli4_hba.wq_esize; - qdesc = lpfc_sli4_queue_alloc(phba, wqesize, phba->sli4_hba.wq_ecount); + qdesc = lpfc_sli4_queue_alloc(phba, LPFC_DEFAULT_PAGE_SIZE, +
[PATCH v3 07/17] lpfc: Driver fails to detect direct attach storage array
The driver does not respond to PLOGI from the direct attach target. The driver uses an incorrect S_ID in CONFIG_LINK after FLOGI completion. Correct by issuing CONFIG_LINK with the correct S_ID after receiving the PLOGI from the target. Signed-off-by: Dick Kennedy Signed-off-by: James Smart Reviewed-by: Hannes Reinecke --- drivers/scsi/lpfc/lpfc_els.c | 36 1 file changed, 20 insertions(+), 16 deletions(-) diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c index c81cdc637e64..532cd4b49c5d 100644 --- a/drivers/scsi/lpfc/lpfc_els.c +++ b/drivers/scsi/lpfc/lpfc_els.c @@ -858,6 +858,9 @@ lpfc_cmpl_els_flogi_nport(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, vport->fc_flag |= FC_PT2PT; spin_unlock_irq(shost->host_lock); + /* If we are pt2pt with another NPort, force NPIV off! */ + phba->sli3_options &= ~LPFC_SLI3_NPIV_ENABLED; + /* If physical FC port changed, unreg VFI and ALL VPIs / RPIs */ if ((phba->sli_rev == LPFC_SLI_REV4) && phba->fc_topology_changed) { lpfc_unregister_fcf_prep(phba); @@ -916,28 +919,29 @@ lpfc_cmpl_els_flogi_nport(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, spin_lock_irq(shost->host_lock); ndlp->nlp_flag |= NLP_NPR_2B_DISC; spin_unlock_irq(shost->host_lock); - } else + + mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); + if (!mbox) + goto fail; + + lpfc_config_link(phba, mbox); + + mbox->mbox_cmpl = lpfc_mbx_cmpl_local_config_link; + mbox->vport = vport; + rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT); + if (rc == MBX_NOT_FINISHED) { + mempool_free(mbox, phba->mbox_mem_pool); + goto fail; + } + } else { /* This side will wait for the PLOGI, decrement ndlp reference * count indicating that ndlp can be released when other * references to it are done. */ lpfc_nlp_put(ndlp); - /* If we are pt2pt with another NPort, force NPIV off!
*/ - phba->sli3_options &= ~LPFC_SLI3_NPIV_ENABLED; - - mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); - if (!mbox) - goto fail; - - lpfc_config_link(phba, mbox); - - mbox->mbox_cmpl = lpfc_mbx_cmpl_local_config_link; - mbox->vport = vport; - rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT); - if (rc == MBX_NOT_FINISHED) { - mempool_free(mbox, phba->mbox_mem_pool); - goto fail; + /* Start discovery - this should just do CLEAR_LA */ + lpfc_disc_start(vport); } return 0; -- 2.13.1
Re: [PATCH v15 5/5] PCI: Remove PCI pool macro functions
On Mon, Nov 20, 2017 at 08:32:47PM +0100, Romain Perier wrote: > From: Romain Perier> > Now that all the drivers use dma pool API, we can remove the macro > functions for PCI pool. > > Signed-off-by: Romain Perier > Reviewed-by: Peter Senna Tschudin I already acked this once on Oct 24. Please keep that ack and include it in any future postings so I don't have to deal with this again. Acked-by: Bjorn Helgaas > --- > include/linux/pci.h | 9 - > 1 file changed, 9 deletions(-) > > diff --git a/include/linux/pci.h b/include/linux/pci.h > index 96c94980d1ff..d03b4a20033d 100644 > --- a/include/linux/pci.h > +++ b/include/linux/pci.h > @@ -1324,15 +1324,6 @@ int pci_set_vga_state(struct pci_dev *pdev, bool > decode, > #include > #include > > -#define pci_pool dma_pool > -#define pci_pool_create(name, pdev, size, align, allocation) \ > - dma_pool_create(name, >dev, size, align, allocation) > -#define pci_pool_destroy(pool) dma_pool_destroy(pool) > -#define pci_pool_alloc(pool, flags, handle) dma_pool_alloc(pool, flags, > handle) > -#define pci_pool_zalloc(pool, flags, handle) \ > - dma_pool_zalloc(pool, flags, handle) > -#define pci_pool_free(pool, vaddr, addr) dma_pool_free(pool, vaddr, > addr) > - > struct msix_entry { > u32 vector; /* kernel uses to write allocated vector */ > u16 entry; /* driver uses to specify entry, OS writes */ > -- > 2.14.1 >
[PATCH v15 2/5] net: e100: Replace PCI pool old API
From: Romain Perier

The PCI pool API is deprecated. This commit replaces the PCI pool old
API by the appropriate function with the DMA pool API.

Signed-off-by: Romain Perier
Acked-by: Peter Senna Tschudin
Acked-by: Jeff Kirsher
Tested-by: Peter Senna Tschudin
---
 drivers/net/ethernet/intel/e100.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/intel/e100.c b/drivers/net/ethernet/intel/e100.c
index 44b3937f7e81..29486478836e 100644
--- a/drivers/net/ethernet/intel/e100.c
+++ b/drivers/net/ethernet/intel/e100.c
@@ -607,7 +607,7 @@ struct nic {
 	struct mem *mem;
 	dma_addr_t dma_addr;
 
-	struct pci_pool *cbs_pool;
+	struct dma_pool *cbs_pool;
 	dma_addr_t cbs_dma_addr;
 	u8 adaptive_ifs;
 	u8 tx_threshold;
@@ -1892,7 +1892,7 @@ static void e100_clean_cbs(struct nic *nic)
 			nic->cb_to_clean = nic->cb_to_clean->next;
 			nic->cbs_avail++;
 		}
-		pci_pool_free(nic->cbs_pool, nic->cbs, nic->cbs_dma_addr);
+		dma_pool_free(nic->cbs_pool, nic->cbs, nic->cbs_dma_addr);
 		nic->cbs = NULL;
 		nic->cbs_avail = 0;
 	}
@@ -1910,7 +1910,7 @@ static int e100_alloc_cbs(struct nic *nic)
 	nic->cb_to_use = nic->cb_to_send = nic->cb_to_clean = NULL;
 	nic->cbs_avail = 0;
 
-	nic->cbs = pci_pool_zalloc(nic->cbs_pool, GFP_KERNEL,
+	nic->cbs = dma_pool_zalloc(nic->cbs_pool, GFP_KERNEL,
 				   &nic->cbs_dma_addr);
 	if (!nic->cbs)
 		return -ENOMEM;
@@ -2960,8 +2960,8 @@ static int e100_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		netif_err(nic, probe, nic->netdev, "Cannot register net device, aborting\n");
 		goto err_out_free;
 	}
-	nic->cbs_pool = pci_pool_create(netdev->name,
-			   nic->pdev,
+	nic->cbs_pool = dma_pool_create(netdev->name,
+			   &nic->pdev->dev,
 			   nic->params.cbs.max * sizeof(struct cb),
 			   sizeof(u32),
 			   0);
@@ -3001,7 +3001,7 @@ static void e100_remove(struct pci_dev *pdev)
 		unregister_netdev(netdev);
 		e100_free(nic);
 		pci_iounmap(pdev, nic->csr);
-		pci_pool_destroy(nic->cbs_pool);
+		dma_pool_destroy(nic->cbs_pool);
 		free_netdev(netdev);
 		pci_release_regions(pdev);
 		pci_disable_device(pdev);
-- 
2.14.1
[PATCH v15 4/5] scsi: mpt3sas: Replace PCI pool old API
The PCI pool API is deprecated. This commit replaces the PCI pool old
API by the appropriate function with the DMA pool API.

Signed-off-by: Romain Perier
---
 drivers/scsi/mpt3sas/mpt3sas_base.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
index 8027de465d47..08237b8659ae 100644
--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
+++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
@@ -3790,12 +3790,12 @@ _base_release_memory_pools(struct MPT3SAS_ADAPTER *ioc)
 	if (ioc->pcie_sgl_dma_pool) {
 		for (i = 0; i < ioc->scsiio_depth; i++) {
 			if (ioc->scsi_lookup[i].pcie_sg_list.pcie_sgl)
-				pci_pool_free(ioc->pcie_sgl_dma_pool,
+				dma_pool_free(ioc->pcie_sgl_dma_pool,
 				   ioc->scsi_lookup[i].pcie_sg_list.pcie_sgl,
 				   ioc->scsi_lookup[i].pcie_sg_list.pcie_sgl_dma);
 		}
 		if (ioc->pcie_sgl_dma_pool)
-			pci_pool_destroy(ioc->pcie_sgl_dma_pool);
+			dma_pool_destroy(ioc->pcie_sgl_dma_pool);
 	}
 
 	if (ioc->config_page) {
@@ -4204,21 +4204,21 @@ _base_allocate_memory_pools(struct MPT3SAS_ADAPTER *ioc)
 		sz = nvme_blocks_needed * ioc->page_size;
 		ioc->pcie_sgl_dma_pool =
-			pci_pool_create("PCIe SGL pool", ioc->pdev, sz, 16, 0);
+			dma_pool_create("PCIe SGL pool", &ioc->pdev->dev, sz, 16, 0);
 		if (!ioc->pcie_sgl_dma_pool) {
 			pr_info(MPT3SAS_FMT
-			    "PCIe SGL pool: pci_pool_create failed\n",
+			    "PCIe SGL pool: dma_pool_create failed\n",
 			    ioc->name);
 			goto out;
 		}
 		for (i = 0; i < ioc->scsiio_depth; i++) {
 			ioc->scsi_lookup[i].pcie_sg_list.pcie_sgl =
-			    pci_pool_alloc(ioc->pcie_sgl_dma_pool,
+			    dma_pool_alloc(ioc->pcie_sgl_dma_pool,
 			    GFP_KERNEL,
 			    &ioc->scsi_lookup[i].pcie_sg_list.pcie_sgl_dma);
 			if (!ioc->scsi_lookup[i].pcie_sg_list.pcie_sgl) {
 				pr_info(MPT3SAS_FMT
-				    "PCIe SGL pool: pci_pool_alloc failed\n",
+				    "PCIe SGL pool: dma_pool_alloc failed\n",
 				    ioc->name);
 				goto out;
 			}
-- 
2.14.1
[PATCH v15 3/5] hinic: Replace PCI pool old API
From: Romain Perier

The PCI pool API is deprecated. This commit replaces the PCI pool old
API by the appropriate function with the DMA pool API.

Signed-off-by: Romain Perier
---
 drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c | 10 +++++-----
 drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.h |  2 +-
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c
index 7d95f0866fb0..28a81ac97af5 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c
@@ -143,7 +143,7 @@ int hinic_alloc_cmdq_buf(struct hinic_cmdqs *cmdqs,
 	struct hinic_hwif *hwif = cmdqs->hwif;
 	struct pci_dev *pdev = hwif->pdev;
 
-	cmdq_buf->buf = pci_pool_alloc(cmdqs->cmdq_buf_pool, GFP_KERNEL,
+	cmdq_buf->buf = dma_pool_alloc(cmdqs->cmdq_buf_pool, GFP_KERNEL,
 				       &cmdq_buf->dma_addr);
 	if (!cmdq_buf->buf) {
 		dev_err(&pdev->dev, "Failed to allocate cmd from the pool\n");
@@ -161,7 +161,7 @@ int hinic_alloc_cmdq_buf(struct hinic_cmdqs *cmdqs,
 void hinic_free_cmdq_buf(struct hinic_cmdqs *cmdqs,
 			 struct hinic_cmdq_buf *cmdq_buf)
 {
-	pci_pool_free(cmdqs->cmdq_buf_pool, cmdq_buf->buf, cmdq_buf->dma_addr);
+	dma_pool_free(cmdqs->cmdq_buf_pool, cmdq_buf->buf, cmdq_buf->dma_addr);
 }
 
 static unsigned int cmdq_wqe_size_from_bdlen(enum bufdesc_len len)
@@ -875,7 +875,7 @@ int hinic_init_cmdqs(struct hinic_cmdqs *cmdqs, struct hinic_hwif *hwif,
 	int err;
 
 	cmdqs->hwif = hwif;
-	cmdqs->cmdq_buf_pool = pci_pool_create("hinic_cmdq", pdev,
+	cmdqs->cmdq_buf_pool = dma_pool_create("hinic_cmdq", &pdev->dev,
 					       HINIC_CMDQ_BUF_SIZE,
 					       HINIC_CMDQ_BUF_SIZE, 0);
 	if (!cmdqs->cmdq_buf_pool)
@@ -916,7 +916,7 @@ int hinic_init_cmdqs(struct hinic_cmdqs *cmdqs, struct hinic_hwif *hwif,
 	devm_kfree(&pdev->dev, cmdqs->saved_wqs);
 
 err_saved_wqs:
-	pci_pool_destroy(cmdqs->cmdq_buf_pool);
+	dma_pool_destroy(cmdqs->cmdq_buf_pool);
 	return err;
 }
 
@@ -942,5 +942,5 @@ void hinic_free_cmdqs(struct hinic_cmdqs *cmdqs)
 	devm_kfree(&pdev->dev, cmdqs->saved_wqs);
 
-	pci_pool_destroy(cmdqs->cmdq_buf_pool);
+	dma_pool_destroy(cmdqs->cmdq_buf_pool);
 }
 
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.h
index b35583400cb6..23f8d39eab68 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.h
@@ -157,7 +157,7 @@ struct hinic_cmdq {
 struct hinic_cmdqs {
 	struct hinic_hwif *hwif;
 
-	struct pci_pool *cmdq_buf_pool;
+	struct dma_pool *cmdq_buf_pool;
 
 	struct hinic_wq *saved_wqs;
-- 
2.14.1
[PATCH v15 0/5] Replace PCI pool by DMA pool API
The current PCI pool API are simple macro functions directly expanded to
the appropriate dma pool functions. The prototypes are almost the same
and semantically, they are very similar. I propose to use the DMA pool
API directly and get rid of the old API.

This set of patches replaces the old API by the dma pool API and
removes the defines.

Changes in v15:
- Rebased series onto next-20171120
- Added patch 04/05 for the mpt3sas scsi driver

Changes in v14:
- Rebased series onto next-20171018
- Rebased patch 03/05 on latest driver

Changes in v13:
- Rebased series onto next-20170906
- Added a new commit for the hinic ethernet driver
- Removed previously merged patches

Changes in v12:
- Rebased series onto next-20170822

Changes in v11:
- Rebased series onto next-20170809
- Removed patches 08-14, these have been merged.

Changes in v10:
- Rebased series onto next-20170706
- I have fixed and improved patch "scsi: megaraid: Replace PCI pool old API"

Changes in v9:
- Rebased series onto next-20170522
- I have fixed and improved the patch for the lpfc driver

Changes in v8:
- Rebased series onto next-20170428

Changes in v7:
- Rebased series onto next-20170416
- Added Acked-by, Tested-by and Reviewed-by tags

Changes in v6:
- Fixed an issue reported by the kbuild test robot about changes in DAC960
- Removed patches 15/19, 16/19, 17/19, 18/19. They have been merged by Greg
- Added Acked-by tags

Changes in v5:
- Re-worded the cover letter (removed sentence about checkpatch.pl)
- Rebased series onto next-20170308
- Fixed typos in commit message
- Added Acked-by tags

Changes in v4:
- Rebased series onto next-20170301
- Removed patch 20/20: checks done by checkpatch.pl, no longer required.
  Thanks to Peter and Joe for their feedback.
- Added Reviewed-by tags

Changes in v3:
- Rebased series onto next-20170224
- Fixed checkpatch.pl reports for patch 11/20 and patch 12/20
- Removed prefix RFC

Changes in v2:
- Introduced patch 18/20
- Fixed cosmetic issues: spaces before brace, lines over 80 characters
- Removed some of the checks for NULL pointers before calling dma_pool_destroy
- Improved the regexp in checkpatch for pci_pool, thanks to Joe Perches
- Added Tested-by and Acked-by tags

Romain Perier (5):
  block: DAC960: Replace PCI pool old API
  net: e100: Replace PCI pool old API
  hinic: Replace PCI pool old API
  scsi: mpt3sas: Replace PCI pool old API
  PCI: Remove PCI pool macro functions

 drivers/block/DAC960.c                            | 38 +++++++++----------
 drivers/block/DAC960.h                            |  4 +--
 drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c | 10 +++---
 drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.h |  2 +-
 drivers/net/ethernet/intel/e100.c                 | 12 +++----
 drivers/scsi/mpt3sas/mpt3sas_base.c               | 12 +++----
 include/linux/pci.h                               |  9 -----
 7 files changed, 38 insertions(+), 49 deletions(-)

-- 
2.14.1
[PATCH v15 5/5] PCI: Remove PCI pool macro functions
From: Romain Perier

Now that all the drivers use the dma pool API, we can remove the macro
functions for PCI pool.

Signed-off-by: Romain Perier
Reviewed-by: Peter Senna Tschudin
---
 include/linux/pci.h | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/include/linux/pci.h b/include/linux/pci.h
index 96c94980d1ff..d03b4a20033d 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -1324,15 +1324,6 @@ int pci_set_vga_state(struct pci_dev *pdev, bool decode,
 #include
 #include
 
-#define	pci_pool dma_pool
-#define pci_pool_create(name, pdev, size, align, allocation) \
-		dma_pool_create(name, &pdev->dev, size, align, allocation)
-#define	pci_pool_destroy(pool) dma_pool_destroy(pool)
-#define	pci_pool_alloc(pool, flags, handle) dma_pool_alloc(pool, flags, handle)
-#define	pci_pool_zalloc(pool, flags, handle) \
-		dma_pool_zalloc(pool, flags, handle)
-#define	pci_pool_free(pool, vaddr, addr) dma_pool_free(pool, vaddr, addr)
-
 struct msix_entry {
 	u32	vector;	/* kernel uses to write allocated vector */
 	u16	entry;	/* driver uses to specify entry, OS writes */
-- 
2.14.1
[PATCH v15 1/5] block: DAC960: Replace PCI pool old API
From: Romain Perier

The PCI pool API is deprecated. This commit replaces the PCI pool old
API by the appropriate function with the DMA pool API.

Signed-off-by: Romain Perier
Acked-by: Peter Senna Tschudin
Tested-by: Peter Senna Tschudin
---
 drivers/block/DAC960.c | 38 ++++++++++++++++++--------------------
 drivers/block/DAC960.h |  4 ++--
 2 files changed, 20 insertions(+), 22 deletions(-)

diff --git a/drivers/block/DAC960.c b/drivers/block/DAC960.c
index 255591ab3716..2a8950ee382c 100644
--- a/drivers/block/DAC960.c
+++ b/drivers/block/DAC960.c
@@ -268,17 +268,17 @@ static bool DAC960_CreateAuxiliaryStructures(DAC960_Controller_T *Controller)
   void *AllocationPointer = NULL;
   void *ScatterGatherCPU = NULL;
   dma_addr_t ScatterGatherDMA;
-  struct pci_pool *ScatterGatherPool;
+  struct dma_pool *ScatterGatherPool;
   void *RequestSenseCPU = NULL;
   dma_addr_t RequestSenseDMA;
-  struct pci_pool *RequestSensePool = NULL;
+  struct dma_pool *RequestSensePool = NULL;
 
   if (Controller->FirmwareType == DAC960_V1_Controller)
     {
       CommandAllocationLength = offsetof(DAC960_Command_T, V1.EndMarker);
       CommandAllocationGroupSize = DAC960_V1_CommandAllocationGroupSize;
-      ScatterGatherPool = pci_pool_create("DAC960_V1_ScatterGather",
-		Controller->PCIDevice,
+      ScatterGatherPool = dma_pool_create("DAC960_V1_ScatterGather",
+		&Controller->PCIDevice->dev,
 	DAC960_V1_ScatterGatherLimit * sizeof(DAC960_V1_ScatterGatherSegment_T),
 	sizeof(DAC960_V1_ScatterGatherSegment_T), 0);
       if (ScatterGatherPool == NULL)
@@ -290,18 +290,18 @@ static bool DAC960_CreateAuxiliaryStructures(DAC960_Controller_T *Controller)
     {
       CommandAllocationLength = offsetof(DAC960_Command_T, V2.EndMarker);
       CommandAllocationGroupSize = DAC960_V2_CommandAllocationGroupSize;
-      ScatterGatherPool = pci_pool_create("DAC960_V2_ScatterGather",
-		Controller->PCIDevice,
+      ScatterGatherPool = dma_pool_create("DAC960_V2_ScatterGather",
+		&Controller->PCIDevice->dev,
 	DAC960_V2_ScatterGatherLimit * sizeof(DAC960_V2_ScatterGatherSegment_T),
 	sizeof(DAC960_V2_ScatterGatherSegment_T), 0);
       if (ScatterGatherPool == NULL)
 	return DAC960_Failure(Controller,
 			"AUXILIARY STRUCTURE CREATION (SG)");
-      RequestSensePool = pci_pool_create("DAC960_V2_RequestSense",
-		Controller->PCIDevice, sizeof(DAC960_SCSI_RequestSense_T),
+      RequestSensePool = dma_pool_create("DAC960_V2_RequestSense",
+		&Controller->PCIDevice->dev, sizeof(DAC960_SCSI_RequestSense_T),
 	sizeof(int), 0);
       if (RequestSensePool == NULL) {
-	pci_pool_destroy(ScatterGatherPool);
+	dma_pool_destroy(ScatterGatherPool);
 	return DAC960_Failure(Controller,
 			"AUXILIARY STRUCTURE CREATION (SG)");
       }
@@ -335,16 +335,16 @@ static bool DAC960_CreateAuxiliaryStructures(DAC960_Controller_T *Controller)
       Command->Next = Controller->FreeCommands;
       Controller->FreeCommands = Command;
       Controller->Commands[CommandIdentifier-1] = Command;
-      ScatterGatherCPU = pci_pool_alloc(ScatterGatherPool, GFP_ATOMIC,
+      ScatterGatherCPU = dma_pool_alloc(ScatterGatherPool, GFP_ATOMIC,
 							&ScatterGatherDMA);
       if (ScatterGatherCPU == NULL)
 	  return DAC960_Failure(Controller, "AUXILIARY STRUCTURE CREATION");
 
       if (RequestSensePool != NULL) {
-	  RequestSenseCPU = pci_pool_alloc(RequestSensePool, GFP_ATOMIC,
+	  RequestSenseCPU = dma_pool_alloc(RequestSensePool, GFP_ATOMIC,
 						&RequestSenseDMA);
 	  if (RequestSenseCPU == NULL) {
-                pci_pool_free(ScatterGatherPool, ScatterGatherCPU,
+                dma_pool_free(ScatterGatherPool, ScatterGatherCPU,
                                 ScatterGatherDMA);
     		return DAC960_Failure(Controller,
 					"AUXILIARY STRUCTURE CREATION");
@@ -379,8 +379,8 @@ static bool DAC960_CreateAuxiliaryStructures(DAC960_Controller_T *Controller)
 static void DAC960_DestroyAuxiliaryStructures(DAC960_Controller_T *Controller)
 {
   int i;
-  struct pci_pool *ScatterGatherPool = Controller->ScatterGatherPool;
-  struct pci_pool *RequestSensePool = NULL;
+  struct dma_pool *ScatterGatherPool = Controller->ScatterGatherPool;
+  struct dma_pool *RequestSensePool = NULL;
   void *ScatterGatherCPU;
   dma_addr_t ScatterGatherDMA;
   void *RequestSenseCPU;
@@ -411,9 +411,9 @@ static void DAC960_DestroyAuxiliaryStructures(DAC960_Controller_T *Controller)
 	  RequestSenseDMA = Command->V2.RequestSenseDMA;
 	}
       if (ScatterGatherCPU != NULL)
-          pci_pool_free(ScatterGatherPool, ScatterGatherCPU, ScatterGatherDMA);
+
[PATCH] scsi_error: ensure EH wakes up on error to prevent host getting stuck
When a command is added to the host's error handler command queue,
there is a chance that the error handler will not be woken up. This can
happen when one CPU is running scsi_eh_scmd_add() at the same time as
another CPU is running scsi_device_unbusy() for a different command on
the same host. Each function changes one value, and then looks at the
value of a variable that the other function has just changed, but if
they both see stale data, neither will actually wake up the error
handler.

In scsi_eh_scmd_add(), host_failed is incremented, then
scsi_eh_wakeup() is called, which sees that host_busy is still 2, so it
doesn't actually wake up the handler. Meanwhile, in
scsi_device_unbusy(), host_busy is decremented, and then it sees that
host_failed is 0, so it doesn't even call scsi_eh_wakeup().

Signed-off-by: Stuart Hayes
---
diff -pur linux-4.14/drivers/scsi/scsi_error.c linux-4.14-stu/drivers/scsi/scsi_error.c
--- linux-4.14/drivers/scsi/scsi_error.c	2017-11-12 12:46:13.000000000 -0600
+++ linux-4.14-stu/drivers/scsi/scsi_error.c	2017-11-17 14:22:19.230867923 -0600
@@ -243,6 +243,10 @@ void scsi_eh_scmd_add(struct scsi_cmnd *
 	scsi_eh_reset(scmd);
 	list_add_tail(&scmd->eh_entry, &shost->eh_cmd_q);
 	shost->host_failed++;
+	/*
+	 * See scsi_device_unbusy() for explanation of smp_mb().
+	 */
+	smp_mb();
 	scsi_eh_wakeup(shost);
 	spin_unlock_irqrestore(shost->host_lock, flags);
 }
diff -pur linux-4.14/drivers/scsi/scsi_lib.c linux-4.14-stu/drivers/scsi/scsi_lib.c
--- linux-4.14/drivers/scsi/scsi_lib.c	2017-11-12 12:46:13.000000000 -0600
+++ linux-4.14-stu/drivers/scsi/scsi_lib.c	2017-11-17 14:22:15.814867833 -0600
@@ -325,6 +325,15 @@ void scsi_device_unbusy(struct scsi_devi
 	unsigned long flags;
 
 	atomic_dec(&shost->host_busy);
+
+	/* This function changes host_busy and looks at host_failed, while
+	 * scsi_eh_scmd_add() updates host_failed and looks at host_busy (in
+	 * scsi_eh_wakeup())... if these happen simultaneously without the smp
+	 * memory barrier, each can see the old value, such that neither will
+	 * wake up the error handler, which can cause the host controller to
+	 * be hung forever.
+	 */
+	smp_mb();
 	if (starget->can_queue > 0)
 		atomic_dec(&starget->target_busy);
[PATCH] scsi: smartpqi: put controller in SIS mode at shutdown
Since commit 162d7753fce9 ("scsi: smartpqi: ensure controller is in SIS
mode at init"), the driver is able to work even if the controller is in
PQI mode at startup. This made it possible to keep using the controller
across a kexec.

But kernels built before that patch still expect the controller to be
in SIS mode at startup. They will fail when kexec'd.

To handle that case, this patch reverts the controller to SIS mode
during the ->shutdown() callback.

Signed-off-by: Vincent Minet
---
 drivers/scsi/smartpqi/smartpqi_init.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
index b2880c7709e6..5e898dd9ae2b 100644
--- a/drivers/scsi/smartpqi/smartpqi_init.c
+++ b/drivers/scsi/smartpqi/smartpqi_init.c
@@ -6699,7 +6699,12 @@ static void pqi_shutdown(struct pci_dev *pci_dev)
 	 * storage.
 	 */
 	rc = pqi_flush_cache(ctrl_info, SHUTDOWN);
-	pqi_reset(ctrl_info);
+
+	if (ctrl_info->pqi_mode_enabled)
+		pqi_revert_to_sis_mode(ctrl_info);
+	else
+		pqi_reset(ctrl_info);
+
 	if (rc == 0)
 		return;
-- 
2.15.0
[PATCH] scsi: ufs: ufshcd: fix potential NULL pointer dereference in ufshcd_config_vreg
_vreg_ is being dereferenced before it is null checked, hence there is
a potential null pointer dereference. Fix this by moving the pointer
dereference after _vreg_ has been null checked.

This issue was detected with the help of Coccinelle.

Fixes: aa4976130934 ("ufs: Add regulator enable support")
Signed-off-by: Gustavo A. R. Silva
---
 drivers/scsi/ufs/ufshcd.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 011c336..a355d98 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -6559,12 +6559,15 @@ static int ufshcd_config_vreg(struct device *dev,
 		struct ufs_vreg *vreg, bool on)
 {
 	int ret = 0;
-	struct regulator *reg = vreg->reg;
-	const char *name = vreg->name;
+	struct regulator *reg;
+	const char *name;
 	int min_uV, uA_load;
 
 	BUG_ON(!vreg);
 
+	reg = vreg->reg;
+	name = vreg->name;
+
 	if (regulator_count_voltages(reg) > 0) {
 		min_uV = on ? vreg->min_uV : 0;
 		ret = regulator_set_voltage(reg, min_uV, vreg->max_uV);
-- 
2.7.4
Re: [PATCH v2 17/17] lpfc: update driver version to 11.4.0.5
On 11/10/2017 02:08 AM, James Smart wrote:
> Update the driver version to 11.4.0.5
>
> Signed-off-by: Dick Kennedy
> Signed-off-by: James Smart
> ---
>  drivers/scsi/lpfc/lpfc_version.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/scsi/lpfc/lpfc_version.h b/drivers/scsi/lpfc/lpfc_version.h
> index e0181371af09..cc2f5cec98c5 100644
> --- a/drivers/scsi/lpfc/lpfc_version.h
> +++ b/drivers/scsi/lpfc/lpfc_version.h
> @@ -20,7 +20,7 @@
>   * included with this package. *
>   ***/
>
> -#define LPFC_DRIVER_VERSION "11.4.0.4"
> +#define LPFC_DRIVER_VERSION "11.4.0.5"
>  #define LPFC_DRIVER_NAME "lpfc"
>
>  /* Used for SLI 2/3 */

Reviewed-by: Hannes Reinecke

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		Teamlead Storage & Networking
h...@suse.de			+49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)
Re: [PATCH v2 15/17] lpfc: Fix driver handling of nvme resources during unload
On 11/10/2017 02:08 AM, James Smart wrote:
> During driver unload, the driver may crash due to NULL pointers.
> The NULL pointers were due to the driver not protecting itself
> sufficiently during some of the teardown paths. Additionally, the
> driver was not waiting for and cleaning up nvme io resources. As
> such, the driver wasn't making the callbacks to the transport,
> stalling the transport's association teardown.
>
> This patch waits for io cleanup before tearing down and adds
> checks for possible NULL pointers.
>
> Cc: # 4.12+
> Signed-off-by: Dick Kennedy
> Signed-off-by: James Smart
> ---
>  drivers/scsi/lpfc/lpfc_crtn.h |  2 +
>  drivers/scsi/lpfc/lpfc_init.c | 18 ++++
>  drivers/scsi/lpfc/lpfc_nvme.c | 96 ++++++++++++++++++++++++++++++++---
>  3 files changed, 105 insertions(+), 11 deletions(-)

Reviewed-by: Hannes Reinecke

Cheers,

Hannes
Re: [PATCH v2 16/17] lpfc: small sg cnt cleanup
On 11/10/2017 02:08 AM, James Smart wrote:
> The logic for sg_seg_cnt is a bit convoluted. This patch tries to
> clean up a couple of areas, especially around the +2 and +1 logic.
>
> This patch:
> - cleans up the lpfc_sg_seg_cnt attribute to specify a real minimum
>   rather than making the minimum be whatever the default is.
> - Removes the hardcoding of +2 (for the number of elements we use in
>   a sgl for cmd iu and rsp iu) and +1 (an additional entry to
>   compensate for nvme's reduction of io size based on a possible
>   partial page) logic in sg list initialization. In the case where
>   the +1 logic is referenced in host and target io checks, use the
>   values set in the transport template as that value was properly set.
>
> There can certainly be more done in this area and it will be addressed
> in a combined host/target driver effort.
>
> Signed-off-by: Dick Kennedy
> Signed-off-by: James Smart
> ---
>  drivers/scsi/lpfc/lpfc.h       |  1 +
>  drivers/scsi/lpfc/lpfc_attr.c  |  2 +-
>  drivers/scsi/lpfc/lpfc_init.c  | 19 ++++++++++++-------
>  drivers/scsi/lpfc/lpfc_nvme.c  |  3 ++-
>  drivers/scsi/lpfc/lpfc_nvmet.c |  2 +-
>  5 files changed, 19 insertions(+), 8 deletions(-)

Reviewed-by: Hannes Reinecke

Cheers,

Hannes
Re: [PATCH v2 14/17] lpfc: Fix crash during driver unload with running nvme traffic
On 11/10/2017 02:08 AM, James Smart wrote:
> When the driver is unloading, the nvme transport could be in the
> process of submitting new requests, will send abort requests to
> terminate associations, or may make LS-related requests.
> The driver's abort and request entry points currently are ignorant
> of the unloading state and are starting the requests even though
> the infrastructure to complete them continues to teardown.
>
> Change the entry points for new requests to check whether unloading
> and if so, reject the requests. Abort routines check unloading, and
> if so, noop the request. An abort is noop'd as the teardown paths
> are already aborting/terminating the io outstanding at the time the
> teardown initiated.
>
> Signed-off-by: Dick Kennedy
> Signed-off-by: James Smart
> ---
>  drivers/scsi/lpfc/lpfc_nvme.c  | 14 ++++++++++++++
>  drivers/scsi/lpfc/lpfc_nvmet.c | 11 +++++++++++
>  2 files changed, 25 insertions(+)

Reviewed-by: Hannes Reinecke

Cheers,

Hannes
Re: [PATCH v2 13/17] lpfc: Correct driver deregistrations with host nvme transport
On 11/10/2017 02:08 AM, James Smart wrote:
> The driver's interaction with the host nvme transport has been
> incorrect for a while. The driver did not wait for the unregister
> callbacks (it waited only 5 jiffies). Thus the driver may remove
> objects that may be referenced by subsequent abort commands from
> the transport, and the actual unregister callback was effectively
> a noop. This was especially problematic if the driver was unloaded.
>
> The driver now waits for the unregister callbacks, as it should,
> before continuing with teardown.
>
> Signed-off-by: Dick Kennedy
> Signed-off-by: James Smart
> ---
>  drivers/scsi/lpfc/lpfc_disc.h |   2 +
>  drivers/scsi/lpfc/lpfc_nvme.c | 113 ++++++++++++++++++++++++++++++++--
>  drivers/scsi/lpfc/lpfc_nvme.h |   2 +
>  3 files changed, 113 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/scsi/lpfc/lpfc_disc.h b/drivers/scsi/lpfc/lpfc_disc.h
> index f9a566eaef04..5a7547f9d8d8 100644
> --- a/drivers/scsi/lpfc/lpfc_disc.h
> +++ b/drivers/scsi/lpfc/lpfc_disc.h
> @@ -134,6 +134,8 @@ struct lpfc_nodelist {
> 	struct lpfc_scsicmd_bkt *lat_data;	/* Latency data */
> 	uint32_t fc4_prli_sent;
> 	uint32_t upcall_flags;
> +#define NLP_WAIT_FOR_UNREG	0x1
> +
> 	uint32_t nvme_fb_size;	/* NVME target's supported byte cnt */
> #define NVME_FB_BIT_SHIFT	9	/* PRLI Rsp first burst in 512B units. */
> };
> diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
> index d3ada630b427..12d09a6a4563 100644
> --- a/drivers/scsi/lpfc/lpfc_nvme.c
> +++ b/drivers/scsi/lpfc/lpfc_nvme.c
> @@ -154,6 +154,10 @@ lpfc_nvme_localport_delete(struct nvme_fc_local_port *localport)
> {
> 	struct lpfc_nvme_lport *lport = localport->private;
>
> +	lpfc_printf_vlog(lport->vport, KERN_INFO, LOG_NVME,
> +			 "6173 localport %p delete complete\n",
> +			 lport);
> +
> 	/* release any threads waiting for the unreg to complete */
> 	complete(&lport->lport_unreg_done);
> }
> @@ -946,10 +950,19 @@ lpfc_nvme_io_cmd_wqe_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pwqeIn,
> 	freqpriv->nvme_buf = NULL;
>
> 	/* NVME targets need completion held off until the abort exchange
> -	 * completes.
> +	 * completes unless the NVME Rport is getting unregistered.
> 	 */
> -	if (!(lpfc_ncmd->flags & LPFC_SBUF_XBUSY))
> +	if (!(lpfc_ncmd->flags & LPFC_SBUF_XBUSY) ||
> +	    ndlp->upcall_flags & NLP_WAIT_FOR_UNREG) {
> +		/* Clear the XBUSY flag to prevent double completions.
> +		 * The nvme rport is getting unregistered and there is
> +		 * no need to defer the IO.
> +		 */
> +		if (lpfc_ncmd->flags & LPFC_SBUF_XBUSY)
> +			lpfc_ncmd->flags &= ~LPFC_SBUF_XBUSY;
> +
> 		nCmd->done(nCmd);
> +	}
>
> 	spin_lock_irqsave(&phba->hbalock, flags);
> 	lpfc_ncmd->nrport = NULL;
> @@ -2234,6 +2247,47 @@ lpfc_nvme_create_localport(struct lpfc_vport *vport)
> 	return ret;
> }
>
> +/* lpfc_nvme_lport_unreg_wait - Wait for the host to complete an lport unreg.
> + *
> + * The driver has to wait for the host nvme transport to callback
> + * indicating the localport has successfully unregistered all
> + * resources. Since this is an uninterruptible wait, loop every ten
> + * seconds and print a message indicating no progress.
> + *
> + * An uninterruptible wait is used because of the risk of transport-to-
> + * driver state mismatch.
> + */
> +void
> +lpfc_nvme_lport_unreg_wait(struct lpfc_vport *vport,
> +			   struct lpfc_nvme_lport *lport)
> +{
> +#if (IS_ENABLED(CONFIG_NVME_FC))
> +	u32 wait_tmo;
> +	int ret;
> +
> +	/* Host transport has to clean up and confirm requiring an indefinite
> +	 * wait. Print a message if a 10 second wait expires and renew the
> +	 * wait. This is unexpected.
> +	 */
> +	wait_tmo = msecs_to_jiffies(LPFC_NVME_WAIT_TMO * 1000);
> +	while (true) {
> +		ret = wait_for_completion_timeout(&lport->lport_unreg_done,
> +						  wait_tmo);
> +		if (unlikely(!ret)) {
> +			lpfc_printf_vlog(vport, KERN_ERR, LOG_NVME_IOERR,
> +					 "6176 Lport %p Localport %p wait "
> +					 "timed out. Renewing.\n",
> +					 lport, vport->localport);
> +			continue;
> +		}
> +		break;
> +	}
> +	lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_IOERR,
> +			 "6177 Lport %p Localport %p Complete Success\n",
> +			 lport, vport->localport);
> +#endif
> +}
> +
> /**
>  * lpfc_nvme_destroy_localport - Destroy lpfc_nvme bound to nvme transport.
>  * @pnvme: pointer to lpfc nvme data structure.
> @@ -2268,7 +2322,11 @@
Re: [PATCH v2 12/17] lpfc: correct port registrations with nvme_fc
On 11/10/2017 02:08 AM, James Smart wrote:
> The driver currently registers any remote port that has NVME support.
> It should only be registering target ports.
>
> Register only target ports.
>
> Signed-off-by: Dick Kennedy
> Signed-off-by: James Smart
> ---
>  drivers/scsi/lpfc/lpfc_hbadisc.c | 20 ++++++++++++--------
>  drivers/scsi/lpfc/lpfc_nvme.c    |  3 ++-
>  2 files changed, 14 insertions(+), 9 deletions(-)

Reviewed-by: Hannes Reinecke

Cheers,

Hannes
Re: [PATCH v2 11/17] lpfc: Linux LPFC driver does not process all RSCNs
On 11/10/2017 02:08 AM, James Smart wrote:
> During RSCN storms, the driver does not rediscover some targets.
> The driver marks some RSCNs as to be handled after the ones it's
> working on. The driver missed processing some deferred RSCNs.
>
> Move where the driver checks for deferred RSCNs and initiate
> deferred RSCN handling if the flag was set. Also revise nport state
> within the RSCN confirm routine. Add some state data to a possible
> debug print to aid future debugging.
>
> Signed-off-by: Dick Kennedy
> Signed-off-by: James Smart
> ---
>  drivers/scsi/lpfc/lpfc_ct.c      | 19 +++++++++++++++++++
>  drivers/scsi/lpfc/lpfc_els.c     |  4 +---
>  drivers/scsi/lpfc/lpfc_hbadisc.c |  9 +++++++--
>  3 files changed, 27 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/scsi/lpfc/lpfc_ct.c b/drivers/scsi/lpfc/lpfc_ct.c
> index 33417681f5d4..0990f81524cd 100644
> --- a/drivers/scsi/lpfc/lpfc_ct.c
> +++ b/drivers/scsi/lpfc/lpfc_ct.c
> @@ -685,6 +685,25 @@ lpfc_cmpl_ct_cmd_gid_ft(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
> 		lpfc_els_flush_rscn(vport);
> 		goto out;
> 	}
> +
> +	spin_lock_irq(shost->host_lock);
> +	if (vport->fc_flag & FC_RSCN_DEFERRED) {
> +		vport->fc_flag &= ~FC_RSCN_DEFERRED;
> +		spin_unlock_irq(shost->host_lock);
> +
> +		/*
> +		 * Skip processing the NS response
> +		 * Re-issue the NS cmd
> +		 */
> +		lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
> +				 "0151 Process Deferred RSCN Data: x%x x%x\n",
> +				 vport->fc_flag, vport->fc_rscn_id_cnt);
> +		lpfc_els_handle_rscn(vport);
> +
> +		goto out;
> +	}
> +	spin_unlock_irq(shost->host_lock);
> +
> 	if (irsp->ulpStatus) {
> 		/* Check for retry */
> 		if (vport->fc_ns_retry < LPFC_MAX_NS_RETRY) {
> diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
> index 911066c9612d..71ec580f46a3 100644
> --- a/drivers/scsi/lpfc/lpfc_els.c
> +++ b/drivers/scsi/lpfc/lpfc_els.c
> @@ -1675,6 +1675,7 @@ lpfc_plogi_confirm_nport(struct lpfc_hba *phba, uint32_t *prsp,
>
> 	/* Two ndlps cannot have the same did on the nodelist */
> 	ndlp->nlp_DID = keepDID;
> +	lpfc_nlp_set_state(vport, ndlp, keep_nlp_state);
> 	if (phba->sli_rev == LPFC_SLI_REV4 &&
> 	    active_rrqs_xri_bitmap)
> 		memcpy(ndlp->active_rrqs_xri_bitmap,
> @@ -6177,9 +6178,6 @@ lpfc_els_rcv_rscn(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
> 		lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL);
> 		/* send RECOVERY event for ALL nodes that match RSCN payload */
> 		lpfc_rscn_recovery_check(vport);
> -		spin_lock_irq(shost->host_lock);
> -		vport->fc_flag &= ~FC_RSCN_DEFERRED;
> -		spin_unlock_irq(shost->host_lock);
> 		return 0;
> 	}
> 	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
> diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
> index 3468257bda02..4577330313c0 100644
> --- a/drivers/scsi/lpfc/lpfc_hbadisc.c
> +++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
> @@ -5832,14 +5832,19 @@ static struct lpfc_nodelist *
> __lpfc_find_node(struct lpfc_vport *vport, node_filter filter, void *param)
> {
> 	struct lpfc_nodelist *ndlp;
> +	uint32_t data1;
>
> 	list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
> 		if (filter(ndlp, param)) {
> +			data1 = (((uint32_t) ndlp->nlp_state << 24) |
> +				 ((uint32_t) ndlp->nlp_xri << 16) |
> +				 ((uint32_t) ndlp->nlp_type << 8) |
> +				 ((uint32_t) ndlp->nlp_rpi & 0xff));
> 			lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE,
> 					 "3185 FIND node filter %p DID "
> -					 "Data: x%p x%x x%x\n",
> +					 "Data: x%p x%x x%x x%x\n",
> 					 filter, ndlp, ndlp->nlp_DID,
> -					 ndlp->nlp_flag);
> +					 ndlp->nlp_flag, data1);
> 			return ndlp;
> 		}
> 	}

Where _is_ the point of that?
Please use individual entries for the variables, and don't shift them
atop some random variable just to save some coding.

Cheers,

Hannes
Re: [PATCH v2 10/17] lpfc: Fix ndlp ref count for pt2pt mode issue RSCN
On 11/10/2017 02:08 AM, James Smart wrote:
> The pt2pt ndlp reference count prematurely goes to 0. A reference was
> being removed that should only be removed if connected to a switch,
> not if in point-to-point mode.
>
> Add a mode check before the reference remove.
>
> Signed-off-by: Dick Kennedy
> Signed-off-by: James Smart
> ---
>  drivers/scsi/lpfc/lpfc_els.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
> index 532cd4b49c5d..911066c9612d 100644
> --- a/drivers/scsi/lpfc/lpfc_els.c
> +++ b/drivers/scsi/lpfc/lpfc_els.c
> @@ -2956,8 +2956,8 @@ lpfc_issue_els_scr(struct lpfc_vport *vport, uint32_t nportid, uint8_t retry)
>  	/* This will cause the callback-function lpfc_cmpl_els_cmd to
>  	 * trigger the release of node.
>  	 */
> -
> -	lpfc_nlp_put(ndlp);
> +	if (!(vport->fc_flag & FC_PT2PT))
> +		lpfc_nlp_put(ndlp);
>  	return 0;
>  }
>

Reviewed-by: Hannes Reinecke

Cheers,

Hannes
Re: [PATCH v2 01/17] lpfc: FLOGI failures are reported when connected to a private loop.
On 11/10/2017 02:08 AM, James Smart wrote:
> When the HBA is connected to a private loop, the driver reports a
> FLOGI loop-open failure as a functional error. This is an expected
> condition.
>
> Mark loop-open failure as a warning instead of an error.
>
> Signed-off-by: Dick Kennedy
> Signed-off-by: James Smart
>
> ---
> v2:
>  Reworked the printf. v1 printed two messages if not a loop-open
>  failure; this version prints a single message in that case. A single
>  message is still printed for a loop-open failure if warnings are
>  enabled.
> ---
>  drivers/scsi/lpfc/lpfc_els.c | 27 ++-
>  1 file changed, 14 insertions(+), 13 deletions(-)
>

Reviewed-by: Hannes Reinecke

Cheers,

Hannes
Re: [PATCH 10/17] lpfc: Fix ndlp ref count for pt2pt mode issue RSCN
On 11/03/2017 11:56 PM, James Smart wrote:
> The pt2pt ndlp reference count prematurely goes to 0. A reference was
> being removed that should only be removed if connected to a switch,
> not if in point-to-point mode.
>
> Add a mode check before the reference remove.
>
> Signed-off-by: Dick Kennedy
> Signed-off-by: James Smart
> ---
>  drivers/scsi/lpfc/lpfc_els.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
> index 95872a0329ad..a708d80cb609 100644
> --- a/drivers/scsi/lpfc/lpfc_els.c
> +++ b/drivers/scsi/lpfc/lpfc_els.c
> @@ -2962,8 +2962,8 @@ lpfc_issue_els_scr(struct lpfc_vport *vport, uint32_t nportid, uint8_t retry)
>  	/* This will cause the callback-function lpfc_cmpl_els_cmd to
>  	 * trigger the release of node.
>  	 */
> -
> -	lpfc_nlp_put(ndlp);
> +	if (!(vport->fc_flag & FC_PT2PT))
> +		lpfc_nlp_put(ndlp);
>  	return 0;
>  }
>

Reviewed-by: Hannes Reinecke

Cheers,

Hannes
Re: [PATCH V10 1/4] dma-mapping: Rework dma_get_cache_alignment()
Please send the scsi fixes on their own for now so that the rework can go into 4.16. If you don't want to do it, I'll do it myself and send them to Martin; you can then rebase the dma-mapping and mips work on top of that.