Re: [PATCH 00/10] mpt3sas: full mq support

2017-02-09 Thread Hannes Reinecke
On 02/10/2017 05:43 AM, Sreekanth Reddy wrote:
> On Thu, Feb 9, 2017 at 6:42 PM, Hannes Reinecke  wrote:
>> On 02/09/2017 02:03 PM, Sreekanth Reddy wrote:
[ .. ]
>>>
>>>
>>> Hannes,
>>>
>>> I have created a md raid0 with 4 SAS SSD drives using below command,
>>> #mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdg /dev/sdh
>>> /dev/sdi /dev/sdj
>>>
>>> And here is 'mdadm --detail /dev/md0' command output,
>>> --
>>> /dev/md0:
>>> Version : 1.2
>>>   Creation Time : Thu Feb  9 14:38:47 2017
>>>  Raid Level : raid0
>>>  Array Size : 780918784 (744.74 GiB 799.66 GB)
>>>Raid Devices : 4
>>>   Total Devices : 4
>>> Persistence : Superblock is persistent
>>>
>>> Update Time : Thu Feb  9 14:38:47 2017
>>>   State : clean
>>>  Active Devices : 4
>>> Working Devices : 4
>>>  Failed Devices : 0
>>>   Spare Devices : 0
>>>
>>>  Chunk Size : 512K
>>>
>>>Name : host_name
>>>UUID : b63f9da7:b7de9a25:6a46ca00:42214e22
>>>  Events : 0
>>>
>>> Number   Major   Minor   RaidDevice State
>>>0   8   960  active sync   /dev/sdg
>>>1   8  1121  active sync   /dev/sdh
>>>2   8  1442  active sync   /dev/sdj
>>>3   8  1283  active sync   /dev/sdi
>>> --
>>>
>>> Then I have used the below fio profile to run 4K sequential read operations
>>> with the nr_hw_queues=1 driver and with the nr_hw_queues=24 driver (as my
>>> system has two numa nodes, each with 12 cpus).
>>> -
>>> [global]
>>> ioengine=libaio
>>> group_reporting
>>> direct=1
>>> rw=read
>>> bs=4k
>>> allow_mounted_write=0
>>> iodepth=128
>>> runtime=150s
>>>
>>> [job1]
>>> filename=/dev/md0
>>> -
>>>
>>> Here are the fio results when nr_hw_queues=1 (i.e. single request
>>> queue) with various number of job counts
>>> 1JOB 4k read  : io=213268MB, bw=1421.8MB/s, iops=363975, runt=150001msec
>>> 2JOBs 4k read : io=309605MB, bw=2064.2MB/s, iops=528389, runt=150001msec
>>> 4JOBs 4k read : io=281001MB, bw=1873.4MB/s, iops=479569, runt=150002msec
>>> 8JOBs 4k read : io=236297MB, bw=1575.2MB/s, iops=403236, runt=150016msec
>>>
>>> Here are the fio results when nr_hw_queues=24 (i.e. multiple request
>>> queue) with various number of job counts
>>> 1JOB 4k read   : io=95194MB, bw=649852KB/s, iops=162463, runt=150001msec
>>> 2JOBs 4k read : io=189343MB, bw=1262.3MB/s, iops=323142, runt=150001msec
>>> 4JOBs 4k read : io=314832MB, bw=2098.9MB/s, iops=537309, runt=150001msec
>>> 8JOBs 4k read : io=277015MB, bw=1846.8MB/s, iops=472769, runt=150001msec
>>>
>>> Here we can see that at lower job counts, the single request
>>> queue (nr_hw_queues=1) gives more IOPs than the multi request
>>> queues (nr_hw_queues=24).
>>>
>>> Can you please share your fio profile, so that I can try the same thing on
>>> my system?
>>>
>> Have you tried with the latest git update from Jens for-4.11/block (or
>> for-4.11/next) branch?
> 
> I am using below git repo,
> 
> https://git.kernel.org/cgit/linux/kernel/git/mkp/scsi.git/log/?h=4.11/scsi-queue
> 
> Today I will try with Jens for-4.11/block.
> 
By all means, do.

>> I've found that using the mq-deadline scheduler has a noticeable
>> performance boost.
>>
>> The fio job I'm using is essentially the same; you should just make sure
>> to specify a 'numjobs=' statement in there.
>> Otherwise fio will just use a single CPU, which of course leads to
>> averse effects in the multiqueue case.
> 
> Yes, I am providing 'numjobs=' on the fio command line as shown below,
> 
> # fio md_fio_profile --numjobs=8 --output=fio_results.txt
> 
Still, it looks as if you'd be using fewer jobs than you have CPUs.
Which means you'll be running into a tag starvation scenario on those
CPUs, especially for the small blocksizes.
What are the results if you set 'numjobs' to the number of CPUs?
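As a concrete sketch, the profile above with the job count pinned to the CPU
count would look like this (numjobs=24 is a placeholder for this particular
two-socket, 24-CPU box; substitute your own CPU count):

```ini
[global]
ioengine=libaio
group_reporting
direct=1
rw=read
bs=4k
iodepth=128
runtime=150s
# one fio job per CPU so every CPU has its own submission context;
# with fewer jobs than CPUs, some hardware queues sit idle while
# the active ones fight over tags, especially at small block sizes
numjobs=24

[job1]
filename=/dev/md0
```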

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
h...@suse.de   +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)


Re: [PATCH v2 08/39] megaraid_sas: megasas_get_request_descriptor always return valid desc

2017-02-09 Thread Martin K. Petersen
> "Shivasharan" == Shivasharan S  
> writes:

Shivasharan> fix in v2 - split patches into two.  discussed below
Shivasharan> http://marc.info/?l=linux-scsi&m=148638999110404&w=2

^^^ Comments like this need to go after the "---" separator so they
don't show up in the commit message.
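For reference, the layout looks like this: anything between the "---"
separator and the diffstat is discarded by git-am, so version notes placed
there never reach the commit message (schematic example, not the actual
submission):

```
megaraid_sas: megasas_get_request_descriptor always return valid desc

With fusion controllers, if there is a valid message frame available,
we are guaranteed to get a corresponding request descriptor.

Signed-off-by: Shivasharan S
---
v2: split the patch into two, as discussed on the list.

 drivers/scsi/megaraid/megaraid_sas_fusion.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)
```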

Shivasharan> No functional change. Code clean up. Removing error code
Shivasharan> which is not valid scenario.  In
Shivasharan> megasas_get_request_descriptor we can remove the error
Shivasharan> handling which is not required.  With fusion controllers,
Shivasharan> if there is a valid message frame available, we are
Shivasharan> guaranteed to get a corresponding request descriptor.

-- 
Martin K. Petersen  Oracle Linux Engineering


Re: [PATCH 00/10] mpt3sas: full mq support

2017-02-09 Thread Sreekanth Reddy
On Thu, Feb 9, 2017 at 6:42 PM, Hannes Reinecke  wrote:
> On 02/09/2017 02:03 PM, Sreekanth Reddy wrote:
>> On Wed, Feb 1, 2017 at 1:13 PM, Hannes Reinecke  wrote:
>>>
>>> On 02/01/2017 08:07 AM, Kashyap Desai wrote:
>
> -Original Message-
> From: Hannes Reinecke [mailto:h...@suse.de]
> Sent: Wednesday, February 01, 2017 12:21 PM
> To: Kashyap Desai; Christoph Hellwig
> Cc: Martin K. Petersen; James Bottomley; linux-scsi@vger.kernel.org;
> Sathya
> Prakash Veerichetty; PDL-MPT-FUSIONLINUX; Sreekanth Reddy
> Subject: Re: [PATCH 00/10] mpt3sas: full mq support
>
> On 01/31/2017 06:54 PM, Kashyap Desai wrote:
>>>
>>> -Original Message-
>>> From: Hannes Reinecke [mailto:h...@suse.de]
>>> Sent: Tuesday, January 31, 2017 4:47 PM
>>> To: Christoph Hellwig
>>> Cc: Martin K. Petersen; James Bottomley; linux-scsi@vger.kernel.org;
>>
>> Sathya
>>>
>>> Prakash; Kashyap Desai; mpt-fusionlinux@broadcom.com
>>> Subject: Re: [PATCH 00/10] mpt3sas: full mq support
>>>
>>> On 01/31/2017 11:02 AM, Christoph Hellwig wrote:

 On Tue, Jan 31, 2017 at 10:25:50AM +0100, Hannes Reinecke wrote:
>
> Hi all,
>
> this is a patchset to enable full multiqueue support for the
> mpt3sas
>>>
>>> driver.
>
> While the HBA only has a single mailbox register for submitting
> commands, it does have individual receive queues per MSI-X
> interrupt and as such does benefit from converting it to full
> multiqueue
>>
>> support.


 Explanation and numbers on why this would be beneficial, please.
 We should not need multiple submissions queues for a single register
 to benefit from multiple completion queues.

>>> Well, the actual throughput very strongly depends on the blk-mq-sched
>>> patches from Jens.
>>> As this is barely finished I didn't post any numbers yet.
>>>
>>> However:
>>> With multiqueue support:
>>> 4k seq read : io=60573MB, bw=1009.2MB/s, iops=258353, runt=
>
> 60021msec
>>>
>>> With scsi-mq on 1 queue:
>>> 4k seq read : io=17369MB, bw=296291KB/s, iops=74072, runt= 60028msec
>>> So yes, there _is_ a benefit.
>>
>>
>> Hannes,
>>
>> I have created a md raid0 with 4 SAS SSD drives using below command,
>> #mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdg /dev/sdh
>> /dev/sdi /dev/sdj
>>
>> And here is 'mdadm --detail /dev/md0' command output,
>> --
>> /dev/md0:
>> Version : 1.2
>>   Creation Time : Thu Feb  9 14:38:47 2017
>>  Raid Level : raid0
>>  Array Size : 780918784 (744.74 GiB 799.66 GB)
>>Raid Devices : 4
>>   Total Devices : 4
>> Persistence : Superblock is persistent
>>
>> Update Time : Thu Feb  9 14:38:47 2017
>>   State : clean
>>  Active Devices : 4
>> Working Devices : 4
>>  Failed Devices : 0
>>   Spare Devices : 0
>>
>>  Chunk Size : 512K
>>
>>Name : host_name
>>UUID : b63f9da7:b7de9a25:6a46ca00:42214e22
>>  Events : 0
>>
>> Number   Major   Minor   RaidDevice State
>>0   8   960  active sync   /dev/sdg
>>1   8  1121  active sync   /dev/sdh
>>2   8  1442  active sync   /dev/sdj
>>3   8  1283  active sync   /dev/sdi
>> --
>>
>> Then I have used below fio profile to run 4K sequence read operations
>> with nr_hw_queues=1 driver and with nr_hw_queues=24 driver (as my
>> system has two numa node and each with 12 cpus).
>> -
>> [global]
>> ioengine=libaio
>> group_reporting
>> direct=1
>> rw=read
>> bs=4k
>> allow_mounted_write=0
>> iodepth=128
>> runtime=150s
>>
>> [job1]
>> filename=/dev/md0
>> -
>>
>> Here are the fio results when nr_hw_queues=1 (i.e. single request
>> queue) with various number of job counts
>> 1JOB 4k read  : io=213268MB, bw=1421.8MB/s, iops=363975, runt=150001msec
>> 2JOBs 4k read : io=309605MB, bw=2064.2MB/s, iops=528389, runt=150001msec
>> 4JOBs 4k read : io=281001MB, bw=1873.4MB/s, iops=479569, runt=150002msec
>> 8JOBs 4k read : io=236297MB, bw=1575.2MB/s, iops=403236, runt=150016msec
>>
>> Here are the fio results when nr_hw_queues=24 (i.e. multiple request
>> queue) with various number of job counts
>> 1JOB 4k read   : io=95194MB, bw=649852KB/s, iops=162463, runt=150001msec
>> 2JOBs 4k read : io=189343MB, bw=1262.3MB/s, iops=323142, runt=150001msec
>> 4JOBs 4k read : io=314832MB, bw=2098.9MB/s, iops=537309, 

Re: [lkp-robot] [scsi, block] 0dba1314d4: WARNING:at_fs/sysfs/dir.c:#sysfs_warn_dup

2017-02-09 Thread Dan Williams
On Wed, Feb 8, 2017 at 4:08 PM, James Bottomley
 wrote:
> On Mon, 2017-02-06 at 21:42 -0800, Dan Williams wrote:
[..]
>> ...but it reproduces on current mainline with the same config. I
>> haven't spotted what makes scsi_debug behave like this.
>
> Looking at the config, it's a static debug with report luns enabled.
>  Is it as simple as the fact that we probe lun 0 manually to see if the
> target exists, but then don't account for having already done so, so if
> it turns up again in the report lun scan we'll probe it again, leading
> to a double add?  If that theory is correct, this may be the fix
> (compile tested only).
>
> James
>
> ---
>
> diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
> index 6f7128f..ba4be08 100644
> --- a/drivers/scsi/scsi_scan.c
> +++ b/drivers/scsi/scsi_scan.c
> @@ -1441,6 +1441,10 @@ static int scsi_report_lun_scan(struct scsi_target 
> *starget, int bflags,
> for (lunp = &lun_data[1]; lunp <= &lun_data[num_luns]; lunp++) {
> lun = scsilun_to_int(lunp);
>
> +   if (lun == 0)
> +   /* already scanned LUN 0 */
> +   continue;
> +
> if (lun > sdev->host->max_lun) {
> sdev_printk(KERN_WARNING, sdev,
> "lun%llu has a LUN larger than"

I gave this a shot on top of linux-next, but still hit the failure.
Log attached.




Re: [PATCH] zfcp: fix use-after-free by not tracing WKA port open/close on failed send

2017-02-09 Thread Martin K. Petersen
> "Steffen" == Steffen Maier  writes:

Steffen> Rather than relying on req being NULL (or ERR_PTR) for all
Steffen> cases where we don't want to trace or should not trace, simply
Steffen> check retval which is unconditionally initialized with -EIO !=
Steffen> 0 and it can only become 0 on successful retval =
Steffen> zfcp_fsf_req_send(req).  With that we can also remove the then
Steffen> again unnecessary unconditional initialization of req which was
Steffen> introduced with that earlier commit.

Applied to 4.10/scsi-fixes.

-- 
Martin K. Petersen  Oracle Linux Engineering


Re: [PATCH] aacraid: Fix INTx/MSI-x issue with older controllers

2017-02-09 Thread Martin K. Petersen
> "Dave" == Dave Carroll  writes:

Dave> commit 78cbccd3bd683 ("[PATCH] aacraid: Fix for KDUMP driver
Dave> hang") caused a problem on older controllers which do not support
Dave> MSI-x (namely ASR3405,ASR3805). This patch conditionalizes the
Dave> previous patch to controllers which support MSI-x

Applied to 4.10/scsi-fixes.

-- 
Martin K. Petersen  Oracle Linux Engineering


Re: [PATCH V2] mpt3sas: disable ASPM for MPI2 controllers

2017-02-09 Thread Martin K. Petersen
> "ojab" == ojab   writes:

>> Acked-by: Sreekanth Reddy 

Applied to 4.10/scsi-fixes.

ojab> Is it too late for 4.10.0?

We'll see. Worst case I'll shuffle it over to the 4.11 tree.

-- 
Martin K. Petersen  Oracle Linux Engineering


Re: [PATCH v2 21/39] megaraid_sas: big endian support changes

2017-02-09 Thread Martin K. Petersen
> "Kashyap" == Kashyap Desai  writes:

Kashyap,

Kashyap> We will fix this patch and resend. Fixing and resending only
Kashyap> this patch works fine with the complete series (no hunk failures
Kashyap> observed), so we are just going to push the one patch with the
Kashyap> title below.

Please just send a complete v3. Many patches in v2 have changelog
entries in the body of the commit message instead of after the ---
separator and I am not going to fix all those up by hand.

-- 
Martin K. Petersen  Oracle Linux Engineering


Re: [PATCH -next] scsi: qedi: Fix possible memory leak in qedi_iscsi_update_conn()

2017-02-09 Thread Martin K. Petersen
> "Wei" == Wei Yongjun  writes:

Wei> 'conn_info' is malloced in qedi_iscsi_update_conn() and should be
Wei> freed before leaving from the error handling cases, otherwise it
Wei> will cause memory leak.

Applied to 4.11/scsi-queue.

-- 
Martin K. Petersen  Oracle Linux Engineering


Re: [PATCH v4] scsi/sd: release scan_mutex during sync_cache and start_stop

2017-02-09 Thread Song Liu
Hi Christoph, 

IMHO, scan_mutex may still be needed for other parts of scsi_remove_device().

For example, device_del() in sd_remove() and scsi_sysfs_add_devices() in 
scsi_finish_async_scan() should not run in parallel? 

On the other hand, other parts of sd_remove(), including sd_shutdown(),
do not touch shared resources.  So it is safe to run these sections without
holding scan_mutex.

Another (and maybe better) solution is to move mutex_lock/unlock() of 
scan_mutex to each _remove function. So we only hold the lock when accessing
scsi_host related data. 

Thanks,
Song


> On Feb 9, 2017, at 12:51 AM, Christoph Hellwig  wrote:
> 
> And what is the lock protecting if we can just release and reacquire it
> safely?  Probably nothing, but someone needs to do the audit carefully.
> 
> Once that is done we can just apply a patch like the one below and
> be done with it:
> 
> ---
> From 47572d7f0758cc0cbd979b1a2c3791f3b94c566e Mon Sep 17 00:00:00 2001
> From: Christoph Hellwig 
> Date: Thu, 9 Feb 2017 09:49:04 +0100
> Subject: scsi: remove scan_mutex
> 
> All our lookup structures are properly locked these days, there shouldn't
> be any need to serialize the high-level scan process.
> 
> Signed-off-by: Christoph Hellwig 
> ---
> drivers/scsi/scsi_scan.c | 15 ---
> 1 file changed, 15 deletions(-)
> 
> diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
> index 6f7128f49c30..c92be1ad486c 100644
> --- a/drivers/scsi/scsi_scan.c
> +++ b/drivers/scsi/scsi_scan.c
> @@ -1490,7 +1490,6 @@ struct scsi_device *__scsi_add_device(struct Scsi_Host 
> *shost, uint channel,
>   return ERR_PTR(-ENOMEM);
>   scsi_autopm_get_target(starget);
> 
> - mutex_lock(&shost->scan_mutex);
>   if (!shost->async_scan)
>   scsi_complete_async_scans();
> 
> @@ -1498,7 +1497,6 @@ struct scsi_device *__scsi_add_device(struct Scsi_Host 
> *shost, uint channel,
>   scsi_probe_and_add_lun(starget, lun, NULL, &sdev, 1, hostdata);
>   scsi_autopm_put_host(shost);
>   }
> - mutex_unlock(&shost->scan_mutex);
>   scsi_autopm_put_target(starget);
>   /*
>* paired with scsi_alloc_target().  Target will be destroyed unless
> @@ -1629,7 +1627,6 @@ void scsi_scan_target(struct device *parent, unsigned 
> int channel,
>   strncmp(scsi_scan_type, "manual", 6) == 0)
>   return;
> 
> - mutex_lock(&shost->scan_mutex);
>   if (!shost->async_scan)
>   scsi_complete_async_scans();
> 
> @@ -1637,7 +1634,6 @@ void scsi_scan_target(struct device *parent, unsigned 
> int channel,
>   __scsi_scan_target(parent, channel, id, lun, rescan);
>   scsi_autopm_put_host(shost);
>   }
> - mutex_unlock(&shost->scan_mutex);
> }
> EXPORT_SYMBOL(scsi_scan_target);
> 
> @@ -1686,7 +1682,6 @@ int scsi_scan_host_selected(struct Scsi_Host *shost, 
> unsigned int channel,
>   ((lun != SCAN_WILD_CARD) && (lun >= shost->max_lun)))
>   return -EINVAL;
> 
> - mutex_lock(&shost->scan_mutex);
>   if (!shost->async_scan)
>   scsi_complete_async_scans();
> 
> @@ -1700,7 +1695,6 @@ int scsi_scan_host_selected(struct Scsi_Host *shost, 
> unsigned int channel,
>   scsi_scan_channel(shost, channel, id, lun, rescan);
>   scsi_autopm_put_host(shost);
>   }
> - mutex_unlock(&shost->scan_mutex);
> 
>   return 0;
> }
> @@ -1752,11 +1746,9 @@ static struct async_scan_data 
> *scsi_prep_async_scan(struct Scsi_Host *shost)
>   goto err;
>   init_completion(&data->prev_finished);
> 
> - mutex_lock(&shost->scan_mutex);
>   spin_lock_irqsave(shost->host_lock, flags);
>   shost->async_scan = 1;
>   spin_unlock_irqrestore(shost->host_lock, flags);
> - mutex_unlock(&shost->scan_mutex);
> 
>   spin_lock(&async_scan_lock);
>   if (list_empty(&scanning_hosts))
> @@ -1789,12 +1781,9 @@ static void scsi_finish_async_scan(struct 
> async_scan_data *data)
> 
>   shost = data->shost;
> 
> - mutex_lock(&shost->scan_mutex);
> -
>   if (!shost->async_scan) {
>   shost_printk(KERN_INFO, shost, "%s called twice\n", __func__);
>   dump_stack();
> - mutex_unlock(&shost->scan_mutex);
>   return;
>   }
> 
> @@ -1806,8 +1795,6 @@ static void scsi_finish_async_scan(struct 
> async_scan_data *data)
>   shost->async_scan = 0;
>   spin_unlock_irqrestore(shost->host_lock, flags);
> 
> - mutex_unlock(&shost->scan_mutex);
> -
>   spin_lock(&async_scan_lock);
>   list_del(&data->list);
>   if (!list_empty(&scanning_hosts)) {
> @@ -1915,7 +1902,6 @@ struct scsi_device *scsi_get_host_dev(struct Scsi_Host 
> *shost)
>   struct scsi_device *sdev = NULL;
>   struct scsi_target *starget;
> 
> - mutex_lock(&shost->scan_mutex);
>   if (!scsi_host_scan_allowed(shost))
>   goto out;
>   starget = scsi_alloc_target(&shost->shost_gendev, 0, shost->this_id);
> @@ -1929,7 +1915,6 @@ struct scsi_device 

Re: [PATCH v2 10/14] qla2xxx: Fix request queue corruption.

2017-02-09 Thread Tran, Quinn
Bart, thanks for reviewing.  Will clean it up.

Regards,
Quinn Tran

-Original Message-
From:  on behalf of Bart Van Assche 

Date: Wednesday, February 8, 2017 at 11:22 AM
To: "h...@infradead.org" , "Madhani, Himanshu" 
, target-devel , 
Nicholas Bellinger 
Cc: "linux-scsi@vger.kernel.org" , "Malavali, 
Giridhar" 
Subject: Re: [PATCH v2 10/14] qla2xxx: Fix request queue corruption.
Resent-From: 

On Fri, 2017-02-03 at 14:40 -0800, Himanshu Madhani wrote:
> From: Quinn Tran 
> 
> When FW notify driver or driver detects low FW resource,
> driver tries to send out Busy SCSI Status to tell Initiator
> side to back off. During the send process, the lock was not held.
> 
> Signed-off-by: Quinn Tran 
> Signed-off-by: Himanshu Madhani 
> ---
>  drivers/scsi/qla2xxx/qla_target.c | 12 +---
>  1 file changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/scsi/qla2xxx/qla_target.c 
b/drivers/scsi/qla2xxx/qla_target.c
> index b61cbb8..b5fb9c55 100644
> --- a/drivers/scsi/qla2xxx/qla_target.c
> +++ b/drivers/scsi/qla2xxx/qla_target.c
> @@ -5170,16 +5170,22 @@ static int __qlt_send_busy(struct scsi_qla_host 
*vha,
>  
>  static int
>  qlt_chk_qfull_thresh_hold(struct scsi_qla_host *vha,
> - struct atio_from_isp *atio)
> + struct atio_from_isp *atio, uint8_t ha_locked)
>  {
>   struct qla_hw_data *ha = vha->hw;
>   uint16_t status;
> + unsigned long flags;
>  
>   if (ha->tgt.num_pend_cmds < Q_FULL_THRESH_HOLD(ha))
>   return 0;
>  
> + if (!ha_locked)
> + spin_lock_irqsave(&ha->hardware_lock, flags);
>   status = temp_sam_status;
>   qlt_send_busy(vha, atio, status);
> + if (!ha_locked)
> + spin_unlock_irqrestore(&ha->hardware_lock, flags);
> +
>   return 1;
>  }
>  
> @@ -5224,7 +5230,7 @@ static void qlt_24xx_atio_pkt(struct scsi_qla_host 
*vha,
>  
>  
>   if (likely(atio->u.isp24.fcp_cmnd.task_mgmt_flags == 0)) {
> - rc = qlt_chk_qfull_thresh_hold(vha, atio);
> + rc = qlt_chk_qfull_thresh_hold(vha, atio, ha_locked);
>   if (rc != 0) {
>   tgt->atio_irq_cmd_count--;
>   return;
> @@ -5347,7 +5353,7 @@ static void qlt_response_pkt(struct scsi_qla_host 
*vha, response_t *pkt)
>   break;
>   }
>  
> - rc = qlt_chk_qfull_thresh_hold(vha, atio);
> + rc = qlt_chk_qfull_thresh_hold(vha, atio, 1);
>   if (rc != 0) {
>   tgt->irq_cmd_count--;
>   return;

Hello Quinn,

Please consider to use bool instead of uint8_t for ha_locked and true /
false instead of 1 / 0.

Bart.




[GIT PULL] target fixes for v4.10

2017-02-09 Thread Nicholas A. Bellinger
Hello Linus,

The following target series for v4.10 contains fixes which address a few
long-standing bugs that DATERA's QA + automation teams have uncovered
while putting v4.1.y target code into production usage.

We've been running the top three in our nightly automated regression
runs for the last two months, and the COMPARE_AND_WRITE fix Mr. Gary Guo
has been manually verifying against a four node ESX cluster this past
week. 

Note all of them have CC'd stable tags.

Please go ahead and pull from:

  git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending.git master

This includes:

  - Fix a bug with ESX EXTENDED_COPY + SAM_STAT_RESERVATION_CONFLICT 
status, where target_core_xcopy.c logic was incorrectly returning
SAM_STAT_CHECK_CONDITION for all non SAM_STAT_GOOD cases.
(Nixon Vincent)
  - Fix a TMR LUN_RESET hung task bug while other in-flight TMRs are 
being aborted, before the new one had been dispatched into tmr_wq.
(Rob Millner)
  - Fix a long standing double free OOPs, where a dynamically generated
'demo-mode' NodeACL has multiple sessions associated with it, and
the /sys/kernel/config/target/$FABRIC/$WWN/ subsequently disables
demo-mode, but never converts the dynamic ACL into an explicit ACL.
(Rob Millner)
  - Fix a long standing reference leak with ESX VAAI COMPARE_AND_WRITE
when the second phase WRITE COMMIT command fails, resulting in
CHECK_CONDITION response never being sent and se_cmd->cmd_kref
never reaching zero.  (Gary Guo)

Beyond these items on v4.1.y we've reproduced, fixed, and run through
our regression test suite using iscsi-target exports, there are two
additional outstanding list items:

  - Remove a >= v4.2 RCU conversion BUG_ON that would trigger when 
dynamic node NodeACLs were being converted to explicit NodeACLs.  
The patch drops the BUG_ON to follow how pre RCU conversion
worked for this special case. (Benjamin Estrabaud)
  - Add ibmvscsis target_core_fabric_ops->max_data_sg_nent
assignment to match what IBM's Virtual SCSI hypervisor is
already enforcing at transport layer.  (Bryant Ly + Steven Royer).

Thank you,

--nab

Bryant G. Ly (1):
  ibmvscsis: Add SGL limit

Nicholas Bellinger (5):
  target: Don't BUG_ON during NodeACL dynamic -> explicit conversion
  target: Use correct SCSI status during EXTENDED_COPY exception
  target: Fix early transport_generic_handle_tmr abort scenario
  target: Fix multi-session dynamic se_node_acl double free OOPs
  target: Fix COMPARE_AND_WRITE ref leak for non GOOD status

 drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c |  1 +
 drivers/target/target_core_device.c  | 10 +++-
 drivers/target/target_core_sbc.c |  8 ++-
 drivers/target/target_core_transport.c   | 86 +---
 drivers/target/target_core_xcopy.c   |  2 +-
 include/target/target_core_base.h|  1 +
 6 files changed, 76 insertions(+), 32 deletions(-)




[PATCH] aacraid: Fix INTx/MSI-x issue with older controllers

2017-02-09 Thread Dave Carroll
commit 78cbccd3bd683 ("[PATCH] aacraid: Fix for KDUMP driver hang")
 
caused a problem on older controllers which do not support MSI-x (namely
ASR3405,ASR3805). This patch conditionalizes the previous patch to 
controllers which support MSI-x.

cc: sta...@vger.kernel.org
Fixes: 78cbccd3bd683c295a44af8050797dc4a41376ff
Reported-by: Arkadiusz Miskiewicz 
Signed-off-by: Dave Carroll 
Reviewed-by: Raghava Aditya Renukunta 
---
 drivers/scsi/aacraid/comminit.c | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/scsi/aacraid/comminit.c b/drivers/scsi/aacraid/comminit.c
index 4f56b10..d56df36 100644
--- a/drivers/scsi/aacraid/comminit.c
+++ b/drivers/scsi/aacraid/comminit.c
@@ -50,9 +50,13 @@ struct aac_common aac_config = {
 
 static inline int aac_is_msix_mode(struct aac_dev *dev)
 {
-   u32 status;
+   u32 status = 0;
 
-   status = src_readl(dev, MUnit.OMR);
+   if (dev->pdev->device == PMC_DEVICE_S6 ||
+   dev->pdev->device == PMC_DEVICE_S7 ||
+   dev->pdev->device == PMC_DEVICE_S8) {
+   status = src_readl(dev, MUnit.OMR);
+   }
return (status & AAC_INT_MODE_MSIX);
 }
 
-- 
2.8.4



RE: [PATCH v2 21/39] megaraid_sas: big endian support changes

2017-02-09 Thread Kashyap Desai
> +static inline void set_num_sge(struct RAID_CONTEXT_G35 rctx_g35,
> +u16 sge_count)
> +{
> + rctx_g35.u.bytes[0] = (u8)(sge_count & NUM_SGE_MASK_LOWER);
> + rctx_g35.u.bytes[1] |= (u8)((sge_count >> NUM_SGE_SHIFT_UPPER) &
> + NUM_SGE_MASK_UPPER);
> +}

This function and get_num_sge() below need a fix.  We are supposed to pass
a pointer to struct RAID_CONTEXT_G35 so that the setting is reflected in
the IO frame; otherwise it is only set in stack-local memory, which is not
the intent here. We will fix this patch and resend. Fixing and resending
only this patch works fine with the complete series (no hunk failures
observed), so we are just going to push the one patch with the title below.

[PATCH v2 21/39 RESEND] megaraid_sas: big endian support changes

> +
> +static inline u16 get_num_sge(struct RAID_CONTEXT_G35 rctx_g35) {
> + u16 sge_count;
> +
> + sge_count = (u16)(((rctx_g35.u.bytes[1] & NUM_SGE_MASK_UPPER)
> + << NUM_SGE_SHIFT_UPPER) | (rctx_g35.u.bytes[0]));
> + return sge_count;
> +}
> +
> +#define SET_STREAM_DETECTED(rctx_g35) \
> + (rctx_g35.u.bytes[1] |= STREAM_DETECT_MASK)
> +
> +#define CLEAR_STREAM_DETECTED(rctx_g35) \
> + (rctx_g35.u.bytes[1] &= ~(STREAM_DETECT_MASK))
> +
> +static inline bool is_stream_detected(struct RAID_CONTEXT_G35
> +*rctx_g35) {
> + return ((rctx_g35->u.bytes[1] & STREAM_DETECT_MASK)); }
> +
>  union RAID_CONTEXT_UNION {
>   struct RAID_CONTEXT raid_context;
>   struct RAID_CONTEXT_G35 raid_context_g35;
> --
> 2.8.3


RE: [PATCH v2 19/39] megaraid_sas: MR_TargetIdToLdGet u8 to u16 and avoid invalid raid-map access

2017-02-09 Thread Kashyap Desai
> Signed-off-by: Shivasharan S 
> Signed-off-by: Kashyap Desai 

In this patch series we are done with the review, but this particular patch
is missing a Reviewed-by tag.

Kashyap


Re: [PATCH net-next v2 00/12] net: dsa: remove unnecessary phy.h include

2017-02-09 Thread Kalle Valo
Florian Fainelli  writes:

>>> If not, for something like this it's a must:
>>>
>>> drivers/net/wireless/ath/wil6210/cfg80211.c:24:30: error: expected ‘)’ 
>>> before ‘bool’
>>>  module_param(disable_ap_sme, bool, 0444);
>>>   ^
>>> drivers/net/wireless/ath/wil6210/cfg80211.c:25:34: error: expected ‘)’ 
>>> before string constant
>>>  MODULE_PARM_DESC(disable_ap_sme, " let user space handle AP mode SME");
>>>   ^
>>> Like like that file needs linux/module.h included.
>> 
>> Johannes already fixed a similar (or same) problem in my tree:
>> 
>> wil6210: include moduleparam.h
>> 
>> https://git.kernel.org/cgit/linux/kernel/git/kvalo/wireless-drivers-next.git/commit/?id=949c2d0096753d518ef6e0bd8418c8086747196b
>> 
>> I'm planning to send you a pull request tomorrow which contains that
>> one.
>
> Thanks Kalle!
>
> David, can you hold on this series until Kalle's pull request gets
> submitted? Past this error, allmodconfig builds fine with this patch
> series (just tested). Thanks!

Just submitted the pull request:

https://patchwork.ozlabs.org/patch/726133/

-- 
Kalle Valo


Re: [PATCH v2 03/39] megaraid_sas: raid 1 fast path code optimize

2017-02-09 Thread Tomas Henzl
On 8.2.2017 19:51, Kashyap Desai wrote:
>>> +static inline void
>>> +megasas_complete_r1_command(struct megasas_instance *instance,
>>> +   struct megasas_cmd_fusion *cmd) {
>>> +   u8 *sense, status, ex_status;
>>> +   u32 data_length;
>>> +   u16 peer_smid;
>>> +   struct fusion_context *fusion;
>>> +   struct megasas_cmd_fusion *r1_cmd = NULL;
>>> +   struct scsi_cmnd *scmd_local = NULL;
>>> +   struct RAID_CONTEXT_G35 *rctx_g35;
>>> +
>>> +   rctx_g35 = &cmd->io_request->RaidContext.raid_context_g35;
>>> +   fusion = instance->ctrl_context;
>>> +   peer_smid = le16_to_cpu(rctx_g35->smid.peer_smid);
>>> +
>>> +   r1_cmd = fusion->cmd_list[peer_smid - 1];
>>> +   scmd_local = cmd->scmd;
>>> +   status = rctx_g35->status;
>>> +   ex_status = rctx_g35->ex_status;
>>> +   data_length = cmd->io_request->DataLength;
>>> +   sense = cmd->sense;
>>> +
>>> +   cmd->cmd_completed = true;
>> Please help me understand how this works
>> - there are two peer commands sent to the controller
>> - both are completed and the later calls scsi_done and returns both
> r1_cmd
>> + cmd
>> - if both commands can be completed at the same time, is it possible
> that
>> the
>>   above line is executed at the same moment for both completions ?
>> How is the code  protected against a double completion when both
>> completed commands see the peer cmd_completed as set ?
>
> Tomas, cmd and r1_cmd (part of the same Raid 1 FP) will always be completed
> on the same reply queue by firmware. That is one of the key requirements
> here for the raid 1 fast path.
> What you ask is possible if FW completes cmd and r1_cmd on different reply
> queue. If you notice when we clone r1_cmd, we also clone MSI-x index from
> parent command.
> So eventually, FW is aware of binding of both cmd and r1_cmd w.r.t reply
> queue index.

ok, thanks

>
> ` Kashyap
>
>>> +




Re: [PATCH v2 03/39] megaraid_sas: raid 1 fast path code optimize

2017-02-09 Thread Tomas Henzl
On 8.2.2017 10:28, Shivasharan S wrote:
> fix in v2 - ex_status and status was wrongly re-used in 
> megasas_complete_r1_command.
> discussed below -
> http://marc.info/?l=linux-scsi&m=148638763409385&w=2
>
>
> No functional change. Code refactor.
> Remove function megasas_fpio_to_ldio as we never need to convert fpio to
> ldio because of frame unavailability.
> Grab extra frame of raid 1 write fast path before it creates first frame as 
> Fast Path.
> Removed is_raid_1_fp_write flag as raid 1 write fast path command is decided 
> using r1_alt_dev_handle only.
> Move resetting megasas_cmd_fusion fields at common function 
> megasas_return_cmd_fusion.
>
>
> Signed-off-by: Shivasharan S 
> Signed-off-by: Kashyap Desai 
> Reviewed-by: Hannes Reinecke 

Reviewed-by: Tomas Henzl 

Tomas



Re: [PATCH 00/10] mpt3sas: full mq support

2017-02-09 Thread Hannes Reinecke
On 02/09/2017 02:03 PM, Sreekanth Reddy wrote:
> On Wed, Feb 1, 2017 at 1:13 PM, Hannes Reinecke  wrote:
>>
>> On 02/01/2017 08:07 AM, Kashyap Desai wrote:

 -Original Message-
 From: Hannes Reinecke [mailto:h...@suse.de]
 Sent: Wednesday, February 01, 2017 12:21 PM
 To: Kashyap Desai; Christoph Hellwig
 Cc: Martin K. Petersen; James Bottomley; linux-scsi@vger.kernel.org;
 Sathya
 Prakash Veerichetty; PDL-MPT-FUSIONLINUX; Sreekanth Reddy
 Subject: Re: [PATCH 00/10] mpt3sas: full mq support

 On 01/31/2017 06:54 PM, Kashyap Desai wrote:
>>
>> -Original Message-
>> From: Hannes Reinecke [mailto:h...@suse.de]
>> Sent: Tuesday, January 31, 2017 4:47 PM
>> To: Christoph Hellwig
>> Cc: Martin K. Petersen; James Bottomley; linux-scsi@vger.kernel.org;
>
> Sathya
>>
>> Prakash; Kashyap Desai; mpt-fusionlinux@broadcom.com
>> Subject: Re: [PATCH 00/10] mpt3sas: full mq support
>>
>> On 01/31/2017 11:02 AM, Christoph Hellwig wrote:
>>>
>>> On Tue, Jan 31, 2017 at 10:25:50AM +0100, Hannes Reinecke wrote:

 Hi all,

 this is a patchset to enable full multiqueue support for the
 mpt3sas
>>
>> driver.

 While the HBA only has a single mailbox register for submitting
 commands, it does have individual receive queues per MSI-X
 interrupt and as such does benefit from converting it to full
 multiqueue
>
> support.
>>>
>>>
>>> Explanation and numbers on why this would be beneficial, please.
>>> We should not need multiple submissions queues for a single register
>>> to benefit from multiple completion queues.
>>>
>> Well, the actual throughput very strongly depends on the blk-mq-sched
>> patches from Jens.
>> As this is barely finished I didn't post any numbers yet.
>>
>> However:
>> With multiqueue support:
>> 4k seq read : io=60573MB, bw=1009.2MB/s, iops=258353, runt=

 60021msec
>>
>> With scsi-mq on 1 queue:
>> 4k seq read : io=17369MB, bw=296291KB/s, iops=74072, runt= 60028msec
>> So yes, there _is_ a benefit.
> 
> 
> Hannes,
> 
> I have created a md raid0 with 4 SAS SSD drives using below command,
> #mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdg /dev/sdh
> /dev/sdi /dev/sdj
> 
> And here is 'mdadm --detail /dev/md0' command output,
> --
> /dev/md0:
> Version : 1.2
>   Creation Time : Thu Feb  9 14:38:47 2017
>  Raid Level : raid0
>  Array Size : 780918784 (744.74 GiB 799.66 GB)
>Raid Devices : 4
>   Total Devices : 4
> Persistence : Superblock is persistent
> 
> Update Time : Thu Feb  9 14:38:47 2017
>   State : clean
>  Active Devices : 4
> Working Devices : 4
>  Failed Devices : 0
>   Spare Devices : 0
> 
>  Chunk Size : 512K
> 
>Name : host_name
>UUID : b63f9da7:b7de9a25:6a46ca00:42214e22
>  Events : 0
> 
> Number   Major   Minor   RaidDevice State
>    0       8       96        0      active sync   /dev/sdg
>    1       8      112        1      active sync   /dev/sdh
>    2       8      144        2      active sync   /dev/sdj
>    3       8      128        3      active sync   /dev/sdi
> --
> 
> Then I have used below fio profile to run 4K sequence read operations
> with nr_hw_queues=1 driver and with nr_hw_queues=24 driver (as my
> system has two numa node and each with 12 cpus).
> -
> [global]
> ioengine=libaio
> group_reporting
> direct=1
> rw=read
> bs=4k
> allow_mounted_write=0
> iodepth=128
> runtime=150s
> 
> [job1]
> filename=/dev/md0
> -
> 
> Here are the fio results when nr_hw_queues=1 (i.e. single request
> queue) with various number of job counts
> 1JOB 4k read  : io=213268MB, bw=1421.8MB/s, iops=363975, runt=150001msec
> 2JOBs 4k read : io=309605MB, bw=2064.2MB/s, iops=528389, runt=150001msec
> 4JOBs 4k read : io=281001MB, bw=1873.4MB/s, iops=479569, runt=150002msec
> 8JOBs 4k read : io=236297MB, bw=1575.2MB/s, iops=403236, runt=150016msec
> 
> Here are the fio results when nr_hw_queues=24 (i.e. multiple request
> queue) with various number of job counts
> 1JOB 4k read   : io=95194MB, bw=649852KB/s, iops=162463, runt=150001msec
> 2JOBs 4k read : io=189343MB, bw=1262.3MB/s, iops=323142, runt=150001msec
> 4JOBs 4k read : io=314832MB, bw=2098.9MB/s, iops=537309, runt=150001msec
> 8JOBs 4k read : io=277015MB, bw=1846.8MB/s, iops=472769, runt=150001msec
> 
> Here we can see that at lower job counts the single request queue
> (nr_hw_queues=1) gives more IOPS than multiple request queues
> (nr_hw_queues=24).

Re: [PATCH 00/10] mpt3sas: full mq support

2017-02-09 Thread Sreekanth Reddy
On Wed, Feb 1, 2017 at 1:13 PM, Hannes Reinecke  wrote:
>
> On 02/01/2017 08:07 AM, Kashyap Desai wrote:
>>>
>>> -Original Message-
>>> From: Hannes Reinecke [mailto:h...@suse.de]
>>> Sent: Wednesday, February 01, 2017 12:21 PM
>>> To: Kashyap Desai; Christoph Hellwig
>>> Cc: Martin K. Petersen; James Bottomley; linux-scsi@vger.kernel.org;
>>> Sathya
>>> Prakash Veerichetty; PDL-MPT-FUSIONLINUX; Sreekanth Reddy
>>> Subject: Re: [PATCH 00/10] mpt3sas: full mq support
>>>
>>> On 01/31/2017 06:54 PM, Kashyap Desai wrote:
>
> -Original Message-
> From: Hannes Reinecke [mailto:h...@suse.de]
> Sent: Tuesday, January 31, 2017 4:47 PM
> To: Christoph Hellwig
> Cc: Martin K. Petersen; James Bottomley; linux-scsi@vger.kernel.org;

 Sathya
>
> Prakash; Kashyap Desai; mpt-fusionlinux@broadcom.com
> Subject: Re: [PATCH 00/10] mpt3sas: full mq support
>
> On 01/31/2017 11:02 AM, Christoph Hellwig wrote:
>>
>> On Tue, Jan 31, 2017 at 10:25:50AM +0100, Hannes Reinecke wrote:
>>>
>>> Hi all,
>>>
>>> this is a patchset to enable full multiqueue support for the
>>> mpt3sas
>
> driver.
>>>
>>> While the HBA only has a single mailbox register for submitting
>>> commands, it does have individual receive queues per MSI-X
>>> interrupt and as such does benefit from converting it to full
>>> multiqueue

 support.
>>
>>
>> Explanation and numbers on why this would be beneficial, please.
>> We should not need multiple submissions queues for a single register
>> to benefit from multiple completion queues.
>>
> Well, the actual throughput very strongly depends on the blk-mq-sched
> patches from Jens.
> As this is barely finished I didn't post any numbers yet.
>
> However:
> With multiqueue support:
> 4k seq read : io=60573MB, bw=1009.2MB/s, iops=258353, runt=
>>>
>>> 60021msec
>
> With scsi-mq on 1 queue:
> 4k seq read : io=17369MB, bw=296291KB/s, iops=74072, runt= 60028msec
> So yes, there _is_ a benefit.


Hannes,

I have created an md raid0 array with 4 SAS SSD drives using the below command,
#mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdg /dev/sdh
/dev/sdi /dev/sdj

And here is the 'mdadm --detail /dev/md0' command output,
--
/dev/md0:
Version : 1.2
  Creation Time : Thu Feb  9 14:38:47 2017
 Raid Level : raid0
 Array Size : 780918784 (744.74 GiB 799.66 GB)
   Raid Devices : 4
  Total Devices : 4
Persistence : Superblock is persistent

Update Time : Thu Feb  9 14:38:47 2017
  State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

 Chunk Size : 512K

   Name : host_name
   UUID : b63f9da7:b7de9a25:6a46ca00:42214e22
 Events : 0

Number   Major   Minor   RaidDevice State
   0       8       96        0      active sync   /dev/sdg
   1       8      112        1      active sync   /dev/sdh
   2       8      144        2      active sync   /dev/sdj
   3       8      128        3      active sync   /dev/sdi
--

Then I have used the below fio profile to run 4K sequential read
operations with the nr_hw_queues=1 driver and with the nr_hw_queues=24
driver (as my system has two NUMA nodes, each with 12 CPUs).
-
[global]
ioengine=libaio
group_reporting
direct=1
rw=read
bs=4k
allow_mounted_write=0
iodepth=128
runtime=150s

[job1]
filename=/dev/md0
-

Here are the fio results when nr_hw_queues=1 (i.e. a single request
queue) with various job counts:
1JOB 4k read  : io=213268MB, bw=1421.8MB/s, iops=363975, runt=150001msec
2JOBs 4k read : io=309605MB, bw=2064.2MB/s, iops=528389, runt=150001msec
4JOBs 4k read : io=281001MB, bw=1873.4MB/s, iops=479569, runt=150002msec
8JOBs 4k read : io=236297MB, bw=1575.2MB/s, iops=403236, runt=150016msec

Here are the fio results when nr_hw_queues=24 (i.e. multiple request
queues) with various job counts:
1JOB 4k read   : io=95194MB, bw=649852KB/s, iops=162463, runt=150001msec
2JOBs 4k read : io=189343MB, bw=1262.3MB/s, iops=323142, runt=150001msec
4JOBs 4k read : io=314832MB, bw=2098.9MB/s, iops=537309, runt=150001msec
8JOBs 4k read : io=277015MB, bw=1846.8MB/s, iops=472769, runt=150001msec

Here we can see that at lower job counts the single request queue
(nr_hw_queues=1) gives more IOPS than multiple request queues
(nr_hw_queues=24).

Can you please share your fio profile, so that I can try same thing on
my system.

Thanks,
Sreekanth


>
>
> (Which is actually quite cool, as these tests were done on a SAS3
> HBA,

Re: [PATCH V2] mpt3sas: disable ASPM for MPI2 controllers

2017-02-09 Thread ojab

On 2017/02/09 05:09, Sreekanth Reddy wrote:

On Wed, Dec 28, 2016 at 4:35 PM, ojab  wrote:

MPI2 controllers sometimes get lost (i.e. disappear from
/sys/bus/pci/devices) if ASPM is enabled.

Signed-off-by: Slava Kardakov 
Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=60644


From some of our systems engineering team, I came to know that ASPM
needs to be disabled.
So this patch looks good. Please consider this patch as

Acked-by: Sreekanth Reddy 


Yay, thanks!

I assume that this patch should also be CC: stable@, since it's clearly 
a rather critical bugfix.


Is it too late for 4.10.0?

//wbr ojab


Thanks,
Sreekanth




Re: [PATCH v4] scsi/sd: release scan_mutex during sync_cache and start_stop

2017-02-09 Thread Christoph Hellwig
And what is the lock protecting if we can just release and reacquire it
safely?  Probably nothing, but someone needs to do the audit carefully.

Once that is done we can just apply a patch like the one below and
be done with it:

---
From 47572d7f0758cc0cbd979b1a2c3791f3b94c566e Mon Sep 17 00:00:00 2001
From: Christoph Hellwig 
Date: Thu, 9 Feb 2017 09:49:04 +0100
Subject: scsi: remove scan_mutex

All our lookup structures are properly locked these days, there shouldn't
be any need to serialize the high-level scan process.

Signed-off-by: Christoph Hellwig 
---
 drivers/scsi/scsi_scan.c | 15 ---------------
 1 file changed, 15 deletions(-)

diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
index 6f7128f49c30..c92be1ad486c 100644
--- a/drivers/scsi/scsi_scan.c
+++ b/drivers/scsi/scsi_scan.c
@@ -1490,7 +1490,6 @@ struct scsi_device *__scsi_add_device(struct Scsi_Host *shost, uint channel,
 		return ERR_PTR(-ENOMEM);
 	scsi_autopm_get_target(starget);
 
-	mutex_lock(&shost->scan_mutex);
 	if (!shost->async_scan)
 		scsi_complete_async_scans();
 
@@ -1498,7 +1497,6 @@ struct scsi_device *__scsi_add_device(struct Scsi_Host *shost, uint channel,
 		scsi_probe_and_add_lun(starget, lun, NULL, &sdev, 1, hostdata);
 		scsi_autopm_put_host(shost);
 	}
-	mutex_unlock(&shost->scan_mutex);
 	scsi_autopm_put_target(starget);
 	/*
 	 * paired with scsi_alloc_target().  Target will be destroyed unless
@@ -1629,7 +1627,6 @@ void scsi_scan_target(struct device *parent, unsigned int channel,
 	    strncmp(scsi_scan_type, "manual", 6) == 0)
 		return;
 
-	mutex_lock(&shost->scan_mutex);
 	if (!shost->async_scan)
 		scsi_complete_async_scans();
 
@@ -1637,7 +1634,6 @@ void scsi_scan_target(struct device *parent, unsigned int channel,
 		__scsi_scan_target(parent, channel, id, lun, rescan);
 		scsi_autopm_put_host(shost);
 	}
-	mutex_unlock(&shost->scan_mutex);
 }
 EXPORT_SYMBOL(scsi_scan_target);
 
@@ -1686,7 +1682,6 @@ int scsi_scan_host_selected(struct Scsi_Host *shost, unsigned int channel,
 	    ((lun != SCAN_WILD_CARD) && (lun >= shost->max_lun)))
 		return -EINVAL;
 
-	mutex_lock(&shost->scan_mutex);
 	if (!shost->async_scan)
 		scsi_complete_async_scans();
 
@@ -1700,7 +1695,6 @@ int scsi_scan_host_selected(struct Scsi_Host *shost, unsigned int channel,
 			scsi_scan_channel(shost, channel, id, lun, rescan);
 		scsi_autopm_put_host(shost);
 	}
-	mutex_unlock(&shost->scan_mutex);
 
 	return 0;
 }
@@ -1752,11 +1746,9 @@ static struct async_scan_data *scsi_prep_async_scan(struct Scsi_Host *shost)
 		goto err;
 	init_completion(&data->prev_finished);
 
-	mutex_lock(&shost->scan_mutex);
 	spin_lock_irqsave(shost->host_lock, flags);
 	shost->async_scan = 1;
 	spin_unlock_irqrestore(shost->host_lock, flags);
-	mutex_unlock(&shost->scan_mutex);
 
 	spin_lock(&async_scan_lock);
 	if (list_empty(&scanning_hosts))
@@ -1789,12 +1781,9 @@ static void scsi_finish_async_scan(struct async_scan_data *data)
 
 	shost = data->shost;
 
-	mutex_lock(&shost->scan_mutex);
-
 	if (!shost->async_scan) {
 		shost_printk(KERN_INFO, shost, "%s called twice\n", __func__);
 		dump_stack();
-		mutex_unlock(&shost->scan_mutex);
 		return;
 	}
 
@@ -1806,8 +1795,6 @@ static void scsi_finish_async_scan(struct async_scan_data *data)
 	shost->async_scan = 0;
 	spin_unlock_irqrestore(shost->host_lock, flags);
 
-	mutex_unlock(&shost->scan_mutex);
-
 	spin_lock(&async_scan_lock);
 	list_del(&data->list);
 	if (!list_empty(&scanning_hosts)) {
@@ -1915,7 +1902,6 @@ struct scsi_device *scsi_get_host_dev(struct Scsi_Host *shost)
 	struct scsi_device *sdev = NULL;
 	struct scsi_target *starget;
 
-	mutex_lock(&shost->scan_mutex);
 	if (!scsi_host_scan_allowed(shost))
 		goto out;
 	starget = scsi_alloc_target(&shost->shost_gendev, 0, shost->this_id);
@@ -1929,7 +1915,6 @@ struct scsi_device *scsi_get_host_dev(struct Scsi_Host *shost)
 	scsi_target_reap(starget);
 	put_device(&starget->dev);
  out:
-	mutex_unlock(&shost->scan_mutex);
 	return sdev;
 }
 EXPORT_SYMBOL(scsi_get_host_dev);
-- 
2.11.0