On 1/30/2018 11:33 AM, Vivek Gautam wrote:
Hi Asutosh,
On 1/30/2018 10:11 AM, Asutosh Das wrote:
From: Subhash Jadavani
UFSHCD_QUIRK_BROKEN_UFS_HCI_VERSION is only applicable for QCOM UFS host
controller version 2.x.y and this has been fixed from version 3.x.y
onwards, hence this change removes this quirk for
On Tue, Jan 30, 2018 at 6:14 AM, Mike Snitzer wrote:
> On Mon, Jan 29 2018 at 4:51pm -0500,
> Bart Van Assche wrote:
>
>> On Mon, 2018-01-29 at 16:44 -0500, Mike Snitzer wrote:
>> > But regardless of which wins the race, the queue will have been run.
The start stop unit command takes on the order of a second to complete
on some SAS SSDs and longer on hard disks. Synchronize cache can also
take some time. Both commands have an IMMED bit for those apps that don't
want to wait. This patch introduces a long delay for those commands
when the IMMED
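To make the IMMED bit concrete, here is a minimal userspace sketch of building the 6-byte START STOP UNIT CDB. Per SBC, IMMED lives in byte 1 bit 0 and asks the device to return status before the operation finishes, so the caller need not wait out the spin-up/spin-down time. The function name is illustrative, not from the patch.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define START_STOP_UNIT 0x1b

/* Build a START STOP UNIT CDB; IMMED makes the device return status
 * immediately instead of after the (possibly seconds-long) operation. */
static void build_start_stop_cdb(uint8_t cdb[6], int start, int immed)
{
	memset(cdb, 0, 6);
	cdb[0] = START_STOP_UNIT;
	cdb[1] = immed ? 0x01 : 0x00;	/* IMMED: don't wait for completion */
	cdb[4] = start ? 0x01 : 0x00;	/* START: spin up vs spin down */
}
```

A long command timeout is only needed when IMMED is clear, since with IMMED set the device acknowledges right away.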
From: Venkat Gopalakrishnan
As multiple requests are submitted to the UFS host controller in
parallel, there could be instances where the command completion
interrupt arrives later for a request that was already processed
earlier because the corresponding doorbell was cleared
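The doorbell-versus-outstanding comparison described above can be modeled in plain C: any slot the driver has issued but which the hardware doorbell no longer shows has completed, and must be retired exactly once, so a late interrupt finds no work left. This is a simplified userspace model, not the ufshcd code.

```c
#include <assert.h>
#include <stdint.h>

static uint32_t outstanding;	/* bitmask of requests the driver issued */

/* Return the slots that completed since the last check: set in
 * 'outstanding' but already cleared in the hardware doorbell. */
static uint32_t completed_slots(uint32_t doorbell)
{
	uint32_t done = outstanding & ~doorbell;

	outstanding &= ~done;	/* retire each completion exactly once */
	return done;
}
```

A duplicate (late) interrupt calling `completed_slots()` again with the same doorbell value gets back 0, which is exactly the double-completion the patch guards against.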
From: Subhash Jadavani
UFSHCD_QUIRK_BROKEN_UFS_HCI_VERSION is only applicable for QCOM UFS host
controller version 2.x.y and this has been fixed from version 3.x.y
onwards, hence this change removes this quirk for version 3.x.y onwards.
Signed-off-by: Subhash Jadavani
From: Subhash Jadavani
Currently we call scsi_block_requests()/scsi_unblock_requests()
whenever we want to block/unblock SCSI requests, but as there is no
reference counting, nesting of these calls could sometimes leave us in
an undesired state. Consider the following call
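The reference-counted variant this patch argues for can be sketched in a few lines of userspace C: nested block/unblock pairs only actually unblock the queue when the count drops back to zero. scsi_block_requests() itself has no such counting; the names below are illustrative.

```c
#include <assert.h>

static int block_count;
static int queue_blocked;

/* First blocker stops the queue; later callers just bump the count. */
static void my_block_requests(void)
{
	if (block_count++ == 0)
		queue_blocked = 1;
}

/* Only the last unblocker actually restarts the queue. */
static void my_unblock_requests(void)
{
	assert(block_count > 0);
	if (--block_count == 0)
		queue_blocked = 0;
}
```

With plain scsi_block_requests()/scsi_unblock_requests(), the inner unblock of a nested pair would wrongly restart the queue while the outer blocker still expects it stopped.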
On Tue, Jan 30, 2018 at 03:37:38AM +, Bart Van Assche wrote:
> On Tue, 2018-01-30 at 11:31 +0800, Ming Lei wrote:
> > Please take a look at drivers, when BLK_STS_RESOURCE is returned, who
> > will call blk_mq_delay_run_hw_queue() for drivers?
>
> As you know the SCSI and dm drivers in kernel
On Tue, 2018-01-30 at 11:31 +0800, Ming Lei wrote:
> Please take a look at drivers, when BLK_STS_RESOURCE is returned, who
> will call blk_mq_delay_run_hw_queue() for drivers?
As you know the SCSI and dm drivers in kernel v4.15 already call that function
whenever necessary.
> >
> > > [ ... ]
On Tue, Jan 30, 2018 at 01:11:22AM +, Bart Van Assche wrote:
> On Tue, 2018-01-30 at 09:07 +0800, Ming Lei wrote:
> > On Mon, Jan 29, 2018 at 04:48:31PM +, Bart Van Assche wrote:
> > > - It is easy to fix this race inside the block layer, namely by using
> > > call_rcu() inside the
From: Subhash Jadavani
Vendor-specific setup_clocks ops may depend on clocks managed by the
ufshcd driver, so if the vendor-specific setup_clocks callback is
called when the required clocks are turned off, it results in
unclocked register access.
This change makes sure that
On Tue, Jan 30, 2018 at 5:51 AM, Bart Van Assche wrote:
> On Mon, 2018-01-29 at 16:44 -0500, Mike Snitzer wrote:
>> But regardless of which wins the race, the queue will have been run.
>> Which is all we care about right?
>
> Running the queue is not sufficient. With this
On Tue, Jan 30, 2018 at 5:22 AM, Bart Van Assche wrote:
> On Mon, 2018-01-29 at 15:33 -0500, Mike Snitzer wrote:
>> + * If driver returns BLK_STS_RESOURCE and SCHED_RESTART
>> + * bit is set, run queue after 10ms to avoid IO stalls
>> +
On 1/29/18 4:46 PM, James Bottomley wrote:
> On Mon, 2018-01-29 at 14:00 -0700, Jens Axboe wrote:
>> On 1/29/18 1:56 PM, James Bottomley wrote:
>>>
>>> On Mon, 2018-01-29 at 23:46 +0800, Ming Lei wrote:
>>> [...]
2. When to enable SCSI_MQ at default again?
>>>
>>> I'm not sure there's
On Mon, Jan 29, 2018 at 03:40:31PM -0500, Mike Snitzer wrote:
> On Mon, Jan 29 2018 at 10:46am -0500,
> Ming Lei wrote:
>
> > 2. When to enable SCSI_MQ at default again?
> >
> > SCSI_MQ is enabled on V3.17 firstly, but disabled at default. In V4.13-rc1,
> > it is enabled
On Mon, Jan 29, 2018 at 12:56:30PM -0800, James Bottomley wrote:
> On Mon, 2018-01-29 at 23:46 +0800, Ming Lei wrote:
> [...]
> > 2. When to enable SCSI_MQ at default again?
>
> I'm not sure there's much to discuss ... I think the basic answer is as
> soon as Christoph wants to try it again.
I
On Tue, 2018-01-30 at 09:07 +0800, Ming Lei wrote:
> On Mon, Jan 29, 2018 at 04:48:31PM +, Bart Van Assche wrote:
> > - It is easy to fix this race inside the block layer, namely by using
> > call_rcu() inside the blk_mq_delay_run_hw_queue() implementation to
> > postpone the queue
On Mon, Jan 29, 2018 at 04:48:31PM +, Bart Van Assche wrote:
> On Sun, 2018-01-28 at 07:41 +0800, Ming Lei wrote:
> > Not mention, the request isn't added to dispatch list yet in .queue_rq(),
> > strictly speaking, it is not correct to call blk_mq_delay_run_hw_queue() in
> > .queue_rq(), so
On Mon, 2018-01-29 at 23:39 +, Bart Van Assche wrote:
> On Tue, 2018-01-30 at 00:30 +0100, Martin Wilck wrote:
> > + static struct aborted_cmd_blist blist[] = {
>
> Please consider to declare this array const.
That doesn't work because I want to store the "warned" flag in it. And
I think
On Mon, 2018-01-29 at 14:00 -0700, Jens Axboe wrote:
> On 1/29/18 1:56 PM, James Bottomley wrote:
> >
> > On Mon, 2018-01-29 at 23:46 +0800, Ming Lei wrote:
> > [...]
> > >
> > > 2. When to enable SCSI_MQ at default again?
> >
> > I'm not sure there's much to discuss ... I think the basic
Hi Chang,
> On Jan 18, 2018, at 4:51 AM, Changlimin wrote:
>
> Hi Himanshu,
> Today I reproduced the issue in my server.
> First, I compiled kernel 4.15-rc6 (make localmodconfig; make; make
> modules_install; make install), then start the kernel with parameter
>
On Tue, 2018-01-30 at 00:30 +0100, Martin Wilck wrote:
> + static struct aborted_cmd_blist blist[] = {
Please consider to declare this array const.
> + for (i = 0; i < sizeof(blist)/sizeof(struct aborted_cmd_blist); i++) {
Have you considered to use ARRAY_SIZE()?
Thanks,
Bart.
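Bart's two review points above can be illustrated together in a small userspace model: the array cannot be `const` because the `warned` flag is written at runtime, and the loop bound uses an `ARRAY_SIZE()` macro instead of the open-coded `sizeof` division. Field values and the helper name are illustrative, not the actual patch.

```c
#include <assert.h>
#include <stddef.h>

/* Same definition the kernel uses (modulo its type-checking helper). */
#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

struct aborted_cmd_blist {
	int asc;
	int ascq;
	int warned;	/* written at runtime, hence the array can't be const */
};

static struct aborted_cmd_blist blist[] = {
	{ 0x44, 0x00, 0 },
	{ 0x29, 0x00, 0 },
};

/* Find a matching entry and record that we warned about it once. */
static int mark_warned(int asc, int ascq)
{
	size_t i;

	for (i = 0; i < ARRAY_SIZE(blist); i++) {
		if (blist[i].asc == asc && blist[i].ascq == ascq) {
			blist[i].warned = 1;
			return 1;
		}
	}
	return 0;
}
```

ARRAY_SIZE() keeps the bound correct even if the element type of `blist` later changes, which the `sizeof(blist)/sizeof(struct aborted_cmd_blist)` form silently would not.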
If every_nth > 0, the injection flags must be reset for commands
that aren't supposed to fail (i.e. that aren't "nth"). Otherwise,
commands will continue to fail, like in the every_nth < 0 case.
Signed-off-by: Martin Wilck
---
drivers/scsi/scsi_debug.c | 7 ++-
1 file
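The every_nth rule the patch fixes can be modeled in a few lines: with every_nth > 0 only every Nth command should carry the injection flag, so the flag is effectively cleared again on the commands in between. This is a toy model of the described behavior, not the scsi_debug code.

```c
#include <assert.h>
#include <stdbool.h>

static int every_nth = 3;	/* inject an error on every 3rd command */
static int cmnd_count;

/* Decide per command whether to inject; commands that aren't "nth"
 * must come back clean, rather than inheriting a stale flag. */
static bool should_inject(void)
{
	cmnd_count++;
	if (every_nth > 0)
		return (cmnd_count % every_nth) == 0;
	return false;
}
```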
(Resending, because I forgot to cc linux-scsi. BIG SORRY!)
Introduce a new blist flag that indicates the device may return certain
sense code/ASC/ASCQ combinations that indicate different treatment than
normal. In particular, some devices need unconditional retry (aka
ADD_TO_MLQUEUE) under
On Mon, 2018-01-29 at 17:14 -0500, Mike Snitzer wrote:
> On Mon, Jan 29 2018 at 4:51pm -0500,
> Bart Van Assche wrote:
>
> > On Mon, 2018-01-29 at 16:44 -0500, Mike Snitzer wrote:
> > > But regardless of which wins the race, the queue will have been run.
> > > Which is
On Mon, Jan 29 2018 at 4:51pm -0500,
Bart Van Assche wrote:
> On Mon, 2018-01-29 at 16:44 -0500, Mike Snitzer wrote:
> > But regardless of which wins the race, the queue will have been run.
> > Which is all we care about right?
>
> Running the queue is not sufficient.
On Mon, 2018-01-29 at 16:44 -0500, Mike Snitzer wrote:
> But regardless of which wins the race, the queue will have been run.
> Which is all we care about right?
Running the queue is not sufficient. With this patch applied it can happen
that the block driver returns BLK_STS_DEV_RESOURCE, that the
On Mon, Jan 29 2018 at 4:22pm -0500,
Bart Van Assche wrote:
> On Mon, 2018-01-29 at 15:33 -0500, Mike Snitzer wrote:
> > +* If driver returns BLK_STS_RESOURCE and SCHED_RESTART
> > +* bit is set, run queue after 10ms to avoid IO stalls
> > +
On Mon, 2018-01-29 at 15:33 -0500, Mike Snitzer wrote:
> + * If driver returns BLK_STS_RESOURCE and SCHED_RESTART
> + * bit is set, run queue after 10ms to avoid IO stalls
> + * that could otherwise occur if the queue is idle.
>*/
> -
On 1/29/18 1:56 PM, James Bottomley wrote:
> On Mon, 2018-01-29 at 23:46 +0800, Ming Lei wrote:
> [...]
>> 2. When to enable SCSI_MQ at default again?
>
> I'm not sure there's much to discuss ... I think the basic answer is as
> soon as Christoph wants to try it again.
FWIW, internally I've been
On Mon, 2018-01-29 at 23:46 +0800, Ming Lei wrote:
[...]
> 2. When to enable SCSI_MQ at default again?
I'm not sure there's much to discuss ... I think the basic answer is as
soon as Christoph wants to try it again.
> SCSI_MQ is enabled on V3.17 firstly, but disabled at default. In
> V4.13-rc1,
On Mon, Jan 29 2018 at 10:46am -0500,
Ming Lei wrote:
> 2. When to enable SCSI_MQ at default again?
>
> SCSI_MQ is enabled on V3.17 firstly, but disabled at default. In V4.13-rc1,
> it is enabled at default, but later the patch is reverted in V4.13-rc7, and
> becomes
From: Ming Lei
This status is returned from driver to block layer if a device-related
resource is unavailable, but the driver can guarantee that IO dispatch
will be triggered in the future when the resource is available.
Convert some drivers to return BLK_STS_DEV_RESOURCE. Also, if
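The retry rule being discussed in this thread can be condensed into one decision function: on BLK_STS_DEV_RESOURCE the driver promises a future queue run, so the block layer arms no timer; on plain BLK_STS_RESOURCE with SCHED_RESTART set, the queue is re-run after a delay (10 ms in the quoted patch) to avoid an I/O stall. Constants and the function name are illustrative, not the kernel API.

```c
#include <assert.h>
#include <stdbool.h>

enum blk_status { BLK_STS_OK, BLK_STS_RESOURCE, BLK_STS_DEV_RESOURCE };

/* Delay in ms after which the block layer must re-run the queue,
 * or -1 when no block-layer rerun is required at all. */
static int rerun_delay_ms(enum blk_status sts, bool sched_restart)
{
	switch (sts) {
	case BLK_STS_DEV_RESOURCE:
		return -1;			/* driver promises to rerun */
	case BLK_STS_RESOURCE:
		return sched_restart ? 10 : 0;	/* delayed vs immediate rerun */
	default:
		return -1;			/* completed, nothing to rerun */
	}
}
```

The split removes the ambiguity Bart and Ming are debating: a driver that cannot guarantee a rerun keeps returning BLK_STS_RESOURCE and relies on the block layer's timer.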
On 1/29/2018 3:45 AM, Steffen Maier wrote:
@@ -2966,6 +2975,9 @@ struct lpfc_mbx_read_top {
#define LPFC_LINK_SPEED_10GHZ 0x40
#define LPFC_LINK_SPEED_16GHZ 0x80
#define LPFC_LINK_SPEED_32GHZ 0x90
+#define LPFC_LINK_SPEED_64GHZ 0xA0
+#define LPFC_LINK_SPEED_128GHZ 0xB0
On 1/10/18 12:26 PM, Mike Christie wrote:
> On 01/04/2018 10:11 AM, Bryant G. Ly wrote:
>> This patch allows for multiple attributes to be reconfigured
>> and handled all in one call as compared to multiple netlinks.
>>
>> Example:
>> set attribute
On Thu, Jan 18, 2018 at 12:46:52AM +0800, John Garry wrote:
> From: Xiaofei Tan
"dt-bindings: ..." is the preferred subject prefix.
>
> Add directly attached disk LED feature for v2 hw.
>
> Signed-off-by: Xiaofei Tan
> Signed-off-by: John Garry
On Fri, 2018-01-26 at 17:58 +0100, Michal Suchanek wrote:
> When the drive closes it can take tens of seconds until the disc is
> analyzed. Wait for the drive to become ready or report an error.
>
> Signed-off-by: Michal Suchanek
> ---
> drivers/cdrom/cdrom.c | 9 +
>
On Fri, 2018-01-26 at 17:58 +0100, Michal Suchanek wrote:
> +static int cdrom_tray_close(struct cdrom_device_info *cdi)
> +{
> + int ret;
> +
> + ret = cdi->ops->tray_move(cdi, 0);
> + if (ret || !cdi->ops->drive_status)
> + return ret;
> +
> + return
On Fri, 2018-01-26 at 17:58 +0100, Michal Suchanek wrote:
> - ret=cdo->tray_move(cdi,0);
> + ret = cdo->tray_move(cdi, 0);
Please separate whitespace-only changes from functional changes such that
this patch series becomes easier to review.
On Fri, 2018-01-26 at 17:58 +0100, Michal Suchanek wrote:
> Add convenience macro for polling an event that does not have a
> waitqueue.
>
> Signed-off-by: Michal Suchanek
> ---
> include/linux/delay.h | 12
> 1 file changed, 12 insertions(+)
>
> diff --git
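A macro of the kind the patch proposes — poll a condition that has no waitqueue until it becomes true or a budget runs out — can be sketched like this. The real helper would sleep between checks; this userspace model just counts attempts. Names are illustrative, not the proposed API.

```c
#include <assert.h>

/* GCC statement expression, as is idiomatic in include/linux/. */
#define poll_event_budget(cond, budget)			\
({							\
	int __tries = (budget);				\
	while (!(cond) && --__tries > 0)		\
		;	/* real code would msleep() here */	\
	(cond) ? 0 : -1;				\
})

static int counter;

/* The polled condition has a side effect so the poll terminates. */
static int bump_and_check(int target, int budget)
{
	counter = 0;
	return poll_event_budget(++counter >= target, budget);
}
```

Returning 0 on success and -1 on timeout mirrors the usual kernel convention of an errno-style failure from a bounded wait.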
> -Original Message-
> From: Hannes Reinecke [mailto:h...@suse.de]
> Sent: Monday, January 29, 2018 2:29 PM
> To: Kashyap Desai; linux-scsi@vger.kernel.org; Peter Rivera
> Subject: Re: [RFC 0/2] mpt3sas/megaraid_sas : irq poll and load balancing
> of
> reply queue
>
> On 01/15/2018 01:12
On Sun, 2018-01-28 at 07:41 +0800, Ming Lei wrote:
> Not mention, the request isn't added to dispatch list yet in .queue_rq(),
> strictly speaking, it is not correct to call blk_mq_delay_run_hw_queue() in
> .queue_rq(), so the current block layer API can't handle it well enough.
I disagree that
> -Original Message-
> From: Bart Van Assche [mailto:bart.vanass...@wdc.com]
> Sent: Monday, January 29, 2018 10:08 PM
> To: Elliott, Robert (Persistent Memory); Hannes Reinecke;
> lsf-pc@lists.linux-
> foundation.org
> Cc: linux-scsi@vger.kernel.org; linux-n...@lists.infradead.org;
> -Original Message-
> From: Raghava Aditya Renukunta
> [mailto:raghavaaditya.renuku...@microsemi.com]
> Sent: Friday, January 19, 2018 6:02 PM
> To: j...@linux.vnet.ibm.com; martin.peter...@oracle.com; linux-
> s...@vger.kernel.org
> Cc: Scott Benesh ; Tom
On 01/29/18 07:41, Elliott, Robert (Persistent Memory) wrote:
-Original Message-
From: Linux-nvme [mailto:linux-nvme-boun...@lists.infradead.org] On Behalf
Of Hannes Reinecke
Sent: Monday, January 29, 2018 3:09 AM
To: lsf...@lists.linux-foundation.org
Cc: linux-n...@lists.infradead.org;
> -Original Message-
> From: Raghava Aditya Renukunta
> [mailto:raghavaaditya.renuku...@microsemi.com]
> Sent: Friday, January 19, 2018 6:02 PM
> To: j...@linux.vnet.ibm.com; martin.peter...@oracle.com; linux-
> s...@vger.kernel.org
> Cc: Scott Benesh ; Tom
> -Original Message-
> From: Raghava Aditya Renukunta
> [mailto:raghavaaditya.renuku...@microsemi.com]
> Sent: Friday, January 19, 2018 6:02 PM
> To: j...@linux.vnet.ibm.com; martin.peter...@oracle.com; linux-
> s...@vger.kernel.org
> Cc: Scott Benesh ; Tom
Hi guys,
Two blk-mq related topics
1. blk-mq vs. CPU hotplug & IRQ vectors spread on CPUs
We have done three big changes in this field before; each time some issues
were fixed, but meanwhile new ones were introduced
1) freeze all queues during CPU hotplug handler
- issues: queue dependency such as
> -Original Message-
> From: Linux-nvme [mailto:linux-nvme-boun...@lists.infradead.org] On Behalf
> Of Hannes Reinecke
> Sent: Monday, January 29, 2018 3:09 AM
> To: lsf...@lists.linux-foundation.org
> Cc: linux-n...@lists.infradead.org; linux-scsi@vger.kernel.org; Kashyap
> Desai
On Wed, Jan 24, 2018 at 02:58:01PM +, Colin King wrote:
> From: Colin Ian King
>
> The pointer ln is assigned a value that is never read, it is re-assigned
> a new value in the list_for_each loop hence the initialization is
> redundant and can be removed.
>
>
Looks good,
Reviewed-by: Johannes Thumshirn
--
Johannes Thumshirn Storage
jthumsh...@suse.de+49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham
Looks good,
Reviewed-by: Johannes Thumshirn
On Mon, Jan 29, 2018 at 1:30 PM, Corentin Labbe wrote:
> Remove a line referencing nonexistent files that were removed in
> commit 642978beb483 ("[SCSI] remove m68k NCR53C9x based drivers")
>
> Signed-off-by: Corentin Labbe
Acked-by: Geert Uytterhoeven
Remove a line referencing nonexistent files that were removed in
commit 642978beb483 ("[SCSI] remove m68k NCR53C9x based drivers")
Signed-off-by: Corentin Labbe
---
drivers/scsi/Makefile | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/scsi/Makefile
On 01/26/2018 08:31 PM, James Smart wrote:
The G7 adapter supports 64G link speeds. Add support to the driver.
In addition, a small cleanup to replace the odd bitmap logic with
a switch case.
Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
On 01/24/2018 11:45 PM, James Smart wrote:
> Revise the NVME PRLI to indicate CONF support.
>
> Signed-off-by: Dick Kennedy
> Signed-off-by: James Smart
> ---
> drivers/scsi/lpfc/lpfc_els.c | 3 ++-
> drivers/scsi/lpfc/lpfc_hw4.h
On 01/24/2018 11:45 PM, James Smart wrote:
> During link bounce testing in a point-to-point topology, the
> host may enter a soft lockup on the lpfc_worker thread:
> Call Trace:
> lpfc_work_done+0x1f3/0x1390 [lpfc]
> lpfc_do_work+0x16f/0x180 [lpfc]
> kthread+0xc7/0xe0
>
On 01/24/2018 11:45 PM, James Smart wrote:
> The driver ignored checks on whether the link should be
> kept administratively down after a link bounce. Correct the
> checks.
>
> Signed-off-by: Dick Kennedy
> Signed-off-by: James Smart
> ---
>
On 01/24/2018 11:45 PM, James Smart wrote:
> When using the special option to suppress the response iu, ensure
> the adapter fully supports the feature by checking feature flags
> from the adapter.
>
> Signed-off-by: Dick Kennedy
> Signed-off-by: James Smart
On 01/24/2018 11:45 PM, James Smart wrote:
> Currently, write underruns (mismatch of amount transferred vs scsi
> status and its residual) detected by the adapter are not being
> flagged as an error. It's expected that the target controls the data
> transfer and would appropriately set the RSP values.
On 01/24/2018 11:45 PM, James Smart wrote:
> Increased CQ and WQ sizes for SCSI FCP, matching those used
> for NVMe development.
>
> Signed-off-by: Dick Kennedy
> Signed-off-by: James Smart
> ---
> drivers/scsi/lpfc/lpfc.h | 1 +
>
On 01/24/2018 11:45 PM, James Smart wrote:
> Updated Copyright in files updated 11.4.0.7
>
> Signed-off-by: Dick Kennedy
> Signed-off-by: James Smart
> ---
> drivers/scsi/lpfc/lpfc.h | 2 +-
> drivers/scsi/lpfc/lpfc_attr.c | 2
On 01/24/2018 11:45 PM, James Smart wrote:
> I/O conditions on the nvme target may have the driver submitting
> to a full hardware wq. The hardware wq is a shared resource among
> all nvme controllers. When the driver hit a full wq, it failed the
> io posting back to the nvme-fc transport, which
On 01/24/2018 11:45 PM, James Smart wrote:
> During SCSI error handling escalation to host reset, the SCSI io
> routines were moved off the txcmplq, but the individual io's
> ON_CMPLQ flag wasn't cleared. Thus, a background thread saw the
> io and attempted to access it as if on the txcmplq.
>
>
On 01/24/2018 11:45 PM, James Smart wrote:
> The lpfc driver does not discover a target when the topology
> changes from switched-fabric to direct-connect. The target
> rejects the PRLI from the initiator in direct-connect as the
> driver is using the old S_ID from the switched topology.
>
> The
On 01/24/2018 11:45 PM, James Smart wrote:
> Update the driver version to 11.4.0.7
>
> Signed-off-by: Dick Kennedy
> Signed-off-by: James Smart
> ---
> drivers/scsi/lpfc/lpfc_version.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
On 01/15/2018 01:12 PM, Kashyap Desai wrote:
> Hi All -
>
> We have seen cpu lock up issue from fields if system has greater (more
> than 96) logical cpu count.
> SAS3.0 controller (Invader series) supports at max 96 msix vector and
> SAS3.5 product (Ventura) supports at max 128 msix vectors.
>
Hi all,
here's a topic which came up on the SCSI ML (cf thread '[RFC 0/2]
mpt3sas/megaraid_sas: irq poll and load balancing of reply queue').
When doing I/O tests on a machine with more CPUs than MSIx vectors
provided by the HBA we can easily setup a scenario where one CPU is
submitting I/O and
On 01/24/2018 11:45 PM, James Smart wrote:
> The driver was inappropriately pulling in the nvme host's
> nvme.h header. What it really needed was the standard
> header.
>
> Signed-off-by: Dick Kennedy
> Signed-off-by: James Smart
> ---
>
On 01/24/2018 11:45 PM, James Smart wrote:
> Ensure nvme localports/targetports are torn down before
> dismantling the adapter sli interface on driver detachment.
> This aids leaving interfaces live while nvme may be making
> callbacks to abort it.
>
> Signed-off-by: Dick Kennedy
On 01/24/2018 11:45 PM, James Smart wrote:
> Make the attribute writeable.
> Remove the ramp up to logic as its unnecessary, simply set depth.
> Add debug message if depth changed, possibly reducing limit, yet
> our outstanding count has yet to catch up with it.
>
> Signed-off-by: Dick Kennedy
On 01/24/2018 11:45 PM, James Smart wrote:
> When nvme target deferred receive logic waits for exchange
> resources, the corresponding receive buffer is not replenished
> with the hardware. This can result in a lack of asynchronous
> receive buffer resources in the hardware, resulting in a
> "2885
On 01/24/2018 11:45 PM, James Smart wrote:
> In a test that is doing large numbers of cable swaps on the target,
> the nvme controllers wouldn't reconnect.
>
> During the cable swaps, the targets n_port_id would change. This
> information was passed to the nvme-fc transport, in the new remoteport
On 01/24/2018 11:45 PM, James Smart wrote:
> A stress test repeatedly resetting the adapter while performing
> io would eventually report I/O failures and missing nvme namespaces.
>
> The driver was setting the nvmefc_fcp_req->private pointer to NULL
> during the IO completion routine before
On 01/24/2018 11:45 PM, James Smart wrote:
> Existing code was using the wrong field for the completion status
> when comparing whether to increment abort statistics
>
> Signed-off-by: Dick Kennedy
> Signed-off-by: James Smart
> ---
>
On 01/24/2018 11:45 PM, James Smart wrote:
> The driver controls when the hardware sends completions that
> communicate consumption of elements from the WQ. This is done by
> setting a WQEC bit on a WQE.
>
> The current driver sets it on every Nth WQE posting. However, the
> driver isn't clearing