Show the max I/O queue depth; it may not reflect the hardware's real
max queue depth, which may have been reduced by some software designs.
Signed-off-by: weiping zhang
---
block/blk-sysfs.c | 11 +++
include/linux/blkdev.h | 5 +
2 files changed, 16 insertions(+)
On 08/09/2017 08:07 PM, Goldwyn Rodrigues wrote:
>>> No, from a multi-device point of view, this is inconsistent. I
>>> have tried making the request bio return -EAGAIN before the split, but
>>> I shall check again. Where do you see this happening?
>>
>> No, this isn't multi-device
On 08/08/2017 02:36 PM, Jens Axboe wrote:
> On 08/08/2017 02:32 PM, Shaohua Li wrote:
>>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
>>> index 25f6a0cb27d3..fae021ebec1b 100644
>>> --- a/include/linux/blkdev.h
>>> +++ b/include/linux/blkdev.h
>>> @@ -633,6 +633,7 @@ struct
On 08/09/2017 08:17 PM, Shaohua Li wrote:
> On Wed, Aug 09, 2017 at 05:16:23PM -0500, Goldwyn Rodrigues wrote:
>>
>>
>> On 08/09/2017 03:21 PM, Shaohua Li wrote:
>>> On Wed, Aug 09, 2017 at 10:35:39AM -0500, Goldwyn Rodrigues wrote:
On 08/09/2017 10:02 AM, Shaohua Li wrote:
>
From: Joseph Qi
Since throtl_rb_first() may return NULL, check tg first and only then
use it.
Signed-off-by: Joseph Qi
---
block/blk-throttle.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git
On Wed, Aug 09, 2017 at 05:16:23PM -0500, Goldwyn Rodrigues wrote:
>
>
> On 08/09/2017 03:21 PM, Shaohua Li wrote:
> > On Wed, Aug 09, 2017 at 10:35:39AM -0500, Goldwyn Rodrigues wrote:
> >>
> >>
> >> On 08/09/2017 10:02 AM, Shaohua Li wrote:
> >>> On Wed, Aug 09, 2017 at 06:44:55AM -0500,
I haven't really tested, but I was aware of the commit before I sent my
last email. It doesn't seem relevant, to be honest, because it doesn't
change the fact that the inner loop will only end once the whole request
has been looped over. So it is still one big bio.
There are a few things that seem
On 08/09/2017 03:21 PM, Shaohua Li wrote:
> On Wed, Aug 09, 2017 at 10:35:39AM -0500, Goldwyn Rodrigues wrote:
>>
>>
>> On 08/09/2017 10:02 AM, Shaohua Li wrote:
>>> On Wed, Aug 09, 2017 at 06:44:55AM -0500, Goldwyn Rodrigues wrote:
On 08/08/2017 03:32 PM, Shaohua Li wrote:
>
> On 08 Aug 2017, at 19:33, Paolo Valente
> wrote:
>
>>
>> On 08 Aug 2017, at 10:06, Paolo Valente
>> wrote:
>>
>>>
>>> On 07 Aug 2017, at 20:42, Paolo Valente
>>>
On Wed, Aug 09, 2017 at 10:35:39AM -0500, Goldwyn Rodrigues wrote:
>
>
> On 08/09/2017 10:02 AM, Shaohua Li wrote:
> > On Wed, Aug 09, 2017 at 06:44:55AM -0500, Goldwyn Rodrigues wrote:
> >>
> >>
> >> On 08/08/2017 03:32 PM, Shaohua Li wrote:
> >>> On Wed, Jul 26, 2017 at 06:57:58PM -0500,
On 08/09/2017 12:28 PM, Bart Van Assche wrote:
> The blk_mq_delay_kick_requeue_list() function is used by the device
> mapper and only by the device mapper to rerun the queue and requeue
> list after a delay. This function is called once per request that
> gets requeued. Modify this function such
The blk_mq_delay_kick_requeue_list() function is used by the device
mapper and only by the device mapper to rerun the queue and requeue
list after a delay. This function is called once per request that
gets requeued. Modify this function such that the queue is run once
per path change event
On 08/09/2017 03:38 PM, h...@lst.de wrote:
Does commit 615d22a51c04856efe62af6e1d5b450aaf5cc2c0
"block: Fix __blkdev_issue_zeroout loop" fix the issue for you?
I crashed 4.13-rc4 with the test above (blkdiscard -z on a dm-crypt
dev). Unfortunately I didn't have my VM configured properly, so
4.12-stable review patch. If anyone has any objections, please let me know.
--
From: Christoph Hellwig
commit 4b855ad37194f7bdbb200ce7a1c7051fecb56a08 upstream.
Currently we only create hctx for online CPUs, which can lead to a lot
of churn due to frequent soft
On Wed, 2017-08-09 at 12:43 -0400, Laurence Oberman wrote:
> Your latest patch on stock upstream without Ming's latest patches is
> behaving for me.
>
> As already mentioned, the requeue -11 and clone failure messages are
> gone and I am not actually seeing any soft lockups or hard lockups.
>
4.12-stable review patch. If anyone has any objections, please let me know.
--
From: Christoph Hellwig
commit 5f042e7cbd9ebd3580077dcdc21f35e68c2adf5f upstream.
This way we get a nice distribution independent of the current cpu
online / offline state.
On 08/08/2017 10:28 PM, Laurence Oberman wrote:
> On Tue, 2017-08-08 at 20:11 -0400, Laurence Oberman wrote:
>> On Tue, 2017-08-08 at 22:17 +0800, Ming Lei wrote:
>>> Hi Guys,
>>> Laurence and I see a system lockup issue when running concurrent
>>> big buffered writes (4M bytes) to IB SRP on v4.13-rc3.
> Found these while coming up with the fixes just sent.
Also OK.
Reviewed-by: Martin K. Petersen
--
Martin K. Petersen Oracle Linux Engineering
Christoph,
> this series fixes regressions in the integrity handling update in
> 4.13-rc.
>
> The first one was sent earlier by Milan and while both Martin and I
> aren't exactly happy about the way dm uses the integrity code to cause
> this regression this minimal fix gets us back to the status
On 08/09/2017 09:48 AM, Christoph Hellwig wrote:
> Found these while coming up with the fixes just sent.
Added for 4.14.
--
Jens Axboe
On 08/09/2017 09:47 AM, Christoph Hellwig wrote:
> Hi Jens,
>
> this series fixes regressions in the integrity handling update in 4.13-rc.
>
> The first one was sent earlier by Milan and while both Martin and I aren't
> exactly happy about the way dm uses the integrity code to cause this
>
This makes the code more obvious, and moves the most likely branch first
in the function.
Signed-off-by: Christoph Hellwig
---
block/bio-integrity.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/block/bio-integrity.c b/block/bio-integrity.c
index
Found these while coming up with the fixes just sent.
This flag is never set right after calling bio_integrity_alloc,
so don't clear it and confuse the reader.
Signed-off-by: Christoph Hellwig
---
drivers/md/dm-crypt.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index
Hi Jens,
this series fixes regressions in the integrity handling update in 4.13-rc.
The first one was sent earlier by Milan and while both Martin and I aren't
exactly happy about the way dm uses the integrity code to cause this
regression this minimal fix gets us back to the status quo.
The
From: Milan Broz
In the dm-integrity target we register an integrity profile that has
both generate_fn and verify_fn callbacks set to NULL.
This is used if dm-integrity is stacked under a dm-crypt device
for authenticated encryption (the integrity payload contains the
authentication tag
This gets us back to the behavior in 4.12 and earlier.
Signed-off-by: Christoph Hellwig
Fixes: 7c20f116 ("bio-integrity: stop abusing bi_end_io")
---
block/bio-integrity.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/block/bio-integrity.c
On 08/09/2017 10:02 AM, Shaohua Li wrote:
> On Wed, Aug 09, 2017 at 06:44:55AM -0500, Goldwyn Rodrigues wrote:
>>
>>
>> On 08/08/2017 03:32 PM, Shaohua Li wrote:
>>> On Wed, Jul 26, 2017 at 06:57:58PM -0500, Goldwyn Rodrigues wrote:
From: Goldwyn Rodrigues
On 08/09/2017 04:10 PM, Christoph Hellwig wrote:
> On Mon, Aug 07, 2017 at 08:09:11AM +0200, Hannes Reinecke wrote:
>> On 08/05/2017 01:39 PM, Christoph Hellwig wrote:
>>> Can you use normal linux style for the code instead of copy and
>>> pasting the weird naming and capitalization from the
On 08/09/2017 04:23 PM, Christoph Hellwig wrote:
> On Wed, Aug 02, 2017 at 04:13:24PM +0200, Hannes Reinecke wrote:
>> Hi all,
>>
>> as we're trying to get rid of the remaining request_fn drivers here's
>> a patchset to move the DAC960 driver to the SCSI stack.
>> The new driver is called 'mylex'.
On Wed, Aug 09, 2017 at 06:44:55AM -0500, Goldwyn Rodrigues wrote:
>
>
> On 08/08/2017 03:32 PM, Shaohua Li wrote:
> > On Wed, Jul 26, 2017 at 06:57:58PM -0500, Goldwyn Rodrigues wrote:
> >> From: Goldwyn Rodrigues
> >>
> >> Nowait is a feature of direct AIO, where users can
On Wed, Aug 02, 2017 at 04:13:24PM +0200, Hannes Reinecke wrote:
> Hi all,
>
> as we're trying to get rid of the remaining request_fn drivers here's
> a patchset to move the DAC960 driver to the SCSI stack.
> The new driver is called 'mylex'.
>
> The Mylex/DAC960 HBA comes in two flavours; the
Hi Linus,
Three patches that should go into this release. Two of them are from
Paolo and fix up some corner cases with BFQ, and the last patch is from
Ming and fixes up a potential usage count imbalance regression in this
series, due to the NOWAIT work that went in.
Please pull!
On 08/09/2017 01:14 AM, Omar Sandoval wrote:
> On Tue, Aug 08, 2017 at 05:47:15PM -0600, Jens Axboe wrote:
>> On 08/08/2017 04:48 PM, Omar Sandoval wrote:
>>> On Fri, Aug 04, 2017 at 09:04:21AM -0600, Jens Axboe wrote:
Modify blk_mq_in_flight() to count both a partition and root at
the
We do set rq->sense_len when we assign the reply buffer in
blk_fill_sgv4_hdr_rq(). No point in possibly deviating from this value
later on.
bsg-lib.h specifies:
unsigned int reply_len;
/*
* On entry : reply_len indicates the buffer size allocated for
* the reply.
*
*
Hello all,
Steffen noticed recently that we have a regression in the BSG code that
prevents us from sending any traffic over this interface. After I
researched this a bit, it turned out that this affects not only zFCP, but
likely all LLDs that implement the BSG API. This was introduced in 4.11
The BSG implementations use the bsg_job's reply buffer as storage for their
own custom reply structures (e.g.: struct fc_bsg_reply or
struct iscsi_bsg_reply). The size of bsg_job's reply buffer and the sizes of
the implementations' reply structures are not related in any way the
compiler can currently check.
To make
Since struct bsg_command is now used in every calling case, we no
longer need to pass arguments separately when they are already
contained in the same bsg_command.
Signed-off-by: Benjamin Block
---
block/bsg.c | 13 ++---
1 file changed, 6 insertions(+), 7 deletions(-)
Before, the SG_IO ioctl for BSG devices used its own on-stack data
to assemble and send the specified command. The read and write calls use
their own infrastructure built around struct bsg_command and a custom
slab pool for that.
Refactor this, so that the SG_IO ioctl also uses struct
Since struct bsg_command is now used in every calling case, we no
longer need to pass arguments separately when they are already
contained in the same bsg_command.
Signed-off-by: Benjamin Block
---
block/bsg.c | 20 +---
1 file changed, 9 insertions(+), 11 deletions(-)
In contrast to the normal SCSI-lib, the BSG block-queue doesn't make use of
any extra init_rq_fn() to make additional allocations during
request-creation, and the request sense-pointer is not used to transport
SCSI sense data, but is used as backing for the bsg_job->reply pointer;
that in turn is
Does commit 615d22a51c04856efe62af6e1d5b450aaf5cc2c0
"block: Fix __blkdev_issue_zeroout loop" fix the issue for you?
In the scenario below, blkio cgroups do not work per their assigned
weights:
1. When the underlying device is non-rotational with a single HW queue
with a depth of >= CFQ_HW_QUEUE_MIN
2. When the use case is forming two blkio cgroups, cg1 (weight 1000) and
cg2 (weight 100), and two processes (file1 and
On 08/08/2017 03:43 PM, Shaohua Li wrote:
> On Wed, Jul 26, 2017 at 06:58:01PM -0500, Goldwyn Rodrigues wrote:
>> From: Goldwyn Rodrigues
>>
>> Return EAGAIN in case RAID5 would block because of waiting due to:
>> + Reshaping
>> + Suspension
>> + Stripe Expansion
>>
>>
On 08/08/2017 03:32 PM, Shaohua Li wrote:
> On Wed, Jul 26, 2017 at 06:57:58PM -0500, Goldwyn Rodrigues wrote:
>> From: Goldwyn Rodrigues
>>
>> Nowait is a feature of direct AIO, where users can request
>> to return immediately if the I/O is going to block. This translates
>>
On 08/08/2017 03:39 PM, Shaohua Li wrote:
> On Wed, Jul 26, 2017 at 06:58:00PM -0500, Goldwyn Rodrigues wrote:
>> From: Goldwyn Rodrigues
>>
>> The RAID1 driver would bail with EAGAIN in case of:
>> + I/O has to wait for a barrier
>> + array is frozen
>> + Area is
On Tue, Aug 08, 2017 at 05:47:15PM -0600, Jens Axboe wrote:
> On 08/08/2017 04:48 PM, Omar Sandoval wrote:
> > On Fri, Aug 04, 2017 at 09:04:21AM -0600, Jens Axboe wrote:
> >> Modify blk_mq_in_flight() to count both a partition and root at
> >> the same time. Then we only have to call it once,
On Wed, Aug 09, 2017 at 10:32:52AM +0800, Ming Lei wrote:
> On Wed, Aug 9, 2017 at 8:11 AM, Omar Sandoval wrote:
> > On Sat, Aug 05, 2017 at 02:56:46PM +0800, Ming Lei wrote:
> >> When hw queue is busy, we shouldn't take requests from
> >> scheduler queue any more, otherwise