> +	zones = kzalloc(sizeof(struct blk_zone) * rep.nr_zones,
> +			GFP_KERNEL);
> +	if (!zones)
> +		return -ENOMEM;
This should use kcalloc to get us overflow checking for the
user-controlled allocation size.
Hello Jens,
Multiple block drivers need the functionality to stop a request queue
and to wait until all ongoing request_fn() / queue_rq() calls have
finished without waiting until all outstanding requests have finished.
Hence this patch series, which introduces the blk_quiesce_queue() and
blk_resume_queue() functions.
Ensure that if scsi-mq is enabled that srp_wait_for_queuecommand()
waits until ongoing shost->hostt->queuecommand() calls have finished.
For the !scsi-mq path, use blk_quiesce_queue() and blk_resume_queue()
instead of busy-waiting.
Signed-off-by: Bart Van Assche
Cc:
Ensure that nvme_queue_rq() is no longer running when
nvme_stop_queues() returns. Untested.
Signed-off-by: Bart Van Assche
Cc: Keith Busch
Cc: Christoph Hellwig
Cc: Sagi Grimberg
---
drivers/nvme/host/core.c | 16
On Mon, Sep 26 2016 at 2:25pm -0400,
Bart Van Assche wrote:
> Hello Jens,
>
> Multiple block drivers need the functionality to stop a request
> queue and to wait until all ongoing request_fn() / queue_rq() calls
> have finished without waiting until all outstanding
Ensure that all ongoing dm_mq_queue_rq() and dm_mq_requeue_request()
calls have stopped before setting the "queue stopped" flag. This
makes it possible to remove the "queue stopped" test from
dm_mq_queue_rq() and dm_mq_requeue_request(). This patch fixes a race
condition because dm_mq_queue_rq() is called
Move the blk_freeze_queue() and blk_unfreeze_queue() implementations
from block/blk-mq.c to block/blk-core.c. Drop "_mq" from the name of
the functions that have been moved.
Signed-off-by: Bart Van Assche
---
block/blk-core.c | 45
The function blk_queue_stopped() allows testing whether or not a
traditional request queue has been stopped. Introduce a helper
function that allows block drivers to easily query whether or not
one or more hardware contexts of a blk-mq queue have been stopped.
Signed-off-by: Bart Van Assche
Since these two structure members are now used in blk-mq and !blk-mq
paths, remove the "mq_" prefix. This patch does not change any
functionality.
Signed-off-by: Bart Van Assche
---
block/blk-core.c | 20 ++--
block/blk-mq.c | 4 ++--
blk_quiesce_queue() prevents new queue_rq() invocations from
occurring and waits until ongoing invocations have finished. This
function does *not* wait until all outstanding requests have
finished (i.e. until request.end_io() has been invoked).
blk_resume_queue() resumes normal I/O processing.
Signed-off-by: Bart Van Assche
---
block/blk-core.c | 15 ++-
1 file changed, 6 insertions(+), 9 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 8cc8006..5ecc7ab 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -689,7 +689,10 @@
On Fri, Sep 23, 2016 at 03:21:14PM -0700, Sagi Grimberg wrote:
> Question: is using pci_alloc_irq_vectors() obligated for
> supplying blk-mq with the device affinity mask?
No, but it's very useful. We'll need equivalents for other buses
that provide multiple vectors and vector spreading.
> If I
From: Hannes Reinecke
Signed-off-by: Hannes Reinecke
Signed-off-by: Damien Le Moal
---
block/blk-settings.c | 4
1 file changed, 4 insertions(+)
diff --git a/block/blk-settings.c b/block/blk-settings.c
index b1d5b7f..55369a6 100644
Add the zoned queue limit to indicate the zoning model of a block device.
Defined values are 0 (BLK_ZONED_NONE) for regular block devices,
1 (BLK_ZONED_HA) for host-aware zoned block devices and 2 (BLK_ZONED_HM)
for host-managed zoned block devices. The standards-defined drive-managed
model is not
From: Hannes Reinecke
The queue limits already have a 'chunk_sectors' setting, so
we should be presenting it via sysfs.
Signed-off-by: Hannes Reinecke
Signed-off-by: Damien Le Moal
---
block/blk-sysfs.c | 11 +++
1 file changed, 11
From: Shaun Tancheff
Add the new BLKREPORTZONE and BLKRESETZONE ioctls, for obtaining the
zone configuration of a zoned block device and for resetting the
write pointer of sequential zones of a zoned block device,
respectively.
The BLKREPORTZONE ioctl maps directly to a
The only user of polling requires that its original request be completed in
its entirety before continuing execution. If the bio needs to be split
and chained for any reason, the direct IO path would have waited for just
that split portion to complete, leading to potential data corruption if
the
On Thu, Mar 31, 2016 at 1:34 AM, Jens Axboe wrote:
> On 03/30/2016 05:31 PM, Alexey Klimov wrote:
>>
>> Hi all,
>>
>> On Wed, Jan 27, 2016 at 9:01 PM, Jeff Moyer wrote:
>>>
>>> Alexey Klimov writes:
>>>
Last user of
Unlocking a mutex twice is wrong. Hence modify blkcg_policy_register()
such that blkcg_pol_mutex is unlocked once if cpd == NULL. This patch
avoids that smatch reports the following error:
block/blk-cgroup.c:1378: blkcg_policy_register() error: double unlock
'mutex:&blkcg_pol_mutex'
Signed-off-by:
On 09/26/2016 11:33 AM, Mike Snitzer wrote:
How much testing has this series seen? Did you run it against the
mptest testsuite? https://github.com/snitm/mptest
Hello Mike,
The output of mptest with MULTIPATH_BACKEND_MODULE="scsidebug":
# ./runtest
[ ... ]
SUCCESS
** summary **
PASSED:
On Mon, Sep 26, 2016 at 11:37 AM, Christoph Hellwig wrote:
>> +	zones = kzalloc(sizeof(struct blk_zone) * rep.nr_zones,
>> +			GFP_KERNEL);
>> +	if (!zones)
>> +		return -ENOMEM;
>
> This should use kcalloc to get us overflow checking for
No objection here.
On Mon, Sep 26, 2016 at 6:30 PM, Damien Le Moal wrote:
>
> Christoph,
>
> On 9/27/16 01:37, Christoph Hellwig wrote:
>>> -/*
>>> - * Zone type.
>>> - */
>>> -enum blk_zone_type {
>>> -	BLK_ZONE_TYPE_UNKNOWN,
>>> -	BLK_ZONE_TYPE_CONVENTIONAL,
>>> -