On Thu, Nov 15, 2018 at 12:51:27PM -0700, Jens Axboe wrote:
> Put the short code in the fast path, where we don't have any
> functions attached to the queue. This minimizes the impact on
> the hot path in the core code.
This looks mechanically fine:
Reviewed-by: Christoph Hellwig
But since I
On Fri, Nov 16, 2018 at 09:10:05AM +0100, Christoph Hellwig wrote:
> blk_mq_stop_hw_queues doesn't need any locking, and the ide
> dev_flags field isn't protected by it either.
Is it a bug that dev_flags is no longer protected by queue_lock after
the mq conversion?
> Signed-off-by: Christoph
On Thu, Nov 15, 2018 at 12:51:31PM -0700, Jens Axboe wrote:
> If we want to support async IO polling, then we have to allow
> finding completions that aren't just for the one we are
> looking for. Always pass in -1 to the mq_ops->poll() helper,
> and have that return how many events were found in
On Fri, Nov 16, 2018 at 09:10:06AM +0100, Christoph Hellwig wrote:
> Replace the lock in mmc_blk_data, which is only used through a pointer
> in struct mmc_queue to protect fields in that structure, with an
> actual lock in struct mmc_queue.
Looks sane to me, but I'll let the mmc people ack.
>
On Fri, Nov 16, 2018 at 03:04:57PM +1100, Dave Chinner wrote:
> They don't run on my test machines because they require a modular
> kernel and I run a monolithic kernel specified externally by the
> qemu command line on all my test VMs.
>
> generic/349 [not run] scsi_debug module not found
>
On Thu, Nov 15, 2018 at 12:51:28PM -0700, Jens Axboe wrote:
> Ensure that writes to the dio/bio waiter field are ordered
> correctly. With the smp_rmb() before the READ_ONCE() check,
> we should be able to use a more relaxed ordering for the
> task state setting. We don't need a heavier barrier on
On Fri, Nov 16, 2018 at 12:37:32AM -0800, Omar Sandoval wrote:
> On Fri, Nov 16, 2018 at 09:10:05AM +0100, Christoph Hellwig wrote:
> > blk_mq_stop_hw_queues doesn't need any locking, and the ide
> > dev_flags field isn't protected by it either.
>
> Is it a bug that dev_flags is no longer
On Thu, Nov 15, 2018 at 12:51:34PM -0700, Jens Axboe wrote:
> Inherit the iocb IOCB_HIPRI flag, and pass on REQ_HIPRI for
> those kinds of requests.
Looks fine,
Reviewed-by: Christoph Hellwig
On Fri, Nov 16, 2018 at 12:32:33AM -0800, Christoph Hellwig wrote:
> On Fri, Nov 16, 2018 at 03:04:57PM +1100, Dave Chinner wrote:
> > They don't run on my test machines because they require a modular
> > kernel and I run a monolithic kernel specified externally by the
> > qemu command line on all
On Fri, Nov 16, 2018 at 09:10:04AM +0100, Christoph Hellwig wrote:
> There is nothing we can synchronize against over a call to
> blk_queue_dying.
Reviewed-by: Omar Sandoval
> Signed-off-by: Christoph Hellwig
> ---
> drivers/ide/ide-pm.c | 3 ---
> 1 file changed, 3 deletions(-)
>
> diff
On Fri, Nov 16, 2018 at 09:10:03AM +0100, Christoph Hellwig wrote:
> blk_queue_max_hw_sectors can't do anything with queue_lock protection
> so don't hold it.
Reviewed-by: Omar Sandoval
> Signed-off-by: Christoph Hellwig
> ---
> drivers/block/pktcdvd.c | 2 --
> 1 file changed, 2 deletions(-)
On Fri, Nov 16, 2018 at 09:10:01AM +0100, Christoph Hellwig wrote:
Reviewed-by: Omar Sandoval
> Signed-off-by: Christoph Hellwig
> ---
> include/linux/blkdev.h | 1 -
> 1 file changed, 1 deletion(-)
>
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index
On Fri, Nov 16, 2018 at 09:10:02AM +0100, Christoph Hellwig wrote:
> There is nothing the queue_lock could protect inside floppy_end_request,
> so remove it.
Reviewed-by: Omar Sandoval
> Signed-off-by: Christoph Hellwig
> ---
> drivers/block/floppy.c | 5 -
> 1 file changed, 5
On Thu, Nov 15, 2018 at 12:51:35PM -0700, Jens Axboe wrote:
> Those will go straight to issue inside blk-mq, so don't bother
> setting up a block plug for them.
Looks fine,
Reviewed-by: Christoph Hellwig
On Fri, Nov 16, 2018 at 12:47:39AM -0800, Christoph Hellwig wrote:
> On Thu, Nov 15, 2018 at 12:51:34PM -0700, Jens Axboe wrote:
> > Inherit the iocb IOCB_HIPRI flag, and pass on REQ_HIPRI for
> > those kinds of requests.
>
> Looks fine,
Actually, who is going to poll for them? With the
Signed-off-by: Christoph Hellwig
---
include/linux/blkdev.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 1d185f1fc333..5c5ef461845f 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -567,7 +567,6 @@ struct
There is nothing we can synchronize against over a call to
blk_queue_dying.
Signed-off-by: Christoph Hellwig
---
drivers/ide/ide-pm.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/drivers/ide/ide-pm.c b/drivers/ide/ide-pm.c
index 51fe10ac02fa..56690f523100 100644
---
blk_mq_stop_hw_queues doesn't need any locking, and the ide
dev_flags field isn't protected by it either.
Signed-off-by: Christoph Hellwig
---
drivers/ide/ide-pm.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/drivers/ide/ide-pm.c b/drivers/ide/ide-pm.c
index 56690f523100..192e6c65d34e
There is nothing the queue_lock could protect inside floppy_end_request,
so remove it.
Signed-off-by: Christoph Hellwig
---
drivers/block/floppy.c | 5 -
1 file changed, 5 deletions(-)
diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
index eeb4be8d000b..218099dd8e44 100644
---
Hi Jens,
a few fixups for the queue_lock conversion, drop a few more bogus
queue_lock uses in drivers, and clean up the mmc use of the queue_lock
as suggested by Ulf.
Replace the lock in mmc_blk_data, which is only used through a pointer
in struct mmc_queue to protect fields in that structure, with an
actual lock in struct mmc_queue.
Suggested-by: Ulf Hansson
Signed-off-by: Christoph Hellwig
---
drivers/mmc/core/block.c | 24 +++-
On Thu, Nov 15, 2018 at 12:51:25PM -0700, Jens Axboe wrote:
> If we have separate poll queues, we know that they aren't using
> interrupts. Hence we don't need to disable interrupts around
> finding completions.
>
> Provide a separate set of blk_mq_ops for such devices.
This looks ok, but I'd
On Thu, Nov 15, 2018 at 12:51:26PM -0700, Jens Axboe wrote:
> Various spots check for q->mq_ops being non-NULL, but provide
> a helper to do this instead.
>
> Where the ->mq_ops != NULL check is redundant, remove it.
>
> Since mq == rq-based now that legacy is gone, get rid of the
>
On Fri, Nov 16, 2018 at 09:40:04AM +0100, Christoph Hellwig wrote:
> On Fri, Nov 16, 2018 at 12:37:32AM -0800, Omar Sandoval wrote:
> > On Fri, Nov 16, 2018 at 09:10:05AM +0100, Christoph Hellwig wrote:
> > > blk_mq_stop_hw_queues doesn't need any locking, and the ide
> > > dev_flags field isn't
On Thu, Nov 15, 2018 at 12:51:29PM -0700, Jens Axboe wrote:
> If we're polling for IO on a device that doesn't use interrupts, then
> the IO completion loop (and the waking of the task) is done by the submitting task itself.
> If that is the case, then we don't need to enter the wake_up_process()
> function, we can
On Fri, Nov 16, 2018 at 12:46:58AM -0800, Omar Sandoval wrote:
> > > generic/349 [not run] scsi_debug module not found
> > > generic/350 [not run] scsi_debug module not found
> > > generic/351 [not run] scsi_debug module not found
> >
> > Same here, btw. Any test that requires
Hi Jens,
The 1st patch fixes the kobject lifetime issue, which is triggered when
DEBUG_KOBJECT_RELEASE is enabled.
The 2nd patch can be thought of as a follow-up cleanup.
V2:
- allocate 'blk_mq_ctx' inside blk_mq_init_allocated_queue()
- allocate q->mq_kobj directly
Ming Lei (2):
Now q->queue_ctx is just a read-mostly table for looking up the
'blk_mq_ctx' instance from a cpu index, so it isn't necessary
to allocate it as a percpu variable. A simple array may be
more efficient.
Cc: "jianchao.wang"
Cc: Guenter Roeck
Cc: Greg Kroah-Hartman
Signed-off-by: Ming Lei
---
Even though .mq_kobj, ctx->kobj and q->kobj share the same lifetime
from the block layer's view, in reality they don't, because userspace
may grab a kobject at any time via sysfs. Each kobject's lifetime
therefore has to be independent, so the objects (mq_kobj, ctx) that
host their own kobject have to be allocated
On Fri, Nov 16, 2018 at 03:04:57PM +1100, Dave Chinner wrote:
> On Thu, Nov 15, 2018 at 02:24:19PM -0800, Darrick J. Wong wrote:
> > On Fri, Nov 16, 2018 at 09:13:37AM +1100, Dave Chinner wrote:
> > > On Thu, Nov 15, 2018 at 11:10:36AM +0800, Ming Lei wrote:
> > > > On Thu, Nov 15, 2018 at
Hi all,
a while ago Thadeu reported installer problems because nvme multipath
didn't provide slaves/holders links for the underlying devices.
This is partly because Tejun said the API shall not be used for new
users, but also because it currently hangs off the block_device
structure, which has
This allows tools like distro installers to easily track the relationship.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/core.c | 5 +++--
drivers/nvme/host/multipath.c | 12 ++--
drivers/nvme/host/nvme.h | 12
3 files changed, 21 insertions(+), 8
We'd like to track the slaves and holders for nvme multipath devices
in the same standard fashion as all the other stacked block devices,
to make life easy for things like distro installers.
But struct block_device only exists while we have open instances,
which we never have for the underlying
On Fri, Nov 16, 2018 at 07:23:10PM +0800, Ming Lei wrote:
> @@ -456,7 +456,7 @@ struct request_queue {
> /*
>  * mq queue kobject
>  */
> - struct kobject mq_kobj;
> + struct kobject *mq_kobj;
What is this kobject even used for? It wasn't obvious at all from this
patch,
On Fri, Nov 16, 2018 at 07:23:11PM +0800, Ming Lei wrote:
> Now q->queue_ctx is just a read-mostly table for looking up the
> 'blk_mq_ctx' instance from a cpu index, so it isn't necessary
> to allocate it as a percpu variable. A simple array may be
> more efficient.
"may be", have you run benchmarks
On 11/16/18 8:19 AM, Jens Axboe wrote:
> On 11/16/18 1:43 AM, Christoph Hellwig wrote:
>> On Thu, Nov 15, 2018 at 12:51:31PM -0700, Jens Axboe wrote:
>>> If we want to support async IO polling, then we have to allow
>>> finding completions that aren't just for the one we are
>>> looking for.
On 11/16/18 1:10 AM, Christoph Hellwig wrote:
> Hi Jens,
>
> a few fixups for the queue_lock conversion, drop a few more bogus
> queue_lock uses in drivers, and clean up the mmc use of the queue_lock
> as suggested by Ulf.
Applied 1-5 for now, giving Ulf a chance to check out #6.
--
Jens Axboe
The pull request you sent on Thu, 15 Nov 2018 13:14:47 -0700:
> git://git.kernel.dk/linux-block.git tags/for-linus-20181115
has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/59749c2d49bf28df69ac4bcabf1f69b00d3dca59
Thank you!
--
Deet-doot-dot, I am a bot.
On 11/16/18 1:35 AM, Christoph Hellwig wrote:
> On Thu, Nov 15, 2018 at 12:51:25PM -0700, Jens Axboe wrote:
>> If we have separate poll queues, we know that they aren't using
>> interrupts. Hence we don't need to disable interrupts around
>> finding completions.
>>
>> Provide a separate set of
On 11/16/18 1:41 AM, Christoph Hellwig wrote:
> On Thu, Nov 15, 2018 at 12:51:28PM -0700, Jens Axboe wrote:
>> Ensure that writes to the dio/bio waiter field are ordered
>> correctly. With the smp_rmb() before the READ_ONCE() check,
>> we should be able to use a more relaxed ordering for the
>>
On Fri, Nov 16, 2018 at 03:49:06PM +0800, Ming Lei wrote:
On Fri, Nov 16, 2018 at 01:52:05AM -0500, Sasha Levin wrote:
On Fri, Nov 16, 2018 at 11:28:25AM +0800, Ming Lei wrote:
> Even though .mq_kobj, ctx->kobj and q->kobj share the same lifetime
> from the block layer's view, in reality they don't
On Fri, Nov 16, 2018 at 06:06:23AM -0800, Greg Kroah-Hartman wrote:
> On Fri, Nov 16, 2018 at 07:23:11PM +0800, Ming Lei wrote:
> > Now q->queue_ctx is just a read-mostly table for looking up the
> > 'blk_mq_ctx' instance from a cpu index, so it isn't necessary
> > to allocate it as a percpu variable.
On Fri, Nov 16, 2018 at 06:05:21AM -0800, Greg Kroah-Hartman wrote:
> On Fri, Nov 16, 2018 at 07:23:10PM +0800, Ming Lei wrote:
> > @@ -456,7 +456,7 @@ struct request_queue {
> > /*
> > * mq queue kobject
> > */
> > - struct kobject mq_kobj;
> > + struct kobject *mq_kobj;
>
>
On 11/16/18 1:48 AM, Christoph Hellwig wrote:
> On Fri, Nov 16, 2018 at 12:47:39AM -0800, Christoph Hellwig wrote:
>> On Thu, Nov 15, 2018 at 12:51:34PM -0700, Jens Axboe wrote:
>>> Inherit the iocb IOCB_HIPRI flag, and pass on REQ_HIPRI for
>>> those kinds of requests.
>>
>> Looks fine,
>
>
On 11/16/18 1:43 AM, Christoph Hellwig wrote:
> On Thu, Nov 15, 2018 at 12:51:31PM -0700, Jens Axboe wrote:
>> If we want to support async IO polling, then we have to allow
>> finding completions that aren't just for the one we are
>> looking for. Always pass in -1 to the mq_ops->poll() helper,
>>