Signed-off-by: Christoph Hellwig
---
include/linux/blkdev.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 1d185f1fc333..5c5ef461845f 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -567,7 +567,6 @@ struct
On Fri, Nov 16, 2018 at 02:11:07PM +0800, jianchao.wang wrote:
>
>
> On 11/16/18 11:28 AM, Ming Lei wrote:
> ...
> >
> > +struct blk_mq_kobj {
> > + struct kobject kobj;
> > +};
> > +
> > static void blk_mq_sysfs_release(struct kobject *kobj)
> > {
> > + struct blk_mq_kobj *mq_kobj =
On Fri, Nov 16, 2018 at 01:52:05AM -0500, Sasha Levin wrote:
> On Fri, Nov 16, 2018 at 11:28:25AM +0800, Ming Lei wrote:
> > Even though .mq_kobj, ctx->kobj and q->kobj share same lifetime
> > from block layer's view, actually they don't because userspace may
> > grab one kobject anytime via
On Fri, Nov 16, 2018 at 11:28:25AM +0800, Ming Lei wrote:
Even though .mq_kobj, ctx->kobj and q->kobj share same lifetime
from block layer's view, actually they don't because userspace may
grab one kobject anytime via sysfs, so each kobject's lifetime has
to be independent, then the
On 11/16/18 11:28 AM, Ming Lei wrote:
...
>
> +struct blk_mq_kobj {
> + struct kobject kobj;
> +};
> +
> static void blk_mq_sysfs_release(struct kobject *kobj)
> {
> + struct blk_mq_kobj *mq_kobj = container_of(kobj, struct blk_mq_kobj,
> +
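The quoted hunk recovers the containing blk_mq_kobj from the embedded kobject with the usual container_of() idiom. A minimal userspace sketch of the same pattern (struct names mirror the patch; the payload field and everything else is illustrative, not the actual kernel code):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace stand-ins for the kernel types in the quoted hunk. */
struct kobject { int refcount; };

struct blk_mq_kobj {
	int tag;               /* illustrative payload, not in the patch */
	struct kobject kobj;   /* embedded, as in the patch */
};

/* container_of: step back from a member pointer to its enclosing struct. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Analogue of blk_mq_sysfs_release(): given only the kobject,
 * recover the blk_mq_kobj that embeds it. */
static struct blk_mq_kobj *to_mq_kobj(struct kobject *kobj)
{
	return container_of(kobj, struct blk_mq_kobj, kobj);
}
```

The point of wrapping the kobject in its own allocated struct is that the release callback can then free exactly that object, independent of the queue's lifetime.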
On Thu, Nov 15, 2018 at 02:24:19PM -0800, Darrick J. Wong wrote:
> On Fri, Nov 16, 2018 at 09:13:37AM +1100, Dave Chinner wrote:
> > On Thu, Nov 15, 2018 at 11:10:36AM +0800, Ming Lei wrote:
> > > On Thu, Nov 15, 2018 at 12:22:01PM +1100, Dave Chinner wrote:
> > > > On Thu, Nov 15, 2018 at
Hi Jens,
The 1st patch fixes the kobject lifetime issue which is triggered when
DEBUG_KOBJECT_RELEASE is enabled.
The 2nd patch can be thought of as a follow-up cleanup.
Ming Lei (2):
blk-mq: not embed .mq_kobj and ctx->kobj into queue instance
blk-mq: alloc q->queue_ctx as normal array
Even though .mq_kobj, ctx->kobj and q->kobj share the same lifetime
from the block layer's view, they actually don't, because userspace may
grab one kobject at any time via sysfs, so each kobject's lifetime has
to be independent; then the objects (mq_kobj, ctx) which host their
own kobjects have to be allocated
Now q->queue_ctx is just a read-mostly table for querying the
'blk_mq_ctx' instance from a CPU index; it isn't necessary
to allocate it as a percpu variable. A simple array may be
more efficient.
Cc: Guenter Roeck
Cc: Greg Kroah-Hartman
Signed-off-by: Ming Lei
---
block/blk-mq-sysfs.c | 6
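The cleanup described above replaces a percpu allocation with a plain pointer table indexed by CPU number. A rough userspace sketch of the resulting allocation and lookup (names and shapes are illustrative; the real code lives in block/blk-mq.c):

```c
#include <assert.h>
#include <stdlib.h>

struct blk_mq_ctx { int cpu; };

/* Read-mostly table: one pointer per possible CPU, allocated once
 * at queue init instead of as a percpu variable. */
static struct blk_mq_ctx **alloc_ctx_table(int nr_cpus)
{
	struct blk_mq_ctx **tbl = calloc(nr_cpus, sizeof(*tbl));
	for (int cpu = 0; cpu < nr_cpus; cpu++) {
		tbl[cpu] = malloc(sizeof(*tbl[cpu]));
		tbl[cpu]->cpu = cpu;
	}
	return tbl;
}

/* Lookup is a single array index; no percpu accessor machinery needed. */
static struct blk_mq_ctx *ctx_for_cpu(struct blk_mq_ctx **tbl, int cpu)
{
	return tbl[cpu];
}
```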
On Fri, Nov 16, 2018 at 09:13:37AM +1100, Dave Chinner wrote:
> On Thu, Nov 15, 2018 at 11:10:36AM +0800, Ming Lei wrote:
> > On Thu, Nov 15, 2018 at 12:22:01PM +1100, Dave Chinner wrote:
> > > On Thu, Nov 15, 2018 at 09:06:52AM +0800, Ming Lei wrote:
> > > > On Wed, Nov 14, 2018 at 08:18:24AM
On Thu, Nov 15, 2018 at 11:10:36AM +0800, Ming Lei wrote:
> On Thu, Nov 15, 2018 at 12:22:01PM +1100, Dave Chinner wrote:
> > On Thu, Nov 15, 2018 at 09:06:52AM +0800, Ming Lei wrote:
> > > On Wed, Nov 14, 2018 at 08:18:24AM -0700, Jens Axboe wrote:
> > > > On 11/13/18 2:43 PM, Dave Chinner wrote:
in floppy (me)
- Fix SCSI queue cleanup regression. While elusive, it caused oopses in
queue running (Ming)
- Fix bad string copy in kyber tracing (Omar)
Please pull!
git://git.kernel.dk/linux-block.git tags/for-linus-20181115
Inherit the iocb IOCB_HIPRI flag, and pass on REQ_HIPRI for
those kinds of requests.
Signed-off-by: Jens Axboe
---
fs/block_dev.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 7810f5b588ea..c124982b810d 100644
--- a/fs/block_dev.c
+++
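The change above propagates the per-iocb polling hint into the request flags. A schematic of the mapping, with made-up flag values and a made-up helper name purely for illustration (the real flag bits are defined in the kernel headers):

```c
#include <assert.h>

/* Illustrative stand-ins for the kernel flag bits. */
#define IOCB_HIPRI  (1u << 0)   /* caller wants to poll for completion */
#define REQ_HIPRI   (1u << 7)   /* tell blk-mq this request is polled */

struct kiocb { unsigned int ki_flags; };

/* If the iocb asked for polled IO, mark the request flags accordingly. */
static unsigned int bio_opf_from_iocb(const struct kiocb *iocb,
				      unsigned int opf)
{
	if (iocb->ki_flags & IOCB_HIPRI)
		opf |= REQ_HIPRI;
	return opf;
}
```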
Those will go straight to issue inside blk-mq, so don't bother
setting up a block plug for them.
Signed-off-by: Jens Axboe
---
fs/block_dev.c | 12 ++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/fs/block_dev.c b/fs/block_dev.c
index c124982b810d..9dc695a3af4e 100644
If we want to support async IO polling, then we have to allow
finding completions that aren't just for the one we are
looking for. Always pass in -1 to the mq_ops->poll() helper,
and have that return how many events were found in this poll
loop.
Signed-off-by: Jens Axboe
---
block/blk-mq.c
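With ->poll() returning how many completions it found rather than success for one specific request, a caller can pass -1 to reap everything available. A toy model of that contract (entirely illustrative; not the NVMe or blk-mq implementation):

```c
#include <assert.h>
#include <stdbool.h>

#define TAG_ANY (-1)

/* Toy completion queue: done[i] marks request i as completed. */
struct toy_cq { bool done[8]; int nr; };

/* Model of the new mq_ops->poll() contract: with tag == -1, reap
 * every completion found and return how many there were. */
static int toy_poll(struct toy_cq *cq, int tag)
{
	int found = 0;
	for (int i = 0; i < cq->nr; i++) {
		if (cq->done[i] && (tag == TAG_ANY || tag == i)) {
			cq->done[i] = false;  /* reap it */
			found++;
		}
	}
	return found;
}
```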
Right now we immediately bail if need_resched() is true, but
we need to do at least one loop iteration in case we have entries
waiting. So just invert the need_resched() check, putting it at the
bottom of the loop.
Signed-off-by: Jens Axboe
---
block/blk-mq.c | 4 ++--
1 file changed, 2 insertions(+), 2
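Moving the need_resched() test from the top of the loop to the bottom guarantees at least one polling pass even when a reschedule is already pending. The shape of the change, sketched abstractly with stub functions:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for the kernel's need_resched() and one poll pass. */
static int passes;
static bool fake_need_resched(void) { return true; }  /* always pending */
static void one_poll_pass(void) { passes++; }

/* Old shape: bail before doing any work if a resched is pending. */
static int poll_top_check(void)
{
	while (!fake_need_resched())
		one_poll_pass();
	return passes;
}

/* New shape: check at the bottom, so one pass always runs. */
static int poll_bottom_check(void)
{
	do {
		one_poll_pass();
	} while (!fake_need_resched());
	return passes;
}
```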
Ensure that writes to the dio/bio waiter field are ordered
correctly. With the smp_rmb() before the READ_ONCE() check,
we should be able to use a more relaxed ordering for the
task state setting. We don't need a heavier barrier on
the wakeup side after writing the waiter field, since we
either
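The dio waiter handshake relies on ordering between the data write and the write that publishes it to the waiter. A userspace analogue using C11 release/acquire semantics (the kernel uses smp_* barriers, READ_ONCE and task-state APIs instead; this only models the pairing, and all names here are illustrative):

```c
#include <assert.h>
#include <stdatomic.h>

static int dio_result;            /* plain data written before publish */
static atomic_int waiter_done;    /* stands in for the waiter field */

static void completion_side(void)
{
	dio_result = 42;                       /* write the data first */
	atomic_store_explicit(&waiter_done, 1, /* then publish: release */
			      memory_order_release);
}

/* The acquire load pairs with the release store, so once we observe
 * waiter_done == 1 the data write is guaranteed to be visible. */
static int wait_side(void)
{
	while (!atomic_load_explicit(&waiter_done, memory_order_acquire))
		;  /* spin; the kernel would set task state and schedule */
	return dio_result;
}
```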
If we're polling for IO on a device that doesn't use interrupts, then
the IO completion loop (and the wake-up of the task) is done by the
submitting task itself. If that is the case, then we don't need to
enter the wake_up_process() function; we can simply mark ourselves as
TASK_RUNNING.
Signed-off-by: Jens Axboe
We currently only really support sync poll, i.e. poll with one
IO in flight. This prepares us for supporting async poll.
Note that the returned value isn't necessarily 100% accurate.
If poll races with IRQ completion, we assume that the fact
that the task is now runnable means we found at least one
blk_poll() has always kept spinning until it found an IO. This is
fine for SYNC polling, since we need to find one request we have
pending, but in preparation for ASYNC polling it can be beneficial
to just check if we have any entries available or not.
Existing callers are converted to pass in
Various spots check for q->mq_ops being non-NULL, so provide
a helper to do this instead.
Where the ->mq_ops != NULL check is redundant, remove it.
Since mq == rq-based now that legacy is gone, get rid of the
queue_is_rq_based() and just use queue_is_mq() everywhere.
Signed-off-by: Jens Axboe
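The helper described above just wraps the ->mq_ops != NULL test. Its likely shape, shown here with a stub request_queue for illustration (the real struct has many more fields):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct blk_mq_ops { int unused; };  /* stub; opaque for this sketch */

struct request_queue {
	const struct blk_mq_ops *mq_ops;
};

/* Now that the legacy path is gone, mq-based is the only rq-based
 * kind of queue, so one helper replaces both the open-coded checks
 * and the old queue_is_rq_based(). */
static inline bool queue_is_mq(const struct request_queue *q)
{
	return q->mq_ops != NULL;
}
```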
Put the short code in the fast path, where we don't have any
functions attached to the queue. This minimizes the impact on
the hot path in the core code.
Cc: Josef Bacik
Signed-off-by: Jens Axboe
---
block/blk-rq-qos.c | 63 +-
block/blk-rq-qos.h |
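Putting the empty-list check inline keeps the common no-qos case down to a single pointer test, with the list walk out of line. A schematic of that split, with names modelled on blk-rq-qos but the details made up for illustration:

```c
#include <assert.h>
#include <stddef.h>

struct rq_qos {
	struct rq_qos *next;
	void (*track)(struct rq_qos *rqos, int *counter);
};

struct queue { struct rq_qos *rq_qos; };

/* Slow path, out of line: walk the attached policies. */
static void __rq_qos_track(struct rq_qos *rqos, int *counter)
{
	for (; rqos; rqos = rqos->next)
		if (rqos->track)
			rqos->track(rqos, counter);
}

/* Fast path, inline: the hot no-policy case is one NULL check,
 * with no function call at all. */
static inline void rq_qos_track(struct queue *q, int *counter)
{
	if (q->rq_qos)
		__rq_qos_track(q->rq_qos, counter);
}

/* Example policy callback, for demonstration only. */
static void demo_track(struct rq_qos *rqos, int *counter)
{
	(void)rqos;
	(*counter)++;
}
```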
If we have separate poll queues, we know that they aren't using
interrupts. Hence we don't need to disable interrupts around
finding completions.
Provide a separate set of blk_mq_ops for such devices.
Signed-off-by: Jens Axboe
---
drivers/nvme/host/pci.c | 45
Some of these are optimizations, the latter part is prep work
for supporting polling with aio.
Patches against my for-4.21/block branch. These patches can also
be found in my mq-perf branch, though there are other patches
sitting on top of this series (notably aio polling, as mentioned).
Changes
On 11/15/18 12:14 PM, Jens Axboe wrote:
> On 11/14/18 9:02 AM, Christoph Hellwig wrote:
>> Hi Jens,
>>
>> this series removes another bunch of legacy request leftovers,
>> including the pointer indirection for the queue_lock.
>
> Applied, with the subname part removed as mentioned in #13.
Your
On 11/14/18 9:02 AM, Christoph Hellwig wrote:
> Hi Jens,
>
> this series removes another bunch of legacy request leftovers,
> including the pointer indirection for the queue_lock.
Applied, with the subname part removed as mentioned in #13.
> Note that we have very few queue_lock users left, I
On Thu, Nov 15, 2018 at 07:55:02AM +0100, Hannes Reinecke wrote:
>> Signed-off-by: Christoph Hellwig
>> ---
>> block/blk-core.c | 54 ++--
>> block/blk-mq.c | 2 +-
>> block/blk-settings.c | 10 +++-
>> block/blk-sysfs.c | 28
On Wed, Nov 14, 2018 at 06:56:41PM +0100, Ulf Hansson wrote:
> > } else {
> > @@ -397,6 +397,7 @@ int mmc_init_queue(struct mmc_queue *mq, struct
> > mmc_card *card,
> > int ret;
> >
> > mq->card = card;
> > + mq->lock = lock;
>
> Unless I am mistaken, it seems like
On Wed, Nov 14, 2018 at 06:31:45PM +0100, Ulf Hansson wrote:
> > + * @subname: partition subname
>
> Drop subname :-)
Fixed.