On Sat, Nov 17, 2018 at 10:34:18AM +0800, Ming Lei wrote:
> On Fri, Nov 16, 2018 at 06:06:23AM -0800, Greg Kroah-Hartman wrote:
> > On Fri, Nov 16, 2018 at 07:23:11PM +0800, Ming Lei wrote:
> > > Now q->queue_ctx is just a read-mostly table for looking up the
> > > 'blk_mq_ctx' instance for a given CPU
On 16 November 2018 at 09:10, Christoph Hellwig wrote:
> Replace the lock in mmc_blk_data, which is only used through a pointer
> in struct mmc_queue to protect fields in that structure, with
> an actual lock in struct mmc_queue.
>
> Suggested-by: Ulf Hansson
> Signed-off-by: Christoph Hellwig
Add an interface to adjust the I/O timeout per device.
Signed-off-by: Weiping Zhang
---
block/blk-sysfs.c | 27 +++
1 file changed, 27 insertions(+)
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 80eef48fddc8..0cabfb935e71 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
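As a usage fragment only (the exact attribute path is an assumption based on the blk-sysfs.c diffstat above, not confirmed by this excerpt), such a per-device knob would typically be driven from userspace like any other queue attribute:

```shell
# Hypothetical sysfs usage for a per-device I/O timeout attribute.
# Device name and attribute path are illustrative assumptions.
cat /sys/block/sda/queue/io_timeout        # read the current timeout (ms)
echo 60000 > /sys/block/sda/queue/io_timeout   # raise it to 60 seconds
```
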
If the ioprio capability check fails, we return without putting
the file pointer.
Fixes: d9a08a9e616b ("fs: Add aio iopriority support")
Signed-off-by: Jens Axboe
---
fs/aio.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/fs/aio.c b/fs/aio.c
index b36691268b6c..3d9bc81cf500 100644
--- a/fs/aio.c
Up until now, IO polling has been exclusively available through preadv2
and pwritev2, both fully synchronous interfaces. This works fine for
completely synchronous use cases, but that's about it. If QD=1 wasn't
enough to reach the performance goals, the only alternative was to increase
the thread count.
Add the field and have the blockdev direct_IO() helpers set it.
This is in preparation for being able to poll for iocb completion.
Signed-off-by: Jens Axboe
---
fs/block_dev.c | 2 ++
include/linux/fs.h | 1 +
2 files changed, 3 insertions(+)
diff --git a/fs/block_dev.c b/fs/block_dev.c
We know this is a read/write request, but in preparation for
having different kinds of those, ensure that we call the assigned
handler instead of assuming it's aio_complete_rq().
Signed-off-by: Jens Axboe
---
fs/aio.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/aio.c
Add polled variants of PREAD/PREADV and PWRITE/PWRITEV. These act
like their non-polled counterparts, except we expect to poll for
completion of them. The polling happens at io_getevents() time, and
works just like non-polled IO.
Polled IO doesn't support the user mapped completion ring. Events
must be reaped through io_getevents().
Needs further work, but this should work fine on normal setups
with a file system on a pollable block device.
Signed-off-by: Jens Axboe
---
fs/aio.c | 2 ++
fs/direct-io.c | 4 +++-
fs/iomap.c | 7 +--
3 files changed, 10 insertions(+), 3 deletions(-)
diff --git a/fs/aio.c
blk_poll() has always kept spinning until it found an IO. This is
fine for SYNC polling, since we need to find one request we have
pending, but in preparation for ASYNC polling it can be beneficial
to just check if we have any entries available or not.
Existing callers are converted to pass in
This relies on the fc target ops setting ->poll_queue, which
nobody does. Otherwise it just checks if something has
completed, which isn't very useful.
Signed-off-by: Jens Axboe
---
drivers/nvme/host/fc.c | 33 -
1 file changed, 33 deletions(-)
diff --git
Right now we immediately bail if need_resched() is true, but
we need to do at least one loop in case we have entries waiting.
So just invert the need_resched() check, putting it at the
bottom of the loop.
Signed-off-by: Jens Axboe
---
block/blk-mq.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
For the core poll helper, the task state setting doesn't need
to imply any atomics, as it's the current task itself that
is being modified and we're not going to sleep.
For IRQ driven IO, the wakeup path has the necessary barriers,
so we don't need the heavy handed version of the task
state setting.
We currently only really support sync poll, i.e. poll with 1
IO in flight. This prepares us for supporting async poll.
Note that the returned value isn't necessarily 100% accurate.
If poll races with IRQ completion, we assume that the fact
that the task is now runnable means we found at least one completion.
We always pass in -1 now and none of the callers use the tag value,
so remove the parameter.
Signed-off-by: Jens Axboe
---
block/blk-mq.c | 2 +-
drivers/nvme/host/pci.c | 8
drivers/nvme/host/rdma.c | 2 +-
include/linux/blk-mq.h | 2 +-
4 files changed, 7 insertions(+), 7 deletions(-)
If we want to support async IO polling, then we have to allow
finding completions that aren't just for the one we are
looking for. Always pass in -1 to the mq_ops->poll() helper,
and have that return how many events were found in this poll
loop.
Signed-off-by: Jens Axboe
---
block/blk-mq.c
Some of these are optimizations, the latter part is prep work
for supporting polling with aio.
Patches against my for-4.21/block branch. These patches can also
be found in my mq-perf branch. Some of them are prep patches for
the aio poll work, which can be found in my aio-poll branch.
These will