On 11/19/18 6:52 PM, Damien Le Moal wrote:
> This small series based on for-4.21/block brings improvements to I/O priority
> handling. The main fixes are in patches 5, 6 and 7. These fix BIO and request
> I/O priority initialization for both the synchronous and asynchronous paths.
Applied,
bio->bi_ioc is never set, so it is always NULL. Remove the references to
it in bio_disassociate_task() and rq_ioc(), and delete this field from
struct bio. With this change, rq_ioc() always returns
current->io_context without needing a bio argument. Further
simplify the code and make it more readable.
Define get_current_ioprio() as an inline helper to obtain the caller's
I/O priority from its task I/O context. Use this helper in
blk_init_request_from_bio() to set a request's ioprio.
Reviewed-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
Signed-off-by: Damien Le Moal
---
For cases where the application does not specify aio_reqprio for an aio,
fall back to get_current_ioprio() to obtain the task I/O priority
last set using ioprio_set(), rather than the hardcoded IOPRIO_CLASS_NONE
value.
Reviewed-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
Reviewed-by:
On Mon, 2018-11-19 at 00:16 -0800, Christoph Hellwig wrote:
> On Mon, Nov 19, 2018 at 12:51:27PM +0900, Damien Le Moal wrote:
> > As explained in ioprio_get() and ionice man pages, the default I/O
> > priority class for processes which have not set an I/O priority
> > using
> > ioprio_set() is
Growing a high-priority request by merging it with a lower-priority
BIO or request increases the request's execution time. This
is the opposite of the desired effect of high I/O priorities,
namely getting low I/O latencies. Prevent merging of requests and BIOs
that have different
Comment the use of the IOCB_FLAG_IOPRIO aio flag similarly to the
IOCB_FLAG_RESFD flag.
Reviewed-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
Signed-off-by: Damien Le Moal
---
include/uapi/linux/aio_abi.h | 2 ++
1 file changed, 2 insertions(+)
diff --git
This small series based on for-4.21/block brings improvements to I/O priority
handling. The main fixes are in patches 5, 6 and 7. These fix BIO and request
I/O priority initialization for both the synchronous and asynchronous paths.
Changes from v1:
* Removed not very useful comments in patch 5
For the synchronous I/O path case (read(), write(), etc. system calls), a
BIO I/O priority is not initialized until the execution of
blk_init_request_from_bio() when the BIO is submitted and a request
initialized for the BIO execution. This is due to the ki_ioprio field of
the struct kiocb defined
Adam,
On 2018/11/20 3:18, Adam Manzanares wrote:
> On Mon, 2018-11-19 at 12:51 +0900, Damien Le Moal wrote:
>> Define get_current_ioprio() as an inline helper to obtain the caller
>> I/O priority from its task I/O context. Use this helper in
>> blk_init_request_from_bio() to set a request ioprio.
Even though .mq_kobj, ctx->kobj and q->kobj share the same lifetime
from the block layer's view, they actually don't, because userspace may
grab a kobject at any time via sysfs.
This patch fixes the issue by the following approach:
1) introduce 'struct blk_mq_ctxs' for holding .mq_kobj and managing
all
On Tue, Nov 20, 2018 at 09:44:35AM +0800, Ming Lei wrote:
> Even though .mq_kobj, ctx->kobj and q->kobj share same lifetime
> from block layer's view, actually they don't because userspace may
> grab one kobject anytime via sysfs.
>
> This patch fixes the issue by the following approach:
>
> 1)
On Mon, 2018-11-19 at 12:51 +0900, Damien Le Moal wrote:
> bio->bi_ioc is never set so always NULL. Remove references to it in
> bio_disassociate_task() and in rq_ioc() and delete this field from
> struct bio. With this change, rq_ioc() always returns
> current->io_context without the need for a
On Mon, 2018-11-19 at 12:51 +0900, Damien Le Moal wrote:
> For cases when the application does not specify aio_reqprio for an
> aio,
> fallback to use get_current_ioprio() to obtain the task I/O priority
> last set using ioprio_set() rather than the hardcoded
> IOPRIO_CLASS_NONE
> value.
>
>
On Mon, 2018-11-19 at 12:51 +0900, Damien Le Moal wrote:
> For the synchronous I/O path case (read(), write() etc system calls),
> a
> BIO I/O priority is not initialized until the execution of
> blk_init_request_from_bio() when the BIO is submitted and a request
> initialized for the BIO
On Mon, 2018-11-19 at 12:51 +0900, Damien Le Moal wrote:
> Define get_current_ioprio() as an inline helper to obtain the caller
> I/O priority from its task I/O context. Use this helper in
> blk_init_request_from_bio() to set a request ioprio.
>
> Signed-off-by: Damien Le Moal
> ---
>
On 11/19/2018 7:19 AM, Jens Axboe wrote:
On 11/19/18 12:59 AM, Christoph Hellwig wrote:
On Sat, Nov 17, 2018 at 02:43:50PM -0700, Jens Axboe wrote:
This relies on the fc target ops setting ->poll_queue, which
nobody does. Otherwise it just checks if something has
completed, which isn't very
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 52b1c97cd7c6..3ca00d712158 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -3266,9 +3266,7 @@ static bool blk_mq_poll_hybrid_sleep(struct
> request_queue *q,
>* 0: use half of prev avg
>* >0: use this specific
On Sat, Nov 17, 2018 at 02:43:52PM -0700, Jens Axboe wrote:
> We always pass in -1 now and none of the callers use the tag value,
> remove the parameter.
>
> Signed-off-by: Jens Axboe
Looks good,
Reviewed-by: Christoph Hellwig
> -bool blk_poll(struct request_queue *q, blk_qc_t cookie)
> +bool blk_poll(struct request_queue *q, blk_qc_t cookie, bool spin)
I find the parameter name a little confusing. Maybe wait_for_request,
although I don't particularly like that one either. But we really need
to document the parameter
On Sat, Nov 17, 2018 at 02:43:54PM -0700, Jens Axboe wrote:
> Right now we immediately bail if need_resched() is true, but
> we need to do at least one loop in case we have entries waiting.
> So just invert the need_resched() check, putting it at the
> bottom of the loop.
Looks good,
On Sat, Nov 17, 2018 at 04:53:13PM -0700, Jens Axboe wrote:
> We know this is a read/write request, but in preparation for
> having different kinds of those, ensure that we call the assigned
> handler instead of assuming it's aio_complete_rq().
>
> Signed-off-by: Jens Axboe
Looks good,
On Sat, Nov 17, 2018 at 04:53:14PM -0700, Jens Axboe wrote:
> If the ioprio capability check fails, we return without putting
> the file pointer.
>
> Fixes: d9a08a9e616b ("fs: Add aio iopriority support")
> Signed-off-by: Jens Axboe
Looks good. Please also send it to Al so that it can go into
On Fri, Nov 16, 2018 at 02:37:10PM +0100, Christoph Hellwig wrote:
> On Thu, Nov 15, 2018 at 04:52:54PM +0800, Ming Lei wrote:
> > index 2955a4ea2fa8..161e14b8b180 100644
> > --- a/fs/btrfs/compression.c
> > +++ b/fs/btrfs/compression.c
> > @@ -400,8 +400,11 @@ blk_status_t
I like this idea, but there are a couple issues here.
First the flag per command really doesn't work - we need a creation
time flag. Unfortunately the existing io_setup system call doesn't
take flags, so we'll need to add a new one.
Second we need a check that the polling mode is actually
On Sat, Nov 17, 2018 at 04:53:17PM -0700, Jens Axboe wrote:
> Needs further work, but this should work fine on normal setups
> with a file system on a pollable block device.
>
> Signed-off-by: Jens Axboe
> ---
> fs/aio.c | 2 ++
> fs/direct-io.c | 4 +++-
> fs/iomap.c | 7 +--
> 3
On Mon, Nov 19, 2018 at 12:51:25PM +0900, Damien Le Moal wrote:
> Comment the use of the IOCB_FLAG_IOPRIO aio flag similarly to the
> IOCB_FLAG_RESFD flag.
>
> Signed-off-by: Damien Le Moal
Looks good,
Reviewed-by: Christoph Hellwig
On Mon, Nov 19, 2018 at 12:51:26PM +0900, Damien Le Moal wrote:
> bio->bi_ioc is never set so always NULL. Remove references to it in
> bio_disassociate_task() and in rq_ioc() and delete this field from
> struct bio. With this change, rq_ioc() always returns
> current->io_context without the need
Looks good,
Reviewed-by: Johannes Thumshirn
--
Johannes Thumshirn                                        SUSE Labs
jthumsh...@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Looks good,
Reviewed-by: Johannes Thumshirn
On Thu, Nov 15, 2018 at 04:23:56PM -0800, Omar Sandoval wrote:
> On Thu, Nov 15, 2018 at 04:52:55PM +0800, Ming Lei wrote:
> > BTRFS is the only user of this helper, so move this helper into
> > BTRFS, and implement it via bio_for_each_segment_all(), since
> > bio->bi_vcnt may not equal to number
On Mon, Nov 19, 2018 at 12:51:27PM +0900, Damien Le Moal wrote:
> As explained in ioprio_get() and ionice man pages, the default I/O
> priority class for processes which have not set an I/O priority using
> ioprio_set() is IOPRIO_CLASS_BE and not IOPRIO_CLASS_NONE.
While this matches the
On Mon, Nov 19, 2018 at 12:51:28PM +0900, Damien Le Moal wrote:
> Define get_current_ioprio() as an inline helper to obtain the caller
> I/O priority from its task I/O context. Use this helper in
> blk_init_request_from_bio() to set a request ioprio.
>
> Signed-off-by: Damien Le Moal
Looks
On Mon, Nov 19, 2018 at 12:51:29PM +0900, Damien Le Moal wrote:
> For cases when the application does not specify aio_reqprio for an aio,
> fallback to use get_current_ioprio() to obtain the task I/O priority
> last set using ioprio_set() rather than the hardcoded IOPRIO_CLASS_NONE
> value.
>
>
On Fri, Nov 16, 2018 at 02:38:45PM +0100, Christoph Hellwig wrote:
> On Thu, Nov 15, 2018 at 04:52:55PM +0800, Ming Lei wrote:
> > BTRFS is the only user of this helper, so move this helper into
> > BTRFS, and implement it via bio_for_each_segment_all(), since
> > bio->bi_vcnt may not equal to
> + /*
> + * Don't allow merge of different I/O priorities.
> + */
> + if (req->ioprio != next->ioprio)
> + return NULL;
> + /*
> + * Don't allow merge of different I/O priorities.
> + */
> + if (rq->ioprio != bio_prio(bio))
> + return
On Mon, Nov 19, 2018 at 12:51:31PM +0900, Damien Le Moal wrote:
> For the synchronous I/O path case (read(), write() etc system calls), a
> BIO I/O priority is not initialized until the execution of
> blk_init_request_from_bio() when the BIO is submitted and a request
> initialized for the BIO
On Fri, Nov 16, 2018 at 02:45:41PM +0100, Christoph Hellwig wrote:
> On Thu, Nov 15, 2018 at 04:52:56PM +0800, Ming Lei wrote:
> > There are still cases in which we need to use bio_bvecs() to get the
> > number of multi-page segment, so introduce it.
>
> The only user in your final tree seems to
On Mon, Nov 19, 2018 at 04:19:24PM +0800, Ming Lei wrote:
> On Fri, Nov 16, 2018 at 02:38:45PM +0100, Christoph Hellwig wrote:
> > On Thu, Nov 15, 2018 at 04:52:55PM +0800, Ming Lei wrote:
> > > BTRFS is the only user of this helper, so move this helper into
> > > BTRFS, and implement it via
On Thu, Nov 15, 2018 at 04:40:22PM -0800, Omar Sandoval wrote:
> On Thu, Nov 15, 2018 at 04:52:57PM +0800, Ming Lei wrote:
> > iov_iter is implemented with a bvec iterator, so it is safe to pass
> > multipage bvec to it, and this way is much more efficient than
> > passing one page in each bvec.
>
Looks good,
Reviewed-by: Johannes Thumshirn
On Thu, Nov 15, 2018 at 04:44:02PM -0800, Omar Sandoval wrote:
> On Thu, Nov 15, 2018 at 04:52:58PM +0800, Ming Lei wrote:
> > bch_bio_alloc_pages() is always called on one new bio, so it is safe
> > to access the bvec table directly. Given it is the only kind of this
> > case, open code the bvec
Looks good,
Reviewed-by: Johannes Thumshirn
On Fri, Nov 16, 2018 at 02:46:45PM +0100, Christoph Hellwig wrote:
> > - bio_for_each_segment_all(bv, bio, i) {
> > + for (i = 0, bv = bio->bi_io_vec; i < bio->bi_vcnt; bv++) {
>
> This really needs a comment. Otherwise it looks fine to me.
OK, will do it in next version.
Thanks,
Ming
On Thu, Nov 15, 2018 at 01:42:52PM +0100, David Sterba wrote:
> On Thu, Nov 15, 2018 at 04:52:59PM +0800, Ming Lei wrote:
> > diff --git a/block/blk-zoned.c b/block/blk-zoned.c
> > index 13ba2011a306..789b09ae402a 100644
> > --- a/block/blk-zoned.c
> > +++ b/block/blk-zoned.c
> > @@ -123,6 +123,7
On Thu, Nov 15, 2018 at 10:58:18AM -0700, Keith Busch wrote:
> A driver may have internal state to cleanup if we're pretending a request
> didn't complete. Return 'false' if the command wasn't actually completed
> due to the timeout error injection, and true otherwise.
>
> Signed-off-by: Keith
On Thu, Nov 15, 2018 at 05:22:45PM -0800, Omar Sandoval wrote:
> On Thu, Nov 15, 2018 at 04:52:59PM +0800, Ming Lei wrote:
> > This patch introduces one extra iterator variable to
> > bio_for_each_segment_all(),
> > then we can allow bio_for_each_segment_all() to iterate over multi-page
> >
On Thu, Nov 15, 2018 at 05:46:58PM -0800, Omar Sandoval wrote:
> On Thu, Nov 15, 2018 at 04:53:00PM +0800, Ming Lei wrote:
> > After multi-page is enabled, one new page may be merged to a segment
> > even though it is a new added page.
> >
> > This patch deals with this issue by post-check in
On Fri, Nov 16, 2018 at 02:49:36PM +0100, Christoph Hellwig wrote:
> I'd much rather have __bio_try_merge_page only do merges in
> the same page, and have a new __bio_try_merge_segment that does
> multi-page merges. This will keep the accounting a lot simpler.
Looks this way is clever, will do
On Thu, Nov 15, 2018 at 05:56:27PM -0800, Omar Sandoval wrote:
> On Thu, Nov 15, 2018 at 04:53:01PM +0800, Ming Lei wrote:
> > This patch pulls the trigger for multi-page bvecs.
> >
> > Now any request queue which supports queue cluster will see multi-page
> > bvecs.
> >
> > Cc: Dave Chinner
>
> index 5d83a162d03b..c1d5e4e36125 100644
> --- a/drivers/scsi/scsi_lib.c
> +++ b/drivers/scsi/scsi_lib.c
> @@ -1635,8 +1635,11 @@ static blk_status_t scsi_mq_prep_fn(struct request
> *req)
>
> static void scsi_mq_done(struct scsi_cmnd *cmd)
> {
> + if
On Thu, Nov 15, 2018 at 10:58:20AM -0700, Keith Busch wrote:
> There are no more users relying on blk-mq request states to prevent
> double completions, so replace the relatively expensive cmpxchg operation
> with WRITE_ONCE.
>
> Signed-off-by: Keith Busch
Looks fine,
Reviewed-by: Christoph
On Fri, Nov 16, 2018 at 02:53:08PM +0100, Christoph Hellwig wrote:
> > -
> > - if (page == bv->bv_page && off == bv->bv_offset + bv->bv_len) {
> > - bv->bv_len += len;
> > - bio->bi_iter.bi_size += len;
> > - return true;
> > -
On Thu, Nov 15, 2018 at 05:59:36PM -0800, Omar Sandoval wrote:
> On Thu, Nov 15, 2018 at 04:53:02PM +0800, Ming Lei wrote:
> > Now multi-page bvec can cover CONFIG_THP_SWAP, so we don't need to
> > increase BIO_MAX_PAGES for it.
>
> You mentioned to it in the cover letter, but this needs more
On Thu, Nov 15, 2018 at 06:11:40PM -0800, Omar Sandoval wrote:
> On Thu, Nov 15, 2018 at 04:53:04PM +0800, Ming Lei wrote:
> > It is wrong to use bio->bi_vcnt to figure out how many segments
> > there are in the bio even though CLONED flag isn't set on this bio,
> > because this bio may be
On Thu, Nov 15, 2018 at 06:18:11PM -0800, Omar Sandoval wrote:
> On Thu, Nov 15, 2018 at 04:53:05PM +0800, Ming Lei wrote:
> > Since bdced438acd83ad83a6c ("block: setup bi_phys_segments after
> > splitting"),
> > physical segment number is mainly figured out in blk_queue_split() for
> > fast
On Fri, Nov 16, 2018 at 02:58:03PM +0100, Christoph Hellwig wrote:
> On Thu, Nov 15, 2018 at 04:53:05PM +0800, Ming Lei wrote:
> > Since bdced438acd83ad83a6c ("block: setup bi_phys_segments after
> > splitting"),
> > physical segment number is mainly figured out in blk_queue_split() for
> > fast
On Fri, Nov 16, 2018 at 02:28:02PM -0500, Mike Snitzer wrote:
> You rejected the idea of allowing fine-grained control over whether
> native NVMe multipathing is enabled or not on a per-namespace basis.
> All we have is the coarse-grained nvme_core.multipath=N knob. Now
> you're forecasting
On Sat, Nov 17, 2018 at 10:26:38AM +0800, Ming Lei wrote:
> On Fri, Nov 16, 2018 at 06:05:21AM -0800, Greg Kroah-Hartman wrote:
> > On Fri, Nov 16, 2018 at 07:23:10PM +0800, Ming Lei wrote:
> > > @@ -456,7 +456,7 @@ struct request_queue {
> > > /*
> > >* mq queue kobject
> > >*/
> > > -
On Mon, Nov 19, 2018 at 10:04:27AM +0800, Ming Lei wrote:
> On Sat, Nov 17, 2018 at 11:03:42AM +0100, Greg Kroah-Hartman wrote:
> > On Sat, Nov 17, 2018 at 10:34:18AM +0800, Ming Lei wrote:
> > > On Fri, Nov 16, 2018 at 06:06:23AM -0800, Greg Kroah-Hartman wrote:
> > > > On Fri, Nov 16, 2018 at
On Mon, Nov 19, 2018 at 11:06:06AM +0100, Greg Kroah-Hartman wrote:
> On Sat, Nov 17, 2018 at 10:26:38AM +0800, Ming Lei wrote:
> > On Fri, Nov 16, 2018 at 06:05:21AM -0800, Greg Kroah-Hartman wrote:
> > > On Fri, Nov 16, 2018 at 07:23:10PM +0800, Ming Lei wrote:
> > > > @@ -456,7 +456,7 @@ struct
BFQ now shares interface files with CFQ, for the proportional-share
policy. Make documentation consistent with that.
Signed-off-by: Paolo Valente
---
Documentation/block/bfq-iosched.txt | 28 +++-
1 file changed, 15 insertions(+), 13 deletions(-)
diff --git
From: Angelo Ruocco
When two or more entities (of any kind) share a file, their respective
cftypes are linked together. The allowed operations on those files
are: open, release, write and show, mapped to the functions defined in
the cftypes.
This commit makes the cgroup core invoke, whenever
Some of the files exposed in a cgroup by bfq, for the proportional
share policy, have the same meaning as the files owned by cfq (before
legacy blk was removed).
The old implementation of the cgroup interface didn't allow different
entities to create cgroup files with the same name (in the same
From: Angelo Ruocco
Two entities, of any kind, are not able to create a cgroup file with
the same name in the same folder: if an entity tries to create a file
that has the same name as a file created by another entity, the cgroup
core stops it, warns the user about the error, and then proceeds
From: Angelo Ruocco
The piece of information "who created a certain cftype" is not stored
anywhere, so a cftype cannot know who its owner is.
This commit addresses this problem by adding a new field in the cftype
structure that enables the name of its owner to be explicitly set.
This commit fixes a few clerical errors in
Documentation/block/bfq-iosched.txt.
Signed-off-by: Paolo Valente
---
Documentation/block/bfq-iosched.txt | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/Documentation/block/bfq-iosched.txt
The current implementation of the seq_show hook in the cftype struct
has only, as parameters, the seq_file to write to and the arguments
passed by the command line. Thus, the only way to retrieve the cftype
that owns an instance of such hook function is by using the accessor
in the seq_file
From: Angelo Ruocco
Some of the cgroup files defined in the throttle policy have the same
meaning as those defined in the proportional share policy.
This commit uses the new file sharing interface in cgroup to share
these files.
Signed-off-by: Angelo Ruocco
Signed-off-by: Paolo Valente
---
Some seq_show functions need to access the cftype they belong to, for
retrieving the data to show. These functions get their cftype by using
the seq_cft accessor for the seq_file. This solution is no longer
viable in case a seq_file is shared among more than one cftype,
because the accessor always
From: Angelo Ruocco
bfq exposes a cgroup attribute, weight, with the same meaning as that
exposed by cfq.
This commit changes bfq default and min weights to match the ones set
by cfq (before legacy blk was removed).
Signed-off-by: Angelo Ruocco
Signed-off-by: Paolo Valente
---
On Mon, Nov 19, 2018 at 11:17:49AM +0100, Greg Kroah-Hartman wrote:
> On Mon, Nov 19, 2018 at 10:04:27AM +0800, Ming Lei wrote:
> > On Sat, Nov 17, 2018 at 11:03:42AM +0100, Greg Kroah-Hartman wrote:
> > > On Sat, Nov 17, 2018 at 10:34:18AM +0800, Ming Lei wrote:
> > > > On Fri, Nov 16, 2018 at
> On Sun, Nov 11, 2018 at 02:32:08PM +0100, Christoph Hellwig wrote:
> > Move all actual functionality into helpers, just leaving the dispatch
> > in this function.
> >
> > Signed-off-by: Christoph Hellwig
> > ---
> > block/bsg.c | 158
I just saw the patch that avoids the irq disabling show up in your
tree this morning. I think we can do even better by using slightly
lazy lists that are not updated from ->ki_complete context.
Please take a look at the patch below - this replaces patch 3 from
my previous mail, that is it is
Give an interface to adjust the I/O timeout per device.
Signed-off-by: Weiping Zhang
---
Changes since v1:
* make sure timeout > 0
block/blk-sysfs.c | 27 +++
1 file changed, 27 insertions(+)
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 80eef48fddc8..90a927514d30
On Mon, Nov 19 2018 at 4:39am -0500,
Christoph Hellwig wrote:
> On Fri, Nov 16, 2018 at 02:28:02PM -0500, Mike Snitzer wrote:
> > You rejected the idea of allowing fine-grained control over whether
> > native NVMe multipathing is enabled or not on a per-namespace basis.
> > All we have is the
On 11/19/18 1:58 AM, Christoph Hellwig wrote:
>> index 5d83a162d03b..c1d5e4e36125 100644
>> --- a/drivers/scsi/scsi_lib.c
>> +++ b/drivers/scsi/scsi_lib.c
>> @@ -1635,8 +1635,11 @@ static blk_status_t scsi_mq_prep_fn(struct request
>> *req)
>>
>> static void scsi_mq_done(struct scsi_cmnd *cmd)
On 11/19/18 12:59 AM, Christoph Hellwig wrote:
> On Sat, Nov 17, 2018 at 02:43:50PM -0700, Jens Axboe wrote:
>> This relies on the fc target ops setting ->poll_queue, which
>> nobody does. Otherwise it just checks if something has
>> completed, which isn't very useful.
>
> Please also remove the
On 11/19/18 1:02 AM, Christoph Hellwig wrote:
>> diff --git a/block/blk-mq.c b/block/blk-mq.c
>> index 52b1c97cd7c6..3ca00d712158 100644
>> --- a/block/blk-mq.c
>> +++ b/block/blk-mq.c
>> @@ -3266,9 +3266,7 @@ static bool blk_mq_poll_hybrid_sleep(struct
>> request_queue *q,
>> * 0: use
On 11/19/18 1:05 AM, Christoph Hellwig wrote:
>> -bool blk_poll(struct request_queue *q, blk_qc_t cookie)
>> +bool blk_poll(struct request_queue *q, blk_qc_t cookie, bool spin)
>
> I find the parameter name a little confusing. Maybe wait_for_request,
> although I don't particularly like that one
On Mon, Nov 19, 2018 at 12:58:15AM -0800, Christoph Hellwig wrote:
> > index 5d83a162d03b..c1d5e4e36125 100644
> > --- a/drivers/scsi/scsi_lib.c
> > +++ b/drivers/scsi/scsi_lib.c
> > @@ -1635,8 +1635,11 @@ static blk_status_t scsi_mq_prep_fn(struct request
> > *req)
> >
> > static void
On 11/19/18 1:07 AM, Christoph Hellwig wrote:
> On Sat, Nov 17, 2018 at 04:53:14PM -0700, Jens Axboe wrote:
>> If the ioprio capability check fails, we return without putting
>> the file pointer.
>>
>> Fixes: d9a08a9e616b ("fs: Add aio iopriority support")
>> Signed-off-by: Jens Axboe
>
> Looks
On 11/19/18 6:32 AM, Christoph Hellwig wrote:
>
> I just saw the patch that avoids the irq disabling show up in your
> tree this morning. I think we can do even better by using slightly
> lazy lists that are not updated from ->ki_complete context.
I totally agree, it's just a first step. One