Jan Kara <[email protected]> writes:

> Currently blk_insert_flush() just adds the flush request to
> q->queue_head when no flush is actually required. That completely
> bypasses the IO scheduler, so e.g. CFQ can be idling, waiting for a new
> request to arrive, and will idle through the whole window
> unnecessarily. Luckily this only happens in rare cases, as the checks
> in generic_make_request_checks() usually clear the FLUSH and FUA flags
> early if they are not needed.

Right.  I think the only way we'd even enter that 'if' block would be if
the drive state changed (from WB cache to WT cache) between
generic_make_request_checks and blk_insert_flush.
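
From memory, the early filtering in question looks something like this
in generic_make_request_checks() (paraphrasing; field and flag names
vary between kernel versions, so treat it as a sketch rather than the
exact code):

        if ((bio->bi_rw & (REQ_FLUSH | REQ_FUA)) && !q->flush_flags) {
                /* The device reports no volatile cache, so flushing is
                 * a no-op: strip the flags before the bio is queued. */
                bio->bi_rw &= ~(REQ_FLUSH | REQ_FUA);
                if (!nr_sectors) {
                        /* An empty flush has nothing left to do. */
                        err = 0;
                        goto end_io;
                }
        }

With that in place, a request reaching blk_insert_flush() that needs no
flushing should only happen if the cache type flipped in between, as
above.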

> When no flushing is actually required, we can easily fix the problem
> by properly queueing the request through the IO scheduler. Ideally the
> IO scheduler should also be made aware of requests queued via
> blk_flush_queue_rq(). However, inserting a flush request through the
> IO scheduler can have unwanted side effects: because of flush
> batching, delaying the flush request in the IO scheduler would delay
> all flush requests, possibly including ones coming from other
> processes. So we keep adding those requests directly to q->queue_head.

Reviewed-by: Jeff Moyer <[email protected]>

> Signed-off-by: Jan Kara <[email protected]>
> ---
>  block/blk-flush.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/block/blk-flush.c b/block/blk-flush.c
> index 9c423e53324a..c81d56ec308f 100644
> --- a/block/blk-flush.c
> +++ b/block/blk-flush.c
> @@ -422,7 +422,7 @@ void blk_insert_flush(struct request *rq)
>                  if (q->mq_ops) {
>                          blk_mq_insert_request(rq, false, false, true);
>                  } else
> -                        list_add_tail(&rq->queuelist, &q->queue_head);
> +                        q->elevator->type->ops.elevator_add_req_fn(q, rq);
>                  return;
>          }
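
For anyone reading along: the elevator_add_req_fn hook being invoked
here is the scheduler's own insert routine, so the request actually
becomes visible to the elevator instead of being slipped past it. For
CFQ the hook is wired up roughly like this (again a sketch from memory;
the exact layout of struct elevator_type differs across kernel
versions):

        static struct elevator_type iosched_cfq = {
                .ops = {
                        /* ... */
                        .elevator_add_req_fn = cfq_insert_request,
                        /* ... */
                },
                /* ... */
                .elevator_name = "cfq",
        };

cfq_insert_request() accounts the request to the right cfq_queue, so a
queue that was idling can react to the new request instead of sleeping
through the rest of its idle window.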

