On 11/20/2015 06:13 AM, Chris Mason wrote:
On Thu, Nov 19, 2015 at 05:49:37PM -0800, Liu Bo wrote:
While running xfstests, this bug[1] is spotted by both btrfs/061 and btrfs/063.
Sub-stripe writes are gathered into the plug callback list so that hopefully
they can be merged into full stripe writes.

However, these plugged callbacks are processed in an atomic context, which
blk_sq_make_request() enters because of the get_cpu() in blk_mq_get_ctx().
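
For reference, this is roughly what those helpers look like in block/blk-mq.h
(a paraphrased sketch, not the exact upstream source); get_cpu() disables
preemption until the matching blk_mq_put_ctx()/put_cpu():

/* Paraphrased sketch of block/blk-mq.h from this era; details may vary. */
static inline struct blk_mq_ctx *__blk_mq_get_ctx(struct request_queue *q,
						  unsigned int cpu)
{
	return per_cpu_ptr(q->queue_ctx, cpu);
}

/* get_cpu() disables preemption, so the caller is atomic until put_ctx. */
static inline struct blk_mq_ctx *blk_mq_get_ctx(struct request_queue *q)
{
	return __blk_mq_get_ctx(q, get_cpu());
}

static inline void blk_mq_put_ctx(struct blk_mq_ctx *ctx)
{
	put_cpu();	/* re-enables preemption */
}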

This changes btrfs to always use btrfs_rmw_helper to complete the pending writes.


Thanks Liu, but MD raid has the same troubles; we're not atomic in our unplugs.

Jens?

Yeah, blk-mq does have preemption disabled when it flushes, for the single-queue setup. That's a bug. Attached is an untested patch that should fix it; can you try it?

I'll rework this into a proper patch. I'm not convinced we want to add the new request before the flush, since that might destroy merging opportunities. I'll also unify the mq/sq parts.

--
Jens Axboe

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 3ae09de62f19..0325dcec8c74 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1380,12 +1391,15 @@ static blk_qc_t blk_sq_make_request(struct request_queue *q, struct bio *bio)
 		blk_mq_bio_to_request(rq, bio);
 		if (!request_count)
 			trace_block_plug(q);
-		else if (request_count >= BLK_MAX_REQUEST_COUNT) {
+
+		list_add_tail(&rq->queuelist, &plug->mq_list);
+		blk_mq_put_ctx(data.ctx);
+
+		if (request_count + 1 >= BLK_MAX_REQUEST_COUNT) {
 			blk_flush_plug_list(plug, false);
 			trace_block_plug(q);
 		}
-		list_add_tail(&rq->queuelist, &plug->mq_list);
-		blk_mq_put_ctx(data.ctx);
+
 		return cookie;
 	}
 

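To spell out what the patch does (a paraphrased sketch of the patched plug
path in blk_sq_make_request(), not the literal upstream code): the request is
put on the plug list and the ctx is dropped before any flush, so the plug
callbacks (such as btrfs's raid56 unplug) run with preemption enabled:

/* Paraphrased sketch of the single-queue plug path after the patch. */
blk_mq_bio_to_request(rq, bio);
if (!request_count)
	trace_block_plug(q);

/* Add the new request to the plug list first ... */
list_add_tail(&rq->queuelist, &plug->mq_list);
/* ... and drop the ctx (put_cpu()) so preemption is enabled again. */
blk_mq_put_ctx(data.ctx);

if (request_count + 1 >= BLK_MAX_REQUEST_COUNT) {
	/* Plug callbacks now run outside the atomic ctx section. */
	blk_flush_plug_list(plug, false);
	trace_block_plug(q);
}

return cookie;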