One interesting point of bitmap tag allocation is that a blocked allocation may have to wait until at least BT_WAIT_BATCH tags have been freed before it is woken up. Obviously, this can hang the allocation if the depth is smaller than BT_WAIT_BATCH.
This patch simply sets the wait count to 1 if the depth is smaller than BT_WAIT_BATCH to avoid the problem. A better idea might be to set it to some ratio of the depth (1/8 or similar), but that would need more testing for verification.

Signed-off-by: Ming Lei <[email protected]>
---
 block/blk-mq-tag.c |   14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 6532aea..8e3a22d 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -135,6 +135,16 @@ static struct bt_wait_state *bt_wait_ptr(struct blk_mq_bitmap_tags *bt,
 	return bs;
 }
 
+static void bs_reset_wait_cnt(struct blk_mq_bitmap_tags *bt,
+		struct bt_wait_state *bs)
+{
+	int cnt;
+
+	cnt = bt->depth < BT_WAIT_BATCH ? 1 : BT_WAIT_BATCH;
+
+	atomic_set(&bs->wait_cnt, cnt);
+}
+
 static int bt_get(struct blk_mq_bitmap_tags *bt, struct blk_mq_hw_ctx *hctx,
 		unsigned int *last_tag, gfp_t gfp)
 {
@@ -160,7 +170,7 @@ static int bt_get(struct blk_mq_bitmap_tags *bt, struct blk_mq_hw_ctx *hctx,
 			break;
 
 		if (was_empty)
-			atomic_set(&bs->wait_cnt, BT_WAIT_BATCH);
+			bs_reset_wait_cnt(bt, bs);
 
 		io_schedule();
 	} while (1);
@@ -243,7 +253,7 @@ static void bt_clear_tag(struct blk_mq_bitmap_tags *bt, unsigned int tag)
 
 	bs = bt_wake_ptr(bt);
 	if (bs && atomic_dec_and_test(&bs->wait_cnt)) {
-		atomic_set(&bs->wait_cnt, BT_WAIT_BATCH);
+		bs_reset_wait_cnt(bt, bs);
 		bt_index_inc(&bt->wake_index);
 		wake_up(&bs->wait);
 	}
-- 
1.7.9.5
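
For illustration, a depth-proportional variant along the lines hinted at in the changelog could look like the sketch below. The 1/8 ratio, the clamping bounds, and the helper name bs_reset_wait_cnt_ratio are assumptions for illustration only; they are not part of the posted patch.

/*
 * Hypothetical alternative to bs_reset_wait_cnt(): scale the wake-up
 * batch with the queue depth (1/8 of the depth, clamped to the range
 * [1, BT_WAIT_BATCH]). This is only a sketch of the "ratio of depth"
 * idea mentioned above, not code that was posted or merged.
 */
static void bs_reset_wait_cnt_ratio(struct blk_mq_bitmap_tags *bt,
		struct bt_wait_state *bs)
{
	unsigned int cnt = bt->depth / 8;

	if (cnt < 1)
		cnt = 1;
	if (cnt > BT_WAIT_BATCH)
		cnt = BT_WAIT_BATCH;

	atomic_set(&bs->wait_cnt, cnt);
}

Compared with the patch's fixed cut-over to 1, this keeps the batching benefit for mid-sized depths while still guaranteeing a wake-up when the depth is very small.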

