On Fri, Oct 12, 2018 at 03:53:10PM +0800, Ming Lei wrote:
> blk_queue_split() already respects this limit via bio splitting, so
> there is no need to do that in blkdev_issue_discard(); this way we can
> follow the normal bio submission path (bio_add_page() & submit_bio()).
>
> More importantly, this patch fixes an issue introduced in commit
> a22c4d7e34402cc ("block: re-add discard_granularity and alignment
> checks"), in which a zero-length discard bio may be generated when the
> alignment is zero.
>
> Fixes: a22c4d7e34402ccdf3 ("block: re-add discard_granularity and alignment checks")
> Cc: [email protected]
> Cc: Mariusz Dabrowski <[email protected]>
> Cc: Ming Lin <[email protected]>
> Cc: Mike Snitzer <[email protected]>
> Cc: Christoph Hellwig <[email protected]>
> Cc: Xiao Ni <[email protected]>
> Signed-off-by: Ming Lei <[email protected]>
> ---
> block/blk-lib.c | 28 ++--------------------------
> 1 file changed, 2 insertions(+), 26 deletions(-)
>
> diff --git a/block/blk-lib.c b/block/blk-lib.c
> index d1b9dd03da25..bbd44666f2b5 100644
> --- a/block/blk-lib.c
> +++ b/block/blk-lib.c
> @@ -29,9 +29,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
> {
> struct request_queue *q = bdev_get_queue(bdev);
> struct bio *bio = *biop;
> - unsigned int granularity;
> unsigned int op;
> - int alignment;
> sector_t bs_mask;
>
> if (!q)
> @@ -54,38 +52,16 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
> if ((sector | nr_sects) & bs_mask)
> return -EINVAL;
>
> - /* Zero-sector (unknown) and one-sector granularities are the same. */
> - granularity = max(q->limits.discard_granularity >> 9, 1U);
> - alignment = (bdev_discard_alignment(bdev) >> 9) % granularity;
> -
> while (nr_sects) {
> - unsigned int req_sects;
> - sector_t end_sect, tmp;
> + unsigned int req_sects = nr_sects;
> + sector_t end_sect;
>
> - /*
> - * Issue in chunks of the user defined max discard setting,
> - * ensuring that bi_size doesn't overflow
> - */
> - req_sects = min_t(sector_t, nr_sects,
> - q->limits.max_discard_sectors);
> if (!req_sects)
> goto fail;
> if (req_sects > UINT_MAX >> 9)
> req_sects = UINT_MAX >> 9;
>
> - /*
> - * If splitting a request, and the next starting sector would be
> - * misaligned, stop the discard at the previous aligned sector.
> - */
> end_sect = sector + req_sects;
> - tmp = end_sect;
> - if (req_sects < nr_sects &&
> - sector_div(tmp, granularity) != alignment) {
> - end_sect = end_sect - alignment;
> - sector_div(end_sect, granularity);
> - end_sect = end_sect * granularity + alignment;
> - req_sects = end_sect - sector;
> - }
>
> bio = next_bio(bio, 0, gfp_mask);
> bio->bi_iter.bi_sector = sector;
> --
> 2.9.5
>
Ping...
This patch fixes a BUG() in blk_mq_end_request() triggered when a
zero-length discard bio is generated by __blkdev_issue_discard().
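
To illustrate with hypothetical numbers (discard granularity of 2048
sectors, max_discard_sectors of 512, alignment of 0), below is a minimal
standalone sketch of the rounding logic removed by this patch, with
sector_div() approximated by plain 64-bit division:

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		/* hypothetical request and queue limits */
		uint64_t sector = 0, nr_sects = 4096;
		unsigned int max_discard_sectors = 512;
		unsigned int granularity = 2048;	/* larger than the limit */
		int alignment = 0;			/* the problematic case  */

		unsigned int req_sects = nr_sects < max_discard_sectors ?
					 nr_sects : max_discard_sectors;
		uint64_t end_sect = sector + req_sects;

		/* same round-down the old __blkdev_issue_discard() did */
		if (req_sects < nr_sects &&
		    (end_sect % granularity) != alignment) {
			end_sect -= alignment;
			end_sect = (end_sect / granularity) * granularity +
				   alignment;
			req_sects = end_sect - sector;
		}

		/* prints "req_sects = 0" for the values above */
		printf("req_sects = %u\n", req_sects);
		return 0;
	}

With these values req_sects collapses to 0, so a zero-length discard bio
is built and later trips the BUG() in blk_mq_end_request().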
Thanks,
Ming