On Sat, Aug 12, 2023 at 1:07 AM Coly Li <col...@suse.de> wrote:
>
> This patch rewrites badblocks_check() in a similar coding style to
> _badblocks_set() and _badblocks_clear(). The only difference is that
> bad block checking may now handle multiple ranges in the bad block
> table.
>
> If a checking range covers multiple bad block ranges in the bad block
> table, as in the following situation (C is the checking range; E1, E2
> and E3 are three bad block ranges in the bad block table),
>   +------------------------------------+
>   |                C                   |
>   +------------------------------------+
>     +----+      +----+      +----+
>     | E1 |      | E2 |      | E3 |
>     +----+      +----+      +----+
> The improved badblocks_check() algorithm will divide checking range C
> into multiple parts and handle them in 7 iterations of a while-loop,
>   +--+ +----+ +----+ +----+ +----+ +----+ +----+
>   |C1| | C2 | | C3 | | C4 | | C5 | | C6 | | C7 |
>   +--+ +----+ +----+ +----+ +----+ +----+ +----+
>        +----+        +----+        +----+
>        | E1 |        | E2 |        | E3 |
>        +----+        +----+        +----+
> The start LBA and length of range E1 will be reported to the caller
> via first_bad and bad_sectors.
>
> The return value rules remain consistent when multiple ranges are
> involved. For example, with the following bad block ranges in the bad
> block table,
>    Index No.     Start        Len         Ack
>        0          400          20          1
>        1          500          50          1
>        2          650          20          0
> the return value, first_bad and bad_sectors from calling badblocks_set() with

s/badblocks_set/badblocks_check/g

> different checking ranges can be the following values,
>     Checking Start, Len     Return Value   first_bad    bad_sectors
>                100, 100          0           N/A           N/A
>                100, 310          1           400           10
>                100, 440          1           400           10
>                100, 540          1           400           10
>                100, 600         -1           400           10
>                100, 800         -1           400           10
>
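
For reference, a minimal sketch of how a caller would consume these
return values, using the example numbers from the table above (the
sector numbers are illustrative only, and bb is assumed to be a
populated struct badblocks):

	sector_t first_bad;
	int bad_sectors;

	switch (badblocks_check(bb, 100, 440, &first_bad, &bad_sectors)) {
	case 0:
		/* no known bad blocks overlap the checked range */
		break;
	case 1:
		/* only acknowledged bad blocks overlap; per the table,
		 * first_bad is 400 and bad_sectors is 10 */
		break;
	case -1:
		/* at least one unacknowledged bad block overlaps */
		break;
	}
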
> To make code review easier, this patch names the improved bad block
> range checking routine _badblocks_check() and does not change the
> existing badblocks_check() code yet. A later patch will delete the old
> badblocks_check() code and turn it into a wrapper that calls
> _badblocks_check(). That way the newly added code is not mixed with
> the old, deleted code, which makes the review clearer and easier.
>
> Signed-off-by: Coly Li <col...@suse.de>
> Cc: Dan Williams <dan.j.willi...@intel.com>
> Cc: Geliang Tang <geliang.t...@suse.com>
> Cc: Hannes Reinecke <h...@suse.de>
> Cc: Jens Axboe <ax...@kernel.dk>
> Cc: NeilBrown <ne...@suse.de>
> Cc: Vishal L Verma <vishal.l.ve...@intel.com>
> Cc: Xiao Ni <x...@redhat.com>
> ---
>  block/badblocks.c | 97 +++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 97 insertions(+)
>
> diff --git a/block/badblocks.c b/block/badblocks.c
> index 4f1434808930..3438a2517749 100644
> --- a/block/badblocks.c
> +++ b/block/badblocks.c
> @@ -1270,6 +1270,103 @@ static int _badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
>         return rv;
>  }
>
> +/* Do the exact work to check bad blocks range from the bad block table */
> +static int _badblocks_check(struct badblocks *bb, sector_t s, int sectors,
> +                           sector_t *first_bad, int *bad_sectors)
> +{
> +       int unacked_badblocks, acked_badblocks;
> +       int prev = -1, hint = -1, set = 0;
> +       struct badblocks_context bad;
> +       unsigned int seq;
> +       int len, rv;
> +       u64 *p;
> +
> +       WARN_ON(bb->shift < 0 || sectors == 0);
> +
> +       if (bb->shift > 0) {
> +               sector_t target;
> +
> +               /* round the start down, and the end up */
> +               target = s + sectors;
> +               rounddown(s, bb->shift);
> +               roundup(target, bb->shift);

Same question as before: don't the return values of rounddown() and
roundup() need to be assigned back to s and target?
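Presumably the intent is something like the following, since these
macros return the rounded value rather than modifying their argument in
place (just a sketch, not tested):

	s = rounddown(s, bb->shift);
	target = roundup(target, bb->shift);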

> +               sectors = target - s;
> +       }
> +
> +retry:
> +       seq = read_seqbegin(&bb->lock);
> +
> +       p = bb->page;
> +       unacked_badblocks = 0;
> +       acked_badblocks = 0;
> +
> +re_check:
> +       bad.start = s;
> +       bad.len = sectors;
> +
> +       if (badblocks_empty(bb)) {
> +               len = sectors;
> +               goto update_sectors;
> +       }
> +
> +       prev = prev_badblocks(bb, &bad, hint);

Would it be better to add a prev < 0 check here, as the setting and
clearing paths do? Without it, the following overlap_front() check has
a problem when prev is -1: p[-1] is an out-of-bounds access just before
the start of the array.
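
E.g. something along the lines of what the set/clear paths do (an
untested sketch of the idea, reusing overlap_behind() as above):

	/* start before all badblocks */
	if (prev < 0) {
		if (overlap_behind(bb, &bad, 0)) {
			len = BB_OFFSET(p[0]) - bad.start;
			hint = 0;
		} else {
			len = sectors;
		}
		goto update_sectors;
	}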

> +
> +       /* start after all badblocks */
> +       if ((prev + 1) >= bb->count && !overlap_front(bb, prev, &bad)) {
> +               len = sectors;
> +               goto update_sectors;
> +       }
> +
> +       if (overlap_front(bb, prev, &bad)) {
> +               if (BB_ACK(p[prev]))
> +                       acked_badblocks++;
> +               else
> +                       unacked_badblocks++;
> +
> +               if (BB_END(p[prev]) >= (s + sectors))
> +                       len = sectors;
> +               else
> +                       len = BB_END(p[prev]) - s;
> +
> +               if (set == 0) {
> +                       *first_bad = BB_OFFSET(p[prev]);
> +                       *bad_sectors = BB_LEN(p[prev]);

Should bad_sectors be set to len here instead of BB_LEN(p[prev])?
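I.e., presumably the question is whether this should be (untested):

	*bad_sectors = len;

so that it reports only the portion overlapping the checked range.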

> +                       set = 1;
> +               }
> +               goto update_sectors;
> +       }
> +
> +       /* Not front overlap, but behind overlap */
> +       if ((prev + 1) < bb->count && overlap_behind(bb, &bad, prev + 1)) {
> +               len = BB_OFFSET(p[prev + 1]) - bad.start;
> +               hint = prev + 1;
> +               goto update_sectors;
> +       }
> +
> +       /* not cover any badblocks range in the table */
> +       len = sectors;
> +
> +update_sectors:
> +       s += len;
> +       sectors -= len;
> +
> +       if (sectors > 0)
> +               goto re_check;
> +
> +       WARN_ON(sectors < 0);
> +
> +       if (unacked_badblocks > 0)
> +               rv = -1;
> +       else if (acked_badblocks > 0)
> +               rv = 1;
> +       else
> +               rv = 0;
> +
> +       if (read_seqretry(&bb->lock, seq))
> +               goto retry;
> +
> +       return rv;
> +}
>
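
For context, the wrapper mentioned in the commit message (a later patch
turning badblocks_check() into a thin caller of _badblocks_check())
would presumably end up looking roughly like this; just a sketch of the
stated intent, not the actual follow-up patch:

	int badblocks_check(struct badblocks *bb, sector_t s, int sectors,
			    sector_t *first_bad, int *bad_sectors)
	{
		return _badblocks_check(bb, s, sectors, first_bad, bad_sectors);
	}
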
>  /**
>   * badblocks_check() - check a given range for bad sectors
> --
> 2.35.3
>

Reviewed-by: Xiao Ni <x...@redhat.com>

