On Thu, Aug 28, 2025 at 12:01:20AM +0200, David Hildenbrand wrote:
> Let's clean up and simplify the function a bit.

Ah I guess you separated this out from the previous patch? :)

I feel like it might be worth describing the implementation in the commit
message, as it took me a while to figure it out.

>
> Signed-off-by: David Hildenbrand <da...@redhat.com>


This original implementation is SO GROSS.

God, this hurts my mind:

                n = min(bytes, (size_t)PAGE_SIZE - offset);

So first time round, n is either the remaining bytes in this page, or just
bytes if we only span one page.

Then we

                res += n;
                bytes -= n;

So if we span multiple pages, bytes is reduced to just what lies past the end
of this page.

Then, if we span multiple pages, offset becomes offset + (PAGE_SIZE - offset)
(!!!), i.e. exactly PAGE_SIZE, and we move to the next page and reset offset
to 0:

                offset += n;
                if (offset == PAGE_SIZE) {
                        page++;
                        offset = 0;
                }

Then from then on n = min(bytes, PAGE_SIZE) (!!!!!!)

So res = the remaining safe bytes in the first page + PAGE_SIZE for each
further safe page (capped at bytes) OR simply bytes if we don't span more
than one page.

Lord above.
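
To convince myself, I hacked up a quick userspace version of the arithmetic
(struct page swapped for a plain page index, and the
is_raw_hwpoison_page_in_hugepage() check replaced by a stub - purely
illustrative, not the real helpers):

#include <stdio.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL

/* Stub poison model: at most one poisoned page index, -1 means none. */
static long poisoned_pgoff = -1;

static int page_poisoned(size_t pgoff)
{
        return (long)pgoff == poisoned_pgoff;
}

static size_t min_sz(size_t a, size_t b)
{
        return a < b ? a : b;
}

/* The old loop, with struct page replaced by a plain page index. */
static size_t old_adjust_range(size_t offset, size_t bytes)
{
        size_t pgoff = offset / PAGE_SIZE;
        size_t n, res = 0;

        offset %= PAGE_SIZE;
        while (1) {
                if (page_poisoned(pgoff))
                        break;
                /* First pass: bytes left in this page; then: PAGE_SIZE. */
                n = min_sz(bytes, PAGE_SIZE - offset);
                res += n;
                bytes -= n;
                if (!bytes || !n)
                        break;
                offset += n;            /* lands exactly on PAGE_SIZE... */
                if (offset == PAGE_SIZE) {
                        pgoff++;        /* ...so advance and reset */
                        offset = 0;
                }
        }
        return res;
}

int main(void)
{
        /* 3996 + 4096 + 1908 = 10000: every requested byte is safe. */
        printf("%zu\n", old_adjust_range(100, 10000));
        /* bytes == 0: returns 0, but the first page is still checked. */
        printf("%zu\n", old_adjust_range(100, 0));
        /* Poison the second page: only the 3996-byte tail is safe. */
        poisoned_pgoff = 1;
        printf("%zu\n", old_adjust_range(100, 10000));
        return 0;
}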

Also note the 'if bytes == 0, then check the first page anyway' semantics,
which you do capture.

OK, I think I have convinced myself this is right, so hopefully there are no
deeply subtle off-by-one issues here :P

Anyway, LGTM, so:

Reviewed-by: Lorenzo Stoakes <lorenzo.stoa...@oracle.com>

> ---
>  fs/hugetlbfs/inode.c | 33 +++++++++++----------------------
>  1 file changed, 11 insertions(+), 22 deletions(-)
>
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index c5a46d10afaa0..6ca1f6b45c1e5 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -198,31 +198,20 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
>  static size_t adjust_range_hwpoison(struct folio *folio, size_t offset,
>               size_t bytes)
>  {
> -     struct page *page;
> -     size_t n = 0;
> -     size_t res = 0;
> -
> -     /* First page to start the loop. */
> -     page = folio_page(folio, offset / PAGE_SIZE);
> -     offset %= PAGE_SIZE;
> -     while (1) {
> -             if (is_raw_hwpoison_page_in_hugepage(page))
> -                     break;
> +     struct page *page = folio_page(folio, offset / PAGE_SIZE);
> +     size_t safe_bytes;
> +
> +     if (is_raw_hwpoison_page_in_hugepage(page))
> +             return 0;
> +     /* Safe to read the remaining bytes in this page. */
> +     safe_bytes = PAGE_SIZE - (offset % PAGE_SIZE);
> +     page++;
>
> -             /* Safe to read n bytes without touching HWPOISON subpage. */
> -             n = min(bytes, (size_t)PAGE_SIZE - offset);
> -             res += n;
> -             bytes -= n;
> -             if (!bytes || !n)
> +     for (; safe_bytes < bytes; safe_bytes += PAGE_SIZE, page++)

OK, this is quite subtle - once safe_bytes == bytes, we've confirmed that all
requested bytes are safe, so the loop can stop.

So offset=0, bytes=4096 would never enter the loop body (as safe_bytes ==
4096 already).

Maybe worth putting something like:

        /*
         * Now we check page-by-page in the folio to see if any bytes we don't
         * yet know to be safe are contained within poisoned pages or not.
         */

Above the loop. Or something like this.
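
FWIW I traced the new version the same way in the userspace sketch earlier in
this mail (same page-index stand-in and stubbed poison check - just the
arithmetic, not the real helper):

/* The new version's arithmetic, same stubs as the sketch above. */
static size_t new_adjust_range(size_t offset, size_t bytes)
{
        size_t pgoff = offset / PAGE_SIZE;
        size_t safe_bytes;

        if (page_poisoned(pgoff))
                return 0;
        /* Safe to read the remaining bytes in the first page. */
        safe_bytes = PAGE_SIZE - (offset % PAGE_SIZE);
        pgoff++;

        /* Keep scanning only while some requested bytes are unproven. */
        for (; safe_bytes < bytes; safe_bytes += PAGE_SIZE, pgoff++)
                if (page_poisoned(pgoff))
                        break;

        return min_sz(safe_bytes, bytes);
}

With offset=0, bytes=4096 the loop body never runs and it returns
min(4096, 4096) = 4096, matching the single-page case.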

> +             if (is_raw_hwpoison_page_in_hugepage(page))
>                       break;
> -             offset += n;
> -             if (offset == PAGE_SIZE) {
> -                     page++;
> -                     offset = 0;
> -             }
> -     }
>
> -     return res;
> +     return min(safe_bytes, bytes);

Yeah, given the above analysis this seems correct.

You must have torn your hair out over this :)
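
FWIW, brute-forcing my two userspace sketches against each other (swap this
main() in for the demo one above) didn't turn up a single mismatch, which is
what finally convinced me:

int main(void)
{
        size_t offset, bytes;
        long p;

        /* Compare old vs. new across offsets, sizes and poison spots. */
        for (p = -1; p < 4; p++) {
                poisoned_pgoff = p;
                for (offset = 0; offset < 2 * PAGE_SIZE; offset += 123)
                        for (bytes = 0; bytes < 3 * PAGE_SIZE; bytes += 97)
                                if (old_adjust_range(offset, bytes) !=
                                    new_adjust_range(offset, bytes)) {
                                        printf("mismatch: poison=%ld offset=%zu bytes=%zu\n",
                                               p, offset, bytes);
                                        return 1;
                                }
        }
        printf("old and new agree\n");
        return 0;
}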

>  }
>
>  /*
> --
> 2.50.1
>
