> > With the above assert in place, the usable_pages of 879 as you have
> > in test_dsa.c crashes due to the assertion failure.
>
> Yep, the assertion looks useful to have in place.
>
> > As far as the fix, I do agree with what you have in 0001-, except I am
> > not so sure about the "+1 for rounding".
> > Can we just do ceiling division?
>
> Ceiling division is more precise at the end, and can be checked; I'd
> tend to stick with your approach.
>
> > +               /*
> > +                * We must also account for pagemap entries needed to cover the
> > +                * metadata pages themselves.  The pagemap must track all pages
> > +                * in the segment, including the pages occupied by metadata.
> > +                */
> > +               metadata_bytes +=
> > +                       ((metadata_bytes + (FPM_PAGE_SIZE - sizeof(dsa_pointer)) - 1) /
> > +                       (FPM_PAGE_SIZE - sizeof(dsa_pointer))) *
> > +                       sizeof(dsa_pointer);
> >
> > I don't think we should add a test_dsa, but I do think it was useful
> > to verify the issue.
>
> I think that a test would actually be nice to have in test_dsa, but
> the proposed one lacks flexibility.  How about changing it to a
> function that takes as input the values you'd want to test for
> pagemap_start, usable_pages and the offset?  The point is to check a
> dsa_allocate() with a sanity check on the offset, hence a simple
> test_dsa_allocate() as function name?  The test module exists down to
> v17, which should be enough to check things across platforms moving
> ahead.
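
To spell out the arithmetic in the hunk above, here is a minimal
standalone sketch.  dsa_pointer and FPM_PAGE_SIZE are stubbed with
assumed definitions, and the input value is made up; only the
ceiling-division idiom itself is the point:

```
/* Standalone illustration; compile with: cc sizing.c && ./a.out */
#include <stdint.h>
#include <stdio.h>

/* Stand-ins for the PostgreSQL definitions (assumed values). */
typedef uint64_t dsa_pointer;
#define FPM_PAGE_SIZE 4096

int
main(void)
{
	size_t		metadata_bytes = 7032;	/* hypothetical pre-pagemap size */
	size_t		entry = sizeof(dsa_pointer);
	size_t		per_page = FPM_PAGE_SIZE - entry;

	/*
	 * Ceiling division: pages needed to hold the metadata itself, each
	 * of which requires one more pagemap entry.  (a + b - 1) / b rounds
	 * up exactly, unlike an unconditional "+1 for rounding".
	 */
	size_t		extra = (metadata_bytes + per_page - 1) / per_page;

	metadata_bytes += extra * entry;
	printf("metadata_bytes with pagemap entries: %zu\n", metadata_bytes);
	return 0;
}
```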

How about we loop through a wider range of allocations and rely on the
assert to fail if we hit the undersized-pagemap issue?

The test allocates up to 10k pages, stepping 100 pages at a time:

```
SELECT test_dsa_allocate(10000, 100);
```

I think this gives us broader coverage for dsa_allocate().
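
Roughly, the loop could look like the sketch below.  This is not the
exact v3 code; the function body is an assumption on my part, while
dsa_create(), dsa_allocate(), dsa_free() and dsa_detach() are the real
DSA entry points:

```
#include "postgres.h"

#include "fmgr.h"
#include "storage/lwlock.h"
#include "utils/dsa.h"
#include "utils/freepage.h"

PG_FUNCTION_INFO_V1(test_dsa_allocate);

/*
 * Sketch only.  Loop over a range of allocation sizes and rely on the
 * assertions in dsa.c to trip if the pagemap is undersized.
 */
Datum
test_dsa_allocate(PG_FUNCTION_ARGS)
{
	int32		max_pages = PG_GETARG_INT32(0);
	int32		step = PG_GETARG_INT32(1);
	dsa_area   *a = dsa_create(LWLockNewTrancheId());

	for (int32 pages = step; pages <= max_pages; pages += step)
	{
		dsa_pointer p = dsa_allocate(a, (size_t) pages * FPM_PAGE_SIZE);

		dsa_free(a, p);
	}

	dsa_detach(a);
	PG_RETURN_VOID();
}
```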

See v3.

> > This sounds like it should be backpatched, but we'll see what a
> > committer thinks.
>
> This should be backpatched.  If you'd told me that the allocation was
> oversized, then an optimization may have been possible on HEAD,
> especially if the overestimation was large.  A small reduction in
> size may not even be worth worrying about in some cases, as we tend
> to oversize some shmem areas as well.  An undersized calculation is a
> very different story: memory corruptions on the stack are not cool at
> all, especially on stable branches, and they lead to unpredictable
> behaviors.

I agree.

--
Sami Imseih
Amazon Web Services (AWS)

Attachment: v3-0001-Fix-DSA-pagemap-undersizing-in-make_new_segment.patch