On Sat, Nov 30, 2019 at 3:13 PM Andrew Morton <[email protected]> wrote:
>
> On Wed, 25 Sep 2019 09:21:02 +0530 "Aneesh Kumar K.V" 
> <[email protected]> wrote:
>
> > Andrew Morton <[email protected]> writes:
> >
> > > On Tue, 17 Sep 2019 21:01:29 +0530 "Aneesh Kumar K.V" 
> > > <[email protected]> wrote:
> > >
> > >> vmem_altmap_offset() adjusts the section aligned base_pfn offset.
> > >> So we need to make sure we account for the same when computing base_pfn.
> > >>
> > >> ie, for altmap_valid case, our pfn_first should be:
> > >>
> > >> pfn_first = altmap->base_pfn + vmem_altmap_offset(altmap);
> > >
> > > What are the user-visible runtime effects of this change?
> >
> > This was found by code inspection. If the pmem region is not correctly
> > section aligned we can skip pfns while iterating device pfn using
> >       for_each_device_pfn(pfn, pgmap)
> >
> >
> > I still would want Dan to ack the change though.
> >
>
> Dan?
>
>
> From: "Aneesh Kumar K.V" <[email protected]>
> Subject: mm/pgmap: use correct alignment when looking at first pfn from a 
> region
>
> vmem_altmap_offset() adjusts the section aligned base_pfn offset.  So we
> need to make sure we account for the same when computing base_pfn.
>
> ie, for altmap_valid case, our pfn_first should be:
>
> pfn_first = altmap->base_pfn + vmem_altmap_offset(altmap);
>
> This was found by code inspection. If the pmem region is not correctly
> section aligned we can skip pfns while iterating device pfn using
>
>         for_each_device_pfn(pfn, pgmap)
>
> [[email protected]: coding style fixes]
> Link: 
> http://lkml.kernel.org/r/[email protected]
> Signed-off-by: Aneesh Kumar K.V <[email protected]>
> Cc: Ralph Campbell <[email protected]>
> Cc: Dan Williams <[email protected]>
> Signed-off-by: Andrew Morton <[email protected]>
> ---
>
>  mm/memremap.c |   12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
>
> --- 
> a/mm/memremap.c~mm-pgmap-use-correct-alignment-when-looking-at-first-pfn-from-a-region
> +++ a/mm/memremap.c
> @@ -55,8 +55,16 @@ static void pgmap_array_delete(struct re
>
>  static unsigned long pfn_first(struct dev_pagemap *pgmap)
>  {
> -       return PHYS_PFN(pgmap->res.start) +
> -               vmem_altmap_offset(pgmap_altmap(pgmap));
> +       const struct resource *res = &pgmap->res;
> +       struct vmem_altmap *altmap = pgmap_altmap(pgmap);
> +       unsigned long pfn;
> +
> +       if (altmap)
> +               pfn = altmap->base_pfn + vmem_altmap_offset(altmap);
> +       else
> +               pfn = PHYS_PFN(res->start);

This would only be a problem if res->start is not subsection aligned.
Is that bug triggering in your case, or is this just from inspection?
Now that subsections can be assumed as the minimum mapping granularity,
I'd rather clean up the implementation to eliminate altmap->base_pfn,
or at least assert that PHYS_PFN(res->start) and altmap->base_pfn are
always identical.

Otherwise ->base_pfn is supposed to be just a convenient way to recall
the bounds of the memory hotplug operation deeper in the vmemmap
setup.
_______________________________________________
Linux-nvdimm mailing list -- [email protected]
To unsubscribe send an email to [email protected]