On Fri, Feb 01, 2019 at 11:03:58AM +0100, Peter Zijlstra wrote:
> On Fri, Feb 01, 2019 at 10:22:40AM +0100, Peter Zijlstra wrote:
> 
> > +static u64 __perf_get_page_size(struct mm_struct *mm, unsigned long addr)
> >  {
> > +   pgd_t *pgd;
> > +   p4d_t *p4d;
> > +   pud_t *pud;
> > +   pmd_t *pmd;
> > +
> > +   pgd = pgd_offset(mm, addr);
> > +   if (pgd_none(*pgd))
> > +           return 0;
> > +
> > +   p4d = p4d_offset(pgd, addr);
> > +   if (p4d_none(*p4d))
> > +           return 0;
> > +
> > +   if (p4d_large(*p4d))
> > +           return 1ULL << P4D_SHIFT;
> > +
> > +   if (!p4d_present(*p4d))
> > +           return 0;
> > +
> > +   pud = pud_offset(p4d, addr);
> > +   if (pud_none(*pud))
> > +           return 0;
> > +
> > +   if (pud_large(*pud))
> > +           return 1ULL << PUD_SHIFT;
> 
> Will just mentioned a lovely feature where some archs have multi-entry
> large pages.
> 
> Possible something like:
> 
>       if (pud_large(*pud)) {
>               struct page *page = pud_page(*pud);
>               int order = PUD_SHIFT;
> 
>               if (PageHuge(page)) {
>                       page = compound_head(page);
>                       order = PAGE_SHIFT + compound_order(page);
>               }
> 
>               return 1ULL << order;
>       }
> 
> works correctly.
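
The same treatment would presumably be needed one level further down as
well; a rough sketch of what the PMD step could look like under that
assumption (same helpers and style as the quoted patch, not something
that was actually posted):

	pmd = pmd_offset(pud, addr);
	if (pmd_none(*pmd))
		return 0;

	if (pmd_large(*pmd)) {
		struct page *page = pmd_page(*pmd);
		int order = PMD_SHIFT;

		/* multi-entry huge pages span several PMD slots */
		if (PageHuge(page)) {
			page = compound_head(page);
			order = PAGE_SHIFT + compound_order(page);
		}

		return 1ULL << order;
	}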

For more fun: some compound pages can be mapped with page table entries
not matching their compound size, i.e. 2M pages mapped with PTEs.
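
If what the profiler wants is the granularity the hardware actually maps
(i.e. the TLB entry size), the PTE-level tail of such a walk would report
PAGE_SIZE even when the backing page is compound; a rough sketch under
that assumption (helpers as above, not taken from the patch):

	pte_t *ptep = pte_offset_map(pmd, addr);
	bool present = pte_present(*ptep);

	pte_unmap(ptep);
	if (!present)
		return 0;

	/*
	 * A 2M compound page mapped with base PTEs is still translated
	 * through 4K entries here, so report the mapping size rather
	 * than the compound size.
	 */
	return 1ULL << PAGE_SHIFT;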

-- 
 Kirill A. Shutemov
