On Wed, Jul 16, 2025 at 7:46 AM David Hildenbrand <da...@redhat.com> wrote:
>
> On 14.07.25 02:31, Nico Pache wrote:
> > From: Dev Jain <dev.j...@arm.com>
> >
> > Pass order to alloc_charge_folio() and update mTHP statistics.
> >
> > Reviewed-by: Baolin Wang <baolin.w...@linux.alibaba.com>
> > Co-developed-by: Nico Pache <npa...@redhat.com>
> > Signed-off-by: Nico Pache <npa...@redhat.com>
> > Signed-off-by: Dev Jain <dev.j...@arm.com>
> > ---
> >   Documentation/admin-guide/mm/transhuge.rst |  8 ++++++++
> >   include/linux/huge_mm.h                    |  2 ++
> >   mm/huge_memory.c                           |  4 ++++
> >   mm/khugepaged.c                            | 17 +++++++++++------
> >   4 files changed, 25 insertions(+), 6 deletions(-)
> >
> > diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
> > index dff8d5985f0f..2c523dce6bc7 100644
> > --- a/Documentation/admin-guide/mm/transhuge.rst
> > +++ b/Documentation/admin-guide/mm/transhuge.rst
> > @@ -583,6 +583,14 @@ anon_fault_fallback_charge
> >       instead falls back to using huge pages with lower orders or
> >       small pages even though the allocation was successful.
> >
> > +collapse_alloc
> > +     is incremented every time a huge page is successfully allocated for a
> > +     khugepaged collapse.
> > +
> > +collapse_alloc_failed
> > +     is incremented every time a huge page allocation fails during a
> > +     khugepaged collapse.
> > +
> >   zswpout
> >       is incremented every time a huge page is swapped out to zswap in one
> >       piece without splitting.
> > diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> > index 7748489fde1b..4042078e8cc9 100644
> > --- a/include/linux/huge_mm.h
> > +++ b/include/linux/huge_mm.h
> > @@ -125,6 +125,8 @@ enum mthp_stat_item {
> >       MTHP_STAT_ANON_FAULT_ALLOC,
> >       MTHP_STAT_ANON_FAULT_FALLBACK,
> >       MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
> > +     MTHP_STAT_COLLAPSE_ALLOC,
> > +     MTHP_STAT_COLLAPSE_ALLOC_FAILED,
> >       MTHP_STAT_ZSWPOUT,
> >       MTHP_STAT_SWPIN,
> >       MTHP_STAT_SWPIN_FALLBACK,
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index bd7a623d7ef8..e2ed9493df77 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -614,6 +614,8 @@ static struct kobj_attribute _name##_attr = __ATTR_RO(_name)
> >   DEFINE_MTHP_STAT_ATTR(anon_fault_alloc, MTHP_STAT_ANON_FAULT_ALLOC);
> >   DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
> >   DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
> > +DEFINE_MTHP_STAT_ATTR(collapse_alloc, MTHP_STAT_COLLAPSE_ALLOC);
> > +DEFINE_MTHP_STAT_ATTR(collapse_alloc_failed, MTHP_STAT_COLLAPSE_ALLOC_FAILED);
> >   DEFINE_MTHP_STAT_ATTR(zswpout, MTHP_STAT_ZSWPOUT);
> >   DEFINE_MTHP_STAT_ATTR(swpin, MTHP_STAT_SWPIN);
> >   DEFINE_MTHP_STAT_ATTR(swpin_fallback, MTHP_STAT_SWPIN_FALLBACK);
> > @@ -679,6 +681,8 @@ static struct attribute *any_stats_attrs[] = {
> >   #endif
> >       &split_attr.attr,
> >       &split_failed_attr.attr,
> > +     &collapse_alloc_attr.attr,
> > +     &collapse_alloc_failed_attr.attr,
> >       NULL,
> >   };
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index fa0642e66790..cc9a35185604 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -1068,21 +1068,26 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
> >   }
> >
> >   static int alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
> > -                           struct collapse_control *cc)
> > +                           struct collapse_control *cc, u8 order)
>
> u8, really? :)
At the time I knew I was going to use u8 at the bitmap level, so I
thought I should use it here too. But you are right; I went through
and cleaned up all the u8 usage, with the exception of the actual
bitmap storage.
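
With that cleanup applied, the prototype and its PMD-sized call site
look roughly like this (a sketch of what the next revision will carry,
not the exact posted code):

    /* order is a plain unsigned int, matching folio_order()/__folio_alloc() */
    static int alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
                                  struct collapse_control *cc, unsigned int order);

    /* the existing PMD collapse path keeps passing HPAGE_PMD_ORDER */
    result = alloc_charge_folio(&folio, mm, cc, HPAGE_PMD_ORDER);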
>
> Just use an "unsigned int" like folio_order() would or what
> __folio_alloc() consumes.
>
>
>
> Apart from that
>
> Acked-by: David Hildenbrand <da...@redhat.com>
Thank you!

>
> --
> Cheers,
>
> David / dhildenb
>

