On Tue 14-11-17 14:28:11, David Rientjes wrote:
[...]
> > /proc/meminfo is paved with mistakes throughout the history. It pretends
> > to give a good picture of the memory usage, yet we have many pointless
> > entries while large consumers are not reflected at all in many cases.
> > 
> > Hugetlb data in such great detail shouldn't have been exported in the
> > first place when it reflects only one specific hugepage size. I would
> > argue that if somebody went to the trouble of configuring non-default
> > hugetlb page sizes then the sysfs stats would be an immediate place to
> > look. Anyway, I can see that the cumulative information might be
> > helpful for those who do not own the machine but merely debug an
> > issue, which is the primary use case for the file.
> > 
> 
> I agree in principle, but I think it's inevitable on projects that span 
> decades and accumulate features that evolve over time.

Yes, this is acceptable in earlier stages but I believe we have reached
a mature state where we shouldn't repeat those mistakes.
[...]
> > >   if (!hugepages_supported())
> > >           return;
> > >   seq_printf(m,
> > > @@ -2987,6 +2989,11 @@ void hugetlb_report_meminfo(struct seq_file *m)
> > >                   h->resv_huge_pages,
> > >                   h->surplus_huge_pages,
> > >                   1UL << (huge_page_order(h) + PAGE_SHIFT - 10));
> > > +
> > > +	for_each_hstate(h)
> > > +		total += (PAGE_SIZE << huge_page_order(h)) * h->nr_huge_pages;
> > 
> > Please keep the total calculation consistent with what we have there
> > already.
> > 
> 
> Yeah, and I'm not sure if your comment alludes to this being racy, but it
> would be better to store the default size for default_hstate during the
> iteration to total the size for all hstates.

I just meant to keep the code consistent. I have no preference for one
option over the other.
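
Just to illustrate, something along these lines would keep the
arithmetic consistent with the existing per-hstate seq_printf (a sketch
only; declarations are assumed from the quoted hunk and the format
string is a guess at matching the existing alignment, untested):

	struct hstate *h;
	unsigned long total = 0;

	/* accumulate in kB, same shift arithmetic as the per-hstate lines */
	for_each_hstate(h)
		total += h->nr_huge_pages *
			 (1UL << (huge_page_order(h) + PAGE_SHIFT - 10));

	seq_printf(m, "Hugetlb:        %8lu kB\n", total);
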
-- 
Michal Hocko
SUSE Labs
