[cc Andi]

On Wed, Mar 13, 2013 at 3:08 PM, Wanpeng Li <liw...@linux.vnet.ibm.com> wrote:
> Since commit 42d7395f ("mm: support more pagesizes for
> MAP_HUGETLB/SHM_HUGETLB") was merged, the kernel permits multiple huge
> page sizes, and when the system administrator has configured the system
> to provide huge page pools of different sizes, applications can choose
> the page size used for their allocations. However, only the default-size
> huge page pool is accounted for in memory overcommit accounting, which
> can result in innocent processes being killed by the oom-killer later.
> Fix this by accounting for all the huge page pools of different sizes
> provided by the administrator.
>
Can we enrich the output of hugetlb_report_meminfo()?
thanks
Hillf

> Testcase:
> boot: hugepagesz=1G hugepages=1
> before patch:
> egrep 'CommitLimit' /proc/meminfo
> CommitLimit:    55434168 kB
> after patch:
> egrep 'CommitLimit' /proc/meminfo
> CommitLimit:    54909880 kB
>
> Signed-off-by: Wanpeng Li <liw...@linux.vnet.ibm.com>
> ---
>  mm/hugetlb.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index cdb64e4..9e25040 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2124,8 +2124,11 @@ int hugetlb_report_node_meminfo(int nid, char *buf)
>  /* Return the number pages of memory we physically have, in PAGE_SIZE units. */
>  unsigned long hugetlb_total_pages(void)
>  {
> -	struct hstate *h = &default_hstate;
> -	return h->nr_huge_pages * pages_per_huge_page(h);
> +	struct hstate *h;
> +	unsigned long nr_total_pages = 0;
> +	for_each_hstate(h)
> +		nr_total_pages += h->nr_huge_pages * pages_per_huge_page(h);
> +	return nr_total_pages;
>  }
>
>  static int hugetlb_acct_memory(struct hstate *h, long delta)
> --
> 1.7.11.7