On Fri, 10 Nov 2017 22:59:57 +0530
"Aneesh Kumar K.V" <aneesh.ku...@linux.vnet.ibm.com> wrote:

> Michael Ellerman <m...@ellerman.id.au> writes:
> 
> > "Aneesh Kumar K.V" <aneesh.ku...@linux.vnet.ibm.com> writes:
> >  
> >> While computing the slice mask for a free area we need to make sure
> >> we only search within the addr limit applicable for this mmap. We
> >> update slb_addr_limit after a mmap request above 128TB, but a
> >> following mmap request with a hint addr below 128TB should still
> >> limit its search to below 128TB, i.e. we should not use
> >> slb_addr_limit to compute the slice mask in this case. Instead, we
> >> should derive the high addr limit from the mmap hint addr value.
> >>
> >> Signed-off-by: Aneesh Kumar K.V <aneesh.ku...@linux.vnet.ibm.com>
> >> ---
> >>  arch/powerpc/mm/slice.c | 34 ++++++++++++++++++++++------------
> >>  1 file changed, 22 insertions(+), 12 deletions(-)  
> >
> > How does this relate to the fixes Nick has sent?  
> 
> This patch is on top of the patch series sent by Nick. Without this
> patch we will allocate memory across the 128TB boundary if hint_addr <
> 128TB but hint_addr + len goes beyond it. In order to recreate this
> issue we would have to map the stack below, hence one won't hit the
> error in the general case.
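
To make the scenario concrete, here is a minimal userspace sketch of the
sequence described above (purely illustrative -- the addresses and lengths
are made up rather than taken from an actual reproducer, and it assumes a
64-bit powerpc process where 128TB is the default map window):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 1UL << 40;			/* 1TB mapping */
	/* hint above 128TB (2^47): asks for the expanded address space */
	void *high_hint = (void *)(1UL << 48);
	/* hint below 128TB, but hint + len crosses the 128TB boundary */
	void *low_hint = (void *)((1UL << 47) - (len / 2));
	void *a, *b;

	/*
	 * This should raise the process's address limit (slb_addr_limit)
	 * beyond 128TB on a kernel with 512TB support.
	 */
	a = mmap(high_hint, len, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/*
	 * Hint is below 128TB, so even though hint + len crosses the
	 * boundary, the search should stay entirely below 128TB.
	 */
	b = mmap(low_hint, len, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	printf("first:  %p\nsecond: %p\n", a, b);
	return 0;
}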

I couldn't trigger this case after that series -- hash
get_unmapped_area should be excluding it up front, before getting
into the slice allocator. Do you have an strace to reproduce it?

Either way I do think it would be good to tighten up all the slice
bitmap limits, including all the other places that hardcode the
max bitmap size.
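
As a rough sketch of what deriving the limit from the hint could look like
(the helper below is hypothetical; DEFAULT_MAP_WINDOW, SLICE_NUM_HIGH,
GET_HIGH_SLICE_INDEX() and mm->context.slb_addr_limit are the existing
powerpc names as far as I can tell, but the actual patch may well differ):

/*
 * Hypothetical helper: pick the high limit for a slice search from the
 * mmap hint rather than unconditionally from mm->context.slb_addr_limit,
 * so a request with a hint below 128TB never searches above that boundary.
 */
static unsigned long slice_effective_limit(struct mm_struct *mm,
					   unsigned long addr)
{
	if (addr >= DEFAULT_MAP_WINDOW)
		return mm->context.slb_addr_limit;
	return DEFAULT_MAP_WINDOW;
}

The high-slice bitmap operations could then run over
GET_HIGH_SLICE_INDEX(limit) bits computed from that value rather than
over a hardcoded SLICE_NUM_HIGH.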

Thanks,
Nick
