On Tue, Aug 30, 2016 at 06:59:38PM +0800, wei.guo.si...@gmail.com wrote:
> From: Simon Guo <wei.guo.si...@gmail.com>
> 
> In do_mlock(), the check against locked memory limitation
> has a hole which will fail following cases at step 3):
> 1) User has a memory chunk from addressA with 50k, and user
> mem lock rlimit is 64k.
> 2) mlock(addressA, 30k)
> 3) mlock(addressA, 40k)
> 
> The 3rd step should have been allowed since the 40k request
> is intersected with the previous 30k at step 2), and the
> 3rd step is actually for mlock on the extra 10k memory.
> 
> This patch checks vma to caculate the actual "new" mlock
> size, if necessary, and ajust the logic to fix this issue.
> 
> Signed-off-by: Simon Guo <wei.guo.si...@gmail.com>

Looks reasonable to me. Few nitpicks below.

> ---
>  mm/mlock.c | 49 +++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 49 insertions(+)
> 
> diff --git a/mm/mlock.c b/mm/mlock.c
> index 14645be..9283187 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -617,6 +617,43 @@ static int apply_vma_lock_flags(unsigned long start, 
> size_t len,
>       return error;
>  }
>  
> +/*
> + * Go through vma areas and sum size of mlocked
> + * vma pages, as return value.
> + * Note deferred memory locking case(mlock2(,,MLOCK_ONFAULT)
> + * is also counted.
> + * Return value: previously mlocked page counts
> + */
> +static int count_mm_mlocked_page_nr(struct mm_struct *mm,
> +             unsigned long start, size_t len)
> +{
> +     struct vm_area_struct *vma;
> +     int count = 0;
> +
> +     if (mm == NULL)
> +             mm = current->mm;
> +
> +     vma = find_vma(mm, start);
> +     if (vma == NULL)
> +             vma = mm->mmap;
> +
> +     for (; vma ; vma = vma->vm_next) {
> +             if (start + len <=  vma->vm_start)
> +                     break;

        for (; vma && start + len <= vma->vm_start; vma = vma->vm_next) {

> +             if (vma->vm_flags && VM_LOCKED) {
> +                     if (start > vma->vm_start)
> +                             count -= (start - vma->vm_start);
> +                     if (start + len < vma->vm_end) {
> +                             count += start + len - vma->vm_start;
> +                             break;
> +                     }
> +                     count += vma->vm_end - vma->vm_start;
> +             }
> +     }
> +
> +     return (PAGE_ALIGN(count) >> PAGE_SHIFT);

Redundant parenthesis.

And do we need PAGE_ALIGN() here? Caller already aligned 'len', and vma
boundaries are alinged.

> +}
> +
>  static __must_check int do_mlock(unsigned long start, size_t len, vm_flags_t 
> flags)
>  {
>       unsigned long locked;
> @@ -639,6 +676,18 @@ static __must_check int do_mlock(unsigned long start, 
> size_t len, vm_flags_t fla
>               return -EINTR;
>  
>       locked += current->mm->locked_vm;
> +     if ((locked > lock_limit) && (!capable(CAP_IPC_LOCK))) {
> +             /*
> +              * It is possible that the regions requested
> +              * intersect with previously mlocked areas,
> +              * that part area in "mm->locked_vm" should
> +              * not be counted to new mlock increment
> +              * count. So check and adjust locked count
> +              * if necessary.
> +              */
> +             locked -= count_mm_mlocked_page_nr(current->mm,
> +                             start, len);
> +     }
>  
>       /* check against resource limits */
>       if ((locked <= lock_limit) || capable(CAP_IPC_LOCK))
> -- 
> 1.8.3.1
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majord...@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"d...@kvack.org";> em...@kvack.org </a>

-- 
 Kirill A. Shutemov

Reply via email to