On Sun, 20 Jan 2019 17:10:49 -0800 Sandeep Patil <[email protected]> wrote:

> The 'pss_locked' field of smaps_rollup was being calculated incorrectly:
> it accumulated the cumulative pss every time a locked VMA was found.
> 
> Fix that by recording the current pss value before each VMA is walked,
> so that we add only the delta when the VMA turns out to be VM_LOCKED.
> 
> ...
>
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -709,6 +709,7 @@ static void smap_gather_stats(struct vm_area_struct *vma,
>  #endif
>               .mm = vma->vm_mm,
>       };
> +     unsigned long pss;
>  
>       smaps_walk.private = mss;
>  
> @@ -737,11 +738,12 @@ static void smap_gather_stats(struct vm_area_struct *vma,
>               }
>       }
>  #endif
> -
> +     /* record current pss so we can calculate the delta after page walk */
> +     pss = mss->pss;
>       /* mmap_sem is held in m_start */
>       walk_page_vma(vma, &smaps_walk);
>       if (vma->vm_flags & VM_LOCKED)
> -             mss->pss_locked += mss->pss;
> +             mss->pss_locked += mss->pss - pss;
>  }

This seems to be a rather obscure way of accumulating
mem_size_stats.pss_locked.  Wouldn't it make more sense to do this in
smaps_account(), wherever we increment mem_size_stats.pss?

It would be a tiny bit less efficient but I think that the code cleanup
justifies such a cost?
