With help from our local Linux kernel experts we've tracked down the
inexplicable appearance of Private_Clean in our processes' smaps files to a
kernel bug.  The fix is
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/fs/proc/task_mmu.c?id=1c2499ae87f828eabddf6483b0dfc11da1100c07
which, according to git, was first committed in v2.6.36-rc6~63.  When we
manually applied that patch to our kernel, no memory segments in smaps
showed large Private_Clean regions during our test.  Unfortunately, the fix
appears to be merely an accounting change: everything previously reported
as Private_Clean now correctly shows up as Private_Dirty, so we are still
digging to find out why our RSS, specifically Private_Dirty, continues to
grow while jemalloc's active statistic reports much lower numbers.
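
In case it helps anyone following along, here is a minimal sketch (not our
actual test program, just one way to put the two numbers side by side) that
reads jemalloc's "stats.active" via mallctl and sums Private_Dirty from
/proc/self/smaps.  It assumes a stock jemalloc build where the API is
unprefixed (mallctl rather than je_mallctl); the smaps parsing is naive.

    /* Sketch: compare jemalloc's stats.active with the process's
     * Private_Dirty total from /proc/self/smaps.
     * Build (hypothetical file name): cc -o rss_check rss_check.c -ljemalloc */
    #include <stdio.h>
    #include <stdint.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        /* Advance the epoch so jemalloc refreshes its cached statistics. */
        uint64_t epoch = 1;
        size_t sz = sizeof(epoch);
        mallctl("epoch", &epoch, &sz, &epoch, sizeof(epoch));

        size_t active = 0;
        sz = sizeof(active);
        mallctl("stats.active", &active, &sz, NULL, 0);

        /* Sum Private_Dirty (kB) across all mappings in smaps. */
        unsigned long kb, dirty_kb = 0;
        char line[256];
        FILE *f = fopen("/proc/self/smaps", "r");
        while (f != NULL && fgets(line, sizeof(line), f) != NULL) {
            if (sscanf(line, "Private_Dirty: %lu kB", &kb) == 1)
                dirty_kb += kb;
        }
        if (f != NULL)
            fclose(f);

        printf("jemalloc stats.active: %zu bytes\n", active);
        printf("smaps Private_Dirty:   %lu kB\n", dirty_kb);
        return 0;
    }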

Thanks,

Tom


From:    Jason Evans <[email protected]>
To:      Thomas R Gissel/Rochester/IBM@IBMUS
Cc:      [email protected]
Date:    06/06/2013 01:34 AM
Subject: Re: High amount of private clean data in smaps





On Jun 5, 2013, at 9:17 PM, Thomas R Gissel <[email protected]> wrote:


      I too have been trying to reproduce the existence of Private_Clean
      memory segments in smaps via a simple test case with jemalloc and was
      unable to on my laptop, a 2-core machine running a 3.8.0-23 kernel.
      I then moved my test to our production box, 96GB of memory, 24
      hardware threads, and a 2.6 kernel (detailed information below), and
      within a few minutes of execution, with a few minor adjustments, I
      was able to duplicate the results of our larger test: smaps showing
      the jemalloc segment with Private_Clean memory usage.  Note that I'm
      using the same jemalloc library whose information Kurtis posted
      earlier (96 arenas, etc.).


Interesting!  I don't see anything unusual about the test program, so I'm
guessing this is kernel-specific.  I'll run it on some 8- and 16-core
machines tomorrow with a couple of kernel versions and see what happens.

Thanks,
Jason


_______________________________________________
jemalloc-discuss mailing list
[email protected]
http://www.canonware.com/mailman/listinfo/jemalloc-discuss
