Re: new procfs memory analysis feature

2006-12-11 Thread Joe Green

Albert Cahalan wrote:

David Singleton writes:


Add variation of /proc/PID/smaps called /proc/PID/pagemaps.
Shows reference counts for individual pages instead of aggregate totals.
Allows more detailed memory usage information for memory analysis tools.
An example of the output shows the shared text VMA for ld.so and
the share depths of the pages in the VMA.

a7f4b000-a7f65000 r-xp  00:0d 19185826   /lib/ld-2.5.90.so
 11 11 11 11 11 11 11 11 11 13 13 13 13 13 13 13 8 8 8 13 13 13 13 13 13 13


Arrrgh! Not another ghastly maps file!

Now we have /proc/*/smaps, which should make decent programmers cry.


Yes, that's what we based this implementation on.  :)


Along the way, nobody bothered to add support for describing the
page size (IMHO your format ***severely*** needs this)


Since the map size and an entry for each page are given, it's possible to 
figure out the page size, assuming each map uses only a single page 
size.  But adding the page size explicitly would be reasonable.
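
For illustration, a tiny host-side helper along these lines can recover the
page size from a single entry (a sketch only; the helper name is made up, and
the numbers below come from the ld.so example in the patch description):

/*
 * Sketch, not part of the patch: infer the page size of one mapping from
 * its address range and the number of per-page counts printed for it,
 * assuming one count per page and a single page size within the VMA.
 */
#include <stdio.h>

static unsigned long infer_page_size(unsigned long vm_start,
                                     unsigned long vm_end,
                                     unsigned long nr_counts)
{
    return nr_counts ? (vm_end - vm_start) / nr_counts : 0;
}

int main(void)
{
    /* a7f4b000-a7f65000 with 26 counts, as in the example above */
    printf("%lu\n", infer_page_size(0xa7f4b000, 0xa7f65000, 26));
    return 0;    /* prints 4096 */
}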



There can be a million pages in a mapping for a 32-bit process.
If my guess (since you too failed to document your format) is right,
you propose to have one decimal value per page.


Yes, that's right.  We considered using repeat counts for sequences of 
pages with the same reference count (quite common), but it hasn't been 
necessary in our application (see below).
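
A rough sketch of what such a repeat-count encoding could look like
(illustrative only; the "count*repeat" notation is made up and is not part
of the patch):

/* Illustrative only: run-length encode per-page share counts as
 * "count*repeat" tokens instead of one value per page. */
#include <stdio.h>

static void print_rle(const int *counts, int n)
{
    int i = 0;

    while (i < n) {
        int run = 1;

        while (i + run < n && counts[i + run] == counts[i])
            run++;
        if (run > 1)
            printf(" %d*%d", counts[i], run);
        else
            printf(" %d", counts[i]);
        i += run;
    }
    printf("\n");
}

int main(void)
{
    /* the ld.so text VMA from the example output above */
    int counts[] = { 11, 11, 11, 11, 11, 11, 11, 11, 11,
                     13, 13, 13, 13, 13, 13, 13,
                     8, 8, 8,
                     13, 13, 13, 13, 13, 13, 13 };

    print_rle(counts, (int)(sizeof(counts) / sizeof(counts[0])));
    return 0;    /* prints " 11*9 13*7 8*3 13*7" */
}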


In other words, the lines of this file can be megabytes long without 
even getting to the issue of 64-bit hardware. This is no text file!

How about a proper system call?


Our use for this is to optimize memory usage on very small embedded 
systems, so the number of pages hasn't been a problem.


For the same reason, not needing a special program on the target system 
to read the data is an advantage, because each extra program needed adds 
to the footprint problem.


The data is taken off the target and interpreted on another system, 
which often is of a different architecture, so the portable text format 
is useful also.
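
As a sketch of that workflow, a small host-side parser can summarize a
captured pagemaps dump per VMA.  Everything below is illustrative; the only
assumptions taken from this thread are the format (a maps-style header line
followed by one decimal share count per page) and the sign convention
(negative counts mean the page is dirty):

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    FILE *f = argc > 1 ? fopen(argv[1], "r") : stdin;
    /* counts lines can be long; this buffer suits small embedded dumps */
    static char line[1 << 16];
    char vma[256] = "";

    if (!f) {
        perror("fopen");
        return 1;
    }
    while (fgets(line, sizeof(line), f)) {
        long shared = 0, priv = 0, dirty = 0;
        char *p = line, *end;
        long v;

        if (line[0] == '\n' || line[0] == '\0')
            continue;
        if (isxdigit((unsigned char)line[0])) {
            /* maps-style header line: remember it for the counts line */
            line[strcspn(line, "\n")] = '\0';
            snprintf(vma, sizeof(vma), "%s", line);
            continue;
        }
        /* counts line: one decimal share depth per page, negative = dirty */
        while ((v = strtol(p, &end, 10)), end != p) {
            p = end;
            if (v < 0)
                dirty++;
            else if (v > 1)
                shared++;
            else if (v == 1)
                priv++;
        }
        printf("%s\n  shared=%ld private=%ld dirty=%ld\n",
               vma, shared, priv, dirty);
    }
    if (f != stdin)
        fclose(f);
    return 0;
}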


This isn't meant to say your arguments aren't important; I'm just 
explaining why this implementation is useful for us.



--
Joe Green <[EMAIL PROTECTED]>
MontaVista Software, Inc.



Re: new procfs memory analysis feature

2006-12-11 Thread Albert Cahalan

David Singleton writes:


Add variation of /proc/PID/smaps called /proc/PID/pagemaps.
Shows reference counts for individual pages instead of aggregate totals.
Allows more detailed memory usage information for memory analysis tools.
An example of the output shows the shared text VMA for ld.so and
the share depths of the pages in the VMA.

a7f4b000-a7f65000 r-xp  00:0d 19185826   /lib/ld-2.5.90.so
 11 11 11 11 11 11 11 11 11 13 13 13 13 13 13 13 8 8 8 13 13 13 13 13 13 13


Arrrgh! Not another ghastly maps file!

The original was mildly defective. Somebody thought " (deleted)" was
a reserved filename extension. Somebody thought "/SYSV*" was also
some kind of reserved namespace. Nobody ever thought to bother with
a properly specified grammar; it's more fun to blame application
developers for guessing as best they can. The use of %08lx is quite
a wart too, looking ridiculous on 64-bit systems.

Now we have /proc/*/smaps, which should make decent programmers cry.
Really now, WTF? It has compact non-obvious parts, which would be a
nice choice for performance if not for being MIXED with wordy bloated
parts of a completely different nature. Parsing is terribly painful.

Supposedly there is a NUMA version too.

Along the way, nobody bothered to add support for describing the
page size (IMHO your format ***severely*** needs this) or for the
various VMA flags to indicate if memory is locked, randomized, etc.

There can be a million pages in a mapping for a 32-bit process.
If my guess (since you too failed to document your format) is right,
you propose to have one decimal value per page. In other words,
the lines of this file can be megabytes long without even getting
to the issue of 64-bit hardware. This is no text file!

How about a proper system call? Enough is enough already. Take a
look at the mincore system call. Imagine it taking a PID. The 7
available bits probably won't do, so expand that a bit. Just take
the user-allowed parts of the VMA and/or PTE (both variants are
good to have) and put them in a struct. There may be some value
in having both low-privilege and high-privilege versions of this.
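
Something along these lines is the shape being suggested (a rough sketch
only; the struct layout, field names, and the pagemap_query prototype are
hypothetical, not an existing or proposed kernel interface):

#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>

/* Hypothetical per-page record, in place of mincore's 7 spare bits. */
struct page_info {
    uint32_t mapcount;    /* share depth of the page */
    uint16_t vma_flags;   /* user-visible VMA bits: locked, randomized, ... */
    uint16_t pte_flags;   /* user-visible PTE bits: present, dirty, ... */
};

/*
 * Hypothetical prototype, modeled on mincore(2) but taking a target pid;
 * fills one struct page_info per page in [start, start + length).
 */
long pagemap_query(pid_t pid, unsigned long start, size_t length,
                   struct page_info *vec);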

BTW, you might wish to ensure that Wine can implement VirtualQueryEx
perfectly based on this.


Re: new procfs memory analysis feature

2006-12-08 Thread Jeremy Fitzhardinge
Paul Cameron Davies wrote:
> The PTI gathers all the open-coded iterators together into one place,
> which would be a good precursor to providing generic iterators for
> non-performance-critical iterations.
>
> We are completing the updating/enhancements to this PTI for the latest
> kernel, to be released just prior to LCA.  This PTI is benchmarking
> well. We also plan to release the experimental guarded page table
> (GPT) running under this PTI.

I looked at implementing linear pagetable mappings for x86 as a way of
getting rid of CONFIG_HIGHPTE, and to make pagetable manipulations
generally more efficient.  I gave up on it after a while because all the
existing pagetable accessors are not suitable for a linear pagetable,
and I didn't want to have to introduce a pile of new pagetable
interfaces.  Would the PTI interface be helpful for this?

Thanks,
J


Re: new procfs memory analysis feature

2006-12-07 Thread Paul Cameron Davies

On Thu, 7 Dec 2006, Andrew Morton wrote:


I think that's our eighth open-coded pagetable walker.  Apparently they are
all slightly different.  Perhaps we should do something about that one
day.


At UNSW we have abstracted the page table into its own layer, and
are running an alternate page table (a GPT), under a clean page table
interface (PTI).

The PTI gathers all the open-coded iterators together into one place,
which would be a good precursor to providing generic iterators for
non-performance-critical iterations.
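
To illustrate the kind of generic iterator such an interface could provide,
here is a sketch modeled on the open-coded walker in the pagemaps patch in
this thread; the names (pt_walk_vma, pte_visit_fn) are made up and are not
taken from the UNSW PTI:

#include <linux/mm.h>
#include <asm/pgtable.h>

typedef void (*pte_visit_fn)(struct vm_area_struct *vma, unsigned long addr,
                             pte_t pte, void *private);

static void pt_walk_pte(struct vm_area_struct *vma, pmd_t *pmd,
                        unsigned long addr, unsigned long end,
                        pte_visit_fn fn, void *private)
{
        pte_t *pte;
        spinlock_t *ptl;

        pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
        do {
                fn(vma, addr, *pte, private);   /* caller-supplied per-pte work */
        } while (pte++, addr += PAGE_SIZE, addr != end);
        pte_unmap_unlock(pte - 1, ptl);
}

static void pt_walk_pmd(struct vm_area_struct *vma, pud_t *pud,
                        unsigned long addr, unsigned long end,
                        pte_visit_fn fn, void *private)
{
        pmd_t *pmd = pmd_offset(pud, addr);
        unsigned long next;

        do {
                next = pmd_addr_end(addr, end);
                if (pmd_none_or_clear_bad(pmd))
                        continue;
                pt_walk_pte(vma, pmd, addr, next, fn, private);
        } while (pmd++, addr = next, addr != end);
}

static void pt_walk_pud(struct vm_area_struct *vma, pgd_t *pgd,
                        unsigned long addr, unsigned long end,
                        pte_visit_fn fn, void *private)
{
        pud_t *pud = pud_offset(pgd, addr);
        unsigned long next;

        do {
                next = pud_addr_end(addr, end);
                if (pud_none_or_clear_bad(pud))
                        continue;
                pt_walk_pmd(vma, pud, addr, next, fn, private);
        } while (pud++, addr = next, addr != end);
}

/* Callers supply only the per-pte visitor instead of open-coding the loops. */
static void pt_walk_vma(struct vm_area_struct *vma, pte_visit_fn fn,
                        void *private)
{
        unsigned long addr = vma->vm_start, end = vma->vm_end, next;
        pgd_t *pgd = pgd_offset(vma->vm_mm, addr);

        do {
                next = pgd_addr_end(addr, end);
                if (pgd_none_or_clear_bad(pgd))
                        continue;
                pt_walk_pud(vma, pgd, addr, next, fn, private);
        } while (pgd++, addr = next, addr != end);
}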

We are completing the updating/enhancements to this PTI for the latest 
kernel, to be released just prior to LCA.  This PTI is benchmarking well. 
We also plan to release the experimental guarded page table (GPT) running 
under this PTI.


Paul Davies
[EMAIL PROTECTED]
~



Re: new procfs memory analysis feature

2006-12-07 Thread david singleton


On Dec 7, 2006, at 5:46 PM, Andrew Morton wrote:


On Thu, 7 Dec 2006 17:07:22 -0800
david singleton <[EMAIL PROTECTED]> wrote:


Attached is the 2.6.19 patch.


It still has the overflow bug.

+   do {
+   ptent = *pte;
+   if (pte_present(ptent)) {
+   page = vm_normal_page(vma, addr, ptent);
+   if (page) {
+   if (pte_dirty(ptent))
+   mapcount = -page_mapcount(page);
+   else
+   mapcount = page_mapcount(page);
+   } else {
+   mapcount = 1;
+   }
+   }
+   seq_printf(m, " %d", mapcount);
+
+   } while (pte++, addr += PAGE_SIZE, addr != end);


Well that's cute.  As long as both seq_file and pte-pages are of size
PAGE_SIZE, and as long as pte's are more than three bytes, this will not
overflow the seq_file output buffer.

hm.  Unless the pages are all dirty and the mapcounts are all 1.  I
think it will overflow then?



I guess that could happen?  Any suggestions?



Re: new procfs memory analysis feature

2006-12-07 Thread Andrew Morton
On Thu, 7 Dec 2006 17:07:22 -0800
david singleton <[EMAIL PROTECTED]> wrote:

> Attached is the 2.6.19 patch.

It still has the overflow bug.


Re: new procfs memory analysis feature

2006-12-07 Thread david singleton

Attached is the 2.6.19 patch.




pagemaps.patch
Description: Binary data






On Dec 7, 2006, at 2:36 PM, Andrew Morton wrote:


On Thu, 07 Dec 2006 14:09:40 -0800
David Singleton <[EMAIL PROTECTED]> wrote:



Andrew,

this implements a feature for memory analysis tools to go along with
smaps.
It shows reference counts for individual pages instead of aggregate
totals for a given VMA.
It helps memory analysis tools determine how well pages are being
shared, or not, in shared libraries, etc.

   The per page information is presented in /proc/<pid>/pagemaps.



I think the concept is not a bad one, frankly - this requirement arises
frequently.  What bugs me is that it only displays the mapcount and
dirtiness.  Perhaps there are other things which people want to know.  I'm
not sure what they would be though.

I wonder if it would be insane to display the info via a filesystem:

cat /mnt/pagemaps/$(pidof crond)/pgd0/pmd1/pte45

Probably it would.


Index: linux-2.6.18/Documentation/filesystems/proc.txt


Against 2.6.18?  I didn't know you could still buy copies of that ;)

This patch's changelog should include sample output.

Your email client wordwraps patches, and it replaces tabs with spaces.


...

+static void pagemaps_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
+   unsigned long addr, unsigned long end,
+   struct seq_file *m)
+{
+   pte_t *pte, ptent;
+   spinlock_t *ptl;
+   struct page *page;
+   int mapcount = 0;
+
+   pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
+   do {
+   ptent = *pte;
+   if (pte_present(ptent)) {
+   page = vm_normal_page(vma, addr, ptent);
+   if (page) {
+   if (pte_dirty(ptent))
+   mapcount = -page_mapcount(page);
+   else
+   mapcount = page_mapcount(page);
+   } else {
+   mapcount = 1;
+   }
+   }
+   seq_printf(m, " %d", mapcount);
+
+   } while (pte++, addr += PAGE_SIZE, addr != end);


Well that's cute.  As long as both seq_file and pte-pages are of size
PAGE_SIZE, and as long as pte's are more than three bytes, this will not
overflow the seq_file output buffer.

hm.  Unless the pages are all dirty and the mapcounts are all 1.  I
think it will overflow then?


+
+static inline void pagemaps_pmd_range(struct vm_area_struct *vma, pud_t *pud,
+   unsigned long addr, unsigned long end,
+   struct seq_file *m)
+{
+   pmd_t *pmd;
+   unsigned long next;
+
+   pmd = pmd_offset(pud, addr);
+   do {
+   next = pmd_addr_end(addr, end);
+   if (pmd_none_or_clear_bad(pmd))
+   continue;
+   pagemaps_pte_range(vma, pmd, addr, next, m);
+   } while (pmd++, addr = next, addr != end);
+}
+
+static inline void pagemaps_pud_range(struct vm_area_struct *vma, pgd_t *pgd,
+   unsigned long addr, unsigned long end,
+   struct seq_file *m)
+{
+   pud_t *pud;
+   unsigned long next;
+
+   pud = pud_offset(pgd, addr);
+   do {
+   next = pud_addr_end(addr, end);
+   if (pud_none_or_clear_bad(pud))
+   continue;
+   pagemaps_pmd_range(vma, pud, addr, next, m);
+   } while (pud++, addr = next, addr != end);
+}
+
+static inline void pagemaps_pgd_range(struct vm_area_struct *vma,
+   unsigned long addr, unsigned long end,
+   struct seq_file *m)
+{
+   pgd_t *pgd;
+   unsigned long next;
+
+   pgd = pgd_offset(vma->vm_mm, addr);
+   do {
+   next = pgd_addr_end(addr, end);
+   if (pgd_none_or_clear_bad(pgd))
+   continue;
+   pagemaps_pud_range(vma, pgd, addr, next, m);
+   } while (pgd++, addr = next, addr != end);
+}


I think that's our eighth open-coded pagetable walker.  Apparently they are
all slightly different.  Perhaps we should do something about that one
day.




Re: new procfs memory analysis feature

2006-12-07 Thread david singleton


On Dec 7, 2006, at 2:36 PM, Andrew Morton wrote:


On Thu, 07 Dec 2006 14:09:40 -0800
David Singleton <[EMAIL PROTECTED]> wrote:



Andrew,

this implements a feature for memory analysis tools to go along with
smaps.
It shows reference counts for individual pages instead of aggregate
totals for a given VMA.
It helps memory analysis tools determine how well pages are being
shared, or not, in shared libraries, etc.

   The per page information is presented in /proc/<pid>/pagemaps.



I think the concept is not a bad one, frankly - this requirement arises
frequently.  What bugs me is that it only displays the mapcount and
dirtiness.  Perhaps there are other things which people want to know.  I'm
not sure what they would be though.

I wonder if it would be insane to display the info via a filesystem:

cat /mnt/pagemaps/$(pidof crond)/pgd0/pmd1/pte45

Probably it would.


Index: linux-2.6.18/Documentation/filesystems/proc.txt


Against 2.6.18?  I didn't know you could still buy copies of that ;)


whoops, I have an old copy.  let me make a patch against 2.6.19.



This patch's changelog should include sample output.


okay.



Your email client wordwraps patches, and it replaces tabs with spaces.


Is an attachment okay?  gzipped tarfile?  a new mailer?

David



...

+static void pagemaps_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
+   unsigned long addr, unsigned long end,
+   struct seq_file *m)
+{
+   pte_t *pte, ptent;
+   spinlock_t *ptl;
+   struct page *page;
+   int mapcount = 0;
+
+   pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
+   do {
+   ptent = *pte;
+   if (pte_present(ptent)) {
+   page = vm_normal_page(vma, addr, ptent);
+   if (page) {
+   if (pte_dirty(ptent))
+   mapcount = -page_mapcount(page);
+   else
+   mapcount = page_mapcount(page);
+   } else {
+   mapcount = 1;
+   }
+   }
+   seq_printf(m, " %d", mapcount);
+
+   } while (pte++, addr += PAGE_SIZE, addr != end);


Well that's cute.  As long as both seq_file and pte-pages are of size
PAGE_SIZE, and as long as pte's are more than three bytes, this will not
overflow the seq_file output buffer.

hm.  Unless the pages are all dirty and the mapcounts are all 1.  I
think it will overflow then?


+
+static inline void pagemaps_pmd_range(struct vm_area_struct *vma, pud_t *pud,
+   unsigned long addr, unsigned long end,
+   struct seq_file *m)
+{
+   pmd_t *pmd;
+   unsigned long next;
+
+   pmd = pmd_offset(pud, addr);
+   do {
+   next = pmd_addr_end(addr, end);
+   if (pmd_none_or_clear_bad(pmd))
+   continue;
+   pagemaps_pte_range(vma, pmd, addr, next, m);
+   } while (pmd++, addr = next, addr != end);
+}
+
+static inline void pagemaps_pud_range(struct vm_area_struct *vma, pgd_t *pgd,
+   unsigned long addr, unsigned long end,
+   struct seq_file *m)
+{
+   pud_t *pud;
+   unsigned long next;
+
+   pud = pud_offset(pgd, addr);
+   do {
+   next = pud_addr_end(addr, end);
+   if (pud_none_or_clear_bad(pud))
+   continue;
+   pagemaps_pmd_range(vma, pud, addr, next, m);
+   } while (pud++, addr = next, addr != end);
+}
+
+static inline void pagemaps_pgd_range(struct vm_area_struct *vma,
+   unsigned long addr, unsigned long end,
+   struct seq_file *m)
+{
+   pgd_t *pgd;
+   unsigned long next;
+
+   pgd = pgd_offset(vma->vm_mm, addr);
+   do {
+   next = pgd_addr_end(addr, end);
+   if (pgd_none_or_clear_bad(pgd))
+   continue;
+   pagemaps_pud_range(vma, pgd, addr, next, m);
+   } while (pgd++, addr = next, addr != end);
+}


I think that's our eighth open-coded pagetable walker.  Apparently they are
all slightly different.  Perhaps we should do something about that one
day.






Re: new procfs memory analysis feature

2006-12-07 Thread Andrew Morton
On Thu, 07 Dec 2006 14:09:40 -0800
David Singleton <[EMAIL PROTECTED]> wrote:

> 
> Andrew,
> 
> this implements a feature for memory analysis tools to go along with 
> smaps.
> It shows reference counts for individual pages instead of aggregate 
> totals for a given VMA.
> It helps memory analysis tools determine how well pages are being 
> shared, or not,
> in shared libraries, etc.
> 
> The per page information is presented in /proc/<pid>/pagemaps.
> 

I think the concept is not a bad one, frankly - this requirement arises
frequently.  What bugs me is that it only displays the mapcount and
dirtiness.  Perhaps there are other things which people want to know.  I'm
not sure what they would be though.

I wonder if it would be insane to display the info via a filesystem:

cat /mnt/pagemaps/$(pidof crond)/pgd0/pmd1/pte45

Probably it would.

> Index: linux-2.6.18/Documentation/filesystems/proc.txt

Against 2.6.18?  I didn't know you could still buy copies of that ;)

This patch's changelog should include sample output.

Your email client wordwraps patches, and it replaces tabs with spaces.

> ...
>
> +static void pagemaps_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
> +   unsigned long addr, unsigned long end,
> +   struct seq_file *m)
> +{
> +   pte_t *pte, ptent;
> +   spinlock_t *ptl;
> +   struct page *page;
> +   int mapcount = 0;
> +
> +   pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
> +   do {
> +   ptent = *pte;
> +   if (pte_present(ptent)) {
> +   page = vm_normal_page(vma, addr, ptent);
> +   if (page) {
> +   if (pte_dirty(ptent))
> +   mapcount = -page_mapcount(page);
> +   else
> +   mapcount = page_mapcount(page);
> +   } else {
> +   mapcount = 1;
> +   }
> +   }
> +   seq_printf(m, " %d", mapcount);
> +
> +   } while (pte++, addr += PAGE_SIZE, addr != end);

Well that's cute.  As long as both seq_file and pte-pages are of size
PAGE_SIZE, and as long as pte's are more than three bytes, this will not
overflow the seq_file output buffer.

hm.  Unless the pages are all dirty and the mapcounts are all 1.  I
think it will overflow then?

> +
> +static inline void pagemaps_pmd_range(struct vm_area_struct *vma, pud_t 
> *pud,
> +   unsigned long addr, unsigned long end,
> +   struct seq_file *m)
> +{
> +   pmd_t *pmd;
> +   unsigned long next;
> +
> +   pmd = pmd_offset(pud, addr);
> +   do {
> +   next = pmd_addr_end(addr, end);
> +   if (pmd_none_or_clear_bad(pmd))
> +   continue;
> +   pagemaps_pte_range(vma, pmd, addr, next, m);
> +   } while (pmd++, addr = next, addr != end);
> +}
> +
> +static inline void pagemaps_pud_range(struct vm_area_struct *vma, pgd_t 
> *pgd,
> +   unsigned long addr, unsigned long end,
> +   struct seq_file *m)
> +{
> +   pud_t *pud;
> +   unsigned long next;
> +
> +   pud = pud_offset(pgd, addr);
> +   do {
> +   next = pud_addr_end(addr, end);
> +   if (pud_none_or_clear_bad(pud))
> +   continue;
> +   pagemaps_pmd_range(vma, pud, addr, next, m);
> +   } while (pud++, addr = next, addr != end);
> +}
> +
> +static inline void pagemaps_pgd_range(struct vm_area_struct *vma,
> +   unsigned long addr, unsigned long end,
> +   struct seq_file *m)
> +{
> +   pgd_t *pgd;
> +   unsigned long next;
> +
> +   pgd = pgd_offset(vma->vm_mm, addr);
> +   do {
> +   next = pgd_addr_end(addr, end);
> +   if (pgd_none_or_clear_bad(pgd))
> +   continue;
> +   pagemaps_pud_range(vma, pgd, addr, next, m);
> +   } while (pgd++, addr = next, addr != end);
> +}

I think that's our eighth open-coded pagetable walker.  Apparently they are
all slightly different.  Perhaps we should do something about that one
day.




new procfs memory analysis feature

2006-12-07 Thread David Singleton


Andrew,

   this implements a feature for memory analysis tools to go along with 
smaps.
It shows reference counts for individual pages instead of aggregate 
totals for a given VMA.
It helps memory analysis tools determine how well pages are being 
shared, or not, in shared libraries, etc.

  The per page information is presented in /proc/<pid>/pagemaps.

Signed-off-by: David Singleton <[EMAIL PROTECTED]>
Signed-off-by: Joe Green <[EMAIL PROTECTED]>

 Documentation/filesystems/proc.txt |    3 -
 fs/proc/base.c                     |   15 +
 fs/proc/internal.h                 |    5 -
 fs/proc/task_mmu.c                 |  110 +
 4 files changed, 128 insertions(+), 5 deletions(-)


Index: linux-2.6.18/Documentation/filesystems/proc.txt
===
--- linux-2.6.18.orig/Documentation/filesystems/proc.txt
+++ linux-2.6.18/Documentation/filesystems/proc.txt
@@ -128,12 +128,13 @@ Table 1-1: Process specific entries in /
 fd  Directory, which contains all file descriptors
 maps   Memory maps to executables and library files   (2.4)
 mem Memory held by this process
+ pagemaps Based on maps, presents page ref counts for each mapped file
 root   Link to the root directory of this process
+ smaps  Extension based on maps, presenting the rss size for each mapped file
 stat    Process status
 statm   Process memory status information
 status  Process status in human readable form
 wchan   If CONFIG_KALLSYMS is set, a pre-decoded wchan
- smaps  Extension based on maps, presenting the rss size for each mapped file
..

For example, to get the status information of a process, all you have 
to do is

Index: linux-2.6.18/fs/proc/base.c
===
--- linux-2.6.18.orig/fs/proc/base.c
+++ linux-2.6.18/fs/proc/base.c
@@ -182,6 +182,11 @@ enum pid_directory_inos {
   PROC_TID_OOM_SCORE,
   PROC_TID_OOM_ADJUST,

+#ifdef CONFIG_MMU
+   PROC_TGID_PAGEMAPS,
+   PROC_TID_PAGEMAPS,
+#endif
+
   /* Add new entries before this */
   PROC_TID_FD_DIR = 0x8000,   /* 0x8000-0x */
};
@@ -240,6 +245,9 @@ static struct pid_entry tgid_base_stuff[
#ifdef CONFIG_AUDITSYSCALL
   E(PROC_TGID_LOGINUID, "loginuid", S_IFREG|S_IWUSR|S_IRUGO),
#endif
+#ifdef CONFIG_MMU
+   E(PROC_TGID_PAGEMAPS,  "pagemaps", S_IFREG|S_IRUGO),
+#endif
   {0,0,NULL,0}
};
static struct pid_entry tid_base_stuff[] = {
@@ -282,6 +290,9 @@ static struct pid_entry tid_base_stuff[]
#ifdef CONFIG_AUDITSYSCALL
   E(PROC_TID_LOGINUID, "loginuid", S_IFREG|S_IWUSR|S_IRUGO),
#endif
+#ifdef CONFIG_MMU
+   E(PROC_TID_PAGEMAPS,   "pagemaps", S_IFREG|S_IRUGO),
+#endif
   {0,0,NULL,0}
};

@@ -1769,6 +1780,10 @@ static struct dentry *proc_pident_lookup
   case PROC_TGID_SMAPS:
   inode->i_fop = &proc_smaps_operations;
   break;
+   case PROC_TID_PAGEMAPS:
+   case PROC_TGID_PAGEMAPS:
+   inode->i_fop = &proc_pagemaps_operations;
+   break;
#endif
   case PROC_TID_MOUNTSTATS:
   case PROC_TGID_MOUNTSTATS:
Index: linux-2.6.18/fs/proc/internal.h
===
--- linux-2.6.18.orig/fs/proc/internal.h
+++ linux-2.6.18/fs/proc/internal.h
@@ -40,10 +40,7 @@ extern int proc_pid_statm(struct task_st
extern struct file_operations proc_maps_operations;
extern struct file_operations proc_numa_maps_operations;
extern struct file_operations proc_smaps_operations;
-
-extern struct file_operations proc_maps_operations;
-extern struct file_operations proc_numa_maps_operations;
-extern struct file_operations proc_smaps_operations;
+extern struct file_operations proc_pagemaps_operations;


void free_proc_entry(struct proc_dir_entry *de);
Index: linux-2.6.18/fs/proc/task_mmu.c
===
--- linux-2.6.18.orig/fs/proc/task_mmu.c
+++ linux-2.6.18/fs/proc/task_mmu.c
@@ -436,6 +436,116 @@ static int do_maps_open(struct inode *in
   return ret;
}

+static void pagemaps_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
+   unsigned long addr, unsigned long end,
+   struct seq_file *m)
+{
+   pte_t *pte, ptent;
+   spinlock_t *ptl;
+   struct page *page;
+   int mapcount = 0;
+
+   pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
+   do {
+   ptent = *pte;
+   if (pte_present(ptent)) {
+   page = vm_normal_page(vma, addr, ptent);
+   if (page) {
+   if (pte_dirty(ptent))
+   mapcount = -page_mapcount(page);
+   else
+  
