On Thu, 05 Apr 2007 03:31:24 -0400
Rik van Riel <[EMAIL PROTECTED]> wrote:

> Jakub Jelinek wrote:
> 
> > My guess is that all the page zeroing is pretty expensive as well and
> > takes significant time, but I haven't profiled it.
> 
> With the attached patch (Andrew, I'll change the details around
> if you want - I just wanted something to test now), your test
> case run time went down considerably.
> 
> I modified the test case to only run 1000 loops, so it would run
> a bit faster on my system.  I also modified it to use MADV_DONTNEED
> to zap the pages, instead of the mmap(PROT_NONE) thing you use.
> 

Interesting...
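(For reference, the two ways of zapping the pages that are compared above
would look roughly like this.  This is only a sketch, not the actual test
code; buf, len and the MAP_FIXED re-map are my assumptions about what the
mmap(PROT_NONE) variant does.)

#include <sys/mman.h>

/* Drop the contents of an existing anonymous mapping "buf" of "len"
 * bytes, but keep the VMA and its protections; the next touch faults
 * in a fresh zeroed page. */
static void zap_with_madvise(void *buf, size_t len)
{
        madvise(buf, len, MADV_DONTNEED);
}

/* ... versus mapping PROT_NONE over the range, which also replaces
 * the VMA and needs another mmap()/mprotect() before the memory is
 * usable again. */
static void zap_with_prot_none(void *buf, size_t len)
{
        mmap(buf, len, PROT_NONE,
             MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
}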

Could you please apply this patch as well and see if it helps on your machine?

[PATCH] VM: mm_struct's mmap_cache should be close to mmap_sem

This avoids dirtying a cache line: the first cache line of mm_struct is (or
should be) mostly read.

When find_vma() hits the cache, we don't need to touch the beginning of
mm_struct at all: since we have just dirtied mmap_sem, access to its cache
line is free.

When find_vma() misses the cache, we don't need to dirty the beginning of
mm_struct in order to update mmap_cache.
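
To make the access pattern concrete, here is a simplified sketch of the
find_vma() fast path (not the exact mm/mmap.c code; rb_lookup() is a
hypothetical stand-in for the real rbtree walk): mmap_cache is read on
every call and written back on every miss, always with mmap_sem held,
whose cache line the caller has just dirtied by taking the semaphore.

/* Simplified sketch, not the exact mm/mmap.c code; rb_lookup() is a
 * hypothetical stand-in for the rbtree walk over mm->mm_rb. */
struct vm_area_struct *find_vma_sketch(struct mm_struct *mm, unsigned long addr)
{
        struct vm_area_struct *vma = mm->mmap_cache;    /* read on every call */

        if (vma && vma->vm_end > addr && vma->vm_start <= addr)
                return vma;                     /* hit: nothing dirtied */

        vma = rb_lookup(&mm->mm_rb, addr);      /* miss: walk the rbtree */
        if (vma)
                mm->mmap_cache = vma;           /* written on every miss */
        return vma;
}

With the patch, that one write lands in the cache line we already own
because of mmap_sem, instead of dirtying the mostly-read first cache line
of mm_struct.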


Signed-off-by: Eric Dumazet <[EMAIL PROTECTED]>

--- linux-2.6.21-rc5/include/linux/sched.h
+++ linux-2.6.21-rc5-ed/include/linux/sched.h
@@ -310,7 +310,6 @@ typedef unsigned long mm_counter_t;
 struct mm_struct {
        struct vm_area_struct * mmap;           /* list of VMAs */
        struct rb_root mm_rb;
-       struct vm_area_struct * mmap_cache;     /* last find_vma result */
        unsigned long (*get_unmapped_area) (struct file *filp,
                                unsigned long addr, unsigned long len,
                                unsigned long pgoff, unsigned long flags);
@@ -324,6 +323,7 @@ struct mm_struct {
        atomic_t mm_count;                      /* How many references to "struct mm_struct" (users count as 1) */
        int map_count;                          /* number of VMAs */
        struct rw_semaphore mmap_sem;
+       struct vm_area_struct * mmap_cache;     /* last find_vma result */
        spinlock_t page_table_lock;             /* Protects page tables and some counters */
 
        struct list_head mmlist;                /* List of maybe swapped mm's.  These are globally strung


