Hello,

On (06/16/15 23:47), Minchan Kim wrote:
[..]
> 
> I like the idea, but I still have a concern about the lack of fragmented zspages
> during memory pressure, because auto-compaction will prevent fragmentation
> most of the time. Surely, using fragmented space as a buffer under heavy memory
> pressure is not the intended design, so it could be fragile, but I'm afraid
> this feature might accelerate it, and it ends up causing a problem and
> changing the current behavior of zram as swap.
> 
> I hope you test this feature with considering my concern.
> Of course, I will test it with enough time.
> 

OK, to test the claim that "compaction leaves no fragmentation in classes" I did
some heavy testing today -- parallel copy/remove of the Linux kernel, git, and
glibc trees; parallel builds (-j4); parallel clean-ups (make clean); git gc; etc.


device's IO stats:
cat /sys/block/zram0/stat
   277050        0  2216400     1463  8442846        0 67542768   106536        0   107810   108146

device's MM stats:
cat /sys/block/zram0/mm_stat
 3095515136 2020518768 2057990144        0 2645716992     2030   182119


We migrated 182119 objects, almost all of them via auto-compaction (I triggered
manual compaction fewer than 5 times).
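As a sanity check, the migrated-objects number is just the last column of
mm_stat. A quick sketch of reading it from the line quoted above (the field
names are my assumption, taken from Documentation/blockdev/zram.txt of roughly
that kernel era -- check your tree):

```python
# Parse the mm_stat line quoted above. Field names are assumed from
# Documentation/blockdev/zram.txt (circa 4.1); verify against your kernel.
fields = ("orig_data_size", "compr_data_size", "mem_used_total",
          "mem_limit", "mem_used_max", "zero_pages", "num_migrated")
line = "3095515136 2020518768 2057990144 0 2645716992 2030 182119"
stats = dict(zip(fields, (int(v) for v in line.split())))

print(stats["num_migrated"])  # objects moved by compaction -> 182119
# Incidental: overall compression ratio of the stored data.
print(round(stats["orig_data_size"] / stats["compr_data_size"], 2))  # -> 1.53
```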


Now, during compaction I also counted the number of classes that ended up
'fully compacted' (class->OBJ_ALLOCATED == class->OBJ_USED) and 'partially
compacted'.


And the results (after 1377 compactions) are:

 pool compaction nr:1377 (full:487 part:35498)



So, we 'fully compact'-ed a class in only 487/(35498 + 487) == 0.0135 of the cases,

roughly ~1.35%.

So the "compaction leaves no fragmentation" argument does not stand anymore: we
leave 'holes' in classes in ~98.6% of the cases.
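Spelling the arithmetic out (just reproducing the counters reported by the
debug pr_err() below, nothing else assumed):

```python
# Counters printed after 1377 compaction runs:
# pool compaction nr:1377 (full:487 part:35498)
full, part = 487, 35498

# Share of __zs_compact() passes that left the class with no unused
# allocated objects (OBJ_ALLOCATED == OBJ_USED).
ratio = full / (full + part)
print(f"{ratio:.4f}")  # -> 0.0135, i.e. ~1.35% fully compacted
```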



The code I used to gather these stats:

---

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 55cfda8..894773a 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -253,6 +253,9 @@ struct zs_pool {
 #ifdef CONFIG_ZSMALLOC_STAT
        struct dentry           *stat_dentry;
 #endif
+       int                     compaction_nr;
+       long                    full_compact;
+       long                    part_compact;
 };
 
 /*
@@ -1717,6 +1720,7 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
        struct zs_compact_control cc;
        struct page *src_page;
        struct page *dst_page = NULL;
+       bool compacted = false;
 
        spin_lock(&class->lock);
        while ((src_page = isolate_source_page(class))) {
@@ -1726,6 +1730,8 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
                if (!zs_can_compact(class))
                        break;
 
+               compacted = true;
+
                cc.index = 0;
                cc.s_page = src_page;
 
@@ -1751,6 +1757,13 @@ out:
        if (src_page)
                putback_zspage(pool, class, src_page);
 
+       if (compacted) {
+       if (zs_stat_get(class, OBJ_ALLOCATED) == zs_stat_get(class, OBJ_USED))
+                       pool->full_compact++;
+               else
+                       pool->part_compact++;
+       }
+
        spin_unlock(&class->lock);
 }
 
@@ -1767,6 +1780,11 @@ unsigned long zs_compact(struct zs_pool *pool)
                        continue;
                __zs_compact(pool, class);
        }
+
+       pool->compaction_nr++;
+       pr_err("pool compaction nr:%d (full:%ld part:%ld)\n", pool->compaction_nr,
+                       pool->full_compact, pool->part_compact);
+
        return pool->num_migrated;
 }
 EXPORT_SYMBOL_GPL(zs_compact);
--