[PATCH 1/3] Move zone_reclaim() outside of CONFIG_NUMA (v5)

2011-03-29 Thread Balbir Singh
This patch moves zone_reclaim and associated helpers
outside CONFIG_NUMA. This infrastructure is reused
in the patches for page cache control that follow.

Signed-off-by: Balbir Singh bal...@linux.vnet.ibm.com
Reviewed-by: Christoph Lameter c...@linux.com
---
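For context, the main existing consumer of zone_reclaim() is the allocator fast
path; a simplified, paraphrased sketch of that call site in
get_page_from_freelist() (mm/page_alloc.c, details vary by tree, not part of
this patch) looks roughly like:

	/*
	 * Sketch of the existing zone_reclaim() call site. Once zone_reclaim()
	 * is built unconditionally, a !NUMA caller such as the unmapped page
	 * cache control path can reuse the same entry point.
	 */
	if (!zone_watermark_ok(zone, order, mark, classzone_idx, alloc_flags)) {
		int ret;

		if (zone_reclaim_mode == 0)	/* reclaim disabled via sysctl */
			goto this_zone_full;

		ret = zone_reclaim(zone, gfp_mask, order);
		switch (ret) {
		case ZONE_RECLAIM_NOSCAN:	/* did not scan */
			goto try_next_zone;
		case ZONE_RECLAIM_FULL:		/* scanned but unreclaimable */
			goto this_zone_full;
		default:
			/* did we reclaim enough? */
			if (!zone_watermark_ok(zone, order, mark,
					       classzone_idx, alloc_flags))
				goto this_zone_full;
		}
	}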
 include/linux/mmzone.h |    4 ++--
 include/linux/swap.h   |    4 ++--
 kernel/sysctl.c        |   16 ++++++++--------
 mm/page_alloc.c        |    6 +++---
 mm/vmscan.c            |    2 --
 5 files changed, 15 insertions(+), 17 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 628f07b..59cbed0 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -306,12 +306,12 @@ struct zone {
 */
unsigned long   lowmem_reserve[MAX_NR_ZONES];
 
-#ifdef CONFIG_NUMA
-   int node;
/*
 * zone reclaim becomes active if more unmapped pages exist.
 */
unsigned long   min_unmapped_pages;
+#ifdef CONFIG_NUMA
+   int node;
unsigned long   min_slab_pages;
 #endif
struct per_cpu_pageset __percpu *pageset;
diff --git a/include/linux/swap.h b/include/linux/swap.h
index ed6ebe6..ce8f686 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -264,11 +264,11 @@ extern int vm_swappiness;
 extern int remove_mapping(struct address_space *mapping, struct page *page);
 extern long vm_total_pages;
 
+extern int sysctl_min_unmapped_ratio;
+extern int zone_reclaim(struct zone *, gfp_t, unsigned int);
 #ifdef CONFIG_NUMA
 extern int zone_reclaim_mode;
-extern int sysctl_min_unmapped_ratio;
 extern int sysctl_min_slab_ratio;
-extern int zone_reclaim(struct zone *, gfp_t, unsigned int);
 #else
 #define zone_reclaim_mode 0
 static inline int zone_reclaim(struct zone *z, gfp_t mask, unsigned int order)
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 927fc5a..e3a8ce4 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1214,14 +1214,6 @@ static struct ctl_table vm_table[] = {
.proc_handler   = proc_dointvec_unsigned,
},
 #endif
-#ifdef CONFIG_NUMA
-   {
-   .procname   = "zone_reclaim_mode",
-   .data   = &zone_reclaim_mode,
-   .maxlen = sizeof(zone_reclaim_mode),
-   .mode   = 0644,
-   .proc_handler   = proc_dointvec_unsigned,
-   },
{
.procname   = "min_unmapped_ratio",
.data   = &sysctl_min_unmapped_ratio,
@@ -1231,6 +1223,14 @@ static struct ctl_table vm_table[] = {
.extra1 = &zero,
.extra2 = &one_hundred,
},
+#ifdef CONFIG_NUMA
+   {
+   .procname   = "zone_reclaim_mode",
+   .data   = &zone_reclaim_mode,
+   .maxlen = sizeof(zone_reclaim_mode),
+   .mode   = 0644,
+   .proc_handler   = proc_dointvec_unsigned,
+   },
{
.procname   = "min_slab_ratio",
.data   = &sysctl_min_slab_ratio,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6e1b52a..1d32865 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4249,10 +4249,10 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat,
 
zone->spanned_pages = size;
zone->present_pages = realsize;
-#ifdef CONFIG_NUMA
-   zone->node = nid;
zone->min_unmapped_pages = (realsize*sysctl_min_unmapped_ratio)
/ 100;
+#ifdef CONFIG_NUMA
+   zone->node = nid;
zone->min_slab_pages = (realsize * sysctl_min_slab_ratio) / 100;
 #endif
zone->name = zone_names[j];
@@ -5157,7 +5157,6 @@ int min_free_kbytes_sysctl_handler(ctl_table *table, int write,
return 0;
 }
 
-#ifdef CONFIG_NUMA
 int sysctl_min_unmapped_ratio_sysctl_handler(ctl_table *table, int write,
void __user *buffer, size_t *length, loff_t *ppos)
 {
@@ -5174,6 +5173,7 @@ int sysctl_min_unmapped_ratio_sysctl_handler(ctl_table *table, int write,
return 0;
 }
 
+#ifdef CONFIG_NUMA
 int sysctl_min_slab_ratio_sysctl_handler(ctl_table *table, int write,
void __user *buffer, size_t *length, loff_t *ppos)
 {
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 060e4c1..4923160 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2874,7 +2874,6 @@ static int __init kswapd_init(void)
 
 module_init(kswapd_init)
 
-#ifdef CONFIG_NUMA
 /*
  * Zone reclaim mode
  *
@@ -3084,7 +3083,6 @@ int zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
 
return ret;
 }
-#endif
 
 /*
  * page_evictable - test whether a page is evictable



Re: [PATCH 1/3] Move zone_reclaim() outside of CONFIG_NUMA (v4)

2011-01-27 Thread Balbir Singh
* Christoph Lameter c...@linux.com [2011-01-26 10:56:56]:

 
> Reviewed-by: Christoph Lameter c...@linux.com


Thanks for the review! 

-- 
Three Cheers,
Balbir


Re: [PATCH 1/3] Move zone_reclaim() outside of CONFIG_NUMA (v4)

2011-01-26 Thread Christoph Lameter

Reviewed-by: Christoph Lameter c...@linux.com



[PATCH 1/3] Move zone_reclaim() outside of CONFIG_NUMA (v4)

2011-01-24 Thread Balbir Singh
This patch moves zone_reclaim and associated helpers
outside CONFIG_NUMA. This infrastructure is reused
in the patches for page cache control that follow.

Signed-off-by: Balbir Singh bal...@linux.vnet.ibm.com
---
 include/linux/mmzone.h |    4 ++--
 include/linux/swap.h   |    4 ++--
 kernel/sysctl.c        |   18 +++++++++---------
 mm/page_alloc.c        |    6 +++---
 mm/vmscan.c            |    2 --
 5 files changed, 16 insertions(+), 18 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 02ecb01..2485acc 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -303,12 +303,12 @@ struct zone {
 */
unsigned long   lowmem_reserve[MAX_NR_ZONES];
 
-#ifdef CONFIG_NUMA
-   int node;
/*
 * zone reclaim becomes active if more unmapped pages exist.
 */
unsigned long   min_unmapped_pages;
+#ifdef CONFIG_NUMA
+   int node;
unsigned long   min_slab_pages;
 #endif
struct per_cpu_pageset __percpu *pageset;
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 5e3355a..7b75626 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -255,11 +255,11 @@ extern int vm_swappiness;
 extern int remove_mapping(struct address_space *mapping, struct page *page);
 extern long vm_total_pages;
 
+extern int sysctl_min_unmapped_ratio;
+extern int zone_reclaim(struct zone *, gfp_t, unsigned int);
 #ifdef CONFIG_NUMA
 extern int zone_reclaim_mode;
-extern int sysctl_min_unmapped_ratio;
 extern int sysctl_min_slab_ratio;
-extern int zone_reclaim(struct zone *, gfp_t, unsigned int);
 #else
 #define zone_reclaim_mode 0
 static inline int zone_reclaim(struct zone *z, gfp_t mask, unsigned int order)
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index bc86bb3..12e8f26 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1224,15 +1224,6 @@ static struct ctl_table vm_table[] = {
.extra1 = &zero,
},
 #endif
-#ifdef CONFIG_NUMA
-   {
-   .procname   = "zone_reclaim_mode",
-   .data   = &zone_reclaim_mode,
-   .maxlen = sizeof(zone_reclaim_mode),
-   .mode   = 0644,
-   .proc_handler   = proc_dointvec,
-   .extra1 = &zero,
-   },
{
.procname   = "min_unmapped_ratio",
.data   = &sysctl_min_unmapped_ratio,
@@ -1242,6 +1233,15 @@ static struct ctl_table vm_table[] = {
.extra1 = &zero,
.extra2 = &one_hundred,
},
+#ifdef CONFIG_NUMA
+   {
+   .procname   = "zone_reclaim_mode",
+   .data   = &zone_reclaim_mode,
+   .maxlen = sizeof(zone_reclaim_mode),
+   .mode   = 0644,
+   .proc_handler   = proc_dointvec,
+   .extra1 = &zero,
+   },
{
.procname   = "min_slab_ratio",
.data   = &sysctl_min_slab_ratio,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index aede3a4..7b56473 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4167,10 +4167,10 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat,
 
zone->spanned_pages = size;
zone->present_pages = realsize;
-#ifdef CONFIG_NUMA
-   zone->node = nid;
zone->min_unmapped_pages = (realsize*sysctl_min_unmapped_ratio)
/ 100;
+#ifdef CONFIG_NUMA
+   zone->node = nid;
zone->min_slab_pages = (realsize * sysctl_min_slab_ratio) / 100;
 #endif
zone->name = zone_names[j];
@@ -5084,7 +5084,6 @@ int min_free_kbytes_sysctl_handler(ctl_table *table, int write,
return 0;
 }
 
-#ifdef CONFIG_NUMA
 int sysctl_min_unmapped_ratio_sysctl_handler(ctl_table *table, int write,
void __user *buffer, size_t *length, loff_t *ppos)
 {
@@ -5101,6 +5100,7 @@ int sysctl_min_unmapped_ratio_sysctl_handler(ctl_table *table, int write,
return 0;
 }
 
+#ifdef CONFIG_NUMA
 int sysctl_min_slab_ratio_sysctl_handler(ctl_table *table, int write,
void __user *buffer, size_t *length, loff_t *ppos)
 {
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 47a5096..5899f2f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2868,7 +2868,6 @@ static int __init kswapd_init(void)
 
 module_init(kswapd_init)
 
-#ifdef CONFIG_NUMA
 /*
  * Zone reclaim mode
  *
@@ -3078,7 +3077,6 @@ int zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
 
return ret;
 }
-#endif
 
 /*
  * page_evictable - test whether a page is evictable



[REPOST] [PATCH 1/3] Move zone_reclaim() outside of CONFIG_NUMA (v3)

2011-01-20 Thread Balbir Singh
This patch moves zone_reclaim and associated helpers
outside CONFIG_NUMA. This infrastructure is reused
in the patches for page cache control that follow.

Signed-off-by: Balbir Singh bal...@linux.vnet.ibm.com
---
 include/linux/mmzone.h |    4 ++--
 include/linux/swap.h   |    4 ++--
 kernel/sysctl.c        |   18 +++++++++---------
 mm/vmscan.c            |    2 --
 4 files changed, 13 insertions(+), 15 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 4890662..aeede91 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -302,12 +302,12 @@ struct zone {
 */
unsigned long   lowmem_reserve[MAX_NR_ZONES];
 
-#ifdef CONFIG_NUMA
-   int node;
/*
 * zone reclaim becomes active if more unmapped pages exist.
 */
unsigned long   min_unmapped_pages;
+#ifdef CONFIG_NUMA
+   int node;
unsigned long   min_slab_pages;
 #endif
struct per_cpu_pageset __percpu *pageset;
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 84375e4..ac5c06e 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -253,11 +253,11 @@ extern int vm_swappiness;
 extern int remove_mapping(struct address_space *mapping, struct page *page);
 extern long vm_total_pages;
 
+extern int sysctl_min_unmapped_ratio;
+extern int zone_reclaim(struct zone *, gfp_t, unsigned int);
 #ifdef CONFIG_NUMA
 extern int zone_reclaim_mode;
-extern int sysctl_min_unmapped_ratio;
 extern int sysctl_min_slab_ratio;
-extern int zone_reclaim(struct zone *, gfp_t, unsigned int);
 #else
 #define zone_reclaim_mode 0
 static inline int zone_reclaim(struct zone *z, gfp_t mask, unsigned int order)
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index a00fdef..e40040e 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1211,15 +1211,6 @@ static struct ctl_table vm_table[] = {
.extra1 = &zero,
},
 #endif
-#ifdef CONFIG_NUMA
-   {
-   .procname   = "zone_reclaim_mode",
-   .data   = &zone_reclaim_mode,
-   .maxlen = sizeof(zone_reclaim_mode),
-   .mode   = 0644,
-   .proc_handler   = proc_dointvec,
-   .extra1 = &zero,
-   },
{
.procname   = "min_unmapped_ratio",
.data   = &sysctl_min_unmapped_ratio,
@@ -1229,6 +1220,15 @@ static struct ctl_table vm_table[] = {
.extra1 = &zero,
.extra2 = &one_hundred,
},
+#ifdef CONFIG_NUMA
+   {
+   .procname   = "zone_reclaim_mode",
+   .data   = &zone_reclaim_mode,
+   .maxlen = sizeof(zone_reclaim_mode),
+   .mode   = 0644,
+   .proc_handler   = proc_dointvec,
+   .extra1 = &zero,
+   },
{
.procname   = "min_slab_ratio",
.data   = &sysctl_min_slab_ratio,
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 42a4859..e841cae 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2740,7 +2740,6 @@ static int __init kswapd_init(void)
 
 module_init(kswapd_init)
 
-#ifdef CONFIG_NUMA
 /*
  * Zone reclaim mode
  *
@@ -2950,7 +2949,6 @@ int zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
 
return ret;
 }
-#endif
 
 /*
  * page_evictable - test whether a page is evictable



Re: [REPOST] [PATCH 1/3] Move zone_reclaim() outside of CONFIG_NUMA (v3)

2011-01-20 Thread Christoph Lameter
On Thu, 20 Jan 2011, Balbir Singh wrote:

> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -253,11 +253,11 @@ extern int vm_swappiness;
>  extern int remove_mapping(struct address_space *mapping, struct page *page);
>  extern long vm_total_pages;
>
> +extern int sysctl_min_unmapped_ratio;
> +extern int zone_reclaim(struct zone *, gfp_t, unsigned int);
>  #ifdef CONFIG_NUMA
>  extern int zone_reclaim_mode;
> -extern int sysctl_min_unmapped_ratio;
>  extern int sysctl_min_slab_ratio;
> -extern int zone_reclaim(struct zone *, gfp_t, unsigned int);
>  #else
>  #define zone_reclaim_mode 0

So the end result of this patch is that zone reclaim is compiled
into vmscan.o even on !NUMA configurations but since zone_reclaim_mode ==
0 no one can ever call that code?



Re: [REPOST] [PATCH 1/3] Move zone_reclaim() outside of CONFIG_NUMA (v3)

2011-01-20 Thread Balbir Singh
* Christoph Lameter c...@linux.com [2011-01-20 08:49:27]:

> On Thu, 20 Jan 2011, Balbir Singh wrote:
>
> > --- a/include/linux/swap.h
> > +++ b/include/linux/swap.h
> > @@ -253,11 +253,11 @@ extern int vm_swappiness;
> >  extern int remove_mapping(struct address_space *mapping, struct page *page);
> >  extern long vm_total_pages;
> >
> > +extern int sysctl_min_unmapped_ratio;
> > +extern int zone_reclaim(struct zone *, gfp_t, unsigned int);
> >  #ifdef CONFIG_NUMA
> >  extern int zone_reclaim_mode;
> > -extern int sysctl_min_unmapped_ratio;
> >  extern int sysctl_min_slab_ratio;
> > -extern int zone_reclaim(struct zone *, gfp_t, unsigned int);
> >  #else
> >  #define zone_reclaim_mode 0
>
> So the end result of this patch is that zone reclaim is compiled
> into vmscan.o even on !NUMA configurations but since zone_reclaim_mode ==
> 0 no one can ever call that code?


The third patch fixes this by introducing a config option
(cut-copy-paste below). If someone were to bisect to this point, what
you say is correct.

+#if defined(CONFIG_UNMAPPED_PAGECACHE_CONTROL) || defined(CONFIG_NUMA)
 extern int sysctl_min_unmapped_ratio;
 extern int zone_reclaim(struct zone *, gfp_t, unsigned int);
-#ifdef CONFIG_NUMA
-extern int zone_reclaim_mode;
-extern int sysctl_min_slab_ratio;
 #else
-#define zone_reclaim_mode 0
 static inline int zone_reclaim(struct zone *z, gfp_t mask, unsigned int order)
 {
	return 0;
 }
 #endif
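
Applied on top of this patch, that hunk leaves the swap.h block reading roughly
as follows (reconstructed only from the quoted lines; the NUMA-only
declarations removed here are relocated elsewhere in the third patch and are
not shown):

#if defined(CONFIG_UNMAPPED_PAGECACHE_CONTROL) || defined(CONFIG_NUMA)
extern int sysctl_min_unmapped_ratio;
extern int zone_reclaim(struct zone *, gfp_t, unsigned int);
#else
/* neither NUMA nor unmapped page cache control: zone_reclaim() is a no-op */
static inline int zone_reclaim(struct zone *z, gfp_t mask, unsigned int order)
{
	return 0;
}
#endif

i.e. with CONFIG_UNMAPPED_PAGECACHE_CONTROL=y the real zone_reclaim() is
declared, and callable, even on !NUMA builds.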

Thanks for the review! 

-- 
Three Cheers,
Balbir


[PATCH 1/3] Move zone_reclaim() outside of CONFIG_NUMA (v3)

2010-12-23 Thread Balbir Singh
This patch moves zone_reclaim and associated helpers
outside CONFIG_NUMA. This infrastructure is reused
in the patches for page cache control that follow.

Signed-off-by: Balbir Singh bal...@linux.vnet.ibm.com
---
 include/linux/mmzone.h |    4 ++--
 include/linux/swap.h   |    4 ++--
 kernel/sysctl.c        |   18 +++++++++---------
 mm/vmscan.c            |    2 --
 4 files changed, 13 insertions(+), 15 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 4890662..aeede91 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -302,12 +302,12 @@ struct zone {
 */
unsigned long   lowmem_reserve[MAX_NR_ZONES];
 
-#ifdef CONFIG_NUMA
-   int node;
/*
 * zone reclaim becomes active if more unmapped pages exist.
 */
unsigned long   min_unmapped_pages;
+#ifdef CONFIG_NUMA
+   int node;
unsigned long   min_slab_pages;
 #endif
struct per_cpu_pageset __percpu *pageset;
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 84375e4..ac5c06e 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -253,11 +253,11 @@ extern int vm_swappiness;
 extern int remove_mapping(struct address_space *mapping, struct page *page);
 extern long vm_total_pages;
 
+extern int sysctl_min_unmapped_ratio;
+extern int zone_reclaim(struct zone *, gfp_t, unsigned int);
 #ifdef CONFIG_NUMA
 extern int zone_reclaim_mode;
-extern int sysctl_min_unmapped_ratio;
 extern int sysctl_min_slab_ratio;
-extern int zone_reclaim(struct zone *, gfp_t, unsigned int);
 #else
 #define zone_reclaim_mode 0
 static inline int zone_reclaim(struct zone *z, gfp_t mask, unsigned int order)
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index a00fdef..e40040e 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1211,15 +1211,6 @@ static struct ctl_table vm_table[] = {
.extra1 = &zero,
},
 #endif
-#ifdef CONFIG_NUMA
-   {
-   .procname   = "zone_reclaim_mode",
-   .data   = &zone_reclaim_mode,
-   .maxlen = sizeof(zone_reclaim_mode),
-   .mode   = 0644,
-   .proc_handler   = proc_dointvec,
-   .extra1 = &zero,
-   },
{
.procname   = "min_unmapped_ratio",
.data   = &sysctl_min_unmapped_ratio,
@@ -1229,6 +1220,15 @@ static struct ctl_table vm_table[] = {
.extra1 = &zero,
.extra2 = &one_hundred,
},
+#ifdef CONFIG_NUMA
+   {
+   .procname   = "zone_reclaim_mode",
+   .data   = &zone_reclaim_mode,
+   .maxlen = sizeof(zone_reclaim_mode),
+   .mode   = 0644,
+   .proc_handler   = proc_dointvec,
+   .extra1 = &zero,
+   },
{
.procname   = "min_slab_ratio",
.data   = &sysctl_min_slab_ratio,
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 42a4859..e841cae 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2740,7 +2740,6 @@ static int __init kswapd_init(void)
 
 module_init(kswapd_init)
 
-#ifdef CONFIG_NUMA
 /*
  * Zone reclaim mode
  *
@@ -2950,7 +2949,6 @@ int zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
 
return ret;
 }
-#endif
 
 /*
  * page_evictable - test whether a page is evictable



[PATCH 1/3] Move zone_reclaim() outside of CONFIG_NUMA (v2)

2010-12-10 Thread Balbir Singh
Changelog v2
Moved sysctl for min_unmapped_ratio as well

This patch moves zone_reclaim and associated helpers
outside CONFIG_NUMA. This infrastructure is reused
in the patches for page cache control that follow.

Signed-off-by: Balbir Singh bal...@linux.vnet.ibm.com
---
 include/linux/mmzone.h |    4 ++--
 include/linux/swap.h   |    4 ++--
 kernel/sysctl.c        |   18 +++++++++---------
 mm/vmscan.c            |    2 --
 4 files changed, 13 insertions(+), 15 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 4890662..aeede91 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -302,12 +302,12 @@ struct zone {
 */
unsigned long   lowmem_reserve[MAX_NR_ZONES];
 
-#ifdef CONFIG_NUMA
-   int node;
/*
 * zone reclaim becomes active if more unmapped pages exist.
 */
unsigned long   min_unmapped_pages;
+#ifdef CONFIG_NUMA
+   int node;
unsigned long   min_slab_pages;
 #endif
struct per_cpu_pageset __percpu *pageset;
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 84375e4..ac5c06e 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -253,11 +253,11 @@ extern int vm_swappiness;
 extern int remove_mapping(struct address_space *mapping, struct page *page);
 extern long vm_total_pages;
 
+extern int sysctl_min_unmapped_ratio;
+extern int zone_reclaim(struct zone *, gfp_t, unsigned int);
 #ifdef CONFIG_NUMA
 extern int zone_reclaim_mode;
-extern int sysctl_min_unmapped_ratio;
 extern int sysctl_min_slab_ratio;
-extern int zone_reclaim(struct zone *, gfp_t, unsigned int);
 #else
 #define zone_reclaim_mode 0
 static inline int zone_reclaim(struct zone *z, gfp_t mask, unsigned int order)
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index a00fdef..e40040e 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1211,15 +1211,6 @@ static struct ctl_table vm_table[] = {
.extra1 = &zero,
},
 #endif
-#ifdef CONFIG_NUMA
-   {
-   .procname   = "zone_reclaim_mode",
-   .data   = &zone_reclaim_mode,
-   .maxlen = sizeof(zone_reclaim_mode),
-   .mode   = 0644,
-   .proc_handler   = proc_dointvec,
-   .extra1 = &zero,
-   },
{
.procname   = "min_unmapped_ratio",
.data   = &sysctl_min_unmapped_ratio,
@@ -1229,6 +1220,15 @@ static struct ctl_table vm_table[] = {
.extra1 = &zero,
.extra2 = &one_hundred,
},
+#ifdef CONFIG_NUMA
+   {
+   .procname   = "zone_reclaim_mode",
+   .data   = &zone_reclaim_mode,
+   .maxlen = sizeof(zone_reclaim_mode),
+   .mode   = 0644,
+   .proc_handler   = proc_dointvec,
+   .extra1 = &zero,
+   },
{
.procname   = "min_slab_ratio",
.data   = &sysctl_min_slab_ratio,
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 42a4859..e841cae 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2740,7 +2740,6 @@ static int __init kswapd_init(void)
 
 module_init(kswapd_init)
 
-#ifdef CONFIG_NUMA
 /*
  * Zone reclaim mode
  *
@@ -2950,7 +2949,6 @@ int zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
 
return ret;
 }
-#endif
 
 /*
  * page_evictable - test whether a page is evictable



[PATCH 1/3] Move zone_reclaim() outside of CONFIG_NUMA

2010-11-30 Thread Balbir Singh
This patch moves zone_reclaim and associated helpers
outside CONFIG_NUMA. This infrastructure is reused
in the patches for page cache control that follow.

Signed-off-by: Balbir Singh bal...@linux.vnet.ibm.com
---
 include/linux/mmzone.h |    4 ++--
 mm/vmscan.c            |    2 --
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 4890662..aeede91 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -302,12 +302,12 @@ struct zone {
 */
unsigned long   lowmem_reserve[MAX_NR_ZONES];
 
-#ifdef CONFIG_NUMA
-   int node;
/*
 * zone reclaim becomes active if more unmapped pages exist.
 */
unsigned long   min_unmapped_pages;
+#ifdef CONFIG_NUMA
+   int node;
unsigned long   min_slab_pages;
 #endif
struct per_cpu_pageset __percpu *pageset;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 8cc90d5..325443a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2644,7 +2644,6 @@ static int __init kswapd_init(void)
 
 module_init(kswapd_init)
 
-#ifdef CONFIG_NUMA
 /*
  * Zone reclaim mode
  *
@@ -2854,7 +2853,6 @@ int zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
 
return ret;
 }
-#endif
 
 /*
  * page_evictable - test whether a page is evictable



Re: [PATCH 1/3] Move zone_reclaim() outside of CONFIG_NUMA

2010-11-30 Thread Christoph Lameter

Reviewed-by: Christoph Lameter c...@linux.com




Re: [PATCH 1/3] Move zone_reclaim() outside of CONFIG_NUMA

2010-11-30 Thread Andrew Morton
On Tue, 30 Nov 2010 15:45:12 +0530
Balbir Singh bal...@linux.vnet.ibm.com wrote:

> This patch moves zone_reclaim and associated helpers
> outside CONFIG_NUMA. This infrastructure is reused
> in the patches for page cache control that follow.
>

Thereby adding a nice dollop of bloat to everyone's kernel.  I don't
think that is justifiable given that the audience for this feature is
about eight people :(

How's about CONFIG_UNMAPPED_PAGECACHE_CONTROL?

Also this patch instantiates sysctl_min_unmapped_ratio and
sysctl_min_slab_ratio on non-NUMA builds but fails to make those
tunables actually tunable in procfs.  Changes to sysctl.c are
needed.

> Reviewed-by: Christoph Lameter c...@linux.com

More careful reviewers, please.
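
For reference, the sysctl.c change being asked for here is what the later
(v2 onwards) postings in this thread end up doing: the min_unmapped_ratio
entry moves out of the #ifdef CONFIG_NUMA section of vm_table[], so the
tunable shows up in /proc/sys/vm on !NUMA builds too. Roughly (field values
follow the mainline table; only the placement is the point):

	{
		.procname	= "min_unmapped_ratio",
		.data		= &sysctl_min_unmapped_ratio,
		.maxlen		= sizeof(sysctl_min_unmapped_ratio),
		.mode		= 0644,
		.proc_handler	= sysctl_min_unmapped_ratio_sysctl_handler,
		.extra1		= &zero,
		.extra2		= &one_hundred,
	},
#ifdef CONFIG_NUMA
	{
		.procname	= "zone_reclaim_mode",
		.data		= &zone_reclaim_mode,
		.maxlen		= sizeof(zone_reclaim_mode),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
		.extra1		= &zero,
	},
	/* min_slab_ratio and its handler stay NUMA-only */
#endif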


Re: [PATCH 1/3] Move zone_reclaim() outside of CONFIG_NUMA

2010-11-30 Thread Balbir Singh
* Balbir Singh bal...@linux.vnet.ibm.com [2010-12-01 10:04:08]:

> * Andrew Morton a...@linux-foundation.org [2010-11-30 14:23:38]:
>
> > On Tue, 30 Nov 2010 15:45:12 +0530
> > Balbir Singh bal...@linux.vnet.ibm.com wrote:
> >
> > > This patch moves zone_reclaim and associated helpers
> > > outside CONFIG_NUMA. This infrastructure is reused
> > > in the patches for page cache control that follow.
> > >
> >
> > Thereby adding a nice dollop of bloat to everyone's kernel.  I don't
> > think that is justifiable given that the audience for this feature is
> > about eight people :(
> >
> > How's about CONFIG_UNMAPPED_PAGECACHE_CONTROL?
>
> OK, I'll add the config, but this code is enabled under CONFIG_NUMA
> today, so the bloat I agree is more for non NUMA users. I'll make
> CONFIG_UNMAPPED_PAGECACHE_CONTROL default if CONFIG_NUMA is set.
>
> > Also this patch instantiates sysctl_min_unmapped_ratio and
> > sysctl_min_slab_ratio on non-NUMA builds but fails to make those
> > tunables actually tunable in procfs.  Changes to sysctl.c are
> > needed.
>
> Oh! yeah.. I missed it while refactoring, my fault.
>
> > > Reviewed-by: Christoph Lameter c...@linux.com
>

My local MTA failed to deliver the message, trying again.

-- 
Three Cheers,
Balbir