Re: [PATCH 06/10] mm: vmscan: Have kswapd writeback pages based on dirty pages encountered, not priority

2013-04-18 Thread Johannes Weiner
On Thu, Apr 11, 2013 at 08:57:54PM +0100, Mel Gorman wrote:
> Currently kswapd queues dirty pages for writeback if scanning at an elevated
> priority, but the priority kswapd scans at is not related to the number
> of unqueued dirty pages encountered.  Since commit "mm: vmscan: Flatten kswapd
> priority loop", the priority is related to the size of the LRU and the
> zone watermark, which is no indication as to whether kswapd should write
> pages or not.
> 
> This patch tracks if an excessive number of unqueued dirty pages are being
> encountered at the end of the LRU.  If so, it indicates that dirty pages
> are being recycled before flusher threads can clean them and flags the
> zone so that kswapd will start writing pages until the zone is balanced.
> 
> Signed-off-by: Mel Gorman 

Acked-by: Johannes Weiner 
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 06/10] mm: vmscan: Have kswapd writeback pages based on dirty pages encountered, not priority

2013-04-11 Thread Rik van Riel

On 04/09/2013 07:07 AM, Mel Gorman wrote:

Currently kswapd queues dirty pages for writeback if scanning at an elevated
priority, but the priority kswapd scans at is not related to the number
of unqueued dirty pages encountered.  Since commit "mm: vmscan: Flatten kswapd
priority loop", the priority is related to the size of the LRU and the
zone watermark, which is no indication as to whether kswapd should write
pages or not.

This patch tracks if an excessive number of unqueued dirty pages are being
encountered at the end of the LRU.  If so, it indicates that dirty pages
are being recycled before flusher threads can clean them and flags the
zone so that kswapd will start writing pages until the zone is balanced.

Signed-off-by: Mel Gorman 


I like your approach of essentially not writing out from
kswapd if we manage to reclaim well at DEF_PRIORITY, and
doing writeout more and more aggressively if we have to
reduce priority.

Reviewed-by: Rik van Riel 

--
All rights reversed


[PATCH 06/10] mm: vmscan: Have kswapd writeback pages based on dirty pages encountered, not priority

2013-04-11 Thread Mel Gorman
Currently kswapd queues dirty pages for writeback if scanning at an elevated
priority, but the priority kswapd scans at is not related to the number
of unqueued dirty pages encountered.  Since commit "mm: vmscan: Flatten kswapd
priority loop", the priority is related to the size of the LRU and the
zone watermark, which is no indication as to whether kswapd should write
pages or not.

This patch tracks if an excessive number of unqueued dirty pages are being
encountered at the end of the LRU.  If so, it indicates that dirty pages
are being recycled before flusher threads can clean them and flags the
zone so that kswapd will start writing pages until the zone is balanced.

Signed-off-by: Mel Gorman 
---
 include/linux/mmzone.h |  9 +++++++++
 mm/vmscan.c            | 31 +++++++++++++++++++++++++------
 2 files changed, 34 insertions(+), 6 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index c74092e..ecf0c7d 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -495,6 +495,10 @@ typedef enum {
ZONE_CONGESTED, /* zone has many dirty pages backed by
 * a congested BDI
 */
+   ZONE_TAIL_LRU_DIRTY,/* reclaim scanning has recently found
+* many dirty file pages at the tail
+* of the LRU.
+*/
 } zone_flags_t;
 
 static inline void zone_set_flag(struct zone *zone, zone_flags_t flag)
@@ -517,6 +521,11 @@ static inline int zone_is_reclaim_congested(const struct zone *zone)
	return test_bit(ZONE_CONGESTED, &zone->flags);
 }
 
+static inline int zone_is_reclaim_dirty(const struct zone *zone)
+{
+   return test_bit(ZONE_TAIL_LRU_DIRTY, &zone->flags);
+}
+
 static inline int zone_is_reclaim_locked(const struct zone *zone)
 {
	return test_bit(ZONE_RECLAIM_LOCKED, &zone->flags);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index bc4c2a7..22e8ca9 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -675,13 +675,14 @@ static unsigned long shrink_page_list(struct list_head *page_list,
  struct zone *zone,
  struct scan_control *sc,
  enum ttu_flags ttu_flags,
- unsigned long *ret_nr_dirty,
+ unsigned long *ret_nr_unqueued_dirty,
  unsigned long *ret_nr_writeback,
  bool force_reclaim)
 {
LIST_HEAD(ret_pages);
LIST_HEAD(free_pages);
int pgactivate = 0;
+   unsigned long nr_unqueued_dirty = 0;
unsigned long nr_dirty = 0;
unsigned long nr_congested = 0;
unsigned long nr_reclaimed = 0;
@@ -807,14 +808,17 @@ static unsigned long shrink_page_list(struct list_head *page_list,
if (PageDirty(page)) {
nr_dirty++;
 
+   if (!PageWriteback(page))
+   nr_unqueued_dirty++;
+
/*
 * Only kswapd can writeback filesystem pages to
-* avoid risk of stack overflow but do not writeback
-* unless under significant pressure.
+* avoid risk of stack overflow but only writeback
+* if many dirty pages have been encountered.
 */
if (page_is_file_cache(page) &&
(!current_is_kswapd() ||
-sc->priority >= DEF_PRIORITY - 2)) {
+!zone_is_reclaim_dirty(zone))) {
/*
 * Immediately reclaim when written back.
 * Similar in principal to deactivate_page()
@@ -959,7 +963,7 @@ keep:
	list_splice(&ret_pages, page_list);
count_vm_events(PGACTIVATE, pgactivate);
mem_cgroup_uncharge_end();
-   *ret_nr_dirty += nr_dirty;
+   *ret_nr_unqueued_dirty += nr_unqueued_dirty;
*ret_nr_writeback += nr_writeback;
return nr_reclaimed;
 }
@@ -1372,6 +1376,15 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
(nr_taken >> (DEF_PRIORITY - sc->priority)))
wait_iff_congested(zone, BLK_RW_ASYNC, HZ/10);
 
+   /*
+* Similarly, if many dirty pages are encountered that are not
+* currently being written then flag that kswapd should start
+* writing back pages.
+*/
+   if (global_reclaim(sc) && nr_dirty &&
+   nr_dirty >= (nr_taken >> (DEF_PRIORITY - sc->priority)))
+   zone_set_flag(zone, ZONE_TAIL_LRU_DIRTY);
+

[PATCH 06/10] mm: vmscan: Have kswapd writeback pages based on dirty pages encountered, not priority

2013-04-09 Thread Mel Gorman
Currently kswapd queues dirty pages for writeback if scanning at an elevated
priority, but the priority kswapd scans at is not related to the number
of unqueued dirty pages encountered.  Since commit "mm: vmscan: Flatten kswapd
priority loop", the priority is related to the size of the LRU and the
zone watermark, which is no indication as to whether kswapd should write
pages or not.

This patch tracks if an excessive number of unqueued dirty pages are being
encountered at the end of the LRU.  If so, it indicates that dirty pages
are being recycled before flusher threads can clean them and flags the
zone so that kswapd will start writing pages until the zone is balanced.

Signed-off-by: Mel Gorman 
---
 include/linux/mmzone.h |  9 +++++++++
 mm/vmscan.c            | 31 +++++++++++++++++++++++++------
 2 files changed, 34 insertions(+), 6 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index c74092e..ecf0c7d 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -495,6 +495,10 @@ typedef enum {
ZONE_CONGESTED, /* zone has many dirty pages backed by
 * a congested BDI
 */
+   ZONE_TAIL_LRU_DIRTY,/* reclaim scanning has recently found
+* many dirty file pages at the tail
+* of the LRU.
+*/
 } zone_flags_t;
 
 static inline void zone_set_flag(struct zone *zone, zone_flags_t flag)
@@ -517,6 +521,11 @@ static inline int zone_is_reclaim_congested(const struct zone *zone)
	return test_bit(ZONE_CONGESTED, &zone->flags);
 }
 
+static inline int zone_is_reclaim_dirty(const struct zone *zone)
+{
+   return test_bit(ZONE_TAIL_LRU_DIRTY, &zone->flags);
+}
+
 static inline int zone_is_reclaim_locked(const struct zone *zone)
 {
	return test_bit(ZONE_RECLAIM_LOCKED, &zone->flags);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 3d8b80a..53d5006 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -675,13 +675,14 @@ static unsigned long shrink_page_list(struct list_head *page_list,
  struct zone *zone,
  struct scan_control *sc,
  enum ttu_flags ttu_flags,
- unsigned long *ret_nr_dirty,
+ unsigned long *ret_nr_unqueued_dirty,
  unsigned long *ret_nr_writeback,
  bool force_reclaim)
 {
LIST_HEAD(ret_pages);
LIST_HEAD(free_pages);
int pgactivate = 0;
+   unsigned long nr_unqueued_dirty = 0;
unsigned long nr_dirty = 0;
unsigned long nr_congested = 0;
unsigned long nr_reclaimed = 0;
@@ -807,14 +808,17 @@ static unsigned long shrink_page_list(struct list_head *page_list,
if (PageDirty(page)) {
nr_dirty++;
 
+   if (!PageWriteback(page))
+   nr_unqueued_dirty++;
+
/*
 * Only kswapd can writeback filesystem pages to
-* avoid risk of stack overflow but do not writeback
-* unless under significant pressure.
+* avoid risk of stack overflow but only writeback
+* if many dirty pages have been encountered.
 */
if (page_is_file_cache(page) &&
(!current_is_kswapd() ||
-sc->priority >= DEF_PRIORITY - 2)) {
+!zone_is_reclaim_dirty(zone))) {
/*
 * Immediately reclaim when written back.
 * Similar in principal to deactivate_page()
@@ -959,7 +963,7 @@ keep:
	list_splice(&ret_pages, page_list);
count_vm_events(PGACTIVATE, pgactivate);
mem_cgroup_uncharge_end();
-   *ret_nr_dirty += nr_dirty;
+   *ret_nr_unqueued_dirty += nr_unqueued_dirty;
*ret_nr_writeback += nr_writeback;
return nr_reclaimed;
 }
@@ -1372,6 +1376,15 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
(nr_taken >> (DEF_PRIORITY - sc->priority)))
wait_iff_congested(zone, BLK_RW_ASYNC, HZ/10);
 
+   /*
+* Similarly, if many dirty pages are encountered that are not
+* currently being written then flag that kswapd should start
+* writing back pages.
+*/
+   if (global_reclaim(sc) && nr_dirty &&
+   nr_dirty >= (nr_taken >> (DEF_PRIORITY - sc->priority)))
+   zone_set_flag(zone, ZONE_TAIL_LRU_DIRTY);
+

Re: [PATCH 06/10] mm: vmscan: Have kswapd writeback pages based on dirty pages encountered, not priority

2013-03-21 Thread Rik van Riel

On 03/21/2013 02:15 PM, Mel Gorman wrote:

On Thu, Mar 21, 2013 at 01:53:41PM -0400, Rik van Riel wrote:

On 03/17/2013 11:11 AM, Mel Gorman wrote:

On Sun, Mar 17, 2013 at 07:42:39AM -0700, Andi Kleen wrote:

Mel Gorman  writes:


@@ -495,6 +495,9 @@ typedef enum {
ZONE_CONGESTED, /* zone has many dirty pages backed by
 * a congested BDI
 */
+   ZONE_DIRTY, /* reclaim scanning has recently found
+* many dirty file pages
+*/


Needs a better name. ZONE_DIRTY_CONGESTED ?



That might be confusing. The underlying BDI is not necessarily
congested. I accept your point though and will try thinking of a better
name.


ZONE_LOTS_DIRTY ?



I had changed it to

 ZONE_TAIL_LRU_DIRTY,/* reclaim scanning has recently found
  * many dirty file pages at the tail
  * of the LRU.
  */

Is that reasonable?


Works for me.


Re: [PATCH 06/10] mm: vmscan: Have kswapd writeback pages based on dirty pages encountered, not priority

2013-03-21 Thread Mel Gorman
On Thu, Mar 21, 2013 at 01:53:41PM -0400, Rik van Riel wrote:
> On 03/17/2013 11:11 AM, Mel Gorman wrote:
> >On Sun, Mar 17, 2013 at 07:42:39AM -0700, Andi Kleen wrote:
> >>Mel Gorman  writes:
> >>
> >>>@@ -495,6 +495,9 @@ typedef enum {
> >>>   ZONE_CONGESTED, /* zone has many dirty pages backed by
> >>>* a congested BDI
> >>>*/
> >>>+  ZONE_DIRTY, /* reclaim scanning has recently found
> >>>+   * many dirty file pages
> >>>+   */
> >>
> >>Needs a better name. ZONE_DIRTY_CONGESTED ?
> >>
> >
> >That might be confusing. The underlying BDI is not necessarily
> >congested. I accept your point though and will try thinking of a better
> >name.
> 
> ZONE_LOTS_DIRTY ?
> 

I had changed it to

ZONE_TAIL_LRU_DIRTY,/* reclaim scanning has recently found
 * many dirty file pages at the tail
 * of the LRU.
 */

Is that reasonable?

-- 
Mel Gorman
SUSE Labs


Re: [PATCH 06/10] mm: vmscan: Have kswapd writeback pages based on dirty pages encountered, not priority

2013-03-21 Thread Rik van Riel

On 03/17/2013 11:11 AM, Mel Gorman wrote:

On Sun, Mar 17, 2013 at 07:42:39AM -0700, Andi Kleen wrote:

Mel Gorman  writes:


@@ -495,6 +495,9 @@ typedef enum {
ZONE_CONGESTED, /* zone has many dirty pages backed by
 * a congested BDI
 */
+   ZONE_DIRTY, /* reclaim scanning has recently found
+* many dirty file pages
+*/


Needs a better name. ZONE_DIRTY_CONGESTED ?



That might be confusing. The underlying BDI is not necessarily
congested. I accept your point though and will try thinking of a better
name.


ZONE_LOTS_DIRTY ?


+* currently being written then flag that kswapd should start
+* writing back pages.
+*/
+   if (global_reclaim(sc) && nr_dirty &&
+   nr_dirty >= (nr_taken >> (DEF_PRIORITY - sc->priority)))
+   zone_set_flag(zone, ZONE_DIRTY);
+
trace_mm_vmscan_lru_shrink_inactive(zone->zone_pgdat->node_id,


I suppose you want to trace the dirty case here too.



I guess it wouldn't hurt to have a new tracepoint for when the flag gets
set. A vmstat might be helpful as well.



Re: [PATCH 06/10] mm: vmscan: Have kswapd writeback pages based on dirty pages encountered, not priority

2013-03-19 Thread Mel Gorman
On Mon, Mar 18, 2013 at 07:08:50PM +0800, Wanpeng Li wrote:
> >@@ -2735,8 +2748,12 @@ static unsigned long balance_pgdat(pg_data_t *pgdat, int order,
> > end_zone = i;
> > break;
> > } else {
> >-/* If balanced, clear the congested flag */
> >+/*
> >+ * If balanced, clear the dirty and congested
> >+ * flags
> >+ */
> > zone_clear_flag(zone, ZONE_CONGESTED);
> >+zone_clear_flag(zone, ZONE_DIRTY);
> 
> Hi Mel,
> 
> There are two places in balance_pgdat that clear the ZONE_CONGESTED flag:
> one is while scanning zones whose free_pages <= high_wmark_pages(zone),
> the other is when a zone becomes balanced after reclaim. It seems you
> missed the latter.
> 

I did and it's fixed now. Thanks.

-- 
Mel Gorman
SUSE Labs


Re: [PATCH 06/10] mm: vmscan: Have kswapd writeback pages based on dirty pages encountered, not priority

2013-03-17 Thread Mel Gorman
On Sun, Mar 17, 2013 at 07:42:39AM -0700, Andi Kleen wrote:
> Mel Gorman  writes:
> 
> > @@ -495,6 +495,9 @@ typedef enum {
> > ZONE_CONGESTED, /* zone has many dirty pages backed by
> >  * a congested BDI
> >  */
> > +   ZONE_DIRTY, /* reclaim scanning has recently found
> > +* many dirty file pages
> > +*/
> 
> Needs a better name. ZONE_DIRTY_CONGESTED ? 
> 

That might be confusing. The underlying BDI is not necessarily
congested. I accept your point though and will try thinking of a better
name.

> > +* currently being written then flag that kswapd should start
> > +* writing back pages.
> > +*/
> > +   if (global_reclaim(sc) && nr_dirty &&
> > +   nr_dirty >= (nr_taken >> (DEF_PRIORITY - sc->priority)))
> > +   zone_set_flag(zone, ZONE_DIRTY);
> > +
> > trace_mm_vmscan_lru_shrink_inactive(zone->zone_pgdat->node_id,
> 
> I suppose you want to trace the dirty case here too.
> 

I guess it wouldn't hurt to have a new tracepoint for when the flag gets
set. A vmstat might be helpful as well.

-- 
Mel Gorman
SUSE Labs


Re: [PATCH 06/10] mm: vmscan: Have kswapd writeback pages based on dirty pages encountered, not priority

2013-03-17 Thread Andi Kleen
Mel Gorman  writes:

> @@ -495,6 +495,9 @@ typedef enum {
>   ZONE_CONGESTED, /* zone has many dirty pages backed by
>* a congested BDI
>*/
> + ZONE_DIRTY, /* reclaim scanning has recently found
> +  * many dirty file pages
> +  */

Needs a better name. ZONE_DIRTY_CONGESTED ? 

> +  * currently being written then flag that kswapd should start
> +  * writing back pages.
> +  */
> + if (global_reclaim(sc) && nr_dirty &&
> + nr_dirty >= (nr_taken >> (DEF_PRIORITY - sc->priority)))
> + zone_set_flag(zone, ZONE_DIRTY);
> +
>   trace_mm_vmscan_lru_shrink_inactive(zone->zone_pgdat->node_id,

I suppose you want to trace the dirty case here too.

-Andi
-- 
a...@linux.intel.com -- Speaking for myself only


[PATCH 06/10] mm: vmscan: Have kswapd writeback pages based on dirty pages encountered, not priority

2013-03-17 Thread Mel Gorman
Currently kswapd queues dirty pages for writeback if it is scanning at an
elevated priority, but the priority kswapd scans at is not related to the
number of unqueued dirty pages encountered.  Since commit "mm: vmscan:
Flatten kswapd priority loop", the priority is related to the size of the
LRU and the zone watermark, which gives no indication of whether kswapd
should write pages or not.

This patch tracks if an excessive number of unqueued dirty pages are being
encountered at the end of the LRU.  If so, it indicates that dirty pages
are being recycled before flusher threads can clean them and flags the
zone so that kswapd will start writing pages until the zone is balanced.

Signed-off-by: Mel Gorman 
---
 include/linux/mmzone.h |  8 
 mm/vmscan.c| 29 +++--
 2 files changed, 31 insertions(+), 6 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index ede2749..edd6b98 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -495,6 +495,9 @@ typedef enum {
ZONE_CONGESTED, /* zone has many dirty pages backed by
 * a congested BDI
 */
+   ZONE_DIRTY, /* reclaim scanning has recently found
+* many dirty file pages
+*/
 } zone_flags_t;
 
 static inline void zone_set_flag(struct zone *zone, zone_flags_t flag)
@@ -517,6 +520,11 @@ static inline int zone_is_reclaim_congested(const struct zone *zone)
	return test_bit(ZONE_CONGESTED, &zone->flags);
 }
 
+static inline int zone_is_reclaim_dirty(const struct zone *zone)
+{
+	return test_bit(ZONE_DIRTY, &zone->flags);
+}
+
 static inline int zone_is_reclaim_locked(const struct zone *zone)
 {
	return test_bit(ZONE_RECLAIM_LOCKED, &zone->flags);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index af3bb6f..493728b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -675,13 +675,14 @@ static unsigned long shrink_page_list(struct list_head *page_list,
  struct zone *zone,
  struct scan_control *sc,
  enum ttu_flags ttu_flags,
- unsigned long *ret_nr_dirty,
+ unsigned long *ret_nr_unqueued_dirty,
  unsigned long *ret_nr_writeback,
  bool force_reclaim)
 {
LIST_HEAD(ret_pages);
LIST_HEAD(free_pages);
int pgactivate = 0;
+   unsigned long nr_unqueued_dirty = 0;
unsigned long nr_dirty = 0;
unsigned long nr_congested = 0;
unsigned long nr_reclaimed = 0;
@@ -807,14 +808,17 @@ static unsigned long shrink_page_list(struct list_head *page_list,
if (PageDirty(page)) {
nr_dirty++;
 
+   if (!PageWriteback(page))
+   nr_unqueued_dirty++;
+
/*
 * Only kswapd can writeback filesystem pages to
-* avoid risk of stack overflow but do not writeback
-* unless under significant pressure.
+* avoid risk of stack overflow but only writeback
+* if many dirty pages have been encountered.
 */
if (page_is_file_cache(page) &&
(!current_is_kswapd() ||
-sc->priority >= DEF_PRIORITY - 2)) {
+!zone_is_reclaim_dirty(zone))) {
/*
 * Immediately reclaim when written back.
 * Similar in principal to deactivate_page()
@@ -959,7 +963,7 @@ keep:
	list_splice(&ret_pages, page_list);
count_vm_events(PGACTIVATE, pgactivate);
mem_cgroup_uncharge_end();
-   *ret_nr_dirty += nr_dirty;
+   *ret_nr_unqueued_dirty += nr_unqueued_dirty;
*ret_nr_writeback += nr_writeback;
return nr_reclaimed;
 }
@@ -1372,6 +1376,15 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
(nr_taken >> (DEF_PRIORITY - sc->priority)))
wait_iff_congested(zone, BLK_RW_ASYNC, HZ/10);
 
+   /*
+* Similarly, if many dirty pages are encountered that are not
+* currently being written then flag that kswapd should start
+* writing back pages.
+*/
+   if (global_reclaim(sc) && nr_dirty &&
+   nr_dirty >= (nr_taken >> (DEF_PRIORITY - sc->priority)))
+   zone_set_flag(zone, ZONE_DIRTY);
+
trace_mm_vmscan_lru_shrink_inactive(zone->zone_pgdat->node_id,
zone_idx(zone),
nr_scanned, nr_reclaimed,
