be able to limit the scope of
the lock further and still avoid the use of i_mutex.
--
Mel Gorman
SUSE Labs
--
To unsubscribe from this list: send the line unsubscribe stable in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
even though the thrash-detection stuff was not
backported. The patches could have been backported without it, but then
they would look much different from their mainline equivalents.
-mostly fields.
On Fri, Sep 26, 2014 at 11:53:14AM +0200, Jiri Slaby wrote:
On 08/28/2014, 08:34 PM, Mel Gorman wrote:
From: David Rientjes rient...@google.com
commit b104a35d32025ca740539db2808aa3385d0f30eb upstream.
The page allocator relies on __GFP_WAIT to determine if ALLOC_CPUSET
should
@vger.kernel.org.
thanks,
greg k-h
-- original commit in Linus's tree --
From abc40bd2eeb77eb7c2effcaf63154aad929a1d5f Mon Sep 17 00:00:00 2001
From: Mel Gorman mgor...@suse.de
Date: Thu, 2 Oct 2014 19:47:42 +0100
Subject: [PATCH] mm: numa: Do not mark PTEs
mm: __rmqueue_fallback() should respect pageblock type
Linus Torvalds (1):
mm: don't pointlessly use BUG_ON() for sanity check
Mel Gorman (30):
mm, x86: Account for TLB flushes only when debugging
x86/mm: Clean up inconsistencies when flushing TLB ranges
x86/mm: Eliminate redundant page
...@google.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/page_alloc.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
A. Shutemov kirill.shute...@linux.intel.com
Cc: Andrea Arcangeli aarca...@redhat.com
Cc: Mel Gorman m...@csn.ul.ie
Cc: Andrew Davidoff david...@qedmf.net
Cc: Wanpeng Li liw...@linux.vnet.ibm.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux
still completed
faster and intuitively it makes sense to take as few passes as possible
through the zonelists.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Johannes Weiner han...@cmpxchg.org
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux
to be updated. If a new CPU is onlined,
refresh_zone_stat_thresholds() will set the thresholds correctly.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Johannes Weiner han...@cmpxchg.org
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux
Torvalds torva...@linux-foundation.org
Cc: Kirill A. Shutemov kirill.shute...@linux.intel.com
Cc: Konstantin Khlebnikov koc...@gmail.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor...@suse.de
accessed, consider this as rectifying an oversight.
Signed-off-by: Hugh Dickins hu...@google.com
Acked-by: Mel Gorman mgor...@suse.de
Cc: Johannes Weiner han...@cmpxchg.org
Cc: Vlastimil Babka vba...@suse.cz
Cc: Michal Hocko mho...@suse.cz
Cc: Dave Hansen dave.han...@intel.com
Cc: Prabhakar Lad
0.7428 vmlinux-3.15.0-rc5-mmotm-20140513 end_page_writeback
23740 0.2409 vmlinux-3.15.0-rc5-lessatomic end_page_writeback
Signed-off-by: Mel Gorman mgor...@suse.de
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off
.
This patch makes copy_pte_range() static again.
Signed-off-by: Jerome Marchand jmarc...@redhat.com
Acked-by: David Rientjes rient...@google.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor
this by
skipping zones on remote nodes until the lower one is found. While this
makes sense from a page aging and performance perspective, it breaks the
expected zonelist policy. This patch restores the expected behaviour
for zonelist ordering.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked
                  vanilla  rearrange-v5  vmstat-v5
User               746.94        759.78     774.56
System           65336.22      58350.98   32847.27
Elapsed          27553.52      27282.02   27415.04
Note that the overhead reduction will vary depending on where exactly
pages are allocated and freed.
Signed-off-by: Mel Gorman mgor
Gorman mgor...@suse.de
Cc: Rik van Riel r...@redhat.com
Cc: Kirill A. Shutemov kirill.shute...@linux.intel.com
Cc: Bob Liu bob@oracle.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor...@suse.de
is reduced by this patch.
            3.16.0-rc3  3.16.0-rc3
               vanilla  rearrange-v5r9
User            746.94  759.78
System        65336.22  58350.98
Elapsed       27553.52  27282.02
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Johannes Weiner han...@cmpxchg.org
Signed-off
is active.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Johannes Weiner han...@cmpxchg.org
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/trace/events/pagemap.h | 16
a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/vmalloc.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index e2be0f8..060dc36 100644
--- a/mm/vmalloc.c
+++ b
Acked-by: Johannes Weiner han...@cmpxchg.org
Reviewed-by: Rik van Riel r...@redhat.com
Cc: Mel Gorman mgor...@suse.de
Signed-off-by: Johannes Weiner han...@cmpxchg.org
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel
commit a6e21b14f22041382e832d30deda6f26f37b1097 upstream.
Currently it's calculated once per zone in the zonelist.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Johannes Weiner han...@cmpxchg.org
Reviewed-by: Rik van Riel r...@redhat.com
Cc: Vlastimil Babka vba...@suse.cz
Cc: Jan Kara j
commit d8846374a85f4290a473a4e2a64c1ba046c4a0e1 upstream.
There is no need to calculate zone_idx(preferred_zone) multiple times
or use the pgdat to figure it out.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Rik van Riel r...@redhat.com
Acked-by: David Rientjes rient...@google.com
Cc
commit e3741b506c5088fa8c911bb5884c430f770fb49d upstream.
There should be no references to it any more and a parallel mark should
not be reordered against us. Use the non-locked variant to clear page active.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Rik van Riel r...@redhat.com
Cc
commit dc4b0caff24d9b2918e9f27bc65499ee63187eba upstream.
In the free path we calculate page_to_pfn multiple times. Reduce that.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Rik van Riel r...@redhat.com
Cc: Johannes Weiner han...@cmpxchg.org
Acked-by: Vlastimil Babka vba...@suse.cz
Cc
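The micro-optimisation named in this log, computing a pure but non-free conversion once instead of at every use site, can be sketched in plain C. The conversion below is a hypothetical stand-in for page_to_pfn(), not the kernel's implementation, and all names are illustrative:

```c
/* Hypothetical page-frame-number conversion: pure, but not free. */
static unsigned long to_pfn(const char *mem_base, const char *page)
{
    return (unsigned long)(page - mem_base) / 4096;
}

/* Free-path sketch: compute the pfn once and reuse the local for
 * every later consumer (here, a buddy-pfn calculation). */
static unsigned long free_page_sketch(const char *mem_base,
                                      const char *page,
                                      unsigned long *buddy_pfn)
{
    unsigned long pfn = to_pfn(mem_base, page);  /* computed once */

    *buddy_pfn = pfn ^ 1;  /* buddy of an order-0 page reuses pfn */
    return pfn;
}
```

The payoff in the real free path is simply fewer arithmetic operations per freed page with no behavioural change.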
...@infradead.org
Signed-off-by: Mel Gorman mgor...@suse.de
Cc: Johannes Weiner han...@cmpxchg.org
Cc: Vlastimil Babka vba...@suse.cz
Cc: Jan Kara j...@suse.cz
Cc: Michal Hocko mho...@suse.cz
Cc: Hugh Dickins hu...@google.com
Cc: Dave Hansen dave.han...@intel.com
Cc: Theodore Ts'o ty...@mit.edu
Cc: Paul E
commit cfc47a2803db42140167b92d991ef04018e162c7 upstream.
get_pageblock_migratetype() is called during free with IRQs disabled.
This is unnecessary and disables IRQs for longer than necessary.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Rik van Riel r...@redhat.com
Cc: Johannes Weiner
.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Rik van Riel r...@redhat.com
Cc: Johannes Weiner han...@cmpxchg.org
Cc: Vlastimil Babka vba...@suse.cz
Cc: Jan Kara j...@suse.cz
Cc: Michal Hocko mho...@suse.cz
Cc: Hugh Dickins hu...@google.com
Cc: Dave Hansen dave.han...@intel.com
Cc: Theodore
is that the page may be
promoted to the active list that might have been left on the inactive
list before the patch. It's too tiny a race and too marginal a
consequence to always use atomic operations for.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Johannes Weiner han...@cmpxchg.org
Cc: Vlastimil Babka
commit 07a427884348d38a6fd56fa4d78249c407196650 upstream.
shmem_getpage_gfp uses an atomic operation to set the SwapBacked field
before it's even added to the LRU or visible. This is unnecessary, as what
could it possibly race against? Use an unlocked variant.
Signed-off-by: Mel Gorman mgor
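The atomic-versus-unlocked distinction this commit relies on can be sketched as follows. The plain store is safe while the word is still private to one thread (a page not yet on the LRU); the names below are loose analogues of the kernel's set_bit()/__set_bit(), not the real implementations:

```c
#include <limits.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Non-atomic set: a plain read-modify-write. Correct only while the
 * flags word is invisible to other threads. */
static void nonatomic_set_bit(unsigned int nr, unsigned long *addr)
{
    addr[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG);
}

/* Atomic set: needed once concurrent observers exist. (The kernel's
 * set_bit() uses a lock-prefixed instruction on x86.) */
static void atomic_set_bit(unsigned int nr, unsigned long *addr)
{
    __atomic_fetch_or(&addr[nr / BITS_PER_LONG],
                      1UL << (nr % BITS_PER_LONG), __ATOMIC_RELAXED);
}
```

The commit's point is that before publication the cheaper non-atomic form is sufficient.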
...@mit.edu
Cc: Paul E. McKenney paul...@linux.vnet.ibm.com
Cc: Oleg Nesterov o...@redhat.com
Cc: Peter Zijlstra pet...@infradead.org
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm
overlap.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Johannes Weiner han...@cmpxchg.org
Acked-by: Rik van Riel r...@redhat.com
Cc: Vlastimil Babka vba...@suse.cz
Cc: Jan Kara j...@suse.cz
Cc: Michal Hocko mho...@suse.cz
Cc: Hugh Dickins hu...@google.com
Cc: Dave Hansen dave.han...@intel.com
.
[a...@linux-foundation.org: move BUFFER_FLAGS_DISCARD into the .c file]
Signed-off-by: Mel Gorman mgor...@suse.de
Cc: Johannes Weiner han...@cmpxchg.org
Cc: Vlastimil Babka vba...@suse.cz
Cc: Jan Kara j...@suse.cz
Cc: Michal Hocko mho...@suse.cz
Cc: Hugh Dickins hu...@google.com
Cc: Dave Hansen
commit b745bc85f21ea707e4ea1a91948055fa3e72c77b upstream.
cold is a bool, make it one. Make the likely case the if part of the
block instead of the else as according to the optimisation manual this is
preferred.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Rik van Riel r...@redhat.com
Cc
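The two changes named in this log, a bool instead of an int flag and the likely case in the if arm, can be sketched together. Names are hypothetical, shaped after free_hot_cold_page() but not the kernel code:

```c
#include <stdbool.h>

#define likely(x) __builtin_expect(!!(x), 1)

/* 'cold' as a bool documents that only two states exist; placing the
 * common hot-free case in the if arm keeps it on the fall-through
 * path, which the optimisation manuals prefer. */
static int free_hot_cold_sketch(bool cold, int *hot, int *coldcnt)
{
    if (likely(!cold)) {
        (*hot)++;       /* common case: hot free */
        return 0;
    }
    (*coldcnt)++;       /* rare case: cold free */
    return 1;
}
```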
]
Signed-off-by: Mel Gorman mgor...@suse.de
Cc: Johannes Weiner han...@cmpxchg.org
Cc: Vlastimil Babka vba...@suse.cz
Cc: Jan Kara j...@suse.cz
Cc: Michal Hocko mho...@suse.cz
Cc: Hugh Dickins hu...@google.com
Cc: Dave Hansen dave.han...@intel.com
Cc: Theodore Ts'o ty...@mit.edu
Cc: Paul E. McKenney
commit 664eeddeef6539247691197c1ac124d4aa872ab6 upstream.
If cpusets are not in use then we still check a global variable on every
page allocation. Use jump labels to avoid the overhead.
Signed-off-by: Mel Gorman mgor...@suse.de
Reviewed-by: Rik van Riel r...@redhat.com
Cc: Johannes Weiner han
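The jump-label optimisation described here can be sketched in plain C. A real static key patches the branch site at runtime so the disabled case costs essentially nothing; this sketch only models the semantics with an ordinary flag, and the names are illustrative, not the kernel API:

```c
#include <stdbool.h>

/* Stand-in for the static key: false while no cpusets are configured,
 * flipped only when the feature is actually enabled. */
static bool cpusets_enabled_key = false;

/* Pretend-expensive check, only reached when cpusets are active. */
static int cpuset_node_allowed(int node)
{
    return node >= 0;
}

/* Allocation-path check: skip the cpuset walk entirely when the key
 * is off, which is the common case on most systems. */
static int alloc_node_allowed(int node)
{
    if (!cpusets_enabled_key)   /* kernel: runtime-patched branch */
        return 1;
    return cpuset_node_allowed(node);
}
```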
...@zeniv.linux.org.uk
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
fs/super.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/fs/super.c b/fs/super.c
index d127de2
...@suse.cz
Reported-by: Yong-Taek Lee ytk@samsung.com
Reported-by: Bartlomiej Zolnierkiewicz b.zolnier...@samsung.com
Suggested-by: Joonsoo Kim iamjoonsoo@lge.com
Acked-by: Joonsoo Kim iamjoonsoo@lge.com
Suggested-by: Mel Gorman mgor...@suse.de
Acked-by: Minchan Kim minc...@kernel.org
Cc
commit 800a1e750c7b04c2aa2459afca77e936e01c0029 upstream.
If a zone cannot be used for a dirty page then it gets marked full which
is cached in the zlc and later potentially skipped by allocation requests
that have nothing to do with dirty zones.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked
line it's deceptively expensive and most
machines will not care. Only update the zlc if it was active.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Johannes Weiner han...@cmpxchg.org
Reviewed-by: Rik van Riel r...@redhat.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off
writes anon 43945891
Note that there are fewer allocation stalls even though the amount
of direct reclaim scanning is very approximately the same.
Signed-off-by: Mel Gorman mgor...@suse.de
Cc: Johannes Weiner han...@cmpxchg.org
Cc: Hugh Dickins hu...@google.com
Cc: Tim Chen
, especially not
with memcg cleanups coming in 3.17.
Reported-by: Dave Jones da...@redhat.com
Signed-off-by: Hugh Dickins hu...@google.com
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/migrate.c | 5 +++--
1 file changed, 3 insertions
%)
Signed-off-by: Tim Chen tim.c.c...@linux.intel.com
Signed-off-by: Mel Gorman mgor...@suse.de
Cc: Johannes Weiner han...@cmpxchg.org
Cc: Hugh Dickins hu...@google.com
Cc: Dave Chinner da...@fromorbit.com
Tested-by: Yuanhan Liu yuanhan@linux.intel.com
Cc: Bob Liu bob@oracle.com
Cc: Jan Kara j
iamjoonsoo@lge.com
Cc: Rafael Aquini aqu...@redhat.com
Cc: Mel Gorman mgor...@suse.de
Acked-by: Rik van Riel r...@redhat.com
Cc: Andrea Arcangeli aarca...@redhat.com
Cc: Khalid Aziz khalid.a...@oracle.com
Cc: Christoph Hellwig h...@lst.de
Reviewed-by: Zhang Yanfei zhangyan...@cn.fujitsu.com
Kim minc...@kernel.org
Cc: Mel Gorman mgor...@suse.de
Cc: Joonsoo Kim iamjoonsoo@lge.com
Cc: Bartlomiej Zolnierkiewicz b.zolnier...@samsung.com
Cc: Michal Nazarewicz min...@mina86.com
Cc: Naoya Horiguchi n-horigu...@ah.jp.nec.com
Cc: Christoph Lameter c...@linux.com
Cc: Rik van Riel r
time the delay
shouldn't really matter because there's no real memory
pressure for swapout to react to. ]
Suggested-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Shaohua Li s...@fusionio.com
Acked-by: Rik van Riel r...@redhat.com
Acked-by: Mel Gorman mgor...@suse.de
Acked-by: Hugh
compaction is updated only when called for
sync compaction.
Signed-off-by: David Rientjes rient...@google.com
Acked-by: Vlastimil Babka vba...@suse.cz
Reviewed-by: Naoya Horiguchi n-horigu...@ah.jp.nec.com
Cc: Greg Thelen gthe...@google.com
Cc: Mel Gorman mgor...@suse.de
Signed-off-by: Andrew
Thelen gthe...@google.com
Acked-by: Mel Gorman mgor...@suse.de
Acked-by: Vlastimil Babka vba...@suse.cz
Reviewed-by: Naoya Horiguchi n-horigu...@ah.jp.nec.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor
successful pageblock in zone->compact_cached_free_pfn, remains unchanged.
This cache is used when the whole compaction is restarted, not for
multiple invocations of the free scanner during single compaction.
Signed-off-by: Vlastimil Babka vba...@suse.cz
Cc: Minchan Kim minc...@kernel.org
Cc: Mel Gorman
for migration, and
this one was just duplicating the info.
Signed-off-by: Vlastimil Babka vba...@suse.cz
Reviewed-by: Naoya Horiguchi n-horigu...@ah.jp.nec.com
Cc: Minchan Kim minc...@kernel.org
Cc: Mel Gorman mgor...@suse.de
Cc: Joonsoo Kim iamjoonsoo@lge.com
Cc: Bartlomiej Zolnierkiewicz
.
[a...@linux-foundation.org: fix typo in comment]
Reported-by: Joonsoo Kim iamjoonsoo@lge.com
Signed-off-by: Vlastimil Babka vba...@suse.cz
Reviewed-by: Naoya Horiguchi n-horigu...@ah.jp.nec.com
Cc: Minchan Kim minc...@kernel.org
Cc: Mel Gorman mgor...@suse.de
Cc: Bartlomiej Zolnierkiewicz
Horiguchi n-horigu...@ah.jp.nec.com
Acked-by: Mel Gorman mgor...@suse.de
Acked-by: Vlastimil Babka vba...@suse.cz
Cc: Greg Thelen gthe...@google.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor...@suse.de
: Sunghwan Yun sunghwan@samsung.com
Cc: Minchan Kim minc...@kernel.org
Cc: Mel Gorman mgor...@suse.de
Cc: Joonsoo Kim iamjoonsoo@lge.com
Cc: Bartlomiej Zolnierkiewicz b.zolnier...@samsung.com
Cc: Michal Nazarewicz min...@mina86.com
Cc: Naoya Horiguchi n-horigu...@ah.jp.nec.com
Cc: Christoph Lameter
care of the
scheduling.
If the cond_resched() actually triggers, then terminate this pageblock
scan for async compaction as well.
Signed-off-by: David Rientjes rient...@google.com
Acked-by: Mel Gorman mgor...@suse.de
Acked-by: Vlastimil Babka vba...@suse.cz
Cc: Mel Gorman mgor...@suse.de
Cc
from sysfs, either for the entire
system or for a node, to force MIGRATE_SYNC.
[a...@linux-foundation.org: fix build]
[iamjoonsoo@lge.com: use MIGRATE_SYNC in alloc_contig_range()]
Signed-off-by: David Rientjes rient...@google.com
Suggested-by: Mel Gorman mgor...@suse.de
Acked-by: Vlastimil
for commenting different versions.
Signed-off-by: Fabian Frederick f...@skynet.be
Suggested-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
include
From: Al Viro v...@zeniv.linux.org.uk
commit 9e8c2af96e0d2d5fe298dd796fb6bc16e888a48d upstream.
... it does that itself (via kmap_atomic())
Signed-off-by: Al Viro v...@zeniv.linux.org.uk
Signed-off-by: Mel Gorman mgor...@suse.de
---
fs/btrfs/file.c | 5 -
fs/fuse/file.c | 2 --
mm
: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/vmacache.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/mm/vmacache.c b/mm/vmacache.c
index d4224b3..1037a3ba 100644
...@google.com
Cc: Hugh Dickins hu...@google.com
Cc: Jan Kara j...@suse.cz
Cc: KOSAKI Motohiro kosaki.motoh...@jp.fujitsu.com
Cc: Luigi Semenzato semenz...@google.com
Cc: Mel Gorman mgor...@suse.de
Cc: Metin Doslu me...@citusdata.com
Cc: Michel Lespinasse wal...@google.com
Cc: Ozgun Erdogan oz
that. Rename it to
reflect its actual meaning.
Signed-off-by: Vladimir Davydov vdavy...@parallels.com
Acked-by: David Rientjes rient...@google.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor
...@redhat.com
Tested-by: Hugh Dickins hu...@google.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
arch/unicore32/include/asm/mmu_context.h | 4 +-
fs/exec.c
...@ti.com
Tested-by: Grygorii Strashko grygorii.stras...@ti.com
Cc: Tejun Heo t...@kernel.org
Cc: Santosh Shilimkar santosh.shilim...@ti.com
Cc: Ingo Molnar mi...@kernel.org
Cc: Mel Gorman m...@csn.ul.ie
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux
-by: Johannes Weiner han...@cmpxchg.org
Reviewed-by: Minchan Kim minc...@kernel.org
Reviewed-by: Rik van Riel r...@redhat.com
Acked-by: Mel Gorman mgor...@suse.de
Cc: Andrea Arcangeli aarca...@redhat.com
Cc: Bob Liu bob@oracle.com
Cc: Christoph Hellwig h...@infradead.org
Cc: Dave Chinner da
be adapted to handle the new definition
of page cache hole.
Signed-off-by: Johannes Weiner han...@cmpxchg.org
Reviewed-by: Rik van Riel r...@redhat.com
Reviewed-by: Minchan Kim minc...@kernel.org
Acked-by: Mel Gorman mgor...@suse.de
Cc: Andrea Arcangeli aarca...@redhat.com
Cc: Bob Liu bob@oracle.com
-page entries in page cache radix
trees)
Signed-off-by: Johannes Weiner han...@cmpxchg.org
Reported-by: Hugh Dickins hu...@google.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm
-by: Rik van Riel r...@redhat.com
Acked-by: Mel Gorman mgor...@suse.de
Cc: Andrea Arcangeli aarca...@redhat.com
Cc: Bob Liu bob@oracle.com
Cc: Christoph Hellwig h...@infradead.org
Cc: Dave Chinner da...@fromorbit.com
Cc: Greg Thelen gthe...@google.com
Cc: Hugh Dickins hu...@google.com
Cc: Jan Kara j
page to the cache, and removes read_cache_page_async() and its wrappers.
Signed-off-by: Sasha Levin sasha.le...@oracle.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
fs/cramfs
a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/filemap.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index df13a9b..ed99e57 100644
--- a/mm/filemap.c
shrinker is NUMA
aware. That said, binding a process to a particular NUMA node won't
prevent it from shrinking inode/dentry caches from other nodes, which is
not good. Fix this.
Signed-off-by: Vladimir Davydov vdavy...@parallels.com
Cc: Mel Gorman mgor...@suse.de
Cc: Michal Hocko mho...@suse.cz
-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/readahead.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/readahead.c b/mm/readahead.c
index 5b637b5..e9c4a6a 100644
--- a/mm/readahead.c
+++ b/mm
its previous incarnation.
Signed-off-by: Peter Zijlstra a.p.zijls...@chello.nl
Signed-off-by: Mel Gorman mgor...@suse.de
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux
an effect, gcc doesn't optimize it itself because
of cc->sync.
Signed-off-by: David Rientjes rient...@google.com
Cc: Mel Gorman mgor...@suse.de
Acked-by: Rik van Riel r...@redhat.com
Acked-by: Vlastimil Babka vba...@suse.cz
Cc: Joonsoo Kim iamjoonsoo@lge.com
Signed-off-by: Andrew Morton a...@linux
is the first thing we should do and makes things more
simple.
[vba...@suse.cz: rephrase commit description]
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
Acked-by: Vlastimil Babka vba...@suse.cz
Cc: Mel Gorman mgor...@suse.de
Cc: Rik van Riel r...@redhat.com
Signed-off-by: Andrew Morton a...@linux
-by: Vlastimil Babka vba...@suse.cz
Cc: Mel Gorman mgor...@suse.de
Cc: Rik van Riel r...@redhat.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/compaction.c | 15 +++
1 file
pageblock.
suitable_migration_target() also checks whether the page is high-order, but
its criterion for high-order is pageblock order, so calling it once per
pageblock range is not a problem.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
Acked-by: Vlastimil Babka vba...@suse.cz
Cc: Mel Gorman
, locked variable would be false.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
Acked-by: Vlastimil Babka vba...@suse.cz
Cc: Mel Gorman mgor...@suse.de
Cc: Rik van Riel r...@redhat.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
comment]
Signed-off-by: KOSAKI Motohiro kosaki.motoh...@jp.fujitsu.com
Signed-off-by: Yasuaki Ishimatsu isimatu.yasu...@jp.fujitsu.com
Reported-by: Yasuaki Ishimatsu isimatu.yasu...@jp.fujitsu.com
Tested-by: Yasuaki Ishimatsu isimatu.yasu...@jp.fujitsu.com
Cc: Mel Gorman mgor...@suse.de
Signed-off
second.
Signed-off-by: David Rientjes rient...@google.com
Acked-by: Hugh Dickins hu...@google.com
Acked-by: Mel Gorman mgor...@suse.de
Cc: Joonsoo Kim iamjoonsoo@lge.com
Cc: Rik van Riel r...@redhat.com
Cc: Greg Thelen gthe...@google.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed
, there's no need
to skip pageblocks based on heuristics (mainly for debugging).
Signed-off-by: David Rientjes rient...@google.com
Acked-by: Mel Gorman mgor...@suse.de
Cc: Rik van Riel r...@redhat.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux
: Mel Gorman mgor...@suse.de
Cc: Rik van Riel r...@redhat.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/compaction.c | 11 ---
1 file changed, 4 insertions(+), 7 deletions
will not call such shrinkers at
all. As a result some slabs will be left untouched under some
circumstances. Let us fix it.
Signed-off-by: Vladimir Davydov vdavy...@parallels.com
Reported-by: Dave Chinner dchin...@redhat.com
Cc: Mel Gorman mgor...@suse.de
Cc: Michal Hocko mho...@suse.cz
Cc: Johannes Weiner
-off-by: Mel Gorman mgor...@suse.de
Signed-off-by: Vlastimil Babka vba...@suse.cz
Cc: Rik van Riel r...@redhat.com
Cc: Joonsoo Kim iamjoonsoo@lge.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor
...@redhat.com
Acked-by: Mel Gorman mgor...@suse.de
Cc: Joonsoo Kim iamjoonsoo@lge.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/compaction.c | 11 +--
1 file changed, 9
. Add a new spinlock, swap_avail_lock, to protect
the swap_avail_head list.
Mel Gorman suggested using plists since they internally handle ordering
the list entries based on priority, which is exactly what swap was doing
manually. All the ordering code is now removed, and swap_info_struct
entries
).
Signed-off-by: Dan Streetman ddstr...@ieee.org
Acked-by: Mel Gorman mgor...@suse.de
Cc: Paul Gortmaker paul.gortma...@windriver.com
Cc: Steven Rostedt rost...@goodmis.org
Cc: Thomas Gleixner t...@linutronix.de
Cc: Shaohua Li s...@fusionio.com
Cc: Hugh Dickins hu...@google.com
Cc: Dan Streetman
with any
other threads serialised behind it. In comparison to that, the
flush is noise. It makes more sense to optimise balancing to
require fewer flushes than to optimise the flush itself.
This patch deletes the redundant huge page check.
Signed-off-by: Mel Gorman mgor...@suse.de
Tested
virt_to_head_page(slab->s_mem).
Acked-by: Andi Kleen a...@linux.intel.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
Signed-off-by: Pekka Enberg penb...@iki.fi
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/slab.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/mm/slab.c b/mm
equally in pairs. The swap behavior is now as
advertised, i.e. different priority swap entries are used in order, and
equal priority swap targets are used concurrently.
Signed-off-by: Dan Streetman ddstr...@ieee.org
Acked-by: Mel Gorman mgor...@suse.de
Cc: Shaohua Li s...@fusionio.com
Cc: Hugh
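The plist behaviour relied on here, entries kept sorted by priority so that "use highest priority first" is just "take the head", can be sketched with a minimal sorted singly linked list. This is only an analogue of the kernel's plist, with illustrative names:

```c
#include <stddef.h>

struct pnode {
    int prio;               /* higher value = used first */
    struct pnode *next;
};

/* Insert keeping descending priority order; equal priorities end up
 * adjacent, which is what lets swap rotate among equal targets. */
static void plist_add_sketch(struct pnode **head, struct pnode *node)
{
    struct pnode **pos = head;

    while (*pos && (*pos)->prio >= node->prio)
        pos = &(*pos)->next;
    node->next = *pos;
    *pos = node;
}
```

With ordering handled on insert, the hand-rolled sort logic the commit removes becomes unnecessary.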
Jones da...@redhat.com
Cc: Cyrill Gorcunov gorcu...@gmail.com
Cc: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/fremap.c | 28 ++--
1 file changed, 22 insertions(+), 6
evaluates zone reclaim behavior and
suitable reclaim_nodes.
Signed-off-by: Michal Hocko mho...@suse.cz
Acked-by: David Rientjes rient...@google.com
Acked-by: Nishanth Aravamudan n...@linux.vnet.ibm.com
Tested-by: Nishanth Aravamudan n...@linux.vnet.ibm.com
Acked-by: Mel Gorman mgor...@suse.de
Signed
not justify
taking the penalty everywhere so make it a debugging option.
Signed-off-by: Mel Gorman mgor...@suse.de
Tested-by: Davidlohr Bueso davidl...@hp.com
Reviewed-by: Rik van Riel r...@redhat.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Cc: Hugh Dickins hu...@google.com
Cc: Alex Shi alex
!MIGRATE_MOVABLE blocks that async
compaction now skips without marking them.
Signed-off-by: Vlastimil Babka vba...@suse.cz
Cc: Rik van Riel r...@redhat.com
Acked-by: Mel Gorman mgor...@suse.de
Cc: Joonsoo Kim iamjoonsoo@lge.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off
added) broke it.
This patch restores sane and old behavior. It also removes an incorrect
comment which was introduced by commit fef903efcf0c (mm/page_alloc.c:
restructure free-page stealing code and fix a bug).
Signed-off-by: KOSAKI Motohiro kosaki.motoh...@jp.fujitsu.com
Cc: Mel Gorman mgor
objects, which sounds reasonable.
[*] http://www.spinics.net/lists/cgroups/msg06913.html
Signed-off-by: Vladimir Davydov vdavy...@parallels.com
Cc: Mel Gorman mgor...@suse.de
Cc: Michal Hocko mho...@suse.cz
Cc: Johannes Weiner han...@cmpxchg.org
Cc: Rik van Riel r...@redhat.com
Cc: Dave Chinner dchin
...@linux-foundation.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/readahead.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/mm/readahead.c b/mm/readahead.c
index e4ed041..5b637b5 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -401,6 +401,7
A. Shutemov kirill.shute...@linux.intel.com
Cc: Mel Gorman m...@csn.ul.ie
Cc: Yasuaki Ishimatsu isimatu.yasu...@jp.fujitsu.com
Cc: Wanpeng Li liw...@linux.vnet.ibm.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman
commit 15aa368255f249df0b2af630c9487bb5471bd7da upstream.
NR_TLB_LOCAL_FLUSH_ALL is not always accounted for correctly and
the comparison with total_vm is done before taking
tlb_flushall_shift into account. Clean it up.
Signed-off-by: Mel Gorman mgor...@suse.de
Tested-by: Davidlohr Bueso davidl
: Minchan Kim minc...@kernel.org
Cc: Konstantin Khlebnikov khlebni...@openvz.org
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/page-flags.h | 4 +--
mm/swap_state.c
this
functionality and make it easier to understand the code. No functional
change.
Signed-off-by: Vlastimil Babka vba...@suse.cz
Acked-by: Mel Gorman mgor...@suse.de
Reviewed-by: Rik van Riel r...@redhat.com
Cc: Joonsoo Kim iamjoonsoo@lge.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed
Berriche h...@sgi.com
Cc: Hugh Dickins hu...@google.com
Cc: Johannes Weiner han...@cmpxchg.org
Cc: Kirill A. Shutemov kirill.shute...@linux.intel.com
Cc: Mel Gorman mgor...@suse.de
Cc: Rik van Riel r...@redhat.com
Cc: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
Cc: stable@vger.kernel.org