On Tue, Oct 02, 2012 at 03:48:57PM -0700, Rick Jones wrote:
On 10/02/2012 01:45 AM, Mel Gorman wrote:
SIZE=64
taskset -c 0 netserver
taskset -c 1 netperf -t UDP_STREAM -i 50,6 -I 99,1 -l 20 -H 127.0.0.1 -- -P 15895 -s 32768 -S 32768 -m $SIZE -M $SIZE
Just FYI, unless you are running
isolations in CMA. This
patch should address the problem.
This patch is a fix for
mm-compaction-acquire-the-zone-lock-as-late-as-possible-fix-2.patch
Signed-off-by: Mel Gorman mgor...@suse.de
diff --git a/mm/compaction.c b/mm/compaction.c
index 136debd..ed3b8f1 100644
--- a/mm/compaction.c
+++ b/mm
but the resolutions should be straightforward.
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/compaction.c | 16
mm/internal.h |3 ++-
mm/page_alloc.c | 43 ++-
3 files changed, 28 insertions(+), 34 deletions(-)
diff --git a/mm
.
--
Mel Gorman
SUSE Labs
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
On Wed, Oct 03, 2012 at 11:04:16AM -0700, Rick Jones wrote:
On 10/03/2012 02:47 AM, Mel Gorman wrote:
On Tue, Oct 02, 2012 at 03:48:57PM -0700, Rick Jones wrote:
On 10/02/2012 01:45 AM, Mel Gorman wrote:
SIZE=64
taskset -c 0 netserver
taskset -c 1 netperf -t UDP_STREAM -i 50,6 -I 99,1 -l
but they are somewhere else yet
to be determined.
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/compaction.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index 2c4ce17..9eef558 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -346,7
On Mon, Oct 08, 2012 at 05:06:54PM +0900, Minchan Kim wrote:
Hi Mel,
On Tue, Oct 02, 2012 at 04:12:17PM +0100, Mel Gorman wrote:
On Tue, Oct 02, 2012 at 05:03:07PM +0200, Thierry Reding wrote:
On Tue, Oct 02, 2012 at 03:41:35PM +0100, Mel Gorman wrote:
On Tue, Oct 02, 2012 at 02:48
On Sun, Oct 07, 2012 at 01:14:17AM -0700, Anton Vorontsov wrote:
On Fri, Oct 05, 2012 at 10:29:12AM +0100, Mel Gorman wrote:
[...]
The implemented approach can notify userland about two things:
- Constantly rising number of scanned pages shows that Linux is busy w/
rehashing
is cleared by clear_page_dirty_for_io(), the page gets write-protected in
page_mkclean(). So a pagecache page is writeable if and only if it is dirty.
CC: Martin Schwidefsky schwidef...@de.ibm.com
CC: Mel Gorman mgor...@suse.de
CC: linux-s...@vger.kernel.org
Signed-off-by: Jan Kara j...@suse.cz
Acked
that breaks this rule). It would be less efficient on
SPARSEMEM than what you're trying to merge but I do not see the need for
the additional complexity unless you can show it makes a big difference
to boot times.
enabled.
on failures and will be revisited in the future.
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/vmscan.c | 25 -
1 file changed, 25 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2624edc..e081ee8 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1760,28 +1760,6
can be drawn from its value.
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/vm_event_item.h |4 +++-
mm/compaction.c |4
mm/migrate.c |6 ++
mm/vmstat.c |7 ---
4 files changed, 13 insertions(+), 8 deletions
on the
current node or migrated to a different node.
Acked-by: Rik van Riel r...@redhat.com
Signed-off-by: Andrea Arcangeli aarca...@redhat.com
Signed-off-by: Mel Gorman mgor...@suse.de
---
arch/x86/include/asm/pgtable.h | 65 ++--
include/asm-generic/pgtable.h
From: Andrea Arcangeli aarca...@redhat.com
When we split a transparent hugepage, transfer the NUMA type from the
pmd to the pte if needed.
Signed-off-by: Andrea Arcangeli aarca...@redhat.com
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/huge_memory.c |2 ++
1 file changed, 2 insertions
Signed-off-by: Andrea Arcangeli aarca...@redhat.com
Signed-off-by: Mel Gorman mgor...@suse.de
---
arch/x86/mm/gup.c | 13 -
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/arch/x86/mm/gup.c b/arch/x86/mm/gup.c
index dd74e46..02c5ec5 100644
--- a/arch/x86/mm/gup.c
+++ b/arch
pte_numa(). This isn't a problem since PROT_NONE (and possibly PROT_WRITE
with dirty tracking) aren't used or are rare enough for us not to care
about their placement.
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/huge_mm.h | 10 +
mm/huge_memory.c| 21 ++
mm
lee.schermerh...@hp.com
Cc: Andrew Morton a...@linux-foundation.org
Cc: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Peter Zijlstra a.p.zijls...@chello.nl
Signed-off-by: Ingo Molnar mi...@kernel.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/uapi/linux/mempolicy.h |1 +
mm
-foundation.org
Cc: Peter Zijlstra a.p.zijls...@chello.nl
Cc: Andrea Arcangeli aarca...@redhat.com
Cc: Rik van Riel r...@redhat.com
[ Wrote the changelog, ran measurements, tuned the default. ]
Signed-off-by: Ingo Molnar mi...@kernel.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/sched.h
and fixed bug. ]
Signed-off-by: Ingo Molnar mi...@kernel.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/mm_types.h |3 +++
include/linux/sched.h|1 +
kernel/sched/fair.c | 45 -
kernel/sysctl.c |7 +++
4 files
van Riel r...@redhat.com
Cc: Andrew Morton a...@linux-foundation.org
Cc: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Peter Zijlstra a.p.zijls...@chello.nl
Signed-off-by: Ingo Molnar mi...@kernel.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/uapi/linux/mempolicy.h |1
-by: Peter Zijlstra a.p.zijls...@chello.nl
Based-on-work-by: Andrea Arcangeli aarca...@redhat.com
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/migrate.h |8
mm/migrate.c| 104 ++-
2 files changed, 110 insertions(+), 2
a migrate page copy but any improvement to the model would still
use the same vmstat counters.
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/vm_event_item.h |2 ++
mm/compaction.c |8
mm/vmstat.c |3 +++
3 files changed, 13 insertions
time) without losing information, this bitflag must never be set when the
pte and pmd are present, so the bitflag picked for _PAGE_NUMA usage must
not be used by the swap entry format.
Signed-off-by: Andrea Arcangeli aarca...@redhat.com
Signed-off-by: Mel Gorman mgor...@suse.de
---
arch/x86/include
on PROT_NONE pages being !present and avoid
the TLB flush from try_to_unmap(TTU_MIGRATION). This greatly improves the
page-migration performance.
Based-on-work-by: Peter Zijlstra a.p.zijls...@chello.nl
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/huge_mm.h |8
mm/huge_memory.c
-on-fault;
simplified code now that we don't have to bother
with special crap for interleaved ]
Signed-off-by: Peter Zijlstra a.p.zijls...@chello.nl
Signed-off-by: Ingo Molnar mi...@kernel.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/mempolicy.h |8 +
include
intelligent about it.
Signed-off-by: Mel Gorman mgor...@suse.de
---
arch/sh/mm/Kconfig |1 +
include/linux/mm_types.h | 11 +
include/linux/sched.h| 20
init/Kconfig | 14 ++
kernel/sched/core.c | 13 +
kernel/sched/fair.c | 122
] nodes.
After PROT_NONE, the pages in regions assigned to the worker threads
will be automatically migrated local to the threads on 1st touch.
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/mm.h |3 +
include/uapi/linux/mempolicy.h | 13 ++-
mm/mempolicy.c
and how fast it is
doing it.
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/vm_event_item.h |6 ++
mm/huge_memory.c |1 +
mm/memory.c |3 +++
mm/mempolicy.c|6 ++
mm/migrate.c |3 ++-
mm
This is the dumbest possible policy that still does something of note.
When a pte_numa is faulted, it is moved immediately. Any replacement
policy must at least do better than this and in all likelihood this
policy regresses normal workloads.
Signed-off-by: Mel Gorman mgor...@suse.de
---
include
There are currently two competing approaches to implement support for
automatically migrating pages to optimise NUMA locality. Performance results
are available for both but review highlighted different problems in both.
They are not compatible with each other even though some fundamental
The pgmigrate_success and pgmigrate_fail vmstat counters tell the user
about migration activity but not the type or the reason. This patch adds
a tracepoint to identify the type of page migration and why the page is
being migrated.
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux
On Tue, Nov 06, 2012 at 01:58:26PM -0500, Rik van Riel wrote:
On 11/06/2012 04:14 AM, Mel Gorman wrote:
Note: This patch started as mm/mpol: Create special PROT_NONE
infrastructure and preserves the basic idea but steals *very*
heavily from autonuma: numa hinting page faults entry
On Tue, Nov 06, 2012 at 02:41:13PM -0500, Rik van Riel wrote:
On 11/06/2012 04:14 AM, Mel Gorman wrote:
From: Peter Zijlstra a.p.zijls...@chello.nl
NOTE: This patch is based on sched, numa, mm: Add fault driven
placement and migration policy but as it throws away all the policy
On Tue, Nov 06, 2012 at 02:55:06PM -0500, Rik van Riel wrote:
On 11/06/2012 04:14 AM, Mel Gorman wrote:
It is tricky to quantify the basic cost of automatic NUMA placement in a
meaningful manner. This patch adds some vmstats that can be used as part
of a basic costing model.
On Wed, Nov 07, 2012 at 05:48:30AM -0500, Rik van Riel wrote:
On 11/07/2012 05:38 AM, Mel Gorman wrote:
On Tue, Nov 06, 2012 at 01:58:26PM -0500, Rik van Riel wrote:
On 11/06/2012 04:14 AM, Mel Gorman wrote:
Note: This patch started as mm/mpol: Create special PROT_NONE
infrastructure
On Tue, Nov 06, 2012 at 02:18:18PM -0500, Rik van Riel wrote:
On 11/06/2012 04:14 AM, Mel Gorman wrote:
Note: Based on mm/mpol: Use special PROT_NONE to migrate pages but
sufficiently different that the signed-off-bys were dropped
Combine our previous _PAGE_NUMA, mpol_misplaced
and SLUB
initialisation trips up. Check it is initialised.
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/mempolicy.c |4
1 file changed, 4 insertions(+)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 11d4b6b..8cfa6dc 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -129,6 +129,10
pressure notifications
+ *
+ * Copyright 2011-2012 Pekka Enberg penb...@kernel.org
+ * Copyright 2011-2012 Linaro Ltd.
+ * Anton Vorontsov anton.voront...@linaro.org
+ *
+ * Based on ideas from KOSAKI Motohiro, Leonid Moiseichuk, Mel Gorman,
+ * Minchan Kim and Pekka Enberg
by
distribution configs. Support for MPST should be detected at runtime and
3. ACPI support to actually use this thing and validate the design is
compatible with the spec and actually works in hardware
On Fri, Oct 19, 2012 at 01:53:18PM -0600, Mike Yoknis wrote:
On Tue, 2012-10-09 at 08:56 -0600, Mike Yoknis wrote:
On Mon, 2012-10-08 at 16:16 +0100, Mel Gorman wrote:
On Wed, Oct 03, 2012 at 08:56:14AM -0600, Mike Yoknis wrote:
memmap_init_zone() loops through every Page Frame Number
been a helper function
called unmap_and_move_thp() in migrate.c instead of being buried in
mm/huge_memory.c
I'm travelling for a conference at the moment so these patches are not
tested but with the ongoing NUMA migration work I figured it was best to
post these sooner rather than later.
This series adds vmstat counters and tracepoints for migration, compaction
and autonuma. Using them it's possible to
The pgmigrate_success and pgmigrate_fail vmstat counters tell the user
about migration activity but not the type or the reason. This patch adds
a tracepoint to identify the type of page migration and why the page is
being migrated.
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux
can be drawn from its value.
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/vm_event_item.h |4 +++-
mm/compaction.c |4
mm/migrate.c |6 ++
mm/vmstat.c |7 ---
4 files changed, 13 insertions(+), 8 deletions
a migrate page copy but any improvement to the model would still
use the same vmstat counters.
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/vm_event_item.h |2 ++
mm/compaction.c |8
mm/vmstat.c |3 +++
3 files changed, 13 insertions
hints recorded. When the workload is fully converged the value is 1.
This can measure whether AutoNUMA is converging and how fast it is doing so.
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/vm_event_item.h |6 ++
mm/autonuma.c | 19
Record in the migrate_pages tracepoint that the migration is for
AutoNUMA.
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/migrate.h|1 +
include/trace/events/migrate.h |1 +
mm/autonuma.c |3 ++-
3 files changed, 4 insertions(+), 1 deletions
On Mon, Oct 08, 2012 at 09:24:40PM -0700, Hugh Dickins wrote:
SNIP
CC: Mel Gorman mgor...@suse.de
and I'm grateful to Mel's ack for reawakening me to it...
CC: linux-s...@vger.kernel.org
Signed-off-by: Jan Kara j...@suse.cz
but I think it's wrong.
Dang.
---
mm/rmap.c
bool migrate_scanner)
{
	struct zone *zone = cc->zone;
-	if (!page)
+
+	if (!page || cc->ignore_skip_hint)
 		return;
	if (!nr_isolated) {
On Mon, Oct 08, 2012 at 06:42:16PM -0700, John Stultz wrote:
On 10/08/2012 02:46 AM, Mel Gorman wrote:
On Sun, Oct 07, 2012 at 01:14:17AM -0700, Anton Vorontsov wrote:
And here we just try to let userland to assist, userland can tell oh,
don't bother with swapping or draining caches, I can
On Tue, Oct 09, 2012 at 01:08:30PM +0200, Bartlomiej Zolnierkiewicz wrote:
On Tuesday 09 October 2012 12:11:43 Mel Gorman wrote:
On Tue, Oct 09, 2012 at 10:40:10AM +0200, Bartlomiej Zolnierkiewicz wrote:
I also need following patch to make CONFIG_CMA=y CONFIG_COMPACTION=y
case
work
This is a backport of the series Memory policy corruption fixes V2. This
should apply to 3.6-stable, 3.5-stable, 3.4-stable and 3.0-stable without
any difficulty. It will not apply cleanly to 3.2 but just drop the revert
patch and the rest of the series should apply.
I tested 3.6-stable and
() is virtually a no-op
and while it does not allow memory corruption it is not the right fix.
This patch is a revert.
[mgor...@suse.de: Edited changelog]
Signed-off-by: KOSAKI Motohiro kosaki.motoh...@jp.fujitsu.com
Signed-off-by: Mel Gorman mgor...@suse.de
Cc: Christoph Lameter c...@linux.com
Cc
the
reference count and frees the policy prematurely.
Signed-off-by: KOSAKI Motohiro kosaki.motoh...@jp.fujitsu.com
Signed-off-by: Mel Gorman mgor...@suse.de
Reviewed-by: Christoph Lameter c...@linux.com
Cc: Josh Boyer jwbo...@gmail.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off
Gorman mgor...@suse.de
Reviewed-by: Christoph Lameter c...@linux.com
Cc: Josh Boyer jwbo...@gmail.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/mempolicy.c | 15 +--
1
are not that performance critical, this patch converts sp->lock to
sp->mutex so it can sleep when calling sp_alloc().
[kosaki.motoh...@jp.fujitsu.com: Original patch]
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: KOSAKI Motohiro kosaki.motoh...@jp.fujitsu.com
Reviewed-by: Christoph Lameter c
...@redhat.com,
Cc: Christoph Lameter c...@linux.com,
Reviewed-by: Christoph Lameter c...@linux.com
Signed-off-by: KOSAKI Motohiro kosaki.motoh...@jp.fujitsu.com
Signed-off-by: Mel Gorman mgor...@suse.de
Cc: Josh Boyer jwbo...@gmail.com
Signed-off-by: Andrew Morton a...@linux-foundation.org
Signed-off
at the TTWU_QUEUE figures. There are 530K interrupts versus 33K interrupts
for NO_TTWU_QUEUE. If each one of those IPIs is effectively a context
switch then the actual switch rates are 1.5M switches versus 1.3M switches
and TTWU_QUEUE is actually switching faster.
On Thu, Oct 11, 2012 at 04:56:11PM +0200, Andrea Arcangeli wrote:
Hi Mel,
On Thu, Oct 11, 2012 at 11:19:30AM +0100, Mel Gorman wrote:
As a basic sniff test I added a test to MMtests for the AutoNUMA
Benchmark on a 4-node machine and the following fell out
On Thu, Oct 11, 2012 at 06:07:02PM +0200, Andrea Arcangeli wrote:
Hi,
On Thu, Oct 11, 2012 at 11:50:36AM +0100, Mel Gorman wrote:
On Thu, Oct 04, 2012 at 01:50:43AM +0200, Andrea Arcangeli wrote:
+The AutoNUMA logic is a chain reaction resulting from the actions of
+the AutoNUMA daemon
On Thu, Oct 11, 2012 at 06:43:00PM +0200, Andrea Arcangeli wrote:
On Thu, Oct 11, 2012 at 12:01:37PM +0100, Mel Gorman wrote:
On Thu, Oct 04, 2012 at 01:50:46AM +0200, Andrea Arcangeli wrote:
The objective of _PAGE_NUMA is to be able to trigger NUMA hinting page
faults to identify the per
On Thu, Oct 11, 2012 at 06:58:47PM +0200, Andrea Arcangeli wrote:
On Thu, Oct 11, 2012 at 12:15:45PM +0100, Mel Gorman wrote:
huh?
#define _PAGE_NUMA _PAGE_PROTNONE
so this is effectively _PAGE_PRESENT | _PAGE_PROTNONE | _PAGE_PROTNONE
I suspect you are doing this because
On Thu, Oct 11, 2012 at 07:05:33PM +0200, Andrea Arcangeli wrote:
On Thu, Oct 11, 2012 at 01:22:55PM +0100, Mel Gorman wrote:
On Thu, Oct 04, 2012 at 01:50:48AM +0200, Andrea Arcangeli wrote:
In the special pmd mode of knuma_scand
(/sys/kernel/mm/autonuma/knuma_scand/pmd == 1), the pmd
On Thu, Oct 11, 2012 at 07:15:20PM +0200, Andrea Arcangeli wrote:
On Thu, Oct 11, 2012 at 01:28:27PM +0100, Mel Gorman wrote:
s/togehter/together/
Fixed.
+ * knumad_scan structure.
+ */
+struct mm_autonuma {
Nit but this is very similar in principle to mm_slot
On Thu, Oct 11, 2012 at 07:34:42PM +0200, Andrea Arcangeli wrote:
On Thu, Oct 11, 2012 at 02:46:43PM +0100, Mel Gorman wrote:
Should this be a SCHED_FEATURE flag?
I guess it could. It is only used by kernel/sched/numa.c which isn't
even built unless CONFIG_AUTONUMA is set. So it would
On Fri, Oct 12, 2012 at 02:25:13AM +0200, Andrea Arcangeli wrote:
On Thu, Oct 11, 2012 at 03:58:05PM +0100, Mel Gorman wrote:
On Thu, Oct 04, 2012 at 01:50:52AM +0200, Andrea Arcangeli wrote:
This algorithm takes as input the statistical information filled by the
knuma_scand (mm
On Fri, Oct 12, 2012 at 03:45:53AM +0200, Andrea Arcangeli wrote:
Hi Mel,
On Thu, Oct 11, 2012 at 10:34:32PM +0100, Mel Gorman wrote:
So after getting through the full review of it, there wasn't anything
I could not stand. I think it's *very* heavy on some of the paths like
the idle
the aggressive reclaim to the process attempting the THP
allocation.
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/vmscan.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2624edc..2b7edfa 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
consistently and the methodology needs work. I know filtering
statistics like this is a major flaw in the methodology but the decision
was made in this case in the interest of the benchmarks with unstable
results completing in a reasonable time.
On Thu, Oct 11, 2012 at 04:35:03PM +0100, Mel Gorman wrote:
On Thu, Oct 11, 2012 at 04:56:11PM +0200, Andrea Arcangeli wrote:
Hi Mel,
On Thu, Oct 11, 2012 at 11:19:30AM +0100, Mel Gorman wrote:
As a basic sniff test I added a test to MMtests for the AutoNUMA
Benchmark on a 4-node
On Wed, Oct 24, 2012 at 09:47:47AM -0600, Mike Yoknis wrote:
On Sat, 2012-10-20 at 09:29 +0100, Mel Gorman wrote:
On Fri, Oct 19, 2012 at 01:53:18PM -0600, Mike Yoknis wrote:
On Tue, 2012-10-09 at 08:56 -0600, Mike Yoknis wrote:
On Mon, 2012-10-08 at 16:16 +0100, Mel Gorman wrote
much. I've picked it up and it'll be in MMTests 0.07.
On Fri, Oct 26, 2012 at 03:48:48PM +0800, Ni zhan Chen wrote:
On 10/12/2012 10:51 PM, Mel Gorman wrote:
MMTests 0.06 is a configurable test suite that runs a number of common
workloads of interest to MM developers. There are multiple additions
all but in many respects the most useful
On Tue, Nov 06, 2012 at 11:15:54AM +0100, Johannes Hirte wrote:
Am Mon, 5 Nov 2012 14:24:49 +
schrieb Mel Gorman mgor...@suse.de:
Jiri Slaby reported the following:
(It's an effective revert of mm: vmscan: scale number of
pages reclaimed by reclaim/compaction based on failures
of lumpy reclaim (less CPU usage but greater system disruption that is
harder to measure). Shortly after, lumpy reclaim was removed entirely so
now larger amounts of CPU time are spent compacting memory that previously
would have been reclaimed.
On Fri, Nov 09, 2012 at 10:44:16AM +0530, Vaidyanathan Srinivasan wrote:
* Mel Gorman mgor...@suse.de [2012-11-08 18:02:57]:
On Wed, Nov 07, 2012 at 01:22:13AM +0530, Srivatsa S. Bhat wrote:
Hi Mel,
Thanks for detailed
the problem seems to be effectively solved by
revert patch: https://lkml.org/lkml/2012/11/5/308
Ok, while there is still a question on whether it's enough I think it's
sensible to at least start with the obvious one.
Thanks very much.
On Mon, Nov 05, 2012 at 02:24:49PM +, Mel Gorman wrote:
Jiri Slaby reported the following:
(It's an effective revert of mm: vmscan: scale number of pages
reclaimed by reclaim/compaction based on failures.) Given kswapd
had hours of runtime in ps/top output yesterday
it should have been
described in the code anyway.
If you get the barrier issue sorted out then feel free to add
Acked-by: Mel Gorman m...@csn.ul.ie
!= -EAGAIN) {
/*
It may be necessary to make this more generic for migration-related
callbacks but I see nothing incompatible in your patch with doing that.
Doing the abstraction now would be overkill so
Acked-by: Mel Gorman m...@csn.ul.ie
life very hard or would
you notice?
On Fri, Nov 09, 2012 at 03:42:57PM +0100, Andrea Arcangeli wrote:
Hi Mel,
On Tue, Nov 06, 2012 at 09:14:36AM +, Mel Gorman wrote:
This series addresses part of the integration and sharing problem by
implementing a foundation that either the policy for schednuma or autonuma
can
On Fri, Nov 09, 2012 at 12:53:22PM -0200, Rafael Aquini wrote:
SNIP
If you get the barrier issue sorted out then feel free to add
Acked-by: Mel Gorman m...@csn.ul.ie
I believe we can drop the barriers stuff, as the locking scheme is now
providing enough protection against
On Fri, Nov 09, 2012 at 11:18:17PM +0100, Thierry Reding wrote:
The compact_capture_page() function is only used if compaction is
enabled so it should be moved into the corresponding #ifdef.
Signed-off-by: Thierry Reding thierry.red...@avionic-design.de
Acked-by: Mel Gorman mgor...@suse.de
On Sat, Nov 10, 2012 at 10:47:41AM +0800, Alex Shi wrote:
On Sat, Nov 3, 2012 at 8:21 PM, Mel Gorman mgor...@suse.de wrote:
On Sat, Nov 03, 2012 at 07:04:04PM +0800, Alex Shi wrote:
In reality, this report is larger but I chopped it down a bit for
brevity. autonuma beats schednuma
.
Signed-off-by: Mel Gorman mgor...@suse.de
---
drivers/mtd/mtdcore.c |6 --
include/linux/gfp.h |5 -
include/trace/events/gfpflags.h |1 +
mm/page_alloc.c |7 ---
4 files changed, 13 insertions(+), 6 deletions(-)
diff --git
shrink_slab() on each iteration.
This patch defers when kswapd gets woken up for THP allocations. For !THP
allocations, kswapd is always woken up. For THP allocations, kswapd is
woken up iff the process is willing to enter into direct
reclaim/compaction.
Signed-off-by: Mel Gorman mgor...@suse.de
On Mon, Nov 12, 2012 at 02:13:20PM +0100, Zdenek Kabelac wrote:
Dne 12.11.2012 13:19, Mel Gorman napsal(a):
On Sun, Nov 11, 2012 at 10:13:14AM +0100, Zdenek Kabelac wrote:
Hmm, so it's just took longer to hit the problem and observe kswapd0
spinning on my CPU again - it's not as endless like
(Since I wrote this changelog there has been another release of schednuma.
I had delayed releasing this series long enough and decided not to delay
further. Of course, I plan to dig into that new revision and see what
has changed.)
This is V2 of the series which attempts to layer parts of
-on-fault;
simplified code now that we don't have to bother
with special crap for interleaved ]
Signed-off-by: Peter Zijlstra a.p.zijls...@chello.nl
Signed-off-by: Ingo Molnar mi...@kernel.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/mempolicy.h |8 +
include
and fixed bug. ]
Signed-off-by: Ingo Molnar mi...@kernel.org
Signed-off-by: Mel Gorman mgor...@suse.de
Reviewed-by: Rik van Riel r...@redhat.com
---
include/linux/mm_types.h |3 +++
include/linux/sched.h|1 +
kernel/sched/fair.c | 61
From: Rik van Riel r...@redhat.com
The function ptep_set_access_flags() is only ever invoked to set access
flags or add write permission on a PTE. The write bit is only ever set
together with the dirty bit.
Because we only ever upgrade a PTE, it is safe to skip flushing entries on
remote TLBs.
From: Rik van Riel r...@redhat.com
Intel has an architectural guarantee that the TLB entry causing
a page fault gets invalidated automatically. This means
we should be able to drop the local TLB invalidation.
Because of the way other areas of the page fault code work,
chances are good that all
...@hp.com
Cc: Christoph Lameter c...@linux.com
Cc: Rik van Riel r...@redhat.com
Cc: Andrew Morton a...@linux-foundation.org
Cc: Linus Torvalds torva...@linux-foundation.org
Signed-off-by: Ingo Molnar mi...@kernel.org
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/uapi/linux/mempolicy.h |9
into two parts.
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/sched.h |3 +
kernel/sched/core.c | 14 ++-
kernel/sched/debug.c|3 +
kernel/sched/fair.c | 298 +++
kernel/sched/features.h | 18 +++
kernel/sched
From: Rik van Riel r...@redhat.com
The function ptep_set_access_flags is only ever used to upgrade
access permissions to a page. That means the only negative side
effect of not flushing remote TLBs is that other CPUs may incur
spurious page faults, if they happen to access the same address,
and
This patch introduces a last_nid field to the page struct. This is used
to build a two-stage filter in the next patch that is aimed at
mitigating a problem whereby pages migrate to the wrong node when
referenced by a process that was running off its home node.
Signed-off-by: Mel Gorman mgor
structures to track the number
of faults in total and on a per-nid basis. On each NUMA fault it
checks if the system would benefit if the current task was migrated
to another node. If the task should be migrated, its home node is
updated and the task is requeued.
Signed-off-by: Mel Gorman mgor
node because the threads clearing pte_numa were running off-node. This
patch uses page->last_nid to build a two-stage filter before pages get
migrated to avoid problems with short or unlikely task-node
relationships.
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/mempolicy.c | 27