> + * - Package (PKG)
>
> With that:
> Acked-by: Valentin Schneider
>
No objection either, PKG is less ambiguous than DIE
Acked-by: Mel Gorman
--
Mel Gorman
SUSE Labs
On Thu, Jan 26, 2023 at 08:18:31AM -0800, Suren Baghdasaryan wrote:
> On Thu, Jan 26, 2023 at 7:47 AM Mel Gorman
> wrote:
> >
> > On Wed, Jan 25, 2023 at 03:35:53PM -0800, Suren Baghdasaryan wrote:
> > > In cases when VMA flags are modified after VMA
On Thu, Jan 26, 2023 at 08:10:26AM -0800, Suren Baghdasaryan wrote:
> On Thu, Jan 26, 2023 at 7:10 AM Mel Gorman
> wrote:
> >
> > On Wed, Jan 25, 2023 at 03:35:51PM -0800, Suren Baghdasaryan wrote:
> > > Replace direct modifications to vma->vm_flags with cal
if (ret)
> return ret;
> + reset_vm_flags(vma, vm_flags);
Same.
Not necessary as such, as there are few users of ksm_madvise and I doubt
it'll introduce new surprises.
With or without the comment;
Acked-by: Mel Gorman
--
Mel Gorman
SUSE Labs
Michal Hocko
Acked-by: Mel Gorman
Minor comments that are safe to ignore.
I think a better name for mod_vm_flags is set_clear_vm_flags to hint that
the first flags are to be set and the second flags are to be cleared.
For this patch, it doesn't matter, but it might avoid accidental s
On Wed, Jan 25, 2023 at 03:35:50PM -0800, Suren Baghdasaryan wrote:
> To simplify the usage of VM_LOCKED_CLEAR_MASK in clear_vm_flags(),
> replace it with VM_LOCKED_MASK bitmask and convert all users.
>
> Signed-off-by: Suren Baghdasaryan
> Acked-by: Michal Hocko
Acked-by: Mel Gorman
es racing with such
> operations. Introduce modifier functions for vm_flags to be used whenever
> flags are updated. This way we can better check and control correct
> locking behavior during these updates.
>
> Signed-off-by: Suren Baghdasaryan
With or without the suggested rename;
Acked-by: Mel Gorman
- *new = data_race(*orig);
+ data_race(memcpy(new, orig, sizeof(*new)));
  INIT_LIST_HEAD(&new->anon_vma_chain);
  dup_anon_vma_name(orig, new);
 }
I don't see how memcpy could automagically figure out whether the memcpy
is prone to races or not in an arbitrary context.
Assuming using data_race this way is ok then
Acked-by: Mel Gorman
--
Mel Gorman
SUSE Labs
On Mon, Jan 24, 2022 at 11:12:07AM -0500, Zi Yan wrote:
> On 24 Jan 2022, at 9:02, Mel Gorman wrote:
>
> > On Wed, Jan 19, 2022 at 02:06:17PM -0500, Zi Yan wrote:
> >> From: Zi Yan
> >>
> >> This is done in addition to MIGRATE_ISOLATE pageblock merge avoid
ATE_UNMOVABLE,
> MIGRATE_TYPES },
> [MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE, MIGRATE_MOVABLE,
> MIGRATE_TYPES },
> + [MIGRATE_HIGHATOMIC] = { MIGRATE_TYPES }, /* Never used */
> #ifdef CONFIG_CMA
> [MIGRATE_CMA] = { MIGRATE_TYPES }, /* Never used */
> #endif
unting right.
However, there does not appear to be any special protection against a
page in a highatomic pageblock getting merged with a buddy of another
pageblock type. The pageblock would still have the right setting but on
allocation, the pages could split to the wrong free list and be lost
until the pages belonging to MIGRATE_HIGHATOMIC were freed again.
Not sure how much of a problem that is in practice, it's been a while
since I've heard of high-order atomic allocation failures.
--
Mel Gorman
SUSE Labs
h generation
of Zen. The common pattern is that a single NUMA node can have multiple
L3 caches and at one point I thought it might be reasonable to allow
spillover to select a local idle CPU instead of stacking multiple tasks
on a CPU sharing cache. I never got as far as thinking how it could be
done in a way that multiple architectures would be happy with.
--
Mel Gorman
SUSE Labs
On Mon, Apr 12, 2021 at 11:06:19AM +0100, Valentin Schneider wrote:
> On 12/04/21 10:37, Mel Gorman wrote:
> > On Mon, Apr 12, 2021 at 11:54:36AM +0530, Srikar Dronamraju wrote:
> >> * Gautham R. Shenoy [2021-04-02 11:07:54]:
> >>
> >> >
rch depth
allows within the node with the LLC CPUs masked out. While there would be
a latency hit because cache is not shared, it would still be a CPU local
to memory that is idle. That would potentially be beneficial on Zen*
as well without having to introduce new domains in the topology hierarchy.
--
Mel Gorman
SUSE Labs
Reviewed-by: Bob Picco <bob.pi...@oracle.com>
>
> Considering that some HW might behave strangely and this would be rather
> hard to debug I would be tempted to mark this for stable. It should also
> be merged separately from the rest of the series.
>
> I have just one nit bel
locator.
> * All the reclaim decisions have to use this function rather than
> * populated_zone(). If the whole zone is reserved then we can easily
> * end up with populated_zone() && !managed_zone().
> */
>
> What do you think?
>
This makes a lot of sense. I've updat
ssible.
We cannot just convert populated_zone() as many existing users really
need to check for present_pages. This patch introduces a managed_zone()
helper and uses it in the few cases where it is critical that the check
is made for managed pages -- zonelist construction and page reclaim.
+static inline bool managed_zone(struct zone *zone)
+{
+	return !!zone->managed_pages;
+}
+
 int populated_zone(struct zone *zone)
 {
 	return (!!zone->present_pages);
 }
extern int movable_zone;
--
Mel Gorman
SUSE Labs
he page allocator uses.
> > o Most importantly of all, reclaim from node 0 with multiple zones will
> > have similar aging and reclaiming characteristics as every
> > other node.
> >
> > Signed-off-by: Mel Gorman <mgor...@techsingularity.net>
> > A
On Wed, Aug 10, 2016 at 12:59:40PM -0500, Reza Arbab wrote:
> On Thu, Aug 04, 2016 at 10:24:04AM +0100, Mel Gorman wrote:
> >[1.713998] Unable to handle kernel paging request for data at address
> >0xff7a1
> >[1.714164] Faulting instruction address: 0xc027
r freed to the page allocator (eg.
> initrd).
>
It would be ideal if the amount of reserved memory that is freed later
in the normal case was estimated. If it's a small percentage of memory
then the difference is unlikely to be detectable and avoids ppc64 being
special.
--
Mel Gorman
SUSE Labs
lable pages then it really should be based on that and not
just a made-up number.
--
Mel Gorman
SUSE Labs
lps to identify if
> the current value needs to be incremented.
>
I think the parameter is ugly and it should have been just
inc_memory_reserve but at least it works.
--
Mel Gorman
SUSE Labs
ommon+0x20/0xa8
>
> Register the memory reserved by fadump, so that the cache sizes are
> calculated based on the free memory (i.e Total memory - reserved
> memory).
>
> Suggested-by: Mel Gorman <mgor...@techsingularity.net>
I didn't suggest this specifically. While it happens t
za Arbab <ar...@linux.vnet.ibm.com>
Signed-off-by: Mel Gorman <mgor...@techsingularity.net>
---
This has been compile-tested and boot-tested on a 32-bit KVM only. A
memoryless system was not available to test the patch with. A confirmation
from Paul and Reza that it resolves their problem is we
e end of the node where it could have the same limitations as
ZONE_HIGHMEM if necessary. It was also safe to assume that zones never
overlapped as zones were about addressing limitations. If ZONE_CMA or
ZONE_DEVICE can overlap with other zones during initialisation time then
ther
spin and send a patch for review.
>
Given that CONFIG_NO_BOOTMEM is not supported and bootmem is meant to be
slowly retiring, I would suggest instead making deferred memory init
depend on NO_BOOTMEM.
--
Mel Gorman
SUSE Labs
___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev
t from my test on Power8 platform:
>
> For 4GB memory: 57% is improved
> For 50GB memory: 22% is improved
>
> Signed-off-by: Li Zhang <zhlci...@linux.vnet.ibm.com>
Acked-by: Mel Gorman <mgor...@techsingularity.net>
--
Mel Gorman
SUSE Labs
needs more
> memory.
> So this patch allocates 1GB for 0.25TB/node for large system
> as it is mentioned in https://lkml.org/lkml/2015/5/1/627
>
Acked-by: Mel Gorman <mgor...@techsingularity.net>
--
Mel Gorman
SUSE Labs
if there was a point where
this was ever working. It could be a ppc64-specific bug but right now,
I'm still drawing a blank.
--
Mel Gorman
SUSE Labs
? As it can be
easily reproduced, can the problem be bisected please?
--
Mel Gorman
SUSE Labs
On Tue, Mar 24, 2015 at 10:51:41PM +1100, Dave Chinner wrote:
On Mon, Mar 23, 2015 at 12:24:00PM +, Mel Gorman wrote:
These are three follow-on patches based on the xfsrepair workload Dave
Chinner reported was problematic in 4.0-rc1 due to changes in page table
management -- https://lkml.org/lkml/2015/3/1/226.
Much of the problem was reduced by commit 53da3bc2ba9e (mm: fix up numa
read-only thread grouping
.
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/huge_memory.c | 9 -
mm/memory.c | 8 +++-
mm/mprotect.c| 3 +++
3 files changed, 14 insertions(+), 6 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2f12e9fcf1a2..0a42d1521aa4 100644
--- a/mm/huge_memory.c
to losing the writable information and
that should be reduced so I tried a few approaches. Ultimately, the one
that performed the best and was easiest to understand simply preserved
the writable bit across the protection update and page fault. I'll post
it later when I stick a changelog on it.
--
Mel
scanner may scan faster if the faults continue
to be remote. This means there is higher system CPU overhead and fault
trapping at exactly the time we know that migrations cannot happen. This
patch tracks when migration failures occur and slows the PTE scanner.
Signed-off-by: Mel Gorman mgor...@suse.de
flushes and sync also affect placement. This is unpredictable behaviour
which is impossible to reason about so this patch makes grouping decisions
based on the VMA flags.
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/huge_memory.c | 13 ++---
mm/memory.c | 19 +++
2
.
--
Mel Gorman
SUSE Labs
migration.
--
Mel Gorman
SUSE Labs
On Wed, Mar 18, 2015 at 10:31:28AM -0700, Linus Torvalds wrote:
- something completely different that I am entirely missing
So I think there's something I'm missing. For non-shared mappings, I
still have the idea that pte_dirty should be the same as pte_write.
And yet, your testing of 3.19
On Tue, Mar 10, 2015 at 04:55:52PM -0700, Linus Torvalds wrote:
On Mon, Mar 9, 2015 at 12:19 PM, Dave Chinner da...@fromorbit.com wrote:
On Mon, Mar 09, 2015 at 09:52:18AM -0700, Linus Torvalds wrote:
What's your virtual environment setup? Kernel config, and
virtualization environment to
On Thu, Mar 12, 2015 at 09:20:36AM -0700, Linus Torvalds wrote:
On Thu, Mar 12, 2015 at 6:10 AM, Mel Gorman mgor...@suse.de wrote:
I believe you're correct and it matches what was observed. I'm still
travelling and wireless is dirt but managed to queue a test using pmd_dirty
Ok, thanks
On Mon, Mar 09, 2015 at 09:02:19PM +, Mel Gorman wrote:
On Sun, Mar 08, 2015 at 08:40:25PM +, Mel Gorman wrote:
Because if the answer is 'yes', then we can safely say: 'we regressed
performance because correctness [not dropping dirty bits] comes before
performance
On Sun, Mar 08, 2015 at 08:40:25PM +, Mel Gorman wrote:
Because if the answer is 'yes', then we can safely say: 'we regressed
performance because correctness [not dropping dirty bits] comes before
performance'.
If the answer is 'no', then we still have a mystery (and a regression
On Sun, Mar 08, 2015 at 11:02:23AM +0100, Ingo Molnar wrote:
* Linus Torvalds torva...@linux-foundation.org wrote:
On Sat, Mar 7, 2015 at 8:36 AM, Ingo Molnar mi...@kernel.org wrote:
And the patch Dave bisected to is a relatively simple patch. Why
not simply revert it to see
throttling migrations can be lowered.
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/sched.h | 9 +
kernel/sched/fair.c | 8 ++--
mm/huge_memory.c | 3 ++-
mm/memory.c | 3 ++-
4 files changed, 15 insertions(+), 8 deletions(-)
diff --git a/include/linux
be addressed but beyond the scope of this series which is aimed at Dave
Chinner's shrink workload that is unlikely to be affected by this issue.
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/huge_memory.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index
On Sat, Mar 07, 2015 at 05:36:58PM +0100, Ingo Molnar wrote:
* Mel Gorman mgor...@suse.de wrote:
Dave Chinner reported the following on https://lkml.org/lkml/2015/3/1/226
Across the board the 4.0-rc1 numbers are much slower, and the
degradation is far worse when using the large
The wrong value is being returned by change_huge_pmd since commit
10c1045f28e8 (mm: numa: avoid unnecessary TLB flushes when setting
NUMA hinting entries) which allows a fallthrough that tries to adjust
non-existent PTEs. This patch corrects it.
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm
Dave Chinner reported a problem due to excessive NUMA balancing activity
and bisected it. The first patch in this series corrects a major problem
that is unlikely to affect Dave but is still serious. Patch 2 is a minor
cleanup that was spotted while looking at scan rate control. Patch 3 is
minor
This code is dead since commit 9e645ab6d089 (sched/numa: Continue PTE
scanning even if migrate rate limited) so remove it.
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/migrate.h | 5 -
mm/migrate.c| 20
2 files changed, 25 deletions(-)
diff
On Sat, Mar 07, 2015 at 12:31:03PM -0800, Linus Torvalds wrote:
On Sat, Mar 7, 2015 at 7:20 AM, Mel Gorman mgor...@suse.de wrote:
if (!prot_numa || !pmd_protnone(*pmd)) {
- ret = 1;
entry = pmdp_get_and_clear_notify(mm, addr
The wrong value is being returned by change_huge_pmd since commit
10c1045f28e8 (mm: numa: avoid unnecessary TLB flushes when setting
NUMA hinting entries) which allows a fallthrough that tries to adjust
non-existent PTEs. This patch corrects it.
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm
Dave Chinner reported a problem due to excessive NUMA balancing activity and
bisected it. These are two patches that address two major issues with that
series. The first patch is almost certainly unrelated to what he saw due
to fact his vmstats showed no huge page activity but the fix is
511530 314936 371571
NUMA hint local percent 69 52 61
NUMA pages migrated 26366701 5424102 7073177
Signed-off-by: Mel Gorman mgor...@suse.de
---
arch/powerpc/include/asm/pgtable-ppc64.h | 16
arch/x86/include/asm
Convert existing users of pte_numa and friends to the new helper. Note
that the kernel is broken after this patch is applied until the other
page table modifiers are also altered. This patch layout is to make
review easier.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Linus Torvalds torva
closes the race.
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/migrate.h | 4
mm/huge_memory.c| 3 ++-
mm/migrate.c| 6 --
3 files changed, 2 insertions(+), 11 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index fab9b32
This is a preparatory patch that introduces protnone helpers for automatic
NUMA balancing.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Linus Torvalds torva...@linux-foundation.org
Acked-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Tested-by: Sasha Levin sasha.le...@oracle.com
ppc64 should not be depending on DSISR_PROTFAULT and it's unexpected
if they are triggered. This patch adds warnings just in case they
are being accidentally depended upon.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Tested-by: Sasha Levin
Changelog since V4
o Rebase to 3.19-rc2(mel)
Changelog since V3
o Minor comment update (benh)
o Add ack'ed bys
Changelog since V2
o Rename *_protnone_numa to _protnone and extend docs (linus)
o Rebase
are accidentally depending on the
PTE not being cleared and flushed. If one is missed, it'll manifest as
corruption problems that start triggering shortly after this series is
merged and only happen when NUMA balancing is enabled.
Signed-off-by: Mel Gorman mgor...@suse.de
Tested-by: Sasha Levin sasha.le
With PROT_NONE, the traditional page table manipulation functions are
sufficient.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Linus Torvalds torva...@linux-foundation.org
Acked-by: Aneesh Kumar aneesh.ku...@linux.vnet.ibm.com
Tested-by: Sasha Levin sasha.le...@oracle.com
---
include
Faults on the huge zero page are pointless and there is a BUG_ON
to catch them during fault time. This patch reintroduces a check
that avoids marking the zero page PAGE_NONE.
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/huge_mm.h | 3 ++-
mm/huge_memory.c| 13
behaviour.
Signed-off-by: Mel Gorman mgor...@suse.de
---
arch/x86/include/asm/pgtable.h | 8 +---
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index b9a13e9..4673d6e 100644
--- a/arch/x86/include/asm/pgtable.h
If a PTE or PMD is already marked NUMA when scanning to mark entries
for NUMA hinting then it is not necessary to update the entry and
incur a TLB flush penalty. Avoid the overhead where possible.
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/huge_memory.c | 14 --
mm/mprotect.c
do not even try recovering. It would have been more comprehensive to
check VMA flags in pte_protnone_numa but it would have made the API ugly
just for a debugging check.
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/huge_memory.c | 3 +++
mm/memory.c | 3 +++
2 files changed, 6 insertions
On Thu, Dec 04, 2014 at 08:01:57AM +1100, Benjamin Herrenschmidt wrote:
On Wed, 2014-12-03 at 15:52 +, Mel Gorman wrote:
It's implied but can I assume it passed? If so, Ben and Paul, can I
consider the series to be acked by you other than the minor comment
updates?
Yes. Assuming
On Wed, Dec 03, 2014 at 10:50:35PM +0530, Aneesh Kumar K.V wrote:
Mel Gorman mgor...@suse.de writes:
On Wed, Dec 03, 2014 at 08:53:37PM +0530, Aneesh Kumar K.V wrote:
Benjamin Herrenschmidt b...@kernel.crashing.org writes:
On Tue, 2014-12-02 at 12:57 +0530, Aneesh Kumar K.V wrote
There are no functional changes here and I kept the mmotm-20141119 baseline
as that is what got tested but it rebases cleanly to current mmotm. The
series makes architectural changes but splitting this on a per-arch basis
would cause bisect-related brain damage. I'm hoping this can go through
closes the race.
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/migrate.h | 4
mm/huge_memory.c| 3 ++-
mm/migrate.c| 6 --
3 files changed, 2 insertions(+), 11 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 01aad3e
This is a preparatory patch that introduces protnone helpers for automatic
NUMA balancing.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Linus Torvalds torva...@linux-foundation.org
Acked-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Tested-by: Sasha Levin sasha.le...@oracle.com
Convert existing users of pte_numa and friends to the new helper. Note
that the kernel is broken after this patch is applied until the other
page table modifiers are also altered. This patch layout is to make
review easier.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Linus Torvalds torva
ppc64 should not be depending on DSISR_PROTFAULT and it's unexpected
if they are triggered. This patch adds warnings just in case they
are being accidentally depended upon.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Tested-by: Sasha Levin
With PROT_NONE, the traditional page table manipulation functions are
sufficient.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Linus Torvalds torva...@linux-foundation.org
Acked-by: Aneesh Kumar aneesh.ku...@linux.vnet.ibm.com
Tested-by: Sasha Levin sasha.le...@oracle.com
---
include
are accidentally depending on the
PTE not being cleared and flushed. If one is missed, it'll manifest as
corruption problems that start triggering shortly after this series is
merged and only happen when NUMA balancing is enabled.
Signed-off-by: Mel Gorman mgor...@suse.de
Tested-by: Sasha Levin sasha.le
Faults on the huge zero page are pointless and there is a BUG_ON
to catch them during fault time. This patch reintroduces a check
that avoids marking the zero page PAGE_NONE.
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/huge_mm.h | 3 ++-
mm/huge_memory.c| 13
behaviour.
Signed-off-by: Mel Gorman mgor...@suse.de
---
arch/x86/include/asm/pgtable.h | 8 +---
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index cf428a7..0dd5be3 100644
--- a/arch/x86/include/asm/pgtable.h
do not even try recovering. It would have been more comprehensive to
check VMA flags in pte_protnone_numa but it would have made the API ugly
just for a debugging check.
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/huge_memory.c | 3 +++
mm/memory.c | 3 +++
2 files changed, 6 insertions
If a PTE or PMD is already marked NUMA when scanning to mark entries
for NUMA hinting then it is not necessary to update the entry and
incur a TLB flush penalty. Avoid the overhead where possible.
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/huge_memory.c | 14 --
mm/mprotect.c
assume it passed? If so, Ben and Paul, can I
consider the series to be acked by you other than the minor comment
updates?
Thanks.
--
Mel Gorman
SUSE Labs
On Tue, Dec 02, 2014 at 09:38:39AM +1100, Benjamin Herrenschmidt wrote:
On Fri, 2014-11-21 at 13:57 +, Mel Gorman wrote:
#ifdef CONFIG_NUMA_BALANCING
+/*
+ * These work without NUMA balancing but the kernel does not care. See the
+ * comment in include/asm-generic/pgtable.h
On Thu, Nov 20, 2014 at 04:50:25PM -0500, Sasha Levin wrote:
On 11/20/2014 05:19 AM, Mel Gorman wrote:
V1 failed while running under kvm-tools very quickly and a second report
indicated that it happens on bare metal as well. This version survived
an overnight run of trinity running under
On Thu, Nov 20, 2014 at 11:54:06AM -0800, Linus Torvalds wrote:
On Thu, Nov 20, 2014 at 2:19 AM, Mel Gorman mgor...@suse.de wrote:
This is a preparatory patch that introduces protnone helpers for automatic
NUMA balancing.
Oh, I hadn't noticed that you had renamed these things
closes the race.
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/migrate.h | 4
mm/huge_memory.c| 3 ++-
mm/migrate.c| 6 --
3 files changed, 2 insertions(+), 11 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 01aad3e
The main change here is to rebase on mmotm-20141119 as the series had
significant conflicts that were non-obvious to resolve. The main blockers
for merging are independent testing from Sasha (trinity), independent
testing from Aneesh (ppc64 support) and acks from Ben and Paul on the
powerpc
This is a preparatory patch that introduces protnone helpers for automatic
NUMA balancing.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Linus Torvalds torva...@linux-foundation.org
Acked-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/pgtable.h | 15
Convert existing users of pte_numa and friends to the new helper. Note
that the kernel is broken after this patch is applied until the other
page table modifiers are also altered. This patch layout is to make
review easier.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Linus Torvalds torva
ppc64 should not be depending on DSISR_PROTFAULT and it's unexpected
if they are triggered. This patch adds warnings just in case they
are being accidentally depended upon.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/mm
With PROT_NONE, the traditional page table manipulation functions are
sufficient.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Linus Torvalds torva...@linux-foundation.org
Acked-by: Aneesh Kumar aneesh.ku...@linux.vnet.ibm.com
---
include/linux/huge_mm.h | 3 +--
mm/huge_memory.c
are accidentally depending on the
PTE not being cleared and flushed. If one is missed, it'll manifest as
corruption problems that start triggering shortly after this series is
merged and only happen when NUMA balancing is enabled.
Signed-off-by: Mel Gorman mgor...@suse.de
---
arch/powerpc/include/asm/pgtable.h
Faults on the huge zero page are pointless and there is a BUG_ON
to catch them during fault time. This patch reintroduces a check
that avoids marking the zero page PAGE_NONE.
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/huge_mm.h | 3 ++-
mm/huge_memory.c| 13
behaviour.
Signed-off-by: Mel Gorman mgor...@suse.de
---
arch/x86/include/asm/pgtable.h | 8 +---
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index cf428a7..0dd5be3 100644
--- a/arch/x86/include/asm/pgtable.h
do not even try recovering. It would have been more comprehensive to
check VMA flags in pte_protnone_numa but it would have made the API ugly
just for a debugging check.
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/huge_memory.c | 3 +++
mm/memory.c | 3 +++
2 files changed, 6 insertions
If a PTE or PMD is already marked NUMA when scanning to mark entries
for NUMA hinting then it is not necessary to update the entry and
incur a TLB flush penalty. Avoid the overhead where possible.
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/huge_memory.c | 14 --
mm/mprotect.c
closes the race.
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/migrate.h | 4
mm/huge_memory.c| 3 ++-
mm/migrate.c| 6 --
3 files changed, 2 insertions(+), 11 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 01aad3e
V1 failed while running under kvm-tools very quickly and a second report
indicated that it happens on bare metal as well. This version survived
an overnight run of trinity running under kvm-tools here but verification
from Sasha would be appreciated.
Changelog since V1
o ppc64 paranoia checks and
This is a preparatory patch that introduces protnone helpers for automatic
NUMA balancing.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Linus Torvalds torva...@linux-foundation.org
Acked-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/pgtable.h | 11
Convert existing users of pte_numa and friends to the new helper. Note
that the kernel is broken after this patch is applied until the other
page table modifiers are also altered. This patch layout is to make
review easier.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Linus Torvalds torva
ppc64 should not be depending on DSISR_PROTFAULT and it's unexpected
if they are triggered. This patch adds warnings just in case they
are being accidentally depended upon.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
---
arch/powerpc/mm