will send a revert I would like to understand what
led to the patch in the first place. I do not see why PPC would use only
the LOCAL_DISTANCE and REMOTE_DISTANCE distances and in fact machines I have
seen use different values.
Anton, could you comment please?
--
Michal Hocko
SUSE Labs
On Tue 18-02-14 14:27:11, David Rientjes wrote:
On Tue, 18 Feb 2014, Michal Hocko wrote:
Hi,
I have just noticed that ppc has RECLAIM_DISTANCE reduced to 10 set by
56608209d34b (powerpc/numa: Set a smaller value for RECLAIM_DISTANCE to
enable zone reclaim). The commit message suggests
On Tue 18-02-14 15:34:05, Nishanth Aravamudan wrote:
Hi Michal,
On 18.02.2014 [10:06:58 +0100], Michal Hocko wrote:
Hi,
I have just noticed that ppc has RECLAIM_DISTANCE reduced to 10 set by
56608209d34b (powerpc/numa: Set a smaller value for RECLAIM_DISTANCE to
enable zone reclaim).
Agreed! Actually the code I am currently interested in is based on the 3.0
kernel, where zone_reclaim_mode is set in build_zonelists, which relies on
find_next_best_node, which iterates only over N_HIGH_MEMORY nodes, which
should have non-zero present pages.
[...]
--
Michal Hocko
SUSE Labs
On Wed 19-02-14 00:20:21, David Rientjes wrote:
On Wed, 19 Feb 2014, Michal Hocko wrote:
I strongly suspect that the patch is correct since powerpc node distances
are different than the architectures you're talking about and get doubled
for every NUMA domain that the hardware
N_MEMORY earlier
in early_calculate_totalpages as mentioned in the other email.
--
Michal Hocko
SUSE Labs
___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev
Cc: Paul Mundt let...@linux-sh.org
Signed-off-by: David Rientjes rient...@google.com
Reviewed-by: Michal Hocko mho...@suse.cz
---
arch/powerpc/mm/fault.c | 27 ---
arch/sh/mm/fault.c | 19 +++
arch/x86/mm/fault.c | 23
as ARCH_ENABLE_MEMORY_{HOTPLUG,HOTREMOVE} (and name it
ARCH_HAVE_BOOTMEM_INFO_NODE).
--
Michal Hocko
SUSE Labs
think that both patches should be merged into one and put to Andrew's
queue as
memory-hotplug-implement-register_page_bootmem_info_section-of-sparse-vmemmap-fix.patch
rather than a separate patch.
--
Michal Hocko
SUSE Labs
less complicated. But I do not have
any strong opinion on that. Looking at other ARCH_ENABLE_MEMORY_HOTPLUG
and others suggests that we should be consistent with that.
Thanks!
--
Michal Hocko
SUSE Labs
management patches on top of the last major
release (since-.X.Y branch).
--
Michal Hocko
SUSE Labs
On Mon 28-01-13 09:33:49, Tang Chen wrote:
On 01/25/2013 09:17 PM, Michal Hocko wrote:
On Wed 23-01-13 06:29:31, Simon Jeons wrote:
On Tue, 2013-01-22 at 19:42 +0800, Tang Chen wrote:
Here are some bug fix patches for physical memory hot-remove. All these
patches are based on the latest -mm
...@linux.vnet.ibm.com
The patch as is doesn't seem to be harmful.
Reviewed-by: Michal Hocko mho...@suse.cz
---
v1 - v2:
Check against zone_reclaimable_pages, rather than zone_reclaimable, based
upon feedback from Dave Hansen.
Dunno, but shouldn't we use the same thing here
On Fri 03-04-15 10:43:57, Nishanth Aravamudan wrote:
On 31.03.2015 [11:48:29 +0200], Michal Hocko wrote:
[...]
I would expect kswapd would be looping endlessly because the zone
wouldn't be balanced obviously. But I would be wrong... because
pgdat_balanced is doing
mmap(MAP_FIXED|MAP_LOCKED|MAP_READ|other_flags_you_need)
from the SIGSEGV handler?
You can generate a lot of VMAs that way, but you can mitigate that to a
certain level by mapping larger-than-PAGE_SIZE chunks in the fault
handler. Would that work in your use case?
--
Michal Hocko
SUSE Labs
Have you played with batching? Has it helped? Anyway, it is to be
expected that the overhead will be higher than for a single mmap call. The
question is whether you can live with it, because adding a new semantic
to mlock sounds trickier and MAP_LOCKED is tricky enough already...
--
Michal Hocko
SUSE Labs
.
Signed-off-by: Eric B Munson emun...@akamai.com
Cc: Michal Hocko mho...@suse.cz
Cc: linux-al...@vger.kernel.org
Cc: linux-ker...@vger.kernel.org
Cc: linux-m...@linux-mips.org
Cc: linux-par...@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: sparcli...@vger.kernel.org
Cc: linux-xte
On Mon 22-06-15 10:18:06, Eric B Munson wrote:
On Mon, 22 Jun 2015, Michal Hocko wrote:
On Fri 19-06-15 12:43:33, Eric B Munson wrote:
[...]
Are you objecting to the addition of the VMA flag VM_LOCKONFAULT, or the
new MAP_LOCKONFAULT flag (or both)?
I thought the MAP_FAULTPOPULATE
On Tue 23-06-15 14:45:17, Vlastimil Babka wrote:
On 06/22/2015 04:18 PM, Eric B Munson wrote:
On Mon, 22 Jun 2015, Michal Hocko wrote:
On Fri 19-06-15 12:43:33, Eric B Munson wrote:
[...]
My thought on detecting was that someone might want to know if they had
a VMA that was VM_LOCKED
On Thu 18-06-15 16:30:48, Eric B Munson wrote:
On Thu, 18 Jun 2015, Michal Hocko wrote:
[...]
Wouldn't it be much more reasonable and straightforward to have
MAP_FAULTPOPULATE as a counterpart for MAP_POPULATE which would
explicitly disallow any form of pre-faulting? It would be usable
On Fri 19-06-15 12:43:33, Eric B Munson wrote:
On Fri, 19 Jun 2015, Michal Hocko wrote:
On Thu 18-06-15 16:30:48, Eric B Munson wrote:
On Thu, 18 Jun 2015, Michal Hocko wrote:
[...]
Wouldn't it be much more reasonable and straightforward to have
MAP_FAULTPOPULATE as a counterpart
of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
--
Michal Hocko
SUSE Labs
Bonzini pbonz...@redhat.com
Cc: Thomas Gleixner t...@linutronix.de
Cc: Ingo Molnar mi...@redhat.com
Cc: H. Peter Anvin h...@zytor.com
Cc: Cliff Whickman c...@sgi.com
Acked-by: Robin Holt robinmh...@gmail.com
Acked-by: Michal Hocko mho...@suse.com
---
v3: fixed the slob part (Christoph
On Thu 20-08-15 16:14:34, Michal Hocko wrote:
On Thu 20-08-15 13:43:21, Vlastimil Babka wrote:
Perform the same debug checks in alloc_pages_node() as are done in
__alloc_pages_node(), by making the former function a wrapper of the latter
one.
In addition to better diagnostics
On Tue 28-07-15 09:49:42, Eric B Munson wrote:
On Tue, 28 Jul 2015, Michal Hocko wrote:
[I am sorry but I didn't get to this sooner.]
On Mon 27-07-15 10:54:09, Eric B Munson wrote:
Now that VM_LOCKONFAULT is a modifier to VM_LOCKED and
cannot be specified independently, it might
combinations sound weird to me.
Anyway munlock with flags opens new doors of trickiness.
--
Michal Hocko
SUSE Labs
it useful.
Looks good to me
Acked-by: Michal Hocko mho...@suse.com
Signed-off-by: Eric B Munson emun...@akamai.com
Acked-by: Vlastimil Babka vba...@suse.cz
Cc: Michal Hocko mho...@suse.cz
Cc: Vlastimil Babka vba...@suse.cz
Cc: Heiko Carstens heiko.carst...@de.ibm.com
Cc: Geert Uytterhoeven ge
On Fri 28-08-15 16:31:30, Michal Hocko wrote:
On Wed 26-08-15 14:24:23, Eric B Munson wrote:
The previous patch introduced a flag that specified pages in a VMA
should be placed on the unevictable LRU, but they should not be made
present when the area is created. This patch adds the ability
Cc: sparcli...@vger.kernel.org
Cc: linux-xte...@linux-xtensa.org
Cc: linux-a...@vger.kernel.org
Cc: linux-...@vger.kernel.org
Cc: linux...@kvack.org
I haven't checked the arch specific bits but the core part looks good to
me.
Acked-by: Michal Hocko mho...@suse.com
---
arch/alpha/include
details look for comment in __pte_alloc().
> + */
> + smp_wmb();
> +
what is the pairing memory barrier?
> spin_lock(&mm->page_table_lock);
> #ifdef CONFIG_PPC_FSL_BOOK3E
> /*
> --
> 1.8.3.1
--
Michal Hocko
SUSE Labs
On Wed 06-04-16 15:39:17, Aneesh Kumar K.V wrote:
> Michal Hocko <mho...@kernel.org> writes:
>
> > [ text/plain ]
> > On Tue 05-04-16 12:05:47, Sukadev Bhattiprolu wrote:
> > [...]
> >> diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpa
_HUGE_MASK flag sounds more appropriate than the other one
> in the context. Hence change it back.
Yes, mixing SHM_HUGE_MASK with MAP_HUGE_SHIFT is not only misleading,
it might bite us later should either of the two change.
>
> Signed-off-by: Anshuman Khandual <khand...@linux.vnet.ibm
sistently. Feel free to add to all patches
Acked-by: Michal Hocko <mho...@suse.com>
On a side note, I have received patches with broken threading - the
follow-up patches are not in a single thread under this cover email.
I thought this was the default behavior of git send-email but mayb
On Thu 19-05-16 11:07:09, Arnd Bergmann wrote:
[...]
> > 6 mm/page_alloc.c:3651:6: warning: 'compact_result' may be used
> > uninitialized in this function [-Wmaybe-uninitialized]
>
> I'm surprised this one is still there, I sent a patch but Michal Hocko came
> u
8
> *15:34:57* [ 862.549193] ---[ end trace fcc50906d9164c56 ]---
> *15:34:57* [ 862.550562]
> *15:35:18* [ 883.551577] INFO: rcu_sched self-detected stall on CPU
> *15:35:18* [ 883.551578] INFO: rcu_sched self-detected stall on CPU
> *15:35:18* [ 883.551588]
On Fri 24-02-17 15:10:29, Vitaly Kuznetsov wrote:
> Michal Hocko <mho...@kernel.org> writes:
>
> > On Thu 23-02-17 19:14:27, Vitaly Kuznetsov wrote:
[...]
> >> Virtual guests under stress were getting into OOM easily and the OOM
> >> killer was even killi
On Thu 23-02-17 19:14:27, Vitaly Kuznetsov wrote:
> Michal Hocko <mho...@kernel.org> writes:
>
> > On Thu 23-02-17 17:36:38, Vitaly Kuznetsov wrote:
> >> Michal Hocko <mho...@kernel.org> writes:
> > [...]
> >> > Is a grow from 256M -> 128GB re
On Fri 24-02-17 16:05:18, Vitaly Kuznetsov wrote:
> Michal Hocko <mho...@kernel.org> writes:
>
> > On Fri 24-02-17 15:10:29, Vitaly Kuznetsov wrote:
[...]
> >> Just did a quick (and probably dirty) test, increasing guest memory from
> >> 4G to 8G (32 x 128m
On Fri 24-02-17 17:40:25, Vitaly Kuznetsov wrote:
> Michal Hocko <mho...@kernel.org> writes:
>
> > On Fri 24-02-17 17:09:13, Vitaly Kuznetsov wrote:
[...]
> >> While this will most probably work for me I still disagree with the
> >> concept of 'one size f
On Fri 24-02-17 17:09:13, Vitaly Kuznetsov wrote:
> Michal Hocko <mho...@kernel.org> writes:
>
> > On Fri 24-02-17 16:05:18, Vitaly Kuznetsov wrote:
> >> Michal Hocko <mho...@kernel.org> writes:
> >>
> >> > On Fri 24-02-17 15:10:29, Vita
On Thu 23-02-17 16:49:06, Vitaly Kuznetsov wrote:
> Michal Hocko <mho...@kernel.org> writes:
>
> > On Thu 23-02-17 14:31:24, Vitaly Kuznetsov wrote:
> >> Michal Hocko <mho...@kernel.org> writes:
> >>
> >>
ks
> continuously refused to add this udev rule to udev calling it stupid as
> it actually is an unconditional and redundant ping-pong between kernel
> and udev.
This is a policy and as such it doesn't belong in the kernel. The whole
auto-enable in the kernel is just plain wrong IMHO and we shouldn't have
merged it.
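For reference, the udev rule being debated is usually written along these lines (exact match keys vary between distributions):

```
SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online"
```

Each hot-added memory block triggers a uevent, udev writes "online" back to the block's sysfs state attribute, and the kernel onlines it - the unconditional kernel/udev ping-pong the quoted text complains about.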
--
Michal Hocko
SUSE Labs
On Thu 23-02-17 14:31:24, Vitaly Kuznetsov wrote:
> Michal Hocko <mho...@kernel.org> writes:
>
> > On Wed 22-02-17 10:32:34, Vitaly Kuznetsov wrote:
> > [...]
> >> > There is a workaround in that a user could online the memory or have
> >> > a u
On Thu 23-02-17 17:36:38, Vitaly Kuznetsov wrote:
> Michal Hocko <mho...@kernel.org> writes:
[...]
> > Is a grow from 256M -> 128GB really something that happens in real life?
> > Don't get me wrong but to me this sounds quite exaggerated. Hotmem add
> > which is an o
ed vs. present checks will be quite
subtle and it is not entirely clear when to use which one. I agree that
the reclaim path is the most critical one so the patch seems OK to me.
At least from a quick glance it should help with the reported issue so
feel free to add
Acked-by: Michal Hocko <mho
m hashes.
So I think that this is just a fallout from how fadump is hackish and
tricky. Reserving a large portion, even the majority, of memory from the kernel
just sounds like a minefield. This patchset is dealing with one particular
problem. Fair enough, it seems like the easiest way to go and something
that wou
t;per_cpu_nodestats;
>
> + if (!p)
> + continue;
> for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++) {
> int v;
>
> @@ -748,6 +758,8 @@ void cpu_vm_stats_fold(int cpu)
> for_each_online_pgdat(pgdat) {
>
On Mon 03-10-16 14:47:16, Michal Hocko wrote:
> [Sorry I have only now noticed this email]
>
> On Thu 04-08-16 16:44:10, Paul Mackerras wrote:
[...]
> > [1.717648] Call Trace:
> > [1.717687] [c00ff0707b80] [c0270d08]
> > refresh_zone_stat_thresh
would require
FOLL_FORCE for access_remote_vm? I mean FOLL_FORCE is a really
non-trivial thing. It doesn't obey vma permissions so we should really
minimize its usage. Do all of those users really need FOLL_FORCE?
Anyway I would rather see the flag explicit and used at more places than
hidden behind a helper function.
--
Michal Hocko
SUSE Labs
On Wed 19-10-16 09:40:45, Lorenzo Stoakes wrote:
> On Wed, Oct 19, 2016 at 10:13:52AM +0200, Michal Hocko wrote:
> > On Wed 19-10-16 09:59:03, Jan Kara wrote:
> > > On Thu 13-10-16 01:20:18, Lorenzo Stoakes wrote:
> > > > This patch removes the write par
On Wed 19-10-16 09:58:15, Lorenzo Stoakes wrote:
> On Tue, Oct 18, 2016 at 05:30:50PM +0200, Michal Hocko wrote:
> > I am wondering whether we can go further. E.g. it is not really clear to
> > me whether we need an explicit FOLL_REMOTE when we can in fact check
> > mm !=
On Wed 19-10-16 10:06:46, Lorenzo Stoakes wrote:
> On Wed, Oct 19, 2016 at 10:52:05AM +0200, Michal Hocko wrote:
> > yes this is the desirable and expected behavior.
> >
> > > wonder if this is desirable behaviour or whether this ought to be limited
> > > to
>
_FORCE users was always a nightmare
and the flag behavior is really subtle so we should better be explicit
about it. I haven't gone through each patch separately but rather
applied the whole series and checked the resulting diff. This all seems
OK to me and feel free to add
Acked-by: Michal Hocko <mho
On Wed 19-10-16 10:23:55, Dave Hansen wrote:
> On 10/19/2016 10:01 AM, Michal Hocko wrote:
> > The question I had earlier was whether this has to be an explicit FOLL
> > flag used by g-u-p users or we can just use it internally when mm !=
> > current->mm
>
>
On Wed 19-10-16 09:49:43, Dave Hansen wrote:
> On 10/19/2016 02:07 AM, Michal Hocko wrote:
> > On Wed 19-10-16 09:58:15, Lorenzo Stoakes wrote:
> >> On Tue, Oct 18, 2016 at 05:30:50PM +0200, Michal Hocko wrote:
> >>> I am wondering whether we can go further. E.g. it i
>
> Cc: Andrew Morton <a...@linux-foundation.org>
> Cc: Johannes Weiner <han...@cmpxchg.org>
> Cc: Michal Hocko <mho...@kernel.org>
> Cc: Vladimir Davydov <vdavydov@gmail.com>
>
> I've tested this patches under a VM with two nodes and movabl
On Wed 23-11-16 18:50:42, Balbir Singh wrote:
>
>
> On 23/11/16 18:25, Michal Hocko wrote:
> > On Wed 23-11-16 15:36:51, Balbir Singh wrote:
> >> In the absence of hotplug we use extra memory proportional to
> >> (possible_nodes - online_nodes) * num
On Wed 23-11-16 19:37:16, Balbir Singh wrote:
>
>
> On 23/11/16 19:07, Michal Hocko wrote:
> > On Wed 23-11-16 18:50:42, Balbir Singh wrote:
> >>
> >>
> >> On 23/11/16 18:25, Michal Hocko wrote:
> >>> On Wed 23-11-16 15:36:51, Balbir Sing
On Thu 24-11-16 00:05:12, Balbir Singh wrote:
>
>
> On 23/11/16 20:28, Michal Hocko wrote:
[...]
> > I am more worried about synchronization with the hotplug which tends to
> > be a PITA in places were we were simply safe by definition until now. We
> > do not have a
data structures.
Thanks!
--
Michal Hocko
SUSE Labs
ring that some HW might behave strangely and this would be rather
> >hard to debug I would be tempted to mark this for stable. It should also
> >be merged separately from the rest of the series.
> >
> >I have just one nit below
> >Acked-by: Michal Hocko <mho...@suse.com>
>
On Fri 11-08-17 11:58:46, Pasha Tatashin wrote:
> On 08/11/2017 08:39 AM, Michal Hocko wrote:
> >On Mon 07-08-17 16:38:41, Pavel Tatashin wrote:
> >>A new variant of memblock_virt_alloc_* allocations:
> >>memblock_virt_alloc_try_nid_raw()
> >>
edious than simply
adding one option to the kernel command line.
--
Michal Hocko
SUSE Labs
}
-
__init_single_page(page, pfn, zid, nid);
if (!free_base_page) {
free_base_page = page;
--
Michal Hocko
SUSE Labs
1 +---
> mm/sparse-vmemmap.c | 10 ++-
> mm/sparse.c | 6 +-
> 14 files changed, 356 insertions(+), 88 deletions(-)
>
> --
> 2.14.0
--
Michal Hocko
SUSE Labs
h/x86/kernel/setup.c
> @@ -790,7 +790,10 @@ early_param("reservelow", parse_reservelow);
>
> static void __init trim_low_memory_range(void)
> {
> - memblock_reserve(0, ALIGN(reserve_low, PAGE_SIZE));
> + unsigned long min_pfn = find_min_pfn_with_active_regions();
> + phys_addr_t base = min_pfn << PAGE_SHIFT;
> +
> + memblock_reserve(base, ALIGN(reserve_low, PAGE_SIZE));
> }
>
> /*
> --
> 2.14.0
--
Michal Hocko
SUSE Labs
d this would be rather
hard to debug I would be tempted to mark this for stable. It should also
be merged separately from the rest of the series.
I have just one nit below
Acked-by: Michal Hocko <mho...@suse.com>
[...]
> diff --git a/mm/memblock.c b/mm/memblock.c
> index 2cb25fe4452c..bf14ae
_end_pfn);
> if (!pfn_valid_within(pfn))
> goto free_range;
>
> @@ -1524,7 +1529,11 @@ static int __init deferred_init_memmap(void *data)
> cond_resched();
> }
>
> - if (page->flags) {
> + /*
> + * Check if this page has already been initialized due
> + * to being reserved during boot in memblock.
> + */
> + if (pfn >= resv_start_pfn) {
> VM_BUG_ON(page_zone(page) != zone);
> goto free_range;
> }
> --
> 2.14.0
--
Michal Hocko
SUSE Labs
ed, and free_all_bootmem() initializes all the reserved
> + * deferred pages for us.
> + */
> + register_page_bootmem_info();
> +
> /* Register memory areas for /proc/kcore */
> kclist_add(_vsyscall, (void *)VSYSCALL_ADDR,
>PAGE_SIZE, KCORE_OTHER);
> --
> 2.14.0
--
Michal Hocko
SUSE Labs
Sistare <steven.sist...@oracle.com>
> Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com>
> Reviewed-by: Bob Picco <bob.pi...@oracle.com>
other than that
Acked-by: Michal Hocko <mho...@suse.com>
> ---
> include/linux/bo
cal page numbers. However, mem_map only begins to
> record
> * per-page information starting at pfn_base. This is to handle systems
> where
> * the first physical page in the machine is at some huge physical address,
> --
> 2.14.0
--
Michal Hocko
SUSE Labs
by: Daniel Jordan <daniel.m.jor...@oracle.com>
> Reviewed-by: Bob Picco <bob.pi...@oracle.com>
OK, but as mentioned in the previous patch add memblock_virt_alloc_raw
in this patch.
Acked-by: Michal Hocko <mho...@suse.com>
> ---
> mm/page_alloc.c | 15 +++
.@oracle.com>
> Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
> Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com>
> Reviewed-by: Bob Picco <bob.pi...@oracle.com>
After the relevant information is added feel free add
Acked-by: Michal Hocko <mho.
IZE, __pa(MAX_DMA_ADDRESS),
> + BOOTMEM_ALLOC_ACCESSIBLE, nodeid);
> if (map) {
> for (pnum = pnum_begin; pnum < pnum_end; pnum++) {
> if (!present_section_nr(pnum))
> --
> 2.14.0
--
Michal Hocko
SUSE Labs
eturn memblock_virt_alloc_internal(size, align,
> - min_addr, max_addr, nid);
> + ptr = memblock_virt_alloc_internal(size, align,
> +min_addr, max_addr, nid);
> +#ifdef CONFIG_DEBUG_VM
> + if (ptr &&a
On Fri 11-08-17 11:13:07, Pasha Tatashin wrote:
> On 08/11/2017 03:58 AM, Michal Hocko wrote:
> >[I am sorry I didn't get to your previous versions]
>
> Thank you for reviewing this work. I will address your comments, and
> send-out a new patches.
>
> >>
> &
en we were setting reserved
> flags to struct page for PFN 0 in which was never initialized through
> __init_single_page(). The reason they were triggered is because we set all
> uninitialized memory to ones in one of the debug patches.
And why don't we need the same treatment for other architectures?
--
Michal Hocko
SUSE Labs
art. Please make it explicit in the changelog.
It is quite easy to get lost in the deep call chains.
--
Michal Hocko
SUSE Labs
in
> nobootmem header file.
This is the standard way to do this. And it is usually preferred over
proliferating ifdefs in the code.
--
Michal Hocko
SUSE Labs
change it to CONFIG_MEMBLOCK_DEBUG,
> and let users decide what other debugging configs need to be enabled, as
> this is also OK.
Actually the more I think about it the more I am convinced that a kernel
boot parameter would be better because it doesn't need the kernel to be
recompiled and it is a single branch in not so hot path.
--
Michal Hocko
SUSE Labs
On Fri 11-08-17 11:55:39, Pasha Tatashin wrote:
> On 08/11/2017 05:37 AM, Michal Hocko wrote:
> >On Mon 07-08-17 16:38:39, Pavel Tatashin wrote:
> >>In deferred_init_memmap() where all deferred struct pages are initialized
> >>we have a check like this:
&g
> Yes, they said that the problem was bisected down to this patch. Do you know
> if there is a way to submit a patch to this test robot?
You can ask them for retesting with an updated patch by replying to
their report. Anyway I fail to see how the change could lead to this
failure.
--
Michal Hocko
SUSE Labs
m("reservelow", parse_reservelow);
>
> static void __init trim_low_memory_range(void)
> {
> - memblock_reserve(0, ALIGN(reserve_low, PAGE_SIZE));
> + unsigned long min_pfn = find_min_pfn_with_active_regions();
> + phys_addr_t base = min_pfn << PAGE_SHIFT;
> +
> + memblock_reserve(base, ALIGN(reserve_low, PAGE_SIZE));
> }
>
> /*
> --
> 2.14.0
--
Michal Hocko
SUSE Labs
st for other reasons then just
update that as well. But nothing really earth shattering.
--
Michal Hocko
SUSE Labs
On Tue 11-07-17 12:32:57, Ram Pai wrote:
> On Tue, Jul 11, 2017 at 04:52:46PM +0200, Michal Hocko wrote:
> > On Wed 05-07-17 14:21:37, Ram Pai wrote:
> > > Memory protection keys enable applications to protect their
> > > address space from inadvertent access or c
On Wed 12-07-17 09:23:37, Michal Hocko wrote:
> On Tue 11-07-17 12:32:57, Ram Pai wrote:
[...]
> > Ideally the MMU looks at the PTE for keys, in order to enforce
> > protection. This is the case with x86 and is the case with power9 Radix
> > page table. Hence the keys
o
you need to store anything in the pte? My understanding of PKEYs is that
the setup and teardown should be very cheap and so no page tables have
to be updated. Or do I just misunderstand what you wrote here?
--
Michal Hocko
SUSE Labs
On Thu 13-07-17 08:53:52, Benjamin Herrenschmidt wrote:
> On Wed, 2017-07-12 at 09:23 +0200, Michal Hocko wrote:
> >
> > >
> > > Ideally the MMU looks at the PTE for keys, in order to enforce
> > > protection. This is the case with x86 and is the case with
se here the pmd_trans_huge
> - * and pmd_trans_splitting must remain set at all times on the pmd
> - * until the split is complete for this pmd), then we flush the SMP TLB
> - * and finally we write the non-huge version of the pmd entry with
> - * pmd_populate.
> - */
> - old = pmdp_invalidate(vma, haddr, pmd);
> -
> - /*
> - * Transfer dirty bit using value returned by pmd_invalidate() to be
> - * sure we don't race with CPU that can set the bit under us.
> - */
> - if (pmd_dirty(old))
> - SetPageDirty(page);
> -
> pmd_populate(mm, pmd, pgtable);
>
> if (freeze) {
> --
> 2.13.3
--
Michal Hocko
SUSE Labs
a parallel THP split work as expected.
>*/
> serialize_against_pte_lookup(vma->vm_mm);
> + return __pmd(old_pmd);
> }
>
> static pmd_t pmd_set_protbits(pmd_t pmd, pgprot_t pgprot)
> --
> 2.13.3
--
Michal Hocko
SUSE Labs
ed long address, pmd_t *pmdp)
> {
> --
> 2.13.3
--
Michal Hocko
SUSE Labs
of size %ld\n", huge_page_size(h));
> return 0;
>
> found:
> --
> 2.13.3
--
Michal Hocko
SUSE Labs
On Thu 27-07-17 21:27:37, Aneesh Kumar K.V wrote:
>
>
> On 07/27/2017 06:27 PM, Michal Hocko wrote:
> >On Thu 27-07-17 14:07:56, Aneesh Kumar K.V wrote:
> >>Instead of marking the pmd ready for split, invalidate the pmd. This should
> >>take care of powerpc req
On Thu 27-07-17 21:50:35, Aneesh Kumar K.V wrote:
>
>
> On 07/27/2017 06:31 PM, Michal Hocko wrote:
> >On Thu 27-07-17 11:48:26, Aneesh Kumar K.V wrote:
> >>For ppc64, we want to call this function when we are not running as guest.
> >
> >What does this
On Thu 27-07-17 21:18:35, Aneesh Kumar K.V wrote:
>
>
> On 07/27/2017 06:24 PM, Michal Hocko wrote:
> >EMISSING_CHANGELOG
> >
> >besides that no user actually uses the return value. Please fold this
> >into the patch which uses the new functionality.
>
>
rved)
>
> This is exactly what we need here. So, I will update this patch to use this
> iterator, which will simplify it.
Please have a look at
http://lkml.kernel.org/r/20170815093306.gc29...@dhcp22.suse.cz
I believe we can simply drop the check altogether.
--
Michal Hocko
SUSE Labs
On Wed 10-05-17 11:19:43, David S. Miller wrote:
> From: Michal Hocko <mho...@kernel.org>
> Date: Wed, 10 May 2017 16:57:26 +0200
>
> > Have you measured that? I do not think it would be super hard to
> > measure. I would be quite surprised if this added much if anyth
unsigned long align,
unsigned long goal)
{
- return memblock_virt_alloc_try_nid(size, align, goal,
+ return memblock_virt_alloc_core(size, align, goal,
BOOTMEM_ALLOC_ACCESSIBLE, node);
}
--
Michal Hocko
SUSE Labs
erence count and other struct members. Almost nobody should be
looking at our page at this time and stealing the cache line. On the
other hand a large memcpy will basically wipe everything away from the
cpu cache. Or am I missing something?
--
Michal Hocko
SUSE Labs
worried then make it opt-in and make
it depend on ARCH_WANT_PER_PAGE_INIT and make it enabled for x86 and
sparc after memset optimization.
--
Michal Hocko
SUSE Labs