be overwritten by architectures, and already
handles special cases such as hugepd entries.
[1]
https://lore.kernel.org/linux-mm/cover.1661240170.git.baolin.w...@linux.alibaba.com/
Suggested-by: David Hildenbrand
Signed-off-by: Mike Kravetz
---
v5 -Remove left over hugetlb_vma_unlock_read
v4
be overwritten by architectures, and already
handles special cases such as hugepd entries.
[1]
https://lore.kernel.org/linux-mm/cover.1661240170.git.baolin.w...@linux.alibaba.com/
Suggested-by: David Hildenbrand
Signed-off-by: Mike Kravetz
---
v4 -Remove vma (pmd sharing) locking as this can
On 10/27/22 15:34, Peter Xu wrote:
> On Wed, Oct 26, 2022 at 05:34:04PM -0700, Mike Kravetz wrote:
> > On 10/26/22 17:59, Peter Xu wrote:
>
> If we want to use the vma read lock to protect here as the slow gup path,
> then please check again with below [1] - I think we'll al
On 10/26/22 17:59, Peter Xu wrote:
> Hi, Mike,
>
> On Sun, Sep 18, 2022 at 07:13:48PM -0700, Mike Kravetz wrote:
> > +struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
> > + unsigned long address, unsigned int flags)
> >
w that as well. It'd be easier to just limit the hugetlbfs max
> blocksize to 4GB. It's very unlikely anything else will use such large
> blocksizes and having to introduce new user interfaces for it doesn't sound
> right.
I was not around hugetlbfs when the decision was made to set 'blocksize =
pagesize'. However, I must say that it does seem to make sense as you
can only add or remove entire hugetlb pages from a hugetlbfs file. So,
the hugetlb page size does seem to correspond to the meaning of filesystem
blocksize.
Does any application code make use of this? I cannot guess.
--
Mike Kravetz
pdated version of the patch was posted here,
https://lore.kernel.org/linux-mm/20220921202702.106069-1-mike.krav...@oracle.com/
Sorry about that,
--
Mike Kravetz
>
> Kernel attempted to read user page (34) - exploit attempt? (uid: 0)
> BUG: Kernel NULL pointer dereference on read at 0x
be overwritten by architectures, and already
handles special cases such as hugepd entries.
[1]
https://lore.kernel.org/linux-mm/cover.1661240170.git.baolin.w...@linux.alibaba.com/
Suggested-by: David Hildenbrand
Signed-off-by: Mike Kravetz
---
v3 -Change WARN_ON_ONCE() to BUILD_BUG
On September 14, 2022 10:43:52 AM GMT+01:00, Christophe Leroy
wrote:
>
>
>On 14/09/2022 at 11:32, Mike Rapoport wrote:
>> On Tue, Sep 13, 2022 at 02:36:13PM +0200, Christophe Leroy wrote:
>>>
>>>
>>> Le 13/09/2022 à 08:11, Christophe Leroy a éc
/* Ignore complete lowmem entries */
if (end <= max_low)
continue;
/* Truncate partial highmem entries */
if (start < max_low)
start = max_low;
for (; start < end; start++)
free_highmem_page(pfn_to_page(start));
}
#endif
}
> Thanks
> Christophe
>
--
Sincerely yours,
Mike.
On Sat, Sep 10, 2022 at 09:39:20AM +, Christophe Leroy wrote:
> + Adding Mike who might help if the problem is around memblock.
>
> On 08/09/2022 at 22:17, Pali Rohár wrote:
> > On Thursday 08 September 2022 17:35:11 Pali Rohár wrote:
> >> On Thursday 08 September
On 09/05/22 06:34, Christophe Leroy wrote:
>
>
> > On 02/09/2022 at 21:03, Mike Kravetz wrote:
> > diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> > index fe4944f89d34..275e554dd365 100644
> > --- a/include/linux/hugetlb.h
> > +++ b/include/linu
be overwritten by architectures, and already
handles special cases such as hugepd entries.
[1]
https://lore.kernel.org/linux-mm/cover.1661240170.git.baolin.w...@linux.alibaba.com/
Suggested-by: David Hildenbrand
Signed-off-by: Mike Kravetz
---
v2 -Added WARN_ON_ONCE() and updated comment
On 08/19/22 21:22, Michael Ellerman wrote:
> Mike Kravetz writes:
> > On 08/16/22 22:43, Andrew Morton wrote:
> >> On Wed, 17 Aug 2022 03:31:37 + "Wang, Haiyue"
> >> wrote:
> >>
> >> > > > }
> >> >
On Thu, Jun 23, 2022 at 10:56:57AM +0200, Christophe Leroy wrote:
> Rewrite p4d_populate() as a static inline function instead of
> a macro.
>
> This change allows typechecking and would have helped detecting
> a recently found bug in map_kernel_page().
>
> Cc: Mike R
es
> in order to avoid any confusion.
>
> Fixes: 2fb4706057bc ("powerpc: add support for folded p4d page tables")
> Cc: sta...@vger.kernel.org
> Cc: Mike Rapoport
> Signed-off-by: Christophe Leroy
Acked-by: Mike Rapoport
> ---
> arch/powerpc/mm/nohash/boo
le entries with hugetlb
- hugetlb pmds can be shared instead of copied
In any case, completely eliminating the copy at fork time should speed
things up.
Signed-off-by: Mike Kravetz
Acked-by: Muchun Song
Acked-by: David Hildenbrand
---
mm/memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
tags.
Baolin Wang (1):
arm64/hugetlb: Implement arm64 specific hugetlb_mask_last_page
Mike Kravetz (3):
hugetlb: skip to end of PT page mapping when pte not present
hugetlb: do not update address in huge_pmd_unshare
hugetlb: Lazy page table copies in fork()
arch/arm64/mm/hugetl
hugetlb_mask_last_page to update address if pmd is unshared.
Signed-off-by: Mike Kravetz
Acked-by: Muchun Song
Reviewed-by: Baolin Wang
---
include/linux/hugetlb.h | 4 ++--
mm/hugetlb.c | 46 +
mm/rmap.c | 4 ++--
3 files changed, 23 insertions
an ARM64 specific hugetlb_mask_last_page() to help this case.
[1]
https://lore.kernel.org/linux-mm/20220527225849.284839-1-mike.krav...@oracle.com/
Signed-off-by: Baolin Wang
Signed-off-by: Mike Kravetz
Acked-by: Muchun Song
---
arch/arm64/mm/hugetlbpage.c | 20
1 file
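The skip that hugetlb_mask_last_page() enables can be sketched in userspace. This is an illustrative model, not the kernel code: the sizes assume 4K base pages with 2M PMD-size huge pages, and the names mask_last_page() and skip_to_pt_page_end() are invented for this example.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model, not kernel code: with 4K base pages, one PMD page
 * maps PTRS_PER_PMD * 2M = 1G.  When huge_pte_offset() finds no entry,
 * the walk can jump past everything that PMD page maps instead of
 * stepping one 2M huge page at a time. */
#define HPAGE_SIZE   (2UL << 20)                  /* 2M PMD-size huge page */
#define PTRS_PER_PMD 512UL
#define PUD_SIZE     (PTRS_PER_PMD * HPAGE_SIZE)  /* 1G mapped by one PMD page */

/* Models hugetlb_mask_last_page(): OR-ing this into an address yields
 * the last huge page that the current PMD page maps. */
static uint64_t mask_last_page(void)
{
	return PUD_SIZE - HPAGE_SIZE;
}

/* One iteration of the walk's skip: land on the first address of the
 * next PMD page. */
static uint64_t skip_to_pt_page_end(uint64_t address)
{
	address |= mask_last_page();   /* last huge page in this PMD page */
	return address + HPAGE_SIZE;   /* next PMD page boundary */
}
```

Starting anywhere inside a 1G region, one step lands on the next 1G boundary, which is the saving the series is after.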
.
Signed-off-by: Mike Kravetz
Tested-by: Baolin Wang
Reviewed-by: Baolin Wang
Acked-by: Muchun Song
Reported-by: kernel test robot
---
include/linux/hugetlb.h | 1 +
mm/hugetlb.c | 56 +
2 files changed, 52 insertions(+), 5 deletions(-)
diff --git
On 06/17/22 19:26, kernel test robot wrote:
> Hi Mike,
>
> I love your patch! Yet something to improve:
>
> [auto build test ERROR on soc/for-next]
> [also build test ERROR on linus/master v5.19-rc2 next-20220617]
> [cannot apply to arm64/for-next/core arm/for-next kvmar
On 06/17/22 10:15, Peter Xu wrote:
> Hi, Mike,
>
> On Thu, Jun 16, 2022 at 02:05:15PM -0700, Mike Kravetz wrote:
> > @@ -6877,6 +6896,39 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
> > return (pte_t *)pmd;
> > }
> >
> > +/*
> > + * Return
hugetlb_mask_last_page to update address if pmd is unshared.
Signed-off-by: Mike Kravetz
Acked-by: Muchun Song
Reviewed-by: Baolin Wang
---
include/linux/hugetlb.h | 4 ++--
mm/hugetlb.c | 47 ++---
mm/rmap.c | 4 ++--
3 files changed, 24 insertions
le entries with hugetlb
- hugetlb pmds can be shared instead of copied
In any case, completely eliminating the copy at fork time should speed
things up.
Signed-off-by: Mike Kravetz
Acked-by: Muchun Song
Acked-by: David Hildenbrand
---
mm/memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletio
an ARM64 specific hugetlb_mask_last_page() to help this case.
[1]
https://lore.kernel.org/linux-mm/20220527225849.284839-1-mike.krav...@oracle.com/
Signed-off-by: Baolin Wang
Signed-off-by: Mike Kravetz
---
arch/arm64/mm/hugetlbpage.c | 20
1 file changed, 20 insertions
.
Signed-off-by: Mike Kravetz
Tested-by: Baolin Wang
Reviewed-by: Baolin Wang
---
include/linux/hugetlb.h | 1 +
mm/hugetlb.c | 62 +
2 files changed, 58 insertions(+), 5 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
4 specific hugetlb_mask_last_page
Mike Kravetz (3):
hugetlb: skip to end of PT page mapping when pte not present
hugetlb: do not update address in huge_pmd_unshare
hugetlb: Lazy page table copies in fork()
arch/arm64/mm/hugetlbpage.c | 20 +++
include/linux/hugetlb.h | 5 +-
mm
ld not easily be added to the first
if (folio_test_hugetlb(folio)) block in this routine. However, it
is fine to add here.
Looks good. Thanks for all these changes,
Reviewed-by: Mike Kravetz
--
Mike Kravetz
d-off-by: Baolin Wang
> ---
> mm/rmap.c | 24 ++--
> 1 file changed, 18 insertions(+), 6 deletions(-)
With the addition of !CONFIG_HUGETLB_PAGE stubs,
Reviewed-by: Mike Kravetz
--
Mike Kravetz
>>>
>>>>> So this is a preparation patch, which changes the
>>>>> huge_ptep_clear_flush()
>>>>> to return the original pte to help to nuke a hugetlb page table.
>>>>>
>>>>> Signed-off-by: Baolin Wang
>>>>>
On 5/5/22 20:39, Baolin Wang wrote:
>
> On 5/6/2022 7:53 AM, Mike Kravetz wrote:
>> On 4/29/22 01:14, Baolin Wang wrote:
>>> On some architectures (like ARM64), it can support CONT-PTE/PMD size
>>> hugetlb, which means it can support not only PMD/PUD size hugetlb:
>
eval be overwritten here with
>>> pteval = swp_entry_to_pte(make_hwpoison_entry(subpage))?
>>> IOW, what sense does it make to save the returned pteval from
>>> huge_ptep_clear_flush(), when it is never being used anywhere?
>>
>> Please see previous code, we'll use the original pte value to check if
>> it is uffd-wp armed, and if need to mark it dirty though the hugetlbfs
>> is set noop_dirty_folio().
>>
>> pte_install_uffd_wp_if_needed(vma, address, pvmw.pte, pteval);
>
> Uh, ok, that wouldn't work on s390, but we also don't have
> CONFIG_PTE_MARKER_UFFD_WP / HAVE_ARCH_USERFAULTFD_WP set, so
> I guess we will be fine (for now).
>
> Still, I find it a bit unsettling that pte_install_uffd_wp_if_needed()
> would work on a potential hugetlb *pte, directly de-referencing it
> instead of using huge_ptep_get().
>
> The !pte_none(*pte) check at the beginning would be broken in the
> hugetlb case for s390 (not sure about other archs, but I think s390
> might be the only exception strictly requiring huge_ptep_get()
> for de-referencing hugetlb *pte pointers).
>
Adding Peter Xu mostly for the above as he is working on uffd_wp.
>>
>> /* Set the dirty flag on the folio now the pte is gone. */
>> if (pte_dirty(pteval))
>> folio_mark_dirty(folio);
>
> Ok, that should work fine, huge_ptep_clear_flush() will return
> a pteval properly de-referenced and converted with huge_ptep_get(),
> and that would contain the hugetlb pmd/pud dirty information.
>
--
Mike Kravetz
erhaps add
a VM_BUG_ON() to make sure the passed huge page is poisoned? This
would be in the same 'if block' where we call
adjust_range_if_pmd_sharing_possible.
--
Mike Kravetz
> which means now we will unmap only one pte entry for a CONT-PTE or
> CONT-PMD size poisoned hugetlb page,
set_pte_at(mm, address, pvmw.pte, pteval);
> + if (folio_test_hugetlb(folio))
> + set_huge_pte_at(mm, address, pvmw.pte,
> pteval);
And, we will use that pteval for ALL the PTE/PMDs here. So, we would set
the dirty or young bit in ALL PTE/PMDs.
Could that cause any issues? Maybe more of a question for the arm64 people.
--
Mike Kravetz
ush() today is hugetlb_cow/wp() in
mm/hugetlb.c. Any reason why you did not change that code? At least
cast the return of huge_ptep_clear_flush() to void with a comment?
Not absolutely necessary.
Acked-by: Mike Kravetz
--
Mike Kravetz
g this morning, it works for me, see below
> >
> >>
> >> On 23/03/2022 21:06, Mike Rapoport wrote:
> >>> Hi Catalin,
> >>>
> >>> On Wed, Mar 23, 2022 at 05:22:38PM +, Catalin Marinas wrote:
> >>>> Hi Ariel,
> >>
ot a subset of the NODE domain
>
> Fixes: 09f49dca570a ("mm: handle uninitialized numa nodes gracefully")
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: linux...@kvack.org
> Cc: Michal Hocko
> Cc: Michael Ellerman
> Reported-by: Geetika Moolchandani
> Signed-off-by: S
ould enable kmemleak for KASAN on arm and arm64 that AFAIR
caused OOM in kmemleak and it also will limit tracking only to allocations
that do not specify 'end' explicitly ;-)
> /*
>* The min_count is set to 0 so that memblock allocated
>
ee if the loop found anything,
>but in that case it should be something like
>
>if (list_entry_is_head(entry, head, member)) {
>return with error;
>}
>do_something_with(entry);
>
>Suffice? The list_entry_is_head() macro is designed to cope with the
>bogus entry on head problem.
Won't suffice because the end goal of this work is to limit the scope of entry
to the loop only. Hence the need for an additional variable.
Besides, there are no guarantees that people won't do_something_with(entry)
without the check or won't compare entry to NULL to check if the loop finished
with break or not.
>James
--
Sincerely yours,
Mike
On Fri, Feb 11, 2022 at 11:41:30AM -0500, Zi Yan wrote:
> From: Zi Yan
>
> has_unmovable_pages() is only used in mm/page_isolation.c. Move it from
> mm/page_alloc.c and make it static.
>
> Signed-off-by: Zi Yan
> Reviewed-by: Oscar Salvador
Reviewed-by: Mike Rapoport
&
Hi Aneesh,
On Fri, Feb 11, 2022 at 05:22:13PM +0530, Aneesh Kumar K V wrote:
> On 2/11/22 16:03, Mike Rapoport wrote:
> > On Fri, Feb 11, 2022 at 12:03:28PM +0530, Aneesh Kumar K.V wrote:
> > > Keep it simple by using a #define and limiting hugepage size to 2M.
> > >
16
> +#else
> +#define PAGE_SHIFT 12
> +#endif
> +/*
> + * On ppc64 this will only work with radix 2M hugepage size
> + */
> #define HPAGE_SHIFT 21
>
> #define PAGE_SIZE (1 << PAGE_SHIFT)
> --
> 2.34.1
>
>
--
Sincerely yours,
Mike.
We are seeing errors like "Error: unrecognized opcode: `ptesync'", 'dssall'
and 'stbcix' as a result of binutils changes. Unless 'stbcix'
and friends aren't as exclusively PPC6 as I've gathered from
binutils/opcode/ppc-opc.c there shouldn't be much of a problem, but I
suspect a lot more needs to be
On Fri, Feb 04, 2022 at 08:27:37AM +0530, Anshuman Khandual wrote:
>
> On 2/3/22 11:45 PM, Mike Rapoport wrote:
> > On Mon, Jan 24, 2022 at 06:26:41PM +0530, Anshuman Khandual wrote:
> >> This defines and exports a platform specific custom vm_get_page_prot()
_BUG();
> + }
> +}
> +
> +pgprot_t vm_get_page_prot(unsigned long vm_flags)
> +{
> + return __pgprot(pgprot_val(__vm_get_page_prot(vm_flags)) |
> +pgprot_val(powerpc_vm_get_page_prot(vm_flags)));
Any reason to keep powerpc_vm_get_page_prot() rather than open code it
here?
This applies to other architectures that implement arch_vm_get_page_prot()
and/or arch_filter_pgprot() as well.
> +}
> +EXPORT_SYMBOL(vm_get_page_prot);
> --
> 2.25.1
>
>
--
Sincerely yours,
Mike.
definitions. This makes huge pte creation much cleaner and easier
> to follow.
>
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: Michael Ellerman
> Cc: Paul Mackerras
> Cc: "David S. Miller"
> Cc: Mike Kravetz
> Cc: Andrew Morton
> Cc: lin
On Wed, Feb 02, 2022 at 06:25:31AM +, Christophe Leroy wrote:
>
>
> On 02/02/2022 at 07:18, Mike Rapoport wrote:
> > On Wed, Feb 02, 2022 at 11:08:06AM +0530, Anshuman Khandual wrote:
> >> Each call into pte_mkhuge() is invariably followed by arch_make_
available
> platforms definitions. This makes huge pte creation much cleaner and easier
> to follow.
Won't it break architectures that don't define arch_make_huge_pte()?
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: Michael Ellerman
> Cc: Paul Mackerras
> Cc: "D
I just made the huge mistake of hibernating and resuming; I'm going through
the process of rescue and all, thankfully I had a 2016 CD in the drive.
I'll read up once the sheer panic settles.
-Michael
On Wed, Jan 26, 2022, 21:22 John Paul Adrian Glaubitz <
glaub...@physik.fu-berlin.de> wrote:
>
> - return (unsigned long)mod->init_layout.base <= addr &&
> -addr < (unsigned long)mod->init_layout.base +
> mod->init_layout.size;
> + return within_module_layout(addr, &mod->init_layout);
> }
>
> static inline bool within_module(unsigned long addr, const struct module
> *mod)
> --
> 2.33.1
>
--
Sincerely yours,
Mike.
f
break;
#ifdef CONFIG_PPC64
case BARRIER_PTESYNC:
On Sun, 23 Jan 2022 at 15:18, Mike wrote:
>
> Maybe cite the correct parts of the patch where my questions arose for
> context.
> ---
> diff --git a/arch/powerpc/lib/sstep.c b/arch/pow
;);
break;
+#endif
}
break;
-
On Sun, 23 Jan 2022 at 14:43, Mike wrote:
>
> As some have probably noticed, we are seeing errors like ' Error:
> unrecognized opcode: `ptesync'' 'dssall' and 'stbcix' as a result of
> binutils changes, making compiling all that more fun again. Th
As some have probably noticed, we are seeing errors like ' Error:
unrecognized opcode: `ptesync'' 'dssall' and 'stbcix' as a result of
binutils changes, making compiling all that more fun again. The only
question on my mind still is this:
diff --git a/arch/powerpc/include/asm/io.h
It booted at least. I'll try your suggestions as soon as I can; I'm
progressing slower than ever, concentration is somewhat lax still.
Thanks.
Best regards
Michael
On Tue, Jan 11, 2022, 10:51 Christophe Leroy
wrote:
>
>
> > On 11/01/2022 at 10:32, Mike wrote:
> > I
is a nice cleanup IMHO. Although the "has fallback" part is a
> bit imprecise. "migratetype_is_mergable()" might be a bit clearer.
> ideally "migratetype_is_mergable_with_other_types()". Can we come up
> with a nice name for that?
migratetype_is_mergable() kinda implies "_with_other_types", no?
I like migratetype_is_mergable() more than _has_fallback().
My $0.02 to bikeshedding :)
> --
> Thanks,
>
> David / dhildenb
>
>
--
Sincerely yours,
Mike.
. I usually consider the kernel a
place of sane code, as long as it's not a hurried vendor or so.
-mr pink hand
On Tue, Jan 11, 2022, 10:51 Christophe Leroy
wrote:
>
>
> > On 11/01/2022 at 10:32, Mike wrote:
> > I managed to fix it in the end, patch attached, though i should have
Hey, so I originally sat down to compile the fast headers V2 patch, but
quickly discovered other things at play, and grabbed 5.16.0 a few hours
after it lifted off. arch/powerpc/mm/mmu_context.c specifically had to
include -maltivec or it barfed on a 'dssall', I'm fine with
that,
On Mon, Nov 29, 2021 at 06:08:10PM -0600, Rob Herring wrote:
> On Sun, Nov 21, 2021 at 08:43:47AM +0200, Mike Rapoport wrote:
> > On Fri, Nov 19, 2021 at 03:58:17PM +0800, Calvin Zhang wrote:
> > > The count of reserved regions in /reserved-memory was limited because
> > &
-
> include/linux/of_reserved_mem.h| 4 +
> 17 files changed, 207 insertions(+), 139 deletions(-)
>
> --
> 2.30.2
>
--
Sincerely yours,
Mike.
On Thu, 2021-11-11 at 10:56 +, Valentin Schneider wrote:
> On 11/11/21 11:32, Mike Galbraith wrote:
> > On Thu, 2021-11-11 at 10:36 +0100, Marco Elver wrote:
> > > I guess the question is if is_preempt_full() should be true also if
> > > is_preempt_rt() i
On Thu, 2021-11-11 at 10:36 +0100, Marco Elver wrote:
> On Thu, 11 Nov 2021 at 04:47, Mike Galbraith wrote:
> >
> > On Thu, 2021-11-11 at 04:35 +0100, Mike Galbraith wrote:
> > > On Thu, 2021-11-11 at 04:16 +0100, Mike Galbraith wrote:
> > > > On Wed, 2021-11-1
On Thu, 2021-11-11 at 04:47 +0100, Mike Galbraith wrote:
>
> So I suppose the powerpc spot should remain CONFIG_PREEMPT and become
> CONFIG_PREEMPTION when the RT change gets merged, because that spot is
> about full preemptibility, not a distinct preemption model.
KCSAN needs a
On Thu, 2021-11-11 at 04:35 +0100, Mike Galbraith wrote:
> On Thu, 2021-11-11 at 04:16 +0100, Mike Galbraith wrote:
> > On Wed, 2021-11-10 at 20:24 +, Valentin Schneider wrote:
> > >
> > > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > > ind
On Thu, 2021-11-11 at 04:16 +0100, Mike Galbraith wrote:
> On Wed, 2021-11-10 at 20:24 +, Valentin Schneider wrote:
> >
> > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > index 5f8db54226af..0640d5622496 100644
> > --- a/include/linux/sched.h
>
MPT_NONE)
> +#define is_preempt_voluntary() IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)
> +#define is_preempt_full() IS_ENABLED(CONFIG_PREEMPT)
I think that should be IS_ENABLED(CONFIG_PREEMPTION), see c1a280b68d4e.
Noticed while applying the series to an RT tree, where tglx
has done that replacement to the powerpc spot your next patch diddles.
-Mike
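A compilable sketch of why the distinction matters. The macros below reproduce the IS_ENABLED() trick from include/linux/kconfig.h (the real kernel version also ORs in IS_MODULE() for =m options), and the CONFIG_PREEMPTION define simulates a PREEMPT_RT build where CONFIG_PREEMPT itself is unset.

```c
#include <assert.h>

/* Reproduction of the kernel's IS_ENABLED() preprocessor trick: it
 * evaluates to 1 only when the config symbol is defined to 1, without
 * needing an #ifdef at every use site. */
#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(__ignored, val, ...) val
#define __is_defined(x) ___is_defined(x)
#define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
#define IS_ENABLED(option) __is_defined(option)

/* Simulate a PREEMPT_RT build: PREEMPTION is selected, PREEMPT is not. */
#define CONFIG_PREEMPTION 1

/* Per c1a280b68d4e, "fully preemptible" must test CONFIG_PREEMPTION,
 * which both CONFIG_PREEMPT and CONFIG_PREEMPT_RT select. */
#define is_preempt_full() IS_ENABLED(CONFIG_PREEMPTION)
/* Testing CONFIG_PREEMPT instead would miss PREEMPT_RT builds: */
#define is_preempt_full_bad() IS_ENABLED(CONFIG_PREEMPT)
```

In this simulated RT build, is_preempt_full() correctly reports 1 while the CONFIG_PREEMPT-based check reports 0.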
On Thu, Sep 30, 2021 at 02:20:33PM -0700, Linus Torvalds wrote:
> On Thu, Sep 30, 2021 at 11:50 AM Mike Rapoport wrote:
> >
> > The first patch is a cleanup of numa_distance allocation in arch_numa I've
> > spotted during the conversion.
> > The second patch is a
From: Mike Rapoport
Rename memblock_free_ptr() to memblock_free() and use memblock_free()
when freeing a virtual pointer so that memblock_free() will be a
counterpart of memblock_alloc()
The callers are updated with the below semantic patch and manual addition
of (void *) casting to pointers
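A userspace sketch of the resulting API shape. These are stubs, not the real memblock: fake_ram, virt_to_phys_stub() and the freed_bytes counter are inventions for this example. The point is that memblock_free() now takes the virtual pointer that memblock_alloc() returned and delegates to the physical-range primitive.

```c
#include <stdint.h>
#include <string.h>

/* Stubbed sketch of the API split after the rename, not real memblock
 * code: fake_ram stands in for physical memory and virt_to_phys_stub()
 * for the kernel's virt-to-phys translation. */
static unsigned char fake_ram[4096];
static size_t freed_bytes;

static uint64_t virt_to_phys_stub(const void *va)
{
	return (uint64_t)((const unsigned char *)va - fake_ram);
}

/* Physical-range primitive: frees [base, base + size). */
static void memblock_phys_free(uint64_t base, size_t size)
{
	memset(fake_ram + base, 0, size);
	freed_bytes += size;
}

/* Virtual counterpart of memblock_alloc(): converts and delegates,
 * so callers no longer hand a virtual pointer to a phys-addr API. */
static void memblock_free(void *ptr, size_t size)
{
	memblock_phys_free(virt_to_phys_stub(ptr), size);
}
```

With this split, passing a virtual pointer to the physical API (the bug class the series fixes) becomes a visible type/naming mismatch rather than a silent misinterpretation.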
From: Mike Rapoport
Since memblock_free() operates on a physical range, make its name reflect
it and rename it to memblock_phys_free(), so it will be a logical
counterpart to memblock_phys_alloc().
The callers are updated with the below semantic patch:
@@
expression addr;
expression size
From: Mike Rapoport
memblock_free_late() is a NOP wrapper for __memblock_free_late(), there is
no point to keep this indirection.
Drop the wrapper and rename __memblock_free_late() to memblock_free_late().
Signed-off-by: Mike Rapoport
---
include/linux/memblock.h | 7 +--
mm/memblock.c
From: Mike Rapoport
memblock_free_early_nid() is unused and memblock_free_early() is an alias
for memblock_free().
Replace calls to memblock_free_early() with calls to memblock_free() and
remove memblock_free_early() and memblock_free_early_nid().
Signed-off-by: Mike Rapoport
---
arch/mips
From: Mike Rapoport
free_p2m_page() wrongly passes a virtual pointer to memblock_free() that
treats it as a physical address.
Call memblock_free_ptr() instead that gets a virtual address to free the
memory.
Signed-off-by: Mike Rapoport
Reviewed-by: Juergen Gross
---
arch/x86/xen/p2m.c | 2
From: Mike Rapoport
Memory allocation of numa_distance uses memblock_phys_alloc_range() without
actual range limits, converts the returned physical address to virtual and
then only uses the virtual address for further initialization.
Simplify this by replacing memblock_phys_alloc_range
From: Mike Rapoport
Hi,
Following the discussion on [1] this is the fix for memblock freeing APIs
mismatch.
The first patch is a cleanup of numa_distance allocation in arch_numa I've
spotted during the conversion.
The second patch is a fix for Xen memory freeing on some of the error
paths.
I
On Thu, Sep 23, 2021 at 03:54:46PM +0200, Christophe Leroy wrote:
>
> On 23/09/2021 at 14:01, Mike Rapoport wrote:
> > On Thu, Sep 23, 2021 at 11:47:48AM +0200, Christophe Leroy wrote:
> > >
> > >
> > > Le 23/09/2021 à 09:43, Mike Rapopor
Hi Linus,
On Thu, Sep 23, 2021 at 09:01:46AM -0700, Linus Torvalds wrote:
> On Thu, Sep 23, 2021 at 12:43 AM Mike Rapoport wrote:
> >
> You need to be a LOT more careful.
>
> From a trivial check - exactly because I looked at doing it with a
> script, and decided it's
On Thu, Sep 23, 2021 at 11:47:48AM +0200, Christophe Leroy wrote:
>
>
> > On 23/09/2021 at 09:43, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > For ages memblock_free() interface dealt with physical addresses even
> > despite the existence of memblock
From: Mike Rapoport
For ages memblock_free() interface dealt with physical addresses even
despite the existence of memblock_alloc_xx() functions that return a
virtual pointer.
Introduce memblock_phys_free() for freeing physical ranges and repurpose
memblock_free() to free virtual pointers
From: Mike Rapoport
free_p2m_page() wrongly passes a virtual pointer to memblock_free() that
treats it as a physical address.
Call memblock_free_ptr() instead that gets a virtual address to free the
memory.
Signed-off-by: Mike Rapoport
---
arch/x86/xen/p2m.c | 2 +-
1 file changed, 1
From: Mike Rapoport
Memory allocation of numa_distance uses memblock_phys_alloc_range() without
actual range limits, converts the returned physical address to virtual and
then only uses the virtual address for further initialization.
Simplify this by replacing memblock_phys_alloc_range
From: Mike Rapoport
Hi,
Following the discussion on [1] this is the fix for memblock freeing APIs
mismatch.
The first patch is a cleanup of numa_distance allocation in arch_numa I've
spotted during the conversion.
The second patch is a fix for Xen memory freeing on some of the error
paths
On Fri, Jun 11, 2021 at 01:53:48PM -0700, Stephen Brennan wrote:
> Mike Rapoport writes:
> > From: Mike Rapoport
> >
> > There are no architectures that support DISCONTIGMEM left.
> >
> > Remove the configuration option and the dead code it was guarding in the
&
Hi Arnd,
On Wed, Jun 09, 2021 at 01:30:39PM +0200, Arnd Bergmann wrote:
> On Fri, Jun 4, 2021 at 8:49 AM Mike Rapoport wrote:
> >
> > From: Mike Rapoport
> >
> > Hi,
> >
> > SPARSEMEM memory model was supposed to entirely replace DISCONTIGMEM a
>
On Tue, Jun 08, 2021 at 05:25:44PM -0700, Andrew Morton wrote:
> On Tue, 8 Jun 2021 12:13:15 +0300 Mike Rapoport wrote:
>
> > From: Mike Rapoport
> >
> > After removal of DISCONTIGMEM the NEED_MULTIPLE_NODES and NUMA
> > configuration options
From: Mike Rapoport
After removal of the DISCONTIGMEM memory model the FLAT_NODE_MEM_MAP
configuration option is equivalent to FLATMEM.
Drop CONFIG_FLAT_NODE_MEM_MAP and use CONFIG_FLATMEM instead.
Signed-off-by: Mike Rapoport
---
include/linux/mmzone.h | 4 ++--
kernel/crash_core.c | 2
From: Mike Rapoport
After removal of DISCONTIGMEM the NEED_MULTIPLE_NODES and NUMA
configuration options are equivalent.
Drop CONFIG_NEED_MULTIPLE_NODES and use CONFIG_NUMA instead.
Done with
$ sed -i 's/CONFIG_NEED_MULTIPLE_NODES/CONFIG_NUMA/' \
$(git grep -wl
From: Mike Rapoport
Remove description of DISCONTIGMEM from the "Memory Models" document and
update VM sysctl description so that it won't mention DISCONTIGMEM.
Signed-off-by: Mike Rapoport
---
Documentation/admin-guide/sysctl/vm.rst | 12 +++
Documentation/vm/memory-model.rst
From: Mike Rapoport
There are several places that mention DISCONTIGMEM in comments or have stale
code guarded by CONFIG_DISCONTIGMEM.
Remove the dead code and update the comments.
Signed-off-by: Mike Rapoport
---
arch/ia64/kernel/topology.c | 5 ++---
arch/ia64/mm/numa.c | 5
From: Mike Rapoport
There are no architectures that support DISCONTIGMEM left.
Remove the configuration option and the dead code it was guarding in the
generic memory management code.
Signed-off-by: Mike Rapoport
---
include/asm-generic/memory_model.h | 37
From: Mike Rapoport
DISCONTIGMEM was replaced by FLATMEM with freeing of the unused memory map
in v5.11.
Remove the support for DISCONTIGMEM entirely.
Signed-off-by: Mike Rapoport
Reviewed-by: Geert Uytterhoeven
Acked-by: Geert Uytterhoeven
---
arch/m68k/Kconfig.cpu | 10
From: Mike Rapoport
DISCONTIGMEM was replaced by FLATMEM with freeing of the unused memory map
in v5.11.
Remove the support for DISCONTIGMEM entirely.
Signed-off-by: Mike Rapoport
Acked-by: Vineet Gupta
---
arch/arc/Kconfig | 13
arch/arc/include/asm/mmzone.h | 40
From: Mike Rapoport
Arc does not use DISCONTIGMEM to implement high memory, update the comment
describing how high memory works to reflect this.
Signed-off-by: Mike Rapoport
Acked-by: Vineet Gupta
---
arch/arc/mm/init.c | 13 +
1 file changed, 5 insertions(+), 8 deletions
From: Mike Rapoport
NUMA is marked broken on alpha for more than 15 years and DISCONTIGMEM was
replaced with SPARSEMEM in v5.11.
Remove both NUMA and DISCONTIGMEM support from alpha.
Signed-off-by: Mike Rapoport
---
arch/alpha/Kconfig| 22 ---
arch/alpha/include/asm
From: Mike Rapoport
Hi,
SPARSEMEM memory model was supposed to entirely replace DISCONTIGMEM a
(long) while ago. The last architectures that used DISCONTIGMEM were
updated to use other memory models in v5.11 and it is about the time to
entirely remove DISCONTIGMEM from the kernel.
This set
Hi,
On Mon, Jun 07, 2021 at 10:53:08AM +0200, Geert Uytterhoeven wrote:
> Hi Mike,
>
> On Fri, Jun 4, 2021 at 8:50 AM Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > After removal of DISCONTIGMEM the NEED_MULTIPLE_NODES and NUMA
> > configuration opt
(unsigned long)brk;
> +}
> /* Pointer magic because the dynamic array size confuses some compilers. */
> static inline void mm_init_cpumask(struct mm_struct *mm)
> --
> 2.26.2
>
>
> ___
> linux-riscv mailing list
> linux-ri...@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-riscv
--
Sincerely yours,
Mike.
>
--
Sincerely yours,
Mike.
On Fri, Jun 04, 2021 at 02:07:39PM +, Vineet Gupta wrote:
> On 6/3/21 11:49 PM, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > DISCONTIGMEM was replaced by FLATMEM with freeing of the unused memory map
> > in v5.11.
> >
> > Remove the support fo
From: Mike Rapoport
After removal of the DISCONTIGMEM memory model the FLAT_NODE_MEM_MAP
configuration option is equivalent to FLATMEM.
Drop CONFIG_FLAT_NODE_MEM_MAP and use CONFIG_FLATMEM instead.
Signed-off-by: Mike Rapoport
---
include/linux/mmzone.h | 4 ++--
kernel/crash_core.c | 2
From: Mike Rapoport
After removal of DISCONTIGMEM the NEED_MULTIPLE_NODES and NUMA
configuration options are equivalent.
Drop CONFIG_NEED_MULTIPLE_NODES and use CONFIG_NUMA instead.
Done with
$ sed -i 's/CONFIG_NEED_MULTIPLE_NODES/CONFIG_NUMA/' \
$(git grep -wl
From: Mike Rapoport
Remove description of DISCONTIGMEM from the "Memory Models" document and
update VM sysctl description so that it won't mention DISCONTIGMEM.
Signed-off-by: Mike Rapoport
---
Documentation/admin-guide/sysctl/vm.rst | 12 +++
Documentation/vm/memory-model.rst