b folder to use new header.
> Though for the time being, include the new header back into kernel.h to
> avoid twisted indirect includes for existing users.
>
> Signed-off-by: Andy Shevchenko
Acked-by: Mike Rapoport
> ---
> arch/powerpc/kernel/setup-common.c | 1 +
> arch/
On Wed, Mar 17, 2021 at 09:52:10AM +0800, Kefeng Wang wrote:
> mem_init_print_info() is called in mem_init() on each architecture, always
> with a NULL argument, so make it take no arguments and move the call into
> mm_init().
>
> Acked-by: Dave Hansen
> Signed-off-by: Kefeng Wang
Acked-by
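As a rough illustration of the conversion described above (a sketch, not the patch itself), mem_init_print_info() loses its unused argument and the call moves from each architecture's mem_init() into mm_init() in init/main.c:

/* Before: every architecture ended its mem_init() with */
void __init mem_init(void)
{
	/* arch-specific initialization ... */
	mem_init_print_info(NULL);
}

/* After: the helper takes void and is called once from mm_init() */
static void __init mm_init(void)
{
	/* ... */
	mem_init();
	mem_init_print_info();
	/* ... */
}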
On Sat, Jan 23, 2021 at 06:09:11PM -0800, Andrew Morton wrote:
> On Fri, 22 Jan 2021 01:37:14 -0300 Thiago Jung Bauermann
> wrote:
>
> > Mike Rapoport writes:
> >
> > > > Signed-off-by: Roman Gushchin
> > >
> > > Reviewed-by: Mike Rapopor
_64BIT
> - On RISC-V, the normal page table format can support 34-bit
> addressing. There is no highmem support on RISC-V, so anything
> above 2GB is unused, but it might be useful to eventually support
> CONFIG_ZRAM for high pages.
>
> Fixes: 61989a80fb3a ("staging:
From: Mike Rapoport
For architectures that enable ARCH_HAS_SET_MEMORY having the ability to
verify that a page is mapped in the kernel direct map can be useful
regardless of hibernation.
Add RISC-V implementation of kernel_page_present(), update its forward
declarations and stubs to be a part
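For reference, a minimal sketch of what such a RISC-V kernel_page_present() walk could look like, assuming the standard page-table accessors (huge-page handling omitted; the real patch may differ):

bool kernel_page_present(struct page *page)
{
	unsigned long addr = (unsigned long)page_address(page);
	pgd_t *pgd;
	p4d_t *p4d;
	pud_t *pud;
	pmd_t *pmd;
	pte_t *pte;

	pgd = pgd_offset_k(addr);
	if (!pgd_present(*pgd))
		return false;
	p4d = p4d_offset(pgd, addr);
	if (!p4d_present(*p4d))
		return false;
	pud = pud_offset(p4d, addr);
	if (!pud_present(*pud))
		return false;
	pmd = pmd_offset(pud, addr);
	if (!pmd_present(*pmd))
		return false;
	pte = pte_offset_kernel(pmd, addr);
	return pte_present(*pte);
}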
From: Mike Rapoport
The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must never
fail. With this assumption it wouldn't be safe to allow general usage of
this function.
Moreover, some architectures that implement __kernel_map_pages() have this
function guarded by #ifdef
From: Mike Rapoport
When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page may not
be present in the direct map and has to be explicitly mapped before it can
be copied.
Introduce hibernate_map_page() and hibernation_unmap_page() that will
explicitly use set_direct_map_{default
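A minimal sketch of the two helpers named above, assuming the set_direct_map_*_noflush() API with a single-page fallback to __kernel_map_pages() (error handling in the real patch may differ):

static void hibernate_map_page(struct page *page)
{
	if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
		/* restore the page in the kernel direct map */
		if (set_direct_map_default_noflush(page))
			pr_warn_once("Failed to remap page\n");
	} else if (debug_pagealloc_enabled()) {
		__kernel_map_pages(page, 1, 1);
	}
}

static void hibernation_unmap_page(struct page *page)
{
	if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
		/* remove the page from the kernel direct map again */
		set_direct_map_invalid_noflush(page);
	} else if (debug_pagealloc_enabled()) {
		__kernel_map_pages(page, 1, 0);
	}
}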
From: Mike Rapoport
When CONFIG_DEBUG_PAGEALLOC is enabled, it unmaps pages from the kernel
direct mapping after free_pages(). The pages then need to be mapped back
before they can be used. These mapping operations use
__kernel_map_pages() guarded with debug_pagealloc_enabled
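A sketch of how folding that guard into dedicated wrappers might look (the names debug_pagealloc_map_pages() and debug_pagealloc_unmap_pages() are assumed here, not taken from the patch):

static inline void debug_pagealloc_map_pages(struct page *page, int numpages)
{
	if (debug_pagealloc_enabled_static())
		__kernel_map_pages(page, numpages, 1);
}

static inline void debug_pagealloc_unmap_pages(struct page *page, int numpages)
{
	if (debug_pagealloc_enabled_static())
		__kernel_map_pages(page, numpages, 0);
}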
From: Mike Rapoport
Hi,
During recent discussion about KVM protected memory, David raised a concern
about usage of __kernel_map_pages() outside of DEBUG_PAGEALLOC scope [1].
Indeed, for architectures that define CONFIG_ARCH_HAS_SET_DIRECT_MAP it is
possible that __kernel_map_pages() would fail
Oops, this one has some rebase errors, I'll send v7 soon.
Sorry for the noise.
On Mon, Nov 09, 2020 at 06:24:11PM +0200, Mike Rapoport wrote:
> From: Mike Rapoport
>
> Hi,
>
> During recent discussion about KVM protected memory, David raised a concern
> about usage of
From: Mike Rapoport
For architectures that enable ARCH_HAS_SET_MEMORY having the ability to
verify that a page is mapped in the kernel direct map can be useful
regardless of hibernation.
Add RISC-V implementation of kernel_page_present(), update its forward
declarations and stubs to be a part
From: Mike Rapoport
The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must never
fail. With this assumption it wouldn't be safe to allow general usage of
this function.
Moreover, some architectures that implement __kernel_map_pages() have this
function guarded by #ifdef
From: Mike Rapoport
When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page may not
be present in the direct map and has to be explicitly mapped before it can
be copied.
Introduce hibernate_map_page() and hibernation_unmap_page() that will
explicitly use set_direct_map_{default
From: Mike Rapoport
When CONFIG_DEBUG_PAGEALLOC is enabled, it unmaps pages from the kernel
direct mapping after free_pages(). The pages then need to be mapped back
before they can be used. These mapping operations use
__kernel_map_pages() guarded with debug_pagealloc_enabled
From: Mike Rapoport
Hi,
During recent discussion about KVM protected memory, David raised a concern
about usage of __kernel_map_pages() outside of DEBUG_PAGEALLOC scope [1].
Indeed, for architectures that define CONFIG_ARCH_HAS_SET_DIRECT_MAP it is
possible that __kernel_map_pages() would fail
On Mon, Nov 09, 2020 at 12:33:46PM +0100, Vlastimil Babka wrote:
> On 11/8/20 7:57 AM, Mike Rapoport wrote:
> > --- a/mm/slab.c
> > +++ b/mm/slab.c
> > @@ -1428,21 +1428,19 @@ static bool is_debug_pagealloc_cache(struct
> > kmem_cache *cachep)
> >
From: Mike Rapoport
For architectures that enable ARCH_HAS_SET_MEMORY having the ability to
verify that a page is mapped in the kernel direct map can be useful
regardless of hibernation.
Add RISC-V implementation of kernel_page_present(), update its forward
declarations and stubs to be a part
From: Mike Rapoport
The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must never
fail. With this assumption it wouldn't be safe to allow general usage of
this function.
Moreover, some architectures that implement __kernel_map_pages() have this
function guarded by #ifdef
From: Mike Rapoport
When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page may not
be present in the direct map and has to be explicitly mapped before it can
be copied.
Introduce hibernate_map_page() and hibernation_unmap_page() that will
explicitly use set_direct_map_{default
From: Mike Rapoport
Instead of using slab_kernel_map() with 'map' parameter to remap pages when
DEBUG_PAGEALLOC is enabled, use dedicated helpers slab_kernel_map() and
slab_kernel_unmap().
Signed-off-by: Mike Rapoport
---
mm/slab.c | 26 +++---
1 file changed, 15
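A sketch of the split described above, assuming the existing is_debug_pagealloc_cache() check in mm/slab.c:

static void slab_kernel_map(struct kmem_cache *cachep, void *objp)
{
	if (!is_debug_pagealloc_cache(cachep))
		return;
	__kernel_map_pages(virt_to_page(objp), cachep->size / PAGE_SIZE, 1);
}

static void slab_kernel_unmap(struct kmem_cache *cachep, void *objp)
{
	if (!is_debug_pagealloc_cache(cachep))
		return;
	__kernel_map_pages(virt_to_page(objp), cachep->size / PAGE_SIZE, 0);
}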
From: Mike Rapoport
When CONFIG_DEBUG_PAGEALLOC is enabled, it unmaps pages from the kernel
direct mapping after free_pages(). The pages then need to be mapped back
before they can be used. These mapping operations use
__kernel_map_pages() guarded with debug_pagealloc_enabled
From: Mike Rapoport
Hi,
During recent discussion about KVM protected memory, David raised a concern
about usage of __kernel_map_pages() outside of DEBUG_PAGEALLOC scope [1].
Indeed, for architectures that define CONFIG_ARCH_HAS_SET_DIRECT_MAP it is
possible that __kernel_map_pages() would fail
On Wed, Nov 04, 2020 at 07:02:20PM +0100, Vlastimil Babka wrote:
> On 11/3/20 5:20 PM, Mike Rapoport wrote:
> > From: Mike Rapoport
>
> Subject should have "on DEBUG_PAGEALLOC" ?
>
> > The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must ne
On Wed, Nov 04, 2020 at 06:40:28PM +0100, Vlastimil Babka wrote:
> On 11/3/20 5:20 PM, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page may be
> > not present in the direct map and has to
On Wed, Nov 04, 2020 at 06:35:50PM +0100, Vlastimil Babka wrote:
> On 11/3/20 5:20 PM, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > When CONFIG_DEBUG_PAGEALLOC is enabled, it unmaps pages from the kernel
> > direct mapping after free_pages(). The pages
On Wed, Nov 04, 2020 at 10:50:07AM +0100, osalvador wrote:
> On Thu, Oct 29, 2020 at 05:27:17PM +0100, David Hildenbrand wrote:
> > Let's revert what we did in case something goes wrong and we return an
> > error.
>
> Dumb question, but should not we do this for other arches as well?
It seems
From: Mike Rapoport
For architectures that enable ARCH_HAS_SET_MEMORY having the ability to
verify that a page is mapped in the kernel direct map can be useful
regardless of hibernation.
Add RISC-V implementation of kernel_page_present(), update its forward
declarations and stubs to be a part
From: Mike Rapoport
The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must never
fail. With this assumption it wouldn't be safe to allow general usage of
this function.
Moreover, some architectures that implement __kernel_map_pages() have this
function guarded by #ifdef
From: Mike Rapoport
When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page may not
be present in the direct map and has to be explicitly mapped before it can
be copied.
Introduce hibernate_map_page() that will explicitly use
set_direct_map_{default,invalid}_noflush
From: Mike Rapoport
When CONFIG_DEBUG_PAGEALLOC is enabled, it unmaps pages from the kernel
direct mapping after free_pages(). The pages then need to be mapped back
before they can be used. These mapping operations use
__kernel_map_pages() guarded with debug_pagealloc_enabled
From: Mike Rapoport
Hi,
During recent discussion about KVM protected memory, David raised a concern
about usage of __kernel_map_pages() outside of DEBUG_PAGEALLOC scope [1].
Indeed, for architectures that define CONFIG_ARCH_HAS_SET_DIRECT_MAP it is
possible that __kernel_map_pages() would fail
On Tue, Nov 03, 2020 at 05:39:16PM +0300, Kirill A. Shutemov wrote:
> On Tue, Nov 03, 2020 at 02:13:50PM +0200, Mike Rapoport wrote:
> > > > +
> > > > + if (WARN_ON(ret))
> > >
> > > _ONCE?
> >
> > I've changed it to p
On Tue, Nov 03, 2020 at 02:08:16PM +0300, Kirill A. Shutemov wrote:
> On Sun, Nov 01, 2020 at 07:08:13PM +0200, Mike Rapoport wrote:
> > diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
> > index 46b1804c1ddf..054c8cce4236 100644
> > --- a/kernel/power/snapsh
On Mon, Nov 02, 2020 at 10:28:14AM +0100, David Hildenbrand wrote:
> On 01.11.20 18:08, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > For architectures that enable ARCH_HAS_SET_MEMORY having the ability to
> > verify that a page is mapped in the kern
On Mon, Nov 02, 2020 at 10:23:20AM +0100, David Hildenbrand wrote:
>
> > int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long
> > address,
> > unsigned numpages, unsigned long page_flags)
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
>
On Mon, Nov 02, 2020 at 10:19:36AM +0100, David Hildenbrand wrote:
> On 01.11.20 18:08, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page may be
> > not present in the direct map and has to
From: Mike Rapoport
For architectures that enable ARCH_HAS_SET_MEMORY having the ability to
verify that a page is mapped in the kernel direct map can be useful
regardless of hibernation.
Add RISC-V implementation of kernel_page_present(), update its forward
declarations and stubs to be a part
From: Mike Rapoport
The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must never
fail. With this assumption it wouldn't be safe to allow general usage of
this function.
Moreover, some architectures that implement __kernel_map_pages() have this
function guarded by #ifdef
From: Mike Rapoport
When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page may not
be present in the direct map and has to be explicitly mapped before it can
be copied.
Introduce hibernate_map_page() that will explicitly use
set_direct_map_{default,invalid}_noflush
From: Mike Rapoport
When CONFIG_DEBUG_PAGEALLOC is enabled, it unmaps pages from the kernel
direct mapping after free_pages(). The pages then need to be mapped back
before they can be used. These mapping operations use
__kernel_map_pages() guarded with debug_pagealloc_enabled
From: Mike Rapoport
Hi,
During recent discussion about KVM protected memory, David raised a concern
about usage of __kernel_map_pages() outside of DEBUG_PAGEALLOC scope [1].
Indeed, for architectures that define CONFIG_ARCH_HAS_SET_DIRECT_MAP it is
possible that __kernel_map_pages() would fail
On Thu, Oct 29, 2020 at 11:19:18PM +, Edgecombe, Rick P wrote:
> On Thu, 2020-10-29 at 09:54 +0200, Mike Rapoport wrote:
> > __kernel_map_pages() on arm64 will also bail out if rodata_full is
> > false:
> > void __kernel_map_pages(struct page *page, int
From: Mike Rapoport
For architectures that enable ARCH_HAS_SET_MEMORY having the ability to
verify that a page is mapped in the kernel direct map can be useful
regardless of hibernation.
Add RISC-V implementation of kernel_page_present(), update its forward
declarations and stubs to be a part
From: Mike Rapoport
The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must never
fail. With this assumption it wouldn't be safe to allow general usage of
this function.
Moreover, some architectures that implement __kernel_map_pages() have this
function guarded by #ifdef
From: Mike Rapoport
When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page may not
be present in the direct map and has to be explicitly mapped before it can
be copied.
On arm64 it is possible that a page would be removed from the direct map
using set_direct_map_invalid_noflush
From: Mike Rapoport
When CONFIG_DEBUG_PAGEALLOC is enabled, it unmaps pages from the kernel
direct mapping after free_pages(). The pages then need to be mapped back
before they can be used. These mapping operations use
__kernel_map_pages() guarded with debug_pagealloc_enabled
From: Mike Rapoport
Hi,
During recent discussion about KVM protected memory, David raised a concern
about usage of __kernel_map_pages() outside of DEBUG_PAGEALLOC scope [1].
Indeed, for architectures that define CONFIG_ARCH_HAS_SET_DIRECT_MAP it is
possible that __kernel_map_pages() would fail
On Wed, Oct 28, 2020 at 09:03:31PM +, Edgecombe, Rick P wrote:
> > On Wed, Oct 28, 2020 at 11:20:12AM +, Will Deacon wrote:
> > > On Tue, Oct 27, 2020 at 10:38:16AM +0200, Mike Rapoport wrote:
> > > >
> &
On Wed, Oct 28, 2020 at 09:15:38PM +, Edgecombe, Rick P wrote:
> On Sun, 2020-10-25 at 12:15 +0200, Mike Rapoport wrote:
> > + if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
> > + unsigned long addr = (unsigned
> > long)page_address(page);
> &
On Wed, Oct 28, 2020 at 12:17:35PM +0100, David Hildenbrand wrote:
> On 28.10.20 12:09, Mike Rapoport wrote:
> > On Tue, Oct 27, 2020 at 09:46:35AM +0100, David Hildenbrand wrote:
> > > On 27.10.20 09:38, Mike Rapoport wrote:
> > > > On Mon, Oct 26, 2020 at 06:05
On Wed, Oct 28, 2020 at 11:20:12AM +, Will Deacon wrote:
> On Tue, Oct 27, 2020 at 10:38:16AM +0200, Mike Rapoport wrote:
> > On Mon, Oct 26, 2020 at 06:05:30PM +, Edgecombe, Rick P wrote:
> > > On Mon, 2020-10-26 at 11:05 +0200, Mike Rapoport wrote:
> > > >
On Tue, Oct 27, 2020 at 09:46:35AM +0100, David Hildenbrand wrote:
> On 27.10.20 09:38, Mike Rapoport wrote:
> > On Mon, Oct 26, 2020 at 06:05:30PM +, Edgecombe, Rick P wrote:
> >
> > > Beyond whatever you are seeing, for the latter case of new things
> > > g
On Tue, Oct 27, 2020 at 10:44:21PM +, Edgecombe, Rick P wrote:
> On Tue, 2020-10-27 at 10:49 +0200, Mike Rapoport wrote:
> > On Mon, Oct 26, 2020 at 06:57:32PM +, Edgecombe, Rick P wrote:
> > > On Mon, 2020-10-26 at 11:15 +0200, Mike Rapoport wrote:
> > > >
On Tue, Oct 27, 2020 at 09:46:35AM +0100, David Hildenbrand wrote:
> On 27.10.20 09:38, Mike Rapoport wrote:
> > On Mon, Oct 26, 2020 at 06:05:30PM +, Edgecombe, Rick P wrote:
> > > On Mon, 2020-10-26 at 11:05 +0200, Mike Rapoport wrote:
> > > > On Mon, Oc
On Mon, Oct 26, 2020 at 06:57:32PM +, Edgecombe, Rick P wrote:
> On Mon, 2020-10-26 at 11:15 +0200, Mike Rapoport wrote:
> > On Mon, Oct 26, 2020 at 12:38:32AM +, Edgecombe, Rick P wrote:
> > > On Sun, 2020-10-25 at 12:15 +0200, Mike Rapoport wrote:
> >
On Mon, Oct 26, 2020 at 06:05:30PM +, Edgecombe, Rick P wrote:
> On Mon, 2020-10-26 at 11:05 +0200, Mike Rapoport wrote:
> > On Mon, Oct 26, 2020 at 01:13:52AM +, Edgecombe, Rick P wrote:
> > > On Sun, 2020-10-25 at 12:15 +0200, Mike Rapoport wrote:
> > >
On Mon, Oct 26, 2020 at 12:05:13PM +0100, David Hildenbrand wrote:
> On 25.10.20 11:15, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > When CONFIG_DEBUG_PAGEALLOC is enabled, it unmaps pages from the
> > kernel direct mapping after free_pages(). The pages then
On Mon, Oct 26, 2020 at 12:54:01AM +, Edgecombe, Rick P wrote:
> On Sun, 2020-10-25 at 12:15 +0200, Mike Rapoport wrote:
> > index 7f248fc45317..16f878c26667 100644
> > --- a/arch/x86/mm/pat/set_memory.c
> > +++ b/arch/x86/mm/pat/set_memory.c
> > @@ -2228,7 +2228,6
On Mon, Oct 26, 2020 at 12:38:32AM +, Edgecombe, Rick P wrote:
> On Sun, 2020-10-25 at 12:15 +0200, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page may
> > be
> > not present in the di
On Mon, Oct 26, 2020 at 01:13:52AM +, Edgecombe, Rick P wrote:
> On Sun, 2020-10-25 at 12:15 +0200, Mike Rapoport wrote:
> > Indeed, for architectures that define CONFIG_ARCH_HAS_SET_DIRECT_MAP
> > it is
> > possible that __kernel_map_pages() would fail, but since this
>
From: Mike Rapoport
For architectures that enable ARCH_HAS_SET_MEMORY having the ability to
verify that a page is mapped in the kernel direct map can be useful
regardless of hibernation.
Add RISC-V implementation of kernel_page_present() and update its forward
declarations and stubs
From: Mike Rapoport
The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must never
fail. With this assumption it wouldn't be safe to allow general usage of
this function.
Moreover, some architectures that implement __kernel_map_pages() have this
function guarded by #ifdef
From: Mike Rapoport
When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page may not
be present in the direct map and has to be explicitly mapped before it can
be copied.
On arm64 it is possible that a page would be removed from the direct map
using set_direct_map_invalid_noflush
From: Mike Rapoport
When CONFIG_DEBUG_PAGEALLOC is enabled, it unmaps pages from the kernel
direct mapping after free_pages(). The pages then need to be mapped back
before they can be used. These mapping operations use
__kernel_map_pages() guarded with debug_pagealloc_enabled
From: Mike Rapoport
Hi,
During recent discussion about KVM protected memory, David raised a concern
about usage of __kernel_map_pages() outside of DEBUG_PAGEALLOC scope [1].
Indeed, for architectures that define CONFIG_ARCH_HAS_SET_DIRECT_MAP it is
possible that __kernel_map_pages() would fail
ion to pXd values pass original
> pXdp pointers down to gup_pXd_range functions. And introduce
> pXd_offset_lockless helpers, which take an additional pXd
> entry value parameter. This has already been discussed in
> https://lkml.kernel.org/r/20190418100218.0a4afd51@mschwideX1
>
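A sketch of the generic fallback this approach implies: architectures that do not care simply take the extra entry value and fall back to the existing helper, while s390 can dereference the value instead of the pointer (the shape below is assumed, not quoted from the series):

#ifndef pmd_offset_lockless
/* generic case: ignore pudp, take the address of the local pud copy */
#define pmd_offset_lockless(pudp, pud, address) pmd_offset(&(pud), address)
#endif

/* gup_pmd_range() then receives both the pud_t value it already read
 * and the original pudp pointer, and walks via the lockless helper. */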
Hi,
Some style comments below.
On Mon, Sep 07, 2020 at 08:00:58PM +0200, Gerald Schaefer wrote:
> From: Alexander Gordeev
>
> Since pXd_addr_end() macros take pXd page-table entry as a
> parameter, it makes sense to check the entry type at compile time.
> Even though most archs do not make use of
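To illustrate the compile-time check (a sketch of one possible shape, not the patch), the pXd_addr_end() macros would become typed inline functions, so passing, say, a pgd_t where a pud_t is expected becomes a build error:

static inline unsigned long pud_addr_end(pud_t pud, unsigned long addr,
					 unsigned long end)
{
	/* same arithmetic as the generic pud_addr_end() macro */
	unsigned long boundary = (addr + PUD_SIZE) & PUD_MASK;

	return (boundary - 1 < end - 1) ? boundary : end;
}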
On Mon, Sep 07, 2020 at 08:00:55PM +0200, Gerald Schaefer wrote:
> This is v2 of an RFC previously discussed here:
> https://lore.kernel.org/lkml/20200828140314.8556-1-gerald.schae...@linux.ibm.com/
>
> Patch 1 is a fix for a regression in gup_fast on s390, after our conversion
> to common
On Wed, Aug 19, 2020 at 12:24:05PM -0700, Andrew Morton wrote:
> On Tue, 18 Aug 2020 18:16:26 +0300 Mike Rapoport wrote:
>
> > From: Mike Rapoport
> >
> > The only user of memblock_dbg() outside memblock was s390 setup code and it
> > is converted to use pr_de
From: Mike Rapoport
for_each_memblock() is used to iterate over memblock.memory in
a few places that use data from memblock_region rather than the memory
ranges.
Introduce separate for_each_mem_region() and for_each_reserved_mem_region()
to improve encapsulation of memblock internals from its
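A usage sketch of the new iterator, matching the pattern the snippet above describes (illustrative only):

struct memblock_region *r;

for_each_mem_region(r)
	pr_info("memory region: base %pa size %pa\n", &r->base, &r->size);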
From: Mike Rapoport
Iteration over memblock.reserved with for_each_reserved_mem_region() used
__next_reserved_mem_region() that implemented a subset of
__next_mem_region().
Use __for_each_mem_range() and, essentially, __next_mem_region() with
appropriate parameters to reduce code duplication
From: Mike Rapoport
The only user of memblock_mem_size() was x86 setup code; it is gone now and
the memblock_mem_size() function can be removed.
Signed-off-by: Mike Rapoport
Reviewed-by: Baoquan He
---
include/linux/memblock.h | 1 -
mm/memblock.c | 15 ---
2 files
From: Mike Rapoport
* Replace magic numbers with defines
* Replace memblock_find_in_range() + memblock_reserve() with
memblock_phys_alloc_range()
* Stop checking for low memory size in reserve_crashkernel_low(). The
allocation from a limited range will fail anyway if there is not enough
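A sketch of the find+reserve consolidation; CRASH_ALIGN and CRASH_ADDR_LOW_MAX stand in for the defines mentioned above:

/* before: two steps, with the reservation easy to forget */
crash_base = memblock_find_in_range(CRASH_ALIGN, CRASH_ADDR_LOW_MAX,
				    crash_size, CRASH_ALIGN);
if (crash_base)
	memblock_reserve(crash_base, crash_size);

/* after: allocation and reservation in one call */
crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
				       CRASH_ALIGN, CRASH_ADDR_LOW_MAX);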
From: Mike Rapoport
Currently, the initrd image is reserved very early during setup and then it
might be relocated and re-reserved after the initial physical memory
mapping is created. The "late" reservation of memblock verifies that mapped
memory size exceeds the size of initrd, then chec
From: Mike Rapoport
There are several occurrences of the following pattern:
for_each_memblock(memory, reg) {
start = __pfn_to_phys(memblock_region_memory_base_pfn(reg));
end = __pfn_to_phys(memblock_region_memory_end_pfn(reg));
/* do
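With the simplified iterator from this series, the same loop becomes (sketch):

phys_addr_t start, end;
u64 i;

for_each_mem_range(i, &start, &end) {
	/* do something with the [start, end) physical range */
}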
From: Mike Rapoport
There are several occurrences of the following pattern:
for_each_memblock(memory, reg) {
start_pfn = memblock_region_memory_base_pfn(reg);
end_pfn = memblock_region_memory_end_pfn(reg);
/* do something with start_pfn
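The PFN-based counterpart of the conversion (sketch; nid handling assumed):

unsigned long start_pfn, end_pfn;
int i, nid;

for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
	/* do something with start_pfn .. end_pfn */
}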
From: Mike Rapoport
Currently, the for_each_mem_range() and for_each_mem_range_rev() iterators are
the most generic way to traverse memblock regions. As such, they have 8
parameters and they are hardly convenient to users. Most users choose to
utilize one of their wrappers and the only user
From: Mike Rapoport
The only user of memblock_dbg() outside memblock was s390 setup code and it
is converted to use pr_debug() instead.
This allows us to stop exposing memblock_debug and memblock_dbg() to the rest
of the kernel.
Signed-off-by: Mike Rapoport
Reviewed-by: Baoquan He
---
arch/s390
From: Mike Rapoport
for_each_memblock_type() is not used outside mm/memblock.c, move it there
from include/linux/memblock.h
Signed-off-by: Mike Rapoport
Reviewed-by: Baoquan He
---
include/linux/memblock.h | 5 -
mm/memblock.c | 5 +
2 files changed, 5 insertions(+), 5
From: Mike Rapoport
microblaze supports neither NUMA nor SPARSEMEM, so there is no point in
calling memblock_set_node() and sparse_memory_present_with_active_regions()
during microblaze memory initialization.
Remove these calls and the surrounding code.
Signed-off-by: Mike
From: Mike Rapoport
RISC-V does not (yet) support NUMA, and for UMA architectures node 0 is
used implicitly during early memory initialization.
There is no need to call memblock_set_node(), so remove this call and the
surrounding code.
Signed-off-by: Mike Rapoport
---
arch/riscv/mm/init.c | 9
From: Mike Rapoport
Instead of traversing memblock.memory regions to find memory_start and
memory_end, simply query memblock_{start,end}_of_DRAM().
Signed-off-by: Mike Rapoport
Acked-by: Stafford Horne
---
arch/h8300/kernel/setup.c | 8 +++-
arch/nds32/kernel/setup.c | 8
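The simplification amounts to (sketch):

phys_addr_t memory_start = memblock_start_of_DRAM();
phys_addr_t memory_end = memblock_end_of_DRAM();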
From: Mike Rapoport
dummy_numa_init() loops over memblock.memory and passes nid=0 to
numa_add_memblk(), which essentially wraps memblock_set_node(). However,
memblock_set_node() can cope with the entire memory span itself, so the loop
over memblock.memory regions is redundant.
Using a single call
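A sketch of the single-call version, where PHYS_ADDR_MAX covers the whole span:

int ret = memblock_set_node(0, PHYS_ADDR_MAX, &memblock.memory, 0);

if (ret < 0)
	return ret;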
From: Mike Rapoport
The function free_highpages() in both arm and xtensa essentially open-codes
the for_each_free_mem_range() loop to detect high memory pages that were not
reserved and that should be initialized and passed to the buddy allocator.
Replace open-coded implementation
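A sketch of the converted loop (PFN rounding and the highmem cutoff are assumed details):

phys_addr_t range_start, range_end;
u64 i;

for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
			&range_start, &range_end, NULL) {
	unsigned long start = PFN_UP(range_start);
	unsigned long end = PFN_DOWN(range_end);

	/* initialize and free the high pages in [start, end) */
}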
From: Mike Rapoport
The memory size calculation in cma_early_percent_memory() traverses
memblock.memory rather than simply calling memblock_phys_mem_size(). The
comment in that function suggests that at some point there should have been
a call to memblock_analyze() before memblock_phys_mem_size
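The traversal then collapses to a single call (sketch; the percentage define is assumed from the CMA Kconfig):

static phys_addr_t __init cma_early_percent_memory(void)
{
	return memblock_phys_mem_size() * CONFIG_CMA_SIZE_PERCENTAGE / 100;
}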
From: Mike Rapoport
The memory size calculation in kvm_cma_reserve() traverses memblock.memory
rather than simply calling memblock_phys_mem_size(). The comment in that
function suggests that at some point there should have been a call to
memblock_analyze() before memblock_phys_mem_size() could
From: Mike Rapoport
Hi,
These patches simplify several uses of memblock iterators and hide some of
the memblock implementation details from the rest of the system.
The patches are on top of v5.9-rc1
v3 changes:
* rebase on v5.9-rc1; as a result this required some non-trivial changes
On Wed, Aug 05, 2020 at 12:20:24PM +0800, Baoquan He wrote:
> On 08/02/20 at 07:35pm, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > Currently, initrd image is reserved very early during setup and then it
> > might be relocated and re-reserved after the initial
From: Mike Rapoport
for_each_memblock() is used to iterate over memblock.memory in
a few places that use data from memblock_region rather than the memory
ranges.
Introduce separate for_each_mem_region() and for_each_reserved_mem_region()
to improve encapsulation of memblock internals from its
From: Mike Rapoport
Iteration over memblock.reserved with for_each_reserved_mem_region() used
__next_reserved_mem_region() that implemented a subset of
__next_mem_region().
Use __for_each_mem_range() and, essentially, __next_mem_region() with
appropriate parameters to reduce code duplication
From: Mike Rapoport
The only user of memblock_mem_size() was x86 setup code; it is gone now and
the memblock_mem_size() function can be removed.
Signed-off-by: Mike Rapoport
---
include/linux/memblock.h | 1 -
mm/memblock.c | 15 ---
2 files changed, 16 deletions(-)
diff
From: Mike Rapoport
* Replace magic numbers with defines
* Replace memblock_find_in_range() + memblock_reserve() with
memblock_phys_alloc_range()
* Stop checking for low memory size in reserve_crashkernel_low(). The
allocation from a limited range will fail anyway if there is not enough
From: Mike Rapoport
Currently, the initrd image is reserved very early during setup and then it
might be relocated and re-reserved after the initial physical memory
mapping is created. The "late" reservation of memblock verifies that mapped
memory size exceeds the size of initrd, the chec
From: Mike Rapoport
There are several occurrences of the following pattern:
for_each_memblock(memory, reg) {
start = __pfn_to_phys(memblock_region_memory_base_pfn(reg));
end = __pfn_to_phys(memblock_region_memory_end_pfn(reg));
/* do
From: Mike Rapoport
There are several occurrences of the following pattern:
for_each_memblock(memory, reg) {
start_pfn = memblock_region_memory_base_pfn(reg);
end_pfn = memblock_region_memory_end_pfn(reg);
/* do something with start_pfn
From: Mike Rapoport
Currently, the for_each_mem_range() iterator is the most generic way to traverse
memblock regions. As such, it has 8 parameters and it is hardly convenient
to users. Most users choose to utilize one of its wrappers and the only
user that actually needs most of the parameters
From: Mike Rapoport
The only user of memblock_dbg() outside memblock was s390 setup code and it
is converted to use pr_debug() instead.
This allows us to stop exposing memblock_debug and memblock_dbg() to the rest
of the kernel.
Signed-off-by: Mike Rapoport
Reviewed-by: Baoquan He
---
arch/s390
From: Mike Rapoport
for_each_memblock_type() is not used outside mm/memblock.c, move it there
from include/linux/memblock.h
Signed-off-by: Mike Rapoport
Reviewed-by: Baoquan He
---
include/linux/memblock.h | 5 -
mm/memblock.c | 5 +
2 files changed, 5 insertions(+), 5
From: Mike Rapoport
microblaze supports neither NUMA nor SPARSEMEM, so there is no point in
calling memblock_set_node() and sparse_memory_present_with_active_regions()
during microblaze memory initialization.
Remove these calls and the surrounding code.
Signed-off-by: Mike
From: Mike Rapoport
RISC-V does not (yet) support NUMA, and for UMA architectures node 0 is
used implicitly during early memory initialization.
There is no need to call memblock_set_node(), so remove this call and the
surrounding code.
Signed-off-by: Mike Rapoport
---
arch/riscv/mm/init.c | 9