Re: [PATCH v1 1/1] kernel.h: Split out panic and oops helpers

2021-04-06 Thread Mike Rapoport
b folder to use new header. > Though for time being include new header back to kernel.h to avoid twisted > indirected includes for existing users. > > Signed-off-by: Andy Shevchenko Acked-by: Mike Rapoport > --- > arch/powerpc/kernel/setup-common.c | 1 + > arch/

Re: [PATCH v2] mm: Move mem_init_print_info() into mm_init()

2021-03-31 Thread Mike Rapoport
On Wed, Mar 17, 2021 at 09:52:10AM +0800, Kefeng Wang wrote: > mem_init_print_info() is called in mem_init() on each architecture, > and pass NULL argument, so using void argument and move it into mm_init(). > > Acked-by: Dave Hansen > Signed-off-by: Kefeng Wang Acked-by
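
The shape of the change, as a rough sketch (per the description above: the NULL argument every architecture used to pass is dropped, and the single remaining call site moves into mm_init() in init/main.c):

	/* before: at the end of every architecture's mem_init() */
	mem_init_print_info(NULL);

	/* after: no argument, one call site */
	static void __init mm_init(void)
	{
		mem_init();
		mem_init_print_info();	/* moved here from each arch's mem_init() */
		/* remaining early-mm initialization unchanged */
	}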

Re: [PATCH v2 2/2] memblock: do not start bottom-up allocations with kernel_end

2021-01-23 Thread Mike Rapoport
On Sat, Jan 23, 2021 at 06:09:11PM -0800, Andrew Morton wrote: > On Fri, 22 Jan 2021 01:37:14 -0300 Thiago Jung Bauermann > wrote: > > > Mike Rapoport writes: > > > > > > Signed-off-by: Roman Gushchin > > > > > > Reviewed-by: Mike Rapopor

Re: [PATCH] arch: pgtable: define MAX_POSSIBLE_PHYSMEM_BITS where needed

2020-11-14 Thread Mike Rapoport
_64BIT > - On RISC-V, the normal page table format can support 34 bit >addressing. There is no highmem support on RISC-V, so anything >above 2GB is unused, but it might be useful to eventually support >CONFIG_ZRAM for high pages. > > Fixes: 61989a80fb3a ("staging:
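
For the RISC-V case described above, the fix boils down to a single definition (a sketch; 34 bits matches the Sv32 page table format's physical address width):

	/* arch/riscv/include/asm/pgtable-32.h */
	#define MAX_POSSIBLE_PHYSMEM_BITS 34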

[PATCH v7 4/4] arch, mm: make kernel_page_present() always available

2020-11-09 Thread Mike Rapoport
From: Mike Rapoport For architectures that enable ARCH_HAS_SET_MEMORY, having the ability to verify that a page is mapped in the kernel direct map can be useful regardless of hibernation. Add RISC-V implementation of kernel_page_present(), update its forward declarations and stubs to be a part
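
A kernel_page_present() implementation is essentially a page-table walk down to the PTE; a minimal sketch of the RISC-V flavor (details such as huge-page handling are omitted):

	bool kernel_page_present(struct page *page)
	{
		unsigned long addr = (unsigned long)page_address(page);
		pgd_t *pgd;
		p4d_t *p4d;
		pud_t *pud;
		pmd_t *pmd;
		pte_t *pte;

		pgd = pgd_offset_k(addr);
		if (!pgd_present(*pgd))
			return false;
		p4d = p4d_offset(pgd, addr);
		if (!p4d_present(*p4d))
			return false;
		pud = pud_offset(p4d, addr);
		if (!pud_present(*pud))
			return false;
		pmd = pmd_offset(pud, addr);
		if (!pmd_present(*pmd))
			return false;
		pte = pte_offset_kernel(pmd, addr);
		return pte_present(*pte);
	}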

[PATCH v7 3/4] arch, mm: restore dependency of __kernel_map_pages() on DEBUG_PAGEALLOC

2020-11-09 Thread Mike Rapoport
From: Mike Rapoport The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must never fail. With this assumption it wouldn't be safe to allow general usage of this function. Moreover, some architectures that implement __kernel_map_pages() have this function guarded by #ifdef
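
The restored arrangement in include/linux/mm.h looks roughly like this (a sketch; the declaration is the architecture's to provide, and the !DEBUG_PAGEALLOC stub keeps generic code building):

	#ifdef CONFIG_DEBUG_PAGEALLOC
	extern void __kernel_map_pages(struct page *page, int numpages, int enable);
	#else
	static inline void __kernel_map_pages(struct page *page,
					      int numpages, int enable) {}
	#endif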

[PATCH v7 2/4] PM: hibernate: make direct map manipulations more explicit

2020-11-09 Thread Mike Rapoport
From: Mike Rapoport When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page may not be present in the direct map and has to be explicitly mapped before it can be copied. Introduce hibernate_map_page() and hibernate_unmap_page() that will explicitly use set_direct_map_{default
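
A minimal sketch of the map side, assuming the helper names from the description above (hibernate_unmap_page() mirrors it with set_direct_map_invalid_noflush()):

	static void hibernate_map_page(struct page *page)
	{
		if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
			int ret = set_direct_map_default_noflush(page);

			if (ret)
				pr_warn_once("Failed to remap page\n");
		} else {
			debug_pagealloc_map_pages(page, 1);
		}
	}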

[PATCH v7 1/4] mm: introduce debug_pagealloc_{map, unmap}_pages() helpers

2020-11-09 Thread Mike Rapoport
From: Mike Rapoport When CONFIG_DEBUG_PAGEALLOC is enabled, the kernel unmaps pages from the direct mapping after free_pages(). The pages then need to be mapped back before they can be used. These mapping operations use __kernel_map_pages() guarded with debug_pagealloc_enabled
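
The helpers simply wrap the existing call together with its guard, roughly:

	static inline void debug_pagealloc_map_pages(struct page *page, int numpages)
	{
		if (debug_pagealloc_enabled_static())
			__kernel_map_pages(page, numpages, 1);
	}

	static inline void debug_pagealloc_unmap_pages(struct page *page, int numpages)
	{
		if (debug_pagealloc_enabled_static())
			__kernel_map_pages(page, numpages, 0);
	}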

[PATCH v7 0/4] arch, mm: improve robustness of direct map manipulation

2020-11-09 Thread Mike Rapoport
From: Mike Rapoport Hi, During recent discussion about KVM protected memory, David raised a concern about usage of __kernel_map_pages() outside of DEBUG_PAGEALLOC scope [1]. Indeed, for architectures that define CONFIG_ARCH_HAS_SET_DIRECT_MAP it is possible that __kernel_map_pages() would fail

Re: [PATCH v6 0/4] arch, mm: improve robustness of direct map manipulation

2020-11-09 Thread Mike Rapoport
Oops, this one has some rebase errors, I'll send v7 soon. Sorry for the noise. On Mon, Nov 09, 2020 at 06:24:11PM +0200, Mike Rapoport wrote: > From: Mike Rapoport > > Hi, > > During recent discussion about KVM protected memory, David raised a concern > about usage of

[PATCH v6 4/4] arch, mm: make kernel_page_present() always available

2020-11-09 Thread Mike Rapoport
From: Mike Rapoport For architectures that enable ARCH_HAS_SET_MEMORY, having the ability to verify that a page is mapped in the kernel direct map can be useful regardless of hibernation. Add RISC-V implementation of kernel_page_present(), update its forward declarations and stubs to be a part

[PATCH v6 3/4] arch, mm: restore dependency of __kernel_map_pages() on DEBUG_PAGEALLOC

2020-11-09 Thread Mike Rapoport
From: Mike Rapoport The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must never fail. With this assumption it wouldn't be safe to allow general usage of this function. Moreover, some architectures that implement __kernel_map_pages() have this function guarded by #ifdef

[PATCH v6 2/4] PM: hibernate: make direct map manipulations more explicit

2020-11-09 Thread Mike Rapoport
From: Mike Rapoport When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page may not be present in the direct map and has to be explicitly mapped before it can be copied. Introduce hibernate_map_page() and hibernate_unmap_page() that will explicitly use set_direct_map_{default

[PATCH v6 1/4] mm: introduce debug_pagealloc_{map, unmap}_pages() helpers

2020-11-09 Thread Mike Rapoport
From: Mike Rapoport When CONFIG_DEBUG_PAGEALLOC is enabled, the kernel unmaps pages from the direct mapping after free_pages(). The pages then need to be mapped back before they can be used. These mapping operations use __kernel_map_pages() guarded with debug_pagealloc_enabled

[PATCH v6 0/4] arch, mm: improve robustness of direct map manipulation

2020-11-09 Thread Mike Rapoport
From: Mike Rapoport Hi, During recent discussion about KVM protected memory, David raised a concern about usage of __kernel_map_pages() outside of DEBUG_PAGEALLOC scope [1]. Indeed, for architectures that define CONFIG_ARCH_HAS_SET_DIRECT_MAP it is possible that __kernel_map_pages() would fail

Re: [PATCH v5 1/5] mm: introduce debug_pagealloc_{map, unmap}_pages() helpers

2020-11-09 Thread Mike Rapoport
On Mon, Nov 09, 2020 at 12:33:46PM +0100, Vlastimil Babka wrote: > On 11/8/20 7:57 AM, Mike Rapoport wrote: > > --- a/mm/slab.c > > +++ b/mm/slab.c > > @@ -1428,21 +1428,19 @@ static bool is_debug_pagealloc_cache(struct > > kmem_cache *cachep) > >

[PATCH v5 5/5] arch, mm: make kernel_page_present() always available

2020-11-07 Thread Mike Rapoport
From: Mike Rapoport For architectures that enable ARCH_HAS_SET_MEMORY, having the ability to verify that a page is mapped in the kernel direct map can be useful regardless of hibernation. Add RISC-V implementation of kernel_page_present(), update its forward declarations and stubs to be a part

[PATCH v5 4/5] arch, mm: restore dependency of __kernel_map_pages() on DEBUG_PAGEALLOC

2020-11-07 Thread Mike Rapoport
From: Mike Rapoport The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must never fail. With this assumption it wouldn't be safe to allow general usage of this function. Moreover, some architectures that implement __kernel_map_pages() have this function guarded by #ifdef

[PATCH v5 3/5] PM: hibernate: make direct map manipulations more explicit

2020-11-07 Thread Mike Rapoport
From: Mike Rapoport When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page may not be present in the direct map and has to be explicitly mapped before it can be copied. Introduce hibernate_map_page() and hibernate_unmap_page() that will explicitly use set_direct_map_{default

[PATCH v5 2/5] slab: debug: split slab_kernel_map() to map and unmap variants

2020-11-07 Thread Mike Rapoport
From: Mike Rapoport Instead of using slab_kernel_map() with a 'map' parameter to remap pages when DEBUG_PAGEALLOC is enabled, use dedicated helpers slab_kernel_map() and slab_kernel_unmap(). Signed-off-by: Mike Rapoport --- mm/slab.c | 26 +++--- 1 file changed, 15

[PATCH v5 1/5] mm: introduce debug_pagealloc_{map, unmap}_pages() helpers

2020-11-07 Thread Mike Rapoport
From: Mike Rapoport When CONFIG_DEBUG_PAGEALLOC is enabled, the kernel unmaps pages from the direct mapping after free_pages(). The pages then need to be mapped back before they can be used. These mapping operations use __kernel_map_pages() guarded with debug_pagealloc_enabled

[PATCH v5 0/5] arch, mm: improve robustness of direct map manipulation

2020-11-07 Thread Mike Rapoport
From: Mike Rapoport Hi, During recent discussion about KVM protected memory, David raised a concern about usage of __kernel_map_pages() outside of DEBUG_PAGEALLOC scope [1]. Indeed, for architectures that define CONFIG_ARCH_HAS_SET_DIRECT_MAP it is possible that __kernel_map_pages() would fail

Re: [PATCH v4 3/4] arch, mm: restore dependency of __kernel_map_pages() of DEBUG_PAGEALLOC

2020-11-05 Thread Mike Rapoport
On Wed, Nov 04, 2020 at 07:02:20PM +0100, Vlastimil Babka wrote: > On 11/3/20 5:20 PM, Mike Rapoport wrote: > > From: Mike Rapoport > > Subject should have "on DEBUG_PAGEALLOC" ? > > > The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must ne

Re: [PATCH v4 2/4] PM: hibernate: make direct map manipulations more explicit

2020-11-05 Thread Mike Rapoport
On Wed, Nov 04, 2020 at 06:40:28PM +0100, Vlastimil Babka wrote: > On 11/3/20 5:20 PM, Mike Rapoport wrote: > > From: Mike Rapoport > > > > When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page may be > > not present in the direct map and has to

Re: [PATCH v4 1/4] mm: introduce debug_pagealloc_map_pages() helper

2020-11-05 Thread Mike Rapoport
On Wed, Nov 04, 2020 at 06:35:50PM +0100, Vlastimil Babka wrote: > On 11/3/20 5:20 PM, Mike Rapoport wrote: > > From: Mike Rapoport > > > > When CONFIG_DEBUG_PAGEALLOC is enabled, it unmaps pages from the kernel > > direct mapping after free_pages(). The pages

Re: [PATCH v1 3/4] powerpc/mm: remove linear mapping if __add_pages() fails in arch_add_memory()

2020-11-04 Thread Mike Rapoport
On Wed, Nov 04, 2020 at 10:50:07AM +0100, osalvador wrote: > On Thu, Oct 29, 2020 at 05:27:17PM +0100, David Hildenbrand wrote: > > Let's revert what we did in case seomthing goes wrong and we return an > > error. > > Dumb question, but should not we do this for other arches as well? It seems

[PATCH v4 4/4] arch, mm: make kernel_page_present() always available

2020-11-03 Thread Mike Rapoport
From: Mike Rapoport For architectures that enable ARCH_HAS_SET_MEMORY, having the ability to verify that a page is mapped in the kernel direct map can be useful regardless of hibernation. Add RISC-V implementation of kernel_page_present(), update its forward declarations and stubs to be a part

[PATCH v4 3/4] arch, mm: restore dependency of __kernel_map_pages() of DEBUG_PAGEALLOC

2020-11-03 Thread Mike Rapoport
From: Mike Rapoport The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must never fail. With this assumption it wouldn't be safe to allow general usage of this function. Moreover, some architectures that implement __kernel_map_pages() have this function guarded by #ifdef

[PATCH v4 2/4] PM: hibernate: make direct map manipulations more explicit

2020-11-03 Thread Mike Rapoport
From: Mike Rapoport When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page may not be present in the direct map and has to be explicitly mapped before it can be copied. Introduce hibernate_map_page() that will explicitly use set_direct_map_{default,invalid}_noflush

[PATCH v4 1/4] mm: introduce debug_pagealloc_map_pages() helper

2020-11-03 Thread Mike Rapoport
From: Mike Rapoport When CONFIG_DEBUG_PAGEALLOC is enabled, the kernel unmaps pages from the direct mapping after free_pages(). The pages then need to be mapped back before they can be used. These mapping operations use __kernel_map_pages() guarded with debug_pagealloc_enabled

[PATCH v4 0/4] arch, mm: improve robustness of direct map manipulation

2020-11-03 Thread Mike Rapoport
From: Mike Rapoport Hi, During recent discussion about KVM protected memory, David raised a concern about usage of __kernel_map_pages() outside of DEBUG_PAGEALLOC scope [1]. Indeed, for architectures that define CONFIG_ARCH_HAS_SET_DIRECT_MAP it is possible that __kernel_map_pages() would fail

Re: [PATCH v3 2/4] PM: hibernate: make direct map manipulations more explicit

2020-11-03 Thread Mike Rapoport
On Tue, Nov 03, 2020 at 05:39:16PM +0300, Kirill A. Shutemov wrote: > On Tue, Nov 03, 2020 at 02:13:50PM +0200, Mike Rapoport wrote: > > > > + > > > > + if (WARN_ON(ret)) > > > > > > _ONCE? > > > > I've changed it to p

Re: [PATCH v3 2/4] PM: hibernate: make direct map manipulations more explicit

2020-11-03 Thread Mike Rapoport
On Tue, Nov 03, 2020 at 02:08:16PM +0300, Kirill A. Shutemov wrote: > On Sun, Nov 01, 2020 at 07:08:13PM +0200, Mike Rapoport wrote: > > diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c > > index 46b1804c1ddf..054c8cce4236 100644 > > --- a/kernel/power/snapsh

Re: [PATCH v3 4/4] arch, mm: make kernel_page_present() always available

2020-11-02 Thread Mike Rapoport
On Mon, Nov 02, 2020 at 10:28:14AM +0100, David Hildenbrand wrote: > On 01.11.20 18:08, Mike Rapoport wrote: > > From: Mike Rapoport > > > > For architectures that enable ARCH_HAS_SET_MEMORY having the ability to > > verify that a page is mapped in the kern

Re: [PATCH v3 3/4] arch, mm: restore dependency of __kernel_map_pages() of DEBUG_PAGEALLOC

2020-11-02 Thread Mike Rapoport
On Mon, Nov 02, 2020 at 10:23:20AM +0100, David Hildenbrand wrote: > > > int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long > > address, > >unsigned numpages, unsigned long page_flags) > > diff --git a/include/linux/mm.h b/include/linux/mm.h >

Re: [PATCH v3 2/4] PM: hibernate: make direct map manipulations more explicit

2020-11-02 Thread Mike Rapoport
On Mon, Nov 02, 2020 at 10:19:36AM +0100, David Hildenbrand wrote: > On 01.11.20 18:08, Mike Rapoport wrote: > > From: Mike Rapoport > > > > When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page may be > > not present in the direct map and has to

[PATCH v3 4/4] arch, mm: make kernel_page_present() always available

2020-11-01 Thread Mike Rapoport
From: Mike Rapoport For architectures that enable ARCH_HAS_SET_MEMORY, having the ability to verify that a page is mapped in the kernel direct map can be useful regardless of hibernation. Add RISC-V implementation of kernel_page_present(), update its forward declarations and stubs to be a part

[PATCH v3 3/4] arch, mm: restore dependency of __kernel_map_pages() of DEBUG_PAGEALLOC

2020-11-01 Thread Mike Rapoport
From: Mike Rapoport The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must never fail. With this assumption it wouldn't be safe to allow general usage of this function. Moreover, some architectures that implement __kernel_map_pages() have this function guarded by #ifdef

[PATCH v3 2/4] PM: hibernate: make direct map manipulations more explicit

2020-11-01 Thread Mike Rapoport
From: Mike Rapoport When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page may not be present in the direct map and has to be explicitly mapped before it can be copied. Introduce hibernate_map_page() that will explicitly use set_direct_map_{default,invalid}_noflush

[PATCH v3 1/4] mm: introduce debug_pagealloc_map_pages() helper

2020-11-01 Thread Mike Rapoport
From: Mike Rapoport When CONFIG_DEBUG_PAGEALLOC is enabled, the kernel unmaps pages from the direct mapping after free_pages(). The pages then need to be mapped back before they can be used. These mapping operations use __kernel_map_pages() guarded with debug_pagealloc_enabled

[PATCH v3 0/4] arch, mm: improve robustness of direct map manipulation

2020-11-01 Thread Mike Rapoport
From: Mike Rapoport Hi, During recent discussion about KVM protected memory, David raised a concern about usage of __kernel_map_pages() outside of DEBUG_PAGEALLOC scope [1]. Indeed, for architectures that define CONFIG_ARCH_HAS_SET_DIRECT_MAP it is possible that __kernel_map_pages() would fail

Re: [PATCH 2/4] PM: hibernate: improve robustness of mapping pages in the direct map

2020-11-01 Thread Mike Rapoport
On Thu, Oct 29, 2020 at 11:19:18PM +, Edgecombe, Rick P wrote: > On Thu, 2020-10-29 at 09:54 +0200, Mike Rapoport wrote: > > __kernel_map_pages() on arm64 will also bail out if rodata_full is > > false: > > void __kernel_map_pages(struct page *page, int

[PATCH v2 4/4] arch, mm: make kernel_page_present() always available

2020-10-29 Thread Mike Rapoport
From: Mike Rapoport For architectures that enable ARCH_HAS_SET_MEMORY, having the ability to verify that a page is mapped in the kernel direct map can be useful regardless of hibernation. Add RISC-V implementation of kernel_page_present(), update its forward declarations and stubs to be a part

[PATCH v2 3/4] arch, mm: restore dependency of __kernel_map_pages() of DEBUG_PAGEALLOC

2020-10-29 Thread Mike Rapoport
From: Mike Rapoport The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must never fail. With this assumption it wouldn't be safe to allow general usage of this function. Moreover, some architectures that implement __kernel_map_pages() have this function guarded by #ifdef

[PATCH v2 2/4] PM: hibernate: make direct map manipulations more explicit

2020-10-29 Thread Mike Rapoport
From: Mike Rapoport When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page may not be present in the direct map and has to be explicitly mapped before it can be copied. On arm64 it is possible that a page would be removed from the direct map using set_direct_map_invalid_noflush

[PATCH v2 1/4] mm: introduce debug_pagealloc_map_pages() helper

2020-10-29 Thread Mike Rapoport
From: Mike Rapoport When CONFIG_DEBUG_PAGEALLOC is enabled, the kernel unmaps pages from the direct mapping after free_pages(). The pages then need to be mapped back before they can be used. These mapping operations use __kernel_map_pages() guarded with debug_pagealloc_enabled

[PATCH v2 0/4] arch, mm: improve robustness of direct map manipulation

2020-10-29 Thread Mike Rapoport
From: Mike Rapoport Hi, During recent discussion about KVM protected memory, David raised a concern about usage of __kernel_map_pages() outside of DEBUG_PAGEALLOC scope [1]. Indeed, for architectures that define CONFIG_ARCH_HAS_SET_DIRECT_MAP it is possible that __kernel_map_pages() would fail

Re: [PATCH 0/4] arch, mm: improve robustness of direct map manipulation

2020-10-29 Thread Mike Rapoport
On Wed, Oct 28, 2020 at 09:03:31PM +, Edgecombe, Rick P wrote: > > On Wed, Oct 28, 2020 at 11:20:12AM +, Will Deacon wrote: > > > On Tue, Oct 27, 2020 at 10:38:16AM +0200, Mike Rapoport wrote: > > > > > &

Re: [PATCH 2/4] PM: hibernate: improve robustness of mapping pages in the direct map

2020-10-29 Thread Mike Rapoport
On Wed, Oct 28, 2020 at 09:15:38PM +, Edgecombe, Rick P wrote: > On Sun, 2020-10-25 at 12:15 +0200, Mike Rapoport wrote: > > + if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) { > > + unsigned long addr = (unsigned > > long)page_address(page); > &

Re: [PATCH 0/4] arch, mm: improve robustness of direct map manipulation

2020-10-28 Thread Mike Rapoport
On Wed, Oct 28, 2020 at 12:17:35PM +0100, David Hildenbrand wrote: > On 28.10.20 12:09, Mike Rapoport wrote: > > On Tue, Oct 27, 2020 at 09:46:35AM +0100, David Hildenbrand wrote: > > > On 27.10.20 09:38, Mike Rapoport wrote: > > > > On Mon, Oct 26, 2020 at 06:05

Re: [PATCH 0/4] arch, mm: improve robustness of direct map manipulation

2020-10-28 Thread Mike Rapoport
On Wed, Oct 28, 2020 at 11:20:12AM +, Will Deacon wrote: > On Tue, Oct 27, 2020 at 10:38:16AM +0200, Mike Rapoport wrote: > > On Mon, Oct 26, 2020 at 06:05:30PM +, Edgecombe, Rick P wrote: > > > On Mon, 2020-10-26 at 11:05 +0200, Mike Rapoport wrote: > > > >

Re: [PATCH 0/4] arch, mm: improve robustness of direct map manipulation

2020-10-28 Thread Mike Rapoport
On Tue, Oct 27, 2020 at 09:46:35AM +0100, David Hildenbrand wrote: > On 27.10.20 09:38, Mike Rapoport wrote: > > On Mon, Oct 26, 2020 at 06:05:30PM +, Edgecombe, Rick P wrote: > > > > > Beyond whatever you are seeing, for the latter case of new things > > > g

Re: [PATCH 2/4] PM: hibernate: improve robustness of mapping pages in the direct map

2020-10-28 Thread Mike Rapoport
On Tue, Oct 27, 2020 at 10:44:21PM +, Edgecombe, Rick P wrote: > On Tue, 2020-10-27 at 10:49 +0200, Mike Rapoport wrote: > > On Mon, Oct 26, 2020 at 06:57:32PM +, Edgecombe, Rick P wrote: > > > On Mon, 2020-10-26 at 11:15 +0200, Mike Rapoport wrote: > > > >

Re: [PATCH 0/4] arch, mm: improve robustness of direct map manipulation

2020-10-27 Thread Mike Rapoport
On Tue, Oct 27, 2020 at 09:46:35AM +0100, David Hildenbrand wrote: > On 27.10.20 09:38, Mike Rapoport wrote: > > On Mon, Oct 26, 2020 at 06:05:30PM +, Edgecombe, Rick P wrote: > > > On Mon, 2020-10-26 at 11:05 +0200, Mike Rapoport wrote: > > > > On Mon, Oc

Re: [PATCH 2/4] PM: hibernate: improve robustness of mapping pages in the direct map

2020-10-27 Thread Mike Rapoport
On Mon, Oct 26, 2020 at 06:57:32PM +, Edgecombe, Rick P wrote: > On Mon, 2020-10-26 at 11:15 +0200, Mike Rapoport wrote: > > On Mon, Oct 26, 2020 at 12:38:32AM +, Edgecombe, Rick P wrote: > > > On Sun, 2020-10-25 at 12:15 +0200, Mike Rapoport wrote: > >

Re: [PATCH 0/4] arch, mm: improve robustness of direct map manipulation

2020-10-27 Thread Mike Rapoport
On Mon, Oct 26, 2020 at 06:05:30PM +, Edgecombe, Rick P wrote: > On Mon, 2020-10-26 at 11:05 +0200, Mike Rapoport wrote: > > On Mon, Oct 26, 2020 at 01:13:52AM +, Edgecombe, Rick P wrote: > > > On Sun, 2020-10-25 at 12:15 +0200, Mike Rapoport wrote: > > >

Re: [PATCH 1/4] mm: introduce debug_pagealloc_map_pages() helper

2020-10-26 Thread Mike Rapoport
On Mon, Oct 26, 2020 at 12:05:13PM +0100, David Hildenbrand wrote: > On 25.10.20 11:15, Mike Rapoport wrote: > > From: Mike Rapoport > > > > When CONFIG_DEBUG_PAGEALLOC is enabled, it unmaps pages from the > > kernel direct mapping after free_pages(). The pages than

Re: [PATCH 4/4] arch, mm: make kernel_page_present() always available

2020-10-26 Thread Mike Rapoport
On Mon, Oct 26, 2020 at 12:54:01AM +, Edgecombe, Rick P wrote: > On Sun, 2020-10-25 at 12:15 +0200, Mike Rapoport wrote: > > index 7f248fc45317..16f878c26667 100644 > > --- a/arch/x86/mm/pat/set_memory.c > > +++ b/arch/x86/mm/pat/set_memory.c > > @@ -2228,7 +2228,6

Re: [PATCH 2/4] PM: hibernate: improve robustness of mapping pages in the direct map

2020-10-26 Thread Mike Rapoport
On Mon, Oct 26, 2020 at 12:38:32AM +, Edgecombe, Rick P wrote: > On Sun, 2020-10-25 at 12:15 +0200, Mike Rapoport wrote: > > From: Mike Rapoport > > > > When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page may > > be > > not present in the di

Re: [PATCH 0/4] arch, mm: improve robustness of direct map manipulation

2020-10-26 Thread Mike Rapoport
On Mon, Oct 26, 2020 at 01:13:52AM +, Edgecombe, Rick P wrote: > On Sun, 2020-10-25 at 12:15 +0200, Mike Rapoport wrote: > > Indeed, for architectures that define CONFIG_ARCH_HAS_SET_DIRECT_MAP > > it is > > possible that __kernel_map_pages() would fail, but since this >

[PATCH 4/4] arch, mm: make kernel_page_present() always available

2020-10-25 Thread Mike Rapoport
From: Mike Rapoport For architectures that enable ARCH_HAS_SET_MEMORY, having the ability to verify that a page is mapped in the kernel direct map can be useful regardless of hibernation. Add RISC-V implementation of kernel_page_present() and update its forward declarations and stubs

[PATCH 3/4] arch, mm: restore dependency of __kernel_map_pages() of DEBUG_PAGEALLOC

2020-10-25 Thread Mike Rapoport
From: Mike Rapoport The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must never fail. With this assumption it wouldn't be safe to allow general usage of this function. Moreover, some architectures that implement __kernel_map_pages() have this function guarded by #ifdef

[PATCH 2/4] PM: hibernate: improve robustness of mapping pages in the direct map

2020-10-25 Thread Mike Rapoport
From: Mike Rapoport When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page may not be present in the direct map and has to be explicitly mapped before it can be copied. On arm64 it is possible that a page would be removed from the direct map using set_direct_map_invalid_noflush

[PATCH 1/4] mm: introduce debug_pagealloc_map_pages() helper

2020-10-25 Thread Mike Rapoport
From: Mike Rapoport When CONFIG_DEBUG_PAGEALLOC is enabled, the kernel unmaps pages from the direct mapping after free_pages(). The pages then need to be mapped back before they can be used. These mapping operations use __kernel_map_pages() guarded with debug_pagealloc_enabled

[PATCH 0/4] arch, mm: improve robustness of direct map manipulation

2020-10-25 Thread Mike Rapoport
From: Mike Rapoport Hi, During recent discussion about KVM protected memory, David raised a concern about usage of __kernel_map_pages() outside of DEBUG_PAGEALLOC scope [1]. Indeed, for architectures that define CONFIG_ARCH_HAS_SET_DIRECT_MAP it is possible that __kernel_map_pages() would fail

Re: [PATCH v2] mm/gup: fix gup_fast with dynamic page table folding

2020-09-15 Thread Mike Rapoport
ion to pXd values pass original > pXdp pointers down to gup_pXd_range functions. And introduce > pXd_offset_lockless helpers, which take an additional pXd > entry value parameter. This has already been discussed in > https://lkml.kernel.org/r/20190418100218.0a4afd51@mschwideX1 >
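
The idea, in sketch form: the lockless walker reads each top-level entry once with READ_ONCE() and the _lockless helpers derive the next-level table from that local copy rather than re-dereferencing the live page table, which is what goes wrong with s390's dynamically folded tables. The generic fallback can be as simple as:

	/* generic fallback: fold by taking the address of the caller's copy */
	#ifndef p4d_offset_lockless
	#define p4d_offset_lockless(pgdp, pgd, address) p4d_offset(&(pgd), address)
	#endif

	/* usage in the lockless walker (simplified) */
	pgd_t pgd = READ_ONCE(*pgdp);
	p4d_t *p4dp = p4d_offset_lockless(pgdp, pgd, addr);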

Re: [RFC PATCH v2 3/3] mm: make generic pXd_addr_end() macros inline functions

2020-09-07 Thread Mike Rapoport
Hi, Some style comments below. On Mon, Sep 07, 2020 at 08:00:58PM +0200, Gerald Schaefer wrote: > From: Alexander Gordeev > > Since pXd_addr_end() macros take pXd page-table entry as a > parameter it makes sense to check the entry type on compile. > Even though most archs do not make use of
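
For reference, converting the generic pgd_addr_end() macro into an inline function per the RFC would look roughly like this; the pgd argument is unused in the body, but it makes the compiler check the entry type, which is the point of the change:

	#define pgd_addr_end pgd_addr_end
	static inline unsigned long pgd_addr_end(pgd_t pgd, unsigned long addr,
						 unsigned long end)
	{
		unsigned long boundary = (addr + PGDIR_SIZE) & PGDIR_MASK;

		return (boundary - 1 < end - 1) ? boundary : end;
	}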

Re: [RFC PATCH v2 0/3] mm/gup: fix gup_fast with dynamic page table folding

2020-09-07 Thread Mike Rapoport
On Mon, Sep 07, 2020 at 08:00:55PM +0200, Gerald Schaefer wrote: > This is v2 of an RFC previously discussed here: > https://lore.kernel.org/lkml/20200828140314.8556-1-gerald.schae...@linux.ibm.com/ > > Patch 1 is a fix for a regression in gup_fast on s390, after our conversion > to common

Re: [PATCH v3 09/17] memblock: make memblock_debug and related functionality private

2020-08-19 Thread Mike Rapoport
On Wed, Aug 19, 2020 at 12:24:05PM -0700, Andrew Morton wrote: > On Tue, 18 Aug 2020 18:16:26 +0300 Mike Rapoport wrote: > > > From: Mike Rapoport > > > > The only user of memblock_dbg() outside memblock was s390 setup code and it > > is converted to use pr_de

[PATCH v3 17/17] memblock: use separate iterators for memory and reserved regions

2020-08-18 Thread Mike Rapoport
From: Mike Rapoport for_each_memblock() is used to iterate over memblock.memory in a few places that use data from memblock_region rather than the memory ranges. Introduce separate for_each_mem_region() and for_each_reserved_mem_region() to improve encapsulation of memblock internals from its
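
Usage of the new iterators, roughly (one walks memblock.memory, the other memblock.reserved, and both yield the struct memblock_region itself):

	struct memblock_region *r;

	for_each_mem_region(r) {
		/* use r->base, r->size, memblock_get_region_node(r), ... */
	}

	for_each_reserved_mem_region(r) {
		/* same, but over reserved regions */
	}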

[PATCH v3 16/17] memblock: implement for_each_reserved_mem_region() using __next_mem_region()

2020-08-18 Thread Mike Rapoport
From: Mike Rapoport Iteration over memblock.reserved with for_each_reserved_mem_region() used __next_reserved_mem_region() that implemented a subset of __next_mem_region(). Use __for_each_mem_range() and, essentially, __next_mem_region() with appropriate parameters to reduce code duplication

[PATCH v3 15/17] memblock: remove unused memblock_mem_size()

2020-08-18 Thread Mike Rapoport
From: Mike Rapoport The only user of memblock_mem_size() was x86 setup code, it is gone now and the memblock_mem_size() function can be removed. Signed-off-by: Mike Rapoport Reviewed-by: Baoquan He --- include/linux/memblock.h | 1 - mm/memblock.c| 15 --- 2 files

[PATCH v3 14/17] x86/setup: simplify reserve_crashkernel()

2020-08-18 Thread Mike Rapoport
From: Mike Rapoport * Replace magic numbers with defines * Replace memblock_find_in_range() + memblock_reserve() with memblock_phys_alloc_range() * Stop checking for low memory size in reserve_crashkernel_low(). The allocation from limited range will anyway fail if there is not enough
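
The find+reserve consolidation, sketched (the range limits stand in for whatever defines the patch actually uses):

	/* before: find a free range, then reserve it in a separate step */
	crash_base = memblock_find_in_range(CRASH_ALIGN, CRASH_ADDR_LOW_MAX,
					    crash_size, CRASH_ALIGN);
	memblock_reserve(crash_base, crash_size);

	/* after: allocate-and-reserve in one call */
	crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
					       CRASH_ALIGN, CRASH_ADDR_LOW_MAX);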

[PATCH v3 13/17] x86/setup: simplify initrd relocation and reservation

2020-08-18 Thread Mike Rapoport
From: Mike Rapoport Currently, the initrd image is reserved very early during setup and then it might be relocated and re-reserved after the initial physical memory mapping is created. The "late" reservation of memblock verifies that mapped memory size exceeds the size of initrd, then chec

[PATCH v3 12/17] arch, drivers: replace for_each_memblock() with for_each_mem_range()

2020-08-18 Thread Mike Rapoport
From: Mike Rapoport There are several occurrences of the following pattern: for_each_memblock(memory, reg) { start = __pfn_to_phys(memblock_region_memory_base_pfn(reg); end = __pfn_to_phys(memblock_region_memory_end_pfn(reg)); /* do
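
The replacement pattern, sketched:

	phys_addr_t start, end;

	/* before */
	struct memblock_region *reg;

	for_each_memblock(memory, reg) {
		start = __pfn_to_phys(memblock_region_memory_base_pfn(reg));
		end = __pfn_to_phys(memblock_region_memory_end_pfn(reg));
		/* do something with start and end */
	}

	/* after */
	u64 i;

	for_each_mem_range(i, &start, &end) {
		/* do something with start and end */
	}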

[PATCH v3 11/17] arch, mm: replace for_each_memblock() with for_each_mem_pfn_range()

2020-08-18 Thread Mike Rapoport
From: Mike Rapoport There are several occurrences of the following pattern: for_each_memblock(memory, reg) { start_pfn = memblock_region_memory_base_pfn(reg); end_pfn = memblock_region_memory_end_pfn(reg); /* do something with start_pfn
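
And the PFN-based flavor of the same conversion, sketched:

	unsigned long start_pfn, end_pfn;
	int i;

	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
		/* do something with start_pfn and end_pfn */
	}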

[PATCH v3 10/17] memblock: reduce number of parameters in for_each_mem_range()

2020-08-18 Thread Mike Rapoport
From: Mike Rapoport Currently for_each_mem_range() and for_each_mem_range_rev() iterators are the most generic way to traverse memblock regions. As such, they have 8 parameters and they are hardly convenient to users. Most users choose to utilize one of their wrappers and the only user

[PATCH v3 09/17] memblock: make memblock_debug and related functionality private

2020-08-18 Thread Mike Rapoport
From: Mike Rapoport The only user of memblock_dbg() outside memblock was s390 setup code and it is converted to use pr_debug() instead. This makes it possible to stop exposing memblock_debug and memblock_dbg() to the rest of the kernel. Signed-off-by: Mike Rapoport Reviewed-by: Baoquan He --- arch/s390

[PATCH v3 08/17] memblock: make for_each_memblock_type() iterator private

2020-08-18 Thread Mike Rapoport
From: Mike Rapoport for_each_memblock_type() is not used outside mm/memblock.c, move it there from include/linux/memblock.h Signed-off-by: Mike Rapoport Reviewed-by: Baoquan He --- include/linux/memblock.h | 5 - mm/memblock.c| 5 + 2 files changed, 5 insertions(+), 5

[PATCH v3 07/17] microblaze: drop unneeded NUMA and sparsemem initializations

2020-08-18 Thread Mike Rapoport
From: Mike Rapoport microblaze supports neither NUMA nor SPARSEMEM, so there is no point in calling the memblock_set_node() and sparse_memory_present_with_active_regions() functions during microblaze memory initialization. Remove these calls and the surrounding code. Signed-off-by: Mike

[PATCH v3 06/17] riscv: drop unneeded node initialization

2020-08-18 Thread Mike Rapoport
From: Mike Rapoport RISC-V does not (yet) support NUMA, and for UMA architectures node 0 is used implicitly during early memory initialization. There is no need to call memblock_set_node(), so remove this call and the surrounding code. Signed-off-by: Mike Rapoport --- arch/riscv/mm/init.c | 9

[PATCH v3 05/17] h8300, nds32, openrisc: simplify detection of memory extents

2020-08-18 Thread Mike Rapoport
From: Mike Rapoport Instead of traversing memblock.memory regions to find memory_start and memory_end, simply query memblock_{start,end}_of_DRAM(). Signed-off-by: Mike Rapoport Acked-by: Stafford Horne --- arch/h8300/kernel/setup.c| 8 +++- arch/nds32/kernel/setup.c| 8
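
The simplification amounts to two calls (sketch):

	memory_start = memblock_start_of_DRAM();
	memory_end = memblock_end_of_DRAM();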

[PATCH v3 04/17] arm64: numa: simplify dummy_numa_init()

2020-08-18 Thread Mike Rapoport
From: Mike Rapoport dummy_numa_init() loops over memblock.memory and passes nid=0 to numa_add_memblk() which essentially wraps memblock_set_node(). However, memblock_set_node() can cope with the entire memory span itself, so the loop over memblock.memory regions is redundant. Using a single call
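
Sketch of the simplification:

	/* before: numa_add_memblk(0, ...) once per memblock.memory region */
	/* after: cover the whole span with a single call */
	ret = numa_add_memblk(0, memblock_start_of_DRAM(),
			      memblock_end_of_DRAM());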

[PATCH v3 03/17] arm, xtensa: simplify initialization of high memory pages

2020-08-18 Thread Mike Rapoport
From: Mike Rapoport The function free_highpages() in both arm and xtensa essentially open-codes the for_each_free_mem_range() loop to detect high memory pages that were not reserved and that should be initialized and passed to the buddy allocator. Replace the open-coded implementation
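
A sketch of the replacement loop (free_highmem_page() is the generic helper; per-arch details are omitted):

	static void __init free_highpages(void)
	{
		phys_addr_t range_start, range_end;
		u64 i;

		for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
					&range_start, &range_end, NULL) {
			unsigned long start = PFN_UP(range_start);
			unsigned long end = PFN_DOWN(range_end);

			/* skip ranges entirely below highmem, clamp the rest */
			if (end <= max_low_pfn)
				continue;
			if (start < max_low_pfn)
				start = max_low_pfn;

			for (; start < end; start++)
				free_highmem_page(pfn_to_page(start));
		}
	}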

[PATCH v3 02/17] dma-contiguous: simplify cma_early_percent_memory()

2020-08-18 Thread Mike Rapoport
From: Mike Rapoport The memory size calculation in cma_early_percent_memory() traverses memblock.memory rather than simply calling memblock_phys_mem_size(). The comment in that function suggests that at some point there should have been a call to memblock_analyze() before memblock_phys_mem_size
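
The resulting function is a one-liner, roughly (the percentage variable name here is illustrative, not the actual one):

	static phys_addr_t __init cma_early_percent_memory(void)
	{
		/* pct: percentage of memory requested for CMA (name illustrative) */
		return memblock_phys_mem_size() * pct / 100;
	}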

[PATCH v3 01/17] KVM: PPC: Book3S HV: simplify kvm_cma_reserve()

2020-08-18 Thread Mike Rapoport
From: Mike Rapoport The memory size calculation in kvm_cma_reserve() traverses memblock.memory rather than simply calling memblock_phys_mem_size(). The comment in that function suggests that at some point there should have been a call to memblock_analyze() before memblock_phys_mem_size() could

[PATCH v3 00/17] memblock: seasonal cleaning^w cleanup

2020-08-18 Thread Mike Rapoport
From: Mike Rapoport Hi, These patches simplify several uses of memblock iterators and hide some of the memblock implementation details from the rest of the system. The patches are on top of v5.9-rc1 v3 changes: * rebase on v5.9-rc1, as the result this required some non-trivial changes

Re: [PATCH v2 13/17] x86/setup: simplify initrd relocation and reservation

2020-08-05 Thread Mike Rapoport
On Wed, Aug 05, 2020 at 12:20:24PM +0800, Baoquan He wrote: > On 08/02/20 at 07:35pm, Mike Rapoport wrote: > > From: Mike Rapoport > > > > Currently, initrd image is reserved very early during setup and then it > > might be relocated and re-reserved after the initial

[PATCH v2 17/17] memblock: use separate iterators for memory and reserved regions

2020-08-02 Thread Mike Rapoport
From: Mike Rapoport for_each_memblock() is used to iterate over memblock.memory in a few places that use data from memblock_region rather than the memory ranges. Introduce separate for_each_mem_region() and for_each_reserved_mem_region() to improve encapsulation of memblock internals from its

[PATCH v2 16/17] memblock: implement for_each_reserved_mem_region() using __next_mem_region()

2020-08-02 Thread Mike Rapoport
From: Mike Rapoport Iteration over memblock.reserved with for_each_reserved_mem_region() used __next_reserved_mem_region() that implemented a subset of __next_mem_region(). Use __for_each_mem_range() and, essentially, __next_mem_region() with appropriate parameters to reduce code duplication

[PATCH v2 15/17] memblock: remove unused memblock_mem_size()

2020-08-02 Thread Mike Rapoport
From: Mike Rapoport The only user of memblock_mem_size() was x86 setup code, it is gone now and the memblock_mem_size() function can be removed. Signed-off-by: Mike Rapoport --- include/linux/memblock.h | 1 - mm/memblock.c| 15 --- 2 files changed, 16 deletions(-) diff

[PATCH v2 14/17] x86/setup: simplify reserve_crashkernel()

2020-08-02 Thread Mike Rapoport
From: Mike Rapoport * Replace magic numbers with defines * Replace memblock_find_in_range() + memblock_reserve() with memblock_phys_alloc_range() * Stop checking for low memory size in reserve_crashkernel_low(). The allocation from limited range will anyway fail if there is not enough

[PATCH v2 13/17] x86/setup: simplify initrd relocation and reservation

2020-08-02 Thread Mike Rapoport
From: Mike Rapoport Currently, the initrd image is reserved very early during setup and then it might be relocated and re-reserved after the initial physical memory mapping is created. The "late" reservation of memblock verifies that mapped memory size exceeds the size of initrd, the chec

[PATCH v2 12/17] arch, drivers: replace for_each_memblock() with for_each_mem_range()

2020-08-02 Thread Mike Rapoport
From: Mike Rapoport There are several occurrences of the following pattern: for_each_memblock(memory, reg) { start = __pfn_to_phys(memblock_region_memory_base_pfn(reg); end = __pfn_to_phys(memblock_region_memory_end_pfn(reg)); /* do

[PATCH v2 11/17] arch, mm: replace for_each_memblock() with for_each_mem_pfn_range()

2020-08-02 Thread Mike Rapoport
From: Mike Rapoport There are several occurrences of the following pattern: for_each_memblock(memory, reg) { start_pfn = memblock_region_memory_base_pfn(reg); end_pfn = memblock_region_memory_end_pfn(reg); /* do something with start_pfn

[PATCH v2 10/17] memblock: reduce number of parameters in for_each_mem_range()

2020-08-02 Thread Mike Rapoport
From: Mike Rapoport Currently for_each_mem_range() iterator is the most generic way to traverse memblock regions. As such, it has 8 parameters and it is hardly convenient to users. Most users choose to utilize one of its wrappers and the only user that actually needs most of the parameters

[PATCH v2 09/17] memblock: make memblock_debug and related functionality private

2020-08-02 Thread Mike Rapoport
From: Mike Rapoport The only user of memblock_dbg() outside memblock was s390 setup code and it is converted to use pr_debug() instead. This makes it possible to stop exposing memblock_debug and memblock_dbg() to the rest of the kernel. Signed-off-by: Mike Rapoport Reviewed-by: Baoquan He --- arch/s390

[PATCH v2 08/17] memblock: make for_each_memblock_type() iterator private

2020-08-02 Thread Mike Rapoport
From: Mike Rapoport for_each_memblock_type() is not used outside mm/memblock.c, move it there from include/linux/memblock.h Signed-off-by: Mike Rapoport Reviewed-by: Baoquan He --- include/linux/memblock.h | 5 - mm/memblock.c| 5 + 2 files changed, 5 insertions(+), 5

[PATCH v2 07/17] microblaze: drop unneeded NUMA and sparsemem initializations

2020-08-02 Thread Mike Rapoport
From: Mike Rapoport microblaze supports neither NUMA nor SPARSEMEM, so there is no point in calling the memblock_set_node() and sparse_memory_present_with_active_regions() functions during microblaze memory initialization. Remove these calls and the surrounding code. Signed-off-by: Mike

[PATCH v2 06/17] riscv: drop unneeded node initialization

2020-08-02 Thread Mike Rapoport
From: Mike Rapoport RISC-V does not (yet) support NUMA, and for UMA architectures node 0 is used implicitly during early memory initialization. There is no need to call memblock_set_node(), so remove this call and the surrounding code. Signed-off-by: Mike Rapoport --- arch/riscv/mm/init.c | 9
