From: Mike Rapoport
dummy_numa_init() loops over memblock.memory and passes nid=0 to
numa_add_memblk() which essentially wraps memblock_set_node(). However,
memblock_set_node() can cope with the entire memory span itself, so the loop
over memblock.memory regions is redundant.
Using a single call
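A rough sketch of the change described above (the exact bounds in the actual
patch may differ; PHYS_ADDR_MAX is used here as the end of the span):

	/* before: per-region loop with nid=0 */
	for_each_memblock(memory, mblk)
		numa_add_memblk(0, mblk->base, mblk->base + mblk->size);

	/* after: a single call covering the whole memory span */
	numa_add_memblk(0, 0, PHYS_ADDR_MAX);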
From: Mike Rapoport
The function free_highpages() in both arm and xtensa essentially open-codes
a for_each_free_mem_range() loop to detect high memory pages that were not
reserved and that should be initialized and passed to the buddy allocator.
Replace open-coded implementation
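Roughly, the replacement loop looks like the following; free_area_high() stands
in for the arch-specific helper that hands the pages to the buddy allocator,
and the clamping to the highmem boundary is omitted for brevity:

	phys_addr_t range_start, range_end;
	u64 i;

	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
				&range_start, &range_end, NULL) {
		unsigned long start = PFN_UP(range_start);
		unsigned long end = PFN_DOWN(range_end);

		if (start < end)
			free_area_high(start, end);	/* illustrative helper */
	}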
From: Mike Rapoport
The memory size calculation in cma_early_percent_memory() traverses
memblock.memory rather than simply calling memblock_phys_mem_size(). The
comment in that function suggests that at some point there should have been a
call to memblock_analyze() before memblock_phys_mem_size
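The simplification boils down to something like the sketch below, assuming the
function keeps using the existing CONFIG_CMA_SIZE_PERCENTAGE knob; any extra
rounding in the real patch is omitted:

	static phys_addr_t __init cma_early_percent_memory(void)
	{
		return memblock_phys_mem_size() * CONFIG_CMA_SIZE_PERCENTAGE / 100;
	}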
From: Mike Rapoport
The memory size calculation in kvm_cma_reserve() traverses memblock.memory
rather than simply calling memblock_phys_mem_size(). The comment in that
function suggests that at some point there should have been a call to
memblock_analyze() before memblock_phys_mem_size() could
From: Mike Rapoport
Hi,
These patches simplify several uses of memblock iterators and hide some of
the memblock implementation details from the rest of the system.
The patches are on top of v5.9-rc1
v3 changes:
* rebase on v5.9-rc1, as the result this required some non-trivial changes
On Wed, Aug 05, 2020 at 12:20:24PM +0800, Baoquan He wrote:
> On 08/02/20 at 07:35pm, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > Currently, initrd image is reserved very early during setup and then it
> > might be relocated and re-reserved after the initial
From: Mike Rapoport
for_each_memblock() is used to iterate over memblock.memory in
a few places that use data from memblock_region rather than the memory
ranges.
Introduce separate for_each_mem_region() and for_each_reserved_mem_region()
to improve encapsulation of memblock internals from its
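The new iterators are plain loops over the region arrays; in the current tree
they look roughly like:

	#define for_each_mem_region(region)				\
		for (region = memblock.memory.regions;			\
		     region < (memblock.memory.regions + memblock.memory.cnt); \
		     region++)

	#define for_each_reserved_mem_region(region)			\
		for (region = memblock.reserved.regions;		\
		     region < (memblock.reserved.regions + memblock.reserved.cnt); \
		     region++)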
From: Mike Rapoport
Iteration over memblock.reserved with for_each_reserved_mem_region() used
__next_reserved_mem_region() that implemented a subset of
__next_mem_region().
Use __for_each_mem_range() and, essentially, __next_mem_region() with
appropriate parameters to reduce code duplication
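With __next_mem_region() doing the work, the range-based iterator over
memblock.reserved can be expressed as a thin wrapper, approximately:

	#define for_each_reserved_mem_range(i, p_start, p_end)			\
		__for_each_mem_range(i, &memblock.reserved, NULL, NUMA_NO_NODE,	\
				     MEMBLOCK_NONE, p_start, p_end, NULL)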
From: Mike Rapoport
The only user of memblock_mem_size() was the x86 setup code; it is gone now,
so the memblock_mem_size() function can be removed.
Signed-off-by: Mike Rapoport
---
include/linux/memblock.h | 1 -
mm/memblock.c | 15 ---
2 files changed, 16 deletions(-)
diff
From: Mike Rapoport
* Replace magic numbers with defines
* Replace memblock_find_in_range() + memblock_reserve() with
memblock_phys_alloc_range()
* Stop checking for low memory size in reserve_crashkernel_low(). The
allocation from a limited range will fail anyway if there is not enough
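A sketch of the find-plus-reserve replacement; CRASH_ALIGN and
CRASH_ADDR_LOW_MAX stand for the defines mentioned above:

	/* before: find a range, then reserve it in a separate step */
	crash_base = memblock_find_in_range(CRASH_ALIGN, CRASH_ADDR_LOW_MAX,
					    crash_size, CRASH_ALIGN);
	memblock_reserve(crash_base, crash_size);

	/* after: a single allocation from the limited range */
	crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
					       CRASH_ALIGN, CRASH_ADDR_LOW_MAX);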
From: Mike Rapoport
Currently, initrd image is reserved very early during setup and then it
might be relocated and re-reserved after the initial physical memory
mapping is created. The "late" reservation of memblock verifies that mapped
memory size exceeds the size of initrd, the chec
From: Mike Rapoport
There are several occurrences of the following pattern:
for_each_memblock(memory, reg) {
start = __pfn_to_phys(memblock_region_memory_base_pfn(reg));
end = __pfn_to_phys(memblock_region_memory_end_pfn(reg));
/* do
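With the reworked for_each_mem_range() the same pattern becomes, as a sketch:

	phys_addr_t start, end;
	u64 i;

	for_each_mem_range(i, &start, &end) {
		/* do something with start and end */
	}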
From: Mike Rapoport
There are several occurrences of the following pattern:
for_each_memblock(memory, reg) {
start_pfn = memblock_region_memory_base_pfn(reg);
end_pfn = memblock_region_memory_end_pfn(reg);
/* do something with start_pfn
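The PFN-based variant of the pattern maps naturally onto
for_each_mem_pfn_range(), roughly:

	unsigned long start_pfn, end_pfn;
	int i;

	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
		/* do something with start_pfn and end_pfn */
	}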
From: Mike Rapoport
Currently for_each_mem_range() iterator is the most generic way to traverse
memblock regions. As such, it has 8 parameters and is hardly convenient
for its users. Most users choose to utilize one of its wrappers and the only
user that actually needs most of the parameters
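For comparison, a sketch of the old full-blown form versus the simplified
wrapper most callers actually want:

	/* old: all eight parameters spelled out at every call site */
	for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
			   MEMBLOCK_NONE, &start, &end, NULL)

	/* new: the common case */
	for_each_mem_range(i, &start, &end)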
From: Mike Rapoport
The only user of memblock_dbg() outside memblock was the s390 setup code, and
it has been converted to use pr_debug() instead.
This allows us to stop exposing memblock_debug and memblock_dbg() to the rest
of the kernel.
Signed-off-by: Mike Rapoport
Reviewed-by: Baoquan He
---
arch/s390
From: Mike Rapoport
for_each_memblock_type() is not used outside mm/memblock.c; move it there
from include/linux/memblock.h
Signed-off-by: Mike Rapoport
Reviewed-by: Baoquan He
---
include/linux/memblock.h | 5 -
mm/memblock.c | 5 +
2 files changed, 5 insertions(+), 5
From: Mike Rapoport
microblaze supports neither NUMA nor SPARSEMEM, so there is no point in
calling memblock_set_node() and sparse_memory_present_with_active_regions()
functions during microblaze memory initialization.
Remove these calls and the surrounding code.
Signed-off-by: Mike
From: Mike Rapoport
RISC-V does not (yet) support NUMA and for UMA architectures node 0 is
used implicitly during early memory initialization.
There is no need to call memblock_set_node(), remove this call and the
surrounding code.
Signed-off-by: Mike Rapoport
---
arch/riscv/mm/init.c | 9
From: Mike Rapoport
Instead of traversing memblock.memory regions to find memory_start and
memory_end, simply query memblock_{start,end}_of_DRAM().
Signed-off-by: Mike Rapoport
Acked-by: Stafford Horne
---
arch/h8300/kernel/setup.c | 8 +++-
arch/nds32/kernel/setup.c | 8
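The resulting setup code reduces to two queries, as a sketch:

	memory_start = memblock_start_of_DRAM();
	memory_end = memblock_end_of_DRAM();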
From: Mike Rapoport
dummy_numa_init() loops over memblock.memory and passes nid=0 to
numa_add_memblk() which essentially wraps memblock_set_node(). However,
memblock_set_node() can cope with the entire memory span itself, so the loop
over memblock.memory regions is redundant.
Using a single call
From: Mike Rapoport
The function free_highpages() in both arm and xtensa essentially open-codes
a for_each_free_mem_range() loop to detect high memory pages that were not
reserved and that should be initialized and passed to the buddy allocator.
Replace open-coded implementation
From: Mike Rapoport
The memory size calculation in cma_early_percent_memory() traverses
memblock.memory rather than simply calling memblock_phys_mem_size(). The
comment in that function suggests that at some point there should have been a
call to memblock_analyze() before memblock_phys_mem_size
From: Mike Rapoport
The memory size calculation in kvm_cma_reserve() traverses memblock.memory
rather than simply calling memblock_phys_mem_size(). The comment in that
function suggests that at some point there should have been a call to
memblock_analyze() before memblock_phys_mem_size() could
From: Mike Rapoport
Hi,
These patches simplify several uses of memblock iterators and hide some of
the memblock implementation details from the rest of the system.
The patches are on top of v5.8-rc7 + cherry-pick of "mm/sparse: cleanup the
code surrounding memory_present()" [1] from
On Thu, Jul 30, 2020 at 10:15:13PM +1000, Michael Ellerman wrote:
> Mike Rapoport writes:
> > From: Mike Rapoport
> >
> > fadump_reserve_crash_area() reserves memory from a specified base address
> > till the end of the RAM.
> >
> > Replace iteration th
On Tue, Jul 28, 2020 at 07:02:54PM +0800, Baoquan He wrote:
> On 07/28/20 at 08:11am, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > numa_clear_kernel_node_hotplug() function first traverses numa_meminfo
> > regions to set node ID in memblock.
On Tue, Jul 28, 2020 at 12:44:40PM +0200, Ingo Molnar wrote:
>
> * Mike Rapoport wrote:
>
> > From: Mike Rapoport
> >
> > numa_clear_kernel_node_hotplug() function first traverses numa_meminfo
> > regions to set node ID in memblock.reserved and than traverse
From: Mike Rapoport
for_each_memblock() is used exclusively to iterate over memblock.memory in
a few places that use data from memblock_region rather than the memory
ranges.
Remove the type parameter from the for_each_memblock() iterator to improve
encapsulation of memblock internals from its users
From: Mike Rapoport
numa_clear_kernel_node_hotplug() function first traverses numa_meminfo
regions to set node ID in memblock.reserved and then traverses
memblock.reserved to update reserved_nodemask to include node IDs that were
set in the first loop.
Remove redundant traversal over
From: Mike Rapoport
There are several occurrences of the following pattern:
for_each_memblock(memory, reg) {
start = __pfn_to_phys(memblock_region_memory_base_pfn(reg));
end = __pfn_to_phys(memblock_region_memory_end_pfn(reg));
/* do
From: Mike Rapoport
There are several occurrences of the following pattern:
for_each_memblock(memory, reg) {
start_pfn = memblock_region_memory_base_pfn(reg);
end_pfn = memblock_region_memory_end_pfn(reg);
/* do something with start_pfn
From: Mike Rapoport
Currently for_each_mem_range() iterator is the most generic way to traverse
memblock regions. As such, it has 8 parameters and is hardly convenient
for its users. Most users choose to utilize one of its wrappers and the only
user that actually needs most of the parameters
From: Mike Rapoport
The only user of memblock_dbg() outside memblock was the s390 setup code, and
it has been converted to use pr_debug() instead.
This allows us to stop exposing memblock_debug and memblock_dbg() to the rest
of the kernel.
Signed-off-by: Mike Rapoport
---
arch/s390/kernel/setup.c | 4
From: Mike Rapoport
for_each_memblock_type() is not used outside mm/memblock.c; move it there
from include/linux/memblock.h
Signed-off-by: Mike Rapoport
---
include/linux/memblock.h | 5 -
mm/memblock.c | 5 +
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git
From: Mike Rapoport
microblaze supports neither NUMA nor SPARSEMEM, so there is no point in
calling memblock_set_node() and sparse_memory_present_with_active_regions()
functions during microblaze memory initialization.
Remove these calls and the surrounding code.
Signed-off-by: Mike
From: Mike Rapoport
RISC-V does not (yet) support NUMA and for UMA architectures node 0 is
used implicitly during early memory initialization.
There is no need to call memblock_set_node(), remove this call and the
surrounding code.
Signed-off-by: Mike Rapoport
---
arch/riscv/mm/init.c | 9
From: Mike Rapoport
fadump_reserve_crash_area() reserves memory from a specified base address
till the end of the RAM.
Replace the iteration through memblock.memory with a single call to
memblock_reserve() for the appropriate range, which will take care of proper
memory reservation.
Signed-off-by: Mike
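A sketch of the replacement, assuming base is the specified start address; the
actual patch may compute the upper bound differently:

	/* reserve everything from base to the end of RAM in one call */
	memblock_reserve(base, memblock_end_of_DRAM() - base);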
From: Mike Rapoport
Instead of traversing memblock.memory regions to find memory_start and
memory_end, simply query memblock_{start,end}_of_DRAM().
Signed-off-by: Mike Rapoport
---
arch/h8300/kernel/setup.c | 8 +++-
arch/nds32/kernel/setup.c | 8 ++--
arch/openrisc/kernel
From: Mike Rapoport
dummy_numa_init() loops over memblock.memory and passes nid=0 to
numa_add_memblk() which essentially wraps memblock_set_node(). However,
memblock_set_node() can cope with the entire memory span itself, so the loop
over memblock.memory regions is redundant.
Replace the loop
From: Mike Rapoport
The function free_highpages() in both arm and xtensa essentially open-codes
a for_each_free_mem_range() loop to detect high memory pages that were not
reserved and that should be initialized and passed to the buddy allocator.
Replace open-coded implementation
From: Mike Rapoport
The memory size calculation in cma_early_percent_memory() traverses
memblock.memory rather than simply calling memblock_phys_mem_size(). The
comment in that function suggests that at some point there should have been a
call to memblock_analyze() before memblock_phys_mem_size
From: Mike Rapoport
The memory size calculation in kvm_cma_reserve() traverses memblock.memory
rather than simply calling memblock_phys_mem_size(). The comment in that
function suggests that at some point there should have been a call to
memblock_analyze() before memblock_phys_mem_size() could
From: Mike Rapoport
Hi,
These patches simplify several uses of memblock iterators and hide some of
the memblock implementation details from the rest of the system.
The patches are on top of v5.8-rc7 + cherry-pick of "mm/sparse: cleanup the
code surrounding memory_present()" [1] from
On Tue, Jul 07, 2020 at 11:04:13AM -0700, Randy Dunlap wrote:
> Drop the doubled word "the".
>
> Signed-off-by: Randy Dunlap
> Cc: Jonathan Corbet
> Cc: linux-...@vger.kernel.org
> Cc: Andrew Morton
> Cc: linux...@kvack.org
Reviewed-by: Mike Rapoport
Gentle ping.
On Sat, Jun 27, 2020 at 05:34:45PM +0300, Mike Rapoport wrote:
> From: Mike Rapoport
>
> Hi,
>
> Most architectures have very similar versions of pXd_alloc_one() and
> pXd_free_one() for intermediate levels of page table.
> These patches add generic versio
On Wed, Jul 01 2020 at 4:59am -0400,
Christoph Hellwig wrote:
> Instead of setting up the queuedata as well just use one private data
> field.
>
> Signed-off-by: Christoph Hellwig
Acked-by: Mike Snitzer
On Sat, Jun 27, 2020 at 08:03:04PM +0100, Matthew Wilcox wrote:
> On Sat, Jun 27, 2020 at 05:34:49PM +0300, Mike Rapoport wrote:
> > More elaborate versions on arm64 and x86 account memory for the user page
> > tables and call to pgtable_pmd_page_ctor() as the part of PMD page
>
On Sat, Jun 27, 2020 at 08:03:04PM +0100, Matthew Wilcox wrote:
> On Sat, Jun 27, 2020 at 05:34:49PM +0300, Mike Rapoport wrote:
> > More elaborate versions on arm64 and x86 account memory for the user page
> > tables and call to pgtable_pmd_page_ctor() as the part of PMD page
>
> With larger process address spaces and ASLR, the number of PMDs in use
> is higher than it used to be so the inaccuracy is starting to matter.
>
> Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Mike Rapoport
> ---
> include/linux/mm.h | 24
> 1 fil
: http://lkml.kernel.org/r/20200609120533.25867-1-j...@8bytes.org
Signed-off-by: Joerg Roedel
Cc: Peter Zijlstra (Intel)
Cc: Andy Lutomirski
Cc: Abdul Haleem
Cc: Satheesh Rajendran
Cc: Stephen Rothwell
Cc: Steven Rostedt (VMware)
Cc: Mike Rapoport
Cc: Christophe Leroy
Signed-off-by: Mike
From: Mike Rapoport
Most architectures define pgd_free() as a wrapper for free_page().
Provide a generic version in asm-generic/pgalloc.h and enable its use for
most architectures.
Signed-off-by: Mike Rapoport
---
arch/alpha/include/asm/pgalloc.h | 6 --
arch/arm/include/asm
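The generic version added to asm-generic/pgalloc.h is essentially the
following, guarded (here with __HAVE_ARCH_PGD_FREE) so architectures with
special requirements can still override it:

	#ifndef __HAVE_ARCH_PGD_FREE
	static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
	{
		free_page((unsigned long)pgd);
	}
	#endif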
From: Mike Rapoport
Several architectures define pud_alloc_one() as a wrapper for
__get_free_page() and pud_free() as a wrapper for free_page().
Provide a generic implementation in asm-generic/pgalloc.h and use it where
appropriate.
Signed-off-by: Mike Rapoport
---
arch/arm64/include/asm
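A sketch of the generic PUD helpers; the in-tree version also distinguishes
kernel from user allocations via the GFP flags:

	static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
	{
		return (pud_t *)get_zeroed_page(GFP_PGTABLE_USER);
	}

	static inline void pud_free(struct mm_struct *mm, pud_t *pud)
	{
		free_page((unsigned long)pud);
	}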
From: Mike Rapoport
xtensa clears PTEs during allocation of the page tables and pte_clear()
sets the PTE to a non-zero value. Splitting ptes_clear() helper out of
pte_alloc_one() and pte_alloc_one_kernel() allows reuse of base generic
allocation methods (__pte_alloc_one
From: Mike Rapoport
Replace pte_alloc_one(), pte_free() and pte_free_kernel() with the generic
implementation. The only actual functional change is the addition of
__GFP_ACCOUNT for the allocation of the user page tables.
The pte_alloc_one_kernel() is kept back because its implementation
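The __GFP_ACCOUNT addition comes from the GFP flags the generic helpers use
for user page tables, roughly:

	#define GFP_PGTABLE_KERNEL	(GFP_KERNEL | __GFP_ZERO)
	#define GFP_PGTABLE_USER	(GFP_PGTABLE_KERNEL | __GFP_ACCOUNT)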
From: Mike Rapoport
In most cases the asm/pgalloc.h header is required only for allocations
of page table memory. Most of the .c files that include that header do not
use symbols declared in it and do not require that header.
As for the other header files that used to include it, it is
possible to move
From: Mike Rapoport
Hi,
Most architectures have very similar versions of pXd_alloc_one() and
pXd_free_one() for intermediate levels of page table.
These patches add generic versions of these functions in
asm-generic/pgalloc.h and enable use of the generic functions where
appropriate.
In addition, functions
ld be moved from include/linux/ to mm/.
It makes sense, but I am anyway planning consolidation of pgalloc.h, so
most probably pgalloc-track will not survive until 5.9-rc1 :)
If you think that it is worth moving ioremap.c to mm/ regardless of churn,
I can send a patch for that.
> Oh well.
--
Sincerely yours,
Mike.
>
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: Mark Rutland
> Cc: Paul Walmsley
> Cc: Palmer Dabbelt
> Cc: Tony Luck
> Cc: Fenghua Yu
> Cc: Dave Hansen
> Cc: Andy Lutomirski
> Cc: Peter Zijlstra
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: D
st should
> always remain in sync.
>
> Cc: Jonathan Corbet
> Cc: Andrew Morton
> Cc: Mike Rapoport
> Cc: Vineet Gupta
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: Benjamin Herrenschmidt
> Cc: Paul Mackerras
> Cc: Michael Ellerman
> Cc: Heiko C
On Wed, Jun 17, 2020 at 09:21:42AM +1000, Michael Ellerman wrote:
> Andrew Morton writes:
> > On Mon, 15 Jun 2020 12:22:29 +0300 Mike Rapoport wrote:
> >
> >> From: Mike Rapoport
> >>
> >> The pte_update() implementation for PPC_8xx unfolds page tab
From: Mike Rapoport
The pte_update() implementation for PPC_8xx unfolds page table from the PGD
level to access a PMD entry. Since 8xx has only 2-level page table this can
be simplified with pmd_off() shortcut.
Replace explicit unfolding with pmd_off() and drop defines of pgd_index
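With the generic shortcut, the level-by-level unfolding collapses to a single
call; the "before" chain here is illustrative of the folded walk:

	/* before: walk every (folded) level explicitly */
	pmd_t *pmd = pmd_offset(pud_offset(p4d_offset(pgd_offset(mm, addr), addr),
					   addr), addr);

	/* after */
	pmd_t *pmd = pmd_off(mm, addr);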
Hi Greg,
On Mon, Jun 15, 2020 at 01:53:42PM +1000, Greg Ungerer wrote:
> Hi Mike,
>
> From: Mike Rapoport
> > Currently, architectures that use free_area_init() to initialize memory map
> > and node and zone structures need to calculate zone and hole sizes. We can
> &g
-off-by: Joerg Roedel
Acked-by: Mike Rapoport
> ---
> include/linux/mm.h| 45 ---
> include/linux/pgalloc-track.h | 51 +++
> lib/ioremap.c | 1 +
> mm/vmalloc.c | 1 +
>
return NULL;
> - *mod_mask |= PGTBL_P4D_MODIFIED;
> + *mod_mask |= PGTBL_PGD_MODIFIED;
> }
>
> - return pud_offset(p4d, address);
> + return p4d_offset(pgd, address);
> }
>
> +#endif /* !__ARCH_HAS_5LEVEL_HACK */
> +
> static inline pmd_t *pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned
> long address)
> {
> return (unlikely(pud_none(*pud)) && __pmd_alloc(mm, pud, address))?
> --
> 2.26.2
>
--
Sincerely yours,
Mike.
code, the comment /* must be last! */ is stale...
@Peter, @Thomas, can you comment please?
From e51d50ee6f4d1f446decf91c2c67230da14ff82c Mon Sep 17 00:00:00 2001
From: Mike Rapoport
Date: Thu, 4 Jun 2020 12:37:03 +0300
Subject: [PATCH] softirq: don't call lockdep_hardirq_exit() twice
After com
I'm going to bisect between there and HEAD.
The sparc issue should be fixed by
https://lore.kernel.org/lkml/20200526173302.377-1-w...@kernel.org
> Ira
--
Sincerely yours,
Mike.
>>>
> >>> Is there another test I need to run?
> >>
> >> This all petered out, but as I understand it, this patchset still might
> >> have issues on various architectures.
> >>
> >> Can folks please provide an update on the testing st
On Fri, May 15, 2020 at 11:40:12AM -0700, Andrew Morton wrote:
> On Tue, 14 Apr 2020 18:34:44 +0300 Mike Rapoport wrote:
>
> > Implement primitives necessary for the 4th level folding, add walks of p4d
> > level where appropriate, replace 5level-fixup.h with pgtable-nop4
From: Mike Rapoport
All architectures define pgd_offset() as an entry in the array of
PGDs indexed by pgd_index(), where pgd_index() is
(address >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1)
For most cases, pgd_offset() uses mm->pgd as the pointer to the
t
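Conceptually, the generic definition that replaces the per-arch copies is
(the in-tree version wraps this in small overridable helpers):

	#define pgd_index(address)	(((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
	#define pgd_offset(mm, address)	((mm)->pgd + pgd_index(address))
	#define pgd_offset_k(address)	pgd_offset(&init_mm, (address))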
From: Mike Rapoport
All architectures that have at least four-level page tables define
pud_offset() as an entry in the array of PUDs indexed by the pud_index(),
where pud_index() is
(address >> PUD_SHIFT) & (PTRS_PER_PUD - 1)
For most architectures the pud_offset() impl
From: Mike Rapoport
All architectures define pmd_index() as
(address >> PMD_SHIFT) & (PTRS_PER_PMD - 1)
and all architectures that have at least three-level page tables define
pmd_offset() as an entry in the array of PMDs indexed by the pmd_index().
For most arc
From: Mike Rapoport
All architectures define pte_index() as
(address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)
and all architectures define pte_offset_kernel() as an entry
in the array of PTEs indexed by the pte_index().
For most architectures the pte_offset_kernel() impl
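The common implementation that gets hoisted into generic code looks roughly
like:

	#define pte_index(address)	(((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))

	static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long address)
	{
		return (pte_t *)pmd_page_vaddr(*pmd) + pte_index(address);
	}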
From: Mike Rapoport
The powerpc 32-bit implementation of pgtable has nice shortcuts for
accessing kernel PMD and PTE for a given virtual address.
Make these helpers available for all architectures.
Signed-off-by: Mike Rapoport
---
arch/arc/mm/highmem.c | 10 +---
arch/arm
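The powerpc-style shortcuts, once made generic, look approximately like:

	static inline pmd_t *pmd_off_k(unsigned long va)
	{
		return pmd_offset(pud_offset(p4d_offset(pgd_offset_k(va), va), va), va);
	}

	static inline pte_t *virt_to_kpte(unsigned long vaddr)
	{
		pmd_t *pmd = pmd_off_k(vaddr);

		return pmd_none(*pmd) ? NULL : pte_offset_kernel(pmd, vaddr);
	}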
From: Mike Rapoport
There are three cases for the trampoline initialization:
* 32-bit does nothing
* 64-bit with kaslr disabled simply copies a PGD entry from the direct map
to the trampoline PGD
* 64-bit with kaslr enabled maps the real mode trampoline at PUD level
These cases are currently
From: Mike Rapoport
The cache_page() and nocache_page() functions are only used by the motorola
MMU variant for setting caching attributes for the page table pages.
Move the definitions of these functions from
arch/m68k/include/asm/motorola_pgtable.h closer to their usage in
arch/m68k/mm
From: Mike Rapoport
The comment about page table allocation functions resides in
include/asm/motorola_pgtable.h while the functions live in
include/asm/motorola_pgalloc.h.
Move the comment close to the code.
Signed-off-by: Mike Rapoport
---
arch/m68k/include/asm/motorola_pgalloc.h | 6
From: Mike Rapoport
All architectures use pXd_index() to get an entry in the page table page
corresponding to a virtual address.
Align csky with other architectures.
Signed-off-by: Mike Rapoport
---
arch/csky/include/asm/pgtable.h | 5 ++---
arch/csky/mm/fault.c | 2 +-
arch/csky
From: Mike Rapoport
The replacement of asm/pgtable.h with linux/pgtable.h made the include
of the latter land in the middle of asm includes. Fix this up with the aid
of the below script and manual adjustments here and there.
import sys
import re
if len(sys.argv) is not 3:
print "USAG
From: Mike Rapoport
The linux/mm.h header includes asm/pgtable.h to allow inlining of the
functions involving page table manipulations, e.g. pte_alloc() and
pmd_alloc(). So there is no point in explicitly including asm/pgtable.h in
the files that already include linux/mm.h.
The include statements in such cases are removed with a simple
From: Mike Rapoport
Hi,
The low level page table accessors (pXY_index(), pXY_offset()) are
duplicated across all architectures and sometimes more than once. For
instance, we have 31 definitions of pgd_offset() for 25 supported
architectures.
Most of these definitions are actually identical
On Tue, May 12, 2020 at 12:24:41PM -0700, Matthew Wilcox wrote:
> On Tue, May 12, 2020 at 09:44:18PM +0300, Mike Rapoport wrote:
> > +++ b/include/linux/pgtable.h
> > @@ -28,6 +28,24 @@
> > #define USER_PGTABLES_CEILING 0UL
> > #endif
> >
> >
On Tue, May 12, 2020 at 12:20:13PM -0700, Matthew Wilcox wrote:
> On Tue, May 12, 2020 at 09:44:13PM +0300, Mike Rapoport wrote:
> > diff --git a/arch/alpha/kernel/proto.h b/arch/alpha/kernel/proto.h
> > index a093cd45ec79..701a05090141 100644
> > --- a/arch/alpha/kernel/p
From: Mike Rapoport
All architectures define pgd_offset() as an entry in the array of
PGDs indexed by pgd_index(), where pgd_index() is
(address >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1)
For most cases, pgd_offset() uses mm->pgd as the pointer to the
t
From: Mike Rapoport
All architectures that have at least four-level page tables define
pud_offset() as an entry in the array of PUDs indexed by the pud_index(),
where pud_index() is
(address >> PUD_SHIFT) & (PTRS_PER_PUD - 1)
For most architectures the pud_offset() impl
From: Mike Rapoport
All architectures define pmd_index() as
(address >> PMD_SHIFT) & (PTRS_PER_PMD - 1)
and all architectures that have at least three-level page tables define
pmd_offset() as an entry in the array of PMDs indexed by the pmd_index().
For most arc
From: Mike Rapoport
All architectures define pte_index() as
(address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)
and all architectures define pte_offset_kernel() as an entry
in the array of PTEs indexed by the pte_index().
For most architectures the pte_offset_kernel() impl
From: Mike Rapoport
The powerpc 32-bit implementation of pgtable has nice shortcuts for
accessing kernel PMD and PTE for a given virtual address.
Make these helpers available for all architectures.
Signed-off-by: Mike Rapoport
---
arch/arc/mm/highmem.c | 10 +---
arch/arm
From: Mike Rapoport
There are three cases for the trampoline initialization:
* 32-bit does nothing
* 64-bit with kaslr disabled simply copies a PGD entry from the direct map
to the trampoline PGD
* 64-bit with kaslr enabled maps the real mode trampoline at PUD level
These cases are currently
From: Mike Rapoport
The cache_page() and nocache_page() functions are only used by the motorola
MMU variant for setting caching attributes for the page table pages.
Move the definitions of these functions from
arch/m68k/include/asm/motorola_pgtable.h closer to their usage in
arch/m68k/mm
From: Mike Rapoport
The comment about page table allocation functions resides in
include/asm/motorola_pgtable.h while the functions live in
include/asm/motorola_pgalloc.h.
Move the comment close to the code.
Signed-off-by: Mike Rapoport
---
arch/m68k/include/asm/motorola_pgalloc.h | 6
From: Mike Rapoport
All architectures use pXd_index() to get an entry in the page table page
corresponding to a virtual address.
Align csky with other architectures.
Signed-off-by: Mike Rapoport
---
arch/csky/include/asm/pgtable.h | 5 ++---
arch/csky/mm/fault.c | 2 +-
arch/csky
From: Mike Rapoport
The replacement of asm/pgtable.h with linux/pgtable.h made the include
of the latter land in the middle of asm includes. Fix this up with the aid
of the below script and manual adjustments here and there.
import sys
import re
if len(sys.argv) is not 3:
print "USAG
From: Mike Rapoport
The linux/mm.h header includes asm/pgtable.h to allow inlining of the
functions involving page table manipulations, e.g. pte_alloc() and
pmd_alloc(). So there is no point in explicitly including asm/pgtable.h in
the files that already include linux/mm.h.
The include statements in such cases are removed with a simple
From: Mike Rapoport
Hi,
The low level page table accessors (pXY_index(), pXY_offset()) are
duplicated across all architectures and sometimes more than once. For
instance, we have 31 definitions of pgd_offset() for 25 supported
architectures.
Most of these definitions are actually identical
Cc: Palmer Dabbelt
> Cc: Heiko Carstens
> Cc: Vasily Gorbik
> Cc: Christian Borntraeger
> Cc: Yoshinori Sato
> Cc: Rich Felker
> Cc: "David S. Miller"
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: Borislav Petkov
> Cc: "H. Peter Anvin"
On 5/10/20 8:14 PM, Anshuman Khandual wrote:
> On 05/09/2020 03:52 AM, Mike Kravetz wrote:
>> On 5/7/20 8:07 PM, Anshuman Khandual wrote:
>>
>> Did you try building without CONFIG_HUGETLB_PAGE defined? I'm guessing
>
> Yes I did for multiple platforms (s390, ar
Hi Marek,
On Mon, May 11, 2020 at 08:36:41AM +0200, Marek Szyprowski wrote:
> Hi Mike,
>
> On 08.05.2020 19:42, Mike Rapoport wrote:
> > On Fri, May 08, 2020 at 08:53:27AM +0200, Marek Szyprowski wrote:
> >> On 07.05.2020 18:11, Mike Rapoport wrote:
> >>> On T
Cc: Palmer Dabbelt
> Cc: Heiko Carstens
> Cc: Vasily Gorbik
> Cc: Christian Borntraeger
> Cc: Yoshinori Sato
> Cc: Rich Felker
> Cc: "David S. Miller"
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: Borislav Petkov
> Cc: "H. Peter Anvin"
> C
On Fri, May 08, 2020 at 08:53:27AM +0200, Marek Szyprowski wrote:
> Hi Mike,
>
> On 07.05.2020 18:11, Mike Rapoport wrote:
> > On Thu, May 07, 2020 at 02:16:56PM +0200, Marek Szyprowski wrote:
> >> On 14.04.2020 17:34, Mike Rapoport wrote:
> >>> From:
Hi,
On Thu, May 07, 2020 at 02:16:56PM +0200, Marek Szyprowski wrote:
> Hi
>
> On 14.04.2020 17:34, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > Implement primitives necessary for the 4th level folding, add walks of p4d
> > level where appropriate,