From: Mike Rapoport
Hi,
This is an implementation of "secret" mappings backed by a file descriptor.
v4 changes:
* rebase on v5.9-rc1
* Do not redefine PMD_PAGE_ORDER in fs/dax.c, thanks Kirill
* Make secret mappings exclusive by default and only require flags to
memfd_secret() s
From: Mike Rapoport
The definition of PMD_PAGE_ORDER, denoting the number of base pages in the
second-level leaf page, is already used by DAX and may be handy in other
cases as well.
Several architectures already have a definition of PMD_ORDER as the size of
a second-level page table, so to avoid
From: Mike Rapoport
Introduce the "memfd_secret" system call with the ability to create memory
areas visible only in the context of the owning process and not mapped
to other processes or into the kernel page tables.
The user will create a file descriptor using the me
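The preview is cut short, but the intended flow (create a descriptor, size it, mmap it, touch the pages) can be sketched from userspace. This is a hedged sketch: the syscall number (447, what x86-64 eventually assigned) and the zero-flags invocation are assumptions based on where the series ended up, not something stated in this snippet.

```c
#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

/* The syscall number was not final when this series was posted; 447 is
 * what x86-64 eventually assigned and is an assumption here. */
#ifndef __NR_memfd_secret
#define __NR_memfd_secret 447
#endif

/* Returns 1 if a secret mapping could be created and touched,
 * 0 if the kernel does not support (or has disabled) memfd_secret. */
int try_memfd_secret(void)
{
	int fd = syscall(__NR_memfd_secret, 0);

	if (fd < 0)
		return 0;		/* e.g. ENOSYS on older kernels */

	if (ftruncate(fd, 4096) < 0) {	/* size the backing file first */
		close(fd);
		return 0;
	}

	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		close(fd);
		return 0;
	}

	/* Faulting the page in is what removes it from the kernel direct map. */
	p[0] = 's';

	munmap(p, 4096);
	close(fd);
	return 1;
}
```

On a kernel without secretmem support (or with it disabled on the command line) this simply reports 0, so it degrades gracefully.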
From: Mike Rapoport
It will be used by the upcoming secret memory implementation.
Signed-off-by: Mike Rapoport
---
mm/internal.h | 3 +++
mm/mmap.c | 5 ++---
2 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h
index 10c677655912..40544fbf49c9
On Tue, Aug 18, 2020 at 08:30:29AM +0300, Jarkko Sakkinen wrote:
> On Sun, Jul 26, 2020 at 11:14:08AM +0300, Mike Rapoport wrote:
> > >
> > > I'm not still sure that I fully understand this feedback as I don't see
> > > any inherent and obvious difference to the
Hi Linus,
The following changes since commit 9123e3a74ec7b934a4a099e98af6a61c2f80bbf5:
Linux 5.9-rc1 (2020-08-16 13:04:57 -0700)
are available in the Git repository at:
https://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock.git
tags/fixes-2020-08-18
for you to fetch changes up to
On Wed, Aug 12, 2020 at 09:06:55AM +0800, Wei Li wrote:
> For memory holes, a sparse memory model that defines SPARSEMEM_VMEMMAP
> does not free the reserved memory for the page map; this patch does it.
I've been thinking about it a bit more and it seems that instead of
freeing unused memory map it
On Sun, Aug 16, 2020 at 10:52:21AM -0700, Linus Torvalds wrote:
> On Sun, Aug 16, 2020 at 10:43 AM Mike Rapoport wrote:
> >
> > I presume this is going via parisc tree, do you mind fixing up
> > while applying?
>
> I'll take it directly to not miss rc1, and I'll f
On Sun, Aug 16, 2020 at 03:42:09PM +0100, Matthew Wilcox wrote:
> On Sun, Aug 16, 2020 at 05:24:03PM +0300, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > Commit 1355c31eeb7e ("asm-generic: pgalloc: provide generic pmd_alloc_one()
> > and pmd_free_one()"
From: Mike Rapoport
Commit 1355c31eeb7e ("asm-generic: pgalloc: provide generic pmd_alloc_one()
and pmd_free_one()") converted parisc to use generic version of
pmd_alloc_one() but it missed the fact that parisc uses order-1 pages for
PMD.
Restore the original version of pmd
> mapping needs to be created for a different mm context (i.e. efi mm) at
> a later point of time.
>
> Use generic kernel page allocation function & macros for any mapping
> after setup_vm_final.
>
> Signed-off-by: Atish Patra
A nit below, otherwise
Acked-by: Mike
t.c:61:6: warning: no previous prototype for
> 'setup_zero_pages' [-Wmissing-prototypes]
> 61 | void setup_zero_pages(void)
> | ^~~~
From 03ab41f5c9f12733be3581333b512d99b0f75cac Mon Sep 17 00:00:00 2001
From: Mike Rapoport
Date: Wed, 12 Aug 2020 22:06:49 +0300
"mm: consolidate pte_index() and pte_offset_*()
> definitions")
> Reported-by: John Paul Adrian Glaubitz
> Signed-off-by: Jessica Clarke
> Tested-by: John Paul Adrian Glaubitz
Thanks for the fix, I don't insist on the changelog update, so with the
nit below
Reviewed-by: Mike
On Mon, Aug 10, 2020 at 07:27:33AM -0700, Dave Hansen wrote:
> ... adding Kirill
>
> On 8/7/20 1:40 AM, Joerg Roedel wrote:
> > + lvl = "p4d";
> > + p4d = p4d_alloc(_mm, pgd, addr);
> > + if (!p4d)
> > + goto failed;
> >
> > + /*
> > +
> p4d_alloc() and pud_alloc() to correctly pre-allocate the PGD entries.
>
> Reported-by: Jason A. Donenfeld
> Fixes: 6eb82f994026 ("x86/mm: Pre-allocate P4D/PUD pages for vmalloc area")
> Signed-off-by: Joerg Roedel
LGTM,
Reviewed-by: Mike Rapoport
> ---
> arc
On Wed, Aug 05, 2020 at 06:05:18AM -0700, Randy Dunlap wrote:
> On 8/4/20 2:50 AM, Mike Rapoport wrote:
> > diff --git a/mm/Kconfig b/mm/Kconfig
> > index f2104cc0d35c..8378175e72a4 100644
> > --- a/mm/Kconfig
> > +++ b/mm/Kconfig
> > @@ -872,4 +872,8 @@ c
On Thu, Aug 06, 2020 at 01:27:57PM +0300, Kirill A. Shutemov wrote:
> On Tue, Aug 04, 2020 at 12:50:32PM +0300, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > Introduce "memfd_secret" system call with the ability to create memory
> > areas visible only
On Thu, Aug 06, 2020 at 01:11:12PM +0300, Kirill A. Shutemov wrote:
> On Tue, Aug 04, 2020 at 12:50:30PM +0300, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > The definition of PMD_PAGE_ORDER denoting the number of base pages in the
> > second-level leaf
From: Mike Rapoport
When a configuration has NUMA disabled and SGI_IP27 enabled, the build
fails:
CC kernel/bounds.s
CC arch/mips/kernel/asm-offsets.s
In file included from arch/mips/include/asm/topology.h:11,
from include/linux/topology.h:36
On Wed, Aug 05, 2020 at 12:20:24PM +0800, Baoquan He wrote:
> On 08/02/20 at 07:35pm, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > Currently, initrd image is reserved very early during setup and then it
> > might be relocated and re-reserved after the initial
On Mon, Aug 03, 2020 at 07:58:54PM -0400, Joshua Kinard wrote:
> On 8/3/2020 15:49, Mike Rapoport wrote:
> > Hi,
> >
> > On Tue, Aug 04, 2020 at 01:39:14AM +0800, kernel test robot wrote:
> >> Hi Mike,
> >>
> >> FYI, the error/warning still remai
On Tue, Aug 04, 2020 at 07:34:39PM +0800, Jiaxun Yang wrote:
>
> On 2020/8/4 7:58 AM, Joshua Kinard wrote:
> > On 8/3/2020 15:49, Mike Rapoport wrote:
> > > Hi,
> > >
> > > On Tue, Aug 04, 2020 at 01:39:14AM +0800, kernel test robot wrote:
> > > >
On Tue, Aug 04, 2020 at 11:55:10AM +0200, David Hildenbrand wrote:
> On 04.08.20 11:33, Mike Rapoport wrote:
> > On Tue, Aug 04, 2020 at 09:24:08AM +0200, David Hildenbrand wrote:
> >> Let's document what ZONE_MOVABLE means, how it's used, and which special
> >> cases
From: Mike Rapoport
Taking pages out from the direct map and bringing them back may create
undesired fragmentation and usage of the smaller pages in the direct
mapping of the physical memory.
This can be avoided if a significantly large area of the physical memory
is reserved
From: Mike Rapoport
It will be used by the upcoming secret memory implementation.
Signed-off-by: Mike Rapoport
---
mm/internal.h | 3 +++
mm/mmap.c | 5 ++---
2 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h
index 9886db20d94f..af0a92f8f6bc
From: Mike Rapoport
Introduce the "memfd_secret" system call with the ability to create memory
areas visible only in the context of the owning process and not mapped
to other processes or into the kernel page tables.
The user will create a file descriptor using the me
From: Mike Rapoport
Wire up memfd_secret system call on architectures that define
ARCH_HAS_SET_DIRECT_MAP, namely arm64, risc-v and x86.
Signed-off-by: Mike Rapoport
Acked-by: Palmer Dabbelt
Acked-by: Arnd Bergmann
---
arch/arm64/include/asm/unistd.h| 2 +-
arch/arm64/include/asm
From: Mike Rapoport
Hi,
This is an implementation of "secret" mappings backed by a file descriptor.
v3 changes:
* Squash kernel-parameters.txt update into the commit that added the
command line option.
* Make uncached mode explicitly selectable by architectures. For now enable
From: Mike Rapoport
The definition of PMD_PAGE_ORDER, denoting the number of base pages in the
second-level leaf page, is already used by DAX and may be handy in other
cases as well.
Several architectures already have a definition of PMD_ORDER as the size of
a second-level page table, so to avoid
From: Mike Rapoport
Removing a PAGE_SIZE page from the direct map every time such page is
allocated for a secret memory mapping will cause severe fragmentation of
the direct map. This fragmentation can be reduced by using PMD-size pages
as a pool for small pages for secret memory mappings.
Add
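The snippet is cut off at "Add", but the idea can be pictured as a hypothetical hunk. The function name, the use of genalloc, and the per-page direct-map call below are illustrative assumptions for this sketch, not the actual patch:

```diff
+/* Hypothetical sketch: refill a pool with a PMD-size chunk so the direct
+ * map is modified once per PMD rather than once per small page. */
+static int secretmem_pool_refill(struct gen_pool *pool)
+{
+	struct page *page = alloc_pages(GFP_KERNEL, PMD_PAGE_ORDER);
+	int i;
+
+	if (!page)
+		return -ENOMEM;
+
+	/* one direct-map change for the whole PMD-size chunk */
+	for (i = 0; i < (1 << PMD_PAGE_ORDER); i++)
+		set_direct_map_invalid_noflush(page + i);
+
+	return gen_pool_add(pool, (unsigned long)page_address(page),
+			    PMD_SIZE, NUMA_NO_NODE);
+}
```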
o
> Cc: Michael S. Tsirkin
> Cc: Mike Kravetz
> Cc: Mike Rapoport
> Cc: Pankaj Gupta
> Cc: Baoquan He
> Signed-off-by: David Hildenbrand
Several nits below, otherwise
Acked-by: Mike Rapoport
> ---
> include/linux/mmzone.h | 34 ++
>
Hi,
On Tue, Aug 04, 2020 at 01:39:14AM +0800, kernel test robot wrote:
> Hi Mike,
>
> FYI, the error/warning still remains.
>
> tree: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
> master
> head: bcf876870b95592b52519ed4aafcf9d95999bc9c
> commit:
line Linux but apparently
used in production. qemu-2.8 removed support for that ABI and newer kernels
(4.19+) can no longer be built with the old toolchain, so apparently
there will not be any future updates to that git tree.
----
Mike R
From: Mike Rapoport
for_each_memblock() is used to iterate over memblock.memory in
a few places that use data from memblock_region rather than the memory
ranges.
Introduce separate for_each_mem_region() and for_each_reserved_mem_region()
to improve encapsulation of memblock internals from its
From: Mike Rapoport
Iteration over memblock.reserved with for_each_reserved_mem_region() used
__next_reserved_mem_region() that implemented a subset of
__next_mem_region().
Use __for_each_mem_range() and, essentially, __next_mem_region() with
appropriate parameters to reduce code duplication
From: Mike Rapoport
There are several occurrences of the following pattern:
for_each_memblock(memory, reg) {
start = __pfn_to_phys(memblock_region_memory_base_pfn(reg));
end = __pfn_to_phys(memblock_region_memory_end_pfn(reg));
/* do
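The conversion this series describes can be pictured as a hypothetical hunk; the iterator is the for_each_mem_range() form the series converges on, while the surrounding caller is illustrative:

```diff
-	for_each_memblock(memory, reg) {
-		start = __pfn_to_phys(memblock_region_memory_base_pfn(reg));
-		end = __pfn_to_phys(memblock_region_memory_end_pfn(reg));
-		/* do something with start and end */
-	}
+	for_each_mem_range(i, &start, &end) {
+		/* do something with start and end */
+	}
```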
From: Mike Rapoport
Currently, initrd image is reserved very early during setup and then it
might be relocated and re-reserved after the initial physical memory
mapping is created. The "late" reservation of memblock verifies that mapped
memory size exceeds the size of initrd, the chec
From: Mike Rapoport
The only user of memblock_mem_size() was the x86 setup code; it is gone now,
so the memblock_mem_size() function can be removed.
Signed-off-by: Mike Rapoport
---
include/linux/memblock.h | 1 -
mm/memblock.c| 15 ---
2 files changed, 16 deletions(-)
diff
From: Mike Rapoport
* Replace magic numbers with defines
* Replace memblock_find_in_range() + memblock_reserve() with
memblock_phys_alloc_range()
* Stop checking for low memory size in reserve_crashkernel_low(). The
allocation from the limited range will fail anyway if there is not enough
From: Mike Rapoport
There are several occurrences of the following pattern:
for_each_memblock(memory, reg) {
start_pfn = memblock_region_memory_base_pfn(reg);
end_pfn = memblock_region_memory_end_pfn(reg);
/* do something with start_pfn
From: Mike Rapoport
RISC-V does not (yet) support NUMA and for UMA architectures node 0 is
used implicitly during early memory initialization.
There is no need to call memblock_set_node(); remove this call and the
surrounding code.
Signed-off-by: Mike Rapoport
---
arch/riscv/mm/init.c | 9
From: Mike Rapoport
microblaze supports neither NUMA nor SPARSEMEM, so there is no point in
calling the memblock_set_node() and sparse_memory_present_with_active_regions()
functions during microblaze memory initialization.
Remove these calls and the surrounding code.
Signed-off-by: Mike
From: Mike Rapoport
for_each_memblock_type() is not used outside mm/memblock.c, move it there
from include/linux/memblock.h
Signed-off-by: Mike Rapoport
Reviewed-by: Baoquan He
---
include/linux/memblock.h | 5 -
mm/memblock.c| 5 +
2 files changed, 5 insertions(+), 5
From: Mike Rapoport
The only user of memblock_dbg() outside memblock was s390 setup code and it
is converted to use pr_debug() instead.
This allows us to stop exposing memblock_debug and memblock_dbg() to the rest
of the kernel.
Signed-off-by: Mike Rapoport
Reviewed-by: Baoquan He
---
arch/s390
From: Mike Rapoport
Currently, the for_each_mem_range() iterator is the most generic way to traverse
memblock regions. As such, it has 8 parameters and is hardly convenient
for users. Most users choose to utilize one of its wrappers and the only
user that actually needs most of the parameters
From: Mike Rapoport
The memory size calculation in kvm_cma_reserve() traverses memblock.memory
rather than simply calling memblock_phys_mem_size(). The comment in that
function suggests that at some point there should have been a call to
memblock_analyze() before memblock_phys_mem_size() could
From: Mike Rapoport
Instead of traversing memblock.memory regions to find memory_start and
memory_end, simply query memblock_{start,end}_of_DRAM().
Signed-off-by: Mike Rapoport
Acked-by: Stafford Horne
---
arch/h8300/kernel/setup.c| 8 +++-
arch/nds32/kernel/setup.c| 8
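As an illustration, such a conversion would look roughly like this; it is a hypothetical hunk, not taken from the patch itself:

```diff
-	for_each_memblock(memory, region) {
-		memory_start = min(memory_start, region->base);
-		memory_end = max(memory_end, region->base + region->size);
-	}
+	memory_start = memblock_start_of_DRAM();
+	memory_end = memblock_end_of_DRAM();
```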
From: Mike Rapoport
dummy_numa_init() loops over memblock.memory and passes nid=0 to
numa_add_memblk() which essentially wraps memblock_set_node(). However,
memblock_set_node() can cope with entire memory span itself, so the loop
over memblock.memory regions is redundant.
Using a single call
From: Mike Rapoport
The function free_highpages() in both arm and xtensa essentially open-codes
a for_each_free_mem_range() loop to detect high memory pages that were not
reserved and that should be initialized and passed to the buddy allocator.
Replace open-coded implementation
From: Mike Rapoport
The memory size calculation in cma_early_percent_memory() traverses
memblock.memory rather than simply calling memblock_phys_mem_size(). The
comment in that function suggests that at some point there should have been
a call to memblock_analyze() before memblock_phys_mem_size
From: Mike Rapoport
Hi,
These patches simplify several uses of memblock iterators and hide some of
the memblock implementation details from the rest of the system.
The patches are on top of v5.8-rc7 + cherry-pick of "mm/sparse: cleanup the
code surrounding memory_present()" [1] from
On Thu, Jul 30, 2020 at 10:15:13PM +1000, Michael Ellerman wrote:
> Mike Rapoport writes:
> > From: Mike Rapoport
> >
> > fadump_reserve_crash_area() reserves memory from a specified base address
> > till the end of the RAM.
> >
> > Replace iteration th
On Thu, Jul 30, 2020 at 05:22:10PM +0100, Catalin Marinas wrote:
> Hi Mike,
>
> On Mon, Jul 27, 2020 at 07:29:31PM +0300, Mike Rapoport wrote:
> > For instance, the following example will create an uncached mapping (error
> > handling is omitted):
> >
> > fd
Hi,
On Wed, Jul 29, 2020 at 04:36:29PM -0700, Atish Patra wrote:
> Currently, page table setup is done during setup_va_final where fixmap can
> be used to create the temporary mappings. The physical frame is allocated
> from memblock_alloc_* functions. However, this won't work if page table
>
On Wed, Jul 29, 2020 at 03:03:04PM +0200, David Hildenbrand wrote:
> On 29.07.20 15:00, Mike Rapoport wrote:
> > On Wed, Jul 29, 2020 at 11:35:20AM +0200, David Hildenbrand wrote:
> >>>
> >>> There is still large gap with ARM64_64K_PAGES, though.
> >>
On Wed, Jul 29, 2020 at 11:35:20AM +0200, David Hildenbrand wrote:
> On 29.07.20 11:31, Mike Rapoport wrote:
> > Hi Justin,
> >
> > On Wed, Jul 29, 2020 at 08:27:58AM +, Justin He wrote:
> >> Hi David
> >>>>
> >>>> Without t
Hi Justin,
On Wed, Jul 29, 2020 at 08:27:58AM +, Justin He wrote:
> Hi David
> > >
> > > Without this series, if qemu creates a 4G bytes nvdimm device, we can
> > only
> > > use 2G bytes for dax pmem(kmem) in the worst case.
> > > e.g.
> > > 24000-33fdf : Persistent Memory
> > > We
On Tue, Jul 28, 2020 at 07:02:54PM +0800, Baoquan He wrote:
> On 07/28/20 at 08:11am, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > numa_clear_kernel_node_hotplug() function first traverses numa_meminfo
> > regions to set node ID in memblock.
On Tue, Jul 28, 2020 at 12:44:40PM +0200, Ingo Molnar wrote:
>
> * Mike Rapoport wrote:
>
> > From: Mike Rapoport
> >
> > numa_clear_kernel_node_hotplug() function first traverses numa_meminfo
> > regions to set node ID in memblock.reserved and then traverse
From: Mike Rapoport
There are several occurrences of the following pattern:
for_each_memblock(memory, reg) {
start = __pfn_to_phys(memblock_region_memory_base_pfn(reg));
end = __pfn_to_phys(memblock_region_memory_end_pfn(reg));
/* do
From: Mike Rapoport
for_each_memblock() is used exclusively to iterate over memblock.memory in
a few places that use data from memblock_region rather than the memory
ranges.
Remove type parameter from the for_each_memblock() iterator to improve
encapsulation of memblock internals from its users
From: Mike Rapoport
numa_clear_kernel_node_hotplug() function first traverses numa_meminfo
regions to set node ID in memblock.reserved and then traverses
memblock.reserved to update reserved_nodemask to include node IDs that were
set in the first loop.
Remove redundant traversal over
From: Mike Rapoport
microblaze supports neither NUMA nor SPARSEMEM, so there is no point in
calling the memblock_set_node() and sparse_memory_present_with_active_regions()
functions during microblaze memory initialization.
Remove these calls and the surrounding code.
Signed-off-by: Mike
From: Mike Rapoport
There are several occurrences of the following pattern:
for_each_memblock(memory, reg) {
start_pfn = memblock_region_memory_base_pfn(reg);
end_pfn = memblock_region_memory_end_pfn(reg);
/* do something with start_pfn
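One plausible shape of the conversion is the existing for_each_mem_pfn_range() helper; whether the series uses exactly this replacement here is an assumption, so treat the hunk as illustrative:

```diff
-	for_each_memblock(memory, reg) {
-		start_pfn = memblock_region_memory_base_pfn(reg);
-		end_pfn = memblock_region_memory_end_pfn(reg);
-		/* do something with start_pfn and end_pfn */
-	}
+	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
+		/* do something with start_pfn and end_pfn */
+	}
```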
From: Mike Rapoport
Currently, the for_each_mem_range() iterator is the most generic way to traverse
memblock regions. As such, it has 8 parameters and is hardly convenient
for users. Most users choose to utilize one of its wrappers and the only
user that actually needs most of the parameters
From: Mike Rapoport
RISC-V does not (yet) support NUMA and for UMA architectures node 0 is
used implicitly during early memory initialization.
There is no need to call memblock_set_node(); remove this call and the
surrounding code.
Signed-off-by: Mike Rapoport
---
arch/riscv/mm/init.c | 9
From: Mike Rapoport
for_each_memblock_type() is not used outside mm/memblock.c, move it there
from include/linux/memblock.h
Signed-off-by: Mike Rapoport
---
include/linux/memblock.h | 5 -
mm/memblock.c| 5 +
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git
From: Mike Rapoport
The only user of memblock_dbg() outside memblock was s390 setup code and it
is converted to use pr_debug() instead.
This allows us to stop exposing memblock_debug and memblock_dbg() to the rest
of the kernel.
Signed-off-by: Mike Rapoport
---
arch/s390/kernel/setup.c | 4
From: Mike Rapoport
Instead of traversing memblock.memory regions to find memory_start and
memory_end, simply query memblock_{start,end}_of_DRAM().
Signed-off-by: Mike Rapoport
---
arch/h8300/kernel/setup.c| 8 +++-
arch/nds32/kernel/setup.c| 8 ++--
arch/openrisc/kernel
From: Mike Rapoport
fadump_reserve_crash_area() reserves memory from a specified base address
till the end of the RAM.
Replace iteration through the memblock.memory with a single call to
memblock_reserve() with appropriate parameters that will take care of proper memory
reservation.
Signed-off-by: Mike
From: Mike Rapoport
The memory size calculation in kvm_cma_reserve() traverses memblock.memory
rather than simply calling memblock_phys_mem_size(). The comment in that
function suggests that at some point there should have been a call to
memblock_analyze() before memblock_phys_mem_size() could
From: Mike Rapoport
dummy_numa_init() loops over memblock.memory and passes nid=0 to
numa_add_memblk() which essentially wraps memblock_set_node(). However,
memblock_set_node() can cope with entire memory span itself, so the loop
over memblock.memory regions is redundant.
Replace the loop
From: Mike Rapoport
The function free_highpages() in both arm and xtensa essentially open-codes
a for_each_free_mem_range() loop to detect high memory pages that were not
reserved and that should be initialized and passed to the buddy allocator.
Replace open-coded implementation
From: Mike Rapoport
The memory size calculation in cma_early_percent_memory() traverses
memblock.memory rather than simply calling memblock_phys_mem_size(). The
comment in that function suggests that at some point there should have been
a call to memblock_analyze() before memblock_phys_mem_size
From: Mike Rapoport
Hi,
These patches simplify several uses of memblock iterators and hide some of
the memblock implementation details from the rest of the system.
The patches are on top of v5.8-rc7 + cherry-pick of "mm/sparse: cleanup the
code surrounding memory_present()" [1] from
Oops, something went wrong with the rebase, this should have been
squashed into the previous patch...
On Mon, Jul 27, 2020 at 07:29:35PM +0300, Mike Rapoport wrote:
> From: Mike Rapoport
>
> Taking pages out from the direct map and bringing them back may create
> undesired f
From: Mike Rapoport
Introduce the "memfd_secret" system call with the ability to create memory
areas visible only in the context of the owning process and not mapped
to other processes or into the kernel page tables.
The user will create a file descriptor using the me
From: Mike Rapoport
It will be used by the upcoming secret memory implementation.
Signed-off-by: Mike Rapoport
---
mm/internal.h | 3 +++
mm/mmap.c | 5 ++---
2 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h
index 9886db20d94f..af0a92f8f6bc
From: Mike Rapoport
Taking pages out from the direct map and bringing them back may create
undesired fragmentation and usage of the smaller pages in the direct
mapping of the physical memory.
This can be avoided if a significantly large area of the physical memory
is reserved
From: Mike Rapoport
Wire up memfd_secret system call on architectures that define
ARCH_HAS_SET_DIRECT_MAP, namely arm64, risc-v and x86.
Signed-off-by: Mike Rapoport
Acked-by: Palmer Dabbelt
---
arch/arm64/include/asm/unistd32.h | 2 ++
arch/arm64/include/uapi/asm/unistd.h | 1 +
arch
From: Mike Rapoport
Removing a PAGE_SIZE page from the direct map every time such page is
allocated for a secret memory mapping will cause severe fragmentation of
the direct map. This fragmentation can be reduced by using PMD-size pages
as a pool for small pages for secret memory mappings.
Add
From: Mike Rapoport
Taking pages out from the direct map and bringing them back may create
undesired fragmentation and usage of the smaller pages in the direct
mapping of the physical memory.
This can be avoided if a significantly large area of the physical memory
is reserved
From: Mike Rapoport
Hi,
This is an implementation of "secret" mappings backed by a file descriptor.
v2 changes:
* Follow Michael's suggestion and name the new system call 'memfd_secret'
* Add kernel-parameters documentation about the boot option
* Fix i386-tinyconfig regressio
From: Mike Rapoport
The definition of PMD_PAGE_ORDER, denoting the number of base pages in the
second-level leaf page, is already used by DAX and may be handy in other
cases as well.
Several architectures already have a definition of PMD_ORDER as the size of
a second-level page table, so to avoid
On Sat, Jul 25, 2020 at 06:16:48AM +0300, Jarkko Sakkinen wrote:
> On Fri, Jul 24, 2020 at 11:27:46AM +0200, Ingo Molnar wrote:
> >
> > * Jarkko Sakkinen wrote:
> >
> > > Use text_alloc() and text_free() instead of module_alloc() and
> > > module_memfree() when an arch provides them.
> > >
> >
st
> defined.
>
> Arvind Sankar (3):
> x86/mm: Drop unused MAX_PHYSADDR_BITS
> sh/mm: Drop unused MAX_PHYSADDR_BITS
> sparc: Drop unused MAX_PHYSADDR_BITS
For the series
Acked-by: Mike Rapoport
> arch/sh/include/asm/sparsemem.h| 4 +---
> arch/sparc/include/asm/s
On Fri, Jul 24, 2020 at 08:05:52AM +0300, Jarkko Sakkinen wrote:
> Use text_alloc() and text_free() instead of module_alloc() and
> module_memfree() when an arch provides them.
>
> Cc: linux...@kvack.org
> Cc: Andi Kleen
> Cc: Masami Hiramatsu
> Cc: Peter Zijlstra
> Signed-off-by: Jarkko
(cc people who participated in the v2 discussion)
On Fri, Jul 24, 2020 at 08:05:47AM +0300, Jarkko Sakkinen wrote:
> Remove MODULES dependency by migrating from module_alloc() to the new
> text_alloc() API. Essentially these changes provide preliminaries for
> allowing to compile a static kernel with
On Fri, Jul 24, 2020 at 08:05:49AM +0300, Jarkko Sakkinen wrote:
> Introduce functions for allocating memory for dynamic trampolines, such
> as kprobes. An arch can promote the availability of these functions with
> CONFIG_ARCH_HAS_TEXT_ALLOC.
As it was pointed out at the discussion on the
On Fri, Jul 24, 2020 at 08:05:48AM +0300, Jarkko Sakkinen wrote:
> Add lock_modules() and unlock_modules() wrappers for acquiring module_mutex
> in order to remove the compile time dependency to it.
>
> Cc: linux...@kvack.org
> Cc: Andi Kleen
> Cc: Peter Zijlstra
> Suggested-by: Masami
On Fri, Jul 24, 2020 at 01:28:35AM +0300, Jarkko Sakkinen wrote:
> On Sat, Jul 18, 2020 at 07:23:59PM +0300, Mike Rapoport wrote:
> > On Fri, Jul 17, 2020 at 06:04:17AM +0300, Jarkko Sakkinen wrote:
> > > Introduce functions for allocating memory for dynamic trampolines, suc
On Thu, Jul 23, 2020 at 12:29:26PM +0100, Catalin Marinas wrote:
> On Wed, Jul 22, 2020 at 01:40:34PM +, liwei (CM) wrote:
> > Catalin Marinas wrote:
> > > On Wed, Jul 22, 2020 at 08:41:17AM +, liwei (CM) wrote:
> > > > Mike Rapoport wrote:
> > >
Hi,
On Tue, Jul 21, 2020 at 03:32:03PM +0800, Wei Li wrote:
> For memory holes, a sparse memory model that defines SPARSEMEM_VMEMMAP
> does not free the reserved memory for the page map; this patch does it.
Are there numbers showing how much memory is actually freed?
The freeing of empty memmap
so it is also removed.
>
> Please review.
>
> Thanks,
>
> Joerg
>
> Changes to v2:
>
> - Rebased to tip/master
> - Some rewording of the commit-messages
I have a small nitpick for the commit message of the first patch,
otherwise,
Reviewed-by:
On Tue, Jul 21, 2020 at 11:59:51AM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
>
> Pre-allocate the page-table pages for the vmalloc area at the level
> which needs synchronization on x86-64, which is P4D for 5-level and
> PUD for 4-level paging.
>
> Doing this at boot makes sure all no
dif directive [-Wendif-labels]
> Fix #endif comment.
Oops :)
> Cc: Mike Rapoport
> Fixes: 8f74afa22d9b ("xtensa: switch to generic version of pte allocation")
> Signed-off-by: Max Filippov
Reviewed-by: Mike Rapoport
> ---
> arch/xtensa/include/asm/pgalloc.h |
Hi,
On Tue, Jul 21, 2020 at 01:56:33AM +, liwei (CM) wrote:
> Hi, all
>
> I'm sorry to bother you, but still very hope you can give comments or
> suggestions to this patch, thank you very much.
I cannot find your patch either in my inbox or in the public archives.
Can you resend it please?
On Mon, Jul 20, 2020 at 04:34:12PM +0200, Arnd Bergmann wrote:
> On Mon, Jul 20, 2020 at 4:21 PM Mike Rapoport wrote:
> > On Mon, Jul 20, 2020 at 01:30:13PM +0200, Arnd Bergmann wrote:
> > > On Mon, Jul 20, 2020 at 11:25 AM Mike Rapoport wrote:
> > > &g
On Mon, Jul 20, 2020 at 01:30:13PM +0200, Arnd Bergmann wrote:
> On Mon, Jul 20, 2020 at 11:25 AM Mike Rapoport wrote:
> >
> > From: Mike Rapoport
> >
> > Introduce "secretmemfd" system call with the ability to create memory areas
> > visibl
From: Mike Rapoport
Taking pages out from the direct map and bringing them back may create
undesired fragmentation and usage of the smaller pages in the direct
mapping of the physical memory.
This can be avoided if a significantly large area of the physical memory
is reserved