A new variant of the memblock_virt_alloc_* allocations:
- Does not zero the allocated memory
- Does not panic if the request cannot be satisfied
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Shannon Nelson <shannon.nel...@oracle.com>
---
include/linux/bootmem.h |
zero memory or can defer it.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Shannon Nelson <shannon.nel...@oracle.com>
---
include/linux/mm.h | 9 +
mm/page_alloc.c | 3 +++
2 files changed, 12 insertions(+), 0 deletions(-)
diff --git a/include/linu
"struct pages" using the
boot CPU. This patch solves this problem by deferring the zeroing of
"struct pages" until they are initialized.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Shannon Nelson <shannon.nel...@oracle.com>
---
arch/powerpc/mm/init_64.c |
Remove duplicated code by using the common functions
vmemmap_pud_populate() and vmemmap_pgd_populate().
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Shannon Nelson <shannon.nel...@oracle.com>
---
arch/sparc/mm/init_64.c | 23 ++--
size)
because there are twice as many "struct page"s for the same amount of
memory, as base pages are half the size.
Pavel Tatashin (5):
sparc64: simplify vmemmap_populate
mm: defining memblock_virt_alloc_try_nid_raw
mm: add "zero" argument to vmemmap allocators
m
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Shannon Nelson <shannon.nel...@oracle.com>
---
arch/powerpc/mm/init_64.c | 4 +-
arch/s390/mm/vmem.c | 5 ++-
arch/sparc/mm/init_64.c | 3 +-
arch/x86/mm/init_64.c | 3 +-
include/linux/mm.h |
zero memory or can defer it.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Shannon Nelson <shannon.nel...@oracle.com>
---
include/linux/mm.h | 9 +
mm/page_alloc.c | 3 +++
2 files changed, 12 insertions(+), 0 deletions(-)
diff --git a/include/linu
it takes only an additional 0.456s per node, which
means that on Intel we also benefit from having memset() and
initializing all other fields in one place.
Pavel Tatashin (5):
sparc64: simplify vmemmap_populate
mm: defining memblock_virt_alloc_try_nid_raw
mm: add "zero" argument to vme
"struct pages" using the
boot CPU. This patch solves this problem by deferring the zeroing of
"struct pages" until they are initialized.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Shannon Nelson <shannon.nel...@oracle.com>
---
arch/powerpc/mm/init_64.c |
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Shannon Nelson <shannon.nel...@oracle.com>
---
arch/powerpc/mm/init_64.c | 4 +-
arch/s390/mm/vmem.c | 5 ++-
arch/sparc/mm/init_64.c | 3 +-
arch/x86/mm/init_64.c | 3 +-
include/linux/mm.h |
A new variant of the memblock_virt_alloc_* allocations:
- Does not zero the allocated memory
- Does not panic if the request cannot be satisfied
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Shannon Nelson <shannon.nel...@oracle.com>
---
include/linux/bootmem.h |
Remove duplicated code by using the common functions
vmemmap_pud_populate() and vmemmap_pgd_populate().
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Shannon Nelson <shannon.nel...@oracle.com>
---
arch/sparc/mm/init_64.c | 23 ++--
.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
arch/x86/kernel/tsc.c | 39 ++-
1 file changed, 26 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index 5add503..1c9fc23 100644
--- a/arch/x86/kernel
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
arch/x86/include/asm/tsc.h | 4 ++
arch/x86/kernel/tsc.c | 107
2 files changed, 111 insertions(+), 0 deletions(-)
diff --git a/arch/x86/include/asm/tsc.h b/arch/x86/include/asm/tsc.h
index 8
Newer processors implement a CPUID extension that reports the CPU
frequency. This patch adds a function that can use it early in boot.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
arch/x86/kernel/tsc.c | 10 ++
1 file changed, 6 insertions(+), 4 del
.
Pavel Tatashin (9):
sched/clock: broken stable to unstable transfer
sched/clock: interface to allow timestamps early in boot
x86/cpu: determining x86 vendor early
x86/tsc: early MSR-based CPU/TSC frequency discovery
x86/tsc: disable early messages from quick_pit_calibrate
x86/tsc: use
In order to support early time stamps we must know the vendor id of the
chip early in boot. This patch implements this by reading the vendor
string from CPUID and comparing it against the x86 vendors known to Linux.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
arch/x86/inclu
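To illustrate the mechanism this entry describes (a minimal sketch; the
function name x86_vendor_early() is hypothetical): CPUID leaf 0 returns
the 12-byte vendor string in EBX, EDX, ECX.
static int __init x86_vendor_early(void)
{
	unsigned int eax, ebx, ecx, edx;
	char vendor[13];

	cpuid(0, &eax, &ebx, &ecx, &edx);
	memcpy(vendor + 0, &ebx, 4);	/* "Genu" / "Auth" */
	memcpy(vendor + 4, &edx, 4);	/* "ineI" / "enti" */
	memcpy(vendor + 8, &ecx, 4);	/* "ntel" / "cAMD" */
	vendor[12] = '\0';

	if (!strcmp(vendor, "GenuineIntel"))
		return X86_VENDOR_INTEL;
	if (!strcmp(vendor, "AuthenticAMD"))
		return X86_VENDOR_AMD;
	return X86_VENDOR_UNKNOWN;
}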
Call tsc_early_init() to initialize the early boot time stamp functionality
on supported x86 platforms, and call tsc_early_fini() to finish this
feature after the permanent TSC has been initialized.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
arch/x86/kernel/head64.c
the correct clock value when we determine
the boundaries for min/max clocks.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
kernel/sched/clock.c | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
index a
- Call sched_clock_early_init()
  Tells sched clock that the early clock can be used
- Call sched_clock_early_fini()
  Tells sched clock that the early clock is finished, and sched clock
  should hand over the operation to the permanent clock.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
include/linux
Allow discovering MSR-based CPU/TSC frequency early in boot. This method
works only for some Intel CPUs.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
arch/x86/include/asm/tsc.h | 1 +
arch/x86/kernel/tsc_msr.c | 38 +-
2 files c
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
arch/x86/kernel/tsc.c | 10 ++
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index c73a7f9..5add503 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@
.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
arch/x86/kernel/tsc.c | 39 ++-
1 file changed, 26 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index 5add503..1c9fc23 100644
--- a/arch/x86/kernel
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
arch/x86/kernel/tsc.c | 10 ++
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index c73a7f9..5add503 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -580,7
the correct clock value when we determine
the boundaries for min/max clocks.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
kernel/sched/clock.c | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
index a08795e.
We can show it with the early boot
time stamps, and avoid future regressions by having this data always
available to us.
Pavel Tatashin (9):
sched/clock: broken stable to unstable transfer
sched/clock: interface to allow timestamps early in boot
x86/cpu: determining x86 vendor early
x86/tsc: early MSR
Allow discovering MSR-based CPU/TSC frequency early in boot. This method
works only for some Intel CPUs.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
arch/x86/include/asm/tsc.h | 1 +
arch/x86/kernel/tsc_msr.c | 38 +-
2 files chang
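The gist of this entry, as a simplified sketch (the helper name is
hypothetical, and the 100 MHz bus clock is an assumption; the real
tsc_msr.c derives it from MSR_FSB_FREQ and a per-model table):
static unsigned long __init early_tsc_khz_from_msr(void)
{
	unsigned long long platform_info;
	unsigned int ratio;

	rdmsrl(MSR_PLATFORM_INFO, platform_info);
	ratio = (platform_info >> 8) & 0xff;	/* max non-turbo ratio */

	return ratio * 100000UL;	/* assumed 100 MHz bus, in kHz */
}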
In order to support early time stamps we must know the vendor id of the
chip early in boot. This patch implements this by reading the vendor
string from CPUID and comparing it against the x86 vendors known to Linux.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
arch/x86/inclu
- Call sched_clock_early_init()
  Tells sched clock that the early clock can be used
- Call sched_clock_early_fini()
  Tells sched clock that the early clock is finished, and sched clock
  should hand over the operation to the permanent clock.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
include/linux
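The handover can be pictured as follows (only the *_init/_fini names
come from this entry; the flag and the selector wrapper are
illustrative assumptions):
static int __read_mostly sched_clock_early_running;

void __init sched_clock_early_init(void)
{
	sched_clock_early_running = 1;	/* early clock is now usable */
}

void __init sched_clock_early_fini(void)
{
	sched_clock_early_running = 0;	/* permanent clock takes over */
}

u64 sched_clock_any(void)	/* hypothetical selector */
{
	if (sched_clock_early_running)
		return sched_clock_early();	/* early TSC-based clock */
	return sched_clock();
}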
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
arch/x86/include/asm/tsc.h | 4 ++
arch/x86/kernel/tsc.c | 127 +
2 files changed, 131 insertions(+)
diff --git a/arch/x86/include/asm/tsc.h b/arch/x86/include/asm/tsc.h
index 893de0c..a8c7f2e
Newer processors implement a CPUID extension that reports the CPU
frequency. This patch adds a function that can use it early in boot.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
arch/x86/kernel/tsc.c | 10 ++
1 file changed, 6 insertions(+), 4 deletions(-)
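A sketch of such a function (the name is hypothetical; the layout is
per Intel's CPUID documentation, where leaf 0x16 reports the base
frequency in MHz in EAX):
static unsigned long __init early_cpu_khz_from_cpuid(void)
{
	unsigned int eax, ebx, ecx, edx;

	cpuid(0, &eax, &ebx, &ecx, &edx);
	if (eax < 0x16)
		return 0;	/* frequency leaf not implemented */

	cpuid(0x16, &eax, &ebx, &ecx, &edx);
	return eax * 1000UL;	/* MHz -> kHz */
}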
Call tsc_early_init() to initialize the early boot time stamp functionality
on supported x86 platforms, and call tsc_early_fini() to finish this
feature after the permanent TSC has been initialized.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
arch/x86/kernel/head64.c | 1 +
Deferred struct page init: 10.2s/T improvement
Pavel Tatashin (15):
x86/mm: reserve only existing low pages
x86/mm: setting fields in deferred pages
sparc64/mm: setting fields in deferred pages
mm: discard memblock data later
mm: don't access uninitialized struct pages
sparc64: simplify vmemmap_pop
Add an optimized mm_zero_struct_page(), so struct pages are zeroed without
calling memset(). We do eight regular stores, thus avoiding the cost of a
membar.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: D
are initialized in free_all_bootmem().
Therefore, the fix is to switch the above calls.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com>
Reviewed-by: Bob Picco <bob.pi...@oracle.com>
Remove duplicated code by using the common functions
vmemmap_pud_populate() and vmemmap_pgd_populate().
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com>
Reviewed-by:
"struct page"s
until they are initialized. This patch fixes that.
This patch defines a new accessor, memblock_get_reserved_pfn_range(),
which returns successive ranges of reserved PFNs. deferred_init_memmap()
calls it to determine whether a PFN and its struct page have already been
initialized.
Signed-off-by: Pavel Tatashin
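Roughly how such an accessor would be used by deferred_init_memmap()
(the signature and the helper below are assumptions inferred from this
description, not the patch itself):
/* assumed: fills [*start, *end) with the reserved range covering or
 * following pfn; yields an empty range when none is left */
void memblock_get_reserved_pfn_range(unsigned long pfn,
				     unsigned long *start,
				     unsigned long *end);

static bool __init pfn_already_initialized(unsigned long pfn)
{
	unsigned long r_start, r_end;

	memblock_get_reserved_pfn_range(pfn, &r_start, &r_end);
	/* reserved pages get their struct page set up early, in
	 * free_all_bootmem(), so deferred init must skip them */
	return pfn >= r_start && pfn < r_end;
}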
.
In this patchset we will stop zeroing struct page memory during allocation.
Therefore, this bug must be fixed in order to avoid random assert failures
triggered by CONFIG_DEBUG_VM_PGFLAGS.
The fix is to reserve memory from the first existing PFN.
Signed-off-by: Pavel Tatashin <pasha.ta
To optimize the performance of struct page initialization,
vmemmap_populate() will no longer zero memory.
We must explicitly zero the memory that is allocated by vmemmap_populate()
for kasan, as this memory does not go through the struct page
initialization path.
Signed-off-by: Pavel Tatashin
When CONFIG_DEBUG_VM is enabled, this patch sets all the memory that is
returned by memblock_virt_alloc_try_nid_raw() to ones, to ensure that no
places expect zeroed memory.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
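In sketch form (placement inside the allocator is assumed; 0xff is the
all-ones pattern this entry refers to):
/* at the end of memblock_virt_alloc_try_nid_raw(), roughly: */
ptr = memblock_virt_alloc_internal(size, align, min_addr, max_addr, nid);
#ifdef CONFIG_DEBUG_VM
if (ptr && size > 0)
	/* poison with all ones: callers that wrongly assume zeroed
	 * memory fail fast instead of working by accident */
	memset(ptr, 0xff, size);
#endif
return ptr;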
To optimize the performance of struct page initialization,
vmemmap_populate() will no longer zero memory.
We must explicitly zero the memory that is allocated by vmemmap_populate()
for kasan, as this memory does not go through the struct page
initialization path.
Signed-off-by: Pavel Tatashin
initialized prior to using them.
The deferred-reserved pages are initialized in free_all_bootmem().
Therefore, the fix is to switch the above calls.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com>
A new variant of memblock_virt_alloc_* allocations:
memblock_virt_alloc_try_nid_raw()
- Does not zero the allocated memory
- Does not panic if the request cannot be satisfied
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@o
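A minimal caller sketch of that contract (the wrapper name and the
call-site details are illustrative assumptions):
static void * __init alloc_struct_pages_raw(unsigned long size, int nid)
{
	void *ptr;

	ptr = memblock_virt_alloc_try_nid_raw(size, PAGE_SIZE,
					      __pa(MAX_DMA_ADDRESS),
					      BOOTMEM_ALLOC_ACCESSIBLE,
					      nid);
	if (!ptr)
		return NULL;	/* no panic: the caller decides how to recover */
	/* memory is NOT zeroed; struct page fields, including the
	 * zeroing, are set later in __init_single_page() */
	return ptr;
}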
were enabled.
Another reason why we want this problem fixed in this patch series is
that, in the next patch, we will need to access memblock.reserved from
deferred_init_memmap().
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
() interface, and thus improve the boot performance.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com>
Reviewed-by: Bob Picco <bob.pi...@oracle.com>
-
Replace the allocators in sparse-vmemmap with the non-zeroing version, so
that we get the performance improvement of zeroing the memory in parallel
when struct pages are zeroed.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@o
Add struct page zeroing as part of the initialization of the other fields
in __init_single_page().
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com>
Reviewed-by:
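In sketch form (the surrounding field initializers are from the existing
helper; putting the zeroing first is the change this entry describes):
static void __meminit __init_single_page(struct page *page, unsigned long pfn,
					 unsigned long zone, int nid)
{
	mm_zero_struct_page(page);	/* zero while the line is hot */
	set_page_links(page, zone, nid, pfn);
	init_page_count(page);
	page_mapcount_reset(page);
	INIT_LIST_HEAD(&page->lru);
}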
Soon vmemmap_alloc_block() will no longer zero the block, so zero the
memory at its call sites for everything except struct pages. Struct page
memory is zeroed as part of struct page initialization.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <st
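A sketch of a converted call site (the wrapper is illustrative;
vmemmap_alloc_block() is the existing allocator):
static int __meminit populate_non_struct_page_block(int node)
{
	void *block = vmemmap_alloc_block(PMD_SIZE, node);

	if (!block)
		return -ENOMEM;
	/* vmemmap_alloc_block() no longer zeroes, and this block is
	 * not struct page memory, so clear it at the call site */
	memset(block, 0, PMD_SIZE);
	return 0;
}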
Add struct page zeroing as part of the initialization of the other fields
in __init_single_page().
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com>
Reviewed-by:
were enabled.
Another reason why we want this problem fixed in this patch series is
that, in the next patch, we will need to access memblock.reserved from
deferred_init_memmap().
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
A new variant of memblock_virt_alloc_* allocations:
memblock_virt_alloc_try_nid_raw()
- Does not zero the allocated memory
- Does not panic if the request cannot be satisfied
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@o
Add an optimized mm_zero_struct_page(), so struct pages are zeroed without
calling memset(). We do eight regular stores, thus avoiding the cost of a
membar.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: D
To optimize the performance of struct page initialization,
vmemmap_populate() will no longer zero memory.
We must explicitly zero the memory that is allocated by vmemmap_populate()
for kasan, as this memory does not go through the struct page
initialization path.
Signed-off-by: Pavel Tatashin
() interface, and thus improve the boot performance.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com>
Reviewed-by: Bob Picco <bob.pi...@oracle.com>
-
Replace the allocators in sparse-vmemmap with the non-zeroing version, so
that we get the performance improvement of zeroing the memory in parallel
when struct pages are zeroed.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@o
Single threaded struct page init: 7.6s/T improvement
Deferred struct page init: 10.2s/T improvement
Pavel Tatashin (15):
x86/mm: reserve only existing low pages
x86/mm: setting fields in deferred pages
sparc64/mm: setting fields in deferred pages
mm: discard memblock data later
mm: don't access
are initialized in free_all_bootmem().
Therefore, the fix is to switch the above calls.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com>
Reviewed-by: Bob Picco <bob.pi...@oracle.com>
When CONFIG_DEBUG_VM is enabled, this patch sets all the memory that is
returned by memblock_virt_alloc_try_nid_raw() to ones, to ensure that no
places expect zeroed memory.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
initialized prior to using them.
The deferred-reserved pages are initialized in free_all_bootmem().
Therefore, the fix is to switch the above calls.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com>
.
In this patchset we will stop zeroing struct page memory during allocation.
Therefore, this bug must be fixed in order to avoid random assert failures
triggered by CONFIG_DEBUG_VM_PGFLAGS.
The fix is to reserve memory from the first existing PFN.
Signed-off-by: Pavel Tatashin <pasha.ta
"struct page"s
until they are initialized. This patch fixes that.
This patch defines a new accessor, memblock_get_reserved_pfn_range(),
which returns successive ranges of reserved PFNs. deferred_init_memmap()
calls it to determine whether a PFN and its struct page have already been
initialized.
Signed-off-by: Pavel Tatashin
Soon vmemmap_alloc_block() will no longer zero the block, so zero the
memory at its call sites for everything except struct pages. Struct page
memory is zeroed as part of struct page initialization.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <st
To optimize the performance of struct page initialization,
vmemmap_populate() will no longer zero memory.
We must explicitly zero the memory that is allocated by vmemmap_populate()
for kasan, as this memory does not go through the struct page
initialization path.
Signed-off-by: Pavel Tatashin
Remove duplicated code by using the common functions
vmemmap_pud_populate() and vmemmap_pgd_populate().
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com>
Reviewed-by:
Call tsc_early_init() to initialize the early boot time stamp functionality
on supported x86 platforms, and call tsc_early_fini() to finish this
feature after the permanent TSC has been initialized.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
arch/x86/include/asm/tsc.h | 4
arch/x86/
specific read_boot_clock64()
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
arch/x86/kernel/time.c | 22
include/linux/sched/clock.h | 4 +++
kernel/sched/clock.c | 61 -
3 files changed, 86 insertions
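One way to picture the hook (a sketch under assumptions:
read_boot_clock64() and read_persistent_clock64() are existing
timekeeping hooks, while sched_clock_early() is the early clock this
series proposes):
void __init read_boot_clock64(struct timespec64 *ts)
{
	struct timespec64 now;
	u64 ns_since_boot = sched_clock_early();	/* early TSC clock */

	/* wall time at boot = wall time now - time already run */
	read_persistent_clock64(&now);
	*ts = timespec64_sub(now, ns_to_timespec64(ns_since_boot));
}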
Before:
https://hastebin.com/jadaqukubu.scala
After:
https://hastebin.com/nubipozacu.scala
As seen above, currently timestamps are available from around the time when
"Security Framework" is initialized, but 26s have already passed by the
time we reach this point.
Pavel Tatashin (2):
sched/clock
specific read_boot_clock64()
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
arch/x86/kernel/time.c | 23 +
include/linux/sched/clock.h | 4 +++
kernel/sched/clock.c | 63 -
3 files changed, 89 insertions
Call tsc_early_init() to initialize the early boot time stamp functionality
on supported x86 platforms, and call tsc_early_fini() to finish this
feature after the permanent TSC has been initialized.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
arch/x86/include/asm/tsc.h | 4
arch/x86/
https://hastebin.com/nubipozacu.scala
As seen above, currently timestamps are available from around the time when
"Security Framework" is initialized, but 26s have already passed by the
time we reach this point.
Pavel Tatashin (2):
sched/clock: interface to allow timestamps early in boot
x86/tsc: use
Call tsc_early_init() to initialize the early boot time stamp functionality
on supported x86 platforms, and call tsc_early_fini() to finish this
feature after the permanent TSC has been initialized.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
arch/x86/include/asm/tsc.h | 4
arch/x86/
https://hastebin.com/jadaqukubu.scala
After:
https://hastebin.com/nubipozacu.scala
As seen above, currently timestamps are available from around the time when
"Security Framework" is initialized, but 26s have already passed by the
time we reach this point.
Pavel Tatashin (2):
sched/clock: interface to allow timestamps early in
specific read_boot_clock64()
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
arch/x86/kernel/time.c | 22
include/linux/sched/clock.h | 4 +++
kernel/sched/clock.c | 61 -
3 files changed, 86 insertions
were enabled.
Tested by reducing INIT_MEMBLOCK_REGIONS down to 4 from the current 128,
and verifying in qemu that this code is getting executed and that the freed
pages are sane.
Fixes: 7e18adb4f80b ("mm: meminit: initialise remaining struct pages in
parallel with kswapd")
Signed-off
page memory.
Pavel Tatashin (1):
mm: discard memblock data later
include/linux/memblock.h | 8 +---
mm/memblock.c | 38 +-
mm/nobootmem.c | 16
mm/page_alloc.c | 4
4 files changed, 26 insertions(+), 40
were enabled.
Tested by reducing INIT_MEMBLOCK_REGIONS down to 4 from the current 128,
and verifying in qemu that this code is getting executed and that the freed
pages are sane.
Fixes: 7e18adb4f80b ("mm: meminit: initialise remaining struct pages in
parallel with kswapd")
Signed-off
deferred page initialization", where we do not zero the backing
struct page memory.
Pavel Tatashin (1):
mm: discard memblock data later
include/linux/memblock.h | 6 --
mm/memblock.c | 38 +-
mm/nobootmem.c | 16 --
Remove duplicated code by using the common functions
vmemmap_pud_populate() and vmemmap_pgd_populate().
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com>
Reviewed-by:
To optimize the performance of struct page initialization,
vmemmap_populate() will no longer zero memory.
We must explicitly zero the memory that is allocated by vmemmap_populate()
for kasan, as this memory does not go through the struct page
initialization path.
Signed-off-by: Pavel Tatashin
To optimize the performance of struct page initialization,
vmemmap_populate() will no longer zero memory.
We must explicitly zero the memory that is allocated by vmemmap_populate()
for kasan, as this memory does not go through the struct page
initialization path.
Signed-off-by: Pavel Tatashin
A new variant of memblock_virt_alloc_* allocations:
memblock_virt_alloc_try_nid_raw()
- Does not zero the allocated memory
- Does not panic if the request cannot be satisfied
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@o
are initialized in free_all_bootmem().
Therefore, the fix is to switch the above calls.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com>
Reviewed-by: Bob Picco <bob.pi...@oracle.com>
Soon vmemmap_alloc_block() will no longer zero the block, so zero the
memory at its call sites for everything except struct pages. Struct page
memory is zeroed as part of struct page initialization.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <st
"struct page"s
until they are initialized. This patch fixes that.
This patch defines a new accessor, memblock_get_reserved_pfn_range(),
which returns successive ranges of reserved PFNs. deferred_init_memmap()
calls it to determine whether a PFN and its struct page have already been
initialized.
Signed-off-by: Pavel Tatashin
were enabled.
Another reason why we want this problem fixed in this patch series is
that, in the next patch, we will need to access memblock.reserved from
deferred_init_memmap().
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
initialization
Performance improvements on an x86 machine with 8 nodes:
Intel(R) Xeon(R) CPU E7-8895 v3 @ 2.60GHz
Single threaded struct page init: 7.6s/T improvement
Deferred struct page init: 10.2s/T improvement
Pavel Tatashin (15):
x86/mm: reserve only existing low pages
x86/mm: setting fields in deferre
initialized prior to using them.
The deferred-reserved pages are initialized in free_all_bootmem().
Therefore, the fix is to switch the above calls.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com>
() interface, and thus improve the boot performance.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com>
Reviewed-by: Bob Picco <bob.pi...@oracle.com>
-
When CONFIG_DEBUG_VM is enabled, this patch sets all the memory that is
returned by memblock_virt_alloc_try_nid_raw() to ones, to ensure that no
places expect zeroed memory.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Add struct page zeroing as part of the initialization of the other fields
in __init_single_page().
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com>
Reviewed-by:
Replace the allocators in sparse-vmemmap with the non-zeroing version, so
that we get the performance improvement of zeroing the memory in parallel
when struct pages are zeroed.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@o
.
In this patchset we will stop zeroing struct page memory during allocation.
Therefore, this bug must be fixed in order to avoid random assert failures
triggered by CONFIG_DEBUG_VM_PGFLAGS.
The fix is to reserve memory from the first existing PFN.
Signed-off-by: Pavel Tatashin <pasha.ta
Add an optimized mm_zero_struct_page(), so struct pages are zeroed without
calling memset(). We do eight to ten regular stores, based on the size of
struct page. The compiler optimizes out the unreachable cases of the
switch() statement.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Re
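The shape of it, as a sketch (the case labels depend on
sizeof(struct page); treat this as an assumed form, not the literal
patch):
static inline void mm_zero_struct_page(struct page *page)
{
	unsigned long *p = (unsigned long *)page;

	/* 8 to 10 stores picked at compile time; the unreachable
	 * cases are discarded, so no memset() call is emitted */
	switch (sizeof(struct page)) {
	case 80:
		p[9] = 0;	/* fall through */
	case 72:
		p[8] = 0;	/* fall through */
	case 64:
		p[7] = 0;
		p[6] = 0;
		p[5] = 0;
		p[4] = 0;
		p[3] = 0;
		p[2] = 0;
		p[1] = 0;
		p[0] = 0;
		break;
	default:
		memset(page, 0, sizeof(struct page));
	}
}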
65536G 13 18 65536M 2048M
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
mm/page_alloc.c | 19 +++
1 file changed, 19 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8afa63e81e73..15bba5c325a5 100644
--- a/mm/page_alloc.c
++
Changes from v3 to v4:
- Fixed an issue with 32-bit overflow (adapt is now ull instead of ul)
- Added changes suggested by Michal Hocko: use high_limit instead of
  a new flag to determine that we should use this new scaling.
Pavel Tatashin (1):
mm: Adaptive hash table scaling
mm/page_alloc.c
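The scaling can be sketched as below (constants and shape assumed from
this changelog, not quoted from the patch): past a 64GB base the shift
grows by one for every further quadrupling of memory, and the adaptive
path is skipped when the caller passes an explicit high_limit.
#define ADAPT_SCALE_BASE	(64ull << 30)	/* 64GB */
#define ADAPT_SCALE_SHIFT	2		/* grow per 4x memory */
#define ADAPT_SCALE_NPAGES	(ADAPT_SCALE_BASE >> PAGE_SHIFT)

/* inside alloc_large_system_hash(), when sizing from memory: */
if (!high_limit) {
	unsigned long long adapt;	/* ull to avoid 32-bit overflow */

	for (adapt = ADAPT_SCALE_NPAGES; adapt < numentries;
	     adapt <<= ADAPT_SCALE_SHIFT)
		scale++;	/* larger scale -> smaller table */
}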
65536G 13 18 65536M 2048M
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
mm/page_alloc.c | 25 +
1 file changed, 25 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8afa63e81e73..409e0cd35381 100644
--- a/mm/page_alloc.c
++
Changes from v4 to v5:
- Disabled adaptive hash on 32-bit systems, to avoid confusion about
  whether the base should be different for smaller systems, and to
  avoid overflows.
Pavel Tatashin (1):
mm: Adaptive hash table scaling
mm/page_alloc.c | 25 +
1 file changed, 25
zero memory or can defer it.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Shannon Nelson <shannon.nel...@oracle.com>
---
include/linux/mm.h | 9 +
mm/page_alloc.c | 3 +++
2 files changed, 12 insertions(+), 0 deletions(-)
diff --git a/include/linu
"struct pages" using the
boot CPU. This patch solves this problem by deferring the zeroing of
"struct pages" until they are initialized, on SPARC.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Shannon Nelson <shannon.nel...@oracle.com>
---
arch/sparc/
vmemmap_buf beforehand. Let clients of alloc_block_buf()
decide whether that is needed.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
mm/sparse-vmemmap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index