[v1 2/5] mm: defining memblock_virt_alloc_try_nid_raw

2017-03-23 Thread Pavel Tatashin
A new version of memblock_virt_alloc_* allocations: - Does not zero the allocated memory - Does not panic if the request cannot be satisfied Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Shannon Nelson <shannon.nel...@oracle.com> --- include/linux/bootmem.h |

[v1 4/5] mm: zero struct pages during initialization

2017-03-23 Thread Pavel Tatashin
ero memory or can defer it. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Shannon Nelson <shannon.nel...@oracle.com> --- include/linux/mm.h |9 + mm/page_alloc.c|3 +++ 2 files changed, 12 insertions(+), 0 deletions(-) diff --git a/include/linu

[v1 5/5] mm: teach platforms not to zero struct pages memory

2017-03-23 Thread Pavel Tatashin
quot; using the boot CPU. This patch solves this problem by deferring the zeroing of "struct pages" until they are initialized. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Shannon Nelson <shannon.nel...@oracle.com> --- arch/powerpc/mm/init_64.c |

[v1 1/5] sparc64: simplify vmemmap_populate

2017-03-23 Thread Pavel Tatashin
Remove duplicated code by using the common functions vmemmap_pud_populate and vmemmap_pgd_populate. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Shannon Nelson <shannon.nel...@oracle.com> --- arch/sparc/mm/init_64.c | 23 ++--

[v1 0/5] parallelized "struct page" zeroing

2017-03-23 Thread Pavel Tatashin
size) because there are twice as many "struct page"es for the same amount of memory, as base pages are half the size. Pavel Tatashin (5): sparc64: simplify vmemmap_populate mm: defining memblock_virt_alloc_try_nid_raw mm: add "zero" argument to vmemmap allocators m

[v1 3/5] mm: add "zero" argument to vmemmap allocators

2017-03-23 Thread Pavel Tatashin
-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Shannon Nelson <shannon.nel...@oracle.com> --- arch/powerpc/mm/init_64.c |4 +- arch/s390/mm/vmem.c |5 ++- arch/sparc/mm/init_64.c |3 +- arch/x86/mm/init_64.c |3 +- include/linux/mm.h|

[v2 4/5] mm: zero struct pages during initialization

2017-03-24 Thread Pavel Tatashin
ero memory or can defer it. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Shannon Nelson <shannon.nel...@oracle.com> --- include/linux/mm.h |9 + mm/page_alloc.c|3 +++ 2 files changed, 12 insertions(+), 0 deletions(-) diff --git a/include/linu

[v2 0/5] parallelized "struct page" zeroing

2017-03-24 Thread Pavel Tatashin
t it takes only an additional 0.456s per node, which means on Intel we also benefit from having memset() and initializing all other fields in one place. Pavel Tatashin (5): sparc64: simplify vmemmap_populate mm: defining memblock_virt_alloc_try_nid_raw mm: add "zero" argument to vme

[v2 5/5] mm: teach platforms not to zero struct pages memory

2017-03-24 Thread Pavel Tatashin
quot; using the boot CPU. This patch solves this problem by deferring the zeroing of "struct pages" until they are initialized. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Shannon Nelson <shannon.nel...@oracle.com> --- arch/powerpc/mm/init_64.c |

[v2 3/5] mm: add "zero" argument to vmemmap allocators

2017-03-24 Thread Pavel Tatashin
-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Shannon Nelson <shannon.nel...@oracle.com> --- arch/powerpc/mm/init_64.c |4 +- arch/s390/mm/vmem.c |5 ++- arch/sparc/mm/init_64.c |3 +- arch/x86/mm/init_64.c |3 +- include/linux/mm.h|

[v2 2/5] mm: defining memblock_virt_alloc_try_nid_raw

2017-03-24 Thread Pavel Tatashin
A new version of memblock_virt_alloc_* allocations: - Does not zero the allocated memory - Does not panic if the request cannot be satisfied Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Shannon Nelson <shannon.nel...@oracle.com> --- include/linux/bootmem.h |

[v2 1/5] sparc64: simplify vmemmap_populate

2017-03-24 Thread Pavel Tatashin
Remove duplicated code by using the common functions vmemmap_pud_populate and vmemmap_pgd_populate. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Shannon Nelson <shannon.nel...@oracle.com> --- arch/sparc/mm/init_64.c | 23 ++--

[v1 6/9] x86/tsc: use cpuid to determine TSC frequency

2017-03-22 Thread Pavel Tatashin
. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- arch/x86/kernel/tsc.c | 39 ++- 1 files changed, 26 insertions(+), 13 deletions(-) diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c index 5add503..1c9fc23 100644 --- a/arch/x86/kernel

[v1 8/9] x86/tsc: tsc early

2017-03-22 Thread Pavel Tatashin
-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- arch/x86/include/asm/tsc.h |4 ++ arch/x86/kernel/tsc.c | 107 2 files changed, 111 insertions(+), 0 deletions(-) diff --git a/arch/x86/include/asm/tsc.h b/arch/x86/include/asm/tsc.h index 8

[v1 7/9] x86/tsc: use cpuid to determine CPU frequency

2017-03-22 Thread Pavel Tatashin
Newer processors implement a cpuid extension to determine the CPU frequency. This patch adds a function that can do this early in boot. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- arch/x86/kernel/tsc.c | 10 ++ 1 files changed, 6 insertions(+), 4 del

[v1 0/9] Early boot time stamps for x86

2017-03-22 Thread Pavel Tatashin
. Pavel Tatashin (9): sched/clock: broken stable to unstable transfer sched/clock: interface to allow timestamps early in boot x86/cpu: determining x86 vendor early x86/tsc: early MSR-based CPU/TSC frequency discovery x86/tsc: disable early messages from quick_pit_calibrate x86/tsc: use

[v1 3/9] x86/cpu: determining x86 vendor early

2017-03-22 Thread Pavel Tatashin
In order to support early time stamps we must know the vendor id of the chip early in boot. This patch implements it by getting the vendor string from cpuid and comparing it against the x86 vendors known to Linux. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- arch/x86/inclu
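The comparison step the snippet describes — assembling a vendor string from cpuid and matching it against vendors known to Linux — can be sketched in userspace. This is a hedged illustration, not the kernel's code: the enum values and the function name `identify_vendor` are hypothetical stand-ins for the kernel's X86_VENDOR_* constants, and the actual 12-byte string would come from CPUID leaf 0 via inline assembly.

```c
#include <string.h>

/* Hypothetical vendor ids; the kernel uses X86_VENDOR_* constants. */
enum vendor { VENDOR_INTEL, VENDOR_AMD, VENDOR_UNKNOWN };

/* cpuid_str is the 12-byte string assembled from CPUID leaf 0
 * registers (ebx, edx, ecx) — e.g. "GenuineIntel". */
static enum vendor identify_vendor(const char *cpuid_str)
{
    if (memcmp(cpuid_str, "GenuineIntel", 12) == 0)
        return VENDOR_INTEL;
    if (memcmp(cpuid_str, "AuthenticAMD", 12) == 0)
        return VENDOR_AMD;
    return VENDOR_UNKNOWN;
}
```

Matching on the raw 12-byte string avoids any dependency on later CPU setup code, which is what makes it usable this early in boot.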

[v1 9/9] x86/tsc: use tsc early

2017-03-22 Thread Pavel Tatashin
Call tsc_early_init() to initialize the early boot time stamp functionality on supported x86 platforms, and call tsc_early_fini() to finish this feature after the permanent tsc has been initialized. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- arch/x86/kernel/head64.c

[v1 1/9] sched/clock: broken stable to unstable transfer

2017-03-22 Thread Pavel Tatashin
e correct clock value when we determine the boundaries for min/max clocks. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- kernel/sched/clock.c |9 + 1 files changed, 5 insertions(+), 4 deletions(-) diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c index a

[v1 2/9] sched/clock: interface to allow timestamps early in boot

2017-03-22 Thread Pavel Tatashin
sched_clock_early_init() Tells sched clock that the early clock can be used - Call sched_clock_early_fini() Tells sched clock that the early clock is finished, and sched clock should hand over the operation to the permanent clock. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- include/linux
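The handover protocol described above — callers read one clock entry point, and the init/fini pair switches the backing source from the early clock to the permanent one — can be sketched as a tiny state machine. All names and the constant values here are illustrative, assuming two fixed clock sources; this is not the kernel's sched/clock implementation.

```c
#include <stdbool.h>

typedef unsigned long long u64;

/* Illustrative stand-ins for the two clock sources. */
static u64 fake_early_ns = 1000;  /* e.g. derived from early TSC */
static u64 fake_sched_ns = 5000;  /* the permanent sched clock   */

static bool early_clock_active;

/* Early clock may be used from here on. */
static void sched_clock_early_init(void) { early_clock_active = true; }

/* Early clock is finished; hand over to the permanent clock. */
static void sched_clock_early_fini(void) { early_clock_active = false; }

/* Callers always use one entry point; the handover is internal. */
static u64 local_clock_ns(void)
{
    return early_clock_active ? fake_early_ns : fake_sched_ns;
}
```

The point of the design is that timestamp consumers never need to know which clock is active; only the boot code calls the init/fini pair.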

[v1 4/9] x86/tsc: early MSR-based CPU/TSC frequency discovery

2017-03-22 Thread Pavel Tatashin
Allow discovering MSR-based CPU/TSC frequency early in boot. This method works only for some Intel CPUs. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- arch/x86/include/asm/tsc.h |1 + arch/x86/kernel/tsc_msr.c | 38 +- 2 files c

[v1 5/9] x86/tsc: disable early messages from quick_pit_calibrate

2017-03-22 Thread Pavel Tatashin
-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- arch/x86/kernel/tsc.c | 10 ++ 1 files changed, 6 insertions(+), 4 deletions(-) diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c index c73a7f9..5add503 100644 --- a/arch/x86/kernel/tsc.c +++ b/arch/x86/kernel/tsc.c @@

[v2 6/9] x86/tsc: use cpuid to determine TSC frequency

2017-03-24 Thread Pavel Tatashin
. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- arch/x86/kernel/tsc.c | 39 ++- 1 file changed, 26 insertions(+), 13 deletions(-) diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c index 5add503..1c9fc23 100644 --- a/arch/x86/kernel

[v2 5/9] x86/tsc: disable early messages from quick_pit_calibrate

2017-03-24 Thread Pavel Tatashin
-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- arch/x86/kernel/tsc.c | 10 ++ 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c index c73a7f9..5add503 100644 --- a/arch/x86/kernel/tsc.c +++ b/arch/x86/kernel/tsc.c @@ -580,7

[v2 1/9] sched/clock: broken stable to unstable transfer

2017-03-24 Thread Pavel Tatashin
e correct clock value when we determine the boundaries for min/max clocks. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- kernel/sched/clock.c | 9 + 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c index a08795e.

[v2 0/9] Early boot time stamps for x86

2017-03-24 Thread Pavel Tatashin
e can show it with the early boot time stamps, and avoid future regressions by having this data always available to us. Pavel Tatashin (9): sched/clock: broken stable to unstable transfer sched/clock: interface to allow timestamps early in boot x86/cpu: determining x86 vendor early x86/tsc: early MSR

[v2 4/9] x86/tsc: early MSR-based CPU/TSC frequency discovery

2017-03-24 Thread Pavel Tatashin
Allow discovering MSR-based CPU/TSC frequency early in boot. This method works only for some Intel CPUs. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- arch/x86/include/asm/tsc.h | 1 + arch/x86/kernel/tsc_msr.c | 38 +- 2 files chang

[v2 3/9] x86/cpu: determining x86 vendor early

2017-03-24 Thread Pavel Tatashin
In order to support early time stamps we must know the vendor id of the chip early in boot. This patch implements it by getting the vendor string from cpuid and comparing it against the x86 vendors known to Linux. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- arch/x86/inclu

[v2 2/9] sched/clock: interface to allow timestamps early in boot

2017-03-24 Thread Pavel Tatashin
sched_clock_early_init() Tells sched clock that the early clock can be used - Call sched_clock_early_fini() Tells sched clock that the early clock is finished, and sched clock should hand over the operation to the permanent clock. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- include/linux

[v2 8/9] x86/tsc: tsc early

2017-03-24 Thread Pavel Tatashin
-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- arch/x86/include/asm/tsc.h | 4 ++ arch/x86/kernel/tsc.c | 127 + 2 files changed, 131 insertions(+) diff --git a/arch/x86/include/asm/tsc.h b/arch/x86/include/asm/tsc.h index 893de0c..a8c7f2e

[v2 7/9] x86/tsc: use cpuid to determine CPU frequency

2017-03-24 Thread Pavel Tatashin
Newer processors implement a cpuid extension to determine the CPU frequency. This patch adds a function that can do this early in boot. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- arch/x86/kernel/tsc.c | 10 ++ 1 file changed, 6 insertions(+), 4 deletions(-)

[v2 9/9] x86/tsc: use tsc early

2017-03-24 Thread Pavel Tatashin
Call tsc_early_init() to initialize the early boot time stamp functionality on supported x86 platforms, and call tsc_early_fini() to finish this feature after the permanent tsc has been initialized. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- arch/x86/kernel/head64.c | 1 +

[v4 00/15] complete deferred page initialization

2017-08-02 Thread Pavel Tatashin
10.2s/T improvement Pavel Tatashin (15): x86/mm: reserve only existing low pages x86/mm: setting fields in deferred pages sparc64/mm: setting fields in deferred pages mm: discard memblock data later mm: don't access uninitialized struct pages sparc64: simplify vmemmap_pop

[v4 09/15] sparc64: optimized struct page zeroing

2017-08-02 Thread Pavel Tatashin
Add an optimized mm_zero_struct_page(), so struct pages are zeroed without calling memset(). We do eight regular stores, thus avoiding the cost of a membar. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@oracle.com> Reviewed-by: D
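The eight-store zeroing the snippet describes can be sketched portably. This is a minimal sketch, assuming the common case of a 64-byte struct page; the struct and function names are hypothetical — sparc64's real mm_zero_struct_page() operates on the actual struct page and exists to avoid the memory barrier its block-initializing memset() variant would require.

```c
#include <stdint.h>

/* Assume the common case of a 64-byte struct page. */
struct fake_page {
    uint64_t words[8];
};

/* Zero with eight plain 64-bit stores instead of memset().
 * On sparc64 plain stores avoid the membar that the optimized
 * block-initializing stores used by memset() would require. */
static void mm_zero_struct_page_sketch(struct fake_page *p)
{
    p->words[0] = 0; p->words[1] = 0;
    p->words[2] = 0; p->words[3] = 0;
    p->words[4] = 0; p->words[5] = 0;
    p->words[6] = 0; p->words[7] = 0;
}
```

Because the size is a compile-time constant, the compiler can emit exactly eight stores with no loop and no call overhead.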

[v4 03/15] sparc64/mm: setting fields in deferred pages

2017-08-02 Thread Pavel Tatashin
e initialized in free_all_bootmem(). Therefore, the fix is to switch the above calls. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@oracle.com> Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com> Reviewed-by: Bob Picco &

[v4 06/15] sparc64: simplify vmemmap_populate

2017-08-02 Thread Pavel Tatashin
Remove duplicated code by using the common functions vmemmap_pud_populate and vmemmap_pgd_populate. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@oracle.com> Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com> Reviewed-by:

[v4 05/15] mm: don't access uninitialized struct pages

2017-08-02 Thread Pavel Tatashin
quot;es until they are initialized. This patch fixes this. This patch defines a new accessor memblock_get_reserved_pfn_range() which returns successive ranges of reserved PFNs. deferred_init_memmap() calls it to determine if a PFN and its struct page have already been initialized. Signed-off-by: Pavel Tatashin

[v4 01/15] x86/mm: reserve only existing low pages

2017-08-02 Thread Pavel Tatashin
. In this patchset we will stop zeroing struct page memory during allocation. Therefore, this bug must be fixed in order to avoid random assert failures caused by CONFIG_DEBUG_VM_PGFLAGS triggers. The fix is to reserve memory from the first existing PFN. Signed-off-by: Pavel Tatashin <pasha.ta

[v4 11/15] arm64/kasan: explicitly zero kasan shadow memory

2017-08-02 Thread Pavel Tatashin
To optimize the performance of struct page initialization, vmemmap_populate() will no longer zero memory. We must explicitly zero the memory that is allocated by vmemmap_populate() for kasan, as this memory does not go through the struct page initialization path. Signed-off-by: Pavel Tatashin

[v4 15/15] mm: debug for raw allocator

2017-08-02 Thread Pavel Tatashin
When CONFIG_DEBUG_VM is enabled, this patch sets all the memory that is returned by memblock_virt_alloc_try_nid_raw() to ones to ensure that no places expect zeroed memory. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@oracle.com&
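The debug technique above — poisoning raw allocations with ones so that code wrongly expecting zeroed memory fails loudly — can be sketched in userspace. This is a hedged illustration under stated assumptions: `raw_alloc` and the `DEBUG_VM` macro are hypothetical stand-ins for memblock_virt_alloc_try_nid_raw() and CONFIG_DEBUG_VM, and malloc stands in for the memblock allocator.

```c
#include <stdlib.h>
#include <string.h>

#define DEBUG_VM 1  /* stand-in for CONFIG_DEBUG_VM */

/* Sketch: a raw allocator that never zeroes. In debug builds it
 * fills the memory with 0xff, so any caller that assumes zeroed
 * memory misbehaves immediately instead of working by accident. */
static void *raw_alloc(size_t size)
{
    void *ptr = malloc(size);

    if (ptr && DEBUG_VM)
        memset(ptr, 0xff, size);
    return ptr;
}
```

Filling with ones rather than leaving the memory untouched is the key choice: freshly mapped memory is often zero anyway, so a latent "expects zeroed memory" bug would otherwise stay hidden.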

[v4 10/15] x86/kasan: explicitly zero kasan shadow memory

2017-08-02 Thread Pavel Tatashin
To optimize the performance of struct page initialization, vmemmap_populate() will no longer zero memory. We must explicitly zero the memory that is allocated by vmemmap_populate() for kasan, as this memory does not go through struct page initialization path. Signed-off-by: Pavel Tatashin

[v4 02/15] x86/mm: setting fields in deferred pages

2017-08-02 Thread Pavel Tatashin
ialized prior to using them. The deferred-reserved pages are initialized in free_all_bootmem(). Therefore, the fix is to switch the above calls. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@oracle.com> Reviewed-by: Daniel Jordan &l

[v4 07/15] mm: defining memblock_virt_alloc_try_nid_raw

2017-08-02 Thread Pavel Tatashin
A new variant of memblock_virt_alloc_* allocations: memblock_virt_alloc_try_nid_raw() - Does not zero the allocated memory - Does not panic if the request cannot be satisfied Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@o

[v4 04/15] mm: discard memblock data later

2017-08-02 Thread Pavel Tatashin
were enabled. Another reason why we want this problem fixed in this patch series is that, in the next patch, we will need to access memblock.reserved from deferred_init_memmap(). Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@oracle.com&

[v4 14/15] mm: optimize early system hash allocations

2017-08-02 Thread Pavel Tatashin
() interface, and thus improve the boot performance. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@oracle.com> Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com> Reviewed-by: Bob Picco <bob.pi...@oracle.com> -

[v4 13/15] mm: stop zeroing memory during allocation in vmemmap

2017-08-02 Thread Pavel Tatashin
Replace the allocators in sparse-vmemmap with the non-zeroing version. This way we get the performance improvement by zeroing the memory in parallel when struct pages are zeroed. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@o

[v4 08/15] mm: zero struct pages during initialization

2017-08-02 Thread Pavel Tatashin
Add struct page zeroing as a part of initialization of other fields in __init_single_page(). Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@oracle.com> Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com> Reviewed-by:

[v4 12/15] mm: explicitly zero pagetable memory

2017-08-02 Thread Pavel Tatashin
Soon vmemmap_alloc_block() will no longer zero the block, so zero memory at its call sites for everything except struct pages. Struct page memory is zeroed by struct page initialization. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <st

[v5 08/15] mm: zero struct pages during initialization

2017-08-03 Thread Pavel Tatashin
Add struct page zeroing as a part of initialization of other fields in __init_single_page(). Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@oracle.com> Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com> Reviewed-by:

[v5 04/15] mm: discard memblock data later

2017-08-03 Thread Pavel Tatashin
were enabled. Another reason why we want this problem fixed in this patch series is that, in the next patch, we will need to access memblock.reserved from deferred_init_memmap(). Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@oracle.com&

[v5 07/15] mm: defining memblock_virt_alloc_try_nid_raw

2017-08-03 Thread Pavel Tatashin
A new variant of memblock_virt_alloc_* allocations: memblock_virt_alloc_try_nid_raw() - Does not zero the allocated memory - Does not panic if the request cannot be satisfied Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@o

[v5 09/15] sparc64: optimized struct page zeroing

2017-08-03 Thread Pavel Tatashin
Add an optimized mm_zero_struct_page(), so struct pages are zeroed without calling memset(). We do eight regular stores, thus avoiding the cost of a membar. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@oracle.com> Reviewed-by: D

[v5 11/15] arm64/kasan: explicitly zero kasan shadow memory

2017-08-03 Thread Pavel Tatashin
To optimize the performance of struct page initialization, vmemmap_populate() will no longer zero memory. We must explicitly zero the memory that is allocated by vmemmap_populate() for kasan, as this memory does not go through the struct page initialization path. Signed-off-by: Pavel Tatashin

[v5 14/15] mm: optimize early system hash allocations

2017-08-03 Thread Pavel Tatashin
() interface, and thus improve the boot performance. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@oracle.com> Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com> Reviewed-by: Bob Picco <bob.pi...@oracle.com> -

[v5 13/15] mm: stop zeroing memory during allocation in vmemmap

2017-08-03 Thread Pavel Tatashin
Replace the allocators in sparse-vmemmap with the non-zeroing version. This way we get the performance improvement by zeroing the memory in parallel when struct pages are zeroed. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@o

[v5 00/15] complete deferred page initialization

2017-08-03 Thread Pavel Tatashin
threaded struct page init: 7.6s/T improvement Deferred struct page init: 10.2s/T improvement Pavel Tatashin (15): x86/mm: reserve only existing low pages x86/mm: setting fields in deferred pages sparc64/mm: setting fields in deferred pages mm: discard memblock data later mm: don't access

[v5 03/15] sparc64/mm: setting fields in deferred pages

2017-08-03 Thread Pavel Tatashin
e initialized in free_all_bootmem(). Therefore, the fix is to switch the above calls. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@oracle.com> Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com> Reviewed-by: Bob Picco &

[v5 15/15] mm: debug for raw allocator

2017-08-03 Thread Pavel Tatashin
When CONFIG_DEBUG_VM is enabled, this patch sets all the memory that is returned by memblock_virt_alloc_try_nid_raw() to ones to ensure that no places expect zeroed memory. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@oracle.com&

[v5 02/15] x86/mm: setting fields in deferred pages

2017-08-03 Thread Pavel Tatashin
ialized prior to using them. The deferred-reserved pages are initialized in free_all_bootmem(). Therefore, the fix is to switch the above calls. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@oracle.com> Reviewed-by: Daniel Jordan &l

[v5 01/15] x86/mm: reserve only existing low pages

2017-08-03 Thread Pavel Tatashin
. In this patchset we will stop zeroing struct page memory during allocation. Therefore, this bug must be fixed in order to avoid random assert failures caused by CONFIG_DEBUG_VM_PGFLAGS triggers. The fix is to reserve memory from the first existing PFN. Signed-off-by: Pavel Tatashin <pasha.ta

[v5 05/15] mm: don't access uninitialized struct pages

2017-08-03 Thread Pavel Tatashin
quot;es until they are initialized. This patch fixes this. This patch defines a new accessor memblock_get_reserved_pfn_range() which returns successive ranges of reserved PFNs. deferred_init_memmap() calls it to determine if a PFN and its struct page have already been initialized. Signed-off-by: Pavel Tatashin

[v5 12/15] mm: explicitly zero pagetable memory

2017-08-03 Thread Pavel Tatashin
Soon vmemmap_alloc_block() will no longer zero the block, so zero memory at its call sites for everything except struct pages. Struct page memory is zeroed by struct page initialization. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <st

[v5 10/15] x86/kasan: explicitly zero kasan shadow memory

2017-08-03 Thread Pavel Tatashin
To optimize the performance of struct page initialization, vmemmap_populate() will no longer zero memory. We must explicitly zero the memory that is allocated by vmemmap_populate() for kasan, as this memory does not go through the struct page initialization path. Signed-off-by: Pavel Tatashin

[v5 06/15] sparc64: simplify vmemmap_populate

2017-08-03 Thread Pavel Tatashin
Remove duplicated code by using the common functions vmemmap_pud_populate and vmemmap_pgd_populate. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@oracle.com> Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com> Reviewed-by:

[v3 2/2] x86/tsc: use tsc early

2017-08-11 Thread Pavel Tatashin
tsc_early_init() to initialize the early boot time stamp functionality on supported x86 platforms, and call tsc_early_fini() to finish this feature after the permanent tsc has been initialized. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- arch/x86/include/asm/tsc.h | 4 arch/x86/

[v3 1/2] sched/clock: interface to allow timestamps early in boot

2017-08-11 Thread Pavel Tatashin
specific read_boot_clock64() Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- arch/x86/kernel/time.c | 22 include/linux/sched/clock.h | 4 +++ kernel/sched/clock.c| 61 - 3 files changed, 86 insertions

[v3 0/2] Early boot time stamps for x86

2017-08-11 Thread Pavel Tatashin
ore: https://hastebin.com/jadaqukubu.scala After: https://hastebin.com/nubipozacu.scala As seen above, currently timestamps are available from around the time when "Security Framework" is initialized. But 26s have already passed by the time we reach this point. Pavel Tatashin (2): sched/clock

[PATCH v4 1/2] sched/clock: interface to allow timestamps early in boot

2017-08-14 Thread Pavel Tatashin
specific read_boot_clock64() Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- arch/x86/kernel/time.c | 23 + include/linux/sched/clock.h | 4 +++ kernel/sched/clock.c| 63 - 3 files changed, 89 insertions

[PATCH v4 2/2] x86/tsc: use tsc early

2017-08-14 Thread Pavel Tatashin
tsc_early_init() to initialize the early boot time stamp functionality on supported x86 platforms, and call tsc_early_fini() to finish this feature after the permanent tsc has been initialized. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- arch/x86/include/asm/tsc.h | 4 arch/x86/

[PATCH v4 0/2] Early boot time stamps for x86

2017-08-14 Thread Pavel Tatashin
stebin.com/nubipozacu.scala As seen above, currently timestamps are available from around the time when "Security Framework" is initialized. But 26s have already passed by the time we reach this point. Pavel Tatashin (2): sched/clock: interface to allow timestamps early in boot x86/tsc: use

[v3 2/2] x86/tsc: use tsc early

2017-08-10 Thread Pavel Tatashin
tsc_early_init() to initialize the early boot time stamp functionality on supported x86 platforms, and call tsc_early_fini() to finish this feature after the permanent tsc has been initialized. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- arch/x86/include/asm/tsc.h | 4 arch/x86/

[v3 0/2] *** SUBJECT HERE ***

2017-08-10 Thread Pavel Tatashin
ala After: https://hastebin.com/nubipozacu.scala As seen above, currently timestamps are available from around the time when "Security Framework" is initialized. But 26s have already passed by the time we reach this point. Pavel Tatashin (2): sched/clock: interface to allow timestamps early in

[v3 1/2] sched/clock: interface to allow timestamps early in boot

2017-08-10 Thread Pavel Tatashin
specific read_boot_clock64() Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- arch/x86/kernel/time.c | 22 include/linux/sched/clock.h | 4 +++ kernel/sched/clock.c| 61 - 3 files changed, 86 insertions

[v1 1/1] mm: discard memblock data later

2017-08-11 Thread Pavel Tatashin
were enabled. Tested by reducing INIT_MEMBLOCK_REGIONS down to 4 from the current 128, and verifying in qemu that this code is getting executed and that the freed pages are sane. Fixes: 7e18adb4f80b ("mm: meminit: initialise remaining struct pages in parallel with kswapd") Signed-off

[v1 0/1] discard memblock data later

2017-08-11 Thread Pavel Tatashin
age memory. Pavel Tatashin (1): mm: discard memblock data later include/linux/memblock.h | 8 +--- mm/memblock.c| 38 +- mm/nobootmem.c | 16 mm/page_alloc.c | 4 4 files changed, 26 insertions(+), 40

[v2 1/1] mm: discard memblock data later

2017-08-11 Thread Pavel Tatashin
were enabled. Tested by reducing INIT_MEMBLOCK_REGIONS down to 4 from the current 128, and verifying in qemu that this code is getting executed and that the freed pages are sane. Fixes: 7e18adb4f80b ("mm: meminit: initialise remaining struct pages in parallel with kswapd") Signed-off

[v2 0/1] discard memblock data later

2017-08-11 Thread Pavel Tatashin
erred page initialization", where we do not zero the backing struct page memory. Pavel Tatashin (1): mm: discard memblock data later include/linux/memblock.h | 6 -- mm/memblock.c| 38 +- mm/nobootmem.c | 16 --

[v6 06/15] sparc64: simplify vmemmap_populate

2017-08-07 Thread Pavel Tatashin
Remove duplicated code by using the common functions vmemmap_pud_populate and vmemmap_pgd_populate. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@oracle.com> Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com> Reviewed-by:

[v6 10/15] x86/kasan: explicitly zero kasan shadow memory

2017-08-07 Thread Pavel Tatashin
To optimize the performance of struct page initialization, vmemmap_populate() will no longer zero memory. We must explicitly zero the memory that is allocated by vmemmap_populate() for kasan, as this memory does not go through the struct page initialization path. Signed-off-by: Pavel Tatashin

[v6 11/15] arm64/kasan: explicitly zero kasan shadow memory

2017-08-07 Thread Pavel Tatashin
To optimize the performance of struct page initialization, vmemmap_populate() will no longer zero memory. We must explicitly zero the memory that is allocated by vmemmap_populate() for kasan, as this memory does not go through the struct page initialization path. Signed-off-by: Pavel Tatashin

[v6 07/15] mm: defining memblock_virt_alloc_try_nid_raw

2017-08-07 Thread Pavel Tatashin
A new variant of memblock_virt_alloc_* allocations: memblock_virt_alloc_try_nid_raw() - Does not zero the allocated memory - Does not panic if the request cannot be satisfied Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@o

[v6 03/15] sparc64/mm: setting fields in deferred pages

2017-08-07 Thread Pavel Tatashin
e initialized in free_all_bootmem(). Therefore, the fix is to switch the above calls. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@oracle.com> Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com> Reviewed-by: Bob Picco &

[v6 12/15] mm: explicitly zero pagetable memory

2017-08-07 Thread Pavel Tatashin
Soon vmemmap_alloc_block() will no longer zero the block, so zero memory at its call sites for everything except struct pages. Struct page memory is zeroed by struct page initialization. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <st

[v6 05/15] mm: don't access uninitialized struct pages

2017-08-07 Thread Pavel Tatashin
quot;es until they are initialized. This patch fixes this. This patch defines a new accessor memblock_get_reserved_pfn_range() which returns successive ranges of reserved PFNs. deferred_init_memmap() calls it to determine if a PFN and its struct page have already been initialized. Signed-off-by: Pavel Tatashin

[v6 04/15] mm: discard memblock data later

2017-08-07 Thread Pavel Tatashin
were enabled. Another reason why we want this problem fixed in this patch series is that, in the next patch, we will need to access memblock.reserved from deferred_init_memmap(). Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@oracle.com&

[v6 00/15] complete deferred page initialization

2017-08-07 Thread Pavel Tatashin
ization Performance improvements on x86 machine with 8 nodes: Intel(R) Xeon(R) CPU E7-8895 v3 @ 2.60GHz Single threaded struct page init: 7.6s/T improvement Deferred struct page init: 10.2s/T improvement Pavel Tatashin (15): x86/mm: reserve only existing low pages x86/mm: setting fields in deferre

[v6 02/15] x86/mm: setting fields in deferred pages

2017-08-07 Thread Pavel Tatashin
ialized prior to using them. The deferred-reserved pages are initialized in free_all_bootmem(). Therefore, the fix is to switch the above calls. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@oracle.com> Reviewed-by: Daniel Jordan &l

[v6 14/15] mm: optimize early system hash allocations

2017-08-07 Thread Pavel Tatashin
() interface, and thus improve the boot performance. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@oracle.com> Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com> Reviewed-by: Bob Picco <bob.pi...@oracle.com> -

[v6 15/15] mm: debug for raw allocator

2017-08-07 Thread Pavel Tatashin
When CONFIG_DEBUG_VM is enabled, this patch sets all the memory that is returned by memblock_virt_alloc_try_nid_raw() to ones to ensure that no places expect zeroed memory. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@oracle.com&

[v6 08/15] mm: zero struct pages during initialization

2017-08-07 Thread Pavel Tatashin
Add struct page zeroing as a part of initialization of other fields in __init_single_page(). Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@oracle.com> Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com> Reviewed-by:

[v6 13/15] mm: stop zeroing memory during allocation in vmemmap

2017-08-07 Thread Pavel Tatashin
Replace allocators in sparse-vmemmap with the non-zeroing version. This way we get the performance improvement of zeroing the memory in parallel, when struct pages are initialized. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Steven Sistare <steven.sist...@o

[v6 01/15] x86/mm: reserve only existing low pages

2017-08-07 Thread Pavel Tatashin
. In this patchset we will stop zeroing struct page memory during allocation. Therefore, this bug must be fixed in order to avoid random assertion failures triggered by CONFIG_DEBUG_VM_PGFLAGS. The fix is to reserve memory from the first existing PFN. Signed-off-by: Pavel Tatashin <pasha.ta

[v6 09/15] sparc64: optimized struct page zeroing

2017-08-07 Thread Pavel Tatashin
Add an optimized mm_zero_struct_page(), so struct pages are zeroed without calling memset(). We do eight to ten regular stores, based on the size of struct page. The compiler optimizes away the conditions of the switch() statement. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Re
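The unrolled-store idea can be sketched with a stand-in struct. struct fake_page and zero_struct_page() below are assumptions for illustration, not the sparc64 code: the switch selects on a compile-time constant, so the compiler keeps only the matching fall-through chain of 64-bit stores and drops the comparison entirely.

```c
#include <string.h>

/* 64-byte stand-in for struct page (8 unsigned longs on LP64). */
struct fake_page {
    unsigned long w[8];
};

/* Zero the struct with unrolled word stores instead of memset().
 * sizeof() is a compile-time constant, so the switch condition is
 * optimized away and only the stores from the matching case down
 * through 'default' remain (here: eight stores). Cases 10 and 9
 * exist only for larger struct sizes and are dead code here. */
static void zero_struct_page(struct fake_page *p)
{
    unsigned long *d = p->w;

    switch (sizeof(*p) / sizeof(unsigned long)) {
    case 10: d[9] = 0;  /* fall through */
    case 9:  d[8] = 0;  /* fall through */
    case 8:  d[7] = 0;  /* fall through */
    default:
        d[6] = 0; d[5] = 0; d[4] = 0;
        d[3] = 0; d[2] = 0; d[1] = 0; d[0] = 0;
    }
}
```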

[v4 1/1] mm: Adaptive hash table scaling

2017-05-20 Thread Pavel Tatashin
65536G 13 18 65536M 2048M Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- mm/page_alloc.c | 19 +++ 1 file changed, 19 insertions(+) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 8afa63e81e73..15bba5c325a5 100644 --- a/mm/page_alloc.c ++

[v4 0/1] mm: Adaptive hash table scaling

2017-05-20 Thread Pavel Tatashin
Changes from v3 - v4: - Fixed an issue with 32-bit overflow (adapt is now unsigned long long instead of unsigned long) - Added changes suggested by Michal Hocko: use high_limit instead of a new flag to determine that we should use this new scaling. Pavel Tatashin (1): mm: Adaptive hash table scaling mm/page_alloc.c

[v5 1/1] mm: Adaptive hash table scaling

2017-05-22 Thread Pavel Tatashin
65536G 13 18 65536M 2048M Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- mm/page_alloc.c | 25 + 1 file changed, 25 insertions(+) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 8afa63e81e73..409e0cd35381 100644 --- a/mm/page_alloc.c ++

[v5 0/1] mm: Adaptive hash table scaling

2017-05-22 Thread Pavel Tatashin
Changes from v4 - v5: - Disabled adaptive hash on 32-bit systems, to avoid confusion over whether the base should be different for smaller systems, and to avoid overflows. Pavel Tatashin (1): mm: Adaptive hash table scaling mm/page_alloc.c | 25 + 1 file changed, 25
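The scaling behavior described in this series can be sketched as follows. This is an illustrative model under assumed constants, not the patch itself: beyond a 64G base, each quadrupling of memory bumps the scale by one more, so the hash table grows sub-linearly on huge machines. The names adaptive_scale, ADAPT_SCALE_BASE, and ADAPT_SCALE_SHIFT are assumptions for this sketch.

```c
/* Adaptive hash table scaling sketch: a larger 'scale' means fewer
 * hash entries per byte of memory. Starting from a 64G base, each
 * quadrupling of memory (shift by 2) increases the scale by one,
 * dampening hash table growth on very large systems. */
#define ADAPT_SCALE_BASE  (64ULL << 30)  /* 64G, in bytes */
#define ADAPT_SCALE_SHIFT 2              /* quadruple per step */

static unsigned int adaptive_scale(unsigned long long mem_bytes,
                                   unsigned int scale)
{
    unsigned long long adapt;

    for (adapt = ADAPT_SCALE_BASE; adapt < mem_bytes;
         adapt <<= ADAPT_SCALE_SHIFT)
        scale++;
    return scale;
}
```

With a base scale of 13, a 64G machine keeps scale 13, while a 65536G machine ends up at 18, matching the "65536G 13 18" row quoted in the patch snippets above.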

[v3 5/9] mm: zero struct pages during initialization

2017-05-05 Thread Pavel Tatashin
ero memory or can defer it. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Shannon Nelson <shannon.nel...@oracle.com> --- include/linux/mm.h |9 + mm/page_alloc.c|3 +++ 2 files changed, 12 insertions(+), 0 deletions(-) diff --git a/include/linu

[v3 6/9] sparc64: teach sparc not to zero struct pages memory

2017-05-05 Thread Pavel Tatashin
quot; using the boot CPU. This patch solves this problem by deferring zeroing "struct pages" to only when they are initialized on SPARC. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> Reviewed-by: Shannon Nelson <shannon.nel...@oracle.com> --- arch/sparc/

[v3 4/9] mm: do not zero vmemmap_buf

2017-05-05 Thread Pavel Tatashin
memmap_buf beforehand. Let clients of alloc_block_buf() decide whether that is needed. Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com> --- mm/sparse-vmemmap.c |2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c index
