pdata.dump_oops = dump_oops;
> + /* If "max_reason" is set, its value has priority over "dump_oops". */
> + if (ramoops_max_reason != -1)
> + pdata.max_reason = ramoops_max_reason;
(ramoops_max_reason >= 0) might make more sense here; we do not want
a negative max_reason.
> #define parse_u32(name, field, default_value) {
> \
> ret = ramoops_parse_dt_u32(pdev, name, default_value, \
The series seems to be missing the patch where ramoops_parse_dt_size
is renamed to ramoops_parse_dt_u32 and updated to handle
> Link:
> https://lore.kernel.org/lkml/20200510202436.63222-8-keesc...@chromium.org/
> Acked-by: Petr Mladek
> Acked-by: Sergey Senozhatsky
> Signed-off-by: Kees Cook
Reviewed-by: Pavel Tatashin
ght before the
kmsg_dump(), thus the reason is distinguishable from the dmesg log
itself.
Reviewed-by: Pavel Tatashin
-pasha.tatas...@soleen.com
>
> Kees Cook (3):
> printk: Collapse shutdown types into a single dump reason
> printk: Introduce kmsg_dump_reason_str()
> pstore/ram: Introduce max_reason and convert dump_oops
>
> Pavel Tatashin (3):
> printk: honor the max_reason fie
> Cc: Greg Kroah-Hartman
> Cc: "Rafael J. Wysocki"
> Cc: David Hildenbrand
> Cc: "mike.tra...@hpe.com"
> Cc: Andrew Morton
> Cc: Ingo Molnar
> Cc: Andrew Banman
> Cc: Oscar Salvador
> Cc: Michal Hocko
> Cc: Pavel Tatashin
> Cc: Qian C
ally get rid of CONFIG_MEMORY_HOTREMOVE.
Reviewed-by: Pavel Tatashin
ichal Hocko
> Cc: David Hildenbrand
> Cc: Pavel Tatashin
> Cc: Qian Cai
> Cc: Wei Yang
> Cc: Arun KS
> Cc: Mathieu Malaterre
> Reviewed-by: Dan Williams
> Reviewed-by: Wei Yang
> Signed-off-by: David Hildenbrand
Reviewed-by: Pavel Tatashin
On Tue, Jul 17, 2018 at 6:49 AM Abdul Haleem
wrote:
>
> On Sat, 2018-07-14 at 10:55 +1000, Stephen Rothwell wrote:
> > Hi Abdul,
> >
> > On Fri, 13 Jul 2018 14:43:11 +0530 Abdul Haleem
> > wrote:
> > >
> > > On Thu, 2018-07-12 at 13:44 -0400, Pave
> Related commit could be one of below ? I see lots of patches related to mm
> and could not bisect
>
> 5479976fda7d3ab23ba0a4eb4d60b296eb88b866 mm: page_alloc: restore
> memblock_next_valid_pfn() on arm/arm64
> 41619b27b5696e7e5ef76d9c692dd7342c1ad7eb
>
On Thu, Jul 12, 2018 at 5:50 AM Oscar Salvador
wrote:
>
> > > I just roughly checked, but if I checked the right place,
> > > vmemmap_populated() checks for the section to contain the flags we are
> > > setting in sparse_init_one_section().
> >
> > Yes.
> >
> > > But with this patch, we populate
I am OK if this patch is removed from Baoquan's series. But I would
still like to get rid of CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER; I
can work on this in my sparse_init rewrite series. ppc64 should
really fall back safely to small-chunk allocations, and if it does not,
there is an existing bug.
Thank you, Andy, for the heads-up. I might need to rebase my work
(http://lkml.kernel.org/r/20180629182541.6735-1-pasha.tatas...@oracle.com)
based on this change. But it is possible it will be harder to
parallelize based on the device tree. I will need to think about it.
Pavel
On Tue, Jul 3,
On Tue, Jun 19, 2018 at 9:50 AM Pavel Tatashin
wrote:
>
> On Sat, Jun 16, 2018 at 4:04 AM Jiri Slaby wrote:
> >
> > On 11/21/2017, 08:24 AM, Michal Hocko wrote:
> > > On Thu 16-11-17 20:46:01, Pavel Tatashin wrote:
> > >> There is no need to have
On Sat, Jun 16, 2018 at 4:04 AM Jiri Slaby wrote:
>
> On 11/21/2017, 08:24 AM, Michal Hocko wrote:
> > On Thu 16-11-17 20:46:01, Pavel Tatashin wrote:
> >> There is no need to have ARCH_SUPPORTS_DEFERRED_STRUCT_PAGE_INIT,
> >> as all the page initialization code is
.
This patch makes it possible to use deferred struct page initialization
on all platforms with the memblock allocator.
Tested on x86, arm64, and sparc. Also verified that the code compiles on
PPC with CONFIG_MEMORY_HOTPLUG disabled.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
arch/powerpc/Kconf
1. Replace these two patches:
arm64/kasan: add and use kasan_map_populate()
x86/kasan: add and use kasan_map_populate()
With:
x86/mm/kasan: don't use vmemmap_populate() to initialize
shadow
arm64/mm/kasan: don't use vmemmap_populate() to initialize
shadow
Pavel, could you please send
This looks good to me, thank you Andrew.
Pavel
:
On 10/18/2017 08:08 PM, Pavel Tatashin wrote:
As I said, I'm fine either way, I just didn't want to cause extra work
or rebasing:
http://lists.infradead.org/pipermail/linux-arm-kernel/2017-October/535703.html
Makes sense. I am also fine either way, I can submit a new patch merging
together
Thank you Andrey, I will test this patch. Should it go on top of, or
replace, the existing patch in the mm tree? ARM and x86 should be
handled the same way: either both as follow-ups or both as replacements.
Pavel
Hi Andrey,
I asked Will about it, and he preferred to have this patch added to
the end of my series instead of replacing "arm64/kasan: add and use
kasan_map_populate()".
In addition, Will's patch stops using large pages for kasan memory, and
thus might add some regression, in which case
ot studied it.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com>
Reviewed-by: Bob Picco <bob.pi...@oracle.com>
I do not see any obvious issues in
fixed time: it
does not increase as memory is increased.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com>
Reviewed-by: Bob Picco <bob.pi...@oracle.com
allocator
When CONFIG_DEBUG_VM is enabled, this patch sets all the memory that is
returned by memblock_virt_alloc_try_nid_raw() to ones to ensure that no
places expect zeroed memory.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com&
d: https://hastebin.com/muhegoheyi.go
sparc fix deferred: https://hastebin.com/xadinobutu.go
Pavel Tatashin (10):
mm: deferred_init_memmap improvements
x86/mm: setting fields in deferred pages
sparc64/mm: setting fields in deferred pages
sparc64: simplify vmemmap_populate
mm: defini
is moved later in this patch into __init_single_page(), which is
called from zone_sizes_init().
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com>
Reviewed-by:
e that struct pages are properly
initialized prior to using them.
The deferred-reserved pages are initialized in free_all_bootmem().
Therefore, the fix is to switch the above calls.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com
() function to resolve this difference.
Therefore, we must use a new interface to allocate and map kasan shadow
memory, that also zeroes memory for us.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
arch/arm64/mm/kasan_init.c | 72 ++
zeroed in order to avoid false
positives.
This patch removes our reliance on vmemmap_populate and reuses the
existing kasan page table code, which is already required for creating
the early shadow.
Signed-off-by: Will Deacon <will.dea...@arm.com>
Signed-off-by: Pavel Tatashin <pasha.tata
Remove duplicating code by using common functions
vmemmap_pud_populate and vmemmap_pgd_populate.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com>
Reviewed-by:
h+0x1f/0xbd
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com>
Reviewed-by: Bob Picco <bob.pi...@oracle.com>
Acked-by: Michal Hocko <mho...@suse.c
Therefore, the fix is to switch the above calls.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com>
Reviewed-by: Bob Picco <bob.pi...@oracle.com>
Acked-by:
s.
This means that if deferred struct pages are enabled on systems with
these kinds of holes, Linux would get memory corruptions. I have fixed this issue by
defining a new macro that performs all the necessary operations when we
free the current set of pages.
Signed-off-by: Pavel Tatashin <pasha.tat
() function to resolve this difference.
Therefore, we must use a new interface to allocate and map kasan shadow
memory, that also zeroes memory for us.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
arch/x86/mm/kasan_init_64.c | 75 ++---
BTW, don't we need the same alignments inside the for_each_memblock() loop?
How about change kasan_map_populate() to accept regular VA start, end
address, and convert them internally after aligning to PAGE_SIZE?
Thank you,
Pavel
On Fri, Oct 13, 2017 at 11:54 AM, Pavel Tatashin
<pasha.ta
> Thanks for sharing the .config and tree. It looks like the problem is that
> kimg_shadow_start and kimg_shadow_end are not page-aligned. Whilst I fix
> them up in kasan_map_populate, they remain unaligned when passed to
> kasan_populate_zero_shadow, which confuses the loop termination conditions
Here is a simplified qemu command:
  qemu-system-aarch64 \
    -display none \
    -kernel ./arch/arm64/boot/Image \
    -M virt -cpu cortex-a57 -s -S
In a separate terminal, start the arm64 cross-debugger:
  $ aarch64-unknown-linux-gnu-gdb ./vmlinux
  ...
  Reading symbols from ./vmlinux...done.
  (gdb)
> It shouldn't be difficult to use section mappings with my patch, I just
> don't really see the need to try to optimise TLB pressure when you're
> running with KASAN enabled which already has something like a 3x slowdown
> afaik. If it ends up being a big deal, we can always do that later, but
>
> Do you know what your physical memory layout looks like?
[0.00] Memory: 34960K/131072K available (16316K kernel code,
6716K rwdata, 7996K rodata, 1472K init, 8837K bss, 79728K reserved,
16384K cma-reserved)
[0.00] Virtual kernel memory layout:
[0.00] kasan :
gned. After this modification everything is working. However, I
am not sure whether this is a proper fix.
I feel this patch requires more work, and I am troubled by using
base pages instead of large pages.
Thank you,
Pavel
On Tue, Oct 10, 2017 at 1:41 PM, Pavel Tatashin
<pasha.tatas...@oracle.com
Hi Will,
Ok, I will add your patch at the end of my series.
Thank you,
Pavel
>
> I was thinking that you could just add my patch to the end of your series
> and have the whole lot go up like that. If you want to merge it with your
> patch, I'm fine with that too.
>
> Will
>
> --
> To
I wanted to thank you Michal for spending time and doing the in-depth
reviews of every incremental change. Overall the series is in much
better shape now because of your feedback.
Pavel
On 10/10/2017 10:15 AM, Michal Hocko wrote:
Btw. thanks for your persistence and willingness to go over
Hi Will,
Thank you for doing this work. How would you like to proceed?
- If you are OK with my series being accepted as-is, so your patch can be
added later on top, then I think I need an ack from you for the kasan
changes.
- Otherwise, I can replace: 4267aaf1d279 arm64/kasan: add and use
> Btw. I would add your example from
> http://lkml.kernel.org/r/bcf24369-ac37-cedd-a264-3396fb5cf...@oracle.com
> to the changelog
>
Will add, thank you for your review.
Pavel
end)
Which iterates through reserved && !memory lists, and we zero struct pages
explicitly by calling mm_zero_struct_page().
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.j
https://hastebin.com/fariqimiyu.go
sparc fix no deferred: https://hastebin.com/muhegoheyi.go
sparc fix deferred: https://hastebin.com/xadinobutu.go
Pavel Tatashin (9):
x86/mm: setting fields in deferred pages
sparc64/mm: setting fields in deferred pages
sparc64: simplify vmem
>> I guess we could implement that on arm64 using our current vmemmap_populate
>> logic and an explicit memset.
Hi Will,
I will send out a new patch series with x86/arm64 versions of
kasan_map_populate(), so you could take a look if this is something
that is acceptable.
Thank you,
Pavel
>
> Ok, but I'm still missing why you think that is needed. What would be the
> second page table walker that needs implementing?
>
> I guess we could implement that on arm64 using our current vmemmap_populate
> logic and an explicit memset.
>
Hi Will,
What do you mean by explicit memset()? We
Hi Will,
> We have two table walks even with your patch series applied afaict: one in
> our definition of vmemmap_populate (arch/arm64/mm/mmu.c) and this one
> in the core code.
I meant to say implementing two new page table walkers, not at runtime.
> My worry is that these are actually highly
Hi Will,
In addition to what Michal wrote:
> As an interim step, why not introduce something like
> vmemmap_alloc_block_flags and make the page-table walking opt-out for
> architectures that don't want it? Then we can just pass __GFP_ZERO from
> our vmemmap_populate where necessary and other
03, 2017 at 03:48:46PM +0100, Mark Rutland wrote:
>> On Wed, Sep 20, 2017 at 04:17:11PM -0400, Pavel Tatashin wrote:
>> > During early boot, kasan uses vmemmap_populate() to establish its shadow
>> > memory. But, that interface is intended for struct pages use.
>>
To optimize the performance of struct page initialization,
vmemmap_populate() will no longer zero memory.
Therefore, we must use a new interface to allocate and map kasan shadow
memory, that also zeroes memory for us.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
arch/ar
() function to resolve this difference.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
---
arch/arm64/include/asm/pgtable.h | 3 ++
include/linux/kasan.h| 2 ++
mm/kasan/kasan_init.c| 67
3 files changed, 72 inse
red: https://hastebin.com/ibobeteken.go
sparc base deferred: https://hastebin.com/fariqimiyu.go
sparc fix no deferred: https://hastebin.com/muhegoheyi.go
sparc fix deferred: https://hastebin.com/xadinobutu.go
Pavel Tatashin (10):
x86/mm: setting fields in deferred pages
spar
is
called from zone_sizes_init().
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com>
Reviewed-by: Bob Picco <bob.pi...@oracle.com>
Acked-by: Michal Hocko <mh
by zeroing the memory in parallel
when struct pages are zeroed.
Signed-off-by: Pavel Tatashin <pasha.tatas...@oracle.com>
Reviewed-by: Steven Sistare <steven.sist...@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jor...@oracle.com>
Reviewed-by: Bob Picco <bob.pi...@oracle.co
uct pages explicitly.
The patch involves adding a new memblock iterator:
for_each_resv_unavail_range(i, p_start, p_end)
Which iterates through reserved && !memory lists, and we zero struct pages
explicitly by calling mm_zero_struct_page().
Signed-off-by: Pavel Tatashin <pasha.tatas
iqimiyu.go
sparc fix no deferred: https://hastebin.com/muhegoheyi.go
sparc fix deferred: https://hastebin.com/xadinobutu.go
Pavel Tatashin (12):
x86/mm: setting fields in deferred pages
sparc64/mm: setting fields in deferred pages
mm: deferred_init_memmap improvements
sparc64: simpl
ject, then sure I
can send a new version with kasan_map_populate() functions.
Thank you,
Pasha
On 09/15/2017 04:38 PM, Mark Rutland wrote:
On Thu, Sep 14, 2017 at 09:30:28PM -0400, Pavel Tatashin wrote:
Hi Mark, Thank you for looking at this. We can't do this because page
table is not
Hi Mark,
Thank you for looking at this. We can't do this because the page table
is not set up until cpu_replace_ttbr1() is called. So we can't do
memset() on this memory until then.
Pasha
Copy-paste error; changing the subject header to v8 from v7.
On 09/14/2017 06:35 PM, Pavel Tatashin wrote:
Changelog:
v8 - v7
- Added Acked-by's from Dave Miller for SPARC changes
- Fixed a minor compiling issue on tile architecture reported by kbuild
v7 - v6
- Addressed comments from
arc base no deferred: https://hastebin.com/ibobeteken.go
sparc base deferred: https://hastebin.com/fariqimiyu.go
sparc fix no deferred: https://hastebin.com/muhegoheyi.go
sparc fix deferred: https://hastebin.com/xadinobutu.go
Pavel Tatashin (11):
x86/mm: setting fields in deferre
To optimize the performance of struct page initialization,
vmemmap_populate() will no longer zero memory.
We must explicitly zero the memory that is allocated by vmemmap_populate()
for kasan, as this memory does not go through struct page initialization
path.
Signed-off-by: Pavel Tatashin