e40 Mon Sep 17 00:00:00 2001
From: "Aneesh Kumar K.V"
Date: Tue, 10 May 2016 12:24:34 +0530
Subject: [PATCH 4/4] powerpc/mm/radix: Implement tlb mmu gather flush
efficiently
Now that we track page size in mmu_gather, we can use address based
tlbie format when doing a tlb_flush(). We don't do this if we are
invalidating the full address space.
Signed-off-by: Aneesh Kumar K.V
---
mm/hugetlb.c | 54 +-
1 file changed, 21 insertions(+), 33 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d26162e81fea..741429d01668 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3138,7 +3138,6 @@
page, we will force a tlb flush
and start a new mmu gather.
Signed-off-by: Aneesh Kumar K.V
---
changes from V1:
* Fix build error
arch/arm/include/asm/tlb.h | 11 +++
arch/ia64/include/asm/tlb.h | 13 -
arch/s390/include/asm/tlb.h | 4 ++--
arch/sh/include/asm/tlb.h
This allows an arch which needs to do special handling with respect to
different page sizes when flushing the tlb to implement the same in mmu gather.
Signed-off-by: Aneesh Kumar K.V
---
arch/arm/include/asm/tlb.h | 18 ++
arch/ia64/include/asm/tlb.h | 18 ++
arch/s390
Andrew Morton writes:
> On Thu, 2 Jun 2016 15:09:49 +0530 "Aneesh Kumar K.V"
> wrote:
>
>> Now that we track page size in mmu_gather, we can use address based
>> tlbie format when doing a tlb_flush(). We don't do this if we are
>> invalidating the
This allows an arch which needs to do special handling with respect to
different page sizes when flushing the tlb to implement the same in mmu gather.
Signed-off-by: Aneesh Kumar K.V
---
arch/arm/include/asm/tlb.h | 18 +++
arch/ia64/include/asm/tlb.h | 18 +++
arch/s390/include
Signed-off-by: Aneesh Kumar K.V
---
mm/hugetlb.c | 54 +-
1 file changed, 21 insertions(+), 33 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e4168484f249..8dd91cd5571c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3138,7 +3138,6 @@
page, we will force a tlb flush
and start a new mmu gather.
Signed-off-by: Aneesh Kumar K.V
---
arch/arm/include/asm/tlb.h | 11 +++
arch/ia64/include/asm/tlb.h | 13 -
arch/s390/include/asm/tlb.h | 4 ++--
arch/sh/include/asm/tlb.h | 2 +-
arch/um/include/asm/tlb.h
Now that we track page size in mmu_gather, we can use address based
tlbie format when doing a tlb_flush(). We don't do this if we are
invalidating the full address space.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/tlb-radix.c | 28 +++-
1 file change
Andrew Morton writes:
> On Mon, 30 May 2016 11:14:19 +0530 "Aneesh Kumar K.V"
> wrote:
>
>> For hugetlb like THP (and unlike regular page), we do tlb flush after
>> dropping ptl. Because of the above, we don't need to track force_flush
>>
This enables us to do VM_WARN(condition, "warn message");
Signed-off-by: Aneesh Kumar K.V
---
include/linux/mmdebug.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index de7be78c6f0e..451a811f48f2 100644
--- a/include/linux
We don't need to do this check always. The idea here is to capture
wrong usage of find_linux_pte_or_hugepte, and we can do that by
occasionally running with DEBUG_VM enabled.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/pgtable.h | 6 ++
1 file changed, 2 insertions(
Hillf Danton writes:
>> >> @@ -1202,7 +1205,12 @@ again:
>> >> if (force_flush) {
>> >> force_flush = 0;
>> >> tlb_flush_mmu_free(tlb);
>> >> -
>> >> + if (pending_page) {
>> >> + /* remove the page with new size */
>> >> + __tlb_adjus
Hillf Danton writes:
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 15322b73636b..a01db5bc756b 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -292,23 +292,24 @@ void tlb_finish_mmu(struct mmu_gather *tlb, unsigned
>> long start, unsigned long e
>> * handling the additional races i
This allows an arch which needs to do special handling with respect to
different page sizes when flushing the tlb to implement the same in mmu gather.
Signed-off-by: Aneesh Kumar K.V
---
arch/arm/include/asm/tlb.h | 18 +++
arch/ia64/include/asm/tlb.h | 18 +++
arch/s390/include
If the mmu gather flush resulted in a page table free, force a RIC=2 flush
with IS=1. Otherwise do a range flush with IS=0 and RIC=0.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/32/pgalloc.h | 1 -
arch/powerpc/include/asm/book3s/64/pgalloc.h | 16 -
arch
page, we will force a tlb flush
and start a new mmu gather.
Signed-off-by: Aneesh Kumar K.V
---
arch/arm/include/asm/tlb.h | 11 +++
arch/ia64/include/asm/tlb.h | 13 -
arch/s390/include/asm/tlb.h | 4 ++--
arch/sh/include/asm/tlb.h | 2 +-
arch/um/include/asm/tlb.h
Signed-off-by: Aneesh Kumar K.V
---
mm/hugetlb.c | 54 +-
1 file changed, 21 insertions(+), 33 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e4168484f249..8dd91cd5571c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3138,7 +3138,6 @@
Valentin Rothberg writes:
> s/MMU_STD_64/STD_MMU_64/
>
> Fixes: 11ffc1cfa4c2 ("powerpc/mm/radix: Use STD_MMU_64 to properly
> isolate hash related code")
> Signed-off-by: Valentin Rothberg
Reviewed-by: Aneesh Kumar K.V
> ---
>
> I onl
Stephen Rothwell writes:
> Hi Andrew,
>
> Today's linux-next merge of the akpm-current tree got a conflict in:
>
> arch/powerpc/include/asm/book3s/64/pgtable.h
>
> between commit:
>
> dbaba7a16b7b ("powerpc/mm: THP is only available on hash64 as of now")
>
> from the powerpc tree and commit:
")
>
> from the powerpc tree.
>
> I applied this fix patch for today (hopefully this is still initialised
> early enough):
>
> From: Stephen Rothwell
> Date: Mon, 2 May 2016 18:25:42 +1000
> Subject: [PATCH] mm: make optimistic check for swapin readahead fix
>
> S
Michal Hocko writes:
> [ text/plain ]
> On Tue 05-04-16 12:05:47, Sukadev Bhattiprolu wrote:
> [...]
>> diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
>> index d991b9e..081f679 100644
>> --- a/arch/powerpc/mm/hugetlbpage.c
>> +++ b/arch/powerpc/mm/hugetlbpage.c
>> @@ -
ed on linux-next after this
> series is merged to linux-next.
>
I searched with ZONE_HIGHMEM and AFAICS this series does handle all the
highmem paths.
For the series:
Reviewed-by: Aneesh Kumar K.V
-aneesh
Jerome Glisse writes:
> [ text/plain ]
> On Wed, Mar 23, 2016 at 12:22:23PM +0530, Aneesh Kumar K.V wrote:
>> Jérôme Glisse writes:
>>
>> > [ text/plain ]
>> > This patch add helper for device page fault. Thus helpers will fill
>> > the m
Jérôme Glisse writes:
> [ text/plain ]
> This patch add helper for device page fault. Thus helpers will fill
> the mirror page table using the CPU page table and synchronizing
> with any update to CPU page table.
>
> Changed since v1:
> - Add comment about directory lock.
>
> Changed since v2:
Guenter Roeck writes:
> [ text/plain ]
> Hi,
>
> Your commit 458aa76d132dc1 ("mm/thp/migration: switch from flush_tlb_range
> to flush_pmd_tlb_range") causes a build error when building
> arcv2:vdk_hs38_smp_defconfig.
>
> include/asm-generic/pgtable.h:799:45: note: in expansion of macro ‘BUILD_BU
"Kirill A. Shutemov" writes:
> [ text/plain ]
> On Mon, Mar 21, 2016 at 10:03:29AM +0530, Aneesh Kumar K.V wrote:
>> "Kirill A. Shutemov" writes:
>>
>> > [ text/plain ]
>> > On Fri, Mar 18, 2016 at 07:23:41PM +0530, An
Jérôme Glisse writes:
> +
> + /* Try to fail early on. */
> + if (unlikely(anon_vma_prepare(vma)))
> + return -ENOMEM;
> +
What is this about?
> +retry:
> + lru_add_drain();
> + tlb_gather_mmu(&tlb, mm, range.start, range.end);
> + update_hiwater_rss(mm);
> +
Jerome Glisse writes:
> [ text/plain ]
> On Mon, Mar 21, 2016 at 04:57:32PM +0530, Aneesh Kumar K.V wrote:
>> Jérôme Glisse writes:
>
> [...]
>
>> > +
>> > +#ifdef CONFIG_HMM
>> > +/* mm_hmm_migrate_back() - lock HMM CPU page table entry and all
Jérôme Glisse writes:
> [ text/plain ]
> To migrate memory back we first need to lock HMM special CPU page
> table entry so we know no one else might try to migrate those entry
> back. Helper also allocate new page where data will be copied back
> from the device. Then we can proceed with the dev
"Kirill A. Shutemov" writes:
> [ text/plain ]
> On Fri, Mar 18, 2016 at 07:23:41PM +0530, Aneesh Kumar K.V wrote:
>> "Kirill A. Shutemov" writes:
>>
>> > [ text/plain ]
>> > split_huge_pmd() for file mappings (and DAX too) is implemente
"Kirill A. Shutemov" writes:
> [ text/plain ]
> Naive approach: on mapping/unmapping the page as compound we update
> ->_mapcount on each 4k page. That's not efficient, but it's not obvious
> how we can optimize this. We can look into optimization later.
>
> PG_double_map optimization doesn't wor
"Kirill A. Shutemov" writes:
> [ text/plain ]
> split_huge_pmd() for file mappings (and DAX too) is implemented by just
> clearing pmd entry as we can re-fill this area from page cache on pte
> level later.
>
> This means we don't need deposit page tables when file THP is mapped.
> Therefore we s
Anshuman Khandual writes:
> [ text/plain ]
> This adds two tests for memory page migration. One for normal page
> migration which works for both 4K or 64K base page size kernel and
> the other one is for huge page migration which works only on 64K
> base page sized 16MB huge page implemention at
Anshuman Khandual writes:
> [ text/plain ]
> This enables ARCH_WANT_GENERAL_HUGETLB for BOOK3S 64K in Kconfig.
> It also implements a new function 'pte_huge' which is required by
> function 'huge_pte_alloc' from generic VM. Existing BOOK3S 64K
> specific functions 'huge_pte_alloc' and 'huge_pte_o
Anshuman Khandual writes:
> [ text/plain ]
> From: root
>
> Currently the 'huge_pte_alloc' function has two versions, one for the
> BOOK3S and the other one for the BOOK3E platforms. This change splits
> the BOOK3S version into two parts, one for the 4K page size based
> implementation and the o
Christian Borntraeger writes:
> On 02/24/2016 11:41 AM, Will Deacon wrote:
>> On Wed, Feb 24, 2016 at 11:16:34AM +0100, Christian Borntraeger wrote:
>>> On 02/23/2016 09:22 PM, Will Deacon wrote:
On Tue, Feb 23, 2016 at 10:33:45PM +0300, Kirill A. Shutemov wrote:
> On Tue, Feb 23, 2016 a
Balbir Singh writes:
>> Now we can't depend for mm_cpumask, a parallel find_linux_pte_hugepte
>> can happen outside that. Now i had a variant for kick_all_cpus_sync that
>> ignored idle cpus. But then that needs more verification.
>>
>> http://article.gmane.org/gmane.linux.ports.ppc.embedded/811
Balbir Singh writes:
> On Tue, 2016-02-09 at 06:50 +0530, Aneesh Kumar K.V wrote:
>>
>> Also make sure we wait for irq disable section in other cpus to finish
>> before flipping a huge pte entry with a regular pmd entry. Code paths
>> like find_linux_pte_or_hugepte d
Michael Ellerman writes:
> On Tue, 2016-09-02 at 01:20:31 UTC, "Aneesh Kumar K.V" wrote:
>> With ppc64 we use the deposited pgtable_t to store the hash pte slot
>> information. We should not withdraw the deposited pgtable_t without
>> marking the pmd none. This e
Gerald Schaefer writes:
> On Fri, 12 Feb 2016 09:34:33 +0530
> "Aneesh Kumar K.V" wrote:
>
>> Gerald Schaefer writes:
>>
>> > On Thu, 11 Feb 2016 21:09:42 +0200
>> > "Kirill A. Shutemov" wrote:
>> >
>> >
Gerald Schaefer writes:
> On Thu, 11 Feb 2016 21:09:42 +0200
> "Kirill A. Shutemov" wrote:
>
>> On Thu, Feb 11, 2016 at 07:22:23PM +0100, Gerald Schaefer wrote:
>> > Hi,
>> >
>> > Sebastian Ott reported random kernel crashes beginning with v4.5-rc1 and
>> > he also bisected this to commit 61f5d
"Aneesh Kumar K.V" writes:
> We remove one instace of flush_tlb_range here. That was added by
> f714f4f20e59ea6eea264a86b9a51fd51b88fc54 ("mm: numa: call MMU notifiers
> on THP migration"). But the pmdp_huge_clear_flush_notify should have
> done the require flus
We remove one instance of flush_tlb_range here. That was added by
f714f4f20e59ea6eea264a86b9a51fd51b88fc54 ("mm: numa: call MMU notifiers
on THP migration"). But the pmdp_huge_clear_flush_notify should have
done the required flush for us. Hence remove the extra flush.
Signed-off-by: An
HPT.
>
> To support a PAPR extension allowing resizing of the HPT, we're going to
> want the memory size -> HPT size logic elsewhere, so split it out into a
> helper function.
>
Reviewed-by: Aneesh Kumar K.V
> Signed-off-by: David Gibson
> ---
> arch/powerpc/inclu
etter
> to handle this non-fatally
>* An error message is also printed, so the failure won't be silent
>* As above a failure could cause a partially mapped region, we now
> clean this up.
>
Reviewed-by: Aneesh Kumar K.V
> Signed-off-by: David Gibson
apping() we BUG_ON() any error. We change that to
> just a WARN_ON() in the case of ENOENT, since failing to remove a
> mapping that wasn't there in the first place probably shouldn't be
> fatal.
>
Reviewed-by: Aneesh Kumar K.V
> Signed-off-by: Dav
have already
>BUG()ed anyway. Put a WARN_ON() here, in lieu of a printk() since this
>really shouldn't be happening.
>
Reviewed-by: Aneesh Kumar K.V
> Signed-off-by: David Gibson
> ---
> arch/powerpc/mm/hash_utils_64.c | 13 ++---
> 1 file chan
We remove one instance of flush_tlb_range here. That was added by
f714f4f20e59ea6eea264a86b9a51fd51b88fc54 ("mm: numa: call MMU notifiers
on THP migration"). But the pmdp_huge_clear_flush_notify should have
done the required flush for us. Hence remove the extra flush.
Signed-off-by: An
touching the generic pmdp_collapse_flush()
> by defining a ARC version, but that defeats the purpose of generic
> version, plus sementically this is the right thing to do.
>
> Fixes STAR 9000961194: LMBench on AXS103 triggering duplicate TLB
> exceptions with super pages
>
> Cc: Kir
Andrew Morton writes:
> On Tue, 9 Feb 2016 21:41:44 +0530 "Aneesh Kumar K.V"
> wrote:
>
>> With next generation power processor, we are having a new mmu model
>> [1] that require us to maintain a different linux page table format.
>>
>> Inord
"Kirill A. Shutemov" writes:
> On Tue, Feb 09, 2016 at 09:41:44PM +0530, Aneesh Kumar K.V wrote:
>> With next generation power processor, we are having a new mmu model
>> [1] that require us to maintain a different linux page table format.
>>
>> Inorder to
runtime. With the new MMU (radix MMU) added, we will
have two different pmd hugepage sizes: 16MB for the hash model and 2MB for
the radix model. Hence make the HPAGE_PMD related values variables.
[1] http://ibm.biz/power-isa3 (Needs registration).
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/pgtable_64
depend on irq disable to get
a stable pte_t pointer. A parallel thp split needs to make sure we
don't convert a pmd pte to a regular pmd entry without waiting for the
irq disable section to finish.
Acked-by: Kirill A. Shutemov
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book
"Kirill A. Shutemov" writes:
> On Mon, Feb 08, 2016 at 11:44:22AM +0530, Aneesh Kumar K.V wrote:
>> With ppc64 we use the deposited pgtable_t to store the hash pte slot
>> information. We should not withdraw the deposited pgtable_t without
>> marking the pmd none.
runtime. With the new MMU (radix MMU) added, we will
have two different pmd hugepage sizes: 16MB for the hash model and 2MB for
the radix model. Hence make the HPAGE_PMD related values variables.
[1] http://ibm.biz/power-isa3 (Needs registration).
Signed-off-by: Aneesh Kumar K.V
---
arch/arm/include/asm
).
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/hash.h | 3 +++
arch/powerpc/include/asm/mman.h | 6 --
arch/powerpc/mm/hash_utils_64.c | 19 +++
include/linux/mman.h | 4
mm/mmap.c
r pmd entry without waiting for the
irq disable section to finish.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 4
arch/powerpc/mm/pgtable_64.c | 35 +++-
include/asm-generic/pgtable.h| 8 ++
Rik van Riel writes:
> Hi,
>
> I am trying to gauge interest in discussing VM containers at the LSF/MM
> summit this year. Projects like ClearLinux, Qubes, and others are all
> trying to use virtual machines as better isolated containers.
>
> That changes some of the goals the memory management s
akpm-current tree
> version of the second and then I applied the following merge fix patch:
>
> From: Stephen Rothwell
> Date: Tue, 15 Dec 2015 16:50:42 +1100
> Subject: [PATCH] merge fix for "powerpc, thp: remove infrastructure for
> handling splitting PMDs"
>
>
Michael Ellerman writes:
> On Wed, 2015-10-21 at 16:59 +1100, Stephen Rothwell wrote:
>> Hi Andrew,
>>
>> After merging the akpm-current tree, today's linux-next build (powerpc
>> allnoconfig) failed like this:
>
>> arch/powerpc/include/asm/pgtable.h: In function 'pte_pgprot':
>> arch/powerpc/in
age is backed in memory, and a new _PAGE_SWP_SOFT_DIRTY bit when
> the page is swapped out.
>
> The _PAGE_SWP_SOFT_DIRTY bit is dynamically put after the swap type
> in the swap pte. A check is added to ensure that the bit is not
> overwritten by _PAGE_HPTEFLAGS.
>
> Signed-of
Laurent Dufour writes:
> Don't build clear_soft_dirty_pmd() if the transparent huge pages are
> not enabled.
>
> Signed-off-by: Laurent Dufour
> CC: Aneesh Kumar K.V
Reviewed-by: Aneesh Kumar K.V
> ---
> fs/proc/task_mmu.c | 14 +++---
> 1 file changed,
e pte must
> be cleared before being modified.
>
> Signed-off-by: Laurent Dufour
> CC: Aneesh Kumar K.V
Reviewed-by: Aneesh Kumar K.V
> ---
> fs/proc/task_mmu.c | 7 ---
> 1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/fs/proc/task_mmu.c b/fs/
Andrew Morton writes:
> On Fri, 16 Oct 2015 14:07:05 +0200 Laurent Dufour
> wrote:
>
>> This series is enabling the software memory dirty tracking in the
>> kernel for powerpc. This is the follow up of the commit 0f8975ec4db2
>> ("mm: soft-dirty bits for user memory changes tracking") which
>>
Dongsheng Wang writes:
> From: Wang Dongsheng
>
> This issue caused on 'commit 990486c8af04 ("strscpy: zero any trailing
> garbage bytes in the destination")'.
>
> zero_bytemask is not implemented on PowerPC. So copy the zero_bytemask
> of BIG_ENDIAN implementation from include/asm-generic/word-
Andreas Gruenbacher writes:
> From: "Aneesh Kumar K.V"
>
> Support the richacl permission model in ext4. The richacls are stored
> in "system.richacl" xattrs. Richacls need to be enabled by tune2fs or
> at file system create time.
>
Signed-off-b
Andreas Gruenbacher writes:
> From: "Aneesh Kumar K.V"
>
> This feature flag selects richacl instead of posix acl support on the
> file system. In addition, the "acl" mount option is needed for enabling
> either of the two kinds of acls.
>
> Signed-off
Andreas Gruenbacher writes:
> The generic_{get,set,remove}xattr inode operations use the xattr name prefix
> to
> decide which of the defined xattr handlers to call, then call the appropriate
> handler's get or set operation. The name suffix is passed to the get or set
> operations, the prefix
Reviewed-by: Andrey Ryabinin
Signed-off-by: Aneesh Kumar K.V
---
mm/kasan/report.c | 12 ++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index c5367089703c..7833f074ede8 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -173,12 +173,10
to generic functions.
Reviewed-by: Andrey Ryabinin
Signed-off-by: Aneesh Kumar K.V
---
mm/kasan/report.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index d269f2087faf..c5367089703c 100644
--- a/mm/kasan/report.c
+++ b/mm/kasa
The function only disables/enables reporting. In a later patch
we will be adding a kasan early enable/disable. Rename kasan_enabled
to properly reflect its function.
Reviewed-by: Andrey Ryabinin
Signed-off-by: Aneesh Kumar K.V
---
mm/kasan/kasan.h | 2 +-
mm/kasan/report.c | 2 +-
2 files
Use is_module_address instead
Signed-off-by: Aneesh Kumar K.V
---
mm/kasan/report.c | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 6c3f82b0240b..d269f2087faf 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -22,6
The function only disables/enables reporting. In a later patch
we will be adding a kasan early enable/disable. Rename kasan_enabled
to properly reflect its function.
Reviewed-by: Andrey Ryabinin
Signed-off-by: Aneesh Kumar K.V
---
mm/kasan/kasan.h | 2 +-
mm/kasan/report.c | 2 +-
2 files
to generic functions.
Signed-off-by: Aneesh Kumar K.V
---
mm/kasan/report.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 01d2efec8ea4..440bda3a3ecd 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -164,14 +164,20
Use is_module_text_address instead
Signed-off-by: Aneesh Kumar K.V
---
mm/kasan/report.c | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 6c3f82b0240b..01d2efec8ea4 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
Signed-off-by: Aneesh Kumar K.V
---
mm/kasan/report.c | 12 ++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 440bda3a3ecd..8c409b1664c8 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -173,12 +173,10 @@ static void
Andrey Ryabinin writes:
> 2015-08-26 11:26 GMT+03:00 Aneesh Kumar K.V :
>> When we end up calling kasan_report in real mode, our shadow mapping
>> for even a spinlock variable will show poisoned.
>
> Generally I agree with this patch. We should disable reports when we
> pr
Andrey Ryabinin writes:
> 2015-08-26 11:26 GMT+03:00 Aneesh Kumar K.V :
>> Some of the archs, may find it difficult to support inline KASan
>> mode. Add HAVE_ARCH_KASAN_INLINE so that we can disable inline
>> support at config time.
>>
>> Signed-off-by: Anee
Andrey Ryabinin writes:
> 2015-08-26 11:26 GMT+03:00 Aneesh Kumar K.V :
>> We add enable/disable callbacks in this patch which architecture
>> can implemement. We will use this in the later patches for architecture
>> like ppc64, that cannot have early zero page kasan
Andrey Ryabinin writes:
> 2015-08-26 11:26 GMT+03:00 Aneesh Kumar K.V :
>> Conditionalize the check using #ifdef
>>
>> Signed-off-by: Aneesh Kumar K.V
>> ---
>> mm/kasan/report.c | 11 ---
>> 1 file changed, 8 insertions(+), 3 deletions(-)
&g
Andrey Ryabinin writes:
> 2015-08-26 11:26 GMT+03:00 Aneesh Kumar K.V :
>> Hi,
>>
>> This patchset implements kernel address sanitizer for ppc64.
>> Since ppc64 virtual address range is divided into different regions,
>> we can't have one contigous area f
I missed cherry-picking the updated version of this patch before sending
the series out.
commit aeb324e09d95c189eda4ce03790da94b535d1dfc
Author: Aneesh Kumar K.V
Date: Fri Aug 14 12:28:58 2015 +0530
kasan: Don't use kasan shadow pointer in generic functions
We can't u
.
Known issues:
* Kasan is not yet enabled for arch/powerpc/kvm
* kexec hang
* outline stack and global support
Once we fix the kexec hang, we can look at merging ppc64 patch.
IMHO kasan changes can be reviewed/merged earlier
Aneesh Kumar K.V (10):
powerpc/mm: Add virt_to_pfn and use this instea
ing VMALLOC and VMEMAP area. Kasan is not
tracking both regions as of now.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/kasan.h | 74
arch/powerpc/include/asm/pgtable-ppc64.h | 1 +
arch/powerpc/include/asm/ppc_asm.h | 10 +
arch/
.
Signed-off-by: Aneesh Kumar K.V
---
mm/kasan/kasan.c | 9 +
mm/kasan/kasan.h | 15 +++
2 files changed, 24 insertions(+)
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 7b28e9cdf1c7..e4d33afd0eaf 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -43,6 +43,9
want to do this we will have to have a kasan-internal implementation
of print_hex_dump for which we will disable address sanitizer
operation.
Signed-off-by: Aneesh Kumar K.V
---
mm/kasan/report.c | 6 --
1 file changed, 6 deletions(-)
diff --git a/mm/kasan/report.c b/mm/kasan/repo
Now that we have two features, KASAN and KASAN_INLINE, add a new feature
support file for the same.
Signed-off-by: Aneesh Kumar K.V
---
.../debug/KASAN/KASAN_INLINE/arch-support.txt | 40 ++
.../KASAN/{ => KASAN_OUTLINE}/arch-support.txt | 0
2 files changed,
Conditionalize the check using #ifdef
Signed-off-by: Aneesh Kumar K.V
---
mm/kasan/report.c | 11 ---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index e07c94fbd0ac..71ce7548d914 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan
This adds a helper virt_to_pfn and removes the open-coded usage of the
same.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/page.h | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index
Signed-off-by: Aneesh Kumar K.V
---
mm/kasan/report.c | 10 ++
1 file changed, 10 insertions(+)
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 79fbc5d14bd2..82b41eb83e43 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -185,6 +185,10 @@ void kasan_report_error(struct kasan_access_info *info
Some of the archs may find it difficult to support inline KASan
mode. Add HAVE_ARCH_KASAN_INLINE so that we can disable inline
support at config time.
Signed-off-by: Aneesh Kumar K.V
---
arch/x86/Kconfig | 1 +
lib/Kconfig.kasan | 2 ++
scripts/Makefile.kasan | 28
Some archs may want to provide kasan shadow memory as a constant
offset from the address. Such archs, even though they cannot use inline
kasan support, can work with out-of-line kasan support.
Signed-off-by: Aneesh Kumar K.V
---
include/linux/kasan.h | 3 +++
mm/kasan/kasan.h | 3 +++
2 files
The function only disables/enables reporting. In a later patch
we will be adding a kasan early enable/disable. Rename kasan_enabled
to properly reflect its function.
Signed-off-by: Aneesh Kumar K.V
---
mm/kasan/kasan.h | 2 +-
mm/kasan/report.c | 2 +-
2 files changed, 2 insertions(+), 2
Andrey Ryabinin writes:
> Introduce generic kasan_populate_zero_shadow(start, end).
> This function maps kasan_zero_page to the [start, end] addresses.
>
> In follow on patches it will be used for ARMv8 (and maybe other
> architectures) and will replace x86_64 specific populate_zero_shadow().
>
>
ill do IPI as
> needed for fast_gup.
>
Reviewed-by: Aneesh Kumar K.V
The patchset drastically lower complexity of get_page()/put_page()
> codepaths. I encourage people look on this code before-and-after to
> justify time budget on reviewing this patchset.
>
Tested this series on ppc64. Please feel free to add to the series
Tested-by: Aneesh Kumar K.V
-aneesh
Without this we end up using the previous name of the compressor
in the loop in unpack_rootfs. For example, we get errors like
"compression method gzip not configured" even when we have
CONFIG_DECOMPRESS_GZIP enabled.
Signed-off-by: Aneesh Kumar K.V
---
lib/decompress.c | 5 -
1 fi
the
> following 2-node machine where half memory on one node was occupied to show
> the difference.
>
>
.
> Without -p parameter, hugepage restriction to CPU-local node works as before.
>
> Fixes: 077fcf116c8c ("mm/thp: allocate transparent hugepages on local
j.gli...@gmail.com writes:
> From: Jérôme Glisse
>
> This patch only introduce core HMM functions for registering a new
> mirror and stopping a mirror as well as HMM device registering and
> unregistering.
>
> The lifecycle of HMM object is handled differently then the one of
> mmu_notifier becau