Re: [PATCH] arm64: kvm: use -fno-jump-tables with clang

2018-05-23 Thread Andrey Konovalov
On Wed, May 23, 2018 at 7:47 PM, Nick Desaulniers
<ndesaulni...@google.com> wrote:
> On Wed, May 23, 2018 at 4:54 AM Andrey Konovalov <andreyk...@google.com>
> wrote:
>> On Tue, May 22, 2018 at 8:28 PM, Nick Desaulniers
>> <ndesaulni...@google.com> wrote:
>> > On Fri, May 18, 2018 at 11:13 AM Marc Zyngier <marc.zyng...@arm.com> wrote:
>> >> > - you have checked that with a released version of the compiler, you
>> >
>> > On Tue, May 22, 2018 at 10:58 AM Andrey Konovalov <andreyk...@google.com> wrote:
>> >> Tested-by: Andrey Konovalov <andreyk...@google.com>
>> >
>> > Hi Andrey,
>> > Thank you very much for this report.  Can you confirm as well the version
>> > of Clang that you were using?
>
>> I'm on 86852a40 ("[InstCombine] Calloc-ed strings optimizations").
>
>> > If it's not a binary release (built from
>> > source), would you be able to re-confirm with a released version?
>
>> Sure. Which release should I try and how do I get it?
>
> Maybe clang-6.0 as the latest release (though I suspect you may run into
> the recently-fixed-in-clang-7.0 "S" constraint bug that you reported).

Yes, and also into the "support for "r" prefixed variables in ARM
inline assembly" issue.

Tested on upstream commit ded4c39e (before both issues were
introduced) with the -fno-jump-tables patch applied, using clang 6.0.

Same result, the patch helps.

>
> I've had luck on debian based distributions installing from:
> http://apt.llvm.org/
>
> (These can be added to your /etc/apt/sources.list, then a `sudo apt update`
> and `sudo apt install clang-6.0`)
>
> If you're not able to add remote repositories (some employers block this ;)
> ), then you can find releases for download for a few different platforms:
> https://releases.llvm.org/
>
> For example, a quick:
> $ mkdir llvm-6.0
> $ cd !$
> $ wget https://releases.llvm.org/6.0.0/clang+llvm-6.0.0-x86_64-linux-gnu-debian8.tar.xz
> $ tar xvf clang+llvm-6.0.0-x86_64-linux-gnu-debian8.tar.xz
> $ ./clang+llvm-6.0.0-x86_64-linux-gnu-debian8/bin/clang-6.0 -v
> clang version 6.0.0 (tags/RELEASE_600/final)
> Target: x86_64-unknown-linux-gnu
> Thread model: posix
> InstalledDir: .../llvm-6.0/./clang+llvm-6.0.0-x86_64-linux-gnu-debian8/bin
> Found candidate GCC installation: ...
> Candidate multilib: .;@m64
> Selected multilib: .;@m64
>
> Seems to work.
> --
> Thanks,
> ~Nick Desaulniers


Re: [PATCH] arm64: kvm: use -fno-jump-tables with clang

2018-05-23 Thread Andrey Konovalov
On Tue, May 22, 2018 at 8:28 PM, Nick Desaulniers
<ndesaulni...@google.com> wrote:
> On Fri, May 18, 2018 at 11:13 AM Marc Zyngier <marc.zyng...@arm.com> wrote:
>> > - you have checked that with a released version of the compiler, you
>
> On Tue, May 22, 2018 at 10:58 AM Andrey Konovalov <andreyk...@google.com>
> wrote:
>> Tested-by: Andrey Konovalov <andreyk...@google.com>
>
> Hi Andrey,
> Thank you very much for this report.  Can you confirm as well the version
> of Clang that you were using?

I'm on 86852a40 ("[InstCombine] Calloc-ed strings optimizations").

> If it's not a binary release (built from
> source), would you be able to re-confirm with a released version?

Sure. Which release should I try and how do I get it?


Re: [PATCH] arm64: kvm: use -fno-jump-tables with clang

2018-05-22 Thread Andrey Konovalov
On Sat, May 19, 2018 at 12:44 PM, Marc Zyngier <marc.zyng...@arm.com> wrote:
> That would definitely be the right thing to do. Make sure you (or
> Andrey) test with the latest released mainline kernel (4.16 for now)
> or (even better) the tip of Linus' tree.

Hi!

I can confirm that after applying this patch onto a 4.17-rc4 kernel, the
Odroid C2 board that I have boots (it doesn't boot without the patch).

This is the result of running KVM tests that I was able to find [1]:

root@odroid64:/home/odroid/kvm-unit-tests# ./run_tests.sh
PASS selftest-setup (2 tests)
PASS selftest-vectors-kernel (2 tests)
PASS selftest-vectors-user (2 tests)
PASS selftest-smp (5 tests)
PASS pci-test (1 tests)
FAIL pmu (3 tests, 3 unexpected failures)
PASS gicv2-ipi (3 tests)
SKIP gicv3-ipi (qemu-system-aarch64: Initialization of device
kvm-arm-gicv3 failed: error creating in-kernel VGIC: No such device)
PASS gicv2-active (1 tests)
SKIP gicv3-active (qemu-system-aarch64: Initialization of device
kvm-arm-gicv3 failed: error creating in-kernel VGIC: No such device)
PASS psci (4 tests)
PASS timer (8 tests)

Here is the result of running the same tests on GCC compiled kernel
(looks the same):

root@odroid64:/home/odroid/kvm-unit-tests# ./run_tests.sh
PASS selftest-setup (2 tests)
PASS selftest-vectors-kernel (2 tests)
PASS selftest-vectors-user (2 tests)
PASS selftest-smp (5 tests)
PASS pci-test (1 tests)
FAIL pmu (3 tests, 3 unexpected failures)
PASS gicv2-ipi (3 tests)
SKIP gicv3-ipi (qemu-system-aarch64: Initialization of device
kvm-arm-gicv3 failed: error creating in-kernel VGIC: No such device)
PASS gicv2-active (1 tests)
SKIP gicv3-active (qemu-system-aarch64: Initialization of device
kvm-arm-gicv3 failed: error creating in-kernel VGIC: No such device)
PASS psci (4 tests)
PASS timer (8 tests)

Tested-by: Andrey Konovalov <andreyk...@google.com>

Thanks!

[1] https://www.linux-kvm.org/page/KVM-unit-tests


Re: Clang arm64 build is broken

2018-05-22 Thread Andrey Konovalov
On Mon, May 14, 2018 at 6:24 PM, Nick Desaulniers
<ndesaulni...@google.com> wrote:
> On Fri, Apr 20, 2018 at 7:59 AM Andrey Konovalov <andreyk...@google.com>
> wrote:
>> On Fri, Apr 20, 2018 at 10:13 AM, Marc Zyngier <marc.zyng...@arm.com> wrote:
>> >> The issue is that
>> >> clang doesn't know about the "S" asm constraint. I reported this to
>> >> clang [2], and hopefully this will get fixed. In the meantime, would
>> >> it be possible to work around using the "S" constraint in the kernel?
>> >
>> > I have no idea, I've never used clang to build the kernel. Clang isn't
>> > really supported to build the arm64 kernel anyway (as you mention
>> > below), and working around clang deficiencies would mean that we live
>> > with the workaround forever. I'd rather enable clang once it is at
>> > feature parity with GCC.
>
>> The fact that there are some existing issues with building arm64
>> kernel with clang doesn't sound like a good justification for adding
>> new issues :)
>
>> However in this case I do believe that this is more of a bug in clang
>> that should be fixed.
>
> Just to follow up with this thread;
>
> Support for "S" constraints is being (re-)added to Clang in:
> https://reviews.llvm.org/D46745

Hi Nick!

I can confirm that the latest clang (which includes this patch) is
able to build the kernel with CONFIG_KVM enabled.

Thanks!


[RFC PATCH v3 14/15] khwasan, mm, arm64: tag non slab memory allocated via pagealloc

2018-04-20 Thread Andrey Konovalov
KHWASAN doesn't check memory accesses through pointers tagged with 0xff.
When page_address is used to get a pointer to memory that corresponds to
some page, the tag of the resulting pointer gets set to 0xff, even though
the allocated memory might have been tagged differently.

For slab pages it's impossible to recover the correct tag to return from
page_address, since the page might contain multiple slab objects tagged
with different values, and we can't know in advance which one of them is
going to get accessed. For non slab pages however, we can recover the tag
in page_address, since the whole page was marked with the same tag.

This patch adds tagging to non slab memory allocated with pagealloc. To
set the tag of the pointer returned from page_address, the tag gets stored
to page->flags when the memory gets allocated.
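
For illustration, a minimal sketch of the intended round-trip (not part
of the patch; it only uses the page_kasan_tag*() helpers introduced
below, and the tag value is hypothetical):

static void tag_roundtrip_example(struct page *page)
{
        u8 tag = 0xab;                  /* hypothetical allocation tag */

        /* The allocator stores the tag in page->flags... */
        page_kasan_tag_set(page, tag);

        /* ...and page_to_virt()/page_address() later re-applies it for
         * non-slab pages, so the returned pointer's tag matches the
         * memory's shadow tag again. */
        WARN_ON(page_kasan_tag(page) != tag);
}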

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 arch/arm64/include/asm/memory.h   | 11 +++
 include/linux/mm.h| 29 +
 include/linux/page-flags-layout.h | 10 ++
 mm/cma.c  |  1 +
 mm/kasan/common.c | 14 --
 mm/page_alloc.c   |  1 +
 6 files changed, 64 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index f206273469b5..9ec78a44c5ff 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -304,7 +304,18 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define __virt_to_pgoff(kaddr) (((u64)(kaddr) & ~PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
 #define __page_to_voff(kaddr)  (((u64)(kaddr) & ~VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
 
 
+#ifndef CONFIG_KASAN_HW
 #define page_to_virt(page) ((void *)((__page_to_voff(page)) | PAGE_OFFSET))
+#else
+#define page_to_virt(page) ({  \
+   unsigned long __addr =  \
+   ((__page_to_voff(page)) | PAGE_OFFSET); \
+   if (!PageSlab((struct page *)page)) \
+   __addr = KASAN_SET_TAG(__addr, page_kasan_tag(page));   \
+   ((void *)__addr);   \
+})
+#endif
+
 #define virt_to_page(vaddr) ((struct page *)((__virt_to_pgoff(vaddr)) | VMEMMAP_START))
 
 #define _virt_addr_valid(kaddr) pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1ac1f06a4be6..d6d596824803 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -770,6 +770,7 @@ int finish_mkwrite_fault(struct vm_fault *vmf);
 #define NODES_PGOFF        (SECTIONS_PGOFF - NODES_WIDTH)
 #define ZONES_PGOFF        (NODES_PGOFF - ZONES_WIDTH)
 #define LAST_CPUPID_PGOFF  (ZONES_PGOFF - LAST_CPUPID_WIDTH)
+#define KASAN_TAG_PGOFF    (LAST_CPUPID_PGOFF - KASAN_TAG_WIDTH)
 
 /*
  * Define the bit shifts to access each section.  For non-existent
@@ -780,6 +781,7 @@ int finish_mkwrite_fault(struct vm_fault *vmf);
 #define NODES_PGSHIFT  (NODES_PGOFF * (NODES_WIDTH != 0))
 #define ZONES_PGSHIFT  (ZONES_PGOFF * (ZONES_WIDTH != 0))
 #define LAST_CPUPID_PGSHIFT  (LAST_CPUPID_PGOFF * (LAST_CPUPID_WIDTH != 0))
+#define KASAN_TAG_PGSHIFT  (KASAN_TAG_PGOFF * (KASAN_TAG_WIDTH != 0))
 
 /* NODE:ZONE or SECTION:ZONE is used to ID a zone for the buddy allocator */
 #ifdef NODE_NOT_IN_PAGE_FLAGS
@@ -802,6 +804,7 @@ int finish_mkwrite_fault(struct vm_fault *vmf);
 #define NODES_MASK ((1UL << NODES_WIDTH) - 1)
 #define SECTIONS_MASK  ((1UL << SECTIONS_WIDTH) - 1)
 #define LAST_CPUPID_MASK   ((1UL << LAST_CPUPID_SHIFT) - 1)
+#define KASAN_TAG_MASK ((1UL << KASAN_TAG_WIDTH) - 1)
 #define ZONEID_MASK        ((1UL << ZONEID_SHIFT) - 1)
 
 static inline enum zone_type page_zonenum(const struct page *page)
@@ -1021,6 +1024,32 @@ static inline bool cpupid_match_pid(struct task_struct *task, int cpupid)
 }
 #endif /* CONFIG_NUMA_BALANCING */
 
+#ifdef CONFIG_KASAN_HW
+static inline u8 page_kasan_tag(const struct page *page)
+{
+   return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
+}
+
+static inline void page_kasan_tag_set(struct page *page, u8 tag)
+{
+   page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
+   page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
+}
+
+static inline void page_kasan_tag_reset(struct page *page)
+{
+   page_kasan_tag_set(page, 0xff);
+}
+#else
+static inline u8 page_kasan_tag(const struct page *page)
+{
+   return 0xff;
+}
+
+static inline void page_kasan_tag_set(struct page *page, u8 tag) { }
+static inline void page_kasan_tag_reset(struct page *page) { }
+#endif
+
 static inline struct zone *page_zone(const struct page *page)
 {
 	return &NODE_DATA(page_to_nid(page))->node_zones[page_zonenum(page)];

[RFC PATCH v3 15/15] khwasan: update kasan documentation

2018-04-20 Thread Andrey Konovalov
This patch updates KASAN documentation to reflect the addition of KHWASAN.

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 Documentation/dev-tools/kasan.rst | 212 +-
 1 file changed, 122 insertions(+), 90 deletions(-)

diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index f7a18f274357..bd7859538b73 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -8,11 +8,18 @@ KernelAddressSANitizer (KASAN) is a dynamic memory error detector. It provides
 a fast and comprehensive solution for finding use-after-free and out-of-bounds
 bugs.
 
-KASAN uses compile-time instrumentation for checking every memory access,
-therefore you will need a GCC version 4.9.2 or later. GCC 5.0 or later is
-required for detection of out-of-bounds accesses to stack or global variables.
+KASAN has two modes: classic KASAN (a classic version, similar to user space
+ASan) and KHWASAN (a version based on memory tagging, similar to user space
+HWASan).
 
-Currently KASAN is supported only for the x86_64 and arm64 architectures.
+KASAN uses compile-time instrumentation to insert validity checks before every
+memory access, and therefore requires a compiler version that supports that.
+For classic KASAN you need GCC version 4.9.2 or later. GCC 5.0 or later is
+required for detection of out-of-bounds accesses on stack and global variables.
+TODO: compiler requirements for KHWASAN
+
+Currently classic KASAN is supported for the x86_64, arm64 and xtensa
+architectures, and KHWASAN is supported only for arm64.
 
 Usage
 -
@@ -21,12 +28,14 @@ To enable KASAN configure kernel with::
 
  CONFIG_KASAN = y
 
-and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and
-inline are compiler instrumentation types. The former produces smaller binary
-the latter is 1.1 - 2 times faster. Inline instrumentation requires a GCC
+and choose between CONFIG_KASAN_GENERIC (to enable classic KASAN) and
+CONFIG_KASAN_HW (to enable KHWASAN). You also need to choose between
+CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and inline are compiler
+instrumentation types. The former produces a smaller binary, the latter is
+1.1 - 2 times faster. For classic KASAN, inline instrumentation requires GCC
 version 5.0 or later.
 
-KASAN works with both SLUB and SLAB memory allocators.
+Both KASAN modes work with both SLUB and SLAB memory allocators.
 For better bug detection and nicer reporting, enable CONFIG_STACKTRACE.
 
 To disable instrumentation for specific files or directories, add a line
@@ -43,85 +52,80 @@ similar to the following to the respective kernel Makefile:
 Error reports
~~~~~~~~~~~~~
 
-A typical out of bounds access report looks like this::
+A typical out-of-bounds access classic KASAN report looks like this::
 
 ==================================================================
-BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3
-Write of size 1 by task modprobe/1689
-=============================================================================
-BUG kmalloc-128 (Not tainted): kasan error
------------------------------------------------------------------------------
-
-Disabling lock debugging due to kernel taint
-INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689
- __slab_alloc+0x4b4/0x4f0
- kmem_cache_alloc_trace+0x10b/0x190
- kmalloc_oob_right+0x3d/0x75 [test_kasan]
- init_module+0x9/0x47 [test_kasan]
- do_one_initcall+0x99/0x200
- load_module+0x2cb3/0x3b20
- SyS_finit_module+0x76/0x80
- system_call_fastpath+0x12/0x17
-INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080
-INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720
-
-Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
-Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
-Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
-Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
-Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
-Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
-Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
-Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
-Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5  kkkkkkkkkkkkkkk.
-Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc  ........
-Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a  ZZZZZZZZ
-CPU: 0 PID

[RFC PATCH v3 13/15] khwasan, arm64: add brk handler for inline instrumentation

2018-04-20 Thread Andrey Konovalov
KHWASAN inline instrumentation mode (which embeds checks of shadow memory
into the generated code, instead of inserting a callback) generates a brk
instruction when a tag mismatch is detected.

This commit adds a KHWASAN brk handler that decodes the immediate value
passed to the brk instruction (to extract information about the memory
access that triggered the mismatch), reads the register values (x0 contains
the guilty address) and reports the bug.
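
As a worked example (value hypothetical), an ESR whose low byte is 0x33
decodes with the masks introduced below as:

  recover = 0x33 & KHWASAN_ESR_RECOVER (0x20) -> set: execution may
            continue after the report is printed
  write   = 0x33 & KHWASAN_ESR_WRITE (0x10)   -> set: the faulting
            access was a write
  size    = 1 << (0x33 & KHWASAN_ESR_SIZE_MASK (0x0f)) = 1 << 3
          = an 8-byte access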

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 arch/arm64/include/asm/brk-imm.h |  2 +
 arch/arm64/kernel/traps.c| 69 +++-
 2 files changed, 69 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/brk-imm.h b/arch/arm64/include/asm/brk-imm.h
index ed693c5bcec0..e4a7013321dc 100644
--- a/arch/arm64/include/asm/brk-imm.h
+++ b/arch/arm64/include/asm/brk-imm.h
@@ -16,10 +16,12 @@
  * 0x400: for dynamic BRK instruction
  * 0x401: for compile time BRK instruction
  * 0x800: kernel-mode BUG() and WARN() traps
+ * 0x9xx: KHWASAN trap (allowed values 0x900 - 0x9ff)
  */
 #define FAULT_BRK_IMM  0x100
 #define KGDB_DYN_DBG_BRK_IMM   0x400
 #define KGDB_COMPILED_DBG_BRK_IMM  0x401
 #define BUG_BRK_IMM0x800
+#define KHWASAN_BRK_IMM0x900
 
 #endif
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index ba964da31a25..b25effc7972a 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -35,6 +35,7 @@
 #include 
 #include 
 #include 
+#include <linux/kasan.h>
 
 #include 
 #include 
@@ -269,10 +270,14 @@ void arm64_notify_die(const char *str, struct pt_regs *regs,
}
 }
 
-void arm64_skip_faulting_instruction(struct pt_regs *regs, unsigned long size)
+void __arm64_skip_faulting_instruction(struct pt_regs *regs, unsigned long size)
 {
regs->pc += size;
+}
 
+void arm64_skip_faulting_instruction(struct pt_regs *regs, unsigned long size)
+{
+   __arm64_skip_faulting_instruction(regs, size);
/*
 * If we were single stepping, we want to get the step exception after
 * we return from the trap.
@@ -789,7 +794,7 @@ static int bug_handler(struct pt_regs *regs, unsigned int esr)
}
 
/* If thread survives, skip over the BUG instruction and continue: */
-   arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
+   __arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
return DBG_HOOK_HANDLED;
 }
 
@@ -799,6 +804,59 @@ static struct break_hook bug_break_hook = {
.fn = bug_handler,
 };
 
+#ifdef CONFIG_KASAN_HW
+
+#define KHWASAN_ESR_RECOVER0x20
+#define KHWASAN_ESR_WRITE  0x10
+#define KHWASAN_ESR_SIZE_MASK  0x0f
+#define KHWASAN_ESR_SIZE(esr)  (1 << ((esr) & KHWASAN_ESR_SIZE_MASK))
+
+static int khwasan_handler(struct pt_regs *regs, unsigned int esr)
+{
+   bool recover = esr & KHWASAN_ESR_RECOVER;
+   bool write = esr & KHWASAN_ESR_WRITE;
+   size_t size = KHWASAN_ESR_SIZE(esr);
+   u64 addr = regs->regs[0];
+   u64 pc = regs->pc;
+
+   if (user_mode(regs))
+   return DBG_HOOK_ERROR;
+
+   kasan_report(addr, size, write, pc);
+
+   /*
+* The instrumentation allows controlling whether we can proceed after
+* a crash is detected. This is done by passing the -recover flag to
+* the compiler. Disabling recovery allows generating more compact
+* code.
+*
+* Unfortunately disabling recovery doesn't work for the kernel right
+* now. KHWASAN reporting is disabled in some contexts (for example when
+* the allocator accesses slab object metadata; same is true for KASAN;
+* this is controlled by current->kasan_depth). All these accesses are
+* detected by the tool, even though the reports for them are not
+* printed.
+*
+* This is something that might be fixed at some point in the future.
+*/
+   if (!recover)
+   die("Oops - KHWASAN", regs, 0);
+
+   /* If thread survives, skip over the brk instruction and continue: */
+   __arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
+   return DBG_HOOK_HANDLED;
+}
+
+#define KHWASAN_ESR_VAL (0xf2000000 | KHWASAN_BRK_IMM)
+#define KHWASAN_ESR_MASK 0xffffff00
+
+static struct break_hook khwasan_break_hook = {
+   .esr_val = KHWASAN_ESR_VAL,
+   .esr_mask = KHWASAN_ESR_MASK,
+   .fn = khwasan_handler,
+};
+#endif
+
 /*
  * Initial handler for AArch64 BRK exceptions
  * This handler only used until debug_traps_init().
@@ -806,6 +864,10 @@ static struct break_hook bug_break_hook = {
 int __init early_brk64(unsigned long addr, unsigned int esr,
struct pt_regs *regs)
 {
+#ifdef CONFIG_KASAN_HW
+   if ((esr & KHWASAN_ESR_MASK) == KHWASAN_ESR_VAL)
+   return khwasan_handler(regs, esr) != DBG_HOOK_HANDLED;
+#endif
   

[RFC PATCH v3 10/15] khwasan: split out kasan_report.c from report.c

2018-04-20 Thread Andrey Konovalov
This patch moves KASAN specific error reporting routines to kasan_report.c
without any functional changes, leaving common error reporting code in
report.c to be later reused by KHWASAN.

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 mm/kasan/Makefile |   4 +-
 mm/kasan/kasan.h  |   7 ++
 mm/kasan/kasan_report.c   | 158 +
 mm/kasan/khwasan_report.c |  39 +++
 mm/kasan/report.c | 234 +-
 5 files changed, 257 insertions(+), 185 deletions(-)
 create mode 100644 mm/kasan/kasan_report.c
 create mode 100644 mm/kasan/khwasan_report.c

diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
index 14955add96d3..7ef536390365 100644
--- a/mm/kasan/Makefile
+++ b/mm/kasan/Makefile
@@ -14,5 +14,5 @@ CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
 CFLAGS_khwasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
 
 obj-$(CONFIG_KASAN) := common.o kasan_init.o report.o
-obj-$(CONFIG_KASAN_GENERIC) += kasan.o quarantine.o
-obj-$(CONFIG_KASAN_HW) += khwasan.o
+obj-$(CONFIG_KASAN_GENERIC) += kasan.o kasan_report.o quarantine.o
+obj-$(CONFIG_KASAN_HW) += khwasan.o khwasan_report.o
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index cd51ae9d8149..a76aee9e095f 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -118,11 +118,18 @@ static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
<< KASAN_SHADOW_SCALE_SHIFT);
 }
 
+static inline bool addr_has_shadow(const void *addr)
+{
+   return (addr >= kasan_shadow_to_mem((void *)KASAN_SHADOW_START));
+}
+
 void kasan_poison_shadow(const void *address, size_t size, u8 value);
 
 void check_memory_region(unsigned long addr, size_t size, bool write,
unsigned long ret_ip);
 
+const char *get_bug_type(struct kasan_access_info *info);
+
 void kasan_report(unsigned long addr, size_t size,
bool is_write, unsigned long ip);
 void kasan_report_invalid_free(void *object, unsigned long ip);
diff --git a/mm/kasan/kasan_report.c b/mm/kasan/kasan_report.c
new file mode 100644
index ..2d8decbecbd5
--- /dev/null
+++ b/mm/kasan/kasan_report.c
@@ -0,0 +1,158 @@
+/*
+ * This file contains KASAN specific error reporting code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <ryabinin@gmail.com>
+ *
+ * Some code borrowed from https://github.com/xairy/kasan-prototype by
+ *    Andrey Konovalov <andreyk...@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "kasan.h"
+#include "../slab.h"
+
+static const void *find_first_bad_addr(const void *addr, size_t size)
+{
+   u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
+   const void *first_bad_addr = addr;
+
+   while (!shadow_val && first_bad_addr < addr + size) {
+   first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+   shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr);
+   }
+   return first_bad_addr;
+}
+
+static const char *get_shadow_bug_type(struct kasan_access_info *info)
+{
+   const char *bug_type = "unknown-crash";
+   u8 *shadow_addr;
+
+   info->first_bad_addr = find_first_bad_addr(info->access_addr,
+   info->access_size);
+
+   shadow_addr = (u8 *)kasan_mem_to_shadow(info->first_bad_addr);
+
+   /*
+* If shadow byte value is in [0, KASAN_SHADOW_SCALE_SIZE) we can look
+* at the next shadow byte to determine the type of the bad access.
+*/
+   if (*shadow_addr > 0 && *shadow_addr <= KASAN_SHADOW_SCALE_SIZE - 1)
+   shadow_addr++;
+
+   switch (*shadow_addr) {
+   case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+   /*
+* In theory it's still possible to see these shadow values
+* due to a data race in the kernel code.
+*/
+   bug_type = "out-of-bounds";
+   break;
+   case KASAN_PAGE_REDZONE:
+   case KASAN_KMALLOC_REDZONE:
+   bug_type = "slab-out-of-bounds";
+   break;
+   case KASAN_GLOBAL_REDZONE:
+   bug_type = "global-out-of-bounds";
+   break;
+   case KASAN_STACK_LEFT:
+   case KASAN_STACK_MID:
+   case KASAN_STACK_RIGHT:
+   case KASAN_STACK_PARTIAL:
+   bug_type = "stack-out-of-bounds";
+   break;
+   case KASAN_FREE_PAGE:
+   case KASAN_KMALLOC_

[RFC PATCH v3 12/15] khwasan: add hooks implementation

2018-04-20 Thread Andrey Konovalov
This commit adds KHWASAN specific hooks implementation and adjusts
common KASAN and KHWASAN ones.

1. When a new slab cache is created, KHWASAN rounds up the size of the
   objects in this cache to KASAN_SHADOW_SCALE_SIZE (== 16).

2. On each kmalloc KHWASAN generates a random tag, sets the shadow memory,
   that corresponds to this object to this tag, and embeds this tag value
   into the top byte of the returned pointer.

3. On each kfree KHWASAN poisons the shadow memory with a random tag to
   allow detection of use-after-free bugs.

The rest of the logic of the hook implementation is very similar to the
one provided by KASAN. KHWASAN saves allocation and free stack metadata
to the slab object the same way KASAN does.
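
A condensed sketch of the kmalloc-side behavior described in points 1-2
(illustrative only; random_tag(), set_tag() and kasan_poison_shadow()
are the helpers used elsewhere in this series):

static void *khwasan_kmalloc_sketch(void *object, size_t size)
{
        u8 tag = random_tag();          /* fresh per-allocation tag */

        /* Colour the object's shadow with the tag; the size was already
         * rounded up to KASAN_SHADOW_SCALE_SIZE (16) at cache creation. */
        kasan_poison_shadow(object, round_up(size, KASAN_SHADOW_SCALE_SIZE),
                            tag);

        /* Return the pointer with the tag embedded in bits 63:56. */
        return set_tag(object, tag);
}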

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 mm/kasan/common.c  | 73 --
 mm/kasan/kasan.h   |  8 +
 mm/kasan/khwasan.c | 40 +
 3 files changed, 105 insertions(+), 16 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 0c1159feaf5e..0654bf97257b 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -140,6 +140,9 @@ void kasan_poison_shadow(const void *address, size_t size, u8 value)
 {
void *shadow_start, *shadow_end;
 
+   /* Perform shadow offset calculation based on untagged address */
+   address = reset_tag(address);
+
shadow_start = kasan_mem_to_shadow(address);
shadow_end = kasan_mem_to_shadow(address + size);
 
@@ -148,11 +151,15 @@ void kasan_poison_shadow(const void *address, size_t size, u8 value)
 
 void kasan_unpoison_shadow(const void *address, size_t size)
 {
-   kasan_poison_shadow(address, size, 0);
+   kasan_poison_shadow(address, size, get_tag(address));
 
if (size & KASAN_SHADOW_MASK) {
u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
-   *shadow = size & KASAN_SHADOW_MASK;
+
+   if (IS_ENABLED(CONFIG_KASAN_HW))
+   *shadow = get_tag(address);
+   else
+   *shadow = size & KASAN_SHADOW_MASK;
}
 }
 
@@ -216,6 +223,7 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
slab_flags_t *flags)
 {
unsigned int orig_size = *size;
+   unsigned int redzone_size = 0;
int redzone_adjust;
 
/* Add alloc meta. */
@@ -223,20 +231,20 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
*size += sizeof(struct kasan_alloc_meta);
 
/* Add free meta. */
-   if (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
-   cache->object_size < sizeof(struct kasan_free_meta)) {
+   if (IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+   (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
+cache->object_size < sizeof(struct kasan_free_meta))) {
cache->kasan_info.free_meta_offset = *size;
*size += sizeof(struct kasan_free_meta);
}
-   redzone_adjust = optimal_redzone(cache->object_size) -
-   (*size - cache->object_size);
 
+   redzone_size = optimal_redzone(cache->object_size);
+   redzone_adjust = redzone_size - (*size - cache->object_size);
if (redzone_adjust > 0)
*size += redzone_adjust;
 
*size = min_t(unsigned int, KMALLOC_MAX_SIZE,
-   max(*size, cache->object_size +
-   optimal_redzone(cache->object_size)));
+   max(*size, cache->object_size + redzone_size));
 
/*
 * If the metadata doesn't fit, don't enable KASAN at all.
@@ -306,18 +314,30 @@ void kasan_init_slab_obj(struct kmem_cache *cache, const void *object)
 
 void *kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
 {
-   return kasan_kmalloc(cache, object, cache->object_size, flags);
+   object = kasan_kmalloc(cache, object, cache->object_size, flags);
+   if (IS_ENABLED(CONFIG_KASAN_HW) && unlikely(cache->ctor)) {
+   /*
+* Cache constructor might use object's pointer value to
+* initialize some of its fields.
+*/
+   cache->ctor(object);
+   }
+   return object;
 }
 
 static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
  unsigned long ip, bool quarantine)
 {
s8 shadow_byte;
+   u8 tag;
unsigned long rounded_up_size;
 
+   tag = get_tag(object);
+   object = reset_tag(object);
+
if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) !=
object)) {
-   kasan_report_invalid_free(object, ip);
+   kasan_report_invalid_free(set_tag(object, tag), ip);
return true;
}
 
@@ -326,20

[RFC PATCH v3 09/15] khwasan, mm: perform untagged pointers comparison in krealloc

2018-04-20 Thread Andrey Konovalov
The krealloc function checks whether the same buffer was reused or a new
one was allocated by comparing kernel pointers. KHWASAN changes the memory
tag on the krealloc'ed chunk of memory and therefore also changes the
pointer tag of the returned pointer. Therefore we need to perform the
comparison on untagged (with tags reset) pointers to check whether it's
the same memory region or not.
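
A concrete illustration with made-up addresses:

  p   = 0xa2ff800012345000   (tag 0xa2)
  ret = 0x57ff800012345000   (tag 0x57, same untagged address)

  p != ret                                        -> true: kfree(p) would
                                                    wrongly be called
  khwasan_reset_tag(p) != khwasan_reset_tag(ret)  -> false: the buffer
                                                    was reused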

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 mm/slab_common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 0582004351c4..451b094b8c5b 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1478,7 +1478,7 @@ void *krealloc(const void *p, size_t new_size, gfp_t flags)
}
 
ret = __do_krealloc(p, new_size, flags);
-   if (ret && p != ret)
+   if (ret && khwasan_reset_tag(p) != khwasan_reset_tag(ret))
kfree(p);
 
return ret;
-- 
2.17.0.484.g0c8726318c-goog



[RFC PATCH v3 11/15] khwasan: add bug reporting routines

2018-04-20 Thread Andrey Konovalov
This commit adds routines that print KHWASAN error reports. They are
quite similar to the KASAN ones, with two differences:

1. The way KHWASAN finds the first bad shadow cell (with a mismatching
   tag). KHWASAN compares memory tags from the shadow memory to the pointer
   tag.

2. KHWASAN reports all bugs with the "KASAN: invalid-access" header. This
   is done so that various external tools that already parse the kernel
   logs looking for KASAN reports don't need to be changed.
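
Put together, a KHWASAN report header then looks like this (addresses
and tags made up; the last line is produced by print_tags() below):

  BUG: KASAN: invalid-access in kmalloc_oob_right+0x68/0x90
  Read of size 1 at addr 63ff800012345678 by task modprobe/1689
  Pointer tag: [63], memory tag: [a4]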

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 include/linux/kasan.h |  3 +++
 mm/kasan/kasan.h  |  7 +
 mm/kasan/kasan_report.c   |  7 ++---
 mm/kasan/khwasan_report.c | 21 +++
 mm/kasan/report.c | 57 +--
 5 files changed, 64 insertions(+), 31 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index d7624b879d86..e209027f3b52 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -161,6 +161,9 @@ void *khwasan_set_tag(const void *addr, u8 tag);
 u8 khwasan_get_tag(const void *addr);
 void *khwasan_reset_tag(const void *ptr);
 
+void kasan_report(unsigned long addr, size_t size,
+   bool write, unsigned long ip);
+
 #else /* CONFIG_KASAN_HW */
 
 static inline void khwasan_init(void) { }
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index a76aee9e095f..620941d1e84f 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -128,8 +128,15 @@ void kasan_poison_shadow(const void *address, size_t size, u8 value);
 void check_memory_region(unsigned long addr, size_t size, bool write,
unsigned long ret_ip);
 
+void *find_first_bad_addr(void *addr, size_t size);
 const char *get_bug_type(struct kasan_access_info *info);
 
+#ifdef CONFIG_KASAN_HW
+void print_tags(u8 addr_tag, const void *addr);
+#else
+static inline void print_tags(u8 addr_tag, const void *addr) { }
+#endif
+
 void kasan_report(unsigned long addr, size_t size,
bool is_write, unsigned long ip);
 void kasan_report_invalid_free(void *object, unsigned long ip);
diff --git a/mm/kasan/kasan_report.c b/mm/kasan/kasan_report.c
index 2d8decbecbd5..fdf2d77e3125 100644
--- a/mm/kasan/kasan_report.c
+++ b/mm/kasan/kasan_report.c
@@ -33,10 +33,10 @@
 #include "kasan.h"
 #include "../slab.h"
 
-static const void *find_first_bad_addr(const void *addr, size_t size)
+void *find_first_bad_addr(void *addr, size_t size)
 {
u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr);
-   const void *first_bad_addr = addr;
+   void *first_bad_addr = addr;
 
while (!shadow_val && first_bad_addr < addr + size) {
first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
@@ -50,9 +50,6 @@ static const char *get_shadow_bug_type(struct kasan_access_info *info)
const char *bug_type = "unknown-crash";
u8 *shadow_addr;
 
-   info->first_bad_addr = find_first_bad_addr(info->access_addr,
-   info->access_size);
-
shadow_addr = (u8 *)kasan_mem_to_shadow(info->first_bad_addr);
 
/*
diff --git a/mm/kasan/khwasan_report.c b/mm/kasan/khwasan_report.c
index 2edbc3c76be5..51238b404b08 100644
--- a/mm/kasan/khwasan_report.c
+++ b/mm/kasan/khwasan_report.c
@@ -37,3 +37,24 @@ const char *get_bug_type(struct kasan_access_info *info)
 {
return "invalid-access";
 }
+
+void *find_first_bad_addr(void *addr, size_t size)
+{
+   u8 tag = get_tag(addr);
+   void *untagged_addr = reset_tag(addr);
+   u8 *shadow = (u8 *)kasan_mem_to_shadow(untagged_addr);
+   void *first_bad_addr = untagged_addr;
+
+   while (*shadow == tag && first_bad_addr < untagged_addr + size) {
+   first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+   shadow = (u8 *)kasan_mem_to_shadow(first_bad_addr);
+   }
+   return first_bad_addr;
+}
+
+void print_tags(u8 addr_tag, const void *addr)
+{
+   u8 *shadow = (u8 *)kasan_mem_to_shadow(addr);
+
+   pr_err("Pointer tag: [%02x], memory tag: [%02x]\n", addr_tag, *shadow);
+}
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 155247a6f8a8..e031c78f2e52 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -64,11 +64,10 @@ static int __init kasan_set_multi_shot(char *str)
 }
 __setup("kasan_multi_shot", kasan_set_multi_shot);
 
-static void print_error_description(struct kasan_access_info *info,
-   const char *bug_type)
+static void print_error_description(struct kasan_access_info *info)
 {
pr_err("BUG: KASAN: %s in %pS\n",
-   bug_type, (void *)info->ip);
+   get_bug_type(info), (void *)info->ip);
pr_err("%s of size %zu at addr %px by task %s/%d\n",
info->is_write ? "Write" : "Read", info->access

[RFC PATCH v3 08/15] khwasan, arm64: enable top byte ignore for the kernel

2018-04-20 Thread Andrey Konovalov
KHWASAN uses the Top Byte Ignore feature of arm64 CPUs to store a pointer
tag in the top byte of each pointer. This commit enables the TCR_TBI1 bit,
which enables Top Byte Ignore for the kernel, when KHWASAN is used.
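
The effect, as a minimal sketch (set_tag() is the helper from the
KHWASAN patches; the tag value is hypothetical): with TCR_EL1.TBI1 set,
address translation ignores bits 63:56 of TTBR1 addresses, so a tagged
and an untagged pointer to the same location are interchangeable:

static int tbi1_example(int *p)
{
        int *tagged = set_tag(p, 0xab); /* top byte 0xff -> 0xab */

        return *tagged == *p;           /* true once TBI1 is enabled */
}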

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 arch/arm64/include/asm/pgtable-hwdef.h | 1 +
 arch/arm64/mm/proc.S   | 8 +++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index fd208eac9f2a..483aceedad76 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -289,6 +289,7 @@
 #define TCR_A1 (UL(1) << 22)
 #define TCR_ASID16 (UL(1) << 36)
 #define TCR_TBI0   (UL(1) << 37)
+#define TCR_TBI1   (UL(1) << 38)
 #define TCR_HA (UL(1) << 39)
 #define TCR_HD (UL(1) << 40)
 #define TCR_NFD1   (UL(1) << 54)
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 5f9a73a4452c..f3dfcd74a285 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -47,6 +47,12 @@
 /* PTWs cacheable, inner/outer WBWA */
 #define TCR_CACHE_FLAGSTCR_IRGN_WBWA | TCR_ORGN_WBWA
 
+#ifdef CONFIG_KASAN_HW
+#define TCR_KASAN_FLAGS TCR_TBI1
+#else
+#define TCR_KASAN_FLAGS 0
+#endif
+
 #define MAIR(attr, mt) ((attr) << ((mt) * 8))
 
 /*
@@ -439,7 +445,7 @@ ENTRY(__cpu_setup)
 */
ldr x10, =TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \
TCR_TG_FLAGS | TCR_KASLR_FLAGS | TCR_ASID16 | \
-   TCR_TBI0 | TCR_A1
+   TCR_TBI0 | TCR_A1 | TCR_KASAN_FLAGS
tcr_set_idmap_t0sz  x10, x9
 
/*
-- 
2.17.0.484.g0c8726318c-goog



[RFC PATCH v3 04/15] khwasan: initialize shadow to 0xff

2018-04-20 Thread Andrey Konovalov
A KHWASAN shadow memory cell contains a memory tag that corresponds to
the tag in the top byte of the pointer that points to that memory. The
native top byte value of kernel pointers is 0xff, so with KHWASAN we
need to initialize shadow memory to 0xff. This commit does that.

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 arch/arm64/mm/kasan_init.c | 16 ++--
 include/linux/kasan.h  |  8 
 mm/kasan/common.c  |  3 ++-
 3 files changed, 24 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index dabfc1ecda3d..85b21292ee68 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -44,6 +44,15 @@ static phys_addr_t __init kasan_alloc_zeroed_page(int node)
return __pa(p);
 }
 
+static phys_addr_t __init kasan_alloc_raw_page(int node)
+{
+   void *p = memblock_virt_alloc_try_nid_raw(PAGE_SIZE, PAGE_SIZE,
+ __pa(MAX_DMA_ADDRESS),
+ MEMBLOCK_ALLOC_ACCESSIBLE,
+ node);
+   return __pa(p);
+}
+
 static pte_t *__init kasan_pte_offset(pmd_t *pmdp, unsigned long addr, int node,
  bool early)
 {
@@ -89,7 +98,9 @@ static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr,
 
do {
phys_addr_t page_phys = early ? __pa_symbol(kasan_zero_page)
- : kasan_alloc_zeroed_page(node);
+ : kasan_alloc_raw_page(node);
+   if (!early)
+   memset(__va(page_phys), KASAN_SHADOW_INIT, PAGE_SIZE);
next = addr + PAGE_SIZE;
set_pte(ptep, pfn_pte(__phys_to_pfn(page_phys), PAGE_KERNEL));
 	} while (ptep++, addr = next, addr != end && pte_none(READ_ONCE(*ptep)));
@@ -139,6 +150,7 @@ asmlinkage void __init kasan_early_init(void)
KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
+
kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE,
   true);
 }
@@ -235,7 +247,7 @@ void __init kasan_init(void)
 		set_pte(&kasan_zero_pte[i],
pfn_pte(sym_to_pfn(kasan_zero_page), PAGE_KERNEL_RO));
 
-   memset(kasan_zero_page, 0, PAGE_SIZE);
+   memset(kasan_zero_page, KASAN_SHADOW_INIT, PAGE_SIZE);
cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
 
/* At this point kasan is fully initialized. Enable error messages */
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 6608aa9b35ac..336385baf926 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -139,6 +139,8 @@ static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
 
 #ifdef CONFIG_KASAN_GENERIC
 
+#define KASAN_SHADOW_INIT 0
+
 void kasan_cache_shrink(struct kmem_cache *cache);
 void kasan_cache_shutdown(struct kmem_cache *cache);
 
@@ -149,4 +151,10 @@ static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
 
 #endif /* CONFIG_KASAN_GENERIC */
 
+#ifdef CONFIG_KASAN_HW
+
+#define KASAN_SHADOW_INIT 0xFF
+
+#endif /* CONFIG_KASAN_HW */
+
 #endif /* LINUX_KASAN_H */
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index ebb48415e4cf..0c1159feaf5e 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -454,11 +454,12 @@ int kasan_module_alloc(void *addr, size_t size)
 
ret = __vmalloc_node_range(shadow_size, 1, shadow_start,
shadow_start + shadow_size,
-   GFP_KERNEL | __GFP_ZERO,
+   GFP_KERNEL,
PAGE_KERNEL, VM_NO_GUARD, NUMA_NO_NODE,
__builtin_return_address(0));
 
if (ret) {
+   __memset(ret, KASAN_SHADOW_INIT, shadow_size);
find_vm_area(addr)->flags |= VM_KASAN;
kmemleak_ignore(ret);
return 0;
-- 
2.17.0.484.g0c8726318c-goog



[RFC PATCH v3 07/15] khwasan: add tag related helper functions

2018-04-20 Thread Andrey Konovalov
This commit adds a few helper functions that are meant to be used for
working with tags embedded in the top byte of kernel pointers: to set,
to get, or to reset (set to 0xff) the top byte.
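
Example usage of the new helpers (pointer value hypothetical):

        void *p = (void *)0xffff800012345678;

        p = khwasan_set_tag(p, 0x2a);   /* p is now 0x2aff800012345678 */
        khwasan_get_tag(p);             /* returns 0x2a */
        p = khwasan_reset_tag(p);       /* back to 0xffff800012345678 */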

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 arch/arm64/mm/kasan_init.c |  2 ++
 include/linux/kasan.h  | 23 
 mm/kasan/kasan.h   | 55 ++
 mm/kasan/khwasan.c | 48 +
 4 files changed, 128 insertions(+)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 85b21292ee68..8ef9b1bc6d81 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -250,6 +250,8 @@ void __init kasan_init(void)
memset(kasan_zero_page, KASAN_SHADOW_INIT, PAGE_SIZE);
cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
 
+   khwasan_init();
+
/* At this point kasan is fully initialized. Enable error messages */
init_task.kasan_depth = 0;
pr_info("KernelAddressSanitizer initialized\n");
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 336385baf926..d7624b879d86 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -155,6 +155,29 @@ static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
 
 #define KASAN_SHADOW_INIT 0xFF
 
+void khwasan_init(void);
+
+void *khwasan_set_tag(const void *addr, u8 tag);
+u8 khwasan_get_tag(const void *addr);
+void *khwasan_reset_tag(const void *ptr);
+
+#else /* CONFIG_KASAN_HW */
+
+static inline void khwasan_init(void) { }
+
+static inline void *khwasan_set_tag(const void *addr, u8 tag)
+{
+   return (void *)addr;
+}
+static inline u8 khwasan_get_tag(const void *addr)
+{
+   return 0xFF;
+}
+static inline void *khwasan_reset_tag(const void *ptr)
+{
+   return (void *)ptr;
+}
+
 #endif /* CONFIG_KASAN_HW */
 
 #endif /* LINUX_KASAN_H */
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 5091a433f266..cd51ae9d8149 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -8,6 +8,10 @@
 #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
 #define KASAN_SHADOW_MASK   (KASAN_SHADOW_SCALE_SIZE - 1)
 
+#define KHWASAN_TAG_KERNEL 0xFF /* native kernel pointers tag */
+#define KHWASAN_TAG_INVALID0xFE /* inaccessible memory tag */
+#define KHWASAN_TAG_MAX0xFD /* maximum value for random tags */
+
 #define KASAN_FREE_PAGE 0xFF  /* page was freed */
 #define KASAN_PAGE_REDZONE  0xFE  /* redzone for kmalloc_large allocations */
 #define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
@@ -135,6 +139,57 @@ static inline void quarantine_reduce(void) { }
 static inline void quarantine_remove_cache(struct kmem_cache *cache) { }
 #endif
 
+#ifdef CONFIG_KASAN_HW
+
+#define KHWASAN_TAG_SHIFT 56
+#define KHWASAN_TAG_MASK (0xFFUL << KHWASAN_TAG_SHIFT)
+
+u8 random_tag(void);
+
+static inline void *set_tag(const void *addr, u8 tag)
+{
+   u64 a = (u64)addr;
+
+   a &= ~KHWASAN_TAG_MASK;
+   a |= ((u64)tag << KHWASAN_TAG_SHIFT);
+
+   return (void *)a;
+}
+
+static inline u8 get_tag(const void *addr)
+{
+   return (u8)((u64)addr >> KHWASAN_TAG_SHIFT);
+}
+
+static inline void *reset_tag(const void *addr)
+{
+   return set_tag(addr, KHWASAN_TAG_KERNEL);
+}
+
+#else /* CONFIG_KASAN_HW */
+
+static inline u8 random_tag(void)
+{
+   return 0;
+}
+
+static inline void *set_tag(const void *addr, u8 tag)
+{
+   return (void *)addr;
+}
+
+static inline u8 get_tag(const void *addr)
+{
+   return 0;
+}
+
+static inline void *reset_tag(const void *addr)
+{
+   return (void *)addr;
+}
+
+#endif /* CONFIG_KASAN_HW */
+
 /*
  * Exported functions for interfaces called from assembly or from generated
  * code. Declarations here to avoid warning about missing declarations.
diff --git a/mm/kasan/khwasan.c b/mm/kasan/khwasan.c
index e2c3a7f7fd1f..4e253c1e4d35 100644
--- a/mm/kasan/khwasan.c
+++ b/mm/kasan/khwasan.c
@@ -38,6 +38,54 @@
 #include "kasan.h"
 #include "../slab.h"
 
+static DEFINE_PER_CPU(u32, prng_state);
+
+void khwasan_init(void)
+{
+   int cpu;
+
+   for_each_possible_cpu(cpu) {
+   per_cpu(prng_state, cpu) = get_random_u32();
+   }
+}
+
+/*
+ * If a preemption happens between this_cpu_read and this_cpu_write, the only
+ * side effect is that we'll give a few objects allocated in different
+ * contexts the same tag. Since KHWASAN is meant to be used as a
+ * probabilistic bug-detection debug feature, this doesn't have a
+ * significant negative impact.
+ *
+ * Ideally the tags use strong randomness to prevent any attempts to predict
+ * them during explicit exploit attempts. But strong randomness is expensive,
+ * and we did an intentional trade-off to use a PRNG. This non-atomic RMW
+ * sequence has in fact a positive effect, since interrupts that randomly
+ * skew the PRNG at unpredictable points do only good.
+ */
+u8 

[RFC PATCH v3 06/15] khwasan, arm64: fix up fault handling logic

2018-04-20 Thread Andrey Konovalov
show_pte in arm64 fault handling relies on the fact that the top byte of
a kernel pointer is 0xff, which isn't always the case with KHWASAN enabled.
Reset the top byte.

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 arch/arm64/mm/fault.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 4165485e8b6e..e834fe76f5d2 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -32,6 +32,7 @@
 #include 
 #include 
 #include 
+#include <linux/kasan.h>
 
 #include 
 #include 
@@ -134,6 +135,8 @@ void show_pte(unsigned long addr)
pgd_t *pgdp;
pgd_t pgd;
 
+   addr = (unsigned long)khwasan_reset_tag((void *)addr);
+
if (addr < TASK_SIZE) {
/* TTBR0 */
mm = current->active_mm;
-- 
2.17.0.484.g0c8726318c-goog



[RFC PATCH v3 05/15] khwasan, arm64: untag virt address in __kimg_to_phys

2018-04-20 Thread Andrey Konovalov
__kimg_to_phys (which is used by virt_to_phys) assumes that the top byte
of the address is 0xff, which isn't always the case with KHWASAN enabled.
The solution is to reset the tag in __kimg_to_phys.

__lm_to_phys doesn't require any fixups, as it zeroes out the top byte
with the current implementation.
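
Worked example (addresses hypothetical): for a tagged kernel-image
pointer 0x2aff800010080000, subtracting kimage_voffset directly would be
off by (0xff - 0x2a) << 56; KASAN_SET_TAG(addr, 0xff) first restores
0xffff800010080000, after which the usual translation applies.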

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 arch/arm64/include/asm/memory.h | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 6d084431b7f7..f206273469b5 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -92,6 +92,12 @@
 #define KASAN_THREAD_SHIFT 0
 #endif
 
+#ifdef CONFIG_KASAN_HW
+#define KASAN_TAG_SHIFTED(tag) ((unsigned long)(tag) << 56)
+#define KASAN_SET_TAG(addr, tag)   (((addr) & ~KASAN_TAG_SHIFTED(0xff)) | \
+   KASAN_TAG_SHIFTED(tag))
+#endif
+
 #define MIN_THREAD_SHIFT   (14 + KASAN_THREAD_SHIFT)
 
 /*
@@ -225,7 +231,12 @@ static inline unsigned long kaslr_offset(void)
 #define __is_lm_address(addr)  (!!((addr) & BIT(VA_BITS - 1)))
 
 #define __lm_to_phys(addr) (((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
+
+#ifdef CONFIG_KASAN_HW
+#define __kimg_to_phys(addr)   (KASAN_SET_TAG((addr), 0xff) - kimage_voffset)
+#else
 #define __kimg_to_phys(addr)   ((addr) - kimage_voffset)
+#endif
 
 #define __virt_to_phys_nodebug(x) ({   \
phys_addr_t __x = (phys_addr_t)(x); \
-- 
2.17.0.484.g0c8726318c-goog



[RFC PATCH v3 03/15] khwasan, arm64: adjust shadow size for CONFIG_KASAN_HW

2018-04-20 Thread Andrey Konovalov
KHWASAN uses 1 shadow byte for 16 bytes of kernel memory, so it requires
1/16th of the kernel virtual address space for the shadow memory.

This commit sets KASAN_SHADOW_SCALE_SHIFT to 4 when KHWASAN is enabled.
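
Worked example (VA_BITS = 48): KASAN_SHADOW_SIZE becomes
1 << (48 - 4) = 16 TB of shadow for the 256 TB kernel VA range, i.e.
1/16th, versus 1 << (48 - 3) = 32 TB (1/8th) for classic KASAN.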

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 arch/arm64/Makefile |  2 +-
 arch/arm64/include/asm/memory.h | 13 +
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 15402861bb59..49092d763673 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -95,7 +95,7 @@ endif
 # KASAN_SHADOW_OFFSET = VA_START + (1 << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
 #   - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
 # in 32-bit arithmetic
-KASAN_SHADOW_SCALE_SHIFT := 3
+KASAN_SHADOW_SCALE_SHIFT := $(if $(CONFIG_KASAN_HW), 4, 3)
KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 32))) \
+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 49d99214f43c..6d084431b7f7 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -74,12 +74,17 @@
 #define KERNEL_END        _end
 
 /*
- * KASAN requires 1/8th of the kernel virtual address space for the shadow
- * region. KASAN can bloat the stack significantly, so double the (minimum)
- * stack size when KASAN is in use.
+ * KASAN and KHWASAN require 1/8th and 1/16th of the kernel virtual address
+ * space for the shadow region respectively. They can bloat the stack
+ * significantly, so double the (minimum) stack size when they are in use.
  */
-#ifdef CONFIG_KASAN
+#ifdef CONFIG_KASAN_GENERIC
 #define KASAN_SHADOW_SCALE_SHIFT 3
+#endif
+#ifdef CONFIG_KASAN_HW
+#define KASAN_SHADOW_SCALE_SHIFT 4
+#endif
+#ifdef CONFIG_KASAN
 #define KASAN_SHADOW_SIZE  (UL(1) << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
 #define KASAN_THREAD_SHIFT 1
 #else
-- 
2.17.0.484.g0c8726318c-goog



[RFC PATCH v3 02/15] khwasan: add CONFIG_KASAN_GENERIC and CONFIG_KASAN_HW

2018-04-20 Thread Andrey Konovalov
This commit splits the current CONFIG_KASAN config option into two:
1. CONFIG_KASAN_GENERIC, that enables the generic software-only KASAN
   version (the one that exists now);
2. CONFIG_KASAN_HW, that enables KHWASAN.

With CONFIG_KASAN_HW enabled, compiler options are changed to instrument
kernel files with -fsanitize=hwaddress (except the ones for which
KASAN_SANITIZE := n is set).

Both CONFIG_KASAN_GENERIC and CONFIG_KASAN_HW support both
CONFIG_KASAN_INLINE and CONFIG_KASAN_OUTLINE instrumentation modes.

This commit also adds an empty placeholder (for now) implementation of
the KHWASAN-specific hooks inserted by the compiler, and adjusts the
common hooks implementation to compile correctly with each of the config
options.
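
An illustrative .config fragment for a KHWASAN build with inline
instrumentation (option names as introduced by this patch):

  CONFIG_KASAN=y
  CONFIG_KASAN_HW=y
  CONFIG_KASAN_INLINE=y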

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 arch/arm64/Kconfig |  1 +
 include/linux/compiler-clang.h |  5 ++-
 include/linux/compiler-gcc.h   |  4 ++
 include/linux/compiler.h   |  3 +-
 include/linux/kasan.h  | 16 ++--
 lib/Kconfig.kasan  | 68 ++
 mm/kasan/Makefile  |  6 ++-
 mm/kasan/kasan.h   | 11 -
 mm/kasan/khwasan.c | 75 ++
 mm/slub.c  |  2 +-
 scripts/Makefile.kasan | 27 +++-
 11 files changed, 192 insertions(+), 26 deletions(-)
 create mode 100644 mm/kasan/khwasan.c

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index eb2cf4938f6d..6553aaa61e6a 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -88,6 +88,7 @@ config ARM64
select HAVE_ARCH_HUGE_VMAP
select HAVE_ARCH_JUMP_LABEL
select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
+   select HAVE_ARCH_KASAN_HW if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
select HAVE_ARCH_KGDB
select HAVE_ARCH_MMAP_RND_BITS
select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h
index 7d98e263e048..72681c6fd418 100644
--- a/include/linux/compiler-clang.h
+++ b/include/linux/compiler-clang.h
@@ -21,13 +21,16 @@
 #define KASAN_ABI_VERSION 5
 
 /* emulate gcc's __SANITIZE_ADDRESS__ flag */
-#if __has_feature(address_sanitizer)
+#if __has_feature(address_sanitizer) || __has_feature(hwaddress_sanitizer)
 #define __SANITIZE_ADDRESS__
 #endif
 
 #undef __no_sanitize_address
 #define __no_sanitize_address __attribute__((no_sanitize("address")))
 
+#undef __no_sanitize_hwaddress
+#define __no_sanitize_hwaddress __attribute__((no_sanitize("hwaddress")))
+
 /* Clang doesn't have a way to turn it off per-function, yet. */
 #ifdef __noretpoline
 #undef __noretpoline
diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
index b4bf73f5e38f..00a51feb786d 100644
--- a/include/linux/compiler-gcc.h
+++ b/include/linux/compiler-gcc.h
@@ -338,6 +338,10 @@
 #define __no_sanitize_address
 #endif
 
+#if !defined(__no_sanitize_hwaddress)
+#define __no_sanitize_hwaddress   /* gcc doesn't support KHWASAN */
+#endif
+
 /*
  * A trick to suppress uninitialized variable warning without generating any
  * code
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index ab4711c63601..6142bae513e8 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -195,7 +195,8 @@ void __read_once_size(const volatile void *p, void *res, int size)
  * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67368
  * '__maybe_unused' allows us to avoid defined-but-not-used warnings.
  */
-# define __no_kasan_or_inline __no_sanitize_address __maybe_unused
+# define __no_kasan_or_inline __no_sanitize_address __no_sanitize_hwaddress \
+ __maybe_unused
 #else
 # define __no_kasan_or_inline __always_inline
 #endif
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index cbdc54543803..6608aa9b35ac 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -45,8 +45,6 @@ void kasan_free_pages(struct page *page, unsigned int order);
 
 void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
slab_flags_t *flags);
-void kasan_cache_shrink(struct kmem_cache *cache);
-void kasan_cache_shutdown(struct kmem_cache *cache);
 
 void kasan_poison_slab(struct page *page);
 void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
@@ -94,8 +92,6 @@ static inline void kasan_free_pages(struct page *page, unsigned int order) {}
 static inline void kasan_cache_create(struct kmem_cache *cache,
  unsigned int *size,
  slab_flags_t *flags) {}
-static inline void kasan_cache_shrink(struct kmem_cache *cache) {}
-static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
 
 static inline void kasan_poison_slab(struct page *page) {}
 static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
@@ -141,4 +137,16 @@ stati

[RFC PATCH v3 01/15] khwasan: move common kasan and khwasan code to common.c

2018-04-20 Thread Andrey Konovalov
KHWASAN will reuse a significant part of KASAN code, so move the common
parts to common.c without any functional changes.

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 mm/kasan/Makefile |   5 +-
 mm/kasan/common.c | 524 ++
 mm/kasan/kasan.c  | 493 +--
 mm/kasan/kasan.h  |   6 +
 4 files changed, 537 insertions(+), 491 deletions(-)
 create mode 100644 mm/kasan/common.c

diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
index 3289db38bc87..a6df14bffb6b 100644
--- a/mm/kasan/Makefile
+++ b/mm/kasan/Makefile
@@ -1,11 +1,14 @@
 # SPDX-License-Identifier: GPL-2.0
 KASAN_SANITIZE := n
+UBSAN_SANITIZE_common.o := n
 UBSAN_SANITIZE_kasan.o := n
 KCOV_INSTRUMENT := n
 
 CFLAGS_REMOVE_kasan.o = -pg
 # Function splitter causes unnecessary splits in __asan_load1/__asan_store1
 # see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
+
+CFLAGS_common.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
 CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
 
-obj-y := kasan.o report.o kasan_init.o quarantine.o
+obj-y := common.o kasan.o report.o kasan_init.o quarantine.o
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
new file mode 100644
index ..ebb48415e4cf
--- /dev/null
+++ b/mm/kasan/common.c
@@ -0,0 +1,524 @@
+/*
+ * This file contains common KASAN and KHWASAN code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <ryabinin@gmail.com>
+ *
+ * Some code borrowed from https://github.com/xairy/kasan-prototype by
+ *    Andrey Konovalov <andreyk...@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "kasan.h"
+#include "../slab.h"
+
+static inline int in_irqentry_text(unsigned long ptr)
+{
+   return (ptr >= (unsigned long)&__irqentry_text_start &&
+   ptr < (unsigned long)&__irqentry_text_end) ||
+   (ptr >= (unsigned long)&__softirqentry_text_start &&
+ptr < (unsigned long)&__softirqentry_text_end);
+}
+
+static inline void filter_irq_stacks(struct stack_trace *trace)
+{
+   int i;
+
+   if (!trace->nr_entries)
+   return;
+   for (i = 0; i < trace->nr_entries; i++)
+   if (in_irqentry_text(trace->entries[i])) {
+   /* Include the irqentry function into the stack. */
+   trace->nr_entries = i + 1;
+   break;
+   }
+}
+
+static inline depot_stack_handle_t save_stack(gfp_t flags)
+{
+   unsigned long entries[KASAN_STACK_DEPTH];
+   struct stack_trace trace = {
+   .nr_entries = 0,
+   .entries = entries,
+   .max_entries = KASAN_STACK_DEPTH,
+   .skip = 0
+   };
+
+   save_stack_trace(&trace);
+   filter_irq_stacks(&trace);
+   if (trace.nr_entries != 0 &&
+   trace.entries[trace.nr_entries-1] == ULONG_MAX)
+   trace.nr_entries--;
+
+   return depot_save_stack(&trace, flags);
+}
+
+void set_track(struct kasan_track *track, gfp_t flags)
+{
+   track->pid = current->pid;
+   track->stack = save_stack(flags);
+}
+
+void kasan_enable_current(void)
+{
+   current->kasan_depth++;
+}
+
+void kasan_disable_current(void)
+{
+   current->kasan_depth--;
+}
+
+void kasan_check_read(const volatile void *p, unsigned int size)
+{
+   check_memory_region((unsigned long)p, size, false, _RET_IP_);
+}
+EXPORT_SYMBOL(kasan_check_read);
+
+void kasan_check_write(const volatile void *p, unsigned int size)
+{
+   check_memory_region((unsigned long)p, size, true, _RET_IP_);
+}
+EXPORT_SYMBOL(kasan_check_write);
+
+#undef memset
+void *memset(void *addr, int c, size_t len)
+{
+   check_memory_region((unsigned long)addr, len, true, _RET_IP_);
+
+   return __memset(addr, c, len);
+}
+
+#undef memmove
+void *memmove(void *dest, const void *src, size_t len)
+{
+   check_memory_region((unsigned long)src, len, false, _RET_IP_);
+   check_memory_region((unsigned long)dest, len, true, _RET_IP_);
+
+   return __memmove(dest, src, len);
+}
+
+#undef memcpy
+void *memcpy(void *dest, const void *src, size_t len)
+{
+   check_memory_region((unsigned long)src, len, false, _RET_IP_);
+   check_memory_region((unsigned long)dest, len, true, _RET_IP_);
+
+   return __memcpy(dest, src, len);
+}
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory

[RFC PATCH v3 00/15] khwasan: kernel hardware assisted address sanitizer

2018-04-20 Thread Andrey Konovalov
- Untagged pointers in __kimg_to_phys, which is used by virt_to_phys.
- Untagged pointers in show_pte in fault handling logic.
- Untagged pointers passed to KVM.
- Added two reserved tag values: 0xFF and 0xFE.
- Used the reserved tag 0xFF to disable validity checking (to resolve the
  issue with pointer tag being lost after page_address + kmap usage).
- Used the reserved tag 0xFE to mark redzones and freed objects.
- Added mnemonics for esr manipulation in KHWASAN brk handler.
- Added a comment about the -recover flag.
- Some minor cleanups and fixes.
- Rebased onto 3215b9d5 (4.16-rc6+).
- Tested on real hardware (Odroid C2 board).
- Added better benchmarks.

Andrey Konovalov (15):
  khwasan: move common kasan and khwasan code to common.c
  khwasan: add CONFIG_KASAN_GENERIC and CONFIG_KASAN_HW
  khwasan, arm64: adjust shadow size for CONFIG_KASAN_HW
  khwasan: initialize shadow to 0xff
  khwasan, arm64: untag virt address in __kimg_to_phys
  khwasan, arm64: fix up fault handling logic
  khwasan: add tag related helper functions
  khwasan, arm64: enable top byte ignore for the kernel
  khwasan, mm: perform untagged pointers comparison in krealloc
  khwasan: split out kasan_report.c from report.c
  khwasan: add bug reporting routines
  khwasan: add hooks implementation
  khwasan, arm64: add brk handler for inline instrumentation
  khwasan, mm, arm64: tag non slab memory allocated via pagealloc
  khwasan: update kasan documentation

 Documentation/dev-tools/kasan.rst  | 212 +
 arch/arm64/Kconfig |   1 +
 arch/arm64/Makefile|   2 +-
 arch/arm64/include/asm/brk-imm.h   |   2 +
 arch/arm64/include/asm/memory.h|  35 +-
 arch/arm64/include/asm/pgtable-hwdef.h |   1 +
 arch/arm64/kernel/traps.c  |  69 ++-
 arch/arm64/mm/fault.c  |   3 +
 arch/arm64/mm/kasan_init.c |  18 +-
 arch/arm64/mm/proc.S   |   8 +-
 include/linux/compiler-clang.h |   5 +-
 include/linux/compiler-gcc.h   |   4 +
 include/linux/compiler.h   |   3 +-
 include/linux/kasan.h  |  50 ++-
 include/linux/mm.h |  29 ++
 include/linux/page-flags-layout.h  |  10 +
 lib/Kconfig.kasan  |  68 ++-
 mm/cma.c   |   1 +
 mm/kasan/Makefile  |   9 +-
 mm/kasan/common.c  | 576 +
 mm/kasan/kasan.c   | 493 +
 mm/kasan/kasan.h   |  94 +++-
 mm/kasan/kasan_report.c| 155 +++
 mm/kasan/khwasan.c | 163 +++
 mm/kasan/khwasan_report.c  |  60 +++
 mm/kasan/report.c  | 271 
 mm/page_alloc.c|   1 +
 mm/slab_common.c   |   2 +-
 mm/slub.c  |   2 +-
 scripts/Makefile.kasan |  27 +-
 30 files changed, 1558 insertions(+), 816 deletions(-)
 create mode 100644 mm/kasan/common.c
 create mode 100644 mm/kasan/kasan_report.c
 create mode 100644 mm/kasan/khwasan.c
 create mode 100644 mm/kasan/khwasan_report.c

-- 
2.17.0.484.g0c8726318c-goog



Re: Clang arm64 build is broken

2018-04-20 Thread Andrey Konovalov
On Fri, Apr 20, 2018 at 10:13 AM, Marc Zyngier wrote:
>> The issue is that
>> clang doesn't know about the "S" asm constraint. I reported this to
>> clang [2], and hopefully this will get fixed. In the meantime, would
>> it be possible to work around using the "S" constraint in the kernel?
>
> I have no idea, I've never used clang to build the kernel. Clang isn't
> really supported to build the arm64 kernel anyway (as you mention
> below), and working around clang deficiencies would mean that we live
> with the workaround forever. I'd rather enable clang once it is at
> feature parity with GCC.

The fact that there are some existing issues with building arm64
kernel with clang doesn't sound like a good justification for adding
new issues :)

However in this case I do believe that this is more of a bug in clang
that should be fixed.

>> While we're here, regarding the other issue with kvm [3], I didn't
>> receive any comments as to whether it makes sense to send the fix that
>> adds -fno-jump-tables flag when building kvm with clang.
>
> Is that the only thing missing? Are you sure that there is no other way
> for clang to generate absolute addresses that will then lead to a crash?
> Again, I'd rather make sure we have the full picture.

Well, I have tried applying that patch and running kvm tests that I
could find [1], and they passed (actually I think there was an issue
with one of them, but I saw the same thing when I tried running them
on a kernel built with GCC).

[1] https://www.linux-kvm.org/page/KVM-unit-tests
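
To illustrate the failure mode: a dense switch like the sketch below
(illustrative code, not taken from the kernel) is a typical candidate for
the compiler to lower into a jump table, i.e. a table of absolute code
addresses. Hyp code runs at a different VA than the one the kernel was
linked at, so an indirect branch through such a table goes wild unless
jump tables are disabled:

	/* Illustrative only: likely lowered to a jump table at -O2. */
	static int hyp_decode(unsigned int ec)
	{
		switch (ec) {
		case 0: return 1;
		case 1: return 2;
		case 2: return 4;
		case 3: return 8;
		case 4: return 16;
		case 5: return 32;
		default: return 0;
		}
	}

Building the hyp objects with -fno-jump-tables forces plain compare/branch
sequences instead.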


Clang arm64 build is broken

2018-04-19 Thread Andrey Konovalov
Hi Marc!

Your recent commit [1] broke clang build on arm64. The issue is that
clang doesn't know about the "S" asm constraint. I reported this to
clang [2], and hopefully this will get fixed. In the meantime, would
it be possible to work around using the "S" constraint in the kernel?

While we're here, regarding the other issue with kvm [3], I didn't
receive any comments as to whether it makes sense to send the fix that
adds -fno-jump-tables flag when building kvm with clang.

Thanks!

[1] 
https://github.com/torvalds/linux/commit/44a497abd621a71c645f06d3d545ae2f46448830

[2] https://github.com/ClangBuiltLinux/linux/issues/13

[3] https://lkml.org/lkml/2018/3/16/476
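
For context, "S" is the AArch64 GCC machine constraint for an absolute
symbolic address. A minimal illustration of the kind of construct involved
(hypothetical symbol, not the exact kernel code):

	extern char some_symbol[];

	static inline void *addr_of_some_symbol(void)
	{
		void *ptr;

		/* "S" makes %1 print as the bare symbol name */
		asm("adrp %0, %1\n\t"
		    "add  %0, %0, :lo12:%1"
		    : "=r" (ptr) : "S" (&some_symbol));
		return ptr;
	}

Clang at the time rejected the "S" constraint outright, hence the build
breakage.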


Re: [RFC PATCH v2 13/15] khwasan: add hooks implementation

2018-04-12 Thread Andrey Konovalov
On Thu, Apr 12, 2018 at 7:20 PM, Andrey Ryabinin wrote:
>> 1. Tag memory with a random tag in kasan_alloc_pages() and return a
>> tagged pointer from pagealloc.
>
> Tag memory with a random tag in kasan_alloc_pages() and store that tag in
> the page struct (that part is also in kasan_alloc_pages()).
> page_address(page) will retrieve that tag from struct page to return a
> tagged address.
>
> I've no idea what you mean by "returning a tagged pointer from pagealloc".
> Once again, the page allocator (__alloc_pages_nodemask()) returns a pointer
> to *struct page*, not the address in the linear mapping where that page is
> mapped (or not mapped at all if this is highmem).
> One has to call page_address()/kmap() to use that page.

Ah, that's what I've been missing.

OK, I'll do that.

Thanks!

>
>
>> 2. Restore the tag for the pointers returned from page_address for
>> !PageSlab() pages.
>>
>
> Right.
>
>> 3. Set the tag to 0xff for the pointers returned from page_address for
>> PageSlab() pages.
>>
>
> Right.
>
>> Is this correct?
>>
>> In 2 instead of storing the tag in page_struct, we can just recover it
>> from the shadow memory that corresponds to that page. What do you
>> think about this?
>
> Sounds ok. Don't see any problem with that.
>
>
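
A minimal sketch of that shadow-based recovery (hypothetical helper, using
set_tag() and kasan_mem_to_shadow() as defined elsewhere in this series):

	static void *tagged_page_address(struct page *page)
	{
		void *addr = lowmem_page_address(page);

		/* slab pages hold objects with many tags: opt out with 0xff */
		if (PageSlab(page))
			return set_tag(addr, 0xff);

		/* recover the page's tag from its own shadow byte */
		return set_tag(addr, *(u8 *)kasan_mem_to_shadow(addr));
	}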


Re: arm64 kvm built with clang doesn't boot

2018-04-12 Thread Andrey Konovalov
On Fri, Mar 16, 2018 at 3:31 PM, Mark Rutland <mark.rutl...@arm.com> wrote:
> On Fri, Mar 16, 2018 at 02:13:14PM +0000, Mark Rutland wrote:
>> On Fri, Mar 16, 2018 at 02:49:00PM +0100, Andrey Konovalov wrote:
>> > Hi!
>>
>> Hi,
>>
>> > I've recently tried to boot clang built kernel on real hardware
>> > (Odroid C2 board) instead of using a VM. The issue that I stumbled
>> > upon is that arm64 kvm built with clang doesn't boot.
>> >
>> > Adding -fno-jump-tables compiler flag to arch/arm64/kvm/* helps. There
>> > was a patch some time ago that did exactly that
>> > (https://patchwork.kernel.org/patch/10060381/), but it wasn't accepted
>> > AFAICT (see the discussion on that thread).
>> >
>> > What would be the best way to get this fixed?
>>
>> I think that patch is our best bet currently, but to save ourselves pain
>> in future it would be *really* nice if GCC and clang could provide an
>> option line -fno-absolute-addressing that would implicitly disable any
>> feature that would generate an absolute address as jump tables do.
>>
>> > I've also had to disable CONFIG_JUMP_LABEL to get the kernel boot
>> > (even without kvm enabled), but that might be a different (though
>> > related) issue.
>>
>> With v4.15 (and clang 5.0.0), I did not have to disable jump labels to
>> get a kernel booting on a Juno platform, though I did have to pass
>> -fno-jump-tables to the hyp code.
>
> FWIW, with that same compiler and patch applied atop of v4.16-rc4, and
> some bodges around clang not liking the rX register naming in the SMCCC
> code, I get a kernel that boots on my Juno, though I immediately hit a
> KASAN splat:
>
> [8.476766] ==================================================================
> [8.483990] BUG: KASAN: slab-out-of-bounds in __d_lookup_rcu+0x350/0x400
> [8.490664] Read of size 8 at addr 8009336e2a30 by task init/1

Hi Mark!

Just FYI, this should be fixed with https://reviews.llvm.org/D44981 +
https://patchwork.kernel.org/patch/10339103/

Thanks!


Re: [RFC PATCH v2 13/15] khwasan: add hooks implementation

2018-04-12 Thread Andrey Konovalov
On Tue, Apr 10, 2018 at 6:31 PM, Andrey Ryabinin
<aryabi...@virtuozzo.com> wrote:
>
>
> On 04/10/2018 07:07 PM, Andrey Konovalov wrote:
>> On Fri, Apr 6, 2018 at 2:27 PM, Andrey Ryabinin <aryabi...@virtuozzo.com> 
>> wrote:
>>> On 04/06/2018 03:14 PM, Andrey Konovalov wrote:
>>>> On Thu, Apr 5, 2018 at 3:02 PM, Andrey Ryabinin <aryabi...@virtuozzo.com> 
>>>> wrote:
>>>>> Nevertheless, this doesn't mean that we should ignore *all* accesses to 
>>>>> !slab memory.
>>>>
>>>> So you mean we need to find a way to ignore accesses via pointers
>>>> returned by page_address(), but still check accesses through all other
>>>> pointers tagged with 0xFF? I don't see an obvious way to do this. I'm
>>>> open to suggestions though.
>>>>
>>>
>>> I'm saying that we need to ignore accesses to slab objects if the pointer
>>> to the slab object was obtained via the page_address() + offset_in_page()
>>> trick, but not ignore anything else.
>>>
>>> So, save the tag somewhere in the page struct and poison the shadow with
>>> that tag. Make page_address() return a tagged address for all !PageSlab()
>>> pages. For PageSlab() pages page_address() should return a 0xff-tagged
>>> address, so we could ignore such accesses.
>>
>> Which pages do you mean by !PageSlab()?
>
> Literally the "PageSlab(page) == false" pages.
>
>> The ones that are allocated and freed by pagealloc, but mot managed by the 
>> slab allocator?
>
> Yes.
>
>> Perhaps we should then add tagging to the pagealloc hook instead?
>>
>
> Of course the tagging would be in kasan_alloc_pages(), where else that could 
> be? And instead of what?

I think I misunderstood your suggestion twice already :)

To make it clear, you're suggesting:

1. Tag memory with a random tag in kasan_alloc_pages() and return a
tagged pointer from pagealloc.

2. Restore the tag for the pointers returned from page_address for
!PageSlab() pages.

3. Set the tag to 0xff for the pointers returned from page_address for
PageSlab() pages.

Is this correct?

In 2 instead of storing the tag in page_struct, we can just recover it
from the shadow memory that corresponds to that page. What do you
think about this?


Re: [RFC PATCH v2 13/15] khwasan: add hooks implementation

2018-04-06 Thread Andrey Konovalov
On Thu, Apr 5, 2018 at 3:02 PM, Andrey Ryabinin <aryabi...@virtuozzo.com> wrote:
> On 04/04/2018 08:00 PM, Andrey Konovalov wrote:
>> On Wed, Apr 4, 2018 at 2:39 PM, Andrey Ryabinin <aryabi...@virtuozzo.com> 
>> wrote:
>>>>>
>>>>> You can save tag somewhere in page struct and make page_address() return 
>>>>> tagged address.
>>>>>
>>>>> I'm not sure it might be even possible to squeeze the tag into 
>>>>> page->flags on some configurations,
>>>>> see include/linux/page-flags-layout.h
>>>>
>>>> One page can contain multiple objects with different tags, so we would
>>>> need to save the tag for each of them.
>>>
>>> What do you mean? Slab page? The per-page tag is needed only for !PageSlab
>>> pages. For slab pages we have kmalloc/kmem_cache_alloc(), which already
>>> return a properly tagged address.
>>>
>>> But the page allocator returns a pointer to struct page. One has to call
>>> page_address(page) to use that page. Returning an 'ignore-me'-tagged
>>> address from page_address() makes the whole class of bugs invisible to
>>> KHWASAN. This is a serious downside compared to classic KASAN, which can
>>> detect misuses of the page allocator API.
>>
>> Yes, slab page. Here's an example:
>>
>> 1. do_get_write_access() allocates frozen_buffer with jbd2_alloc,
>> which calls kmem_cache_alloc, and then saves the result to
>> jh->b_frozen_data.
>>
>> 2. jbd2_journal_write_metadata_buffer() takes the value of
>> jh_in->b_frozen_data and calls virt_to_page() (and offset_in_page())
>> on it.
>>
>> 3. jbd2_journal_write_metadata_buffer() then calls kmap_atomic(),
>> which calls page_address(), on the resulting page address.
>>
>> The tag gets erased. The page belongs to slab and can contain multiple
>> objects with different tags.
>>
>
> I see. Ideally that kind of problem should be fixed by reworking/redesigning
> such code; however, jbd2_journal_write_metadata_buffer() is far from the only
> place which does that trick. Fixing all of them would probably be a huge
> task, so ignoring such accesses seems to be the only choice we have.
>
> Nevertheless, this doesn't mean that we should ignore *all* accesses to !slab 
> memory.

So you mean we need to find a way to ignore accesses via pointers
returned by page_address(), but still check accesses through all other
pointers tagged with 0xFF? I don't see an obvious way to do this. I'm
open to suggestions though.


Re: [RFC PATCH v2 13/15] khwasan: add hooks implementation

2018-04-04 Thread Andrey Konovalov
On Wed, Apr 4, 2018 at 2:39 PM, Andrey Ryabinin wrote:
>>>
>>> You can save tag somewhere in page struct and make page_address() return 
>>> tagged address.
>>>
>>> I'm not sure it might be even possible to squeeze the tag into page->flags 
>>> on some configurations,
>>> see include/linux/page-flags-layout.h
>>
>> One page can contain multiple objects with different tags, so we would
>> need to save the tag for each of them.
>
> What do you mean? Slab page? The per-page tag is needed only for !PageSlab
> pages. For slab pages we have kmalloc/kmem_cache_alloc(), which already
> return a properly tagged address.
>
> But the page allocator returns a pointer to struct page. One has to call
> page_address(page) to use that page. Returning an 'ignore-me'-tagged
> address from page_address() makes the whole class of bugs invisible to
> KHWASAN. This is a serious downside compared to classic KASAN, which can
> detect misuses of the page allocator API.

Yes, slab page. Here's an example:

1. do_get_write_access() allocates frozen_buffer with jbd2_alloc,
which calls kmem_cache_alloc, and then saves the result to
jh->b_frozen_data.

2. jbd2_journal_write_metadata_buffer() takes the value of
jh_in->b_frozen_data and calls virt_to_page() (and offset_in_page())
on it.

3. jbd2_journal_write_metadata_buffer() then calls kmap_atomic(),
which calls page_address(), on the resulting page address.

The tag gets erased. The page belongs to slab and can contain multiple
objects with different tags.
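
Condensed into code, the sequence looks roughly like this (simplified
sketch of the jbd2 path above, not verbatim kernel code):

	static void *round_trip(struct kmem_cache *cache)
	{
		void *obj = kmem_cache_alloc(cache, GFP_KERNEL); /* tagged */
		struct page *page = virt_to_page(obj);	/* tag is discarded */
		size_t off = offset_in_page(obj);

		/* kmap_atomic() boils down to page_address() on arm64 and
		 * returns the linear-map address with the native 0xff top
		 * byte: the original tag of obj is gone */
		return (char *)kmap_atomic(page) + off;
	}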

>>> I don't see any possible way of khwasan_enabled being 0 here.
>>
>> Can't kmem_cache_alloc be called for the temporary caches that are
>> used before the slab allocator and kasan are initialized?
>
> kasan_init() runs before allocators are initialized.
> slab allocator obviously has to be initialized before it can be used.

Checked the code, it seems you are right. Boot caches are created
after kasan_init() is called. I will remove khwasan_enabled.

Thanks!


Re: [RFC PATCH v2 13/15] khwasan: add hooks implementation

2018-04-03 Thread Andrey Konovalov
On Fri, Mar 30, 2018 at 7:47 PM, Andrey Ryabinin
<aryabi...@virtuozzo.com> wrote:
> On 03/23/2018 09:05 PM, Andrey Konovalov wrote:
>> This commit adds KHWASAN hooks implementation.
>>
>> 1. When a new slab cache is created, KHWASAN rounds up the size of the
>>objects in this cache to KASAN_SHADOW_SCALE_SIZE (== 16).
>>
>> 2. On each kmalloc KHWASAN generates a random tag, sets the shadow memory
>>that corresponds to this object to this tag, and embeds this tag value
>>into the top byte of the returned pointer.
>>
>> 3. On each kfree KHWASAN poisons the shadow memory with a random tag to
>>allow detection of use-after-free bugs.
>>
>> The rest of the logic of the hook implementation is very similar to
>> the one provided by KASAN. KHWASAN saves allocation and free stack metadata
>> to the slab object the same way KASAN does.
>>
>> Signed-off-by: Andrey Konovalov <andreyk...@google.com>
>> ---
>>  mm/kasan/khwasan.c | 200 -
>>  1 file changed, 197 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/kasan/khwasan.c b/mm/kasan/khwasan.c
>> index da4b17997c71..e8bed5a078c7 100644
>> --- a/mm/kasan/khwasan.c
>> +++ b/mm/kasan/khwasan.c
>> @@ -90,69 +90,260 @@ void *khwasan_reset_tag(const void *addr)
>>   return reset_tag(addr);
>>  }
>>
>> +void kasan_poison_shadow(const void *address, size_t size, u8 value)
>> +{
>> + void *shadow_start, *shadow_end;
>> +
>> + /* Perform shadow offset calculation based on untagged address */
>> + address = reset_tag(address);
>> +
>> + shadow_start = kasan_mem_to_shadow(address);
>> + shadow_end = kasan_mem_to_shadow(address + size);
>> +
>> + memset(shadow_start, value, shadow_end - shadow_start);
>> +}
>> +
>>  void kasan_unpoison_shadow(const void *address, size_t size)
>>  {
>> + /* KHWASAN only allows 16-byte granularity */
>> + size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
>> + kasan_poison_shadow(address, size, get_tag(address));
>>  }
>>
>
>
> This is way too much copy-paste/code duplication. Ideally, you should have
> only the check_memory_region() stuff separated; the rest (poisoning/
> unpoisoning, slab management) should be in the common.c code.
>
> So it should be something like this:
>
> in kasan.h
> ...
> #ifdef CONFIG_KASAN_CLASSIC
> #define KASAN_FREE_PAGE 0xFF  /* page was freed */
> #define KASAN_PAGE_REDZONE  0xFE  /* redzone for kmalloc_large allocations */
> #define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
> #define KASAN_KMALLOC_FREE  0xFB  /* object was freed (kmem_cache_free/kfree) */
> #else
> #define KASAN_FREE_PAGE 0xFE
> #define KASAN_PAGE_REDZONE  0xFE
> #define KASAN_KMALLOC_REDZONE   0xFE
> #define KASAN_KMALLOC_FREE  0xFE
> #endif
>
> ...
>
> #ifdef CONFIG_KASAN_CLASSIC
> static inline void *reset_tag(const void *addr)
> {
> return (void *)addr;
> }
> static inline u8 get_tag(const void *addr)
> {
> return 0;
> }
> #else
> static inline u8 get_tag(const void *addr)
> {
> return (u8)((u64)addr >> KHWASAN_TAG_SHIFT);
> }
>
> static inline void *reset_tag(const void *addr)
> {
> return set_tag(addr, KHWASAN_TAG_KERNEL);
> }
> #endif
>
>
> in kasan/common.c:
>
>
> void kasan_poison_shadow(const void *address, size_t size, u8 value)
> {
> void *shadow_start, *shadow_end;
>
> address = reset_tag(address);
>
> shadow_start = kasan_mem_to_shadow(address);
> shadow_end = kasan_mem_to_shadow(address + size);
>
> memset(shadow_start, value, shadow_end - shadow_start);
> }
>
> void kasan_unpoison_shadow(const void *address, size_t size)
> {
>
> kasan_poison_shadow(address, size, get_tag(address));
>
> if (size & KASAN_SHADOW_MASK) {
> u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
>
> if (IS_ENABLED(CONFIG_KASAN_TAGS))
> *shadow = get_tag(address);
> else
> *shadow = size & KASAN_SHADOW_MASK;
> }
> }
>
> void kasan_free_pages(struct page *page, unsigned int order)
> {
> if (likely(!PageHighMem(page)))
> kasan_poison_shadow(page_address(page),
> PAGE_SIZE << order,
> KASAN_FREE_PAGE);
> }
>
> etc.

OK, I'll rework this.

Re: [RFC PATCH v2 08/15] khwasan: add tag related helper functions

2018-04-03 Thread Andrey Konovalov
On Fri, Mar 30, 2018 at 6:13 PM, Andrey Ryabinin
<aryabi...@virtuozzo.com> wrote:
>
>
> On 03/23/2018 09:05 PM, Andrey Konovalov wrote:
>
>> diff --git a/mm/kasan/khwasan.c b/mm/kasan/khwasan.c
>> index 24d75245e9d0..da4b17997c71 100644
>> --- a/mm/kasan/khwasan.c
>> +++ b/mm/kasan/khwasan.c
>> @@ -39,6 +39,57 @@
>>  #include "kasan.h"
>>  #include "../slab.h"
>>
>> +int khwasan_enabled;
>
> This is not used (set, but never used).

It's used in the "khwasan: add hooks implementation" patch. I'll move
its declaration there as well.

Thanks!

>
>> +
>> +static DEFINE_PER_CPU(u32, prng_state);
>> +
>> +void khwasan_init(void)
>> +{
>> + int cpu;
>> +
>> + for_each_possible_cpu(cpu) {
>> + per_cpu(prng_state, cpu) = get_random_u32();
>> + }
>> + WRITE_ONCE(khwasan_enabled, 1);
>> +}
>> +
>


Re: [RFC PATCH v2 05/15] khwasan: initialize shadow to 0xff

2018-04-03 Thread Andrey Konovalov
On Fri, Mar 30, 2018 at 6:07 PM, Andrey Ryabinin
<aryabi...@virtuozzo.com> wrote:
> On 03/23/2018 09:05 PM, Andrey Konovalov wrote:
>> A KHWASAN shadow memory cell contains a memory tag that corresponds to
>> the tag in the top byte of the pointer that points to that memory. The
>> native top byte value of kernel pointers is 0xff, so with KHWASAN we
>> need to initialize shadow memory to 0xff. This commit does that.
>>
>> Signed-off-by: Andrey Konovalov <andreyk...@google.com>
>> ---
>>  arch/arm64/mm/kasan_init.c | 11 ++-
>>  include/linux/kasan.h  |  8 
>>  mm/kasan/common.c  |  7 +++
>>  3 files changed, 25 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
>> index dabfc1ecda3d..d4bceba60010 100644
>> --- a/arch/arm64/mm/kasan_init.c
>> +++ b/arch/arm64/mm/kasan_init.c
>> @@ -90,6 +90,10 @@ static void __init kasan_pte_populate(pmd_t *pmdp, 
>> unsigned long addr,
>>   do {
>>   phys_addr_t page_phys = early ? __pa_symbol(kasan_zero_page)
>> : kasan_alloc_zeroed_page(node);
>> +#if KASAN_SHADOW_INIT != 0
>> + if (!early)
>> + memset(__va(page_phys), KASAN_SHADOW_INIT, PAGE_SIZE);
>> +#endif
>
> Less ugly way to do the same:
> if (KASAN_SHADOW_INIT != 0 && !early)
> memset(__va(page_phys), KASAN_SHADOW_INIT, PAGE_SIZE);
>
>
> But the right approach here would be allocating uninitialized memory (see
> memblock_virt_alloc_try_nid_raw()) and doing
> "if (!early) memset(.., KASAN_SHADOW_INIT, ..)" afterwards.

Will do!
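
A sketch of that approach, using memblock_virt_alloc_try_nid_raw() as
suggested (roughly what the next version could do):

	static phys_addr_t __init kasan_alloc_raw_page(int node)
	{
		void *p = memblock_virt_alloc_try_nid_raw(PAGE_SIZE, PAGE_SIZE,
						__pa(MAX_DMA_ADDRESS),
						MEMBLOCK_ALLOC_ACCESSIBLE,
						node);
		return __pa(p);
	}

	/* and in the caller, for non-early pages only:
	 *	if (!early)
	 *		memset(__va(page_phys), KASAN_SHADOW_INIT, PAGE_SIZE);
	 */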

>
>
>>   next = addr + PAGE_SIZE;
>>   set_pte(ptep, pfn_pte(__phys_to_pfn(page_phys), PAGE_KERNEL));
>>   } while (ptep++, addr = next, addr != end && 
>> pte_none(READ_ONCE(*ptep)));
>> @@ -139,6 +143,11 @@ asmlinkage void __init kasan_early_init(void)
>>   KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
>>   BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
>>   BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
>> +
>> +#if KASAN_SHADOW_INIT != 0
>> + memset(kasan_zero_page, KASAN_SHADOW_INIT, PAGE_SIZE);
>> +#endif
>> +
>
>  if (KASAN_SHADOW_INIT)
> memset(...)
>
> Note that, if poisoning of stack variables works in the same fashion as
> classic KASAN (compiler-generated code writes to shadow in the function
> prologue), then the content of this page will be ruined very fast, which
> makes this initialization questionable.

I think I agree with you on this. Since this page immediately gets
dirty and we ignore all reports until proper shadow is set up anyway,
there's no need to initialize it.

>
>
>
>>   kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE,
>>  true);
>>  }
>> @@ -235,7 +244,7 @@ void __init kasan_init(void)
>>   set_pte(&kasan_zero_pte[i],
>>   pfn_pte(sym_to_pfn(&kasan_zero_page), PAGE_KERNEL_RO));
>>
>> - memset(kasan_zero_page, 0, PAGE_SIZE);
>> + memset(kasan_zero_page, KASAN_SHADOW_INIT, PAGE_SIZE);
>>   cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
>>
>>   /* At this point kasan is fully initialized. Enable error messages */
>> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
>> index 3c45e273a936..700734dff218 100644
>> --- a/include/linux/kasan.h
>> +++ b/include/linux/kasan.h
>> @@ -139,6 +139,8 @@ static inline size_t kasan_metadata_size(struct 
>> kmem_cache *cache) { return 0; }
>>
>>  #ifdef CONFIG_KASAN_CLASSIC
>>
>> +#define KASAN_SHADOW_INIT 0
>> +
>>  void kasan_cache_shrink(struct kmem_cache *cache);
>>  void kasan_cache_shutdown(struct kmem_cache *cache);
>>
>> @@ -149,4 +151,10 @@ static inline void kasan_cache_shutdown(struct 
>> kmem_cache *cache) {}
>>
>>  #endif /* CONFIG_KASAN_CLASSIC */
>>
>> +#ifdef CONFIG_KASAN_TAGS
>> +
>> +#define KASAN_SHADOW_INIT 0xFF
>> +
>> +#endif /* CONFIG_KASAN_TAGS */
>> +
>>  #endif /* LINUX_KASAN_H */
>> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
>> index 08f6c8cb9f84..f4ccb9425655 100644
>> --- a/mm/kasan/common.c
>> +++ b/mm/kasan/common.c
>> @@ -253,6 +253,9 @@ int kasan_module_alloc(void *addr, size_t size)
>>   __builtin_return_address(0));
>>
>>   if (ret) {
>> +#i

Re: [RFC PATCH v2 03/15] khwasan: add CONFIG_KASAN_CLASSIC and CONFIG_KASAN_TAGS

2018-03-27 Thread Andrey Konovalov
On Sat, Mar 24, 2018 at 9:43 AM, Ingo Molnar <mi...@kernel.org> wrote:
>
> * Andrey Konovalov <andreyk...@google.com> wrote:
>
>> This commit splits the current CONFIG_KASAN config option into two:
>> 1. CONFIG_KASAN_CLASSIC, that enables the classic KASAN version (the one
>>that exists now);
>> 2. CONFIG_KASAN_TAGS, that enables KHWASAN.
>
> Sorry, but this is pretty obscure naming scheme that doesn't explain the 
> primary
> difference between these KASAN models to users: that the first one is a pure
> software implementation and the other is hardware-assisted.
>
> Reminds me of the transparency of galactic buerocracy in "The Hitchhiker's 
> Guide
> to the Galaxy":
>
>   “But look, you found the notice, didn’t you?”
>   “Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked 
> filing
>cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware 
> of the
>Leopard.”
>
> I'd suggest something more expressive, such as:
>
> CONFIG_KASAN
>   CONFIG_KASAN_GENERIC
>   CONFIG_KASAN_HW_ASSIST
>
> or so?
>
> The 'generic' variant will basically run on any CPU. The 'hardware assisted' 
> one
> needs support from the CPU.
>
> The following ones might also work:
>
>CONFIG_KASAN_HWASSIST
>CONFIG_KASAN_HW_TAGS
>CONFIG_KASAN_HWTAGS
>
> ... or simply CONFIG_KASAN_SW/CONFIG_KASAN_HW.
>
> If other types of KASAN hardware acceleration are implemented in the future 
> then
> the CONFIG_KASAN_HW namespace can be extended:
>
> CONFIG_KASAN_HW_TAGS
> CONFIG_KASAN_HW_KEYS
> etc.

How about these two:

CONFIG_KASAN_GENERIC
CONFIG_KASAN_HW

?

Shorter config name looks better to me and I think it makes sense to
name the new config just HW, as there's only one HW implementation
right now. When (and if) there are more, we can expand the config name
as you suggested (CONFIG_KASAN_HW_TAGS, CONFIG_KASAN_HW_KEYS, etc).

>
>> Both CONFIG_KASAN_CLASSIC and CONFIG_KASAN_CLASSIC support both
>> CONFIG_KASAN_INLINE and CONFIG_KASAN_OUTLINE instrumentation modes.
>
> It would be very surprising if that wasn't so!
>
> Or did you mean 'Both CONFIG_KASAN_CLASSIC and CONFIG_KASAN_TAGS'! ;-)
>
> Thanks,
>
> Ingo


Re: [RFC PATCH v2 11/15] khwasan, mm: perform untagged pointers comparison in krealloc

2018-03-27 Thread Andrey Konovalov
On Sat, Mar 24, 2018 at 9:29 AM, Ingo Molnar <mi...@kernel.org> wrote:
>
> * Andrey Konovalov <andreyk...@google.com> wrote:
>
>> The krealloc function checks whether the same buffer was reused or a new one
>> allocated by comparing kernel pointers. KHWASAN changes memory tag on the
>> krealloc'ed chunk of memory and therefore also changes the pointer tag of
>> the returned pointer. Therefore we need to perform comparison on untagged
>> (with tags reset) pointers to check whether it's the same memory region or
>> not.
>>
>> Signed-off-by: Andrey Konovalov <andreyk...@google.com>
>> ---
>>  mm/slab_common.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/mm/slab_common.c b/mm/slab_common.c
>> index a33e61315ca6..5911f2194cf7 100644
>> --- a/mm/slab_common.c
>> +++ b/mm/slab_common.c
>> @@ -1494,7 +1494,7 @@ void *krealloc(const void *p, size_t new_size, gfp_t 
>> flags)
>>   }
>>
>>   ret = __do_krealloc(p, new_size, flags);
>> - if (ret && p != ret)
>> + if (ret && khwasan_reset_tag(p) != khwasan_reset_tag(ret))
>>   kfree(p);
>
> Small nit:
>
> If 'reset' here means an all zeroes tag (upper byte) then khwasan_clear_tag()
> might be a slightly easier to read primitive?

'Reset' means to set the upper byte to the value that is native for
kernel pointers, and that is 0xFF. So it sets the tag to all ones, not
all zeroes. I can still rename it to khwasan_clear_tag(), if you think
that makes sense in this case as well.
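
In other words, whatever the name, the comparison the krealloc patch needs
boils down to (sketch):

	static bool same_object(const void *a, const void *b)
	{
		/* a and b may carry different tags for the same memory */
		return khwasan_reset_tag(a) == khwasan_reset_tag(b);
	}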


[RFC PATCH v2 15/15] khwasan: update kasan documentation

2018-03-23 Thread Andrey Konovalov
This patch updates KASAN documentation to reflect the addition of KHWASAN.

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 Documentation/dev-tools/kasan.rst | 212 +-
 1 file changed, 122 insertions(+), 90 deletions(-)

diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index f7a18f274357..a817f4c4285c 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -8,11 +8,18 @@ KernelAddressSANitizer (KASAN) is a dynamic memory error detector. It provides
 a fast and comprehensive solution for finding use-after-free and out-of-bounds
 bugs.
 
-KASAN uses compile-time instrumentation for checking every memory access,
-therefore you will need a GCC version 4.9.2 or later. GCC 5.0 or later is
-required for detection of out-of-bounds accesses to stack or global variables.
+KASAN has two modes: classic KASAN (a classic version, similar to user space
+ASan) and KHWASAN (a version based on memory tagging, similar to user space
+HWASan).
 
-Currently KASAN is supported only for the x86_64 and arm64 architectures.
+KASAN uses compile-time instrumentation to insert validity checks before every
+memory access, and therefore requires a compiler version that supports that.
+For classic KASAN you need GCC version 4.9.2 or later. GCC 5.0 or later is
+required for detection of out-of-bounds accesses on stack and global variables.
+TODO: compiler requirements for KHWASAN
+
+Currently classic KASAN is supported for the x86_64, arm64 and xtensa
+architectures, and KHWASAN is supported only for arm64.
 
 Usage
 -
@@ -21,12 +28,14 @@ To enable KASAN configure kernel with::
 
  CONFIG_KASAN = y
 
-and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and
-inline are compiler instrumentation types. The former produces smaller binary
-the latter is 1.1 - 2 times faster. Inline instrumentation requires a GCC
+and choose between CONFIG_KASAN_CLASSIC (to enable classic KASAN) and
+CONFIG_KASAN_TAGS (to enable KHWASAN). You also need to choose between
+CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and inline are compiler
+instrumentation types. The former produces a smaller binary while the latter
+is 1.1 - 2 times faster. For classic KASAN inline instrumentation requires GCC
 version 5.0 or later.
 
-KASAN works with both SLUB and SLAB memory allocators.
+Both KASAN modes work with both SLUB and SLAB memory allocators.
 For better bug detection and nicer reporting, enable CONFIG_STACKTRACE.
 
 To disable instrumentation for specific files or directories, add a line
@@ -43,85 +52,80 @@ similar to the following to the respective kernel Makefile:
 Error reports
 ~
 
-A typical out of bounds access report looks like this::
+A typical out-of-bounds access classic KASAN report looks like this::
 
 ==================================================================
-BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3
-Write of size 1 by task modprobe/1689
-=============================================================================
-BUG kmalloc-128 (Not tainted): kasan error
------------------------------------------------------------------------------
-
-Disabling lock debugging due to kernel taint
-INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689
- __slab_alloc+0x4b4/0x4f0
- kmem_cache_alloc_trace+0x10b/0x190
- kmalloc_oob_right+0x3d/0x75 [test_kasan]
- init_module+0x9/0x47 [test_kasan]
- do_one_initcall+0x99/0x200
- load_module+0x2cb3/0x3b20
- SyS_finit_module+0x76/0x80
- system_call_fastpath+0x12/0x17
-INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080
-INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720
-
-Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
-Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
-Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
-Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
-Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
-Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
-Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
-Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
-Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5  kkkkkkkkkkkkkkk.
-Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc                          ........
-Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ
-CPU:

[RFC PATCH v2 14/15] khwasan, arm64: add brk handler for inline instrumentation

2018-03-23 Thread Andrey Konovalov
KHWASAN inline instrumentation mode (which embeds checks of shadow memory
into the generated code, instead of inserting a callback) generates a brk
instruction when a tag mismatch is detected.

This commit adds a KHWASAN brk handler that decodes the immediate value
passed to the brk instruction (to extract information about the memory
access that triggered the mismatch), reads the register values (x0 contains
the guilty address) and reports the bug.
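
A worked decode under the encoding used below (illustrative esr value):

	/*
	 * esr = 0xf2000912: brk immediate 0x912 = KHWASAN_BRK_IMM | 0x12
	 *
	 *   recover = 0x12 & KHWASAN_ESR_RECOVER -> 0    (no recovery)
	 *   write   = 0x12 & KHWASAN_ESR_WRITE   -> set  (a write access)
	 *   size    = 1 << (0x12 & KHWASAN_ESR_SIZE_MASK) = 4 bytes
	 */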

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 arch/arm64/include/asm/brk-imm.h |  2 ++
 arch/arm64/kernel/traps.c| 61 
 2 files changed, 63 insertions(+)

diff --git a/arch/arm64/include/asm/brk-imm.h b/arch/arm64/include/asm/brk-imm.h
index ed693c5bcec0..e4a7013321dc 100644
--- a/arch/arm64/include/asm/brk-imm.h
+++ b/arch/arm64/include/asm/brk-imm.h
@@ -16,10 +16,12 @@
  * 0x400: for dynamic BRK instruction
  * 0x401: for compile time BRK instruction
  * 0x800: kernel-mode BUG() and WARN() traps
+ * 0x9xx: KHWASAN trap (allowed values 0x900 - 0x9ff)
  */
 #define FAULT_BRK_IMM  0x100
 #define KGDB_DYN_DBG_BRK_IMM   0x400
 #define KGDB_COMPILED_DBG_BRK_IMM  0x401
 #define BUG_BRK_IMM0x800
+#define KHWASAN_BRK_IMM0x900
 
 #endif
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index eb2d15147e8d..9d9ca4eb6d2f 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -35,6 +35,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -771,6 +772,59 @@ static struct break_hook bug_break_hook = {
.fn = bug_handler,
 };
 
+#ifdef CONFIG_KASAN_TAGS
+
+#define KHWASAN_ESR_RECOVER0x20
+#define KHWASAN_ESR_WRITE  0x10
+#define KHWASAN_ESR_SIZE_MASK  0x0f
+#define KHWASAN_ESR_SIZE(esr)  (1 << ((esr) & KHWASAN_ESR_SIZE_MASK))
+
+static int khwasan_handler(struct pt_regs *regs, unsigned int esr)
+{
+   bool recover = esr & KHWASAN_ESR_RECOVER;
+   bool write = esr & KHWASAN_ESR_WRITE;
+   size_t size = KHWASAN_ESR_SIZE(esr);
+   u64 addr = regs->regs[0];
+   u64 pc = regs->pc;
+
+   if (user_mode(regs))
+   return DBG_HOOK_ERROR;
+
+   khwasan_report(addr, size, write, pc);
+
+   /*
+* The instrumentation allows controlling whether we can proceed after
+* a crash was detected. This is done by passing the -recover flag to
+* the compiler. Disabling recovery allows to generate more compact
+* code.
+*
+* Unfortunately disabling recovery doesn't work for the kernel right
+* now. KHWASAN reporting is disabled in some contexts (for example when
+* the allocator accesses slab object metadata; same is true for KASAN;
+* this is controlled by current->kasan_depth). All these accesses are
+* detected by the tool, even though the reports for them are not
+* printed.
+*
+* This is something that might be fixed at some point in the future.
+*/
+   if (!recover)
+   die("Oops - KHWASAN", regs, 0);
+
+   /* If thread survives, skip over the BUG instruction and continue: */
+   arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
+   return DBG_HOOK_HANDLED;
+}
+
+#define KHWASAN_ESR_VAL (0xf2000000 | KHWASAN_BRK_IMM)
+#define KHWASAN_ESR_MASK 0xffffff00
+
+static struct break_hook khwasan_break_hook = {
+   .esr_val = KHWASAN_ESR_VAL,
+   .esr_mask = KHWASAN_ESR_MASK,
+   .fn = khwasan_handler,
+};
+#endif
+
 /*
  * Initial handler for AArch64 BRK exceptions
  * This handler only used until debug_traps_init().
@@ -778,6 +832,10 @@ static struct break_hook bug_break_hook = {
 int __init early_brk64(unsigned long addr, unsigned int esr,
struct pt_regs *regs)
 {
+#ifdef CONFIG_KASAN_TAGS
+   if ((esr & KHWASAN_ESR_MASK) == KHWASAN_ESR_VAL)
+   return khwasan_handler(regs, esr) != DBG_HOOK_HANDLED;
+#endif
return bug_handler(regs, esr) != DBG_HOOK_HANDLED;
 }
 
@@ -785,4 +843,7 @@ int __init early_brk64(unsigned long addr, unsigned int esr,
 void __init trap_init(void)
 {
register_break_hook(&bug_break_hook);
+#ifdef CONFIG_KASAN_TAGS
+   register_break_hook(&khwasan_break_hook);
+#endif
 }
-- 
2.17.0.rc0.231.g781580f067-goog



[RFC PATCH v2 13/15] khwasan: add hooks implementation

2018-03-23 Thread Andrey Konovalov
This commit adds KHWASAN hooks implementation.

1. When a new slab cache is created, KHWASAN rounds up the size of the
   objects in this cache to KASAN_SHADOW_SCALE_SIZE (== 16).

2. On each kmalloc KHWASAN generates a random tag, sets the shadow memory
   that corresponds to this object to this tag, and embeds this tag value
   into the top byte of the returned pointer.

3. On each kfree KHWASAN poisons the shadow memory with a random tag to
   allow detection of use-after-free bugs.

The rest of the logic of the hook implementation is very similar to
the one provided by KASAN. KHWASAN saves allocation and free stack metadata
to the slab object the same way KASAN does.
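
Condensed, the tag check that the hooks below implement is (sketch built
from the helpers added in the previous patches):

	static bool khwasan_access_ok(const void *addr, size_t size)
	{
		u8 tag = get_tag(addr);
		const void *untagged = reset_tag(addr);
		u8 *shadow = (u8 *)kasan_mem_to_shadow(untagged);
		u8 *last = (u8 *)kasan_mem_to_shadow(untagged + size - 1);

		for (; shadow <= last; shadow++)
			if (*shadow != tag)
				return false;
		return true;
	}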

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 mm/kasan/khwasan.c | 200 -
 1 file changed, 197 insertions(+), 3 deletions(-)

diff --git a/mm/kasan/khwasan.c b/mm/kasan/khwasan.c
index da4b17997c71..e8bed5a078c7 100644
--- a/mm/kasan/khwasan.c
+++ b/mm/kasan/khwasan.c
@@ -90,69 +90,260 @@ void *khwasan_reset_tag(const void *addr)
return reset_tag(addr);
 }
 
+void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+   void *shadow_start, *shadow_end;
+
+   /* Perform shadow offset calculation based on untagged address */
+   address = reset_tag(address);
+
+   shadow_start = kasan_mem_to_shadow(address);
+   shadow_end = kasan_mem_to_shadow(address + size);
+
+   memset(shadow_start, value, shadow_end - shadow_start);
+}
+
 void kasan_unpoison_shadow(const void *address, size_t size)
 {
+   /* KHWASAN only allows 16-byte granularity */
+   size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
+   kasan_poison_shadow(address, size, get_tag(address));
 }
 
 void check_memory_region(unsigned long addr, size_t size, bool write,
unsigned long ret_ip)
 {
+   u8 tag;
+   u8 *shadow_first, *shadow_last, *shadow;
+   void *untagged_addr;
+
+   tag = get_tag((const void *)addr);
+
+   /* Ignore accesses for pointers tagged with 0xff (native kernel
+* pointer tag) to suppress false positives caused by kmap.
+*
+* Some kernel code was written to account for archs that don't keep
+* high memory mapped all the time, but rather map and unmap particular
+* pages when needed. Instead of storing a pointer to the kernel memory,
+* this code saves the address of the page structure and offset within
+* that page for later use. Those pages are then mapped and unmapped
+* with kmap/kunmap when necessary and virt_to_page is used to get the
+* virtual address of the page. For arm64 (that keeps the high memory
+* mapped all the time), kmap is turned into a page_address call.
+*
+* The issue is that with use of the page_address + virt_to_page
+* sequence the top byte value of the original pointer gets lost (gets
+* set to 0xff).
+*/
+   if (tag == 0xff)
+   return;
+
+   untagged_addr = reset_tag((const void *)addr);
+   shadow_first = kasan_mem_to_shadow(untagged_addr);
+   shadow_last = kasan_mem_to_shadow(untagged_addr + size - 1);
+
+   for (shadow = shadow_first; shadow <= shadow_last; shadow++) {
+   if (*shadow != tag) {
+   khwasan_report(addr, size, write, ret_ip);
+   return;
+   }
+   }
 }
 
 void kasan_free_pages(struct page *page, unsigned int order)
 {
+   if (likely(!PageHighMem(page)))
+   kasan_poison_shadow(page_address(page),
+   PAGE_SIZE << order,
+   KHWASAN_TAG_INVALID);
 }
 
 void kasan_cache_create(struct kmem_cache *cache, size_t *size,
slab_flags_t *flags)
 {
+   int orig_size = *size;
+
+   cache->kasan_info.alloc_meta_offset = *size;
+   *size += sizeof(struct kasan_alloc_meta);
+
+   if (*size % KASAN_SHADOW_SCALE_SIZE != 0)
+   *size = round_up(*size, KASAN_SHADOW_SCALE_SIZE);
+
+
+   if (*size > KMALLOC_MAX_SIZE) {
+   *size = orig_size;
+   return;
+   }
+
+   cache->align = round_up(cache->align, KASAN_SHADOW_SCALE_SIZE);
+
+   *flags |= SLAB_KASAN;
 }
 
 void kasan_poison_slab(struct page *page)
 {
+   kasan_poison_shadow(page_address(page),
+   PAGE_SIZE << compound_order(page),
+   KHWASAN_TAG_INVALID);
 }
 
 void kasan_poison_object_data(struct kmem_cache *cache, void *object)
 {
+   kasan_poison_shadow(object,
+   round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE),
+   KHWASAN_TAG_INVALID);
 }
 
 void *kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
 {
+   if (!READ_ONCE(khwasan_enabled))
+   return object;
+   object 

[RFC PATCH v2 11/15] khwasan, mm: perform untagged pointers comparison in krealloc

2018-03-23 Thread Andrey Konovalov
The krealloc function checks whether the same buffer was reused or a new one
allocated by comparing kernel pointers. KHWASAN changes memory tag on the
krealloc'ed chunk of memory and therefore also changes the pointer tag of
the returned pointer. Therefore we need to perform comparison on untagged
(with tags reset) pointers to check whether it's the same memory region or
not.

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 mm/slab_common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index a33e61315ca6..5911f2194cf7 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1494,7 +1494,7 @@ void *krealloc(const void *p, size_t new_size, gfp_t flags)
}
 
ret = __do_krealloc(p, new_size, flags);
-   if (ret && p != ret)
+   if (ret && khwasan_reset_tag(p) != khwasan_reset_tag(ret))
kfree(p);
 
return ret;
-- 
2.17.0.rc0.231.g781580f067-goog



[RFC PATCH v2 12/15] khwasan: add bug reporting routines

2018-03-23 Thread Andrey Konovalov
This commit adds routines that print KHWASAN error reports. They are
quite similar to KASAN's; the differences are:

1. The way KHWASAN finds the first bad shadow cell (with a mismatching
   tag). KHWASAN compares memory tags from the shadow memory to the pointer
   tag.

2. KHWASAN reports all bugs with the "KASAN: invalid-access" header. This
   is done so that various external tools that already parse the kernel logs
   looking for KASAN reports wouldn't need to be changed.
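
For reference, the resulting header has this shape (illustrative values):

	BUG: KASAN: invalid-access in foo_function+0x10/0x20
	Read of size 8 at addr abff000012345678 by task bar/123
	Pointer tag: [ab], memory tag: [fe]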

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 include/linux/kasan.h |  3 ++
 mm/kasan/report.c | 88 ++-
 2 files changed, 82 insertions(+), 9 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index a2869464a8be..54e7c437dc8f 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -161,6 +161,9 @@ void *khwasan_set_tag(const void *addr, u8 tag);
 u8 khwasan_get_tag(const void *addr);
 void *khwasan_reset_tag(const void *ptr);
 
+void khwasan_report(unsigned long addr, size_t size, bool write,
+   unsigned long ip);
+
 #else /* CONFIG_KASAN_TAGS */
 
 static inline void khwasan_init(void) { }
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 5c169aa688fd..ed17168a083e 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -51,10 +51,9 @@ static const void *find_first_bad_addr(const void *addr, size_t size)
return first_bad_addr;
 }
 
-static bool addr_has_shadow(struct kasan_access_info *info)
+static bool addr_has_shadow(const void *addr)
 {
-   return (info->access_addr >=
-   kasan_shadow_to_mem((void *)KASAN_SHADOW_START));
+   return (addr >= kasan_shadow_to_mem((void *)KASAN_SHADOW_START));
 }
 
 static const char *get_shadow_bug_type(struct kasan_access_info *info)
@@ -127,15 +126,14 @@ static const char *get_wild_bug_type(struct kasan_access_info *info)
 
 static const char *get_bug_type(struct kasan_access_info *info)
 {
-   if (addr_has_shadow(info))
+   if (addr_has_shadow(info->access_addr))
return get_shadow_bug_type(info);
return get_wild_bug_type(info);
 }
 
-static void print_error_description(struct kasan_access_info *info)
+static void print_error_description(struct kasan_access_info *info,
+   const char *bug_type)
 {
-   const char *bug_type = get_bug_type(info);
-
pr_err("BUG: KASAN: %s in %pS\n",
bug_type, (void *)info->ip);
pr_err("%s of size %zu at addr %px by task %s/%d\n",
@@ -345,10 +343,10 @@ static void kasan_report_error(struct kasan_access_info *info)
 
kasan_start_report();
 
-   print_error_description(info);
+   print_error_description(info, get_bug_type(info));
pr_err("\n");
 
-   if (!addr_has_shadow(info)) {
+   if (!addr_has_shadow(info->access_addr)) {
dump_stack();
} else {
print_address_description((void *)info->access_addr);
@@ -412,6 +410,78 @@ void kasan_report(unsigned long addr, size_t size,
kasan_report_error(&info);
 }
 
+static inline void khwasan_print_tags(const void *addr)
+{
+   u8 addr_tag = get_tag(addr);
+   void *untagged_addr = reset_tag(addr);
+   u8 *shadow = (u8 *)kasan_mem_to_shadow(untagged_addr);
+
+   pr_err("Pointer tag: [%02x], memory tag: [%02x]\n", addr_tag, *shadow);
+}
+
+static const void *khwasan_find_first_bad_addr(const void *addr, size_t size)
+{
+   u8 tag = get_tag((void *)addr);
+   void *untagged_addr = reset_tag((void *)addr);
+   u8 *shadow = (u8 *)kasan_mem_to_shadow(untagged_addr);
+   const void *first_bad_addr = untagged_addr;
+
+   while (*shadow == tag && first_bad_addr < untagged_addr + size) {
+   first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+   shadow = (u8 *)kasan_mem_to_shadow(first_bad_addr);
+   }
+   return first_bad_addr;
+}
+
+void khwasan_report(unsigned long addr, size_t size, bool write,
+   unsigned long ip)
+{
+   struct kasan_access_info info;
+   unsigned long flags;
+   void *untagged_addr = reset_tag((void *)addr);
+
+   if (likely(!kasan_report_enabled()))
+   return;
+
+   disable_trace_on_warning();
+
+   info.access_addr = (void *)addr;
+   info.first_bad_addr = khwasan_find_first_bad_addr((void *)addr, size);
+   info.access_size = size;
+   info.is_write = write;
+   info.ip = ip;
+
+   kasan_start_report();
+
+   print_error_description(&info, "invalid-access");
+   khwasan_print_tags((void *)addr);
+   pr_err("\n");
+
+   if (!addr_has_shadow(untagged_addr)) {
+   dump_stack();
+   } else {
+   print_address_description(untagged_addr);
+   pr_err("\n");
+   print_shadow_for_add

[RFC PATCH v2 10/15] khwasan, arm64: enable top byte ignore for the kernel

2018-03-23 Thread Andrey Konovalov
KHWASAN uses the Top Byte Ignore feature of arm64 CPUs to store a pointer
tag in the top byte of each pointer. This commit enables the TCR_TBI1 bit,
which enables Top Byte Ignore for the kernel when KHWASAN is used.
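
The effect, as a sketch (set_tag() as defined elsewhere in the series,
illustrative tag value):

	static u64 load_via_tagged_ptr(u64 *p)
	{
		u64 *q = set_tag(p, 0xab);	/* same VA, new top byte */

		/* with TCR_EL1.TBI1 set the MMU ignores bits 63:56 of the
		 * address, so this translates exactly like *p */
		return *q;
	}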

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 arch/arm64/include/asm/pgtable-hwdef.h | 1 +
 arch/arm64/mm/proc.S   | 9 -
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index cdfe3e657a9e..ae6b6405eacc 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -289,6 +289,7 @@
 #define TCR_A1 (UL(1) << 22)
 #define TCR_ASID16 (UL(1) << 36)
 #define TCR_TBI0   (UL(1) << 37)
+#define TCR_TBI1   (UL(1) << 38)
 #define TCR_HA (UL(1) << 39)
 #define TCR_HD (UL(1) << 40)
 
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index c0af47617299..d64ce2ea40ec 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -41,6 +41,12 @@
 /* PTWs cacheable, inner/outer WBWA */
 #define TCR_CACHE_FLAGSTCR_IRGN_WBWA | TCR_ORGN_WBWA
 
+#ifdef CONFIG_KASAN_TAGS
+#define KASAN_TCR_FLAGS TCR_TBI1
+#else
+#define KASAN_TCR_FLAGS 0
+#endif
+
 #define MAIR(attr, mt) ((attr) << ((mt) * 8))
 
 /*
@@ -432,7 +438,8 @@ ENTRY(__cpu_setup)
 * both user and kernel.
 */
ldr x10, =TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \
-   TCR_TG_FLAGS | TCR_ASID16 | TCR_TBI0 | TCR_A1
+   TCR_TG_FLAGS | TCR_ASID16 | TCR_TBI0 | TCR_A1 | \
+   KASAN_TCR_FLAGS
tcr_set_idmap_t0sz  x10, x9
 
/*
-- 
2.17.0.rc0.231.g781580f067-goog



[RFC PATCH v2 09/15] khwasan, kvm: untag pointers in kern_hyp_va

2018-03-23 Thread Andrey Konovalov
kern_hyp_va, which converts a kernel VA into a HYP VA, relies on the top
byte of kernel pointers being 0xff. Untag pointers passed to it when
KHWASAN is enabled.

Also fix create_hyp_mappings() and create_hyp_io_mappings() to use the
untagged kernel pointers for address computations.
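
A worked example of why the tag has to be restored first (illustrative
addresses):

	/*
	 *   tagged kernel VA:       0xabff000012345678   (tag 0xab)
	 *   after "orr" with the
	 *   0xff << 56 tag mask:    0xffff000012345678   (native tag back)
	 *   only then does the usual kern_hyp_va masking see a well-formed VA
	 */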

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 arch/arm64/include/asm/kvm_mmu.h |  8 
 virt/kvm/arm/mmu.c   | 20 +++-
 2 files changed, 23 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 7faed6e48b46..5149ff83b4c4 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -97,6 +97,9 @@
  * Should be completely invisible on any viable CPU.
  */
 .macro kern_hyp_va reg
+#ifdef CONFIG_KASAN_TAGS
+   orr \reg, \reg, #KASAN_PTR_TAG_MASK
+#endif
 alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
and \reg, \reg, #HYP_PAGE_OFFSET_HIGH_MASK
 alternative_else_nop_endif
@@ -115,6 +118,11 @@ alternative_else_nop_endif
 
 static inline unsigned long __kern_hyp_va(unsigned long v)
 {
+#ifdef CONFIG_KASAN_TAGS
+   asm volatile("orr %0, %0, %1"
+: "+r" (v)
+: "i" (KASAN_PTR_TAG_MASK));
+#endif
asm volatile(ALTERNATIVE("and %0, %0, %1",
 "nop",
 ARM64_HAS_VIRT_HOST_EXTN)
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index b960acdd0c05..3dba9b60e0a0 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -683,9 +684,13 @@ static phys_addr_t kvm_kaddr_to_phys(void *kaddr)
 int create_hyp_mappings(void *from, void *to, pgprot_t prot)
 {
phys_addr_t phys_addr;
-   unsigned long virt_addr;
-   unsigned long start = kern_hyp_va((unsigned long)from);
-   unsigned long end = kern_hyp_va((unsigned long)to);
+   unsigned long virt_addr, start, end;
+
+   from = khwasan_reset_tag(from);
+   to = khwasan_reset_tag(to);
+
+   start = kern_hyp_va((unsigned long)from);
+   end = kern_hyp_va((unsigned long)to);
 
if (is_kernel_in_hyp_mode())
return 0;
@@ -719,8 +724,13 @@ int create_hyp_mappings(void *from, void *to, pgprot_t prot)
  */
 int create_hyp_io_mappings(void *from, void *to, phys_addr_t phys_addr)
 {
-   unsigned long start = kern_hyp_va((unsigned long)from);
-   unsigned long end = kern_hyp_va((unsigned long)to);
+   unsigned long start, end;
+
+   from = khwasan_reset_tag(from);
+   to = khwasan_reset_tag(to);
+
+   start = kern_hyp_va((unsigned long)from);
+   end = kern_hyp_va((unsigned long)to);
 
if (is_kernel_in_hyp_mode())
return 0;
-- 
2.17.0.rc0.231.g781580f067-goog



[RFC PATCH v2 08/15] khwasan: add tag related helper functions

2018-03-23 Thread Andrey Konovalov
This commit adds a few helper functions that are meant to be used to
work with tags embedded in the top byte of kernel pointers: to set, to
get, or to reset (set to 0xff) the top byte.

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 arch/arm64/mm/kasan_init.c |  2 ++
 include/linux/kasan.h  | 23 +
 mm/kasan/kasan.h   | 29 ++
 mm/kasan/khwasan.c | 51 ++
 4 files changed, 105 insertions(+)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index d4bceba60010..7fd9aee88069 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -247,6 +247,8 @@ void __init kasan_init(void)
memset(kasan_zero_page, KASAN_SHADOW_INIT, PAGE_SIZE);
cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
 
+   khwasan_init();
+
/* At this point kasan is fully initialized. Enable error messages */
init_task.kasan_depth = 0;
pr_info("KernelAddressSanitizer initialized\n");
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 700734dff218..a2869464a8be 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -155,6 +155,29 @@ static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
 
 #define KASAN_SHADOW_INIT 0xFF
 
+void khwasan_init(void);
+
+void *khwasan_set_tag(const void *addr, u8 tag);
+u8 khwasan_get_tag(const void *addr);
+void *khwasan_reset_tag(const void *ptr);
+
+#else /* CONFIG_KASAN_TAGS */
+
+static inline void khwasan_init(void) { }
+
+static inline void *khwasan_set_tag(const void *addr, u8 tag)
+{
+   return (void *)addr;
+}
+static inline u8 khwasan_get_tag(const void *addr)
+{
+   return 0xFF;
+}
+static inline void *khwasan_reset_tag(const void *ptr)
+{
+   return (void *)ptr;
+}
+
 #endif /* CONFIG_KASAN_TAGS */
 
 #endif /* LINUX_KASAN_H */
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 2be31754278e..c715b44c4780 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -113,6 +113,35 @@ void kasan_report(unsigned long addr, size_t size,
bool is_write, unsigned long ip);
 void kasan_report_invalid_free(void *object, unsigned long ip);
 
+#define KHWASAN_TAG_KERNEL 0xFF /* native kernel pointers tag */
+#define KHWASAN_TAG_INVALID 0xFE /* redzone or free memory tag */
+#define KHWASAN_TAG_MAX 0xFD /* maximum value for random tags */
+
+#define KHWASAN_TAG_SHIFT 56
+#define KHWASAN_TAG_MASK (0xFFUL << KHWASAN_TAG_SHIFT)
+
+static inline void *set_tag(const void *addr, u8 tag)
+{
+   u64 a = (u64)addr;
+
+   a &= ~KHWASAN_TAG_MASK;
+   a |= ((u64)tag << KHWASAN_TAG_SHIFT);
+
+   return (void *)a;
+}
+
+static inline u8 get_tag(const void *addr)
+{
+   return (u8)((u64)addr >> KHWASAN_TAG_SHIFT);
+}
+
+static inline void *reset_tag(const void *addr)
+{
+   return set_tag(addr, KHWASAN_TAG_KERNEL);
+}
+
+void khwasan_report_invalid_free(void *object, unsigned long ip);
+
 #if defined(CONFIG_SLAB) || defined(CONFIG_SLUB)
 void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache);
 void quarantine_reduce(void);
diff --git a/mm/kasan/khwasan.c b/mm/kasan/khwasan.c
index 24d75245e9d0..da4b17997c71 100644
--- a/mm/kasan/khwasan.c
+++ b/mm/kasan/khwasan.c
@@ -39,6 +39,57 @@
 #include "kasan.h"
 #include "../slab.h"
 
+int khwasan_enabled;
+
+static DEFINE_PER_CPU(u32, prng_state);
+
+void khwasan_init(void)
+{
+   int cpu;
+
+   for_each_possible_cpu(cpu) {
+   per_cpu(prng_state, cpu) = get_random_u32();
+   }
+   WRITE_ONCE(khwasan_enabled, 1);
+}
+
+/*
+ * If a preemption happens between this_cpu_read and this_cpu_write, the only
+ * side effect is that we'll give a few objects allocated in different contexts
+ * the same tag. Since KHWASAN is meant to be used as a probabilistic
+ * bug-detection debug feature, this doesn't have a significant negative impact.
+ *
+ * Ideally the tags would use strong randomness to prevent any attempts to
+ * predict them during explicit exploit attempts. But strong randomness is
+ * expensive, and we made an intentional trade-off to use a PRNG. This
+ * non-atomic RMW sequence in fact has a positive effect, since interrupts
+ * that randomly skew the PRNG at unpredictable points only do good.
+ */
+static inline u8 khwasan_random_tag(void)
+{
+   u32 state = this_cpu_read(prng_state);
+
+   state = 1664525 * state + 1013904223;
+   this_cpu_write(prng_state, state);
+
+   return (u8)state % (KHWASAN_TAG_MAX + 1);
+}
+
+void *khwasan_set_tag(const void *addr, u8 tag)
+{
+   return set_tag(addr, tag);
+}
+
+u8 khwasan_get_tag(const void *addr)
+{
+   return get_tag(addr);
+}
+
+void *khwasan_reset_tag(const void *addr)
+{
+   return reset_tag(addr);
+}
+
 void kasan_unpoison_shadow(const void *address, size_t size)
 {
 }
-- 
2.17.0.rc0.231.g781580f067-goog


[RFC PATCH v2 05/15] khwasan: initialize shadow to 0xff

2018-03-23 Thread Andrey Konovalov
A KHWASAN shadow memory cell contains a memory tag that corresponds to
the tag in the top byte of the pointer that points to that memory. The
native top byte value of kernel pointers is 0xff, so with KHWASAN we
need to initialize shadow memory to 0xff. This commit does that.
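
As a sketch of the invariant this preserves (illustrative only, not the
patch's code): KHWASAN reports an access when the pointer's top byte
disagrees with the shadow byte for that memory, so native 0xff pointers
need 0xff shadow:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static bool tag_mismatch(uint64_t ptr, uint8_t shadow_byte)
{
	return (uint8_t)(ptr >> 56) != shadow_byte;
}

int main(void)
{
	uint64_t native_ptr = 0xffff000000001000UL; /* top byte 0xff */

	/* Shadow initialized to 0xff: native kernel pointers pass... */
	assert(!tag_mismatch(native_ptr, 0xFF));
	/* ...but every access would fault if shadow stayed zeroed. */
	assert(tag_mismatch(native_ptr, 0x00));
	return 0;
}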

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 arch/arm64/mm/kasan_init.c | 11 ++-
 include/linux/kasan.h  |  8 
 mm/kasan/common.c  |  7 +++
 3 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index dabfc1ecda3d..d4bceba60010 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -90,6 +90,10 @@ static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr,
do {
phys_addr_t page_phys = early ? __pa_symbol(kasan_zero_page)
  : kasan_alloc_zeroed_page(node);
+#if KASAN_SHADOW_INIT != 0
+   if (!early)
+   memset(__va(page_phys), KASAN_SHADOW_INIT, PAGE_SIZE);
+#endif
next = addr + PAGE_SIZE;
set_pte(ptep, pfn_pte(__phys_to_pfn(page_phys), PAGE_KERNEL));
} while (ptep++, addr = next, addr != end && pte_none(READ_ONCE(*ptep)));
@@ -139,6 +143,11 @@ asmlinkage void __init kasan_early_init(void)
KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
+
+#if KASAN_SHADOW_INIT != 0
+   memset(kasan_zero_page, KASAN_SHADOW_INIT, PAGE_SIZE);
+#endif
+
kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE,
   true);
 }
@@ -235,7 +244,7 @@ void __init kasan_init(void)
set_pte(&kasan_zero_pte[i],
pfn_pte(sym_to_pfn(kasan_zero_page), PAGE_KERNEL_RO));
 
-   memset(kasan_zero_page, 0, PAGE_SIZE);
+   memset(kasan_zero_page, KASAN_SHADOW_INIT, PAGE_SIZE);
cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
 
/* At this point kasan is fully initialized. Enable error messages */
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 3c45e273a936..700734dff218 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -139,6 +139,8 @@ static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
 
 #ifdef CONFIG_KASAN_CLASSIC
 
+#define KASAN_SHADOW_INIT 0
+
 void kasan_cache_shrink(struct kmem_cache *cache);
 void kasan_cache_shutdown(struct kmem_cache *cache);
 
@@ -149,4 +151,10 @@ static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
 
 #endif /* CONFIG_KASAN_CLASSIC */
 
+#ifdef CONFIG_KASAN_TAGS
+
+#define KASAN_SHADOW_INIT 0xFF
+
+#endif /* CONFIG_KASAN_TAGS */
+
 #endif /* LINUX_KASAN_H */
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 08f6c8cb9f84..f4ccb9425655 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -253,6 +253,9 @@ int kasan_module_alloc(void *addr, size_t size)
__builtin_return_address(0));
 
if (ret) {
+#if KASAN_SHADOW_INIT != 0
+   __memset(ret, KASAN_SHADOW_INIT, shadow_size);
+#endif
find_vm_area(addr)->flags |= VM_KASAN;
kmemleak_ignore(ret);
return 0;
@@ -297,6 +300,10 @@ static int __meminit kasan_mem_notifier(struct notifier_block *nb,
if (!ret)
return NOTIFY_BAD;
 
+#if KASAN_SHADOW_INIT != 0
+   __memset(ret, KASAN_SHADOW_INIT, shadow_end - shadow_start);
+#endif
+
kmemleak_ignore(ret);
return NOTIFY_OK;
}
-- 
2.17.0.rc0.231.g781580f067-goog

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[RFC PATCH v2 06/15] khwasan, arm64: untag virt address in __kimg_to_phys

2018-03-23 Thread Andrey Konovalov
__kimg_to_phys (which is used by virt_to_phys) assumes that the top byte
of the address is 0xff, which isn't always the case with KHWASAN enabled.
The solution is to reset the tag in __kimg_to_phys.

__lm_to_phys doesn't require any fixups, as it zeroes out the top byte
with the current implementation.
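
A worked example of the fixup (a sketch with made-up numbers; the real
kimage_voffset is computed at boot):

#include <assert.h>
#include <stdint.h>

#define PTR_TAG_MASK (0xFFUL << 56)

int main(void)
{
	uint64_t kimage_voffset = 0xfffe000000000000UL; /* made up */
	uint64_t native = 0xffff000008080000UL;         /* top byte 0xff */
	uint64_t tagged = (native & ~PTR_TAG_MASK) | (0xABUL << 56);

	/* Without the fixup, the tag corrupts the physical address. */
	assert(tagged - kimage_voffset != native - kimage_voffset);
	/* OR-ing the mask restores the 0xff top byte first. */
	assert(((tagged | PTR_TAG_MASK) - kimage_voffset) ==
	       (native - kimage_voffset));
	return 0;
}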

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 arch/arm64/include/asm/memory.h | 9 +
 1 file changed, 9 insertions(+)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index febd54ff3354..c13b89257352 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -98,6 +98,10 @@
 #define KASAN_THREAD_SHIFT 0
 #endif
 
+#ifdef CONFIG_KASAN_TAGS
+#define KASAN_PTR_TAG_MASK (UL(0xff) << 56)
+#endif
+
 #define MIN_THREAD_SHIFT   (14 + KASAN_THREAD_SHIFT)
 
 /*
@@ -231,7 +235,12 @@ static inline unsigned long kaslr_offset(void)
 #define __is_lm_address(addr)  (!!((addr) & BIT(VA_BITS - 1)))
 
 #define __lm_to_phys(addr) (((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
+
+#ifdef CONFIG_KASAN_TAGS
+#define __kimg_to_phys(addr)   (((addr) | KASAN_PTR_TAG_MASK) - kimage_voffset)
+#else
 #define __kimg_to_phys(addr)   ((addr) - kimage_voffset)
+#endif
 
 #define __virt_to_phys_nodebug(x) ({   \
phys_addr_t __x = (phys_addr_t)(x); \
-- 
2.17.0.rc0.231.g781580f067-goog

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[RFC PATCH v2 07/15] khwasan, arm64: fix up fault handling logic

2018-03-23 Thread Andrey Konovalov
show_pte in arm64 fault handling relies on the fact that the top byte of
a kernel pointer is 0xff, which isn't always the case with KHWASAN enabled.
Reset the top byte.
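
A sketch of what goes wrong otherwise (made-up values, and assuming
VA_BITS == 48 so that kernel addresses start at 0xffff000000000000):

#include <assert.h>
#include <stdint.h>

#define VA_START 0xffff000000000000UL /* assumes VA_BITS == 48 */
#define TAG_MASK (0xFFUL << 56)

int main(void)
{
	uint64_t native = 0xffff800012345000UL;
	uint64_t tagged = (native & ~TAG_MASK) | (0xABUL << 56);

	/* A tagged pointer no longer classifies as a kernel (TTBR1)
	 * address in checks like the ones show_pte performs... */
	assert(!(tagged >= VA_START));
	/* ...but does again once the top byte is reset to 0xff. */
	assert((tagged | TAG_MASK) >= VA_START);
	return 0;
}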

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 arch/arm64/mm/fault.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index bff11553eb05..234613777f2a 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -32,6 +32,7 @@
 #include 
 #include 
 #include 
+#include <linux/kasan.h>
 
 #include 
 #include 
@@ -133,6 +134,8 @@ void show_pte(unsigned long addr)
pgd_t *pgdp;
pgd_t pgd;
 
+   addr = (unsigned long)khwasan_reset_tag((void *)addr);
+
if (addr < TASK_SIZE) {
/* TTBR0 */
mm = current->active_mm;
-- 
2.17.0.rc0.231.g781580f067-goog

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[RFC PATCH v2 03/15] khwasan: add CONFIG_KASAN_CLASSIC and CONFIG_KASAN_TAGS

2018-03-23 Thread Andrey Konovalov
This commit splits the current CONFIG_KASAN config option into two:
1. CONFIG_KASAN_CLASSIC, which enables the classic KASAN version (the one
   that exists now);
2. CONFIG_KASAN_TAGS, which enables KHWASAN.

With CONFIG_KASAN_TAGS enabled, compiler options are changed to instrument
kernel files with -fsanitize=hwaddress (except the ones for which
KASAN_SANITIZE := n is set).

Both CONFIG_KASAN_CLASSIC and CONFIG_KASAN_TAGS support both
CONFIG_KASAN_INLINE and CONFIG_KASAN_OUTLINE instrumentation modes.

This commit also adds an empty (for now) placeholder KHWASAN implementation
of the KASAN hooks (which KHWASAN reuses) and a placeholder implementation
of the KHWASAN-specific hooks inserted by the compiler.
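
For illustration, a sketch of how the new attribute is meant to be used
(the function below is hypothetical; only the attribute definition comes
from this patch, mirroring the existing __no_sanitize_address pattern):

/* clang definition added by this patch: */
#define __no_sanitize_hwaddress __attribute__((no_sanitize("hwaddress")))

/* Hypothetical example: code that runs before the shadow is set up
 * opts out of instrumentation on a per-function basis. */
static __no_sanitize_hwaddress int early_peek(const int *p)
{
	return *p;
}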

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 arch/arm64/Kconfig |   1 +
 include/linux/compiler-clang.h |   9 ++-
 include/linux/compiler-gcc.h   |   4 ++
 include/linux/compiler.h   |   3 +-
 include/linux/kasan.h  |  16 +++--
 lib/Kconfig.kasan  |  68 +-
 mm/kasan/Makefile  |   6 +-
 mm/kasan/khwasan.c | 127 +
 mm/slub.c  |   2 +-
 scripts/Makefile.kasan |  32 -
 10 files changed, 242 insertions(+), 26 deletions(-)
 create mode 100644 mm/kasan/khwasan.c

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7381eeb7ef8e..759871510f87 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -88,6 +88,7 @@ config ARM64
select HAVE_ARCH_HUGE_VMAP
select HAVE_ARCH_JUMP_LABEL
select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
+   select HAVE_ARCH_KASAN_TAGS if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
select HAVE_ARCH_KGDB
select HAVE_ARCH_MMAP_RND_BITS
select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h
index c8f4eea6a5f3..fccebb963ee3 100644
--- a/include/linux/compiler-clang.h
+++ b/include/linux/compiler-clang.h
@@ -24,15 +24,20 @@
 #define KASAN_ABI_VERSION 5
 
 /* emulate gcc's __SANITIZE_ADDRESS__ flag */
-#if __has_feature(address_sanitizer)
+#if __has_feature(address_sanitizer) || __has_feature(hwaddress_sanitizer)
 #define __SANITIZE_ADDRESS__
 #endif
 
-#ifdef CONFIG_KASAN
+#ifdef CONFIG_KASAN_CLASSIC
 #undef __no_sanitize_address
 #define __no_sanitize_address __attribute__((no_sanitize("kernel-address")))
 #endif
 
+#ifdef CONFIG_KASAN_TAGS
+#undef __no_sanitize_hwaddress
+#define __no_sanitize_hwaddress __attribute__((no_sanitize("hwaddress")))
+#endif
+
 /* Clang doesn't have a way to turn it off per-function, yet. */
 #ifdef __noretpoline
 #undef __noretpoline
diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
index e2c7f4369eff..e9bc985c1227 100644
--- a/include/linux/compiler-gcc.h
+++ b/include/linux/compiler-gcc.h
@@ -344,6 +344,10 @@
 #define __no_sanitize_address
 #endif
 
+#if !defined(__no_sanitize_hwaddress)
+#define __no_sanitize_hwaddress /* gcc doesn't support KHWASAN */
+#endif
+
 /*
  * A trick to suppress uninitialized variable warning without generating any
  * code
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index ab4711c63601..6142bae513e8 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -195,7 +195,8 @@ void __read_once_size(const volatile void *p, void *res, int size)
  * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67368
  * '__maybe_unused' allows us to avoid defined-but-not-used warnings.
  */
-# define __no_kasan_or_inline __no_sanitize_address __maybe_unused
+# define __no_kasan_or_inline __no_sanitize_address __no_sanitize_hwaddress \
+ __maybe_unused
 #else
 # define __no_kasan_or_inline __always_inline
 #endif
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 3bfebcf7ad2b..3c45e273a936 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -45,8 +45,6 @@ void kasan_free_pages(struct page *page, unsigned int order);
 
 void kasan_cache_create(struct kmem_cache *cache, size_t *size,
slab_flags_t *flags);
-void kasan_cache_shrink(struct kmem_cache *cache);
-void kasan_cache_shutdown(struct kmem_cache *cache);
 
 void kasan_poison_slab(struct page *page);
 void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
@@ -94,8 +92,6 @@ static inline void kasan_free_pages(struct page *page, unsigned int order) {}
 static inline void kasan_cache_create(struct kmem_cache *cache,
  size_t *size,
  slab_flags_t *flags) {}
-static inline void kasan_cache_shrink(struct kmem_cache *cache) {}
-static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
 
 static inline void kasan_poison_slab(struct page *page) {}
 static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
@@ -141,4 +137,16 @

[RFC PATCH v2 00/15] khwasan: kernel hardware assisted address sanitizer

2018-03-23 Thread Andrey Konovalov
tter benchmarks.

Andrey Konovalov (15):
  khwasan, mm: change kasan hooks signatures
  khwasan: move common kasan and khwasan code to common.c
  khwasan: add CONFIG_KASAN_CLASSIC and CONFIG_KASAN_TAGS
  khwasan, arm64: adjust shadow size for CONFIG_KASAN_TAGS
  khwasan: initialize shadow to 0xff
  khwasan, arm64: untag virt address in __kimg_to_phys
  khwasan, arm64: fix up fault handling logic
  khwasan: add tag related helper functions
  khwasan, kvm: untag pointers in kern_hyp_va
  khwasan, arm64: enable top byte ignore for the kernel
  khwasan, mm: perform untagged pointers comparison in krealloc
  khwasan: add bug reporting routines
  khwasan: add hooks implementation
  khwasan, arm64: add brk handler for inline instrumentation
  khwasan: update kasan documentation

 Documentation/dev-tools/kasan.rst  | 212 --
 arch/arm64/Kconfig |   1 +
 arch/arm64/Makefile|   2 +-
 arch/arm64/include/asm/brk-imm.h   |   2 +
 arch/arm64/include/asm/kvm_mmu.h   |   8 +
 arch/arm64/include/asm/memory.h|  22 +-
 arch/arm64/include/asm/pgtable-hwdef.h |   1 +
 arch/arm64/kernel/traps.c  |  61 
 arch/arm64/mm/fault.c  |   3 +
 arch/arm64/mm/kasan_init.c |  13 +-
 arch/arm64/mm/proc.S   |   9 +-
 include/linux/compiler-clang.h |   9 +-
 include/linux/compiler-gcc.h   |   4 +
 include/linux/compiler.h   |   3 +-
 include/linux/kasan.h  |  84 +-
 lib/Kconfig.kasan  |  68 +++--
 mm/kasan/Makefile  |   9 +-
 mm/kasan/common.c  | 325 +
 mm/kasan/kasan.c   | 302 +---
 mm/kasan/kasan.h   |  33 +++
 mm/kasan/khwasan.c | 372 +
 mm/kasan/report.c  |  88 +-
 mm/slab.c  |  12 +-
 mm/slab.h  |   2 +-
 mm/slab_common.c   |   6 +-
 mm/slub.c  |  18 +-
 scripts/Makefile.kasan |  32 ++-
 virt/kvm/arm/mmu.c |  20 +-
 28 files changed, 1266 insertions(+), 455 deletions(-)
 create mode 100644 mm/kasan/common.c
 create mode 100644 mm/kasan/khwasan.c

-- 
2.17.0.rc0.231.g781580f067-goog

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[RFC PATCH v2 01/15] khwasan, mm: change kasan hooks signatures

2018-03-23 Thread Andrey Konovalov
KHWASAN will change the value of the top byte of pointers returned from the
kernel allocation functions (such as kmalloc). This patch updates the KASAN
hook signatures and their usage in SLAB and SLUB code to reflect that.
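
The pattern this imposes on callers, as a sketch (kasan_slab_alloc() and
its new signature are from this patch; the wrapper is hypothetical):

#include <linux/kasan.h>
#include <linux/slab.h>

/* Callers must now hand out the pointer the hook returns: KHWASAN may
 * have changed its top byte, so keeping the old pointer would keep the
 * old (wrong) tag. */
static void *tagged_alloc(struct kmem_cache *s, void *object, gfp_t flags)
{
	object = kasan_slab_alloc(s, object, flags);
	return object;
}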

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 include/linux/kasan.h | 34 +++---
 mm/kasan/kasan.c  | 24 ++--
 mm/slab.c | 12 ++--
 mm/slab.h |  2 +-
 mm/slab_common.c  |  4 ++--
 mm/slub.c | 16 
 6 files changed, 54 insertions(+), 38 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index adc13474a53b..3bfebcf7ad2b 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -53,14 +53,14 @@ void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
 void kasan_poison_object_data(struct kmem_cache *cache, void *object);
 void kasan_init_slab_obj(struct kmem_cache *cache, const void *object);
 
-void kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags);
+void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags);
 void kasan_kfree_large(void *ptr, unsigned long ip);
 void kasan_poison_kfree(void *ptr, unsigned long ip);
-void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size,
+void *kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size,
  gfp_t flags);
-void kasan_krealloc(const void *object, size_t new_size, gfp_t flags);
+void *kasan_krealloc(const void *object, size_t new_size, gfp_t flags);
 
-void kasan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags);
+void *kasan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags);
 bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
 
 struct kasan_cache {
@@ -105,16 +105,28 @@ static inline void kasan_poison_object_data(struct kmem_cache *cache,
 static inline void kasan_init_slab_obj(struct kmem_cache *cache,
const void *object) {}
 
-static inline void kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags) {}
+static inline void *kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags)
+{
+   return ptr;
+}
 static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
 static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
-static inline void kasan_kmalloc(struct kmem_cache *s, const void *object,
-   size_t size, gfp_t flags) {}
-static inline void kasan_krealloc(const void *object, size_t new_size,
-gfp_t flags) {}
+static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object,
+   size_t size, gfp_t flags)
+{
+   return (void *)object;
+}
+static inline void *kasan_krealloc(const void *object, size_t new_size,
+gfp_t flags)
+{
+   return (void *)object;
+}
 
-static inline void kasan_slab_alloc(struct kmem_cache *s, void *object,
-  gfp_t flags) {}
+static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
+  gfp_t flags)
+{
+   return object;
+}
 static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
   unsigned long ip)
 {
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index e13d911251e7..d8cb63bd1ecc 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -484,9 +484,9 @@ void kasan_init_slab_obj(struct kmem_cache *cache, const void *object)
__memset(alloc_info, 0, sizeof(*alloc_info));
 }
 
-void kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
+void *kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
 {
-   kasan_kmalloc(cache, object, cache->object_size, flags);
+   return kasan_kmalloc(cache, object, cache->object_size, flags);
 }
 
 static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
@@ -527,7 +527,7 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
return __kasan_slab_free(cache, object, ip, true);
 }
 
-void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
+void *kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
   gfp_t flags)
 {
unsigned long redzone_start;
@@ -537,7 +537,7 @@ void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
quarantine_reduce();
 
if (unlikely(object == NULL))
-   return;
+   return NULL;
 
redzone_start = round_up((unsigned long)(object + size),
KASAN_SHADOW_SCALE_SIZE);
@@ -550,10 +550,12 @@ void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
 
if (cache->flags & SLAB_KASAN)
set_track(&get_alloc_info(cache, object)->alloc_track, flags);

[RFC PATCH v2 02/15] khwasan: move common kasan and khwasan code to common.c

2018-03-23 Thread Andrey Konovalov
KHWASAN will reuse a significant part of KASAN code, so move the common
parts to common.c without any functional changes.

Signed-off-by: Andrey Konovalov <andreyk...@google.com>
---
 mm/kasan/Makefile |   5 +-
 mm/kasan/common.c | 318 ++
 mm/kasan/kasan.c  | 288 +
 mm/kasan/kasan.h  |   4 +
 4 files changed, 330 insertions(+), 285 deletions(-)
 create mode 100644 mm/kasan/common.c

diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
index 3289db38bc87..a6df14bffb6b 100644
--- a/mm/kasan/Makefile
+++ b/mm/kasan/Makefile
@@ -1,11 +1,14 @@
 # SPDX-License-Identifier: GPL-2.0
 KASAN_SANITIZE := n
+UBSAN_SANITIZE_common.o := n
 UBSAN_SANITIZE_kasan.o := n
 KCOV_INSTRUMENT := n
 
 CFLAGS_REMOVE_kasan.o = -pg
 # Function splitter causes unnecessary splits in __asan_load1/__asan_store1
 # see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
+
+CFLAGS_common.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
 CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
 
-obj-y := kasan.o report.o kasan_init.o quarantine.o
+obj-y := common.o kasan.o report.o kasan_init.o quarantine.o
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
new file mode 100644
index ..08f6c8cb9f84
--- /dev/null
+++ b/mm/kasan/common.c
@@ -0,0 +1,318 @@
+/*
+ * This file contains common KASAN and KHWASAN code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <ryabinin@gmail.com>
+ *
+ * Some code borrowed from https://github.com/xairy/kasan-prototype by
+ *    Andrey Konovalov <andreyk...@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "kasan.h"
+#include "../slab.h"
+
+void kasan_enable_current(void)
+{
+   current->kasan_depth++;
+}
+
+void kasan_disable_current(void)
+{
+   current->kasan_depth--;
+}
+
+static void __kasan_unpoison_stack(struct task_struct *task, const void *sp)
+{
+   void *base = task_stack_page(task);
+   size_t size = sp - base;
+
+   kasan_unpoison_shadow(base, size);
+}
+
+/* Unpoison the entire stack for a task. */
+void kasan_unpoison_task_stack(struct task_struct *task)
+{
+   __kasan_unpoison_stack(task, task_stack_page(task) + THREAD_SIZE);
+}
+
+/* Unpoison the stack for the current task beyond a watermark sp value. */
+asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
+{
+   /*
+* Calculate the task stack base address.  Avoid using 'current'
+* because this function is called by early resume code which hasn't
+* yet set up the percpu register (%gs).
+*/
+   void *base = (void *)((unsigned long)watermark & ~(THREAD_SIZE - 1));
+
+   kasan_unpoison_shadow(base, watermark - base);
+}
+
+/*
+ * Clear all poison for the region between the current SP and a provided
+ * watermark value, as is sometimes required prior to hand-crafted asm function
+ * returns in the middle of functions.
+ */
+void kasan_unpoison_stack_above_sp_to(const void *watermark)
+{
+   const void *sp = __builtin_frame_address(0);
+   size_t size = watermark - sp;
+
+   if (WARN_ON(sp > watermark))
+   return;
+   kasan_unpoison_shadow(sp, size);
+}
+
+void kasan_check_read(const volatile void *p, unsigned int size)
+{
+   check_memory_region((unsigned long)p, size, false, _RET_IP_);
+}
+EXPORT_SYMBOL(kasan_check_read);
+
+void kasan_check_write(const volatile void *p, unsigned int size)
+{
+   check_memory_region((unsigned long)p, size, true, _RET_IP_);
+}
+EXPORT_SYMBOL(kasan_check_write);
+
+#undef memset
+void *memset(void *addr, int c, size_t len)
+{
+   check_memory_region((unsigned long)addr, len, true, _RET_IP_);
+
+   return __memset(addr, c, len);
+}
+
+#undef memmove
+void *memmove(void *dest, const void *src, size_t len)
+{
+   check_memory_region((unsigned long)src, len, false, _RET_IP_);
+   check_memory_region((unsigned long)dest, len, true, _RET_IP_);
+
+   return __memmove(dest, src, len);
+}
+
+#undef memcpy
+void *memcpy(void *dest, const void *src, size_t len)
+{
+   check_memory_region((unsigned long)src, len, false, _RET_IP_);
+   check_memory_region((unsigned long)dest, len, true, _RET_IP_);
+
+   return __memcpy(dest, src, len);
+}
+
+void kasan_alloc_pages(struct page *page, unsigned int order)
+{
+   if (likely(!PageHighMem(page)))
+   kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
+}
+
+size_t kasan_metadat

Re: arm64 kvm built with clang doesn't boot

2018-03-16 Thread Andrey Konovalov
On Fri, Mar 16, 2018 at 3:31 PM, Mark Rutland  wrote:
>
> FWIW, with that same compiler and patch applied atop of v4.16-rc4, and
> some bodges around clang not liking the rX register naming in the SMCCC
> code, I get a kernel that boots on my Juno, though I immediately hit a
> KASAN splat:
>
> [8.476766] 
> ==
> [8.483990] BUG: KASAN: slab-out-of-bounds in __d_lookup_rcu+0x350/0x400
> [8.490664] Read of size 8 at addr 8009336e2a30 by task init/1

I see this as well, I'm looking into it. It seems that
__no_sanitize_address is not defined for clang (defining it doesn't
help though, so the issue might be deeper).
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: arm64 kvm built with clang doesn't boot

2018-03-16 Thread Andrey Konovalov
On Fri, Mar 16, 2018 at 3:13 PM, Marc Zyngier  wrote:
> I wasn't aware of that discussion, but this is indeed quite annoying.
> Note that you should be able to restrict this to arch/arm64/kvm/hyp/*
> and virt/kvm/arm/hyp/*.

That works as well (tried it, the kernel boots). I've also tried
compiling without the flag for virt/kvm/arm/hyp/*, it boots as well.
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: arm64 kvm built with clang doesn't boot

2018-03-16 Thread Andrey Konovalov
On Fri, Mar 16, 2018 at 3:13 PM, Mark Rutland  wrote:
> I think that patch is our best bet currently, but to save ourselves pain
> in future it would be *really* nice if GCC and clang could provide an
> option like -fno-absolute-addressing that would implicitly disable any
> feature that would generate an absolute address as jump tables do.
>

Let me know if you want me to mail that patch again.

Perhaps Nick can comment on whether something like
-fno-absolute-addressing would be feasible in clang. Although even if
it gets implemented, it won't fix the already released clang versions.

> With v4.15 (and clang 5.0.0), I did not have to disable jump labels to
> get a kernel booting on a Juno platform, though I did have to pass
> -fno-jump-tables to the hyp code.
>
> Which kernel version and clang version are you using?

I've rechecked and I think I was wrong here. I disabled
CONFIG_JUMP_LABEL while trying to get the kernel booting before I
added the kvm flags. It seems it's not needed after all.

But just for the reference, I'm using 4.16-rc4 with a patch to fix
SMCCC issues that you mentioned. As for clang, I'm using LLVM revision
325711 (a couple of weeks old).
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


arm64 kvm built with clang doesn't boot

2018-03-16 Thread Andrey Konovalov
Hi!

I've recently tried to boot clang built kernel on real hardware
(Odroid C2 board) instead of using a VM. The issue that I stumbled
upon is that arm64 kvm built with clang doesn't boot.

Adding -fno-jump-tables compiler flag to arch/arm64/kvm/* helps. There
was a patch some time ago that did exactly that
(https://patchwork.kernel.org/patch/10060381/), but it wasn't accepted
AFAICT (see the discussion on that thread).
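
For context, a sketch of the kind of code that trips this (hypothetical;
a dense switch that clang may lower to a jump table of absolute
link-time addresses, which are wrong once hyp code runs at its own VA):

/* Hyp-code shape that can produce a jump table: */
static int handle_exit(int reason)
{
	switch (reason) {
	case 0: return 10;
	case 1: return 11;
	case 2: return 12;
	case 3: return 13;
	case 4: return 14;
	default: return -1;
	}
}
/* -fno-jump-tables forces a compare-and-branch sequence instead. */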

What would be the best way to get this fixed?

I've also had to disable CONFIG_JUMP_LABEL to get the kernel to boot
(even without kvm enabled), but that might be a different (though
related) issue.

Thanks!
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: kvm/arm64: use-after-free in kvm_unmap_hva_handler/unmap_stage2_pmds

2017-04-13 Thread Andrey Konovalov
On Thu, Apr 13, 2017 at 5:50 PM, Suzuki K. Poulose
<suzuki.poul...@arm.com> wrote:
> On Thu, Apr 13, 2017 at 10:17:54AM +0100, Suzuki K Poulose wrote:
>> On 12/04/17 19:43, Marc Zyngier wrote:
>> > On 12/04/17 17:19, Andrey Konovalov wrote:
>> >
>> > Hi Andrey,
>> >
>> > > Apparently this wasn't fixed, I've got this report again on
>> > > linux-next-c4e7b35a3 (Apr 11), which includes 8b3405e34 "kvm:
>> > > arm/arm64: Fix locking for kvm_free_stage2_pgd".
>> >
>> > This looks like a different bug.
>> >
>> > >
>> > > I now have a way to reproduce it, so I can test proposed patches. I
>> > > don't have a simple C reproducer though.
>> > >
>> > > The bug happens when the following syzkaller program is executed:
>> > >
>> > > mmap(&(0x7f00/0xc000)=nil, (0xc000), 0x3, 0x32, 
>> > > 0x, 0x0)
>> > > unshare(0x400)
>> > > perf_event_open(&(0x7f02f000-0x78)={0x1, 0x78, 0x0, 0x0, 0x0, 0x0,
>> > > 0x0, 0x6, 0x0, 0x0, 0xd34, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
>> > > 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}, 0x0, 0x,
>> > > 0x, 0x0)
>> > > r0 = openat$kvm(0xff9c,
>> > > &(0x7f00c000-0x9)="2f6465762f6b766d00", 0x0, 0x0)
>> > > ioctl$TIOCSBRK(0x, 0x5427)
>> > > r1 = ioctl$KVM_CREATE_VM(r0, 0xae01, 0x0)
>> > > syz_kvm_setup_cpu$arm64(r1, 0x,
>> > > &(0x7fdc6000/0x18000)=nil, &(0x7f00c000)=[{0x0,
>> > > &(0x7f00c000)="5ba3c16f533efbed09f8221253c73763327fadce2371813b45dd7f7982f84a873e4ae89a6c2bd1af83a6024c36a1ff518318",
>> > > 0x32}], 0x1, 0x0, &(0x7f00d000-0x10)=[@featur2={0x1, 0x3}], 0x1)
>> >
>> > Is that the only thing the program does? Or is there anything running in
>> > parallel?
>> >
>> > > ==
>> > > BUG: KASAN: use-after-free in arch_spin_is_locked
>> > > include/linux/compiler.h:254 [inline]
>> > > BUG: KASAN: use-after-free in unmap_stage2_range+0x990/0x9a8
>> > > arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:295
>> > > Read of size 8 at addr 84476730 by task syz-executor/13106
>> > >
>> > > CPU: 1 PID: 13106 Comm: syz-executor Not tainted
>> > > 4.11.0-rc6-next-20170411-xc2-11025-gc4e7b35a33d4-dirty #5
>> > > Hardware name: Hardkernel ODROID-C2 (DT)
>> > > Call trace:
>> > > [] dump_backtrace+0x0/0x440 
>> > > arch/arm64/kernel/traps.c:505
>> > > [] show_stack+0x20/0x30 arch/arm64/kernel/traps.c:228
>> > > [] __dump_stack lib/dump_stack.c:16 [inline]
>> > > [] dump_stack+0x110/0x168 lib/dump_stack.c:52
>> > > [] print_address_description+0x60/0x248 
>> > > mm/kasan/report.c:252
>> > > [] kasan_report_error mm/kasan/report.c:351 [inline]
>> > > [] kasan_report+0x218/0x300 mm/kasan/report.c:408
>> > > [] __asan_report_load8_noabort+0x18/0x20 
>> > > mm/kasan/report.c:429
>> > > [] arch_spin_is_locked include/linux/compiler.h:254 
>> > > [inline]
>> >
>> > This is the assert on the spinlock, and the memory is gone.
>> >
>> > > [] unmap_stage2_range+0x990/0x9a8
>> > > arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:295
>> > > [] kvm_free_stage2_pgd.part.16+0x30/0x98
>> > > arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:842
>> > > [] kvm_free_stage2_pgd
>> > > arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:838 [inline]
>> >
>> > But we've taken than lock here. There's only a handful of instructions
>> > in between, and the memory can only go away if there is something
>> > messing with us in parallel.
>> >
>> > > [] kvm_arch_flush_shadow_all+0x40/0x58
>> > > arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:1895
>> > > [] kvm_mmu_notifier_release+0x154/0x1d0
>> > > arch/arm64/kvm/../../../virt/kvm/kvm_main.c:472
>> > > [] __mmu_notifier_release+0x1c0/0x3e0 
>> > > mm/mmu_notifier.c:75
>> > > [] mmu_notifier_release
>> > > include/linux/mmu_notifier.h:235 [inline]
>> > > [] exit_mmap+0x21c/0x288 mm/mmap.c:2941
>> > > [] __mmput kernel/fork.c:888 [inline]
>> > > [] mmput+0xdc/0x2e0 kernel/fork.c:910
>> > > [] exit_mm kernel/exit.c:557

Re: kvm/arm64: use-after-free in kvm_unmap_hva_handler/unmap_stage2_pmds

2017-04-13 Thread Andrey Konovalov
On Thu, Apr 13, 2017 at 11:34 AM, Mark Rutland <mark.rutl...@arm.com> wrote:
> On Wed, Apr 12, 2017 at 08:51:31PM +0200, Andrey Konovalov wrote:
>> On Wed, Apr 12, 2017 at 8:43 PM, Marc Zyngier <marc.zyng...@arm.com> wrote:
>> > On 12/04/17 17:19, Andrey Konovalov wrote:
>
>> >> I now have a way to reproduce it, so I can test proposed patches. I
>> >> don't have a simple C reproducer though.
>> >>
>> >> The bug happens when the following syzkaller program is executed:
>> >>
>> >> mmap(&(0x7f00/0xc000)=nil, (0xc000), 0x3, 0x32, 
>> >> 0x, 0x0)
>> >> unshare(0x400)
>> >> perf_event_open(&(0x7f02f000-0x78)={0x1, 0x78, 0x0, 0x0, 0x0, 0x0,
>> >> 0x0, 0x6, 0x0, 0x0, 0xd34, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
>> >> 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}, 0x0, 0x,
>> >> 0x, 0x0)
>> >> r0 = openat$kvm(0xff9c,
>> >> &(0x7f00c000-0x9)="2f6465762f6b766d00", 0x0, 0x0)
>> >> ioctl$TIOCSBRK(0x, 0x5427)
>> >> r1 = ioctl$KVM_CREATE_VM(r0, 0xae01, 0x0)
>> >> syz_kvm_setup_cpu$arm64(r1, 0x,
>> >> &(0x7fdc6000/0x18000)=nil, &(0x7f00c000)=[{0x0,
>> >> &(0x7f00c000)="5ba3c16f533efbed09f8221253c73763327fadce2371813b45dd7f7982f84a873e4ae89a6c2bd1af83a6024c36a1ff518318",
>> >> 0x32}], 0x1, 0x0, &(0x7f00d000-0x10)=[@featur2={0x1, 0x3}], 0x1)
>> >
>> > Is that the only thing the program does? Or is there anything running in
>> > parallel?
>>
>> These calls are executed repeatedly and in random order. That's all.
>>
>> Except that I'm running the reproducer on a real arm board, so there's
>> probably a bunch of stuff going on besides these calls.
>
> I had a go at reproducing this on an arm64 board following [1], but so
> far I've had no luck. I've dumped the above into syz-kvm-bug, and I'm
> trying to reproduce the issue with:
>
> PATH=$PATH:$(pwd)/bin syz-execprog \
> -executor $(pwd)/bin/syz-executor \
> -cover=0 -repeat=0 -procs 16 \
> syz-kvm-bug
>
> Just to check, is that the correct way to reproduce the problem with the
> above log?

Hi Mark,

I assume that you have KASAN enabled.

Since a few unintended line breaks were added in the email, here's the
program in plaintext:
https://gist.githubusercontent.com/xairy/69864355b5a64f74e7cb445b7325a7df/raw/bdbbbf177dbea13eac0a5ddfa9c3e6c32695b13b/kvm-arm-uaf-log

I run it as:
# ./syz-execprog -repeat=0 -collide=false -sandbox=namespace ./kvm-arm-uaf-log

And it takes less than a second to trigger the bug.

Thanks!

>
> How quickly does that reproduce the problem for you?
>
> Thanks,
> Mark.
>
> [1] https://github.com/google/syzkaller/wiki/How-to-execute-syzkaller-programs
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: kvm/arm64: use-after-free in kvm_unmap_hva_handler/unmap_stage2_pmds

2017-04-12 Thread Andrey Konovalov
On Wed, Apr 12, 2017 at 8:43 PM, Marc Zyngier <marc.zyng...@arm.com> wrote:
> On 12/04/17 17:19, Andrey Konovalov wrote:
>
> Hi Andrey,
>
>> Apparently this wasn't fixed, I've got this report again on
>> linux-next-c4e7b35a3 (Apr 11), which includes 8b3405e34 "kvm:
>> arm/arm64: Fix locking for kvm_free_stage2_pgd".
>
> This looks like a different bug.

Oh, OK.

>
>>
>> I now have a way to reproduce it, so I can test proposed patches. I
>> don't have a simple C reproducer though.
>>
>> The bug happens when the following syzkaller program is executed:
>>
>> mmap(&(0x7f00/0xc000)=nil, (0xc000), 0x3, 0x32, 0x, 
>> 0x0)
>> unshare(0x400)
>> perf_event_open(&(0x7f02f000-0x78)={0x1, 0x78, 0x0, 0x0, 0x0, 0x0,
>> 0x0, 0x6, 0x0, 0x0, 0xd34, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
>> 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}, 0x0, 0x,
>> 0x, 0x0)
>> r0 = openat$kvm(0xff9c,
>> &(0x7f00c000-0x9)="2f6465762f6b766d00", 0x0, 0x0)
>> ioctl$TIOCSBRK(0x, 0x5427)
>> r1 = ioctl$KVM_CREATE_VM(r0, 0xae01, 0x0)
>> syz_kvm_setup_cpu$arm64(r1, 0x,
>> &(0x7fdc6000/0x18000)=nil, &(0x7f00c000)=[{0x0,
>> &(0x7f00c000)="5ba3c16f533efbed09f8221253c73763327fadce2371813b45dd7f7982f84a873e4ae89a6c2bd1af83a6024c36a1ff518318",
>> 0x32}], 0x1, 0x0, &(0x7f00d000-0x10)=[@featur2={0x1, 0x3}], 0x1)
>
> Is that the only thing the program does? Or is there anything running in
> parallel?

These calls are executed repeatedly and in random order. That's all.

Except that I'm running the reproducer on a real arm board, so there's
probably a bunch of stuff going on besides these calls.

>
>> ==
>> BUG: KASAN: use-after-free in arch_spin_is_locked
>> include/linux/compiler.h:254 [inline]
>> BUG: KASAN: use-after-free in unmap_stage2_range+0x990/0x9a8
>> arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:295
>> Read of size 8 at addr 84476730 by task syz-executor/13106
>>
>> CPU: 1 PID: 13106 Comm: syz-executor Not tainted
>> 4.11.0-rc6-next-20170411-xc2-11025-gc4e7b35a33d4-dirty #5
>> Hardware name: Hardkernel ODROID-C2 (DT)
>> Call trace:
>> [] dump_backtrace+0x0/0x440 arch/arm64/kernel/traps.c:505
>> [] show_stack+0x20/0x30 arch/arm64/kernel/traps.c:228
>> [] __dump_stack lib/dump_stack.c:16 [inline]
>> [] dump_stack+0x110/0x168 lib/dump_stack.c:52
>> [] print_address_description+0x60/0x248 
>> mm/kasan/report.c:252
>> [] kasan_report_error mm/kasan/report.c:351 [inline]
>> [] kasan_report+0x218/0x300 mm/kasan/report.c:408
>> [] __asan_report_load8_noabort+0x18/0x20 
>> mm/kasan/report.c:429
>> [] arch_spin_is_locked include/linux/compiler.h:254 
>> [inline]
>
> This is the assert on the spinlock, and the memory is gone.
>
>> [] unmap_stage2_range+0x990/0x9a8
>> arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:295
>> [] kvm_free_stage2_pgd.part.16+0x30/0x98
>> arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:842
>> [] kvm_free_stage2_pgd
>> arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:838 [inline]
>
> But we've taken than lock here. There's only a handful of instructions
> in between, and the memory can only go away if there is something
> messing with us in parallel.
>
>> [] kvm_arch_flush_shadow_all+0x40/0x58
>> arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:1895
>> [] kvm_mmu_notifier_release+0x154/0x1d0
>> arch/arm64/kvm/../../../virt/kvm/kvm_main.c:472
>> [] __mmu_notifier_release+0x1c0/0x3e0 mm/mmu_notifier.c:75
>> [] mmu_notifier_release
>> include/linux/mmu_notifier.h:235 [inline]
>> [] exit_mmap+0x21c/0x288 mm/mmap.c:2941
>> [] __mmput kernel/fork.c:888 [inline]
>> [] mmput+0xdc/0x2e0 kernel/fork.c:910
>> [] exit_mm kernel/exit.c:557 [inline]
>> [] do_exit+0x648/0x2020 kernel/exit.c:865
>> [] do_group_exit+0xdc/0x260 kernel/exit.c:982
>> [] get_signal+0x358/0xf58 kernel/signal.c:2318
>> [] do_signal+0x170/0xc10 arch/arm64/kernel/signal.c:370
>> [] do_notify_resume+0xe4/0x120 
>> arch/arm64/kernel/signal.c:421
>> [] work_pending+0x8/0x14
>
> So we're being serviced with a signal. Do you know if this signal is
> generated by your syzkaller program? We could be racing between do_exit
> triggered by a fatal signal (this trace) and the closing of the two file
> descriptors (vcpu and vm).

I'm not sure.

>
> Paolo: does this look possible to you? I can't see what locking we have
> that could prevent this race.
>
> Thanks,
>
> M.
> --
> Jazz is not dead. It just smells funny...
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: kvm/arm64: use-after-free in kvm_unmap_hva_handler/unmap_stage2_pmds

2017-04-12 Thread Andrey Konovalov
On Tue, Mar 14, 2017 at 5:57 PM, Paolo Bonzini <pbonz...@redhat.com> wrote:
>
>
> On 14/03/2017 12:07, Suzuki K Poulose wrote:
>> On 10/03/17 13:34, Andrey Konovalov wrote:
>>> Hi,
>>>
>>> I've got the following error report while fuzzing the kernel with
>>> syzkaller.
>>>
>>> On linux-next commit 56b8bad5e066c23e8fa273ef5fba50bd3da2ace8 (Mar 8).
>>>
>>> Unfortunately I can't reproduce it.
>>>
>>> ==
>>> BUG: KASAN: use-after-free in put_page include/linux/compiler.h:243
>>> [inline]
>>> BUG: KASAN: use-after-free in unmap_stage2_pmds
>>> arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:240 [inline]
>>> BUG: KASAN: use-after-free in unmap_stage2_puds
>>> arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:269 [inline]
>>> BUG: KASAN: use-after-free in unmap_stage2_range+0x884/0x938
>>> arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:299
>>> Read of size 8 at addr 80004585c000 by task syz-executor/5176
>>
>>
>>> [] put_page include/linux/compiler.h:243 [inline]
>>> [] unmap_stage2_pmds
>>> arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:240 [inline]
>>> [] unmap_stage2_puds
>>> arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:269 [inline]
>>> [] unmap_stage2_range+0x884/0x938
>>> arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:299
>>> [] kvm_unmap_hva_handler+0x28/0x38
>>> arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:1556
>>> [] handle_hva_to_gpa+0x140/0x250
>>> arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:1547
>>> [] kvm_unmap_hva_range+0x60/0x80
>>> arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:1579
>>> []
>>> kvm_mmu_notifier_invalidate_range_start+0x194/0x278
>>> arch/arm64/kvm/../../../virt/kvm/kvm_main.c:357
>>> [] __mmu_notifier_invalidate_range_start+0x1d0/0x2a0
>>> mm/mmu_notifier.c:199
>>> [] mmu_notifier_invalidate_range_start
>>> include/linux/mmu_notifier.h:282 [inline]
>>> [] unmap_vmas+0x12c/0x198 mm/memory.c:1372
>>> [] unmap_region+0x128/0x230 mm/mmap.c:2460
>>> [] update_hiwater_vm include/linux/mm.h:1483 [inline]
>>> [] remove_vma_list mm/mmap.c:2432 [inline]
>>> [] do_munmap+0x598/0x9b0 mm/mmap.c:2662
>>> [] find_vma_links mm/mmap.c:495 [inline]
>>> [] mmap_region+0x138/0xc78 mm/mmap.c:1610
>>> [] is_file_hugepages include/linux/hugetlb.h:269
>>> [inline]
>>> [] do_mmap+0x3cc/0x848 mm/mmap.c:1446
>>> [] do_mmap_pgoff include/linux/mm.h:2039 [inline]
>>> [] vm_mmap_pgoff+0xec/0x120 mm/util.c:305
>>> [] SYSC_mmap_pgoff mm/mmap.c:1475 [inline]
>>> [] SyS_mmap_pgoff+0x220/0x420 mm/mmap.c:1458
>>> [] sys_mmap+0x58/0x80 arch/arm64/kernel/sys.c:37
>>> [] el0_svc_naked+0x24/0x28
>>>
>>
>> We hold kvm->mmu_lock, while manipulating the stage2 ranges. However, I
>> find that we don't take the lock, when we do it from kvm_free_stage2_pgd(),
>> which could potentially have caused a problem with an munmap of a memslot?
>>
>> Lightly tested...
>>
>>
>> commit fa75684dbf0fe845cf8403517d6e0c2c3344a544
>> Author: Suzuki K Poulose <suzuki.poul...@arm.com>
>> Date:   Tue Mar 14 10:26:54 2017 +
>>
>> kvm: arm: Fix locking for kvm_free_stage2_pgd
>> In kvm_free_stage2_pgd() we don't hold the kvm->mmu_lock while calling
>> unmap_stage2_range() on the entire memory range for the guest. This could
>> cause problems with other callers (e.g, munmap on a memslot) trying to
>> unmap a range.
>> Cc: Marc Zyngier <marc.zyng...@arm.com>
>> Cc: Christoffer Dall <christoffer.d...@linaro.org>
>> Signed-off-by: Suzuki K Poulose <suzuki.poul...@arm.com>
>>
>> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
>> index a07ce3e..7f97063 100644
>> --- a/arch/arm/kvm/mmu.c
>> +++ b/arch/arm/kvm/mmu.c
>> @@ -831,7 +831,10 @@ void kvm_free_stage2_pgd(struct kvm *kvm)
>> if (kvm->arch.pgd == NULL)
>> return;
>>
>> +   spin_lock(&kvm->mmu_lock);
>> unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE);
>> +   spin_unlock(&kvm->mmu_lock);
>> +
>> /* Free the HW pgd, one page at a time */
>> free_pages_exact(kvm->arch.pgd, S2_PGD_SIZE);
>> kvm->arch.pgd = NULL;
>
> Reviewed-by: Paolo Bonzini <pbonz...@redhat.com>
>>
>>
>>
>>> The buggy address belongs to the page:

Re: kvm/arm64: use-after-free in kvm_vm_ioctl/vmacache_update

2017-04-11 Thread Andrey Konovalov
On Tue, Apr 11, 2017 at 5:36 PM, Marc Zyngier <marc.zyng...@arm.com> wrote:
> On 11/04/17 16:26, Andrey Konovalov wrote:
>> On Tue, Mar 14, 2017 at 1:26 PM, Marc Zyngier <marc.zyng...@arm.com> wrote:
>>> On 14/03/17 11:03, Suzuki K Poulose wrote:
>>>> On 13/03/17 09:58, Marc Zyngier wrote:
>>>>> On 10/03/17 18:37, Suzuki K Poulose wrote:
>>>>>> On 10/03/17 15:50, Andrey Konovalov wrote:
>>>>>>> On Fri, Mar 10, 2017 at 2:38 PM, Andrey Konovalov 
>>>>>>> <andreyk...@google.com> wrote:
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> I've got the following error report while fuzzing the kernel with 
>>>>>>>> syzkaller.
>>>>>>>>
>>>>>>>> On linux-next commit 56b8bad5e066c23e8fa273ef5fba50bd3da2ace8 (Mar 8).
>>>>>>>>
>>>>>>>> Unfortunately I can't reproduce it.
>>>>>>>>
>>>>>>>> ==
>>>>>>>> BUG: KASAN: use-after-free in vmacache_update+0x114/0x118 
>>>>>>>> mm/vmacache.c:63
>>>>>>>> Read of size 8 at addr 80003b9a2040 by task syz-executor/26615
>>>>>>>>
>>>>>>>> CPU: 1 PID: 26615 Comm: syz-executor Not tainted
>>>>>>>> 4.11.0-rc1-next-20170308-xc2-dirty #3
>>>>>>>> Hardware name: Hardkernel ODROID-C2 (DT)
>>>>>>>> Call trace:
>>>>>>>> [] dump_backtrace+0x0/0x440 
>>>>>>>> arch/arm64/kernel/traps.c:505
>>>>>>>> [] show_stack+0x20/0x30 arch/arm64/kernel/traps.c:228
>>>>>>>> [] __dump_stack lib/dump_stack.c:16 [inline]
>>>>>>>> [] dump_stack+0x110/0x168 lib/dump_stack.c:52
>>>>>>>> [] print_address_description+0x60/0x248 
>>>>>>>> mm/kasan/report.c:250
>>>>>>>> [] kasan_report_error+0xe8/0x250 
>>>>>>>> mm/kasan/report.c:349
>>>>>>>> [] kasan_report mm/kasan/report.c:372 [inline]
>>>>>>>> [] __asan_report_load8_noabort+0x3c/0x48 
>>>>>>>> mm/kasan/report.c:393
>>>>>>>> [] vmacache_update+0x114/0x118 mm/vmacache.c:63
>>>>>>>> [] find_vma+0xf8/0x150 mm/mmap.c:2124
>>>>>>>> [] kvm_arch_prepare_memory_region+0x2ac/0x488
>>>>>>>> arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:1817
>>>>>>>> [] __kvm_set_memory_region+0x3d8/0x12b8
>>>>>>>> arch/arm64/kvm/../../../virt/kvm/kvm_main.c:1026
>>>>>>>> [] kvm_set_memory_region+0x38/0x58
>>>>>>>> arch/arm64/kvm/../../../virt/kvm/kvm_main.c:1075
>>>>>>>> [] kvm_vm_ioctl_set_memory_region
>>>>>>>> arch/arm64/kvm/../../../virt/kvm/kvm_main.c:1087 [inline]
>>>>>>>> [] kvm_vm_ioctl+0xb94/0x1308
>>>>>>>> arch/arm64/kvm/../../../virt/kvm/kvm_main.c:2960
>>>>>>>> [] vfs_ioctl fs/ioctl.c:45 [inline]
>>>>>>>> [] do_vfs_ioctl+0x128/0xfc0 fs/ioctl.c:685
>>>>>>>> [] SYSC_ioctl fs/ioctl.c:700 [inline]
>>>>>>>> [] SyS_ioctl+0xa8/0xb8 fs/ioctl.c:691
>>>>>>>> [] el0_svc_naked+0x24/0x28
>>>>>>>>
>>>>>>>> Allocated by task 26657:
>>>>>>>>  save_stack_trace_tsk+0x0/0x330 arch/arm64/kernel/stacktrace.c:133
>>>>>>>>  save_stack_trace+0x20/0x30 arch/arm64/kernel/stacktrace.c:216
>>>>>>>>  save_stack mm/kasan/kasan.c:515 [inline]
>>>>>>>>  set_track mm/kasan/kasan.c:527 [inline]
>>>>>>>>  kasan_kmalloc+0xd4/0x180 mm/kasan/kasan.c:619
>>>>>>>>  kasan_slab_alloc+0x14/0x20 mm/kasan/kasan.c:557
>>>>>>>>  slab_post_alloc_hook mm/slab.h:456 [inline]
>>>>>>>>  slab_alloc_node mm/slub.c:2718 [inline]
>>>>>>>>  slab_alloc mm/slub.c:2726 [inline]
>>>>>>>>  kmem_cache_alloc+0x144/0x230 mm/slub.c:2731
>>>>>>>>  __split_vma+0x118/0x608 mm/mmap.c:2515
>>>>>>>>  do_munmap+0x194/0x9b0 mm/mmap.c:2636
>>>>>>>> Freed by task 26657:
>>>>

Re: kvm/arm64: use-after-free in kvm_vm_ioctl/vmacache_update

2017-04-11 Thread Andrey Konovalov
On Tue, Mar 14, 2017 at 1:26 PM, Marc Zyngier <marc.zyng...@arm.com> wrote:
> On 14/03/17 11:03, Suzuki K Poulose wrote:
>> On 13/03/17 09:58, Marc Zyngier wrote:
>>> On 10/03/17 18:37, Suzuki K Poulose wrote:
>>>> On 10/03/17 15:50, Andrey Konovalov wrote:
>>>>> On Fri, Mar 10, 2017 at 2:38 PM, Andrey Konovalov <andreyk...@google.com> 
>>>>> wrote:
>>>>>> Hi,
>>>>>>
>>>>>> I've got the following error report while fuzzing the kernel with 
>>>>>> syzkaller.
>>>>>>
>>>>>> On linux-next commit 56b8bad5e066c23e8fa273ef5fba50bd3da2ace8 (Mar 8).
>>>>>>
>>>>>> Unfortunately I can't reproduce it.
>>>>>>
>>>>>> ==
>>>>>> BUG: KASAN: use-after-free in vmacache_update+0x114/0x118 
>>>>>> mm/vmacache.c:63
>>>>>> Read of size 8 at addr 80003b9a2040 by task syz-executor/26615
>>>>>>
>>>>>> CPU: 1 PID: 26615 Comm: syz-executor Not tainted
>>>>>> 4.11.0-rc1-next-20170308-xc2-dirty #3
>>>>>> Hardware name: Hardkernel ODROID-C2 (DT)
>>>>>> Call trace:
>>>>>> [] dump_backtrace+0x0/0x440 
>>>>>> arch/arm64/kernel/traps.c:505
>>>>>> [] show_stack+0x20/0x30 arch/arm64/kernel/traps.c:228
>>>>>> [] __dump_stack lib/dump_stack.c:16 [inline]
>>>>>> [] dump_stack+0x110/0x168 lib/dump_stack.c:52
>>>>>> [] print_address_description+0x60/0x248 
>>>>>> mm/kasan/report.c:250
>>>>>> [] kasan_report_error+0xe8/0x250 mm/kasan/report.c:349
>>>>>> [] kasan_report mm/kasan/report.c:372 [inline]
>>>>>> [] __asan_report_load8_noabort+0x3c/0x48 
>>>>>> mm/kasan/report.c:393
>>>>>> [] vmacache_update+0x114/0x118 mm/vmacache.c:63
>>>>>> [] find_vma+0xf8/0x150 mm/mmap.c:2124
>>>>>> [] kvm_arch_prepare_memory_region+0x2ac/0x488
>>>>>> arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:1817
>>>>>> [] __kvm_set_memory_region+0x3d8/0x12b8
>>>>>> arch/arm64/kvm/../../../virt/kvm/kvm_main.c:1026
>>>>>> [] kvm_set_memory_region+0x38/0x58
>>>>>> arch/arm64/kvm/../../../virt/kvm/kvm_main.c:1075
>>>>>> [] kvm_vm_ioctl_set_memory_region
>>>>>> arch/arm64/kvm/../../../virt/kvm/kvm_main.c:1087 [inline]
>>>>>> [] kvm_vm_ioctl+0xb94/0x1308
>>>>>> arch/arm64/kvm/../../../virt/kvm/kvm_main.c:2960
>>>>>> [] vfs_ioctl fs/ioctl.c:45 [inline]
>>>>>> [] do_vfs_ioctl+0x128/0xfc0 fs/ioctl.c:685
>>>>>> [] SYSC_ioctl fs/ioctl.c:700 [inline]
>>>>>> [] SyS_ioctl+0xa8/0xb8 fs/ioctl.c:691
>>>>>> [] el0_svc_naked+0x24/0x28
>>>>>>
>>>>>> Allocated by task 26657:
>>>>>>  save_stack_trace_tsk+0x0/0x330 arch/arm64/kernel/stacktrace.c:133
>>>>>>  save_stack_trace+0x20/0x30 arch/arm64/kernel/stacktrace.c:216
>>>>>>  save_stack mm/kasan/kasan.c:515 [inline]
>>>>>>  set_track mm/kasan/kasan.c:527 [inline]
>>>>>>  kasan_kmalloc+0xd4/0x180 mm/kasan/kasan.c:619
>>>>>>  kasan_slab_alloc+0x14/0x20 mm/kasan/kasan.c:557
>>>>>>  slab_post_alloc_hook mm/slab.h:456 [inline]
>>>>>>  slab_alloc_node mm/slub.c:2718 [inline]
>>>>>>  slab_alloc mm/slub.c:2726 [inline]
>>>>>>  kmem_cache_alloc+0x144/0x230 mm/slub.c:2731
>>>>>>  __split_vma+0x118/0x608 mm/mmap.c:2515
>>>>>>  do_munmap+0x194/0x9b0 mm/mmap.c:2636
>>>>>> Freed by task 26657:
>>>>>>  save_stack_trace_tsk+0x0/0x330 arch/arm64/kernel/stacktrace.c:133
>>>>>>  save_stack_trace+0x20/0x30 arch/arm64/kernel/stacktrace.c:216
>>>>>>  save_stack mm/kasan/kasan.c:515 [inline]
>>>>>>  set_track mm/kasan/kasan.c:527 [inline]
>>>>>>  kasan_slab_free+0x84/0x198 mm/kasan/kasan.c:592
>>>>>>  slab_free_hook mm/slub.c:1357 [inline]
>>>>>>  slab_free_freelist_hook mm/slub.c:1379 [inline]
>>>>>>  slab_free mm/slub.c:2961 [inline]
>>>>>>  kmem_cache_free+0x80/0x258 mm/slub.c:2983
>>>>>>  __vma_adjust+0x6b0/0xf mm/mmap.c:890]  el0_svc_naked+0x24/0x28
>>

Re: kvm/arm64: use-after-free in kvm_vm_ioctl/vmacache_update

2017-03-10 Thread Andrey Konovalov
On Fri, Mar 10, 2017 at 2:38 PM, Andrey Konovalov <andreyk...@google.com> wrote:
> Hi,
>
> I've got the following error report while fuzzing the kernel with syzkaller.
>
> On linux-next commit 56b8bad5e066c23e8fa273ef5fba50bd3da2ace8 (Mar 8).
>
> Unfortunately I can't reproduce it.
>
> ==
> BUG: KASAN: use-after-free in vmacache_update+0x114/0x118 mm/vmacache.c:63
> Read of size 8 at addr 80003b9a2040 by task syz-executor/26615
>
> CPU: 1 PID: 26615 Comm: syz-executor Not tainted
> 4.11.0-rc1-next-20170308-xc2-dirty #3
> Hardware name: Hardkernel ODROID-C2 (DT)
> Call trace:
> [] dump_backtrace+0x0/0x440 arch/arm64/kernel/traps.c:505
> [] show_stack+0x20/0x30 arch/arm64/kernel/traps.c:228
> [] __dump_stack lib/dump_stack.c:16 [inline]
> [] dump_stack+0x110/0x168 lib/dump_stack.c:52
> [] print_address_description+0x60/0x248 
> mm/kasan/report.c:250
> [] kasan_report_error+0xe8/0x250 mm/kasan/report.c:349
> [] kasan_report mm/kasan/report.c:372 [inline]
> [] __asan_report_load8_noabort+0x3c/0x48 
> mm/kasan/report.c:393
> [] vmacache_update+0x114/0x118 mm/vmacache.c:63
> [] find_vma+0xf8/0x150 mm/mmap.c:2124
> [] kvm_arch_prepare_memory_region+0x2ac/0x488
> arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:1817
> [] __kvm_set_memory_region+0x3d8/0x12b8
> arch/arm64/kvm/../../../virt/kvm/kvm_main.c:1026
> [] kvm_set_memory_region+0x38/0x58
> arch/arm64/kvm/../../../virt/kvm/kvm_main.c:1075
> [] kvm_vm_ioctl_set_memory_region
> arch/arm64/kvm/../../../virt/kvm/kvm_main.c:1087 [inline]
> [] kvm_vm_ioctl+0xb94/0x1308
> arch/arm64/kvm/../../../virt/kvm/kvm_main.c:2960
> [] vfs_ioctl fs/ioctl.c:45 [inline]
> [] do_vfs_ioctl+0x128/0xfc0 fs/ioctl.c:685
> [] SYSC_ioctl fs/ioctl.c:700 [inline]
> [] SyS_ioctl+0xa8/0xb8 fs/ioctl.c:691
> [] el0_svc_naked+0x24/0x28
>
> Allocated by task 26657:
>  save_stack_trace_tsk+0x0/0x330 arch/arm64/kernel/stacktrace.c:133
>  save_stack_trace+0x20/0x30 arch/arm64/kernel/stacktrace.c:216
>  save_stack mm/kasan/kasan.c:515 [inline]
>  set_track mm/kasan/kasan.c:527 [inline]
>  kasan_kmalloc+0xd4/0x180 mm/kasan/kasan.c:619
>  kasan_slab_alloc+0x14/0x20 mm/kasan/kasan.c:557
>  slab_post_alloc_hook mm/slab.h:456 [inline]
>  slab_alloc_node mm/slub.c:2718 [inline]
>  slab_alloc mm/slub.c:2726 [inline]
>  kmem_cache_alloc+0x144/0x230 mm/slub.c:2731
>  __split_vma+0x118/0x608 mm/mmap.c:2515
>  do_munmap+0x194/0x9b0 mm/mmap.c:2636
> Freed by task 26657:
>  save_stack_trace_tsk+0x0/0x330 arch/arm64/kernel/stacktrace.c:133
>  save_stack_trace+0x20/0x30 arch/arm64/kernel/stacktrace.c:216
>  save_stack mm/kasan/kasan.c:515 [inline]
>  set_track mm/kasan/kasan.c:527 [inline]
>  kasan_slab_free+0x84/0x198 mm/kasan/kasan.c:592
>  slab_free_hook mm/slub.c:1357 [inline]
>  slab_free_freelist_hook mm/slub.c:1379 [inline]
>  slab_free mm/slub.c:2961 [inline]
>  kmem_cache_free+0x80/0x258 mm/slub.c:2983
>  __vma_adjust+0x6b0/0xf mm/mmap.c:890]  el0_svc_naked+0x24/0x28
>
> The buggy address belongs to the object at 80003b9a2000
>  which belongs to the cache vm_area_struct(647:session-6.scope) of size 184
> The buggy address is located 64 bytes inside of
>  184-byte region [80003b9a2000, 80003b9a20b8)
> The buggy address belongs to the page:
> page:7eee6880 count:1 mapcount:0 mapping:  (null) index:0x0
> flags: 0xfffc100(slab)
> raw: 0fffc100   000180100010
> raw:  000c0001 80005a5cc600 80005ac99980
> page dumped because: kasan: bad access detected
> page->mem_cgroup:80005ac99980
>
> Memory state around the buggy address:
>  80003b9a1f00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
>  80003b9a1f80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
>>80003b9a2000: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>^
>  80003b9a2080: fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc fb
>  80003b9a2100: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
> ==

Another one that looks related and doesn't have parts of stack traces missing:

==
BUG: KASAN: use-after-free in find_vma+0x140/0x150 mm/mmap.c:2114
Read of size 8 at addr 800031a03e90 by task syz-executor/4360

CPU: 2 PID: 4360 Comm: syz-executor Not tainted
4.11.0-rc1-next-20170308-xc2-dirty #3
Hardware name: Hardkernel ODROID-C2 (DT)
Call trace:
[] dump_backtrace+0x0/0x440 arch/arm64/kernel/traps.c:505
[] show_stack+0x20/0x30 arch/arm64/kernel/traps.c:228
[] __dump_stack lib/dump_stack.c:16 [inline]
[] dump_stack+0x1

kvm/arm64: use-after-free in kvm_unmap_hva_handler/unmap_stage2_pmds

2017-03-10 Thread Andrey Konovalov
Hi,

I've got the following error report while fuzzing the kernel with syzkaller.

On linux-next commit 56b8bad5e066c23e8fa273ef5fba50bd3da2ace8 (Mar 8).

Unfortunately I can't reproduce it.

==
BUG: KASAN: use-after-free in put_page include/linux/compiler.h:243 [inline]
BUG: KASAN: use-after-free in unmap_stage2_pmds
arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:240 [inline]
BUG: KASAN: use-after-free in unmap_stage2_puds
arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:269 [inline]
BUG: KASAN: use-after-free in unmap_stage2_range+0x884/0x938
arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:299
Read of size 8 at addr 80004585c000 by task syz-executor/5176

CPU: 1 PID: 5176 Comm: syz-executor Not tainted
4.11.0-rc1-next-20170308-xc2-dirty #3
Hardware name: Hardkernel ODROID-C2 (DT)
Call trace:
[] dump_backtrace+0x0/0x440 arch/arm64/kernel/traps.c:69
[] show_stack+0x20/0x30 arch/arm64/kernel/traps.c:219
[] __write_once_size include/linux/compiler.h:271 [inline]
[] dump_stack+0x110/0x168 lib/dump_stack.c:54
[] print_address_description+0x60/0x248
[] print_error_description mm/kasan/report.c:98 [inline]
[] kasan_report_error+0xe8/0x250 mm/kasan/report.c:287
[] kasan_report mm/kasan/report.c:308 [inline]
[] __asan_report_load8_noabort+0x3c/0x48 mm/kasan/report.c:329
[] put_page include/linux/compiler.h:243 [inline]
[] unmap_stage2_pmds
arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:240 [inline]
[] unmap_stage2_puds
arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:269 [inline]
[] unmap_stage2_range+0x884/0x938
arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:299
[] kvm_unmap_hva_handler+0x28/0x38
arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:1556
[] handle_hva_to_gpa+0x140/0x250
arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:1547
[] kvm_unmap_hva_range+0x60/0x80
arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:1579
[]
kvm_mmu_notifier_invalidate_range_start+0x194/0x278
arch/arm64/kvm/../../../virt/kvm/kvm_main.c:357
[] __mmu_notifier_invalidate_range_start+0x1d0/0x2a0
mm/mmu_notifier.c:199
[] mmu_notifier_invalidate_range_start
include/linux/mmu_notifier.h:282 [inline]
[] unmap_vmas+0x12c/0x198 mm/memory.c:1372
[] unmap_region+0x128/0x230 mm/mmap.c:2460
[] update_hiwater_vm include/linux/mm.h:1483 [inline]
[] remove_vma_list mm/mmap.c:2432 [inline]
[] do_munmap+0x598/0x9b0 mm/mmap.c:2662
[] find_vma_links mm/mmap.c:495 [inline]
[] mmap_region+0x138/0xc78 mm/mmap.c:1610
[] is_file_hugepages include/linux/hugetlb.h:269 [inline]
[] do_mmap+0x3cc/0x848 mm/mmap.c:1446
[] do_mmap_pgoff include/linux/mm.h:2039 [inline]
[] vm_mmap_pgoff+0xec/0x120 mm/util.c:305
[] SYSC_mmap_pgoff mm/mmap.c:1475 [inline]
[] SyS_mmap_pgoff+0x220/0x420 mm/mmap.c:1458
[] sys_mmap+0x58/0x80 arch/arm64/kernel/sys.c:37
[] el0_svc_naked+0x24/0x28

The buggy address belongs to the page:
page:7e0001161700 count:0 mapcount:0 mapping:  (null) index:0x0
flags: 0xfffc000()
raw: 0fffc000   
raw: 7e00018c9120 7eea8b60  
page dumped because: kasan: bad access detected

Memory state around the buggy address:
 80004585bf00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
 80004585bf80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
>80004585c000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
               ^
 80004585c080: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
 80004585c100: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
==
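My reading of this trace, for what it's worth: the shadow bytes above are all
ff, which KASAN uses for whole freed pages, so the stage-2 table pages
themselves were freed while unmap_stage2_range() was still walking them and
put_page()-ing entries from the MMU-notifier callback. Below is a minimal
userspace model of that pattern; all names are illustrative analogues of the
kernel functions in the trace, and the assumption that the walk and the table
teardown must serialize on one lock (kvm->mmu_lock in the kernel) is mine, not
a confirmed fix.

/*
 * Userspace model of the race this trace suggests: one thread walks a
 * stage-2-style table and put_page()s entries (as unmap_stage2_pmds()
 * does), while another thread frees the table itself.  Illustrative
 * analogues only, not the kernel code.
 */
#include <pthread.h>
#include <stdlib.h>

#define ENTRIES 8

struct page { int refcount; };

static pthread_mutex_t mmu_lock = PTHREAD_MUTEX_INITIALIZER;
static struct page **pmd_table;		/* the "stage-2 pmd page" */

static void put_page(struct page *p)
{
	if (--p->refcount == 0)
		free(p);
}

/* Analogue of unmap_stage2_range()/unmap_stage2_pmds(). */
static void *unmap_range(void *arg)
{
	pthread_mutex_lock(&mmu_lock);
	if (pmd_table) {		/* teardown may have run first */
		for (int i = 0; i < ENTRIES; i++)
			if (pmd_table[i]) {
				put_page(pmd_table[i]);
				pmd_table[i] = NULL;
			}
	}
	pthread_mutex_unlock(&mmu_lock);
	return NULL;
}

/* Analogue of tearing down the stage-2 tables at VM destruction.
 * Without taking mmu_lock here, unmap_range() could still be walking
 * pmd_table when it is freed -- the use-after-free KASAN reports. */
static void *teardown(void *arg)
{
	pthread_mutex_lock(&mmu_lock);
	free(pmd_table);
	pmd_table = NULL;
	pthread_mutex_unlock(&mmu_lock);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pmd_table = calloc(ENTRIES, sizeof(*pmd_table));
	for (int i = 0; i < ENTRIES; i++) {
		pmd_table[i] = malloc(sizeof(struct page));
		pmd_table[i]->refcount = 1;
	}
	pthread_create(&t1, NULL, unmap_range, NULL);
	pthread_create(&t2, NULL, teardown, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}

Compile with gcc -pthread. With mmu_lock taken in both paths the model is
race-free; dropping it from teardown() reintroduces exactly the
walker-touches-freed-table shape flagged above.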
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


kvm/arm64: use-after-free in kvm_vm_ioctl/vmacache_update

2017-03-10 Thread Andrey Konovalov
Hi,

I've got the following error report while fuzzing the kernel with syzkaller.

On linux-next commit 56b8bad5e066c23e8fa273ef5fba50bd3da2ace8 (Mar 8).

Unfortunately I can't reproduce it.

==
BUG: KASAN: use-after-free in vmacache_update+0x114/0x118 mm/vmacache.c:63
Read of size 8 at addr 80003b9a2040 by task syz-executor/26615

CPU: 1 PID: 26615 Comm: syz-executor Not tainted
4.11.0-rc1-next-20170308-xc2-dirty #3
Hardware name: Hardkernel ODROID-C2 (DT)
Call trace:
[] dump_backtrace+0x0/0x440 arch/arm64/kernel/traps.c:505
[] show_stack+0x20/0x30 arch/arm64/kernel/traps.c:228
[] __dump_stack lib/dump_stack.c:16 [inline]
[] dump_stack+0x110/0x168 lib/dump_stack.c:52
[] print_address_description+0x60/0x248 mm/kasan/report.c:250
[] kasan_report_error+0xe8/0x250 mm/kasan/report.c:349
[] kasan_report mm/kasan/report.c:372 [inline]
[] __asan_report_load8_noabort+0x3c/0x48 mm/kasan/report.c:393
[] vmacache_update+0x114/0x118 mm/vmacache.c:63
[] find_vma+0xf8/0x150 mm/mmap.c:2124
[] kvm_arch_prepare_memory_region+0x2ac/0x488
arch/arm64/kvm/../../../arch/arm/kvm/mmu.c:1817
[] __kvm_set_memory_region+0x3d8/0x12b8
arch/arm64/kvm/../../../virt/kvm/kvm_main.c:1026
[] kvm_set_memory_region+0x38/0x58
arch/arm64/kvm/../../../virt/kvm/kvm_main.c:1075
[] kvm_vm_ioctl_set_memory_region
arch/arm64/kvm/../../../virt/kvm/kvm_main.c:1087 [inline]
[] kvm_vm_ioctl+0xb94/0x1308
arch/arm64/kvm/../../../virt/kvm/kvm_main.c:2960
[] vfs_ioctl fs/ioctl.c:45 [inline]
[] do_vfs_ioctl+0x128/0xfc0 fs/ioctl.c:685
[] SYSC_ioctl fs/ioctl.c:700 [inline]
[] SyS_ioctl+0xa8/0xb8 fs/ioctl.c:691
[] el0_svc_naked+0x24/0x28

Allocated by task 26657:
 save_stack_trace_tsk+0x0/0x330 arch/arm64/kernel/stacktrace.c:133
 save_stack_trace+0x20/0x30 arch/arm64/kernel/stacktrace.c:216
 save_stack mm/kasan/kasan.c:515 [inline]
 set_track mm/kasan/kasan.c:527 [inline]
 kasan_kmalloc+0xd4/0x180 mm/kasan/kasan.c:619
 kasan_slab_alloc+0x14/0x20 mm/kasan/kasan.c:557
 slab_post_alloc_hook mm/slab.h:456 [inline]
 slab_alloc_node mm/slub.c:2718 [inline]
 slab_alloc mm/slub.c:2726 [inline]
 kmem_cache_alloc+0x144/0x230 mm/slub.c:2731
 __split_vma+0x118/0x608 mm/mmap.c:2515
 do_munmap+0x194/0x9b0 mm/mmap.c:2636
Freed by task 26657:
 save_stack_trace_tsk+0x0/0x330 arch/arm64/kernel/stacktrace.c:133
 save_stack_trace+0x20/0x30 arch/arm64/kernel/stacktrace.c:216
 save_stack mm/kasan/kasan.c:515 [inline]
 set_track mm/kasan/kasan.c:527 [inline]
 kasan_slab_free+0x84/0x198 mm/kasan/kasan.c:592
 slab_free_hook mm/slub.c:1357 [inline]
 slab_free_freelist_hook mm/slub.c:1379 [inline]
 slab_free mm/slub.c:2961 [inline]
 kmem_cache_free+0x80/0x258 mm/slub.c:2983
 __vma_adjust+0x6b0/0xf mm/mmap.c:890
 el0_svc_naked+0x24/0x28

The buggy address belongs to the object at 80003b9a2000
 which belongs to the cache vm_area_struct(647:session-6.scope) of size 184
The buggy address is located 64 bytes inside of
 184-byte region [80003b9a2000, 80003b9a20b8)
The buggy address belongs to the page:
page:7eee6880 count:1 mapcount:0 mapping:  (null) index:0x0
flags: 0xfffc100(slab)
raw: 0fffc100   000180100010
raw:  000c0001 80005a5cc600 80005ac99980
page dumped because: kasan: bad access detected
page->mem_cgroup:80005ac99980

Memory state around the buggy address:
 80003b9a1f00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
 80003b9a1f80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
>80003b9a2000: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                       ^
 80003b9a2080: fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc fb
 80003b9a2100: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==
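A note on how I read this one: the shadow bytes here are fb, which marks a
freed slab object (the vm_area_struct), rather than the ff (freed page)
pattern in the previous report, and the free happened in do_munmap() in a
sibling task (26657) while this task (26615) was inside
find_vma()/vmacache_update(). That is consistent with the VMA lookup being
reached without mmap_sem held. Here's a minimal userspace model of the
invariant; the names and the suggested shape of the fix (take the
address-space lock around the lookup) are my assumptions from the trace, not
a confirmed patch.

/*
 * Userspace model of the locking rule the trace points at: looking a
 * VMA up (find_vma()) and caching the result (vmacache_update()) is
 * only safe while the address-space lock is held for reading, because
 * munmap() in a sibling thread frees VMAs under the write side.
 */
#include <pthread.h>
#include <stddef.h>

struct vma { unsigned long vm_start, vm_end; struct vma *vm_next; };

struct mm {
	pthread_rwlock_t mmap_sem;	/* models mmap_sem */
	struct vma *mmap;		/* sorted list, models the rb-tree */
};

static __thread struct vma *vmacache;	/* models current->vmacache[] */

/* Models find_vma(): first VMA with addr < vm_end.  The caller must
 * hold mm->mmap_sem; the cache write is what KASAN flagged above. */
static struct vma *find_vma(struct mm *mm, unsigned long addr)
{
	for (struct vma *v = mm->mmap; v; v = v->vm_next)
		if (addr < v->vm_end) {
			vmacache = v;
			return v;
		}
	return NULL;
}

/* Models what a memslot-preparation path would do before touching
 * userspace VMAs for an hva range. */
static int check_memslot(struct mm *mm, unsigned long hva)
{
	int ret = -1;
	struct vma *v;

	pthread_rwlock_rdlock(&mm->mmap_sem);
	v = find_vma(mm, hva);
	if (v && v->vm_start <= hva)
		ret = 0;	/* consume the VMA only under the lock */
	pthread_rwlock_unlock(&mm->mmap_sem);
	return ret;
}

int main(void)
{
	struct vma v = { 0x1000, 0x2000, NULL };
	struct mm mm = { PTHREAD_RWLOCK_INITIALIZER, &v };

	return check_memslot(&mm, 0x1800) ? 1 : 0;
}

The point of the model is only the locking rule: the cached pointer written
by find_vma() is invalidated by any concurrent munmap(), so both the lookup
and every later dereference have to stay under the read side.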
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm