Wrap access to object's metadata in external functions with
metadata_access_enable()/metadata_access_disable() function calls.
These hooks separate payload accesses from metadata accesses,
which might be useful for different checkers (e.g. KASan).
Signed-off-by: Andrey Ryabinin a.ryabi
don't want to check memory accesses there.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
arch/x86/boot/compressed/eboot.c | 2 ++
arch/x86/boot/compressed/misc.h | 1 +
arch/x86/include/asm/string_64.h | 18 +-
arch/x86/kernel/x8664_ksyms_64.c | 10 --
arch/x86
as accessible.
Code in the slub.c and slab_common.c files can validly access an object's
metadata, so instrumentation for these files is disabled.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
include/linux/kasan.h | 21
include/linux/slab.h | 11 --
lib/Kconfig.kasan
Remove static and add function declarations to mm/slab.h so they
can be used by the kernel address sanitizer.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
include/linux/slub_def.h | 5 +
mm/slub.c| 4 ++--
2 files changed, 7 insertions(+), 2 deletions(-)
diff
kmalloc internally rounds up the allocation size, and kmemleak
uses the rounded-up size as the object's size. This makes KASan
complain while kmemleak scans memory or calculates an object's
checksum. The simplest solution here is to disable KASan.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
/p/thread-sanitizer/wiki/FoundBugs
[4] https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
[5]
https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies
Based on work by Andrey Konovalov adech...@gmail.com
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
Add kernel address sanitizer hooks to mark allocated page's addresses
as accessible in corresponding shadow region.
Mark freed pages as inaccessible.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
include/linux/kasan.h | 6 ++
mm/compaction.c | 2 ++
mm/kasan/kasan.c
stack/global variables out of bounds accesses and so on).
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
lib/Kconfig.kasan | 8 ++
lib/Makefile | 1 +
lib/test_kasan.c | 254 ++
3 files changed, 263 insertions(+)
create mode
call (__phys_addr)
__phys_addr is instrumented, so __asan_load could be called before
the shadow area is initialized.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
arch/x86/Kconfig | 1 +
arch/x86/boot/Makefile| 2 +
arch/x86/boot/compressed/Makefile | 2
virt_to_obj takes a kmem_cache address, the address of the slab page,
and an address x pointing somewhere inside a slab object,
and returns the address of the beginning of the object.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
include/linux/slub_def.h | 5 +
1 file changed, 5 insertions(+)
diff --git
in build failure with CONFIG_KASAN=y and
CONFIG_OPTIMIZE_INLINING=y.
b) Add -fno-stack-protector for mm/kasan/kasan.c
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
arch/x86/kernel/cpu/common.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu
2014-11-24 21:45 GMT+03:00 Sasha Levin sasha.le...@oracle.com:
On 11/24/2014 01:02 PM, Andrey Ryabinin wrote:
+static int kasan_die_handler(struct notifier_block *self,
+ unsigned long val,
+ void *data)
+{
+ if (val == DIE_GPF
this out in the documentation.
This came up during the kernel address sanitizer discussions
where it was proposed to use this area for other kernel things.
Signed-off-by: Dave Hansen dave.han...@linux.intel.com
Cc: Andrey Ryabinin ryabinin@gmail.com
Cc: Dmitry Vyukov dvyu...@google.com
Cc
On 09/26/2014 08:48 AM, Dmitry Vyukov wrote:
On Wed, Sep 24, 2014 at 5:44 AM, Andrey Ryabinin a.ryabi...@samsung.com
wrote:
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -6,6 +6,7 @@ if HAVE_ARCH_KASAN
config KASAN
bool AddressSanitizer: runtime memory debugger
2014-09-26 21:01 GMT+04:00 Sasha Levin sasha.le...@oracle.com:
On 09/24/2014 08:43 AM, Andrey Ryabinin wrote:
Hi.
This is the third iteration of the kernel address sanitizer (KASan).
KASan is a runtime memory debugger designed to find use-after-free
and out-of-bounds bugs.
Currently KASAN
2014-09-26 21:07 GMT+04:00 Dmitry Vyukov dvyu...@google.com:
On Fri, Sep 26, 2014 at 10:01 AM, Sasha Levin sasha.le...@oracle.com wrote:
On 09/24/2014 08:43 AM, Andrey Ryabinin wrote:
Hi.
This is the third iteration of the kernel address sanitizer (KASan).
KASan is a runtime memory debugger
regards,
Andrey Ryabinin
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
2014-09-26 21:10 GMT+04:00 Dmitry Vyukov dvyu...@google.com:
Looks good to me.
We can disable kasan instrumentation of this file as well.
Yes, but why? I don't think we need that.
On Wed, Sep 24, 2014 at 5:44 AM, Andrey Ryabinin a.ryabi...@samsung.com
wrote:
kmalloc internally round up
On 09/29/2014 06:28 PM, Dmitry Vyukov wrote:
On Fri, Sep 26, 2014 at 9:33 PM, Andrey Ryabinin ryabinin@gmail.com
wrote:
2014-09-26 21:18 GMT+04:00 Dmitry Vyukov dvyu...@google.com:
Yikes!
So this works during bootstrap, for user memory accesses, valloc
memory, etc, right?
Yes
irq_desc' requires 16 byte alignment.
It's wrong, in my setup it should be 64 bytes. This looks like a gcc bug,
but it doesn't change the fact that irq_desc is misaligned.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
kernel/irq/irqdesc.c | 11 ---
1 file changed, 8 insertions
kmem_cache_zalloc_node() allocates zeroed memory for a particular
cache from a specified memory node. To be used for struct irq_desc.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
include/linux/slab.h | 6 ++
1 file changed, 6 insertions(+)
diff --git a/include/linux/slab.h b
says that 'struct irqaction' requires 16 byte alignment.
It's wrong, in my setup it should be 64 bytes. This looks like a gcc bug,
but it doesn't change the fact that irqaction is misaligned.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
kernel/irq/internals.h | 2 ++
kernel/irq
Hi Andrew,
Now we have a stable GCC (4.9.2) which supports KASan, and from my point of view
the patchset is ready for merging.
I could have sent v7 (it's just a rebased v6), but I see no point in doing that
and bothering people unless you are ready to take it.
So how should I proceed?
Thanks,
Andrey.
FYI I've spotted this:
[ 180.202810]
[ 180.203600] UBSan: Undefined behaviour in ../net/netfilter/nfnetlink.c:467:28
[ 180.204249] index 9 is out of range for type 'int [9]'
[ 180.204697] CPU: 0 PID: 1771 Comm:
/onlinedocs/gcc/Debugging-Options.html
[3] -
http://developerblog.redhat.com/2014/10/16/gcc-undefined-behavior-sanitizer-ubsan/
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
Documentation/ubsan.txt | 69 +
Makefile | 10 +-
arch/x86/Kconfig
++
only.
- Added checker for __builtin_unreachable() calls.
- Removed redundant -fno-sanitize=float-cast-overflow from CFLAGS.
- Added lock to prevent mixing reports.
Andrey Ryabinin (2):
kernel: printk: specify alignment for struct printk_log
UBSan: run-time undefined behavior sanity
printk() with logbuf_lock held by top printk() call.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
kernel/printk/printk.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index ced2b84..39be027 100644
2014-10-20 5:40 GMT+03:00 Sasha Levin sasha.le...@oracle.com:
gcc5 changes the default standard to c11, which makes kernel
build unhappy.
Explicitly define the kernel standard to be gnu89 which should
keep everything working exactly like it was before gcc5.
Ping.
is not valid, an error is printed.
Andrey Ryabinin (13):
Add kernel address sanitizer infrastructure.
efi: libstub: disable KASAN for efistub
x86_64: load_percpu_segment: read irq_stack_union.gs_base before
load_segment
x86_64: add KASan support
mm: page_alloc: add kasan hooks on alloc
adech...@gmail.com
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
Documentation/kasan.txt | 179 ++
Makefile| 11 +-
include/linux/kasan.h | 42 ++
include/linux/sched.h | 3 +
lib/Kconfig.debug | 2 +
lib/Kconfig.kasan | 15
Add kernel address sanitizer hooks to mark allocated page's addresses
as accessible in corresponding shadow region.
Mark freed pages as inaccessible.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
include/linux/kasan.h | 6 ++
mm/compaction.c | 2 ++
mm/kasan/kasan.c
   text    data     bss      dec     hex filename
32064759 1598688  946176 34609623 21019d7 inline/vmlinux
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
Makefile | 5 +
lib/Kconfig.kasan | 24
mm/kasan/report.c | 45 +
3 files changed
kmalloc internally rounds up the allocation size.
So this is not a bug, but it makes KASan complain about
such accesses.
To avoid such reports we mark rounded up allocation size in
shadow as accessible.
Reported-by: Dmitry Vyukov dvyu...@google.com
Signed-off-by: Andrey Ryabinin a.ryabi
kmalloc internally rounds up the allocation size, and kmemleak
uses the rounded-up size as the object's size. This makes KASan
complain while kmemleak scans memory or calculates an object's
checksum. The simplest solution here is to disable KASan.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
KASan, like many other options, should be disabled for this stub
to prevent build failures.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
drivers/firmware/efi/libstub/Makefile | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/firmware/efi/libstub/Makefile
b/drivers/firmware
stack/global variables out of bounds accesses and so on).
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
lib/Kconfig.kasan | 8 ++
lib/Makefile | 1 +
lib/test_kasan.c | 254 ++
3 files changed, 263 insertions(+)
create mode
virt_to_obj takes a kmem_cache address, the address of the slab page,
and an address x pointing somewhere inside a slab object,
and returns the address of the beginning of the object.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
include/linux/slub_def.h | 5 +
1 file changed, 5 insertions(+)
diff --git
as accessible.
Code in the slub.c and slab_common.c files can validly access an object's
metadata, so instrumentation for these files is disabled.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
include/linux/kasan.h | 24 +
include/linux/slab.h | 11 --
lib/Kconfig.kasan
Wrap access to object's metadata in external functions with
metadata_access_enable()/metadata_access_disable() function calls.
These hooks separate payload accesses from metadata accesses,
which might be useful for different checkers (e.g. KASan).
Signed-off-by: Andrey Ryabinin a.ryabi
in build failure with CONFIG_KASAN=y and
CONFIG_OPTIMIZE_INLINING=y.
b) Add -fno-stack-protector for mm/kasan/kasan.c
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
arch/x86/kernel/cpu/common.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu
call (__phys_addr)
__phys_addr is instrumented, so __asan_load could be called before
the shadow area is initialized.
Change-Id: I289ea19eab98e572df7f80cacec661813ea61281
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
arch/x86/Kconfig | 1 +
arch/x86/boot/Makefile
Remove static and add function declarations to mm/slab.h so they
can be used by the kernel address sanitizer.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
include/linux/slub_def.h | 4
mm/slub.c| 4 ++--
2 files changed, 6 insertions(+), 2 deletions(-)
diff --git
2014-12-22 23:39 GMT+03:00 Eric W. Biederman ebied...@xmission.com:
Sasha Levin sasha.le...@oracle.com writes:
On 12/22/2014 12:52 PM, Andrey Ryabinin wrote:
2014-12-22 18:51 GMT+03:00 Eric W. Biederman ebied...@xmission.com:
These two instructions:
11: 4d 85 ff    test
On 11/25/2014 02:14 PM, Dmitry Chernenkov wrote:
I have a bit of concern about tests.
A) they are not fully automated, there is no checking whether they
pass or not. This is implemented in our repository using special tags
in the log
On 11/25/2014 03:22 PM, Dmitry Chernenkov wrote:
LGTM
Does this mean we're going to sanitize the slub code itself?)
Nope, to sanitize slub itself we need much more than just this.
On 11/25/2014 03:17 PM, Dmitry Chernenkov wrote:
FYI, when I backported Kasan to 3.14, in kasan_mark_slab_padding()
sometimes a negative size of padding was generated.
I don't see how this could happen if pointers passed to
kasan_mark_slab_padding() are correct.
Negative padding would mean
On 11/25/2014 03:40 PM, Dmitry Chernenkov wrote:
I'm a little concerned with how enabling/disabling works. If an
enable() is forgotten once, it's disabled forever. If disable() is
forgotten once, the toggle is reversed for the foreseeable future. Maybe
check for inequality in kasan_enabled()? like
2014-11-26 17:00 GMT+03:00 Sasha Levin sasha.le...@oracle.com:
We've used to detect integer overflows by causing an overflow and testing the
result. For example, to test for addition overflow we would:
if (a + b < a)
/* Overflow detected */
While it works, this is
2014-11-26 17:00 GMT+03:00 Sasha Levin sasha.le...@oracle.com:
Detect integer overflows using safe operations rather than relying on
undefined behaviour.
Unsigned overflow is defined.
args->addr and args->len are both unsigned, so there is no UB here.
Signed-off-by: Sasha Levin
On 12/13/2014 11:51 PM, Manfred Spraul wrote:
Hi,
On 12/04/2014 12:25 AM, Andrew Morton wrote:
On Wed, 03 Dec 2014 15:41:21 +0300 Andrey Ryabinin a.ryabi...@samsung.com
wrote:
Use the 'unsigned long' type for 'zero' variable to fix this.
Changing type to 'unsigned long' shouldn't affect
==
Zero 'level' (e.g. on non-NUMA system) causing out of bounds access
in this line:
sched_max_numa_distance = sched_domains_numa_distance[level - 1];
Fix this by exiting from sched_init_numa() earlier.
Signed-off-by: Andrey Ryabinin a.ryabi
From: Andrey Ryabinin a.ryabi...@samsung.com
Setting smack label on file (e.g. 'attr -S -s SMACK64 -V test test')
triggered following spew on the kernel with KASan applied:
==
BUG: AddressSanitizer: out of bounds access
2014-11-14 20:22 GMT+03:00 Joe Perches j...@perches.com:
On Fri, 2014-11-14 at 15:50 +0300, Andrey Ryabinin wrote:
On architectures that have support for efficient unaligned access
struct printk_log has 4-byte alignment.
Specify alignment attribute in type declaration.
The whole point
On 11/14/2014 08:44 PM, Pablo Neira Ayuso wrote:
On Thu, Nov 13, 2014 at 12:00:43PM +0300, Andrey Ryabinin wrote:
FYI I've spotted this:
[ 180.202810]
[ 180.203600] UBSan: Undefined behaviour in
../net
2014-12-02 10:53 GMT+03:00 Dmitry Vyukov dvyu...@google.com:
Hi,
I am working on Kernel AddressSanitizer, a fast memory error detector
for kernel:
https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
Here is an error report that I got while running trinity:
On 12/03/2014 12:04 PM, Dmitry Vyukov wrote:
Hi,
I am working on AddressSanitizer, a fast memory error detector for kernel:
https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
Here is a bug report that I've got while running trinity:
==
Use the 'unsigned long' type for 'zero' variable to fix this.
Changing type to 'unsigned long' shouldn't affect any other users
of this variable.
Reported-by: Dmitry Vyukov dvyu...@google.com
Fixes: ed4d4902ebdd (mm, hugetlb: remove hugetlb_zero and hugetlb_infinity)
Signed-off-by: Andrey
On 12/03/2014 04:27 PM, Dmitry Vyukov wrote:
On Wed, Dec 3, 2014 at 3:39 PM, Andrey Ryabinin a.ryabi...@samsung.com
wrote:
On 12/03/2014 12:04 PM, Dmitry Vyukov wrote:
Hi,
I am working on AddressSanitizer, a fast memory error detector for kernel:
https://code.google.com/p/address-sanitizer
2014-12-22 17:37 GMT+03:00 Sasha Levin sasha.le...@oracle.com:
Hi all,
While fuzzing with trinity inside a KVM tools guest running the latest -next
kernel, I've stumbled on the following spew:
[ 2015.960381] general protection fault: [#1] PREEMPT SMP KASAN
Actually this is NULL-ptr
2014-12-22 18:51 GMT+03:00 Eric W. Biederman ebied...@xmission.com:
Andrey Ryabinin ryabinin@gmail.com writes:
2014-12-22 17:37 GMT+03:00 Sasha Levin sasha.le...@oracle.com:
Hi all,
While fuzzing with trinity inside a KVM tools guest running the latest -next
kernel, I've stumbled
2014-12-16 5:42 GMT+03:00 Joonsoo Kim iamjoonsoo@lge.com:
On Mon, Dec 15, 2014 at 08:16:00AM -0600, Christoph Lameter wrote:
On Mon, 15 Dec 2014, Joonsoo Kim wrote:
+static bool same_slab_page(struct kmem_cache *s, struct page *page,
void *p)
+{
+ long d = p - page->address;
2014-12-16 17:53 GMT+03:00 Christoph Lameter c...@linux.com:
On Tue, 16 Dec 2014, Joonsoo Kim wrote:
Like this:
return d > 0 && d < page->objects * s->size;
Yes! That's what I'm looking for.
Christoph, how about above change?
Ok but now there is a multiplication in the fast path.
2014-12-16 18:15 GMT+03:00 Jesper Dangaard Brouer bro...@redhat.com:
On Tue, 16 Dec 2014 08:53:08 -0600 (CST)
Christoph Lameter c...@linux.com wrote:
On Tue, 16 Dec 2014, Joonsoo Kim wrote:
Like this:
return d > 0 && d < page->objects * s->size;
Yes! That's what I'm looking
= sizeof(int),
.mode = 0644,
.proc_handler = proc_dointvec,
},
This seems harmless, but it's better to use the int type here.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
include/linux/hugetlb.h | 2 +-
mm/hugetlb.c| 2
, which is 0 for unsigned types.
Reported-by: Dmitry Vyukov dvyu...@google.com
Suggested-by: Manfred Spraul manf...@colorfullife.com
Fixes: ed4d4902ebdd (mm, hugetlb: remove hugetlb_zero and hugetlb_infinity)
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
kernel/sysctl.c | 3 ---
1 file
kmalloc internally rounds up the allocation size.
So this is not a bug, but it makes KASan complain about
such accesses.
To avoid such reports we mark rounded up allocation size in
shadow as accessible.
Reported-by: Dmitry Vyukov dvyu...@google.com
Signed-off-by: Andrey Ryabinin a.ryabi
as accessible.
Code in the slub.c and slab_common.c files can validly access an object's
metadata, so instrumentation for these files is disabled.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
include/linux/kasan.h | 21
include/linux/slab.h | 11 --
lib/Kconfig.kasan
stack/global variables out of bounds accesses and so on).
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
lib/Kconfig.kasan | 8 ++
lib/Makefile | 1 +
lib/test_kasan.c | 254 ++
3 files changed, 263 insertions(+)
create mode
in build failure with CONFIG_KASAN=y and
CONFIG_OPTIMIZE_INLINING=y.
b) Add -fno-stack-protector for mm/kasan/kasan.c
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
arch/x86/kernel/cpu/common.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu
Wrap access to object's metadata in external functions with
metadata_access_enable()/metadata_access_disable() function calls.
These hooks separate payload accesses from metadata accesses,
which might be useful for different checkers (e.g. KASan).
Signed-off-by: Andrey Ryabinin a.ryabi
Remove static and add function declarations to mm/slab.h so they
can be used by the kernel address sanitizer.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
include/linux/slub_def.h | 5 +
mm/slub.c| 4 ++--
2 files changed, 7 insertions(+), 2 deletions(-)
diff
virt_to_obj takes a kmem_cache address, the address of the slab page,
and an address x pointing somewhere inside a slab object,
and returns the address of the beginning of the object.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
include/linux/slub_def.h | 5 +
1 file changed, 5 insertions(+)
diff --git
function calls (__asan_load*(addr),
__asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.
These functions check whether a memory region is valid to access
by checking the corresponding shadow memory.
If the access is not valid, an error is printed.
Andrey Ryabinin (11
Add kernel address sanitizer hooks to mark allocated page's addresses
as accessible in corresponding shadow region.
Mark freed pages as inaccessible.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
include/linux/kasan.h | 6 ++
mm/compaction.c | 2 ++
mm/kasan/kasan.c
call (__phys_addr)
__phys_addr is instrumented, so __asan_load could be called before
the shadow area is initialized.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
arch/x86/Kconfig | 1 +
arch/x86/boot/Makefile| 2 +
arch/x86/boot/compressed/Makefile | 2
...@gmail.com
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
Documentation/kasan.txt | 169
Makefile | 23 ++-
drivers/firmware/efi/libstub/Makefile | 1 +
include/linux/kasan.h | 42
include/linux/sched.h
kmalloc internally rounds up the allocation size, and kmemleak
uses the rounded-up size as the object's size. This makes KASan
complain while kmemleak scans memory or calculates an object's
checksum. The simplest solution here is to disable KASan.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
/slub.c: In function ‘on_freelist’:
mm/slub.c:905:4: warning: format ‘%d’ expects argument of type ‘int’, but
argument 5 has type ‘long unsigned int’ [-Wformat=]
should be %d, page->objects, max_objects);
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
Cc: Christoph Lameter c...@linux.com
of max_object from unsigned long to int.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
Cc: Christoph Lameter c...@linux.com
Cc: Pekka Enberg penb...@kernel.org
Cc: David Rientjes rient...@google.com
Cc: Joonsoo Kim iamjoonsoo@lge.com
---
Changes since v1:
- To fix the last warning change
/memcpy/memmove (inserts
__asan_load/__asan_store call before mem*() calls).
- branch profiling disabled for mm/kasan/kasan.c to avoid recursion
(__asan_load -> ftrace_likely_update -> __asan_load -> ...)
- kasan hooks for buddy allocator moved to right places
Andrey Ryabinin (12):
Add
Add kernel address sanitizer hooks to mark allocated page's addresses
as accessible in corresponding shadow region.
Mark freed pages as inaccessible.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
include/linux/kasan.h | 6 ++
mm/compaction.c | 2 ++
mm/kasan/kasan.c
Wrap access to object's metadata in external functions with
metadata_access_enable()/metadata_access_disable() function calls.
These hooks separate payload accesses from metadata accesses,
which might be useful for different checkers (e.g. KASan).
Signed-off-by: Andrey Ryabinin a.ryabi
kmalloc internally rounds up the allocation size, and kmemleak
uses the rounded-up size as the object's size. This makes KASan
complain while kmemleak scans memory or calculates an object's
checksum. The simplest solution here is to disable KASan.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
don't want to check memory accesses there.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
arch/x86/boot/compressed/eboot.c | 3 +--
arch/x86/boot/compressed/misc.h| 1 +
arch/x86/include/asm/string_64.h | 18 +-
arch/x86/kernel/x8664_ksyms_64.c
stack/global variables out of bounds accesses and so on).
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
lib/Kconfig.kasan | 8 ++
lib/Makefile | 1 +
lib/test_kasan.c | 254 ++
3 files changed, 263 insertions(+)
create mode
kmalloc internally rounds up the allocation size.
So this is not a bug, but it makes KASan complain about
such accesses.
To avoid such reports we mark rounded up allocation size in
shadow as accessible.
Reported-by: Dmitry Vyukov dvyu...@google.com
Signed-off-by: Andrey Ryabinin a.ryabi
as accessible.
Code in the slub.c and slab_common.c files can validly access an object's
metadata, so instrumentation for these files is disabled.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
include/linux/kasan.h | 21
include/linux/slab.h | 11 --
lib/Kconfig.kasan
/p/thread-sanitizer/wiki/FoundBugs
[4] https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
[5]
https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies
Based on work by Andrey Konovalov adech...@gmail.com
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
Remove static and add function declarations to mm/slab.h so they
can be used by the kernel address sanitizer.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
include/linux/slub_def.h | 5 +
mm/slub.c| 4 ++--
2 files changed, 7 insertions(+), 2 deletions(-)
diff
virt_to_obj takes a kmem_cache address, the address of the slab page,
and an address x pointing somewhere inside a slab object,
and returns the address of the beginning of the object.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
Acked-by: Christoph Lameter c...@linux.com
---
include/linux/slub_def.h | 5 +
1
call (__phys_addr)
__phys_addr is instrumented, so __asan_load could be called before
the shadow area is initialized.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
arch/x86/Kconfig | 1 +
arch/x86/boot/Makefile| 2 +
arch/x86/boot/compressed/Makefile | 2
in build failure with CONFIG_KASAN=y and
CONFIG_OPTIMIZE_INLINING=y.
b) Add -fno-stack-protector for mm/kasan/kasan.c
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
arch/x86/kernel/cpu/common.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu
On 12/04/2014 03:19 AM, Andrew Morton wrote:
On Wed, 3 Dec 2014 15:25:24 -0800 Andrew Morton a...@linux-foundation.org
wrote:
On Wed, 03 Dec 2014 15:41:21 +0300 Andrey Ryabinin a.ryabi...@samsung.com
wrote:
Use the 'unsigned long' type for 'zero' variable to fix this.
Changing type
Add might_sleep() calls to vfree(), kvfree() to catch potential
sleep-in-atomic bugs earlier.
Signed-off-by: Andrey Ryabinin
---
mm/util.c| 2 ++
mm/vmalloc.c | 2 ++
2 files changed, 4 insertions(+)
diff --git a/mm/util.c b/mm/util.c
index 7f1f165f46af..929ed1795bc1 100644
--- a/mm/util.c
vfree() might sleep if not called from interrupt context,
and so does kvfree(). Fix kvfree()'s misleading comment about
the allowed context.
Fixes: 04b8e946075d ("mm/util.c: improve kvfree() kerneldoc")
Signed-off-by: Andrey Ryabinin
---
mm/util.c | 2 +-
1 file changed, 1 insertion(+),
vfree() might sleep if not called from interrupt context. Explain
that in the comment.
Signed-off-by: Andrey Ryabinin
---
mm/vmalloc.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index a728fc492557..d00d42d6bf79 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
ply always kmalloc:
if ((flags & GFP_KERNEL) != GFP_KERNEL)
return kmalloc_node(size, flags, node);
So in the above case, kvfree() always frees kmalloced memory -> and never calls
vfree().
Signed-off-by: Andrey Ryabinin
---
mm/util.c | 2 --
1 file changed, 2
parse failed
Signed-off-by: Andrey Ryabinin
---
lib/dynamic_debug.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/lib/dynamic_debug.c b/lib/dynamic_debug.c
index f959c39..e488d9a 100644
--- a/lib/dynamic_debug.c
+++ b/lib/dynamic_debug.c
@@ -352,8 +352,10 @@ stat
parse_lineno() returns either a negative error code or zero.
We don't need to print anything here because if parse_lineno()
fails it will print an error message.
Signed-off-by: Andrey Ryabinin
---
lib/dynamic_debug.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/lib
Signed-off-by: Andrey Ryabinin
---
lib/dynamic_debug.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/lib/dynamic_debug.c b/lib/dynamic_debug.c
index e488d9a..7288e38 100644
--- a/lib/dynamic_debug.c
+++ b/lib/dynamic_debug.c
@@ -268,14 +268,12 @@ static int