anon_vma before freeing of anon_vma->root.
Cc: sta...@vger.kernel.org # v3.0+
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
mm/rmap.c | 7 ---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index 9c3e773..161bffc7 100644
--- a/mm/rmap.c
+++ b/mm
On 06/06/14 15:56, Peter Zijlstra wrote:
On Fri, Jun 06, 2014 at 03:30:55PM +0400, Andrey Ryabinin wrote:
While working on address sanitizer for the kernel I've discovered a use-after-free
bug in __put_anon_vma.
For the last anon_vma, anon_vma->root is freed before the child anon_vma.
Later in anon_vma_free
anon_vma before freeing of anon_vma->root.
Cc: sta...@vger.kernel.org # v3.0+
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
Changes since v1:
- just made it more simple following Peter's suggestion
mm/rmap.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm
On 05/07/14 00:13, Nicolas Pitre wrote:
Please push this patch into Russell's patch system.
Thanks.
Done - http://www.arm.linux.org.uk/developer/patches/viewpatch.php?id=8051/1
Thanks.
On 05/08/14 03:42, Afzal Mohammed wrote:
int is to be converted to unsigned char in memset, would having above
change immediately upon entry to memset rather than at a place where it
won't always execute make intention clearer ? (although it doesn't make
difference)
I think it's better to
On 05/08/14 11:59, Andrey Ryabinin wrote:
On 05/08/14 03:42, Afzal Mohammed wrote:
int is to be converted to unsigned char in memset, would having above
change immediately upon entry to memset rather than at a place where it
won't always execute make intention clearer ? (although it doesn't
On 05/08/14 12:38, Vladimir Murzin wrote:
Vladimir Murzin murzin.v at gmail.com writes:
Andrey Ryabinin a.ryabinin at samsung.com writes:
memset doesn't work right for the following example:
signed char c = 0xF0;
memset(addr, c, size);
Variable c is signed, so after typecasting
and THREAD_SIZE_ORDER.
Now changing the stack size becomes simply changing THREAD_SIZE_ORDER.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
arch/arm/include/asm/assembler.h | 8 +---
arch/arm/include/asm/thread_info.h | 3 ++-
2 files changed, 7 insertions(+), 4 deletions(-)
diff --git
Guarding section:
#ifndef MM_SLAB_H
#define MM_SLAB_H
...
#endif
currently doesn't cover the whole of mm/slab.h. It seems like this was
done unintentionally.
Wrap the whole file by moving the closing #endif to the end of it.
Signed-off-by: Andrey Ryabinin a.ryabi
On 06/18/14 18:31, Will Deacon wrote:
On Wed, Jun 18, 2014 at 02:50:22PM +0100, Andrey Ryabinin wrote:
Changing kernel stack size on arm is not as simple as it should be:
1) THREAD_SIZE macro doesn't respect PAGE_SIZE and THREAD_SIZE_ORDER
THREAD_SIZE
Yup, I just found some more typos in my
failed
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
lib/dynamic_debug.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/lib/dynamic_debug.c b/lib/dynamic_debug.c
index f959c39..e488d9a 100644
--- a/lib/dynamic_debug.c
+++ b/lib/dynamic_debug.c
@@ -352,8 +352,10
parse_lineno() returns either a negative error code or zero.
We don't need to print anything here because if parse_lineno
fails it will print an error message.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
lib/dynamic_debug.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
lib/dynamic_debug.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/lib/dynamic_debug.c b/lib/dynamic_debug.c
index e488d9a..7288e38 100644
--- a/lib/dynamic_debug.c
+++ b/lib/dynamic_debug.c
@@ -268,14 +268,12
.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
arch/arm/include/asm/uaccess.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index 12c3a5d..4b584ac 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b
in r1, so memory will be filled
with 0xFFF0 instead of the expected 0xF0F0F0F0.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
arch/arm/lib/memset.S | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/arm/lib/memset.S b/arch/arm/lib/memset.S
index 94b0650..a010f76
On 05/05/14 13:01, Russell King - ARM Linux wrote:
On Mon, May 05, 2014 at 10:13:58AM +0400, Andrey Ryabinin wrote:
According to the ARM procedure call standard the r2 register is call-clobbered.
So after the result of the x expression is put into r2, any following
function call in p may overwrite r2
.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
Reviewed-by: Nicolas Pitre n...@linaro.org
Cc: sta...@vger.kernel.org
---
Since v1:
- tmp_p variable renamed to __tmp_p
- added Reviewed-by tag
- added Cc: sta...@vger.kernel.org
arch/arm/include/asm/uaccess.h | 3 ++-
1 file changed, 2
We need to manually unpoison the rounded-up allocation size for dname
to avoid kasan's reports in __d_lookup_rcu.
__d_lookup_rcu may validly read a little beyond the allocated size.
Reported-by: Dmitry Vyukov dvyu...@google.com
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
fs/dcache.c | 3
Some code in slub could validly touch memory marked by kasan as inaccessible.
Even though slub.c isn't instrumented, functions called from it are,
so to avoid false positive reports such places are protected by
kasan_disable_local()/kasan_enable_local() calls.
Signed-off-by: Andrey
To avoid build errors, the compiler instrumentation used for the kernel
address sanitizer must be disabled for code not linked with the kernel.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
arch/arm/boot/compressed/Makefile | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/arm/boot
This patch shares virt_to_cache() between slab and slub and
it is used in cache_from_obj() now.
Later virt_to_cache() will be used by the kernel address sanitizer as well.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
mm/slab.c | 6 --
mm/slab.h | 10 +++---
2 files changed, 7 insertions
Now everything in arm code is ready for kasan. Enable it.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
arch/arm/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index c52d1ca..c62db6c 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
This is a test module doing various nasty things like
out-of-bounds accesses and use-after-free. It is useful for testing
kernel debugging features like the kernel address sanitizer.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
lib/Kconfig.debug | 8 ++
lib/Makefile
as accessible by kasan_krealloc call.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
include/linux/kasan.h | 22 ++
include/linux/slab.h | 19 +++--
lib/Kconfig.kasan | 2 +
mm/kasan/kasan.c | 110 ++
mm/kasan/kasan.h
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
arch/arm/mm/init.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 659c75d..02fce2c 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -22,6 +22,7 @@
#include <linux/memblock.h>
Instrumentation of these files may result in an unbootable machine.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
arch/x86/kernel/cpu/Makefile | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
index 7fd54f0..a7bb360 100644
When a caller creates a new kmem_cache, the requested size of the kmem_cache
will be stored in alloc_size. Later alloc_size will be used by the
kernel address sanitizer to mark alloc_size bytes of a slab object as
accessible and the rest of its size as redzone.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
,
in such case put #undef KASAN_HOOKS before includes.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
arch/arm/include/asm/string.h | 30 ++
1 file changed, 30 insertions(+)
diff --git a/arch/arm/include/asm/string.h b/arch/arm/include/asm/string.h
index cf4f3aa
for one function.
We could disable the compiler's instrumentation for one function by using
__attribute__((no_sanitize_address)).
But the problem here is that the memset call will be replaced by the instrumented
version kasan_memset since currently it's implemented as a define:
Signed-off-by: Andrey Ryabinin a.ryabi
Remove static and add function declarations to mm/slab.h so they
could be used by kernel address sanitizer.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
mm/slab.h | 5 +
mm/slub.c | 4 ++--
2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/mm/slab.h b/mm/slab.h
Code in the slub.c and slab_common.c files could validly access objects'
redzones
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
mm/Makefile | 2 ++
1 file changed, 2 insertions(+)
diff --git a/mm/Makefile b/mm/Makefile
index 6a9c3f8..59cc184 100644
--- a/mm/Makefile
+++ b/mm/Makefile
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
arch/x86/mm/init.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index f971306..d9925ee 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -4,6 +4,7 @@
#include <linux/swap.h>
Cc: linux-kbu...@vger.kernel.org
Cc: linux-arm-ker...@lists.infradead.org
Cc: x...@kernel.org
Cc: linux...@kvack.org
Andrey Ryabinin (21):
Add kernel address sanitizer infrastructure.
init: main: initialize kasan's shadow area on boot
x86: add kasan hooks for memcpy/memmove/memset functions
To avoid build errors, compiler's instrumentation must be disabled
for code not linked with kernel image.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
arch/x86/boot/Makefile| 2 ++
arch/x86/boot/compressed/Makefile | 2 ++
arch/x86/realmode/Makefile| 2 +-
arch
] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
Documentation/kasan.txt | 224 +
Makefile| 8 +-
commit | 3 +
include/linux/kasan.h | 33
-by: Andrey Ryabinin a.ryabi...@samsung.com
---
init/main.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/init/main.c b/init/main.c
index bb1aed9..d06a636 100644
--- a/init/main.c
+++ b/init/main.c
@@ -78,6 +78,7 @@
#include <linux/context_tracking.h>
#include <linux/random.h>
Add kernel address sanitizer hooks to mark allocated page's addresses
as accessible in corresponding shadow region.
Mark freed pages as inaccessible.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
include/linux/kasan.h | 6 ++
mm/Makefile | 2 ++
mm/kasan/kasan.c
,
in such case put #undef KASAN_HOOKS before includes.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
arch/x86/include/asm/string_32.h | 28
arch/x86/include/asm/string_64.h | 24
arch/x86/lib/Makefile| 2 ++
3 files changed
Now everything in x86 code is ready for kasan. Enable it.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
arch/x86/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 8657c06..f9863b3 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
On 07/09/14 18:26, Christoph Lameter wrote:
On Wed, 9 Jul 2014, Andrey Ryabinin wrote:
+
+Markers of inaccessible bytes can be found in the mm/kasan/kasan.h header:
+
+#define KASAN_FREE_PAGE 0xFF /* page was freed */
+#define KASAN_PAGE_REDZONE 0xFE /* redzone
On 07/09/14 18:29, Christoph Lameter wrote:
On Wed, 9 Jul 2014, Andrey Ryabinin wrote:
Remove static and add function declarations to mm/slab.h so they
could be used by kernel address sanitizer.
Hmmm... This is allocator specific. At some future point it would be good
to move error
On 07/09/14 18:32, Christoph Lameter wrote:
On Wed, 9 Jul 2014, Andrey Ryabinin wrote:
To avoid false positive reports in kernel address sanitizer krealloc/kzfree
functions shouldn't be instrumented. Since we want to instrument other
functions in mm/util.c, krealloc/kzfree moved
On 07/09/14 18:33, Christoph Lameter wrote:
On Wed, 9 Jul 2014, Andrey Ryabinin wrote:
When caller creates new kmem_cache, requested size of kmem_cache
will be stored in alloc_size. Later alloc_size will be used by
kernel address sanitizer to mark alloc_size of slab object as
accessible
On 07/09/14 18:48, Christoph Lameter wrote:
On Wed, 9 Jul 2014, Andrey Ryabinin wrote:
With this patch kasan will be able to catch bugs in memory allocated
by slub.
When a slab page is allocated, the whole page is marked as inaccessible
in the corresponding shadow memory.
On allocation of slub object
On 07/09/14 23:29, Andi Kleen wrote:
Andrey Ryabinin a.ryabi...@samsung.com writes:
Seems like a useful facility. Thanks for working on it. Overall the code
looks fairly good. Some comments below.
+
+Address sanitizer for kernel (KASAN) is a dynamic memory error detector. It
provides
On 07/10/14 00:26, Dave Hansen wrote:
On 07/09/2014 04:29 AM, Andrey Ryabinin wrote:
Address sanitizer dedicates 1/8 of the low memory to the shadow memory and
uses direct
mapping with a scale and offset to translate a memory address to its
corresponding
shadow address.
Here is function
On 07/10/14 15:55, Sasha Levin wrote:
On 07/09/2014 07:29 AM, Andrey Ryabinin wrote:
Address sanitizer for kernel (kasan) is a dynamic memory error detector.
The main features of kasan are:
- is based on compiler instrumentation (fast),
- detects out of bounds for both writes and reads
On 07/09/14 23:33, Andi Kleen wrote:
Andrey Ryabinin a.ryabi...@samsung.com writes:
Instrumentation of these files may result in an unbootable machine.
This doesn't make sense. Is the code not NMI safe?
If yes that would need to be fixed because
Please debug more.
Sure.
It turns out
On 07/10/14 17:31, Sasha Levin wrote:
On 07/10/2014 09:01 AM, Andrey Ryabinin wrote:
On 07/10/14 15:55, Sasha Levin wrote:
On 07/09/2014 07:29 AM, Andrey Ryabinin wrote:
Address sanitizer for kernel (kasan) is a dynamic memory error detector.
The main features of kasan is:
- is based
On 07/09/14 23:31, Andi Kleen wrote:
Andrey Ryabinin a.ryabi...@samsung.com writes:
+
+#undef memcpy
+void *kasan_memset(void *ptr, int val, size_t len);
+void *kasan_memcpy(void *dst, const void *src, size_t len);
+void *kasan_memmove(void *dst, const void *src, size_t len);
+
+#define
On 07/10/14 01:59, Vegard Nossum wrote:
On 9 July 2014 23:44, Andi Kleen a...@firstfloor.org wrote:
Dave Hansen dave.han...@intel.com writes:
You're also claiming that KASAN is better than all of
better as in finding more bugs, but surely not better as in
do so with less overhead
2014-07-10 18:02 GMT+04:00 Sasha Levin sasha.le...@oracle.com:
On 07/10/2014 09:39 AM, Andrey Ryabinin wrote:
Anyways, the machine won't boot with more than 1GB of RAM, is there a
solution to
get KASAN running on my machine?
Could you share your .config? I'll try to boot it myself
2014-07-10 19:55 GMT+04:00 Dave Hansen dave.han...@intel.com:
On 07/10/2014 05:12 AM, Andrey Ryabinin wrote:
On 07/10/14 00:26, Dave Hansen wrote:
On 07/09/2014 04:29 AM, Andrey Ryabinin wrote:
Address sanitizer dedicates 1/8 of the low memory to the shadow memory and
uses direct
mapping
The functions krealloc(), __krealloc() and kzfree() belong to the slab API,
so they should be placed in slab_common.c.
Also move the slab allocator's tracepoint definitions to slab_common.c.
No functional changes here.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
Acked-by: Christoph Lameter c...@linux.com
are exiting the loop and
never use p; such behaviour is undefined and should be avoided.
Fix this by moving the pointer dereference to the beginning of the loop, right
before we use it.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
lib/idr.c | 25 ++---
1 file
On 06/24/14 05:26, Lai Jiangshan wrote:
On 06/23/2014 09:37 PM, Andrey Ryabinin wrote:
I'm working on the address sanitizer project for the kernel. Recently we started
experiments with stack instrumentation, to detect out-of-bounds
read/write bugs on stack.
Just after booting I've hit out-of-bounds
On 06/24/14 11:48, Lai Jiangshan wrote:
326cf0f0f308 (idr: fix top layer handling) enlarged the pa array.
But the additional +1 space is only used in id-allocation; it is free
in other usages (paa may point to the additional +1 space, but never
dereferences it),
so you can reuse it.
In the
On 06/18/14 18:40, Nicolas Pitre wrote:
On Wed, 18 Jun 2014, Andrey Ryabinin wrote:
Changing kernel stack size on arm is not as simple as it should be:
1) THREAD_SIZE macro doesn't respect PAGE_SIZE and THREAD_SIZE_ORDER
2) stack size is hardcoded in get_thread_info macro
This patch fixes
On 06/19/14 00:22, David Rientjes wrote:
On Wed, 18 Jun 2014, Andrey Ryabinin wrote:
Guarding section:
#ifndef MM_SLAB_H
#define MM_SLAB_H
...
#endif
currently doesn't cover the whole mm/slab.h. It seems like it was
done unintentionally.
Wrap the whole file
in the allocation path for SLUB_DEBUG=n and all
other debugging features disabled. A might_sleep_if() call can generate some code
even if DEBUG_ATOMIC_SLEEP=n. For PREEMPT_VOLUNTARY=y might_sleep() inserts
a _cond_resched() call, but I think it should be ok.
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
2014-06-20 1:06 GMT+04:00 Andrew Morton a...@linux-foundation.org:
On Thu, 19 Jun 2014 15:56:56 -0500 (CDT) Christoph Lameter c...@gentwo.org
wrote:
On Thu, 19 Jun 2014, Andrey Ryabinin wrote:
I see no reason why calls to other debugging subsystems (LOCKDEP,
DEBUG_ATOMIC_SLEEP, KMEMCHECK
2014-07-12 4:59 GMT+04:00 H. Peter Anvin h...@zytor.com:
On 07/09/2014 04:00 AM, Andrey Ryabinin wrote:
Address sanitizer dedicates 1/8 of the low memory to the shadow memory and
uses direct
mapping with a scale and offset to translate a memory address to its
corresponding
shadow address
On 07/14/14 13:04, Peter Zijlstra wrote:
On Sun, Jul 13, 2014 at 07:45:56PM -0400, Sasha Levin wrote:
On 07/13/2014 05:51 PM, Sasha Levin wrote:
Hi all,
While fuzzing with trinity inside a KVM tools guest running the latest -next
kernel with the KASAN patchset, I've stumbled on the following
On 07/14/14 13:58, Peter Zijlstra wrote:
On Mon, Jul 14, 2014 at 01:34:40PM +0400, Andrey Ryabinin wrote:
On 07/14/14 13:04, Peter Zijlstra wrote:
On Sun, Jul 13, 2014 at 07:45:56PM -0400, Sasha Levin wrote:
On 07/13/2014 05:51 PM, Sasha Levin wrote:
Hi all,
While fuzzing with trinity
On 07/14/14 18:49, Oleg Nesterov wrote:
On 07/14, Peter Zijlstra wrote:
On Sun, Jul 13, 2014 at 07:45:56PM -0400, Sasha Levin wrote:
[ 876.319044]
==
[ 876.319044] AddressSanitizer: use after free in
-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
arch/arm/mach-iop13xx/include/mach/iop13xx.h | 2 +-
arch/arm/mach-iop13xx/setup.c| 1 +
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/arm/mach-iop13xx/include/mach/iop13xx.h
b/arch/arm/mach-iop13xx/include
On 07/15/14 09:52, Joonsoo Kim wrote:
On Wed, Jul 09, 2014 at 03:30:02PM +0400, Andrey Ryabinin wrote:
Add kernel address sanitizer hooks to mark allocated page's addresses
as accessible in corresponding shadow region.
Mark freed pages as inaccessible.
Signed-off-by: Andrey Ryabinin a.ryabi
On 07/15/14 09:53, Joonsoo Kim wrote:
On Wed, Jul 09, 2014 at 03:30:04PM +0400, Andrey Ryabinin wrote:
This patch shares virt_to_cache() between slab and slub and
it is used in cache_from_obj() now.
Later virt_to_cache() will be used by the kernel address sanitizer as well.
I think that this patch won't
On 07/15/14 10:04, Joonsoo Kim wrote:
On Wed, Jul 09, 2014 at 03:30:08PM +0400, Andrey Ryabinin wrote:
Some code in slub could validly touch memory marked by kasan as inaccessible.
Even though slub.c isn't instrumented, functions called from it are
instrumented,
so to avoid false positive
On 07/15/14 10:09, Joonsoo Kim wrote:
On Wed, Jul 09, 2014 at 03:30:09PM +0400, Andrey Ryabinin wrote:
With this patch kasan will be able to catch bugs in memory allocated
by slub.
When a slab page is allocated, the whole page is marked as inaccessible
in the corresponding shadow memory.
On allocation
On 07/15/14 10:12, Joonsoo Kim wrote:
On Wed, Jul 09, 2014 at 03:30:14PM +0400, Andrey Ryabinin wrote:
We need to manually unpoison rounded up allocation size for dname
to avoid kasan's reports in __d_lookup_rcu.
__d_lookup_rcu may validly read a little beyond the allocated size.
If it read
On 07/15/14 12:18, Joonsoo Kim wrote:
On Tue, Jul 15, 2014 at 11:37:56AM +0400, Andrey Ryabinin wrote:
On 07/15/14 10:04, Joonsoo Kim wrote:
On Wed, Jul 09, 2014 at 03:30:08PM +0400, Andrey Ryabinin wrote:
Some code in slub could validly touch memory marked by kasan as
inaccessible.
Even
On 07/14/14 19:13, Christoph Lameter wrote:
On Sun, 13 Jul 2014, Andrey Ryabinin wrote:
How does that work when memory is sparsely populated?
Sparsemem configurations currently may not work with kasan.
I suppose I will have to move shadow area to vmalloc address space and
make it (shadow
On 07/15/14 18:26, Christoph Lameter wrote:
On Tue, 15 Jul 2014, Joonsoo Kim wrote:
I think putting disable/enable only where we strictly need them might be a
problem for future maintenance of slub.
If someone is going to add a new function call somewhere, he must ensure
that this call
On 07/04/14 00:24, Arnd Bergmann wrote:
On Wednesday 18 June 2014, Andrey Ryabinin wrote:
diff --git a/arch/arm/include/asm/thread_info.h
b/arch/arm/include/asm/thread_info.h
index f989d7c..f85d2b0 100644
--- a/arch/arm/include/asm/thread_info.h
+++ b/arch/arm/include/asm/thread_info.h
On 07/04/14 14:27, Arnd Bergmann wrote:
On Friday 04 July 2014 11:13:31 Andrey Ryabinin wrote:
but I wonder if there is a way to avoid the extra include here, as it might
also
cause a general slowdown because of asm/memory.h getting pulled into more .c
files. Would it be reasonable
: a9b0f861 (mm: nominate faultaround area in bytes rather than page order)
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
Reported-by: Sasha Levin sasha.le...@oracle.com
Cc: sta...@vger.kernel.org # 3.15.x
Cc: Kirill A. Shutemov kirill.shute...@linux.intel.com
Cc: Mel Gorman mgor...@suse.de
Cc: Rik