[PINGv2][PATCH] Asan optimization for aligned accesses.
On 09/10/2014 04:30 PM, Marat Zakirov wrote:
On 09/02/2014 07:09 PM, Marat Zakirov wrote:

Hi all!

Here's a simple optimization patch for Asan.  It stores alignment
information into ASAN_CHECK, which is then extracted by sanopt to reduce
the number of "and 0x7" instructions for sufficiently aligned accesses.

I checked it on the Linux kernel by comparing the output of
"objdump -d -j .text vmlinux | grep 'and.*0x7'" for the optimized and
regular cases.  It eliminates 12% of the "and 0x7"s.

No regressions.  Sanitized GCC was successfully Asan-bootstrapped.  No
false positives were found in the kernel.

--Marat

gcc/ChangeLog:

2014-09-02  Marat Zakirov  m.zaki...@samsung.com

	* asan.c (build_check_stmt): Add alignment argument.
	(asan_expand_check_ifn): Optimization for alignment >= 8.

gcc/testsuite/ChangeLog:

2014-09-02  Marat Zakirov  m.zaki...@samsung.com

	* c-c++-common/asan/red-align-1.c: New test.
	* c-c++-common/asan/red-align-2.c: New test.

diff --git a/gcc/asan.c b/gcc/asan.c
index 58e7719..aed5ede 100644
--- a/gcc/asan.c
+++ b/gcc/asan.c
@@ -1639,9 +1639,11 @@ build_check_stmt (location_t loc, tree base, tree len,
   if (end_instrumented)
     flags |= ASAN_CHECK_END_INSTRUMENTED;
 
-  g = gimple_build_call_internal (IFN_ASAN_CHECK, 3,
+  g = gimple_build_call_internal (IFN_ASAN_CHECK, 4,
 				  build_int_cst (integer_type_node, flags),
-				  base, len);
+				  base, len,
+				  build_int_cst (integer_type_node,
+						 align/BITS_PER_UNIT));
   gimple_set_location (g, loc);
   if (before_p)
     gsi_insert_before (gsi, g, GSI_SAME_STMT);
@@ -2434,6 +2436,7 @@ asan_expand_check_ifn (gimple_stmt_iterator *iter, bool use_calls)
   tree base = gimple_call_arg (g, 1);
   tree len = gimple_call_arg (g, 2);
+  HOST_WIDE_INT align = tree_to_shwi (gimple_call_arg (g, 3));
   HOST_WIDE_INT size_in_bytes
     = is_scalar_access && tree_fits_shwi_p (len) ? tree_to_shwi (len) : -1;
@@ -2547,7 +2550,10 @@ asan_expand_check_ifn (gimple_stmt_iterator *iter, bool use_calls)
       gimple shadow_test = build_assign (NE_EXPR, shadow, 0);
       gimple_seq seq = NULL;
       gimple_seq_add_stmt (&seq, shadow_test);
-      gimple_seq_add_stmt (&seq, build_assign (BIT_AND_EXPR, base_addr, 7));
+      /* Aligned (>= 8 bytes) accesses do not need & 7.  */
+      if (align < 8)
+	gimple_seq_add_stmt (&seq, build_assign (BIT_AND_EXPR,
+						 base_addr, 7));
       gimple_seq_add_stmt (&seq, build_type_cast (shadow_type,
 						  gimple_seq_last (seq)));
       if (real_size_in_bytes > 1)
diff --git a/gcc/internal-fn.def b/gcc/internal-fn.def
index 7ae60f3..54ade9f 100644
--- a/gcc/internal-fn.def
+++ b/gcc/internal-fn.def
@@ -55,4 +55,4 @@ DEF_INTERNAL_FN (UBSAN_CHECK_SUB, ECF_CONST | ECF_LEAF | ECF_NOTHROW, NULL)
 DEF_INTERNAL_FN (UBSAN_CHECK_MUL, ECF_CONST | ECF_LEAF | ECF_NOTHROW, NULL)
 DEF_INTERNAL_FN (ABNORMAL_DISPATCHER, ECF_NORETURN, NULL)
 DEF_INTERNAL_FN (BUILTIN_EXPECT, ECF_CONST | ECF_LEAF | ECF_NOTHROW, NULL)
-DEF_INTERNAL_FN (ASAN_CHECK, ECF_TM_PURE | ECF_LEAF | ECF_NOTHROW, ".W..")
+DEF_INTERNAL_FN (ASAN_CHECK, ECF_TM_PURE | ECF_LEAF | ECF_NOTHROW, ".W...")
diff --git a/gcc/testsuite/c-c++-common/asan/red-align-1.c b/gcc/testsuite/c-c++-common/asan/red-align-1.c
new file mode 100644
index 000..1edb3a2
--- /dev/null
+++ b/gcc/testsuite/c-c++-common/asan/red-align-1.c
@@ -0,0 +1,20 @@
+/* This tests aligment propagation to structure elem and
+   abcense of redudant & 7.  */
+
+/* { dg-options "-fdump-tree-sanopt" } */
+/* { dg-do compile } */
+/* { dg-skip-if "" { *-*-* } { "-flto" } { "" } } */
+
+struct st {
+  int a;
+  int b;
+  int c;
+} __attribute__((aligned(16)));
+
+int foo (struct st * s_p)
+{
+  return s_p->a;
+}
+
+/* { dg-final { scan-tree-dump-times "& 7" 0 "sanopt" } } */
+/* { dg-final { cleanup-tree-dump "sanopt" } } */
diff --git a/gcc/testsuite/c-c++-common/asan/red-align-2.c b/gcc/testsuite/c-c++-common/asan/red-align-2.c
new file mode 100644
index 000..161fe3c
--- /dev/null
+++ b/gcc/testsuite/c-c++-common/asan/red-align-2.c
@@ -0,0 +1,20 @@
+/* This tests aligment propagation to structure elem and
+   abcense of redudant & 7.  */
+
+/* { dg-options "-fdump-tree-sanopt" } */
+/* { dg-do compile } */
+/* { dg-skip-if "" { *-*-* } { "-flto" } { "" } } */
+
+struct st {
+  int a;
+  int b;
+  int c;
+} __attribute__((aligned(16)));
+
+int foo (struct st * s_p)
+{
+  return s_p->b;
+}
+
+/* { dg-final { scan-tree-dump-times "& 7" 1 "sanopt" } } */
+/* { dg-final { cleanup-tree-dump "sanopt" } } */
Re: [PINGv2][PATCH] Asan optimization for aligned accesses.
On Tue, Sep 16, 2014 at 06:59:57PM +0400, Marat Zakirov wrote:
> --- a/gcc/asan.c
> +++ b/gcc/asan.c
> @@ -1639,9 +1639,11 @@ build_check_stmt (location_t loc, tree base, tree len,
>    if (end_instrumented)
>      flags |= ASAN_CHECK_END_INSTRUMENTED;
>  
> -  g = gimple_build_call_internal (IFN_ASAN_CHECK, 3,
> +  g = gimple_build_call_internal (IFN_ASAN_CHECK, 4,
>  				  build_int_cst (integer_type_node, flags),
> -				  base, len);
> +				  base, len,
> +				  build_int_cst (integer_type_node,
> +						 align/BITS_PER_UNIT));

Formatting.  Spaces should be around / (both before and after).

> --- /dev/null
> +++ b/gcc/testsuite/c-c++-common/asan/red-align-1.c
> @@ -0,0 +1,20 @@
> +/* This tests aligment propagation to structure elem and
> +   abcense of redudant & 7.  */

"absence of redundant"

> --- /dev/null
> +++ b/gcc/testsuite/c-c++-common/asan/red-align-2.c
> @@ -0,0 +1,20 @@
> +/* This tests aligment propagation to structure elem and
> +   abcense of redudant & 7.  */

Likewise.

Otherwise, LGTM.

	Jakub
[PING][PATCH] Asan optimization for aligned accesses.
On 09/02/2014 07:09 PM, Marat Zakirov wrote:
> Hi all!
>
> Here's a simple optimization patch for Asan.  It stores alignment
> information into ASAN_CHECK, which is then extracted by sanopt to reduce
> the number of "and 0x7" instructions for sufficiently aligned accesses.
> [...]
[PATCH] Asan optimization for aligned accesses.
Sorry for wrong subject!

On 09/02/2014 07:03 PM, Marat Zakirov wrote:
> Hi all!
>
> Here's a simple optimization patch for Asan.  It stores alignment
> information into ASAN_CHECK, which is then extracted by sanopt to reduce
> the number of "and 0x7" instructions for sufficiently aligned accesses.
> [...]