Re: [PATCH] PR tree-optimization/90836 Missing popcount pattern matching
On Mon, Oct 7, 2019 at 3:05 AM Richard Biener wrote: > > On Tue, Oct 1, 2019 at 1:48 PM Dmitrij Pochepko > wrote: > > > > Hi Richard, > > > > I updated patch according to all your comments. > > Also bootstrapped and tested again on x86_64-pc-linux-gnu and > > aarch64-linux-gnu, which took some time. > > > > attached v3. > > OK. This introduced PR 93098 (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93098 ). Thanks, Andrew Pinski > > Thanks, > Richard. > > > Thanks, > > Dmitrij > > > > On Thu, Sep 26, 2019 at 09:47:04AM +0200, Richard Biener wrote: > > > On Tue, Sep 24, 2019 at 5:29 PM Dmitrij Pochepko > > > wrote: > > > > > > > > Hi, > > > > > > > > can anybody take a look at v2? > > > > > > +(if (tree_to_uhwi (@4) == 1 > > > + && tree_to_uhwi (@10) == 2 && tree_to_uhwi (@5) == 4 > > > > > > those will still ICE for large __int128_t constants. Since you do not > > > match > > > any conversions you should probably restrict the precision of 'type' like > > > with > > >(if (TYPE_PRECISION (type) <= 64 > > > && tree_to_uhwi (@4) ... > > > > > > likewise tree_to_uhwi will fail for negative constants thus if the > > > pattern assumes > > > unsigned you should verify that as well with && TYPE_UNSIGNED (type). > > > > > > Your 'argtype' is simply 'type' so you can elide it. > > > > > > + (switch > > > + (if (types_match (argtype, long_long_unsigned_type_node)) > > > + (convert (BUILT_IN_POPCOUNTLL:integer_type_node @0))) > > > + (if (types_match (argtype, long_unsigned_type_node)) > > > + (convert (BUILT_IN_POPCOUNTL:integer_type_node @0))) > > > + (if (types_match (argtype, unsigned_type_node)) > > > + (convert (BUILT_IN_POPCOUNT:integer_type_node @0))) > > > > > > Please test small types first so we can avoid popcountll when long == > > > long long > > > or long == int. 
I also wonder if we really want to use the builtins and > > > check optab availability or if we nowadays should use > > > direct_internal_fn_supported_p (IFN_POPCOUNT, integer_type_node, type, > > > OPTIMIZE_FOR_BOTH) and > > > > > > (convert (IFN_POPCOUNT:type @0)) > > > > > > without the switch? > > > > > > Thanks, > > > Richard. > > > > > > > Thanks, > > > > Dmitrij > > > > > > > > On Mon, Sep 09, 2019 at 10:03:40PM +0300, Dmitrij Pochepko wrote: > > > > > Hi all. > > > > > > > > > > Please take a look at v2 (attached). > > > > > I changed patch according to review comments. The same testing was > > > > > performed again. > > > > > > > > > > Thanks, > > > > > Dmitrij > > > > > > > > > > On Thu, Sep 05, 2019 at 06:34:49PM +0300, Dmitrij Pochepko wrote: > > > > > > This patch adds matching for Hamming weight (popcount) > > > > > > implementation. The following sources: > > > > > > > > > > > > int > > > > > > foo64 (unsigned long long a) > > > > > > { > > > > > > unsigned long long b = a; > > > > > > b -= ((b>>1) & 0x5555555555555555ULL); > > > > > > b = ((b>>2) & 0x3333333333333333ULL) + (b & > > > > > > 0x3333333333333333ULL); > > > > > > b = ((b>>4) + b) & 0x0F0F0F0F0F0F0F0FULL; > > > > > > b *= 0x0101010101010101ULL; > > > > > > return (int)(b >> 56); > > > > > > } > > > > > > > > > > > > and > > > > > > > > > > > > int > > > > > > foo32 (unsigned int a) > > > > > > { > > > > > > unsigned long b = a; > > > > > > b -= ((b>>1) & 0x55555555UL); > > > > > > b = ((b>>2) & 0x33333333UL) + (b & 0x33333333UL); > > > > > > b = ((b>>4) + b) & 0x0F0F0F0FUL; > > > > > > b *= 0x01010101UL; > > > > > > return (int)(b >> 24); > > > > > > } > > > > > > > > > > > > and equivalents are now recognized as popcount for platforms with > > > > > > hw popcount support. Bootstrapped and tested on x86_64-pc-linux-gnu > > > > > > and aarch64-linux-gnu systems with no regressions.
> > > > > > > > > > > > (I have no write access to repo) > > > > > > > > > > > > Thanks, > > > > > > Dmitrij > > > > > > > > > > > > > > > > > > gcc/ChangeLog: > > > > > > > > > > > > PR tree-optimization/90836 > > > > > > > > > > > > * gcc/match.pd (popcount): New pattern. > > > > > > > > > > > > gcc/testsuite/ChangeLog: > > > > > > > > > > > > PR tree-optimization/90836 > > > > > > > > > > > > * lib/target-supports.exp (check_effective_target_popcount) > > > > > > (check_effective_target_popcountll): New effective targets. > > > > > > * gcc.dg/tree-ssa/popcount4.c: New test. > > > > > > * gcc.dg/tree-ssa/popcount4l.c: New test. > > > > > > * gcc.dg/tree-ssa/popcount4ll.c: New test. > > > > > > > > > > > diff --git a/gcc/match.pd b/gcc/match.pd > > > > > > index 0317bc7..b1867bf 100644 > > > > > > --- a/gcc/match.pd > > > > > > +++ b/gcc/match.pd > > > > > > @@ -5358,6 +5358,70 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT) > > > > > >(cmp (popcount @0) integer_zerop) > > > > > >(rep @0 { build_zero_cst (TREE_TYPE (@0)); } > > > > > > > > > > > > +/* 64- and 32-bits branchless
Re: [PATCH] PR tree-optimization/90836 Missing popcount pattern matching
On Mon, 2019-10-07 at 12:04 +0200, Richard Biener wrote: > On Tue, Oct 1, 2019 at 1:48 PM Dmitrij Pochepko > wrote: > > > > Hi Richard, > > > > I updated patch according to all your comments. > > Also bootstrapped and tested again on x86_64-pc-linux-gnu and > > aarch64-linux-gnu, which took some time. > > > > attached v3. > > OK. > > Thanks, > Richard. Dmitrij, I checked in this patch for you. Steve Ellcey sell...@marvell.com
Re: [PATCH] PR tree-optimization/90836 Missing popcount pattern matching
On Tue, Oct 1, 2019 at 1:48 PM Dmitrij Pochepko wrote: > > Hi Richard, > > I updated patch according to all your comments. > Also bootstrapped and tested again on x86_64-pc-linux-gnu and > aarch64-linux-gnu, which took some time. > > attached v3. OK. Thanks, Richard. > Thanks, > Dmitrij > > On Thu, Sep 26, 2019 at 09:47:04AM +0200, Richard Biener wrote: > > On Tue, Sep 24, 2019 at 5:29 PM Dmitrij Pochepko > > wrote: > > > > > > Hi, > > > > > > can anybody take a look at v2? > > > > +(if (tree_to_uhwi (@4) == 1 > > + && tree_to_uhwi (@10) == 2 && tree_to_uhwi (@5) == 4 > > > > those will still ICE for large __int128_t constants. Since you do not match > > any conversions you should probably restrict the precision of 'type' like > > with > >(if (TYPE_PRECISION (type) <= 64 > > && tree_to_uhwi (@4) ... > > > > likewise tree_to_uhwi will fail for negative constants thus if the > > pattern assumes > > unsigned you should verify that as well with && TYPE_UNSIGNED (type). > > > > Your 'argtype' is simply 'type' so you can elide it. > > > > + (switch > > + (if (types_match (argtype, long_long_unsigned_type_node)) > > + (convert (BUILT_IN_POPCOUNTLL:integer_type_node @0))) > > + (if (types_match (argtype, long_unsigned_type_node)) > > + (convert (BUILT_IN_POPCOUNTL:integer_type_node @0))) > > + (if (types_match (argtype, unsigned_type_node)) > > + (convert (BUILT_IN_POPCOUNT:integer_type_node @0))) > > > > Please test small types first so we can avoid popcountll when long == long > > long > > or long == int. I also wonder if we really want to use the builtins and > > check optab availability or if we nowadays should use > > direct_internal_fn_supported_p (IFN_POPCOUNT, integer_type_node, type, > > OPTIMIZE_FOR_BOTH) and > > > > (convert (IFN_POPCOUNT:type @0)) > > > > without the switch? > > > > Thanks, > > Richard. > > > > > Thanks, > > > Dmitrij > > > > > > On Mon, Sep 09, 2019 at 10:03:40PM +0300, Dmitrij Pochepko wrote: > > > > Hi all. 
> > > > > > > > Please take a look at v2 (attached). > > > > I changed patch according to review comments. The same testing was > > > > performed again. > > > > > > > > Thanks, > > > > Dmitrij > > > > > > > > On Thu, Sep 05, 2019 at 06:34:49PM +0300, Dmitrij Pochepko wrote: > > > > > This patch adds matching for Hamming weight (popcount) > > > > > implementation. The following sources: > > > > > > > > > > int > > > > > foo64 (unsigned long long a) > > > > > { > > > > > unsigned long long b = a; > > > > > b -= ((b>>1) & 0x5555555555555555ULL); > > > > > b = ((b>>2) & 0x3333333333333333ULL) + (b & > > > > > 0x3333333333333333ULL); > > > > > b = ((b>>4) + b) & 0x0F0F0F0F0F0F0F0FULL; > > > > > b *= 0x0101010101010101ULL; > > > > > return (int)(b >> 56); > > > > > } > > > > > > > > > > and > > > > > > > > > > int > > > > > foo32 (unsigned int a) > > > > > { > > > > > unsigned long b = a; > > > > > b -= ((b>>1) & 0x55555555UL); > > > > > b = ((b>>2) & 0x33333333UL) + (b & 0x33333333UL); > > > > > b = ((b>>4) + b) & 0x0F0F0F0FUL; > > > > > b *= 0x01010101UL; > > > > > return (int)(b >> 24); > > > > > } > > > > > > > > > > and equivalents are now recognized as popcount for platforms with hw > > > > > popcount support. Bootstrapped and tested on x86_64-pc-linux-gnu and > > > > > aarch64-linux-gnu systems with no regressions. > > > > > > > > > > (I have no write access to repo) > > > > > > > > > > Thanks, > > > > > Dmitrij > > > > > > > > > > > > > > > gcc/ChangeLog: > > > > > > > > > > PR tree-optimization/90836 > > > > > > > > > > * gcc/match.pd (popcount): New pattern. > > > > > > > > > > gcc/testsuite/ChangeLog: > > > > > > > > > > PR tree-optimization/90836 > > > > > > > > > > * lib/target-supports.exp (check_effective_target_popcount) > > > > > (check_effective_target_popcountll): New effective targets. > > > > > * gcc.dg/tree-ssa/popcount4.c: New test. > > > > > * gcc.dg/tree-ssa/popcount4l.c: New test. > > > > > * gcc.dg/tree-ssa/popcount4ll.c: New test.
> > > > > > > > > diff --git a/gcc/match.pd b/gcc/match.pd > > > > > index 0317bc7..b1867bf 100644 > > > > > --- a/gcc/match.pd > > > > > +++ b/gcc/match.pd > > > > > @@ -5358,6 +5358,70 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT) > > > > >(cmp (popcount @0) integer_zerop) > > > > >(rep @0 { build_zero_cst (TREE_TYPE (@0)); } > > > > > > > > > > +/* 64- and 32-bits branchless implementations of popcount are > > > > > detected: > > > > > + > > > > > + int popcount64c (uint64_t x) > > > > > + { > > > > > + x -= (x >> 1) & 0x5555555555555555ULL; > > > > > + x = (x & 0x3333333333333333ULL) + ((x >> 2) & > > > > > 0x3333333333333333ULL); > > > > > + x = (x + (x >> 4)) & 0x0f0f0f0f0f0f0f0fULL; > > > > > + return (x * 0x0101010101010101ULL) >> 56; > > > > > + } > > > > > + > > > > > + int popcount32c
Re: [PATCH] PR tree-optimization/90836 Missing popcount pattern matching
Hi Richard, I updated patch according to all your comments. Also bootstrapped and tested again on x86_64-pc-linux-gnu and aarch64-linux-gnu, which took some time. attached v3. Thanks, Dmitrij On Thu, Sep 26, 2019 at 09:47:04AM +0200, Richard Biener wrote: > On Tue, Sep 24, 2019 at 5:29 PM Dmitrij Pochepko > wrote: > > > > Hi, > > > > can anybody take a look at v2? > > +(if (tree_to_uhwi (@4) == 1 > + && tree_to_uhwi (@10) == 2 && tree_to_uhwi (@5) == 4 > > those will still ICE for large __int128_t constants. Since you do not match > any conversions you should probably restrict the precision of 'type' like > with >(if (TYPE_PRECISION (type) <= 64 > && tree_to_uhwi (@4) ... > > likewise tree_to_uhwi will fail for negative constants thus if the > pattern assumes > unsigned you should verify that as well with && TYPE_UNSIGNED (type). > > Your 'argtype' is simply 'type' so you can elide it. > > + (switch > + (if (types_match (argtype, long_long_unsigned_type_node)) > + (convert (BUILT_IN_POPCOUNTLL:integer_type_node @0))) > + (if (types_match (argtype, long_unsigned_type_node)) > + (convert (BUILT_IN_POPCOUNTL:integer_type_node @0))) > + (if (types_match (argtype, unsigned_type_node)) > + (convert (BUILT_IN_POPCOUNT:integer_type_node @0))) > > Please test small types first so we can avoid popcountll when long == long > long > or long == int. I also wonder if we really want to use the builtins and > check optab availability or if we nowadays should use > direct_internal_fn_supported_p (IFN_POPCOUNT, integer_type_node, type, > OPTIMIZE_FOR_BOTH) and > > (convert (IFN_POPCOUNT:type @0)) > > without the switch? > > Thanks, > Richard. > > > Thanks, > > Dmitrij > > > > On Mon, Sep 09, 2019 at 10:03:40PM +0300, Dmitrij Pochepko wrote: > > > Hi all. > > > > > > Please take a look at v2 (attached). > > > I changed patch according to review comments. The same testing was > > > performed again. 
> > > > > > Thanks, > > > Dmitrij > > > > > > On Thu, Sep 05, 2019 at 06:34:49PM +0300, Dmitrij Pochepko wrote: > > > > This patch adds matching for Hamming weight (popcount) implementation. > > > > The following sources: > > > > > > > > int > > > > foo64 (unsigned long long a) > > > > { > > > > unsigned long long b = a; > > > > b -= ((b>>1) & 0x5555555555555555ULL); > > > > b = ((b>>2) & 0x3333333333333333ULL) + (b & 0x3333333333333333ULL); > > > > b = ((b>>4) + b) & 0x0F0F0F0F0F0F0F0FULL; > > > > b *= 0x0101010101010101ULL; > > > > return (int)(b >> 56); > > > > } > > > > > > > > and > > > > > > > > int > > > > foo32 (unsigned int a) > > > > { > > > > unsigned long b = a; > > > > b -= ((b>>1) & 0x55555555UL); > > > > b = ((b>>2) & 0x33333333UL) + (b & 0x33333333UL); > > > > b = ((b>>4) + b) & 0x0F0F0F0FUL; > > > > b *= 0x01010101UL; > > > > return (int)(b >> 24); > > > > } > > > > > > > > and equivalents are now recognized as popcount for platforms with hw > > > > popcount support. Bootstrapped and tested on x86_64-pc-linux-gnu and > > > > aarch64-linux-gnu systems with no regressions. > > > > > > > > (I have no write access to repo) > > > > > > > > Thanks, > > > > Dmitrij > > > > > > > > > > > > gcc/ChangeLog: > > > > > > > > PR tree-optimization/90836 > > > > > > > > * gcc/match.pd (popcount): New pattern. > > > > > > > > gcc/testsuite/ChangeLog: > > > > > > > > PR tree-optimization/90836 > > > > > > > > * lib/target-supports.exp (check_effective_target_popcount) > > > > (check_effective_target_popcountll): New effective targets. > > > > * gcc.dg/tree-ssa/popcount4.c: New test. > > > > * gcc.dg/tree-ssa/popcount4l.c: New test. > > > > * gcc.dg/tree-ssa/popcount4ll.c: New test.
> > > > > > > diff --git a/gcc/match.pd b/gcc/match.pd > > > > index 0317bc7..b1867bf 100644 > > > > --- a/gcc/match.pd > > > > +++ b/gcc/match.pd > > > > @@ -5358,6 +5358,70 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT) > > > >(cmp (popcount @0) integer_zerop) > > > >(rep @0 { build_zero_cst (TREE_TYPE (@0)); } > > > > > > > > +/* 64- and 32-bits branchless implementations of popcount are detected: > > > > + > > > > + int popcount64c (uint64_t x) > > > > + { > > > > + x -= (x >> 1) & 0x5555555555555555ULL; > > > > + x = (x & 0x3333333333333333ULL) + ((x >> 2) & > > > > 0x3333333333333333ULL); > > > > + x = (x + (x >> 4)) & 0x0f0f0f0f0f0f0f0fULL; > > > > + return (x * 0x0101010101010101ULL) >> 56; > > > > + } > > > > + > > > > + int popcount32c (uint32_t x) > > > > + { > > > > + x -= (x >> 1) & 0x55555555; > > > > + x = (x & 0x33333333) + ((x >> 2) & 0x33333333); > > > > + x = (x + (x >> 4)) & 0x0f0f0f0f; > > > > + return (x * 0x01010101) >> 24; > > > > + } */ > > > > +(simplify > > > > + (convert > > > > +(rshift > > > > + (mult > > > > + (bit_and:c > > > > + (plus:c > > > >
Re: [PATCH] PR tree-optimization/90836 Missing popcount pattern matching
On Tue, Sep 24, 2019 at 5:29 PM Dmitrij Pochepko wrote: > > Hi, > > can anybody take a look at v2? +(if (tree_to_uhwi (@4) == 1 + && tree_to_uhwi (@10) == 2 && tree_to_uhwi (@5) == 4 those will still ICE for large __int128_t constants. Since you do not match any conversions you should probably restrict the precision of 'type' like with (if (TYPE_PRECISION (type) <= 64 && tree_to_uhwi (@4) ... likewise tree_to_uhwi will fail for negative constants thus if the pattern assumes unsigned you should verify that as well with && TYPE_UNSIGNED (type). Your 'argtype' is simply 'type' so you can elide it. + (switch + (if (types_match (argtype, long_long_unsigned_type_node)) + (convert (BUILT_IN_POPCOUNTLL:integer_type_node @0))) + (if (types_match (argtype, long_unsigned_type_node)) + (convert (BUILT_IN_POPCOUNTL:integer_type_node @0))) + (if (types_match (argtype, unsigned_type_node)) + (convert (BUILT_IN_POPCOUNT:integer_type_node @0))) Please test small types first so we can avoid popcountll when long == long long or long == int. I also wonder if we really want to use the builtins and check optab availability or if we nowadays should use direct_internal_fn_supported_p (IFN_POPCOUNT, integer_type_node, type, OPTIMIZE_FOR_BOTH) and (convert (IFN_POPCOUNT:type @0)) without the switch? Thanks, Richard. > Thanks, > Dmitrij > > On Mon, Sep 09, 2019 at 10:03:40PM +0300, Dmitrij Pochepko wrote: > > Hi all. > > > > Please take a look at v2 (attached). > > I changed patch according to review comments. The same testing was > > performed again. > > > > Thanks, > > Dmitrij > > > > On Thu, Sep 05, 2019 at 06:34:49PM +0300, Dmitrij Pochepko wrote: > > > This patch adds matching for Hamming weight (popcount) implementation. 
> > > The following sources: > > > > > > int > > > foo64 (unsigned long long a) > > > { > > > unsigned long long b = a; > > > b -= ((b>>1) & 0x5555555555555555ULL); > > > b = ((b>>2) & 0x3333333333333333ULL) + (b & 0x3333333333333333ULL); > > > b = ((b>>4) + b) & 0x0F0F0F0F0F0F0F0FULL; > > > b *= 0x0101010101010101ULL; > > > return (int)(b >> 56); > > > } > > > > > > and > > > > > > int > > > foo32 (unsigned int a) > > > { > > > unsigned long b = a; > > > b -= ((b>>1) & 0x55555555UL); > > > b = ((b>>2) & 0x33333333UL) + (b & 0x33333333UL); > > > b = ((b>>4) + b) & 0x0F0F0F0FUL; > > > b *= 0x01010101UL; > > > return (int)(b >> 24); > > > } > > > > > > and equivalents are now recognized as popcount for platforms with hw > > > popcount support. Bootstrapped and tested on x86_64-pc-linux-gnu and > > > aarch64-linux-gnu systems with no regressions. > > > > > > (I have no write access to repo) > > > > > > Thanks, > > > Dmitrij > > > > > > > > > gcc/ChangeLog: > > > > > > PR tree-optimization/90836 > > > > > > * gcc/match.pd (popcount): New pattern. > > > > > > gcc/testsuite/ChangeLog: > > > > > > PR tree-optimization/90836 > > > > > > * lib/target-supports.exp (check_effective_target_popcount) > > > (check_effective_target_popcountll): New effective targets. > > > * gcc.dg/tree-ssa/popcount4.c: New test. > > > * gcc.dg/tree-ssa/popcount4l.c: New test. > > > * gcc.dg/tree-ssa/popcount4ll.c: New test.
> > > > > diff --git a/gcc/match.pd b/gcc/match.pd > > > index 0317bc7..b1867bf 100644 > > > --- a/gcc/match.pd > > > +++ b/gcc/match.pd > > > @@ -5358,6 +5358,70 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT) > > >(cmp (popcount @0) integer_zerop) > > >(rep @0 { build_zero_cst (TREE_TYPE (@0)); } > > > > > > +/* 64- and 32-bits branchless implementations of popcount are detected: > > > + > > > + int popcount64c (uint64_t x) > > > + { > > > + x -= (x >> 1) & 0x5555555555555555ULL; > > > + x = (x & 0x3333333333333333ULL) + ((x >> 2) & > > > 0x3333333333333333ULL); > > > + x = (x + (x >> 4)) & 0x0f0f0f0f0f0f0f0fULL; > > > + return (x * 0x0101010101010101ULL) >> 56; > > > + } > > > + > > > + int popcount32c (uint32_t x) > > > + { > > > + x -= (x >> 1) & 0x55555555; > > > + x = (x & 0x33333333) + ((x >> 2) & 0x33333333); > > > + x = (x + (x >> 4)) & 0x0f0f0f0f; > > > + return (x * 0x01010101) >> 24; > > > + } */ > > > +(simplify > > > + (convert > > > +(rshift > > > + (mult > > > + (bit_and:c > > > + (plus:c > > > + (rshift @8 INTEGER_CST@5) > > > + (plus:c@8 > > > + (bit_and @6 INTEGER_CST@7) > > > + (bit_and > > > + (rshift > > > + (minus@6 > > > + @0 > > > + (bit_and > > > + (rshift @0 INTEGER_CST@4) > > > + INTEGER_CST@11)) > > > + INTEGER_CST@10) > > > + INTEGER_CST@9))) > > > + INTEGER_CST@3) > > > + INTEGER_CST@2) > > > + INTEGER_CST@1)) > > > + /* Check constants and optab. */ > > > + (with > > > +
Re: [PATCH] PR tree-optimization/90836 Missing popcount pattern matching
Hi, can anybody take a look at v2? Thanks, Dmitrij On Mon, Sep 09, 2019 at 10:03:40PM +0300, Dmitrij Pochepko wrote: > Hi all. > > Please take a look at v2 (attached). > I changed patch according to review comments. The same testing was performed > again. > > Thanks, > Dmitrij > > On Thu, Sep 05, 2019 at 06:34:49PM +0300, Dmitrij Pochepko wrote: > > This patch adds matching for Hamming weight (popcount) implementation. The > > following sources: > > > > int > > foo64 (unsigned long long a) > > { > > unsigned long long b = a; > > b -= ((b>>1) & 0x5555555555555555ULL); > > b = ((b>>2) & 0x3333333333333333ULL) + (b & 0x3333333333333333ULL); > > b = ((b>>4) + b) & 0x0F0F0F0F0F0F0F0FULL; > > b *= 0x0101010101010101ULL; > > return (int)(b >> 56); > > } > > > > and > > > > int > > foo32 (unsigned int a) > > { > > unsigned long b = a; > > b -= ((b>>1) & 0x55555555UL); > > b = ((b>>2) & 0x33333333UL) + (b & 0x33333333UL); > > b = ((b>>4) + b) & 0x0F0F0F0FUL; > > b *= 0x01010101UL; > > return (int)(b >> 24); > > } > > > > and equivalents are now recognized as popcount for platforms with hw > > popcount support. Bootstrapped and tested on x86_64-pc-linux-gnu and > > aarch64-linux-gnu systems with no regressions. > > > > (I have no write access to repo) > > > > Thanks, > > Dmitrij > > > > > > gcc/ChangeLog: > > > > PR tree-optimization/90836 > > > > * gcc/match.pd (popcount): New pattern. > > > > gcc/testsuite/ChangeLog: > > > > PR tree-optimization/90836 > > > > * lib/target-supports.exp (check_effective_target_popcount) > > (check_effective_target_popcountll): New effective targets. > > * gcc.dg/tree-ssa/popcount4.c: New test. > > * gcc.dg/tree-ssa/popcount4l.c: New test. > > * gcc.dg/tree-ssa/popcount4ll.c: New test.
> > > diff --git a/gcc/match.pd b/gcc/match.pd > > index 0317bc7..b1867bf 100644 > > --- a/gcc/match.pd > > +++ b/gcc/match.pd > > @@ -5358,6 +5358,70 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT) > >(cmp (popcount @0) integer_zerop) > >(rep @0 { build_zero_cst (TREE_TYPE (@0)); } > > > > +/* 64- and 32-bits branchless implementations of popcount are detected: > > + > > + int popcount64c (uint64_t x) > > + { > > + x -= (x >> 1) & 0x5555555555555555ULL; > > + x = (x & 0x3333333333333333ULL) + ((x >> 2) & 0x3333333333333333ULL); > > + x = (x + (x >> 4)) & 0x0f0f0f0f0f0f0f0fULL; > > + return (x * 0x0101010101010101ULL) >> 56; > > + } > > + > > + int popcount32c (uint32_t x) > > + { > > + x -= (x >> 1) & 0x55555555; > > + x = (x & 0x33333333) + ((x >> 2) & 0x33333333); > > + x = (x + (x >> 4)) & 0x0f0f0f0f; > > + return (x * 0x01010101) >> 24; > > + } */ > > +(simplify > > + (convert > > +(rshift > > + (mult > > + (bit_and:c > > + (plus:c > > + (rshift @8 INTEGER_CST@5) > > + (plus:c@8 > > + (bit_and @6 INTEGER_CST@7) > > + (bit_and > > + (rshift > > + (minus@6 > > + @0 > > + (bit_and > > + (rshift @0 INTEGER_CST@4) > > + INTEGER_CST@11)) > > + INTEGER_CST@10) > > + INTEGER_CST@9))) > > + INTEGER_CST@3) > > + INTEGER_CST@2) > > + INTEGER_CST@1)) > > + /* Check constants and optab.
*/ > > + (with > > + { > > + tree argtype = TREE_TYPE (@0); > > + unsigned prec = TYPE_PRECISION (argtype); > > + int shift = TYPE_PRECISION (long_long_unsigned_type_node) - prec; > > + const unsigned long long c1 = 0x0101010101010101ULL >> shift, > > + c2 = 0x0F0F0F0F0F0F0F0FULL >> shift, > > + c3 = 0x3333333333333333ULL >> shift, > > + c4 = 0x5555555555555555ULL >> shift; > > + } > > +(if (types_match (type, integer_type_node) && tree_to_uhwi (@4) == 1 > > + && tree_to_uhwi (@10) == 2 && tree_to_uhwi (@5) == 4 > > + && tree_to_uhwi (@1) == prec - 8 && tree_to_uhwi (@2) == c1 > > + && tree_to_uhwi (@3) == c2 && tree_to_uhwi (@9) == c3 > > + && tree_to_uhwi (@7) == c3 && tree_to_uhwi (@11) == c4 > > + && optab_handler (popcount_optab, TYPE_MODE (argtype)) > > + != CODE_FOR_nothing) > > + (switch > > + (if (types_match (argtype, long_long_unsigned_type_node)) > > + (BUILT_IN_POPCOUNTLL @0)) > > + (if (types_match (argtype, long_unsigned_type_node)) > > + (BUILT_IN_POPCOUNTL @0)) > > + (if (types_match (argtype, unsigned_type_node)) > > + (BUILT_IN_POPCOUNT @0)) > > + > > /* Simplify: > > > > a = a1 op a2 > > diff --git a/gcc/testsuite/gcc.dg/tree-ssa/popcount4.c > > b/gcc/testsuite/gcc.dg/tree-ssa/popcount4.c > > new file mode 100644 > > index 000..9f759f8 > > --- /dev/null > > +++ b/gcc/testsuite/gcc.dg/tree-ssa/popcount4.c > > @@ -0,0 +1,22 @@ > > +/* { dg-do compile } */ > > +/* {
Re: [PATCH] PR tree-optimization/90836 Missing popcount pattern matching
Hi all. Please take a look at v2 (attached). I changed patch according to review comments. The same testing was performed again. Thanks, Dmitrij On Thu, Sep 05, 2019 at 06:34:49PM +0300, Dmitrij Pochepko wrote: > This patch adds matching for Hamming weight (popcount) implementation. The > following sources: > > int > foo64 (unsigned long long a) > { > unsigned long long b = a; > b -= ((b>>1) & 0x5555555555555555ULL); > b = ((b>>2) & 0x3333333333333333ULL) + (b & 0x3333333333333333ULL); > b = ((b>>4) + b) & 0x0F0F0F0F0F0F0F0FULL; > b *= 0x0101010101010101ULL; > return (int)(b >> 56); > } > > and > > int > foo32 (unsigned int a) > { > unsigned long b = a; > b -= ((b>>1) & 0x55555555UL); > b = ((b>>2) & 0x33333333UL) + (b & 0x33333333UL); > b = ((b>>4) + b) & 0x0F0F0F0FUL; > b *= 0x01010101UL; > return (int)(b >> 24); > } > > and equivalents are now recognized as popcount for platforms with hw popcount > support. Bootstrapped and tested on x86_64-pc-linux-gnu and aarch64-linux-gnu > systems with no regressions. > > (I have no write access to repo) > > Thanks, > Dmitrij > > > gcc/ChangeLog: > > PR tree-optimization/90836 > > * gcc/match.pd (popcount): New pattern. > > gcc/testsuite/ChangeLog: > > PR tree-optimization/90836 > > * lib/target-supports.exp (check_effective_target_popcount) > (check_effective_target_popcountll): New effective targets. > * gcc.dg/tree-ssa/popcount4.c: New test. > * gcc.dg/tree-ssa/popcount4l.c: New test. > * gcc.dg/tree-ssa/popcount4ll.c: New test.
> diff --git a/gcc/match.pd b/gcc/match.pd > index 0317bc7..b1867bf 100644 > --- a/gcc/match.pd > +++ b/gcc/match.pd > @@ -5358,6 +5358,70 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT) >(cmp (popcount @0) integer_zerop) >(rep @0 { build_zero_cst (TREE_TYPE (@0)); } > > +/* 64- and 32-bits branchless implementations of popcount are detected: > + > + int popcount64c (uint64_t x) > + { > + x -= (x >> 1) & 0x5555555555555555ULL; > + x = (x & 0x3333333333333333ULL) + ((x >> 2) & 0x3333333333333333ULL); > + x = (x + (x >> 4)) & 0x0f0f0f0f0f0f0f0fULL; > + return (x * 0x0101010101010101ULL) >> 56; > + } > + > + int popcount32c (uint32_t x) > + { > + x -= (x >> 1) & 0x55555555; > + x = (x & 0x33333333) + ((x >> 2) & 0x33333333); > + x = (x + (x >> 4)) & 0x0f0f0f0f; > + return (x * 0x01010101) >> 24; > + } */ > +(simplify > + (convert > +(rshift > + (mult > + (bit_and:c > + (plus:c > + (rshift @8 INTEGER_CST@5) > + (plus:c@8 > + (bit_and @6 INTEGER_CST@7) > + (bit_and > + (rshift > + (minus@6 > + @0 > + (bit_and > + (rshift @0 INTEGER_CST@4) > + INTEGER_CST@11)) > + INTEGER_CST@10) > + INTEGER_CST@9))) > + INTEGER_CST@3) > + INTEGER_CST@2) > + INTEGER_CST@1)) > + /* Check constants and optab.
*/ > + (with > + { > + tree argtype = TREE_TYPE (@0); > + unsigned prec = TYPE_PRECISION (argtype); > + int shift = TYPE_PRECISION (long_long_unsigned_type_node) - prec; > + const unsigned long long c1 = 0x0101010101010101ULL >> shift, > + c2 = 0x0F0F0F0F0F0F0F0FULL >> shift, > + c3 = 0x3333333333333333ULL >> shift, > + c4 = 0x5555555555555555ULL >> shift; > + } > +(if (types_match (type, integer_type_node) && tree_to_uhwi (@4) == 1 > + && tree_to_uhwi (@10) == 2 && tree_to_uhwi (@5) == 4 > + && tree_to_uhwi (@1) == prec - 8 && tree_to_uhwi (@2) == c1 > + && tree_to_uhwi (@3) == c2 && tree_to_uhwi (@9) == c3 > + && tree_to_uhwi (@7) == c3 && tree_to_uhwi (@11) == c4 > + && optab_handler (popcount_optab, TYPE_MODE (argtype)) > + != CODE_FOR_nothing) > + (switch > + (if (types_match (argtype, long_long_unsigned_type_node)) > + (BUILT_IN_POPCOUNTLL @0)) > + (if (types_match (argtype, long_unsigned_type_node)) > + (BUILT_IN_POPCOUNTL @0)) > + (if (types_match (argtype, unsigned_type_node)) > + (BUILT_IN_POPCOUNT @0)) > + > /* Simplify: > > a = a1 op a2 > diff --git a/gcc/testsuite/gcc.dg/tree-ssa/popcount4.c > b/gcc/testsuite/gcc.dg/tree-ssa/popcount4.c > new file mode 100644 > index 000..9f759f8 > --- /dev/null > +++ b/gcc/testsuite/gcc.dg/tree-ssa/popcount4.c > @@ -0,0 +1,22 @@ > +/* { dg-do compile } */ > +/* { dg-require-effective-target popcount } */ > +/* { dg-require-effective-target int32plus } */ > +/* { dg-options "-O2 -fdump-tree-optimized" } */ > + > +const unsigned m1 = 0x55555555UL; > +const unsigned m2 = 0x33333333UL; > +const unsigned m4 = 0x0F0F0F0FUL; > +const unsigned h01 = 0x01010101UL; > +const int shift = 24; > + > +int popcount64c(unsigned x)
Re: [PATCH] PR tree-optimization/90836 Missing popcount pattern matching
Hi, thank you for looking into it. On Fri, Sep 06, 2019 at 12:13:34PM +0000, Wilco Dijkstra wrote: > Hi, > > +(simplify > + (convert > +(rshift > + (mult > > > is the outer convert really necessary? That is, if we change > > the simplification result to > > Indeed that should be "convert?" to make it optional. > I removed this one as Richard suggested in the new patch version. > > Is the Hamming weight popcount > > faster than the libgcc table-based approach? I wonder if we really > > need to restrict this conversion to the case where the target > > has an expander. > > Well libgcc uses the exact same sequence (not a table): > > objdump -d ./aarch64-unknown-linux-gnu/libgcc/_popcountsi2.o > > <__popcountdi2>: > 0: d341fc01 lsr x1, x0, #1 > 4: b200c3e3 mov x3, #0x101010101010101 // > #72340172838076673 > 8: 9200f021 and x1, x1, #0x5555555555555555 > c: cb010001 sub x1, x0, x1 > 10: 9200e422 and x2, x1, #0x3333333333333333 > 14: d342fc21 lsr x1, x1, #2 > 18: 9200e421 and x1, x1, #0x3333333333333333 > 1c: 8b010041 add x1, x2, x1 > 20: 8b411021 add x1, x1, x1, lsr #4 > 24: 9200cc20 and x0, x1, #0xf0f0f0f0f0f0f0f > 28: 9b037c00 mul x0, x0, x3 > 2c: d378fc00 lsr x0, x0, #56 > 30: d65f03c0 ret > > So if you don't check for an expander you get an endless loop in libgcc since > the makefile doesn't appear to use -fno-builtin anywhere... The patch is designed to avoid such an endless loop - a libgcc popcount call is compiled into popcount cpu instruction(s) on supported platforms, and the patch only allows the simplification on such platforms. This is implemented via the "optab_handler (popcount_optab, TYPE_MODE (argtype)) != CODE_FOR_nothing" check. Thanks, Dmitrij > > Wilco >
Re: [PATCH] PR tree-optimization/90836 Missing popcount pattern matching
Hi, thank you for looking into it. On Fri, Sep 06, 2019 at 12:23:40PM +0200, Richard Biener wrote: > On Thu, Sep 5, 2019 at 5:35 PM Dmitrij Pochepko > wrote: > > > > This patch adds matching for Hamming weight (popcount) implementation. The > > following sources: > > > > int > > foo64 (unsigned long long a) > > { > > unsigned long long b = a; > > b -= ((b>>1) & 0x5555555555555555ULL); > > b = ((b>>2) & 0x3333333333333333ULL) + (b & 0x3333333333333333ULL); > > b = ((b>>4) + b) & 0x0F0F0F0F0F0F0F0FULL; > > b *= 0x0101010101010101ULL; > > return (int)(b >> 56); > > } > > > > and > > > > int > > foo32 (unsigned int a) > > { > > unsigned long b = a; > > b -= ((b>>1) & 0x55555555UL); > > b = ((b>>2) & 0x33333333UL) + (b & 0x33333333UL); > > b = ((b>>4) + b) & 0x0F0F0F0FUL; > > b *= 0x01010101UL; > > return (int)(b >> 24); > > } > > > > and equivalents are now recognized as popcount for platforms with hw > > popcount support. Bootstrapped and tested on x86_64-pc-linux-gnu and > > aarch64-linux-gnu systems with no regressions. > > > > (I have no write access to repo) > > +(simplify > + (convert > +(rshift > + (mult > > is the outer convert really necessary? That is, if we change > the simplification result to > > (convert (BUILT_IN_POPCOUNT @0)) > > wouldn't that be correct as well? Yes, this is better. I fixed it in the new version. > > Is the Hamming weight popcount > faster than the libgcc table-based approach? I wonder if we really > need to restrict this conversion to the case where the target > has an expander. > > + (mult > + (bit_and:c > > this doesn't need :c (second operand is a constant). Yes. Agree, this is redundant. > > + int shift = TYPE_PRECISION (long_long_unsigned_type_node) - prec; > + const unsigned long long c1 = 0x0101010101010101ULL >> shift, > > I think this mixes host and target properties. I guess instead of > 'const unsigned long long' you want to use 'const uint64_t' and > instead of TYPE_PRECISION (long_long_unsigned_type_node) 64?
> Since you are later comparing with unsigned HOST_WIDE_INT > eventually unsigned HOST_WIDE_INT is better (that's always 64bit as well). Agree. It is better to use HOST_WIDE_INT. > > You are using tree_to_uhwi but nowhere verifying if @0 is unsigned. > What happens if 'prec' is > 64? (__int128 ...). Ah, I guess the > final selection will simply select nothing... > > Otherwise the patch looks reasonable, even if the pattern > is a bit unwieldly... ;) > > Does it work for targets where 'unsigned int' is smaller than 32bit? Yes. The only 16-bit-int architecture with popcount support on hw level is avr. I built gcc for avr and checked that 16-bit popcount algorithm is recognized successfully. Thanks, Dmitrij > > Thanks, > Richard. > > > > Thanks, > > Dmitrij > > > > > > gcc/ChangeLog: > > > > PR tree-optimization/90836 > > > > * gcc/match.pd (popcount): New pattern. > > > > gcc/testsuite/ChangeLog: > > > > PR tree-optimization/90836 > > > > * lib/target-supports.exp (check_effective_target_popcount) > > (check_effective_target_popcountll): New effective targets. > > * gcc.dg/tree-ssa/popcount4.c: New test. > > * gcc.dg/tree-ssa/popcount4l.c: New test. > > * gcc.dg/tree-ssa/popcount4ll.c: New test.
Re: [PATCH] PR tree-optimization/90836 Missing popcount pattern matching
On Fri, Sep 6, 2019 at 2:13 PM Wilco Dijkstra wrote:
>
> Hi,
>
> > +(simplify
> > + (convert
> > +    (rshift
> > +      (mult
> >
> > is the outer convert really necessary? That is, if we change
> > the simplification result to
>
> Indeed that should be "convert?" to make it optional.

Rather drop it, a generated conversion should be elided by conversion
simplification.

> > Is the Hamming weight popcount
> > faster than the libgcc table-based approach? I wonder if we really
> > need to restrict this conversion to the case where the target
> > has an expander.
>
> Well libgcc uses the exact same sequence (not a table):
>
> objdump -d ./aarch64-unknown-linux-gnu/libgcc/_popcountsi2.o
>
> <__popcountdi2>:
>    0:	d341fc01	lsr	x1, x0, #1
>    4:	b200c3e3	mov	x3, #0x101010101010101	// #72340172838076673
>    8:	9200f021	and	x1, x1, #0x5555555555555555
>    c:	cb010001	sub	x1, x0, x1
>   10:	9200e422	and	x2, x1, #0x3333333333333333
>   14:	d342fc21	lsr	x1, x1, #2
>   18:	9200e421	and	x1, x1, #0x3333333333333333
>   1c:	8b010041	add	x1, x2, x1
>   20:	8b411021	add	x1, x1, x1, lsr #4
>   24:	9200cc20	and	x0, x1, #0xf0f0f0f0f0f0f0f
>   28:	9b037c00	mul	x0, x0, x3
>   2c:	d378fc00	lsr	x0, x0, #56
>   30:	d65f03c0	ret
>
> So if you don't check for an expander you get an endless loop in libgcc since
> the makefile doesn't appear to use -fno-builtin anywhere...

Hm, must be aarch specific.  But indeed it should use -fno-builtin ...

Richard.

> Wilco
Re: [PATCH] PR tree-optimization/90836 Missing popcount pattern matching
Hi,

+(simplify
+ (convert
+    (rshift
+      (mult

> is the outer convert really necessary? That is, if we change
> the simplification result to

Indeed that should be "convert?" to make it optional.

> Is the Hamming weight popcount
> faster than the libgcc table-based approach? I wonder if we really
> need to restrict this conversion to the case where the target
> has an expander.

Well libgcc uses the exact same sequence (not a table):

objdump -d ./aarch64-unknown-linux-gnu/libgcc/_popcountsi2.o

<__popcountdi2>:
   0:	d341fc01	lsr	x1, x0, #1
   4:	b200c3e3	mov	x3, #0x101010101010101	// #72340172838076673
   8:	9200f021	and	x1, x1, #0x5555555555555555
   c:	cb010001	sub	x1, x0, x1
  10:	9200e422	and	x2, x1, #0x3333333333333333
  14:	d342fc21	lsr	x1, x1, #2
  18:	9200e421	and	x1, x1, #0x3333333333333333
  1c:	8b010041	add	x1, x2, x1
  20:	8b411021	add	x1, x1, x1, lsr #4
  24:	9200cc20	and	x0, x1, #0xf0f0f0f0f0f0f0f
  28:	9b037c00	mul	x0, x0, x3
  2c:	d378fc00	lsr	x0, x0, #56
  30:	d65f03c0	ret

So if you don't check for an expander you get an endless loop in libgcc since
the makefile doesn't appear to use -fno-builtin anywhere...

Wilco
Re: [PATCH] PR tree-optimization/90836 Missing popcount pattern matching
On Thu, Sep 5, 2019 at 5:35 PM Dmitrij Pochepko wrote:
>
> This patch adds matching for Hamming weight (popcount) implementation.
> The following sources:
>
> int
> foo64 (unsigned long long a)
> {
>     unsigned long long b = a;
>     b -= ((b>>1) & 0x5555555555555555ULL);
>     b = ((b>>2) & 0x3333333333333333ULL) + (b & 0x3333333333333333ULL);
>     b = ((b>>4) + b) & 0x0F0F0F0F0F0F0F0FULL;
>     b *= 0x0101010101010101ULL;
>     return (int)(b >> 56);
> }
>
> and
>
> int
> foo32 (unsigned int a)
> {
>     unsigned long b = a;
>     b -= ((b>>1) & 0x55555555UL);
>     b = ((b>>2) & 0x33333333UL) + (b & 0x33333333UL);
>     b = ((b>>4) + b) & 0x0F0F0F0FUL;
>     b *= 0x01010101UL;
>     return (int)(b >> 24);
> }
>
> and equivalents are now recognized as popcount for platforms with hw popcount
> support. Bootstrapped and tested on x86_64-pc-linux-gnu and aarch64-linux-gnu
> systems with no regressions.
>
> (I have no write access to repo)

+(simplify
+ (convert
+    (rshift
+      (mult

is the outer convert really necessary? That is, if we change
the simplification result to

(convert (BUILT_IN_POPCOUNT @0))

wouldn't that be correct as well?

Is the Hamming weight popcount
faster than the libgcc table-based approach? I wonder if we really
need to restrict this conversion to the case where the target
has an expander.

+   (mult
+    (bit_and:c

this doesn't need :c (second operand is a constant).

+ int shift = TYPE_PRECISION (long_long_unsigned_type_node) - prec;
+ const unsigned long long c1 = 0x0101010101010101ULL >> shift,

I think this mixes host and target properties. I guess instead of
'const unsigned long long' you want to use 'const uint64_t' and
instead of TYPE_PRECISION (long_long_unsigned_type_node) 64?

Since you are later comparing with unsigned HOST_WIDE_INT
eventually unsigned HOST_WIDE_INT is better (that's always 64bit as well).

You are using tree_to_uhwi but nowhere verifying if @0 is unsigned.
What happens if 'prec' is > 64? (__int128 ...). Ah, I guess the
final selection will simply select nothing...

Otherwise the patch looks reasonable, even if the pattern
is a bit unwieldy...
;)

Does it work for targets where 'unsigned int' is smaller than 32bit?

Thanks,
Richard.

> Thanks,
> Dmitrij
>
>
> gcc/ChangeLog:
>
>         PR tree-optimization/90836
>
>         * gcc/match.pd (popcount): New pattern.
>
> gcc/testsuite/ChangeLog:
>
>         PR tree-optimization/90836
>
>         * lib/target-supports.exp (check_effective_target_popcount)
>         (check_effective_target_popcountll): New effective targets.
>         * gcc.dg/tree-ssa/popcount4.c: New test.
>         * gcc.dg/tree-ssa/popcount4l.c: New test.
>         * gcc.dg/tree-ssa/popcount4ll.c: New test.
[PATCH] PR tree-optimization/90836 Missing popcount pattern matching
This patch adds matching for Hamming weight (popcount) implementation. The
following sources:

int
foo64 (unsigned long long a)
{
    unsigned long long b = a;
    b -= ((b>>1) & 0x5555555555555555ULL);
    b = ((b>>2) & 0x3333333333333333ULL) + (b & 0x3333333333333333ULL);
    b = ((b>>4) + b) & 0x0F0F0F0F0F0F0F0FULL;
    b *= 0x0101010101010101ULL;
    return (int)(b >> 56);
}

and

int
foo32 (unsigned int a)
{
    unsigned long b = a;
    b -= ((b>>1) & 0x55555555UL);
    b = ((b>>2) & 0x33333333UL) + (b & 0x33333333UL);
    b = ((b>>4) + b) & 0x0F0F0F0FUL;
    b *= 0x01010101UL;
    return (int)(b >> 24);
}

and equivalents are now recognized as popcount for platforms with hw popcount
support. Bootstrapped and tested on x86_64-pc-linux-gnu and aarch64-linux-gnu
systems with no regressions.

(I have no write access to repo)

Thanks,
Dmitrij

gcc/ChangeLog:

        PR tree-optimization/90836

        * gcc/match.pd (popcount): New pattern.

gcc/testsuite/ChangeLog:

        PR tree-optimization/90836

        * lib/target-supports.exp (check_effective_target_popcount)
        (check_effective_target_popcountll): New effective targets.
        * gcc.dg/tree-ssa/popcount4.c: New test.
        * gcc.dg/tree-ssa/popcount4l.c: New test.
        * gcc.dg/tree-ssa/popcount4ll.c: New test.
diff --git a/gcc/match.pd b/gcc/match.pd
index 0317bc7..b1867bf 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -5358,6 +5358,70 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
    (cmp (popcount @0) integer_zerop)
    (rep @0 { build_zero_cst (TREE_TYPE (@0)); }
+/* 64- and 32-bits branchless implementations of popcount are detected:
+
+   int popcount64c (uint64_t x)
+   {
+     x -= (x >> 1) & 0x5555555555555555ULL;
+     x = (x & 0x3333333333333333ULL) + ((x >> 2) & 0x3333333333333333ULL);
+     x = (x + (x >> 4)) & 0x0f0f0f0f0f0f0f0fULL;
+     return (x * 0x0101010101010101ULL) >> 56;
+   }
+
+   int popcount32c (uint32_t x)
+   {
+     x -= (x >> 1) & 0x55555555;
+     x = (x & 0x33333333) + ((x >> 2) & 0x33333333);
+     x = (x + (x >> 4)) & 0x0f0f0f0f;
+     return (x * 0x01010101) >> 24;
+   }  */
+(simplify
+  (convert
+    (rshift
+      (mult
+	(bit_and:c
+	  (plus:c
+	    (rshift @8 INTEGER_CST@5)
+	    (plus:c@8
+	      (bit_and @6 INTEGER_CST@7)
+	      (bit_and
+		(rshift
+		  (minus@6
+		    @0
+		    (bit_and
+		      (rshift @0 INTEGER_CST@4)
+		      INTEGER_CST@11))
+		  INTEGER_CST@10)
+		INTEGER_CST@9)))
+	  INTEGER_CST@3)
+	INTEGER_CST@2)
+      INTEGER_CST@1))
+  /* Check constants and optab.  */
+  (with
+    {
+      tree argtype = TREE_TYPE (@0);
+      unsigned prec = TYPE_PRECISION (argtype);
+      int shift = TYPE_PRECISION (long_long_unsigned_type_node) - prec;
+      const unsigned long long c1 = 0x0101010101010101ULL >> shift,
+			       c2 = 0x0F0F0F0F0F0F0F0FULL >> shift,
+			       c3 = 0x3333333333333333ULL >> shift,
+			       c4 = 0x5555555555555555ULL >> shift;
+    }
+    (if (types_match (type, integer_type_node) && tree_to_uhwi (@4) == 1
+	 && tree_to_uhwi (@10) == 2 && tree_to_uhwi (@5) == 4
+	 && tree_to_uhwi (@1) == prec - 8 && tree_to_uhwi (@2) == c1
+	 && tree_to_uhwi (@3) == c2 && tree_to_uhwi (@9) == c3
+	 && tree_to_uhwi (@7) == c3 && tree_to_uhwi (@11) == c4
+	 && optab_handler (popcount_optab, TYPE_MODE (argtype))
+	    != CODE_FOR_nothing)
+      (switch
+	(if (types_match (argtype, long_long_unsigned_type_node))
+	  (BUILT_IN_POPCOUNTLL @0))
+	(if (types_match (argtype, long_unsigned_type_node))
+	  (BUILT_IN_POPCOUNTL @0))
+	(if (types_match (argtype, unsigned_type_node))
+	  (BUILT_IN_POPCOUNT @0))))))
+
 /* Simplify:	a = a1 op a2
diff --git a/gcc/testsuite/gcc.dg/tree-ssa/popcount4.c b/gcc/testsuite/gcc.dg/tree-ssa/popcount4.c
new file mode 100644
index 000..9f759f8
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/tree-ssa/popcount4.c
@@ -0,0 +1,22 @@
+/* { dg-do compile } */
+/* { dg-require-effective-target popcount } */
+/* { dg-require-effective-target int32plus } */
+/* { dg-options "-O2 -fdump-tree-optimized" } */
+
+const unsigned m1 = 0x55555555UL;
+const unsigned m2 = 0x33333333UL;
+const unsigned m4 = 0x0F0F0F0FUL;
+const unsigned h01 = 0x01010101UL;
+const int shift = 24;
+
+int popcount64c (unsigned x)
+{
+  x -= (x >> 1) & m1;
+  x = (x & m2) + ((x >> 2) & m2);
+  x = (x + (x >> 4)) & m4;
+  return (x * h01) >> shift;
+}
+
+/* { dg-final { scan-tree-dump-times "__builtin_popcount" 1 "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/tree-ssa/popcount4l.c b/gcc/testsuite/gcc.dg/tree-ssa/popcount4l.c
new file mode 100644
index 000..ab33f79
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/tree-ssa/popcount4l.c
@@ -0,0 +1,30 @@
+/* { dg-do compile } */
+/* { dg-require-effective-target popcountl } */
+/* { dg-options "-O2 -fdump-tree-optimized" } */
+
+#if __SIZEOF_LONG__ == 4
+const unsigned long m1 = 0x55555555UL;
+const unsigned long m2 = 0x33333333UL;
+const unsigned long m4 = 0x0F0F0F0FUL;
+const unsigned long