https://gcc.gnu.org/bugzilla/show_bug.cgi?id=123286
Andrew Pinski <pinskia at gcc dot gnu.org> changed:
What |Removed |Added
----------------------------------------------------------------------------
Keywords| |TREE
--- Comment #5 from Andrew Pinski <pinskia at gcc dot gnu.org> ---
(In reply to XChy from comment #4)
> Yes, our fuzzer relies on this feature to resist middle-end
> canonicalization, so it can detect backend issues. Just out of curiosity, is
> there any stable middle-end optimization barrier that only folds in the
> backends in GCC, like vqadd_u64(x, 0)? And what do you think about testing
> with such barriers?
The only stable option here is really to use the GIMPLE front-end (-fgimple).
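For illustration, a minimal sketch of a GIMPLE front-end testcase (the
function name and body here are made up; see gcc.dg/gimplefe-*.c in the
testsuite for the real syntax, including the ssa/startwith specifiers if you
want to start at a particular pass):

/* { dg-do compile } */
/* { dg-options "-O2 -fgimple" } */

int __GIMPLE
foo (int a, int b)
{
  int t;

  /* The body is parsed as GIMPLE statements, not C expressions.  */
  t = a + b;
  return t;
}

Because -fgimple parses the body directly as GIMPLE, the middle-end sees
exactly the IR you wrote, which is what makes it a stable way to pin down
the input to the optimizers.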
And note that RTL optimizations are still part of the middle-end in GCC terms;
even though the RTL being optimized is usually backend dependent, the
optimization passes themselves are common code.
So constant folding of things like vqadd_u64 happens at the RTL level of GCC's
middle-end. The RTL codes SS_PLUS/US_PLUS/SS_MINUS/US_MINUS are used to
represent saturating add/subtract. It's just that SS_PLUS/US_PLUS/SS_MINUS/
US_MINUS are older than SAT_ADD, and the simplifications at the RTL level are
still newish too; they were added in r12-4231-g555fa3545efe23. SAT_ADD at the
GIMPLE level was only added in r15-576-g52b0536710ff3f. And lowering
vqadd_u64 to SAT_ADD was done in r15-7018-gaa361611490947, though SS_PLUS has
been used since the aarch64 backend was added.
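For reference, a hedged sketch (plain C, not taken from this bug) of the kind
of unsigned saturating-add idiom that GCC 15's match.pd patterns can turn into
a .SAT_ADD internal-function call on GIMPLE, provided the target supports the
corresponding optab; it then expands through the saturating RTL codes
mentioned above:

#include <stdint.h>

uint64_t
sat_add_u64 (uint64_t x, uint64_t y)
{
  uint64_t sum = x + y;
  /* If the addition wrapped around (sum < x), saturate to all ones.  */
  return sum | -(uint64_t) (sum < x);
}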
So you can see that vqadd_u64 was not a stable barrier to use in the first
place, especially before GCC 12.