https://gcc.gnu.org/bugzilla/show_bug.cgi?id=121959
--- Comment #5 from Robin Dapp <rdapp at gcc dot gnu.org> ---
AFAICT using a signed intermediate type for the subtraction (and the subsequent sign extension) is correct.

One idea would be to cancel the over-widening pattern when its single-use result feeds into another widening operation (or even when the operation itself can be expressed as a widening operation?). But this hinges on vect_recog_widen_op_pattern recognizing the individual operations as widening (something I'm trying to support for riscv and which is working locally). And even if we recognize them, widen_minus has the same logic and correctly uses a signed output type. We could, again, recognize that the result has a single use and feeds into a widening operation; in that case we would use an unsigned result type instead of a signed one. I don't like that, though: if anything later in the pipeline goes wrong, we're left with a potentially wrong type...

I see three (unsatisfactory) options:

- In vect_recog_widen_op_pattern, check whether, instead of introducing an intermediate type, we can use two widening ops and emit them there directly.
- Go with Pan's backend idea of transforming sign extension + left shift into a widening left shift when the shift count guarantees no sign-extension bits remain, e.g. a count of 16 or more when extending from HI to SI. That could also be a match.pd thing?
- Exploit the fact that left-shifting negative values is undefined behavior.

Richi, do you happen to have any insight still?
