https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87763

--- Comment #52 from Wilco <wilco at gcc dot gnu.org> ---
(In reply to Jeffrey A. Law from comment #49)
> I think the insv_1 (and its closely related insv_2) regressions can be
> fixed by a single ior/and pattern in the backend or by hacking up combine a
> bit.  I'm still playing with the latter, but may have to put it on the back
> burner because of the pain of note management :(  Hoping to draw a
> conclusion on that by the end of this  week.  If I can't get a clean combine
> solution, then my recommendation would be to build the suitable backend
> pattern.   It just has to match stuff like
> 
> (set (reg1) (ior (and (reg2) ...)) with a matching constraint on reg1 and
> reg2 to ensure it's a RMW operand.
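
For reference, a minimal C fragment that ends up in roughly that ior/and RMW
shape after combine (the field position and constants here are made up for
illustration, not taken from the PR testcases):

  /* Read-modify-write of a field via mask-and-or; combine canonicalizes
     the stored value to (ior (and (reg) 0xfffff00f) 0x230), with the
     source and destination registers tied (RMW).  */
  unsigned int
  insert_field (unsigned int x)
  {
    return (x & ~0xff0u) | 0x230u;   /* clear bits [11:4], insert 0x23 */
  }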

I don't think the current insv patterns are very useful. A more general
approach would be to support bitfield insert of any immediate, which is not
currently possible. This can then be expanded into bic/orr, bic/add, mov/bfi
or movk depending on the mask and immediate.
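
As a sketch of what I mean (a hypothetical example, not one of the testcases
in this PR; the struct layout, register names and exact sequences below are
only for illustration):

  struct s
  {
    unsigned int a : 4;
    unsigned int b : 8;   /* bits [11:4] of the containing word */
    unsigned int c : 20;
  };

  void
  set_b (struct s *p)
  {
    p->b = 0x0f;
  }

  /* Possible expansions of the insert itself (w0 holds the loaded word):

       and  w0, w0, 0xfffff00f   // clear the field (mask and shifted value
       orr  w0, w0, 0xf0         // are both valid logical immediates)

     bic/add when the shifted value fits an add immediate but is not a
     logical immediate, e.g. for p->b = 0x5a:

       and  w0, w0, 0xfffff00f
       add  w0, w0, 0x5a0

     mov/bfi as the general fallback:

       mov  w1, 0x5a
       bfi  w0, w1, 4, 8

     and a single movk only when the field is an aligned 16-bit chunk.  */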

Note that the register allocation issue discussed in PR87871, which causes the
codesize regressions after combine inserts extra moves, is still the worst
part of this PR.
