Richard Earnshaw <rearn...@arm.com> writes:
> On 27/05/14 16:07, Richard Sandiford wrote:
>> Richard Sandiford <rdsandif...@googlemail.com> writes:
>>> Richard Sandiford <rsand...@linux.vnet.ibm.com> writes:
>>>> Does the following patch help?
>>>
>>> Bah, it won't of course: %i1 needs to be the operator.
>>
>> Here's v2.  I tested that it worked for simple tests like:
>>
>>   int f1 (int x, int y) { return x + (y << 4); }
>>   int f2 (int x, int y) { return x - (y << 4); }
>>   int f3 (int x, int y) { return x & (y << 4); }
>>   int f4 (int x, int y) { return x | (y << 4); }
>>   int f5 (int x, int y) { return x ^ (y << 4); }
>>   int f6 (int x, int y) { return (y << 4) - x; }
>>   int g1 (int x, int y, int z) { return x + (y << z); }
>>   int g2 (int x, int y, int z) { return x - (y << z); }
>>   int g3 (int x, int y, int z) { return x & (y << z); }
>>   int g4 (int x, int y, int z) { return x | (y << z); }
>>   int g5 (int x, int y, int z) { return x ^ (y << z); }
>>   int g6 (int x, int y, int z) { return (y << z) - x; }
>>
>> as well as the testcase.
>>
>> Thanks,
>> Richard
>>
>> gcc/
>> 	* config/arm/iterators.md (shiftable_ops): New code iterator.
>> 	(t2_binop0, arith_shift_insn): New code attributes.
>> 	* config/arm/arm.md (insn_enabled): Delete.
>> 	(enabled): Remove insn_enabled test.
>> 	(*arith_shiftsi): Split out...
>> 	(*arith_multsi): ...this pattern and remove insn_enabled attribute.
>
> Thanks, Richard.  I've tweaked this as follows and committed it.
>
> I now consider "shift_operator" in the arm backend deprecated.  We
> should be moving towards using shift_nomul_operator.
>
> There's one final wart still to be handled, though.  'rotate' can
> only take an immediate operand, not a register.  We can currently
> deal with this, but it's not clean in terms of constraint handling.
> I'll see if I can fix this up sometime, but not today.
Thanks for picking it up.  I realised later that I'd fluffed the MULT
check in:

> @@ -9876,38 +9868,34 @@
>
>  ;; Patterns to allow combination of arithmetic, cond code and shifts
>
> -(define_insn "*arith_shiftsi"
> -  [(set (match_operand:SI 0 "s_register_operand" "=r,r,r,r")
> -	(match_operator:SI 1 "shiftable_operator"
> -	  [(match_operator:SI 3 "shift_operator"
> -	     [(match_operand:SI 4 "s_register_operand" "r,r,r,r")
> -	      (match_operand:SI 5 "shift_amount_operand" "M,M,M,r")])
> -	   (match_operand:SI 2 "s_register_operand" "rk,rk,r,rk")]))]
> +(define_insn "*<arith_shift_insn>_multsi"
> +  [(set (match_operand:SI 0 "s_register_operand" "=r,r")
> +	(shiftable_ops:SI
> +	 (mult:SI (match_operand:SI 2 "s_register_operand" "r,r")
> +		  (match_operand:SI 3 "power_of_two_operand" ""))
> +	 (match_operand:SI 1 "s_register_operand" "rk,<t2_binop0>")))]
>    "TARGET_32BIT"
> -  "%i1%?\\t%0, %2, %4%S3"
> +  "<arith_shift_insn>%?\\t%0, %1, %2, lsl %b3"
> +  [(set_attr "predicable" "yes")
> +   (set_attr "predicable_short_it" "no")
> +   (set_attr "shift" "4")
> +   (set_attr "arch" "a,t2")
> +   (set_attr "type" "alu_shift_imm")])
> +
> +(define_insn "*<arith_shift_insn>_shiftsi"
> +  [(set (match_operand:SI 0 "s_register_operand" "=r,r,r")
> +	(shiftable_ops:SI
> +	 (match_operator:SI 2 "shift_nomul_operator"
> +	  [(match_operand:SI 3 "s_register_operand" "r,r,r")
> +	   (match_operand:SI 4 "shift_amount_operand" "M,M,r")])
> +	 (match_operand:SI 1 "s_register_operand" "rk,<t2_binop0>,rk")))]
> +  "TARGET_32BIT && GET_CODE (operands[3]) != MULT"

...this condition: operands[3] was the old numbering of the operator
rather than the new numbering.  It looks like shift_nomul_operator
should make it redundant anyway.

Richard