https://bugs.llvm.org/show_bug.cgi?id=40163
Bug ID: 40163
Summary: NEON: Minor optimization: Use vsli/vsri instead of
vshr/vshl and vorr
Product: libraries
Version: trunk
Hardware: All
OS: All
Status: NEW
Severity: enhancement
Priority: P
Component: Backend: ARM
Assignee: [email protected]
Reporter: [email protected]
CC: [email protected], [email protected],
[email protected]
Note: This also applies to AArch64 (sli/sri vs. ushr/shl + orr).
typedef unsigned U32x4 __attribute__((vector_size(16)));
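/* Rotate each 32-bit lane left by 17, then add the original value. */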
U32x4 vrol_accq_n_u32_17(U32x4 x)
{
return x + ((x << 17) | (x >> (32 - 17)));
}
Generates the following for ARMv7a:
vrol_accq_n_u32_17:
vshr.u32 q8, q0, #15
vshl.i32 q9, q0, #17
vorr q8, q9, q8
vadd.i32 q0, q8, q0
bx lr
In terms of performance there isn't a faster option, but one instruction can be
saved by replacing the vshl.i32/vorr pair with vsli.32. It usually has the same
performance and saves 4 bytes of code:
vrol_accq_n_u32_17:
vshr.u32 q8, q0, #15
vsli.32 q8, q0, #17
vadd.i32 q0, q8, q0
bx lr
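For reference, the same pattern can be written directly with ACLE NEON
intrinsics, where vsliq_n_u32 maps to vsli.32. This is only a sketch; the
function name is mine, not part of the report:

#include <arm_neon.h>

uint32x4_t vrol_accq_n_u32_17_intrin(uint32x4_t x)
{
    uint32x4_t r = vshrq_n_u32(x, 15); /* r = x >> 15               */
    r = vsliq_n_u32(r, x, 17);         /* r = (x << 17) | (x >> 15) */
    return vaddq_u32(r, x);            /* accumulate into x         */
}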
However, when vsli/vsri would be the last instruction in the function (e.g. a
plain rotate-left function) and it reads q0 as its shift source, it is faster
to use vshl/vshr + vorr: vsli can only write into its accumulator operand, so
the result would need an extra move back into q0, whereas the final vorr can
target q0 directly, as the sketch below illustrates.
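As a concrete illustration (hand-written sketches, not compiler output;
register choices are illustrative):

/* Plain rotate left by 17, no accumulate: the rotate is the return value. */
U32x4 vrolq_n_u32_17(U32x4 x)
{
    return (x << 17) | (x >> (32 - 17));
}

With vsli, the result lands in q8 and must be moved back:

vrolq_n_u32_17:
vshr.u32 q8, q0, #15
vsli.32 q8, q0, #17
vorr q0, q8, q8 @ extra move into the return register
bx lr

With vshl + vorr, the final vorr writes q0 directly:

vrolq_n_u32_17:
vshr.u32 q8, q0, #15
vshl.i32 q9, q0, #17
vorr q0, q9, q8
bx lr

Both sequences are four instructions, but the vsli version adds a move that
depends on the vsli result, so nothing is saved and latency is worse.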