| Issue |
183489
|
| Summary |
Missed optimization: `uint64_t` shift/add-by-constant lowered to `__aeabi_lmul` on Cortex-M0
|
| Labels |
new issue
|
| Assignees |
|
| Reporter |
tobgaisbifx
|
**Problem**
For Cortex-M0 targets, LLVM canonicalizes a 64-bit shift-and-subtract expression back into the equivalent 64-bit multiplication by a constant, which is then lowered to a libcall to `__aeabi_lmul`.
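The exact reproducer lives in the Godbolt link below; a minimal sketch of the pattern the report describes might look like the following (the function name `scale7` is hypothetical):

```c
#include <stdint.h>

/* Hypothetical reproducer: multiply-by-constant spelled out as shift
 * and subtract. (x << 3) - x == x * 7, so the optimizer is free to
 * canonicalize it back to a multiply; on Cortex-M0, which has no
 * 64-bit multiply instruction, that multiply becomes a call to the
 * runtime helper __aeabi_lmul. */
uint64_t scale7(uint64_t x) {
    return (x << 3) - x;  /* intended as an inline shift/sub sequence */
}
```

GCC keeps the inline shift/sub form here, while Clang (per this report) emits the `__aeabi_lmul` call.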
**Why this matters**
On Cortex-M0, the call overhead of `__aeabi_lmul` plus the cost of the software 64-bit multiply is substantially higher than that of the inline shift/add/sub sequence GCC generates.
[https://godbolt.org/z/YqP9Kr5qx](https://godbolt.org/z/YqP9Kr5qx)
_______________________________________________
llvm-bugs mailing list
[email protected]
https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-bugs