[Bug tree-optimization/66012] Sub-optimal 64bit load is generated instead of zero-extension
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66012

Andrew Pinski changed:

           What    |Removed              |Added
----------------------------------------------------------------------------
   Last reconfirmed|2015-12-23 00:00:00  |2016-9-21

--- Comment #4 from Andrew Pinski ---
On the trunk we generate one 32-bit load and one 64-bit load (at least for
aarch64):

test:
        adrp    x0, l
        add     x1, x0, :lo12:l
        ldr     w0, [x0, #:lo12:l]    ; 32-bit load to w0
        ldr     x1, [x1, 8]           ; 64-bit load to x1
        orr     x0, x0, x1, lsl 32
        ret
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66012

Andrew Pinski changed:

           What    |Removed     |Added
----------------------------------------------------------------------------
           Keywords|            |missed-optimization
             Status|UNCONFIRMED |NEW
   Last reconfirmed|            |2015-12-23
     Ever confirmed|0           |1
           Severity|normal      |enhancement

--- Comment #3 from Andrew Pinski ---
Confirmed.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66012

--- Comment #2 from Andrew Pinski ---
Happens on aarch64 also:

test:
        adrp    x0, l
        add     x1, x0, :lo12:l
        ldr     x1, [x1, 8]
        ldr     w0, [x0, #:lo12:l]
        orr     x0, x0, x1, lsl 32
        ret
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66012

Jakub Jelinek changed:

           What    |Removed |Added
----------------------------------------------------------------------------
                 CC|        |jakub at gcc dot gnu.org

--- Comment #1 from Jakub Jelinek ---
In GIMPLE that masking is generally useless, though: the bits the mask clears
are shifted away anyway, and without the extra BIT_AND_EXPR the expression is
more canonical and shorter. So presumably during expansion or in combine you
could figure this out from the left shift with a large shift count.