[Bug tree-optimization/41464] vector loads are unnecessarily split into high and low loads

2021-12-12 Thread pinskia at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=41464

Andrew Pinski changed:

  What             | Removed | Added
  Target Milestone | ---     | 4.9.0
  Known to work    |

[Bug tree-optimization/41464] vector loads are unnecessarily split into high and low loads

2010-02-04 Thread bredelin at ucla dot edu
--- Comment #9 from bredelin at ucla dot edu 2010-02-04 20:29 --- (In reply to comment #8) So in the end, all this boils down to the front-end / middle-end issue of weak handling of aligned types. Would you mind giving a general idea of what the outlook for improvement on this front is?
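
For readers following along, the "aligned types" in question are GNU C vector types and objects carrying an explicit alignment attribute. A minimal sketch (hypothetical, not the testcase from the report) of the kind of declaration whose alignment the front end / middle end would ideally propagate:

  /* Hypothetical sketch: a 16-byte GNU C vector type; names are
     illustrative and not taken from the bug report.  */
  typedef double v2df __attribute__ ((vector_size (16)));

  /* The arrays are 16-byte aligned by construction, so an aligned
     128-bit load (movapd on x86-64) would be legal here.  */
  static double a[1024] __attribute__ ((aligned (16)));
  static double b[1024] __attribute__ ((aligned (16)));

  v2df add_pair (int i)
  {
    v2df x = *(const v2df *) &a[i * 2];   /* aligned 128-bit load */
    v2df y = *(const v2df *) &b[i * 2];
    return x + y;                         /* GNU C vector addition */
  }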

[Bug tree-optimization/41464] vector loads are unnecessarily split into high and low loads

2010-01-24 Thread rguenth at gcc dot gnu dot org
--- Comment #7 from rguenth at gcc dot gnu dot org 2010-01-24 11:52 --- *** Bug 42846 has been marked as a duplicate of this bug. *** -- rguenth at gcc dot gnu dot org changed: What|Removed |Added

[Bug tree-optimization/41464] vector loads are unnecessarily split into high and low loads

2010-01-24 Thread rguenth at gcc dot gnu dot org
--- Comment #8 from rguenth at gcc dot gnu dot org 2010-01-24 12:08 --- In the testcase from PR42846 one issue is that

  base_address: p__3(D)
  offset from base address: 0
  constant offset from base address: 0
  step: 4
  aligned to: 128
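
A data reference with zero offsets and step 4 would typically come from a unit-stride loop over 4-byte elements through an incoming pointer whose alignment is unknown at compile time. A hypothetical sketch (not the PR42846 testcase, whose exact shape is not reproduced here):

  /* Hypothetical sketch of an access that data-ref analysis would
     decompose as base_address = p, offset = 0, constant offset = 0,
     step = 4: a unit-stride walk over 4-byte elements.  The alignment
     of p itself is not known at compile time.  */
  void scale (float *p, int n)
  {
    for (int i = 0; i < n; i++)
      p[i] = p[i] * 2.0f;    /* data-ref: *(p + i*4) */
  }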

[Bug tree-optimization/41464] vector loads are unnecessarily split into high and low loads

2009-09-27 Thread irar at il dot ibm dot com
--- Comment #4 from irar at il dot ibm dot com 2009-09-27 08:06 --- (In reply to comment #1)
> The interesting thing is that data-ref analysis sees 128-bit alignment but the
> vectorizer still produces
>   vect_var_.24_59 = M*vect_p.20_57{misalignment: 0};
>   D.2564_12 = *D.2563_11;

[Bug tree-optimization/41464] vector loads are unnecessarily split into high and low loads

2009-09-27 Thread rguenther at suse dot de
--- Comment #5 from rguenther at suse dot de 2009-09-27 09:43 ---
Subject: Re: vector loads are unnecessarily split into high and low loads

On Sun, 27 Sep 2009, irar at il dot ibm dot com wrote:

> --- Comment #4 from irar at il dot ibm dot com 2009-09-27 08:06 --- (In reply

[Bug tree-optimization/41464] vector loads are unnecessarily split into high and low loads

2009-09-27 Thread irar at il dot ibm dot com
--- Comment #6 from irar at il dot ibm dot com 2009-09-27 09:56 --- (In reply to comment #5)
> "aligned to" refers to the offset misalignment and not to the misalignment of base.
Hmm, I believe it refers to base + offset + constant offset. tree-data-refs.h: /* Alignment
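
To see why the distinction matters, consider a hypothetical access (not taken from the bug) whose base is 16-byte aligned but which carries a constant offset that is not a multiple of 16:

  /* Hypothetical illustration.  The base q is 16-byte aligned, but the
     access q[1] adds a constant byte offset of 4, so base + constant
     offset is only guaranteed to be 4-byte aligned.  Whether "aligned
     to" describes the base alone or the full address is exactly the
     question discussed above.  */
  static float q[1024] __attribute__ ((aligned (16)));

  float pick_second (void)
  {
    return q[1];   /* address = &q[0] + 4 bytes */
  }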

[Bug tree-optimization/41464] vector loads are unnecessarily split into high and low loads

2009-09-25 Thread rguenth at gcc dot gnu dot org
--- Comment #1 from rguenth at gcc dot gnu dot org 2009-09-25 09:06 --- The interesting thing is that data-ref analysis sees 128-bit alignment but the vectorizer still produces
  vect_var_.24_59 = M*vect_p.20_57{misalignment: 0};
  D.2564_12 = *D.2563_11;
  vect_var_.25_61 =
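
For readers without the original attachment, a reduced example in the spirit of the report (a hypothetical sketch, not the attached testcase) is a simple loop over explicitly aligned double arrays, compiled with the vectorizer enabled (e.g. -O3 on x86-64), where aligned 128-bit loads should be possible instead of split high/low loads:

  /* Hypothetical sketch, not the attached testcase: the arrays are
     declared 16-byte aligned, yet the complaint is that the 128-bit
     loads are still split into a low and a high half on x86-64.  */
  double a[1024] __attribute__ ((aligned (16)));
  double b[1024] __attribute__ ((aligned (16)));
  double c[1024] __attribute__ ((aligned (16)));

  void add_arrays (void)
  {
    for (int i = 0; i < 1024; i++)
      c[i] = a[i] + b[i];
  }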

[Bug tree-optimization/41464] vector loads are unnecessarily split into high and low loads

2009-09-25 Thread nmiell at comcast dot net
--- Comment #2 from nmiell at comcast dot net 2009-09-25 17:12 --- Even if it thinks the arrays aren't aligned, that doesn't explain the completely unnecessary zeroing of XMM0 or the choice of the load high/low instructions over MOVUPS. --

[Bug tree-optimization/41464] vector loads are unnecessarily split into high and low loads

2009-09-25 Thread ubizjak at gmail dot com
--- Comment #3 from ubizjak at gmail dot com 2009-09-25 17:33 --- (In reply to comment #2)
> Even if it thinks the arrays aren't aligned, that doesn't explain the completely
> unnecessary zeroing of XMM0 or the choice of the load high/low instructions over
> MOVUPS.
This is by design,
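
To make the two code shapes under discussion concrete, here is a hypothetical sketch using SSE2 intrinsics (not the compiler's actual output): the single unaligned load versus the zero-then-split sequence the report objects to.

  #include <emmintrin.h>

  /* One unaligned 128-bit load; typically a single movupd.  */
  __m128d load_whole (const double *p)
  {
    return _mm_loadu_pd (p);
  }

  /* The split shape the bug complains about: zero the register first
     (xorpd, which also breaks the dependency on its previous contents),
     then fill the low and high halves separately (movlpd / movhpd).  */
  __m128d load_split (const double *p)
  {
    __m128d x = _mm_setzero_pd ();
    x = _mm_loadl_pd (x, p);
    x = _mm_loadh_pd (x, p + 1);
    return x;
  }

On the microarchitectures that the tuning of the time targeted, the split sequence was presumably considered no slower than movupd, which appears to be the design rationale the truncated reply goes on to explain.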