On 19-Aug-2015 13:09, "Ola Fosheim Grøstad" <[email protected]> wrote:
On Wednesday, 19 August 2015 at 09:55:19 UTC, Dmitry Olshansky wrote:
On 19-Aug-2015 12:46, "Ola Fosheim Grøstad" <[email protected]> wrote:
Well, you can start on this now, but by the time it is ready and
hardened, LLVM might have received improved AVX2 and AVX-512 code gen
from Intel. Which basically will leave DMD in the dust.


On numerics, video-codecs and the like. Not like compilers solely
depend on AVX.

Compilers are often written for scalars, but they are also just one
benchmark that compilers are evaluated by.

DMD could use multiple backends, use its own performance estimator (run
on the generated code) and pick the best output from each backend.


This meets what goal? As I said, it's apparent that folks like DMD for fast compile times, not for inhumanly good codegen.

D could leverage increased register sizes for parameter transfer between
non-C-callable functions. Just that alone could be beneficial. Clearly,
having 256/512-bit-wide registers matters.

Load/unload via shuffling or a round trip through the stack is going to murder that, though.

And you need to coordinate
how the packing is done so you don't have to shuffle.


Given how flexible the current data types are, I hardly see it implemented in a sane way, not to mention the benefits could be rather slim. Lastly - why haven't the "omnipotent" (per this thread) LLVM/GCC guys implemented it yet?

Lots of options in there, but you need to be different from LLVM. You
can't just take an old SSA and improve on it.

To gain slightly? Again, the goal of maximizing the gains of vector ops is hardly interesting IMO.



--
Dmitry Olshansky
