On 9/20/13 9:28 AM, Joseph Rushton Wakeling wrote:
On 20/09/13 18:01, Walter Bright wrote:
I know that, at least with dmd's back end, it's because the optimizer was built around the kinds of things that C++ programmers tend to write. The D range/algorithm style generates code that is unusual (for the back end) and that the back end doesn't optimize for.

For example, ranges tend to lump several variables together and call them a 'struct'. The back end is not tuned to deal with structs as an aggregate of discrete 'variables', meaning that such variables don't get assigned to registers. Structs are treated as a lump.

This is not a fundamental performance problem with D.

The back end needs to be improved to "dis-aggregate" structs back into discrete variables.

I wouldn't single out DMD for criticism -- I don't know to what extent
the underlying reasons overlap, but all the compilers cope less well
with range-based material than they might in theory.

The canonical example would be something like,

     foreach (i; iota(10)) { ... }

which in theory shouldn't be any slower than,

     foreach (i; 0 .. 10) { ... }

but in practice is, no matter what the compiler.

I think I know how to fix that. I hypothesize it's about using an actual increment instead of a stored "step" value for the particular case when step is 1.

Andrei
