On Sat, Feb 09, 2019 at 01:08:55AM +0000, bitwise via Digitalmars-d-announce wrote:
> On Saturday, 9 February 2019 at 00:04:20 UTC, Dennis wrote:
> > On Friday, 8 February 2019 at 23:58:49 UTC, H. S. Teoh wrote:
> > > Yep, the moral of the story is, if codegen quality is important to
> > > you, use ldc (and presumably gdc too) rather than dmd.
> >
> > That's definitely true, but that leaves the question whether
> > lowering rvalue references to lambdas is acceptable. There's the
> > 'dmd for fast builds, gdc/ldc for fast code' motto, but if your
> > debug builds of your game make it run at 15 fps it becomes unusable.
> > I don't want the gap between dmd and compilers with modern
> > back-ends to widen.
>
> Since the user doesn't explicitly place the lambda in their code,
> wouldn't it be justifiable for the compiler to take it back out again
> at a later step in compilation, even in debug mode?
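[For concreteness, the lowering under discussion can be sketched roughly like this -- a hypothetical D rendering, where `bump` and `tmp` are illustrative names and the compiler's actual rewrite (with an internal temporary) may differ:]

```d
void bump(ref int x) { x += 1; }

void main()
{
    // bump(41) cannot bind the rvalue 41 to `ref` directly, so the
    // proposal defines its semantics by lowering the call to an
    // immediately-invoked lambda whose parameter gives the rvalue
    // an addressable temporary:
    (int tmp) { bump(tmp); }(41);
}
```

[In an unoptimized dmd debug build, that literal lambda call may survive as a real function call with its own frame, which is the overhead being complained about; ldc/gdc typically inline it away.]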
Using lowering to lambdas as a way of defining semantics is not the
same thing as actually using lambdas to implement a feature in the
compiler!  While it can be convenient to do the latter as a first stab,
I'd expect that the optimizer could make use of special knowledge
available in the compiler to implement this more efficiently.  Since the
compiler will always use a fixed pattern for the lowering, the backend
could detect this pattern and optimize accordingly.  Or the compiler
implementation could lower it directly to something more efficient in
the first place.


T

-- 
If you look at a thing nine hundred and ninety-nine times, you are
perfectly safe; if you look at it the thousandth time, you are in
frightful danger of seeing it for the first time. -- G. K. Chesterton