On 8/2/18 9:21 AM, Stefan Koch wrote:
> On Thursday, 2 August 2018 at 12:57:02 UTC, Steven Schveighoffer wrote:
>> On 8/2/18 8:42 AM, Stefan Koch wrote:
>>> On Monday, 30 July 2018 at 21:02:56 UTC, Steven Schveighoffer wrote:
>>>> Would it be a valid optimization to have D remove the requirement for allocation when it can determine that the entire data structure of the item in question is an rvalue, and would fit into the data pointer part of the delegate?
>>>
>>> Don't do that. It's valid in simple cases.
>>
>> Those are the cases I'm referring to.
>
> I meant it seems valid in simple cases, and I doubt you can distinguish between cases that work and cases which don't work with 100% accuracy.

When the data needed in the delegate is all effectively immutable (or maybe *actually* immutable), I can't see how using a pointer to it is different than using a copy of it.
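To make that concrete (the function name is just for illustration):

// Today this allocates a GC closure just to hold x, even though x
// is never mutated and its value would fit in the context pointer.
int delegate() addTen(int x)
{
    return () => x + 10;
}

void main()
{
    auto dg = addTen(32);
    assert(dg() == 42); // same observable result whether x lives on
                        // the heap or in the bits of dg.ptr
}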

> As soon as you want to chain delegate pointers it falls apart.

And so, don't do the skinny delegate optimization in that case?
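Something like this, I assume, is what you mean -- the captured state is itself a delegate, two words (pointer plus function pointer), so it can never be packed into the one-word context, and the optimization would simply have to stay off:

// Must keep allocating: 'inner' is ptr + funcptr, two words, and
// cannot fit into the single context-pointer slot.
int delegate() twice(int delegate() inner)
{
    return () => inner() * 2;
}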

> Again the question is whether you can tell the cases apart with local information.

This idea requires cooperation from the compiler, all the way down to code generation -- it's not really a lowering but a change in how the data is fetched from the context.
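Roughly: instead of the generated code loading the variable through the context pointer, it would reinterpret the context pointer's bits as the value. A hand-assembled illustration of what I mean -- @system, ABI-dependent, and purely a sketch, not what the compiler would literally emit:

int delegate() addTenSkinny(int x)
{
    // The body, with the context made explicit: the bits of the
    // context pointer *are* the value of x.
    static int impl(void* ctx)
    {
        return (cast(int) cast(size_t) ctx) + 10;
    }

    int delegate() dg;
    dg.ptr = cast(void*) cast(size_t) x;     // no heap allocation
    dg.funcptr = cast(int function()) &impl; // relies on the call ABI
    return dg;
}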

At a minimum, I would say anything that only depends on immutables can be a valid use case.

In order to prove effective immutability (anything that could possibly change elsewhere can't qualify for this), you would need a lot of non-local information. So maybe this only works if the type is actually immutable.
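The reason actual immutability is the easy case: D closures capture variables by reference, so any later mutation is observable through the delegate, and a value-in-pointer snapshot would diverge. For example:

int delegate() observe()
{
    int x = 10;
    auto dg = () => x;  // captures x by reference via the context
    x = 99;             // the mutation is visible through dg
    assert(dg() == 99); // a snapshot of x packed into dg.ptr at
                        // creation time would wrongly yield 10
    return dg;
}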

> Also the delegate might require heap allocation to be memory correct.

Does the "might" depend on the caller or on the delegate implementation itself? The former would squash this idea.

> I am not sure about that; it's just a gut feeling that skinny delegates will break some invariant. I may be wrong.
>
> Coming up with specific rules and throwing them at the wall until they don't break anymore is fine for applications used in limited, controlled circumstances; I would not want to do it with a compiler used by an unknown number of users.

I wasn't planning on that method of proof. What I wanted to do was start with the tightest possible but easily provable constraint -- all data must be immutable -- and then relax it as it makes sense.
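Under that tightest rule, a case like this would qualify (sizes and names just for illustration):

// Everything captured is immutable, and the combined state (4
// bytes) fits in (void*).sizeof, so a copy is indistinguishable
// from a pointer to it.
int delegate() packed(immutable short a, immutable short b)
{
    return () => a + b;
}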

What I don't know is the implications for the optimizer or semantic analysis -- does the context pointer being really a pointer have any effect on those pieces?

I also don't know what you meant by "memory correct".

-Steve
