On Saturday, 23 November 2019 at 15:06:43 UTC, Adam D. Ruppe wrote:
Is your worry in slowness of virtual functions or heap allocation?

I'm trying to measure how well optimised naive and very generic code is, basically an exploration of "expressiveness", "strong typing support" and "effort" vs "performance". If the performance with classes and virtuals turns out to be close to static structs then that would be quite interesting, indeed. I will probably try out both.

Basically exploring "how high level can I go without paying a high price?".

void main() {
        import std.typecons;
        auto c = scoped!Child; // on-stack store
        Child obj = c; // get the reference back out for template function calls
        […]
The three lines in main around scoped make it a stack class instead of a heap one.
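For reference, a minimal self-contained sketch of that pattern; the `Child` class body here is a hypothetical stand-in, since the thread elides it:

```d
import std.typecons : scoped;

class Child {
    int childResult() { return 5; } // hypothetical method for illustration
}

void main() {
    // scoped! constructs the class instance in a stack buffer instead of
    // allocating it on the GC heap; it is destroyed at scope exit.
    auto c = scoped!Child;

    // The wrapper converts to a plain class reference via alias this,
    // so it can be passed to template functions expecting Child.
    // Note: obj must not outlive c, since the storage is on the stack.
    Child obj = c;

    assert(obj.childResult() == 5);
}
```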

Ok, thanks, that is worth exploring. Right now the objects I am looking at are pure values, so that might be a bit verbose though.

The `final` keyword on the Child class tells the compiler it is allowed to aggressively devirtualize interior calls from it (in fact, ldc optimizes that `childResult` function into plain […])

Yes, LDC makes the most sense for this purpose.

The only function here that didn't get devirtualized is the abstract method itself, childResult. All the surrounding stuff was though.

I also guess LTO happens too late in the process to carry over information about whether virtuals have been specialised or not.

If you want to get even that devirtualized.... there's the curiously-recurring template pattern to make it happen.
[…]
// child inherits base specialized on itself, so it can see final
// down in the base as well as on the outside
final class Child : Base!(Child) {
        override int childResult() { return defaultValue() + 5; }
}
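Spelled out, the pattern looks something like the sketch below. The `Base` body was elided in the thread, so its contents here (a `defaultValue` hook and a dispatching `result` method) are assumptions for illustration only:

```d
import std.stdio;

// CRTP: the base is templated on the concrete child type, so calls
// routed through the template parameter resolve statically and the
// compiler can devirtualize even the "abstract" hook.
class Base(CT) {
    int defaultValue() { return 10; } // assumed body, not from the thread

    // Dispatch through the static type CT instead of the vtable.
    final int result() {
        return (cast(CT) this).childResult();
    }

    abstract int childResult();
}

// child inherits base specialized on itself, so it can see final
// down in the base as well as on the outside
final class Child : Base!(Child) {
    override int childResult() { return defaultValue() + 5; }
}

void main() {
    auto c = new Child;
    writeln(c.result()); // prints 15
}
```

Because `Child` is `final` and `result` casts to the statically known type, LDC can resolve the `childResult` call at compile time rather than through the vtable.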

That is an interesting trick. Maybe, if LDC emitted information about specialization during compilation of individual files, the LTO pass could be modified to lock down virtuals as "has not been specialised"?


It depends just how crazy you want to go with it, but I think this is less crazy than trying to redo it all with mixins and structs - classes and interfaces do a lot of nice stuff in D. You get attribute inheritance, static checks, and familiar looking code to traditional OO programmers.

Right, that would be one argument in favour of OO. As a first step I am trying to find ways that do not put the optimizer at a disadvantage, as a baseline. Then it might be interesting to compare that to doing it with OO. The measurable difference between the two approaches could be an interesting outcome, I think.


If you really wanna talk about structs specifically, I can think about that too, just I wouldn't write off classes so easily! I legit think they are one of the most underrated D features in this community.

Yes, I am starting with structs, because pure values ought to be easier on the optimizer. So one should be able to go to a fairly high abstraction level and still get good performance.

I plan to look at graphs later, and then classes as the main construct would make sense.
