On Tuesday, 18 March 2014 at 13:01:56 UTC, Marco Leise wrote:
> Let's just say it will never detect all cases, so the "final"
> keyword will still be around. Can you find any research papers
> that indicate that such compiler technology can be implemented
> with satisfactory results? Because it just sounds like a nice
> idea on paper to me that only works when a lot of questions
> have been answered with yes.

I don't think this is such a theoretically interesting question. Isn't this actually a special case of a partial correctness proof where you try to establish constraints on types? I am sure you can find a lot of papers covering bits and pieces of that.

> These not entirely random objects from a class hierarchy could
> well have frequently used final methods like a name or
> position. I also mentioned objects passed as parameters into
> delegates.

I am not sure I understand what you are getting at.

You start with the assumption that a pointer to base class A can refer to any type in that hierarchy. Then you establish constraints, best effort, ruling out the subclasses it cannot be. Then you can inline any virtual function call that is not overridden within that constrained result set. Or you can inline all the remaining candidates in a switch statement and let the compiler do common subexpression elimination & co.
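
To make that concrete, here is a hand-written D sketch of the idea (Shape/Circle/Square are invented names, and the second function is what the optimizer would emit, not something you would write by hand):

import std.math : PI;

class Shape { double area() { return 0; } }

class Circle : Shape {
    double r;
    this(double r) { this.r = r; }
    override double area() { return PI * r * r; }
}

class Square : Shape {
    double s;
    this(double s) { this.s = s; }
    override double area() { return s * s; }
}

// The source as written: an ordinary virtual call through the base class.
double areaVirtual(Shape obj) {
    return obj.area();                 // indirect call via the vtable
}

// What the optimizer could emit once the possible types have been
// constrained, best effort, to {Circle, Square}: a type switch with every
// candidate body inlined, ready for common subexpression elimination & co.
double areaConstrained(Shape obj) {
    if (typeid(obj) is typeid(Circle)) {
        auto c = cast(Circle) obj;
        return PI * c.r * c.r;         // inlined Circle.area
    } else {
        auto s = cast(Square) obj;     // only remaining candidate
        return s.s * s.s;              // inlined Square.area
    }
}

void main() {
    import std.stdio : writeln;
    Shape a = new Circle(1.0);
    Shape b = new Square(2.0);
    writeln(areaVirtual(a), " == ", areaConstrained(a));
    writeln(areaVirtual(b), " == ", areaConstrained(b));
}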

If you want speed you create separate paths for the dominant instance types. Whole-program optimization is guided by profiling data.

> Another optimization, ok. The compiler still needs to know
> that the instance type cannot be sub-classed.

Not really. It only needs to know that in the current execution path you have an instance of type X (the most frequent one); then you have another execution path for the inverted set.
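
Roughly like this, as a toy D sketch (Widget/Button/Slider are invented, and the knowledge that Button dominates is assumed to come from profiling):

class Widget { int height() { return 20; } }
class Button : Widget { override int height() { return 32; } }
class Slider : Widget { override int height() { return 16; } }

int totalHeight(Widget[] widgets) {
    int sum = 0;
    foreach (w; widgets) {
        if (typeid(w) is typeid(Button))
            sum += 32;          // fast path: Button.height inlined for the dominant type
        else
            sum += w.height();  // inverted set: ordinary virtual dispatch
    }
    return sum;
}

void main() {
    import std.stdio : writeln;
    Widget[] ws;
    ws ~= new Button;
    ws ~= new Slider;
    ws ~= new Button;
    writeln(totalHeight(ws));   // 32 + 16 + 32 = 80
}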

> Thinking about it, it might not even be good to duplicate
> code. It could easily lead to instruction cache misses.

You have heuristics for that. After all, you do have the execution pattern. You have the data of a running system on typical input. If you log all input events (which is useful for a simulation) you can rerun the program in as many configurations as you want. Then you skip the optimizations that lead to worse performance.
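
The logging part is nothing exotic. A minimal sketch, with an invented Event type and log format, of recording inputs so the exact same run can be replayed against differently optimized builds:

import std.stdio : File;
import std.conv : to;
import std.array : split;

struct Event { long tick; string payload; }

// Append one input event to the log while the program runs normally.
void record(ref File log, Event e) {
    log.writefln("%s\t%s", e.tick, e.payload);
}

// Read the log back so the same input sequence can be replayed later.
Event[] load(string path) {
    Event[] events;
    foreach (line; File(path).byLineCopy) {
        auto parts = line.split("\t");
        events ~= Event(parts[0].to!long, parts[1]);
    }
    return events;
}

void main() {
    auto log = File("events.log", "w");
    record(log, Event(1, "click"));
    record(log, Event(2, "keypress"));
    log.close();

    foreach (e; load("events.log"))
        assert(e.payload.length > 0);  // feed e back into the simulation here
}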

> Also this is way too much involvement from both the coder and
> the compiler.

Why? Nobody claimed that near-optimal whole-program optimization has to be fast.

> At this point I'd ask for "final" if it wasn't already there, if just to be sure the compiler gets it right.

Nobody said that you should not have final, but final won't help you inline virtual functions where that is possible.
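
Just to be explicit about what I mean (Node/Leaf are invented): final only buys you direct calls for the methods you mark final; the remaining virtual calls still need the kind of analysis sketched above before they can be inlined.

class Node {
    // final: cannot be overridden, so the call is always direct and
    // trivially inlinable, no analysis needed.
    final string name() { return "node"; }

    // virtual: inlinable only if the compiler can rule out every override.
    double weight() { return 1.0; }
}

class Leaf : Node {
    override double weight() { return 0.5; }   // fine, weight() is not final
    // override string name() { return "x"; }  // error: name() is final
}

double use(Node n) {
    // n.name() is a direct call; n.weight() stays an indirect call unless
    // devirtualization can narrow n down to a known set of types.
    return n.name().length + n.weight();
}

void main() {
    import std.stdio : writeln;
    writeln(use(new Leaf));
}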

>> you can have a high level specification language asserting pre and post conditions if you insist on closed source.

> More shoulds and cans and ifs... :-(

Err… well, without such a specification you can of course just start with a blank slate (assume nothing) after calling a closed-source library function.
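
For what it's worth, D's in/out contracts already give a taste of that specification idea. A minimal sketch, where libFrobnicate is a hypothetical closed-source routine (the stub body is only there so the example compiles):

// Stand-in for a hypothetical closed-source routine; in reality only its
// declaration plus the stated pre/post conditions would be visible.
int libFrobnicate(int[] buffer) { return cast(int) buffer.length; }

int checkedFrobnicate(int[] data)
in {
    assert(data.length > 0, "precondition: non-empty buffer");
}
out (result) {
    assert(result >= 0, "postcondition: non-negative status code");
}
do {
    // With a specification like this the caller (or an optimizer) does not
    // have to assume "anything" about what the call did.
    return libFrobnicate(data);
}

void main() {
    import std.stdio : writeln;
    writeln(checkedFrobnicate([1, 2, 3]));
}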

> I don't get the big picture. What does the compiler have to do
> with plugins? And what do you mean by allowed to do and
> access and how does it interact with virtuality of a method?
> I'm confused.

In my view plugins should not be allowed to subclass; I think it is ugly. But if they are, then you need to tell the compiler which classes the plugin can subclass, instantiate, etc., as well as what side effects the call to the plugin may and may not have.

Why is that confusing? If you shake the world, you need to tell the compiler what the effect is. Otherwise you have to assume "anything" upon return from said function call.

That said, I am personally not interested in plugins without constraints imposed on them (or at all). Most programs can do fine with just static linkage, so I find the whole dynamic linkage argument less interesting.

Closed-source library calls are more interesting, especially if you can say something about the state of that library. That could provide you with detectors for wrong library usage (the library could be the OS itself), e.g. that a file has to be opened before it is closed, etc.
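
As a toy illustration of that last point (TrackedFile is an invented wrapper, and these are runtime asserts rather than the static detector I have in mind):

import std.stdio : File;

struct TrackedFile {
    File handle;
    bool isOpen;

    void open(string path) {
        assert(!isOpen, "protocol violation: open() on an already-open file");
        handle = File(path, "w");
        isOpen = true;
    }

    void close() {
        assert(isOpen, "protocol violation: close() before open()");
        handle.close();
        isOpen = false;
    }
}

void main() {
    TrackedFile f;
    f.open("example.txt");
    f.handle.writeln("hello");
    f.close();
    // f.close();  // would trip the assert: the protocol says open must precede close
}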
