On Mon, 17 Mar 2014 18:16:13 +0000, "Ola Fosheim Grøstad" <[email protected]> wrote:
> On Monday, 17 March 2014 at 06:26:09 UTC, Marco Leise wrote:
>> About two years ago we had that discussion and my opinion
>> remains that there are too many "if"s and "assume"s for the
>> compiler. It is not so simple to trace back where an object
>> originated from when you call a method on it.
>
> It might not be easy, but in my view the language should be
> designed to support future advanced compilers. If D gains
> traction on the C++ level then the resources will become
> available iff the language has the right constructs or affords
> extensions that make advanced optimizations tractable. What is
> possible today is less important...

Let's just say it will never detect all cases, so the "final"
keyword will still be around. Can you find any research papers
indicating that such compiler technology can be implemented with
satisfactory results? To me it sounds like an idea that is nice
on paper but only works once a lot of questions have been
answered with "yes".

>> It could be created through the factory mechanism in Object
>> using a runtime string or it
>
> If it is random then you know that it is random.

These not-entirely-random objects from a class hierarchy could
well have frequently used final methods, like a name or a
position. I also mentioned objects passed as parameters into
delegates.

> If you want speed you create separate paths for the dominant
> instance types. Whole program optimization is guided by
> profiling data.

Another optimization, OK. The compiler still needs to know that
the instance type cannot be sub-classed.

>> There are plenty of situations where it is virtually
>> impossible to know the instance type statically.
>
> But you might know that it is either A and B or C and D in most
> cases. Then you inline those cases and create specialized
> execution paths where profitable.

Thinking about it, it might not even be good to duplicate code.
It could easily lead to instruction cache misses.
Also, this is way too much involvement from both the coder and
the compiler. At this point I'd ask for "final" if it wasn't
already there, if just to be sure the compiler gets it right.

>> Whole program analysis only works on ... well, whole programs.
>> If you split off a library or two it doesn't work. E.g. you
>> have your math stuff in a library and in your main program
>> you write:
>>
>>     Matrix m1, m2;
>>     m1.crossProduct(m2);
>>
>> Inside crossProduct (which is in the math lib), the compiler
>> could not statically verify if it is the Matrix class or a
>> sub-class.
>
> In my view you should avoid not having source access, but even
> then it is sufficient to know the effect of the function. E.g.
> you can have a high level specification language asserting pre
> and post conditions if you insist on closed source.

More shoulds and cans and ifs... :-(

>>> With a compiler switch or pragmas that tell the compiler what
>>> can be dynamically subclassed the compiler can assume all
>>> leaves in the compile time specialization hierarchies to be
>>> final.
>>
>> Can you explain how this would work and where it is used?
>
> You specify what plugins are allowed to do and access at
> whatever resolution is necessary to enable the optimizations
> your program needs?
>
> Ola.

I don't get the big picture. What does the compiler have to do
with plugins? What do you mean by "allowed to do and access",
and how does that interact with the virtuality of a method? I'm
confused.

-- 
Marco
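The library-boundary problem can be sketched the same way (again
in C++ terms, with hypothetical names; the point carries over to
D unchanged). The "library" translation unit is compiled without
seeing the main program, so it must keep the call virtual:

```cpp
#include <string>

// --- math library side: compiled in isolation ---
struct Matrix {
    virtual std::string kind() const { return "Matrix"; }
    virtual ~Matrix() = default;
};

// Hypothetical library routine standing in for crossProduct's
// internals. The call m.kind() must remain a virtual dispatch:
// when this file is compiled, no analysis can rule out that a
// client program derives from Matrix.
std::string crossProductKind(const Matrix& m) {
    return m.kind();
}

// --- main program side: proves the library had to be careful ---
struct SparseMatrix : Matrix {
    std::string kind() const override { return "SparseMatrix"; }
};
```

Whole-program analysis would only help here if the library and
the program were compiled (or link-time-optimized) together;
otherwise only a `final` annotation on Matrix gives the library
compiler the guarantee it needs.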
