Stewart Stremler wrote:
(Is saving 2-5 characters *really* that significant?)
When I'm the one doing the typing, and you all know what I mean, and using full text in some cases would cause compilation errors... yes.
        When it sees foo_norm(&normal), it can do analysis to determine
that normal isn't changed before or after calling the function, but it
needs access to the definition of foo_norm() (which it often does not
have) in order to be certain that it isn't modified *during* the call.

This depends on the language and the environment, and is an example of
where runtime optimization can beat compile-time optimization (thus, how
Java can beat out C and C++ for some real-world problems, despite the
howls of outrage and accusation of "lies and cheating" from that camp).
I must have missed the page in the C/C++ language spec that says, "thou shalt have a non-optimizing runtime". ;-)

That said, the optimization I'm talking about here isn't a runtime optimization, but a link-time one. A runtime optimizer might notice that the variable is rarely modified, but it would always need some way of detecting and responding to the case where it was.


It is worth noting that the "final" keyword in Java serves a similar purpose to "const" in C++, and does provide a performance boost. Java's optimizations in this area also fall apart when they hit non-Java code (although some VMs have C++/C99 bindings that make them aware of constness contracts... now they just need to be aware of strict aliasing). The nice thing about being able to declare a contract is that the optimizations can be applied even if some of the code involved runs outside of your native runtime.
Now, the linker might be able to figure this out, iff it is statically
linking in foo_norm(), but if it's a shared library, no dice. With the
case of foo_const(&constant), it doesn't need to worry about that. In
fact it can safely just hard code in 3 for the printf line.

An optimizing linker?

I suppose it could be done. Probably has been.
It's pretty much par for the course. There are even optimizing dynamic linkers, but the trade-offs aren't always desirable.
But as our processors are getting more and more sophisticated, that
just seems like wasted work, if not actively harmful.  Let the runtime
do the optimization -- it *knows* what's actually going on.
The runtime knows what has gone on historically, which does help with optimizations, but that is not the same thing as knowing what is going to happen. The linker is the last line of defense where the optimizer can "prove" that an optimization is safe. The runtime can help determine whether it is *useful*, which is not the same thing.
I have a big problem with all this effort being done for *performance*
reasons, before a profiler has even been run on the code.
a) The effort is not being done for "performance" reasons. Improved optimization is just a nice byproduct of more expressive contracts.
b) Who said the profiler has not been run on the code?

--Chris

--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg