André Pang wrote:
> The usual term for your idea of "deterministic function" is "pure function". Congratulations, you've just invented Haskell ;). <http://www.haskell.org/>
>
> You can also specify pure functions in GCC by declaring a function as e.g.
>
>  int square (int) __attribute__ ((pure));

Sweet.  I'm glad I brought this up, although I take your point about C.

> Compilers can theoretically do this if you give the compiler enough annotations about which functions are pure and which aren't: the name for the optimisation is common subexpression elimination (CSE). However, the jury's still out on whether this optimisation is actually useful; at least one paper has shown that it isn't very beneficial for Haskell:
>
>  http://www.springerlink.com/content/87m4m1294j472456/
>  http://www.haskell.org/haskellwiki/GHC:FAQ
>
> but Haskell's a very different programming language from C/C++, so it's possible that more traditional imperative languages would gain a lot more from CSE than Haskell does. In practice, I don't think any compiler actually does this at all, except for some very basic maths functions.
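As a concrete sketch of the annotation quoted above: the `__attribute__((pure))` syntax is real GCC, but the function bodies here are invented for illustration.

```c
/* `pure` tells GCC the function has no side effects, so repeated
   calls with the same argument become candidates for common
   subexpression elimination.  (GCC's stricter `const` attribute
   additionally forbids reading global memory.) */
__attribute__((pure)) int square(int x)
{
    return x * x;
}

int sum_of_squares(int a, int b)
{
    /* With the attribute, GCC at -O1 and above may evaluate
       square(a) once even though it is written twice. */
    return square(a) + square(a) + square(b);
}
```

Without the attribute the compiler must assume `square` could have side effects and emit both calls.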

My gut feeling is that it would be unreasonable to expect much of a return from this optimization in a compiled language, if the optimization is restricted to the scope of the calling function.

e.g. if functions b and c both call DoSomething, and b and c are both called from a, how can the compiler know that the call to DoSomething in c will use the same parameters as the call in b? Perhaps it's possible if c is only ever called from a, but that's starting to look like a fairly special case.
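A hypothetical sketch of that call pattern: the names a, b, c and DoSomething come from the email, but the bodies are invented purely for illustration.

```c
int DoSomething(int x)        /* assume this is pure */
{
    return x * 2;
}

int b(int x) { return DoSomething(x) + 1; }
int c(int x) { return DoSomething(x) - 1; }

int a(int x)
{
    /* Both paths below reach DoSomething(x) with the same argument,
       but CSE scoped to a single function body cannot see that:
       inside b and inside c there is only one call each, and inside
       a there is no call at all until b and c have been inlined. */
    return b(x) + c(x);
}
```

So the redundancy only becomes visible after inlining, which is why intra-procedural CSE alone gets little traction here.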

I'm not sure how it would play out in a dynamic language, but I think DoSomething would need to be fairly expensive for it to be worth the trouble.
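When the function is expensive, the caller can do the caching by hand instead of waiting for the compiler. A minimal sketch in C: the function body and the single-entry cache are both invented for illustration.

```c
#include <stdbool.h>

static int call_count = 0;    /* lets us observe how often the real work runs */

/* Stand-in for the costly pure function discussed above. */
int ExpensiveDoSomething(int x)
{
    call_count++;
    return x * x + 1;
}

/* Hand-rolled memoisation: remember the last argument/result pair,
   so back-to-back calls with the same argument skip the real work.
   This is the caller-side equivalent of the optimisation a compiler
   would otherwise have to prove safe. */
int MemoDoSomething(int x)
{
    static bool cached = false;
    static int last_x, last_result;

    if (!cached || last_x != x) {
        last_result = ExpensiveDoSomething(x);
        last_x = x;
        cached = true;
    }
    return last_result;
}
```

A one-entry cache only pays off when consecutive calls tend to repeat the same argument; a hash table would generalise it, at the cost of the bookkeeping Adelle is worried about.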

Adelle.


_______________________________________________
coders mailing list
coders@slug.org.au
http://lists.slug.org.au/listinfo/coders
