I've only really researched and improved the peephole optimizer, which runs at the assembly stage. I'm not sure how much optimization is done at earlier stages, but I do know that care is taken in deciding the best implementation of a case block, for example by evaluating factors such as how many separate branches there are. As it currently stands, pure function evaluation would happen at the pre-compilation stage, where Pascal code is converted into platform-independent nodes.

Gareth
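The opt-in evaluation strategy described above can be sketched in miniature (an illustration only; the names `PURE_FUNCTIONS` and `fold_call` and the tuple-shaped expression nodes are hypothetical, not actual FPC internals): a call node is folded to a constant only when the callee is explicitly marked pure and every actual parameter is already a constant, so a single variable argument blocks folding and unmarked functions are never evaluated at compile time.

```python
# Hedged sketch of opt-in compile-time evaluation of pure functions.
# All names here (PURE_FUNCTIONS, fold_call, the tuple node shapes)
# are hypothetical illustrations, not FPC compiler internals.

# Functions the programmer has explicitly marked with a "pure" directive.
PURE_FUNCTIONS = {
    "gcd": lambda a, b: a if b == 0 else PURE_FUNCTIONS["gcd"](b, a % b),
}

def fold_call(node):
    """Fold ('call', name, args) to ('const', value) when possible.

    Folding happens only if the callee is marked pure AND every
    argument is a constant; a variable argument blocks folding,
    and unmarked functions are never evaluated at compile time.
    """
    kind, name, args = node
    assert kind == "call"
    if name in PURE_FUNCTIONS and all(a[0] == "const" for a in args):
        value = PURE_FUNCTIONS[name](*(a[1] for a in args))
        return ("const", value)
    return node  # left for run-time evaluation

# Constant arguments to a pure function: folded at "compile time".
folded = fold_call(("call", "gcd", [("const", 48), ("const", 18)]))
# A variable argument: the call is left untouched.
kept = fold_call(("call", "gcd", [("const", 48), ("var", "n")]))
```

This mirrors the opt-in approach: the compiler only attempts the expensive evaluation for functions the programmer has flagged, instead of trying every function whose actual parameters happen to be constant.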
On Mon 09/07/18 03:33, "R0b0t1" r03...@gmail.com sent:

On Sun, Jul 8, 2018 at 9:07 PM, Dmitry Boyarintsev wrote:
>
> On Sun, Jul 8, 2018 at 8:15 PM, J. Gareth Moreton wrote:
>>
>> Yes, if any parameters are variables, then the function is not evaluated.
>> My intention is that the purity of a function is only determined when it
>> comes to evaluating it in an expression, but because of how complex
>> functions can become, the "pure" directive hints to the compiler that the
>> given function is pure and it should attempt the laborious task of
>> evaluating it, rather than the opposite approach of attempting to
>> evaluate all functions with constant actual parameters and potentially
>> increasing the compilation time by several orders of magnitude (don't
>> forget it might be attempting to do the same thing with system functions
>> if the project is undergoing a full build).
>
> Is the FPC assembler reader powerful enough to analyze and trust assembler
> functions marked as pure?

At which stages is optimization done? GCC's backend optimizes at each step of compilation (as far as I am aware), i.e. GENERIC -> GIMPLE -> RTL -> assembly. Many optimizations work best, or are only possible, at a certain stage. The various intermediate representations are also what makes analysis efficient. Most optimization passes do not happen on assembly.
_______________________________________________
fpc-devel maillist - fpc-devel@lists.freepascal.org
http://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-devel