rif wrote on Fri, Feb 28, 2003 at 02:44:31PM -0500:
>
> It seems that profiling and recompilation have some weird
> interactions. In particular, if I recompile with profiling on, I get
> a lot of warnings about functions previously taking two arguments
> that now take one, and so forth. So I guess the answer to this is
> just to turn profiling off before recompiling.

Yep. It can't work unless you teach the profiler what to do when a
profiled function gets recompiled, introduce hooks into the compiler,
and so on.

> I'm also not clear what happens with inlining. My current guess is
> that compilation is essentially a "one-pass" system, so that foo can
> inline bar only if bar is *already* known to be inline when foo is
> processed.

Yes.

> Therefore, if bar is after foo in the file and I declare it inline, I
> will in general have to recompile *twice* for this to take effect.
> Is this true?

Of course. The code has to be known before it can be expanded.
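Roughly like this (bar's body is made up here, just to show the
ordering): with the proclamation and the definition of bar ahead of
foo, a single compilation of the file can open-code the call.

;; Sketch only: the compiler must already have seen both the INLINE
;; proclamation and bar's definition when it reaches foo, otherwise
;; the call cannot be expanded on this pass.
(declaim (inline bar))

(defun bar (x)
  (* x x))

(defun foo (x)
  ;; bar is already known to be inline here, so this call can be
  ;; open-coded in one compile of the file.
  (+ (bar x) 1))

If bar came after foo instead, the first compile would only record
that information and the second compile would use it, which is the
recompile-twice effect you are seeing.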
> Then there are other things I still have no handle on. For instance,
> it seems that sometimes a type declaration will have no effect --- I
> will declare an argument x to be (type (simple-array double-float x)),
> but will get compiler notes to the effect that x is a vector, not a
> simple-array double-float. However, if I blow away all the .x86f
> files in the directory and recompile them, these notes go away and
> the code becomes fast. Any ideas what to even look for that causes
> this problem?

How exactly do you write these declarations? And in what order do you
load things?

> Thanks to everyone for all the help you've given so far. A couple
> weeks ago my LISP program ran in 55 seconds. Now I get the same
> results in .9 seconds, and I've developed a nice little macro
> language I'll be able to use to make other programs fast much more
> easily.

Sounds very good. If you haven't already, also watch for consing. The
time macro will report it. Anything that conses without actually
needing the storage is probably worth looking into.
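For illustration (the function and the data are made up), something
like this shows both the usual shape of such an array declaration and
what TIME reports; in CMUCL the output includes the bytes consed by
the form:

;; Made-up example: a declared double-float vector and a TIME call.
;; TIME prints run time, GC time, and bytes consed; a loop like this
;; that reports a lot of consing is worth a closer look.
(defun sum-squares (v)
  (declare (type (simple-array double-float (*)) v))
  (let ((acc 0d0))
    (declare (type double-float acc))
    (dotimes (i (length v) acc)
      (incf acc (* (aref v i) (aref v i))))))

(time (sum-squares (make-array 1000 :element-type 'double-float
                               :initial-element 1d0)))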
Martin