While I agree with you, I think there are so many things we are
already trying to address that this one can wait. I think we've been
doing a very good job on large functions too, and I believe that authors
of very large functions are getting not only what they deserve, but
actually what they expect: large (superlinear) compile times.
Not to mention that these huge functions are usually central to the
program. If GCC decided that it is not worth optimizing the
machine-generated bytecode interpreter of GNU Smalltalk, for example, I
might as well rewrite it in assembly (or as a JIT compiler). Same for
interpret.cc in libjava, though it is a tad smaller than GNU Smalltalk's
interpreter.
Unlike the authors of other VMs, I have no problem writing code so that
the *latest* version of GCC will do its best, instead of complaining
that GCC compiles my code worse on every release. So, I am ok with GCC
doing stupid things because of bugs that I/we can fix, but not with GCC
simply giving up on optimizing code that has always been compiled
perfectly (in one to two minutes for about 30,000 lines of
machine-generated code, despite being chock-full of computed gotos),
that *can* be optimized very well, and that is central to the
performance of a program.
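
For context, here is a minimal sketch of the computed-goto dispatch
style that such machine-generated interpreters rely on (illustrative C
using GCC's labels-as-values extension, not GNU Smalltalk's actual
code; the opcodes and names are made up). Multiply the handlers by a
few hundred opcodes and you get a single enormous function whose
control flow the optimizer has to untangle:

  /* Hypothetical sketch of a computed-goto bytecode dispatch loop. */
  #include <stdio.h>

  enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

  static int run(const int *code)
  {
      /* GCC extension: labels as values, one table entry per opcode. */
      static const void *dispatch[] = {
          &&op_push, &&op_add, &&op_print, &&op_halt
      };
      int stack[16], sp = 0;

      goto *dispatch[*code++];    /* jump to the first opcode's handler */

  op_push:
      stack[sp++] = *code++;      /* push an immediate operand */
      goto *dispatch[*code++];
  op_add:
      sp--;
      stack[sp - 1] += stack[sp]; /* pop two values, push their sum */
      goto *dispatch[*code++];
  op_print:
      printf("%d\n", stack[sp - 1]);
      goto *dispatch[*code++];
  op_halt:
      return stack[sp - 1];
  }

  int main(void)
  {
      const int prog[] = { OP_PUSH, 2, OP_PUSH, 3,
                           OP_ADD, OP_PRINT, OP_HALT };
      return run(prog) == 5 ? 0 : 1;   /* prints 5 */
  }

Every handler ends in its own indirect jump into the table, which is
exactly what makes these functions both fast to run and expensive for
the compiler to analyze.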
Paolo