Gabriel Sechan wrote:
From: Andrew Lentvorski <[EMAIL PROTECTED]>
>> JIT compilers do two primary things:
>> 1) compile JVM bytecodes down to native assembly
>> This doesn't gain you much over normal compilers. However, it is nice
>> in that the system/OS/microprocessor-specific idiocies are encapsulated.
> It actually tends to be a loss -- you're now going through two levels
> of compilation, which may destroy optimizations available at the
> original level, and takes away almost all ability of the developer to
> actually optimize (aside from the high-level optimization of picking
> the best algorithm).
This is kind of nonsensical. Most byte-code representations are strictly
pre-parsed source code. JVM bytecodes are slightly lower level, but I
haven't seen many cases where they interfere with optimizations (indeed,
looking at native Java compilers that can take either source or
byte-codes as input, none of them seem to get better performance from
working with the source). Furthermore, the entire reason developers
can't "optimize" is that these optimizations are being performed at
runtime. Anyway, the extent to which a developer in any language can
optimize at the level you're talking about is the extent to which the
language's object code generation (compiler, JIT, whatever) is broken.
>> 2) make runtime choices for optimizations
>> This can be huge. There are certain classes of optimizations that use
>> information only available at runtime. The big one is being able to
>> optimize across library boundaries. There are some smaller ones, like
>> branch prediction and loop unrolling, which can be much more
>> aggressive if there is runtime information available. These
>> optimizations are especially important when you start running on
>> machines which don't have the heavy-duty out-of-order execution and
>> branch prediction engines of the big microprocessors.
> They talk about this being huge. The reality is that the only tests
> which have shown any real advantage to them have been cooked. You
> spend more processor time analyzing than you gain.
Umm... sorry, no. That's kind of a meaningless statement. Even if you
spend *years* of execution time analyzing the code, it will pay off if
the program runs indefinitely. If your program's runtime lifecycle is
too short, you can (and should) use the various JVM flags to turn off or
reduce the extent to which this kind of profile-guided optimization is
done (indeed, try running Java programs with -client vs. -server, and
you'll quickly find that sometimes the extra profiling is a really big
win).
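To make the -client vs. -server comparison concrete, here's a tiny
hot-loop benchmark you could run under each flag (a sketch I wrote for
this post, not from the thread; flag availability varies by JVM version
-- on modern HotSpot releases -client is often ignored in favor of
tiered compilation, and -Xint disables the JIT entirely):

```java
// Run the same workload under different JIT settings and compare times:
//   java -client Bench   (historical client compiler, C1: fast startup)
//   java -server Bench   (server compiler, C2: heavier profiling/opts)
//   java -Xint Bench     (pure interpreter, no JIT at all)
public class Bench {
    // Hot method: after enough iterations the JIT compiles it using
    // profile data gathered while it ran interpreted.
    static long work(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += (i * 31L) % 7;
        }
        return sum;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        long result = work(50_000_000);
        long elapsed = System.nanoTime() - start;
        // Time is machine-dependent; the interesting part is the ratio
        // between the different JVM invocations above.
        System.out.println("result=" + result
                + " time=" + (elapsed / 1_000_000) + "ms");
    }
}
```

For a loop this trivial the server compiler usually wins once warmed up,
while the interpreter is dramatically slower, which is exactly the
trade-off being argued about.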
Just as another data point on this: most serious compilers support
profile-guided optimization. It's a popular feature with developers
working on performance-critical code, with very measurable benefits.
While you can't do quite as much crazy stuff as you can with the JVM's
execution model, if the extra profiling really were a disadvantage,
you'd think they would just skip this step.
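A minimal sketch of the "optimize across library boundaries" case
mentioned earlier (class names are my own, purely illustrative): a
static compiler building against the interface alone cannot inline the
virtual call, but the JIT observes at runtime that only one
implementation is ever loaded and can devirtualize and inline it.

```java
// A virtual call across a compilation-unit boundary. Statically, any
// Shape implementation could show up here; at runtime the JVM's
// profiler sees that every receiver at this call site is a Square, so
// the JIT can devirtualize area() and inline the multiply into the
// hot loop (with a deoptimization guard in case a new class loads).
interface Shape {
    double area();
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

public class DevirtDemo {
    static double totalArea(Shape[] shapes) {
        double sum = 0.0;
        for (Shape s : shapes) {
            sum += s.area();   // monomorphic call site in practice
        }
        return sum;
    }

    public static void main(String[] args) {
        Shape[] shapes = new Shape[100_000];
        for (int i = 0; i < shapes.length; i++) {
            shapes[i] = new Square(2.0);
        }
        System.out.println(totalArea(shapes)); // prints 400000.0
    }
}
```

No ahead-of-time compiler linking against Shape alone can make that
inlining decision, because the set of loaded implementations is only
known at runtime.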
--Chris
--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg