Hi Will,
As a lurker on this list, I thought I'd share the following article:
http://www.osnews.com/story.php?news_id=10729
This might be a good article for those new to JVMs. It's not as
technical as the various white papers being bandied about, but it's a
nice introduction to just-in-time (JIT) compilation and optimization,
Swing threading, and other JVM performance-related issues.
Yes. I think this is a nice, high-level overview. These subjects
(almost exactly) are covered in greater depth in the various links I've
posted, most particularly in the talk by Dave Grove and others which I
posted most recently. If any readers found Will's post interesting, I
*strongly* encourage you to read the following for more on the *how* and
*why*...
I've just added another tutorial to the wiki. This is a great talk
which Dave Grove (IBM Research) gave at U Texas a few weeks ago. For
those interested in performance-related issues and VM design, I'd
regard this as "essential reading". :-)
http://www.research.ibm.com/people/d/dgrove/talks/SoftwareOptimizationAndVirtualMachines.pdf
Skipping to the bottom of your email...
I assume these will all be goals for a Harmony project JVM?
Absolutely! Most of the ideas you mentioned are already implemented in
Jikes RVM's opt compiler, and I think most likely in ORP's compiler as
well, even the more out-there ones, such as OSR (on-stack replacement)
and guarded inlining. This is the sort of work Harmony is trying to
leverage, so it would be a failure of the project if it did not achieve
these goals!
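For readers wondering what "guarded inlining" actually means, here's a rough Java-level sketch. The class names (Shape, Circle) are made up for illustration, and the real transformation happens on the JIT's internal representation of compiled code, not on Java source -- but the shape of the generated code is essentially this: a cheap type test guarding an inlined method body, with a fallback to ordinary virtual dispatch.

```java
// Sketch of the code a JIT effectively emits when it guard-inlines a
// virtual call. Hypothetical classes for illustration only; a real JIT
// does this rewrite on machine code, guided by runtime profiling.
interface Shape { double area(); }

final class Circle implements Shape {
    final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

public class GuardedInlineSketch {
    // The original code: a virtual call the JIT would like to inline.
    static double areaVirtual(Shape s) {
        return s.area();
    }

    // What the JIT effectively produces once profiling shows that every
    // Shape seen at this call site was a Circle.
    static double areaGuarded(Shape s) {
        if (s.getClass() == Circle.class) {
            Circle c = (Circle) s;
            return Math.PI * c.r * c.r;   // inlined body of Circle.area()
        }
        // Guard failed (some other Shape subclass was loaded and passed
        // in): fall back to the normal virtual dispatch. This is also
        // where deoptimization/"uncompiling" comes into play.
        return s.area();
    }

    public static void main(String[] args) {
        Shape s = new Circle(2.0);
        // Both paths compute the same result; the guarded version just
        // avoids the virtual dispatch in the common case.
        System.out.println(areaVirtual(s) == areaGuarded(s)); // prints "true"
    }
}
```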
This technology is very exciting and I'd encourage interested list
readers to read the article you pointed to, read the tutorials, and ask
questions! :-) Understanding the implications of this technology is
important to understanding how Harmony will work and where the future of
VMs is headed.
Cheers,
--Steve
"I think Java is somehow still seen as an interpreted language; in
fact, it does get compiled to native code using Just In Time (JIT)
compilation. It is also a myth to think that JIT code is slower than
pre-compiled code. The only difference is that bytecode gets JITed
once it's required (i.e., the first time a method is called, and the
time is negligible); it then gets cached for subsequent calls. JIT code
can benefit from all the same optimisations that pre-compiled code can
get, plus some more (from Lewis and Neumann, 2004):
* The compiler knows what processor it is running on, and can generate
code specifically for that processor. It knows whether (for example)
the processor is a PIII or P4, if SSE2 is present, and how big the
caches are. A pre-compiler, on the other hand, has to target the
least-common-denominator processor, at least in the case of commercial
software.
* Because the compiler knows which classes are actually loaded and
being called, it knows which methods can be de-virtualized and
inlined. (Remarkably, modern Java compilers also know how to
"uncompile" inlined calls in the case where an overriding method is
loaded after the JIT compilation happens.)
* A dynamic compiler may also get the branch prediction hints right
more often than a static compiler."
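The compile-on-first-call-then-cache behaviour described above can actually be observed from plain Java code. Here's a minimal sketch, assuming a HotSpot-style JVM: time two identical batches of calls and note that the second batch is usually cheaper once the JIT has compiled the hot method. The loop counts are arbitrary, the timings are illustrative (they vary by JVM, hardware, and flags), and for real measurements you'd want a proper benchmark harness rather than hand-rolled timing.

```java
// Rough demonstration of JIT warm-up: the same method typically gets
// cheaper per call after the JVM has compiled it. This is a sketch, not
// a rigorous benchmark; numbers vary by JVM and machine.
public class WarmupSketch {
    // A small, pure method that becomes "hot" and gets JIT-compiled.
    static long sumOfSquares(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) total += (long) i * i;
        return total;
    }

    public static void main(String[] args) {
        long result = 0;

        // First batch: likely starts interpreted; the JIT may compile
        // the method partway through (via on-stack replacement).
        long t0 = System.nanoTime();
        for (int i = 0; i < 1_000; i++) result += sumOfSquares(10_000);
        long cold = System.nanoTime() - t0;

        // Second batch: the compiled code is cached and reused, so the
        // per-call cost is usually lower.
        long t1 = System.nanoTime();
        for (int i = 0; i < 1_000; i++) result += sumOfSquares(10_000);
        long warm = System.nanoTime() - t1;

        System.out.println("cold batch: " + cold / 1_000 + " us");
        System.out.println("warm batch: " + warm / 1_000 + " us");
        System.out.println(result > 0); // keep the work observable
    }
}
```

Running with -XX:+PrintCompilation will also show the JVM logging when it compiles sumOfSquares, which makes the warm-up visible directly.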