Hi Peter,

Sorry for the delay; I have been traveling.

For all the statements I made, one of the thoughts behind them was that HPJ-compiled code does not run in a JVM; HPJ was implemented years ago, and the whole JVM runtime was replaced by PDS members that are load modules just like any other language's load modules. So with HPJ there was no JVM during execution.

Regarding your reply:
1. That's correct. However, as I stated, you lose all the standard features that come with a JVM, and you lose the capability to have a runtime (the JVM) that behaves, and can be tuned and parameterized, similarly to the one on other platforms.

You are right that HLL languages can use memory management too, but there must be some overhead in doing that or in switching it on; that is basically what I was referring to. The compiled code must implement functions etc. to do the memory management itself, because it no longer uses the JVM and its garbage collection mechanisms.
And as a prerequisite, the whole compilation approach would not make sense if you still ran in a JVM, because of the options that today's JVMs provide.
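
To illustrate what I mean by tuning and parameterizing: a standard JVM lets you steer heap sizes and garbage collection with well-known options. The class name below is made up, and the -Xgcpolicy value is IBM-specific and quoted from memory, so verify it against the diagnostics guide:

    java -Xms256m -Xmx1024m -verbose:gc com.example.MyApp
    java -Xgcpolicy:optthruput com.example.MyApp

None of these knobs exist for an HPJ-compiled load module; whatever memory management the compiled code implements is fixed when the module is built.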

2. I recall from the implementation of the High Performance Java compiler that the compiled application is one statically linked, huge load module (think of a 50MB classpath). In addition, if you have many Java classes with a main method, each executable as a starting point, then each of these Java main programs becomes its own statically linked, huge load module, which also has all classes from the classpath compiled into it (see the sketch below). So HPJ produced huge load modules and was very inflexible, not only in terms of reusing compiled code. If it were enhanced to support dynamic calls, you would need a 1:1 relationship between Java class and load module, and thus a lot of members in the load PDS.
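
A made-up example of what I mean (class names are invented; think of each class in its own source file on the same classpath):

    public class Shared {
        public static void run(String job) {
            System.out.println("running job " + job);
        }
    }

    public class BatchJobA {
        public static void main(String[] args) {
            Shared.run("A"); // with HPJ, Shared and everything else on the
        }                    // classpath is statically linked into this module
    }

    public class BatchJobB {
        public static void main(String[] args) {
            Shared.run("B"); // ...and a second full copy of all of it goes
        }                    // into BatchJobB's module
    }

In a JVM there is one Shared.class; with HPJ, each main program's load module carries its own copy of the whole classpath.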

3. Because you cannot really influence the behaviour of the JIT compiler, we only test with all optimizations switched on, and that works pretty well. Once in a while there are applications that do not work with the highest JVM optimization level, and there are switches to force a certain optimization level or to restrict optimization to a specific level (see the examples below).
Yes, we trust the compiler. However, if something goes wrong you still get an exception, traces, etc., which you do not get with the compiled version, because the JIT compiler is part of the whole JVM construct, and in the compiled version you no longer have the JVM.
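
The switches I mean are along these lines; the -Xjit syntax is the IBM JDK's as I remember it, so check the documentation before relying on it (the class name is made up):

    java -Xint com.example.MyApp                   (run interpreted, JIT off)
    java -Xjit:optlevel=noOpt com.example.MyApp    (force the lowest JIT level)
    java -Xjit:optlevel=hot com.example.MyApp      (force a high JIT level)
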
Basically, when there is tuning advice for WebSphere z/OS or for a distributed JDK, most of the recommendations also apply to the JVM on z/OS. So getting rid of the JVM in favour of just having compiled code would make the execution environment on z/OS incompatible with all other Java environments in terms of behaviour, problem determination, tuning and customization.

Nevertheless, I admit there would be some benefit if the JVM on z/OS could save JIT-compiled code into data sets or an HFS image once the compiled code has reached a higher optimization level. This could be extended to load such a JIT image during JVM startup, with the most-used classes already compiled. And when classes are used by multiple JVMs, a shared JIT cache, in addition to the shared class loader cache (see the example below), would likely have its benefits. But reading the documentation, these features are neither there nor planned.
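
For reference, the shared class cache I mean is the IBM JDK's -Xshareclasses option; the suboptions are from memory, so double-check them (the class name is made up):

    java -Xshareclasses:name=myCache com.example.MyApp
    java -Xshareclasses:name=myCache,printStats

A shared JIT cache could conceivably hang off the same mechanism, but as I said, I have not seen that documented or announced.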

4. I recall that Transmeta had a processor which loaded its millicode at startup, so basically a RISC processor with that capability could easily load the newest JRE version on every IPL.

5. Yes, it could, but it never did. By the time HPJ was available we had no zAAPs, and HPJ code never ran on a zAAP.

Hope that helps.
Denis.

-----Original Message-----
From: Hunkeler Peter (KIUP 4) <peter.hunke...@credit-suisse.com>
To: IBM-MAIN@bama.ua.edu
Sent: Wed, Aug 12, 2009 12:47 pm
Subject: Re: Java question

Denis,
Thanks for the excellent line of argument. I basically concur with you.

I'd like to reply to a few arguments, though:

1. Today's JVMs offer the option to have the byte code compiled on the fly when certain conditions are met. So, these JVMs already have the capability to run "machine code" instead of "byte code". This is the runtime environment you'd need to run Java code that has been compiled at the will of the programmer instead of at the will of the JVM. All you need is an option to tell the JVM where to find and/or how to recognize pre-compiled Java class files.

No need for a new runtime environment, and no need for application programmers to care any more about memory management than they need to with today's Java environment. You still instantiate a JVM and tell it which Java class file to run (something along the lines sketched below).
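
For instance, with an option name that is purely invented to illustrate the idea:

    java -Xprecompiled:/u/apps/compiled com.example.MyApp

The JVM would look in /u/apps/compiled for a pre-compiled version of each class before falling back to the byte code.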

BTW, programmers don't need to care about memory management in other HLL languages, do they? It's the HLL's runtime that manages this.

2. Why do you think that compiled Java class code needs to fill PDSs? The JVM does *not* create PDS members when the JIT compiles byte code into machine code on the fly, right? It stores the machine code in some JVM-maintained cache. Use the UNIX file system instead of the cache for the pre-compiled programs, and let the JVM search the UNIX file system in addition to the cache to find compiled versions of class files (roughly as in the sketch below).
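
In rough Java, the lookup order I have in mind; the directory, the ".image" suffix and all names are invented:

    import java.io.File;

    class PrecompiledLookup {
        // e.g. "com.example.MyApp" -> /u/apps/jitcache/com/example/MyApp.image
        static File findPrecompiled(File cacheDir, String className) {
            File candidate =
                new File(cacheDir, className.replace('.', '/') + ".image");
            return candidate.isFile() ? candidate : null; // null: interpret or
        }                                                 // JIT as today
    }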

3. Do you test your application in the byte code version as well as in the JIT-compiled machine code version today? If so, there isn't any more testing to do than you already do today. If not, why do you trust the JIT to produce machine code exactly equivalent to the byte code, but would not trust a compile done in advance to produce the same reliable code? I'd expect this to be the very same version of the compiler, namely the JIT compiler.

Regarding the Java releases coming out very frequently: the built-in JIT is already at the correct level for each Java release. As I mentioned before, using it as the standalone compiler eliminates the need for another compiler to be kept up to date.

4. Regarding a "pure Java processor": how would you implement the different versions and levels of today's JVMs in one physical processor? One "chip" per version and modification level? Being serious -- I think it is not currently practical; Java is still being developed and extended too much.

5. zAAPs. JIT-compiled Java code can still be offloaded to zAAPs, can't it? Why would this not apply to pre-compiled code being executed in the same runtime environment together with byte code and JIT machine code?

--
Peter Hunkeler
CREDIT SUISSE

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
