I believe one of the reasons is that you need a JVM as the runtime in order to 
be 100% "Java" certified. In addition, your runtime components would have to 
implement the standard Java configuration switches, such as -classpath/-cp, and 
memory settings such as -mx, in order to be standard. The runtime would also 
have to produce the standard traces: support will likely ask you for a verbose 
or verbose:gc trace or an hprof file if an application does not perform or has 
memory leaks. How would you produce those with compiled code?
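For illustration, these are the kinds of standard switches a certified runtime is expected to honor (a sketch only - the paths and the application class name are made up, and -Xrunhprof exists only on older JDKs):

```shell
# Classpath plus a maximum-heap setting (hypothetical jar and class):
java -cp /u/app/lib/app.jar -Xmx512m com.example.Main

# GC trace, the kind of output support typically asks for:
java -verbose:gc -cp /u/app/lib/app.jar com.example.Main

# Heap profile via the hprof agent (pre-Java 9 JDKs):
java -Xrunhprof:heap=sites -cp /u/app/lib/app.jar com.example.Main
```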
Java is known as a memory-managed runtime, which means that the application 
programmer does not have to care about memory; the runtime manages it (that is 
the reason for garbage collection in Java). So running a poorly programmed Java 
application as compiled code would give the runtime environment a hard time, 
since it would have to track all the getmains and issue the freemains - 
something the Java application was never designed to do (although good 
programming practice is to reuse objects as much as possible). So there are a 
couple of things that would add overhead to a "native Java code" runtime, 
making the difference from running the JVM rather small.
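The object-reuse practice mentioned above can be sketched like this (a hypothetical example - the class and method names are mine):

```java
// Sketch: reusing one StringBuilder instead of allocating a new one per
// iteration reduces the number of short-lived objects the garbage
// collector has to reclaim.
public class ReuseExample {

    // Allocation-heavy variant: a new StringBuilder on every call.
    static String formatFresh(int i) {
        return new StringBuilder().append("record-").append(i).toString();
    }

    // Reuse variant: the caller supplies a buffer that is cleared and refilled.
    static String formatReused(StringBuilder buf, int i) {
        buf.setLength(0); // reset the buffer without reallocating it
        return buf.append("record-").append(i).toString();
    }

    public static void main(String[] args) {
        StringBuilder buf = new StringBuilder();
        for (int i = 0; i < 3; i++) {
            // Both variants produce identical text; the reused one simply
            // creates far less garbage per iteration.
            System.out.println(formatReused(buf, i));
        }
    }
}
```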

I just tested the ILOG JRules ISV application (which lets you create business 
rules in a GUI and generate Java class implementations of those rules) and the 
JExcel API (an open-source project that lets you create Excel files from Java - 
we used a sample to create Excel files out of IMS databases) natively on z/OS, 
simply by putting the jar files into an HFS and adding them to the IMS Java 
region's classpath; for ILOG that is about 50 MB of .jar files. Even assuming 
the compile did not fail, it would create a huge PDS and a huge number of load 
modules, so I am not sure you would get much benefit from compiled code in the 
end. Basically you lose a lot of flexibility, and you would have to test an 
application twice - once in your development environment and once as compiled 
code. There is also cost associated with that, not only MLC based on MSUs.
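The deployment step described above amounts to nothing more than this (an illustrative sketch - the HFS paths and the driver class are made up):

```shell
# Make jars on an HFS/zFS path visible to the Java region by appending
# them to the classpath; no compile, bind, or load modules involved.
export CLASSPATH="$CLASSPATH:/u/imsjava/lib/ilog-rules.jar:/u/imsjava/lib/jxl.jar"
java -cp "$CLASSPATH" com.example.RulesDriver   # hypothetical main class
```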

Since JDK 1.5 the JVMs have a code cache, which holds the native code generated 
by the Just-in-Time (JIT) compiler. So if a JVM lives for 10 days, the benefit 
of compiled code would be that the first 10 calls would probably be faster 
(given that all the required load modules are preloaded and you don't need 
I/Os). Also starting with JDK 1.5 you have the shared class cache (and you can 
have multiple of them for different application workloads, e.g. IMS Java 
regions 1-4 serve App A and use one shared class cache while IMS Java regions 
5-10 serve App B and use a different one - similar to having different PROCLIB 
members with lists of modules to be preloaded), which saves I/Os once the 
classes are loaded. In addition, the newer JVMs are able to re-optimize the 
native code as needed: when a Java method is heavily used, the JVM will try 
harder to optimize the generated native code.
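The warm-up behavior described above needs no source changes at all - any frequently called method becomes a JIT candidate (a hypothetical sketch; the class name and call counts are mine):

```java
// Sketch: a method that becomes "hot" after many calls. A JIT-based JVM
// interprets sum() at first, compiles it to native code once its
// invocation count crosses the JIT threshold, and may later recompile
// it at a higher optimization level if it stays hot.
public class HotMethod {

    // Simple arithmetic kernel: sum of the integers 1..n.
    static long sum(int n) {
        long total = 0;
        for (int i = 1; i <= n; i++) {
            total += i;
        }
        return total;
    }

    public static void main(String[] args) {
        long result = 0;
        // Calling sum() many times makes it a JIT candidate; the same
        // bytecode ends up running as optimized native code.
        for (int call = 0; call < 100_000; call++) {
            result = sum(100);
        }
        System.out.println(result); // 5050
    }
}
```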



I have always wondered why the concept of a pure Java processor or 
co-processor (a processor that implements the Java byte code natively or in 
millicode) never arrived on the market, although everybody would expect it to 
be more powerful than executing JVMs on general-purpose processors. But a 
native Java processor or compiled code alone will not address all the 
associated stuff that comes with an application life cycle, not to forget the 
operational aspects of running an application. It seems more practical and 
cheaper to have loads of processor cores (look at Sun's latest SPARC 
processors) that can run a huge number of JVM threads in parallel, with one 
thread per processor core.
Another thing that comes to mind: think of the Java releases and versions 
coming out on a monthly basis - how manpower-consuming would it be to keep 
your compiler in sync with the current Java versions?

And last but not least, there is a good chance that the pure Java work can be 
offloaded to zAAPs.

So given some of the facts above, I cannot really understand what the benefit 
of compiling shall be. ;-)

Denis.


 

-----Original Message-----
From: Hunkeler Peter (KIUP 4) <[email protected]>
To: [email protected]
Sent: Thu, Aug 6, 2009 8:39 am
Subject: Re: Java question

>High Performance Java (HPJ) [snip]
>It was discontinued, for a number of reasons, when Java 1.2 was released.

Out of curiosity: What were those reasons? 

I for one cannot understand what the advantage of not compiling shall be, 
especially on a platform where everybody is screaming for "MSU reduction".

--
Peter Hunkeler
CREDIT SUISSE

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



 

