There have been plenty of war stories from big JVM users like Twitter, where long-running GC pauses have caused a node to be marked unresponsive and fail-over to be triggered. In distributed systems this can lead to serious problems, similar to network partitions. It's no secret that languages like Java, which rely on non-deterministic garbage collection, are unsuitable for certain types of applications. A case in point is systems where human safety is involved, like avionics or medical hardware. Would you want to be hooked up to a life-support machine where a GC cycle could disrupt the reliability of the system?

To be fair to IBM, they are one of the major innovators in JVM technology. We've had shared classes since Java 5, and with multi-tenant JVMs coming and other good stuff like packed objects on the horizon, we're fortunate to have a vendor putting serious R&D into a technology that is critical to the longevity of our platform.

Having said that, we've been backed into a corner where all we have on z/OS for modernizing our applications is Java. Other platforms, including zLinux, have more choice. zLinux now has Node.js, which can totally nuke a typical one-thread-per-connection Java web application on speed and memory. Java isn't a new technology, and while it may be a fine technology, it's not cutting edge compared to what's available on other platforms.

From a vendor perspective, if it weren't for zIIP offload I would much rather use C++. Modern C++ has deterministic memory management (destructors and RAII rather than a collector) and had features such as type inference, lambdas, and other goodies years before Java 8. The zIIP is what forces us to use Java. Application developers are different because they have a different set of constraints, such as available skills and familiarity.

On 7/08/2015 8:14 PM, Staller, Allan wrote:
Wouldn't the garbage collection cause page-in references as objects are 
collected and co-located?
Thus negatively affecting performance on page-sensitive (e.g. CICS) 
middleware/applications.

Seems the advice to avoid garbage collection is sound to me (from a performance 
perspective).


<snip>
I have seen the advice to avoid garbage collection in batch from IBMers before. 
I don't understand it, and I am curious to know where it is coming from. I 
doubt it is endorsed by the JVM developers. I suspect it might just be that 
suddenly we can measure memory management overhead, where it is more difficult 
in other languages.

Garbage collection is Java's way of returning unused memory for reuse.
You could reduce the memory management overhead of a batch C++ program by removing 
all delete statements and increasing the virtual storage available until it 
never ran out. You COULD, but no one would recommend it as good practice. 
Overallocating the heap to avoid garbage collection is basically the same thing.

Applications tend to evolve and grow over time. If you deliberately set up your 
application to avoid GC, you may be in for a rude shock when the application 
grows and one day GC is triggered.

There can also be performance advantages from GC. GC moves objects together in 
storage, making it much more likely that your application data will be in the 
processor caches. If GC keeps your data in processor cache it will perform much 
better than if it's scattered across a GB of storage.

</snip>

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
