On 23/10/2021 6:23 pm, David Crayford wrote:
When I ran the drag race on our full capacity enterprise class machine Java was always faster than C++ and the GCPs run at the same speed as the zIIPs. And Python also beat C++ and I couldn't get my head around the veracity of that result.

There's some discussion of the Python changes here:
https://github.com/PlummersSoftwareLLC/Primes/discussions/227

I don't know Python so it doesn't mean much to me.

GC on our z15 is performed using the pause-less GC hardware assist. I have never seen a "stop-the-world" pause longer than a couple of microseconds in the Grafana dashboard.

Speculating here, I think GC can be optimized for 3 different objectives:
1) Minimize total CPU
2) Minimize elapsed time
3) Minimize pause time

#1 is what you probably want for a batch job.
For #2, you run as much of the GC as possible on parallel threads, even if it costs more total CPU time. However, if you are competing with other work for CPU (common on z/OS), you may not get much gain.
#3 is probably like #2, but with specific emphasis on keeping individual pause times short.
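For what it's worth, on the OpenJ9-based IBM JVMs the -Xgcpolicy option maps roughly onto those objectives: optthruput is the throughput-oriented stop-the-world policy (roughly #1), gencon (the default) balances throughput and pauses (roughly #2), and optavgpause/metronome target shorter pauses (#3) - check your JVM's documentation for which policies it actually supports. A quick way to see which collectors a chosen policy installed is to list the GarbageCollectorMXBeans; the names printed are JVM- and policy-specific, so treat them as illustrative only:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class ShowCollectors {
    public static void main(String[] args) {
        // Collector names vary by JVM and policy: OpenJ9's gencon policy
        // typically exposes "scavenge" and "global", while a HotSpot JVM
        // reports its own collector names (e.g. "G1 Young Generation").
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName());
        }
    }
}
```

Run it under different -Xgcpolicy settings to confirm the policy actually took effect.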

My gut feeling is that as you run more GC work in parallel and reduce pause times, you probably pay a cost in total CPU usage. But I haven't found anything that actually discusses that trade-off; most discussion about GC relates to pause times.
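One cheap way to put rough numbers on it is the standard java.lang.management API, which exposes per-collector cycle counts and accumulated collection time. Note that getCollectionTime() is approximate elapsed time, not CPU time, so it's only a proxy for GC cost. A minimal sketch (the workload and sizes are arbitrary):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    // Sum collection counts and accumulated GC time (ms) across all collectors.
    // Either value can be -1 ("undefined") for a collector, so skip those.
    public static long[] snapshot() {
        long count = 0, timeMs = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            if (gc.getCollectionCount() > 0) count += gc.getCollectionCount();
            if (gc.getCollectionTime() > 0) timeMs += gc.getCollectionTime();
        }
        return new long[] { count, timeMs };
    }

    public static void main(String[] args) {
        long[] before = snapshot();
        for (int i = 0; i < 500_000; i++) {
            byte[] junk = new byte[1024]; // short-lived garbage to provoke GC
            junk[0] = (byte) i;
        }
        long[] after = snapshot();
        System.out.println("GC cycles: " + (after[0] - before[0])
                + ", GC time: " + (after[1] - before[1]) + " ms");
    }
}
```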

Personally, for the stuff I write, I don't think pause time is an issue; I want to minimize total CPU. Some time ago I did play around with different GC policies, but I found that the default (gencon) was good for my programs. Changing it only increased the CPU time, sometimes markedly, so I haven't done much experimentation since.
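If anyone wants to repeat that kind of experiment, comparing wall-clock time against the main thread's CPU time for the same workload under each GC policy gives a rough picture of how much work moved off the application thread. A sketch, with a made-up allocation-heavy workload:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class GcCpuVsWall {
    // Hypothetical workload: builds and discards short-lived arrays.
    static long churn(int iterations) {
        long sum = 0;
        for (int i = 0; i < iterations; i++) {
            int[] block = new int[256];
            block[i & 255] = i;
            sum += block[i & 255]; // accumulates 0..iterations-1
        }
        return sum;
    }

    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        if (!threads.isCurrentThreadCpuTimeSupported()) {
            System.out.println("Thread CPU timing not supported on this JVM");
            return;
        }
        long wall0 = System.nanoTime();
        long cpu0 = threads.getCurrentThreadCpuTime();
        churn(5_000_000);
        long wallMs = (System.nanoTime() - wall0) / 1_000_000;
        long cpuMs = (threads.getCurrentThreadCpuTime() - cpu0) / 1_000_000;
        // A large wall/CPU gap suggests time spent paused or contending
        // with GC threads; total machine CPU still needs an external view
        // (e.g. SMF on z/OS), since GC threads are not counted here.
        System.out.println("wall=" + wallMs + "ms cpu=" + cpuMs + "ms");
    }
}
```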

--
Andrew Rowley
Black Hill Software

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN