[email protected] (Ed Gould) writes:
> At one time (MVS) there was a product called QCM. Which did measure
> precisely the amount of CPU time that was used by the task and by MVS.
> Alas it is (AFAIK) no longer marketed.

re:
http://www.garlic.com/~lynn/2014b.html#78 CPU time
http://www.garlic.com/~lynn/2014b.html#80 CPU time

note that's accounting ... but on the performance side ... for the ecps
microcode assist, originally for the 138/148 ... we were told that the
machine had 6kbytes of available microcode space ... that 370 kernel
instructions mapped approx. 1:1 (in number of bytes) into microcode
instructions ... and that we were to find the highest-used 6kbytes of
the vm370 kernel.

two approaches were used ... one was modification of 370/145 microcode
to sample the psw address and increment a counter for the corresponding
address range (a counter for every 32 bytes). the other was a special
vm370 kernel built to generate a time-stamp at entry and exit of every
module (intra-module path time could be derived from the time between
calls to other modules). this is an old post with the results:
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist

which showed the highest 6kbytes of kernel code accounting for 79.55% of
kernel execution time.
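the sampling approach can be sketched in miniature (a hypothetical
python toy, nothing like the actual 145 microcode): bump a counter for
each sampled address's 32-byte bucket, then pick the hottest buckets up
to the available microcode-space budget.

```python
# hypothetical sketch of psw-address sampling profiling -- not the
# real microcode, just the bucket-counter idea
from collections import Counter

BUCKET = 32           # one counter per 32 bytes, as described above
TARGET = 6 * 1024     # 6kbytes of available microcode space

def bucket(addr):
    return addr // BUCKET * BUCKET

def hottest(samples, budget=TARGET):
    """pick the hottest 32-byte buckets up to 'budget' bytes of code;
    returns (chosen bucket addresses, fraction of samples covered)."""
    counts = Counter(bucket(a) for a in samples)
    total = sum(counts.values())
    chosen, covered = [], 0
    for addr, n in counts.most_common():
        if len(chosen) * BUCKET >= budget:
            break
        chosen.append(addr)
        covered += n
    return chosen, covered / total

# synthetic samples clustered in one 64-byte hot routine, plus a
# little activity elsewhere
samples = [0x1000 + (i % 64) for i in range(1000)] + [0x8000] * 50
addrs, frac = hottest(samples, budget=64)   # the two hottest buckets
```

with the tiny 64-byte budget in the example, the two hottest buckets
cover roughly 95% of the samples ... the same shape of result as the
79.55% figure above, just on toy data.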

note that in the early 70s, the science center 
http://www.garlic.com/~lynn/subtopic.html#545tech

had done something similar to the microcode monitor but with a software
full instruction simulator ... which tracked every instruction executed
and every data fetch and every data store. an application was then
written that took all the addresses and did semi-automated program
reorganization optimizing for running in virtual memory paged
environment ... and of course ... it could also be used for hot-spot
identification. A lot of the internal development groups began using it
as part of the transition to 370 virtual memory operation. It was also
released as a product in 1976 as VS/Repack.
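the reorganization idea can be sketched roughly (a hypothetical python
simplification ... the real VS/Repack worked from full address traces
and was far more sophisticated): count how often two routines are
referenced back-to-back, then chain the strongest affinities so
co-referenced routines land on the same pages.

```python
# hypothetical simplification of trace-driven program reorganization:
# routines touched close together in time get packed close together
from collections import Counter

def affinity(trace):
    """count back-to-back references between distinct routines."""
    pairs = Counter()
    for a, b in zip(trace, trace[1:]):
        if a != b:
            pairs[frozenset((a, b))] += 1
    return pairs

def reorder(trace):
    """greedy layout: place the strongest-affinity pairs first."""
    order = []
    for pair, _ in affinity(trace).most_common():
        for routine in sorted(pair):
            if routine not in order:
                order.append(routine)
    for routine in trace:          # routines with no recorded affinity
        if routine not in order:
            order.append(routine)
    return order

# "a" and "b" call each other constantly, "c" only occasionally
layout = reorder(["a", "b", "a", "b", "c", "a", "b"])
```

the greedy layout puts "a" and "b" adjacent, so in a paged environment
they would share a page instead of faulting back and forth.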

I did a special data collection for VS/Repack ... a vm370 kernel option
that would run an application in 10 real pages and record virtual page
faults. The granularity wasn't as good as full instruction simulation
... but it ran significantly faster and was nearly as good for the
purposes of program reorganization for paged environment.
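a toy version of that data collection (hypothetical python, with a
simple LRU stand-in for the real replacement policy): replay a
virtual-page reference string with only 10 real frames and log the
page faults ... much coarser than a full instruction trace, but cheap.

```python
# hypothetical sketch: run a reference string in N real frames (LRU
# assumed) and record the virtual page faults as they occur
from collections import OrderedDict

def fault_trace(refs, frames=10):
    resident = OrderedDict()   # insertion/touch order = LRU order
    faults = []
    for page in refs:
        if page in resident:
            resident.move_to_end(page)        # touched: most recent
        else:
            faults.append(page)               # record the fault
            if len(resident) >= frames:
                resident.popitem(last=False)  # evict least recent
            resident[page] = None
    return faults
```

the fault log gives the same kind of locality information the
instruction simulator produced, just at page rather than instruction
granularity.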

note that the science center ... besides doing virtual machines, the
internal network (also used for bitnet), and inventing GML (which morphs
into SGML and then HTML) ... also did extensive work on performance
monitoring, performance tuning, system modeling and workload profiling
... which then morphs into capacity planning.

One of the system models was an APL analytical model, made available on
HONE (the world-wide sales&marketing support system) as the
"performance predictor" ... customer SEs could provide customer system
and workload characteristics and then ask what-if questions about what
happens if the hardware and/or workload changed.
http://www.garlic.com/~lynn/subtopic.html#hone
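a what-if model in that spirit can be sketched with a single-server
queueing formula (a toy assumption on my part ... the real performance
predictor was far more elaborate): from measured service time and
utilization, estimate response time before and after a hardware change.

```python
# toy analytic what-if model -- M/M/1 mean response time assumed,
# which is a drastic simplification of the real APL model
def response_time(service, util):
    """mean response time for a single server: S / (1 - U)."""
    return service / (1.0 - util)

def what_if_faster(service, arrival_rate, speedup):
    """what happens to response time if the cpu gets 'speedup'x faster?"""
    new_service = service / speedup
    new_util = arrival_rate * new_service
    return response_time(new_service, new_util)

# baseline: 10ms service time at 50% utilization (50 arrivals/sec)
baseline = response_time(0.010, 0.5)
faster = what_if_faster(0.010, 50.0, 1.3)   # 30% faster cpu
```

the interesting property (which the real model captured across many
resources and workloads) is that response time improves more than
linearly with speedup, because utilization drops too.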

somebody in europe obtained the rights to a descendent of the
"performance predictor" in the early 90s (in the period when the company
had gone into the red and been reorganized into the 13 "baby blues" in
preparation for breaking up the company) and ran it through an APL to
C-language translator. I ran into him last decade while doing consulting
work at large mainframe financial datacenters (operations with 40+
maxed-out mainframes, billion dollars plus, machines constantly being
upgraded, none older than 18 months ... these operations account for a
major portion of annual mainframe revenue). I had found a 14%
improvement in an application that ran every night on 40+ maxed-out
(MVS) mainframes (the number of machines sized so the application
finishes in the overnight batch window).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN