Gilbert Saint-Flour wrote:
I keep a list here:

  http://gsf-soft.com/Documents/MVS-APPL-DEBUGGING.shtml


for some drift ... in the early 70s, the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

had done a lot of work on performance monitoring and measurement
technologies ... some of it later evolved into capacity planning.

there were sort of three kinds of technology

* monitoring & sampling
* simulation & modeling
* multiple regression analysis

all had their strengths and weaknesses, and there were various
situations where one of the technologies could identify an issue when
the other two couldn't.

there were both software and hardware monitors. when work was being
done to select what should go into the ecps microcode assist, the
kernel was instrumented with a software monitor, and the person at the
palo alto science center responsible for the apl microcode assist did
a microcode-based PSW sampler. old standby posting describing the
ecps microcode assist analysis
http://www.garlic.com/~lynn/94.html#21
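
the general idea behind a PSW sampler is simple: periodically record
the instruction address from the PSW and histogram the samples by the
routine containing each address; the hot spots fall out of the counts.
a minimal sketch in Python (the routine names and addresses here are
made up for illustration, not the actual kernel layout):

```python
import bisect
from collections import Counter

# Hypothetical kernel routine map: (start address, name), sorted by address.
ROUTINES = [(0x1000, "DISPATCH"), (0x2400, "PAGEFAULT"),
            (0x3800, "IOSUP"), (0x5000, "SVCINT")]

def routine_for(addr):
    """Map a sampled instruction address to the routine containing it."""
    starts = [s for s, _ in ROUTINES]
    i = bisect.bisect_right(starts, addr) - 1
    return ROUTINES[i][1] if i >= 0 else "UNKNOWN"

def profile(samples):
    """Histogram PSW instruction-address samples by routine,
    hottest routine first."""
    counts = Counter(routine_for(a) for a in samples)
    return counts.most_common()
```

given enough samples, the fraction of samples landing in a routine
approximates the fraction of time spent there, which is exactly the
kind of data used to decide what was worth moving into microcode.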

for other drift ... recent post mentioning the apl microcode assist
http://www.garlic.com/~lynn/2006o.html#13

there were two or three different simulation and modeling projects at
the science center. there was an event-driven system model written in
PLI ... used, among other things, for modeling paging behavior ... and
an analytical system model written in apl.

we used a variation on the apl system model in the automated benchmarking for validating the resource manager before release.
http://www.garlic.com/~lynn/subtopic.html#fairshare

basically, in excess of two thousand benchmarks were run, taking three
months elapsed time. initially there were something like 1000
different benchmarks defined, covering a wide range of configuration,
workload, and system parameters. the modified apl system model was
then fed all the results and allowed to select the set of conditions
for the next benchmark. those results were fed back into the apl
system model, which again selected the conditions for the next set of
benchmarks. this was repeated for another 1000 or so benchmarks
http://www.garlic.com/~lynn/subtopic.html#bench
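
the post doesn't describe how the model actually chose the next
conditions, but the feedback loop itself is easy to sketch. one crude
stand-in for "let the model pick where it knows least" is to select
the untested configuration farthest from everything already tried in
the parameter space; the parameter grid below (memory, users,
working-set size) is purely hypothetical:

```python
import itertools

def next_benchmark(tested, candidates):
    """Pick the untested configuration farthest from all tested ones --
    a crude stand-in for letting the model choose where it knows least."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    untested = [c for c in candidates if c not in tested]
    return max(untested, key=lambda c: min(dist(c, t) for t in tested))

# hypothetical candidate grid: (memory MB, active users, working-set size)
grid = list(itertools.product([4, 8, 16], [10, 40, 80], [1, 2, 4]))
tested = [grid[0]]
for _ in range(5):
    tested.append(next_benchmark(tested, grid))
```

each iteration runs the selected benchmark, adds the result to the
pool, and asks again ... which is the shape of the loop described
above, whatever selection criterion the real apl model used.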

the apl system model was also adapted to the HONE system (the
world-wide vm-based system that supported all field, sales, and
marketing people; by the mid-70s, salesmen couldn't even place a
mainframe order w/o having first run it thru one of the HONE
configurators)
http://www.garlic.com/~lynn/subtopic.html#hone

where it was called the performance predictor; it allowed marketing
people to input customer configuration and workload information and
ask "what-if" questions (what happens if the amount of memory is
doubled, or the workload changes, etc).
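
the heart of such an analytical "what-if" model is a queueing
approximation: plug in an arrival rate and a service demand, get back
a predicted response time, then perturb the inputs. a minimal sketch
using the open M/M/1 formula (the transaction rates and service times
below are invented, and the real performance predictor was certainly
far more elaborate):

```python
def response_time(arrival_rate, service_time):
    """Open M/M/1 approximation: R = S / (1 - U), with U = lambda * S.
    Returns infinity when the server would be saturated (U >= 1)."""
    u = arrival_rate * service_time
    if u >= 1.0:
        return float("inf")
    return service_time / (1.0 - u)

# hypothetical baseline: 8 transactions/sec at 100ms of service each
base = response_time(8, 0.1)
# "what-if": a processor twice as fast halves the service time
faster = response_time(8, 0.05)
```

the useful property is that the what-if answer comes out in seconds of
response time, which is the currency a salesman or customer actually
cares about.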

another instruction analysis tool was "REDCAP", which had been
developed in POK for doing workload instruction traces ... for
studying detailed workload instruction execution characteristics as an
aid in processor design. the science center adapted REDCAP for
analyzing application execution in virtual memory environments. this
was used to analyze the port of apl\360 to cms\apl (and the execution
characteristics of the change in memory allocation and garbage
collection from small 16k-32k byte real memory workspaces to very
large virtual memory workspaces). It was also used by a number of
application development groups to study their application in the
transition from real storage to virtual memory operation (applications
like IMS). It was also released as a product called VS/Repack.
VS/Repack would also perform cluster analysis of program operation and
attempt semi-automated program reorganization for improved execution
in a virtual memory environment.
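
the reorganization idea is: modules referenced close together in time
should be placed close together in the address space, so they share
pages and shrink the working set. a toy sketch of that kind of
packing, greedily chaining modules by how often they follow each other
in a reference trace (the module names and trace are invented; the
real VS/Repack analysis was considerably more sophisticated):

```python
from collections import Counter

def reorder_modules(trace):
    """Greedy packing: place modules that follow each other in the
    reference trace next to each other, so they tend to share pages."""
    follows = Counter(zip(trace, trace[1:]))  # (a, b): times b follows a
    order = [Counter(trace).most_common(1)[0][0]]
    remaining = set(trace) - set(order)
    while remaining:
        last = order[-1]
        nxt = max(remaining,
                  key=lambda m: follows[(last, m)] + follows[(m, last)])
        order.append(nxt)
        remaining.remove(nxt)
    return order

# hypothetical reference trace: A and C alternate, as do B and D
trace = ["A", "C", "A", "C", "B", "D", "B", "D", "A", "C"]
```

on this trace the packing keeps the A/C pair and the B/D pair
adjacent, which is the page-sharing effect the real tool was after.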

some recent references to old vs/repack
http://www.garlic.com/~lynn/2006i.html#37 virtual memory
http://www.garlic.com/~lynn/2006j.html#18 virtual memory
http://www.garlic.com/~lynn/2006j.html#22 virtual memory
http://www.garlic.com/~lynn/2006j.html#24 virtual memory
http://www.garlic.com/~lynn/2006l.html#11 virtual memory

A few years ago, I ran into a (european) consultant who was doing
work using a descendant of the performance predictor (with nearly 20
years of enhancements). During the corporate troubles in 1992, a
vendor had acquired the rights to the software and had run it through
an APL-to-C language converter ... and then subsequently made
additional enhancements.

He was doing some consulting at a large datacenter that had an
enormous application running across a large number of mainframes ($$
value in the 8-9 digit range). Even a few percentage points of
performance improvement in the application translated into a large
number of hardware dollars. The application had been studied
extensively using standard mainframe monitoring tools and heavily
optimized. The consultant, using his enhanced performance modeling
tool, had identified additional areas that resulted in another ten
percent optimization savings.

Remembering the science center experience from the early 70s (nearly
35 years earlier), I wondered if multiple regression analysis could
identify opportunities; in fact it turned up something accounting for
over 20% of total usage. The issue is that things like instruction
sampling and event modeling tend to turn up things at the micro level
... while multiple regression analysis can frequently highlight more
macro-level issues. The identified feature was a complex, spaghetti
combination of low-level stuff ... which it turned out could be
optimized (at the macro level), resulting in 14% total system savings.
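
the post doesn't give the actual regression setup, but the macro-level
technique amounts to regressing interval-level resource usage against
workload counters, so a big hidden consumer shows up as a large
coefficient. a minimal ordinary-least-squares sketch in pure Python
(the counter values and coefficients below are fabricated for
illustration):

```python
def fit_least_squares(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y,
    solved by Gaussian elimination with partial pivoting.
    X: list of observation rows, y: list of targets."""
    m, n = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(m)) for j in range(n)]
         for i in range(n)]
    b = [sum(X[r][i] * y[r] for r in range(m)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in range(n - 1, -1, -1):
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, n))) / A[i][i]
    return coef

# hypothetical per-interval counts of two transaction types ...
X = [[10, 1], [20, 2], [15, 4], [30, 3], [25, 5]]
# ... where type 2 is the hidden heavy consumer (9 units per transaction)
y = [2 * a + 9 * b for a, b in X]
coef = fit_least_squares(X, y)
```

the per-transaction cost falls out as the fitted coefficients, so the
macro-level hog is visible even when no single instruction sample or
event trace points at it.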

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
