The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Ted MacNEIL) writes:
> My degree is a major in computer science with a minor in statistics.
> My first job was as a capacity analyst.
> My degree was 100% applicable.

a lot of capacity planning came out of a lot of performance work at the
science center 
http://www.garlic.com/~lynn/subtopic.html#545tech

... including modeling and workload profiling and the fundamentals for
capacity planning.

science center had done the port of apl\360 to cms\apl ... and rather
than the toy 16-32kbyte workspaces ... cms\apl workspaces could be as
large as the virtual memory/machine size.

cms\apl became the basis for much of the sales/marketing support
applications on the world-wide hone system (from sometime in the early
70s, branch offices couldn't even submit mainframe orders that hadn't
first been processed by a hone application)
http://www.garlic.com/~lynn/subtopic.html#hone

one of the applications deployed on HONE was the "performance predictor"
(an analytical system model from the science center, implemented in apl)
... allowing branch people to characterize the customer's configuration
and workload and then ask "what-if" questions regarding changes to
configuration and/or workload.

the production science center online system (first cp67 and then vm370)
was heavily instrumented and eventually had 7x24 activity data spanning
nearly two decades, and it established a similar standard for other
internal systems.

besides the system analytical modeling (including the work that resulted
in the performance predictor application) there were also a number of
event-driven model implementations.

there were also a number of execution sampling implementations ... one of
which resulted in the VS/Repack product in the mid-70s ... it would take
a trace of instruction addresses & storage references and do
semi-automated program reorganization for optimal paging operation.
before release as a product, it was used extensively internally by
several products making the transition from a real-storage environment to
a virtual-storage environment (for instance, IMS made extensive use of
the application).
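the underlying notion can be sketched in miniature (this is not the
actual VS/Repack algorithm ... just the basic idea of counting storage
references and packing the hottest routines together so the working set
spans fewer pages; all routine names and sizes are invented):

```python
# Toy sketch of reference-trace-driven program reorganization for paging.
# Count references per routine from a trace, then reorder the load module
# hottest-first so frequently referenced code lands on fewer pages.
from collections import Counter

PAGE_SIZE = 4096

def repack(trace, sizes):
    """trace: routine names in reference order; sizes: routine -> bytes.
    Returns routines in a new load order, most referenced first."""
    counts = Counter(trace)
    return sorted(sizes, key=lambda r: (-counts[r], r))

def pages_touched(order, sizes, hot):
    """Number of pages spanned by the 'hot' routines in a load order."""
    offset, spans = 0, set()
    for r in order:
        if r in hot:
            first = offset // PAGE_SIZE
            last = (offset + sizes[r] - 1) // PAGE_SIZE
            spans.update(range(first, last + 1))
        offset += sizes[r]
    return len(spans)

sizes = {"init": 3000, "parse": 1500, "eval": 1500, "report": 3000}
trace = ["parse", "eval"] * 500 + ["init", "report"]   # parse/eval dominate

before = pages_touched(list(sizes), sizes, {"parse", "eval"})
after = pages_touched(repack(trace, sizes), sizes, {"parse", "eval"})
print(before, after)   # hot routines span fewer pages after repacking
```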

another trace/sampling implementation was also used to determine what
functions went into VM ECPS (i.e. the 6k bytes of vm370 kernel
instructions that represented approx. 70 percent of kernel pathlength
execution were moved to microcode).
http://www.garlic.com/~lynn/94.html#21
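the selection problem can be sketched as a greedy pick of the hottest
kernel routines until the coverage target and the microcode budget meet
(illustrative only ... the routine names, sizes, and counts below are
invented, not the actual ECPS measurements):

```python
# Rough sketch of hot-path selection for microcode assist: given per-routine
# pathlength measurements, choose routines hottest-first until ~70 percent
# of execution is covered, subject to a 6KB microcode budget.
def pick_for_microcode(routines, budget_bytes=6144, target_fraction=0.70):
    """routines: list of (name, size_bytes, cycles). Returns (names, frac)."""
    total = sum(cycles for _, _, cycles in routines)
    chosen, used, covered = [], 0, 0
    for name, size, cycles in sorted(routines, key=lambda r: -r[2]):
        if used + size > budget_bytes:
            continue                  # doesn't fit in remaining microcode
        chosen.append(name)
        used += size
        covered += cycles
        if covered / total >= target_fraction:
            break
    return chosen, covered / total

# illustrative numbers only
kernel = [("dispatch", 2000, 500), ("page_fault", 1500, 300),
          ("io_start", 1500, 150), ("spool", 3000, 40), ("misc", 4000, 10)]
names, frac = pick_for_microcode(kernel)
print(names, f"{frac:.0%}")
```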

another performance optimization methodology used at the science center
was multiple regression analysis ... combined with the detailed 7x24
system monitoring, it made it possible to determine where the system was
spending a large percentage of its time.
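the regression approach amounts to fitting measured cpu time against
workload activity counts; the fitted coefficients estimate the cpu cost
per transaction type, showing where the time is going. a hedged sketch
with synthetic data (the workload names and per-transaction costs are
invented for illustration):

```python
# Sketch of multiple regression on system monitoring data: regress total
# CPU seconds per interval against per-workload transaction counts.
import numpy as np

rng = np.random.default_rng(0)
n = 200
orders = rng.integers(100, 500, n)        # transactions per interval
queries = rng.integers(1000, 5000, n)
batch = rng.integers(0, 20, n)

# synthetic "measured" cpu: 20ms/order, 2ms/query, 900ms/batch step + noise
cpu = 0.020 * orders + 0.002 * queries + 0.9 * batch + rng.normal(0, 0.5, n)

# least-squares fit; coefficients recover the per-transaction cpu costs
X = np.column_stack([orders, queries, batch, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, cpu, rcond=None)
for name, c in zip(["order", "query", "batch", "const"], coef):
    print(f"{name:6s} {c:8.4f} cpu-sec each")
```

in practice the interesting part is the residual: workload components
with fitted costs far larger than expected are where to go look.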

i've mentioned before using this methodology on a large 450k line cobol
application that had been heavily studied and optimized over a period of
a couple of decades (using products like strobe). it ran on 40+
max-configured mainframe CECs ($1.2b-$1.5b aggregate). i used multiple
regression analysis to identify another 14 percent performance
improvement (that hadn't been turned up by the other methodologies).
misc. recent references:
http://www.garlic.com/~lynn/2006o.html#23 Strobe equivalents
http://www.garlic.com/~lynn/2006s.html#24 Curiousity: CPU % for COBOL program
http://www.garlic.com/~lynn/2006u.html#50 Where can you get a Minor in Mainframe?
http://www.garlic.com/~lynn/2007l.html#20 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007u.html#21 Distributed Computing

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html