The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[email protected] (McKown, John) writes:
> What! You don't fondly remember the joys of running a Stage 1 / Stage
> 2 sysgen? How you could be productive for HOURS by just sitting and
> monitoring their execution? Or doing an EDT gen by "throwing away"
> jobs and steps from the Stage 2? HCD makes that "so easy a caveman can
> do it" (sm). That was when "men were men" and grrls weren't allowed
> into the sanctified areas of DP. <GRIN> (please don't kill me, ladies!
> It was a joke, honest.)

undergraduate in the 60s ... i worked out being able to do sysgen in the
production jobstream ... it required some stand-alone fiddling and some
other stuff. I took the output of the stage1 sysgen and reworked the steps
into individual jobs (and other stuff). I also re-arranged the steps and
frequently moved/copied statements within steps ... in order to optimally
place datasets & members within PDSes ... for optimized arm motion. for
typical student academic workload ... I was able to increase thruput by
a factor of three (in large part because of reduced arm
motion). the problem was that typical system maintenance over a period of six
months or so ... "replacing" (critical) PDS members (and messing up the
careful ordering) ... would erode the thruput improvement to less than two
times ... sometimes eventually motivating a careful rebuild.
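a toy model of why the careful placement mattered (my illustration, not the actual sysgen tooling; the dataset names and cylinder numbers are hypothetical): sum the cylinder-to-cylinder arm travel for the same reference string against a scattered placement vs. one where the hot members sit on adjacent cylinders.

```python
# Toy model: total disk-arm travel (in cylinders) servicing a sequence
# of dataset/member references, for two placements of the same data.
def total_seek(placement, refs, start=0):
    """Sum of arm movement servicing refs in order, from cylinder start."""
    pos, travel = start, 0
    for name in refs:
        cyl = placement[name]
        travel += abs(cyl - pos)
        pos = cyl
    return travel

# hypothetical hot datasets for a student-job workload
hot = ["SYS1.SVCLIB", "SYS1.LINKLIB", "FORTLIB", "SYS1.PROCLIB"]
refs = hot * 25  # 100 accesses cycling through the hot set

scattered = {"SYS1.SVCLIB": 5, "SYS1.LINKLIB": 180,
             "FORTLIB": 90, "SYS1.PROCLIB": 350}
clustered = {"SYS1.SVCLIB": 100, "SYS1.LINKLIB": 101,
             "FORTLIB": 102, "SYS1.PROCLIB": 103}

far = total_seek(scattered, refs)
near = total_seek(clustered, refs)
print(far, near)  # clustered placement travels far fewer cylinders
```

and once maintenance replaces a member somewhere else on the pack, the clustered dictionary quietly turns back into the scattered one.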

the background was that univ. student workload had run tape-to-tape on a 709
under the ibsys monitor. moving to a 360/65 (actually a 360/67, but running
non-relocate) under os/360 ... went from subsecond per student job to
over a minute per student job (unit record & multi-step job scheduler).
installing hasp got it down to something over 30 seconds (effectively
still multi-step job scheduling ... extremely disk-i/o intensive).
eventually, when 360 watfor became available ... the situation was
significantly improved.

part of old presentation at '68 boston share:
http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

(virtual machine) cp/67 had been installed at the univ. jan68. I got to
play with it on weekends ... but the univ. continued to run "bare
machine" os/360 production during the week. between jan68 and the boston
share meeting ... i was able to rewrite large portions of cp/67 during
my weekend play periods (although that was also when I had to do some
amount of support & maint for the production os/360 system).

the above presentation mentions the thruput improvement of os/360 under
cp/67 (mostly because of the cp/67 pathlength reductions that were part of
my cp/67 rewrites) ... but also mentions some of the stuff I was doing for
os/360. other stuff done for cp/67 was things like ordered arm seek and
lots of algorithm work (it made little difference for os/360 batch
processing ... but contributed to handling multiple cms workload).
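a minimal sketch of the "ordered arm seek" idea (my reconstruction, not the actual cp/67 code; the queued cylinder numbers are made up): instead of servicing pending disk requests in FIFO arrival order, sweep the arm in one direction picking up every request on the way, then come back for the rest ... elevator/SCAN ordering.

```python
# Compare arm travel for FIFO service order vs. elevator/SCAN order.
def fifo_travel(start, requests):
    """Total arm travel (cylinders) servicing requests in given order."""
    pos, travel = start, 0
    for cyl in requests:
        travel += abs(cyl - pos)
        pos = cyl
    return travel

def scan_order(start, requests):
    """Elevator order: everything at/above the arm sweeping up,
    then the remaining requests on the way back down."""
    ahead = sorted(c for c in requests if c >= start)
    behind = sorted((c for c in requests if c < start), reverse=True)
    return ahead + behind

pending = [90, 10, 160, 40, 120]  # hypothetical queued cylinder numbers
arm = 50                          # current arm position
print(fifo_travel(arm, pending))                   # 470 cylinders
print(fifo_travel(arm, scan_order(arm, pending)))  # 260 cylinders
```

with lots of concurrent cms users generating independent disk requests, the queue stays deep enough for the reordering to pay off ... which is why it helped interactive cms workload but made little difference for single-stream os/360 batch.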

misc. other posts in this thread:
http://www.garlic.com/~lynn/2009q.html#67 Now is time for banks to replace core system according to Accenture
http://www.garlic.com/~lynn/2009q.html#68 Now is time for banks to replace core system according to Accenture
http://www.garlic.com/~lynn/2009q.html#69 Now is time for banks to replace core system according to Accenture
http://www.garlic.com/~lynn/2009q.html#70 Now is time for banks to replace core system according to Accenture
http://www.garlic.com/~lynn/2009q.html#72 Now is time for banks to replace core system according to Accenture

-- 
40+yrs virtualization experience (since Jan68), online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html