Re: Computer History Museum

2009-01-07 Thread Scott Ford
Dr. Merrill I bow to the power...very cool...

BTW, I am 2nd generation IT ... been around computers, or EDP as it was once
called, since the '50s ... Mom and Dad were both in the industry
 
Scott J Ford
 





From: Barry Merrill ba...@mxg.com
To: IBM-MAIN@bama.ua.edu
Sent: Thursday, December 25, 2008 12:53:24 PM
Subject: Re: Computer History Museum

In 1966 at Purdue's Laboratory for Agricultural Remote Processing, LARS, later 
renamed to the Laboratory for Applied Remote Sensing,
where K.S. Fu's pattern recognition algorithms were implemented and validated 
(using 16 channels of spectral data with 6 bits for
the amplitude of each channel, gathered on a DEC PDP- in a DC-3 that flew at 
2000 feet over a 5 mile long strip of Indiana fields),
we took delivery of S/360-44 serial number two (number one was in Pook).  I had 
implemented the Cooley-Tukey Fast Fourier Transform
in Fortran directly from their original paper on the University's 7090, and had 
moved it and my programming of the Karhunen-Loeve
transform using polynomial approximation for my Masters Thesis to the new Model 
44, using an AM radio to listen to the program
execution to detect when programming errors sent it into a never-ending loop, 
and to hear when it went from slow to fast loops.
Diagnosis of those never-ending loops involved using address stops and single 
step instruction toggles from the console.

Twice, my program abruptly stopped, redlighting the console, and IBM engineers 
came on site, both times finding that I had actually
burned out the transistors in the floating point divide unit.  After the second 
failure, they returned with a floating point divide
unit that had been redesigned with a new heat sink added to fix the problem.

Originally, there was no disk with the Model 44, running TOS as I recall, with 
five tape drives, and the Fortran compiler was only
on tape - it took all five drives to compile and punch out the deck which was 
then read in and executed, and all five tapes were
spinning during every compile.

After writing my own program for the FFT, the Lab director and my major 
professor, Dave Landgrebe, gave me a note from the computer
center that there was a new FFT subroutine that had been written by Tukey 
himself available from something called the SHARE library.
Upon examination, I discovered I was at
best only a coder and Tukey was a real programmer; whereas my major loop was 
250 or so Fortran statements, I marvelled at the
correct way to do it, in about 25 statements, and thus was introduced to the 
SHARE program library.

My part-time job at LARS was to create the Ground Truth data base, well 
before the actual flight.  On the day of the flight, the
agronomists were going to photo and measure and record the plant statistics 
from each field and then populate my database for
correlation with the spectral data, but as there was no actual field data yet, 
I created several sample fields of opium poppies and
cannabis to show the agronomists what the reports would eventually look like, 
and they were all humored by my samples.  However,
one Saturday afternoon the campus police showed up at my apartment and told me 
I was urgently needed at LARS and took me there; it
seems that the U.S. State Department had been discussing with their Turkish 
counterparts the future possibility of using the LARS
programs to detect those crops, but had assured the Turks that the project was 
still in its infancy, and had not been trained on
any of those crops.  Unfortunately, one of the agronomists had shown the Turks 
one of my sample reports, so they assumed they were
being lied to by the State Dept rep, who dragged me out to the site to meet 
with the Turks; finally, having accepted that I was the author and just a 
college student programmer, they realized it was just a joke.

The Ground Truth was written in FORTRAN II, which did not have character 
literals; to print CORN, I had to set the variable CORN
equal to 
the decimal value that, when written with A4 (as I recall) format, would print 
CORN, etc., for each text value.  Just as I finished,
Fortran IV became available.

Merrilly Christmas

Herbert W. Barry Merrill, PhD
President-Programmer
Merrill Consultants
MXG Software
10717 Cromwell Drive
Dallas, TX 75220
214 351 1966 tel
214 350 3694 fax
www.mxg.com
ba...@mxg.com


P.S. I first programmed a digital computer, an IBM 610 in September, 1959 as a 
Sophomore at the University of Notre Dame; who of you
all will I have to outlive to claim to have been programming digital computers 
longer than anyone else?? 


Re: Computer History Museum

2009-01-03 Thread Scott Ford
I modded DMKCCW in the VM/SP 1, 2, 3 and up days...
VM was always great, especially when you had source code.
After the OCO days, well, we won't go there...
 
Scott J Ford
 





From: Anne & Lynn Wheeler l...@garlic.com
To: IBM-MAIN@bama.ua.edu
Sent: Thursday, December 25, 2008 4:16:01 PM
Subject: Re: Computer History Museum

The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


rfocht...@ync.net (Rick Fochtman) writes:
 I don't remember all the mods we made at NCSS, but one change that made 
 a BIG difference on the simplex and duplex 360/67's was this: in the CP 
 kernel, ALL SVC instructions were modified to a BAL to a specific 
 address in the first 4K of storage, where a vector table rerouted the 
 call to a specific CP subroutine. All those interrupts and PSW swaps 
 took FOREVER on the 360/67, whereas a BAL to low storage SEEMED to fly 
almost instantaneously. The change also seemed to be beneficial when we 
switched to 370/168 platforms as well. The CMS kernel used an HVC (in 
 actual fact, a DIAGNOSE) to request services from the CP kernel, 
 including I/O services. We also modified MVT to run in a virtual machine 
 using DIAGNOSE, rather than SIO/TIO/HIO, for I/O services. Made MVT run 
 MUCH FASTER in the virtual machine and freed us from all the related 
 emulation of these I/O instructions. One thing I miss: Grant wrote a 
 program, called IMAGE, that created a complete image of the CP kernel, 
 which would load in record time when bringing up the system. I wish I 
 had a copy of that program now, because of its rather unique processing 
 of the RLD data from the object code. I've never quite understood how 
 RLD data is processed by either the linkage editor or the loader. :-(

re:
http://www.garlic.com/~lynn/2008s.html#51 Computer History Museum
http://www.garlic.com/~lynn/2008s.html#52 Computer History Museum
http://www.garlic.com/~lynn/2008s.html#54 Computer History Museum

as an undergraduate ... before joining the science center ... I first
looked at the standard SVC linkage routine (for all kernel calls) and
cut the pathlength by about 75%. I then looked at the most frequently
called subroutines ... and changed them to BALRs ... leaving the
remaining as SVC ... since it no longer represented a significant
portion of CP overhead ... i.e. while SVC/LPSW was expensive relative
to BALR ... the actual time spent in the original SVC
linkage/return was much, much larger than the SVC/LPSW instruction
... most of the benefit came from reducing the logic. The next step was the
BALRs ... these not only replaced the SVC/LPSW instructions but also
eliminated the rest of the linkage/return logic for high-use
routines. When that was done, the remaining SVC/LPSW (and associated
linkage/return overhead) was a trivial percentage of overall time spent
in the kernel.

Remaining big overhead wasn't so much the SIO instruction ... but the
channel program simulation overhead done in CCWTRANS. CMS turned out
to do very stylized disk channel programs. I created a fastpath channel
program emulation operation for CMS disk I/O (that was also synchronous
... avoiding all the virtual machine gorp for entering wait state,
asynchronous interrupts, etc). This got severely criticized by the people
at the science center (mostly bob adair) because it violated the 360
principles of operation. However, it did significantly reduce cp67
kernel overhead for operating CMS virtual machines. This was then redone
using DIAGNOSE instruction ... since the 360 principles of operation
defines the DIAGNOSE instruction operation as model-dependent. The
facade was that there was a 360 virtual machine model which
had its own definition for DIAGNOSE instruction operation.

Standard CP67 saved a core image of the loaded kernel to disk (routine
SAVECP) and used a very fast loader sequence that brought that image
back into memory on IPL and then transferred to the CP67 startup routine
CPINIT. One of the people at the science center modified CP67 kernel
failure processing to write an image dump to a disk area and then reload
the saved kernel image from scratch ... basically automagic
failure/restart ... this is mentioned in one of the referenced stories
at MULTICS websites ... one of the people who supported CP67 system at
MIT (and later worked on MULTICS) had modified TTY/ASCII terminal line
processing that would cause the system to crash ... and one day CP67
crashed and automagically (fast) restarted 27 times in a single day
(which helped instigate some MULTICS rewrite because it was taking an hour
elapsed time to restart).

The cp67 kernel was undergoing a significant amount of evolution with new
functions being added. On a 768k real storage machine ... every little bit
hurt. So I did a little sleight of hand and created a virtual address
space that mapped the cp67 kernel image ... and then flagged the
standard portion as fixed

Re: Computer History Museum

2008-12-26 Thread Rick Fochtman
I could sure use a copy of that BPS Loader source code, if anyone has it 
and is willing to share..


TIA
---

Anne & Lynn Wheeler wrote:


The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


rfocht...@ync.net (Rick Fochtman) writes:
 

I don't remember all the mods we made at NCSS, but one change that made 
a BIG difference on the simplex and duplex 360/67's was this: in the CP 
kernel, ALL SVC instructions were modified to a BAL to a specific 
address in the first 4K of storage, where a vector table rerouted the 
call to a specific CP subroutine. All those interrupts and PSW swaps 
took FOREVER on the 360/67, whereas a BAL to low storage SEEMED to fly 
almost instantaneously. The change also seemed to be beneficial when we 
switched to 370/168 platforms as well. The CMS kernel used an HVC (in 
actual fact, a DIAGNOSE) to request services from the CP kernel, 
including I/O services. We also modified MVT to run in a virtual machine 
using DIAGNOSE, rather than SIO/TIO/HIO, for I/O services. Made MVT run 
MUCH FASTER in the virtual machine and freed us from all the related 
emulation of these I/O instructions. One thing I miss: Grant wrote a 
program, called IMAGE, that created a complete image of the CP kernel, 
which would load in record time when bringing up the system. I wish I 
had a copy of that program now, because of its rather unique processing 
of the RLD data from the object code. I've never quite understood how 
RLD data is processed by either the linkage editor or the loader. :-(
   



re:
http://www.garlic.com/~lynn/2008s.html#51 Computer History Museum
http://www.garlic.com/~lynn/2008s.html#52 Computer History Museum
http://www.garlic.com/~lynn/2008s.html#54 Computer History Museum

as an undergraduate ... before joining the science center ... I first
looked at the standard SVC linkage routine (for all kernel calls) and
cut the pathlength by about 75%. I then looked at the most frequently
called subroutines ... and changed them to BALRs ... leaving the
remaining as SVC ... since it no longer represented a significant
portion of CP overhead ... i.e. while SVC/LPSW was expensive relative
to BALR ... the actual time spent in the original SVC
linkage/return was much, much larger than the SVC/LPSW instruction
... most of the benefit came from reducing the logic. The next step was the
BALRs ... these not only replaced the SVC/LPSW instructions but also
eliminated the rest of the linkage/return logic for high-use
routines. When that was done, the remaining SVC/LPSW (and associated
linkage/return overhead) was a trivial percentage of overall time spent
in the kernel.

Remaining big overhead wasn't so much the SIO instruction ... but the
channel program simulation overhead done in CCWTRANS. CMS turned out
to do very stylized disk channel programs. I created a fastpath channel
program emulation operation for CMS disk I/O (that was also synchronous
... avoiding all the virtual machine gorp for entering wait state,
asynchronous interrupts, etc). This got severely criticized by the people
at the science center (mostly bob adair) because it violated the 360
principles of operation. However, it did significantly reduce cp67
kernel overhead for operating CMS virtual machines. This was then redone
using DIAGNOSE instruction ... since the 360 principles of operation
defines the DIAGNOSE instruction operation as model-dependent. The
facade was that there was a 360 virtual machine model which
had its own definition for DIAGNOSE instruction operation.

Standard CP67 saved a core image of the loaded kernel to disk (routine
SAVECP) and used a very fast loader sequence that brought that image
back into memory on IPL and then transferred to the CP67 startup routine
CPINIT. One of the people at the science center modified CP67 kernel
failure processing to write an image dump to a disk area and then reload
the saved kernel image from scratch ... basically automagic
failure/restart ... this is mentioned in one of the referenced stories
at MULTICS websites ... one of the people who supported CP67 system at
MIT (and later worked on MULTICS) had modified TTY/ASCII terminal line
processing that would cause the system to crash ... and one day CP67
crashed and automagically (fast) restarted 27 times in a single day
(which helped instigate some MULTICS rewrite because it was taking an hour
elapsed time to restart).

The cp67 kernel was undergoing a significant amount of evolution with new
functions being added. On a 768k real storage machine ... every little bit
hurt. So I did a little sleight of hand and created a virtual address
space that mapped the cp67 kernel image ... and then flagged the
standard portion as fixed ... but created an infrastructure that allowed
other portions to be paged in & out. This required enhancing the SVC
linkage infrastructure to recognize portions

Re: Computer History Museum

2008-12-26 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

rfocht...@ync.net (Rick Fochtman) writes:
 I could sure use a copy of that BPS Loader source code, if anyone has
 it and is willing to share..

re:
http://www.garlic.com/~lynn/2008s.html#51 Computer History Museum
http://www.garlic.com/~lynn/2008s.html#52 Computer History Museum
http://www.garlic.com/~lynn/2008s.html#54 Computer History Museum
http://www.garlic.com/~lynn/2008s.html#56 Computer History Museum


the vm370 loader sources (both dmkld00e and dmsldr) have their genesis in
similar origins.

here is reference to hercules vm370 release 6 (that has source):
http://osdir.com/ml/emulators.hercules390.vm/2004-06/msg00018.html

the mentioned cp67 pageable kernel modifications weren't picked up and
shipped as part of the cp67 product ... but something similar did appear with
the vm370 product.

the hercules reference has (CMS) DMSLDR (in release 6) with a limit of 255
externals per TEXT file ... while the original BPS loader had a limitation
of 255 external symbols total.

reference for vm/370 R6 (including base source):
http://www.cbttape.org/vm6.htm

the above has a reference to an aws file (which is over 4mbytes compressed,
over 32mbytes uncompressed)
http://www.cbttape.org/ftp/vm6/base-source.aws.bz2

reference to awstape utility:
http://www.cbttape.org/awstape.htm

-- 
40+yrs virtualization experience (since Jan68), online at home since Mar70



Re: Computer History Museum

2008-12-26 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


rfocht...@ync.net (Rick Fochtman) writes:
 I could sure use a copy of that BPS Loader source code, if anyone has
 it and is willing to share..

re:
http://www.garlic.com/~lynn/2008s.html#64 Computer History Museum

double checking, the vm370 R6 base-source.aws file contains both (CMS)
DMSLDR ASSEMBLE and (CP) DMKLD00E ASSEMBLE files. DMKLD00E ASSEMBLE
would be the closest to the original BPS loader source.

-- 
40+yrs virtualization experience (since Jan68), online at home since Mar70



Re: Computer History Museum

2008-12-26 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


re:
http://www.garlic.com/~lynn/2008s.html#51 Computer History Museum
http://www.garlic.com/~lynn/2008s.html#52 Computer History Museum
http://www.garlic.com/~lynn/2008s.html#54 Computer History Museum
http://www.garlic.com/~lynn/2008s.html#56 Computer History Museum
http://www.garlic.com/~lynn/2008s.html#64 Computer History Museum
http://www.garlic.com/~lynn/2008s.html#65 Computer History Museum

for additional topic drift, mention of NCSS (& nomad) in an old baybunch
announcement:
http://www.garlic.com/~lynn/2007c.html#email830711b

in this post
http://www.garlic.com/~lynn/2007c.html#12

the post includes a number of old email ... including discussion of
highly modified internal vm370 distribution
http://www.garlic.com/~lynn/2007c.html#email830705
and
http://www.garlic.com/~lynn/2007c.html#email830709
and
http://www.garlic.com/~lynn/2007c.html#email830711

-- 
40+yrs virtualization experience (since Jan68), online at home since Mar70



Re: Computer History Museum

2008-12-25 Thread Barry Merrill
In 1966 at Purdue's Laboratory for Agricultural Remote Processing, LARS, later 
renamed to the Laboratory for Applied Remote Sensing,
where K.S. Fu's pattern recognition algorithms were implemented and validated 
(using 16 channels of spectral data with 6 bits for
the amplitude of each channel, gathered on a DEC PDP- in a DC-3 that flew at 
2000 feet over a 5 mile long strip of Indiana fields),
we took delivery of S/360-44 serial number two (number one was in Pook).  I had 
implemented the Cooley-Tukey Fast Fourier Transform
in Fortran directly from their original paper on the University's 7090, and had 
moved it and my programming of the Karhunen-Loeve
transform using polynomial approximation for my Masters Thesis to the new Model 
44, using an AM radio to listen to the program
execution to detect when programming errors sent it into a never-ending loop, 
and to hear when it went from slow to fast loops.
Diagnosis of those never-ending loops involved using address stops and single 
step instruction toggles from the console.

Twice, my program abruptly stopped, redlighting the console, and IBM engineers 
came on site, both times finding that I had actually
burned out the transistors in the floating point divide unit.  After the second 
failure, they returned with a floating point divide
unit that had been redesigned with a new heat sink added to fix the problem.

Originally, there was no disk with the Model 44, running TOS as I recall, with 
five tape drives, and the Fortran compiler was only
on tape - it took all five drives to compile and punch out the deck which was 
then read in and executed, and all five tapes were
spinning during every compile.

After writing my own program for the FFT, the Lab director and my major 
professor, Dave Landgrebe, gave me a note from the computer
center that there was a new FFT subroutine that had been written by Tukey 
himself available from something called the SHARE library.
Upon examination, I discovered I was at
best only a coder and Tukey was a real programmer; whereas my major loop was 
250 or so Fortran statements, I marvelled at the
correct way to do it, in about 25 statements, and thus was introduced to the 
SHARE program library.
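
For a sense of how compact a correct radix-2 Cooley-Tukey implementation can
be, here is a minimal recursive sketch in modern Python (illustrative only;
not the SHARE routine and not period Fortran, and the 8-point test input is
made up):

    import cmath

    def fft(x):
        # radix-2 Cooley-Tukey; len(x) must be a power of two
        n = len(x)
        if n == 1:
            return list(x)
        even = fft(x[0::2])
        odd = fft(x[1::2])
        out = [0j] * n
        for k in range(n // 2):
            t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
            out[k] = even[k] + t
            out[k + n // 2] = even[k] - t
        return out

    print(fft([complex(i) for i in range(8)]))   # FFT of an 8-point ramp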

My part-time job at LARS was to create the Ground Truth data base, well 
before the actual flight.  On the day of the flight, the
agronomists were going to photo and measure and record the plant statistics 
from each field and then populate my database for
correlation with the spectral data, but as there was no actual field data yet, 
I created several sample fields of opium poppies and
cannabis to show the agronomists what the reports would eventually look like, 
and they were all humored by my samples.  However,
one Saturday afternoon the campus police showed up at my apartment and told me 
I was urgently needed at LARS and took me there; it
seems that the U.S. State Department had been discussing with their Turkish 
counterparts the future possibility of using the LARS
programs to detect those crops, but had assured the Turks that the project was 
still in its infancy, and had not been trained on
any of those crops.  Unfortunately, one of the agronomists had shown the Turks 
one of my sample reports, so they assumed they were
being lied to by the State Dept rep, who dragged me out to the site to meet 
with the Turks; finally, having accepted that I was the author and just a 
college student programmer, they realized it was just a joke.

The Ground Truth was written in FORTRAN II, which did not have character 
literals; to print CORN, I had to set the variable CORN
equal to 
the decimal value that, when written with A4 (as I recall) format, would print 
CORN, etc., for each text value.  Just as I finished,
Fortran IV became available.
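
The same trick, sketched in modern Python for illustration (assuming EBCDIC
character codes on the S/360; the exact codes and the A4 edit width are the
only details that matter to the idea):

    # Pack the text into one word as an integer; an A-format edit of that word
    # just reinterprets the same four bytes as characters.
    word = int.from_bytes("CORN".encode("cp037"), "big")   # cp037 = EBCDIC
    print(word)                                            # the decimal value to assign
    print(word.to_bytes(4, "big").decode("cp037"))         # an A4-style edit prints CORN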

Merrilly Christmas

Herbert W. Barry Merrill, PhD
President-Programmer
Merrill Consultants
MXG Software
10717 Cromwell Drive
Dallas, TX 75220
214 351 1966 tel
214 350 3694 fax
www.mxg.com
ba...@mxg.com
 

P.S. I first programmed a digital computer, an IBM 610 in September, 1959 as a 
Sophomore at the University of Notre Dame; who of you
all will I have to outlive to claim to have been programming digital computers 
longer than anyone else?? 



Re: Computer History Museum

2008-12-25 Thread Rick Fochtman
I don't remember all the mods we made at NCSS, but one change that made 
a BIG difference on the simplex and duplex 360/67's was this: in the CP 
kernel, ALL SVC instructions were modified to a BAL to a specific 
address in the first 4K of storage, where a vector table rerouted the 
call to a specific CP subroutine. All those interrupts and PSW swaps 
took FOREVER on the 360/67, whereas a BAL to low storage SEEMED to fly 
almost instantaneously. The change also seemed to be beneficial when we 
switched to 370/168 platforms as well. The CMS kernel used an HVC (in 
actual fact, a DIAGNOSE) to request services from the CP kernel, 
including I/O services. We also modified MVT to run in a virtual machine 
using DIAGNOSE, rather than SIO/TIO/HIO, for I/O services. Made MVT run 
MUCH FASTER in the virtual machine and freed us from all the related 
emulation of these I/O instructions. One thing I miss: Grant wrote a 
program, called IMAGE, that created a complete image of the CP kernel, 
which would load in record time when bringing up the system. I wish I 
had a copy of that program now, because of its rather unique processing 
of the RLD data from the object code. I've never quite understood how 
RLD data is processed by either the linkage editor or the loader. :-(
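
Roughly, an RLD entry names the location of an address constant within the
loaded TXT data and the ESD item it refers to, and the loader adds that item's
relocation factor to the stored value. A toy sketch of that one step in Python
(field widths, flags and sign handling in real object decks are richer than
this):

    def relocate(core, adcon_offsets, reloc_factor, adcon_len=4):
        # add the load-point delta to each address constant named by the RLD
        for off in adcon_offsets:
            old = int.from_bytes(core[off:off + adcon_len], "big")
            new = (old + reloc_factor) & 0xFFFFFFFF
            core[off:off + adcon_len] = new.to_bytes(adcon_len, "big")

    core = bytearray(16)                              # a 16-byte "csect" assembled at 0
    core[8:12] = (0x0000000C).to_bytes(4, "big")      # adcon pointing at offset 12
    relocate(core, [8], 0x8000)                       # loaded at 0x8000
    print(hex(int.from_bytes(core[8:12], "big")))     # prints 0x800c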


-snip
Anne & Lynn Wheeler wrote:


The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


rfocht...@ync.net (Rick Fochtman) writes:
 


We called them Gap Records at NCSS and they worked very well for
paging on 2305 devices. We'd use the first exposure for the first
page, the second exposure for the third page and so on. They were
connected to a 370/168 via 2860 Selector Channels. Grant Tegtmeier
designed the code and computed the sizes of the Gap Records. He also
installed 3330 and 2305 support code, of his own design, in our
modified CP67/CMS system. In a long weekend! He was noted for long
periods of seeming boredom, punctuated by spurts of sheer genius.
   



re:
http://www.garlic.com/~lynn/2008s.html#51 Computer History Museum
http://www.garlic.com/~lynn/2008s.html#52 Computer History Museum

there were 3 people that came out from the science center to the
univ. to install cp67 the last week in jan68. One of these people left
the science center june68 to be part of ncss. He was supposed to teach a
cp67 class the following week (after he gave notice) for customers
... and the science center had to really scramble to find people to fill
in for him.

the initial cp67 code had fifo single operation processing for 2311,
2314s, and 2301 (drums). It would get about 80 page transfers/sec on
2301. I redid the 2301 to do chained processing, which increased peak
2301 page transfers to 300/sec. The 2301 didn't have multiple request
exposure. i also redid the 2311 & 2314 code to implement ordered seek
operation (for all queued requests) ... both cp requests and cms
requests ... as well as chained requests for page operations. On heavily
loaded CMS systems, the ordered seek queueing made a big difference
... both graceful degradation as load increased ... as well as peak
throughput.

i also redid a whole bunch of the kernel pathlengths. This old
post:
http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

contains part of presentation that I made at the '68 SHARE meeting in
Boston.

I had been doing heavy optimization of OS MFT system generations ...
carefully reordering all the STAGE2 output (from STAGE1) so that the
result would optimally place os/360 system files and PDS members on disk
(in order to minimize avg. arm seek distance). For the univ. student
work load, I would get a factor of about three times thruput
improvement. This would degrade over time as PTF maintenance was applied
... affecting high use system components. I would then have to
periodically rebuild the system in order to restore the carefully ordered
placement of files and PDS members.

I also got to do some work rewriting the cp67 kernel ... besides redoing
the i/o stuff ... i also reworked a lot of the pathlengths ... in some
cases getting factor of 100 times improvement for some of the stuff.

As mentioned in the presentation, the original unmodified cp67 kernel
had 534 cpu seconds overhead for running MFT14 workload that took 322
seconds elapsed time. In the period between Jan68 and Aug68, I was able
to get that cp67 kernel virtual machine overhead down from 534 cpu
seconds to 113 cpu seconds (by rewriting several parts of the cp67
kernel).

I normally had classes during the week ... so much of my maintenance and
support work for OS/360 MFT and work on cp67 occurred on weekends.  The
univ. typically shut down the datacenter from 8am Sat. until 8am Monday
... during which time I could have the whole place for my personal use.
Monday classes were sometimes a problem after having been up for 48hrs
straight.

I had also done a dynamic adaptive resource

Re: Computer History Museum

2008-12-25 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


rfocht...@ync.net (Rick Fochtman) writes:
 I don't remember all the mods we made at NCSS, but one change that made 
 a BIG difference on the simplex and duplex 360/67's was this: in the CP 
 kernel, ALL SVC instructions were modified to a BAL to a specific 
 address in the first 4K of storage, where a vector table rerouted the 
 call to a specific CP subroutine. All those interrupts and PSW swaps 
 took FOREVER on the 360/67, whereas a BAL to low storage SEEMED to fly 
almost instantaneously. The change also seemed to be beneficial when we 
switched to 370/168 platforms as well. The CMS kernel used an HVC (in 
 actual fact, a DIAGNOSE) to request services from the CP kernel, 
 including I/O services. We also modified MVT to run in a virtual machine 
 using DIAGNOSE, rather than SIO/TIO/HIO, for I/O services. Made MVT run 
 MUCH FASTER in the virtual machine and freed us from all the related 
 emulation of these I/O instructions. One thing I miss: Grant wrote a 
 program, called IMAGE, that created a complete image of the CP kernel, 
 which would load in record time when bringing up the system. I wish I 
 had a copy of that program now, because of its rather unique processing 
 of the RLD data from the object code. I've never quite understood how 
 RLD data is processed by either the linkage editor or the loader. :-(

re:
http://www.garlic.com/~lynn/2008s.html#51 Computer History Museum
http://www.garlic.com/~lynn/2008s.html#52 Computer History Museum
http://www.garlic.com/~lynn/2008s.html#54 Computer History Museum

as an undergraduate ... before joining the science center ... I first
looked at the standard SVC linkage routine (for all kernel calls) and
cut the pathlength by about 75%. I then looked at the most frequently
called subroutines ... and changed them to BALRs ... leaving the
remaining as SVC ... since it no longer represented a significant
portion of CP overhead ... i.e. while SVC/LPSW was expensive relative
to BALR ... the actual time spent in the original SVC
linkage/return was much, much larger than the SVC/LPSW instruction
... most of the benefit came from reducing the logic. The next step was the
BALRs ... these not only replaced the SVC/LPSW instructions but also
eliminated the rest of the linkage/return logic for high-use
routines. When that was done, the remaining SVC/LPSW (and associated
linkage/return overhead) was a trivial percentage of overall time spent
in the kernel.
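
A toy model of the trade-off (ordinary Python, not 360 code; the simulated
register save and vector table are stand-ins for the real SVC linkage cost,
and the PSW-swap/interrupt overhead isn't modelled at all):

    import time

    REGS = list(range(16))                    # stand-in for the register set

    def service(arg):
        return arg + 1

    VECTOR = {0: service}

    def svc_style(arg, code=0):
        saved = REGS[:]                       # save state, vector to the routine,
        result = VECTOR[code](arg)            # then restore state afterwards
        REGS[:] = saved
        return result

    def balr_style(arg):
        return service(arg)                   # branch-and-link straight to it

    def clock(fn, n=200_000):
        t0 = time.perf_counter()
        for i in range(n):
            fn(i)
        return time.perf_counter() - t0

    print("svc-style :", clock(svc_style))
    print("balr-style:", clock(balr_style))

Converting just the highest-frequency calls to the cheap form captures most of
the available win, which is why the remaining SVCs stopped mattering.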

Remaining big overhead wasn't so much the SIO instruction ... but the
channel program simulation overhead done in CCWTRANS. CMS turned out
to do very stylized disk channel programs. I created a fastpath channel
program emulation operation for CMS disk I/O (that was also synchronous
... avoiding all the virtual machine gorp for entering wait state,
asynchronous interrupts, etc). This got severely criticized by the people
at the science center (mostly bob adair) because it violated the 360
principles of operation. However, it did significantly reduce cp67
kernel overhead for operating CMS virtual machines. This was then redone
using DIAGNOSE instruction ... since the 360 principles of operation
defines the DIAGNOSE instruction operation as model-dependent. The
facade was that there was a 360 virtual machine model which
had its own definition for DIAGNOSE instruction operation.

Standard CP67 saved a core image of the loaded kernel to disk (routine
SAVECP) and used a very fast loader sequence that brought that image
back into memory on IPL and then transferred to the CP67 startup routine
CPINIT. One of the people at the science center modified CP67 kernel
failure processing to write an image dump to a disk area and then reload
the saved kernel image from scratch ... basically automagic
failure/restart ... this is mentioned in one of the referenced stories
at MULTICS websites ... one of the people who supported CP67 system at
MIT (and later worked on MULTICS) had modified TTY/ASCII terminal line
processing that would cause the system to crash ... and one day CP67
crashed and automagically (fast) restarted 27 times in a single day
(which helped instigate some MULTICS rewrite because it was taking an hour
elapsed time to restart).

The cp67 kernel was undergoing a significant amount of evolution with new
functions being added. On a 768k real storage machine ... every little bit
hurt. So I did a little sleight of hand and created a virtual address
space that mapped the cp67 kernel image ... and then flagged the
standard portion as fixed ... but created an infrastructure that allowed
other portions to be paged in & out. This required enhancing the SVC
linkage infrastructure to recognize portions of the kernel that could be
pageable (and do a page fetch operation before doing the linkage).
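
A minimal sketch of that kind of linkage in Python (the names and the
fixed/pageable split are invented for illustration; the real mechanism worked
on the saved kernel image and the SVC linkage code itself):

    pageable = {"ACCT": False, "SPOOL": False}    # pageable portions, not yet resident

    def page_in(name):
        print("page fetch ->", name)
        pageable[name] = True

    def kernel_call(portion, routine, *args):
        # fixed portions are called directly; pageable ones are fetched first
        if portion in pageable and not pageable[portion]:
            page_in(portion)
        return routine(*args)

    kernel_call("ACCT", lambda rec: rec, "charge record")   # faults the first time
    kernel_call("ACCT", lambda rec: rec, "charge record")   # already resident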

The standard CP67 kernel was built up of card decks which had the BPS
loader slapped on the front

Re: Computer History Museum

2008-12-24 Thread Kelman, Tom
That is interesting.  I would like to visit the museum some day.  Here's
descriptions from a couple of the pictures.

IBM's System/360 of the mid 1960s came in five different speed and size
ranges, starting at 4K of memory and eight 16-bit registers. The
architecture dominated business markets and computer science for three
decades.

Can you imagine that we once worked with computers with only 4K of
memory?  Oh, and the successors of these are still very important in
the business world.  Bill Gates just won't admit it.

The PDP-8 from DEC was the first mass-produced minicomputer. By 1973 it
was the best-selling computer in the world, and over 25 years, DEC
produced more than a dozen variations of the PDP-8 architecture.

When I was in college I worked with a professor who was studying brain
waves.  He had placed probes from a PDP-8 into the brains of mice (I
know - poor little mice), and I did the programming to produce analysis
reports.


Tom Kelman


 Posted by Kopischke, David G.
 Sent: Tuesday, December 23, 2008 11:40 AM
 
 Here's an interesting one from Intelligent Enterprise today
 
 - Computer History Museum Tour in Pictures
 

http://www.intelligententerprise.com/channels/information_management/sho
 wArticle.jhtml?articleID=212501470&cid=nl_ie_week
 
 By Doug Henschen
 Our favorite event venue of 2008? Hands down it was the Computer
History
 Museum in Mountain View, Calif. Take the tour in pictures.
 
 

http://www.intelligententerprise.com/galleries/showImage.jhtml?galleryID
 =23&imageID=1&articleID=212501470
 






Re: Computer History Museum

2008-12-24 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


thomas.kel...@commercebank.com (Kelman, Tom) writes:
 That is interesting.  I would like to visit the museum some day.  Here's
 descriptions from a couple of the pictures.

 IBM's System/360 of the mid 1960s came in five different speed and size
 ranges, starting at 4K of memory and eight 16-bit registers. The
 architecture dominated business markets and computer science for three
 decades.

 Can you imagine that we once worked with computers with only 4K of
 memory?  Oh, and the successors of these are still very important in
 the business world.  Bill Gates just won't admit it.

 The PDP-8 from DEC was the first mass-produced minicomputer. By 1973 it
 was the best-selling computer in the world, and over 25 years, DEC
 produced more than a dozen variations of the PDP-8 architecture.

 When I was in college I worked with a professor who was studying brain
 waves.  He had placed probes from a PDP-8 into the brains of mice (I
 know - poor little mice), and I did the programming to produce analysis
 reports.

univ. got a 64kbyte, 360/30 as part of planned transition from 709/1401
setup to tss/360 running on 360/67. i got an undergrad student job
programming 360/30 in assembler. the univ. was accustomed to shutting
down the datacenter from 8am sat until 8am mon ... i would then have the
whole place to myself for 48hrs. in that sense, my first personal
computer was that 64kbyte 360/30 (upgraded to 768k 360/67).

there were lots of problems with tss/360 ... so when the univ. got
360/67 (and discontinued the 709), it mostly ran os/360 starting with
pcp. my undergrad responsibilities expanded to supporting os/360
... including system generations starting with release 9.5.

along the way, the univ played with (virtual machine) cp67 ... and I got
opportunity to rewrite large portions of the code.

this is a post from yesterday referencing adding tty/ascii terminal
support to cp67:
http://www.garlic.com/2008s.html#48 New machine code

part of that exercise was trying to get the 2702 terminal controller to
do something ... it turned out it couldn't do. this was at least part of
the motivation for the univ. to build a clone replacement ...  using an
interdata/3, reverse engineering the channel interface ... and building
a channel interface board for the interdata/3, programming the
interdata/3 to emulate 2702. this was picked up and sold by interdata as
standard product ... and later when perkin/elmer bought interdata ...
sold under the perkin/elmer logo. the implementation went thru a number
of evolutions ... an early upgrade was a cluster ... with an interdata/4
handling the channel interface, and multiple interdata/3 processors
dedicated to linescanner interfaces. this got written up blaming us for
(at least parts of) the clone controller business ... misc. past posts
referencing 360 plug-compatible controller market:
http://www.garlic.com/~lynn/subtopic.html#360pcm

-- 
40+yrs virtualization experience (since Jan68), online at home since Mar70



Re: Computer History Museum

2008-12-24 Thread Chris Mason
Tom

It doesn't inspire much confidence in the curatorship of the museum that they 
picked on the register size of the 360 Model 20 as an indication of the 
capability of the range. You don't start at a register size for a range of 
computing machines. Assuming that the range will all be capable of running 
the same software, you had jolly well better have the same register size for 
all 
machines in the range.

What this ridiculous comment hides is that the 360 Model 20 was enough of 
a variation on the common features of the remainder of the range to be 
treated quite separately. It had its own operating system which was not 
intended for use on other machines in the range and vice versa.

Indeed, in a relatively short period of time, the 360 Model 20 transmogrified 
into the range of machines which we know today as the iSeries - follow the 
rather special programming language, Report Program Generator (RPG), with 
which the grist was created for the Model 20 mill and which was supposed to 
help codify the plugboard wiring of the unit record machines the Model 20 was 
designed to replace.

I once did some programming on the Model 20 - excluding the Model 20 
hands-on class where I wrote RPG programs - obviously! The exercise was to 
examine binary synchronous communications (BSC) logic and, incidentally, how 
to program a multiple wait on the Model 20 hardware while bypassing the 
operating system. Thus card reading, packing the data and sending were 
logically separate tasks and receiving data, unpacking the data and printing 
were also separate tasks. I must have got quite familiar with those 16-bit 
registers at the time!

I happen to know rather precisely when I performed this exercise because of 
an important event which happened as I was desk-checking the BSC logic. It 
was March 1971. I'm pretty sure that the department which supported the 
Model 20 were really much more interested in supporting the System/3 so that 
date indicates approximately when the transmogrification occurred.

Incidentally, the partner programs were written in BATS (over BTAM) and ran 
on a real 360 - or maybe 370 by then.

Another incidentally: the Model 25, introduced rather later than the original 
models, was also a real 360.

If the author of the text to accompany the 360 machines in the museum 
insisted on mentioning the register size - somewhat pointless really, unless 
perhaps to compare with some other machine range - he/she would be obliged to 
mention that a 32-bit register size applied to the whole range - excluding the 
Model 20 with its special characteristics such as having 16-bit registers.[1]

As for memory size, I suppose the Model 20 may have managed with a size of 
4K and still managed to support the operating system. As for real 360 
machines, the Model 30 probably had the option to be limited to 4K but the 
smallest operating system I ever worked with itself required 4K. Thus a 
sensible minimum storage size is likely to have started at 8K.

Nevertheless, I have the very faintest memory - of a manager suggesting we 
use them! - of some I/O routines being available from IBM which presumably 
would allow the practical use of a 4K machine. In order to support a 1287 on 
an 8K Model 30 without disks, it was either work with these I/O routines or 
adapt a 4K card-based operating system (the 1231 Support Package - or 
some such name) to handle the special requirements of the 1287 optical 
character (hand-written numbers) reader. I chose the latter; eventually it 
worked and a salesman's rear-end was saved!

Chris Mason

[1] Actually there may be other exceptions. The 360 Model 44 for example 
was also a special machine somehow specifically designed to support science 
and engineering (Fortran) as I recall.

On Wed, 24 Dec 2008 06:48:58 -0600, Kelman, Tom 
thomas.kel...@commercebank.com wrote:

That is interesting.  I would like to visit the museum some day.  Here's
descriptions from a couple of the pictures.

IBM's System/360 of the mid 1960s came in five different speed and size
ranges, starting at 4K of memory and eight 16-bit registers. The
architecture dominated business markets and computer science for three
decades.

Can you imagine that we once worked with computers with only 4K of
memory?  Oh, and the successors of these are still very important in
the business world.  Bill Gates just won't admit it.

The PDP-8 from DEC was the first mass-produced minicomputer. By 1973 it
was the best-selling computer in the world, and over 25 years, DEC
produced more than a dozen variations of the PDP-8 architecture.

When I was in college I worked with a professor who was studying brain
waves.  He had placed probes from a PDP-8 into the brains of mice (I
know - poor little mice), and I did the programming to produce analysis
reports.


Tom Kelman


 Posted by Kopischke, David G.
 Sent: Tuesday, December 23, 2008 11:40 AM

 Here's an interesting one from Intelligent Enterprise today

 - Computer History Museum

Re: Computer History Museum

2008-12-24 Thread Gerhard Postpischil

Chris Mason wrote:
[1] Actually there may be other exceptions. The 360 Model 44 for example 
was also a special machine somehow specifically designed to support science 
and engineering (Fortran) as I recall.


The USGS had a 44 without main memory. They used it with custom 
micro code to process real-time seismic data, and the regular 
storage access just wasn't fast enough.



Gerhard Postpischil
Bradford, VT



Re: Computer History Museum

2008-12-24 Thread Rick Fochtman

snip-
[1] Actually there may be other exceptions. The 360 Model 44 for example 
was also a special machine somehow specifically designed to support 
science and engineering (Fortran) as I recall.

-unsnip---
The 360/44 was my first 360 system and brings back many fond memories. 
Our first operating system was something called 44/PS and ran from a 
single-platter disk drive in the side of the CPU, which we called The 
Frisbee. We shortly migrated to OS/360 PCP (Sysgened as a Model 50), 
followed by a storage upgrade from 128K to 256K and a shift to MFT at 
about release 12.6. The 360/44 Commercial Feature was implemented by a 
software emulator in Bump Core. A special program interrupt for 
invalid OP-CODE sent execution into the emulator, which would return 
execution to real storage via a special form of the LPSW instruction. 
High-speed general registers and variable-length floating point 
registers made FORTRAN programs run fairly fast (for that day). SS 
instructions, on the other hand, seemed to take FOREVER, due to the 
emulation involved. The same was true of LM, STM, BXH and BXLE, which 
weren't emulated but weren't installed except as part of the Commercial 
Feature.


Channels were simple: three high-speed byte-multiplex channels were all 
you could get, but they handled our 2314's, 2400 tapes (on 2803 
controllers) and 2821 Reader/Punch/Printer controller fairly well. We 
even had a pair of 2321 Data Cell Drives for archiving and a 2860 
Controller with some 2260 terminals, for the times we brought up RAX 
(There's a blast from the past!) If I remember correctly, our 2321's 
were on a 2803 controller. We even added a 2501 reader, in a niche in 
the hall outside the computer room, so students could submit their jobs 
directly to HASP. But we didn't let them near the 1403 N1 printer or the 
2540 Reader/Punch. :-)


--
Rick
--
Remember that if you’re not the lead dog, the view never changes.



Re: Computer History Museum

2008-12-24 Thread Rick Fochtman

-snip-
The USGS had a 44 without main memory. They used it with custom micro 
code to process real-time seismic data, and the regular storage access 
just wasn't fast enough.

---unsnip--
And if you opened the covers of the old 2880 Block Multiplexor Channel 
box, you found a downsized version of the 360/44 front panel. Fancy 
that! :-)


--
Rick
--
Remember that if you’re not the lead dog, the view never changes.



Re: Computer History Museum

2008-12-24 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main as well.


rfocht...@ync.net (Rick Fochtman) writes:
 And if you opened the covers of the old 2880 Block Multiplexor Channel
 box, you found a downsized version of the 360/44 front panel. Fancy
 that! :-)

the next phase was the 303x channel director ... which was 370/158
engine with just the integrated channel microcode (and w/o the 370
instruction microcode). a 3031 was a 370/158 engine with just the 370
instruction microcode (and w/o the integrated channel microcode),
configured to operate with channel director. 3032 was 370/168 configured
to run with channel director (instead of 28x0 channel boxes). 3033
started out as 370/168 wiring diagram mapped to 20% faster chips. the
3033 chips also had ten times the number of circuits (as chips used in
370/168) but started out with the additional circuits not being
used. during the development cycle, some critical sections were
redesigned to make better use of the additional on-chip circuits ... and
the 3033 eventually came out 50% faster than the 168 (leveraging more highly
integrated on-chip operation).

there used to be some technique of laying out data records on 3330
cylinders with dummy spacer records that would allow for channel
program processing latency to do a head switch operation (on the same
cylinder) between the end of a data record (on one track) and the start
of the (next) data record (on another track) ... without a rotational
miss.  On several 370s (145, 148, & 168), the channel processing was fast
enough to execute the head-switch in the time it took a 3330 disk to
rotate the dummy spacer record amount.

The problem was that 158 channels had higher latency and would only make
the head-switch (w/o a miss & additional revolution) 20-30% of the time
(the rest of the time, the head-switch would miss picking up the next
record and have to make a complete revolution before trying again). The
3330 track size wasn't large enough to make the dummy record sizes larger
(using 4k data records).  It turned out that the same rotational miss
rate was true for the 303x channel directors (regardless of the machine
they were attached to; since they all used the same 158 integrated
channel processing).
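
A back-of-the-envelope sketch of the sizing in Python (the 3600 rpm and
roughly 13,030 bytes/track figures are assumptions for a 3330-class drive,
and the latencies in the loop are made-up examples):

    RPM = 3600
    BYTES_PER_TRACK = 13030

    ms_per_rev = 60_000 / RPM                      # about 16.7 ms per revolution
    bytes_per_ms = BYTES_PER_TRACK / ms_per_rev    # about 780 bytes pass the head per ms

    def spacer_bytes(head_switch_latency_ms):
        # the dummy spacer must take at least this long to rotate past the head
        return head_switch_latency_ms * bytes_per_ms

    for latency in (0.1, 0.3, 1.0):                # hypothetical channel latencies, ms
        print(f"{latency} ms head-switch -> {spacer_bytes(latency):.0f}-byte spacer")

The slower the channel's head-switch processing, the bigger the spacer has to
be; with 4k data records there simply wasn't enough track left to make the
spacers big enough for the 158-style channels.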

misc. past posts discussing dummy records & channel program head-switch
latency:
http://www.garlic.com/~lynn/2000d.html#7 4341 was Is a VAX a mainframe?
http://www.garlic.com/~lynn/2001j.html#3 YKYGOW...
http://www.garlic.com/~lynn/2002b.html#17 index searching
http://www.garlic.com/~lynn/2003g.html#22 303x, idals, dat, disk head settle, 
and other rambling folklore
http://www.garlic.com/~lynn/2004d.html#64 System/360 40 years old today
http://www.garlic.com/~lynn/2004d.html#65 System/360 40 years old today
http://www.garlic.com/~lynn/2004d.html#66 System/360 40 years old today
http://www.garlic.com/~lynn/2005p.html#38 storage key question
http://www.garlic.com/~lynn/2005s.html#22 MVCIN instruction
http://www.garlic.com/~lynn/2006w.html#8 Why these original FORTRAN quirks?

-- 
40+yrs virtualization experience (since Jan68), online at home since Mar70



Re: Computer History Museum

2008-12-24 Thread Rick Fochtman

--snip
there used to be some technique of laying out data records on 3330 
cylinders with dummy spacer records that would allow for channel 
program processing latency to do a head switch operation (on the same 
cylinder) between the end of a data record (on one track) and the start 
of the (next) data record (on another track) ... without a rotational 
miss. On several 370s (145, 148, & 168), the channel processing was fast 
enough to execute the head-switch in the time it took a 3330 disk to 
rotate the dummy spacer record amount.


The problem was that 158 channels had higher latency and would only make 
the head-switch (w/o a miss & additional revolution) 20-30% of the time 
(the rest of the time, the head-switch would miss picking up the next 
record and have to make a complete revolution before trying again). The 
3330 track size wasn't large enough to make the dummy record sizes larger 
(using 4k data records). It turned out that the same rotational miss 
rate was true for the 303x channel directors (regardless of the machine 
they were attached to; since they all used the same 158 integrated 
channel processing).

unsnip
We called them Gap Records at NCSS and they worked very well for 
paging on 2305 devices. We'd use the first exposure for the first page, 
the second exposure for the third page and so on. They were connected to 
a 370/168 via 2860 Selector Channels. Grant Tegtmeier designed the code 
and computed the sizes of the Gap Records. He also installed 3330 and 
2305 support code, of his own design, in our modified CP67/CMS system. 
In a long weekend! He was noted for long periods of seeming boredom, 
punctuated by spurts of sheer genius.


--
Rick
--
Remember that if you’re not the lead dog, the view never changes.



Re: Computer History Museum

2008-12-24 Thread Anne Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


rfocht...@ync.net (Rick Fochtman) writes:
 We called them Gap Records at NCSS and they worked very well for
 paging on 2305 devices. We'd use the first exposure for the first
 page, the second exposure for the third page and so on. They were
 connected to a 370/168 via 2860 Selector Channels. Grant Tegtmeier
 designed the code and computed the sizes of the Gap Records. He also
 installed 3330 and 2305 support code, of his own design, in our
 modified CP67/CMS system. In a long weekend! He was noted for long
 periods of seeming boredom, punctuated by spurts of sheer genius.

re:
http://www.garlic.com/~lynn/2008s.html#51 Computer History Museum
http://www.garlic.com/~lynn/2008s.html#52 Computer History Museum

there were 3 people that came out from the science center to the
univ. to install cp67 the last week in jan68. One of these people left
the science center june68 to be part of ncss. He was supposed to teach a
cp67 class the following week (after he gave notice) for customers
... and the science center had to really scramble to find people to fill
in for him.

the initial cp67 code had fifo single operation processing for 2311,
2314s, and 2301 (drums). It would get about 80 page transfers/sec on
2301. I redid the 2301 to do chained processing, which increased peak
2301 page transfers to 300/sec. The 2301 didn't have multiple request
exposure. i also redid the 2311 & 2314 code to implement ordered seek
operation (for all queued requests) ... both cp requests and cms
requests ... as well as chained requests for page operations. On heavily
loaded CMS systems, the ordered seek queueing made a big difference
... both graceful degradation as load increased ... as well as peak
throughput.
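
A toy comparison of FIFO vs ordered-seek servicing of a request queue,
counting cylinders of arm travel (Python, illustrative only; the made-up
queue and start position are assumptions, and the chaining of requests into
single channel programs isn't modelled):

    def fifo_travel(start, requests):
        pos, travel = start, 0
        for cyl in requests:                 # service strictly in arrival order
            travel += abs(cyl - pos)
            pos = cyl
        return travel

    def ordered_travel(start, requests):
        # ordered seek: sweep up through the queued cylinders, then back down
        up = sorted(c for c in requests if c >= start)
        down = sorted((c for c in requests if c < start), reverse=True)
        pos, travel = start, 0
        for cyl in up + down:
            travel += abs(cyl - pos)
            pos = cyl
        return travel

    queue = [183, 5, 97, 14, 122, 9, 150, 61]    # queued cylinder numbers (made up)
    print("fifo   :", fifo_travel(50, queue), "cylinders of arm travel")
    print("ordered:", ordered_travel(50, queue), "cylinders of arm travel")

The longer the queue gets under load, the more the ordered version wins,
which is the graceful-degradation effect described above.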

i also redid a whole bunch of the kernel pathlengths. This old
post:
http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

contains part of presentation that I made at the '68 SHARE meeting in
Boston.

I had been doing heavy optimization of OS MFT system generations ...
carefully reordering all the STAGE2 output (from STAGE1) so that the
result would optimally place os/360 system files and PDS members on disk
(in order to minimize avg. arm seek distance). For the univ. student
work load, I would get a factor of about three times thruput
improvement. This would degrade over time as PTF maintenance was applied
... affecting high use system components. I would then have to
periodically rebuild the system in order to restore the carefully ordered
placement of files and PDS members.
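
A toy model of the placement idea in Python (dataset names, sizes and access
frequencies are all made up; the point is just that clustering the high-use
files together reduces the frequency-weighted seek distance):

    import itertools

    datasets = {   # name: (cylinders occupied, relative access frequency)
        "SYS1.SVCLIB":  (20, 40),
        "SYS1.LINKLIB": (40, 30),
        "SYS1.PROCLIB": (10, 15),
        "USER.PDS":     (50, 10),
        "SYS1.MACLIB":  (30, 5),
    }

    def weighted_seek(order):
        # lay the datasets out contiguously in the given order and weight the
        # distance between each pair by how often both are referenced
        centers, pos = {}, 0
        for name in order:
            size, _ = datasets[name]
            centers[name] = pos + size / 2
            pos += size
        total = sum(freq for _, freq in datasets.values())
        cost = 0.0
        for a, b in itertools.combinations(order, 2):
            fa, fb = datasets[a][1] / total, datasets[b][1] / total
            cost += 2 * fa * fb * abs(centers[a] - centers[b])
        return cost

    naive = list(datasets)                                   # arbitrary order
    hot = sorted(datasets, key=lambda n: -datasets[n][1])
    tuned = hot[::2][::-1] + hot[1::2]                       # hottest in the middle
    print("naive placement:", round(weighted_seek(naive), 1))
    print("tuned placement:", round(weighted_seek(tuned), 1))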

I also got to do some work rewriting the cp67 kernel ... besides redoing
the i/o stuff ... i also reworked a lot of the pathlengths ... in some
cases getting factor of 100 times improvement for some of the stuff.

As mentioned in the presentation, the original unmodified cp67 kernel
had 534 cpu seconds overhead for running MFT14 workload that took 322
seconds elapsed time. In the period between Jan68 and Aug68, I was able
to get that cp67 kernel virtual machine overhead down from 534 cpu
seconds to 113 cpu seconds (by rewriting several parts of the cp67
kernel).

I normally had classes during the week ... so much of my maintenance and
support work for OS/360 MFT and work on cp67 occurred on weekends.  The
univ. typically shut down the datacenter from 8am Sat. until 8am Monday
... during which time I could have the whole place for my personal use.
Monday classes were sometimes a problem after having been up for 48hrs
straight.

I had also done a dynamic adaptive resource manager and my own page
replacement algorithm and thrashing controls for cp67 ... lots of the
stuff IBM picked up and shipped in the product while I was still an
undergraduate at the univ (including the TTY/ASCII terminal support
mentioned in an earlier post in this thread).

This recent post:
http://www.garlic.com/~lynn/2008s.html#17 IBM PC competitors

mentioned that I continued to do various cp67 things ... but much of it
was dropped in the product morph from cp67 to vm370. The above has
references/pointers to some old email regarding migrating various of the
pieces from cp67 to vm370 (after the science center finally replaced
their 360/67 with a 370/155-II).

some of the old email 
http://www.garlic.com/~lynn/2006v.html#email731212
http://www.garlic.com/~lynn/2006w.html#email750102
http://www.garlic.com/~lynn/2006w.html#email750430

... included these posts from a couple years ago:
http://www.garlic.com/~lynn/2006v.html#36
http://www.garlic.com/~lynn/2006w.html#7
http://www.garlic.com/~lynn/2006w.html#8

Before the decision was made to release some of it in the standard vm370
product ... they let me build, distribute, and support highly modified
vm370 (aka csc/vm) systems for a large number of internal systems. at one
point I would joke with the people on the 5th flr that the number peaked
at about the same as the total number that they were supporting ... recent
reference

Computer History Museum

2008-12-23 Thread Kopischke, David G.
Here's an interesting one from Intelligent Enterprise today

- Computer History Museum Tour in Pictures

http://www.intelligententerprise.com/channels/information_management/sho
wArticle.jhtml?articleID=212501470&cid=nl_ie_week

By Doug Henschen
Our favorite event venue of 2008? Hands down it was the Computer History
Museum in Mountain View, Calif. Take the tour in pictures.


http://www.intelligententerprise.com/galleries/showImage.jhtml?galleryID
=23&imageID=1&articleID=212501470

