[EMAIL PROTECTED] (Gerhard Postpischil) writes:
> Back in the seventies I was in charge of the systems group at a
> service bureau. One of our customers was from a local university,
> running an APL application that tracked students vs. classes, and a
> few other things. It was a gold mine - whenever it ran, the CPU went
> 100% busy and stayed that way for a long time. The same thing written
> in another language might have cost one or two percent as much.

recent posts mentioning the world-wide HONE system
http://www.garlic.com/~lynn/2007.html#30 V2X2 vs. Shark (SnapShot v. FlashCopy)
http://www.garlic.com/~lynn/2007.html#31 V2X2 vs. Shark (SnapShot v. FlashCopy)
http://www.garlic.com/~lynn/2007.html#46 How many 36-bit Unix ports in the old days?
http://www.garlic.com/~lynn/2007b.html#51 Special characters in passwords was Re: RACF - Password rules
http://www.garlic.com/~lynn/2007d.html#39 old tapes
http://www.garlic.com/~lynn/2007e.html#38 FBA rant
http://www.garlic.com/~lynn/2007e.html#41 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007f.html#12 FBA rant
http://www.garlic.com/~lynn/2007f.html#20 Historical curiosity question
http://www.garlic.com/~lynn/2007g.html#31 Wylbur and Paging
http://www.garlic.com/~lynn/2007i.html#34 Internal DASD Pathing
http://www.garlic.com/~lynn/2007i.html#77 Sizing CPU
http://www.garlic.com/~lynn/2007j.html#65 Help settle a job title/role debate
http://www.garlic.com/~lynn/2007k.html#60 3350 failures

HONE ("hands-on") started out in the US with cp67 ... sort of to allow
branch office SEs to have "hands-on" with various operating systems
(running in virtual machines). prior to the 23jun69 unbundling announcement
http://www.garlic.com/~lynn/subtopic.html#unbundle

a lot of SEs got much of their "hands-on" experience in their customer
accounts. after the unbundling announcement, SE time was being charged
for ... and not a lot of customers were interested in paying to have SEs
learn.

however, the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

in addition to doing virtual machines, cms, inventing GML (precursor
to SGML, HTML, XML, etc)
http://www.garlic.com/~lynn/subtopic.html#sgml

and the internal networking technology
http://www.garlic.com/~lynn/subnetwork.html#internalnet

which was also used in bitnet and earn
http://www.garlic.com/~lynn/subnetwork.html#bitnet

also did a port of apl\360 to cms (cms\apl). apl\360 had its own
monitor, scheduler, workspace swapping, terminal handler, etc ... all of
which could be discarded in the port for cms\apl. also in moving from
the 16kbyte (sometimes 32kbyte) "real" workspace sizes .... to CMS
... where the workspace size could be all of virtual memory ... the
whole way that APL managed storage had to be reworked (the real-storage
strategy resulted in enormous page thrashing).
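
as an aside, a minimal sketch of the storage issue (illustrative python,
not the actual cms\apl code; the 4k page size, the workspace sizes, and
the allocate-on-assignment detail -- roughly, grab a fresh location on
every assignment and garbage collect only when the workspace fills --
are my characterization, not code from the port):

PAGE = 4096    # assumed page size

def pages_touched(workspace_bytes, assignments, value_bytes=8):
    # allocate-on-assignment: every store goes to a fresh address until
    # the workspace is exhausted, then a gc/compact starts over at the bottom
    touched = set()
    cursor = 0
    for _ in range(assignments):
        if cursor + value_bytes > workspace_bytes:
            cursor = 0
        touched.add(cursor // PAGE)
        cursor += value_bytes
    return len(touched)

print(pages_touched(32 * 1024, 1_000_000))          # 8 pages ... fine in real storage
print(pages_touched(16 * 1024 * 1024, 1_000_000))   # ~2000 pages ... page thrashing

the same loop that stays inside a handful of real pages wanders across
thousands of virtual pages ... which is exactly the thrashing behavior
the rework had to eliminate.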

part of cms\apl was also the ability to access system services (things
like read/write files) ... something that apl didn't previously
have. the combination of really large workspace sizes and the access to
system services ... opened up APL for a lot of real-world problems. A
lot of modeling of all kinds was done ... as well as a lot of stuff
that these days would be implemented with spreadsheets.

One of the early "big" APL uses (at cambridge) was by a number of
business planners from corporate hdqtrs in armonk. they forwarded a tape
to cambridge with all of the most sensitive corporate customer business
data ... and would do a significant amount of business modeling and
planning. this created an interesting "security" scenario for the service
at cambridge since there were a lot of non-employees using the system
from various educational institutions in the cambridge area.

one instance is this slightly related DNS trivia topic drift ... more
than a decade before DNS
http://www.garlic.com/~lynn/2007k.html#33 Even worse than Unix

Before long, a significant number of CMS\APL applications had been
written supporting sales & marketing and deployed on the HONE system
... effectively taking over its whole use for sales & marketing (and
eliminating the original "hands-on" use for SEs). Eventually, sales
couldn't even submit customer orders that hadn't been processed by some
CMS\APL application. HONE transitioned from a cp67 to a vm370-based
platform and from cms\apl to apl\cms (enhancements done by the palo alto
science center ... including the 370/145 apl microcode assist) ... and
clones of the (US) HONE system were sprouting up all over the world (some
of the early ones i even got to handle ... like when EMEA hdqtrs moved
from the US to Paris).

lots of other posts mentioning HONE and/or APL
http://www.garlic.com/~lynn/subtopic.html#hone

in the mid-70s, the US HONE datacenters were consolidated in silicon
valley. The large customer base (all US sales and marketing) drove the
requirement for a large disk farm ... and the heavy APL processor
requirements helped contribute to it (probably) being the largest single
system image installation in the world at the time ... aka a large
number of loosely-coupled processors with a front-end that would route
new logins based on availability and load.
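
the front-end amounted to something like the following sketch (the
processor names and the load metric are hypothetical, not the actual
HONE implementation):

from dataclasses import dataclass

@dataclass
class Processor:
    name: str
    available: bool
    logged_on: int    # stand-in for whatever load metric was actually used

def route_login(processors):
    # send the new login to the least-loaded processor that is available
    candidates = [p for p in processors if p.available]
    if not candidates:
        raise RuntimeError("no processor available for login")
    target = min(candidates, key=lambda p: p.logged_on)
    target.logged_on += 1
    return target.name

hone_complex = [Processor("sys1", True, 180), Processor("sys2", True, 150),
                Processor("sys3", False, 0)]
print(route_login(hone_complex))    # -> sys2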

later the silicon valley HONE datacenter was replicated, first in Dallas
and then with a 3rd replica in Boulder (with availability and fall-over
across the three datacenters).

the science center had also done a lot of performance tuning and
modeling ... including much of the stuff leading up to capacity
planning. a sophisticated science center performance model written in
APL was made available on the HONE system as the "performance predictor"
... where sales & marketing could provide workload and configuration
characterizations of a customer installation and ask "what-if" questions
about what would happen when workload &/or configuration changes were made.
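
to give the flavor of a "what-if" question, here is a toy single-queue
stand-in (emphatically not the actual predictor model, which was far
more sophisticated; the rates and service times are made up):

def predicted_response(arrival_rate, service_time):
    # mean response time for a single-server open queue (M/M/1)
    utilization = arrival_rate * service_time
    if utilization >= 1.0:
        return float("inf")    # saturated ... configuration can't keep up
    return service_time / (1.0 - utilization)

# characterize the installation, then ask: what if the processor were faster?
current = predicted_response(arrival_rate=8.0, service_time=0.10)    # 80% busy
upgraded = predicted_response(arrival_rate=8.0, service_time=0.08)
print(f"current {current:.2f}s, upgraded {upgraded:.2f}s")    # 0.50s vs 0.22s

the nonlinear payoff near saturation is the sort of thing that made
"what-if" answers from a model so much more useful than rules of thumb.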

when i was getting ready to release my resource manager ... besides
getting selected to be the guinea pig for charging for kernel software
... I had developed a bunch of automated benchmarking processes. there
were something like 1000 benchmarks defined that covered a broad range
of configurations, workloads, and performance tuning parameters. as each
benchmark was run ... the results were fed into a specially modified
version of the performance predictor. then the performance predictor was
programmed to start selecting various tuning, configuration and workload
specifications ... for something like another 1000 benchmarks
(predicting what should happen in each benchmark and then verifying the
results). the 2000 benchmarks took something like 3mnths elapsed time to
run ...
http://www.garlic.com/~lynn/subtopic.html#bench
leading up to the release of the resource manager.
http://www.garlic.com/~lynn/subtopic.html#fairshare
http://www.garlic.com/~lynn/subtopic.html#wsclock
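
the overall loop was roughly the following sketch (the model interface,
the selection logic, and the 10% tolerance are illustrative assumptions,
not the actual process):

def validate(predefined, predictor, select_next, run_benchmark,
             extra=1000, tolerance=0.10):
    # phase 1: run the ~1000 hand-defined benchmarks, feeding each result
    # into the (specially modified) predictor
    for spec in predefined:
        predictor.calibrate(spec, run_benchmark(spec))
    # phase 2: the predictor picks another ~1000 tuning/config/workload
    # combinations, predicting each result and verifying it against the run
    failures = []
    for _ in range(extra):
        spec = select_next(predictor)
        expected = predictor.predict(spec)
        actual = run_benchmark(spec)
        if abs(actual - expected) > tolerance * expected:
            failures.append((spec, expected, actual))
    return failures    # empty means the model's predictions held up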
