Anne & Lynn Wheeler wrote:
eventually all system orders had to be processed by some hone
application or another. the "performance predictor" was a "what-if" type
application for sales ... you entered some amount of information about
customer workload and configuration (typically in machine readable form)
and asked "what-if" questions about changes in workload or configuration
(type of stuff that is normally done by spreadsheet applications these
days).
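the post doesn't describe the predictor's internal model, but the flavor of a "what-if" capacity question can be sketched with a simple single-server queueing approximation (all numbers and names below are illustrative, not from the actual application):

```python
# Minimal sketch of a "what-if" configuration/workload question,
# using an M/M/1 queueing approximation.  The real "performance
# predictor" model is not described in the post; the parameters
# here are purely illustrative.

def predicted_response_time(arrival_rate, service_time):
    """Mean response time (seconds) for a single-server open queue."""
    utilization = arrival_rate * service_time
    if utilization >= 1.0:
        raise ValueError("offered load saturates the configuration")
    return service_time / (1.0 - utilization)

# baseline: 30 transactions/sec, 20 ms of service each
base = predicted_response_time(30, 0.020)

# what-if: workload grows 50% on the same configuration
grown = predicted_response_time(45, 0.020)

# what-if: same growth, but on a processor twice as fast
upgraded = predicted_response_time(45, 0.010)

print(f"baseline:           {base * 1000:.1f} ms")
print(f"+50% load:          {grown * 1000:.1f} ms")
print(f"+50% load, 2x cpu:  {upgraded * 1000:.1f} ms")
```

the point of the sketch is only that small workload or configuration changes can have non-linear response-time effects, which is exactly the kind of question sales people could put to the predictor.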
a variation of the "performance predictor" was also used for
selecting the next configuration/workload in the automated benchmark
process that I used for calibrating/validating the resource manager
before product ship (one sequence involved 2000 benchmarks that took
3 months elapsed time to run)
http://www.garlic.com/~lynn/subtopic.html#benchmark
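the calibration/validation loop can be sketched as: enumerate workload/configuration points, run each benchmark, and compare measurement against the predictor's estimate. the actual selection logic for the 2000-benchmark sequence isn't given in the post; `predict` and `run_benchmark` below are hypothetical stand-ins:

```python
# Illustrative sketch of an automated benchmark sweep used to
# calibrate a predictor against measurement.  Both functions are
# hypothetical stand-ins, not the actual HONE application.

import itertools

def predict(users, storage_mb):
    """Hypothetical predictor estimate of throughput for a config."""
    return min(users * 2.0, storage_mb / 4.0)

def run_benchmark(users, storage_mb):
    """Stand-in for actually running the synthetic workload."""
    return predict(users, storage_mb) * 0.95   # pretend measurement

# sweep the workload/configuration space, recording relative error
errors = []
for users, storage_mb in itertools.product((50, 100, 200), (256, 512)):
    estimated = predict(users, storage_mb)
    measured = run_benchmark(users, storage_mb)
    errors.append(abs(estimated - measured) / measured)

print(f"worst relative error across sweep: {max(errors):.1%}")
```

in the real process the comparison would feed back into the predictor's calibration and into choosing which configuration/workload to benchmark next.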
the performance monitoring, tuning, and management work (along with
workload profiling) evolved into capacity planning.
ref:
http://www.garlic.com/~lynn/2006f.html#22 A very basic question
for a little more drift ... when the US hone datacenters were
consolidated into a single center in the bay area in the late 70s ...
possibly the largest single-system cluster anywhere (at the time) was
created. a variation of the "performance predictor" was used to monitor
real-time activity of all the members of the cluster and when a new
branch office (or other sales, marketing or field) person initiated a
login, the overall view of the cluster was used to decide which machine
the user actually logged into (aka providing both availability and
load-balancing at log-in)
http://www.garlic.com/~lynn/subtopic.html#hone
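the availability/load-balancing decision at login can be sketched as routing each new login to the least-loaded cluster member that is currently up. the actual HONE mechanism (built on a variation of the performance predictor) isn't specified in the post; the fields and values below are illustrative:

```python
# Hedged sketch of availability + load-balancing at login time:
# direct a new login to the least-loaded member that is up.  The
# real cluster used real-time monitoring data; this structure is
# purely illustrative.

from dataclasses import dataclass

@dataclass
class Member:
    name: str
    up: bool          # from real-time monitoring
    load: float       # e.g. recent utilization, 0.0 - 1.0

def choose_login_target(members):
    """Return the member a new login should be directed to, or None."""
    candidates = [m for m in members if m.up]
    if not candidates:
        return None                      # no availability anywhere
    return min(candidates, key=lambda m: m.load)

cluster = [
    Member("sysa", up=True,  load=0.85),
    Member("sysb", up=False, load=0.10),   # down for service
    Member("sysc", up=True,  load=0.40),
]
print(choose_login_target(cluster).name)   # sysc
```

because the choice happens at login, a down member is simply never chosen, which gives availability as well as load-balancing without needing process migration afterwards.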
In the 70s, the base system didn't have process migration between
different processors in a cluster (once logged on).
Note, however, one of the time-sharing service bureaus using the same
(cp/cms) platform had done such an enhancement in the mid-70s ...
http://www.garlic.com/~lynn/subtopic.html#timeshare
this particular service bureau had moved to 7x24 operation with
customers world-wide ... and so there was no period where the system was
totally idle and could be taken down for service. When a member of their
(mainframe) cluster configuration needed to be taken offline for service
or preventive maintenance ... it was possible to migrate all active
processes off the piece of hardware (to other members in the cluster,
transparently to the end user), and take the hardware offline.
they had a datacenter near boston and another in san francisco and claimed
to be able to do transparent process migration between datacenters (as
well as between members of the cluster within a datacenter) ... modulo
having replicated data at the two centers.
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html