The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] writes:
> When?  I never considered IBM world and its batch environment 
> timesharing.  Timesharing does not do large data processing tasks
> well; and it's not supposed to.

there were two somewhat distinct environments ... one was
commercial dataprocessing and the other was interactive computing and
timesharing.

the commercial, batch, production environment was oriented towards
business dataprocessing ... it wasn't computing done on behalf of some
specific person ... it was computing done on behalf of some business
operation ... like the organization's payroll and printing checks. the
requirement was that the business dataprocessing be done ... frequently
on a very deterministic schedule ... independent of any specific
person. over time, lots of batch technology evolved to
guarantee that specific operations could be done reliably, predictably,
and deterministically, independent of any human involvement.

much of the interactive and virtual machine paradigm evolved totally
independently at the science center ... first with cp40/cms, morphing
into cp67/cms, followed by vm370/cms (even tho during the 70s, the batch
infrastructure and the timesharing infrastructure shared a common 370
hardware platform):
http://www.garlic.com/~lynn/subtopic.html#545tech

both multics (on the 5th flr) and science center (on the 4th flr) could
trace common heritage back to ctss (and unix traces some heritage back
to multics).

even tho there was a relatively large timesharing install base (in most
cases larger than any other vendor's timesharing install base that might
be more commonly associated with timesharing) ... in the period, it was
dwarfed by the commercial batch install base. I've joked before that at
one period, the installed commercial customer install base was much
larger than the timesharing customer install base, and the timesharing
customer install base was much larger than the timesharing internal
install base, and the timesharing internal install base was much larger
than the internal installations that I directly supported (built,
distributed, fixed bugs, on highly customized/modified kernel and
services). However, at one point the number of internal installations
that I directly supported was as large as the total number of Multics
installations that ever existed. lots of past posts mentioning the
timesharing environment
http://www.garlic.com/~lynn/subtopic.html#timeshare

much of that timesharing install base was cms personal computing ...
while the rest was mixed-mode operation with cms personal computing and
other kinds of operating systems in virtual machines ... aka the same
timesharing infrastructure supporting both interactive cms personal
computing as well as production (frequently batch) guest operating
systems. this required a timesharing dispatching/scheduling policy
infrastructure that could support a broad range of requirements.
for a little topic drift, slightly related recent post:
http://www.garlic.com/~lynn/2007m.html#46 Rate Monotonic scheduling (RMS) vs. OS Scheduling
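
for illustration only ... the following is a tiny python sketch of the
kind of mixed-mode dispatching problem described above: short quanta
and quick re-dispatch for interactive users, long quanta but a bounded
fair share for batch guests, all driven off one run queue ordered by
virtual-time deadlines. it is not the actual cp/cms dispatcher ... the
task names, quantum values, and share rule are all made-up assumptions.

    # hypothetical sketch of a mixed interactive/batch dispatcher
    # (NOT the real CP/VM scheduler); proportional-share via
    # virtual-time deadlines, in the spirit of stride scheduling.
    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Task:
        deadline: float                                  # earlier deadline runs first
        name: str = field(compare=False)
        interactive: bool = field(compare=False, default=False)
        share: float = field(compare=False, default=1.0) # relative share weight

    INTERACTIVE_QUANTUM = 5    # ms: short slice, fast interactive response
    BATCH_QUANTUM = 50         # ms: long slice, fewer context switches

    def dispatch(run_queue, clock=0.0, slices=10):
        """pop the earliest-deadline task, 'run' it for one quantum,
        then push it back with its deadline advanced in inverse
        proportion to its share (a crude fair-share policy)."""
        for _ in range(slices):
            task = heapq.heappop(run_queue)
            quantum = INTERACTIVE_QUANTUM if task.interactive else BATCH_QUANTUM
            clock += quantum
            print(f"t={clock:6.1f}ms  ran {task.name:12s} for {quantum}ms")
            task.deadline = clock + quantum / task.share
            heapq.heappush(run_queue, task)

    if __name__ == "__main__":
        queue = []
        heapq.heappush(queue, Task(0.0, "cms-user-1", interactive=True))
        heapq.heappush(queue, Task(0.0, "cms-user-2", interactive=True))
        heapq.heappush(queue, Task(0.0, "batch-guest", share=2.0))
        dispatch(queue)

running it simply interleaves the two interactive "users" with the
batch "guest" ... the share weight bounds how much of the processor the
long-running batch guest can absorb while the interactive users still
get quick turnaround.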

also coming out of the science center in the period (besides virtual
machines, a lot of timesharing and interactive/personal computing)
... somewhat reflecting the timesharing and personal computing
orientation was much of the internal networking technology
http://www.garlic.com/~lynn/subnetwork.html#internalnet

as well as things like the invention of GML, precursor to SGML, HTML,
XML, etc
http://www.garlic.com/~lynn/subtopic.html#sgml

with the advent of PCs ... a lot of the cms personal computing migrated
to PCs ... although the (mainframe) virtual machine operating system
continues to survive ... and even saw some resurgence in the early
part of this decade supporting large numbers of virtual machines running
linux ... somewhat in the "server consolidation" market segment.

recently, "server consolidation" has become something of a more widely
recognized buzzword ... pushing a combination of virtual machine
capability migrated to PC hardware platforms possibly in combination
with large BLADE form-factors farms ... where a business with hundreds,
thousands, or even tens of thousands of servers are consolidating into
much smaller space.

Microsoft Looks to Stop Internal Server Sprawl
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=296360

from above:

The profile of Microsoft Corp.’s in-house server farm is similar to
those of many other companies: one application per server, with less
than 20% peak server utilization on average. But Devin Murray,
Microsoft’s group manager of utility services, is working to change
that. Murray’s team manages about 17,000 servers that support 40,000 of
Microsoft’s end users worldwide.

... snip ...

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
