Jack Woehr <[EMAIL PROTECTED]> writes:
> 7. But the best feature of VM on the 3x0 architecture is that
> the same jokes stay fresh forever, since that platform changes
> at roughly the same rate as the continents drift !

somewhat more subtle joke from the resource manager ... initial
(re)release was 1976.

it introduced a new module ... dmkstp ... which was a take-off on
an old stp tv commercial ... something about the "racer's edge".

the resource manager had some policy setting parameters and all this
dynamic adaptive feedback stuff. however, leading up to release of the
product ... somebody from corporate insisted that all the "modern"
performance management implementations had enormous numbers of
performance tuning knobs. the major operating system release of the
period had a system resource manager ... with a humongous matrix of
performance tuning options. there used to be frequent SHARE
presentations about enormous numbers of benchmarks where the numerous
performance options were somewhat randomly changed ... attempting to
discover static combinations of tuning knob settings that showed
(on the avg.) better thruput for specific kinds of workloads.

somehow it was felt that all the dynamic adaptive feedback features
weren't sufficiently modern ... and that the product required static
performance tuning knobs that could be tweaked this way and that.

so before release, some number of static tuning knobs were introduced
and fully documented. the joke had to do with the nature of dynamic
adaptive feedback algorithms and something sometimes referred to as
"degrees of freedom" (and what had the greatest degrees of freedom,
the static tuning knobs or the dynamic adaptive feedback controls, aka
could the dynamic feedback control compensate for all possible tuning
knob changes).
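
a minimal sketch of the "degrees of freedom" point (all names and
numbers invented here ... nothing from the actual resource manager
code). a static knob only scales one input to the loop, while the
dynamic adaptive feedback owns the priority state ... so the loop can
compensate for any knob setting:

    #include <stdio.h>

    /* toy model: a static knob scales how priority translates into
       dispatching weight; the dynamic adaptive feedback loop then
       drives measured consumption to the fair-share target. since the
       feedback owns the priority state, it compensates for the knob. */
    static double steady_share(double knob)
    {
        const double background = 3.0;   /* fixed competing load */
        const double target = 0.5;       /* fair-share target */
        double priority = 1.0;           /* state owned by the loop */
        int i;

        for (i = 0; i < 500; i++) {
            double weight = knob * priority;
            double consumed = weight / (weight + background);
            priority += 2.0 * (target - consumed);  /* adaptive term */
            if (priority < 0.01)
                priority = 0.01;
        }
        return (knob * priority) / (knob * priority + background);
    }

    int main(void)
    {
        /* knob settings an order of magnitude apart converge to the
           same measured share ... the feedback absorbs the knob */
        printf("knob=0.5 -> share %.3f\n", steady_share(0.5));
        printf("knob=5.0 -> share %.3f\n", steady_share(5.0));
        return 0;
    }

run it and both knob settings land on essentially the same steady
state ... i.e. the dynamic feedback has more degrees of freedom than
the documented static knob.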

misc. collected scheduling & resource manager posts
http://www.garlic.com/~lynn/subtopic.html#fairshare

there is a story that a couple of years before the resource manager
was released, an early version leaked out to AT&T longlines.
longlines migrated this kernel to some number of machines
... including bringing it up on newer generations of machines as they
were installed. coming up to about the 3090 timeframe, I was
contacted by the national account rep
for at&t ... who was facing a problem. this early, leaked kernel
predated smp support and it wouldn't be possible to sell SMP
processors to longlines unless they could be migrated off this
kernel. however, the dynamic adaptive stuff in this leaked kernel had
managed to survive nearly ten years and operate across a range of
processors with two orders of magnitude difference in computing power
(i.e. an increase of one hundred times between the earliest, entry
machine and the latest, highest-end machine). misc. past posts
mentioning at&t longlines
http://www.garlic.com/~lynn/95.html#14 characters
http://www.garlic.com/~lynn/96.html#35 Mainframes & Unix (and TPF)
http://www.garlic.com/~lynn/97.html#15 OSes commerical, history
http://www.garlic.com/~lynn/2000.html#5 IBM XT/370 and AT/370 (was Re: Computer of the century)
http://www.garlic.com/~lynn/2000f.html#60 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
http://www.garlic.com/~lynn/2001f.html#3 Oldest program you've written, and still in use?
http://www.garlic.com/~lynn/2002.html#4 Buffer overflow
http://www.garlic.com/~lynn/2002.html#11 The demise of compaq
http://www.garlic.com/~lynn/2002c.html#11 OS Workloads : Interactive etc
http://www.garlic.com/~lynn/2002i.html#32 IBM was: CDC6600 - just how powerful a machine was it?
http://www.garlic.com/~lynn/2002k.html#66 OT (sort-of) - Does it take math skills to do data processing ?
http://www.garlic.com/~lynn/2002p.html#23 Cost of computing in 1958?
http://www.garlic.com/~lynn/2003.html#17 vax6k.openecs.org rebirth
http://www.garlic.com/~lynn/2003d.html#46 unix
http://www.garlic.com/~lynn/2003k.html#4 1950s AT&T/IBM lack of collaboration?
http://www.garlic.com/~lynn/2004e.html#32 The attack of the killer mainframes
http://www.garlic.com/~lynn/2004m.html#58 Shipwrecks
http://www.garlic.com/~lynn/2005p.html#31 z/VM performance

a 3090-specific story has to do with erep (error recording/reporting)
data. i had done the drivers for hyperchannel as part of migrating
something like 300 people from the ims group at the santa teresa lab
to an off-site building. lots of
collected hyperchannel & hsdt (high speed data transport) project
posts:
http://www.garlic.com/~lynn/subnetwork.html#hsdt

they had considered remote 3270s, but discarded that as intolerable.
hyperchannel supported mainframe channel extension over telco links
... so that local 3270s could be used at the remote location. the
3274 local channel controllers operated at something like
640kbytes/sec while the telco channel extenders ran over T1 links
(aka around 150kbytes/sec). instead of response slightly declining,
it actually improved ... the secondary issue being that the slow
local channel attached 3274 controllers consumed significant
mainframe channel busy, and moving them behind the channel-extender
adapters reduced that channel busy, improving overall system thruput.
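
to put numbers on the link rates (the 1.544 mbits/sec T1 line rate is
the standard telco figure, not from the original note):

    T1 line rate:  1.544 mbits/sec / 8    = ~193 kbytes/sec raw
    less framing & protocol overhead      = ~150 kbytes/sec usable
    local 3274 channel interface          = ~640 kbytes/sec

i.e. the telco link ran at roughly a quarter of the local channel
rate ... which is why response had been expected to decline rather
than improve.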

in any case, i adopted the convention of simulating a channel check
error in situations where there was an unrecoverable telco error and
i needed to bump error retry/recovery up a level.
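
a minimal sketch of the idea (status names invented ... not the
actual driver code). on an unrecoverable telco error, reflect a
fabricated channel error so the host's standard channel error
retry/recovery takes over one level up:

    /* invented stand-ins for CSW channel status conditions */
    enum chan_status { CH_OK, CH_CC, CH_IFCC };

    /* hypothetical driver error exit: local telco retry is exhausted,
       so reflect a simulated channel error and let the host's
       existing channel error recovery drive the retry */
    enum chan_status reflect_link_error(int local_retry_worked)
    {
        if (local_retry_worked)
            return CH_OK;
        return CH_CC;    /* original choice: simulate channel check */
    }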

after the 3090 had been in customer shops for a year, somebody from
POK contacted me about a problem they were seeing in the industry
error-statistics reporting for the 3090. 3090 channels had been
designed to have something like 3-5 total channel check errors in a
year over all customers (not 3-5 errors per 3090 per year ... but an
aggregate total of 3-5 errors per year across all 3090s). Reports
showed a total of something like 15-20 rather than 3-5 (for the first
year). They had tracked it down to some customers with hyperchannel
installed (aka the extras were these simulated errors). I looked into
it and determined that reflecting IFCC (interface control check)
instead of CC would kick off essentially identical error retry
operations.
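
in terms of the sketch above, the fix was essentially one line:

    /* revised: reflect interface control check instead ... kicks off
       essentially the identical error retry path, but doesn't get
       counted against the 3090 channel check reliability statistics */
    return CH_IFCC;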

misc. past mention of CC/IFCC 3090 issue:
http://www.garlic.com/~lynn/94.html#24 CP spooling & programming technology
http://www.garlic.com/~lynn/96.html#27 Mainframes & Unix
http://www.garlic.com/~lynn/2004j.html#19 Wars against bad things
http://www.garlic.com/~lynn/2004q.html#51 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2005d.html#28 Adversarial Testing, was Re: Thou shalt have no
http://www.garlic.com/~lynn/2005e.html#13 Device and channel
http://www.garlic.com/~lynn/2005u.html#22 Channel Distances

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
