[email protected] (Shmuel Metz, Seymour J.) writes:
> No, that is not how conditional branching in channel programs has
> always worked.

re:
http://www.garlic.com/~lynn/2014c.html#62 Optimization, CPU time, and related issues
http://www.garlic.com/~lynn/2014c.html#64 Optimization, CPU time, and related issues

note in the mid-70s ... they consolidated all the US HONE systems
(worldwide online sales&marketing support) in silicon valley (when
FACEBOOK started, it moved into a new bldg built next door to the old
US HONE datacenter ... this is before FACEBOOK took over the old SUN
campus).
http://www.garlic.com/~lynn/subtopic.html#hone

Part of the effort for HONE was creating the largest single-system-image
loosely-coupled operation in the world (of multiprocessor
systems). Normal practice had been to use disk controller
RESERVE/RELEASE commands for loosely-coupled operation (analogous to
LOCK/UNLOCK operations in tightly-coupled multiprocessor operation).
However, a (much more efficient) "compare&swap" channel program was
developed. A full-track record was defined for each disk. A processor
did a read of the record ... updated the image to reflect the resources
it would be using ... and then did a compare&swap channel program
... basically a search (data) equal on the original record; if the
search was successful, the channel program went on to write the updated
record ... otherwise the operation failed and the processor had to
reread and retry. This cluster operation supported workload throughput
load-balancing and failure recovery across the complex. In part because
of earthquake concerns, the Cal. datacenter was replicated first in
Dallas and then with a 3rd in Boulder in the early 80s (load balancing
and failure recovery could then be done across all 3 datacenters
... but it was somewhat more complicated).

Of course none of this cluster support was ever released to customers
... a little similar support finally leaked out 30yrs later:
http://www.garlic.com/~lynn/2009p.html#43 From The Annals of Release No Software Before Its Time
http://www.garlic.com/~lynn/2009p.html#46 From The Annals of Release No Software Before Its Time
http://www.garlic.com/~lynn/2011m.html#46 From The Annals of Release No Software Before Its Time
http://www.garlic.com/~lynn/2011m.html#47 From The Annals of Release No Software Before Its Time
http://www.garlic.com/~lynn/2011m.html#59 From The Annals of Release No Software Before Its Time

This was about the time my wife was con'ed into going to POK to be in
charge of (mainframe) loosely-coupled architecture and came up with
peer-coupled shared data architecture ... some past posts
http://www.garlic.com/~lynn/submain.html#shareddata

however, it found little uptake (except for IMS hot-standby) until
sysplex & parallel sysplex ... which contributed to her not staying long
in the position. Another factor was that the SNA forces were constantly
trying to push her into using SNA for loosely-coupled operation ...
there would be temporary truces where she could do whatever she wanted
within the walls of the datacenter (SNA "owned" everything that crossed
the datacenter wall) ... but then they would start attacking again.

re: reserve/release and the compare&swap channel program ... modulo not
having the ACP-RPQ locking installed on the 3830 disk controller. some past refs:
http://www.garlic.com/~lynn/2008i.html#39 American Airlines
http://www.garlic.com/~lynn/2008j.html#50 Another difference between platforms
http://www.garlic.com/~lynn/2011b.html#12 Testing hardware RESERVE

and of course, the compare&swap instruction was originally
invented by charlie when he was doing fine-grain multiprocessor
locking for cp67 at the science center (the name of the instruction
was chosen because CAS are charlie's initials). past posts mentioning
multiprocessor &/or the compare&swap instruction
http://www.garlic.com/~lynn/subtopic.html#smp

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN