Paul B. Nieman wrote:
> In the early 1990s we consolidated a data center from Sydney into
> Philadelphia.  We used SYBACK to do a full dump of specific (most)
> minidisks to tape and shipped the tapes.  We then performed daily
> incrementals to disk, and sent the incrementals via RSCS, via a 9600
> baud line at most.  I think we had a 9600 baud line that was shared
> for RSCS and VTAM traffic, but the telecom part wasn't mine to worry
> over.  Each minidisk intended to move was a separate file and sent via
> SENDFILE.  There were service machines written to send and receive
> them.  I think the first incrementals arrived before the tapes.  In
> any case, we kept track of different days' incrementals for a whole
> week and applied them as they finished arriving.  The line was kept
> very busy and watched closely, but it was easy to restart if it
> dropped.
>
> Our actual cutover the following weekend went fairly quickly and met
> whatever target we had, which I certainly think wasn't enough to allow
> for backing up, shipping, and applying the tapes.
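
aside: a quick sanity check on that 9600 baud line (my arithmetic, not
from the original post) ... even a fully saturated line moves less than
100 mbytes/day, presumably why the full dumps went by tape and only the
daily incrementals went over the wire:

#include <stdio.h>

/* upper bound on what a 9600 baud line can carry in a day, ignoring
 * framing/protocol overhead and the fact the line was shared w/vtam */
int main(void)
{
    const double bits_per_sec  = 9600.0;
    const double bytes_per_sec = bits_per_sec / 8.0;   /* ~1.2kbyte/s */
    const double secs_per_day  = 86400.0;

    printf("max per day: %.1f mbytes\n",
           bytes_per_sec * secs_per_day / (1024.0 * 1024.0));
    /* prints ~98.9 mbytes */
    return 0;
}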

in the later part of the mid-70s, one of the vm370-based commercial
time-sharing services had a datacenter on the east coast and put in a
datacenter on the west coast, connected via a 56kbit link.

they had enhanced vm370 to support process migration between
loosely-coupled machines in the same datacenter cluster ... i.e. for one
thing, as they moved to 7x24 worldwide service ... there was no window
left for doing preventive maintenance. process migration allowed them to
move everything off a complex (that needed to be taken down for maint).
the claim was that they could even do process migration over the 56kbit
link ... modulo most of the file stuff having been replicated (so that
there wasn't a lot needing movement in real time).
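
a minimal sketch (in c, with invented structures ... the actual
implementation was vm370 assembler and none of these names come from
it) of why replicated file state makes migration over a slow link
plausible: only the volatile process state has to cross the wire:

#include <stdint.h>
#include <stddef.h>

/* hypothetical volatile state shipped at migration time; the file
 * data itself is already replicated at the target datacenter, so an
 * open file reduces to a name plus a position */
struct file_cursor {
    char     name[44];
    uint32_t offset;
};

struct migration_record {
    uint32_t psw[2];      /* program status word                    */
    uint32_t gpr[16];     /* general-purpose registers              */
    uint32_t npages;      /* count of resident 4k virtual pages     */
    uint32_t nfiles;      /* count of open-file cursors             */
    /* followed on the wire by npages pages and nfiles cursors      */
};

/* quiesce the process, ship the volatile state, resume on the target;
 * xmit() stands in for whatever the 56kbit link driver actually was */
int migrate(const struct migration_record *rec, const void *pages,
            const struct file_cursor *cursors,
            int (*xmit)(const void *, size_t))
{
    if (xmit(rec, sizeof *rec))                          return -1;
    if (xmit(pages, (size_t)rec->npages * 4096))         return -1;
    if (xmit(cursors, rec->nfiles * sizeof *cursors))    return -1;
    return 0;   /* target rebuilds the process and resumes it */
}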

misc. past posts mentioning vm time-sharing service
http://www.garlic.com/~lynn/subtopic.html#timeshare

they had also implemented a page-mapped filesystem capability with lots
of advanced bells and whistles ... similar to the cms page-mapped
filesystem stuff that i had originally done in the early 70s for cp67.
http://www.garlic.com/~lynn/subtopic.html#mmap

which also included a superset of the memory segment stuff ... a small
subset was later released as DCSS
http://www.garlic.com/~lynn/subtopic.html#adcon

for other drift ... as mentioned before ... the internal network was
larger than the arpanet/internet from just about the beginning until
sometime in mid-85. misc. posts mentioning the internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet

one of the issues was what to do about JES2 nodes on the internal
network: relatively trivial changes in JES2 headers between releases
would precipitate JES2 (& MVS) system crashes. for that reason (and
quite a few others), JES2 nodes were pretty well limited to a few
boundary nodes. A library of vnet/rscs line drivers grew up for JES2
that supported a canonical JES2 header format ... the nearest VNET/RSCS
node would have the specific line driver started that made sure all
JES2 headers sent to the attached JES2 system met the requirements of
that specific JES2 version/release.
Sporadically, there were still some (infamous) cases where JES2 systems
on one side of the world would cause JES2 systems on the other side of
the world to crash (one particularly well-known case was JES2 systems
in san jose crashing JES2/MVS systems in hursley). misc. past posts
mentioning hasp &/or jes2
http://www.garlic.com/~lynn/subtopic.html#hasp
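
a sketch of the canonical-header idea (hypothetical structures and
field names ... not the actual JES2/NJE header layout or the real
line-driver code): the boundary driver parses whatever header format
arrives, maps it into a canonical internal form, and re-emits it in
exactly the layout the attached JES2 release expects, so an unexpected
field never reaches JES2:

#include <string.h>
#include <stdint.h>

/* canonical header form carried inside the network */
struct canon_hdr {
    char     origin[8];
    char     dest[8];
    uint16_t job_class;
    uint16_t priority;
};

/* each supported JES2 release gets its own pack/unpack pair; the
 * boundary node starts the driver matching its attached JES2 system */
struct jes2_dialect {
    size_t hdr_len;       /* release-specific header length */
    void (*unpack)(const uint8_t *raw, struct canon_hdr *out);
    void (*pack)(const struct canon_hdr *in, uint8_t *raw);
};

/* forward a header arriving off the network to the attached JES2:
 * whatever release built it, JES2 only ever sees its own layout */
void forward_to_jes2(const uint8_t *raw,
                     const struct jes2_dialect *from,
                     const struct jes2_dialect *to, uint8_t *out)
{
    struct canon_hdr h;
    memset(&h, 0, sizeof h);  /* fields a release lacks default safely */
    from->unpack(raw, &h);
    to->pack(&h, out);
}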

Another scenario: around 1980 there was some work on load-balancing
offload between STL/bld90 and Hursley (since they were offset by a full
shift). the test was between two jes2 systems (carefully checked to be
at compatible release/version) ... over a double-hop 56kbit satellite
link (i.e. up from the west coast to a satellite over the us, down to
the east coast, up to a satellite over the atlantic, down to the UK).
JES2 couldn't establish a connection ... but all error indicators were
clean. So finally it was suggested to try the link between two vnet
systems. The link came up and ran with no problem.

The person overseeing the operation was extremely sna/vtam
indoctrinated, so the first reaction was that whatever had caused the
problem went away. So it was time to switch the link back between the
JES2 systems ... it didn't work. Several more switches were made ... it
always ran between the VNET systems, never between the JES2 systems.
The person overseeing the operation finally declared that the link
actually had severe error problems, but that the primitive VNET drivers
weren't seeing them ... and only the advanced VTAM error analysis was
recognizing all the errors.

it turned out the problem was the propagation delay of the double-hop
satellite roundtrip (four hops, 44k miles per hop: 22k up, 22k down)
... which VNET tolerated and vtam/jes2 didn't (not only was the
majority of the internal network not jes2 ... it was also not
sna/vtam).
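
the arithmetic (mine, using the figures above, not from the original
post) shows how far outside terrestrial assumptions that link was:

#include <stdio.h>

/* back-of-the-envelope roundtrip propagation delay for the double-hop
 * satellite link, using the post's figures of four 44k-mile hops */
int main(void)
{
    const double miles_per_hop = 44000.0;   /* 22k up + 22k down      */
    const double hops          = 4.0;       /* double-hop, roundtrip  */
    const double c_miles_sec   = 186282.0;  /* speed of light         */

    printf("roundtrip propagation delay: %.2f seconds\n",
           hops * miles_per_hop / c_miles_sec);
    /* prints ~0.94 seconds ... orders of magnitude beyond a
     * terrestrial line, plausibly enough to trip protocol timing
     * that assumed line-of-sight delays */
    return 0;
}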
