The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.
hmerr...@jackhenry.com (Hal Merritt) writes:
> Something to consider, however, is that we found email delivery
> of critical reports to customers to be unacceptable. Our side worked
> perfectly, but we found full mailboxes, people out of the office,
> reports too large, accidental deletion, broken PCs, etc, etc,
> etc. That is, we had no control over the far end and yet we still got
> beat up when the reports were delayed/lost.
>
> And that was before the requirement to encrypt sensitive data.

old email from long ago and far away discussing PGP-like email operation:
http://www.garlic.com/~lynn/2007d.html#email810506
http://www.garlic.com/~lynn/2006w.html#email810515

I had done a (cms) rexx exec early on to handle smtp/822 email that was distributed internally and went thru several generations. the internal network was mostly vm/cms machines ... some old posts
http://www.garlic.com/~lynn/subnetwork.html#internalnet
... and it was larger than the arpanet/internet from just about the beginning until possibly late '85 or early '86. there were some MVS machines ... but there were significant deficiencies in the MVS networking implementation ... including not being able to address all the nodes in the network ... and it would trash network traffic if it didn't recognize either the origination or the destination (as a result, MVS network nodes were carefully restricted to "edge" nodes ... to minimize the damage they would do to network traffic).

the 80s were still a period when governments viewed encryption with lots of suspicion. corporate had a requirement for at least link encryptors on any links that left corporate grounds (between corporate locations; there was some observation that in the mid-80s, the internal network had over half of all the link encryptors in the world). there were periodic battles with various government agencies around the world over installing link encryptors on links running between two different corporate locations in different countries.
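a minimal sketch of the kind of 822-header handling an exec like the one described above would do ... this is not the original rexx exec (which is long gone); Python's stdlib email parser stands in for it, and the sample message, addresses, and node names are invented for illustration:

```python
# hypothetical example: parse a simple RFC 822-style message and pick
# out the destination header, the way a store-and-forward mail handler
# on a network node would before routing/delivering it.
from email.parser import Parser

raw = """\
From: lynn@vm1.example.com
To: staff@vm2.example.com
Subject: weekly report
Date: Wed, 6 May 1981 09:15:00 -0700

report body goes here
"""

msg = Parser().parsestr(raw)

# route on the destination header
dest = msg["To"]
body = msg.get_payload().strip()

print(dest)  # staff@vm2.example.com
print(body)  # report body goes here
```

the real exec of course also had to deal with the cms spool-file interface and the internal network's own routing headers, neither of which is shown here.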
bitnet (where this ibm-main mailing list originated) used similar technology to that of the internal network. however, the vm/cms network design was layered and could have drivers that talked to other infrastructures (including MVS). for much of the bitnet period, the standard vm/cms network product had stopped shipping the native drivers (which had higher thruput and performance ... even over the exact same telecommunication hardware), and only shipped MVS network drivers.

One of the problems with the non-layered MVS network design ... was that traffic between different MVS systems at different release levels could result in MVS system failures (forcing reboot). There was an infamous scenario of traffic from some internal San Jose MVS systems resulting in MVS system failures in Hursley. They then tried to blame it on the Hursley vm/cms network machines. The issue was that MVS systems were so fragile and vulnerable ... that lots of software was developed for the vm/cms MVS drivers ... to rewrite control information into a format acceptable to each specific directly connected MVS system (and since the Hursley MVS systems were crashing ... it was obvious that the vm/cms network nodes were at fault for not preventing the MVS failures).

the internal network had a high-growth year in '83 ... when it passed 1000 nodes (at a time when arpanet/internet was passing 255 nodes) ... old post listing locations around the world that added one or more new nodes during 1983:
http://www.garlic.com/~lynn/2006k.html#8 Arpa address

past posts mentioning bitnet (/earn ... the european version of bitnet)
http://www.garlic.com/~lynn/subnetwork.html#bitnet

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html