[email protected] (Shmuel Metz, Seymour J.) writes:
> There's an error in that article and in the RSCS article; RSCS uses
> connection-oriented protocols, not connectionless protocols.

re:
http://www.garlic.com/~lynn/2012j.html#83 Gordon Crovitz: Who Really Invented the Internet?
http://www.garlic.com/~lynn/2012j.html#84 Gordon Crovitz: Who Really Invented the Internet?
http://www.garlic.com/~lynn/2012j.html#87 Gordon Crovitz: Who Really Invented the Internet?
http://www.garlic.com/~lynn/2012j.html#88 Gordon Crovitz: Who Really Invented the Internet?
http://www.garlic.com/~lynn/2012j.html#89 Gordon Crovitz: Who Really Invented the Internet?


another part of the issue was that RSCS had native vnet drivers and then
NJI (hasp/jes2) drivers. during the period that BITNET was growing in
the mid-80s, they stopped shipping the native vnet drivers ... leaving
only the NJI drivers ... although the native vnet drivers continued to
be used on the internal network because they were much more efficient
... at least up until the change-over of the internal network to SNA in
the late 80s.

arpanet used IMPs for network nodes that did packet-based communication
... but the connected hosts ran a host-to-host end-to-end connection
protocol. In the a.f.c. thread it was pointed out that even by 1975 it
was recognized that this wasn't scaling. A comparison from the period
was a post-office analogy ... to get something from new york city to
fairbanks alaska required that all the post offices between NYC and
fairbanks be up and operational simultaneously ... which wasn't a
requirement for RSCS. RSCS traffic would eventually get from NYC to
fairbanks ... even if there was only intermittent connectivity between
the intermediate nodes (including if there was *never* full end-to-end
connectivity).
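the store-and-forward difference can be sketched in a few lines of
python (the node names and the one-link-at-a-time schedule here are
purely hypothetical -- this illustrates the principle, not RSCS code):

```python
# Minimal sketch: store-and-forward delivery where no two links are ever
# up at the same time -- the message still arrives, because each node
# queues traffic until its next-hop link comes up.
from collections import deque

def store_and_forward(route):
    # queues[node] holds traffic waiting at that node
    queues = {node: deque() for node in route}
    queues[route[0]].append("file for " + route[-1])
    hops = 0
    # bring up exactly one link at a time, round-robin -- there is
    # *never* simultaneous end-to-end connectivity
    while not queues[route[-1]]:
        up_link = hops % (len(route) - 1)        # only this link is up
        src, dst = route[up_link], route[up_link + 1]
        if queues[src]:
            queues[dst].append(queues[src].popleft())
        hops += 1
    return hops

# hypothetical intermediate nodes between NYC and fairbanks
print(store_and_forward(["NYC", "CHI", "SEA", "FAIRBANKS"]))  # -> 3
```

the arpanet-style requirement, by contrast, would be every link in the
route being up at once before anything moves at all.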

For lots of reasons, the internal network was larger than the
arpanet/internet from just about the beginning until either late '85 or
early '86 ... the internet growing and passing the internal network
primarily because of the switch-over to internetworking protocol on
1jan1983.

At the time of the 1983 switch-over there were approximately 100 IMP
nodes and possibly 255 hosts ... while the internal network was in the
process of passing 1000 nodes.

misc. past posts mentioning internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet
misc. past posts mentioning bitnet (&/or earn)
http://www.garlic.com/~lynn/subnetwork.html#bitnet

in the late 80s there was a lot of mis-information from the
communication group (not only about SNA's applicability to the nsfnet
backbone) involved in the justification for converting the internal
network to sna
http://www.garlic.com/~lynn/2006x.html#email870302
http://www.garlic.com/~lynn/2011.html#email870306

even though by that time it would have been much more efficient and
cost-effective to have converted the rscs drivers to tcp/ip (in much
the same way as was done for "bitnet-II").

the vm370 tcp/ip product was available ... even tho there were some
performance issues (limited to about 44kbytes/sec while using nearly a
whole 3090 processor) ... but I would shortly be doing the changes to
support rfc1044 ... and in some tuning tests at cray research got
channel thruput between a 4341 and a cray using only a modest amount of
the 4341 processor (possibly a 500 times improvement in bytes moved per
instruction executed) ... some past posts
http://www.garlic.com/~lynn/subnetwork.html#1044

note ... later the vm370 tcp/ip product was ported to mvs by adding
simulation for some of the vm370 functions.

a piece of a recent a.f.c. post about the requirement for doing
NJI drivers in RSCS:

Internally, between mostly campus hasp systems, they were running some
support that came from the triangle universities ("TUCC" in cols 68-71
of the source code). The implementation was intertwined with standard
HASP support and not cleanly layered ... and node definitions were done
by taking empty entries in the HASP pseudo-device table (the 255-entry
table used by hasp for pseudo unit-record devices ... a typical HASP
installation might have 60-80 entries in use ... so the TUCC code could
define up to 170-190 network nodes).

The VNET code had to be cleanly layered with gateway-like functionality
and support both native VNET drivers as well as gateway drivers that
would talk to HASP/JES2. As HASP/JES2 evolved, it became even more
convoluted ... since the HASP/JES2 network support code was so
intertwined with the rest of its operations ... traffic between two
different HASP/JES2 nodes at different releases could result in a
HASP/JES2 crash bringing down the whole operating system.

Internally, the VNET gateway function had to be expanded so that there
was a large library of HASP/JES2 drivers ... with the specific driver
started that corresponded to the HASP/JES2 level at the other end of
the link. It became the responsibility of the VNET HASP/JES2 drivers to
convert traffic into a canonical form and then translate it into the
specific form required by the HASP/JES2 on the other end of the link
(eventually HASP/JES2 systems couldn't be trusted to directly
communicate with each other, requiring intermediate VNET nodes
... unless the installation tightly synchronized all the release
levels).
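the canonical-form idea can be sketched as below (the record formats,
field names, and release names here are invented for illustration --
actual NJE headers were far more involved):

```python
# Minimal sketch (not actual VNET/NJE code) of the canonical-form
# gateway: each driver parses its peer's release-specific record into
# one release-neutral form, then re-emits it in whatever format the
# peer at the *other* end of the link understands.

def to_canonical(release, record):
    # parse a release-specific record into a release-neutral dict
    if release == "jes2_r4":            # hypothetical delimited format
        origin, dest, body = record.split("|", 2)
    else:                               # hypothetical fixed-width format
        origin, dest, body = record[:8].strip(), record[8:16].strip(), record[16:]
    return {"origin": origin, "dest": dest, "body": body}

def from_canonical(release, rec):
    # emit the canonical record in the target peer's format
    if release == "jes2_r4":
        return f"{rec['origin']}|{rec['dest']}|{rec['body']}"
    return f"{rec['origin']:<8}{rec['dest']:<8}{rec['body']}"

def gateway(in_release, out_release, record):
    # the intermediate VNET node sits between two mismatched peers
    return from_canonical(out_release, to_canonical(in_release, record))

print(gateway("hasp_v3", "jes2_r4", "NYCVM1  FAIRVM1 payload"))
# -> NYCVM1|FAIRVM1|payload
```

the point of the design is that adding a new release level means adding
one parse/emit pair, rather than an n-squared set of pairwise
translations between every release combination.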

The internal network also quickly exceeded the 170-190 HASP/JES2 node
limitation ... and the HASP/JES2 implementation would also discard
traffic if either the origin node or the destination node wasn't in its
local table. The combination of all these factors pretty much limited
HASP/JES2 to boundary nodes.
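the discard behavior amounts to something like this (node names and the
table are hypothetical -- a sketch of the described behavior, not
HASP/JES2 code):

```python
# A HASP/JES2 node drops traffic whose origin *or* destination is not
# in its own local node table -- fatal for transit traffic once the
# network has more nodes than the table can hold.
local_table = {"NYCVM1", "CHIVM1"}      # this node's known nodes

def accept(origin, dest):
    return origin in local_table and dest in local_table

print(accept("NYCVM1", "CHIVM1"))   # True  -> forwarded
print(accept("NYCVM1", "FAIRVM1"))  # False -> traffic silently discarded
```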

...

note that JES2 eventually did expand support to 999 nodes ... but that
was only after the internal network had passed 1000 nodes ... some
reference to internal network exceeding 1000 nodes in 1983 (also
referenced in the edson wiki entry):
http://www.garlic.com/~lynn/2006k.html#8

It was in the late 80s that the communication group was generating a
lot of mis-information to justify converting the internal network to
SNA ... as well as about SNA's applicability to the internet (as
previously mentioned). It was also in this period that a senior disk
engineer got a talk scheduled at the world-wide, internal-only annual
communication group conference ... and opened the talk with the
statement that the communication group was going to be responsible for
the demise of the disk division. The scenario was that the
communication group was attempting to preserve its dumb terminal (vtam)
paradigm (including the terminal emulation install base) and had a
stranglehold on the datacenter (strategic "ownership" of everything
that crossed the datacenter walls); the disk division was seeing the
leading edge of data fleeing the datacenter to more
distributed-computing-friendly platforms, reflected in the drop-off in
disk sales. The disk division had come up with several products to
address the opportunity ... which were constantly being vetoed by the
communication group.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN