[EMAIL PROTECTED] wrote:
> in the early 70s, the stuff for page-map support of cms filesystem
> (originally on cp67:
> http://www.garlic.com/~lynn/subtopic.html#mmap

one issue with chained requests for cms ril3 operation ... was that the
cms filesystem (dating back to 1965) used single block allocation, one
block at a time, and didn't have any semantics for supporting explicit
contiguous allocation. the standard cms filesystem multi-block read was
on the off chance that sequential allocation of individual file blocks
accidentally happened to be sequential on disk (which might happen if
all files were sequentially dumped to tape and all files then erased
... creating a clean filesystem, and then the files sequentially
reloaded). so i added some new allocation semantics that could create
an advisory request for multiple, physically contiguous blocks. this
helped with being able to do multi-block chained i/o operations.
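a minimal sketch of what such advisory contiguous allocation might look like over a free-block bitmap ... all names and structure here are illustrative, not the actual cms code: try for a contiguous run, and fall back to the original scattered single-block behavior when no run is available.

```python
# hypothetical sketch: advisory contiguous allocation over a free-block map.
# not the actual cms implementation -- purely illustrative.

def alloc_contiguous(free, want):
    """try to find `want` contiguous free blocks and return their indices;
    fall back to scattered single-block allocation (the original cms
    behavior) when no contiguous run exists -- hence "advisory"."""
    run = []
    for i, is_free in enumerate(free):
        if is_free:
            run.append(i)
            if len(run) == want:
                break
        else:
            run = []                       # run broken by allocated block
    if len(run) < want:                    # advisory: fall back to scatter
        run = [i for i, f in enumerate(free) if f][:want]
    for i in run:
        free[i] = False                    # mark the blocks allocated
    return run
```

a contiguous result is what lets the caller issue one multi-block chained i/o instead of `want` separate single-block transfers.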

i also used a variation on the page mapped interface in the 80s for
doing a rewrite of the vm370 spool file system ... somewhat as part
of the hsdt project
http://www.garlic.com/~lynn/subnetwork.html#hsdt

the base spool filesystem was written in assembler as part of the
kernel ... it provided a "high-speed" interface for applications that
involved synchronously moving 4k blocks back and forth across the
kernel line.  this synchronous character became a problem for vnet/rscs
when trying to support lots of high-speed links. a heavily loaded
spool file system might be doing 40-50 4k transfers a second. however,
because it was a synchronous api, rscs/vnet was non-runnable during
the actual disk transfers ... and could never have more than one
request active at a time.  as a result rscs/vnet might only get 5-6 4k
spool transfers per second (competing with all the other uses of the
spool system).

so my objectives for the hsdt (high speed data transfer) spool file
system (SFS) rewrite were:

1) move the implementation from the kernel to a virtual address space
2) implement in pascal/vs rather than assembler
3) overall pathlength of the new pascal-based implementation running
in a virtual address space should be less than that of the existing
assembler kernel implementation
4) support contiguous allocation
5) support multiple block transfer requests
6) support asynchronous transfer requests

so the cp kernel needed several modifications ... first it had to be
able to come up w/o having a spool system active at initial boot
(contrary to what it was accustomed to), be able to activate the
spooling subsystem for managing spool areas ... and handle spooling
activity by making up-calls into the spool processor address space.
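the restructuring can be sketched roughly as follows ... a kernel that boots with no spool service registered, lets a spool server (running in its own virtual address space) attach later, and turns spool requests into up-calls. class and method names are my illustration, not the actual cp interfaces.

```python
# illustrative sketch of the kernel restructuring described above:
# boot without spool, attach the spooling subsystem later, and route
# spool activity via up-calls into the spool server's address space.

class Kernel:
    def __init__(self):
        self.spool = None                 # system comes up w/o spool active

    def attach_spool(self, server):
        self.spool = server               # activate the spooling subsystem

    def spool_request(self, req):
        if self.spool is None:
            raise RuntimeError("spooling subsystem not active")
        return self.spool.upcall(req)     # up-call into server address space
```

the point of the split is that kernel boot no longer depends on any spool resource being available.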

an annoyance in the existing implementation was that all spool areas
were treated as one large resource ... all spool resources had to be
available ... or the system didn't come up. the kernel now had to be
able to operate independently of the spool resource. so while i was at
it, i added some extended integrity and availability. each physical
spool area could essentially be independently activated/deactivated
(varied on/off). there was an overall checkpoint/warm start facility
...  however there was additional information added to spool records
...  so that if checkpoint and warm start information was missing ... it
was possible for the spooling subsystem to sequentially read a
physical area (it could generate page mapped requests for 150 4k
blocks at a time ... and the kernel might even chain these into a
single physical i/o, aka if it happened to be a 3380 cylinder) and
recover all allocated spool files (and nominally do it significantly
faster than the existing cp checkpoint process ... which sort of had
starting records for each file ... but then had to sequentially
follow a chain of records, one read at a time). if warm start
information wasn't available ... the large sequential physical read
tended to be significantly faster than the one at a time, checkpoint
scatter read.
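the scan-based recovery path can be sketched like this ... read the whole physical area in large multi-block chunks and rebuild the file set from the self-describing information added to each record. the record layout and names here are assumptions for illustration only.

```python
# illustrative sketch of recovery-by-sequential-scan: instead of following
# each file's record chain one read at a time, read the physical area in
# big multi-block requests (the text mentions 150 4k blocks per request)
# and regroup records by the file they claim to belong to.

def recover_by_scan(area, blocks_per_io=150):
    """`area` models a spool area as a list of 4k blocks; allocated
    records are dicts carrying a self-describing 'fileid' tag, free
    blocks are None. returns {fileid: [records...]}."""
    files = {}
    for start in range(0, len(area), blocks_per_io):
        chunk = area[start:start + blocks_per_io]   # one chained i/o
        for rec in chunk:
            if rec is not None:
                files.setdefault(rec["fileid"], []).append(rec)
    return files
```

each pass over the area costs len(area)/150 physical i/os instead of one i/o per record, which is the source of the speedup over the chain-following checkpoint read.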

the standard kernel spool implementation had sequentially chained
control blocks representing each file. for a large active spool system,
the implementation spent significant pathlength running the sequential
chains. the pascal implementation used library routines for hash table
and red/black tree management of all the spool file control
blocks. this tended to more than offset any pathlength lost moving the
function into a virtual address space.
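the data-structure change above is the classic O(n) chain walk versus O(1) expected hash lookup trade ... a minimal sketch (names illustrative, not the actual pascal library routines):

```python
# illustrative comparison of the two lookup strategies described above.

def chain_find(chain, spoolid):
    """the old kernel way: walk a sequentially chained list of spool
    file control blocks -- cost grows with the number of files."""
    node = chain
    while node is not None:
        if node["id"] == spoolid:
            return node
        node = node["next"]
    return None

class SpoolIndex:
    """the rewrite's way: keep control blocks in a hash table so a
    lookup is one probe regardless of how many files are spooled."""
    def __init__(self):
        self._by_id = {}

    def add(self, spoolid, ctl):
        self._by_id[spoolid] = ctl

    def find(self, spoolid):
        return self._by_id.get(spoolid)
```

on a spool system with thousands of files, replacing every chain walk with a hash probe is exactly the kind of pathlength saving that can offset the cost of the kernel/address-space crossing.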

the high-speed spool api was extended to allow specifying multiple 4k
blocks for both reads & writes ... and enhanced to allow the api to
operate asynchronously. a single full-duplex 56kbit link could mean
up to around 2 4k transfers per sec (1 4k transfer in each
direction). several loaded 56kbit links could easily run into a spool
file thruput bottleneck on heavily loaded systems (rscs/vnet possibly
being limited to 5-6 4k records/sec).
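the link arithmetic above, spelled out ... a 4k record is 32768 bits, so a 56kbit/sec link carries a bit under two records per second each direction. the helper below is my illustration, not anything from the original implementation.

```python
# back-of-envelope 4k-record capacity of a link, per direction.

def records_per_sec(bits_per_sec, block_bytes=4096):
    return bits_per_sec / (block_bytes * 8)   # 4k block = 32768 bits

per_dir_56k = records_per_sec(56_000)         # ~1.7/sec each direction,
                                              # i.e. roughly 2/sec full duplex
per_dir_t1 = records_per_sec(1_500_000)       # ~45/sec each direction; the
                                              # text scales the rounded 56k
                                              # figure by 30x to get ~60/sec
                                              # total for a full-duplex T1
```

either way of doing the arithmetic puts a single full-duplex T1 well past the 5-6 records/sec the synchronous api could deliver.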

the hsdt machine had several channel connections to other machines in
the local area and multiple full-duplex T1 (1.5mbits/sec) connections. a
single T1 has about 30 times the thruput of a 56kbit link ... which in
turn increases the two 4k records per second requirement to 60 4k
records per second (for a single full-duplex T1 link). an hsdt
vnet/rscs node might reasonably be expected to have thruput capacity
of several hundred 4k records/sec (design point thruput possibly one
hundred times that of a nominal rscs/vnet node).

hsdt operated three sat. stations, san jose, austin, and yorktown ...
with each hsdt node having multiple channel and T1 links to other
machines in the local area. the sat. bandwidth was initially configured
as multiple T1 full-duplex links between the three nodes. however, we
designed and were building a packet broadcast operation. the earth
stations were TDMA so that each station had specific times when it
could transmit. the transmit bursts could then be configured to
simulate full-duplex T1 operation. the packet switch-over was to
eliminate the telco T1 emulation and treat it purely as a packet
broadcast architecture (somewhat analogous to t/r lan operation but w/o
the time-delay of token passing, since the bird in the sky provided
clock synchronization for tdma operation).

the san jose hsdt node was in bldg. 29, but there were high-speed
channel links to other machines in bldg. 29 and telco T1 links to
other machines in the san jose area ... besides the sat. links.

one of the challenges was that all corporate transmission had to be
encrypted. the internal network had been larger than the whole
arpanet/internet from just about the beginning until sometime mid-85.
http://www.garlic.com/~lynn/subnetwork.html#internalnet

arpanet was about 250 nodes at the time it converted to tcp/ip
on 1/1/83. by comparison, later that year, the internal network
passed 1000 nodes ... minor reference
http://www.garlic.com/~lynn/internet.htm#22

note the size of the internal network does not include bitnet/earn
nodes ... which were univ. nodes using rscs/vnet technology (and was
about the same size as arpanet/internet in the period. misc. posts
mentioning bitnet &/or earn:
http://www.garlic.com/~lynn/subnetwork.html#bitnet

about the time we were starting hsdt, the claim was that the internal
network had over half of all link encrypters in the world. moving from
emulated telco processing for hsdt also eliminated the ability to
use link encrypters ... so we had to design packet-based encryption
hardware that potentially was changing key on every packet ... and
aggregate thruput hit multiple megabytes/second. we further
complicated the task by establishing an objective that the card could
be manufactured for less than $100 (using off-the-shelf chips ...  and
still support mbyte/sec or above thruput). we also wanted to be able to
use it in lieu of standard link encrypters, which were running
something like $5k-$6k per box.
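the per-packet rekeying idea can be sketched with modern software primitives ... the actual hsdt hardware design isn't described in the text, so everything below (key derivation from a sequence number, the xor keystream stand-in for a real cipher) is purely illustrative.

```python
# illustrative sketch only: derive a fresh key for every packet from a
# shared master key and the packet sequence number, so the key changes
# on each packet. NOT the hsdt hardware design, and the xor "cipher"
# is a stand-in to keep the example self-contained.
import hashlib
import hmac

def packet_key(master_key: bytes, seq: int) -> bytes:
    return hmac.new(master_key, seq.to_bytes(8, "big"),
                    hashlib.sha256).digest()

def encrypt_packet(master_key: bytes, seq: int, payload: bytes) -> bytes:
    key = packet_key(master_key, seq)      # fresh key per packet
    stream = (key * (len(payload) // len(key) + 1))[:len(payload)]
    return bytes(p ^ s for p, s in zip(payload, stream))
```

in a hardware realization the interesting constraints are the ones the text names: sustaining mbyte/sec-plus thruput while rekeying per packet, on a card built from off-the-shelf chips for under $100.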

the other piece was that hsdt nodes were making a lot of use of
HYPERchannel hardware .... so when the initial mainframe tcp/ip
implementation was done in pascal ... i added rfc 1044 support. the
base product shipped with the 8232 controller, which had some
idiosyncrasies; that support would consume a whole 3090 processor
getting 44kbytes/sec. by contrast, in some tuning tests at cray
research, we got 1mbyte/sec channel speed between a 4341-clone and a
cray machine ... using only a modest amount of the 4341 processor.
http://www.garlic.com/~lynn/subnetwork.html#1044

having drifted this far ... i get to also mention that we weren't
allowed to bid on the nsfnet backbone (the arpanet change over to
tcp/ip protocol on 1/1/83 was a major technology milestone for the
internet.  however, the birth of modern internetworking ... i.e. the
operational prelude to the modern internet ... was the deployment of
the nsfnet backbone ... supporting internetworking of multiple
networks). however, my wife appealed to the director of nsf and got a
technical audit ...  which concluded that what we had running was at
least five years ahead of all nsfnet bids to build something
new. minor recent post on the subject:
http://www.garlic.com/~lynn/2005q.html#46 Intel strikes back with a
parallel x86 design

past posts mentioning SFS ... spool file system rewrite (as opposed to
that other SFS that came later ... shared file system):
http://www.garlic.com/~lynn/99.html#34 why is there an "@" key?
http://www.garlic.com/~lynn/2000b.html#43 Migrating pages from a paging
device (was Re: removal of paging device)
http://www.garlic.com/~lynn/2001n.html#7 More newbie stop the war here!
http://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
http://www.garlic.com/~lynn/2003b.html#33 dasd full cylinder transfer
(long post warning)
http://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape
format (long post)
http://www.garlic.com/~lynn/2003b.html#46 internal network drift (was
filesystem structure)
http://www.garlic.com/~lynn/2003g.html#27 SYSPROF and the 190 disk
http://www.garlic.com/~lynn/2003k.html#26 Microkernels are not "all or
nothing". Re: Multics Concepts For
http://www.garlic.com/~lynn/2003k.html#63 SPXTAPE status from REXX
http://www.garlic.com/~lynn/2004g.html#19 HERCULES
http://www.garlic.com/~lynn/2004m.html#33 Shipwrecks
http://www.garlic.com/~lynn/2004p.html#3 History of C
http://www.garlic.com/~lynn/2005j.html#54 Q ALLOC PAGE vs. CP Q ALLOC vs
ESAMAP
http://www.garlic.com/~lynn/2005j.html#58 Q ALLOC PAGE vs. CP Q ALLOC vs
ESAMAP
http://www.garlic.com/~lynn/2005n.html#36 Code density and performance?

past post mentioning link encrypters
http://www.garlic.com/~lynn/aepay11.htm#37 Who's afraid of Mallory Wolf?
http://www.garlic.com/~lynn/aadsm14.htm#0 The case against directories
http://www.garlic.com/~lynn/aadsm14.htm#1 Who's afraid of Mallory Wolf?
http://www.garlic.com/~lynn/aadsm18.htm#51 link-layer encryptors for
Ethernet?
http://www.garlic.com/~lynn/99.html#210 AES cyphers leak information
like sieves
http://www.garlic.com/~lynn/2002b.html#56 Computer Naming Conventions
http://www.garlic.com/~lynn/2002d.html#9 Security Proportional to Risk
(was: IBM Mainframe at home)
http://www.garlic.com/~lynn/2002d.html#11 Security Proportional to Risk
(was: IBM Mainframe at home)
http://www.garlic.com/~lynn/2002j.html#52 "Slower is more secure"
http://www.garlic.com/~lynn/2003e.html#34 Use of SSL as a VPN
http://www.garlic.com/~lynn/2003e.html#36 Use of SSL as a VPN
http://www.garlic.com/~lynn/2003i.html#62 Wireless security
http://www.garlic.com/~lynn/2004g.html#33 network history
http://www.garlic.com/~lynn/2004g.html#34 network history
http://www.garlic.com/~lynn/2004p.html#44 IBM 3614 and 3624 ATM's
http://www.garlic.com/~lynn/2004p.html#51 IBM 3614 and 3624 ATM's
http://www.garlic.com/~lynn/2004p.html#55 IBM 3614 and 3624 ATM's
http://www.garlic.com/~lynn/2004q.html#57 high speed network, cross-over
from sci.crypt
http://www.garlic.com/~lynn/2005c.html#38 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2005r.html#10 Intel strikes back with a
parallel x86 design

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html