John A Pershing Jr <persh...@alum.mit.edu> writes:
> Yeah, and it was a poor implementation. Talk about pathlengths!  I
> worked with a couple guys in Endicott in the early '90s on a
> hyper-optimized implementation of SNA on VM.  We left VTAM up there in
> its virtual machine, and let it do its PU5 stuff like session initiation
> and termination.  However, we built the routing function into CP: once a
> session was set up, data would flow from the wire into CP and be routed
> straight up to the target guest, do not pass VTAM, do not collect 100K
> instructions.  Outbound was even faster, since we transmitted the data
> straight from the guest's memory, avoiding a copy into CP's memory.
> It's been a long time, but I seem to remember that our send/receive
> pathlength was around 1000 instructions, which matched the
> performance of the "native" IUCV-based networking that was being
> introduced at the time.  Alas, this effort was cancelled.  Sigh...

re:
http://www.garlic.com/~lynn/2009l.html#13 SNA: conflicting opinions
http://www.garlic.com/~lynn/2009l.html#15 SNA: conflicting opinions
http://www.garlic.com/~lynn/2009l.html#17 SNA: conflicting opinions
http://www.garlic.com/~lynn/2009l.html#43 SNA: conflicting opinions

part of the issue was that after future system was canceled ... misc.
past posts
http://www.garlic.com/~lynn/submain.html#futuresys

there was a mad rush to get products back into the 370 product pipeline
(since future system was going to be completely different & replace
360/370 ... there was no point in putting additional effort into
370). part of that involved the mvs group convincing corporate that the
vm370 product should be killed, the vm370 development group shut down,
and all the people moved to POK to support mvs/xa ... or otherwise
mvs/xa wouldn't be able to meet its schedule.

eventually endicott managed to convince corporate to acquire the vm370
product mission ... but they had to reconstitute a development group
from scratch.

IUCV was involved in all that. There had been a superset of IUCV
deployed on internal systems for quite some time ... but the new group
in Endicott decided to re-invent their own. Part of this required IUCV &
related mechanisms to go thru a series of enhancements and product
releases before they came close to matching the functionality of the
original internal implementation.

One of my hobbies was doing my own internal product release & support
(including HONE) for internal datacenters. some old email discussing
move from cp67 to vm370 of lots of my enhancements (mostly during
the future system period)
http://www.garlic.com/~lynn/2006v.html#email731212
http://www.garlic.com/~lynn/2006w.html#email750102
http://www.garlic.com/~lynn/2006w.html#email750430

including picking up and merging the (IUCV superset) special message
facility (from the vm370 datacenter in POK). The original special
message facility had been implemented on cp67 at the Pisa science
center and then later ported to vm370.

i had to do something similar in the early 80s when I was doing the
HSDT project
http://www.garlic.com/~lynn/subnetwork.html#hsdt

RSCS was basically limited to 56kbit since you couldn't get a 37xx
controller with more than that. RSCS also had some amount of
serialization in the vm370 kernel interface ... that could limit RSCS
aggregate thruput to 5-6 4k records/second.

In HSDT, I had multiple (full-duplex) T1 and higher speed links ... and
using RSCS for some of the links, could run into thruput bottlenecks
because of the serialization mechanism (maybe 20kbytes-30kbytes/sec). I
needed 300kbytes/sec per full-duplex T1 link (and other links required
even higher thruput). So in the early 80s, I needed aggregate thruput
of several mbytes/sec.
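the gap between the RSCS ceiling and the HSDT requirement can be worked out as back-of-the-envelope arithmetic (a hypothetical helper, not anything from the original implementation):

```python
def kbytes_per_sec(records_per_sec, record_bytes=4096):
    """Aggregate thruput for fixed-size records, in kbytes/sec."""
    return records_per_sec * record_bytes / 1000

# RSCS serialization ceiling: 5-6 4k records/second
rscs_low = kbytes_per_sec(5)     # 20.48 kbytes/sec
rscs_high = kbytes_per_sec(6)    # 24.576 kbytes/sec

# a T1 link runs at 1.544 mbits/sec, i.e. ~193 kbytes/sec each
# direction; driving both directions of a full-duplex T1 needs
# on the order of 300+ kbytes/sec per link
t1_bits_per_sec = 1_544_000
t1_one_way = t1_bits_per_sec / 8 / 1000      # 193 kbytes/sec
t1_full_duplex = 2 * t1_one_way              # 386 kbytes/sec

# roughly an order-of-magnitude-plus shortfall per T1 link
shortfall = t1_full_duplex / rscs_high
print(rscs_low, rscs_high, t1_full_duplex, round(shortfall))
```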

This is a recent post going into some detail of replacing that whole
serialization mechanism
http://www.garlic.com/~lynn/2009h.html#63 Operating Systems for Virtual Machines

part of the effort was lifting a whole part of the vm370 kernel
implemented in assembler ... moving it into a virtual address space
... re-implementing it in vs/pascal ... allowing the existing
synchronous API to work ... as well as allowing asynchronous mode of
operation ... and making it run 10-100 times faster (including
eliminating all buffer copies).
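the structural idea — an asynchronous core with the legacy synchronous API preserved as a thin blocking wrapper, and no buffer copies — can be sketched as follows (in Python rather than vs/pascal; all names are hypothetical, not from the actual rework):

```python
import queue
import threading

class AsyncTransport:
    """Asynchronous core: callers hand in a buffer and a completion
    callback; a worker drains the queue and fires the callback."""

    def __init__(self):
        self._work = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def send_async(self, buf, on_done):
        # no copy: the queue holds a reference to the caller's buffer
        self._work.put((buf, on_done))

    def _worker(self):
        while True:
            buf, on_done = self._work.get()
            # ... transmit buf on the wire here ...
            on_done(len(buf))

def send_sync(transport, buf):
    """Legacy synchronous API, layered over the async core:
    issue the async send, then block until the callback fires."""
    done = threading.Event()
    result = []

    def on_done(nbytes):
        result.append(nbytes)
        done.set()

    transport.send_async(buf, on_done)
    done.wait()
    return result[0]

t = AsyncTransport()
print(send_sync(t, b"hello"))   # existing callers see blocking behavior
```

new callers get the asynchronous interface directly (and with it the thruput win), while existing synchronous callers keep working unchanged.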

recent post mentioning 37xx 56kbits, fat pipes, and other issues with
the communication group (in the early & mid-80s)
http://www.garlic.com/~lynn/2009l.html#24 August 7, 1944: today is the 65th Anniversary of the Birth of the Computer

from above ...

In that time-frame, the hsdt project was getting some equipment built to
spec ... one friday before I was to leave for a trip to the other side
of the pacific ... somebody in the communication group distributed an
announcement for a new online computer discussion group on the subject
of "high-speed" communication ... that included the following
definition:

low-speed               <9.6kbits
medium-speed            19.2kbits
high-speed              56kbits
very high-speed         1.5mbits

the following Monday morning, on the wall of a conference room on the
other side of the pacific was:

low-speed               <20mbits
medium-speed            100mbits
high-speed              200-300mbits
very high-speed         >600mbits

... snip ...

-- 
40+yrs virtualization experience (since Jan68), online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html