[EMAIL PROTECTED] (Ron and Jenny Hawkins) writes:
> Native can have a lot of meanings depending on whether you are using SM
> between ESCD.
>
> If I recall correctly Hiperchannel was an emulated channel, and not a
> channel extender.

HYPERchannel was a 50 Mbit/sec LAN with many different kinds of
adapters. The adapters allowed connection of up to four (possibly
parallel) 50 Mbit/sec LAN interfaces.

It was from NSC, founded by Thornton (Cray and Thornton did the CDC
6600; Thornton left to do NSC, Cray left to do Cray Research). there
were adapters for a lot of different kinds of processors. there was
also an A51x adapter that emulated the ibm channel interface and
allowed connection of ibm mainframe controllers.

there were a couple of locations that built very early (original)
nas/san types of implementations using an ibm mainframe as the storage
controller ... and some even supported a hierarchical filesystem
... where the ibm mainframe handled staging from tape to disk ... and
then passed back information to (possibly) a cray machine ... which
then used hyperchannel connectivity to pull data directly off ibm
mainframe disk (the mainframe handled control and set up prebuilt CCWs
in the memory of the A51x adapter ... self-modifying CCWs weren't
supported; other processors could then get authorization to use
specific prebuilt CCWs).
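
as a rough illustration of what "prebuilt CCWs in adapter memory"
amounts to ... a minimal C sketch assuming a format-0 CCW layout; the
struct, helper name, addresses, and printf stand-in are mine for
illustration, not the actual NSC/driver code:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* format-0 CCW: 8 bytes -- one command byte, a 24-bit data address,
   a flag byte, and a 16-bit byte count */
struct ccw0 {
    uint8_t  cmd;        /* channel command code (0x02 = basic READ) */
    uint8_t  addr[3];    /* 24-bit real storage address, big-endian */
    uint8_t  flags;      /* chaining/SLI flag bits */
    uint8_t  unused;
    uint16_t count;      /* byte count for the transfer */
};

/* prebuild one read CCW in (a stand-in for) adapter memory */
static void build_read_ccw(struct ccw0 *ccw, uint32_t data_addr, uint16_t len)
{
    memset(ccw, 0, sizeof *ccw);
    ccw->cmd     = 0x02;                    /* READ */
    ccw->addr[0] = (data_addr >> 16) & 0xff;
    ccw->addr[1] = (data_addr >> 8)  & 0xff;
    ccw->addr[2] = data_addr & 0xff;
    ccw->count   = len;
}

int main(void)
{
    struct ccw0 adapter_mem[4];   /* stand-in for A51x adapter memory */

    /* mainframe side: build the channel program once; since the adapter
       didn't support self-modifying CCWs the program is fixed, and other
       hosts could only be authorized to *invoke* specific prebuilt
       entries */
    build_read_ccw(&adapter_mem[0], 0x010000, 4096);

    printf("prebuilt CCW: cmd=0x%02x count=%u\n",
           adapter_mem[0].cmd, (unsigned)adapter_mem[0].count);
    return 0;
}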

hyperchannel also had T1 (and later T3) telco lan-extender boxes.
when used in combination with A51x adapters for accessing ibm mainframe
controllers, they could effectively be used as a form of channel
extension.

i did a project for the santa teresa lab when they were moving
something like 300 IMS developers to an offsite location. they
considered the idea of remote 3270 support hideous compared to what
they had been used to with local channel-attached 3270 vm/cms support.
a HYPERchannel configuration was created, emulating local channel
attachment over high-speed telco links between STL/bldg. 90 and the
off-site bldg. 96/97/98 complex.

There was already a T3 collins digital radio between stl/bldg. 90 and
the roof of bldg. 12 on the main san jose plant site. the roof of
bldg. 12 had line-of-sight to the roof of the off-site bldg. 96/97/98
complex. a T1 subchannel was created on the bldg. 90/12 microwave link
... and then a dedicated microwave link was put in between bldg. 12 &
96 ... with a patch-thru in bldg. 12.

the relocated ims developers then had local "channel-attached" 3270s
at the remote site ... using the HYPERchannel channel extension ...
with apparently local 3270 response. there was an unanticipated
side-effect of replacing the 3274 controllers that had been directly
attached to the ibm channel with HYPERchannel A220 adapters. The
system was a fully configured 168-3 with a full set of 16 channels,
with a mixture of 3830 and 3274 controllers spread across all the
channels. It turned out that the A220 adapters had significantly lower
channel busy time/overhead doing the same operations that had been
performed by the direct channel-attached 3274 controllers. Replacing
the 3274 controllers with A220 adapters and remoting the 3274
controllers behind the A220/A51x combination ... reduced channel busy
overhead (for 3270 terminal i/o) and resulted in an overall system
thruput increase of 10-15 percent.

misc. past hyperchannel and/or hsdt postings
http://www.garlic.com/~lynn/subnetwork.html#hsdt

the configuration was later replicated for a similar relocation of a
couple hundred people in boulder to an adjacent building across a
highway. for this installation, T1 infrared optical modems mounted on
the roofs of the two bldgs were used.

there is a funny story from several years later involving 3090s. i had
chosen to reflect *channel check* on an unrecoverable T1 error, where
the operating system then recorded the error and invoked various kinds
of higher-level recovery. this was perpetuated into a number of later
HYPERchannel driver implementations. when 3090s first shipped, they
expected to see something like a total of 3-5 *channel checks*
aggregate across all machines over the first year. something closer to
20 *channel checks* was recorded. investigation eventually narrowed it
to the HYPERchannel driver. I got contacted, and after some amount of
research ... it turned out that reflecting IFCC (*interface control
check*) instead resulted in effectively the same sequence of recovery
operations. a few retellings of the 3090 cc/ifcc hyperchannel story:
http://www.garlic.com/~lynn/2004j.html#19 Wars against bad things
http://www.garlic.com/~lynn/2005e.html#13 Device and channel
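
a minimal sketch of the driver change described above ... the names
and the printf reporting hook are my assumptions for illustration (the
real drivers obviously posted status to the channel/error-recording
interfaces, not stdout):

#include <stdio.h>

enum chan_error { CHANNEL_CHECK, INTERFACE_CONTROL_CHECK };

/* hypothetical hook standing in for however the driver posted error
   status for the operating system to record and recover from */
static void reflect_error(enum chan_error e)
{
    printf("reflecting %s to the OS error-recovery path\n",
           e == CHANNEL_CHECK ? "channel check" : "IFCC");
}

/* both error types drove effectively the same OS recovery sequence,
   but reflected channel checks showed up in the 3090 channel-check
   counts -- hence the change to reflect IFCC instead */
static void handle_unrecoverable_t1_error(void)
{
    reflect_error(INTERFACE_CONTROL_CHECK);   /* was CHANNEL_CHECK */
}

int main(void)
{
    handle_unrecoverable_t1_error();
    return 0;
}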

some past posts specifically mentioning Thornton:
http://www.garlic.com/~lynn/2002i.html#13 CDC6600 - just how powerful a machine was it?
http://www.garlic.com/~lynn/2005k.html#15 3705
http://www.garlic.com/~lynn/2005m.html#49 IBM's mini computers--lack thereof
http://www.garlic.com/~lynn/2005r.html#14 Intel strikes back with a parallel x86 design

a few past posts mentioning the boulder installation and infrared modems
http://www.garlic.com/~lynn/94.html#23 CP spooling & programming technology
http://www.garlic.com/~lynn/99.html#137 Mainframe emulation
http://www.garlic.com/~lynn/2000c.html#65 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2001e.html#72 Stoopidest Hardware Repair Call?
http://www.garlic.com/~lynn/2001e.html#76 Stoopidest Hardware Repair Call?
http://www.garlic.com/~lynn/2003b.html#29 360/370 disk drives
http://www.garlic.com/~lynn/2004c.html#31 Moribund TSO/E
http://www.garlic.com/~lynn/2005e.html#21 He Who Thought He Knew Something About DASD

a few past posts mention the san jose plant site collins digital radio
http://www.garlic.com/~lynn/2000b.html#57 South San Jose (was Tysons Corner, Virginia)
http://www.garlic.com/~lynn/2000c.html#65 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2001e.html#76 Stoopidest Hardware Repair Call?
http://www.garlic.com/~lynn/2002q.html#45 ibm time machine in new york times?
http://www.garlic.com/~lynn/2003k.html#3 Ping:  Anne & Lynn Wheeler
http://www.garlic.com/~lynn/2004c.html#31 Moribund TSO/E
http://www.garlic.com/~lynn/2005n.html#17 Communications Computers - Data communications over telegraph

quite a few past posts discussing 327x and response ... which was a
really hot topic in the period ... especially after the introduction
of the 3278/3274 combination. the problem was that good vm/cms
terminal response was on the order of the 3272 hardware latency. the
3274 hardware latency was easily 3-4 times that of the 3272, and was
noticeable to the internal vm/cms users who had gotten used to good
human-factors interactive response. on the other hand, normal mvs/tso
response was so bad that those users didn't notice the difference
between a 3272 direct channel-attached controller and a 3274 direct
channel-attached controller (a back-of-envelope illustration follows
the list of posts below).
http://www.garlic.com/~lynn/94.html#23 CP spooling & programming technology
http://www.garlic.com/~lynn/96.html#14 mainframe tcp/ip
http://www.garlic.com/~lynn/2000c.html#65 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2000c.html#66 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2001f.html#49 any 70's era supercomputers that ran as slow as today's supercompu
http://www.garlic.com/~lynn/2001l.html#32 mainframe question
http://www.garlic.com/~lynn/2001m.html#19 3270 protocol
http://www.garlic.com/~lynn/2002i.html#43 CDC6600 - just how powerful a machine was it?
http://www.garlic.com/~lynn/2002i.html#48 CDC6600 - just how powerful a machine was it?
http://www.garlic.com/~lynn/2002i.html#50 CDC6600 - just how powerful a machine was it?
http://www.garlic.com/~lynn/2002j.html#67 Total Computing Power
http://www.garlic.com/~lynn/2002k.html#6 IBM 327x terminals and controllers (was Re: Itanium2 power
http://www.garlic.com/~lynn/2002q.html#51 windows office xp
http://www.garlic.com/~lynn/2003b.html#29 360/370 disk drives
http://www.garlic.com/~lynn/2003b.html#45 hyperblock drift, was filesystem structure (long warning)
http://www.garlic.com/~lynn/2003c.html#69 OT: One for the historians - 360/91
http://www.garlic.com/~lynn/2003c.html#72 OT: One for the historians - 360/91
http://www.garlic.com/~lynn/2003d.html#23 CPU Impact of degraded I/O
http://www.garlic.com/~lynn/2003h.html#15 Mainframe Tape Drive Usage Metrics
http://www.garlic.com/~lynn/2003k.html#20 What is timesharing, anyway?
http://www.garlic.com/~lynn/2003k.html#22 What is timesharing, anyway?
http://www.garlic.com/~lynn/2003m.html#19 Throughput vs. response time
http://www.garlic.com/~lynn/2004c.html#30 Moribund TSO/E
http://www.garlic.com/~lynn/2004c.html#31 Moribund TSO/E
http://www.garlic.com/~lynn/2004e.html#0 were dumb terminals actually so dumb???
http://www.garlic.com/~lynn/2004g.html#11 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2005e.html#13 Device and channel
http://www.garlic.com/~lynn/2005h.html#41 Systems Programming for 8 Year-olds
http://www.garlic.com/~lynn/2005r.html#1 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2005r.html#14 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2005r.html#15 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2005r.html#20 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2005r.html#28 Intel strikes back with a parallel x86 design
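
the promised back-of-envelope illustration of why the 3274 latency was
glaring to vm/cms users but invisible to tso users ... all the numbers
here are assumptions for illustration, not measurements from the posts
above:

#include <stdio.h>

int main(void)
{
    double lat_3272 = 0.09;            /* assumed 3272 hardware latency, sec */
    double lat_3274 = 3.5 * lat_3272;  /* "easily 3-4 times" -> ~.31 sec */

    double vmcms_resp = 0.10;          /* assumed good vm/cms system response */
    double tso_resp   = 3.0;           /* assumed typical mvs/tso response */

    /* for vm/cms, controller latency dominates total response, so the
       3272->3274 difference is plainly visible to the user */
    printf("vm/cms + 3272: %.2f sec\n", vmcms_resp + lat_3272);  /* ~0.19 */
    printf("vm/cms + 3274: %.2f sec\n", vmcms_resp + lat_3274);  /* ~0.42 */

    /* for mvs/tso, system response swamps controller latency, so the
       same difference disappears into the noise */
    printf("tso    + 3272: %.2f sec\n", tso_resp + lat_3272);    /* ~3.09 */
    printf("tso    + 3274: %.2f sec\n", tso_resp + lat_3274);    /* ~3.32 */
    return 0;
}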

--
Anne & Lynn Wheeler |  http://www.garlic.com/~lynn/
