What does this have to do with this thread???

On Fri, Aug 24, 2012 at 4:13 PM, Anne & Lynn Wheeler <l...@garlic.com> wrote:

> scott_j_f...@yahoo.com (Scott Ford) writes:
> > Just for my 2 cents' worth, I ran P390s in one environment attached to
> > two T1s. Attached to them were 3800 laser printers and some 3274s we
> > couldn't replace. The mainframes were an hour plus away in NJ, and our
> > printed output queued up to the P390s. Everything worked like a champ.
> > I am now on zPDT z/OS 1.12 on an Intel i7; everything's good, but we
> > are also only doing development.
> >
> > Scott Ford
> > www.identityforge.com
>
> re:
> http://www.garlic.com/~lynn/2012l.html#16 X86 server
> http://www.garlic.com/~lynn/2012l.html#18 X86 server
> http://www.garlic.com/~lynn/2012l.html#19 X86 server
> http://www.garlic.com/~lynn/2012l.html#20 X86 server
>
> 1980: STL is bursting at the seams and they are moving 300 people from
> the IMS group to an off-site building. The group tries remote 3270
> support and finds it intolerable. I get conned into writing HYPERChannel
> support for use as a channel extender ... allowing them to put local
> channel-attached 3270 controllers at the remote site. It runs over a T1
> channel on the *campus* Collins digital radio T3 microwave. They don't
> notice any change from CMS local 3270 controllers in STL (maintaining
> their subsecond response ... back when MVS/TSO people were claiming
> nobody needed subsecond response). System throughput actually improves
> ... the issue is that the HYPERChannel A220s sitting on the real channel
> have significantly lower channel busy (for the same operation) than 3270
> controllers ... total system throughput improves 10-15% (the 3270
> controller channel busy is masked at the remote site).
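
A purely illustrative back-of-the-envelope sketch of that channel-busy
point; the utilization figures below are assumptions, not measurements
from the post, which only gives the 10-15% aggregate improvement:

    #include <stdio.h>

    int main(void)
    {
        /* Assumed channel-busy fractions for the same 3270 traffic --
           illustrative numbers only. */
        double busy_local_3270_ctlr   = 0.30; /* local channel-attached 3270 controller */
        double busy_hyperchannel_a220 = 0.10; /* A220 adapter driving the extender link */

        /* The difference is channel time handed back to the rest of the
           system: the controller's busy time is now incurred ("masked")
           at the remote end of the extender instead of on the host channel. */
        double freed = busy_local_3270_ctlr - busy_hyperchannel_a220;
        printf("channel capacity freed for other I/O: %.0f%%\n", freed * 100.0);
        return 0;
    }
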
>
> I try to get approval to release the software to customers ... which a
> group in POK manages to block. That group was playing with some fiber
> stuff (that eventually gets out as ESCON), and they are afraid that if
> my HYPERChannel support is released to customers ... it would interfere
> with someday being able to get their fiber stuff out. As a result, NSC
> has to re-do my implementation from scratch.
>
> Roll forward several years, and the 3090 product administrator tracks me
> down. The 3090 channels were designed to have 3-5 channel checks
> annually, aggregate across the whole customer base. The industry service
> that collects EREP data shows that there have been an aggregate of 20
> channel checks in the first year.
>
> It turns out they are at customers running 3800s over HYPERChannel
> channel extenders. In my original implementation ... if I had an
> unrecoverable transmission error ... I would simulate a channel check in
> the CSW ... for the host software to go through its retry operation ...
> and NSC faithfully reproduced that in their implementation. After some
> amount of toiling through error recovery code ... I determined that
> simulating an IFCC would have effectively the same result as a channel
> check, and got NSC to update their implementation.
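
A minimal sketch of the two ways of reflecting an unrecoverable link
error back to the host. This is not the actual HYPERChannel/NSC code;
the struct and function names are invented, and only the channel-status
bit values are real (per the S/370 Principles of Operation CSW layout):

    #include <stdio.h>

    /* CSW channel-status byte bits (S/370 Principles of Operation) */
    #define CSW_CHAN_CTRL_CHECK  0x04   /* channel control check   */
    #define CSW_IFCC             0x02   /* interface control check */

    /* simplified stand-in for the CSW the extender presents to the host */
    struct sim_csw {
        unsigned char unit_status;      /* device/control-unit status */
        unsigned char chan_status;      /* channel status byte        */
    };

    /* original approach: surface the link failure as a channel check,
       driving the host's channel-check retry/recovery path */
    static void reflect_as_channel_check(struct sim_csw *csw)
    {
        csw->chan_status |= CSW_CHAN_CTRL_CHECK;
    }

    /* revised approach: an interface control check (IFCC) drives
       effectively the same host recovery, while keeping the event out
       of the channel-check counts the 3090 people were tracking */
    static void reflect_as_ifcc(struct sim_csw *csw)
    {
        csw->chan_status |= CSW_IFCC;
    }

    int main(void)
    {
        struct sim_csw old_way = {0, 0}, new_way = {0, 0};

        reflect_as_channel_check(&old_way);
        reflect_as_ifcc(&new_way);

        printf("old: chan_status %02x, new: chan_status %02x\n",
               old_way.chan_status, new_way.chan_status);
        return 0;
    }
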
>
> As an aside, long ago and far away, somebody in Boulder does build a
> hardware channel emulator for the IBM/PC which is used for 3800 testing.
>
> --
> virtualization experience starting Jan1968, online at home since Mar1970
>



-- 
zMan -- "I've got a mainframe and I'm not afraid to use it"

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
