[email protected] (Jerry Whitteridge) writes:
> I miss HONE !
>
> Jerry Whitteridge
> Lead Systems Engineer
> Safeway Inc.

I was recently asked when HONE actually shut down
http://www.garlic.com/~lynn/2015c.html#93 HONE Shutdown

and found an email from may1998 saying it was going away

HONE (hands-on network environment), some past posts
http://www.garlic.com/~lynn/subtopic.html#hone

had started out after the 23jun1969 unbundling announcement (starting to
charge for application software, SE services, etc), some past posts
http://www.garlic.com/~lynn/submain.html#unbundle

with (virtual machine) CP67 (running on 360/67), to give branch SEs
hands-on practice with operating systems. Previously, SEs got a sort of
journeyman training as part of large groups onsite in customer
datacenters; after unbundling, nobody could figure out how not to charge
customers for this SE time onsite at the customer.

The science center very early did enhancements to CP67 that provided
simulation of the new 370 (initially before the virtual memory
instructions), so they could work with the latest operating systems
gen'ed for 370.

For CP67/CMS, the science center ... some past posts
http://www.garlic.com/~lynn/subtopic.html#545tech

had also ported apl\360 to CMS for cms\apl. HONE then started offering
apl-based sales&marketing support tools ... which soon came to dominate
all HONE activity, and virtual guest operating system use disappeared
... HONE clone systems would start sprouting up all over the world.

HONE eventually migrated to VM370 from the custom science center
cp67/cms. This is old email from 40yrs ago today (30apr1975)
http://www.garlic.com/~lynn/2006w.html#email750430

where I've moved a bunch of enhancements from CP67 to VM370 and made it
(csc/vm) available to internal datacenters ... including HONE ... which
would run my enhanced custom operating systems for another decade
(including after I moved to SJR, where it was called sjr/vm; I then
started moving off mainframe work).

Not long after the above email, US HONE consolidated all its (US)
datacenters in Palo Alto. By the end of the 70s, the US HONE datacenter
was the largest single system image operation in the world ... several
large (POK) multiprocessor mainframes operating in loosely-coupled
operation with load balancing and workload fall-over (in case of
failure) ... effectively the "peer-coupled shared data" architecture
mentioned in this recent post
http://www.garlic.com/~lynn/2015c.html#112 JES2 as primary with JES3 as a 
secondary

but was not released to customers. Then in the early 80s, the US HONE
complex was replicated, first in Dallas and then a 3rd in Boulder
... with load balancing and fall-over ... a countermeasure to disaster
scenarios (like an earthquake in California) ... also not released to
customers.

for other drift, the previous post's reference to IMS hot-standby ... it
had a fall-over operational problem. An IMS configuration would be a
large CEC (one or more shared-memory processors) with a fall-over
hot-standby. IMS could fall over immediately, but VTAM sessions were an
enormous problem ... large systems could have 30,000-60,000 terminals
... which could take VTAM one or more hrs to get back up and running. We
actually did some work with a 37x5/NCP emulator that spoofed mainframe
VTAM into believing sessions were being managed cross-domain ... while
they were actually handled by outboard non-VTAM processing ... which
could manage replicated shadow sessions to the hot-standby machine
... so IMS hot-standby fall-over would be nearly immediate. However,
this met an enormous amount of resistance from the communication group
(for lots of reasons: no communication group hardware, and SNA RUs were
carried over a real network with lots of feature/function that couldn't
be done in SNA).

This sort of issue then shows up in the internet server environment,
where operations are connectionless ... in theory not requiring
long-term session maintenance overhead .... server resource use is
proportional to the transaction workload ... not to the number of
"clients" ... making it much easier to have replicated servers, load
balancing and workload fall-over.

The browser HTTP people did "mess" up ... they used (session-oriented)
TCP (instead of UDP) to implement a connectionless protocol ... it would
go to all the overhead of setting up a session to do a connectionless
operation and then immediately tear it down. Besides all the
(unnecessary) processing overhead (for TCP session setup/shutdown), TCP
protocol (chatter) has a minimum seven-packet exchange. This was
initially noted as webserver load started to scale up. Industry-standard
TCP had what was called the "FINWAIT" list (to handle dangling packets
after a session was closed) ... and scanned the list linearly to check
whether an incoming packet was part of a recently closed session ... aka
the expectation had been that the FINWAIT list would be empty or have a
few entries. Increasing HTTP webserver workload started to leave
thousands of entries on the FINWAIT list ... and FINWAIT processing
would consume 95% of the webserver processor.
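The FINWAIT-list scaling problem above can be sketched in a few lines;
this is a hypothetical illustration (names and structure mine, not from
any actual TCP stack): a linear scan is O(n) per incoming packet, fine
when n is a handful, pathological when HTTP churn leaves thousands of
recently closed sessions on the list.

```python
# Hypothetical sketch of the FINWAIT-list lookup described above.
# Each recently closed session is identified here by a (peer, port) pair.

closed_list = []    # original design: a simple linked/linear list
closed_set = set()  # the obvious fix: an O(1) hashed lookup

def finwait_scan(pkt):
    """Linear scan -- O(n) comparisons for every incoming packet."""
    for sess in closed_list:
        if sess == pkt:
            return True
    return False

def finwait_hash(pkt):
    """Hashed lookup -- O(1) regardless of list size."""
    return pkt in closed_set

# HTTP-style churn: thousands of sessions opened and immediately closed
for i in range(5000):
    sess = ("10.0.0.1", 40000 + i)
    closed_list.append(sess)
    closed_set.add(sess)

# every packet against the linear list now costs up to 5000 comparisons
assert finwait_scan(("10.0.0.1", 44999)) is True
assert finwait_hash(("10.0.0.1", 44999)) is True
```

With the expectation of "empty or a few entries", the linear scan was a
reasonable choice; it was the connection-per-request HTTP workload that
invalidated the assumption.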

We had been brought in as consultants to a small client/server startup
(after leaving IBM) that wanted to do payment transactions on their
server; they had also invented this technology they called "SSL" that
they wanted to use; the result is now frequently called "electronic
commerce". "SSL" HTTPS is even worse than HTTP ... it is HTTP with a
bunch of extra startup protocol chatter (over and above the encryption
overhead). We had control of several parts of the implementation and
deployment ... but not that.
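As rough, hedged round-trip arithmetic for the "even worse" claim (a
back-of-envelope tally, not a packet trace): a classic full SSL/TLS
handshake adds two more round trips on top of the TCP handshake before
the first HTTP request can even be sent.

```python
# Back-of-envelope round-trip tally for a single HTTPS transaction,
# classic full SSL/TLS handshake (illustrative, not a packet trace).
rtts = {
    "TCP handshake": 1,          # SYN / SYN+ACK (final ACK piggybacked)
    "SSL/TLS handshake": 2,      # hellos, key exchange, finished msgs
    "HTTP request/response": 1,  # the actual transaction
}
total = sum(rtts.values())
assert total == 4  # vs 2 round trips for plain HTTP over TCP
```

So every short HTTPS transaction pays roughly double the startup chatter
of plain HTTP, before counting any of the encryption overhead.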

In the 80s, I had worked on a high-performance protocol that did a
reliable transaction in a minimum 3-packet exchange (compared to seven
for TCP). Part of the implementation was piggy-backing a bunch of option
selections packaged with the initial packet (I was still at IBM, and the
communication group complained that I worked on non-SNA stuff), some
past posts
http://www.garlic.com/~lynn/subnetwork.html#xtphsp
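The 3-vs-7 packet arithmetic can be tallied explicitly; the labels below
are my illustrative reading of a minimal TCP open/transaction/close
sequence versus the piggybacked exchange described, not wire captures.

```python
# Illustrative packet tally: minimal reliable transaction over TCP
# vs a 3-packet exchange with options piggybacked on the first packet.

tcp_minimum = [
    "SYN",             # client opens connection
    "SYN+ACK",         # server accepts
    "ACK",             # three-way handshake complete
    "request",         # the actual transaction request
    "response + FIN",  # reply; server begins teardown
    "FIN + ACK",       # client acks and closes its side
    "ACK",             # final ack of the close
]

three_packet = [
    "request + option selections",  # everything piggybacked up front
    "response",
    "ack / close",
]

assert len(tcp_minimum) == 7
assert len(three_packet) == 3
```

The point of the piggybacking is that option negotiation rides along
with the first data-carrying packet instead of costing its own round
trips.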

Later, I proposed a high-performance, low-latency "SSL" that included
all the HTTPS additional stuff also piggy-backed with the initial packet
... enabling an HTTPS transaction in a minimum 3-packet exchange.
However, the people were enamored with TCP and lots of protocol chatter
back&forth. The recent Google high-performance protocol now has some
similarities (20yrs after the "fast" SSL proposal, and nearly 30yrs
after the 3-packet high-speed protocol work)
https://developers.google.com/speed/spdy/
and
http://www.infoq.com/news/2015/02/google-spdy-http2

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
