Bill Richter wrote:
It appears that Google architecture is the antithesis of conventional
mainframe application architecture in all aspects.
http://labs.google.com/papers/googlecluster-ieee.pdf
and the difference between that and loosely-coupled or parallel sysplex?
long ago and far away, my wife was con'ed into going to POK to be in
charge of loosely-coupled architecture ... she was in the same
organization with the guy in charge of tightly-coupled architecture.
while she had come up with peer-coupled shared data architecture
http://www.garlic.com/~lynn/subtopic.html#shareddata
it was tough slogging because all the attention was focused on
tightly-coupled architecture at the time. also she had battles with the
sna forces ... who wanted control of all communication that left the
processor complex (i.e. outside of direct disk i/o, etc).
part of the problem was that in the early days of SNA ... she had
co-authored a "peer-to-peer" network architecture with Bert Moldow ...
AWP39 (somewhat viewed as in competition with sna). while SNA was
tailored for centralized control of a large number of dumb terminals
... it was decidedly lacking for peer-to-peer operations with large
numbers of intelligent peers.
a trivial example: sjr had done a cluster 4341 implementation using
highly optimized peer-to-peer protocols running over a slightly modified
trotter/3088 (i.e. what eventually came out as conventional ctca ... but
with interconnection for eight processors/channels). peer-to-peer,
asynchronous operation could achieve cluster synchronization in under a
second elapsed time (for eight processors). doing the same thing with
SNA increased the elapsed time to approx. a minute. the group was forced
to only release the SNA-based implementation to customers ... which
obviously had severe scaling problems as the numbers in a cluster
increased.
the communication division did help with significant uptake of PCs in
the commercial environment. a customer could replace a dumb 327x with a
PC for approx. the same price, get datacenter terminal emulation
connectivity and in the same desktop footprint also have some local
computing capability. as a result, you also found the communication
group with a large install base of products in the terminal emulation
market segment (with tens of millions of emulated dumb terminals)
http://www.garlic.com/~lynn/subnetwork.html#emulation
in the late 80s, we had come up with 3-tier architecture (as an
extension to 2-tier, client/server) and were out pitching it to customer
executives. however, the communication group had come up with SAA, which
was oriented toward trying to stem the tide moving to peer-to-peer
networking and client/server, and away from dumb terminals. as a result,
we tended to take a lot of heat from the SAA forces.
http://www.garlic.com/~lynn/subnetwork.html#3tier
in the same time frame, a senior engineer from the disk group in san
jose managed to sneak a talk into the internal, annual world-wide
communication conference. he began his talk with the statement that the
communication group was going to be responsible for the demise of the
disk division. basically the disk division had been coming up with all
sorts of high-thruput, peer-to-peer network capability for PCs and
workstations to access the datacenter mainframe disk farms. the
communication group was constantly opposing the efforts, protecting the
installed base of terminal emulation products. recent reference to
that talk:
http://www.garlic.com/~lynn/2006k.html#25 Can anythink kill x86-64?
i had started the high-speed data transport project in the early 80s ...
hsdt
http://www.garlic.com/~lynn/subnetwork.html#hsdt
and had a number of T1 (1.5mbit) and higher speed links for various
high-speed backbone applications. one friday, somebody in the
communication group started an internal discussion on high-speed
communication with some definitions ... recent posting referencing this
http://www.garlic.com/~lynn/2006e.html#36
low-speed <9.6kbits
medium-speed 19.2kbits
high-speed 56kbits
very high-speed 1.5mbits
the following monday, i was in the far-east talking about purchasing
some hardware and they had the following definitions on their conference
room wall
low-speed >20mbits
medium-speed 100mbits
high-speed 200-300mbits
very high-speed >600mbits
part of this was that the communication division's 37xx product line
only supported links up to 56kbits. They had recently done a study to
determine if T1 support was required ... which concluded that in 8-10
years there would only be 200 mainframe customers requiring T1
communication support. The issue could have been that the people doing
the study were supposed to come up with results supporting the current
product line ... or maybe they didn't understand the evolving
communication market segment, or possibly both.
their methodology was to look at customers using 37xx "fat pipes" ...
basically the ability to operate multiple parallel 56kbit links as a
single simulated connection. They found several customers with two
parallel links, some with three parallel links, a few with four parallel
links, and none with more. Based on that, they projected that it would
take nearly a decade before any significant number of customers had
parallel links approaching T1 (1.5mbits) capacity.
the problem with the analysis at the time was that the telcos were
tariffing T1 at approx. the same as five 56kbit links. customers needing
more than four 56kbit links were buying a full T1 and running it with
hardware from other vendors. A trivial two-week survey turned up 200
mainframe customers with full T1 operations ... something that the
communication group was projecting wouldn't occur for nearly another decade.
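the tariff arithmetic above can be sketched as follows (a minimal
illustration; only the 5x tariff ratio and the 56kbit/T1 capacities come
from the post, the absolute dollar figures are made-up placeholders):

```python
# Sketch of the fat-pipe tariff crossover described above.
# Assumption: a full T1 was tariffed at roughly the price of five
# 56kbit links. The 5x ratio is from the post; the dollar figure
# below is purely illustrative.

LINK_56K_PRICE = 1000          # hypothetical monthly tariff per 56kbit link
T1_PRICE = 5 * LINK_56K_PRICE  # T1 tariffed at ~five 56kbit links

LINK_56K_CAPACITY = 56         # kbits/sec
T1_CAPACITY = 1544             # kbits/sec

def cheaper_option(n_links: int) -> str:
    """For a workload needing n parallel 56kbit links, which is cheaper?"""
    fat_pipe_cost = n_links * LINK_56K_PRICE
    return "full T1" if T1_PRICE <= fat_pipe_cost else "56kbit fat pipe"

for n in range(2, 8):
    aggregate = n * LINK_56K_CAPACITY
    print(f"{n} x 56kbit = {aggregate:4d} kbits aggregate -> {cheaper_option(n)}")
```

the point being that anyone needing five or more parallel links had
already jumped to a full T1 (on other vendors' hardware) ... so a survey
that only counted parallel 56kbit "fat pipe" links was structurally
incapable of seeing those customers.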
so last fall, i was at a conference and there was a talk about "google
as a supercomputer". the basic thesis was that google was managing to
aggregate a large collection of processing power and data storage well
into the supercomputer range ... and doing it for 1/3rd the cost of the
next closest implementation.
slightly related old post
http://www.garlic.com/~lynn/95.html#13
from when we were working on scaling for our ha/cmp product
http://www.garlic.com/~lynn/subtopic.html#hacmp
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html