Tom Longfellow <[email protected]> writes:
> Let the pedantry begin:  Superdome, Xeon, Rack servers, Blades, etc.
> For this discussion they are all the same:  A separately maintained set 
> of many  boxes (with some virtualization to extend their reach)  versus 
> the Great Satan, called MAINFRAME.    
> I have been in places where the P7 platform came in a big monolithic box 
> that had three times the memory and 'activated' CPU cores.  Looked a lot 
> like the classic CMOS mainframe to me from the outside.

trivia: the Itanium architecture was in large part done by a long-time
IBMer who went to HP in the early 80s (originally working on "Snake",
HP's RISC processors). One of the last things he did at IBM was retrofit
a subset of 370/XA access registers to the 3033 as dual-address-space
mode. Itanium was supposed to be the wide-spread next-generation 64-bit
"server" machine. When AMD did 64-bit i86 ... which was taking over the
market instead ... Intel also moved to 64-bit i86. XEON is (supposedly)
64-bit i86 with RAS features borrowed from Itanium.

SCI was a fiber-optic protocol adapted for a number of things ...
including channel I/O ... but also a scalable (64-port) multiprocessor
memory interface. Sequent (& Data General) used it for a 256-way server
with 64 four-processor i486 boards (IBM later bought Sequent).

Convex did a 128-way "snake" ... SCI with 64 two-processor HP "Snake"
boards. HP then bought Convex. An engineer who had been at Cray, then
IBM Kingston engineering & scientific, then IBM Austin RS/6000, was
hired by HP to do Superdome ... sort of a less expensive Convex machine.

After leaving IBM, we did some consulting for both Sequent and Convex.
Then the guy doing Superdome tried to talk us into joining him. At the
time we were doing some work for a major payment card processor, and HP
had bought one of the major point-of-sale terminal companies. The former
CEO of the point-of-sale terminal company and the guy doing Superdome
both reported to the same HP executives ... and we had to have meetings
with all of them for different reasons.

all of this mostly predates the IBM CMOS high-end mainframes.

While still at IBM, in 1988 I had been asked to help LLNL standardize
some serial stuff they had, which quickly became the fibre-channel
standard, including some stuff I had worked with in 1980 for channel
extender. Later some POK channel engineers became involved and defined a
heavy-weight protocol that radically reduced the native throughput; it
was eventually released as FICON.

The latest peak I/O benchmark that I've found is z196 getting 2M IOPS
with 104 FICON (running over 104 fibre-channel standard links). At about
the same time a fibre-channel adapter was announced for the e5-2600
claiming over a million IOPS (two such adapters have higher throughput
than 104 FICON). There is a reference to TCW for zHPF that is a little
like what I did in 1980, but it only claims a 30% improvement (say 70
FICON instead of 104).

The e5-2600 v1, in the time-frame of z196, was rated between 400-530
BIPS (depending on model), compared to the 80-processor z196 rated at
50 BIPS.

Since then there have been the 101-processor EC12 at 75 BIPS and the
141-processor z13 at 100 BIPS, and e5-2600 v2, v3, and v4 ... with v4
somewhere around 1500 BIPS.

Before IBM sold off its i86 server business it had announced a
high-density e5-2600 rack with something like 64 e5-2600 blades ... or
around 3500 BIPS for the v1 e5-2600 and nearly 10,000 BIPS for v4
(something like the equivalent of 100 z13s).

The large megadatacenters with hundreds of thousands of blades have
done an enormous amount of automation; a typical megadatacenter is run
by a staff of 80-120 people. However, the enormous optimization in blade
cost, blade operation, automation, etc., by the large megadatacenters
likely contributed to IBM selling off its i86 server business.

Part of the z196 performance claims (compared to z10) is the
introduction of memory-latency compensation features (out-of-order
execution, branch prediction, etc.) that have been in many of these
other chip platforms for decades, with further improvements for EC12
and z13:

z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012

Early z13 specs said 30% more performance than EC12 (100 BIPS) with 40%
more processors (or ~710 MIPS/processor??) ... some current z13 specs
say 40% more performance (with 40% more processors, so maybe the same
MIPS/proc).
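The per-processor figures above are just BIPS divided by processor count; a small sketch of that arithmetic (Python, numbers taken only from the list above):

```python
# Per-processor throughput derived from the BIPS and processor counts above:
# MIPS/proc = (BIPS * 1000) / processors
systems = [
    ("z900", 16, 2.5),   # Dec2000
    ("z990", 32, 9),     # 2003
    ("z9",   54, 18),    # Jul2005
    ("z10",  64, 30),    # Feb2008
    ("z196", 80, 50),    # Jul2010
    ("EC12", 101, 75),   # Aug2012
]
for name, procs, bips in systems:
    print(f"{name}: {bips * 1000 / procs:.0f} MIPS/proc")

# z13: using the rounded 100 BIPS figure across 141 processors
print(f"z13: {100 * 1000 / 141:.0f} MIPS/proc")  # ~709, close to the 710 quoted
```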

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
