[email protected] (Charles Mills) writes:
> The mainframe seems to me to have also some "architectural"
> advantages. It seems to support a denser "clustering." It does not
> seem to me that there is anything in the Windows/Linux world that
> duplicates the advantages of 100 or so very-closely-coupled (sharing
> all main storage potentially) CPUs. Sure, you can link a thousand
> Windows or Linux 8-way servers on a super-fast net, and it is fine for
> some things -- incredibly powerful for some of them, but it seems
> there are some things the mainframe architecture is inherently better
> at.

z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS, (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017

the industry standard MIPS benchmark is the number of benchmark
iterations compared to a 370/158, which is assumed to be a 1MIPS
processor (i.e. not an actual count of instructions executed)
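The benchmark-relative rating described above can be sketched as a ratio; the iteration rates here are made-up illustrative numbers, not real benchmark results:

```python
# "MIPS" here is a machine's benchmark iteration rate relative to a
# 370/158, which is defined to be a 1 MIPS machine -- not an actual
# instruction count.  BASELINE_ITERS_PER_SEC is a hypothetical figure.

BASELINE_ITERS_PER_SEC = 1_000   # assumed 370/158 rate (defined as 1 MIPS)

def mips_rating(iters_per_sec: float) -> float:
    """MIPS relative to the 370/158 baseline (not an instruction count)."""
    return iters_per_sec / BASELINE_ITERS_PER_SEC

# a processor running the benchmark 625,000x faster than the baseline
# would be rated 625,000 MIPS, i.e. 625 BIPS
print(mips_rating(625_000_000))  # 625000.0
```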

the z196 (@50BIPS) comparison was an e5-2600 blade with two 4-processor
chips (8 processors, shared memory) getting between 400 and 530 BIPS
(depending on model; 50BIPS-65BIPS/processor), roughly ten times a max
configured z196

the most recent published peak I/O benchmark (that I've found) is for
z196, getting 2M IOPS using 104 FICONs running over 104 Fibre Channel
Standard links.  FICON is a protocol that radically reduces the native
I/O throughput.  At the time of the z196 peak I/O benchmark, a fibre
channel was announced for e5-2600 blades claiming over a million IOPS
(two such fibre channels have higher aggregate throughput than the 104
FICON running over 104 fibre channels).
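Back-of-envelope arithmetic for the comparison above, using the post's figures ("over a million" taken as exactly 1M for the estimate):

```python
# z196 peak I/O benchmark: 2M IOPS spread over 104 FICON channels
z196_iops = 2_000_000
ficon_channels = 104
per_ficon = z196_iops / ficon_channels    # ~19,231 IOPS per FICON

# announced e5-2600 fibre channel: "over a million" IOPS, taken as 1M
fc_iops = 1_000_000
print(round(fc_iops / per_ficon))         # 52 -- one native FC does the
                                          # work of ~52 FICONs, so two
                                          # match all 104
```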

the naming convention for current server blades has been revised
... family of chips
https://www.servethehome.com/intel-xeon-scalable-processor-family-platinum-gold-silver-bronze-naming-conventions/intel-scalable-processor-family-skylake-sp-platinum-gold-silver-bronze/

code names have increasing throughput; the 2017 ... blades potentially
have one to eight chips (with shared memory) and 4-28 cores
(i.e. processors) per chip (max 8*28, or 224 processors, and possibly
448 threads).
https://ark.intel.com/content/www/us/en/ark/products/series/125191/intel-xeon-scalable-processors.html

each high-end blade is a few TIPS (thousand BIPS), or more than ten
times a max configured z14.  Dense rack packaging might have 50-60 such
blades in a rack ... about the floor space of a z14 and potentially a
thousand times the throughput.
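The rough arithmetic behind that rack comparison; the per-blade figure takes "a few TIPS" as 3 TIPS, and the blade count uses the low end of the 50-60 range, both purely for illustration:

```python
# Assumed illustrative figures, not measured results:
blade_tips = 3                  # "a few TIPS" per high-end blade
blades_per_rack = 50            # low end of the 50-60 dense-packaging range
z14_bips = 150                  # max configured z14 (from the list above)

rack_bips = blade_tips * 1000 * blades_per_rack   # 150,000 BIPS per rack
print(rack_bips / z14_bips)     # 1000.0 -- about a thousand z14s of
                                # throughput in one z14-sized footprint
```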

Most recent announce (last month) is the 56-core (processors) Platinum 9200
https://www.anandtech.com/show/14182/hands-on-with-the-56core-xeon-platinum-9200-cpu-intels-biggest-cpu-package-ever
https://www.servethehome.com/intel-xeon-platinum-9200-formerly-cascade-lake-ap-launched/
https://www.storagereview.com/intel_releases_second_generation_intel_xeon_scalable_cpus
https://www.hpcwire.com/2019/04/02/intel-launches-second-gen-scalable-xeons-with-up-to-56-cores/

"We are delivering 8-core Xeons all the way up to 56-core, the highest
core count we've ever delivered on Xeon," said Shenoy. "We are
delivering support for 1-, 2-, 4- and 8-socket glueless support for Xeon."

... snip ...

aka 8-socket (8 chips), 56-core (processors per chip), 448 cores
(processors, shared memory)

the above articles include discussions about customers building
supercomputers with thousands of such blades.

trivia: in 1980, STL was full and moving 300 people from the IMS group
to an offsite bldg. They tried "remote" 3270 and found the human factors
totally unacceptable. I get con'ed into doing channel extender support,
allowing local channel-attached 3270 controllers to be placed at the
offsite bldg (with service back to the STL datacenter) ... with no
difference in human factors. The hardware vendor tries to get IBM to let
them distribute my support ... but there was a group in POK playing with
some serial stuff that gets that vetoed (they were afraid it would make
it harder for them to release their stuff).

In 1988, I'm asked to help LLNL standardize some serial stuff they are
playing with ... which quickly becomes the fibre channel standard
(including some stuff I did in 1980). The POK people finally get their
stuff released in 1990 with ES/9000 as ESCON, when it is already
obsolete. Then some POK people get involved in the fibre channel
standard and define a heavy-weight protocol that radically reduces the
native throughput ... which is eventually released as FICON.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
