[email protected] (Mike Schwab) writes:
> Put a Hercules emulator and z/OS on that blade, 50 z/OS MIPS per
> hyperthread, so 100 MIPS per core, 1600 MIPS per blade (per
> TurboHercules).  Perhaps $5,000 per blade?  Some blades do have 4
> sockets.

re:
http://www.garlic.com/~lynn/2012l.html#51 Turn Off Another Light - Univ. of Tennessee

This mentions 3.2BIPS with 8-way Nehalem (compared to z10 at 30BIPS with 64
processors and z196 at 50BIPS with 80 processors):
http://en.wikipedia.org/wiki/Hercules_%28emulator%29

there has been a big thruput difference between i86 and risc ... risc having
had out-of-order execution, branch prediction, speculative execution, etc
for a couple decades. the last couple i86 chip generations have moved to a
risc core with a hardware layer that translates i86 instructions into risc
micro-ops ... mitigating much of the thruput difference. Hyperthreading
(two simulated processors per core) is also used to further increase
instructions per second (by feeding the execution units from two
independent instruction streams). Hyperthreading was worked on back in the
early 70s for the 370/195 ... but never shipped to customers. The 370/195
was out-of-order and pipelined but didn't have branch prediction or
speculative execution. Peak thruput was 10MIPS ... but most codes ran at
5MIPS because of branch stalls. The plan was that two independent
instruction streams ... each running at 5MIPS effective thruput ... would
achieve an aggregate of 10MIPS.

even the z196 announcements claim the introduction of out-of-order
execution was a big part of the thruput improvement from z10 to z196
... with further out-of-order enhancements coming in the newest
generation. Announcements claim a 50% thruput increase for a max EC12 over
the 80-processor z196 ... which would put it at about 75BIPS.
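the EC12 projection is just the announced percentage applied to the z196
figure ... a quick sketch using only numbers quoted in this post:

```python
# EC12 projection from the announcement claim quoted above:
# a 50% thruput increase over the 80-processor z196 at 50BIPS.
Z196_BIPS = 50
CLAIMED_INCREASE = 0.50  # "50% thruput increase" claim

ec12_bips = Z196_BIPS * (1 + CLAIMED_INCREASE)
print(ec12_bips)  # 75.0
```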

traditional 370 simulation has been claimed to run at 10:1 (with further
improvements in some of the commercial simulators using just-in-time
"compilation" ... aka dynamic translation of repeatedly executed 370
snippets to native code for direct execution). e5-2600 is two sockets
(chips) with 8 cores (processors) per chip for a total of 16 processors,
benchmarked at an aggregate of 527BIPS ... which might yield as much as 53
mainframe BIPS (at 10:1) ... or approx. the same as an 80-processor
z196. IBM has a base price for an e5-2600 blade of $1815 ... compared to
$28M for an 80-processor z196.
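a back-of-envelope sketch of the 10:1 arithmetic ... all figures are this
post's own numbers, none independently verified:

```python
# e5-2600 thruput under traditional 370 simulation, using only
# figures quoted in this post.
E5_2600_BIPS = 527      # 2 sockets x 8 cores, benchmarked aggregate
SIM_OVERHEAD = 10       # traditional 10:1 simulation claim
Z196_80WAY_BIPS = 50    # 80-processor z196

mainframe_bips = E5_2600_BIPS / SIM_OVERHEAD
print(round(mainframe_bips))             # ~53 mainframe BIPS
print(mainframe_bips / Z196_80WAY_BIPS)  # ~1.05x an 80-processor z196
```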

Note an analysis from a couple days ago claims IBM sells $5.25M in
mainframe software, services and storage for every $1M in mainframe
hardware. That would imply customers spend closer to $175M for an
80-processor z196 ($28M + $147M).

If the Amazon "supercomputer" at $4829/hr, 51,132 cores, $48M/annum were
running mainframe simulation (at 10:1) ... that would still be the
equivalent of 3,380 z196 80-processor machines or $625B (one-tenth of
the $6.25 trillion from the previous post). The problem is: would it
require the equivalent amount of IBM mainframe software (at a cost of
several hundred billion)?
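the Amazon comparison can be sketched the same way ... assuming (my
reading, not stated explicitly above) that per-core thruput is taken from
the e5-2600 benchmark figures:

```python
# Back-of-envelope for the Amazon "supercomputer" comparison,
# assuming per-core thruput from the e5-2600 benchmark (527BIPS
# aggregate over 16 cores) and the traditional 10:1 simulation claim.
CORES = 51_132
BIPS_PER_CORE = 527 / 16   # e5-2600 aggregate spread over its 16 cores
SIM_OVERHEAD = 10          # traditional 370-simulation ratio
Z196_80WAY_BIPS = 50
Z196_TOTAL_SPEND = 175e6   # $28M hardware + $147M sw/services/storage

z196_equivalents = CORES * (BIPS_PER_CORE / SIM_OVERHEAD) / Z196_80WAY_BIPS
print(round(z196_equivalents))  # ~3,368, close to the 3,380 quoted above
# total customer spend lands in the same ballpark as the $625B figure
print(z196_equivalents * Z196_TOTAL_SPEND / 1e9)  # ~$590B
```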

Four-socket e5-4600 blades are arriving ... which theoretically will be
1000BIPS machines ... but I haven't seen any published benchmarks yet
... four sockets (chips), 32 cores (processors), 64 hyperthreads ...
which theoretically could give 100 mainframe BIPS per blade (at 10:1).

Way back when, when I was involved in doing ECPS (initially for the
138/148) ... there was a factor of ten increase in thruput from dropping
segments of vm370 kernel code into native "microcode". We were told that
the machines had 6kbytes of space available for ECPS native microcode and
were to choose the 6kbytes of highest-use vm370 kernel pathlengths. Old
post giving the results of kernel pathlength timings ... ordered by
percent of total kernel time. The 6kbyte cut-off accounted for 79.55% of
vm370 kernel execution time ... dropping it into m'code resulted in a ten
times speedup (aka down to about 8% ... eliminating 72% of vm370 kernel
time) ... aka low- and mid-range 370s used to all be a 370 simulator with
software running on some native engine at approx. 10:1 overhead:
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
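the ECPS percentages follow the usual speed-up-a-fraction arithmetic ... a
sketch using the numbers from the post:

```python
# ECPS arithmetic: the microcoded 79.55% of vm370 kernel time runs
# 10x faster, so it shrinks to ~8% ... eliminating ~72% of kernel time.
MCODE_FRACTION = 0.7955  # share of kernel time under the 6kbyte cutoff
SPEEDUP = 10             # gain from dropping code into native m'code

residual = MCODE_FRACTION / SPEEDUP        # ~0.08 of original kernel time
eliminated = MCODE_FRACTION - residual     # ~0.72 of kernel time eliminated
remaining = (1 - MCODE_FRACTION) + residual  # ~0.28 of kernel time left
print(residual, eliminated, remaining)
```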

as an aside ... this mentions doing HA/CMP for unix platforms along
with cluster scaleup:
http://www.garlic.com/~lynn/95.html#13

at the time, the mainframe DB2 group complained that if I was allowed to
continue, it (HA/CMP) would be a minimum of 5yrs ahead of them ... in
both scaleup and availability. misc. past posts mentioning the
HA/CMP product:

also at the time, out doing marketing pitches, I coined the terms
"disaster survivability" and "geographic survivability" ... and was also
asked to write a section for the corporate continuous availability
strategy document ... but the section got pulled when both Rochester and
POK complained that they couldn't meet the requirements ... misc. past
posts mentioning "continuous availability" 
http://www.garlic.com/~lynn/submain.html#available

post from 2009 mentioning from (IBM) Annals of Release No Software
Before Its Time:
http://www.garlic.com/~lynn/2009p.html#43 From The Annals of Release No Software Before Its Time

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN