[email protected] (Patrick Vogt) writes:
> If you look at coding/automation on decentralized platforms, the
> Mainframe is technically still ahead of the other platforms. It may be
> slower on implementations but that's not to do with the Mainframe but
> of people coding like 20 years ago and structures of the companies.

with regard to the later statement about mainframes now being out-of-order,
RISC chips have been doing out-of-order for decades ... 20 years ago,
the I86 chip makers started doing hardware decomposition of I86
instructions into RISC micro-ops (with out-of-order and branch
prediction) ... largely eliminating the performance advantage of RISC
over I86. out-of-order is one of the methods of attempting to compensate
for cache misses & the increasing processor/memory speed mismatch.

current cache miss, memory access latency, counted in number of
processor cycles, is comparable to 60s disk access latency when
counted in 60s processor cycles ... which accounts for why out-of-order
and hardware multi-threading work ... comparable to starting to do
multiprogramming/multitasking in the 60s ... i.e. MFT & MVT. Note
folklore is that the original justification for moving all of 370 to
virtual memory was based on horrible MVT storage management ... regions
required four times the memory typically used, so a typical 1mbyte MVT
165 only had four regions; moving to virtual memory would allow four
times the regions with little or no actual paging (better CPU
utilization and aggregate throughput, the larger virtual memory size
compensating for the horrible MVT storage management problem).

note that the 360/195 pipeline had out-of-order but no branch
prediction, so conditional branches drained the pipeline and most code
only ran at half peak processor throughput (and some amount of RISC
literature attributes out-of-order to the 195 work). I had gotten
sucked into helping with hardware (hyper)threading for the 370/195
(never announced or shipped): two instruction streams (simulating two
processors) ... two simulated CPUs each typically running at half
throughput ... keeping the single 195 pipeline 100% utilized. hardware
threading is mentioned in this article about the end of (360) ACS
(canceled because executives thought it would advance the state of the
art too fast and IBM would lose control of the market; it also
references some ACS features showing up more than 20yrs later with
ES9000)
https://people.cs.clemson.edu/~mark/acs_end.html

latest mainframes

z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
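the per-processor figures above are just total BIPS divided by
processor count ... a minimal sketch checking that against the numbers
quoted in this post (these are the post's figures, not official
benchmarks):

```python
# verify MIPS/proc = (BIPS * 1000) / processor count
# (name, processors, BIPS, quoted MIPS/proc) -- figures from the list above
systems = [
    ("z900", 16, 2.5, 156),
    ("z990", 32, 9, 281),
    ("z9",   54, 18, 333),
    ("z10",  64, 30, 469),
    ("z196", 80, 50, 625),
    ("EC12", 101, 75, 743),
]
for name, procs, bips, quoted in systems:
    mips = bips * 1000 / procs
    print(f"{name}: {mips:.0f} MIPS/proc (quoted {quoted})")
```

every computed value lands within rounding distance of the quoted one.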

z196 documentation claims that half the per-processor performance
improvement (compared to z10) is from the introduction of out-of-order
(compared to it being used for decades in other processors) ... i.e.
half of the 156MIPS increase from 469MIPS to 625MIPS. Part of the
118MIPS improvement from z196 to EC12 is attributed to further
refinement of the out-of-order implementation.

z13 claims 30% increased (system) throughput (over EC12), or about
100BIPS, with a 40% increase in processors ... or about 710MIPS/proc.
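a quick sanity check of the z196 and z13 arithmetic (a sketch using
only the figures quoted in this post):

```python
# z196 over z10, per processor: total gain, half credited to out-of-order
z10, z196, ec12 = 469, 625, 743
gain = z196 - z10            # 156 MIPS
ooo = gain / 2               # ~78 MIPS attributed to out-of-order
print(gain, ooo)

# z196 to EC12, per processor
print(ec12 - z196)           # 118 MIPS

# z13: claims 30% more system throughput than EC12 (75 BIPS, 101
# processors), with 40% more processors
z13_bips = 75 * 1.3                        # 97.5, i.e. "about 100 BIPS"
z13_procs = round(101 * 1.4)               # ~141 processors
print(round(z13_bips * 1000 / z13_procs))  # ~690 MIPS/proc (~710 using 100 BIPS)
```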

z196 era e5-2600v1 blades are rated at 400-500+BIPS (depending on
model/frequency) ... latest e5-2600v4 blades are 3-4 times that, around
1.5TIPS (1500BIPS) ... they've had decades more experience with
processor design for throughput.

trivia: mid-70s, I was involved in a 16-way 370 effort ... which had
enlisted the spare time of some of the early 3033 processor engineers.
People in POK thought it was really great until somebody told the head
of POK that it might be decades before the POK favorite son operating
system had effective 16-way support. The head of POK then invited some
of us to never visit POK again ... a 16-way, the z900, finally ships
in 2000 (almost 25yrs later).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
