charl...@mcn.org (Charles Mills) writes:
> Now, in a sense, mainframes ARE getting faster. More cache. Higher
> real memory limits and for Z, dramatically lowered memory prices. That
> processor multi-threading thing. But especially, new instructions that
> are inherently faster than the old way of doing things. Load and store
> on condition are the i-cache's dream instructions! Lots and lots of
> new "faster way to do things" instructions on the z12 and z13.

cache miss access to memory ... when measured in number of processor
cycles ... is comparable to 60s disk access time when measured in
number of 60s processor cycles. non-mainframe processors have been doing
memory latency compensation for decades: out-of-order execution, branch
prediction, speculative execution, hyperthreading, etc (aka waiting for
memory access is increasingly treated the way 60s multiprogramming
treated waiting for disk i/o). also, industry standard, non-risc
processors some time ago introduced risc micro-ops ... where standard
instructions are translated into risc micro-ops for execution
scheduling.
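a rough back-of-the-envelope sketch of the "latency measured in cycles"
comparison above; the clock and latency figures are my own illustrative
assumptions, not measurements from the post:

```python
# illustrative, assumed figures (my assumptions, not measurements) --
# only the arithmetic of "latency measured in cycles" matters here.

# 60s-era machine: ~1 MHz processor -> 1 us cycle; ~25 ms disk access
cycle_60s_us = 1
disk_access_us = 25_000
disk_in_cycles = disk_access_us // cycle_60s_us      # 25,000 cycles

# modern core: ~5 GHz -> 0.2 ns (200 ps) cycle; ~100 ns miss latency
cycle_now_ps = 200
mem_latency_ps = 100_000
miss_in_cycles = mem_latency_ps // cycle_now_ps      # 500 cycles

print(f"60s disk access ~ {disk_in_cycles:,} 60s cycles")
print(f"cache miss      ~ {miss_in_cycles:,} modern cycles")
```

depending on the assumed figures, the modern miss penalty runs from
hundreds to a few thousand cycles (longer for a full TLB-miss chain or a
remote memory node) ... the same ballpark as a 60s disk wait measured in
60s cycles, which is the point of the analogy.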
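a toy sketch of the micro-op cracking idea (made-up mnemonics and a
made-up rename register; real decoders are far more involved): a
memory-operand instruction is split into a load uop plus a pure
register-register uop, so the scheduler can treat the memory access
like any other long-latency event:

```python
# toy CISC -> risc micro-op cracking. An instruction with a memory
# operand is split into a LOAD uop plus a register-only uop; the
# out-of-order scheduler can then issue the ALU work as soon as the
# load completes, independent of surrounding instructions.

def crack(instr):
    """Translate one ('ADD', dst, src) instruction into micro-ops."""
    op, dst, src = instr
    if src.startswith("["):                 # memory operand
        tmp = "t0"                          # internal rename register
        return [("LOAD", tmp, src),         # uop 1: fetch operand into tmp
                (op, dst, tmp)]             # uop 2: pure register operation
    return [instr]                          # register operand passes through

print(crack(("ADD", "r1", "[r2+8]")))
# [('LOAD', 't0', '[r2+8]'), ('ADD', 'r1', 't0')]
```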

mainframe implementations are more & more reusing industry standard
implementations: fixed-block disks, fibre-channel standard, CMOS,
etc. half of the per-processor performance improvement from z10->z196
(playing catchup) is claimed to come from introduction of some of these
industry standard memory access compensation technologies ... with
further additions in z12 (it's not clear about z13 ... some numbers show
total system throughput, compared to z12, increasing less than the
increase in number of processors ... possibly implying that
per-processor throughput didn't increase or even declined).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
