On 2/18/2014 12:06 AM, Timothy Sipples wrote:
> I also agree with the opinions expressed about optimizing where it makes
> the most sense (or dollars, euro, yen....) and only there, in some priority
> order. Though I'd mostly disagree about that additional peak MSU.

Of course, "Fugget about it" is expected from IBM, but there are software vendors making a comfortable living helping their customers save serious cash by minimizing their monthly peak rolling four-hour averages (R4HAs). Some of them spread out batch workload; some raise, lower, and move LPAR caps around; some are simply far more CPU efficient than popular alternatives.
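For anyone unfamiliar with why spreading workload around helps: sub-capacity software bills key off the monthly *peak* of the rolling four-hour MSU average, so flattening a spike lowers the bill even when total consumption is unchanged. A minimal sketch (this is an illustration of the sliding-window idea, not IBM's actual SCRT algorithm; the sample values are made up):

```python
# Illustrative model: R4HA as a sliding-window mean over 5-minute MSU samples.
from collections import deque

def peak_r4ha(msu_samples, interval_minutes=5):
    """Return the peak rolling 4-hour average over a series of MSU samples."""
    window_size = (4 * 60) // interval_minutes  # 48 samples per 4 hours
    window = deque()
    total = 0.0
    peak = 0.0
    for msu in msu_samples:
        window.append(msu)
        total += msu
        if len(window) > window_size:
            total -= window.popleft()      # slide the 4-hour window forward
        if len(window) == window_size:
            peak = max(peak, total / window_size)
    return peak

# Eight hours of steady load vs. the same period with a 4-hour batch spike:
flat = [100] * 96                  # steady 100 MSU
spiky = [100] * 48 + [200] * 48    # spike concentrated in one window
print(peak_r4ha(flat))             # 100.0
print(peak_r4ha(spiky))            # 200.0 -- the spike alone sets the bill
```

The point of the toy numbers: the spiky series burns only 50% more total MSU but doubles the billed peak, which is why moving batch out of the peak window is worth real money.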

In benchmarks, our flagship product uses only 8% to 28% of the CPU time our nearest competitor uses to service *exactly* the same request -- and that doesn't even count the fact that ours can redirect 94% of that work to a zIIP (if available) without lowering ITR/ETR. We got there by taking performance seriously.

My question is simple. Why not do things (much) more efficiently if you can?

System z engineers have spent many millions of dollars implementing new instructions intended to facilitate micro-optimization of programs by minimizing memory accesses, which have become UNBELIEVABLY slow relative to processor speed. To put that into perspective, it took 13 cycles to access a doubleword operand on a S/360 Model 91. That same memory access on a zEC12 can now be up to 75 TIMES slower (relatively speaking).
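The arithmetic behind that comparison, spelled out (only the 13-cycle figure and the 75x ratio come from the paragraph above; everything else is back-of-the-envelope):

```python
# Relative memory latency: then vs. now, in processor cycles.
model_91_cycles = 13       # doubleword operand access, S/360 Model 91
relative_slowdown = 75     # "up to 75 TIMES slower (relatively speaking)"

zec12_equivalent_cycles = model_91_cycles * relative_slowdown
print(zec12_equivalent_cycles)  # 975 -- cycles a zEC12 core could stall
                                # waiting on one uncached memory access
```

At roughly 5.5 GHz, ~975 cycles is on the order of 175 ns of wall-clock time per miss, which is why every avoided memory reference matters so much more today than it did on the Model 91.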

"Denial is not just the name of a river in Egypt." - Anonymous

--
Edward E Jaffe
Phoenix Software International, Inc
831 Parkview Drive North
El Segundo, CA 90245
http://www.phoenixsoftware.com/
