1985? You're just a young'un.

-
-teD
-
  Original Message  
From: Martin Packer
Sent: Wednesday, October 1, 2014 15:20
To: [email protected]
Reply To: IBM Mainframe Discussion List
Subject: Re: zOS 1.13 – CPU latent demand

FWIW the vast majority of my customer set have multiple engines in their 
machines and the vast majority of their LPARs are defined as logical 2-way 
and above.

And to think when I got into this game (in 1985) there was still quite a 
lot of suspicion of multiprocessors.

Going from a max of 4 to a max of 6 with 3090E was quite a big deal. And 
from 6 to 8 (and then to 10) with 9021 likewise. Now we add handfuls at a 
time. :-)

Cheers, Martin

Martin Packer,
zChampion, Principal Systems Investigator,
Worldwide Banking Center of Excellence, IBM

+44-7802-245-584

email: [email protected]

Twitter / Facebook IDs: MartinPacker
Blog: 
https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker



From: Anne & Lynn Wheeler <[email protected]>
To: [email protected]
Date: 01/10/2014 20:16
Subject: Re: zOS 1.13 – CPU latent demand
Sent by: IBM Mainframe Discussion List <[email protected]>



[email protected] (Gibney, Dave) writes:
> In my opinion, back in the day, there was a benefit of going to
> fewer/faster engines. But, with a deep drop off a precipice when fewer
> reached one.
>
> Never again will I willingly agree to be on a single CPU machine.

in the past, multiple engines have been beneficial

1) constraining ill-performing tasks, partially compensating
for poor resource management software.

2) increasing importance of cache hit ratios ... careful control of task
switching can significantly improve cache hit ratios and aggregate
throughput. multiple engines can also help compensate for poor resource
management software by minimizing task switches that would otherwise
force cache contents to be replaced each time.

large mainframes used to pay a large penalty going from one processor to
two processors (and an enormous penalty going to four processors). two
processor 370 hardware used to be rated at only 1.8 times a single
processor (clocks slowed down to handle cross-cache invalidation), and
throughput was typically rated at 1.3-1.5 times a single processor
(because of enormous operating system multiprocessor overhead).
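The arithmetic above can be sketched as a back-of-the-envelope calculation. The 0.9 clock factor and 1.8x hardware figure come from the paragraph above; the two OS-efficiency factors are assumptions chosen only to reproduce the quoted 1.3-1.5x range:

```python
# Back-of-the-envelope two-way 370 MP throughput, using the figures above.
SINGLE = 1.0                  # one-processor throughput, normalized
clock_factor = 0.9            # each engine slowed ~10% for cross-cache invalidation
hardware_mp = 2 * SINGLE * clock_factor   # raw two-processor capacity: 1.8x

# Operating-system multiprocessor overhead eats a further chunk of that
# capacity; these efficiency factors are illustrative assumptions.
for os_efficiency in (0.72, 0.83):
    effective = hardware_mp * os_efficiency
    print(f"effective two-way throughput: {effective:.2f}x of one processor")
```

With those assumed factors the effective throughput lands at roughly 1.30x and 1.49x of a single processor, bracketing the 1.3-1.5x rating quoted above.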

in the mid-70s, i managed to do some sleight of hand where i got nearly
2.0 times throughput with two processors (over one processor). it was
some superfast operating system pathlengths for management of two
processors, along with careful task switch management ... that improved
cache hit ratio, improving throughput enough to compensate for the 20%
hardware slowdown (part of it was logically a little like SAPs, where I/O
interrupt handling could be partially batched on the same processor,
improving interrupt handling throughput because of cache affinity of the
interrupt handler and improving cache hit ratio of applications on the
other processor with fewer interrupts).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN





