200 processors in a CEC?
A couple of production sysplexes with fully configured internal CFs, plus
some test LPARs and CFs to use any spare capacity (capacity which could be
reclaimed for production should it be needed), and you could easily get to
200 processors.
One customer I knew had a test sysplex bigger than production: they
typically ran the maximum expected production workload plus 50% more
transactions per second.
I think internal CFs are faster than external CFs because of their
proximity to the data; it is a speed-of-light problem.  Saving 10
microseconds per CF request soon mounts up and reduces your overall
transaction response time.
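To put a rough number on that saving (the per-transaction request count
below is an assumed figure for illustration, not a measurement):

```python
# Only the 10-microsecond-per-request saving comes from the text above;
# 200 CF requests per transaction is an assumed, illustrative figure.
cf_requests_per_txn = 200
saving_per_request_us = 10

saving_per_txn_ms = cf_requests_per_txn * saving_per_request_us / 1000
print(saving_per_txn_ms)   # 2.0 ms shaved off each transaction's response time
```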

The box with our performance sysplex had to have its LPARs IPLed in a
certain order to maintain a consistent baseline.  If they were IPLed in the
wrong order, the LPARs got non-optimal processors (e.g. a different chip or
a different drawer) and the measurements changed.

Colin

On Sun, 21 Apr 2024 at 08:41, Massimo Biancucci <
[email protected]> wrote:

> I found a YouTube video of the GSE presentation:
> https://www.youtube.com/watch?v=WAjMr4q4lUk
>
> Slide 10 contains the topic.
> It's not fully clear what "system" means here.
> The slide refers to the multiprocessor factor, which should relate to the
> CEC configuration, yet at the same time it talks about adding a processor
> to a member, which should relate to the LPAR configuration.
>
> If it refers to the CEC configuration, that would suggest it's better to
> have more small physical machines (with 8 processors?), which means I
> would need eight 3931-708 CECs to reach 100K MIPS.
> That does not seem so cheap, and it doesn't fully explain why IBM builds
> CECs with 200 processors.
>
> So, in a real configuration with two CECs and N LPARs in data sharing,
> with some partitions having more than 12 GPs + 8 zIIPs, would it be better
> to add a couple more partitions with fewer GPs and zIIPs?
> Polarization is fine.
> Is there a factor (I'm mainly thinking of WLM and the dispatcher) that can
> be stressed by managing such a number of processors and processes?
> Is there any advantage in lowering the number of processes per LPAR?
>
> Best regards.
> Max
>
>
> On Sat, 20 Apr 2024 at 11:34, Graham Harris <
> [email protected]> wrote:
>
> > "It doesn't take an extremely large number of CPUs
> > before a single-image system will deliver less capacity than a sysplex
> > configuration of two systems, each with half as many CPUs".
> >
> > In the original context of the GSE material, does "system" here mean
> > physical CEC, or LPAR?
> >
> > It is unclear, and "It matters".
> >
> > For LPAR setup on a single CEC, in my view, the fewer LPARs the better
> > (although there may well be some "crossing point" with extremely large
> > numbers of CPUs, I have no experience of any differential comparisons to
> > be able to comment; zPCR may give a clue).
> > Much more chance of maximising high-polarity LPs and thus letting
> > HiperDispatch do what it is there to do, which, by design, maximises
> > efficiency.
> >
> > We have smallish CECs for our development environment which have loads
> > of LPARs, and with the significantly bigger engine sizes these days,
> > even on 5xx models, we struggle to get any high-polarity LPs on any
> > LPAR.  Which is not good.
> > I have requested consideration of an "in-between" hardware level between
> > 4xx and 5xx, as the uniprocessor size gap is now just so huge on z16.
> > But I am not holding my breath.
> >
> >
> >
> > On Sat, 20 Apr 2024 at 09:52, Colin Paice <
> > [email protected]> wrote:
> >
> > > IBM provides tables of the CPU capacity available with different
> > > processors and different numbers of engines.  Search for LSPR:
> > > https://www.ibm.com/support/pages/ibm-z-large-systems-performance-reference
> > >
> > > https://www.ibm.com/support/pages/ibm-z-lspr-itr-zosv2r4#ibmz16A02
> > > gives:
> > > 1 CPU  13 MSU
> > > 2      25
> > > 3      37
> > > 4      48
> > > 5      59
> > > 6      69
> > >
> > > So one system with 6 CPUs has 69 MSU, while 6 systems each with one
> > > CPU have 6 x 13 = 78 MSU.
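The multiprocessor cost is easy to read out of those quoted LSPR figures; a
small sketch (numbers copied from the table above):

```python
# MSU by number of CPUs, from the z16 A02 LSPR table quoted above.
msu = {1: 13, 2: 25, 3: 37, 4: 48, 5: 59, 6: 69}

# Capacity added by each extra engine, relative to the first one.
for n in range(2, 7):
    increment = msu[n] - msu[n - 1]
    print(f"CPU {n}: +{increment} MSU ({increment / msu[1]:.0%} of a uniprocessor)")

# One 6-CPU system vs six 1-CPU systems.
print(msu[6], "vs", 6 * msu[1])   # 69 vs 78 MSU: the MP effect costs ~12%
```

Each added engine delivers a little less than the one before (the sixth adds
only 10 of the 13 MSU a uniprocessor delivers), which is the MP effect the
thread is discussing.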
> > >
> > > Part of this is serialisation of data.  If two CPUs want to access the
> > > same piece of real memory they interact.  In simplistic terms, the
> > > microcode may have to go to a different physical chip to ensure only
> > > one processor is using the RAM.  If the two CPUs are adjacent on a
> > > chip, it is faster.
> > >
> > > We had multiple threads running in an address space.  We used a common
> > > buffer for storing trace data from the threads, and used Compare and
> > > Swap to update the "next free buffer" pointer.  I think about 10-20% of
> > > the total address space CPU was used for this CS instruction, because
> > > every thread was trying to get exclusive access to the field and the
> > > instruction had to spin waiting for the buffer.  The more CPUs, the
> > > more spinning - and so less CPU available for productive work.
> > > We solved the problem by giving each thread its own trace buffer and
> > > merged these when processing the dump.  This hotspot simply
> > > disappeared.
> > > Colin
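A minimal Python sketch of the fix described above. Python has no Compare
and Swap instruction, so a lock stands in for the CS spin on the shared
"next free buffer" field; class and method names are illustrative, not from
the original code.

```python
import threading

class SharedTraceBuffer:
    """The original, contended design: one buffer, one hot shared pointer."""
    def __init__(self):
        self.entries = []
        self.lock = threading.Lock()      # stands in for the CS spin loop

    def trace(self, entry):
        with self.lock:                   # every thread serialises here
            self.entries.append(entry)

class PerThreadTraceBuffer:
    """The fix: each thread traces into its own buffer, merged at dump time."""
    def __init__(self):
        self._local = threading.local()
        self._buffers = []
        self._register_lock = threading.Lock()

    def trace(self, entry):
        buf = getattr(self._local, "buf", None)
        if buf is None:                   # first trace from this thread only
            buf = self._local.buf = []
            with self._register_lock:     # rare: once per thread
                self._buffers.append(buf)
        buf.append(entry)                 # hot path: no shared field touched

    def merge(self):
        """Combine the per-thread buffers when processing the dump."""
        return [e for buf in self._buffers for e in buf]
```

On the hot path, trace() touches only thread-local state, so the contention
Colin measured cannot occur; the cost moves into the offline merge step.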
> > >
> > >
> > >
> > > On Fri, 19 Apr 2024 at 09:33, Massimo Biancucci <
> > > [email protected]> wrote:
> > >
> > > > Hi everybody,
> > > >
> > > > In a presentation at GSE I saw a slide with a graph about the
> > > > advantage of having several small sysplex LPARs versus one bigger
> > > > one.  So, for instance, it's better to have 5 LPARs with 4 processors
> > > > each than one LPAR with 20.
> > > >
> > > > There was a sentence: "It doesn't take an extremely large number of
> > > > CPUs before a single-image system will deliver less capacity than a
> > > > sysplex configuration of two systems, each with half as many CPUs".
> > > > And: "In contrast to a multiprocessor, sysplex scaling is near
> > > > linear.  Adding another system to the sysplex may give you more
> > > > effective capacity than adding another CP to an existing system."
> > > >
> > > > We've been told (by IBM Labs, it seems) that 4-way data sharing with
> > > > 8 CPUs each performs 20% better than a single LPAR with 32 CPUs.
> > > > We've heard the same elsewhere (at another customer site): "having
> > > > more than 8 CPUs in a single LPAR is counterproductive".
> > > >
> > > > Putting all this information together, it seems better to have many
> > > > small partitions (how small?) in data sharing than, say, four bigger
> > > > ones (also in data sharing).
> > > >
> > > > Does anybody there have direct experience of building and measuring
> > > > such scenarios?  Mainly standard CICS/batch/Db2 applications.
> > > > Of course I'm talking about well-defined LPARs with high-polarity
> > > > CPUs, so don't worry about that aspect.
> > > >
> > > > Could you share your thoughts (direct experience would be better)
> > > > about where the inefficiency comes from?
> > > > Excluding HW issues (polarization and so on), could it come from
> > > > z/OS-related inefficiency (WLM queue management)?
> > > > If so, do zIIP CPUs contribute to the growing inefficiency?
> > > >
> > > > I know the usual response is "it depends"; anyway, I'm looking for
> > > > general guidelines that allow me to choose.
> > > >
> > > > Thanks a lot in advance for your valuable time.
> > > > Max
> > > >
> > > >
> > > >
> > > > ----------------------------------------------------------------------
> > > > For IBM-MAIN subscribe / signoff / archive access instructions,
> > > > send email to [email protected] with the message: INFO IBM-MAIN
> > > >
> > >
> > >
> >
> >
>
>

