Gregory,

The 9117-MMD could range from 1 chip/4 cores all the way up to 16 chips/64
cores at either 3.80 or 4.22 GHz. If it has 15 cores, then it was likely
the 4.22 GHz 5 chip/15 core version. Using 10 out of 15 cores (even at 100%
busy) should fit on 5 z14 ZR1 or z14 M0x IFLs. Sounds like there is
something causing thrashing. Do you have a z/VM performance product
(Velocity or IBM?)? That might help isolate where the bottleneck is.
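
In the meantime, a rough check you can run inside the guest with no extra
tooling (a sketch, assuming the standard Linux /proc layout): the "cpu" line
of /proc/stat carries cumulative steal time as its 8th value, i.e. jiffies
the virtual CPUs spent waiting for z/VM to dispatch them. If the delta climbs
quickly under load, the contention is at the hypervisor rather than inside
Linux.

```shell
# Sample cumulative steal jiffies twice, 5 seconds apart, and print the
# delta. In awk, $1 is "cpu", so the steal counter is field $9.
s1=$(awk '/^cpu /{print $9}' /proc/stat)
sleep 5
s2=$(awk '/^cpu /{print $9}' /proc/stat)
echo "steal jiffies in 5s: $((s2 - s1))"
```

This only shows that the LPAR is waiting on the box; a real performance
product would still be needed to see which other LPAR or guest is taking
the cycles.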

Jim Elliott
Senior IT Consultant - GlassHouse Systems Inc.


On Tue, Nov 3, 2020 at 12:25 PM Grzegorz Powiedziuk <gpowiedz...@gmail.com>
wrote:

> Hi Jim,
> correction - we have a z14, not a z114
> ... not sure why I keep calling our z14 a z114 ;)
>
> We have 16 IFLs in total, shared across 5 z/VM LPARs, but there is
> literally nothing running there yet besides this one huge VM, which has 10
> IFLs configured. We have plenty of spare memory left and this one VM has
> 150G configured. That is about the same as what they had for this database
> when it was on AIX. The number was calculated by the DBAs and it seems OK.
> I am not sure how to tell whether DB2 is happy with what it has, but the
> Linux OS is definitely not starving for memory.
>
> That P7 was a 9117-MMD. And I just found that it had EC (entitled
> capacity) set to 10 but could pull up to 15 processors. I am not sure how
> that works over there.
>
>
>
> On Tue, Nov 3, 2020 at 10:58 AM Jim Elliott <jlelliot...@gmail.com> wrote:
>
> > Gregory:
> >
> > Do you have a z114 with 10 IFLs? That is the maximum number of IFLs
> > available on a z114 (2818-M10) and would be unusual. Is this a single
> > z/VM LPAR? How much memory is on the z114 (and in this LPAR)? Also,
> > what was the specific MT/Model for the P7 box?
> >
> > If you were to compare a 12-core Power 730 (8231-E2C) to a 10 IFL z114,
> > the Power system has 1.4 to 2.0 times the capacity of the z114.
> >
> > Jim Elliott
> > Senior IT Consultant - GlassHouse Systems Inc.
> >
> >
> > On Tue, Nov 3, 2020 at 8:47 AM Grzegorz Powiedziuk <gpowiedz...@gmail.com>
> > wrote:
> >
> > > Hi, I could use some ideas. We moved a huge DB2 from an old P7 AIX box
> > > to RHEL 7 on Z and we are having big performance issues.
> > > Same memory; the CPU count is down from 12 to 10, although they had
> > > multithreading ON so they saw more "CPUs". We have faster disks (moved
> > > to flash), faster FCP cards, and faster network adapters.
> > > We are running on a z114 and at this point this is practically the
> > > only VM running on IFLs on this box.
> > >
> > > It seems that when "jobs" run on their own, they finish faster than
> > > they did on AIX. But problems start when there is more than we can
> > > chew: either a few jobs running at the same time, or some reorgs
> > > running in the database.
> > >
> > > Load average goes to 150-200, CPUs are at 100% (kernel time can go to
> > > 20-30%), but no iowaits.
> > > Plenty of memory available.
> > > At this point everything becomes extremely slow; people start having
> > > problems connecting to DB2 (and sshing in). Basically it becomes a
> > > nightmare.
> > >
> > > This DB2 is massive (30+ TB) and it is a multinode configuration (17
> > > nodes running on the same host). We moved it 1:1 from that old AIX box.
> > >
> > > DB2 is running on ext4 filesystems (actually a huge number of
> > > filesystems: each NODE is a separate logical volume, with separate
> > > ones for logs and data).
> > >
> > > If this continues, we will add 2 CPUs, but I have a feeling that it
> > > will not make much difference.
> > >
> > > I know that we end up with a massive number of processes and a massive
> > > number of file descriptors (lsof, since it now also shows threads, is
> > > practically useless; it would run for way too long, probably 10-30
> > > minutes).
> > >
> > > A snapshot from just now:
> > >
> > > top - 08:37:50 up 11 days, 12:04, 28 users,  load average: 188.29, 151.07, 133.54
> > > Tasks: 1843 total,  11 running, 1832 sleeping,   0 stopped,   0 zombie
> > > %Cpu0  : 76.3 us, 16.6 sy,  0.0 ni,  0.0 id,  0.0 wa,  1.0 hi,  3.2 si,  2.9 st
> > > %Cpu1  : 66.1 us, 31.3 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.6 hi,  1.3 si,  0.6 st
> > > %Cpu2  : 66.9 us, 31.2 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.3 hi,  1.3 si,  0.3 st
> > > %Cpu3  : 74.7 us, 23.4 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.3 hi,  1.3 si,  0.3 st
> > > %Cpu4  : 86.7 us, 10.7 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.6 hi,  1.3 si,  0.6 st
> > > %Cpu5  : 83.8 us, 13.6 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.6 hi,  1.6 si,  0.3 st
> > > %Cpu6  : 81.6 us, 15.2 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.6 hi,  1.9 si,  0.6 st
> > > %Cpu7  : 70.6 us, 26.2 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.6 hi,  1.9 si,  0.6 st
> > > %Cpu8  : 70.5 us, 26.6 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.6 hi,  1.6 si,  0.6 st
> > > %Cpu9  : 84.1 us, 13.6 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.3 hi,  1.3 si,  0.6 st
> > > KiB Mem : 15424256+total,  1069280 free, 18452168 used, 13472112+buff/cache
> > > KiB Swap: 52305904 total, 51231216 free,  1074688 used. 17399028 avail Mem
> > >
> > > Where can I look for potential relief? Everyone was hoping for better
> > > performance, not worse. I am hoping that there is something we can
> > > tweak to make this better.
> > > I will appreciate any ideas!
> > > thanks
> > > Gregory
> > >
> > > ----------------------------------------------------------------------
> > > For LINUX-390 subscribe / signoff / archive access instructions,
> > > send email to lists...@vm.marist.edu with the message: INFO LINUX-390
> or
> > > visit
> > > http://www2.marist.edu/htbin/wlvindex?LINUX-390
> > >
> >
>
