Hello Jim,

        Guess what.  We just doubled our CPU.  That is why I can
actually see a change.   Before I would run at 100% from 5:30 am on
Monday morning until about 3:30 pm on Thursday afternoon.

        We went from an MP3000 H50 to a z890-160.
        
        Now we are planning on a better I/O system.

        Each bottleneck removed just moves the bottleneck.

        And we had LOTS of latent demand.

        ONWARD WITH TUNING.

        Another question: is MDC better with STORAGE or XSTORE?
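
        For reference, the knobs I mean are along these lines (this is
from memory, so the syntax may be off, and the sizes are only
placeholders, not recommendations):

        CP QUERY MDCACHE                show current MDC limits and usage
        CP SET MDCACHE STORAGE 0M 512M  cap MDC's use of central storage
        CP SET MDCACHE XSTORE 0M 1024M  cap MDC's use of expanded storage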
        

Ed Martin 
Aultman Health Foundation
330-588-4723
[EMAIL PROTECTED] 
ext. 40441

> -----Original Message-----
> From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED]]
> On Behalf Of Jim Bohnsack
> Sent: Wednesday, July 12, 2006 10:48 AM
> To: [email protected]
> Subject: Re: MDC, Storage, xstore, and cache on External dasd.
> 
> Your results when using MDC suggest that you were I/O bound without
> MDC.  You were I/O constrained and now, by doing 25% more work, you're
> CPU constrained.  You had more work to do than you could get done
> before and by running at 100% now, you may still have more work to do
> than you are able to handle.  Performance tuning is always a matter of
> getting past one bottleneck and then being constrained by the next one.
> 
> Buy a faster CPU and give my IBM stock a boost.
> Jim
> 
> At 10:16 AM 7/12/2006, you wrote:
> >Hello, and thanks to everyone,
> >
> >         I do appreciate everyone's input and opinions.   We have the
> >memory.
> >
> >8 gig total,  5 gig defined for storage,  2 gig to xstore, and the
> >rest used by the HMC.
> >
> >         I do think that the problem is that MDC is only hitting 77-80%
> >and the CPU gets driven up to 100%.   It was at 92% before I did the
> >SET MDC SYSTEM ON.   I am weighing the overall results of MDC against
> >storage and CPU.
> >
> >         This is a NOMAD2/ULTRAQUEST/TCPIP set of transactions.
> >
> >q xstore
> >XSTORE= 2048M online= 2048M
> >XSTORE= 2048M userid= SYSTEM usage= 51% retained= 0M pending= 0M
> >XSTORE MDC min=0M, max=1024M, usage=49%
> >XSTORE= 2048M userid=  (none)  max. attach= 2048M
> >Ready; T=0.01/0.01 10:01:25
> >q store
> >STORAGE = 5G
> >Ready; T=0.01/0.01 10:01:59
> >ind
> >AVGPROC-099% 01
> >XSTORE-000000/SEC MIGRATE-0000/SEC
> >MDC READS-000488/SEC WRITES-000006/SEC HIT RATIO-077%
> >STORAGE-012% PAGING-0001/SEC STEAL-000%
> >Q0-00001(00000)                           DORMANT-00018
> >Q1-00000(00000)           E1-00000(00000)
> >Q2-00000(00000) EXPAN-001 E2-00000(00000)
> >Q3-00005(00000) EXPAN-001 E3-00000(00000)
> >PROC 0000-099%
> >LIMITED-00000
> >
> >Ed Martin
> >Aultman Health Foundation
> >330-588-4723
> >[EMAIL PROTECTED]
> >ext. 40441
> > > -----Original Message-----
> > > From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED]]
> > > On Behalf Of Tom Duerbusch
> > > Sent: Tuesday, July 11, 2006 3:06 PM
> > > To: [email protected]
> > > Subject: Re: MDC, Storage, xstore, and cache on External dasd.
> > >
> > > Your concern is justified.
> > >
> > > The question is....real memory vs CPU.
> > >
> > > You shouldn't have much of an I/O bottleneck with your caching
> > > controller, assuming you have FICON or better channel speeds.
> > >
> > > But if your read I/O is satisfied from MDC, you won't go through the
> > > I/O boundary, which is a saving in CPU time.
> > >
> > > So the question becomes: can you allocate sufficient real memory for
> > > MDC to get a high enough MDC read hit ratio to see a real savings in
> > > CPU?  Or do you care about a few percent savings in CPU?
> > >
> > > If you are tight in main memory, it may be better to eliminate MDC
> > > and use the memory to reduce paging.
> > > If you are tight in CPU, then the CPU savings may be worth it.
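> > >
> > > A rough way to see which side of that tradeoff you are on (just a
> > > sketch; your performance monitor will show the same numbers in more
> > > detail, and the command names are from memory):
> > >
> > >    IND              watch PAGING-nnnn/SEC against the MDC HIT RATIO
> > >    QUERY MDCACHE    should show the current MDC min/max and usage
> > >
> > > If paging keeps climbing while the hit ratio stays low, the memory
> > > is probably better spent on the users than on MDC.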
> > >
> > > An old rule of thumb was that caching closer to the application is
> > > better than caching farther away from the application.  But that is
> > > only true if the memory for caching is of equal size.  I would
> > > rather have 6 GB of controller cache than 2 MB for VSAM buffers.
> > >
> > > Anyway, I would experiment with MDC cache.  If you can't get a high
> > > hit ratio, say 95% or better, I would turn it off.  But there is
> > > always "that application" that may benefit greatly, for a short
> > > period of time, from the use of MDC.
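> > >
> > > And if it never pays off, turning it off is just (as I recall, so
> > > check the CP command reference first):
> > >
> > >    CP SET MDCACHE SYSTEM OFF
> > >
> > > and I believe individual minidisks can be excluded with a MINIOPT
> > > NOMDC statement in the directory, if only certain disks are the
> > > problem.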
> > >
> > > Tom Duerbusch
> > > THD Consulting
> > >
> > > >>> [EMAIL PROTECTED] 7/11/2006 1:27 PM >>>
> > > Hello Everyone,
> > >
> > >       I have found some time here to re-evaluate some parameters.
> > >
> > >       We have a large amount of cache (6 gig) on the EMC box.  The
> > > EMC is doing lots of caching.
> > >
> > >       I am wondering about the overhead of the dual caching and the
> > > benefits.  It seems to me that having MDC on for the system is just
> > > overhead and dual caching.
> > >
> > > z/VM side
> > > q cache 740
> > > 0740 CACHE 0 available for subsystem
> > > 0740 CACHE 1 available for subsystem
> > > 06324150K Bytes configured
> > > 06324150K Bytes available
> > > 00000000K Bytes offline
> > > 00000000K Bytes pinned
> > >
> > > 0740 CACHE activated for device
> > >
> > > VSE/ESA side
> > >
> > > cache subsys=740,status
> > > AR 0015 SUBSYSTEM CACHING STATUS: ACTIVE
> > > AR 0015         CACHE FAST WRITE: ACTIVE
> > > AR 0015            CACHE STORAGE: CONFIG.  .......   6324150K
> > > AR 0015            CACHE STORAGE: AVAIL.   .......   6324150K
> > > AR 0015               NVS STATUS: AVAILABLE
> > > AR 0015              NVS STORAGE: CONFIG.  .......    196608K
> > > AR 0015 1I40I  READY
> > >
> > > cache subsys=740,report
> > >
> > > AR 0015 3990-E9 SUBSYSTEM COUNTERS REPORT
> > >
> > > AR 0015 VOLUME 'RAM040' DEVICE ID=X'00'
> > >
> > > AR 0015                               CHANNEL OPERATIONS
> > >
> > > AR 0015                 <----SEARCH/READ----> <-------------WRITE------------>
> > > AR 0015                    TOTAL   CACHE-READ    TOTAL  CACHE-WRITE  DASD-FAST
> > > AR 0015 REQUESTS
> > >
> > > AR 0015   NORMAL         837170781  824709019    7467393      7463857    7467393
> > > AR 0015   SEQUENTIAL      13620747   13148843     168445       168286     168445
> > > AR 0015   CACHE FAST WRT         0          0          0            0        N/A
> > > AR 0015
> > > AR 0015 TOTALS           850791528  837857862    7635838      7632143    7635838
> > > AR 0015
> > > AR 0015 REQUESTS
> > >
> > > AR 0015   INHIBIT CACHE LOADING             0
> > >
> > > AR 0015   BYPASS CACHE                     31
> > >
> > > AR 0015
> > >
> > > AR 0015 DATA TRANSFERS             DASD->CACHE    CACHE->DASD
> > >
> > > AR 0015   NORMAL                      9571687         762405
> > >
> > > AR 0015   SEQUENTIAL                  1600428            N/A
> > >
> > > AR 0015 1I40I  READY
> > >
> > > Ed Martin
> > > Aultman Health Foundation
> > > 330-588-4723
> > > [EMAIL PROTECTED]
> > > ext. 40441
> 
> Jim Bohnsack
> Cornell Univ.
> (607) 255-1760
