After thinking about it a little more since my reply on Friday...

If you have a good performance monitor that can measure the MDC hit rate
on your Linux packs (note I said hit rate, not hit ratio), the call is
fairly simple: if MDC has a high hit rate, then MDC is working well for
that pack.  If it has a low hit rate, it may make sense to turn it off.

Why hit rate instead of hit ratio?  You can have a high hit ratio, but
if you are only doing 1 I/O per second, it doesn't matter.

But if the MDC hit rate eliminates 50 I/Os per second (for example),
then MDC is working well.
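To put numbers on the distinction (a hypothetical sketch; the hit and read
counters would come from your performance monitor):

```python
def mdc_stats(hits, reads, interval_secs):
    """Return (hit_ratio, hit_rate) for one MDC measurement interval."""
    hit_ratio = hits / reads         # fraction of reads satisfied from MDC
    hit_rate = hits / interval_secs  # real I/Os eliminated per second
    return hit_ratio, hit_rate

# Pack A: 90% hit ratio, but under 1 I/O/sec saved -- MDC buys almost nothing.
ratio_a, rate_a = mdc_stats(hits=54, reads=60, interval_secs=60)

# Pack B: only a 40% hit ratio, yet it eliminates 50 real I/Os every second --
# this is the pack where MDC is earning its keep.
ratio_b, rate_b = mdc_stats(hits=3000, reads=7500, interval_secs=60)
```

By the ratio alone, pack A (90%) looks better than pack B (40%); by the
rate, pack B is the one doing real work.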

Of course, if the combined load from all the other MDC-enabled dasd is
swamping the MDC, then MDC will, in total, show a low hit ratio.  If the
dasd subsystem can handle the I/O, turn off MDC for your low-priority
workloads and/or increase the storage allotted to MDC.  If your dasd
subsystem can't handle the I/O workload, then either increase caching
(to reduce the real I/Os), add channels or controllers (or convert to
FICON) to increase I/O capacity, or change your applications to reduce
their I/O needs.
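For reference, the knobs for this are CP's MDC settings.  A rough sketch
(written from memory; verify the exact operands against the CP Commands
and Utilities reference for your z/VM level):

```
CP QUERY MDCACHE                 show current MDC settings and usage
CP SET MDCACHE STORAGE 0M 256M   bound the real storage CP may use for MDC
CP SET MDCACHE SYSTEM OFF        turn MDC off system-wide (drastic;
                                 per-minidisk control in the directory
                                 is usually the better tool)
```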

What I was trying to say last Friday (though, rereading my post, I never
said it directly) was:

What is your critical resource?

Currently, mine is main memory, so my perception is mostly from a memory
viewpoint.  Funny, we went from 1 GB to 8 GB, and memory is still what I
think is the limiting factor in determining the amount of work we can
do.  (I used to run Oracle under VM on a 4381-92E with 16 MB that also
supported two VSE machines serving 400 users.  16 MB!  Now Oracle 10g
needs 512 MB just to install.  I hope to get test systems running in
about a quarter of that.)

Tom Duerbusch
THD Consulting

Phil Smith III wrote:
James Melin <[EMAIL PROTECTED]> wrote:

My VM guy here is saying that he read someplace that having mini-disk
cache turned on for mini-disk volumes used by Linux as system disks is a
good thing.


My position has always been that Linux is caching what VM is caching, so
it's a double fault.  Since I'm not the VM guru (and neither is he,
really; he's been doing VM here for just over a year), I don't have any
traction in getting this changed.


As others have noted, "It depends" (and some of what I'm about to write also maps to what you and 
others have said; I'm not going to keep saying "as x said", for brevity).  At the one end of the 
spectrum you have swap, which probably never (for varying values of "never") makes sense to cache.  
At the other end, you might have a heavily used R/O minidisk, which makes a fair amount of sense to cache *if 
the performance boost that this suggests is important*.

Yes, stuff will get double-cached (triple-cached, if you count the controller) 
to some extent; you can't stop that.  But you can control it a bit, by 
carefully tuning virtual machine sizes to minimize the amount of storage 
available for file cache, and then use minidisk cache.  Assuming you have any 
handle on load on the machines, of course.

VM minidisk cache is pretty smart; I'd be surprised if turning it on was 
generally a *bad* thing.  Certainly turning it on for any frequently read data 
seems like it should be a good thing.  We do recommend that the R/O minidisks 
that are part of the shared filesystem used by our product be cached, but 
that's a special case.
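For per-minidisk control, the usual place is the CP user directory:
MINIOPT NOMDC following an MDISK statement disables caching for just
that minidisk.  A hypothetical entry (user name, device numbers, and
extents invented; check the syntax against your directory manager):

```
USER LINUX01 XXXXXXXX 512M 1G G
* Frequently read system disk: leave MDC on (the default)
  MDISK 0191 3390 0001 0100 VOL001 MR
* Swap disk: Linux is effectively caching it already, so tell CP not to
  MDISK 0201 3390 0101 0200 VOL001 MR
  MINIOPT NOMDC
```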

And, of course, the bottom line is that VM can generate monitor data to let you 
really see whether it's doing any good.  And Barton (or ASG) will be happy to 
sell you his product to analyze that data.

...phsiii

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


