On Fri, 14 Jul 2006 08:45:55 -0400, Bill Bitner <[EMAIL PROTECTED]> 
wrote:

>Brian, when you say 'relative performance metrics' are you asking about
>the CPU cost associated with the various paths? If so, I don't think
>I have any current data on that. Partly because it does depend on a lot
>of different things.

Not just the CPU cost.  For example, comparing the performance of an MDC 
cache hit with a controller cache hit requires comparing the MDC CPU cost 
with the I/O CPU cost plus the channel delay time.
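
To make that concrete, here is a rough back-of-the-envelope model in 
Python.  The numbers are entirely made up; the real values would have to 
come from measurement.

    # Hypothetical per-operation costs in microseconds (placeholders only).
    mdc_hit_cpu   = 50.0    # CP CPU to satisfy a read from the minidisk cache
    io_cpu        = 80.0    # CPU to build, issue, and handle a real I/O
    channel_delay = 300.0   # channel + control unit time for a controller cache hit

    mdc_hit_cost        = mdc_hit_cpu             # no real I/O is done
    controller_hit_cost = io_cpu + channel_delay  # real I/O, but no disk access

    print(f"MDC hit: {mdc_hit_cost} us, controller cache hit: {controller_hit_cost} us")

The point is that the right comparison is against the whole I/O (CPU plus 
channel time), not CPU against CPU.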

>My suggestion there is to find a virtual machine
>that is fairly 'typical' and, using the methods Barton & Rob described,
>get the r/w view. Then do an experiment with and without MDC.

It's on my list to do after I've researched existing information.

>The tricky part here, as was pointed out, is shared minidisks. Look at
>the CP CPU time for the guest with and without MDC, prorated on a
>per-virtual-I/O basis.
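
If I follow that suggestion, the prorating would look something like this 
(a sketch with placeholder numbers, not measurements):

    # Hypothetical CP CPU seconds for the same guest over comparable intervals.
    cp_cpu_with_mdc    = 125.0
    cp_cpu_without_mdc = 118.0
    virtual_io_count   = 2_500_000   # virtual I/Os issued in the interval

    per_io_cost = (cp_cpu_with_mdc - cp_cpu_without_mdc) / virtual_io_count
    print(f"MDC cost: {per_io_cost * 1e6:.2f} microseconds per virtual I/O")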

In my particular Linux environment we don't use shared minidisks.  The 
Linux guests don't even share the same volumes.  Almost all minidisks are 
fullpack (excluding cylinder zero).

>I should mention that they did pay a lot of attention to pathlengths
>when MDC was implemented, so I would not expect the difference to
>be significant in terms of processor time. At least not enough to
>make the time it takes to continue to do that type of analysis
>worthwhile.

The cost of an MDC read cache miss followed by an I/O, compared to the 
cost of just the I/O, is a ratio that is very sensitive to the relative 
magnitudes of the MDC path length and the I/O path length and channel times.
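
A tiny illustration of that sensitivity, again with made-up numbers:

    # Hypothetical costs in microseconds; the real ratio depends entirely on
    # the actual MDC pathlength and the actual I/O service time.
    mdc_miss_cpu = 60.0
    for io_time in (500.0, 2000.0, 8000.0):   # controller cache hit .. slow disk I/O
        overhead = (mdc_miss_cpu + io_time) / io_time
        print(f"I/O time {io_time:7.0f} us -> miss overhead ratio {overhead:.3f}")

Against an 8 ms disk read a 60-microsecond miss is noise; against a 
500-microsecond controller cache hit it is a 12% penalty.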

I'm sure that they paid a lot of attention to the MDC pathlengths, but 
they're not zero.  If the MDC pathlengths were an insignificant fraction 
of the time to perform an I/O, then I would not expect the recommendations 
I've seen to turn MDC off for workloads that are not almost all reads.  
My motivation is to understand the underlying performance costs well 
enough to make more intelligent decisions.
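
That reasoning suggests a simple break-even model.  This is only a sketch 
with hypothetical parameters; getting the real ones is the whole point of 
the exercise.

    # Hypothetical per-operation costs in microseconds (placeholders only).
    mdc_hit_cpu  = 50.0     # cost of a read satisfied from MDC
    mdc_overhead = 60.0     # lookup/insert cost paid when MDC doesn't help
    io_time      = 2000.0   # average cost of the real I/O that a hit avoids
    hit_ratio    = 0.85     # fraction of cached reads that hit

    def net_benefit(read_fraction):
        """Average saving per virtual I/O with MDC on versus MDC off."""
        reads, writes = read_fraction, 1.0 - read_fraction
        saved  = reads * hit_ratio * (io_time - mdc_hit_cpu)
        wasted = reads * (1.0 - hit_ratio) * mdc_overhead + writes * mdc_overhead
        return saved - wasted

    for rf in (0.95, 0.50, 0.05):
        print(f"read fraction {rf:.2f}: net benefit {net_benefit(rf):+8.1f} us per I/O")

With numbers like these MDC wins even for write-heavy work, which is 
exactly why I want the real numbers: the advice to turn MDC off for 
workloads that aren't mostly reads only makes sense if the overheads are 
a much larger fraction of the I/O time than I've assumed here.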

>I'm sure there are extreme cases (such as using mapped minidisks to
>dataspaces and MDC at the same time) where costs are noticeable.

No dataspaces here, just garden variety minidisks.

>So, what we have found more effective is go after the
>big hitters, such as read-once/write-once, write-only disks, etc.
>and make sure MDC is off for those.

Most of the Linux data is in LVMs (each Linux guest usually has just one 
LVM) spread across lots of physical DASD volumes, so tuning MDC at the 
minidisk level is pretty much a non-starter.  That's why I'm focusing on 
tuning MDC at the userid level.

Brian Nielsen
