In a message dated 5/25/2007 8:23:00 A.M. Central Daylight Time,  
[EMAIL PROTECTED] writes:
>Depending on your actual spinning DASD, compression may be counterproductive
>for actual disk utilization.  It can save logical space and cache space, but
>if the controller is compressing the data in cache to store it on disk, this
>may foul up the compression algorithm.
 
What else would the controller use to store the working copy of data that it 
is compressing/decompressing, if not the controller's cache storage?  
Controllers have huge amounts of cache storage and much smaller amounts of 
much higher-speed (and much more expensive) storage in which they store all 
the data they need to control I/O operations, just as the processor complex 
has several different flavors of storage with different access times and 
costs per byte.  How can compressing the data in cache foul up the 
algorithm?  I would think that the algorithm requires the data to be in 
cache.  Maybe you meant that the algorithm would foul up the cache?  I can 
see how it would take the controller a lot more time to transfer each data 
block if it has to compress or decompress the block, but that is not because 
the data is in the controller's cache; rather, it is because more microcode 
instructions have to be executed within the controller.

If you want compressed data, somebody's software somewhere has to compress 
it.  You could compress it while it is in a central storage buffer, at the 
expense of executing many z/OS machine instructions (aka "CPU time"); you 
could compress it inside the channel subsystem as it goes from central 
storage to a channel path (if such microcode exists, which I don't know); 
you could compress it inside the channel path as it goes from the channel 
subsystem to the controller (if such microcode exists, which I don't know); 
you could compress it inside the controller just before writing it to the 
real device; or maybe the hard drive has its own compression logic that sits 
between the controller's microcode and the real tracks.  Wherever the 
compression is happening, instructions are being executed that elongate the 
time to do the I/O.  If the controller is doing it, then the controller has 
fewer microcode cycles per second available to do everything else it must be 
doing simultaneously, such as handling a large number of I/O operations with 
other devices that may or may not need compression/decompression.  
Microcircuits are blindingly fast now, but they still don't work 
instantaneously.
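To make the tradeoff concrete, here is a rough sketch in Python, using zlib purely as a stand-in for whatever algorithm a controller's microcode might implement (the 4 KB block and its contents are my own invention for illustration).  It shows the two points at issue: compression costs instruction cycles on every block, and compressing data that was already compressed upstream buys little or nothing:

```python
import time
import zlib

# A hypothetical 4 KB "data block" of highly repetitive text --
# the best possible case for compression.
block = b"IBM-MAIN " * 455  # roughly 4 KB

# Compressing the block takes measurable time: instructions executed
# somewhere, whether in z/OS software or in controller microcode.
start = time.perf_counter()
once = zlib.compress(block)
elapsed_us = (time.perf_counter() - start) * 1e6

print(f"original:         {len(block)} bytes")
print(f"compressed once:  {len(once)} bytes in {elapsed_us:.0f} us")

# If the controller then compresses a block that software already
# compressed, the second pass finds almost no redundancy left, and
# the framing overhead can make the output slightly LARGER.
twice = zlib.compress(once)
print(f"compressed twice: {len(twice)} bytes")
```

The second pass burning cycles for zero (or negative) savings is, I suspect, one way an in-controller compression feature could end up counterproductive.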
 
It seems intuitive that any optional feature designed to optimize resource X 
would likely be counterproductive for resource Y, and I would like to 
understand how compression may be counterproductive for actual disk 
utilization, but I don't see it yet from your comment.  And the word 
"utilization" can mean many different things.
 
Bill Fairchild
Plainfield, IL

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
