On 2 December 2010 05:53, Ron Hawkins <ron.hawkins1...@sbcglobal.net> wrote:
> Johnny,
>
> The saving in hardware assisted compression is in decompression - when you 
> read it. Look at what should be a much lower CPU cost to decompress the files 
> during restore and decide if the speed of restoring the data concurrently is 
> worth the increase in CPU required to back it up in the first place.

I am a little surprised at this. Certainly for most of the current
dynamic-dictionary-based algorithms (and many others besides),
decompression will, except in pathological cases, be a good deal
faster than compression. This is intuitively obvious: the compression
code must not only go through the mechanics of transforming input
data into the output codestream, but must do so with some eye to
actually compressing as well as it can with the knowledge available
to it, rather than making things worse. The decompression simply
takes what it is given and algorithmically transforms it back, with
no choices to make.
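As a quick, informal illustration (not the mainframe hardware path,
just a software analogue), here is a sketch using Python's zlib, a
DEFLATE dictionary-based codec: the compressor has to search for
matches, while the decompressor only replays the codestream, so the
second timing typically comes out far lower than the first.

```python
# Informal timing sketch: dictionary-based compression (zlib/DEFLATE)
# versus its decompression. The compressor searches for matches; the
# decompressor merely replays the encoded stream.
import time
import zlib

# Compressible sample data (assumed for illustration only).
data = b"the quick brown fox jumps over the lazy dog " * 50_000

t0 = time.perf_counter()
compressed = zlib.compress(data, 6)   # default-ish effort level
t_comp = time.perf_counter() - t0

t0 = time.perf_counter()
restored = zlib.decompress(compressed)
t_dec = time.perf_counter() - t0

assert restored == data               # round trip is lossless
print(f"compress:   {t_comp:.4f}s")
print(f"decompress: {t_dec:.4f}s")
```

On typical data the decompression time is a small fraction of the
compression time, which is the asymmetry described above.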

Whether hardware-assisted decompression (which in this case means
decompression using the tree-manipulation instructions) is
disproportionately faster than the corresponding compression, I don't
know, but I'd be surprised if it were much different.

But regardless, surely it is a strange claim that an installation
would use hardware-assisted compression in order to make its restores
faster, particularly at the expense of its dumps. What would be the
business case for such a thing? How many installations do restores on
any kind of regular basis? And how many need those restores to run
even faster than they naturally do when compared with the dumps?

Tony H.

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
