> Even the most expensive decompression algorithms generally run
> significantly faster than I/O to disk -- at least when real disks are
> involved.  So, as long as you don't run out of CPU and have to wait
> for CPU to be available for decompression, the decompression will
> win.  The same concept is true for dedup, although I don't
> necessarily think of dedup as a form of compression (others might
> reasonably do so though.)

Effectively, dedup is a form of compression of the filesystem rather
than of any single file, but one oriented toward not interfering with
access to any of the files that may be sharing blocks.

I would imagine that if the data is read-mostly, dedup is a win, but
otherwise it costs more than it saves.  Even with more conventional
compression, the compress side tends to be more resource-intensive
than the decompress side...
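
A quick way to see that asymmetry is to time compression against
decompression on the same data.  Here's a rough sketch using Python's
zlib on synthetic, moderately compressible data -- purely illustrative,
not a measurement of ZFS's own lzjb/gzip paths:

import time
import zlib

# Build ~25 MB of repetitive, text-like data (illustrative only).
line = b"the quick brown fox jumps over the lazy dog %06d\n"
data = b"".join(line % i for i in range(500_000))

t0 = time.perf_counter()
compressed = zlib.compress(data, 6)          # write-side cost
t1 = time.perf_counter()
restored = zlib.decompress(compressed)       # read-side cost
t2 = time.perf_counter()

assert restored == data
mb = len(data) / 1e6
print(f"original:   {len(data):>12,} bytes")
print(f"compressed: {len(compressed):>12,} bytes "
      f"({len(compressed) / len(data):.1%} of original)")
print(f"compress:   {mb / (t1 - t0):8.1f} MB/s")
print(f"decompress: {mb / (t2 - t1):8.1f} MB/s")

On most machines the decompress side comes out several times faster,
which is the asymmetry above; the actual numbers depend entirely on
the data and the algorithm.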

What I'm wondering is when dedup is a better value than compression.
Most obviously, when there are a lot of identical blocks across different
files; but I'm not sure how often that happens, aside from maybe
blocks of zeros (which may well be sparse anyway).
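
One way to get a feel for that on a given dataset is to hash
fixed-size blocks across a directory tree and count duplicates.
A rough sketch, assuming 128K blocks and SHA-256 -- this only
approximates what ZFS would see at its recordsize, and the path and
block size are placeholders:

import hashlib
import os
import sys
from collections import Counter

BLOCK_SIZE = 128 * 1024  # assumed to match a 128K recordsize

def block_hashes(root):
    """Yield a SHA-256 digest for every fixed-size block under root."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    while True:
                        block = f.read(BLOCK_SIZE)
                        if not block:
                            break
                        yield hashlib.sha256(block).digest()
            except OSError:
                pass  # skip unreadable files

def main(root):
    counts = Counter(block_hashes(root))
    total = sum(counts.values())
    unique = len(counts)
    if total == 0:
        print("no data found")
        return
    print(f"total blocks:  {total}")
    print(f"unique blocks: {unique}")
    print(f"estimated dedup ratio: {total / unique:.2f}x")

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else ".")

The ratio this reports is optimistic -- it ignores how small files get
smaller records, and it counts zero-filled blocks that may already be
stored sparsely -- but if it comes back close to 1.0x, dedup is
unlikely to beat plain compression on that data.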