[EMAIL PROTECTED] wrote on 07/08/2008 01:26:15 PM:

> Something else came to mind which is a negative regarding
> deduplication.  When zfs writes new sequential files, it should try to
> allocate blocks in a way which minimizes "fragmentation" (disk seeks).
> Disk seeks are the bane of existing storage systems since they come
> out of the available IOPS budget, which is only a couple hundred
> ops/second per drive.  The deduplication algorithm will surely result
> in increasing effective fragmentation (decreasing sequential
> performance) since duplicated blocks will result in a seek to the
> master copy of the block followed by a seek to the next block.  Disk
> seeks will remain an issue until rotating media goes away, which (in
> spite of popular opinion) is likely quite a while from now.
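The seek-cost argument above can be sketched numerically. A rough back-of-envelope model (my own illustration, not from the thread): assume ~150 IOPS per drive, a 128 KiB ZFS recordsize, and ~80 MB/s streaming throughput, and charge each deduplicated block roughly one extra seek to reach the master copy. All of those numbers are assumptions chosen for illustration.

```python
def effective_throughput_mbs(dedup_ratio, iops=150, block_kib=128, stream_mbs=80.0):
    """Estimate sequential read throughput when a fraction of blocks
    (dedup_ratio, 0.0-1.0) each cost one extra random seek."""
    seek_time = 1.0 / iops                              # seconds per random seek
    transfer_time = (block_kib / 1024.0) / stream_mbs   # seconds to stream one block
    # Each deduplicated block adds roughly one extra seek on average.
    avg_time_per_block = transfer_time + dedup_ratio * seek_time
    return (block_kib / 1024.0) / avg_time_per_block

for ratio in (0.0, 0.1, 0.5):
    print(f"{int(ratio * 100):3d}% deduped blocks -> "
          f"{effective_throughput_mbs(ratio):5.1f} MB/s")
```

Even under these crude assumptions, 10% of blocks taking an extra seek drops throughput from 80 MB/s to roughly 56 MB/s, and 50% drops it to about 26 MB/s, which illustrates why the fragmentation concern is not trivial for rotating media.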

Yes, I think it should be close to common sense that you are trading speed
for space (though it should be well documented if dedup/squash ever makes it
into the codebase).  You find these kinds of tradeoffs in just about every
area of disk administration: the type of RAID you select, inode counts,
block size, the number of spindles, and the size of the disks you use.  The
key here is that it would be a choice, just as compression is per
filesystem -- let the administrator choose her path.  In some situations it
would make sense, in others not.

-Wade

>
> Someone has to play devil's advocate here. :-)

Debate is welcome; it is the only way to flesh out the issues.


>
> Bob
> ======================================
> Bob Friesenhahn
> [EMAIL PROTECTED],
http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
>
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
