Even though ZFS is "the last word" in filesystems, is there something more that an application can do when writing large files sequentially to ensure that the data is stored as contiguously as possible? Does this notion even make sense, given that ZFS load-shares large blocks across a set of disks?
It seems that with some filesystems, doing an ftruncate() to the final length may help, but with ZFS and its copy-on-write semantics, this may actually make the problem worse and slow down the writes.

Bob

======================================
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss