On Sun, 15 Jan 2012, Edward Ned Harvey wrote:

> While I'm waiting for this to run, I'll make some predictions:
> The file is 2GB (16 Gbit) and the disk reads around 1 Gbit/sec, so reading
> the initial sequential file should take ~16 sec.
> After fragmentation, it should be essentially random 4K fragments (32768
> bits). I figure each time the head is able to find useful data, it takes
The 4K fragments are the part I don't agree with. Zfs does not do
that. If you were to run raidzN over a wide enough array of disks you
could end up with 4K fragments (distributed across the disks), but
then you would always have 4K fragments.
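A quick back-of-the-envelope sketch of that point (a simplified model of raidz column allocation — the real allocator also adds padding and parity sectors, and the 33-disk layout is just an illustrative assumption, not from the original message):

```python
# Simplified model of how raidzN splits one zfs block into per-disk
# columns: the data portion of a block is divided across the non-parity
# disks, rounded up to whole device sectors.

def raidz_column_size(recordsize, ndisks, parity, sector=512):
    """Approximate bytes of one block's data held by each data disk."""
    data_disks = ndisks - parity
    per_disk = recordsize // data_disks
    # round up to a whole sector
    return -(-per_disk // sector) * sector

# A 128K record on a hypothetical 33-disk raidz1 (32 data disks)
# lands as 4K per disk -- but it *always* does, not only after
# fragmentation:
print(raidz_column_size(128 * 1024, 33, 1))  # 4096
```

The point the model illustrates: a wide raidz makes 4K per-disk columns a constant property of the layout, not a symptom that appears over time.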
Zfs writes linear strips of data in units of the zfs blocksize, unless
it is sliced-n-diced by raidzN for striping across disks. If part of
a zfs filesystem block is overwritten, then the underlying block is
read, modified in memory, and the whole block is written to a new
location. This need to read the existing block is a reason why the zfs
ARC is so vitally important to write performance.
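That read-modify-write behavior can be sketched as follows (a minimal in-memory model, not the real zfs code — the class and names here are hypothetical, invented for illustration):

```python
# Sketch of copy-on-write at blocksize granularity: overwriting part
# of a block reads the whole old block, modifies it in memory, and
# writes the whole block to a new location.

BLOCKSIZE = 128 * 1024

class CowFile:
    def __init__(self, data):
        self.storage = {}     # location -> block contents
        self.blockmap = []    # block index -> current location
        self.next_loc = 0
        for off in range(0, len(data), BLOCKSIZE):
            self.blockmap.append(self._store(data[off:off + BLOCKSIZE]))

    def _store(self, block):
        loc = self.next_loc
        self.next_loc += 1
        self.storage[loc] = block
        return loc

    def pwrite(self, offset, buf):
        idx = offset // BLOCKSIZE
        # read-modify-write: fetch the whole existing block
        # (in real zfs, ideally satisfied from the ARC, not the disk)
        old = bytearray(self.storage[self.blockmap[idx]])
        start = offset % BLOCKSIZE
        old[start:start + len(buf)] = buf
        # the entire block goes to a fresh location
        self.blockmap[idx] = self._store(bytes(old))

f = CowFile(b"\0" * (4 * BLOCKSIZE))
f.pwrite(BLOCKSIZE + 10, b"x" * 4096)  # a 4K partial overwrite...
print(f.blockmap)  # [0, 4, 2, 3] -- ...still relocated a full 128K block
```

Note how the 4K application write still costs a full blocksize read plus a full blocksize write; if the old block isn't cached, that read hits the disk, which is why the ARC matters so much here.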
If the filesystem has compression enabled, then the blocksize is still
the same, but the data actually written may be shorter (due to
compression). File tail blocks may also be shorter.
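That distinction between logical blocksize and physical write size can be shown in a few lines (using zlib purely as a stand-in compressor — zfs uses its own algorithms such as lzjb or gzip, and this is not zfs code):

```python
# The logical block stays one blocksize; the bytes physically written
# shrink when the contents compress well.
import zlib

recordsize = 128 * 1024
block = b"A" * recordsize          # highly compressible data
compressed = zlib.compress(block)

print(len(block))       # logical size: 131072
print(len(compressed))  # physical write is far smaller
assert len(compressed) < recordsize
```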
There are dtrace tools you can use to observe low-level I/O and see
the size of the writes.
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
zfs-discuss mailing list