2012-01-09 19:14, Bob Friesenhahn wrote:
> In summary, with zfs's default 128K block size, data fragmentation is
> not a significant issue. If the zfs filesystem block size is reduced to
> a much smaller value (e.g. 8K), then it can become a significant issue.
> As Richard Elling points out, a database layered on top of zfs may
> already be fragmented by design.
I THINK there is some fallacy in your discussion: I've seen 128K
referred to as the maximum filesystem block size, i.e. for large
"streaming" writes. For smaller writes ZFS adapts with smaller
blocks. I am not sure how it would rewrite a few bytes inside
a larger block - whether it splits it into many smaller ones
or COWs the whole 128K block.
Intermixing variable-sized indivisible blocks can in turn lead
to more fragmentation than would otherwise be expected/possible ;)
Fixed block sizes are used (only?) for volume datasets.
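A rough way to see the rewrite question is to model copy-on-write at the record level: if ZFS COWs the whole record (the case I'm unsure about above), a few-byte update to a file stored in 128K records rewrites all 128K, while an 8K record size rewrites far less but scatters more blocks. A minimal Python sketch of that arithmetic - purely illustrative, not ZFS internals:

```python
# Illustrative model: bytes rewritten when a small update hits a file
# stored as fixed-size records under copy-on-write semantics.
# This sketches the reasoning in the thread, not actual ZFS code.

def cow_bytes_rewritten(update_offset, update_len, record_size):
    """All whole records overlapping the update must be rewritten."""
    first = update_offset // record_size
    last = (update_offset + update_len - 1) // record_size
    return (last - first + 1) * record_size

# A 100-byte update in the middle of a file:
print(cow_bytes_rewritten(64 * 1024, 100, 128 * 1024))  # 131072 (one 128K record)
print(cow_bytes_rewritten(64 * 1024, 100, 8 * 1024))    # 8192 (one 8K record)
```

The same model shows the other side of the trade-off: a large sequential read touches 16x as many records at 8K as at 128K, which is where the small-block fragmentation concern comes from.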
> If the metadata is not conveniently close to the data, then it may
> result in a big ugly disk seek (same impact as data fragmentation)
> to read it.
Also I'm not sure about this argument. If VDEV prefetch does not
slurp in data blocks, then by the time metadata is discovered in
read-from-disk blocks and the data block locations are determined,
the disk may have rotated away from the head, so at least one
rotational delay is incurred even if the metadata is immediately
followed by the data it refers to... no?
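For scale, that rotational delay is easy to bound: one full revolution at 7200 RPM (an assumed example speed, not a figure from this thread) takes about 8.3 ms, and the average rotational latency is half that. A quick back-of-envelope:

```python
# Back-of-envelope rotational latency for a spinning disk.
# 7200 RPM is an assumed example speed, not a figure from the thread.

def rotational_latency_ms(rpm, average=True):
    """Time for one revolution (or half of one, the average case) in ms."""
    full_ms = 60_000 / rpm
    return full_ms / 2 if average else full_ms

print(round(rotational_latency_ms(7200, average=False), 2))  # 8.33 ms per revolution
print(round(rotational_latency_ms(7200), 2))                 # 4.17 ms average
```

So even a single missed revolution per metadata-then-data read adds several milliseconds, which is the cost the vdev prefetch discussion is about.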
zfs-discuss mailing list