> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of maillist reader
> I read, though, that ZFS does not have a "defragmentation" tool. Is this
> true? It would seem that, with such a performance difference between
> sequential reads and random reads for raidzN, a defragmentation tool
> would be very high on ZFS's TODO list ;).
It is high on the todo list, and in fact a lot of other useful stuff depends
on the same code, so if and when that code lands, it will enable a number of
new features, of which defrag is just one.
However, there is a very difficult decision to make about *what* counts as
defragmentation (not to mention a lot of work to be done). The goal of defrag
is to lay data out on disk sequentially so as to maximize the useful speed of
the disks. Unfortunately, there are some big competing demands: different
workloads read the same data in different orders.
For example, the traditional notion of defrag would lay out the blocks of each
individual file contiguously. Then, when you later read those files
sequentially, you would get maximum performance. But that is not the same
order in which scrub, resilver, and zfs send read the data.
Scrub, resilver, and zfs send operate in (at least approximately) temporal
order. So if you defrag at the file level, you hurt the performance of
scrub/resilver/send. If you keep the pool in temporal order (the default
position and current behavior), you hurt the performance of file operations.
Pick your poison.
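To make the trade-off concrete, here is a toy model (my own illustration, not
ZFS code): it counts how many non-contiguous "seeks" a workload incurs when
reading blocks from a given on-disk layout. The block names and the
`count_seeks` helper are invented for the example; the two layouts stand in
for temporal (copy-on-write) order and a hypothetical file-level defrag.

```python
def count_seeks(layout, read_order):
    """layout: block IDs in on-disk LBA order.
       read_order: block IDs in the order a workload reads them.
       Returns the number of reads that are NOT at the next
       contiguous LBA, i.e. the number of head movements (seeks)."""
    lba = {blk: i for i, blk in enumerate(layout)}
    seeks = 0
    prev = None
    for blk in read_order:
        if prev is not None and lba[blk] != lba[prev] + 1:
            seeks += 1
        prev = blk
    return seeks

# Two files, "a" and "b", four blocks each, written interleaved --
# roughly what temporal (copy-on-write) allocation produces.
temporal  = ["a0", "b0", "a1", "b1", "a2", "b2", "a3", "b3"]
# After a hypothetical file-level defrag: each file's blocks contiguous.
defragged = ["a0", "a1", "a2", "a3", "b0", "b1", "b2", "b3"]

file_read  = ["a0", "a1", "a2", "a3"]  # sequential read of file "a"
scrub_read = temporal                  # scrub/resilver: temporal order

print(count_seeks(temporal,  file_read))   # 3 seeks: file read suffers
print(count_seeks(defragged, file_read))   # 0 seeks: file read is happy
print(count_seeks(temporal,  scrub_read))  # 0 seeks: scrub is happy
print(count_seeks(defragged, scrub_read))  # 7 seeks: scrub suffers
```

Whichever layout you pick, one of the two access patterns pays for it; that
is the "pick your poison" above in miniature.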