> From: Richard Elling [mailto:rich...@nexenta.com]
> > With appropriate write caching and write grouping or re-ordering
> > algorithms, it should be possible to minimize the amount of file
> > interleaving and fragmentation that takes place on write.
> 
> To some degree, ZFS already does this.  The dynamic block sizing
> tries to ensure that a file is written into the largest block[1].

Yes, but the block sizes in question are typically at most 128K.
As computed in my email a minute ago, the "fragment" size needs to be
on the order of 40 MB in order to effectively eliminate the
performance loss from fragmentation.
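
To make the 40 MB figure concrete, here is a back-of-the-envelope
sketch in Python.  The 8 ms seek, 100 MB/s transfer rate, and 2%
overhead budget are my own illustrative assumptions (not numbers from
this thread), chosen because they happen to reproduce roughly that
figure:

  # How large must a contiguous "fragment" be so that the cost of
  # one seek per fragment becomes negligible?
  SEEK_TIME_S   = 0.008   # assumed ~8 ms avg seek + rotational latency
  TRANSFER_MB_S = 100.0   # assumed sustained sequential transfer rate
  MAX_OVERHEAD  = 0.02    # tolerate at most 2% of time lost to seeks

  # Per fragment we pay one seek, then transfer the fragment.
  # overhead = seek / (seek + transfer_time) <= MAX_OVERHEAD
  # Solving for the transfer time, then converting to megabytes:
  fragment_mb = (TRANSFER_MB_S * SEEK_TIME_S
                 * (1 - MAX_OVERHEAD) / MAX_OVERHEAD)

  print("fragment size: %.1f MB" % fragment_mb)   # ~39.2 MB, i.e. ~40 MB

With 128K blocks, by contrast, the same math gives roughly 86% of the
time spent seeking rather than transferring.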


> Also, ZFS has an intelligent prefetch algorithm that can hide some
> of the performance impact of fragmentation on HDDs.

Unfortunately, prefetch can only hide fragmentation on systems that
have idle disk time.  Prefetch isn't going to help if you actually
need to transfer a whole file as fast as possible: the disk is already
busy the whole time, so there is nothing left to overlap.
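
A minimal model of why, again as an illustrative Python sketch (made-up
numbers, nothing from the ZFS implementation): prefetch overlaps disk
I/O with application think time, so it can only recover time the disk
would otherwise sit idle:

  disk_time_s = 10.0   # time the disk needs to deliver the whole file
  cpu_time_s  = 4.0    # time the app spends processing between reads

  # Without prefetch, reads and processing alternate serially.
  no_prefetch = disk_time_s + cpu_time_s         # 14.0 s

  # With perfect prefetch, processing hides behind pending I/O,
  # but total time can never drop below the raw disk time.
  with_prefetch = max(disk_time_s, cpu_time_s)   # 10.0 s

  # If the app does no work between reads (cpu_time_s -> 0),
  # there is nothing to hide behind and the gain vanishes.
  print(no_prefetch, with_prefetch)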
