So while I'm feeling optimistic :-) we really ought to be able to do this in 
two I/O operations. If we have, say, 500K of data to write (including all of 
the metadata), we should be able to allocate a contiguous 500K block on disk 
and write that with a single operation. Then we update the überblock.
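
To make that concrete, here's a rough user-level sketch of the two-I/O commit. Every name in it (txg_commit, alloc_contiguous, the überblock layout) is made up for illustration, and the "disk" is just a file descriptor:

/*
 * Hypothetical two-I/O transaction commit.  None of these names are
 * real ZFS interfaces; the disk is modeled as a plain fd.
 */
#include <sys/types.h>
#include <stdint.h>
#include <unistd.h>

#define	UBERBLOCK_OFFSET	0	/* fixed überblock location */

typedef struct uberblock {
	uint64_t ub_txg;		/* transaction group number */
	uint64_t ub_rootbp_offset;	/* root of the new block tree */
} uberblock_t;

/* Placeholder allocator: returns the offset of a free contiguous extent. */
extern uint64_t alloc_contiguous(size_t len);

int
txg_commit(int disk_fd, const void *batch, size_t batch_len, uint64_t txg)
{
	/* I/O #1: all data and metadata as one contiguous write. */
	uint64_t off = alloc_contiguous(batch_len);
	if (pwrite(disk_fd, batch, batch_len, (off_t)off) != (ssize_t)batch_len)
		return (-1);
	if (fsync(disk_fd) != 0)	/* barrier: batch must be stable first */
		return (-1);

	/* I/O #2: flip the überblock to point at the new tree. */
	uberblock_t ub = { .ub_txg = txg, .ub_rootbp_offset = off };
	if (pwrite(disk_fd, &ub, sizeof (ub), UBERBLOCK_OFFSET) != sizeof (ub))
		return (-1);
	return (fsync(disk_fd));
}

The überblock rewrite is the atomic commit point: if the machine dies between the two writes, the old überblock still points at the previous, fully consistent tree.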

The only inherent problem preventing this right now is that we don't have 
general scatter/gather at the driver level (ugh). This is a bug that should be 
fixed, IMO. Then ZFS just needs to delay choosing physical block locations 
until the blocks are actually written as part of a group. (Of course, as 
NetApp points out in their WAFL papers, the goal of optimizing writes can 
conflict with the goal of optimizing reads, so taken to an extreme, this 
optimization isn't always desirable.)
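
For what it's worth, here's the same idea at user level with the scatter/gather piece included, using pwritev() (a real vectored-write call on Solaris/Linux/BSD) as a stand-in for the missing driver-level support. The allocator and all other names are again hypothetical:

/*
 * Delayed allocation + scatter/gather: physical locations are chosen
 * only when the group is written, and the scattered in-memory buffers
 * go to disk in a single vectored I/O (no staging copy).
 */
#include <sys/types.h>
#include <sys/uio.h>
#include <stdint.h>
#include <unistd.h>

extern uint64_t alloc_contiguous(size_t len);	/* placeholder allocator */

ssize_t
write_group(int disk_fd, struct iovec *bufs, uint64_t *disk_addr, int nbufs)
{
	size_t total = 0;
	int i;

	for (i = 0; i < nbufs; i++)
		total += bufs[i].iov_len;

	/* Only now does the group get a home: one contiguous extent. */
	uint64_t base = alloc_contiguous(total);
	uint64_t off = base;
	for (i = 0; i < nbufs; i++) {
		disk_addr[i] = off;	/* each block's final address */
		off += bufs[i].iov_len;
	}

	/* One I/O for the whole group. */
	return (pwritev(disk_fd, bufs, nbufs, (off_t)base));
}

Nothing gets a physical address until the whole group is sized, which is exactly the deferred placement described above; the iovec array is the scatter/gather list the driver would otherwise need to see.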