Hi,
Thanks for the reply.
So the sequential-read-after-random-write problem does exist in ZFS.
I wonder whether it is a real problem in practice, i.e., does it cause longer 
backup times, and will it be addressed in the future?

So I should ask another question: is ZFS suitable for an environment with 
lots of data changes? I think for random I/O there will be no such performance 
penalty, but if you back up a ZFS dataset, must the backup utility sequentially 
read the blocks of the dataset? Is a ZFS dataset suitable for a database 
temporary tablespace or online redo logs?
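To make the concern concrete, here is a toy model of why copy-on-write random 
writes can hurt a later sequential (backup-style) read. This is only a sketch 
of the general COW allocation idea, not real ZFS internals: a file starts out 
physically contiguous, and each random overwrite relocates the logical block to 
a fresh physical address, so a sequential reader afterwards crosses many more 
discontiguous runs (seeks):

```python
# Toy copy-on-write model (NOT real ZFS internals): random overwrites
# relocate logical blocks, fragmenting the physical layout seen by a
# later sequential read such as a backup.
import random

def contiguous_runs(mapping):
    """Count runs of physically adjacent blocks in logical order."""
    runs = 1
    for prev, cur in zip(mapping, mapping[1:]):
        if cur != prev + 1:
            runs += 1
    return runs

random.seed(0)
nblocks = 1000
mapping = list(range(nblocks))      # freshly written file: fully contiguous
frontier = nblocks                  # next free physical block address

before = contiguous_runs(mapping)   # 1 run: one big sequential read

for _ in range(300):                # 300 random overwrites
    lba = random.randrange(nblocks)
    mapping[lba] = frontier         # COW: new data lands elsewhere
    frontier += 1

after = contiguous_runs(mapping)
print(before, after)                # runs jump from 1 into the hundreds
```

Each overwritten block breaks contiguity on both sides, so even a modest 
fraction of random writes multiplies the number of seeks a sequential scan 
must make.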

Will a defrag utility be implemented?

Regards
Victor
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
