On Mon, 2 May 2011, Eric D. Mudama wrote:
> Yea, kept googling and it makes sense. I guess I am simply surprised that the application would have done the seek+write combination, since on NTFS (which doesn't create sparse files by default) these would have been real 1.5GB files, and there would be hundreds or thousands of them in normal usage.
This is a reason why a Solaris server may be much faster than a Windows/NTFS server when there is a seek followed by a write. In my own application, I see that a long seek past the end of the file followed by a write is quite slow on NTFS but is quite fast on Unix filesystems which support holes. Apple's HFS Plus is another popular filesystem which does not support holes and is therefore quite slow at creating large files composed of zeros.
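The seek+write pattern being discussed can be sketched as follows. This is a minimal illustrative demo (not from the original thread): it seeks ~1.5GB past the end of a new file and writes a single byte. On a hole-aware filesystem (ZFS, ext4, XFS), the logical size is 1.5GB but almost no blocks are actually allocated; on NTFS or HFS Plus the gap would be filled with real zero blocks, which is where the slowness comes from.

```python
import os
import tempfile

SIZE = 1_500_000_000  # ~1.5 GB logical size, matching the files discussed above

# Create a scratch file and write one byte far past the current end.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.seek(SIZE - 1)   # long seek past end of file
    f.write(b"\0")     # single write; the skipped region becomes a hole

st = os.stat(path)
print("logical size :", st.st_size)            # 1500000000
print("allocated    :", st.st_blocks * 512)    # tiny on a filesystem with holes
os.remove(path)
```

On a filesystem that supports holes this completes almost instantly and allocates only a block or two; a filesystem without hole support must physically write roughly 1.5GB of zeros to satisfy the same two calls.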
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss