>> prov01:/mnt/.test10# time rsync -are ssh /tmp/MTOS-4.261-ja 4500-07.unix:/mnt/.test10/
>>
>> real 0m1.387s
>
> What was in 4500-07.unix:/mnt/.test10/ before you ran rsync?
All tests consist of me creating .testX and untarring the tarball inside it, so a new, empty directory. Perhaps a clearer command line would have been:

# mkdir .test28 && time gtar --directory=.test28 -zxf /tmp/MTOS-4.261-ja.tar.gz

>> prov01:/mnt/.test12# time rsync -are ssh /tmp/MTOS-4.261-ja .
>>
>> real 3m44.857s
>
> What was in . before you invoked rsync?

A new, empty directory.

>> real 0m24.480s
>
> How does smb handle caching? Did all of the data really go over the wire?

I untarred it just like in all the other cases.

That it is the ZIL versus nfsd seems clear now. Perhaps nfsd is the only piece of software that does consistency properly, which would explain why rsync over ssh and smb are fast while nfsd is terribly slow. But I don't know.

I created a "slog" on the mirrored boot pool and added it to the storage pool. This brought the untar-over-NFS test down to:

real 1m59.253s

(a saving of about 30 seconds)
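Roughly, that step was the following ("tank" is a stand-in for the storage pool's real name, and the zvol size is illustrative):

# zfs create -V 2g zboot/slog
# zpool add tank log /dev/zvol/dsk/zboot/slog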
I then replaced the zboot/slog device with a 2GB dd'ed file, /tmp/slog. (Obviously just for testing, since having the slog in volatile memory would be equivalent to zil_disable, if I understand things correctly.)

real 0m8.910s
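Again roughly, with the same stand-in pool name; zpool replace accepts a plain file as the new vdev, which is what makes this test possible:

# dd if=/dev/zero of=/tmp/slog bs=1024k count=2048
# zpool replace tank /dev/zvol/dsk/zboot/slog /tmp/slog

(Actually turning the ZIL off would be setting zfs:zil_disable in /etc/system, but that drops the consistency guarantees rather than just speeding them up.)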
Now it can beat the old NetApp, using NFS, with the ZIL enabled. It would appear that NFS needs a separate log device more than anything else. Perhaps this conversation belongs over in zfs-discuss now.

We will pick up a CF 600X (90MB/s) flash card for the x4540 and see what write speeds we get. If that fails (or rather, still performs too slowly), we will take out one of the hard disks and replace it with an X25-M SSD.

Thanks for looking at this; it has been a learning experience.

Lund

-- 
Jorgen Lundman       | <lundman at lundman.net>
Unix Administrator   | +81 (0)3-5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo    | +81 (0)90-5578-8500 (cell)
Japan                | +81 (0)3-3375-1767 (home)