On 09/ 9/10 02:42 PM, Fei Xu wrote:
Now it gets extremely slow at around 400G sent.
The first iostat result below was captured when the send operation started.
               capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
sh001a       37.6G  16.2T      0  1.17K     82   146M
  raidz2     37.6G  16.2T      0  1.17K     82   146M
<snip>
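For reference, output like this is what you would get from something along these lines (pool name taken from the output above; the 5-second interval is just an assumption):

  # sample per-vdev activity on the pool every 5 seconds while the send runs
  zpool iostat -v sh001a 5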
-----------------------------------------------------------------------------------------------
The result below is from when the ZFS send got stuck at 397G. The disk I/O seems quite normal, so where is the data going? Also notice that the iostat command itself responds very slowly.
               capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
sh001a        397G  15.9T      0  1.08K    490   136M
  raidz2      397G  15.9T      0  1.08K    490   136M
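One way to see whether the send stream itself has stalled (rather than the disks) is to put pv in the middle of the pipeline. This is only a sketch; the snapshot name, receiving host, and target dataset below are placeholders:

  # pv shows the live throughput of the send stream leaving the sending side
  zfs send sh001a/fs@snap | pv | ssh recv-host zfs receive -F tank/fs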
Have you got dedup enabled? Note the read bandwidth is much higher.
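If you are not sure, commands along these lines should show it (pool name taken from the iostat output above; the dataset name is just a placeholder):

  # is dedup turned on for the dataset being sent?
  zfs get dedup sh001a/fs
  # pool-wide dedup ratio, plus DDT statistics
  zpool list -o name,size,alloc,dedupratio sh001a
  zdb -DD sh001a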
--
Ian.