Andrew Gabriel wrote:
> Ian Collins wrote:
>>
>>
>> I don't see the 5-second bursty behaviour described in the bug
>> report. It's more like 5-second gaps in the network traffic while
>> the data is written to disk.
>
> That is exactly the issue. When the zfs recv data has been written,
> zfs recv starts reading the network again, but there's only a tiny
> amount of data buffered in the TCP/IP stack, so it has to wait for
> the network to heave more data across. In effect, it's a
> single-buffered copy. The addition of a buffer program turns it into
> a double-buffered (or cyclic-buffered) copy, with the disks running
> flat out continuously and the network streaming data across
> continuously at the disk platter speed.
>
> What are your theoretical max speeds for network and disk I/O?
> Taking the smaller of the two, are you seeing sustained send/recv
> performance match it (excluding the ~1MB/sec periods, which are
> some other problem)?
>
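
For anyone wanting to try the buffer-program approach Andrew
describes, here is a minimal sketch of such a cyclic (ring) buffer.
It is an illustration only, not any particular existing tool: one
thread fills a fixed ring buffer from stdin (the network) while the
main thread drains it to stdout (zfs recv), so neither side has to
stall for the other. The program name "dbuf" and the 16MB buffer size
are assumptions.

/*
 * dbuf.c - illustrative double-buffered copy (not a real tool).
 * A reader thread fills a ring buffer from stdin (the network)
 * while the main thread drains it to stdout (zfs recv), so the
 * disks and the network can both run continuously.
 * Build: cc -o dbuf dbuf.c -lpthread
 */
#include <pthread.h>
#include <unistd.h>

#define BUFSZ (16UL * 1024 * 1024)   /* assumed 16MB ring buffer */

static char buf[BUFSZ];
static size_t head, tail;            /* bytes produced / consumed */
static int eof;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t notfull  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t notempty = PTHREAD_COND_INITIALIZER;

static void *reader(void *unused)
{
    for (;;) {
        pthread_mutex_lock(&lock);
        while (head - tail == BUFSZ)          /* buffer full: wait */
            pthread_cond_wait(&notfull, &lock);
        size_t off  = head % BUFSZ;
        size_t room = BUFSZ - (head - tail);
        if (off + room > BUFSZ)               /* don't wrap mid-read */
            room = BUFSZ - off;
        pthread_mutex_unlock(&lock);

        ssize_t n = read(STDIN_FILENO, buf + off, room);
        pthread_mutex_lock(&lock);
        if (n <= 0) {                         /* EOF or error */
            eof = 1;
            pthread_cond_signal(&notempty);
            pthread_mutex_unlock(&lock);
            return NULL;
        }
        head += n;
        pthread_cond_signal(&notempty);
        pthread_mutex_unlock(&lock);
    }
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, reader, NULL);
    for (;;) {
        pthread_mutex_lock(&lock);
        while (head == tail && !eof)          /* buffer empty: wait */
            pthread_cond_wait(&notempty, &lock);
        if (head == tail && eof) {
            pthread_mutex_unlock(&lock);
            break;
        }
        size_t off   = tail % BUFSZ;
        size_t avail = head - tail;
        if (off + avail > BUFSZ)              /* don't wrap mid-write */
            avail = BUFSZ - off;
        pthread_mutex_unlock(&lock);

        ssize_t n = write(STDOUT_FILENO, buf + off, avail);
        if (n <= 0)
            break;
        pthread_mutex_lock(&lock);
        tail += n;
        pthread_cond_signal(&notfull);
        pthread_mutex_unlock(&lock);
    }
    pthread_join(t, NULL);
    return 0;
}

Hypothetical usage, on the receiving host:

zfs send pool/fs@snap | ssh host 'dbuf | zfs recv pool/copy'
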
I've just finished a small application to couple zfs_send and
zfs_receive through a socket, removing ssh from the equation, and the
speed-up is better than 2x.  I have a small (140K) buffer on the
sending side to minimise the number of packets sent.
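
The sending half of that coupling looks roughly like the sketch
below. To be clear, this is not my actual code, just an illustration
of the idea: connect a TCP socket to the receiver, ask for a ~140K
socket send buffer, and stream zfs send's output from stdin into it.
The receiver end (listening and piping into zfs receive) is the
mirror image. The program name, host/port handling, and read chunk
size are all assumptions.

/*
 * zsend.c - illustrative sending half of a zfs send/recv socket
 * coupling (not the actual program). Connects to the receiver,
 * requests a ~140K socket send buffer, and streams stdin into
 * the socket.
 * Build (Solaris): cc -o zsend zsend.c -lsocket -lnsl
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s host port\n", argv[0]);
        return 1;
    }

    int s = socket(AF_INET, SOCK_STREAM, 0);
    int sndbuf = 140 * 1024;              /* assumed 140K send buffer */
    setsockopt(s, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));

    struct sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons((unsigned short)atoi(argv[2]));
    inet_pton(AF_INET, argv[1], &sin.sin_addr);
    if (connect(s, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
        perror("connect");
        return 1;
    }

    char buf[128 * 1024];
    ssize_t n;
    while ((n = read(STDIN_FILENO, buf, sizeof(buf))) > 0) {
        ssize_t off = 0;
        while (off < n) {                 /* handle short writes */
            ssize_t w = write(s, buf + off, n - off);
            if (w < 0) {
                perror("write");
                return 1;
            }
            off += w;
        }
    }
    close(s);
    return 0;
}

Hypothetical invocation, with zrecv as the matching listener:

zfs send tank/fs@snap | ./zsend recvhost 9090

and, on the receiver:

./zrecv 9090 | zfs receive tank/copy
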

The times I get for 3.1GB of data (b101 ISO and some smaller files) to a
modest mirror at the receive end are:

1m36s for cp over NFS,
2m48s for zfs send through ssh, and
1m14s through a socket.

-- 
Ian.
