Richard Elling wrote:
> Ian Collins wrote:
>> Ian Collins wrote:  
>>> Andrew Gabriel wrote:  
>>>> Ian Collins wrote:  
>>>>> I've just finished a small application to couple zfs_send and
>>>>> zfs_receive through a socket to remove ssh from the equation and the
>>>>> speed-up is better than 2x.  I have a small (140K) buffer on the
>>>>> sending side to keep the number of sent packets to a minimum.
>>>>>
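
For anyone curious, this is roughly what the sending side looks like -- a
sketch only, not the actual tool; the program name, arguments and error
handling are just for illustration.  It gets run as something like
"zfs send tank/fs@snap | zsend host port", with the counterpart on the
other box accepting the connection and piping into zfs receive:

/*
 * zsend.c -- sketch: copy stdin (the output of zfs send) to a TCP
 * socket through a 140K buffer so data goes out in large writes.
 * Build with -lsocket -lnsl on Solaris.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

#define SENDBUF (140 * 1024)            /* small send-side buffer */

int main(int argc, char **argv)
{
    struct addrinfo hints, *res;
    static char buf[SENDBUF];
    ssize_t n;
    int err, s;

    if (argc != 3) {
        fprintf(stderr, "usage: %s host port\n", argv[0]);
        return (1);
    }

    memset(&hints, 0, sizeof (hints));
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    if ((err = getaddrinfo(argv[1], argv[2], &hints, &res)) != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
        return (1);
    }

    s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (s < 0 || connect(s, res->ai_addr, res->ai_addrlen) < 0) {
        perror("connect");
        return (1);
    }
    freeaddrinfo(res);

    while ((n = read(STDIN_FILENO, buf, sizeof (buf))) > 0) {
        char *p = buf;

        while (n > 0) {                 /* write() may be partial */
            ssize_t w = write(s, p, (size_t)n);

            if (w < 0) {
                perror("write");
                return (1);
            }
            p += w;
            n -= w;
        }
    }
    (void) close(s);
    return (0);
}
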
>>>>> The times I get for 3.1GB of data (b101 ISO and some smaller
>>>>> files) to a
>>>>> modest mirror at the receive end are:
>>>>>
>>>>> 1m36s for cp over NFS,
>>>>> 2m48s for zfs send through ssh and
>>>>> 1m14s through a socket.
>>>>>               
>>>> So the best speed is equivalent to 42MB/s (3.1GB in 74 seconds).
>>>> It would be interesting to try putting a buffer (5 x 42MB = 210MB
>>>> initial stab) at the recv side and see if you get any improvement.
>>>>           
>> It took a while...
>>
>> I was able to get about 47MB/s with a 256MB circular input buffer. I
>> think that's about as fast as it can go: the buffer fills, so receive
>> processing is the bottleneck.  Bonnie++ shows the pool (a mirror) block
>> write speed is 58MB/s.
>>
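
Roughly how that input buffer works (again a sketch, not the actual
code -- it reads stdin and writes stdout to keep it short, with the
listening socket and the pipe into zfs receive left out): one thread
drains the network into a circular buffer while the main thread feeds
the buffered data onward, so bursts off the wire are absorbed while
receive processing catches up.

/*
 * ringrecv.c -- sketch of the 256MB circular input buffer.  A producer
 * thread fills the ring from stdin (stand-in for the socket); the main
 * thread drains it to stdout, which is piped into zfs receive.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define RINGSZ  (256UL * 1024 * 1024)   /* 256MB circular buffer */

static char *ring;
static size_t head, tail, fill;         /* filled at head, drained at tail */
static int done;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;

static void *producer(void *arg)
{
    char buf[128 * 1024];
    ssize_t n;

    while ((n = read(STDIN_FILENO, buf, sizeof (buf))) > 0) {
        size_t off = 0;

        pthread_mutex_lock(&lock);
        while (off < (size_t)n) {
            size_t chunk;

            while (fill == RINGSZ)      /* ring full: wait for the drain */
                pthread_cond_wait(&cv, &lock);
            chunk = (size_t)n - off;
            if (chunk > RINGSZ - fill)
                chunk = RINGSZ - fill;
            if (chunk > RINGSZ - head)  /* don't wrap inside one memcpy */
                chunk = RINGSZ - head;
            memcpy(ring + head, buf + off, chunk);
            head = (head + chunk) % RINGSZ;
            fill += chunk;
            off += chunk;
            pthread_cond_broadcast(&cv);
        }
        pthread_mutex_unlock(&lock);
    }
    pthread_mutex_lock(&lock);
    done = 1;                           /* EOF: let the drain finish up */
    pthread_cond_broadcast(&cv);
    pthread_mutex_unlock(&lock);
    return (NULL);
}

int main(void)
{
    pthread_t t;

    if ((ring = malloc(RINGSZ)) == NULL) {
        perror("malloc");
        return (1);
    }
    (void) pthread_create(&t, NULL, producer, NULL);

    for (;;) {                          /* drain: feed zfs receive */
        size_t chunk;
        ssize_t w;

        pthread_mutex_lock(&lock);
        while (fill == 0 && !done)
            pthread_cond_wait(&cv, &lock);
        if (fill == 0 && done) {
            pthread_mutex_unlock(&lock);
            break;
        }
        chunk = fill;
        if (chunk > RINGSZ - tail)      /* stop at the wrap point */
            chunk = RINGSZ - tail;
        pthread_mutex_unlock(&lock);

        w = write(STDOUT_FILENO, ring + tail, chunk);
        if (w <= 0) {
            perror("write");
            return (1);
        }
        pthread_mutex_lock(&lock);
        tail = (tail + (size_t)w) % RINGSZ;
        fill -= (size_t)w;
        pthread_cond_broadcast(&cv);
        pthread_mutex_unlock(&lock);
    }
    (void) pthread_join(t, NULL);
    return (0);
}
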
>> When I reverse the transfer to the faster box, the rate drops to 35MB/s
>> with neither the send nor receive buffer filling.  So send processing
>> appears to be the limit in this case.  
> Those rates are what I would expect writing to a single disk.
> How is the pool configured?
>
The "slow" system has a single mirror pool of two SATA drives; the
faster one has a stripe of 4 mirrors and an IDE SD boot drive.

ZFS send through ssh from the slow to the fast box takes 189 seconds; the
direct socket connection takes 82 seconds.

-- 
Ian.
