Andrew Gabriel <[EMAIL PROTECTED]> wrote:

> That is exactly the issue. When the zfs recv data has been written, zfs 
> recv starts reading the network again, but there's only a tiny amount of 
> data buffered in the TCP/IP stack, so it has to wait for the network to 
> move more data across. In effect, it's a single-buffered copy. The 
> addition of a buffer program turns it into a double-buffered (or cyclic-
> buffered) copy, with the disks running flat out continuously and the 
> network streaming data across continuously at disk platter speed.

rmt and star increase the socket read/write buffer sizes via

	setsockopt(STDOUT_FILENO, SOL_SOCKET, SO_SNDBUF, &size, sizeof (size));
	setsockopt(STDIN_FILENO,  SOL_SOCKET, SO_RCVBUF, &size, sizeof (size));

(where "size" holds the requested buffer size in bytes) when doing
"remote tape access".

This has a notable effect on throughput.

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
       [EMAIL PROTECTED]                (uni)  
       [EMAIL PROTECTED]     (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
