Joerg Schilling wrote:
> Andrew Gabriel <[EMAIL PROTECTED]> wrote:
> 
>> That is exactly the issue. When the zfs recv data has been written, zfs 
>> recv starts reading the network again, but there's only a tiny amount of 
>> data buffered in the TCP/IP stack, so it has to wait for the network to 
>> heave more data across. In effect, it's a single buffered copy. The 
>> addition of a buffer program turns it into a double-buffered (or cyclic 
>> buffered) copy, with the disks running flat out continuously, and the 
>> network streaming data across continuously at the disk platter speed.
> 
> rmt and star increase the socket read/write buffer sizes via
> 
> setsockopt(STDOUT_FILENO, SOL_SOCKET, SO_SNDBUF, &size, sizeof (size));
> setsockopt(STDIN_FILENO, SOL_SOCKET, SO_RCVBUF, &size, sizeof (size));
> 
> when doing "remote tape access".
> 
> This has a notable effect on throughput.

Interesting idea, but for 7200 RPM disks (and a 1Gb ethernet link), I 
need a 250MB buffer: enough to hold 4-5 seconds' worth of data at a 
platter speed of roughly 50-60MB/s. That's orders of magnitude bigger 
than SO_RCVBUF can go.
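
To make Joerg's suggestion concrete, and to show the ceiling I run into, 
here's a minimal sketch (not star's actual code; the 4MB request is an 
arbitrary figure). Asking for a big buffer is easy, but the kernel clamps 
the grant to a tunable that is nowhere near what I need:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /*
     * Request a large receive buffer on stdin (only meaningful when
     * stdin is a socket, as it is under rsh/rmt), then read back what
     * the kernel actually granted.
     */
    int
    main(void)
    {
        int req = 4 * 1024 * 1024;  /* 4MB - arbitrary, for illustration */
        int got;
        socklen_t len = sizeof (got);

        if (setsockopt(0, SOL_SOCKET, SO_RCVBUF, &req, sizeof (req)) < 0)
            perror("setsockopt");
        if (getsockopt(0, SOL_SOCKET, SO_RCVBUF, &got, &len) == 0)
            (void) printf("granted %d bytes\n", got);
        return (0);
    }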

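What I'm using instead is a separate buffer program between the network 
and zfs recv. Its shape, in miniature (a sketch of the general technique 
only, not the actual program I run: error handling is trimmed and the 
ring is sized arbitrarily at 64x1MB, where the real thing wants much 
more): two threads share a cyclic buffer, so the network side and the 
disk side each run flat out until the ring is completely full or 
completely empty:

    #include <pthread.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define CHUNKS  64
    #define CHUNK   (1024 * 1024)   /* 64 x 1MB ring, for illustration */

    static char buf[CHUNKS][CHUNK];
    static ssize_t fill[CHUNKS];    /* bytes in each slot; <= 0 is EOF */
    static int head, tail, count;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t notfull = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t notempty = PTHREAD_COND_INITIALIZER;

    static void *
    reader(void *arg)       /* network side: stdin -> ring */
    {
        ssize_t n;

        do {
            pthread_mutex_lock(&lock);
            while (count == CHUNKS)         /* ring full: wait for writer */
                pthread_cond_wait(&notfull, &lock);
            pthread_mutex_unlock(&lock);

            n = read(0, buf[head], CHUNK);

            pthread_mutex_lock(&lock);
            fill[head] = n;                 /* n <= 0 marks EOF/error */
            head = (head + 1) % CHUNKS;
            count++;
            pthread_cond_signal(&notempty);
            pthread_mutex_unlock(&lock);
        } while (n > 0);
        return (NULL);
    }

    static void *
    writer(void *arg)       /* disk side: ring -> stdout */
    {
        ssize_t n, off, m;

        for (;;) {
            pthread_mutex_lock(&lock);
            while (count == 0)              /* ring empty: wait for reader */
                pthread_cond_wait(&notempty, &lock);
            n = fill[tail];
            pthread_mutex_unlock(&lock);

            if (n <= 0)
                break;
            for (off = 0; off < n; off += m)
                if ((m = write(1, buf[tail] + off, n - off)) < 0)
                    return (NULL);

            pthread_mutex_lock(&lock);
            tail = (tail + 1) % CHUNKS;
            count--;
            pthread_cond_signal(&notfull);
            pthread_mutex_unlock(&lock);
        }
        return (NULL);
    }

    int
    main(void)
    {
        pthread_t r, w;

        (void) pthread_create(&r, NULL, reader, NULL);
        (void) pthread_create(&w, NULL, writer, NULL);
        (void) pthread_join(r, NULL);
        (void) pthread_join(w, NULL);
        return (0);
    }
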
-- 
Andrew