Joerg Schilling wrote:
> Andrew Gabriel <[EMAIL PROTECTED]> wrote:
> 
>> That is exactly the issue. When the zfs recv data has been written, zfs 
>> recv starts reading the network again, but there's only a tiny amount of 
>> data buffered in the TCP/IP stack, so it has to wait for the network to 
>> heave more data across. In effect, it's a single-buffered copy. The 
>> addition of a buffer program turns it into a double-buffered (or cyclic 
>> buffered) copy, with the disks running flat out continuously, and the 
>> network streaming data across continuously at the disk platter speed.
> 
> rmt and star increase the socket read/write buffer sizes via
> 
> setsockopt(STDOUT_FILENO, SOL_SOCKET, SO_SNDBUF, ...)
> setsockopt(STDIN_FILENO, SOL_SOCKET, SO_RCVBUF, ...)
> 
> when doing "remote tape access".
> 
> This has a notable effect on throughput.
> 
> Jörg
> 

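For anyone who wants to do the same in their own code, the complete calls
look roughly like this. This is a minimal, illustrative sketch, not star's
or rmt's actual code; 256 KB is just an example value, and the calls only
have an effect when stdin/stdout really are sockets, as with remote tape
access:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        int size = 256 * 1024;  /* example value, pick what suits your link */

        /* enlarge the send buffer behind stdout ... */
        if (setsockopt(STDOUT_FILENO, SOL_SOCKET, SO_SNDBUF,
            &size, sizeof (size)) < 0)
            perror("setsockopt SO_SNDBUF");

        /* ... and the receive buffer behind stdin */
        if (setsockopt(STDIN_FILENO, SOL_SOCKET, SO_RCVBUF,
            &size, sizeof (size)) < 0)
            perror("setsockopt SO_RCVBUF");

        return 0;
    }
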
Yesterday I released a new version of mbuffer, which also enlarges the
default TCP buffer size. So everybody using mbuffer for network data
transfers might want to update.

For everybody unfamiliar with mbuffer, it might be worth noting that it
has a number of additional features, such as sending to multiple clients
at once and high/low watermark flushing to keep tape drives out of
stop/rewind/restart cycles.
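
Just to illustrate the cyclic buffering Andrew describes, here is a very
stripped-down sketch of the idea (one reader thread filling a ring of
fixed-size blocks, one writer thread draining it). It is not mbuffer's
actual code and leaves out the watermark logic and most error handling;
the block count and block size are arbitrary example values:

    /*
     * Reader fills a ring of blocks from stdin while the writer drains
     * it to stdout, so source and sink can both run at full speed.
     */
    #include <sys/types.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define NBLK   64               /* blocks in the ring (example)   */
    #define BLKSZ  (128 * 1024)     /* 128 KB per block (example)     */

    static char     ring[NBLK][BLKSZ];
    static ssize_t  len[NBLK];      /* bytes valid per block, <=0: EOF */
    static int      filled;         /* blocks filled, not yet written  */
    static pthread_mutex_t mtx      = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  notfull  = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  notempty = PTHREAD_COND_INITIALIZER;

    static void *
    reader(void *arg)
    {
        int i = 0;
        ssize_t n;

        (void)arg;
        do {
            pthread_mutex_lock(&mtx);
            while (filled == NBLK)          /* ring full: wait for writer */
                pthread_cond_wait(&notfull, &mtx);
            pthread_mutex_unlock(&mtx);

            n = read(STDIN_FILENO, ring[i], BLKSZ);

            pthread_mutex_lock(&mtx);
            len[i] = n;                     /* n <= 0 marks EOF or error  */
            filled++;
            pthread_cond_signal(&notempty);
            pthread_mutex_unlock(&mtx);

            i = (i + 1) % NBLK;
        } while (n > 0);
        return NULL;
    }

    int
    main(void)
    {
        pthread_t rd;
        int i = 0;

        if (pthread_create(&rd, NULL, reader, NULL) != 0) {
            perror("pthread_create");
            return 1;
        }
        for (;;) {
            ssize_t n, off;

            pthread_mutex_lock(&mtx);
            while (filled == 0)             /* ring empty: wait for reader */
                pthread_cond_wait(&notempty, &mtx);
            pthread_mutex_unlock(&mtx);

            n = len[i];
            if (n <= 0)                     /* reader hit EOF or an error  */
                break;
            for (off = 0; off < n; ) {
                ssize_t w = write(STDOUT_FILENO, ring[i] + off, n - off);
                if (w < 0) {
                    perror("write");
                    return 1;
                }
                off += w;
            }

            pthread_mutex_lock(&mtx);
            filled--;                       /* slot is free again          */
            pthread_cond_signal(&notfull);
            pthread_mutex_unlock(&mtx);

            i = (i + 1) % NBLK;
        }
        pthread_join(rd, NULL);
        return 0;
    }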

- Thomas