Re: [zfs-discuss] How to use mbuffer with zfs send/recv

2008-12-08 Thread Joerg Schilling
[EMAIL PROTECTED] wrote:

> For UFS ("ufsdump | ufsrestore") I have found that I prefer the buffer on
> the receive side, but it should be much bigger.  ufsrestore starts by
> creating all directories, and that is SLOW.

This is why copying filesystems via star is much faster:

-   There is no pipe to slow down I/O.

-   The star FIFO sits between the two star processes,
    so both of them benefit from it. The filesystem read
    process may read in bigger chunks than ufsdump as long
    as there is free space in the FIFO, and the filesystem
    write process may write much bigger chunks than
    ufsrestore as long as there is data in the FIFO.


Just call star -copy . and set the FIFO size to a reasonably
big number with the fs= option, as in the sketch below.
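
For example (a sketch of star's copy mode; the directory names and the
512m FIFO size are placeholders, not recommendations):

  # Copy /export/from into /export/to with a 512 MB star FIFO.
  # -p preserves file permissions.
  star -copy -p fs=512m -C /export/from . /export/to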

Jörg

-- 
 EMail: [EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
        [EMAIL PROTECTED] (uni)
        [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:   http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily


Re: [zfs-discuss] How to use mbuffer with zfs send/recv

2008-12-08 Thread Andrew Gabriel
[EMAIL PROTECTED] wrote:
>   
>> In my experimentation (using my own buffer program), it's the receive 
>> side buffering you need. The size of the buffer needs to be large enough 
>> to hold 5 seconds' worth of data. How much data/second you get will 
>> depend on which part of your system is the limiting factor. In my case, 
>> with 7200 RPM drives not striped and a 1Gbit network, the limiting 
>> factor is the drives, which can easily deliver 50MBytes/sec, so a buffer 
>> size of 250MBytes works well. With striped disks or 10,000 or 15,000 RPM 
>> disks, the 1Gbit network might become the limiting factor (at around 
>> 100MByte/sec).
>> 
>
> The modern "Green Caviars" from Western Digital run at 5400 RPM, yet they
> deliver 95 MB/s from the outer tracks.
>
> For UFS ("ufsdump | ufsrestore") I have found that I prefer the buffer on
> the receive side, but it should be much bigger.  ufsrestore starts by
> creating all directories, and that is SLOW.

My 7200 RPM drives are spec'ed at 76MByte/second, and on a resilver, I 
get exactly that. (It's not clear from the spec whether this is a peak or 
an average from anywhere on the surface. The outer edge of a disk 
typically has 2.5 times the throughput of the inner edge.)

zfs-send doesn't seem to quite match resilver speeds (at least, for me), 
but IIRC, my 50MBytes/second was averaged across the whole send. I found 
that by going up to a 350MByte buffer, I did manage to fill it just 
occasionally during a send/recv, but it didn't make any significant 
difference to the total send/recv time.

-- 
Andrew


Re: [zfs-discuss] How to use mbuffer with zfs send/recv

2008-12-08 Thread Casper . Dik


>In my experimentation (using my own buffer program), it's the receive 
>side buffering you need. The size of the buffer needs to be large enough 
>to hold 5 seconds' worth of data. How much data/second you get will 
>depend on which part of your system is the limiting factor. In my case, 
>with 7200 RPM drives not striped and a 1Gbit network, the limiting 
>factor is the drives, which can easily deliver 50MBytes/sec, so a buffer 
>size of 250MBytes works well. With striped disks or 10,000 or 15,000 RPM 
>disks, the 1Gbit network might become the limiting factor (at around 
>100MByte/sec).

The modern "Green Caviars" from Western Digital run at 5400 RPM, yet they
deliver 95 MB/s from the outer tracks.

For UFS ("ufsdump | ufsrestore") I have found that I prefer the buffer on
the receive side, but it should be much bigger.  ufsrestore starts by
creating all directories, and that is SLOW.
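
For example, reusing the mbuffer syntax shown elsewhere in this thread (a
sketch; the hostnames, port, target directory, filesystem, and the 1G
buffer size are all placeholders):

receiver> cd /var/tmp/restore && mbuffer -s 128k -m 1G -I sender:9000 | ufsrestore rf -

sender> ufsdump 0f - /export/home | mbuffer -s 128k -m 1G -O receiver:9000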

Casper



Re: [zfs-discuss] How to use mbuffer with zfs send/recv

2008-12-08 Thread Andrew Gabriel
Thomas Maier-Komor wrote:
> First start the receive side, then the sender side:
>
> receiver> mbuffer -s 128k -m 200M -I sender:8000 | zfs receive filesystem
>
> sender> zfs send pool/filesystem | mbuffer -s 128k -m 200M -O receiver:8000
>
> Of course, you should adjust the hostnames accordingly, and set the
> mbuffer buffer size to a value that fits your needs (option -m).
>
> BTW: I've just released a new version of mbuffer which defaults to TCP
> buffer size of 1M, which can be adjusted with option --tcpbuffer.
>   

In my experimentation (using my own buffer program), it's the receive 
side buffering you need. The size of the buffer needs to be large enough 
to hold 5 seconds' worth of data. How much data/second you get will 
depend on which part of your system is the limiting factor. In my case, 
with 7200 RPM drives not striped and a 1Gbit network, the limiting 
factor is the drives, which can easily deliver 50MBytes/sec, so a buffer 
size of 250MBytes works well. With striped disks or 10,000 or 15,000 RPM 
disks, the 1Gbit network might become the limiting factor (at around 
100MByte/sec).
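
To put numbers on the 5-second rule with the figures above (using the -m
option of mbuffer, as in the commands quoted earlier):

 50 MBytes/sec * 5 sec = 250 MBytes  ->  mbuffer ... -m 250M
100 MBytes/sec * 5 sec = 500 MBytes  ->  mbuffer ... -m 500M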

If the speeds of the disks and the network are miles apart either way 
around (e.g. if I had used 100Mbit or 10Gbit Ethernet), then the buffer 
is considerably less effective -- it matters most when the disks and 
network are delivering performance of the same order of magnitude.

Note that this may not help much with incrementals, as there are long 
periods when zfs-send is only sending data at 1 MByte/sec, and no network 
buffering will make a scrap of difference to that. (I presume this is 
when it's looking for changed data to send and is skipping over stuff 
that hasn't changed?)

-- 
Andrew


Re: [zfs-discuss] How to use mbuffer with zfs send/recv

2008-12-07 Thread Thomas Maier-Komor
Julius Roberts wrote:
>>> How do I compile mbuffer for our system,
> 
> Thanks to Mike Futerko for help with the compile, I now have it installed OK.
> 
>>>  and what syntax do I use to invoke it within the zfs send/recv?
> 
> I'm still looking for answers to this one.  Any example syntax, gotchas,
> etc. would be much appreciated.
> 

First start the receive side, then the sender side:

receiver> mbuffer -s 128k -m 200M -I sender:8000 | zfs receive filesystem

sender> zfs send pool/filesystem | mbuffer -s 128k -m 200M -O receiver:8000

Of course, you should adjust the hostnames accordingly, and set the
mbuffer buffer size to a value that fits your needs (option -m).
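
If the two hosts already talk over ssh, the same receive-side buffering
can be done without opening a dedicated port (a sketch, just one possible
variant; -q suppresses mbuffer's status output):

sender> zfs send pool/filesystem | ssh receiver 'mbuffer -q -s 128k -m 200M | zfs receive filesystem'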

BTW: I've just released a new version of mbuffer which defaults to a TCP
buffer size of 1M; this can be adjusted with the option --tcpbuffer.

Cheers,
Thomas


[zfs-discuss] How to use mbuffer with zfs send/recv

2008-12-07 Thread Julius Roberts
>> How do I compile mbuffer for our system,

Thanks to Mike Futerko for help with the compile, I now have it installed OK.

>>  and what syntax do I use to invoke it within the zfs send/recv?

I'm still looking for answers to this one.  Any example syntax, gotchas,
etc. would be much appreciated.

-- 
Kind regards, Jules

free. open. honest. love. kindness. generosity. energy. frenetic.
electric. light. lasers. spinning spotlights. stage dancers. heads
bathed in yellow light. silence. stillness. awareness. empathy. the
beat. magic, not mushrooms. thick. tight. solid. commanding.
compelling. uplifting. euphoric. ecstatic, not e. ongoing. releasing.
reforming. meandering. focussing. quickening. quickening. quickening.
aloft. floating. then the beat. fat exploding thick bass-line.
eyes, everywhere. smiling. sharing. giving. trust. understanding.
tolerance. peace. equanimity. emptiness (Earthcore, 2008)