On 11/15/11 23:40, Tim Cook wrote:
On Tue, Nov 15, 2011 at 5:17 PM, Andrew Gabriel
<andrew.gabr...@oracle.com> wrote:
On 11/15/11 23:05, Anatoly wrote:
The speed of send/recv is around 30-60 MBytes/s for an initial
send and 17-25 MBytes/s for incrementals. I have seen lots of
setups, from 1 disk to 100+ disks in a pool, but the speed
doesn't vary to any degree. As I understand it, 'zfs send' is the
limiting factor. I did tests by sending to /dev/null; it
worked out just as slow and is absolutely not scalable.
Neither CPU, memory, nor disk activity was anywhere near peak
load, so there is room for improvement.
Is there any bug report or article that addresses this
problem? Any workaround or solution?
I found these guys have the same result - around 7 MBytes/s
for 'send' and 70 MBytes/s for 'recv'.
Well, if I do a zfs send/recv over 1Gbit ethernet from a 2-disk
mirror, the send runs at almost 100 MBytes/s, so it's pretty much
limited by the ethernet.
Since you have provided none of the diagnostic data you collected,
it's difficult to guess what the limiting factor is for you.
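One useful first diagnostic is to time 'zfs send' in isolation, so the network and the receive side are out of the picture entirely. A sketch of how to do that (the pool, dataset, and snapshot names here are placeholders - substitute your own):

```shell
# Measure raw 'zfs send' throughput with the network and recv removed.
# 'tank/fs@snap1' and 'tank/fs@snap2' are placeholder names.

# Full send of one snapshot, with pv showing the transfer rate:
zfs send tank/fs@snap1 | pv > /dev/null

# Incremental send between two snapshots:
zfs send -i tank/fs@snap1 tank/fs@snap2 | pv > /dev/null

# If pv is not installed, dd prints an average rate in its summary:
zfs send tank/fs@snap1 | dd of=/dev/null bs=1M
```

If the rate to /dev/null is already low, the bottleneck is on the send side (pool layout, fragmentation, record sizes); if it's high, look at the network or the receiver instead.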
So all the bugs have been fixed?
Probably not, but the OP's implication that zfs send has a specific rate
limit in the range suggested is demonstrably untrue. So I don't know
what's limiting the OP's send rate. (I could guess a few possibilities,
but that's pointless without the data.)
I seem to recall people on this mailing list using mbuffer to speed it
up because it was so bursty and slow at one point, e.g.:
Yes, this idea originally came from me, having analyzed the send/receive
traffic behavior in combination with network connection behavior.
However, it's the receive side that's bursty around the TXG commits, not
the send side, so that doesn't match the issue the OP is seeing. (The
buffer sizes in that blog are not optimal, although any buffer at the
receive side will make a significant improvement if the network
bandwidth is same order of magnitude as the send/recv are capable of.)
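For reference, the usual arrangement puts mbuffer on both ends of the pipe, with the larger buffer on the receive side to absorb the stalls around TXG commits. A sketch, where the hostname, dataset names, and buffer sizes are illustrative only and should be tuned for your setup:

```shell
# Buffered send/receive over ssh; all names and sizes are placeholders.
# The large receive-side buffer keeps the network fed while 'zfs recv'
# stalls during transaction-group commits.
zfs send tank/fs@snap | \
  mbuffer -q -s 128k -m 256M | \
  ssh recvhost 'mbuffer -q -s 128k -m 1G | zfs recv -F backup/fs'
```

The '-s' option sets the block size mbuffer reads and writes, and '-m' the total buffer memory; '-q' suppresses the interactive status display for use in pipelines.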
zfs-discuss mailing list