Andrew Gabriel wrote:
> Ian Collins wrote:
>> Andrew Gabriel wrote:
>>> Ian Collins wrote:
>>>  
>>>> Brent Jones wrote:
>>>>    
>>>>> There have been a couple of threads about this now; I tracked
>>>>> down some bug IDs/tickets:
>>>>>
>>>>> 6333409
>>>>> 6418042
>>>>>       
>>>> I see these are fixed in build 102.
>>>>
>>>> Are they targeted to get back to Solaris 10 via a patch?
>>>> If not, is it worth escalating the issue with support to get a patch?
>>>>     
>>> Given the issue described is slow zfs recv over network, I suspect
>>> this is:
>>>
>>> 6729347 Poor zfs receive performance across networks
>>>
>>> This is quite easily worked around by putting a buffering program
>>> between the network and the zfs receive. There is a public domain
>>> "mbuffer" which should work, although I haven't tried it as I wrote
>>> my own. The buffer size you need is about 5 seconds worth of data.
>>> In my case of 7200RPM disks (in a mirror and not striped) and a
>>> gigabit ethernet link, the disks are the limiting factor at around
>>> 57MB/sec sustained I/O, so I used a 250MB buffer to best effect. If
>>> I recall correctly, that sped up the zfs send/recv across the
>>> network by about 3 times, and it then ran at the disk platter speed.
>>  
>> Did this apply to incremental sends as well?  I can live with ~20MB/sec
>> for full sends, but ~1MB/sec for incremental sends is a killer.
>
> It doesn't help the ~1MB/sec periods in incrementals, but it does help
> the fast periods in incrementals.
>
:)

I don't see the 5-second bursty behaviour described in the bug report.
It's more like 5-second gaps in the network traffic while the data is
written to disk.
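For reference, Andrew's workaround can be sketched as a pipeline with a buffer sized to about 5 seconds of sustained disk throughput. The dataset names and host are placeholders, and the mbuffer invocation is a sketch based on its commonly documented -m (buffer size) and -s (block size) options; verify against your mbuffer version before relying on it:

```shell
# Rule of thumb from this thread: buffer ~= 5 seconds of sustained write speed.
# Using Andrew's figure of 57 MB/sec for a mirrored pair of 7200RPM disks:
RATE_MB_S=57
BUFFER_SECONDS=5
BUF_MB=$((RATE_MB_S * BUFFER_SECONDS))
echo "suggested buffer: ${BUF_MB}MB"   # ~285MB; Andrew rounded to 250MB

# Hypothetical pipeline (placeholder pool/dataset/host names):
#   zfs send tank/fs@snap \
#     | mbuffer -s 128k -m ${BUF_MB}M \
#     | ssh receiver 'mbuffer -s 128k -m 285M | zfs receive -F tank/fs'
```

Buffering on the receiving side is the part that matters for this bug, since it decouples the network from zfs receive's bursty writes; buffering on the send side as well costs little and smooths the link further.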

-- 
Ian.

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss