On Wed, Dec 16, 2009 at 12:19 PM, Michael Herf <mbh...@gmail.com> wrote:
> Mine is similar (4-disk RAIDZ1):
>  - send/recv with dedup on: <4 MB/sec
>  - send/recv with dedup off: ~80 MB/sec
>  - send > /dev/null: ~200 MB/sec
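(For reference, those three cases presumably map to pipelines roughly like the
following; the dataset names here are hypothetical:

    zfs set dedup=on tank/backup          # or dedup=off for the second case
    zfs send tank/data@snap | zfs recv -F tank/backup/data
    zfs send tank/data@snap > /dev/null

The dedup property matters on the receiving dataset, since it's the writes
that get deduplicated.)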
> I know dedup can save some disk bandwidth on write, but it shouldn't save
> much read bandwidth (so I think these numbers are right).
> There's a warning in a Jeff Bonwick post that if the DDT (de-dupe tables)
> don't fit in RAM, things will be "slower".
> Wonder what that threshold is?
> Second try of the same "recv" appears to go randomly faster (5-12 MB/sec,
> bursting to 100 MB/sec briefly) - the DDT being in core should make the
> second try quite a bit faster, but it's not as fast as I'd expect.
> My zdb -D output:
> DDT-sha256-zap-duplicate: 633396 entries, size 361 on disk, 179 in core
> DDT-sha256-zap-unique: 5054608 entries, size 350 on disk, 185 in core
> 6M entries doesn't sound like that much for a box with 6GB of RAM.
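(Rough math on those zdb numbers, reading "in core" as bytes per entry:

     633396 entries x 179 bytes ~= 113 MB   (duplicates)
    5054608 entries x 185 bytes ~= 935 MB   (unique)
                         total  ~= 1.0 GB of DDT

So the full table would fit in 6 GB in principle, but it has to compete with
everything else the ARC caches, and metadata is typically capped at a fraction
of the ARC, so it may still not all stay resident.)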
>
> CPU load is also low.
> mike
>
> On Wed, Dec 16, 2009 at 8:19 AM, Brandon High <bh...@freaks.com> wrote:
>>
>> On Wed, Dec 16, 2009 at 8:05 AM, Bob Friesenhahn
>> <bfrie...@simple.dallas.tx.us> wrote:
>> > In his case 'zfs send' to /dev/null was still quite fast, and the network
>> > was also quite fast (when tested with benchmark software). The implication
>> > is that ssh network transfer performance may have dropped with the update.
>>
>> zfs send appears to be fast still, but receive is slow.
>>
>> I tried a pipe straight from the send to the receive, as well as using
>> mbuffer with a 100 MB buffer; both wrote at ~12 MB/s.
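(Presumably a pipeline roughly like this, with hypothetical dataset names;
mbuffer's -m flag sets the buffer size:

    zfs send tank/data@snap | mbuffer -m 100M | zfs recv -F tank/copy

If the receiver were only stalling intermittently, a 100 MB buffer should
absorb the stalls, so a steady ~12 MB/s points at a sustained bottleneck in
the receive itself.)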
>>
>> -B
>>
>> --
>> Brandon High : bh...@freaks.com
>> Indecision is the key to flexibility.

I'm seeing similar results, though my file systems currently have
de-dupe disabled and only compression enabled; both systems are
running snv_129.
An old snv_111 build is also sending slowly to the snv_129 main file
server; the snv_111 box used to send about 25 MB/sec over SSH when the
main file server ran snv_127. Since snv_128, however, the main file
server has been receiving ZFS snapshots at a fraction of the previous
speed.
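(The replication in question is the usual incremental send piped through ssh,
roughly like this; host, dataset, and snapshot names are hypothetical:

    zfs send -i tank/fs@snap1 tank/fs@snap2 | ssh fileserver zfs recv -d tank

)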
snv_129 fixed it a bit: I was literally getting just a couple hundred
-BYTES- a second on snv_128, while on snv_129 I can get about 9-10
MB/sec if I'm lucky, but usually 4-5 MB/sec. No other configuration
changes occurred on the network, except for my X4540s being upgraded
to snv_129.

It does appear to be the zfs receive part, because I can send to
/dev/null at close to 800 MB/sec (42 drives in 5- and 6-disk RAID-Z
vdevs).
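(A cheap way to see which side of the pipe is stalling, assuming pv is
available, is to drop it into the middle, names hypothetical again:

    zfs send tank/fs@snap | pv | ssh fileserver zfs recv -d tank

If pv sits near zero most of the time, the sender is blocked waiting on the
receiver, which matches what I'm seeing.)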

Something must've changed in either SSH or the ZFS receive bits to
cause this, but sadly, since I upgraded my pool, I cannot roll back
these hosts. :(
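(To rule SSH itself in or out, pushing raw data over the same path should
help; for example, 1 GB of zeros through the same ssh connection, hostname
hypothetical:

    dd if=/dev/zero bs=1024k count=1024 | ssh fileserver 'cat > /dev/null'

If that runs at wire speed while zfs recv stays slow, the receive bits are
the likelier culprit.)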

-- 
Brent Jones
br...@servuhome.net