I have observed the opposite, and I believe that all writes to my dedup'd
pool are slow.

I used a local rsync (no ssh) for one of my migrations so that it would be
restartable (it took *4 days*), and the writes were just as slow as with
zfs recv.
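
(Roughly speaking, a restartable local copy looks like this; the paths are
placeholders, not my actual layout:)

  # re-running the same command after an interruption skips files already copied
  rsync -aH --partial /oldpool/data/ /newpool/data/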

I have not seen fast writes of real data to the deduped volume when copying
enough of it. (I assume there is some sort of writeback behavior that makes
small writes look faster?)
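
(For anyone comparing notes, the size of the dedup table is easy to check;
"tank" is a placeholder pool name, and I believe both of these work on the
dedup-capable builds:)

  zpool status -D tank    # DDT entry count and in-core size
  zdb -DD tank            # dedup table histogram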

Of course if you just use mkfile, it does run amazingly fast, but that is
because it writes all-zero blocks, which dedup (and compress) down to almost
nothing.
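
(A quick way to see the difference on a deduped dataset; paths and sizes are
placeholders:)

  mkfile 2g /tank/dedup/zeros    # all-zero blocks, trivially dedupable, looks fast
  dd if=/dev/urandom of=/tank/dedup/random bs=128k count=16384    # unique blocks force real DDT work
  # caveat: /dev/urandom can itself be slow; copying a large existing file of
  # real data is a fairer test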

mike


Edward Ned Harvey wrote:
>
>  I'm willing to accept slower writes with compression enabled, par for
> the course. Local writes, even with compression enabled, can still
> exceed 500MB/sec, with moderate to high CPU usage.
> These problems seem to have manifested after snv_128, and seemingly
> only affect ZFS receive speeds. Local pool performance is still very
> fast.
>
>
>  Now we're getting somewhere.  ;-)
> You've tested the source disk (result: fast.)
> You've tested the destination disk without zfs receive (result: fast.)
> Now the only two ingredients left are:
>
> Ssh performance, or zfs receive performance.
>
> So, to conclusively identify and prove and measure that zfs receive is the
> problem, how about this:
> zfs send somefilesystem | ssh somehost 'cat > /dev/null'
>
> If that goes slow, then ssh is the culprit.
> If that goes fast ... and then you change to "zfs receive" and that goes
> slow ... Now you've scientifically shown that zfs receive is slow.
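
For completeness, the two-step comparison Ned describes would look roughly
like this (host, pool, and snapshot names are placeholders, and zfs send
wants a snapshot):

  # step 1: exercise ssh only; the far end just discards the stream
  zfs send tank/fs@migrate | ssh somehost 'cat > /dev/null'

  # step 2: same stream, but now zfs receive has to do real work
  zfs send tank/fs@migrate | ssh somehost 'zfs receive -d destpool'

If step 1 runs at full speed and step 2 crawls, that points squarely at
zfs receive.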