On Jul 9, 2011 1:56 PM, "Edward Ned Harvey" <
opensolarisisdeadlongliveopensola...@nedharvey.com> wrote:
>
> Given the abysmal performance, I have to assume there is a significant
> number of "overhead" reads or writes in order to maintain the DDT for each
> "actual" block write operation.  Something I didn't mention in the other
> email is that I also tracked iostat throughout the whole operation.  It's
> all writes (or at least 99.9% writes.)  So I am forced to conclude it's a
> bunch of small DDT maintenance writes taking place and incurring access
> time penalties in addition to each intended single block access time penalty.
>
> The nature of the DDT is that it's a bunch of small blocks, that tend to
> be scattered randomly, and require maintenance in order to do anything else.
> This sounds like precisely the usage pattern that benefits from low latency
> devices such as SSD's.

The DDT should be written in COW fashion, and asynchronously, so there
should be no access time penalty.  Or so ISTM.
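
To make that concrete, here's a rough sketch of what I mean (Python, made-up
names, nothing like the actual ZFS code): refcount updates only touch the
in-memory copy of the table, and the dirty entries are flushed copy-on-write
once per transaction group, so any seek cost is amortized across every block
in the txg instead of being paid per write.

class DedupTableSketch:
    """In-memory model of the DDT; dirty entries are flushed once per txg."""

    def __init__(self):
        self.entries = {}   # checksum -> refcount (cached copy of the table)
        self.dirty = set()  # checksums modified in the open txg

    def bump_refcount(self, checksum):
        # The write path only touches memory; no disk I/O happens here.
        self.entries[checksum] = self.entries.get(checksum, 0) + 1
        self.dirty.add(checksum)

    def sync_txg(self, write_block):
        # At txg sync the dirty entries go out copy-on-write in one batch,
        # so any access-time cost is paid per txg, not per logical write.
        for checksum in sorted(self.dirty):
            write_block(checksum, self.entries[checksum])
        self.dirty.clear()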

Dedup is necessarily slower for writing because of the deduplication table
lookups.  Those lookups are synchronous, but for async writes you'd think
total write throughput would only be affected by (a) the additional read
load (which is zero in your case) and (b) any inability to put together
large transactions due to the high latency of each logical write.  And (b)
shouldn't happen, particularly if the DDT fits in RAM or L2ARC, as it does
in your case.
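
The write path I have in mind looks roughly like this (again just a sketch
with invented names, not the real code).  The only synchronous step is the
DDT lookup, and it only turns into extra disk reads when the entry isn't
already sitting in the cache:

import hashlib

def dedup_write(block, ddt_cache, ddt_on_disk, pending_txg):
    # block is the raw data (bytes); the caches are plain dicts in this sketch.
    checksum = hashlib.sha256(block).hexdigest()

    # Synchronous step: the DDT lookup.  Zero additional read load if cached.
    entry = ddt_cache.get(checksum)
    if entry is None:
        entry = ddt_on_disk.get(checksum)  # this would be the extra read load
        if entry is not None:
            ddt_cache[checksum] = entry

    if entry is not None:
        entry["refcount"] += 1             # duplicate block: no data write at all
    else:
        ddt_cache[checksum] = {"refcount": 1}
        pending_txg.append(block)          # unique block: goes out with the txg

    # The data and the updated DDT entries are both written asynchronously
    # when the transaction group syncs, so nothing here should stall the
    # application's async write.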

So, at first glance, my guess is that ZFS is leaving dedup write performance
on the table, most likely for implementation reasons rather than design reasons.

Nico
--
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
