Thanks everyone for your input on this thread.  It sounds like there is sufficient weight
behind the affirmative, so I will incorporate this methodology into my performance analysis
test plan.  If the performance testing goes well, I will share some of the results when we
conclude in the January/February timeframe.

Regarding the great dd use case provided earlier in this thread, the L1 and L2 ARC
detect streaming reads (such as those from dd) and prevent them from populating the
cache.  See my previous blog post at the web site link below for a way around this
protective caching control of ZFS.
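As a rough illustration (the file path here is hypothetical, staged in /tmp so the example is self-contained), a large sequential read like the one below is the kind of workload the ARC classifies as streaming and declines to cache:

```shell
# Sketch only: stage a scratch file so the example runs anywhere.
dd if=/dev/zero of=/tmp/arc_demo.bin bs=1M count=8 2>/dev/null

# On a ZFS dataset, a large sequential read like this is detected
# as a streaming workload, so the ARC avoids caching the blocks.
dd if=/tmp/arc_demo.bin of=/dev/null bs=1M 2>/dev/null

rm -f /tmp/arc_demo.bin
```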

Thanks again!

Brad Diggs | Principal Sales Consultant

On Dec 8, 2011, at 4:22 PM, Mark Musante wrote:

You can see the original ARC case here:

On 8 Dec 2011, at 16:41, Ian Collins wrote:

On 12/9/11 12:39 AM, Darren J Moffat wrote:
On 12/07/11 20:48, Mertol Ozyoney wrote:
Unfortunately the answer is no. Neither the L1 nor the L2 cache is dedup-aware.

The only vendor I know that can do this is NetApp.

In fact, most of our functions, like replication, are not dedup-aware.
For example, technically it's possible to optimize our replication so that
it does not send data chunks if a data chunk with the same checksum
exists on the target, without enabling dedup on the target and source.
We already do that with 'zfs send -D':


     -D

             Perform dedup processing on the stream. Deduplicated
             streams cannot be received on systems that do not
             support the stream deduplication feature.
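For context, here is a minimal sketch of producing and receiving a deduplicated stream. The dataset names ("tank/fs", "tank/fs_copy") are hypothetical, and the commands are guarded so the script exits cleanly on systems without the zfs CLI:

```shell
# Sketch only: "tank/fs" is a hypothetical dataset name.
if command -v zfs >/dev/null 2>&1; then
  zfs snapshot tank/fs@dedup-demo
  # -D deduplicates identical blocks within the send stream itself,
  # independent of whether dedup is enabled on either pool.
  zfs send -D tank/fs@dedup-demo | zfs receive tank/fs_copy
else
  echo "zfs CLI not available; skipping demo"
fi
```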

Is there any more published information on how this feature works?


zfs-discuss mailing list
