I am almost sure that blocks are still stored hydrated (fully expanded) in the
cache. There is an outstanding RFE for this; while I am not certain, I think
the feature will be implemented sooner or later. In theory the benefit would be
small anyway, as most dedup'ed shares are used for archive purposes...
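
One rough way to check whether reads of a deduped copy really get hydrated into
the cache a second time is to watch the ARC size around the reads. A minimal
sketch, assuming the pool 'foo' and the files from Pawel's test quoted below;
the sysctl name is the FreeBSD one, on Solaris the equivalent would be
kstat -p zfs:0:arcstats:size, and exact names can vary by release:

       # sysctl kstat.zfs.misc.arcstats.size   # ARC size before any reads
       # dd if=/foo/a of=/dev/null bs=1m
       # sysctl kstat.zfs.misc.arcstats.size   # should grow by roughly the 1 GB just read
       # dd if=/foo/b of=/dev/null bs=1m
       # sysctl kstat.zfs.misc.arcstats.size   # if this barely moves, the shared blocks were cached only once

If the ARC grows again by the full file size on the second read, the cache is
holding a second, hydrated copy of the same blocks.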

PS: NetApp does have significantly bigger problems in the caching department,
such as having virtually no L1 cache. However, it's also my duty to know where
they have an advantage.

Mertol Özyöney | Storage Sales
Mobile: +90 533 931 0752
Email: mertol.ozyo...@oracle.com

On 12/10/11 4:05 PM, "Pawel Jakub Dawidek" <p...@freebsd.org> wrote:

>On Wed, Dec 07, 2011 at 10:48:43PM +0200, Mertol Ozyoney wrote:
>> Unfortunately, the answer is no. Neither the L1 nor the L2 cache is dedup aware.
>> The only vendor I know that can do this is NetApp.
>And you really work at Oracle?:)
>The answer is definitely yes. The ARC caches on-disk blocks and dedup just
>references those blocks. When you read, the dedup code is not involved at all.
>Let me show it to you with a simple test:
>Create a file (dedup is on):
>       # dd if=/dev/random of=/foo/a bs=1m count=1024
>Copy this file so that it is deduped:
>       # dd if=/foo/a of=/foo/b bs=1m
>Export the pool so all cache is removed and reimport it:
>       # zpool export foo
>       # zpool import foo
>Now let's read one file:
>       # dd if=/foo/a of=/dev/null bs=1m
>       1073741824 bytes transferred in 10.855750 secs (98909962 bytes/sec)
>We read file 'a' and all its blocks are in cache now. The 'b' file
>shares all the same blocks, so if ARC caches blocks only once, reading
>'b' should be much faster:
>       # dd if=/foo/b of=/dev/null bs=1m
>       1073741824 bytes transferred in 0.870501 secs (1233475634 bytes/sec)
>Now look at that: 'b' was read 12.5 times faster than 'a', with no disk
>activity (a simple way to double-check the disk-idle part is sketched after
>this message). Magic? :)
>Pawel Jakub Dawidek                       http://www.wheelsystems.com
>FreeBSD committer                         http://www.FreeBSD.org
>Am I Evil? Yes, I Am!                     http://yomoli.com
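
One simple way to confirm the "no disk activity" part of the test above is to
leave zpool iostat running in a second terminal while the deduped copy is
re-read. A sketch, assuming the same pool name 'foo' as in the test:

       # zpool iostat foo 1                  # terminal 1: per-second read/write stats for the pool
       # dd if=/foo/b of=/dev/null bs=1m     # terminal 2: re-read the deduped file

If the read is served from the ARC, the pool's read operations and bandwidth
should stay at (or near) zero for the whole transfer.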