> Actually, I think the rule-of-thumb is 270 bytes per DDT entry.
> It's 200 bytes of ARC for every L2ARC entry.
> 
> The DDT itself doesn't count against this ARC space usage.
> 
> E.g.:   I have 1TB of 4k files that are to be deduped, and it
> turns out that I have about a 5:1 dedup ratio. I'd also like to
> see how much ARC usage I eat up with a 160GB L2ARC.
> 
> (1)    How many entries are there in the DDT:
> 1TB of 4k files means there are 2^28 blocks (about 270 million).
> However, at a 5:1 dedup ratio, I'm only actually storing 20% of
> that, so I have about 54 million unique blocks.
> Thus, I need a DDT of about 270 * 54 million  =~  14.5GB in size.
> (2)    My L2ARC is 160GB in size, but I'm using 14.5GB for the
> DDT.  Thus, I have about 145GB free for use as a data cache.
> 145GB / 4k =~ 35 million blocks can be stored in the
> remaining L2ARC space.
> However, 35 million blocks take up:
>    200 * 35 million =~ 7GB of space in ARC.
> Thus, I'd better have at least 7GB of RAM allocated
> solely for L2ARC reference pointers, and no other use.
> 
> 
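For anyone redoing this arithmetic for their own pool, here is a minimal Python sketch of the calculation quoted above. It only encodes the two rules of thumb from the thread (roughly 270 bytes of DDT per unique block, roughly 200 bytes of ARC per L2ARC entry) and, matching the note above, does not charge ARC headers for the DDT portion of the L2ARC. The function name and structure are mine for illustration, not anything from ZFS itself.

# Back-of-the-envelope sizing for dedup + L2ARC overhead, using the
# rule-of-thumb figures from this thread:
#   ~270 bytes of DDT per unique block
#   ~200 bytes of ARC per L2ARC entry
# All inputs are just the assumptions from the example; adjust for
# your own pool.

DDT_BYTES_PER_ENTRY = 270        # rule of thumb: DDT entry size
ARC_BYTES_PER_L2ARC_ENTRY = 200  # rule of thumb: ARC header per L2ARC entry

def dedup_l2arc_overhead(data_bytes, block_size, dedup_ratio, l2arc_bytes):
    """Return (ddt_bytes, l2arc_free_bytes, arc_header_bytes)."""
    total_blocks = data_bytes // block_size          # blocks before dedup
    unique_blocks = int(total_blocks / dedup_ratio)  # blocks actually stored
    ddt_bytes = unique_blocks * DDT_BYTES_PER_ENTRY  # step (1): DDT size

    # Step (2): space left in L2ARC for data, and the ARC headers
    # needed to track those cached blocks.  Per the thread, the DDT
    # portion is not charged the 200-byte header.
    l2arc_free = l2arc_bytes - ddt_bytes
    cached_blocks = l2arc_free // block_size
    arc_headers = cached_blocks * ARC_BYTES_PER_L2ARC_ENTRY
    return ddt_bytes, l2arc_free, arc_headers

if __name__ == "__main__":
    # The example above: 1TB of 4k blocks, 5:1 dedup, 160GB L2ARC.
    ddt, free, arc = dedup_l2arc_overhead(
        data_bytes=1 << 40, block_size=4096,
        dedup_ratio=5.0, l2arc_bytes=160 * 10**9)
    print("DDT size:              %.1f GB" % (ddt / 1e9))
    print("L2ARC left for data:   %.1f GB" % (free / 1e9))
    print("ARC for L2ARC headers: %.1f GB" % (arc / 1e9))

Run as-is it prints roughly 14.5GB of DDT, 145GB of L2ARC left for data, and 7GB of ARC for headers, matching the worked example; the GB figures are decimal, so expect small rounding differences.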

Hi Erik.

Are you saying the DDT will automatically be stored in an L2ARC device 
if one exists in the pool, instead of using ARC? 

Or is there some sort of memory pressure point where the DDT gets moved from 
ARC to L2ARC?

Thanks,

Geoff
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
