Hi,

I use GELI with ZFS all the time. Works fine for me so far.


On 31.07.12 21:54, Robert Milkowski wrote:
>> Once something is written deduped, you will always use that memory when
>> you read files that were written while dedup was enabled, so you do not
>> save any memory unless you rarely access most of your data.
> 
> For reads you don't need the DDT. Also, in Solaris 11 (unfortunately not
> in Illumos, AFAIK) the in-memory ARC stays deduped on reads (so if 10
> logical blocks are deduped to 1 and you read all 10 logical copies, only
> one block is allocated in the ARC). If there are no further modifications
> and you only read deduped data, then apart from the disk space savings
> there can be a very nice performance improvement as well (less I/O, more
> RAM for caching, etc.).
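
A quick sketch of why a deduped ARC helps: one cached buffer then serves
every logical copy of a block, so the same RAM covers dedup-ratio times as
much logical data. The 8 GiB ARC size and 10:1 ratio below are made-up
numbers for illustration, not Solaris internals:

def arc_logical_coverage(arc_bytes, dedup_ratio, arc_deduped):
    # With a deduped ARC, one cached buffer serves every logical copy
    # of a block; otherwise each logical copy costs its own buffer.
    return arc_bytes * dedup_ratio if arc_deduped else arc_bytes

GiB = 1 << 30
for deduped in (False, True):
    cov = arc_logical_coverage(8 * GiB, 10, deduped)
    print("deduped ARC: %-5s -> caches %3d GiB of logical data"
          % (deduped, cov // GiB))

With a 10:1 ratio, the same 8 GiB of ARC goes from covering 8 GiB of
logical data to covering 80 GiB.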
> 
> 
>>
>> As far as the OP is concerned: unless you have a dataset that will
>> dedup well, don't bother with it; use compression instead (don't use
>> both compression and dedup, because you will shrink the average record
>> size and balloon the memory usage).
> 
> Can you expand a little more here?
> Dedup+compression actually works pretty well (not counting the "standard"
> problems with current dedup, compression or not).
> 
> 
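
To put rough numbers on the DDT memory point above: the in-core DDT cost
scales with the number of unique blocks, so anything that shrinks the
average block size multiplies the entry count. A back-of-envelope Python
sketch; the ~320 bytes per in-core DDT entry is the commonly quoted
ballpark, not an exact constant, and the 1 TiB dataset and block sizes
are assumptions for illustration:

def ddt_ram_bytes(unique_data_bytes, avg_block_bytes, entry_bytes=320):
    # One DDT entry per unique block; ~320 B per in-core entry is the
    # commonly quoted figure (assumption; varies by platform/version).
    entries = unique_data_bytes // avg_block_bytes
    return entries * entry_bytes

TiB = 1 << 40
for bs in (128 * 1024, 8 * 1024):
    ram = ddt_ram_bytes(1 * TiB, bs)
    print("avg block %6d B -> DDT needs ~%.1f GiB of RAM"
          % (bs, ram / float(1 << 30)))

So 1 TiB of unique data costs roughly 2.5 GiB of DDT at a 128 KiB average
block size, but roughly 40 GiB at 8 KiB.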

