I did some research for this a while ago, and currently use the
following setup:

https://gist.github.com/Alveel/c8e80aeef208a7e27c9bd50d0023420c

Of course this is still far from perfect, and I welcome more ideas or
criticism, but perhaps it can help you out :)
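
To give a rough idea without clicking through: the general shape is a
dedicated ZFS dataset for the BackupPC pool with lightweight compression
enabled. The dataset names and property values below are only an
illustration, not necessarily exactly what the gist does:

  # dedicated datasets for BackupPC; names and values are illustrative
  zfs create -o mountpoint=/var/lib/backuppc zroot/bpc
  zfs create -o compression=lz4 -o atime=off zroot/bpc/pool
  # once some backups are in, see how well the data compresses
  zfs get compression,compressratio zroot/bpc/pool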

On 9/11/19 10:03 PM, Carl Soderstrom wrote:

> On 09/11 09:40 , Alexander Moisseev via BackupPC-users wrote:
>> On 11.09.2019 18:19, Robert Trevellyan wrote:
>>> I'm letting ZFS do the compression (using the default of LZ4) with BackupPC
>>> handling deduplication. I think you'll find a reasonable consensus that ZFS
>>> compression is always a win for storage space (it stores incompressible
>>> data unmodified), whereas ZFS deduplication is best avoided in most cases,
>>> mostly due to its high memory usage. It's possible that BackupPC
>>> compression would be tighter than LZ4, …
>> Actually, on ZFS you are not limited to LZ4, but ZFS compresses each file
>> block independently, which is why BackupPC compression is usually tighter,
>> though it depends on the data.
>>
>> We recently moved from a 77.96G cpool to a pool on a compressed filesystem.
>> It now consumes 81.2G, so there is not much difference.
>>
>> # zfs get compression,compressratio,recordsize,referenced zroot/bpc/pool
>> NAME            PROPERTY       VALUE     SOURCE
>> zroot/bpc/pool  compression    gzip-3    local
>> zroot/bpc/pool  compressratio  3.87x     -
>> zroot/bpc/pool  recordsize     128K      default
>> zroot/bpc/pool  referenced     81.2G     -
> Thanks Alexander, those details are really helpful.
>
>
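
For what it's worth, 81.2G versus 77.96G works out to only about a 4%
increase, so on this data letting ZFS handle the compression costs very
little extra space. The ratio is easy to keep an eye on with e.g.:

  zfs get compressratio zroot/bpc/pool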
-- 
Alwyn
