Thanks Alexander -- REALLY helpful, REALLY thoughtful. Comments below.

Alexander Kobel wrote at about 18:04:54 +0100 on Thursday, January 28, 2021:

> For initial backups and changes, it depends on your BackupPC server CPU.
> The zlib compression in BackupPC is *way* more resource hungry than lzop
> or zstd. You probably want to make sure that the network bandwidth is
> the bottleneck rather than compressor throughput:
>
>    gzip -c $somebigfile | pv > /dev/null
>    zstd -c $somebigfile | pv > /dev/null
>    lzop -c $somebigfile | pv > /dev/null

I get the following, where I stored the file on a ram disk to minimize
the effect of file read time...

1] Highly compressible 6GB text file

   Compress:
     gzip: 207MiB 0:00:47 [4.39MiB/s]
     lzop: 355MiB 0:00:05 [70.2MiB/s]
     zstd: 177MiB 0:00:07 [22.2MiB/s]

   Uncompress:
     gzip: 5.90GiB 0:00:21 [ 287MiB/s]
     lzop: 5.90GiB 0:00:06 [ 946MiB/s]
     zstd: 5.90GiB 0:00:04 [1.40GiB/s]

2] 1GB highly non-compressible file (created from /dev/urandom)

   Compress:
     gzip: 987MiB 0:00:31 [31.6MiB/s]
     lzop: 987MiB 0:00:00 [1.24GiB/s]
     zstd: 986MiB 0:00:01 [ 857MiB/s]

Note: I used the default compression levels for each.

So, focusing on the compressible file:
- gzip/zlib is slower than both lzop and zstd, and compresses better
  than lzop but not as well as zstd
- lzop is the fastest but compresses the least
- zstd compresses the best but is slower than lzop, especially on
  compression

My concern with zstd, though, is that on compression it is more than 3x
slower than lzop -- and slower than even a standard hard disk -- meaning
that it may limit system performance on writes. Are your numbers similar?

Either way, it seems that btrfs block-level compression using either lzo
or zstd would be about an order of magnitude *faster* than BackupPC
compression using zlib. Right??
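For anyone who wants to reproduce the comparison, a quick harness along
these lines should give both the compress and decompress numbers. This
is only a sketch: it assumes the test file and scratch space live on a
tmpfs (the /mnt/ramdisk path and file name are made up), and it uses
each tool's default level.

   #!/bin/bash
   # Rough benchmark sketch: time compression and decompression of one
   # file with gzip, lzop and zstd at their default levels.  Keeping the
   # file and the compressed copies on a ramdisk avoids disk I/O skewing
   # the numbers; pv reports size, elapsed time and throughput.
   FILE=/mnt/ramdisk/somebigfile    # hypothetical test file
   TMP=/mnt/ramdisk/out             # compressed copies also kept in RAM

   for tool in gzip lzop zstd; do
       echo "== $tool compress =="
       "$tool" -c < "$FILE" | pv > "$TMP.$tool"

       echo "== $tool decompress =="
       "$tool" -dc < "$TMP.$tool" | pv > /dev/null
   done
   rm -f "$TMP".gzip "$TMP".lzop "$TMP".zstd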
> +/- multithreading, check for yourself.
>
> Unchanged files are essentially for free with both cpool and
> pool+btrfs-comp for incrementals, but require decompression for full
> backups except for rsync (as the hashes are always built over the
> uncompressed content).

I assume you are referring to '--checksum'.

> Same for nightlies, where integrity checks over your pool data are done.

I don't believe that BackupPC_nightly does any integrity check of the
content; it just checks the integrity of the refcounts. As such, I don't
believe it actually reads any files.

I did write my own perl script to check pool integrity, in case anybody
is interested (it is about 100x faster than using a bash script to
iterate through the cpool and pipe files to BackupPC_zcat followed by
md5sum).
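For reference, the slow bash approach I'm comparing against is roughly
the following. It's only a sketch: it assumes a v4 pool layout, where
each pool file is named after the MD5 digest of its uncompressed
content, and the paths are examples (BackupPC_zcat's location varies by
distro).

   #!/bin/bash
   # Brute-force cpool check: decompress every pool file and compare the
   # MD5 of the content against the file name (which, in v4, is that
   # digest).  Adjust paths for your installation.
   CPOOL=/var/lib/backuppc/cpool
   ZCAT=/usr/share/backuppc/bin/BackupPC_zcat   # location varies by distro

   find "$CPOOL" -type f ! -name poolCnt | while read -r f; do
       want=$(basename "$f")
       got=$("$ZCAT" "$f" | md5sum | cut -d' ' -f1)
       [ "$want" = "$got" ] || echo "MISMATCH: $f (md5 $got)"
   done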
> Decompression is significantly faster, of course, but still vastly
> different between the three algorithms. For fast full backups, you
> might want to ensure that you can decompress even several times faster
> than network throughput.
>
> > 2. Storage efficiency, including:
> >    - Raw compression efficiency of each file
>
> Cpool does file-level compression, btrfs does block-level compression.
> The difference is measurable, but not huge (~1 to 2% compression ratio
> in my experience for the same algorithm, i.e. zstd on block vs. file
> level).

I assume compression is better at the whole-file level, right? But
deduplication would likely more than make up for this difference, right?

> Btrfs also includes a logic to not even attempt further compression if
> a block looks like it's not going to compress well. In my experience,
> that's hardly ever an issue.
>
> So, yes, using zlib at the same compression level, btrfs compresses
> slightly worse than BackupPC. But for btrfs there's also lzop and zstd.
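As an aside, once the pool sits on a compressed btrfs, the actual
block-level ratio should be easy to read off with the compsize tool
(packaged as btrfs-compsize on Debian/Ubuntu); the pool path below is
just an example.

   # Report the real on-disk compression ratio of the pool, broken down
   # by compression type.  compsize only reads extent metadata, so it is
   # quick even on a large pool.
   sudo compsize /var/lib/backuppc/pool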
> >    - Ability to take advantage of btrfs extent deduplication for 2
> >      distinct files that share some or all of the same (uncompressed)
> >      content
>
> Won't work with cpool compression.
> For pool+btrfs-comp, it's hard to assess -- depends on how your data
> changes. Effectively, this only helps with large files that are mostly
> identical, such as VM images. Block-level dedup is difficult, only
> available as offline dedup in btrfs,

You can use 'duperemove' online, though it is a memory hog for sure.

> and you risk that all your backups are destroyed if the one copy of
> the common block in there gets corrupted.

This is true of btrfs snapshots generally, of course...

> For me a no-go, but YMMV, in particular with a RAID-1.
>
> File-level deduplication is irrelevant, because BackupPC takes care of
> that by pooling.
>
> > 3. Robustness in case of disk crashes, file corruption, file system
> >    corruption, other types of "bit rot" etc.
> >    (note my btrfs filesystem is in a btrfs-native RAID-1
> >    configuration)
>
> DISCLAIMER: These are instances for personal data of a few people. I
> care about the data, but there are no lives or jobs at stake.
>
> Solid in my experience. Make sure to perform regular scrubs and check
> that you get informed about problems.
> On my backup system, I only ever saw problems once, when the HDD was
> about to die. No RAID to help, so this was fatal for a dozen files,
> which I had to recover from a second off-site BackupPC server.
>
> On my laptops, I saw scrub errors five or six times after power losses
> during heavy duty. That's less than one occasion per year, but still,
> it happened.

Luckily, I haven't seen any scrub errors yet. But what I like about
btrfs RAID-1 is that btrfs can use its checksums to decide which copy to
trust when the two mirrors disagree, which is not true of standard
RAID-1.

> On a side note, theoretically you won't need nightly pool checks if you
> run btrfs scrub at the same rate.

I don't think the standard BackupPC_nightly does file integrity checks.
See above.

> With kernel 5.10 being an LTS release, we even have a stable kernel +
> fallback supporting xxhash/blake2/sha256 checksums, which is great at
> least from a theoretical perspective.

Can you elaborate here?
- What advantages do you get?
- Is it enough to install the kernel, or do you need to update
  user-space programs too?
Note: I am running kernel 5.10.7 on Ubuntu 18.04.

> In case there *is* a defect, however, there's not a whole lot of
> recovery options on btrfs systems. I wasn't able to recover from any of
> the above scrub errors; I had to delete the affected files.
>
> > In the past, it seems like the tradeoffs were not always clear so
> > hoping the above outline will help flesh out the details...

VERY HELPFUL

> > Looking for both real-world experience as well as theoretical
> > observations :)
>
> Theoretically, pool+btrfs-comp with zstd is hard to beat. You won't
> find a better trade-off between resource usage and compression ratio
> these days.
>
> Also, I believe it's more elegant and clean to keep compression apart
> from BackupPC. Storing and retrieving files efficiently is what
> filesystems are there for; BackupPC is busy enough already with
> rotating backups, deduplication and transfer.

AGREE -- plus it would be nice to not have to use BackupPC_zcat every
time I want to look at a pool file.

Is there any way to convert your backups from cpool to pool, short of
manually doing the following:
1. Loop through the cpool tree, decompress each file, and move it to
   the pool
2. Loop through the pc tree, editing each attrib file to change each
   file's attrib from compress=3 to compress=1
I have done worse, but this would take quite a while to run...

> Practically, it depends on how much trust you put into btrfs.
>
> Definitive answers for that one are expected to be available
> immediately after the emacs-vs-vi question is settled for good.

Oh, that hasn't been resolved yet? :)

> HTH,
> Alex
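P.S. If I do end up doing the conversion by hand, I imagine step 1 above
boils down to something like this untested sketch. It assumes BackupPC
is stopped, a v4 layout where pool/ and cpool/ share the same
digest-derived relative paths, and example paths that need adjusting;
step 2 (rewriting the attrib files) is not covered here.

   #!/bin/bash
   # Untested sketch of step 1: decompress every cpool file into the
   # corresponding pool path.  Nothing is deleted from cpool.
   TOPDIR=/var/lib/backuppc
   ZCAT=/usr/share/backuppc/bin/BackupPC_zcat   # location varies by distro

   cd "$TOPDIR/cpool" || exit 1
   find . -type f ! -name poolCnt | while read -r f; do
       dest="$TOPDIR/pool/${f#./}"
       mkdir -p "$(dirname "$dest")"
       "$ZCAT" "$f" > "$dest" || { echo "failed on $f"; break; }
   done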