Standard disclaimer, I'm not a btrfs expert...

On Sat, Mar 6, 2021 at 6:52 AM John Botha (SourceForge) <
sourcefo...@yellowacorn.com> wrote:

> From what I've read, btrfs (like many file systems) suffers over time when
> fragmentation increases. I have seen suggestions such as not to put
> databases on btrfs because of this, but that just seems silly at a number
> of levels. At the least one should take special care with databases (know
> your data and adjust your maintenance accordingly), but that holds for any
> database on any FS. My question is how best to approach this with a
> combination of rebalancing and scrubbing, or if there is another way or
> other aspects to keep in mind.
>

Defragmenting is easy enough... But yes, large files that are modified in
place are generally not a good idea on a COW filesystem: every change is
written to a new extent rather than overwriting the old one, which causes
fragmentation and extra writes. Since you mention rebalancing I assume you
mean some sort of multi-device array?
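For the "easy enough" part, here's a sketch of a manual defrag pass; the
pool path is just an example, adjust it to wherever your BackupPC pool
lives:

```shell
# Recursively defragment a directory tree (-r), verbosely (-v).
# The path /var/lib/backuppc is an assumption for illustration.
btrfs filesystem defragment -r -v /var/lib/backuppc
```

One caveat worth knowing: defragmenting un-shares reflinked extents, so on
a filesystem that does use snapshots it can significantly increase disk
usage. Since you won't be using snapshots, that shouldn't bite you here.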



> I won't be using snapshots in btrfs since BackupPC effectively implements
> its own. When I read up how btrfs implements COW I thought it would be
> safest to use nodatacow, but then read that doing so would also stop bit
> rot protection, so that's a real bummer. Am I missing something, or do I
> have that right?
>

I don't know about stopping it entirely, but with nodatacow btrfs no longer
checksums that data, so scrub can't detect bit rot in those files. You'd be
losing one of the major benefits of btrfs, which normally writes the new
copy of data before releasing the old one. I think btrfs is very good for
system and /home drives unless you're working with large, frequently
rewritten files (databases, major video editing, etc.). It even has some
advantages over typical RAID configurations, as long as you don't have
buggy drive firmware and don't mind scrubbing your filesystem on a regular
basis. I'm not sure it's a great candidate for BackupPC at this time.

Here's a thread on the Fedora mailing list when I was looking into btrfs
RAID (not for BackupPC)

https://lists.fedoraproject.org/archives/list/us...@lists.fedoraproject.org/message/FNK3XEPHAMN4L4DEYQFMEJPA2OQVW3AM/

Chris is very helpful and there's a lot of good info in the thread.


Given how I understand BackupPC implements compression, I'd rather have
> btrfs handle de/compression, as that would seem to involve less time doing
> redundant calculations. Does that make sense?
>

I think the only real advantage there is that btrfs compression is
multi-threaded; I don't think BackupPC's is, but someone please correct me
if I'm wrong. I run BackupPC on a dedicated server, so I'm not really
worried about how long it takes as long as it can keep up, which for my
home use it does.
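If you go the btrfs-compression route, it's just a mount option; a sketch,
where the UUID, mount point, and zstd level are placeholders:

```shell
# Example /etc/fstab entry enabling zstd compression for new writes
# (UUID and mount point are placeholders):
#   UUID=xxxx-xxxx  /var/lib/backuppc  btrfs  compress=zstd:3,noatime  0 0

# Or remount a live filesystem; only data written afterwards is
# compressed, existing data is untouched until rewritten or defragmented:
mount -o remount,compress=zstd:3 /var/lib/backuppc
```

You'd also want to disable BackupPC's own pool compression in that setup,
since compressing already-compressed files wastes CPU on both ends.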

Thanks,
Richard
_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/
