> On 2024-10-21 12:43 AM PDT Martin Steigerwald <[email protected]> wrote:

> ...

> There are several different variants of this and I bet it is important to 
> clarify exactly what is meant here. At least these two come to mind:
> 
> 1) Mount a block-for-block clone of the same filesystem another time.
> 
> 2) Mount a snapshot of one filesystem on one block device to a different 
> location in the filesystem tree.
> 
> Carl, which one are you referring to? If you mean a third thing, please 
> elaborate.

I'm talking about both 1) and 2). However, I mean thin **LVM** snapshots, not 
bcachefs' native snapshots. I'm also talking about file images of entire 
filesystems (what you'd get if you ran something like "cat /dev/nvme0n1p1 > 
/fs.bak").

I have several computers on which I use several different filesystems, and on 
almost all of them I use (thin) LVM. Since I'm using LVM anyway, it's easier, 
more reliable, and more consistent for me to use LVM's snapshots rather than 
the filesystems' native snapshots (if any). For similar reasons I also tend to 
use MDRAID instead of the filesystems' native multiple-device support and LUKS 
instead of their native encryption support. I'm also thinking about adding 
checksums to every filesystem using dm-integrity, but I haven't gotten around 
to planning that out yet. So if I need to build a server that absolutely has 
to run reliably and consistently, my current default is:

  Drives -> MDRAID -> LUKS -> Thin LVM -> XFS

I know that's really old-school and really un-sexy these days, but it's 
essentially bulletproof when managed properly.
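
Spelled out, building that stack goes roughly like this (device names and 
sizes are placeholders, and I've left dm-integrity out since I haven't 
planned that part yet):

  # Drives -> MDRAID
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

  # MDRAID -> LUKS
  cryptsetup luksFormat /dev/md0
  cryptsetup open /dev/md0 cryptmd

  # LUKS -> thin LVM
  pvcreate /dev/mapper/cryptmd
  vgcreate vg0 /dev/mapper/cryptmd
  lvcreate --type thin-pool -L 500G -n pool vg0
  lvcreate --type thin -V 100G --thinpool vg0/pool -n root

  # Thin LVM -> XFS
  mkfs.xfs /dev/vg0/root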

> ...

> (in this case BCacheFS on LVM as I wanted to have the flexibility to test 
> out different filesystems)
> 
> This makes it easy for me to exclude all snapshots from backup operations 
> as I can do top level snapshots of the filesystem contents and "hide" them 
> away in a subvolume (means sub directory in filesystem tree) of my choice.

I'm not sure I understand what you mean, but I don't think I do it that way.

> I still use rsync for backups as it has stood the test of time.

Yeah, I've fought with rsync for a couple of decades now, and it's the 
transport used by my backup system. For me rsync has always been problematic: 
it semi-regularly hangs despite being run on completely quiescent snapshots, 
it has atrocious performance on large images, and it has some security weak 
spots that probably don't matter much on a secure network but still bother me. 
So at some point I'm going to swap rsync out for BorgBackup, which should let 
me scrap most of my current backup system except for the front-end. If you've 
ever had any issues with rsync, you might want to check it out. I've tested a 
bunch of other, newer alternatives to rsync as well, but for me BorgBackup was 
the winner.
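
The Borg workflow I'm moving toward looks roughly like this (the repo URL and 
retention numbers are placeholders, not a recommendation):

  # One time: initialize an encrypted, deduplicating repository.
  borg init --encryption=repokey ssh://backup-host/srv/borg/repo

  # Each run: archive a mounted, quiescent snapshot. Dedup keeps
  # repeated runs cheap even without rsync-style delta transfer.
  borg create --stats --compression zstd \
      ssh://backup-host/srv/borg/repo::'{hostname}-{now}' /mnt/snap

  # Expire old archives on a schedule.
  borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
      ssh://backup-host/srv/borg/repo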

> I could probably switch to BTRFS send/receive or a similar functionality 
> in BCacheFS. With the added benefit of way better handling of renames.

There are actually projects that you can find on GitHub and elsewhere that 
let you do the same sort of send/receive that zfs/btrfs can do on **any** 
filesystem by working at the thin LVM level. (The tools require that you're 
using **thin** LVs, which you should be anyway.) I use this method when I need 
to send an efficient, incremental, **exact** copy of an **entire** filesystem 
somewhere else.

These tools are nice because they don't need to read or send every block on 
the source device, just the changed ones, which is crucial for large 
filesystems or slow networks. I use this technique instead of the native 
filesystem send/receive so I can have a consistent interface across any 
filesystem. I won't point you at any particular project, as what I'm using has 
been heavily modified for my particular use case.
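
The underlying mechanism is no secret, though: the thin pool's metadata 
records exactly which blocks differ between two thin snapshots, and 
thin_delta from thin-provisioning-tools will report them. A rough sketch of 
what these tools do under the hood (pool/device names and the device IDs 
below are made up):

  # Reserve a metadata snapshot so the live pool's metadata can be read.
  dmsetup message vg0-pool-tpool 0 reserve_metadata_snap

  # Look up the internal device IDs of the two snapshots to compare.
  lvs -o lv_name,thin_id vg0

  # Emit an XML description of the block ranges that differ between
  # thin device IDs 5 and 6; only those ranges get read and sent.
  thin_delta --metadata-snap --snap1 5 --snap2 6 /dev/mapper/vg0-pool_tmeta

  # Release the metadata snapshot when done.
  dmsetup message vg0-pool-tpool 0 release_metadata_snap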

> ...

Thanks,
Carl
