On Thursday, February 8, 2024 6:44:50 PM CET Wols Lists wrote:
> On 08/02/2024 06:38, J. Roeleveld wrote:
> > ZFS doesn't have this "max amount of changes", but will happily fill up
> > the
> > entire pool keeping all versions available.
> > But it was easier to add zpool monitoring for this on ZFS than it was to
> > add snapshot monitoring to LVM.
> > 
> > I wonder, how do you deal with snapshots getting "full" on your system?
> 
> As far as I'm concerned, snapshots are read-only once they're created.
> But there is a "grow the snapshot as required" option.
> 
> I don't understand it exactly, but what I think happens is when I create
> the snapshot it allocates, let's say, 1GB. As I write to the master
> copy, it fills up that 1GB with CoW blocks, and the original blocks are
> handed over to the backup snapshot. And when that backup snapshot is
> full of blocks that have been "overwritten" (or in reality replaced),
> lvm just adds another 1GB or whatever I told it to.

That works with a single snapshot.
But, when I last used LVM like this, I would have multiple snapshots. When I
changed something on the LV, the original data would be copied to each snapshot.
If I had 2 snapshots for that LV, both would grow at the same time.

Or has that changed in recent versions?
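
For reference, this is easy to check; a rough sketch, with VG/LV names that
are just placeholders:

  # two snapshots of the same origin LV
  lvcreate --snapshot --size 1G --name snap1 /dev/vg0/data
  lvcreate --snapshot --size 1G --name snap2 /dev/vg0/data

  # after writing to the origin, Data% climbs for both snapshots
  lvs -o lv_name,origin,data_percent vg0

  # the "grow as required" behaviour lives in lvm.conf (activation section);
  # dmeventd monitoring has to be enabled for it to kick in
  snapshot_autoextend_threshold = 70
  snapshot_autoextend_percent   = 20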

> So when I delete a snapshot, it just goes through those few blocks,
> decrements their use count (if they've been used in multiple snapshots),
> and if the use count goes to zero they're handed back to the "empty" pool.

I know this is how ZFS snapshots work, but I am not convinced LVM snapshots
work the same way.
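
With ZFS that accounting is at least visible directly; roughly, with pool and
dataset names that are made up:

  # space held only by snapshots of a dataset
  zfs get usedbysnapshots tank/data

  # USED per snapshot is what would be freed if only that snapshot was destroyed
  zfs list -t snapshot -o name,used,referenced -r tank/data
  zfs destroy tank/data@old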

> All I have to do is make sure that the sum of my snapshots does not fill
> the lv (logical volume), which in my case is a raid-5.

I assume you mean PV (Physical Volume)?
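
Either way, what I actually monitor is the free space left for the CoW data to
grow into; something like the following, names again placeholders:

  # LVM: free extents left in the volume group
  vgs -o vg_name,vg_size,vg_free vg0

  # ZFS: pool usage, which is what I alert on
  zpool list -o name,size,alloc,free,capacity tank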

I actually ditched the whole idea of RAID-5 when drives got bigger than 1TB. I
currently use RAID-6 (or specifically RAID-Z2, which is the ZFS "equivalent").
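
Creating such a pool is a one-liner; the disk names here are hypothetical:

  zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf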

--
Joost


