Re: state of btrfs snapshot limitations?

2018-09-15 Thread Qu Wenruo


On 2018/9/15 5:05 AM, James A. Robinson wrote:
> The mail archive seems to indicate this list is appropriate
> for not only the technical coding issues, but also for user
> questions, so I wanted to pose a question here.  If I'm
> wrong about that, I apologize in advance.
> 
> The page
> 
> https://btrfs.wiki.kernel.org/index.php/Incremental_Backup
> 
> talks about the basic snapshot capabilities of btrfs and led
> me to look up what, if any, limits might apply.  I find some
> threads from a few years ago that talk about limiting the
> number of snapshots for a volume to 100.

This is mostly related to send and quota, and maybe to
snapshot/subvolume removal.
(And personally, I would recommend no more than 20 snapshots.)

Both of them need to do a backref walk in their core functionality.
The increased number of references introduced by snapshots can have a
huge impact, especially on quota.

We have plans to enhance this, but for now, if send/quota is important
to you, it's highly recommended to keep the number of snapshots to a
reasonable number.
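A minimal pruning sketch along those lines (the paths and the limit are placeholders, not anything from this thread; it relies on snapshot names sorting chronologically):

```shell
#!/bin/sh
# Keep at most MAX read-only snapshots under SNAPDIR; delete the oldest.
MAX=20
SNAPDIR=/mnt/snapshots
btrfs subvolume snapshot -r /mnt/data "$SNAPDIR/$(date +%Y%m%d-%H%M%S)"
# Names sort chronologically, so everything except the newest MAX goes.
ls -1 "$SNAPDIR" | head -n -"$MAX" | while read -r old; do
    btrfs subvolume delete "$SNAPDIR/$old"
done
```

This needs root and a real btrfs filesystem, so treat it as a starting point rather than a drop-in script.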

> 
> The reason I'm curious is I wanted to try and use the
> snapshot capability as a way of keeping a 'history' of a
> backup volume I maintain.  The backup doesn't change a
> lot over time, but small changes are made to files within
> it daily.

Then it normally leads to a dilemma.

Currently the most common way to find out how much exclusive space a
snapshot uses is btrfs quota (qgroup).
But a large number of snapshots brings a heavy performance impact,
sometimes an unacceptable one.

If you don't need to account for how much space a snapshot really
takes, it won't be a problem though.
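As a sketch, the qgroup route looks like this (the mount point /mnt is a placeholder; the "excl" column in the output is the space referenced only by that subvolume, i.e. roughly what deleting the snapshot would free):

```shell
# Enable quota, wait for the initial rescan, then inspect per-subvolume
# accounting. /mnt is an assumed mount point.
btrfs quota enable /mnt
btrfs quota rescan -w /mnt
btrfs qgroup show /mnt
```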


Those points aside, I'd like to point out that a snapshot is not a backup
(which I believe everyone already knows).

Furthermore, for btrfs specifically, since file trees (snapshots and
subvolumes) still share the same chunk/extent/csum trees, if one of those
essential trees gets corrupted (especially the extent tree), you may not
be able to mount the fs (at least not read-write).

So it's still pretty important to take real backups.
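A real backup can, for instance, be taken with send/receive to a separate filesystem, as the Incremental_Backup wiki page linked earlier describes (a sketch; the paths and snapshot names are made up, and both snapshots must be read-only):

```shell
# Incremental backup sketch: ship only the delta between two read-only
# snapshots to a different btrfs filesystem, ideally on another disk.
btrfs subvolume snapshot -r /mnt/data /mnt/data/snap-today
btrfs send -p /mnt/data/snap-yesterday /mnt/data/snap-today \
    | btrfs receive /mnt/backup
```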

Thanks,
Qu

> 
> The Plan 9 OS has a nice archival filesystem that lets you
> easily maintain snapshots, and has various tools that make
> it simple to keep a /snapshot//mmdd snapshot going back
> for the life of the filesystem.
> 
> I wanted to try and replicate the basic functionality of
> that history using a non-plan-9 filesystem.  At first I
> tried rsnapshot but I find its technique of rotating and
> deleting backups is thrashing the disks to the point that it
> can't keep up with the rotations (the cp -al is fast, but
> the periodic rm -rf of older snapshots kills the disk).
> 
> With btrfs I was thinking perhaps I could more efficiently
> maintain the archive of changes over time using a snapshot.
> If this is an awful thought and I should just go away,
> please let me know.
> 
> If the limit is 100 or less I'd need to use a more complicated
> rotation scheme.  For example with a layout like the
> following:
> 
> min/
> hour/
> day/
> month/
> year/
> 
> The idea being each bucket, min, hour, day, month, would
> be capped and older snapshots would be removed and replaced
> with newer ones over time.
> 
> so with a 15-minute snapshot cycle I'd end up with
> 
> min/[00,15,30,45]
> hour/[00-23]
> day/[01-31]
> month/[01-12]
> year/[2018,2019,...]
> 
> (72+ snapshots with room for a few years' worth of yearlies).
> 
> But if things have changed with btrfs over the past few
> years and number of snapshots scales much higher, I would
> use the easier scheme:
> 
> /min/[00,15,30,45]
> /hourly/[00-23]
> /daily//
> 
> with 365 snapshots added per additional year.
> 
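The 15-minute bucket naming in the quoted scheme can be sketched in shell (the snapshot command in the comment is illustrative only; the paths are assumptions):

```shell
# Map a minute value to its 15-minute bucket: 00, 15, 30 or 45.
# ${1#0} strips a leading zero so e.g. "08" is not parsed as octal.
bucket() { printf '%02d' $(( ${1#0} / 15 * 15 )); }

# A cron job running every 15 minutes could then do something like:
# btrfs subvolume snapshot -r /mnt/data "/mnt/snap/min/$(bucket "$(date +%M)")"
```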





Re: Problem with BTRFS

2018-09-15 Thread Rafael Jesús Alcántara Pérez
Hi:

I've installed the package from your link and it has fixed the issue ;)

$ sudo btrfs rescue fix-device-size /dev/sdc1
Fixed device size for devid 3, old size: 1999843081728 new size:
1999843078144
Fixed device size for devid 5, old size: 1999843081728 new size:
1999843078144
Fixed device size for devid 4, old size: 1999843081728 new size:
1999843078144
Fixed super total bytes, old size: 601020864 new size: 6010266652672
Fixed unaligned/mismatched total_bytes for super block and device items

Thank you very much.

El 14/09/18 a las 22:24, Nicholas D Steeves escribió:
> Hi,
> 
> On Fri, Sep 14, 2018 at 10:13:06PM +0200, Rafael Jesús Alcántara Pérez wrote:
>> Hi,
>>
>> It seems that btrfs-progs_4.17-1 from Sid, includes that feature (at
>> least, it says so in the manual page). I don't know if I can install it
>> on Stretch but I'll try.
>>
>> Greets and thank you very much to both of you ;)
>> Rafael J. Alcántara Pérez.
> 
> Please do not install btrfs-progs from sid on stretch, it's likely to
> break your system.  If you can't wait, here is a link to what I
> uploaded.  It includes both the source and the binary packages (see
> gpg signed changes file, and please take care to verify the binaries
> weren't tampered with):
> 
> https://drive.google.com/file/d/1WflwBEn-QN_btrKPiefz7Kxz58VT8kIQ/view?usp=sharing
> 
> Of course, this package is unofficial.  The official one will soon
> become available through the regular channels.
> 
> For best results, set up a local file:// apt repository so that apt
> update && apt upgrade will work properly.  Official bpo will
> automatically overwrite these, in any case.
> 
> Cheers,
> Nicholas
> 
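Nicholas's file:// repository suggestion could look roughly like this (the paths are assumptions; dpkg-scanpackages comes from the dpkg-dev package):

```shell
# Build a trivial local apt repository from downloaded .deb files.
mkdir -p /srv/local-apt
cp ./*.deb /srv/local-apt/
( cd /srv/local-apt && dpkg-scanpackages . /dev/null > Packages )
echo 'deb [trusted=yes] file:/srv/local-apt ./' \
    > /etc/apt/sources.list.d/local.list
apt update
```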


-- 


Re: state of btrfs snapshot limitations?

2018-09-15 Thread Hans van Kranenburg
On 09/15/2018 05:56 AM, James A. Robinson wrote:
> [...]
> 
> I've got to read up a bit more on subvolumes, I am missing some
> context from the warnings given by Chris regarding per-subvolume
> options.

Btrfs lets you mount the filesystem multiple times, e.g. with a
different subvolume id, so you can mount just a part of the filesystem
somewhere else.

Some of the mount options (many btrfs-specific ones, like space_cache*,
autodefrag, etc.) get their value at the first mount, and subsequent
mounts cannot change them any more, because they control filesystem-wide
behavior.

Others can be changed on each individual mount (like the atime
options), and when you omit them you get the non-optimal default again.
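As an illustration (the device, subvolume, and mount-point names are made up):

```shell
# The first mount pins the filesystem-wide options (e.g. space_cache=v2);
# later mounts of the same fs can only vary per-mount options like atime.
mount -o subvolid=5,space_cache=v2 /dev/sdb1 /mnt/top
mount -o subvol=home,noatime       /dev/sdb1 /mnt/home
```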

-- 
Hans van Kranenburg