Re: Snapshot rollback

2011-10-23 Thread Fajar A. Nugraha
On Mon, Oct 24, 2011 at 12:45 PM, dima  wrote:
> Phillip Susi  cfl.rr.com> writes:
>
>> I created a snapshot of my root subvol, then used btrfs-subvolume
>> set-default to make the snapshot the default subvol and rebooted.  This
>> seems to have correctly gotten the system to boot from the snapshot
>> instead of the original subvol, but now /home ( @home subvol ) refuses
>> to mount claiming that /dev/sda1 is not a valid block device.  What gives?

Try mounting using subvolid. Use "btrfs su li /" (or wherever it's
mounted) to list the ids.
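
For example (the ids, device and mount point below are made up; use
whatever your own listing shows):

  # list the subvolumes and their ids
  btrfs subvolume list /
    ID 256 top level 5 path @
    ID 257 top level 5 path @home
  # mount @home by id instead of by name
  mount -o subvolid=257 /dev/sda1 /home

or the equivalent fstab line:

  /dev/sda1  /home  btrfs  subvolid=257  0  0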

> Personally I do not store anything in subvolid=0 directly and never bothered
> with the 'set-default' option - I just used a new subvolume/snapshot name

+1

That becomes a problem, though, if you decide to put /boot on btrfs as
well. Grub uses the default subvolume to determine paths (for the kernel,
initrd, etc.). A workaround is to create and manage your grub.cfg by hand
(or create and use a manually managed include file, like custom-top.cfg,
that gets parsed before the automatically created entries).
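
For illustration, such a hand-managed entry could look roughly like this
(UUID, subvolume name, device and kernel version are all placeholders,
untested):

  menuentry "Linux, snapshot root" {
          insmod btrfs
          # find the btrfs filesystem by UUID
          search --no-floppy --fs-uuid --set=root <filesystem-uuid>
          # paths are spelled out from the top-level volume, so they
          # do not depend on the default subvolume
          linux /@snap/boot/vmlinuz-3.0.0 root=/dev/sda1 rootflags=subvol=@snap
          initrd /@snap/boot/initrd.img-3.0.0
  }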

I really like grub2's zfs support, which correctly uses the dataset name
for file locations. Unfortunately grub's btrfs support doesn't have that
(yet).

> - create a named snapshot
> - edit the bootloader config to include the new
> rootflags=subvol=<name>

I had some problems with the subvol option on the older kernel/btrfs
versions in Lucid/Natty. I use subvolid now, which seems more reliable.

-- 
Fajar


Re: Snapshot rollback

2011-10-23 Thread dima
Phillip Susi  cfl.rr.com> writes:

> I created a snapshot of my root subvol, then used btrfs-subvolume
> set-default to make the snapshot the default subvol and rebooted.  This
> seems to have correctly gotten the system to boot from the snapshot
> instead of the original subvol, but now /home ( @home subvol ) refuses
> to mount claiming that /dev/sda1 is not a valid block device.  What gives?


Hello Phillip,
It is hard to judge without seeing your fstab and bootloader config. Maybe
your / was directly in subvolid=0, without a separate subvolume created for
it (like __active in Goffredo's reply)? In my very humble opinion, if your
@home subvolume sits under subvolid=0 and your fstab refers to it relative
to the default subvolume, then changing the default means the path no
longer resolves and @home cannot be mounted any more.
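
For example, with a layout like this (names are just an illustration):

  (subvolid=0, the top level)
  |-- @rootfs
  `-- @home

fstab would name each subvolume explicitly instead of relying on the
default subvolume:

  /dev/sda1  /      btrfs  subvol=@rootfs  0  0
  /dev/sda1  /home  btrfs  subvol=@home    0  0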

Personally I do not store anything in subvolid=0 directly and never bothered
with the 'set-default' option - I just used a new subvolume/snapshot name
(sketched below):
- create a named snapshot
- edit the bootloader config to include the new
rootflags=subvol=<name>
- reboot
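
A rough sketch of that procedure (device and names are examples, untested):

  # mount the top-level volume somewhere
  mount -o subvolid=0 /dev/sda1 /mnt/btrfs-top
  # take a named snapshot of the current root subvolume
  btrfs subvolume snapshot /mnt/btrfs-top/@rootfs /mnt/btrfs-top/@rootfs-20111023
  # point the kernel line in the bootloader config at the snapshot:
  #   root=/dev/sda1 rootflags=subvol=@rootfs-20111023
  # and reboot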

Here is a very good article that explains how subvolumes work. I have used
it as a reference a lot.
http://www.funtoo.org/wiki/BTRFS_Fun#Using_snapshots_for_system_recovery_.28aka_Back_to_the_Future.29

~dima



Re: system hangs after deleting bad file

2011-10-23 Thread dima
Hello,
I had a similar problem.

Some files (2-3) got corrupted in my /home subvolume for no apparent reason.
Trying to access the files gives a kernel oops. Sometimes it freezes the
machine, sometimes I am back at my console without any problems.

Then I switched to the latest 3.1rc and the freezes were gone (though I
still had the kernel oops).

(I did not try the repair program, fearing that it would do more harm than
good. After all, my / subvolume was fine and I could still mount /home.)

To tackle the problems with the corrupted files, I had to create a new
subvolume for /home, transfer the files from the old one (minus the
corrupted files) and delete the old subvolume.

btrfsck would still give me errors about some inodes, though. But I could
mount and use all my subvolumes with no problems.

Then... I re-created the filesystem with the latest btrfs-tools and
reinstalled from scratch on the latest 3.1rc.

So far it is working fine, and the disk I/O situation has greatly improved.
You may want to upgrade to the latest 3.1rc too; at the very least you
(hopefully) should not get hard freezes any more.

best
~dima



Re: Snapshot rollback

2011-10-23 Thread Goffredo Baroncelli
On Sunday, 23 October, 2011 15:42:50 you wrote:
> Is there no way to move or hard link subvolumes to somewhere other
> than their original location?

You can use the 'mv' command.

[__active is the subvolume which I use as root]

ghigo@venice:/tmp$ btrfs sub crea a
Create subvolume './a'
ghigo@venice:/tmp$ btrfs sub crea c
Create subvolume './c'
ghigo@venice:/tmp$ btrfs sub crea a/b
Create subvolume 'a/b'
ghigo@venice:/tmp$ sudo btrfs sub list /var/btrfs/
ID 258 top level 5 path __active
ID 259 top level 5 path __active/tmp/a
ID 260 top level 5 path __active/tmp/c
ID 261 top level 5 path __active/tmp/a/b
ghigo@venice:/tmp$ mv a/b c
ghigo@venice:/tmp$ sudo btrfs sub list /var/btrfs/
ID 258 top level 5 path __active
ID 259 top level 5 path __active/tmp/a
ID 260 top level 5 path __active/tmp/c
ID 261 top level 5 path __active/tmp/c/b

-- 
gpg key@ keyserver.linux.it: Goffredo Baroncelli (ghigo) 
Key fingerprint = 4769 7E51 5293 D36C 814E  C054 BF04 F161 3DC5 0512


Snapshot rollback

2011-10-23 Thread Phillip Susi
I created a snapshot of my root subvol, then used btrfs-subvolume
set-default to make the snapshot the default subvol and rebooted.  This
seems to have correctly gotten the system to boot from the snapshot
instead of the original subvol, but now /home ( @home subvol ) refuses
to mount claiming that /dev/sda1 is not a valid block device.  What gives?

Also, is there no way to move or hard link subvolumes to somewhere other
than their original location?


Re: Subvolume level allocation policy

2011-10-23 Thread Hugo Mills
On Sun, Oct 23, 2011 at 02:50:50PM -0400, Phillip Susi wrote:
> Is it ( yet? ) possible to manipulate the allocation policy on a
> subvolume level instead of the fs level?  For example, to make / use
> raid1, and /home use raid0?  Or to have / allocated from an ssd and
> /home allocated from the giant 2tb hd.

   Not yet, sorry. It's planned, but nobody's got around to
implementing it.

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
 --- I'm on a 30-day diet. So far I've lost 18 days. --- 




Subvolume level allocation policy

2011-10-23 Thread Phillip Susi
Is it ( yet? ) possible to manipulate the allocation policy on a
subvolume level instead of the fs level?  For example, to make / use
raid1, and /home use raid0?  Or to have / allocated from an ssd and
/home allocated from the giant 2tb hd.


Re: Kernel BUG unable to handle kernel NULL pointer dereference

2011-10-23 Thread Leonidas Spyropoulos
On Sun, Oct 23, 2011 at 4:37 PM, Mitch Harder
 wrote:
> On Sat, Oct 22, 2011 at 3:23 PM, Leonidas Spyropoulos
>  wrote:
>> Hello, I got a kernel BUG; my guess is it comes from btrfs.
>>
>> Here is the report,
>> Oct 22 20:44:43 localhost kernel: [25554.947970] BUG: unable to handle
>> kernel NULL pointer dereference at 0030
>> Oct 22 20:44:43 localhost kernel: [25554.948002] IP:
>> [] btrfs_print_leaf+0x37/0x880 [btrfs]
>
> A patch was submitted by Sergei Trofimovich to address the issue with
> handling a NULL pointer in btrfs_print_leaf.
>
> http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg12021.html
>
> Unfortunately, this usually only crops up when btrfs runs into
> corruption that it can't handle.  So you very likely still have
> underlying problems even once the btrfs_print_leaf issue is addressed.
>

So from what I understand, the btrfs_print_leaf function is called only
when something is already wrong and the kernel wants to print debug
information, correct?

How can I track down the real problem? Any suggestions?


-- 
Caution: breathing may be hazardous to your health.


Re: how stable are snapshots at the block level?

2011-10-23 Thread Mathijs Kwik
On Sun, Oct 23, 2011 at 6:05 PM, Chris Mason  wrote:
> On Sun, Oct 23, 2011 at 09:45:10AM +0200, Mathijs Kwik wrote:
>> Hi all,
>>
>> I'm currently doing backups by doing a btrfs snapshot, then rsync the
>> snapshot to my backup location.
>> As I have a lot of small files and quite some changes between
>> snapshots, this process is taking more and more time.
>> I looked at "btrfs find-new", which is promising, but I need
>> something to track deletes and modifications too.
>> Also, while this will help the initial comparison phase, most time is
>> still spent on the syncing itself, as a lot of overhead is caused by
>> the tiny files.
>>
>> After finding some discussion about it here:
>> http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-mailing-lists-3/backuppc-21/using-rsync-for-blockdevice-level-synchronisation-of-backupp-100438
>>
>> I found that the official rsync-patches tarball includes the patch
>> that allows syncing full block devices.
>> After the initial backup, I found that this indeed speeds up my backups a 
>> lot.
>> Of course this is meant for syncing unmounted filesystems (or other
>> things that are "stable" at the block level, like LVM snapshot
>> volumes).
>>
>> I tested backing up a live btrfs filesystem by making a btrfs
>> snapshot, and this (very simple, non-thorough) turned out to work ok.
>> My root subvolume contains the "current" subvolume (which I mount) and
>> several backup subvolumes.
>> Of course I understand that the "current" subvolume on the backup
>> destination is broken/inconsistent, as I change it during the rsync
>> run. But when I mounted the backup disk and compared the subvolumes
>> using normal file-by-file rsync, they were identical.
>>
>> Can someone with knowledge about the on-disk structure please
>> confirm/reject that subvolumes (created before starting rsync on the
>> block device) should be safe and never move by themselves? Or was I
>> just lucky?
>> Are there any things that might break the backup when performed during rsync?
>> Like creating/deleting other subvolumes, probably defrag isn't a good
>> idea either :)
>
> The short answer is that you were lucky ;)

That's why I only try this at home :)

>
> The big risk is the extent allocation tree is changing, and the tree of
> tree roots is changing and so the result of the rsync isn't going to be
> a fully consistent filesystem.

Nope, I understand it's not fully consistent, but I'm hoping for
consistency for all subvols that weren't in use during the sync/dd.

>
> With that said, as long as you can mount it the actual files in the
> snapshot are going to be valid.  The only exceptions are if you've run a
> filesystem balance or removed a drive during the rsync.

Do I understand correctly that as long as I don't defrag/balance or
add/remove drives (my example is just about one drive, though), there's a
chance the result isn't mountable, but if it _is_ mountable, all
subvolumes that weren't touched during the rsync/dd should be fine?
Or is there a chance that some files/dirs (in a snapshot volume) are
fine, but others are broken?

In other words: do I only need to check that the destination is mountable
afterwards, or is that by itself not enough?

You mentioned two important trees:
- tree of tree roots
- extent allocation tree

My root subvolume contains only subvolumes (no dirs/files), 1 of which
is mounted with -o subvol, the rest are snapshots.
Am I correct to assume the tree of tree roots doesn't change as long
as I don't create/remove subvols?

And for the extent allocation tree, can I assume that all changes to
extent allocation will be related to files/dirs changing on the
currently in-use subvolume? All extents that hold files in any of the
snapshots will still be there, as changes to those files in "current"
will be COWed to new extents. So the risk is not that extents are
marked free while they are in use, but that I might end up with extents
marked in-use while they are actually free.
As I expect "current" to become corrupt on the destination, I will
remove that subvolume there. Will that take care of the extent
allocation tree? Or will there still be extents marked in-use without
any subvolume/dir/file pointing at them? If so, is this something that
the future fsck can deal with?


>
> -chris

Thanks,
Mathijs


Re: how stable are snapshots at the block level?

2011-10-23 Thread Chris Mason
On Sun, Oct 23, 2011 at 09:45:10AM +0200, Mathijs Kwik wrote:
> Hi all,
> 
> I'm currently doing backups by doing a btrfs snapshot, then rsync the
> snapshot to my backup location.
> As I have a lot of small files and quite some changes between
> snapshots, this process is taking more and more time.
> I looked at "btrfs find-new", which is promising, but I need
> something to track deletes and modifications too.
> Also, while this will help the initial comparison phase, most time is
> still spent on the syncing itself, as a lot of overhead is caused by
> the tiny files.
> 
> After finding some discussion about it here:
> http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-mailing-lists-3/backuppc-21/using-rsync-for-blockdevice-level-synchronisation-of-backupp-100438
> 
> I found that the official rsync-patches tarball includes the patch
> that allows syncing full block devices.
> After the initial backup, I found that this indeed speeds up my backups a lot.
> Of course this is meant for syncing unmounted filesystems (or other
> things that are "stable" at the block level, like LVM snapshot
> volumes).
> 
> I tested backing up a live btrfs filesystem by making a btrfs
> snapshot, and this (very simple, non-thorough) turned out to work ok.
> My root subvolume contains the "current" subvolume (which I mount) and
> several backup subvolumes.
> Of course I understand that the "current" subvolume on the backup
> destination is broken/inconsistent, as I change it during the rsync
> run. But when I mounted the backup disk and compared the subvolumes
> using normal file-by-file rsync, they were identical.
> 
> Can someone with knowledge about the on-disk structure please
> confirm/reject that subvolumes (created before starting rsync on the
> block device) should be safe and never move by themselves? Or was I
> just lucky?
> Are there any things that might break the backup when performed during rsync?
> Like creating/deleting other subvolumes, probably defrag isn't a good
> idea either :)

The short answer is that you were lucky ;)

The big risk is the extent allocation tree is changing, and the tree of
tree roots is changing and so the result of the rsync isn't going to be
a fully consistent filesystem.

With that said, as long as you can mount it the actual files in the
snapshot are going to be valid.  The only exceptions are if you've run a
filesystem balance or removed a drive during the rsync.

-chris





Re: Kernel BUG unable to handle kernel NULL pointer dereference

2011-10-23 Thread Mitch Harder
On Sat, Oct 22, 2011 at 3:23 PM, Leonidas Spyropoulos
 wrote:
> Hello, I got a kernel BUG; my guess is it comes from btrfs.
>
> Here is the report,
> Oct 22 20:44:43 localhost kernel: [25554.947970] BUG: unable to handle
> kernel NULL pointer dereference at 0030
> Oct 22 20:44:43 localhost kernel: [25554.948002] IP:
> [] btrfs_print_leaf+0x37/0x880 [btrfs]

A patch was submitted by Sergei Trofimovich to address the issue with
handling a NULL pointer in btrfs_print_leaf.

http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg12021.html

Unfortunately, this usually only crops up when btrfs runs into
corruption that it can't handle.  So you very likely still have
underlying problems even once the btrfs_print_leaf issue is addressed.


Re: how stable are snapshots at the block level?

2011-10-23 Thread Mathijs Kwik
On Sun, Oct 23, 2011 at 4:08 PM, Edward Ned Harvey  wrote:
>> From: linux-btrfs-ow...@vger.kernel.org [mailto:linux-btrfs-
>> ow...@vger.kernel.org] On Behalf Of Mathijs Kwik
>>
>> I'm currently doing backups by doing a btrfs snapshot, then rsync the
>> snapshot to my backup location.
>> As I have a lot of small files and quite some changes between
>> snapshots, this process is taking more and more time.
>> I looked at "btrfs find-new", which is promising, but I need
>> something to track deletes and modifications too.
>> Also, while this will help the initial comparison phase, most time is
>> still spent on the syncing itself, as a lot of overhead is caused by
>> the tiny files.
>
> No word on when this will be available, but "btrfs send", or whatever it's
> going to be called, is currently in the works.  This is really what you want.
>
>
>> After finding some discussion about it here:
>> http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-
>> mailing-lists-3/backuppc-21/using-rsync-for-blockdevice-level-
>> synchronisation-of-backupp-100438
>
> When you rsync at the file level, it needs to walk the directory structure,
> which is essentially a bunch of random IO.  When you rsync at the block
> level, it needs to read the entire storage device sequentially.  The latter
> is only a benefit when the time to walk the tree is significantly greater
> than the time to read the entire block device.

My test was just a 10G block device filled with random files between 512b
and 8k. While this is a contrived example, in this case a block-level rsync
is way, way faster. It's not just the tree walking that's slow; I guess
there's some per-file overhead too. When using plain dd instead of rsync,
it's even faster (at the expense of more writes, even when unneeded), since
it can transfer data at almost the maximum write speed of the receiver.
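
The dd variant is a one-liner; a minimal sketch (device names are
examples, and as discussed in this thread the source must not be changing
underneath):

  # raw copy from the source device to the backup device
  dd if=/dev/sda1 of=/dev/sdb1 bs=1M conv=fsync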


>
> Even if you rsync the block-level device, the local rsync will have to read
> the entire block device to search for binary differences before sending.
> This will probably have the opposite effect from what you want: every time
> you created and deleted a file, and every time you overwrote an existing
> block (copy on write), it still left binary differences on disk. So even
> though a file was deleted, or several modifications yielded a single
> modification in the end, all the bytes of the deleted files and of the
> superseded file deltas will be sent anyway - unless you always zero them
> out, or something.

I understand. A block copy is not advantageous in every situation. I'm
just trying to find out if it's possible for the situations where it
is beneficial.

>
> Given that you're talking about rsync'ing a block level device that contains 
> btrfs, I'm assuming you have no raid/redundancy.  And the receiving end is 
> the same.

Yup, in my example I synced my laptop ssd to an external disk (usb3).

>
> Also if you're rsyncing the block level device, you're running underneath 
> btrfs and losing any checksumming benefit that btrfs was giving you, so 
> you're possibly introducing risk for silent data corruption.  (Or more 
> accurately, failing to allow btrfs to detect/correct it.)

I'm not sure... that's certainly the case for in-use subvolumes, but
shouldn't snapshots (and their metadata/checksums) be safe?

>
>
>> I found that the official rsync-patches tarball includes the patch
>> that allows syncing full block devices.
>> After the initial backup, I found that this indeed speeds up my backups a 
>> lot.
>> Of course this is meant for syncing unmounted filesystems (or other
>> things that are "stable" at the block level, like LVM snapshot
>> volumes).
>
> Just guessing you did a minimal test.  Send initial image, then make some 
> changes, then send again.  I don't expect this to be typical after a day or a 
> week of usage, for the reasons previously described.
>
>
>> I tested backing up a live btrfs filesystem by making a btrfs
>> snapshot, and this (very simple, non-thorough) turned out to work ok.
>> My root subvolume contains the "current" subvolume (which I mount) and
>> several backup subvolumes.
>> Of course I understand that the "current" subvolume on the backup
>> destination is broken/inconsistent, as I change it during the rsync
>> run. But when I mounted the backup disk and compared the subvolumes
>> using normal file-by-file rsync, they were identical.
>
> I may be wrong, but this sounds dangerous to me.  As you've demonstrated, it 
> will probably work a lot of the time - because the subvols and everything 
> necessary to reference them are static on disk most of the time.  But as soon 
> as you write to any of the subvols - and that includes a scan, fsck, 
> rebalance, defrag, etc.  Anything that writes transparently behind the scenes 
> as far as user processes are concerned...  Those could break things.

I understand there are harmful operations; that's why I'm asking.

RE: how stable are snapshots at the block level?

2011-10-23 Thread Edward Ned Harvey
> From: linux-btrfs-ow...@vger.kernel.org [mailto:linux-btrfs-
> ow...@vger.kernel.org] On Behalf Of Mathijs Kwik
> 
> I'm currently doing backups by doing a btrfs snapshot, then rsync the
> snapshot to my backup location.
> As I have a lot of small files and quite some changes between
> snapshots, this process is taking more and more time.
> I looked at "btrfs find-new", which is promising, but I need
> something to track deletes and modifications too.
> Also, while this will help the initial comparison phase, most time is
> still spent on the syncing itself, as a lot of overhead is caused by
> the tiny files.

No word on when this will be available, but "btrfs send", or whatever it's
going to be called, is currently in the works.  This is really what you want.


> After finding some discussion about it here:
> http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-
> mailing-lists-3/backuppc-21/using-rsync-for-blockdevice-level-
> synchronisation-of-backupp-100438

When you rsync at the file level, it needs to walk the directory structure,
which is essentially a bunch of random IO.  When you rsync at the block
level, it needs to read the entire storage device sequentially.  The latter
is only a benefit when the time to walk the tree is significantly greater
than the time to read the entire block device.

Even if you rsync the block-level device, the local rsync will have to read
the entire block device to search for binary differences before sending.
This will probably have the opposite effect from what you want: every time
you created and deleted a file, and every time you overwrote an existing
block (copy on write), it still left binary differences on disk. So even
though a file was deleted, or several modifications yielded a single
modification in the end, all the bytes of the deleted files and of the
superseded file deltas will be sent anyway - unless you always zero them
out, or something.

Given that you're talking about rsync'ing a block level device that contains 
btrfs, I'm assuming you have no raid/redundancy.  And the receiving end is the 
same.

Also if you're rsyncing the block level device, you're running underneath btrfs 
and losing any checksumming benefit that btrfs was giving you, so you're 
possibly introducing risk for silent data corruption.  (Or more accurately, 
failing to allow btrfs to detect/correct it.)


> I found that the official rsync-patches tarball includes the patch
> that allows syncing full block devices.
> After the initial backup, I found that this indeed speeds up my backups a lot.
> Of course this is meant for syncing unmounted filesystems (or other
> things that are "stable" at the block level, like LVM snapshot
> volumes).

Just guessing you did a minimal test.  Send initial image, then make some 
changes, then send again.  I don't expect this to be typical after a day or a 
week of usage, for the reasons previously described.


> I tested backing up a live btrfs filesystem by making a btrfs
> snapshot, and this (very simple, non-thorough) turned out to work ok.
> My root subvolume contains the "current" subvolume (which I mount) and
> several backup subvolumes.
> Of course I understand that the "current" subvolume on the backup
> destination is broken/inconsistent, as I change it during the rsync
> run. But when I mounted the backup disk and compared the subvolumes
> using normal file-by-file rsync, they were identical.

I may be wrong, but this sounds dangerous to me.  As you've demonstrated, it 
will probably work a lot of the time - because the subvols and everything 
necessary to reference them are static on disk most of the time.  But as soon 
as you write to any of the subvols - and that includes a scan, fsck, rebalance, 
defrag, etc.  Anything that writes transparently behind the scenes as far as 
user processes are concerned...  Those could break things.


> Thanks for any comments on this.

I suggest one of a few options:
(a) Stick with rsync at the file level.  It's stable.
(b) Wait for btrfs send (or whatever it will be called) to become available.
(c) Use ZFS.  Both ZFS and btrfs have advantages over one another, and this
is an area where zfs has the advantage for now.



how stable are snapshots at the block level?

2011-10-23 Thread Mathijs Kwik
Hi all,

I'm currently doing backups by doing a btrfs snapshot, then rsync the
snapshot to my backup location.
As I have a lot of small files and quite some changes between
snapshots, this process is taking more and more time.
I looked at "btrfs find-new", which is promising, but I need
something to track deletes and modifications too.
Also, while this will help the initial comparison phase, most time is
still spent on the syncing itself, as a lot of overhead is caused by
the tiny files.
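
For reference, the current scheme is roughly this (paths and names are
examples only):

  # take a snapshot of the live subvolume
  mount -o subvolid=0 /dev/sda1 /mnt/pool
  btrfs subvolume snapshot /mnt/pool/current /mnt/pool/backup-20111023
  # then sync the snapshot, file by file, to the backup location
  rsync -a /mnt/pool/backup-20111023/ /mnt/backupdisk/backup-20111023/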

After finding some discussion about it here:
http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-mailing-lists-3/backuppc-21/using-rsync-for-blockdevice-level-synchronisation-of-backupp-100438

I found that the official rsync-patches tarball includes the patch
that allows syncing full block devices.
After the initial backup, I found that this indeed speeds up my backups a lot.
Of course this is meant for syncing unmounted filesystems (or other
things that are "stable" at the block level, like LVM snapshot
volumes).
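
With the patch applied, the block-level sync is essentially (device names
are examples; I believe the option the patch adds is called
--copy-devices, and --inplace keeps rsync from trying to write through a
temporary copy):

  # sync the whole device, sending only changed blocks
  rsync --copy-devices --inplace /dev/sda1 /dev/sdb1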

I tested backing up a live btrfs filesystem by making a btrfs
snapshot, and this (very simple, non-thorough) turned out to work ok.
My root subvolume contains the "current" subvolume (which I mount) and
several backup subvolumes.
Of course I understand that the "current" subvolume on the backup
destination is broken/inconsistent, as I change it during the rsync
run. But when I mounted the backup disk and compared the subvolumes
using normal file-by-file rsync, they were identical.

Can someone with knowledge about the on-disk structure please
confirm/reject that subvolumes (created before starting rsync on the
block device) should be safe and never move by themselves? Or was I
just lucky?
Are there any things that might break the backup when performed during rsync?
Like creating/deleting other subvolumes, probably defrag isn't a good
idea either :)

Or are there any incompatible mount options (compression, space_cache, ssd)?

Thanks for any comments on this.
Mathijs