The problem was a corrupt "btrfs send" file; to be more specific, the file got corrupted somewhere along the transport path.
To recap the problem for further reference:

Check the import (btrfs receive) with the -v option, like so:

$ nohup btrfs receive -v -f root.receive.file . &

The import is successful, and thus the received UUID is set, when you get something like this ...

$ tail -n1 nohup.out
BTRFS_IOC_SET_RECEIVED_SUBVOL uuid=99a34963-3506-7e4c-a82d-93e337191684, stransid=1232187

... after "btrfs receive" is done.
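Another way to confirm it after the fact is "btrfs subvolume show" on the received snapshot, which prints the Received UUID; the path below is only my guess at where the snapshot ends up on the receiving side:

$ btrfs subvolume show /mnt/backup/btrfs-backup/root.send.ro | grep -i 'received uuid'
        Received UUID:    99a34963-3506-7e4c-a82d-93e337191684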

Make sure to double-check the file size and checksum (md5sum) of the "btrfs send" file and the "btrfs receive" file.
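One simple way to do that (just a sketch; parts.md5 is an arbitrary name) is to checksum the split parts on the sending side and verify them on the receiving side before reassembling:

# local, in the directory with the x* parts
$ md5sum x* > parts.md5

# remote, after rsync'ing the parts together with parts.md5
$ md5sum -c parts.md5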

I don't know exactly where the corruption happened, but on the second attempt I successfully reassembled the file for import like so:

$ cat x* | pigz -d > root.receive.file
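(Presumably the intermediate file could also be skipped and the decompressed stream piped straight into the import, path taken from the rsync target below, though keeping the file around makes it easier to checksum as described above:)

$ cat x* | pigz -d | btrfs receive /mnt/backup/btrfs-backup/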


Thanks for the support


On 17.01.21 at 13:07, Chris Murphy wrote:
On Sun, Jan 17, 2021 at 11:51 AM Anders Halman <anders.hal...@gmail.com> wrote:
Hello,

I'm trying to back up my laptop over an unreliable, slow internet connection to
an even slower Raspberry Pi.

To bootstrap the backup I used the following:

# local
btrfs send root.send.ro | pigz | split --verbose -d -b 1G
rsync -aHAXxv --numeric-ids --partial --progress \
  -e "ssh -T -o Compression=no -x" \
  x* remote-host:/mnt/backup/btrfs-backup/

# remote
cat x* > split.gz
pigz -d split.gz
btrfs receive -f split .

This worked nicely. But I don't understand why the "received uuid" on the
remote side is blank.
I tried it locally with smaller volumes and it worked.

I suggest using -v or -vv on the receive side to dig into why the
receive is failing. Setting the received uuid is one of the last
things performed on receive, so if it's not set it suggests the
receive isn't finished.

