Re: [PATCH] btrfs-progs: change filename limit to 255 when creating subvolume

2018-09-16 Thread Su Yanjun



On 9/14/2018 10:34 PM, David Sterba wrote:

On Wed, Sep 12, 2018 at 03:39:03PM +0800, Su Yanjun wrote:

Modify the file name length limit to 255 to match the Linux naming
convention (NAME_MAX). In addition, the file name length is always
greater than 0, so there is no need to compare it with 0 again.

Issue: #145
Signed-off-by: Su Yanjun 

Looks good, please send a test, thanks. You can copy portions of
misc-tests/014-filesystem-label, which does a similar string length
check for the label.


The following is the test output for the patch.

[suyj@sarch tests]$ vim misc-tests-results.txt
=== START TEST /home/suyj/btrfs-progs/tests//misc-tests/033-filename-length-limit
$TEST_DEV not given, using /home/suyj/btrfs-progs/tests//test.img as fallback
== RUN CHECK /home/suyj/btrfs-progs/mkfs.btrfs -L BTRFS-TEST-LABEL -f /home/suyj/btrfs-progs/tests//test.img

btrfs-progs v4.17.1
See http://btrfs.wiki.kernel.org for more information.

Label:              BTRFS-TEST-LABEL
UUID:               925425a2-5557-4dea-93d3-3b4543707082
Node size:          16384
Sector size:        4096
Filesystem size:    2.00GiB
Block group profiles:
  Data:             single            8.00MiB
  Metadata:         DUP             102.38MiB
  System:           DUP               8.00MiB
SSD detected:       no
Incompat features:  extref, skinny-metadata
Number of devices:  1
Devices:
   ID        SIZE  PATH
    1     2.00GiB  /home/suyj/btrfs-progs/tests//test.img

== RUN CHECK root_helper mount -t btrfs -o loop /home/suyj/btrfs-progs/tests//test.img /home/suyj/btrfs-progs/tests//mnt
== RUN CHECK root_helper chmod a+rw /home/suyj/btrfs-progs/tests//mnt
== RUN CHECK root_helper /home/suyj/btrfs-progs/btrfs subvolume create aaa
Create subvolume './aaa'
== RUN CHECK root_helper /home/suyj/btrfs-progs/btrfs subvolume create 012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234
Create subvolume './012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234'
== RUN MUSTFAIL root_helper /home/suyj/btrfs-progs/btrfs subvolume create 0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
ERROR: cannot access 0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345: File name too long
failed (expected): root_helper /home/suyj/btrfs-progs/btrfs subvolume create 0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
== RUN MUSTFAIL root_helper /home/suyj/btrfs-progs/btrfs subvolume create 012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234
ERROR: cannot access 012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234: File name too long
failed (expected): root_helper /home/suyj/btrfs-progs/btrfs subvolume create
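
For reference, a rough sketch of what the test script
(misc-tests/033-filename-length-limit) could look like, modeled on the
structure of misc-tests/014-filesystem-label. The helper names
(check_prereq, run_check, run_mustfail, $SUDO_HELPER, the mkfs/mount
wrappers) are the usual btrfs-progs test helpers; treat the details as
illustrative rather than as the submitted test:

#!/bin/bash
# A 255 character subvolume name must be accepted, 256 must be rejected

source "$TEST_TOP/common"

check_prereq mkfs.btrfs
check_prereq btrfs
setup_root_helper
prepare_test_dev

run_check_mkfs_test_dev
run_check_mount_test_dev
cd "$TEST_MNT"

# NAME_MAX on Linux is 255 bytes, so this must succeed
name_ok=$(printf '0%.0s' $(seq 1 255))
run_check $SUDO_HELPER "$TOP/btrfs" subvolume create "$name_ok"

# one character longer must fail
name_long=$(printf '0%.0s' $(seq 1 256))
run_mustfail "subvolume with a 256 character name was created" \
	$SUDO_HELPER "$TOP/btrfs" subvolume create "$name_long"

cd ..
run_check_umount_test_dev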

Re: btrfs problems

2018-09-16 Thread Chris Murphy
On Sun, Sep 16, 2018 at 2:11 PM, Adrian Bastholm  wrote:
> Thanks for answering Qu.
>
>> At this point, your fs was already corrupted.
>> I'm not sure about the reason; it could be a failed CoW combined with
>> power loss, a corrupted free space cache, or some old kernel bug.
>>
>> Anyway, the metadata itself is already corrupted, and I believe it
>> happened even before you noticed.
>  I suspected it had to be like that
>>
>> > BTRFS check --repair is not recommended, it
>> > crashes, doesn't fix all problems, and I later found out that my
>> > lost+found dir had about 39G of lost files and dirs.
>>
>> lost+found is created entirely by btrfs check --repair.
>>
>> > I spent about two days trying to fix everything, removing a disk,
>> > adding it again, checking, you name it. I ended up removing one disk,
>> > reformatting it, and moving the data there.
>>
>> Well, I would recommend submitting such a problem to the mailing list
>> *BEFORE* doing any write operation to the fs (including btrfs check
>> --repair), as that would help us analyse the failure pattern and
>> further enhance btrfs.
>
> IMHO that's a, how should I put it, a design flaw, the wrong way of
> looking at how people think, with all respect to all the very smart
> people that put in countless hours of hard work. Users expect an fs
> check and repair to repair, not to break stuff.
> Reading that --repair is "destructive" is contradictory even to me.

It's contradictory to everyone, including the developers. No developer
set out to make --repair dangerous. It just turned out to be a harder
problem to solve than expected, and the thought was that it would keep
getting better.

Newer versions should be safe now, even if they can't fix everything.
The far bigger issue, which I think the developers are aware of, is
that depending on repair at all for any Btrfs of appreciable size is
simply not scalable. Taking a day or a week to run a repair on a large
file system is unworkable. That's why it's better to avoid
inconsistencies in the first place, which is what Btrfs is supposed to
do; if that's not happening, it's a bug somewhere in Btrfs, and
sometimes in the hardware.


> This problem emerged in a direcory where motion (the camera software)
> was saving pictures. Either killing the process or a powerloss could
> have left these jpg files (or fs metadata) in a bad state. Maybe
> that's something to go on. I was thinking that there's not much anyone
> can do without root access to my box anyway, and I'm not sure I was
> prepared to give that to anyone.

I can't recommend raid56 for people new to Btrfs. It really takes
qualified hardware to make sure there's no betrayal, as everything
gets a lot more complicated with raid56. The general state of faulty
device handling on Btrfs makes raid56 very much a hands-on
arrangement; you can't turn your back on it. And when jumping into
raid5, I advise raid1 for metadata, which reduces problems. The same
is true for raid6, except that raid1 metadata is less redundancy than
raid6, so it's not helpful if you end up losing 2 devices.

If you need production-grade parity raid you should use OpenZFS,
although I can't speak to how it behaves with respect to faulty
devices on Linux.




>> For any unexpected btrfs behavior, from strange ls output to an
>> aborted transaction, please consult the mailing list first.
>> (Of course, include the kernel version and btrfs-progs version, which
>> are missing from your console log.)
>
> Linux jenna 4.9.0-8-amd64 #1 SMP Debian 4.9.110-3+deb9u4 (2018-08-21)
> x86_64 GNU/Linux
> btrfs-progs is already the newest version (4.7.3-1).

Well, the newest versions are kernel 4.18.8 and btrfs-progs 4.17.1, so
in Btrfs terms yours are kinda old.

That is not inherently bad, but there are literally thousands of
additions and deletions since kernel 4.9, so there's almost no way
anyone on this list, except a developer familiar with backport status,
can tell you whether the problem you're seeing is a bug that's been
fixed in that particular version. There aren't many developers
familiar with that status who also have time to read user reports.
Since this is an upstream list, most developers will want to know
whether you're able to reproduce the problem with a mainline kernel,
because if you can, it's very probably a bug that needs to be fixed
upstream first before it can be backported. That's just the nature of
kernel development generally. And you'll find the same thing on the
ext4 and XFS lists...

The main reason why people use Debian and its older kernel bases is
that they're willing to accept certain bugginess in favor of
stability. Transient bugs are really bad in that world. Consistent
bugs they just find workarounds for (avoidance) until there's a known,
highly tested backport, because they want "The Behavior" to be
predictable, both good and bad. That is not a model well suited to a
file system in as active a development state as Btrfs. It's better now
than it was even a 

Fwd: btrfs problems

2018-09-16 Thread Adrian Bastholm
...

And also raid56 is still considered experimental, and has various
problems if the hardware lies (like if some writes happen out of order
or faster on some devices than others), and it's much harder to repair
because the repair tools aren't raid56 feature complete.

https://btrfs.wiki.kernel.org/index.php/Status

I think it's less scary than "dangerous" or "unstable", but anyway,
there are known problems unique to raid56 that will need future
features to make it as reliable as single, raid1, or raid10. And like
any parity raid, it sucks performance-wise for random writes,
especially when using hard drives.



On Sun, Sep 16, 2018 at 1:40 PM, Adrian Bastholm  wrote:
> Hi Chris
>> There's almost no useful information provided for someone to even try
>> to reproduce your results, isolate cause and figure out the bugs.
> I realize that. That's why I wasn't really asking for help, I was
> merely giving some feedback.
>
>> No kernel version. No btrfs-progs version. No description of the
>> hardware and how it's laid out, and what mkfs and mount options are
>> being used. No one really has the time to speculate.
>
> I understand, and I apologize. I could have added more detail.
>
>>
>> >BTRFS check --repair is not recommended
>>
>> Right. So why did you run it anyway?
>
> Because "repair" implies it does something to help you. That's how
> most people's brains work. My fs is broken. I'll try "REPAIR"
>
>
>> man btrfs check:
>>
>> Warning
>>Do not use --repair unless you are advised to do so by a
>> developer or an experienced user
>>
>>
>> It is always a legitimate complaint, despite this warning, if btrfs
>> check --repair makes things worse, because --repair shouldn't ever
>> make things worse.
>
> I don't think it made things worse. It's more like it didn't do
> anything. That's when I started trying to copy a new file over the file
> with the question mark attributes (lame, I know) to see what happens.
> The "corrupted" file suddenly had attributes, and so on.
> check --repair removed the extra files and left me at square one, so
> not worse.
>
>> But Btrfs repairs are complicated, and that's why
>> the warning is there. I suppose the devs could have made the flag
>> --riskyrepair but I doubt this would really slow users down that much.
>
> calling it --destructive or --deconstruct, or something even more
> scary would slow people down
>
>> A big part of the --repair fixes weren't known to make things worse at
>> the time, and edge cases where they made things worse kept popping up,
>> so only in hindsight does it make sense that --repair maybe could have
>> been called something different to catch the user's attention.
>
> Exactly. It's not too late to rename it. And maybe make it dump a
> filesystem report with everything a developer would need (within
> reason) to trace the error
>
>> But anyway, I see this same sort of thing on the linux-raid list all
>> the time. People run into trouble, and they press full forward making
>> all kinds of changes, each change increases the chance of data loss.
>> And then they come on the list with WTF messages. And it's always a
>> lesson in patience for the list regulars and developers... if only
>> you'd come to us with questions sooner.
>
> True. I found the list a bit late. I tried the IRC channel but I
> couldn't post messages.
>
>> > Please have a look at the console logs.
>>
>> These aren't logs. It's a record of shell commands. Logs would include
>> kernel messages, ideally all of them. Why is device 3 missing?
>
> It was a RAID5 array of three drives. When doing btrfs check on two of
> the drives I got "drive x is missing". I figured that maybe it had
> something to do with which one was the "first" drive. Likewise,
> btrfs check crashed when I ran it against the drives where I got the
> "drive x missing" message.
>
>
>> We have no idea. Most of Btrfs code is in the kernel; problems are
>> reported by the kernel. So we need kernel messages, user space
>> messages aren't enough.
>
>> Anyway, good luck with openzfs, cool project.
> Cool project, not so cool pitfalls. I might head back to BTRFS after
> all .. see the response to Qu.
>
> Thanks for answering, and sorry for the shortcomings of my feedback
> /A
>
>>
>> --
>> Chris Murphy
>
>
>
> --
> Vänliga hälsningar / Kind regards,
> Adrian Bastholm
>
> ``I would change the world, but they won't give me the sourcecode``



--
Chris Murphy


-- 
Vänliga hälsningar / Kind regards,
Adrian Bastholm

``I would change the world, but they won't give me the sourcecode``


Re: Move data and mount point to subvolume

2018-09-16 Thread Chris Murphy
On Sun, Sep 16, 2018 at 12:40 PM, Rory Campbell-Lange
 wrote:

> Thanks very much for spotting my error, Chris.
>
> # mount | grep bkp
> /dev/mapper/cdisk2 on /bkp type btrfs
> (rw,noatime,compress=lzo,space_cache,subvolid=5,subvol=/)
>
> # btrfs subvol list /bkp
> ID 258 gen 313636 top level 5 path backup
>
> I'm a bit confused about the difference between / and backup, which is
> at /bkp/backup.


The top level, subvolid=5, subvolid=0, subvol=/, and FS_TREE are all
the same thing. This is the subvolume created at mkfs time; it has no
name and it can't be deleted. Right after mkfs, if you do

# btrfs sub get-default 
ID 5 (FS_TREE)

So long as you haven't changed the default subvolume, the top level
subvolume is what gets mounted, unless you use the "-o subvol=" or
"-o subvolid=" mount option.
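
(Changing the default would look like "btrfs subvolume set-default 258
/bkp"; illustrative, with the ID taken from 'btrfs sub list'.)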

If you do
# btrfs sub list -ta /bkp

It might become a bit clearer what the layout is on disk. And for
even more verbose output you can do:

# btrfs insp dump-t -t fs_tree /dev/
(for this you need to specify the device, not the mountpoint; you
don't need to umount, it's a read-only command)


Anything that is in the "top level" or the "file system root" you will
see listed. The first number is the inode; you'll see 256 is a special
inode for subvolumes. You can do 'ls -li' and compare. Any subvolume
you create is not the FS_TREE; it is a "file tree". And note that each
subvolume has its own pile of inode numbers, meaning files/directories
only have unique inode numbers *within a given subvolume*. Those inode
numbers start over in a new subvolume.

Subvolumes share the extent, chunk, csum, uuid and other trees, so a
subvolume is not a completely isolated "file system".
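
For example, comparing inode numbers of the top level and the
subvolume root (illustrative output; both roots show the special inode
256, while files and dirs inside each get their own numbers):

# ls -di /bkp /bkp/backup
256 /bkp  256 /bkp/backup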


>
> Anyhow I've verified I can snapshot /bkp/backup to another subvolume.
> This means I don't need to move anything, simply remount /bkp at
> /bkp/backup.

Uhh, that's the reverse of what you said in the first message. I'm not
sure what you want to do. It sounds like you want to mount the
subvolume "backup" at /bkp/ so that all the other files/dirs on this
Btrfs volume are not visible through the /bkp/ mount path?

Anyway if you want to explicitly mount the subvolume "backup"
somewhere, you use -o subvol=backup to specify "the subvolume named
backup, not the top level subvolume".
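
For example, with the device from your mount output (illustrative):

# mount -o subvol=backup /dev/mapper/cdisk2 /bkp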





>
> Presumably I can therefore remount /bkp at subvolume /backup?
>
> # btrfs subvolume show /bkp/backup | egrep -i 'name|uuid|subvol'
> Name:   backup
> UUID:   d17cf2ca-a6db-ca43-8054-1fd76533e84b
> Parent UUID:-
> Received UUID:  -
> Subvolume ID:   258
>
> My fstab is presently
>
> UUID=da90602a-b98e-4f0b-959a-ce431ac0cdfa /bkp  btrfs  
> noauto,noatime,compress=lzo 0  2
>
> I guess it would now be
>
> UUID=d17cf2ca-a6db-ca43-8054-1fd76533e84b /bkp  btrfs  
> noauto,noatime,compress=lzo 0  2

No, you can't mount by subvolume UUID. You continue to specify the
volume UUID, but then add a mount option:


noauto,noatime,compress=lzo,subvol=backup

or

noauto,noatime,compress=lzo,subvolid=258


The advantage of subvolid is that it doesn't change when you rename
the subvolume.
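
So, keeping your existing volume UUID, the fstab line would become
something like:

UUID=da90602a-b98e-4f0b-959a-ce431ac0cdfa /bkp  btrfs  noauto,noatime,compress=lzo,subvol=backup  0  2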


>
>> If you snapshot a subvolume, which itself contains subvolumes, the
>> nested subvolumes are not snapshotted. In the snapshot, the nested
>> subvolumes are empty directories.
>>
>> >
>> > # btrfs fi du -s /bkp/backup-subvol/backup
>> >  Total   Exclusive  Set shared  Filename
>> > ERROR: cannot check space of '/bkp/backup-subvol/backup': Inappropriate
>> > ioctl for device
>>
>> That's a bug in older btrfs-progs. It's been fixed, but I'm not sure
>> what version, maybe by 4.14?
>
> Sounds about right -- my version is 4.7.3.

It's not dangerous to use it (though --repair may be more dangerous;
don't use that without advice first, no matter the version). You just
don't get new features and bug fixes. It's also not dangerous to use
something much newer; again, if the user space tools are very new and
the kernel is old, you just don't get certain features.




-- 
Chris Murphy


Re: Move data and mount point to subvolume

2018-09-16 Thread Rory Campbell-Lange
On 16/09/18, Chris Murphy (li...@colorremedies.com) wrote:
> > So I did this:
> >
> > btrfs subvol snapshot /bkp /bkp/backup-subvol
> >
> > strangely while /bkp/backup has lots of files in it,
> > /bkp/backup-subvol/backup has none.
> >
> > # btrfs subvol list /bkp
> > ID 258 gen 313585 top level 5 path backup
> > ID 4782 gen 313590 top level 5 path backup-subvol
> 
> OK so previously you said "/bkp which is a top level subvolume. There
> are no other subvolumes."
> 
> But in fact backup is already a subvolume. So now it's confusing what
> you were asking for in the first place, maybe you didn't realize
> backup is not a dir but it is a subvolume.

Thanks very much for spotting my error, Chris.

# mount | grep bkp
/dev/mapper/cdisk2 on /bkp type btrfs
(rw,noatime,compress=lzo,space_cache,subvolid=5,subvol=/)

# btrfs subvol list /bkp
ID 258 gen 313636 top level 5 path backup

I'm a bit confused about the difference between / and backup, which is
at /bkp/backup.

Anyhow I've verified I can snapshot /bkp/backup to another subvolume.
This means I don't need to move anything, simply remount /bkp at
/bkp/backup.

Presumably I can therefore remount /bkp at subvolume /backup? 

# btrfs subvolume show /bkp/backup | egrep -i 'name|uuid|subvol'
Name:   backup
UUID:   d17cf2ca-a6db-ca43-8054-1fd76533e84b
Parent UUID:-
Received UUID:  -
Subvolume ID:   258

My fstab is presently

UUID=da90602a-b98e-4f0b-959a-ce431ac0cdfa /bkp  btrfs  
noauto,noatime,compress=lzo 0  2

I guess it would now be

UUID=d17cf2ca-a6db-ca43-8054-1fd76533e84b /bkp  btrfs  
noauto,noatime,compress=lzo 0  2

> If you snapshot a subvolume, which itself contains subvolumes, the
> nested subvolumes are not snapshotted. In the snapshot, the nested
> subvolumes are empty directories.
> 
> >
> > # btrfs fi du -s /bkp/backup-subvol/backup
> >  Total   Exclusive  Set shared  Filename
> > ERROR: cannot check space of '/bkp/backup-subvol/backup': Inappropriate
> > ioctl for device
> 
> That's a bug in older btrfs-progs. It's been fixed, but I'm not sure
> what version, maybe by 4.14?

Sounds about right -- my version is 4.7.3.

> > Any ideas about what could be going on?
> >
> > In the mean time I'm trying:
> >
> > btrfs subvol create /bkp/backup-subvol
> > cp -prv --reflink=always /bkp/backup/* /bkp/backup-subvol/
> 
> Yeah that will take a lot of writes that are not necessary, now that
> you see backup is a subvolume already. If you want a copy of it, just
> snapshot it.

Makes sense.

Thanks very much
Rory


Re: btrfs problems

2018-09-16 Thread Chris Murphy
On Sun, Sep 16, 2018 at 7:58 AM, Adrian Bastholm  wrote:
> Hello all
> Actually I'm not trying to get any help any more, I gave up BTRFS on
> the desktop, but I'd like to share my efforts of trying to fix my
> problems, in hope I can help some poor noob like me.

There's almost no useful information provided for someone to even try
to reproduce your results, isolate cause and figure out the bugs.

No kernel version. No btrfs-progs version. No description of the
hardware and how it's laid out, and what mkfs and mount options are
being used. No one really has the time to speculate.


>BTRFS check --repair is not recommended

Right. So why did you run it anyway?

man btrfs check:

Warning
   Do not use --repair unless you are advised to do so by a
developer or an experienced user


It is always a legitimate complaint, despite this warning, if btrfs
check --repair makes things worse, because --repair shouldn't ever
make things worse. But Btrfs repairs are complicated, and that's why
the warning is there. I suppose the devs could have made the flag
--riskyrepair but I doubt this would really slow users down that much.
A big part of the --repair fixes weren't known to make things worse at
the time, and edge cases where they made things worse kept popping up,
so only in hindsight does it make sense that --repair maybe could have
been called something different to catch the user's attention.

But anyway, I see this same sort of thing on the linux-raid list all
the time. People run into trouble, and they press full forward making
all kinds of changes, each change increases the chance of data loss.
And then they come on the list with WTF messages. And it's always a
lesson in patience for the list regulars and developers... if only
you'd come to us with questions sooner.

> Please have a look at the console logs.

These aren't logs. It's a record of shell commands. Logs would include
kernel messages, ideally all of them. Why is device 3 missing? We have
no idea. Most of Btrfs code is in the kernel, problems are reported by
the kernel. So we need kernel messages, user space messages aren't
enough.

Anyway, good luck with openzfs, cool project.


-- 
Chris Murphy


Re: Move data and mount point to subvolume

2018-09-16 Thread Chris Murphy
> So I did this:
>
> btrfs subvol snapshot /bkp /bkp/backup-subvol
>
> strangely while /bkp/backup has lots of files in it,
> /bkp/backup-subvol/backup has none.
>
> # btrfs subvol list /bkp
> ID 258 gen 313585 top level 5 path backup
> ID 4782 gen 313590 top level 5 path backup-subvol

OK so previously you said "/bkp which is a top level subvolume. There
are no other subvolumes."

But in fact backup is already a subvolume. So now it's confusing what
you were asking for in the first place, maybe you didn't realize
backup is not a dir but it is a subvolume.

If you snapshot a subvolume, which itself contains subvolumes, the
nested subvolumes are not snapshotted. In the snapshot, the nested
subvolumes are empty directories.
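
A quick way to see this (illustrative, on any scratch btrfs mounted at
/mnt):

btrfs sub create /mnt/outer
btrfs sub create /mnt/outer/inner
touch /mnt/outer/inner/afile
btrfs sub snap /mnt/outer /mnt/outer-snap
ls /mnt/outer-snap/inner    # empty: the nested subvolume was not snapshotted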


>
> # btrfs fi du -s /bkp/backup-subvol/backup
>  Total   Exclusive  Set shared  Filename
> ERROR: cannot check space of '/bkp/backup-subvol/backup': Inappropriate
> ioctl for device

That's a bug in older btrfs-progs. It's been fixed, but I'm not sure
what version, maybe by 4.14?


>
> Any ideas about what could be going on?
>
> In the mean time I'm trying:
>
> btrfs subvol create /bkp/backup-subvol
> cp -prv --reflink=always /bkp/backup/* /bkp/backup-subvol/

Yeah that will take a lot of writes that are not necessary, now that
you see backup is a subvolume already. If you want a copy of it, just
snapshot it.

-- 
Chris Murphy


Re: Move data and mount point to subvolume

2018-09-16 Thread Chris Murphy
On Sun, Sep 16, 2018 at 5:14 AM, Rory Campbell-Lange
 wrote:
> Hi
>
> We have a backup machine that has been happily running its backup
> partitions on btrfs (on top of luks encrypted disks) for a few years.
>
> Our backup partition is on /bkp which is a top level subvolume.
> Data, RAID1: total=2.52TiB, used=1.36TiB
> There are no other subvolumes.

and

> /dev/mapper/cdisk2 on /bkp type btrfs 
> (rw,noatime,compress=lzo,space_cache,subvolid=5,subvol=/)

I like Hans' 2nd email advice to snapshot the top level subvolume.

I would start out with:
btrfs sub snap -r /bkp /bkp/toplevel.ro

And that way I shouldn't be able to F this up irreversibly if I make a
mistake. :-D And then do another snapshot that's rw:

btrfs sub snap /bkp /bkp/bkpsnap
cd /bkp/bkpsnap

Now remove everything except "backupdir". Then move everything out of
backupdir, including any hidden files. Then rmdir backupdir. Then you
can rename the snapshot/subvolume:

cd ..
mv bkpsnap backup

That's fewer metadata writes than creating a new subvolume and
reflink-copying the backup dir, e.g. cp -a --reflink /bkp/backupdir
/bkp/backupsubvol

That could take a long time because all the metadata is fully read,
modified (new inodes), and written out.

But either way it should work.
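
Put together as a shell sketch (assuming the existing directory really
is named "backupdir"; shopt -s dotglob is just one way to make * match
hidden files too):

btrfs sub snap -r /bkp /bkp/toplevel.ro   # read-only safety snapshot
btrfs sub snap /bkp /bkp/bkpsnap          # rw working snapshot
cd /bkp/bkpsnap
# remove everything here except backupdir, then:
shopt -s dotglob
mv backupdir/* .
rmdir backupdir
cd ..
mv bkpsnap backup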

-- 
Chris Murphy


Re: Move data and mount point to subvolume

2018-09-16 Thread Rory Campbell-Lange
On 16/09/18, Hans van Kranenburg (hans.van.kranenb...@mendix.com) wrote:
> On 09/16/2018 02:37 PM, Hans van Kranenburg wrote:
> > On 09/16/2018 01:14 PM, Rory Campbell-Lange wrote:
...
> >> We have /bkp with /bkp/backup in it. We would like to mount /bkp/backup
> >> at /bkp instead. Note that /bkp/backup has a lot of hardlinked files.
...
> So, what you can do is (similar to what I earlier typed):
> 
> * You now have subvol 5 (at /bkp)
> * btrfs sub snap /bkp /bkp/backup-subvol
> * Now you have a new subvol (256 or higher number) which already
> contains everything
> * Then inside /bkp/backup-subvol you will again see
> /bkp/backup-subvol/backup (since you snapshotted the toplevel which also
> contains it)
> * Now mv /bkp/backup/backup-subvol/* /bkp/backup-subvol (so the mv
> operations stays within the same subvolume)
> * Then remove everything outside /bkp/backup-subvol and mv
> /bkp/backup-subvol /bkp/backup, and then voila... you can now use
> subvol=/backup to mount it.

Thanks for the advice, Hans. Your suggestion makes complete sense.

So I did this:

btrfs subvol snapshot /bkp /bkp/backup-subvol

strangely while /bkp/backup has lots of files in it,
/bkp/backup-subvol/backup has none.

# btrfs subvol list /bkp
ID 258 gen 313585 top level 5 path backup
ID 4782 gen 313590 top level 5 path backup-subvol

# btrfs fi du -s /bkp/backup-subvol/backup
 Total   Exclusive  Set shared  Filename
ERROR: cannot check space of '/bkp/backup-subvol/backup': Inappropriate
ioctl for device

Any ideas about what could be going on?

In the mean time I'm trying:

btrfs subvol create /bkp/backup-subvol
cp -prv --reflink=always /bkp/backup/* /bkp/backup-subvol/

One worry is that the Metadata usage seems to be growing quickly as I
run this. At the start:

# btrfs fi df /bkp
Data, RAID1: total=2.52TiB, used=1.37TiB
System, RAID1: total=32.00MiB, used=480.00KiB
Metadata, RAID1: total=9.00GiB, used=4.80GiB <--- **
GlobalReserve, single: total=512.00MiB, used=0.00B

15 minutes into the cp run:

# btrfs fi df /bkp
Data, RAID1: total=2.52TiB, used=1.37TiB
System, RAID1: total=32.00MiB, used=480.00KiB
Metadata, RAID1: total=9.00GiB, used=5.56GiB  <--- **
GlobalReserve, single: total=512.00MiB, used=80.00KiB

Thanks very much for any further advice
Rory


Re: Move data and mount point to subvolume

2018-09-16 Thread Rory Campbell-Lange
Thanks very much for the reply, Hans. I'm just responding to your note
about mount options here.

On 16/09/18, Hans van Kranenburg (hans.van.kranenb...@mendix.com) wrote:
> > The machine is btrfs-progs v4.7.3 Linux 4.9.0-8-amd64 on Debian. The
> > coreutils version is 8.26-3.
> 
> Another note: since it seems you have about 100% more data space
> allocated (set apart for data purposes) than you're actually using, or
> in other words, the 1GiB chunks are on average only 50% filled...
> 
>Data, RAID1: total=2.52TiB, used=1.36TiB
> 
> ...in combination with using Linux 4.9, I suspect there's also 'ssd' in
> your mount options (not in fstab, but enabled by btrfs while mounting,
> see /proc/mounts or mount command output)?
> 
> If so, this is a nice starting point for more info about what might also
> be happening to your filesystem:
> https://www.spinics.net/lists/linux-btrfs/msg70622.html

My mount options are (according to 'mount'):

/dev/mapper/cdisk2 on /bkp type btrfs 
(rw,noatime,compress=lzo,space_cache,subvolid=5,subvol=/)

Perhaps I need to run a balance, as suggested in your message recorded
on spinics? mount doesn't report the ssd option though, as you
suggested it might. Perhaps the compression flag is the culprit here.

Many thanks
Rory


Re: btrfs problems

2018-09-16 Thread Qu Wenruo


On 2018/9/16 9:58 PM, Adrian Bastholm wrote:
> Hello all
> Actually I'm not trying to get any help any more, I gave up BTRFS on
> the desktop, but I'd like to share my efforts of trying to fix my
> problems, in hope I can help some poor noob like me.
> 
> I decided to use BTRFS after reading the ArsTechnica article about the
> next-gen filesystems, and BTRFS seemed like the natural choice, open
> source, built into linux, etc. I even bought an HP microserver to have
> everything on because none of the commercial NAS-es supported BTRFS.
> What a mistake, I wasted weeks in total managing something that could
> have taken a day to set up, and I'd have MUCH more functionality now
> (if I wasn't hit by some ransomware, that is).
> 
> I had three 1TB drives, chose to use raid, and all was good for a
> while, until I started fiddling with Motion, the image capturing
> software. When you kill that process (my take on it) a file can be
> written but it ends up with question marks instead of attributes, and
> it's impossible to remove.

At this point, your fs was already corrupted.
I'm not sure about the reason; it could be a failed CoW combined with
power loss, a corrupted free space cache, or some old kernel bug.

Anyway, the metadata itself is already corrupted, and I believe it
happened even before you noticed.

> BTRFS check --repair is not recommended, it
> crashes, doesn't fix all problems, and I later found out that my
> lost+found dir had about 39G of lost files and dirs.

lost+found is created entirely by btrfs check --repair.

> I spent about two days trying to fix everything, removing a disk,
> adding it again, checking, you name it. I ended up removing one disk,
> reformatting it, and moving the data there.

Well, I would recommend submitting such a problem to the mailing list
*BEFORE* doing any write operation to the fs (including btrfs check
--repair), as that would help us analyse the failure pattern and
further enhance btrfs.

> Now I removed BTRFS
> entirely and replaced it with an OpenZFS mirror array, to which I'll
> add the third disk later once I've transferred everything over.

Understandable; it's really annoying when a fs gets itself corrupted,
and without much btrfs-specific knowledge it would be hell to try to
fix it by any method (a lot of them would just make the case worse).

> 
> Please have a look at the console logs. I've been running linux on the
> desktop for the past 15 years, so I'm not a noob, but for running
> BTRFS you'd better be involved in its development.

I'd say, yes.
For any unexpected btrfs behavior, don't use btrfs check --repair
unless you're a developer or a developer asked you to.

For any unexpected btrfs behavior, from strange ls output to an
aborted transaction, please consult the mailing list first.
(Of course, include the kernel version and btrfs-progs version, which
are missing from your console log.)

In fact, in recent kernel releases (IIRC starting from v4.15), btrfs
already does much better error detection, so it would detect such a
problem early on and protect the fs from being further modified.

(This further shows the importance of using the latest mainline kernel
rather than some old kernel provided by a stable distribution.)

Thanks,
Qu

> In my humble
> opinion, it's not for us "users" just yet. Not even for power users.
> 
> For those of you considering building a NAS without special purposes,
> don't. Buy a Synology, pop in a couple of drives, and enjoy the ride.
> 
> 
> 
>  root  /home/storage/motion/2017-05-24  1  ls -al
> ls: cannot access '36-20170524201346-02.jpg': No such file or directory
> ls: cannot access '36-20170524201346-02.jpg': No such file or directory
> total 4
> drwxrwxrwx 1 motion   motion   114 Sep 14 12:48 .
> drwxrwxr-x 1 motion   adyhasch  60 Sep 14 09:42 ..
> -? ? ??  ?? 36-20170524201346-02.jpg
> -? ? ??  ?? 36-20170524201346-02.jpg
> -rwxr-xr-x 1 adyhasch adyhasch  62 Sep 14 12:43 remove.py
> root  /home/storage/motion/2017-05-24  1  touch test.raw
>  root  /home/storage/motion/2017-05-24  cat /dev/random > test.raw
> ^C
> root  /home/storage/motion/2017-05-24  ls -al
> ls: cannot access '36-20170524201346-02.jpg': No such file or directory
> ls: cannot access '36-20170524201346-02.jpg': No such file or directory
> total 8
> drwxrwxrwx 1 motion   motion   130 Sep 14 13:12 .
> drwxrwxr-x 1 motion   adyhasch  60 Sep 14 09:42 ..
> -? ? ??  ?? 36-20170524201346-02.jpg
> -? ? ??  ?? 36-20170524201346-02.jpg
> -rwxr-xr-x 1 adyhasch adyhasch  62 Sep 14 12:43 remove.py
> -rwxrwxrwx 1 root root 338 Sep 14 13:12 test.raw
>  root  /home/storage/motion/2017-05-24  1  cp test.raw
> 36-20170524201346-02.jpg
> 'test.raw' -> '36-20170524201346-02.jpg'
> 
>  root  /home/storage/motion/2017-05-24  ls -al
> total 20
> drwxrwxrwx 1 motion   motion   178 Sep 14 

btrfs problems

2018-09-16 Thread Adrian Bastholm
Hello all
Actually I'm not trying to get any help any more, I gave up BTRFS on
the desktop, but I'd like to share my efforts of trying to fix my
problems, in hope I can help some poor noob like me.

I decided to use BTRFS after reading the ArsTechnica article about the
next-gen filesystems, and BTRFS seemed like the natural choice, open
source, built into linux, etc. I even bought an HP microserver to have
everything on because none of the commercial NAS-es supported BTRFS.
What a mistake, I wasted weeks in total managing something that could
have taken a day to set up, and I'd have MUCH more functionality now
(if I wasn't hit by some ransomware, that is).

I had three 1TB drives, chose to use raid, and all was good for a
while, until I started fiddling with Motion, the image capturing
software. When you kill that process (my take on it) a file can be
written but it ends up with question marks instead of attributes, and
it's impossible to remove. BTRFS check --repair is not recommended; it
crashes, doesn't fix all problems, and I later found out that my
lost+found dir had about 39G of lost files and dirs.
I spent about two days trying to fix everything, removing a disk,
adding it again, checking, you name it. I ended up removing one disk,
reformatting it, and moving the data there. Now I removed BTRFS
entirely and replaced it with an OpenZFS mirror array, to which I'll
add the third disk later once I've transferred everything over.

Please have a look at the console logs. I've been running linux on the
desktop for the past 15 years, so I'm not a noob, but for running
BTRFS you'd better be involved in its development. In my humble
opinion, it's not for us "users" just yet. Not even for power users.

For those of you considering building a NAS without special purposes,
don't. Buy a Synology, pop in a couple of drives, and enjoy the ride.



 root  /home/storage/motion/2017-05-24  1  ls -al
ls: cannot access '36-20170524201346-02.jpg': No such file or directory
ls: cannot access '36-20170524201346-02.jpg': No such file or directory
total 4
drwxrwxrwx 1 motion   motion   114 Sep 14 12:48 .
drwxrwxr-x 1 motion   adyhasch  60 Sep 14 09:42 ..
-? ? ??  ?? 36-20170524201346-02.jpg
-? ? ??  ?? 36-20170524201346-02.jpg
-rwxr-xr-x 1 adyhasch adyhasch  62 Sep 14 12:43 remove.py
root  /home/storage/motion/2017-05-24  1  touch test.raw
 root  /home/storage/motion/2017-05-24  cat /dev/random > test.raw
^C
root  /home/storage/motion/2017-05-24  ls -al
ls: cannot access '36-20170524201346-02.jpg': No such file or directory
ls: cannot access '36-20170524201346-02.jpg': No such file or directory
total 8
drwxrwxrwx 1 motion   motion   130 Sep 14 13:12 .
drwxrwxr-x 1 motion   adyhasch  60 Sep 14 09:42 ..
-? ? ??  ?? 36-20170524201346-02.jpg
-? ? ??  ?? 36-20170524201346-02.jpg
-rwxr-xr-x 1 adyhasch adyhasch  62 Sep 14 12:43 remove.py
-rwxrwxrwx 1 root root 338 Sep 14 13:12 test.raw
 root  /home/storage/motion/2017-05-24  1  cp test.raw
36-20170524201346-02.jpg
'test.raw' -> '36-20170524201346-02.jpg'

 root  /home/storage/motion/2017-05-24  ls -al
total 20
drwxrwxrwx 1 motion   motion   178 Sep 14 13:13 .
drwxrwxr-x 1 motion   adyhasch  60 Sep 14 09:42 ..
-rwxr-xr-x 1 root root 338 Sep 14 13:13 36-20170524201346-02.jpg
-rwxr-xr-x 1 root root 338 Sep 14 13:13 36-20170524201346-02.jpg
-rwxr-xr-x 1 root root 338 Sep 14 13:13 36-20170524201346-02.jpg
-rwxr-xr-x 1 adyhasch adyhasch  62 Sep 14 12:43 remove.py
-rwxrwxrwx 1 root root 338 Sep 14 13:12 test.raw

 root  /home/storage/motion/2017-05-24  chmod 777 36-20170524201346-02.jpg

 root  /home/storage/motion/2017-05-24  ls -al
total 20
drwxrwxrwx 1 motion   motion   178 Sep 14 13:13 .
drwxrwxr-x 1 motion   adyhasch  60 Sep 14 09:42 ..
-rwxrwxrwx 1 root root 338 Sep 14 13:13 36-20170524201346-02.jpg
-rwxrwxrwx 1 root root 338 Sep 14 13:13 36-20170524201346-02.jpg
-rwxrwxrwx 1 root root 338 Sep 14 13:13 36-20170524201346-02.jpg
-rwxr-xr-x 1 adyhasch adyhasch  62 Sep 14 12:43 remove.py
-rwxrwxrwx 1 root root 338 Sep 14 13:12 test.raw
 root  /home/storage/motion/2017-05-24  unlink 36-20170524201346-02.jpg
unlink: cannot unlink '36-20170524201346-02.jpg': No such file or directory

 root  /home/storage/motion/2017-05-24  1  ls -al
total 20
drwxrwxrwx 1 motion   motion   178 Sep 14 13:13 .
drwxrwxr-x 1 motion   adyhasch  60 Sep 14 09:42 ..
-rwxrwxrwx 1 root root 338 Sep 14 13:13 36-20170524201346-02.jpg
-rwxrwxrwx 1 root root 338 Sep 14 13:13 36-20170524201346-02.jpg
-rwxrwxrwx 1 root root 338 Sep 14 13:13 36-20170524201346-02.jpg
-rwxr-xr-x 1 adyhasch adyhasch  62 Sep 14 12:43 remove.py
-rwxrwxrwx 1 root root 338 Sep 14 13:12 test.raw

 root  /home/storage/motion/2017-05-24  journalctl -k | grep BTRFS
Sep 

Re: Move data and mount point to subvolume

2018-09-16 Thread Hans van Kranenburg
On 09/16/2018 02:37 PM, Hans van Kranenburg wrote:
> On 09/16/2018 01:14 PM, Rory Campbell-Lange wrote:
>> Hi
>>
>> We have a backup machine that has been happily running its backup
>> partitions on btrfs (on top of luks encrypted disks) for a few years.
>>
>> Our backup partition is on /bkp which is a top level subvolume. 
>> Data, RAID1: total=2.52TiB, used=1.36TiB
>> There are no other subvolumes.
>>
>> We have /bkp with /bkp/backup in it. We would like to mount /bkp/backup
>> at /bkp instead. Note that /bkp/backup has a lot of hardlinked files.
>>
>> I guess I could do 
>>
>> cd /bkp/backup
>> mv * ../
>> rmdir backup
> 
> Doing it the other way around is easier, since you don't have to think
> about hardlinked/reflinked/etc/anything while copying data:

Oh wait, I'm stupid, I was reading too quickly. Let me finish my coffee
first.

> btrfs sub snap /bkp /bkp/backup
> 
> Now you have the exact identical thing in /backup, and you can start
> throwing away everything outside of /backup.
> 
> To reduce chance for accidental errors, you can snapshot with -r to make
> the new /backup read-only first. Then after removing everything outside
> it, make it rw again with:
> 
> btrfs property set -ts /bkp/backup ro false
> 
>> But would it also be possible to do something like
>>
>> cd /bkp
>> btrfs subvol create backup-subvol
>> mv /bkp/backup/* /bkp/backup-subvol
>> ... then mount /bkp/backup-subvol at /bkp
>>
>> Would this second approach work, and preserve hardlinks?

You essentially want to turn a subdirectory into a subvolume.

The last example, where you make a subvolume and move everything into
it, will not do what you want. Since a subvolume is a separate new
directory/file hierarchy, mv will turn into a cp and rm operation
(without warning you), probably destroying information about data
shared between files.

So, what you can do is (similar to what I earlier typed):

* You now have subvol 5 (at /bkp)
* btrfs sub snap /bkp /bkp/backup-subvol
* Now you have a new subvol (256 or higher number) which already
contains everything
* Then inside /bkp/backup-subvol you will again see
/bkp/backup-subvol/backup (since you snapshotted the toplevel, which
also contains it)
* Now mv /bkp/backup-subvol/backup/* /bkp/backup-subvol (so the mv
operation stays within the same subvolume)
* Then remove everything outside /bkp/backup-subvol and mv
/bkp/backup-subvol /bkp/backup, and then voila... you can now use
subvol=/backup to mount it.

>> The machine is btrfs-progs v4.7.3 Linux 4.9.0-8-amd64 on Debian. The
>> coreutils version is 8.26-3.
> 
> Another note: since it seems you have about 100% more data space
> allocated (set apart for data purposes) than you're actually using, or
> in other words, the 1GiB chunks are on average only 50% filled...
> 
>Data, RAID1: total=2.52TiB, used=1.36TiB
> 
> ...in combination with using Linux 4.9, I suspect there's also 'ssd' in
> your mount options (not in fstab, but enabled by btrfs while mounting,
> see /proc/mounts or mount command output)?
> 
> If so, this is a nice starting point for more info about what might also
> be happening to your filesystem:
> https://www.spinics.net/lists/linux-btrfs/msg70622.html


Have fun,


-- 
Hans van Kranenburg


Re: Move data and mount point to subvolume

2018-09-16 Thread Hans van Kranenburg
On 09/16/2018 01:14 PM, Rory Campbell-Lange wrote:
> Hi
> 
> We have a backup machine that has been happily running its backup
> partitions on btrfs (on top of luks encrypted disks) for a few years.
> 
> Our backup partition is on /bkp which is a top level subvolume. 
> Data, RAID1: total=2.52TiB, used=1.36TiB
> There are no other subvolumes.
> 
> We have /bkp with /bkp/backup in it. We would like to mount /bkp/backup
> at /bkp instead. Note that /bkp/backup has a lot of hardlinked files.
> 
> I guess I could do 
> 
> cd /bkp/backup
> mv * ../
> rmdir backup

Doing it the other way around is easier, since you don't have to think
about hardlinked/reflinked/etc/anything while copying data:

btrfs sub snap /bkp /bkp/backup

Now you have the exact identical thing in /backup, and you can start
throwing away everything outside of /backup.

To reduce chance for accidental errors, you can snapshot with -r to make
the new /backup read-only first. Then after removing everything outside
it, make it rw again with:

btrfs property set -ts /bkp/backup ro false

> But would it also be possible to do something like
> 
> cd /bkp
> btrfs subvol create backup-subvol
> mv /bkp/backup/* /bkp/backup-subvol
> ... then mount /bkp/backup-subvol at /bkp
> 
> Would this second approach work, and preserve hardlinks?
> 
> The machine is btrfs-progs v4.7.3 Linux 4.9.0-8-amd64 on Debian. The
> coreutils version is 8.26-3.

Another note: since it seems you have about 100% more data space
allocated (set apart for data purposes) than you're actually using, or
in other words, the 1GiB chunks are on average only 50% filled...

   Data, RAID1: total=2.52TiB, used=1.36TiB

...in combination with using Linux 4.9, I suspect there's also 'ssd' in
your mount options (not in fstab, but enabled by btrfs while mounting,
see /proc/mounts or mount command output)?

If so, this is a nice starting point for more info about what might also
be happening to your filesystem:
https://www.spinics.net/lists/linux-btrfs/msg70622.html

-- 
Hans van Kranenburg


Move data and mount point to subvolume

2018-09-16 Thread Rory Campbell-Lange
Hi

We have a backup machine that has been happily running its backup
partitions on btrfs (on top of luks encrypted disks) for a few years.

Our backup partition is on /bkp which is a top level subvolume. 
Data, RAID1: total=2.52TiB, used=1.36TiB
There are no other subvolumes.

We have /bkp with /bkp/backup in it. We would like to mount /bkp/backup
at /bkp instead. Note that /bkp/backup has a lot of hardlinked files.

I guess I could do 

cd /bkp/backup
mv * ../
rmdir backup

But would it also be possible to do something like

cd /bkp
btrfs subvol create backup-subvol
mv /bkp/backup/* /bkp/backup-subvol
... then mount /bkp/backup-subvol at /bkp

Would this second approach work, and preserve hardlinks?

The machine is btrfs-progs v4.7.3 Linux 4.9.0-8-amd64 on Debian. The
coreutils version is 8.26-3.

Thanks for any comments
Rory




[RESEND][PATCH v3] fstests: btrfs/149 make it sectorsize independent

2018-09-16 Thread Anand Jain
Originally this test case was designed to work with 4K sectorsize.
Now enhance it to work with any sector size, making the following
changes:
. Make the .out file not contain any traces of the sector size.
. Use the max_inline=0 mount option so that it meets the requirement
  of a non-inline regular extent.
. Don't log the md5sum results to the output file, as the data size
  varies with the sectorsize.

Signed-off-by: Anand Jain 
---
(Adding fstests mailinglist now)
v2->v3: . write md5sum output to full log, drop echo to the out file.
. drop max_inline=0 for the receive side mount, as at the receive
  side we are supposed to create an inline extent.
. update references to 4k in the comments.
v1->v2: rename _scratch_sectorsize() to _scratch_btrfs_sectorsize()
add _require_btrfs_command inspect-internal dump-super

 common/btrfs|  7 +++
 common/filter   |  5 +
 tests/btrfs/149 | 46 ++
 tests/btrfs/149.out | 11 ---
 4 files changed, 46 insertions(+), 23 deletions(-)

diff --git a/common/btrfs b/common/btrfs
index 79c687f73376..26dc0bb9600f 100644
--- a/common/btrfs
+++ b/common/btrfs
@@ -367,3 +367,10 @@ _run_btrfs_balance_start()
 
run_check $BTRFS_UTIL_PROG balance start $bal_opt $*
 }
+
+#return the sector size of the btrfs scratch fs
+_scratch_btrfs_sectorsize()
+{
+   $BTRFS_UTIL_PROG inspect-internal dump-super $SCRATCH_DEV |\
+   grep sectorsize | awk '{print $2}'
+}
diff --git a/common/filter b/common/filter
index 3965c2eb752b..e87740ddda3f 100644
--- a/common/filter
+++ b/common/filter
@@ -271,6 +271,11 @@ _filter_xfs_io_pages_modified()
_filter_xfs_io_units_modified "Page" $PAGE_SIZE
 }
 
+_filter_xfs_io_numbers()
+{
+_filter_xfs_io | sed -E 's/[0-9]+//g'
+}
+
 _filter_test_dir()
 {
# TEST_DEV may be a prefix of TEST_DIR (e.g. /mnt, /mnt/ovl-mnt)
diff --git a/tests/btrfs/149 b/tests/btrfs/149
index 3e955a305e0f..bf5e16962876 100755
--- a/tests/btrfs/149
+++ b/tests/btrfs/149
@@ -6,7 +6,7 @@
 #
 # Test that an incremental send/receive operation will not fail when the
 # destination filesystem has compression enabled and the source filesystem
-# has a 4K extent at a file offset 0 that is not compressed and that is
+# has an extent at a file offset 0 that is not compressed and that is
 # shared.
 #
 seq=`basename $0`
@@ -36,6 +36,7 @@ _require_test
 _require_scratch
 _require_scratch_reflink
 _require_odirect
+_require_btrfs_command inspect-internal dump-super
 
 send_files_dir=$TEST_DIR/btrfs-test-$seq
 
@@ -44,21 +45,27 @@ rm -fr $send_files_dir
 mkdir $send_files_dir
 
 _scratch_mkfs >>$seqres.full 2>&1
-_scratch_mount "-o compress"
+# On 64K pagesize systems the compression is more efficient, so max_inline
+# helps to create regular (non inline) extent irrespective of the final
+# write size.
+_scratch_mount "-o compress -o max_inline=0"
 
 # Write to our file using direct IO, so that this way the write ends up not
 # getting compressed, that is, we get a regular extent which is neither
 # inlined nor compressed.
 # Alternatively, we could have mounted the fs without compression enabled,
 # which would result as well in an uncompressed regular extent.
-$XFS_IO_PROG -f -d -c "pwrite -S 0xab 0 4K" $SCRATCH_MNT/foobar | _filter_xfs_io
+sectorsize=$(_scratch_btrfs_sectorsize)
+$XFS_IO_PROG -f -d -c "pwrite -S 0xab 0 $sectorsize" $SCRATCH_MNT/foobar |\
+   _filter_xfs_io_numbers
 
 $BTRFS_UTIL_PROG subvolume snapshot -r $SCRATCH_MNT \
$SCRATCH_MNT/mysnap1 > /dev/null
 
 # Clone the regular (not inlined) extent.
-$XFS_IO_PROG -c "reflink $SCRATCH_MNT/foobar 0 8K 4K" $SCRATCH_MNT/foobar \
-   | _filter_xfs_io
+$XFS_IO_PROG -c \
+   "reflink $SCRATCH_MNT/foobar 0 $((2 * $sectorsize)) $sectorsize" \
+   $SCRATCH_MNT/foobar | _filter_xfs_io_numbers
 
 $BTRFS_UTIL_PROG subvolume snapshot -r $SCRATCH_MNT \
$SCRATCH_MNT/mysnap2 > /dev/null
@@ -67,17 +74,19 @@ $BTRFS_UTIL_PROG send -f $send_files_dir/1.snap \
 $SCRATCH_MNT/mysnap1 2>&1 >/dev/null | _filter_scratch
 
 # Now do an incremental send of the second snapshot. The send stream can have
-# a clone operation to clone the extent at offset 0 to offset 8K. This operation
-# would fail on the receiver if it has compression enabled, since the write
-# operation of the extent at offset 0 was compressed because it was a buffered
-# write operation, and btrfs' clone implementation does not allow cloning inline
-# extents to offsets different from 0.
+# a clone operation to clone the extent at offset 0 to offset (2 x sectorsize).
+# This operation would fail on the receiver if it has compression enabled, since
+# the write operation of the extent at offset 0 was compressed because it was a
+# buffered write operation, and btrfs' clone implementation does not allow
+# cloning inline extents to offsets different from 0.
 $BTRFS_UTIL_PROG send -p $SCRATCH_MNT/mysnap1 -f