[PATCH] btrfs: disk-io: Allow 0 as tree block bytenr

2017-12-09 Thread Qu Wenruo
Some btrfs filesystems created by old versions of mkfs.btrfs can have a
tree block at bytenr 0.

In fact, any aligned bytenr is allowed in btrfs, and in some cases this
can cause problems if a valid tree block at bytenr 0 can't be read.

The superblock checker and the bytenr alignment checker already handle
this case, so there is no need to check bytenr < sectorsize in
read_tree_block().

Reported-by: Benjamin Beichler 
Fixes: 6cca2ea9bea9 ("btrfs-progs: more sanity checks in read_tree_block_fs_info")
Signed-off-by: Qu Wenruo 
---
 disk-io.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/disk-io.c b/disk-io.c
index f5edc4796619..7f13f05ac600 100644
--- a/disk-io.c
+++ b/disk-io.c
@@ -318,7 +318,7 @@ struct extent_buffer* read_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr,
 * Such unaligned tree block will free overlapping extent buffer,
 * causing use-after-free bugs for fuzzed images.
 */
-   if (bytenr < sectorsize || !IS_ALIGNED(bytenr, sectorsize)) {
+   if (!IS_ALIGNED(bytenr, sectorsize)) {
error("tree block bytenr %llu is not aligned to sectorsize %u",
  bytenr, sectorsize);
return ERR_PTR(-EIO);
-- 
2.15.1

--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Odd behaviour of replace -- unknown resulting state

2017-12-09 Thread Duncan
Hugo Mills posted on Sat, 09 Dec 2017 17:43:48 +0000 as excerpted:

> This is on 4.10, so there may have been fixes made to this since
> then. If so, apologies for the noise.
> 
>I had a filesystem on 6 devices with a badly failing drive in it
> (/dev/sdi). I replaced the drive with a new one:
> 
> # btrfs replace start /dev/sdi /dev/sdj /media/video
> 
>Once it had finished(*), I resized the device from 6 TB to 8 TB:
> 
> # btrfs fi resize 2:max /media/video
> 
>I also removed another, smaller, device:
> 
> # btrfs dev del 7 /media/video
> 
>Following this, btrfs fi show was reporting the correct device
> size, but still the same device node in the filesystem:
> 
> Label: 'amelia'  uuid: f7409f7d-bea2-4818-b937-9e45d754b5f1
>Total devices 5 FS bytes used 9.15TiB
>devid2 size 7.28TiB used 6.44TiB path /dev/sdi2
>devid3 size 3.63TiB used 3.46TiB path /dev/sde2
>devid4 size 3.63TiB used 3.45TiB path /dev/sdd2
>devid5 size 1.81TiB used 1.65TiB path /dev/sdh2
>devid6 size 3.63TiB used 3.43TiB path /dev/sdc2
> 
>Note that device 2 definitely isn't /dev/sdi2, because /dev/sdi2
> was on a 6 TB device, not an 8 TB device.
> 
>Finally, I physically removed the two deleted devices from the
> machine. The second device came out fine, but the first (/dev/sdi) has
> now resulted in this from btrfs fi show:
> 
> Label: 'amelia'  uuid: f7409f7d-bea2-4818-b937-9e45d754b5f1
>Total devices 5 FS bytes used 9.15TiB
>devid3 size 3.63TiB used 3.46TiB path /dev/sde2
>devid4 size 3.63TiB used 3.45TiB path /dev/sdd2
>devid5 size 1.81TiB used 1.65TiB path /dev/sdh2
>devid6 size 3.63TiB used 3.43TiB path /dev/sdc2
>*** Some devices missing
> 
>So, what's the *actual* current state of this filesystem? It's not
> throwing write errors in the kernel logs from having a missing device,
> so it seems like it's probably OK. However, the FS's idea of which
> devices it's got seems to be confused.
> 
>I suspect that if I reboot, it'll all be fine, but I'd be happier
> if it hadn't got into this state in the first place.

As I believe you know, I'm not a coder, and there's a limit to the
technical detail level I'm comfortable with.  As such, I do sometimes
come to the wrong conclusions...

That said, as I understand things, this sort of device confusion is
normal for btrfs at this time, because the kernel btrfs code simply
doesn't have a proper concept of real-time (physical or blockdev layer
below btrfs) device disappearance/removal.

Adding the ability for btrfs to properly deal with device removal is
part of the patch set that one of the devs (Anand Jain, IIRC) is
working on as a prerequisite to hot-spare.  I've seen quite some
discussion on the device-tracking subset recently and it's my
impression that it's headed for mainline right now, tho I haven't
tracked it closely enough to be sure if it's in for 4.15 or being
staged for 4.16.

Until then, the btrfs fi show and similar output can be /expected/
to still show old devices at times until a reboot, even if what's
actually on-dev has been correctly updated.

Thus, I too expect that after a reboot it should actually show up
correctly, tho of course I'd expect people to have backups updated
before they go doing anything like btrfs device remove, etc, so
if it does /not/ come back after a reboot, no problem, just go to
the backup.

>Is this bug fixed in later versions of the kernel? Can anyone think
> of any issues I might have if I leave it in this state for a while?
> Likewise, any issues I might have from a reboot? (Probably into 4.14)
> 
>Hugo.
> 
> (*) as an aside, it was reporting over 300% complete when it finally
> completed. Not sure if that's been fixed since 4.10, either.

IIRC, this one *has* been fixed recently.  At least, I definitely
remember multi-hundred-percent complete reports a few kernel-cycles
ago, and believed it to be a known bug with a known-to-fix patch just
waiting for the normal development cycle timing to get it out there.
And since that /was/ several kernel cycles ago, probably about the
4.10 you mention, actually, I'd be rather surprised to see it still
being an issue with current.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



Re: Chunk-Recovery fails with alignment error

2017-12-09 Thread Qu Wenruo


On 2017年12月10日 08:29, Qu Wenruo wrote:
> 
> 
> On 2017年12月10日 07:12, Benjamin Beichler wrote:
>> Hi Qu,
>>
>> 2017-12-07 12:09 GMT+00:00 Qu Wenruo :
>>>
>>> Since the btrfs chunk recovery doesn't work and my dirty quick hack
>>> doesn't work either, I don't expect much to recovery.
>>>
>>> Unless we have more detailed info about the how and why the BUG_ON() of
>>> chunk recovery is triggered.
>>>
>>> That's to say, it will be a quite time consuming work to use gdb to
>>> locate the problem, and see if any developer (mostly me) could use the
>>> info to further dig into the problem or fix it.
>>> (Considering the difference in timezone, I expect at least 8+ weeks to
>>> get a conclusion)
>>
>> I'm really pleased that you want to help me, of course the current
>> backtrace was quite useless.
>> Firstly, I revised the code a bit, and since one run over the 1,7TB
>> drive took about 6h, I thought about saving the state of already found
>> chunks. I simply saved all bytenr which are valid to a file. The
>> consequence was a reduction of the time for scan_one_device to about
>> 30s. If you think this could be interesting for the normal version, I
>> could create a patch for this.
>>
>>>
>>> If you really want to do it, please step into the function
>>> btrfs_insert_item() in __rebuild_device_items() and to see at which
>>> point -EIO is returned.
>>>
>>> My guess is btrfs_search_slot() call in btrfs_insert_empty_items().
>>>
>>> If that's true, please call
>>>
>>> btrfs_print_tree(root->fs_info->chunk_root, 
>>> root->fs_info->chunk_root->node, 1)
>>>
>>> in gdb, just before the btrfs_search_slot() call above, to show what's
>>> the problem.
>>>
>> Your guess was right. The current stack trace and btrfs_print_tree is
>> under : https://gist.github.com/anonymous/2cf40ac1d3ddcbca95177acec78041b2
> 
> The output is very helpful.
> 
> I was originally thinking it's something more serious, but it turns out
> to be less serious than my expectation.
> 
>>
>> As you can see, the code in disk.io:321 explicitly exclude the the
>> sector from 0 to sectorsize, and states it is unaligned. I think
>> because the code found a chunk/block at address zero, this triggers
>> the problem. Is it possible, that there live chunks/blocks at address
>> 0 or is this fuzzy data?
> 
> 0 is completely valid in btrfs logical address space.
> 
> It's the IS_ALIGNED macro which caused the problem.
> So it's quite easy to fix in fact.

Sorry, IS_ALIGNED is working as expected.

It's the bytenr < sectorsize check that causes the problem.
Please remove the bytenr < sectorsize check; I'll submit a patch later
to fix it.

Thanks,
Qu

> 
> For 0, always return it as aligned should fix your problem.
> 
> Thanks,
> Qu
> 
>>
>>>
>>> BTW, currently nothing in chunk tree/super block contains any info of
>>> your fs, feel free to share it with the mail list, where more guys may help.
>>>
>> I added the list, I simply forgot it in some answer.
>>
>>> Thanks,
>>> Qu
>>>
>>
>> thanks
>>
>> Benjamin
>>
> 





Re: Chunk-Recovery fails with alignment error

2017-12-09 Thread Qu Wenruo


On 2017年12月10日 07:12, Benjamin Beichler wrote:
> Hi Qu,
> 
> 2017-12-07 12:09 GMT+00:00 Qu Wenruo :
>>
>> Since the btrfs chunk recovery doesn't work and my dirty quick hack
>> doesn't work either, I don't expect much to recovery.
>>
>> Unless we have more detailed info about the how and why the BUG_ON() of
>> chunk recovery is triggered.
>>
>> That's to say, it will be a quite time consuming work to use gdb to
>> locate the problem, and see if any developer (mostly me) could use the
>> info to further dig into the problem or fix it.
>> (Considering the difference in timezone, I expect at least 8+ weeks to
>> get a conclusion)
> 
> I'm really pleased that you want to help me, of course the current
> backtrace was quite useless.
> Firstly, I revised the code a bit, and since one run over the 1,7TB
> drive took about 6h, I thought about saving the state of already found
> chunks. I simply saved all bytenr which are valid to a file. The
> consequence was a reduction of the time for scan_one_device to about
> 30s. If you think this could be interesting for the normal version, I
> could create a patch for this.
> 
>>
>> If you really want to do it, please step into the function
>> btrfs_insert_item() in __rebuild_device_items() and to see at which
>> point -EIO is returned.
>>
>> My guess is btrfs_search_slot() call in btrfs_insert_empty_items().
>>
>> If that's true, please call
>>
>> btrfs_print_tree(root->fs_info->chunk_root, root->fs_info->chunk_root->node, 
>> 1)
>>
>> in gdb, just before the btrfs_search_slot() call above, to show what's
>> the problem.
>>
> Your guess was right. The current stack trace and btrfs_print_tree is
> under : https://gist.github.com/anonymous/2cf40ac1d3ddcbca95177acec78041b2

The output is very helpful.

I was originally thinking it was something more serious, but it turns
out to be less serious than I expected.

> 
> As you can see, the code in disk.io:321 explicitly exclude the the
> sector from 0 to sectorsize, and states it is unaligned. I think
> because the code found a chunk/block at address zero, this triggers
> the problem. Is it possible, that there live chunks/blocks at address
> 0 or is this fuzzy data?

0 is completely valid in btrfs logical address space.

It's the IS_ALIGNED macro which caused the problem.
So it's quite easy to fix in fact.

For 0, always returning it as aligned should fix your problem.

Thanks,
Qu

> 
>>
>> BTW, currently nothing in chunk tree/super block contains any info of
>> your fs, feel free to share it with the mail list, where more guys may help.
>>
> I added the list, I simply forgot it in some answer.
> 
>> Thanks,
>> Qu
>>
> 
> thanks
> 
> Benjamin
> 





Re: Chunk-Recovery fails with alignment error

2017-12-09 Thread Benjamin Beichler
Hi Qu,

2017-12-07 12:09 GMT+00:00 Qu Wenruo :
>
> Since the btrfs chunk recovery doesn't work and my dirty quick hack
> doesn't work either, I don't expect much to recovery.
>
> Unless we have more detailed info about the how and why the BUG_ON() of
> chunk recovery is triggered.
>
> That's to say, it will be a quite time consuming work to use gdb to
> locate the problem, and see if any developer (mostly me) could use the
> info to further dig into the problem or fix it.
> (Considering the difference in timezone, I expect at least 8+ weeks to
> get a conclusion)

I'm really pleased that you want to help me; of course, the current
backtrace was quite useless.
First, I revised the code a bit. Since one run over the 1.7 TB drive
took about 6 hours, I thought about saving the state of already-found
chunks, so I simply saved all valid bytenrs to a file. The consequence
was a reduction of the scan_one_device() time to about 30 seconds. If
you think this could be interesting for the mainline version, I could
create a patch for it.

>
> If you really want to do it, please step into the function
> btrfs_insert_item() in __rebuild_device_items() and to see at which
> point -EIO is returned.
>
> My guess is btrfs_search_slot() call in btrfs_insert_empty_items().
>
> If that's true, please call
>
> btrfs_print_tree(root->fs_info->chunk_root, root->fs_info->chunk_root->node, 1)
>
> in gdb, just before the btrfs_search_slot() call above, to show what's
> the problem.
>
Your guess was right. The current stack trace and btrfs_print_tree
output are at: https://gist.github.com/anonymous/2cf40ac1d3ddcbca95177acec78041b2

As you can see, the code at disk-io.c:321 explicitly excludes the
range from 0 to sectorsize and reports it as unaligned. I think the
code found a chunk/block at address zero, and this is what triggers
the problem. Is it possible that live chunks/blocks exist at address
0, or is this fuzzed data?

>
> BTW, currently nothing in chunk tree/super block contains any info of
> your fs, feel free to share it with the mail list, where more guys may help.
>
I added the list; I simply forgot it in an earlier reply.

> Thanks,
> Qu
>

thanks

Benjamin


Re: Odd behaviour of replace -- unknown resulting state

2017-12-09 Thread Hugo Mills
On Sat, Dec 09, 2017 at 05:43:48PM +0000, Hugo Mills wrote:
>This is on 4.10, so there may have been fixes made to this since
> then. If so, apologies for the noise.
> 
>I had a filesystem on 6 devices with a badly failing drive in it
> (/dev/sdi). I replaced the drive with a new one:
> 
> # btrfs replace start /dev/sdi /dev/sdj /media/video

Sorry, that should, of course, read:

# btrfs replace start /dev/sdi2 /dev/sdj2 /media/video

   Hugo.

>Once it had finished(*), I resized the device from 6 TB to 8 TB:
> 
> # btrfs fi resize 2:max /media/video
> 
>I also removed another, smaller, device:
> 
> # btrfs dev del 7 /media/video
> 
>Following this, btrfs fi show was reporting the correct device
> size, but still the same device node in the filesystem:
> 
> Label: 'amelia'  uuid: f7409f7d-bea2-4818-b937-9e45d754b5f1
>Total devices 5 FS bytes used 9.15TiB
>devid2 size 7.28TiB used 6.44TiB path /dev/sdi2
>devid3 size 3.63TiB used 3.46TiB path /dev/sde2
>devid4 size 3.63TiB used 3.45TiB path /dev/sdd2
>devid5 size 1.81TiB used 1.65TiB path /dev/sdh2
>devid6 size 3.63TiB used 3.43TiB path /dev/sdc2
> 
>Note that device 2 definitely isn't /dev/sdi2, because /dev/sdi2
> was on a 6 TB device, not an 8 TB device.
> 
>Finally, I physically removed the two deleted devices from the
> machine. The second device came out fine, but the first (/dev/sdi) has
> now resulted in this from btrfs fi show:
> 
> Label: 'amelia'  uuid: f7409f7d-bea2-4818-b937-9e45d754b5f1
>Total devices 5 FS bytes used 9.15TiB
>devid3 size 3.63TiB used 3.46TiB path /dev/sde2
>devid4 size 3.63TiB used 3.45TiB path /dev/sdd2
>devid5 size 1.81TiB used 1.65TiB path /dev/sdh2
>devid6 size 3.63TiB used 3.43TiB path /dev/sdc2
>*** Some devices missing
> 
>So, what's the *actual* current state of this filesystem? It's not
> throwing write errors in the kernel logs from having a missing device,
> so it seems like it's probably OK. However, the FS's idea of which
> devices it's got seems to be confused.
> 
>I suspect that if I reboot, it'll all be fine, but I'd be happier
> if it hadn't got into this state in the first place.
> 
>Is this bug fixed in later versions of the kernel? Can anyone think
> of any issues I might have if I leave it in this state for a while?
> Likewise, any issues I might have from a reboot? (Probably into 4.14)
> 
>Hugo.
> 
> (*) as an aside, it was reporting over 300% complete when it finally
> completed. Not sure if that's been fixed since 4.10, either.
>  

-- 
Hugo Mills | I'm on a 30-day diet. So far I've lost 18 days.
hugo@... carfax.org.uk |
http://carfax.org.uk/  |
PGP: E2AB1DE4  |




[GIT PULL] Btrfs fixes for 4.15-rc3

2017-12-09 Thread David Sterba
Hi,

this update contains a few fixes (error handling, a quota leak, FUA vs
the nobarrier mount option).  There's one worth mentioning separately:
an off-by-one fix that leads to overwriting the first byte of an
adjacent page with 0, out of bounds of the memory allocated by an
ioctl.  This is under a privileged part of the ioctl and can be
triggered in some subvolume layouts.

After the last tags-and-branches mess [1], let me note that the pull
URL points to the signed tag.  There are no merge conflicts.  Please
pull, thanks.

[1] https://lkml.org/lkml/2017/11/29/952


The following changes since commit ea37d5998b50a72b9045ba60a132eeb20e1c4230:

  Btrfs: incremental send, fix wrong unlink path after renaming file (2017-11-28 17:15:30 +0100)

are available in the Git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux.git for-4.15-rc3-tag

for you to fetch changes up to c8bcbfbd239ed60a6562964b58034ac8a25f4c31:

  btrfs: Fix possible off-by-one in btrfs_search_path_in_tree (2017-12-07 00:35:15 +0100)


Jeff Mahoney (2):
  btrfs: handle errors while updating refcounts in update_ref_for_cow
  btrfs: fix missing error return in btrfs_drop_snapshot

Justin Maggard (1):
  btrfs: Fix quota reservation leak on preallocated files

Nikolay Borisov (1):
  btrfs: Fix possible off-by-one in btrfs_search_path_in_tree

Omar Sandoval (1):
  Btrfs: disable FUA if mounted with nobarrier

 fs/btrfs/ctree.c   | 18 --
 fs/btrfs/disk-io.c | 12 +---
 fs/btrfs/extent-tree.c |  1 +
 fs/btrfs/inode.c   |  2 ++
 fs/btrfs/ioctl.c   |  2 +-
 5 files changed, 21 insertions(+), 14 deletions(-)


Odd behaviour of replace -- unknown resulting state

2017-12-09 Thread Hugo Mills
   This is on 4.10, so there may have been fixes made to this since
then. If so, apologies for the noise.

   I had a filesystem on 6 devices with a badly failing drive in it
(/dev/sdi). I replaced the drive with a new one:

# btrfs replace start /dev/sdi /dev/sdj /media/video

   Once it had finished(*), I resized the device from 6 TB to 8 TB:

# btrfs fi resize 2:max /media/video

   I also removed another, smaller, device:

# btrfs dev del 7 /media/video

   Following this, btrfs fi show was reporting the correct device
size, but still the same device node in the filesystem:

Label: 'amelia'  uuid: f7409f7d-bea2-4818-b937-9e45d754b5f1
   Total devices 5 FS bytes used 9.15TiB
   devid2 size 7.28TiB used 6.44TiB path /dev/sdi2
   devid3 size 3.63TiB used 3.46TiB path /dev/sde2
   devid4 size 3.63TiB used 3.45TiB path /dev/sdd2
   devid5 size 1.81TiB used 1.65TiB path /dev/sdh2
   devid6 size 3.63TiB used 3.43TiB path /dev/sdc2

   Note that device 2 definitely isn't /dev/sdi2, because /dev/sdi2
was on a 6 TB device, not an 8 TB device.

   Finally, I physically removed the two deleted devices from the
machine. The second device came out fine, but the first (/dev/sdi) has
now resulted in this from btrfs fi show:

Label: 'amelia'  uuid: f7409f7d-bea2-4818-b937-9e45d754b5f1
   Total devices 5 FS bytes used 9.15TiB
   devid3 size 3.63TiB used 3.46TiB path /dev/sde2
   devid4 size 3.63TiB used 3.45TiB path /dev/sdd2
   devid5 size 1.81TiB used 1.65TiB path /dev/sdh2
   devid6 size 3.63TiB used 3.43TiB path /dev/sdc2
   *** Some devices missing

   So, what's the *actual* current state of this filesystem? It's not
throwing write errors in the kernel logs from having a missing device,
so it seems like it's probably OK. However, the FS's idea of which
devices it's got seems to be confused.

   I suspect that if I reboot, it'll all be fine, but I'd be happier
if it hadn't got into this state in the first place.

   Is this bug fixed in later versions of the kernel? Can anyone think
of any issues I might have if I leave it in this state for a while?
Likewise, any issues I might have from a reboot? (Probably into 4.14)

   Hugo.

(*) as an aside, it was reporting over 300% complete when it finally
completed. Not sure if that's been fixed since 4.10, either.
-- 
Hugo Mills | Biphocles: Plato's optician
hugo@... carfax.org.uk |
http://carfax.org.uk/  |
PGP: E2AB1DE4  |




Re: [PATCH v4 72/73] xfs: Convert mru cache to XArray

2017-12-09 Thread Joe Perches
On Sat, 2017-12-09 at 09:36 +1100, Dave Chinner wrote:
>   1. Using lockdep_set_novalidate_class() for anything other
>   than device->mutex will throw checkpatch warnings. Nice. (*)
[]
> (*) checkpatch.pl is considered mostly harmful round here, too,
> but that's another rant

How so?

> (**) the frequent occurrence of "core code/devs aren't held to the
> same rules/standard as everyone else" is another rant I have stored
> up for a rainy day.

Yeah.  I wouldn't mind reading that one...

Rainy season is starting right about now here too.



Re: [PATCH] Btrfs: raid56: fix race between merge_bio and rbio_orig_end_io

2017-12-09 Thread Nikolay Borisov


On  9.12.2017 01:02, Liu Bo wrote:
> We're not allowed to take any new bios to rbio->bio_list in
> rbio_orig_end_io(), otherwise we may get merged with more bios and
> rbio->bio_list is not empty.
> 
> This should only happens in error-out cases, the normal path of
> recover and full stripe write have already set RBIO_RMW_LOCKED_BIT to
> disable merge before doing IO.
> 
> Reported-by: Jérôme Carretero 
> Signed-off-by: Liu Bo 
> ---
>  fs/btrfs/raid56.c | 13 -
>  1 file changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
> index 5aa9d22..127c782 100644
> --- a/fs/btrfs/raid56.c
> +++ b/fs/btrfs/raid56.c
> @@ -859,12 +859,23 @@ static void free_raid_bio(struct btrfs_raid_bio *rbio)
>   */
>  static void rbio_orig_end_io(struct btrfs_raid_bio *rbio, blk_status_t err)
>  {
> - struct bio *cur = bio_list_get(&rbio->bio_list);
> + struct bio *cur;
>   struct bio *next;
>  
> + /*
> +  * We're not allowed to take any new bios to rbio->bio_list
> +  * from now on, otherwise we may get merged with more bios and
> +  * rbio->bio_list is not empty.
> +  */
> + spin_lock(&rbio->bio_list_lock);
> + set_bit(RBIO_RMW_LOCKED_BIT, &rbio->flags);
> + spin_unlock(&rbio->bio_list_lock);

Do we really need the spinlock? Bit operations are atomic.

> +
>   if (rbio->generic_bio_cnt)
>   btrfs_bio_counter_sub(rbio->fs_info, rbio->generic_bio_cnt);
>  
> + cur = bio_list_get(&rbio->bio_list);
> +
>   free_raid_bio(rbio);
>  
>   while (cur) {
> 