Re: [RFC PATCH v3.2 5/6] btrfs: qgroup: Introduce extent changeset for qgroup reserve functions

2017-06-02 Thread David Sterba
On Thu, Jun 01, 2017 at 09:01:26AM +0800, Qu Wenruo wrote:
> 
> 
> At 05/31/2017 10:30 PM, David Sterba wrote:
> > On Wed, May 31, 2017 at 08:31:35AM +0800, Qu Wenruo wrote:
>  Yes, it's hard to find such a deadlock, especially when lockdep will
>  not detect it.
> 
>  And this makes the advantage of using stack memory in the v3 patch
>  more obvious.
> 
>  I didn't realize the extra possible deadlock when memory pressure is
>  high, and to make completely correct use of GFP_ flags we should let
>  the caller choose its GFP_ flag, which would introduce more
>  modifications and more chances to cause problems.
> 
>  So now I prefer the stack version a little more.
> >>>
> >>> The difference is that the stack version will always consume the stack
> >>> at runtime.  The dynamic allocation will not, but we have to add error
> >>> handling and make sure we use right gfp flags. So it's runtime vs review
> >>> trade off, I choose to spend time on review.
> >>
> >> OK, then I'll update the patchset to allow passing gfp flags for each
> >> reservation.
> > 
> > You mean to add gfp flags to extent_changeset_alloc and update the
> > direct callers or to add gfp flags to the whole reservation codepath?
> 
> Yes, I was planning to do it.
> 
> > I strongly prefer to use GFP_NOFS for now, although it's not ideal.
> 
> OK, then keep GFP_NOFS.
> But I also want to know the reason why.
> 
> Is it just because we don't have a good enough tool to detect possible
> deadlocks caused by wrong GFP_* flags in the write path?

Yes, basically. It's either overzealous GFP_NOFS or a potential deadlock
with GFP_KERNEL. We'll deal with the NOFS eventually, so we want to be
safe until then.

Michal Hocko has a debugging patch that will report use of NOFS when
it's not needed, but we have to explicitly mark the sections for that.
This hasn't happened and is not easy to do as we have to audit lots of
codepaths.
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC PATCH v3.2 5/6] btrfs: qgroup: Introduce extent changeset for qgroup reserve functions

2017-05-31 Thread Qu Wenruo



At 05/31/2017 10:30 PM, David Sterba wrote:

On Wed, May 31, 2017 at 08:31:35AM +0800, Qu Wenruo wrote:

Yes, it's hard to find such a deadlock, especially when lockdep will not
detect it.

And this makes the advantage of using stack memory in the v3 patch more
obvious.

I didn't realize the extra possible deadlock when memory pressure is
high, and to make completely correct use of GFP_ flags we should let the
caller choose its GFP_ flag, which would introduce more modifications
and more chances to cause problems.

So now I prefer the stack version a little more.


The difference is that the stack version will always consume the stack
at runtime.  The dynamic allocation will not, but we have to add error
handling and make sure we use right gfp flags. So it's runtime vs review
trade off, I choose to spend time on review.


OK, then I'll update the patchset to allow passing gfp flags for each
reservation.


You mean to add gfp flags to extent_changeset_alloc and update the
direct callers or to add gfp flags to the whole reservation codepath?


Yes, I was planning to do it.


I strongly prefer to use GFP_NOFS for now, although it's not ideal.


OK, then keep GFP_NOFS.
But I also want to know the reason why.

Is it just because we don't have a good enough tool to detect possible
deadlocks caused by wrong GFP_* flags in the write path?


Thanks,
Qu




Re: [RFC PATCH v3.2 5/6] btrfs: qgroup: Introduce extent changeset for qgroup reserve functions

2017-05-31 Thread David Sterba
On Wed, May 31, 2017 at 08:31:35AM +0800, Qu Wenruo wrote:
> >> Yes, it's hard to find such a deadlock, especially when lockdep will
> >> not detect it.
> >>
> >> And this makes the advantage of using stack memory in the v3 patch
> >> more obvious.
> >>
> >> I didn't realize the extra possible deadlock when memory pressure is
> >> high, and to make completely correct use of GFP_ flags we should let
> >> the caller choose its GFP_ flag, which would introduce more
> >> modifications and more chances to cause problems.
> >>
> >> So now I prefer the stack version a little more.
> > 
> > The difference is that the stack version will always consume the stack
> > at runtime.  The dynamic allocation will not, but we have to add error
> > handling and make sure we use right gfp flags. So it's runtime vs review
> > trade off, I choose to spend time on review.
> 
> OK, then I'll update the patchset to allow passing gfp flags for each 
> reservation.

You mean to add gfp flags to extent_changeset_alloc and update the
direct callers or to add gfp flags to the whole reservation codepath?
I strongly prefer to use GFP_NOFS for now, although it's not ideal.


Re: [RFC PATCH v3.2 5/6] btrfs: qgroup: Introduce extent changeset for qgroup reserve functions

2017-05-30 Thread Qu Wenruo



At 05/29/2017 11:51 PM, David Sterba wrote:

On Fri, May 19, 2017 at 08:32:18AM +0800, Qu Wenruo wrote:

At 05/18/2017 09:45 PM, David Sterba wrote:

On Thu, May 18, 2017 at 08:24:26AM +0800, Qu Wenruo wrote:

+static inline void extent_changeset_init(struct extent_changeset *changeset)
+{
+   changeset->bytes_changed = 0;
+   ulist_init(&changeset->range_changed);
+}
+
+static inline struct extent_changeset *extent_changeset_alloc(void)
+{
+   struct extent_changeset *ret;
+
+   ret = kmalloc(sizeof(*ret), GFP_KERNEL);


I don't remember if we'd discussed this before, but have you evaluated
if GFP_KERNEL is ok to use in this context?


IIRC you have informed me that I shouldn't abuse GFP_NOFS.


Use of GFP_NOFS or _KERNEL has to be evaluated case by case. So if it's
"let's use NOFS because everybody else does" or "he said I should not
use NOFS, then I'll use KERNEL", then it's wrong and I'll complain.

A short notice in the changelog or a comment above the allocation would
better signify that the patch author spent some time thinking about the
consequences.

Sometimes it can become pretty hard to find the potential deadlock
scenarios. Using GFP_NOFS in such a case is a matter of precaution, but
at least it would be nice for that to be explicitly stated somewhere.


Yes, it's hard to find such a deadlock, especially when lockdep will not
detect it.

And this makes the advantage of using stack memory in the v3 patch more
obvious.

I didn't realize the extra possible deadlock when memory pressure is
high, and to make completely correct use of GFP_ flags we should let the
caller choose its GFP_ flag, which would introduce more modifications
and more chances to cause problems.

So now I prefer the stack version a little more.


The difference is that the stack version will always consume the stack
at runtime.  The dynamic allocation will not, but we have to add error
handling and make sure we use right gfp flags. So it's runtime vs review
trade off, I choose to spend time on review.


OK, then I'll update the patchset to allow passing gfp flags for each 
reservation.




As catching all the gfp misuse is hard, we'll need some runtime
validation anyway, i.e. marking the start and end of the context where
GFP_KERNEL must not be used.


That's indeed a very nice feature.

Thanks,
Qu









Re: [RFC PATCH v3.2 5/6] btrfs: qgroup: Introduce extent changeset for qgroup reserve functions

2017-05-29 Thread David Sterba
On Fri, May 19, 2017 at 08:32:18AM +0800, Qu Wenruo wrote:
> At 05/18/2017 09:45 PM, David Sterba wrote:
> > On Thu, May 18, 2017 at 08:24:26AM +0800, Qu Wenruo wrote:
>  +static inline void extent_changeset_init(struct extent_changeset 
>  *changeset)
>  +{
>  +changeset->bytes_changed = 0;
>  +ulist_init(&changeset->range_changed);
>  +}
>  +
>  +static inline struct extent_changeset *extent_changeset_alloc(void)
>  +{
>  +struct extent_changeset *ret;
>  +
>  +ret = kmalloc(sizeof(*ret), GFP_KERNEL);
> >>>
> >>> I don't remember if we'd discussed this before, but have you evaluated
> >>> if GFP_KERNEL is ok to use in this context?
> >>
> >> IIRC you have informed me that I shouldn't abuse GFP_NOFS.
> > 
> > Use of GFP_NOFS or _KERNEL has to be evaluated case by case. So if it's
> > "let's use NOFS because everybody else does" or "he said I should not
> > use NOFS, then I'll use KERNEL", then it's wrong and I'll complain.
> > 
> > A short notice in the changelog or a comment above the allocation would
> > better signify that the patch author spent some time thinking about the
> > consequences.
> > 
> > Sometimes it can become pretty hard to find the potential deadlock
> > scenarios. Using GFP_NOFS in such a case is a matter of precaution, but
> > at least it would be nice for that to be explicitly stated somewhere.
> 
> Yes, it's hard to find such a deadlock, especially when lockdep will
> not detect it.
> 
> And this makes the advantage of using stack memory in the v3 patch more
> obvious.
> 
> I didn't realize the extra possible deadlock when memory pressure is
> high, and to make completely correct use of GFP_ flags we should let
> the caller choose its GFP_ flag, which would introduce more
> modifications and more chances to cause problems.
> 
> So now I prefer the stack version a little more.

The difference is that the stack version will always consume the stack
at runtime.  The dynamic allocation will not, but we have to add error
handling and make sure we use right gfp flags. So it's runtime vs review
trade off, I choose to spend time on review.

As catching all the gfp misuse is hard, we'll need some runtime
validation anyway, i.e. marking the start and end of the context where
GFP_KERNEL must not be used.


Re: [RFC PATCH v3.2 5/6] btrfs: qgroup: Introduce extent changeset for qgroup reserve functions

2017-05-18 Thread Qu Wenruo



At 05/18/2017 09:45 PM, David Sterba wrote:

On Thu, May 18, 2017 at 08:24:26AM +0800, Qu Wenruo wrote:

+static inline void extent_changeset_init(struct extent_changeset *changeset)
+{
+   changeset->bytes_changed = 0;
+   ulist_init(&changeset->range_changed);
+}
+
+static inline struct extent_changeset *extent_changeset_alloc(void)
+{
+   struct extent_changeset *ret;
+
+   ret = kmalloc(sizeof(*ret), GFP_KERNEL);


I don't remember if we'd discussed this before, but have you evaluated
if GFP_KERNEL is ok to use in this context?


IIRC you have informed me that I shouldn't abuse GFP_NOFS.


Use of GFP_NOFS or _KERNEL has to be evaluated case by case. So if it's
"let's use NOFS because everybody else does" or "he said I should not
use NOFS, then I'll use KERNEL", then it's wrong and I'll complain.

A short notice in the changelog or a comment above the allocation would
better signify that the patch author spent some time thinking about the
consequences.

Sometimes it can become pretty hard to find the potential deadlock
scenarios. Using GFP_NOFS in such a case is a matter of precaution, but
at least it would be nice for that to be explicitly stated somewhere.


Yes, it's hard to find such a deadlock, especially when lockdep will not
detect it.

And this makes the advantage of using stack memory in the v3 patch more
obvious.

I didn't realize the extra possible deadlock when memory pressure is
high, and to make completely correct use of GFP_ flags we should let the
caller choose its GFP_ flag, which would introduce more modifications
and more chances to cause problems.

So now I prefer the stack version a little more.

Thanks,
Qu



The hard cases help to understand the callchain patterns and it's easier
to detect them in the future. For example, in your patch I already knew
that it's a problem when I saw lock_extent_bits, because I had seen this
pattern in a patch doing allocation in FIEMAP. Commit
afce772e87c36c7f07f230a76d525025aaf09e41, discussion in thread
http://lkml.kernel.org/r/1465362783-27078-1-git-send-email-lufq.f...@cn.fujitsu.com







Re: [RFC PATCH v3.2 5/6] btrfs: qgroup: Introduce extent changeset for qgroup reserve functions

2017-05-18 Thread David Sterba
On Thu, May 18, 2017 at 08:24:26AM +0800, Qu Wenruo wrote:
> >> +static inline void extent_changeset_init(struct extent_changeset 
> >> *changeset)
> >> +{
> >> +  changeset->bytes_changed = 0;
> >> +  ulist_init(&changeset->range_changed);
> >> +}
> >> +
> >> +static inline struct extent_changeset *extent_changeset_alloc(void)
> >> +{
> >> +  struct extent_changeset *ret;
> >> +
> >> +  ret = kmalloc(sizeof(*ret), GFP_KERNEL);
> > 
> > I don't remember if we'd discussed this before, but have you evaluated
> > if GFP_KERNEL is ok to use in this context?
> 
> IIRC you have informed me that I shouldn't abuse GFP_NOFS.

Use of GFP_NOFS or _KERNEL has to be evaluated case by case. So if it's
"let's use NOFS because everybody else does" or "he said I should not
use NOFS, then I'll use KERNEL", then it's wrong and I'll complain.

A short notice in the changelog or a comment above the allocation would
better signify that the patch author spent some time thinking about the
consequences.

Sometimes it can become pretty hard to find the potential deadlock
scenarios. Using GFP_NOFS in such a case is a matter of precaution, but
at least it would be nice for that to be explicitly stated somewhere.

The hard cases help to understand the callchain patterns and it's easier
to detect them in the future. For example, in your patch I already knew
that it's a problem when I saw lock_extent_bits, because I had seen this
pattern in a patch doing allocation in FIEMAP. Commit
afce772e87c36c7f07f230a76d525025aaf09e41, discussion in thread
http://lkml.kernel.org/r/1465362783-27078-1-git-send-email-lufq.f...@cn.fujitsu.com


Re: [RFC PATCH v3.2 5/6] btrfs: qgroup: Introduce extent changeset for qgroup reserve functions

2017-05-17 Thread Qu Wenruo



At 05/17/2017 11:37 PM, David Sterba wrote:

On Wed, May 17, 2017 at 10:56:27AM +0800, Qu Wenruo wrote:

Introduce a new parameter, struct extent_changeset, for
btrfs_qgroup_reserve_data() and its callers.

Such an extent_changeset is used in btrfs_qgroup_reserve_data() to
record which ranges it reserved in the current reservation, so it can
free them in the error path.

The reason we need to export it to callers is that, in the buffered
write error path, without knowing exactly which ranges we reserved in
the current allocation, we could free space that was not reserved by us.

This will lead to qgroup reserved space underflow.

Reviewed-by: Chandan Rajendra 
Signed-off-by: Qu Wenruo 
---
  fs/btrfs/ctree.h   |  6 --
  fs/btrfs/extent-tree.c | 23 +--
  fs/btrfs/extent_io.h   | 34 +
  fs/btrfs/file.c| 12 +---
  fs/btrfs/inode-map.c   |  4 +++-
  fs/btrfs/inode.c   | 18 ++
  fs/btrfs/ioctl.c   |  5 -
  fs/btrfs/qgroup.c  | 51 --
  fs/btrfs/qgroup.h  |  3 ++-
  fs/btrfs/relocation.c  |  4 +++-
  10 files changed, 119 insertions(+), 41 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 1e82516fe2d8..52a0147cd612 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -2704,8 +2704,9 @@ enum btrfs_flush_state {
COMMIT_TRANS=   6,
  };
  
-int btrfs_check_data_free_space(struct inode *inode, u64 start, u64 len);

  int btrfs_alloc_data_chunk_ondemand(struct btrfs_inode *inode, u64 bytes);
+int btrfs_check_data_free_space(struct inode *inode,
+   struct extent_changeset **reserved, u64 start, u64 len);
  void btrfs_free_reserved_data_space(struct inode *inode, u64 start, u64 len);
  void btrfs_free_reserved_data_space_noquota(struct inode *inode, u64 start,
u64 len);
@@ -2723,7 +2724,8 @@ void btrfs_subvolume_release_metadata(struct 
btrfs_fs_info *fs_info,
  struct btrfs_block_rsv *rsv);
  int btrfs_delalloc_reserve_metadata(struct btrfs_inode *inode, u64 num_bytes);
  void btrfs_delalloc_release_metadata(struct btrfs_inode *inode, u64 
num_bytes);
-int btrfs_delalloc_reserve_space(struct inode *inode, u64 start, u64 len);
+int btrfs_delalloc_reserve_space(struct inode *inode,
+   struct extent_changeset **reserved, u64 start, u64 len);
  void btrfs_delalloc_release_space(struct inode *inode, u64 start, u64 len);
  void btrfs_init_block_rsv(struct btrfs_block_rsv *rsv, unsigned short type);
  struct btrfs_block_rsv *btrfs_alloc_block_rsv(struct btrfs_fs_info *fs_info,
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 4f62696131a6..ef09cc37f25f 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -3364,6 +3364,7 @@ static int cache_save_setup(struct 
btrfs_block_group_cache *block_group,
struct btrfs_fs_info *fs_info = block_group->fs_info;
struct btrfs_root *root = fs_info->tree_root;
struct inode *inode = NULL;
+   struct extent_changeset *data_reserved = NULL;
u64 alloc_hint = 0;
int dcs = BTRFS_DC_ERROR;
u64 num_pages = 0;
@@ -3483,7 +3484,7 @@ static int cache_save_setup(struct 
btrfs_block_group_cache *block_group,
num_pages *= 16;
num_pages *= PAGE_SIZE;
  
-	ret = btrfs_check_data_free_space(inode, 0, num_pages);

+   ret = btrfs_check_data_free_space(inode, &data_reserved, 0, num_pages);
if (ret)
goto out_put;
  
@@ -3514,6 +3515,7 @@ static int cache_save_setup(struct btrfs_block_group_cache *block_group,

block_group->disk_cache_state = dcs;
	spin_unlock(&block_group->lock);
  
+	extent_changeset_free(data_reserved);

return ret;
  }
  
@@ -4277,12 +4279,8 @@ int btrfs_alloc_data_chunk_ondemand(struct btrfs_inode *inode, u64 bytes)

return ret;
  }
  
-/*

- * New check_data_free_space() with ability for precious data reservation
- * Will replace old btrfs_check_data_free_space(), but for patch split,
- * add a new function first and then replace it.
- */
-int btrfs_check_data_free_space(struct inode *inode, u64 start, u64 len)
+int btrfs_check_data_free_space(struct inode *inode,
+   struct extent_changeset **reserved, u64 start, u64 len)
  {
struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
int ret;
@@ -4297,9 +4295,11 @@ int btrfs_check_data_free_space(struct inode *inode, u64 
start, u64 len)
return ret;
  
  	/* Use new btrfs_qgroup_reserve_data to reserve precious data space. */

-   ret = btrfs_qgroup_reserve_data(inode, start, len);
+   ret = btrfs_qgroup_reserve_data(inode, reserved, start, len);
if (ret < 0)
btrfs_free_reserved_data_space_noquota(inode, start, len);
+   else
+   ret = 0;

Re: [RFC PATCH v3.2 5/6] btrfs: qgroup: Introduce extent changeset for qgroup reserve functions

2017-05-17 Thread David Sterba
On Wed, May 17, 2017 at 10:56:27AM +0800, Qu Wenruo wrote:
> Introduce a new parameter, struct extent_changeset, for
> btrfs_qgroup_reserve_data() and its callers.
> 
> Such an extent_changeset is used in btrfs_qgroup_reserve_data() to
> record which ranges it reserved in the current reservation, so it can
> free them in the error path.
> 
> The reason we need to export it to callers is that, in the buffered
> write error path, without knowing exactly which ranges we reserved in
> the current allocation, we could free space that was not reserved by us.
> 
> This will lead to qgroup reserved space underflow.
> 
> Reviewed-by: Chandan Rajendra 
> Signed-off-by: Qu Wenruo 
> ---
>  fs/btrfs/ctree.h   |  6 --
>  fs/btrfs/extent-tree.c | 23 +--
>  fs/btrfs/extent_io.h   | 34 +
>  fs/btrfs/file.c| 12 +---
>  fs/btrfs/inode-map.c   |  4 +++-
>  fs/btrfs/inode.c   | 18 ++
>  fs/btrfs/ioctl.c   |  5 -
>  fs/btrfs/qgroup.c  | 51 
> --
>  fs/btrfs/qgroup.h  |  3 ++-
>  fs/btrfs/relocation.c  |  4 +++-
>  10 files changed, 119 insertions(+), 41 deletions(-)
> 
> diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
> index 1e82516fe2d8..52a0147cd612 100644
> --- a/fs/btrfs/ctree.h
> +++ b/fs/btrfs/ctree.h
> @@ -2704,8 +2704,9 @@ enum btrfs_flush_state {
>   COMMIT_TRANS=   6,
>  };
>  
> -int btrfs_check_data_free_space(struct inode *inode, u64 start, u64 len);
>  int btrfs_alloc_data_chunk_ondemand(struct btrfs_inode *inode, u64 bytes);
> +int btrfs_check_data_free_space(struct inode *inode,
> + struct extent_changeset **reserved, u64 start, u64 len);
>  void btrfs_free_reserved_data_space(struct inode *inode, u64 start, u64 len);
>  void btrfs_free_reserved_data_space_noquota(struct inode *inode, u64 start,
>   u64 len);
> @@ -2723,7 +2724,8 @@ void btrfs_subvolume_release_metadata(struct 
> btrfs_fs_info *fs_info,
> struct btrfs_block_rsv *rsv);
>  int btrfs_delalloc_reserve_metadata(struct btrfs_inode *inode, u64 
> num_bytes);
>  void btrfs_delalloc_release_metadata(struct btrfs_inode *inode, u64 
> num_bytes);
> -int btrfs_delalloc_reserve_space(struct inode *inode, u64 start, u64 len);
> +int btrfs_delalloc_reserve_space(struct inode *inode,
> + struct extent_changeset **reserved, u64 start, u64 len);
>  void btrfs_delalloc_release_space(struct inode *inode, u64 start, u64 len);
>  void btrfs_init_block_rsv(struct btrfs_block_rsv *rsv, unsigned short type);
>  struct btrfs_block_rsv *btrfs_alloc_block_rsv(struct btrfs_fs_info *fs_info,
> diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> index 4f62696131a6..ef09cc37f25f 100644
> --- a/fs/btrfs/extent-tree.c
> +++ b/fs/btrfs/extent-tree.c
> @@ -3364,6 +3364,7 @@ static int cache_save_setup(struct 
> btrfs_block_group_cache *block_group,
>   struct btrfs_fs_info *fs_info = block_group->fs_info;
>   struct btrfs_root *root = fs_info->tree_root;
>   struct inode *inode = NULL;
> + struct extent_changeset *data_reserved = NULL;
>   u64 alloc_hint = 0;
>   int dcs = BTRFS_DC_ERROR;
>   u64 num_pages = 0;
> @@ -3483,7 +3484,7 @@ static int cache_save_setup(struct 
> btrfs_block_group_cache *block_group,
>   num_pages *= 16;
>   num_pages *= PAGE_SIZE;
>  
> - ret = btrfs_check_data_free_space(inode, 0, num_pages);
> + ret = btrfs_check_data_free_space(inode, &data_reserved, 0, num_pages);
>   if (ret)
>   goto out_put;
>  
> @@ -3514,6 +3515,7 @@ static int cache_save_setup(struct 
> btrfs_block_group_cache *block_group,
>   block_group->disk_cache_state = dcs;
>   spin_unlock(&block_group->lock);
>  
> + extent_changeset_free(data_reserved);
>   return ret;
>  }
>  
> @@ -4277,12 +4279,8 @@ int btrfs_alloc_data_chunk_ondemand(struct btrfs_inode 
> *inode, u64 bytes)
>   return ret;
>  }
>  
> -/*
> - * New check_data_free_space() with ability for precious data reservation
> - * Will replace old btrfs_check_data_free_space(), but for patch split,
> - * add a new function first and then replace it.
> - */
> -int btrfs_check_data_free_space(struct inode *inode, u64 start, u64 len)
> +int btrfs_check_data_free_space(struct inode *inode,
> + struct extent_changeset **reserved, u64 start, u64 len)
>  {
>   struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
>   int ret;
> @@ -4297,9 +4295,11 @@ int btrfs_check_data_free_space(struct inode *inode, 
> u64 start, u64 len)
>   return ret;
>  
>   /* Use new btrfs_qgroup_reserve_data to reserve precious data space. */
> - ret = btrfs_qgroup_reserve_data(inode, start, len);
> + ret = btrfs_qgroup_reserve_data(inode, reserved, start, len);
>   if (ret < 0)
>   

[RFC PATCH v3.2 5/6] btrfs: qgroup: Introduce extent changeset for qgroup reserve functions

2017-05-16 Thread Qu Wenruo
Introduce a new parameter, struct extent_changeset, for
btrfs_qgroup_reserve_data() and its callers.

Such an extent_changeset is used in btrfs_qgroup_reserve_data() to
record which ranges it reserved in the current reservation, so it can
free them in the error path.

The reason we need to export it to callers is that, in the buffered
write error path, without knowing exactly which ranges we reserved in
the current allocation, we could free space that was not reserved by us.

This will lead to qgroup reserved space underflow.

Reviewed-by: Chandan Rajendra 
Signed-off-by: Qu Wenruo 
---
 fs/btrfs/ctree.h   |  6 --
 fs/btrfs/extent-tree.c | 23 +--
 fs/btrfs/extent_io.h   | 34 +
 fs/btrfs/file.c| 12 +---
 fs/btrfs/inode-map.c   |  4 +++-
 fs/btrfs/inode.c   | 18 ++
 fs/btrfs/ioctl.c   |  5 -
 fs/btrfs/qgroup.c  | 51 --
 fs/btrfs/qgroup.h  |  3 ++-
 fs/btrfs/relocation.c  |  4 +++-
 10 files changed, 119 insertions(+), 41 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 1e82516fe2d8..52a0147cd612 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -2704,8 +2704,9 @@ enum btrfs_flush_state {
COMMIT_TRANS=   6,
 };
 
-int btrfs_check_data_free_space(struct inode *inode, u64 start, u64 len);
 int btrfs_alloc_data_chunk_ondemand(struct btrfs_inode *inode, u64 bytes);
+int btrfs_check_data_free_space(struct inode *inode,
+   struct extent_changeset **reserved, u64 start, u64 len);
 void btrfs_free_reserved_data_space(struct inode *inode, u64 start, u64 len);
 void btrfs_free_reserved_data_space_noquota(struct inode *inode, u64 start,
u64 len);
@@ -2723,7 +2724,8 @@ void btrfs_subvolume_release_metadata(struct 
btrfs_fs_info *fs_info,
  struct btrfs_block_rsv *rsv);
 int btrfs_delalloc_reserve_metadata(struct btrfs_inode *inode, u64 num_bytes);
 void btrfs_delalloc_release_metadata(struct btrfs_inode *inode, u64 num_bytes);
-int btrfs_delalloc_reserve_space(struct inode *inode, u64 start, u64 len);
+int btrfs_delalloc_reserve_space(struct inode *inode,
+   struct extent_changeset **reserved, u64 start, u64 len);
 void btrfs_delalloc_release_space(struct inode *inode, u64 start, u64 len);
 void btrfs_init_block_rsv(struct btrfs_block_rsv *rsv, unsigned short type);
 struct btrfs_block_rsv *btrfs_alloc_block_rsv(struct btrfs_fs_info *fs_info,
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 4f62696131a6..ef09cc37f25f 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -3364,6 +3364,7 @@ static int cache_save_setup(struct 
btrfs_block_group_cache *block_group,
struct btrfs_fs_info *fs_info = block_group->fs_info;
struct btrfs_root *root = fs_info->tree_root;
struct inode *inode = NULL;
+   struct extent_changeset *data_reserved = NULL;
u64 alloc_hint = 0;
int dcs = BTRFS_DC_ERROR;
u64 num_pages = 0;
@@ -3483,7 +3484,7 @@ static int cache_save_setup(struct 
btrfs_block_group_cache *block_group,
num_pages *= 16;
num_pages *= PAGE_SIZE;
 
-   ret = btrfs_check_data_free_space(inode, 0, num_pages);
+   ret = btrfs_check_data_free_space(inode, &data_reserved, 0, num_pages);
if (ret)
goto out_put;
 
@@ -3514,6 +3515,7 @@ static int cache_save_setup(struct 
btrfs_block_group_cache *block_group,
block_group->disk_cache_state = dcs;
	spin_unlock(&block_group->lock);
 
+   extent_changeset_free(data_reserved);
return ret;
 }
 
@@ -4277,12 +4279,8 @@ int btrfs_alloc_data_chunk_ondemand(struct btrfs_inode 
*inode, u64 bytes)
return ret;
 }
 
-/*
- * New check_data_free_space() with ability for precious data reservation
- * Will replace old btrfs_check_data_free_space(), but for patch split,
- * add a new function first and then replace it.
- */
-int btrfs_check_data_free_space(struct inode *inode, u64 start, u64 len)
+int btrfs_check_data_free_space(struct inode *inode,
+   struct extent_changeset **reserved, u64 start, u64 len)
 {
struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
int ret;
@@ -4297,9 +4295,11 @@ int btrfs_check_data_free_space(struct inode *inode, u64 
start, u64 len)
return ret;
 
/* Use new btrfs_qgroup_reserve_data to reserve precious data space. */
-   ret = btrfs_qgroup_reserve_data(inode, start, len);
+   ret = btrfs_qgroup_reserve_data(inode, reserved, start, len);
if (ret < 0)
btrfs_free_reserved_data_space_noquota(inode, start, len);
+   else
+   ret = 0;
return ret;
 }
 
@@ -6123,6 +6123,8 @@ void btrfs_delalloc_release_metadata(struct btrfs_inode 
*inode, u64 num_bytes)
  * @inode: