On Wed, Mar 17, 2021 at 3:31 PM Johannes Thumshirn
<johannes.thumsh...@wdc.com> wrote:
>
> On 17/03/2021 16:26, Filipe Manana wrote:
> >> +void btrfs_reclaim_bgs(struct btrfs_fs_info *fs_info)
> >> +{
> >> +       struct btrfs_block_group *bg;
> >> +       struct btrfs_space_info *space_info;
> >> +       int ret = 0;
> >> +
> >> +       if (!test_bit(BTRFS_FS_OPEN, &fs_info->flags))
> >> +               return;
> >> +
> >> +       if (!btrfs_exclop_start(fs_info, BTRFS_EXCLOP_BALANCE))
> >> +               return;
> >> +
> >> +       mutex_lock(&fs_info->delete_unused_bgs_mutex);
> >> +       mutex_lock(&fs_info->reclaim_bgs_lock);
> >
> > Could we just use delete_unused_bgs_mutex? I think you are using it
> > only because btrfs_relocate_chunk() asserts it's being held.
> >
> > Just renaming delete_unused_bgs_mutex to a more generic name like
> > reclaim_bgs_lock, and just use that should work.
>
> Do I understand you correctly, that btrfs_delete_unused_bgs and
> btrfs_reclaim_bgs should use the same mutex? I was thinking about that
> but then didn't want to have one mutex protecting two lists.

Correct, that's what I meant.

Since btrfs_delete_unused_bgs() and btrfs_reclaim_bgs() can never run
in parallel, and since they are already locking
delete_unused_bgs_mutex,
I don't see a reason not to do it.

The other cases that take the mutex, adding block groups to one of
the lists, are fast and relatively rare, especially for empty block
groups.

>
> >
> >> +       while (!list_empty(&fs_info->reclaim_bgs)) {
> >> +               bg = list_first_entry(&fs_info->reclaim_bgs,
> >> +                                     struct btrfs_block_group,
> >> +                                     bg_list);
> >> +               list_del_init(&bg->bg_list);
> >> +
> >> +               space_info = bg->space_info;
> >> +               mutex_unlock(&fs_info->reclaim_bgs_lock);
> >> +
> >> +               down_write(&space_info->groups_sem);
> >
> > Having a comment on why we lock groups_sem would be good to have, just
> > like at btrfs_delete_unused_bgs().
> >
>
> OK
>
> >> +
> >> +               spin_lock(&bg->lock);
> >> +               if (bg->reserved || bg->pinned || bg->ro ||
> >> +                   list_is_singular(&bg->list)) {
> >> +                       /*
> >> +                        * We want to bail if we made new allocations or have
> >> +                        * outstanding allocations in this block group.  We do
> >> +                        * the ro check in case balance is currently acting on
> >> +                        * this block group.
> >
> > Why is the list_is_singular() check there?
> >
> > This was copied from the empty block group removal case -
> > btrfs_delete_unused_bgs(), but here I don't see why it's needed, since
> > we relocate rather than remove block groups.
> > The list_is_singular() from btrfs_delete_unused_bgs() is there to
> > prevent losing the raid profile for data when the last data block
> > group is removed (which implies not using space cache v1).
> >
> > Other than that, overall it looks good.
>
> Ah OK, yes this was a copy.
>
>
> >> +               /*
> >> +                * Reclaim block groups in the reclaim_bgs list after we deleted
> >> +                * all unused block_groups. This possibly gives us some more free
> >> +                * space.
> >> +                */
> >> +               btrfs_reclaim_bgs(fs_info);
> >
> > Now with the cleaner kthread doing relocation, which can be a very
> > slow operation, we end up delaying iputs and other work delegated to
> > it for a very long time.
> > Not saying it needs to be fixed right away, perhaps it's not that bad,
> > I'm just not sure at the moment. We already do subvolume deletion in
> > this kthread, which is also a slow operation.
>
> Noted.



-- 
Filipe David Manana,

“Whether you think you can, or you think you can't — you're right.”
