Quoting Chris Murphy <li...@colorremedies.com>:

> On Thu, Sep 12, 2019 at 3:34 PM General Zed <general-...@zedlx.com> wrote:
>>
>> Quoting Chris Murphy <li...@colorremedies.com>:

>> > On Thu, Sep 12, 2019 at 1:18 PM <webmas...@zedlx.com> wrote:
>> >>
>> >> It is normal and common for a defrag operation to use some disk space
>> >> while it is running. I estimate that a reasonable limit would be to
>> >> use up to 1% of total partition size. So, if a partition size is 100
>> >> GB, the defrag can use 1 GB. Let's call this "defrag operation space".
>> >
>> > In the simplest case of a file with no shared extents, the minimum free
>> > space should be set to the potential maximum rewrite of the file, i.e.
>> > 100% of the file size. Since Btrfs is COW, the entire operation must
>> > succeed or fail, with no possibility of an ambiguous in-between state, and
>> > this does apply to defragment.
>> >
>> > So if you're defragging a 10GiB file, you need 10GiB minimum free
>> > space to COW those extents to a new, mostly contiguous, set of extents,

>> False.
>>
>> You can defragment just 1 GB of that file, and then just write out to
>> disk (in new extents) an entire new version of the b-trees.
>> Of course, you don't really need to do all that, as usually only a
>> small part of the b-trees needs to be updated.

> The `-l` option allows the user to choose a maximum amount to
> defragment. Setting up a default defragment behavior that has a
> variable outcome is not idempotent and probably not a good idea.

We are talking about a future, imagined defrag. It has no -l option, as we haven't discussed one yet.
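
For reference, length-limited defrag already exists at the ioctl level: if I read btrfs-progs right, the -l option simply fills in the len field of the BTRFS_IOC_DEFRAG_RANGE ioctl. Below is a minimal sketch of defragmenting a file in portions through that existing interface; the 1 GiB portion size is just the figure used in this thread, and error handling is trimmed.

/*
 * Sketch only: defragment a file in fixed-size portions via the
 * existing BTRFS_IOC_DEFRAG_RANGE ioctl.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <linux/btrfs.h>

int main(int argc, char **argv)
{
    struct stat st;
    int fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    fd = open(argv[1], O_RDWR);
    if (fd < 0 || fstat(fd, &st) < 0) {
        perror(argv[1]);
        return 1;
    }

    const __u64 portion = 1ULL << 30;          /* 1 GiB per pass */

    for (__u64 off = 0; off < (__u64)st.st_size; off += portion) {
        struct btrfs_ioctl_defrag_range_args range;
        __u64 end = off + portion;

        if (end > (__u64)st.st_size)
            end = (__u64)st.st_size;

        memset(&range, 0, sizeof(range));
        range.start = off;
        range.len = end - off;                     /* the amount -l limits */
        range.flags = BTRFS_DEFRAG_RANGE_START_IO; /* start writeback per pass */

        if (ioctl(fd, BTRFS_IOC_DEFRAG_RANGE, &range) < 0) {
            perror("BTRFS_IOC_DEFRAG_RANGE");
            break;
        }
        printf("defragmented bytes %llu..%llu\n",
               (unsigned long long)off, (unsigned long long)end);
    }
    close(fd);
    return 0;
}

This only shows today's interface; the defrag being discussed here is a separate, imagined design.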

> As for kernel behavior, it presumably could defragment in portions,
> but it would have to completely update all affected metadata after
> each e.g. 1GiB section, translating into 10 separate rewrites of file
> metadata, all affected nodes, all the way up the tree to the super.
> There is no such thing as metadata overwrites in Btrfs. You're
> familiar with the wandering trees problem?

No, but it doesn't matter.

At worst, it just has to completely write out "all metadata", all the way up to the super. That needs to be done only once, because what's the point of writing it 10 times over? Then the super is updated as the final commit.

On my computer the ENTIRE METADATA is 1 GB. That would be very tolerable and doable.
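
That figure is easy to check: it is what "btrfs filesystem df" reports for metadata, or, programmatically, what the BTRFS_IOC_SPACE_INFO ioctl returns. A rough sketch, error handling trimmed:

/*
 * Sketch only: print this filesystem's metadata space usage, i.e. the
 * same numbers "btrfs filesystem df" shows, via BTRFS_IOC_SPACE_INFO.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/btrfs.h>

#ifndef BTRFS_BLOCK_GROUP_METADATA  /* normally in linux/btrfs_tree.h */
#define BTRFS_BLOCK_GROUP_METADATA (1ULL << 2)
#endif

int main(int argc, char **argv)
{
    struct btrfs_ioctl_space_args probe, *args;
    int fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <btrfs mount point>\n", argv[0]);
        return 1;
    }
    fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror(argv[1]);
        return 1;
    }

    /* First call with zero slots only asks how many space infos exist. */
    memset(&probe, 0, sizeof(probe));
    if (ioctl(fd, BTRFS_IOC_SPACE_INFO, &probe) < 0) {
        perror("BTRFS_IOC_SPACE_INFO");
        return 1;
    }

    args = calloc(1, sizeof(*args) +
                     probe.total_spaces * sizeof(struct btrfs_ioctl_space_info));
    if (!args)
        return 1;
    args->space_slots = probe.total_spaces;

    /* Second call fills in one entry per reported space (data, metadata, system). */
    if (ioctl(fd, BTRFS_IOC_SPACE_INFO, args) < 0) {
        perror("BTRFS_IOC_SPACE_INFO");
        return 1;
    }

    for (__u64 i = 0; i < args->total_spaces; i++) {
        if (args->spaces[i].flags & BTRFS_BLOCK_GROUP_METADATA)
            printf("metadata: total %llu MiB, used %llu MiB\n",
                   (unsigned long long)(args->spaces[i].total_bytes >> 20),
                   (unsigned long long)(args->spaces[i].used_bytes >> 20));
    }
    free(args);
    close(fd);
    return 0;
}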

But writing out all the metadata is the worst case; usually not much metadata has to be updated or written out to disk.

So, there is no problem.

