On 2021/2/20 12:28 PM, Erik Jensen wrote:
On Fri, Feb 19, 2021 at 7:16 PM Qu Wenruo <quwenruo.bt...@gmx.com> wrote:
On 2021/2/20 10:47 AM, Erik Jensen wrote:
Given that it sounds like the issue is the metadata address space, and
given that I surely don't actually have 16TiB of metadata on a 24TiB
file system (indeed, Metadata, RAID1: total=30.00GiB, used=28.91GiB),
is there any way I could compact the metadata offsets into the lower
16TiB of the virtual metadata inode? Perhaps that could be something
balance could be taught to do? (Obviously, the initial run of such a
balance would have to be performed using a 64-bit system.)

Unfortunately, no.

Btrfs relies on ever-increasing bytenr in the logical address space for
things like balance, so we can't relocate chunks to a smaller bytenr.
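
(As a rough illustration of the limit being discussed: metadata pages are
cached through a virtual btree inode whose page cache index is derived from
the logical bytenr, and on a 32-bit kernel that index is only 32 bits wide,
so 2^32 * 4KiB = 16TiB is the most it can address. The user-space model
below is a simplification, not actual kernel code.)

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                   /* 4KiB pages */
typedef uint32_t pgoff32_t;             /* pgoff_t (unsigned long) on a 32-bit kernel */

int main(void)
{
        uint64_t bytenr = 17ULL << 40;              /* a metadata bytenr past 16TiB */
        pgoff32_t index = bytenr >> PAGE_SHIFT;     /* truncates to 32 bits... */
        uint64_t aliased = (uint64_t)index << PAGE_SHIFT;

        /* ...so the page index aliases one already used for a lower bytenr */
        printf("bytenr %llu -> page index %u -> aliases bytenr %llu\n",
               (unsigned long long)bytenr, index,
               (unsigned long long)aliased);
        return 0;
}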

That's… unfortunate. How much relies on the assumption that bytenr is monotonic?

IIRC mostly balance itself.


Brainstorming some ideas, is compacting the address space something
that could be done offline? E.g., maybe some two-pass process: first
something balance-like that bumps all of the metadata up to a compact
region of address space, starting at a new 16TiB boundary, and then a
follow-up pass that just strips the top bits off?

We need btrfs-progs support for off-line balancing.

I used to have this idea, but saw very limited use for it.

This would be the safest bet, but it needs a lot of work, even if that
work is all in user space.
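
(Just to make the address arithmetic of that two-pass idea concrete, here
is a toy user-space model with made-up numbers. The relocation and rewrite
steps themselves are purely hypothetical; only the bytenr math is shown.)

#include <stdint.h>
#include <stdio.h>

#define SZ_16T (16ULL << 40)

int main(void)
{
        /* pretend these are the current metadata chunk bytenrs */
        uint64_t chunks[] = { 20ULL << 40, 21ULL << 40, 37ULL << 40 };
        uint64_t base = 48ULL << 40;    /* pass 1 target: a fresh 16TiB boundary */
        uint64_t next = base;

        for (int i = 0; i < 3; i++) {
                uint64_t pass1 = next;              /* relocated upward, bytenr still grows */
                uint64_t pass2 = pass1 % SZ_16T;    /* pass 2: top bits stripped offline */

                printf("chunk at %llu -> pass 1: %llu -> pass 2: %llu\n",
                       (unsigned long long)chunks[i],
                       (unsigned long long)pass1,
                       (unsigned long long)pass2);
                next += 1ULL << 30;                 /* assume 1GiB metadata chunks */
        }
        return 0;
}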


Or maybe once all of the bytenrs are brought within 16TiB of each
other by balance, btrfs could just keep track of an offset that needs
to be applied when mapping page cache indexes?

But further balance/new chunk allocation can still go beyond the limit.

This is the biggest problem, and one that other filesystems don't need to
deal with: we can dynamically allocate chunks while they can't.
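
(To spell out why a single fixed offset isn't enough, here is a toy model
of that idea with made-up numbers: the mapping works only while every
metadata bytenr stays within one 16TiB window above the offset, and any
chunk allocated beyond that window overflows the 32-bit index again.)

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define SZ_16T (16ULL << 40)

/* hypothetical: map a metadata bytenr to a 32-bit page index via a fixed offset */
static int map_index(uint64_t bytenr, uint64_t offset, uint32_t *index)
{
        if (bytenr < offset || bytenr - offset >= SZ_16T)
                return -1;                      /* outside the one mappable window */
        *index = (uint32_t)((bytenr - offset) >> PAGE_SHIFT);
        return 0;
}

int main(void)
{
        uint64_t offset = 20ULL << 40;          /* lowest metadata bytenr after balance */
        uint32_t idx;

        if (!map_index(21ULL << 40, offset, &idx))
                printf("bytenr at 21TiB maps fine, index %u\n", idx);
        if (map_index(40ULL << 40, offset, &idx))
                printf("a new chunk at 40TiB cannot be mapped any more\n");
        return 0;
}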


Or maybe btrfs could use multiple virtual inodes on 32-bit systems,
one for each 16TiB block of address space with metadata in it? If this
were to ever grow to need more than a handful of virtual inodes, it
seems like a balance *would* actually help in this case by compacting
the metadata higher in the address space, allowing the virtual inodes
for lower in the address space to be dropped.

This may be a good idea.

But the problem of test coverage is always there.

We could spend tons of lines on it, but in the end it would not really be
well tested, as it's really hard to get real test coverage for such a
32-bit-only code path.
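
(For what it's worth, the lookup side of that multi-inode idea is simple to
model in user space: the high bits of the bytenr pick the virtual inode,
the low 44 bits become the page index inside it. A hypothetical sketch;
nothing like this exists in the code today.)

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT   12
#define WINDOW_SHIFT 44                         /* 16TiB of bytenr per virtual inode */

int main(void)
{
        uint64_t bytenrs[] = { 1ULL << 40, 20ULL << 40, 37ULL << 40 };

        for (int i = 0; i < 3; i++) {
                uint64_t b = bytenrs[i];
                unsigned int vinode = b >> WINDOW_SHIFT;
                uint32_t index = (uint32_t)((b & ((1ULL << WINDOW_SHIFT) - 1))
                                            >> PAGE_SHIFT);

                printf("bytenr %llu -> virtual inode %u, page index %u\n",
                       (unsigned long long)b, vinode, index);
        }
        return 0;
}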

Or maybe btrfs could just not use the page cache for the metadata
inode once the offset exceeds 16TiB, and only cache at the block
layer? This would surely hurt performance, but at least the filesystem
could still be accessed.

I don't believe it's really possible, unless we completely override the
XArray infrastructure provided by MM and implement a btrfs-only structure.

That's too costly.


Given that this issue appears to be not due to the size of the
filesystem, but merely how much I've used it, having the only solution
be to copy all of the data off, reformat the drives, and then restore
every time filesystem usage exceeds a certain threshold is… not very
satisfying.

Yeah, definitely not a good experience.


Finally, I've never done kernel dev before, but I do have some C
experience, so if there is a solution that falls into the category of
seeming reasonable, likely to be accepted if implemented, but being
unlikely to get implemented given the low priority of supporting
32-bit systems, let me know and maybe I can carve out some time to
give it a try.

BTW, if you want things like a 64K page size while still keeping the 4K
sector size of your existing btrfs, then I guess you may be interested
in the recent subpage support, which allows btrfs to mount a 4K-sector-size
fs with a 64K page size.

Unfortunately it's still WIP, but it may fit your use case, as ARM supports
multiple page sizes (4K, 16K, 64K).
(Although we are only going to support 64K pages for now.)

Thanks,
Qu
