On 2021/2/20 12:12 AM, Theodore Ts'o wrote:
On Fri, Feb 19, 2021 at 08:37:30AM +0800, Qu Wenruo wrote:
So it means the 32-bit archs are already second-tier targets, at least
for the upstream Linux kernel?

At least as far as btrfs is concerned, anyway....

I'm afraid that would be the case.

But I'm still interested in how other fses handle this problem.

Don't they rely on page::index to handle their metadata?
Or do all the other fses simply not support allocating/deleting their
AGs/BGs dynamically, so they can reject the fs at mount time?

Or do they limit their metadata page::index to stay within each AG/BG?
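
(For reference, the 16T figure discussed in this thread is just arithmetic:
on a 32-bit kernel page->index is an unsigned long, and with the usual 4 KiB
pages that caps a single address space at 2^32 * 4 KiB = 16 TiB.  A minimal
user-space illustration of the arithmetic, nothing btrfs-specific:

#include <stdio.h>
#include <stdint.h>

/*
 * Illustration only: the byte range a 32-bit page->index can cover in a
 * single address space, assuming the common 4 KiB page size.
 */
int main(void)
{
        uint64_t max_pages = (uint64_t)UINT32_MAX + 1;  /* 2^32 page indexes */
        uint64_t page_size = 4096;                      /* typical PAGE_SIZE */
        uint64_t limit = max_pages * page_size;

        printf("32-bit page index limit: %llu bytes (%llu TiB)\n",
               (unsigned long long)limit, (unsigned long long)(limit >> 40));
        return 0;
}
)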

Anyway, I'm afraid we have to reject the fs at both mount time and
runtime for now.
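
(A rough sketch of what such a rejection could look like.  This is purely
illustrative, not the actual btrfs change: the helper name, the message, and
the exact policy are made up; only the kernel macros and btrfs_err() are real.

#if BITS_PER_LONG == 32
/* Sketch only: highest logical offset a 32-bit page->index can reach. */
#define BTRFS_32BIT_PAGE_INDEX_LIMIT    (((u64)ULONG_MAX + 1) << PAGE_SHIFT)

/*
 * Hypothetical helper, called while reading the chunk tree at mount time
 * and again whenever a new chunk is allocated at runtime.
 */
static int btrfs_check_32bit_limit(struct btrfs_fs_info *fs_info,
                                   u64 chunk_start, u64 chunk_len)
{
        if (chunk_start + chunk_len > BTRFS_32BIT_PAGE_INDEX_LIMIT) {
                btrfs_err(fs_info,
                          "chunk beyond the 16T page cache limit on 32-bit systems");
                return -EOVERFLOW;
        }
        return 0;
}
#endif
)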


Or would it be possible to make the index u64 as an option?
So people who really want large file support can enable it, while most
other 32-bit users just keep the existing behavior?

I think if this is going to be done at all, it would need to be a
compile-time CONFIG option to make the index be 64-bits.  That's
because there are a huge number of low-end Android devices (retail
price ~$30 USD in India, for example --- this set of customers is
sometimes called "the next billion users" by some folks) that are
using 32-bit ARM systems.  And they will be using ext4 or f2fs, and it
would be massively unfortunate/unfair/etc. to impose that performance
penalty on them.
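
(To make the scale of that change concrete: at a minimum it would mean
something like the conditional typedef below, plus auditing every place that
stores or truncates a page index.  The CONFIG name is hypothetical; no such
option exists upstream.

/*
 * include/linux/types.h today defines pgoff_t as unsigned long.  A
 * hypothetical CONFIG_PAGE_INDEX_64BIT option would have to start
 * roughly like this, plus fixes everywhere a pgoff_t is stored in an
 * unsigned long or passed through the radix tree / XArray code:
 */
#ifdef CONFIG_PAGE_INDEX_64BIT
typedef u64 pgoff_t;
#else
typedef unsigned long pgoff_t;
#endif
)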

It sounds like what Willy is saying is that supporting a 64-bit page
index on 32-bit platforms is going to have a lot of downsides, and
not just the performance / memory overhead issue.  It's also a code
maintenance concern, and that tax would land on the mm developers.
And if it's not well-maintained, without regular testing, it's likely
to be heavily subject to bitrot.  (Although I suppose if we don't mind
doubling the number of configs that kernelci has to test, this could
be mitigated.)

In contrast, changing btrfs to not depend on a single address space
for all of its metadata might be a lot of work, but it's something
which lands on the btrfs developers, as opposed to another (perhaps
more central) kernel subsystem.  Managing this tradeoff is
something that is going to be between the mm developers and the btrfs
developers, but as someone who doesn't do any work on either of these
subsystems, it seems like a pretty obvious choice.
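
(For illustration only: "not depending on a single address space" could look
roughly like the "several metadata inodes" Qu mentions below, i.e. one
address_space per block group, so page->index only has to span one block
group rather than the whole logical address space.  None of these names
exist in btrfs; this is just a sketch of the idea.

/*
 * Hypothetical sketch, not btrfs code: a private metadata inode per
 * block group keeps page->index small because it only covers offsets
 * inside that block group (a few GiB at most).
 */
struct bg_meta_mapping {
        struct inode *meta_inode;       /* private inode, one per block group */
        u64 bg_start;                   /* logical start of the block group   */
};

static pgoff_t bg_meta_index(const struct bg_meta_mapping *m, u64 logical)
{
        /* Index is relative to the block group, so it fits in 32 bits. */
        return (logical - m->bg_start) >> PAGE_SHIFT;
}
)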

Yeah, I totally understand that.

And it doesn't look worthwhile (or even possible) to create several
metadata inodes (address spaces, to be more specific) just to support
32-bit systems.

The lack of test coverage would still be the same problem either way.

I don't see any active btrfs developer testing on 32-bit systems, not
even 32-bit ARM.

Even rejecting the fs is in fact more complex than it sounds, and may
not get enough testing after the initial submission.

The final observation I'll make is that if we know which NAS box
vendor can (properly) support volumes > 16 TB, we can probably find
the 64-bit page index patch.  It'll probably be against a fairly old
kernel, so it might not be all _that_ helpful, but it might give folks a
bit of a head start.

I can tell you that the NAS box vendor that it _isn't_ is Synology.
Synology boxes use btrfs, and on 32-bit processors, they have a 16TB
volume size limit, and this is enforced by the Synology NAS
software[1].  However, Synology NAS boxes can support multiple
volumes; until today, I never understood why, since it seemed to be
unnecessary complexity, but I suspect the real answer was this was how
Synology handled storage array sizes > 16TB on their older systems.
(All of their new NAS boxes use 64-bit processors.)

BTW, even for Synology, a 32-bit system can easily go beyond 16T in the
btrfs logical address space while the underlying fs is only 1T or even
smaller.

They only need to run routine balances, and eventually they will go
past that 16T limit.
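
(A toy model of why that happens.  This is not btrfs code; it just assumes,
matching what Qu describes, that every relocated chunk is allocated past the
current top of the logical address space and that freeing the old chunk never
lowers that top, so repeated balances keep pushing the top upward.

#include <stdio.h>
#include <stdint.h>

/*
 * Toy model, not btrfs code: repeated chunk relocations raise the top of
 * the logical address space past 16 TiB even though the filesystem stays
 * small, because freed chunks below the top are never reused.
 */
int main(void)
{
        const uint64_t chunk = 1ULL << 30;      /* 1 GiB chunks            */
        const uint64_t limit = 16ULL << 40;     /* 16 TiB page-index limit */
        uint64_t top = 1ULL << 40;              /* a ~1 TiB filesystem     */
        unsigned long relocations = 0;

        while (top < limit) {
                top += chunk;           /* new chunk lands at the top ...  */
                relocations++;          /* ... old one is freed below it   */
        }
        printf("logical address space tops 16 TiB after %lu relocations\n",
               relocations);
        return 0;
}
)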

Thanks,
Qu


[1] https://www.reddit.com/r/synology/comments/a62xrx/max_volume_size_of_16tb/

Cheers,

                                        - Ted
