On Tue, Mar 14, 2023 at 11:36:29PM +0200, Παύλος Γκέσος wrote:
> Package: btrfs-progs
> Version: 5.10.1-2 armhf
> 
> When I try to delete a previously created btrfs subvolume I get this:
> ERROR: Could not statfs: Value too large for defined data type
> 
> The same when I try to make a btrfs snapshot.

Hi, there are known issues with large filesystems on 32-bit -- and not
just with btrfs, for that matter; btrfs is merely more likely to be
affected because of its native multi-device support and two layers of
addressing.
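
For reference, "Value too large for defined data type" is
strerror(EOVERFLOW), which statfs(2) documents as the error returned when
some values are too large to be represented in the structure it fills in.
A minimal sketch of the failing call (my own, not lifted from btrfs-progs;
the default mount point is just a placeholder):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/vfs.h>

int main(int argc, char **argv)
{
    /* Pass your mount point as argv[1]; "/mnt" is only a placeholder. */
    const char *path = argc > 1 ? argv[1] : "/mnt";
    struct statfs sfs;

    if (statfs(path, &sfs) < 0) {
        /* On an affected setup this prints the same EOVERFLOW message. */
        fprintf(stderr, "statfs %s: %s\n", path, strerror(errno));
        return 1;
    }
    printf("f_blocks=%llu f_bsize=%ld\n",
           (unsigned long long)sfs.f_blocks, (long)sfs.f_bsize);
    return 0;
}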

Thus, a few questions:
* how big is the filesystem?
* does it consist of multiple devices?  (a sketch for checking both of
  these follows below)
* has it been rebalanced or converted to a different redundancy profile?
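
That sketch -- again my own, and possibly needing root -- asks the
filesystem itself how many devices it spans and sums their sizes, using
the BTRFS_IOC_FS_INFO and BTRFS_IOC_DEV_INFO ioctls from <linux/btrfs.h>:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/mnt";  /* placeholder */
    struct btrfs_ioctl_fs_info_args fi;
    unsigned long long sum = 0;
    __u64 devid;
    int fd;

    fd = open(path, O_RDONLY);
    if (fd < 0) {
        perror(path);
        return 1;
    }
    memset(&fi, 0, sizeof(fi));
    if (ioctl(fd, BTRFS_IOC_FS_INFO, &fi) < 0) {
        perror("BTRFS_IOC_FS_INFO");
        return 1;
    }
    printf("devices: %llu\n", (unsigned long long)fi.num_devices);

    /* Device ids can be sparse, so probe 1..max_id and skip the holes. */
    for (devid = 1; devid <= fi.max_id; devid++) {
        struct btrfs_ioctl_dev_info_args di;

        memset(&di, 0, sizeof(di));
        di.devid = devid;
        if (ioctl(fd, BTRFS_IOC_DEV_INFO, &di) < 0) {
            if (errno == ENODEV)
                continue;
            perror("BTRFS_IOC_DEV_INFO");
            return 1;
        }
        printf("devid %llu: %llu bytes  %s\n",
               (unsigned long long)di.devid,
               (unsigned long long)di.total_bytes,
               (const char *)di.path);
        sum += di.total_bytes;
    }
    printf("sum of member device sizes: %llu bytes\n", sum);
    return 0;
}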

If the sum of all parts that are (or ever have been) included in the
filesystem approaches 8TB, this would be the cause.  In addition, any
address space that was allocated in the past but has since been
balanced/converted away is lost -- virtual offsets only ever go up.  This
is not a concern on 64-bit, where you can't possibly use them up, but on
32-bit the limit can be exceeded even with a single large disk.
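
To make the "offsets only go up" point concrete, a toy back-of-the-envelope
(all numbers invented for illustration, not measured anywhere): a single
5TB filesystem whose chunks have been fully rewritten once by a balance has
already pushed its logical address space past the ceiling, even though only
5TB is ever referenced at a time.

#include <stdio.h>

int main(void)
{
    /* Toy numbers, purely illustrative: a 5TB filesystem fully
     * rewritten once by a balance.  Freed logical ranges are never
     * reused, so the high-water mark only climbs. */
    unsigned long long tb = 1000ULL * 1000 * 1000 * 1000;
    unsigned long long fs_size = 5 * tb;
    unsigned long long high_water = fs_size    /* initial allocation */
                                  + fs_size;   /* one full balance   */

    printf("logical address space consumed: ~%llu TB\n", high_water / tb);
    printf("vs. the ~8TB 32-bit ceiling: %s\n",
           high_water > 8 * tb ? "exceeded" : "still ok");
    return 0;
}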

Other filesystems also suffer from this limit, although with MD the
threshold at least applies to the available size rather than the raw
device size, which allows redundant RAID as long as the resulting array
stays below[1] 8TB.  (E.g. two "8TB" disks in MD RAID1 present well under
8TB of available space, while the same pair in btrfs raid1 counts as
roughly twice that towards the limit.)

Shedding the limit would require changing many parts of the kernel, and
there is currently no intention of ever doing that.  Thus, I'm afraid you
need to either use a smaller filesystem or a 64-bit kernel (which your CPU
doesn't support).

There's little support for 32-bit in general; it's in maintenance mode
these days...


Did I assume correctly that you ran into the limit?  If not, please say so.
Otherwise, all we can do is improve the error messages.


[1]. Because disk manufacturers cheat on the definition of "terabyte",
an "8TB" disk has less than 8 * 1099511627776 (= 8796093022208) bytes.
-- 
⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ Q: Is it ok to combine wired, wifi, and/or bluetooth connections
⢿⡄⠘⠷⠚⠋⠀    in wearable computing?
⠈⠳⣄⠀⠀⠀⠀ A: No, that would be mixed fabric, which Lev19:19 forbids.
