On Thu, Feb 18, 2021 at 08:42:14PM +0800, Qu Wenruo wrote:
> On 2021/2/18 下午8:15, Matthew Wilcox wrote:
> > Yes, this is a known limitation.  Some vendors have gone to the trouble
> > of introducing a new page_index_t.  I'm not convinced this is a problem
> > worth solving.  There are very few 32-bit systems with this much storage
> > on a single partition (everything should work fine if you take a 20TB
> > drive and partition it into two 10TB partitions).
> What would happen if a user just tries to write 4K at file offset 16T
> for a sparse file?
> Would it be blocked by other checks before reaching the underlying fs?

/* Page cache limit. The filesystems should put that into their s_maxbytes
   limits, otherwise bad things can happen in VM. */
#if BITS_PER_LONG==32
#define MAX_LFS_FILESIZE        ((loff_t)ULONG_MAX << PAGE_SHIFT)
#elif BITS_PER_LONG==64
#define MAX_LFS_FILESIZE        ((loff_t)LLONG_MAX)
#endif

> This is especially true for btrfs, which has its internal address space
> (and it can be any aligned U64 value).
> Even 1T btrfs can have its metadata at its internal bytenr way larger
> than 1T. (although those ranges still need to be mapped inside the device).

Sounds like btrfs has a problem to fix.

> And considering the reporter is already using 32bit with 10T+ storage, I
> doubt it's really not worth solving.
> BTW, what would be the extra cost by converting page::index to u64?
> I know tons of printk() would cause warning, but most 64bit systems
> should not be affected anyway.

No effect for 64-bit systems, other than the churn.

For 32-bit systems, it'd have some pretty horrible overhead.  You don't
just have to touch the page cache, you have to convert the XArray.
It's doable (I mean, it's been done), but it's very costly for all the
32-bit systems which don't use a humongous filesystem.  And we could
minimise that overhead with a typedef, but then the source code gets
harder to work with.