A user with a 75 TB filesystem reported the following journal replay
error:
https://github.com/koverstreet/bcachefs/issues/769

In journal replay we have to sort and dedup all the keys from the
journal, which means we need a single large contiguous allocation. Given
that the user has 128GB of RAM, the INT_MAX (~2GB) limit on kvmalloc()
allocation size has become far too small.
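
For illustration (not part of the patch), a minimal sketch of the
allocation pattern that runs into the removed check; the struct and
helper below are hypothetical stand-ins for the bcachefs journal keys
buffer:

	#include <linux/types.h>
	#include <linux/slab.h>

	struct replay_key {		/* hypothetical key record */
		u64	seq;
		u64	offset;
	};

	static struct replay_key *alloc_replay_buf(size_t nr_keys)
	{
		/*
		 * One contiguous buffer holding every key from the
		 * journal. Before this patch, kvmalloc_array() returned
		 * NULL for any total size above INT_MAX (~2GB), even
		 * though vmalloc could have backed the request.
		 */
		return kvmalloc_array(nr_keys, sizeof(struct replay_key),
				      GFP_KERNEL);
	}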

Cc: Vlastimil Babka <[email protected]>
Cc: Andrew Morton <[email protected]>
Signed-off-by: Kent Overstreet <[email protected]>
---
 mm/util.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/mm/util.c b/mm/util.c
index 4f1275023eb7..c60df7723096 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -665,12 +665,6 @@ void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flags, int node)
        if (!gfpflags_allow_blocking(flags))
                return NULL;
 
-       /* Don't even allow crazy sizes */
-       if (unlikely(size > INT_MAX)) {
-               WARN_ON_ONCE(!(flags & __GFP_NOWARN));
-               return NULL;
-       }
-
        /*
         * kvmalloc() can always use VM_ALLOW_HUGE_VMAP,
         * since the callers already cannot assume anything
-- 
2.45.2
