On Sun, Oct 20, 2024 at 12:45:33PM +0100, Lorenzo Stoakes wrote:
> On Sat, Oct 19, 2024 at 05:00:37PM -0400, Kent Overstreet wrote:
> > A user with a 75 TB filesystem reported the following journal replay
> > error:
> > https://github.com/koverstreet/bcachefs/issues/769
> >
> > In journal replay we have to sort and dedup all the keys from the
> > journal, which means we need a large contiguous allocation. Given that
> > the user has 128GB of ram, the 2GB limit on allocation size has become
> > far too small.
> >
> > Cc: Vlastimil Babka <[email protected]>
> > Cc: Andrew Morton <[email protected]>
> > Signed-off-by: Kent Overstreet <[email protected]>
> > ---
> >  mm/util.c | 6 ------
> >  1 file changed, 6 deletions(-)
> >
> > diff --git a/mm/util.c b/mm/util.c
> > index 4f1275023eb7..c60df7723096 100644
> > --- a/mm/util.c
> > +++ b/mm/util.c
> > @@ -665,12 +665,6 @@ void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flags, int node)
> >     if (!gfpflags_allow_blocking(flags))
> >             return NULL;
> >
> > -   /* Don't even allow crazy sizes */
> > -   if (unlikely(size > INT_MAX)) {
> > -           WARN_ON_ONCE(!(flags & __GFP_NOWARN));
> > -           return NULL;
> > -   }
> > -
> 
> Err, and not replace it with _any_ limit? That seems very unwise.

large allocations will go to either the page allocator or vmalloc, and
they have their own limits.
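
(For reference, the shape of the path in question, heavily simplified; the
real __kvmalloc_node_noprof() adjusts the gfp flags and goes through
__vmalloc_node_range(), so treat this as a sketch rather than the actual
code:)

static void *kvmalloc_sketch(size_t size, gfp_t flags, int node)
{
	void *p;

	/*
	 * Try for a physically contiguous allocation first; don't retry
	 * hard or warn, since we have the vmalloc fallback.
	 */
	p = kmalloc_node(size, flags | __GFP_NOWARN | __GFP_NORETRY, node);
	if (p)
		return p;

	/* vmalloc can't be used from non-blocking contexts */
	if (!gfpflags_allow_blocking(flags))
		return NULL;

	/*
	 * With the INT_MAX check gone, the remaining cap is whatever
	 * vmalloc itself enforces (virtually contiguous, bounded by the
	 * vmalloc address space).
	 */
	return vmalloc_node(size, node);
}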

although I should have a look at that, and make sure we're not
triggering the > MAX_ORDER warning in the page allocator unnecessarily
when we could just call vmalloc().
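
(e.g. the earlier sketch with a size guard in front of the kmalloc
attempt; purely hypothetical, assuming KMALLOC_MAX_SIZE as the largest
size the slab/kmalloc path will ever satisfy:)

static void *kvmalloc_sketch_guarded(size_t size, gfp_t flags, int node)
{
	void *p;

	/*
	 * Hypothetical guard: don't bother the slab/page allocator with
	 * sizes it can never satisfy, so we don't depend on __GFP_NOWARN
	 * to keep the order > MAX_ORDER warning quiet.
	 */
	if (size <= KMALLOC_MAX_SIZE) {
		p = kmalloc_node(size, flags | __GFP_NOWARN | __GFP_NORETRY, node);
		if (p)
			return p;
	}

	if (!gfpflags_allow_blocking(flags))
		return NULL;

	return vmalloc_node(size, node);
}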
