On Tue, Mar 30, 2021 at 10:57:04AM +0800, Su Yue wrote:
> 
> On Tue 23 Mar 2021 at 16:14, Ming Lei <ming....@redhat.com> wrote:
> 
> > On some ARCHs, such as aarch64, the page size may be 64K, and there may
> > be many CPU cores. relay_open() needs to allocate pages on each CPU for
> > blktrace, so too many pages are easily taken by blktrace. For example,
> > on one ARM64 server (224 CPU cores, 16G RAM), blktrace ended up
> > allocating 7GB with 'blktrace -b 8192', which is used by the
> > device-mapper test suite[1]. This can easily cause OOM.
> > 
> > Fix the issue by limiting the max allowed pages to 1/8 of
> > totalram_pages().
> > 
> > [1] https://github.com/jthornber/device-mapper-test-suite.git
> > 
> > Signed-off-by: Ming Lei <ming....@redhat.com>
> > ---
> >  kernel/trace/blktrace.c | 32 ++++++++++++++++++++++++++++++++
> >  1 file changed, 32 insertions(+)
> > 
> > diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
> > index c221e4c3f625..8403ff19d533 100644
> > --- a/kernel/trace/blktrace.c
> > +++ b/kernel/trace/blktrace.c
> > @@ -466,6 +466,35 @@ static void blk_trace_setup_lba(struct blk_trace *bt,
> >     }
> >  }
> > 
> > +/* limit the total allocated buffer size to <= 1/8 of total pages */
> > +static void validate_and_adjust_buf(struct blk_user_trace_setup *buts)
> > +{
> > +   unsigned buf_size = buts->buf_size;
> > +   unsigned buf_nr = buts->buf_nr;
> > +   unsigned long max_allowed_pages = totalram_pages() >> 3;
> > +   unsigned long req_pages = PAGE_ALIGN(buf_size * buf_nr) >> PAGE_SHIFT;
> > +
> > +   if (req_pages * num_online_cpus() <= max_allowed_pages)
> > +           return;
> > +
> > +   req_pages = DIV_ROUND_UP(max_allowed_pages, num_online_cpus());
> > +
> > +   if (req_pages == 0) {
> > +           buf_size = PAGE_SIZE;
> > +           buf_nr = 1;
> > +   } else {
> > +           buf_size = req_pages << PAGE_SHIFT / buf_nr;
> > 
> Should it be:
> buf_size = (req_pages << PAGE_SHIFT) / buf_nr;
> ?
> The precedence of '<<' is lower than that of '/', right? :)

Good catch, thanks!
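
Yes, '<<' binds looser than '/' in C, so the unparenthesized form shifts by
(PAGE_SHIFT / buf_nr) instead of dividing the byte count. A quick standalone
demo of the difference (not part of the patch; the values below are made up
purely for illustration):

#include <stdio.h>

#define DEMO_PAGE_SHIFT 16	/* 64K pages, as on the aarch64 box above */

int main(void)
{
	unsigned long req_pages = 128;
	unsigned int buf_nr = 4;

	/* parses as req_pages << (DEMO_PAGE_SHIFT / buf_nr) */
	unsigned long wrong = req_pages << DEMO_PAGE_SHIFT / buf_nr;

	/* intended: total bytes for req_pages, split across buf_nr buffers */
	unsigned long right = (req_pages << DEMO_PAGE_SHIFT) / buf_nr;

	printf("wrong = %lu, right = %lu\n", wrong, right);
	return 0;
}

With these values the unparenthesized form yields 2048 while the intended
expression yields 2097152, so each buffer would end up far too small.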

-- 
Ming
