Hi,

I have created a ticket for this issue:
https://bugs.dpdk.org/show_bug.cgi?id=386

Regards,
Siddarth

On Thu, Jan 30, 2020 at 6:45 PM Meunier, Julien (Nokia - FR/Paris-Saclay) <
julien.meun...@nokia.com> wrote:

> Hi,
>
> I have also noticed this behavior since DPDK 18.05. As David said, it is
> related to the virtual address space management in DPDK.
> Please check commit 66cc45e293ed ("mem: replace memseg with memseg
> lists"), which introduced this new memory management.
>
> If you use mlockall in your application, all virtual space is locked, and
> if you look at PageTables in /proc/meminfo, you will see huge memory usage
> on the kernel side.
> I am not an expert on the memory management topic, especially in the
> kernel, but what I observed is that mlockall also locks unused virtual
> memory space.
>
> For testpmd, you can pass the --no-mlockall flag on the command line.
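> For example (the EAL options here are only illustrative; --no-mlockall is
> a testpmd application option and goes after the "--" separator):
>
>        ./testpmd -l 0-1 -n 4 -- -i --no-mlockall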
>
> For your application, you can use the MCL_ONFAULT flag (kernel >= 4.4).
> man mlockall::
>
>        Mark all current (with MCL_CURRENT) or future (with MCL_FUTURE)
>        mappings to lock pages when they are faulted in. When used with
>        MCL_CURRENT, all present pages are locked, but mlockall() will not
>        fault in non-present pages. When used with MCL_FUTURE, all future
>        mappings will be marked to lock pages when they are faulted in, but
>        they will not be populated by the lock when the mapping is created.
>        MCL_ONFAULT must be used with either MCL_CURRENT or MCL_FUTURE or
>        both.
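>
> A minimal sketch of how an application could set this up (this is only an
> illustration for Linux, not code taken from DPDK; it assumes a libc whose
> headers already define MCL_ONFAULT):
>
>        #include <stdio.h>
>        #include <sys/mman.h>
>
>        int main(void)
>        {
>                /*
>                 * Lock current and future mappings, but only once a page
>                 * is actually faulted in, so reserved-but-unused virtual
>                 * space is not populated.
>                 */
>                if (mlockall(MCL_CURRENT | MCL_FUTURE | MCL_ONFAULT) != 0) {
>                        perror("mlockall");
>                        return 1;
>                }
>
>                /* ... rest of the application (e.g. rte_eal_init()) ... */
>                return 0;
>        }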
>
> These options will not reduce the VSZ, but at least they will not fault in
> unused memory.
> Otherwise, you need to customize your DPDK .config in order to set
> RTE_MAX_MEM_MB and related parameters for your specific application.
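> As an illustration only, these are the relevant options in the make-based
> config (config/common_base); the values below are hypothetical and must be
> sized for your own application's memory needs:
>
>        CONFIG_RTE_MAX_MEM_MB=16384
>        CONFIG_RTE_MAX_MEM_MB_PER_LIST=2048
>        CONFIG_RTE_MAX_MEMSEG_LISTS=8
>        CONFIG_RTE_MAX_MEMSEG_PER_LIST=1024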
>
> ---
> Julien Meunier
>
> > -----Original Message-----
> > From: dev <dev-boun...@dpdk.org> On Behalf Of siddarth rai
> > Sent: Thursday, January 30, 2020 11:48 AM
> > To: David Marchand <david.march...@redhat.com>
> > Cc: Burakov, Anatoly <anatoly.bura...@intel.com>; dev <dev@dpdk.org>
> > Subject: Re: [dpdk-dev] Big spike in DPDK VSZ
> >
> > Hi,
> >
> > I did some further experiments and found out that version 18.02.2
> > doesn't have the problem, but the 18.05.1 release does.
> >
> > I would really appreciate it if someone could help: is there a patch in
> > the DPDK code to get around this issue?
> > This is becoming a serious practical issue for me, as on a multi-NUMA
> > setup the VSZ goes above 400G and I can't get core files to debug
> > crashes in my app.
> >
> > Regards,
> > Siddarth
> >
> > On Thu, Jan 30, 2020 at 2:21 PM David Marchand
> > <david.march...@redhat.com>
> > wrote:
> >
> > > On Thu, Jan 30, 2020 at 8:48 AM siddarth rai <sid...@gmail.com> wrote:
> > > > I have been using DPDK 19.08 and I notice the process VSZ is huge.
> > > >
> > > > I tried running testpmd. It takes 64G of VSZ, and if I use the
> > > > '--in-memory' option it takes up to 188G.
> > > >
> > > > Is there any way to disable the allocation of such a huge VSZ in DPDK?
> > >
> > > *Disclaimer* I don't know the arcana of the mem subsystem.
> > >
> > > I suppose this is due to the memory allocator in DPDK, which reserves
> > > unused virtual space (for memory hotplug + multiprocess).
> > >
> > > If this is the case, maybe we could do something to improve the
> > > situation for applications that don't care about multiprocess, like
> > > informing DPDK that the application won't use multiprocess and skipping
> > > those reservations.
> > >
> > > Or another idea would be to limit those reservations to what is passed
> > > via --socket-limit.
> > >
> > > Anatoly?
> > >
> > >
> > >
> > > --
> > > David Marchand
> > >
> > >
>
