That's showing you have 0 hugepages free. Maybe they weren't passed through
to the VM properly?

On Tue, Mar 1, 2022 at 7:50 PM Lombardo, Ed <ed.lomba...@netscout.com>
wrote:

> [root@vSTREAM_632 ~]# cat /proc/meminfo
> MemTotal:       32778372 kB
> MemFree:        15724124 kB
> MemAvailable:   15897392 kB
> Buffers:           18384 kB
> Cached:           526768 kB
> SwapCached:            0 kB
> Active:           355140 kB
> Inactive:         173360 kB
> Active(anon):      62472 kB
> Inactive(anon):    12484 kB
> Active(file):     292668 kB
> Inactive(file):   160876 kB
> Unevictable:    13998696 kB
> Mlocked:        13998696 kB
> SwapTotal:       3906556 kB
> SwapFree:        3906556 kB
> Dirty:                76 kB
> Writeback:             0 kB
> AnonPages:      13986156 kB
> Mapped:            95500 kB
> Shmem:             16864 kB
> Slab:             121952 kB
> SReclaimable:      71128 kB
> SUnreclaim:        50824 kB
> KernelStack:        4608 kB
> PageTables:        31524 kB
> NFS_Unstable:          0 kB
> Bounce:                0 kB
> WritebackTmp:          0 kB
> CommitLimit:    19247164 kB
> Committed_AS:   14170424 kB
> VmallocTotal:   34359738367 kB
> VmallocUsed:      212012 kB
> VmallocChunk:   34342301692 kB
> Percpu:             2816 kB
> HardwareCorrupted:     0 kB
> AnonHugePages:  13228032 kB
> CmaTotal:              0 kB
> CmaFree:               0 kB
> HugePages_Total:    1024
> HugePages_Free:        0
> HugePages_Rsvd:        0
> HugePages_Surp:        0
> Hugepagesize:       2048 kB
> DirectMap4k:      104320 kB
> DirectMap2M:    33449984 kB
>
> From: Cliff Burdick <shakl...@gmail.com>
> Sent: Tuesday, March 1, 2022 10:45 PM
> To: Lombardo, Ed <ed.lomba...@netscout.com>
> Cc: Stephen Hemminger <step...@networkplumber.org>; users@dpdk.org
> Subject: Re: How to increase mbuf size in dpdk version 17.11
>
> Can you paste the output of "cat /proc/meminfo"?
>
> On Tue, Mar 1, 2022 at 5:37 PM Lombardo, Ed <ed.lomba...@netscout.com>
> wrote:
>
> Here is the output from rte_mempool_dump() after creating the mbuf pool
> with "mbuf_pool_create(mbuf_seg_size=16512, nb_mbuf=32768, socket_id=0)":
>  nb_mbuf_per_pool = 32768
>  mb_size = 16640
>  16512 * 32768 = 541,065,216 bytes
>
> mempool <mbuf_pool_socket_0>@0x17f811400
>   flags=10
>   pool=0x17f791180
>   iova=0x80fe11400
>   nb_mem_chunks=1
>   size=32768
>   populated_size=32768
>   header_size=64
>   elt_size=16640
>   trailer_size=0
>   total_obj_size=16704
>   private_data_size=64
>   avg bytes/object=16704.000000
>   internal cache infos:
>     cache_size=250
>     cache_count[0]=0
> ...
>     cache_count[126]=0
>     cache_count[127]=0
>     total_cache_count=0
>   common_pool_count=32768
>   no statistics available
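>
> For reference, a minimal sketch (an assumption, not the application's
> actual code) of a DPDK 17.11 pool-create call consistent with the dump
> above:
>
>     struct rte_mempool *mp;
>
>     mp = rte_pktmbuf_pool_create("mbuf_pool_socket_0",
>                                  32768,  /* size=32768 in the dump */
>                                  250,    /* cache_size=250 in the dump */
>                                  0,      /* per-mbuf private area */
>                                  16512,  /* data room: 16384 + 128 headroom */
>                                  0);     /* socket id */
>     if (mp == NULL)
>         rte_exit(EXIT_FAILURE, "mbuf pool creation failed: %s\n",
>                  rte_strerror(rte_errno));
>
>     /* elt_size = sizeof(struct rte_mbuf) + priv + data room
>      *          = 128 + 0 + 16512 = 16640, matching the dump */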
>
> -----Original Message-----
> From: Stephen Hemminger <step...@networkplumber.org>
> Sent: Tuesday, March 1, 2022 5:46 PM
> To: Cliff Burdick <shakl...@gmail.com>
> Cc: Lombardo, Ed <ed.lomba...@netscout.com>; users@dpdk.org
> Subject: Re: How to increase mbuf size in dpdk version 17.11
>
> On Tue, 1 Mar 2022 13:37:07 -0800
> Cliff Burdick <shakl...@gmail.com> wrote:
>
> > Can you verify how many buffers you're allocating? I don't see that
> > number anywhere in this thread.
> >
> > On Tue, Mar 1, 2022 at 1:30 PM Lombardo, Ed <ed.lomba...@netscout.com>
> > wrote:
> >
> > > Hi Stephen,
> > > The VM is configured to have 32 GB of memory.
> > > Will dpdk consume the 2GB of hugepage memory for the mbufs?
> > > I don't mind having less mbufs with mbuf size of 16K vs original
> > > mbuf size of 2K.
> > >
> > > Thanks,
> > > Ed
> > >
> > > -----Original Message-----
> > > From: Stephen Hemminger <step...@networkplumber.org>
> > > Sent: Tuesday, March 1, 2022 2:57 PM
> > > To: Lombardo, Ed <ed.lomba...@netscout.com>
> > > Cc: users@dpdk.org
> > > Subject: Re: How to increase mbuf size in dpdk version 17.11
> > >
> > > On Tue, 1 Mar 2022 18:34:22 +0000
> > > "Lombardo, Ed" <ed.lomba...@netscout.com> wrote:
> > >
> > > > Hi,
> > > > I have an application built with dpdk 17.11.
> > > > During initialization I want to change the mbuf size from 2K to 16K.
> > > > I want to receive packet sizes of 8K or more in one mbuf.
> > > >
> > > > The VM running the application is configured to have 2G hugepages.
> > > >
> > > > I tried many things and I get an error when a packet arrives.
> > > >
> > > > I read online that there is #define DEFAULT_MBUF_DATA_SIZE that I
> > > > changed from 2176 to ((2048*8)+128), where 128 is for headroom.
> > > > The call to rte_pktmbuf_pool_create() returns success with my
> > > > changes.
> > > > From rte_eth_stats_get(), "rx_nombuf" - the total number of Rx mbuf
> > > > allocation failures - increments each time a packet arrives.
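> > > >
> > > > (A hedged sketch, not the application's actual code: for frames
> > > > larger than the default 2K, the port rxmode in 17.11 typically also
> > > > needs jumbo frames enabled; the values below are assumptions.)
> > > >
> > > >     struct rte_eth_conf port_conf = {
> > > >         .rxmode = {
> > > >             .max_rx_pkt_len = 9000, /* accept frames up to 9000 B */
> > > >             .jumbo_frame    = 1,    /* enable jumbo frames */
> > > >             .enable_scatter = 0,    /* 16K mbufs: one buffer/packet */
> > > >         },
> > > >     };
> > > >     /* then: rte_eth_dev_configure(port, nrxq, ntxq, &port_conf); */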
> > > >
> > > > Is there any reference document explaining what causes this error?
> > > > Is there a user guide I should follow to make the mbuf size change,
> > > > starting with the hugepage value?
> > > >
> > > > Thanks,
> > > > Ed
> > >
> > > Did you check that you have enough memory in the system for the
> > > larger footprint?
> > > Using 16K per mbuf is going to cause lots of memory to be consumed.
>
> A little math; fill in your own values.
>
> Assuming you want 16K of data.
>
> You need at a minimum [1]
>     num_rxq := total number of receive queues
>     num_rxd := number of receive descriptors per receive queue
>     num_txq := total number of transmit queues (assume all can be full)
>     num_txd := number of transmit descriptors per transmit queue
>     num_cores  := number of cores handling packets
>     burst_size := mbufs each core holds in a burst
>
>     num_mbufs = num_rxq * num_rxd + num_txq * num_txd
>               + num_cores * burst_size
>
> Assuming you are using code copied from an example like l3fwd, with 4 Rx
> queues, 4 Tx queues, and 4 cores:
>
>     num_mbufs = 4 * 1024 + 4 * 1024 + 4 * 32 = 8320
>
> Each mbuf element requires [2]
>     elt_size = sizeof(struct rte_mbuf) + HEADROOM + mbuf_size
>              = 128 + 128 + 16K = 16640
>
>     obj_size = rte_mempool_calc_obj_size(elt_size, 0, NULL)
>              = 16832
>
> So total pool is
>     num_mbufs * obj_size = 8320 * 16832 = 140,042,240 bytes (~133.5 MiB)
>
>
> [1] Some devices, like bnxt, need multiple buffers per packet.
> [2] Often applications want additional space per mbuf for meta-data.
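>
> A rough C sketch of the sizing arithmetic above (queue and descriptor
> counts assumed from the l3fwd-like example; plug in your own):
>
>     #include <stdio.h>
>
>     int main(void)
>     {
>         unsigned num_rxq = 4, num_rxd = 1024;  /* rx queues/descriptors */
>         unsigned num_txq = 4, num_txd = 1024;  /* tx queues/descriptors */
>         unsigned num_cores = 4, burst = 32;    /* per-core burst buffers */
>         unsigned obj_size = 16832;             /* from calc_obj_size above */
>
>         unsigned num_mbufs = num_rxq * num_rxd + num_txq * num_txd +
>                              num_cores * burst;         /* = 8320 */
>
>         printf("mbufs=%u pool=%u bytes (~%u MiB)\n",
>                num_mbufs, num_mbufs * obj_size,
>                num_mbufs * obj_size / (1024u * 1024u)); /* ~133 MiB */
>         return 0;
>     }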