I am using the VMware hypervisor.
-----Original Message-----
From: Stephen Hemminger
Sent: Tuesday, March 1, 2022 11:41 PM
To: Cliff Burdick
Cc: Lombardo, Ed; users@dpdk.org
Subject: Re: How to increase mbuf size in dpdk version 17.11
On Tue, 1 Mar 2022 19:56:39 -0800
Cliff Burdick wrote:
> That's showing you have 0 hugepages free. Maybe they weren't passed through
> to the VM properly?
Which hypervisor? Not all hypervisors really support hugepages.
On Tue, Mar 1, 2022 at 7:50 PM Lombardo, Ed
wrote:
[root@vSTREAM_632 ~]# cat /proc/meminfo
MemTotal:       32778372 kB
MemFree:        15724124 kB
MemAvailable:   15897392 kB
Buffers:           18384 kB
Cached:           526768 kB
SwapCached:            0 kB
Active:           355140 kB
Inactive:         173360 kB
Active(anon):      62472 kB
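The excerpt above stops before the hugepage counters, which are the lines Cliff's "0 hugepages free" remark refers to. A minimal check (the sample values below are hypothetical, written to a temp file so the grep is reproducible; on the guest, point the same grep at /proc/meminfo):

```shell
# Sample meminfo fragment with hypothetical values illustrating the
# symptom described above: hugepages reserved but none free.
cat > /tmp/meminfo.sample <<'EOF'
HugePages_Total:    1024
HugePages_Free:        0
Hugepagesize:       2048 kB
EOF
# On a real guest, run the grep against /proc/meminfo instead.
grep -E 'HugePages_(Total|Free)|Hugepagesize' /tmp/meminfo.sample
```

If HugePages_Total is 0 inside the guest, the pages were never passed through from the hypervisor, and any DPDK mempool allocation will fail regardless of mbuf size.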
Can you paste the output of "cat /proc/meminfo"?
On Tue, Mar 1, 2022 at 5:37 PM Lombardo, Ed
wrote:
Here is the output from rte_mempool_dump() after creating the mbuf pool with
mbuf_pool_create(mbuf_seg_size=16512, nb_mbuf=32768, socket_id=0):
nb_mbuf_per_pool = 32768
mb_size = 16640
16512 * 32768 = 541,065,216
mempool @0x17f811400
flags=10
pool=0x17f791180
iova=0x80fe11400
On Tue, 1 Mar 2022 13:37:07 -0800
Cliff Burdick wrote:
Can you verify how many buffers you're allocating? I don't see how many
you're allocating in this thread.
On Tue, Mar 1, 2022 at 1:30 PM Lombardo, Ed
wrote:
Hi Stephen,
The VM is configured to have 32 GB of memory.
Will DPDK consume the 2GB of hugepage memory for the mbufs?
I don't mind having fewer mbufs with an mbuf size of 16K versus the original
mbuf size of 2K.
Thanks,
Ed
On Tue, 1 Mar 2022 18:34:22 +
"Lombardo, Ed" wrote:
Hi,
I have an application built with DPDK 17.11.
During initialization I want to change the mbuf size from 2K to 16K.
I want to receive packet sizes of 8K or more in one mbuf.
The VM running the application is configured to have 2G hugepages.
I tried many things and I get an error when a packet
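In DPDK 17.11 the per-mbuf data room is fixed at pool-creation time, so the 2K-to-16K change happens in the rte_pktmbuf_pool_create() call. A sketch of what that looks like (the pool name and cache size here are arbitrary choices, and this is not compiled against a real 17.11 tree):

```c
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Sketch only: a pool whose mbufs hold a 16 KB frame in one segment.
 * 16384 + RTE_PKTMBUF_HEADROOM (128 by default) matches the 16512
 * data-room size quoted elsewhere in this thread. */
static struct rte_mempool *
create_16k_pool(int socket_id)
{
        return rte_pktmbuf_pool_create(
                "mbuf_pool_16k",              /* name (arbitrary) */
                32768,                        /* nb_mbuf */
                256,                          /* per-lcore cache (arbitrary) */
                0,                            /* priv_size */
                16384 + RTE_PKTMBUF_HEADROOM, /* data_room_size = 16512 */
                socket_id);
}
```

The pool alone is not enough to land an 8K frame in a single mbuf: in 17.11 the port's rxmode.jumbo_frame flag and rxmode.max_rx_pkt_len must also allow it before rte_eth_dev_configure(), and whether a given PMD will fill a 16K buffer without segmenting is driver-specific.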
On Tue, 1 Mar 2022 15:16:32 +
"Kinsella, Ray" wrote:
> Can you supply “cat /proc/cmdline” please?
>
> Ray K
One thing you can try is using the irqaffinity boot parameter to force
interrupts onto your non-isolated cores. For timer interrupts you can try
running in NO_HZ_FULL mode, but that may not work if you have other
userspace processes running on those cores. It might be worth confirming
that the
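The suggestion above can be made concrete. These are real kernel boot parameters, but the core numbers below are hypothetical (cores 2-5 for DPDK, cores 0-1 for housekeeping); the snippet checks a canned command line the same way you would check the real /proc/cmdline:

```shell
# Hypothetical kernel command line for a host tuned this way; on the
# target, substitute: cmdline="$(cat /proc/cmdline)"
cmdline='BOOT_IMAGE=/vmlinuz root=/dev/sda1 isolcpus=2-5 nohz_full=2-5 rcu_nocbs=2-5 irqaffinity=0-1'

# Verify each tuning parameter is present.
for p in isolcpus nohz_full rcu_nocbs irqaffinity; do
    case " $cmdline " in
        *" $p="*) echo "$p: present" ;;
        *)        echo "$p: MISSING" ;;
    esac
done
```

Note that even with nohz_full the kernel keeps a residual tick on isolated cores, so a small LOC count is expected; a steady stream of CAL interrupts usually means other kernel work or processes are still landing on those cores.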
Can you supply “cat /proc/cmdline” please?
Ray K
From: Antonio Di Bacco
Sent: Tuesday 1 March 2022 14:52
To: users@dpdk.org
Subject: DPDK on isolated cores but I still see interrupts
I am trying to run a DPDK application on an x86_64 machine (Ubuntu 20.04) on
isolated cores.
I expected not to have interrupts on the isolated cores, but I still see a lot
of CAL (*function call*) interrupts and LOC (*local timer*) interrupts. Is
there any setting in DPDK to stop
Hello Asaf,
I am currently working on forwarding IQ samples from an Ettus Research USRP
N320 SDR receiver to an Nvidia GPU (via GPUDirect RDMA) with DPDK.
We are using ConnectX-5 NICs.
The USRP uses the CHDR network protocol (
https://files.ettus.com/manual_archive/release_003_009_000/html/page_rtp.html)
over