Thanks for the explanation!
I understand your use case, and it cannot be supported with ConnectX-5.
Regards,
Asaf Penso
From: Vladimir Yesin
Sent: Tuesday, March 1, 2022 10:02 AM
To: Asaf Penso
Cc: users@dpdk.org
Subject: Re: Feature request: MLX5 DPDK flow item type RAW support
Hello Asaf,
On Wed, 2 Mar 2022 15:36:57 +
"Jim Holland (jimholla)" wrote:
> unsubscribe
Use the website; this list doesn't use mailman.
unsubscribe
Did you follow the instructions here?
https://docs.vmware.com/en/VMware-vCloud-NFV-OpenStack-Edition/3.0/vmwa-vcloud-nfv30-performance-tunning/GUID-1F05987F-012B-4BC4-9015-CDE3C991C68C.html
On Tue, Mar 1, 2022, 21:48 Lombardo, Ed wrote:
> I am using vmware hypervisor.
>
Hi,
When I return to a 2K mbuf size, the /proc/meminfo hugepage info looks exactly the
same:
HugePages_Total:    1024
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
I did not make any changes to the arguments to rte_eal_init() when I tried 16K.
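As a side note, the figures quoted above are internally consistent and can be sanity-checked with a little shell arithmetic (a sketch; the page count and page size are the ones reported in the /proc/meminfo output above):

```shell
# Total hugepage memory implied by the quoted /proc/meminfo values:
# 1024 pages of 2048 kB each.
total_pages=1024
page_kb=2048
echo "$(( total_pages * page_kb / 1024 )) MB"   # prints "2048 MB"
```

With HugePages_Free at 0, all 2 GB are already reserved, so /proc/meminfo looking identical for 2K and 16K mbufs is expected: EAL grabs the hugepages at init regardless of how mempools later carve them up.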
Hi
I'm new to DPDK. I'm looking at adding a backend to a high-level
library (https://github.com/ska-sa/spead2) that currently has a
backend using ibverbs raw ethernet QPs for kernel bypass (on mlx5
hardware).
One of the features of the high-level library is that when
transmitting data, the user