Mr. Ferber, much appreciated. I knew this metal box came to me with two
Mellanox NICs bonded. I used their utility to unbond it, but alas it did not
do it all the way: /etc/network/interfaces was still wrong. I fixed the
config and rebooted.
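For anyone hitting the same thing, here is a minimal sketch of what a cleaned-up /etc/network/interfaces might look like once the bond stanzas are removed. The interface names and addressing are assumptions for illustration, not taken from this thread:

```
# /etc/network/interfaces -- hypothetical post-unbond layout
# (replace interface names with the ones your system actually shows)
auto enp1s0f0
iface enp1s0f0 inet dhcp

auto enp1s0f1
iface enp1s0f1 inet manual
```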
Voila. Success. ibv_devinfo now shows two devices. And the
+Asaf Penso
On Tue, Apr 5, 2022, 9:15 PM fwefew 4t4tg <7532ya...@gmail.com> wrote:
> Where is the latest (most current) downlink for:
>
> DPDK-MLNX: Mellanox's customization of DPDK?
> MLNX_DPDK_Quick_Start_Guide?
>
> I cannot get a list of available versions on Nvidia's download page, and the
>
Hi,
Based on your output, the ConnectX-4 Lx device is configured in LAG mode,
managed via the kernel bonding scripts. In this mode, both physical
functions share a single port (mlx5_bond_0). You should only probe the
first PCI BDF, 01:00.0, not the second one.
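Concretely, that means allow-listing only the first PF when launching a DPDK application. A hedged example with testpmd (the BDF is the one from this thread; adjust for your system):

```shell
# Probe only the first physical function of the LAG bond.
# -a is the EAL allow-list option in recent DPDK releases
# (older releases used -w for the same purpose).
dpdk-testpmd -a 0000:01:00.0 -- -i
```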
By the way, the --dpdk installation
I built the current version of DPDK directly from dpdk.org after I
installed the current Mellanox OFED driver set:
* MLNX_OFED_LINUX-5.5-1.0.3.2-ubuntu20.04-x86_64.iso
with ./install --dpdk
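For reference, the usual upstream build after the OFED install looks roughly like this (meson/ninja is the standard DPDK build flow; the directory layout is an assumption):

```shell
# From an unpacked dpdk.org source tree
meson setup build
ninja -C build
sudo ninja -C build install
```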
I am using a Mellanox Technologies MT27710 Family [ConnectX-4 Lx] which is
Ethernet only; there is no IB.
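A quick way to confirm the ports really are Ethernet-only is to check the link layer each device reports, e.g.:

```shell
# Each port should report "link_layer: Ethernet" rather than InfiniBand
ibv_devinfo | grep -E 'hca_id|link_layer'
```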
Where is the latest (most current) downlink for:
DPDK-MLNX: Mellanox's customization of DPDK?
MLNX_DPDK_Quick_Start_Guide?
I cannot get a list of available versions on Nvidia's download page, and the
only versions I can find:
My question was theoretical: is it possible to get the same RSS load balancing
on a VF as on a PF for inner IP over GRE? But then I tried and failed to start
the testpmd application, and also my own application.
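For what it's worth, the kind of rule being attempted can be sketched in testpmd's flow syntax. Whether a VF is allowed to install it depends on the NIC model and firmware, so treat this purely as an illustration:

```
testpmd> flow create 0 ingress pattern eth / ipv4 / gre / ipv4 / end actions rss level 2 types ipv4 end / end
```

Here `level 2` asks for the hash to be computed on the inner headers, and `types ipv4 end` restricts the hash input to the inner IP addresses.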
I sent a separate email about it, but still haven't managed to find the
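As background on what "the same load balance" means mechanically: RSS runs a Toeplitz hash over the IP tuple (the inner tuple, when inner-header hashing is requested) and uses the result to pick a receive queue. A minimal Python sketch follows; the 40-byte key is the widely published default RSS key, but your NIC may be programmed with a different one, and real hardware selects the queue through an indirection table rather than a plain modulo:

```python
# Minimal sketch of the Toeplitz hash that RSS uses to spread flows
# across receive queues. The key below is the widely published default
# RSS key; a real NIC may be programmed with a different one.
DEFAULT_RSS_KEY = bytes([
    0x6d, 0x5a, 0x56, 0xda, 0x25, 0x5b, 0x0e, 0xc2,
    0x41, 0x67, 0x25, 0x3d, 0x43, 0xa3, 0x8f, 0xb0,
    0xd0, 0xca, 0x2b, 0xcb, 0xae, 0x7b, 0x30, 0xb4,
    0x77, 0xcb, 0x2d, 0xa3, 0x80, 0x30, 0xf2, 0x0c,
    0x6a, 0x42, 0xb7, 0x3b, 0xbe, 0xac, 0x01, 0xfa,
])

def toeplitz_hash(key: bytes, data: bytes) -> int:
    """XOR together the 32-bit key window aligned with every set bit of data."""
    key_int = int.from_bytes(key, "big")
    key_bits = len(key) * 8
    result = 0
    for i, byte in enumerate(data):
        for b in range(8):
            if byte & (0x80 >> b):
                bit_index = i * 8 + b
                result ^= (key_int >> (key_bits - 32 - bit_index)) & 0xFFFFFFFF
    return result

# For inner-IP RSS the hash input is the inner source/destination
# addresses; 10.0.0.1 -> 10.0.0.2 are example addresses, not from the thread.
inner_tuple = bytes([10, 0, 0, 1]) + bytes([10, 0, 0, 2])
h = toeplitz_hash(DEFAULT_RSS_KEY, inner_tuple)
# Simplified queue selection: real NICs index an indirection table
# with the low bits of the hash instead of a plain modulo.
queue = h % 4
```

The point of the sketch is that identical inner tuples always land on the same queue, which is exactly the per-flow load balancing being asked about for the VF.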