Hi Stephen,

  Yes, we are seeing all of NETVSC, the failsafe PMD, TAP, and the MLX4 VF. We
are expecting bifurcated operation, where default traffic flows through the
MLX4_EN slave to the NETVSC master and into the Linux network stack, while
traffic matching defined flow rules goes into a DPDK RX queue instead of the
MLX4_EN net-dev queue. Unfortunately, since we cannot install flow rules on the
VF NIC, we cannot redirect traffic into DPDK user space.

   In fact, I am looking for a way to steer all RX traffic from the MLX4_EN
slave device into a DPDK RX ring, as in the sketch below.
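
  For illustration, here is a minimal rte_flow sketch of the kind of rule we
are trying to install (the port id and queue index are assumptions; the
rte_flow_create() call is where the attach failure shows up on the VF):

    #include <stdint.h>
    #include <rte_ethdev.h>
    #include <rte_flow.h>

    /* Catch-all ingress rule: match every received Ethernet frame and
     * steer it to DPDK RX queue 0, pulling all RX traffic off the
     * MLX4_EN slave. Narrower match items (IPv4, UDP, ...) would give
     * the per-flow bifurcation described above. */
    static struct rte_flow *
    steer_all_rx(uint16_t port_id, struct rte_flow_error *error)
    {
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },  /* any L2 frame */
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_queue queue = { .index = 0 };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, error);
    }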

  Hypervisor: Windows Server with MLNX VPI 5.5. VM: Linux, dpdk-18.11,
mlnx-ofed-kernel-4.4, rdma-core-43mlnx1.
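
  For reference, on the Ubuntu hypervisor the fix mentioned below was applied
as an mlx4_core module option (a sketch, assuming the standard modprobe.d
mechanism; -1 enables device-managed flow steering):

    # /etc/modprobe.d/mlx4.conf
    options mlx4_core log_num_mgm_entry_size=-1

We have not found an equivalent setting exposed by MLNX VPI on the Windows
Server side.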

  Ethernet controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual 
Function]

Best,

Liwu

-----Original Message-----
From: Stephen Hemminger <[email protected]> 
Sent: Friday, January 4, 2019 4:24 PM
To: Liwu Liu <[email protected]>
Subject: Re: [dpdk-users] Cannot run DPDK applications using Mellanox NIC under 
Hyper-V

On Sat, 5 Jan 2019 00:10:43 +0000
Liwu Liu <[email protected]> wrote:

> Hi Stephen,
> 
>    It is SR-IOV
> 
>   Thanks,
> 
> Liwu
> 
> -----Original Message-----
> From: Stephen Hemminger <[email protected]>
> Sent: Friday, January 4, 2019 4:07 PM
> To: Liwu Liu <[email protected]>
> Cc: [email protected]
> Subject: Re: [dpdk-users] Cannot run DPDK applications using Mellanox 
> NIC under Hyper-V
> 
> On Fri, 4 Jan 2019 20:06:48 +0000
> Liwu Liu <[email protected]> wrote:
> 
> > Hi Team,
> >     We used to have a similar problem with an Ubuntu 18.04 hypervisor and
> > resolved it by setting log_num_mgm_entry_size=-1. (Refer to
> > https://mails.dpdk.org/archives/users/2018-November/003647.html
> > )
> > 
> >   For Windows Server with MLNX VPI, however, I do not know where to set
> > this, and DPDK over MLX4 in the Linux VM fails the same way when attaching
> > a flow.
> > 
> >     [  374.568992] <mlx4_ib> __mlx4_ib_create_flow: mcg table is 
> > full. Fail to register network rule. size = 64 (out of memory error 
> > code)
> > 
> >     I would like to get your help on this. It seems the PF interface is
> > not configured to trust the VF interfaces enough to let them add new flow
> > rules.
> > 
> >    Many thanks for your help,
> > 
> > Liwu
> > 
> 
> How are you using the Mellanox device with Windows Server? PCI passthrough
> or SR-IOV?

If using SR-IOV, then you can't use the Mellanox device directly; you have to
use the synthetic device. If you use the vdev_netvsc pseudo-device, it sets up
a failsafe PMD, a TAP device, and the VF (Mellanox) PMD for you.
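
For example, something along these lines should set up the trio automatically
(the interface name eth1 is a placeholder; see the vdev_netvsc section of the
DPDK NIC guides):

    testpmd -l 1-2 -n 4 --vdev='vdev_netvsc0,iface=eth1' -- -i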

The experimental way is to use the netvsc PMD, which will manage the VF if
available.
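
With the netvsc PMD, the synthetic device first has to be rebound from
hv_netvsc to uio_hv_generic, roughly as below (eth1 is a placeholder for the
synthetic interface; the UUID is the VMBus network device class id from the
DPDK netvsc guide):

    modprobe uio_hv_generic
    NET_UUID="f8615163-df3e-46c5-913f-f2d2f965ed0e"
    DEV_UUID=$(basename $(readlink /sys/class/net/eth1/device))
    echo $NET_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/new_id
    echo $DEV_UUID > /sys/bus/vmbus/drivers/hv_netvsc/unbind
    echo $DEV_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/bind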
