Donald,

I cannot get the VFs to work in order to test the approach you suggested.

I'm loading the driver with: modprobe ixgbe max_vfs=15,15

The output is:

Intel(R) 10 Gigabit PCI Express Network Driver - version 3.21.2
Copyright (c) 1999-2014 Intel Corporation.
ixgbe 0000:06:00.0: PCI->APIC IRQ transform: INT A -> IRQ 42
ixgbe: I/O Virtualization (IOV) set to 15
ixgbe: 0000:06:00.0: ixgbe_check_options: FCoE Offload feature enabled
ixgbe 0000:06:00.0: Enabling SR-IOV VFs using the max_vfs module parameter is 
deprecated.
ixgbe 0000:06:00.0: Please use the pci sysfs interface instead. Ex:
ixgbe 0000:06:00.0: echo '15' > /sys/bus/pci/devices/0000:06:00.0/sriov_numvfs
ixgbe 0000:06:00.0 (unregistered net_device): Failed to enable PCI sriov: -38
ixgbe 0000:06:00.0: irq 109 for MSI/MSI-X
ixgbe 0000:06:00.0: irq 110 for MSI/MSI-X
ixgbe 0000:06:00.0: irq 111 for MSI/MSI-X
ixgbe 0000:06:00.0: irq 112 for MSI/MSI-X
ixgbe 0000:06:00.0: irq 113 for MSI/MSI-X
ixgbe 0000:06:00.0: irq 114 for MSI/MSI-X
ixgbe 0000:06:00.0: irq 115 for MSI/MSI-X
ixgbe 0000:06:00.0: irq 116 for MSI/MSI-X
ixgbe 0000:06:00.0: irq 117 for MSI/MSI-X
ixgbe 0000:06:00.0: irq 118 for MSI/MSI-X
ixgbe 0000:06:00.0: irq 119 for MSI/MSI-X
ixgbe 0000:06:00.0: irq 120 for MSI/MSI-X
ixgbe 0000:06:00.0: irq 121 for MSI/MSI-X
ixgbe 0000:06:00.0: irq 122 for MSI/MSI-X
ixgbe 0000:06:00.0: irq 123 for MSI/MSI-X
ixgbe 0000:06:00.0: irq 124 for MSI/MSI-X
ixgbe 0000:06:00.0: irq 125 for MSI/MSI-X
ixgbe 0000:06:00.0: PCI Express bandwidth of 32GT/s available
ixgbe 0000:06:00.0: (Speed:5.0GT/s, Width: x8, Encoding Loss:20%)
ixgbe 0000:06:00.0 eth2: MAC: 2, PHY: 15, SFP+: 5, PBA No: E68785-007
ixgbe 0000:06:00.0: 90:e2:ba:5f:6d:a4
ixgbe 0000:06:00.0 eth2: Enabled Features: RxQ: 16 TxQ: 16 FdirHash 
ixgbe 0000:06:00.0 eth2: Intel(R) 10 Gigabit Network Connection
ixgbe 0000:06:00.1: PCI->APIC IRQ transform: INT B -> IRQ 45
ixgbe: I/O Virtualization (IOV) set to 15
ixgbe: 0000:06:00.1: ixgbe_check_options: FCoE Offload feature enabled
ixgbe 0000:06:00.1: Enabling SR-IOV VFs using the max_vfs module parameter is 
deprecated.
ixgbe 0000:06:00.1: Please use the pci sysfs interface instead. Ex:
ixgbe 0000:06:00.1: echo '15' > /sys/bus/pci/devices/0000:06:00.1/sriov_numvfs
ixgbe 0000:06:00.1 (unregistered net_device): Failed to enable PCI sriov: -38
ixgbe 0000:06:00.1: irq 126 for MSI/MSI-X
ixgbe 0000:06:00.1: irq 127 for MSI/MSI-X
ixgbe 0000:06:00.1: irq 128 for MSI/MSI-X
ixgbe 0000:06:00.1: irq 129 for MSI/MSI-X
ixgbe 0000:06:00.1: irq 130 for MSI/MSI-X
ixgbe 0000:06:00.1: irq 131 for MSI/MSI-X
ixgbe 0000:06:00.1: irq 132 for MSI/MSI-X
ixgbe 0000:06:00.1: irq 133 for MSI/MSI-X
ixgbe 0000:06:00.1: irq 134 for MSI/MSI-X
ixgbe 0000:06:00.1: irq 135 for MSI/MSI-X
ixgbe 0000:06:00.1: irq 136 for MSI/MSI-X
ixgbe 0000:06:00.1: irq 137 for MSI/MSI-X
ixgbe 0000:06:00.1: irq 138 for MSI/MSI-X
ixgbe 0000:06:00.1: irq 139 for MSI/MSI-X
ixgbe 0000:06:00.1: irq 140 for MSI/MSI-X
ixgbe 0000:06:00.1: irq 141 for MSI/MSI-X
ixgbe 0000:06:00.1: irq 142 for MSI/MSI-X
IPv6: ADDRCONF(NETDEV_UP): eth2: link is not ready
8021q: adding VLAN 0 to HW filter on device eth2
ixgbe 0000:06:00.1: PCI Express bandwidth of 32GT/s available
ixgbe 0000:06:00.1: (Speed:5.0GT/s, Width: x8, Encoding Loss:20%)
ixgbe 0000:06:00.1 eth3: MAC: 2, PHY: 15, SFP+: 6, PBA No: E68785-007
ixgbe 0000:06:00.1: 90:e2:ba:5f:6d:a5
ixgbe 0000:06:00.1 eth3: Enabled Features: RxQ: 16 TxQ: 16 FdirHash 
ixgbe 0000:06:00.1 eth3: Intel(R) 10 Gigabit Network Connection
ixgbe 0000:06:00.0 eth2: detected SFP+: 5
IPv6: ADDRCONF(NETDEV_UP): eth3: link is not ready
8021q: adding VLAN 0 to HW filter on device eth3
ixgbe 0000:06:00.1 eth3: detected SFP+: 6
ixgbe 0000:06:00.1 eth3: NIC Link is Up 10 Gbps, Flow Control: RX/TX
IPv6: ADDRCONF(NETDEV_CHANGE): eth3: link becomes ready
ixgbe 0000:06:00.0 eth2: NIC Link is Up 10 Gbps, Flow Control: RX/TX
IPv6: ADDRCONF(NETDEV_CHANGE): eth2: link becomes ready

I cannot run the suggested command, echo '15' >
/sys/bus/pci/devices/0000:06:00.0/sriov_numvfs, because the file
sriov_numvfs does not exist in that directory (the device address is
correct).

I assume this is because of "ixgbe 0000:06:00.1 (unregistered net_device):
Failed to enable PCI sriov: -38", but I'm not sure what that error means.
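
If I'm reading the error code right, -38 is ENOSYS ("Function not
implemented"), which I understand usually means the kernel's PCI core never
set up SR-IOV for this device, for example because CONFIG_PCI_IOV is
disabled in the kernel config. A quick check I plan to run (the config
paths are assumptions, depending on where the distro keeps the kernel
config; /proc/config.gz only exists if CONFIG_IKCONFIG_PROC is enabled):

# grep CONFIG_PCI_IOV /boot/config-$(uname -r)
# zgrep CONFIG_PCI_IOV /proc/config.gz

If neither shows CONFIG_PCI_IOV=y, that would likely explain both the -38
error and the missing sriov_numvfs file.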

Naturally, when I try to bind a VF to a VLAN, I get:

# ip link set eth3.3 vf 0 vlan 3
RTNETLINK answers: Operation not supported
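
For reference, once the VFs actually exist, I expect the sequence to look
roughly like this (if I understand correctly, the vf settings are applied
on the physical function eth3 itself, not on the eth3.3 VLAN subinterface;
the VF index and VLAN ID below are just examples):

# echo 15 > /sys/bus/pci/devices/0000:06:00.1/sriov_numvfs
# ip link set eth3 vf 0 vlan 3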

lspci output for the NICs:
06:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ 
Network Connection (rev 01)
06:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ 
Network Connection (rev 01)
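
One more check I can do is whether the adapter itself advertises the SR-IOV
capability in PCI config space (run as root so lspci can read the extended
capabilities):

# lspci -vvv -s 06:00.0 | grep -i -A3 'SR-IOV'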

The kernel is 3.12.18.

I'm not sure what to do next.


----- Original Message -----
> From: "Jack Spinov" <spi...@timegroup.ae>
> To: "Donald C Skidmore" <donald.c.skidm...@intel.com>
> Cc: e1000-devel@lists.sourceforge.net
> Sent: Thursday, May 15, 2014 12:37:47 PM
> Subject: Re: [E1000-devel] ixgbe: how to balance PPPoE traffic via RSS to     
> multiple queues
> 
> Thanks for your reply.
> 
> I'll try that for synthetic tests. But it's unclear what to do if I
> have no VLANs, or only a few, in production. VMDq without a virtual
> environment looks like a workaround to me.
> 
> Say the server is a BRAS with a PPPoE server terminating client
> sessions. Without PPPoE, everything works as it should. Once PPPoE is
> in effect, everything lands in queue 0.
> 
> How can I configure VFs in this case?
> 
> ----- Original Message -----
> > From: "Donald C Skidmore" <donald.c.skidm...@intel.com>
> > To: "Jack Spinov" <spi...@timegroup.ae>,
> > e1000-devel@lists.sourceforge.net
> > Sent: Wednesday, May 14, 2014 9:05:27 PM
> > Subject: RE: [E1000-devel] ixgbe: how to balance PPPoE traffic via
> > RSS to      multiple queues
> > 
> > Maybe try VMDq which could sort by L2 (MAC address and VLAN).
> > 
> > Thanks,
> > -Don Skidmore <donald.c.skidm...@intel.com>
> > 
> > > -----Original Message-----
> > > From: Jack Spinov [mailto:spi...@timegroup.ae]
> > > Sent: Tuesday, May 13, 2014 7:39 AM
> > > To: e1000-devel@lists.sourceforge.net
> > > Subject: [E1000-devel] ixgbe: how to balance PPPoE traffic via
> > > RSS
> > > to multiple
> > > queues
> > > 
> > > Hello, everyone.
> > > 
> > > I have spent a lot of time trying to balance traffic across
> > > multiple RSS queues, but the traffic always lands in the same
> > > queue. I've read all the documents and threads I could find, but
> > > still cannot find a way to solve my problem.
> > > 
> > > My configuration: 3 servers with 82599 adapters, connected like
> > > this:
> > > 
> > > Packet generator (PG) <--- 10G ---> Packet router (PR) <--- 10G
> > > ---> Packet dumper (PD)
> > > 
> > > PG connects to PR via PPPoE and generates packets directed to the
> > > packet dumper (PD).
> > > 
> > > PR is running a 3.2.23 kernel with the latest ixgbe 3.21.2. PG is
> > > running 3.12.18 with the same ixgbe 3.21.2, all at default
> > > settings. I've generated traffic using iperf, netperf, and ping.
> > > With iperf and netperf, both TCP and UDP, everything goes to the
> > > default queue 0 while the other 15 queues stay idle, on both PR's
> > > receiving and sending interfaces. With ping, traffic is
> > > distributed properly, which is another riddle for me.
> > > 
> > > I've tried macvlan (different MACs on PR and PG) and VLANs
> > > (different source IPs for the PPPoE sessions, different
> > > destination IPs for PD), but nothing helps. I cannot use Flow
> > > Director because tunneled packets are not analyzed by it (as per
> > > the datasheet), which is confirmed by the ethtool fdir* counters.
> > > 
> > > So what are my options? And how can I guarantee that I won't face
> > > the same issue in a production environment? Or am I doing
> > > something wrong?
> > > 
> > > Thanks in advance.
> > > 
> > 
> 
