My guess is no, nobody has checked. I don't think OpenStack is under our 
purview, nor is it an official part of our projects, so you'll have to talk 
to the OpenStack folks first.

Also, when you direct-assign a device to a VM, the guest only sees the one 
PCI function, so the hardware will look "different" in that environment.
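
You can see part of that difference just with lspci: on the host the function 
sits in the real PCI topology, while in the guest it shows up alone, 
re-enumerated on the emulated bus. Roughly like this (the addresses are only 
illustrative, lifted from the logs earlier in this thread):

# on the host
lspci -s 0a:00.1
0a:00.1 Ethernet controller: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ (rev 01)

# inside the guest, the assigned functions reappear at new addresses
lspci | grep XL710
00:05.0 Ethernet controller: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ (rev 01)
00:06.0 Ethernet controller: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ (rev 01)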

If the option ROM loads any firmware, it will also see that "different" 
hardware and could get confused.
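
If you want to take the OpenStack layer out of the picture, one experiment is 
to assign the device by hand to a plain libvirt/KVM guest and repeat the test. 
A rough sketch (assuming vfio-pci; the PCI address, the file name and the 
guest name "myguest" below are only placeholders):

# detach the device from its host driver and hand it to the vfio stub
modprobe vfio-pci
virsh nodedev-detach pci_0000_0a_00_0

# minimal libvirt <hostdev> description of 0000:0a:00.0
cat > xl710-hostdev.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF

# attach it to the guest (takes effect on the next boot with --config)
virsh attach-device myguest xl710-hostdev.xml --config

If the transmit problems still show up there, OpenStack is off the hook.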

In any case, I'll have to consult with Shannon to see if he knows any more 
about this.

Todd Fujinaka
Software Application Engineer
Networking Division (ND)
Intel Corporation
todd.fujin...@intel.com
(503) 712-4565

-----Original Message-----
From: jacob jacob [mailto:opstk...@gmail.com] 
Sent: Friday, March 20, 2015 1:16 PM
To: Fujinaka, Todd
Cc: Nelson, Shannon; e1000-devel@lists.sourceforge.net
Subject: Re: [E1000-devel] Fwd: PCI passthrough of 40G ethernet interface 
(Openstack/KVM)

Hi Todd, Shannon,

Any suggestions on how to proceed with this? Has anyone verified PCI 
passthrough of the XL710 with any version of OpenStack (Icehouse, Juno, 
etc.)?
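
For context, the passthrough is wired up through the standard nova options, 
roughly as below (the alias name "xl710" and the flavor are arbitrary, and the 
product_id is just my reading of lspci -nn for this card, so please treat the 
exact values as examples):

# nova.conf on the compute node
pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "1584"}

# nova.conf on the controller, plus PciPassthroughFilter added to
# scheduler_default_filters
pci_alias = {"vendor_id": "8086", "product_id": "1584", "name": "xl710"}

# the flavor used for the VM asks for two of these devices
nova flavor-key m1.xlarge set "pci_passthrough:alias"="xl710:2"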

Thanks
Jacob

On Thu, Mar 19, 2015 at 6:57 PM, jacob jacob <opstk...@gmail.com> wrote:
> Yes. I am using a single-port card and I have 2 of these.
> Data transfer works with these when run from the host.
>
> The module error is seen only in the VM. Also, with DPDK it can be seen 
> that one of the ports can transmit fine.
>
> On Thu, Mar 19, 2015 at 6:53 PM, Fujinaka, Todd <todd.fujin...@intel.com> 
> wrote:
>> The information you've given us says you're using a single-port card. If you 
>> see a module error, you should see no traffic.
>>
>> Todd Fujinaka
>> Software Application Engineer
>> Networking Division (ND)
>> Intel Corporation
>> todd.fujin...@intel.com
>> (503) 712-4565
>>
>> -----Original Message-----
>> From: jacob jacob [mailto:opstk...@gmail.com]
>> Sent: Thursday, March 19, 2015 3:24 PM
>> To: Fujinaka, Todd
>> Cc: Nelson, Shannon; e1000-devel@lists.sourceforge.net
>> Subject: Re: [E1000-devel] Fwd: PCI passthrough of 40G ethernet 
>> interface (Openstack/KVM)
>>
>> How is this different for devices passed through to a VM?
>> As mentioned earlier, everything works fine on the host, so the modules 
>> are verified to be working.
>> The issue is seen only when the device is passed through to a VM running on 
>> the same host.
>>
>> Am I missing something?
>>
>> On Thu, Mar 19, 2015 at 6:13 PM, Fujinaka, Todd <todd.fujin...@intel.com> 
>> wrote:
>>> From the README:
>>>
>>> "
>>> SFP+ Devices with Pluggable Optics
>>> ----------------------------------
>>>
>>> SR Modules
>>> ----------
>>>   Intel     DUAL RATE 1G/10G SFP+ SR (bailed)    FTLX8571D3BCV-IT
>>>   Intel     DUAL RATE 1G/10G SFP+ SR (bailed)    AFBR-703SDZ-IN2
>>>
>>> LR Modules
>>> ----------
>>>   Intel     DUAL RATE 1G/10G SFP+ LR (bailed)    FTLX1471D3BCV-IT
>>>   Intel     DUAL RATE 1G/10G SFP+ LR (bailed)    AFCT-701SDZ-IN2
>>>
>>> QSFP+ Modules
>>> -------------
>>>   Intel     TRIPLE RATE 1G/10G/40G QSFP+ SR (bailed)    E40GQSFPSR
>>>   Intel     TRIPLE RATE 1G/10G/40G QSFP+ LR (bailed)    E40GQSFPLR
>>>     QSFP+ 1G speed is not supported on XL710 based devices.
>>>
>>> X710/XL710 Based SFP+ adapters support all passive and active 
>>> limiting direct attach cables that comply with SFF-8431 v4.1 and SFF-8472 
>>> v10.4 specifications.
>>> "
>>>
>>> Please keep in mind that the check for supported modules is done in the 
>>> firmware, not in the driver. At this time you have to use the listed 
>>> modules to get connectivity.
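>>>
>>> If you want to double-check what the firmware is seeing, you can try 
>>> dumping the module EEPROM with ethtool and comparing the vendor part 
>>> number against the list above. This needs driver/firmware support for 
>>> module reads, so it may or may not work on your setup:
>>>
>>> # ethtool -m eth1 | grep -i -e identifier -e vendor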
>>>
>>> Todd Fujinaka
>>> Software Application Engineer
>>> Networking Division (ND)
>>> Intel Corporation
>>> todd.fujin...@intel.com
>>> (503) 712-4565
>>>
>>> -----Original Message-----
>>> From: jacob jacob [mailto:opstk...@gmail.com]
>>> Sent: Thursday, March 19, 2015 2:01 PM
>>> To: Fujinaka, Todd
>>> Cc: Nelson, Shannon; e1000-devel@lists.sourceforge.net
>>> Subject: Re: [E1000-devel] Fwd: PCI passthrough of 40G ethernet 
>>> interface (Openstack/KVM)
>>>
>>> I have updated to the latest firmware and still no luck.
>>>
>>>
>>> # ethtool -i eth1
>>> driver: i40e
>>> version: 1.2.37
>>> firmware-version: f4.33.31377 a1.2 n4.42 e1930
>>> bus-info: 0000:00:05.0
>>> supports-statistics: yes
>>> supports-test: yes
>>> supports-eeprom-access: yes
>>> supports-register-dump: yes
>>> supports-priv-flags: yes
>>>
>>>
>>> Seeing similar results as before:
>>> 1) Everything works fine on the host (used i40e version 1.2.37 and dpdk
>>> 1.8.0).
>>>
>>> 2) In the VM I tried both the i40e driver version 1.2.37 and dpdk 1.8.0, 
>>> and data TX fails (I have 2 40G interfaces passed through to the VM). I 
>>> now see the following error in the VM, which looks interesting...
>>>
>>> [    5.449672] i40e 0000:00:06.0: f4.33.31377 a1.2 n4.42 e1930
>>> [    5.525061] i40e 0000:00:06.0: FCoE capability is disabled
>>> [    5.528786] i40e 0000:00:06.0: MAC address: 68:05:ca:2e:80:50
>>> [    5.534491] i40e 0000:00:06.0: SAN MAC: 68:05:ca:2e:80:54
>>> [    5.544081] i40e 0000:00:06.0: AQ Querying DCB configuration failed: aq_err 1
>>> [    5.545870] i40e 0000:00:06.0: DCB init failed -53, disabled
>>> [    5.547462] i40e 0000:00:06.0: fcoe queues = 0
>>> [    5.548970] i40e 0000:00:06.0: irq 43 for MSI/MSI-X
>>> [    5.548987] i40e 0000:00:06.0: irq 44 for MSI/MSI-X
>>> [    5.549012] i40e 0000:00:06.0: irq 45 for MSI/MSI-X
>>> [    5.549028] i40e 0000:00:06.0: irq 46 for MSI/MSI-X
>>> [    5.549044] i40e 0000:00:06.0: irq 47 for MSI/MSI-X
>>> [    5.549059] i40e 0000:00:06.0: irq 48 for MSI/MSI-X
>>> [    5.549074] i40e 0000:00:06.0: irq 49 for MSI/MSI-X
>>> [    5.549089] i40e 0000:00:06.0: irq 50 for MSI/MSI-X
>>> [    5.549103] i40e 0000:00:06.0: irq 51 for MSI/MSI-X
>>> [    5.549117] i40e 0000:00:06.0: irq 52 for MSI/MSI-X
>>> [    5.549132] i40e 0000:00:06.0: irq 53 for MSI/MSI-X
>>> [    5.549146] i40e 0000:00:06.0: irq 54 for MSI/MSI-X
>>> [    5.549160] i40e 0000:00:06.0: irq 55 for MSI/MSI-X
>>> [    5.549174] i40e 0000:00:06.0: irq 56 for MSI/MSI-X
>>> [    5.579062] i40e 0000:00:06.0: enabling bridge mode: VEB
>>> [    5.615344] i40e 0000:00:06.0: PHC enabled
>>> [    5.636028] i40e 0000:00:06.0: PCI-Express: Speed 8.0GT/s Width x8
>>> [    5.639822] audit: type=1305 audit(1426797692.463:4): audit_pid=345
>>> old=0 auid=4294967295 ses=4294967295
>>> subj=system_u:system_r:auditd_t:s0 res=1
>>> [    5.651225] i40e 0000:00:06.0: Features: PF-id[0] VFs: 128 VSIs:
>>> 130 QP: 4 RX: 1BUF RSS FD_ATR FD_SB NTUPLE PTP
>>> [   12.720451] SELinux: initialized (dev tmpfs, type tmpfs), uses
>>> transition SIDs
>>> [   15.909477] SELinux: initialized (dev tmpfs, type tmpfs), uses
>>> transition SIDs
>>> [   61.553491] i40e 0000:00:06.0 eth2: NIC Link is Down
>>> [   61.554132] i40e 0000:00:06.0 eth2: the driver failed to link because an unqualified module was detected.     <<<<<<<<<<<<<<<<<<<<
>>> [   61.555331] IPv6: ADDRCONF(NETDEV_UP): eth2: link is not ready
>>>
>>>
>>>
>>> With DPDK, I see the following output in the VM:
>>>
>>> testpmd> stop
>>> Telling cores to stop...
>>> Waiting for lcores to finish...
>>>
>>>   ---------------------- Forward statistics for port 0  ----------------------
>>>   RX-packets: 41328971       RX-dropped: 0             RX-total: 41328971
>>>   TX-packets: 0              TX-dropped: 0             TX-total: 0
>>>   ----------------------------------------------------------------------------
>>>
>>>   ---------------------- Forward statistics for port 1  ----------------------
>>>   RX-packets: 0              RX-dropped: 0             RX-total: 0
>>>   TX-packets: 41328972       TX-dropped: 0             TX-total: 41328972
>>>   ----------------------------------------------------------------------------
>>>
>>>   +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
>>>   RX-packets: 41328971       RX-dropped: 0             RX-total: 41328971
>>>   TX-packets: 41328972       TX-dropped: 0             TX-total: 41328972
>>>   ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>>
>>>
>>> Here it can be seen that one of the ports transmits just fine. I have 
>>> verified that this is not a card, PCI slot, or other hardware issue.
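>>>
>>> For completeness, the DPDK test in the VM was run roughly along these 
>>> lines (from memory, so the exact paths and options may have differed):
>>>
>>> # bind the two passed-through ports to igb_uio (DPDK 1.8 bind script)
>>> ./tools/dpdk_nic_bind.py --bind=igb_uio 00:05.0 00:06.0
>>> # io-forward between the two ports
>>> ./testpmd -c 0x7 -n 4 -- -i --portmask=0x3
>>> testpmd> start
>>> testpmd> stop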
>>>
>>> Please let me know if you need to see any specific debug info..
>>>
>>> Appreciate your help.
>>>
>>> On Wed, Mar 18, 2015 at 6:54 PM, Fujinaka, Todd <todd.fujin...@intel.com> 
>>> wrote:
>>>> Whoops, sorry, I missed the last line of your email.
>>>>
>>>> In any case, at least now you know how strongly I feel about 
>>>> updating the firmware. :)
>>>>
>>>> Todd Fujinaka
>>>> Software Application Engineer
>>>> Networking Division (ND)
>>>> Intel Corporation
>>>> todd.fujin...@intel.com
>>>> (503) 712-4565
>>>>
>>>> -----Original Message-----
>>>> From: Fujinaka, Todd
>>>> Sent: Wednesday, March 18, 2015 3:47 PM
>>>> To: 'jacob jacob'
>>>> Cc: Nelson, Shannon; e1000-devel@lists.sourceforge.net
>>>> Subject: RE: [E1000-devel] Fwd: PCI passthrough of 40G ethernet 
>>>> interface (Openstack/KVM)
>>>>
>>>> You've purposely ignored both Shannon's and my recommendations that you 
>>>> UPDATE YOUR FIRMWARE.
>>>>
>>>> Todd Fujinaka
>>>> Software Application Engineer
>>>> Networking Division (ND)
>>>> Intel Corporation
>>>> todd.fujin...@intel.com
>>>> (503) 712-4565
>>>>
>>>> -----Original Message-----
>>>> From: jacob jacob [mailto:opstk...@gmail.com]
>>>> Sent: Wednesday, March 18, 2015 3:45 PM
>>>> To: Fujinaka, Todd
>>>> Cc: Nelson, Shannon; e1000-devel@lists.sourceforge.net
>>>> Subject: Re: [E1000-devel] Fwd: PCI passthrough of 40G ethernet 
>>>> interface (Openstack/KVM)
>>>>
>>>> As mentioned in the previous email, I have already tried the latest 
>>>> available drivers:
>>>> "
>>>> Have tried the following as well and still see the issue:
>>>> a) Used 1.2.37 version of i40e driver in the VM
>>>> b) Used dpdk 1.8.0 in the VM
>>>> "
>>>>
>>>> Will try again after updating the firmware and post results.
>>>>
>>>> On Wed, Mar 18, 2015 at 6:31 PM, Fujinaka, Todd <todd.fujin...@intel.com> 
>>>> wrote:
>>>>> At the very least you should definitely be using a newer driver and 
>>>>> firmware.
>>>>>
>>>>> Todd Fujinaka
>>>>> Software Application Engineer
>>>>> Networking Division (ND)
>>>>> Intel Corporation
>>>>> todd.fujin...@intel.com
>>>>> (503) 712-4565
>>>>>
>>>>> -----Original Message-----
>>>>> From: jacob jacob [mailto:opstk...@gmail.com]
>>>>> Sent: Wednesday, March 18, 2015 3:07 PM
>>>>> To: Nelson, Shannon
>>>>> Cc: e1000-devel@lists.sourceforge.net
>>>>> Subject: Re: [E1000-devel] Fwd: PCI passthrough of 40G ethernet 
>>>>> interface (Openstack/KVM)
>>>>>
>>>>> On Wed, Mar 18, 2015 at 5:54 PM, Nelson, Shannon 
>>>>> <shannon.nel...@intel.com> wrote:
>>>>>>> -----Original Message-----
>>>>>>> From: jacob jacob [mailto:opstk...@gmail.com]
>>>>>>> Sent: Wednesday, March 18, 2015 11:26 AM
>>>>>>> To: e1000-devel@lists.sourceforge.net
>>>>>>> Subject: [E1000-devel] Fwd: PCI passthrough of 40G ethernet 
>>>>>>> interface
>>>>>>> (Openstack/KVM)
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> Seeing failures when trying to do PCI passthrough of an Intel XL710 
>>>>>>> 40G interface to a KVM VM.
>>>>>>>     0a:00.1 Ethernet controller: Intel Corporation Ethernet 
>>>>>>> Controller
>>>>>>> XL710 for 40GbE QSFP+ (rev 01)
>>>>>>>
>>>>>>> From dmesg on host:
>>>>>>>
>>>>>>> [80326.559674] kvm: zapping shadow pages for mmio generation wraparound
>>>>>>> [80327.271191] kvm [175994]: vcpu0 unhandled rdmsr: 0x1c9
>>>>>>> [80327.271689] kvm [175994]: vcpu0 unhandled rdmsr: 0x1a6
>>>>>>> [80327.272201] kvm [175994]: vcpu0 unhandled rdmsr: 0x1a7
>>>>>>> [80327.272681] kvm [175994]: vcpu0 unhandled rdmsr: 0x3f6
>>>>>>> [80327.376186] kvm [175994]: vcpu0 unhandled rdmsr: 0x606
>>>>>>>
>>>>>>> The PCI device is still visible in the VM but data transfer fails.
>>>>>>>
>>>>>>> With the i40e driver, the data transfer fails.
>>>>>>>  Relevant dmesg output:
>>>>>>>  [   11.544088] i40e 0000:00:05.0 eth1: NIC Link is Up 40 Gbps Full
>>>>>>> Duplex, Flow Control: None
>>>>>>> [   11.689178] i40e 0000:00:06.0 eth2: NIC Link is Up 40 Gbps Full
>>>>>>> Duplex, Flow Control: None
>>>>>>> [   16.704071] ------------[ cut here ]------------
>>>>>>> [   16.705053] WARNING: CPU: 1 PID: 0 at net/sched/sch_generic.c:303
>>>>>>> dev_watchdog+0x23e/0x250()
>>>>>>> [   16.705053] NETDEV WATCHDOG: eth1 (i40e): transmit queue 1 timed out
>>>>>>> [   16.705053] Modules linked in: cirrus ttm drm_kms_helper i40e drm
>>>>>>> ppdev serio_raw i2c_piix4 virtio_net parport_pc ptp 
>>>>>>> virtio_balloon crct10dif_pclmul pps_core parport pvpanic 
>>>>>>> crc32_pclmul ghash_clmulni_intel virtio_blk crc32c_intel 
>>>>>>> virtio_pci virtio_ring virtio ata_generic pata_acpi
>>>>>>> [   16.705053] CPU: 1 PID: 0 Comm: swapper/1 Not tainted
>>>>>>> 3.18.7-200.fc21.x86_64 #1
>>>>>>> [   16.705053] Hardware name: Fedora Project OpenStack Nova, BIOS
>>>>>>> 1.7.5-20140709_153950- 04/01/2014
>>>>>>> [   16.705053]  0000000000000000 2e5932b294d0c473 ffff88043fc83d48
>>>>>>> ffffffff8175e686
>>>>>>> [   16.705053]  0000000000000000 ffff88043fc83da0 ffff88043fc83d88
>>>>>>> ffffffff810991d1
>>>>>>> [   16.705053]  ffff88042958f5c0 0000000000000001 ffff88042865f000
>>>>>>> 0000000000000001
>>>>>>> [   16.705053] Call Trace:
>>>>>>> [   16.705053]  <IRQ>  [<ffffffff8175e686>] dump_stack+0x46/0x58
>>>>>>> [   16.705053]  [<ffffffff810991d1>] warn_slowpath_common+0x81/0xa0
>>>>>>> [   16.705053]  [<ffffffff81099245>] warn_slowpath_fmt+0x55/0x70
>>>>>>> [   16.705053]  [<ffffffff8166e62e>] dev_watchdog+0x23e/0x250
>>>>>>> [   16.705053]  [<ffffffff8166e3f0>] ? dev_graft_qdisc+0x80/0x80
>>>>>>> [   16.705053]  [<ffffffff810fd52a>] call_timer_fn+0x3a/0x120
>>>>>>> [   16.705053]  [<ffffffff8166e3f0>] ? dev_graft_qdisc+0x80/0x80
>>>>>>> [   16.705053]  [<ffffffff810ff692>] run_timer_softirq+0x212/0x2f0
>>>>>>> [   16.705053]  [<ffffffff8109d7a4>] __do_softirq+0x124/0x2d0
>>>>>>> [   16.705053]  [<ffffffff8109db75>] irq_exit+0x125/0x130
>>>>>>> [   16.705053]  [<ffffffff817681d8>] smp_apic_timer_interrupt+0x48/0x60
>>>>>>> [   16.705053]  [<ffffffff817662bd>] apic_timer_interrupt+0x6d/0x80
>>>>>>> [   16.705053]  <EOI>  [<ffffffff811005c8>] ? hrtimer_start+0x18/0x20
>>>>>>> [   16.705053]  [<ffffffff8105ca96>] ? native_safe_halt+0x6/0x10
>>>>>>> [   16.705053]  [<ffffffff810f81d3>] ? rcu_eqs_enter+0xa3/0xb0
>>>>>>> [   16.705053]  [<ffffffff8101ec7f>] default_idle+0x1f/0xc0
>>>>>>> [   16.705053]  [<ffffffff8101f64f>] arch_cpu_idle+0xf/0x20
>>>>>>> [   16.705053]  [<ffffffff810dad35>] cpu_startup_entry+0x3c5/0x410
>>>>>>> [   16.705053]  [<ffffffff8104a2af>] start_secondary+0x1af/0x1f0
>>>>>>> [   16.705053] ---[ end trace 7bda53aeda558267 ]---
>>>>>>> [   16.705053] i40e 0000:00:05.0 eth1: tx_timeout recovery level 1
>>>>>>> [   16.705053] i40e 0000:00:05.0: i40e_vsi_control_tx: VSI seid 519 Tx
>>>>>>> ring 0 disable timeout
>>>>>>> [   16.744198] i40e 0000:00:05.0: i40e_vsi_control_tx: VSI seid 520 Tx
>>>>>>> ring 64 disable timeout
>>>>>>> [   16.779322] i40e 0000:00:05.0: i40e_ptp_init: added PHC on eth1
>>>>>>> [   16.791819] i40e 0000:00:05.0: PF 40 attempted to control timestamp
>>>>>>> mode on port 1, which is owned by PF 1
>>>>>>> [   16.933869] i40e 0000:00:05.0 eth1: NIC Link is Up 40 Gbps Full
>>>>>>> Duplex, Flow Control: None
>>>>>>> [   18.853624] SELinux: initialized (dev tmpfs, type tmpfs), uses
>>>>>>> transition SIDs
>>>>>>> [   22.720083] i40e 0000:00:05.0 eth1: tx_timeout recovery level 2
>>>>>>> [   22.826993] i40e 0000:00:05.0: i40e_vsi_control_tx: VSI seid 519 Tx
>>>>>>> ring 0 disable timeout
>>>>>>> [   22.935288] i40e 0000:00:05.0: i40e_vsi_control_tx: VSI seid 520 Tx
>>>>>>> ring 64 disable timeout
>>>>>>> [   23.669555] i40e 0000:00:05.0: i40e_ptp_init: added PHC on eth1
>>>>>>> [   23.682067] i40e 0000:00:05.0: PF 40 attempted to control timestamp
>>>>>>> mode on port 1, which is owned by PF 1
>>>>>>> [   23.722423] i40e 0000:00:05.0 eth1: NIC Link is Up 40 Gbps Full
>>>>>>> Duplex, Flow Control: None
>>>>>>> [   23.800206] i40e 0000:00:06.0: i40e_ptp_init: added PHC on eth2
>>>>>>> [   23.813804] i40e 0000:00:06.0: PF 48 attempted to control timestamp
>>>>>>> mode on port 0, which is owned by PF 0
>>>>>>> [   23.855275] i40e 0000:00:06.0 eth2: NIC Link is Up 40 Gbps Full
>>>>>>> Duplex, Flow Control: None
>>>>>>> [   38.720091] i40e 0000:00:05.0 eth1: tx_timeout recovery level 3
>>>>>>> [   38.725844] random: nonblocking pool is initialized
>>>>>>> [   38.729874] i40e 0000:00:06.0: HMC error interrupt
>>>>>>> [   38.733425] i40e 0000:00:06.0: i40e_vsi_control_tx: VSI seid 518 Tx
>>>>>>> ring 0 disable timeout
>>>>>>> [   38.738886] i40e 0000:00:06.0: i40e_vsi_control_tx: VSI seid 521 Tx
>>>>>>> ring 64 disable timeout
>>>>>>> [   39.689569] i40e 0000:00:06.0: i40e_ptp_init: added PHC on eth2
>>>>>>> [   39.704197] i40e 0000:00:06.0: PF 48 attempted to control timestamp
>>>>>>> mode on port 0, which is owned by PF 0
>>>>>>> [   39.746879] i40e 0000:00:06.0 eth2: NIC Link is Down
>>>>>>> [   39.838356] i40e 0000:00:05.0: i40e_ptp_init: added PHC on eth1
>>>>>>> [   39.851788] i40e 0000:00:05.0: PF 40 attempted to control timestamp
>>>>>>> mode on port 1, which is owned by PF 1
>>>>>>> [   39.892822] i40e 0000:00:05.0 eth1: NIC Link is Down
>>>>>>> [   43.011610] i40e 0000:00:06.0 eth2: NIC Link is Up 40 Gbps Full
>>>>>>> Duplex, Flow Control: None
>>>>>>> [   43.059976] i40e 0000:00:05.0 eth1: NIC Link is Up 40 Gbps Full
>>>>>>> Duplex, Flow Control: None
>>>>>>>
>>>>>>>
>>>>>>> Would appreciate any information on how to debug this issue 
>>>>>>> further and if the "unhandled rdmsr" logs from KVM indicate some 
>>>>>>> issues with the device passthrough.
>>>>>>>
>>>>>>> Thanks
>>>>>>> Jacob
>>>>>>
>>>>>> I have no idea on the "unhandled rdmsr" messages.
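>>>>>>
>>>>>> (For what it's worth, those usually just mean the guest tried to read 
>>>>>> MSRs that KVM doesn't emulate. You can set the kvm module's ignore_msrs 
>>>>>> parameter so such reads return 0 instead of faulting the guest, though 
>>>>>> I doubt it has anything to do with the TX failure.)
>>>>>>
>>>>>> echo 1 > /sys/module/kvm/parameters/ignore_msrs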
>>>>>>
>>>>>> As for the driver, we can't do much debugging without version 
>>>>>> information from the driver and the NIC - the best way to get this is 
>>>>>> from "ethtool -i".  If this is the same setup as from your previous 
>>>>>> thread on another forum, then I believe you're using a NIC with the 
>>>>>> e800013fd firmware from late last summer, and that you saw these issues 
>>>>>> with both the 1.2.9-k and the 1.2.37 version drivers.  I suggest the 
>>>>>> next step would be to update the NIC firmware as there are some 
>>>>>> performance and stability updates available that deal with similar 
>>>>>> issues.  Please see the Intel Networking support webpage at 
>>>>>> https://downloadcenter.intel.com/download/24769 and look for the 
>>>>>> NVMUpdatePackage.zip.
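>>>>>>
>>>>>> Applying it is fairly mechanical; roughly the following, though the 
>>>>>> exact archive and directory names vary with the package version:
>>>>>>
>>>>>> unzip NVMUpdatePackage.zip
>>>>>> cd <extracted Linux_x64 directory>
>>>>>> chmod +x nvmupdate64e
>>>>>> ./nvmupdate64e    # interactive; run as root and follow the prompts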
>>>>>>
>>>>>> sln
>>>>>
>>>>>
>>>>> Yes. This is the same setup on which PCI passthrough and data transfer 
>>>>> work fine for the Intel 10G 82599 interfaces.
>>>>>
>>>>> Here are some additional details:
>>>>>
>>>>> Host CPU : Model name:            Intel(R) Xeon(R) CPU E5-2630 v2 @2.60GHz
>>>>>
>>>>> NIC: Manufacturer Part Number: XL710QDA1BLK
>>>>> Ethernet controller: Intel Corporation Ethernet Controller XL710 for 
>>>>> 40GbE QSFP+ (rev 01)
>>>>>
>>>>> # ethtool -i enp9s0
>>>>>  driver: i40e
>>>>>  version: 1.2.6-k
>>>>>  firmware-version: f4.22 a1.1 n04.24 e800013fd
>>>>>  bus-info: 0000:09:00.0
>>>>>  supports-statistics: yes
>>>>>  supports-test: yes
>>>>>  supports-eeprom-access: yes
>>>>>  supports-register-dump: yes
>>>>>  supports-priv-flags: no
>>>>>
>>>>> Have tried the following as well and still see the issue:
>>>>> a) Used 1.2.37 version of i40e driver in the VM
>>>>> b) Used dpdk 1.8.0 in the VM
>>>>>
>>>>>
>>>>> Everything works fine if the interfaces are used on the host itself.
>>>>> The TX failure is seen only when the device is passed through to a VM.
>>>>> I'll look into updating the firmware on the NICs.
>>>>>
>>>>> Curious to know if anyone has successfully tested this with any 
>>>>> version of OpenStack.
>>>>>
>>>>> Thanks
>>>>> Jacob
>>>>>