On 07/11/2012 06:18 PM, akepner wrote:
> Using the 3.7.21 version of the ixgbe driver we can reliably
> produce a crash with this signature:
>
> BUG: unable to handle kernel NULL pointer dereference at 006c
> IP: [] ixgbe_poll+0x9df/0x1710 [ixgbe]
> PGD 814c7b067 PUD 8074dd067 PMD 0
On 07/23/2012 12:41 AM, Pekka Riikonen wrote:
> Hi,
>
> In our 64 byte packet test with 12 10GbE ports we encountered some
> interesting softlockups and interrupt rates. For some reason suddenly we
> started seeing softlockups usually in kworker (doing various work) while
> processing packets.
On 07/26/2012 12:30 AM, Pekka Riikonen wrote:
>
>>> The problem went away with the hack in ixgbe_poll(), but it got me
>>> thinking: why is the ITR value not always updated in ixgbe_poll(),
>>> rather than only after napi_complete()? It should be more stable if it
>>> were updated at each poll().
On 07/26/2012 11:55 PM, Pekka Riikonen wrote:
>
> On Thu, 26 Jul 2012, Alexander Duyck wrote:
>> I suspect the reason why you are seeing so many interrupts is simply
>> because you have so many queues. At 16 queues per port with 12 ports
>> you are looking at 192 queues.
On 07/30/2012 11:54 AM, Pekka Riikonen wrote:
>
> On Fri, 27 Jul 2012, Alexander Duyck wrote:
>
>> Another option if you cannot lower the number of queues would be to
>> reduce the number of q_vectors. What you could do is modify
>> ixgbe_set_interrupt_capability
On 07/31/2012 02:40 PM, Chris Friesen wrote:
> There is a comment in ixgbe_reset_hw_vf() in the ixgbevf driver that
> says, "we cannot reset while the RSTI / RSTD bits are asserted".
>
> According to the datasheet, this is false. We cannot reset while the
> RSTI bit is asserted, but the RSTD bi
On 07/31/2012 03:07 PM, Chris Friesen wrote:
> On 07/31/2012 02:37 PM, Chris Friesen wrote:
>> So is it really the case that it can take up to two seconds for the
>> ixgbevf driver to notice loss of carrier?
>>
>> If so, I'm quite surprised. I would have expected this to be instant.
> If this is
On 08/01/2012 10:36 AM, Chris Friesen wrote:
> On 08/01/2012 09:38 AM, Alexander Duyck wrote:
>> On 07/31/2012 02:40 PM, Chris Friesen wrote:
>>> There is a comment in ixgbe_reset_hw_vf() in the ixgbevf driver that
>>> says, "we cannot reset while t
On 08/02/2012 12:18 AM, Pekka Riikonen wrote:
>
>>> Why is num_tx_queues by default always the same as num_rx_queues in
>>> ixgbe?
>>>
>>> Pekka
>> The main reason is because of the ATR feature. It expects us to be able
>> to receive packets on the same queue index as the queue we transmitted
On 08/10/2012 03:53 PM, Chris Friesen wrote:
> On 08/08/2012 05:20 PM, Brandeburg, Jesse wrote:
>> You can enable flow-director on UDP packets with ethtool.
> What's the command for that? I'm using ixgbe 3.6.7, does it support
> that or do I need to upgrade?
>
>
> Also, I tried enabling perfect f
pci_get_device(PCI_VENDOR_ID_INTEL, dev_id, vfdev);
> }
> +
> return false;
> }
>
As the author of commit 9297127b9cdd8d30c829ef5fd28b7cc0323a7bcd it
would have been nice to include me on the CC since I am probably one of
the best people to review this patch. That bein
On 08/16/2012 04:25 PM, Chris Friesen wrote:
> On 08/16/2012 05:07 PM, Chris Friesen wrote:
>> On 08/16/2012 04:00 PM, Waskiewicz Jr, Peter P wrote:
>>> If you disable RSS, then all non-matched FDIR flows will go to queue 0
>>> by default. If you're using the upstream driver, you'll need to
>>> co
On Thu, Aug 23, 2012 at 4:29 PM, Stephen Hemminger
wrote:
> Trying to setup new X540 card I have is not allowing VF's to be setup.
>
> [ 11.481677] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver -
> version 3.8.21-k
> [ 11.481679] ixgbe: Copyright (c) 1999-2012 Intel Corporation.
> [
On 09/14/2012 05:22 AM, Dick Snippe wrote:
> On Wed, Sep 12, 2012 at 05:10:44PM -0700, Jesse Brandeburg wrote:
>
>> On Wed, 12 Sep 2012 22:47:55 +0200
>> Dick Snippe wrote:
>>
>>> On Wed, Sep 12, 2012 at 04:05:02PM +, Brandeburg, Jesse wrote:
>>>
Hi Dick, we need to know exactly what you
On 09/14/2012 11:40 AM, Dick Snippe wrote:
> On Fri, Sep 14, 2012 at 10:40:53AM -0700, Alexander Duyck wrote:
>
>> On 09/14/2012 05:22 AM, Dick Snippe wrote:
>>> On Wed, Sep 12, 2012 at 05:10:44PM -0700, Jesse Brandeburg wrote:
>>>
>>>> On Wed, 12 Sep 2
We are currently working on a software workaround. It requires changes
to both the PF and VF in order to support it.
The likely implementation will require enabling Jumbo frames on the PF
in order to enable Jumbo frames on the VFs, and any legacy VFs will be
disabled at that time.
Thanks,
Alex
On 10/01/2012 03:22 PM, Nathan March wrote:
> Sorry, forgot to mention that this is confirmed on the latest driver
> from sourceforge (3.10.17)
>
> - Nathan
>
> On 10/1/2012 2:42 PM, Nathan March wrote:
>> Hi All,
>>
>> Running into what seems to be a very strange bug, that's specific only
>> to
On 10/03/2012 10:51 AM, Yinghai Lu wrote:
> Need the ixgbe guys to close the loop to use set_max_vfs instead of
> kernel parameters.
>
> Signed-off-by: Yinghai Lu
> Cc: Jeff Kirsher
> Cc: Jesse Brandeburg
> Cc: Greg Rose
> Cc: "David S. Miller"
> Cc: John Fastabend
> Cc: e1000-devel@lists.sourceforg
On 10/06/2012 11:50 AM, Frank Gamper wrote:
> Hi everybody,
> I'm trying to use the Hardware Filters from the ixgbe network driver but
> somehow I always get error messages when trying to set the filters.
> I installed the ixgbe module like this:
> cd ixgbe-3.9.15/src
> make
> make install
> modpr
On 10/09/2012 12:06 AM, Frank Gamper wrote:
> Hi Alex,
> I'm using kernel version 3.2.21-12core
> I installed the module now by doing: sudo modprobe ixgbe
> FdirPballoc=2,2 FdirMode=2,2
> I enable ntuple filtering by doing ethtool -K eth2 ntuple on (I
> thought it would be enabled automatically)
>
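For reference, a hedged sketch of the ntuple path being attempted above, using ethtool's flow-type rule syntax (the interface name, addresses, port, and queue number are illustrative only, not taken from the thread):

```shell
# Enable ntuple (Flow Director) filtering on the port
ethtool -K eth2 ntuple on
# Steer TCP/IPv4 traffic for a given destination IP/port to RX queue 2
ethtool -N eth2 flow-type tcp4 dst-ip 192.168.10.2 dst-port 80 action 2
# List the currently installed classification rules
ethtool -n eth2
```

These commands require root and a driver/kernel combination that exposes the ntuple feature; older ixgbe releases may need the FdirMode module parameter instead, as discussed above.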
On 10/14/2012 10:19 AM, Dmitry Fleytman wrote:
> There is a race condition in the e1000 driver.
> It enables HW receive before RX ring initialization.
> With specific timing this may lead to host memory corruption
> due to a DMA write to an arbitrary memory location.
> The following patch fixes this issue.
should be
> ignored. This is a pure QEMU bug and we'll fix it there.
>
> Thanks,
> Dmitry.
>
> On Mon, Oct 15, 2012 at 8:53 PM, Alexander Duyck
> wrote:
>> On 10/14/2012 10:19 AM, Dmitry Fleytman wrote:
>>> There is a race condition in e1000 driver.
>&g
t details?
>
> Dmitry.
>
>
> On Mon, Oct 15, 2012 at 10:03 PM, Alexander Duyck
> wrote:
>> Hello Dmitry,
>>
>> My concern is that on many of our parts the behavior is to initialize
>> both the head and tail to 0, enable Rx for either the ring or device
>>
On 10/15/2012 10:57 PM, Linghu Yi (Tiger) wrote:
> Hi,
>
>
>
> From the Intel 82580 datasheet, the set/get Rx rules for filtering are
> supported, but the igb 3.4.8 driver has no such functions.
>
> Do you know why, and how can we get a working driver for the RX rule class?
> We are using:
>
> ethtool -u eth
>
On 10/17/2012 07:41 AM, ratheesh kannoth wrote:
> igb_change_mtu only changes adapter->rx_ring[0]->rx_buffer_len.
>
> 1) Don't we have to change adapter->tx_ring[0]->rx_buffer_len?
> 2) Is there any way to set different values for
> adapter->tx_ring[0]->rx_buffer_len and
> adapter->rx_ri
On 10/18/2012 06:21 AM, ratheesh kannoth wrote:
> On Thu, Oct 18, 2012 at 5:40 AM, ratheesh kannoth
> wrote:
>> On Thu, Oct 18, 2012 at 1:39 AM, Alexander Duyck
>> wrote:
>>> The current igb driver does receive the frame data into 2K buffers, and
>>>
We use dma_map_single here because we have a virtual pointer and not a
page. If you look in the kernel at the file
include/asm-generic/dma-mapping-common.h you will see that
dma_map_single_attrs which is what ends up being called when we call
dma_map_single will convert the pointer to a page and t
On 11/04/2012 08:42 AM, Fred Klassen wrote:
> I must be dealing with the wrong documents. I am using the latest 82599
> and x540 data sheets, and their specification updates. I tried logging
> onto my Intel support account, but cannot find any ixgbe related documents
> with a section 4.4.3.5.12. Am
On 11/04/2012 03:15 AM, Geoge.Q wrote:
> Intel 82599 Driver issue
>
> 1. Background:
>1) Linux 2.6.32
>2) igbex-2.9.7
>3) Linux box with 8-core CPU and with two Intel 82599 NIC;
I don't believe we ever released a 2.9.7 version of the driver. Do you
mean perhaps 3.9.17?
> 2. Topology:
Mark,
I would recommend checking the dmesg log after you have
unloaded/reloaded the e1000e driver. There is likely an error that is
being reported that is preventing the driver from loading and knowing
what the error is that is being reported would go a long way toward
telling us what the issue i
urce/linux/+bug/1072722
>
>
> On Thu, Nov 8, 2012 at 1:04 PM, Alexander Duyck
> mailto:alexander.h.du...@intel.com>> wrote:
>
> Mark,
>
> I would recommend checking the dmesg log after you have
> unloaded/reloaded the e1000e driver. There is likely an
Corporation
> physical id: 19
> bus info: pci@:00:19.0
> version: 02
> width: 32 bits
> clock: 33MHz
> capabilities: pm msi cap_list
> configuration: latency=0
>
That is kind of what I figured. I am assuming that there is an error
returned during e1000_probe that is causing the device to not show up as
a network device.
Thanks,
Alex
On 11/08/2012 11:25 AM, Mark Bidewell wrote:
> Thanks I will give that a try. I forgot to mention that e1000e does
> not
On 11/20/2012 02:56 PM, Ben Greear wrote:
> We are trying out some new hardware, but it's not able to go above about
> 4Gbps in each direction
> (using modified pktgen). The two ixgbe ports are cabled to each other.
>
> Ethernet controller:
>
> 03:00.0 Ethernet controller: Intel Corporation 82599
On 11/20/2012 05:05 PM, Ben Greear wrote:
> On 11/20/2012 04:34 PM, Ben Greear wrote:
>> On 11/20/2012 04:18 PM, Ben Greear wrote:
>
Also, have you checked to make sure the feature set is comparable?
For
instance the E5 can support VT-d. If that is enabled it can have a
negati
On 11/21/2012 09:18 AM, Ben Greear wrote:
> On 11/21/2012 09:09 AM, Alexander Duyck wrote:
>> On 11/20/2012 05:05 PM, Ben Greear wrote:
>>> On 11/20/2012 04:34 PM, Ben Greear wrote:
>>>> On 11/20/2012 04:18 PM, Ben Greear wrote:
>>>
>>>>>&g
On 11/27/2012 10:50 AM, Richard Cochran wrote:
> On Tue, Nov 13, 2012 at 12:33:03AM +0400, Andrey Wagin wrote:
>
>> I found that this test returns 372293 req/sec without a problematic patch
>> and only 334911 req/sec with this patch. A degradation is about 10%.
> Wow, that seems a little high. Are
On 12/04/2012 06:21 PM, Ben Greear wrote:
> On 12/04/2012 05:48 PM, Hisashi T Fujinaka wrote:
>> On Wed, 5 Dec 2012, Skidmore, Donald C wrote:
>>
> Looking for something like 'lspci -vv'?
That shows some 'Capabilities' mentioning 5GT/s, but is that just what the
NIC is theoretica
On 12/05/2012 08:30 PM, Ben Greear wrote:
> I'm curious if/how I can set the number of tx/rx queues (or otherwise force
> all
> pkts to be received in a single queue) in the ixgbe driver. I'm using
> 3.5.7+ kernel.
>
> Now, one might ask why?
>
> I have a bridging/network-impairment module that
On 12/06/2012 09:10 AM, Ben Greear wrote:
> On 12/06/2012 09:05 AM, Alexander Duyck wrote:
>> On 12/05/2012 08:30 PM, Ben Greear wrote:
>>> I'm curious if/how I can set the number of tx/rx queues (or otherwise
>>> force all
>>> pkts to be received in a sing
On 12/10/2012 11:10 AM, Ben Greear wrote:
> On 12/06/2012 09:51 AM, Alexander Duyck wrote:
>> On 12/06/2012 09:10 AM, Ben Greear wrote:
>>> On 12/06/2012 09:05 AM, Alexander Duyck wrote:
>>>> On 12/05/2012 08:30 PM, Ben Greear wrote:
>>>>> I'm c
wrote:
> Hi Alexander Duyck,
>
>
>
> We are building a Virtualization test bed with Intel 82599 NIC on Linux
> Host and KVM is used. Following is a brief description:
>
>
>
> *Host system:* Ubuntu 12.04 LTS (GNU/Linux 3.2.0-23-generic x86_64)
>
> *NIC: *In
On 12/20/2012 05:56 AM, 周介龙 wrote:
> Hi all,
> I am testing the receive side performance of an 82599EB NIC, using the
> newest stable driver, ixgbe 3.11. When receiving 10Gbps TCP pkts with (and only
> with) the ACK flag set, I found a lot of drop statistics in the output of the
> ifconfig command and rx_m
ACK to 1, there would be no errors
> or drops.
>
> Thanks. If any other info is needed, please let me know.
> Dillan
>
> On 2012-12-21 02:03:15, "Alexander Duyck" <mailto:alexander.h.du...@intel.com>> wrote:
>>On 12/20/2012 05:56 AM, 周介龙 wrote:
>&
On 01/16/2013 06:01 AM, Mritunjay Kumar wrote:
> Hi All,
>
> We are trying to measure throughput of IXGBE(82599 10G controller) driver for
> different packet size. We are enabling bridging between two 10G ports and
> pumping traffic.
> We are seeing following:-
>
> 1. For packet size 64/128
On 01/22/2013 03:16 PM, Gabe Black wrote:
> I have been spending some time in the igb driver code to understand what I
> would have to change to make more use of the header-split features of the
> cards.
>
> I noticed that header-split is always set to
> E1000_SRRCTL_DESCTYPE_HDR_SPLIT_ALWAYS, w
On 01/23/2013 10:24 AM, Gabe Black wrote:
>> -Original Message-
>> From: Alexander Duyck [mailto:alexander.h.du...@intel.com]
>> It is interesting that you mention the 82598. To the best of my
>> knowledge it does have the same behavior as the igb parts current
On 01/23/2013 03:29 PM, Sascha Fahl wrote:
> Dear Mr Duyck,
>
> We just purchased an Intel X520-SR1 NIC and installed it on a server
> running Debian 7 with Linux 3.7.4. In the context of a research project
> we'd like to use the hardware packet filtering feature to filter out all
> network traffi
On 02/14/2013 10:51 PM, Sasikanth babu wrote:
> Hi all,
>
>After enabling Queueing/Scheduling and Classification, I'm getting
> NMIs continuously and have also observed an RCU stall. I had turned off
> the gro, lro, tso and gso settings on eth8.
>
> IGB version:3.2.9
> root@Shash:/>ethtool -i eth0
On 02/15/2013 10:31 AM, Sasikanth babu wrote:
> On Fri, Feb 15, 2013 at 10:50 PM, Alexander Duyck
> wrote:
>> On 02/15/2013 09:12 AM, Sasikanth babu wrote:
>>> On Fri, Feb 15, 2013 at 10:25 PM, Alexander Duyck
>>> wrote:
>>>> On 02/14/2013 10
On 02/15/2013 09:12 AM, Sasikanth babu wrote:
> On Fri, Feb 15, 2013 at 10:25 PM, Alexander Duyck
> wrote:
>>
>> On 02/14/2013 10:51 PM, Sasikanth babu wrote:
>>> Hi all,
>>>
>>>After enabling Queueing/Scheduling and Classification, I'm ge
On 02/17/2013 01:26 AM, Sasikanth babu wrote:
> On Sat, Feb 16, 2013 at 12:29 AM, Sasikanth babu
> wrote:
>> On Sat, Feb 16, 2013 at 12:11 AM, Alexander Duyck
>> wrote:
>>> On 02/15/2013 10:31 AM, Sasikanth babu wrote:
>>>> On Fri, Feb 15, 2013 at 10:50 PM
On 02/20/2013 11:22 AM, Eric Dumazet wrote:
> On Wed, 2013-02-20 at 10:16 -0800, Alexander Duyck wrote:
>
>> The problem is the 256 byte alignment for L1_CACHE_BYTES is increasing
>> the size of the data and shared info significantly pushing us past the
>> 2K limit.
>>
On 02/19/2013 05:09 PM, Eric Dumazet wrote:
> On Tue, Feb 19, 2013 at 2:30 PM, Allan, Bruce W
> wrote:
>>> -Original Message-
>>> From: Andrew Morton [mailto:a...@linux-foundation.org]
>>> Sent: Tuesday, February 19, 2013 2:27 PM
>>> To: Wu, Fengguang
>>> Cc: Daniel Santos; Kirsher, Jeffr
On 02/20/2013 01:42 PM, Eric Dumazet wrote:
> On Wed, 2013-02-20 at 13:23 -0800, Alexander Duyck wrote:
>
>> NET_SKB_PAD is defined for the s390. It is already 32. If you look it
>> up we only have 2 definitions for NET_SKB_PAD, one specific to the s390
>> architect
I did a bit of digging and it looks like the issue is that the
ext_filter_mask is not being found in the message received in
rtnl_dump_ifinfo. I'm still trying to figure out why the kernel isn't
finding the flag when it was finding it previously, but I'm not much of
a netlink expert.
Thanks,
Ale
On 04/25/2013 12:24 PM, David Miller wrote:
> From: Alexander Duyck
> Date: Thu, 25 Apr 2013 11:29:04 -0700
>
>> I did a bit of digging and it looks like the issue is that the
>> ext_filter_mask is not being found in the message received in
>> rtnl_dump_ifinfo. I'
On 04/25/2013 01:25 PM, David Miller wrote:
> From: Alexander Duyck
> Date: Thu, 25 Apr 2013 13:20:24 -0700
>
>> On 04/25/2013 12:24 PM, David Miller wrote:
>>> From: Alexander Duyck
>>> Date: Thu, 25 Apr 2013 11:29:04 -0700
>>>
>>> diff
On 04/25/2013 01:49 PM, David Miller wrote:
> From: Stephen Hemminger
> Date: Thu, 25 Apr 2013 13:45:13 -0700
>
>> On Thu, 25 Apr 2013 13:36:06 -0700
>> Alexander Duyck wrote:
>>
>>> On 04/25/2013 01:25 PM, David Miller wrote:
>>>> From: Alexa
This sounds like standard buffer bloat due to the Rx FIFO.
One thing you can try doing to test this is to use the FdirPballoc
module parameter setting of 3. Note this is a comma separated list of
values so if you have multiple ports it would be 3,3,3 where the number
of entries is equal to the nu
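A minimal sketch of the module option described above, assuming a two-port ixgbe setup (the values follow the comma-separated one-entry-per-port convention from the advice; adjust the count to your port count):

```shell
# /etc/modprobe.d/ixgbe.conf (hypothetical file) -- one value per ixgbe port
options ixgbe FdirPballoc=3,3

# Or, when reloading the driver by hand for a quick test:
# modprobe -r ixgbe && modprobe ixgbe FdirPballoc=3,3
```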
On 05/28/2013 09:23 AM, Richard Cochran wrote:
> On Tue, May 28, 2013 at 03:58:07PM +, Vick, Matthew wrote:
>> On 5/27/13 2:21 AM, "Richard Cochran" wrote:
>
>> I would prefer it if we did a MAC check before these two TSICR checks,
>> since we're making some assumptions about the hardware wi
On 07/12/2013 01:56 AM, Jagdish Motwani wrote:
> Hi list,
> I am facing a strange issue with igb driver.
>
> If I set my MTU to 1000, then ping -s 1200 does not work (the same
> thing works with an e1000e interface).
>
> On further debugging, I reached
> http://git.kernel.org/cgit/l
This change limits the lower bound for max_frame_size to the size of a
standard Ethernet frame. This allows for feature parity with other
Intel-based drivers such as ixgbe.
Signed-off-by: Alexander Duyck
---
 src/igb_main.c | 4 ++++
 1 files changed, 4 insertions(+), 0 deletions(-)
On 07/13/2013 01:22 AM, jagdish.motw...@elitecore.com wrote:
>> On 07/12/2013 01:56 AM, Jagdish Motwani wrote:
>>> Hi list,
>>> I am facing a strange issue with igb driver.
>>>
>>> If I set my MTU to 1000, then ping -s 1200 does not work (the same
>>> thing works with an e1000e interface).
>
On 07/16/2013 05:17 AM, Christoph Mathys wrote:
> We are using a 3.6 kernel and a network card with igb driver
> (4.0.1-k). I noticed that the card allocates 4 rx and 4 tx irqs. Since
> we are only using this card for ethercat I would like to reduce this
> to one irq per direction, but I couldn't fi
14 bytes packets. - CPU is free by 50%. No
> rx_missed_errors and DROP_EN bit is set.)
>
> Rgds,
> Nishit Shah.
>
> On 5/18/2013 1:15 AM, Alexander Duyck wrote:
>> This sounds like standard buffer bloat due to the Rx FIFO.
>>
>> One thing you can try doing to test this is
In regards to issue 2 it is hard for me to say what is causing these
increased latencies. If you are not seeing rx_missed and no_dma errors
then it is likely that the part itself is not saturated. It could point
to a bottleneck somewhere in the CPU and/or memory subsystem.
You might want to doub
On 08/08/2013 04:56 AM, Stefan Assmann wrote:
> Currently carrier is forced off in igbvf_msix_other(). This seems
> unnecessary and causes multiple calls to igbvf_watchdog_task(), resulting
> in multiple link up messages when calling dhclient for example.
> [ 111.818106] igbvf :00:04.0: Link i
On 08/12/2013 12:09 PM, Alexey Stoyanov wrote:
> Hello
> I got one issue, and seems i need help from driver developers.
>
> I have a some servers located in a different datacenters around
> Russia, we used mostly 82575/827576 intel nic managed by e1000e and
> igb drivers. When i testing speed with
Based on the info you provided I would say one possible red flag would
be the flow control bits in the statistics. Specifically:
> tx_flow_control_xon: 0
> rx_flow_control_xon: 164
> tx_flow_control_xoff: 0
> rx_flow_control_xoff: 164
> rx_csum_offload_errors: 1
The fact
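A quick way to inspect the pause configuration and the xon/xoff counters called out above (the interface name is hypothetical):

```shell
# Show the current pause-frame (flow control) configuration for the port
ethtool -a eth0
# Watch the flow control counters from the driver statistics
ethtool -S eth0 | grep flow_control
```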
ytes: 0
> rx_queue_61_packets: 0
> rx_queue_61_bytes: 0
> rx_queue_62_packets: 0
> rx_queue_62_bytes: 0
> rx_queue_63_packets: 0
> rx_queue_63_bytes: 0
> rx_queue_64_packets: 0
> rx_queue_64_bytes: 0
> rx_queue_65_p
On 08/12/2013 03:28 PM, Alexey Stoyanov wrote:
> I did a reload of ixgbe with MQ=0,0 and RSS=1,1.
> There was no luck with speed.
>
> [ 3] local xxx.xxx.185.135 port 5001 connected with yy.yy.74.11 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0-20.0 sec 151 MBytes 63.1 Mbits
On 08/13/2013 05:34 AM, Alexey Stoyanov wrote:
>> I am not an expert when it comes to setting up rate limiting. What you
>> would need to do is setup a qdisc and configure it to limit your
>> outgoing traffic. You can probably find more information on how to do
>> that on the web as what you are
On 09/12/2013 06:55 AM, nirmoy das wrote:
> Hey,
>
> I am trying to write my own driver for the Intel 82599. After enabling 63 VFs
> using pci_enable_sriov(), how does the rx/tx queue distribution happen among
> the PF and VFs?
>
> Is the PF using the first pool of queues, e.g. in 16-pool mode does the PF get queues 0-7?
>
The P
Early versions of the ixgbe driver underperformed when compared to the igb
driver for small packet performance. I actually gave a presentation on
this a couple of years ago:
http://www.linuxplumbersconf.org/2011/ocw/system/presentations/423/original/optimizing-ixgbe-performance.pdf
My advice would b
If you need linear buffers I would recommend using
CONFIG_IGB_DISABLE_PACKET_SPLIT. The only time this would not provide
higher performance in your case is if you are running the driver on a
system with an IOMMU enabled.
Another option depending on your packet format might be to simply use
the page
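As a sketch of how the define mentioned above could be applied: the out-of-tree Intel drivers' Makefiles accept extra preprocessor defines via CFLAGS_EXTRA, so the rebuild might look like this (the flag name is taken from the advice above; verify both it and the CFLAGS_EXTRA mechanism against the README of your driver version):

```shell
# Rebuild the out-of-tree igb driver with packet split disabled
cd igb-*/src
make CFLAGS_EXTRA="-DCONFIG_IGB_DISABLE_PACKET_SPLIT"
sudo make install
# Reload so the new module takes effect
sudo rmmod igb && sudo modprobe igb
```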
On 10/03/2013 10:55 AM, venkatakrishna pari wrote:
> Hi ALL
>
> I have configured four Q vectors like rx-0, rx-1, tx-0 and tx-1. The ITR value
> is set to 1000 interrupts/second. But interrupts are firing at 4000
> interrupts/sec. When the Q vectors are changed to two Q vectors, TxRx-0 and
> TxRx-1, it is firing at 2
I do understand interrupts firing at ITR * number of queue vectors. But I
> tested *scenario-II* with the old ixgbe driver version 2.0.32.2. It is
> working with that driver.
>
> Could you please guide me how to resolve this issue.
>
> Thanks and Regards,
> venkat
>
>
>
> On
ence 'the header will have
> been copied out'?
>
>
> --^_^--
> Best Regards
> Liming
>
> -Original Message-
> From: ext Alexander Duyck [mailto:alexander.h.du...@intel.com]
> Sent: Tuesday, October 01, 2013 1:26 AM
> To: Yan, Liming (NSN - CN/Hangzhou)
Your traffic appears to all be received on one queue. Are you using
multiple source/destination IPs or a single pair? In order to improve
the performance you can either use multiple source/destination IPs in
your test or enable UDP RSS via the "ethtool -N" and specifying an
rx-flow-hash that supp
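A hedged example of the "ethtool -N" invocation described above (the interface name is hypothetical; "sdfn" requests hashing on source/destination IP plus source/destination port, which spreads UDP flows across queues):

```shell
# Include UDP ports in the RSS hash so UDP/IPv4 flows spread across queues
ethtool -N eth0 rx-flow-hash udp4 sdfn
# Confirm which fields are now included in the hash
ethtool -n eth0 rx-flow-hash udp4
```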
On 12/19/2013 10:31 AM, Scott Silverman wrote:
> We have three generations of servers running nearly identical software.
> Each subscribes to a variety of multicast groups taking in, on average,
> 200-300Mbps of data.
>
> The oldest generation (2x Xeon X5670, SuperMicro 6016T-NTRF, Intel
> X520-DA2
On 12/20/2013 12:38 PM, Matthew Kent wrote:
> Hello all,
>
> We've been doing some testing here with netem and packet corruption
> (http://www.linuxfoundation.org/collaborate/workgroups/networking/netem#Packet_corruption)
> and I believe we may have stumbled on an igb or bonding bug that leads t
On 12/20/2013 02:43 PM, Matthew Kent wrote:
> On Friday, December 20, 2013 at 2:02 PM, Alexander Duyck wrote:
>> On 12/20/2013 12:38 PM, Matthew Kent wrote:
>>> Hello all,
>>>
>>> We've been doing some testing here with netem and packet corruption
>&g
On 12/22/2013 09:09 PM, Ding Tianhong wrote:
> Use the recently added and possibly more efficient
> ether_addr_equal_unaligned instead of memcmp.
>
> Cc: "David S. Miller"
> Cc: net...@vger.kernel.org
> Cc: linux-ker...@vger.kernel.org
> Cc: e1000-devel@lists.sourceforge.net
> Signed-off-by: Di
On 01/02/2014 04:39 AM, Raj Ravi wrote:
> Hi,
> I have noticed this while loading ixgbe kernel module.
>
> When I do something like:
> insmod /path/to/ixgbe.ko max_vfs=5
>
> The above command causes the ixgbevf kernel module to be loaded automatically as well.
> The worst part is instead of loading ixgbevf wh
On 01/02/2014 08:26 AM, Raj Ravi wrote:
> Hi,
> Thanks.
>
> I think you may be right, I can see some udev rules related to ixgb.
>
> grep ixgb /etc/udev/rules.d/70-persistent-net.rules
> # PCI device 0x8086:0x10fb (ixgbe)
> # PCI device 0x8086:0x10fb (ixgbe)
> # PCI device 0x8086:0x10fb (ixgbe)
>
A VF isn't a real device so it shouldn't really have the concept of a
power state. The power state for the device is controlled via the PF.
I suspect the fact that ixgbevf is modifying power state on resume is
likely a bug.
Thanks,
Alex
On 01/07/2014 06:46 AM, Julia Lawall wrote:
> I was wonder
On 01/07/2014 07:44 AM, Julia Lawall wrote:
> On Tue, 7 Jan 2014, Alexander Duyck wrote:
>
>> A VF isn't a real device so it shouldn't really have the concept of a
>> power state. The power state for the device is controlled via the PF.
>> I suspect the fact tha
On 01/14/2014 02:16 PM, Chaitanya Lala wrote:
> Hello,
>
> I am trying to enable RSC/LRO on a VF (X540-AT2 based) which has been PCIe
> passthrough'd to a Ubuntu VM (12.10) running latest ixgbevf.
> The host machine is a dual socket E5-2600 machine with a bunch of dual
> port X540 cards, running R
hich indicate similar
> performance for GRO and RSC/LRO ?
>
> Thanks,
> Chaitanya
>
> On 1/14/14 3:02 PM, "Alexander Duyck" wrote:
>
>> My advice would be to use GRO on the VF instead of trying to enable
>> LRO/RSC. This will
On 01/15/2014 11:34 AM, Chaitanya Lala wrote:
> Hi Alex, ...
>
> On 1/15/14 8:12 AM, "Alexander Duyck" wrote:
>
>> You mentioned using a 3.15.1 driver for the PF, I was wondering what
>> version of the ixgbevf driver it was you were using?
> 2.12.1 i.e. late
leave on C1 only.
>
> There was a program called cpudmalatency.c or something that may
> be able to help you keep system more awake.
>
> --
> Jesse Brandeburg
>
>
>
> On Dec 19, 2013, at 2:57 PM, "Scott Silverman"
> <mailto:ssilve
As my options don't really match those on the spec, I thought
> I'd ask what you suggest I try here.
>
>
>
>
> Thanks,
>
> Scott Silverman | IT | Simplex Investments | 312-360-2444
>
> 230 S. LaSalle St., Suite 4-100,
On 02/03/2014 08:59 AM, Maksim M wrote:
> Hi, Alexander
>
>
>
> Recently, I came across an issue with the ixgbe driver.
> The matter is the following:
> IXGBE is working in NAPI , so I was sure that Rx interrupt was fired for
> fir
Maksim,
The check_hang_subtask function is only meant to be run once every 2
seconds. As such you should only be seeing one interrupt per vector
every 2 seconds. How is this overloading your CPU? Are you seeing the
interrupts fire at a rate faster than 1 every 2 seconds?
The function is meant
ply changing the setting for this
> option has resolved the performance issue we had. However, it
> is frustrating to not understand why it helps, or what the
> other effects of changing that setting might be.
>
>
> Thanks,
>
> Scott
The PCIe Device Serial Number should be included as a part of the PCI
configuration space. You can dump it with lspci -vvv. All it really
contains is just the MAC address with ff-ff placed between the 3rd and
4th bytes.
Thanks,
Alex
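To illustrate the relationship described above, a small sketch (the serial value is hypothetical; on a real system it comes from the "Device Serial Number" capability line in `lspci -vvv` output):

```shell
# Hypothetical serial number as reported by lspci -vvv
serial="90-e2-ba-ff-ff-12-34-56"
# Drop the ff-ff filler between the 3rd and 4th bytes to recover the MAC
mac=$(printf '%s\n' "$serial" | sed 's/-ff-ff-/-/' | tr '-' ':')
printf '%s\n' "$mac"   # -> 90:e2:ba:12:34:56
```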
On 02/12/2014 09:32 AM, Skidmore, Donald C wrote:
> I don't u
Scott,
You might try checking the dmesg log on the systems that come up as not
supporting DCA. There was a patch submitted a year or so ago that would
disable DCA on platforms that had a misconfigured APIC ID tag map.
It is possible that the platform might have had DCA disabled due to this
in th
place
> DCA? If so, what implication does the presence of "DCA" in
> enabled features on a platform that is meant to have DDIO
> instead of DCA?
>
> 2. What change from 3.18.7 to 3.19.1 would cause the feature
> to become Enabled on a p
(serial number)?
>
> Thanks in advance
>
>
>
> On Wednesday, February 12, 2014 10:47 PM, Alexander Duyck
> wrote:
>
> The PCIe Device Serial Number should be included as a part of the PCI
> configuration space. You can dump it with lspci -vvv. All it really
>