Hi Shyam,

We are working on cleaning up the Rx code path, which will also remove
the IXGBE_FLAG_IN_NETPOLL flag.
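
Roughly, the idea is to drop the IXGBE_FLAG_IN_NETPOLL test so the VF
Rx path always hands frames to GRO. As a sketch only (not the final
patch; the helper name here is illustrative), the cleaned-up delivery
would look like:

static void ixgbevf_rx_skb(struct ixgbevf_q_vector *q_vector,
                           struct sk_buff *skb)
{
        /* Always deliver inline from the NAPI poll context, with GRO;
         * no netif_rx()/backlog detour for the netpoll case.
         */
        napi_gro_receive(&q_vector->napi, skb);
}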

Thanks,
Emil

>-----Original Message-----
>From: Shyam Kaushik [mailto:sh...@zadarastorage.com]
>Sent: Monday, September 02, 2013 4:32 AM
>To: Rose, Gregory V
>Cc: e1000-devel@lists.sourceforge.net; Alex Lyakas; Yair
>Hershko
>Subject: [E1000-devel] Huge Performance Regression with
>adding of IXGBE_FLAG_IN_NETPOLL flag
>
>Gregory et al,
>
>
>
>We recently moved to Linux kernel 3.8 and therefore to ixgbevf driver
>version 2.7.12-k. We are seeing a huge performance regression in most
>cases.
>
>
>
>Our setup involves:
>
># the ixgbevf driver over an SR-IOV VF
>
># an SCST iSCSI target exposing a standard Linux RAM disk
>
>
>
>While doing 64K sequential writes, the latency of a single host IO has
>increased from under 1ms to 3ms. Previously (on 3.2 kernels), with
>multiple host IO threads running, we could reach 115MB/s with the RAM
>disk (115MB/s is the maximum our host/network configuration allows for
>the RAM disk). With the 2.7.12-k ixgbevf driver, however, we reach only
>60-80MB/s, and the throughput fluctuates heavily.
>
>
>We have nailed the issue down to an ixgbevf driver regression
>introduced by this patchset:
>
>http://www.spinics.net/lists/netdev/msg216479.html
>
>
>
>https://kernel.googlesource.com/pub/scm/linux/kernel/git/jkirsher/net-next/+/85624caff9decc8174f286e12e9d0038d9a6cced%5E
>
>
>
>Previously, packets always reached the SCST iSCSI target via the
>following path:
>
>iscsi_data_ready+0x2b/0x50 [iscsi_scst]
>tcp_rcv_established+0x53b/0x7e0
>tcp_v4_do_rcv+0x134/0x220
>tcp_v4_rcv+0x569/0x840
>ip_local_deliver_finish+0xde/0x280
>ip_local_deliver+0x4a/0x90
>ip_rcv_finish+0x119/0x360
>ip_rcv+0x21d/0x300
>__netif_receive_skb+0x602/0x760
>netif_receive_skb+0x23/0x90
>napi_gro_complete+0x84/0xe0
>dev_gro_receive+0x1d8/0x2b0
>napi_gro_receive+0x113/0x1b0
>ixgbevf_clean_rx_irq+0x2bd/0x410 [ixgbevf]
>ixgbevf_poll+0x9e/0x120 [ixgbevf]
>net_rx_action+0x134/0x260
>call_softirq+0x1c/0x30
>do_softirq+0x65/0xa0
>irq_exit+0x8e/0xb0
>do_IRQ+0x63/0xe0
>
>
>
>However, with this flag on, packets are always delivered through:
>
>iscsi_data_ready+0x2b/0x50 [iscsi_scst]
>tcp_data_queue+0x364/0x580
>tcp_rcv_established+0x245/0x770
>tcp_v4_do_rcv+0x134/0x220
>tcp_v4_rcv+0x569/0x840
>ip_local_deliver_finish+0xe6/0x280
>ip_local_deliver+0x4a/0x90
>ip_rcv_finish+0x119/0x360
>ip_rcv+0x21d/0x300
>__netif_receive_skb+0x5fa/0x760
>process_backlog+0xb1/0x190
>net_rx_action+0x134/0x260
>call_softirq+0x1c/0x30
>do_softirq+0x65/0xa0
>irq_exit+0x8e/0xb0
>do_IRQ+0x63/0xe0
>
>
>
>That is, packets are always requeued to the per-CPU backlog, and only
>backlog processing pushes them up to SCST.
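>
>This matches the flag-gated branch in the driver's Rx completion path.
>Paraphrased (not the verbatim 2.7.12-k source), the branch is:
>
>        if (!(adapter->flags & IXGBE_FLAG_IN_NETPOLL))
>                /* flag clear: deliver inline from NAPI poll, with GRO */
>                napi_gro_receive(&q_vector->napi, skb);
>        else
>                /* flag set: queue to the per-CPU backlog, drained
>                 * later by process_backlog() as in the trace above
>                 */
>                netif_rx(skb);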
>
>
>
>We have confirmed that reverting this patch in the 2.7.12-k ixgbevf
>driver restores the old performance levels on 3.8 kernels.
>
>
>
>We kindly request that you revert this patch and address the jumbo
>frame issue with a different patch.
>
>
>Thanks.
>
>
>
>PS: I am not on the e1000-devel list. Please CC me on any replies.
>
>--Shyam
