On Thu, Oct 04, 2012 at 05:06:33PM +0200, Eric Dumazet wrote:
> On Thu, 2012-10-04 at 16:48 +0200, Dick Snippe wrote:
> > $ sudo ethtool -k eth0
> > Offload parameters for eth0:
> > rx-checksumming: on
> > tx-checksumming: on
> > scatter-gather: on
> > tcp-segmentation-offload: on
> > udp-fragmentation-offload: off
> > generic-segmentation-offload: on
> > generic-receive-offload: on
> > large-receive-offload: on
> > ntuple-filters: off
> > receive-hashing: on
> >
> > > If yes, the number of ACKs they are sending back should be limited to
> > > one ACK per GRO packet, instead of one ACK every 2 MSS.
> >
> > Using sar -n DEV I saw +/- 150,000 rxpck/s on the sending webserver,
> > with +/- 9,000 rxkB/s, i.e. ~60 bytes/packet. I assume that these are
> > pure ACKs. But I don't know what sar counts exactly and how that relates
> > to GRO.
> >
> > > Also, I was considering adding GRO support for TCP pure ACKs, at least
> > > for local traffic (not forwarding workloads).
> > >
> > > It would be nice if you could post a "perf top" output of the sender,
> > > because dropping to 1-2 Gbit sounds really, really bad...
> >
> > I'll have to look into that as we don't usually build perf with our
> > kernels and "make" in the tools/perf directory fails, probably because
> > our version of bison is too old.
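For anyone wanting to reproduce the "rps off" / "rps on" cases below: the
toggle is the per-rx-queue rps_cpus bitmask in sysfs. A minimal sketch,
assuming eth0 and the 16-CPU box from the perf output; the rx-0 queue name
and the ffff mask are just examples, and on a multiqueue NIC like the ixgbe
you would repeat this per rx queue:

  # rps off: empty CPU mask, packets stay on the CPU that took the interrupt
  echo 0 > /sys/class/net/eth0/queues/rx-0/rps_cpus

  # rps on: allow steering received packets to all 16 CPUs (hex bitmask)
  echo ffff > /sys/class/net/eth0/queues/rx-0/rps_cpus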
Here is "perf top" output when concurrently sending out 1000x a 100Mbyte file while doing +/- 1.3Gbit throughput (with rps off) PerfTop: 22405 irqs/sec kernel:99.3% exact: 0.0% [4000Hz cycles], (all, 16 CPUs) ------------------------------------------------------------------------------- 19.45% [kernel] [k] ixgbe_poll 4.34% [kernel] [k] __copy_user_nocache 4.24% [kernel] [k] ipt_do_table 2.33% [kernel] [k] tcp_ack 1.69% [kernel] [k] dma_issue_pending_all 1.67% [kernel] [k] __netif_receive_skb 1.66% [kernel] [k] irq_entries_start 1.40% [kernel] [k] add_interrupt_randomness 1.29% [kernel] [k] nf_iterate 1.28% [kernel] [k] tcp_v4_rcv Below the same, but with rps on while doing 10Gbit throughput PerfTop: 20623 irqs/sec kernel:95.2% exact: 0.0% [4000Hz cycles], (all, 16 CPUs) --------------------------------------------------------------------------------------------- 10.54% [kernel] [k] __copy_user_nocache 5.37% [kernel] [k] ipt_do_table 2.56% [kernel] [k] page_fault 2.36% [kernel] [k] ixgbe_poll 2.07% [kernel] [k] tcp_sendmsg 2.06% [kernel] [k] tcp_ack 2.06% [kernel] [k] _raw_spin_lock 1.95% [kernel] [k] ixgbe_xmit_frame_ring > What is the average sent bytes per session ? Ah, good question. thanks. I _thought_ ab dit keepalive requests by default, but it does so only no request (-k) As 1 request is 100Mbyte (we request a 100Mbyte file) the average sent bytes per session in this case is 100Mbyte. > I am asking because I dont really understand how you can have 150.000 > pkt/sec. A 10Gb link would send about 820k MSS packets per sec. > > So even assuming one ACK per MSS, you should have about 820k > packets/second. > > I wonder if the problem is not about SYN/SYNACK processing, because its > currently serialized on a single listener lock. I re-tested with "ab -k" and checked that ab _really_ does keepalive (yes) but the results are very similar. I think that is to be expected because in the oroginal testcase there's was only one SYN/SYNACK per 100Mbyte of data. Btw: the 150.000 packets per second (as reported by sar) are when doing 10Gbit/s throughput. When the throughput hovers between 1-2Gbit I see ~40.000 rxpck/s: rxpck/s txpck/s rxkB/s txkB/s comment 40680.00 94290.00 2383.83 139391.88 bad (rps off) 150220.00 812768.00 8802.01 1201687.31 good (rps on) -- Dick Snippe, internetbeheerder \ fight war beh...@omroep.nl, +31 35 677 3555 \ not wars NPO ICT, Sumatralaan 45, 1217 GP Hilversum, NPO Gebouw A ------------------------------------------------------------------------------ Don't let slow site performance ruin your business. Deploy New Relic APM Deploy New Relic app performance management and know exactly what is happening inside your Ruby, Python, PHP, Java, and .NET app Try New Relic at no cost today and get our sweet Data Nerd shirt too! http://p.sf.net/sfu/newrelic-dev2dev _______________________________________________ E1000-devel mailing list E1000-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/e1000-devel To learn more about Intel® Ethernet, visit http://communities.intel.com/community/wired