Hello list,
I have been experimenting with tuning latency on my 82599 NIC (even
at the cost of decreased throughput).
For that purpose, I modified two things in the driver (sketched just below):
1) rx_pb_size
2) IXGBE_MIN_RXD
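For reference, here is a minimal sketch of where these two values live,
following the in-tree ixgbe source layout (the 3.14.5 out-of-tree driver
may differ a bit, so treat the file names and surrounding code as
approximate rather than as an exact patch):

    /* ixgbe.h: bounds that "ethtool -G" requests are clamped to;
     * this is the macro I lowered to allow Rx rings smaller than 64 */
    #define IXGBE_MIN_RXD 64
    #define IXGBE_MAX_RXD 4096

    /* ixgbe_type.h: per-TC Rx packet buffer size registers;
     * RXPBSIZE0 is IXGBE_RXPBSIZE(0) */
    #define IXGBE_RXPBSIZE(_i) (0x03C00 + ((_i) * 4))

    /* ixgbe_common.c (simplified): the write that programs the Rx
     * packet buffer size; hardcoding the value written here is the
     * kind of change I made to shrink rx_pb_size from its default */
    IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(0), rxpktsize);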
My test setup:
Passing 20 Gb/s of traffic (10 Gb/s in each direction, 64-byte UDP) to a
core that can handle only about 1 Gb/s.
Results with the latest stable ixgbe driver, 3.14.5:
Rx ring size   Tx ring size   rx_pb_size     Flow control    Latency
(ethtool -g,   (ethtool -g,   (KB,           (ethtool -a)    (us)
 default 512)   default 512)   default 512)
-------------  -------------  -------------  --------------  -------
512            512            512            rx on  tx on       7128
 64            128            512            rx on  tx on       6346
512            512            512            rx off tx off      1874
 64            128            512            rx off tx off       667
 64            128            128            rx off tx off       230
 32            128            128            rx off tx off       150
 16            128            128            rx off tx off       110
Now I have some questions; please help me out with them.
Q1) Is reducing rx_pb_size risky?
Q2) Is there any other way to change the rx_pb_size (RXPBSIZE0) value? (I
hardcoded it and recompiled the driver.)
Q3) What could the other side effects of reducing rx_pb_size be, besides
reduced throughput?
Q4) What are the side effects of reducing IXGBE_MIN_RXD below 64 (its use
is sketched just after these questions)?
Q5) Is there any other way to reduce latency (even at the cost of
throughput)?
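For context on Q2 and Q4: this is roughly how the requested ring size is
checked against IXGBE_MIN_RXD when "ethtool -G" is used (paraphrased from
ixgbe_set_ringparam() in ixgbe_ethtool.c; the exact code differs between
driver versions, so take the identifiers as approximate):

    /* The requested descriptor count is clamped to
     * [IXGBE_MIN_RXD, IXGBE_MAX_RXD] and rounded to a multiple of
     * IXGBE_REQ_RX_DESCRIPTOR_MULTIPLE (8), which is why ring sizes
     * below 64 need the macro itself lowered. */
    new_rx_count = clamp_t(u32, ring->rx_pending,
                           IXGBE_MIN_RXD, IXGBE_MAX_RXD);
    new_rx_count = ALIGN(new_rx_count, IXGBE_REQ_RX_DESCRIPTOR_MULTIPLE);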
Also, there is one more strange observation (without applying any of the
above changes):
When I pass 1400-byte traffic at 9.99 Gb/s, latency stays under control,
but when I pass 1400-byte traffic at 10 Gb/s (line rate), latency
increases a lot.
Regards,
Jagdish Motwani