On 06/24/2013 11:38 AM, Don Smith wrote:
> I am trying to use the timestamping function of the Intel I350-T2 1 Gbps
> adapter and igb driver to put a hardware timestamp in the sk_buff of ALL
> received frames.  Note that I am NOT doing this for PTP, but rather to
> obtain more accurate frame arrival timestamps to use in code I am
> developing for higher layers in the TCP/IP protocol stack.
>
> All this seems to be working fine, but I am at a loss to explain the
> curious distributions of timestamp values I have recorded from the
> arriving sk_buffs.  At high frame rates (>= 800 Mbps) the delta between
> successive timestamps is quite stable and reflects the expected
> inter-frame gaps.  However, at low frame rates (<= 250 Mbps) the delta
> between successive timestamps essentially behaves like a random process
> with a high variance (I should mention that I am controlling the frame
> sending rate so I know what inter-frame gaps I expect to measure at the
> receiver).  So far, I only have measurements at these two extremes so I
> don't know anything about how accuracy varies over the entire range of
> arrival rates.
This is just a theory, but you may be seeing effects of the descriptor 
writeback mechanism.  Descriptor writebacks are batched to improve PCIe 
efficiency, but in some cases that batching can delay individual 
writebacks, which would skew the apparent arrival times.

Just as an experiment, try disabling the writeback batching for Rx in 
igb_main.c.  In igb_configure_rx_ring(), disable the three *THRESH 
fields in the Rx descriptor control register (RXDCTL), and try again. 
Again, this is pure speculation and theory.

-PJ

_______________________________________________
E1000-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/e1000-devel
To learn more about Intel® Ethernet, visit 
http://communities.intel.com/community/wired