This turned out to be a problem with my experimental setup, which did 
not properly control the sending inter-frame times at the lower frame 
rates but did at the higher rates.  Once I corrected the experiment, I 
got almost perfect timestamp results for all frames at all rates.

I'm sorry to have bothered you about what turned out to be my mistake 
(but you did help me a lot: your confirmation that there should not be 
a problem with the timestamping itself prompted me to look more 
carefully at how the inter-frame times were being measured).

Thank you very much.

  -- Don


Wyborny, Carolyn wrote:
>> -----Original Message-----
>> From: Peter P Waskiewicz Jr [mailto:[email protected]]
>> Sent: Monday, June 24, 2013 12:59 PM
>> To: Don Smith
>> Cc: [email protected]
>> Subject: Re: [E1000-devel] Problems using timestamps in igb driver with 
>> I350-T2
>> adapter
>>
>> On 06/24/2013 11:38 AM, Don Smith wrote:
>>> I am trying to use the timestamping function of the Intel I350-T2 1
>>> Gbps adapter and igb driver to put a hardware timestamp in the sk_buff
>>> of ALL received frames.  Note that I am NOT doing this for PTP, but
>>> rather to obtain more accurate frame arrival timestamps to use in code
>>> I am developing for higher layers in the TCP/IP protocol stack.
>>>
>>> All this seems to be working fine, but I am at a loss to explain the
>>> curious distributions of timestamp values I have recorded from the
>>> arriving sk_buffs.  At high frame rates (>= 800 Mbps) the delta
>>> between successive timestamps is quite stable and reflects the
>>> expected inter-frame gaps.  However, at low frame rates (<= 250 Mbps)
>>> the delta between successive timestamps essentially behaves like a
>>> random process with a high variance (I should mention that I am
>>> controlling the frame sending rate so I know what inter-frame gaps I
>>> expect to measure at the receiver).  So far, I only have measurements
>>> at these two extremes so I don't know anything about how accuracy
>>> varies over the entire range of arrival rates.
>> This is just a theory, but you may be seeing effects of the descriptor 
>> writeback
>> mechanism.  Descriptor writebacks are batched to improve PCIe efficiency, but
>> in some cases can affect latency on the writeback.
>>
>> Just as an experiment, try disabling the writeback mechanisms in igb_main.c 
>> for
>> Rx.  In igb_configure_rx_ring(), just disable the 3 *THRESH sets in the 
>> descriptor
>> control register, and try again. Again, this is pure speculation and theory.
>>
> Hello, 
> 
> There does seem to be some latency somewhere.  I'll be interested to see the 
> results of the test PJ suggested.  
> 
> Also, can you tell me how the network is configured for these tests, e.g., 
> back to back, through a switch, etc.?  I'm interested in how many systems 
> and link partners are involved and how many switches are in the network 
> overall.  Can you try disabling EEE with ethtool and see if that changes 
> the discrepancies at all?  Do you know what your overall network latency 
> is?  I also wonder what power management features might be getting in the 
> way here.  To help answer some of those questions, can you send me the 
> full lspci -vvv output from a system and its link partner that show some 
> of the symptoms you are seeing?
> 
> Thanks,
> 
> Carolyn
> 
> Carolyn Wyborny 
> Linux Development 
> Networking Division 
> Intel Corporation 
> 
> 


-- 
F. Donelson Smith (Don)                      (919) 962-1884
Research Professor                           [email protected]
Department of Computer Science               www.cs.unc.edu/~smithfd
University of North Carolina at Chapel Hill
