Oh... and this one also went to Mr. Cochran directly. Apologies. I already got an answer from him and I'm past this stage, but I'm forwarding this into the mailing-list "for the record", to give some food to the Google spider.
On 8 Dec 2017 at 15:59, Richard Cochran wrote:

> On Fri, Dec 08, 2017 at 11:09:40PM +0000, Keller, Jacob E wrote:
> > I'm thinking the best way is to use the external timestamp events setup,
> > and then plug that in as the pps source into phc2sys?
> >
> > Does this make any sense, or am I paddling up the wrong creek?
>
> So you *could* extend phc2sys, but that program is complex enough as
> is. I have made thoughts about a successor to phc2sys that would
> handle gpio based measurements, including setting the pin functions
> using the PHC ioctls.
>
> But for now, I would just write a simple program for your specific
> setup. Below is an example for using three i210 cards whose first SDP
> are connected. The first card is hard coded as the PPS producer. In
> a more realistic JBOD setting, you would want to switch the PPS
> producer to be the PHC of the port that takes on the SLAVE role.
>
> HTH,
> Richard

Apologies for the intrusion, gentlemen... I'm just an end user passing by, but this thread coincides with a related topic that's currently on my mind :-)

I've been wondering for a few days whether I could use Intel NIC hardware to capture miscellaneous network traffic (libpcap style), with hardware timestamping of incoming packets at nanosecond resolution. Timestamps on any packets captured, not just PTP - such as to implement the capturing back-end of a poor man's precision network traffic analyzer.

There are several question marks along the way. In the following text, note that I answer some of the questions myself, as I'm studying and experimenting (with the freshest upstream GIT code). To me, the most unclear parts are the "general timestamping" bits.

From user space, I've noticed SO_TIMESTAMPNS and SOF_TIMESTAMPING_RX_HARDWARE - some flags available from the Linux kernel. They appear to be "mutually exclusive"? But the latter should be sufficient for nanosecond-level timestamping? What is the difference between SOF_TIMESTAMPING_RX_HARDWARE and SOF_TIMESTAMPING_RAW_HARDWARE?
https://stackoverflow.com/questions/41805687/linux-kernel-udp-reception-timestamp

Am I right to assume that the Intel NICs can provide RX timestamps for any packets received, rather than just PTP exclusively? And is this capability reachable via the networking driver's in-kernel API?

Another point is how to actually capture the data from user space, preferably using tools that are ready. Use libpcap? Are there any other libraries in Linux along those lines? Or should I roll my own capture library? I'm asking this with respect to nanosecond-level resolution.

There appears to be a common wisdom, permeating the interwebs, that tcpdump and libpcap do not support nanosecond resolution, that they stick to microseconds. At a closer look, I have to say that this is definitely true of the classic PCAP file format - but that alone appears to be a non-issue: PCAP-NG supports fairly arbitrary resolution, with "microseconds" and "nanoseconds" being popular choices that actually get implemented in the libraries available to manipulate those file formats. Nanosecond support in PCAP-NG files is mentioned in the Wireshark/tshark docs for version 2.5.0, but I can actually see nanosecond resolution in the textual output of tshark 2.4.1 in Linux - precisely "tshark -i eth1" with no custom options... I'm wondering how to find out whether tshark uses the HW timestamping capabilities of the kernel and hardware.
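While trying to answer that for myself, I went back to the SOF_TIMESTAMPING_* flags mentioned above. As far as I understand the kernel's timestamping documentation, SOF_TIMESTAMPING_RX_HARDWARE asks the NIC/driver to *generate* RX timestamps, while SOF_TIMESTAMPING_RAW_HARDWARE asks for the raw hardware (PHC) timestamps to be *reported* in the SCM_TIMESTAMPING control message - so the two normally get OR'ed together. Below is a minimal sketch (untested, so please correct me) of how I currently imagine the socket-level path: SIOCSHWTSTAMP with HWTSTAMP_FILTER_ALL to make the driver timestamp every received frame, then SO_TIMESTAMPING on a socket, then recvmsg(). The interface name "eth1" and the UDP port number are just placeholders from my bench:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <netinet/in.h>
#include <linux/net_tstamp.h>
#include <linux/sockios.h>
#include <linux/errqueue.h>	/* struct scm_timestamping */

#ifndef SCM_TIMESTAMPING
#define SCM_TIMESTAMPING SO_TIMESTAMPING
#endif

int main(void)
{
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	/* 1) Tell the NIC driver to timestamp all received frames. */
	struct hwtstamp_config hwcfg = {
		.tx_type   = HWTSTAMP_TX_OFF,
		.rx_filter = HWTSTAMP_FILTER_ALL,
	};
	struct ifreq ifr;
	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, "eth1", sizeof(ifr.ifr_name) - 1);
	ifr.ifr_data = (char *)&hwcfg;
	if (ioctl(fd, SIOCSHWTSTAMP, &ifr)) {	/* needs CAP_NET_ADMIN */
		perror("SIOCSHWTSTAMP");
		return 1;
	}

	/* 2) Ask for hardware RX timestamps, reported raw, on this socket. */
	int flags = SOF_TIMESTAMPING_RX_HARDWARE | SOF_TIMESTAMPING_RAW_HARDWARE;
	if (setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING, &flags, sizeof(flags))) {
		perror("SO_TIMESTAMPING");
		return 1;
	}

	struct sockaddr_in sin = {
		.sin_family = AF_INET,
		.sin_port   = htons(12345),	/* placeholder port */
		.sin_addr   = { .s_addr = INADDR_ANY },
	};
	bind(fd, (struct sockaddr *)&sin, sizeof(sin));

	/* 3) Pull the timestamp out of the ancillary data of one datagram. */
	char data[1500], ctrl[256];
	struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
	struct msghdr msg = {
		.msg_iov = &iov, .msg_iovlen = 1,
		.msg_control = ctrl, .msg_controllen = sizeof(ctrl),
	};
	if (recvmsg(fd, &msg, 0) < 0)
		return 1;

	for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c)) {
		if (c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_TIMESTAMPING) {
			struct scm_timestamping *ts = (void *)CMSG_DATA(c);
			/* ts->ts[0] = software stamp, ts->ts[2] = raw hardware (PHC) time */
			printf("hw ts: %lld.%09ld\n",
			       (long long)ts->ts[2].tv_sec, ts->ts[2].tv_nsec);
		}
	}
	close(fd);
	return 0;
}

If the igb driver on an i210/i350 rejects HWTSTAMP_FILTER_ALL, that would pretty much answer my "any packets, or PTP only" question too.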
Interestingly, a nightly build of Wireshark and TShark 2.5.0 still shows microseconds on Windows 64bit (capturing from a local Ethernet interface), and I haven't found any configuration option to switch it to nanoseconds... Tcpdump, the user-space app, doesn't seem to support PCAP-NG, except for some options specific to MacOS X... Libpcap, the capture library, DOES seem to support nanosecond precision! In the changelog of the libpcap source code (currently at the 1.8/1.9 release), I've found notes that support for nanoseconds was added in 1.5.0, back in 2013 or so... A good keyword to grep for appears to be PCAP_TSTAMP_PRECISION_NANO, and grep also finds references to SOF_TIMESTAMPING_SYS_HARDWARE and SOF_TIMESTAMPING_RAW_HARDWARE in the libpcap source code (pcap-linux.c). Support for PCAP-NG seems to be in the current libpcap source code too.

Heheh - when I downloaded and compiled fresh libpcap and tcpdump from GIT, the following prints nanosecond timestamps:

tcpdump -i eth1 --time-stamp-precision=nano

# tcpdump --version
tcpdump version 4.10.0-PRE-GIT
libpcap version 1.9.0-PRE-GIT (with TPACKET_V3)

Only I can't seem to save the data in PCAP-NG format, as tcpdump still doesn't seem to support that... :-( The tcpdump manual mentions an option to save a modified PCAP format, with a different magic number and nanosecond timestamps - so again I'd have a problem loading that in some other program.

Hmm... I've just taken a "nano" capture with the fresh tcpdump in Linux, with PCAP output, copied the file to Windows, and opened it in Wireshark 2.5.0 - the one that shows µs timestamps for a native Windows capture. Lo and behold, the "nano PCAP" is displayed with nanosecond timestamps :-) This is starting to look almost ready...

Could you suggest a way for me to verify at runtime that capturing with TShark or Wireshark indeed works with HW timestamping in Linux? Compile libpcap with some debug options, or with custom instrumentation inserted? Use some syscall or function tracing stuff in the kernel? Capture some PTP traffic with a passive tap and analyze discrepancies in the set of timestamps thus obtained? :-) Would a HW-supported "nano" libpcap capture work in parallel with ptp4l running on the same interface?

Another large area of terra incognita (to me) is the hardware setup of refclock distribution to several Intel NICs for HW timestamping. Such as, as suggested in this thread: use one NIC as a PTP slave, configure it to produce PPS, and use that to servo the on-chip PHCs in the other NICs, which in my case would be used for multiport capture timestamping, rather than to set up a HW-assisted boundary clock... Unfortunately I'm not designing my own boards - I have to use other people's board-level designs, and maybe I can hack some jumpers on top... The four GPIO pins (SDP0 to SDP3), common to several Intel NICs with HW timestamping support - those alone are clear to me... I've also found a master's thesis by Balint Ferencz, and some follow-up material (his own source code), dealing with the config of these SDP pins:

http://home.mit.bme.hu/~khazy/ptpd/bf2013.pdf
https://bitbucket.org/fernya/igb-pps-guide
https://bitbucket.org/fernya/igb-pps.git

He seems to be using custom mods to the Intel driver, but otherwise it's the same stuff as discussed in this e-mail thread so far. I've been curious all along how I would tap the SDP pins.
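Coming back to my question above about verifying at runtime whether HW timestamping is actually in use: rather than instrumenting libpcap, it looks like I could simply ask libpcap which timestamp sources the device offers, and explicitly request "adapter" timestamps at nanosecond precision when opening the handle. A rough sketch of what I have in mind (untested; assumes libpcap >= 1.5, and "eth1" is again a placeholder):

/* build: cc -o tscheck tscheck.c -lpcap */
#include <stdio.h>
#include <pcap/pcap.h>

int main(void)
{
	char errbuf[PCAP_ERRBUF_SIZE];
	pcap_t *p = pcap_create("eth1", errbuf);
	if (!p) {
		fprintf(stderr, "pcap_create: %s\n", errbuf);
		return 1;
	}

	/* Which timestamp sources does this device/driver advertise? */
	int *types;
	int n = pcap_list_tstamp_types(p, &types);
	for (int i = 0; i < n; i++)
		printf("offers: %s (%s)\n",
		       pcap_tstamp_type_val_to_name(types[i]),
		       pcap_tstamp_type_val_to_description(types[i]));
	if (n > 0)
		pcap_free_tstamp_types(types);

	/* Ask for NIC-generated (raw, unsteered) timestamps at nanosecond precision. */
	if (pcap_set_tstamp_type(p, PCAP_TSTAMP_ADAPTER_UNSYNCED))
		fprintf(stderr, "adapter timestamps not accepted\n");
	if (pcap_set_tstamp_precision(p, PCAP_TSTAMP_PRECISION_NANO))
		fprintf(stderr, "nanosecond precision not accepted\n");

	if (pcap_activate(p) < 0) {
		pcap_perror(p, "pcap_activate");
		return 1;
	}
	printf("active, precision = %s\n",
	       pcap_get_tstamp_precision(p) == PCAP_TSTAMP_PRECISION_NANO ?
	       "nano" : "micro");
	pcap_close(p);
	return 0;
}

If PCAP_TSTAMP_ADAPTER_UNSYNCED (the NIC's own clock, not steered to the host clock, if I read the pcap-tstamp man page right) shows up in the list and is accepted, that should mean the hardware timestamp path is available to tshark on that interface as well.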
I was wondering whether perhaps the dual or quad versions of the i350 would have a shared PHC, or an internal "PPS interconnect bus", but clearly no such ready-made facility exists: each Ethernet port in the quad chip has its own PHC/synth/servo and its own dedicated set of four SDP pins. That, on a BGA package - so if the board maker doesn't provide some test pads or headers for the SDPs, I'm stuffed :-) The i210 is a comparatively lower-integrated package, but the QFN with a 0.5mm pitch (16 pins along one 9mm edge) is not exactly hackable with my tools either :-) So again, if the board maker leaves the footprint pads unconnected, these are difficult to wiretap with a hand-held iron.

Mr. Balint Ferencz was using an Intel-branded PCI-e NIC card with a single i210 chip on it, which had the SDP pins connected to jumper pads - the "Intel i210-T1 Ethernet Adapter". Even compared to dual-port and quad-port Intel server NICs (i350 / i340 = 82580), this is probably the cheaper alternative.

So... looking at the little program from Mr. Cochran: to configure the PHC in a NIC chip to be a PPS slave, using a particular SDP pin as an input, I need to open its respective /dev/ptpX and run some fine-tipped ioctl()s on the open fd (I've sketched my current understanding of that sequence further below). Mr. Balint mentions that this operation only succeeds if the interface is active and has an IP address assigned... not sure if this is still true, but I'll pay attention to it. It makes me wonder whether I need to prime the PHC with some wall-clock time before enslaving it to PPS. Do I need to care about wall-clock time per PHC device, or is this handled somehow "implicitly"? (Sorry, I should probably just read the source snippet by Mr. Cochran in detail :-)

I have reasonably good external sources of PPS: a freewheeling Rb, or a locked GPS with a phase-synchronous OCXO. Actually I'd love to use 10 MHz as a reference instead of PPS, but I haven't seen that option mentioned anywhere so far with the Intel NICs... and 10 MHz alone would not give me alignment to PPS, so I'd need 10 MHz *and* PPS, if anything... No problem to level-shift the external signals to 3.3V TTL for the NIC chips. And I can keep an eye on grounding phenomena.

Speaking of multiple i210 chips in a box: Advantech happens to make a fanless PC for IEC 61850 (substation automation) containing 8x i210 on the motherboard (rather than 2x quad-port i350 - probably motivated by price). These boxes happen to flow through our warehouse, so I will probably be tempted to take a look whether by any chance the SDP pins are accessible. I could also try to inspire Advantech to provide some SDP interconnect ex works, to simplify hacking.

I'm also wondering how to capture 100Mbit fiber optics. Off-the-shelf cards with 100Meg fiber are nowadays typically based on the RTL8139 :-( Intel cards with fiber ports are typically 1Gb-only. The Intel chips actually come in different SKUs for copper and for optical SERDES/SGMII, and the SERDES seems to be 1G-only. So I can see two chances in this direction:

A) get a card with an Intel i210 or i350 fiber SKU and SFP sockets, where I could use a 100/1000 optical transceiver with an MII-style interface (SGMII) and switch the port into 100Mbit mode via ethtool or mii-tool. Note that SFP-slotted NICs are rare... There's a fine example called the Advantech PCIE-2130NP, but it's an i350 and the SDPs are likely buried.

B) use some more common metallic 10/100/1000 Intel card and an external fixed-rate 100Mb media converter. Unfortunately, such converters are no longer stupid nowadays...
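Coming back to the /dev/ptpX ioctls and the question of priming the PHC mentioned above: from the kernel's testptp.c example and linux/ptp_clock.h, my current (untested) understanding of the mainline sequence - no driver mods needed - is roughly: set the PHC from CLOCK_REALTIME, route SDP0 to the external-timestamp function, and then read the timestamps of the incoming PPS edges from the character device. "/dev/ptp1", pin index 0, channel 0 and the rising edge are placeholders for whatever my board ends up being; servoing the PHC frequency from these readings (clock_adjtime) is left out here:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>
#include <sys/ioctl.h>
#include <linux/ptp_clock.h>

/* Convention from the kernel's testptp.c: a PHC fd maps to a dynamic clock id. */
#define CLOCKFD 3
#define FD_TO_CLOCKID(fd) ((~(clockid_t)(fd) << 3) | CLOCKFD)

int main(void)
{
	int fd = open("/dev/ptp1", O_RDWR);
	if (fd < 0) {
		perror("/dev/ptp1");
		return 1;
	}
	clockid_t clkid = FD_TO_CLOCKID(fd);

	/* "Prime" the PHC with wall-clock time taken from CLOCK_REALTIME. */
	struct timespec now;
	clock_gettime(CLOCK_REALTIME, &now);
	if (clock_settime(clkid, &now))
		perror("clock_settime");

	/* Route SDP0 (pin index 0) to the external-timestamp function, channel 0. */
	struct ptp_pin_desc pin;
	memset(&pin, 0, sizeof(pin));
	pin.index = 0;
	pin.func = PTP_PF_EXTTS;
	pin.chan = 0;
	if (ioctl(fd, PTP_PIN_SETFUNC, &pin))
		perror("PTP_PIN_SETFUNC");

	/* Enable external timestamps on channel 0, rising edge. */
	struct ptp_extts_request req;
	memset(&req, 0, sizeof(req));
	req.index = 0;
	req.flags = PTP_ENABLE_FEATURE | PTP_RISING_EDGE;
	if (ioctl(fd, PTP_EXTTS_REQUEST, &req)) {
		perror("PTP_EXTTS_REQUEST");
		return 1;
	}

	/* Each PPS edge now shows up as a ptp_extts_event on read(). */
	for (int i = 0; i < 5; i++) {
		struct ptp_extts_event ev;
		ssize_t cnt = read(fd, &ev, sizeof(ev));
		if (cnt != (ssize_t)sizeof(ev))
			break;
		printf("PPS edge at %lld.%09u (PHC time)\n",
		       (long long)ev.t.sec, ev.t.nsec);
	}

	close(fd);
	return 0;
}

Whether the i210 driver still insists on the interface being up with an IP address assigned, as Mr. Balint describes, is something I'll only find out once I get my hands on the hardware.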
Back to the 100Mb fiber topic: I'd love to have a 100Mb fiber SERDES converted straight into a 100Mb data stream on 100Base-TX, but that's probably just a pipe dream. I'd love to manage this without store-and-forward, but even the cheapest Chinese fixed-rate 100Mb converters seem to boast several hundred kB of buffers nowadays :-(

Any comments welcome :-) Apologies for picking your brains - I can write a few lines of C code, but time is running short on a related project (where I would appreciate some measurements along those lines) and in general I'm more of a script kid these days :-(

Thanks for your polite attention.

Frank Rysanek