Matthew Luckie wrote:

The motivation for this patch was to obtain a timestamp as close as possible to the moment when a packet I generated and transmitted hit the wire, in order to infer a more accurate RTT from the associated response packet.

That's certainly a worthy goal, but the patch might not help much there. If you're getting time stamps for packets being transmitted by the machine running the BPF-based application, the time stamps you'll get mark the time when the packet gets wrapped around by BPF in the driver. After that, more time is spent in the CPU handing the packet to the network adapter, and possibly in the adapter itself, especially if it has to wait for others to finish transmitting, or deal with collisions, on Ethernet, or has to wait to get the token on a token-based network, etc.
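For what it's worth, newer libpcap can ask the adapter itself to generate the time stamp, which sidesteps those host-side delays on hardware that supports it. A minimal sketch, assuming a libpcap recent enough to have pcap_set_tstamp_type() and using a hypothetical interface name "em0":

#include <pcap/pcap.h>
#include <stdio.h>

int main(void)
{
	char errbuf[PCAP_ERRBUF_SIZE];
	pcap_t *p;

	/* "em0" is a placeholder interface name. */
	p = pcap_create("em0", errbuf);
	if (p == NULL) {
		fprintf(stderr, "pcap_create: %s\n", errbuf);
		return 1;
	}

	/*
	 * Ask for time stamps taken by the adapter rather than the host;
	 * pcap_set_tstamp_type() returns non-zero if that's not
	 * supported, in which case host time stamps are used.
	 */
	if (pcap_set_tstamp_type(p, PCAP_TSTAMP_ADAPTER) != 0)
		fprintf(stderr, "adapter time stamps not available; "
		    "falling back to host time stamps\n");

	if (pcap_activate(p) < 0) {
		fprintf(stderr, "pcap_activate: %s\n", pcap_geterr(p));
		pcap_close(p);
		return 1;
	}

	/* ... capture as usual; if supported, the ts field of each
	   pcap_pkthdr now comes from the adapter ... */

	pcap_close(p);
	return 0;
}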


It also wouldn't bring time stamps closer to the actual *receive* time, as they'd still include the time between when the last octet of the packet was received and when the driver handed it to BPF.

On the other hand, one could perhaps argue that those times *should* be counted in the RTT, if you're trying to measure application-level RTT rather than low-level link-layer RTT...
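Either way, once you've decided which time stamps you want, the RTT itself is just a subtraction of the capture headers' ts fields. A minimal, hypothetical helper (the matching of a request to its response is left out):

#include <pcap/pcap.h>

/*
 * RTT in microseconds between a transmitted request and its matched
 * response, taken from the capture headers of the two packets.
 * Whether this is link-layer or application RTT depends on where
 * the stamps were generated, per the discussion above.
 */
static long long
rtt_usec(const struct pcap_pkthdr *req, const struct pcap_pkthdr *resp)
{
	return (long long)(resp->ts.tv_sec - req->ts.tv_sec) * 1000000LL
	    + (resp->ts.tv_usec - req->ts.tv_usec);
}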

There is an argument to be made for generating the timestamp just once, after the packet actually passes a filter,

I.e., so you don't spend CPU time generating time stamps for packets that will be discarded? That might be worthwhile: people have found that getting time stamps *can* be a bottleneck when capturing lots of traffic, so it could also be a bottleneck if you're receiving a lot of traffic and the filter discards most of it.
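A crude way to put a number on that cost is to measure how long a time-stamp call takes. This is only a user-space approximation of what the kernel pays per packet (the kernel path differs), so treat it as a rough sketch:

#include <stdio.h>
#include <sys/time.h>

#define N 10000000

int main(void)
{
	struct timeval start, end, ts;
	long long i, elapsed_usec;

	gettimeofday(&start, NULL);
	for (i = 0; i < N; i++)
		gettimeofday(&ts, NULL);	/* the operation being measured */
	gettimeofday(&end, NULL);

	elapsed_usec = (long long)(end.tv_sec - start.tv_sec) * 1000000LL
	    + (end.tv_usec - start.tv_usec);
	printf("%d time stamps in %lld us (%.1f ns each)\n",
	    N, elapsed_usec, 1000.0 * elapsed_usec / N);
	return 0;
}

If the per-stamp cost times the packet arrival rate is a noticeable fraction of a CPU, stamping every packet before filtering is a real expense.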