On 11/26/2012 01:05 PM, Rick Jones wrote:
> In theory, netperf could be tweaked to set SO_RCVTIMEO at some
> high-but-not-too-high level (from the command line?).  It could then
> keep the test limping along, I suppose (with gaps), but I don't want
> anything terribly complicated going on in netperf - otherwise one
> might as well use TCP_RR anyway.
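
For concreteness, arming the timeout is a single setsockopt() call
with a struct timeval. A minimal sketch of the idea (the names are
illustrative, this is not the netperf source):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/time.h>

    /* Arm a receive timeout on a socket: a recv() that blocks for
     * longer than 'seconds' fails with errno set to EAGAIN or
     * EWOULDBLOCK instead of blocking forever.  Returns 0 on
     * success, -1 on failure. */
    static int set_recv_timeout(int sock, long seconds)
    {
        struct timeval tv = { .tv_sec = seconds, .tv_usec = 0 };

        if (setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO,
                       &tv, sizeof(tv)) < 0) {
            perror("setsockopt(SO_RCVTIMEO)");
            return -1;
        }
        return 0;
    }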

Without committing to keeping it in there, I have made a first pass at a quick-and-dirty SO_RCVTIMEO-based mechanism to keep a UDP_RR test from stopping entirely in the face of UDP datagram loss. The result is checked in to the top-of-trunk of the netperf subversion repository at http://www.netperf.org/svn/netperf2/trunk .
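
I won't reproduce the checked-in change here, but the shape of a
timeout-tolerant request/response loop is roughly the following
sketch (the names and bookkeeping are illustrative, not the actual
netperf code):

    #include <errno.h>
    #include <sys/socket.h>

    /* One transaction of a UDP_RR-style loop that tolerates loss:
     * if no response arrives before SO_RCVTIMEO expires, count the
     * transaction as lost and move on to the next request rather
     * than blocking the whole test. */
    static void rr_transaction(int sock,
                               const char *req, size_t req_len,
                               char *rsp, size_t rsp_len,
                               long *transactions, long *timeouts)
    {
        ssize_t got;

        if (send(sock, req, req_len, 0) < 0)
            return; /* real code would also handle send errors */

        got = recv(sock, rsp, rsp_len, 0);
        if (got < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            /* request or response presumed lost; drop this sample */
            (*timeouts)++;
            return;
        }
        if (got >= 0)
            (*transactions)++;
    }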

I'm not at all sure at present that the "right" things happen for interim results or the RTT statistics.

To enable the functionality, one adds a test-specific -e option with a timeout specified in seconds. I would suggest making it quite large, so that one is very much statistically certain the request/response was indeed lost and not simply delayed; otherwise it will definitely throw the timings off...
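
For example (hypothetical remote host name; as usual with netperf,
test-specific options follow the "--" separator):

    netperf -H remotehost -t UDP_RR -- -e 30

With the value in seconds, 30 should be long enough that a
transaction which times out really was lost rather than merely
delayed.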

happy benchmarking,

rick jones

_______________________________________________
Cerowrt-devel mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/cerowrt-devel
