Actually - and I can see Pete running screaming from the room - we could add
TCP-like behavior to irtt and obsolete netperf entirely, except for referencing
the main stack. The main reason we use netperf is that core Linux devs
trusted it, and the reason why we sample only is because
On the plotting front, I could see adding a 4th graph, much like TSDE's, for
loss and reordering.
I really do care about measuring packet loss and re-ordering accurately.
I've also been fiddling with setting the ToS field, to do ECT(0), ECT(1) and CE.
Doing that at a higher level, and noting the result, would be good. --ecn 1,2,3?
With a summary line like "Forward/backward path stripping DSCP" or "CE marks".
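For reference, the ECN codepoints live in the two low-order bits of the IP ToS byte (per RFC 3168), so setting them from a test tool amounts to a `setsockopt` call. A minimal sketch of what a hypothetical `--ecn` option might do under the hood (the `set_ecn` helper and the socket here are illustrative, not irtt code):

```python
import socket

# ECN codepoints in the low two bits of the IP ToS byte (RFC 3168).
NOT_ECT = 0b00  # not ECN-capable
ECT1    = 0b01  # ECN-capable transport, ECT(1)
ECT0    = 0b10  # ECN-capable transport, ECT(0)
CE      = 0b11  # congestion experienced

def set_ecn(sock, codepoint, dscp=0):
    """Set the ToS byte: DSCP in the high 6 bits, ECN in the low 2."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS,
                    (dscp << 2) | codepoint)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
set_ecn(s, ECT0)          # mark outgoing packets ECT(0)
print(s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
s.close()
```

Reading CE marks on the receive side additionally needs `IP_RECVTOS` and `recvmsg()` to pull the ToS byte out of ancillary data, which is where doing this "at a higher level and noting the result" comes in.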
Ok, well, if we do go for it: so far in irtt's JSON there are just average
`send_rate` and `receive_rate` entries under stats, both of which contain an
integer `bps` and a `string` text representation. `send_rate` ignores lost
packets and `receive_rate` takes them into account. Let me know if anything
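To make that concrete, here is a sketch of pulling those two rates out of irtt's JSON. The field layout follows the description above (`send_rate`/`receive_rate` under `stats`, each with `bps` and `string`); the sample values themselves are made up:

```python
import json

# Hypothetical sample mirroring the layout described above;
# the numbers are invented for illustration.
sample = '''
{
  "stats": {
    "send_rate":    {"bps": 1000000, "string": "1.0 Mbps"},
    "receive_rate": {"bps":  950000, "string": "950.0 Kbps"}
  }
}
'''

stats = json.loads(sample)["stats"]
send_bps = stats["send_rate"]["bps"]
recv_bps = stats["receive_rate"]["bps"]
# Since receive_rate accounts for lost packets and send_rate does not,
# the gap between them reflects loss on the path.
print(send_bps - recv_bps)
```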
Pete Heist writes:
> On Aug 27, 2018, at 1:40 PM, flent-users wrote:
>
> Hi Pete,
>
> > On Aug 25, 2018, at 19:53, Pete Heist wrote:
> >
> > 50ms wouldn't be too disruptive in most cases. At 1 Mbit, the 5% of
> > bandwidth threshold is crossed.
>
> Would it be possible to simply also show this bandwidth use