Maybe of interest: per-packet upstream vs. downstream loss now works 
properly, if you want to use it. The limitations are documented here: 
https://github.com/peteheist/irtt#64-bit-received-window
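
To illustrate the idea behind a received-window scheme, here is a minimal sketch of how loss direction can be split. The names and window semantics are illustrative only, not irtt's actual wire format: assume each server reply for seqno s carries a 64-bit mask marking which of seqnos s-63..s the server has received.

```python
def classify_losses(sent, replies):
    """sent: list of sequence numbers sent.
    replies: dict seqno -> 64-bit int window (bit i set: seqno-i was received).
    Returns dict seqno -> 'ok' | 'lost_up' | 'lost_down' | 'lost'."""
    result = {}
    for n in sent:
        if n in replies:
            result[n] = 'ok'
            continue
        # Collect every reply whose window covers seqno n.
        covering = [(s, w) for s, w in replies.items() if 0 <= s - n < 64]
        if not covering:
            result[n] = 'lost'        # no window info: direction unknown
        elif any((w >> (s - n)) & 1 for s, w in covering):
            result[n] = 'lost_down'   # server saw it; the reply was dropped
        else:
            result[n] = 'lost_up'     # server never saw it
    return result

# Packets 0-3 sent; replies for 0 and 3 arrived. The window in reply 3
# (0b1011) says the server received 0, 2 and 3 but never saw 1.
print(classify_losses([0, 1, 2, 3], {0: 0b1, 3: 0b1011}))
```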

But how to plot that? Maybe there could be an up arrow for `lost_up`, a down 
arrow for `lost_down`, or an X for the generic `lost`, shown at y=0 on the 
latency plot, along with a gap in the plotted line?
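
The marker idea could be sketched with matplotlib like this: NaNs break the latency line at losses, and per-direction markers at y=0 show where and which way each packet was lost. The marker choices and data here are just this proposal mocked up, not flent's plotting code.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt

t = np.arange(10.0)  # send times (s)
# RTT samples in ms; NaN marks a lost packet and leaves a gap in the line.
rtt = np.array([20, 21, np.nan, 22, np.nan, 21, 20, np.nan, 22, 21])
status = {2: "lost_up", 4: "lost_down", 7: "lost"}  # loss direction per index

fig, ax = plt.subplots()
ax.plot(t, rtt, "-o", label="RTT")  # NaNs break the plotted line
markers = {"lost_up": "^", "lost_down": "v", "lost": "x"}
for kind, marker in markers.items():
    xs = [t[i] for i, s in status.items() if s == kind]
    ax.plot(xs, [0] * len(xs), marker, linestyle="none", label=kind)
ax.set_xlabel("Time (s)")
ax.set_ylabel("RTT (ms)")
ax.legend()
fig.savefig("loss_markers.png")
```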

IRTT for the VoIP tests would be awesome! (I hope.) Just an idea: instead of 
having separate VoIP tests, any of the tests with UDP flows could become a VoIP 
test by accepting a "voip_codec" test parameter that sets the interval and 
packet length. Common codec values: 
https://www.cisco.com/c/en/us/support/docs/voice/voice-quality/7934-bwidth-consume.html#anc1

I guess having generic parameters to explicitly set the interval and packet 
length would still be useful...
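
A minimal sketch of how both ideas could fit together: a lookup of a few well-known codec defaults (voice payload bytes, packetization interval), with explicit interval/length parameters still able to override them. The parameter and function names are hypothetical, not flent's API; the codec figures are the commonly cited defaults (e.g. G.711 at 64 kbps uses a 160-byte voice payload every 20 ms).

```python
# codec name -> (voice payload bytes, packetization interval ms);
# commonly cited defaults, e.g. from Cisco's VoIP bandwidth tables.
CODECS = {
    "g711": (160, 20),    # 64 kbps PCM
    "g729": (20, 20),     # 8 kbps CS-ACELP
    "g723.1": (24, 30),   # 6.3 kbps MP-MLQ
}

def udp_flow_params(voip_codec=None, interval_ms=None, length=None):
    """Resolve a UDP flow's interval and packet length.

    A voip_codec supplies defaults; explicitly set values always win."""
    if voip_codec is not None:
        codec_len, codec_ms = CODECS[voip_codec.lower()]
        if length is None:
            length = codec_len
        if interval_ms is None:
            interval_ms = codec_ms
    return {"interval_ms": interval_ms, "length": length}
```

So `udp_flow_params(voip_codec="g711")` would yield a 160-byte packet every 20 ms, while passing `interval_ms=10` alongside the codec would override just the interval.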

-- 
You are receiving this because you commented.
Reply to this email directly or view it on GitHub:
https://github.com/tohojo/flent/issues/106#issuecomment-345439908
_______________________________________________
Flent-users mailing list
Flent-users@flent.org
http://flent.org/mailman/listinfo/flent-users_flent.org