> On Nov 21, 2017, at 10:56 PM, Toke Høiland-Jørgensen 
> <notificati...@github.com> wrote:
> 
> Pete Heist <notificati...@github.com> writes:
> 
> > Trying to confirm how latency was being calculated before with the
> > UDP_RR test. Looking at its raw output, I see that transactions per
> > second is probably used to calculate RTT, with interim results like:
> >
> > ```
> > NETPERF_INTERIM_RESULT[0]=3033.41
> > NETPERF_UNITS[0]=Trans/s
> > NETPERF_INTERVAL[0]=0.200
> > NETPERF_ENDING[0]=1511296777.475
> > ```
> >
> > So RTT = (1 / 3033.41) s ~= 330 us
> >
> > And this likely takes the mean over all transactions in the
> > interval and reports it at the end of the interval, and that
> > calculated latency is what was plotted in Flent?
> 
> Yup, that's exactly it :)

Ok, it’ll be interesting for me to look at the differences between the two 
going forward. Naturally, doing it the UDP_RR way would probably result in a 
smoother line. The other impacts on the test might be fun to explore.
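For reference, the conversion discussed above can be sketched in a few lines of Python. This is only an illustration of the arithmetic, not Flent's actual parsing code; the function name and the hard-coded interim value are taken from the example output quoted earlier.

```python
# Recover the mean per-interval RTT from a netperf UDP_RR interim result.
# UDP_RR runs one transaction at a time, so the mean RTT over an interval
# is simply the reciprocal of the reported transaction rate.

def rtt_from_trans_rate(trans_per_sec: float) -> float:
    """Return the mean round-trip time in microseconds for one interval."""
    return 1e6 / trans_per_sec

# NETPERF_INTERIM_RESULT[0]=3033.41, NETPERF_UNITS[0]=Trans/s
rtt_us = rtt_from_trans_rate(3033.41)
print(f"{rtt_us:.0f} us")  # prints "330 us"
```

Note that this yields one averaged value per 200 ms interval (NETPERF_INTERVAL[0]=0.200), which is why the resulting line is smoother than plotting individual transaction times.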

Reply to this email directly or view it on GitHub:
https://github.com/tohojo/flent/issues/106#issuecomment-346185007