measuring time to connect, time to SSL, time to first byte
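A minimal sketch of taking the first and last of those timestamps with plain Python sockets; the local HTTP server exists only to keep the example self-contained, and time-to-SSL would additionally wrap the socket with `ssl` before sending the request. Against a real target, replace the server with the host under test.

```python
import socket
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal local HTTP server so the measurement is self-contained.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):
        pass  # keep stdout clean for the measurements

srv = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=srv.serve_forever, daemon=True).start()
host, port = srv.server_address

t0 = time.monotonic()
sock = socket.create_connection((host, port))
t_connect = time.monotonic() - t0              # time to connect
sock.sendall(b"GET / HTTP/1.0\r\nHost: localhost\r\n\r\n")
sock.recv(1)                                   # block until the first byte arrives
t_ttfb = time.monotonic() - t0                 # time to first byte
sock.close()
srv.shutdown()
print("connect_ms=%.2f ttfb_ms=%.2f" % (t_connect * 1000, t_ttfb * 1000))
```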
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/tohojo/flent/issues/231
Flent-users mailing list
Since ping is being spoofed by Starlink, a udp ping test in combination with
tcp_nup/tcp_ndown appears needed.
Also, I am loving sampling things at a 3ms interval in irtt in general.
I think a new generation of tests is needed. rrulv2?
I am thinking nup and ndown would be good names for
and/or call it udp-owd so as to plot both directions.
https://github.com/tohojo/flent/issues/230#issuecomment-869200383
Can you email me? I have something in the works you are going to *love* (davet
AT teklibre.net)
It looks like we lost the G+ thread, but basically we suppressed channel scans
if the existing RSSI was < 80 or so.
Also try adding net.ipv4.tcp_ecn=1 to sysctl.
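For reference, a sysctl.conf fragment (assuming Linux, where the default of 2 negotiates ECN only when the incoming peer requests it):

```
# /etc/sysctl.conf -- negotiate TCP ECN for both outgoing and incoming connections
net.ipv4.tcp_ecn = 1
```

Apply with `sysctl -p`, or at runtime with `sysctl -w net.ipv4.tcp_ecn=1`.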
https://github.com/tohojo/flent/issues/227#issuecomment-867650898
thx very much for absorbing my (our) work.
what's the wifi chipset in this device? (can I get one?) did you implement
https://www.usenix.org/conference/atc17/technical-sessions/presentation/hoilan-jorgesen
yes, I had written about the impact of channel scans before over here:
What is causing the 3000ms spikes in your test???
https://github.com/tohojo/flent/issues/227#issuecomment-865436922
yes
https://github.com/tohojo/flent/issues/225#issuecomment-863491515
In other talks about voip, I've talked about this as "riding the sawtooth"
![rrul_-_evenroute_v3_server_fq_codel](https://user-images.githubusercontent.com/108682/120904374-e0474980-c600-11eb-9660-88cc566a15b9.png)
![rrul_-_evenroute_v3_server_fq](https://user-images.githubusercontent.com/108682/120904376-e2110d00-c600-11eb-8492-a2c9852bf138.png)
Naive readers of an rrul plot tend to look at the average, especially when
you are trying to describe the difference between a good plot of latency and
jitter, e.g.
[a good sch_fq plot of latency and
jitter](http://dallas.starlink.taht.net/virtio_nobql/rrul_-_evenroute_v3_server_fq.png)
and
I'd also really like this to work on osx.
/me hides
https://github.com/tohojo/flent/issues/204#issuecomment-619612239
I note that if you are running into heisenbugs, patches exist for both tc and ss
to let them poll on an interval. Much better than a script, though not accepted
upstream.
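Lacking those patches, a rough Python stand-in for interval polling (the `tc` invocation in the comment is just an example target, not a claim about any particular setup):

```python
import subprocess
import time

def poll(cmd, interval, count):
    """Run cmd every `interval` seconds, `count` times, yielding each run's
    stdout. A stand-in for the tc/ss interval-polling patches mentioned above;
    doing the loop in-process avoids re-forking a wrapper script per sample."""
    for _ in range(count):
        t0 = time.monotonic()
        yield subprocess.run(cmd, capture_output=True, text=True).stdout
        # sleep only for the remainder of the interval, so sampling stays on cadence
        time.sleep(max(0.0, interval - (time.monotonic() - t0)))

# e.g. poll(["tc", "-s", "qdisc", "show", "dev", "eth0"], 0.1, 100)
for out in poll(["echo", "sample"], 0.01, 3):
    print(out.strip())
```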
Jumziey writes:
> Jumziey notificati...@github.com writes:
> I would like to get some good data on how tcp uploads/downloads
> works from a boat in certain areas while moving around with
> different solutions. I did a naive test with flent running the
> tcp_download
Closed #190.
https://github.com/tohojo/flent/issues/190#event-2805039388
Attached is a packet capture showing that CS1, CS0, and EF are being
successfully set on the rrul test for irtt. CS5 (the VI queue in wifi) isn't.
(It also shows that Comcast remarks EF to CS0.)
Pete Heist writes:
> Currently, the output filenames for batches always include batch_time:
>
> def gen_filename(self, settings, batch, argset, rep):
> filename = "batch-%s-%s-%s" % (
> settings.BATCH_NAME,
> batch['batch_time'],
>
Well, I'd LOVE it if we supported OSX for this.
https://github.com/tohojo/flent/issues/177#issuecomment-528664922
We really hit BBR where it hurts with starting it all up at the same time.
https://forum.netgate.com/topic/112527/playing-with-fq_codel-in-2-4/788
I learned something today. There are times when netperf will fall back to not
using the address you specified, or --local-bind is not working right for
netperf. I have multiple ipv6 addresses... and in this case, I wanted to go
back and forth between testing an ipv6 tunnel and ipv6 native.
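One way to sanity-check which local address a connection actually uses, rather than trusting a tool's bind flag, is to ask the kernel after connecting. A sketch using Python's `socket.create_connection` and its `source_address` parameter (localhost-only so it runs anywhere; substitute your real local and remote addresses):

```python
import socket

# A local listener stands in for the remote test server.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

# Request a specific source address, then verify the kernel honored it.
conn = socket.create_connection(srv.getsockname(),
                                source_address=("127.0.0.1", 0))
print(conn.getsockname()[0])   # the source address actually used
conn.close()
srv.close()
```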
Another rrul_v2 issue would be to correctly end up in all the queues on wifi.
https://github.com/tohojo/flent/issues/148#issuecomment-418152116
This convo is (purposefully) all over the place, but I'm leaning towards a
rrul_v2 test with 10ms irtt intervals. It's not clear to me whether flent could
deal with two different sample rates.
Actually (and I can see Pete running screaming from the room) we could add
tcp-like behavior to irtt and obsolete netperf entirely, except for referencing
the main stack. The main reason we use netperf is that core Linux devs
trusted it, and the reason why we sample only is because
On plotting stuff, I could see adding a 4th graph, much like TSDE's, for loss
and reorder.
I really do care about measuring packet loss and re-orders accurately.
I've also been fiddling with setting the TOS field, to do ECT(0), ECT(1), and CE.
Doing that at a higher level, and noting the result, would be good. --ecn 1,2,3?
A summary line of "Forward/backward path stripping DSCP", "CE marks"
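For reference, a sketch of setting those codepoints on a UDP socket via the TOS byte in Python. Assumes Linux; note that for SOCK_STREAM sockets the kernel masks out the ECN bits of IP_TOS, since TCP negotiates ECN itself.

```python
import socket

# ECN occupies the two low-order bits of the former TOS byte:
# ECT(1)=0x01, ECT(0)=0x02, CE=0x03; DSCP sits in the upper six bits.
ECT1, ECT0, CE = 0x01, 0x02, 0x03

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ECT0)
# Read it back to confirm the kernel kept the ECN bits (true for UDP on Linux).
print(hex(s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)))  # 0x2
s.close()
```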
@chromi @heistp @jg @richb-hanover
Our tests with typical sampling rates in the 200ms range are misleading. We
(until the development of irtt) have basically been pitting request/response
traffic against heavy tcp traffic, and I think it's been leading us to draw
some conclusions that are untrue.
It hangs. Run at the command line, the equivalent batch line works, so I am
thinking that somewhere along the jsonification stage they stopped line
buffering or are missing a needed flush.
Not specifically a flent issue, obviously; just noting this as a reminder to
self to go fix it.
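The missing-flush hypothesis is easy to guard against on the emitting side. A sketch (the `emit` helper and the sample record are hypothetical, not flent's actual code):

```python
import json
import sys

def emit(record, out=sys.stdout):
    """Write one JSON record per line and flush immediately, so a piped
    consumer never hangs waiting on a partially filled stdio buffer."""
    out.write(json.dumps(record) + "\n")
    out.flush()

emit({"t": 0.003, "rtt_ms": 1.2})  # prints {"t": 0.003, "rtt_ms": 1.2}
```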
thx! the network behavior I'm looking at itself doesn't make any sense, but at
least I can plot it now.
the *weird* thing I'm actually trying to look at is that it takes 10sec for the
impact of the 10 new flows to truly hit, then all the other flows come back
hard, then 10 sec later, we achieve balance. I've been lying down on the job by
not thoroughly auditing the last 6 months worth of AQM
And OK, I got around to installing PyQt5 and it does the same thing. Older
matplotlib?
![fortoke](https://user-images.githubusercontent.com/108682/42965417-c9dfa524-8b4e-11e8-8cc3-ffea1c1cd26e.png)
d@dancer:~/nem$ flent-gui --absolute-time *500*.gz
Started Flent 1.2.2-git-09e66b3 using Python 3.5.2.
WARNING: Falling back to Qt4 for the GUI. Please consider installing PyQt5.
(I have no idea how you are holding cake, iproute2, and flent all in your head)
https://github.com/tohojo/flent/issues/144#issuecomment-406384877
Reopened #144.
https://github.com/tohojo/flent/issues/144#event-1743330633
I pulled. That fix breaks those plots completely in the general case.
https://github.com/tohojo/flent/issues/144#issuecomment-406384387
[tcp_nup-2018-07-19T114126.321412.500mbit-spaceheater-dancer-offset-20sec-ethernet.flent.gz](https://github.com/tohojo/flent/files/2211294/tcp_nup-2018-07-19T114126.321412.500mbit-spaceheater-dancer-offset-20sec-ethernet.flent.gz)
yes.
https://github.com/tohojo/flent/issues/144#issuecomment-406375455
irtt supports HMAC for authentication. I am glad to see the three-way
handshake, but HMAC might be useful too.
I didn't realize irtt had grown so feature complete! --hmac! lovely!
Closed #78.
https://github.com/tohojo/flent/issues/78#event-1741016680
flent -H 172.22.148.9 -H 172.22.148.9 -H 172.22.148.9 -H 172.22.148.9 -t
cake-simul udp_flood_var_up
?
[udp_flood_var_up-2017-11-23T154421.462961.cake-simul.flent.gz](https://github.com/tohojo/flent/files/1500331/udp_flood_var_up-2017-11-23T154421.462961.cake-simul.flent.gz)
I would prefer to think it was flent that was busted here rather than cake.
Summary of rrul_be test run '100-10Mbit:nat:ack_filter' (at 2017-11-22 22:17:58.800917):

                        avg    median   # data pts
 Ping (ms) ICMP :      0.86      0.88 ms
Toke Høiland-Jørgensen writes:
> Oh, and many thanks for your work on irtt, @peteheist! We really needed such a
> tool :)
Thx very much also. I'd really like to get some owd plots out of
flent
Pete Heist writes:
>> On Nov 20, 2017, at 10:44 PM, flent-users wrote:
>>
>> A goal for me has been to be able to run Opus at 24 bit, 96Khz, with 2.7ms
>> sampling latency.
>> Actually getting 8 channels of that through a loaded box would be
Closed #117.
https://github.com/tohojo/flent/issues/117#event-1350923264
For simulation it would be helpful to be able to monitor qdiscs and other stats
within a container.
ip netns exec delay tc_iterate -i delay.l
ip netns exec delay tc_iterate -i delay.r
ip netns exec mbox tc_iterate -i middlebox.l
ip netns exec mbox tc_iterate -i middlebox.r
Pete Heist writes:
> I really like these PCEngines APU2 boards, and PTP HW timestamps. Now Peter,
> back to work... :)
What kernel is this, btw? A *lot* of useful stuff just landed in
net-next for network namespaces, which may mean I can try to validate
your results in
I tried Rust out at about the same time esr did (in fact, I sped up his Go code
by threading it better). Didn't like it, either.
( http://esr.ibiblio.org/?p=7294 )
Honestly, I don't know what to do about having better R/T-capable code in a
better language. All I can do is point out things like
Pete Heist writes:
> Thanks! I made most of your changes (-o was particularly broken, so this is a
> better solution), except:
>
> * I'm still thinking about whether to default durations to seconds or not. I'm
> using Go's default duration flag parsing, and I like
Pete Heist writes:
> An update:
>
> * JSON is working, sample attached in case there are comments / wishes.
>
> * Median (where possible) and stddev are working.
While I'm obsessive, so many seem to think networks behave with gaussian
(where the concept of stdev
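To illustrate why gaussian summary statistics mislead on latency data, a toy example with invented numbers, using Python's stdlib:

```python
import statistics

# Latency samples with a heavy tail: a handful of 300ms outliers among
# otherwise 1ms RTTs. The mean describes a "typical" delay that almost
# no packet actually experiences; the median does not.
rtts = [1.0] * 95 + [300.0] * 5
print(statistics.mean(rtts))    # 15.95 -- pulled far above the typical sample
print(statistics.median(rtts))  # 1.0
print(statistics.stdev(rtts))   # huge, dominated entirely by the tail
```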