Dave Täht writes:

> What kernel is this, btw? A *lot* of useful stuff just landed in
net-next for network namespaces, which may mean I can try to validate
your results in emulation, also. My primitive (eyeballing the packet
captures of some tcp traces) jitter result in a 4 virtualized network
namespace, was 2-6us, but that's hardly trustable.

4.9.0-4 (Debian Stretch). I've only just read the ip-netns man page for the 
first time. After re-reading your post about using namespaces to consolidate the 
test environment, I get it now. That could be extremely helpful, though at first 
glance I wonder about the realism of netem (depending on how it's used).
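For reference, here's a rough sketch of the kind of namespaced setup I have in 
mind (namespace names, addresses, and the netem parameters are all made-up 
placeholders, and everything needs root):

```shell
# Two namespaces joined by a veth pair.
ip netns add left
ip netns add right
ip link add veth0 type veth peer name veth1
ip link set veth0 netns left
ip link set veth1 netns right
ip -n left  addr add 10.9.0.1/24 dev veth0
ip -n right addr add 10.9.0.2/24 dev veth1
ip -n left  link set veth0 up
ip -n right link set veth1 up

# Optional: emulate path delay/jitter with netem. This is exactly where the
# realism question comes in -- 10ms +/- 1ms here is an arbitrary choice.
tc -n left qdisc add dev veth0 root netem delay 10ms 1ms

# Server in one namespace, client in the other:
ip netns exec right irtt server &
ip netns exec left  irtt client -i 1ms -d 10s -q 10.9.0.2
```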

> Knowing that basic measurement noise in your setup is < 107us is quite
> helpful, I wonder what the sources are....

Good question, and I'd also like to know. I could have included full stats to 
show that server processing time is part of it. Here's another run:

```
sysadmin@apu2a:~$ ./irtt client -i 1ms -d 10s -q 10.9.0.2
[Connecting] connecting to 10.9.0.2
[Connected] connected to 10.9.0.2:2112

                        Min    Mean  Median     Max  Stddev
                        ---    ----  ------     ---  ------
                RTT   233µs   269µs   268µs   330µs  8.69µs
         send delay   119µs   135µs   134µs   190µs  6.74µs
      receive delay   103µs   134µs   134µs   176µs  5.12µs
                                                           
      IPDV (jitter)      0s  8.47µs  6.08µs  92.3µs  7.98µs
          send IPDV      0s  6.25µs  4.29µs  56.9µs  6.62µs
       receive IPDV      0s  4.62µs  2.99µs  64.1µs  5.15µs
                                                           
     send call time  16.1µs  28.7µs          57.1µs  3.01µs
        timer error      0s  4.55µs           121µs  5.09µs
  server proc. time  14.8µs  16.3µs          68.8µs  2.97µs

                duration: 10s (wait 989µs)
   packets sent/received: 10000/10000 (0.00% loss)
 server packets received: 10000/10000 (0.00%/0.00% upstream/downstream loss)
     bytes sent/received: 600000/600000
       send/receive rate: 480.0 Kbps / 480.0 Kbps
           packet length: 60 bytes
             timer stats: 0/10000 (0.00%) missed, 0.45% error
```

Server proc. time is the time from just after the server receives the packet to 
just before it sends the reply (dual timestamps are enabled here). 
Theoretically I could somehow set the Linux thread scheduler to SCHED_FIFO and 
increase the sched_priority value, but:
- I'd probably have to lock the goroutines to OS threads, which I can do 
(already have a -thread option for both client and server)
- It's not clear to me how to set SCHED_FIFO and prio without a bit of native 
code compiled in
- I'm not sure it would even help
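One possible workaround that avoids native code entirely, if it turns out to 
matter: set the policy from outside the process with util-linux's chrt (needs 
root or CAP_SYS_NICE; priority 50 is an arbitrary choice):

```shell
# Launch the server under SCHED_FIFO at priority 50; threads created after
# launch inherit the policy from the thread that creates them.
chrt --fifo 50 irtt server

# Or change an already-running process:
chrt --fifo -p 50 "$(pidof irtt)"

# Verify the current policy and priority:
chrt -p "$(pidof irtt)"
```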

It'd be interesting to see what you'd get if you ran the same irtt command in 
your namespaced environment.

-- 
https://github.com/tohojo/flent/issues/106#issuecomment-343609209