Reproduction steps:
Take 3 machines and connect them on the same network through a single
switch (to limit network bandwidth/latency effects).
Run uperf on all three of them: two as slaves and one as master.
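
To bring the listeners up, it should be enough to start uperf in slave
mode on the two slave machines (a sketch; the slave just waits for the
master to connect):

  # on each of the two slave machines
  uperf -s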

I am using the following uperf script:
<?xml version="1.0"?>
<profile name="xml">
  <group nthreads="1">
    <transaction iterations="1">
      <flowop type="connect" options="remotehost=$h1 protocol=tcp wndsz=50k tcp_nodelay"/>
    </transaction>
    <transaction duration="30s">
      <flowop type="read" options="size=8k"/>
      <flowop type="write" options="size=8k"/>
    </transaction>
    <transaction iterations="1">
      <flowop type="disconnect"/>
    </transaction>
  </group>
  <group nthreads="1">
    <transaction iterations="1">
      <flowop type="connect" options="remotehost=$h2 protocol=tcp wndsz=50k tcp_nodelay"/>
    </transaction>
    <transaction duration="30s">
      <flowop type="write" options="size=8k"/>
      <flowop type="read" options="size=8k"/>
    </transaction>
    <transaction iterations="1">
      <flowop type="disconnect"/>
    </transaction>
  </group>
  <group nthreads="1">
    <transaction iterations="1">
      <flowop type="connect" options="remotehost=$h1 protocol=udp wndsz=50k tcp_nodelay"/>
    </transaction>
    <transaction duration="30s">
      <flowop type="read" options="size=1.4k"/>
      <flowop type="write" options="size=1.4k"/>
    </transaction>
    <transaction iterations="1">
      <flowop type="disconnect"/>
    </transaction>
  </group>
  <group nthreads="$n">
    <transaction iterations="1">
      <flowop type="connect" options="remotehost=$h2 protocol=udp wndsz=50k tcp_nodelay"/>
    </transaction>
    <transaction duration="30s">
      <flowop type="write" options="size=1.4k"/>
      <flowop type="read" options="size=1.4k"/>
    </transaction>
    <transaction iterations="1">
      <flowop type="disconnect"/>
    </transaction>
  </group>
  <group nthreads="$n">
    <transaction iterations="1">
      <flowop type="connect" options="remotehost=$h1 protocol=udp wndsz=50k tcp_nodelay"/>
    </transaction>
    <transaction duration="30s">
      <flowop type="read" options="size=0.2k"/>
      <flowop type="write" options="size=0.2k"/>
    </transaction>
    <transaction iterations="1">
      <flowop type="disconnect"/>
    </transaction>
  </group>
  <group nthreads="$n">
    <transaction iterations="1">
      <flowop type="connect" options="remotehost=$h2 protocol=udp wndsz=50k tcp_nodelay"/>
    </transaction>
    <transaction duration="30s">
      <flowop type="write" options="size=0.2k"/>
      <flowop type="read" options="size=0.2k"/>
    </transaction>
    <transaction iterations="1">
      <flowop type="disconnect"/>
    </transaction>
  </group>
</profile>
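
To run it, save the profile (as, say, profile.xml) and drive it from the
master; uperf resolves $h1, $h2 and $n from environment variables. The
hostnames and thread count below are only examples:

  # on the master machine; slave1/slave2 are placeholder hostnames
  h1=slave1 h2=slave2 n=50 uperf -m profile.xml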

If you make n big enough (50 seems to be a good value in my case), you
will see significant throughput differences between Bionic and Xenial on
the same hardware. Over one hundred runs, I see an average throughput of
719.8 Mbit/s in and 719.9 Mbit/s out on Xenial, versus 608.2 Mbit/s in
and 608.3 Mbit/s out on Bionic. That is a loss of roughly 15% of the
network throughput on Bionic ((719.8 - 608.2) / 719.8 ≈ 15.5%).

Also, if I run "sudo tc -s qdisc" or netstat, I see packets being
dropped on Bionic but not on Xenial.
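
For reference, these are the checks (eth0 is an example interface name;
the interesting fields are the "dropped" counter in the tc output and
the drop/retransmit counters in the protocol statistics):

  # per-qdisc statistics, including drops
  sudo tc -s qdisc show dev eth0
  # protocol-level drop and retransmit counters
  netstat -s | grep -iE 'drop|retrans'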

I ran this test because we see network performance issues (TCP
timeouts/packet drops) when validating our application on Bionic in our
datacenter, which uses completely different hardware.
