Hi,

> The report shall show that the receiver did not see the generated
> packets.

The iperf client reports this by saying "WARNING: did not receive ack
of last datagram after 3 tries". What I find weird are the unrealistic
sent-traffic figures printed by the iperf client. If I execute
"iperf -c 10.10.10.1 -fm -t 600 -i60 -u -b 500m" and 10.10.10.1 is
firewalled/non-reachable, then I expect output like this:

root@vserver:~# iperf -c 10.10.10.1 -fm -t 600 -i60 -u -b 500m
------------------------------------------------------------
Client connecting to 10.10.10.1, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 0.22 MByte (default)
------------------------------------------------------------
[  3] local 192.168.1.2 port 38755 connected with 10.10.10.1 port 5001
[ ID] Interval        Transfer      Bandwidth
[  3]   0.0- 60.0 sec  3613 MBytes   505 Mbits/sec
[  3]  60.0-120.0 sec  3620 MBytes   506 Mbits/sec
[  3] 120.0-180.0 sec  3618 MBytes   506 Mbits/sec
etc

In other words, the iperf client should keep sending traffic even
though 10.10.10.1 is unreachable, because UDP is connectionless, and
the amount of traffic sent should be ~500 Mbps, because that is the
rate requested with the "-b" flag.

regards,
Martin
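To illustrate the behaviour Martin expects, here is a minimal sketch of
a paced UDP sender (Python; not taken from the thread and not iperf's
implementation). The target address, rate and duration are
placeholders; the point is that UDP sends normally succeed whether or
not anything is listening, and that the pacing loop, like "-b", sets
the offered rate:

    #!/usr/bin/env python3
    # Minimal paced UDP sender (illustrative only, not iperf's code).
    # UDP is fire-and-forget: sends normally succeed even if nothing is
    # listening, and the pacing loop keeps the offered load near the
    # target rate.
    import socket
    import time

    TARGET   = ("10.10.10.1", 5001)   # placeholder address/port
    PAYLOAD  = b"\x00" * 1470         # iperf's default UDP datagram size
    RATE_BPS = 500_000_000            # target offered load, ~500 Mbit/s
    DURATION = 10                     # seconds

    gap = len(PAYLOAD) * 8 / RATE_BPS # time budget per datagram (~23.5 us)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.connect(TARGET)              # iperf "connects" its UDP socket too

    sent = 0
    start = time.monotonic()
    next_send = start
    try:
        while time.monotonic() - start < DURATION:
            sock.send(PAYLOAD)
            sent += len(PAYLOAD)
            next_send += gap
            delay = next_send - time.monotonic()
            if delay > 0:             # pacing is approximate; Python's sleep
                time.sleep(delay)     # granularity limits accuracy at high rates
    except OSError as exc:
        # On a connected UDP socket an incoming ICMP "host unreachable" can
        # make a later send() fail (EHOSTUNREACH, printed as
        # "No route to host", as in the quoted output below).
        print("send failed:", exc)

    elapsed = time.monotonic() - start
    print(f"offered {sent * 8 / elapsed / 1e6:.1f} Mbit/s over {elapsed:.1f} s")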
On 8/22/14, Sandro Bureca <sbur...@gmail.com> wrote:
> Hi, all,
>
> since UDP is connectionless by its nature, you can flood the network
> with iperf even with no corresponding receiver on the far end.
> The report shall show that the receiver did not see the generated
> packets.
>
> Sandro
>
> On 22 August 2014 11:03, Martin T <m4rtn...@gmail.com> wrote:
>
>> Hi,
>>
>> please see the full output below:
>>
>> root@vserver:~# iperf -c 10.10.10.1 -fm -t 600 -i60 -u -b 500m
>> ------------------------------------------------------------
>> Client connecting to 10.10.10.1, UDP port 5001
>> Sending 1470 byte datagrams
>> UDP buffer size: 0.16 MByte (default)
>> ------------------------------------------------------------
>> [  3] local 192.168.1.2 port 55373 connected with 10.10.10.1 port 5001
>> [ ID] Interval        Transfer        Bandwidth
>> [  3]   0.0- 60.0 sec   422744 MBytes  59104 Mbits/sec
>> [  3]  60.0-120.0 sec   435030 MBytes  60822 Mbits/sec
>> [  3] 120.0-180.0 sec   402263 MBytes  56240 Mbits/sec
>> [  3] 180.0-240.0 sec   398167 MBytes  55668 Mbits/sec
>> [  3] 240.0-300.0 sec   422746 MBytes  59104 Mbits/sec
>> [  3] 300.0-360.0 sec   381786 MBytes  53378 Mbits/sec
>> [  3] 360.0-420.0 sec   402263 MBytes  56240 Mbits/sec
>> [  3] 420.0-480.0 sec   406365 MBytes  56814 Mbits/sec
>> [  3] 480.0-540.0 sec   438132 MBytes  61395 Mbits/sec
>> [  3]   0.0-600.0 sec  4108674 MBytes  57443 Mbits/sec
>> [  3] Sent 6119890 datagrams
>> read failed: No route to host
>> [  3] WARNING: did not receive ack of last datagram after 3 tries.
>> root@vserver:~#
>>
>> In UDP mode the iperf client will send the data despite the fact
>> that the iperf server is not reachable.
>>
>> Still, to me this looks like a bug. The iperf client reporting ~60 Gbps
>> of egress traffic on a virtual machine with a 1GigE vNIC, while the
>> bandwidth was limited with the -b flag, is IMHO not expected behavior.
>>
>> regards,
>> Martin
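As a side note on the quoted output, the final byte counter and the
datagram counter are mutually inconsistent, which supports Martin's
suspicion that the transfer/bandwidth figures are a reporting artefact
rather than real traffic. A quick check, assuming iperf's "MBytes"
means 2^20 bytes (the value that reproduces the reported
57443 Mbits/sec):

    # Cross-check of the totals in the quoted iperf output above.
    datagrams = 6_119_890            # "[  3] Sent 6119890 datagrams"
    payload   = 1470                 # "Sending 1470 byte datagrams"
    duration  = 600                  # -t 600
    mbytes    = 4_108_674            # "[  3] 0.0-600.0 sec 4108674 MBytes"

    rate_from_datagrams = datagrams * payload * 8 / duration / 1e6   # ~120 Mbit/s
    rate_from_bytes     = mbytes * 2**20 * 8 / duration / 1e6        # ~57,443 Mbit/s
    print(f"{rate_from_datagrams:.0f} Mbit/s implied by the datagram count")
    print(f"{rate_from_bytes:.0f} Mbit/s implied by the byte counter")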
>> On 8/22/14, Metod Kozelj <metod.koz...@lugos.si> wrote:
>> > Hi,
>> >
>> > the bandwidth limitation switch (-b) limits the maximum rate at which
>> > the sending party (usually the client) will transmit data if there is
>> > no bottleneck that the sending party is able to detect. If the test is
>> > done using TCP, a bottleneck will be apparent to the client (the IP
>> > stack will always block transmission if outstanding data has not been
>> > delivered yet). If the test is done using UDP, the sending party will
>> > mostly just transmit data at the maximum rate, except in some rare
>> > cases.
>> >
>> > To verify this, you can run iperf in client mode with a command
>> > similar to this:
>> >
>> > iperf -c localhost -i 1 -p 42000 -u -b500M -t 10
>> >
>> > ... make sure that the port used in the command above (42000) is not
>> > used by some other application. If you vary the bandwidth setting, you
>> > can see that there is a practical maximum speed that even the loopback
>> > network device can handle. When experimenting with the command above,
>> > I found a few interesting facts about my particular machine:
>> >
>> >   * when targeting a machine on my 100Mbps LAN, the transmit rate
>> >     would not go beyond 96Mbps (which is consistent with the fact that
>> >     100Mbps is wire speed while UDP over ethernet carries some
>> >     overhead)
>> >   * when targeting the loopback device with a "low" bandwidth
>> >     requirement (such as 50Mbps), the transmit rate would be exactly
>> >     half of the requested rate. I don't know if this is some kind of
>> >     reporting artefact or whether it actually transmits at half the
>> >     rate
>> >   * the UDP transmit rate over the loopback device would not go
>> >     beyond 402Mbps.
>> >
>> > I was using iperf 2.0.5, and I found that it behaves similarly on
>> > another host (402 Mbps max over loopback, up to 812 Mbps over GigE).
>> >
>> > The tests above show that loopback devices (and I would count any
>> > virtualised network device as such) are subject to some kind of limit.
>> >
>> > Peace!
>> >   Mkx
>> >
>> > -- perl -e 'print
>> > $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'
>> > -- echo 16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlb xq | dc
>> >
>> > ------------------------------------------------------------------------------
>> >
>> > BOFH excuse #299:
>> >
>> > The data on your hard drive is out of balance.
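A back-of-the-envelope check of the 96 Mbps figure in the first bullet
of Metod's message above. The framing overhead values are standard
Ethernet/IPv4 numbers assumed for the calculation, not something stated
in the thread:

    # Rough goodput ceiling for 1470-byte iperf datagrams on 100 Mbit/s
    # Ethernet. Framing figures are textbook Ethernet values (assumed).
    payload  = 1470                 # iperf's default UDP payload
    udp_ip   = 8 + 20               # UDP + IPv4 headers
    ethernet = 14 + 4 + 8 + 12      # MAC header + FCS + preamble/SFD + inter-frame gap

    wire_bytes = payload + udp_ip + ethernet     # 1536 bytes on the wire per datagram
    efficiency = payload / wire_bytes            # ~0.957
    print(f"max goodput on 100 Mbit/s: {100 * efficiency:.1f} Mbit/s")  # ~95.7, close to 96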
>> > On 21/08/14 16:51, Martin T wrote:
>> >> Metod,
>> >>
>> >> but shouldn't the iperf client send out traffic at 500Mbps, as I had
>> >> "-b 500m" specified? In my example it prints unrealistic bandwidth
>> >> (~60Gbps) results.
>> >>
>> >> regards,
>> >> Martin
>> >>
>> >> On 8/21/14, Metod Kozelj <metod.koz...@lugos.si> wrote:
>> >>> Hi,
>> >>>
>> >>> On 21/08/14 15:12, Martin T wrote:
>> >>>> if I execute "iperf -c 10.10.10.1 -fm -t 600 -i 60 -u -b 500m" and
>> >>>> 10.10.10.1 is behind the firewall so that the iperf client is not
>> >>>> able to reach it, then I will see the following results printed by
>> >>>> the iperf client:
>> >>>>
>> >>>> [ ID] Interval        Transfer        Bandwidth
>> >>>> [  3]   0.0- 60.0 sec  422744 MBytes  59104 Mbits/sec
>> >>>> [  3]  60.0-120.0 sec  435030 MBytes  60822 Mbits/sec
>> >>>> etc
>> >>>>
>> >>>> Why does the iperf client behave like that? Is this a known bug?
>> >>>
>> >>> That's not a bug in iperf, it's how UDP works. The main difference
>> >>> between TCP and UDP is that with TCP, the IP stack itself takes care
>> >>> of all the details (such as in-order delivery, retransmissions, rate
>> >>> adaptation, ...), while with UDP that is the responsibility of the
>> >>> application. The only extra thing the iperf application does when
>> >>> using UDP is to fetch the server (receiving side) report at the end
>> >>> of the transmission. Even this is not done in a perfect way ... the
>> >>> sending side only waits for the server report for a short time, and
>> >>> if it has filled the network buffers, this waiting time can be too
>> >>> short.
>> >>>
>> >>> The same phenomenon can be seen if there is a bottleneck somewhere
>> >>> between the nodes and you try to push the data rate too high ... the
>> >>> routers on either side of the bottleneck will discard packets when
>> >>> their TX buffers fill up. If TCP were used, this would trigger
>> >>> retransmissions in the IP stack, TCP slow-start would kick in, and
>> >>> the sending application would notice the drop in throughput. If UDP
>> >>> is used, the IP stack does not react in any way and the application
>> >>> keeps dumping data at top speed.
>> >>> --
>> >>>
>> >>> Peace!
>> >>>   Mkx
>> >>>
>> >>> -- perl -e 'print
>> >>> $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'
>> >>> -- echo 16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlb xq | dc
>> >>>
>> >>> ------------------------------------------------------------------------------
>> >>>
>> >>> BOFH excuse #252:
>> >>>
>> >>> Our ISP is having {switching,routing,SMDS,frame relay} problems
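To make the end-of-test exchange Metod describes a little more
concrete, here is a rough sketch of the general pattern (send a final
datagram, wait briefly for the receiver's report, retry a few times,
then give up). The marker, port, timeout and retry count are
assumptions for illustration; this is not iperf's actual protocol or
code:

    #!/usr/bin/env python3
    # Generic "final datagram + short wait for the receiver's report"
    # pattern, with made-up constants; not iperf's wire format.
    import socket

    SERVER  = ("10.10.10.1", 5001)   # placeholder
    TRIES   = 3                      # give up after this many attempts
    TIMEOUT = 0.25                   # seconds to wait for the report each try

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.connect(SERVER)
    sock.settimeout(TIMEOUT)

    report = None
    for attempt in range(TRIES):
        try:
            sock.send(b"END")                # hypothetical end-of-test marker
            report, _ = sock.recvfrom(2048)  # wait briefly for the report
            break
        except socket.timeout:
            continue                         # no report yet, retry
        except OSError as exc:
            print("send/recv failed:", exc)  # e.g. ICMP unreachable surfaces here
            break

    if report is None:
        print(f"WARNING: did not receive report after {TRIES} tries")
    else:
        print("receiver report:", report)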