Hello,
You seem quite experienced with Ethernet; I would say you have the 
expertise.
I am interested in this board because it has two Ethernet ports. The idea 
is to have the first Ethernet port connected to the source (server) and the 
second one used for cascading/daisy-chaining.
Could you please tell me if that is possible?
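
What I have in mind, roughly, is to bridge the two ports in software on the 
board. A minimal sketch, assuming a Linux image where the ports show up as 
eth0 and eth1 (the interface names are just a guess on my side):

  # create a bridge and attach both Ethernet ports to it
  ip link add name br0 type bridge
  ip link set eth0 master br0
  ip link set eth1 master br0
  ip link set eth0 up
  ip link set eth1 up
  ip link set br0 up
  # give the board its own address on the bridge instead of on eth0
  dhclient br0

Frames between the server and the next device in the chain would then be 
forwarded by the CPU; maybe the SoC's internal switch can do this in 
hardware as well, but I have not checked that.
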
Thanks in advance.
Cheers,
Christophe

On Friday, September 23, 2016 at 6:17:58 PM UTC+2, [email protected] wrote:
>
> Hi,
>
> I think I already asked this question, but I can't find it anymore...
> There is an erratum for the AM572x CPU saying that RGMII2 can only be 
> clocked fast enough for Fast Ethernet (100BASE-T), not Gigabit 
> Ethernet... Today I ran some tests: both Ethernet ports of the BB-X15 are 
> connected to a switch that supports Gigabit Ethernet. Auto-negotiation 
> selects 1000baseT-FD for both links. A test with iperf3 gives me ~940 Mbps 
> on each port (tested sequentially). If I run tests on both links at the 
> same time, things look very different. I started the second test after ~5 s 
> and terminated it after ~10 s. The performance on two links is far LESS than 
> the performance on a single link. (The iperf3 server is connected to the 
> network via 2*10 Gbps and IS able to fill the link.)
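>
> For reference, the negotiated speed can be double-checked on the board with 
> something like this (assuming the two ports show up as eth0 and eth1):
>
> # both should report "Speed: 1000Mb/s" and "Duplex: Full"
> ethtool eth0 | grep -E 'Speed|Duplex'
> ethtool eth1 | grep -E 'Speed|Duplex'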
>
> [me@test-vm ~]$ iperf3 -c 10.20.0.121 -t 600
> Connecting to host 10.20.0.121, port 5201
> [  4] local 10.20.0.121 port 54362 connected to 46.234.46.30 port 5201
> [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
> [  4]   0.00-1.00   sec   115 MBytes   966 Mbits/sec  1054    311 KBytes
> [  4]   1.00-2.00   sec   111 MBytes   933 Mbits/sec  270    317 KBytes
> [  4]   2.00-3.00   sec   111 MBytes   933 Mbits/sec  286    314 KBytes
> [  4]   3.00-4.00   sec   111 MBytes   933 Mbits/sec  156    266 KBytes
> [  4]   4.00-5.00   sec   112 MBytes   944 Mbits/sec    0    495 KBytes
> [  4]   5.00-6.00   sec  62.5 MBytes   524 Mbits/sec  182   21.2 KBytes
> [  4]   6.00-7.00   sec  35.0 MBytes   293 Mbits/sec  190   21.2 KBytes
> [  4]   7.00-8.00   sec  37.5 MBytes   315 Mbits/sec  143   12.7 KBytes
> [  4]   8.00-9.00   sec  40.0 MBytes   336 Mbits/sec  210   26.9 KBytes
> [  4]   9.00-10.00  sec  38.8 MBytes   325 Mbits/sec  199   12.7 KBytes
> [  4]  10.00-11.00  sec  72.5 MBytes   608 Mbits/sec  120    236 KBytes
> [  4]  11.00-12.00  sec   111 MBytes   933 Mbits/sec  228    243 KBytes
> [  4]  12.00-13.00  sec   111 MBytes   933 Mbits/sec  312    276 KBytes
> [  4]  13.00-14.00  sec   112 MBytes   944 Mbits/sec  231    328 KBytes
> [  4]  14.00-15.00  sec   111 MBytes   933 Mbits/sec  309    341 KBytes
> [  4]  15.00-16.00  sec   111 MBytes   933 Mbits/sec  232    272 KBytes
> [  4]  16.00-17.00  sec   109 MBytes   912 Mbits/sec  334    165 KBytes
> ^C[  4]  17.00-17.25  sec  22.5 MBytes   756 Mbits/sec    0    250 KBytes
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval           Transfer     Bandwidth       Retr
> [  4]   0.00-17.25  sec  1.50 GBytes   747 Mbits/sec  4456             sender
> [  4]   0.00-17.25  sec  0.00 Bytes  0.00 bits/sec                  receiver
> iperf3: interrupt - the client has terminated
>
> I get the same results if I run two iperfs in different directions on the 
> links (one plain iperf3 and one with -R). Probably the connection of the 
> internal switch to the A15 cores is the bottleneck.
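>
> In case someone wants to reproduce this: two simultaneous tests need two 
> iperf3 server instances on the BB-X15, since one iperf3 -s only handles one 
> test at a time. Roughly, with placeholder addresses:
>
> # on the BB-X15: one server per port
> iperf3 -s -p 5201 &
> iperf3 -s -p 5202 &
> # on the big host, in two shells: one test towards the board, one away
> # from it (-R reverses the direction)
> iperf3 -c <x15-ip-port1> -p 5201 -t 60
> iperf3 -c <x15-ip-port2> -p 5202 -t 60 -R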
>
> Things look worse if I test UDP:
>
> 800 Mbps from BB-X15 -> BigHost
>
> [me@test-vm ~]$ iperf3 -c 10.20.0.121 -t 10 -u -b 800M -R 
> Connecting to host 10.20.0.121, port 5201
> Reverse mode, remote host 10.20.0.121 is sending
> [  4] local 10.20.0.121 port 40556 connected to 46.234.46.31 port 5201
> [ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
> [  4]   0.00-1.00   sec  97.1 MBytes   814 Mbits/sec  0.059 ms  0/12425 (0%)
> [  4]   1.00-2.00   sec  96.8 MBytes   812 Mbits/sec  0.058 ms  0/12388 (0%)
> [  4]   2.00-3.00   sec  95.6 MBytes   802 Mbits/sec  0.062 ms  0/12234 (0%)
> [  4]   3.00-4.00   sec  92.6 MBytes   777 Mbits/sec  0.053 ms  0/11849 (0%)
> [  4]   4.00-5.00   sec  92.7 MBytes   778 Mbits/sec  0.059 ms  0/11869 (0%)
> [  4]   5.00-6.00   sec  97.9 MBytes   821 Mbits/sec  0.059 ms  0/12529 (0%)
> [  4]   6.00-7.00   sec  92.5 MBytes   776 Mbits/sec  0.053 ms  0/11839 (0%)
> [  4]   7.00-8.00   sec  96.7 MBytes   811 Mbits/sec  0.076 ms  0/12376 (0%)
> [  4]   8.00-9.00   sec  95.1 MBytes   798 Mbits/sec  0.056 ms  0/12173 (0%)
> [  4]   9.00-10.00  sec  98.3 MBytes   825 Mbits/sec  0.061 ms  0/12584 (0%)
> [  4]  10.00-11.00  sec  89.4 MBytes   750 Mbits/sec  0.088 ms  0/11447 (0%)
> [  4]  11.00-12.00  sec  56.8 MBytes   476 Mbits/sec  0.053 ms  0/7264 (0%)
> [  4]  12.00-13.00  sec  56.8 MBytes   476 Mbits/sec  0.049 ms  0/7269 (0%)
> [  4]  13.00-14.00  sec  56.1 MBytes   471 Mbits/sec  0.042 ms  0/7187 (0%)
> [  4]  14.00-15.00  sec  56.6 MBytes   474 Mbits/sec  0.060 ms  0/7239 (0%)
> [  4]  15.00-16.00  sec  54.4 MBytes   457 Mbits/sec  0.037 ms  0/6968 (0%)
> [  4]  16.00-17.00  sec  56.1 MBytes   471 Mbits/sec  0.061 ms  0/7183 (0%)
> [  4]  17.00-18.00  sec  56.7 MBytes   476 Mbits/sec  0.061 ms  0/7257 (0%)
> [  4]  18.00-19.00  sec  56.6 MBytes   475 Mbits/sec  0.050 ms  0/7242 (0%)
> [  4]  19.00-20.00  sec  55.8 MBytes   468 Mbits/sec  0.078 ms  0/7140 (0%)
>
> (The results are similar on both links.)
> After ~10 s I started the same test on the other link. Packet throughput on 
> each link dropped by a bit less than 50%, from ~800 Mbps to roughly 470 Mbps, 
> so together the two links move only about 940 Mbps. If I send 800 Mbps on one 
> link and receive 800 Mbps on the other link, things work as expected 
> (1.6 Gbps total, rx+tx).
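>
> The rx+tx case is the same idea with UDP, again with placeholder addresses 
> and the two servers from above:
>
> # link 1: push 800 Mbps towards the BB-X15
> iperf3 -c <x15-ip-port1> -p 5201 -u -b 800M -t 60
> # link 2: pull 800 Mbps from the BB-X15 (-R reverses the direction)
> iperf3 -c <x15-ip-port2> -p 5202 -u -b 800M -t 60 -R
>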
> One very strange result shows up when testing UDP reception on the BB-X15: I 
> always get very high packet loss:
> -----------------------------------------------------------
> Server listening on 5201
> -----------------------------------------------------------
> Accepted connection from bighost, port 44906
> [  5] local 10.20.0.121 port 5201 connected to bighost port 33716
> [ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
> [  5]   0.00-1.00   sec  13.0 MBytes   109 Mbits/sec  0.496 ms  9376/11035 (85%)
> [  5]   1.00-2.00   sec  13.7 MBytes   115 Mbits/sec  0.712 ms  10473/12224 (86%)
> [  5]   2.00-3.00   sec  14.5 MBytes   122 Mbits/sec  0.821 ms  10437/12293 (85%)
> [  5]   3.00-4.00   sec  13.1 MBytes   110 Mbits/sec  0.571 ms  10509/12183 (86%)
>
> This seems to be related to receive buffers that are too small (163 kB). If I 
> increase the default buffer space to 2 MB (sysctl -w 
> net.core.rmem_default=2097152), I get 0% loss at 100 Mbps and 26% at 200 Mbps. 
> Checking the default rmem on the big host, it's 16 MB. Setting the BB-X15 to 
> 16 MB as well results in 5% packet loss at 800 Mbps with the BB-X15 as a UDP 
> receiver.
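>
> For anyone who wants to try the buffer tuning, this is roughly it. Raising 
> rmem_max as well is my assumption; it only matters if the application asks 
> for a larger socket buffer itself:
>
> # enlarge the default (and maximum) socket receive buffer to 16 MB
> sysctl -w net.core.rmem_default=16777216
> sysctl -w net.core.rmem_max=16777216
> # make it persistent across reboots (the file name is arbitrary)
> printf 'net.core.rmem_default=16777216\nnet.core.rmem_max=16777216\n' > /etc/sysctl.d/90-rmem.conf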
>
> Lesson learnt: there are not two true Gigabit ports available here, as we 
> know them from PC servers ;) Still impressive for an embedded system.
>
> Claudius
>
