Re: Switching from trunk(4) to aggr(4)

2020-12-13 Thread livio

Hey,
My setup at home is almost identical. APU with aggr interface and a couple of 
VLANs:
https://github.com/liv-io/ansible-playbooks-example/blob/master/bsd/host_vars/fw01.example.com.yml

# cat /etc/hostname.em{1,2,3}
up

# cat /etc/hostname.aggr0
trunkport em1 trunkport em2 trunkport em3 lacpmode active lacptimeout slow description "i_data"
up

# cat /etc/hostname.vlan11
inet 10.1.1.2 255.255.255.0 NONE vnetid 11 vlandev aggr0 description "v_base"
up

# cat /etc/hostname.carp11
inet 10.1.1.1 255.255.255.0 NONE vhid 1 carpdev vlan11 advskew 10 pass "" description "v_base"


# ifconfig aggr0
aggr0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
    lladdr
    description: i_data
    index 11 priority 0 llprio 7
    trunk: trunkproto lacp
    trunk id: [(8000,,000B,,),
     (8000,,0006,,)]
    em1 lacp actor system pri 0x8000 mac, key 0xb, port pri 0x8000 number 0x2
    em1 lacp actor state activity,aggregation,sync,collecting,distributing
    em1 lacp partner system pri 0x8000 mac, key 0x6, port pri 0x8000 number 0x12e
    em1 lacp partner state activity,aggregation,sync,collecting,distributing
    em1 port active,collecting,distributing
    em2 lacp actor system pri 0x8000 mac, key 0xb, port pri 0x8000 number 0x3
    em2 lacp actor state activity,aggregation,sync,collecting,distributing
    em2 lacp partner system pri 0x8000 mac, key 0x6, port pri 0x8000 number 0x130
    em2 lacp partner state activity,aggregation,sync,collecting,distributing
    em2 port active,collecting,distributing
    em3 lacp actor system pri 0x8000 mac, key 0xb, port pri 0x8000 number 0x4
    em3 lacp actor state activity,aggregation,sync,collecting,distributing
    em3 lacp partner system pri 0x8000 mac, key 0x6, port pri 0x8000 number 0x12f
    em3 lacp partner state activity,aggregation,sync,collecting,distributing
    em3 port active,collecting,distributing
    groups: aggr
    media: Ethernet autoselect
    status: active

It works well for me, and I have never had issues. I currently use an HP switch, but it
also works with Cisco.
Maybe there are some leftovers from the old LACP trunk config? I have never encountered
the "no carrier" status issue myself, though.


Let me know if I can extract any config for you.

HTH,
Livio

On 2020-12-12 16:44, Daniel Jakots wrote:

Hi,

I've been using a LACP trunk on my apu (with the three em(4)). On
top of which I have some vlans. I've been doing that for years and it's
working fine.

I thought about using aggr(4) instead (for no real reason). But the
aggr interface stays in "status: no carrier".

What I did was replace my hostname.trunk0

trunkproto lacp trunkport em0 trunkport em1 trunkport em2
up

with a hostname.aggr0

trunkport em0 trunkport em1 trunkport em2
up

(and change the parent in my hostname.vlan*). To apply the new
configuration, I just rebooted.

My trunk0 which works is
trunk0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
 lladdr 00:0d:b9:43:9f:fc
 index 7 priority 0 llprio 3
 trunk: trunkproto lacp
 trunk id: [(8000,00:0d:b9:43:9f:fc,403C,,),
  (0080,00:00:00:00:00:00,,,)]
 em2 lacp actor system pri 0x8000 mac 00:0d:b9:43:9f:fc, key 0x403c, port pri 0x8000 number 0x3
 em2 lacp actor state activity,aggregation,sync,collecting,distributing,defaulted
 em2 lacp partner system pri 0x80 mac 00:00:00:00:00:00, key 0x0, port pri 0x80 number 0x0
 em2 lacp partner state aggregation,sync,collecting,distributing
 em2 port active,collecting,distributing
 em1 lacp actor system pri 0x8000 mac 00:0d:b9:43:9f:fc, key 0x403c, port pri 0x8000 number 0x2
 em1 lacp actor state activity,aggregation,sync,collecting,distributing,defaulted
 em1 lacp partner system pri 0x80 mac 00:00:00:00:00:00, key 0x0, port pri 0x80 number 0x0
 em1 lacp partner state aggregation,sync,collecting,distributing
 em1 port active,collecting,distributing
 em0 lacp actor system pri 0x8000 mac 00:0d:b9:43:9f:fc, key 0x403c, port pri 0x8000 number 0x1
 em0 lacp actor state activity,aggregation,sync,collecting,distributing,defaulted
 em0 lacp partner system pri 0x80 mac 00:00:00:00:00:00, key 0x0, port pri 0x80 number 0x0
 em0 lacp partner state aggregation,sync,collecting,distributing
 em0 port active,collecting,distributing
 groups: trunk
 media: Ethernet autoselect
 status: active

And the aggr0 which doesn't come up is:
aggr0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
 lladdr 00:0d:b9:43:9f:fc
 index 6 priority 0 llprio 7
 t

Re: Low throughput with 1 GigE interface

2020-02-06 Thread livio
Thank you @Noth.

You are right. The OpenBSD PF FAQ also says:
> PF will only use one processor, so multiple processors (or multiple cores)
WILL NOT improve PF performance.

For PC Engines APU users, I can highly recommend updating the BIOS. It improved
my network performance quite a bit:
https://pcengines.github.io/
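
For reference, the coreboot/BIOS version that is currently running can be checked from
dmesg before and after flashing (a sketch; output abbreviated):

$ dmesg | grep ^bios0
bios0: vendor coreboot version "v4.11.0.2" date ...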

When testing with iperf3 I now get 450Mbit/s instead of 200Mbit/s (with pf
enabled). With pf disabled, I get ~750Mbit/s. Copying files from one machine to
another (same physical NIC on the APU, with VLANs and PF enabled) gave me around
930Mbit/s.

Again, many thanks for all your help and inputs!

On 2/5/2020 12:56 AM, Noth wrote:
> According to the manufacturer of the APU2, the problem is with OpenBSD not
> using all cores for network traffic management:
> https://teklager.se/en/knowledge-base/apu2c0-ipfire-throughput-test-much-faster-pfsense/






Re: Low throughput with 1 GigE interface

2020-01-30 Thread livio
Thank you for your inputs - @Jordan, @Tom, @Christian

On 1/30/2020 9:07 PM, Tom Smyth wrote:
> Livio, are you running iperf on the APU?
> The APU doesn't have much CPU to generate packets from iperf...
> Forwarding perf should be about 450M on an APU C2 with pf enabled and about
> 850M-900M with pf disabled.
> That is testing with iperf through the apu2c2 with decent professional laptops,
> with iperf-generated traffic measured on those laptops.

@Tom: Yes, I am running iperf (server) on the APU. When I run iperf
(server) on the notebook I get the following results:

apu# iperf3 -c 192.168.20.40
Connecting to host 192.168.20.40, port 5201
[  5] local 192.168.20.28 port 17990 connected to 192.168.20.40 port 5201
[ ID] Interval   Transfer Bitrate
[  5]   0.00-1.00   sec  81.5 MBytes   680 Mbits/sec
[  5]   1.00-2.00   sec  81.1 MBytes   681 Mbits/sec
[  5]   2.00-3.01   sec  82.5 MBytes   685 Mbits/sec


On 1/30/2020 11:29 PM, Christian Weisgerber wrote:
> I vaguely remember a thread somewhere that concluded that one of
> these network benchmark tools degenerated into a benchmark of
> gettimeofday(2), which apparently is very cheap on Linux and not
> cheap on OpenBSD.  So you end up measuring the performance of this
> system call.
>
> I don't remember whether it was iperf...

You are probably right. I will now have to test the throughput with
tcpbench as suggested by Jordan.
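
For reference, tcpbench(1) ships in the OpenBSD base system, so nothing extra needs to
be installed on the APU. A minimal run between two OpenBSD/Unix hosts would look roughly
like this (the Windows notebook would need a different client; the address is the APU's
from the tests above):

server# tcpbench -s
client$ tcpbench 192.168.20.28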


On 1/31/2020 2:04 AM, Jordan Geoghegan wrote:
> That sounds about right. I vaguely remember reading a thread about iperf on
> misc some time in the past year mentioning that.
> While OpenBSD obviously doesn't have the same network performance as Linux or
> FreeBSD, as work continues on unlocking more of the kernel, things will
> continue to get better. I think bluhm@ regularly runs some automated
> benchmarks that show that OpenBSD maxes out at around 4-5 Gbit / second
> throughput. 

Thank you, I will try to run some tests using tcpbench and send an update.
The forwarding performance is the relevant part.
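
As a rough sketch of the forwarding test: one host on each side of the APU, in different
subnets, so the traffic is routed (and filtered) by the APU instead of terminating on it.
The addresses below are placeholders:

hostA# tcpbench -s                 # e.g. 192.168.10.50 on the LAN side
hostB$ tcpbench 192.168.10.50      # from another subnet, routed through the APU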




Re: Low throughput with 1 GigE interface

2020-01-30 Thread livio
I ran fw_update and syspatch, which made the machine crash twice after
booting, but now it is up and running again. The iperf results are
still the same though:

apu# iperf -s

Server listening on TCP port 5001
TCP window size: 16.0 KByte (default)

[  4] local 192.168.20.28 port 5001 connected with 192.168.20.40 port 50064
[ ID] Interval   Transfer Bandwidth
[  4]  0.0-10.0 sec   238 MBytes   199 Mbits/sec

apu# iperf3 -s
---
Server listening on 5201
---
Accepted connection from 192.168.20.40, port 50066
[  5] local 192.168.20.28 port 5201 connected to 192.168.20.40 port 50067
[ ID] Interval   Transfer Bitrate
[  5]   0.00-1.00   sec  22.2 MBytes   186 Mbits/sec
[  5]   1.00-2.00   sec  23.2 MBytes   195 Mbits/sec

Should I try a -current or -stable kernel?
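
If -current is worth a try, sysupgrade(8) can fetch and install the latest snapshot
directly (a sketch; -s selects snapshots rather than the next release):

apu# sysupgrade -s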


The update and crash output:

apu# fw_update
apu# syspatch
Get/Verify syspatch66-001_bpf.tgz 100% ||   102 KB    00:00
Installing patch 001_bpf
Get/Verify syspatch66-002_ber.tgz 100% ||   660 KB    00:03
Installing patch 002_ber
Get/Verify syspatch66-003_bgpd.tgz 100% |***|   181 KB    00:00
Installing patch 003_bgpd
Get/Verify syspatch66-004_net8021... 100% |*| 64839   00:00
Installing patch 004_net80211
Get/Verify syspatch66-005_sysupgr... 100% |*|  3023   00:00
Installing patch 005_sysupgrade
Get/Verify syspatch66-006_ifioctl... 100% |*|   381 KB    00:03
Installing patch 006_ifioctl
Get/Verify syspatch66-007_inteldr... 100% |*| 21468 KB    00:23
Installing patch 007_inteldrm
Get/Verify syspatch66-010_libcaut... 100% |*| 20185 KB    00:17
Installing patch 010_libcauth
Get/Verify syspatch66-012_suauth.tgz 100% |*|  7997   00:00
Installing patch 012_suauth
Get/Verify syspatch66-013_ldso.tgz 100% |***|   295 KB    00:02
Installing patch 013_ldso
Get/Verify syspatch66-015_ftp.tgz 100% || 65164   00:00
Installing patch 015_ftp
Get/Verify syspatch66-016_ripd.tgz 100% |***| 45685   00:00
Installing patch 016_ripd
Get/Verify syspatch66-017_inteldr... 100% |*|   268 KB    00:02
Installing patch 017_inteldrmctx
Get/Verify syspatch66-018_smtpd_t... 100% |*|   224 KB    00:03
Installing patch 018_smtpd_tls
Get/Verify syspatch66-019_smtpd_e... 100% |*|   224 KB    00:02
Installing patch 019_smtpd_exec
Relinking to create unique kernel... done; reboot to load the new kernel
Errata can be reviewed under /var/syspatch

apu# reboot
syncing disks... done
rebooting...
PC Engines apu2
coreboot build 20193012
BIOS version v4.11.0.2

login: kernel: protection fault trap, code=0
Stopped at  uvm_map_lookup_entry+0x40:  cmpq    %r14,0x40(%rax)
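
If it drops into ddb like this again, a couple of commands at the ddb prompt would make
the crash much easier to report with sendbug(1) (just the usual drill, nothing specific
to this trap):

ddb> trace
ddb> ps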

On 1/30/2020 7:06 PM, livio wrote:
> @KatolaZ and @remi
>
> Thank you for your inputs on iperf2 vs. iperf3.
>
> After all the tests I needed a clean setup again and reinstalled both
> OpenBSD and Windows 10.
>
> With the new notebook (Dell vs Lenovo) I have different results.
> Dell: ~ 200Mbit/s
> Lenovo: ~ 145Mbit/s
>
> iperf2 vs. iperf3 (I also ran the corresponding version on Windows):
>
> apu# iperf -s
> 
> Server listening on TCP port 5001
> TCP window size: 16.0 KByte (default)
> 
> [  4] local 192.168.20.28 port 5001 connected with 192.168.20.40 port 50052
> [ ID] Interval   Transfer Bandwidth
> [  4]  0.0-10.0 sec   241 MBytes   202 Mbits/sec
>
> apu# iperf3 -s
> ---
> Server listening on 5201
> ---
> Accepted connection from 192.168.20.40, port 50054
> [  5] local 192.168.20.28 port 5201 connected to 192.168.20.40 port 50055
> [ ID] Interval   Transfer Bitrate
> [  5]   0.00-1.00   sec  22.2 MBytes   186 Mbits/sec
> [  5]   1.00-2.00   sec  23.5 MBytes   197 Mbits/sec
> [  5]   2.00-3.00   sec  23.4 MBytes   196 Mbits/sec
> [  5]   3.00-4.00   sec  23.3 MBytes   196 Mbits/sec
> [  5]   4.00-5.00   sec  23.2 MBytes   195 Mbits/sec
> [  5]   5.00-6.00   sec  23.4 MBytes   196 Mbits/sec
> [  5]   6.00-7.00   sec  23.4 MBytes   196 Mbits/sec
> [  5]   7.00-8.00   sec  23.4 MBytes   197 Mbits/sec
> [  5]   8.00-9.00   sec  23.0 MBytes   193 Mbits/sec
> [  5]   9.00-10.00  sec  23.5 MBytes   197 Mbits/sec
> [  5]  10.00-10.05  sec  1.05 MBytes   197 Mbits/sec
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval   Transfer Bitrate
> [  5]   0.00-10.05  sec   233 MBytes   195 Mbits/sec  receiver



Re: Low throughput with 1 GigE interface

2020-01-30 Thread livio
@KatolaZ and @remi

Thank you for your inputs on iperf2 vs. iperf3.

After all the tests I needed a clean setup again and reinstalled both
OpenBSD and Windows 10.

With the new notebook (Dell vs Lenovo) I have different results.
Dell: ~ 200Mbit/s
Lenovo: ~ 145Mbit/s

iperf2 vs. iperf3 (I also ran the corresponding version on Windows):

apu# iperf -s

Server listening on TCP port 5001
TCP window size: 16.0 KByte (default)

[  4] local 192.168.20.28 port 5001 connected with 192.168.20.40 port 50052
[ ID] Interval   Transfer Bandwidth
[  4]  0.0-10.0 sec   241 MBytes   202 Mbits/sec

apu# iperf3 -s
---
Server listening on 5201
---
Accepted connection from 192.168.20.40, port 50054
[  5] local 192.168.20.28 port 5201 connected to 192.168.20.40 port 50055
[ ID] Interval   Transfer Bitrate
[  5]   0.00-1.00   sec  22.2 MBytes   186 Mbits/sec
[  5]   1.00-2.00   sec  23.5 MBytes   197 Mbits/sec
[  5]   2.00-3.00   sec  23.4 MBytes   196 Mbits/sec
[  5]   3.00-4.00   sec  23.3 MBytes   196 Mbits/sec
[  5]   4.00-5.00   sec  23.2 MBytes   195 Mbits/sec
[  5]   5.00-6.00   sec  23.4 MBytes   196 Mbits/sec
[  5]   6.00-7.00   sec  23.4 MBytes   196 Mbits/sec
[  5]   7.00-8.00   sec  23.4 MBytes   197 Mbits/sec
[  5]   8.00-9.00   sec  23.0 MBytes   193 Mbits/sec
[  5]   9.00-10.00  sec  23.5 MBytes   197 Mbits/sec
[  5]  10.00-10.05  sec  1.05 MBytes   197 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval   Transfer Bitrate
[  5]   0.00-10.05  sec   233 MBytes   195 Mbits/sec  receiver


On 1/30/2020 6:37 PM, Remi Locherer wrote:
>
> iperf vs. iperf3? 




Re: Low throughput with 1 GigE interface

2020-01-30 Thread livio
To answer your second question, I did not change any sysctls or other
settings on the OpenBSD side. The only thing I ran was pfctl -d.

My installation guide was:
https://github.com/elad/openbsd-apu2

- amd64/install66.fs
- stty com0 115200
- set tty com0
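
A minimal /etc/boot.conf matching those two console settings (so the serial console
persists across reboots) would be:

# cat /etc/boot.conf
stty com0 115200
set tty com0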

On 1/30/2020 5:39 PM, livio wrote:
> Yes, I tried yet another cable. I hope this gives some credibility:
> https://ibb.co/m4mrWt3
>
> I now tried with 3 different cables (and vendors). As you can see the
> patch cable is brand new. I am also setting up a new Windows 10 notebook
> on the right.
>
> But again, I achieve 940Mbit/s with the exact same setup and FreeBSD 12.1.
>
> On 1/30/2020 5:17 PM, Ian Darwin wrote:
>> Peter wrote:
>>  
>>> chi# iperf -c beta.internal.centroid.eu
>>> 
>>> Client connecting to beta.internal.centroid.eu, TCP port 5001
>>> TCP window size: 17.0 KByte (default)
>>> 
>>> [  3] local 192.168.177.40 port 13242 connected with 192.168.177.2 port 5001
>>> [ ID] Interval   Transfer Bandwidth
>>> [  3]  0.0-10.0 sec   536 MBytes   449 Mbits/sec
>>>
>>> ... on an APU1C4, could it be you have a slow switch or router?  Any other
>>> hardware that could slow yours down?
>>>
>>> I'm happy with this result, the APU1 is not really a powerhouse.
>> That is pretty normal. From an older Intel-cpu laptop with a bge interface,
>> to my APU2, both on a TP-Link gig switch, I get
>>
>> $ iperf -c gw-int 
>> 
>> Client connecting to gw-int, TCP port 5001
>> TCP window size: 32.5 KByte (default)
>> 
>> [  3] local 192.168.42.46 port 21653 connected with 192.168.42.254 port 5001
>> [ ID] Interval   Transfer Bandwidth
>> [  3]  0.0-10.0 sec   502 MBytes   421 Mbits/sec
>> $
>>
>> Again, that's with no tuning. Did you try a different cable?
>>



Re: Low throughput with 1 GigE interface

2020-01-30 Thread livio
Yes, I tried yet another cable. I hope this gives some credibility:
https://ibb.co/m4mrWt3

I have now tried 3 different cables (and vendors). As you can see, the
patch cable is brand new. I am also setting up a new Windows 10 notebook
on the right.

But again, I achieve 940Mbit/s with the exact same setup and FreeBSD 12.1.

On 1/30/2020 5:17 PM, Ian Darwin wrote:
> Peter wrote:
>  
>> chi# iperf -c beta.internal.centroid.eu
>> 
>> Client connecting to beta.internal.centroid.eu, TCP port 5001
>> TCP window size: 17.0 KByte (default)
>> 
>> [  3] local 192.168.177.40 port 13242 connected with 192.168.177.2 port 5001
>> [ ID] Interval   Transfer Bandwidth
>> [  3]  0.0-10.0 sec   536 MBytes   449 Mbits/sec
>>
>> ... on an APU1C4, could it be you have a slow switch or router?  Any other
>> hardware that could slow yours down?
>>
> I'm happy with this result, the APU1 is not really a powerhouse.
> That is pretty normal. From an older Intel-cpu laptop with a bge interface,
> to my APU2, both on a TP-Link gig switch, I get
>
> $ iperf -c gw-int 
> 
> Client connecting to gw-int, TCP port 5001
> TCP window size: 32.5 KByte (default)
> 
> [  3] local 192.168.42.46 port 21653 connected with 192.168.42.254 port 5001
> [ ID] Interval   Transfer Bandwidth
> [  3]  0.0-10.0 sec   502 MBytes   421 Mbits/sec
> $
>
> Again, that's with no tuning. Did you try a different cable?
>



Re: Low throughput with 1 GigE interface

2020-01-30 Thread livio
I am happy to run the tests with another cable (although the one I was
using is brand new). I still get 940Mbit/s with FreeBSD 12.1
with the exact same setup.

The only(!) difference is the physical mSATA SSD (one for OpenBSD, the
other for FreeBSD). They have identical specs, though.

Results with a different cable:

OpenBSD apu.liv.io 6.6 GENERIC.MP#4 amd64
apu# iperf3 -s
---
Server listening on 5201
---
Accepted connection from 10.10.1.240, port 64453
[  5] local 10.10.1.241 port 5201 connected to 10.10.1.240 port 64454
[ ID] Interval   Transfer Bitrate
[  5]   0.00-1.00   sec  16.8 MBytes   141 Mbits/sec
[  5]   1.00-2.00   sec  17.7 MBytes   149 Mbits/sec
[  5]   2.00-3.00   sec  17.5 MBytes   147 Mbits/sec
[  5]   3.00-4.00   sec  17.7 MBytes   148 Mbits/sec
[  5]   4.00-5.00   sec  17.3 MBytes   146 Mbits/sec

On 1/30/2020 5:04 PM, Peter J. Philipp wrote:
> On Thu, Jan 30, 2020 at 04:50:59PM +0100, livio wrote:
>> Hi Peter,
>> Thanks for your reply. I would already be quite happy with ~500Mbit/s.
>> My tests do not involve a switch, just a notebook and the APU through
>> a Cat.6a cable. I achieve 940Mbit/s with the exact same setup but
>> FreeBSD 12.1 on the APU.
>>
>> I am happy to change parameters, provide additional logs and run any
>> number of tests. I am currently out of ideas.
> I go from my APU this way: 
>
> cat5e-->switch (netgear)-->cat5e-->switch (netgear)-->cat6a-->Xeon workstation
>
> Where the path between the first switch and the Xeon is all 10 GbE but this
> shouldn't matter.  I'd bet that you have a bad cable.  It's happened to me
> before.
>
> Regards,
> -peter
>
>> $ uname -a
>> OpenBSD apu.liv.io 6.6 GENERIC.MP#4 amd64
> I have the same version on my APU and -current on the Xeon.
>
>
>> $ ifconfig em0
>> em0: flags=8843 mtu 1500
>>  lladdr 00:0d:b9:41:70:20
>>  index 1 priority 0 llprio 3
>>  media: Ethernet 1000baseT full-duplex (1000baseT
>> full-duplex,master,rxpause,txpause)
>>  status: active
>>  inet 10.10.1.241 netmask 0xff00 broadcast 10.10.1.255
>>
>> Thank you,
>> Livio
>>
>> On 1/30/2020 4:38 PM, Peter J. Philipp wrote:
>>> On Thu, Jan 30, 2020 at 03:43:41PM +0100, livio wrote:
>>>> Dear all,
>>>>
>>>> I am unable to achieve decent throughput with a 1 GigE interface
>>>> (Intel I210) on OpenBSD 6.6. When running iperf3 I get around 145Mbit/s.
>>>>
>>>> The config/setup is: APU2c4, Win10 notebook, no switch, Cat.6a cable,
>>>> MTU 1500, 1000baseT, full-duplex, pf disabled, BSD.mp, no custom Kernel
>>>> parameters/optimizations.
>>>>
>>>> With an increased MTU of 9000 (on both devices) the throughput is around
>>>> 230-250Mbit/s.
>>>>
>>>> When running the same test with a FreeBSD 12.1 on the APU I achieve
>>>> around 940Mbit/s (MTU 1500).
>>>>
>>>> The BIOS has been updated to the latest version (v4.11.0.2). The
>>>> hardware of the device is: https://pcengines.ch/apu2c0.htm
>>>>
>>>> dmesg output:
>>>> https://paste.ee/p/OeRbI
>>>>
>>>> Any input or help is highly appreciated.
>>>>
>>>> Many thanks,
>>>> Livio
>>>>
>>>> PS: I ran the same tests on an APU1c4 with Realtek RTL8111E interfaces.
>>>> The results were lower - around 95Mbit/s.
>>>> https://pcengines.ch/apu1c4.htm
>>> Hi,
>>>
>>> Without any tuning arguments I get:
>>>
>>> chi# iperf -c beta.internal.centroid.eu
>>> 
>>> Client connecting to beta.internal.centroid.eu, TCP port 5001
>>> TCP window size: 17.0 KByte (default)
>>> 
>>> [  3] local 192.168.177.40 port 13242 connected with 192.168.177.2 port 5001
>>> [ ID] Interval   Transfer Bandwidth
>>> [  3]  0.0-10.0 sec   536 MBytes   449 Mbits/sec
>>>
>>> ... on an APU1C4, could it be you have a slow switch or router?  Any other
>>> hardware that could slow yours down?
>>>
>>> I'm happy with this result, the APU1 is not really a powerhouse.
>>>
>>> Regards,
>>>
>>> -peter
>>>
>>>> PPS: Others also seem to have low throughput. None of the tuning
>>>> recommendations I found online substantially improved my results:
>>>> https://www.reddit.com/r/openbsd/comments/cg9vhq/poor_network_performance_pcengines_apu4/
>>>>




Re: Low throughput with 1 GigE interface

2020-01-30 Thread livio
Hi Peter,
Thanks for your reply. I would already be quite happy with ~500Mbit/s.
My tests do not involve a switch, just a notebook and the APU through
a Cat.6a cable. I achieve 940Mbit/s with the exact same setup but
FreeBSD 12.1 on the APU.

I am happy to change parameters, provide additional logs and run any
number of tests. I am currently out of ideas.

$ uname -a
OpenBSD apu.liv.io 6.6 GENERIC.MP#4 amd64

$ ifconfig em0
em0: flags=8843 mtu 1500
 lladdr 00:0d:b9:41:70:20
 index 1 priority 0 llprio 3
 media: Ethernet 1000baseT full-duplex (1000baseT
full-duplex,master,rxpause,txpause)
 status: active
 inet 10.10.1.241 netmask 0xff00 broadcast 10.10.1.255

Thank you,
Livio

On 1/30/2020 4:38 PM, Peter J. Philipp wrote:
> On Thu, Jan 30, 2020 at 03:43:41PM +0100, livio wrote:
>> Dear all,
>>
>> I am unable to achieve decent throughput with a 1 GigE interface
>> (Intel I210) on OpenBSD 6.6. When running iperf3 I get around 145Mbit/s.
>>
>> The config/setup is: APU2c4, Win10 notebook, no switch, Cat.6a cable,
>> MTU 1500, 1000baseT, full-duplex, pf disabled, BSD.mp, no custom Kernel
>> parameters/optimizations.
>>
>> With an increased MTU of 9000 (on both devices) the throughput is around
>> 230-250Mbit/s.
>>
>> When running the same test with a FreeBSD 12.1 on the APU I achieve
>> around 940Mbit/s (MTU 1500).
>>
>> The BIOS has been updated to the latest version (v4.11.0.2). The
>> hardware of the device is: https://pcengines.ch/apu2c0.htm
>>
>> dmesg output:
>> https://paste.ee/p/OeRbI
>>
>> Any input or help is highly appreciated.
>>
>> Many thanks,
>> Livio
>>
>> PS: I ran the same tests on an APU1c4 with Realtek RTL8111E interfaces.
>> The results were lower - around 95Mbit/s.
>> https://pcengines.ch/apu1c4.htm
> Hi,
>
> Without any tuning arguments I get:
>
> chi# iperf -c beta.internal.centroid.eu
> 
> Client connecting to beta.internal.centroid.eu, TCP port 5001
> TCP window size: 17.0 KByte (default)
> 
> [  3] local 192.168.177.40 port 13242 connected with 192.168.177.2 port 5001
> [ ID] Interval   Transfer Bandwidth
> [  3]  0.0-10.0 sec   536 MBytes   449 Mbits/sec
>
> ... on an APU1C4, could it be you have a slow switch or router?  Any other
> hardware that could slow yours down?
>
> I'm happy with this result, the APU1 is not really a powerhouse.
>
> Regards,
>
> -peter
>
>> PPS: Others also seem to have low throughput. None of the tuning
>> recommendations I found online substantially improved my results:
>> https://www.reddit.com/r/openbsd/comments/cg9vhq/poor_network_performance_pcengines_apu4/
>>



Low throughput with 1 GigE interface

2020-01-30 Thread livio
Dear all,

I am unable to achieve decent throughput with a 1 GigE interface
(Intel I210) on OpenBSD 6.6. When running iperf3 I get around 145Mbit/s.

The config/setup is: APU2c4, Win10 notebook, no switch, Cat.6a cable,
MTU 1500, 1000baseT, full-duplex, pf disabled, BSD.mp, no custom Kernel
parameters/optimizations.

With an increased MTU of 9000 (on both devices) the throughput is around
230-250Mbit/s.
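
On OpenBSD the larger MTU is a one-line change per interface (a sketch; em0 is the
interface used for the tests, and an "mtu 9000" line in /etc/hostname.em0 makes it
persistent):

# ifconfig em0 mtu 9000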

When running the same test with FreeBSD 12.1 on the APU I achieve
around 940Mbit/s (MTU 1500).

The BIOS has been updated to the latest version (v4.11.0.2). The
hardware of the device is: https://pcengines.ch/apu2c0.htm

dmesg output:
https://paste.ee/p/OeRbI

Any input or help is highly appreciated.

Many thanks,
Livio

PS: I ran the same tests on an APU1c4 with Realtek RTL8111E interfaces.
The results were lower - around 95Mbit/s.
https://pcengines.ch/apu1c4.htm

PPS: Others also seem to have low throughput. None of the tuning
recommendations I found online substantially improved my results:
https://www.reddit.com/r/openbsd/comments/cg9vhq/poor_network_performance_pcengines_apu4/