On 18.11.2021 03:20, Brian Hutchinson wrote:
> Yet another update: I was able to get it working .. but it feels like a
> hack, so comments are welcome ... see below:
> 
> On Wed, Nov 17, 2021 at 12:26 AM Brian Hutchinson <b.hutch...@gmail.com>
> wrote:
> 
>> Update below
>>
>> On Tue, Nov 16, 2021 at 2:27 PM Brian Hutchinson <b.hutch...@gmail.com>
>> wrote:
>>
>>> Hi Mikulėnas,
>>>
>>> On Tue, Nov 16, 2021, 3:12 AM Mantas Mikulėnas <graw...@gmail.com> wrote:
>>>
>>>> Most of this looks like it could be done with systemd-networkd to create
>>>> a bond .netdev, with a small oneshot service for i2c. (What's the exact
>>>> criteria for when it should be run? Does it depend on bond0 being there,
>>>> does it need to be last, etc?)
>>>>
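For reference, a minimal sketch of that "small oneshot service for i2c",
reusing the i2cset invocation from the script further down; the unit name,
i2cset path, and the "run last" ordering are only illustrative and untested:

# /etc/systemd/system/ksz-switch-enable.service  (illustrative name)
[Unit]
Description=Enable KSZ9567 Ethernet switch via i2c
# run after networkd has set up the links, per "it can be last" below
After=systemd-networkd.service

[Service]
Type=oneshot
RemainAfterExit=yes
# same command as in the manual script; adjust the path if i2cset lives elsewhere
ExecStart=/usr/sbin/i2cset -f -y 0 0x5f 0x03 0x00 0x01 i

[Install]
WantedBy=multi-user.target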
>>>
>>> It can be last in the startup chain, I guess; I don't know what other
>>> systemd units might need the network to be up before the last unit
>>> file runs.
>>>
>>> I start linuxptp too, so I would have the unit file that starts ptp4l
>>> run after the bond is created, etc.
>>>
>>> Same thing for the i2c command to enable the switch.
>>>
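One way to express that ordering, sketched under the assumption that the bond
interface is bond0 (bond1 in the later config below) and the linuxptp unit is
called ptp4l.service, is a drop-in that ties ptp4l to the bond's device unit:

# /etc/systemd/system/ptp4l.service.d/10-after-bond.conf  (illustrative path)
[Unit]
# systemd creates sys-subsystem-net-devices-<iface>.device once the link exists
BindsTo=sys-subsystem-net-devices-bond0.device
After=sys-subsystem-net-devices-bond0.device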
>>> Regards,
>>>
>>> Brian
>>>
>>>
>>>> On Tue, Nov 16, 2021, 02:58 Brian Hutchinson <b.hutch...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I'm on an IMX8 platform and have a Microchip KSZ9567 Ethernet switch.  I
>>>>> can use ip commands to manually bring the lan1 and lan2 interfaces up and
>>>>> then create a redundant/failover bond ... but I'm having difficulty
>>>>> figuring out how to do this the "systemd" way.
>>>>>
>>>>> My first attempt was to just have systemd run a script of all the
>>>>> commands I do manually, but during system startup there appear to be race
>>>>> conditions, so I have to set my service type to "idle", and sometimes even
>>>>> that doesn't work.  So I want to exploit any systemd support for DSA and
>>>>> bonding.
>>>>>
>>>>> Here is a script of my manual steps, which is what I want systemd to
>>>>> ultimately do:
>>>>>
>>>>> #!/bin/bash
>>>>>
>>>>> # Create a redundant bond between ksz9567 DSA lan1 and lan2 interfaces
>>>>>
>>>>> # Load bonding kernel module
>>>>> modprobe bonding
>>>>>
>>>>> # Bring up CPU interface (cpu to switch port 7 - the RGMII link)
>>>>> ip link set eth0 up
>>>>>
>>>>> # Create a bond
>>>>> echo +bond0 > /sys/class/net/bonding_masters
>>>>>
>>>>> # Set mode to active-backup (redundancy failover)
>>>>> echo active-backup > /sys/class/net/bond0/bonding/mode
>>>>>
>>>>> # Set the MII link monitoring interval (in ms) used to detect link failures
>>>>> echo 1000 > /sys/class/net/bond0/bonding/miimon
>>>>>
>>>>> # Add slaves to bond
>>>>>
>>>>> echo +lan1 > /sys/class/net/bond0/bonding/slaves
>>>>> echo +lan2 > /sys/class/net/bond0/bonding/slaves
>>>>>
>>>>> # Set IP and netmask of the bond
>>>>> ip addr add 192.168.0.4/24 dev bond0
>>>>>
>>>>> # And bring bond up.  Pings and network connectivity should work now
>>>>> ip link set bond0 up
>>>>>
>>>>> # For a board that doesn't have the Ethernet switch hardware strapped to
>>>>> # enable at boot .. enable it now
>>>>> i2cset -f -y 0 0x5f 0x03 0x00 0x01 i
>>>>>
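As an aside, the sysfs writes above can also be expressed as plain iproute2
commands, which may be a little easier to reason about; a rough, untested
equivalent of the bond-creation part:

# create the bond with the same mode and monitoring interval
ip link add bond0 type bond mode active-backup miimon 1000
# enslave the two switch ports
ip link set lan1 master bond0
ip link set lan2 master bond0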
>>>>> Thanks for any information, pointers etc.
>>>>>
>>>>> Regards,
>>>>>
>>>>> Brian
>>>>>
>>>>
>> So I'm not sure I'm doing this right.  eth0 needs to be up before lan1 and
>> lan2 can be added to the bond.  I'm not quite sure how to do that with
>> systemd, but I made the following files and see some progress; ping doesn't
>> work though, so it appears I have no network connectivity:
>>
>> root@imx8mmevk:/etc/systemd/network# cat 10-bond1.netdev
>> [NetDev]
>> Name=bond1
>> Kind=bond
>>
>> [Bond]
>> Mode=active-backup
>> PrimaryReselectPolicy=failure
>> MIIMonitorSec=2s
>>
>> root@imx8mmevk:/etc/systemd/network# cat 10-bond1.network
>> [Match]
>> Name=bond1
>>
>> [Network]
>> Address=192.168.0.4/24
>>
>> root@imx8mmevk:/etc/systemd/network# cat 20-lan1.network
>> [Match]
>> Name=lan1
>>
>> [Network]
>> Bond=bond1
>> PrimarySlave=true
>>
>> root@imx8mmevk:/etc/systemd/network# cat 30-lan2.network
>>
>> [Match]
>> Name=lan2
>>
>> [Network]
>> Bond=bond1
>>
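A quick way to check how networkd has classified each link once files like
these are in place is networkctl, e.g. (shown purely as a debugging aid):

networkctl list
networkctl status lan1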
>> ip link list
>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode
>> DEFAULT group default qlen 1000
>>    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1506 qdisc mq state UP mode
>> DEFAULT group default qlen 1000
>>    link/ether f0:1f:af:6b:b2:17 brd ff:ff:ff:ff:ff:ff
>> 3: lan1@eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode
>> DEFAULT group default qlen 1000
>>    link/ether f0:1f:af:6b:b2:17 brd ff:ff:ff:ff:ff:ff
>> 4: lan2@eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode
>> DEFAULT group default qlen 1000
>>    link/ether f0:1f:af:6b:b2:17 brd ff:ff:ff:ff:ff:ff
>> 5: bond1: <NO-CARRIER,BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc
>> noqueue state DOWN mode DEFAULT group default qlen 1000
>>    link/ether be:87:0a:9b:13:03 brd ff:ff:ff:ff:ff:ff
>>
>> cat /proc/net/bonding/bond1
>> Ethernet Channel Bonding Driver: v5.10.69
>>
>> Bonding Mode: fault-tolerance (active-backup)
>> Primary Slave: None
>> Currently Active Slave: None
>> MII Status: down
>> MII Polling Interval (ms): 2000
>> Up Delay (ms): 0
>> Down Delay (ms): 0
>> Peer Notification Delay (ms): 0
>>
>> At this point there should be info on lan1 and lan2 status, but I don't see
>> it.
>>
>> ... so of course I can't ping.
>>
>> Next I did systemctl restart systemd-networkd and saw the following:
>>
>> systemctl restart systemd-networkd
>> root@imx8mmevk:~# [   33.816313] device eth0 entered promiscuous mode
>> [   33.821026] audit: type=1700 audit(1636550966.339:2): dev=eth0 prom=256
>> old_prom=0 auid=4294967295 uid=995 gid=994 ses=4294967295
>> [   33.867164] ksz9477-switch 0-005f lan2: configuring for phy/gmii link
>> mode
>> [   33.875066] bond1: (slave lan2): Enslaving as a backup interface with a
>> down link
>> [   33.919055] ksz9477-switch 0-005f lan1: configuring for phy/gmii link
>> mode
>> [   33.926683] bond1: (slave lan1): Enslaving as a backup interface with a
>> down link
>> [   38.066148] ksz9477-switch 0-005f lan1: Link is Up - 1Gbps/Full - flow
>> control rx/tx
>> [   39.472022] bond1: (slave lan1): link status definitely up, 1000 Mbps
>> full duplex
>> [   39.479537] bond1: (slave lan1): making interface the new active one
>> [   39.486154] bond1: active interface up!
>> [   39.490071] IPv6: ADDRCONF(NETDEV_CHANGE): bond1: link becomes ready
>>
>> At which point I can ping.  It feels like there might still be some kind of
>> race condition, as things won't work unless I manually issue a systemctl
>> restart systemd-networkd command after logging in.
>>
>> In kernel dmesg logs I see:
>> [    4.165940] bond1: (slave lan2): Opening slave failed
>> [    4.196834] bond1: (slave lan1): Opening slave failed
>> [    4.315588] Generic PHY fixed-0:00: attached PHY driver [Generic PHY]
>> (mii_bus:phy_addr=fixed-0:00, irq=POLL)
>> [    4.326181] fec 30be0000.ethernet eth0: Link is Up - 1Gbps/Full - flow
>> control off
>> [    4.561000] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
>>
>> ... and the systemd-networkd service status shows:
>>
>>     Loaded: loaded (/lib/systemd/system/systemd-networkd.service; enabled;
>> vendor preset: enabled)
>>     Active: active (running) since Sun 2020-09-20 10:43:59 UTC; 1 years 1 months ago
>> TriggeredBy: * systemd-networkd.socket
>>       Docs: man:systemd-networkd.service(8)
>>   Main PID: 251 (systemd-network)
>>     Status: "Processing requests..."
>>      Tasks: 1 (limit: 1574)
>>     Memory: 1.5M
>>     CGroup: /system.slice/systemd-networkd.service
>>             `-251 /lib/systemd/systemd-networkd
>>
>> Nov 10 13:28:56 imx8mmevk systemd-networkd[251]: lan2: Could not join
>> netdev: Network is down
>> Nov 10 13:28:56 imx8mmevk systemd-networkd[251]: lan2: Failed
>> Nov 10 13:28:56 imx8mmevk systemd-networkd[251]: lan1: Could not join
>> netdev: Network is down
>> Nov 10 13:28:56 imx8mmevk systemd-networkd[251]: lan1: Failed
>> Nov 10 13:28:56 imx8mmevk systemd-networkd[251]: eth0: IPv6 successfully
>> enabled
>> Nov 10 13:28:56 imx8mmevk systemd-networkd[251]: eth0: Link UP
>> Nov 10 13:28:57 imx8mmevk systemd-networkd[251]: eth0: Gained carrier
>> Nov 10 13:28:57 imx8mmevk systemd-networkd[251]: bond1: IPv6 successfully
>> enabled
>> Nov 10 13:28:57 imx8mmevk systemd-networkd[251]: bond1: Link UP
>> Nov 10 13:28:58 imx8mmevk systemd-networkd[251]: eth0: Gained IPv6LL
>>
>> ... so it looks like the bond setup is trying to happen before eth0 (my DSA
>> host/CPU interface to the switch) is up.  How can I bring eth0 up with
>> systemd?  eth0 just needs to be up ... no IP etc.; bond1 gets the IP.
>>
>> For now I'm issuing the i2c command to enable my switch from U-Boot, but I
>> still need to incorporate that into systemd somehow.
>>
>> Any ideas as to what I'm doing wrong?  I think at a minimum I need to
>> bring eth0 up before the bonding happens, but I'm not quite sure what that
>> would look like using systemd.
>>
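One thing that may be worth trying for the "eth0 just needs to be up" part is
a .network file that matches eth0 but assigns nothing; networkd brings a
matched link up when it configures it.  A rough sketch (file name illustrative;
LinkLocalAddressing=no is optional and only avoids an IPv6 link-local address
on the DSA master):

# /etc/systemd/network/05-eth0.network
[Match]
Name=eth0

[Network]
LinkLocalAddressing=no

On newer networkd (v248+) an explicit ActivationPolicy=up in a [Link] section
can be added as well, though the update below suggests that alone didn't do
the trick here.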
>> Regards,
>>
>> Brian
>>
>>
> I tried and tried to get eth0 to come up before the bond was brought up.  I
> had everything named in lexical order, but that didn't appear to matter.  I
> added an eth0.network file and in it specified ActivationPolicy=always-up and
> other things, but I could not get eth0 to come up.
> 
> It was obvious the bond was being established before eth0 was up, and since
> this is using DSA that just won't work.  Dmesg logs would show slaves being
> added before eth0 was up, as in the copy/paste from the previous email.
> 
> So I added a service to bring eth0 up:
> 
> cat eth0-up.service
> [Unit]
> Description=Bring eth0 up before bonding
> Before=network-pre.target
> Wants=network-pre.target
> 
> [Service]
> Type=oneshot
> ExecStart=/usr/local/bin/eth0-up.sh
> RemainAfterExit=yes
> 
> [Install]
> WantedBy=multi-user.target
> 
> cat /usr/local/bin/eth0-up.sh
> #!/bin/bash
> ip link set eth0 up
> 
> ... I feel like this is a hack; systemd can probably do this on its own, but
> either I can't figure out how or there is a problem in the code.
> 
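If the remaining worry is that script racing the ethernet driver probe, one
small hardening of the same hack (purely illustrative, untested) is to wait
for eth0 to appear before flipping it up:

#!/bin/bash
# wait up to ~10 seconds for the DSA master to show up, then bring it up
for i in $(seq 1 100); do
    [ -d /sys/class/net/eth0 ] && break
    sleep 0.1
done
ip link set eth0 up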

How are lan1 and lan2 related to eth0?  Your script never creates or sets them
up.


> Again,  advice and info welcome.
> 
> Regards,
> 
> Brian
> 
