GitHub user deajan added a comment to the discussion: Bridge interface not recognized for newly added host if they don't have a physical interface attached
Hmmm... interesting.

```
2025-04-15 15:59:16,309 DEBUG [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-5:[]) (logid:04967b5d) failing to get physical interface from bridge br_bgp0, did not find an eth*, bond*, team*, vlan*, em*, p*p*, ens*, eno*, enp*, or enx* in /sys/devices/virtual/net/br_bgp0/brif
```

It looks like using a bridge in CloudStack is indeed limited to physical interfaces. I've done a quick (and admittedly naive) check:

```
nmcli c add type dummy ifname ethdummy0 con-name ethdummy0
nmcli c mod ethdummy0 master br_bgp0
nmcli c up ethdummy0
```

Now the agent doesn't complain anymore, since there is an `eth*` entry in `/sys/devices/virtual/net/br_bgp0/brif`.

So the bridge checks really do seem a bit too restrictive. I wonder if the test should be relaxed, e.g. requiring at least one non-vnet* interface in the bridge, so people would be allowed to use whatever they want behind the bridge (VXLANs, gretap, or anything else). This would also make CloudStack future-proof with respect to Ethernet driver names; perhaps one day we'll have igb* or some other driver name that the current patterns don't match. (A rough sketch of what I mean follows at the end of this message.)

I can do Python and bash PRs, but I really am not Java fluent. Is there any way this could be discussed for a future release?

@weizhouapache Big thanks for the hints.

GitHub link: https://github.com/apache/cloudstack/discussions/10804#discussioncomment-13004291
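To make the idea concrete, here is a rough Python sketch of the kind of relaxed check I have in mind. It is purely illustrative: the function name is made up, and the real change would have to go into the Java agent (LibvirtComputingResource), which I am not fluent in.

```
#!/usr/bin/env python3
# Illustrative sketch only: pick the first bridge member that is not a
# guest tap device (vnet*), instead of requiring an eth*/bond*/... name.
# The function name is hypothetical and not part of CloudStack.
import os

def guess_bridge_interface(bridge_name):
    brif_dir = "/sys/devices/virtual/net/%s/brif" % bridge_name
    if not os.path.isdir(brif_dir):
        return None  # not a bridge, or the bridge does not exist
    for iface in sorted(os.listdir(brif_dir)):
        # Skip VM tap interfaces; accept anything else (dummy, vxlan,
        # gretap, bond, or whatever future driver names come along).
        if not iface.startswith("vnet"):
            return iface
    return None  # bridge is empty or only contains guest taps

if __name__ == "__main__":
    print(guess_bridge_interface("br_bgp0"))
```

The point of the denylist approach (skip only vnet*) rather than the current allowlist of name prefixes is that it keeps working for any backing device type without the pattern list ever needing an update.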