Hi there,

I have a cluster of 5 hosts running OpenNebula 4.8 and just recently configured Open vSwitch on all of these nodes. Networking is working just fine. This also holds true for VLAN isolation, but only as long as the VMs belonging to the isolated Virtual Network reside on the same physical host. When I move these VMs to different hosts, they can't communicate with each other anymore. VMs in non-isolated networks can communicate everywhere without problems.

Is that intentional? I chose Open vSwitch because the OpenNebula docs say it requires no support from the switch hardware (more specifically, they say 802.1Q would require such support). A colleague suggested it might have to do with the switch not forwarding the tagged packets coming from Open vSwitch. Can that be the cause? Does OVS even tag the packets?

Here's my environment:

- OpenNebula 4.8
- Open vSwitch 2.0.1
- Cisco switch
- Host OS: Ubuntu Server 14.04 LTS (latest patches)

Output from ovs-vsctl show:

    Bridge "br0"
        Port "vnet3"
            Interface "vnet3"
        Port "bond0"
            Interface "bond0"
        Port "vnet1"
            Interface "vnet1"
        Port "vnet2"
            Interface "vnet2"
        Port "vnet5"
            Interface "vnet5"
        Port "br0"
            Interface "br0"
                type: internal
        Port "vnet0"
            Interface "vnet0"
        Port "vnet4"
            Interface "vnet4"
    ovs_version: "2.0.1"

Here br0 is my OVS bridge interface, which has the external real IP address configured, and bond0 is a link-aggregated dual Gbit interface that serves as a port on br0.

I would greatly appreciate some suggestions or ideas on this, since I am a bit lost.

Cheers,
Christian

-----------------------------------------------
Christian Hüning, BSc.
Fakultät Technik und Informatik, Department Informatik
Berliner Tor 7
20099 Hamburg
Web: http://www.mars-group.org
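P.S. In case it helps with diagnosing, here is a sketch of how one could verify whether the VM ports actually have VLAN tags set and whether 802.1Q-tagged frames leave the host via the uplink (the port names vnet0 and bond0 are from my setup above; run while pinging between two isolated VMs on different hosts):

```shell
# 1. Show the VLAN tag assigned to a VM's port on the OVS bridge.
#    An isolated port should show something like: tag : 42
sudo ovs-vsctl list port vnet0 | grep -w tag

# 2. Capture on the physical uplink and print link-level headers.
#    If OVS is tagging, cross-host traffic appears here as "vlan <id>" frames;
#    if the capture stays empty while pinging, the tags never reach the wire
#    or the frames are dropped before bond0.
sudo tcpdump -e -nn -i bond0 vlan
```

If tagged frames do show up on bond0 but never arrive at the other host, that would point at the physical switch not trunking those VLAN IDs between the host-facing ports.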
_______________________________________________
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org