Hi, I'm still having trouble with that DVR setup. As per https://bugs.launchpad.net/charms/+source/neutron-gateway/+bug/1513677 I can see that neutron-openvswitch installs the l3-agent but then removes it.
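For what it's worth, here is the relation set I currently have, written out in juju bundle notation (a sketch only; the endpoint names are the charm defaults, not something I've verified against the charm metadata):

```yaml
# Current relations for the subordinate neutron-openvswitch charm
# (bundle-style notation; endpoint names assumed, not verified).
relations:
  - [ neutron-openvswitch, nova-compute ]
  - [ neutron-openvswitch, neutron-api ]
  - [ neutron-openvswitch, rabbitmq-server ]
```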
My neutron-openvswitch charm is only related to: nova-compute, neutron-api, rabbitmq-server. I have tried removing each of those relations, but it either destroys the unit (missing nova-compute), complains about a missing relation (rabbitmq-server), or doesn't install the l3-agent anyway (neutron-api). Any hint as to how this is supposed to be set up would be appreciated.

kind regards
Pshem

On Sat, 7 Nov 2015 at 10:24 Pshem Kowalczyk <[email protected]> wrote:
> Hi,
>
> Yes, this helps a lot.
>
> My google-fu is failing me here. Would you be able to point me to any
> documentation that shows how those various charms (nova-api,
> nova-cloud-controller, neutron-api, neutron-gateway, neutron-openvswitch)
> interact with each other in various network scenarios, and what sort of
> relations are supposed to be built for the setup to work (in various
> scenarios)? Does the neutron-gateway node need the openvswitch charm as
> well (I suspect not, but could you please confirm that)?
> Is it possible to use an interface other than eth0 for OVS tunnels?
>
> For the last few days I've been struggling to understand some
> charm-specific aspects (like what's br-data for?) and have started reading
> the source code of those charms, but that's a slow way of getting the
> information.
>
> kind regards
> Pshem
>
> On Fri, 6 Nov 2015 at 22:34 James Page <[email protected]> wrote:
>> Hi Pshem
>>
>> You need to make use of the neutron-openvswitch charm - this is a
>> subordinate charm that's deployed with nova-compute, and manages the
>> neutron configuration and agents on compute nodes.
>>
>> In the same way that you provide ext-port to neutron-gateway, you'll
>> need to do the same with neutron-openvswitch when using DVR mode - this
>> port is used for north/south traffic for instances on each compute node
>> where they have floating IPs; instances which don't have floating IPs
>> will still go via the neutron gateway.
>>
>> Hope that helps!
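If I understand the advice above correctly, DVR mode needs an ext-port option on neutron-openvswitch in addition to the enable-dvr flag on neutron-api. A config sketch (eth1 is an assumption here; it should be whichever NIC faces the external network on the compute nodes):

```yaml
# Sketch: with DVR, each compute node needs its own external port,
# mirroring the ext-port given to neutron-gateway.
neutron-api:
  enable-dvr: true
neutron-openvswitch:
  ext-port: eth1
```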
>> On Thu, Nov 5, 2015 at 12:29 AM, Pshem Kowalczyk <[email protected]> wrote:
>>> Hi,
>>>
>>> I'm trying to figure out whether it's possible to set up distributed
>>> routing (as per
>>> http://docs.openstack.org/networking-guide/scenario_dvr_ovs.html).
>>>
>>> Using a fairly basic config:
>>>
>>> nova-cloud-controller:
>>>   openstack-origin: cloud:trusty-liberty
>>>   network-manager: Neutron
>>>   neutron-external-network: "ext-net"
>>>   console-access-protocol: vnc
>>>
>>> nova-compute:
>>>   openstack-origin: cloud:trusty-liberty
>>>
>>> neutron-api:
>>>   openstack-origin: cloud:trusty-liberty
>>>   neutron-external-network: "ext-net"
>>>   neutron-security-groups: true
>>>   enable-dvr: true
>>>   overlay-network-type: vxlan
>>>
>>> neutron-gateway:
>>>   openstack-origin: cloud:trusty-liberty
>>>   ext-port: eth1
>>>
>>> and deploying nova-cloud-controller and neutron-api into lxc containers
>>> on node 0, neutron-gateway onto native node 0, and nova-compute onto
>>> nodes 1 and 2.
>>>
>>> After I did that I ended up with the 'default' configuration for
>>> neutron on nodes 1 and 2.
>>> I was told that neutron-gateway and nova-compute cannot be co-located,
>>> so I wonder which charm should claim ownership of the neutron
>>> configuration in that scenario?
>>>
>>> kind regards
>>> Pshem
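For reference, the placement described in that original message, expressed in bundle notation (a sketch; the machine numbering is assumed to map to nodes 0-2):

```yaml
# Placement sketch matching the deployment described above
# (machine IDs assumed; not taken from an actual bundle file).
nova-cloud-controller:
  to: [ "lxc:0" ]
neutron-api:
  to: [ "lxc:0" ]
neutron-gateway:
  to: [ "0" ]
nova-compute:
  num_units: 2
  to: [ "1", "2" ]
```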
--
Juju mailing list
[email protected]
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
