On my net/compute node I have two interfaces: one is for the management network, and the other is for VM internet access. The latter is already added to br-ex, per a normal setup. I don't have this on the dedicated compute node because it shouldn't have direct access to the internet. And I can't run ip netns exec qrouter-4cff1882-1962-4084-9b48-cb1bcd048a4e ping $IP on the compute node, because that namespace exists only on the network node.
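One quick way to confirm that on each host is a check along these lines (a minimal sketch; the qrouter UUID is the one from this thread, and the helper name is made up):

```shell
# check_l3ns: report whether a given router namespace exists on this host.
# Hypothetical helper; run it on both the network node and the compute node.
check_l3ns() {
    if ip netns list 2>/dev/null | grep -q "$1"; then
        echo "present"
    else
        echo "absent"
    fi
}

# On the network node this should print "present"; on the dedicated
# compute node it prints "absent", since no L3 agent runs on it.
check_l3ns qrouter-4cff1882-1962-4084-9b48-cb1bcd048a4e
```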
I went back and restarted the dhcp service just as a precaution, and found it hadn't been running correctly. Not sure if that affected anything. I also ran ifconfig on the compute node after launching a VM, and it showed the qbr, qvb, and qvo interfaces, as well as the tap interface.

On Thu, Sep 26, 2013 at 10:26 AM, Cristian Falcas <[email protected]> wrote:
> On br-ex you need to add an interface that has no IP but is up:
> ovs-vsctl add-port br-ex eth1
> ifconfig eth1 up
>
> The instance is not connected to the router namespace.
>
> Check in /var/lib/nova/instances/$instance_id/libvirt.xml for source
> bridge and target dev.
>
> Try: ip netns exec qrouter-4cff1882-1962-4084-9b48-cb1bcd048a4e ping $IP
>
> Also, do the same from the dhcp namespace.
>
> On Thu, Sep 26, 2013 at 5:17 PM, Brandon Adams
> <[email protected]> wrote:
> > Cristian,
> > Thanks for responding so quickly. The instances should be attached to the
> > internal network interface, which I can find on the network/compute node.
> > It's under the namespace of the router for the project:
> > ip netns exec qrouter-4cff1882-1962-4084-9b48-cb1bcd048a4e ifconfig
> > shows two interfaces, one for the internal network and one for the
> > external network.
> >
> > The problem, I think, is that this router isn't reachable from my extra
> > compute node. When I create the router, its namespace does not appear
> > on the extra node.
> >
> > And I am using dhcp; the quantum-dhcp-agent is installed on the
> > network/compute node.
> >
> > Brandon
> >
> > On Thu, Sep 26, 2013 at 9:57 AM, Cristian Falcas
> > <[email protected]> wrote:
> >> Layer 3 is created only on the network node. On compute nodes you have
> >> layer 2 only (openvswitch).
> >>
> >> The GRE tunnels should take care of everything magically :).
> >>
> >> Where are the instances attached (that should be tap$id and source
> >> qbr$id)? Do you use dhcp?
> >>
> >> On Thu, Sep 26, 2013 at 4:40 PM, Brandon Adams
> >> <[email protected]> wrote:
> >> > Hi all,
> >> >
> >> > I'm trying to add a second compute node to my dev cluster; I've
> >> > already got one controller node and one network/compute node running
> >> > perfectly. I've installed all of the necessary packages and
> >> > everything seems to be running smoothly. However, when I create a
> >> > private network and subnet, they don't seem to be applied to the
> >> > extra node. That is, I can see the extra interfaces when I run
> >> > ovs-vsctl show on the network/compute node, but not on the dedicated
> >> > compute node. There are GRE tunnels which I assume connect the two
> >> > nodes across the management network, and instances boot up on the
> >> > new node. They just can't find the internal network and thus can't
> >> > be reached at all. I'm running Grizzly on Ubuntu, using Quantum
> >> > OpenVSwitch with GRE tunnels. I've already fixed an error regarding
> >> > brcompatd running on both nodes, so I'm wondering what my next step
> >> > is. Thanks.
> >> >
> >> > Brandon
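For what it's worth, the checks Cristian suggested can be scripted roughly like this (a sketch only; the path is the Grizzly default, the helper names are made up, and ovs-vsctl output format varies between versions):

```shell
# show_vif: print the bridge and tap device an instance is plugged into,
# taken from its libvirt domain XML (Grizzly default path assumed).
show_vif() {
    xml="/var/lib/nova/instances/$1/libvirt.xml"
    if [ -f "$xml" ]; then
        grep -E 'source bridge|target dev' "$xml"
    else
        echo "no libvirt.xml found for instance $1"
    fi
}

# check_gre: list GRE ports on the local OVS bridges; on a working setup
# the compute node's br-tun should show gre-* ports to its tunnel peers.
check_gre() {
    if command -v ovs-vsctl >/dev/null 2>&1; then
        ovs-vsctl show | grep -i gre || echo "no GRE ports configured"
    else
        echo "ovs-vsctl not available on this host"
    fi
}

show_vif "$INSTANCE_ID"   # set INSTANCE_ID to the instance's UUID first
check_gre
```

If check_gre shows no GRE ports on the new compute node, the quantum openvswitch agent there is a good place to look next.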
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : [email protected]
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
