Hi Kevin,

The current method outlined in [1] is to manually assign networks to DHCP agents. I need to be able to kill the node running the DHCP agent and start it up on another node without manual intervention. Someone else pointed me to the dhcp_agents_per_network option, which I'm looking into now.
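For context, dhcp_agents_per_network is a neutron-server option set in neutron.conf; with a value greater than one, the scheduler assigns each network to that many DHCP agents, so a surviving agent keeps serving DHCP when one node dies. A minimal sketch, assuming the Icehouse-era file layout (the value 2 is only an example):

```ini
# /etc/neutron/neutron.conf (on the node running neutron-server)
[DEFAULT]
# Schedule each network onto two DHCP agents so that losing the node
# hosting one agent does not leave the network without DHCP service.
dhcp_agents_per_network = 2
```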
[1] http://docs.openstack.org/trunk/config-reference/content/multi_agent_demo_configuration.html

On Mon, Oct 20, 2014 at 8:17 PM, Kevin Benton <[email protected]> wrote:

> The current suggested way for DHCP agent fault tolerance is multiple
> agents per network. Is there a reason you don't want to use that option?
>
> On Oct 20, 2014 5:13 PM, "Noel Burton-Krahn" <[email protected]> wrote:
>
>> Thanks, Robert.
>>
>> So, ML2 needs the host attribute to match to bind the port. My other
>> requirement is that the DHCP agent must be able to migrate to a new host
>> on failover. The issue there is that if the DHCP service starts on a new
>> host with a new host name, it will not take over the networks that were
>> served by the old host name. I'm looking for a way to start the DHCP
>> agent on a new host using the old host's config.
>>
>> --
>> Noel
>>
>> On Mon, Oct 20, 2014 at 11:10 AM, Robert Kukura <[email protected]> wrote:
>>
>>> Hi Noel,
>>>
>>> The ML2 plugin uses the binding:host_id attribute of the port to
>>> control port binding. For compute ports, nova sets binding:host_id when
>>> creating/updating the neutron port, and ML2's openvswitch mechanism
>>> driver will look in agents_db to make sure the openvswitch L2 agent is
>>> running on that host, and that it has a bridge mapping for any needed
>>> physical network or has the appropriate tunnel type enabled. The
>>> binding:host_id attribute also gets set on DHCP, L3, and other agents'
>>> ports, and must match the host of the openvswitch-agent on that node or
>>> ML2 will not be able to bind the port. I suspect your configuration may
>>> be resulting in these not matching, and the DHCP port's binding:vif_type
>>> attribute being 'binding_failed'.
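As a sketch of the agent-side configuration Bob describes, this is roughly what the openvswitch agent's config looks like; the file path, section names, and bridge/physnet names below are examples and vary by deployment:

```ini
# e.g. /etc/neutron/plugins/ml2/ml2_conf.ini on the compute/network node
[ovs]
# ML2's openvswitch mechanism driver checks (via agents_db) that the
# L2 agent on the port's binding:host_id has a bridge mapping for any
# needed physical network...
bridge_mappings = physnet1:br-eth1

[agent]
# ...or has an appropriate tunnel type enabled.
tunnel_types = gre
```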
>>> I'd suggest running "neutron port-show" as admin on the DHCP port to
>>> see what the values of binding:vif_type and binding:host_id are, and
>>> running "neutron agent-list" as admin to make sure there is an L2 agent
>>> on that node, and maybe "neutron agent-show" as admin to get that
>>> agent's config details.
>>>
>>> -Bob
>>>
>>> On 10/20/14 1:28 PM, Noel Burton-Krahn wrote:
>>>
>>> I'm running OpenStack Icehouse with Neutron ML2/OVS. I've configured
>>> the ml2-ovs-plugin on all nodes with host = the IP of the host itself.
>>> However, my DHCP agent may float from host to host for failover, so I
>>> configured it with host="floating". That doesn't work. In this case,
>>> the ml2-ovs-plugin creates a namespace and a tap interface for the DHCP
>>> agent, but OVS doesn't route any traffic to the DHCP agent. It *does*
>>> work if the DHCP agent's host is the same as the OVS plugin's host, but
>>> if my DHCP agent migrates to another host, it loses its configuration
>>> since it now has a different host name.
>>>
>>> So my question is: what does host mean for the ML2 DHCP agent, and how
>>> can I get it to work if the DHCP agent's host != host for the OVS
>>> plugin?
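Bob's suggested checks above can be sketched as a CLI session (run with admin credentials; the IDs in angle brackets are placeholders, not real values):

```
# Inspect the DHCP port's binding; a binding:vif_type of
# 'binding_failed' means ML2 could not bind the port.
neutron port-show <dhcp-port-id>

# Confirm an openvswitch L2 agent is registered and alive on that host.
neutron agent-list

# Inspect that agent's reported configuration (bridge mappings, tunnels).
neutron agent-show <ovs-agent-id>
```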
>>> Case 1 (fails): running with DHCP agent's host = "floating", OVS
>>> plugin's host = IP-of-server:
>>> - DHCP agent is running in a netns created by the ovs-plugin
>>> - DHCP agent never receives network traffic
>>>
>>> Case 2 (ok): running with DHCP agent's host = OVS plugin's host =
>>> IP-of-server:
>>> - DHCP agent is running in a netns created by the ovs-plugin
>>>   (different tap name than case 1)
>>> - DHCP agent works
>>>
>>> --
>>> Noel
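Case 2 above amounts to both agents reporting the same host. One way to make that explicit is to pin the host option (a [DEFAULT] option shared by the neutron agents) to the same value in both agents' config files on a node; the paths and the value below are examples, offered as a sketch rather than a tested configuration:

```ini
# /etc/neutron/dhcp_agent.ini
[DEFAULT]
host = 10.1.0.5

# --- and in the openvswitch agent's config on the same node ---
# (e.g. ovs_neutron_plugin.ini in Icehouse)
# [DEFAULT]
# host = 10.1.0.5
```

Since host = "floating" matches no registered L2 agent, ML2 has nothing to bind against, which would explain why case 1's DHCP agent never sees traffic.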
_______________________________________________
OpenStack-dev mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
