Adding quantum-core, as this has general relevance.

On Fri, Dec 7, 2012 at 10:07 AM, Robert Kukura <[email protected]> wrote:
> Dan and Salvatore,
>
> I've been looking at https://bugs.launchpad.net/quantum/+bug/1056437 and
> am not seeing any l3_agent or plugin/agent code changes needed, except
> maybe changing l3_agent's default for external_network_bridge from
> "br-ex" to empty. I've done some testing, but need to do a bit more to
> be sure. Am I missing something?

I think there are some code changes needed, though I suspect they are
fairly straightforward. For example, the external_gateway_added/removed
functions create a port on the special external bridge. The trickiest
part will be handling backward compatibility: changing the default value
of br-ex would cause an upgrade to break existing setups that relied on
that default. If we want to maintain backward compat, I guess we could
require people to explicitly zero out the field, in which case the
l3-agent would plug the router gw interface into br-int, meaning that
information from the "provider" extension would be honored.

> The actual work needed is in devstack (and the documentation), to
> configure the public network as a provider network instead of setting up
> PUBLIC_BRIDGE. I'm thinking this would be done by setting
> PUBLIC_NETWORK_TYPE, PUBLIC_PHYSICAL_NETWORK, and PUBLIC_SEGMENTATION_ID
> in localrc. I guess we'd also need to allow specifying the gateway IP
> and the pool of floating IPs to use within it. Does this all make sense?

Devstack already has a notion of FLOATING_RANGE, which is roughly,
though not exactly, equivalent to the cidr and allocation pool of the
subnet on the quantum external network (in some deployments, the cidr
may be larger than the range you want to allow for allocation). I agree
it would be worth having a value to indicate the gateway on the external
network. The current NETWORK_GATEWAY value in devstack is actually used
as the gateway of the per-tenant "private" network.
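To make this concrete, a localrc along these lines is what I have in
mind. To be clear, PUBLIC_NETWORK_TYPE, PUBLIC_PHYSICAL_NETWORK, and
PUBLIC_SEGMENTATION_ID are only proposed above and don't exist in
devstack yet, PUBLIC_NETWORK_GATEWAY is just a straw-man name for the
missing gateway knob, and all addresses are examples:

```shell
# Proposed (not yet implemented) provider-network settings for the
# public network:
PUBLIC_NETWORK_TYPE=vlan
PUBLIC_PHYSICAL_NETWORK=physnet-pub
PUBLIC_SEGMENTATION_ID=2010

# Existing devstack variable: roughly the floating IP allocation pool
# (the external subnet's cidr may be larger than this range):
FLOATING_RANGE=172.24.4.224/28

# Straw-man name for the external network's gateway; NETWORK_GATEWAY
# is already used for the per-tenant "private" network:
PUBLIC_NETWORK_GATEWAY=172.24.4.225
```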
>
> But I'm not totally clear about what scenarios we want devstack to
> support for public networks. It currently behaves quite differently
> between openvswitch and linuxbridge. I could imagine it supporting
> any/all of the following options for the public network:
>
> 1) create no public network
> 2) create it as a normal tenant network (which may be accessible via
>    quantum-debug - I'm not very familiar with this)
> 3) create it as a provider network with details from localrc
> 4) create it as the currently supported tenant network + external bridge
>
> AFAICT, right now with openvswitch we get #4, and with linuxbridge we
> get #2. With openvswitch, the host gets an IP on br-ex, so the user can
> access the floating IPs. With linuxbridge, an IP can be added to the
> VLAN's bridge manually after starting devstack, accomplishing the same
> thing.
>
> If we really think public networks should be provider networks in real
> deployments, then I'd be in favor of having devstack do this by default.
> But treating the devstack host's real network connection as a public
> network is tricky, dangerous, and not always possible.
>
> One idea is to have devstack default to option #3 with
> PUBLIC_NETWORK_TYPE=local. For the node running l3_agent, this is
> similar to #2. Would quantum-debug be usable for testing in this case,
> without the host actually having an IP on the public network? Does this
> make any sense? Or do we need some default that actually gives the
> devstack host running l3_agent connectivity to the floating IPs?
>
> Another idea would be for devstack to default to option #3 with
> PUBLIC_NETWORK_TYPE=vlan with a fake physical network, and for devstack
> to give the host an IP on the VLAN for connectivity to the floating IPs.
> This would be straightforward for openvswitch, since the agent's mapping
> is to a bridge that doesn't have to have a physical network interface.
> But the linuxbridge agent needs a mapping to a physical interface for
> each physical network, so this might have to be faked with a veth or
> something.

The "public" network in devstack is an L3 "external network", but it is
not a "shared network" (i.e., tenants cannot plug into it directly). I
am not aware of any reason why using the Linux Bridge plugin would let
tenants connect directly to those networks, or why Linux Bridge behavior
would differ from OVS, but I haven't used Linux Bridge with devstack
much, so I may be missing something.

I agree that ideally devstack would mirror how we expect people to use
things in "real life". However, we also definitely require that the
local machine have access to floating IPs, and I agree that we don't
want these values to conflict with any real network the devstack box is
on. This is part of the reason why things are set up as they are
currently, with the "gateway IP" of the external subnet being configured
on the br-ex interface. Using the newer quantum-debug stuff to make sure
the local devstack box has access is an interesting idea that we can
explore more.

It may make sense to optimize for two use cases with devstack:

- a "fake" local external network (essentially what we have today, but
  not using br-ex anymore)
- a "real" physical external network, with a real network gateway on it,
  and a range of IP addresses from that network that the tenant can use
  for allocation on the external network.

Dan

> Any other thoughts/ideas on this?
>
> Thanks,
>
> -Bob

--
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~~~~~~~~~~~~~~~~~~~~~~~~~
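For the OVS case, the "fake" external network could look roughly like
the sketch below. All names here (br-fake, physnet-pub, pub0, "public")
and all addresses are made up for illustration, and the exact quantum
client flag syntax has varied between releases, so treat this as a
sketch rather than working commands:

```shell
# --- "fake" local external network, no br-ex ---
# An OVS bridge with no physical NIC attached, mapped as a physical
# network in the plugin/agent config
# (bridge_mappings = physnet-pub:br-fake):
sudo ovs-vsctl --may-exist add-br br-fake

# Give the devstack host an address on that network so it can reach
# floating IPs, via an internal port on the bridge (this replaces
# putting the gateway IP on br-ex):
sudo ovs-vsctl add-port br-fake pub0 -- set interface pub0 type=internal
sudo ip addr add 172.24.4.225/28 dev pub0
sudo ip link set pub0 up

# --- create the public network as a provider network ---
# For the "real" use case, the provider attributes would instead point
# at an actual physical network and segmentation id; either way the
# network is external but not shared:
quantum net-create public --router:external=True \
    --provider:network_type flat \
    --provider:physical_network physnet-pub
quantum subnet-create public 172.24.4.224/28 --gateway 172.24.4.225 \
    --disable-dhcp \
    --allocation-pool start=172.24.4.226,end=172.24.4.238
```

For linuxbridge the bridge mapping wants a real interface, which is
where the veth trick mentioned above would come in.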
--
Mailing list: https://launchpad.net/~quantum-core
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~quantum-core
More help   : https://help.launchpad.net/ListHelp

