(resending with reply-all)

The routed networks work will include a change to the DHCP scheduler
which will work something like this:

1. Neutron subnets will have optional affinity to a segment
2. DHCP agents will (somewhat indirectly) report which segments they
are attached to*.
3. Where today the DHCP scheduler assigns networks to DHCP agents,
tomorrow it will schedule each segment to an agent that can reach that
segment.  This will be predicated on 'enable_dhcp' being set on the
subnets.
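The three steps above could be sketched roughly like this. This is a toy model of the idea, not the actual Neutron scheduler; `Subnet`, `Agent`, and `schedule()` are illustrative names:

```python
# Hypothetical sketch of segment-aware DHCP scheduling; all names
# here are illustrative, not Neutron code.
from dataclasses import dataclass, field


@dataclass
class Subnet:
    segment: str              # segment the subnet has affinity to (step 1)
    enable_dhcp: bool = True


@dataclass
class Agent:
    host: str
    # segments the agent reports it can reach (step 2, via bridge mappings)
    segments: set = field(default_factory=set)


def schedule(subnets, agents):
    """Bind each segment that needs DHCP to an agent that can reach it (step 3)."""
    needed = {s.segment for s in subnets if s.enable_dhcp}
    bindings = {}
    for seg in needed:
        for agent in agents:
            if seg in agent.segments:
                bindings[seg] = agent.host
                break
        # a segment with no matching agent is the operator gap noted below
    return bindings
```

A segment whose subnets all have 'enable_dhcp' unset is simply skipped, and a segment with no reachable agent ends up unbound.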

There is an implicit assumption here that the operator will deploy a
DHCP agent in each of the segments.  This will be documented in the
guide.

Down the road, I really think we should continue to explore other
possibilities like DHCP relay or a DHCP responder on the compute host.
But that should be considered an independent effort.

Carl

* they already do this by reporting physical_network in bridge mappings
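For reference, the bridge mappings the footnote refers to live in the L2 agent's configuration; an Open vSwitch agent fragment looks something like this (bridge and physical network names are illustrative):

```ini
# openvswitch_agent.ini on the host running the DHCP agent
[ovs]
# Maps the physical_network name used by segments to a local bridge.
bridge_mappings = provider1:br-provider
```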

On Thu, Feb 25, 2016 at 11:30 AM, Tim Bell <[email protected]> wrote:
>
> The CERN guys had some concerns about how DHCP was working in a
> segmented environment. I’ll leave them to give details.
>
> Tim
>
> On 25/02/16 14:53, "Andrew Laski" <[email protected]> wrote:
>
>>
>>
>>On Thu, Feb 25, 2016, at 05:01 AM, Tim Bell wrote:
>>>
>>> CERN info added. Feel free to come back for more information if needed.
>>
>>An additional piece of information we're specifically interested in from
>>all cells v1 deployments is the networking control plane setup. Is
>>there a single nova-net/Neutron deployment per region that is shared
>>among cells? It appears that all cells users are splitting the network
>>data plane into clusters/segments; are similar things being done to the
>>control plane?
>>
>>
>>>
>>> Tim
>>>
>>> On 24/02/16 22:47, "Edgar Magana" <[email protected]> wrote:
>>>
>>> >It will be awesome if we can add this doc into the networking guide  :-)
>>> >
>>> >
>>> >Edgar
>>> >
>>> >On 2/24/16, 1:42 PM, "Matt Riedemann" <[email protected]> wrote:
>>> >
>>> >>The nova and neutron teams are trying to sort out existing deployment
>>> >>network scenarios for cells v1 so we can try and document some of that
>>> >>and get an idea if things change at all with cells v2.
>>> >>
>>> >>Therefore we're asking that deployers running cells please document
>>> >>anything you can in an etherpad [1].
>>> >>
>>> >>We'll try to distill that for upstream docs at some point and then use
>>> >>it as a reference when talking about cells v2 + networking.
>>> >>
>>> >>[1] https://etherpad.openstack.org/p/cells-networking-use-cases
>>> >>
>>> >>--
>>> >>
>>> >>Thanks,
>>> >>
>>> >>Matt Riedemann
>>> >>
>>> >>
>>> >>_______________________________________________
>>> >>OpenStack-operators mailing list
>>> >>[email protected]
>>> >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
