On 22/04/2016 16:14, Matt Riedemann wrote:
On 4/22/2016 2:48 AM, Sylvain Bauza wrote:


On 22/04/2016 02:49, Jay Pipes wrote:
On 04/20/2016 06:40 PM, Matt Riedemann wrote:
Note that I think the only time Nova gets port details in the API
service during a server create request is during network request
validation, and only when the request includes a fixed IP address or
specific port(s); otherwise Nova just gets the networks. [1]

[1]
https://github.com/openstack/nova/blob/ee7a01982611cdf8012a308fa49722146c51497f/nova/network/neutronv2/api.py#L1123
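Roughly, the branching looks like this (a hedged sketch, not the actual
Nova code; the real logic is the validate_networks() path behind the
link above, heavily simplified here, and only the python-neutronclient
calls shown are real):

    from nova import exception

    # Sketch of the validation branching described above; heavily
    # simplified from nova/network/neutronv2/api.py.
    def validate_networks(context, requested_networks, neutron):
        for net_id, fixed_ip, port_id in requested_networks:
            if port_id:
                # Only a specific port in the request makes the API
                # service fetch port details from Neutron.
                port = neutron.show_port(port_id)['port']
                if port.get('device_id'):
                    raise exception.PortInUse(port_id=port_id)
            elif fixed_ip:
                # A requested fixed IP triggers extra checks against
                # the network's subnets (simplified here).
                subnets = neutron.list_subnets(
                    network_id=net_id)['subnets']
            # With neither, Nova only looks up the networks themselves.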


Actually, nova.network.neutronv2.api.API.allocate_for_instance() is
*never* called by the Compute API service (though, strangely,
deallocate_for_instance() *is* called by the Compute API service).

allocate_for_instance() is *only* ever called in the nova-compute
service:

https://github.com/openstack/nova/blob/7be945b53944a44b26e49892e8a685815bf0cacb/nova/compute/manager.py#L1388
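For reference, the call site in the compute manager looks roughly like
this (simplified from the linked line, with the retry handling trimmed):

    # nova/compute/manager.py (simplified): allocation is kicked off
    # from the build path on the compute node, not from the API
    # service.
    def _allocate_network_async(self, context, instance,
                                requested_networks, macs,
                                security_groups, is_vpn, dhcp_options):
        return self.network_api.allocate_for_instance(
            context, instance, vpn=is_vpn,
            requested_networks=requested_networks,
            macs=macs, security_groups=security_groups,
            dhcp_options=dhcp_options)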


I was actually on a hangout today with Carl, Miguel and Dan Smith
talking about just this particular section of code with regard to
IPAM handling for routed networks.

What I believe we'd like to do is move to a model where we call out to
Neutron here in the conductor:

https://github.com/openstack/nova/blob/7be945b53944a44b26e49892e8a685815bf0cacb/nova/conductor/manager.py#L397


and ask Neutron to give us as much information about available subnet
allocation pools and segment IDs as it can *before* we end up calling
the scheduler here:

https://github.com/openstack/nova/blob/7be945b53944a44b26e49892e8a685815bf0cacb/nova/conductor/manager.py#L415
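In code terms, something like this in build_instances() (purely a
sketch: get_network_topology() is a hypothetical Neutron-facing call
that does not exist today, and the keys it returns are made up; only
_schedule_instances() mirrors existing conductor code):

    # Hypothetical conductor flow; the Neutron probing call is an
    # assumption, not an existing API.
    def build_instances(self, context, instances, request_spec,
                        filter_properties, requested_networks):
        # 1) Probe Neutron for subnet allocation pools and segment
        #    IDs before scheduling (hypothetical call).
        topology = self.network_api.get_network_topology(
            context, requested_networks)
        # 2) Feed segment IDs into the filter properties so the
        #    scheduler can account for network affinity.
        filter_properties['requested_segments'] = topology['segment_ids']
        # 3) Only then pick destinations.
        return self._schedule_instances(context, request_spec,
                                        filter_properties)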


Not only will the segment IDs allow us to make better use of network
affinity in placement decisions, but doing this kind of "probing" for
network information in the conductor is inherently more scalable than
doing it all in allocate_for_instance() on the compute node while
holding the giant COMPUTE_NODE_SEMAPHORE lock.
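To make the scalability point concrete, the pattern to avoid looks
roughly like this (an illustration of the concern, not actual Nova
code; the real lock lives in the resource tracker):

    from oslo_concurrency import lockutils

    # Anything done inside this critical section, including
    # round-trips to Neutron for port allocation, serializes every
    # other build on the host.
    @lockutils.synchronized('compute-node-semaphore')
    def _build_instance(self, context, instance, requested_networks):
        return self.network_api.allocate_for_instance(
            context, instance, requested_networks=requested_networks)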

I totally agree with that plan. I never replied to Ajo's point (thanks,
Matt, for doing that), but I was struggling to find any allocation call
in the Compute API service. Thanks, Jay, for clarifying this.

Funnily enough, we do *deallocate* if an exception is raised while
trying to find a destination in the conductor, but since no port has
been allocated yet, I guess it's a no-op at the moment.

https://github.com/openstack/nova/blob/d57a4e8be9147bd79be12d3f5adccc9289a375b6/nova/conductor/manager.py#L423-L424
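The handler's shape, simplified from the linked lines:

    # nova/conductor/manager.py build_instances(), roughly: on a
    # scheduling failure we mark the instances as ERROR and then try
    # to clean up their networks.
    try:
        hosts = self._schedule_instances(context, request_spec,
                                         filter_properties)
    except Exception as exc:
        updates = {'vm_state': vm_states.ERROR, 'task_state': None}
        for instance in instances:
            self._set_vm_state_and_notify(
                context, instance.uuid, 'build_instances', updates,
                exc, request_spec)
            # No port exists yet at this point, hence the no-op.
            self._cleanup_allocated_networks(
                context, instance, requested_networks)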

Is this here for rebuilds where we set up networks on a compute node
but something else failed, maybe setting up block devices? Although we
have a lot of checks in the build flow in the compute manager for
deallocating the network on failure.
Yeah, after running git blame, the reason is given in the commit
message: https://review.openstack.org/#/c/243477/

Fair enough. I just think it's another good reason to discuss where and
when we should allocate and deallocate networks, because I'm not super
comfortable with the above. Alternatively, we could track whether a
port was already allocated for a specific instance and skip that
deallocation when it hasn't happened yet, instead of just doing what
was necessary there:
https://review.openstack.org/#/c/269462/1/nova/conductor/manager.py ?
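In code terms, the guard could be as simple as this (a sketch: the
'network_allocated' system_metadata flag is an assumption here, written
by the compute manager only once allocation has actually succeeded):

    # Sketch: skip the Neutron round-trip when nothing was ever
    # allocated for this instance.
    def _cleanup_allocated_networks(self, context, instance,
                                    requested_networks):
        # Assumed flag, set on successful allocation.
        if instance.system_metadata.get('network_allocated') != 'True':
            return
        self.network_api.deallocate_for_instance(
            context, instance, requested_networks=requested_networks)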

-Sylvain




Clarifying the above and making the conductor responsible for placing
calls to Neutron is something I'd love to see before we move further
with the routed networks and QoS specs; doing it in the conductor seems
to me the best fit.

-Sylvain



Best,
-jay

__________________________________________________________________________

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

