On 09/11/2014 03:01 PM, Jay Pipes wrote:
On 09/11/2014 04:51 PM, Matt Riedemann wrote:
On 9/10/2014 6:00 PM, Russell Bryant wrote:
On 09/10/2014 06:46 PM, Joe Cropper wrote:
Hmm, not sure I follow the concern, Russell.  How is that any different
from putting a VM into the group when it’s booted, as is done today?
This simply defers the ‘group insertion time’ until some point after
the VM’s initial spawn, so I’m not sure it creates any more race
conditions than what’s already there [1].

[1] Sure, the to-be-added VM could be in the midst of a migration or
something, but that would be pretty simple to guard against by checking
that its task state is None or some such.

The way this works at boot is already a nasty hack.  It does policy
checking in the scheduler, and then has to re-do some policy checking at
launch time on the compute node.  I'm afraid of making this any worse.
In any case, it's probably better to discuss this in the context of a
more detailed design proposal.


This [1] is the hack you're referring to, right?

[1]
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py?id=2014.2.b3#n1297


That's the hack *I* had in the back of my mind.

I think that's the only boot hack related to server groups.
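For readers following along, the hack in question boils down to re-checking the group policy on the compute node just before spawn, because the scheduler's decision and the actual spawn are not atomic. A minimal, hypothetical sketch of that kind of late re-check (the function and parameter names here are illustrative, not the actual Nova code behind the link above):

```python
def validate_anti_affinity(chosen_host, group_hosts):
    """Re-check anti-affinity on the compute node just before spawn.

    Between scheduling and spawning, another member of the server
    group may have landed on this host, so the policy checked in the
    scheduler has to be verified a second time here.

    :param chosen_host: host the scheduler picked for this instance
    :param group_hosts: set of hosts where group members already run
    :returns: True if spawning here still satisfies anti-affinity
    """
    # Anti-affinity is violated if any other group member already
    # runs on the chosen host.
    return chosen_host not in group_hosts
```

If the check fails at this point, the boot has to be rescheduled or aborted, which is part of what makes the approach feel like a hack.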

I was thinking that it should be possible to deal with the race more cleanly by recording the selected compute node in the database at the time of scheduling. As it stands, the host is implicitly encoded in the compute node to which we send the boot request and nobody else knows about it.

Chris


_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev