According to [1]:

"""

1) It's possible to hit a similar race condition for server groups with the 
"affinity" policy. Suppose we create a new group and then create two instances 
simultaneously. The scheduler sees an empty group for each, assigns them to 
different compute nodes, and the policy is violated. We should add a check in 
_validate_instance_group_policy() to cover the "affinity" case.


2) It's possible to create two instances simultaneously, have them be scheduled 
to conflicting hosts, both of them detect the problem in 
_validate_instance_group_policy(), both of them get sent back for rescheduling, 
and both of them get assigned to conflicting hosts *again*, resulting in an 
error. In order to fix this I propose that instead of checking against all 
other instances in the group, we only check against instances that were created 
before the current instance.

"""


I've been trying to improve upon Chris' solution here[2], but honestly i'm not 
exactly sure if I'm approaching this correctly. Chris' solution is to only 
consider the older members when validating group policy(ignoring any members 
younger than the instance we are validating), eliminating the possibility for 
the two cases mentioned above. I don't really know enough about the scheduler 
and instance group code to validate the integrity of the solution, hence my 
plea for help here :)


I've attached a git format of my attempt to make the implementation a little 
cleaner. It's the same solution Chris implemented, just moved to the 
setup_hosts() method to avoid creating a new remotable method.
I haven't gotten the tests to pass, i'm having trouble getting the expected 
filters to work.


I'm pretty new to the openstack code base, is there anyway anyone here could 
give me some direction? Is the solution correct? What am I missing? How can I 
fill in the gaps? Is this even still a valid issue?


Thanks!


[1]

https://bugs.launchpad.net/nova/+bug/1423648

[2]

https://review.openstack.org/#/c/164762/
From b6ea4e26d5feac8c57d6eb0619f29c22c1f917ca Mon Sep 17 00:00:00 2001
From: Miguel Alex Cantu <miguel.ca...@rackspace.com>
Date: Wed, 17 Aug 2016 22:32:55 +0000
Subject: [PATCH] Fix race in server group policy validation

This is a follow up to Chris Friesen's patch here:
https://review.openstack.org/#/c/164762/9

There is a race condition when validating instances being
simultaneously created as part of the same server group.

It's possible to create two instances simultaneously, have them be
scheduled to hosts that would violate the group policy, have both
of them detect the problem in _validate_instance_group_policy(),
both of them get sent back for rescheduling, and both of them get
assigned to conflicting hosts *again*, resulting in an error.

The fix is to modify the validation code to ignore
group members created after the instance being validated.  This
ensures that at least one of the instances will run on the chosen
host, and the other will reschedule and adapt to the first.

The main difference between my change and Chris' change is that
instead of getting instances that are older than an instance given,
the younger instances (instances created before instance_uuid) are
retrieved, and added to the exclude list in get_hosts().

For this to work, I've added a kwarg parameter to get_hosts so
the instance_uuid that is being validated can be passed.

Closes-Bug: #1423648

Signed-off-by: Miguel Alex Cantu <miguel.ca...@rackspace.com>
---
 nova/compute/manager.py                        |  8 ++++--
 nova/objects/instance_group.py                 | 39 +++++++++++++++++++++++++-
 nova/tests/unit/objects/test_instance_group.py | 21 ++++++++++++++
 3 files changed, 65 insertions(+), 3 deletions(-)

diff --git a/nova/compute/manager.py b/nova/compute/manager.py
index 2112c33..8b05aee 100644
--- a/nova/compute/manager.py
+++ b/nova/compute/manager.py
@@ -1288,7 +1288,9 @@ class ComputeManager(manager.Manager):
         def _do_validation(context, instance, group_hint):
             group = objects.InstanceGroup.get_by_hint(context, group_hint)
             if 'anti-affinity' in group.policies:
-                group_hosts = group.get_hosts(exclude=[instance.uuid])
+                group_hosts = group.get_hosts(
+                    exclude=[instance.uuid],
+                    **{'instance_uuid': instance.uuid})
                 if self.host in group_hosts:
                     msg = _("Anti-affinity instance group policy "
                             "was violated.")
@@ -1296,7 +1298,9 @@ class ComputeManager(manager.Manager):
                             instance_uuid=instance.uuid,
                             reason=msg)
             elif 'affinity' in group.policies:
-                group_hosts = group.get_hosts(exclude=[instance.uuid])
+                group_hosts = group.get_hosts(
+                    exclude=[instance.uuid],
+                    **{'instance_uuid': instance.uuid})
                 if group_hosts and self.host not in group_hosts:
                     msg = _("Affinity instance group policy was violated.")
                     raise exception.RescheduledException(
diff --git a/nova/objects/instance_group.py b/nova/objects/instance_group.py
index b1280df..9b55416 100644
--- a/nova/objects/instance_group.py
+++ b/nova/objects/instance_group.py
@@ -200,14 +200,28 @@ class InstanceGroup(base.NovaPersistentObject, base.NovaObject,
         return list(members)
 
     @base.remotable
-    def get_hosts(self, exclude=None):
+    def get_hosts(self, exclude=list(), **kwargs):
         """Get a list of hosts for non-deleted instances in the group
 
         This method allows you to get a list of the hosts where instances in
         this group are currently running.  There's also an option to exclude
         certain instance UUIDs from this calculation.
+        If instance_uuid is given as a kwarg parameter, exclude any members
+        created after instance_uuid. This is to avoid race conditions with
+        multiple instance being created in close succession. See:
+        https://bugs.launchpad.net/nova/+bug/1423648
 
         """
+
+        # Get instance_uuid if given
+        instance_uuid = kwargs.get('instance_uuid', '')
+
+        # Add any hosts created after instance_uuid to the exclude list
+        if instance_uuid:
+            younger_members = self._get_younger_member_hosts(instance_uuid)
+            exclude.extend(
+                younger_members)
+
         filter_uuids = self.members
         if exclude:
             filter_uuids = set(filter_uuids) - set(exclude)
@@ -226,6 +240,29 @@ class InstanceGroup(base.NovaPersistentObject, base.NovaObject,
                                                         filters=filters)
         return len(instances)
 
+    def _get_younger_member_hosts(self, instance_uuid):
+        """Get a list of hosts for younger non-deleted instances in the group
+
+        This method allows you to get a list of the hosts where instances in
+        this group are currently running. Instances which were created before
+        the specified instance are not considered (to avoid races with multiple
+        instances being created in close succession).
+
+        """
+
+        filter_uuids = self.members
+        filters = {'uuid': filter_uuids, 'deleted': False}
+        instances = objects.InstanceList.get_by_filters(self._context,
+                                                        filters=filters)
+        # Determine the id of the instance_uuid
+        instance_id = next(i.id for i in instances
+                                if i.uuid == instance_uuid)
+
+        # Unique hosts for instances that were created after
+        # instance_uuid
+        return list(set([i.host for i in instances
+                         if i.host and i.id > instance_id]))
+
 
 @base.NovaObjectRegistry.register
 class InstanceGroupList(base.ObjectListBase, base.NovaObject):
diff --git a/nova/tests/unit/objects/test_instance_group.py b/nova/tests/unit/objects/test_instance_group.py
index 60d45b3..6cf369a 100644
--- a/nova/tests/unit/objects/test_instance_group.py
+++ b/nova/tests/unit/objects/test_instance_group.py
@@ -246,6 +246,27 @@ class _TestInstanceGroupObject(object):
         mock_il_get.assert_called_once_with(mock.sentinel.ctx,
                                             filters=expected_filters)
 
+        # Test getting younger instances
+        # instance_id1 is the older member, instance_id2 is the younger.
+        # We need to test that we will only get instances younger than
+        # instance_id1(including itself, since no exclude is defined).
+        mock_il_get.reset_mock()
+        hosts = obj.get_hosts(instance_uuid=uuids.instance_id1)
+        expected_filters1 = {
+            'uuid': [uuids.instance_id1, uuids.instance_id2],
+            'deleted': False
+        }
+        mock_il_get.assert_called_with(mock.sentinel.ctx,
+                                       filters=expected_filters1)
+        expected_filters2 = {
+            'uuid': set([uuids.instance_id1]),
+            'deleted': False
+        }
+
+        calls = [mock.call(mock.sentinel.ctx, expected_filters1),
+                 mock.call(mock.sentinel.ctx, expected_filters2)]
+        mock_il_get.assert_has_calls(calls)
+
     @mock.patch('nova.db.instance_group_get', return_value=_INST_GROUP_DB)
     def test_obj_make_compatible(self, mock_db_get):
         obj = objects.InstanceGroup.get_by_uuid(mock.sentinel.ctx,
-- 
1.9.1

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Reply via email to