Re: [openstack-dev] [nova][api] Is this a potential issue
On 11/15/13 at 04:01pm, yunhong jiang wrote:
> On Fri, 2013-11-15 at 17:19 -0500, Andrew Laski wrote:
>> On 11/15/13 at 07:30am, Dan Smith wrote:
>>>> You're not missing anything. But I think that's a bug, or at least
>>>> an unexpected change in behaviour from how it used to work. If you
>>>> follow instance_update() in nova.db.sqlalchemy.api, just the
>>>> presence of expected_task_state triggers the check. So we may need
>>>> to find a way to pass that through with the save method.
>>>
>>> This came up recently. We decided that since we no longer have a
>>> kwargs dictionary to test for the presence or absence of that flag,
>>> we would require setting it to a tuple, which is already supported
>>> for allowing multiple state possibilities. So, if you pass
>>> expected_task_state=(None,) then it will do the right thing. Make
>>> sense?
>>
>> Perfect. I thought the old method was a bit counterintuitive and
>> started thinking this would be better after I sent the email earlier.
>
> I checked, and it seems most usages of instance.save() with
> expected_task_state=None assume an exception and need to change. Can I
> assume this rule applies to all?

Yes.

> If yes, would it be possible to create a special task_state IDLE, to
> distinguish it better? When no task is ongoing, the task_state would
> be IDLE instead of None.

I'm starting on some work right now which will break task_state off
into its own model and API resource. In my opinion we don't need to
model the idea of no task running; we can check whether there are tasks
for the instance or not. So I think that using None is fine here and we
shouldn't add an IDLE state.

> --jyh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova][api] Is this a potential issue
On Mon, 2013-11-18 at 10:18 -0500, Andrew Laski wrote:
> On 11/15/13 at 04:01pm, yunhong jiang wrote:
>> If yes, would it be possible to create a special task_state IDLE, to
>> distinguish it better? When no task is ongoing, the task_state would
>> be IDLE instead of None.
>
> I'm starting on some work right now which will break task_state off
> into its own model and API resource. In my opinion we don't need to
> model the idea of no task running; we can check whether there are
> tasks for the instance or not. So I think that using None is fine here
> and we shouldn't add an IDLE state.

Got it, thanks.

--jyh
Re: [openstack-dev] [nova][api] Is this a potential issue
On Tue, Nov 12, 2013 at 03:07:19PM -0500, Andrew Laski andrew.la...@rackspace.com wrote:
> On 11/11/13 at 05:27pm, Jiang, Yunhong wrote:
>> Resend after the HK summit; hope someone can give me a hint on it.
>> Thanks
>> --jyh
>>
>> -----Original Message-----
>> From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com]
>> Sent: Thursday, November 07, 2013 5:39 PM
>> To: openstack-dev@lists.openstack.org
>> Subject: [openstack-dev] [nova][api] Is this a potential issue
>>
>> Hi, all
>> I'm a bit confused by the following code in ./compute/api.py, which
>> is invoked by api/openstack/compute/servers.py,
>> _action_revert_resize(). From the code there seems to be a small
>> window between getting the migration object and updating
>> migration.status. If another API request arrives in that window, two
>> requests will try to revert the resize at the same time. Is this a
>> potential issue? The current implementation already rolls back the
>> reservation if something goes wrong, but I'm not sure whether we
>> should update the status to 'reverting' as a transaction in
>> get_by_instance_and_status().
>
> The migration shouldn't end up being set to 'reverting' twice because
> of the expected_task_state set and check in
> instance.save(expected_task_state=None). The quota reservation could
> happen twice, so a rollback in the case of a failure in instance.save
> could be good.

nova.objects.instance.Instance.save() seems to treat
expected_task_state=None as "don't care about the task state" rather
than "no task going on". Am I missing anything?

thanks,

>> --jyh
>>
>>     def revert_resize(self, context, instance):
>>         """Reverts a resize, deleting the 'new' instance in the process."""
>>         elevated = context.elevated()
>>         migration = migration_obj.Migration.get_by_instance_and_status(
>>             elevated, instance.uuid, 'finished')
>>         # <-- Here we get the migration object
>>
>>         # reverse quota reservation for increased resource usage
>>         deltas = self._reverse_upsize_quota_delta(context, migration)
>>         reservations = self._reserve_quota_delta(context, deltas)
>>
>>         instance.task_state = task_states.RESIZE_REVERTING
>>         instance.save(expected_task_state=None)
>>
>>         migration.status = 'reverting'
>>         # <-- Here we update the status
>>         migration.save()

-- 
Isaku Yamahata isaku.yamah...@gmail.com
Re: [openstack-dev] [nova][api] Is this a potential issue
On 11/15/13 at 07:57pm, Isaku Yamahata wrote:
> nova.objects.instance.Instance.save() seems to treat
> expected_task_state=None as "don't care about the task state" rather
> than "no task going on". Am I missing anything?

You're not missing anything. But I think that's a bug, or at least an
unexpected change in behaviour from how it used to work. If you follow
instance_update() in nova.db.sqlalchemy.api, just the presence of
expected_task_state triggers the check. So we may need to find a way to
pass that through with the save method.
Re: [openstack-dev] [nova][api] Is this a potential issue
> -----Original Message-----
> From: Dan Smith [mailto:d...@danplanet.com]
> Sent: Friday, November 15, 2013 7:30 AM
> To: OpenStack Development Mailing List (not for usage questions); isaku.yamah...@gmail.com
> Subject: Re: [openstack-dev] [nova][api] Is this a potential issue
>
>> You're not missing anything. But I think that's a bug, or at least an
>> unexpected change in behaviour from how it used to work. If you
>> follow instance_update() in nova.db.sqlalchemy.api, just the presence
>> of expected_task_state triggers the check. So we may need to find a
>> way to pass that through with the save method.
>
> This came up recently. We decided that since we no longer have a
> kwargs dictionary to test for the presence or absence of that flag, we
> would require setting it to a tuple, which is already supported for
> allowing multiple state possibilities. So, if you pass
> expected_task_state=(None,) then it will do the right thing. Make
> sense?

This should work, although I'm not sure it's so clean. I will cook up a
patch for it.

--jyh

> --Dan
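To make the distinction concrete, here is a minimal stand-alone sketch
of the semantics Dan describes; `save` below is an illustrative
stand-in for Instance.save(), not Nova's actual implementation:

```python
def save(current_task_state, expected_task_state=None):
    """Toy model of a guarded save.

    With a named keyword argument (no **kwargs dict to inspect),
    expected_task_state=None is indistinguishable from "argument not
    passed", so None has to mean "don't check". To assert "no task in
    progress", the caller passes the tuple (None,) instead.
    """
    if expected_task_state is not None:
        # A single value is normalized to a tuple; tuples are already
        # supported for allowing multiple state possibilities.
        if not isinstance(expected_task_state, tuple):
            expected_task_state = (expected_task_state,)
        if current_task_state not in expected_task_state:
            raise RuntimeError("task_state was %r, expected one of %r"
                               % (current_task_state, expected_task_state))
    return "saved"

save("resizing", expected_task_state=None)   # no check at all: saves anyway
save(None, expected_task_state=(None,))      # checks, and passes: no task running
# save("resizing", expected_task_state=(None,)) would raise RuntimeError
```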
Re: [openstack-dev] [nova][api] Is this a potential issue
On 11/15/13 at 07:30am, Dan Smith wrote:
>> You're not missing anything. But I think that's a bug, or at least an
>> unexpected change in behaviour from how it used to work. If you
>> follow instance_update() in nova.db.sqlalchemy.api, just the presence
>> of expected_task_state triggers the check. So we may need to find a
>> way to pass that through with the save method.
>
> This came up recently. We decided that since we no longer have a
> kwargs dictionary to test for the presence or absence of that flag, we
> would require setting it to a tuple, which is already supported for
> allowing multiple state possibilities. So, if you pass
> expected_task_state=(None,) then it will do the right thing. Make
> sense?

Perfect. I thought the old method was a bit counterintuitive and
started thinking this would be better after I sent the email earlier.

> --Dan
Re: [openstack-dev] [nova][api] Is this a potential issue
On Fri, 2013-11-15 at 17:19 -0500, Andrew Laski wrote:
> On 11/15/13 at 07:30am, Dan Smith wrote:
>> This came up recently. We decided that since we no longer have a
>> kwargs dictionary to test for the presence or absence of that flag,
>> we would require setting it to a tuple, which is already supported
>> for allowing multiple state possibilities. So, if you pass
>> expected_task_state=(None,) then it will do the right thing. Make
>> sense?
>
> Perfect. I thought the old method was a bit counterintuitive and
> started thinking this would be better after I sent the email earlier.

I checked, and it seems most usages of instance.save() with
expected_task_state=None assume an exception and need to change. Can I
assume this rule applies to all?

If yes, would it be possible to create a special task_state IDLE, to
distinguish it better? When no task is ongoing, the task_state would be
IDLE instead of None.

--jyh
Re: [openstack-dev] [nova][api] Is this a potential issue
On 11/11/13 at 05:27pm, Jiang, Yunhong wrote:
> Resend after the HK summit; hope someone can give me a hint on it.
> Thanks
> --jyh
>
> -----Original Message-----
> From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com]
> Sent: Thursday, November 07, 2013 5:39 PM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [nova][api] Is this a potential issue
>
> Hi, all
> I'm a bit confused by the following code in ./compute/api.py, which is
> invoked by api/openstack/compute/servers.py, _action_revert_resize().
> From the code there seems to be a small window between getting the
> migration object and updating migration.status. If another API request
> arrives in that window, two requests will try to revert the resize at
> the same time. Is this a potential issue? The current implementation
> already rolls back the reservation if something goes wrong, but I'm
> not sure whether we should update the status to 'reverting' as a
> transaction in get_by_instance_and_status().

The migration shouldn't end up being set to 'reverting' twice because
of the expected_task_state set and check in
instance.save(expected_task_state=None). The quota reservation could
happen twice, so a rollback in the case of a failure in instance.save
could be good.

> --jyh
>
>     def revert_resize(self, context, instance):
>         """Reverts a resize, deleting the 'new' instance in the process."""
>         elevated = context.elevated()
>         migration = migration_obj.Migration.get_by_instance_and_status(
>             elevated, instance.uuid, 'finished')
>         # <-- Here we get the migration object
>
>         # reverse quota reservation for increased resource usage
>         deltas = self._reverse_upsize_quota_delta(context, migration)
>         reservations = self._reserve_quota_delta(context, deltas)
>
>         instance.task_state = task_states.RESIZE_REVERTING
>         instance.save(expected_task_state=None)
>
>         migration.status = 'reverting'
>         # <-- Here we update the status
>         migration.save()
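The rollback Andrew suggests can be sketched roughly as follows. All
class and method names here are illustrative stand-ins, not Nova's
actual API; the point is only that the loser of the race, whose guarded
save raises, undoes the quota reservation it made just before:

```python
class UnexpectedTaskState(Exception):
    pass

class Quotas:
    """Toy quota tracker standing in for Nova's reservation machinery."""
    def __init__(self):
        self.reserved = 0
    def reserve(self, delta):
        self.reserved += delta
        return delta
    def rollback(self, reservation):
        self.reserved -= reservation

class Instance:
    """Toy instance whose save() is a compare-and-swap on task_state."""
    def __init__(self):
        self.task_state = None
    def save(self, expected_task_state, new_task_state):
        if self.task_state not in expected_task_state:
            raise UnexpectedTaskState(self.task_state)
        self.task_state = new_task_state

def revert_resize(instance, quotas):
    reservation = quotas.reserve(1)      # both racing requests get this far
    try:
        instance.save(expected_task_state=(None,),
                      new_task_state='resize_reverting')
    except UnexpectedTaskState:
        quotas.rollback(reservation)     # loser undoes its double reservation
        raise

inst, q = Instance(), Quotas()
revert_resize(inst, q)                   # first request wins the race
try:
    revert_resize(inst, q)               # second request loses and rolls back
except UnexpectedTaskState:
    pass
print(q.reserved)                        # -> 1: only the winner's reservation remains
```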
Re: [openstack-dev] [nova][api] Is this a potential issue
> -----Original Message-----
> From: Andrew Laski [mailto:andrew.la...@rackspace.com]
> Sent: Tuesday, November 12, 2013 12:07 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova][api] Is this a potential issue
>
> The migration shouldn't end up being set to 'reverting' twice because
> of the expected_task_state set and check in
> instance.save(expected_task_state=None). The quota reservation could
> happen twice, so a rollback in the case of a failure in instance.save
> could be good.

Aha, I didn't notice that's a guard. It's really cool.

--jyh
[openstack-dev] [nova][api] Is this a potential issue
Hi, all

I'm a bit confused by the following code in ./compute/api.py, which is
invoked by api/openstack/compute/servers.py, _action_revert_resize().
From the code there seems to be a small window between getting the
migration object and updating migration.status. If another API request
arrives in that window, two requests will try to revert the resize at
the same time. Is this a potential issue? The current implementation
already rolls back the reservation if something goes wrong, but I'm not
sure whether we should update the status to 'reverting' as a
transaction in get_by_instance_and_status().

--jyh

    def revert_resize(self, context, instance):
        """Reverts a resize, deleting the 'new' instance in the process."""
        elevated = context.elevated()
        migration = migration_obj.Migration.get_by_instance_and_status(
            elevated, instance.uuid, 'finished')
        # <-- Here we get the migration object

        # reverse quota reservation for increased resource usage
        deltas = self._reverse_upsize_quota_delta(context, migration)
        reservations = self._reserve_quota_delta(context, deltas)

        instance.task_state = task_states.RESIZE_REVERTING
        instance.save(expected_task_state=None)

        migration.status = 'reverting'
        # <-- Here we update the status
        migration.save()
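The transactional alternative raised above, claiming the migration
inside get_by_instance_and_status() itself, amounts to an atomic
compare-and-update: fetch and mark in one step, so the window between
"get" and "set status" disappears. A toy sketch of that idea
(illustrative names only, not Nova's code; a lock stands in for a
database transaction):

```python
import threading

class MigrationStore:
    """Toy store showing an atomic get-and-mark of a migration record."""
    def __init__(self):
        self._lock = threading.Lock()
        self._status = 'finished'

    def get_and_mark_reverting(self):
        # Morally equivalent to a single transaction:
        #   UPDATE migrations SET status = 'reverting'
        #   WHERE status = 'finished'
        with self._lock:
            if self._status != 'finished':
                return None            # another request already claimed it
            self._status = 'reverting'
            return {'status': self._status}

store = MigrationStore()
print(store.get_and_mark_reverting())  # -> {'status': 'reverting'}
print(store.get_and_mark_reverting())  # -> None: the race window is closed
```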