On Mon, Feb 9, 2015 at 1:35 PM, Przemyslaw Kaminski <pkamin...@mirantis.com>

> > Well, I think there should be a finished_at field anyway, so why not
> > add it for this purpose?
> So you're suggesting adding another column and modifying all tasks for
> this one feature?

Such things as timestamps should be on all tasks anyway.
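Roughly what I have in mind (just a sketch, not nailgun's actual model -- the class and field names here are illustrative):

```python
# Sketch: every task carries created_at and finished_at, so "when did
# this verification last run?" is answerable for any task type, not
# just verify_networks. Not the real nailgun Task model.
from datetime import datetime, timezone


class Task:
    def __init__(self, name):
        self.name = name
        self.status = 'pending'
        self.created_at = datetime.now(timezone.utc)
        self.finished_at = None

    def finish(self, status):
        # Stamp the completion time whenever the task reaches a
        # terminal state (ready, error, ...).
        self.status = status
        self.finished_at = datetime.now(timezone.utc)
```

With that in place, "was network verification performed for this cluster, and when?" becomes a simple query over tasks instead of a special case.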

> > I don't actually recall what the reason to delete them was, but if
> > it happens, IMO it is OK to show right now that network verification
> > wasn't performed.
> Is this how one builds predictable and easy-to-understand software?
> Sometimes we'll say that verification is OK, other times that it wasn't
> performed?
> In my opinion, the question that needs to be answered is: what is the
> reason or event that removes the verify_networks task history?

> > 3. Just having network verification status as 'ready' is NOT enough.
> > From the UI you can fire off network verification for unsaved
> > changes. Some JSON request is made, the network configuration is
> > validated by tasks, and an RPC call is made returning that all is OK,
> > for example. But if you haven't saved your changes, then in fact you
> > haven't verified your current configuration, just some other one. So
> > in this case the task status 'ready' doesn't mean that the current
> > cluster config is valid. What do you propose in this case? Fail the
> > task on purpose?
> Issue #3 I described is still valid -- what is your solution in this case?
> Ok, sorry.
What do you think about removing the old tasks in that case?
It seems to me that saving the config is exactly the event after which
the old verify_networks result is invalid anyway, so there is no point
in storing its history.
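Something along these lines (a sketch only; the Cluster class and task shape are simplified stand-ins for the real models):

```python
# Sketch: drop stale verify_networks history whenever the cluster's
# network configuration is saved, so a leftover 'ready' task can never
# describe a configuration that no longer exists.
class Cluster:
    def __init__(self):
        self.tasks = []
        self.network_config = None

    def save_network_config(self, config):
        # Old verification results refer to the previous config, so
        # invalidate them before persisting the new one.
        self.tasks = [t for t in self.tasks
                      if t['name'] != 'verify_networks']
        self.network_config = config
```

That way the UI can honestly report "not verified" right after a config change, instead of showing a stale 'ready'.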

> As far as I understand, there's one supertask, 'verify_networks'
> (called in nailgun/task/manager.py, line 751). It spawns other tasks
> that do the verification. When all is OK, verify_networks calls RPC's
> 'verify_networks_resp' method and returns a 'ready' status, and at that
> point I can inject code to also set the DB column in cluster saying
> that network verification was OK for the saved configuration. Adding
> other tasks should in no way affect this behavior, since they're just
> subtasks of this task -- or am I wrong?

It is not that smooth, but in general yes -- it can be done when the
state of verify_networks changes.
But let's say we add a some_settings_verify task. Would it be valid to
add one more field to the cluster model, like some_settings_status?
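If not, one alternative (again just a sketch with made-up names, not a concrete proposal for the schema) is a single mapping from task name to last known status, instead of one dedicated column per verification type:

```python
# Sketch: keep one generic mapping of verification-task name -> last
# status on the cluster, so adding some_settings_verify later does not
# require a new column like some_settings_status.
class Cluster:
    def __init__(self):
        self.verification_statuses = {}

    def on_task_finished(self, task_name, status):
        # Called from the task state-change hook; records the outcome
        # for whichever verification task just completed.
        self.verification_statuses[task_name] = status
```

The hook fires once per finished verification task, so the mapping always reflects the latest run of each type.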
OpenStack Development Mailing List (not for usage questions)
