Recently I was asked to implement the Python side of a simple feature:
before deployment, tell the UI user that network verification has not
been performed for the current cluster configuration. Moreover, on the
UI side it's possible to run network checking on unsaved cluster data --
in that case we treat it as if no network checking was performed.
Unfortunately it turned out to be not at all simple to implement on the
backend, and I'll try to explain why.

I ended up with an implementation [1] that added a tri-valued flag to
the Cluster model. Surprisingly, I got stuck on the idempotency test for
network configuration: I sent a GET request for the network config, then
sent a PUT with the received data and asserted that nothing changed.
Strangely, in about 1 in 4 runs this test failed because some IPs got
assigned differently. I wasn't able to explain why (I had other tasks to
do and this one was somewhat of a side project). By the way, it turned
out that we have at least two functions that are used to deeply compare
two objects, both unnecessary IMHO since there are third-party libraries
for this, like [3].

Another issue was that the network configuration PUT returns a task even
though there is no asynchronicity there at all; it's just a huge
validator that executes everything synchronously. This was already
heavily commented on in [2], and it has been proposed to remove that
task completely. Moreover, the Nova and Neutron backends returned
different statuses even though their verification code was almost the
same. A unification of these two handlers was proposed in [1].

Another issue is that we have to somehow invalidate the flag that says
cluster verification is done. It is not difficult to override the save
method of Cluster so that any change to the cluster invalidates network
checking. But it's not that simple. First of all, not all cluster
changes should invalidate the network checking. Second, cluster changes
are not the only thing that invalidates it -- adding nodes to a cluster,
for example, invalidates network checking too. Scattering triggers all
over the code to check for this doesn't seem like a good solution.
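A toy sketch of why the naive save-override fails (Cluster here is a
simplified stand-in for the real model, not actual Fuel code):

```python
# Naive invalidation: override save() so any change clears the flag.
class Cluster(object):
    def __init__(self):
        self.nodes = []
        self.network_verified = False

    def save(self):
        # Too aggressive: clears the flag even for changes (e.g. a
        # renamed cluster) that don't affect networking at all.
        self.network_verified = False

    def add_node(self, node):
        # Node assignment changes networking, but it never goes through
        # Cluster.save(), so the flag silently stays stale.
        self.nodes.append(node)
```

Both failure modes push you toward sprinkling invalidation triggers
through every code path that touches networking, which is exactly the
maintenance burden described above.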

So what I proposed, instead of the simple flag from [1], is to store the
whole serialized network configuration as a JSON object. Upon
deployment the UI will ask the API about the cluster, and we will return
an additional key called 'network_status_check' that is 'failed',
'passed' or 'not_performed'. The API will determine that value by
fetching the saved JSON object and comparing it with a freshly
serialized one. This way we don't need to alter any flag on save or
anywhere else; we just compute on demand whether the configuration has
changed.

I guess this change has grown so big that it requires a blueprint and
can be done for 7.0. The feature can be implemented on the UI side only,
which covers most (but not all) of the problems and is "good enough"
for 6.1.

[1] https://review.openstack.org/153556
[3] https://github.com/inveniosoftware/dictdiffer


OpenStack Development Mailing List (not for usage questions)