On 05/01/2014 11:36 AM, Matthew Treinish wrote:
On Thu, May 01, 2014 at 06:18:10PM +0900, Ken'ichi Ohmichi wrote:
# Sorry for sending this again, previous mail was unreadable.

2014-04-28 11:54 GMT+09:00 Ken'ichi Ohmichi <[email protected]>:
This is also why there are a bunch of nova v2 extensions that just add
properties to an existing API. I think in v3 the proposal was to do this with
microversioning of the plugins. (we don't have a way to configure
microversioned v3 api plugins in tempest yet, but we can cross that bridge when
the time comes) Either way, it will allow tempest to specify in its config which
behavior to expect.
Good point, my current understanding is:
When adding new API parameters to existing APIs, those parameters should be
added as API extensions according to the above guidelines. So we have three
options for handling API extensions in Tempest:

1. Consider them optional, so incompatible changes to them cannot be
blocked. (Current behavior)
2. Consider them required based on tempest.conf, so incompatible
changes can be blocked.
3. Consider them required automatically via microversioning, so
incompatible changes can be blocked.
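Option 2 corresponds to declaring the expected extensions explicitly in tempest.conf, along these lines (a sketch only; the exact section and option names may differ between Tempest releases):

```ini
# Illustrative tempest.conf fragment: explicitly list the compute API
# extensions the cloud is expected to support, so a missing extension
# causes a failure instead of a silent skip.
[compute-feature-enabled]
api_extensions = os-keypairs,os-agents,os-availability-zone
```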
I investigated option 3 above, and I have one question
about the current Tempest implementation.

Currently, the verify_tempest_config tool gets the API extension list from each
service, including Nova, and verifies the API extension configuration of
tempest.conf against that list.
Could we use the list to select which extension tests to run, instead of
just for verification?
As you said in the previous IRC meeting, an API test is currently
skipped if it is decorated with requires_ext() and the corresponding
extension is not specified in tempest.conf. I feel it would be nice
if Tempest fetched the API extension list and selected API tests
automatically based on it.
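For reference, the skip mechanism mentioned above can be sketched roughly like this (a simplified stand-in, not Tempest's actual implementation; the ENABLED_EXTENSIONS dict stands in for values read from tempest.conf):

```python
import functools
import unittest

# Illustrative stand-in for the extensions enabled in tempest.conf;
# not Tempest's real CONF object.
ENABLED_EXTENSIONS = {'compute': ['os-keypairs']}

def requires_ext(extension, service):
    """Skip the decorated test unless the extension is enabled in config."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            enabled = ENABLED_EXTENSIONS.get(service, [])
            if extension not in enabled and 'all' not in enabled:
                raise unittest.SkipTest(
                    '%s extension %s is disabled' % (service, extension))
            return func(*args, **kwargs)
        return wrapper
    return decorator
```

The point of the discussion is what populates that enabled list: an explicit config file (option 2) or an automatically discovered list (option 3).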
So we used to do this type of autodiscovery in tempest, but we stopped because
it let bugs slip through the gate. This topic has come up several times in the
past, most recently in discussing reorganizing the config file. [1] This is why
we put [2] in the tempest README. I agree autodiscovery would be simpler, but
the problem is that because we use tempest as the gate, if a bug caused
autodiscovery to differ from what was expected, the tests would just
silently skip. This would often go unnoticed because of the sheer volume of
tempest tests. (I think we're currently at ~2300.) I also feel that explicitly
defining what is expected to be enabled is a key requirement for branchless
tempest, for the same reason.


The verify_tempest_config tool was an attempt at a compromise between being
explicit and using autodiscovery: it uses the APIs to help create a config
file that reflects the current configuration state of the services. It's
still a WIP though, and it's really just meant to be a user tool. I don't ever
see it being included in our gate workflow.
I think we have to accept that there are two legitimate use cases for tempest configuration:

1. The entity configuring tempest is the same as the entity that deployed it. This is the gate case.
2. Tempest is pointed at an existing cloud but was not part of the deployment process. We want to run the tests for the supported services/extensions.

We should modularize the discovery code so that the discovery functions return the changes that would have to be made to conf. The callers can then decide how that information is to be used, which would support both use cases. I have some changes to the verify_tempest_config code that do this, which I will push up if the concept is agreed.
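The split described above might look something like this (all names here are illustrative, not the actual verify_tempest_config code): discovery only reports a diff against the current config, and each caller decides whether to apply it, print it, or fail on it.

```python
# Sketch of the proposed modularization: discovery functions return the
# config changes that would be needed, and the caller decides what to do
# with them. Function and class names are hypothetical.

def discover_compute_extensions(client):
    """Return the extension aliases the compute service reports."""
    return sorted(ext['alias'] for ext in client.list_extensions())

def diff_extension_config(configured, discovered):
    """Return the changes needed to make the config match discovery."""
    return {
        'add': sorted(set(discovered) - set(configured)),
        'remove': sorted(set(configured) - set(discovered)),
    }

class FakeComputeClient:
    """Stand-in for a Nova API client, for illustration only."""
    def list_extensions(self):
        return [{'alias': 'os-keypairs'}, {'alias': 'os-agents'}]
```

The gate case would treat a non-empty diff as an error, while the existing-cloud case would apply the diff to generate a working tempest.conf.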

 -David

_______________________________________________
OpenStack-dev mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
