On 05/09/2014 11:29 AM, Matthew Treinish wrote:
On Thu, May 08, 2014 at 09:50:03AM -0400, David Kranz wrote:
On 05/07/2014 10:48 AM, Ken'ichi Ohmichi wrote:
Hi Sean,

2014-05-07 23:28 GMT+09:00 Sean Dague <s...@dague.net>:
On 05/07/2014 10:23 AM, Ken'ichi Ohmichi wrote:
Hi David,

2014-05-07 22:53 GMT+09:00 David Kranz <dkr...@redhat.com>:
I just looked at a patch, https://review.openstack.org/#/c/90310/3, which was
given a -1 for not checking that every call to list_hosts returns 200. I
realized that we don't have a shared understanding or policy about this. We
need to make sure that each API is tested to return the right response, but
many tests need to call multiple APIs in support of the one they are
actually testing. It seems silly to have the caller check the response of
every API call. Currently there are many cases, if not the majority, where
API calls are made without checking the response code. I see a few
possibilities:

1. Move all response code checking to the Tempest clients. They already
check for failure codes and now validate JSON response bodies and
headers as well. Callers would only do an explicit check if there were
multiple success codes possible.

2. Have a clear policy of when callers should check response codes and apply
it.

I think the first approach has a lot of advantages. Thoughts?
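
To make option 1 concrete, here is a minimal sketch, with made-up names (not
the actual Tempest client code), of a client method that owns its own
success-code check so callers would not have to repeat it:

    # Illustrative only; "http" is assumed to be a helper exposing
    # request(method, url) that returns (resp, body).
    import json


    class HostsClient(object):
        """Hypothetical, simplified service client."""

        def __init__(self, http):
            self.http = http

        def list_hosts(self):
            resp, body = self.http.request('GET', '/os-hosts')
            # The client owns the success-code check, so every caller of
            # list_hosts() gets it without an explicit assert.
            if resp.status != 200:
                raise AssertionError('list_hosts expected 200, got %s'
                                     % resp.status)
            return resp, json.loads(body)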
Thanks for proposing this, I also prefer the first approach.
We will be able to remove a lot of status code checks if we go in
this direction.
It is necessary for bp/nova-api-test-inheritance tasks also.
The current patch https://review.openstack.org/#/c/92536/ removes status code checks
because some Nova v2/v3 APIs return different codes and the codes are already
checked on the client side.

but it will be necessary to create a lot of patches to cover all the API tests.
So for now, I feel it is OK to skip status code checks in API tests
only if client-side checks are already implemented.
After implementing all the client validations, we can remove those checks
from the API tests.
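
For illustration only (the client name and assertions below are placeholders,
not a specific test from those patches), the effect on an individual API test
would look roughly like this:

    # Before: each test repeats the status check after the call.
    resp, hosts = self.hosts_client.list_hosts()
    self.assertEqual(200, resp.status)

    # After: the client validates the status code itself, so the test
    # only asserts what it is actually testing.
    resp, hosts = self.hosts_client.list_hosts()
    self.assertNotEqual(0, len(hosts))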
Do we still have instances where we want to make a call that we know
will fail and not throw the exception?

I agree there is a certain clarity in putting this down in the rest
client. I just haven't figured out if it's going to break some behavior
that we currently expect.
If a server returns an unexpected status code, Tempest fails in the client
validation, like the following sample:

Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/api/compute/servers/test_servers.py", line 36, in test_create_server_with_admin_password
    resp, server = self.create_test_server(adminPass='testpassword')
  File "/opt/stack/tempest/tempest/api/compute/base.py", line 211, in create_test_server
    name, image_id, flavor, **kwargs)
  File "/opt/stack/tempest/tempest/services/compute/json/servers_client.py", line 95, in create_server
    self.validate_response(schema.create_server, resp, body)
  File "/opt/stack/tempest/tempest/common/rest_client.py", line 596, in validate_response
    raise exceptions.InvalidHttpSuccessCode(msg)
InvalidHttpSuccessCode: The success code is different than the expected one
Details: The status code(202) is different than the expected one([200])


Thanks
Ken'ichi Ohmichi

Note that there are currently two different methods on RestClient
that do this sort of thing. Your stack trace shows
"validate_response", which expects to be passed a schema. The other
is "expected_success", which takes the expected response code and is
only used by the image clients.
Both of these will need to stay around since not all APIs have
defined schemas, but the expected_success method should probably be
changed to accept a list of valid success codes rather than just
one, as it does at present.
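
As a rough sketch of that change (illustrative only, not the real rest_client
code; the exception and constant names here are assumptions), an
expected_success() that takes a list could look like:

    HTTP_SUCCESS = (200, 201, 202, 203, 204, 205, 206)


    class InvalidHttpSuccessCode(Exception):
        pass


    def expected_success(expected_codes, read_code):
        """Accept either a single success code or a list of them."""
        if not isinstance(expected_codes, (list, tuple)):
            expected_codes = [expected_codes]
        # Only genuine 2xx codes make sense as expectations.
        assert all(c in HTTP_SUCCESS for c in expected_codes), (
            '%s contains a non-success code' % (expected_codes,))
        if read_code not in expected_codes:
            raise InvalidHttpSuccessCode(
                'The status code (%s) is different than the expected one '
                '(%s)' % (read_code, expected_codes))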
So expected_success() is just a better way of doing something like:

self.assertEqual(200, resp.status)

There isn't anything specific to the image clients about it.
validate_response() should just call expected_success(), which I pushed out
here:
https://review.openstack.org/93035
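
In other words, something along these lines (again only a sketch, assuming the
schema carries a 'status_code' list the way the Nova compute schemas in
Tempest do, and reusing the expected_success() sketch above):

    def validate_response(schema, resp, body):
        # Delegate the status-code part of the check to expected_success()
        # before moving on to validating the response body.
        if 'status_code' in schema:
            expected_success(schema['status_code'], resp.status)
        # ... jsonschema validation of the response body would follow here ...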
Right, I was just observing that it was only used by the image clients at present.


I hope we can get agreement to move response checking to the client.
There was no opposition when we started doing this in Nova to check
schemas. Does anyone see a reason not to do this? It would both
simplify the code and make sure responses are checked in all cases.

Sean, do you have a concrete example of what you are concerned about
here? Moving the check from the value returned by a client call to
inside the client code should not have any visible effect unless the
value was actually wrong but not checked by the caller. And if a test
started failing because of that, it would just mean a bug had been found.
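
(As for calls that are expected to fail: those tests generally wrap the call
in assertRaises against the client exception, so a client-side success check
would not affect them. A sketch with placeholder client and method names:

    self.assertRaises(exceptions.NotFound,
                      self.hosts_client.show_host, 'nonexistent-host')

)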

Please draft a spec/bp for doing this; we can sort out the implementation
details in the spec review. There is definitely some overlap with the jsonschema
work, though, so we need to think about how best to integrate the two efforts. For
example, for projects that don't use jsonschema yet, does it make sense to start
using jsonschema files, like we do for the Nova tests, to verify the status codes?
Just so we can have a common path for doing this. I think there may be value in
doing it that way. We can discuss it more during the jsonschema summit session.
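
For reference, the Nova-style schema files in Tempest pair the expected status
codes with a jsonschema body definition, roughly along these lines (heavily
simplified; the real create_server schema defines many more properties):

    create_server = {
        'status_code': [202],
        'response_body': {
            'type': 'object',
            'properties': {
                'server': {
                    'type': 'object',
                    'properties': {
                        'id': {'type': 'string'},
                        'adminPass': {'type': 'string'},
                    },
                    'required': ['id'],
                },
            },
            'required': ['server'],
        },
    }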


-Matt Treinish
I pushed a spec, https://review.openstack.org/#/c/93037/, but it does not propose routing all calls through the schema checker. I think that if there is a schema we should use it, but I am not sure we should demand schemas for all API calls in a client in order to make progress on checking. But we can discuss this next week.

 -David



