Re: [openstack-dev] [qa] Checking for return codes in tempest client calls

2014-05-09 Thread Matthew Treinish
On Thu, May 08, 2014 at 09:50:03AM -0400, David Kranz wrote:
 On 05/07/2014 10:48 AM, Ken'ichi Ohmichi wrote:
 Hi Sean,
 
 2014-05-07 23:28 GMT+09:00 Sean Dague s...@dague.net:
 On 05/07/2014 10:23 AM, Ken'ichi Ohmichi wrote:
 Hi David,
 
 2014-05-07 22:53 GMT+09:00 David Kranz dkr...@redhat.com:
 I just looked at a patch https://review.openstack.org/#/c/90310/3 which was
 given a -1 due to not checking that every call to list_hosts returns 200. I
 realized that we don't have a shared understanding or policy about this. We
 need to make sure that each api is tested to return the right response, but
 many tests need to call multiple apis in support of the one they are
 actually testing. It seems silly to have the caller check the response of
 every api call. Currently there are many, if not the majority of, cases
 where api calls are made without checking the response code. I see a few
 possibilities:
 
 1. Move all response code checking to the tempest clients. They are already
 checking for failure codes and are now doing validation of json response and
 headers as well. Callers would only do an explicit check if there were
 multiple success codes possible.
 
 2. Have a clear policy of when callers should check response codes and apply
 it.
 
 I think the first approach has a lot of advantages. Thoughts?
 Thanks for proposing this, I also prefer the first approach.
 We will be able to remove a lot of status code checks if we go in
 this direction.
 It is also necessary for the bp/nova-api-test-inheritance tasks.
 The current https://review.openstack.org/#/c/92536/ removes status code checks
 because some Nova v2/v3 APIs return different codes and the codes are
 already checked on the client side.
 
 But it is necessary to create a lot of patches to cover all API tests.
 So for now, I feel it is OK to skip status code checks in API tests
 only if client-side checks are already implemented.
 After implementing all client validations, we can remove them from the
 API tests.
 Do we still have instances where we want to make a call that we know
 will fail and not throw the exception?
 
 I agree there is a certain clarity in putting this down in the rest
 client. I just haven't figured out if it's going to break some behavior
 that we currently expect.
 If a server returns an unexpected status code, Tempest fails with a
 client validation like the following sample:
 
 Traceback (most recent call last):
File /opt/stack/tempest/tempest/api/compute/servers/test_servers.py,
 line 36, in test_create_server_with_admin_password
  resp, server = self.create_test_server(adminPass='testpassword')
File /opt/stack/tempest/tempest/api/compute/base.py, line 211, in
 create_test_server
  name, image_id, flavor, **kwargs)
File /opt/stack/tempest/tempest/services/compute/json/servers_client.py,
 line 95, in create_server
  self.validate_response(schema.create_server, resp, body)
File /opt/stack/tempest/tempest/common/rest_client.py, line 596,
 in validate_response
  raise exceptions.InvalidHttpSuccessCode(msg)
 InvalidHttpSuccessCode: The success code is different than the expected one
 Details: The status code(202) is different than the expected one([200])
 
 
 Thanks
 Ken'ichi Ohmichi
 
 Note that there are currently two different methods on RestClient
 that do this sort of thing. Your stacktrace shows
 validate_response which expects to be passed a schema. The other
 is expected_success which takes the expected response code and is
 only used by the image clients.
 Both of these will need to stay around since not all APIs have
 defined schemas but the expected_success method should probably be
 changed to accept a list of valid success responses rather than just
 one as it does at present.

So expected_success() is just a better way of doing something like:

self.assertEqual(resp.status, 200)

There isn't anything specific about the image clients with it.
validate_response() should just call expected_success(), which I pushed out
here:
https://review.openstack.org/93035
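
To make that concrete, here is a rough sketch of how the two helpers could
share one code path. The method names follow tempest's RestClient, but the
bodies are simplified assumptions for illustration, not the change actually
under review:

# Illustrative sketch only; loosely modeled on tempest's RestClient
# (tempest/common/rest_client.py), not the patch in review 93035.
import jsonschema


class InvalidHttpSuccessCode(Exception):
    pass


class RestClient(object):

    def expected_success(self, expected_code, read_code):
        # Generalized as suggested in this thread: accept either a
        # single code or a list of valid success codes.
        if isinstance(expected_code, int):
            expected_code = [expected_code]
        if read_code not in expected_code:
            msg = ("The status code(%s) is different than the expected "
                   "one(%s)" % (read_code, expected_code))
            raise InvalidHttpSuccessCode(msg)

    def validate_response(self, schema, resp, body):
        # Delegate the status check to expected_success(), then
        # validate the body against the JSON schema if one is given.
        if schema:
            self.expected_success(schema['status_code'],
                                  int(resp['status']))
            body_schema = schema.get('response_body')
            if body_schema:
                jsonschema.validate(body, body_schema)

A client with a defined schema keeps calling
self.validate_response(schema.create_server, resp, body) as in the traceback
above; one without a schema could call
self.expected_success([200, 204], resp.status) directly.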


 
 I hope we can get agreement to move response checking to the client.
 There was no opposition when we started doing this in nova to check
 schema. Does anyone see a reason not to do this? It would both
 simplify the code and make sure responses are checked in all cases.

 Sean, do you have a concrete example of what you are concerned about
 here? Moving the check from the value returned by a client call to
 inside the client code should not have any visible effect unless the
 value was actually wrong but not checked by the caller. If a test
 started failing because of that, it would just mean we had found a bug.
 

Please draft a spec/bp for doing this, we can sort out the implementation
details in the spec review. There is definitely some overlap with the 


Re: [openstack-dev] [qa] Checking for return codes in tempest client calls

2014-05-09 Thread David Kranz

On 05/09/2014 11:29 AM, Matthew Treinish wrote:

On Thu, May 08, 2014 at 09:50:03AM -0400, David Kranz wrote:

On 05/07/2014 10:48 AM, Ken'ichi Ohmichi wrote:

Hi Sean,

2014-05-07 23:28 GMT+09:00 Sean Dague s...@dague.net:

On 05/07/2014 10:23 AM, Ken'ichi Ohmichi wrote:

Hi David,

2014-05-07 22:53 GMT+09:00 David Kranz dkr...@redhat.com:

I just looked at a patch https://review.openstack.org/#/c/90310/3 which was
given a -1 due to not checking that every call to list_hosts returns 200. I
realized that we don't have a shared understanding or policy about this. We
need to make sure that each api is tested to return the right response, but
many tests need to call multiple apis in support of the one they are
actually testing. It seems silly to have the caller check the response of
every api call. Currently there are many, if not the majority of, cases
where api calls are made without checking the response code. I see a few
possibilities:

1. Move all response code checking to the tempest clients. They are already
checking for failure codes and are now doing validation of json response and
headers as well. Callers would only do an explicit check if there were
multiple success codes possible.

2. Have a clear policy of when callers should check response codes and apply
it.

I think the first approach has a lot of advantages. Thoughts?

Thanks for proposing this, I also prefer the first approach.
We will be able to remove a lot of status code checks if we go in
this direction.
It is also necessary for the bp/nova-api-test-inheritance tasks.
The current https://review.openstack.org/#/c/92536/ removes status code checks
because some Nova v2/v3 APIs return different codes and the codes are
already checked on the client side.

But it is necessary to create a lot of patches to cover all API tests.
So for now, I feel it is OK to skip status code checks in API tests
only if client-side checks are already implemented.
After implementing all client validations, we can remove them from the
API tests.

Do we still have instances where we want to make a call that we know
will fail and not throw the exception?

I agree there is a certain clarity in putting this down in the rest
client. I just haven't figured out if it's going to break some behavior
that we currently expect.

If a server returns an unexpected status code, Tempest fails with a
client validation like the following sample:

Traceback (most recent call last):
   File /opt/stack/tempest/tempest/api/compute/servers/test_servers.py,
line 36, in test_create_server_with_admin_password
 resp, server = self.create_test_server(adminPass='testpassword')
   File /opt/stack/tempest/tempest/api/compute/base.py, line 211, in
create_test_server
 name, image_id, flavor, **kwargs)
   File /opt/stack/tempest/tempest/services/compute/json/servers_client.py,
line 95, in create_server
 self.validate_response(schema.create_server, resp, body)
   File /opt/stack/tempest/tempest/common/rest_client.py, line 596,
in validate_response
 raise exceptions.InvalidHttpSuccessCode(msg)
InvalidHttpSuccessCode: The success code is different than the expected one
Details: The status code(202) is different than the expected one([200])


Thanks
Ken'ichi Ohmichi


Note that there are currently two different methods on RestClient
that do this sort of thing. Your stacktrace shows
validate_response which expects to be passed a schema. The other
is expected_success which takes the expected response code and is
only used by the image clients.
Both of these will need to stay around since not all APIs have
defined schemas but the expected_success method should probably be
changed to accept a list of valid success responses rather than just
one as it does at present.

So expected_success() is just a better way of doing something like:

self.assertEqual(resp.status, 200)

There isn't anything specific about the image clients with it.
validate_response() should just call expected_success(), which I pushed out
here:
https://review.openstack.org/93035
Right, I was just observing that it was only used by the image clients 
at present.




I hope we can get agreement to move response checking to the client.
There was no opposition when we started doing this in nova to check
schema. Does anyone see a reason not to do this? It would both
simplify the code and make sure responses are checked in all cases.

Sean, do you have a concrete example of what you are concerned about
here? Moving the check from the value returned by a client call to
inside the client code should not have any visible effect unless the
value was actually wrong but not checked by the caller. If a test
started failing because of that, it would just mean we had found a bug.


Please draft a spec/bp for doing this, we can sort out the implementation
details in the spec review. There is 

Re: [openstack-dev] [qa] Checking for return codes in tempest client calls

2014-05-09 Thread GHANSHYAM MANN
Hi Matthew,



 -Original Message-
 From: Matthew Treinish [mailto:mtrein...@kortar.org]
 Sent: Saturday, May 10, 2014 12:29 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [qa] Checking for return codes in tempest
 client calls

 On Thu, May 08, 2014 at 09:50:03AM -0400, David Kranz wrote:
  On 05/07/2014 10:48 AM, Ken'ichi Ohmichi wrote:
  Hi Sean,
  
  2014-05-07 23:28 GMT+09:00 Sean Dague s...@dague.net:
  On 05/07/2014 10:23 AM, Ken'ichi Ohmichi wrote:
  Hi David,
  
  2014-05-07 22:53 GMT+09:00 David Kranz dkr...@redhat.com:
  I just looked at a patch https://review.openstack.org/#/c/90310/3
  which was given a -1 due to not checking that every call to
  list_hosts returns 200. I realized that we don't have a shared
  understanding or policy about this. We need to make sure that each
  api is tested to return the right response, but many tests need to
  call multiple apis in support of the one they are actually
  testing. It seems silly to have the caller check the response of
  every api call. Currently there are many, if not the majority of,
  cases where api calls are made without checking the response code.
  I see a few possibilities:
  
  1. Move all response code checking to the tempest clients. They
  are already checking for failure codes and are now doing
  validation of json response and headers as well. Callers would
  only do an explicit check if there were multiple success codes
  possible.
  
  2. Have a clear policy of when callers should check response codes
  and apply it.
  
  I think the first approach has a lot of advantages. Thoughts?
  Thanks for proposing this, I also prefer the first approach.
  We will be able to remove a lot of status code checks if we go in
  this direction.
  It is also necessary for the bp/nova-api-test-inheritance tasks.
  The current https://review.openstack.org/#/c/92536/ removes status
  code checks because some Nova v2/v3 APIs return different codes and
  the codes are already checked on the client side.
  
  But it is necessary to create a lot of patches to cover all API
  tests.
  So for now, I feel it is OK to skip status code checks in API tests
  only if client-side checks are already implemented.
  After implementing all client validations, we can remove them from
  the API tests.
  Do we still have instances where we want to make a call that we know
  will fail and not throw the exception?
  
  I agree there is a certain clarity in putting this down in the rest
  client. I just haven't figured out if it's going to break some
  behavior that we currently expect.
  If a server returns an unexpected status code, Tempest fails with a
  client validation like the following sample:
  
  Traceback (most recent call last):
     File /opt/stack/tempest/tempest/api/compute/servers/test_servers.py,
  line 36, in test_create_server_with_admin_password
   resp, server = self.create_test_server(adminPass='testpassword')
     File /opt/stack/tempest/tempest/api/compute/base.py, line 211,
  in create_test_server
   name, image_id, flavor, **kwargs)
     File /opt/stack/tempest/tempest/services/compute/json/servers_client.py,
  line 95, in create_server
   self.validate_response(schema.create_server, resp, body)
     File /opt/stack/tempest/tempest/common/rest_client.py, line 596,
  in validate_response
   raise exceptions.InvalidHttpSuccessCode(msg)
  InvalidHttpSuccessCode: The success code is different than the
  expected one
  Details: The status code(202) is different than the expected
  one([200])
  
  Thanks
  Ken'ichi Ohmichi
  
  Note that there are currently two different methods on RestClient that
  do this sort of thing. Your stacktrace shows validate_response which
  expects to be passed a schema. The other is expected_success which
  takes the expected response code and is only used by the image
  clients.
  Both of these will need to stay around since not all APIs have defined
  schemas but the expected_success method should probably be changed to
  accept a list of valid success responses rather than just one as it
  does at present.

 So expected_success() is just a better way of doing something like:

 self.assertEqual(resp.status, 200)

 There isn't anything specific about the image clients with it.
 validate_response() should just call expected_success(), which I pushed
 out here:
 https://review.openstack.org/93035



There can be a possibility of multiple success return codes (the Nova server
external events API returns 200 & 207 as success codes). Currently there is no
such API schema, but we need to consider this case. In validate_response()
it was handled, and we should expand the expected_success

Re: [openstack-dev] [qa] Checking for return codes in tempest client calls

2014-05-08 Thread David Kranz

On 05/07/2014 10:48 AM, Ken'ichi Ohmichi wrote:

Hi Sean,

2014-05-07 23:28 GMT+09:00 Sean Dague s...@dague.net:

On 05/07/2014 10:23 AM, Ken'ichi Ohmichi wrote:

Hi David,

2014-05-07 22:53 GMT+09:00 David Kranz dkr...@redhat.com:

I just looked at a patch https://review.openstack.org/#/c/90310/3 which was
given a -1 due to not checking that every call to list_hosts returns 200. I
realized that we don't have a shared understanding or policy about this. We
need to make sure that each api is tested to return the right response, but
many tests need to call multiple apis in support of the one they are
actually testing. It seems silly to have the caller check the response of
every api call. Currently there are many, if not the majority of, cases
where api calls are made without checking the response code. I see a few
possibilities:

1. Move all response code checking to the tempest clients. They are already
checking for failure codes and are now doing validation of json response and
headers as well. Callers would only do an explicit check if there were
multiple success codes possible.

2. Have a clear policy of when callers should check response codes and apply
it.

I think the first approach has a lot of advantages. Thoughts?

Thanks for proposing this, I also prefer the first approach.
We will be able to remove a lot of status code checks if we go in
this direction.
It is also necessary for the bp/nova-api-test-inheritance tasks.
The current https://review.openstack.org/#/c/92536/ removes status code checks
because some Nova v2/v3 APIs return different codes and the codes are
already checked on the client side.

But it is necessary to create a lot of patches to cover all API tests.
So for now, I feel it is OK to skip status code checks in API tests
only if client-side checks are already implemented.
After implementing all client validations, we can remove them from the
API tests.

Do we still have instances where we want to make a call that we know
will fail and not throw the exception?

I agree there is a certain clarity in putting this down in the rest
client. I just haven't figured out if it's going to break some behavior
that we currently expect.

If a server returns an unexpected status code, Tempest fails with a
client validation like the following sample:

Traceback (most recent call last):
   File /opt/stack/tempest/tempest/api/compute/servers/test_servers.py,
line 36, in test_create_server_with_admin_password
 resp, server = self.create_test_server(adminPass='testpassword')
   File /opt/stack/tempest/tempest/api/compute/base.py, line 211, in
create_test_server
 name, image_id, flavor, **kwargs)
   File /opt/stack/tempest/tempest/services/compute/json/servers_client.py,
line 95, in create_server
 self.validate_response(schema.create_server, resp, body)
   File /opt/stack/tempest/tempest/common/rest_client.py, line 596,
in validate_response
 raise exceptions.InvalidHttpSuccessCode(msg)
InvalidHttpSuccessCode: The success code is different than the expected one
Details: The status code(202) is different than the expected one([200])


Thanks
Ken'ichi Ohmichi

Note that there are currently two different methods on RestClient that 
do this sort of thing. Your stacktrace shows validate_response which 
expects to be passed a schema. The other is expected_success which 
takes the expected response code and is only used by the image clients.
Both of these will need to stay around since not all APIs have defined 
schemas but the expected_success method should probably be changed to 
accept a list of valid success responses rather than just one as it does 
at present.
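
As a hedged, self-contained illustration of the two helpers' contracts
(simplified assumptions, not tempest's real implementations):

def expected_success(expected_code, read_code):
    # Today: exactly one expected int; the suggestion above is to
    # accept a list of valid success codes as well.
    assert read_code == expected_code, (
        'status %s != expected %s' % (read_code, expected_code))


def validate_response(schema, status, body):
    # Schema-driven: a status check plus structural checks on the body.
    assert status in schema['status_code']
    for key in schema.get('required', []):
        assert key in body, 'missing %r in response body' % key


# The image clients use the code-only style:
expected_success(200, 200)

# Clients with a defined schema use the schema style:
validate_response({'status_code': [200, 203], 'required': ['versions']},
                  200, {'versions': []})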


I hope we can get agreement to move response checking to the client. 
There was no opposition when we started doing this in nova to check 
schema. Does anyone see a reason not to do this? It would both simplify 
the code and make sure responses are checked in all cases.


Sean, do you have a concrete example of what you are concerned about 
here? Moving the check from the value returned by a client call to 
inside the client code should not have any visible effect unless the
value was actually wrong but not checked by the caller. If a test started
failing because of that, it would just mean we had found a bug.


 -David



[openstack-dev] [qa] Checking for return codes in tempest client calls

2014-05-07 Thread David Kranz
I just looked at a patch https://review.openstack.org/#/c/90310/3 which 
was given a -1 due to not checking that every call to list_hosts returns 
200. I realized that we don't have a shared understanding or policy 
about this. We need to make sure that each api is tested to return the 
right response, but many tests need to call multiple apis in support of 
the one they are actually testing. It seems silly to have the caller 
check the response of every api call. Currently there are many, if not 
the majority of, cases where api calls are made without checking the 
response code. I see a few possibilities:


1. Move all response code checking to the tempest clients. They are 
already checking for failure codes and are now doing validation of json 
response and headers as well. Callers would only do an explicit check if 
there were multiple success codes possible.


2. Have a clear policy of when callers should check response codes and 
apply it.


I think the first approach has a lot of advantages. Thoughts?
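
To make the trade-off concrete, here is a sketch of what option 1 means at a
call site; list_hosts is the call from the review above, but the test class
and hosts_client attribute are illustrative assumptions:

import testtools


class HostsTestSketch(testtools.TestCase):

    def test_list_hosts_as_done_today(self):
        # Option 2 territory: the caller asserts the code itself.
        resp, hosts = self.hosts_client.list_hosts()
        self.assertEqual(200, resp.status)

    def test_list_hosts_under_option_1(self):
        # Option 1: the client has already validated the status code
        # (raising on a mismatch), so the test only asserts on the
        # payload it actually cares about.
        _, hosts = self.hosts_client.list_hosts()
        self.assertNotEqual(0, len(hosts))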

 -David





Re: [openstack-dev] [qa] Checking for return codes in tempest client calls

2014-05-07 Thread Duncan Thomas
On 7 May 2014 14:53, David Kranz dkr...@redhat.com wrote:
 I just looked at a patch https://review.openstack.org/#/c/90310/3 which was
 given a -1 due to not checking that every call to list_hosts returns 200. I
 realized that we don't have a shared understanding or policy about this.

snip

 Thoughts?

While I don't know the tempest code well enough to opine on where the
check should be, every call should definitely be checked and failures
reported. I've had a few cases where I've debugged failures (some
tempest, some other tests) where somebody says 'my volume attach isn't
working' and the reason turned out to be that their instance never
came up properly, or that a snapshot delete failed because the create
failed but wasn't logged. Anything that causes the test to
automatically report the narrowest definition of the fault is
definitely a good thing.


-- 
Duncan Thomas



Re: [openstack-dev] [qa] Checking for return codes in tempest client calls

2014-05-07 Thread Ken'ichi Ohmichi
Hi David,

2014-05-07 22:53 GMT+09:00 David Kranz dkr...@redhat.com:
 I just looked at a patch https://review.openstack.org/#/c/90310/3 which was
 given a -1 due to not checking that every call to list_hosts returns 200. I
 realized that we don't have a shared understanding or policy about this. We
 need to make sure that each api is tested to return the right response, but
 many tests need to call multiple apis in support of the one they are
 actually testing. It seems silly to have the caller check the response of
 every api call. Currently there are many, if not the majority of, cases
 where api calls are made without checking the response code. I see a few
 possibilities:

 1. Move all response code checking to the tempest clients. They are already
 checking for failure codes and are now doing validation of json response and
 headers as well. Callers would only do an explicit check if there were
 multiple success codes possible.

 2. Have a clear policy of when callers should check response codes and apply
 it.

 I think the first approach has a lot of advantages. Thoughts?

Thanks for proposing this, I also prefer the first approach.
We will be able to remove a lot of status code checks if we go in
this direction.
It is also necessary for the bp/nova-api-test-inheritance tasks.
The current https://review.openstack.org/#/c/92536/ removes status code checks
because some Nova v2/v3 APIs return different codes and the codes are
already checked on the client side.

But it is necessary to create a lot of patches to cover all API tests.
So for now, I feel it is OK to skip status code checks in API tests
only if client-side checks are already implemented.
After implementing all client validations, we can remove them from the
API tests.

Thanks
Kenichi Ohmichi



Re: [openstack-dev] [qa] Checking for return codes in tempest client calls

2014-05-07 Thread Sean Dague
On 05/07/2014 10:23 AM, Ken'ichi Ohmichi wrote:
 Hi David,
 
 2014-05-07 22:53 GMT+09:00 David Kranz dkr...@redhat.com:
 I just looked at a patch https://review.openstack.org/#/c/90310/3 which was
 given a -1 due to not checking that every call to list_hosts returns 200. I
 realized that we don't have a shared understanding or policy about this. We
 need to make sure that each api is tested to return the right response, but
 many tests need to call multiple apis in support of the one they are
 actually testing. It seems silly to have the caller check the response of
 every api call. Currently there are many, if not the majority of, cases
 where api calls are made without checking the response code. I see a few
 possibilities:

 1. Move all response code checking to the tempest clients. They are already
 checking for failure codes and are now doing validation of json response and
 headers as well. Callers would only do an explicit check if there were
 multiple success codes possible.

 2. Have a clear policy of when callers should check response codes and apply
 it.

 I think the first approach has a lot of advantages. Thoughts?
 
 Thanks for proposing this, I also prefer the first approach.
 We will be able to remove a lot of status code checks if we go in
 this direction.
 It is also necessary for the bp/nova-api-test-inheritance tasks.
 The current https://review.openstack.org/#/c/92536/ removes status code checks
 because some Nova v2/v3 APIs return different codes and the codes are
 already checked on the client side.
 
 But it is necessary to create a lot of patches to cover all API tests.
 So for now, I feel it is OK to skip status code checks in API tests
 only if client-side checks are already implemented.
 After implementing all client validations, we can remove them from the
 API tests.

Do we still have instances where we want to make a call that we know
will fail and not throw the exception?

I agree there is a certain clarity in putting this down in the rest
client. I just haven't figured out if it's going to break some behavior
that we currently expect.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [qa] Checking for return codes in tempest client calls

2014-05-07 Thread David Kranz

On 05/07/2014 10:07 AM, Duncan Thomas wrote:

On 7 May 2014 14:53, David Kranz dkr...@redhat.com wrote:

I just looked at a patch https://review.openstack.org/#/c/90310/3 which was
given a -1 due to not checking that every call to list_hosts returns 200. I
realized that we don't have a shared understanding or policy about this.

snip


Thoughts?

While I don't know the tempest code well enough to opine on where the
check should be, every call should definitely be checked and failures
reported. I've had a few cases where I've debugged failures (some
tempest, some other tests) where somebody says 'my volume attach isn't
working' and the reason turned out to be that their instance never
came up properly, or that a snapshot delete failed because the create
failed but wasn't logged. Anything that causes the test to
automatically report the narrowest definition of the fault is
definitely a good thing.


Yes. To be clear, all calls raise an exception on failure. What we don't
check on every call is whether an api that is supposed to return 200 might
have returned 201, etc.
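
A toy model of that distinction, with assumed, simplified behavior:

class ClientError(Exception):
    pass


def client_call(status):
    # Stand-in for any tempest client method: failure codes always
    # raise, so callers never miss them...
    if status >= 400:
        raise ClientError('request failed with %s' % status)
    # ...but any success-ish code comes back silently.
    return status


status = client_call(201)
# An API documented to return 200 that actually sends 201 is only
# caught by an explicit check like this one (or, under the proposal,
# by the same check moved inside the client):
if status != 200:
    print('wrong success code: expected 200, got %s' % status)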


 -David




Re: [openstack-dev] [qa] Checking for return codes in tempest client calls

2014-05-07 Thread Ken'ichi Ohmichi
Hi Sean,

2014-05-07 23:28 GMT+09:00 Sean Dague s...@dague.net:
 On 05/07/2014 10:23 AM, Ken'ichi Ohmichi wrote:
 Hi David,

 2014-05-07 22:53 GMT+09:00 David Kranz dkr...@redhat.com:
 I just looked at a patch https://review.openstack.org/#/c/90310/3 which was
 given a -1 due to not checking that every call to list_hosts returns 200. I
 realized that we don't have a shared understanding or policy about this. We
 need to make sure that each api is tested to return the right response, but
 many tests need to call multiple apis in support of the one they are
 actually testing. It seems silly to have the caller check the response of
 every api call. Currently there are many, if not the majority of, cases
 where api calls are made without checking the response code. I see a few
 possibilities:

 1. Move all response code checking to the tempest clients. They are already
 checking for failure codes and are now doing validation of json response and
 headers as well. Callers would only do an explicit check if there were
 multiple success codes possible.

 2. Have a clear policy of when callers should check response codes and apply
 it.

 I think the first approach has a lot of advantages. Thoughts?

 Thanks for proposing this, I also prefer the first approach.
 We will be able to remove a lot of status code checks if we go in
 this direction.
 It is also necessary for the bp/nova-api-test-inheritance tasks.
 The current https://review.openstack.org/#/c/92536/ removes status code checks
 because some Nova v2/v3 APIs return different codes and the codes are
 already checked on the client side.
 
 But it is necessary to create a lot of patches to cover all API tests.
 So for now, I feel it is OK to skip status code checks in API tests
 only if client-side checks are already implemented.
 After implementing all client validations, we can remove them from the
 API tests.

 Do we still have instances where we want to make a call that we know
 will fail and not throw the exception?

 I agree there is a certain clarity in putting this down in the rest
 client. I just haven't figured out if it's going to break some behavior
 that we currently expect.

If a server returns an unexpected status code, Tempest fails with a
client validation like the following sample:

Traceback (most recent call last):
  File /opt/stack/tempest/tempest/api/compute/servers/test_servers.py,
line 36, in test_create_server_with_admin_password
resp, server = self.create_test_server(adminPass='testpassword')
  File /opt/stack/tempest/tempest/api/compute/base.py, line 211, in
create_test_server
name, image_id, flavor, **kwargs)
  File /opt/stack/tempest/tempest/services/compute/json/servers_client.py,
line 95, in create_server
self.validate_response(schema.create_server, resp, body)
  File /opt/stack/tempest/tempest/common/rest_client.py, line 596,
in validate_response
raise exceptions.InvalidHttpSuccessCode(msg)
InvalidHttpSuccessCode: The success code is different than the expected one
Details: The status code(202) is different than the expected one([200])


Thanks
Ken'ichi Ohmichi



Re: [openstack-dev] [qa] Checking for return codes in tempest client calls

2014-05-07 Thread Ghanshyam Mann
Hi All,

2014-05-07 22:53 GMT+09:00 David Kranz dkr...@redhat.com:
 I just looked at a patch https://review.openstack.org/#/c/90310/3 
 which was given a -1 due to not checking that every call to list_hosts 
 returns 200. I realized that we don't have a shared understanding or 
 policy about this. We need to make sure that each api is tested to 
 return the right response, but many tests need to call multiple apis 
 in support of the one they are actually testing. It seems silly to 
 have the caller check the response of every api call. Currently there 
 are many, if not the majority of, cases where api calls are made 
 without checking the response code. I see a few
 possibilities:

 1. Move all response code checking to the tempest clients. They are 
 already checking for failure codes and are now doing validation of 
 json response and headers as well. Callers would only do an explicit 
 check if there were multiple success codes possible.

 2. Have a clear policy of when callers should check response codes and 
 apply it.

 I think the first approach has a lot of advantages. Thoughts?

Thanks for proposing this, I also prefer the first approach.
We will be able to remove a lot of status code checks if we go in this
direction.
It is also necessary for the bp/nova-api-test-inheritance tasks.
The current https://review.openstack.org/#/c/92536/ removes status code checks
because some Nova v2/v3 APIs return different codes and the codes are already
checked on the client side.

But it is necessary to create a lot of patches to cover all API tests.
So for now, I feel it is OK to skip status code checks in API tests only if
client-side checks are already implemented.
After implementing all client validations, we can remove them from the API tests.

I also like the first approach, which avoids any missed return-code checking.
We can even check for multiple return codes on the client side in the JSON
schema. Currently we do not have schema validation for any API that returns
multiple success codes.
For example, the server external events API returns multiple success codes
(200, 207); https://review.openstack.org/#/c/90655/ implements the test for
that, and the schema for this API's response validation should check for
multiple return codes.
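
For instance, a response schema along these lines could express both codes.
The 'status_code' list follows the convention tempest's compute schemas
already use; the body portion here is only an assumption, not the real
external events schema:

create_server_external_events = {
    'status_code': [200, 207],
    'response_body': {
        'type': 'object',
        'properties': {
            'events': {'type': 'array'},
        },
        'required': ['events'],
    },
}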

Thanks
Ghanshyam Mann