Re: [openstack-dev] [nova] determining or clarifying a path for gabbi+nova

2016-05-25 Thread Chris Dent

On Wed, 25 May 2016, Sean Dague wrote:


I still would rather not put gabbi into the compute API testing this
cycle. Instead learn from the placement side, let people see good
patterns there, and not confuse contributors with multiple ways to test
things in the compute API. Because that requires a lot of digging out
from later (example: mox & mock).


To be clear, I wasn't saying "let's do this immediately" or even
"let's do this this cycle". What I'm trying to do is two things. One
is to lay, slowly, some groundwork on which we can build up an
understanding of two things:

* what gabbi can do
* which of those things might be useful for nova

That's a conversation that can carry on pretty slowly and doesn't
have to take away from anything else. But as I've been noticing a
lot lately, if we try to go into changes without having some
agreement on the words we're using, we're not going to get anywhere,
so you know, let's have a chilled chat about this stuff and see
where it takes us. That's an important part of the process and the
medium of email is a reasonable place for that process (inclusive,
asynchronous, addressable).

The other is to give people who do have the wherewithal to improve
their stuff with gabbi (be that stuff nova or something else) some greater
visibility into gabbi's existence and prowess. Knowing is half the
battle, etc.


And we still have this whole api-ref site which is only 50% verified
(and we still need to address a number of microversion issues) -
http://burndown.dague.org/. We said at the beginning of the cycle
api-ref and policy in code were our 2 API priorities. Until those are
well in the bag I don't want to take the energy and care to make sure we
do a pivot on test strategy to something completely new in a way that is
easy for everyone to contribute to and review.


a) I promise to be a good boy and get involved there. I keep meaning
   to, and a variety of other things keep coming up (including simply
   the need to be in a different zone to clear out the crazy), and I
   feel lame about it.

b) I'd like to disabuse you of this notion that there is a pivot
   involved here or being suggested here. I prefer to think of it as
   an augmentation.

   However, even if it is a pivot: So what? Sometimes we need to
   make changes. Sometimes because it is necessary and we need new
   functionality. Sometimes simply because changing things up a bit
   provides a _much_ needed shift in perspective.

--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent tw: @anticdent
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] determining or clarifying a path for gabbi+nova

2016-05-25 Thread Sean Dague
On 05/25/2016 02:54 PM, Andrew Laski wrote:
> 
> 
> On Wed, May 25, 2016, at 11:13 AM, Chris Dent wrote:
>>
>> Earlier this year I worked with jaypipes to compose a spec[1] for using
>> gabbi[2] with nova. Summit rolled around and there were some legitimate
>> concerns about the focus of the spec being geared towards replacing the
>> api sample tests. I wasn't at summit ☹ but my understanding of the
>> outcome of the discussion was (please correct me if I'm wrong):
>>
>> * gabbi is not a straight replacement for the api-samples (notably
>>it doesn't address the documentation functionality provided by
>>api-samples)
>>
>> * there are concerns, because of the style of response validation
>>that gabbi does, that there could be a coverage gap[3] when a
>>representation changes (in, for example, a microversion bump)
>>
>> * we'll see how things go with the placement API work[4], which uses
>>gabbi for TDD, and allow people to learn more about gabbi from
>>that
>>
>> Since that all seems to make sense, I've gone ahead and abandoned
>> the review associated with the spec as overreaching for the time
>> being.
>>
>> I'd like, however, to replace it with a spec that is somewhat less
>> reaching in its plans. Rather than replace api-samples with gabbi,
>> augment existing tests of the API with gabbi-based tests. I think
>> this is a useful endeavor that will find and fix inconsistencies but
>> I'd like to get some feedback from people so I can formulate a spec
>> that will actually be useful.
>>
>> For reference, I started working on some integration of tempest and
>> gabbi[5] (based on some work that Mehdi did), and in the first few
>> minutes of writing tests found and reported bugs against nova and
>> glance, some of which have even been fixed since then. Win! We like
>> win.
>>
>> The difficulty here, and the reason I'm writing this message, is
>> simply this: The biggest benefit of gabbi is the actual writing and
>> initial (not the repeated) running of the tests. You write tests, you
>> find bugs and inconsistencies. The second biggest benefit is going
>> back and being a human and reading the tests and being able to see
>> what the API is doing, request and response in the same place. That's
>> harder to write a spec about than "I want to add or change feature X".
>> There's no feature here.
> 
> After reading this my first thought is that gabbi would handle what I'm
> testing in
> https://review.openstack.org/#/c/263927/33/nova/tests/functional/wsgi/test_servers.py,
> or any of the other tests in that directory. Does that seem accurate?
> And what would the advantage of gabbi be versus what I have currently
> written?

It would.

I still would rather not put gabbi into the compute API testing this
cycle. Instead learn from the placement side, let people see good
patterns there, and not confuse contributors with multiple ways to test
things in the compute API. Because that requires a lot of digging out
from later (example: mox & mock).

And we still have this whole api-ref site which is only 50% verified
(and we still need to address a number of microversion issues) -
http://burndown.dague.org/. We said at the beginning of the cycle
api-ref and policy in code were our 2 API priorities. Until those are
well in the bag I don't want to take the energy and care to make sure we
do a pivot on test strategy to something completely new in a way that is
easy for everyone to contribute to and review.

I feel like we have a good sandbox for this in the placement API, and we
can evaluate at end of cycle for next steps.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [nova] determining or clarifying a path for gabbi+nova

2016-05-25 Thread Chris Dent

On Wed, 25 May 2016, Andrew Laski wrote:


After reading this my first thought is that gabbi would handle what I'm
testing in
https://review.openstack.org/#/c/263927/33/nova/tests/functional/wsgi/test_servers.py,
or any of the other tests in that directory. Does that seem accurate?
And what would the advantage of gabbi be versus what I have currently
written?


Yes, things like that seem like they could be pretty good candidates.
Assuming you had a GabbiFixture subclass that did what you're doing in
your setUp()[1] and test loader[2], then the gabbi file would look
something like this (untested, but if you want to try this together
tomorrow I reckon we could make it go pretty quickly):

```yaml
fixtures:
    - LaskiFixture

tests:
    - name: create a server
      POST: /servers
      request_headers:
          content-type: application/json
      data:
          server:
              name: foo
              # the fixture injects this value
              imageRef: $ENVIRON['image_ref']
              flavorRef: 1
      status: 201
      response_headers:
          # check headers however you like here

    - name: get the server
      # this assumes the post above had a location response
      # header
      GET: $LOCATION
      response_json_paths:
          $.server.name: foo
          $.server.image.id: $ENVIRON['image_ref']
          $.server.flavor.id: 1

    - name: delete the server
      DELETE: $LAST_URL
      status: 204

    - name: make sure it really is gone
      GET: $LAST_URL
      status: 404
```

To me the primary advantages are:

* cleaner representation of the request/response cycle of a sequence
  of requests, without random other stuff
* under the covers it's direct interaction with the wsgi application
  with regular plain ol' http clients
* response validation that can be as simple or complex as you like
  with json paths
  * or even more complex if you want to write your own response
    handlers[5]
* it's pretty easy to write these things (and to correct them if you
  get them wrong)
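For a concrete sense of what those json path checks amount to, here's a
stdlib-only Python sketch. It's my own toy resolver, not gabbi's
implementation (gabbi uses a full jsonpath library), but it shows how an
expression like `$.server.name` is matched against a response body:

```python
import json

def simple_json_path(doc, path):
    """Toy resolver for dotted paths like '$.server.name'.

    Strips the leading '$.' and walks the parsed JSON one key
    at a time. Real jsonpath supports much more (filters,
    wildcards, indexes); this is just the core idea.
    """
    node = doc
    for part in path.lstrip('$.').split('.'):
        node = node[part]
    return node

# A body shaped like a trimmed-down server GET response.
body = json.loads('{"server": {"name": "foo", "flavor": {"id": "1"}}}')

print(simple_json_path(body, '$.server.name'))       # foo
print(simple_json_path(body, '$.server.flavor.id'))  # 1
```

In a gabbi file the left-hand side of each `response_json_paths` entry
is such a path and the right-hand side is the expected value.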

That's a start at least.
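On the "direct interaction with the wsgi application" point, here's a
stdlib-only sketch of what that in-process interaction looks like. The
toy app and hand-built environ stand in for the real API and for the
intercept machinery gabbi uses; no socket or server is involved:

```python
import json
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    # Toy stand-in for an API endpoint: always returns one server.
    body = json.dumps({'server': {'name': 'foo'}}).encode()
    start_response('200 OK', [('Content-Type', 'application/json')])
    return [body]

# Build a synthetic request environ, as an in-process client would.
environ = {'REQUEST_METHOD': 'GET', 'PATH_INFO': '/servers/1'}
setup_testing_defaults(environ)

captured = {}

def start_response(status, headers):
    # Record what the app reports, instead of writing to a socket.
    captured['status'] = status
    captured['headers'] = headers

body = b''.join(app(environ, start_response))

print(captured['status'])                    # 200 OK
print(json.loads(body)['server']['name'])    # foo
```

The whole request/response cycle happens as a plain function call,
which is why these tests are fast and easy to debug.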

Thanks for the good leading question.

[1] The placement api review[3] has a fairly straightforward
fixture[4] that has some but not all of the ideas that your fixture
would need. As Sergey correctly points out it needs to be cleaned up
now that it has a subclass.

[2] The test loader associates the gabbi yaml files with the wsgi
application that is being tested and produces standard python
unittest tests. There's an example in the placement api again:
https://review.openstack.org/#/c/293104/47/nova/tests/functional/gabbi/test_placement_api.py
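To make the loader mechanism concrete without requiring gabbi itself,
here's a stdlib-only sketch of the same idea: declarative test
descriptions turned into standard unittest tests at load time. The
names and the fake response table are mine; in real gabbi the entry
point is gabbi.driver.build_tests and the descriptions come from the
yaml files:

```python
import unittest

# Fake in-process "API": canned status per method+path, standing in
# for the wsgi application a real loader would exercise.
FAKE_RESPONSES = {
    ('POST', '/servers'): 201,
    ('GET', '/servers/1'): 200,
}

# Stand-ins for the parsed yaml test descriptions.
TEST_DESCRIPTIONS = [
    {'name': 'create a server', 'method': 'POST',
     'url': '/servers', 'status': 201},
    {'name': 'get the server', 'method': 'GET',
     'url': '/servers/1', 'status': 200},
]

def build_tests(descriptions):
    """Turn each description into a standard unittest test case."""
    suite = unittest.TestSuite()
    for desc in descriptions:
        test_name = 'test_' + desc['name'].replace(' ', '_')

        def check(self, desc=desc):
            # A real loader makes the HTTP request here; we consult
            # the fake table and compare the expected status.
            actual = FAKE_RESPONSES[(desc['method'], desc['url'])]
            self.assertEqual(desc['status'], actual)

        case = type(test_name, (unittest.TestCase,), {test_name: check})
        suite.addTest(case(test_name))
    return suite

suite = build_tests(TEST_DESCRIPTIONS)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun, result.wasSuccessful())
```

Because the output is ordinary unittest tests, the usual runners and
tooling all work unchanged.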

[3] https://review.openstack.org/#/c/293104/

[4] 
https://review.openstack.org/#/c/293104/47/nova/tests/functional/gabbi/fixtures.py

[5] https://gabbi.readthedocs.io/en/latest/handlers.html
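In case it helps, the idea behind [5] can be sketched in plain Python:
a checker registered against a test key, so new kinds of response
validation plug in without touching the core. Everything below is
hypothetical naming, not gabbi's actual handler API:

```python
# Registry mapping response_* test keys to checker functions,
# a toy version of pluggable response handlers.
HANDLERS = {}

def handler(key):
    """Register a checker for a given test key."""
    def register(func):
        HANDLERS[key] = func
        return func
    return register

@handler('response_strings')
def check_strings(expected, body):
    # Pass when every expected substring appears in the body.
    return all(s in body for s in expected)

# A test description and a response body, as the runner would see them.
test_yaml = {'response_strings': ['server', 'foo']}
body = '{"server": {"name": "foo"}}'

results = [HANDLERS[key](value, body)
           for key, value in test_yaml.items() if key in HANDLERS]
print(all(results))  # True
```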
--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [nova] determining or clarifying a path for gabbi+nova

2016-05-25 Thread Andrew Laski


On Wed, May 25, 2016, at 11:13 AM, Chris Dent wrote:
> 
> Earlier this year I worked with jaypipes to compose a spec[1] for using
> gabbi[2] with nova. Summit rolled around and there were some legitimate
> concerns about the focus of the spec being geared towards replacing the
> api sample tests. I wasn't at summit ☹ but my understanding of the
> outcome of the discussion was (please correct me if I'm wrong):
> 
> * gabbi is not a straight replacement for the api-samples (notably
>it doesn't address the documentation functionality provided by
>api-samples)
> 
> * there are concerns, because of the style of response validation
>that gabbi does, that there could be a coverage gap[3] when a
>representation changes (in, for example, a microversion bump)
> 
> * we'll see how things go with the placement API work[4], which uses
>gabbi for TDD, and allow people to learn more about gabbi from
>that
> 
> Since that all seems to make sense, I've gone ahead and abandoned
> the review associated with the spec as overreaching for the time
> being.
> 
> I'd like, however, to replace it with a spec that is somewhat less
> reaching in its plans. Rather than replace api-samples with gabbi,
> augment existing tests of the API with gabbi-based tests. I think
> this is a useful endeavor that will find and fix inconsistencies but
> I'd like to get some feedback from people so I can formulate a spec
> that will actually be useful.
> 
> For reference, I started working on some integration of tempest and
> gabbi[5] (based on some work that Mehdi did), and in the first few
> minutes of writing tests found and reported bugs against nova and
> glance, some of which have even been fixed since then. Win! We like
> win.
> 
> The difficulty here, and the reason I'm writing this message, is
> simply this: The biggest benefit of gabbi is the actual writing and
> initial (not the repeated) running of the tests. You write tests, you
> find bugs and inconsistencies. The second biggest benefit is going
> back and being a human and reading the tests and being able to see
> what the API is doing, request and response in the same place. That's
> harder to write a spec about than "I want to add or change feature X".
> There's no feature here.

After reading this my first thought is that gabbi would handle what I'm
testing in
https://review.openstack.org/#/c/263927/33/nova/tests/functional/wsgi/test_servers.py,
or any of the other tests in that directory. Does that seem accurate?
And what would the advantage of gabbi be versus what I have currently
written?


> 
> I'm also aware that there is concern about adding yet another thing to
> understand in the codebase.
> 
> So what's a reasonable course of action here?
> 
> Thanks.
> 
> P.S: If any other project is curious about using gabbi, it is easier
> to use and set up than this discussion is probably making it sound
> and extremely capable. If you want to try it and need some help,
> just ask me: cdent on IRC.
> 
> [1] https://review.openstack.org/#/c/291352/
> 
> [2] https://gabbi.readthedocs.io/
> 
> [3] This would be expected: Gabbi considers its job to be testing
> the API layer, not the serializers and objects that the API might be
> using (although it certainly can validate those things).
> 
> [4] https://review.openstack.org/#/c/293104/
> 
> [5] http://markmail.org/message/z6z6ego4wqdaelhq
> 
> -- 
> Chris Dent   (╯°□°)╯︵┻━┻http://anticdent.org/
> freenode: cdent tw: @anticdent



[openstack-dev] [nova] determining or clarifying a path for gabbi+nova

2016-05-25 Thread Chris Dent


Earlier this year I worked with jaypipes to compose a spec[1] for using
gabbi[2] with nova. Summit rolled around and there were some legitimate
concerns about the focus of the spec being geared towards replacing the
api sample tests. I wasn't at summit ☹ but my understanding of the
outcome of the discussion was (please correct me if I'm wrong):

* gabbi is not a straight replacement for the api-samples (notably
  it doesn't address the documentation functionality provided by
  api-samples)

* there are concerns, because of the style of response validation
  that gabbi does, that there could be a coverage gap[3] when a
  representation changes (in, for example, a microversion bump)

* we'll see how things go with the placement API work[4], which uses
  gabbi for TDD, and allow people to learn more about gabbi from
  that

Since that all seems to make sense, I've gone ahead and abandoned
the review associated with the spec as overreaching for the time
being.

I'd like, however, to replace it with a spec that is somewhat less
reaching in its plans. Rather than replace api-samples with gabbi,
augment existing tests of the API with gabbi-based tests. I think
this is a useful endeavor that will find and fix inconsistencies but
I'd like to get some feedback from people so I can formulate a spec
that will actually be useful.

For reference, I started working on some integration of tempest and
gabbi[5] (based on some work that Mehdi did), and in the first few
minutes of writing tests found and reported bugs against nova and
glance, some of which have even been fixed since then. Win! We like
win.

The difficulty here, and the reason I'm writing this message, is
simply this: The biggest benefit of gabbi is the actual writing and
initial (not the repeated) running of the tests. You write tests, you
find bugs and inconsistencies. The second biggest benefit is going
back and being a human and reading the tests and being able to see
what the API is doing, request and response in the same place. That's
harder to write a spec about than "I want to add or change feature X".
There's no feature here.

I'm also aware that there is concern about adding yet another thing to
understand in the codebase.

So what's a reasonable course of action here?

Thanks.

P.S: If any other project is curious about using gabbi, it is easier
to use and set up than this discussion is probably making it sound
and extremely capable. If you want to try it and need some help,
just ask me: cdent on IRC.

[1] https://review.openstack.org/#/c/291352/

[2] https://gabbi.readthedocs.io/

[3] This would be expected: Gabbi considers its job to be testing
the API layer, not the serializers and objects that the API might be
using (although it certainly can validate those things).

[4] https://review.openstack.org/#/c/293104/

[5] http://markmail.org/message/z6z6ego4wqdaelhq

--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent tw: @anticdent