On Wed, May 25, 2016, at 11:13 AM, Chris Dent wrote:
>
> Earlier this year I worked with jaypipes to compose a spec[1] for using
> gabbi[2] with nova. Summit rolled around and there were some legitimate
> concerns about the focus of the spec being geared towards replacing the
> api sample tests. I wasn't at summit ☹ but my understanding of the
> outcome of the discussion was (please correct me if I'm wrong):
>
> * gabbi is not a straight replacement for the api-samples (notably
>   it doesn't address the documentation functionality provided by
>   api-samples)
>
> * there are concerns, because of the style of response validation
>   that gabbi does, that there could be a coverage gap[3] when a
>   representation changes (in, for example, a microversion bump)
>
> * we'll see how things go with the placement API work[4], which uses
>   gabbi for TDD, and allow people to learn more about gabbi from
>   that
>
> Since that all seems to make sense, I've gone ahead and abandoned
> the review associated with the spec as overreaching for the time
> being.
>
> I'd like, however, to replace it with a spec that is somewhat less
> far-reaching in its plans: rather than replace api-samples with gabbi,
> augment existing tests of the API with gabbi-based tests. I think
> this is a useful endeavor that will find and fix inconsistencies, but
> I'd like to get some feedback from people so I can formulate a spec
> that will actually be useful.
>
> For reference, I started working on some integration of tempest and
> gabbi[5] (based on some work that Mehdi did), and in the first few
> minutes of writing tests found and reported bugs against nova and
> glance, some of which have even been fixed since then. Win! We like
> win.
>
> The difficulty here, and the reason I'm writing this message, is
> simply this: the biggest benefit of gabbi is the actual writing and
> initial (not the repeated) running of the tests. You write tests, you
> find bugs and inconsistencies.
> The second biggest benefit is going back later, as a human, reading
> the tests and being able to see what the API is doing, request and
> response in the same place. That's harder to write a spec about than
> "I want to add or change feature X". There's no feature here.
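For readers who haven't seen one, a gabbi suite is a declarative YAML file in which each test pairs a request with its expected response, which is what makes the "request and response in the same place" point above concrete. A minimal sketch (the paths, headers, and expected values here are illustrative, not taken from any real nova test):

```yaml
# Illustrative gabbi suite. Each entry in `tests` is one HTTP
# request plus assertions on the response; `defaults` applies to
# every test in the file.
defaults:
    request_headers:
        accept: application/json

tests:
    - name: list servers is initially empty
      GET: /servers
      status: 200
      response_json_paths:
          $.servers: []

    - name: bad accept header is rejected
      GET: /servers
      request_headers:
          accept: text/plain
      status: 406
```

Tests in a file run in order, so a suite can read as a narrative of API interactions from top to bottom.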
After reading this, my first thought is that gabbi would handle what
I'm testing in
https://review.openstack.org/#/c/263927/33/nova/tests/functional/wsgi/test_servers.py,
or any of the other tests in that directory. Does that seem accurate?
And what would the advantage of gabbi be versus what I have currently
written?

> I'm also aware that there is concern about adding yet another thing to
> understand in the codebase.
>
> So what's a reasonable course of action here?
>
> Thanks.
>
> P.S.: If any other project is curious about using gabbi, it is easier
> to use and set up than this discussion is probably making it sound,
> and extremely capable. If you want to try it and need some help,
> just ask me: cdent on IRC.
>
> [1] https://review.openstack.org/#/c/291352/
>
> [2] https://gabbi.readthedocs.io/
>
> [3] This would be expected: gabbi considers its job to be testing
> the API layer, not the serializers and objects that the API might be
> using (although it certainly can validate those things).
>
> [4] https://review.openstack.org/#/c/293104/
>
> [5] http://markmail.org/message/z6z6ego4wqdaelhq
>
> --
> Chris Dent (╯°□°)╯︵┻━┻ http://anticdent.org/
> freenode: cdent tw: @anticdent

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev