Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-04-21 Thread Everett Toews
On Apr 20, 2015, at 2:45 PM, Chris Dent  wrote:

> I wanted to make a quick update on the latest happenings with
> gabbi[0], the tool I've created to do "declarative" testing of
> OpenStack APIs (starting with Ceilometer and Gnocchi).
> 
> * Jay Pipes and I are doing a presentation "API Matters" at summit.
>  The latter half of that will be me noodling about gabbi,
>  including a demo.[1]

This is great! I'm looking forward to attending it.

Miguel and I are doing something similar with our "The Good and the Bad of the
OpenStack REST APIs" presentation [1], which starts exactly 10 minutes after
yours. :)

Cheers,
Everett

[1] 
https://openstacksummitmay2015vancouver.sched.org/event/6ce758d5c7340db74e0d432e138c6619
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-04-20 Thread Chris Dent


I wanted to make a quick update on the latest happenings with
gabbi[0], the tool I've created to do "declarative" testing of
OpenStack APIs (starting with Ceilometer and Gnocchi).

* Jay Pipes and I are doing a presentation "API Matters" at summit.
  The latter half of that will be me noodling about gabbi,
  including a demo.[1]

* Preparing for that demo made me want a command-line tool to run
  the YAML, so I made one, and it is now integrated in the latest
  release. It provides the interesting ability to run what appear
  to be Python unittests against any web server.[2] (See the sketch
  after this list.)

* I finally got around to writing probably the most important part
  of the documentation: an annotated example YAML file.[3]
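A minimal sketch of that command-line runner (the file name and target
host are invented for illustration; see the runner docs[2] for the real
options):

```
# sample.yaml -- feed it to the runner against any live server, e.g.:
#
#   gabbi-run example.com:80 < sample.yaml
#
tests:
- name: front page responds
  url: /
  status: 200
```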

If you're at all involved with developing or testing web APIs, are
interested in gabbi, and you've not yet had a chance to give it a
look, it would be great if you could do so soon. I'm hoping to
stabilize the test format and make a 1.0 release soon.

[0] https://cdent.github.io/gabbi/
[1] 
http://openstacksummitmay2015vancouver.sched.org/event/176411fff081aad3d7f275632f70d52b
[2] http://gabbi.readthedocs.org/en/latest/runner.html
[3] http://gabbi.readthedocs.org/en/latest/example.html

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-03-03 Thread Chris Dent

On Tue, 3 Mar 2015, Steve Baker wrote:

This looks very useful; I'd like to use this in the heat functional tests
job.


Awesome, let me know if you need any assistance getting started.

Is it possible to write tests which do a POST/PUT then a loop of GETs until
some condition is met (a response_json_paths match on IN_PROGRESS ->
COMPLETE)?


Not yet, but it is something that has been lurking in the back of my
mind. Under the covers the implementation wouldn't be too hard, but
I've held off because getting the syntax right seems tricky. It should
be clear but not too invasive, and because of the way the test suites
are managed the info would need to be expressed in the chunk of
YAML that represents just one request.

One option would be to allow a test to sleep, retry, and fail up to N
times (or for a duration of N seconds) before allowing the failure
exception to rise out of the loop. In this case the COMPLETE info would
be the success condition and IN_PROGRESS wouldn't be noted at all.

Another option would be to express the try again condition
(IN_PROGRESS) and the done condition (COMPLETE) with new syntax and
loop on try again, finish on done and fail otherwise.

I think I prefer the first because it is more general and requires
less syntax but it may be insufficiently expressive.
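To make that concrete, here is a purely hypothetical sketch of the first
option (the poll key and its sub-keys don't exist in gabbi; they are
invented here only to illustrate the shape the syntax might take):

```
tests:
- name: wait for the resource to become COMPLETE
  url: /v1/resources/my-resource
  # hypothetical: retry up to 10 times, sleeping 2 seconds between
  # attempts, letting only the final failure escape the loop
  poll:
    count: 10
    delay: 2
  response_json_paths:
    $.resource.status: COMPLETE
```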

What I want to be sure to avoid is clouding up the syntax with a bunch
of meta hooey that obscures the basic request and response chunks. So,
for example, if it became clear we needed some way to express a sleep
then that sleep would be just one key and we'd (somehow) avoid going
down the messy road of expressing two different kinds of sleep: one
at the top and one at the bottom of the loop.

What might be best is for you or us to write some tests that express
what you want to express and then see how best to implement that. This
is how the new $ENVIRON template feature got added.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-03-02 Thread Steve Baker

On 03/03/15 00:56, Chris Dent wrote:


I (and a few others) have been using gabbi[1] for a couple of months now
and it has proven very useful and evolved a bit, so I thought it would be
worthwhile to follow up on my original message and give an update.

Some recent reviews[2] give a sample of how it can be used to validate
an existing API as well as search for less-than-perfect HTTP behavior
(e.g. sending a 404 when a 405 would be correct).

Regular use has led to some important changes:

* It can now be integrated with other tox targets so it can run
  alongside other functional tests.
* Individual tests can be xfailed and skipped. An entire YAML test
  file can be skipped.
* For those APIs which provide insufficient hypermedia support, the
  ability to inspect and reference the prior test and use template
  variables in the current request has been expanded (with support for
  environment variables pending a merge).

My original motivation for creating the tool was to make it easier to
learn APIs by causing a body of readable YAML files to exist. This
remains important, but what I've found is that writing the tests is
itself an incredible tool. Not only is it very easy to write tests
(throw some stuff at a URL and see what happens) and find (many) bugs
as a result, the exploratory nature of test writing drives a
learning process.

You'll note that the reviews below are just the YAML files. That's
because the test loading and fixture python code is already merged.
Adding tests is just a matter of adding more YAML. An interesting
trick is to run a small segment of the gabbi tests in a project (e.g.
just one file that represents one type of resource) while producing
coverage data. Reviewing the coverage of just the controller for that
resource can help drive test creation and separation.

[1] http://gabbi.readthedocs.org/en/latest/
[2] https://review.openstack.org/#/c/159945/
https://review.openstack.org/#/c/159204/

This looks very useful; I'd like to use this in the heat functional
tests job.


Is it possible to write tests which do a POST/PUT then a loop of GETs
until some condition is met (a response_json_paths match on IN_PROGRESS
-> COMPLETE)?


This would allow for testing of non-atomic PUT/POST operations for
entities like nova servers, heat stacks, etc.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-03-02 Thread Jay Pipes

On 03/02/2015 03:56 AM, Chris Dent wrote:


I (and a few others) have been using gabbi[1] for a couple of months now
and it has proven very useful and evolved a bit, so I thought it would be
worthwhile to follow up on my original message and give an update.

Some recent reviews[2] give a sample of how it can be used to validate
an existing API as well as search for less-than-perfect HTTP behavior
(e.g. sending a 404 when a 405 would be correct).

Regular use has led to some important changes:

* It can now be integrated with other tox targets so it can run
   alongside other functional tests.
* Individual tests can be xfailed and skipped. An entire YAML test
   file can be skipped.
* For those APIs which provide insufficient hypermedia support, the
   ability to inspect and reference the prior test and use template
   variables in the current request has been expanded (with support for
   environment variables pending a merge).

My original motivation for creating the tool was to make it easier to
learn APIs by causing a body of readable YAML files to exist. This
remains important but what I've found is that writing the tests is
itself an incredible tool. Not only is it very easy to write tests
(throw some stuff at a URL and see what happen) and find (many) bugs
as a result, the exploratory nature of test writing drives a
learning process.

You'll note that the reviews below are just the YAML files. That's
because the test loading and fixture python code is already merged.
Adding tests is just a matter of adding more YAML. An interesting
trick is to run a small segment of the gabbi tests in a project (e.g.
just one file that represents one type of resource) while producing
coverage data. Reviewing the coverage of just the controller for that
resource can help drive test creation and separation.


Total awesomesauce, Chris :)

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-03-02 Thread Chris Dent


I (and a few others) have been using gabbi[1] for a couple of months now
and it has proven very useful and evolved a bit, so I thought it would be
worthwhile to follow up on my original message and give an update.

Some recent reviews[2] give a sample of how it can be used to validate
an existing API as well as search for less-than-perfect HTTP behavior
(e.g. sending a 404 when a 405 would be correct).

Regular use has led to some important changes:

* It can now be integrated with other tox targets so it can run
  alongside other functional tests.
* Individual tests can be xfailed and skipped. An entire YAML test
  file can be skipped.
* For those APIs which provide insufficient hypermedia support, the
  ability to inspect and reference the prior test and use template
  variables in the current request has been expanded (with support for
  environment variables pending a merge).
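As a minimal sketch of those last two bullets (xfail, $LOCATION and
$ENVIRON are real gabbi constructs; the URLs, JSON paths and environment
variable are invented for illustration):

```
tests:
- name: create a resource
  url: /resources
  method: POST
  request_headers:
    content-type: application/json
  data:
    name: $ENVIRON['RESOURCE_NAME']
  status: 201

# the API provides no hypermedia link, so reference the prior test's
# location header instead
- name: fetch the created resource
  url: $LOCATION
  response_json_paths:
    $.name: $ENVIRON['RESOURCE_NAME']

# a known-broken behavior, recorded without failing the run
- name: delete is not yet implemented
  xfail: true
  url: $LOCATION
  method: DELETE
  status: 204
```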

My original motivation for creating the tool was to make it easier to
learn APIs by causing a body of readable YAML files to exist. This
remains important, but what I've found is that writing the tests is
itself an incredible tool. Not only is it very easy to write tests
(throw some stuff at a URL and see what happens) and find (many) bugs
as a result, the exploratory nature of test writing drives a
learning process.

You'll note that the reviews below are just the YAML files. That's
because the test loading and fixture python code is already merged.
Adding tests is just a matter of adding more YAML. An interesting
trick is to run a small segment of the gabbi tests in a project (e.g.
just one file that represents one type of resource) while producing
coverage data. Reviewing the coverage of just the controller for that
resource can help drive test creation and separation.

[1] http://gabbi.readthedocs.org/en/latest/
[2] https://review.openstack.org/#/c/159945/
https://review.openstack.org/#/c/159204/

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-18 Thread Jay Pipes

On 01/12/2015 02:20 PM, Chris Dent wrote:


After some discussion with Sean Dague and a few others it became
clear that it would be a good idea to introduce a new tool I've been
working on to the list to get a sense of its usefulness generally,
work towards getting it into global requirements, and get the
documentation fleshed out so that people can actually figure out how
to use it well.

tl;dr: Help me make this interesting tool useful to you and your
HTTP testing by reading this message and following some of the links
and asking any questions that come up.

The tool is called gabbi

 https://github.com/cdent/gabbi
 http://gabbi.readthedocs.org/
 https://pypi.python.org/pypi/gabbi

It describes itself as a tool for running HTTP tests where requests
and responses are represented in a declarative form. Its main
purpose is to allow testing of APIs where the focus of test writing
(and reading!) is on the HTTP requests and responses, not on a bunch of
Python (that obscures the HTTP).

The tests are written in YAML and the simplest test file has this form:

```
tests:
- name: a test
  url: /
```

This test will pass if the response status code is '200'.

The test file is loaded by a small amount of python code which transforms
the file into an ordered sequence of TestCases in a TestSuite[1].

```
def load_tests(loader, tests, pattern):
    """Provide a TestSuite to the discovery process."""
    test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
    return driver.build_tests(test_dir, loader, host=None,
                              intercept=SimpleWsgi,
                              fixture_module=sys.modules[__name__])
```

The loader provides either:

* a host to which real over-the-network requests are made
* a WSGI app which is wsgi-intercept-ed[2]
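For the first case, a sketch of the same loader pointed at a live
service instead, assuming build_tests' host and port parameters (the
host name here is invented):

```
def load_tests(loader, tests, pattern):
    """Run the same YAML files against a live server over the network."""
    test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
    # with host set (and intercept unset) gabbi makes real HTTP
    # requests to this server rather than intercepting a WSGI app
    return driver.build_tests(test_dir, loader,
                              host='api.example.com', port=8080,
                              fixture_module=sys.modules[__name__])
```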

If the test runner is asked to run an individual TestCase, those tests
that are prior to it in the same file are run first, as prerequisites.

Each test file can declare a sequence of nested fixtures to be loaded
from a configured (in the loader) module. Fixtures are context managers
(they establish the fixture upon __enter__ and destroy it upon
__exit__).
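A minimal sketch of such a fixture, assuming gabbi's GabbiFixture base
class (start_fixture runs on __enter__, stop_fixture on __exit__); the
scratch-directory payload is just a stand-in for real setup work:

```
import shutil
import tempfile

from gabbi import fixture


class ScratchDirFixture(fixture.GabbiFixture):
    """Create a scratch directory for a YAML file's tests, then remove it."""

    def start_fixture(self):
        # runs once, before any test in the YAML file
        self.scratch_dir = tempfile.mkdtemp()

    def stop_fixture(self):
        # runs once, after the last test in the file; each fixture is
        # expected to do its own cleanup
        shutil.rmtree(self.scratch_dir)
```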

With a proper group_regex setting in .testr.conf each YAML file can
run in its own process in a concurrent test runner.
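A sketch of what that could look like, assuming test names shaped like
the gabbi.driver.test_intercept_... names shown later in this thread
(the exact regex depends on how the tests are named):

```
[DEFAULT]
test_command=${PYTHON:-python} -m subunit.run discover . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
# group all tests generated from one YAML file into the same process
group_regex=gabbi\.driver\.test_intercept_([^_]+)
```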

The docs contain information on the format of the test files:

 http://gabbi.readthedocs.org/en/latest/format.html

Each test can state request headers and bodies and evaluate both response
headers and response bodies. Request bodies can be strings in the
YAML, files read from disk, or JSON created from YAML structures.
Response verification can use JSONPath[3] to inspect the details of
response bodies. Response header validation may use regular
expressions.

There is limited support for referring to the previous request
to construct URIs, potentially allowing traversal of a full HATEOAS
compliant API.
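A short sketch combining these (the /regex/ header check,
response_json_paths and $RESPONSE substitution are real gabbi syntax;
the API shape is invented):

```
tests:
- name: list servers
  url: /servers
  response_headers:
    # values wrapped in slashes are treated as regular expressions
    content-type: /application/json/
  response_json_paths:
    # spot check one field rather than asserting the whole body
    $.servers[0].status: ACTIVE

- name: follow a link from the previous response
  url: $RESPONSE['$.servers[0].links[0].href']
  status: 200
```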

At the moment the most complete examples of how things work are:

* Ceilometer's pending use of gabbi:
   https://review.openstack.org/#/c/146187/
* Gabbi's testing of gabbi:
   https://github.com/cdent/gabbi/tree/master/gabbi/gabbits_intercept
   (the loader and faked WSGI app for those yaml files is in:
   https://github.com/cdent/gabbi/blob/master/gabbi/test_intercept.py)

One obvious thing that will need to happen is a suite of concrete
examples on how to use the various features. I'm hoping that
feedback will help drive that.

In my own experimentation with gabbi I've found it very useful. It's
helped me explore and learn the ceilometer API in a way that existing
test code has completely failed to do. It's also helped reveal
several warts that will be very useful to fix. And it is fast. To
run and to write. I hope that with some work it can be useful to you
too.


Very impressive, Chris, thanks very much for bringing Gabbi into the 
OpenStack ecosystem. I very much look forward to replacing the API 
samples code in Nova with Gabbi, which looks very clean and 
easily understandable for anyone.


Best,
-jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-13 Thread Chris Dent

On Mon, 12 Jan 2015, Sean Dague wrote:


I think it's important to look at this in the narrower context: we're
not testing full environments here; this is custom crafting HTTP req /
resp in a limited context to make sure components are completing a contract.


Yes, exactly.

In fact one of the things that keeps coming up in conversations is
that people keep asking about ways of extending the response body
validation and I'm reluctant to make that aspect of things _too_
powerful. The goal is to validate that the HTTP is doing the right
thing, not to validate the persistence layer or the business logic
that is assembling the details of the resources.

In that sense the place where the attention and power should be in the
tests is in the crafting of the requests and in the validation of the
response headers. Part of the reason for including jsonpath was to be
able to do spot checks of the response body rather than including
some simulacrum of the entire response in the test.

And even including that was a matter of convenience to deal with
ambiguity in the JSON producers. The original response body tests
were simple assertions that some string fragment is somewhere in
the body.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Chris Dent

On Tue, 13 Jan 2015, Boris Pavlovic wrote:


Having a separated engine seems like a good idea. It will really simplify
stuff


I'm not certain that's the case, but it may be worth exploration.


This seems like a huge duplication of effort. I mean operators will write
their own tools, developers their own... Why not just resolve the more
common problem: "Does it work or not?"


Because no one tool can solve all problems well. I think it is far
better to have lots of small tools that are fairly focused on doing
one or a few small jobs well.

It may be that there are pieces of gabbi which can be reused or
extracted to more general libraries. If so, that's fantastic. But
I think it is very important to try to solve one problem at a time
rather than everything at once.


$ python -m subunit.run discover gabbi |subunit-trace
[...]
gabbi.driver.test_intercept_self_inheritance_of_defaults.test_request
[0.027512s] ... ok
[...]



What is "test_request" Just one RestAPI call?


That long dotted name is the name of a dynamically created single
TestCase (some metaclass mumbo jumbo magic is used to turn the YAML
into TestCase classes), and within that TestCase is one single HTTP
request and the evaluation of its response. It directly corresponds to a
test named "inheritance of defaults" in a file called self.yaml.
self.yaml is in a directory containing other YAML files, all of which
are loaded by a Python file named test_intercept.py.


Btw, the thing I am interested in is how they are all combined?


As I said before: Each yaml file is an ordered sequence of tests, each
one representing a single HTTP request. Fixtures are per yaml file.
There is no cleanup phase outside of the fixtures. Each fixture is
expected to do its own cleanup, if required.


And where are you doing cleanup? (like if you would like to test only
creation of a resource?)


In the ceilometer integration that is currently being built, the
test_gabbi.py[1] file configures itself to use a mongodb database that
is unique for this process. The test harness is responsible for
starting the mongodb. In a concurrency situation, each process will
have a different database in the same mongo server. When the test run
is done, mongo is shut down and the databases are removed.

In other words, the environment surrounding gabbi is responsible for
doing the things it is good at, and gabbi does the HTTP tests. A long
running test cannot necessarily depend on what else might be in the
datastore used by the API. It needs to test that which it knows about.

I hope that clarifies things a bit.

[1] https://review.openstack.org/#/c/146187/2/ceilometer/gabbi/test_gabbi.py,cm

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Boris Pavlovic
Sean,


So I'd say let's focus on that problem right now, and get some traction
> on this as part of functional test suites in OpenStack. Genericizing it
> too much just turns this back into a version of every other full stack
> testing tool, which we know isn't sufficient for having quality
> components in OpenStack.


Please be more specific: what tools were tested? It would be nice to see
an overview, at least which tools were tested
and why they can't be used for in-tree testing.


Best regards,
Boris Pavlovic



On Tue, Jan 13, 2015 at 1:37 AM, Anne Gentle  wrote:

>
>
> On Mon, Jan 12, 2015 at 1:20 PM, Chris Dent  wrote:
>
>>
>> After some discussion with Sean Dague and a few others it became
>> clear that it would be a good idea to introduce a new tool I've been
>> working on to the list to get a sense of its usefulness generally,
>> work towards getting it into global requirements, and get the
>> documentation fleshed out so that people can actually figure out how
>> to use it well.
>>
>> tl;dr: Help me make this interesting tool useful to you and your
>> HTTP testing by reading this message and following some of the links
>> and asking any questions that come up.
>>
>> The tool is called gabbi
>>
>> https://github.com/cdent/gabbi
>> http://gabbi.readthedocs.org/
>> https://pypi.python.org/pypi/gabbi
>>
>> It describes itself as a tool for running HTTP tests where requests
>> and responses are represented in a declarative form. Its main
>> purpose is to allow testing of APIs where the focus of test writing
>> (and reading!) is on the HTTP requests and responses, not on a bunch of
>> Python (that obscures the HTTP).
>>
>>
> Hi Chris,
>
> I'm interested, sure. What did you use to write the HTTP tests, as in,
> what was the source of truth for what the requests and responses should be?
>
> Thanks,
> Anne
>
>
>> The tests are written in YAML and the simplest test file has this form:
>>
>> ```
>> tests:
>> - name: a test
>>   url: /
>> ```
>>
>> This test will pass if the response status code is '200'.
>>
>> The test file is loaded by a small amount of python code which transforms
>> the file into an ordered sequence of TestCases in a TestSuite[1].
>>
>> ```
>> def load_tests(loader, tests, pattern):
>>     """Provide a TestSuite to the discovery process."""
>>     test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
>>     return driver.build_tests(test_dir, loader, host=None,
>>                               intercept=SimpleWsgi,
>>                               fixture_module=sys.modules[__name__])
>> ```
>>
>> The loader provides either:
>>
>> * a host to which real over-the-network requests are made
>> * a WSGI app which is wsgi-intercept-ed[2]
>>
>> If an individual TestCase is asked to be run by the testrunner, those
>> tests
>> that are prior to it in the same file are run first, as prerequisites.
>>
>> Each test file can declare a sequence of nested fixtures to be loaded
>> from a configured (in the loader) module. Fixtures are context managers
>> (they establish the fixture upon __enter__ and destroy it upon
>> __exit__).
>>
>> With a proper group_regex setting in .testr.conf each YAML file can
>> run in its own process in a concurrent test runner.
>>
>> The docs contain information on the format of the test files:
>>
>> http://gabbi.readthedocs.org/en/latest/format.html
>>
>> Each test can state request headers and bodies and evaluate both response
>> headers and response bodies. Request bodies can be strings in the
>> YAML, files read from disk, or JSON created from YAML structures.
>> Response verification can use JSONPath[3] to inspect the details of
>> response bodies. Response header validation may use regular
>> expressions.
>>
>> There is limited support for referring to the previous request
>> to construct URIs, potentially allowing traversal of a full HATEOAS
>> compliant API.
>>
>> At the moment the most complete examples of how things work are:
>>
>> * Ceilometer's pending use of gabbi:
>>   https://review.openstack.org/#/c/146187/
>> * Gabbi's testing of gabbi:
>>   https://github.com/cdent/gabbi/tree/master/gabbi/gabbits_intercept
>>   (the loader and faked WSGI app for those yaml files is in:
>>   https://github.com/cdent/gabbi/blob/master/gabbi/test_intercept.py)
>>
>> One obvious thing that will need to happen is a suite of concrete
>> examples on how to use the various features. I'm hoping that
>> feedback will help drive that.
>>
>> In my own experimentation with gabbi I've found it very useful. It's
>> helped me explore and learn the ceilometer API in a way that existing
>> test code has completely failed to do. It's also helped reveal
>> several warts that will be very useful to fix. And it is fast. To
>> run and to write. I hope that with some work it can be useful to you
>> too.
>>
>> Thanks.
>>
>> [1] Getting gabbi to play well with PyUnit style tests and
>> with infrastructure like subunit and testrepository was one of
>> 

Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Chris Dent

On Mon, 12 Jan 2015, Anne Gentle wrote:


I'm interested, sure. What did you use to write the HTTP tests, as in, what
was the source of truth for what the requests and responses should be?


That is an _extremely_ good question and one I really struggled with
as I started integrating gabbi with ceilometer.

Initially I thought "I'll just use the API docs[1] as the source of
truth" but I found they were a bit incomplete on some of the nuances,
so I asked around for other sources of truth, but got little in the
way of response.

So then I tried to use the API controller code but, not to put too fine
a point on it, the combination of WSME and Pecan makes for utterly
inscrutable code if you're interested in the actual structure of the
HTTP requests and responses and the URIs being used.

So then I tried to use the existing api unit tests and was able to
extract a bit there, but it wasn't smooth sailing.

So finally what I did was decide that I would do the work in phases
and with collaborators: I'd get the initial framework in place and
then impose upon those more familiar with the API than I to do
subsequent dependent patchsets that cover the API more completely.

I have to admit that the concept of API truth is part of the reason I
wanted to create this kind of testing. None of the resources I could
find in the ceilometer code tree gave any clear overview that mapped
URIs to methods, allowing easy discovery of how the code works. I
wanted to find some kind of map[2]. Gabbi itself doesn't solve this
problem (there's no map between URI and python method) but it can at
least show the API, there in the code. It's a step in the right
direction.

I know that there are discussions in progress about formalizing APIs
with tools like RAML (for example the thread Ian just extended[3]). I
think these have their place, especially for declaring truth, but they
aren't necessarily good learning aids for new developers or good
assistants for enabling and maintaining transparency.

[1] I started at: http://docs.openstack.org/developer/ceilometer/webapi/v2.html
but I think I should have used: 
http://developer.openstack.org/api-ref-telemetry-v2.html

[2] https://github.com/tiddlyweb/tiddlyweb/blob/master/tiddlyweb/urls.map

[3] http://lists.openstack.org/pipermail/openstack-dev/2015-January/054153.html
--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Anne Gentle
On Mon, Jan 12, 2015 at 1:20 PM, Chris Dent  wrote:

>
> After some discussion with Sean Dague and a few others it became
> clear that it would be a good idea to introduce a new tool I've been
> working on to the list to get a sense of its usefulness generally,
> work towards getting it into global requirements, and get the
> documentation fleshed out so that people can actually figure out how
> to use it well.
>
> tl;dr: Help me make this interesting tool useful to you and your
> HTTP testing by reading this message and following some of the links
> and asking any questions that come up.
>
> The tool is called gabbi
>
> https://github.com/cdent/gabbi
> http://gabbi.readthedocs.org/
> https://pypi.python.org/pypi/gabbi
>
> It describes itself as a tool for running HTTP tests where requests
> and responses are represented in a declarative form. Its main
> purpose is to allow testing of APIs where the focus of test writing
> (and reading!) is on the HTTP requests and responses, not on a bunch of
> Python (that obscures the HTTP).
>
>
Hi Chris,

I'm interested, sure. What did you use to write the HTTP tests, as in, what
was the source of truth for what the requests and responses should be?

Thanks,
Anne


> The tests are written in YAML and the simplest test file has this form:
>
> ```
> tests:
> - name: a test
>   url: /
> ```
>
> This test will pass if the response status code is '200'.
>
> The test file is loaded by a small amount of python code which transforms
> the file into an ordered sequence of TestCases in a TestSuite[1].
>
> ```
> def load_tests(loader, tests, pattern):
>     """Provide a TestSuite to the discovery process."""
>     test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
>     return driver.build_tests(test_dir, loader, host=None,
>                               intercept=SimpleWsgi,
>                               fixture_module=sys.modules[__name__])
> ```
>
> The loader provides either:
>
> * a host to which real over-the-network requests are made
> * a WSGI app which is wsgi-intercept-ed[2]
>
> If an individual TestCase is asked to be run by the testrunner, those tests
> that are prior to it in the same file are run first, as prerequisites.
>
> Each test file can declare a sequence of nested fixtures to be loaded
> from a configured (in the loader) module. Fixtures are context managers
> (they establish the fixture upon __enter__ and destroy it upon
> __exit__).
>
> With a proper group_regex setting in .testr.conf each YAML file can
> run in its own process in a concurrent test runner.
>
> The docs contain information on the format of the test files:
>
> http://gabbi.readthedocs.org/en/latest/format.html
>
> Each test can state request headers and bodies and evaluate both response
> headers and response bodies. Request bodies can be strings in the
> YAML, files read from disk, or JSON created from YAML structures.
> Response verification can use JSONPath[3] to inspect the details of
> response bodies. Response header validation may use regular
> expressions.
>
> There is limited support for referring to the previous request
> to construct URIs, potentially allowing traversal of a full HATEOAS
> compliant API.
>
> At the moment the most complete examples of how things work are:
>
> * Ceilometer's pending use of gabbi:
>   https://review.openstack.org/#/c/146187/
> * Gabbi's testing of gabbi:
>   https://github.com/cdent/gabbi/tree/master/gabbi/gabbits_intercept
>   (the loader and faked WSGI app for those yaml files is in:
>   https://github.com/cdent/gabbi/blob/master/gabbi/test_intercept.py)
>
> One obvious thing that will need to happen is a suite of concrete
> examples on how to use the various features. I'm hoping that
> feedback will help drive that.
>
> In my own experimentation with gabbi I've found it very useful. It's
> helped me explore and learn the ceilometer API in a way that existing
> test code has completely failed to do. It's also helped reveal
> several warts that will be very useful to fix. And it is fast. To
> run and to write. I hope that with some work it can be useful to you
> too.
>
> Thanks.
>
> [1] Getting gabbi to play well with PyUnit style tests and
> with infrastructure like subunit and testrepository was one of
> the most challenging parts of the build, but the result has been
> a lot of flexibility.
>
> [2] https://pypi.python.org/pypi/wsgi_intercept
> [3] https://pypi.python.org/pypi/jsonpath-rw
>
> --
> Chris Dent tw:@anticdent freenode:cdent
> https://tank.peermore.com/tanks/cdent
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Sean Dague
On 01/12/2015 05:00 PM, Boris Pavlovic wrote:
> Hi Chris, 
> 
> If there's sufficient motivation and time it might make sense to
> separate the part of gabbi that builds TestCases from the part that
> runs (and evaluates) HTTP requests and responses. If that happens then
> integration with tools like Rally and runners is probably possible.
> 
> 
> 
> Having a separated engine seems like a good idea. It will really simplify
> stuff
> 
> 
> So, while this is an interesting idea, it's not something that gabbi
> intends to be. It doesn't validate existing clouds. It validates code
> that is used to run clouds.
> Such a thing is probably possible (especially given the fact that you
> can give a "real" host to gabbi tests) but that's not the primary
> goal.
> 
> 
> 
> This seems like a huge duplication of effort. I mean operators will
> write their own tools, developers their own... Why not just resolve the
> more common problem: "Does it work or not?"

I think it's important to look at this in the narrower context: we're
not testing full environments here; this is custom crafting HTTP req /
resp in a limited context to make sure components are completing a contract.

"Does it work or not?" is so broad a statement as to be meaningless most
of the time. It's important to be able to look at these lower level
response flows and make sure they both function, and that when they
break, they do so in a debuggable way.

So I'd say let's focus on that problem right now, and get some traction
on this as part of functional test suites in OpenStack. Genericizing it
too much just turns this back into a version of every other full stack
testing tool, which we know isn't sufficient for having quality
components in OpenStack.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Boris Pavlovic
Hi Chris,

If there's sufficient motivation and time it might make sense to
> separate the part of gabbi that builds TestCases from the part that
> runs (and evaluates) HTTP requests and responses. If that happens then
> integration with tools like Rally and runners is probably possible.



Having a separated engine seems like a good idea. It will really simplify
stuff


So, while this is an interesting idea, it's not something that gabbi
> intends to be. It doesn't validate existing clouds. It validates code
> that is used to run clouds.
> Such a thing is probably possible (especially given the fact that you
> can give a "real" host to gabbi tests) but that's not the primary
> goal.



This seems like a huge duplication of effort. I mean operators will write
their own tools, developers their own... Why not just resolve the more
common problem: "Does it work or not?"


But if you are concerned about individual test times gabbi makes every
> request an individual TestCase, which means that subunit can record times
> for it. Here's a sample of the output from running gabbi's own gabbi
> tests:
> $ python -m subunit.run discover gabbi |subunit-trace
> [...]
> gabbi.driver.test_intercept_self_inheritance_of_defaults.test_request
> [0.027512s] ... ok
> [...]



What is "test_request" Just one RestAPI call?

Btw, the thing I am interested in is how they are all combined?

-> fixtures.set
-> run first REST call
-> run second REST call
...
-> fixtures.clean

Something like that?

And where are you doing cleanup? (like if you would like to test only
creation of a resource?)


Best regards,
Boris Pavlovic



On Tue, Jan 13, 2015 at 12:37 AM, Chris Dent  wrote:

> On Tue, 13 Jan 2015, Boris Pavlovic wrote:
>
>  The Idea is brilliant. I may steal it! =)
>>
>
> Feel free.
>
>  But there are some issues that will be faced:
>>
>> 1) Using as a base unittest:
>>
>>  python -m subunit.run discover -f gabbi | subunit2pyunit
>>>
>>
>> So the Rally team won't be able to reuse it for load testing (if we directly
>> integrate it) because we will have huge overhead (of the discovery stuff)
>>
>
> So the use of unittest, subunit and related tools is to allow the
> tests to be integrated with the usual OpenStack testing handling. That
> is, gabbi is primarily oriented towards being a tool for developers to
> drive or validate their work.
>
> However we may feel about subunit, testr, etc., they are a de facto
> standard. As I said in my message at the top of the thread the vast
> majority of effort made in gabbi was getting it to be "tests" in the
> PyUnit view of the universe. And not just appear to be tests, but each
> request as an individual TestCase discoverable and addressable in the
> PyUnit style.
>
> In any case, can you go into more details about your concerns with
> discovery? In my limited exploration thus far the discovery portion is
> not too heavyweight: reading the YAML files.
>
>  2.3) It makes it hard to integrate with other tools, like Rally.
>>
>
> If there's sufficient motivation and time it might make sense to
> separate the part of gabbi that builds TestCases from the part that
> runs (and evaluates) HTTP requests and responses. If that happens then
> integration with tools like Rally and runners is probably possible.
>
>  3) Usage by Operators is hard in case of N projects.
>>
>
> This is not a use case that I really imagined for gabbi. I didn't want
> to create a tool for everyone, I was after satisfying a narrow part of
> the "in tree functional tests" need that's been discussed for the past
> several months. That narrow part is: legible tests of the HTTP aspects
> of project APIs.
>
>  Operators would like to have one button that will say whether the cloud
>> works or not. And they don't want to combine all gabbi files from all
>> projects and run the tests.
>>
>
> So, while this is an interesting idea, it's not something that gabbi
> intends to be. It doesn't validate existing clouds. It validates code
> that is used to run clouds.
>
> Such a thing is probably possible (especially given the fact that you
> can give a "real" host to gabbi tests) but that's not the primary
> goal.
>
>  4) Using subunit format is not good for functional testing.
>>
>> It doesn't allow you to collect detailed information about the execution
>> of a test. For benchmarking, for example, it would be quite interesting
>> to collect the durations of every API call.
>>
>
> I think we've all got different definitions of functional testing. For
> example in my own personal definition I'm not too concerned about test
> times: I'm worried about what fails.
>
> But if you are concerned about individual test times gabbi makes every
> request an individual TestCase, which means that subunit can record times
> for it. Here's a sample of the output from running gabbi's own gabbi
> tests:
>
> $ python -m subunit.run discover gabbi |subunit-trace
> [...]
> gabbi.driver.test_intercept_self_inheritance_of_defaults.test_request
> [0.027512s] ... ok
> [...]
>
>
>
> --
> Chris Dent tw:@anticdent

Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Chris Dent

On Tue, 13 Jan 2015, Boris Pavlovic wrote:


The Idea is brilliant. I may steal it! =)


Feel free.


But there are some issues that will be faced:

1) Using as a base unittest:


python -m subunit.run discover -f gabbi | subunit2pyunit


So the Rally team won't be able to reuse it for load testing (if we directly
integrate it) because we will have huge overhead (of the discovery stuff)


So the use of unittest, subunit and related tools is to allow the
tests to be integrated with the usual OpenStack testing handling. That
is, gabbi is primarily oriented towards being a tool for developers to
drive or validate their work.

However we may feel about subunit, testr, etc., they are a de facto
standard. As I said in my message at the top of the thread the vast
majority of effort made in gabbi was getting it to be "tests" in the
PyUnit view of the universe. And not just appear to be tests, but each
request as an individual TestCase discoverable and addressable in the
PyUnit style.

In any case, can you go into more details about your concerns with
discovery? In my limited exploration thus far the discovery portion is
not too heavyweight: reading the YAML files.


2.3) It makes it hard to integrate with other tools, like Rally.


If there's sufficient motivation and time it might make sense to
separate the part of gabbi that builds TestCases from the part that
runs (and evaluates) HTTP requests and responses. If that happens then
integration with tools like Rally and runners is probably possible.


3) Usage by Operators is hard in case of N projects.


This is not a use case that I really imagined for gabbi. I didn't want
to create a tool for everyone, I was after satisfying a narrow part of
the "in tree functional tests" need that's been discussed for the past
several months. That narrow part is: legible tests of the HTTP aspects
of project APIs.


Operators would like to have one button that will say whether the cloud
works or not. And they don't want to combine all gabbi files from all
projects and run the tests.


So, while this is an interesting idea, it's not something that gabbi
intends to be. It doesn't validate existing clouds. It validates code
that is used to run clouds.

Such a thing is probably possible (especially given the fact that you
can give a "real" host to gabbi tests) but that's not the primary
goal.


4) Using subunit format is not good for functional testing.

It doesn't allow you to collect detailed information about the execution
of a test. For benchmarking, for example, it would be quite interesting
to collect the durations of every API call.


I think we've all got different definitions of functional testing. For
example in my own personal definition I'm not too concerned about test
times: I'm worried about what fails.

But if you are concerned about individual test times gabbi makes every
request an individual TestCase, which means that subunit can record times
for it. Here's a sample of the output from running gabbi's own gabbi
tests:

$ python -m subunit.run discover gabbi |subunit-trace
[...]
gabbi.driver.test_intercept_self_inheritance_of_defaults.test_request 
[0.027512s] ... ok
[...]


--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Chris Dent

On Mon, 12 Jan 2015, Gregory Haynes wrote:


Awesome! I was discussing trying to add extensions to RAML[1] so we
could do something like this the other day. Is there any reason you
didn't use an existing modeling language like this?


Glad you like it.

I chose to go with my own model in the YAML for a few different
reasons:

* I had some pre-existing code[1] that had worked well (but was
  considerably less featureful[2]) so I used that as a starting point.

* I wanted to model HTTP requests and responses _not_ APIs. RAML looks
  pretty interesting but it abstracts at a slightly different level
  for a considerably different purpose. To use it in the context I was
  working towards would require ignoring a lot of the syntax and (as
  far as a superficial read goes) adding a fair bit more.

* I wanted small, simple and clean but [2] came along so now it is
  like most languages: small, simple and clean if you try to make it
  that way, noisy if you let things get out of hand.

[1]
https://github.com/tiddlyweb/tiddlyweb/blob/master/test/http_runner.py
https://github.com/tiddlyweb/tiddlyweb/blob/master/test/httptest.yaml

[2] What I found while building gabbi was that it could be useful as
a TDD tool without many features. The constrained feature set would
result in constrained (and thus limited in the good way) APIs because
the limited expressiveness of the tests would limit ambiguity in the
API.

However, existing APIs were not limited from the outset and have a fair
bit of ambiguity, so to test them a lot of flexibility is required in
the tests. Already in conversations this evening people are asking for
more features in the evaluation of response bodies in order to be able
to test more flexibly.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Gregory Haynes
Excerpts from Chris Dent's message of 2015-01-12 19:20:18 +:
> 
> After some discussion with Sean Dague and a few others it became
> clear that it would be a good idea to introduce a new tool I've been
> working on to the list to get a sense of its usefulness generally,
> work towards getting it into global requirements, and get the
> documentation fleshed out so that people can actually figure out how
> to use it well.
> 
> tl;dr: Help me make this interesting tool useful to you and your
> HTTP testing by reading this message and following some of the links
> and asking any questions that come up.
> 
> The tool is called gabbi
> 
>  https://github.com/cdent/gabbi
>  http://gabbi.readthedocs.org/
>  https://pypi.python.org/pypi/gabbi
> 
> It describes itself as a tool for running HTTP tests where requests
> and responses are represented in a declarative form. Its main
> purpose is to allow testing of APIs where the focus of test writing
> (and reading!) is on the HTTP requests and responses, not on a bunch of
> Python (that obscures the HTTP).
> 
> The tests are written in YAML and the simplest test file has this form:
> 
> ```
> tests:
> - name: a test
>   url: /
> ```
> 
> This test will pass if the response status code is '200'.
> 
> The test file is loaded by a small amount of python code which transforms
> the file into an ordered sequence of TestCases in a TestSuite[1].
> 
> ```
> def load_tests(loader, tests, pattern):
>     """Provide a TestSuite to the discovery process."""
>     test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
>     return driver.build_tests(test_dir, loader, host=None,
>                               intercept=SimpleWsgi,
>                               fixture_module=sys.modules[__name__])
> ```
> 
> The loader provides either:
> 
> * a host to which real over-the-network requests are made
> * a WSGI app which is wsgi-intercept-ed[2]
> 
> If an individual TestCase is asked to be run by the testrunner, those tests
> that are prior to it in the same file are run first, as prerequisites.
> 
> Each test file can declare a sequence of nested fixtures to be loaded
> from a configured (in the loader) module. Fixtures are context managers
> (they establish the fixture upon __enter__ and destroy it upon
> __exit__).
> 
> With a proper group_regex setting in .testr.conf each YAML file can
> run in its own process in a concurrent test runner.
> 
> The docs contain information on the format of the test files:
> 
>  http://gabbi.readthedocs.org/en/latest/format.html
> 
> Each test can state request headers and bodies and evaluate both response
> headers and response bodies. Request bodies can be strings in the
> YAML, files read from disk, or JSON created from YAML structures.
> Response verification can use JSONPath[3] to inspect the details of
> response bodies. Response header validation may use regular
> expressions.
> 
> There is limited support for referring to the previous request
> to construct URIs, potentially allowing traversal of a full HATEOAS
> compliant API.
> 
> At the moment the most complete examples of how things work are:
> 
> * Ceilometer's pending use of gabbi:
>https://review.openstack.org/#/c/146187/
> * Gabbi's testing of gabbi:
>https://github.com/cdent/gabbi/tree/master/gabbi/gabbits_intercept
>(the loader and faked WSGI app for those yaml files is in:
>https://github.com/cdent/gabbi/blob/master/gabbi/test_intercept.py)
> 
> One obvious thing that will need to happen is a suite of concrete
> examples on how to use the various features. I'm hoping that
> feedback will help drive that.
> 
> In my own experimentation with gabbi I've found it very useful. It's
> helped me explore and learn the ceilometer API in a way that existing
> test code has completely failed to do. It's also helped reveal
> several warts that will be very useful to fix. And it is fast. To
> run and to write. I hope that with some work it can be useful to you
> too.
> 
> Thanks.
> 
> [1] Getting gabbi to play well with PyUnit style tests and
>  with infrastructure like subunit and testrepository was one of
>  the most challenging parts of the build, but the result has been
>  a lot of flexibility.
> 
> [2] https://pypi.python.org/pypi/wsgi_intercept
> [3] https://pypi.python.org/pypi/jsonpath-rw
> 

Awesome! I was discussing trying to add extensions to RAML[1] so we
could do something like this the other day. Is there any reason you
didn't use an existing modeling language like this?

Cheers,
Greg

[1] http://raml.org/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Boris Pavlovic
Sean,


I definitely like the direction that gabbi seems to be headed. It feels
> like a much cleaner version of what nova tried to do with API samples.
> As long as multiple projects think this is an interesting direction, I
> think it's probably fine to add it to global-requirements and let them
> start working with it.



+1. More testing, better code.

Best regards,
Boris Pavlovic

On Mon, Jan 12, 2015 at 11:20 PM, Sean Dague  wrote:

> On 01/12/2015 03:11 PM, Dean Troyer wrote:
> > Thanks for this Chris, I'm hoping to get my fingers dirty with it Real
> > Soon Now.
> >
> > On Mon, Jan 12, 2015 at 1:54 PM, Eoghan Glynn wrote:
> >
> > I'd be interested in hearing the api-wg viewpoint, specifically
> whether
> > that working group intends to recommend any best practices around the
> > approach to API testing.
> >
> >
> > Testing recommendations haven't been part of the conversation yet, but I
> > think it is within scope for the WG to have some opinions on REST API
> > design and validation tools.
>
> I definitely like the direction that gabbi seems to be headed. It feels
> like a much cleaner version of what nova tried to do with API samples.
>
> As long as multiple projects think this is an interesting direction, I
> think it's probably fine to add it to global-requirements and let them
> start working with it.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Sean Dague
On 01/12/2015 03:18 PM, Boris Pavlovic wrote:
> Chris, 
> 
> The Idea is brilliant. I may steal it! =)
> 
> But there are some issues that will be faced: 
> 
> 1) Using as a base unittest: 
> 
> python -m subunit.run discover -f gabbi | subunit2pyunit
> 
> 
> So the Rally team won't be able to reuse it for load testing (if we directly
> integrate it) because we will have huge overhead (of the discovery stuff)
> 
> 2) Load testing. 
> 
> Using unittest for functional testing adds a lot of troubles:
> 2.1) It makes things complicated:
> Like reusing fixtures via input YAML will be painful
> 2.2) It adds a lot of functionality that is not required
> 2.3) It makes it hard to integrate with other tools, like Rally.
> 
> 3) Usage by Operators is hard in case of N projects. 
> 
> So you should have some kind of 
> 
> Operators would like to have one button that will say whether the cloud
> works or not. And they don't want to combine all gabbi files from all
> projects and run the tests.
> 
> On the other side, there should be a way to write such code
> in-project-tree (so new features are directly tested) and then move it to
> some common place that is run on every patch (without breaking the gates)
> 
> 4) Using subunit format is not good for functional testing.
>  
> It doesn't allow you to collect detailed information about the execution
> of a test. For benchmarking, for example, it would be quite interesting
> to collect the durations of every API call.

I'm not sure how subunit causes an issue here either way. You can either
put content into one of the existing subunit attachments, or
modify it to have a new one.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Sean Dague
On 01/12/2015 03:11 PM, Dean Troyer wrote:
> Thanks for this Chris, I'm hoping to get my fingers dirty with it Real
> Soon Now.
> 
> On Mon, Jan 12, 2015 at 1:54 PM, Eoghan Glynn wrote:
> 
> I'd be interested in hearing the api-wg viewpoint, specifically whether
> that working group intends to recommend any best practices around the
> approach to API testing.
> 
> 
> Testing recommendations haven't been part of the conversation yet, but I
> think it is within scope for the WG to have some opinions on REST API
> design and validation tools.

I definitely like the direction that gabbi seems to be headed. It feels
like a much cleaner version of what nova tried to do with API samples.

As long as multiple projects think this is an interesting direction, I
think it's probably fine to add it to global-requirements and let them
start working with it.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Boris Pavlovic
Chris,

The Idea is brilliant. I may steal it! =)

But there are some issues that will be faced:

1) Using as a base unittest:

> python -m subunit.run discover -f gabbi | subunit2pyunit


So the Rally team won't be able to reuse it for load testing (if we directly
integrate it) because we will have huge overhead (of the discovery stuff)

2) Load testing.

Using unittest for functional testing adds a lot of troubles:
2.1) It makes things complicated:
Like reusing fixtures via input YAML will be painful
2.2) It adds a lot of functionality that is not required
2.3) It makes it hard to integrate with other tools, like Rally.

3) Usage by operators is hard in the case of N projects.

So you would need some kind of aggregation across projects.

Operators would like to have one button that says whether the cloud works
or not, and they don't want to combine all the gabbi files from all
projects and run the tests themselves.

On the other side, there should be a way to write such code in each
project's tree (so new features are tested directly) and then move it to
some common place that is run on every patch (without breaking the gates).

4) The subunit format is not a good fit for functional testing.

It doesn't let you collect detailed information about the execution of a
test. For benchmarking, for example, it would be quite interesting to
collect the duration of every API call.



Best regards,
Boris Pavlovic


On Mon, Jan 12, 2015 at 10:54 PM, Eoghan Glynn  wrote:

>
>
> > After some discussion with Sean Dague and a few others it became
> > clear that it would be a good idea to introduce a new tool I've been
> > working on to the list to get a sense of its usefulness generally,
> > work towards getting it into global requirements, and get the
> > documentation fleshed out so that people can actually figure out how
> > to use it well.
> >
> > tl;dr: Help me make this interesting tool useful to you and your
> > HTTP testing by reading this message and following some of the links
> > and asking any questions that come up.
> >
> > The tool is called gabbi
> >
> >  https://github.com/cdent/gabbi
> >  http://gabbi.readthedocs.org/
> >  https://pypi.python.org/pypi/gabbi
> >
> > It describes itself as a tool for running HTTP tests where requests
> > and responses are represented in a declarative form. Its main
> > purpose is to allow testing of APIs where the focus of test writing
> > (and reading!) is on the HTTP requests and responses, not on a bunch of
> > Python (that obscures the HTTP).
> >
> > The tests are written in YAML and the simplest test file has this form:
> >
> > ```
> > tests:
> > - name: a test
> >   url: /
> > ```
> >
> > This test will pass if the response status code is '200'.
> >
> > The test file is loaded by a small amount of python code which transforms
> > the file into an ordered sequence of TestCases in a TestSuite[1].
> >
> > ```
> > def load_tests(loader, tests, pattern):
> >     """Provide a TestSuite to the discovery process."""
> >     test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
> >     return driver.build_tests(test_dir, loader, host=None,
> >                               intercept=SimpleWsgi,
> >                               fixture_module=sys.modules[__name__])
> > ```
> >
> > The loader provides either:
> >
> > * a host to which real over-the-network requests are made
> > * a WSGI app which is wsgi-intercept-ed[2]
> >
> > If an individual TestCase is asked to be run by the testrunner, those tests
> > that are prior to it in the same file are run first, as prerequisites.
> >
> > Each test file can declare a sequence of nested fixtures to be loaded
> > from a configured (in the loader) module. Fixtures are context managers
> > (they establish the fixture upon __enter__ and destroy it upon
> > __exit__).
> >
> > With a proper group_regex setting in .testr.conf each YAML file can
> > run in its own process in a concurrent test runner.
> >
> > The docs contain information on the format of the test files:
> >
> >  http://gabbi.readthedocs.org/en/latest/format.html
> >
> > Each test can state request headers and bodies and evaluate both response
> > headers and response bodies. Request bodies can be strings in the
> > YAML, files read from disk, or JSON created from YAML structures.
> > Response verification can use JSONPath[3] to inspect the details of
> > response bodies. Response header validation may use regular
> > expressions.
> >
> > There is limited support for referring to the previous request
> > to construct URIs, potentially allowing traversal of a full HATEOAS
> > compliant API.
> >
> > At the moment the most complete examples of how things work are:
> >
> > * Ceilometer's pending use of gabbi:
> >https://review.openstack.org/#/c/146187/
> > * Gabbi's testing of gabbi:
> >https://github.com/cdent/gabbi/tree/master/gabbi/gabbits_intercept
> >(the loader and faked WSGI app for those yaml files is in:
> >https://github.com/cdent/gabbi/blob/master/gabbi/test_intercept.py)
> >
> > One obvious thing

Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Dean Troyer
Thanks for this, Chris. I'm hoping to get my fingers dirty with it Real Soon
Now.

On Mon, Jan 12, 2015 at 1:54 PM, Eoghan Glynn  wrote:
>
> I'd be interested in hearing the api-wg viewpoint, specifically whether
> that working group intends to recommend any best practices around the
> approach to API testing.
>

Testing recommendations haven't been part of the conversation yet, but I
think it is within scope for the WG to have some opinions on REST API
design and validation tools.

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-12 Thread Eoghan Glynn


> After some discussion with Sean Dague and a few others it became
> clear that it would be a good idea to introduce a new tool I've been
> working on to the list to get a sense of its usefulness generally,
> work towards getting it into global requirements, and get the
> documentation fleshed out so that people can actually figure out how
> to use it well.
> 
> tl;dr: Help me make this interesting tool useful to you and your
> HTTP testing by reading this message and following some of the links
> and asking any questions that come up.
> 
> The tool is called gabbi
> 
>  https://github.com/cdent/gabbi
>  http://gabbi.readthedocs.org/
>  https://pypi.python.org/pypi/gabbi
> 
> It describes itself as a tool for running HTTP tests where requests
> and responses are represented in a declarative form. Its main
> purpose is to allow testing of APIs where the focus of test writing
> (and reading!) is on the HTTP requests and responses, not on a bunch of
> Python (that obscures the HTTP).
> 
> The tests are written in YAML and the simplest test file has this form:
> 
> ```
> tests:
> - name: a test
>   url: /
> ```
> 
> This test will pass if the response status code is '200'.
> 
> The test file is loaded by a small amount of python code which transforms
> the file into an ordered sequence of TestCases in a TestSuite[1].
> 
> ```
> def load_tests(loader, tests, pattern):
>     """Provide a TestSuite to the discovery process."""
>     test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
>     return driver.build_tests(test_dir, loader, host=None,
>                               intercept=SimpleWsgi,
>                               fixture_module=sys.modules[__name__])
> ```
> 
> The loader provides either:
> 
> * a host to which real over-the-network requests are made
> * a WSGI app which is wsgi-intercept-ed[2]
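
For comparison, a sketch of a loader that targets a live server rather than
an intercepted WSGI app; the host and port values here are invented, and this
assumes build_tests accepts a port keyword alongside host:

```
import os

from gabbi import driver

TESTS_DIR = 'gabbits'


def load_tests(loader, tests, pattern):
    """Run the same YAML files over the network against a live API."""
    test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
    # Placeholders: point these at the server under test.
    return driver.build_tests(test_dir, loader,
                              host='api.example.com', port=8080)
```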
> 
> If an individual TestCase is asked to be run by the testrunner, those tests
> that are prior to it in the same file are run first, as prerequisites.
> 
> Each test file can declare a sequence of nested fixtures to be loaded
> from a configured (in the loader) module. Fixtures are context managers
> (they establish the fixture upon __enter__ and destroy it upon
> __exit__).
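
A minimal sketch of such a fixture, relying only on the context-manager
contract described above (a YAML file would opt in by naming the class in its
fixtures list, per the format docs):

```
import shutil
import tempfile


class TempDirFixture:
    """Hypothetical fixture: scratch space for one YAML file's tests."""

    def __enter__(self):
        # Established once, before the first test in the file runs.
        self.workdir = tempfile.mkdtemp()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Destroyed once, after the last test in the file finishes.
        shutil.rmtree(self.workdir)
```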
> 
> With a proper group_regex setting in .testr.conf each YAML file can
> run in its own process in a concurrent test runner.
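
For illustration, a sketch of such a .testr.conf; the group_regex assumes
generated test ids embed the YAML file name (the exact pattern depends on how
the loader names its tests, so treat this as a starting point):

```
[DEFAULT]
test_command=${PYTHON:-python} -m subunit.run discover -t ./ ./mytests $LISTOPT $IDOPTION
test_id_option=--load-list $IDLIST
test_list_option=--list
# Keep every test generated from one YAML file in the same worker so
# the in-file prerequisite ordering is preserved.
group_regex=gabbi\.driver\.test_gabbi_([^_]+)
```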
> 
> The docs contain information on the format of the test files:
> 
>  http://gabbi.readthedocs.org/en/latest/format.html
> 
> Each test can state request headers and bodies and evaluate both response
> headers and response bodies. Request bodies can be strings in the
> YAML, files read from disk, or JSON created from YAML structures.
> Response verification can use JSONPath[3] to inspect the details of
> response bodies. Response header validation may use regular
> expressions.
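
Pulling those pieces together, a sketch of what such a test can look like;
the /resources URL and JSON content are invented, while the keys follow the
format documentation linked above:

```
tests:
- name: create a resource
  url: /resources
  method: POST
  request_headers:
    content-type: application/json
  data:
    name: example
  status: 201

- name: verify the resource
  url: /resources/example
  response_headers:
    # a value wrapped in slashes is treated as a regular expression
    content-type: /application/json/
  response_json_paths:
    $.name: example
```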
> 
> There is limited support for referring to the previous request
> to construct URIs, potentially allowing traversal of a full HATEOAS
> compliant API.
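
A sketch of that previous-request support, assuming the $LOCATION
substitution described in the format documentation (the /things URL is
invented):

```
tests:
- name: create a thing
  url: /things
  method: POST
  request_headers:
    content-type: application/json
  data:
    name: one
  status: 201

- name: follow the new thing
  # $LOCATION is replaced with the location header of the prior response
  url: $LOCATION
  status: 200
```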
> 
> At the moment the most complete examples of how things work are:
> 
> * Ceilometer's pending use of gabbi:
>https://review.openstack.org/#/c/146187/
> * Gabbi's testing of gabbi:
>https://github.com/cdent/gabbi/tree/master/gabbi/gabbits_intercept
>(the loader and faked WSGI app for those yaml files is in:
>https://github.com/cdent/gabbi/blob/master/gabbi/test_intercept.py)
> 
> One obvious thing that will need to happen is a suite of concrete
> examples on how to use the various features. I'm hoping that
> feedback will help drive that.
> 
> In my own experimentation with gabbi I've found it very useful. It's
> helped me explore and learn the ceilometer API in a way that existing
> test code has completely failed to do. It's also helped reveal
> several warts that will be very useful to fix. And it is fast. To
> run and to write. I hope that with some work it can be useful to you
> too.

Thanks for the write-up, Chris,

Needless to say, we're sold on the utility of this on the ceilometer
side, in terms of crafting readable, self-documenting tests that reveal
the core aspects of an API in an easily consumable way.

I'd be interested in hearing the api-wg viewpoint, specifically whether
that working group intends to recommend any best practices around the
approach to API testing.

If so, I think gabbi would be a worthy candidate for consideration.

Cheers,
Eoghan

> Thanks.
> 
> [1] Getting gabbi to play well with PyUnit style tests and
>  with infrastructure like subunit and testrepository was one of
>  the most challenging parts of the build, but the result has been
>  a lot of flexibility.
> 
> [2] https://pypi.python.org/pypi/wsgi_intercept
> [3] https://pypi.python.org/pypi/jsonpath-rw
> 
> --
> Chris Dent tw:@anticdent freenode:cdent
> https://tank.peermore.com/tanks/cdent
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev