On Tue, 13 Jan 2015, Boris Pavlovic wrote:
Having a separate engine seems like a good idea. It will really simplify
I'm not certain that's the case, but it may be worth exploration.
This seems like a huge duplication of effort. I mean operators will write
tools developers own... Why not just resolve the more common problem:
"Does it work or not?"
Because no one tool can solve all problems well. I think it is far
better to have lots of small tools, each focused on doing one or a
few small jobs well.
It may be that there are pieces of gabbi which can be reused or
extracted into more general libraries. If so, that's fantastic. But
I think it is very important to try to solve one problem at a time
rather than everything at once.
$ python -m subunit.run discover gabbi | subunit-trace
[0.027512s] ... ok
What is "test_request"? Just one REST API call?
That long dotted name is the name of a single, dynamically created
TestCase (some metaclass mumbo jumbo magic is used to turn the YAML
into TestCase classes), and within that TestCase is one single HTTP
request and the evaluation of its response. It directly corresponds to a
test named "inheritance of defaults" in a file called self.yaml.
self.yaml is in a directory containing other YAML files, all of which
are loaded by a Python file named test_intercept.py.
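To make the "metaclass mumbo jumbo" less mysterious, here is a minimal
sketch of the general technique: building TestCase classes at runtime
from data with type(). This is NOT gabbi's actual implementation; the
test descriptions and names below are invented for illustration.

```python
import unittest

# Stand-in for test descriptions parsed out of a YAML file.
TEST_DATA = [
    {"name": "inheritance of defaults", "expected_status": 200},
    {"name": "simple get", "expected_status": 200},
]


def make_test_method(desc):
    """Build a test_request method closed over one test description."""
    def test_request(self):
        # A real runner would issue the HTTP request here; the status
        # is stubbed so the sketch stays self-contained.
        actual_status = 200
        self.assertEqual(desc["expected_status"], actual_status)
    return test_request


def build_test_cases(data):
    """Create one TestCase class per description using type()."""
    cases = []
    for desc in data:
        class_name = desc["name"].replace(" ", "_")
        cases.append(type(class_name, (unittest.TestCase,),
                          {"test_request": make_test_method(desc)}))
    return cases
```

Because each generated class is a genuine unittest.TestCase, standard
discovery and runners (such as subunit.run above) can load and report
them like any hand-written test.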
Btw, the thing that I am interested in is how they are all combined?
As I said before: Each yaml file is an ordered sequence of tests, each
one representing a single HTTP request. Fixtures are per yaml file.
There is no cleanup phase outside of the fixtures. Each fixture is
expected to do its own cleanup, if required.
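The per-file fixture idea can be sketched generically as a start/stop
pair where the fixture owns its own teardown. This is a hypothetical
illustration of the pattern, not gabbi's real fixture API, and the
class and resource names are invented.

```python
import contextlib


class PerFileFixture:
    """Set up what one YAML file's tests need; clean it up afterwards."""

    def __init__(self):
        self.started = False
        self.resources = []

    def start(self):
        # Runs once before any test in the file; allocate resources.
        self.resources.append("scratch-database")
        self.started = True

    def stop(self):
        # Runs once after the file's tests; the fixture cleans up
        # after itself, since there is no separate cleanup phase.
        self.resources.clear()
        self.started = False


@contextlib.contextmanager
def use_fixture(fixture):
    """Guarantee stop() runs even if a test raises mid-file."""
    fixture.start()
    try:
        yield fixture
    finally:
        fixture.stop()
```

Wrapping the start/stop pair in a context manager is one way to make
sure cleanup happens even when a test in the file fails.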
And where are you doing cleanup? (like if you would like to test only
the creation of a resource?)
In the ceilometer integration that is currently being built, the
test_gabbi.py file configures itself to use a mongodb database that
is unique to this process. The test harness is responsible for
starting the mongodb. In a concurrency situation, each process will
have a different database in the same mongo server. When the test run
is done, mongo is shut down and the databases are removed.
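A simple way to get a per-process database in a shared server is to
fold the process id into the database name. This is a hedged sketch of
that idea only; the function and prefix below are invented, not taken
from the ceilometer integration.

```python
import os


def unique_database_name(prefix="gabbi_test"):
    """Derive a database name unique to the current test process.

    Concurrent test processes each get their own database in the
    shared server; the harness can drop them all at the end of the
    run.
    """
    return "%s_%d" % (prefix, os.getpid())
```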
In other words, the environment surrounding gabbi is responsible for
doing the things it is good at, and gabbi does the HTTP tests. A
long-running test cannot necessarily depend on what else might be in
the datastore used by the API. It needs to test that which it knows
about.
I hope that clarifies things a bit.
Chris Dent tw:@anticdent freenode:cdent
OpenStack Development Mailing List (not for usage questions)