On Thu, 23 Oct 2014, Doug Hellmann wrote:

Thanks for the feedback Doug, it's useful.

> WebTest isn’t quite what you’re talking about, but does provide a
> way to talk to a WSGI app from within a test suite rather simply. Can
> you expand a little on why “declarative” tests are better suited
> for this than the more usual sorts of tests we write?

I'll add a bit more on why I think declarative tests are useful, but
it basically comes down to explicit transparency of the on-the-wire
HTTP requests and responses. At least within Ceilometer, and at least
for me, unittest-style tests are very difficult to read because a
significant portion of the action happens somewhere in test setup or a
superclass. This may not have much impact on the effectiveness of the
tests for computers, but it's a total killer of their effectiveness
as tools for discovery by a developer who needs to make changes or,
heaven forbid, is merely curious. I think we can agree that a more
informed developer, via a more learnable codebase, is a good thing?
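To make the contrast concrete, here's a minimal sketch of the idea (the
test format and names are purely illustrative, not a proposal): each test
is plain data, so the whole request and the whole expected response are
visible in one place, with no setUp() or superclass to go hunting through.

```python
import io

def simple_app(environ, start_response):
    """A trivial WSGI app standing in for the service under test."""
    if environ["PATH_INFO"] == "/":
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"hello"]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]

# Hypothetical declarative format: every field of the HTTP exchange
# is spelled out right here, readable without running anything.
TESTS = [
    {"name": "root ok", "method": "GET", "path": "/",
     "status": "200 OK", "body": b"hello"},
    {"name": "missing", "method": "GET", "path": "/nope",
     "status": "404 Not Found", "body": b"not found"},
]

def run_declarative_tests(app, tests):
    """Drive the WSGI app directly and check each declared expectation."""
    results = []
    for test in tests:
        environ = {
            "REQUEST_METHOD": test["method"],
            "PATH_INFO": test["path"],
            "SERVER_NAME": "localhost",
            "SERVER_PORT": "80",
            "wsgi.input": io.BytesIO(),
            "wsgi.errors": io.BytesIO(),
            "wsgi.url_scheme": "http",
            "wsgi.version": (1, 0),
        }
        captured = {}

        def start_response(status, headers):
            captured["status"] = status

        body = b"".join(app(environ, start_response))
        ok = captured["status"] == test["status"] and body == test["body"]
        results.append((test["name"], ok))
    return results
```

Running `run_declarative_tests(simple_app, TESTS)` yields a pass/fail
result per declared test; the point is that a reader can audit the HTTP
behaviour by reading the data alone.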

I did look at WebTest and while it looks pretty good I don't particularly
care for its grammar. This might be seen as a frivolous point (because
under the hood the same thing is happening, and it's not this aspect that
would be exposed by the declarations), but calling a method on the `app`
object provided by WebTest feels different from using an HTTP library to
make a request of what appears to be a web server hosting the app.
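For illustration, and using only the standard library (rather than WebTest
or wsgi-intercept, whose APIs I won't reproduce from memory here), this is
roughly the feel I'm after: the test talks real HTTP to something that
looks like a server hosting the app, instead of calling methods on a
wrapper object.

```python
import threading
import urllib.request
from wsgiref.simple_server import make_server

def simple_app(environ, start_response):
    # Stand-in WSGI application for the app under test.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

# Host the app on a real local HTTP server in a background thread,
# so the test exercises the actual on-the-wire request/response cycle.
server = make_server("localhost", 0, simple_app)  # port 0: pick a free port
port = server.server_address[1]
thread = threading.Thread(target=server.serve_forever, daemon=True)
thread.start()

# The test then uses an ordinary HTTP client, exactly as a user would.
response = urllib.request.urlopen("http://localhost:%d/" % port)
body = response.read()
server.shutdown()
```

With wsgi-intercept the server would be faked rather than real, but the
client-side grammar stays the same, which is the part that matters to me.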

If people prefer WebTest over wsgi-intercept, that's fine, it's not
something worth fighting about. The positions I'm taking in the spec,
as you'll have seen from my responses to Eoghan, are trying to emphasize
the focus on HTTP rather than the app itself. I think this could lead,
eventually, to better HTTP APIs.

> I definitely don’t think the ceilometer team should build
> something completely new for this without a lot more detail in the
> spec about which projects on PyPI were evaluated and rejected as not
> meeting the requirements. If we do need/want something like this I
> would expect it to be built within the QA program. I don’t know if
> it’s appropriate to put it in tempestlib or if we need a
> completely new tool.

I don't think anyone wants to build a new thing unless that turns out
to be necessary, thus this thread. I'm hoping to get input from people
who have thought about or explored this before. I'm hoping we can
build on the shoulders of giants and all that. I'm also hoping to
short-circuit extensive personal research by collaborating with others
who may have already done it.

I have an implementation of the concept that I've used in a previous
project (linked from the spec), which could work as a starting point
from which we could iterate, but if there is something better out there
I'd prefer to start with that.

So the questions from the original post still stand, all input
welcome, please and thank you.

* Is this a good idea?
* Do other projects have similar ideas in progress?
* Is this concept something for which a generic tool should be
  created _prior_ to implementation in an individual project?
* Is there prior art? What's a good format?

Chris Dent tw:@anticdent freenode:cdent
OpenStack-dev mailing list
