Re: [openstack-dev] [Ceilometer] [qa] [oslo] Declarative HTTP Tests

2014-10-24 Thread Chris Dent

On Thu, 23 Oct 2014, Doug Hellmann wrote:

Thanks for the feedback, Doug; it's useful.


WebTest isn’t quite what you’re talking about, but does provide a
way to talk to a WSGI app from within a test suite rather simply. Can
you expand a little on why “declarative” tests are better suited
for this than the more usual sorts of tests we write?


I'll add a bit more on why I think declarative tests are useful, but
it basically comes down to explicit transparency of the on-the-wire
HTTP requests and responses. At least within Ceilometer, and at least
for me, unittest-style tests are very difficult to read because a
significant portion of the action happens somewhere in test setup or a
superclass. This may not have much impact on the effectiveness of the
tests for computers, but it's a total killer of their effectiveness as
tools for discovery by a developer who needs to make changes or,
heaven forbid, is merely curious. I think we can agree that a more
informed developer, via a more learnable codebase, is a good thing?
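
To make that concrete, here is a rough sketch of the sort of
declaration I have in mind, with the smallest possible runner. The
format and the URLs are purely illustrative (the format is one of the
open questions, and this is only loosely modelled on the httptest.yaml
linked from the spec), but note that the request, the expected status
and the expected headers are all visible in one place:

    # A hypothetical declarative test file, inlined as a string for
    # brevity, plus a minimal runner. Assumes the API under test is
    # reachable at BASE; everything here is illustrative.
    import requests
    import yaml

    TESTS = yaml.safe_load("""
    - name: list meters
      desc: a plain GET of the meters collection
      method: GET
      url: /v2/meters
      status: 200
      response_headers:
          content-type: application/json

    - name: meter not found
      desc: asking for a bogus meter should 404
      method: GET
      url: /v2/meters/no-such-meter
      status: 404
    """)

    BASE = 'http://localhost:8777'  # assumed endpoint

    for test in TESTS:
        resp = requests.request(test['method'], BASE + test['url'])
        assert resp.status_code == test['status'], test['name']
        for header, value in test.get('response_headers', {}).items():
            assert resp.headers[header].startswith(value), test['name']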

I did look at WebTest and while it looks pretty good I don't particularly
care for its grammar. This might be seen as a frivolous point (because
under the hood the same thing is happening, and it's not this aspect that
would be exposed in the declarations), but calling a method on the `app`
provided by WebTest feels different from using an HTTP library to make
a request of what appears to be a web server hosting the app.
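
To illustrate what I mean by grammar, here is roughly how the two read
side by side (a sketch from memory, assuming `app` is the WSGI
application under test; both run entirely in-process):

    # WebTest: the test calls methods on a wrapped app object.
    from webtest import TestApp

    testapp = TestApp(app)  # `app` is the WSGI app under test
    response = testapp.get('/v2/meters')

    # wsgi-intercept: the test uses an ordinary HTTP client against
    # what appears to be a real host; the request is intercepted and
    # handed to the WSGI app instead of going over the network.
    import requests
    from wsgi_intercept import add_wsgi_intercept, requests_intercept

    requests_intercept.install()
    add_wsgi_intercept('ceilometer.example.com', 80, lambda: app)
    response = requests.get('http://ceilometer.example.com/v2/meters')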

If people prefer WebTest over wsgi-intercept, that's fine, it's not
something worth fighting about. The positions I'm taking in the spec,
as you'll have seen from my responses to Eoghan, are trying to emphasize
the focus on HTTP rather than the app itself. I think this could lead,
eventually, to better HTTP APIs.


I definitely don’t think the ceilometer team should build
something completely new for this without a lot more detail in the
spec about which projects on PyPI were evaluated and rejected as not
meeting the requirements. If we do need/want something like this I
would expect it to be built within the QA program. I don’t know if
it’s appropriate to put it in tempest-lib or if we need a
completely new tool.


I don't think anyone wants to build a new thing unless that turns out
to be necessary, thus this thread. I'm hoping to get input from people
who have thought about or explored this before; I'm hoping we can
build on the shoulders of giants and all that. I'm also hoping to
short-circuit extensive personal research by collaborating with others
who may have already done it.

I have an implementation of the concept that I've used in a previous
project (linked from the spec) which could work as a starting point
from which we could iterate, but if there is something better out there
I'd prefer to start with that.

So the questions from the original post still stand, all input
welcome, please and thank you.


* Is this a good idea?
* Do other projects have similar ideas in progress?
* Is this concept something for which a generic tool should be
  created _prior_ to implementation in an individual project?
* Is there prior art? What's a good format?


--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent


Re: [openstack-dev] [Ceilometer] [qa] [oslo] Declarative HTTP Tests

2014-10-24 Thread David Kranz

On 10/23/2014 06:27 AM, Chris Dent wrote:


I've proposed a spec to Ceilometer

   https://review.openstack.org/#/c/129669/

for a suite of declarative HTTP tests that would be runnable both in
gate check jobs and in local dev environments.

There's been some discussion that this may be generally applicable
and could be best served by a generic tool. My original assertion
was "let's make something work and then see if people like it", but I
thought I'd better also check with the larger world:

* Is this a good idea?

I think so.


* Do other projects have similar ideas in progress?
Tempest faced a similar problem around negative tests in particular. We
have code in Tempest that automatically generates a series of negative
test cases based on illegal variations of a schema. If you want to look
at it, the NegativeAutoTest class is probably a good place to start. We
have discussed using a similar methodology for positive test cases but
never did anything with that.


Currently only a few of the previous negative tests have been replaced
with auto-generated tests. In addition to the issue of how to represent
the schema, the other major issue we encountered was the need to create
resources used by the auto-generated tests, and a way to integrate a
resource description into the schema. We use JSON for the schema and
hoped one day to be able to receive base schemas from the projects
themselves.
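
To give a feel for the approach (this is not the actual
NegativeAutoTest code, just a sketch of the idea): given a schema for a
request, generate payloads that each violate the schema in exactly one
way, and assert that the API rejects every one of them.

    # Sketch only; the field names and schema shape are illustrative.
    import copy

    SCHEMA = {
        'required': ['name', 'counter_type'],
        'properties': {
            'name': {'type': 'string'},
            'counter_type': {'type': 'string',
                             'enum': ['gauge', 'delta', 'cumulative']},
        },
    }

    VALID = {'name': 'cpu_util', 'counter_type': 'gauge'}

    def negative_payloads(schema, valid):
        """Yield payloads that each break the schema in one way."""
        for field in schema['required']:
            broken = copy.deepcopy(valid)
            del broken[field]      # drop a required field
            yield broken
        for field, spec in schema['properties'].items():
            broken = copy.deepcopy(valid)
            broken[field] = 12345  # wrong type for a string field
            yield broken
            if 'enum' in spec:
                broken = copy.deepcopy(valid)
                broken[field] = 'bogus'  # value outside the enum
                yield broken

    # Each payload would then be POSTed to the API with an assertion
    # that the response status is in the 4xx range, e.g.:
    for payload in negative_payloads(SCHEMA, VALID):
        print(payload)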


* Is this concept something for which a generic tool should be
  created _prior_ to implementation in an individual project?

* Is there prior art? What's a good format?
Marc Koderer and I did a lot of searching and asked folks whether there
was some Python code that we could use as a starting point, but in the
end we did not find anything. I do not have a list of what we considered
and rejected.


 -David


Thanks.






Re: [openstack-dev] [Ceilometer] [qa] [oslo] Declarative HTTP Tests

2014-10-24 Thread Doug Hellmann

On Oct 24, 2014, at 6:58 AM, Chris Dent chd...@redhat.com wrote:

 On Thu, 23 Oct 2014, Doug Hellmann wrote:
 
 Thanks for the feedback Doug, it's useful.
 
 WebTest isn’t quite what you’re talking about, but does provide a
 way to talk to a WSGI app from within a test suite rather simply. Can
 you expand a little on why “declarative” tests are better suited
 for this than the more usual sorts of tests we write?
 
 I'll add a bit more on why I think declarative tests are useful, but
 it basically comes down to explicit transparency of the on-the-wire
 HTTP requests and responses. At least within Ceilometer, and at least
 for me, unittest-style tests are very difficult to read because a
 significant portion of the action happens somewhere in test setup or a
 superclass. This may not have much impact on the effectiveness of the
 tests for computers, but it's a total killer of their effectiveness as
 tools for discovery by a developer who needs to make changes or,
 heaven forbid, is merely curious. I think we can agree that a more
 informed developer, via a more learnable codebase, is a good thing?

OK, at first I thought you were talking about writing out literal HTTP 
request/response sets, but looking at the example YAML file in [1] I see you’re 
doing something more abstract. I was worried about minor changes in a 
serialization library somewhere breaking a bunch of tests by changing their 
formatting in insignificant ways, but that shouldn’t be a problem if you’re 
testing the semantic contents rather than the literal contents of the response.

[1] https://github.com/tiddlyweb/tiddlyweb/blob/master/test/httptest.yaml
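
That is, I’d expect assertions in the first style below rather than the
second (a sketch):

    import json

    response_body = '{"unit": "%", "name": "cpu_util"}'  # e.g. resp.text

    # Semantic comparison: immune to key ordering, whitespace, and
    # other insignificant serialization details.
    assert json.loads(response_body) == {'name': 'cpu_util', 'unit': '%'}

    # Literal comparison: breaks whenever the serializer changes its
    # formatting, even though the meaning is unchanged.
    assert response_body == '{"unit": "%", "name": "cpu_util"}'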

 
 I did look at WebTest and while it looks pretty good I don't particularly
 care for its grammar. This might be seen as a frivolous point (because
 under the hood the same thing is happening, and it's not this aspect that
 would be exposed in the declarations), but calling a method on the `app`
 provided by WebTest feels different from using an HTTP library to make
 a request of what appears to be a web server hosting the app.
 
 If people prefer WebTest over wsgi-intercept, that's fine, it's not
 something worth fighting about. The positions I'm taking in the spec,
 as you'll have seen from my responses to Eoghan, are trying to emphasize
 the focus on HTTP rather than the app itself. I think this could lead,
 eventually, to better HTTP APIs.

I’m not familiar with wsgi-intercept, so I can’t really comment on the 
difference there. I find WebTest tests to be reasonably easy to read, but since 
(as I understand it) I would write YAML files instead of Python tests, I’m not 
sure I care which library is used to build the tool as long as we don’t have to 
actually spin up a web server listening on a network port in order to run tests.

 
 I definitely don’t think the ceilometer team should build
 something completely new for this without a lot more detail in the
 spec about which projects on PyPI were evaluated and rejected as not
 meeting the requirements. If we do need/want something like this I
 would expect it to be built within the QA program. I don’t know if
 it’s appropriate to put it in tempest-lib or if we need a
 completely new tool.
 
 I don't think anyone wants to build a new thing unless that turns out
 to be necessary, thus this thread. I'm hoping to get input from people
 who have thought about or explored this before; I'm hoping we can
 build on the shoulders of giants and all that. I'm also hoping to
 short-circuit extensive personal research by collaborating with others
 who may have already done it.
 
 I have an implementation of the concept that I've used in a previous
 project (linked from the spec) which could work as a starting point
 from which we could iterate, but if there is something better out there
 I'd prefer to start with that.
 
 So the questions from the original post still stand, all input
 welcome, please and thank you.
 
 * Is this a good idea?
 * Do other projects have similar ideas in progress?
 * Is this concept something for which a generic tool should be
  created _prior_ to implementation in an individual project?
 * Is there prior art? What's a good format?
 
 -- 
 Chris Dent tw:@anticdent freenode:cdent
 https://tank.peermore.com/tanks/cdent




[openstack-dev] [Ceilometer] [qa] [oslo] Declarative HTTP Tests

2014-10-23 Thread Chris Dent


I've proposed a spec to Ceilometer

   https://review.openstack.org/#/c/129669/

for a suite of declarative HTTP tests that would be runnable both in
gate check jobs and in local dev environments.

There's been some discussion that this may be generally applicable
and could be best served by a generic tool. My original assertion
was "let's make something work and then see if people like it", but I
thought I'd better also check with the larger world:

* Is this a good idea?

* Do other projects have similar ideas in progress?

* Is this concept something for which a generic tool should be
  created _prior_ to implementation in an individual project?

* Is there prior art? What's a good format?

Thanks.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [Ceilometer] [qa] [oslo] Declarative HTTP Tests

2014-10-23 Thread Doug Hellmann

On Oct 23, 2014, at 6:27 AM, Chris Dent chd...@redhat.com wrote:

 
 I've proposed a spec to Ceilometer
 
   https://review.openstack.org/#/c/129669/
 
 for a suite of declarative HTTP tests that would be runnable both in
 gate check jobs and in local dev environments.
 
 There's been some discussion that this may be generally applicable
 and could be best served by a generic tool. My original assertion
 was "let's make something work and then see if people like it", but I
 thought I'd better also check with the larger world:
 
 * Is this a good idea?
 
 * Do other projects have similar ideas in progress?
 
 * Is this concept something for which a generic tool should be
  created _prior_ to implementation in an individual project?
 
 * Is there prior art? What's a good format?

WebTest isn’t quite what you’re talking about, but does provide a way to talk 
to a WSGI app from within a test suite rather simply. Can you expand a little 
on why “declarative” tests are better suited for this than the more usual sorts 
of tests we write?

I definitely don’t think the ceilometer team should build something completely 
new for this without a lot more detail in the spec about which projects on PyPI 
were evaluated and rejected as not meeting the requirements. If we do need/want 
something like this I would expect it to be built within the QA program. I 
don’t know if it’s appropriate to put it in tempest-lib or if we need a 
completely new tool.

Doug

 
 Thanks.
 
 -- 
 Chris Dent tw:@anticdent freenode:cdent
 https://tank.peermore.com/tanks/cdent
 

