Re: [openstack-dev] [qa] moratorium on new negative tests in Tempest

2013-11-22 Thread Daisuke Morita


Great work!
Now that the test cases in Tempest are well-stocked, it is a good time to 
rethink the design of the test code.


I checked the mailing lists, IRC logs and etherpads related to this topic. 
Let me leave my five thoughts below.


How to handle:
1. Data types (e.g., int, bool)
2. Support for specific values or formats (e.g., regular expressions)
3. Boundary value analysis (David mentions this issue below)
4. Invalid non-ASCII (UTF-8) values (Ken'ichi mentions this in his mail of Nov 13)
5. Errors that require complicated pre- or post-processing to reproduce
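
For illustration only, hypothetical example inputs for categories 1-4 might look like the following (category 5 is omitted, since errors that need complex pre- or post-processing cannot be captured by a single value; all field names here are invented, not a real API contract):

# Hypothetical example inputs, one per category above; "ram", "server_id"
# and "name" are placeholder field names, not a real API contract.
EXAMPLE_INVALID_INPUTS = {
    "1. data type":      {"ram": "not-an-int"},           # string where an int is expected
    "2. value/format":   {"server_id": "not-a-uuid"},     # fails a uuid regexp
    "3. boundary value": {"ram": 0},                      # just below an assumed minimum of 1
    "4. non-ascii":      {"name": u"\u30c6\u30b9\u30c8"}, # utf-8, non-ascii value
}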



I suggest that issues 1-4 be considered within the scope of the new framework. 
From the sources above, I sense a slight bias towards invalid-value testing.


On the other hand, I think some tests will remain outside this framework.
In Swift, for example, the maximum total size of the HTTP headers sent for 
metadata is 4096 bytes, while the maximum size of a metadata key is 128 bytes 
and the maximum size of a metadata value is 256 bytes. It might be difficult 
to test the boundary value of the total HTTP header size with the new 
framework. In such cases, is it OK to write test cases as in the current 
implementation?
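
For illustration, a rough (untested) sketch of such a boundary case, written directly against the HTTP API with python-requests; the endpoint and token are placeholders, and exactly which bytes Swift counts toward the 4096-byte limit is an assumption that would need checking:

# Rough sketch only: probes the total-metadata-size boundary of a Swift
# container.  SWIFT_URL and TOKEN are placeholders; whether header-name
# prefixes count toward the 4096-byte limit must be verified against Swift.
import requests

SWIFT_URL = "http://example.com:8080/v1/AUTH_test/container"  # placeholder
TOKEN = "placeholder-token"                                    # placeholder

def metadata_headers(total_bytes, value_size=256):
    """Build X-Container-Meta-* headers whose key+value lengths add up to
    exactly total_bytes (assumes total_bytes is large relative to one key)."""
    headers = {}
    remaining = total_bytes
    i = 0
    while remaining > 0:
        key = "X-Container-Meta-K%04d" % i            # 22 characters
        value_len = min(value_size, max(remaining - len(key), 0))
        headers[key] = "v" * value_len
        remaining -= len(key) + value_len
        i += 1
    return headers

# At the limit the request is expected to succeed (2xx)...
at_limit = dict(metadata_headers(4096), **{"X-Auth-Token": TOKEN})
assert requests.post(SWIFT_URL, headers=at_limit).status_code < 300

# ...and one byte over it a 400 Bad Request is expected.
over_limit = dict(metadata_headers(4097), **{"X-Auth-Token": TOKEN})
assert requests.post(SWIFT_URL, headers=over_limit).status_code == 400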



Anyway, I never want to derail this work. I am looking forward to a 
prototype :)



Best Regards,
Daisuke Morita


Excerpts from David Kranz's message of 2013-11-12 14:33:04 -0500:



I am working on this with Marc Koderer but we only just started and are
not quite ready. But since you asked now...

The problem is that, in the current implementation of negative tests, each 
case is represented as code in a method and targets a particular set of api 
arguments and expected result. In most (but not all) of these tests there is 
boilerplate code surrounding the real content, which is the actual arguments 
being passed and the value expected. That
boilerplate code has to be written correctly and reviewed. The general
form of the solution has to be worked out but basically would involve
expressing these tests declaratively, perhaps in a yaml file. In order
to do this we will need some kind of json schema for each api. The main
implementation around this is defining the yaml attributes that make it
easy to express the test cases, and somehow coming up with the json
schema for each api.
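
To make that concrete, here is one purely illustrative shape such a declarative case and its json schema could take (the yaml attribute names and the flavor api used here are invented for the example, not the format actually being proposed on the etherpad):

# Illustrative only: a declaratively specified negative case plus the json
# schema a generator would check it against.  Attribute names are invented.
import yaml

NEGATIVE_CASE = yaml.safe_load("""
name: create-flavor-with-negative-ram
resource: /flavors
method: POST
json-body:
  name: neg-test-flavor
  ram: -1              # violates the schema below
  vcpus: 1
expected_result: 400
""")

# Schema describing a *valid* request body for the same api; bad values
# (wrong types, out-of-range numbers, over-long strings) can be derived
# from it automatically.
FLAVOR_SCHEMA = {
    "type": "object",
    "properties": {
        "name": {"type": "string", "maxLength": 255},
        "ram": {"type": "integer", "minimum": 1},
        "vcpus": {"type": "integer", "minimum": 1},
    },
    "required": ["name", "ram", "vcpus"],
}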

In addition, we would like to support fuzz testing where arguments
are, at least partially, randomly generated and the return values are
only examined for 4xx vs something else. This would be possible if we
had json schemas. The main work is to write a generator and methods for
creating bad values including boundary conditions for types with ranges.
I had thought a bit about this last year and poked around for an
existing framework. I didn't find anything that seemed to make the job
much easier, but if anyone knows of such a thing (python, hopefully)
please let me know.
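
As a sketch of what such a generator could look like (plain python, driven by json-schema-style property descriptions like the one above; nothing here is existing Tempest code):

# Sketch of a schema-driven bad-value generator: for each property in a json
# schema it yields values that violate the declared type or range, including
# boundary conditions for integer ranges, plus one random fuzz value.
import random
import string

def bad_values_for(prop_schema):
    ptype = prop_schema.get("type")
    if ptype == "integer":
        if "minimum" in prop_schema:
            yield prop_schema["minimum"] - 1          # just below the boundary
        if "maximum" in prop_schema:
            yield prop_schema["maximum"] + 1          # just above the boundary
        yield "not-a-number"                          # wrong type
    elif ptype == "string":
        if "maxLength" in prop_schema:
            yield "x" * (prop_schema["maxLength"] + 1)
        yield 12345                                   # wrong type
    # catch-all fuzz value: random printable garbage
    yield "".join(random.choice(string.printable) for _ in range(32))

def fuzz_cases(schema, valid_body):
    """Yield (description, body) pairs, each mutating one field of an
    otherwise valid request body."""
    for name, prop in schema.get("properties", {}).items():
        for bad in bad_values_for(prop):
            yield ("%s=%r" % (name, bad), dict(valid_body, **{name: bad}))

# Each generated body would be sent to the api and the response checked only
# for "some 4xx" rather than an exact error code.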

The negative tests for each api would be some combination of
declaratively specified cases and auto-generated ones.

With regard to the json schema, there have been various attempts at this
in the past, including some ideas of how wsme/pecan will help, and it
might be helpful to have more project coordination. I can see a few options:

1. Tempest keeps its own json schema data
2. Each project keeps its own json schema in a way that supports
automated extraction
3. There are several use cases for json schema like this and it gets
stored in some openstacky place that is not in tempest

So that is the starting point. Comments and suggestions welcome! Marc
and I just started working on an etherpad
https://etherpad.openstack.org/p/bp_negative_tests but anyone is
welcome to contribute there.

   -David


--
Daisuke Morita morita.dais...@lab.ntt.co.jp
NTT Software Innovation Center, NTT Corporation


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] moratorium on new negative tests in Tempest

2013-11-13 Thread Koderer, Marc
Hi,

see below.

Regards
Marc

From: Kenichi Oomichi [oomi...@mxs.nes.nec.co.jp]
Sent: Wednesday, November 13, 2013 7:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [qa] moratorium on new negative tests in Tempest

 -Original Message-
 From: David Kranz [mailto:dkr...@redhat.com]
 Sent: Wednesday, November 13, 2013 4:33 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [qa] moratorium on new negative tests in Tempest

 So that is the starting point. Comments and suggestions welcome! Marc
 and I just started working on an etherpad
 https://etherpad.openstack.org/p/bp_negative_tests but anyone is
 welcome to contribute there.

Negative tests based on yaml would be nice for cleaning up the code
and making the tests more readable.
Just one question:
 On the etherpad, there are some invalid_uuids.
 Does that mean an invalid string (e.g. a utf-8 string, not ascii)
 or an invalid uuid format (e.g. uuid.uuid4() + foo)?

Great that you already had a look!
My idea is that we have a battery of functions which can create erroneous 
input.
My intention for invalid_uuid was just something like uuid.uuid4(), but the 
name is a bit misleading.
We can add further functions that create the kinds of input you are 
suggesting; I think all of them make sense.
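
For what it's worth, a minimal sketch of what such a battery of functions could look like (the names and the exact split are just guesses, not what is on the etherpad):

# Hypothetical erroneous-input helpers; purely illustrative.
import uuid

def invalid_uuid():
    """A well-formed uuid that should not refer to any existing resource."""
    return str(uuid.uuid4())

def malformed_uuid():
    """A string that is not a valid uuid at all."""
    return str(uuid.uuid4()) + "-foo"

def non_ascii_string():
    """A utf-8 string containing non-ascii characters."""
    return u"\u30c6\u30b9\u30c8"    # Japanese for "test"

def oversized_string(limit):
    """A string one character longer than the given limit."""
    return "x" * (limit + 1)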

 IIUC, in the negative test session we discussed that tests passing a utf-8
 string as an API parameter should be negative tests, and that the server
 should return a BadRequest response.
 I guess we need to implement such API negative tests. After that, if we find
 unfavorable behavior in some server, we need to implement API validation
 for that server.
 (Example of unfavorable behavior: when a client sends a utf-8 request, the
 server returns a NotFound response instead of a BadRequest one.)

+1
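
As a purely illustrative example of the check being described, using python-requests against a placeholder endpoint (the exact API and expected behavior are assumptions here, not settled Tempest policy):

# Sketch: a non-ascii utf-8 value passed where an ascii id is expected should
# be rejected with 400 BadRequest; a 404 NotFound would be the "unfavorable
# behavior" described above.  The endpoint and token are placeholders.
import requests

server_id = u"\u30c6\u30b9\u30c8"        # non-ascii where a uuid is expected
resp = requests.get("http://example.com:8774/v2/servers/" + server_id,
                    headers={"X-Auth-Token": "placeholder-token"})

assert resp.status_code == 400, (
    "expected 400 BadRequest, got %d (a 404 here is the unfavorable "
    "behavior mentioned above)" % resp.status_code)
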
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] moratorium on new negative tests in Tempest

2013-11-12 Thread Monty Taylor


On 11/12/2013 02:33 PM, David Kranz wrote:
 On 11/12/2013 01:36 PM, Clint Byrum wrote:
 Excerpts from Sean Dague's message of 2013-11-12 10:01:06 -0800:
 During the freeze phase of Havana we got a ton of new contributors
 coming on board to Tempest, which was super cool. However it meant we
 had this new influx of negative tests (i.e. tests which push invalid
 parameters looking for error codes) which made us realize that human
 creation and review of negative tests really doesn't scale. David Kranz
 is working on a generative model for this now.

 Are there some notes or other source material we can follow to understand
 this line of thinking? I don't agree or disagree with it, as I don't
 really understand, so it would be helpful to have the problems enumerated
 and the solution hypothesis stated. Thanks!

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 I am working on this with Marc Koderer but we only just started and are
 not quite ready. But since you asked now...
 
 The problem is that, in the current implementation of negative tests, each
 case is represented as code in a method and targets a particular set of api
 arguments and expected result. In most (but not all) of these tests there is
 boilerplate code surrounding the real content, which is the actual arguments
 being passed and the value expected. That
 boilerplate code has to be written correctly and reviewed. The general
 form of the solution has to be worked out but basically would involve
 expressing these tests declaratively, perhaps in a yaml file. In order
 to do this we will need some kind of json schema for each api. The main
 implementation around this is defining the yaml attributes that make it
 easy to express the test cases, and somehow coming up with the json
 schema for each api.
 
 In addition, we would like to support fuzz testing where arguments
 are, at least partially, randomly generated and the return values are
 only examined for 4xx vs something else. This would be possible if we
 had json schemas. The main work is to write a generator and methods for
 creating bad values including boundary conditions for types with ranges.
 I had thought a bit about this last year and poked around for an
 existing framework. I didn't find anything that seemed to make the job
 much easier, but if anyone knows of such a thing (python, hopefully)
 please let me know.
 
 The negative tests for each api would be some combination of
 declaratively specified cases and auto-generated ones.
 
 With regard to the json schema, there have been various attempts at this
 in the past, including some ideas of how wsme/pecan will help, and it
 might be helpful to have more project coordination. I can see a few
 options:
 
 1. Tempest keeps its own json schema data
 2. Each project keeps its own json schema in a way that supports
 automated extraction
 3. There are several use cases for json schema like this and it gets
 stored in some openstacky place that is not in tempest
 
 So that is the starting point. Comments and suggestions welcome! Marc
 and I just started working on an etherpad
 https://etherpad.openstack.org/p/bp_negative_tests but anyone is
 welcome to contribute there.

We actually did this back in the good old Drizzle days - and by "we", I
mean Patrick Crews, whom I copied here. He can speak to the research better
than I can, but AIUI, generative schema-driven testing of things like this is
certainly the right direction. It's about 10 years behind the actual state of
the art in the research, but it is in every way superior to having humans
hand-craft combinations of input parameters and output behaviors.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] moratorium on new negative tests in Tempest

2013-11-12 Thread Rochelle.Grober
I agree that parametric testing with input generators is the way to go for 
API testing, both positive and negative. I've looked at a number of 
frameworks in the past, and the one that until recently was highest on my 
list is Robot:
http://code.google.com/p/robotframework/

I had looked at it in the past for doing parametric testing for APIs.  It 
doesn't seem to have the generators, but it has a fair amount of infrastructure.

But while searching in preparation for responding to this email, I stumbled 
upon a test framework I had not seen before that looks promising:
http://www.squashtest.org/index.php/en/squash-ta/squash-ta-overview
It keeps the data generation separate from the test code, the setup, and the 
teardown. It actually looks quite interesting, and it is open source. It 
might not pan out, but it's worth a look.

Another page by the same group describes the data generators:
http://www.squashtest.org/index.php/en/what-is-squash/tools-and-functionalities/squash-data
I'm not sure just how much of the project is open source, but I suspect 
enough for our purposes. The other question is whether the licensing is 
acceptable for OpenStack.org.

I'm willing to jump in and help on this, as this sort of thing is my 
bailiwick. A subgroup, maybe?

I also want to get some of the QA/test lore written down so newbies can come 
up to speed sooner and we can reduce some of the vagueness that causes reviews 
to thrash a bit. I started a blueprint:
https://blueprints.launchpad.net/tempest/+spec/test-developer-documentation
Being pretty much a newbie myself, I wasn't sure how to start (I have only 
limited access to IRC), but I realized I should start an etherpad with 
strawman sections and let people edit there.

Hope this is useful.

--Rocky

-Original Message-
From: pcrews [mailto:glee...@gmail.com] 
Sent: Tuesday, November 12, 2013 2:03 PM
To: Monty Taylor; openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [qa] moratorium on new negative tests in Tempest

On 11/12/2013 12:20 PM, Monty Taylor wrote:


 On 11/12/2013 02:33 PM, David Kranz wrote:
 On 11/12/2013 01:36 PM, Clint Byrum wrote:
 Excerpts from Sean Dague's message of 2013-11-12 10:01:06 -0800:
 During the freeze phase of Havana we got a ton of new contributors
 coming on board to Tempest, which was super cool. However it meant we
 had this new influx of negative tests (i.e. tests which push invalid
 parameters looking for error codes) which made us realize that human
 creation and review of negative tests really doesn't scale. David Kranz
 is working on a generative model for this now.

 Are there some notes or other source material we can follow to understand
 this line of thinking? I don't agree or disagree with it, as I don't
 really understand, so it would be helpful to have the problems enumerated
 and the solution hypothesis stated. Thanks!

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 I am working on this with Marc Koderer but we only just started and are
 not quite ready. But since you asked now...

 The problem is that, in the current implementation of negative tests, each
 case is represented as code in a method and targets a particular set of api
 arguments and expected result. In most (but not all) of these tests there is
 boilerplate code surrounding the real content, which is the actual arguments
 being passed and the value expected. That
 boilerplate code has to be written correctly and reviewed. The general
 form of the solution has to be worked out but basically would involve
 expressing these tests declaratively, perhaps in a yaml file. In order
 to do this we will need some kind of json schema for each api. The main
 implementation around this is defining the yaml attributes that make it
 easy to express the test cases, and somehow coming up with the json
 schema for each api.

 In addition, we would like to support fuzz testing where arguments
 are, at least partially, randomly generated and the return values are
 only examined for 4xx vs something else. This would be possible if we
 had json schemas. The main work is to write a generator and methods for
 creating bad values including boundary conditions for types with ranges.
 I had thought a bit about this last year and poked around for an
 existing framework. I didn't find anything that seemed to make the job
 much easier, but if anyone knows of such a thing (python, hopefully)
 please let me know.

 The negative tests for each api would be some combination of
 declaratively specified cases and auto-generated ones.

 With regard to the json schema, there have been various attempts at this
 in the past, including some ideas of how wsme/pecan will help, and it
 might be helpful to have more project coordination. I can see a few
 options:

 1. Tempest keeps its own json schema data
 2. Each project keeps its own json schema