Re: [openstack-dev] [Nova] Remove duplicate code using Data Driven Tests (DDT)

2016-07-25 Thread John Garbutt
On 25 July 2016 at 13:56, Bhor, Dinesh <dinesh.b...@nttdata.com> wrote:
>
>
> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: Monday, July 25, 2016 5:53 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova] Remove duplicate code using Data Driven 
> Tests (DDT)
>
> On 07/25/2016 08:05 AM, Daniel P. Berrange wrote:
>> On Mon, Jul 25, 2016 at 07:57:08AM -0400, Sean Dague wrote:
>>> On 07/22/2016 11:30 AM, Daniel P. Berrange wrote:
>>>> On Thu, Jul 21, 2016 at 07:03:53AM -0700, Matt Riedemann wrote:
>>>>> On 7/21/2016 2:03 AM, Bhor, Dinesh wrote:
>>>>>
>>>>> I agree that it's not a bug. I also agree that it helps in some
>>>>> specific types of tests which are doing some kind of input
>>>>> validation (like the patch you've proposed) or are simply iterating
>>>>> over some list of values (status values on a server instance for example).
>>>>>
>>>>> Using DDT in Nova has come up before and one of the concerns was
>>>>> hiding details in how the tests are run with a library, and if
>>>>> there would be a learning curve. Depending on the usage, I
>>>>> personally don't have a problem with it. When I used it in manila
>>>>> it took a little getting used to but I was basically just looking
>>>>> at existing tests and figuring out what they were doing when adding new 
>>>>> ones.
>>>>
>>>> I don't think there's significant learning curve there - the way it
>>>> lets you annotate the test methods is pretty easy to understand and
>>>> the ddt docs spell it out clearly for newbies. We've far worse
>>>> things in our code that create a hard learning curve which people
>>>> will hit first :-)
>>>>
>>>> People have essentially been re-inventing ddt in nova tests already
>>>> by defining one helper method and then having multiple test methods
>>>> all calling the same helper with a different dataset. So ddt is just
>>>> formalizing what we're already doing in many places, with less code
>>>> and greater clarity.
>>>>
>>>>> I definitely think DDT is easier to use/understand than something
>>>>> like testscenarios, which we're already using in Nova.
>>>>
>>>> Yeah, testscenarios feels a little over-engineered for what we want
>>>> most of the time.
>>>
>>> Except, DDT is way less clear (and deterministic) about what's going
>>> on with the test name munging. Which means failures are harder to
>>> track back to individual tests and data load. So debugging the failures is 
>>> harder.
>>
>> I'm not sure what you think is unclear - given an annotated test:
>>
>>@ddt.data({"foo": "test", "availability_zone": "nova1"},
>>   {"name": "  test  ", "availability_zone": "nova1"},
>>   {"name": "", "availability_zone": "nova1"},
>>   {"name": "x" * 256, "availability_zone": "nova1"},
>>   {"name": "test", "availability_zone": "x" * 256},
>>   {"name": "test", "availability_zone": "  nova1  "},
>>   {"name": "test", "availability_zone": ""},
>>   {"name": "test", "availability_zone": "nova1", "foo": "bar"})
>> def test_create_invalid_create_aggregate_data(self, value):
>>
>> It generates one test for each data item:
>>
>>  test_create_invalid_create_aggregate_data_1
>>  test_create_invalid_create_aggregate_data_2
>>  test_create_invalid_create_aggregate_data_3
>>  test_create_invalid_create_aggregate_data_4
>>  test_create_invalid_create_aggregate_data_5
>>  test_create_invalid_create_aggregate_data_6
>>  test_create_invalid_create_aggregate_data_7
>>  test_create_invalid_create_aggregate_data_8
>>
>> This seems about as obvious as you can possibly get
>
> At least when this was attempted in Tempest, the naming was a lot less
> clear; maybe it has gotten better. But I still think milestone 3 isn't the
> time to start a thing like this.
>
> -Sean
>
> Hi Sean,
>
> IMO it is possible to have a descriptive name for test cases using DDT.

Re: [openstack-dev] [Nova] Remove duplicate code using Data Driven Tests (DDT)

2016-07-25 Thread Daniel P. Berrange
On Mon, Jul 25, 2016 at 08:22:52AM -0400, Sean Dague wrote:
> On 07/25/2016 08:05 AM, Daniel P. Berrange wrote:
> > On Mon, Jul 25, 2016 at 07:57:08AM -0400, Sean Dague wrote:
> >> On 07/22/2016 11:30 AM, Daniel P. Berrange wrote:
> >>> On Thu, Jul 21, 2016 at 07:03:53AM -0700, Matt Riedemann wrote:
>  On 7/21/2016 2:03 AM, Bhor, Dinesh wrote:
> 
>  I agree that it's not a bug. I also agree that it helps in some specific
>  types of tests which are doing some kind of input validation (like the 
>  patch
>  you've proposed) or are simply iterating over some list of values (status
>  values on a server instance for example).
> 
>  Using DDT in Nova has come up before and one of the concerns was hiding
>  details in how the tests are run with a library, and if there would be a
>  learning curve. Depending on the usage, I personally don't have a problem
>  with it. When I used it in manila it took a little getting used to but I 
>  was
>  basically just looking at existing tests and figuring out what they were
>  doing when adding new ones.
> >>>
> >>> I don't think there's significant learning curve there - the way it
> >>> lets you annotate the test methods is pretty easy to understand and
> >>> the ddt docs spell it out clearly for newbies. We've far worse things
> >>> in our code that create a hard learning curve which people will hit
> >>> first :-)
> >>>
> >>> People have essentially been re-inventing ddt in nova tests already
> >>> by defining one helper method and then having multiple test methods
> >>> all calling the same helper with a different dataset. So ddt is just
> >>> formalizing what we're already doing in many places, with less code
> >>> and greater clarity.
> >>>
>  I definitely think DDT is easier to use/understand than something like
>  testscenarios, which we're already using in Nova.
> >>>
> >>> Yeah, testscenarios feels a little over-engineered for what we want most
> >>> of the time.
> >>
> >> Except, DDT is way less clear (and deterministic) about what's going on
> >> with the test name munging. Which means failures are harder to track
> >> back to individual tests and data load. So debugging the failures is 
> >> harder.
> > 
> > I'm not sure what you think is unclear - given an annotated test:
> > 
> >@ddt.data({"foo": "test", "availability_zone": "nova1"},
> >   {"name": "  test  ", "availability_zone": "nova1"},
> >   {"name": "", "availability_zone": "nova1"},
> >   {"name": "x" * 256, "availability_zone": "nova1"},
> >   {"name": "test", "availability_zone": "x" * 256},
> >   {"name": "test", "availability_zone": "  nova1  "},
> >   {"name": "test", "availability_zone": ""},
> >   {"name": "test", "availability_zone": "nova1", "foo": "bar"})
> > def test_create_invalid_create_aggregate_data(self, value):
> > 
> > It generates one test for each data item:
> > 
> >  test_create_invalid_create_aggregate_data_1
> >  test_create_invalid_create_aggregate_data_2
> >  test_create_invalid_create_aggregate_data_3
> >  test_create_invalid_create_aggregate_data_4
> >  test_create_invalid_create_aggregate_data_5
> >  test_create_invalid_create_aggregate_data_6
> >  test_create_invalid_create_aggregate_data_7
> >  test_create_invalid_create_aggregate_data_8
> > 
> > This seems about as obvious as you can possibly get
> 
> At least when this was attempted in Tempest, the naming was a lot less
> clear; maybe it has gotten better. But I still think milestone 3 isn't
> the time to start a thing like this.

Historically we've allowed patches that improve or adapt unit tests to be
merged at any time that we're not in a final bug-fix-only freeze period.
On this basis, I'm happy to see this accepted now, especially since the
module is already in global requirements, so it is not a new thing from an
OpenStack POV.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Remove duplicate code using Data Driven Tests (DDT)

2016-07-25 Thread Bhor, Dinesh


-Original Message-
From: Sean Dague [mailto:s...@dague.net] 
Sent: Monday, July 25, 2016 5:53 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] Remove duplicate code using Data Driven 
Tests (DDT)

On 07/25/2016 08:05 AM, Daniel P. Berrange wrote:
> On Mon, Jul 25, 2016 at 07:57:08AM -0400, Sean Dague wrote:
>> On 07/22/2016 11:30 AM, Daniel P. Berrange wrote:
>>> On Thu, Jul 21, 2016 at 07:03:53AM -0700, Matt Riedemann wrote:
>>>> On 7/21/2016 2:03 AM, Bhor, Dinesh wrote:
>>>>
>>>> I agree that it's not a bug. I also agree that it helps in some 
>>>> specific types of tests which are doing some kind of input 
>>>> validation (like the patch you've proposed) or are simply iterating 
>>>> over some list of values (status values on a server instance for example).
>>>>
>>>> Using DDT in Nova has come up before and one of the concerns was 
>>>> hiding details in how the tests are run with a library, and if 
>>>> there would be a learning curve. Depending on the usage, I 
>>>> personally don't have a problem with it. When I used it in manila 
>>>> it took a little getting used to but I was basically just looking 
>>>> at existing tests and figuring out what they were doing when adding new 
>>>> ones.
>>>
>>> I don't think there's significant learning curve there - the way it 
>>> lets you annotate the test methods is pretty easy to understand and 
>>> the ddt docs spell it out clearly for newbies. We've far worse 
>>> things in our code that create a hard learning curve which people 
>>> will hit first :-)
>>>
>>> People have essentially been re-inventing ddt in nova tests already 
>>> by defining one helper method and then having multiple test methods 
>>> all calling the same helper with a different dataset. So ddt is just 
>>> formalizing what we're already doing in many places, with less code 
>>> and greater clarity.
>>>
>>>> I definitely think DDT is easier to use/understand than something 
>>>> like testscenarios, which we're already using in Nova.
>>>
>>> Yeah, testscenarios feels a little over-engineered for what we want 
>>> most of the time.
>>
>> Except, DDT is way less clear (and deterministic) about what's going 
>> on with the test name munging. Which means failures are harder to 
>> track back to individual tests and data load. So debugging the failures is 
>> harder.
> 
> I'm not sure what you think is unclear - given an annotated test:
> 
>@ddt.data({"foo": "test", "availability_zone": "nova1"},
>   {"name": "  test  ", "availability_zone": "nova1"},
>   {"name": "", "availability_zone": "nova1"},
>   {"name": "x" * 256, "availability_zone": "nova1"},
>   {"name": "test", "availability_zone": "x" * 256},
>   {"name": "test", "availability_zone": "  nova1  "},
>   {"name": "test", "availability_zone": ""},
>   {"name": "test", "availability_zone": "nova1", "foo": "bar"})
> def test_create_invalid_create_aggregate_data(self, value):
> 
> It generates one test for each data item:
> 
>  test_create_invalid_create_aggregate_data_1
>  test_create_invalid_create_aggregate_data_2
>  test_create_invalid_create_aggregate_data_3
>  test_create_invalid_create_aggregate_data_4
>  test_create_invalid_create_aggregate_data_5
>  test_create_invalid_create_aggregate_data_6
>  test_create_invalid_create_aggregate_data_7
>  test_create_invalid_create_aggregate_data_8
> 
> This seems about as obvious as you can possibly get

At least when this was attempted in Tempest, the naming was a lot less clear;
maybe it has gotten better. But I still think milestone 3 isn't the time to
start a thing like this.

-Sean

Hi Sean,

IMO it is possible to have a descriptive name for test cases using DDT.

For ex.,

@ddt.data(annotated('missing_name',
                    {"foo": "test", "availability_zone": "nova1"}),
          annotated('name_greater_than_255_characters',
                    {"name": "x" * 256, "availability_zone": "nova1"}))
def test_create_invalid_aggregate_data(self, value):
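One way the annotated() helper used above could be implemented (a sketch, not part of the ddt library; the helper name and shape come from this thread's example). ddt uses a data value's __name__ attribute, when present, in the generated test name, so it suffices to wrap the dict in a subclass that can carry one:

```python
class _NamedDict(dict):
    """A dict that can carry a __name__ attribute (plain dicts cannot)."""


def annotated(test_name, test_input):
    # Attach a descriptive name to the data set; ddt then uses it in the
    # generated test method's name instead of a bare numeric index.
    value = _NamedDict(test_input)
    value.__name__ = test_name
    return value
```

The wrapped value still behaves as a plain dict inside the test body.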

   

Re: [openstack-dev] [Nova] Remove duplicate code using Data Driven Tests (DDT)

2016-07-25 Thread Sean Dague
On 07/25/2016 08:05 AM, Daniel P. Berrange wrote:
> On Mon, Jul 25, 2016 at 07:57:08AM -0400, Sean Dague wrote:
>> On 07/22/2016 11:30 AM, Daniel P. Berrange wrote:
>>> On Thu, Jul 21, 2016 at 07:03:53AM -0700, Matt Riedemann wrote:
 On 7/21/2016 2:03 AM, Bhor, Dinesh wrote:

 I agree that it's not a bug. I also agree that it helps in some specific
 types of tests which are doing some kind of input validation (like the 
 patch
 you've proposed) or are simply iterating over some list of values (status
 values on a server instance for example).

 Using DDT in Nova has come up before and one of the concerns was hiding
 details in how the tests are run with a library, and if there would be a
 learning curve. Depending on the usage, I personally don't have a problem
 with it. When I used it in manila it took a little getting used to but I 
 was
 basically just looking at existing tests and figuring out what they were
 doing when adding new ones.
>>>
>>> I don't think there's significant learning curve there - the way it
>>> lets you annotate the test methods is pretty easy to understand and
>>> the ddt docs spell it out clearly for newbies. We've far worse things
>>> in our code that create a hard learning curve which people will hit
>>> first :-)
>>>
>>> People have essentially been re-inventing ddt in nova tests already
> >>> by defining one helper method and then having multiple test methods
>>> all calling the same helper with a different dataset. So ddt is just
>>> formalizing what we're already doing in many places, with less code
>>> and greater clarity.
>>>
 I definitely think DDT is easier to use/understand than something like
 testscenarios, which we're already using in Nova.
>>>
> >>> Yeah, testscenarios feels a little over-engineered for what we want most
>>> of the time.
>>
>> Except, DDT is way less clear (and deterministic) about what's going on
>> with the test name munging. Which means failures are harder to track
>> back to individual tests and data load. So debugging the failures is harder.
> 
> I'm not sure what you think is unclear - given an annotated test:
> 
>@ddt.data({"foo": "test", "availability_zone": "nova1"},
>   {"name": "  test  ", "availability_zone": "nova1"},
>   {"name": "", "availability_zone": "nova1"},
>   {"name": "x" * 256, "availability_zone": "nova1"},
>   {"name": "test", "availability_zone": "x" * 256},
>   {"name": "test", "availability_zone": "  nova1  "},
>   {"name": "test", "availability_zone": ""},
>   {"name": "test", "availability_zone": "nova1", "foo": "bar"})
> def test_create_invalid_create_aggregate_data(self, value):
> 
> > It generates one test for each data item:
> 
>  test_create_invalid_create_aggregate_data_1
>  test_create_invalid_create_aggregate_data_2
>  test_create_invalid_create_aggregate_data_3
>  test_create_invalid_create_aggregate_data_4
>  test_create_invalid_create_aggregate_data_5
>  test_create_invalid_create_aggregate_data_6
>  test_create_invalid_create_aggregate_data_7
>  test_create_invalid_create_aggregate_data_8
> 
> This seems about as obvious as you can possibly get

At least when this was attempted in Tempest, the naming was a lot less
clear; maybe it has gotten better. But I still think milestone 3 isn't
the time to start a thing like this.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [Nova] Remove duplicate code using Data Driven Tests (DDT)

2016-07-25 Thread Daniel P. Berrange
On Mon, Jul 25, 2016 at 07:57:08AM -0400, Sean Dague wrote:
> On 07/22/2016 11:30 AM, Daniel P. Berrange wrote:
> > On Thu, Jul 21, 2016 at 07:03:53AM -0700, Matt Riedemann wrote:
> >> On 7/21/2016 2:03 AM, Bhor, Dinesh wrote:
> >>
> >> I agree that it's not a bug. I also agree that it helps in some specific
> >> types of tests which are doing some kind of input validation (like the 
> >> patch
> >> you've proposed) or are simply iterating over some list of values (status
> >> values on a server instance for example).
> >>
> >> Using DDT in Nova has come up before and one of the concerns was hiding
> >> details in how the tests are run with a library, and if there would be a
> >> learning curve. Depending on the usage, I personally don't have a problem
> >> with it. When I used it in manila it took a little getting used to but I 
> >> was
> >> basically just looking at existing tests and figuring out what they were
> >> doing when adding new ones.
> > 
> > I don't think there's significant learning curve there - the way it
> > lets you annotate the test methods is pretty easy to understand and
> > the ddt docs spell it out clearly for newbies. We've far worse things
> > in our code that create a hard learning curve which people will hit
> > first :-)
> > 
> > People have essentially been re-inventing ddt in nova tests already
> > by defining one helper method and then having multiple test methods
> > all calling the same helper with a different dataset. So ddt is just
> > formalizing what we're already doing in many places, with less code
> > and greater clarity.
> > 
> >> I definitely think DDT is easier to use/understand than something like
> >> testscenarios, which we're already using in Nova.
> > 
> > Yeah, testscenarios feels a little over-engineered for what we want most
> > of the time.
> 
> Except, DDT is way less clear (and deterministic) about what's going on
> with the test name munging. Which means failures are harder to track
> back to individual tests and data load. So debugging the failures is harder.

I'm not sure what you think is unclear - given an annotated test:

    @ddt.data({"foo": "test", "availability_zone": "nova1"},
              {"name": "  test  ", "availability_zone": "nova1"},
              {"name": "", "availability_zone": "nova1"},
              {"name": "x" * 256, "availability_zone": "nova1"},
              {"name": "test", "availability_zone": "x" * 256},
              {"name": "test", "availability_zone": "  nova1  "},
              {"name": "test", "availability_zone": ""},
              {"name": "test", "availability_zone": "nova1", "foo": "bar"})
    def test_create_invalid_create_aggregate_data(self, value):

It generates one test for each data item:

 test_create_invalid_create_aggregate_data_1
 test_create_invalid_create_aggregate_data_2
 test_create_invalid_create_aggregate_data_3
 test_create_invalid_create_aggregate_data_4
 test_create_invalid_create_aggregate_data_5
 test_create_invalid_create_aggregate_data_6
 test_create_invalid_create_aggregate_data_7
 test_create_invalid_create_aggregate_data_8

This seems about as obvious as you can possibly get

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
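The name generation Daniel lists is mechanical, and a failure maps back to the 1-based position of the data item in the decorator. A minimal stdlib-only sketch of that mechanism (an illustration only; `data` and `ddt_like` here are hypothetical stand-ins for ddt's `@ddt.data` and `@ddt.ddt` decorators, not the library's implementation):

```python
import unittest


def data(*values):
    """Record the data sets a test method should be expanded over."""
    def wrapper(func):
        func._data = values
        return func
    return wrapper


def ddt_like(cls):
    """Replace each data-marked method with one copy per item, suffixed _1.._n."""
    for name, func in list(vars(cls).items()):
        values = getattr(func, "_data", None)
        if values is None:
            continue
        delattr(cls, name)
        for i, value in enumerate(values, 1):
            def make_case(f, v):
                def case(self):
                    return f(self, v)
                return case
            setattr(cls, "%s_%d" % (name, i), make_case(func, value))
    return cls


@ddt_like
class AggregateTest(unittest.TestCase):
    @data({"foo": "test", "availability_zone": "nova1"},
          {"name": "", "availability_zone": "nova1"},
          {"name": "x" * 256, "availability_zone": "nova1"})
    def test_create_invalid_aggregate_data(self, value):
        self.assertIsInstance(value, dict)


generated = sorted(n for n in dir(AggregateTest) if n.startswith("test_create"))
# generated is the indexed list of test names, one per data item
```

Running the loader over AggregateTest yields three tests named `test_create_invalid_aggregate_data_1` through `_3`, matching the scheme shown above.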



Re: [openstack-dev] [Nova] Remove duplicate code using Data Driven Tests (DDT)

2016-07-25 Thread Jay Pipes

On 07/25/2016 07:57 AM, Sean Dague wrote:

On 07/22/2016 11:30 AM, Daniel P. Berrange wrote:

On Thu, Jul 21, 2016 at 07:03:53AM -0700, Matt Riedemann wrote:

On 7/21/2016 2:03 AM, Bhor, Dinesh wrote:

I agree that it's not a bug. I also agree that it helps in some specific
types of tests which are doing some kind of input validation (like the patch
you've proposed) or are simply iterating over some list of values (status
values on a server instance for example).

Using DDT in Nova has come up before and one of the concerns was hiding
details in how the tests are run with a library, and if there would be a
learning curve. Depending on the usage, I personally don't have a problem
with it. When I used it in manila it took a little getting used to but I was
basically just looking at existing tests and figuring out what they were
doing when adding new ones.


I don't think there's significant learning curve there - the way it
lets you annotate the test methods is pretty easy to understand and
the ddt docs spell it out clearly for newbies. We've far worse things
in our code that create a hard learning curve which people will hit
first :-)

People have essentially been re-inventing ddt in nova tests already
by defining one helper method and then having multiple test methods
all calling the same helper with a different dataset. So ddt is just
formalizing what we're already doing in many places, with less code
and greater clarity.


I definitely think DDT is easier to use/understand than something like
testscenarios, which we're already using in Nova.


Yeah, testscenarios feels a little over-engineered for what we want most
of the time.


Except, DDT is way less clear (and deterministic) about what's going on
with the test name munging. Which means failures are harder to track
back to individual tests and data load. So debugging the failures is harder.

I agree we have a lot of bad patterns in the tests. But I also don't
think milestone 3 is the right time to embed another pattern. At least
let's hold until the next cycle opens up, when there is more time to
actually look at the trade-offs here.


+1

Also, I actually don't see how testscenarios won't/can't work for 
everything DDT is doing. Sounds a bit like the "why can't we use pytest 
instead of testr?" thing again.


Best,
-jay
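For reference, the core mechanic testscenarios relies on can be sketched in a few lines of stdlib code (an illustration of the idea only, not the real testscenarios API; `apply_scenarios`, the class names, and the validation rule are all hypothetical):

```python
import unittest


def apply_scenarios(base, scenarios):
    # Build one TestCase subclass per (name, attrs) pair; each scenario's
    # parameters become class attributes on the clone.
    return [type("%s_%s" % (base.__name__, name), (base,), dict(attrs))
            for name, attrs in scenarios]


class AggregateScenarioBase(unittest.TestCase):
    body = None

    def test_invalid_body(self):
        # Illustrative rule: reject a missing, blank, or >255-character name.
        name = (self.body or {}).get("name")
        self.assertFalse(isinstance(name, str) and 0 < len(name.strip()) <= 255)


classes = apply_scenarios(AggregateScenarioBase, [
    ("missing_name", {"body": {"foo": "test", "availability_zone": "nova1"}}),
    ("blank_name", {"body": {"name": "", "availability_zone": "nova1"}}),
    ("long_name", {"body": {"name": "x" * 256, "availability_zone": "nova1"}}),
])
```

As with ddt, each data set ends up as its own named test, so the two libraries differ mainly in surface syntax rather than capability.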



Re: [openstack-dev] [Nova] Remove duplicate code using Data Driven Tests (DDT)

2016-07-25 Thread Sean Dague
On 07/22/2016 11:30 AM, Daniel P. Berrange wrote:
> On Thu, Jul 21, 2016 at 07:03:53AM -0700, Matt Riedemann wrote:
>> On 7/21/2016 2:03 AM, Bhor, Dinesh wrote:
>>
>> I agree that it's not a bug. I also agree that it helps in some specific
>> types of tests which are doing some kind of input validation (like the patch
>> you've proposed) or are simply iterating over some list of values (status
>> values on a server instance for example).
>>
>> Using DDT in Nova has come up before and one of the concerns was hiding
>> details in how the tests are run with a library, and if there would be a
>> learning curve. Depending on the usage, I personally don't have a problem
>> with it. When I used it in manila it took a little getting used to but I was
>> basically just looking at existing tests and figuring out what they were
>> doing when adding new ones.
> 
> I don't think there's significant learning curve there - the way it
> lets you annotate the test methods is pretty easy to understand and
> the ddt docs spell it out clearly for newbies. We've far worse things
> in our code that create a hard learning curve which people will hit
> first :-)
> 
> People have essentially been re-inventing ddt in nova tests already
> by defining one helper method and then having multiple test methods 
> all calling the same helper with a different dataset. So ddt is just
> formalizing what we're already doing in many places, with less code
> and greater clarity.
> 
>> I definitely think DDT is easier to use/understand than something like
>> testscenarios, which we're already using in Nova.
> 
> Yeah, testscenarios feels a little over-engineered for what we want most
> of the time.

Except, DDT is way less clear (and deterministic) about what's going on
with the test name munging. Which means failures are harder to track
back to individual tests and data load. So debugging the failures is harder.

I agree we have a lot of bad patterns in the tests. But I also don't
think milestone 3 is the right time to embed another pattern. At least
let's hold until the next cycle opens up, when there is more time to
actually look at the trade-offs here.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [Nova] Remove duplicate code using Data Driven Tests (DDT)

2016-07-22 Thread Daniel P. Berrange
On Thu, Jul 21, 2016 at 07:03:53AM -0700, Matt Riedemann wrote:
> On 7/21/2016 2:03 AM, Bhor, Dinesh wrote:
> > Hi Nova Devs,
> > 
> > 
> > 
> > Many times, there are a number of data sets that we have to run the same
> > tests on.
> > 
> > And creating a different test for each set of values is
> > time-consuming and inefficient.
> > 
> > 
> > 
> > Data-driven testing (DDT) [1] overcomes this issue: it takes a test,
> > parameterizes it, and then runs that test with varying data. This
> > allows you to run the same test case with many varying inputs,
> > increasing coverage from a single test, reducing code duplication,
> > and easing error tracing as well.
> > 
> > 
> > 
> > DDT is a third-party library that needs to be installed separately and
> > invoked when writing the tests. At present DDT is used in Cinder and Rally.
> 
> There are several projects using it:
> 
> http://codesearch.openstack.org/?q=ddt%3E%3D1.0.1=nope==
> 
> I first came across it when working a little in manila.
> 
> > 
> > 
> > 
> > To start with, I reported this as a bug [2] and added an initial patch
> > [3] for it, but a couple of reviewers suggested discussing this on the
> > ML as it is not a real bug. IMO this is not a feature implementation,
> > just an effort to simplify our tests, so a blueprint will be sufficient
> > to track its progress.
> > 
> > 
> > 
> > So please let me know whether I can file a new blueprint or nova-specs
> > to proceed with this.
> > 
> > 
> > 
> > [1] http://ddt.readthedocs.io/en/latest/index.html
> > 
> > [2] https://bugs.launchpad.net/nova/+bug/1604798
> > 
> > [3] https://review.openstack.org/#/c/344820/
> > 
> > 
> > 
> > Thank you,
> > 
> > Dinesh Bhor
> > 
> > 
> > __
> > Disclaimer: This email and any attachments are sent in strictest confidence
> > for the sole use of the addressee and may contain legally privileged,
> > confidential, and proprietary data. If you are not the intended recipient,
> > please advise the sender by replying promptly to this email and then delete
> > and destroy this email and any attachments without any further use, copying
> > or forwarding.
> > 
> > 
> > 
> 
> I agree that it's not a bug. I also agree that it helps in some specific
> types of tests which are doing some kind of input validation (like the patch
> you've proposed) or are simply iterating over some list of values (status
> values on a server instance for example).
> 
> Using DDT in Nova has come up before and one of the concerns was hiding
> details in how the tests are run with a library, and if there would be a
> learning curve. Depending on the usage, I personally don't have a problem
> with it. When I used it in manila it took a little getting used to but I was
> basically just looking at existing tests and figuring out what they were
> doing when adding new ones.

I don't think there's significant learning curve there - the way it
lets you annotate the test methods is pretty easy to understand and
the ddt docs spell it out clearly for newbies. We've far worse things
in our code that create a hard learning curve which people will hit
first :-)

People have essentially been re-inventing ddt in nova tests already
by defining one helper method and then having multiple test methods
all calling the same helper with a different dataset. So ddt is just
formalizing what we're already doing in many places, with less code
and greater clarity.

> I definitely think DDT is easier to use/understand than something like
> testscenarios, which we're already using in Nova.

Yeah, testscenarios feels a little over-engineered for what we want most
of the time.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
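The hand-rolled pattern described above (one private helper plus a thin test method per data set) looks roughly like this; the class, helper, and validation rule are illustrative sketches, not code from the Nova tree:

```python
import unittest


class AggregateValidationTest(unittest.TestCase):
    # Shared helper: every data set funnels through the same assertion.
    def _assert_invalid(self, body):
        self.assertFalse(self._is_valid(body))

    @staticmethod
    def _is_valid(body):
        # Stand-in validation rule: a name must be a non-blank string of
        # at most 255 characters.
        name = body.get("name")
        return isinstance(name, str) and 0 < len(name.strip()) <= 255

    # One thin wrapper per data set -- the boilerplate ddt's @data removes.
    def test_create_missing_name(self):
        self._assert_invalid({"foo": "test", "availability_zone": "nova1"})

    def test_create_blank_name(self):
        self._assert_invalid({"name": "", "availability_zone": "nova1"})

    def test_create_name_too_long(self):
        self._assert_invalid({"name": "x" * 256, "availability_zone": "nova1"})
```

With ddt, the three wrappers collapse into a single decorated method, which is the "less code and greater clarity" being claimed.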



Re: [openstack-dev] [Nova] Remove duplicate code using Data Driven Tests (DDT)

2016-07-21 Thread Matt Riedemann

On 7/21/2016 2:03 AM, Bhor, Dinesh wrote:

Hi Nova Devs,



Many times, there are a number of data sets that we have to run the same
tests on.

And creating a different test for each set of values is
time-consuming and inefficient.



Data-driven testing (DDT) [1] overcomes this issue: it takes a test,
parameterizes it, and then runs that test with varying data. This allows
you to run the same test case with many varying inputs, increasing
coverage from a single test, reducing code duplication, and easing error
tracing as well.



DDT is a third-party library that needs to be installed separately and
invoked when writing the tests. At present DDT is used in Cinder and Rally.


There are several projects using it:

http://codesearch.openstack.org/?q=ddt%3E%3D1.0.1=nope==

I first came across it when working a little in manila.





To start with, I reported this as a bug [2] and added an initial patch
[3] for it, but a couple of reviewers suggested discussing this on the
ML as it is not a real bug. IMO this is not a feature implementation,
just an effort to simplify our tests, so a blueprint will be sufficient
to track its progress.



So please let me know whether I can file a new blueprint or nova-specs
to proceed with this.



[1] http://ddt.readthedocs.io/en/latest/index.html

[2] https://bugs.launchpad.net/nova/+bug/1604798

[3] https://review.openstack.org/#/c/344820/



Thank you,

Dinesh Bhor







I agree that it's not a bug. I also agree that it helps in some specific 
types of tests which are doing some kind of input validation (like the 
patch you've proposed) or are simply iterating over some list of values 
(status values on a server instance for example).


Using DDT in Nova has come up before and one of the concerns was hiding 
details in how the tests are run with a library, and if there would be a 
learning curve. Depending on the usage, I personally don't have a 
problem with it. When I used it in manila it took a little getting used 
to but I was basically just looking at existing tests and figuring out 
what they were doing when adding new ones.


I definitely think DDT is easier to use/understand than something like 
testscenarios, which we're already using in Nova.
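
For comparison, plain unittest on Python 3.4+ can get similar per-dataset failure reporting with subTest, without any extra dependency, though it doesn't generate separately named test methods the way ddt or testscenarios do. A minimal sketch (the valid_name helper here is hypothetical, just for illustration):

```python
import unittest


def valid_name(name):
    # Hypothetical helper under test: non-empty, no surrounding
    # whitespace, at most 255 characters.
    return bool(name) and name == name.strip() and len(name) <= 255


class TestNameValidation(unittest.TestCase):
    # Each dataset runs as its own sub-test; a failing value is
    # reported with its `name` without stopping the remaining ones.
    def test_rejects_invalid_names(self):
        for name in ["", "   ", "x" * 300]:
            with self.subTest(name=name):
                self.assertFalse(valid_name(name))
```

The trade-off is that the whole loop still counts as a single test in the runner's totals, whereas ddt and testscenarios each expand to one test per dataset.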


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Remove duplicate code using Data Driven Tests (DDT)

2016-07-21 Thread Sheel Rana Insaan
Hi Dinesh Bhor,

I can see that a lot of projects already use this, and it seems interesting.
You can refer to Cinder, for example.

I don't think we should raise a bug for this; to me the bug is invalid.

You can treat this as an improvement (just to reduce duplicated test code);
it seems OK to go ahead even without any BP or spec.

Best Regards,
Sheel Rana

On Thu, Jul 21, 2016 at 2:33 PM, Bhor, Dinesh 
wrote:

> [original message trimmed; it appears in full below]
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Remove duplicate code using Data Driven Tests (DDT)

2016-07-21 Thread Bhor, Dinesh
Hi Nova Devs,

Many times there are a number of data sets that we have to run the same
tests on, and creating a separate test for each data set is time-consuming
and inefficient.

Data Driven Testing [1] addresses this. Data-driven testing (DDT) means
taking a test, parameterizing it, and then running it with varying data.
This allows you to run the same test case with many different inputs,
increasing the coverage from a single test, reducing code duplication,
and making error tracing easier as well.

DDT is a third-party library that needs to be installed separately and
imported when writing tests. At present DDT is used in Cinder and Rally.
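
What ddt does is simple enough to sketch with the standard library alone. The following is a minimal, hypothetical re-implementation of the idea behind ddt's @ddt/@data decorators (the real library also offers @unpack, @file_data, and more): each value tagged on a test method is expanded into its own generated test method, so every dataset passes or fails independently. The valid_name helper is made up for illustration:

```python
import unittest


def valid_name(name):
    # Hypothetical helper under test: non-empty, no surrounding
    # whitespace, at most 255 characters.
    return bool(name) and name == name.strip() and len(name) <= 255


def data(*values):
    # Stand-in for ddt's @data: remember the datasets on the method.
    def tag(func):
        func._datasets = values
        return func
    return tag


def ddt_like(cls):
    # Stand-in for ddt's @ddt class decorator: replace every tagged
    # method with one generated test method per dataset value.
    for name, attr in list(cls.__dict__.items()):
        if callable(attr) and hasattr(attr, "_datasets"):
            for i, value in enumerate(attr._datasets):
                def make_case(test=attr, v=value):
                    return lambda self: test(self, v)
                setattr(cls, "%s_%d" % (name, i), make_case())
            delattr(cls, name)
    return cls


@ddt_like
class TestNameValidation(unittest.TestCase):
    @data("", "   ", "x" * 300)
    def test_rejects_invalid_name(self, name):
        self.assertFalse(valid_name(name))
```

With the real library this collapses to @ddt.ddt on the class and @ddt.data(...) on the method; the generated-method mechanism above is why each dataset shows up as its own test in the runner's output.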

To start with, I reported this as a bug [2] and added an initial patch [3]
for it, but a couple of reviewers suggested discussing this on the ML, as
it is not a real bug.
IMO this is not a feature implementation, just an effort to simplify our
tests, so a blueprint should be sufficient to track its progress.

So please let me know whether I should file a new blueprint or a nova-spec
to proceed with this.

[1] http://ddt.readthedocs.io/en/latest/index.html
[2] https://bugs.launchpad.net/nova/+bug/1604798
[3] https://review.openstack.org/#/c/344820/

Thank you,
Dinesh Bhor

__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev