Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break down NetworkBasicOps to smaller test cases

2014-01-22 Thread Jay Pipes
On Tue, 2014-01-21 at 01:15 -0500, Yair Fried wrote:
 I seem to be unable to convey my point using generalization, so I will give a 
 specific example:
 I would like to have "update dns server" as an additional network scenario. 
 Currently I could add it to the existing module:
 
 1. test connectivity
 2. re-associate floating ip
 3. update dns server
 
 In which case, failure to re-associate the ip will prevent my test from running, 
 even though these are completely unrelated scenarios, and (IMO) we would like 
 to get feedback on both of them.
 
 Another way is to copy the entire network_basic_ops module, remove 
 "re-associate floating ip" and add "update dns server". For the obvious 
 reasons, this also seems like the wrong way to go.
 
 I am looking for an elegant way to share the code of these scenarios.

Well, unfortunately, there are no very elegant answers at this time :)

The closest thing we have would be to create a fixtures.Fixture that
constructed a VM and associated the floating IP address to the instance.
You could then have separate tests for checking connectivity and for
updating the DNS server for that instance. However, fixtures are for
resources that are shared between test methods and are not modified
during those test methods. They cannot be modified, because then
parallel execution of the test methods may yield non-deterministic
results.
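As a rough sketch of that fixture idea (the client objects and their create/delete/associate methods below are hypothetical stand-ins for the real Tempest clients, and in-tree code would subclass fixtures.Fixture rather than the minimal base class defined here to keep the sketch self-contained):

```python
class Fixture:
    """Minimal stand-in for fixtures.Fixture (setUp/addCleanup protocol)."""

    def setUp(self):
        self._cleanups = []
        self._setUp()

    def addCleanup(self, fn, *args):
        self._cleanups.append((fn, args))

    def cleanUp(self):
        # Run cleanups in reverse order of registration.
        while self._cleanups:
            fn, args = self._cleanups.pop()
            fn(*args)


class ServerWithFloatingIP(Fixture):
    """Boots a server and attaches a floating IP; both torn down in cleanUp."""

    def __init__(self, compute, network):
        self.compute = compute
        self.network = network

    def _setUp(self):
        self.server = self.compute.create_server()
        self.addCleanup(self.compute.delete_server, self.server)
        self.fip = self.network.create_floating_ip()
        self.addCleanup(self.network.delete_floating_ip, self.fip)
        self.network.associate(self.fip, self.server)
```

Tests that do not mutate the instance (a connectivity check, say) could share one such fixture; re-associating the floating IP mutates it, which is exactly the caveat above.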

There would need to be a separate fixture for the instance that would
have the floating IP re-associated with it (because re-associating the
floating IP by nature is a modification to the resource).

Having a separate fixture means essentially doubling the amount of
resources used by the test case class in question, which is why we're
pushing to just have all of the tests done serially in a single test
method, even though that means that a failure to re-associate the
floating IP would mean that the update DNS server test would not be
executed.

Choose your poison.

Best,
-jay




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break down NetworkBasicOps to smaller test cases

2014-01-22 Thread David Kranz

On 01/22/2014 03:19 PM, Jay Pipes wrote:

[quoted text snipped]
Thanks, Jay. So to close this loop, I think Yair started down this road 
after receiving feedback that this test was getting too much stuff in 
it. Sounds like you are advocating putting more stuff in it as the least 
of evils, which is fine by me because it is a lot easier.


 -David



Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break down NetworkBasicOps to smaller test cases

2014-01-20 Thread Jay Pipes
On Sun, 2014-01-19 at 07:17 -0500, Yair Fried wrote:
 OK,
 but considering my pending patch (#3 and #4)
 what about:
 
 #1 -> #2
 #1 -> #3
 #1 -> #4
 
 instead of 
 
 #1 -> #2 -> #3 -> #4
 
 a failure in #2 will prevent #3 and #4 from running even though they are 
 completely unrelated.

Seems to me that the above is a logical fault. If a failure in #2
prevents #3 or #4 from running, then by nature they are related to #2.

-jay




Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break down NetworkBasicOps to smaller test cases

2014-01-20 Thread Salvatore Orlando
Yair is probably referring to statistically independent tests, or any
case for which the following is true (P(x) is the probability that a test
succeeds):

P(4 ∧ 3 ∧ 2 | 1) = P(4|1) * P(3|1) * P(2|1)

This might apply to the tests we are adding to network_basic_ops scenario;
however it is worth noting that:

- in some cases the above relationship does not hold. For instance, a public
network connectivity test can hardly succeed if the private connectivity
test failed (is that correct? I'm not sure of anything anymore these days!)
- Sean correctly pointed out that splitting tests will cause repeated
activities, which will just make the runs longer without any additional
benefit.
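To make the relation concrete, here is a small numeric sketch (the pass rates are hypothetical, chosen only for illustration):

```python
# P(i | 1): probability that step i passes given the shared setup (#1)
# succeeded. Hypothetical values.
p = {2: 0.99, 3: 0.95, 4: 0.90}

# Chained layout (#1 -> #2 -> #3 -> #4): the test passes iff every step does.
p_chain = p[2] * p[3] * p[4]

# Split layout (#1 -> #i as three tests): under conditional independence the
# joint pass probability is identical -- what changes is that a #2 failure
# no longer hides whether #3 and #4 would have passed.
p_split = 1.0
for i in (2, 3, 4):
    p_split *= p[i]

print(p_chain == p_split)  # -> True
```

So splitting buys feedback on each step, not a better overall pass rate.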

On the other hand, I understand and share the feeling that we are adding
too much to the same workflow. Would it make sense to identify a few
conceptually independent workflows, identify one or more advanced network
scenarios, and keep only internal + public connectivity checks in basic_ops?

Salvatore


On 20 January 2014 09:23, Jay Pipes jaypi...@gmail.com wrote:

 [quoted text snipped]



Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break down NetworkBasicOps to smaller test cases

2014-01-20 Thread Yair Fried
I seem to be unable to convey my point using generalization, so I will give a 
specific example:
I would like to have "update dns server" as an additional network scenario. 
Currently I could add it to the existing module:

1. test connectivity
2. re-associate floating ip
3. update dns server

In which case, failure to re-associate the ip will prevent my test from running, 
even though these are completely unrelated scenarios, and (IMO) we would like 
to get feedback on both of them.

Another way is to copy the entire network_basic_ops module, remove 
"re-associate floating ip" and add "update dns server". For the obvious reasons, 
this also seems like the wrong way to go.

I am looking for an elegant way to share the code of these scenarios.

Yair


- Original Message -
From: Salvatore Orlando sorla...@nicira.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Monday, January 20, 2014 7:22:22 PM
Subject: Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break down 
NetworkBasicOps to smaller test cases



[quoted text snipped]




Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break down NetworkBasicOps to smaller test cases

2014-01-19 Thread Sean Dague
On 01/19/2014 02:06 AM, Yair Fried wrote:
 [quoted text snipped; Yair's full message of 2014-01-18 appears later in the archive]

If #2 is always supposed to work, then I don't actually understand why
#1 being part of the test or not part of the test is really relevant.
And being part of the same test saves substantial time.

If you have tests that do:
 * A -> B -> C
 * A -> B -> D -> F

There really isn't value in a test for A -> B *as long* as you have
sufficient signposting to know from the fail logs that A -> B worked fine.

And there are sufficient detriments in making it a separate test,
because it's just adding time to the runs without actually testing
anything different.

-Sean

 
Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break down NetworkBasicOps to smaller test cases

2014-01-19 Thread Yair Fried


- Original Message -
 From: Sean Dague s...@dague.net
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Sunday, January 19, 2014 1:53:21 PM
 Subject: Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break down 
 NetworkBasicOps to smaller test cases
 
 [quoted text snipped]

OK,
but considering my pending patch (#3 and #4)
what about:

#1 -> #2
#1 -> #3
#1 -> #4

instead of 

#1 -> #2 -> #3 -> #4

a failure in #2 will prevent #3 and #4 from running even though they are 
completely unrelated.


 

Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break down NetworkBasicOps to smaller test cases

2014-01-18 Thread Yair Fried
MT: "Is your issue here that it's just called basic ops and you don't think 
that's reflective of what is being tested in that file anymore?"

No.
My issue is that the current scenario is, in fact, at least 2 separate 
scenarios:
1. original basic_ops
2. reassociate_floating_ip
to which I would like to add ( https://review.openstack.org/#/c/55146/ ):
4. check external/internal connectivity
3. update dns

While #2 includes #1 as part of its setup, its failing shouldn't prevent #1 
from passing. The obvious solution would be to create separate modules for each 
test case, but since they all share the same setup sequence, IMO, they should 
at least share code.
Notice that in my patch, #2 still includes #1.

Actually, the more network scenarios we get, the more we will need to do 
something in that direction, since most of the scenarios will require the setup 
of a VM with a floating-ip to ssh into.

So either we do this, or we move all of this code to scenario.manager which is 
also becoming very complicated.

Yair

[quoted text snipped; Matthew Treinish's message appears in full below]

Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break down NetworkBasicOps to smaller test cases

2014-01-16 Thread Matthew Treinish
On Wed, Jan 15, 2014 at 11:20:22AM -0500, Yair Fried wrote:
 Hi Guys
 As Maru pointed out - NetworkBasicOps scenario has grown out of proportion 
 and is no longer basic ops.

Is your issue here that it's just called basic ops and you don't think that's
reflective of what is being tested in that file anymore? If that's the case
then just change the name.

 
 So, I started breaking it down to smaller test cases that can fail 
 independently.

I'm not convinced this is needed. Some scenarios are going to be very involved
and complex. Each scenario test is designed to simulate real use cases in the
cloud, so obviously some of them will be fairly large. The solution for making
debugging easier in these cases is to make sure that any failures have clear
messages. Also make sure there are plenty of signposting debug log messages, so
when something goes wrong you know what state the test was in. 

If you split things up into smaller individual tests you'll most likely end up
making tests that really aren't scenario tests. They'd be closer to API
tests, just using the official clients, which really shouldn't be in the
scenario tests.

 
 Those test cases share the same setup and tear-down code:
 1. create network resources (and verify them)
 2. create VM with floating IP.
 
 I see 2 options to manage these resources:
 1. Completely isolated - resources are created and cleaned using setUp() and 
 tearDown() methods [1]. Moving cleanup to tearDown revealed this bug [2]. 
 Apparently the previous way (with tearDownClass) wasn't as fast. This has 
 the side effect of having expensive resources (i.e. VMs and floating IPs) 
 created and discarded multiple times even though they are unchanged.
 
 2. Shared Resources - Using the idea of (or actually using) Fixtures - use 
 the same resources unless a test case fails, in which case resources are 
 deleted and created by the next test case [3].

If you're doing this and splitting things into smaller tests then it has to be
option 1. Scenarios have to be isolated; if there are resources shared between
scenario tests, that really is only one scenario and shouldn't be split. In fact
I've been working on a change that fixes the scenario test tearDowns that has
the side effect of enforcing this policy.

Also, just for the record, we've tried doing option 2 in the past; for example,
there used to be a tenant-reuse config option. The problem with doing that was
that it actually tends to cause more non-deterministic failures, or to add a not
insignificant wait time to ensure the state is clean when you start the next
test, which is why we ended up pulling this out of tree. What ends up happening
is that you get leftover state from previous tests, and the second test ends up
failing because things aren't in the clean state that the test case assumes. If
you look at some of the oneserver files in the API tests, that is the only place
we still do this in tempest, and we've had many issues making that work
reliably. Those tests are in a relatively good place now, but those are much
simpler tests. Also, between each test, setUp has to check and ensure that the
shared server is in the proper state; if it's not, then the shared server has to
be rebuilt. This methodology would become far more involved for the scenario
tests, where you have to manage more than one shared resource.
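A minimal sketch of what option 1 looks like in practice (the resource helpers here are in-memory stand-ins, not the real Tempest creation code; the point is only the setUp/addCleanup shape):

```python
import unittest


class TestNetworkScenario(unittest.TestCase):
    """Each test builds and releases its own resources, so nothing leaks."""

    created = []  # stands in for the cloud's actual resource state

    def setUp(self):
        super().setUp()
        # Stand-ins for "create network resources" / "create VM with fip".
        self.network = self._create("network")
        self.addCleanup(self._delete, self.network)
        self.server = self._create("server")
        self.addCleanup(self._delete, self.server)

    def _create(self, kind):
        TestNetworkScenario.created.append(kind)
        return kind

    def _delete(self, resource):
        TestNetworkScenario.created.remove(resource)

    def test_connectivity(self):
        self.assertIn("server", TestNetworkScenario.created)

    def test_update_dns(self):
        self.assertIn("network", TestNetworkScenario.created)
```

Cleanups run after every test method, even on failure, so one test cannot leave state behind for the next; the cost, as noted above, is paying for the expensive resources once per test.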

 
 I would like to hear your opinions, and know if anyone has any thoughts or 
 ideas on which direction is best, and why.
 
 Once this is completed, we can move on to other scenarios as well
 
 Regards
 Yair
 
 [1] fully isolated - https://review.openstack.org/#/c/66879/
 [2] https://bugs.launchpad.net/nova/+bug/1269407/+choose-affected-product
 [3] shared resources - https://review.openstack.org/#/c/64622/

-Matt Treinish



[openstack-dev] [qa][Neutron][Tempest][Network] Break down NetworkBasicOps to smaller test cases

2014-01-15 Thread Yair Fried
Hi Guys
As Maru pointed out - NetworkBasicOps scenario has grown out of proportion and 
is no longer basic ops.

So, I started breaking it down to smaller test cases that can fail 
independently.

Those test cases share the same setup and tear-down code:
1. create network resources (and verify them)
2. create VM with floating IP.

I see 2 options to manage these resources:
1. Completely isolated - resources are created and cleaned using setUp() and 
tearDown() methods [1]. Moving cleanup to tearDown revealed this bug [2]. 
Apparently the previous way (with tearDownClass) wasn't as fast. This has the 
side effect of having expensive resources (i.e. VMs and floating IPs) created and 
discarded multiple times even though they are unchanged.

2. Shared Resources - Using the idea of (or actually using) Fixtures - use the 
same resources unless a test case fails, in which case resources are deleted 
and created by the next test case [3].

I would like to hear your opinions, and know if anyone has any thoughts or 
ideas on which direction is best, and why.

Once this is completed, we can move on to other scenarios as well.

Regards
Yair

[1] fully isolated - https://review.openstack.org/#/c/66879/
[2] https://bugs.launchpad.net/nova/+bug/1269407/+choose-affected-product
[3] shared resources - https://review.openstack.org/#/c/64622/
