Re: [openstack-dev] [qa][all] Branchless Tempest beyond pure-API tests, impact on backporting policy

2014-07-12 Thread Eoghan Glynn

 So I'm not sure that this should be a mandatory thing, but an
 opt-in. My real concern is the manpower: who is going to take the
 time to write all the test suites for all of the projects? I think
 it would be better to add that on-demand as the extra testing is
 required. That being said, I definitely view doing this as a good
 thing and something to be encouraged, because tempest won't be able
 to test everything.

 The other thing to also consider is duplicated effort between
 projects. For an example, look at the CLI tests in Tempest: the
 functional testing framework for testing CLI formatting was
 essentially the same between all the clients, which is why they're in
 tempest. Under your proposal here, CLI tests should be moved back to
 the clients. But, would that mean we have a bunch of copy-and-pasted
 versions of the CLI test framework between all the projects?

 I really want to avoid a situation where every project does the same
 basic testing differently just in a rush to spin up functional
 testing. I think coming up with a solution for a place with common
 test patterns and frameworks that can be maintained independently of
 all the projects and consumed for project specific testing is
 something we should figure out first. (I'm not sure oslo would be
 the right place for this necessarily)

Yep, I'd have similar concerns about duplication of effort and
divergence in a rush to spin up in-tree mini-Tempests across all the
projects.

So, I think it would be really great to have one or two really solid
exemplar in-tree functional test suites in place, in order to allow
the inevitable initial mistakes to be made the minimal number of
times.

Ideally the QA team would have an advisory, assisting role in getting
these spun up so that the project gets the benefit of their domain
expertise.

Of course it would be preferable also to have the re-usable elements
of the test infrastructure in a consumable form that the laggard
projects can easily pick up without resorting to wholesale copy'n'paste.

 So I think that the contract unit tests work well specifically for
 the ironic use case, but aren't a general solution. Mostly because
 the Nova driver API is an unstable interface and there is no reason
 for that to change. It's also a temporary thing because eventually
 the driver will be moved into Nova and then the only cross-project
 interaction between Ironic and Nova will be over the stable REST
 APIs.

 I think in general we should try to avoid doing non REST API
 cross-project communication.

As I've pointed out before, I don't think it's feasible for ceilometer
to switch to using only REST APIs for cross-project communication.

However what we can do is finally grasp the nettle of contractizing
notifications, as discussed on this related thread:

  http://lists.openstack.org/pipermail/openstack-dev/2014-July/039858.html

The realistic time horizon for that is the K* cycle I suspect, but
overall from that thread there seems to be some appetite for actually
doing it finally. 

 So hopefully there won't be more of
 this class of thing, and if there are we can tackle them on a per-case
 basis. But, even if it's a non-REST API I don't think we should
 ever encourage or really allow any cross-project interactions over
 unstable interfaces.

Yes, if we go from discouragement to explicitly *disallowing* such
interactions, that's probably something that would need to be mandated
at TC level IMO, with the appropriate grandfathering of existing usage.

Is this something you or Sean (being a member of the TC) or I could
drive?

I'd be happy to draft some language for a governance patch, but having
been to the well before with such patches, I'm well aware that a TC
member pushing it would add considerably to its effectiveness.

 As a solution for notifications I'd rather see a separate
 notification white/grey (or any other monochrome shade) box test
 suite. If as a project we say that notifications have to be
 versioned for any change we can then enforce that easily with an
 external test suite that contains the definitions for all the
 notifications. It then just makes a bunch of api calls and sits on
 RPC verifying the notification format. (or something of that ilk)

Do you mean a *single* external test suite?

As opposed to multiple test suites, each validating the notifications
emitted by each project?

The reason I'm laboring this point is that such an over-arching
test suite feels a little Tempest-y. Seems it would have to spin up an
entire devstack and then tickle the services into producing a range
of notifications before consuming and verifying the events.

Would that possibly be more lightweight if tested on a service-by-service
basis?
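
For illustration, the core of a per-service check could stay quite
small. A minimal sketch, assuming a notification already captured off
the message bus (e.g. via an oslo.messaging listener) and an
illustrative schema; none of these names come from an actual test
suite:

    # Sketch of a notification contract check; the schema and the
    # captured notification dict are illustrative assumptions.
    EXPECTED = {
        'event_type': 'snapshot.create.end',
        'required_payload_keys': ('snapshot_id', 'volume_id',
                                  'tenant_id', 'status'),
    }

    def check_notification_contract(notification, expected=EXPECTED):
        # Assert a captured notification honors its declared contract.
        assert notification['event_type'] == expected['event_type']
        payload = notification['payload']
        missing = [k for k in expected['required_payload_keys']
                   if k not in payload]
        assert not missing, 'payload missing keys: %s' % missing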

 I agree that normally whitebox testing needs to be tightly coupled
 with the data models in the projects, but I feel like notifications
 are slightly different.  Mostly, because the basic format is the
 same between all the projects to make consumption simpler. So
 instead 

Re: [openstack-dev] [qa][all] Branchless Tempest beyond pure-API tests, impact on backporting policy

2014-07-10 Thread Eoghan Glynn

  Note that the notifications that capture these resource state transitions
  are a long-standing mechanism in openstack that ceilometer has depended
  on from the very outset. I don't think it's realistic to envisage these
  interactions will be replaced by REST APIs any time soon.
 
 I wasn't advocating doing everything over a REST API. (API is an
 overloaded term) I just meant that if we're depending on
 notifications for communication between projects then we should
 enforce a stability contract on them. Similar to what we already
 have with the API stability guidelines for the REST APIs. The fact
 that there is no direct enforcement on notifications, either through
 social policy or testing, is what I was taking issue with.

 I also think if we decide to have a policy of enforcing notification
 stability then we should directly test the notifications from an
 external repo to block slips. But, that's a discussion for later, if
 at all.

A-ha, OK, got it.

I've discussed enforcing such stability with jogo on IRC last night, and
kicked off a separate thread to capture that:

  http://lists.openstack.org/pipermail/openstack-dev/2014-July/039858.html

However the time-horizon for that effort would be quite a bit into the
future, compared to the test coverage that we're aiming to have in place
for juno-2.

The branchless Tempest spec envisages new features will be added and
need to be skipped when testing stable/previous, but IIUC requires
that the presence of new behaviors is externally discoverable[5].
   
   I think the test case you proposed is fine. I know some people will
   argue that it is expanding the scope of tempest to include more
   whitebox-like testing, because the notifications are an internal
   side-effect of the api call, but I don't see it that way. It feels
   more like exactly what tempest is there to enable testing, a
   cross-project interaction using the api.
  
  In my example, APIs are only used to initiate the action in cinder
  and then to check the metering data in ceilometer.
  
  But the middle-piece, i.e. the interaction between cinder & ceilometer,
  is not mediated by an API. Rather, it's carried via an unversioned
  notification.
 
 Yeah, exactly, that's why I feel it's a valid Tempest test case.

Just to clarify: you meant to type "it's a valid Tempest test case"
as opposed to "it's *not* a valid Tempest test case", right?
 
 What I was referring to as the counter argument, and where the
 difference of opinion was, is that the test will be making REST API
 calls to both trigger a nominally internal mechanism (the
 notification) from the services and then using the ceilometer api to
 validate the notification worked. 

Yes, that's exactly the idea.

 But, arguably the real intent of these tests is to validate
 that internal mechanism, which is basically a whitebox test. The
 argument was that by testing it in tempest we're testing
 notifications poorly: because of its black-box limitations,
 notifications will just be tested indirectly. Which I feel is a
 valid point, but not a sufficient reason to exclude the notification
 tests from tempest.

Agreed.

 I think the best way to move forward is to have functional whitebox
 tests for the notifications as part of the individual projects
 generating them; that way we can directly validate the
 notifications. But, I also feel there should be tempest tests on top
 of that that verify the ceilometer side of consuming the
 notification and the api exposing that information.

Excellent. So, indeed, more complete coverage of the notification
logic with in-tree tests on the producer side would definitely
be welcome, and could be seen as phase zero of an overall effort to
fix/improve the notification mechanism.
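
For example, a producer-side check might take roughly the following
shape. This is a sketch only, using a stand-in producer, since the
real notifier hook and payload fields would come from the project's
own code:

    import mock

    class FakeVolumeAPI(object):
        # Stand-in producer; assumes the common
        # notifier.info(ctxt, event_type, payload) calling convention.
        def __init__(self, notifier):
            self.notifier = notifier

        def create_snapshot(self, ctxt, volume_id):
            self.notifier.info(ctxt, 'snapshot.create.start',
                               {'volume_id': volume_id})
            # ... the actual snapshot logic would run here ...
            self.notifier.info(ctxt, 'snapshot.create.end',
                               {'volume_id': volume_id})

    def test_snapshot_create_emits_notifications():
        notifier = mock.Mock()
        FakeVolumeAPI(notifier).create_snapshot({}, 'vol-1')
        events = [args[1] for args, _ in notifier.info.call_args_list]
        assert events == ['snapshot.create.start', 'snapshot.create.end']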
  
   But, there is also a slight misunderstanding here. Having a
   feature be externally discoverable isn't a hard requirement for a
   config option in tempest, it's just *strongly* recommended. Mostly,
   because if there isn't a way to discover it, how are end users
   expected to know what will work?
  
  A-ha, I missed the subtle distinction there and thought that this
  discoverability was a *strict* requirement. So how bad a citizen would
  a project be considered to be if it chose not to meet that strong
  recommendation?
 
 You'd be far from the only ones doing that; for an existing example,
 look at anything on the nova driver feature matrix. Most of those aren't
 discoverable from the API. So I think it would be ok to do that, but when we
 have efforts like:
 
 https://review.openstack.org/#/c/94473/
 
 it'll make that more difficult. Which is why I think having discoverability
 through the API is important. (it's the same public cloud question)

So for now, would it suffice for the master versus stable/icehouse
config to be checked in in static form, pending the completion of that
BP on tempest-conf-autogen?

Then the assumption is that this static config is replaced by auto-
generating the 

Re: [openstack-dev] [qa][all] Branchless Tempest beyond pure-API tests, impact on backporting policy

2014-07-10 Thread Matthew Treinish
On Thu, Jul 10, 2014 at 08:37:40AM -0400, Eoghan Glynn wrote:
 
   Note that the notifications that capture these resource state transitions
   are a long-standing mechanism in openstack that ceilometer has depended
   on from the very outset. I don't think it's realistic to envisage these
   interactions will be replaced by REST APIs any time soon.
  
  I wasn't advocating doing everything over a REST API. (API is an
  overloaded term) I just meant that if we're depending on
  notifications for communication between projects then we should
  enforce a stability contract on them. Similar to what we already
  have with the API stability guidelines for the REST APIs. The fact
  that there is no direct enforcement on notifications, either through
  social policy or testing, is what I was taking issue with.
 
  I also think if we decide to have a policy of enforcing notification
  stability then we should directly test the notifications from an
  external repo to block slips. But, that's a discussion for later, if
  at all.
 
 A-ha, OK, got it.
 
 I've discussed enforcing such stability with jogo on IRC last night, and
 kicked off a separate thread to capture that:
 
   http://lists.openstack.org/pipermail/openstack-dev/2014-July/039858.html
 
 However the time-horizon for that effort would be quite a bit into the
 future, compared to the test coverage that we're aiming to have in place
 for juno-2.
 
 The branchless Tempest spec envisages new features will be added and
 need to be skipped when testing stable/previous, but IIUC requires
 that the presence of new behaviors is externally discoverable[5].

I think the test case you proposed is fine. I know some people will
argue that it is expanding the scope of tempest to include more
whitebox-like testing, because the notifications are an internal
side-effect of the api call, but I don't see it that way. It feels
more like exactly what tempest is there to enable testing, a
cross-project interaction using the api.
   
   In my example, APIs are only used to initiate the action in cinder
   and then to check the metering data in ceilometer.
   
   But the middle-piece, i.e. the interaction between cinder & ceilometer,
   is not mediated by an API. Rather, it's carried via an unversioned
   notification.
  
  Yeah, exactly, that's why I feel it's a valid Tempest test case.
 
 Just to clarify: you meant to type "it's a valid Tempest test case"
 as opposed to "it's *not* a valid Tempest test case", right?

Heh, yes I meant to say, it is a valid test case.

  
  What I was referring to as the counter argument, and where the
  difference of opinion was, is that the test will be making REST API
  calls to both trigger a nominally internal mechanism (the
  notification) from the services and then using the ceilometer api to
  validate the notification worked. 
 
 Yes, that's exactly the idea.
 
  But, arguably the real intent of these tests is to validate
  that internal mechanism, which is basically a whitebox test. The
  argument was that by testing it in tempest we're testing
  notifications poorly: because of its black-box limitations,
  notifications will just be tested indirectly. Which I feel is a
  valid point, but not a sufficient reason to exclude the notification
  tests from tempest.
 
 Agreed.
 
  I think the best way to move forward is to have functional whitebox
  tests for the notifications as part of the individual projects
  generating them; that way we can directly validate the
  notifications. But, I also feel there should be tempest tests on top
  of that that verify the ceilometer side of consuming the
  notification and the api exposing that information.
 
 Excellent. So, indeed, more complete coverage of the notification
 logic with in-tree tests on the producer side would definitely
 be welcome, and could be seen as phase zero of an overall effort to
 fix/improve the notification mechanism.
   
But, there is also a slight misunderstanding here. Having a
feature be externally discoverable isn't a hard requirement for a
config option in tempest, it's just *strongly* recommended. Mostly,
because if there isn't a way to discover it, how are end users
expected to know what will work?
   
   A-ha, I missed the subtle distinction there and thought that this
   discoverability was a *strict* requirement. So how bad a citizen would
   a project be considered to be if it chose not to meet that strong
   recommendation?
  
  You'd be far from the only ones doing that; for an existing example,
  look at anything on the nova driver feature matrix. Most of those aren't
  discoverable from the API. So I think it would be ok to do that, but when we
  have efforts like:
  
  https://review.openstack.org/#/c/94473/
  
  it'll make that more difficult. Which is why I think having discoverability
  through the API is important. (it's the same public cloud question)
 
 So for now, would it suffice for 

Re: [openstack-dev] [qa][all] Branchless Tempest beyond pure-API tests, impact on backporting policy

2014-07-10 Thread Thierry Carrez
Hi!

There is a lot of useful information in that post (even excluding the
part brainstorming solutions) and it would be a shame if it was lost in
a sub-thread. Do you plan to make a blog post, or reference wiki page,
out of this?

Back to the content, I think a more layered testing approach (as
suggested) is a great way to reduce our gating issues, but also to
reduce the configuration matrix issue.

On the gating side, our current solution is optimized to detect rare
issues. It's a good outcome, but the main goal should really be to
detect in-project and cross-project regressions, while not preventing us
from landing patches. Rare issue detection should be a side-effect of
the data we generate, not the life-and-death issue it currently is.

So limiting co-gating tests to blackbox testing of external interfaces,
while the project would still run more whitebox tests on its own
behavior sounds like a good idea. It would go a long way to limit the
impact a rare issue in project A has on project B velocity, which is
where most of the gate frustration comes from.

Adding another level of per-project functional testing also lets us test
more configuration options outside of co-gating tests. If we can test
that MySQL and Postgres behave the same from Nova's perspective in
Nova-specific functional whitebox testing, then we really don't need to
test both in cogating tests. By being more specific in what we test for
each project, we can actually test more things by running fewer tests.


Sean Dague wrote:
 I think we need to actually step back a little and figure out where we
 are, how we got here, and what the future of validation might need to
 look like in OpenStack. Because I think there has been some
 communication gaps. (Also, for people I've had vigorous conversations
 about this before, realize my positions have changed somewhat,
 especially on separation of concerns.)
 
 (Also note, this is all mental stream right now, so I will not pretend
 that it's an entirely coherent view of the world, my hope in getting
 things down is we can come up with that coherent view of the world together.)
 
 == Basic History ==
 
 In the essex time frame Tempest was 70 tests. It was basically a barely
 adequate sniff test for integration for OpenStack. So much so that our
 first 3rd Party CI system, SmokeStack, used its own test suite, which
 legitimately found completely different bugs than Tempest. Not
 surprising, Tempest was a really small number of integration tests.
 
 As we got to Grizzly, Tempest had grown to 1300 tests, somewhat
 organically. People were throwing a mix of tests into the fold, some
 using Tempest's client, some using official clients, some trying to hit
 the database doing white box testing. It had become kind of a mess and a
 Rorschach test. We had some really weird design summit sessions because
 many people had only looked at a piece of Tempest, and assumed the rest
 was like it.
 
 So we spent some time defining scope. Tempest couldn't really be
 everything to everyone. It would be a few things:
  * API testing for public APIs with a contract
  * Some throughput integration scenarios to test some common flows
 (these were expected to be small in number)
  * 3rd Party API testing (because it had existed previously)
 
 But importantly, Tempest isn't a generic function test suite. Focus is
 important, because Tempest's mission always was highly aligned with what
 eventually became called Defcore. Some way to validate some
 compatibility between clouds. Be that clouds built from upstream (is the
 cloud of 5 patches ago compatible with the cloud right now), clouds from
 different vendors, public clouds vs. private clouds, etc.
 
 == The Current Validation Environment ==
 
 Today most OpenStack projects have 2 levels of validation. Unit tests &
 Tempest. That's sort of like saying your house has a basement and a
 roof. For sufficiently small values of house, this is fine. I don't
 think our house is sufficiently small any more.
 
 This has caused things like Neutron's unit tests, which actually bring
 up a full wsgi functional stack and test plugins through http calls
 through the entire wsgi stack, replicated 17 times. It's the reason that
 Neutron unit tests take many GB of memory to run, and often run longer
 than Tempest runs. (Maru has been doing hero's work to fix much of this.)
 
 In the last year we made it *really* easy to get a devstack node of your
 own, configured any way you want, to do any project level validation you
 like. Swift uses it to drive their own functional testing. Neutron is
 working on heading down this path.
 
 == New Challenges with New Projects ==
 
 When we started down this path all projects had user APIs. So all
 projects were something we could think about from a tenant usage
 environment. Looking at both Ironic and Ceilometer, we really have
 projects that are Admin API only.
 
 == Contracts or lack thereof ==
 
 I think this is where we start to overlap with Eoghan's 

Re: [openstack-dev] [qa][all] Branchless Tempest beyond pure-API tests, impact on backporting policy

2014-07-10 Thread David Kranz

On 07/10/2014 08:53 AM, Matthew Treinish wrote:

On Thu, Jul 10, 2014 at 08:37:40AM -0400, Eoghan Glynn wrote:

Note that the notifications that capture these resource state transitions
are a long-standing mechanism in openstack that ceilometer has depended
on from the very outset. I don't think it's realistic to envisage these
interactions will be replaced by REST APIs any time soon.

I wasn't advocating doing everything over a REST API. (API is an
overloaded term) I just meant that if we're depending on
notifications for communication between projects then we should
enforce a stability contract on them. Similar to what we already
have with the API stability guidelines for the REST APIs. The fact
that there is no direct enforcement on notifications, either through
social policy or testing, is what I was taking issue with.

I also think if we decide to have a policy of enforcing notification
stability then we should directly test the notifications from an
external repo to block slips. But, that's a discussion for later, if
at all.

A-ha, OK, got it.

I've discussed enforcing such stability with jogo on IRC last night, and
kicked off a separate thread to capture that:

   http://lists.openstack.org/pipermail/openstack-dev/2014-July/039858.html

However the time-horizon for that effort would be quite a bit into the
future, compared to the test coverage that we're aiming to have in place
for juno-2.


The branchless Tempest spec envisages new features will be added and
need to be skipped when testing stable/previous, but IIUC requires
that the presence of new behaviors is externally discoverable[5].

I think the test case you proposed is fine. I know some people will
argue that it is expanding the scope of tempest to include more
whitebox-like testing, because the notifications are an internal
side-effect of the api call, but I don't see it that way. It feels
more like exactly what tempest is there to enable testing, a
cross-project interaction using the api.

In my example, APIs are only used to initiate the action in cinder
and then to check the metering data in ceilometer.

But the middle-piece, i.e. the interaction between cinder & ceilometer,
is not mediated by an API. Rather, it's carried via an unversioned
notification.

Yeah, exactly, that's why I feel it's a valid Tempest test case.

Just to clarify: you meant to type "it's a valid Tempest test case"
as opposed to "it's *not* a valid Tempest test case", right?

Heh, yes I meant to say, it is a valid test case.

  

What I was referring to as the counter argument, and where the
difference of opinion was, is that the test will be making REST API
calls to both trigger a nominally internal mechanism (the
notification) from the services and then using the ceilometer api to
validate the notification worked.

Yes, that's exactly the idea.


But, arguably the real intent of these tests is to validate
that internal mechanism, which is basically a whitebox test. The
argument was that by testing it in tempest we're testing
notifications poorly: because of its black-box limitations,
notifications will just be tested indirectly. Which I feel is a
valid point, but not a sufficient reason to exclude the notification
tests from tempest.

Agreed.


I think the best way to move forward is to have functional whitebox
tests for the notifications as part of the individual projects
generating them; that way we can directly validate the
notifications. But, I also feel there should be tempest tests on top
of that that verify the ceilometer side of consuming the
notification and the api exposing that information.

Excellent. So, indeed, more complete coverage of the notification
logic with in-tree tests on the producer side would definitely
be welcome, and could be seen as phase zero of an overall effort to
fix/improve the notification mechanism.
   

But, there is also a slight misunderstanding here. Having a
feature be externally discoverable isn't a hard requirement for a
config option in tempest, it's just *strongly* recommended. Mostly,
because if there isn't a way to discover it, how are end users
expected to know what will work?

A-ha, I missed the subtle distinction there and thought that this
discoverability was a *strict* requirement. So how bad a citizen would
a project be considered to be if it chose not to meet that strong
recommendation?

You'd be far from the only ones doing that; for an existing example,
look at anything on the nova driver feature matrix. Most of those aren't
discoverable from the API. So I think it would be ok to do that, but when we
have efforts like:

https://review.openstack.org/#/c/94473/

it'll make that more difficult. Which is why I think having discoverability
through the API is important. (it's the same public cloud question)

So for now, would it suffice for the master versus stable/icehouse
config to be checked in in static form, pending the completion of that
BP on tempest-conf-autogen?

Yeah, I think that'll be fine. The 

Re: [openstack-dev] [qa][all] Branchless Tempest beyond pure-API tests, impact on backporting policy

2014-07-10 Thread David Kranz

On 07/10/2014 09:47 AM, Thierry Carrez wrote:

Hi!

There is a lot of useful information in that post (even excluding the
part brainstorming solutions) and it would be a shame if it was lost in
a sub-thread. Do you plan to make a blog post, or reference wiki page,
out of this?

Back to the content, I think a more layered testing approach (as
suggested) is a great way to reduce our gating issues, but also to
reduce the configuration matrix issue.

On the gating side, our current solution is optimized to detect rare
issues. It's a good outcome, but the main goal should really be to
detect in-project and cross-project regressions, while not preventing us
from landing patches. Rare issue detection should be a side-effect of
the data we generate, not the life-and-death issue it currently is.

So limiting co-gating tests to blackbox testing of external interfaces,
while the project would still run more whitebox tests on its own
behavior sounds like a good idea. It would go a long way to limit the
impact a rare issue in project A has on project B velocity, which is
where most of the gate frustration comes from.

Adding another level of per-project functional testing also lets us test
more configuration options outside of co-gating tests. If we can test
that MySQL and Postgres behave the same from Nova's perspective in
Nova-specific functional whitebox testing, then we really don't need to
test both in cogating tests. By being more specific in what we test for
each project, we can actually test more things by running fewer tests.

+10

Once we recognize that co-gating of every test on every commit does not 
scale, many other options come into play.
This issue is closely related to the decision by the qa group in Atlanta 
that migrating api tests from tempest to projects was a good idea.
All of this will have to be done incrementally, presumably on a project 
by project basis. I think neutron may lead the way.
There are many issues around sharing test framework code that Matt 
raised in another message.
When there are good functional api tests running in a project, a subset
could be selected to run in the gate. This was the original intent of the
'smoke' tag in tempest.

 -David


Sean Dague wrote:

I think we need to actually step back a little and figure out where we
are, how we got here, and what the future of validation might need to
look like in OpenStack. Because I think there has been some
communication gaps. (Also, for people I've had vigorous conversations
about this before, realize my positions have changed somewhat,
especially on separation of concerns.)

(Also note, this is all mental stream right now, so I will not pretend
that it's an entirely coherent view of the world, my hope in getting
things down is we can come up with that coherent view of the world together.)

== Basic History ==

In the essex time frame Tempest was 70 tests. It was basically a barely
adequate sniff test for integration for OpenStack. So much so that our
first 3rd Party CI system, SmokeStack, used its own test suite, which
legitimately found completely different bugs than Tempest. Not
surprising, Tempest was a really small number of integration tests.

As we got to Grizzly, Tempest had grown to 1300 tests, somewhat
organically. People were throwing a mix of tests into the fold, some
using Tempest's client, some using official clients, some trying to hit
the database doing white box testing. It had become kind of a mess and a
Rorschach test. We had some really weird design summit sessions because
many people had only looked at a piece of Tempest, and assumed the rest
was like it.

So we spent some time defining scope. Tempest couldn't really be
everything to everyone. It would be a few things:
  * API testing for public APIs with a contract
  * Some throughput integration scenarios to test some common flows
(these were expected to be small in number)
  * 3rd Party API testing (because it had existed previously)

But importantly, Tempest isn't a generic function test suite. Focus is
important, because Tempest's mission always was highly aligned with what
eventually became called Defcore. Some way to validate some
compatibility between clouds. Be that clouds built from upstream (is the
cloud of 5 patches ago compatible with the cloud right now), clouds from
different vendors, public clouds vs. private clouds, etc.

== The Current Validation Environment ==

Today most OpenStack projects have 2 levels of validation. Unit tests &
Tempest. That's sort of like saying your house has a basement and a
roof. For sufficiently small values of house, this is fine. I don't
think our house is sufficiently small any more.

This has caused things like Neutron's unit tests, which actually bring
up a full wsgi functional stack and test plugins through http calls
through the entire wsgi stack, replicated 17 times. It's the reason that
Neutron unit tests take many GB of memory to run, and often run longer
than Tempest runs. (Maru has been doing hero's 

Re: [openstack-dev] [qa][all] Branchless Tempest beyond pure-API tests, impact on backporting policy

2014-07-10 Thread Sean Dague
On 07/10/2014 09:48 AM, Matthew Treinish wrote:
 On Wed, Jul 09, 2014 at 09:16:01AM -0400, Sean Dague wrote:
 I think we need to actually step back a little and figure out where we
 are, how we got here, and what the future of validation might need to
 look like in OpenStack. Because I think there has been some
 communication gaps. (Also, for people I've had vigorous conversations
 about this before, realize my positions have changed somewhat,
 especially on separation of concerns.)

 (Also note, this is all mental stream right now, so I will not pretend
 that it's an entirely coherent view of the world, my hope in getting
 things down is we can come up with that coherent view of the world together.)

 == Basic History ==

 In the essex time frame Tempest was 70 tests. It was basically a barely
 adequate sniff test for integration for OpenStack. So much so that our
 first 3rd Party CI system, SmokeStack, used its own test suite, which
 legitimately found completely different bugs than Tempest. Not
 surprising, Tempest was a really small number of integration tests.

 As we got to Grizzly, Tempest had grown to 1300 tests, somewhat
 organically. People were throwing a mix of tests into the fold, some
 using Tempest's client, some using official clients, some trying to hit
 the database doing white box testing. It had become kind of a mess and a
 Rorschach test. We had some really weird design summit sessions because
 many people had only looked at a piece of Tempest, and assumed the rest
 was like it.

 So we spent some time defining scope. Tempest couldn't really be
 everything to everyone. It would be a few things:
  * API testing for public APIs with a contract
  * Some throughput integration scenarios to test some common flows
 (these were expected to be small in number)
  * 3rd Party API testing (because it had existed previously)

 But importantly, Tempest isn't a generic function test suite. Focus is
 important, because Tempest's mission always was highly aligned with what
 eventually became called Defcore. Some way to validate some
 compatibility between clouds. Be that clouds built from upstream (is the
 cloud of 5 patches ago compatible with the cloud right now), clouds from
 different vendors, public clouds vs. private clouds, etc.

 == The Current Validation Environment ==

 Today most OpenStack projects have 2 levels of validation. Unit tests &
 Tempest. That's sort of like saying your house has a basement and a
 roof. For sufficiently small values of house, this is fine. I don't
 think our house is sufficiently small any more.

 This has caused things like Neutron's unit tests, which actually bring
 up a full wsgi functional stack and test plugins through http calls
 through the entire wsgi stack, replicated 17 times. It's the reason that
 Neutron unit tests take many GB of memory to run, and often run longer
 than Tempest runs. (Maru has been doing hero's work to fix much of this.)

 In the last year we made it *really* easy to get a devstack node of your
 own, configured any way you want, to do any project level validation you
 like. Swift uses it to drive their own functional testing. Neutron is
 working on heading down this path.

 == New Challenges with New Projects ==

 When we started down this path all projects had user APIs. So all
 projects were something we could think about from a tenant usage
 environment. Looking at both Ironic and Ceilometer, we really have
 projects that are Admin API only.

 == Contracts or lack thereof ==

 I think this is where we start to overlap with Eoghan's thread most.
 Because branchless Tempest assumes that the tests in Tempest are governed
 by a stable contract. The behavior should only change based on API
 version, not on day of the week. In the case that triggered this what
 was really being tested was not an API, but the existence of a meter
 that only showed up in Juno.

 Ceilometer is also another great instance of something that's often in a
 state of huge amounts of stack tracing because it depends on some
 internals interface in a project which isn't a contract. Or notification
 formats, which aren't (largely) versioned.

 Ironic has a Nova driver in their tree, which implements the Nova driver
 internals interface. Which means they depend on something that's not a
 contract. It gets broken a lot.

 == Depth of reach of a test suite ==

 Tempest can only reach so far into a stack given that its levers are
 basically public API calls. That's ok. But it means that things like
 testing a bunch of different dbs in the gate (i.e. the postgresql job)
 are pretty ineffectual. Trying to exercise code 4 levels deep through
 API calls is like driving a rover on Mars. You can do it, but only very
 carefully.

 == Replication ==

 Because there is such a huge gap between unit tests, and Tempest tests,
 replication of issues is often challenging. We have the ability to see
 races in the gate due to volume of results, that don't show up for
 developers very 

Re: [openstack-dev] [qa][all] Branchless Tempest beyond pure-API tests, impact on backporting policy

2014-07-10 Thread Doug Hellmann
On Thu, Jul 10, 2014 at 11:56 AM, Sean Dague s...@dague.net wrote:
 On 07/10/2014 09:48 AM, Matthew Treinish wrote:
 On Wed, Jul 09, 2014 at 09:16:01AM -0400, Sean Dague wrote:
 I think we need to actually step back a little and figure out where we
 are, how we got here, and what the future of validation might need to
 look like in OpenStack. Because I think there has been some
 communication gaps. (Also, for people I've had vigorous conversations
 about this before, realize my positions have changed somewhat,
 especially on separation of concerns.)

 (Also note, this is all mental stream right now, so I will not pretend
 that it's an entirely coherent view of the world, my hope in getting
 things down is we can come up with that coherent view of the world together.)

 == Basic History ==

 In the essex time frame Tempest was 70 tests. It was basically a barely
 adequate sniff test for integration for OpenStack. So much so that our
 first 3rd Party CI system, SmokeStack, used its own test suite, which
 legitimately found completely different bugs than Tempest. Not
 surprising, Tempest was a really small number of integration tests.

 As we got to Grizzly, Tempest had grown to 1300 tests, somewhat
 organically. People were throwing a mix of tests into the fold, some
 using Tempest's client, some using official clients, some trying to hit
 the database doing white box testing. It had become kind of a mess and a
 Rorschach test. We had some really weird design summit sessions because
 many people had only looked at a piece of Tempest, and assumed the rest
 was like it.

 So we spent some time defining scope. Tempest couldn't really be
 everything to everyone. It would be a few things:
  * API testing for public APIs with a contract
  * Some throughput integration scenarios to test some common flows
 (these were expected to be small in number)
  * 3rd Party API testing (because it had existed previously)

 But importantly, Tempest isn't a generic function test suite. Focus is
 important, because Tempest's mission always was highly aligned with what
 eventually became called Defcore. Some way to validate some
 compatibility between clouds. Be that clouds built from upstream (is the
 cloud of 5 patches ago compatible with the cloud right now), clouds from
 different vendors, public clouds vs. private clouds, etc.

 == The Current Validation Environment ==

 Today most OpenStack projects have 2 levels of validation. Unit tests &
 Tempest. That's sort of like saying your house has a basement and a
 roof. For sufficiently small values of house, this is fine. I don't
 think our house is sufficiently small any more.

 This has caused things like Neutron's unit tests, which actually bring
 up a full wsgi functional stack and test plugins through http calls
 through the entire wsgi stack, replicated 17 times. It's the reason that
 Neutron unit tests take many GB of memory to run, and often run longer
 than Tempest runs. (Maru has been doing hero's work to fix much of this.)

 In the last year we made it *really* easy to get a devstack node of your
 own, configured any way you want, to do any project level validation you
 like. Swift uses it to drive their own functional testing. Neutron is
 working on heading down this path.

 == New Challenges with New Projects ==

 When we started down this path all projects had user APIs. So all
 projects were something we could think about from a tenant usage
 environment. Looking at both Ironic and Ceilometer, we really have
 projects that are Admin API only.

 == Contracts or lack thereof ==

 I think this is where we start to overlap with Eoghan's thread most.
 Because branchless Tempest assumes that the tests in Tempest are governed
 by a stable contract. The behavior should only change based on API
 version, not on day of the week. In the case that triggered this what
 was really being tested was not an API, but the existence of a meter
 that only showed up in Juno.

 Ceilometer is also another great instance of something that's often in a
 state of huge amounts of stack tracing because it depends on some
 internals interface in a project which isn't a contract. Or notification
 formats, which aren't (largely) versioned.

 Ironic has a Nova driver in their tree, which implements the Nova driver
 internals interface. Which means they depend on something that's not a
 contract. It gets broken a lot.

 == Depth of reach of a test suite ==

 Tempest can only reach so far into a stack given that its levers are
 basically public API calls. That's ok. But it means that things like
 testing a bunch of different dbs in the gate (i.e. the postgresql job)
 are pretty ineffectual. Trying to exercise code 4 levels deep through
 API calls is like driving a rover on Mars. You can do it, but only very
 carefully.

 == Replication ==

 Because there is such a huge gap between unit tests, and Tempest tests,
 replication of issues is often challenging. We have the ability to see
 races in the gate 

[openstack-dev] [qa][all] Branchless Tempest beyond pure-API tests, impact on backporting policy

2014-07-09 Thread Eoghan Glynn

TL;DR: branchless Tempest shouldn't impact on backporting policy, yet
   makes it difficult to test new features not discoverable via APIs

Folks,

At the project/release status meeting yesterday[1], I raised the issue
that featureful backports to stable are beginning to show up[2], purely
to facilitate branchless Tempest. We had a useful exchange of views on
IRC but ran out of time, so this thread is intended to capture and
complete the discussion.

The issues, as I see it, are:

 * Tempest is expected to do double-duty as both the integration testing
   harness for upstream CI and as a tool for externally probing capabilities
   in public clouds

 * Tempest has an implicit bent towards pure API tests, yet not all
   interactions between OpenStack services that we want to test are
   mediated by APIs

 * We don't have an integration test harness other than Tempest
   that we could use to host tests that don't just make assertions
   about the correctness/presence of versioned APIs

 * We want to be able to add new features to Juno, or fix bugs of
   omission, in ways that aren't necessarily discoverable in the API;
   without backporting these patches to stable if we wouldn't have
   done so under the normal stable-maint policy[3]

 * Integrated projects are required[4] to provide Tempest coverage,
   so the rate of addition of tests to Tempest is unlikely to slow
   down anytime soon

So the specific type of test that I have in mind would be common
for Ceilometer, but also possibly for Ironic and others:

 1. an end-user initiates some action via an API
(e.g. calls the cinder snapshot API)

 2. this initiates some actions behind the scenes
(e.g. a volume is snapshot'd and a notification emitted)

 3. the test reasons over some expected side-effect
(e.g. some metering data shows up in ceilometer)
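
Sketched in code, such a test might look something like the
following; the client helpers and meter name are illustrative
assumptions, not actual Tempest interfaces:

    def check_snapshot_metering(volumes_client, telemetry_client,
                                volume_id):
        # 1. end-user action via the public cinder API
        snapshot = volumes_client.create_snapshot(volume_id)

        # 2. behind the scenes cinder emits snapshot.* notifications,
        #    which ceilometer is expected to consume

        # 3. reason over the expected side-effect via the ceilometer API
        samples = telemetry_client.list_samples('snapshot')
        assert any(s['resource_id'] == snapshot['id'] for s in samples), \
            'no snapshot samples found for %s' % snapshot['id']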

The branchless Tempest spec envisages new features will be added and
need to be skipped when testing stable/previous, but IIUC requires
that the presence of new behaviors is externally discoverable[5].

One approach mooted for allowing these kind of scenarios to be tested
was to split off the pure-API aspects of Tempest so that it can be used
for probing public-cloud-capabilities as well as upstream CI, and then
build project-specific mini-Tempests to test integration with other
projects.

Personally, I'm not a fan of that approach as it would require a lot
of QA expertise in each project, lead to inefficient use of CI
nodepool resources to run all the mini-Tempests, and probably lead to
a divergent hotchpotch of per-project approaches.

Another idea would be to keep all tests in Tempest, while also
micro-versioning the services such that tests can be skipped on the
basis of whether a particular feature-adding commit is present.

When this micro-versioning can't be discovered by the test (as in the
public cloud capabilities probing case), those tests would be skipped
anyway.
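
As a rough sketch of how a test might consume such a micro-version,
assuming a hypothetical field in the service's versions document:

    import unittest

    def require_microversion(version_info, minimum):
        # version_info would come from the service's versions document;
        # the 'micro_version' field is a hypothetical addition.
        found = version_info.get('micro_version', 0)
        if found < minimum:
            raise unittest.SkipTest(
                'requires micro-version >= %s, found %s'
                % (minimum, found))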

The final, less palatable, approach that occurs to me would be to
revert to branchful Tempest.

Any other ideas, or preferences among the options laid out above? 

Cheers,
Eoghan

[1] 
http://eavesdrop.openstack.org/meetings/project/2014/project.2014-07-08-21.03.html
[2] https://review.openstack.org/104863
[3] https://wiki.openstack.org/wiki/StableBranch#Appropriate_Fixes
[4] 
https://github.com/openstack/governance/blob/master/reference/incubation-integration-requirements.rst#qa-1
[5] 
https://github.com/openstack/qa-specs/blob/master/specs/implemented/branchless-tempest.rst#scenario-1-new-tests-for-new-features



Re: [openstack-dev] [qa][all] Branchless Tempest beyond pure-API tests, impact on backporting policy

2014-07-09 Thread Matthew Treinish
On Wed, Jul 09, 2014 at 05:41:10AM -0400, Eoghan Glynn wrote:
 
 TL;DR: branchless Tempest shouldn't impact on backporting policy, yet
makes it difficult to test new features not discoverable via APIs
 
 Folks,
 
 At the project/release status meeting yesterday[1], I raised the issue
 that featureful backports to stable are beginning to show up[2], purely
 to facilitate branchless Tempest. We had a useful exchange of views on
 IRC but ran out of time, so this thread is intended to capture and
 complete the discussion.

So, [2] is definitely not something that should be backported. But, doesn't it
mean that cinder snapshot notifications don't work at all in icehouse? Is this
reflected in the release notes or docs somewhere, because it seems like
something that would be expected to work, which, I think, is actually a bigger
bug being exposed by branchless tempest. As a user, how do I know
whether the cloud I'm using supports cinder snapshot notifications?

 
 The issues, as I see it, are:
 
  * Tempest is expected to do double-duty as both the integration testing
harness for upstream CI and as a tool for externally probing capabilities
in public clouds
 
  * Tempest has an implicit bent towards pure API tests, yet not all
interactions between OpenStack services that we want to test are
mediated by APIs

I think this is the bigger issue. If there is cross-service communication it
should have an API contract. (and probably be directly tested too) It doesn't
necessarily have to be a REST API, although in most cases that's easier. This
is probably something for the TC to discuss/mandate, though.

 
  * We don't have an integration test harness other than Tempest
that we could use to host tests that don't just make assertions
about the correctness/presence of versioned APIs
 
  * We want to be able to add new features to Juno, or fix bugs of
omission, in ways that aren't necessarily discoverable in the API;
without backporting these patches to stable if we wouldn't have
done so under the normal stable-maint policy[3]
 
  * Integrated projects are required[4] to provide Tempest coverage,
so the rate of addition of tests to Tempest is unlikely to slow
down anytime soon
 
 So the specific type of test that I have in mind would be common
 for Ceilometer, but also possibly for Ironic and others:
 
  1. an end-user initiates some action via an API
 (e.g. calls the cinder snapshot API)
 
  2. this initiates some actions behind the scenes
 (e.g. a volume is snapshot'd and a notification emitted)
 
  3. the test reasons over some expected side-effect
 (e.g. some metering data shows up in ceilometer)
 
 The branchless Tempest spec envisages new features will be added and
 need to be skipped when testing stable/previous, but IIUC requires
 that the presence of new behaviors is externally discoverable[5].

I think the test case you proposed is fine. I know some people will argue that
it is expanding the scope of tempest to include more whitebox-like testing,
because the notifications are an internal side-effect of the api call, but I
don't see it that way. It feels more like exactly what tempest is there to
enable testing, a cross-project interaction using the api.

I'm pretty sure that most of the concerns around tests like this were from the
gate maintenance and debug side of things. In other words when things go wrong
how impossible will it be to debug that a notification wasn't generated or not
counted? Right now I think it would be pretty difficult to debug a notification
test failure, which is where the problem is. While I think testing like this is
definitely valid, that doesn't mean we should rush in a bunch of sloppy tests
that are impossible to debug, because that'll just make everyone sad panda.

But, there is also a slight misunderstanding here. Having a feature be
externally discoverable isn't a hard requirement for a config option in tempest,
it's just *strongly* recommended. Mostly, because if there isn't a way to
discover it, how are end users expected to know what will work?

For this specific case I think it's definitely fair to have an option for which
notifications the services are expected to generate. That's something that is
definitely a configurable option when setting up a deployment, and is something
that feels like a valid tempest config option, so we know which tests will work.
We already have similar feature flags for config time options in the services,
and having options like that would also get you out of that backport mess you
have right now. However, it does raise the question: as an end user, how am I
expected to know which notifications get counted? Which is why having
feature discoverability is generally a really good idea.
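
To sketch what consuming such a flag might look like, with the option
group and name made up purely for illustration (a real flag would be
registered alongside tempest's existing *-feature-enabled options):

    import testtools

    from tempest import config

    CONF = config.CONF

    class VolumeNotificationTest(testtools.TestCase):

        @testtools.skipUnless(
            CONF.telemetry_feature_enabled.cinder_snapshot_notifications,
            'cinder snapshot notifications not expected here')
        def test_snapshot_metering(self):
            pass  # would drive the snapshot-then-check-meter scenario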

 
 One approach mooted for allowing these kind of scenarios to be tested
 was to split off the pure-API aspects of Tempest so that it can be used
 for probing 

Re: [openstack-dev] [qa][all] Branchless Tempest beyond pure-API tests, impact on backporting policy

2014-07-09 Thread Sean Dague
I think we need to actually step back a little and figure out where we
are, how we got here, and what the future of validation might need to
look like in OpenStack. Because I think there has been some
communication gaps. (Also, for people I've had vigorous conversations
about this before, realize my positions have changed somewhat,
especially on separation of concerns.)

(Also note, this is all mental stream right now, so I will not pretend
that it's an entirely coherent view of the world, my hope in getting
things down is we can come up with that coherent view of the world together.)

== Basic History ==

In the essex time frame Tempest was 70 tests. It was basically a barely
adequate sniff test for integration for OpenStack. So much so that our
first 3rd Party CI system, SmokeStack, used its own test suite, which
legitimately found completely different bugs than Tempest. Not
surprising, Tempest was a really small number of integration tests.

As we got to Grizzly, Tempest had grown to 1300 tests, somewhat
organically. People were throwing a mix of tests into the fold, some
using Tempest's client, some using official clients, some trying to hit
the database doing white box testing. It had become kind of a mess and a
Rorschach test. We had some really weird design summit sessions because
many people had only looked at a piece of Tempest, and assumed the rest
was like it.

So we spent some time defining scope. Tempest couldn't really be
everything to everyone. It would be a few things:
 * API testing for public APIs with a contract
 * Some throughput integration scenarios to test some common flows
(these were expected to be small in number)
 * 3rd Party API testing (because it had existed previously)

But importantly, Tempest isn't a generic function test suite. Focus is
important, because Tempest's mission always was highly aligned with what
eventually became called Defcore. Some way to validate some
compatibility between clouds. Be that clouds built from upstream (is the
cloud of 5 patches ago compatible with the cloud right now), clouds from
different vendors, public clouds vs. private clouds, etc.

== The Current Validation Environment ==

Today most OpenStack projects have 2 levels of validation. Unit tests &
Tempest. That's sort of like saying your house has a basement and a
roof. For sufficiently small values of house, this is fine. I don't
think our house is sufficiently small any more.

This has caused things like Neutron's unit tests, which actually bring
up a full wsgi functional stack and test plugins through http calls
through the entire wsgi stack, replicated 17 times. It's the reason that
Neutron unit tests take many GB of memory to run, and often run longer
than Tempest runs. (Maru has been doing hero's work to fix much of this.)

In the last year we made it *really* easy to get a devstack node of your
own, configured any way you want, to do any project level validation you
like. Swift uses it to drive their own functional testing. Neutron is
working on heading down this path.

== New Challenges with New Projects ==

When we started down this path all projects had user APIs. So all
projects were something we could think about from a tenant usage
environment. Looking at both Ironic and Ceilometer, we really have
projects that are Admin API only.

== Contracts or lack thereof ==

I think this is where we start to overlap with Eoghan's thread most.
Because branchless Tempest assumes that the tests in Tempest are governed
by a stable contract. The behavior should only change based on API
version, not on day of the week. In the case that triggered this what
was really being tested was not an API, but the existence of a meter
that only showed up in Juno.

Ceilometer is also another great instance of something that's often in a
state of huge amounts of stack tracing because it depends on some
internals interface in a project which isn't a contract. Or notification
formats, which aren't (largely) versioned.

Ironic has a Nova driver in their tree, which implements the Nova driver
internals interface. Which means they depend on something that's not a
contract. It gets broken a lot.

== Depth of reach of a test suite ==

Tempest can only reach so far into a stack given that its levers are
basically public API calls. That's ok. But it means that things like
testing a bunch of different dbs in the gate (i.e. the postgresql job)
are pretty ineffectual. Trying to exercise code 4 levels deep through
API calls is like driving a rover on Mars. You can do it, but only very
carefully.

== Replication ==

Because there is such a huge gap between unit tests, and Tempest tests,
replication of issues is often challenging. We have the ability to see
races in the gate due to volume of results, that don't show up for
developers very easily. When you do 30k runs a week, a ton of data falls
out of it.

A good instance is the live snapshot bug. It was failing on about 3% of
Tempest runs, which means that it had about a 

Re: [openstack-dev] [qa][all] Branchless Tempest beyond pure-API tests, impact on backporting policy

2014-07-09 Thread Eoghan Glynn

Thanks for the response Matt, some comments inline.

  At the project/release status meeting yesterday[1], I raised the issue
  that featureful backports to stable are beginning to show up[2], purely
  to facilitate branchless Tempest. We had a useful exchange of views on
  IRC but ran out of time, so this thread is intended to capture and
  complete the discussion.
 
 So, [2] is definitely not something that should be backported.

Agreed. It was the avoidance of such forced backports that motivated
the thread.

 But, doesn't it mean that cinder snapshot notifications don't work
 at all in icehouse?

The snapshot notifications work in the sense that cinder emits them
at the appropriate points in time. What was missing in Icehouse is
that ceilometer didn't consume those notifications and translate them
into metering data.

 Is this reflected in the release notes or docs somewhere

Yeah it should be clear from the list of meters in the icehouse docs:

  
https://github.com/openstack/ceilometer/blob/stable/icehouse/doc/source/measurements.rst#volume-cinder

versus the Juno version:

  
https://github.com/openstack/ceilometer/blob/master/doc/source/measurements.rst#volume-cinder

 because it seems like something that would be expected to work, which,
 I think, is actually a bigger bug being exposed by branchless tempest.

The bigger bug being the lack of ceilometer support for consuming this
notification, or the lack of discoverability for that feature?

 As a user, how do I know whether the cloud I'm using supports
 cinder snapshot notifications?

If you depend on this as a front-end user, then you'd have to read
the documentation listing the meters being gathered.

But is this something that a front-end cloud user would actually be
directly concerned about?
 
   * Tempest has an implicit bent towards pure API tests, yet not all
     interactions between OpenStack services that we want to test are
     mediated by APIs
 
 I think this is the bigger issue. If there is cross-service communication it
 should have an API contract. (and probably be directly tested too) It doesn't
 necessarily have to be a REST API, although in most cases that's easier. This
 is probably something for the TC to discuss/mandate, though.

As I said at the PTLs meeting yesterday, I think we need to be wary
of the temptation to bend the problem-space to fit the solution.

Short of significantly increasing the polling load imposed by ceilometer,
in reality we will have to continue to depend on notifications as one
of the main ways of detecting phase-shifts in resource state.

Note that the notifications that capture these resource state transitions
are a long-standing mechanism in openstack that ceilometer has depended
on from the very outset. I don't think it's realistic to envisage these
interactions will be replaced by REST APIs any time soon.

  The branchless Tempest spec envisages new features will be added and
  need to be skipped when testing stable/previous, but IIUC requires
  that the presence of new behaviors is externally discoverable[5].
 
 I think the test case you proposed is fine. I know some people will
 argue that it is expanding the scope of tempest to include more
 whitebox-like testing, because the notifications are an internal
 side-effect of the API call, but I don't see it that way. It feels
 more like exactly what tempest is there to enable: testing a
 cross-project interaction using the API.

In my example, APIs are only used to initiate the action in cinder
and then to check the metering data in ceilometer.

But the middle-piece, i.e. the interaction between cinder & ceilometer,
is not mediated by an API. Rather, it's carried via an unversioned
notification.
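
(To make that middle-piece concrete, the emitting side looks roughly
like this with oslo.messaging's notifier -- a sketch with made-up
payload fields, not cinder's actual emitting code:)

    # Sketch: how a service like cinder emits the notification that
    # ceilometer consumes. Note that nothing here negotiates a payload
    # schema or version with the consumer.
    from oslo.config import cfg
    from oslo import messaging

    transport = messaging.get_transport(cfg.CONF)
    notifier = messaging.Notifier(transport,
                                  driver='messaging',
                                  publisher_id='volume.host01')

    notifier.info({}, 'snapshot.create.end',
                  {'snapshot_id': 'made-up-id', 'volume_id': 'made-up-id'})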

 I'm pretty sure that most of the concerns around tests like this
 were from the gate maintenance and debug side of things. In other
 words, when things go wrong, how impossible will it be to debug that
 a notification wasn't generated or wasn't counted? Right now I think
 it would be pretty difficult to debug a notification test failure,
 which is where the problem is. While I think testing like this is
 definitely valid, that doesn't mean we should rush in a bunch of
 sloppy tests that are impossible to debug, because that'll just make
 everyone sad panda.

It's a fair point that cross-service diagnosis is not necessarily easy,
especially as there's pressure to reduce the volume of debug logging
emitted. But notification-driven metering is an important part of what
ceilometer does, so we need to figure out some way of integration-testing
it, IMO.
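
(The shape of such a test would be roughly as follows; the client
objects and method names are placeholders, not the exact tempest
clients:)

    # Rough shape of the cross-service test: drive cinder through its
    # public API, then poll ceilometer until the sample shows up.
    import time

    def check_snapshot_metering(volumes_client, telemetry_client, volume_id):
        snapshot = volumes_client.create_snapshot(volume_id)
        for _ in range(60):                  # poll for up to ~5 minutes
            samples = telemetry_client.list_samples('snapshot')
            if any(s['resource_id'] == snapshot['id'] for s in samples):
                return                       # meter observed: pass
            time.sleep(5)
        raise AssertionError('no snapshot sample appeared in ceilometer')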

 But, there is also a slight misunderstanding here. Having a
 feature be externally discoverable isn't a hard requirement for a
 config option in tempest, it's just *strongly* recommended. Mostly,
 because if there isn't a way to discover it, how are end users
 expected to know what will work?

A-ha, I missed the subtle distinction there and thought that this
discoverability was a *strict* requirement. So 
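
(For reference, the non-discoverable fallback being described is just a
deployer-set flag in tempest.conf plus a skip guard in the test; the
option name below is invented for illustration:)

    # tempest.conf -- set by whoever configures the cloud under test:
    #
    #   [telemetry-feature-enabled]
    #   cinder_snapshot_meters = True
    #
    # and in the test, skip when the flag is off:
    import testtools

    from tempest import config

    CONF = config.CONF

    @testtools.skipUnless(
        CONF.telemetry_feature_enabled.cinder_snapshot_meters,
        'cinder snapshot meters not enabled in this deployment')
    def test_snapshot_meter():
        ...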

Re: [openstack-dev] [qa][all] Branchless Tempest beyond pure-API tests, impact on backporting policy

2014-07-09 Thread Matthew Treinish
On Wed, Jul 09, 2014 at 01:44:33PM -0400, Eoghan Glynn wrote:
 
 Thanks for the response Matt, some comments inline.
 
   At the project/release status meeting yesterday[1], I raised the issue
   that featureful backports to stable are beginning to show up[2], purely
   to facilitate branchless Tempest. We had a useful exchange of views on
   IRC but ran out of time, so this thread is intended to capture and
   complete the discussion.
  
  So, [2] is definitely not something that should be backported.
 
 Agreed. It was the avoidance of such forced backports that motivated
 the thread.
 
  But, doesn't it mean that cinder snapshot notifications don't work
  at all in icehouse?
 
 The snapshot notifications work in the sense that cinder emits them
 at the appropriate points in time. What was missing in Icehouse is
 that ceilometer didn't consume those notifications and translate to
 metering data.

Yeah that's what I meant, sorry it wasn't worded clearly.

 
  Is this reflected in the release notes or docs somewhere?
 
 Yeah, it should be clear from the list of meters in the icehouse docs:
 
   https://github.com/openstack/ceilometer/blob/stable/icehouse/doc/source/measurements.rst#volume-cinder
 
 versus the Juno version:
 
   https://github.com/openstack/ceilometer/blob/master/doc/source/measurements.rst#volume-cinder
 
  because it seems like something that would be expected to work, which,
  I think, is actually a bigger bug being exposed by branchless tempest.
 
 The bigger bug being the lack of ceilometer support for consuming this
 notification, or the lack of discoverability for that feature?

Well, if it's in the docs as a limitation for icehouse, then it's less severe,
but it's still something on the discoverability side I guess. I think my lack
of experience with ceilometer is showing here.

 
  As a user, how do I know whether the cloud I'm using supports cinder
  snapshot notifications?
 
 If you depend on this as a front-end user, then you'd have to read
 the documentation listing the meters being gathered.
 
 But is this something that a front-end cloud user would actually be
 directly concerned about?

I'm not sure, probably not, but if it's exposed on the public api I think it's
totally fair to expect that someone will be depending on it.

  
    * Tempest has an implicit bent towards pure API tests, yet not all
      interactions between OpenStack services that we want to test are
      mediated by APIs
  
  I think this is the bigger issue. If there is cross-service communication it
  should have an API contract. (and probably be directly tested too) It doesn't
  necessarily have to be a REST API, although in most cases that's easier. This
  is probably something for the TC to discuss/mandate, though.
 
 As I said at the PTLs meeting yesterday, I think we need to be wary
 of the temptation to bend the problem-space to fit the solution.
 
 Short of significantly increasing the polling load imposed by ceilometer,
 in reality we will have to continue to depend on notifications as one
 of the main ways of detecting phase-shifts in resource state.

No, I agree notifications make a lot of sense; the load from frequent polling
is too high.

 
 Note that the notifications that capture these resource state transitions
 are a long-standing mechanism in openstack that ceilometer has depended
 on from the very outset. I don't think it's realistic to envisage these
 interactions will be replaced by REST APIs any time soon.

I wasn't advocating doing everything over a REST API. (API is an overloaded term)
I just meant that if we're depending on notifications for communication between
projects then we should enforce a stability contract on them, similar to what we
already have with the API stability guidelines for the REST APIs. The fact that
there is no direct enforcement on notifications, either through social policy or
testing, is what I was taking issue with.

I also think if we decide to have a policy of enforcing notification stability
then we should directly test the notifications from an external repo to block
slips. But, that's a discussion for later, if at all.
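
(Purely as a sketch of what that enforcement could look like: freeze
each payload's shape as a schema in the external repo, and validate
notifications captured off the bus against it; the jsonschema library
and the field list here are my assumptions.)

    # Sketch: a frozen contract for one notification's payload.
    import jsonschema

    SNAPSHOT_CREATE_END = {
        'type': 'object',
        'required': ['snapshot_id', 'volume_id', 'status'],
        'properties': {
            'snapshot_id': {'type': 'string'},
            'volume_id': {'type': 'string'},
            'status': {'type': 'string'},
        },
    }

    def assert_payload_contract(captured_payload):
        # raises jsonschema.ValidationError on any incompatible change
        jsonschema.validate(captured_payload, SNAPSHOT_CREATE_END)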

 
   The branchless Tempest spec envisages new features will be added and
   need to be skipped when testing stable/previous, but IIUC requires
   that the presence of new behaviors is externally discoverable[5].
  
  I think the test case you proposed is fine. I know some people will
  argue that it is expanding the scope of tempest to include more
  whitebox-like testing, because the notifications are an internal
  side-effect of the API call, but I don't see it that way. It feels
  more like exactly what tempest is there to enable: testing a
  cross-project interaction using the API.
 
 In my example, APIs are only used to initiate the action in cinder
 and then to check the metering data in ceilometer.
 
 But the middle-piece, i.e. the interaction between cinder & ceilometer,
 is not mediated by an API. Rather,