On 07/10/2014 08:53 AM, Matthew Treinish wrote:
On Thu, Jul 10, 2014 at 08:37:40AM -0400, Eoghan Glynn wrote:
Note that the notifications that capture these resource state transitions
are a long-standing mechanism in openstack that ceilometer has depended
on from the very outset. I don't think it's realistic to envisage these
interactions will be replaced by REST APIs any time soon.
I wasn't advocating doing everything over a REST API. (API is an
overloaded term) I just meant that if we're depending on
notifications for communication between projects then we should
enforce a stability contract on them. Similar to what we already
have with the API stability guidelines for the REST APIs. The fact
that there is no direct enforcement on notifications, either through
social policy or testing, is what I was taking issue with.

I also think if we decide to have a policy of enforcing notification
stability then we should directly test the notifications from an
external repo to block slips. But, that's a discussion for later, if
at all.
A-ha, OK, got it.

I discussed enforcing such stability with jogo on IRC last night, and
kicked off a separate thread to capture that:

   http://lists.openstack.org/pipermail/openstack-dev/2014-July/039858.html

However the time-horizon for that effort would be quite a bit into the
future, compared to the test coverage that we're aiming to have in place
for juno-2.

The branchless Tempest spec envisages new features will be added and
need to be skipped when testing stable/previous, but IIUC requires
that the presence of new behaviors is externally discoverable[5].
I think the test case you proposed is fine. I know some people will
argue that it is expanding the scope of tempest to include more
whitebox-like testing, because the notifications are an internal
side-effect of the api call, but I don't see it that way. It feels
like exactly the kind of thing tempest is there to test: a
cross-project interaction using the api.
In my example, APIs are only used to initiate the action in cinder
and then to check the metering data in ceilometer.

But the middle piece, i.e. the interaction between cinder & ceilometer,
is not mediated by an API. Rather, it's carried via an unversioned
notification.
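
For concreteness, the notification in question is just the usual oslo
envelope dropped on the message bus, roughly along these lines (the
payload keys below are illustrative rather than the exact cinder fields):

   # Rough shape of the unversioned notification cinder emits for a
   # snapshot event; the envelope fields follow the common oslo
   # notification format, the payload keys are illustrative only.
   notification = {
       'message_id': 'a1b2c3d4-5678-90ab-cdef-1234567890ab',
       'publisher_id': 'volume.cinder-host-01',
       'event_type': 'snapshot.create.end',
       'priority': 'INFO',
       'timestamp': '2014-07-10 12:53:00.000000',
       'payload': {
           'snapshot_id': 'fake-snapshot-id',
           'volume_id': 'fake-volume-id',
           'tenant_id': 'fake-tenant-id',
           'volume_size': 1,
       },
   }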
Yeah, exactly, that's why I feel it's a valid Tempest test case.
Just to clarify: you meant to type "it's a valid Tempest test case"
as opposed to "it's *not* a valid Tempest test case", right?
Heh, yes I meant to say, "it is a valid test case".

What I was referring to as the counter argument, and where the
difference of opinion was, is that the test will be making REST API
calls to both trigger a nominally internal mechanism (the
notification) from the services and then using the ceilometer api to
validate the notification worked.
Yes, that's exactly the idea.

But, arguably, the real intent of these tests is to validate
that internal mechanism, which is basically a whitebox test. The
argument was that by testing it in tempest we're testing
notifications poorly, because with tempest's black-box limitation
the notifications will only be tested indirectly. Which I feel is a
valid point, but not a sufficient reason to exclude the notification
tests from tempest.
Agreed.

I think the best way to move forward is to have functional whitebox
tests for the notifications as part of the individual projects
generating them, and that way we can direct validation of the
notification. But, I also feel there should be tempest tests on top
of that that verify the ceilometer side of consuming the
notification and the api exposing that information.
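
Just to sketch what I mean by direct validation (the FakeNotifier and
create_snapshot names below are stand-ins, not real cinder code), an
in-tree functional test could capture whatever the service hands to its
notifier and assert on it directly:

   import testtools


   class FakeNotifier(object):
       """Collects notifications instead of putting them on the bus."""

       def __init__(self):
           self.notifications = []

       def info(self, context, event_type, payload):
           self.notifications.append({'event_type': event_type,
                                      'payload': payload})


   def create_snapshot(notifier, volume_id):
       """Stand-in for the real code path that emits the notifications."""
       notifier.info(None, 'snapshot.create.start', {'volume_id': volume_id})
       # ... the actual snapshot creation would happen here ...
       notifier.info(None, 'snapshot.create.end',
                     {'volume_id': volume_id, 'snapshot_id': 'fake-snap'})


   class SnapshotNotificationTest(testtools.TestCase):

       def test_create_emits_start_and_end(self):
           notifier = FakeNotifier()
           create_snapshot(notifier, 'fake-vol')
           events = [n['event_type'] for n in notifier.notifications]
           self.assertEqual(['snapshot.create.start', 'snapshot.create.end'],
                            events)
           self.assertIn('volume_id', notifier.notifications[-1]['payload'])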
Excellent. So, indeed, more fulsome coverage of the notification
logic with in-tree tests on the producer side would definitely
be welcome, and could be seen as a "phase zero" of an overall effort
to fix/improve the notification mechanism.
But, there is also a slight misunderstanding here. Having a
feature be externally discoverable isn't a hard requirement for a
config option in tempest, it's just *strongly* recommended. Mostly
because if there isn't a way to discover it, how are end users
expected to know what will work?
A-ha, I missed the subtle distinction there and thought that this
discoverability was a *strict* requirement. So how bad a citizen would
a project be considered to be if it chose not to meet that strong
recommendation?
You'd be far from the only ones doing that; for an existing example,
look at anything on the nova driver feature matrix. Most of those aren't
discoverable from the API. So I think it would be ok to do that, but when we
have efforts like:

https://review.openstack.org/#/c/94473/

it'll make that more difficult. Which is why I think having discoverability
through the API is important. (it's the same public cloud question)
So for now, would it suffice for the master versus stable/icehouse
config to be checked-in in static form pending the completion of that
BP on tempest-conf-autogen?
Yeah, I think that'll be fine. The auto-generation stuff is far from having
complete coverage of all the tempest config options. It's more of a best effort
approach.

Then the assumption is that this static config is replaced by auto-
generating the tempest config using some project-specific discovery
mechanisms?
Not exactly, tempest will still always require a static config. The
auto-generating mechanism won't ever be used for gating, it's just to help
enable configuring and running tempest against an existing deployment, which
is apparently a popular use case.

For this specific case I think it's definitely fair to have an
option for which notifications the services are expected to
generate. That's something that is definitely a configurable option
when setting up a deployment, and is something that feels like a
valid tempest config option, so we know which tests will work. We
already have similar feature flags for config time options in the
services, and having options like that would also get you out of
that backport mess you have right now.
So would this test configuration option have a semantic like:

  "a wildcarded list of notification event types that ceilometer consumes"

then tests could be skipped on the basis of the notifications that
they depend on being unavailable, in the manner of say:

   @testtools.skipUnless(
       matchesAll(CONF.telemetry_consumed_notifications.volume,
                  ['snapshot.exists',
                   'snapshot.create.*',
                   'snapshot.delete.*',
                   'snapshot.resize.*'])
   )
   @test.services('volume')
   def test_check_volume_notification(self):
       ...
Is something of that ilk what you envisaged above?
Yeah that was my idea more or less, but I'd probably move the logic into a
separate decorator to make it a bit cleaner. Like:

@test.consumed_notifications('volumes', 'snapshot.exists',
                             'snapshot.create.*', 'snapshot.delete.*',
                             'snapshot.resize.*')

and you can just double them up if the test requires notifications from other
services.
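
A rough sketch of how that decorator could be wired up (the config
lookup is stubbed out below, since the tempest option it would read
doesn't exist yet and its name is just an assumption):

   import fnmatch
   import functools
   import unittest


   def _consumed_for(service):
       # Stand-in for reading the static tempest config, e.g. something
       # like CONF.telemetry.consumed_notifications_volumes.
       return ['snapshot.*', 'volume.exists']


   def consumed_notifications(service, *required):
       """Skip the test unless every required event type is consumed."""
       def decorator(func):
           @functools.wraps(func)
           def wrapper(self, *args, **kwargs):
               patterns = _consumed_for(service)
               missing = [ev for ev in required
                          if not any(fnmatch.fnmatch(ev, p)
                                     for p in patterns)]
               if missing:
                   raise unittest.SkipTest('notifications not consumed: %s'
                                           % ', '.join(missing))
               return func(self, *args, **kwargs)
           return wrapper
       return decorator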
OK, so the remaining thing I wanted to confirm is that it's acceptable
for the skip/no-skip logic of that decorator to be driven by static
(as opposed to discoverable) config?
Yes, it'll always be from a static config in the tempest config file. For the
purposes of skip decisions it's always a static option.

The discoverability being used in that spec I mentioned before would be a
separate tool to aid in generating that config file if one wasn't available
beforehand.
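
On the tempest side that would presumably boil down to registering a
normal oslo.config option and setting it in tempest.conf; the group and
option names below are invented for the sake of the example:

   from oslo.config import cfg

   telemetry_group = cfg.OptGroup(name='telemetry',
                                  title='Telemetry service options')

   TelemetryGroup = [
       cfg.ListOpt('consumed_notifications_volumes',
                   default=[],
                   help='Wildcarded list of volume notification event '
                        'types that ceilometer is configured to consume.'),
   ]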

Right. But making a non-discoverable, but tempest-visible, change means the cloud deployer or installer has to know whether the code being deployed has the fix or not, so that tempest can be informed about the value of this option. Unless I am missing something, this is really saying there is time-based versioning going on, but it will be hidden from users.

What is the objection to saying that all api capabilities should be discoverable? I realize that any bug fix causes a "behavior change", but we should be able to have guidelines for which behavior changes must be discoverable, just as we do for api stability. It's really the same issue.

 -David

