Re: [openstack-dev] [OpenStack-Dev] [third-party-ci] Clarifications on the goal and skipping tests

2015-04-02 Thread Mike Perez
On 00:21 Tue 31 Mar, Rochelle Grober wrote:
 Top posting… I believe the main issue was a problem with snapshots that
 caused false negatives for most cinder drivers.  But, that got fixed.

I don't know what you're talking about here. Are you saying there was an
issue with the Tempest tests that drivers hit when additional tests were
allowed to run? If so, got a link to the bug and review?

 Unfortunately, we haven’t yet established a good process to notify third
 parties when skipped tests are fixed and should be “unskipped”.  Maybe
 tagging the tests can help on this.  But, I really do think this round was
 a bit of first run gotchas and rookie mistakes on all sides.

Can you further explain this "on all sides"? I really believe the panicking
could've been avoided, and there would have been more time to iron out any
issues, if some vendors had not waited until the last minute on requirements.

 A good post mortem on how to better communicate changes and deadlines may go
 a long way to smooth these out in the next round.

I hope you're not talking about communication on deadlines for CIs. This has
already been discussed and announced at length, and you should reread the
previous thread [1] on this if you still feel that way. The next round won't
matter, because a driver won't be accepted at the beginning unless there is
already a reported CI.

[1] - http://lists.openstack.org/pipermail/openstack-dev/2015-March/059693.html

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [third-party-ci] Clarifications on the goal and skipping tests

2015-03-31 Thread Duncan Thomas
On 31 March 2015 at 01:35, John Griffith john.griffi...@gmail.com wrote:

 On Mon, Mar 30, 2015 at 4:06 PM, Doug Wiegley 
 doug...@parksidesoftware.com wrote:



 - Test relies on some “optional” feature, like overlapping IP subnets that
 the backend doesn’t support.  I’d argue it’s another case of broken tests
 if they require an optional feature, but it still needs skipping in the
 meantime.


 This may be something specific to Neutron?  In Cinder, LVM is pretty much
 the lowest common denominator.  I'm not aware of any volume tests in
 Tempest that rely on optional features that don't pick that up
 automatically from the config (like multi-backend, for example).



That I know of off the top of my head:
- Snapshot of an attached volume works for most drivers but not all
- Backup of a volume that has snapshots fails for some drivers
- Restore to a volume that has snapshots fails on some drivers

I think all of the above are things that we should fix, but they exist
today.

Since one obscure bug can lead to CI failing on every patch, is it better
to say 'no skips without active bugs, and record your open bugs somewhere'?
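
As a concrete illustration of the 'no skips without active bugs' idea:
Tempest has a skip decorator that takes a bug number, so the skip and the
bug it is waiting on stay attached to each other. A minimal sketch along
those lines (the class, test name, and bug number below are hypothetical,
and the decorator's import location has moved between Tempest releases):

    # Hypothetical sketch: a skip that records the open bug it is waiting
    # on, so it can be revisited and removed once that bug is closed.
    from tempest.api.volume import base
    from tempest import test


    class VolumeSnapshotAttachedTest(base.BaseVolumeTest):

        # Skips with a message that carries the Launchpad bug reference.
        @test.skip_because(bug="1234567")
        def test_snapshot_create_from_attached_volume(self):
            pass  # test body elided for the sketch

A periodic job (or a reviewer) can then grep for skip_because, check whether
the referenced bugs are still open, and drop the skips that no longer apply.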
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [third-party-ci] Clarifications on the goal and skipping tests

2015-03-30 Thread John Griffith
On Mon, Mar 30, 2015 at 4:06 PM, Doug Wiegley doug...@parksidesoftware.com
wrote:

 A few reasons, I’m sure there are others:

 - Broken tests that hardcode something about the ref implementation. The
 test needs to be fixed, of course, but in the meantime, a constantly
 failing CI is worthless (hello, lbaas scenario test.)

Certainly... but that's relatively easy to fix (bug/patch to Tempest).
Although that's not actually the case in this particular context, as there
are a handful of third-party devices that run the full set of tests that
the ref driver runs with no additional skips or modifications.


 - Test relies on some “optional” feature, like overlapping IP subnets that
 the backend doesn’t support.  I’d argue it’s another case of broken tests
 if they require an optional feature, but it still needs skipping in the
 meantime.


This may be something specific to Neutron?  In Cinder, LVM is pretty much
the lowest common denominator.  I'm not aware of any volume tests in Tempest
that rely on optional features that don't pick that up automatically from
the config (like multi-backend, for example).
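
To illustrate the "picks it up automatically out of the config" point: the
usual Tempest pattern is that a test class consults a feature flag from
tempest.conf and skips itself, so a backend lacking the feature only flips
a config option instead of carrying local skips. A rough sketch of that
pattern (class and test names here are illustrative;
volume_feature_enabled.snapshot is one example of such a flag):

    # Rough sketch of a config-driven skip: the test consults tempest.conf
    # feature flags instead of being skipped locally by each CI.
    from tempest.api.volume import base
    from tempest import config

    CONF = config.CONF


    class VolumeSnapshotTest(base.BaseVolumeTest):

        @classmethod
        def skip_checks(cls):
            super(VolumeSnapshotTest, cls).skip_checks()
            # Backends that cannot snapshot set this flag to False in
            # tempest.conf and the whole class skips itself.
            if not CONF.volume_feature_enabled.snapshot:
                raise cls.skipException("volume snapshots are not enabled")

        def test_snapshot_create_delete(self):
            pass  # test body elided for the sketch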


 - Some new feature added to an interface, in the presence of
 shims/decomposed drivers/plugins (e.g. adding TLS termination support to
 lbaas.) Those implementations will lag the feature commit, by definition.


Yeah, certainly I think this highlights some of the differences between
Cinder and Neutron, and the differences in complexity.
Thanks for the feedback... I don't disagree per se; however, Cinder is set
up a bit differently here in terms of expectations for base functionality
requirements and compatibility, but your points are definitely well taken.


 Thanks,
 doug


 On Mar 30, 2015, at 2:54 PM, John Griffith john.griffi...@gmail.com
 wrote:

 This may have already been raised/discussed, but I'm kinda confused, so I
 thought I'd ask on the ML here.  The whole point of third-party CI, as I
 recall, was to run the same tests that we run in the official Gate against
 third-party drivers.  To me that would imply that a CI system/device that
 marks itself as "GOOD" doesn't do things like add skips locally that aren't
 in the Tempest code already?

 In other words, it seems like cheating to say "My CI passes and all is good,
 except for the tests that don't work, which I skip... but pay no attention
 to those please."

 Did I miss something?  Isn't the whole point of third-party CI to
 demonstrate that a third party's backend is tested and functions to the
 same degree that the reference implementations do?  So the goal (using
 Cinder for example) was to be able to say that any API call that works on
 the LVM reference driver will work on the drivers listed in DriverLog, and
 that we know this because they run the same Tempest API tests?

 Don't get me wrong, I'm certainly not saying there's malice or that things
 should be marked as "no good"... but if the practice is to skip what you
 can't do, then maybe that should be documented in the DriverLog submission,
 as opposed to just stating "Yeah, we run CI successfully."

 Thanks,
 John
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Dev] [third-party-ci] Clarifications on the goal and skipping tests

2015-03-30 Thread John Griffith
This may have already been raised/discussed, but I'm kinda confused, so I
thought I'd ask on the ML here.  The whole point of third-party CI, as I
recall, was to run the same tests that we run in the official Gate against
third-party drivers.  To me that would imply that a CI system/device that
marks itself as "GOOD" doesn't do things like add skips locally that aren't
in the Tempest code already?

In other words, it seems like cheating to say "My CI passes and all is good,
except for the tests that don't work, which I skip... but pay no attention
to those please."

Did I miss something?  Isn't the whole point of third-party CI to demonstrate
that a third party's backend is tested and functions to the same degree that
the reference implementations do?  So the goal (using Cinder for example) was
to be able to say that any API call that works on the LVM reference driver
will work on the drivers listed in DriverLog, and that we know this because
they run the same Tempest API tests?

Don't get me wrong, I'm certainly not saying there's malice or that things
should be marked as "no good"... but if the practice is to skip what you
can't do, then maybe that should be documented in the DriverLog submission,
as opposed to just stating "Yeah, we run CI successfully."

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [third-party-ci] Clarifications on the goal and skipping tests

2015-03-30 Thread Doug Wiegley
A few reasons, I’m sure there are others:

- Broken tests that hardcode something about the ref implementation. The test 
needs to be fixed, of course, but in the meantime, a constantly failing CI is 
worthless (hello, lbaas scenario test.)
- Test relies on some “optional” feature, like overlapping IP subnets that the 
backend doesn’t support.  I’d argue it’s another case of broken tests if they 
require an optional feature, but it still needs skipping in the meantime.
- Some new feature added to an interface, in the presence of shims/decomposed 
drivers/plugins (e.g. adding TLS termination support to lbaas.) Those 
implementations will lag the feature commit, by definition.
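
For the "needs skipping in the meantime" cases above, one stop-gap on the CI
side is an explicit, reviewable skip list in which every entry must carry a
reason or bug link, rather than silent local patches. A minimal sketch of
that idea (the file name and format are invented for illustration):

    # Minimal sketch: filter a list of Tempest test IDs through a local skip
    # file whose entries each carry a reason, keeping temporary skips visible.
    from __future__ import print_function

    import re
    import sys


    def load_skips(path):
        # Each non-comment line: <regex> # <reason, ideally a bug link>
        skips = []
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                pattern, _, reason = line.partition("#")
                skips.append((re.compile(pattern.strip()), reason.strip()))
        return skips


    def filter_tests(test_ids, skips):
        kept = []
        for test_id in test_ids:
            reason = next((r for p, r in skips if p.search(test_id)), None)
            if reason:
                print("SKIP %s (%s)" % (test_id, reason), file=sys.stderr)
            else:
                kept.append(test_id)
        return kept


    if __name__ == "__main__":
        # Usage sketch: list the tests, pipe them through this filter, and
        # feed the surviving IDs back to the test runner.
        skips = load_skips("driver-skips.txt")
        for test_id in filter_tests([l.strip() for l in sys.stdin], skips):
            print(test_id)

The skip file itself then becomes an artifact that can be published alongside
the CI results, rather than something hidden inside the CI setup.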

Thanks,
doug


 On Mar 30, 2015, at 2:54 PM, John Griffith john.griffi...@gmail.com wrote:
 
 This may have already been raised/discussed, but I'm kinda confused, so I
 thought I'd ask on the ML here.  The whole point of third-party CI, as I
 recall, was to run the same tests that we run in the official Gate against
 third-party drivers.  To me that would imply that a CI system/device that
 marks itself as "GOOD" doesn't do things like add skips locally that aren't
 in the Tempest code already?

 In other words, it seems like cheating to say "My CI passes and all is good,
 except for the tests that don't work, which I skip... but pay no attention
 to those please."

 Did I miss something?  Isn't the whole point of third-party CI to
 demonstrate that a third party's backend is tested and functions to the
 same degree that the reference implementations do?  So the goal (using
 Cinder for example) was to be able to say that any API call that works on
 the LVM reference driver will work on the drivers listed in DriverLog, and
 that we know this because they run the same Tempest API tests?

 Don't get me wrong, I'm certainly not saying there's malice or that things
 should be marked as "no good"... but if the practice is to skip what you
 can't do, then maybe that should be documented in the DriverLog submission,
 as opposed to just stating "Yeah, we run CI successfully."
 
 Thanks,
 John
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [third-party-ci] Clarifications on the goal and skipping tests

2015-03-30 Thread Rochelle Grober
Top posting… I believe the main issue was a problem with snapshots that caused
false negatives for most Cinder drivers.  But that got fixed.  Unfortunately,
we haven’t yet established a good process to notify third parties when skipped
tests are fixed and should be “unskipped”.  Maybe tagging the tests can help
with this.  But I really do think this round was a bit of first-run gotchas and
rookie mistakes on all sides.  A good post-mortem on how to better communicate
changes and deadlines may go a long way toward smoothing these out in the next
round.

--Rocky

John Griffith on Monday, March 30, 2015 15:36 wrote:

On Mon, Mar 30, 2015 at 4:06 PM, Doug Wiegley doug...@parksidesoftware.com wrote:
A few reasons, I’m sure there are others:

- Broken tests that hardcode something about the ref implementation. The test 
needs to be fixed, of course, but in the meantime, a constantly failing CI is 
worthless (hello, lbaas scenario test.)
Certainly... but that's relatively easy to fix (bug/patch to Tempest).
Although that's not actually the case in this particular context, as there
are a handful of third-party devices that run the full set of tests that
the ref driver runs with no additional skips or modifications.

- Test relies on some “optional” feature, like overlapping IP subnets that the 
backend doesn’t support.  I’d argue it’s another case of broken tests if they 
require an optional feature, but it still needs skipping in the meantime.

This may be something specific to Neutron?  In Cinder, LVM is pretty much
the lowest common denominator.  I'm not aware of any volume tests in Tempest
that rely on optional features that don't pick that up automatically from
the config (like multi-backend, for example).

- Some new feature added to an interface, in the presence of shims/decomposed 
drivers/plugins (e.g. adding TLS termination support to lbaas.) Those 
implementations will lag the feature commit, by definition.

Yeah, certainly I think this highlights some of the differences between
Cinder and Neutron, and the differences in complexity.
Thanks for the feedback... I don't disagree per se; however, Cinder is set
up a bit differently here in terms of expectations for base functionality
requirements and compatibility, but your points are definitely well taken.

Thanks,
doug


On Mar 30, 2015, at 2:54 PM, John Griffith john.griffi...@gmail.com wrote:

This may have already been raised/discussed, but I'm kinda confused, so I
thought I'd ask on the ML here.  The whole point of third-party CI, as I
recall, was to run the same tests that we run in the official Gate against
third-party drivers.  To me that would imply that a CI system/device that
marks itself as "GOOD" doesn't do things like add skips locally that aren't
in the Tempest code already?

In other words, it seems like cheating to say "My CI passes and all is good,
except for the tests that don't work, which I skip... but pay no attention
to those please."

Did I miss something?  Isn't the whole point of third-party CI to demonstrate
that a third party's backend is tested and functions to the same degree that
the reference implementations do?  So the goal (using Cinder for example) was
to be able to say that any API call that works on the LVM reference driver
will work on the drivers listed in DriverLog, and that we know this because
they run the same Tempest API tests?

Don't get me wrong, I'm certainly not saying there's malice or that things
should be marked as "no good"... but if the practice is to skip what you
can't do, then maybe that should be documented in the DriverLog submission,
as opposed to just stating "Yeah, we run CI successfully."

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [third-party-ci] Clarifications on the goal and skipping tests

2015-03-30 Thread John Griffith
On Mon, Mar 30, 2015 at 7:26 PM, arkady_kanev...@dell.com wrote:

 Another scenario.

 The default LVM driver is local to the cinder service.  Thus, it may work
 fine there, but as soon as you go outside the controller node it does not.

 We had a discussion on choosing a different default driver and expect that
 discussion to continue.

 Not all drivers support all features.  We have a table that lists which
 features each driver supports.

 The question I would ask is: is the driver the right place to set which
 tests to skip?

 Why not specify it in the Tempest configuration that the driver runs
 against?

 Then we can set up rules for when drivers should remove themselves from
 that blackout list.

 That is easier to track, and can be cleanly used by DefCore and for
 tagging.



 Thanks,

 Arkady



 *From:* John Griffith [mailto:john.griffi...@gmail.com]
 *Sent:* Monday, March 30, 2015 8:12 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [OpenStack-Dev] [third-party-ci]
 Clarifications on the goal and skipping tests







 On Mon, Mar 30, 2015 at 6:21 PM, Rochelle Grober 
 rochelle.gro...@huawei.com wrote:

 Top posting… I believe the main issue was a problem with snapshots that
 caused false negatives for most cinder drivers.  But, that got fixed.
 Unfortunately, we haven’t yet established a good process to notify third
 parties when skipped tests are fixed and should be “unskipped”.  Maybe
 tagging the tests can help on this.  But, I really do think this round was
 a bit of first run gotchas and rookie mistakes on all sides.  A good post
 mortem on how to better communicate changes and deadlines may go a long way
 to smooth these out in the next round.



 --Rocky



 John Griffith on Monday, March 30, 2015 15:36 wrote:

 On Mon, Mar 30, 2015 at 4:06 PM, Doug Wiegley 
 doug...@parksidesoftware.com wrote:

 A few reasons, I’m sure there are others:



 - Broken tests that hardcode something about the ref implementation. The
 test needs to be fixed, of course, but in the meantime, a constantly
 failing CI is worthless (hello, lbaas scenario test.)

 Certainly... but that's relatively easy to fix (bug/patch to Tempest).
 Although that's not actually the case in this particular context, as there
 are a handful of third-party devices that run the full set of tests that
 the ref driver runs with no additional skips or modifications.



 - Test relies on some “optional” feature, like overlapping IP subnets that
 the backend doesn’t support.  I’d argue it’s another case of broken tests
 if they require an optional feature, but it still needs skipping in the
 meantime.



 This may be something specific to Neutron?  In Cinder, LVM is pretty much
 the lowest common denominator.  I'm not aware of any volume tests in
 Tempest that rely on optional features that don't pick that up
 automatically from the config (like multi-backend, for example).



 - Some new feature added to an interface, in the presence of
 shims/decomposed drivers/plugins (e.g. adding TLS termination support to
 lbaas.) Those implementations will lag the feature commit, by definition.



 Yeah, certainly I think this highlights some of the differences between
 Cinder and Neutron, and the differences in complexity.

 Thanks for the feedback... I don't disagree per se; however, Cinder is set
 up a bit differently here in terms of expectations for base functionality
 requirements and compatibility, but your points are definitely well taken.



 Thanks,

 doug





 On Mar 30, 2015, at 2:54 PM, John Griffith john.griffi...@gmail.com
 wrote:



 This may have already been raised/discussed, but I'm kinda confused, so I
 thought I'd ask on the ML here.  The whole point of third-party CI, as I
 recall, was to run the same tests that we run in the official Gate against
 third-party drivers.  To me that would imply that a CI system/device that
 marks itself as "GOOD" doesn't do things like add skips locally that aren't
 in the Tempest code already?

 In other words, it seems like cheating to say "My CI passes and all is good,
 except for the tests that don't work, which I skip... but pay no attention
 to those please."

 Did I miss something?  Isn't the whole point of third-party CI to
 demonstrate that a third party's backend is tested and functions to the
 same degree that the reference implementations do?  So the goal (using
 Cinder for example) was to be able to say that any API call that works on
 the LVM reference driver will work on the drivers listed in DriverLog, and
 that we know this because they run the same Tempest API tests?

 Don't get me wrong, I'm certainly not saying there's malice or that things
 should be marked as "no good"... but if the practice is to skip what you
 can't do, then maybe that should be documented in the DriverLog submission,
 as opposed to just stating "Yeah, we run CI successfully."



 Thanks,

 John


Re: [openstack-dev] [OpenStack-Dev] [third-party-ci] Clarifications on the goal and skipping tests

2015-03-30 Thread John Griffith
On Mon, Mar 30, 2015 at 6:21 PM, Rochelle Grober rochelle.gro...@huawei.com
 wrote:

  Top posting… I believe the main issue was a problem with snapshots that
 caused false negatives for most cinder drivers.  But, that got fixed.
 Unfortunately, we haven’t yet established a good process to notify third
 parties when skipped tests are fixed and should be “unskipped”.  Maybe
 tagging the tests can help on this.  But, I really do think this round was
 a bit of first run gotchas and rookie mistakes on all sides.  A good post
 mortem on how to better communicate changes and deadlines may go a long way
 to smooth these out in the next round.



 --Rocky



 John Griffith on Monday, March 30, 2015 15:36 wrote:

  On Mon, Mar 30, 2015 at 4:06 PM, Doug Wiegley 
 doug...@parksidesoftware.com wrote:

 A few reasons, I’m sure there are others:



 - Broken tests that hardcode something about the ref implementation. The
 test needs to be fixed, of course, but in the meantime, a constantly
 failing CI is worthless (hello, lbaas scenario test.)

 Certainly... but that's relatively easy to fix (bug/patch to Tempest).
 Although that's not actually the case in this particular context, as there
 are a handful of third-party devices that run the full set of tests that
 the ref driver runs with no additional skips or modifications.



  - Test relies on some “optional” feature, like overlapping IP subnets
 that the backend doesn’t support.  I’d argue it’s another case of broken
 tests if they require an optional feature, but it still needs skipping in
 the meantime.



 This may be something specific to Neutron?  In Cinder, LVM is pretty much
 the lowest common denominator.  I'm not aware of any volume tests in
 Tempest that rely on optional features that don't pick that up
 automatically from the config (like multi-backend, for example).



  - Some new feature added to an interface, in the presence of
 shims/decomposed drivers/plugins (e.g. adding TLS termination support to
 lbaas.) Those implementations will lag the feature commit, by definition.



 Yeah, certainly I think this highlights some of the differences between
 Cinder and Neutron, and the differences in complexity.

 Thanks for the feedback... I don't disagree per se; however, Cinder is set
 up a bit differently here in terms of expectations for base functionality
 requirements and compatibility, but your points are definitely well taken.



 Thanks,

 doug





   On Mar 30, 2015, at 2:54 PM, John Griffith john.griffi...@gmail.com
 wrote:



 This may have already been raised/discussed, but I'm kinda confused, so I
 thought I'd ask on the ML here.  The whole point of third-party CI, as I
 recall, was to run the same tests that we run in the official Gate against
 third-party drivers.  To me that would imply that a CI system/device that
 marks itself as "GOOD" doesn't do things like add skips locally that aren't
 in the Tempest code already?

 In other words, it seems like cheating to say "My CI passes and all is good,
 except for the tests that don't work, which I skip... but pay no attention
 to those please."

 Did I miss something?  Isn't the whole point of third-party CI to
 demonstrate that a third party's backend is tested and functions to the
 same degree that the reference implementations do?  So the goal (using
 Cinder for example) was to be able to say that any API call that works on
 the LVM reference driver will work on the drivers listed in DriverLog, and
 that we know this because they run the same Tempest API tests?

 Don't get me wrong, I'm certainly not saying there's malice or that things
 should be marked as "no good"... but if the practice is to skip what you
 can't do, then maybe that should be documented in the DriverLog submission,
 as opposed to just stating "Yeah, we run CI successfully."



 Thanks,

 John

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Not top posting...

 I believe the main issue was a problem with snapshots that caused false
 negatives for most cinder drivers.  But, that got fixed.

Huh?  What was the problem, where was the problem, who/what fixed it, was
there a bug logged somewhere, and what comprises *most* Cinder drivers?


Re: [openstack-dev] [OpenStack-Dev] [third-party-ci] Clarifications on the goal and skipping tests

2015-03-30 Thread Arkady_Kanevsky
Another scenario.
The default LVM driver is local to the cinder service.  Thus, it may work
fine there, but as soon as you go outside the controller node it does not.
We had a discussion on choosing a different default driver and expect that
discussion to continue.

Not all drivers support all features.  We have a table that lists which
features each driver supports.

The question I would ask is: is the driver the right place to set which tests
to skip?
Why not specify it in the Tempest configuration that the driver runs against?
Then we can set up rules for when drivers should remove themselves from that
blackout list.
That is easier to track, and can be cleanly used by DefCore and for tagging.
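
To sketch what that might look like (purely illustrative; no such mechanism
exists in Tempest or Cinder today): a single per-driver capability/blackout
table, with every entry tied to an open bug, consulted by a decorator that
tests use to declare the feature they need. The removal rule then falls out
naturally: once the bug is closed, the entry has to go.

    # Purely illustrative sketch of a central per-driver blackout table;
    # none of these names exist in Tempest or Cinder.
    import functools
    import unittest

    # One reviewable place listing features a driver lacks, each tied to a bug.
    DRIVER_FEATURE_GAPS = {
        "vendorx_iscsi": {
            "snapshot_of_attached_volume":
                "https://bugs.launchpad.net/cinder/+bug/1234567",  # hypothetical
        },
    }


    def requires_feature(driver, feature):
        # Skip the test when the blackout table says the driver lacks the feature.
        bug = DRIVER_FEATURE_GAPS.get(driver, {}).get(feature)

        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                if bug:
                    raise unittest.SkipTest("%s does not support %s yet (see %s)"
                                            % (driver, feature, bug))
                return func(*args, **kwargs)
            return wrapper
        return decorator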

Thanks,
Arkady

From: John Griffith [mailto:john.griffi...@gmail.com]
Sent: Monday, March 30, 2015 8:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [OpenStack-Dev] [third-party-ci] Clarifications on 
the goal and skipping tests



On Mon, Mar 30, 2015 at 6:21 PM, Rochelle Grober rochelle.gro...@huawei.com wrote:
Top posting… I believe the main issue was a problem with snapshots that caused 
false negatives for most cinder drivers.  But, that got fixed.  Unfortunately, 
we haven’t yet established a good process to notify third parties when skipped 
tests are fixed and should be “unskipped”.  Maybe tagging the tests can help on 
this.  But, I really do think this round was a bit of first run gotchas and 
rookie mistakes on all sides.  A good post mortem on how to better communicate 
changes and deadlines may go a long way to smooth these out in the next round.

--Rocky

John Griffith on Monday, March 30, 2015 15:36 wrote:
On Mon, Mar 30, 2015 at 4:06 PM, Doug Wiegley doug...@parksidesoftware.com wrote:
A few reasons, I’m sure there are others:

- Broken tests that hardcode something about the ref implementation. The test 
needs to be fixed, of course, but in the meantime, a constantly failing CI is 
worthless (hello, lbaas scenario test.)
Certainly... but that's relatively easy to fix (bug/patch to Tempest).
Although that's not actually the case in this particular context, as there
are a handful of third-party devices that run the full set of tests that
the ref driver runs with no additional skips or modifications.

- Test relies on some “optional” feature, like overlapping IP subnets that the 
backend doesn’t support.  I’d argue it’s another case of broken tests if they 
require an optional feature, but it still needs skipping in the meantime.

This may be something specific to Neutron?  In Cinder, LVM is pretty much
the lowest common denominator.  I'm not aware of any volume tests in Tempest
that rely on optional features that don't pick that up automatically from
the config (like multi-backend, for example).

- Some new feature added to an interface, in the presence of shims/decomposed 
drivers/plugins (e.g. adding TLS termination support to lbaas.) Those 
implementations will lag the feature commit, by definition.

Yeah, certainly I think this highlights some of the differences between
Cinder and Neutron, and the differences in complexity.
Thanks for the feedback... I don't disagree per se; however, Cinder is set
up a bit differently here in terms of expectations for base functionality
requirements and compatibility, but your points are definitely well taken.

Thanks,
doug


On Mar 30, 2015, at 2:54 PM, John Griffith john.griffi...@gmail.com wrote:

This may have already been raised/discussed, but I'm kinda confused, so I
thought I'd ask on the ML here.  The whole point of third-party CI, as I
recall, was to run the same tests that we run in the official Gate against
third-party drivers.  To me that would imply that a CI system/device that
marks itself as "GOOD" doesn't do things like add skips locally that aren't
in the Tempest code already?

In other words, it seems like cheating to say "My CI passes and all is good,
except for the tests that don't work, which I skip... but pay no attention
to those please."

Did I miss something?  Isn't the whole point of third-party CI to demonstrate
that a third party's backend is tested and functions to the same degree that
the reference implementations do?  So the goal (using Cinder for example) was
to be able to say that any API call that works on the LVM reference driver
will work on the drivers listed in DriverLog, and that we know this because
they run the same Tempest API tests?

Don't get me wrong, I'm certainly not saying there's malice or that things
should be marked as "no good"... but if the practice is to skip what you
can't do, then maybe that should be documented in the DriverLog submission,
as opposed to just stating "Yeah, we run CI successfully."

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev