Re: [openstack-dev] [OpenStack-Dev] Third party testing

2014-01-17 Thread Robert Collins
On 16 January 2014 14:51, John Griffith john.griff...@solidfire.com wrote:
 On Wed, Jan 15, 2014 at 6:41 PM, Michael Still mi...@stillhq.com wrote:
 John -- I agree with you entirely here. My concern is more that I
 think the CI tests need to run more frequently than weekly.

 Completely agree, but I guess in essence to start these aren't really
 CI tests.  Instead it's just a public health report for the various
 drivers vendors provide.  I'd love to see a higher frequency, but some
 of us don't have the infrastructure to try and run a test against
 every commit.  Anyway, I think there's HUGE potential for growth and
 adjustment as we go along.  I'd like to get something in place to
 solve the immediate problem first though.

You say you don't have the infrastructure - what's missing? What if you
only ran against commits in the cinder trees?
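
A minimal sketch of what restricting runs to Cinder changes could look
like, assuming the standard Gerrit stream-events interface; the SSH
account details and the run_backend_tests() hook are placeholders:

    #!/usr/bin/env python3
    # Rough sketch (not existing project tooling): watch the Gerrit event
    # stream and only kick off a vendor backend run for openstack/cinder
    # changes.  Host/port are the public OpenStack Gerrit defaults; SSH
    # auth details are omitted and run_backend_tests() is a placeholder.
    import json
    import subprocess

    STREAM_CMD = ['ssh', '-p', '29418', 'review.openstack.org',
                  'gerrit', 'stream-events']

    def run_backend_tests(change):
        # Placeholder: deploy the change in the lab and run the volume tests.
        print('would test change in %s' % change.get('project'))

    proc = subprocess.Popen(STREAM_CMD, stdout=subprocess.PIPE)
    for line in proc.stdout:
        event = json.loads(line.decode())
        if event.get('type') != 'patchset-created':
            continue
        change = event.get('change', {})
        if change.get('project') == 'openstack/cinder':
            run_backend_tests(change)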

 To be honest I'd even be thrilled just to see every vendor publish a
 passing run against each milestone cut.  That in and of itself would
 be a huge step in the right direction in my opinion.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Third party testing

2014-01-17 Thread John Griffith
On Fri, Jan 17, 2014 at 1:15 AM, Robert Collins
robe...@robertcollins.net wrote:
 On 16 January 2014 14:51, John Griffith john.griff...@solidfire.com wrote:
 On Wed, Jan 15, 2014 at 6:41 PM, Michael Still mi...@stillhq.com wrote:
 John -- I agree with you entirely here. My concern is more that I
 think the CI tests need to run more frequently than weekly.

 Completely agree, but I guess in essence to start these aren't really
 CI tests.  Instead it's just a public health report for the various
 drivers vendors provide.  I'd love to see a higher frequency, but some
 of us don't have the infrastructure to try and run a test against
 every commit.  Anyway, I think there's HUGE potential for growth and
 adjustment as we go along.  I'd like to get something in place to
 solve the immediate problem first though.

 You say you don't have the infrastructure - what's missing? What if you
 only ran against commits in the cinder trees?

Maybe this is going a bit sideways, but my point was that making a
first step of getting periodic runs on vendor gear and publicly
submitting those results would be a good starting point and a
SIGNIFICANT improvement over what we have today.

It seems to me that requiring every vendor to have a dedicated
in-house deployment reserved 24/7 might be a tough order right out of
the gate.  That being said, of course I'm willing and able to do that
for my employer, but feedback from others hasn't been quite so
amiable.

The feedback here seems significant enough that maybe gating every
change is the way to go though.  I'm certainly willing to opt in to
that model and get things off the ground.  I do have a couple of
concerns (number 3 being the most significant):

1. I don't want ANY commit/patch waiting for a vendor's infrastructure
to run a test.  We would definitely need a timeout mechanism or
something along those lines to ensure none of this disrupts the gate (a
rough sketch of such a guard follows below this list).

2. Isolating this to changes in Cinder seems fine, the intent was
mostly a compatibility / features check.  This takes it up a notch and
allows us to detect when something breaks right away which is
certainly a good thing.

3. Support and maintenance is a concern here.  We have a first rate
community that ALL pull together to make our gating and infrastructure
work in OpenStack.  Even with that it's still hard for everybody to
keep up due to the number of projects and simply the volume of patches that
go in on a daily basis.  There's no way I could do my regular jobs
that I'm already doing AND maintain my own fork/install of the
OpenStack gating infrastructure.

4. Despite all of the heavyweight corporations throwing resource after
resource at OpenStack, keep in mind that it is still an Open Source
community.  I don't want to do ANYTHING that would make it at all
unfriendly to folks who would like to commit.  Keep in mind that
vendors here aren't necessarily all large corporations, or even all
paid-for proprietary products.  There are open source storage drivers
in Cinder, for example, and they may or may not have the
resources to make this happen, but that doesn't mean they shouldn't be
allowed to have code in OpenStack.
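
A minimal sketch of the timeout guard described in point 1; the
deadline and the reporting are illustrative, not project policy:

    #!/usr/bin/env python3
    # Run the vendor test job with a hard deadline so a slow or
    # unreachable vendor system can never hold up the gate.  The deadline
    # value and result strings are illustrative assumptions.
    import subprocess
    import sys

    DEADLINE = 90 * 60  # seconds

    def run_vendor_job(cmd):
        try:
            result = subprocess.run(cmd, timeout=DEADLINE)
        except subprocess.TimeoutExpired:
            # Report a non-blocking result instead of wedging the pipeline.
            return 'TIMED_OUT (non-voting, does not block the gate)'
        return 'SUCCESS' if result.returncode == 0 else 'FAILURE'

    if __name__ == '__main__':
        print(run_vendor_job(sys.argv[1:]))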

The problem I see is that there are drivers/devices
that flat out don't work and end users (heck even some vendors that
choose not to test) don't know this until they've purchased a bunch of
gear and tried to deploy their cloud.  What I was initially proposing
here was just a more formal public and community representation of
whether a device works as it's advertised or not.

Please keep in mind that my proposal here was a first step sort of
test case.  Rather than start with something HUGE like deploying the
OpenStack CI in every vendor's lab to test every commit (and I'm sorry
for those that don't agree but that does seem like a SIGNIFICANT
undertaking), why not take incremental steps to make things better and
learn as we go along?



 To be honest I'd even be thrilled just to see every vendor publish a
 passing run against each milestone cut.  That in and of itself would
 be a huge step in the right direction in my opinion.

 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Third party testing

2014-01-17 Thread Robert Collins
On 18 January 2014 06:42, John Griffith john.griff...@solidfire.com wrote:
 On Fri, Jan 17, 2014 at 1:15 AM, Robert Collins
 robe...@robertcollins.net wrote:

 Maybe this is going a bit sideways, but my point was that making a
 first step of getting periodic runs on vendor gear and publicly
 submitting those results would be a good starting point and a
 SIGNIFICANT improvement over what we have today.

 It seems to me that requiring every vendor to have a dedicated
 in-house deployment reserved 24/7 might be a tough order right out of
 the gate.  That being said, of course I'm willing and able to do that
 for my employer, but feedback from others hasn't been quite so
 amiable.

 The feedback here seems significant enough that maybe gating every
 change is the way to go though.  I'm certainly willing to opt in to
 that model and get things off the ground.  I do have a couple of
 concerns (number 3 being the most significant):

 1. I don't want ANY commit/patch waiting for a vendor's infrastructure
 to run a test.  We would definitely need a timeout mechanism or
 something along those lines to ensure none of this disrupts the gate.

 2. Isolating this to changes in Cinder seems fine, the intent was
 mostly a compatibility / features check.  This takes it up a notch and
 allows us to detect when something breaks right away which is
 certainly a good thing.

 3. Support and maintenance is a concern here.  We have a first rate
 community that ALL pull together to make our gating and infrastructure
 work in OpenStack.  Even with that it's still hard for everybody to
 keep up due to the number of projects and simply the volume of patches that
 go in on a daily basis.  There's no way I could do my regular jobs
 that I'm already doing AND maintain my own fork/install of the
 OpenStack gating infrastructure.

 4. Despite all of the heavyweight corporations throwing resource after
 resource at OpenStack, keep in mind that it is still an Open Source
 community.  I don't want to do ANYTHING that would make it at all
 unfriendly to folks who would like to commit.  Keep in mind that
 vendors here aren't necessarily all large corporations, or even all
 paid-for proprietary products.  There are open source storage drivers
 in Cinder, for example, and they may or may not have the
 resources to make this happen, but that doesn't mean they shouldn't be
 allowed to have code in OpenStack.

 The problem I see is that there are drivers/devices
 that flat out don't work and end users (heck even some vendors that
 choose not to test) don't know this until they've purchased a bunch of
 gear and tried to deploy their cloud.  What I was initially proposing
 here was just a more formal public and community representation of
 whether a device works as it's advertised or not.

 Please keep in mind that my proposal here was a first step sort of
 test case.  Rather than start with something HUGE like deploying the
 OpenStack CI in every vendor's lab to test every commit (and I'm sorry
 for those that don't agree but that does seem like a SIGNIFICANT
 undertaking), why not take incremental steps to make things better and
 learn as we go along?

Certainly - I totally agree that anything > nothing. I was asking
about your statement of not having enough infra to get a handle on
what would block things. As you know, tripleo is running up a
production quality test cloud to test tripleo, Ironic and once we get
everything in place - multinode gating jobs. We're *super* interested
in making the bar to increased validation as low as possible.

I broadly agree with your points 1 through 4, of course!

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Third party testing

2014-01-17 Thread John Griffith
On Fri, Jan 17, 2014 at 6:24 PM, Robert Collins
robe...@robertcollins.net wrote:
 On 18 January 2014 06:42, John Griffith john.griff...@solidfire.com wrote:
 On Fri, Jan 17, 2014 at 1:15 AM, Robert Collins
 robe...@robertcollins.net wrote:

 Maybe this is going a bit sideways, but my point was that making a
 first step of getting periodic runs on vendor gear and publicly
 submitting those results would be a good starting point and a
 SIGNIFICANT improvement over what we have today.

 It seems to me that requiring every vendor to have a dedicated
 in-house deployment reserved 24/7 might be a tough order right out of
 the gate.  That being said, of course I'm willing and able to do that
 for my employer, but feedback from others hasn't been quite so
 amiable.

 The feedback here seems significant enough that maybe gating every
 change is the way to go though.  I'm certainly willing to opt in to
 that model and get things off the ground.  I do have a couple of
 concerns (number 3 being the most significant):

 1. I don't want ANY commit/patch waiting for a vendor's infrastructure
 to run a test.  We would definitely need a timeout mechanism or
 something along those lines to ensure none of this disrupts the gate.

 2. Isolating this to changes in Cinder seems fine, the intent was
 mostly a compatibility / features check.  This takes it up a notch and
 allows us to detect when something breaks right away which is
 certainly a good thing.

 3. Support and maintenance is a concern here.  We have a first rate
 community that ALL pull together to make our gating and infrastructure
 work in OpenStack.  Even with that it's still hard for everybody to
 keep up due to the number of projects and simply the volume of patches that
 go in on a daily basis.  There's no way I could do my regular jobs
 that I'm already doing AND maintain my own fork/install of the
 OpenStack gating infrastructure.

 4. Despite all of the heavyweight corporations throwing resource after
 resource at OpenStack, keep in mind that it is still an Open Source
 community.  I don't want to do ANYTHING that would make it at all
 unfriendly to folks who would like to commit.  Keep in mind that
 vendors here aren't necessarily all large corporations, or even all
 paid-for proprietary products.  There are open source storage drivers
 in Cinder, for example, and they may or may not have the
 resources to make this happen, but that doesn't mean they shouldn't be
 allowed to have code in OpenStack.

 The problem I see is that there are drivers/devices
 that flat out don't work and end users (heck even some vendors that
 choose not to test) don't know this until they've purchased a bunch of
 gear and tried to deploy their cloud.  What I was initially proposing
 here was just a more formal public and community representation of
 whether a device works as it's advertised or not.

 Please keep in mind that my proposal here was a first step sort of
 test case.  Rather than start with something HUGE like deploying the
 OpenStack CI in every vendor's lab to test every commit (and I'm sorry
 for those that don't agree but that does seem like a SIGNIFICANT
 undertaking), why not take incremental steps to make things better and
 learn as we go along?

 Certainly - I totally agree that anything > nothing. I was asking
 about your statement of not having enough infra to get a handle on
 what would block things. As you know, tripleo is running up a

Sorry, got carried away and didn't really answer your question about
resources clearly.  My point about resources was in terms of
man-power, dedicated hardware, networking and all of the things that
go along with spinning up tests on every commit and archiving the
results.  I would definitely like to do this, but first I'd like to
see something that every backend driver maintainer can do at least at
each milestone.

 production quality test cloud to test tripleo, Ironic and once we get
 everything in place - multinode gating jobs. We're *super* interested
 in making the bar to increased validation as low as possible.

We should chat in IRC about approaches here and see if we can align.
For the record, HP's resources are vastly different from, say, a small
startup storage vendor's or an open-source storage software stack's.

By the way, maybe you can point me to what tripleo is doing; looking
in gerrit I see the jenkins gate-noop and the docs job, but that's
all I'm seeing?


 I broadly agree with your points 1 through 4, of course!

 -Rob


 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Bottom line, I appreciate your feedback and comments; it's generated
some new thoughts on this subject for me to ponder over the weekend.

Thanks,
John

___
OpenStack-dev 

Re: [openstack-dev] [OpenStack-Dev] Third party testing

2014-01-17 Thread Robert Collins
On 18 January 2014 16:31, John Griffith john.griff...@solidfire.com wrote:
 On Fri, Jan 17, 2014 at 6:24 PM, Robert Collins
 robe...@robertcollins.net wrote:

 Certainly - I totally agree that anything > nothing. I was asking
 about your statement of not having enough infra to get a handle on
 what would block things. As you know, tripleo is running up a

 Sorry, got carried away and didn't really answer your question about
 resources clearly.

LOL, np.

  My point about resources was in terms of
 man-power, dedicated hardware, networking and all of the things that
 go along with spinning up tests on every commit and archiving the
 results.  I would definitely like to do this, but first I'd like to
 see something that every backend driver maintainer can do at least at
 each milestone.

 production quality test cloud to test tripleo, Ironic and once we get
 everything in place - multinode gating jobs. We're *super* interested
 in making the bar to increased validation as low as possible.

 We should chat in IRC about approaches here and see if we can align.
 For the record, HP's resources are vastly different from, say, a small
 startup storage vendor's or an open-source storage software stack's.

Yeah, I'm aware :/. For open source stacks, my hope is that the
hardware contributed by HP, Red Hat, etc. will permit us to test them
in the set of permutations we end up testing.

 By the way, maybe you can point me to what tripleo is doing; looking
 in gerrit I see the jenkins gate-noop and the docs job, but that's
 all I'm seeing?

https://wiki.openstack.org/wiki/TripleO/TripleOCloud and
https://wiki.openstack.org/wiki/TripleO/TripleOCloud/Regions
and https://etherpad.openstack.org/p/tripleo-test-cluster

Cheers,
Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Dev] Third party testing

2014-01-15 Thread John Griffith
Hey Everyone,

A while back I started talking about this idea of requiring Cinder
driver contributors to run a super simple cert script (some info here:
[1]).  Since then I've been playing with introduction of a third party
gate check here in my own lab.  My proposal was to have a non-voting
check that basically duplicates the base devstack gate test in my lab,
but uses different back-end devices that I have available configured
in Cinder to run periodic tests against.  Long term I'd like to be
able to put this gear to use doing something more useful for the
overall OpenStack gating effort, but to start it's strictly an
automated verification of my Cinder driver/backend.
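
As a rough illustration of that kind of periodic, non-voting backend
check (the cinder.conf stanza, driver path and test invocation below
are assumptions, not the actual cert script):

    #!/usr/bin/env python3
    # Sketch: a devstack node with the vendor backend enabled in
    # cinder.conf, running the tempest volume API tests on a schedule.
    import subprocess

    # cinder.conf fragment the lab node would carry (placeholder values):
    #
    #   [DEFAULT]
    #   enabled_backends = vendor-1
    #
    #   [vendor-1]
    #   volume_backend_name = vendor-1
    #   volume_driver = cinder.volume.drivers.example.ExampleISCSIDriver
    #   # vendor-specific credentials / management address go here

    def run_volume_tests():
        # Run just the volume API tests from the tempest checkout
        # (path and test-runner invocation assumed).
        proc = subprocess.run(['testr', 'run', 'tempest.api.volume'],
                              cwd='/opt/stack/tempest')
        return proc.returncode == 0

    if __name__ == '__main__':
        # Something like cron would invoke this nightly or weekly.
        print('backend check: %s' % ('PASS' if run_volume_tests() else 'FAIL'))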

What I'm questioning is how to report this information and the
results.  Currently patches and reviews are our mechanism for
triggering tests and providing feedback.  Myself and many other
vendors that might like to participate in something like this
obviously don't have the infrastructure to try and run something like
this on every single commit.  Also since it would be non-voting it's
difficult to capture and track the results.

One idea that I had was to set something like what I've described
above to run locally on a periodic basis (weekly, nightly etc) and
publish results to something like a third party verification
dashboard.  So the idea would be that results from various third
party tests would all adhere to a certain set of criteria WRT what
they do and what they report, and those results would be logged and
tracked publicly for anybody in the OpenStack community to access and
view?
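
One possible shape for that shared reporting format, sketched only;
the field names and the dashboard URL are invented for illustration
and are not an agreed community format:

    #!/usr/bin/env python3
    # Each third-party run publishes a small, uniform result record to a
    # public dashboard.  Every value below is a placeholder.
    import datetime
    import requests

    DASHBOARD_URL = 'https://example.org/third-party-results'  # hypothetical

    result = {
        'vendor': 'ExampleStorage',
        'driver': 'cinder.volume.drivers.example.ExampleDriver',
        'tested_ref': 'master as of the run date',
        'suite': 'tempest.api.volume',
        'status': 'PASS',                                   # PASS / FAIL
        'log_url': 'https://example.org/logs/run-42/',
        'finished_at': datetime.datetime.utcnow().isoformat() + 'Z',
    }

    # Publish the record; a real setup would also archive the full logs.
    requests.post(DASHBOARD_URL, json=result, timeout=30)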

Does this seem like something that others would be interested in
participating in?  I think it's extremely valuable for projects like
Cinder that have dozens of backend devices, and regardless of other
interest or participation in the community, I intend to implement
something like this on my own.  It would just be
interesting to see if we could have an organized and official effort
to gather this sort of information and run these types of tests.

Open to suggestions and thoughts, as well as to hearing from any of
you who may already be doing this sort of thing.  By the way, I've
been looking at
things like SmokeStack and other third party gating checks to get some
ideas as well.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Third party testing

2014-01-15 Thread Sankarshan Mukhopadhyay
On Thu, Jan 16, 2014 at 3:58 AM, John Griffith
john.griff...@solidfire.com wrote:
 A while back I started talking about this idea of requiring Cinder
 driver contributors to run a super simple cert script (some info here:
 [1]).

Could you provide the link which [1] refers to?


-- 
sankarshan mukhopadhyay
https://twitter.com/#!/sankarshan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Third party testing

2014-01-15 Thread Michael Still
On Thu, Jan 16, 2014 at 6:28 AM, John Griffith
john.griff...@solidfire.com wrote:
 Hey Everyone,

 A while back I started talking about this idea of requiring Cinder
 driver contributors to run a super simple cert script (some info here:
 [1]).  Since then I've been playing with introduction of a third party
 gate check here in my own lab.  My proposal was to have a non-voting
 check that basically duplicates the base devstack gate test in my lab,
 but uses different back-end devices that I have available configured
 in Cinder to run periodic tests against.  Long term I'd like to be
 able to put this gear to use doing something more useful for the
 overall OpenStack gating effort, but to start it's strictly an
 automated verification of my Cinder driver/backend.

 What I'm questioning is how to report this information and the
 results.  Currently patches and reviews are our mechanism for
 triggering tests and providing feedback.  Myself and many other
 vendors that might like to participate in something like this
 obviously don't have the infrastructure to try and run something like
 this on every single commit.  Also since it would be non-voting it's
 difficult to capture and track the results.

 One idea that I had was to set something like what I've described
 above to run locally on a periodic basis (weekly, nightly etc) and
 publish results to something like a third party verification
 dashboard.  So the idea would be that results from various third
 party tests would all adhere to a certain set of criteria WRT what
 they do and what they report  and those results would be logged and
 tracked publicly for anybody in the OpenStack community to access and
 view?

My concern here is how to identify what patch broke the third party
thing. If you run this once a week, then there are possibly hundreds
of patches which might be responsible. How do you identify which one
is the winner?
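
One way a periodic job could narrow that down after a failing run is
an automated bisect between the last passing commit and the failing
tip; a sketch with placeholder paths, SHAs and test command:

    #!/usr/bin/env python3
    # Bisect the week's commits, re-running the backend smoke test at
    # each step until the offending change is identified.
    import subprocess

    REPO = '/opt/stack/cinder'             # placeholder checkout
    LAST_GOOD = '<sha-of-last-passing-run>'
    FIRST_BAD = 'HEAD'
    TEST_CMD = './run_backend_smoke.sh'    # placeholder vendor test script

    def git(*args):
        subprocess.run(['git', '-C', REPO] + list(args), check=True)

    git('bisect', 'start', FIRST_BAD, LAST_GOOD)
    # 'git bisect run' replays the test command and uses its exit status
    # to mark each step good or bad until the culprit commit is found.
    git('bisect', 'run', TEST_CMD)
    git('bisect', 'reset')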

 Does this seem like something that others would be interested in
 participating in?  I think it's extremely valuable for projects like
 Cinder that have dozens of backend devices, and regardless of other
 interest or participation in the community, I intend to implement
 something like this on my own.  It would just be
 interesting to see if we could have an organized and official effort
 to gather this sort of information and run these types of tests.

 Open to suggestions and thoughts, as well as to hearing from any of
 you who may already be doing this sort of thing.  By the way, I've
 been looking at
 things like SmokeStack and other third party gating checks to get some
 ideas as well.

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Third party testing

2014-01-15 Thread John Griffith
On Wed, Jan 15, 2014 at 5:39 PM, Sankarshan Mukhopadhyay
sankarshan.mukhopadh...@gmail.com wrote:
 On Thu, Jan 16, 2014 at 3:58 AM, John Griffith
 john.griff...@solidfire.com wrote:
 A while back I started talking about this idea of requiring Cinder
 driver contributors to run a super simple cert script (some info here:
 [1]).

 Could you provide the link which [1] refers to?


Sorry about that:
http://lists.openstack.org/pipermail/openstack-dev/2013-December/022925.html


 --
 sankarshan mukhopadhyay
 https://twitter.com/#!/sankarshan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Third party testing

2014-01-15 Thread Michael Still
John -- I agree with you entirely here. My concern is more that I
think the CI tests need to run more frequently than weekly.

Michael

On Thu, Jan 16, 2014 at 9:30 AM, John Griffith
john.griff...@solidfire.com wrote:
 On Wed, Jan 15, 2014 at 6:03 PM, Michael Still mi...@stillhq.com wrote:
 On Thu, Jan 16, 2014 at 6:28 AM, John Griffith
 john.griff...@solidfire.com wrote:
 Hey Everyone,

 A while back I started talking about this idea of requiring Cinder
 driver contributors to run a super simple cert script (some info here:
 [1]).  Since then I've been playing with introduction of a third party
 gate check here in my own lab.  My proposal was to have a non-voting
 check that basically duplicates the base devstack gate test in my lab,
 but uses different back-end devices that I have available configured
 in Cinder to run periodic tests against.  Long term I'd like to be
 able to put this gear to use doing something more useful for the
 overall OpenStack gating effort, but to start it's strictly an
 automated verification of my Cinder driver/backend.

 What I'm questioning is how to report this information and the
 results.  Currently patches and reviews are our mechanism for
 triggering tests and providing feedback.  Myself and many other
 vendors that might like to participate in something like this
 obviously don't have the infrastructure to try and run something like
 this on every single commit.  Also since it would be non-voting it's
 difficult to capture and track the results.

 One idea that I had was to set something like what I've described
 above to run locally on a periodic basis (weekly, nightly etc) and
 publish results to something like a third party verification
 dashboard.  So the idea would be that results from various third
 party tests would all adhere to a certain set of criteria WRT what
 they do and what they report  and those results would be logged and
 tracked publicly for anybody in the OpenStack community to access and
 view?

 My concern here is how to identify what patch broke the third party
 thing. If you run this once a week, then there are possibly hundreds
 of patches which might be responsible. How do you identify which one
 is the winner?

 To be honest I'd like to see more than once a week; however, the main
 point of this is to have public testing of third party drivers.
 Currently we say "it's in trunk and passed review and unit tests, so
 you're good to go."  Frankly that's not sufficient; there needs to be
 some sort of public testing that shows that a product/config
 actually works in the minimum sense at least.  This won't address
 things like a bad patch breaking things, but again, Cinder's case is a
 bit different: it is designed more to show compatibility and
 integration completeness.  If a patch goes in and breaks a vendor's
 driver but not the reference implementation, that means the vendor has
 work to do to bring their driver up to date.

 Cinder is not a dumping ground; the drivers in the code base should not
 be static but require continued maintenance and development as the
 project grows.

 Non-voting tests on every patch seem unrealistic; however, there's no
 reason that, if vendors have the resources, they couldn't do that if
 they so choose.


 Does this seem like something that others would be interested in
 participating in?  I think it's extremely valuable for projects like
 Cinder that have dozens of backend devices, and regardless of other
 interest or participation in the community, I intend to implement
 something like this on my own.  It would just be
 interesting to see if we could have an organized and official effort
 to gather this sort of information and run these types of tests.

 Open to suggestions and thoughts, as well as to hearing from any of
 you who may already be doing this sort of thing.  By the way, I've
 been looking at
 things like SmokeStack and other third party gating checks to get some
 ideas as well.

 Michael

 --
 Rackspace Australia

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Third party testing

2014-01-15 Thread John Griffith
On Wed, Jan 15, 2014 at 6:41 PM, Michael Still mi...@stillhq.com wrote:
 John -- I agree with you entirely here. My concern is more that I
 think the CI tests need to run more frequently than weekly.

Completely agree, but I guess in essence to start these aren't really
CI tests.  Instead it's just a public health report for the various
drivers vendors provide.  I'd love to see a higher frequency, but some
of us don't have the infrastructure to try and run a test against
every commit.  Anyway, I think there's HUGE potential for growth and
adjustment as we go along.  I'd like to get something in place to
solve the immediate problem first though.

To be honest I'd even be thrilled just to see every vendor publish a
passing run against each milestone cut.  That in and of itself would
be a huge step in the right direction in my opinion.


 Michael

 On Thu, Jan 16, 2014 at 9:30 AM, John Griffith
 john.griff...@solidfire.com wrote:
 On Wed, Jan 15, 2014 at 6:03 PM, Michael Still mi...@stillhq.com wrote:
 On Thu, Jan 16, 2014 at 6:28 AM, John Griffith
 john.griff...@solidfire.com wrote:
 Hey Everyone,

 A while back I started talking about this idea of requiring Cinder
 driver contributors to run a super simple cert script (some info here:
 [1]).  Since then I've been playing with introduction of a third party
 gate check here in my own lab.  My proposal was to have a non-voting
 check that basically duplicates the base devstack gate test in my lab,
 but uses different back-end devices that I have available configured
 in Cinder to run periodic tests against.  Long term I'd like to be
 able to put this gear to use doing something more useful for the
 overall OpenStack gating effort, but to start it's strictly an
 automated verification of my Cinder driver/backend.

 What I'm questioning is how to report this information and the
 results.  Currently patches and reviews are our mechanism for
 triggering tests and providing feedback.  Myself and many other
 vendors that might like to participate in something like this
 obviously don't have the infrastructure to try and run something like
 this on every single commit.  Also since it would be non-voting it's
 difficult to capture and track the results.

 One idea that I had was to set something like what I've described
 above to run locally on a periodic basis (weekly, nightly etc) and
 publish results to something like a third party verification
 dashboard.  So the idea would be that results from various third
 party tests would all adhere to a certain set of criteria WRT what
 they do and what they report  and those results would be logged and
 tracked publicly for anybody in the OpenStack community to access and
 view?

 My concern here is how to identify what patch broke the third party
 thing. If you run this once a week, then there are possibly hundreds
 of patches which might be responsible. How do you identify which one
 is the winner?

 To be honest I'd like to see more than once a week; however, the main
 point of this is to have public testing of third party drivers.
 Currently we say "it's in trunk and passed review and unit tests, so
 you're good to go."  Frankly that's not sufficient; there needs to be
 some sort of public testing that shows that a product/config
 actually works in the minimum sense at least.  This won't address
 things like a bad patch breaking things, but again, Cinder's case is a
 bit different: it is designed more to show compatibility and
 integration completeness.  If a patch goes in and breaks a vendor's
 driver but not the reference implementation, that means the vendor has
 work to do to bring their driver up to date.

 Cinder is not a dumping ground; the drivers in the code base should not
 be static but require continued maintenance and development as the
 project grows.

 Non-voting tests on every patch seem unrealistic; however, there's no
 reason that, if vendors have the resources, they couldn't do that if
 they so choose.


 Does this seem like something that others would be interested in
 participating in?  I think it's extremely valuable for projects like
 Cinder that have dozens of backend devices, and regardless of other
 interest or participation in the community, I intend to implement
 something like this on my own.  It would just be
 interesting to see if we could have an organized and official effort
 to gather this sort of information and run these types of tests.

 Open to suggestions and thoughts, as well as to hearing from any of
 you who may already be doing this sort of thing.  By the way, I've
 been looking at
 things like SmokeStack and other third party gating checks to get some
 ideas as well.

 Michael

 --
 Rackspace Australia

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list