Re: [openstack-dev] [TripleO] Future CI jobs

2014-08-21 Thread Giulio Fidente

On 08/20/2014 07:35 PM, Gregory Haynes wrote:

Excerpts from Derek Higgins's message of 2014-08-20 09:06:48 +:

On 19/08/14 20:58, Gregory Haynes wrote:

Excerpts from Giulio Fidente's message of 2014-08-19 12:07:53 +:

One last comment, maybe a bit OT, but I'm raising it here to see what
other people think: how about we modify the -ha job so that at
some point we actually kill one of the controllers and spawn a second
user image?


I think this is a great long-term goal, but IMO performing an update
isn't really the type of verification we want for this kind of test. We
really should have some minimal tempest testing in place first, so we can
verify that when these types of failures occur our cloud remains in a
functioning state.


Greg, when you said 'performing an update', did you mean killing a
controller node?

If so I agree: verifying our cloud is still in working order with
tempest would get us more coverage than spawning a node. So once we have
tempest in place we can add a test to kill a controller node.



Ah, I misread the original message a bit, but it sounds like we're all on
the same page.


I don't see why we should wait for tempest to be added before
introducing the node-kill step.


I understand that for a view of the overall status tempest is the tool we
need, but today we rely on a small, short scenario: we boot a guest from
a volume and assign it a floating IP.


I think we can continue to rely on this and also introduce the node-kill
step, without interfering with the work needed to put tempest into the cycle.
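To make the idea concrete, the extra step could be as small as the sketch below. Everything here is illustrative: the node selection, the wait time, and the client invocations are assumptions, not taken from the actual toci job scripts.

```shell
# Hypothetical sketch of the extra -ha CI step: kill one controller,
# then re-run the existing short scenario against the degraded cloud.
# All names and flags below are illustrative assumptions.

# 1. Pick one overcloud controller and power it off via the host cloud
CONTROLLER=$(nova list | awk '/controller/ {print $2; exit}')
nova stop "$CONTROLLER"

# 2. Give the remaining controllers time to fail over
sleep 120

# 3. Re-run the existing smoke scenario: boot a guest from a volume
#    and assign it a floating IP ($FLOATING_IP assumed allocated earlier)
VOL=$(cinder create 1 --display-name ha-test | awk '/ id / {print $4}')
nova boot ha-test-guest --flavor baremetal \
  --block-device-mapping "vda=${VOL}:::0"
nova floating-ip-associate ha-test-guest "$FLOATING_IP"
```

If the scenario still passes after step 1, we get the node-kill coverage without waiting on the tempest work.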


--
Giulio Fidente
GPG KEY: 08D733BA

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Future CI jobs

2014-08-20 Thread Derek Higgins
On 19/08/14 20:55, Gregory Haynes wrote:
 Excerpts from Derek Higgins's message of 2014-08-19 10:41:11 +:
 Hi All,

I'd like to firm up our plans around the CI jobs we discussed at the
 tripleo sprint. At the time we jotted down the various jobs on an
 etherpad; to better visualize the matrix of coverage I've put it into a
 spreadsheet [1]. Before we go about making these changes I'd like to go
 through a few questions to firm things up.

 1. Did we miss any jobs that we should have included?
gfidente mentioned on IRC adding blockstoragescale and
 swiftstoragescale jobs into the mix; should we add these to the matrix so
 that each is tested in at least one of the existing jobs?

 2. Which jobs should run where? i.e. we should probably only aim to run
 a subset of these jobs (possibly 1 Fedora and 1 Ubuntu?) on non-tripleo
 projects.

 3. Are there any jobs here we should remove?

 4. Is there anything we should add to the test matrix?
Here I'm thinking we should consider dependent libraries, i.e. have at
 least one job that uses the git version of dependent libraries rather
 than the released library.

 5. On selinux we had said that we would set it to enforcing on Fedora
 jobs; once it's ready we can flick the switch. This may cause breakages
 as projects evolve, but we can revisit if they are too frequent.

 Once anybody with an opinion has had a chance to look over the
 spreadsheet, I'll start to make changes to our existing jobs so that
 they match the jobs on the spreadsheet and then add the new jobs (one at a time).

 Feel free to add comments to the spreadsheet or reply here.

 thanks,
 Derek

 [1]
 https://docs.google.com/spreadsheets/d/1LuK4FaG4TJFRwho7bcq6CcgY_7oaGnF-0E6kcK4QoQc/edit?usp=sharing

 
 Looks great! One suggestion: due to capacity issues we had prioritized
 these jobs and were going to walk down the list, adding new jobs as
 capacity became available. It might be a good idea to add a column for
 this?

I made an attempt at sorting these into an order of priority; the top 4
jobs I would see as all required, and we add the rest in order as
resources allow.

With the top 4 tests in place we have coverage for ha, non-ha,
updates, and reboots on both Ubuntu and Fedora.

 
 -Greg
 




Re: [openstack-dev] [TripleO] Future CI jobs

2014-08-20 Thread Derek Higgins
On 19/08/14 13:07, Giulio Fidente wrote:
 On 08/19/2014 12:41 PM, Derek Higgins wrote:
 Hi All,

 I'd like to firm up our plans around the CI jobs we discussed at the
 tripleo sprint. At the time we jotted down the various jobs on an
 etherpad; to better visualize the matrix of coverage I've put it into a
 spreadsheet [1]. Before we go about making these changes I'd like to go
 through a few questions to firm things up.
 
 hi Derek!
 
 1. Did we miss any jobs that we should have included?
 gfidente mentioned on IRC adding blockstoragescale and
 swiftstoragescale jobs into the mix; should we add these to the matrix so
 that each is tested in at least one of the existing jobs?
 
 thanks for bringing this up indeed
 
 my idea is the following: given that we have support for blockstorage node
 scaling in devtest now and will (hopefully soon) have the option to
 scale swift nodes too, it'd be nice to test an overcloud where we have volumes
 and objects stored on those separate nodes
 
 this would test our ability to deploy such a configuration, and we already
 have tests for this in place, as our user image now boots
 from volume and glance is backed by swift
 
 so maybe a non-ha job with 1 external blockstorage node and 2 external swift
 nodes would be nice to have?

I've added block scaling and swift scaling to the matrix and have
included each in one of the tests. This should give us coverage of both,
so I think we can do this without adding a new job.

 
 3. Are there any jobs here we should remove?
 
 I was suspicious about the -juno and -icehouse jobs.
 
 Are these jobs supposed to test the latest 'stable' (juno) and 'stable -1'
 (icehouse) releases, with all other jobs deploying from 'K' trunk?

I'm having difficulty recalling what we decided at the sprint, but long
term testing the latest stable sounds like a must. Anybody know where the
notes on this are?

 
 Once anybody with an opinion has had a chance to look over the
 spreadsheet, I'll start to make changes to our existing jobs so that
 they match the jobs on the spreadsheet and then add the new jobs (one at a
 time)

 Feel free to add comments to the spreadsheet or reply here.
 
 One last comment, maybe a bit OT, but I'm raising it here to see what
 other people think: how about we modify the -ha job so that at
 some point we actually kill one of the controllers and spawn a second
 user image?




Re: [openstack-dev] [TripleO] Future CI jobs

2014-08-20 Thread Derek Higgins
On 19/08/14 20:58, Gregory Haynes wrote:
 Excerpts from Giulio Fidente's message of 2014-08-19 12:07:53 +:
 One last comment, maybe a bit OT, but I'm raising it here to see what
 other people think: how about we modify the -ha job so that at
 some point we actually kill one of the controllers and spawn a second
 user image?
 
 I think this is a great long-term goal, but IMO performing an update
 isn't really the type of verification we want for this kind of test. We
 really should have some minimal tempest testing in place first, so we can
 verify that when these types of failures occur our cloud remains in a
 functioning state.

Greg, when you said 'performing an update', did you mean killing a
controller node?

If so I agree: verifying our cloud is still in working order with
tempest would get us more coverage than spawning a node. So once we have
tempest in place we can add a test to kill a controller node.


 
 - Greg
 




Re: [openstack-dev] [TripleO] Future CI jobs

2014-08-20 Thread Gregory Haynes
Excerpts from Derek Higgins's message of 2014-08-20 09:06:48 +:
 On 19/08/14 20:58, Gregory Haynes wrote:
  Excerpts from Giulio Fidente's message of 2014-08-19 12:07:53 +:
  One last comment, maybe a bit OT, but I'm raising it here to see what
  other people think: how about we modify the -ha job so that at
  some point we actually kill one of the controllers and spawn a second
  user image?
  
  I think this is a great long-term goal, but IMO performing an update
  isn't really the type of verification we want for this kind of test. We
  really should have some minimal tempest testing in place first, so we can
  verify that when these types of failures occur our cloud remains in a
  functioning state.
 
 Greg, when you said 'performing an update', did you mean killing a
 controller node?
 
 If so I agree: verifying our cloud is still in working order with
 tempest would get us more coverage than spawning a node. So once we have
 tempest in place we can add a test to kill a controller node.
 

Ah, I misread the original message a bit, but it sounds like we're all on
the same page.



[openstack-dev] [TripleO] Future CI jobs

2014-08-19 Thread Derek Higgins
Hi All,

   I'd like to firm up our plans around the CI jobs we discussed at the
tripleo sprint. At the time we jotted down the various jobs on an
etherpad; to better visualize the matrix of coverage I've put it into a
spreadsheet [1]. Before we go about making these changes I'd like to go
through a few questions to firm things up.

1. Did we miss any jobs that we should have included?
   gfidente mentioned on IRC adding blockstoragescale and
swiftstoragescale jobs into the mix; should we add these to the matrix so
that each is tested in at least one of the existing jobs?

2. Which jobs should run where? i.e. we should probably only aim to run
a subset of these jobs (possibly 1 Fedora and 1 Ubuntu?) on non-tripleo
projects.

3. Are there any jobs here we should remove?

4. Is there anything we should add to the test matrix?
   Here I'm thinking we should consider dependent libraries, i.e. have at
least one job that uses the git version of dependent libraries rather
than the released library.

5. On selinux we had said that we would set it to enforcing on Fedora
jobs; once it's ready we can flick the switch. This may cause breakages
as projects evolve, but we can revisit if they are too frequent.

Once anybody with an opinion has had a chance to look over the
spreadsheet, I'll start to make changes to our existing jobs so that
they match the jobs on the spreadsheet and then add the new jobs (one at a time).

Feel free to add comments to the spreadsheet or reply here.

thanks,
Derek

[1]
https://docs.google.com/spreadsheets/d/1LuK4FaG4TJFRwho7bcq6CcgY_7oaGnF-0E6kcK4QoQc/edit?usp=sharing



Re: [openstack-dev] [TripleO] Future CI jobs

2014-08-19 Thread Giulio Fidente

On 08/19/2014 12:41 PM, Derek Higgins wrote:

Hi All,

I'd like to firm up our plans around the CI jobs we discussed at the
tripleo sprint. At the time we jotted down the various jobs on an
etherpad; to better visualize the matrix of coverage I've put it into a
spreadsheet [1]. Before we go about making these changes I'd like to go
through a few questions to firm things up.


hi Derek!


1. Did we miss any jobs that we should have included?
gfidente mentioned on IRC adding blockstoragescale and
swiftstoragescale jobs into the mix; should we add these to the matrix so
that each is tested in at least one of the existing jobs?


thanks for bringing this up indeed

my idea is the following: given that we have support for blockstorage node
scaling in devtest now and will (hopefully soon) have the option to
scale swift nodes too, it'd be nice to test an overcloud where we have volumes
and objects stored on those separate nodes


this would test our ability to deploy such a configuration, and we already
have tests for this in place, as our user image now boots
from volume and glance is backed by swift


so maybe a non-ha job with 1 external blockstorage node and 2 external swift
nodes would be nice to have?
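If devtest exposes scale knobs for both roles, the proposed job might reduce to a couple of settings along these lines. The variable names below are hypothetical, chosen only to illustrate the shape of the job; the actual devtest knobs may be named differently.

```shell
# Hypothetical devtest settings for the proposed non-ha scaling job.
# Variable names are illustrative assumptions, not confirmed devtest options.
export OVERCLOUD_BLOCKSTORAGE_SCALE=1   # 1 dedicated cinder volume node
export OVERCLOUD_SWIFTSTORAGE_SCALE=2   # 2 dedicated swift object nodes
```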



3. Are there any jobs here we should remove?


I was suspicious about the -juno and -icehouse jobs.

Are these jobs supposed to test the latest 'stable' (juno) and 'stable -1'
(icehouse) releases, with all other jobs deploying from 'K' trunk?



Once anybody with an opinion has had a chance to look over the
spreadsheet, I'll start to make changes to our existing jobs so that
they match the jobs on the spreadsheet and then add the new jobs (one at a time).

Feel free to add comments to the spreadsheet or reply here.


One last comment, maybe a bit OT, but I'm raising it here to see what
other people think: how about we modify the -ha job so that at
some point we actually kill one of the controllers and spawn a second
user image?

--
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [TripleO] Future CI jobs

2014-08-19 Thread Gregory Haynes
Excerpts from Derek Higgins's message of 2014-08-19 10:41:11 +:
 Hi All,
 
 I'd like to firm up our plans around the CI jobs we discussed at the
 tripleo sprint. At the time we jotted down the various jobs on an
 etherpad; to better visualize the matrix of coverage I've put it into a
 spreadsheet [1]. Before we go about making these changes I'd like to go
 through a few questions to firm things up.
 
 1. Did we miss any jobs that we should have included?
 gfidente mentioned on IRC adding blockstoragescale and
 swiftstoragescale jobs into the mix; should we add these to the matrix so
 that each is tested in at least one of the existing jobs?
 
 2. Which jobs should run where? i.e. we should probably only aim to run
 a subset of these jobs (possibly 1 Fedora and 1 Ubuntu?) on non-tripleo
 projects.
 
 3. Are there any jobs here we should remove?
 
 4. Is there anything we should add to the test matrix?
 Here I'm thinking we should consider dependent libraries, i.e. have at
 least one job that uses the git version of dependent libraries rather
 than the released library.
 
 5. On selinux we had said that we would set it to enforcing on Fedora
 jobs; once it's ready we can flick the switch. This may cause breakages
 as projects evolve, but we can revisit if they are too frequent.
 
 Once anybody with an opinion has had a chance to look over the
 spreadsheet, I'll start to make changes to our existing jobs so that
 they match the jobs on the spreadsheet and then add the new jobs (one at a time).
 
 Feel free to add comments to the spreadsheet or reply here.
 
 thanks,
 Derek
 
 [1]
 https://docs.google.com/spreadsheets/d/1LuK4FaG4TJFRwho7bcq6CcgY_7oaGnF-0E6kcK4QoQc/edit?usp=sharing
 

Looks great! One suggestion: due to capacity issues we had prioritized
these jobs and were going to walk down the list, adding new jobs as
capacity became available. It might be a good idea to add a column for
this?

-Greg



Re: [openstack-dev] [TripleO] Future CI jobs

2014-08-19 Thread Gregory Haynes
Excerpts from Giulio Fidente's message of 2014-08-19 12:07:53 +:
 One last comment, maybe a bit OT, but I'm raising it here to see what
 other people think: how about we modify the -ha job so that at
 some point we actually kill one of the controllers and spawn a second
 user image?

I think this is a great long-term goal, but IMO performing an update
isn't really the type of verification we want for this kind of test. We
really should have some minimal tempest testing in place first, so we can
verify that when these types of failures occur our cloud remains in a
functioning state.

- Greg
