[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2012-03-28 Thread Clint Byrum
Blueprint changed by Clint Byrum:

Whiteboard changed:
  Status: Spec nearing final approval. Merge proposals submitted against
  juju to implement some parts of the wrapper inside juju itself:
  http://pad.lv/939932 , http://pad.lv/939944
  
  Work Items:
  [clint-fewbar] write spec for charm testing facility: DONE
  implement specified testing framework: INPROGRESS
- deploy testing framework for use with local provider: TODO
+ [mark-mims] deploy testing framework for use with local provider (all phase1 tests are done w/ local provider): DONE
  deploy testing framework for use against ec2: TODO
  deploy testing framework for use against canonistack: TODO
- deploy testing framework for use against orchestra (managing VMs instead of machines): TODO
- write charm tests for mysql: TODO
- [clint-fewbar]  write charm tests for haproxy: TODO
- [clint-fewbar]  write charm tests for wordpress: TODO
- [mark-mims]  write charm tests for hadoop: TODO
+ deploy testing framework for use against orchestra (managing VMs instead of machines): POSTPONED
+ write charm tests for mysql: POSTPONED
+ [clint-fewbar]  write charm tests for haproxy: POSTPONED
+ [clint-fewbar]  write charm tests for wordpress: POSTPONED
+ [mark-mims]  write charm tests for hadoop: POSTPONED
  [james-page]  add openstack tests: TODO
  [mark-mims]  jenkins charm to spawn basic charm tests: DONE
  [mark-mims]  basic charm tests... just test install hooks for now: DONE
  
  Session notes (from the UDS-P session):
  Requirements of automated testing of charms:
  * LET'S KEEP IT SIMPLE! :-)
  * Detect breakage of a charm relating to an interface
  * Identification of individual change which breaks a given relationship
  * Maybe implement tests that mock a relation to ensure implementers are compliant
  * Test dependent charms when a provider charm changes
  * Run tests NxN over providers and requirers so all permutations are sane (_very_ expensive, probably impossible)
  * Run testing against multiple environment providers (EC2/OpenStack/BareMetal)
  * Notify maintainers when the charm breaks, rather than waiting for polling
  * Verify idempotency of hooks
    * Tricky to _verify_, and not an enforced convention at the moment, so not sure
  * Be able to specify multiple scenarios
  * Functional tests in fact exercise multiple charms. Should those sit within the charms, or outside, since they exercise the whole graph?
    * The place for these composed tests seems to be the stack
  * As much data as possible should be collected about the running tests so that a broken charm can be debugged and fixed.
  * Provide rich artifacts for failure analysis
  * Ideally tests will be run in lock step mode, so that breaking charms can be individually identified, but this is hard because changes may be co-dependent
  * It would be nice to have interface-specific tests that can run against any charms that implement such interfaces. In addition to working as tests, this is also a pragmatic way to document the interface.
  * support gerrit-like topics?  (What's that? :-) i.e., change-sets (across different branches)
  * We need a way to know which charms trigger which tests
  * Keep it simple
  * James mentioned he'd like to have that done by Alpha 1 (December) so that he can take that into account for the OpenStack testing effort.
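  The idempotency requirement above can be sketched as: run a hook twice against the same starting state and require the second run to change nothing. This is illustrative only; here a hook is a plain Python function mutating a dict, standing in for a real charm hook script acting on a machine.

```python
import copy

def is_idempotent(hook, initial_state):
    """A hook is idempotent if running it a second time changes nothing."""
    once = hook(copy.deepcopy(initial_state))
    twice = hook(copy.deepcopy(once))
    return once == twice

def install_hook(state):
    # Adding to a set twice leaves it unchanged: idempotent.
    state.setdefault("packages", set()).add("mysql-server")
    return state

def log_hook(state):
    # Appending on every run grows the list: not idempotent.
    state.setdefault("log", []).append("ran")
    return state
```

  As the notes say, this is tricky to verify in practice: real hooks touch package managers and filesystems, where "same state" is far harder to compare than a dict.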
  ACTIONS:
  [niemeyer] spec
  [james-page] add openstack tests
  
  The proposal below was rejected as too complicated (kept for posterity).
  
  Proposal:
  
  Each charm has a tests directory
  
  Under tests, you have executables:
  
  __install__ -- test to run after charm is installed with no relations
  
  Then two directories:
  provides/
  requires/
  
  These directories have a directory underneath for each interface
  provided/required. Those directories contain executables to run.
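  Under that scheme, a hypothetical mysql charm providing the `mysql` interface might carry a layout like the following (test names invented for illustration):

```
tests/
  __install__
  provides/
    mysql/
      check-connection
      check-grants
```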
  
  The test runner follows the following method:
  
  deploy charm
  wait for installed status
  run __install__ script, FAIL if exits non-zero
  destroy service
  for interface in provides ; do
      calculate graph of all charms in store which require interface, and all of its dependency combinations
      deploy requiring charm w/ dependencies and providing service
      add-relation between requiring/providing
      for test in provides/interface ; do
          run test with name of deployed requiring service
  for interface in requires ; do
      repeat process above with provides/requires transposed
  
  Each commit to any branch in the charm store will queue up a run with only that change applied (none committed after it) and record pass/fail.

-- 
Juju: automated testing of charms
https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-juju-charm-testing

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 

[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2012-03-28 Thread Mark Mims
Blueprint changed by Mark Mims:

Whiteboard changed:
  Status: Spec nearing final approval. Merge proposals submitted against
  juju to implement some parts of the wrapper inside juju itself:
  http://pad.lv/939932 , http://pad.lv/939944
  
  Work Items:
  [clint-fewbar] write spec for charm testing facility: DONE
  implement specified testing framework: INPROGRESS
  [mark-mims] deploy testing framework for use with local provider (all phase1 tests are done w/ local provider): DONE
- deploy testing framework for use against ec2: TODO
- deploy testing framework for use against canonistack: TODO
+ [mark-mims] deploy testing framework for use against ec2: INPROGRESS
+ [mark-mims] deploy testing framework for use against canonistack: INPROGRESS
  deploy testing framework for use against orchestra (managing VMs instead of machines): POSTPONED
  write charm tests for mysql: POSTPONED
  [clint-fewbar]  write charm tests for haproxy: POSTPONED
  [clint-fewbar]  write charm tests for wordpress: POSTPONED
  [mark-mims]  write charm tests for hadoop: POSTPONED
  [james-page]  add openstack tests: TODO
  [mark-mims]  jenkins charm to spawn basic charm tests: DONE
  [mark-mims]  basic charm tests... just test install hooks for now: DONE
  


[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2012-03-28 Thread Mark Mims
Blueprint changed by Mark Mims:

Whiteboard changed:
  Status: Spec nearing final approval. Merge proposals submitted against
  juju to implement some parts of the wrapper inside juju itself:
  http://pad.lv/939932 , http://pad.lv/939944
  
  Work Items:
  [clint-fewbar] write spec for charm testing facility: DONE
  implement specified testing framework: INPROGRESS
- [mark-mims] deploy testing framework for use with local provider (all phase1 tests are done w/ local provider): DONE
+ [mark-mims] deploy testing framework for use with local provider (all phase1 tests and charm graph tests are done w/ local provider): DONE
  [mark-mims] deploy testing framework for use against ec2: INPROGRESS
  [mark-mims] deploy testing framework for use against canonistack: INPROGRESS
  deploy testing framework for use against orchestra (managing VMs instead of machines): POSTPONED
  write charm tests for mysql: POSTPONED
  [clint-fewbar]  write charm tests for haproxy: POSTPONED
  [clint-fewbar]  write charm tests for wordpress: POSTPONED
  [mark-mims]  write charm tests for hadoop: POSTPONED
  [james-page]  add openstack tests: TODO
  [mark-mims]  jenkins charm to spawn basic charm tests: DONE
  [mark-mims]  basic charm tests... just test install hooks for now: DONE
  


[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2012-03-28 Thread Mark Mims
Blueprint changed by Mark Mims:

Whiteboard changed:
  Status: Spec nearing final approval. Merge proposals submitted against
  juju to implement some parts of the wrapper inside juju itself:
  http://pad.lv/939932 , http://pad.lv/939944
  
  Work Items:
  [clint-fewbar] write spec for charm testing facility: DONE
- implement specified testing framework: INPROGRESS
+ implement specified testing framework (phase1): DONE
  [mark-mims] deploy testing framework for use with local provider (all phase1 tests and charm graph tests are done w/ local provider): DONE
  [mark-mims] deploy testing framework for use against ec2: INPROGRESS
  [mark-mims] deploy testing framework for use against canonistack: INPROGRESS
  deploy testing framework for use against orchestra (managing VMs instead of machines): POSTPONED
  write charm tests for mysql: POSTPONED
  [clint-fewbar]  write charm tests for haproxy: POSTPONED
  [clint-fewbar]  write charm tests for wordpress: POSTPONED
  [mark-mims]  write charm tests for hadoop: POSTPONED
  [james-page]  add openstack tests: TODO
  [mark-mims]  jenkins charm to spawn basic charm tests: DONE
  [mark-mims]  basic charm tests... just test install hooks for now: DONE
  


[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2012-03-28 Thread Mark Mims
Blueprint changed by Mark Mims:

Whiteboard changed:
  Status: Spec nearing final approval. Merge proposals submitted against
  juju to implement some parts of the wrapper inside juju itself:
  http://pad.lv/939932 , http://pad.lv/939944
  
  Work Items:
  [clint-fewbar] write spec for charm testing facility: DONE
  implement specified testing framework (phase1): DONE
  [mark-mims] deploy testing framework for use with local provider (all phase1 tests and charm graph tests are done w/ local provider): DONE
- [mark-mims] deploy testing framework for use against ec2: INPROGRESS
- [mark-mims] deploy testing framework for use against canonistack: INPROGRESS
+ [mark-mims] deploy testing framework for use against ec2: POSTPONED (this is an out-of-cycle activity)
+ [mark-mims] deploy testing framework for use against canonistack: POSTPONED (this is an out-of-cycle activity)
  deploy testing framework for use against orchestra (managing VMs instead of machines): POSTPONED
  write charm tests for mysql: POSTPONED
  [clint-fewbar]  write charm tests for haproxy: POSTPONED
  [clint-fewbar]  write charm tests for wordpress: POSTPONED
  [mark-mims]  write charm tests for hadoop: POSTPONED
  [james-page]  add openstack tests: TODO
  [mark-mims]  jenkins charm to spawn basic charm tests: DONE
  [mark-mims]  basic charm tests... just test install hooks for now: DONE
  


[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2012-03-28 Thread Chris Johnston
Blueprint changed by Chris Johnston:

Whiteboard changed:
  Status: Spec nearing final approval. Merge proposals submitted against
  juju to implement some parts of the wrapper inside juju itself:
  http://pad.lv/939932 , http://pad.lv/939944
  
  Work Items:
  [clint-fewbar] write spec for charm testing facility: DONE
  implement specified testing framework (phase1): DONE
  [mark-mims] deploy testing framework for use with local provider (all phase1 tests and charm graph tests are done w/ local provider): DONE
- [mark-mims] deploy testing framework for use against ec2: POSTPONED (this is an out-of-cycle activity)
- [mark-mims] deploy testing framework for use against canonistack: POSTPONED (this is an out-of-cycle activity)
+ [mark-mims] deploy testing framework for use against ec2 (this is an out-of-cycle activity): POSTPONED
+ [mark-mims] deploy testing framework for use against canonistack (this is an out-of-cycle activity): POSTPONED
  deploy testing framework for use against orchestra (managing VMs instead of machines): POSTPONED
  write charm tests for mysql: POSTPONED
  [clint-fewbar]  write charm tests for haproxy: POSTPONED
  [clint-fewbar]  write charm tests for wordpress: POSTPONED
  [mark-mims]  write charm tests for hadoop: POSTPONED
  [james-page]  add openstack tests: TODO
  [mark-mims]  jenkins charm to spawn basic charm tests: DONE
  [mark-mims]  basic charm tests... just test install hooks for now: DONE
  

[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2012-03-28 Thread James Page
Blueprint changed by James Page:

Whiteboard changed:
  Status: Spec nearing final approval. Merge proposals submitted against
  juju to implement some parts of the wrapper inside juju itself:
  http://pad.lv/939932 , http://pad.lv/939944
  
  Work Items:
  [clint-fewbar] write spec for charm testing facility: DONE
  implement specified testing framework (phase1): DONE
  [mark-mims] deploy testing framework for use with local provider (all phase1 tests and charm graph tests are done w/ local provider): DONE
  [mark-mims] deploy testing framework for use against ec2 (this is an out-of-cycle activity): POSTPONED
  [mark-mims] deploy testing framework for use against canonistack (this is an out-of-cycle activity): POSTPONED
  deploy testing framework for use against orchestra (managing VMs instead of machines): POSTPONED
  write charm tests for mysql: POSTPONED
  [clint-fewbar]  write charm tests for haproxy: POSTPONED
  [clint-fewbar]  write charm tests for wordpress: POSTPONED
  [mark-mims]  write charm tests for hadoop: POSTPONED
- [james-page]  add openstack tests: TODO
+ [james-page]  add openstack tests: DONE
  [mark-mims]  jenkins charm to spawn basic charm tests: DONE
  [mark-mims]  basic charm tests... just test install hooks for now: DONE
  
  Session notes:
  Welcome to Ubuntu Developer Summit!
  #uds-p #track #topic
  put your session notes here
  Requirements of automated testing of charms:
  * LET'S KEEP IT SIMPLE! :-)
  * Detect breakage of a charm relating to an interface
  * Identification of individual change which breaks a given relationship
  * Maybe implement tests that mock a relation to ensure implementers are 
compliant
  * Test dependent charms when a provider charm changes
  * Run test NxN of providers and requirers so all permutations are sane 
(_very_ expensive, probably impossible)
  * Run testing against multiple environment providers (EC2/OpenStack/BareMetal)
  * Notify maintainers when the charm breaks, rather than waiting for polling
  * Verify idempotency of hooks
  * Tricky to _verify_, and not an enforced convention at the moment, so 
not sure
  * be able to specify multiple scenarios
  * For functional tests, they are in fact exercising multiple charms. Should 
those sit
    within the charms, or outside since it's in fact exercising the whole graph?
    * The place for these composed tests seems to be the stack
  * As much data as possible should be collected about the running tests so 
that a broken
    charm can be debugged and fixed.
  * Provide rich artifacts for failure analysis
  * Ideally tests will be run in lock step mode, so that breaking charms can 
be individually
    identified, but this is hard because changes may be co-dependent
  * It would be nice to have interface-specific tests that can run against any 
charms that
    implement such interfaces. In addition to working as tests, this is also a 
pragmatic
    way to document the interface.
  * support gerrit-like topics?  (What's that? :-) i.e., change-sets (across 
different branches)
  * We need a way to know which charms trigger which tests
  * Keep it simple
  * James mentioned he'd like to have that done by Alpha 1 (December) so that 
he can take
    that into account for the OpenStack testing effort.
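
  One of the requirements above, verifying idempotency of hooks, can at least
  be spot-checked mechanically: run the hook twice and compare a snapshot of
  the state it manages. A minimal sketch, assuming the hook is an arbitrary
  executable; the file-path/size fingerprint is admittedly crude (a real check
  would also cover packages, services, etc.), and `fingerprint` /
  `hook_is_idempotent` are hypothetical names, not part of juju:

```python
import hashlib
import os
import subprocess


def fingerprint(root):
    """Hash of every file path and size under root (a crude state snapshot)."""
    h = hashlib.sha256()
    for dirpath, _, files in sorted(os.walk(root)):
        for name in sorted(files):
            path = os.path.join(dirpath, name)
            h.update(f"{path}:{os.stat(path).st_size}".encode())
    return h.hexdigest()


def hook_is_idempotent(hook_cmd, target_dir):
    """Run the hook twice; the second run should leave target_dir unchanged."""
    subprocess.run(hook_cmd, check=True)  # first run may legitimately change state
    before = fingerprint(target_dir)
    subprocess.run(hook_cmd, check=True)  # second run should be a no-op
    return fingerprint(target_dir) == before
```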
  ACTIONS:
  [niemeyer] spec
  [james-page] add openstack tests
  
  Proposal below is too complicated, rejected (Kept for posterity)
  
  Proposal:
  
  Each charm has a tests directory
  
  Under tests, you have executables:
  
  __install__ -- test to run after charm is installed with no relations
  
  Then two directories:
  provides/
  requires/
  
  These directories have a directory underneath for each interface
  provided/required. Those directories contain executables to run.
  
  The test runner proceeds as follows:
  
  deploy charm
  wait for installed status
  run __install__ script, FAIL if it exits non-zero
  destroy service
  for interface in provides ; do
    calculate the graph of all charms in the store which require the interface,
    and all of its dependency combinations
    deploy the requiring charm w/ its dependencies and the providing service
    add-relation between requiring/providing
    for test in provides/interface ; do
      run test with the name of the deployed requiring service
    done
  done
  for interface in requires ; do
    repeat the process above with provides/requires transposed
  done
  
  Each commit to any branch in the charm store will queue up a run with only
  that change applied (and none committed after it), and record pass/fail

-- 
Juju: automated testing of charms
https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-juju-charm-testing

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2012-03-28 Thread Clint Byrum
Blueprint changed by Clint Byrum:

Whiteboard changed:
  Status: Spec nearing final approval. Merge proposals submitted against
  juju to implement some parts of the wrapper inside juju itself:
  http://pad.lv/939932 , http://pad.lv/939944
  
  Work Items:
  [clint-fewbar] write spec for charm testing facility: DONE
- implement specified testing framework (phase1): DONE
+ [mark-mims] implement specified testing framework (phase1): DONE
+ implement specified testing framework (phase2) (blocked on merge of 
http://pad.lv/939944 ): BLOCKED
  [mark-mims] deploy testing framework for use with local provider (all phase1 
tests and charm graph tests are done w/ local provider): DONE
  [mark-mims] deploy testing framework for use against ec2  (this is an 
out-of-cycle activity): POSTPONED
  [mark-mims] deploy testing framework for use against canonistack (this is an 
out-of-cycle activity): POSTPONED
  deploy testing framework for use against orchestra (managing VMs instead of 
machines): POSTPONED
  write charm tests for mysql: POSTPONED
  [clint-fewbar]  write charm tests for haproxy: POSTPONED
  [clint-fewbar]  write charm tests for wordpress: POSTPONED
  [mark-mims]  write charm tests for hadoop: POSTPONED
  [james-page]  add openstack tests: DONE
  [mark-mims]  jenkins charm to spawn basic charm tests: DONE
  [mark-mims]  basic charm tests... just test install hooks for now: DONE
  


[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2012-02-23 Thread Clint Byrum
Blueprint changed by Clint Byrum:

Whiteboard changed:
  Status: spec needed, place-holder work items added to get a better
  handle on the scope of work.
  
  Work Items:
  [clint-fewbar] write spec for charm testing facility: DONE
- implement specified testing framework: TODO
+ implement specified testing framework: INPROGRESS
  deploy testing framework for use with local provider: TODO
  deploy testing framework for use against ec2: TODO
  deploy testing framework for use against canonistack: TODO
  deploy testing framework for use against orchestra (managing VMs instead of 
machines): TODO
  write charm tests for mysql: TODO
  [clint-fewbar]  write charm tests for haproxy: TODO
  [clint-fewbar]  write charm tests for wordpress: TODO
  [mark-mims]  write charm tests for hadoop: TODO
  [james-page]  add openstack tests: TODO
  [mark-mims]  jenkins charm to spawn basic charm tests: DONE
  [mark-mims]  basic charm tests... just test install hooks for now: DONE
  


[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2012-02-23 Thread Clint Byrum
Blueprint changed by Clint Byrum:

Whiteboard changed:
- Status: spec needed, place-holder work items added to get a better
- handle on the scope of work.
+ Status: Spec nearing final approval. Merge proposals submitted against
+ juju to implement some parts of the wrapper inside juju itself:
+ http://pad.lv/939932 , http://pad.lv/939944
  
  Work Items:
  [clint-fewbar] write spec for charm testing facility: DONE
  implement specified testing framework: INPROGRESS
  deploy testing framework for use with local provider: TODO
  deploy testing framework for use against ec2: TODO
  deploy testing framework for use against canonistack: TODO
  deploy testing framework for use against orchestra (managing VMs instead of 
machines): TODO
  write charm tests for mysql: TODO
  [clint-fewbar]  write charm tests for haproxy: TODO
  [clint-fewbar]  write charm tests for wordpress: TODO
  [mark-mims]  write charm tests for hadoop: TODO
  [james-page]  add openstack tests: TODO
  [mark-mims]  jenkins charm to spawn basic charm tests: DONE
  [mark-mims]  basic charm tests... just test install hooks for now: DONE
  


[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2012-02-22 Thread Clint Byrum
Blueprint changed by Clint Byrum:

Whiteboard changed:
  Status: spec needed, place-holder work items added to get a better
  handle on the scope of work.
  
  Work Items:
  [clint-fewbar] write spec for charm testing facility: DONE
  implement specified testing framework: TODO
  deploy testing framework for use with local provider: TODO
  deploy testing framework for use against ec2: TODO
  deploy testing framework for use against canonistack: TODO
  deploy testing framework for use against orchestra (managing VMs instead of 
machines): TODO
  write charm tests for mysql: TODO
  [clint-fewbar]  write charm tests for haproxy: TODO
  [clint-fewbar]  write charm tests for wordpress: TODO
  [mark-mims]  write charm tests for hadoop: TODO
  [james-page]  add openstack tests: TODO
  [mark-mims]  jenkins charm to spawn basic charm tests: DONE
- [mark-mims]  basic charm tests... just test install hooks for now: INPROGRESS
+ [mark-mims]  basic charm tests... just test install hooks for now: DONE
  


[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2012-01-25 Thread Clint Byrum
Blueprint changed by Clint Byrum:

Whiteboard changed:
  Status: spec needed, place-holder work items added to get a better
  handle on the scope of work.
  
  Work Items:
- [clint-fewbar] write spec for charm testing facility: INPROGRESS
+ [clint-fewbar] write spec for charm testing facility: DONE
  implement specified testing framework: TODO
  deploy testing framework for use with local provider: TODO
  deploy testing framework for use against ec2: TODO
  deploy testing framework for use against canonistack: TODO
  deploy testing framework for use against orchestra (managing VMs instead of 
machines): TODO
  write charm tests for mysql: TODO
  [clint-fewbar]  write charm tests for haproxy: TODO
  [clint-fewbar]  write charm tests for wordpress: TODO
  [mark-mims]  write charm tests for hadoop: TODO
  [james-page]  add openstack tests: TODO
  [mark-mims]  jenkins charm to spawn basic charm tests: DONE
  [mark-mims]  basic charm tests... just test install hooks for now: INPROGRESS
  


[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2012-01-20 Thread Dave Walker
Blueprint changed by Dave Walker:

Whiteboard changed:
  Status: spec needed, place-holder work items added to get a better
  handle on the scope of work.
  
  Work Items:
- write spec for charm testing facility: INPROGRESS
+ [clint-fewbar] write spec for charm testing facility: INPROGRESS
  implement specified testing framework: TODO
  deploy testing framework for use with local provider: TODO
  deploy testing framework for use against ec2: TODO
  deploy testing framework for use against canonistack: TODO
  deploy testing framework for use against orchestra (managing VMs instead of 
machines): TODO
  write charm tests for mysql: TODO
- [clint-fewbar] write charm tests for haproxy: TODO
- [clint-fewbar] write charm tests for wordpress: TODO
- [mark-mims] write charm tests for hadoop: TODO
- [james-page] add openstack tests: TODO
- [mark-mims] jenkins charm to spawn basic charm tests: DONE
- [mark-mims] basic charm tests... just test install hooks for now: INPROGRESS
+ [clint-fewbar]  write charm tests for haproxy: TODO
+ [clint-fewbar]  write charm tests for wordpress: TODO
+ [mark-mims]  write charm tests for hadoop: TODO
+ [james-page]  add openstack tests: TODO
+ [mark-mims]  jenkins charm to spawn basic charm tests: DONE
+ [mark-mims]  basic charm tests... just test install hooks for now: INPROGRESS
  


[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2012-01-19 Thread Clint Byrum
Blueprint changed by Clint Byrum:

Whiteboard changed:
  Status: spec needed, place-holder work items added to get a better
  handle on the scope of work. Items blocked on spec.
  
  Work Items:
- [niemeyer] write spec for charm testing facility: TODO
- implement specified testing framework: BLOCKED
- deploy testing framework for use with local provider: BLOCKED
- deploy testing framework for use against ec2: BLOCKED
- deploy testing framework for use against canonistack: BLOCKED
- deploy testing framework for use against orchestra (managing VMs instead of 
machines): BLOCKED
- write charm tests for mysql: BLOCKED
- [clint-fewbar] write charm tests for haproxy: BLOCKED
- [clint-fewbar] write charm tests for wordpress: BLOCKED
- [mark-mims] write charm tests for hadoop: BLOCKED
- [james-page] add openstack tests: BLOCKED
+ write spec for charm testing facility: INPROGRESS
+ implement specified testing framework: TODO
+ deploy testing framework for use with local provider: TODO
+ deploy testing framework for use against ec2: TODO
+ deploy testing framework for use against canonistack: TODO
+ deploy testing framework for use against orchestra (managing VMs instead of 
machines): TODO
+ write charm tests for mysql: TODO
+ [clint-fewbar] write charm tests for haproxy: TODO
+ [clint-fewbar] write charm tests for wordpress: TODO
+ [mark-mims] write charm tests for hadoop: TODO
+ [james-page] add openstack tests: TODO
  [mark-mims] jenkins charm to spawn basic charm tests: DONE
  [mark-mims] basic charm tests... just test install hooks for now: INPROGRESS
  
  Session notes:
  Welcome to Ubuntu Developer Summit!
  #uds-p #track #topic
  put your session notes here
  Requirements of automated testing of charms:
  * LET'S KEEP IT SIMPLE! :-)
  * Detect breakage of a charm relating to an interface
  * Identification of individual change which breaks a given relationship
  * Maybe implement tests that mock a relation to ensure implementers are 
compliant
  * Test dependent charms when a provider charm changes
  * Run test NxN of providers and requirers so all permutations are sane 
(_very_ expensive, probably impossible)
  * Run testing against multiple environment providers (EC2/OpenStack/BareMetal)
  * Notify maintainers when the charm breaks, rather than waiting for polling
  * Verify idempotency of hooks
  * Tricky to _verify_, and not an enforced convention at the moment, so 
not sure
  * be able to specify multiple scenarios
  * For functional tests, they are in fact exercising multiple charms. Should 
those sit
    within the charms, or outside since it's in fact exercising the whole graph?
    * The place for these composed tests seems to be the stack
  * As much data as possible should be collected about the running tests so 
that a broken
    charm can be debugged and fixed.
  * Provide rich artifacts for failure analysis
  * Ideally tests will be run in lock step mode, so that breaking charms can 
be individually
    identified, but this is hard because changes may be co-dependent
  * It would be nice to have interface-specific tests that can run against any 
charms that
    implement such interfaces. In addition to working as tests, this is also a 
pragmatic
    way to document the interface.
  * support gerrit-like topics?  (What's that? :-) i.e., change-sets (across 
different branches)
  * We need a way to know which charms trigger which tests
  * Keep it simple
  * James mentioned he'd like to have that done by Alpha 1 (December) so that 
he can take
    that into account for the OpenStack testing effort.
  ACTIONS:
  [niemeyer] spec
  [james-page] add openstack tests
  
- 
  Proposal below is too complicated, rejected (Kept for posterity)
  
  Proposal:
  
  Each charm has a tests directory
  
  Under tests, you have executables:
  
  __install__ -- test to run after charm is installed with no relations
  
  Then two directories:
  provides/
  requires/
  
  These directories have a directory underneath for each interface
  provided/required. Those directories contain executables to run.
  
  The test runner follows the following method:
  
  deploy charm
  wait for installed status
  run __install__ script, FAIL if exits non-zero
  destroy service
  for interface in provides ; do
  calculate graph of all charms in store which require interface and all of 
its dependency combinations
  deploy requiring charm w/ dependencies and providing service
  add-relation between requiring/providing
  for test in provides/interface ; do
    run test with name of deployed requiring service
  for interface in requires ; do
  repeat process above with provides/requires transposed
  
  Each commit to any branch in charm store will queue up a run with only
  that change applied, none that have been done after it, and record
  pass/fail


[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2012-01-19 Thread Clint Byrum
Blueprint changed by Clint Byrum:

Whiteboard changed:
  Status: spec needed, place-holder work items added to get a better
- handle on the scope of work. Items blocked on spec.
+ handle on the scope of work.
  
  Work Items:
  write spec for charm testing facility: INPROGRESS
  implement specified testing framework: TODO
  deploy testing framework for use with local provider: TODO
  deploy testing framework for use against ec2: TODO
  deploy testing framework for use against canonistack: TODO
  deploy testing framework for use against orchestra (managing VMs instead of 
machines): TODO
  write charm tests for mysql: TODO
  [clint-fewbar] write charm tests for haproxy: TODO
  [clint-fewbar] write charm tests for wordpress: TODO
  [mark-mims] write charm tests for hadoop: TODO
  [james-page] add openstack tests: TODO
  [mark-mims] jenkins charm to spawn basic charm tests: DONE
  [mark-mims] basic charm tests... just test install hooks for now: INPROGRESS
  
  Session notes:
  Welcome to Ubuntu Developer Summit!
  #uds-p #track #topic
  put your session notes here
  Requirements of automated testing of charms:
  * LET'S KEEP IT SIMPLE! :-)
  * Detect breakage of a charm relating to an interface
  * Identification of individual change which breaks a given relationship
  * Maybe implement tests that mock a relation to ensure implementers are 
compliant
  * Test dependent charms when a provider charm changes
  * Run test NxN of providers and requirers so all permutations are sane 
(_very_ expensive, probably impossible)
  * Run testing against multiple environment providers (EC2/OpenStack/BareMetal)
  * Notify maintainers when the charm breaks, rather than waiting for polling
  * Verify idempotency of hooks
  * Tricky to _verify_, and not an enforced convention at the moment, so 
not sure
  * be able to specify multiple scenarios
  * For functional tests, they are in fact exercising multiple charms. Should 
those sit
    within the charms, or outside since it's in fact exercising the whole graph?
    * The place for these composed tests seem to be the stack
  * As much data as possible should be collected about the running tests so 
that a broken
    charm can be debugged and fixed.
  * Provide rich artifacts for failure analysis
  * Ideally tests will be run in lock step mode, so that breaking charms can 
be individually
    identified, but this is hard because changes may be co-dependent
  * It would be nice to have interface-specific tests that can run against any 
charms that
    implement such interfaces. In addition to working as tests, this is also a 
pragmatic
    way to document the interface.
  * support gerrit-like topics?  (What's that? :-) i.e., change-sets (across 
different branches)
  * We need a way to know which charms trigger which tests
  * Keep it simple
  * James mentioned he'd like to have that done by Alpha 1 (December) so that 
he can take
    that into account for the OpenStack testing effort.
  ACTIONS:
  [niemeyer] spec
  [james-page] add openstack tests
  
  Proposal below is too complicated, rejected (Kept for posterity)
  
  Proposal:
  
  Each charm has a tests directory
  
  Under tests, you have executables:
  
  __install__ -- test to run after charm is installed with no relations
  
  Then two directories:
  provides/
  requires/
  
  These directories have a directory underneath for each interface
  provided/required. Those directories contain executables to run.
  
  The test runner follows the following method:
  
  deploy charm
  wait for installed status
  run __install__ script, FAIL if exits non-zero
  destroy service
  for interface in provides ; do
  calculate graph of all charms in store which require interface and all of 
its dependency combinations
  deploy requiring charm w/ dependencies and providing service
  add-relation between requiring/providing
  for test in provides/interface ; do
    run test with name of deployed requiring service
  for interface in requires ; do
  repeat process above with provides/requires transposed
  
  Each commit to any branch in charm store will queue up a run with only
  that change applied, none that have been done after it, and record
  pass/fail



[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2012-01-17 Thread Mark Mims
Blueprint changed by Mark Mims:

Whiteboard changed:
  Status: spec needed, place-holder work items added to get a better
  handle on the scope of work. Items blocked on spec.
  
  Work Items:
  [niemeyer] write spec for charm testing facility: TODO
  implement specified testing framework: BLOCKED
  deploy testing framework for use with local provider: BLOCKED
  deploy testing framework for use against ec2: BLOCKED
  deploy testing framework for use against canonistack: BLOCKED
  deploy testing framework for use against orchestra (managing VMs instead of 
machines): BLOCKED
  write charm tests for mysql: BLOCKED
  [clint-fewbar] write charm tests for haproxy: BLOCKED
  [clint-fewbar] write charm tests for wordpress: BLOCKED
  [mark-mims] write charm tests for hadoop: BLOCKED
  [james-page] add openstack tests: BLOCKED
  
  Session notes:
  Welcome to Ubuntu Developer Summit!
  #uds-p #track #topic
  put your session notes here
  Requirements of automated testing of charms:
  * LET'S KEEP IT SIMPLE! :-)
  * Detect breakage of a charm relating to an interface
  * Identification of individual change which breaks a given relationship
  * Maybe implement tests that mock a relation to ensure implementers are 
compliant
  * Test dependent charms when a provider charm changes
  * Run test NxN of providers and requirers so all permutations are sane 
(_very_ expensive, probably impossible)
  * Run testing against multiple environment providers (EC2/OpenStack/BareMetal)
  * Notify maintainers when the charm breaks, rather than waiting for polling
  * Verify idempotency of hooks
  * Tricky to _verify_, and not an enforced convention at the moment, so 
not sure
  * be able to specify multiple scenarios
  * For functional tests, they are in fact exercising multiple charms. Should 
those sit
    within the charms, or outside since it's in fact exercising the whole graph?
    * The place for these composed tests seem to be the stack
  * As much data as possible should be collected about the running tests so 
that a broken
    charm can be debugged and fixed.
  * Provide rich artifacts for failure analysis
  * Ideally tests will be run in lock step mode, so that breaking charms can 
be individually
    identified, but this is hard because changes may be co-dependent
  * It would be nice to have interface-specific tests that can run against any 
charms that
    implement such interfaces. In addition to working as tests, this is also a 
pragmatic
    way to document the interface.
  * support gerrit-like topics?  (What's that? :-) i.e., change-sets (across 
different branches)
  * We need a way to know which charms trigger which tests
  * Keep it simple
  * James mentioned he'd like to have that done by Alpha 1 (December) so that 
he can take
    that into account for the OpenStack testing effort.
  ACTIONS:
  [niemeyer] spec
  [james-page] add openstack tests
+ [mark-mims] jenkins charm to spawn basic charm tests DONE
+ [mark-mims] basic charm tests... just test install hooks for now INPROGRESS
  
  Proposal below is too complicated, rejected (Kept for posterity)
  
  Proposal:
  
  Each charm has a tests directory
  
  Under tests, you have executables:
  
  __install__ -- test to run after charm is installed with no relations
  
  Then two directories:
  provides/
  requires/
  
  These directories have a directory underneath for each interface
  provided/required. Those directories contain executables to run.
  
  The test runner follows the following method:
  
  deploy charm
  wait for installed status
  run __install__ script, FAIL if exits non-zero
  destroy service
  for interface in provides ; do
  calculate graph of all charms in store which require interface and all of 
its dependency combinations
  deploy requiring charm w/ dependencies and providing service
  add-relation between requiring/providing
  for test in provides/interface ; do
    run test with name of deployed requiring service
  for interface in requires ; do
  repeat process above with provides/requires transposed
  
  Each commit to any branch in charm store will queue up a run with only
  that change applied, none that have been done after it, and record
  pass/fail



[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2012-01-17 Thread Clint Byrum
Blueprint changed by Clint Byrum:

Whiteboard changed:
  Status: spec needed, place-holder work items added to get a better
  handle on the scope of work. Items blocked on spec.
  
  Work Items:
  [niemeyer] write spec for charm testing facility: TODO
  implement specified testing framework: BLOCKED
  deploy testing framework for use with local provider: BLOCKED
  deploy testing framework for use against ec2: BLOCKED
  deploy testing framework for use against canonistack: BLOCKED
  deploy testing framework for use against orchestra (managing VMs instead of 
machines): BLOCKED
  write charm tests for mysql: BLOCKED
  [clint-fewbar] write charm tests for haproxy: BLOCKED
  [clint-fewbar] write charm tests for wordpress: BLOCKED
  [mark-mims] write charm tests for hadoop: BLOCKED
  [james-page] add openstack tests: BLOCKED
+ [mark-mims] jenkins charm to spawn basic charm tests: DONE
+ [mark-mims] basic charm tests... just test install hooks for now: INPROGRESS
  
  Session notes:
  Welcome to Ubuntu Developer Summit!
  #uds-p #track #topic
  put your session notes here
  Requirements of automated testing of charms:
  * LET'S KEEP IT SIMPLE! :-)
  * Detect breakage of a charm relating to an interface
  * Identification of individual change which breaks a given relationship
  * Maybe implement tests that mock a relation to ensure implementers are 
compliant
  * Test dependent charms when a provider charm changes
  * Run test NxN of providers and requirers so all permutations are sane 
(_very_ expensive, probably impossible)
  * Run testing against multiple environment providers (EC2/OpenStack/BareMetal)
  * Notify maintainers when the charm breaks, rather than waiting for polling
  * Verify idempotency of hooks
  * Tricky to _verify_, and not an enforced convention at the moment, so 
not sure
  * be able to specify multiple scenarios
  * For functional tests, they are in fact exercising multiple charms. Should 
those sit
    within the charms, or outside since it's in fact exercising the whole graph?
    * The place for these composed tests seem to be the stack
  * As much data as possible should be collected about the running tests so 
that a broken
    charm can be debugged and fixed.
  * Provide rich artifacts for failure analysis
  * Ideally tests will be run in lock step mode, so that breaking charms can 
be individually
    identified, but this is hard because changes may be co-dependent
  * It would be nice to have interface-specific tests that can run against any 
charms that
    implement such interfaces. In addition to working as tests, this is also a 
pragmatic
    way to document the interface.
  * support gerrit-like topics?  (What's that? :-) i.e., change-sets (across 
different branches)
  * We need a way to know which charms trigger which tests
  * Keep it simple
  * James mentioned he'd like to have that done by Alpha 1 (December) so that 
he can take
    that into account for the OpenStack testing effort.
  ACTIONS:
  [niemeyer] spec
  [james-page] add openstack tests
- [mark-mims] jenkins charm to spawn basic charm tests DONE
- [mark-mims] basic charm tests... just test install hooks for now INPROGRESS
+ 
  
  Proposal below is too complicated, rejected (Kept for posterity)
  
  Proposal:
  
  Each charm has a tests directory
  
  Under tests, you have executables:
  
  __install__ -- test to run after charm is installed with no relations
  
  Then two directories:
  provides/
  requires/
  
  These directories have a directory underneath for each interface
  provided/required. Those directories contain executables to run.
  
  The test runner follows the following method:
  
  deploy charm
  wait for installed status
  run __install__ script, FAIL if exits non-zero
  destroy service
  for interface in provides ; do
  calculate graph of all charms in store which require interface and all of 
its dependency combinations
  deploy requiring charm w/ dependencies and providing service
  add-relation between requiring/providing
  for test in provides/interface ; do
    run test with name of deployed requiring service
  for interface in requires ; do
  repeat process above with provides/requires transposed
  
  Each commit to any branch in charm store will queue up a run with only
  that change applied, none that have been done after it, and record
  pass/fail



[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2012-01-09 Thread Clint Byrum
Blueprint changed by Clint Byrum:

Whiteboard changed:
  Status: spec needed, place-holder work items added to get a better
- handle on the scope of work.
+ handle on the scope of work. Items blocked on spec.
  
  Work Items:
  [niemeyer] write spec for charm testing facility: TODO
- implement specified testing framework: TODO
- deploy testing framework for use with local provider: TODO
- deploy testing framework for use against ec2: TODO
- deploy testing framework for use against canonistack: TODO
- deploy testing framework for use against orchestra (managing VMs instead of 
machines): TODO
- write charm tests for mysql: TODO
- [clint-fewbar] write charm tests for haproxy: TODO
- [clint-fewbar] write charm tests for wordpress: TODO
- [mark-mims] write charm tests for hadoop: TODO
- [james-page] add openstack tests: TODO
+ implement specified testing framework: BLOCKED
+ deploy testing framework for use with local provider: BLOCKED
+ deploy testing framework for use against ec2: BLOCKED
+ deploy testing framework for use against canonistack: BLOCKED
+ deploy testing framework for use against orchestra (managing VMs instead of 
machines): BLOCKED
+ write charm tests for mysql: BLOCKED
+ [clint-fewbar] write charm tests for haproxy: BLOCKED
+ [clint-fewbar] write charm tests for wordpress: BLOCKED
+ [mark-mims] write charm tests for hadoop: BLOCKED
+ [james-page] add openstack tests: BLOCKED
  
  Session notes:
  Welcome to Ubuntu Developer Summit!
  #uds-p #track #topic
  put your session notes here
  Requirements of automated testing of charms:
  * LET'S KEEP IT SIMPLE! :-)
  * Detect breakage of a charm relating to an interface
  * Identification of individual change which breaks a given relationship
  * Maybe implement tests that mock a relation to ensure implementers are 
compliant
  * Test dependent charms when a provider charm changes
  * Run test NxN of providers and requirers so all permutations are sane 
(_very_ expensive, probably impossible)
  * Run testing against multiple environment providers (EC2/OpenStack/BareMetal)
  * Notify maintainers when the charm breaks, rather than waiting for polling
  * Verify idempotency of hooks
  * Tricky to _verify_, and not an enforced convention at the moment, so 
not sure
  * be able to specify multiple scenarios
  * For functional tests, they are in fact exercising multiple charms. Should 
those sit
    within the charms, or outside since it's in fact exercising the whole graph?
    * The place for these composed tests seem to be the stack
  * As much data as possible should be collected about the running tests so 
that a broken
    charm can be debugged and fixed.
  * Provide rich artifacts for failure analysis
  * Ideally tests will be run in lock step mode, so that breaking charms can 
be individually
    identified, but this is hard because changes may be co-dependent
  * It would be nice to have interface-specific tests that can run against any 
charms that
    implement such interfaces. In addition to working as tests, this is also a 
pragmatic
    way to document the interface.
  * support gerrit-like topics?  (What's that? :-) i.e., change-sets (across 
different branches)
  * We need a way to know which charms trigger which tests
  * Keep it simple
  * James mentioned he'd like to have that done by Alpha 1 (December) so that 
he can take
    that into account for the OpenStack testing effort.
  ACTIONS:
  [niemeyer] spec
  [james-page] add openstack tests
  
  Proposal below is too complicated, rejected (Kept for posterity)
  
  Proposal:
  
  Each charm has a tests directory
  
  Under tests, you have executables:
  
  __install__ -- test to run after charm is installed with no relations
  
  Then two directories:
  provides/
  requires/
  
  These directories have a directory underneath for each interface
  provided/required. Those directories contain executables to run.
  
  The test runner follows the following method:
  
  deploy charm
  wait for installed status
  run __install__ script, FAIL if exits non-zero
  destroy service
  for interface in provides ; do
  calculate graph of all charms in store which require interface and all of 
its dependency combinations
  deploy requiring charm w/ dependencies and providing service
  add-relation between requiring/providing
  for test in provides/interface ; do
    run test with name of deployed requiring service
  for interface in requires ; do
  repeat process above with provides/requires transposed
  
  Each commit to any branch in charm store will queue up a run with only
  that change applied, none that have been done after it, and record
  pass/fail



[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2012-01-04 Thread Robbie Williamson
Blueprint changed by Robbie Williamson:

Drafter: Clint Byrum = Ubuntu Server Team



[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2011-12-22 Thread Robbie Williamson
Blueprint changed by Robbie Williamson:

Drafter: juju hackers = Clint Byrum



[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2011-12-22 Thread Robbie Williamson
Blueprint changed by Robbie Williamson:

Approver: Robbie Williamson = Antonio Rosales



[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2011-11-25 Thread Dave Walker
Blueprint changed by Dave Walker:

Definition Status: Pending Approval = Approved



[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2011-11-22 Thread Dave Walker
Blueprint changed by Dave Walker:

Definition Status: Discussion = Pending Approval



[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2011-11-17 Thread Clint Byrum
Blueprint changed by Clint Byrum:

Whiteboard changed:
+ Status: spec needed, place-holder work items added to get a better
+ handle on the scope of work.
+ 
  Work Items:
  [niemeyer] write spec for charm testing facility: TODO
+ implement specified testing framework: TODO
+ deploy testing framework for use with local provider: TODO
+ deploy testing framework for use against ec2: TODO
+ deploy testing framework for use against canonistack: TODO
+ deploy testing framework for use against orchestra (managing VMs instead of 
machines): TODO
+ write charm tests for mysql: TODO
+ [clint-fewbar] write charm tests for haproxy: TODO
+ [clint-fewbar] write charm tests for wordpress: TODO
+ [mark-mims] write charm tests for hadoop: TODO
  [james-page] add openstack tests: TODO
  
  Session notes:
  Welcome to Ubuntu Developer Summit!
  #uds-p #track #topic
  put your session notes here
  Requirements of automated testing of charms:
  * LET'S KEEP IT SIMPLE! :-)
  * Detect breakage of a charm relating to an interface
  * Identification of individual change which breaks a given relationship
  * Maybe implement tests that mock a relation to ensure implementers are 
compliant
  * Test dependent charms when a provider charm changes
  * Run test NxN of providers and requirers so all permutations are sane 
(_very_ expensive, probably impossible)
  * Run testing against multiple environment providers (EC2/OpenStack/BareMetal)
  * Notify maintainers when the charm breaks, rather than waiting for polling
  * Verify idempotency of hooks
- * Tricky to _verify_, and not an enforced convention at the moment, so 
not sure
+ * Tricky to _verify_, and not an enforced convention at the moment, so 
not sure
  * be able to specify multiple scenarios
  * For functional tests, they are in fact exercising multiple charms. Should 
those sit
-   within the charms, or outside since it's in fact exercising the whole graph?
-   * The place for these composed tests seem to be the stack
+   within the charms, or outside since it's in fact exercising the whole graph?
+   * The place for these composed tests seem to be the stack
  * As much data as possible should be collected about the running tests so 
that a broken
-   charm can be debugged and fixed.
+   charm can be debugged and fixed.
  * Provide rich artifacts for failure analysis
  * Ideally tests will be run in lock step mode, so that breaking charms can 
be individually
-   identified, but this is hard because changes may be co-dependent
+   identified, but this is hard because changes may be co-dependent
  * It would be nice to have interface-specific tests that can run against any 
charms that
-   implement such interfaces. In addition to working as tests, this is also a 
pragmatic
-   way to document the interface.
- * support gerrit-like topics?  (What's that? :-) i.e., change-sets (across 
different branches) 
+   implement such interfaces. In addition to working as tests, this is also a 
pragmatic
+   way to document the interface.
+ * support gerrit-like topics?  (What's that? :-) i.e., change-sets (across 
different branches)
  * We need a way to know which charms trigger which tests
  * Keep it simple
  * James mentioned he'd like to have that done by Alpha 1 (December) so that 
he can take
-   that into account for the OpenStack testing effort.
+   that into account for the OpenStack testing effort.
  ACTIONS:
  [niemeyer] spec
  [james-page] add openstack tests
- 
  
  Proposal below is too complicated, rejected (Kept for posterity)
  
  Proposal:
  
  Each charm has a tests directory
  
  Under tests, you have executables:
  
  __install__ -- test to run after charm is installed with no relations
  
  Then two directories:
  provides/
  requires/
  
  These directories have a directory underneath for each interface
  provided/required. Those directories contain executables to run.
  
  The test runner follows the following method:
  
  deploy charm
  wait for installed status
  run __install__ script, FAIL if exits non-zero
  destroy service
  for interface in provides ; do
  calculate graph of all charms in store which require interface and all of 
its dependency combinations
  deploy requiring charm w/ dependencies and providing service
  add-relation between requiring/providing
  for test in provides/interface ; do
    run test with name of deployed requiring service
  for interface in requires ; do
  repeat process above with provides/requires transposed
  
  Each commit to any branch in charm store will queue up a run with only
  that change applied, none that have been done after it, and record
  pass/fail



[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2011-11-14 Thread Clint Byrum
Blueprint changed by Clint Byrum:

Whiteboard changed:
+ Work Items:
+ [niemeyer] write spec for charm testing facility: TODO
+ [james-page] add openstack tests: TODO
+ 
+ Session notes:
+ Welcome to Ubuntu Developer Summit!
+ #uds-p #track #topic
+ put your session notes here
+ Requirements of automated testing of charms:
+ * LET'S KEEP IT SIMPLE! :-)
+ * Detect breakage of a charm relating to an interface
+ * Identification of the individual change which breaks a given relationship
+ * Maybe implement tests that mock a relation to ensure implementers are compliant
+ * Test dependent charms when a provider charm changes
+ * Run tests NxN across providers and requirers so all permutations are sane (_very_ expensive, probably impossible)
+ * Run testing against multiple environment providers (EC2/OpenStack/BareMetal)
+ * Notify maintainers when the charm breaks, rather than waiting for polling
+ * Verify idempotency of hooks
+ * Tricky to _verify_, and not an enforced convention at the moment, so not sure
+ * Be able to specify multiple scenarios
+ * Functional tests in fact exercise multiple charms. Should those sit within the charms, or outside, since they exercise the whole graph?
+   * The place for these composed tests seems to be the stack
+ * As much data as possible should be collected about the running tests so that a broken charm can be debugged and fixed
+ * Provide rich artifacts for failure analysis
+ * Ideally tests will run in lock-step mode, so that breaking charms can be individually identified, but this is hard because changes may be co-dependent
+ * It would be nice to have interface-specific tests that can run against any charms that implement such interfaces; in addition to working as tests, this is also a pragmatic way to document the interface
+ * Support gerrit-like topics? (What's that? :-) i.e., change-sets across different branches)
+ * We need a way to know which charms trigger which tests
+ * Keep it simple
+ * James mentioned he'd like to have that done by Alpha 1 (December) so that he can take it into account for the OpenStack testing effort
+ ACTIONS:
+ [niemeyer] spec
+ [james-page] add openstack tests
+ 
+ 
+ Proposal below is too complicated, rejected (Kept for posterity)
+ 
  Proposal:
  
  Each charm has a tests directory
  
  Under tests, you have executables:
  
  __install__ -- test to run after charm is installed with no relations
  
  Then two directories:
  provides/
  requires/
  
  These directories have a directory underneath for each interface
  provided/required. Those directories contain executables to run.
  
  The test runner follows the following method:
  
  deploy charm
  wait for installed status
  run __install__ script, FAIL if exits non-zero
  destroy service
  for interface in provides ; do
  calculate graph of all charms in store which require interface and all of its dependency combinations
  deploy requiring charm w/ dependencies and providing service
  add-relation between requiring/providing
  for test in provides/interface ; do
    run test with name of deployed requiring service
  for interface in requires ; do
  repeat process above with provides/requires transposed
  
  Each commit to any branch in the charm store will queue up a run with only
  that change applied (none made after it), and record pass/fail.
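The pairing step of the runner method above (matching each provided interface against every charm in the store that requires it, then repeating with provides/requires transposed) can be sketched in Python. The function name and the in-memory `store` layout are assumptions for illustration, not juju or charm-store APIs:

```python
def plan_interface_tests(charm, store):
    """Yield (direction, interface, peer) test jobs for one charm.

    store maps charm name -> {"provides": [...], "requires": [...]};
    a hypothetical in-memory stand-in for the charm store.
    """
    meta = store[charm]
    jobs = []
    # For each interface this charm provides, test against every charm
    # in the store that requires that interface.
    for interface in meta.get("provides", []):
        for peer, peer_meta in store.items():
            if peer != charm and interface in peer_meta.get("requires", []):
                jobs.append(("provides", interface, peer))
    # Transposed: for each interface this charm requires, test against
    # every charm that provides it.
    for interface in meta.get("requires", []):
        for peer, peer_meta in store.items():
            if peer != charm and interface in peer_meta.get("provides", []):
                jobs.append(("requires", interface, peer))
    return jobs
```

With a two-charm store such as `{"mysql": {"provides": ["mysql"]}, "wordpress": {"requires": ["mysql"]}}`, planning for mysql yields one `("provides", "mysql", "wordpress")` job, and planning for wordpress yields the transposed `("requires", "mysql", "mysql")` job.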



[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2011-10-27 Thread Robbie Williamson
Blueprint changed by Robbie Williamson:

Assignee: (none) => Ubuntu Server Team



[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2011-10-27 Thread Robbie Williamson
Blueprint changed by Robbie Williamson:

Drafter: (none) => juju hackers



[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2011-10-27 Thread Robbie Williamson
Blueprint changed by Robbie Williamson:

Definition Status: New => Discussion



[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2011-10-27 Thread Robbie Williamson
Blueprint changed by Robbie Williamson:

Priority: Undefined => Essential



[Blueprint servercloud-p-juju-charm-testing] Juju: automated testing of charms

2011-10-27 Thread Robbie Williamson
Blueprint changed by Robbie Williamson:

Priority: Essential => High
