Hi friends,

The two devstack patches mentioned below have had their latest patchsets up for a 
week now, and still have only one +2 each. The other patches (and the 
ultimate goal of getting CI running on this driver) are blocked on them.

Could a devstack core please review these soon? Thanks!

// jim

On August 7, 2014 at 3:51:21 PM, Jay Faulkner (j...@jvf.cc) wrote:

Hi all,

At the recent Ironic mid-cycle meetup, we got the first version of the 
ironic-python-agent (IPA) driver merged. There are a few reviews we need merged 
(and their dependencies) across a few other projects in order to begin testing 
it automatically. We would like to eventually gate IPA and Ironic with tempest 
testing similar to what the pxe driver does today.

For IPA to work in devstack (openstack-dev/devstack repo):

 - https://review.openstack.org/#/c/112095 Adds swift temp URL support to devstack

 - https://review.openstack.org/#/c/108457 Adds IPA support to Devstack
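
For anyone who wants to kick the tires locally before the gate jobs exist, the 
two reviews above should make a localrc along these lines possible. This is only 
a sketch: the variable names below (IRONIC_DEPLOY_DRIVER, SWIFT_ENABLE_TEMPURLS, 
SWIFT_TEMPURL_KEY) are assumptions modeled on devstack conventions, so check the 
patches for the exact knobs they actually introduce.

```shell
# localrc sketch for running the IPA driver under devstack.
# NOTE: all variable names here are assumptions -- see the two
# reviews above for the settings they actually add.

# Enable Ironic services and pick the agent-based deploy driver.
ENABLED_SERVICES+=,ironic,ir-api,ir-cond
IRONIC_DEPLOY_DRIVER=agent_ssh

# IPA fetches its images via Swift temp URLs, so Swift must be
# enabled with temp URL support switched on.
ENABLED_SERVICES+=,s-proxy,s-object,s-container,s-account
SWIFT_ENABLE_TEMPURLS=True
SWIFT_TEMPURL_KEY=some-secret-key
```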


Docs on running IPA in devstack (openstack/ironic repo):

 - https://review.openstack.org/#/c/112136/


For IPA to work in the devstack-gate environment (openstack-infra/config & 
openstack-infra/devstack-gate repos):

 - https://review.openstack.org/#/c/112143 Add IPA support to devstack-gate

 - https://review.openstack.org/#/c/112134 Consolidate and rename Ironic jobs

 - https://review.openstack.org/#/c/112693 Add check job for IPA + tempest 

Once these are all merged, we'll have IPA testing via a nonvoting check job, 
using the IPA-CoreOS deploy ramdisk, in both the ironic and ironic-python-agent 
projects. This will be promoted to voting once proven stable.

However, this is only one of many possible IPA deploy ramdisk images. We're 
currently publishing a CoreOS ramdisk, but we also have an effort to create a 
ramdisk with diskimage-builder (https://review.openstack.org/#/c/110487/), as 
well as plans for an ISO image (for use with things like iLO). As we gain 
additional images, we'd like to run each of them through the same suite of 
tests prior to publishing, so that images which would break IPA's gate never 
get published. The final testing matrix should look something like this, with 
check and gate jobs in each project covering the variations unique to that 
project, and one representative test in each consuming project's test matrix.

In IPA:

 - tempest runs against Ironic+agent_ssh with CoreOS ramdisk

 - tempest runs against Ironic+agent_ssh with DIB ramdisk

 - (other IPA tests)


IPA would then, as a post job, generate and publish the images, as we currently 
do with IPA-CoreOS 
(http://tarballs.openstack.org/ironic-python-agent/coreos/ipa-coreos.tar.gz). 
Because IPA would gate on tempest tests against each image, we'd avoid ever 
publishing a bad deploy ramdisk.
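
For reference, consuming the published ramdisk is a plain download. Only the URL 
comes from this thread; the unpack step assumes a gzipped tarball, as the 
filename suggests.

```shell
# Fetch the currently published IPA-CoreOS deploy ramdisk.
curl -L -O http://tarballs.openstack.org/ironic-python-agent/coreos/ipa-coreos.tar.gz
tar -xzf ipa-coreos.tar.gz  # layout of the contents: see the docs review above
```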


In Ironic:

 - tempest runs against Ironic+agent_ssh with the most suitable ramdisk (due to 
significantly decreased RAM requirements, this will likely be an image created 
by DIB once it exists)

 - tempest runs against Ironic+pxe_ssh

 - (whatever else Ironic runs)


Nova and other integrated projects will continue to run a single job, using 
Ironic with its default deploy driver (currently pxe_ssh).



Using this testing matrix, we'll ensure that there is coverage of each 
cross-project dependency, without bloating each project's test matrix 
unnecessarily. If, for instance, a change in Nova passes the Ironic pxe_ssh job 
and lands, but then breaks the agent_ssh job and thus blocks Ironic's gate, 
this would indicate a layering violation between Ironic and its deploy drivers 
(from Nova's perspective, nothing should change between those drivers). 
Similarly, if IPA tests failed against the CoreOS image (due to an Ironic OR 
Nova change), but the DIB image passed in both Ironic and Nova tests, then it's 
almost certainly an *IPA* bug.

Thanks so much for your time, and thanks to the OpenStack Ironic community for 
being so welcoming as we've worked on this alternate deploy driver; we look 
forward to improving it even further as Kilo opens.


Jay Faulkner

OpenStack-dev mailing list  