Re: [openstack-dev] [ironic] testing ansible-deploy in gates

2017-11-01 Thread Dmitry Tantsur

On 10/31/2017 07:09 PM, Pavlo Shchelokovskyy wrote:

Hi all,

As we agreed at the PTG, we are about to pull the ansible-deploy interface
from ironic-staging-drivers into ironic. We obviously need to test it in the
gates too, in non-voting mode like the snmp and redfish ones.


This raises a couple of questions/concerns:

1. Testing hardware types on gates.
This is possibly the first interface that does not have any "classic" driver
associated with it. All our devstack setup logic is currently based on classic
drivers rather than hardware types, in particular all the "is_deployed_by"
functions and the logic depending on them.
As we are about to deprecate the classic drivers altogether and eventually
remove them, we ought to start moving our setup and testing procedures to
hardware types as well.
(Another interesting point is how much effort it will take to adapt all our
unit tests to use hardware types instead of 'fake' and other classic
drivers...)


++ to moving to hardware types. As to 'fake', there is 'fake-hardware', which 
does roughly the same.
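
(To make the shift concrete: a hardware type is a named combination of
per-function interfaces rather than a monolithic driver, which is exactly what
lets ansible-deploy exist as just a deploy interface. The snippet below is a
tiny illustrative Python sketch of that composition model, not ironic's actual
classes or driver-loading code, which uses stevedore entry points and the
enabled_* configuration options.)

    import collections

    # Illustrative only: mimics the hardware-type composition model discussed
    # above; ironic's real implementation loads interfaces via stevedore
    # entry points and enables them through configuration.
    HardwareType = collections.namedtuple(
        'HardwareType',
        ['name', 'power_interface', 'management_interface', 'deploy_interface'])

    # A classic driver (e.g. 'pxe_ipmitool') hard-wires its deploy logic; with
    # a hardware type the deploy interface is one pluggable slot, so 'ansible'
    # can replace 'iscsi' or 'direct' without defining a whole new driver.
    node_hw = HardwareType(
        name='ipmi',
        power_interface='ipmitool',
        management_interface='ipmitool',
        deploy_interface='ansible')
    print(node_hw)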




2. Deploy ramdisk image to use.
The current job in ironic-staging-drivers does a small rebuild of the tinyipa
image during deployment. I'd like to avoid that as much as possible, so I
propose to fold the logic that lives there into the default build options and
scripts of the tinyipa build. This includes installing an SSH server, enabling
SSH access to the ramdisk, and some small mangling of files and file links.


++

A separate question is SSH keys. We could avoid baking them into the image and
generate them anew each time, but that would still require an image rebuild on
(each?) devstack run. Or we could generate them once, bake the public key into
the image and publish the private key to tarballs.o.o, so it could be re-used
by the IPA scripts and by jobs to build fresh images on merge and during tests.
There are certainly security considerations to such an approach, but as tinyipa
is targeted at testing (virtual) environments and not production, I suppose we
could probably neglect them.


Mmmm, I'm pretty sure somebody will try to use our published images in
production, even if we recommend against it. How long does the image repacking
take? It should not be hard to inject one file into an image.
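
(On injecting one file: tinyipa is a gzip-compressed cpio ramdisk, so repacking
really is cheap. Below is a rough sketch of what a job could do to bake a
public key in; the file names, the /home/tc/.ssh path (Tiny Core's default
user) and the permissions are assumptions, not the actual job code.)

    #!/usr/bin/env python3
    # Rough sketch: inject an SSH public key into a gzipped cpio ramdisk.
    import os
    import shutil
    import subprocess
    import tempfile

    def inject_pubkey(ramdisk_in, pubkey_path, ramdisk_out):
        workdir = tempfile.mkdtemp()
        try:
            # Unpack: gunzip the ramdisk and extract the cpio archive.
            with open(ramdisk_in, 'rb') as src:
                gunzip = subprocess.Popen(['gunzip', '-c'], stdin=src,
                                          stdout=subprocess.PIPE)
                subprocess.check_call(['cpio', '-idm'], stdin=gunzip.stdout,
                                      cwd=workdir)
                gunzip.stdout.close()
                gunzip.wait()

            # Drop the public key into authorized_keys (path is an assumption
            # about the tinyipa user; adjust for the image actually built).
            ssh_dir = os.path.join(workdir, 'home', 'tc', '.ssh')
            os.makedirs(ssh_dir, exist_ok=True)
            auth_keys = os.path.join(ssh_dir, 'authorized_keys')
            shutil.copy(pubkey_path, auth_keys)
            os.chmod(auth_keys, 0o600)

            # Repack: newc-format cpio stream, gzip-compressed again.
            with open(ramdisk_out, 'wb') as dst:
                find = subprocess.Popen(['find', '.', '-print0'],
                                        cwd=workdir, stdout=subprocess.PIPE)
                cpio = subprocess.Popen(['cpio', '--null', '-o', '-H', 'newc'],
                                        stdin=find.stdout, cwd=workdir,
                                        stdout=subprocess.PIPE)
                subprocess.check_call(['gzip', '-9'], stdin=cpio.stdout,
                                      stdout=dst)
                find.stdout.close()
                cpio.stdout.close()
                find.wait()
                cpio.wait()
        finally:
            shutil.rmtree(workdir)

    if __name__ == '__main__':
        # Hypothetical file names for illustration.
        inject_pubkey('tinyipa.gz', 'id_rsa.pub', 'tinyipa-ssh.gz')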




WDYT?

Another aspect of this is that (as we agreed) we would need to move all the
'imagebuild' folder content from the IPA repo to a separate repo, and devise a
way to use that repo in our devstack plugin.


I've started that, but I don't have time to continue. Any help is appreciated :)



I'm eager to hear your thoughts and comments.

Best regards,
--
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

