On 14 December 2013 09:50, Jay Pipes <jaypi...@gmail.com> wrote:
> Is this set in stone? In other words, is it a given that in order to
> create the seed undercloud, you need to use DIB to do it? Instead of an
> image that is pre-constructed and virsh'd into, what about constructing
> one or more LXC templates, starting a set of LXC containers for the
> various undercloud support services (db, mq, OpenStack services, etc),
> and installing those support services using config-mgmt-flavor-du-jour?
> Has this been considered as an option to DIB? (Sorry if I'm late to the
> discussion!) :)
Any no-frills spin-up-an-instance technology will work. We use virsh and
full images because that lets folk on Mac and Windows administrator
consoles bootstrap a datacentre without manually installing a Linux
machine Just Because. I'd be entirely open to any patches needed to make
running this via LXC/Docker etc. work. Note that you cannot mount iSCSI
volumes from within LXC, so Nova BareMetal (and Ironic) cannot deploy
from within LXC - you'd need to do some plumbing to permit that to work.
[The block device API needed to mount the SCSI target isn't namespaced.]

As far as building the seed via DIB goes - we have no alternative
codepaths today, but again, we're open to patches. The reason we use DIB
is that it's how we build the golden images for the undercloud and then
the overcloud, so we get to reuse all the work that goes into those
builds - the only difference is that rather than using Heat as a
metadata source, we provide a handcrafted JSON file which we insert into
the image at build time. This makes debugging a seed extremely close to
debugging a regular undercloud node (and since the migration path is to
scale the undercloud up and then remove the seed, having them be stamped
from the same cloth is extremely attractive). I'd want to keep that
consanguinity, I think - building a seed in a fundamentally different
way is more likely than not to lead to migration issues.

-Rob

--
Robert Collins <rbtcoll...@hp.com>
Distinguished Technologist
HP Converged Cloud
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
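[Editor's note: to make the handcrafted-metadata point above concrete, here is a
minimal sketch. The JSON keys and the `seed-stack-config` element name are
illustrative assumptions, not the actual TripleO schema; only the general shape
- write a JSON metadata file, sanity-check it, then bake it into the image with
diskimage-builder - reflects the workflow described in the mail.]

```shell
# Hypothetical sketch of the seed's handcrafted metadata file, which stands
# in for the metadata Heat would normally supply to an undercloud node.
# The key names below are made up for illustration.
cat > seed-metadata.json <<'EOF'
{
  "admin-password": "unset",
  "db-password": "unset",
  "rabbit-password": "unset"
}
EOF

# Sanity-check the file is valid JSON before it gets baked into the image.
python -m json.tool seed-metadata.json > /dev/null && echo "metadata OK"

# The build itself would then look something like this (assumed element name;
# not runnable without diskimage-builder and the TripleO elements installed):
#   disk-image-create -o seed vm seed-stack-config
```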