On Tue, Dec 24, 2013 at 4:28 PM, Clint Byrum <[email protected]> wrote:
> Excerpts from James Slagle's message of 2013-12-24 10:40:23 -0800:
>> In this approach, everyone uses the same undercloud vm image. In
>> order to make that work, there's a script to build the config drive
>> iso, and that is then used to make config changes at boot time to
>> the undercloud. Specifically, there's cloud-init data on the config
>> drive iso to update the virtual power manager user and ssh key, and
>> to set the user's ssh key in authorized_keys.
>>
>
> Is this because it is less work to build an iso than to customize an
> existing seed image? How hard would it be to just mount the guest image
> and drop the json file in it?
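Mounting the guest image and dropping the json file in wouldn't be
hard at all. With libguestfs it would be roughly the following (just a
sketch; the image name and destination path here are illustrative, not
where the seed actually reads its config from):

    # guestfish -i inspects the image and mounts its filesystems
    # automatically; upload then copies a local file into the image.
    # Image name and target path are placeholders.
    guestfish -a undercloud.qcow2 -i \
        upload /tmp/config.json /var/lib/config.json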
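For comparison, the config drive build boils down to something like
this (a sketch from memory rather than the actual script; the metadata
and user-data contents here are placeholders):

    # Stage the OpenStack config drive layout. cloud-init's
    # ConfigDrive datasource looks for a volume labeled "config-2".
    mkdir -p /tmp/config-drive/openstack/latest

    # Placeholder metadata; the real script writes out the virtual
    # power manager user, ssh key, etc.
    echo '{"uuid": "undercloud"}' \
        > /tmp/config-drive/openstack/latest/meta_data.json

    cat > /tmp/config-drive/openstack/latest/user_data <<EOF
    #cloud-config
    ssh_authorized_keys:
      - $(cat ~/.ssh/id_rsa.pub)
    EOF

    # Build the iso; this is the part that takes about a second.
    genisoimage -output /tmp/config-drive.iso -volid config-2 \
        -joliet -rock /tmp/config-drive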
It might take a little while longer to customize a built seed image,
but it would still most likely be under a minute. Building the iso
takes about a second. Either approach would be fine. I just chose the
config drive because it seemed more like the cloud-init way to
bootstrap an image that didn't have a dynamic runtime-provided
datasource. Of course, I ran into a few bugs in cloud-init while
testing out the config drive approach, so just modifying the image
with libguestfs or qemu-nbd probably would have been just as easy.

> Anyway I like the approach, though I generally do not like config drive.
> :)
>
>> >
>> > If I were trying to shrink devtest from 3 clouds to 2, I'd eliminate the
>> > undercloud, not the seed. The seed is basically an undercloud in a VM
>> > with a static configuration. That is what you have described but done
>> > in a slightly different way. I am curious what the benefits of this
>> > approach are.
>>
>> True, there's not a whole lot of difference between eliminating the
>> seed or the undercloud. You eliminate either one, then call your
>> first cloud whichever you want. To me, the seed has always seemed
>> short-lived: once you use it to deploy the undercloud, it can go
>> away (eventually, anyway). So that's why I am calling the first
>> cloud here the undercloud. Plus, since it will eventually include
>> Tuskar and deploy the overcloud, it seemed more in line with the
>> current devtest flow to call it an undercloud.
>>
>
> The more I think about it the more I think we should just take the three
> cloud approach. The seed can be turned off as soon as the undercloud is
> running, but it allows testing and modification of the seed to undercloud
> transfer, which is something we are going to need to put work into at
> some point. It would be a shame to force developers to switch gears and
> use something entirely different when they need to get into that.

Yeah, that certainly makes sense. Part of my motivation for not having
a seed is also the memory requirement on the host you're running
devtest on. I'm not sure 8GB is even enough anymore (I haven't tried a
full devtest run that recently), especially if you're using your main
development laptop with other stuff running. Being able to shut down
the seed after deploying the undercloud would definitely help, though.
I think there would be a couple of challenges with that for devtest:

- If you had to reboot the undercloud, I think you'd need the seed
  there for the undercloud's metadata.
- The seed vm is the only one in devtest with two network interfaces,
  default and brbm.
- The seed handles routing all the traffic for 192.0.2.0/24.

> Perhaps we could just use your config drive approach for the seed all
> the time. Then users can start with pre-built images, but don't have to
> change everything when they want to start changing said images.
>
> I'm not 100% convinced that it is needed, but I'd rather have one path
> than two if we can manage that and not drive away potential
> contributors.

Agreed, I'd like to see it as one path. Similar to how devtest offers
different options down that path today, these could be additional
options: to not have a seed (or to be able to just shut down your seed
vm), to use pre-built VMs, etc.

-- 
-- James Slagle
--

_______________________________________________
OpenStack-dev mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
