On Mon, Mar 10, 2014 at 05:26:27PM -0700, Steven Dake wrote:
> On 03/10/2014 04:32 PM, Sandro "red" Mathys wrote:
> >Just noticed that removing Python would obviously also mean no
> >heat-cfntools. Do we want to accept that (if we go with remove-python
> >at all)?
> >
> >Starting to wonder a bit how users are supposed to use Docker. What
> >other mechanisms do we have to actually run a container with Docker
> >once the image is deployed?
> 
> removing cloud-init, heat-cfntools, and the new TripleO related
> tools os-collect-config, os-apply-config and os-refresh-config may
> make sense for a *host* operating system, which I believe Atomic is
> targeting.
> 
> If these are removed from a guest operating system, the guest won't
> be able to function with TripleO, Heat, or anyone that depends on
> cloud-init.  Removing cloud-init support effectively kills any
> motivation for AWS adoption of a guest operating system that we may
> produce.
> 
> I am a bit confused at the scope because min-metadata-server was
> mentioned early on, but is unnecessary if the target of this OS is
> to only run on hosts.
> 
> Ideally a python run time would still be available to run
> virtualization platforms like OpenStack.  Such a bare-bones
> operating system would make a lot of sense, but I've copied a TripleO
> upstream developer (James) for his thoughts on atomic + ostree and
> its relationship to how TripleO handles continuous deployment
> through imaging.

I figured I'd reply here, even though Steve already covered some points in his
follow-up.

Part of the TripleO deployment workflow is building the images you need to do a
deployment. The image build process actually takes the Fedora (or
Ubuntu/RHEL/SUSE) cloud image as input and modifies it so that it works with
TripleO. So this is really more of an image customization than an image
build. Therefore, TripleO is less tied to what is *already* in the cloud image,
because whatever is not there, we can install, set up, etc. Having os-*-config on
the image wasn't a hard requirement for TripleO, but I believe it was for some
other Heat use cases, as Steve indicated.
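To picture the customization step, here's a toy sketch using plain directories as a stand-in for the image rootfs. The real thing operates on the actual qcow2 image via diskimage-builder elements; every path and file below is made up for illustration.

```shell
#!/bin/sh -e
# Stand-in for the downloaded Fedora cloud image rootfs:
mkdir -p cloud-image/etc cloud-image/usr/local/bin
echo "Fedora release 20 (Heisenbug)" > cloud-image/etc/fedora-release

# "Install" an os-*-config tool into the image tree. In real TripleO this
# is a package install performed by a diskimage-builder element, not a
# copy like this.
cat > cloud-image/usr/local/bin/os-refresh-config <<'EOF'
#!/bin/sh
echo "applying Heat metadata..."
EOF
chmod +x cloud-image/usr/local/bin/os-refresh-config

ls cloud-image/usr/local/bin   # -> os-refresh-config
```

The point is only that the input is a complete, working OS tree, and the customization layers extra tooling on top of it.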

For Fedora Atomic/rpm-ostree, I admit I only have a cursory understanding of
how that works.

Let me lay out what TripleO is aiming for, then try to correlate the two.

There are two image upgrade paths being pursued by TripleO. In both cases, the
end goal is to have your instances running with a read-only root partition
and the stateful data you need preserved mounted on a separate partition. In
TripleO's case, this is the ephemeral partition that is provided to instances.
"Ephemeral" is a bit of a misnomer at this point, because it's not very
ephemeral anymore: there are now patches landed in Nova to preserve it across
reboots/rebuilds of instances, so it survives an upgrade.
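Roughly, the partition layout on such an instance could look like the fstab fragment below. The device names and mount point are assumptions for illustration, not what TripleO actually uses.

```
# /etc/fstab -- illustrative only; devices and mount points are made up
/dev/vda1  /           ext4  ro,defaults  0 1   # read-only root
/dev/vdb   /mnt/state  ext4  rw,defaults  0 2   # "ephemeral" stateful data
```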

The first upgrade path is updating your image IDs in your Heat template and
then doing a "heat stack-update" on a deployed TripleO stack. Heat sees that
you're requesting a new image and triggers a Nova rebuild[1]. After the
reboot, the ephemeral partition is preserved for your stateful data. Then
there's some coalescing of services, migrations, scripts, etc., that get run by
os-refresh-config on boot to make sure everything is set up for the new
image you're now running.
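In template terms, the change is as small as bumping the image property on the server resource and re-running stack-update. The resource and parameter names below are invented for the example; only OS::Nova::Server and its "image" property are real Heat constructs.

```yaml
# Hypothetical fragment of a Heat template -- names are made up
resources:
  controller0:
    type: OS::Nova::Server
    properties:
      image: { get_param: controller_image_id }   # bump this ID to trigger the rebuild
      flavor: baremetal
```

Then something like "heat stack-update overcloud -f overcloud.yaml" with the new ID kicks off the rebuild path described above.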

The second path is for upgrades where you don't want to reboot or
rebuild the whole instance just to pick up a small change. You update the image ID
in the Heat template (somewhere, probably not the same spot as in the case
above). os-refresh-config, which actually runs every 5 minutes in the instance,
sees the Heat metadata change. It then acts on that change by pulling down
the new image from Glance, cracking it open, remounting root as rw,
rsyncing the changes to the root partition, remounting root as ro, etc. There's
also been talk about producing rsync diff blobs and just hosting those in
Glance for the instances to use. After the upgrade completes, the instance uses
a ConditionHandle back to Heat to signal "I'm done upgrading".

For ostree, I believe you can host the ostree repositories via http?  I'm
probably reaching here, but I suppose it might be possible for Nova to have a
rebuild implementation that used ostree, or even some type of Glance backend or
"image type" that used ostree. Similarly to how Heat is using image IDs to trigger
upgrades now, perhaps there could be something that added the new ostree ref to use
in the instance metadata. The instance sees that metadata and performs the
upgrade. Of course, you could also do the ostree repository stuff out of band
and just have it live alongside your OpenStack deployment. It'd be more of an
alternative implementation, and not really using TripleO at that point, since
you wouldn't be using OpenStack services directly to do the upgrades.

[1] 
http://docs.openstack.org/api/openstack-compute/2/content/Rebuild_Server-d1e3538.html

> 
> Regards,
> -steve
> 
> 
> >On Mon, Mar 10, 2014 at 8:21 AM, Colin Walters <walt...@verbum.org> wrote:
> >>- How does one activate the deployed product if extlinux is the active
> >>bootloader? The website has only instructions for GRUB (bls_import, etc).
> >>
> >>
> >>Ah, you probably hit this:
> >>https://bugzilla.gnome.org/show_bug.cgi?id=726007
> >Probably this, will try a workaround later. Thanks.
> >
> >>This is also related:
> >>https://bugzilla.gnome.org/show_bug.cgi?id=722845
> >>
> >>Basically to make ostree drive extlinux, the layout needs to look like this:
> >>https://github.com/cgwalters/rpm-ostree/blob/master/src/autobuilder/js/libqa.js#L359
> >>
> >>- How does one get rid of the 'traditional Fedora' once the product is
> >>active?
> >>
> >>
> >>Maybe something like:
> >>rpm -qal | while read line; do rm "$line" || rmdir "$line"; done
> >I'm pretty sure that's not a good way to go. Not sure if there's any
> >much better. We'll see once I had time to play around some more.
> >
> >>...figure both won't be necessary anymore once we can use Anaconda to
> >>install a product directly, but in the meantime it would be helpful for some
> >>testing and more generally help my understanding. :)
> >>
> >>
> >>Yep, Anaconda support will fix both.
> >Great, looking forward to that.
> >
> >-- Sandro
> >_______________________________________________
> >cloud mailing list
> >cloud@lists.fedoraproject.org
> >https://admin.fedoraproject.org/mailman/listinfo/cloud
> >Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct
> 
-- 
James Slagle