Michael,

----- "Michael H. Warfield" <m...@wittsend.com> wrote:
> I used to use Linux-vserver years and years ago, but when they broke IPv6
> support moving from 1.x to 2.x I was forced to abandon Linux-vserver and
> switch a number of VMs over to OpenVZ.  To this day IPv6 remains an
> "experimental patch" for Linux-vserver, and I see that question come up
> on their list periodically, so I couldn't migrate back there even if I
> wanted to.  That being said, IPv6 support in the OpenVZ venet device is
> nothing to brag about either, and I have had to strictly use the veth
> devices.
> 
> However...  There is a new kid on the block, depending on your
> requirements.  Linux Containers or LXC.  It still has a few rough edges
> and some differences with OpenVZ but has the big advantage that it's all
> in the mainline kernel (2.6.29 and above), so no more patches (yeah!),
> it is supported under libvirt, and the utilities are in the major
> cutting edge distros like Fedora and Ubuntu.  I found that with a couple
> of scripts, I could directly convert OpenVZ config files to LXC config
> files and start my old OpenVZ containers as a container under LXC with
> no further modification inside the container.  Other than a couple of
> initial test containers I was experimenting with, once I got my scripts
> settled down and tested, I migrated over 3 dozen VMs from OpenVZ to LXC
> in a single day, with none of the containers experiencing more than a
> minute or so of downtime (transfer time between hosts).  Because there
> were no changes in the containers themselves, I could migrate them back,
> if I needed to, just as fast.
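>
> For reference, the LXC side of that conversion ends up looking
> something like this.  This is a hand-written sketch with made-up names
> and addresses, not the literal output of my scripts, so treat it as
> illustrative only:
>
>     # /etc/lxc/vm101.conf -- roughly what an OpenVZ CT config maps to
>     lxc.utsname = vm101
>     lxc.tty = 4
>     lxc.rootfs = /srv/lxc/vm101         # was VE_PRIVATE / VE_ROOT
>     lxc.network.type = veth             # matches the OpenVZ veth device
>     lxc.network.flags = up
>     lxc.network.link = br0              # host bridge to attach to
>     lxc.network.name = eth0
>     lxc.network.ipv4 = 192.0.2.101/24   # was IP_ADDRESS
>     lxc.network.ipv6 = 2001:db8::101/64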
> 
> Because LXC requires 2.6.29 and OpenVZ is only available on 2.6.27 or
> earlier, obviously you can't run them on the same machine and kernel.
> 
> A lot of the OpenVZ developers and Linux-vserver developers have been
> contributing to the containers effort in the kernel.
> 
> Some of the rough edges:
> 
> 1) /proc/mounts shows mounts outside of the container (ugly but not
> fatal).  Fixed in git.
> 
> 2) Possible to break out of a container file system (related to #1
> above), much the way it's possible to break out of chrooted jails.
> Fixed in git by using pivot_root.  This is serious: if you have
> potential hostiles in a container, either don't use LXC yet or use the
> utilities from git.
> 
> 3) Halt and reboot of a container not working.  You have to manually
> shut down and restart the container from the host.  Being worked on
> right now.  I use a script that detects when there's only one process
> (init) left running in the container and the container runlevel is 0 or
> 6 to decide whether to shut it down or restart it (see the sketch after
> this list).  Ugly, but it works.
> 
> 4) There still seems to be a lot of development work going on in the
> kernel wrt checkpoint and restore.  Since I don't use that feature
> much, I haven't paid close attention, but LXC does support freezing
> and unfreezing containers.
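>
> For what it's worth, the watchdog script from #3 boils down to
> something like this.  This is an untested sketch from memory, and the
> cgroup mount point and rootfs path are assumptions you'd adjust for
> your own setup:
>
>     #!/bin/sh
>     # Stop a container at runlevel 0, restart it at runlevel 6.
>     NAME=$1
>     while sleep 10; do
>         # Is init the only task left in the container's cgroup?
>         [ "$(wc -l < /cgroup/$NAME/tasks)" -eq 1 ] || continue
>         # Read the container's runlevel from its own utmp.
>         RL=$(chroot /srv/lxc/$NAME /sbin/runlevel | awk '{print $2}')
>         case "$RL" in
>             0) lxc-stop -n "$NAME"; break ;;                    # halt
>             6) lxc-stop -n "$NAME"; lxc-start -n "$NAME" -d ;;  # reboot
>         esac
>     done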
> 
> Differences:
> 
> * LXC supports virtual consoles you can connect to and log into
> (lxc-console; see the command examples after this list).
> 
> * LXC does not (yet) support the equivalent of "vzctl enter" (under
> discussion - some possible patches).
> 
> * Does not have the same level of fine-grained resource control that
> OpenVZ offers with its user_beancounters (not a requirement for me);
> cgroups provides some resource controls, but not as many.
> 
> * Handles the bridge management for the eth interfaces automatically,
> so there's no need for extra config files on the host.
> 
> * You can not (yet) run a command via lxc-execute in an
> already-running container.  Where "vzctl exec" requires a running
> container, lxc-execute will instead start a new container and run a
> single command in it (reference back to the vzctl enter remarks).
> 
> * It looks like, if you wanted to really experiment, you could combine
> LXC with unionfs / funionfs to do something similar to the Linux-vserver
> "unify" feature, combining common binaries between containers into a
> common RO layer.  I haven't tried this just yet, but I will soon.
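>
> To make a few of those differences concrete, here is roughly what the
> commands look like (container names are made up, and the exact syntax
> may still shift while the tools are under heavy development):
>
>     lxc-console -n vm101                 # attach to a virtual console
>     lxc-execute -n tmp01 /bin/sh         # starts a NEW container for the command
>     vzctl exec 101 uptime                # OpenVZ: runs inside the RUNNING container
>     lxc-cgroup -n vm101 cpu.shares 512   # set a cgroup resource value
>     lxc-cgroup -n vm101 cpu.shares       # read it back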
> 
> Primary disadvantage to LXC is that the utilities are at 0.6.4 from
> SourceForge and 0.6.3 from Fedora, and they are under very active
> development and change.  Features and facilities are still subject to
> discussion and change, and it's not fully mature on that level.  I don't
> know how long that will take, but I wouldn't use anything less than
> what's in their git repo right now, or 0.6.5 or higher when it comes
> out.
> 
> Sooo...  If you WANT to run a newer leading edge distro like Ubuntu or
> Fedora for your host system and you can deal with those differences, LXC
> MIGHT be an option.  If you want to stick with an LTS distro like RHEL
> or CentOS on the host, or you need something that works just like
> OpenVZ, then probably not.  There might be some complex configurations
> of options and devices that will not migrate properly.  All you can do
> is test and report any problems.  I am running some CentOS guests in
> containers on Fedora 11 and Fedora 12 hosts.
> 
> Exactly.  Which is why there's a burning need to get to a mainstream
> kernel.  Containers, namespaces, and cgroups in the kernel have matured
> to the point where they are eminently usable.  OpenVZ should be able to
> start taking advantage of them directly and begin to shrink the kernel
> patch, if not eliminate it entirely.  Linux-vserver seems to already
> be doing some of this, taking advantage of native namespaces and
> cgroups.
> 
> I can't speak for the developers here, but I would not be surprised if
> this were a real reason for some of the lack of recent progress on newer
> kernels.  Why invest the effort at all if you are going to be able to
> take advantage of mainline features in the near future?  Skip the
> transition period and get ready for the big jump.  Better to organize
> and prepare for when it reaches that level of maturity.  I would like to
> see OpenVZ running on a recent Linux kernel just using the whole cgroup
> and namespaces facility, even if not all of the granular
> user_beancounters are fully supported (and they may never be fully
> supported to that degree of granularity).
> 
> If I had the maturity and stability of the OpenVZ utilities running on
> the mainline kernel using namespaces and cgroups and no custom patch,
> that would be my ideal combination right now.

Wow, I'm really glad you gave the overview of LXC's current status.  I am
constantly asked about it and have yet to find a good source of information.
I guess the mainline LXC developers have a mailing list, but I was under the
impression that it would be full of implementation-type discussions, so I
haven't joined it.  Your posting is the most informative I've seen to date.

Previously I've only seen:

IBM DeveloperWorks (from last February and quite outdated now)
http://www.ibm.com/developerworks/linux/library/l-lxc-containers/

A few rather brief postings by one Fedora Planet blogger (who refers to you)
http://prefetch.net/blog/index.php/2009/06/21/installing-lxc-containers-on-fedora-hosts/

A webpage on OpenSUSE 
http://en.opensuse.org/LXC

I'd really like to see a practical guide that applies to the distros you
mentioned... but as you said, the code is under heavy development.  Ideally
LWN would write a nice article about this.  I'll email them and give them a
link to this thread to see if they'd be interested.

Anyway, for me the main reasons I like OpenVZ are the ease of install and use
(especially on my preferred distros, RHEL/CentOS), the resource management
features, and the checkpointing and migration features.  I have a lot of
respect for Linux-VServer's Unification feature and the work they do to adapt
to newer mainline kernels.  And now, with the information you have provided
about LXC... it just makes me wish we could wrap all three up into a matured
LXC in the mainline, with the added benefit of KVM in mainline as well.  Of
course we'd need a mature tool that could manage all containers and KVM VMs
in a sane way, and yes, libvirt / virsh seem to be what will evolve into
that... although I'd love to see Kir pick up the fork of vzctl with LXC
support that he abandoned a while back for lack of time.

I think I've seen one or two blog postings somewhere where people were using
LXC containers as wrappers around KVM VMs so they could apply cgroup resource
management to them.  Sorry for not providing the links to those as I don't
have them handy... and can't seem to engage the google-fu.

I'd love to give LXC a try... and I may do so using your recommendations but 
the problems you covered still apply:

1) It is a big moving target and will continue to be so for some time

2) Lack of features, particularly in resource controls (possible with manual
cgroup tools? see the sketch after this list)

3) The management tool immaturity because of #1

4) The needed distros (Fedora or Ubuntu)... their release cycles are too
rapid, making them inappropriate for server use and medium- to long-term
deployments
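
As for the "manual cgroup tools" in #2, I'm picturing something like this (a
guess on my part -- the mount point and container name are hypothetical, and
LXC's cgroup layout may differ on a given system):

    mount -t cgroup cgroup /cgroup     # if not already mounted
    cat /cgroup/vm101/tasks            # PIDs running in the container
    echo 536870912 > /cgroup/vm101/memory.limit_in_bytes  # cap RAM at 512 MB
    echo 512 > /cgroup/vm101/cpu.shares                   # relative CPU weight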

While it is exciting that LXC is somewhat usable now... I still think it'll be 
at least two years before those four points are resolved... but that is just a 
guess.  I certainly hope it takes less time than that.

Mike, you are the closest thing I've found to an LXC "expert" and I'd love to 
talk with you more about LXC and possibly do an interview with you for my 
website (MontanaLinux.org) if you'd be interested.

My friends were asking me if I was going to do a presentation at LFNW this
year.  The last two years I've done one on OpenVZ and didn't want to do yet
another one on OpenVZ.  You appear to live in Georgia, so I doubt you'd be
interested in going to the state of Washington for a Linux conference to give
a presentation while footing the expenses yourself.  Maybe I could learn
enough about LXC by the end of April to give it a go.  Anyway, thanks again
for the information!

TYL,
-- 
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]