On 01/04/2011 06:52 AM, Daniel Lezcano wrote:
> On 01/04/2011 09:36 AM, Rob Landley wrote:
>> I'm attempting to write a simple HOWTO for setting up a container with
>> LXC. Unfortunately, console handling is really brittle and the only
>> way I've gotten it to work is kind of unpleasant to document.
>>
>> Using lxc 0.7.3 (both in debian sid and built from source myself), I
>> can lxc-create a container, and when I run lxc-start it launches init
>> in the container. But the console is screwy.
>>
>> If my init program is just a command shell, the first key I type
>> crashes lxc-start with an I/O error. (Wrapping said shell with a
>> script to redirect stdin/stdout/stderr to various /dev character
>> devices doesn't seem to improve matters.)
>>
>> Using the busybox template and the busybox-i686 binary off of
>> busybox.net, it runs init and connects to the various tty devices, and
>> this somehow prevents lxc-start from crashing. But if I "press enter
>> to activate this console" like it says, the resulting shell prompt is
>> completely unusable. If I'm running from an actual TTY device, then
>> some of the keys I type go to the container and some don't. If my
>> console is connected to a PTY when I run lxc-start (such as when I ssh
>> in and run lxc-start from the ssh session), _none_ of the characters I
>> type go to the shell prompt.
>>
>> To get a usable shell prompt in the container, what I have to do is
>> run lxc-start in one window, ssh into the server to get a fresh
>> terminal, and then run lxc-console in that second terminal. That's
>> the only magic sequence I've found so far that works.

> Hmm, right. I was able to reproduce the problem.

I've got two more bugs. (Here's another half-finished documentation file, attached, which may help with the reproduction sequence.)

I'm running a KVM instance to host the containers, and I've fed it an e1000 interface as eth0 with the normal -net user, and a tun/tap device on eth1 with 192.168.254.1 associated at the other end.

Inside KVM, I'm using this config to set up a container:

  lxc.utsname = busybox
  lxc.network.type = phys
  lxc.network.flags = up
  lxc.network.link = eth1
  #lxc.network.name = eth0

And going:

  lxc-start -n busybox -f busybox.conf

Using that (the last line of the config intentionally commented out for the moment), I get an eth1 in the container that is indeed the eth1 on the host system (which is the tun/tap device I fed to KVM as a second e1000 device). That's the non-bug behavior.

Bug #1: If I exit that container, eth1 vanishes from the world. The container's gone, but the interface doesn't reappear on the host. (This may be related to the fact that the only way I've found to kill a container is to run "killall -9 lxc-start". For some reason a normal kill of lxc-start is ignored. However, this still shouldn't leak kernel resources like that.)
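
(A reproduction sketch, in case it helps; this assumes the busybox.conf above and runs on the KVM host:

  ifconfig eth1                         # present before the container starts
  lxc-start -n busybox -f busybox.conf
  # ...then, from a second terminal:
  killall -9 lxc-start
  ifconfig eth1                         # now "Device not found", and it never comes back
)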

Bug #2: When I uncomment that last line of the above busybox.conf, telling it to move eth1 into the container but call it "eth0" in there, suddenly the eth0 in the container gets entangled with the eth0 on the host, to the point where DHCP gives it an address. (Which is 10.0.2.16, so it's talking to the virtual LAN that only the host's eth0 should have access to, but using a different MAC address. Oddly, the host's eth0 still seems to work fine, and the two IP addresses can ping each other across the container interface.)
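
(An easy way to watch the entanglement happen, assuming busybox's udhcpc inside the container; 10.0.2.15 is QEMU's usual first user-net lease, held here by the host's eth0:

  # inside the container:
  udhcpc -i eth0    # pulls a lease from the host's -net user LAN (10.0.2.16 in my run)
  ifconfig eth0     # confirms the address
  ping 10.0.2.15    # ...and the host's eth0 answers across the container interface
)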

This is still using the most recent release version.


The attached html file is a long drawn-out reproduction sequence for
this.

I tried downloading lxc-git to see if this is already fixed, but
running "autoconf" doesn't seem to want to produce a ./configure file
for me. ("configure.ac:8: error: possibly undefined macro:
AM_CONFIG_HEADER") I'm really not an autoconf expert (the whole thing
is just a horrible idea at the design level), so I have no idea what
I'm doing wrong there.

> Is automake installed on your system? Maybe the version is too old...

# aptitude show automake
Package: automake
State: installed
Automatically installed: yes
Version: 1:1.11.1-1
...

It's what "debian sid" installs by default when you ask for automake.
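
(For what it's worth, running the full autotools chain instead of bare autoconf usually sidesteps the "possibly undefined macro" error, since aclocal is what pulls in the AM_* macros. A sketch, from the top of the lxc-git checkout:

  autoreconf -i    # runs aclocal, autoconf, automake in the right order
  ./configure
  make

No idea whether that's the intended bootstrap procedure for this tree, though.)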

Rob

Last time, we set up a three layer container test environment:

  • Laptop - the host system running on real hardware (my Ubuntu laptop).

  • KVM - a virtual debian Sid system running under KVM.

  • Container - a simple busybox-based system running in a container.

So "Laptop" hosts "KVM" which hosts "Container". This lets us reconfigure and reboot the container host (the KVM system) without screwing up our real host environment (the Laptop system).

We ended with a shell prompt inside a container. Now we're going to set up networking in the container, with different routing than the KVM system so the Container system and KVM system have different views of the outside world.

LXC supports several different virtual network types, listed in the lxc.conf man page: veth joins interfaces together using Linux's ethernet bridging support (and the ebtables subsystem), macvlan sets up a virtual interface that selects packets by MAC address, and vlan sets up a virtual 802.1q interface that selects packets by VLAN tag.

The other two networking options LXC supports are "empty" (just the loopback interface), and "phys" to move one of the host's ethernet interfaces into the container (removing it from the host system).
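
For comparison, a veth config fragment would look something like this (a sketch, assuming you've already set up an ethernet bridge called br0 on the host; we won't be using it here):

  lxc.network.type = veth
  lxc.network.link = br0
  lxc.network.flags = up
  lxc.network.name = eth0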

We're going to add a second ethernet interface to the KVM system, and use the "phys" option to move it into the container.

Step 1: Add a TAP interface to the Laptop.

The TUN/TAP subsystem creates a virtual ethernet interface attached to a process. (A TUN interface allows a userspace program to read/write IP packets, and a TAP interface works with ethernet frames instead.) For details, see the kernel TUN/TAP documentation.

We're going to attach a TAP interface to KVM, to add a second ethernet interface to the KVM system. Doing so requires root access on the laptop, but we can use the "tunctl" program (from the "uml-utilities" package) to create a new TUN/TAP interface and then hand it over to a non-root user (so we don't have to run KVM as root).

Run this as root:

# Replace "landley" with your username
tunctl -u landley -t kvm0
ifconfig kvm0 192.168.254.1 netmask 255.255.255.0
echo 1 > /proc/sys/net/ipv4/ip_forward

The above commands last until the next time you reboot your Laptop system, at which point you'll have to re-run them. They associate the address 192.168.254.1 with the TAP interface on the Laptop host, and tell the Laptop to route packets between interfaces.
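
If retyping them every boot gets old, one option is to park them in a boot script. A sketch, assuming a Debian-style /etc/rc.local and the same "landley" user:

#!/bin/sh
# Recreate the TAP interface for KVM after each reboot of the Laptop.
tunctl -u landley -t kvm0
ifconfig kvm0 192.168.254.1 netmask 255.255.255.0
echo 1 > /proc/sys/net/ipv4/ip_forward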

If you want to remove the tun/tap interface from the host (without rebooting), the command is:

tunctl -d kvm0

Step 2: Launch KVM with two ethernet interfaces.

We need to reboot our KVM system, still using the kernel and root filesystem we built last time but this time specifying two ethernet interfaces. The first is still eth0, masqueraded through a virtual 10.0.2.x LAN (for use by the KVM system itself), and the other is a TAP device connected directly to the host (for use by the container).

To do this, we append a couple new arguments to the end of the previous KVM command line:

kvm -m 1024 -kernel arch/x86/boot/bzImage -no-reboot -hda ~/sid.ext3 \
  -append "root=/dev/hda rw panic=1" \
  -net nic,model=e1000 -net user,net=10.0.2.0/24 -redir tcp:9876::22 \
  -net nic,model=e1000 -net tap,ifname=kvm0,script=no

The first "-net nic" still creates an e1000 interface as KVM's eth0, the "-net user" plugs that interface into the masqueraded 10.0.2.x LAN, and -redir forwards port 9876 of the laptop's loopback to port 22 on that interface. What's new is the second "-net nic" which adds another e1000 interface (eth1) to KVM, and "-net tap" which connects that interface to the TUN/TAP device we just created on the Laptop.
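
Once the KVM system boots, it's worth a quick sanity check that both interfaces showed up (exact output varies):

# inside the KVM system:
ifconfig -a
# expect both eth0 and eth1 listed; eth1 won't have an IP address yet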

Step 3: Set up a new container in the KVM system.

This time we're using a more complex LXC config file with the "phys" network type, telling it to move the host's eth1 into the container as "eth0". So ssh into the KVM system, cd into the directory containing the static "busybox" binary, and as root run:

cat > busybox.conf << EOF
lxc.utsname = busybox
lxc.network.type = phys
lxc.network.flags = up
lxc.network.link = eth1
lxc.network.name = eth0
EOF

PATH=$(pwd):$PATH lxc-create -f busybox.conf -t busybox -n busybox
lxc-start -n busybox

And in a separate terminal:

lxc-console -n busybox
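
Once you're at a shell prompt inside the container, give its eth0 an address on the TAP interface's network and route out through the Laptop's end. A sketch; the container address 192.168.254.2 is an arbitrary pick from that subnet:

# inside the container:
ifconfig eth0 192.168.254.2 netmask 255.255.255.0 up
route add default gw 192.168.254.1
ping 192.168.254.1    # the Laptop's end of the TAP interface

This gives the container a different view of the outside world than the KVM system, which still routes through the masqueraded 10.0.2.x interface.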
NFS export: to mount the export (served by the unfsd command below) from a client:

sudo mount -o port=4711,mountport=4711,mountvers=3,nfsvers=3,nolock,tcp 10.24.29.12:/home/landley/nfs /mnt

Let's start with the network configuration file.  The man page for "lxc.conf"
describes the file format.  We're going to move a physical interface (eth1)
from the host into the container.  This will remove it from the host's
namespace, and make it appear only in the container.

  cat > container.conf << EOF
  lxc.utsname = container
  lxc.network.type = phys
  lxc.network.flags = up
  lxc.network.link = eth1
  EOF
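
After lxc-start brings up a container with this config, eth1 should be gone from the host's namespace until the container exits; a quick check from the host side (hypothetical output):

  ifconfig eth1
  # eth1: error fetching interface information: Device not found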


The matching server side uses the userspace unfs3 server (unfsd), serving NFS and MOUNT on port 4711 over TCP without registering with the portmapper:

./unfsd -u -d -e $(pwd)/exports -n 4711 -m 4711 -p -t -l 10.24.29.12
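
The -e argument points at an exports file in the standard /etc/exports format; a minimal sketch matching the path above (the rw option and lack of a client restriction are assumptions):

cat > exports << EOF
/home/landley/nfs (rw,no_root_squash)
EOF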