On 01/05/2011 08:53 AM, Rob Landley wrote:
> On 01/04/2011 06:52 AM, Daniel Lezcano wrote:
>> On 01/04/2011 09:36 AM, Rob Landley wrote:
>>> I'm attempting to write a simple HOWTO for setting up a container with
>>> LXC. Unfortunately, console handling is really really brittle and the
>>> only way I've gotten it to work is kind of unpleasant to document.
>>>
>>> Using lxc 0.7.3 (both in debian sid and built from source myself), I
>>> can lxc-create a container, and when I run lxc-start it launches init
>>> in the container. But the console is screwy.
>>>
>>> If my init program is just a command shell, the first key I type will
>>> crash lxc-start with an I/O error. (Wrapping said shell with a script
>>> to redirect stdin/stdout/stderr to various /dev character devices
>>> doesn't seem to improve matters.)
>>>
>>> Using the busybox template and the busybox-i686 binary off of
>>> busybox.net, it runs init and connects to the various tty devices, and
>>> this somehow prevents lxc-start from crashing. But if I "press enter
>>> to activate this console" like it says, the resulting shell prompt is
>>> completely unusable. If I'm running from an actual TTY device, then
>>> some of the keys I type go to the container and some don't. If my
>>> console is connected to a PTY when I run lxc-start (such as if I ssh
>>> in and run lxc-start from the ssh session), _none_ of the characters I
>>> type go to the shell prompt.
>>>
>>> To get a usable shell prompt in the container, what I have to do is
>>> lxc-start in one window, ssh into the server to get a fresh terminal,
>>> and then run lxc-console in that second terminal. That's the only
>>> magic sequence I've found so far that works.
>>
>> Hmm, right. I was able to reproduce the problem.
>
> I've got two more.  (Here's another half-finished documentation file, 
> attached, which may help with the reproduction sequence.)
>
> I'm running a KVM instance to host the containers, and I've fed it an 
> e1000 interface as eth0 with the normal -net user, and a tun/tap 
> device on eth1 with 192.168.254.1 associated at the other end.
>
> Inside KVM, I'm using this config to set up a container:
>
>   lxc.utsname = busybox
>   lxc.network.type = phys
>   lxc.network.flags = up
>   lxc.network.link = eth1
>   #lxc.network.name = eth0
>
> And going:
>
>   lxc-start -n busybox -f busybox.conf
>
> Using that (last line of the config intentionally commented out for 
> the moment) I get an eth1 in the container that is indeed the eth1 on 
> the host system (which is a tun/tap device I fed to kvm as a second 
> e1000 device).  That's the non-bug behavior.
>
> Bug #1: If I exit that container, eth1 vanishes from the world.  The 
> container's gone, but it doesn't reappear on the host.  (This may be 
> related to the fact that the only way I've found to kill a container 
> is to run "killall -9 lxc-start".  For some reason a normal kill of 
> lxc-start is ignored.  However, this still shouldn't leak kernel 
> resources like that.)

It is related to the kernel behavior: a netdev with rtnl_link_ops is 
automatically deleted when its network namespace is destroyed. The full 
answer is in net/core/dev.c (default_device_exit).
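The per-device decision can be sketched like this (a paraphrase of the logic in default_device_exit, with made-up flag names for illustration; not the literal kernel source):

```shell
# Paraphrase of what the kernel does with each device when a network
# namespace exits (see default_device_exit in net/core/dev.c).
# The two "flags" below are illustrative stand-ins, not real interfaces.
fate_on_netns_exit() {
    netns_local=$1        # NETIF_F_NETNS_LOCAL set (e.g. loopback)?
    has_rtnl_link_ops=$2  # virtual device (veth, macvlan, vlan, ...)?
    if [ "$netns_local" = yes ]; then
        echo "left alone"
    elif [ "$has_rtnl_link_ops" = yes ]; then
        echo "unregistered: deleted along with the namespace"
    else
        echo "moved back to init_net"
    fi
}

fate_on_netns_exit no yes   # veth end: deleted with the namespace
fate_on_netns_exit no no    # plain physical NIC: pushed back to init_net
```

So devices with rtnl_link_ops (veth, macvlan, ...) go away with the namespace, while plain physical devices are pushed back to the initial namespace.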


> Bug #2: When I uncomment that last line of the above busybox.conf, 
> telling it to move eth1 into the container but call it "eth0" in 
> there, suddenly the eth0 in the container gets entangled with the eth0 
> on the host, to the point where dhcp gives it an address.  (Which is 
> 10.0.2.16.  So it's talking to the VPN that only the host's eth0 
> should have access to, but it's using a different mac address.  Oddly, 
> the host eth0 still seems to work fine, and the two IP addresses can 
> ping each other across the container interface.)
>
> This is still using the most recent release version.

What is the kernel version?
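If it helps, the output of a few read-only commands from both the host and the container would narrow it down (this assumes iproute2 is available in the guest):

```shell
# Hypothetical diagnostic session; nothing here changes any state.
uname -r                           # exact kernel version
ip link show 2>/dev/null || true   # which interfaces exist, and their MACs
ip addr show 2>/dev/null || true   # which interface got the 10.0.2.x lease
```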

>>>
>>> The attached html file is a long drawn-out reproduction sequence for
>>> this.
>>>
>>> I tried downloading lxc-git to see if this is already fixed, but
>>> running "autoconf" doesn't seem to want to produce a ./configure file
>>> for me. ("configure.ac:8: error: possibly undefined macro:
>>> AM_CONFIG_HEADER") I'm really not an autoconf expert (the whole thing
>>> is just a horrible idea at the design level), so have no idea what I'm
>>> doing wrong there.
>>
>> Is automake installed on your system? Maybe the version is too old...
>
> # aptitude show automake
> Package: automake
> State: installed
> Automatically installed: yes
> Version: 1:1.11.1-1
> ...
>
> It's what "debian sid" installs by default when you ask for automake.
>
> Rob
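
Regarding the AM_CONFIG_HEADER error: that macro is supplied by automake 
through aclocal, so running bare autoconf never sees its definition. 
Bootstrapping with autoreconf, which runs aclocal, autoheader, automake 
and autoconf in the right order, should give you a ./configure. A sketch, 
assuming you run it from the top of the lxc-git checkout:

```shell
# Sketch of bootstrapping an autotools tree such as lxc-git.  Wrapped
# in a function so it only runs when invoked from the source directory.
bootstrap() {
    autoreconf -fi &&    # aclocal + autoheader + automake + autoconf
    ./configure &&
    make
}
# From the top of the checkout, run: bootstrap
```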



_______________________________________________
Lxc-devel mailing list
Lxc-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-devel
