On Thu 2013-10-24 (15:11), Serge Hallyn wrote:
If your kernel is new enough (check whether /proc/self/ns/mnt exists)
you could lxc-attach into the container with the -e flag to keep
elevated privileges, and do the remount.
Ubuntu 12.04:
root@vms3:~# l /proc/self/ns/mnt
l: /proc/self/ns/mnt - No such file or directory
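For completeness, the check Serge describes can be wrapped in a small guard; the container name c1 and the remount command are hypothetical, a sketch only:

```shell
#!/bin/sh
# Guard on the mount-namespace support that lxc-attach -e needs.
has_mnt_ns() {
    [ -e /proc/self/ns/mnt ]
}

if has_mnt_ns; then
    echo "mnt namespace supported"
    # hypothetical container "c1"; -e keeps elevated privileges:
    # lxc-attach -n c1 -e -- mount -o remount,rw /
else
    echo "kernel too old: /proc/self/ns/mnt missing" >&2
fi
```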
Is it possible to create btrfs snapshots inside a container?
Or should one avoid the combination of btrfs and lxc altogether?
--
Ullrich Horlacher Informationssysteme und Serverbetrieb
Rechenzentrum IZUS/TIK E-Mail: horlac...@tik.uni-stuttgart.de
Universitaet Stuttgart
On Fri 2012-11-09 (08:31), Serge Hallyn wrote:
Since you have a real bridge, it is better to keep using br0.
I have just discovered that br0 is still available!
I was mistaken in thinking that only lxcbr0 and virbr0 were selectable.
In fact, edit /etc/default/lxc to set USE_LXC_BRIDGE=false to
Prologue: I have run LXC successfully for nearly 2 years on Ubuntu 10.04,
using veth / br0. Every container has its own IP address, no NAT. I run
production services like http://fex.rus.uni-stuttgart.de/ on it, rock solid.
I have now set up a second server with Ubuntu 12.04 and a lot of
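For reference, the bridged-veth setup described here corresponds to a container network config roughly like this (the address is the container example used later in this archive; treat it as illustrative):

```
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.ipv4 = 129.69.8.6/24
```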
On Mon 2012-10-22 (18:09), swair shah wrote:
I was wondering if anyone is using lxc in production. And if you don't mind
disclosing, for what purpose do you use it in production?
fex.rus.uni-stuttgart.de is an LXC container and has run smoothly for nearly 2
years. It delivers more than 300 MB/s for
On Mon 2012-10-22 (14:53), Stéphane Graber wrote:
All in all, that's somewhere around 300-400 containers I'm managing
How do you handle a host (hardware) failure?
--
Ullrich Horlacher Informationssysteme und Serverbetrieb
Rechenzentrum IZUS/TIK E-Mail:
Are there recommendations on cluster filesystems?
I have several hosts with fibre channel. They should use a common
filesystem to have a half-automatic fail-over.
--
Ullrich Horlacher Informationssysteme und Serverbetrieb
Rechenzentrum IZUS/TIK E-Mail:
On Mon 2012-10-08 (10:32), Papp Tamas wrote:
On 10/08/2012 09:47 AM, Ulli Horlacher wrote:
Are there recommendations on cluster filesystems?
I have several hosts with fibre channel. They should use a common
filesystem to have a half-automatic fail-over.
I think you should be able
On Mon 2012-10-08 (17:16), Papp Tamas wrote:
On 10/08/2012 05:00 PM, Ulli Horlacher wrote:
should - I prefer recommendations by experience :-)
I have tried gluster myself and it is HORRIBLY slow.
If you are interested, try Moosefs. I have quite good experiences with
it, however
On Mon 2012-05-14 (18:25), Papp Tamas wrote:
Error message?
No error message. Just shows host's uptime.
Return value?
Do you have /dev/pts ?
--
Ullrich Horlacher Server- und Arbeitsplatzsysteme
Rechenzentrum E-Mail: horlac...@rus.uni-stuttgart.de
On Mon 2012-05-14 (11:57), Papp Tamas wrote:
I have now written a special uptime command to be placed in the container's
PATH:
#!/usr/bin/perl -w
$uptime = `/usr/bin/uptime`;
@s = lstat '/dev/pts' or die $uptime;
$s = time - $s[10];
if ($s > 172800) {
$d = int($s/86400);
On Fri 2012-05-11 (02:54), wrote:
I have tried starting a Linux container (lxc 0.7.5) on Lucid and Red Hat 6,
but both failed (it succeeded on Ubuntu Precise).
The procedure I used is fairly standard:
1. using the lxc-ubuntu script (not for Red Hat 6) to prepare the filesystem
and then
On Fri 2012-05-04 (00:05), Samuel Maftoul wrote:
Maybe the uptime of the container's init process will show you the uptime of
the container (so it is accessible from within the container).
init does not provide its start time
I have now written a special uptime command to be placed in the containers
On Sun 2012-03-04 (13:49), Xavier Garcia wrote:
Is there any way to execute a command inside a running container from the host?
I have written a small daemon lxc-cmdd running inside the container,
which the host-script lxc can connect to.
Example:
root@zoo:~# lxc -l
container disk
On Fri 2012-03-02 (09:02), Daniel Baumann wrote:
i'm not claiming btrfs is there yet, however, if you're using btrfs, you
should at least make sure to use something remotely up2date, say 3.2.x.
SLES11 SP2 was released this week with a 3.0 kernel and comes with btrfs.
Same b(*CENSORED*)t as
On Tue 2011-10-18 (14:54), Papp Tamas wrote:
Is it possible to limit the maximum number of processes per container?
I have the same problem. A user has killed the host (and therefore all
containers) with a simple shell command: :(){ :|: };:
(Kids, don't try this at home!)
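Kernels of that era had no per-container process-count controller; one stopgap against such a fork bomb is a per-user process limit via RLIMIT_NPROC. A sketch, with an illustrative limit:

```shell
#!/bin/sh
# Cap the number of processes this shell (and its children) may create.
# A fork bomb started under this limit hits the cap and fork() starts
# failing with EAGAIN, instead of exhausting the whole host.
ulimit -u 500   # illustrative value; tune to the workload
ulimit -u       # show the limit now in effect
```

For a permanent per-user limit the same value would go into /etc/security/limits.conf as an nproc entry.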
--
Ullrich
On Tue 2012-02-21 (17:52), Martin Konečný wrote:
I have experience successfully creating an LXC container in many system
configurations except for when it is inside of a Virtual Machine (Oracle
VirtualBox).
More specifically, I have problems with networking.
Here is my config:
On Mon 2012-01-23 (20:12), U.Mutlu wrote:
In the container some syslog lines are garbled:
Jan 23 18:41:41 my1 kernel: imklog 5.8.6, log source = /proc/kmsg started.
Jan 23 18:41:41 my1 rsyslogd: [origin software=rsyslogd swVersion=5.8.6
x-pid=323 x-info=http://www.rsyslog.com;] start
Jan
On Wed 2012-01-04 (14:18), Whit Blauvelt wrote:
That's enough to get the containers to start from the 0.7.5 lxc-start. But
it leaves the 0.7.5 lxc-console totally unhappy. It's obviously looking at
things differently than lxc-start and lxc-info:
# lxc-info -n xfer
state: RUNNING
pid:
On Fri 2012-01-06 (12:08), Whit Blauvelt wrote:
If 0.7.5 doesn't fully work on 2.6.32, and if backports are available, it's
too bad neither is mentioned at http://wiki.debian.org/LXC, which is where
http://lxc.sourceforge.net/ links for Debian-specific info.
I have had similar problems on
On Thu 2011-12-29 (19:29), Martin Konečný wrote:
seem to have a problem. Before the break if I started a container using
lxc-start, boot information would appear on the terminal and I would even
be able to log in.
lxc-start never provides a console login; you have to use lxc-console
--
On Mon 2011-12-26 (18:25), Wai-kit Sze wrote:
What are the differences between application containers and system
containers? Both of them can start a command directly.
An application container starts one single program.
A system container starts (boots) a whole linux system.
--
Ullrich
On Sat 2011-12-17 (14:38), DTK wrote:
I am trying to avoid having to install NTP in every container, so I
came up with following idea.
Main server cron job
* 0 * * * /sbin/hwclock -w
Each container has following cron job
10 0 * * * /sbin/hwclock -s
However, the
On Tue 2011-12-13 (18:43), Zhu Yanhai wrote:
My concern is that deploying Btrfs only for COW is a really heavy solution
for this... Is Btrfs ready for production systems?
I have tested Btrfs with kernel 2.6.38: copying 30 GB with rsync corrupted
the file system completely and the kernel ran into an
On Tue 2011-11-29 (09:40), Patrick Kevin McCaffrey wrote:
allows me to SSH into my running container. However, lxc-console is
still unresponsive. When I run the command (lxc-console -n
container_name) it says press ctrl+a q to exit but anything I do after
entering the initial command
On Mon 2011-11-28 (18:40), Roberto Aloi wrote:
I'm currently dealing with a pool of LXC containers which I use for
sand-boxing purposes. Sometimes, when re-creating one of the
containers in the pool I obtain the following error:
Error: CREATE CONTAINER container-5
debootstrap is
On Mon 2011-11-28 (18:40), Roberto Aloi wrote:
When I say customizable I mean that I should be able to specify a port
number which a server running inside one of the containers should listen
on, and this number should be different for each container. Would this be
feasible via LXC?
This has
On Mon 2011-11-28 (19:09), Roberto Aloi wrote:
I like the idea. My only concern is how customizable a clone is (see
my second question).
With aptitude: easy.
Besides this you can extend my approach by setting up different templates
(I have no need for this).
I need to start servers
On Sun 2011-11-13 (15:06), Arie Skliarouk wrote:
Where is the command lxc located? It is not provided by the
lxc-0.7.5.tar.gz...
It's all in http://fex.rus.uni-stuttgart.de/lxc-ubuntu
Ah, I see it now. It requires some daemon lxc-cmdd to be running on the
guest. Where do I take
On Thu 2011-11-10 (14:29), Arie Skliarouk wrote:
My mistake, this was possible with ubuntu 8.10 based containers and is no
longer possible with 10.04 containers. Not related to the recent changes.
Ubuntu 8.10 uses init whereas Ubuntu 10.04 uses upstart.
Still, how can I gracefully stop the
On Thu 2011-10-27 (11:17), Arie Skliarouk wrote:
I tried that and now the container does not start at all. I traced the
problem to the following command in the /etc/init/lxc.conf script
initctl emit filesystem --no-wait
Is there an error in your logfile?
/etc/init/hostname.conf also fails
On Wed 2011-10-26 (18:35), Arie Skliarouk wrote:
Hi,
On one of my ubuntu 10.04 vservers mountall mounts /dev from the host
machine. This causes problems for syslogd that works over /dev/log.
The vserver has a properly populated /dev directory; it just mounts /dev from
the host on top of it.
I
On Tue 2011-10-25 (09:11), Joerg Gollnick wrote:
Try to modprobe nfnetfilter as early as possible in user space (Ubuntu hint:
add it to /etc/modules).
There is no such module:
root@vms1:/etc# lsmod | grep filter
iptable_filter 12810 1
ip_tables 27177 2
On Tue 2011-10-25 (08:58), Jean-Philippe Menil wrote:
your kernel seems to have CONFIG_NETFILTER_XT_MATCH_RECENT set?
root@vms1:/etc# uname -a
Linux vms1 2.6.38-12-server #51~lucid1-Ubuntu SMP Thu Sep 29 20:09:53 UTC 2011
x86_64 GNU/Linux
root@vms1:/etc# grep CONFIG_NETFILTER_XT_MATCH_RECENT
On Tue 2011-10-25 (12:50), Joerg Gollnick wrote:
Am Dienstag, 25. Oktober 2011, 12:36:36 schrieb Ulli Horlacher:
On Tue 2011-10-25 (09:11), Joerg Gollnick wrote:
Try to modprobe nfnetfilter as early as possible in user space (Ubuntu
hint: add it to /etc/modules).
There is no such module
vms1 is an Ubuntu 10.04 based host system (4 * Xeon 64bit) with:
root@vms1:/lxc# uname -a
Linux vms1 2.6.38-11-server #50~lucid1-Ubuntu SMP Tue Sep 13 22:10:53 UTC 2011
x86_64 GNU/Linux
root@vms1:/lxc# lxc-version
lxc version: 0.7.5
I can start (Ubuntu 10.04) containers without problems:
On Mon 2011-10-24 (20:56), Daniel Lezcano wrote:
I have now booted vms1 with kernel 2.6.35 instead of 2.6.38 (as before).
This kernel crashes also on lxc-stop but it writes something to
/var/log/kern.log :
(...)
2011-10-24 19:34:40 [ 318.711984] ---[ end trace 20014711382a5389 ]---
On Mon 2011-10-24 (21:04), Daniel Lezcano wrote:
On 10/24/2011 08:40 PM, Jean-Philippe Menil wrote:
Le 24/10/2011 19:46, Ulli Horlacher a écrit :
2011-10-24 19:34:40 [ 318.526208] br0: port 2(veth2WqDOb) entering
forwarding state
2011-10-24 19:34:40 [ 318.675038] br0: port 2
On Wed 2011-10-19 (19:24), Ulli Horlacher wrote:
But nothing happens, there is only a lxc-start process dangling around:
root@vms1:/lxc# psg vmtest1
USER PID PPID %CPU VSZ COMMAND
root 31571 1 0.0 20872 lxc-start -f /data/lxc/vmtest1.cfg -n
vmtest1 -d -o /data/lxc
On Thu 2011-10-20 (09:00), Papp Tamas wrote:
On 10/20/2011 12:54 AM, Ulli Horlacher wrote:
On Wed 2011-10-19 (22:11), Papp Tamas wrote:
What version of lxc package do you use?
See my first mail:
lxc version: 0.7.4.1
Well, I don't see anything like this. Actually I use 0.7.5. Try
On Thu 2011-10-20 (16:39), Ulli Horlacher wrote:
On Thu 2011-10-20 (09:18), Serge E. Hallyn wrote:
And everytime I run lxc-start I get a new veth interface:
root@vms1:/lxc# ifconfig | grep veth
vethCmnezx Link encap:Ethernet HWaddr 3e:d6:06:4e:26:ae
vethFGQBYd Link
On Wed 2011-10-19 (19:37), Papp Tamas wrote:
Besides my problem with cannot stop/kill lxc-start (see other mail), I
have now an even more severe problem: I cannot start ANY container anymore!
I boot the container with:
root@vms1:/lxc# lxc-start -f /data/lxc/vmtest1.cfg -n vmtest1 -d
On Wed 2011-10-19 (21:24), Papp Tamas wrote:
On 10/19/2011 09:18 PM, Ulli Horlacher wrote:
root@vms1:/lxc# ps axf | grep vmtest1
31571 ? Ds 0:00 lxc-start -f /data/lxc/vmtest1.cfg -n vmtest1 -d
-o /data/lxc/vmtest1.log
2171 ? Ds 0:00 lxc-start -f /data/lxc
On Tue 2011-10-18 (15:22), Derek Simkowiak wrote:
What is the best method for gracefully shutting down LXC containers
in a production environment?
I use lxc -s container which itself executes a shutdown -h now via
cmdd, see: http://fex.rus.uni-stuttgart.de/lxc.html
lxc-attach -n CONTAINER
On Wed 2011-10-05 (21:48), Stéphane Graber wrote:
The problem is: when I stop xinetd on the host with command
/etc/init.d/xinetd stop
this stops all LXC container xinetd processes, too!
Can you file a bug here: http://launchpad.net/ubuntu/+source/xinetd/+filebug
I have already done it
On Thu 2011-10-06 (09:14), Ulli Horlacher wrote:
Then attach the patch to the bug making sure that it's flagged as a
patch. This should ensure someone will look at it, sadly not for Oneiric
(11.10) but hopefully for Precise (12.04).
Launchpad lets you mark a bug as affecting multiple
On Thu 2011-09-29 (18:05), Derek Simkowiak wrote:
Hello,
I have just published a new Open Source LXC container creation
script, called lxc-ubuntu-x. It implements all the latest best
practices I found on the web, and introduces some new features. I am
using this script in a
I have an Ubuntu LXC hosts with several containers running internet
services via xinetd.
Sometimes the container services died without any reason and without any
logfile entry. At first I thought LXC was not as stable as I had hoped, but
now I have found the bug inside /etc/init.d/xinetd!
The problem is: when I
On Mon 2011-09-05 (09:24), Papp Tamas wrote:
On 09/05/2011 08:38 AM, Jäkel, Guido wrote:
Another (planned) way is to use lxc-execute, but this is still not
working. Ulli Horlacher therefore wrote his own workaround: a little
daemon executes all commands pushed in by a command running at the
On Fri 2011-09-02 (18:33), Matteo Bernardini wrote:
personally, I have no problem ssh'ing into the container and halting it. :)
This needs sshd running inside the container and correct routing.
I use a small lxc-cmdd which let me do: lxc -s vm_name
Indeed, this calls lxc -x which can execute
On Fri 2011-07-01 (12:31), Serge E. Hallyn wrote:
so lxc-clone will create a snapshot-based clone of an lvm-backed
container in about a second.
My lxc script (*) can do this in 2 seconds, without bothering LVM:
root@vms2:/lxc# lxc
usage: lxc option
options: -l list containers
-p
On Fri 2011-08-19 (15:38), Dong-In David Kang wrote:
We've found out that inside of an LXC instance, root can insert/remove
modules of the host.
Is it normal?
If it is doable, an LXC image may corrupt the host system, which is not good
in terms of security.
Put:
lxc.cap.drop = sys_module
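The same mechanism covers other risky capabilities; the extra names below are illustrative (they match common distro defaults) and depend on what the container must be allowed to do:

```
lxc.cap.drop = sys_module
lxc.cap.drop = mac_admin mac_override sys_time
```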
On Mon 2011-05-23 (13:22), Ulli Horlacher wrote:
A small network application benchmark between LXC and VMware ESX:
ESX:
framstag@diaspora:~: fexsend -i unifex /tmp/2GB.tmp .
Server/User: http://fex.uni-stuttgart.de/frams...@rus.uni-stuttgart.de
/tmp/2GB.tmp : 2048 MB in 87 s (24105 kB
On Sat 2011-06-04 (11:38), Gordon Henderson wrote:
However I guess it's just for university types - those with the benefits
of Gb upload speeds... The poor people without that benefit - and the
majority will have sub 1Mb/sec upload speeds
Many home users in Germany have upload speeds at 20
After a minor kernel update lxc-start does not work any more:
root@vms2:/lxc# lxc-start -f bunny.cfg -n bunny -d -o /dev/tty
lxc-start 1306913218.901 ERROR lxc_namespace - failed to
clone(0x6c02): Invalid argument
lxc-start 1306913218.901 ERROR lxc_start - Invalid argument
On Wed 2011-06-01 (10:30), Daniel Lezcano wrote:
On 06/01/2011 10:25 AM, Ulli Horlacher wrote:
On Wed 2011-06-01 (10:18), Daniel Lezcano wrote:
root@vms2:/lxc# lxc-start -f bunny.cfg -n bunny -d -o /dev/tty
lxc-start 1306913218.901 ERROR lxc_namespace - failed to
clone
On Wed 2011-06-01 (11:17), Daniel Lezcano wrote:
On 06/01/2011 10:45 AM, Ulli Horlacher wrote:
[ ... ]
2011-06-01 10:34:53 [ 5228.816214] device vetheBqcj5 entered promiscuous
mode
2011-06-01 10:34:53 [ 5228.817240] ADDRCONF(NETDEV_UP): vetheBqcj5: link is
not ready
On Thu 2011-05-26 (01:51), David Touzeau wrote:
But I did not find any information inside the LXC container in order to
detect that we are really in an LXC container.
My trick is to mount cgroup into the container at /lxc/cgroup:
root@vms2:/lxc# grep cgroup flupp.fstab
/cgroup/flupp
On Mon 2011-05-23 (19:28), Geordy wrote:
Which 10 gb adapter did you use in the esx box?
Onboard Intel (Fujitsu RX300 server), but it is only 1 Gb/s, I was
mistaken at first.
I do not know this tool
I know it, because I have written it :-)
http://fex.rus.uni-stuttgart.de/
It does HTTP POST
On Tue 2011-05-24 (12:25), Roberto wrote:
Hi Ulli,
I have written a setup for LXC on Ubuntu 10.04:
http://fex.rus.uni-stuttgart.de/lxc-ubuntu
I've just tried your tutorial, but it ends up with the following:
root@my_host:~/bin# ./lxc -b ubuntu
./lxc: cannot determine container ip
A small network application benchmark between LXC and VMware ESX:
ESX:
framstag@diaspora:~: fexsend -i unifex /tmp/2GB.tmp .
Server/User: http://fex.uni-stuttgart.de/frams...@rus.uni-stuttgart.de
/tmp/2GB.tmp : 2048 MB in 87 s (24105 kB/s)
LXC:
framstag@diaspora:~: fexsend -i flupp
On Mon 2011-05-23 (13:26), Christoph Mitasch wrote:
What kind of networking did you use in LXC. Veth?
Yes.
Hardware is an ordinary Dell office PC, whereas the ESX host is a
datacenter-grade Fujitsu server. Price difference: ca. a factor of 1000 :-)
--
Ullrich Horlacher Server- und
On Mon 2011-05-23 (21:24), Christoph Mitasch wrote:
I was just thinking about another test case. The native network
performance of the host system (not inside the container).
framstag@diaspora:~: fexsend -u vms2 /tmp/2GB.tmp .
Server/User: http://vms2/frams...@rus.uni-stuttgart.de
On Sat 2010-01-30 (14:20), Dominik Schulz wrote:
I'm fairly new to LXC and I am looking for a way to execute a command inside
a running container (a full-blown one with its own rootfs and full
isolation). lxc-execute doesn't seem to do the trick and lxc-console
requires credentials to
On Tue 2011-05-17 (12:11), David Touzeau wrote:
I have a debian running in a container
the /cgroup/vps-1/memory.usage_in_bytes display
10784768
10784768 bytes - about 10 MB of memory used.
Probably you want:
root@vms2:/lxc# grep rss /cgroup/flupp/memory.stat
rss 6094848
total_rss 6094848
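Extracting that field is a one-liner; shown here against a sample file so the snippet is self-contained (the live path in this thread is /cgroup/flupp/memory.stat):

```shell
#!/bin/sh
# Recreate a minimal memory.stat (rss value from the post above)
# and pull out the rss field, the container's real memory use in bytes.
cat > /tmp/memory.stat.sample <<'EOF'
cache 1234567
rss 6094848
total_rss 6094848
EOF
awk '$1 == "rss" { print $2 }' /tmp/memory.stat.sample   # prints 6094848
```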
How can I determine how much memory a container currently uses?
memory.usage_in_bytes and memory.memsw.usage_in_bytes show much too much,
4 times more than the host uses in total.
root@vms2:# cat /cgroup/flupp/memory.usage_in_bytes
4181008384
root@vms2:# cat
On Fri 2011-05-20 (08:22), Ulli Horlacher wrote:
How can I determine how much memory a container currently uses?
Found it out myself :-)
It is rss in memory.stat
--
Ullrich Horlacher Server- und Arbeitsplatzsysteme
Rechenzentrum E-Mail: horlac...@rus.uni
I have written a script lxc which is a superset of some of the lxc-*
programs and adds some extra features. Maybe it is useful for others:
http://fex.rus.uni-stuttgart.de/download/lxc
root@vms2:~# lxc -h
usage: lxc option
options: -l list containers
-p list all container processes
On Thu 2011-05-19 (10:35), Corin Langosch wrote:
But how do you set up quotas for the snapshots?
One can limit the size of the whole LVM container, but this is the same as
using a regular disk partition (for all LXC containers).
I'm by no means an lvm expert, but I would have guessed
Is there an easy way to set up a disk limit for a container?
I could create an LVM partition for each container, but this is not what I
call easy :-}
--
Ullrich Horlacher Server- und Arbeitsplatzsysteme
Rechenzentrum E-Mail: horlac...@rus.uni-stuttgart.de
Memory limitation does not work for me:
root@vms2:/lxc# uname -a
Linux vms2 2.6.32-31-server #61-Ubuntu SMP Fri Apr 8 19:44:42 UTC 2011 x86_64
GNU/Linux
root@vms2:/lxc# grep CONFIG_CGROUP_MEM_RES_CTLR
/boot/config-2.6.32-31-server
CONFIG_CGROUP_MEM_RES_CTLR=y
CONFIG_CGROUP_MEM_RES_CTLR_SWAP=y
On Tue 2011-05-17 (09:10), Daniel Lezcano wrote:
Why can a container process allocate more than 1 GB of memory if there is
a 512 MB limit?
When a process reaches the memory limit, the container will
begin to swap. This is not really what we want as
Oh... no!
In order to
On Tue 2011-05-17 (17:18), David Touzeau wrote:
the host is a Virtual Machine stored on ESXi 4.0
The container can ping the host, the host can ping the container.
The issue is with other computers on the network: they cannot ping the
container and the container cannot ping the network.
I have had the same
On Tue 2011-05-17 (17:40), Mauras Olivier wrote:
I tried this way either, but there's two blocking problems with that - At
least for me:
- Can't use this feature on 2.6.32 kernels
I have installed 2.6.39 without problems.
- Have to reboot to add a new interface to set up a new container -
On Wed 2011-05-11 (11:29), Daniel Lezcano wrote:
If you create a bridge, attach the physical interface to it, give the
bridge the ip address you usually give to eth0, (make sure ifconfig eth0
0.0.0.0) and then give an IP address to the container on the same
network as eth0, that will
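Daniel's recipe can be sketched as commands; a dry-run wrapper keeps the block safe to execute as-is (host address from this thread, interface names illustrative):

```shell
#!/bin/sh
# Dry-run sketch of moving the host's IP from eth0 onto a bridge.
# With DRYRUN set each command is only echoed; clear it and run as root
# on a real host to apply (expect a brief loss of connectivity).
DRYRUN=1
run() { ${DRYRUN:+echo} "$@"; }

run brctl addbr br0                                    # create the bridge
run brctl addif br0 eth0                               # attach the physical NIC
run ifconfig eth0 0.0.0.0                              # strip eth0's address
run ifconfig br0 129.69.1.68 netmask 255.255.255.0 up  # bridge takes it over
```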
On Tue 2011-05-10 (20:08), C Anthony Risinger wrote:
I believe Daniel is saying you can pass each container two interfaces -- one
is the public and one is a local only private network for your host and
containers.
Then I have secondary addresses for each server and I have to decide
manually
I have a lxc host (zoo 129.69.1.68) with a container (vmtest8 129.69.8.6).
I want all host/container communication to be internal without network
traffic going via external router.
I know I can setup host routes like:
root@vms2:# route add -host 129.69.8.6 gw 129.69.1.68
root@vms2:# route -n
On Mon 2011-05-09 (22:52), Daniel Lezcano wrote:
On 05/09/2011 03:10 PM, Ulli Horlacher wrote:
I have a lxc host (zoo 129.69.1.68) with a container (vmtest8 129.69.8.6).
I want all host/container communication to be internal without network
traffic going via external router.
Maybe
On Sat 2011-05-07 (11:45), Greg Kurz wrote:
Other ideas for a suspend mode?
AFAIK, there's nothing to implement a network freeze. And by the way, what
would be expected?
The same behaviour as with vmware. There I have a real suspend mode.
If you were to freeze a network stack for
Is there a way to get the corresponding host PID for a container PID?
For example: inside the container the process init always has PID 1.
But what PID does this process have in the host process table?
ps aux | grep ... is not what I am looking for, I want a more robust solution.
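One robust answer on those kernels is the cgroup tasks file, which lists the host-side PIDs of every process in the container (the container's init first). Simulated here with a sample file so it runs anywhere; a real path would look like /cgroup/flupp/tasks:

```shell
#!/bin/sh
# The cgroup "tasks" file maps a container to host PIDs; fake one here
# with illustrative PIDs to show the lookup.
mkdir -p /tmp/cgdemo/flupp
printf '3801\n3827\n3857\n' > /tmp/cgdemo/flupp/tasks
head -1 /tmp/cgdemo/flupp/tasks   # first entry: the container init's host PID
```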
--
Ullrich
On Wed 2011-04-27 (22:16), Christoph Mitasch wrote:
BTW, is there another reliable way to initiate a clean shutdown of a
container running Natty from the Host system except using lxc-attach or
ssh to the container?
I have written a simple daemon that runs inside the (ubuntu) container and
to
I have a new container which I cannot attach to (lxc-console):
root@zoo:/lxc# /usr/bin/lxc-start -f egal.cfg -n egal
[1] 3800
root@zoo:/lxc# pstree -p 3800
lxc-start(3800)---init(3801)-+-cron(3857)
|-rsyslogd(3827)-+-{rsyslogd}(3839)
|
On Tue 2011-04-19 (09:44), Daniel Lezcano wrote:
I have a new container which I cannot attach to (lxc-console):
root@zoo:/lxc# /usr/bin/lxc-start -f egal.cfg -n egal
[1] 3800
root@zoo:/lxc# pstree -p 3800
lxc-start(3800)---init(3801)-+-cron(3857)
On Sun 2011-04-17 (08:39), Geordy Korte wrote:
Thought about it some more and I think it might be an advanced ESX
feature that restricts this.
Then the first rename (eth1 -- dev3) should also fail.
But this works and the container starts and runs perfectly.
What fails is the renaming back
On Tue 2011-03-08 (07:35), Serge E. Hallyn wrote:
Nice - that should be pretty simple to whip up, too. A python app
wrapping the command line tools... Could probably even design it
such that the same core can be used by both a curses interface and
a gui interface.
I do not like either; I
On Sun 2010-01-31 (13:38), Tony Risinger wrote:
If you're using the standard init program, and you are only trying to
control shutdown/reboot, I use something like this in my container
inittab:
p6::ctrlaltdel:/sbin/init 6
p0::powerfail:/sbin/init 0
ctrlaltdel responds to a SIGINT, and
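The mechanism can be demonstrated without a real init: trap SIGINT the way sysvinit's ctrlaltdel entry does, with an echo standing in for /sbin/init 6:

```shell
#!/bin/sh
# sysvinit runs the ctrlaltdel inittab action when PID 1 gets SIGINT;
# emulate the handler here and signal ourselves as the kernel would.
trap 'echo "ctrlaltdel action would run /sbin/init 6 here"' INT
kill -INT $$
```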
On Tue 2011-04-12 (08:27), Serge Hallyn wrote:
Quoting Ulli Horlacher (frams...@rus.uni-stuttgart.de):
On Tue 2011-04-12 (09:19), Ulli Horlacher wrote:
I use lxc with physical eth1.
I can start the container, connect to it, etc. Everything looks ok. But
when I stop the container
I use lxc with physical eth1.
I can start the container, connect to it, etc. Everything looks ok. But
when I stop the container and try to restart it, eth1 is no longer available.
It looks like lxc eats this interface. How can I free it (without rebooting
the host (zoo))?
root@zoo:/lxc# cat ubuntu.cfg
On Tue 2011-04-12 (09:28), Toens Bueker wrote:
zoo runs on virtual hardware (VMware ESXi), whereas vms2 runs on real
hardware. I now assume lxc bridge networking is not compatible with ESXi!
What is configured on ESXi?
A virtual switch for this VLAN. I have tested it with and without
On Tue 2011-04-12 (15:37), Daniel Lezcano wrote:
Shouldn't I see it under another name then?
I see only:
root@zoo:~# ll /proc/sys/net/ipv4/conf/
dr-xr-xr-x root root - 2011-04-12 13:33:12
/proc/sys/net/ipv4/conf/all
dr-xr-xr-x root root
On Sat 2011-04-09 (18:30), Brian K. White wrote:
He's asking you to run ip addr on the host and post the result here.
Sorry for my lameness :-)
root@zoo:/lxc# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
On Mon 2011-04-04 (19:35), Ulli Horlacher wrote:
My first Ubuntu 10.04 container is up and running on a Ubuntu 10.04 host,
but the container can only connect to the host (and vice versa), but not
to the world outside.
I saw a lot of configurations for NAT, but I want native routing for my
On Wed 2011-04-06 (12:31), Daniel Lezcano wrote:
root@zoo:/lxc# brctl show
bridge name bridge id STP enabled interfaces
br0 8000.0050568e0003 no eth0
is your container up when you show the bridge information ?
Yes:
root@zoo:/lxc#
On Mon 2011-04-04 (19:35), Ulli Horlacher wrote:
My first Ubuntu 10.04 container is up and running on a Ubuntu 10.04 host,
but the container can only connect to the host (and vice versa), but not
to the world outside.
I found a workaround: I have added an extra ethernet card dedicated
On Tue 2011-04-05 (14:53), Daniel Lezcano wrote:
Can you give the bridge setup ? (brctl show)
root@zoo:/lxc# brctl show
bridge name bridge id STP enabled interfaces
br0 8000.0050568e0003 no eth0
--
Ullrich Horlacher Server-
I have just subscribed to lxc-users.
To avoid asking already-answered questions, I would like to have
the complete list archive, so I can use it with my local MUA (mutt).
With
http://sourceforge.net/mailarchive/forum.php?forum_name=lxc-users
one cannot download the mails in original format.