Re: [Lxc-users] Mounting filesystem for container

2010-09-20 Thread l...@jelmail.com

 As Serge mentioned, it may be the cgroup device whitelist that 
 prevents you from doing that.
 You can check by temporarily commenting out all the 
 lxc.cgroup.devices lines in /var/lib/lxc/mycontainer and then 
 launching the container again. If you are able to mount it, then you 
 should add the following line to the configuration file:

 lxc.cgroup.devices.allow = type major:minor perm

Well, yes, that fixed it. Thank you. 

I had a gap in my knowledge. I assumed, incorrectly, that the mount was
handled in the host environment and that the container would just see the
mounted file system, therefore not needing access to the file system's
device node. 
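For reference, a concrete allow line for a block device might look like the
following (the device numbers here are hypothetical; check yours with
"ls -l" on the device node):

```
# Allow read/write/mknod (rwm) access to a block device (b) with
# major 8, minor 17 (e.g. /dev/sdb1 on many systems):
lxc.cgroup.devices.allow = b 8:17 rwm
```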

However, I now see that is not the case - the mount is performed within the
container and is not actually visible in the host environment (actually a
good thing!). This leads me to ask some more questions though...

1) Why not just put the mount inside the container's /etc/fstab ?

2) When do these mounts happen? I have a problem with a daemon not starting
during boot because, I think, the filesystem it needs is not yet there.

Sorry, just learning this stuff - very keen to leave OpenVZ behind :-)

John.
 





--
Start uncovering the many advantages of virtual appliances
and start using them to simplify application deployment and
accelerate your shift to cloud computing.
http://p.sf.net/sfu/novell-sfdev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] failed to create pty #0

2010-09-20 Thread l...@jelmail.com
Hi Daniel,

I have tracked down this issue somewhat. It seems to be caused by shutting
down a container (not by lxc-stop) and is caused by the rc.shutdown script
present in Arch Linux.

I don't know specifically what causes the problem because I haven't had
time to investigate, but I do know that it's fixed by removing everything
in rc.shutdown from the line containing stat_busy “Saving System
Clock” onwards, as suggested on lxc.teegra.net (I had done this on a prior
container but missed this step on a new one, which is why the problem only
started happening recently).

So something in that shutdown file has the capacity to disable the host's
ability to start further containers and also disable the ability to ssh
into already running ones (thankfully, lxc-console still worked).

John








Re: [Lxc-users] failed to create pty #0

2010-09-20 Thread Michael H. Warfield
On Mon, 2010-09-20 at 05:29 -0400, l...@jelmail.com wrote: 
 Hi Daniel,

 I have tracked down this issue somewhat. It seems to be caused by shutting
 down a container (not by lxc-stop) and is caused by the rc.shutdown script
 present in Arch Linux.

I've seen this problem too, even when lxc-stop is used and the container
is a Fedora container (mostly F12s).  If I shut down the container, stop
it with lxc-stop, and then restart the container, I get that "failed to
create pty #0" error when sshing into the container.  I have to restart
the host system once that's happened.

 I don't know specifically what causes the problem because I haven't had
 time to investigate, but I do know that it's fixed by removing everything
 in rc.shutdown from the line containing stat_busy “Saving System
 Clock” onwards, as suggested on lxc.teegra.net (I had done this on a prior
 container but missed this step on a new one, which is why the problem only
 started happening recently).

I'm going to have to see if there's something similar in the Fedora
shutdown scripts.

Interesting.  I hadn't tried using lxc-stop without shutting down the
contained OS, so I hadn't narrowed it down that far.

 So something in that shutdown file has the capacity to disable the host's
 ability to start further containers and also disable the ability to ssh
 into already running ones (thankfully, lxc-console still worked).

 John

Regards,
Mike
-- 
Michael H. Warfield (AI4NB) | (770) 985-6132 |  m...@wittsend.com
   /\/\|=mhw=|\/\/  | (678) 463-0932 |  http://www.wittsend.com/mhw/
   NIC whois: MHW9  | An optimist believes we live in the best of all
 PGP Key: 0x674627FF| possible worlds.  A pessimist is sure of it!




Re: [Lxc-users] Mounting filesystem for container

2010-09-20 Thread Daniel Lezcano
On 09/20/2010 11:13 AM, l...@jelmail.com wrote:

 As Serge mentioned, it may be the cgroup device whitelist that
 prevents you from doing that.
 You can check by temporarily commenting out all the
 lxc.cgroup.devices lines in /var/lib/lxc/mycontainer and then
 launching the container again. If you are able to mount it, then you
 should add the following line to the configuration file:

 lxc.cgroup.devices.allow = type major:minor perm

 Well, yes, that fixed it. Thank you.

 I had a gap in my knowledge. I assumed, incorrectly, that the mount was
 handled in the host environment and that the container would just see the
 mounted file system, therefore not needing access to the file system's
 device node.


That's the case if the system mounts something in the container's rootfs 
before the container starts: the mount point will be inherited at 
container creation. That's the behaviour of the mount namespace.

As soon as the container is created, new mount points are isolated. There 
is a pending discussion about propagating host mounts to the containers, 
but I am still looking at whether that fits the current design.

 However, I now see that is not the case - the mount is performed within the
 container and is not actually visible in the host environment (actually a
 good thing!). This leads me to ask some more questions though...

 1) Why not just put the mount inside the container's /etc/fstab ?

You can choose whichever way of creating/configuring your container best 
fits your needs: add the mount to the container's /etc/fstab, specify it 
in a local fstab file, or add an lxc.mount.entry option (which corresponds 
to one line of fstab).
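For example, an lxc.mount.entry line uses the same fields as an fstab
line (the device and paths below are hypothetical placeholders):

```
# fstab-style fields: <source> <target> <fstype> <options> <dump> <pass>
lxc.mount.entry = /dev/sdb1 /var/lib/lxc/mycontainer/rootfs/mnt/data ext3 defaults 0 0
```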

Providing different ways of mounting makes it possible to create a 
container with or without a root filesystem. You can use the host 
filesystem with a set of private directories (/var/run, /etc, /home, 
/tmp, ...) bind-mounted to a private directory tree and share the host 
binaries; this is good for launching a large number of containers (e.g. 
1024 containers take only 2.3 GB of private data while the rest is 
shared). Alternatively, you can specify the mount points in the 
container's /etc/fstab, let the 'mount' command update /etc/mtab, and 
have different distros with different binaries.

Another alternative is to launch only an application, like apache with 
its own configuration bind-mounted into a private directory, so you can 
launch several instances of apache and move your contained environment 
from one host to another, etc.

You can also create an empty rootfs with an empty directory tree (/usr, 
/lib, etc.) and then read-only bind-mount your host directories 
(/usr => rootfs/usr, /lib => rootfs/lib, etc.) while keeping some other 
directories private (e.g. /home).
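A sketch of such read-only bind mounts, expressed as lxc.mount.entry
lines (the container path is illustrative; note that on some kernels a
truly read-only bind mount needs a bind followed by a remount,ro):

```
# Share the host's /usr and /lib with the container, read-only:
lxc.mount.entry = /usr /var/lib/lxc/mycontainer/rootfs/usr none ro,bind 0 0
lxc.mount.entry = /lib /var/lib/lxc/mycontainer/rootfs/lib none ro,bind 0 0
```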

Well, there are a lot of possible configurations for containers; for 
this reason, there are several ways to configure them.

 2) When do these mounts happen? I have a problem with a daemon not starting
 during boot because, I think, the filesystem it needs is not yet there.


These mounts happen before pivoting into the rootfs with pivot_root, 
because we may want to mount a host filesystem into the container's rootfs.

   -- Daniel




Re: [Lxc-users] failed to create pty #0

2010-09-20 Thread C Anthony Risinger
On Mon, Sep 20, 2010 at 4:29 AM, l...@jelmail.com wrote:
 Hi Daniel,

 I have tracked down this issue somewhat. It seems to be caused by shutting
 down a container (not by lxc-stop) and is caused by the rc.shutdown script
 present in Arch Linux.

 I don't know specifically what causes the problem because I haven't had
 time to investigate, but I do know that it's fixed by removing everything
 in rc.shutdown from the line containing stat_busy “Saving System
 Clock” onwards, as suggested on lxc.teegra.net (I had done this on a prior
 container but missed this step on a new one, which is why the problem only
 started happening recently).

 So something in that shutdown file has the capacity to disable the host's
 ability to start further containers and also disable the ability to ssh
 into already running ones (thankfully, lxc-console still worked).

I also use Arch on all my systems, and although I have not used LXC in
a while, I ran into this issue when I did; this may not be your
problem, but it might be useful to others.

When using the 'newinstance' mount option to devpts, you have to ensure
that /dev/ptmx is a symlink to /dev/pts/ptmx.  This is even more
critical if your host is also using 'newinstance' (as mine was), since
the legacy, kernel-bound single-instance ptmx will not exist and all
pty allocations will fail.  At one point, one of the init scripts (the
startup one, IIRC) was overwriting my /dev/ptmx symlink with an actual
ptmx device node, which subsequently was causing the same error you are
receiving.

My problem was persistent, however; once /dev/ptmx was changed, all
allocations would fail until I repaired the symlink.
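A minimal sketch of that repair, demonstrated in a scratch directory so
it is safe to run as-is (on a real host you would operate on /dev/ptmx
itself, as root):

```shell
# Recreate the broken situation in a scratch directory: a regular file
# sitting where the ptmx symlink should be, next to a per-instance
# pts/ptmx node.
tmp=$(mktemp -d)
mkdir "$tmp/pts"
: > "$tmp/pts/ptmx"   # stand-in for the devpts-provided ptmx node
: > "$tmp/ptmx"       # stand-in for the bogus regular ptmx node

# The fix: replace the node with a relative symlink to pts/ptmx.
ln -sfn pts/ptmx "$tmp/ptmx"
readlink "$tmp/ptmx"   # prints: pts/ptmx
```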

C Anthony
