Re: [Lxc-users] How to make a container init DIE after finishing runlevel 0

2010-01-25 Thread Daniel Lezcano
Michael H. Warfield wrote:
 On Mon, 2010-01-25 at 21:50 +0100, Daniel Lezcano wrote:

   
 apologies for the length, but how is everyone else handling this?
 this is the last thing i need to solve before i actually start running
 all my services on this setup.
   
   
 I was wondering if the kernel shouldn't send a signal to the init's 
 parent when sys_reboot is called.
 

 Which still leaves open the question of telling the difference between a
 halt and a reboot. 
Well, with the correct information in siginfo, that should do the trick:

si_num = SIGINFO ? SIGHUP ?
si_code = SI_KERNEL
si_int = the cmd passed to the reboot (2) function.



--
The Planet: dedicated and managed hosting, cloud storage, colocation
Stay online with enterprise data centers and the best network in the business
Choose flexible plans and management services without long-term contracts
Personal 24x7 support from experienced hosting pros just a phone call away.
http://p.sf.net/sfu/theplanet-com
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] How to make a container init DIE after finishing runlevel 0

2010-01-25 Thread Daniel Lezcano
Michael H. Warfield wrote:
 On Mon, 2010-01-25 at 21:50 +0100, Daniel Lezcano wrote:

   
 apologies for the length, but how is everyone else handling this?
 this is the last thing i need to solve before i actually start running
 all my services on this setup.
   
   
 I was wondering if the kernel shouldn't send a signal to the init's 
 parent when sys_reboot is called.
 

 Which still leaves open the question of telling the difference between a
 halt and a reboot.  My trick of using the final runlevel
 in /var/run/utmp ran afoul of a gotcha in the Debian containers: they
 seem to default to mounting tmpfs over /var/run and /var/lock, so you
 lose that information.  I had to disable RAMRUN and RAMLOCK
 in /etc/default/rcS in the debian images to get around that, but I'm not
 sure I'm happy doing that.  The alternative of examining /var/log/wtmp
 didn't work out as reliably.  OpenVZ has a similar problem and it writes
 a reboot file that can be read, but that seems inelegant as well.  I
 don't think anything works if someone does a reboot -f, but I need to
 test that condition yet.

 To also answer the OP's question.  Here's what I use.  I have a script
 that runs periodically in the host.  If the number of processes in a
 running container is 1, then I check for the runlevel in
 ${rootfs}/var/run/utmp.  If that's 0 then it's a halt and I run
 lxc-stop.  If it's 6 then it's a reboot and I stop the container and
 then restart it.  I run it every 5 minutes or so or manually.  It runs
 fast and could be run more often.  Just sucks polling things like that,
 though.  That script, lxc-check, is attached.
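That polling check can be sketched roughly as follows (illustrative only, not the attached lxc-check script itself; the container name, the /cgroup mount point, and the use of runlevel(8) against the container's utmp are assumptions):

```shell
#!/bin/sh
# Rough sketch of the polling approach described above. Assumes the cgroup
# hierarchy is mounted on /cgroup and the container is named "debian".
name=debian
rootfs=/var/lib/lxc/$name/rootfs

# one task left in the cgroup means init is alone in the container
if [ "$(wc -l < /cgroup/$name/tasks)" -eq 1 ]; then
    # runlevel(8) can read an alternate utmp file; field 2 is the runlevel
    level=$(runlevel "$rootfs/var/run/utmp" | awk '{ print $2 }')
    case "$level" in
        0) lxc-stop -n "$name" ;;                         # halt
        6) lxc-stop -n "$name" && lxc-start -n "$name" ;; # reboot
    esac
fi
```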
   

A trick I just found:

 while true; do
     inotifywait /var/lib/lxc/debian/rootfs/var/run/utmp;
     if [ "$(wc -l < /cgroup/debian/tasks)" -eq 1 ]; then
         lxc-stop -n debian
     fi;
 done

This command can stay running permanently; it will trigger an lxc-stop 
when the container is left with a single process.
No polling, and immediate :)
At first glance it seems to work well, but of course it is not compatible 
with upstart.

linux-owop:~ # lxc-start -n debian  reset; echo 
 exited ##
SELinux:  Could not open policy file = /etc/selinux/targeted/policy/policy.24: No such file or directory
INIT: version 2.86 booting
Activating swap...done.
Cleaning up ifupdown
Loading kernel modules...FATAL: Could not load /lib/modules/2.6.32-mcr-3.18/modules.dep: No such file or directory
Checking file systems...fsck 1.41.3 (12-Oct-2008)
done.
Setting kernel variables (/etc/sysctl.conf)...done.
Mounting local filesystems...done.
Activating swapfile swap...done.
Setting up networking
Configuring network interfaces...Internet Systems Consortium DHCP Client 
V3.1.1
Copyright 2004-2008 Internet Systems Consortium.
All rights reserved.
For info, please visit http://www.isc.org/sw/dhcp/

Listening on LPF/eth0/d6:c8:78:f7:09:12
Sending on   LPF/eth0/d6:c8:78:f7:09:12
Sending on   Socket/fallback
DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 7
DHCPOFFER from 172.20.0.1
DHCPREQUEST on eth0 to 255.255.255.255 port 67
DHCPACK from 172.20.0.1
bound to 172.20.0.10 -- renewal in 34492 seconds.
done.
INIT: Entering runlevel: 3
Starting OpenBSD Secure Shell server: sshd.

Debian GNU/Linux 5.0 debian console

debian login: root
Last login: Mon Jan 25 23:28:34 UTC 2010 on console
Linux debian 2.6.32-mcr-3.18 #19 Mon Jan 25 11:19:47 CET 2010 x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
debian:~# poweroff

Broadcast message from r...@debian (console) (Mon Jan 25 23:32:07 2010):

The system is going down for system halt NOW!
INIT: Switching to runlevel: 0
INIT: Sending processes the TERM signal
debian:~# Asking all remaining processes to terminate...done.
Killing all remaining processes...failed.
Deconfiguring network interfaces...There is already a pid file /var/run/dhclient.eth0.pid with pid 187
removed stale PID file
Internet Systems Consortium DHCP Client V3.1.1
Copyright 2004-2008 Internet Systems Consortium.
All rights reserved.
For info, please visit http://www.isc.org/sw/dhcp/

Listening on LPF/eth0/d6:c8:78:f7:09:12
Sending on   LPF/eth0/d6:c8:78:f7:09:12
Sending on   Socket/fallback
DHCPRELEASE on eth0 to 172.20.0.1 port 67
done.
Cleaning up ifupdown
mount: / is busy
Will now halt.
INIT: no more processes left in this runlevel
 exited ##
linux-owop:~ #










Re: [Lxc-users] How to make a container init DIE after finishing runlevel 0

2010-01-25 Thread Daniel Lezcano
Michael H. Warfield wrote:
 On Mon, 2010-01-25 at 23:39 +0100, Daniel Lezcano wrote: 
   
 Michael H. Warfield wrote:
 
 On Mon, 2010-01-25 at 21:50 +0100, Daniel Lezcano wrote:

   
   
 apologies for the length, but how is everyone else handling this?
 this is the last thing i need to solve before i actually start running
 all my services on this setup.
   
   
   
 I was wondering if the kernel shouldn't send a signal to the init's 
 parent when sys_reboot is called.
 
 
 Which still leaves open the question of telling the difference between a
 halt and a reboot. 
   
 Well, with the correct information in siginfo, that should do the trick:
 

   
 si_num = SIGINFO ? SIGHUP ?
 si_code = SI_KERNEL
 si_int = the cmd passed to the reboot (2) function.
 

 I concur that sounds like a good option.  But that's a kernel mod and
 will require a kernel patch and getting that through the process.  Once
 it's agreed that's the route to go, we've got to get the containers
 guys involved and get it pushed through.  And is this going to work
 without any modifications to init itself (per the discussion over on the
 -devel list wrt modifications to init and the difficulty and pain of
 pulling teeth)?  What's the next step?
   
Send a patch with this hack even if it is not the right approach; let's 
take some flaming and discuss this problem with containers@/lkml@.
As I have one foot in userspace with lxc and the other in container 
kernel development, if we reach a consensus it should not be a big deal 
to push a patch upstream, especially if this is a blocker for container 
technology.

The objective is a kernel patch that makes it possible to support 
shutdown / halt / reboot / etc. without modifying the init command, 
compatible with both sysv init and upstart. The patch I propose sends a 
signal to the parent process of the pid namespace, in our case 
lxc-start. Handling this signal is quite easy: we just kill -9 the init 
process and, in the case of a reboot, return to the startup code without 
exiting lxc-start.






Re: [Lxc-users] Network confusion

2010-01-29 Thread Daniel Lezcano
Matteo Ghezzi wrote:
 2010/1/29 Daniel Lezcano dlezc...@fr.ibm.com:
 Thanks for your answer.
 
 Can you send the config file of the container ? it should be in
 /var/lib/lxc/container_name/config or /etc/lxc/container_name.
 
 The config file of the container:
 
 lxc.tty = 4
 lxc.pts = 1024
 lxc.rootfs = /lxc/debian-nossh//rootfs
 lxc.cgroup.devices.deny = a
 # /dev/null and zero
 lxc.cgroup.devices.allow = c 1:3 rwm
 lxc.cgroup.devices.allow = c 1:5 rwm
 # consoles
 lxc.cgroup.devices.allow = c 5:1 rwm
 lxc.cgroup.devices.allow = c 5:0 rwm
 lxc.cgroup.devices.allow = c 4:0 rwm
 lxc.cgroup.devices.allow = c 4:1 rwm
 # /dev/{,u}random
 lxc.cgroup.devices.allow = c 1:9 rwm
 lxc.cgroup.devices.allow = c 1:8 rwm
 lxc.cgroup.devices.allow = c 136:* rwm
 lxc.cgroup.devices.allow = c 5:2 rwm
 # rtc
 lxc.cgroup.devices.allow = c 254:0 rwm
 # network
 lxc.utsname = debian-mini
 lxc.network.type = veth
 lxc.network.flags = up
 lxc.network.link = br0
 lxc.network.hwaddr = 4a:49:43:49:79:bf
 lxc.network.ipv4 = 192.168.0.100/24
 
 
 As well as the result of ifconfig in the container and outside the
 container.
 
 In the host:
 -
 br0   Link encap:Ethernet  HWaddr 62:60:2D:80:63:DE
   inet addr:192.168.0.2  Bcast:192.168.0.255  Mask:255.255.255.0
   inet6 addr: fe80::e2cb:4eff:fe00:5a7a/64 Scope:Link
   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
   RX packets:19229 errors:0 dropped:0 overruns:0 frame:0
   TX packets:11042 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:0
   RX bytes:1656726 (1.5 Mb)  TX bytes:1824850 (1.7 Mb)
 
 eth0  Link encap:Ethernet  HWaddr E0:CB:4E:00:5A:7A
   inet addr:192.168.0.100  Bcast:192.168.0.255  Mask:255.255.255.0
   inet6 addr: fe80::e2cb:4eff:fe00:5a7a/64 Scope:Link
   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
   RX packets:24207 errors:0 dropped:0 overruns:0 frame:0
   TX packets:14136 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:1000
   RX bytes:8070883 (7.6 Mb)  TX bytes:2093847 (1.9 Mb)
   Memory:fbee-fbf0

You should remove the ip address of eth0:

ifconfig eth0 0.0.0.0

Let me know if that fixes the problem.

Thanks.

   -- Daniel




Re: [Lxc-users] restricting container visible cpus

2010-02-01 Thread Daniel Lezcano
atp wrote:
 Hi,

   
 There is a /proc virtualization layer prototype with fuse which needs to 
 be enhanced but it's not for the short term as there are several issues 
 with the container itself to be solved before adding it.
 But any volunteer is welcome ;)

 

   I hacked up a quick modification to arch/x86/kernel/cpu/proc.c on
 friday that restricted the view of /proc/cpuinfo to only those processes
 in current->cpus_allowed, as a quick way of testing if that approach did
 what I wanted. There are bugs remaining, and I'm not 100% sure that it's 
 the correct approach, but if anyone's interested let me know. 
   

I already proposed this approach but it was rightfully rejected. I think 
it would be a total mess to handle that in the kernel, because once you 
add the cpus you will then need the memory, the swap, hiding the 
contents of some files, etc.

For this reason, it was proposed to use a fuse filesystem on top of 
/proc to override the information, there is a prototype here:

At present it overrides /proc/meminfo and hides some files.
Adding /proc/cpuinfo is trivial.

If you are interested, I can send you a tarball.
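The filtering step such an override would perform can be sketched in plain C. Illustrative only: the prototype mentioned above is fuse-based, and this shows just the stanza filtering, not the fuse plumbing.

```c
/* Sketch: keep only the cpuinfo stanzas whose "processor" id is in the
 * container's allowed cpu set. */
#include <stdio.h>
#include <string.h>

static void filter_cpuinfo(const char *in, const int *allowed, int n_allowed,
                           char *out, size_t outsz)
{
    char buf[4096];
    out[0] = '\0';
    snprintf(buf, sizeof(buf), "%s", in);

    /* cpuinfo stanzas are separated by blank lines */
    char *stanza = buf;
    while (stanza && *stanza) {
        char *next = strstr(stanza, "\n\n");
        if (next) { *next = '\0'; next += 2; }

        int id = -1;
        sscanf(stanza, "processor%*[ \t:]%d", &id);
        for (int i = 0; i < n_allowed; i++)
            if (allowed[i] == id) {
                strncat(out, stanza, outsz - strlen(out) - 1);
                strncat(out, "\n\n", outsz - strlen(out) - 1);
            }
        stanza = next;
    }
}
```

In the fuse layer, a read() on /proc/cpuinfo would return the result of this filter applied to the real file.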
   Anyway, by stracing getconf it turns out that the c library on fc12
 uses sysfs to determine the number of cpus, so I'm probably barking up
 the wrong tree. I read somewhere that some work to make sysfs container
 aware has already been done, so if any of the people who've done that 
 are listening, I'd appreciate a pointer or two. 
   
The sysfs per namespace is not yet merged. It was rejected because of a 
locking problem, and because sysfs itself needs some cleanup before the 
shadowed directories for the namespace can be added; it is being cleaned 
up and pushed little by little right now. Be patient :)

On the other side, the sysfs per namespace will only virtualize 
/sys/class/net, so it will not give you the right information for the cpus.

   I'm assuming that its not going to be contentious to try and restrict
 the container's idea of the number of cpus on the system to only those
 cpus allocated to its cgroup. 
   




Re: [Lxc-users] Kernel 2.6.33-rc6, 3 bugs container specific.

2010-02-04 Thread Daniel Lezcano
Serge E. Hallyn wrote:
 Quoting Daniel Lezcano (daniel.lezc...@free.fr):
   
 Serge E. Hallyn wrote:
 
 Quoting Jean-Marc Pigeon (j...@safe.ca):
   
 Hello,


 
 I was wondering out loud about the best design to solve his problem.

 If we try to redirect kernel-generated messages to containers, we have
 several problems, including whether we need to duplicate the messages
 to the host container.  So in one sense it seems more flexible to
   1. send everything to host syslog
   
No, if we do that all CONT messages will reach the same bucket
and it will be difficult to sort them out.
CONT sys_admin and HOST sys_admin could be different entities,
so you debug the CONT config and the critical needed information
reaches the HOST (which you do not have access to).
 
 Yes, so a privileged task on HOST must pass that information back to
 you on CONT.  That is not a valid complaint imo.  But how to sort the
 msgs out is a valid question.

 We need some sort of identifier, unique system-wide, attached to.. 
 something.
 Is ifindex unique system-wide right now?  Oh, IIRC it is, but we want it to
 be containerized, so that would be a bad choice :)

   
   2. clamp down on syslog use by processes not in the init_user_ns
   
Could give me more detail??...
 
 Simplest choices would be to just refuse sys_syslog() and open(/proc/kmsg)
 altogether from a container, or to only allow reading/writing messages
 to own syslog.  (I had hoped to find time to try out the second option but
 simply haven't had the time, and it doesn't look like I will very soon.
 So if anyone else wants to, pls jump at it...)

 Then /proc/kmsg can provide what I described above through a FUSE file,
 and if, as you mentioned, the container unmounts the FUSE fs and gets
 to real procfs, they just get nothing.

   
   3. let the userspace on the host copy messages into a socket or
  file so child container can pretend it has real syslog.
   
So you trap printk message from CONT on the HOST and
redirect them on CONT but on a standard syslog channel.
Seem OK to me, as long /proc/kmsg is not existing
(/dev/null) in the CONT file tree.
 
 We have:
* Commands to sys_syslog:
*
*  0 -- Close the log.  Currently a NOP.
*  1 -- Open the log. Currently a NOP.
*  2 -- Read from the log.
*  3 -- Read all messages remaining in the ring buffer.
*  4 -- Read and clear all messages remaining in the ring buffer
*  5 -- Clear ring buffer.
*  6 -- Disable printk to console
*  7 -- Enable printk to console
*  8 -- Set level of messages printed to console
*  9 -- Return number of unread characters in the log buffer
* 10 -- Return size of the log buffer

 And add:
   * 11 -- create a new ring buffer for the current process
 and its children


 We have, let's say a global ring buffer keep untouched, used by
 syslog(2) and printk. When we create a new ring buffer, we allocate
 it and assign to the nsproxy (global ring buffer is the default in
 the nsproxy).

 The printk keeps writing to the global ring buffer and the syslog(2)
 writes to the namespaced ring buffer.

 Does it make sense?
 

 Yeah, it's a nice alternative.  Though (1) there is something to be said for
 forcing a new ring buffer upon clone(CLONE_NEWUSER), and (2) assuming the
 new ring buffer is pointed to from nsproxy, it might be frowned upon to do
 an unshare/clone action in yet another way.
   
Why do you want to tie clone(CLONE_NEWUSER) with a new ring buffer ?
I mean one may want to use CLONE_NEWUSER but keep the ring buffer, no ?
 I still think our first concern should be safety, and that we should consider
 just adding 'struct syslog_struct' to nsproxy, and making that NULL on a
 clone(CLONE_NEWUSER).  any sys_syslog() or /proc/kmsg access returns -EINVAL
 after that.  Then we can discuss whether and how to target printks to
 namespaces, and whether duplicates should be sent to parent namespaces.
   
That makes sense, doing it step by step. Targeting the printk is the most 
difficult part, no? I mean you would always need the destination namespace 
available, which is not obvious when printk is called from an 
interrupt context.

 After we start getting flexible with syslog, the next request will be for
 audit flexibility.  I don't even know how our netlink support suffices for
 that right now.

 (So, this all does turn into a big deal...)
   
Mmh ... right.


Re: [Lxc-users] Regarding lxc tools and libvirt (virsh).

2010-02-05 Thread Daniel Lezcano
Kumar L Srikanth-B22348 wrote:
 Hi,
 I am new to Linux Containers and Libvirt.
 Recently, I installed Linux Container tools on my Fedora Core 12 (64
 bit) machine and able to create/start/destroy my own containers using
 the following commands:
 lxc-create
 lxc-sshd
 lxc-start
 lxc-destroy
  
 And I also installed libvirtd on my machine, and able to
 create/start/destroy my own domains using the following commands:
 virsh -c lxc:/// define /path/to/domain/xml/configuration/file
 virsh -c lxc:/// start [Domain Name]
 virsh -c lxc:/// shutdown [Domain Name]
 virsh -c lxc:/// undefine [Domain Name]
  
 I just wonder is there any relation between the domains created with
 'virsh' and containers created with 'lxc-tools' [like lxc-create,
 lxc-start ..etc]?
 Can I start a container created using 'lxc-create' command with virsh
 [virsh -c lxc:/// start [Container Name] ...something like that]?
  
 Please let me know.
   
No, these are 2 separate projects. But a driver could be implemented to 
plug the lxc tools into libvirt.



Re: [Lxc-users] Regarding lxc tools and libvirt (virsh).

2010-02-05 Thread Daniel Lezcano
Kumar L Srikanth-B22348 wrote:
 Thanks for the reply Daniel.
 I have another issue.
 I am creating a Domain using libvirt XML. In order to mount the host's
 '/home/srikanth' directory to the new container's '/' directory, my XML
 format is shown below:
 
 <domain type='lxc' id='1'>
   <name>container1_vm</name>
   <memory>50</memory>
   <os>
     <type>exe</type>
     <init>/bin/sh</init>
   </os>
   <vcpu>1</vcpu>
   <clock offset='utc'/>
   <on_poweroff>destroy</on_poweroff>
   <on_reboot>restart</on_reboot>
   <on_crash>destroy</on_crash>
   <devices>
     <emulator>/usr/libexec/libvirt_lxc</emulator>
     <filesystem type='mount'>
       <source dir='/home/srikanth'/>
       <target dir='/'/>
     </filesystem>
     <console type='pty'/>
   </devices>
 </domain>
 
 
 With the above libvirt XML, the domain is defined but does not start. When I
 issue the start command it says Domain started, but shows shut
 off status. If I change the target directory (<target dir='/'/>) from
 '/' to '/home/container1' (<target dir='/home/container1'/>), the domain
 starts normally and I am able to see the contents of the target
 directory.
 
 Can you please let me know how I can set the target directory to '/'?
 
 By the way, I am using libvirt version 0.7.6.

I won't be able to answer as I don't know libvirt; you had better ask on 
the libvirt mailing list. This mailing list is for the lxc container 
tools.


 -Original Message-
 From: Daniel Lezcano [mailto:daniel.lezc...@free.fr] 
 Sent: Friday, February 05, 2010 2:25 PM
 To: Kumar L Srikanth-B22348
 Cc: lxc-users@lists.sourceforge.net
 Subject: Re: [Lxc-users] Regarding lxc tools and libvirt (virsh).
 
 Kumar L Srikanth-B22348 wrote:
 Hi,
 I am new to Linux Containers and Libvirt.
 Recently, I installed Linux Container tools on my Fedora Core 12 (64
 bit) machine and able to create/start/destroy my own containers using 
 the following commands:
 lxc-create
 lxc-sshd
 lxc-start
 lxc-destroy
  
 And I also installed libvirtd on my machine, and able to 
 create/start/destroy my own domains using the following commands:
 virsh -c lxc:/// define /path/to/domain/xml/configuration/file
 virsh -c lxc:/// start [Domain Name]
 virsh -c lxc:/// shutdown [Domain Name] virsh -c lxc:/// undefine 
 [Domain Name]
  
 I just wonder is there any relation between the domains created with 
 'virsh' and containers created with 'lxc-tools' [like lxc-create, 
 lxc-start ..etc]?
 Can I start a container created using 'lxc-create' command with virsh 
 [virsh -c lxc:/// start [Container Name] ...something like that]?
  
 Please let me know.
   
 No, these are 2 separate projects. But a driver could be implemented to
 plug the lxc tools with the libvirt.



Re: [Lxc-users] Unable to SSH to the containers.

2010-02-05 Thread Daniel Lezcano
Kumar L Srikanth-B22348 wrote:
  
 Hi Daniel,
 Please see my inline comments.

 Regards,
 Srikanth

 -Original Message-
 From: Daniel Lezcano [mailto:daniel.lezc...@free.fr] 
 Sent: Friday, February 05, 2010 5:40 PM
 To: Kumar L Srikanth-B22348
 Cc: lxc-users@lists.sourceforge.net
 Subject: Re: [Lxc-users] Unable to SSH to the containers.

 Kumar L Srikanth-B22348 wrote:
   
 My lxc version is 0.6.3.
   
 
 Ok.

 I suppose you tried the lxc-sshd script to create the container, right?
 Srikanth Yes, with lxc-sshd script.

 You should try:

 lxc-execute -n cont1 /bin/bash
 /usr/sbin/sshd
 Srikanth You mean to execute the following command: lxc-execute -n
 cont1 /bin/bash /usr/sbin/sshd ?
   
No, I meant get a shell in a container and from this shell launch sshd.

 The same for the other container:

 lxc-execute -n cont2 /bin/bash
  /usr/sbin/sshd

 if you want, you can log in to the lxc-devel Freenode irc channel, that may
 be easier to check the config.
 Srikanth lxc-devel can not be found in my linux container tools.
   
I meant chatting via irc.





Re: [Lxc-users] Still can not get macvlan to work.

2010-02-08 Thread Daniel Lezcano
Michael H. Warfield wrote:
 I mentioned this in an earlier posting that I was using the veth method
 with bridges because I could NOT get macvlan to work.  Problem is that
 the containers will come up and will talk on the network but the host
 can not talk to any of the guest containers.  Ping doesn't work and
 connections don't work.  Not IPv4 or IPv6.  I can connect to containers
 from other systems (both IPv4 and IPv6) but not from the system that's
 hosting them.  Someone suggested that the problem was an old bug that
 they thought was fixed in more recent kernels.  But wasn't more
 specific.
   
It's not a bug, it was just not implemented yet.

 I just recently moved several of my test containers from my Fedora 11
 engine to a newer 64 bit Fedora 12 system.  In the process, I thought,
 what the heck, lets give macvlan another shot, so I reconfigured a
 couple of the containers from veth to macvlan.  Same problem.  Latest
 kernel from Fedora and same problem.

 The Fedora 11 kernel: kernel-2.6.30.10-105.2.4.fc11.i586
 The Fedora 12 kernel: kernel-2.6.31.12-174.2.3.fc12.x86_64

 Anyone with thoughts or suggestions on what to try next?
   

The macvlan mode you are talking about is the macvlan bridge mode. This 
mode is explained in the lxc.conf (5) man page.
It was recently merged upstream and should be available in the 2.6.33 
kernel.

If you want to enable this mode in lxc, you have to specify:
lxc.network.type=macvlan
lxc.network.macvlan.mode=bridge
   ...

This mode allows several macvlan interfaces to communicate if they have 
the same lower-dev (e.g. eth0).
But it does not allow macvlan-to-eth0 communication; for that you have 
to create a macvlan interface in the host as well.
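For the host-side macvlan interface, something along these lines should work with a recent iproute2 (the interface name and address are examples; the idea is that host-to-container traffic goes through the host's own macvlan rather than eth0):

```shell
# Create a macvlan interface on the host, on the same lower-dev as the
# containers, so the host can reach them (names/addresses are examples).
ip link add link eth0 name macvlan0 type macvlan mode bridge
ip addr add 192.168.0.2/24 dev macvlan0
ip link set macvlan0 up
```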




Re: [Lxc-users] Networking issues with LXC

2010-02-10 Thread Daniel Lezcano
Daniel Lezcano wrote:
 Michael B. Trausch wrote:
 Hello,

 I am running LXC version 0.6.5, with kernel 2.6.32.7.  I am having some 
 pretty significant troubles getting networking to reliably work with the 
 containers.  That is to say, the host name is doing just fine, and 
 answers network requests all the time.  However, the containers 
 sometimes fail to respond to network for requests for several seconds 
 and several connection attempts.  This isn't a problem in that it's 
 rejecting connections on the ports specifically; it's as if there is no 
 machine on my network with the IP address assigned to the container, 
 until it comes alive again and answers the network.

 If I work with a container after getting a connection, I can reach that 
 container for several minutes (usually---sometimes it will cut off a 
 connection, though, and then it is again as if that IP address doesn't 
 exist on my network).

 I'm out of options as far as getting this working:  This network 
 configuration works with containers under OpenVZ or full virtual 
 machines in KVM, where the virtualized network cards are attached to the 
 bridge.  The IP configuration is handed out by DHCP, (except for my 
 public IP addresses, which are manually assigned) and the IP addresses, 
 netmasks, default routes, gateways, broadcast addresses, and so forth 
 are all correct.  Nonetheless, the networking is *extremely* unreliable.

 I don't know how to provide additional information to attempt to work 
 through a problem like this; any guidance in this area would be greatly 
 appreciated.
   
 Mmh, hard to answer.
 
 Can you give the following information:
 
  * how many containers are running on the host ?
 
 For the host and the containers:
  * 'ip addr show'
  * ip neigh show
  * 'brctl show'
 
 And a tcpdump :
 
  tcpdump -i any dst or src containerip
 
 And then try to ping or  connect to/from the container to make tcpdump 
 show something.
 

Oh ! I forgot the container configurations too, please.

--
SOLARIS 10 is the OS for Data Centers - provides features such as DTrace,
Predictive Self Healing and Award Winning ZFS. Get Solaris 10 NOW
http://p.sf.net/sfu/solaris-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] Networking issues with LXC

2010-02-11 Thread Daniel Lezcano
Michael B. Trausch wrote:
 On 02/11/2010 03:46 AM, Daniel Lezcano wrote:
 If you do not set a mac address in the container configuration file, the
 kernel will choose one for you preventing duplicate mac address on the
 host.

 Will it pick something that is static for each container?  I'd like 
 for each of my containers to have stable IPv6 addresses that persist 
 over reboots.
Ah, ok. That makes sense to specify a mac address.

Maybe this script can help you to generate mac address automatically 
with the container configuration.

http://mediakey.dk/~cc/generate-random-mac-address-for-e-g-xen-guests



Re: [Lxc-users] setrlimit(3) and containers

2010-04-02 Thread Daniel Lezcano
Mikhail Gusarov wrote:
 Twas brillig at 09:47:33 01.04.2010 UTC-05 when se...@us.ibm.com did gyre and 
 gimble:

   Here process drops root privileges, setuids to uid=103 and limits itself
   to 3 processes with this uid. Clone fails due to fact there are two
   processes with uid=103 running in another container.
   
   Is it a known limitation, or maybe this is already handled in newer
   kernels? (I use 2.6.32)

  SEH Hmm, you'll need to unshare the user namespace.  Try adding
  SEH CLONE_NEWUSER to the list assigned to clone_flags at
  SEH lxc/src/lxc/start.c line 353.

 I tried, and was hit by the following problem:

 [dotted...@vertex:~]255% sudo lxc-start -n cf 
  
 lxc-start: Device or resource busy - could not unmount old rootfs
 lxc-start: failed to pivot_root to '/var/lib/lxc/cf/rootfs'
 lxc-start: failed to set rootfs for 'cf'
 lxc-start: failed to setup the container
   

Did you try with the git head ?


--
Download Intel® Parallel Studio Eval
Try the new software tools for yourself. Speed compiling, find bugs
proactively, and fine-tune applications for parallel performance.
See why Intel Parallel Studio got high marks during beta.
http://p.sf.net/sfu/intel-sw-dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] lxc.console with lxc from git

2010-04-02 Thread Daniel Lezcano
Mikhail Gusarov wrote:
 Hi.

 I have tried to run lxc tools from git and got the following output:

 [dotted...@vertex:~]% sudo lxc-start --logfile=/dev/stderr 
 --logpriority=TRACE -n cf
   lxc-start 1270236851.229 INFO lxc_conf - tty's configured
  lxc-start 1270236851.229 ERROR    lxc_console - Bad address - failed to 
 open '(null)'
 lxc-start: Bad address - failed to open '(null)'
  lxc-start 1270236851.229 ERROR    lxc_start - failed to create console
 lxc-start: failed to create console
  lxc-start 1270236851.229 ERROR    lxc_start - failed to initialize the 
 container
 lxc-start: failed to initialize the container

 According to the source, console-path is only initialized to non-NULL if
 there is an lxc.console option in the config file. If lxc.console is mandatory
 now, it would be nice to handle it in a clearer way.
   
Argh! no, it's not mandatory, it's a regression.

Thanks for reporting the problem.

I think it is fixed, please git pull.

Let me know if that works now.

  -- Daniel







Re: [Lxc-users] SSH - PTY allocation request failed on channel 0 stdin: is not a tty

2010-04-06 Thread Daniel Lezcano
Osvaldo Filho wrote:
 I get this:
 ...
 mountall: mount /dev/pts [25] terminated with status 1
 mount: according to mtab, none is already mounted on /dev/shm

 mountall: mount /dev/shm [26] terminated with status 1
 mount: according to mtab, varrun is already mounted on /var/run

 mountall: mount /var/run [29] terminated with status 1
 mount: according to mtab, varlock is already mounted on /var/lock

 mountall: mount /var/lock [31] terminated with status 1
 mount: according to mtab, none is already mounted on /dev/console

 mountall: mount /dev/console [33] terminated with status 1
 mount: according to mtab, none is already mounted on /dev/tty1

 mountall: mount /dev/tty1 [35] terminated with status 1
 mount: according to mtab, none is already mounted on /dev/tty2

 mountall: mount /dev/tty2 [37] terminated with status 1
 mount: according to mtab, none is already mounted on /dev/tty3

 mountall: mount /dev/tty3 [40] terminated with status 1
 mount: according to mtab, devpts is already mounted on /dev/ptmx

 mountall: mount /dev/ptmx [42] terminated with status 1

 2010/4/6 Osvaldo Filho arquivos...@gmail.com:
   
 Ubuntu x64 10.04 (beta)

 When I try to enter the container via ssh I get this message:

 PTY allocation request failed on channel 0
 stdin: is not a tty
 

Hi Osvaldo,

what is the container configuration ?

Can you give the result of lxc-checkconfig ?





Re: [Lxc-users] Lucid host container - ignored fstab?

2010-04-11 Thread Daniel Lezcano
Roman Yepishev wrote:
 Hello all,

 I am trying to use LXC to run Ubuntu Lucid Lynx containers on Lucid Lynx
 hosts. I have succeeded in configuring the container properly so it
 starts, connects to the network etc.

 However, as described in [1], my container can remount the /srv
 partition read-only. I tried to fix it using the fstab entry that was
 given at [1] but in the end mount gives:

 r...@lemon:~$ mount
 /dev/mapper/fridge-srv on / type ext4 (rw)
 ...

 Ok, it might not work, I thought.

 However, after some time I decided to bind-mount /var/cache/apt to
 container's /var/cache/apt and now my fstab is:

 /srv/vm/lxc/lemon/rootfs /srv/vm/lxc/rootfs none bind 0 0
 /var/cache/apt/srv/vm/lxc/lemon/rootfs/var/cache/apt none bind 0 0

 During startup the debug output has the following lines: 
 lxc-start 1270888370.767 DEBUG    lxc_conf - mounted /srv/vm/lxc/lemon/rootfs 
 on /srv/vm/lxc/rootfs, type none
 lxc-start 1270888370.767 DEBUG    lxc_conf - mounted /var/cache/apt on 
 /srv/vm/lxc/lemon/rootfs/var/cache/apt, type none

 So I guess it does mount something, however later on I see the
 following: 
 lxc-start 1270888370.773 DEBUG    lxc_conf - umounted 
 '/lxc-oldrootfs-ib3iB1/srv/vm/lxc/lemon/rootfs/var/cache/apt'
   

When the container starts, it sets up the root filesystem. This is 
done with the pivot_root syscall, so the old rootfs still contains 
mount points which are duplicated in the new rootfs. The code then 
unmounts these duplicate entries in the old rootfs without impacting 
the mount points of the new rootfs.

I am not sure I am very clear :) but in other words, for each mount 
point you will see a corresponding line saying umounted 
'old-rootfs/...'; it's normal behavior.
 I am not quite sure it should umount that directory, but here's how my
 mount looks when the system is booted: 
 r...@lemon:/var/cache/apt$ mount
 /dev/mapper/fridge-srv on / type ext4 (rw)
 none on /proc type proc (rw,noexec,nosuid,nodev)
 none on /sys type sysfs (rw,noexec,nosuid,nodev)
 none on /dev/console type devpts 
 (rw,noexec,nosuid,relatime,gid=5,mode=620,ptmxmode=000)
 none on /dev/tty1 type devpts 
 (rw,noexec,nosuid,relatime,gid=5,mode=620,ptmxmode=000)
 none on /sys/fs/fuse/connections type fusectl (rw)
 none on /sys/kernel/debug type debugfs (rw)
 none on /sys/kernel/security type securityfs (rw)
 none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
 none on /dev/shm type tmpfs (rw,nosuid,nodev)
 none on /var/run type tmpfs (rw,nosuid,mode=0755)
 none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
 none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)

 Is there anything wrong with my set up? It looks like my first attempt
 to protect /srv fails due to the same issue - bind mounts do not work in
 the container for me.
   

The mount points specified in the configuration file are set up by lxc 
without using the mount command, so /etc/mtab is not updated 
(which is normal). If you want to check whether a mount point is 
effectively set up, you should check against /proc/mounts.
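A quick way to make that check from the host (a sketch; the mount point path is the one from this thread):

```shell
# /etc/mtab is only written by the mount(8) command, which lxc bypasses,
# so query the kernel's own mount table in /proc/mounts instead.
is_mounted() {
    # succeed if $1 appears as a mount point (2nd field) in /proc/mounts
    awk -v mp="$1" '$2 == mp { found = 1 } END { exit !found }' /proc/mounts
}

is_mounted /srv/vm/lxc/rootfs || echo "bind mount not present"
```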

Thanks
  -- Daniel



Re: [Lxc-users] Lucid host container - ignored fstab?

2010-04-12 Thread Daniel Lezcano
Roman Yepishev wrote:
 Hello, Daniel.
 Thanks for your reply!

 On Sun, 2010-04-11 at 09:41 +0200, Daniel Lezcano wrote:

   
 When the container starts, it setup the root filesystem. The rootfs is 
 done with the pivot_root syscall, hence the old rootfs contains the 
 mount points which are duplicates with the new rootfs. The code then 
 umount these duplicates entry in the old rootfs without impacting the 
 mount points of the new rootfs.
 
 Ok, this makes sense.

   
 The mount point specified in the configuration file is setup by lxc 
 without using the mount command, so the /etc/mtab is not updated 
 (which is normal). If you want to check if the mount point is 
 effectively setup, you should check against /proc/mounts.
 

 Unfortunately it looks like /proc/mounts provides the same info as the
 mount command for me - 
 /dev/mapper/fridge-srv / ext4 rw,relatime,barrier=1,data=ordered 0 0
 none /dev/console\040(deleted) devpts 
 rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
 none /dev/tty1\040(deleted) devpts 
 rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
 none /proc proc rw,nosuid,nodev,noexec,relatime 0 0
 none /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
 none /sys/fs/fuse/connections fusectl rw,relatime 0 0
 none /sys/kernel/debug debugfs rw,relatime 0 0
 none /sys/kernel/security securityfs rw,relatime 0 0
 none /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
 none /dev/shm tmpfs rw,nosuid,nodev,relatime 0 0
 none /var/run tmpfs rw,nosuid,relatime,mode=755 0 0
 none /var/lock tmpfs rw,nosuid,nodev,noexec,relatime 0 0
 none /lib/init/rw tmpfs rw,nosuid,relatime,mode=755 0 0

 So the entries from the lxc.mount fstab:

  /srv/vm/lxc/lemon/rootfs /srv/vm/lxc/rootfs none bind 0 0
  /var/cache/apt  /srv/vm/lxc/lemon/rootfs/var/cache/apt none bind 0 0

 do not appear to be effective.
 I tried creating the file in /var/cache/apt of the container and it did
 not appear in the host filesystem so it looks like they are really
 separated.

 Is there anything that can be done to debug this problem?
 And even more interesting, is there anybody else experiencing such kind
 of issue?
   

I was not able to reproduce the problem with the git head.
Maybe the problem was fixed between 0.6.5 and the git head, but I 
don't see which commit it could be.

What looks weird is that your log says the directory was 
effectively mounted.
Is it possible the container's distro unmounts this directory ?

Can you check by doing 'lxc-start -n lemon /bin/bash' ?
That gets rid of the system init scripts, and you can check the content of 
/proc/mounts; that will give a clear idea of where the problem is coming 
from (lxc or the OS). BTW, you will have to mount /proc in the container.

Thanks
  -- Daniel





[Lxc-users] X server consumes 100% of CPU after launching a container

2010-04-12 Thread Daniel Lezcano
Hi all,

did someone experienced the X server consuming 100% of CPU after 
launching a container on Ubuntu 9.10 ?

Thanks
  -- Daniel





Re: [Lxc-users] [Network] ioctl on socket fails in container

2010-04-14 Thread Daniel Lezcano
stephane.rivi...@regis-dgac.net wrote:
 Hi,

 I'm using LXC to run Perl scripts that generate network traffic, using the 
 Net::RawIP package.
 The scripts work perfectly well on a real host, but fail inside an LXC 
 container.

 After a few hours of testing/debugging, the origin of the problem is that 
 some basic ioctl calls on sockets fail.

 Net::RawIP relies on SIOCGIFADDR and SIOCGIFHWADDR to get the IP and MAC 
 addresses of the network interface.

 My container has 2 interfaces : 1 macvlan (normally used to generate 
 traffic) and 1 bridged (to dialogue with the host and the other 
 containers).

 In the container, these ioctl calls fail with an Invalid argument on 
 every interface, including the loopback.


 I've extracted the failing code from Net::RawIP to have a simple test 
 program (code at the end of the message).
 It just creates a socket and does the 2 ioctl calls on it.

 My LXC configuration is based on the article of Stéphane Graber 
 (http://www.stgraber.org/category/lxc):

 - host : Ubuntu 9.10 Desktop (2.6.31 kernel)
 - containers : Ubuntu 8.04 


 I really don't know what's wrong, because ifconfig relies on the same 
 basic call to get interface information...

 If anyone has any idea, I would greatly appreciate it :-)
   

Good report, thanks ! I was able to reproduce it.

The problem is coming from the kernel, the following lines are still 
there in the file net/packet/af_packet.c,

[ ... ]
   if (!net_eq(sock_net(sk), init_net))
return -ENOIOCTLCMD;

[ ... ]

in the packet_ioctl function. It shouldn't. These lines mean 
af_packet is not namespace aware, but that has not been the case 
for a long time now ... I assume just removing these two lines will 
fix the problem.

Thanks
  -- Daniel



Re: [Lxc-users] tc on container gets RTNETLINK answers: Invalid argument

2010-04-17 Thread Daniel Lezcano
Przemysław Knycz wrote:
 Hi!

   
 At the first glance I would say it is not supported by the kernel yet.
 

 Is there support for IFB or IMQ in container? 2.6.33 can support this?
   

IFB and IMQ are out of the kernel tree, right ?

I am just discovering IFB / IMQ. Are you asking whether we can move an IFB/IMQ 
device into a container ?




Re: [Lxc-users] lxc-unshare woes and signal forwarding in lxc-start

2010-05-05 Thread Daniel Lezcano
Ferenc Wagner wrote:
 Daniel Lezcano daniel.lezc...@free.fr writes:
 
 Ferenc Wagner wrote:

 I can see that lxc-unshare isn't for me: I wanted to use it to avoid
 adding the extra lxc-start process between two daemons communicating via
 signals, but it's impossible to unshare PID namespaces, so I'm doomed.
   
 There is a pending patchset to unshare the pid namespace, maybe for
 2.6.35 or 2.6.36 ...
 
 Good to know, but I'd like to stick with 2.6.32 if possible.
 
 But now I see that signal forwarding was just added to lxc-init, would
 you consider something like that in lxc-start as well?
  It's the lxc-init process that forwards the signals. The lxc-kill command sends
  a signal to pid 1 of the container. When lxc-init is the first
  process, it receives the signal and forwards it to pid 2.
 
 Yes.
 
  In the case of lxc-start, let's say 'lxc-start -n foo sleep 3600'. The
  'sleep' process is the first process of the container, hence if you run the
  'lxc-kill -n foo signum' command, that will send the signal to
  'sleep'.
 
  Sure, but it isn't me who sends the signals, but whoever spawned
  lxc-start.  I'd like to use lxc-start as a wrapper, invisible to the parent
  and the (jailed) child.  Of course I could hack around this by not
  exec-ing lxc-start but keeping the shell around, trapping all signals and
  lxc-killing them forward.  But it's kind of ugly in my opinion.

Ok, got it. I think it makes sense to forward the signals, especially 
for job management. What signals do you want to forward ?



Re: [Lxc-users] Resources sharing and limit

2010-05-06 Thread Daniel Lezcano
Yanick Emiliano wrote:
 Hi everybody,
 I have just started playing with lxc and am having some difficulties setting cpu
 and memory limits on my guests. From my searches, it seems that resource control
 is managed through cgroup files, and I think I missed something or didn't
 understand how to deal with cgroups.
 
 After reading cgroup documentation, I understand that:
 - *cpuset.cpus* indicates to a container the number of cpus available

Not exactly, it's a mask of usable cpus for the container. Let's imagine 
you have a 16 cpu machine. The content of cpuset.cpus will be 
0-15, which means cpu number 0 to cpu number 15 are usable by the cgroup.

If you want to assign cpu 1 (the second cpu) to the container, you have 
to set it with echo 1 > /cgroup/name/cpuset.cpus.

If you want to assign cpus 1, 2 and 3 to the container: echo 1,2,3 > 
/cgroup/name/cpuset.cpus.

If you want to assign cpus 0 up to 7 to the container: echo 0-7 > 
/cgroup/name/cpuset.cpus.


In the context of lxc.

lxc-execute -n foo -s lxc.cgroup.cpuset.cpus=1,2,3 myforks

etc ...

 -*cpuset.cpu_exclusive* limits the number of cpus which the container can use.
 Am I in good way?

Once you have assigned the cpus to the container, the processes of the 
container will run on these cpus only, but that does not prevent the 
other tasks of the system from running on these cpus. If you want the cpus to 
be used by the container *only*, set them 'exclusive'. This is what I 
understood.

 For example, can I tell my container that there are 2 cpus available (*
 cpuset.cpus*), but generally use one (*cpuset.cpu_exclusive*), and use the second
 one only when necessary (when there are a lot of applications to run)?

cpu on demand ? :)

Hey, externally monitor the cpu usage of the container; when it reaches a 
threshold you define, assign another cpu to the container.
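That external monitor could be a tiny script along these lines (purely illustrative; it assumes a cgroup hierarchy mounted on /cgroup with the cpuacct and cpuset subsystems, and the threshold value is arbitrary):

```shell
# Policy: widen the container's cpuset once it has burned enough cpu time.
# Kept as a pure function so the decision is easy to test in isolation.
decide_cpus() {
    usage_ns=$1 threshold_ns=$2
    if [ "$usage_ns" -gt "$threshold_ns" ]; then
        echo "0-1"      # busy: allow a second cpu
    else
        echo "0"        # idle: stay on a single cpu
    fi
}

# Hypothetical cgroup paths for a container named 'foo'
usage=$(cat /cgroup/foo/cpuacct.usage 2>/dev/null || echo 0)
cpus_file=/cgroup/foo/cpuset.cpus
if [ -w "$cpus_file" ]; then
    decide_cpus "$usage" 5000000000 > "$cpus_file"
fi
```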

 What I want is manage QoS with my containers.

Very likely you are looking for the cgroup fair scheduler; it would be 
better than dynamically assigning cpus to the container, IMHO.

http://lwn.net/Articles/240474/

It's /cgroup/name/cpu.shares

Create 2 containers,

lxc-execute -n foo -s lxc.cgroup.cpu.shares=1 /bin/bash

in another shell

lxc-execute -n bar -s /bin/bash


In both shells, do: while true; do echo -n . ; done

You will see foo displaying the dots very slowly and bar running at 
the normal speed.

As soon as bar exits or is frozen (via lxc-freeze), foo works at 
normal speed as it is no longer competing for the cpu with bar.

You can dynamically change the priority of the container with 
lxc-cgroup -n foo cpu.shares=1024 for example.


 And my last question is: can I do the same thing with memory sharing?

memory on demand :)

I will let someone else add comments here, as I am not very familiar 
with the memory cgroup.

Thanks
   -- Daniel



Re: [Lxc-users] lxc-start leaves temporary pivot dir behind

2010-05-06 Thread Daniel Lezcano
Ferenc Wagner wrote:
 Daniel Lezcano daniel.lezc...@free.fr writes:
 
 Ferenc Wagner wrote:

 While playing with lxc-start, I noticed that /tmp is infested by
 empty lxc-r* directories: [...] Ok, this name comes from lxc-rootfs
 in conf.c:setup_rootfs.  After setup_rootfs_pivot_root returns, the
 original /tmp is not available anymore, so rmdir(tmpname) at the
 bottom of setup_rootfs can't achieve much.  Why is this temporary
 name needed anyway?  Is pivoting impossible without it?
 That was put in place with chroot, before pivot_root, so the distro's
 scripts can remount their '/' without failing.

 Now we have pivot_root, I suppose we can change that to something cleaner...
 
 Like simply nuking it?  Shall I send a patch?

Sure, if we can kill it, I will be glad to take your patch :)



Re: [Lxc-users] lxc-unshare woes and signal forwarding in lxc-start

2010-05-06 Thread Daniel Lezcano
Ferenc Wagner wrote:
 Daniel Lezcano daniel.lezc...@free.fr writes:

   
 Ferenc Wagner wrote:

 
 I'd like to use lxc-start as a wrapper, invisible to the parent and
 the (jailed) child.  Of course I could hack around this by not
 exec-ing lxc-start but keeping the shell around, trap all signals and
 lxc-killing them forward.  But it's kind of ugly in my opinion.
   
 Ok, got it. I think that makes sense to forward the signals,
 especially for job management.  What signals do you want to forward?
 

 Basically all of them.  I couldn't find a definitive list of signals
 used for job control in SGE, but the following is probably a good
 approximation: SIGTTOU, SIGTTIN, SIGUSR1, SIGUSR2, SIGCONT, SIGWINCH and
 SIGTSTP.  
Yes, that could be a good starting point. I was wondering about SIGSTOP 
being sent to lxc-start, which is not forwardable of course; is that a 
problem ?

 This is application specific, though, lxc-start shouldn't have
 this hard-coded.
Ok, from the configuration then.

 Looking at the source, the SIGCHLD mechanism could be
 mimicked, but LXC_TTY_ADD_HANDLER may get in the way.
We should remove LXC_TTY_ADD_HANDLER and do everything in the SIGCHLD 
signal handler by extending it. I have a pending fix that changes 
the signal handler function a bit.
 I'm also worried
 about signals sent to the whole process group: they may be impossible to
 distinguish from the targeted signals and thus can't propagate correctly.
   
Good point. Maybe we can setpgrp the first process of the container ?



Re: [Lxc-users] lxc-start leaves temporary pivot dir behind

2010-05-06 Thread Daniel Lezcano
Ferenc Wagner wrote:
 Ferenc Wagner wf...@niif.hu writes:
 
 Daniel Lezcano dlezc...@fr.ibm.com writes:

 Ferenc Wagner wrote:

 Daniel Lezcano daniel.lezc...@free.fr writes:

 Ferenc Wagner wrote:

 While playing with lxc-start, I noticed that /tmp is infested by
 empty lxc-r* directories: [...] Ok, this name comes from lxc-rootfs
 in conf.c:setup_rootfs.  After setup_rootfs_pivot_root returns, the
 original /tmp is not available anymore, so rmdir(tmpname) at the
 bottom of setup_rootfs can't achieve much.  Why is this temporary
 name needed anyway?  Is pivoting impossible without it?
 That was put in place with chroot, before pivot_root, so the distro's
 scripts can remount their '/' without failing.

 Now we have pivot_root, I suppose we can change that to something 
 cleaner...
 Like simply nuking it?  Shall I send a patch?
 Sure, if we can kill it, I will be glad to take your patch :)
 I can't see any reason why lxc-start couldn't do without that temporary
 recursive bind mount of the original root.  If neither do you, I'll
 patch it out and see if it still flies.
 
 For my purposes the patch below works fine.  I only run applications,
 though, not full systems, so wider testing is definitely needed.
 
 Thanks,
 Feri.
 
 From 98b24c13f809f18ab8969fb4d84defe6f812b25c Mon Sep 17 00:00:00 2001
 From: Ferenc Wagner wf...@niif.hu
 Date: Thu, 6 May 2010 14:47:39 +0200
 Subject: [PATCH] no need to use a temporary directory for pivoting
 
 That was put in place before lxc-start started using pivot_root, so
 the distro scripts can remount / without problems.
 
 Signed-off-by: Ferenc Wagner wf...@niif.hu
 ---
  src/lxc/conf.c |   28 +++-
  1 files changed, 3 insertions(+), 25 deletions(-)
 
 diff --git a/src/lxc/conf.c b/src/lxc/conf.c
 index b27a11d..4379a32 100644
 --- a/src/lxc/conf.c
 +++ b/src/lxc/conf.c
 @@ -588,37 +588,15 @@ static int setup_rootfs_pivot_root(const char *rootfs, 
 const char *pivotdir)
 
  static int setup_rootfs(const char *rootfs, const char *pivotdir)
  {
 - char *tmpname;
 - int ret = -1;
 -
   if (!rootfs)
   return 0;
 
 - tmpname = tempnam("/tmp", "lxc-rootfs");
 - if (!tmpname) {
 - SYSERROR("failed to generate temporary name");
 - return -1;
 - }
 -
 - if (mkdir(tmpname, 0700)) {
 - SYSERROR("failed to create temporary directory '%s'", tmpname);
 - return -1;
 - }
 -
 - if (mount(rootfs, tmpname, "none", MS_BIND|MS_REC, NULL)) {
 - SYSERROR("failed to mount '%s'->'%s'", rootfs, tmpname);
 - goto out;
 - }
 -
 - if (setup_rootfs_pivot_root(tmpname, pivotdir)) {
 + if (setup_rootfs_pivot_root(rootfs, pivotdir)) {
   ERROR("failed to pivot_root to '%s'", rootfs);
 - goto out;
 + return -1;
   }
 
 - ret = 0;
 -out:
 - rmdir(tmpname);
 - return ret;
 + return 0;
  }
 
  static int setup_pts(int pts)

Thanks, I will test it with another patch I have in my backlog fixing 
the pivot_root. I Cc'ed the lxc-devel mailing list, which is more 
appropriate for this kind of discussion.

Thanks again.
   -- Daniel



Re: [Lxc-users] lxc-unshare woes and signal forwarding in lxc-start

2010-05-06 Thread Daniel Lezcano
Ferenc Wagner wrote:
 Daniel Lezcano daniel.lezc...@free.fr writes:

   
 Ferenc Wagner wrote:

 
 Daniel Lezcano daniel.lezc...@free.fr writes:
   
   
 Ferenc Wagner wrote:
 
 
 I'd like to use lxc-start as a wrapper, invisible to the parent and
 the (jailed) child.  Of course I could hack around this by not
 exec-ing lxc-start but keeping the shell around, trap all signals and
 lxc-killing them forward.  But it's kind of ugly in my opinion.
   
   
 Ok, got it. I think that makes sense to forward the signals,
 especially for job management.  What signals do you want to forward?
 
 Basically all of them.  I couldn't find a definitive list of signals
 used for job control in SGE, but the following is probably a good
 approximation: SIGTTOU, SIGTTIN, SIGUSR1, SIGUSR2, SIGCONT, SIGWINCH and
 SIGTSTP.  
   
 Yes, that could be a good starting point. I was wondering about
 SIGSTOP being sent to lxc-start which is not forwardable of course, is
 it a problem ?
 

 I suppose not, SIGSTOP and SIGKILL are impossible to use in application-
 specific ways.  On the other hand, SIGXCPU and SIGXFSZ should probably
 be forwarded, too.  Naturally, this business can't be perfected, but a
 good enough solution could still be valuable.
   
Agree.

 Looking at the source, the SIGCHLD mechanism could be
 mimicked, but LXC_TTY_ADD_HANDLER may get in the way.
   
 We should remove LXC_TTY_ADD_HANDLER and do everything in the signal
 handler of SIGCHLD by extending the handler. I have a pending fix
 changing a bit the signal handler function.
 

 Could you please send along your pending fix?  I'd like to experiment
 with signal forwarding, but without stomping on that.
   

Sure, no problem.

 I noticed something strange:

 # lxc-start -n jail -s 'lxc.mount.entry=/ /tmp/jail none bind 0 0' -s 
 lxc.rootfs=/tmp/jail -s lxc.pivotdir=/mnt /bin/sleep 1000
 (in another terminal)
 # lxc-ps --lxc
 CONTAINERPID TTY  TIME CMD
 jail4173 pts/100:00:00 sleep
 # kill 4173
 (this does not kill the sleep!)
 # strace -p 4173
 Process 4173 attached - interrupt to quit
 restart_syscall(<... resuming interrupted call ...>) = ? ERESTART_RESTARTBLOCK 
 (To be restarted)
 --- SIGTERM (Terminated) @ 0 (0) ---
 Process 4173 detached
 # lxc-ps --lxc
 CONTAINERPID TTY  TIME CMD
 jail4173 pts/100:00:00 sleep
 # fgrep -i sig /proc/4173/status 
 SigQ: 1/16382
 SigPnd:   
 SigBlk:   
 SigIgn:   
 SigCgt:   
 # kill -9 4173

 That is, the jailed sleep process could be killed by SIGKILL only, even
 though (according to strace) SIGTERM was delivered and it isn't handled
 specially.  Why does this happen?
   

I sent a separate email for this problem in order to avoid confusion 
with the signal forwarding discussion.
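As an aside, those Sig* lines in /proc/PID/status are 64-bit hex masks in which bit N-1 stands for signal N; a small helper (an illustrative sketch, not part of lxc) makes them readable:

```shell
# Print the signal numbers whose bits are set in a /proc/PID/status mask.
sig_bits() {
    mask=$(printf '%d' "0x$1")          # hex string -> decimal integer
    n=1
    while [ "$n" -le 64 ]; do
        [ $(( (mask >> (n - 1)) & 1 )) -eq 1 ] && echo "$n"
        n=$((n + 1))
    done
    return 0
}

sig_bits 0000000000000102    # prints 2 and 9, i.e. SIGINT and SIGKILL
```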

 I'm also worried about signals sent to the whole process group: they
 may be impossible to distinguish from the targeted signals and thus
 can't propagate correctly.
   
   
 Good point. Maybe we can setpgrp the first process of the container?
 

 We've got three options:
   A) do nothing, as now
   B) forward to our child
   C) forward to our child's process group

 The signal could arrive because it was sent to
   1) the PID of lxc-start
   2) the process group of lxc-start

 If we don't put the first process of the container into a new process
 group (as now), this is what happens:

       A            B                      C
   1   swallowed    OK                     others also killed
   2   OK           child gets extra       everybody gets extra

 If we put the first process of the container into a new process group:

       A            B                      C
   1   swallowed    OK                     others also killed
   2   swallowed    only the child killed  OK

 Neither is a clear winner, although the latter is somewhat more
 symmetrical.  I'm not sure about wanting all this configurable...
   
Hmm ... maybe Greg (he's an expert on signals and processes) has an 
idea of how to deal with that.

  -- Daniel



Re: [Lxc-users] lxc-start leaves temporary pivot dir behind

2010-05-08 Thread Daniel Lezcano

Ferenc Wagner wrote:
 Ferenc Wagner wf...@niif.hu writes:
 [...]
 For my purposes the patch below works fine.  I only run applications,
 though, not full systems, so wider testing is definitely needed.
 [...]
We can't simply remove it, because then pivot_root returns EBUSY.

I suppose it's coming from:
"new_root and put_old must not be on the same file system as the current 
root."

But as we will pivot_root right afterwards, we won't reuse the real rootfs, 
so we can safely use the host /tmp.


I tried the following patch and it worked.

Comments ?



Subject: no need to use a temporary directory for pivoting

From: Ferenc Wagner wf...@niif.hu

Ferenc Wagner wf...@niif.hu writes:

 [...]
 I can't see any reason why lxc-start couldn't do without that temporary
 recursive bind mount of the original root.  If neither do you, I'll
 patch it out and see if it still flies.

For my purposes the patch below works fine.  I only run applications,
though, not full systems, so wider testing is definitely needed.

Thanks,
Feri.

From 98b24c13f809f18ab8969fb4d84defe6f812b25c Mon Sep 17 00:00:00 2001
Date: Thu, 6 May 2010 14:47:39 +0200

That was put in place before lxc-start started using pivot_root, so
the distro scripts can remount / without problems.

Signed-off-by: Ferenc Wagner wf...@niif.hu
---
 src/lxc/conf.c |   27

Re: [Lxc-users] lxc-start leaves temporary pivot dir behind

2010-05-10 Thread Daniel Lezcano
Ferenc Wagner wrote:
 Daniel Lezcano daniel.lezc...@free.fr writes:

   
 Ferenc Wagner wrote:

 
 Ferenc Wagner wf...@niif.hu writes:
   
   
 Daniel Lezcano dlezc...@fr.ibm.com writes:
 
 
 Ferenc Wagner wrote:

   
 Daniel Lezcano daniel.lezc...@free.fr writes:

 
 Ferenc Wagner wrote:

   
 While playing with lxc-start, I noticed that /tmp is infested by
 empty lxc-r* directories: [...] Ok, this name comes from lxc-rootfs
 in conf.c:setup_rootfs.  After setup_rootfs_pivot_root returns, the
 original /tmp is not available anymore, so rmdir(tmpname) at the
 bottom of setup_rootfs can't achieve much.  Why is this temporary
 name needed anyway?  Is pivoting impossible without it?
 
 
 That was put in place with chroot, before pivot_root, so the distro's
 scripts can remount their '/' without failing.

 Now we have pivot_root, I suppose we can change that to something 
 cleaner...
   
   
 Like simply nuking it?  Shall I send a patch?
 
 
 Sure, if we can kill it, I will be glad to take your patch :)
   
   
 I can't see any reason why lxc-start couldn't do without that temporary
 recursive bind mount of the original root.  If neither do you, I'll
 patch it out and see if it still flies.
 
 For my purposes the patch below works fine.  I only run applications,
 though, not full systems, so wider testing is definitely needed.

 From 98b24c13f809f18ab8969fb4d84defe6f812b25c Mon Sep 17 00:00:00 2001
 From: Ferenc Wagner wf...@niif.hu
 Date: Thu, 6 May 2010 14:47:39 +0200
 Subject: [PATCH] no need to use a temporary directory for pivoting
 [...]
   
 We can't simply remove it because of the pivot_root which returns EBUSY.
 I suppose it's coming from: new_root and put_old must not be on the
 same file system as the current root.
 

 Hmm, this could indeed be a problem if lxc.rootfs is on the current root
 file system.  I didn't consider pivoting to the same FS, but looks like
 this is the very reason for the current complexity in the architecture.

 Btw. is this really a safe thing to do, to pivot into a subdirectory of
 a file system?  Is there really no way out of that?
   
It seems pivot_root on the same fs works if an intermediate mount point 
is inserted between old_root and new_root, but at the cost of a lazy 
unmount when we unmount the old rootfs filesystems. I didn't find a 
better solution to allow the rootfs to be a directory with a full file 
system tree.
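The trick being described can be sketched as follows (illustrative commands, not lxc's actual code; they need root and a private mount namespace, and the paths are made up):

```
# make new_root a mount point of its own, even when the rootfs is a
# plain directory on the host's root file system
mkdir -p /usr/lib/lxc
mount --rbind /path/to/rootfs /usr/lib/lxc

# swap the root of this mount namespace; put_old must be under new_root
cd /usr/lib/lxc
mkdir -p oldroot
pivot_root . oldroot

# the old root is still in use, so it can only be detached lazily
umount -l /oldroot
```

pivot_root(8) is the util-linux wrapper around the pivot_root(2) system call; the EBUSY constraint quoted earlier is satisfied because the bind mount makes new_root a mount distinct from the current root.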

I am looking at making it possible to specify a rootfs which is a file 
system image or a block device. I am not sure this should be done by lxc 
but looking forward ...

 But as we will pivot_root right after, we won't reuse the real rootfs,
 so we can safely use the host /tmp.
 

 That will cause problems if rootfs is under /tmp, don't you think?
   
Right :)

 Actually, I'm not sure you can fully solve this.  If rootfs is a
 separate file system, this is only much ado about nothing.  If rootfs
 isn't a separate filesystem, you can't automatically find a good place
 and also clean it up. 
Maybe a single /tmp/lxc directory could be used, as the mount points are 
private to the container. So it would be acceptable to have a single 
directory for N containers, no?

 So why not require that rootfs is a separate
 filesystem, and let the user deal with it by doing the necessary bind
 mount in the lxc config?
   
Hmm, that will break existing user configurations.

We can add a WARNING if rootfs is not a separate file system and let the 
user do whatever he wants; IMO if it is well documented it is not a 
problem.

 --- lxc.orig/src/lxc/conf.c
 +++ lxc/src/lxc/conf.c
 @@ -581,37 +581,24 @@ static int setup_rootfs_pivot_root(const
  
  static int setup_rootfs(const char *rootfs, const char *pivotdir)
  {
 -	char *tmpname;
 -	int ret = -1;
 +	const char *tmpfs = "/tmp";
  
  	if (!rootfs)
  		return 0;
  
 -	tmpname = tempnam("/tmp", "lxc-rootfs");
 -	if (!tmpname) {
 -		SYSERROR("failed to generate temporary name");
 +	if (mount(rootfs, tmpfs, "none", MS_BIND|MS_REC, NULL)) {
 +		SYSERROR("failed to mount '%s'->'%s'", rootfs, "/tmp");

 You probably meant tmpfs instead of /tmp in SYSERROR() above.
   

yep.


--

___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] Ltsp on LXC: PXE-E32: TFTP open Timeout

2010-05-12 Thread Daniel Lezcano
Osvaldo Filho wrote:
 I get this error on thinclient boot: PXE-E32: TFTP open timeout
   
Can you give the version and the configuration of lxc, the host 
configuration, the kernel version, and the circumstances of the problem ?

Thanks
  -- Daniel



Re: [Lxc-users] Ltsp on LXC: PXE-E32: TFTP open Timeout

2010-05-12 Thread Daniel Lezcano
Osvaldo Filho wrote:
 Host Environment:
 Linux ltspserver01 2.6.32-22-generic #33-Ubuntu SMP Wed Apr 28
 13:28:05 UTC 2010 x86_64 GNU/Linux
 lxc   0.6.5-1
 Linux containers userspace tools

 ===
 lxc.utsname = ltsp2
 lxc.tty = 4
 lxc.pts = 1024
 lxc.network.type = veth
 lxc.network.flags = up
 lxc.network.link = br0
 lxc.network.name = eth0
 lxc.network.mtu = 1500
 lxc.rootfs = ./rootfs
 lxc.mount = ./fstab.lxc
 #lxc.cgroup.cpuset.cpus = 0
 lxc.network.ipv4 = 192.168.6.2/24
 lxc.network.hwaddr = 0a:22:3c:4d:55:ff
 #lxc.cgroup.devices.deny = a # Deny all access to devices
 lxc.cgroup.devices.allow = c 1:3 rwm # /dev/null
 lxc.cgroup.devices.allow = c 1:5 rwm # /dev/zero
 lxc.cgroup.devices.allow = c 5:1 rwm # /dev/console
 lxc.cgroup.devices.allow = c 5:0 rwm # /dev/tty
 lxc.cgroup.devices.allow = c 5:1 rwm # /dev/console
 lxc.cgroup.devices.allow = c 4:0 rwm # /dev/tty0
 lxc.cgroup.devices.allow = c 4:1 rwm # /dev/tty1
 lxc.cgroup.devices.allow = c 4:2 rwm # /dev/tty2
 lxc.cgroup.devices.allow = c 4:3 rwm # /dev/tty3
 lxc.cgroup.devices.allow = c 1:9 rwm # /dev/urandom
 lxc.cgroup.devices.allow = c 1:8 rwm # /dev/random
 lxc.cgroup.devices.allow = c 136:* rwm # /dev/pts/*
 lxc.cgroup.devices.allow = c 5:2 rwm # /dev/pts/ptmx
 lxc.cgroup.devices.allow = c 254:0 rwm # /dev/rtc0
 ===


 When thinclient boot, the error occur:
 I get this error on thinclient boot: PXE-E32: TFTP open timeout

 This occurs when a firewall blocks inetd, but I did not set up a firewall.
   
Can you give the network configuration of the host too ?

brctl show br0 and ifconfig

Thanks
  -- Daniel



Re: [Lxc-users] Ltsp on LXC: PXE-E32: TFTP open Timeout

2010-05-12 Thread Daniel Lezcano
Osvaldo Filho wrote:
 Sorry:

 /etc/network/interfaces

 auto lo
 iface lo inet loopback

 auto eth0
 iface eth0 inet static
  address 10.0.3.10
  netmask 255.255.255.0
  gateway 10.0.3.1
  dns-nameservers 10.0.3.1 192.168.1.1

  auto br0
  iface br0 inet static
   address 192.168.6.1
   netmask 255.255.255.0
   broadcast 192.168.6.255
   network 192.168.6.0
   #gateway 192.168.6.1
   dns-nameservers 192.168.6.1
 bridge_ports eth2
 bridge_stp off
   

Is there a forwarding between br0 and eth0 ?

At the first glance the configuration looks ok.

I don't know ltsp, so I may not be able to help much.

The first thing to do is to check that the traffic between the container 
and br0 is ok.

You can check that by running 'tcpdump -i br0' and watching for outgoing 
packets when the thinclient does tftp.

Assuming there is forwarding between br0 and eth0:

If the packets are visible on br0, maybe iptables is blocking 
something.

Check:
 * /proc/sys/net/ipv4/ip_forward is set to 1
 * you have the right iptables rules to do the masquerading for br0.

Another point: if tftp does broadcast, I am not sure the packets will 
be forwarded.
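For reference, a minimal masquerading setup would look like the following, using this thread's names (192.168.6.0/24 behind br0, host uplink eth0 — adjust to your interfaces); both commands must run as root on the host:

```
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 192.168.6.0/24 ! -o br0 -j MASQUERADE
```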


 2010/5/12 Osvaldo Filho arquivos...@gmail.com:
   
 No problem and thank you very much!

 =

 br0   Link encap:Ethernet  Endereço de HW 00:10:18:4f:8e:fc
  inet end.: 192.168.6.1  Bcast:192.168.6.255  Masc:255.255.255.0
  endereço inet6: fe80::210:18ff:fe4f:8efc/64 Escopo:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Métrica:1
  pacotes RX:17528009 erros:0 descartados:0 excesso:0 quadro:0
  Pacotes TX:10890138 erros:0 descartados:0 excesso:0 portadora:0
  colisões:0 txqueuelen:0
  RX bytes:6134616116 (6.1 GB) TX bytes:24160294112 (24.1 GB)

 eth2  Link encap:Ethernet  Endereço de HW 00:10:18:4f:8e:fc
  endereço inet6: fe80::210:18ff:fe4f:8efc/64 Escopo:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Métrica:1
  pacotes RX:17545797 erros:0 descartados:0 excesso:0 quadro:0
  Pacotes TX:25377694 erros:0 descartados:0 excesso:0 portadora:0
  colisões:0 txqueuelen:1000
  RX bytes:6554180607 (6.5 GB) TX bytes:25219335827 (25.2 GB)
  IRQ:38 Memória:d600-d6012800

 virbr0Link encap:Ethernet  Endereço de HW 0e:0c:3a:d3:3e:6e
  inet end.: 192.168.122.1  Bcast:192.168.122.255  Masc:255.255.255.0
  endereço inet6: fe80::c0c:3aff:fed3:3e6e/64 Escopo:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Métrica:1
  pacotes RX:0 erros:0 descartados:0 excesso:0 quadro:0
  Pacotes TX:1689 erros:0 descartados:0 excesso:0 portadora:0
  colisões:0 txqueuelen:0
  RX bytes:0 (0.0 B) TX bytes:330948 (330.9 KB)

 ---

 iptables -L

 Chain INPUT (policy ACCEPT)
 target prot opt source   destination
 ACCEPT udp  --  anywhere anywhereudp dpt:domain
 ACCEPT tcp  --  anywhere anywheretcp dpt:domain
 ACCEPT udp  --  anywhere anywhereudp dpt:bootps
 ACCEPT tcp  --  anywhere anywheretcp dpt:bootps

 Chain FORWARD (policy ACCEPT)
 target prot opt source   destination
 ACCEPT all  --  anywhere 192.168.122.0/24state
 RELATED,ESTABLISHED
 ACCEPT all  --  192.168.122.0/24 anywhere
 ACCEPT all  --  anywhere anywhere
 REJECT all  --  anywhere anywhere
 reject-with icmp-port-unreachable
 REJECT all  --  anywhere anywhere
 reject-with icmp-port-unreachable

 Chain OUTPUT (policy ACCEPT)
 target prot opt source   destination

 =

 The problem, perhaps, is with openbsd-inetd.


 2010/5/12 Daniel Lezcano daniel.lezc...@free.fr:
 
 Osvaldo Filho wrote:
   
 bridge name bridge id   STP enabled interfaces
 br0 8000.0010184f8efc   no  eth2
 virbr0  8000.   yes

 
 *and* ifconfig :)

 Thanks
  -- Daniel
   
 LXC container use br0

 2010/5/12 Daniel Lezcano daniel.lezc...@free.fr:

 
 Osvaldo Filho wrote:

   
 Host Environment:
 Linux ltspserver01 2.6.32-22-generic #33-Ubuntu SMP Wed Apr 28
 13:28:05 UTC 2010 x86_64 GNU/Linux
 lxc   0.6.5-1
 Linux containers userspace tools

 ===
 lxc.utsname = ltsp2
 lxc.tty = 4
 lxc.pts = 1024
 lxc.network.type = veth
 lxc.network.flags = up
 lxc.network.link = br0
 lxc.network.name = eth0
 lxc.network.mtu = 1500
 lxc.rootfs = ./rootfs
 lxc.mount = ./fstab.lxc
 #lxc.cgroup.cpuset.cpus = 0
 lxc.network.ipv4 = 192.168.6.2/24
 lxc.network.hwaddr = 0a:22:3c:4d:55:ff
 #lxc.cgroup.devices.deny = a # Deny all access to devices
 lxc.cgroup.devices.allow = c 1:3 rwm # /dev/null
 lxc.cgroup.devices.allow = c 1:5 rwm # /dev/zero
 lxc.cgroup.devices.allow = c 5:1 rwm # /dev

Re: [Lxc-users] lxc-start leaves temporary pivot dir behind

2010-05-12 Thread Daniel Lezcano
Ferenc Wagner wrote:
 Daniel Lezcano daniel.lezc...@free.fr writes:

   
 Ferenc Wagner wrote:

 
 Daniel Lezcano daniel.lezc...@free.fr writes:
   
   
 Ferenc Wagner wrote:
 
 
 Actually, I'm not sure you can fully solve this.  If rootfs is a
 separate file system, this is only much ado about nothing.  If rootfs
 isn't a separate filesystem, you can't automatically find a good
 place and also clean it up.
   
 Maybe a single /tmp/lxc directory may be used as the mount points are
 private to the container. So it would be acceptable to have a single
 directory for N containers, no ?
 
 Then why not /usr/lib/lxc/pivotdir or something like that?  Such a
 directory could belong to the lxc package and not clutter up /tmp.  As
 you pointed out, this directory would always be empty in the outer name
 space, so a single one would suffice.  Thus there would be no need to
 clean it up, either.
   
 Agree. Shall we consider $(prefix)/var/run/lxc ?
 

 Hmm, /var/run/lxc is inconvenient, because it disappears on each reboot
 if /var/run is on tmpfs.  This isn't variable data either, that's why I
 recommended /usr above.
   
Good point. I will change that to /usr/$(libdir)/lxc and let the distro 
maintainer choose a better place with the configure option if he wants.

 Now the question is: if rootfs is a separate file system (which
 includes bind mounts), is the superfluous rbind of the original root
 worth skipping, or should we just do it to avoid needing an extra
 code path?
   
 Good question. IMO, skipping the rbind is ok for this case but it may
 be interesting from a coding point of view to have a single place
 identified for the rootfs (especially for mounting an image). I will
 cook a patchset to fix the rootfs location and then we can look at
 removing the superfluous rbind.
 

 I'm testing your patchset now.  So far it seems to work as advertised.
   
Cool, thanks for testing.




Re: [Lxc-users] lxc-start leaves temporary pivot dir behind

2010-05-13 Thread Daniel Lezcano
Ferenc Wagner wrote:
 Daniel Lezcano daniel.lezc...@free.fr writes:


 Ferenc Wagner wrote:


 Daniel Lezcano daniel.lezc...@free.fr writes:


 Ferenc Wagner wrote:


 Daniel Lezcano daniel.lezc...@free.fr writes:


 Ferenc Wagner wrote:


 Actually, I'm not sure you can fully solve this.  If rootfs is a
 separate file system, this is only much ado about nothing.  If rootfs
 isn't a separate filesystem, you can't automatically find a good
 place and also clean it up.

 Maybe a single /tmp/lxc directory may be used as the mount points are
 private to the container. So it would be acceptable to have a single
 directory for N containers, no ?

 Then why not /usr/lib/lxc/pivotdir or something like that?  Such a
 directory could belong to the lxc package and not clutter up /tmp.  As
 you pointed out, this directory would always be empty in the outer name
 space, so a single one would suffice.  Thus there would be no need to
 clean it up, either.

 Agree. Shall we consider $(prefix)/var/run/lxc ?

 Hmm, /var/run/lxc is inconvenient, because it disappears on each reboot
 if /var/run is on tmpfs.  This isn't variable data either, that's why I
 recommended /usr above.

 Good point. I will change that to /usr/$(libdir)/lxc and let the
 distro maintainer choose a better place with the configure option if
 he wants.


 I'm not sure what libdir is, doesn't this conflict with lxc-init?
 That's in the /usr/lib/lxc directory, at least in Debian.  I'd vote for
 /usr/lib/lxc/oldroot in this setting.

$(libdir) is the variable defined by configure --libdir=path
Usually it is /usr/lib on 32bits or /usr/lib64 on 64bits.

lxc-init is located in $(libexecdir), that is /usr/libexec or /libexec 
depending on the configure setting.
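For example, a configure invocation reproducing the 64-bit layout discussed in this thread would look roughly like this (paths illustrative):

```
./configure --libdir=/usr/lib64 --libexecdir=/usr/libexec
```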








Re: [Lxc-users] LXC a feature complete replacement of OpenVZ?

2010-05-13 Thread Daniel Lezcano
On 05/13/2010 06:17 PM, Christian Haintz wrote:
 Hi,

 At first LXC seams to be a great work from what we have read already.

 There are still a few open questions for us (we are currently running
 dozens of OpenVZ Hardwarenodes).

 1) OpenVZ in the long-term seams to be a dead end. Will LXC be a
 feature complete replacement for OpenVZ in the 1.0 Version?

Theoretically speaking, LXC is not planned to be a replacement for OpenVZ. 
When a specific functionality is missing, it is added. Sometimes that 
needs kernel development, implying an attempt at mainline inclusion.

When the users of LXC want a new functionality, they send a patchset or 
ask if it is possible to implement it. Often, the modifications need a 
kernel change, and that takes some time to reach the upstream kernel 
(eg. sysfs per namespace).

Practically speaking, LXC evolves following the needs of its users (eg. 
entering a container), and that may lead to a replacement of OpenVZ.

Version 1.0 is planned to be a stable version, with documentation 
and a frozen API.

 As of the current version
 2) is there IPTable support, any sort of control like the OpenVZ
 IPTable config.

The iptables support in the container depends on the kernel version 
you are using. AFAICS, iptables per namespace is implemented now.

 3) Is there support for tun/tap device

The drivers are ready to be used in the container, but sysfs is not, and 
that unfortunately prevents creating a tun/tap device in a container.

sysfs per namespace is on its way to being merged upstream.

 4) is there support for correct memory info and disk space info (are
 df and top are showing the container ressources or the resources of
 the hardwarenode)

No, and that will not be supported by the kernel, but it is possible to 
do it with FUSE. I did a prototype here:

http://lxc.sourceforge.net/download/procfs/procfs.tar.gz

But I gave up on it because I have too many things to do with lxc and 
not enough free time. Anyone is welcome to improve it ;)

 5) is there something compared to the fine grained controll about
 memory resources like vmguarpages/privmpages/oomguarpages in LXC?

I don't know the controls you are talking about, but LXC is plugged 
into cgroups. One of the cgroup subsystems is the memory controller, 
which allows assigning an amount of physical memory and swap space to 
the container. There are some mechanisms for notification as well. 
There are other resource controllers like io (new), freezer, cpuset, 
net_cls and the device whitelist (googling one of these names + lwn 
may help).
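For example, with the memory controller enabled, limits can be set from the container configuration through the generic lxc.cgroup.* passthrough keys (a sketch; the available control files depend on your kernel's cgroup support):

```
lxc.cgroup.memory.limit_in_bytes = 536870912
lxc.cgroup.memory.memsw.limit_in_bytes = 1073741824
```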

 6) is LXC production ready?

yes and no :)

If you plan to run several web servers (not a full system) or non-root 
applications, then yes, IMHO it is ready for production.

If you plan to run a full system and you have very aggressive users 
inside with root privilege, then it may not be ready yet. If you set up 
a full system and you plan to have only the administrator of the host 
be the administrator of the containers, and the users inside the 
container are never root, then IMHO it is ready, if you accept for 
example that the iptables logs go to the host system.

Really, it depends on what you want to do ...

I don't know OpenVZ very well, but AFAIK it is focused on system 
containers, while LXC can set up different levels of isolation, allowing 
you to run an application sharing a filesystem or a network for example, 
as well as running a full system. But this flexibility is a drawback 
too, because the administrator of the container needs a bit of knowledge 
of system administration and container technology.

 Thanks in Advance, and we are looking forward to switch to Linux
 Containers when all Questions are answered with yes :-)

Hope that helped.

Thanks
   -- Daniel



Re: [Lxc-users] [lxc-devel] lxc-start and lucid container

2010-05-19 Thread Daniel Lezcano
On 05/17/2010 05:00 PM, Bodhi Zazen wrote:

Sorry for the delay, I missed your mail in the spam filter :/



Re: [Lxc-users] help with root mount parameters

2010-05-26 Thread Daniel Lezcano
On 05/26/2010 11:07 AM, atp wrote:
 Thanks to both for the replies.

 This now makes sense. I've specified the rootfs.mount in the container
 config, and it gets past there and boots ok.

 Just in case anyone else cares, a very handy debug log can be had by
 using this command.

 lxc-start --logpriority=TRACE -o /tmp/trace.log --name my_container

 It was not clear from the man page that to get the higher levels of
 verbosity (DEBUG|TRACE) you need to specify an output file, rather
 than just stderr.


Right.

I used to have this options:

lxc-start -l DEBUG -o $(tty) -s lxc.console=$(tty) -n name


 The autoconf maze has me befuddled as well. I tried briefly to see where
 VERSION and PACKAGE_VERSION were defined but to no avail.


They should be defined in src/config.h (generated by autoconf).

 In answer to Daniels question;

 islab01 is an FC12 machine, running 2.6.34 (for the macvlan stuff)

 [r...@islab01 lxc]# grep /usr/lib64/lxc *
 [r...@islab01 lxc]# head config.log
 This file contains any messages produced by compilers while
 running configure, to aid debugging if configure makes a mistake.

 It was created by lxc configure 0.6.5, which was
 generated by GNU Autoconf 2.63.  Invocation command line was

$ ./configure

 ## - ##
 ## Platform. ##
 [r...@islab01 lxc]# grep /usr/lib64/lxc *
 [r...@islab01 lxc]# grep LXCROOTFS *
 config.log:LXCROOTFSMOUNT='/usr/local/lib/lxc'
 config.log:#define LXCROOTFSMOUNT /usr/local/lib/lxc
 config.status:S[LXCROOTFSMOUNT]=/usr/local/lib/lxc
 config.status:D[LXCROOTFSMOUNT]= \/usr/local/lib/lxc\
 configure:LXCROOTFSMOUNT
 configure:EXP_VAR=LXCROOTFSMOUNT
 configure:LXCROOTFSMOUNT=$full_var
 configure:#define LXCROOTFSMOUNT $LXCROOTFSMOUNT
 configure.ac:AS_AC_EXPAND(LXCROOTFSMOUNT, ${with_rootfs_path})
 configure.ac:AH_TEMPLATE([LXCROOTFSMOUNT], [lxc default rootfs mount
 point])
 configure.ac:AC_DEFINE_UNQUOTED(LXCROOTFSMOUNT, $LXCROOTFSMOUNT)
 Makefile:LXCROOTFSMOUNT = /usr/local/lib/lxc
 Makefile.in:LXCROOTFSMOUNT = @LXCROOTFSMOUNT@

 [r...@islab01 lxc]# ls -l /usr/local/lib/lxc /usr/lib64/lxc
 ls: cannot access /usr/local/lib/lxc: No such file or directory
 ls: cannot access /usr/lib64/lxc: No such file or directory

 from the trace.log
lxc-start 1274794074.668 WARN lxc_conf - failed to mount
 '/dev/pts/2'->'./rootfs.test/dev/tty2'
lxc-start 1274794074.668 WARN lxc_conf - failed to mount
 '/dev/pts/3'->'./rootfs.test/dev/tty3'
lxc-start 1274794074.668 WARN lxc_conf - failed to mount
 '/dev/pts/4'->'./rootfs.test/dev/tty4'
lxc-start 1274794074.668 INFO lxc_conf - 4 tty(s) has been
 setup
lxc-start 1274794074.668 ERRORlxc_conf - No such file or
 directory - failed to access to '/usr/lib64/lxc', check it is present
lxc-start 1274794074.668 ERRORlxc_conf - failed to set rootfs
 for 'test'
lxc-start 1274794074.668 ERRORlxc_start - failed to setup the
 container

 So something is going awry, but I'm really puzzled as to how, having
 read the setup_rootfs section. It's probably not worth chasing down at
 the moment.


Can you check the LXCROOTFSMOUNT macro in src/config.h and src/config.h.in ?

Thanks
   -- Daniel



Re: [Lxc-users] lxc-unshare woes and signal forwarding in lxc-start

2010-05-26 Thread Daniel Lezcano
On 05/13/2010 02:22 PM, Ferenc Wagner wrote:


[ ... ]
 I attached a proof-of-concept patch which seems to work good enough for
 me.  The function names are somewhat off now, but I leave that for later


Ferenc,

do you have definitive version for this ?
I have some modifications in the start function and they may conflict 
with your patch.

Thanks
   -- Daniel



Re: [Lxc-users] help with root mount parameters

2010-05-26 Thread Daniel Lezcano
On 05/26/2010 08:10 PM, Brian K. White wrote:
 On 5/26/2010 4:54 AM, Ralf Schmitt wrote:
 Daniel Lezcanodlezc...@fr.ibm.com   writes:


 This is internal stuff of lxc. Before this commit, several temporary
 directories were created and never destroyed, polluting '/tmp'.

 In order to do pivot_root, we have to mount --bind the rootfs somewhere.
 This 'somewhere' was a temporary directory and now it is
 /usr/lib64/lxc by default (chosen at configure time), or optionally
 configurable with lxc.rootfs.mount.

 /var/run/lxc looks like a much better choice to me.


 As has been discussed pretty thoroughly already, this is not variable
 data but a completely fixed, static bit of package-specific support
 infrastructure. It's just like a package-specific library or other
 component file whose name never changes and which, as a single file,
 services all running instances concurrently.
 The library or other support file just happens to be an empty
 directory in this case.
 As such, something/lib/package/something is really the most correct
 place. Just pretend you can't hear the word "temporary" in the
 description of its purpose.

 Maybe the install target that creates this directory could also place a
 small text file in the directory explaining the directory's purpose?
 This directory must exist, even though no contents are ever placed
 here. see http: for details
 That shouldn't affect its use as a mount point and helps the system to
 self-document.

Good idea.



Re: [Lxc-users] Dreadful network performance, only to host from container

2010-05-27 Thread Daniel Lezcano
On 05/27/2010 10:21 AM, Toby Corkindale wrote:
 On 27/05/10 18:06, atp wrote:
 As requested:



 ifconfig br0 from the host
  
 br0   Link encap:Ethernet  HWaddr 00:1e:37:4d:8c:d8
 inet addr:192.168.1.206  Bcast:192.168.1.255  Mask:255.255.255.0
 inet6 addr: fe80::21e:37ff:fe4d:8cd8/64 Scope:Link
 UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
 RX packets:3867723 errors:0 dropped:0 overruns:0 frame:0
 TX packets:1849343 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:0
 RX bytes:3451303555 (3.4 GB)  TX bytes:382610461 (382.6 MB)


Can you give the routes of the host please ?

 ifconfig eth0 from the container
  
 eth0  Link encap:Ethernet  HWaddr 36:d1:4f:d9:51:59
 inet addr:192.168.1.88  Bcast:192.168.1.255  Mask:255.255.255.0
 inet6 addr: fe80::34d1:4fff:fed9:5159/64 Scope:Link
 UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
 RX packets:1416 errors:0 dropped:0 overruns:0 frame:0
 TX packets:495 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000
 RX bytes:1020033 (1.0 MB)  TX bytes:37512 (37.5 KB)




 and the version of lxc you're using.
  
 It's close to the git head, master branch.
 Last commit was 0093bb8ced5784468daf8e66783e6be3782e8fea on May 18th.
 (The version that originally shipped with ubuntu was giving me errors
 about not being able to pivot_root)


 Do you have anything special with
 the /etc/sysctl.conf?
  
 I think these came with the system, are they likely to be problematic?

 net.ipv4.conf.default.rp_filter=1
 net.ipv4.conf.all.rp_filter=1
 net.ipv4.tcp_syncookies=1
 vm.mmap_min_addr = 65536
 fs.inotify.max_user_watches = 524288
 kernel.shmmax = 38821888




 On a completely blank container with no tuning, I get with scp;

 host->container squashfs.img 100% 639MB 33.6MB/s 00:19
 container->host squashfs.img 100% 639MB 29.0MB/s 00:22

 Both tests inside the container. The limiting resource here is cpu for the
 encryption.
  
 mm, yeah, I'd be waiting all week to copy an equivalently sized file
 like that. Although if i copy it to another host on the network, then
 back again, it's all fine :/


 I'm on kernel 2.6.34/fc12 for this.
  
 I'm on 2.6.32-22/ubuntu 10.04


 thanks,
 Toby







Re: [Lxc-users] Storage with lxc

2010-05-27 Thread Daniel Lezcano
On 05/27/2010 12:05 PM, Yanick Emiliano wrote:
 Hi everybody

 I would like to know if lxc at this stage supports centralized network
 storage. I mean a storage filer, iSCSI, or AoE storage. For example, can I
 have all my rootfs on a network filer and start each VM on a specific
 storage device on the network?


I don't know, I never tried :)

Is it possible to mount one of them on a specific directory on the host, 
and then specify this directory as the rootfs?
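A sketch of that idea (made-up device and paths; the same applies to an NFS or iSCSI mount):

```
# on the host: mount the network block device somewhere
mount /dev/etherd/e0.0 /srv/lxc/vm1

# in the container config: point the rootfs at it
lxc.rootfs = /srv/lxc/vm1/rootfs
```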



Re: [Lxc-users] Dreadful network performance, only to host from container

2010-05-28 Thread Daniel Lezcano
On 05/28/2010 02:58 AM, Toby Corkindale wrote:
 On 28/05/10 05:55, Matt Bailey wrote:
 /usr/sbin/ethtool -K br0 sg off
 /usr/sbin/ethtool -K br0 tso off

 Might fix your problem, YMMV; this worked for me.

 Bam! Problem fixed.
 All I needed was the 'sg' option - tso wasn't enabled anyway.

 Now getting a healthy 15-16 mbyte/sec.

Great !

 Thanks for that..

 Is this a bug in a driver somewhere that I should, or just something one
 always needs to be aware of with LXC? (and thus should go in a FAQ)

The truth is, this is the first time I have seen this problem solved by 
this trick. I suppose it has something to do with the capabilities of 
your NIC, which the bridge inherits. Dunno ...
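To compare what the bridge inherited with the physical NIC (interface names from this thread; run as root), the offload settings can be inspected and toggled with ethtool:

```
ethtool -k br0          # list offloads (scatter-gather, tso, ...) on the bridge
ethtool -k eth0         # compare with the physical NIC
ethtool -K br0 sg off   # the workaround reported in this thread
```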

Matt,

how did you find that ? Is it a problem spotted with the other 
virtualization solutions (xen, vmware, qemu, openvz, ...) ? Do you have 
some pointers describing the problem/solution ? So we can add a FAQ with 
a good description / diagnostic of the problem.

Thanks
   -- Daniel




Re: [Lxc-users] [lxc-devel] template-script for ubuntu [lucid] containers

2010-06-01 Thread Daniel Lezcano
On 06/01/2010 09:50 PM, Wilhelm wrote:
 Am 01.06.2010 16:06, schrieb Daniel Lezcano:
 On 06/01/2010 06:04 PM, Daniel Lezcano wrote:
 On 05/30/2010 07:07 PM, Wilhelm wrote:
 Hi,

 for all interested: attached you'll find a template script for ubuntu
 containers.

 Hi Willem,

 thanks a lot for the script, I fixed some nasty things but I was happy
 to play with it :)

 Do you mind to modify the script in order to have '/var/tmp' not being

 sorry, I meant '/var/run'

 ok, changed it in the attached script (and added the patches you 
 posted and some other tweaks)


 mounted as a tmpfs, so the mechanism within lxc can 'shutdown' /
 'reboot' properly ?
 but a halt from inside the container isn't handled properly: the 
 init-process still remains ...
 Any ideas?

I added a mechanism to watch the utmp file in the container's rootfs in lxc.
This is not available for lxc 0.6.5, do you have this version ?




Re: [Lxc-users] [lxc-devel] template-script for ubuntu [lucid] containers

2010-06-01 Thread Daniel Lezcano
On 06/01/2010 10:12 PM, Wilhelm wrote:
 Am 01.06.2010 20:05, schrieb Daniel Lezcano:
 On 06/01/2010 09:50 PM, Wilhelm wrote:
 Am 01.06.2010 16:06, schrieb Daniel Lezcano:
 On 06/01/2010 06:04 PM, Daniel Lezcano wrote:
 On 05/30/2010 07:07 PM, Wilhelm wrote:
 Hi,

 for all interested: attached you'll find a template script for 
 ubuntu
 containers.

 Hi Willem,

 thanks a lot for the script, I fixed some nasty things but I was 
 happy
 to play with it :)

 Do you mind to modify the script in order to have '/var/tmp' not 
 being

 sorry, I meant '/var/run'

 ok, changed it in the attached script (and added the patches you 
 posted and some other tweaks)


 mounted as a tmpfs, so the mechanism within lxc can 'shutdown' /
 'reboot' properly ?
 but a halt from inside the container isn't handled properly: the 
 init-process still remains ...
 Any ideas?

 I added a mechanism to watch the utmp file in the container's rootfs 
 in lxc.
 This is not available for lxc 0.6.5, do you have this version ?

 No, I used latest git.

Ok, I suppose something is missing somewhere, will try to have a look at 
that tomorrow.

Thanks
   -- Daniel



Re: [Lxc-users] container shutdown

2010-06-02 Thread Daniel Lezcano
On 06/01/2010 08:27 PM, atp wrote:
 Ok,
   absolutely the last post tonight. I promise.

 I fixed the find /var/run -exec rm -f {} command in rc.sysinit.

 Now the problem is that the runlevel is written whilst things are
 still shutting down;

 /lxc/test01.dev.tradefair/rootfs/var/run/utmp MODIFY
 /lxc/test01.dev.tradefair/rootfs/var/run/utmp CLOSE_WRITE,CLOSE
 /lxc/test01.dev.tradefair/rootfs/var/run/utmp OPEN
 /lxc/test01.dev.tradefair/rootfs/var/run/utmp ACCESS
 /lxc/test01.dev.tradefair/rootfs/var/run/utmp ACCESS
 /lxc/test01.dev.tradefair/rootfs/var/run/utmp ACCESS
 /lxc/test01.dev.tradefair/rootfs/var/run/utmp ACCESS
 /lxc/test01.dev.tradefair/rootfs/var/run/utmp ACCESS
 /lxc/test01.dev.tradefair/rootfs/var/run/utmp ACCESS
 /lxc/test01.dev.tradefair/rootfs/var/run/utmp ACCESS
 /lxc/test01.dev.tradefair/rootfs/var/run/utmp ACCESS
 /lxc/test01.dev.tradefair/rootfs/var/run/utmp ACCESS
 /lxc/test01.dev.tradefair/rootfs/var/run/utmp CLOSE_NOWRITE,CLOSE
lxc-start 1275416907.960 DEBUG    lxc_utmp - utmp handler fired
lxc-start 1275416907.960 DEBUG    lxc_utmp - run level is 3/0
lxc-start 1275416907.960 DEBUG    lxc_utmp - there is 13 tasks remaining

 By the time I can cat /cgroup/machine name/tasks there's only one
 left, but when the MODIFY fires there are still tasks shutting down.

 Rather annoying after all that, looks like I'll have to find another
 way.


Is it possible that upstart respawns the services when they are killed?



Re: [Lxc-users] Questions on lxc-execute

2010-06-03 Thread Daniel Lezcano
On 06/03/2010 09:51 AM, Nirmal Guhan wrote:
 Have few questions on lxc-execute :

 1) Getting an error as :
 [r...@guhan-fedora lxc]# lxc-execute --name=centos /bin/bash
 lxc-execute: No such file or directory - failed to exec
 /usr/libexec/lxc-init
 [r...@guhan-fedora lxc]# lxc-execute --name=centos -- /bin/bash
 lxc-execute: No such file or directory - failed to exec
 /usr/libexec/lxc-init

 [r...@guhan-fedora lxc]# ls -l /usr/libexec/lxc-init
 -rwxr-xr-x. 1 root root 8004 2010-02-17 21:38 /usr/libexec/lxc-init


Hmm ... weird.
Can you send the output of strace -f, please?

Was the container previously created, and if so, did you specify a 
rootfs which may not contain /bin/bash?

 2) Can the container run only one application at a time - such as one
 instance of lxc-execute ?

No, you can run thousands of them, but you need to specify different names:

lxc-execute -n foo1 /bin/bash
lxc-execute -n foo2 /bin/bash
etc ...

 So do I have to create multiple containers if I
 have to lxc-execute multiple applications

Not necessarily, you can call lxc-execute with a configuration file, 
without creating the container beforehand.
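For example (a minimal sketch; the container name, bridge name, and 
address are hypothetical and must be adapted):

```text
# /tmp/app.conf -- hypothetical minimal configuration for a volatile
# application container
lxc.utsname = app1
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.ipv4 = 192.168.0.11/24
```

Then lxc-execute -n app1 -f /tmp/app.conf -- /bin/bash starts the 
application in a fresh container without a prior lxc-create.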

   or if I want to run lxc-start and
 lxc-execute in parallel ? From the man pages, it looks like the case but
 please clarify.


You can launch any number of containers you want. It is up to you to 
define the right configuration for each container you launch in order to 
prevent resource overlaps and conflicts.

I was able to spawn 1000 applications on the same host simultaneously, 
as well as launch 100 Debian containers with a btrfs COW filesystem.

Thanks
   -- Daniel


--
ThinkGeek and WIRED's GeekDad team up for the Ultimate 
GeekDad Father's Day Giveaway. ONE MASSIVE PRIZE to the 
lucky parental unit.  See the prize list and enter to win: 
http://p.sf.net/sfu/thinkgeek-promo
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] Questions on lxc-execute

2010-06-03 Thread Daniel Lezcano
On 06/03/2010 06:24 PM, Nirmal Guhan wrote:
 Forgot to mention previously that I am able to successfully do lxc-start on
 the same container and /bin/bash is also part of rootfs.

 Here is the strace output (pretty long). Failure seems to be at :

 [pid  2386] execve("/usr/libexec/lxc-init", ["/usr/libexec/lxc-init", "--",
 "/bin/bash"], [/* 26 vars */]) = -1 ENOENT (No such file or directory)
 [pid  2386] gettimeofday({1275567472, 880216}, NULL) = 0
 [pid  2386] write(2, "lxc-execute: ", 13) = 13
 [pid  2386] write(2, "No such file or directory - fail"..., 64) = 64

 [r...@guhan-fedora lxc]# file /usr/libexec/lxc-init
 /usr/libexec/lxc-init: ELF 32-bit LSB executable, Intel 80386, version 1
 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.18,
 stripped
 [r...@guhan-fedora lxc]#



 - Begin strace ---

[ ... ]
 [pid  2386] pivot_root(., ./lxc-oldrootfs-9mRXDz) = 0


lxc-init may not be present in the rootfs.
IMO, you should try to start your container with lxc-start instead of 
lxc-execute.

lxc-execute is mostly used for application containers, which share the 
host's file system.
AFAICS, you set up a system container, so you should use the lxc-start 
command.

The lxc-sshd script is a good example of how to set up an application 
container with a rootfs.



Re: [Lxc-users] File sharing between host and container during startup

2010-06-06 Thread Daniel Lezcano
On 06/04/2010 05:44 PM, Nirmal Guhan wrote:
 Hi,

 I tried to extend the fstab as below:

 /etc/resolv.conf  /lxc/lenny/rootfs.lenny/etc/
 resolv.conf none bind 0 0
 /test  /testdir  none bind 0 0--- I added this line

  From the host :
 # ls /testdir
 a  b  c

  From the container :
 [r...@test-fedora lenny]# chroot rootfs.lenny/
 test-fedora:/# ls /test
 test-fedora:/#

 But when I do lxc-start I get an error as :
 #lxc-start -n lencon
 lxc-start: No such file or directory - failed to mount '/test' on '/testdir'

 Basically what am trying to do is to share the host library files (/lib)
 between the containers.

 Any clues on the error above? Please let me know. Also, any better way to
 share the files between host and container will be helpful.


Hi Nirmal,

I am not sure I understand what you are trying to achieve. You created 
a system container, but you want to launch it as an application 
container. Can you describe your use case, if possible, so I can give 
more clues on how to set it up?

Thanks
   -- Daniel



Re: [Lxc-users] Set default GW

2010-06-09 Thread Daniel Lezcano
On 06/09/2010 09:15 PM, Bodhi Zazen wrote:
 Is there a way to set the default gateway in a linux container ?

 If I set an ipaddress in the config file

 lxc.utsname = foo
 lxc.network.type = veth
 lxc.network.flags = up
 lxc.network.link = br0
 lxc.network.name = eth0
 lxc.network.mtu = 1500
 lxc.network.ipv4 = 192.168.0.10/24

 These lines set the ip address of the guest.

 Now if I use it either as a container (adding a rootfs , fstab, devices and 
 running lxc-start -n foo) or as an application (using the system / and 
 lxc-execute -n foo -f /lxc/foo.config /bin/bash) , everything starts as 
 expected, I can enter the container with lxc-console, etc ...

 The problem is the default gw is not set. I have to either add the gw 
 manually (when running /bin/bash) or adding it to the init scripts (if 
 running a container)

 route add default gw 192.168.0.1 eth0

 after running the route command everything works as expected.

 So, what I am asking, is there a way of setting the default route / gw in the 
 config file, foo.config ?


Unfortunately, not yet. It is on the TODO list :/
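Until it lands in lxc, the usual workaround is to let the guest's own 
init set the gateway. For a Debian-style container (reusing the 
addresses from the example above), this would look like:

```text
# /etc/network/interfaces inside the container
auto eth0
iface eth0 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    gateway 192.168.0.1
```

Application containers started with lxc-execute have no init to do this, 
so there the route command has to be run by hand, as described above.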




Re: [Lxc-users] Copy-on-write hard-link / hashify feature

2010-06-10 Thread Daniel Lezcano
On 06/10/2010 11:25 AM, Gordan Bobic wrote:
 On 06/09/2010 11:47 PM, Daniel Lezcano wrote:

 On 06/09/2010 10:46 PM, Gordan Bobic wrote:
  
 On 06/09/2010 09:08 PM, Daniel Lezcano wrote:

 On 06/09/2010 08:45 PM, Gordan Bobic wrote:
  
 Is there a feature that allows unifying identical files between guests
 via hard-links to save both space and memory (on shared libraries)?
 VServers has a feature for this called hashify, but I haven't been able
 to find such a thing in LXC documentation. Is there such a thing?

 Obviously, I could manually do the searching and hard-linking, but this
 is dangerous since without the copy-on-write feature for such
 hard-linked files that VServers provides, it would be dangerous as any
 guest could change a file on all guests.

 Is there a way to do this safely with LXC?

 No because it is supported by the system with the btrfs cow / snapshot
 file system.

 https://btrfs.wiki.kernel.org

 You can create your btrfs filesystem, mount it somewhere in your fs,
 install a distro and then make a snapshot, that will result in a
 directory. Assign this directory as the rootfs of your container. For
 each container you want to install, create a snapshot of the initial
 installation and assign each resulting directory for a container.
  
 OK, this obviously saves the disk space. What about shared libraries
 memory conservation? Do the shared files in different snapshots have the
 same inodes?

 Yes.
  
 So this implicitly implements COW hard-linking?

I am not a btrfs expert, but if I understand correctly what you mean by 
COW hard-linking, IMO yes.

I created a btrfs image, added a file, and checked its inode; then I 
took a snapshot, modified the file in the snapshot, and checked the 
inode again: it was the same, but the file content was different.
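That experiment can be reproduced roughly as follows. This is a sketch 
only: it requires root and btrfs-progs, and the `btrfs subvolume 
snapshot` syntax is assumed (older releases used btrfsctl instead):

```shell
# Build a small loopback btrfs filesystem (requires root)
dd if=/dev/zero of=/tmp/btrfs.img bs=1M count=512
mkfs.btrfs /tmp/btrfs.img
mkdir -p /mnt/btr
mount -o loop /tmp/btrfs.img /mnt/btr

echo hello > /mnt/btr/file
stat -c %i /mnt/btr/file              # note the inode number

# Snapshot the filesystem, then diverge the copy
btrfs subvolume snapshot /mnt/btr /mnt/btr/snap
echo changed > /mnt/btr/snap/file
stat -c %i /mnt/btr/snap/file         # same inode number
cat /mnt/btr/snap/file                # different content
```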

 What about re-merging them after they get out of sync? For example, if I
 yum update, and a new glibc gets onto each of the virtual hosts, they
 will become unshared and each get different inode numbers which will
 cause them to no longer be mmap()-ed as one, thus rapidly increasing the
 memory requirements. Is there a way to merge them back together with the
 approach you are suggesting? I ask because VServer tools handle this
 relatively gracefully, and I see it as a frequently occurring usage
 pattern.

 The use case you are describing suppose the guests do not upgrade their
 os, so no need of a cow fs for some private modifications, no ?
  
 No, the use-case I'm describing treats guests pretty independently. I am
 saying that I can see a lot of cases where I might update a package in
 the guest which will cause those files to be COW-ed and unshared. I
 might then update another guest with the same package. It's files will
 not be COW-ed and unshared, too. Proceed until all guests are updated.
 now all instances of files in this package are COW-ed and unshared, but
 they are again identical files. I want to merge them back into COW
 hard-links in order to save disk-space and memory.

Ok, I see, thanks for explanation.

 I know that BTRFS has block-level deduplication feature (or will have
 such a feature soon), but that doesn't address the memory saving, does
 it? My understanding (potentially erroneous?) is that DLLs get mapped
 into same shared memory iif their inodes are the same (i.e. if the two
 DLLs are hard-linked).

Hmm, that needs to be investigated, but I don't think so.
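As a point of reference: the page cache is keyed by (device, inode), 
which is why only true hard links share a single cached, mmap()-ed copy 
of a DSO. The distinction is easy to see from the shell (a sketch using 
temporary files):

```shell
# Hard links share an inode; independent copies do not.
d=$(mktemp -d)
echo data > "$d/a"
ln "$d/a" "$d/b"      # hard link: same inode as a
cp "$d/a" "$d/c"      # separate copy: new inode
stat -c %i "$d/a" "$d/b" "$d/c"   # first two numbers match, third differs
```

Since a btrfs snapshot keeps the inode number but lives on a different 
subvolume, whether the kernel treats the two files as the same mapping 
is exactly the question to put to the btrfs maintainers.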

 VServer's hashify feature handles this unmerge-remerge scenario
 gracefully so as to preserver both the disk and memory savings. I can
 understand that BTRFS will preserve (some of) the disk savings with it's
 features, but it is not at all clear to me that it will preserve the
 memory savings.


It's an interesting question; I think we should ask the btrfs 
maintainers.

 In this case, an empty file hierarchy as a rootfs and the hosts system
 libraries, tools directories can be ro-binded-mounted in this rootfs
 with a private /etc and /home.
  
 That is an interesting idea, and might work to some extent, but it is
 rather inflexible compared to the VServer alternative that is
 effectively fully dynamic.


Do you have a pointer explaining this feature?

Thanks
   -- Daniel



Re: [Lxc-users] Copy-on-write hard-link / hashify feature

2010-06-11 Thread Daniel Lezcano
On 06/10/2010 10:54 PM, Gordon Henderson wrote:
 On Thu, 10 Jun 2010, John Drescher wrote:


 BTW, a second option is lessfs.

 http://www.lessfs.com/wordpress/?page_id=50
  
 What about the KSM kernel option? It's aimed at KVM I think and in the
 kernel from 2.6.32. See:

http://lwn.net/Articles/306704/
 and
http://lwn.net/Articles/330589/

 Not sure if that could be used to help here - it seems a bit of a
 retrospective way to find data duplications - assuming we could enable it
 for whole containers...


KSM is enabled on my Ubuntu 10.04. When I do a compilation, ksmd takes 
more CPU than the compilation itself and is constantly eating 10-30% of 
my CPU (Intel(R) Core(TM)2 Duo CPU T9500 @ 2.60GHz). So I disabled it 
for good ...





Re: [Lxc-users] Copy-on-write hard-link / hashify feature

2010-06-11 Thread Daniel Lezcano
On 06/11/2010 11:08 AM, Gordan Bobic wrote:
 On 06/11/2010 09:57 AM, Daniel Lezcano wrote:
 On 06/10/2010 10:54 PM, Gordon Henderson wrote:
 On Thu, 10 Jun 2010, John Drescher wrote:


 BTW, a second option is lessfs.

 http://www.lessfs.com/wordpress/?page_id=50

 What about the KSM kernel option? It's aimed at KVM I think and in the
 kernel from 2.6.32. See:

  http://lwn.net/Articles/306704/
 and
  http://lwn.net/Articles/330589/

 Not sure if that could be used to help here - it seems a bit of a
 retrospective way to find data duplications - assuming we could enable it
 for whole containers...


 KSM is enabled on my ubuntu 10.04. When I do a compilation, ksm takes
 more cpu than the compilation itself and is always eating 10-30% of my
 cpu (Intel(R) Core(TM)2 Duo CPU T9500  @ 2.60GHz). So I disabled it
 definitively ...

 Are you saying that KSM is performing memory de-duplication on bare
 metal, rather than inside a KVM VM? That can't be right.

 My guess that you have it misconfigured to be scanning the memory too
 frequently and it's spinning empty?

Yes, that is probable. I didn't tune the Ubuntu default settings.
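For reference, KSM's scan rate is tunable through sysfs (root required; 
the values below are only illustrative). Note also that KSM only scans 
regions an application has marked with madvise(MADV_MERGEABLE), which is 
why it mainly benefits KVM-style workloads rather than containers:

```shell
# Slow the scanner down instead of disabling it outright (requires root)
echo 2000 > /sys/kernel/mm/ksm/sleep_millisecs   # wait longer between passes
echo 100  > /sys/kernel/mm/ksm/pages_to_scan     # scan fewer pages per pass
echo 1    > /sys/kernel/mm/ksm/run               # 1 = run, 0 = stop
```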




Re: [Lxc-users] Networking Qs

2010-06-18 Thread Daniel Lezcano
On 06/17/2010 06:49 PM, Nirmal Guhan wrote:
 Hi,

 Any reason why we require bridging in the host for lxc ? Am not able to
 setup IP address for the container unless I configure bridge in the host.

You can use macvlan instead, but then container-to-host communication 
won't work.

 Also couple of other questions :
 1. Can I configure container and host be in different networks / subnets
 (assuming I have multiple interfaces) ? I can't try this yet as I just have
 one interface.
 2. Does container and host use different routing tables / VRFs ?


Yes, the virtualization begins at network layer 2, and a virtual 
interface is created for the container.
Look at the lxc.conf man page and the configuration files in doc/examples.

A quick start:

lxc-execute -n foo -s lxc.network.type=macvlan -s lxc.network.link=eth0 
-s lxc.network.flags=up -s lxc.network.ipv4=1.2.3.4 -- /bin/bash




Re: [Lxc-users] GPL

2010-06-22 Thread Daniel Lezcano
On 06/22/2010 01:44 AM, Nirmal Guhan wrote:
 Hi,

 Are the tools (lxc-create, lxc-enter etc.) and liblxc.so licensed
 under GPL v2 or v3 ? Please let me know.


LGPL v2.1



Re: [Lxc-users] starting a container causes Xorg to consume 100% cpu

2010-06-22 Thread Daniel Lezcano
On 06/22/2010 07:05 PM, Daniel Lezcano wrote:
 On 06/22/2010 06:55 PM, Jon Nordby wrote:

 On 22 June 2010 17:32, Stuart Nixon stu...@rednut.net wrote:
  
 Hello LXCers

 When ever I start a lxc container the hosts Xorg process starts
 consuming 100% cpu time.

 Is this a known issue? Are there any work-arounds to avoid this behaviour?

 I can reproduce this issue on a container (with separate rootfs) which
 does not even have
 pts, tty or networking set up. gettys in the container are commented
 out, of course. This is on 2.6.34 with lxc 0.7.0
 It also leaves me unable to switch ttys in the host. Shutting down the
 container does not fix the problem, I have to kill X for it to go back
 to normal.
  

 The problem no longer occurs since I upgraded to Ubuntu 10.04; what
 is your distro ? I am not suggesting you have to upgrade ;)


Correction, I just got it again :)




Re: [Lxc-users] starting a container causes Xorg to consume 100% cpu

2010-06-22 Thread Daniel Lezcano
On 06/22/2010 05:32 PM, Stuart Nixon wrote:
 Hello LXCers

 When ever I start a lxc container the hosts Xorg process starts
 consuming 100% cpu time.

 Is this a known issue? Are there any work-arounds to avoid this behaviour?


Ok, I think I got the problem.

Until I fix this, you can use the workaround by specifying:

  lxc-start -n name -s lxc.console=$(tty)

or lxc.console=/dev/null or lxc.console=mylog, whatever ...

Thanks
   -- Daniel



Re: [Lxc-users] starting a container causes Xorg to consume 100% cpu

2010-06-22 Thread Daniel Lezcano
On 06/22/2010 08:57 PM, Jon Nordby wrote:
 On 22 June 2010 19:50, Daniel Lezcano daniel.lezc...@free.fr wrote:

 Until I fix this, you can use the workaround by specifying:

   lxc-start -n name -s lxc.console=$(tty)

 or lxc.console=/dev/null or lxc.console=mylog, whatever ...
  
 This does indeed work around the issue. Thanks


Fixed by commit:

http://lxc.git.sourceforge.net/git/gitweb.cgi?p=lxc/lxc;a=commitdiff;h=cd453b38b778652cb341062fbf3c38edefc3a478;hp=8119235833dc0861c34086f639a60546cda2739c



[Lxc-users] lxc-0.7.1 released

2010-06-24 Thread Daniel Lezcano
Hi All,

Notes:
==

Bug fixes only.


ChangeLog:
==

Ciprian Dorin, Craciun (1):
   lxc to apply mount options for bind mounts

Daniel Lezcano (6):
   fix sshd template
   fix bad free when reading the configuration file
   fix default console to /dev/tty
   fix /proc not mounted in debian container
   remove bad default console option in ubuntu template




Re: [Lxc-users] patch for read-only bind-mount

2010-06-24 Thread Daniel Lezcano
On 06/22/2010 07:25 AM, John Brendler wrote:
 lxc fails to make read-only bind mounts as documented.  Read-only bind
 mounts are important to many use cases.

 A simple patch has been submitted to the lxc-devel mailing list (by
 Ciprian Dorin), but when I last checked, it was not clear if any action
 had been taken on it.  It is clear, however, that the bug still
 exists in release 0.7.0.

 I have tested the patch, and it fixes the problem in both 0.6.5 and
 0.7.0.  I have been using it for a couple months.

 This is where the patch was submitted to the lxc-devel list.-
 http://sourceforge.net/mailarchive/forum.php?thread_name=4B9E0AE0.9000100%40free.frforum_name=lxc-devel

 I think this patch should be implemented (when it is convenient
 to do so).  This is a significant loss of functionality that effects the
 security of a security-oriented application.

 So I am posting so that others know the patch exists and also to see
 what should be done to get this included in the next release.


 Details: -

 In short, a line like this in a container's configuration file should
 have the effect of bind-mounting the file (e.g. /sbin directory below)
 within the container and making it *read-only*:

lxc.mount.entry = /sbin /lxc/container07/sbin none ro,bind 0 0

 Or in a fstab-formatted file referred to by a lxc.mount entry in the
 config file, it would simply be:

/sbin /lxc/container07/sbin none ro,bind 0 0

 Unfortunately, it doesn't work.  It bind-mounts, but gives a little
 warning that it appears to mounted read-write.  This is easily
 confirmed by writing and deleting files in the filesystems that should
 have been mounted read-only.

 This is unforunate, considering the whole point of these tools is secure
 compartmentalization.

 Normally, a read-only bind mount requires two steps:

   mount -o bind /sbin /lxc/container07/sbin
   mount -o remount,ro /lxc/container07/sbin

 So, one may work around this bug by executing a script (after starting
 the container) to carry out that second step, remounting the appropriate
 things in read-only mode. But this shouldn't be necessary, since
 handling read-only bind-mounts are an intended feature of the lxc tools.

 The patch is very simple and does seem to fix the problem nicely.
 Barring regressions I may not be aware of, I, for one, would like to see
 it implemented.

 I am using it as a means to re-use the host operating system's files, in
 read-only bind-mounts, with exceptions overlaid on top of them (rather
 than having to maintain an additional and separate guest operating
 system filesystem).  With the patch, this seems to work quite well.

John,

I merged Ciprian's patch and released 0.7.1 with it.
Thanks for pointing out the problem.

   -- Daniel
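For anyone stuck on 0.7.0 or earlier, the two-step workaround described 
above can be scripted after lxc-start. This is a hypothetical sketch; 
the container path and directory list come from the example in this 
thread and must be adapted:

```shell
#!/bin/sh
# Remount the bind-mounted directories read-only after the container
# has started, for lxc releases without the ro,bind fix (requires root;
# /lxc/container07 is the hypothetical container path used above).
for d in sbin bin lib usr; do
    mount -o remount,ro "/lxc/container07/$d"
done
```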



Re: [Lxc-users] lxc-start on openvz tempate

2010-06-24 Thread Daniel Lezcano
On 06/24/2010 11:16 PM, Papp Tamás wrote:

 Daniel Lezcano wrote, On 2010. 06. 24. 22:38:
 That probably means the container is already running. Did you check
 with lxc-ps --name fsn ?

 Well, you are right. But shouldn't it also show it with lxc-ps --lxc ?

Yes, correct. The --lxc option will show all the containers.

[ ... ]

 Finally I could start it successfully.

Cool :)

 Thank for your help,

You are welcome.

   -- Daniel



Re: [Lxc-users] Gathering information about containers - from inside and from outside

2010-07-18 Thread Daniel Lezcano
On 07/18/2010 04:36 PM, Clemens Perz wrote:
 On 07/18/2010 02:39 PM, Daniel Lezcano wrote:

 On 07/18/2010 11:48 AM, Clemens Perz wrote:
  
 Hi,

 So doing a while on /var/lib/lxc as a starting point, run lxc-info on
 each, find out which one is running, examine its cgroup and so on.

 You should look for the abstract socket @/var/lib/lxc/name/command
 too. This socket exists when the container is running. The container can
 run without a previous creation, hence you won't find it in
 /var/lib/lxc. These containers are called volatile containers.
  
 Cool, found them in /proc/net/unix, that's good. Is there a way to read
 this path - /var/lib/lxc - from somewhere? Just thinking if some distro
 might want to change it at compiletime and then the script sucks :)


The scripts should rely on pkg-config:

In your case it is:

  pkg-config --variable=localstatedir lxc

should give you /var/lib/lxc/.

You can look at /usr/share/pkgconfig/lxc for the list of variables.
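A small sketch of that approach, with a fallback to the common default 
in case lxc's pkg-config metadata is not installed (the fallback path is 
an assumption):

```shell
#!/bin/sh
# Ask pkg-config where lxc keeps container configurations;
# fall back to the usual default if the query fails.
LXCPATH=$(pkg-config --variable=localstatedir lxc 2>/dev/null)
[ -n "$LXCPATH" ] || LXCPATH=/var/lib/lxc
echo "$LXCPATH"
```

A monitoring script can then iterate over "$LXCPATH"/*/ without 
hard-coding the distro's compile-time choice.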

 that the macvlan module is not used.

 Do you mean the refcount is 0 ? or the module is not visible ?
  
 Sorry, yes, the refcount stays 0. I was looking for host-side
 information about the container and its network interfaces too. Maybe
 this is a way to make things more visible in case you need to track
 problems. And of course give me a chance to understand it :D


You are right, the refcount is 0. That is weird, because the refcount 
is what prevents the module from being unloaded.
At first glance, unloading the macvlan module while it is in use does 
not seem to hurt the system; it appears to degrade gracefully into 
network is unreachable. But it would be interesting to ask netdev@ why.

 BTW, giving the option lxc.network.ipv4 in the config, does that do
 anything inside the container? Can it be propagated?

This IP address will be assigned to the container's network interface.

 Or can I just omit
 it, when I use the containers dist tools to setup the interface?

Right. If your system sets an IP address at boot, this config option 
is useless.
It is mainly used for application containers, but it can be used for 
system containers too, if you want lxc to handle the network 
configuration (partially supported).

Thanks
   -- Daniel


--
This SF.net email is sponsored by Sprint
What will you do first with EVO, the first 4G phone?
Visit sprint.com/first -- http://p.sf.net/sfu/sprint-com-first
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] Running Fedora12 container on Ubuntu

2010-07-20 Thread Daniel Lezcano
On 07/20/2010 01:58 PM, Nikola Simidzievski wrote:
 Hi,
 I am trying to get Fedora 12 container on Ubuntu 10.4, but I have several
 problems. I installed new  rootfs using febootstrap , and configured it, and
 created new fstab (guided form
 http://blog.bodhizazen.net/linux/lxc-configure-fedora-containers ) :
 ---
 none /lxc/rootfs.fedora/dev/pts devpts defaults 0 0
 none /lxc/rootfs.fedora/proc proc defaults 0 0
 none /lxc/rootfs.fedora/sys sysfs defaults 0 0
 #none /lxc/rootfs.fedora/var/lock tmpfs defaults 0 0
 #none /lxc/rootfs.fedora/var/run tmpfs defaults 0 0
 /etc/resolv.conf /lxc/rootfs.fedora/etc/resolv.conf none bind 0 0
 --
 My lxc conf. file look like this :
 
 lxc.utsname = fedora
 lxc.tty = 4
 lxc.network.type = veth
 lxc.network.flags = up
 lxc.network.link = br-node0
 lxc.network.name = eth0
 lxc.network.mtu = 1500
 lxc.network.ipv4 = 10.0.0.1/24
 lxc.rootfs = /lxc/rootfs.fedora
 lxc.mount = /lxc/fstab.fedora
 lxc.cgroup.devices.deny = a
 # /dev/null and zero
 lxc.cgroup.devices.allow = c 1:3 rwm
 lxc.cgroup.devices.allow = c 1:5 rwm
 # consoles
 lxc.cgroup.devices.allow = c 5:1 rwm
 lxc.cgroup.devices.allow = c 5:0 rwm
 lxc.cgroup.devices.allow = c 4:0 rwm
 lxc.cgroup.devices.allow = c 4:1 rwm
 # /dev/{,u}random
 lxc.cgroup.devices.allow = c 1:9 rwm
 lxc.cgroup.devices.allow = c 1:8 rwm
 # /dev/pts/* - pts namespaces are coming soon
 lxc.cgroup.devices.allow = c 136:* rwm
 lxc.cgroup.devices.allow = c 5:2 rwm
 # rtc
 lxc.cgroup.devices.allow = c 254:0 rwm
 ---
 When I start the machine with this setup I get this error:
 ==
 lxc-start: Device or resource busy – failed to mount ‘none’ on
 ‘/lxc/rootfs.fedora/dev/pts’
 lxc-start: failed to setup the mounts for ‘fedora′
 lxc-start: failed to setup the container
 
 So I remove the lxc.mount part from the conf. file and it starts normally
 but a get  error when I make ssh connection (PTY allocation request failed
 on channel 0) so I assume that has something to do with wrong mount on
 /dev/pts. I tried to mount it manually via chroot but then the booting slows
 down and services like mysqld and httpd can't be started. So any suggestions
 how to solve this?

Can you check by removing the none /lxc/rootfs.fedora/dev/pts devpts 
defaults 0 0 line from the fstab and adding lxc.pts=1 to the 
configuration file?



Re: [Lxc-users] LXC container SSH X forwarding kernel crash

2010-07-23 Thread Daniel Lezcano
On 07/22/2010 05:09 PM, Arie Skliarouk wrote:
   Quoting Ferenc Holzhauser (ferenc.holzhau...@gmail.com):


 I'm experiencing an annoying kernel crash each time I'm trying to use SSH X 
 forwarding into the container.
 I can open an SSH session but as soon as I start an X app, the crash happens.

 I have exactly the same issue with both 2.6.32-23-server and
  
 2.6.31-22-server kernel packages (ubuntu lucid x64).

 I too can open an ssh session into the container and ping hosts on the
 network, but once I start doing even slightly intensive network operations,
 the LXC kernel crashes hard. I took a screen photo, if it can help someone:
 http://81.218.46.173/DSC00209.JPG


Hi Arie,

I am trying to reproduce the problem on my system with the same kernel 
but it does not occur.
What network configuration are you using ?

Thanks
   -- Daniel



Re: [Lxc-users] LXC container SSH X forwarding kernel crash

2010-07-25 Thread Daniel Lezcano
On 07/24/2010 07:48 PM, Arie Skliarouk wrote:
 Hi,

 On Sat, Jul 24, 2010 at 00:14, Daniel Lezcano daniel.lezc...@free.fr wrote:


 I am trying to reproduce the problem on my system with the same kernel but
 it does not occur.
 What network configuration are you using ?

  
 I use following shell script for configuring the network:

 #!/bin/sh
 brctl addbr br0
 brctl setfd br0 0
 ifconfig br0 192.168.11.32 promisc up
 brctl addif br0 eth0
 ifconfig eth0 0.0.0.0 up
 route add default gw 192.168.11.1


Thanks.

I tried several network configurations (macvlan, veth + bridge + IP 
forwarding, veth + bridge + eth0) but none triggered the kernel bug. I 
downloaded a 4GB iso image from the internet within the container and, in 
parallel, the same iso on the host, while running an X application with 
ssh X11 forwarding in the container (gkrellm).

That was done on a 2.6.32-23-server kernel from Ubuntu and lxc-0.7.1. 
The hardware was a bi-Xeon quad-core with 7GB of RAM and an Intel 
Corporation 80003ES2LAN Gigabit Ethernet Controller NIC.

Do you have any suggestion on how to trigger the bug ?

Thanks
   -- Daniel



Re: [Lxc-users] LXC container SSH X forwarding kernel crash

2010-07-26 Thread Daniel Lezcano
On 07/26/2010 11:34 AM, Ferenc Holzhauser wrote:
 On 26 July 2010 10:56, Daniel Lezcano daniel.lezc...@free.fr wrote:

 On 07/26/2010 08:57 AM, Ferenc Holzhauser wrote:
  
 Hi,

 I believe I can reproduce it and get a crash image.
 Then apport or crash could possibly show something useful.

 Would that help?


 Arie gave a JPG image of the kernel crash which is in ip_rcv_finish, a good
 indication.
 Yes, that would be great if you can check if the bug is raised in the same
 place.
 If possible, that would be nice to check if the bug happens with a 2.6.34.1
 or 2.6.35-rc6 kernel too.

 As I have the same hardware as Arie, I was expecting to reproduce the bug
 easily :(

 Is *tc* used on your system or any specific network configuration ?

 Thanks
 -- Daniel






  
 Daniel,

 I've already searched for a known issue based on the crash information
 I have, with no luck.
 I don't think I have anything special. Attached my interfaces and
 lxc-config files as well as lspci and lsmod output.


Thanks.

 Below is the information I have from a crash, perhaps you can spot
 something more in this.

 [  177.390249] BUG: unable to handle kernel NULL pointer dereference at (null)
 [  177.390975] IP: [(null)] (null)
 [  177.391362] PGD 0
 [  177.391649] Oops: 0010 [#1] SMP
 [  177.392110] last sysfs file:
 /sys/devices/system/cpu/cpu7/cpufreq/scaling_governor
 [  177.392632] CPU 1
 [  177.392917] Modules linked in: veth bridge stp fbcon tileblit font
 bitblit softcursor vga16fb vgastate serio_raw ioatdma lp parport
 raid10 raid456 async_raid6_recov async_pq usbhid hid raid6_pq
 async_xor mptsas mptscsih xor async_memcpy mptbase async_tx ahci igb
 raid1 scsi_transport_sas dca raid0 multipath linear
 [  177.398732] Pid: 0, comm: swapper Not tainted 2.6.32-22-server
 #36-Ubuntu SUN FIRE X4170 SERVER
 [  177.399353] RIP: 0010:[]  [(null)] (null)
 [  177.399939] RSP: 0018:880010e23d38  EFLAGS: 00010293
 [  177.400355] RAX: 8802739ccec0 RBX: 8802757a9b00 RCX: 
 
 [  177.400820] RDX:  RSI: 8802757a9b00 RDI: 
 8802757a9b00
 [  177.401284] RBP: 880010e23d70 R08: 8149f0a0 R09: 
 880010e23d38
 [  177.401748] R10: 88027728d080 R11:  R12: 
 8802674e1050
 [  177.402212] R13: 8802757a9b00 R14: 0008 R15: 
 8185eca0
 [  177.402679] FS:  () GS:880010e2()
 knlGS:
 [  177.403227] CS:  0010 DS: 0018 ES: 0018 CR0: 8005003b
 [  177.403621] CR2:  CR3: 01001000 CR4: 
 06e0
 [  177.404086] DR0:  DR1:  DR2: 
 
 [  177.404550] DR3:  DR6: 0ff0 DR7: 
 0400
 [  177.418610] Process swapper (pid: 0, threadinfo 88027710c000,
 task 8802771044d0)
 [  177.447188] Stack:
 [  177.447190]  8149f1cd 0002 8802757a9b00
 8802757a9b00
 [  177.447193]0  88024d92b800 8802757a9b00 0008
 880010e23db0
 [  177.447196]0  8149f755 8000 880273381158
 8802733f8000
 [  177.447199] Call Trace:
 [  177.447201]IRQ
 [  177.447207]  [8149f1cd] ? ip_rcv_finish+0x12d/0x440
 [  177.447210]  [8149f755] ip_rcv+0x275/0x360
 [  177.447216]  [8146ffea] netif_receive_skb+0x38a/0x5d0
 [  177.447219]  [814702b3] process_backlog+0x83/0xe0
 [  177.447225]  [810880c2] ? enqueue_hrtimer+0x82/0xd0
 [  177.447229]  [81470adf] net_rx_action+0x10f/0x250
 [  177.447233]  [8106e257] __do_softirq+0xb7/0x1e0
 [  177.447237]  [810c4880] ? handle_IRQ_event+0x60/0x170
 [  177.447242]  [810142ec] call_softirq+0x1c/0x30
 [  177.447245]  [81015cb5] do_softirq+0x65/0xa0
 [  177.447247]  [8106e0f5] irq_exit+0x85/0x90
 [  177.447252]  [8155c675] do_IRQ+0x75/0xf0
 [  177.447255]  [81013b13] ret_from_intr+0x0/0x11
 [  177.447256]EOI
 [  177.447261]  [8130ccd7] ? acpi_idle_enter_bm+0x28a/0x2be
 [  177.447265]  [8130ccd0] ? acpi_idle_enter_bm+0x283/0x2be
 [  177.447270]  [81449297] ? cpuidle_idle_call+0xa7/0x140
 [  177.447278]  [81011e63] ? cpu_idle+0xb3/0x110
 [  177.447283]  [8154f5e0] ? start_secondary+0xa8/0xaa
 [  177.447284] Code:  Bad RIP value.
 [  177.447290] RIP  [(null)] (null)
 [  177.447292]  RSP880010e23d38
 [  177.447293] CR2: 


It is the same stack as Arie. Do you have iptables rules set on your 
system ?


Re: [Lxc-users] LXC container SSH X forwarding kernel crash

2010-07-26 Thread Daniel Lezcano
On 07/26/2010 11:34 AM, Ferenc Holzhauser wrote:
 On 26 July 2010 10:56, Daniel Lezcano daniel.lezc...@free.fr wrote:
 On 07/26/2010 08:57 AM, Ferenc Holzhauser wrote:

[ ... ]

 I've already searched for a known issue based on the crash information
 I have, with no luck.
 I don't think I have anything special. Attached my interfaces and
 lxc-config files as well as lspci and lsmod output.

Is it possible your container receives ICMP redirects ?



Re: [Lxc-users] lxc-attach setns patch for kernel 2.6.34

2010-07-28 Thread Daniel Lezcano
On 07/28/2010 10:59 AM, Sebastien Pahl wrote:
 Nice thats what I thought. I will build a 2.6.34.1 kernel. If
 everything works like expected I will send you the updated patchset.

Cool. Thanks !

 Does it have to be in the quilt format?

Yes if possible. You can send me directly a tarball, I will upload it to 
the web site.

Thanks
   -- Daniel



Re: [Lxc-users] Analise this: config without fstab / root directory (/) is read only

2010-07-30 Thread Daniel Lezcano
On 07/30/2010 07:52 AM, Osvaldo Filho wrote:


What are the lxc and kernel version ?


 config
 lxc.utsname = lucid64
 lxc.tty = 4
 lxc.network.type = veth
 lxc.network.flags = up
 lxc.network.link = br0
 lxc.network.name = eth0
 lxc.network.mtu = 1500
 lxc.network.ipv4 = 192.168.10.0/24
 lxc.rootfs = ./rootfs
 lxc.cgroup.devices.deny = a
 # /dev/null and zero
 lxc.cgroup.devices.allow = c 1:3 rwm
 lxc.cgroup.devices.allow = c 1:5 rwm
 # consoles
 lxc.cgroup.devices.allow = c 5:1 rwm
 lxc.cgroup.devices.allow = c 5:0 rwm
 lxc.cgroup.devices.allow = c 4:0 rwm
 lxc.cgroup.devices.allow = c 4:1 rwm
 # /dev/{,u}random
 lxc.cgroup.devices.allow = c 1:9 rwm
 lxc.cgroup.devices.allow = c 1:8 rwm
 # /dev/pts/* - pts namespaces are coming soon
 lxc.cgroup.devices.allow = c 136:* rwm
 lxc.cgroup.devices.allow = c 5:2 rwm
 # rtc
 lxc.cgroup.devices.allow = c 254:0 rwm


 cat rootfs/etc/init/lxc.conf
 # LXC – Fix init sequence to have LXC containers boot with upstart

 # description “Fix LXC container - Lucid”

 start on startup

 task
 pre-start script
 mount -t proc proc /proc
 mount -t devpts devpts /dev/pts
 mount -t sysfs sys /sys
 mount -t tmpfs varrun /var/run
 mount -t tmpfs varlock /var/lock
 mkdir -p /var/run/network
 touch /var/run/utmp
 chmod 664 /var/run/utmp
 chown root.utmp /var/run/utmp
 if [ $(find /etc/network/ -name upstart -type f) ]; then
 chmod -x /etc/network/*/upstart || true
 fi
 end script

 script
 start networking
 initctl emit filesystem --no-wait
 initctl emit local-filesystems --no-wait
 initctl emit virtual-filesystems --no-wait
 init 2
 end script

 The root directory is read only, inside and outside of container.

 --
 The Palm PDK Hot Apps Program offers developers who use the
 Plug-In Development Kit to bring their C/C++ apps to Palm for a share
 of $1 Million in cash or HP Products. Visit us here for more details:
 http://p.sf.net/sfu/dev2dev-palm
 ___
 Lxc-users mailing list
 Lxc-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/lxc-users






Re: [Lxc-users] LXC and iperf: Why the difference between flows?

2010-08-03 Thread Daniel Lezcano
On 08/03/2010 07:53 PM, Nirmal Guhan wrote:
 On Tue, Aug 3, 2010 at 10:24 AM, Osvaldo Filho arquivos...@gmail.com wrote:

 Why have this difference?

 Client on container
 [  3] local 192.168.6.10 port 58172 connected with 192.168.6.1 port 5001
 [ ID] Interval   Transfer Bandwidth
 [  3]  0.0-60.0 sec  13.3 GBytes  1.90 Gbits/sec

 Client  on Host
 [  3] local 192.168.6.1 port 46711 connected with 192.168.6.10 port 5001
 [ ID] Interval   Transfer Bandwidth
 [  3]  0.0-60.5 sec  1.79 MBytes  248 Kbits/sec
  

Can you give a bit more information please ?
eg. Is 192.168.6.1 another host on the network or the host running the 
container ?

 What is the TCP window size ? Can you share the iperf command as well
 ? I tried in Fedora 12 with 2.6.32.1090.fc12.i686 and seems to be ok.


Right, the MTU size may be one issue:

https://lists.linux-foundation.org/pipermail/containers/2009-March/016355.html

TSO and SG may be another:

http://www.mail-archive.com/lxc-users@lists.sourceforge.net/msg00386.html
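
If offloading turns out to be the suspect, a sketch of how one might inspect and toggle TSO and scatter-gather with ethtool before re-running iperf (the device name eth0 is an assumption, and these commands require root; shown only as an illustration, not a confirmed fix):

```shell
# show the current offload settings of the NIC
ethtool -k eth0

# disable TCP segmentation offload and scatter-gather, then re-test
ethtool -K eth0 tso off sg off
```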



Re: [Lxc-users] LTSp Server on LXC container.

2010-08-03 Thread Daniel Lezcano
On 08/03/2010 03:10 AM, Osvaldo Filho wrote:
 I want to jail users to a LTSP server.
 I thought it would be easy to LXC. But I'm having problems.
 - the ltsp-build-client - arch i386 does not end.

 r...@localhost:/opt# ltsp-build-client --arch i386
 ...
 I: Extracting upstart...
 I: Extracting util-linux...
 I: Extracting zlib1g...
 error: LTSP client installation ended abnormally


On my host a quick look at /opt/ltsp/debootstrap/debootstrap.log shows 
that devices cannot be created.

Before building the client again the command:

  lxc-cgroup -n ubuntu devices.allow a

makes the installation go ahead (not yet finished on my host, still 
installing and downloading packages).





Re: [Lxc-users] LTSp Server on LXC container.

2010-08-03 Thread Daniel Lezcano
On 08/04/2010 12:16 AM, Daniel Lezcano wrote:
 On 08/03/2010 03:10 AM, Osvaldo Filho wrote:

 I want to jail users to a LTSP server.
 I thought it would be easy to LXC. But I'm having problems.
 - the ltsp-build-client - arch i386 does not end.

 r...@localhost:/opt# ltsp-build-client --arch i386
 ...
 I: Extracting upstart...
 I: Extracting util-linux...
 I: Extracting zlib1g...
 error: LTSP client installation ended abnormally

  
 On my host a quick look to /opt/ltsp/debootstrap/debootstrap.log shows
 devices can not be created.

 Before building the client again the command:

lxc-cgroup -n ubuntu devices.allow a

 makes the installation to go ahead (not yet finished on my host, still
 installing and downloading packages).


Gah ! I just got:


ERROR: Neither /var/log/messages nor /var/log/syslog exists.  Unable to log.
Updating /var/lib/tftpboot directories for chroot: /opt/ltsp/amd64
/usr/sbin/ltsp-update-kernels: 155: /sbin/restorecon: not found
error: LTSP client installation ended abnormally

I installed selinux and called:

ltsp-update-kernels again.

followed by:

ltsp-update-image

Seems to work, I have an LTSP login splash screen now. Still have to log 
in ...



Re: [Lxc-users] lxc-attach setns patch for kernel 2.6.34

2010-08-05 Thread Daniel Lezcano
On 08/04/2010 02:13 AM, Sebastien Pahl wrote:
 Here is a refreshed patch for 2.6.35.

 The 2.6.34.2 kernel can use the 2.6.34.1 patch without problems.

 I tested both and they work.


Uploaded ! Thanks Sebastian !



Re: [Lxc-users] port numbers for containers

2010-08-12 Thread Daniel Lezcano
On 08/12/2010 01:05 AM, Nirmal Guhan wrote:
 On Wed, Aug 11, 2010 at 11:05 AM, Serge Hallyn
 serge.hal...@canonical.com  wrote:
 Quoting Nirmal Guhan (vavat...@gmail.com):
 On Wed, Aug 11, 2010 at 5:06 AM, Serge Hallyn
 serge.hal...@canonical.com  wrote:
 Quoting Nirmal Guhan (vavat...@gmail.com):
 Hi,

 Want to know if port numbers are virtualized for containers or do the
 containers and host share the port space ? Please let me know.

 Wrong layer.  If the container shares a network namespace with the
 host, then it shares its networking.  If it has its own network
 namespace, then it has its own entire network stack.  So no, 'port
 space' isn't virtualized vs. shared, but the network devices are.

 Thanks. How do I configure the container to have its own network stack?

 I did

 cat  /etc/lxc-basic.conf  EOF
 lxc.network.type=veth
 lxc.network.link=virbr0
 lxc.network.flags=up
 EOF

 lxc-create -n ubuntu1 -f /etc/lxc-basic.conf -t ubuntu

 Thanks. If I do macvlan, I assume there is no separate network
 namespace and hence ports will be shared and otherwise(veth) not ?

If you specify lxc.network.type (whatever the type), you will 
automatically have a new network stack. That means your own interfaces, 
IP addresses, routes, iptables, ports, etc ...

As Serge explained, the network isolation/virtualization acts at 
layer 2, meaning it *begins* at layer 2, so the upper network layers 
are virtualized too.

When you have a new network stack, your port numbers will not overlap 
with the system's or the other containers'. For example, you can launch 
several sshd or httpd instances in different containers without 
conflicting on port 22 or 80.
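
As a concrete sketch, two container configurations along these lines (the names, bridge, and addresses are made up for illustration) can both run sshd on port 22 at the same time, because each gets its own stack:

```
# web1.conf
lxc.utsname = web1
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.ipv4 = 192.168.10.11/24

# web2.conf -- same port numbers are fine: separate network namespace
lxc.utsname = web2
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.ipv4 = 192.168.10.12/24
```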

If you don't specify lxc.network.type, your container will share the 
network stack with the host, hence if the host is running sshd, you 
won't be able to start another sshd in the container because they will 
conflict on port 22.

To answer your question: if you set lxc.network.type = macvlan, the 
network stack will be private to your container.

  -- Daniel



Re: [Lxc-users] 6mn to launch a VM (lxc hung_task_timeout_secs)

2010-08-19 Thread Daniel Lezcano
On 08/19/2010 11:18 AM, Sebastien Douche wrote:
 Hi folks,
 I have the problem each time I launch a VM (no more result with the
 sysctl setting). Any ideas to resolve this issue?

 The dmesg:

 [ 7560.404075] INFO: task mount:27788 blocked for more than 120 seconds.
 [ 7560.404123] echo 0  /proc/sys/kernel/hung_task_timeout_secs
 disables this message.
 [ 7560.404169] mount D 0002 0 27788  27701 0x0004
 [ 7560.404175]  88012bc58710 0082 0286
 88000540fa68
 [ 7560.404181]   0286 f8a0
 8800b80abfd8
 [ 7560.404186]  000155c0 000155c0 88012db062e0
 88012db065d8
 [ 7560.404191] Call Trace:
 [ 7560.404201]  [811052ab] ? bdi_sched_wait+0x0/0xe
 [ 7560.404206]  [811052b4] ? bdi_sched_wait+0x9/0xe
 [ 7560.404217]  [812e5dc6] ? __wait_on_bit+0x41/0x70
 [ 7560.404220]  [811052ab] ? bdi_sched_wait+0x0/0xe
 [ 7560.404224]  [812e5e60] ? out_of_line_wait_on_bit+0x6b/0x77
 [ 7560.404228]  [81064adc] ? wake_bit_function+0x0/0x23
 [ 7560.404232]  [8110532c] ? sync_inodes_sb+0x73/0x129
 [ 7560.404235]  [81108e81] ? __sync_filesystem+0x4b/0x70
 [ 7560.404240]  [810ed562] ? do_remount_sb+0x60/0x122
 [ 7560.404243]  [81101a96] ? do_mount+0x27a/0x792
 [ 7560.404246]  [8110202e] ? sys_mount+0x80/0xba
 [ 7560.404252]  [81010b02] ? system_call_fastpath+0x16/0x1b
 [ 7594.577136] br1: port 3(vethvaLKCB) entering disabled state
 [ 7594.592611] br1: port 3(vethvaLKCB) entering disabled state
 [ 7594.657095] br0: port 3(vethLxLiar) entering disabled state
 [ 7594.672608] br0: port 3(vethLxLiar) entering disabled state
 [ 7753.540023] device vethvh8sS7 entered promiscuous mode
 [ 7753.540839] ADDRCONF(NETDEV_UP): vethvh8sS7: link is not ready
 [ 7753.543136] device vethhNdkjz entered promiscuous mode
 [ 7753.543778] ADDRCONF(NETDEV_UP): vethhNdkjz: link is not ready
 [ 7753.546074] lo: Disabled Privacy Extensions
 [ 7753.602317] ADDRCONF(NETDEV_CHANGE): vethvh8sS7: link becomes ready
 [ 7753.602343] br1: port 3(vethvh8sS7) entering forwarding state
 [ 7753.605034] ADDRCONF(NETDEV_CHANGE): vethhNdkjz: link becomes ready
 [ 7753.605058] br0: port 3(vethhNdkjz) entering forwarding state
 [ 7763.612089] eth0: no IPv6 routers present
 [ 7763.912148] vethhNdkjz: no IPv6 routers present
 [ 7764.504028] eth1: no IPv6 routers present
 [ 7764.584042] vethvh8sS7: no IPv6 routers present
 [ 7920.404063] INFO: task mount:27993 blocked for more than 120 seconds.
 [ 7920.404096] echo 0  /proc/sys/kernel/hung_task_timeout_secs
 disables this message.
 [ 7920.404142] mount D 0002 0 27993  27986 0x
 [ 7920.404148]  88012bc58710 0086 0286
 88000540fa68
 [ 7920.404154]   0286 f8a0
 8800379e7fd8
 [ 7920.404159]  000155c0 000155c0 88012db062e0
 88012db065d8
 [ 7920.404164] Call Trace:
 [ 7920.404175]  [811052ab] ? bdi_sched_wait+0x0/0xe
 [ 7920.404179]  [811052b4] ? bdi_sched_wait+0x9/0xe
 [ 7920.404186]  [812e5dc6] ? __wait_on_bit+0x41/0x70
 [ 7920.404190]  [811052ab] ? bdi_sched_wait+0x0/0xe
 [ 7920.404199]  [812e5e60] ? out_of_line_wait_on_bit+0x6b/0x77
 [ 7920.404203]  [81064adc] ? wake_bit_function+0x0/0x23
 [ 7920.404207]  [8110532c] ? sync_inodes_sb+0x73/0x129
 [ 7920.404210]  [81108e81] ? __sync_filesystem+0x4b/0x70
 [ 7920.404214]  [810ed562] ? do_remount_sb+0x60/0x122
 [ 7920.404217]  [81101a96] ? do_mount+0x27a/0x792
 [ 7920.404221]  [8110202e] ? sys_mount+0x80/0xba
 [ 7920.404226]  [81010b02] ? system_call_fastpath+0x16/0x1b
 [ 8040.404062] INFO: task mount:27993 blocked for more than 120 seconds.
 [ 8040.404095] echo 0  /proc/sys/kernel/hung_task_timeout_secs
 disables this message.
 [ 8040.404141] mount D 0002 0 27993  27986 0x
 [ 8040.404147]  88012bc58710 0086 0286
 88000540fa68
 [ 8040.404152]   0286 f8a0
 8800379e7fd8
 [ 8040.404157]  000155c0 000155c0 88012db062e0
 88012db065d8
 [ 8040.404162] Call Trace:
 [ 8040.404173]  [811052ab] ? bdi_sched_wait+0x0/0xe
 [ 8040.404178]  [811052b4] ? bdi_sched_wait+0x9/0xe
 [ 8040.404184]  [812e5dc6] ? __wait_on_bit+0x41/0x70
 [ 8040.404188]  [811052ab] ? bdi_sched_wait+0x0/0xe
 [ 8040.404198]  [812e5e60] ? out_of_line_wait_on_bit+0x6b/0x77
 [ 8040.404202]  [81064adc] ? wake_bit_function+0x0/0x23
 [ 8040.404206]  [8110532c] ? sync_inodes_sb+0x73/0x129
 [ 8040.404209]  [81108e81] ? __sync_filesystem+0x4b/0x70
 [ 8040.404213]  [810ed562] ? do_remount_sb+0x60/0x122
 [ 8040.404216]  [81101a96] ? do_mount+0x27a/0x792
 [ 8040.404219]  [8110202e] ? 

Re: [Lxc-users] port numbers for containers

2010-08-19 Thread Daniel Lezcano
On 08/19/2010 02:33 PM, Sebastien Douche wrote:
 On Thu, Aug 12, 2010 at 10:29, Daniel Lezcano dlezc...@fr.ibm.com wrote:

 Answering to your question, if you do lxc.network.type=macvlan, the
 network stack will be private to your container.
  
 Hi Daniel,
 not sure I understand your response: with the macvlan option, you cannot
 access the container from outside?

With the macvlan network configuration (lxc.network.type = macvlan), the 
container will use a specific network device which is faster and simpler 
to configure than veth, but the network traffic won't go to the host 
or to the other containers on the same host. Only direct access to your 
real network will happen.
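
A minimal macvlan sketch for comparison (the physical interface name and the address are assumptions):

```
lxc.network.type = macvlan
lxc.network.link = eth0              # physical NIC the macvlan sits on
lxc.network.flags = up
lxc.network.ipv4 = 192.168.11.40/24
```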

   What does 'private network stack' mean ?



 From the point of view of the system (the kernel services), the 
different system resources are split into a base brick called a 
'namespace'. There are the pid namespace, the network namespace, the 
ipc namespace, the mount namespace, etc ...

When you boot your system (not a container), the loopback and the 
network devices are created. These are set up by the system by assigning 
IP addresses. The routes and the route cache, the hash tables for udp, 
tcp, raw, etc ..., the port mappings, iptables, etc ... are created and 
set up by your system (automatically by the kernel) or by userland 
scripts at boot time.

When you create a network namespace, this occurs again, giving you a new 
loopback instance as well as new route tables and new hash tables for 
tcp and udp. Because these resources mustn't overlap with the system's, 
they are isolated, which means a process running in this namespace cannot 
see the network of another namespace (eg. the host). This is why we say 
a private network stack: it belongs to a set of processes, and a 
process can only be in one namespace at a time.

As I know I am often not very clear :) I would recommend this document 
http://lxc.sourceforge.net/doc/sigops/appcr.pdf

   -- Daniel








Re: [Lxc-users] 6mn to launch a VM (lxc hung_task_timeout_secs)

2010-08-20 Thread Daniel Lezcano
On 08/20/2010 09:16 AM, Sebastien Douche wrote:
 On Thu, Aug 19, 2010 at 23:23, Daniel Lezcano daniel.lezc...@free.fr wrote:


 Hmm, it is very probable the problem is located in the kernel. This kind of
 warnings appears when a task is in a uninterruptible state and the kernel
 compilation option CONFIG_DETECT_HUNG_TASK is set.

 What is the kernel version ?
  
 # uname -a
 Linux srv3 2.6.32-trunk-amd64 #1 SMP Sun Jan 10 22:40:40 UTC 2010
 x86_64 GNU/Linux

 It's a Debian Squeeze (up to date). After some tests, it seems related
 with some device nodes creation (/dev/pts, /dev/shm...) but i'm not
 sure.


I think you hit a kernel bug. Maybe this one: 
https://bugzilla.kernel.org/show_bug.cgi?id=14430
Normally, a system container spawns instantaneously.

When the container starts, it does a pivot_root and unmounts all 
previous mount points belonging to the old rootfs.
Is it possible you have a particular mount on your system which triggers 
this problem ?

Thanks
   -- Daniel



Re: [Lxc-users] Running XOrg in a container

2010-08-23 Thread Daniel Lezcano
On 08/23/2010 01:13 PM, l...@jelmail.com wrote:

 With some tweaks that maybe possible, but IMHO it is not adequate.
 Lxc is not like QEMU/KVM or Virtualbox, the hardware is not virtualized,
 so you may have conflicts with the differents X server because they will
 share the same hardware (eg. different resolutions in different
  
 containers).

 It should be adequate for what I want to do. I can already run multiple X
 servers on the host and switch between them (e.g. Alt-F7, Alt-F8). This
 works fine with no problems.

Interesting, do you have any pointer explaining how to set this up ?

 I just want to containerise the separate environments with a view towards
 keeping the host environment as lean and clean as possible. It will make
 for an easier life maintaining the host.

 IMHO, using ssh won't work when you need to make use of the 3D graphics
 card.

Ok.

   I may be wrong here but I believe direct hardware access is the only
 way. I've proved to myself it works outside containers - all I now need is
 to be able to containerise my configuration.


Ok, if running several X servers is valid, I think containerizing them 
is not a problem.

 What's stopping me doing that right now is the inability to configure
 access to /dev/mem which I think the container needs.


Is it possible to copy the content of the host's /dev directory to the 
container's /dev ?

eg. cp -a /dev /var/lib/lxc/name/rootfs/dev

and then run the container.



Thanks
   -- Daniel



Re: [Lxc-users] Running XOrg in a container

2010-08-23 Thread Daniel Lezcano
On 08/23/2010 03:01 PM, l...@jelmail.com wrote:



 With some tweaks that maybe possible, but IMHO it is not adequate.
 Lxc is not like QEMU/KVM or Virtualbox, the hardware is not virtualized,
 so you may have conflicts with the differents X server because they will
 share the same hardware (eg. different resolutions in different

  
 containers).

 It should be adequate for what I want to do. I can already run multiple X
 servers on the host and switch between them (e.g. Alt-F7, Alt-F8). This
 works fine with no problems.


 Interesting, do you have any pointer explaining how to setup this ?
  
 Off the top of my head, the below should work, I'll double check it tonight
 though.

 Log on to VT1 (Alt-F1)
$ startx -- vt7

 Log on to VT2 (Alt-F2)
$ startx -- vt8

 Access first desktop on vt7 (Alt-F7)
 Access second desktop on vt8 (Alt-F8)



Thanks for info.

 What's stopping me doing that right now is the inability to configure
 access to /dev/mem which I think the container needs.


 Is it possible to copy the content of the host's /dev directory to the
 container's /dev ?

 eg. cp -a /dev /var/lib/lxc/name/rootfs/dev

 and then run the container.
  
 I will try that tonight but I think cgroup device allow is needed to make
 such device files valid anyway. The problem I have right now is that a
 container will not start if either of the following two lines are present
 in its configuration file:

 lxc.cgroup.devices.allow = c 1:1   rwm  # dev/mem
 lxc.cgroup.devices.allow = c 13:63  rwm # dev/input/mice

 Is this some restriction imposed by LXC and/or the kernel (i.e. you can't
 have /dev/mem in a cgroup) or is there something else that I am missing ?

 It would be good if someone can try adding the above to a working
 container's configuration and confirm whether it stops working. It would be
 good to confirm it isn't due to an error or omission that I have made.

Ok, I checked: your configuration lines have too many spaces in the 
fields and the kernel does not like this.
I suppose the kernel should trim the spaces but, anyway, that can be 
fixed in userspace ...

If you remove the extra spaces between the dev major:minor and rwm, that 
should work.

lxc.cgroup.devices.allow = c 1:1 rwm # dev/mem
lxc.cgroup.devices.allow = c 13:63 rwm # dev/input/mice

Thanks
-- Daniel






Re: [Lxc-users] getty's and lxc-console

2010-08-25 Thread Daniel Lezcano
On 08/25/2010 11:45 AM, Clemens Perz wrote:
 Daniel,

 maybe you want to have a look on this, too. The current debian templates
 add lxc.pts = 1024 to the config.

 If you have two containers with that setting and you start them, only
 the first one started will give you access to the console. The second won't.

 Dunno if this needs a fix in the template or if it points to an issue.


Strange, I just created 2 debian containers and I was able to access the 
console of both.



Re: [Lxc-users] getty's and lxc-console

2010-08-25 Thread Daniel Lezcano
On 08/25/2010 12:51 PM, Clemens Perz wrote:
 On 08/25/2010 12:05 PM, Daniel Lezcano wrote:

 On 08/25/2010 11:45 AM, Clemens Perz wrote:
  
 Daniel,

 maybe you want to have a look on this, too. The current debian templates
 add lxc.pts = 1024 to the config.

 If you have two containers with that setting and you start them, only
 the first one started will give you access to the console. The second
 wont.

 Dunno if this needs a fix in the template or if it point to an issue.


 Strange, I just create 2 debian containers and I was able to access the
 console to both.

  
 Oouch! :) I rebooted my host and started the containers one after the
 other, uncommenting the setting in the conf again. Now its working fine.
 Maybe I confused myself while hunting the other issue, killing some
 getty's, but not all.

 Sorry for bugging you with nonsense ;-))


No problem, maybe you really hit a bug but it is hard to reproduce. It 
is preferable to report a bug even if it is not one ;)

BTW, I found the problem with the console when we try to open a second 
console to the container. The busy console slot is reset each time 
we disconnect after sending a command. In our case, the first 
lxc-console asks lxc-start for a console, the busy slot is set and 
right after reset when lxc-console disconnects after completing its 
command. Then the second lxc-console asks for a console, but as the first 
busy slot was reset, the same console is provided to the second 
lxc-console. Hence both lxc-console commands end up using the same 
container console.

A workaround is to specify the console number with lxc-console -n name 
-t 2




Re: [Lxc-users] Cannot start a container with a new MAC address

2010-08-27 Thread Daniel Lezcano
On 08/27/2010 11:27 AM, Sebastien Douche wrote:
 I created a container with an interface. I stop it, I change the MAC
 address, restart it:

 lxc-start: ioctl failure : Cannot assign requested address
 lxc-start: failed to setup hw address for 'eth0'
 lxc-start: failed to setup netdev
 lxc-start: failed to setup the network for 'vsonde43'
 lxc-start: failed to setup the container
 lxc-start: invalid sequence number 1. expected 2

What is the hardware address value you are trying to set up?



Re: [Lxc-users] unstoppable container

2010-08-30 Thread Daniel Lezcano
On 08/30/2010 02:11 PM, Ferenc Wagner wrote:
 Daniel Lezcanodaniel.lezc...@free.fr  writes:


 On 08/30/2010 12:40 PM, Papp Tamás wrote:

  
 In the tasks file I saw three processes: udevd, init and one more, which
 I don't remember. I killed them all, but the cgroup still exists.

 The cgroup is removed by lxc-start, but this is not a problem, because
 it will be removed (if empty), when running lxc-start again.
  
 I suspect a transmission error in this sentence, could you please resend it?


The cgroup is not removed automatically by the cgroup infrastructure 
when all the tasks die; that is how cgroups are implemented. So it is up 
to lxc-start to remove the cgroup after pid 1 of the container 
exits. If lxc-start was killed, this directory will not be removed and 
will stay there.

If you start your container again, lxc-start will try to remove this 
directory if it is present and then create a new cgroup.

 Usually, there is a mechanism used in lxc to kill -9 the process 1 of
 the container (which wipes out all the processes of the containers) when
 lxc-start dies.
  
 I guess this mechanism has no chance when lxc-start is killed by SIGKILL...


Yes, but fortunately there is a Linux-specific process control operation 
by which the kernel sends a signal to a child process when its parent dies.


...
PR_SET_PDEATHSIG (since Linux 2.1.57)
       Set the parent process death signal of the calling process to
       arg2 (either a signal value in the range 1..maxsig, or 0 to
       clear).  This is the signal that the calling process will get
       when its parent dies.  This value is cleared for the child of
       a fork(2).
...


This prctl is used in lxc as a safeguard: if lxc-start is killed 
abruptly, the container's processes are wiped out.

 So if you still have the processes running inside the container but
 lxc-start is dead, then:
* you are using a 2.6.32 kernel which is buggy (this mechanism is broken).
   or/and
* there are processes in 'T' states within the container
  
 Is this a kernel mechanism to clean up all processes of a container when
 the container init exits, or is it a user-space thing implemented in
 lxc-start?
When the container init exits, the kernel sends a SIGKILL to all the 
remaining processes in the pid namespace and reaps them (aka wait); that 
happens at the kernel level (zap_pid_ns_processes). Hence, in userspace, when 
the wait on the container init returns, you have the guarantee there are no 
more processes in the container.

   If the former, in which versions of 2.6.32 is this feature
 broken?


I meant that prctl(PR_SET_PDEATHSIG) is broken on 2.6.32.




Re: [Lxc-users] unstoppable container

2010-08-30 Thread Daniel Lezcano
On 08/31/2010 12:23 AM, Serge E. Hallyn wrote:
 Quoting Daniel Lezcano (daniel.lezc...@free.fr):

 On 08/30/2010 02:36 PM, Serge E. Hallyn wrote:
  
 Quoting Papp Tamás (tom...@martos.bme.hu):

 Daniel Lezcano wrote, On 2010. 08. 30. 13:08:
  
 Usually, there is a mechanism used in lxc to kill -9 the process 1 of
 the container (which wipes out all the processes of the containers)
 when lxc-start dies.

 It should wipe out them, but in my case it was unsuccessfull, even if I
 killed the init process by hand.

  
 So if you still have the processes running inside the container but
 lxc-start is dead, then:
* you are using a 2.6.32 kernel which is buggy (this mechanism is
 broken).

 Ubuntu 10.04, so it's exactly the point, the kernel is 2.6.32 .


 Could you point me (or the Ubuntu guy in the list) to an URL, which
 describes the problem or maybe to the kernel patch. If it's possible,
 maybe the Ubuntu kernel maintainers would fix the official Ubuntu kernel.
  
 Daniel,

 which patch are you talking about?  (presumably a patch against
 zap_pid_ns_processes()?)  If it's keeping containers from properly
 shutting down, we may be able to SRU a small enough patch, but if
 it involves a whole Oleg rewrite then maybe not :)

 I am referring to these ones:

 http://git.kernel.org/?p=linux/kernel/git/next/linux-next.git;a=commit;h=13aa9a6b0f2371d2ce0de57c2ede62ab7a787157
 http://git.kernel.org/?p=linux/kernel/git/next/linux-next.git;a=commit;h=dd34200adc01c5217ef09b55905b5c2312d65535
 http://git.kernel.org/?p=linux/kernel/git/next/linux-next.git;a=commit;h=dd34200adc01c5217ef09b55905b5c2312d65535
  
 (note, second and third are identical - did you mean to paste 2 or 3 links?


3 links; the third was meant to be this one.

http://git.kernel.org/?p=linux/kernel/git/next/linux-next.git;a=commit;h=614c517d7c00af1b26ded20646b329397d6f51a1

 Are they small enough for a SRU ?
  
 The first one looks trivial enough.  I'd be afraid the second one would be
 considered to have deep and subtle regression potential.  But, we can
 always try.  I'm not on the kernel team so am not likely to have any say
 on it myself :)


Shall we ask directly on the kernel-team@ mailing list, or do we have 
to do an SRU first?

Thanks
   -- Daniel





Re: [Lxc-users] unstoppable container

2010-08-31 Thread Daniel Lezcano
On 08/31/2010 12:07 PM, Papp Tamás wrote:

 Serge E. Hallyn wrote, On 2010. 08. 31. 4:06:
 Quoting Daniel Lezcano (daniel.lezc...@free.fr):
 On 08/31/2010 12:23 AM, Serge E. Hallyn wrote:
 Quoting Daniel Lezcano (daniel.lezc...@free.fr):
 On 08/30/2010 02:36 PM, Serge E. Hallyn wrote:
 Quoting Papp Tamás (tom...@martos.bme.hu):
 Daniel Lezcano wrote, On 2010. 08. 30. 13:08:
 Usually, there is a mechanism used in lxc to kill -9 the 
 process 1 of
 the container (which wipes out all the processes of the 
 containers)
 when lxc-start dies.
 It should wipe out them, but in my case it was unsuccessfull, 
 even if I
 killed the init process by hand.

 So if you still have the processes running inside the container 
 but
 lxc-start is dead, then:
   * you are using a 2.6.32 kernel which is buggy (this 
 mechanism is
 broken).
 Ubuntu 10.04, so it's exactly the point, the kernel is 2.6.32 .


 Could you point me (or the Ubuntu guy in the list) to an URL, which
 describes the problem or maybe to the kernel patch. If it's 
 possible,
 maybe the Ubuntu kernel maintainers would fix the official 
 Ubuntu kernel.
 Daniel,

 which patch are you talking about?  (presumably a patch against
 zap_pid_ns_processes()?)  If it's keeping containers from properly
 shutting down, we may be able to SRU a small enough patch, but if
 it involves a whole Oleg rewrite then maybe not :)
 I am referring to these ones:

 http://git.kernel.org/?p=linux/kernel/git/next/linux-next.git;a=commit;h=13aa9a6b0f2371d2ce0de57c2ede62ab7a787157
  

 http://git.kernel.org/?p=linux/kernel/git/next/linux-next.git;a=commit;h=dd34200adc01c5217ef09b55905b5c2312d65535
  

 http://git.kernel.org/?p=linux/kernel/git/next/linux-next.git;a=commit;h=dd34200adc01c5217ef09b55905b5c2312d65535
  

 (note, second and third are identical - did you mean to paste 2 or 
 3 links?
 3 links, was this one.

 http://git.kernel.org/?p=linux/kernel/git/next/linux-next.git;a=commit;h=614c517d7c00af1b26ded20646b329397d6f51a1
  


 Ah, thanks.

 I had a feeling the second one depended on defining si_fromuser in all
 lowercase, but for some reason git wasn't showing that one to me easily.

 Are they small enough for a SRU ?
 The first one looks trivial enough.  I'd be afraid the second one 
 would be
 considered to have deep and subtle regression potential.  But, we can
 always try.  I'm not on the kernel team so am not likely to have 
 any say
 on it myself :)
 Shall we ask directly to the kernel-team@ mailing list ? Or do we
 have to do a SRU first ?

 Actually, first step would be for Papp to open a bug against both
 lxc and the kernel.  Papp, do you mind doing that?

 Without a bug, an SRU ain't gonna fly.

 Sure I can do this. What should I write in the report exactly and what 
 is the correct email address I write to?

 - kernel version (2.6.32.x)
 - system (Ubuntu)
 - container was unstoppable(?) even if there were no processess

 - the way I was successful

IMO, we should keep it simple, as we cannot reproduce your bug yet.

The container's processes are not killed when the parent of the container's 
init dies.
This mechanism relies on prctl(PR_SET_PDEATHSIG, ...), which works fine 
for all kernel versions except 2.6.32.
The bug was reported and fixed.

https://lists.linux-foundation.org/pipermail/containers/2009-October/021052.html

Please note there is a simple test program that spots the bug.

Is it possible to backport this fix to 2.6.32?

Well something like that :)

 - ...and?

I think you have to create a Launchpad profile and open a bug.




Re: [Lxc-users] linux32/setarch

2010-09-07 Thread Daniel Lezcano
On 09/07/2010 11:05 AM, Ralf Schmitt wrote:
 Hi all,

 I'm running a 32 bit container on a 64 bit host system. Is there a
 configuration option to set the architecture? Currently I'm using
 'linux32 lxc-start -n NAME' to set the architecture.


Hum, no. Maybe it would be worth adding a configuration option for 
this, so the personality is set right before exec'ing the command (eg. 
/sbin/init).



Re: [Lxc-users] uptodate Ubuntu Lucid guests

2010-09-13 Thread Daniel Lezcano
On 09/13/2010 12:16 AM, Papp Tamás wrote:
 Papp Tamás wrote, On 2010. 09. 12. 23:18:

 hi!

 I also tried with qemu and no problem.

  
 I've just upgraded the box to Maverick, and after a short time it looks
 better. After 1 hour still it's up and working.

 I don't know, if it helps.


Yes, that helps. At least we have some boundaries for the bug in the kernel.
I desperately tried to reproduce the problem on my host, with a 
configuration similar to yours, and the bug didn't appear :(

It can be interesting if you can try the following,

  (1) try to reproduce the bug with all the nic offloading capabilities 
disabled
  (2) try with a macvlan configuration instead of veth+bridge

Thanks
   -- Daniel





Re: [Lxc-users] uptodate Ubuntu Lucid guests

2010-09-13 Thread Daniel Lezcano
On 09/13/2010 10:15 AM, Ferenc Holzhauser wrote:
 On 13 September 2010 00:16, Papp Tamástom...@martos.bme.hu  wrote:

 Papp Tamás wrote, On 2010. 09. 12. 23:18:

 hi!

 I also tried with qemu and no problem.


 I've just upgraded the box to Maverick, and after a short time it looks
 better. After 1 hour still it's up and working.

 I don't know, if it helps.

 tamas


 Sorry fo the delay. The requested information from my side (stopped
 qemu and started a container).

Thanks Ferenc !



Re: [Lxc-users] Launch multiple apps in exactly on container

2010-09-16 Thread Daniel Lezcano
On 09/16/2010 09:36 AM, Jue Hong wrote:
 As I understand, running one application with the command lxc-execute
 will create a container instance. E.g., by running lxc-execute -n foo
 /bin/bash, a container named foo will be created, and I can find a foo
 directory under the mounted cgroup directory, like /dev/cgroup/foo.
 While retype lxc-execute -n foo /bin/bash, I'm told that:lxc-execute:
 Device or resource busy.

 Does it mean I cannot run multiple apps within exactly the same
 container foo via using lxc-execute or lxc-start? Or what should I do
 if it's possible?

The container name is unique. You cannot start a container if 
it is already running. In your case you should do:

lxc-execute -n foo /bin/bash
lxc-execute -n bar /bin/bash

If you want to use the same configuration for both, you can supply it 
on the command line with the -f option.

lxc-execute -n foo -f lxc.conf /bin/bash
lxc-execute -n bar -f lxc.conf /bin/bash

You can also pass the configuration parameters with:

lxc-execute -n foo -s lxc.utsname=foo /bin/bash
lxc-execute -n bar -s lxc.utsname=bar /bin/bash

Cheers
   -- Daniel



Re: [Lxc-users] Launch multiple apps in exactly on container

2010-09-16 Thread Daniel Lezcano
On 09/16/2010 10:56 AM, Jue Hong wrote:
 Sure Daniel, what you say actually works. But I still want to know,
 whether I can launch another app into a running container.

 Doing as you say:

 lxc-execute -n foo /bin/bash  -- this bash runs inside container 'foo'
 lxc-execute -n bar /bin/bash  -- this bash runs inside container 'bar'
  
 the 2nd bash will run in a different container named 'bar': e.g.
 /dev/cgroup/bar.
 What if I want to launch another app, like helloworld, inside the
 first running container 'foo'?


Ah, ok. Sorry, I misunderstood your question. This is not yet supported 
because a feature is missing in the kernel.
Fortunately, the kernel modification providing this functionality is ready, 
and the author said he will submit it this week or next week, so that 
will be possible in the near future.

This feature is very important because it allows building a set of scripts 
to facilitate container management from the outside, like shutdown, 
netstat, etc ...

If you wish to experiment with this feature, it is available at:

http://lxc.sourceforge.net/patches/linux/

and the command using it is lxc-attach, which is already available with 
the 0.7.2 version.

At this point it's experimental but very usable. I will be happy to 
have feedback ;)

Thanks
   -- Daniel









Re: [Lxc-users] failed to open '/proc/12580/ns/pid'

2010-09-17 Thread Daniel Lezcano
On 09/17/2010 05:06 PM, Sebastien Pahl wrote:
 I hope so too:-)

 Daniel do you have any news about this?

Yes, I had a private discussion last week with Eric Biederman, and he 
said he will send the patches upstream at the end of this week or 
next week, if he has time and enough bandwidth to support a proper review.

 On Fri, Sep 17, 2010 at 16:52, Scott Bronsonbron...@rinspin.com  wrote:
 Thanks Sebastian, that makes perfect sense.  Looks like the proc namespace
 patches didn't make 2.6.36 either (-rc4 is current).  Fingers crossed for
 2.6.37...?

 On Sep 17, 2010 12:28 AM, Sebastien Pahls...@dotcloud.com  wrote:

 You need a patched kernel for that to work.

 Have a look at: http://lxc.sourceforge.net/patches/linux/

 On Fri, Sep 17, 2010 at 08:54, Scott Bronsonbron...@rinspin.com  wrote:
 When I call lxc-attach,...


 --
 Start uncovering the many advantages of virtual appliances
 and start using them to simplify application deployment and
 accelerate your shift to cloud computing.
 http://p.sf.net/sfu/novell-sfdev2dev
 ___
 Lxc-users mailing list
 Lxc-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/lxc-users




 --
 Sebastien Pahl
 @sebp








Re: [Lxc-users] Mounting filesystem for container

2010-09-18 Thread Daniel Lezcano
On 09/17/2010 11:41 PM, l...@jelmail.com wrote:
 Hi, I just tried to mount a filesystem in a container and I got this:

 [root ~]# lxc-start -n mycontainer
 lxc-start: Operation not permitted - failed to mount '/dev/sdd1' on
 '/srv/lxc/mycontainer/mnt'
 lxc-start: failed to setup the mounts for 'mycontainer'
 lxc-start: failed to setup the container
 lxc-start: invalid sequence number 1. expected 2
 lxc-start: failed to spawn 'mycontainer'
 [root ~]#

 What I did was put this in /etc/lxc/mycontainer.fstab:

 /dev/sdd1 /srv/lxc/mycontainer/mnt ext3 defaults 0 1


As Serge mentioned, that may be the cgroup device whitelist which 
prevents you from doing that.
You can check by temporarily commenting out all the lxc.cgroup.devices 
lines in /var/lib/lxc/mycontainer and then launching the container again. 
If you are then able to mount it, you should add to the configuration file 
the line:

lxc.cgroup.devices.allow = type major:minor perm

type  : b (block), c (char), etc ...
major : major number
minor : minor number (wildcard is accepted)
perm  : r (read), w (write), m (mknod)




Re: [Lxc-users] Mounting filesystem for container

2010-09-20 Thread Daniel Lezcano
On 09/20/2010 11:13 AM, l...@jelmail.com wrote:

 As mentioned Serge, that maybe the cgroup device white list which
 prevent you to do that.
 You can check by temporarly comment out in /var/lib/lxc/mycontainer all
 the lxc.cgroup.devices lines and then launch the container again. If
 you are able to mount it, then you should add in the configuration file
 the line:
  

 lxc.cgroup.devices.allow =type  major:minor  perm
  
 Well, yes, that fixed it. Thank you.

 I had a gap in my knowledge. I assumed incorrectly that the mount was
 handled in the Host Environment and that the container would just see the
 mounted file system, therefore not needing access to the file systems's
 device node.


That's the case if the host mounts something in the container rootfs 
beforehand: the mount point will be inherited at container creation. It's 
the behaviour of the mount namespace.

As soon as the container is created, the new mount points are 
isolated. There is a pending discussion about propagating the host mounts 
to the containers, but I am still looking at whether that fits the 
current design.

 However, I now see that is not the case - the mount is performed within the
 container and is not actually visible in the host environment (actually a
 good thing!). This leads me to ask some more questions though...

 1) Why not just put the mount inside the container's /etc/fstab ?

You can choose the best way of creating/configuring your container 
depending on your needs: add the mount in the container's /etc/fstab, 
specify it in a local fstab, or add an lxc.mount.entry option (which 
corresponds to a line of fstab).

Providing different ways of mounting makes it possible to create a container 
with or without a root filesystem. You can use the host fs with a set of 
private directories (/var/run, /etc, /home, /tmp, ...) bind mounted to a 
private directory tree and share the host binaries; this is good for 
launching a big number of containers (eg. 1024 containers take only 2.3 GB 
of private data while the rest is shared). You can instead specify the 
mount points in the container's /etc/fstab, let the 'mount' command 
update /etc/mtab, and have different distros with different binaries.

Another alternative is to launch an application only, like apache with 
its own configuration bind mounted in a private directory, ... so 
you can launch several instances of apache and move your contained 
environment from one host to another host, etc ...

You can also create an empty rootfs with an empty directory tree (/usr, 
/lib, etc ...) and then read-only bind mount the host directories (/usr 
into rootfs/usr, /lib into rootfs/lib, etc ...) while you keep 
some other directories private (eg. /home).

Well, there are a lot of possible configurations for containers; for this 
reason, there are several ways to configure the mounts.
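As an illustrative sketch of the three styles (paths and names are hypothetical):

```
# 1) a line in the container's own /etc/fstab (processed by the guest):
/dev/sdd1  /mnt  ext3  defaults  0 1

# 2) a host-side fstab file referenced from the container config:
lxc.mount = /etc/lxc/mycontainer.fstab

# 3) an inline entry in the container config:
lxc.mount.entry = /srv/data srv/data none bind 0 0
```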
 2) When do these mounts happen? I have a problem with a daemon not starting
 during boot because, I think, the filesystem it needs is not yet there.


These mounts happen before jumping into the rootfs with pivot_root, 
because we may want to mount a host filesystem into the container's rootfs.

   -- Daniel




Re: [Lxc-users] lxc doesn't work on Fedora 13

2010-09-21 Thread Daniel Lezcano
On 09/17/2010 10:27 AM, Scott Bronson wrote:
 I have lxc working pretty well on my Ubuntu Lucid box.  Now I'm trying
 to get it to work on my Fedora 13 laptop but I can't seem to get it to
 connect to any guest consoles.


I was able to reproduce it. I have the 'init' process and the 'mountall' 
process.

The latter is blocked on:

[8108ff74] utrace_stop+0x128/0x186
[8109001a] finish_resume_report+0x48/0x83
[8109093a] utrace_get_signal+0x4ac/0x5fc
[8105e34c] get_signal_to_deliver+0x125/0x3c8
[81009038] do_signal+0x72/0x6b8
[810096a6] do_notify_resume+0x28/0x86
[81009f3e] int_signal+0x12/0x17
[] 0x

Smells like a kernel bug :(
AFAIK, utrace was backported to the Fedora 13 kernel.

I tested with a Fedora rawhide kernel (2.6.36...) and the problem appears 
there too, pfff ...



Re: [Lxc-users] lxc doesn't work on Fedora 13

2010-09-23 Thread Daniel Lezcano
On 09/23/2010 05:28 AM, Scott Bronson wrote:
 On Tue, Sep 21, 2010 at 8:27 AM, Daniel Lezcano
 daniel.lezc...@free.fr  wrote:  I was able to reproduce it. I have
 the 'init' process and the 'mountall'

 processes.

 The latter is blocked on:
  
 That smells right.  Good find!

 Wouldn't surprise me if this explains why Chrome won't start on F13
 too (it only fails when the cgroup filesystem is mounted).


I entered a new bug in the redhat's bugzilla:

https://bugzilla.redhat.com/show_bug.cgi?id=636210

Not sure it is visible without subscribing.



Re: [Lxc-users] automount in the container

2010-09-24 Thread Daniel Lezcano
On 09/24/2010 09:02 AM, Helmut Lichtenberg wrote:
 Hi,
 I set up my first container and want to mount the home directories for the
 users with automount/autofs5.

 During installation of autofs5 in the container, it complained that it can't
 create /dev/autofs. Create this node with mknod was possible but did not help.
 When I want to step into the automount dir it seems to hang (the prompt does
 not return).

 Also bind mounting the mount point for the automount directory on the host
 into the container did not work. Even when the nfs-directories are mounted on
 the host, they are not visible in the container.

 Background: Debian GNU/Linux squeeze/sid both in host and container
  lxc tools 0.7.2-1
  kernel2.6.32-5-amd64
  autofs5   5.0.4-3.1

 Any help is appreciated.
 Helmut


Can you check if adding the following line in /var/lib/lxc/name/config 
fixes your problem?

lxc.cgroup.devices.allow = c 10:52 rwm # dev/autofs

Thanks
   -- Daniel



Re: [Lxc-users] automount in the container

2010-09-24 Thread Daniel Lezcano
On 09/24/2010 12:31 PM, Helmut Lichtenberg wrote:
 Some more experiments:

 The last lines of an strace look like this:

 r...@cc2,~: strace ls -l /net/fs-v1
 [...]
 open(/usr/lib/gconv/gconv-modules.cache, O_RDONLY) = 3
 fstat(3, {st_mode=S_IFREG|0644, st_size=26048, ...}) = 0
 mmap(NULL, 26048, PROT_READ, MAP_SHARED, 3, 0) = 0x7f1273115000
 close(3)= 0
 futex(0x7f12728d9f60, FUTEX_WAKE_PRIVATE, 2147483647) = 0
 lstat(/net/fs-v1,

 I don't understand these internals, but maybe it makes sense for some of you.

lstat blocks on:

[a0e94b18] autofs4_wait+0x2e8/0x760 [autofs4]
[a0e92d20] try_to_fill_dentry+0x110/0x130 [autofs4]
[a0e934d5] autofs4_revalidate+0x155/0x1f0 [autofs4]
[a0e93fa6] autofs4_lookup+0x4f6/0x5c0 [autofs4]
[8114d9d2] real_lookup+0xe2/0x160
[8114f978] do_lookup+0xb8/0xf0
[811504a5] __link_path_walk+0x765/0xf80
[81150f3a] path_walk+0x6a/0xe0
[8115110b] do_path_lookup+0x5b/0xa0
[81151dd7] user_path_at+0x57/0xa0
[8114852c] vfs_fstatat+0x3c/0x80
[8114869b] vfs_stat+0x1b/0x20
[811486c4] sys_newstat+0x24/0x50
[810133c5] tracesys+0xd9/0xde


 With '/etc/init.d/autofs stop' I cannot stop the service, but when I kill the
 automount process with signal -9 and start it again with '/etc/init.d/autofs
 start' -- then it suddenly works.
 I can cd into /net/fs-v1 and have all directories available.

 It's reproducable after a reboot.
 Strange, isn't it?

Yes :s






Re: [Lxc-users] automount in the container

2010-09-24 Thread Daniel Lezcano
On 09/24/2010 12:31 PM, Helmut Lichtenberg wrote:
 Some more experiments:

 The last lines of an strace look like this:

 r...@cc2,~: strace ls -l /net/fs-v1
 [...]
 open(/usr/lib/gconv/gconv-modules.cache, O_RDONLY) = 3
 fstat(3, {st_mode=S_IFREG|0644, st_size=26048, ...}) = 0
 mmap(NULL, 26048, PROT_READ, MAP_SHARED, 3, 0) = 0x7f1273115000
 close(3)= 0
 futex(0x7f12728d9f60, FUTEX_WAKE_PRIVATE, 2147483647) = 0
 lstat(/net/fs-v1,

 I don't understand these internals, but maybe it makes sense for some of you.

 With '/etc/init.d/autofs stop' I cannot stop the service, but when I kill the
 automount process with signal -9 and start it again with '/etc/init.d/autofs
 start' -- then it suddenly works.
 I can cd into /net/fs-v1 and have all directories available.

 It's reproducable after a reboot.
 Strange, isn't it?


It seems the patchset 
http://kerneltrap.org/mailarchive/linux-kernel/2007/3/20/68572/thread 
was not taken upstream.
A quick look at the code makes me think the pids are not virtualized, 
which would mess up autofs4.




Re: [Lxc-users] Running LXC containers on a laptop

2010-09-24 Thread Daniel Lezcano
On 09/24/2010 05:17 PM, Sebastien Pahl wrote:
 Hi,

 you need to setup snat or masquerading if you want your containers to
 access the network.

 # sysctl -w net.ipv4.ip_forward=1

 snat (you need your wlan0 address, WLAN0_IP below):
 # iptables -t nat -A POSTROUTING -o wlan0 -j SNAT --to-source=WLAN0_IP

 OR

 masquerading:
 # iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
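Putting the quoted commands together: a minimal sketch that only prints the commands, so nothing changes until you run them yourself as root. The egress interface wlan0 is taken from the thread; WLAN0_IP is a placeholder you must substitute with your host's real address.

```shell
# Sketch: NAT setup so containers can reach the outside network.
# Assumptions: egress interface wlan0; WLAN0_IP is a placeholder address.
EGRESS_IF=wlan0
WLAN0_IP=192.168.1.10   # substitute your real wlan0 address

FORWARD_CMD="sysctl -w net.ipv4.ip_forward=1"
SNAT_CMD="iptables -t nat -A POSTROUTING -o $EGRESS_IF -j SNAT --to-source=$WLAN0_IP"
MASQ_CMD="iptables -t nat -A POSTROUTING -o $EGRESS_IF -j MASQUERADE"

# Use the SNAT rule when the host address is static, MASQUERADE when it
# is dynamic (e.g. wlan0 gets its address via DHCP). Pick one, not both.
printf '%s\n' "$FORWARD_CMD" "$SNAT_CMD" "$MASQ_CMD"
```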

mid-air collision :P

--
Start uncovering the many advantages of virtual appliances
and start using them to simplify application deployment and
accelerate your shift to cloud computing.
http://p.sf.net/sfu/novell-sfdev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] automount in the container

2010-09-24 Thread Daniel Lezcano
On 09/24/2010 03:03 PM, Daniel Lezcano wrote:
 On 09/24/2010 12:31 PM, Helmut Lichtenberg wrote:

 Some more experiments:

 The last lines of an strace look like this:

 r...@cc2,~: strace ls -l /net/fs-v1
 [...]
 open("/usr/lib/gconv/gconv-modules.cache", O_RDONLY) = 3
 fstat(3, {st_mode=S_IFREG|0644, st_size=26048, ...}) = 0
 mmap(NULL, 26048, PROT_READ, MAP_SHARED, 3, 0) = 0x7f1273115000
 close(3)                                = 0
 futex(0x7f12728d9f60, FUTEX_WAKE_PRIVATE, 2147483647) = 0
 lstat("/net/fs-v1",

 I don't understand these internals, but maybe it makes sense for some of you.

 With '/etc/init.d/autofs stop' I cannot stop the service, but when I kill the
 automount process with signal -9 and start it again with '/etc/init.d/autofs
 start' -- then it suddenly works.
 I can cd into /net/fs-v1 and have all directories available.

 It's reproducible after a reboot.
 Strange, isn't it?

  
 It seems the patchset
 http://kerneltrap.org/mailarchive/linux-kernel/2007/3/20/68572/thread
 was not taken upstream.
 A quick look at the code makes me think the pids are not virtualized, and
 that would mess up autofs4.


I respinned the kernel patchset from the mailing list and autofs4 now works.

Thanks
   -- Daniel




Re: [Lxc-users] automount in the container

2010-09-27 Thread Daniel Lezcano
On 09/27/2010 09:40 AM, Helmut Lichtenberg wrote:
 Hi Daniel,

 Daniel Lezcano schrieb am 25. Sep 2010 um 00:05:41 CEST:
 It seems the patchset
 http://kerneltrap.org/mailarchive/linux-kernel/2007/3/20/68572/thread
 was not taken upstream.
 A quick look at the code makes me think the pids are not virtualized, and
 that would mess up autofs4.


 I respinned the kernel patchset from the mailing list and autofs4 now works.

 thanks for solving this problem.

 I'm not quite clear on what this means for me now. Will this patch be included
 into the next kernel or do I have to patch my current (Debian Squeeze) kernel
 2.6.32-5-amd64?

Maybe both. I will look at resending the patchset to the upstream 
kernel. If it is merged, it will take a while before hitting a distro.
I suppose you will have to patch your kernel, unless we find a 
very good reason to ask for a 2.6.32 inclusion, but I doubt it :/

   -- Daniel




Re: [Lxc-users] automount in the container

2010-09-27 Thread Daniel Lezcano

On 09/27/2010 11:59 AM, Helmut Lichtenberg wrote:

Daniel Lezcano schrieb am 27. Sep 2010 um 11:17:12 CEST:
   

Daniel Lezcano schrieb am 25. Sep 2010 um 00:05:41 CEST:
   

It seems the patchset
http://kerneltrap.org/mailarchive/linux-kernel/2007/3/20/68572/thread
was not taken upstream.
   

[...]
   

I'm not quite clear on what this means for me now. Will this patch be included
into the next kernel or do I have to patch my current (Debian Squeeze) kernel
2.6.32-5-amd64?
   

Maybe both. I will look at resending the patchset to the upstream
kernel. If it is merged, it will take a while before hitting a distro.
I suppose you will have to patch your kernel, unless we find a
very good reason to ask for a 2.6.32 inclusion, but I doubt it :/
 

Your link above points to a somewhat lengthy discussion.
Is the inline patch from sukadev (2007-03-12) at the very top of the thread
the patch in question? Or is there a patch file floating around somewhere?
   


I added the patch as an attachment.
It applies against 2.6.36-rc5, but I think backporting it to 2.6.32 is trivial.
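For anyone following along, a sketch of applying such a patch to a kernel tree before rebuilding. The paths and the patch filename are assumptions, not from the thread; the block only prints the commands, run them yourself once the paths are right.

```shell
# Sketch: dry-run, then apply, an out-of-tree patch to a kernel source tree.
# KSRC and PATCH_FILE are assumptions -- adjust to your own layout.
KSRC=/usr/src/linux-2.6.36-rc5
PATCH_FILE=$HOME/autofs4-struct-pid.patch

APPLY_CMD="patch -p1 -d $KSRC -i $PATCH_FILE"
# Always dry-run first; on an older tree (e.g. 2.6.32) expect fuzz or rejects.
printf '%s\n' "$APPLY_CMD --dry-run" "$APPLY_CMD"
```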

From: Sukadev Bhattiprolu suka...@us.ibm.com

Subject: Replace pid_t in autofs4 with struct pid reference.

Make autofs4 container-friendly by caching struct pid reference rather
than pid_t and using pid_nr() to retrieve a task's pid_t.

ChangeLog:
	- Fix Eric Biederman's comments - Use find_get_pid() to hold a
	  reference to oz_pgrp and release while unmounting; separate out
	  changes to autofs and autofs4.
	- Also rollback my earlier change to autofs_wait_queue (pid and tgid
	  in the wait queue are just used to write to a userspace daemon's
	  pipe).
- Fix Cedric's comments: retain old prototype of parse_options()
  and move necessary change to its caller.

Signed-off-by: Sukadev Bhattiprolu suka...@us.ibm.com
Cc: Cedric Le Goater c...@fr.ibm.com
Cc: Dave Hansen haveb...@us.ibm.com
Cc: Serge Hallyn se...@us.ibm.com
Cc: Eric Biederman ebied...@xmission.com
Cc: contain...@lists.osdl.org

---
 fs/autofs4/autofs_i.h  |   28 ++--
 fs/autofs4/dev-ioctl.c |2 +-
 fs/autofs4/inode.c |   22 --
 fs/autofs4/root.c  |3 ++-
 fs/autofs4/waitq.c |4 ++--
 5 files changed, 35 insertions(+), 24 deletions(-)

Index: linux-next/fs/autofs4/autofs_i.h
===
--- linux-next.orig/fs/autofs4/autofs_i.h
+++ linux-next/fs/autofs4/autofs_i.h
@@ -39,25 +39,25 @@
 /* #define DEBUG */
 
 #ifdef DEBUG
-#define DPRINTK(fmt, args...)				\
-do {							\
-	printk(KERN_DEBUG "pid %d: %s: " fmt "\n",	\
-		current->pid, __func__, ##args);	\
+#define DPRINTK(fmt, args...)				\
+	do {						\
+	printk(KERN_DEBUG "pid %d: %s: " fmt "\n",	\
+	   pid_nr(task_pid(current)), __func__, ##args);\
 } while (0)
 #else
 #define DPRINTK(fmt, args...) do {} while (0)
 #endif
 
-#define AUTOFS_WARN(fmt, args...)			\
-do {							\
-	printk(KERN_WARNING "pid %d: %s: " fmt "\n",	\
-		current->pid, __func__, ##args);	\
+#define AUTOFS_WARN(fmt, args...)			\
+	do {						\
+	printk(KERN_WARNING "pid %d: %s: " fmt "\n",	\
+	   pid_nr(task_pid(current)), __func__, ##args);	\
 } while (0)
 
-#define AUTOFS_ERROR(fmt, args...)			\
-do {							\
-	printk(KERN_ERR "pid %d: %s: " fmt "\n",	\
-		current->pid, __func__, ##args);	\
+#define AUTOFS_ERROR(fmt, args...)			\
+	do {						\
+	printk(KERN_ERR "pid %d: %s: " fmt "\n",	\
+	   pid_nr(task_pid(current)), __func__, ##args);	\
 } while (0)
 
 /* Unified info structure.  This is pointed to by both the dentry and
@@ -122,7 +122,7 @@ struct autofs_sb_info {
 	u32 magic;
 	int pipefd;
 	struct file *pipe;
-	pid_t oz_pgrp;
+	struct pid *oz_pgrp;
 	int catatonic;
 	int version;
 	int sub_version;
@@ -156,7 +156,7 @@ static inline struct autofs_info *autofs
filesystem without magic.) */
 
 static inline int autofs4_oz_mode(struct autofs_sb_info *sbi) {
-	return sbi->catatonic || task_pgrp_nr(current) == sbi->oz_pgrp;
+	return sbi->catatonic || task_pgrp(current) == sbi->oz_pgrp;
 }
 
 /* Does a dentry have some pending activity? */
Index: linux-next/fs/autofs4/inode.c
===
--- linux-next.orig/fs/autofs4/inode.c
+++ linux-next/fs/autofs4/inode.c
@@ -111,7 +111,7 @@ void autofs4_kill_sb(struct super_block 
 
 	/* Free wait queues, close pipe */
 	autofs4_catatonic_mode(sbi);
-
+	put_pid(sbi->oz_pgrp);
 	sb-s_fs_info = NULL;
 	kfree(sbi);
 
@@ -133,7 +133,7 @@ static int autofs4_show_options(struct s
 		seq_printf(m, ",uid=%u", root_inode->i_uid);
 	if (root_inode->i_gid != 0)
 		seq_printf(m, ",gid=%u", root_inode->i_gid);
-	seq_printf(m, ",pgrp=%d", sbi->oz_pgrp);
+	seq_printf(m, ",pgrp=%d", pid_nr(sbi->oz_pgrp));
 	seq_printf(m, ",timeout=%lu", sbi->exp_timeout/HZ);
 	seq_printf(m, ",minproto=%d", sbi->min_proto);
 	seq_printf(m, ",maxproto=%d", sbi->max_proto);
@@ -263,6 +263,7 @@ int autofs4_fill_super(struct super_bloc
 	int pipefd;
 	struct autofs_sb_info *sbi;
 	struct
