Re: [Lxc-users] container shutdown

2012-03-19 Thread Brian K. White
On 3/19/2012 9:25 AM, Serge Hallyn wrote:
 Quoting Daniel Lezcano (daniel.lezc...@free.fr):
 On 03/19/2012 03:50 AM, Serge Hallyn wrote:
 Quoting Daniel Lezcano (daniel.lezc...@free.fr):
 On 03/19/2012 12:00 AM, Serge Hallyn wrote:
 Hi,

 Thanks to Jäkel's and Fajar's great ideas, we can now cleanly shut down
 a container by sending it SIGPWR.  I'm attaching two ways to do that.
 In-line is a patch which modifies lxc-stop to take optional -s and -t
 args - -s for shutdown (meaning send SIGPWR), and -t for a timeout,
 after sending SIGPWR, to hard-kill the container.
 It may make more sense to implement that as an lxc-reboot | lxc-shutdown
 Is there another signal that would make sense for lxc-reboot?

 Yes, SIGINT will make the init process restart the services. I
 said lxc-reboot, but that could be lxc-shutdown -r.

 I personally prefer lxc-reboot, but I can imagine people liking
 lxc-shutdown -r.  What do others prefer?

 script on top of lxc-kill.

 IMHO, I don't think adding a timeout is a good idea because the
 shutdown process may take more than the timeout to stop the services
 and the container could be killed while the services are doing some
 cleanup or flush or whatever. If this option is present, people will
 tend to use it instead of investigating if a service is stuck, or
 working, or flushing.
 I would recommend letting the shutdown scripts handle the timeout
 themselves.
 By 'let the shutdown scripts handle the timeout themselves', do you
 mean let the scripts calling lxc-shutdown handle the timeout?

 I meant the init scripts within the container should be fixed to
 shut down properly (for example, add a timeout or optimize the stopping
 of services). The init process will send SIGTERM to all the processes
 and then SIGKILL after a while. I don't think that should be handled
 from outside.

 I agree we want to do that where we can.  I disagree that we should
 rely on it.

 Some services are bogus: they don't care when they are stopped during
 the shutdown process because they expect to be killed anyway. For
 example, the sshd service was automatically respawned after being
 killed by init at shutdown time, but that was only spotted with
 containers.

 Right, and we should (and did) fix that, but lxc shouldn't look
 broken when the container misbehaves.

 leave lxc-shutdown as simple as 'lxc-kill -n $1 SIGPWR'?

 Yes, in this case lxc-shutdown could be very trivial (maybe adding
 a couple of things like waiting for the container to stop before
 exiting, in order to have a synchronous command).

 (I dunno, from there it seems to me the next logical step is to add a
 timeout :)  But just waiting is fine for me.)

 Ok, so

 lxc-kill -n $1 SIGPWR
 lxc-wait -n $1 STOPPED

 I'll wait for comments on lxc-reboot v lxc-shutdown -r.

Timeout:
I can think of no excuse to omit a timeout option. It would be easy,
it would be useful, and it would be more admin-friendly than requiring
the init script author to do it, or fail to do it, or do it poorly, or
have 12 different distros all do it differently, etc...

Anyone who wants to handle it themselves still can, since it's merely an
option, not a hard-coded behavior. If you need to watch for something
that _you_ know means it's OK to destroy, yet doesn't look like
STOPPED to lxc-wait, no problem: just don't use that option.

But by far the more usual case, and therefore what should be the default
behavior, is not allowing a hung container to prevent the host from
shutting down gracefully. Otherwise one bad container can harm the host
and thereby all other containers on that host. My own init script for
SUSE has this problem. I know ways I could fix it, but I've just been
busy with other work, so it has continued to have this problem for a
year now...

Executable name:
I would prefer several almost identical actions to be implemented in one
program with options instead of as several almost identical programs. So I
say lxc-shutdown -r rather than lxc-reboot. But I have no problem with
lxc-shutdown doing -r based on argv[0] as well as getopts. Everyone can
have what they want without asking you, the author, to write multiple
programs.
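
For what it's worth, here is a minimal sketch of the kind of wrapper being
discussed, with the optional timeout and the -r variant in one script. This is
not the patch posted in this thread; the lxc-kill/lxc-wait usage follows the
lines quoted above, and lxc-info/lxc-stop are only one way to do the polling
and the hard stop:

  #!/bin/sh
  # lxc-shutdown sketch: -n <name> [-r] [-t <seconds>]
  # default sends SIGPWR (halt); -r sends SIGINT (reboot);
  # -t hard-stops the container if it is still running after the timeout.
  name= sig=SIGPWR timeout=
  while getopts n:rt: opt; do
      case $opt in
          n) name=$OPTARG ;;
          r) sig=SIGINT ;;
          t) timeout=$OPTARG ;;
          *) echo "usage: $0 -n name [-r] [-t seconds]" >&2; exit 1 ;;
      esac
  done
  [ -n "$name" ] || { echo "container name required" >&2; exit 1; }

  lxc-kill -n "$name" $sig

  if [ -z "$timeout" ]; then
      # no timeout requested: just wait synchronously for the container to stop
      lxc-wait -n "$name" -s STOPPED
  else
      # poll until the container stops or the timeout expires, then stop it hard
      i=0
      while [ "$i" -lt "$timeout" ]; do
          lxc-info -n "$name" 2>/dev/null | grep -q STOPPED && exit 0
          sleep 1
          i=$((i + 1))
      done
      echo "'$name' still running after ${timeout}s, stopping it hard" >&2
      lxc-stop -n "$name"
  fi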

-- 
bkw



Re: [Lxc-users] failed to rename cgroup ?

2012-03-07 Thread Brian K. White
Are you running vsftpd inside the container?
If so, make sure it has these two lines in /etc/vsftpd.conf (in the
container, not the host):

# LXC compatibility
# http://www.mail-archive.com/lxc-users@lists.sourceforge.net/msg01110.html
isolate=NO
isolate_network=NO

Do the same for all containers.
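
If you have several containers, a quick way to push these two lines into each
one is a loop like the following sketch (the /var/lib/lxc/*/rootfs layout is an
assumption; adjust the glob to wherever your container root filesystems live):

  #!/bin/sh
  # Append the LXC-compatibility settings to vsftpd.conf in every container
  # rootfs, skipping containers that already have them.
  for conf in /var/lib/lxc/*/rootfs/etc/vsftpd.conf; do
      [ -f "$conf" ] || continue
      if ! grep -q '^isolate=NO' "$conf"; then
          printf '%s\n' '# LXC compatibility' 'isolate=NO' 'isolate_network=NO' >> "$conf"
          echo "updated $conf"
      fi
  done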

Then reboot the host, because you can't clear the problem any other way, 
only prevent it from happening in the first place.

Then see if you still have a problem shutting down and restarting that 
container.

-- 
bkw


On 3/7/2012 4:32 AM, 陈竞 wrote:
 I don't know how to solve the problem. Can you give me some advice? Thank you.

 On March 7, 2012 at 5:04 PM, Papp Tamas tom...@martos.bme.hu wrote:

 On 03/07/2012 09:58 AM, 陈竞 wrote:
   I want to start lxc-sshd, but get this error:
  
   localhost lxc # /usr/local/bin/lxc-start -n sshd
   lxc-start: No such file or directory - failed to rename cgroup
   /cgroup//lxc/9740->/cgroup//lxc/sshd
   lxc-start: failed to spawn 'sshd'
   lxc-start: No such file or directory - failed to remove cgroup
   '/cgroup//lxc/sshd'
  
   what does it mean?

 Remove, not rename.

 tamas

 




 --
 Jing Chen (陈竞), Institute of Computing Technology, Chinese Academy of Sciences, High Performance Computer Center
 Jing Chen HPCC.ICT.AC http://HPCC.ICT.AC China






Re: [Lxc-users] How to start the network services so as to get the IP address using lxc-execute???

2011-12-08 Thread Brian K. White
This isn't meant as an insult, but you seem to be trying to do things
backwards and expecting, worse, demanding, that a low-level tool contain
high-level features that really should be provided by your own
scripting, or by other tools that already exist for that purpose.

If you want to assign container IP addresses in some particular order,
then simply do so. You can ensure container IPs get set whatever way
you want by any number of means. You can write a start script that
writes IPs into the container config files and/or rewrites them
dynamically every time a container is started, or you can tell the
containers to use DHCP and control the DHCP server. If you want to read
a container's IP, you can get it from the host's DHCP server state/log
file, or possibly from the ARP table on the host, or by directly reading
files from the container's filesystem.

Someone already gave you a good example of a simple bit of shell
scripting that takes the container name and uses it to produce the IP
address, as long as the container names adhere to a consistent pattern.

If that's not what you wanted then what? Chronological order? That's 
sort of meaningless since containers can be stopped and restarted.

If you want the IPs to be assigned chronologically, i.e., the first
container to be started gets IP #1, that's trivial too: just write the
start script to keep a count in a temp file every time it is started, or
have it parse the list of all running containers and add one to whatever
is the current highest number running before starting a new container.
But this points out how meaningless the request is from the
beginning. What happens after containers have been stopped and
restarted? Do you want a restarted container to get a new next-highest
number, or to remember its original number? If you want it to get a new
number, then what happens when the always-incrementing number reaches
the end of the netmask? If you want it to remember the number it
got originally, then why not just write that number into its config
file from the beginning? If you want it to reuse IPs dynamically from a
pool, then that is already what a DHCP server does.

I really don't understand what you are trying to do, or trying to avoid
doing, or why, that isn't easily answered by a little shell scripting
and/or a DHCP server.
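
To make the name-based approach concrete, here is a sketch of such a start
script. The naming pattern, the subnet, and the /etc/lxc/<name>/config location
are all assumptions to adjust for your own layout:

  #!/bin/sh
  # Derive a container's IP from the trailing digits of its name (e.g. web7 -> .7)
  # and write it into the container's config before starting it.
  name=$1
  num=${name##*[!0-9]}    # trailing digits of the name (assumed: no leading zeros)
  [ -n "$num" ] || { echo "no number in container name '$name'" >&2; exit 1; }
  ip="198.208.168.$num"
  conf="/etc/lxc/$name/config"

  # replace (or add) the lxc.network.ipv4 line, then start the container
  if grep -q '^lxc.network.ipv4' "$conf"; then
      sed -i "s|^lxc.network.ipv4.*|lxc.network.ipv4 = $ip/24|" "$conf"
  else
      echo "lxc.network.ipv4 = $ip/24" >> "$conf"
  fi
  lxc-start -n "$name" -f "$conf" -d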

-- 
bkw

On 12/8/2011 7:59 AM, nishant mungse wrote:

 Hi Greg,

 Thanks for the reply.

 I just want the IP addresses of the containers. And one more thing: can I
 get the IP addresses of the containers in sequence, e.g. container1 ::
 198.208.168.1, container2 :: 198.208.168.2, and so on.

 Please help me ASAP.

 Regards,
 Nishant

 On Thu, Dec 8, 2011 at 5:09 PM, Greg Kurz gk...@fr.ibm.com
 mailto:gk...@fr.ibm.com wrote:

 On Thu, 2011-12-08 at 16:03 +0530, nishant mungse wrote:
   Hi,
  
   I want to manually invoke a networking setup to start the network
   service and get the IP address of the container. But the problem is I
   don't want to start the container; I want to use lxc-execute.
  
   When I tried these things happened::
  
   command :: lxc-execute -n base
   -f /home/nishant/ubuntu.conf
 /var/lib/lxc/base1/rootfs/etc/init.d/networking start

 Ok... this can't work. lxc-execute is for application containers only:
 it runs lxc-init instead of standard /sbin/init. The networking script
 you invoke needs upstart to be already running in the container... You
 seem to have a system container here, it _MUST_ be started with
 lxc-start.
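
In other words, something along these lines on the host (the dnsmasq lease
file path is an assumption; use whatever your DHCP server writes):

  # Start the system container properly; its own init will bring up networking.
  lxc-start -n base -f /home/nishant/ubuntu.conf -d

  # Then read the address it obtained from the host side, e.g. from the DHCP
  # server's lease file or from the host's ARP cache once it has sent traffic:
  grep -i base /var/lib/misc/dnsmasq.leases
  arp -n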

  
   O/P
  
   Rather than invoking init scripts through /etc/init.d, use the
   service(8)
   utility, e.g. service networking start
  
   Since the script you are attempting to invoke has been converted
 to an
   Upstart job, you may also use the start(8) utility, e.g. start
   networking
   start: Unable to connect to Upstart: Failed to connect to
   socket /com/ubuntu/upstart: Connection refused
  
  
   How to start the network services so as to get the IP addresses of
   containers?
  

 What's your true need here ? Controlling the containers network services
 from the host or just knowing the addresses used by the containers ? I
 guess both are doable in a variety of ways.

 Cheers.

  
   Regards,
   Nishant
  
  
  
  
 
 

Re: [Lxc-users] cannot start any more any container?!

2011-10-19 Thread Brian K. White
On 10/19/2011 1:24 PM, Ulli Horlacher wrote:
 Besides my problem of not being able to stop/kill lxc-start (see other mail), I
 have now an even more severe problem: I cannot start ANY container anymore!

 I am sure I have overlooked something, but I cannot see what. I am really
 desperate now, because this happens to my production environment!

 Server host is:

 root@vms1:/lxc# lsb_release -a; uname -a; lxc-version
 No LSB modules are available.
 Distributor ID: Ubuntu
 Description:Ubuntu 10.04.3 LTS
 Release:10.04
 Codename:   lucid
 Linux vms1 2.6.35-30-server #60~lucid1-Ubuntu SMP Tue Sep 20 22:28:40 UTC 
 2011 x86_64 GNU/Linux
 lxc version: 0.7.4.1

 (linux-image-server-lts-backport-maverick)

 All my lxc files reside in /lxc :

 root@vms1:/lxc# l vmtest1*
 dRWX   - 2011-05-17 19:47 vmtest1
 -RWT   1,127 2011-10-19 18:54 vmtest1.cfg
 -RW- 476 2011-10-19 18:54 vmtest1.fstab

 I boot the container with:

 root@vms1:/lxc# lxc-start -f /data/lxc/vmtest1.cfg -n vmtest1 -d -o 
 /data/lxc/vmtest1.log


 But nothing happens, there is only a lxc-start process dangling around:

 root@vms1:/lxc# psg vmtest1
 USER   PID  PPID %CPUVSZ COMMAND
 root 31571 1  0.0  20872 lxc-start -f /data/lxc/vmtest1.cfg -n 
 vmtest1 -d -o /data/lxc/vmtest1.log

 The logfile is empty:

 root@vms1:/lxc# l vmtest1.log
 -RW-   0 2011-10-19 19:09 vmtest1.log


 And no corresponding /cgroup/vmtest1 entry:

 root@vms1:/lxc# l /cgroup/
 dRWX   - 2011-10-10 17:50 /cgroup/2004
 dRWX   - 2011-10-10 17:50 /cgroup/2017
 dRWX   - 2011-10-10 17:50 /cgroup/libvirt
 -RW-   0 2011-10-10 17:50 /cgroup/cgroup.event_control
 -RW-   0 2011-10-10 17:50 /cgroup/cgroup.procs
 -RW-   0 2011-10-10 17:50 /cgroup/cpu.rt_period_us
 -RW-   0 2011-10-10 17:50 /cgroup/cpu.rt_runtime_us
 -RW-   0 2011-10-10 17:50 /cgroup/cpu.shares
 -RW-   0 2011-10-10 17:50 /cgroup/cpuacct.stat
 -RW-   0 2011-10-10 17:50 /cgroup/cpuacct.usage
 -RW-   0 2011-10-10 17:50 /cgroup/cpuacct.usage_percpu
 -RW-   0 2011-10-10 17:50 /cgroup/cpuset.cpu_exclusive
 -RW-   0 2011-10-10 17:50 /cgroup/cpuset.cpus
 -RW-   0 2011-10-10 17:50 /cgroup/cpuset.mem_exclusive
 -RW-   0 2011-10-10 17:50 /cgroup/cpuset.mem_hardwall
 -RW-   0 2011-10-10 17:50 /cgroup/cpuset.memory_migrate
 -RW-   0 2011-10-10 17:50 /cgroup/cpuset.memory_pressure
 -RW-   0 2011-10-10 17:50 /cgroup/cpuset.memory_pressure_enabled
 -RW-   0 2011-10-10 17:50 /cgroup/cpuset.memory_spread_page
 -RW-   0 2011-10-10 17:50 /cgroup/cpuset.memory_spread_slab
 -RW-   0 2011-10-10 17:50 /cgroup/cpuset.mems
 -RW-   0 2011-10-10 17:50 /cgroup/cpuset.sched_load_balance
 -RW-   0 2011-10-10 17:50 /cgroup/cpuset.sched_relax_domain_level
 -RW-   0 2011-10-10 17:50 /cgroup/devices.allow
 -RW-   0 2011-10-10 17:50 /cgroup/devices.deny
 -RW-   0 2011-10-10 17:50 /cgroup/devices.list
 -RW-   0 2011-10-10 17:50 /cgroup/memory.failcnt
 -RW-   0 2011-10-10 17:50 /cgroup/memory.force_empty
 -RW-   0 2011-10-10 17:50 /cgroup/memory.limit_in_bytes
 -RW-   0 2011-10-10 17:50 /cgroup/memory.max_usage_in_bytes
 -RW-   0 2011-10-10 17:50 /cgroup/memory.memsw.failcnt
 -RW-   0 2011-10-10 17:50 /cgroup/memory.memsw.limit_in_bytes
 -RW-   0 2011-10-10 17:50 /cgroup/memory.memsw.max_usage_in_bytes
 -RW-   0 2011-10-10 17:50 /cgroup/memory.memsw.usage_in_bytes
 -RW-   0 2011-10-10 17:50 /cgroup/memory.move_charge_at_immigrate
 -RW-   0 2011-10-10 17:50 /cgroup/memory.oom_control
 -RW-   0 2011-10-10 17:50 /cgroup/memory.soft_limit_in_bytes
 -RW-   0 2011-10-10 17:50 /cgroup/memory.stat
 -RW-   0 2011-10-10 17:50 /cgroup/memory.swappiness
 -RW-   0 2011-10-10 17:50 /cgroup/memory.usage_in_bytes
 -RW-   0 2011-10-10 17:50 /cgroup/memory.use_hierarchy
 -RW-   0 2011-10-10 17:50 /cgroup/net_cls.classid
 -RW-   0 2011-10-10 17:50 /cgroup/notify_on_release
 -RW-   0 2011-10-10 17:50 /cgroup/release_agent
 -RW-   0 2011-10-10 17:50 /cgroup/tasks

 At last the container config file:

 lxc.utsname = vmtest1
 lxc.tty = 4
 lxc.pts = 1024
 lxc.network.type = veth
 lxc.network.link = br0
 lxc.network.name = eth0
 lxc.network.flags = up
 lxc.network.mtu = 1500
 lxc.network.ipv4 = 129.69.1.42/24
 lxc.rootfs = /lxc/vmtest1
 lxc.mount = /lxc/vmtest1.fstab
 # which CPUs
 lxc.cgroup.cpuset.cpus = 1,2,3
 lxc.cgroup.cpu.shares = 1024
 # http://www.mjmwired.net/kernel/Documentation/cgroups/memory.txt
 lxc.cgroup.memory.limit_in_bytes = 512M
 lxc.cgroup.memory.memsw.limit_in_bytes = 512M

Re: [Lxc-users] shutting down CentOS6 container

2011-10-18 Thread Brian K. White
On 10/17/2011 5:01 PM, Papp Tamas wrote:
 On 10/17/2011 10:54 PM, Derek Simkowiak wrote:
  /I tried the python script, it just works fine./

 Q1: How does the kill -INT init method affect running processes,
 especially MySQL and other databases that may need to shut down
 gracefully to avoid data corruption?

 I believe that the child processes (incl. mysqld, apache, etc.) would
 be able to shut down gracefully without data corruption, because they'd
 be killed with a signal that will invoke their internal signal
 handlers. But I am looking for independent confirmation.

 That's right.

 Q2: How is lxc-stop -n $CONTAINERNAME different from the Python script
 mentioned below? Will lxc-stop on a container cause an unclean
 shutdown, or does it also use a Unix signal?

 lxc-stop is part of the script.
 If I'm right, it's the equivalent of pushing the power button on the machine.

I would say it's like pulling the power cord.

Not just being a pedant. The terminology matters since you are trying 
specifically to clarify and nail down exactly that all-important 
behavioral detail.

Pressing the power button is ambiguous since pressing the power button 
can be either a polite signal resulting in a graceful and orderly 
shutdown, OR not, depending on the machine. And that difference is all 
the difference in the world.

Just in case someone asks I guess you could also say lxc-destroy is like 
removing everything but the hard drive.

-- 
bkw



Re: [Lxc-users] Fwd: RE: Price Request For lxc.org

2011-10-11 Thread Brian K. White
The IEEE does not honor requests for applicant-requested IDs, but even
for the locally administered case,

lxc 6c:78:63
and
LXC 4c:58:43

are both out: 'l' (6c) and 'L' (4c) both have a second nibble of c, whose
second-least-significant bit is 0:

1100
--^-

Too bad; it's nice and high, which would have avoided the low-MAC-address
interaction with the bridge.


Backwards, cxl and CXL are both ok:
43:58:4c
63:78:6c
-^:--:--

3
0011
--^-

cxl, containers by linux ?
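
For anyone who wants to repeat the check without counting bits by hand, a small
sketch; it also prints the least-significant (multicast/group) bit of the first
octet, which has to be clear for an address used as a unicast source:

  #!/bin/sh
  # For each candidate prefix, print the locally-administered (0x02) and
  # multicast (0x01) bits of the first octet.
  for prefix in 6c:78:63 4c:58:43 63:78:6c 43:58:4c; do
      first=$(printf '%d' "0x${prefix%%:*}")
      local_bit=$(( (first >> 1) & 1 ))
      mcast_bit=$(( first & 1 ))
      echo "$prefix  locally-administered=$local_bit  multicast=$mcast_bit"
  done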


-- 
bkw

On 10/11/2011 4:07 PM, Derek Simkowiak wrote:
   /Add it to the possible wish list along with the MAC address prefix/

  If there is interest in an official LXC vendor MAC address prefix,
 I'd like to call your attention to this Linux kernel bug:

 https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/584048
 https://www.redhat.com/archives/libvir-list/2010-July/msg00450.html

  These bug reports are for KVM with bridging, however, I have seen
 the same symptom using LXC.  The symptom is that the network bridge goes
 dead for several seconds when starting or stopping containers.  The root
 of the issue is in the Linux kernel, and how it handles the MAC address
 of bridges (and bond interfaces, too).

  In summary, the MAC prefix can't be arbitrary, because a low MAC
 vendor prefix causes a short-term network blackout on the bridge device
 when starting or stopping LXC containers, or KVM/qemu VMs, or any other
 environment using non-physical interfaces.  The blackout is (apparently)
 caused by the bridge changing its MAC address.

  I have added a workaround to my script for this bug (see Comment
  #60 in Launchpad, above).  According to Serge Hallyn: "That it is a
  general bridge property is indeed known. The fix in this bug is, like
  your script, simply working around that fact."


 Thank You,
 Derek Simkowiak
 de...@simkowiak.net

 On 10/11/2011 06:08 AM, Brian K. White wrote:
 That's a pretty substantial reduction. Add it to the possible wish list
 along with the MAC address prefix. Sadly I never finished the research
 to get that. The $600 is easy, the time to figure out what you're
 supposed to do is not ;)








Re: [Lxc-users] New LXC Creation Script: lxc-ubuntu-x

2011-10-06 Thread Brian K. White
Ideally, for the stated purpose, we need something not named ubuntu.

I've had the same sort of wiki page on opensuse.org for a year now, but
that's of course highly openSUSE-specific, which is exactly the problem a
central wiki proposes to avoid.

Meanwhile I'm getting less and less in love with SUSE every day anyway,
due to changes over the last couple of years, so I'm probably going to
start basing my systems on Arch or who-knows-what sooner or later.

So, sourceforge or code.google.com or .. blah, lxc.org is for sale for a 
mere $5000 haha.

-- 
bkw

On 10/6/2011 8:44 AM, Serge E. Hallyn wrote:
 Quoting Jäkel, Guido (g.jae...@dnb.de):
  I think there is about 80% overlap between the two projects but
 enough differences to be interesting.  I'll take a closer look at your
 script looking for ideas I may have missed, and I invite you to do the same.

 @Derek: well-spoken.


 @Daniel & Serge: Is there already something like a wiki to collect such
 contributed work? I think there are many more people around here who
 have developed such tools around LXC, focused on their own requirements and
 conditions and therefore not fit to publish to the community, but useful
 for others to study and take ideas from for their own purposes.

 I've just created https://wiki.ubuntu.com/lxc.  Please feel free to add your 
 own or, Derek and Uli, please fill in your own description of yours :)

 thanks,
 -serge






Re: [Lxc-users] stopping a container

2011-09-06 Thread Brian K. White
On 9/5/2011 12:34 PM, Michael H. Warfield wrote:
 On Mon, 2011-09-05 at 09:24 +0200, Papp Tamas wrote:
 On 09/05/2011 08:38 AM, Jäkel, Guido wrote:
 What is the right way to stop a container?
 Dear Papp,

 Like with the thread paradigm in programming languages, the right
 way is for the thread to decide to stop. Therefore your container has
 to shut itself down.

 Depending on your Linux flavor inside the container, you may e.g.
 send a signal to its init process to shut down properly. This mechanism
 was historically intended to be used by a UPS power supply. At the
 moment, I'm using an old-style System V init and I may just send a
 SIGINT to reboot and a SIGPWR to halt it (must be enabled in the
 inittab).


 Another (planned) way is to use lxc-execute, but this is still not
 working. Ulli Horlacher therefore wrote his own workaround: a little
 daemon executes all commands pushed in by a command running on the host
 -- disregarding all aspects of security.

 If you're running an sshd inside the container -- and in most
 cases you will, I think -- you may use this (with a pre-installed key) to
 directly send commands to it.

 hi,

 I don't like the ssh way. I think halting a container automatically
 through an ssh connection is a joke, which should not be used in any
 way.

 Another way that I have used is to send the init process a kill signal.
 I think it was the power-fail signal, but that should do it.  That
 definitely worked with SysV init, but I seem to recall I had some
 problem with upstart (and systemd won't currently run in a container
 anyway - so you can forget that pig).

That's not another way. That's exactly the way already stated first above.

-- 
bkw



Re: [Lxc-users] can't remove cgroup

2011-06-17 Thread Brian K. White
On 6/17/2011 12:06 PM, Serge Hallyn wrote:
 Quoting Brian K. White (br...@aljex.com):
 On 6/16/2011 3:26 PM, Serge Hallyn wrote:
 Quoting Brian K. White (br...@aljex.com):
 I thought we killed this problem?
 ...
 nj12:~ # rm -rf /sys/fs/cgroup/vps001

 rmdir


 Did that too. no joy.

 In fact I did both the main directory and several runs of find|xargs to
 delete files and directories using rm -f , rm -rf and rmdir.
 I'll have to wait for it to happen again to diagnose what the problem
 was. I had to reboot the host because I needed that vm back up.

 I'm guessing the developer was doing something I didn't expect within
 the vm, besides the use of the reboot command, to tie up the context
 group even after all processes went away.

 Or maybe, if you don't have a release agent set, he just ran something
 like vsftpd which created new cgroups by cloning?

 -serge


I do have a release agent, and I usually have the required vsftpd config
options to disable namespace usage as part of my recipe for setting up
all systems, but I did not do most of the setup of these particular
VMs. I'm trying to get one of my people up to speed so they can do it,
so I intentionally stayed away.

It's entirely possible the special vsftpd config either didn't get done, 
or got lost in a full distribution version in-place upgrade that was 
done from within the vm.

... aha, just checked. An old version of my template vsftpd config was 
used which did not yet have the namespace options.

I will add them and test! (as well as update the source of the template 
config obviously)

Thank you; even if this doesn't turn out to be the culprit of this
incident, it's still a hole I missed.

-- 
bkw



[Lxc-users] can't remove cgroup

2011-06-16 Thread Brian K. White
I thought we killed this problem?

nj12:~ # lxc-start -n vps001 -f /etc/lxc/vps001/config
lxc-start: Device or resource busy - failed to remove previous cgroup 
'/sys/fs/cgroup/vps001'
lxc-start: failed to spawn 'vps001'
lxc-start: Device or resource busy - failed to remove cgroup 
'/sys/fs/cgroup/vps001'

nj12:~ # lxc-ps auxwww |grep vps001
root  9307  0.0  0.0   7668   808 pts/0S+   14:06 
0:00 grep vps001

nj12:~ # lxc-info -n vps001
'vps001' is STOPPED

nj12:~ # lxc-destroy -n vps001
'vps001' does not exist

nj12:~ # mount |grep cgroup
cgroup on /sys/fs/cgroup type cgroup (rw)

nj12:~ # rm -rf /sys/fs/cgroup/vps001
rm: cannot remove 
`/sys/fs/cgroup/vps001/30149/cpuset.memory_spread_slab': Operation not 
permitted
rm: cannot remove 
`/sys/fs/cgroup/vps001/30149/cpuset.memory_spread_page': Operation not 
permitted
[...]
rm: cannot remove `/sys/fs/cgroup/vps001/cgroup.procs': Operation not 
permitted
rm: cannot remove `/sys/fs/cgroup/vps001/tasks': Operation not permitted
nj12:~ #

The dirs and files still exist, so "just ignore the error" doesn't apply
here. What happened was the user issued the command 'reboot' from within
the container. In my own testing I had only ever used 'shutdown -r now',
which worked fine.

This is lxc 0.7.4.2 on kernel 2.6.39

How can I clear this cgroup? How can I even tell if there are really any 
processes holding it open if lxc-ps shows none?
How can I restart this container other than by editing the start script 
to use a different cgroup name or restarting the entire host?
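
For the record, the raw cgroup filesystem can answer the "is anything still in
there" question even when lxc-ps shows nothing; a sketch using the paths from
the errors above:

  # Each cgroup directory has a 'tasks' file listing the PIDs still attached to
  # it; the directory cannot be rmdir'ed while it (or any child) is non-empty.
  find /sys/fs/cgroup/vps001 -name tasks | while read -r f; do
      while read -r pid; do
          printf '%s: %s %s\n' "$f" "$pid" "$(cat /proc/$pid/comm 2>/dev/null)"
      done < "$f"
  done

  # Once everything is really gone, empty child cgroups can be removed
  # bottom-up; rmdir fails harmlessly on any that still hold tasks.
  find /sys/fs/cgroup/vps001 -depth -type d -exec rmdir {} \;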

-- 
bkw



Re: [Lxc-users] can't remove cgroup

2011-06-16 Thread Brian K. White
On 6/16/2011 3:26 PM, Serge Hallyn wrote:
 Quoting Brian K. White (br...@aljex.com):
 I thought we killed this problem?
 ...
 nj12:~ # rm -rf /sys/fs/cgroup/vps001

 rmdir


Did that too. no joy.

In fact I did both the main directory and several runs of find|xargs to 
delete files and directories using rm -f , rm -rf and rmdir.
I'll have to wait for it to happen again to diagnose what the problem 
was. I had to reboot the host because I needed that vm back up.

I'm guessing the developer was doing something I didn't expect within 
the vm, besides the use of the reboot command, to tie up the context 
group even after all processes went away.

-- 
bkw



Re: [Lxc-users] [lxc-devel] [PATCH] ignore non-lxc configuration line

2011-06-02 Thread Brian K. White
On 6/2/2011 3:41 PM, Daniel Lezcano wrote:
 On 06/02/2011 07:03 PM, Michael H. Warfield wrote:
 On Wed, 2011-06-01 at 20:10 -0400, Michael H. Warfield wrote:
 On Fri, 2011-05-13 at 22:32 +0200, Daniel Lezcano wrote:
 From: Daniel Lezcano daniel.lezc...@free.fr
 We ignore lines in the configuration file not beginning with 'lxc.',
 so we can mix the configuration file with other information used for
 another component through the lxc library.
 Wow...

 I seem to recall requesting this sort of thing ages ago.  Maybe even
 before we created the -users list and only had the -dev list, and was
 shot down.  I have so wanted this feature.  This can implement many
 of the OpenVZ compatibility things we need the high-level scripts to
 perform, and keep them in one file.  Many thanks.  I am SO glad to see
 this!
 I see that this has not, apparently, made it into a release bundle yet.
 Any idea when it will be out?

 It will be in the lxc-0.7.5 version. No ETA for the moment.
 I would like to have new features in lxc before releasing a new version;
 the delta with 0.7.4 is mostly bug fixes.

Bugfixes-only for a micro (0.0.x) version increase sounds perfectly fine
to me. Hold up 0.8.0 or 1.0.0 for features indefinitely and it's fine,
but hold back bug fixes just so features can go with them?
That is rather what the micro part of the version number is for, isn't it?

-- 
bkw



Re: [Lxc-users] Howto detect we are in LXC contener

2011-05-25 Thread Brian K. White
On 5/25/2011 7:51 PM, David Touzeau wrote:
 Dear all

 To detect if we are inside an OpenVZ (VE) or Xen machine,
 we can check for the presence of:
 /proc/vz/veinfo
 /proc/vz/version
 /proc/sys/xen
 /sys/bus/xen
 /proc/xen

 But I did not find any information inside the LXC container in order to
 detect that we are really in an LXC container.

 Is there a tip?

 Best regards

Recent lxc_start sets an environment variable in the init process.

http://lxc.git.sourceforge.net/git/gitweb.cgi?p=lxc/lxc;a=commit;h=3244e75040a98d2854144ebc169a5a61ddbe0a26

But I like the other trick of checking if init is in a cgroup better for 
now, since it will work for containers that were created by other means 
than lxc_start.
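
A sketch combining both checks, run from inside the suspect system (the
variable name 'container' is an assumption based on the commit above; the
cgroup test just looks for PID 1 sitting in a non-root cgroup):

  #!/bin/sh
  # Heuristics for "am I inside an LXC container?"

  # 1) recent lxc-start exports a variable into init's environment:
  if tr '\0' '\n' < /proc/1/environ 2>/dev/null | grep -q '^container='; then
      echo "found a container variable in init's environment"
  fi

  # 2) older/other setups: a container's init sits in a non-root cgroup,
  #    while the host's init is in '/' in every hierarchy:
  if grep -qv ':/$' /proc/1/cgroup 2>/dev/null; then
      echo "PID 1 is in a non-root cgroup -- probably a container"
  fi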

-- 
bkw



Re: [Lxc-users] Mixing public and private IPs for guests - network configuration?

2011-05-23 Thread Brian K. White
On 5/21/2011 7:48 PM, Benjamin Kiessling wrote:
 Hi,

 Indeed this is not a virtualization-specific problem. You want your host to
 operate as a router for the other two IP addresses and, depending on the
 configuration of OVH, proxy-ARP the whole thing.
 Assuming you have PUB-IP1 on the host and want to assign PUB-IP2 to the
 container (let's say with veths).
 Just assign PUB-IP1 to your host (ip addr a PUB-IP1 dev ethN), add the route 
 for PUB-IP2 to the
 veth of the container on the host (ip r a PUB-IP2 dev vethN), add PUB-IP2 to 
 the interface in the
 container (ip addr a PUB-IP2 dev vethContainer) and set a default route over 
 PUB-IP1 in the
 container (ip r a PUB-IP1/32 dev vethContainer  ip r a default via PUB-IP1 
 dev vethContainer).
 Enable Routing (/proc/sys/net/ipv4/ip_forward) and if OVH uses reverse path 
 filtering proxy-arp
 (/proc/sys/net/ipv4/conf/$DEV/proxy_arp) on the host.
 That should do it. You could use a bridge and still reach all containers (the 
 bridge would have the
 address PUB-IP1 and would include all veths and the physical device) but 
 it'll complicate the setup
 if NAT is required for certain containers. Just set the routes explicitly for 
 each container veth.

 Regards,
 Benjamin Kiessling

I'm not the OP but just wanted to say this was good stuff and I 
appreciate a handy run-down like that.
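
Condensed into one place for later readers (PUB-IP1, PUB-IP2, ethN and vethN
are the placeholders from the explanation above, not real values):

  # --- on the host ---
  ip addr add PUB-IP1 dev ethN                       # the host's own public address
  ip route add PUB-IP2 dev vethN                     # route the container's IP to its veth
  echo 1 > /proc/sys/net/ipv4/ip_forward             # enable routing
  echo 1 > /proc/sys/net/ipv4/conf/ethN/proxy_arp    # if the provider does reverse-path filtering

  # --- inside the container ---
  ip addr add PUB-IP2 dev vethContainer
  ip route add PUB-IP1/32 dev vethContainer          # reach the host as a link route
  ip route add default via PUB-IP1 dev vethContainer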

-- 
bkw



Re: [Lxc-users] updated lxc template for debian squeeze - with attachedscript ; )

2011-04-24 Thread Brian K. White
Not at all, this is good info.

It's not an old thread as long as the proposed task hasn't been done 
yet, and it hasn't.

I still need to finish researching what exactly we should get, and then how.

-- 
bkw

On 4/23/2011 3:25 AM, Geordy Korte wrote:
 Hello,

 Sorry to revive an old thread but I would like to share some information
 with you that might give you an insight into why an OUI is advisable.

 I work for IBM as a Technical Pre-Sales consultant for Blade network
 technologies (what a mouth full). BNT creates switches that are very
 very good but that is not the point. One of the features that we have is
 VMready which basically means that when the switch detects a Virtualized
 uplink to a server it will analyse the traffic and create PORTS for
 every virtual host running on that server. This tech allows you to
 create policy for that port with which you can set QOS, ACL and anything
 else you would like. Now Vmready is fully vmotion enabled so that when
 you migrate a virtualhost to another server, the policy moves with it.

 The reason for me writing this to the list is that Vmready works for
 Hypervisor, vmware, kvm, powervm...  and it only works because of the
 mac address. Each switch has a database of Macs that belong to a
 virtualization product and by matching passing traffic to the list
 Vmready works. Should LXC get its own block, then I can make sure it's
 added to the Vmready database.

 Sorry if this sounds like a sales pitch... it's not meant to be.

 Geordy Korte

 On Fri, Mar 11, 2011 at 11:08 PM, Brian K. White br...@aljex.com
 mailto:br...@aljex.com wrote:

 On 3/11/2011 10:14 AM, Michael H. Warfield wrote:
   On Thu, 2011-03-10 at 19:09 +, Walter Stanish wrote:
   ...  I have read up on the OUI documentation and
   looking at the detail on the site LXC could opt for a 32bit
 OUI which would
   cost $600 for one block. The dev guys might want to setup a
 pledge program...
  
   I will pay for it.
  
   I too am willing to pay the whole thing, so, halvsies? Or see
 how many
   others want to split even?
  
   Sounds good.  I guess we can nominate you as the finance go-to
 on this
   one then :)
  
   Let us know details when they emerge.
  
   Can someone explain to me why we can't simply use a block of
 addresses
   with the 0200 (local administration) bit or'ed in.  Out of 48 bits of
   addressing, we can use 46 bits of them for anything we want as
 long as
   that bit is set and the 0100 bit (multicast) is clear.  By the
 standard,
   those are locally managed and allocated MAC addresses that are not
   guaranteed to be globally unique.  They don't even need to be
 unique in
   an entire network, only on the local subnet.  Use any convention you
   want.  Stuff the 32 bit IP address of the host in the lower 32
 bits and
   you've still got 14 bits worth of assignable addressing per host.
   That's what that bit is intended for.

 That is exactly what I do myself.

 I'm not sure there is a specific need for a recognizable lxc address
 space, but exactly the same thing could be said about xen and for some
 reason they have one. I don't claim it's necessary I just claim three
 things:

 1) It wouldn't hurt.

 2) It's cheap enough in both cash and time not to matter, more than
 enough volunteers have already presented themselves.

 3) I don't presume that because I don't perceive a reason, that no
 reason exists.

 One scenario I envision off-hand would be that automated vmware tools
 and xen tools and lxc tools could each provision addresses from their
 own spaces and guaranteed never step on each others toes.

 --
 bkw

 




 --
 ==
 Geordy Korte
 MSN geo...@geordy.nl mailto:geo...@geordy.nl




Re: [Lxc-users] native (non-NAT) routing?

2011-04-09 Thread Brian K. White
On 4/9/2011 3:00 AM, Ulli Horlacher wrote:
 On Wed 2011-04-06 (12:31), Daniel Lezcano wrote:

 root@zoo:/lxc# brctl show
 bridge name bridge id   STP enabled interfaces
 br0 8000.0050568e0003   no  eth0

 is your container up when you show the bridge information ?

 Yes:

 root@zoo:/lxc# brctl  show
 bridge name bridge id   STP enabled interfaces
 br0 8000.0050568e0003   no  eth0

 root@zoo:/lxc# lxc -l
 container  size (MB)   start-PID
 fex  377   0
 test 376   0
 ubuntu   6003311


 is it possible you give the ip addr result on the host ?

 What do you mean? Which result?



He's asking you to run ip addr on the host and post the result here.

-- 
bkw



Re: [Lxc-users] Container with different architecture like arm on x86 [How-to]

2011-04-07 Thread Brian K. White
On 4/7/2011 5:23 AM, l...@zitta.fr wrote:

 qemu has two modes, system and user.
 You described system mode; I used user mode.

That resolves a lot of the mystery right there. I hadn't realized qemu 
had such a mode.

The other issues are either

* obviated by the fact that you're already doing it and the world didn't 
blow up (kernel/abi/environment compatibility),

* or the usefulness outweighs the cost (each process runs in its own
qemu instance).

Pretty slick.

-- 
bkw



Re: [Lxc-users] Cluster Resource Agent

2011-04-06 Thread Brian K. White
On 4/6/2011 4:56 AM, Christoph Mitasch wrote:
 Hi,

 I'm wondering if anybody is using LXC in a high availability cluster.

 I tried to use it in a Pacemaker Cluster together with DRBD.

 In theory there would be the VirtualDomain Resource Agent supporting
 libvirt. But since my libvirt experience together with LXC was not
 promising, I think the best option is to use lxc-tools.

 It worked for me when using the lxc init script (/etc/init.d/lxc) for
 active/passive configurations.

 As far as I found out only /etc/lxc/ and corresponding lxc rootfs dirs
 have to be shared. /var/lib/lxc should not be necessary, because lxc
 init script doesn't use lxc-create/lxc-destroy.

 Anything else to take care of when moving LXC containers around machines?

 For active/active and more advanced configurations an OCF Resource Agent
 for LXC would be nice. It could be similar to the ManageVE RA for OpenVZ:
 http://hg.linux-ha.org/agents/raw-file/tip/heartbeat/ManageVE

 Regards,
 Christoph

What lxc init script? I think we all write our own and there is no 
official one yet.

I wrote one for openSUSE, but it's not in the SUSE lxc package nor in any
other official SUSE package, just in a stand-alone rclxc package in my
Build Service repo.

If the official packages for other distros include an init script, it
will be different for each one, since containers are such a low-level
feature that can be used for so many different kinds of jobs; it's hard
to imagine what an official init script would even look like.

-- 
bkw



Re: [Lxc-users] Container with different architecture like arm on x86 [How-to]

2011-04-06 Thread Brian K. White
On 4/6/2011 3:26 PM, l...@zitta.fr wrote:
 Hi,

 I tried to run an arm container under a x86_64 host and it works 

 Little how-to:

 Build a statically compiled qemu-arm:
   take the qemu sources and build them with:
   ./configure --static --target-list=arm-linux-user; make
   You will find the static qemu for arm at ./arm-linux-user/qemu-arm
 Use the binfmt_misc kernel module:
   mount the pseudo-fs:
   mount binfmt_misc -t binfmt_misc /proc/sys/fs/binfmt_misc
 Have an arm container:
   let's say it is at /lxc/armcontainer
 Copy qemu into the container:
   cp ./arm-linux-user/qemu-arm /lxc/armcontainer/
 Enable binfmt:
   echo
 ':arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/local/bin/qemu-arm:' > /proc/sys/fs/binfmt_misc/register
 Launch your container normally.

 I found this cool; I hope it will be useful to someone else.

 I made this how-to from my bash history, so I could have made some mistakes.
 Feel free to ask if you run into trouble.

 regards,

 Guillaume ZITTA
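
A quick way to sanity-check the registration before launching the container
(the 'arm' entry name comes from the first field of the echo line above):

  # binfmt_misc exposes one file per registered format; it should say "enabled"
  # and point at /usr/local/bin/qemu-arm as the interpreter.
  cat /proc/sys/fs/binfmt_misc/status    # whether the mechanism as a whole is on
  cat /proc/sys/fs/binfmt_misc/arm       # the per-format entry created above

  # to undo the registration later:
  echo -1 > /proc/sys/fs/binfmt_misc/arm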


A few questions,

The echo command references /usr/local/bin/qemu-arm, but I don't see 
that anywhere else in the recipe. Is that a x86_64 binary on the host or 
is that supposed to be a arm binary in the container, or is it simply 
ignored in this case and doesn't matter that it doesn't exist?

It sort of looks like you are telling the host x86_64 kernel to run a 
x86_64 qemu-arm executable any time it encounters an arm elf executable, 
and then since you are placing an arm qemu-arm executable in the 
container fs I guess you are implying that the arm executable you will 
be trying to run will be that arm qemu executable? Why would you do that?

foo -> qemu -> qemu -> kernel ??

i.e.: arm executable foo -> arm executable qemu-arm -> x86_64 executable
qemu-arm -> x86_64 host kernel ??

Assuming that even works. Doesn't there have to be an arm kernel in 
there somewhere? Like:

arm-foo -> arm-kernel -> x86_64-qemu-arm -> x86_64-host-kernel

I don't see the point in this. As long as you have qemu in there 
anywhere it means you are doing full cpu virtualization, avoiding which 
is pretty much the sole purpose of containers.

If it's really true that you can have qemu provide _only_ cpu 
virtualization yet somehow have the host kernel support the arm 
executables through that I guess that's a win since you have a single 
kernel doling out resources directly to all processes instead of kernels 
within kernels. Then again wouldn't that result in every single arm 
executable running inside its own instance of qemu, auto-launched by
the binfmt? That might be ok for application containers that only run 
one process but that would be terrible for a full system container 
unless that container really only ran one process directly, an arm 
kernel. And in that case I don't see the point of doing that inside a 
container. It's already even more isolated inside qemu than what the 
container provides and the container layer just becomes pointless overhead.

But doesn't the arm kernel have rather a lot more differences than 
merely understanding the arm binary format and cpu? I would have thought 
the container would have to run an x86_64 (or i386) binary, which would 
be qemu, and that qemu would have to run an arm kernel, and all other 
arm processes would have to run in that arm kernel.

I think I need an example to illustrate a use case for this.

-- 
bkw



Re: [Lxc-users] Container with different architecture like arm on x86 [How-to]

2011-04-06 Thread Brian K. White
On 4/6/2011 5:30 PM, Justin Cormack wrote:
 On Wed, 2011-04-06 at 16:45 -0400, Brian K. White wrote:

 A few questions,

 The echo command references /usr/local/bin/qemu-arm, but I don't see
 that anywhere else in the recipe. Is that a x86_64 binary on the host or
 is that supposed to be a arm binary in the container, or is it simply
 ignored in this case and doesn't matter that it doesn't exist?

 It sort of looks like you are telling the host x86_64 kernel to run a
 x86_64 qemu-arm executable any time it encounters an arm elf executable,
 and then since you are placing an arm qemu-arm executable in the
 container fs I guess you are implying that the arm executable you will
 be trying to run will be that arm qemu executable? Why would you do that?

 foo -> qemu -> qemu -> kernel ??

 i.e.: arm executable foo -> arm executable qemu-arm -> x86_64 executable
 qemu-arm -> x86_64 host kernel ??

 Assuming that even works. Doesn't there have to be an arm kernel in
 there somewhere? Like:

 arm-foo -> arm-kernel -> x86_64-qemu-arm -> x86_64-host-kernel

 I don't see the point in this. As long as you have qemu in there
 anywhere it means you are doing full cpu virtualization, avoiding which
 is pretty much the sole purpose of containers.

 If it's really true that you can have qemu provide _only_ cpu
 virtualization yet somehow have the host kernel support the arm
 executables through that I guess that's a win since you have a single
 kernel doling out resources directly to all processes instead of kernels
 within kernels. Then again wouldn't that result in every single arm
 executable running inside it's own instance of qemu, auto launched by
 the binfmt? That might be ok for application containers that only run
 one process but that would be terrible for a full system container
 unless that container really only ran one process directly, an arm
 kernel. And in that case I don't see the point of doing that inside a
 container. It's already even more isolated inside qemu than what the
 container provides and the container layer just becomes pointless overhead.

 But doesn't the arm kernel have rather a lot more differences than
 merely understanding the arm binary format and cpu? I would have thought
 the container would have to run an x86_64 (or i386) binary, which would
 be qemu, and that qemu would have to run an arm kernel, and all other
 arm processes would have to run in that arm kernel.

 I think I need an example to illustrate a use case for this.


 Qemu is just being used as an arm instruction set interpreter, making
 x86 system calls to the native kernel. binfmt_misc lets you run other
 architecture binaries via emulation just by executing the binary.
 Obviously it's slow, but if you want to build an arm distro, say, it gives
 another option besides cross-compiling or a native compile on a slow
 machine.

 Justin

Back to the first question: this actually works for binaries other than
hello.c? How many binaries live entirely within the high-level calls
that are really fully abstracted by the kernel?

I guess I have to try this because I don't believe it.
qemu just emulates hardware, as in the cpu and some of the supporting
system. You can't run a user executable on bare hardware, only specially
crafted free-standing things, which is pretty much just BIOS/EFI,
bootloaders, kernels, and memtest86. Not ls, for instance.

I'm familiar with binfmt, since I used to use iBCS and then linux-abi
ages ago to run SCO binaries on Linux, and similarly to run Linux
binaries within lxrun on SCO, osrcompat to run UnixWare binaries on
OpenServer and vice-versa, Linux on FreeBSD, etc.

But in all those cases, the following always is true:

* The executables have the same cpu instruction set as the host kernel.

* The executables have the same endianness as the host kernel and 
filesystem and utilities.

And at least one or more of the following is also always true:

* The emulation layer explicitly goes to lengths to handle various 
unavoidable differences and conflicts. Like remapping syscalls that take 
different numbers or types of arguments, and exhibit different behavior, 
even though they are named the same and do nominally the same job. 
Faking various system level environmental things like fake device nodes, 
/proc entries, etc... maybe cpu registers or memory/io addresses too I 
don't know everything but I know it's not exactly trivial or ignorable.

* The emulation layer provides fail-as-graceful-as-possible stubs for 
things that can't or haven't yet been really translated.

* Users of the emulation layer simply know that only a small subset of 
things will actually work. Anything might have any sort of problem, and 
if it breaks you get to keep the pieces. It's useful in a few very 
special cases and requires significant hacks and workarounds and 
compromises, but isn't generally useful.

I mean it's not just a few things, it's things everywhere you turn, 
filesystems that return values

Re: [Lxc-users] Control panel

2011-03-23 Thread Brian K. White
On 3/16/2011 5:11 AM, Geordy Korte wrote:
 On Tue, Mar 8, 2011 at 12:42 PM, Stuart Johnson stu...@stu.org.uk
 mailto:stu...@stu.org.uk wrote:


   maybe just define what you want. Gathering ideas could/would inspire
   someone to implement it.

 Ideally I want a simple ncurses application that shows you what
 containers are active, and allows simple functionality, such as create,
 start, stop and configure settings.  Super easy to install, and runs
 from the ssh console. No need for web servers, or opening up special
 ports.

 Hello,

 Had some time to spare and decided that I would pitch in. Attached a
 simple dialog system that will allow you to start/stop an lxc container
 and open the console.  It's really really really early (lol took me 5
 minutes) but let me know if this is what you are looking for and if so
 what you would like to have added to it.

 You need the dialog package for this to work:
 apt-get install dialog
 --
 Geordy Korte

This is a great start.

Although it's true, as has been said, that a real curses (or other
compiled) app will ultimately be able to do a lot more by using liblxc,
I really want to build on this in the meantime. I already have several
things I want to do to it to use in conjunction with my init.d script
for openSUSE.

With almost trivial additions I could have it integrate with my 
particular config scheme so that this could also:

* start/stop all containers
* mark a container as enabled/disabled for automatic start at (host) boot
* display per-container status, like a process list
* a poor-man's top that adds a column to show which container a process
belongs to
* display other misc info that's handy for the host admin, like the 
ip's/bridges/vlans associated with each container, rootfs and other 
mounts, etc.
* display/edit container config

With somewhat more work I could add:

* a wizard to create new containers (very simple at first, where it only 
creates one kind of container system)
* iotop-alike that shows container
* iftop-alike that shows container
* wall one or all containers

Can we put this up in a google code project, or do you mind if I do it?

A few of the things I want will be specific to openSUSE, or rather, 
specific to how I chose to do my init.d, since there is no standard yet, 
I just made up my own. But I still think it shouldn't be too hard to 
make this distribution agnostic. The wizard to create a container will 
just use template scripts and I'll just supply an openSUSE template 
script. The init.d integration is the only thing that will be really 
distribution specific but that is very simple.

I'd love to get a few more things into it and include it in my rclxc
package for openSUSE, or make it its own package.
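
For anyone reading the archive without the attachment, the general shape of
such a dialog front-end is roughly the following sketch (this is not Geordy's
script; it assumes the dialog package and the stock lxc-* tools):

  #!/bin/sh
  # Minimal dialog-driven LXC menu: pick a container, then act on it.
  while :; do
      items=$(lxc-ls -1 2>/dev/null | sort -u | awk '{printf "%s . ", $1}')
      [ -n "$items" ] || { echo "no containers found" >&2; exit 1; }
      name=$(dialog --stdout --menu "Select a container" 0 0 0 $items) || exit 0

      action=$(dialog --stdout --menu "Container: $name" 0 0 0 \
                      start   "start it detached" \
                      stop    "stop it" \
                      console "attach to its console" \
                      info    "show its state") || continue

      case $action in
          start)   lxc-start -n "$name" -d ;;
          stop)    lxc-stop -n "$name" ;;
          console) lxc-console -n "$name" ;;
          info)    lxc-info -n "$name"; sleep 2 ;;
      esac
  done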

-- 
bkw



Re: [Lxc-users] updated lxc template for debian squeeze - with attachedscript ; )

2011-03-11 Thread Brian K. White
On 3/11/2011 10:14 AM, Michael H. Warfield wrote:
 On Thu, 2011-03-10 at 19:09 +, Walter Stanish wrote:
 ...  I have read up on the OUI documentation and
 looking at the detail on the site LXC could opt for a 32bit OUI which 
 would
 cost $600 for one block. The dev guys might want to setup a pledge 
 program...

 I will pay for it.

 I too am willing to pay the whole thing, so, halvsies? Or see how many
 others want to split even?

 Sounds good.  I guess we can nominate you as the finance go-to on this
 one then :)

 Let us know details when they emerge.

 Can someone explain to me why we can't simply use a block of addresses
 with the 0200 (local administration) bit or'ed in.  Out of 48 bits of
 addressing, we can use 46 bits of them for anything we want as long as
 that bit is set and the 0100 bit (multicast) is clear.  By the standard,
 those are locally managed and allocated MAC addresses that are not
 guaranteed to be globally unique.  They don't even need to be unique in
 an entire network, only on the local subnet.  Use any convention you
 want.  Stuff the 32 bit IP address of the host in the lower 32 bits and
 you've still got 14 bits worth of assignable addressing per host.
 That's what that bit is intended for.

That is exactly what I do myself.
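
In shell, the scheme Michael describes is a one-liner. A sketch with made-up 
values (it packs the host's IPv4 address into the low 32 bits and a per-host 
container index into the next octet, so it only uses 8 of the 14 spare bits):

HOST_IP=203.0.113.7    # example host address, not from this thread
CT_INDEX=5             # example per-host container number
set -- ${HOST_IP//./ } # split the dotted quad into $1..$4
printf '02:%02x:%02x:%02x:%02x:%02x\n' "$CT_INDEX" "$1" "$2" "$3" "$4"
# -> 02:05:cb:00:71:07  (locally administered bit set, multicast bit clear)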

I'm not sure there is a specific need for a recognizable lxc address 
space, but exactly the same thing could be said about xen, and for some 
reason they have one. I don't claim it's necessary; I just claim three 
things:

1) It wouldn't hurt.

2) It's cheap enough in both cash and time not to matter, more than 
enough volunteers have already presented themselves.

3) I don't presume that because I don't perceive a reason, that no 
reason exists.

One scenario I envision off-hand would be that automated vmware tools 
and xen tools and lxc tools could each provision addresses from their 
own spaces and be guaranteed never to step on each other's toes.

-- 
bkw



Re: [Lxc-users] Control panel

2011-03-11 Thread Brian K. White
On 3/10/2011 9:04 PM, Stuart Johnson wrote:

 Meaning you think python would be too heavyweight?
 Certainly as one approaches the embedded end of the spectrum, there's
 something to be said for avoiding dependencies on large (compared to
 busybox ash) interpreters.  It'd be neat if I could deploy per-service
 containers on, say, a router with an 8MB MTD and 32MB RAM, and using
 python/perl/ruby/whatever would make that harder.  Maybe lua would be
 a better fit; I tend to stick to busybox ash, since it's already there.

 Perhaps we should be asking first, should an ncurses control panel be
 part of LXC or a separate project?  If it's the latter, then its rather
 immaterial, although personally I would love it to be part of LXC.  I
 want to able to install LXC on any Linux box, and start managing
 containers with the least amount of effort.

The embedded example just shows how everyone's needs are very different.
I can see someone wanting the lightest of non-interactive shell scripts, 
run as CGI from a web server with all UI done via the browser, or 
possibly run directly by the web server from a built-in mod_php or such, 
with no use for ncurses at all.

I say let the main lxc package just continue to make better 
non-interactive command line tools, and let all higher level front-ends 
be separate packages written in whatever language(s) the authors like, 
serving whatever diverse special purposes they may.

Ideally, of course, we would improve libvirt and/or other virt managers 
by adding lxc support to them, and modify lxc itself relatively little, 
just to facilitate that where indicated.

-- 
bkw



Re: [Lxc-users] How are pseudorandom MACs selected?

2011-02-02 Thread Brian K. White
On 2/2/2011 6:20 AM, Daniel Lezcano wrote:
 On 02/02/2011 10:26 AM, Trent W. Buck wrote:
 For each lxc.network.type = veth, if you DON'T specify an
 lxc.network.hwaddr, you get one assigned at random (example below).

 Are these assignments made from a reserved range (a la 169.254/16 in
 IPv4), or are they randomized across the entire address space?  AFAICT,
 it MUST be the latter.

 Further, when manually allocating a static hwaddr (so I can map it to an
 IP within the DHCP server),

 The dhcp relies on an identifier to map the IP, the default is to use
 the mac address but you can use another identifier like the hostname for
 example. AFAIR, there is an option in the system network configuration
 scripts to send the hostname for the dhcp requests.

is there any particular range I should avoid
 or stick to?

 This is how the kernel allocates the mac address.

 /**
* random_ether_addr - Generate software assigned random Ethernet address
* @addr: Pointer to a six-byte array containing the Ethernet address
*
* Generate a random Ethernet address (MAC) that is not multicast
* and has the local assigned bit set.
*/
 static inline void random_ether_addr(u8 *addr)
 {
   get_random_bytes (addr, ETH_ALEN);
   addr[0] &= 0xfe;   /* clear multicast bit */
   addr[0] |= 0x02;   /* set local assignment bit (IEEE802) */
 }


 Maybe you can use the mac address range used for the virtual nic of vmware.

I just use 02:00:<ip address>, which ends up being automatically unique 
enough not to collide with anything else on your subnet, assuming you 
already know the IPs you want to use:

IP=192.168.0.50   # container nic IP
HA=`printf 02:00:%x:%x:%x:%x ${IP//./ }` # generate a MAC from the IP

The assumption is that you are already prepared to manage IPs somehow 
and wish the MACs were automatic, yet at least relatively stable, to 
keep from too often breaking things that track MACs. So this way, as you 
provision VMs, the MACs will never collide without you having to 
actually track them manually, yet they aren't generated randomly and 
they stay the same as long as a given VM's IP stays the same.

-- 
bkw



Re: [Lxc-users] release candidate for lxc

2011-02-01 Thread Brian K. White
On 2/1/2011 5:22 AM, Daniel Lezcano wrote:
 Hi All,

 The lxc-0.7.4 version will be released very soon.

 I suppose most of you are using the version 0.7.3, not the dev version.
 Before releasing this new version, I would like to release a pre-version
 and that will be very nice if you can check this version is ok for you,
 especially the mount regression we had with the 0.7.3.

 Thanks in advance
 -- Daniel

Works for me.

I'm currently on 0.7.2 not 0.7.3 because of the breakage, but I had no 
problems building/installing/trying 0.7.3 and am just as ready to try a 
new version any time.

Do I just grab the current latest git version now or will you post a 
specific version to get?

-- 
bkw



Re: [Lxc-users] An application container for apache?

2011-01-20 Thread Brian K. White
On 1/20/2011 10:29 AM, Sergio Daniel Troiano wrote:
 Andre,

 I'm using Slackware and I've compiled lxc-0.7.2 because when I tried to
 use lxc-0.7.3 I couldn't mount anything within the container.

 You have to create a root environment; I use /container, where all the
 shared files and directories live (/usr, /bin, /etc and so on).
 Besides, you must create non-shared directories (for example apache logs);
 you'll mount them when you start the container.

 You can use DEBUG mode when you start the container: lxc-start -n web -d
 -lDEBUG -o log_debug_file -f lxc.conf
 I use two config files; the first one is lxc.conf:

 lxc.utsname = web
 lxc.mount = config.fstab
 lxc.rootfs = /container
 lxc.tty = 12
 lxc.pts = 1024
 lxc.cgroup.cpuset.cpus = 0,1,2,3,4,5,6,7
 lxc.network.type = veth
 lxc.network.flags = up
 lxc.network.link = br0
 lxc.network.name = eth0
 lxc.network.ipv4 = 192.168.1.241/24
 lxc.cgroup.devices.allow = a

 The second one is config.fstab


 ## Apache logs, fcgid
 /home/skel.containers/web/usr/local/apache2/logs
 /container/usr/local/apache2/logs bind defaults,bind 0 0
 ## Apache conf
 /home/skel.containers/web/usr/local/apache2/conf
 /container/usr/local/apache2/conf bind defaults,bind 0 0
 none /container/proc proc defaults 0 0
 none /container/dev/pts devpts newinstance 0 0


  Sergio D. Troiano
  Development Team.

  Av. de los Incas 3085
  (C1426ELA) Capital Federal



 On Thu, 2011-01-20 at 13:51 -0200, Andre Nathan wrote:
 On Thu, 2011-01-20 at 11:44 -0200, Sergio Daniel Troiano wrote:
    Sure, but there are a lot of things I have found out about lxc. How far
    are you? Where are you stuck?

 I'm just beginning with LXC... I have tried to use the lxc-sshd script
 as a starting point but I still haven't got it to work yet.

 Do you have apache starting up as a normal user? Are you using read-only
 bind mounts? Which directories did you have to make user-specific, and
 which are shared by the host and the containers?

 Thanks a lot,
 Andre

I've been unable to use 7.3 also, at least with my existing 7.2 configs.

I assume it must work, and only the developer who changed it knows how 
to make it work after that change.

I'd like a description of that here or somewhere so I can stay up to 
date. I saw only an unclear answer to a similar question a while back.
Something about the mount paths being relative to some other context 
than before, but it wasn't explained exactly what needs to be specified 
relative to what.

I haven't tried to deduce it by trial & error since I need all my 
working lxc hosts to actually work at the moment, so they have to keep 
running 7.2. I haven't had time to set up a new lxc host purely for 
testing that I can try 7.3 on without disturbing the production boxes.

-- 
bkw



[Lxc-users] multi-homed host

2010-12-14 Thread Brian K. White
Shouldn't I be able to have two different nics on a host, on two 
different, unrelated, public networks, and have two bridge devices on 
the host, and some containers on one bridge and some containers on the 
other bridge, and have all containers be able to talk to their 
respective internet connections, regardless of which nic happens to be 
the default gateway for the host?

Host setup:

eth0 - 10.0.0.x - lan with other 10.0.0.x machines

eth1 - br0 - a.a.a.x - public wan 1 , cable modem

eth2 - br1 - b.b.b.x - public wan 2 , fios

ip forwarding is enabled

eth0 lan works fine.
The host talks to other 10.0.0.x boxes via this with no problem.

eth1/br0 works fine.
The host's default gateway is a.a.a.1
The host talks to the internet & vice versa just fine via this.

eth2/br1 works fine from the host's point of view.
Other b.b.b.x machines are reached directly via this, not routed over 
eth1/br0.

Containers:

Containers with a.a.a.x ip's work fully and as expected.
They can reach the internet and the internet can reach them.
These containers have a.a.a.x ips and their default gw is a.a.a.1

Containers with b.b.b.x addresses do not work fully.
These have b.b.b.x ip's and default gw b.b.b.1
They can see the host and each other on the same host, and they can even 
see other neighboring b.b.b.x hosts, external to the host, but on the 
same physical local switch where traffic does not have to go out of the 
switch up to the b.b.b.1 default gateway.
(b.b.b.1 is on the other end of the fios line, not on premises and not 
owned or operated by me but by verizon)

None of the hosts nor the switch has any vlans or tagging, other than 
the default vlan id of 1 in the switch when left undefined.
Software firewalls are disabled in the hosts and containers, at least 
for now, while still trying to figure this out.

What in the world could allow a container in the host to talk outside 
the host well enough to reach other neighboring hosts on the same 
switch, but just not be able to reach the default gateway outside the 
switch? It's like the gateway has firewalled certain IPs and not others, 
but the IPs actually work fine if put on a laptop directly, or if the 
host's default gateway and nameserver are switched over to the b.b.b.x 
network. Say the host's br1 is b.b.b.50 and a container is b.b.b.51, 
and there is one single switch connecting four things:
b.b.b.1 - default gateway on other end of uplink
b.b.b.40 - neighboring host, regular traditional server, single ip.
b.b.b.41 - neighboring host, regular traditional server, single ip.
b.b.b.50 - the host
b.b.b.51 - container 1 on host
b.b.b.52 - container 2 on host
All but the containers are plugged into the same single switch, and .50, 
.51 and .52 share the same bridge on the host.

The host .50 can ping and be pinged by all: itself, its containers, 
neighboring hosts, containers inside neighboring hosts, and the gateway.

The container .51 can ping .50, .52, .40 and .41, but not .1!
How in the world can .51 reach across the host's br1 and across the 
switch to .41, and yet not do exactly the same thing for .1, which is 
exactly the same number and kind of hops away?

I've already called verizon tech support and they just said their 
equipment only reports all is well. I tested all the IPs with a laptop 
directly on the b.b.b.x ethernet drop and they all worked fine that way, 
and I swapped out my switch for another one just for the heck of it, so 
I'm down to the config in my lxc hosts as the culprit.

About the only consistent pattern I can find is the host's default 
gateway. The only containers that work fully are the ones that happen to 
use the same gateway as the host, but if a bridge interface is just a 
software switch, then why should the host's default gateway setting 
matter at all to the containers' ability to talk across it?
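
One generic way to narrow this kind of thing down (a sketch, assuming 
tcpdump, arping and sysctl are available on the host; the gateway address 
here is just the placeholder used above):

# On the host, watch what actually goes out br1 while the container pings b.b.b.1:
tcpdump -eni br1 arp or icmp

# From inside the container, check whether ARP for the gateway is answered at all:
arping -I eth0 -c 3 b.b.b.1

# On the host, make sure reverse-path filtering isn't dropping the replies:
sysctl net.ipv4.conf.br1.rp_filter net.ipv4.conf.all.rp_filter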

-- 
bkw




Re: [Lxc-users] regular lxc development call?

2010-12-13 Thread Brian K. White
On 12/13/2010 1:03 PM, Stéphane Graber wrote:
 On Tue, 2010-11-30 at 03:06 +, Serge E. Hallyn wrote:
 Quoting Daniel Lezcano (daniel.lezc...@free.fr):
 On 11/29/2010 03:53 PM, Serge E. Hallyn wrote:
 Hi,

 at UDS-N we had a session on 'fine-tuning containers'.  The focus was
 things we can do in the next few months to improve containers.  The
 meeting proceedings can be found at
 https://wiki.ubuntu.com/UDSProceedings/N/CloudInfrastructure#Make%20LXC%20ready%20for%20production

 We have a few work items written down at
 https://blueprints.edge.launchpad.net/ubuntu/+spec/cloud-server-n-containers-finetune
 The list is flexible fwiw, but we thought it might help to have a regular
 call, perhaps every other week, to discuss work items, their design,
 and their progress.  For some features like reboot/shutdown, I think
 design still needs discussion.  For other things, it's more important
 that we just discuss who's doing what and what's been done.

 Is there interest in having such a call?


 Yep, IMO it is a good idea.

 I suspect most of the containers work now is purely volunteer driven,
 so a free venue seems worthwhile.  Should we do this over skype?  IRC?
 Does someone want to set up a conference number?


 I don't have a conf number, if anyone has one that will be great,
 otherwise I am fine with skype or irc.

 Looks like we'll be starting small anyway, so let's just try skype.  Anyone
 interested in joining, please send me your skype id.

 What is a good time?  I'll just toss thursday at 9:30am US Central time
 (15:30 UTC) out there.

 -serge

 I'd like to attend that call, Skype ID: stgraber

 Depending on how many people are going to attend and where they're from,
 I might be able to provide a conf number.
 I asked my company (Revolution Linux) and we can use our 1-800 number
 for the call. I can also invite people from other countries as long as
 they are on landline.

 9:30am central is a bit early for me as I tend to arrive at the office
 around 10am central (9am eastern).

 I'm usually around from 9am eastern to 11:30am and 12:30pm to 5:30pm.
 Monday being usually quite busy so would like to avoid if possible :)

 I guess it might be useful to have a list somewhere (wiki ?) of people
 who'd like to attend with availabilities and timezone.

I would like to lurk on that.
I don't have a skype account and would have preferred irc but I'll get a 
skype account.

I'm currently using lxc in limited forms of production (production that 
must work, but can tolerate occasional downtime, apologies, and growing 
pains), using that as a way to create and then improve 
distribution-specific packaging and integration in openSUSE, and as a 
stepping stone to real production later, where customers will actually 
run on lxc VMs and they must work well enough that I never have to 
apologize to a customer.

I'm not affiliated with openSUSE, but no one else is writing any distro 
integration yet, so my build service packages and openSUSE wiki article 
are pretty much all there is for openSUSE so far.

As you said, several startup/shutdown issues need to be discussed 
further to arrive at a general consensus for best practice and most 
expected behavior, which is where my interest lies at the moment. C/R 
will be nice but it's too far off for me to use practically although I 
think some of the userspace glue is within my areas of interest and ability.

-- 
bkw



Re: [Lxc-users] regular lxc development call?

2010-12-13 Thread Brian K. White
skype ID brian.kenyon.white

-- 
bkw



Re: [Lxc-users] On clean shutdown of Ubuntu 10.04 containers

2010-12-06 Thread Brian K. White
... 
 done

nj10:~ # rclxc status
Checking for LXC containers... 
 unused

nj10:~ # rclxc list
Listing LXC containers...
'vps001' is STOPPED
'vps002' is STOPPED
'vps003' is STOPPED
'vps004' is STOPPED
'vps005' is STOPPED
'vps006' is STOPPED
'vps007' is STOPPED
'vps008' is STOPPED
'vps009' is STOPPED
'vps011' is STOPPED
'vps012' is STOPPED
'vps013' is STOPPED
nj10:~ # time rclxc start
Starting LXC containers... 
 done


real    0m0.242s
user    0m0.012s
sys     0m0.000s
nj10:~ # rclxc list
Listing LXC containers...
'vps001' is RUNNING
'vps002' is RUNNING
'vps003' is RUNNING
'vps004' is RUNNING
'vps005' is RUNNING
'vps006' is RUNNING
'vps007' is RUNNING
'vps008' is RUNNING
'vps009' is RUNNING
'vps011' is RUNNING
'vps012' is RUNNING
'vps013' is RUNNING
nj10:~ # screen -r vps013

INIT: version 2.88 booting
INIT: Entering runlevel: 3
blogd: can not set console device to /dev/pts/34: Device or resource busy
Master Resource Control: previous runlevel: N, switching to runlevel:3
Initializing random number generator done
Starting syslog services done
Starting D-Bus daemon   done
No keyboard map to load
Loading compose table winkeys shiftctrl latin1.add   done
Stop Unicode mode   done
Setting up (localfs) network interfaces:
lo
lo    IP address: 127.0.0.1/8
  IP address: 127.0.0.2/8
lo   done
eth0
eth0  IP address: 71.187.206.90/24
eth0 done
Setting up service (localfs) network  .  .  .  .  .  .  .  .  .  .   done
Starting SSH daemon  done
Loading CPUFreq modules (CPUFreq not supported)
Starting HAL daemon  done
Setting up (remotefs) network interfaces:
Setting up service (remotefs) network  .  .  .  .  .  .  .  .  .  .  done
Re-Starting syslog services  done
Starting auditd The audit system is disabled
 done
Starting incron  done
Starting mail service (Postfix)  done
Starting CRON daemon done
Starting rpcbind done
Starting rsync daemon   done
Starting smartd  unused
Starting vsftpd  done
Starting INET services. (xinetd) done
Master Resource Control: runlevel 3 has been reached
Skipped services in runlevel 3: splash smartd

Welcome to openSUSE 11.3 Teal - Kernel 2.6.37-rc3-3-default (console).


nj10-013 login:

[detached]
nj10:~ # time rclxc stop
Shutting down LXC containers... 
 done


real    0m8.537s
user    0m0.048s
sys     0m0.124s
nj10:~ # rclxc list
Listing LXC containers...
'vps001' is STOPPED
'vps002' is STOPPED
'vps003' is STOPPED
'vps004' is STOPPED
'vps005' is STOPPED
'vps006' is STOPPED
'vps007' is STOPPED
'vps008' is STOPPED
'vps009' is STOPPED
'vps011' is STOPPED
'vps012' is STOPPED
'vps013' is STOPPED
nj10:~ # screen -ls
No Sockets found in /var/run/screens/S-root.
nj10:~ # lxc-ps --lxc auxwww
CONTAINER  USER   PID %CPU %MEM   VSZ   RSS TTY  STAT START   TIME COMMAND

nj10:~ #


--
bkw
#!/bin/sh
# /etc/init.d/lxc
#   and its symbolic link
# /usr/sbin/rclxc
#
# System startup script for LXC containers.
# For lxc 0.7.2, which doesn't require an external monitor process to perform
# the lxc-stop when a container's init process requests init 0|1|6.
#
# 20101108 - Brian K. White - br...@aljex.com
#
### BEGIN INIT INFO
# Provides:  lxc
# Required-Start:$ALL
# Should-Start:
# Required-Stop: $ALL
# Should-Stop:
# Default-Start: 3 5
# Default-Stop:  0 1 2 6
# Short-Description: LXC Linux Containers
# Description:   Start/Stop LXC containers.

### END INIT INFO

. /etc/rc.status

LXC_ETC=/etc/lxc
LXC_SRV=/srv/lxc
CGROUP_MOUNT_POINT=/var/run/lxc/cgroup
CGROUP_MOUNT_NAME=lxc
CGROUP_MOUNTED=false
CGROUP_RELEASE_AGENT=/usr/sbin/lxc_cgroup_release_agent
LXC_CONF=${LXC_ETC}/lxc.conf
[[ -s $LXC_CONF ]] && . $LXC_CONF

# Various possible overrides to cgroup mount point.
# If kernel supplies cgroup mount point, prefer it.
[[ -d /sys/fs/cgroup ]] && CGROUP_MOUNT_POINT=/sys/fs/cgroup CGROUP_MOUNT_NAME=cgroup
# If cgroup already mounted, use it no matter where

Re: [Lxc-users] On clean reboot of Ubuntu 10.04 containers

2010-12-06 Thread Brian K. White
On 12/6/2010 3:01 AM, Trent W. Buck wrote:
 Trent W. Buck writes:

 This post describes my attempts to get clean shutdown of Ubuntu 10.04
 containers.  The goal here is that a shutdown -h now of the dom0
 should not result in a potentially inconsistent domU postgres database,
 cf. a naive lxc-stop.


In my previous note about parallel shutdowns, that same system also 
works for this. A user may ssh in to the container as root and issue 
'shutdown -r now' or 'shutdown -h now' and it works as expected from 
their point of view. No cron job on the host. In lxc 0.6.5 you would 
have a watchdog process per container that uses inotify to be alerted 
the instant the container's runlevel file and/or cgroup tasks list file 
changed. I had that as just a shell function right in the init script. 
In 0.7.2 this is handled by lxc internally and is rather more reliable, 
since it was possible to break or kill the separate watchdog processes.
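
The 0.6.5-era watchdog was nothing fancy; it was roughly this shape (a 
sketch rather than the exact function from my init script, and it assumes 
inotify-tools is installed and the rootfs lives under /srv/lxc as in the 
script I posted):

# Per-container watchdog, one backgrounded instance per started container.
watch_container() {
    name=$1 rootfs=/srv/lxc/$1
    while inotifywait -qq -e modify "$rootfs/var/run/utmp"; do
        rl=$(chroot "$rootfs" /sbin/runlevel 2>/dev/null | awk '{print $2}')
        case "$rl" in
            0) lxc-stop -n "$name"; break ;;                            # halt
            6) lxc-stop -n "$name"; lxc-start -n "$name" -d; break ;;   # reboot
        esac
    done
}
watch_container vps001 &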

I think you are working harder than necessary for some things, although 
it appears you have a legitimate problem with the upstart and tmpfs 
issue. Whether the fault is lxc's or ubuntu's in that case I can't say, 
because ideally neither should have such a hard-coded assumption.

-- 
bkw



Re: [Lxc-users] On clean shutdown of Ubuntu 10.04 containers

2010-12-06 Thread Brian K. White
On 12/6/2010 3:34 PM, Michael H. Warfield wrote:
 On Mon, 2010-12-06 at 12:38 -0500, Brian K. White wrote:
 On 12/6/2010 2:42 AM, Trent W. Buck wrote:
 This post describes my attempts to get clean shutdown of Ubuntu 10.04
 containers.  The goal here is that a shutdown -h now of the dom0
 should not result in a potentially inconsistent domU postgres database,
 cf. a naive lxc-stop.

 As at Ubuntu 10.04 with lxc 0.7.2, lxc-start detects that a container
 has halted by 1) seeing a reboot event incontainer/var/run/utmp; or
 2) seeingcontainer's PID 1 terminate.

 Ubuntu 10.04 simply REQUIRES /var/run to be a tmpfs; this is hard-coded
 into mountall's (upstart's) /lib/init/fstab.  Without it, the most
 immediate issue is that /var/run/ifstate isn't reaped on reboot, ifup(8)
 thinks lo (at least) is already configured, and the boot process hangs
 waiting for the network.

 Unfortunately, lxc 0.7's utmp detect requires /var/run to NOT be a
 tmpfs.  The shipped lxc-ubuntu script works around this by deleting the
 ifstate file and not mounting a tmpfs on /var/run, but to me that is
 simply waiting for something else to assume /var/run is empty.  It also
 doesn't cope with a mountall upgrade rewriting /lib/init/fstab.

 More or less by accident, I discovered that I can tell lxc-start that
 the container is ready to halt by crashing upstart:

   container# kill -SEGV 1

 Likewise I can spoof a ctrl-alt-delete event in the container with:

   dom0# pkill -INT lxc-start

 I automate the former signalling at the end of shutdowns thusly:

   chroot $template_dir dpkg-divert --quiet --rename /sbin/reboot
   chroot $template_dir tee >/dev/null /sbin/reboot <<-EOF
 #!/bin/bash
 while getopts nwdfiph opt
 do [[ f = \$opt ]] && exec kill -SEGV 1
 done
 exec -a $0 \$0.distrib \$@
 EOF
   chroot $template_dir chmod +x /sbin/reboot
   chroot $template_dir ln -s reboot.distrib /sbin/halt.distrib
   chroot $template_dir ln -s reboot.distrib /sbin/poweroff.distrib

 I use the latter in my customized /etc/init.d/lxc stop rule.
 Note that the lxc-wait's SHOULD be parallelized, but this is not
 possible as at lxc 0.7.2 :-(

 Sure it is.
 I parallelize the shutdowns (in any version, including 0.7.2) by doing
 all the lxc-stop in parallel without looking or waiting, then in a
 separate following step do a loop that waits for no containers running.
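
In shell terms the pattern is roughly this (a sketch, not the actual init 
script; it assumes containers are defined by /etc/lxc/<name>/config as in 
the script linked below):

any_running() {
    for cfg in /etc/lxc/*/config; do
        name=${cfg#/etc/lxc/}; name=${name%/config}
        lxc-info -n "$name" 2>/dev/null | grep -q RUNNING && return 0
    done
    return 1
}
for cfg in /etc/lxc/*/config; do
    name=${cfg#/etc/lxc/}; name=${name%/config}
    lxc-stop -n "$name" &              # fire off all the stops, don't wait
done
wait
while any_running; do sleep 1; done    # separate step: wait for quiet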

 Here is my openSUSE init.d/lxc:
 https://build.opensuse.org/package/files?package=lxc&project=home:aljex
 And the packages:
 http://download.opensuse.org/repositories/home:/aljex/*/lxc-0.7.2*.rpm

 It makes assumptions that are wrong for ubuntu and is more limited than
 you may want in terms of what it even tries to handle. But that's beside
 the point of parallel shutdowns.

 Also wrong for Fedora.

 * cgroup handling includes a particular stack of override logic for
 possible cgroup mount points that makes sense to me.
 - start with built-in default /var/run/lxc/cgroup, and name it lxc so
 as not to conflict with any other cgroup setup by default.
 - if you defined something in $LXC_CONF, prefer it over default
 - if kernel is providing /sys/fs/cgroup automatically, prefer that over
 either default or $LXC_CONF
 - if a cgroup named lxc is already mounted, prefer that over all else

 I'm not quite sure if I would put those last two in that order.
 Especially after the last little discussion over on LKML over the per
 tty cgroups in the kernel vs in user space, I think I would let the
 kernel defined /sys/fs/cgroup trump all else if it exists.  Something
 that's been mounted may not have been mounted with all the options you
 may want, but I'm not sure how much difference that's going to make.  I
 would think the kernel definition would be preferable.  Is there
 something specific you had in mind that would lead you to want to
 override that?

 * assumes lxc 0.7.2 because the script is part of a lxc-0.7.2 rpm
 - removes the shutdown/reboot watchdog functions that were needed in
 0.6.5 but are built in to 0.7.2 now.

 * only starts containers that are defined by $LXC_ETC/*/config

 Yeah, that's something where I wish we had an onboot and/or disabled
 config file like OpenVZ does.  So you can have some configured but that
 don't autoboot when you boot the system.  As that stands, you would have
 to rename or remove the config file.  :-P

 * only shuts down containers that it started

 I don't quite see that as happening literally as described.  Looks like
 it's going to shut down any container for which it can find a powerfail
 init, even if it was started by some other means, say manually.  It
 doesn't seem to be actually tracking what ones it started.  Granted,
 during normal operation, you're going to try to start everything with a
 config but it looks like it will shut down manually started containers
 as well, even if they are not listed with configs and would not even
 show up in your status

Re: [Lxc-users] Proposal for an FHS-compliant default guest filesystem location

2010-11-08 Thread Brian K. White
On 11/8/2010 1:14 PM, Michael H. Warfield wrote:
 On Mon, 2010-11-01 at 08:40 -0500, Serge E. Hallyn wrote:
 Quoting Walter Stanish (walter.stan...@saffrondigital.com):
 http://lxc.git.sourceforge.net/git/gitweb.cgi?p=lxc/lxc;a=commitdiff_plain;h=c01d62f21b21ba6c2b8b78ab3c2b37cc8f8fd265

 This commit only moves the location of the 'templates', which are
 just scripts that install a guest fs.  It doesn't/shouldn't move
 the location of the actual guest fs's.

 Therefore I humbly propose:
   - the establishment of /var/lib/lxc as the default top-level
 directory for guest filesystems

 AFAICS we are still using /var/cache/lxc right now.  Which I like
 better than /var/lib/lxc.  If it has 'lib' in the pathname, it should
 have libraries!

 Actually, I would beg to differ with you on that since it's in /var and
 that's where system applications write and store data.  Libraries
 (meaning linked libraries, dynamic and static) should be under /usr
 or /lib since they are not generally written to.  You could have
 libraries in there, I suppose, but I would not consider that the safest
 place for them and most of what you find there is not libraries, unless
 you mean libraries in the sense of libraries of files as in a
 collection of files, which is another sense of the word.  But then,
 that would certainly be an applicable location for the machine
 configuration files as now.

 Mailman is another example application which keeps most of its python
 code under /usr/lib/mailman while longer term storage of lists,
 archives, and databases are stored in /var/lib/mailman.

 Samba is another fine example of this and, in fact, we (the Samba team)
 and the distros moved away from using /var/cache/samba for things like
 the tdb databases and storing extraneous data such as Windows device
 drivers the server can serve up.

 Personally, I like and use /srv/lxc for my VMs and don't see any
 conflict with the FHS.  It is, after all, a site local configuration
 sort of thing that gets set up when you build the images and comprises,
 potentially, entire FHS-like sub hierarchies for the VMs.

(eg: /var/lib/lxc/guestname)
   - all use of /etc/lxc/guestname/rootfs should be considered deprecated

 For the cgroup mount point, I've been using /var/lib/cgroup and I think
 (believe) that was the consensus of a discussion quite some time ago and
 is what's recommended in some howtos.  For the container mount-points
 and storage of the registered configuration files(s), /var/lib/lxc works
 just fine and would be in agreement with the strategy if /var/lib/cgroup
 for the cgroups, IMHO.

Why in the world would you want to break the ability to safely back up 
just /etc and know that you got practically everything needed to 
re-create a server without having to back up the entire server full of 
redundant junk that would be better to come from new install media?

Yes there are already special cases that break this assumption but they 
are few and should be reduced and avoided not embraced and increased.

I have rsync/backup scripts that just grab /etc, /home, /srv, and a 
couple application specific data dirs, and this not only makes my 
backups (and restores, and migrations, and clones) small and fast, it 
makes it easier to move to newer versions of the distribution, different 
cpu platforms, and even different OS's.

I'd like config files in /etc/lxc/guestname/config just like most 
other things work.

/srv/lxc sounds good to me for the rootfs's for the same reason I want 
/etc/lxc for the configs.

cgroups is another issue.
/cgroup makes sense because of /proc /sys /dev etc, but there are also 
/dev/pts and /sys/kernel/debug etc so mounting kernel virtual fs's on / 
is not universal.

I think, just talking about the undifferentiated (distribution agnostic) 
default here, it might make sense to have an lxc-specific cgroup mount 
point in one of:
/var/run/lxc/cgroup
/var/lxc/cgroup
This way lxc can organize itself and tend to its own needs without 
caring how/where/if the distribution mounts a generic system cgroup fs 
or not. Your lxc start/stop/status scripts can safely know the location 
of the mount, can safely write the notify/release options and/or delete 
the unused cgroups themselves, and/or lxc-stop/start could manipulate 
them safely too.
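
Concretely, using the variable names from the init script I posted 
elsewhere in this archive, the lxc-private mount amounts to something 
like this (a sketch, not the shipped script):

CGROUP_MOUNT_POINT=/var/run/lxc/cgroup
CGROUP_MOUNT_NAME=lxc
CGROUP_RELEASE_AGENT=/usr/sbin/lxc_cgroup_release_agent

mkdir -p "$CGROUP_MOUNT_POINT"
mount -t cgroup "$CGROUP_MOUNT_NAME" "$CGROUP_MOUNT_POINT"
echo "$CGROUP_RELEASE_AGENT" > "$CGROUP_MOUNT_POINT/release_agent"
echo 1 > "$CGROUP_MOUNT_POINT/notify_on_release"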

And then expect and encourage most distributions to override that with 
their particular system-wide generic or separate cgroup mount points, 
organized according to their particular design principles.

The special empty directory for the pivot_root mount point should 
probably be in /usr/lib/lxc as was discussed some time ago. (I don't 
remember if that's what was decided, just that it was discussed)

-- 
bkw

Re: [Lxc-users] can't restart container without rebooting entire host, because can't delete cgroups files, tasks is 0

2010-11-08 Thread Brian K. White
On 11/8/2010 1:32 PM, Serge Hallyn wrote:
 Quoting Brian K. White (br...@aljex.com):
 But also, since upgrading to kernel 2.6.36 (and already using lxc 0.7.2)
 I haven't had to delete any cgroups manually anyways. It's probably not
 my release_agent because I just noticed I didn't have a working
 release_agent (no output in its log, probably because the script wasn't
 chmod 755)

 It's only been a couple days and only a few starts/stops while working
 on a new start/stop/status init script though.

 Hm, really?  Can you please let me know if that continues to be the
 case?  If it is, then I won't bother with a patch for lxc.  Really,
 since it'll drop ns cgroup support anyway, I suppose the patch might
 not be worthwhile anyway.

 (I ran my test on a 2.6.35 kernel)


I might be full of crap. I forgot that I had added the find -delete 
command at the end of the stop) section of my new lxc init script. I 
will test more diligently and report back.

-- 
bkw



[Lxc-users] can't restart container without rebooting entire host, because can't delete cgroups files, tasks is 0

2010-11-05 Thread Brian K. White
I have lxc 0.7.2 on openSUSE 11.2, which is kernel 2.6.31

I get this all the time on my other boxes which up to now have been lxc 
0.6.5 on the same kernel, but I've lived with it by just trying to never 
reboot containers, and only using containers for services that can stand 
to be rebooted so that I can actually reboot the host and thus all 
containers if I have to.

Now I have a few containers on another box with lxc 0.7.2, and the user 
of one of the containers tried to reboot his vps, and it can't restart 
because there are cgroup files that can't be deleted. The tasks file is 
empty in that cgroup directory:
nj9:~ # cat /cgroup/nj10-014/tasks |od
000
nj9:~ #
but there are several pid subdirectories with files in each. They can't 
be deleted.
lxc-ps -elf shows no processes in that container.

lxc-ls shows no containers at all, although definitely one other 
container is running and working and has processes in lxc-ps.

And I can't really reboot the host this time without telling a lot of 
paying customers to get out and stop working for a while.

I could probably get this container back up temporarily by just renaming 
it so it doesn't collide with the stale cgroup files, but the question 
is: I thought this was fixed? Was it a kernel bug, and do I need a newer 
kernel to clear this up?

-- 
bkw



Re: [Lxc-users] can't restart container without rebooting entire host, because can't delete cgroups files, tasks is 0

2010-11-05 Thread Brian K. White
On 11/5/2010 1:34 PM, Serge E. Hallyn wrote:
 A few comments:

 1. To remove the directories, rmdir all descendent directories.  I'd
 think something like 'find . -type d -print0 | xargs rmdir' would
 do.

I can't delete _anything_ in there. Not a file, let alone a directory 
with or without files. Of course I tried that.

 2. You can prevent this from happening by using a notify-on-release
 handler.

How will it delete a file I cannot? But I do remember the discussion 
about that a while ago, and I did forget to set that up on this new box, 
so I'll do that also; I just can't see how it will fix the root problem 
of not being able to delete the files.

 3. This should stop happening when lxc (soon) switches to using the
 clone-child cgroup helper instead of the ns cgroup.

Here's hoping. Thanks.

-- 
bkw



Re: [Lxc-users] can't restart container without rebooting entire host, because can't delete cgroups files, tasks is 0

2010-11-05 Thread Brian K. White
On 11/5/2010 1:34 PM, Serge E. Hallyn wrote:
 A few comments:

 1. To remove the directories, rmdir all descendent directories.  I'd
 think something like 'find . -type d -print0 | xargs rmdir' would
 do.
 2. You can prevent this from happening by using a notify-on-release
 handler.
 3. This should stop happening when lxc (soon) switches to using the
 clone-child cgroup helper instead of the ns cgroup.

 -serge


Just to make it clear...

nj9:~ # lxc-stop -n nj10-014
nj9:~ # lxc-info -n nj10-014
'nj10-014' is STOPPED
nj9:~ # lxc-destroy -n nj10-014
'nj10-014' does not exist
nj9:~ # lxc-ps -elf |grep nj10-014
0 S root  3037 32341  0  80   0 -   579 pipe_w 14:25 pts/4    00:00:00 grep nj10-014
nj9:~ #
nj9:~ # rm -vrf /cgroup/nj10-014
rm: cannot remove `/cgroup/nj10-014/19237/3/cpuset.memory_spread_slab': 
Operation not permitted
rm: cannot remove `/cgroup/nj10-014/19237/3/cpuset.memory_spread_page': 
Operation not permitted
[...]
rm: cannot remove `/cgroup/nj10-014/net_cls.classid': Operation not 
permitted
rm: cannot remove `/cgroup/nj10-014/notify_on_release': Operation not 
permitted
rm: cannot remove `/cgroup/nj10-014/tasks': Operation not permitted
nj9:~ #

I don't know how to track down if there is possibly some process that is 
part of the cgroup even though lxc-ps doesn't show any.
Examine every single process and verify that it's part of the host or 
another container until I find one I can't account for?

Since this happens to me all the time and on different hosts (albeit 
using the same kernel versions and other software, all configured the 
same way), I can't believe this doesn't happen to many others, and I'm 
surprised I don't see more acknowledgment of the issue here. I see other 
people reporting the problem, but I also see the responses simply say to 
delete the files, which we can't do.

So I wonder: is my configuration and usage simply wrong? I'm using very 
simple config files copied from the veth samples.

nj9:~ # find /etc/lxc/nj10-010 -type f |xargs -tn1 cat
cat /etc/lxc/nj10-010/fstab
none /lxc/nj10-010/dev/pts devpts defaults 0 0
none /lxc/nj10-010/proc     proc   defaults 0 0
none /lxc/nj10-010/sys sysfs  defaults 0 0
none /lxc/nj10-010/dev/shm tmpfs  defaults 0 0
cat /etc/lxc/nj10-010/config
lxc.utsname = nj10-010
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 02:00:47:bb:ce:56
lxc.network.ipv4 = 71.187.206.86/24
lxc.network.name = eth0
lxc.mount = /etc/lxc/nj10-010/fstab
lxc.rootfs = /lxc/nj10-010
nj9:~ #


How are you not having the same problem?

-- 
bkw



Re: [Lxc-users] can't restart container without rebooting entire host, because can't delete cgroups files, tasks is 0

2010-11-05 Thread Brian K. White
On 11/5/2010 4:20 PM, Serge E. Hallyn wrote:
 Quoting Brian K. White (br...@aljex.com):
 I don't know how to track down if there is possibly some process that is
 part of the cgroup even though lxc-ps doesn't show any.
 Examine every single process and verify that it's part of the host or
 another container until I find one I can't account for?

 Does find /cgroup -name tasks -print0 | xargs cat show anything?


It shows a bezillion things, but what does that prove?

Did you mean just for the bad container?

find /cgroup/nj10-014 -name tasks -print0 | xargs -0 cat
produces no output.

This is a clearer picture:

nj9:~ # find /cgroup/nj10-014 -name tasks -print0 | xargs -t0n1 cat
cat /cgroup/nj10-014/19237/3/tasks
cat /cgroup/nj10-014/19237/2/tasks
cat /cgroup/nj10-014/19237/tasks
cat /cgroup/nj10-014/19206/3/tasks
cat /cgroup/nj10-014/19206/2/tasks
cat /cgroup/nj10-014/19206/tasks
cat /cgroup/nj10-014/19064/3/tasks
cat /cgroup/nj10-014/19064/2/tasks
cat /cgroup/nj10-014/19064/tasks
cat /cgroup/nj10-014/19061/2/tasks
cat /cgroup/nj10-014/19061/tasks
cat /cgroup/nj10-014/19056/2/tasks
cat /cgroup/nj10-014/19056/tasks
cat /cgroup/nj10-014/16826/2/tasks
cat /cgroup/nj10-014/16826/tasks
cat /cgroup/nj10-014/16818/2/tasks
cat /cgroup/nj10-014/16818/tasks
cat /cgroup/nj10-014/6363/2/tasks
cat /cgroup/nj10-014/6363/tasks
cat /cgroup/nj10-014/6360/2/tasks
cat /cgroup/nj10-014/6360/tasks
cat /cgroup/nj10-014/2845/2/tasks
cat /cgroup/nj10-014/2845/tasks
cat /cgroup/nj10-014/2842/2/tasks
cat /cgroup/nj10-014/2842/tasks
cat /cgroup/nj10-014/tasks
nj9:~ #

nj9:~ # find /cgroup/nj10-014 -name tasks -print0 | xargs -0 ls -l
-rw-r--r-- 1 root root 0 2010-11-03 09:36 /cgroup/nj10-014/16818/2/tasks
-rw-r--r-- 1 root root 0 2010-11-03 09:36 /cgroup/nj10-014/16818/tasks
-rw-r--r-- 1 root root 0 2010-11-03 09:38 /cgroup/nj10-014/16826/2/tasks
-rw-r--r-- 1 root root 0 2010-11-03 09:38 /cgroup/nj10-014/16826/tasks
-rw-r--r-- 1 root root 0 2010-11-03 15:38 /cgroup/nj10-014/19056/2/tasks
-rw-r--r-- 1 root root 0 2010-11-03 15:38 /cgroup/nj10-014/19056/tasks
-rw-r--r-- 1 root root 0 2010-11-03 15:38 /cgroup/nj10-014/19061/2/tasks
-rw-r--r-- 1 root root 0 2010-11-03 15:38 /cgroup/nj10-014/19061/tasks
-rw-r--r-- 1 root root 0 2010-11-03 15:38 /cgroup/nj10-014/19064/2/tasks
-rw-r--r-- 1 root root 0 2010-11-03 15:38 /cgroup/nj10-014/19064/3/tasks
-rw-r--r-- 1 root root 0 2010-11-03 15:38 /cgroup/nj10-014/19064/tasks
-rw-r--r-- 1 root root 0 2010-11-03 15:40 /cgroup/nj10-014/19206/2/tasks
-rw-r--r-- 1 root root 0 2010-11-03 15:40 /cgroup/nj10-014/19206/3/tasks
-rw-r--r-- 1 root root 0 2010-11-03 15:40 /cgroup/nj10-014/19206/tasks
-rw-r--r-- 1 root root 0 2010-11-03 15:40 /cgroup/nj10-014/19237/2/tasks
-rw-r--r-- 1 root root 0 2010-11-03 15:40 /cgroup/nj10-014/19237/3/tasks
-rw-r--r-- 1 root root 0 2010-11-03 15:40 /cgroup/nj10-014/19237/tasks
-rw-r--r-- 1 root root 0 2010-11-01 18:27 /cgroup/nj10-014/2842/2/tasks
-rw-r--r-- 1 root root 0 2010-11-01 18:27 /cgroup/nj10-014/2842/tasks
-rw-r--r-- 1 root root 0 2010-11-01 18:27 /cgroup/nj10-014/2845/2/tasks
-rw-r--r-- 1 root root 0 2010-11-01 18:27 /cgroup/nj10-014/2845/tasks
-rw-r--r-- 1 root root 0 2010-11-01 22:06 /cgroup/nj10-014/6360/2/tasks
-rw-r--r-- 1 root root 0 2010-11-01 22:06 /cgroup/nj10-014/6360/tasks
-rw-r--r-- 1 root root 0 2010-11-01 22:08 /cgroup/nj10-014/6363/2/tasks
-rw-r--r-- 1 root root 0 2010-11-01 22:08 /cgroup/nj10-014/6363/tasks
-rw-r--r-- 1 root root 0 2010-11-01 18:04 /cgroup/nj10-014/tasks
nj9:~ #

Wait, you are saying to just ignore the fact that there are files in 
the directories and try to remove the directories, uh, directly?

nj9:~ # find /cgroup/nj10-014 -type d -delete
nj9:~ # ls -lR /cgroup/nj10-014
ls: cannot access /cgroup/nj10-014: No such file or directory


It never even slightly occurred to me to try that!

Thanks! Now I know what to put in the release agent too. Awesome. Thanks 
again.
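
For the record, the release agent can be about this small (a sketch; it 
assumes the lxc cgroup hierarchy is mounted at /cgroup as in the 
transcripts above):

#!/bin/sh
# /usr/sbin/lxc_cgroup_release_agent (sketch)
# The kernel invokes this with the path of the now-empty cgroup, relative
# to the hierarchy root, as $1 -- e.g. /nj10-014/19237/3.
rmdir "/cgroup$1"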

-- 
bkw



Re: [Lxc-users] Launch multiple apps in exactly one container

2010-09-16 Thread Brian K. White
On 9/16/2010 3:36 AM, Jue Hong wrote:
 As I understand, running one application with the command lxc-execute
 will create a container instance. E.g., by running lxc-execute -n foo
 /bin/bash, a container named foo will be created, and I can find a foo
 directory under the mounted cgroup directory, like /dev/cgroup/foo.
 While retyping lxc-execute -n foo /bin/bash, I'm told: lxc-execute:
 Device or resource busy.

 Does it mean I cannot run multiple apps within exactly the same
 container foo via using lxc-execute or lxc-start? Or what should I do
 if it's possible?

You can run essentially as many apps as you want inside a single 
container, you just can't start them from the outside.

For a single service or app, run lxc-execute ... myapp

For multiple services/apps, run lxc-start, which will run /sbin/init 
inside the container, and init starts up multiple services the same way 
a regular server does.

You could do almost as much with lxc-execute ... /bin/bash, but then you 
do it from that shell, from inside the container, not by trying to run 
lxc-execute multiple times to create multiple processes.
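
For example (the container names, config paths and app path here are just 
placeholders):

# Application container: lxc-execute runs the given command as the
# container's init (via lxc-init), so run it once per container.
lxc-execute -n foo -f /etc/lxc/foo/config /usr/sbin/myapp

# System container: lxc-start runs the container's own /sbin/init, which
# then brings up however many services the guest is configured to run.
lxc-start -n bar -f /etc/lxc/bar/config -d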

-- 
bkw



Re: [Lxc-users] Cannot start a container with a new MAC address

2010-08-27 Thread Brian K. White
On 8/27/2010 8:20 AM, Matto Fransen wrote:
 Hi,

 On Fri, Aug 27, 2010 at 11:27:16AM +0200, Sebastien Douche wrote:
 I created a container with an interface. I stop it, I change the MAC
 address, restart it:

 lxc-start: ioctl failure : Cannot assign requested address
 lxc-start: failed to setup hw address for 'eth0'
 lxc-start: failed to setup netdev
 lxc-start: failed to setup the network for 'vsonde43'
 lxc-start: failed to setup the container
 lxc-start: invalid sequence number 1. expected 2


 Have I Missed a step?

 This happens to me when I choose a 'wrong' mac-address.

 Example:
 lxc.network.hwaddr = 4a:59:43:49:79:bf works fine
 lxc.network.hwaddr = 4b:59:43:49:79:bf results in an error message like the above.

 Perhaps it is best to keep the first three pairs the same as in
 the LXC examples.

Picking MAC addresses is always going to require a little special care; 
you can't just use anything.
Based on looking at what openvz and various other virtualization systems 
and/or their front-ends do, and reading the rules for MAC addresses, I do 
this to generate a valid MAC from a desired IP address.
It uses only the MAC address space reserved for local/virtual use, will 
never collide with any real nic, and will usually be unique 
automatically, because even grocery-bagger-by-day admins are used to 
worrying about keeping IPs unique within a LAN.
If the IP is 192.168.20.115:

$ printf 02:00:%x:%x:%x:%x 192 168 20 115

Or as part of a script where you can enter the ip or read it out of a 
config file in normal format:

$ IP=192.168.20.115
$ HA=`printf 02:00:%x:%x:%x:%x ${IP//./ }`
# echo $HA
02:00:c0:a8:14:73

MACs are expected to be stable and IPs mutable, so it's a bit backwards 
to define a MAC from an IP, but it's easier for most people in most 
cases that way. Everyone is already used to tracking IPs, making sure 
they're unique, and recognizing immediately when there is a collision. 
Not so with MACs. I guess the downside will come when you generate a MAC 
for a VM, then change that VM's IP, and then create a new VM with the 
same IP that you just freed up. In that case the new VM will get the 
same MAC as the old one.

The real answer, of course, is to track all your MACs, virtual and real, 
in a spreadsheet or purpose-designed app so you can sort and find dupes, 
prevent entering new dupes, and prevent entering invalid or merely 
unwise MACs. An app might even be able to probe the network and try to 
determine if a proposed new MAC is visible at the moment, and maybe even 
try to find past evidence in ARP caches, syslog, etc.

-- 
bkw



Re: [Lxc-users] Best way to shutdown a container

2010-08-17 Thread Brian K. White
On 8/17/2010 11:43 AM, Gordon Henderson wrote:
 On Fri, 13 Aug 2010, Clemens Perz wrote:

 Hi!

 I used to run lxc-stop on my system containers when I actually want to
 run a halt. Only today I noticed, that stop actually kills all
 processes, not really doing a halt. I went through the lxc commands and
 did not find something graceful to do this job from the host systems
 shutdown scripts.

 Did I miss it? Maybe lxc-halt is a missing piece ;-) Is there a simple
 way to do it, preventing the need to login to the container and run halt?

 Am I the only one using lxc-watchdog by Dobrica Pavlinusic ?

 http://blog.rot13.org/2010/03/lxc-watchdog_missing_bits_for_openvz_-_linux_containers_migration.html

 I've had to tweak it a bit for my own setup, but otherwise it seems to
 work OK.

 It modifies a container's inittab at start time and then sends a powerfail
 event to its running init to simulate a reboot...

 Gordon

Have you not read the various posts in this thread?
That is essentially what everyone already does.

Everyone has their own slight twist on the scripts and packaging, 
including that one, including myself (on openSUSE, where I put some 
scripts and setup logic into an rpm with lxc-tools and a wiki page to 
document it), but they all do exactly that same thing, based off a 
couple of posts to this list a while back when someone first came up 
with the init idea and someone else came up with the inotify 
enhancement. They don't all actually modify the container's init 
automatically, but neither do I agree that that is necessarily correct. 
It's a handy twist, but a trivial detail on top of everything else.

In my case, although my scripts could be enhanced a lot, rather than 
enhance them further I'm living with them for now since they do work (in 
production), and when I'm ready to upgrade to the new lxc with built-in 
watchdog/agent, I'll invest in that for further enhancements.

-- 
bkw



Re: [Lxc-users] IPC between containers

2010-06-07 Thread Brian K. White
On 6/7/2010 7:51 PM, Nirmal Guhan wrote:
 Hi,

 Is there a way to use shared memory between the containers? Any other 
 better/faster IPC mechanisms? I don't want to use sockets.

 Please let me know.

Fifos on a shared filesystem on the host?
Multiply hardlinked files on the host which appear in the same place in 
each container?

Except I don't know how you could safely allow more than one client to 
mount the fs except read-only, other than by means which are ultimately 
sockets, just with fs overhead on top of that. (Various network and 
distributed filesystems, distributed IPC, and distributed locking 
systems are all network based.)

Or, if the multiple-hardlink idea doesn't actually work, I guess you 
could put an incron job on the host, which has access to all the 
containers' filesystems and can watch a special directory in the same 
place in each of them; whenever a file is modified in one container, 
incrond on the host notices and replicates it in all the other 
containers.
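
An incron-based version of that would look roughly like this (the paths 
and the helper script are hypothetical, purely to show the shape):

--- /etc/incron.d/lxc-shared ---
# $@ is the watched directory, $# the file name that changed.
/srv/lxc/vps001/var/shared IN_CLOSE_WRITE /usr/local/sbin/replicate-shared $@/$#

--- /usr/local/sbin/replicate-shared ---
#!/bin/sh
# Copy the changed file into the same place in every other container's fs.
src=$1
for d in /srv/lxc/*/var/shared; do
    [ "$d" = "${src%/*}" ] && continue
    cp -p "$src" "$d/"
done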

None of this sounds as good as ordinary socket communications, which is 
my point.

The whole point of a container is to ensure that exactly that (IPC) 
can't happen so I am tempted to say if you don't want something which 
contains, then don't use containers.

-- 
bkw



Re: [Lxc-users] help with root mount parameters

2010-05-26 Thread Brian K. White
On 5/26/2010 4:54 AM, Ralf Schmitt wrote:
 Daniel Lezcano dlezc...@fr.ibm.com  writes:


 This is internal stuff of lxc. Before this commit, several temporary
 directories were created and never destroyed, polluting '/tmp'.

 In order to do pivot_root, we have to mount --bind the rootfs somewhere.
 This 'somewhere' was a temporary directory and now it is
 /usr/lib64/lxc by default (choosen at configure time), or optionally
 configurable with lxc.rootfs.mount.
  
 /var/run/lxc looks like a much better choice to me.


As has been discussed pretty thoroughly already, this is not variable 
data but a completely fixed, static bit of package-specific support 
infrastructure. It's just like a package-specific library or other 
component file whose name never changes and which, as a single file, 
services all running instances concurrently.
The library or other support file just happens to be an empty 
directory in this case.
As such, something/lib/package/something is really the most correct 
place. Just pretend you can't hear the word temporary in the 
description of its purpose.

Maybe the install target that creates this directory could also place a 
small text file in the directory explaining the directory's purpose? 
Something like: "This directory must exist, even though no contents are 
ever placed here. see http: for details"
That shouldn't affect its use as a mount point and it helps the system to 
self-document.
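
Something like this in the install step would do it (the file name and the 
wording are only a suggestion):

  libdir=/usr/lib64/lxc            # or whatever was chosen at configure time
  mkdir -p "$libdir"
  printf '%s\n' \
      'This directory must exist, even though no contents are ever placed in it.' \
      'lxc-start bind-mounts the container rootfs here while doing pivot_root.' \
      > "$libdir/README"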

-- 
bkw



Re: [Lxc-users] lxc-start leaves temporary pivot dir behind

2010-05-10 Thread Brian K. White
On 5/10/2010 10:48 AM, Daniel Lezcano wrote:
 Ferenc Wagner wrote:

 Daniel Lezcano <daniel.lezc...@free.fr> writes:


  
 Ferenc Wagner wrote:



 Ferenc Wagner <wf...@niif.hu> writes:


  
 Daniel Lezcano <dlezc...@fr.ibm.com> writes:



 Ferenc Wagner wrote:


  
 Daniel Lezcano <daniel.lezc...@free.fr> writes:



 Ferenc Wagner wrote:


  
 While playing with lxc-start, I noticed that /tmp is infested by
 empty lxc-r* directories: [...] Ok, this name comes from lxc-rootfs
 in conf.c:setup_rootfs.  After setup_rootfs_pivot_root returns, the
 original /tmp is not available anymore, so rmdir(tmpname) at the
 bottom of setup_rootfs can't achieve much.  Why is this temporary
 name needed anyway?  Is pivoting impossible without it?



 That was put in place with chroot, before pivot_root, so the distro's
 scripts can remount their '/' without failing.

 Now we have pivot_root, I suppose we can change that to something 
 cleaner...

  

 Like simply nuking it?  Shall I send a patch?



 Sure, if we can kill it, I will be glad to take your patch :)

  

 I can't see any reason why lxc-start couldn't do without that temporary
 recursive bind mount of the original root.  If neither do you, I'll
 patch it out and see if it still flies.


 For my purposes the patch below works fine.  I only run applications,
 though, not full systems, so wider testing is definitely needed.

  From 98b24c13f809f18ab8969fb4d84defe6f812b25c Mon Sep 17 00:00:00 2001
 From: Ferenc Wagner <wf...@niif.hu>
 Date: Thu, 6 May 2010 14:47:39 +0200
 Subject: [PATCH] no need to use a temporary directory for pivoting
 [...]

  
 We can't simply remove it because of the pivot_root, which returns EBUSY.
 I suppose it's coming from: "new_root and put_old must not be on the
 same file system as the current root."


 Hmm, this could indeed be a problem if lxc.rootfs is on the current root
 file system.  I didn't consider pivoting to the same FS, but looks like
 this is the very reason for the current complexity in the architecture.

 Btw. is this really a safe thing to do, to pivot into a subdirectory of
 a file system?  Is there really no way out of that?

  
 It seems pivot_root on the same fs works if an intermediate mount point
 is inserted between old_root and new_root, but at the cost of a lazy
 unmount when we unmount the old rootfs filesystems. I didn't find a
 better solution that allows the rootfs to be a directory containing a
 full file system tree.

 I am looking at making it possible to specify a rootfs which is a file
 system image or a block device. I am not sure this should be done by lxc,
 but looking forward ...


 But as we will pivot_root right after, we won't reuse the real rootfs,
 so we can safely use the host /tmp.


 That will cause problems if rootfs is under /tmp, don't you think?

  
 Right :)


 Actually, I'm not sure you can fully solve this.  If rootfs is a
 separate file system, this is only much ado about nothing.  If rootfs
 isn't a separate filesystem, you can't automatically find a good place
 and also clean it up.
  
 Maybe a single /tmp/lxc directory could be used, since the mount points
 are private to the container. So it would be acceptable to have a single
 directory for N containers, no?


 So why not require that rootfs is a separate
 filesystem, and let the user deal with it by doing the necessary bind
 mount in the lxc config?

  
 Hmm, that will break existing user configurations.

 We can add a WARNING if rootfs is not a separate file system and let the
 user do whatever he wants; IMO if it is well documented it is not a
 problem.


Just putting in a hopefully unnecessary vote, if you are still deciding 
what's ultimately going to be possible or impossible:
As a user, I can say I really want to continue using a shared filesystem 
where the containers' roots are subdirectories on a single host filesystem.
The ability to use separate filesystems or image files or real devices 
would be nice options, but the way I want to run most instances is out 
of subdirectories.
I specifically and deliberately want to allow any container to consume as 
much or as little space as it needs at any time, without warning and at 
unpredictable rates, changing or spiking at unpredictable times.

I can describe all the reasons why I want that and why it's not wrong 
in my case, but I'm assuming they are unnecessary and uninteresting.

Switching to bind mounts is ok. I don't mind if the details change 
about how to set up the config files and what steps the init scripts 
have to perform to launch a container, as long as it's still true that I 
don't have to provision fixed container sizes.
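
If requiring the rootfs to be its own mount is where this lands, I'd expect 
a plain self bind mount of the subdirectory to be enough to satisfy it 
without giving up any of the above. A sketch, with made-up paths:

  # make an ordinary subdirectory its own mount point before starting
  mount --bind /srv/lxc/nj12/rootfs /srv/lxc/nj12/rootfs
  lxc-start -n nj12

A self bind mount imposes no size limit, so the containers can still grow 
and shrink freely within the shared filesystem; whether that would satisfy 
the check is for you to say.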

-- 
bkw


Re: [Lxc-users] lxc-start: Device or resource busy - could not unmount old rootfs

2010-04-13 Thread Brian K. White
'/lxc-oldrootfs-y10fSV/var/lib/urizen-slicer/fs/test/dev/console'
  lxc-start 1271189821.573 DEBUG    lxc_conf - umounted '/lxc-oldrootfs-y10fSV/var/lib/urizen-slicer/fs/test/dev/tty1'
  lxc-start 1271189821.573 DEBUG    lxc_conf - umounted '/lxc-oldrootfs-y10fSV/var/lib/urizen-slicer/fs/test/dev/tty2'
  lxc-start 1271189821.573 DEBUG    lxc_conf - umounted '/lxc-oldrootfs-y10fSV/var/lib/urizen-slicer/fs/test/dev/tty3'
  lxc-start 1271189821.573 DEBUG    lxc_conf - umounted '/lxc-oldrootfs-y10fSV/var/lib/urizen-slicer/fs/test/dev/tty4'
  lxc-start 1271189821.573 DEBUG    lxc_conf - umounted '/lxc-oldrootfs-y10fSV/dev'
  lxc-start 1271189821.573 DEBUG    lxc_conf - umounted '/lxc-oldrootfs-y10fSV/sys'
  lxc-start 1271189821.573 DEBUG    lxc_conf - umounted '/lxc-oldrootfs-y10fSV/var'
  lxc-start 1271189821.573 ERROR    lxc_conf - Device or resource busy - could not unmount old rootfs
lxc-start: Device or resource busy - could not unmount old rootfs
  lxc-start 1271189821.573 ERROR    lxc_conf - failed to pivot_root to '/var/lib/urizen-slicer/fs/test'
lxc-start: failed to pivot_root to '/var/lib/urizen-slicer/fs/test'
  lxc-start 1271189821.573 ERROR    lxc_conf - failed to set rootfs for 'test'
lxc-start: failed to set rootfs for 'test'
  lxc-start 1271189821.573 ERROR    lxc_start - failed to setup the container
lxc-start: failed to setup the container
  lxc-start 1271189821.573 NOTICE   lxc_start - '/sbin/init' started with pid '2229'
  lxc-start 1271189821.573 DEBUG    lxc_utils - closing fd '1'
  lxc-start 1271189821.573 DEBUG    lxc_utils - closing fd '0'
  lxc-start 1271189821.573 DEBUG    lxc_utils - closed all inherited file descriptors
  lxc-start 1271189821.634 DEBUG    lxc_start - child exited
  lxc-start 1271189821.634 INFO     lxc_error - child 2229 ended on error (255)
  lxc-start 1271189821.634 DEBUG    lxc_cgroup - using cgroup mounted at '/cgroup'
  lxc-start 1271189821.714 DEBUG    lxc_cgroup - '/cgroup/test' unlinked


Thanks,
--
Matt Bailey





Re: [Lxc-users] Can't start a 2nd container with 0.6.5
Daniel Lezcano
Sat, 23 Jan 2010 13:44:33 -0800

Brian K. White wrote:
 However, now when I go to make a 2nd container, I can't start it.
 I can create it, but not execute or start.
 [...] 
 Well I'm more boggled now.
 I stopped my first container nj12.
 lxc-ls shows nothing, screen -ls shows nothing, mount shows nothing extra, 
 yet trying to start nj13 still fails, and trying to start nj12 still succeeds.

 I can't find anything functionally different between nj12 and nj13...
 What could I be missing???

Yep, there is a problem with the pivot_root and the unmounting of the different 
mount points in the old rootfs. This problem appears with some configurations 
(I haven't figured out which ones yet). I did a hot fix, which is more a 
workaround than a real fix (I didn't understand where the real problem is 
coming from).

As soon as I find the culprit, I will release a 0.6.6 version to fix this, as 
0.6.5 is bogus.

In the meantime, if you wish to test, I attached the patch to this email.

Thanks for reporting the problem.

 -- Daniel

---
 src/lxc/conf.c |   41 ++---
 1 file changed, 30 insertions(+), 11 deletions(-)

Index: lxc/src/lxc/conf.c
===================================================================
--- lxc.orig/src/lxc/conf.c
+++ lxc/src/lxc/conf.c
@@ -67,6 +67,10 @@ lxc_log_define

Re: [Lxc-users] mac addresses

2010-02-12 Thread Brian K. White
Brian K. White wrote:
 Michael H. Warfield wrote:
 On Fri, 2010-02-12 at 11:37 -0500, Brian K. White wrote:
 So my question is, is 02:x:x:x:x:x in some way non-routable just 
 because it sets the locally-administered bit?
 I use that all the time without any problems.  It may be something in
 the way their switch is set up that limits the number of mac addresses
 on that port.
 
 Aha. Plausible. I'll check it out. 24 hrs is still 12 hrs away... I 
 wonder which will be quicker, calling Verizon and actually getting 
 anyone who can even spell MAC or just waiting another day! :)
 
 Thanks much.
 

In the course of talking to Verizon I discovered that the off-the-cuff 
shell/awk loop I had used to rewrite all my config files at once had a typo 
and created the exact same mac in every config file.

I stopped all containers, wrote the intended _non_duplicate_ macs into all 
the files, restarted all containers, and everything is fine.
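
For the record, the loop was supposed to do something like this (a sketch; 
the config path, the 02:00:... numbering scheme, and the assumption that 
each config already has an lxc.network.hwaddr line are all just my own setup):

  i=0
  for cfg in /etc/lxc/*/config; do                  # example path
      i=$((i + 1))
      mac=$(printf '02:00:00:00:00:%02x' "$i")      # unique, locally administered
      sed -i "s/^lxc.network.hwaddr *=.*/lxc.network.hwaddr = $mac/" "$cfg"
  done

(Good for up to 255 containers before the last octet wraps, which is more 
than enough here.)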

*sigh*

They were actually pretty helpful, believe it or not, and it only took a 
minute to bump past the first couple of layers and get to a person whose 
time was more valuable for me to waste.

-- 
bkw
