Re: [Lxc-users] Mount /dev/shm of the host inside a container

2012-01-10 Thread Gordon Henderson
On Tue, 10 Jan 2012, Daniel Lezcano wrote:

 On 01/10/2012 01:39 AM, Fred Finkelstein wrote:
 I finally found it with the help of the #lxcontainers irc channel. I have
 to replace this in lxc.fstab:
 /dev/shm /dev/shm bind 0 0
 with this:
 /dev/shm /srv/shm none bind 0 0
 and I can access it.

 Why /srv/shm ?

And do you actually need to mount it at all?

I don't in my containers - but the startup scripts running inside my 
containers mount it for me automatically. I guess it depends on 
permissions in /dev for this to work, but I've never had an issue.

Although re-reading the original post - perhaps you're after a shared 
/dev/shm for all containers?

(And I'd not heard of /srv/ either - maybe a distribution-specific 
thing? I use Debian - a-ha: 
http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/srv.html )
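For the record, the working combination from the quoted fix is the fstab line 
plus a config entry pointing at it - a sketch only, with the container name 
and paths assumed, not taken from the post:

```
# /var/lib/lxc/mycontainer/fstab  (bind-mount the host's /dev/shm)
/dev/shm /srv/shm none bind 0 0

# /var/lib/lxc/mycontainer/config  (point lxc at that fstab)
lxc.mount = /var/lib/lxc/mycontainer/fstab
```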

Gordon

--
Write once. Port to many.
Get the SDK and tools to simplify cross-platform app development. Create 
new or port existing apps to sell to consumers worldwide. Explore the 
Intel AppUpSM program developer opportunity. appdeveloper.intel.com/join
http://p.sf.net/sfu/intel-appdev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] PostgreSQL - sh: cannot create /dev/null: Permission denied - LXC Issue?

2011-12-24 Thread Gordon Henderson
On Tue, 20 Dec 2011, Patrick Kevin McCaffrey wrote:

 I'm running into this issue when trying to set up a PostgreSQL server 
 inside one of my containers.  The Postgres mailing list seems suspicious of 
 my LXC setup, so I thought I'd see if anyone has any input.  The outline 
 of my problem is below.  I've got Postgres installed/configured, but I 
 can't run the initdb command as seen below...

 Subject: Re: [GENERAL] New User: PostgreSQL Setup - The Program 'postgress' 
 is needed by initdb but was not found in the same directory...

 Patrick Kevin McCaffrey p...@uwm.edu writes:
 I'm following the instructions that come with the source, and am stuck on 
 this line:
 /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
 When I run it, I get the following:
 sh: cannot create /dev/null: Permission denied

If the other replies aren't helping, I'm wondering if you're running your 
container under NFS?

I've been doing this recently and having issues with NFSv3 & 4, but 2 is 
OK. I think it's something to do with ACLs possibly being enabled, but 
I've not had time to check further - as it works with good old v2 and 
that's OK for me, for now...

Gordon



Re: [Lxc-users] Container size minimisation

2011-12-12 Thread Gordon Henderson

On Mon, 12 Dec 2011, István Király - LaKing wrote:


Hi folks.

I'm trying to compose a system, where lxc containers behave like virtual 
hosts for a web server.


As a next step I would like to minimise container size. My question is, 
what is the best, most elegant and fail-proof technique for that?


At this moment I'm thinking of a master container and slave 
containers where the /usr folder for example in the slave containers is 
a mount from the master container. That gives a significant size drop 
already, from 400 to 40 megabytes.


I would like to keep the containers really minimal - 4 megabytes should be 
small enough.


Let's say only some important files in /etc ...

Has anyone any experience with this technique?   Thank you for sharing.


I use something similar for my hosted PBXs (asterisk).

Each PBX container mounts a common set of directories from the host for 
the asterisk installation - e.g. the asterisk binaries, libraries, modules 
and sounds - also a common set for apache. This is all in the fstab 
referenced from each container's config file, so they're all bind-mounted 
at container start time. (Read-only, too.)


One thing to beware of - you can't share everything like this - e.g. /usr 
- which I initially thought I could - well, maybe I could, but doing 
things like apt-get update in one container would potentially update files 
in /usr in all containers which might not be the best thing. You could 
probably do it with care though - I simply can't be bothered.


I have to say: I'm not really that bothered about disk space - it's not a 
big deal. I do it that way as it makes it easier to update asterisk over 
all the containers. My LAMP type containers don't do any of this.


Gordon


Re: [Lxc-users] lxc-destroy does not destroy cgroup

2011-12-12 Thread Gordon Henderson
On Thu, 8 Dec 2011, Arie Skliarouk wrote:

 When I tried to restart the vserver, it did not come up. Long story short,
 I found that lxc-destroy did not destroy the cgroup of the same name as the
 server. The cgroup remains visible in the /sys/fs/cgroup/cpu/master
 directory. The tasks file is empty though.

And just now, I've had the same thing happen - a container failed to 
start and it left its body in /cgroup - with an empty tasks file.

This is latest & greatest - kernel 3.1.4, lxc 0.7.5, Debian squeeze 
(kernel & lxc custom compiled)

It may well have been my own fault - trying to start a container whose 
disk image was NFS mounted, and I got the error:

mount.nfs: an incorrect mount option was specified

and lxc-start hung, so I may be doing something bogus anyway, however...

(like e.g. trying to bind-mount /proc, /sys, /dev/pts, etc. into an nfs 
mounted directory?)

Gordon



Re: [Lxc-users] lxc-destroy does not destroy cgroup

2011-12-11 Thread Gordon Henderson
On Sun, 11 Dec 2011, Arie Skliarouk wrote:


 When I tried to restart the vserver, it did not come up. Long story short,
 I found that lxc-destroy did not destroy the cgroup of the same name as the
 server. The cgroup remains visible in the /sys/fs/cgroup/cpu/master
 directory. The tasks file is empty though.

 I had to rename the container to be able to start it.


 About 18 hours after the event, the physical machine locked up hard.
 Without any message in dmesg or on its console. Before that, the machine
 worked pretty hard for about 60 days without a hitch.

Ouch. And oddly enough, I had a hard-lockup a few days ago myself that 
needed a power cycle.

 My gut feeling is that it is related to the stale cgroup somehow.

 Out of curiosity, what kernel are you running? I'm on 2.6.35, but looking
 at some of the later ones now...

 I use kernel 3.0.0-12-server amd64 as packaged in the ubuntu 11.10. I had
 problems with earlier kernels as they locked up the machine every week or
 so.

My base is Debian Stable, but I custom compile the kernels to match the 
hardware. I've put the latest & greatest on a test server to see how it 
fares - so far so good, but there's no real load on it.

I think it would be good for more people to start to post their 
experiences with LXC though - who knows how many people are using it - any 
big companies using it in anger (as opposed to KVM, Xen, etc.) and so 
on. (Or small companies with big installations!)

I have 2 areas of application for it - one is hosted Asterisk PBXs, and 
for that it seems to work really well, but the run-time environment is 
very carefully controlled - it basically runs sendmail, sshd, apache+php 
and asterisk and nothing else. The other application I use them for is 
more of a management one - running general purpose LAMP servers in them, 
mostly to make sure I can relatively quickly move an image from one server 
to another to cover hardware issues or a temporary/permanent increase (or 
reduction!) in available resources... I don't consider my own use big by 
any means at all though..

Gordon



Re: [Lxc-users] lxc-destroy does not destroy cgroup

2011-12-08 Thread Gordon Henderson
On Thu, 8 Dec 2011, Arie Skliarouk wrote:

 When I tried to restart the vserver, it did not come up. Long story short,
 I found that lxc-destroy did not destroy the cgroup of the same name as the
 server. The cgroup remains visible in the /sys/fs/cgroup/cpu/master
 directory. The tasks file is empty though.

 I had to rename the container to be able to start it.

Did you remember to stop it first?

 All this on ubuntu 11.04, 3.0.0-12-server amd64. Thoughts, comments?

Very very similar to what I experience from time to time. (Posted about 
recently with zero response) Although my more drastic solution is to 
reboot the host, but I have gotten away with lxc-stop then a start.

I've now stopped using memory limits in containers and for the time being 
will let them swap (or share more memory with other containers and swap 
if needed) - they're mostly well behaved though.

I don't have a solution I'm afraid.

Gordon



Re: [Lxc-users] lxc-destroy does not destroy cgroup

2011-12-08 Thread Gordon Henderson
On Thu, 8 Dec 2011, Arie Skliarouk wrote:

 On Thu, Dec 8, 2011 at 14:05, Gordon Henderson gor...@drogon.net wrote:

 On Thu, 8 Dec 2011, Arie Skliarouk wrote:

 When I tried to restart the vserver, it did not come up. Long story
 short, I found that lxc-destroy did not destroy the cgroup of the same name
 as the server. The cgroup remains visible in the /sys/fs/cgroup/cpu/master
 directory. The tasks file is empty though.

  I had to rename the container to be able to start it.

 Did you remember to stop it first?

 Of course! It is part of the vserver stop script.

Just checking!

 All this on ubuntu 11.04, 3.0.0-12-server amd64. Thoughts, comments?

 Very very similar to what I experience from time to time. (Posted about
 recently with zero response) Although my more drastic solution is to reboot
 the host, but I have gotten away with lxc-stop then a start.

 Well, with 65 running containers (24GB of RAM) it is easier to rename the
 vserver :)

Yes. I can see that a system restart might irritate a few other people!

Out of curiosity, what kernel are you running? I'm on 2.6.35, but looking 
at some of the later ones now...

 I've now stopped using memory limits in containers and for the time being
 will let them swap (or share more memory with other containers and swap if
 needed) - they're mostly well behaved though.

 My vservers do not behave well and require restrictions.

OK.

 BTW, do you know how can I restrict number of running processes in a
 container (like in openvz)?

No idea, I'm afraid. I guess some sort of super limit passed into the 
container's init (via setrlimit() ?) is what's needed...
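Following that guess, a minimal sketch of what the setrlimit() route might 
look like - ulimit -u is the shell front end to RLIMIT_NPROC, and the value 
200 and the rcS placement are my assumptions, not anything from the thread:

```shell
# Hypothetical fragment for a container's init script (e.g. /etc/init.d/rcS):
# cap how many processes everything started from here can create.
MAX_PROCS=200                        # example value, not from the post
ulimit -u "$MAX_PROCS" || echo "RLIMIT_NPROC not adjustable here" >&2
# Children of init inherit the limit, so a runaway fork inside the
# container hits RLIMIT_NPROC instead of exhausting the host.
echo "nproc limit now: $(ulimit -u)"
```

Whether every init in a container honours this is another matter - it's 
untested speculation, in the spirit of the reply above.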

Gordon



Re: [Lxc-users] LXC Container: Network Configuration

2011-12-01 Thread Gordon Henderson
On Thu, 1 Dec 2011, Patrick Kevin McCaffrey wrote:

 Thanks a bunch, Gordon.  I ran route -n inside the container, and saw 
 there was no gateway.  Assigning 192.168.80.1 (the address of br0) as 
 the default gateway inside the container works beautifully.

I think sometimes we overlook the obvious! Glad it's going now.

 I can now 
 apt-get from the container, and ping it from another subnet too.  I had 
 been playing with the gateway setting in /etc/network/interfaces on 
 the host machine, but it seems like everything worked (as far as the 
 machine acting as my router, and each subnet having access to the 
 Internet and each other) without defining a default gateway, so it 
 totally slipped my mind to try assigning one inside the container.

My containers don't look at /etc/network/* at all during startup - the 
networking is set up in /etc/init.d/rcS.

I'm actually switching to using file-rc in my containers, because what they 
need to do to boot is really very minimal, and I can then trivially 
disable things by editing them out of one config file (/etc/runlevel.conf).

One other thing you might want to check is the NAT on the host - if you're 
not careful to exclude each LAN (or bridged vlan) subnet, you can end up 
sending data through the NAT tables. It may be that smoothwall does this 
for you, but it's always handy to check.

Cheers,

Gordon



[Lxc-users] Host crash after oom killer in a container...

2011-11-24 Thread Gordon Henderson

I've noticed a few oddities recently which have resulted in me needing to 
reboot (and in one case power cycle) a server, which isn't good...

I've recently started to set the memory limits - e.g.

lxc.cgroup.memory.limit_in_bytes   = 1024M
lxc.cgroup.memory.memsw.limit_in_bytes = 1024M

That, as I understand it, will limit a container to 1024M of RAM and 1024M 
of RAM+swap - i.e. it should prevent using swap at all...
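A quick way to sanity-check those two lines is to read back what the kernel 
actually accepted - /cgroup/mycontainer is an assumed mount point and 
container name here, and note that 1024M means 2^30 bytes:

```shell
# Sketch: compare the configured limits with what the cgroup reports.
CG=/cgroup/mycontainer               # assumed path; adjust to your mount
for f in memory.limit_in_bytes memory.memsw.limit_in_bytes; do
    if [ -r "$CG/$f" ]; then
        printf '%s = %s\n' "$f" "$(cat "$CG/$f")"
    fi
done
# Both files should read 1024M expressed in bytes:
echo $((1024 * 1024 * 1024))
```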

The issues come when a container exceeds that and the oom killer comes in 
- which seems to do what it's supposed to do, but after that some real 
oddities start to happen. The process table expands, and lots and lots of 
processes sit in a D state waiting on something. Load average gets to over 
300. What's worse is that some operations on the host server seem to 
stall too - e.g. ps ax listed a few hundred processes then stalled!

LXC is 0.7.2 (Debian Stable/Squeeze)
Kernel is 2.6.35.13

I've noticed it in 2 different servers now - one a dual-core Intel, the 
other a quad core AMD - both running Debian Squeeze and the same 2.6.35.13 
kernel custom compiled for the underlying hardware.

I have the kernel log files available if anyone wants them, but I'm really 
interested to know if I'm missing anything obvious - wrong parameters, or 
just expecting too much - known issues here - should I use a different 
kernel, and so on...

Thanks,

Gordon




Re: [Lxc-users] NTP in a LXC

2011-11-22 Thread Gordon Henderson
On Tue, 22 Nov 2011, Jeremy MAURO wrote:

 Hi everyone

 Is it relevant to setup ntpd on the lxc container?

Probably not..

 And has anyone setup
 a ntp-server on a lxc container?

Yes, but only by accident.

Remember that we only have one kernel here, so I suspect it's a good idea 
to have only one NTP process talking to it, to keep it in sync and provide 
NTP services for others (if required).

So what I do is run NTP on the host - then all containers get the time 
synced automatically. Fewer processes to run, less network traffic, etc.

Unless you're importing an older server into a container on new hardware, 
of course, but even then it's probably an idea to turn off any other 
services that are no longer required, if possible. It reduces the overall 
process count, which can only be a good thing.

If I had to run NTP in a container (because its IP address was burnt into 
devices I had no control over, for example) then I'd make sure that was 
the only container on the host server running NTP. I'm really not sure 
what effect 2 or more NTP processes talking to the kernel would have...

Gordon



[Lxc-users] Stats 'n' Stuff

2011-11-12 Thread Gordon Henderson

I'm looking for ways to get stats out of each container on a host - the 
sort of stuff I'm after is the bandwidth of the network interface and cpu 
cycles.

On the CPU monitoring front there is /cgroup/xxx/cpuacct.stat, memory from 
memory.usage_in_bytes and memory.memsw.usage_in_bytes ...

But on the network side...

There is nothing in /cgroup that I can see..

And there seem to be some oddities, as the interface index is 
allocated afresh for each interface - so on the host, eth0 is (typically) 
interface number 2, but in a container it's not 2 but more than 2 - it 
can be found, however the next time you reboot the container or host it 
will change (or it's highly probable that it will change).

So even if I ran snmp in each container, using something like mrtg to get 
the stats is going to be problematic as you need to encode the interface 
number in the mrtg config file...

In any case, I'd really rather only run one snmpd on the host... The same 
down-side applies in that the interface number changes every time you 
restart a container..

Has anyone worked round this?

My thoughts are that every time I start/stop a container or reboot the 
host, to run mrtg's 'cfgmaker' program, then parse the output, matching the 
interface name to the container name (fixing the veth interface name 
in the container's config file so I know which is which), then extracting 
the network interface, dynamically writing an mrtg.cfg file, then running 
mrtg...

e.g.

   # cfgmaker public@localhost | grep 'Interface.*veth'
   ### Interface 6  Descr: 'vethdU4ae6' | Name: 'vethdU4ae6' | Ip: '' | Eth: '4a-d4-c2-97-a9-c0' ###
   ### Interface 10  Descr: 'vethr3M2xv' | Name: 'vethr3M2xv' | Ip: '' | Eth: '76-9a-d2-be-a2-50' ###
   ### Interface 14  Descr: 'vethiPsSOE' | Name: 'vethiPsSOE' | Ip: '' | Eth: '9e-da-70-6c-b1-93' ###
   ### Interface 18  Descr: 'vethQ6lLx8' | Name: 'vethQ6lLx8' | Ip: '' | Eth: '6a-c2-18-f6-10-95' ###
   ### Interface 22  Descr: 'veth8gX8cw' | Name: 'veth8gX8cw' | Ip: '' | Eth: '76-16-5d-a9-0c-fb' ###
   ### Interface 26  Descr: 'vethOSG0De' | Name: 'vethOSG0De' | Ip: '' | Eth: '5e-95-9b-b7-b4-e8' ###

and parse that to extract the interface numbers (6, 10, 14..) and write an 
appropriate mrtg config file ...
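The parsing step itself is straightforward - a sketch, run here against two 
sample lines copied from the output above (in practice you'd pipe cfgmaker 
into it directly):

```shell
# Extract "interface-number veth-name" pairs from cfgmaker output.
sample="### Interface 6  Descr: 'vethdU4ae6' | Name: 'vethdU4ae6' | Ip: '' |
### Interface 10  Descr: 'vethr3M2xv' | Name: 'vethr3M2xv' | Ip: '' |"
pairs=$(printf '%s\n' "$sample" |
    awk '$2 == "Interface" && $5 ~ /veth/ { print $3, $5 }' |
    tr -d "'")
printf '%s\n' "$pairs"
```

From there, writing the mrtg Target lines with those numbers is just more 
printf.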

Or am I missing something obvious?

Does anyone else bother with the stats of the individual containers?

Gordon



Re: [Lxc-users] Stats 'n' Stuff

2011-11-12 Thread Gordon Henderson
On Sat, 12 Nov 2011, Matt Franz wrote:

 Yes.  The random ethernet device names make monitoring with munin, zenoss 
 or whatever very painful.

 One of the nice features of openvz is that it uses the container ID in 
 the device name, which will be consistent across container reboots, and 
 also allows you to easily identify which NIC belongs to which container, so 
 you can easily view traffic stats across multiple containers while only 
 monitoring the bare-metal host.

 The right answer would be to find the .c or .sh that creates the devices 
 (assuming it is in user space) and modify that code so the random device 
 names can be overridden.

Actually, that's not an issue - you can trivially fix the device name in 
the config file - e.g.

lxc.network.veth.pair = veth-dsr

it's the device number used in mrtg that's the issue.

So running cfgmaker on that device shows:

### Interface 204  Descr: 'veth-dsr' | Name: 'veth-dsr' | Ip: '' | Eth: '66-5a-6c-77-d4-76' ###

but if I stop and start the container:

### Interface 208  Descr: 'veth-dsr' | Name: 'veth-dsr' | Ip: '' | Eth: '66-5a-6c-77-d4-76' ###

its interface ID changes from 204 to 208... And it's the ID number that's 
coded into the mrtg files. (And I'm not sure how snmp handles it either.)

Maybe other monitoring systems handle it better...
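One way to sidestep the interface-number problem entirely - my suggestion, 
not something from the thread - is to skip SNMP and read the byte counters 
for the fixed veth name straight out of /sys on the host. veth-dsr is the 
name from the post; the fallback to lo is only so the sketch runs anywhere:

```shell
# Read per-interface byte counters directly; the names stay stable once
# lxc.network.veth.pair is set, even though the SNMP ifIndex does not.
IF=veth-dsr
[ -d "/sys/class/net/$IF" ] || IF=lo    # fallback so the sketch is runnable
rx=$(cat "/sys/class/net/$IF/statistics/rx_bytes")
tx=$(cat "/sys/class/net/$IF/statistics/tx_bytes")
echo "$IF rx_bytes=$rx tx_bytes=$tx"
```

mrtg can take those numbers via its external-script Target syntax, so no 
snmpd or ifIndex is involved at all.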

Gordon



Re: [Lxc-users] moving LXC containers to a new server

2011-11-06 Thread Gordon Henderson
On Sun, 6 Nov 2011, Geordy Korte wrote:

 Hello all,

 Just a quick question. I have LXC running on a server and have purchased a
 new server. Now I would like to copy the LXC's to the new server. Do I need
 to do anything special with the cgroups or just copy the containers from
 /var/lib/lxc and then start them?

Copy the containers (and config file and fstab), then lxc-create and start 
them.

A few things to bear in mind:

Preserve user IDs when copying: I use rsync - so use the --numeric-ids 
parameter.

To minimise downtime: rsync while it's live, lxc-create on the new 
server, shut down the container on the old server, rsync again, then 
lxc-start on the new server. (And finally lxc-destroy on the old server 
in case it auto-starts after a reboot!)
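Spelled out as commands, that sequence might look like this - it prints the 
steps rather than executing them, since the host name, container name and 
/var/lib/lxc layout are all assumptions:

```shell
# Two-pass rsync migration plan for one container (all names are examples).
NAME=mycontainer
HOST=newserver
SRC=/var/lib/lxc/$NAME
plan() { printf '%s\n' "$*"; }       # swap for direct execution when ready
plan rsync -a --numeric-ids "$SRC/" "root@$HOST:$SRC/"           # container still live
plan lxc-stop -n "$NAME"                                         # downtime starts
plan rsync -a --numeric-ids --delete "$SRC/" "root@$HOST:$SRC/"  # small delta, quick
plan ssh "root@$HOST" "lxc-create -n $NAME -f $SRC/config; lxc-start -n $NAME"
plan lxc-destroy -n "$NAME"          # so it can't auto-start on the old host
```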

Gordon



Re: [Lxc-users] mknod after instance creation?

2011-11-05 Thread Gordon Henderson
On Sat, 5 Nov 2011, Daniel Lezcano wrote:

 On 11/05/2011 12:06 AM, Dong-In David Kang wrote:
   Hi,

   Is it possible to do mknod after creation of an LXC instance?
 I need to do mknod not only at bootup time, but also at run-time.
 This is needed when I want to dynamically add devices to LXC instance.
 Is it possible?
 If it is, how can I do it?

   I've seen the case of mknod at bootup time of an LXC instance.
 But, I haven't seen the usage of mknod at run-time after boot-up.
 Is it the limitation of LXC?

 Just comment out the lxc.cgroup.devices.* lines in the configuration file.

Yup - same issue I had a few days ago.

However, it also helped me yesterday when I had been given a vmware 
instance to extract some data from - I managed to unpack it into a regular 
filesystem, then on a whim I decided to run it up under LXC. It kicked 
off udev, which mknods devices, so letting it do that made it work OK - 
very OK actually, after I tweaked a few things in the startup scripts to 
stop it grabbing the console - so much so that the people I was doing it 
for want to keep it going for a while rather than extract the data and 
import it into their new system. It turned out to be an FC11 image - my 
host is Debian!

Gordon



Re: [Lxc-users] mknod inside a container

2011-11-04 Thread Gordon Henderson
On Fri, 4 Nov 2011, Daniel Lezcano wrote:

 On 11/04/2011 03:34 PM, Gordon Henderson wrote:
 
 I have a container that's used to build a Linux image for an embedded
 device - and as part of the build script it creates /dev/ via a sequence
 of mknod commands... which all fail )-:
 
 There are no cap.drop lines in the container's config file, and I'm
 currently working round this by doing it on the host and copying the
 directory from the host to the container, but I'd really rather do it
 inside the container...
 
 So what have I missed, or is it simply not possible?

 You probably have mknod restrictions through the lxc configuration file.

 Check for lxc.cgroup.devices.* in the configuration file and comment them 
 all.

Yup. That was it, thanks!

I had it in my mind that it was capabilities rather than simple devices 
stuff.

Cheers,

Gordon



[Lxc-users] Startup scripts [Was: Re: security question]

2011-08-21 Thread Gordon Henderson
On Sat, 20 Aug 2011, John wrote:

 Hi, very interested in this. I've been using LXC for a while but only to
 segregate functions on my own servers. I am well aware of how delicate
 the LXC setup is when considering security. For example, unless I
 customise the init scripts a container can bring down the host.

FWIW:

I've been using the file-rc boot script mechanism rather than the sysv-rc 
system for LXC containers. That might seem like a step backwards, but 
actually it's fine, and gives you much finer (& easier, IMO) control over 
what gets started and stopped when a container is booted. You still get 
the usual /etc/init.d with scripts in it, but rather than a lot of 
/etc/rc.X directories, just one file: /etc/runlevel.conf, with hooks into 
the scripts and what runlevels to execute them in.

It doesn't address any of those issues, but when you know what's getting 
started and in what order, it makes management easier... For me, anyway. 
E.g. I was being plagued recently with really weird keyboard issues when a 
Debian Squeeze container was starting - it was the 
/etc/init.d/keyboard-setup script running - stopped that, and all was 
fine.

And really - all I need to run when booting a container is syslog, sshd, 
apache, maybe cron and one or 2 others. Unless I'm doing anything fancy 
with networking. No point running other stuff that the host needs to do 
like ntp, urandom, checkroot, the various mounts and so on.

Gordon



Re: [Lxc-users] Startup scripts [Was: Re: security question]

2011-08-21 Thread Gordon Henderson
On Sun, 21 Aug 2011, John wrote:

 On 21/08/11 18:01, Gordon Henderson wrote:
 I've been using the file-rc boot script mechanisms rather than the
 sysv-rc system for LXC containers. That might seem like a step
 backwards, but actually, it's fine and gives you much finer ( easier
 IMO) control over what gets started and stopped when a container is
 booted.

 Have you tried Arch Linux Gordon? it uses a BSD-Style init which is what
 I think you mean.

Not tried it - I'm more or less wedded to Debian right now though (but 
really only because I have a couple of dozen physical servers all running 
Debian - if only there were 25 hours in a day ;-))

 I think it's much cleaner and easier to work with. All
 switches are in rc.conf, there isn't loads of rc.runlevel directories
 full of symlinks and you can point your inittab at a lxc-specific
 rc.sysinit and rc.shutdown. This is what I have and it works well. My
 point was about the fact that using a stock rc.shutdown, for example,
 will shut down the host.

I found the stock shutdown (in Debian, anyway) rendered the host with a 
read-only root partition rather than shutting it down... The 'lxc' script 
I'm using attempts to 'fix' inittab and some of the rc scripts, but 
file-rc gives me better control.

file-rc still has /etc/init.d but no /etc/rc.x directories. Instead 
/etc/init.d/rcS is a big script that parses /etc/runlevel.conf - and that 
file kicks off scripts as needed.

e.g. /etc/runlevel.conf contains lines like:

# sort  off-   on-levels  command
02      -      S          /etc/init.d/hostname.sh
#06     -      S          /etc/init.d/keyboard-setup
#11     -      2,3,4,5    /etc/init.d/klogd
#12     -      2,3,4,5    /etc/init.d/acpid
12      -      S          /etc/init.d/mtab.sh
16      -      2,3,4,5    /etc/init.d/ssh
19      -      2,3,4,5    /etc/init.d/mysql
21      0,1,6  -          /etc/init.d/mysql

etc. The default runlevel in Debian is 2.

Gordon



Re: [Lxc-users] LXC and Tun/Tap ?

2011-07-21 Thread Gordon Henderson
On Thu, 21 Jul 2011, Daniel Lezcano wrote:

 On 07/13/2011 06:40 PM, Gordon Henderson wrote:
 ISTR that about a year ago tun/tap use inside an LXC container wasn't
 possible... Just wondering if things have changed?

 No nothing was done around that.

 As the thread is old, can you recall what you want to achieve with tun/tap ?

Hi,

Sorry - I tried to setup some tests myself, but got distracted with real 
work in the past week or so...

 If you can describe with some details what you want with tun/tap, that 
 would be great. I am familiar with tun/tap coding, so I can implement it 
 if I can refer to your description.

I currently host (amongst many) a server which is a VPN endpoint for a 
project I'm working on - we're using OpenVPN and vTUN, both of which use the 
tun/tap interfaces. As part of my overall management strategy for all my 
hosted servers, I'm moving as many of them as possible to containers 
- so I was thinking of containerising this particular server inside a 
newer host server.

(The move isn't so much to optimise server resources as for management, 
and I'm finding containers to be excellent for this, as I can move a whole 
server from one hardware platform to another with relative ease!)

What I'll probably do is move the VPN endpoints to the hosting server and 
run the rest of the applications inside a container on the same physical 
host.

Cheers,

Gordon



[Lxc-users] Containers in NFS, or ...

2011-07-21 Thread Gordon Henderson

A few months ago there were some posts about running containers in a 
diskless host - just looking for some more info about this in my ponderous 
ponderings!

I'm not after having a diskless host (although it's an option), but to 
have a host NFS mount a filesystem of a container, then start it

i.e. a big NFS-servery type thing. Many front-end hosts with lots of RAM, but 
minimal disks. Container image NFS (or ?) mounted off the server. Let's assume 
2 LAN interfaces on the fronting server - a private one to the filestore 
and a public one to the rest of the world... (although that's not critical 
for what I'm thinking of)

That would then make management of the images utterly trivial and give the 
ability to migrate them from one physical host to another with nothing 
more than a shutdown/de-config on one host, and a config/startup on a new 
host...

However, assuming LXC is happy with it, there's the issue of running 
services on an NFS server - but that's really not something for here - I'm 
just interested in the scenario of server + multiple hosts... mounting 
images via NFS. I can't think why it might not work... Obviously there 
might be performance issues, but let's assume the environment is mostly 
read access on a typical LAMP-type server (with the M part on a separate 
server, using standard MySQL network access to it rather than local 
access to the (NFS) disk)
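For concreteness, the plumbing could be as simple as this - a sketch only, 
with illustrative paths, hostnames and addresses:

```
# On the filestore, /etc/exports:
/srv/containers/web1  10.0.0.0/24(rw,no_root_squash,async)

# On a front-end host, before lxc-start:
#   mount -t nfs filestore:/srv/containers/web1 /var/lib/lxc/web1/rootfs
# and the container config points lxc.rootfs at that mount point.
```

Migration is then just unmounting on one host and mounting on another.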

This is basically a management-type issue more than anything else - the 
ability to migrate containers to faster or less-loaded hosts, or away from 
failed hosts, and so on. (Let's assume the file and SQL servers are 
adequately backed up by other means)

Anyone see any issues? Would anyone do it differently?

Cheers,

Gordon



[Lxc-users] LXC and Tun/Tap ?

2011-07-13 Thread Gordon Henderson

ISTR that about a year ago tun/tap use inside an LXC container wasn't 
possible... Just wondering if things have changed?

Thanks,

Gordon



Re: [Lxc-users] LXC vs ESX

2011-06-04 Thread Gordon Henderson
On Sat, 4 Jun 2011, Ulli Horlacher wrote:

 I have now coupled both:

 The F*EX service http://fex.uni-stuttgart.de/index.html runs on Ubuntu in
 LXC on ESX. The throughput is, as expected, the same as with Ubuntu on ESX
 alone.

LXC vs. ESX notwithstanding, it's an interesting concept...

However I guess it's just for university types - those with the benefit 
of Gb upload speeds... The poor people without it - and the majority will 
have sub-1Mb/sec upload speeds (and finite data caps) from their home 
cable/DSL connections - will still post a CD/DVD/USB data key for 
anything big...

One day though... :)

Gordon



Re: [Lxc-users] LXC vs ESX

2011-06-04 Thread Gordon Henderson

On Sat, 4 Jun 2011, Ulli Horlacher wrote:


On Sat 2011-06-04 (11:38), Gordon Henderson wrote:


However I guess it's just for university types - those with the benefits
of Gb upload speeds... The poor people without that benefit - and the
majority will have sub 1Mb/sec upload speeds


Many home users in Germany have upload speeds at 20 Mb/s. As far as I
know standard connection for South Korea home users is 100 Mb/s.


*sigh* Not in the UK. Standard DSL upload speed for the majority is 
448Kb/sec - 830Kb/sec on a business line, or up to 1.2Mb/sec on ADSL2+. 
The lucky ones on FTTC get 2Mb/sec, or up to 10Mb/sec if they pay silly 
amounts more. (I have one FTTC customer - they get 30Mb/sec in and 
9.5Mb/sec out - they're 2 weeks into their first month's usage and have 
already consumed nearly half their 90GB allowance )-:


So there's still a lot of value in using USB data keys/CD/DVD to transport 
large quantities of data!


However, personally I'd much rather have higher data caps, lower 
contention and more stability than high speed any day of the week. And 
it's all very well having 100Mb/sec but if there's nothing to use it with, 
or your international links are so congested it's not worthwhile, then ...



Besides this all German universities and most big companies have 1 Gb/s
and above (eg my university has 40 Gb/s).


That's true for the UK universities too (although 10 and 100Mb/sec is more 
common for medium-sized companies - and even then it's not cheap). I have 
one customer on a 10Mb leased line - 1:1 contention from their premises to 
the edge of the ISP's network, no data cap - and it's £500 a month. (They 
used to post DVDs to their hosting company to upload very high resolution 
photos to their website)



The sad truth is that people here aren't willing to pay the real price - 
so the huge ISPs dominate - undercutting everyone else, offering high 
speeds (relatively speaking), but then having oversubscribed networks and 
lowish data caps )-:



 So, it is good to have software which supports such fast links.


Indeed...

So one day ...

And now back to LXC :)

Cheers,

Gordon


[Lxc-users] Output of 'top' with lxc.cgroup.memory.limit_in_bytes set?

2011-05-22 Thread Gordon Henderson

I think this has been on the list before, but my archive search is 
failing me... I've got containers working with memory limitations using

lxc.cgroup.memory.limit_in_bytes
and
lxc.cgroup.memory.memsw.limit_in_bytes

and I can prove that it's working by writing a program to malloc memory 
and watching the OOM killer in operation when it reaches the limit... 
However the output of 'top' still shows the entire memory of the host...

Is this something fixable, or just the way it is for now?

Cheers,

Gordon



Re: [Lxc-users] Output of 'top' with lxc.cgroup.memory.limit_in_bytes set?

2011-05-22 Thread Gordon Henderson
On Sun, 22 May 2011, Gordon Henderson wrote:


 I think this has been on the list before, but my archive search is
 failing me... I've got containers working with memory limitations using

 lxc.cgroup.memory.limit_in_bytes
 and
 lxc.cgroup.memory.memsw.limit_in_bytes

 and I can prove that it's working by writing a program to malloc memory
 and watching the OOM killer in operation when it reaches the limit... However
 the output of 'top' still shows the entire memory of the host...

 Is this something fixable, or just the way it is for now?

Ah, just done more searching (emails from Jan. this year), so forget this 
for now - it seems it's just the way it is for now, but the patches from 
http://www.tinola.com/lxc/ may be promising...

Cheers,

Gordon



Re: [Lxc-users] disk limit?

2011-05-19 Thread Gordon Henderson
On Wed, 18 May 2011, Serge Hallyn wrote:

   dd if=/dev/zero of=/srv/container1.rootfs.img bs=1M skip=1 count=1

That ought to be seek=1, not skip (you skip over the input, seek into the 
output).

I'm not a fan of this though - if you create the image file(s) fully with 
dd there is a good chance the data will be mostly consecutive blocks on 
the disk, which is probably more efficient in the long run.
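The difference matters: seek= leaves a hole in the output file, so you get 
a sparse file of the full apparent size that occupies almost no disk. A 
quick demonstration (the path is illustrative):

```shell
# seek past 1023 MiB of output, then write 1 MiB: a ~1 GiB sparse file.
dd if=/dev/zero of=/tmp/rootfs.img bs=1M seek=1023 count=1 2>/dev/null
stat -c '%s' /tmp/rootfs.img   # apparent size: 1073741824
du -k /tmp/rootfs.img          # blocks actually allocated: roughly 1024
```

With skip= instead, dd would merely discard a chunk of /dev/zero and write 
a 1 MiB file.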

Gordon



Re: [Lxc-users] mapping host PID - container PID

2011-04-28 Thread Gordon Henderson
On Thu, 28 Apr 2011, Ulli Horlacher wrote:

 Is there a way to get the corresponding host PID for a container PID?

 For example: inside the container, the process init always has PID 1.
 But what PID does this process have in the host process table?

 ps aux | grep ... is not what I am looking for, I want a more robust solution.

lxc-ps ?

From Dobrica Pavlinusic's LXC script:

   init_pid=`lxc-ps -C init -o pid | grep ^$name | cut -d' ' -f2-`

So if you're looking for 'init' in a container called pegasus:

   lxc-ps -C init -o pid | grep ^pegasus | cut -d' ' -f2-

Yields 24766 on my system, and a regular ps ax | fgrep init does give:

   24766 ?        Ss     7:55 init [2]

(amongst others)

Gordon



[Lxc-users] LXC and sched_setscheduler ?

2011-04-10 Thread Gordon Henderson

I have a program that calls sched_setscheduler - however it fails when run 
inside a container - it doesn't overly impact anything, but I'm wondering 
if it's because I've missed something or that it's just not supported?

Any clues?

Gordon



Re: [Lxc-users] Moving lxc containers

2011-04-08 Thread Gordon Henderson
On Sun, 27 Mar 2011, Amit Uttamchandani wrote:

 I'm just wondering what the best way is to move an lxc container? Can I
 just tar the root filesystem and untar it on another system? Or should I
 rsync it over?

 I understand that before doing any of the above, the container should be
 shutdown first. However, is there a way to do this while the container
 is running?

I use rsync - the important option is --numeric-ids.

Typically, I'll do an rsync with the original container running, then set 
up the configs, lxc-create, etc. on the new host, then shut down the live 
container, do a final rsync to copy any changes, and then start it on the 
new host. For most containers the down-time is minimal - under a minute in 
some cases.

Gordon



Re: [Lxc-users] limiting RAM usage and disk space usage

2010-12-03 Thread Gordon Henderson
On Fri, 3 Dec 2010, Matt Rechenburg wrote:

 Hi Lxc team,

 actually I would vote against a loop mount.

I would vote to allow the local systems administrator the choice of what 
suits them best.

And since there's no reason to explicitly block loopback mounts, then 
don't do it.

 Much easier and more flexible is to use lvm and mount a logical volume
 at the containers root directory before the container starts.

Much easier and flexible (for me) would be to use files mounted via loop.

 This will automatically limit the available disk space for the container
 and gives you the flexibility to e.g. snapshot and resize the
 containers disk.

Files mounted via loop will automatically limit the available disk space 
for the container and give me the flexibility to e.g. snapshot and resize 
the container's disk.

 In my setup I have several master images located on
 logical volumes and when I need a new lxc container I just
 snapshot one of the master images and deploy the snapshot.
 btw: this way Lxc is integrated into openQRM (root on lvm).

In my setup ... well, it's different from yours, and I don't use LVM. My 
Linux kernels don't include LVM, and I do not need the overhead of yet 
another layer of software for my servers to go through before the data 
gets to/from the disks, so I've no interest in LVM. Don't force LXC into 
one system or another - let us all have the choice to use what we want to 
use.

 many thanks + keep up the great work,
 Matt Rechenburg
 Project Manager openQRM

So you have a vested interest in LVM - but just because others don't, 
don't exclude us from using other mechanisms.

There's always more than one way to do something.

Gordon



Re: [Lxc-users] limiting RAM usage and disk space usage

2010-12-03 Thread Gordon Henderson
On Fri, 3 Dec 2010, Serge E. Hallyn wrote:

 Quoting Matt Rechenburg (m...@openqrm.com):
 Hi Lxc team,

 actually I would vote against a loop mount.

 Note that this wouldn't take the place of LVMs :)  But since
 LVMs require you to have installed your distro in a particular
 way to begin with (or add a new disk), not everyone is able
 to use them.

 A '--with-lvm=LXC-apache1 option to lxc-create, and maybe
 a '--clone-from=LXC-template2 option as well, would also be
 very nice to have.

Maybe, but why? All it's doing is adding bloat to an already overloaded 
command. We can do everything else currently via simple commands and 
scripts. Don't force the inclusion of libraries, modules and other 
software some of us would not normally have installed.

Gordon



Re: [Lxc-users] limiting RAM usage and disk space usage

2010-12-02 Thread Gordon Henderson
On Mon, 29 Nov 2010, Trent W. Buck wrote:

 Siju George sgeorge...@gmail.com writes:

 1) how do I limit the RAM usage of a container?

 In lxc.conf(5):

lxc.cgroup.memory.limit_in_bytes = 256M
lxc.cgroup.memory.memsw.limit_in_bytes = 1G

 2) how do I limit the disk usage of a container ?

 Ensure the rootfs is a dedicated filesystem (e.g. an LVM LV), and limit
 its size accordingly.

Finally, a use for LVM

And here was me about to embark on using filesystems in files loopback 
mounted...

Gordon



[Lxc-users] LXC and IPv6

2010-11-18 Thread Gordon Henderson

Anyone tried LXC with IPv6? Any reason it shouldn't just work?
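For what it's worth, the config syntax mirrors the IPv4 case - a sketch 
only, with an illustrative documentation prefix (I haven't verified every 
kernel/bridge combination):

```
lxc.network.type = veth
lxc.network.link = br0
lxc.network.ipv6 = 2001:db8::10/64
```

The bridge and the host's forwarding/radvd setup would need to be 
IPv6-aware too, of course.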

Cheers,

Gordon



[Lxc-users] Container Filesystem in a file (loopback mount)

2010-09-30 Thread Gordon Henderson

Looking to put hard limits on a container's filesystem size by creating a 
fixed-length file, putting a filesystem in it, loopback mounting it, and 
then using that as the container's root ...

I've not tried it yet, but wondering if anyone has done anything like 
this? Any pitfalls? (Other than maybe performance)
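The mechanics I have in mind, as a sketch - file name, size and mount 
point are illustrative:

```shell
# Create a fixed-size (512 MiB) file and put a filesystem in it; mkfs
# works on a plain file without root when forced with -F.
dd if=/dev/zero of=/tmp/vds1.img bs=1M seek=511 count=1 2>/dev/null
mkfs.ext4 -F -q /tmp/vds1.img

# Mounting needs root; the mount point then becomes the container root:
#   mount -o loop /tmp/vds1.img /vservers/vds1
# and the container config (or its fstab) points at /vservers/vds1.
```

The container can never consume more disk than the size of the image 
file, which is the whole point of the exercise.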

Cheers,

Gordon



Re: [Lxc-users] Container Filesystem in a file (loopback mount)

2010-09-30 Thread Gordon Henderson
On Thu, 30 Sep 2010, Daniel Lezcano wrote:

 On 09/30/2010 11:04 AM, Gordon Henderson wrote:
 
 Looking to put hard limits on a containers filesystem size by creating a
 fixed-length file, putting a filesystem in it, loopback mounting it, then
 using that as the containers root ...
 
 I've not tried it yet, but wondering if anyone has done anything like
 this? Any pitfalls? (Other than maybe performance)

 Yep, I tried, no problem.

Great.

 In the near future, we will be able to specify the image directly in 
 lxc.rootfs. The code doing that is ready, but there are some problems with 
 the consoles I have to fix first.

Sounds good, thanks!

Gordon



Re: [Lxc-users] Cannot start a container with a new MAC address

2010-08-27 Thread Gordon Henderson
On Fri, 27 Aug 2010, Sebastien Douche wrote:

 I created a container with an interface. I stop it, I change the MAC
 address, restart it:

 lxc-start: ioctl failure : Cannot assign requested address
 lxc-start: failed to setup hw address for 'eth0'
 lxc-start: failed to setup netdev
 lxc-start: failed to setup the network for 'vsonde43'
 lxc-start: failed to setup the container
 lxc-start: invalid sequence number 1. expected 2

 Have I missed a step?

lxc-destroy on the old one, lxc-create for the new one?

Gordon



Re: [Lxc-users] Firewalling ...

2010-07-02 Thread Gordon Henderson
On Fri, 2 Jul 2010, Bodhi Zazen wrote:


 In general, if I were managing the containers I would configure the firewall 
 on the host as you suggest.

 With that general advice, additional considerations include

 - Can use NAT for your guests ?

No.

 - Do you want your guests to be private or public ?

Public

 - How uniform are your containers ? Do they all have the same firewall 
 needs ? Are they web servers that can be serviced by a reverse proxy ? 
 What services are you running in the containers ?

Almost all the same configuration. Services - LAMP or Asterisk (with a 
tiny bit of Apache+PHP). (The host is built differently depending on LAMP 
or Asterisk - I don't mix the two on the same host!)

 - What are you wanting to accomplish with a firewall ?

Mostly reducing DoS attacks on SIP servers (there's been a huge surge in 
crackers trying to brute-force SIP usernames/passwords recently - 
presumably to get free calls - and they're not being nice about it; I've 
seen 200 requests a second to servers in the past, and Asterisk doesn't 
rate-limit them the way ssh/pop etc. do), and I don't trust Asterisk's 
own code to protect itself.
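The sort of rules I mean, as a hedged sketch in iptables-restore format - 
the port, rates and names are illustrative, not my production set:

```
*filter
:INPUT ACCEPT [0:0]
# Accept SIP at a sane per-source rate, drop the brute-force floods.
-A INPUT -p udp --dport 5060 -m hashlimit --hashlimit-upto 10/sec --hashlimit-burst 20 --hashlimit-mode srcip --hashlimit-name sip -j ACCEPT
-A INPUT -p udp --dport 5060 -j DROP
COMMIT
```

Whether that lives on the host (matching in FORWARD for the bridge) or in 
each container is exactly the question above.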

 Just some initial considerations.

Thanks,

Gordon





 - Original Message -
 From: Gordon Henderson gor...@drogon.net
 To: lxc-users@lists.sourceforge.net
 Sent: Friday, July 2, 2010 8:09:52 AM
 Subject: Re: [Lxc-users] Firewalling ...

 On Fri, 2 Jul 2010, Daniel Lezcano wrote:

 On 07/02/2010 03:06 PM, Gordon Henderson wrote:
 Further to my logging stuff, which I seem to be able to get round now, I'm
 now wondering about the issues surrounding firewalling - wondering if it
 might be more efficient to have one firewall on the host which hooks into
 the forwarding table, (eth0 rather than br0?) or individual firewalls on
 each container - all doing more or less the same thing

 Any thoughts/comments?


 I didn't look at the netfilter code within the kernel, but at first glance,
 if the tables are 'namespacized' it would be more efficient to have the
 iptables rules per container, because the tables will be smaller and the
 lookup faster - but *maybe* at the cost of extra memory consumption. On the
 other hand, it could be preferable to keep it all on the host to centralize
 the administration in a single network stack, which could be easier to
 configure. Moreover, if there is a large number of containers, and hence a
 big number of veths attached to the bridge, the sooner the packet is dropped
 the better - that should reduce the packet processing on the bridge (eg.
 prevent having to find the dest interface and deliver the packet to it, only
 for it to be dropped).

 IMHO it's a decision to be made by weighing the number of containers against
 the number of iptables rules.

 Well these are random thoughts and assumptions, so don't give too much credit
 to it ;)

 Always good to have another view on things though!

 FWIW: I'm looking at up to 20 containers per host for this application
 (virtual Asterisk servers), and I'm probably leaning towards centralised
 administration more than anything else...

 Thanks,

 Gordon






Re: [Lxc-users] Benchmarking with LXC

2010-06-13 Thread Gordon Henderson
On Fri, 11 Jun 2010, Richard Thornton wrote:

 Gordon wrote:

 Are you sure it's wise to even consider LXC here?

 And can one PC really keep up with 20Gb/sec of Ethernet traffic? i.e. How
 do you know the bottleneck here won't be the PC rather than the firewall
 appliance... I'd seriously consider using 2 PCs - firstly back to back,
 then with the firewall in-between...

 Hi Gordon,

 Thanks for the info.

 It's a home project and I only have one 10G adapter and no 10G switch
 (I got a SMC 10G adapter from ebay for $250).

Expensive little toy :)

 The PC is a whitebox 6-core machine (AMD 1055T) with Ubuntu Server on
 there (I was considering using OpenSolaris Zones); the 10G card is
 detected.

Not convinced the number of cores will help you here - it'll be data over 
the PCIe bus and interrupt latency. Saying that, what do I know - I've no 
first-hand experience of 10Gb networking - yet! I really don't know if a 
PC can actually sustain 10Gb on its own without having to worry about 
anything else...

 The firewall I want to test only supports 10Gbps maximum.

 I just want to figure out if it will work with LXC.

Personally, I'd probably see if I could do it without LXC - create 2 VLAN 
devices on the host and get the netperf server and client to bind to one 
interface each. If the client is looping data back then you'll only see 
half the line speed, as both the server and client will be Txing at the 
same time...

No experience with netperf though - I've used iperf, which can be bound to 
an interface/host - I'd be surprised if netperf couldn't be.

Gordon



Re: [Lxc-users] Benchmarking with LXC

2010-06-10 Thread Gordon Henderson
On Thu, 10 Jun 2010, Richard Thornton wrote:

 Hi,

 I wish to use netperf to benchmark a firewall appliance but I only want to
 use a single physical 10GbE adapter.

 So I have my PC and the firewall.

 I was thinking two LXC containers, netperf-client and netperf-server,
 basically I want to force the traffic through the firewall.

 The firewall will have two .1q VLANs configured, one for netperf-client the
 other for netperf-server.  The firewall will route between them.

 So netperf traffic should go: pc -> firewall -> pc (netperf-client -> firewall
 -> netperf-server)

 Because it is benchmarking I want to get it as close to 10Gbps through the
 firewall as I can.

 I was looking at macvlan but not sure if this is required if .1q is
 possible?

 Really not sure if this is feasible, so any advice is much appreciated.

Are you sure it's wise to even consider LXC here?

And can one PC really keep up with 20Gb/sec of Ethernet traffic? i.e. How 
do you know the bottleneck here won't be the PC rather than the firewall 
appliance... I'd seriously consider using 2 PCs - firstly back to back, 
then with the firewall in-between...

Gordon




Re: [Lxc-users] Copy-on-write hard-link / hashify feature

2010-06-10 Thread Gordon Henderson
On Thu, 10 Jun 2010, John Drescher wrote:

 BTW, a second option is lessfs.

 http://www.lessfs.com/wordpress/?page_id=50

What about the KSM kernel option? It's aimed at KVM, I think, and has been 
in the kernel since 2.6.32. See:

  http://lwn.net/Articles/306704/
and
  http://lwn.net/Articles/330589/

Not sure if that could be used to help here - it seems a bit of a 
retrospective way to find data duplications - assuming we could enable it 
for whole containers...

Gordon



Re: [Lxc-users] Set default GW

2010-06-09 Thread Gordon Henderson
On Wed, 9 Jun 2010, Bodhi Zazen wrote:

 Daniel - Thank you for answering, not a big deal.

 Gordon - Aye, that is what I do for containers. For applications I write an 
 init script

 #!/bin/bash

 route add default gw 192.168.0.1 eth0

 Additional commands / config

 service start foo

 or what not (depending on the application).

 then

 lxc-execute -n foo -f foo.config /path_to/init_script

Actually, I sort of misread your mail and later realised you were already 
using an init script.

In my init scripts, I often do a bit more - and often the default route 
isn't on the same network as the container's IP address (same physical 
LAN, different subnet) - a (real, live!) example:

In the config file, I have:

lxc.network.ipv4 = 195.10.226.165/27

and in the init script, (/etc/init.d/rcS) I have:

   route add -net 195.10.225.64  netmask 255.255.255.224 dev eth0
   route add -net 195.10.230.96  netmask 255.255.255.240 dev eth0
# route add -net 195.10.226.160 netmask 255.255.255.224 dev eth0

   route add default gw 195.10.225.67

   ifconfig eth0:53 195.10.226.164 netmask 255.255.255.224
   ifconfig eth0:25 195.10.230.105 netmask 255.255.255.240

   sh /etc/network/firewall

So how generic or otherwise should the config file be? I'd hate to see it 
hard-wired into something that doesn't allow me that sort of 
flexibility... Although for starting simple applications (which I've no 
use for myself), I can see it might be handy.

Gordon



Re: [Lxc-users] File sharing between host and container during startup

2010-06-06 Thread Gordon Henderson
On Sun, 6 Jun 2010, Nirmal Guhan wrote:

 I want to run my application on fedora as a container and use the libraries
 (/lib, /usr/lib) from the host (so my application container size is small).
 I did lxc-create but lxc-execute failed (I had sent a mail earlier on this).
 Suggestion was to use lxc-start itself and run as system container.

 I changed the fstab file and could share the lib directory.

 Please let me know if there is a better solution for my use case. I would
 like to try it too.

You can import directories from the host into a container using bind 
mounts.

For example, I have this in some of my systems - this is the fstab file 
named in a container config file:

none /vservers/vdsx10/dev/pts  devpts  defaults 0 0
none /vservers/vdsx10/proc     proc    defaults 0 0
none /vservers/vdsx10/sys      sysfs   defaults 0 0
none /vservers/vdsx10/dev/shm  tmpfs   defaults 0 0

#/usr                     /vservers/vdsx10/usr/                     none  defaults,bind,ro 0 0
/usr/lib/asterisk         /vservers/vdsx10/usr/lib/asterisk         none  defaults,bind,ro 0 0
/var/lib/asterisk/moh     /vservers/vdsx10/var/lib/asterisk/moh     none  defaults,bind,ro 0 0
/var/lib/asterisk/sounds  /vservers/vdsx10/var/lib/asterisk/sounds  none  defaults,bind,ro 0 0

The first 4 lines ought to be familiar, but the bottom ones are ones I use 
which are common over all containers on that host.

/usr is commented out here - can't remember why - I'd need to go back 
through my notes for this instance, but it's Sunday night here...
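For what it's worth, the same bind mounts can also be written directly in 
the container config as lxc.mount.entry lines instead of a separate fstab 
file - a sketch using the same paths as above (assuming the absolute-path 
form accepted by older LXC releases):

```
# Equivalent lxc.mount.entry lines (same paths as the fstab above):
lxc.mount.entry = /usr/lib/asterisk /vservers/vdsx10/usr/lib/asterisk none ro,bind 0 0
lxc.mount.entry = /var/lib/asterisk/moh /vservers/vdsx10/var/lib/asterisk/moh none ro,bind 0 0
lxc.mount.entry = /var/lib/asterisk/sounds /vservers/vdsx10/var/lib/asterisk/sounds none ro,bind 0 0
```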

Gordon



Re: [Lxc-users] LXC a feature complete replacement of OpenVZ?

2010-05-13 Thread Gordon Henderson
On Thu, 13 May 2010, Christian Haintz wrote:

 Hi,

 At first LXC seems to be great work, from what we have read already.

 There are still a few open questions for us (we are currently running
 dozens of OpenVZ Hardwarenodes).

I can't answer for the developers, but here are my answers/observations 
based on what I've seen and used ...

 1) OpenVZ in the long-term seems to be a dead end. Will LXC be a
 feature complete replacement for OpenVZ in the 1.0 Version?

I looked at OpenVZ and while it looked promising, didn't seem to be going 
anywhere. I also struggled to get their patches into a recent kernel and 
it looked like there was no Debian support for it. LXC was in the kernel 
as standard - I doubt it'll come out now... (and there is a back-ported 
lxc debian package that works fine under Lenny)


 As of the current version
 2) is there IPTable support, any sort of control like the OpenVZ
 IPTable config.

I run iptables - and in some cases different iptables setups in each 
container on a host (which also has its own iptables).

Seems to just work. Each container has an eth0 and the host has a br0 
(as well as an eth0).

Logging is at the kernel level though, so goes into the log-files on the 
server host rather than in the container - it may be possible to isolate 
that, but it's not something I'm too bothered with.

My iptables are just shell-scripts that get called as part of the boot 
sequence - I really don't know what sort of control OpenVZ gives you.
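For anyone curious, a hypothetical minimal version of such a per-container 
firewall script might look like the following (the ports and policy are 
invented for illustration; this is a sketch, not the script actually used):

```sh
#!/bin/sh
# Hypothetical per-container firewall, run from the container's boot
# sequence. Default-deny inbound; allow loopback, replies, and SSH.
iptables -F INPUT
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
```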


 3) Is there support for tun/tap device

Doesn't look like it yet...

http://www.mail-archive.com/lxc-users@lists.sourceforge.net/msg00239.html


 4) is there support for correct memory info and disk space info (are
 df and top showing the container resources or the resources of
 the hardwarenode)

Something I'm looking at myself - top gives your own processes, but cpu 
usage is for the whole machine. 'df' I can get by manipulating /etc/mtab - 
then I get the size of the entire partition my host is running under. I'm 
not doing anything 'clever' like creating a file and loopback mounting it 
- all my containers in a host are currently on the same partition. I'm not 
looking to give fixed-size disks to each container though. YMMV.

However gathering cpu stats for each container is something I am 
interested in - and was about to post to the list about it - I think there 
are files (on the host) under /cgroup/container-name/cpuacct.stat and a 
few others which might help me though, but I'm going to have to look them 
up...
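As a rough sketch of what reading those files might involve: cpuacct.stat 
reports cumulative user/system time in USER_HZ ticks, so converting to 
seconds is a one-liner (the /cgroup path and HZ=100 are assumptions here - 
check `getconf CLK_TCK` on the host):

```shell
#!/bin/sh
# Convert cpuacct.stat-style "user N" / "system N" tick counts into
# seconds, assuming USER_HZ=100 (an assumption; verify with getconf CLK_TCK).
parse_cpuacct() {
    awk '{ printf "%s %.2f\n", $1, $2 / 100 }'
}

# Would normally be fed from e.g. /cgroup/<container-name>/cpuacct.stat:
printf 'user 1234\nsystem 567\n' | parse_cpuacct
```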

 5) is there something comparable to the fine-grained control over
 memory resources like vmguarpages/privmpages/oomguarpages in LXC?

Pass..

 6) is LXC production ready?

Not sure who could make that definitive decision ;-)

It sounds like the lack of tun/tap might be a show-stopper for you though. 
(come back next week ;-)

However, I'm using it in production - got a dozen LAMPy type boxes running 
it so-far with several containers inside, and a small number of asterisk 
hosts. (I'm not mixing the LAMP and asterisk hosts though) My clients 
haven't noticed any changes which makes me happy. I don't think what I'm 
doing is very stressful to the systems though, but so-far I'm very happy 
with it.

I did test it to my own satisfaction before I committed myself to it on 
servers 300 miles away though. One test was to create 20 containers on an 
old 1.8GHz celeron box, each running asterisk with one connected to the 
next and so on - then place a call into the first. It managed 3 loops 
playing media before it had any problems - and those were due to kernel 
context/network switching rather than anything to do with the LXC setup. 
(I suspect there is more network overhead though due to the bridge and 
vlan nature of the underlying plumbing)

So right now, I'm happy with LXC - I've no need for other virtualisation 
as I'm purely running Linux, so don't need to host Win, different kernels, 
etc. And for me, it's a management tool - I can now take a container and 
move it to different hardware (not yet a proper live migration, but the 
final rsync currently takes only a few minutes and I can live with that). I 
have also saved myself a headache or two by moving old servers with OSes I 
couldn't upgrade onto new hardware - so I have one server running Debian 
Lenny, kernel 2.6.33.1 hosting an old Debian Woody server inside a 
container running the customer's custom application which they developed 6 
years ago... They're happy as they got new hardware and I'm happy as I 
didn't have to worry about migrating their code to a new version of Debian 
on new hardware... And I can also take that entire image now and move it 
to another server if I needed to load-balance, upgrade, cater for h/w 
failure, etc.
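The move-to-new-hardware procedure described above might be sketched 
roughly as follows (hostnames and paths are assumptions, not the exact 
commands used):

```sh
# 1. Bulk copy while the container is still running (hostname/path assumed):
rsync -aH --numeric-ids /vservers/vdsx10/ newhost:/vservers/vdsx10/
# 2. Stop the container, then a short final sync to catch recent changes:
lxc-stop -n vdsx10
rsync -aH --numeric-ids --delete /vservers/vdsx10/ newhost:/vservers/vdsx10/
# 3. Start the container on the new host.
```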

I'm using kernel 2.6.33.x (which I custom compile for the server hardware) 
and Debian Lenny FWIW.

I'm trying to not sound like a complete fanboi, but until the start of 
this year, I had no interest in virtualisation at all, but once