[lxc-users] ArchLinux container network problems with systemd 244 (systemd 243 works ok)

2020-01-01 Thread John
Hello,

Just reporting this problem I'm experiencing with Arch Linux on LXD.

Create container using "images:archlinux/current/amd64" and with a
network interface connected to a bridge.

Configure /etc/systemd/network/mynetif.network to configure by DHCP:

[Match]
Name=mynetif

[Network]
DHCP=ipv4

Start network

# systemctl enable --now systemd-networkd

Observe network stuck pending

# networkctl
IDX LINK    TYPE     OPERATIONAL SETUP
  1 lo      loopback carrier     unmanaged
335 mynetif ether    routable    pending

Confirm systemd version

# systemctl --version

systemd 244 (244.1-1-arch)
+PAM +AUDIT -SELINUX -IMA -APPARMOR +SMACK -SYSVINIT +UTMP
+LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS
+KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid

Install systemd 243.78-2-arch
(download from https://archive.archlinux.org/packages/s/systemd)

(from outside container)
# lxc file push systemd-243.78-2-x86_64.pkg.tar.xz mycontainer/root

(then inside container)
# pacman -U systemd-243.78-2-x86_64.pkg.tar.xz

Confirm systemd version

# systemctl --version
systemd 243 (243.78-2-arch)
+PAM +AUDIT -SELINUX -IMA -APPARMOR +SMACK -SYSVINIT +UTMP
+LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS
+KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid

Restart systemd-networkd

# systemctl restart systemd-networkd

Observe network configured successfully

# networkctl

IDX LINK    TYPE     OPERATIONAL SETUP
  1 lo      loopback carrier     unmanaged
335 mynetif ether    routable    configured

I did look at the systemd-networkd journal and there was nothing there to
indicate a problem. If I manually configure the interface (using ip)
then it works, so the network layer is fine; it's just systemd-networkd's
configuration of the interface that's broken.
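
For reference, the manual workaround looks like this (addresses are
examples, not taken from my setup):

# ip link set mynetif up
# ip address add 192.168.1.10/24 dev mynetif
# ip route add default via 192.168.1.1 dev mynetif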

Anyone else observe this?
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Converting network from LXC to LXD

2019-12-21 Thread John Lane
On 21/12/2019 16:51, John Lane wrote:

> 
> I can't do this:
> 
> $ lxc config device set mycontainer eth0 ipv4.address 192.168.21.2/24
>Error: Invalid devices:
> Invalid value for device option ipv4.address: Not an IPv4 address:
> 192.168.21.2/24
> 
> Also there appears to be no setting for gateway:
> 
> $ lxc config device set mycontainer eth0 ipv4.gateway 192.168.21.1
> Error: Invalid devices: Invalid device option: ipv4.gateway
> 

Reading this
(https://github.com/lxc/lxd/issues/1259#issuecomment-166416979):

> lxc.network.ipv4 => Not supported by LXD, IP configuration must be
done from inside the container. Most distros flush any pre-existing
kernel network configuration when they boot, so this pretty much never
works anyway.

> lxc.network.ipv4.gateway => Not supported by LXD, IP configuration
must be done from inside the container. Most distros flush any
pre-existing kernel network configuration when they boot, so this pretty
much never works anyway.

I guess that it doesn't work.

I also tried using "raw.lxc" to work around it but could not get that to
work. I kept getting "Config parsing error: Initialize LXC: Failed to
load raw.lxc". Does raw.lxc not work any more? It is documented, but
there aren't really any explanations of how to use it. Anecdotal posts
exist elsewhere, but I suspect they contain stale information because the
suggested methods didn't work for me.
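
For the record, setting raw.lxc on a container is supposed to look
something like this (a sketch based on the documented key; this is the
sort of thing I could not get to load):

$ lxc config set mycontainer raw.lxc "lxc.net.0.ipv4.address = 192.168.21.2/24"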

Anyway, I can work around this issue (as suggested) by configuring the
static network details inside the container. That works.
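
For anyone finding this later, "inside the container" means something like
the following with systemd-networkd (the interface name is an example; the
addresses are the ones from my setup):

/etc/systemd/network/eth0.network:

[Match]
Name=eth0

[Network]
Address=192.168.21.2/24
Gateway=192.168.21.1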

I don't know whether there is a page in the main LXD documentation that
explains this kind of thing; it would be useful to help with transitioning
from plain-old lxc.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Converting network from LXC to LXD

2019-12-21 Thread John Lane
On 20/12/2019 14:10, Fajar A. Nugraha wrote:
> 
> https://linuxcontainers.org/lxd/docs/master/containers#type-nic
> 
> So something like this for veth on a bridge (on "lxc config edit
> CONTAINER_NAME", in case you haven't figure it out):
> 
> devices:
>   eth0:
> name: eth0
> host_name: c1-0
> nictype: bridged
> parent: lxdbr0
> type: nic
> 
> "parent" should be whatever the bridge is called on your host (lxd
> creates lxdbr0 by default).
> "host_name" is what the host side of the veth will be called (very
> useful if you're doing host-side traffic monitoring).
> 

Looking at nictype=bridged, I can set up DHCP addresses, thanks, but I am
having difficulty with static configuration.

Looking at that document there seems to be no equivalent of the
following lxc configuration:

lxc.net.0.ipv4.address = 192.168.21.2/24
lxc.net.0.ipv4.gateway = 192.168.21.1

The "ipv4.address" entry documented as "An IPv4 address to assign to the
container through DHCP" and not as a CIDR address as per lxc.
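
As far as I can tell it only accepts a plain address, and only takes effect
when the parent is an LXD-managed bridge (e.g. lxdbr0) where LXD runs the
DHCP server, e.g.:

$ lxc config device set mycontainer eth0 ipv4.address 192.168.21.2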

I can't do this:

$ lxc config device set mycontainer eth0 ipv4.address 192.168.21.2/24
   Error: Invalid devices:
Invalid value for device option ipv4.address: Not an IPv4 address:
192.168.21.2/24

Also there appears to be no setting for gateway:

$ lxc config device set mycontainer eth0 ipv4.gateway 192.168.21.1
Error: Invalid devices: Invalid device option: ipv4.gateway

I can manually add them afterwards, i.e.

$ lxc exec mycontainer ip address add 192.168.21.2/24 dev eth0
$ lxc exec mycontainer ip route add default via 192.168.21.1 dev eth0

What am I missing? Can I assign static addresses with LXD configuration?
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Converting network from LXC to LXD

2019-12-20 Thread John Lane
On 20/12/2019 14:10, Fajar A. Nugraha wrote:
> 
> You're looking in the wrong section
>  
> https://linuxcontainers.org/lxd/docs/master/containers#type-nic
> 

Thank you, don't know how I missed that :)

I have the first one working nicely with 3 interfaces, deployed using
terraform-provider-lxd.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


[lxc-users] Converting network from LXC to LXD

2019-12-20 Thread John Lane
Hello,

I have some lxc containers that I want to migrate to lxd. I'm using
lxc/lxd v3 (3.1.8).

I'm struggling to find documentation explaining how to configure the
"phys" network type I use to assign a physical interface to a container
and the "veth" network type that I use to join a container to an
existing bridge.
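
For concreteness, the existing configuration looks roughly like this
(interface names are examples):

lxc.net.0.type = phys
lxc.net.0.link = eth1

lxc.net.1.type = veth
lxc.net.1.link = br0
lxc.net.1.flags = up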

I already have a few "classic lxc" containers that have multiple
interfaces connected to host's physical or bridge interfaces that get
configured by those networks' main dhcp servers. I am investigating
migrating these containers to lxd.

I'm not looking to change the network configuration inside or outside
the container; I just want what I have working with lxc to work with lxd.

I've looked at

https://linuxcontainers.org/lxd/docs/master/networks (mentions neither
phys nor veth)

https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/

I'd appreciate some pointers towards the appropriate documentation or an
explanation of how to do this with lxd.

Much appreciated,

John





___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] sharing files and unprivileged LXC container

2019-12-11 Thread John
I use setfacl/getfacl to change permissions on host files so they are
accessible to the container's users. I am only doing basic stuff with very
few users, so I am not sure how that approach scales.
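
A minimal sketch of what I mean (the uid is an example; use the start of
the container's uid range from /etc/subuid):

host# setfacl -R -m u:100000:rwX /srv/shared
host# setfacl -R -d -m u:100000:rwX /srv/shared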

Of course, anything you do to open up what the container can access will
reduce the security benefit of using containers.

On Wed, Dec 11, 2019, 8:19 AM Justus Schubert 
wrote:

> Hi everyone,
>
> I'm trying the first time lxc. something I do not understand is the shared
> use
> of resources. This seems to be a problem especially with unprivileged
> containers.
> My first thought was to have a shared folder with custom user/group
> mapping in
> unprivileged LXC container for (user)mount
>
> I set up an LXC container. My host system is Arch Linux and the container
> uses Debian. I start the container as root and use user/group mapping so
> the container runs 'unprivileged'.
> >> my /etc/lxc/default.conf:
> >> lxc.idmap = u 0 10 65536
> >> lxc.idmap = g 0 10 65536
>
> >> my /etc/subuid & /etc/subgid:
> >> root:10:65536
>
> Now i like to share my homedir within the container.
> >> my /var/lib/lxc//config:
> >> lxc.mount.entry = /home/ /var/lib/lxc//rootfs/mnt/share
> none bind 0 0
>
> Because of the mapping described above rights of the shared folder are set
> to
> nobody nogroup.
>
> After some research, I came to the idea that there are certainly other
> ways to
> solve the problem. Maybe SSHfs, NFS or SAMBA? something that the
> 'usermapping'
> can implement in the protocol?
> can someone tell me his experiences or show ways of solution?
> in concrete terms, I am looking for ideas for the realization:
> 1) How can I share rights among 'unprivileged' users from the host to the
> container? User1 from host shares a folder to user1 from the container-os.
> both are not root. How can I achieve this?
> 2) sharing files between unprivileged lxc containers
>
> I can imagine that such questions are asked frequently. but unfortunately
> I
> have not found a simple and consistent solution.
>
> Thanks in advance for your help!
>
> --
>
> Justus Schubert
> 01099 Dresden
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] What is the state of the art for lxd and wifi ?

2018-07-23 Thread john



On 07/23/2018 06:47 AM, Pierre Couderc wrote:


On 07/23/2018 12:37 PM, Fajar A. Nugraha wrote:
On Mon, Jul 23, 2018 at 5:33 PM, Pierre Couderc wrote:



On 07/23/2018 12:12 PM, Fajar A. Nugraha wrote:

Relevant to all VM-like in general (including lxd, kvm and
virtualbox):
- with the default bridged setup (on lxd this is lxdbr0),
VMs/containers can access internet
(...)
- bridges (including macvlan) does not work on wifi



Sorry, it is not clear for me how default bridges "can access
internet",  if simultaneously "bridges (including macvlan) does
not work on wifi" ?



My bad for not being clear :)

I meant, the default setup uses bridge + NAT (i.e. lxdbr0). The NAT 
is automatically setup by LXD. That works. If your PC can access the 
internet, then anything on your container (e.g. wget, firefox, etc) 
can access the internet as well.



Bridge setups WITHOUT nat (those that bridge containers interface 
directly to your host interface, e.g. eth0 or wlan), on the other 
hand, will only work for wired, and will not work for wireless.




Mmm, do you mean that there is no known solution to use LXD with wifi ?

Based on what has been indicated here and my own understanding, there are
two high-level ways a container can access the network: 1) bridge + NAT,
and 2) the container directly using a host interface.


With option 1 the container can access the network via the host even if
that host interface is wifi based. You configure the wifi on the host.
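
For example, a container NIC on the LXD-managed NAT bridge looks something
like this (a sketch; lxdbr0 is the bridge LXD creates by default):

devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic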



___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] forcing a container to use a specific cgroup with v2.0.8

2018-03-02 Thread Marshall2, John (SSC/SPC)
I did not find a solution with 2.0.8, so I moved on to 2.1. The
lxc.cgroup.dir setting seems to do what I want. Unfortunately, the cgroup
configuration process seems to change settings outside of the cgroup
directory pointed to by lxc.cgroup.dir; namely, it sets
cgroup.clone_children to 1 when it should be 0.

I have asked about this latter issue on the lxc forum at
https://discuss.linuxcontainers.org/t/why-cgroups-are-setup-as-they-are/1313/2

John

On Wed, 2018-02-28 at 22:58 +, Marshall2, John (SSC/SPC) wrote:
Hi,

At the moment I am using LXC 2.0.8 (ubuntu 16.04) and have noticed a change in 
behavior
since previous releases of LXC I have used (1.x and 2.0.5, I believe). I've not 
yet narrowed
when the change happened.

My goal is to start an LXC container within an existing (arbitrary) cgroup. 
Previously, I was
able to run lxc-start from within a cgroup and the new LXC cgroup would be 
created under
it. I used lxc.cgroup.pattern=%n in lxc.conf.

Is there any way to do the same thing with v2.0.8? I see that 2.1 has 
lxc.cgroup.dir which
appears to do what I need, but I am trying to do this with 2.0.8, at least for 
now.

Thanks,
John

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org<mailto:lxc-users@lists.linuxcontainers.org>
http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] forcing a container to use a specific cgroup with v2.0.8

2018-03-01 Thread Marshall2, John (SSC/SPC)
Hi,

So I have started trying out 2.1.1 but have encountered a different issue.

I have the following:
/sys/fs/cgroup//
  jobs/
    cgroup.clone_children (= 0)
    xx/
      cgroup.clone_children (= 1)
      cpusets.cpus (= 1)
      cpusets.mems (= 0-1)

In the config file I have:
lxc.cgroup.dir = jobs/xx

As before, I want lxc-start to set up its own cgroup inside jobs/xx (e.g., at
jobs/xx/xx where the container name is "xx"), but inherit the settings from
jobs/xx.

I find that this is not working. Also, unexpectedly, jobs/cgroup.clone_children
is being reset to 1, which I do not want.

Is there any way to force lxc-start to just inherit and not touch anything else?

Thanks,
John


On Wed, 2018-02-28 at 22:58 +, Marshall2, John (SSC/SPC) wrote:
Hi,

At the moment I am using LXC 2.0.8 (ubuntu 16.04) and have noticed a change in 
behavior
since previous releases of LXC I have used (1.x and 2.0.5, I believe). I've not 
yet narrowed
when the change happened.

My goal is to start an LXC container within an existing (arbitrary) cgroup. 
Previously, I was
able to run lxc-start from within a cgroup and the new LXC cgroup would be 
created under
it. I used lxc.cgroup.pattern=%n in lxc.conf.

Is there any way to do the same thing with v2.0.8? I see that 2.1 has 
lxc.cgroup.dir which
appears to do what I need, but I am trying to do this with 2.0.8, at least for 
now.

Thanks,
John

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org<mailto:lxc-users@lists.linuxcontainers.org>
http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] forcing a container to use a specific cgroup with v2.0.8

2018-02-28 Thread Marshall2, John (SSC/SPC)
Hi,

At the moment I am using LXC 2.0.8 (ubuntu 16.04) and have noticed a change in 
behavior
since previous releases of LXC I have used (1.x and 2.0.5, I believe). I've not 
yet narrowed
when the change happened.

My goal is to start an LXC container within an existing (arbitrary) cgroup. 
Previously, I was
able to run lxc-start from within a cgroup and the new LXC cgroup would be 
created under
it. I used lxc.cgroup.pattern=%n in lxc.conf.

Is there any way to do the same thing with v2.0.8? I see that 2.1 has 
lxc.cgroup.dir which
appears to do what I need, but I am trying to do this with 2.0.8, at least for 
now.

Thanks,
John
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] How to enable SElinux for LXC ?

2018-02-08 Thread john
Per the conf man page, have you confirmed that the host has selinux 
enabled and that lxc was compiled with selinux support?
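
For example (exact commands depend on your distro; these are only
suggestions):

host# sestatus          # or: getenforce
host# ldd $(which lxc-start) | grep selinux   # rough check for libselinux support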


John



___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Unable to use mknod

2017-11-25 Thread john
Thanks. Adding "lxc.cap.keep = mknod" gives me an error on container
startup due to simultaneously using lxc.cap.drop. The drop is probably
defined in some include file; I will track that down.


lxc-start: conf.c: lxc_setup: 3965 Container requests lxc.cap.drop and 
lxc.cap.keep: either use lxc.cap.drop or lxc.cap.keep, not both.
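
If the drop does come from an include, one alternative to lxc.cap.keep is
to clear the inherited drop list and re-add only what you want (a sketch;
I have not verified which include sets it, and the re-added set is just an
example):

# after the lxc.include lines in the container config
lxc.cap.drop =                      # an empty value should clear inherited drops
lxc.cap.drop = sys_module sys_time  # example: re-add drops, leaving mknod out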


John


On 11/25/2017 04:37 PM, Pavol Cupka wrote:

and here http://man7.org/linux/man-pages/man5/lxc.container.conf.5.html

On Sat, Nov 25, 2017 at 11:36 PM, Pavol Cupka <pavol.cu...@gmail.com> wrote:

CAP_MKNOD

http://man7.org/linux/man-pages/man7/capabilities.7.html

You need to explicitly add the CAP_MKNOD capability to your container.

   lxc.cap.keep
   Specify the capability to be kept in the container. All other
   capabilities will be dropped. When a special value of "none"
   is encountered, lxc will clear any keep capabilities specified
   up to this point. A value of "none" alone can be used to drop
   all capabilities.

You could also try to automate this (if you happen to use systemd
inside the container) using:

   lxc.hook.autodev
   A hook to be run in the container's namespace after mounting
   has been done and after any mount hooks have run, but before
   the pivot_root, if lxc.autodev == 1.  The purpose of this hook
   is to assist in populating the /dev directory of the container
   when using the autodev option for systemd based containers.
   The container's /dev directory is relative to the
   ${LXC_ROOTFS_MOUNT} environment variable available when the
   hook is run.

which can point to a script running mknod.


On Sat, Nov 25, 2017 at 11:30 PM, john <j...@tonebridge.com> wrote:

Hello,

I have done enough Web searching in how to get access to usb cdrom drive
from an unprivileged container that I would like to think I have a unique
problem :)

I am using Debian Stretch and lxc 2.0.7.  My container config is below.

In container:

container:/# mknod -m 666 /tmp/cdrom b 11 0
mknod: /tmp/cdrom: Operation not permitted

 From outside:

host# lxc-device -n ripper add /dev/sr0
lxc-device: lxccontainer.c: do_add_remove_node: 3798 mknod failed
lxc-device: lxccontainer.c: do_add_remove_node: 3764 Failed to create note
in guest
lxc-device: tools/lxc_device.c: main: 166 Failed to add /dev/sr0 to ripper.

host# ls -l /dev/sr0
brw-rw 1 root cdrom 11, 0 Nov 25 14:17 /dev/sr0

I have attempted to disable seccomp by commenting this out in
/usr/share/lxc/config/common.conf:

# Blacklist some syscalls which are not safe in privileged
# containers
#lxc.seccomp = /usr/share/lxc/config/common.seccomp

I can't get that node created and it seems like it should.

What am I missing?


Container config:

lxc.include = /usr/share/lxc/config/debian.common.conf
lxc.include = /usr/share/lxc/config/debian.userns.conf
lxc.arch = x86_64

# Container specific configuration
lxc.id_map = u 0 10 65536
lxc.id_map = g 0 10 65536
lxc.mount.auto = proc:mixed sys:ro cgroup:mixed
lxc.rootfs = /containers/ripper/rootfs
lxc.rootfs.backend = dir
lxc.utsname = ripper

lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:80:78:fc

lxc.aa_profile = lxc-container-default-with-mounting
#lxc.aa_profile = unconfined

lxc.mount.entry = /dev/bus/usb/001 dev/bus/usb/001  none
bind,optional,create=dir

# lxc.cgroup.devices.allow = typeofdevice majornumber:minornumber rwm
lxc.cgroup.devices.allow = b 11:* rwm


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Unable to use mknod

2017-11-25 Thread john

Hello,

I have done enough web searching on how to get access to a usb cdrom drive
from an unprivileged container that I would like to think I have a
unique problem :)


I am using Debian Stretch and lxc 2.0.7.  My container config is below.

In container:

container:/# mknod -m 666 /tmp/cdrom b 11 0
mknod: /tmp/cdrom: Operation not permitted

From outside:

host# lxc-device -n ripper add /dev/sr0
lxc-device: lxccontainer.c: do_add_remove_node: 3798 mknod failed
lxc-device: lxccontainer.c: do_add_remove_node: 3764 Failed to create 
note in guest

lxc-device: tools/lxc_device.c: main: 166 Failed to add /dev/sr0 to ripper.

host# ls -l /dev/sr0
brw-rw 1 root cdrom 11, 0 Nov 25 14:17 /dev/sr0

I have attempted to disable seccomp by commenting this out in
/usr/share/lxc/config/common.conf:


# Blacklist some syscalls which are not safe in privileged
# containers
#lxc.seccomp = /usr/share/lxc/config/common.seccomp

I can't get that node created and it seems like it should.

What am I missing?


Container config:

lxc.include = /usr/share/lxc/config/debian.common.conf
lxc.include = /usr/share/lxc/config/debian.userns.conf
lxc.arch = x86_64

# Container specific configuration
lxc.id_map = u 0 10 65536
lxc.id_map = g 0 10 65536
lxc.mount.auto = proc:mixed sys:ro cgroup:mixed
lxc.rootfs = /containers/ripper/rootfs
lxc.rootfs.backend = dir
lxc.utsname = ripper

lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:80:78:fc

lxc.aa_profile = lxc-container-default-with-mounting
#lxc.aa_profile = unconfined

lxc.mount.entry = /dev/bus/usb/001 dev/bus/usb/001  none 
bind,optional,create=dir


# lxc.cgroup.devices.allow = typeofdevice majornumber:minornumber rwm
lxc.cgroup.devices.allow = b 11:* rwm


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] [Feature requests] lxc launch --description "This container runs HTTPd frontend"

2017-09-04 Thread John Doe
Hello,


I do not know if this is the correct place to ask this, but it would be
great to be able to attach a description to containers created with LXD
(lxc launch). This description could be printed out afterwards when
listing container characteristics.


This could be invoked with an optional "description" or "comment" tag
appended to the "lxc launch" command.


As the number of containers grows it becomes difficult to track them all;
this feature could help.
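
In the meantime, one workaround (a sketch; the key name is arbitrary) is to
stash a description in a free-form user.* config key:

$ lxc config set mycontainer user.comment "This container runs HTTPd frontend"
$ lxc config get mycontainer user.comment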

Regards,

J.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] suddenly I cannot start lxcs due to cgroup error

2017-07-03 Thread John
I have been running lxcs for over a year without issues on an Odroid-C2
running Arch ARM (aarch64). Recently, I am unable to start any of the
containers due to errors around cgroups. This happened today after an
update in which the packages below got updated, but note that downgrading
them back to the last-good versions and rebooting did not fix the problem.
I am not finding anything contemporary when googling for causes. Advice is
appreciated.


libsystemd (232-8 -> 233-6)
systemd (232-8 -> 233-6)
device-mapper (2.02.171-1 -> 2.02.172-1) 
python2 (2.7.13-2 -> 2.7.13-3)
systemd-sysvcompat (232-8 -> 233-6)


Below is an example of the error when starting in foreground mode:


# lxc-start -n base-odroid64 -F
systemd 233 running in system mode. (+PAM -AUDIT -SELINUX -IMA -APPARMOR +SMACK 
-SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID 
+ELFUTILS +KMOD +IDN default-hierarchy=hybrid)
Detected virtualization lxc.
Detected architecture arm64.

Welcome to Arch Linux ARM!

Set hostname to .
Cannot determine cgroup we are running in: No medium found
Failed to allocate manager object: No medium found
[!!] Failed to allocate manager object, freezing.
Freezing execution.
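
In case it helps anyone diagnose the same thing, comparing what the host
and the container see of the cgroup hierarchy is a reasonable first step
(example commands only):

host# mount | grep cgroup
host# cat /proc/self/cgroup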
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] LXD enters failed state when auto_update is on

2017-06-30 Thread John Seland
I was having an issue with lxd using an image from the latest daily
build of artful 17.10, in that lxd entered a failed state when starting. If
the image's auto_update was set to 1, lxd would enter a failed state when
trying to download from https://cloud-images.ubuntu.com/daily.

The issue occurred after updating lxd from 2.14 to 2.15.
As lxd would not start, I had to edit lxd.db directly. Disabling
auto_update using sqlite3 fixed the issue: sqlite> update images set
auto_update=0 where id=8;


Lxd is running OK now but where lies the problem?
Is it the auto_update? Is it the daily image download?
Maybe using a daily build 17.10 image is not a good idea?
Is the default for auto_update enabled or disabled?

The listing from my other hosts list the following:
H1
17.04 Auto update: disabled
17.10 Auto update: disabled (was enabled and caused a problem)
H2
16.04 Auto update: disabled
16.10 Auto update: enabled
17.04 Auto update: enabled
H3
16.10 Auto update: enabled

Host: Ubuntu 17.04, Linux 4.10.0-26-generic, LXD 2.15
From the log files:
alias=17.10 lvl=info msg="Downloading image" 
server=https://cloud-images.ubuntu.com/daily t=2017-06-30T07:59:33+0200


juni 30 07:29:26 control lxd[4413]: panic: runtime error: invalid memory 
address or nil pointer dereference
juni 30 07:29:26 control lxd[4413]: [signal SIGSEGV: segmentation 
violation code=0x1 addr=0x0 pc=0x564206c1d15b]

juni 30 07:29:26 control lxd[4413]: goroutine 271 [running]:
juni 30 07:29:26 control lxd[4413]: panic(0x564206dc8960, 0xc42000e070)
juni 30 07:29:26 control lxd[4413]: 
/usr/lib/go-1.7/src/runtime/panic.go:500 +0x1a1
juni 30 07:29:26 control lxd[4413]: 
github.com/lxc/lxd/shared/cancel.CancelableDownload.func1(0x0, 
0xc420420a20, 0xc4200c42d0, 0xc42
juni 30 07:29:26 control lxd[4413]: 
/build/lxd-fUvRgH/lxd-2.15/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/shared/cancel/can
juni 30 07:29:26 control lxd[4413]: created by 
github.com/lxc/lxd/shared/cancel.CancelableDownload
juni 30 07:29:26 control lxd[4413]: 
/build/lxd-fUvRgH/lxd-2.15/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/shared/cancel/can
juni 30 07:29:26 control systemd[1]: lxd.service: Main process exited, 
code=exited, status=2/INVALIDARGUMENT


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Using lxc-copy snapshots with overlayfs

2017-06-29 Thread John
I have a linux container "base" that I snapshot using /usr/bin/lxc-copy for 
various containers using overlayfs like so: lxc-copy -n base -N snapshot1 -M -s 
-B overlayfs
Can I start the "base" container while the snapshots are running?  My goal is 
to update packages in the base image without disturbing the overlayfs clones.  
I read in some guides on this topic that the readonly base image should not be 
running while clones are running.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Am I misusing LXCs?

2017-03-30 Thread John Lewis
It is traditional LXC because LXD wasn't out when I set it up
originally.  I won't build the packages for LXD if I am not even using
it properly.

I direct incoming connections using iptables with both the the host and
the virtual router.

I am extremely confident about moving my installation. I will use
Ansible for the provisioning and the configuration. I will install all
of the packages I need on a simple VPS. I can still use cgroups to
control the resource usage of the processes. It will be moderately
easier for me to secure because it is easy to see where everything is
and what state everything is in. 

I back up the VPS with rsnapshot running on a host that I have physical
access to, and I rotate the backup drive to another location.
The LXCs are disk images.

Could you elaborate on separating data from services?

On Thu, 2017-03-30 at 23:07 +0300, Simos Xenitellis wrote:
> Is that the traditional LXC or is it LXD/LXC containers?
> I have a similar set-up (the latter, with LXD/LXC) and there is also a
> vsftpd in the mix.
> 
> I think your question is about best practices and whether your
> installation adheres
> to some best practices.
> How do you direct incoming connections to each container? Do you use
> iptables or something else?
> If you where to migrate your installation to another VPS, how
> confident would you be to do that?
> How do you get backups? Do you take snapshots as backups?
> 
> I think that if you reach a point where you separate your data from
> the services, the management of the containers
> will become much easier and you will feel more confident with the 
> installation.
> 
> Simos
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Am I misusing LXCs?

2017-03-30 Thread John Lewis
I built an LXC network on my VPS to separate all of my personal services
from each other, however similar they are, without having to buy more
VPSes that I wouldn't utilize intensely. Both my containers and my host
are running Debian 8.

I made a container for email communications (email and PBX), two for
authentication, one for web sites, one for an SQL database, and one for
DNS/DHCP.

It was a nice learning experience, but right now, I think the setup is
annoying to maintain because this wasn't the simplest configuration I
could have used. 

Should I even use containers for this kind of thing? If I should use
containers at all, how should I use them? 

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Can unprivileged containers start from a loop device?

2017-02-08 Thread John Lewis
I am trying to build containers on my really powerful desktop and then
export them to a VPS provider who would shut off my machine if it takes
too much CPU time.


Moving one system image file is much faster and far less error prone than
moving a system's root filesystem recursively. Having to maintain
premount scripts is inconvenient compared to having LXC do it.
I can't use LXD yet because my whole environment is Debian 8.



On 02/08/2017 08:42 AM, Fajar A. Nugraha wrote:
On Wed, Feb 8, 2017 at 7:57 PM, John Lewis <oflam...@gmail.com 
<mailto:oflam...@gmail.com>> wrote:


Can unprivileged containers start from a loop device?


IMHO you should explain what you're trying to achieve, and how you 
think using a loop device will help.


I can say that "lxd uses unpriv containers by default, and it also 
creates a zfs pool on top of file as container storage by default", 
which satisfies both the "unpriv container" and "loop device" 
(somewhat) part of your question, but probably not what you're looking 
for.


--
Fajar


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Can unprivileged containers start from a loop device?

2017-02-08 Thread John Lewis
Can unprivileged containers start from a loop device?
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Risk/benefit of enabling user namespaces in the kernel for running unprivileged containers

2017-01-22 Thread John
Thanks guys, for providing the context around this.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Risk/benefit of enabling user namespaces in the kernel for running unprivileged containers

2017-01-13 Thread John




- Original Message -
> From: Serge E. Hallyn 
> To: LXC users mailing-list 
> Sent: Friday, January 13, 2017 11:20 AM
> Subject: Re: [lxc-users] Risk/benefit of enabling user namespaces in the 
> kernel for running unprivileged containers

>>  I'm unclear about several points:
>>  *Is it true that enabling CONFIG_USER_NS makes LXCs safer but at the cost 
> of decreasing security on the host?
> 
> "basically"
> 
> "decreasing security on the host" implies there are known 
> vulnerabilities or
> shortcomings which you are enabling as a tradeoff.  That's not the case.  
> Rather,
> there are so many interactions between types of resources that we keep running
> into new ways in which unanticipated interactions can lead to vulnerabilities
> when unprivileged users gain the ability to create new namespaces.
> 
> Some of the 'vulnerabilities' are pretty arguable, for instance the 
> ability
> for an unprivileged user to escape a negative acl by dropping a group, or to
> see an overmounted file in a new namespace.  But others are very serious.
> 
> When that will settle down, noone really knows.


Again, thank you for the detailed reply.  Are the nature of these sorts of 
interactions such that users require physical access or ssh access to the host 
machine in order to exploit, or can they originate from within the container?  
If it's a physical/remote access thing, no big deal assuming we do not open the 
host up to ssh, right?  If however the vector is the container itself, that's 
entirely different.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Risk/benefit of enabling user namespaces in the kernel for running unprivileged containers

2017-01-11 Thread John
From S. Graber's blog[1] and other sources, consensus is that unprivileged 
containers offer the best security from the container's perspective.  There is 
quite a discussion in an Arch Linux feature request[2] around the risks of 
enabling user namespaces in the distro default kernel as it applies to the host 
OS as I understand it.  Ultimately, the Arch developers believe that it is too 
much of a risk to implement, and this has been echoed as recently as May of 
2016[3].

I'm unclear about several points:
*Is it true that enabling CONFIG_USER_NS makes LXCs safer but at the cost of 
decreasing security on the host?
*Under what circumstances is that true if at all?
*How contemporary are the arguments against enabling this option now in 2017 
with Linux kernel v3.9.2 and lxc v2.0.6?
*Are any of the concerns valid against older kernels such as the 4.4.x series 
or the 3.14.x series?  I ask because several ARM devices use these as their 
mainline kernels.

Thanks all!

1. https://www.stgraber.org/2014/01/17/lxc-1-0-unprivileged-containers

2. https://bugs.archlinux.org/task/36969
3. https://bugs.archlinux.org/task/49337
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Numerous errors running unprivileged container on Arch Linux x86_64

2017-01-11 Thread John

> From: Fajar A. Nugraha 
>To: LXC users mailing-list  
>Sent: Wednesday, January 11, 2017 7:38 PM
>Subject: Re: [lxc-users] Numerous errors running unprivileged container on 
>Arch Linux x86_64
>
>
>It's a known openvpn-systemd-unpriv-container issue. You need to edit (or 
>overide) openvpn@.service.
>
>
>http://askubuntu.com/questions/747023/systemd-fails-to-start-openvpn-in-lxd-managed-16-04-container
>

Yes! I did not find that link despite my best efforts googling. I am able to
get openvpn up and running in the unprivileged container. Thank you very much!
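
For the archive, my understanding is that the fix from that link boils down
to a drop-in override raising the unit's LimitNPROC (a sketch; the unit
name matches my setup, yours may differ):

container# systemctl edit openvpn-server@.service

[Service]
LimitNPROC=infinity

container# systemctl restart openvpn-server@splus.service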
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Numerous errors running unprivileged container on Arch Linux x86_64

2017-01-11 Thread John





>
> From: Fajar A. Nugraha 
>To: LXC users mailing-list  
>Sent: Tuesday, January 10, 2017 10:23 PM
>Subject: Re: [lxc-users] Numerous errors running unprivileged container on 
>Arch Linux x86_64
>
>Short version: if you can get login prompt, and the system works as expected 
>(e.g. services are running, you get ip address, etc), then it's safe to ignore 
>the errors. Mostly they're just warnings due to running unprivileged.
>
>
>Some distro versions (e.g. debian jessie) requires systemd update (e.g. from 
>debian stretch packages) to work properly as unpriv container, but from what 
>you pasted, archlinux should be fine.
>


Thank you for the kind reply. My goal is to have openvpn and a LAMP stack run
from within the unprivileged container. The problem (perhaps related to my
config being incorrect) is that openvpn will not run when systemd starts it.
Interestingly, if I run openvpn as root from within the container, it runs
just fine. Is there a way to get the systemd service to run openvpn?


Error:
# systemctl status openvpn-server@splus.service
● openvpn-server@splus.service - OpenVPN service for splus
Loaded: loaded (/usr/lib/systemd/system/openvpn-server@.service; disabled; 
vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2017-01-11 19:56:49 UTC; 7s ago
Docs: man:openvpn(8)
https://community.openvpn.net/openvpn/wiki/Openvpn24ManPage
https://community.openvpn.net/openvpn/wiki/HOWTO
Process: 49 ExecStart=/usr/sbin/openvpn --status 
%t/openvpn-server/status-%i.log --status-version 2 --suppress-timestamps --co
Main PID: 49 (code=exited, status=1/FAILURE)

Jan 11 19:56:49 nw openvpn[49]: TUN/TAP device tun0 opened
Jan 11 19:56:49 nw openvpn[49]: Note: Cannot set tx queue length on tun0: 
Operation not permitted (errno=1)
Jan 11 19:56:49 nw openvpn[49]: do_ifconfig, tt->did_ifconfig_ipv6_setup=0
Jan 11 19:56:49 nw openvpn[49]: /usr/bin/ip link set dev tun0 up mtu 1500
Jan 11 19:56:49 nw openvpn[49]: openvpn_execve: unable to fork: Resource 
temporarily unavailable (errno=11)
Jan 11 19:56:49 nw openvpn[49]: Exiting due to fatal error
Jan 11 19:56:49 nw systemd[1]: openvpn-server@splus.service: Main process 
exited, code=exited, status=1/FAILURE
Jan 11 19:56:49 nw systemd[1]: Failed to start OpenVPN service for splus.
Jan 11 19:56:49 nw systemd[1]: openvpn-server@splus.service: Unit entered 
failed state.
Jan 11 19:56:49 nw systemd[1]: openvpn-server@splus.service: Failed with result 
'exit-code'.


Config:
---
lxc.include = /usr/share/lxc/config/archlinux.common.conf
lxc.include = /usr/share/lxc/config/archlinux.userns.conf
lxc.arch = x86_64
lxc.id_map = u 0 10 65536
lxc.id_map = g 0 10 65536
lxc.rootfs = /var/lib/lxc/nw/rootfs
lxc.rootfs.backend = dir
lxc.utsname = nw
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.mount.entry = /dev/net dev/net none bind,create=dir
lxc.cgroup.devices.allow = c 10:200 rwm
---
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Numerous errors running unprivileged container on Arch Linux x86_64

2017-01-10 Thread John
I setup /etc/subuid and /etc/subgid and modified /etc/lxc/default.conf to add 
the needed uid/gids:

% grep root /etc/sub*
/etc/subgid:root:10:65536
/etc/subuid:root:10:65536


% cat /etc/lxc/default.conf 
lxc.network.type = empty
lxc.id_map = u 0 10 65536
lxc.id_map = g 0 10 65536


I then created an lxc via:
# lxc-create -t download -n nw

I pulled down the archlinux current amd64 image.

This is my config:
-
# Distribution configuration
lxc.include = /usr/share/lxc/config/archlinux.common.conf
lxc.include = /usr/share/lxc/config/archlinux.userns.conf
lxc.arch = x86_64

# Container specific configuration
lxc.id_map = u 0 10 65536
lxc.id_map = g 0 10 65536
lxc.rootfs = /var/lib/lxc/nw/rootfs
lxc.rootfs.backend = dir
lxc.utsname = nw

# Network configuration
lxc.network.type = empty

-

The problem is that when I start the container, I see numerous errors relating
to systemd, and I am not sure what is missing from my config. Advice is deeply
appreciated.

# lxc-start -n nw -F

systemd 232 running in system mode. (+PAM -AUDIT -SELINUX -IMA -APPARMOR +SMACK 
-SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID 
+ELFUTILS +KMOD +IDN)
Detected virtualization lxc.
Detected architecture x86-64.

Welcome to Arch Linux!

Set hostname to .
Failed to read AF_UNIX datagram queue length, ignoring: No such file or 
directory
Failed to install release agent, ignoring: No such file or directory
[  OK  ] Listening on Journal Socket.
[  OK  ] Started Forward Password Requests to Wall Directory Watch.
[  OK  ] Listening on Process Core Dump Socket.
[  OK  ] Listening on Journal Socket (/dev/log).
[  OK  ] Listening on /dev/initctl Compatibility Named Pipe.
[  OK  ] Listening on Device-mapper event daemon FIFOs.
user.slice: Failed to reset devices.list: Operation not permitted
user.slice: Failed to set invocation ID on control group /user.slice, ignoring: 
Operation not permitted
[  OK  ] Created slice User and Session Slice.
[  OK  ] Listening on Network Service Netlink Socket.
[  OK  ] Reached target Remote File Systems.
[  OK  ] Started Dispatch Password Requests to Console Directory Watch.
[  OK  ] Reached target Encrypted Volumes.
[  OK  ] Reached target Paths.
system.slice: Failed to reset devices.list: Operation not permitted
system.slice: Failed to set invocation ID on control group /system.slice, 
ignoring: Operation not permitted
[  OK  ] Created slice System Slice.
dev-mqueue.mount: Failed to reset devices.list: Operation not permitted
dev-mqueue.mount: Failed to set invocation ID on control group 
/system.slice/dev-mqueue.mount, ignoring: Operation not permitted
Mounting POSIX Message Queue File System...
systemd-journald.service: Failed to reset devices.list: Operation not permitted
systemd-journald.service: Failed to set invocation ID on control group 
/system.slice/systemd-journald.service, ignoring: Operation not permitted
Starting Journal Service...
systemd-remount-fs.service: Failed to reset devices.list: Operation not 
permitted
systemd-remount-fs.service: Failed to set invocation ID on control group 
/system.slice/systemd-remount-fs.service, ignoring: Operation not permitted
Starting Remount Root and Kernel File Systems...
[  OK  ] Reached target Slices.
systemd-sysctl.service: Failed to reset devices.list: Operation not permitted
systemd-sysctl.service: Failed to set invocation ID on control group 
/system.slice/systemd-sysctl.service, ignoring: Operation not permitted
Starting Apply Kernel Variables...
system-container\x2dgetty.slice: Failed to reset devices.list: Operation not 
permitted
system-container\x2dgetty.slice: Failed to set invocation ID on control group 
/system.slice/system-container\x2dgetty.slice, ignoring: Operation not permitted
[  OK  ] Created slice system-container\x2dgetty.slice.
system-getty.slice: Failed to reset devices.list: Operation not permitted
system-getty.slice: Failed to set invocation ID on control group 
/system.slice/system-getty.slice, ignoring: Operation not permitted
[  OK  ] Created slice system-getty.slice.
[  OK  ] Reached target Swap.
tmp.mount: Failed to reset devices.list: Operation not permitted
tmp.mount: Failed to set invocation ID on control group 
/system.slice/tmp.mount, ignoring: Operation not permitted
Mounting Temporary Directory...
[  OK  ] Listening on LVM2 metadata daemon socket.
dev-random.mount: Failed to reset devices.list: Operation not permitted
dev-tty1.mount: Failed to reset devices.list: Operation not permitted
proc-sys-net.mount: Failed to reset devices.list: Operation not permitted
dev-tty.mount: Failed to reset devices.list: Operation not permitted
dev-zero.mount: Failed to reset devices.list: Operation not permitted
dev-full.mount: Failed to reset devices.list: Operation not permitted
dev-tty3.mount: Failed to reset devices.list: Operation not permitted
dev-urandom.mount: Failed to reset devices.list: Operation not permitted
dev-tty2.mount: Failed to reset devices.list: Operation not 

[lxc-users] Error starting systemd-tmpfiles-setup.service in unprivileged container

2017-01-09 Thread John
When I start my unprivileged container, systemd-tmpfiles-setup.service fails to 
start with the following errors per journalctl:

Jan 09 14:16:20 playtime systemd[1]: systemd-tmpfiles-setup.service: Failed to 
reset devices.list: Operation not permitted
Jan 09 14:16:20 playtime systemd[1]: systemd-tmpfiles-setup.service: Failed to 
set invocation ID on control group 
/system.slice/systemd-tmpfiles-setup.service, ignoring: Operation not permitted
Jan 09 14:16:20 playtime systemd[1]: Starting Create Volatile Files and 
Directories...
Jan 09 14:16:20 playtime systemd-tmpfiles[18]: Setting default ACL 
"u::rwx,g::r-x,g:adm:r-x,g:wheel:r-x,g:4294967295:r-x,g:4294967295:r-x,m::r-x,o::r-x"
 on /var/log/journal failed: Invalid argument
Jan 09 14:16:20 playtime systemd-tmpfiles[18]: Setting access ACL 
"u::rwx,g::r-x,g:adm:r-x,g:wheel:r-x,g:4294967295:r-x,g:4294967295:r-x,m::r-x,o::r-x"
 on /var/log/journal failed: Invalid argument
Jan 09 14:16:20 playtime systemd-tmpfiles[18]: Setting default ACL 
"u::rwx,g::r-x,g:adm:r-x,g:wheel:r-x,g:4294967295:r-x,g:4294967295:r-x,m::r-x,o::r-x"
 on /var/log/journal/838a973609414ab38d2bc4af2756cc27 failed: Invalid argument
Jan 09 14:16:20 playtime systemd-tmpfiles[18]: Setting access ACL 
"u::rwx,g::r-x,g:adm:r-x,g:wheel:r-x,g:4294967295:r-x,g:4294967295:r-x,m::r-x,o::r-x"
 on /var/log/journal/838a973609414ab38d2bc4af2756cc27 failed: Invalid argument
Jan 09 14:16:20 playtime systemd[1]: systemd-tmpfiles-setup.service: Main 
process exited, code=exited, status=1/FAILURE
Jan 09 14:16:20 playtime systemd[1]: Failed to start Create Volatile Files and 
Directories.
Jan 09 14:16:20 playtime systemd[1]: systemd-tmpfiles-setup.service: Unit 
entered failed state.
Jan 09 14:16:20 playtime systemd[1]: systemd-tmpfiles-setup.service: Failed 
with result 'exit-code'.


Can you please review my config below and suggest what I am missing?  Thank you!

lxc.rootfs = /var/lib/lxc/playtime/rootfs
lxc.utsname = playtime
lxc.arch = x86_64
lxc.include = /usr/share/lxc/config/archlinux.common.conf
lxc.rootfs.backend = dir

## for namespaces
lxc.include = /usr/share/lxc/config/archlinux.userns.conf
lxc.id_map = u 0 10 65536
lxc.id_map = g 0 10 65536


## network
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.ipv4 = 192.168.1.105/24
lxc.network.ipv4.gateway = 192.168.1.1


## mounts
lxc.mount.entry = /dev/net dev/net none bind,create=dir
lxc.mount.entry = tmpfs tmp tmpfs defaults
lxc.mount.entry = /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry = /dev/snd dev/snd none bind,optional,create=dir
lxc.mount.entry = /tmp/.X11-unix tmp/.X11-unix none bind,optional,create=dir
lxc.mount.entry = /dev/video0 dev/video0 none bind,optional,create=file

lxc.cgroup.devices.allow = c 10:200 rwm
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Running an unprivileged container through systemd as root rather than as a user

2017-01-09 Thread John
I would like to call the systemd unit lxc@.service to run an unprivileged 
container that I created as the root user rather than as a system user. Does 
doing so present any security concerns?

For reference, I created the container like this:


1) Added the following to /etc/lxc/default.conf
 lxc.id_map = u 0 10 65536
 lxc.id_map = g 0 10 65536
2) Created /etc/subgid and /etc/subuid (both 644) that both contain the 
following line:

 root:10:65536
3) as root, ran `lxc-create -n unprivileged -t download`
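
For reference, by "call the systemd unit" I mean simply this (a sketch
using the lxc@.service template shipped with the distro packages):

# systemctl start lxc@unprivileged.service
# systemctl enable lxc@unprivileged.service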
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] question about lxc-copy snapshots

2017-01-09 Thread John
>To: "lxc-users@lists.linuxcontainers.org" 
> 
>Sent: Sunday, January 8, 2017 7:36 PM
>Subject: question about lxc-copy snapshots
> 
>
>
>I have a base image that I snapshot for my container.  My question is: can I 
>start the base image while the snapshots are running?  My goal is to update 
>the packages in the base image without disturbing the overlayfs clones.
>

>

Forgot to show the syntax invocation:  lxc-copy -n base -N snapshot1 -M -s -B 
overlayfs

So this leads to "snapshot1" running with base as the underlying read-only
root. Is it OK to run and to update packages in "base" while "snapshot1" is
running? I read in some guides on this topic that the readonly base image
should not be running while clones are running. Thanks!
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] question about lxc-copy snapshots

2017-01-08 Thread John
I have a base image that I snapshot for my container.  My question is: can I 
start the base image while the snapshots are running?  My goal is to update the 
packages in the base image without disturbing the overlayfs clones.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] OpenVPN server in a container... can connect but no webpages load

2016-12-29 Thread John
Sorry for the post, the problem was in my lxc configuration.  

 
  From: Idafe Houghton <idafe.hough...@gmail.com>
 To: LXC users mailing-list <lxc-users@lists.linuxcontainers.org> 
 Sent: Wednesday, December 28, 2016 9:54 PM
 Subject: Re: [lxc-users] OpenVPN server in a container... can connect but no 
webpages load
   
Any feedback is welcome.

Best regards.
2016-12-29 3:45 GMT+01:00 Idafe Houghton <idafe.hough...@gmail.com>:

Or else you should enable  proxy_arp=1 to your bridge interface.

Have you checked that you can go outside internet from within your container? 
(without all the vpn thing?)
2016-12-29 3:39 GMT+01:00 Idafe Houghton <idafe.hough...@gmail.com>:

What I may say, may seem stupid, but just to make sure...

May you tell us your NATting tables?

Thanks.
2016-12-27 21:13 GMT+01:00 John <da_audioph...@yahoo.com>:

Goal: I currently have standalone box running openvpn that is correctly 
configured and works.  My goal is to move that to a container.


Problem: I can connect to the openvpn server in the container but I cannot load 
webpages, they just timeout. I must not have something configured correctly.

I have a very basic setup without a firewall currently (I will add ufw once I 
verify function without it):


1) Host OS: Arch Linux x86_64. I have a netctl loading br0 (see below).
2) LXC: I created a basic lxc with just base and openvpn.  I copied the 
contents of /etc/openvpn/* from the functional system to the lxc's /etc/openvpn.
3) I am forwarding port 443 (which is what I am running openvpn on, to the 
internal IP of the container).

My netctl bridge profile on the host OS, /etc/netctl/bridge:

=
Description='lxc bridge'
Interface=br0
Connection=bridge
BindsToInterfaces=('eth0')
IP=dhcp


Output of `ip a` on the host OS:
=
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 4096 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever


2: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc fq_codel
master br0 state UP group default qlen 1000
link/ether 00:1e:06:33:59:e7 brd ff:ff:ff:ff:ff:ff
inet6 fe80::21e:6ff:fe33:59e7/64 scope link
valid_lft forever preferred_lft forever


3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
group default
link/ether 00:1e:06:33:59:e7 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.245/24 brd 192.168.1.255 scope global br0
valid_lft forever preferred_lft forever
inet6 fe80::21e:6ff:fe33:59e7/64 scope link
valid_lft forever preferred_lft forever


Output of `ip r` on the host OS:
=
default via 192.168.1.1 dev br0 src 192.168.1.245 metric 203
192.168.1.0/24 dev br0 proto kernel scope link src 192.168.1.245 metric 203


Output of `sysctl net.ipv4.conf | grep forward` on the host OS:
=
net.ipv4.conf.all.forwarding = 1
net.ipv4.conf.all.mc_forwarding = 0
net.ipv4.conf.br0.forwarding = 1
net.ipv4.conf.br0.mc_forwarding = 0
net.ipv4.conf.default.forwarding = 1
net.ipv4.conf.default.mc_forwarding = 0
net.ipv4.conf.eth0.forwarding = 1
net.ipv4.conf.eth0.mc_forwarding = 0
net.ipv4.conf.lo.forwarding = 1
net.ipv4.conf.lo.mc_forwarding = 0



My container config, /var/lib/lxc/base/config:

=
lxc.rootfs = /var/lib/lxc/base/rootfs
lxc.rootfs.backend = dir
lxc.utsname = base
lxc.arch = x86_64
lxc.include = /usr/share/lxc/config/archlinux.common.conf

## network
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.ipv4 = 192.168.1.246/24
lxc.network.ipv4.gateway = 192.168.1.1

## systemd within the lxc
lxc.autodev = 1
lxc.hook.autodev = /var/lib/lxc/base/autodev
lxc.pts = 1024
lxc.kmsg = 0

## for openvpn
lxc.cgroup.devices.allow = c 10:200 rwm
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users






___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

   
 ___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] OpenVPN server in a container... can connect but no webpages load

2016-12-27 Thread John
Goal: I currently have standalone box running openvpn that is correctly 
configured and works.  My goal is to move that to a container.


Problem: I can connect to the openvpn server in the container but I cannot load 
webpages, they just timeout. I must not have something configured correctly.

I have a very basic setup without a firewall currently (I will add ufw once I 
verify function without it):


1) Host OS: Arch Linux x86_64. I have a netctl loading br0 (see below).
2) LXC: I created a basic lxc with just base and openvpn.  I copied the 
contents of /etc/openvpn/* from the functional system to the lxc's /etc/openvpn.
3) I am forwarding port 443 (which is what I am running openvpn on, to the 
internal IP of the container).

My netctl bridge profile on the host OS, /etc/netctl/bridge:

=
Description='lxc bridge'
Interface=br0
Connection=bridge
BindsToInterfaces=('eth0')
IP=dhcp


Output of `ip a` on the host OS:
=
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 4096 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host 
valid_lft forever preferred_lft forever


2: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
link/ether 00:1e:06:33:59:e7 brd ff:ff:ff:ff:ff:ff
inet6 fe80::21e:6ff:fe33:59e7/64 scope link 
valid_lft forever preferred_lft forever


3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 00:1e:06:33:59:e7 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.245/24 brd 192.168.1.255 scope global br0
valid_lft forever preferred_lft forever
inet6 fe80::21e:6ff:fe33:59e7/64 scope link 
valid_lft forever preferred_lft forever


Output of `ip r` on the host OS:
=
default via 192.168.1.1 dev br0 src 192.168.1.245 metric 203 
192.168.1.0/24 dev br0 proto kernel scope link src 192.168.1.245 metric 203 


Output of `sysctl net.ipv4.conf | grep forward` on the host OS:
=
net.ipv4.conf.all.forwarding = 1
net.ipv4.conf.all.mc_forwarding = 0
net.ipv4.conf.br0.forwarding = 1
net.ipv4.conf.br0.mc_forwarding = 0
net.ipv4.conf.default.forwarding = 1
net.ipv4.conf.default.mc_forwarding = 0
net.ipv4.conf.eth0.forwarding = 1
net.ipv4.conf.eth0.mc_forwarding = 0
net.ipv4.conf.lo.forwarding = 1
net.ipv4.conf.lo.mc_forwarding = 0



My container config, /var/lib/lxc/base/config:

=
lxc.rootfs = /var/lib/lxc/base/rootfs
lxc.rootfs.backend = dir
lxc.utsname = base
lxc.arch = x86_64
lxc.include = /usr/share/lxc/config/archlinux.common.conf

## network
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.ipv4 = 192.168.1.246/24
lxc.network.ipv4.gateway = 192.168.1.1

## systemd within the lxc
lxc.autodev = 1
lxc.hook.autodev = /var/lib/lxc/base/autodev
lxc.pts = 1024
lxc.kmsg = 0

## for openvpn
lxc.cgroup.devices.allow = c 10:200 rwm
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Containers don't start with LXC 2.0.6 on Arch Linux

2016-12-22 Thread John
Raised issue https://github.com/lxc/lxc/issues/1363

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Containers don't start with LXC 2.0.6 on Arch Linux

2016-12-21 Thread John
On 21/12/16 16:13, Pavol Cupka wrote:
> edit the line
> lxc.tty = 1   #allow this many ttys
> and remove the comment
> so it will look like this
> lxc.tty = 1
> 
> enjoy :)
> 

Ok, yes, that resolves the problem, but I also had to remove the comment
from the "lxc.pts" line. Although that fixes the immediate issue, it
does raise the question of why this is the case.
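
(For reference, the two affected lines parse cleanly under 2.0.6 once the
trailing comments are moved onto their own lines, like the rest of the file:)

# allow this many ttys
lxc.tty = 1
# private instance of /dev/pts
lxc.pts = 1024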

I have many more lines in the file with appended comments and they
appear to be fine. I wonder what has changed since 2.0.4 to make
comments in some situations break the parser.

Would you agree that this is more of a work-around than a fix?

Should I raise a bug report?

Thanks for looking at this with me.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Containers don't start with LXC 2.0.6 on Arch Linux

2016-12-21 Thread John
On 21/12/16 14:45, Pavol Cupka wrote:
> so the containers restarted after upgrade to 2.0.6
> 
> do you mind pasting your config?
> 

Sure, here is a config file. It is one of many. None work under 2.0.6
but all work under 2.0.4. I haven't modified these configs in a couple
of years because, until now, they've worked fine for my needs. There may
now be better ways to do things than what I have done here :)

# Use autodev to be compatible with systemd
lxc.autodev = 1
lxc.hook.autodev = /srv/lxc/nitrogen/host/etc/lxc/autodev
# hostname
lxc.utsname = nitrogen
#
# network
# if the network is not defined then the container
# will be able to use the host's network
lxc.network.type = veth
#lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.mtu = 1500
lxc.network.hwaddr = 12:34:0A:00:C8:07
# restrict capabilities (security) see "man capabilities"
lxc.cap.drop = sys_module
#lxc.cap.drop = sys_admin
# only explicit device access
lxc.cgroup.devices.deny = a
#
# Memory Devices
lxc.cgroup.devices.allow = c 1:3 rwm # /dev/null  null stream
lxc.cgroup.devices.allow = c 1:5 rwm # /dev/zero  zero stream
lxc.cgroup.devices.allow = c 1:7 rwm # /dev/full  full stream
lxc.cgroup.devices.allow = c 1:8 rwm # /dev/urandom   blocking random stream
lxc.cgroup.devices.allow = c 1:9 rwm # /dev/randomnon blocking stream
#
# Terminals
lxc.tty = 1   #allow this many ttys
lxc.pts = 1024   #private instance
of /dev/pts
lxc.cgroup.devices.allow = c 4:0 rwm # /dev/tty0  current virtual
terminal
lxc.cgroup.devices.allow = c 5:0 rwm # /dev/tty   current tty device

lxc.cgroup.devices.allow = c 5:1 rwm # /dev/console   system console
lxc.cgroup.devices.allow = c 5:2 rwm   # /dev/ptmxpseudo terminal
creator
lxc.cgroup.devices.allow = c 136:* rwm # /dev/pts/*   psuedo terminal slaves
#
# root filesystem
lxc.rootfs = /srv/lxc/nitrogen
# bind mount the host's pacman cache so container uses the same cache
# rather than wasting time downloading packages already downloaded.
lxc.mount.entry = /var/cache/pacman/pkg
/srv/lxc/nitrogen/var/cache/pacman/pkg none rw,bind 0 0
# Build files
lxc.mount.entry = /dev/platters/build /srv/lxc/nitrogen/home/build ext4
defaults 0 0
# Allow access to LVM filesystem
lxc.cgroup.devices.allow = b 254:* rwm # /dev/mapper/* LVM partitions
# transfer
lxc.mount.entry = /srv/transfer /srv/lxc/nitrogen/srv/transfer none
rw,bind 0 0
# 32 bit schroot
lxc.mount.entry = /srv/lxc/nitrogen32 /srv/lxc/nitrogen/opt/nitrogen32
none rw,bind 0 0
lxc.rootfs = /srv/lxc/nitrogen



When I start this config on 2.0.6 I get this output:


lxc@nitrogen.service - LXC Container nitrogen
   Loaded: loaded (/etc/systemd/system/lxc@.service; enabled; vendor
preset: disabled)
   Active: failed (Result: exit-code) since Wed 2016-12-21 15:06:46 GMT;
1min 19s ago
  Process: 10158 ExecStop=/usr/bin/lxc-stop -n %i (code=exited,
status=1/FAILURE)
  Process: 10153 ExecStart=/usr/bin/screen -dmS systemd-%i
/usr/bin/lxc-start -F -n %i (code=exited, status=0/SUCCESS)
 Main PID: 10154 (code=exited, status=0/SUCCESS)

Dec 21 15:06:46 hydrogen systemd[1]: Starting LXC Container nitrogen...
Dec 21 15:06:46 hydrogen systemd[1]: Started LXC Container nitrogen.
Dec 21 15:06:46 hydrogen lxc-stop[10158]: lxc-stop: parse.c:
lxc_file_for_each_line: 57 Failed to parse config: lxc.tty = 1
#allow this many tty
Dec 21 15:06:46 hydrogen lxc-stop[10158]: Error opening container
Dec 21 15:06:46 hydrogen systemd[1]: lxc@nitrogen.service: Control
process exited, code=exited status=1
Dec 21 15:06:46 hydrogen systemd[1]: lxc@nitrogen.service: Unit entered
failed state.
Dec 21 15:06:46 hydrogen systemd[1]: lxc@nitrogen.service: Failed with
result 'exit-code'.



If I try to start it without systemd the result is the same:

$ sudo /usr/bin/lxc-start -F -n nitrogen
lxc-start: parse.c: lxc_file_for_each_line: 57 Failed to parse config:
lxc.tty = 1   #allow this many ttys

lxc-start: tools/lxc_start.c: main: 279 Failed to create lxc_container


The containers start without issue after downgrading like this:

$ sudo pacman -U /var/cache/pacman/pkg/lxc-1:2.0.4-2-x86_64.pkg.tar.xz




Let me know if I can provide anything else.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] How to apply commands in howtos - macvlan and disk passthrough

2016-12-21 Thread John Gubert

Hi Pavol,
thanks for the link. I did some testing with the out-of-the-box setup of 
ubuntu (removed root:1000:1), created two containers and passed the 
same host directory through to both of them, then I created the same 
users in the same order on both containers:

root(1000)
neuer(1001)
zweiter(1002)

This seems to work: when I create files inside this folder on one 
container as neuer, I can only read them as neuer on the other container 
and vice versa.
I would assume that as soon as I create the users in a different order, 
zweiter might become 1001 and neuer 1002, and therefore files created by 
neuer in one container would be seen as files created by zweiter in the 
other, right? On the host, all files are seen as 101001 or 101001 anyway.
I would like to go ahead and use this setup for my home server, to store 
media/backups and run a file server in one container and other tasks in 
another. Is this setup stable enough if I set it up as described above?


This is my lxc config; is there anything I should change?

  disktest:
path: /testdisk
source: /home/me/testdisk
type: disk
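
(For reference, a device stanza like that is what a command along these lines
produces; the container name here is only a placeholder:)

lxc config device add <container> disktest disk source=/home/me/testdisk path=/testdisk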

kind regards,
John

Am 21.12.2016 um 15:04 schrieb Pavol Cupka:

some of your questions are answered here
https://wiki.gentoo.org/wiki/LXD#Configure_subuid.2Fsubgid

answering to the list is fine

On Wed, Dec 21, 2016 at 1:34 PM, John Gubert <john.gub...@web.de 
<mailto:john.gub...@web.de>> wrote:


Hi Tycho,

thank you for your fast response.

My id on the host is indeed 1000. I read your blog article and
then had
a look at /etc/subuid:

before:
"me@host:~$ cat /etc/subuid
lxd:10:65536
root:10:65536
me:165536:65536"

after:
"me@host:~$ cat /etc/subuid
lxd:10:65536
root:10:65536
me:165536:65536
root:1000:1"

root seems to be already set up, maybe this is due to lxd being
installed on ubuntu 16.04? It would be really helpful if you could
explain to me what the mapping defined in this file really does.
Does it
make a difference if I add your line, or use the one already
there? How
does this file use the numbers (10 and 65536)? Does 1000:1 tell
ubuntu to map the id 1 to 1, if so, what does 10:65536 mean? Add
65536 to the 10? If there is a user called "me" in the container,
does a line "me:1000:1" work as well?

    I appreciate any help.

with kind regards,
John

P.S.:
I answered to the mailing list, is this the right way to do it, or
should I answer to you directly?



Am 20.12.2016 um 22:52 schrieb Tycho Andersen:

Hi John,

On Tue, Dec 20, 2016 at 10:39:07PM +0100, john.gub...@web.de
<mailto:john.gub...@web.de> wrote:

Hello,
 I have a directory on my host system and want to
create several containers
with the same users inside. I would like to pass the
directory through to
each container and allow the users to write and read
on it. The network
connection should be done using macvlan.
The howtos I have read so far show how to set up lxd,
which works very
well on my 16.04 host. Starting a container works out
of the box as
unprivileged user as well.
 My questions:
Is it even possible to share one directory on the host
with several
container?
All the howtos I could find mention some commands,
that need to be
applied, but they do not tell me about the commands I
need to type in to
make it work:

"That means you can create a container with the
following configuration:

lxc.id_map = u 0 10 65536

  lxc.id_map = g 0 10 65536"

There is a big list of possible options on github, but
where does it tell
how to apply them?
 Does someone know a detailed howto, that
describes a similar setup to
mine?

http://tycho.ws/blog/2016/12/uidmap.html
<http://tycho.ws/blog/2016/12/uidmap.html> is a blog post I
wrote a
while ago talking about how to set this up with your home
directory.
You can mimic the settings for whatever user map you want, though.

Cheers,

Tycho

Every time I read something, I feel like I'm missing
something important,
because I could not find a coherent compendium of
possible options on how
    to do something.
 kind regards,
John
___
lxc-users mailing list
   

Re: [lxc-users] How to apply commands in howtos - macvlan and disk passthrough

2016-12-21 Thread John Gubert

Hi Tycho,

thank you for your fast response.

My id on the host is indeed 1000. I read your blog article and then had
a look at /etc/subuid:

before:
"me@host:~$ cat /etc/subuid
lxd:10:65536
root:10:65536
me:165536:65536"

after:
"me@host:~$ cat /etc/subuid
lxd:10:65536
root:10:65536
me:165536:65536
root:1000:1"

root seems to be already set up, maybe this is due to lxd being
installed on ubuntu 16.04? It would be really helpful if you could
explain to me what the mapping defined in this file really does. Does it
make a difference if I add your line, or use the one already there? How
does this file use the numbers (10 and 65536)? Does 1000:1 tell
ubuntu to map the id 1 to 1, if so, what does 10:65536 mean? Add
65536 to the 10? If there is a user called "me" in the container,
does a line "me:1000:1" work as well?
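
(For context on the format: each /etc/subuid entry is name:first:count, meaning
the named user may map count contiguous ids starting at host id first. The
values below are only illustrative, not taken from this thread:)

# /etc/subuid, one entry per line: name:first:count
root:100000:65536   # root may map 65536 ids starting at host id 100000
root:1000:1         # root may additionally map the single host id 1000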

I appreciate any help.

with kind regards,
John

P.S.:
I answered to the mailing list, is this the right way to do it, or
should I answer to you directly?


Am 20.12.2016 um 22:52 schrieb Tycho Andersen:

Hi John,

On Tue, Dec 20, 2016 at 10:39:07PM +0100, john.gub...@web.de wrote:

Hello,
 
I have a directory on my host system and want to create several containers

with the same users inside. I would like to pass the directory through to
each container and allow the users to write and read on it. The network
connection should be done using macvlan.
The howtos I have read so far show how to set up lxd, which works very
well on my 16.04 host. Starting a container works out of the box as
unprivileged user as well.
 
My questions:

Is it even possible to share one directory on the host with several
container?
All the howtos I could find mention some commands, that need to be
applied, but they do not tell me about the commands I need to type in to
make it work:
 


"That means you can create a container with the following configuration:

lxc.id_map = u 0 10 65536

  lxc.id_map = g 0 10 65536"

There is a big list of possible options on github, but where does it tell
how to apply them?
 
Does someone know a detailed howto that describes a similar setup to

mine?

http://tycho.ws/blog/2016/12/uidmap.html is a blog post I wrote a
while ago talking about how to set this up with your home directory.
You can mimic the settings for whatever user map you want, though.

Cheers,

Tycho


Every time I read something, I feel like I'm missing something important,
because I could not find a coherent compendium of possible options on how
to do something.
 
kind regards,

John
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users




___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Containers don't start with LXC 2.0.6 on Arch Linux

2016-12-21 Thread John
On 21/12/16 10:37, Pavol Cupka wrote:
> any strange invisible characters on that line
> try to make a minimal config by typing everything by hand
Nothing strange - the configs were hand-typed originally. They've been in
place for a couple of years at least without any problems. They still
work if I use 2.0.4, so I'm left wondering what's different between 2.0.4
and 2.0.6 with respect to config parsing.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Containers don't start with LXC 2.0.6 on Arch Linux

2016-12-21 Thread John
On 21/12/16 10:24, Pavol Cupka wrote:
> what happens when you comment out that line?
It makes no difference. That was the first thing I tried :)

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] How to apply commands in howtos - macvlan and disk passthrough

2016-12-20 Thread john . gubert
Hello,

 

I have a directory on my host system and want to create several containers with the same users inside. I would like to pass the directory through to each container and allow the users to read and write to it. The network connection should be done using macvlan.

The howtos I have read so far show how to set up lxd, which works very well on my 16.04 host. Starting a container works out of the box as an unprivileged user as well.

 

My questions:

Is it even possible to share one directory on the host with several containers?

All the howtos I could find mention some commands that need to be applied, but they do not tell me the commands I need to type in to make it work:

 


"That means you can create a container with the following configuration:

lxc.id_map = u 0 10 65536

lxc.id_map = g 0 10 65536"

There is a big list of possible options on github, but where does it tell how to apply them?
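
(For reference, lxc.* options such as the lxc.id_map lines above go into a
container's configuration file when using plain LXC, typically
~/.local/share/lxc/<name>/config, or ~/.config/lxc/default.conf so that newly
created containers inherit them. The range below is only the common default,
not a value taken from this thread:)

lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536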

 

Does someone know a detailed howto that describes a similar setup to mine?

 

Every time I read something, I feel like I'm missing something important, because I could not find a coherent compendium of possible options on how to do things.

 

kind regards,

John

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Containers don't start with LXC 2.0.6 on Arch Linux

2016-12-20 Thread John
After a recent update to my Arch system, no containers will start.
Instead I get this:

lxc-start: parse.c: lxc_file_for_each_line: 57 Failed to parse
config: lxc.tty = 1
lxc-start: tools/lxc_start.c: main: 253 Failed to create lxc_container

It happens with all containers. Downgrading lxc (1:2.0.6-2 =>
1:2.0.4-2) fixes the problem.

Linux myhost 4.8.13-1-ARCH #1 SMP PREEMPT Fri Dec 9 07:24:34 CET 2016
x86_64 GNU/Linux

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Can not stop lxc with lxc-stop

2016-09-27 Thread John Y.
The same as lxc-stop -n testlxc


Thank you for your help.
John

2016-09-27 14:21 GMT+08:00 Marat Khalili <m...@rqc.ru>:

> What's with:
>
> # lxc-stop -n testlxc -k
>
> ?
> --
>
> With Best Regards,
> Marat Khalili
>
> On 27/09/16 05:46, John Y. wrote:
>
> I create a container with lxc 2.0.4.
> lxc-stop hangs up when I want to stop it.
>
> #lxc-stop -n testlxc
>
> But it may already stoped, because I exited from lxc auto automatically
> and lxc-attach failed.:
>
> #lxc-attach -n testlxc
> lxc-attach: attach.c: lxc_attach_to_ns: 257 No such file or directory -
> failed to open '/proc/23193/ns/mnt'
> lxc-attach: attach.c: lxc_attach: 948 failed to enter the namespace
>
> And lxc-stop still hanged without any output.
>
> Anyone know why?
>
> Thanks,
> John
>
>
>
>
>
> ___
> lxc-users mailing 
> listlxc-users@lists.linuxcontainers.orghttp://lists.linuxcontainers.org/listinfo/lxc-users
>
>
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Can not stop lxc with lxc-stop

2016-09-26 Thread John Y.
I created a container with lxc 2.0.4.
lxc-stop hangs when I try to stop it.

#lxc-stop -n testlxc

But it may already be stopped, because I was exited from the container automatically and
lxc-attach failed:

#lxc-attach -n testlxc
lxc-attach: attach.c: lxc_attach_to_ns: 257 No such file or directory -
failed to open '/proc/23193/ns/mnt'
lxc-attach: attach.c: lxc_attach: 948 failed to enter the namespace

And lxc-stop still hangs without any output.

Anyone know why?

Thanks,
John
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxc-ls show containers which does not exist with lxc 1.0.6

2016-09-19 Thread John Y.
before creating a container:

#lxc-ls
shows nothing

#lxc-ls --active
test2  test2-1 test2-2  test2-3  test2-4

create lxc:
#lxc-start -n test2 -f /root/yaowj/lxc.xml -d

#lxc-ls --active
test2  test2-1 test2-2  test2-3  test2-4  test2-5

I created test2, but it shows test2-5.
But I can get info by using test2.

#lxc-info -n test2
Name:   test2
State:  RUNNING
PID:32251
CPU use:1.79 seconds
BlkIO use:  0 bytes
Memory use: 1.25 MiB
KMem use:   0 bytes

#ps -ef | grep test2
root 32246 1  0 23:41 ?00:00:00 lxc-start -n test2 -f
/root/yaowj/lxc.xml -d


1. Why are some containers still listed by lxc-ls --active when I use `lxc-stop -n test2 -k`
(or `kill -9 pid`) to stop them?
2. How can I remove this stale container info?

Thanks,
John
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] [SOLVED] Will I need pre-mount hooks if redesign my lxc file systems?

2016-07-08 Thread John Lewis
On 07/08/2016 10:46 PM, Serge E. Hallyn wrote:
> Quoting John Lewis (oflam...@gmail.com):
>> I have a filesystem like this inside of a filesystem image
>>
>>/
>>
>> lost+found
>>
>>  rootfs/[Linux root file system directories]
>>
>>
>> If I change it to the following, will I have to use premount hooks?
>>
>>   / [Linux root file system directories]
>>
>>   lost+found
> nope then you should be able to use it directly.
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

Thanks.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Will I need pre-mount hooks if redesign my lxc file systems?

2016-07-07 Thread John Lewis
I have a filesystem like this inside of a filesystem image

   /

lost+found

 rootfs/[Linux root file system directories]


If I change it to the following, will I have to use premount hooks?

  / [Linux root file system directories]

  lost+found


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Network space visibility in containers

2016-07-06 Thread John Lewis
Oh, those are the tap interfaces. LXC doesn't hide that from the host. I
am not sure if it should.

On 07/06/2016 01:36 PM, st...@linuxsuite.org wrote:
>> Try defining lxc.network.name and see if it fixes it.
>>
>   version 1.08
>
>Nope.
>
> [root@admn-101 ~]# ifconfig
> admn101-1 Link encap:Ethernet  HWaddr 26:3C:0B:06:A2:AF
>   inet addr:10.2.3.101  Bcast:10.2.255.255  Mask:255.255.0.0
>   inet6 addr: fe80::243c:bff:fe06:a2af/64 Scope:Link
>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>   RX packets:312 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:129 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:1000
>   RX bytes:48616 (47.4 KiB)  TX bytes:26791 (26.1 KiB)
>
> admn101-4 Link encap:Ethernet  HWaddr FE:3D:09:F8:AA:AA
>   inet addr:10.5.3.101  Bcast:10.5.255.255  Mask:255.255.0.0
>   inet6 addr: fe80::fc3d:9ff:fef8:/64 Scope:Link
>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>   RX packets:6 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:1000
>   RX bytes:468 (468.0 b)  TX bytes:468 (468.0 b)
>
> admn101-5 Link encap:Ethernet  HWaddr 72:26:66:8B:0E:FB
>   inet addr:10.1.3.101  Bcast:10.1.255.255  Mask:255.255.0.0
>   inet6 addr: fe80::7026:66ff:fe8b:efb/64 Scope:Link
>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>   RX packets:10 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:1000
>   RX bytes:920 (920.0 b)  TX bytes:468 (468.0 b)
>
> loLink encap:Local Loopback
>   inet addr:127.0.0.1  Mask:255.0.0.0
>   inet6 addr: ::1/128 Scope:Host
>   UP LOOPBACK RUNNING  MTU:65536  Metric:1
>   RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:0
>   RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
>
> [root@admn-101 ~]# netstat -an
> Active Internet connections (servers and established)
> Proto Recv-Q Send-Q Local Address   Foreign Address   
>  State
> tcp0  0 0.0.0.0:25  0.0.0.0:* 
>  LISTEN
> tcp0  0 10.5.5.101:443  207.11.1.163:12508
>  SYN_RECV
> tcp0  0 10.5.5.101:443  1.39.15.205:41572 
>  SYN_RECV
> tcp0  0 10.5.5.101:443  96.53.94.194:19664
>  SYN_RECV
> tcp0  0 10.5.5.101:443  73.112.14.86:25891
>  SYN_RECV
> tcp0  0 10.5.5.101:443  96.53.94.194:19641
>  SYN_RECV
> tcp0  0 10.5.5.101:443  1.39.15.205:3458  
>  SYN_RECV
> tcp0  0 10.5.5.101:443  1.39.15.205:54481 
>  SYN_RECV
> tcp0  0 10.5.5.101:443  96.53.94.194:19608
>  SYN_RECV
> tcp0  0 10.5.5.101:443  96.53.94.194:19644
>  SYN_RECV
> tcp0  0 10.5.5.101:443  96.53.94.194:19619
>  SYN_RECV
> tcp0  0 10.5.5.101:443  1.39.15.205:57090 
>  SYN_RECV
> tcp0  0 10.5.5.101:443  1.39.15.205:1215  
>  SYN_RECV
> tcp0  0 10.5.5.101:443  172.56.42.139:38995   
>  SYN_RECV
> tcp0  0 10.5.5.101:443  96.53.94.194:19565
>  SYN_RECV
> tcp0  0 10.5.5.101:443  172.56.42.139:36355   
>  SYN_RECV
> tcp0  0 10.5.5.101:443  96.53.94.194:19532
>  SYN_RECV
> tcp0  0 10.5.5.101:443  142.27.78.252:51543   
>  SYN_RECV
> tcp0  0 10.5.5.101:443  172.56.42.139:27733   
>  SYN_RECV
> tcp0  0 10.5.5.101:443  96.53.94.194:19585
>  SYN_RECV
> tcp0  0 10.5.5.101:443  1.39.15.205:19024 
>  SYN_RECV
> tcp0  0 10.5.5.101:443  1.39.15.205:29653 
>  SYN_RECV
> tcp0  0 10.5.5.101:443  96.53.94.194:19611
>  SYN_RECV
> tcp0  0 10.5.5.101:443  89.77.132.239:45287   
>  SYN_RECV
> tcp0  0 10.5.5.101:443  96.53.94.194:19599
>  SYN_RECV
> tcp0  0 10.5.5.101:443  96.53.94.194:19629
>  SYN_RECV
> tcp0  0 10.5.5.101:443  1.39.15.205:32231 
>  SYN_RECV
> tcp0  0 10.5.5.101:443  58.11.176.101:53361   
>  SYN_RECV
> tcp0  0 10.5.5.101:443  172.56.42.139:23182   
>  SYN_RECV
> tcp0  0 10.5.5.101:443  96.53.94.194:19558

Re: [lxc-users] Network space visibility in containers

2016-07-06 Thread John Lewis
Try defining lxc.network.name and see if it fixes it.
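
(i.e. something along these lines in the container's network section; the
bridge and veth names below are only examples matching the config quoted
later in this thread:)

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br1
lxc.network.name = eth0
lxc.network.veth.pair = admn101-1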

On 07/06/2016 12:04 PM, st...@linuxsuite.org wrote:
>> How are these containers networked together? Are you using a Bridges on
>> the host or are you just bringing up new interfaces on the host?
>   I have  a bridge for each interface.  No interfaces on the host
> have
> IP's except br1. Use veth in config
>
> lxc.network.type = veth
> lxc.network.flags = up
> lxc.network.link = br1
> #lxc.network.hwaddr = fe:41:31:7f:5c:d6
> lxc.network.veth.pair = admn101-1
> lxc.network.ipv4 = 10.2.3.101/16
> lxc.network.ipv4.gateway = 10.2.1.2
>
> lxc.network.type = veth
> lxc.network.flags = up
> lxc.network.link = br4
> #lxc.network.hwaddr = fe:41:31:7f:5c:d6
> lxc.network.veth.pair = admn101-4
> lxc.network.ipv4 = 10.5.3.101/16
>
> [root@lxc100 ~]$ brctl show
> bridge name   bridge id   STP   enabled   interfaces
> br1   8000.0024e85d25ea   noadmn101-1
> 
> em1
> 
> mfs101-1
> br2   8000.0024e85d25ec   noem2
> 
> mfs101-2
> br3   8000.0024e85d25ee   noem3
> 
> mfs101-3
> br4   8000.0024e85d25f0   noadmn101-4
> 
> em4
> 
> mfs101-4
> br5   8000.00151778923c   no   admn101-5
>em5
>
>

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Network space visibility in containers

2016-07-06 Thread John Lewis
How are these containers networked together? Are you using bridges on
the host or are you just bringing up new interfaces on the host?

On 07/06/2016 10:24 AM, st...@linuxsuite.org wrote:
> Howdy!
>
>   I have a number of containers running. Is it expected that
> information about the network of other containers is "visible".. for
> example
>
> the container admn-101 has ip 10.2.3.101
>
> [root@admn-101 admn-101]# netstat -an|grep LIST
> tcp0  0 0.0.0.0:514 0.0.0.0:* 
>  LISTEN
> tcp0  0 10.2.3.101:22   0.0.0.0:* 
>  LISTEN
> tcp0  0 0.0.0.0:25  0.0.0.0:* 
>  LISTEN
> tcp0  0 :::514  :::*  
>  LISTEN
> unix  2  [ ACC ] STREAM LISTENING 69697909
> @/com/ubuntu/upstart
>
>  The other container on the host has ip 10.5.5.101
>
> [root@admn-101 admn-101]# netstat -an
> Active Internet connections (servers and established)
> Proto Recv-Q Send-Q Local Address   Foreign Address   
>  State
> tcp0  0 0.0.0.0:514 0.0.0.0:* 
>  LISTEN
> tcp0  0 10.5.5.101:443  103.14.89.19:10165
>  SYN_RECV
> tcp0  0 10.5.5.101:443  114.77.25.146:50649   
>  SYN_RECV
> tcp0  0 10.5.5.101:443  96.53.94.194:51060
>  SYN_RECV
> tcp0  0 10.5.5.101:443  96.53.94.194:51051
>  SYN_RECV
> tcp0  0 10.5.5.101:443  122.106.235.197:61016 
>  SYN_RECV
> tcp0  0 10.5.5.101:443  84.74.55.62:63064 
>  SYN_RECV
> tcp0  0 10.5.5.101:443  39.110.173.3:6985 
>  SYN_RECV
> tcp0  0 10.5.5.101:443  96.53.94.194:50958
>  SYN_RECV
> tcp0  0 10.5.5.101:443  171.99.169.231:53917  
>  SYN_RECV
> tcp0  0 10.5.5.101:443  96.53.94.194:51018
>  SYN_RECV
> tcp0  0 10.5.5.101:443  116.15.8.112:64049
>  SYN_RECV
> tcp0  0 10.5.5.101:443  71.56.250.124:58672   
>  SYN_RECV
> tcp0  0 10.2.3.101:22   0.0.0.0:* 
>  LISTEN
> tcp0  0 0.0.0.0:25  0.0.0.0:* 
>  LISTEN
> tcp0  0 10.2.3.101:22   10.2.1.2:48356
>  ESTABLISHED
> tcp0  0 :::514  :::*  
>  LISTEN
> udp0  0 0.0.0.0:514 0.0.0.0:*
> udp0  0 :::514  :::*
>
>   Why is information about 10.5.5.101 visable??? Is this expected?
> shouldn't cgroup limit this visibility??
>
> Also iptables in admn-101 logs packets from 10.5.5.101 but only
> some???
>
> [root@admn-101 admn-101]# tail -f kern
> kern.warning: Jul  6 10:22:06 admn-101 kernel:IN= OUT=eth3 SRC=10.5.5.101
> DST=52.0.92.26 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=46910 DF PROTO=TCP
> SPT=34378 DPT=443 WINDOW=14600 RES=0x00 SYN URGP=0
> kern.warning: Jul  6 10:22:06 admn-101 kernel:IN= OUT=eth3 SRC=10.5.5.101
> DST=52.7.169.28 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=49586 DF PROTO=TCP
> SPT=57832 DPT=443 WINDOW=14600 RES=0x00 SYN URGP=0
> kern.warning: Jul  6 10:22:07 admn-101 kernel:IN= OUT=eth3 SRC=10.5.5.101
> DST=52.7.169.28 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=53263 DF PROTO=TCP
> SPT=57856 DPT=443 WINDOW=4600 RES=0x0SNUG= <4>IN= OUT=eth3 SRC=10.5.5.101
> DST=52.0.92.26 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=866 DF PROTO=TCP
> SPT=34456 DPT=443 WINDOW=14600 RES=0x00 SYN URGP=0
> kern.info: Jul  6 10:22:12 admn-101 kernel:1209.6LN6 O=x0PE=x0TL6 D673D
> RT=TPST366DT43WNO=40 E=x0SNUG= <4>IN= OUT=eth3 SRC=10.5.5.101
> DST=52.7.169.28 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=60707 DF PROTO=TCP
> SPT=58190 DPT=443 WINDOW=14600 RES=0x00 SYN URGP=0
>
>
>
>
>
> root@admn-101 # ifconfig
> eth0  Link encap:Ethernet  HWaddr 52:D0:AF:B6:9D:16
>   inet addr:10.2.3.101  Bcast:10.2.255.255  Mask:255.255.0.0
>   inet6 addr: fe80::50d0:afff:feb6:9d16/64 Scope:Link
>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>   RX packets:6758 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:814 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:1000
>   RX bytes:1270156 (1.2 MiB)  TX bytes:150528 (147.0 KiB)
>
> eth1  Link encap:Ethernet  HWaddr 3E:43:D5:B7:2C:DF
>   inet addr:10.5.3.101  Bcast:10.5.255.255  Mask:255.255.0.0
>   inet6 addr: fe80::3c43:d5ff:feb7:2cdf/64 Scope:Link
>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>   RX packets:12 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:1000
> 

Re: [lxc-users] How do you mount this file as an lxc rootfs?

2016-06-25 Thread John Lewis
On 06/25/2016 06:38 PM, Serge E. Hallyn wrote:
> On Sat, Jun 25, 2016 at 03:20:08PM -0400, John Lewis wrote:
>> On 06/20/2016 11:51 AM, Serge E. Hallyn wrote:
>>> The pre-mount hook runs in the container's mount namespace but before
>>> mounting the rootfs.  So the fs you mount only shows up in the container's
>>> namespace, not on the host.  Auto-cleanup is just a nice bonus.  I would
>>> have been not entirely surprised if the loopdev remained attached, happy
>>> to see it apparently gets cleaned up.
>>>
>>> ___
>>> lxc-users mailing list
>>> lxc-users@lists.linuxcontainers.org
>>> http://lists.linuxcontainers.org/listinfo/lxc-users
>> Is the pre-start and pre-mount hooks the standard way of mounting a lxc
>> root filesystem,
> No.  But iirc you're not using it to mount the rootfs, but to mount  the
> thing which contains the rootfs.
>

Would making the rootfs the root of the filesystem image potentially
give me more options for how I mount it?
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] How do you mount this file as an lxc rootfs?

2016-06-25 Thread John Lewis
On 06/20/2016 11:51 AM, Serge E. Hallyn wrote:
> The pre-mount hook runs in the container's mount namespace but before
> mounting the rootfs.  So the fs you mount only shows up in the container's
> namespace, not on the host.  Auto-cleanup is just a nice bonus.  I would
> have been not entirely surprised if the loopdev remained attached, happy
> to see it apparently gets cleaned up.
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

Are the pre-start and pre-mount hooks the standard way of mounting an lxc
root filesystem, or is there some way to do it via the configuration
file? If there is no standard way using the config file, has anyone
developed a hook script that uses the configuration file as a directive
for the mounts?
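
(For anyone trying this, the hook approach discussed in this thread usually
boils down to something like the following sketch; the paths and file names
here are assumptions, not taken from the thread:)

# in the container config
lxc.rootfs = /var/lib/lxc/pmd/rootfs
lxc.hook.pre-mount = /var/lib/lxc/pmd/pre-mount.sh

and /var/lib/lxc/pmd/pre-mount.sh containing:

#!/bin/sh
# runs in the container's mount namespace, before the rootfs itself is mounted
mount -o loop /home/diskimg/pmd.simg /var/lib/lxc/pmd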

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] [SOLVED] How do you mount this file as an lxc rootfs?

2016-06-20 Thread John Lewis
On 06/20/2016 10:16 AM, John Lewis wrote:
> On 06/20/2016 10:04 AM, Mike Wright wrote:
>> On 06/20/2016 06:47 AM, John Lewis wrote:
>>> I have a ext4 formatted file called pmd.simg with a directory structure
>>> like this.
>>>
>>> lost+found  rootfs
>> You should be able to mount that via the loop device:
>>
>> 
>> mount pmd.simg <mountpoint> -o loop
>> 
>>
>> Then rootfs will be available at <mountpoint>/rootfs
>>
>> ___
>> lxc-users mailing list
>> lxc-users@lists.linuxcontainers.org
>> http://lists.linuxcontainers.org/listinfo/lxc-users
> That worked, thank you.
>

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] How do you mount this file as an lxc rootfs?

2016-06-20 Thread John Lewis
On 06/20/2016 10:04 AM, Mike Wright wrote:
> On 06/20/2016 06:47 AM, John Lewis wrote:
>> I have a ext4 formatted file called pmd.simg with a directory structure
>> like this.
>>
>> lost+found  rootfs
>
> You should be able to mount that via the loop device:
>
> 
> mount pmd.simg <mountpoint> -o loop
> 
>
> Then rootfs will be available at <mountpoint>/rootfs
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

That worked, thank you.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] How do you mount this file as an lxc rootfs?

2016-06-20 Thread John Lewis
I have a ext4 formatted file called pmd.simg with a directory structure
like this.

lost+found  rootfs

How do I mount it and pivot to it properly so I can start the lxc? The
rootfs directory has a debian chroot filesystem on it.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Is there anything in LXC that would prevent DHCPv6 from working?

2016-03-18 Thread John Lewis
I am using wide-dhcpv6-server and wide-dhcpv6-client in two different LXCs
with an iproute2-created bridge and lxc-created tun/tap devices, and I am
running kernel 3.16.0-4-amd64 #1 SMP. I don't have any firewall
that would block the IPv6 requests and responses on ports 546
and 547, but when I tcpdump the client interface I don't see the packets
I am looking for going out. It is probably an application issue, but I
just want to double check.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Better error logging when starting containers?

2016-02-22 Thread John Siu
I can give you the issue I filed with lxc/lxc as an example: 
https://github.com/lxc/lxc/issues/819

If you try to create a container using the following command:

lxc-create -t download -n lxc10009 -f lxc10009.conf -- -d ubuntu -r 
xenial -a amd64

and the config file contains a “lxc.roofs” line, it will fail with the following 
error:

lxc-create: lxc_create.c: main: 303 Error creating container lxc10009

I can’t determine the cause of the error even after going into the code 
(lxc_create.c, line 303).

I ended up commenting out the config file section by section, line by line, to find 
the offending line.

Another example: if you add network interfaces to an unprivileged container 
and forget to add that nic in /etc/lxc/lxc-usernet, the failure-to-start error 
messages contain very little hint about it. Even with “-l 9”, you will only get 
the following:

  lxc-start 20160222151046.765 ERRORlxc_start - start.c:lxc_spawn:1108 
- failed to create the configured network
  lxc-start 20160222151046.765 ERRORlxc_start - 
start.c:__lxc_start:1274 - failed to spawn 'lxc10001'
  lxc-start 20160222151052.301 ERRORlxc_start_ui - lxc_start.c:main:344 
- The container failed to start.
  lxc-start 20160222151052.301 ERRORlxc_start_ui - lxc_start.c:main:346 
- To get more details, run the container in foreground mode.
  lxc-start 20160222151052.301 ERRORlxc_start_ui - lxc_start.c:main:348 
- Additional information can be obtained by setting the --logfile and 
--logpriority options.

So is it a network interface error on the host? Or an error in the network line of 
the lxc10001 config file? An IP conflict? And so on. You get the idea. I 
personally wasted a lot of time on this particular one :(
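
(For reference, entries in /etc/lxc/lxc-usernet are one per line in the form
"user type bridge count"; the user and bridge names below are only an example:)

myuser veth lxcbr0 10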

Currently I take that as “growing pains”, as all the packages and technologies 
surrounding Linux containers (lxc, cgroups, systemd) are evolving rapidly. However 
I hope this can be improved faster. Currently it is difficult for an end 
user (non-developer), or even for developers not actively involved in those 
packages, to determine the cause (config error? actual bug?) from those error 
messages.

John

> On Feb 22, 2016, at 14:11, Akshay Karle <akshay.a.ka...@gmail.com> wrote:
> 
> I agree with your comments and the fact that the team is busy working lxc v2, 
> but I wanted to get a sense of whether it was a problem everyone is facing 
> and that the lxc team are aware of. I didn't look into the codebase for 
> logging yet and I'm not a C programmer (anymore), but I would like to give a 
> shot at improving the logs and hence asked for your suggestions. I will begin 
> by looking at how we can improve error logging for lxc-start at least and 
> then look at the general lxc-* commands. Thanks for the comments Bostjan, 
> I'll keep that in mind when I look through the code.
> 
> Thanks for the tip on bumping the log level for ephemeral containers John!
> 
> On Mon, Feb 22, 2016 at 4:41 AM Bostjan Skufca <bost...@a2o.si 
> <mailto:bost...@a2o.si>> wrote:
> Dear Akshay,
> 
> I do agree with you and find this behaviour a bit annoying, yet I believe 
> "patches welcome" response will follow shortly :)
> 
> On a more serious note:
> As I skimmed over LXC code a while ago, it seems LXC bails out on first error 
> that occurs. This means that implementing your suggestion would simply mean 
> keeping last error stored somewhere and displaying it before exiting 
> lxc-start itself.
> 
> This would be solution for lxc-start, which you (and I) are probably the most 
> interested in. Some more generic solution for all lxc-* tools should probably 
> be more adequate, but that would need attention from one of the 
> maintainers/core devs.
> 
> b.
> 
> On 22 February 2016 at 00:50, Akshay Karle <akshay.a.ka...@gmail.com 
> <mailto:akshay.a.ka...@gmail.com>> wrote:
> Hello lxc users,
> 
> After having used lxc for a while now, I've realized that when the container 
> fails to start, it fails with a very generic message as follows:
> 
> $ lxc-start -n test
> lxc-start: lxc_start.c: main: 344 The container failed to start.
> lxc-start: lxc_start.c: main: 346 To get more details, run the container in 
> foreground mode.
> lxc-start: lxc_start.c: main: 348 Additional information can be obtained by 
> setting the --logfile and --logpriority options.
> 
> And if you are using ephemeral containers, the error is even more generic and 
> with no way to increase the log level:
> 
> $ lxc-start-ephemeral -n e1 -o test -d
> setting rootfs to .%s. /home/vagrant/.local/share/lxc/e1/rootfs
> The container 'e1' failed to start.
> 
> I was wondering if someone felt the need of having a little more meaningful 
> error messages giving a summary of the error in the console output. The 
> container logfile does have

Re: [lxc-users] Are these messages normal for un-previlieged lxc containers?

2016-02-22 Thread John Siu

> On Feb 22, 2016, at 12:55, Serge Hallyn <serge.hal...@ubuntu.com> wrote:
> 
> Quoting John Siu (john.sd@gmail.com):
>> OS: Ubuntu 16.04
>> LXC: 2.0.0-rc1
>> 
>> Following are from host journal when starting up a lxc container:
>> 
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18926 
>> (uid 1000 gid 1000) may not create under 
>> /run/cgmanager/fs/blkio/user.slice/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18926 
>> (uid 1000 gid 1000) may not create under 
>> /run/cgmanager/fs/cpuacct/user.slice/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18926 
>> (uid 1000 gid 1000) may not create under /run/cgmanager/fs/cpuset/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18926 
>> (uid 1000 gid 1000) may not create under 
>> /run/cgmanager/fs/devices/user.slice/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18926 
>> (uid 1000 gid 1000) may not create under /run/cgmanager/fs/hugetlb/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18926 
>> (uid 1000 gid 1000) may not create under /run/cgmanager/fs/net_prio/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18926 
>> (uid 1000 gid 1000) may not create under /run/cgmanager/fs/perf_event/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18926 
>> (uid 1000 gid 1000) may not create under 
>> /run/cgmanager/fs/pids/user.slice/user-1000.slice/session-2.scope/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18930 
>> (uid 1000 gid 1000) may not create under 
>> /run/cgmanager/fs/blkio/user.slice/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18930 
>> (uid 1000 gid 1000) may not create under 
>> /run/cgmanager/fs/cpuacct/user.slice/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18930 
>> (uid 1000 gid 1000) may not create under /run/cgmanager/fs/cpuset/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18930 
>> (uid 1000 gid 1000) may not create under 
>> /run/cgmanager/fs/devices/user.slice/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18930 
>> (uid 1000 gid 1000) may not create under /run/cgmanager/fs/hugetlb/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18930 
>> (uid 1000 gid 1000) may not create under /run/cgmanager/fs/net_prio/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18930 
>> (uid 1000 gid 1000) may not create under /run/cgmanager/fs/perf_event/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18930 
>> (uid 1000 gid 1000) may not create under 
>> /run/cgmanager/fs/pids/user.slice/user-1000.slice/session-2.scope/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18936 
>> (uid 1000 gid 1000) may not create under 
>> /run/cgmanager/fs/blkio/user.slice/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18935 
>> (uid 1000 gid 1000) may not create under 
>> /run/cgmanager/fs/blkio/user.slice/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18936 
>> (uid 1000 gid 1000) may not create under 
>> /run/cgmanager/fs/cpuacct/user.slice/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18935 
>> (uid 1000 gid 1000) may not create under 
>> /run/cgmanager/fs/cpuacct/user.slice/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18936 
>> (uid 1000 gid 1000) may not create under /run/cgmanager/fs/cpuset/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18935 
>> (uid 1000 gid 1000) may not create under /run/cgmanager/fs/cpuset/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18936 
>> (uid 1000 gid 1000) may not create under 
>> /run/cgmanager/fs/devices/user.slice/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18935 
>> (uid 1000 gid 1000) may not create under 
>> /run/cgmanager/fs/devices/user.slice/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18936 
>> (uid 1000 gid 1000) may not create under /run/cgmanager/fs/hugetlb/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18935 
>> (uid 1000 gid 1000) may not create under /run/cgmanager/fs/hugetlb/lxc
>> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_c

Re: [lxc-users] Are these messages normal for un-previlieged lxc containers?

2016-02-22 Thread John Siu
 lxc1 systemd[1]: Reached target Basic System.
Feb 22 01:31:20 lxc1 systemd[1]: Starting getty on tty2-tty6 if dbus and 
logind are not available...
Feb 22 01:31:20 lxc1 systemd[1]: Starting LSB: Set the CPU Frequency 
Scaling governor to "ondemand"...
Feb 22 01:31:20 lxc1 systemd[1]: Started Regular background program 
processing daemon.
Feb 22 01:31:20 lxc1 cron[299]: (CRON) INFO (pidfile fd = 3)
Feb 22 01:31:20 lxc1 cron[299]: (CRON) INFO (Running @reboot jobs)
Feb 22 01:31:20 lxc1 systemd[1]: Starting Permit User Sessions...
Feb 22 01:31:21 lxc1 systemd[1]: Started Permit User Sessions.
Feb 22 01:31:21 lxc1 systemd[1]: Started LSB: Set the CPU Frequency Scaling 
governor to "ondemand".
Feb 22 01:31:21 lxc1 systemd[1]: Started getty on tty2-tty6 if dbus and 
logind are not available.
Feb 22 01:31:21 lxc1 dhclient[279]: DHCPREQUEST of 192.168.0.216 on public 
to 255.255.255.255 port 67 (xid=0x7786db89)
Feb 22 01:31:21 lxc1 ifup[252]: DHCPREQUEST of 192.168.0.216 on public to 
255.255.255.255 port 67 (xid=0x7786db89)
Feb 22 01:31:21 lxc1 ifup[252]: DHCPOFFER of 192.168.0.216 from 192.168.0.2
Feb 22 01:31:21 lxc1 dhclient[279]: DHCPOFFER of 192.168.0.216 from 
192.168.0.2
Feb 22 01:31:21 lxc1 ifup[252]: DHCPACK of 192.168.0.216 from 192.168.0.2
Feb 22 01:31:21 lxc1 dhclient[279]: DHCPACK of 192.168.0.216 from 
192.168.0.2
Feb 22 01:31:21 lxc1 dhclient[279]: bound to 192.168.0.216 -- renewal in 
110360 seconds.
Feb 22 01:31:21 lxc1 ifup[252]: bound to 192.168.0.216 -- renewal in 110360 
seconds.
Feb 22 01:31:21 lxc1 systemd[1]: Started Raise network interfaces.
Feb 22 01:31:21 lxc1 systemd[1]: Reached target Network.
Feb 22 01:31:21 lxc1 systemd[1]: Starting OpenBSD Secure Shell server...
Feb 22 01:31:21 lxc1 sshd[363]: Server listening on 0.0.0.0 port 22.
Feb 22 01:31:21 lxc1 sshd[363]: Server listening on :: port 22.
Feb 22 01:31:21 lxc1 systemd[1]: Starting The PHP 7.0 FastCGI Process 
Manager...
Feb 22 01:31:21 lxc1 systemd[1]: Starting /etc/rc.local Compatibility...
Feb 22 01:31:21 lxc1 php-fpm[370]: [NOTICE] configuration file 
/etc/php/7.0/fpm/php-fpm.conf test is successful
Feb 22 01:31:21 lxc1 systemd[1]: Started Journal Remote Upload Service.
Feb 22 01:31:21 lxc1 systemd[1]: Started OpenBSD Secure Shell server.
Feb 22 01:31:21 lxc1 systemd[1]: Started /etc/rc.local Compatibility.
Feb 22 01:31:21 lxc1 php-fpm[378]: [NOTICE] fpm is running, pid 378
Feb 22 01:31:21 lxc1 php-fpm[378]: [NOTICE] ready to handle connections
Feb 22 01:31:21 lxc1 php-fpm[378]: [NOTICE] systemd monitor interval set to 
1ms
Feb 22 01:31:21 lxc1 systemd[1]: Started Container Getty on /dev/pts/3.
Feb 22 01:31:21 lxc1 systemd[1]: Started Console Getty.
Feb 22 01:31:21 lxc1 systemd[1]: Started Container Getty on /dev/pts/0.
Feb 22 01:31:21 lxc1 systemd[1]: Started Container Getty on /dev/pts/1.
Feb 22 01:31:21 lxc1 systemd[1]: Started Container Getty on /dev/pts/2.
Feb 22 01:31:21 lxc1 systemd[1]: Reached target Login Prompts.
Feb 22 01:31:21 lxc1 systemd[1]: Started The PHP 7.0 FastCGI Process 
Manager.
Feb 22 01:31:21 lxc1 systemd[1]: Reached target Multi-User System.
Feb 22 01:31:21 lxc1 systemd[1]: Reached target Graphical Interface.
Feb 22 01:31:21 lxc1 systemd[1]: Starting Update UTMP about System Runlevel 
Changes...
Feb 22 01:31:21 lxc1 systemd[1]: systemd-update-utmp-runlevel.service: 
Failed to kill control group 
/user.slice/user-1000.slice/session-2.scope/lxc/lxc1/system.slice/systemd-update-utmp-runlevel.service,
 ignoring: Invalid argument
Feb 22 01:31:21 lxc1 systemd[1]: systemd-update-utmp-runlevel.service: 
Failed to kill control group 
/user.slice/user-1000.slice/session-2.scope/lxc/lxc1/system.slice/systemd-update-utmp-runlevel.service,
 ignoring: Invalid argument
Feb 22 01:31:21 lxc1 systemd[1]: systemd-update-utmp-runlevel.service: 
Failed to kill control group 
/user.slice/user-1000.slice/session-2.scope/lxc/lxc1/system.slice/systemd-update-utmp-runlevel.service,
 ignoring: Invalid argument
Feb 22 01:31:21 lxc1 systemd[1]: systemd-update-utmp-runlevel.service: 
Failed to kill control group 
/user.slice/user-1000.slice/session-2.scope/lxc/lxc1/system.slice/systemd-update-utmp-runlevel.service,
 ignoring: Invalid argument
Feb 22 01:31:21 lxc1 systemd[1]: Started Update UTMP about System Runlevel 
Changes.
Feb 22 01:31:21 lxc1 systemd[1]: Startup finished in 3.365s.


> On Feb 22, 2016, at 02:46, John Siu <john.sd@gmail.com> wrote:
> 
> OS: Ubuntu 16.04
> LXC: 2.0.0-rc1
> 
> Following are from host journal when starting up a lxc container:
> 
> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgmanager:do_create_main: pid 18926 
> (uid 1000 gid 1000) may not create under 
> /run/cgmanager/fs/blkio/user.slice/lxc
> Feb 22 01:31:18 JS-HP cgmanager[2978]: cgm

[lxc-users] LXD Live Migration: error - must have criu 1.9 or greater

2016-02-22 Thread John Dupont
Hello,

I am trying to live migrate a container, following the steps described at: 
https://insights.ubuntu.com/2015/05/06/live-migration-in-lxd/

However, when I execute the migrate command, I obtain an error:
# lxc move migratee lxd2:migratee
error: Error transferring container data: checkpoint failed:
checkpoint failed

When checking the log file (/var/log/lxd/migratee/lxc.log), I see the following 
line:
lxc [...] ERRORlxc_criu - criu.c:criu_version_ok:348 - must have criu 1.9 
or greater to checkpoint/restore

The machine (Ubuntu 14.04) is running criu version 1.7.2, and on the criu 
webpage (https://criu.org/Main_Page), the latest version appears to be version 
1.8. However, the error suggests that a more recent version, 1.9, is required. 
Would you know where I can find version 1.9 of criu? Or is the failure caused 
by something else?

Thank you,

John

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] [Help]LXD:point of differentiation. please answer my question

2016-02-21 Thread John Siu

> On Feb 20, 2016, at 02:39, 디케이  wrote:
> 
> Hi^^ 
> 
> Recently, I first know about LXD and I have searched information with great 
> interest. 
> 
> (from articles, linuxcontainers.org , ubuntu 
> product page)
> 
> But, until now, There are some parts that I can't yet understand.
> 
> That parts are very important for me. please anyswer my question below.
> 
> thanks in advance.
> 
>  
> 
> [Q1] I read that "LXD container provides a full OS environment within 
> container." 
> 
>So I think that is one of the Point of Differences beween LXD and 
> other containers.
> 
>Of course I know that app container like a docker does not support 
> full OS envinronment.
> 
>But before announcing LXD, already LXC technology has existed. well 
> known technology. 
> 
>Before LXD, Does LXC have already provided full OS environment??  
> 
>I know other container like solaris zone also supports.
> 
> ( I know that LXD uses LXC, However I want to distinguish beween LXD new 
> features and LXC origin feautre that have continued to support   before.)
> 
> 
>   Am I right? or wrong? 
> 
>   If I am wrong, 
> 
>  What is the main reason that LXD provides a full OS environment in 
> comparison with lxc and zone??
> 
>   except for functions like a multiple hosts, snapshots just focus on 
> full OS environment.
> 
> 

LXD is actually a management tool wrapped around LXC. LXC is the one that 
provides you the full OS container. LXD comes with remote management 
capabilities, while LXC itself can only manage local containers.

So LXD is an LXC container manager. If you are familiar with VMware, you can 
think of LXD as the vSphere for LXC.

> [Q2] where can I get LXD manual? I have found a just few "get started webpage"
> 
>I want to get "how to configure resource management", 
> 
>how to assign block device and volume, how to connect container to 
> outside and about configuraiotn files...
> 
> 

As far as I know, there is very limited documentation for LXD and LXC, not 
even an ebook. The following blog series may be helpful to you:

https://www.stgraber.org/2013/12/20/lxc-1-0-blog-post-series/
https://www.flockport.com/tag/lxc-3/

Both Flockport and Stéphane Graber's blog are very informative. Flockport is a 
bit more up to date, while Stéphane’s blog is more organized.
> [Q3] LXD container can not servcie itself by own funtion? 
> 
>It means LXD container must use other tool like a SDN, openstack??
> 
>(Docker can uses unixsocket, tcpsocket for service with other hosts, 
> Docker does not need SDN)
> 
There is no “LXD container”. LXD manages LXC containers. LXD can be a standalone 
tool or work with OpenStack, etc.

>  
> [Q4] All container technology use a host's kernel features (cgroup, 
> namespace. etc...)
> 
>  I know LXD also use host's kernel features. 
> 
>  So VM like a virtubalbox, vmware can support better isolation and 
> security than container.
> 
>  because VMs have own kernel and VMs does not share kernel resource.
> 
>  If so, How can LXD provide support better security and isolation than 
> other container technology?? 
> 
Though cgroups have been in development for over 5 years (or more?), IMHO they are 
still a new technology. The reason is that they weren't heavily tested and used 
until Docker, LXC and systemd-nspawn became available.

As with all technology, old or new, there will be bugs and security holes, and 
they will be fixed in time.
> How can LXD be called linux hypervisor in comparison with other 
> container(lxc, solaris zone).
> 
>  ( I know that LXD uses LXC, However I want to distinguish beween LXD new 
> features and LXC origin feautre that have continued to support   before.)
> 
Again, as a simplified answer, LXC is comparable to Solaris Zones. They are both 
kernel-level containers, and the container uses the host's running kernel.

> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Better error logging when starting containers?

2016-02-21 Thread John Siu
I agree that lxc-start needs better error logging, and not only lxc-start but 
most of the lxc-* tools. On the other hand, the devs are probably too busy 
straightening out other issues for lxc v2.

Regarding your issue, you can try following:

lxc-start-ephemeral -n e1 -l 9 -o test -d

The “-l 9” (lower-case L) sets the log level to 9. That should give you more 
information. Without it the log level defaults to 0 or 1, which is the same 
as what you see on screen (stderr).

John Siu


> On Feb 21, 2016, at 18:50, Akshay Karle <akshay.a.ka...@gmail.com> wrote:
> 
> Hello lxc users,
> 
> After having used lxc for a while now, I've realized that when the container 
> fails to start, it fails with a very generic message as follows:
> 
> $ lxc-start -n test
> lxc-start: lxc_start.c: main: 344 The container failed to start.
> lxc-start: lxc_start.c: main: 346 To get more details, run the container in 
> foreground mode.
> lxc-start: lxc_start.c: main: 348 Additional information can be obtained by 
> setting the --logfile and --logpriority options.
> 
> And if you are using ephemeral containers, the error is even more generic and 
> with no way to increase the log level:
> 
> $ lxc-start-ephemeral -n e1 -o test -d
> setting rootfs to .%s. /home/vagrant/.local/share/lxc/e1/rootfs
> The container 'e1' failed to start.
> 
> I was wondering if someone felt the need of having a little more meaningful 
> error messages giving a summary of the error in the console output. The 
> container logfile does have way more descriptive error messages, but since 
> you don't directly have the errors in the console output, you are forced to 
> open the logfiles everytime something goes wrong. Instead if you had an 
> output that just included a few important error lines from the logfile such 
> as the following example:
> 
> $ lxc-start -n test
> lxc-start: lxc_start.c: main: 344 The container failed to start.
> lxc_start - start.c:lxc_spawn:1031 - failed creating cgroups
> lxc-start: lxc_start.c: main: 346 To get more details, run the container in 
> foreground mode.
> lxc-start: lxc_start.c: main: 348 Additional information can be obtained by 
> setting the --logfile and --logpriority options.
> 
> Do you think this would help? Although I have no idea if this is simple to 
> implement, I just wanted to get your ideas, suggestions and concerns (if any) 
> before attempting to figure out a solution.
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD Live Migration: error - must have criu 1.9 or greater

2016-02-18 Thread John Dupont
Thank you for your suggestions! I re-built criu from the github repository, and 
when trying to execute the "lxc move" command, I now obtain the following error 
message:
error: Error transferring container data: checkpoint failed:
checkpoint failed

The lxc.log file indicates: "lxc_criu - criu.c:criu_ok:405 - couldn't find 
devices.deny = c 5:1 rwm"
Would you know what causes it, and how to fix it? 

Thank you!
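
(For anyone following along, rebuilding criu from source is roughly the sketch
below; the repository URL reflects criu's current upstream and, like the
install prefix, is an assumption rather than something taken from this thread:)

git clone https://github.com/checkpoint-restore/criu.git
cd criu
make
sudo make install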


From: lxc-users <lxc-users-boun...@lists.linuxcontainers.org> on behalf of 
Tycho Andersen <tycho.ander...@canonical.com>
Sent: Tuesday, February 16, 2016 6:38 AM
To: LXC users mailing-list
Subject: Re: [lxc-users] LXD Live Migration: error - must have criu 1.9 or 
greater

On Sat, Feb 13, 2016 at 10:14:25AM +0100, Thomas Lamprecht wrote:
> Hi,
>
> On 11.02.2016 19:33, John Dupont wrote:
> > Hello,
> >
> > I am trying to live migrate a container, following the steps described
> > at: https://insights.ubuntu.com/2015/05/06/live-migration-in-lxd/
> >
> > However, when I execute the migrate command, I obtain an error:
> > # lxc move migratee lxd2:migratee
> > error: Error transferring container data: checkpoint failed:
> > checkpoint failed
> >
> > When checking the log file (/var/log/lxd/migratee/lxc.log), I see the
> > following line:
> > lxc [...] ERRORlxc_criu - criu.c:criu_version_ok:348 - must have
> > criu 1.9 or greater to checkpoint/restore
> >
> > The machine (Ubuntu 14.04) is running criu version 1.7.2, and on the
> > criu webpage (https://criu.org/Main_Page), the latest version appears to
> > be version 1.8. However, the error suggests a more recent version 1.9 to
> > be required. Would you know where I can find version 1.9 of criu? Or is
> > the failure caused by something else?
> >
>
> No, there's a hard version check for criu 1.9 in the code. Why, I'm not
> quite sure, but I assume there's a reason, e.g. it doesn't work with the
> latest release 1.8 but does work with the newest commits since then.

Right, liblxc uses options now (--lsm-profile) that aren't in any
released version of criu.

> You could build it yourself from criu's github repository; LXC has a
> separate check when criu is built from (git) sources, so that should
> work - at least for the version check.

Yep, liblxc is smart enough to check criu's version output for the
right patchlevel, so a sufficiently new source build of CRIU should
work.

Tycho
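
For anyone else hitting the same version check: a source build along the
lines Thomas suggested might look roughly like this (the repository URL is
the current upstream location and the rest is only a sketch - install criu's
build dependencies first and see criu.org for the authoritative instructions):

$ git clone https://github.com/checkpoint-restore/criu.git
$ cd criu
$ make
$ sudo make install
$ criu --version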

> cheers,
> Thomas
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] LXC console not working with new Kali linux container, nothing being logged

2016-02-07 Thread John Lewis
I made an LXC container by unpacking the squashfs image from the Kali Linux
installation DVD. It starts, but if I start it as a daemon I can't use
lxc-console to log into it. If I start it interactively I can log in.
The problem is that nothing gets logged even when I configure the log
level to be debug.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXC console not working with new Kali linux container, nothing being logged

2016-02-07 Thread John Lewis
I forgot to append the config file.

On 02/07/2016 07:37 AM, John Lewis wrote:
> I made an LXC container by unpacking the squashfs image from the Kali Linux
> installation DVD. It starts, but if I start it as a daemon I can't use
> lxc-console to log into it. If I start it interactively I can log in.
> The problem is that nothing gets logged even when I configure the log
> level to be debug.

# Template used to create this container: /usr/share/lxc/templates/lxc-debian
# Parameters passed to the template:
# For additional config options, please look at lxc.container.conf(5)
lxc.network.type = none
lxc.rootfs = /home/diskimg/lxc/khalid

# Common configuration
lxc.include = /usr/share/lxc/config/debian.common.conf

# Container specific configuration
#lxc.mount = /var/lib/lxc/debian8_base/fstab
lxc.utsname = khalid
lxc.arch = amd64
lxc.autodev = 1
lxc.kmsg = 0
lxc.loglevel = 1
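
For reference, a sketch of forcing verbose logging for this container (the
values are only examples; I believe lxc.loglevel in this LXC generation also
accepts symbolic names, and the logfile path is a placeholder):

lxc.loglevel = DEBUG
lxc.logfile = /var/log/lxc/khalid.log

or, equivalently, on the command line:

# lxc-start -n khalid -d -l DEBUG -o /var/log/lxc/khalid.log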
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Could not find writable mount point for cgroup hierarchy 8 while trying to create cgroup

2016-01-26 Thread John Lewis
root@thunderguard:/home/diskimg/lxc# lxc-start -n rosxubuntu
lxc-start: Could not find writable mount point for cgroup hierarchy 8
while trying to create cgroup.
lxc-start: failed creating cgroups
lxc-start: failed to spawn 'rosxubuntu'
lxc-start: The container failed to start.
lxc-start: Additional information can be obtained by setting the
--logfile and --logpriority options.

container

root@thunderguard:/home/diskimg/lxc# cat rosxubuntu/etc/os-release
NAME="Ubuntu"
VERSION="14.04, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"

host

root@thunderguard:/home/diskimg/lxc# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
NAME="Debian GNU/Linux"
VERSION_ID="8"
VERSION="8 (jessie)"
ID=debian
HOME_URL="http://www.debian.org/"
SUPPORT_URL="http://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
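
A quick way to see which cgroup hierarchies the host actually has mounted,
and where, is to compare the mount table with the hierarchy list the kernel
reports (just a diagnostic sketch; I believe the hierarchy number in the
error corresponds to the ID in the first column of /proc/self/cgroup):

root@thunderguard:~# grep cgroup /proc/self/mounts
root@thunderguard:~# cat /proc/self/cgroup

If hierarchy 8 shows up in /proc/self/cgroup but has no writable mount point
under /sys/fs/cgroup, that would explain the lxc-start error.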


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] What is right way to backup and restore linux containers?

2015-12-05 Thread John Lewis
What I do is store my containers in a disk image with a filesystem,
usually ext4. I store the image in the LXC server's /opt. I mount the
LXCs to /srv before starting them because I haven't figured out how to
run them directly out of the disk images yet. I back up the disk images
with rsnapshot using its sparse option. It saves a lot of time because
there is only one file to back up instead of hundreds for each LXC.

To restore, I mount the disk image and rsync the target file back to the
original container, or copy the whole container disk image over the one
that wasn't in the state I needed it to be in. To back up databases, you
need to make sure you get a database dump before the backup. The way I
like to do it is with a remote ssh command, dumping the database over an
ssh connection from the backup machine: I send the dump command up via
standard input and copy the database dump back down via standard output.
Keeping database files on a separate image file helps reduce the size of
backups but is not required.
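
A minimal sketch of that dump-over-ssh idea, run from the backup machine and
assuming MySQL with key-based ssh access (the host name, credentials and
paths are placeholders):

$ ssh root@lxc-host "mysqldump --single-transaction --all-databases" | gzip > db-$(date +%F).sql.gz

The dump comes back on the ssh session's standard output and is compressed on
the backup machine, so nothing has to be staged inside the container.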

On 12/04/2015 11:32 AM, Saint Michael wrote:
> I was going to ask the same question.
> It is a very important one. I am moving containers via rsync, but it
> takes too long.
>
> On Fri, Dec 4, 2015 at 11:03 AM, Eax Melanhovich wrote:
>
> Hello.
>
> Lets say I have some container. I would like to run something like:
>
> lxc-backup -n test-container my-backup.tgz
>
> Then move backup somewhere (say, to Amazon S3). Then say I would like
> to restore my container or create its copy on different machine. So I
> need something like:
>
> lxc-restore -n copy-of-container my-backup.tgz
>
> I discovered lxc-snapshot, but it doesn't do exactly what I need.
>
> So what is the right way of backing up and restoring Linux containers?
>
> --
> Best regards,
> Eax Melanhovich
> http://eax.me/
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> 
> http://lists.linuxcontainers.org/listinfo/lxc-users
>
>
>
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] re I can't create tun device in Systemd Linux container {john lewis} SOLVED

2015-11-29 Thread John Lewis
On 11/29/2015 05:51 PM, brian mullan wrote:
> Check the syntax of tuntap creation and make sure you
> have the command right...
>
> http://baturin.org/docs/iproute2/#Add%20an%20tun/tap%20device%20useable%20by%20root
>
> brian
>

Someone gave me a link to the Arch wiki page that has the fix.

https://wiki.archlinux.org/index.php/OpenVPN_in_Linux_containers
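
For the archive: as I understand it, the relevant part of that page boils
down to letting the container use the tun device node and bind-mounting
/dev/net from the host. A sketch in LXC 1.x-style config (treat the exact
lines as an assumption, not a copy of the working config; 10:200 is the
character device for /dev/net/tun):

lxc.cgroup.devices.allow = c 10:200 rwm
lxc.mount.entry = /dev/net dev/net none bind,create=dir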

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Naming under /sys/fs/cgroup in LXC container

2015-07-24 Thread John Marshall

Hi,

I have all my controllers under a common directory /sys/fs/cgroup/unified:

   cgroup on /sys/fs/cgroup/unified type cgroup 
(rw,relatime,hugetlb,perf_event,blkio,freezer,devices,memory,cpuacct,cpu,cpuset)

under which is a jobs cgroup running jobs in jobs/jobid. When I start an 
LXC container,
e.g., under /jobs/123, and have this in my lxc config file:

   lxc.mount.auto = cgroup
   lxc.aa_profile = unconfined

I get the following under /sys/fs/cgroup (in the container):

   lrwxrwxrwx 1 root root 66 Jul 24 14:47 blkio -> hugetlb,perf_event,blkio,freezer,devices,memory,cpuacct,cpu,cpuset
   lrwxrwxrwx 1 root root 66 Jul 24 14:47 cpu -> hugetlb,perf_event,blkio,freezer,devices,memory,cpuacct,cpu,cpuset
   lrwxrwxrwx 1 root root 66 Jul 24 14:47 cpuacct -> hugetlb,perf_event,blkio,freezer,devices,memory,cpuacct,cpu,cpuset
   lrwxrwxrwx 1 root root 66 Jul 24 14:47 cpuset -> hugetlb,perf_event,blkio,freezer,devices,memory,cpuacct,cpu,cpuset
   lrwxrwxrwx 1 root root 66 Jul 24 14:47 devices -> hugetlb,perf_event,blkio,freezer,devices,memory,cpuacct,cpu,cpuset
   lrwxrwxrwx 1 root root 66 Jul 24 14:47 freezer -> hugetlb,perf_event,blkio,freezer,devices,memory,cpuacct,cpu,cpuset
   lrwxrwxrwx 1 root root 66 Jul 24 14:47 hugetlb -> hugetlb,perf_event,blkio,freezer,devices,memory,cpuacct,cpu,cpuset
   drwxr-xr-x 3 root root 60 Jul 24 14:47 hugetlb,perf_event,blkio,freezer,devices,memory,cpuacct,cpu,cpuset
   lrwxrwxrwx 1 root root 66 Jul 24 14:47 memory -> hugetlb,perf_event,blkio,freezer,devices,memory,cpuacct,cpu,cpuset
   lrwxrwxrwx 1 root root 66 Jul 24 14:47 perf_event -> hugetlb,perf_event,blkio,freezer,devices,memory,cpuacct,cpu,cpuset

Is there some way to get this exposed under a single directory (e.g.,
/sys/fs/cgroup/unified), as on the host system, rather than as entries for
all the controllers? In effect, could the
hugetlb,perf_event,blkio,freezer,devices,memory,cpuacct,cpu,cpuset directory
be given the name unified instead?

Thanks,
John

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking LXC and VirtualBox on the same host

2014-09-22 Thread John

On 20/09/14 14:21, J Bc wrote:

route -n


Not sure what you mean, everything's on the same subnet. Also, if it 
were routing then pings wouldn't work either...


My route -n is this

Destination     Gateway         Genmask         Flags  MSS Window  irtt Iface
0.0.0.0         10.0.0.138      0.0.0.0         UG       0 0          0 eth0
10.0.0.0        0.0.0.0         255.0.0.0       U        0 0          0 eth0


If I need to configure something then I'd be grateful if someone would 
explain what I'm missing.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Networking LXC and VirtualBox on the same host

2014-09-20 Thread John
I have a test rig with LXC and VirtualBox guests on it. Host is 
ArchLinux 64-bit.


All virtual machines and containers use the same bridged network config.

I've noticed that

 * the containers can talk to each other, the host and anything else on
   the network
 * the VB guests can talk to each other, the host and anything else on
   the network
 * the containers and the VB guests cannot talk to each other
 * the containers and the VB guests can ping each other

I've tried both UDP and TCP tests in both directions between containers 
and VB guests. Nothing works.


I've done some basic testing and I think data gets from source to
destination, but the replies don't come back.
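
One way to see where the replies disappear is to capture on both the bridge
and the container's host-side veth while running one of the failing TCP
tests (a sketch; the addresses, port and veth name are placeholders - the
veth name for a running container shows up in 'ip link' on the host):

# tcpdump -ni br0 host 10.0.200.50 and tcp port 5000
# tcpdump -ni vethXXXXXX host 10.0.200.50 and tcp port 5000

If the SYN reaches the destination but the SYN/ACK never appears on the way
back, that narrows it down to the return path.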


My bridge is configured like this

Description=Bridge Network (Static Host)
Connection=bridge
Interface=br0
BindsToInterfaces=enp6s0
IP=static
Address=10.0.200.1/8
Gateway=10.0.0.138
DNS=10.0.0.138
DNSDomain=example.co.uk
FwdDelay=0

And typical container config is like this:

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.mtu = 1500

The VB guests use the bridged adapter.

Can anyone suggest anything that I can check or do so I can get VBox 
guests and containers talking to each other ?


Thanks.


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Substitute for lxc-ps?

2014-09-10 Thread John Drescher
On Wed, Sep 10, 2014 at 1:30 PM, Michael Chinn
michael.ch...@simpleprecision.com wrote:
 I see that lxc-ps was removed for v 1.x:

 http://permalink.gmane.org/gmane.linux.kernel.containers.lxc.general/7623


 Q: Why was it removed?

 Q: Is there a replacement?


Here is some additional info on the commit that removed lxc-ps
https://github.com/lxc/lxc/commit/7f12cae956c003445e6ee182b414617b52532af6

John
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Substitute for lxc-ps?

2014-09-10 Thread John Drescher
On Wed, Sep 10, 2014 at 1:52 PM, John Drescher dresche...@gmail.com wrote:
 On Wed, Sep 10, 2014 at 1:30 PM, Michael Chinn
 michael.ch...@simpleprecision.com wrote:
 I see that lxc-ps was removed for v 1.x:

 http://permalink.gmane.org/gmane.linux.kernel.containers.lxc.general/7623


 Q: Why was it removed?

 Q: Is there a replacement?


 Here is some additional info on the commit that removed lxc-ps
 https://github.com/lxc/lxc/commit/7f12cae956c003445e6ee182b414617b52532af6

Here is an example of using lxc-attach to run ps in a container:

http://docs.oracle.com/cd/E37670_01/E37355/html/ol_shutdown_containers.html
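
In short, something along these lines (the container name is a placeholder):

# lxc-attach -n mycontainer -- ps aux

lxc-attach runs the command inside the container's namespaces, so you get the
container's own process list without needing a console login.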

John
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-ps not found

2014-08-28 Thread John Drescher
On Thu, Aug 28, 2014 at 6:29 AM, Lukas Schulze lspc...@gmail.com wrote:
 Hi,

 I'm using lxc-1.0.5 and I'm wondering why $ lxc-ps is not available on my
 machine?
 My host is a debian system: Linux 3.2.0-4-amd64 #1 SMP Debian
 3.2.60-1+deb7u1 x86_64 GNU/Linux
 Any ideas?

This has been removed with the 1.x release.

John
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] [Lxc-users] LXC and sound in container -

2014-01-17 Thread John

On 25/12/13 15:33, TuxRaiderPen wrote:

On Friday, November 15, 2013 04:50:58 John wrote:

On 09/11/13 15:12, brian mullan wrote:

I've searched the web for 2 weeks now and can find no documentation
describing steps to configure sound in an LXC container.

Here is what I do. It's just ALSA (not Pulseaudio) but I do run a
desktop in a container and it works for me.

Apologies for the late response to this post - it fell off my radar. I was
busy eating turkey on the 25th of December!!!

Very interesting!!! THANK YOU!

I am interested in sound device use inside LXC containers, under ALSA *only*,
so I have a few questions...

Is this sound *output* only ???

Have you tried using :

1) Line in
2) Mic In
3) ALSA plugins:

I have thus far only tried sound output, as that is all that I needed.
If I get a chance I may try out a few tests of sound input - I can't see 
why it wouldn't work...

I use something like this:

asound.conf
pcm.onboard{
 type hw
 card 0
}
ctl.onboard {
 type hw
 card 0
}
### Dsnoop both channels
pcm.dsnoop_onboard {
 type dsnoop
 ipc_key 32
 slave {
 pcm onboard
 channels 2
 period_size 320
 rate 48000
 buffer_size 8192
 format S32_LE
 }
 bindings {
 0 0
 1 1
 }
}
### Dsnoop split channels
pcm.onboard_left {
  type dsnoop
  ipc_key 32
  slave {
  pcm onboard
  channels 2
  }
  bindings.0  0
}

pcm.onboard_right {
  type dsnoop
  ipc_key 32
  slave {
  pcm onboard
  channels 2
  }
  bindings.0  1
}

### PLUGS ##
### used with darkice
### device = plug:plug_onboard_left
pcm.plug_onboard_left{
 type route
 slave.pcm onboard_left
 slave.channels 1
 ttable.0.0 1
}
pcm.plug_onboard_right{
 type route
 slave.pcm onboard_right
 slave.channels 1
 ttable.0.0 1
}


I then feed these plugs (pcm.plug_onboard_left and pcm.plug_onboard_right)
to some software to process each one...

I am not so much interested in SOUND OUTPUT, but CAPTURE of audio via a
software package and then feeding it onward...

I am very interested in your solution as it uses ALSA only. I may be using
Kubuntu, highly customized, but the first step I do is

  sudo apt-get purge pulseaudio

So I am very interested in your solution and getting access to the ALSA
plugins

Hmmm... maybe it would be best to get a few of those USB fob sound cards,
assign each one to a specific LXC container, and then put the asound.conf into
each container, so each container could do two encodings...

Thanks again for your input on ALSA based sound in LXC!
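
For completeness: whichever distro is in the container, it also has to be
able to see the host's ALSA devices. A sketch of a typical LXC 1.x config
for that - an assumption on my part, not the actual config used above
(116 is the ALSA character-device major):

lxc.cgroup.devices.allow = c 116:* rwm
lxc.mount.entry = /dev/snd dev/snd none bind,optional,create=dir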
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users



___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] iptabes kernel modules not loading in containers

2014-01-15 Thread John Baker
You just need to make sure that iptables is running on the host in some way
or another. If you run lsmod on the host you should see these modules:

xt_multiport   12597  2
iptable_filter 12810  2
ip_tables  27473  1 iptable_filter
x_tables   29891  3 xt_multiport,iptable_filter,ip_tables

If they're not there, they're not loaded and can't be shared with the
containers. I have the hosts on a separate and much more secure network, so
I hadn't thought about a firewall.

The easiest thing is to install fail2ban on the host. It watches ssh, or
whatever services you define, for brute-force attacks using iptables. It's
useful and it sets iptables rules. Alternatively, set up a firewall on the
host, or load the iptables modules from /etc/modules at boot on the host.
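
A sketch of the /etc/modules route on the host, using the module names from
the lsmod output above (one module per line; modprobe pulls in dependencies,
so listing the helpers is belt-and-braces rather than strictly required):

# /etc/modules (on the host)
ip_tables
iptable_filter
x_tables
xt_multiport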


On Wed, Jan 15, 2014 at 3:25 AM, Gandhi, Ibha (HP Software) ib...@hp.com wrote:

  Hi John,



 I am facing a similar issue; the container throws this error:

 ubuntu@root-local-machine-2:~$ iptables -L

 FATAL: Could not load /lib/modules/3.11.0-12-generic/modules.dep: No such
 file or directory

 iptables v1.4.12: can't initialize iptables table `filter': Table does not
 exist (do you need to insmod?)

 Perhaps iptables or your kernel needs to be upgraded.



 It'll be great if you could share what changes you made in your init scripts.



 Thanks,

 - Ibha



 From: lxc-users-boun...@lists.linuxcontainers.org [mailto:
 lxc-users-boun...@lists.linuxcontainers.org] On Behalf Of John Baker
 Sent: Wednesday, January 15, 2014 2:09 AM
 To: LXC users mailing-list
 Subject: Re: [lxc-users] iptabes kernel modules not loading in
 containers



 Yes, that was it thanks.



 On Tue, Jan 14, 2014 at 3:31 PM, Stéphane Graber stgra...@ubuntu.com
 wrote:

 On Tue, Jan 14, 2014 at 03:00:32PM -0500, John Baker wrote:
  Hi,
 
  I'm using lxc in 12.04.4 LTS and seem to have a chronic issue with the
  iptables module not loading inside a container. I have found that it
 does
  sometimes work and my coworker never seems to have problems with it in
 the
  servers he runs. But it happens all the time on mine and I can't see
  anything at all that we do differently. Sometimes it will start running
  inside a container and then mysteriously have stopped next time I check
 in.
  I can't find any error messages pertaining to it besides the one I get
 when
  I try to load rules or view the set loaded.
 
  The only fix I have been able to come up with is to manually
  copy /lib/modules/kernel ver.-generic/modules.dep and net directory
 from
  the host into the container. Then it seems willing to load iptables
 modules
  consistently but always breaks when the kernel is updated on the host and
  has to be redone.
 
  Any ideas on what I might be missing? Is there a cgroup I should include
  for sharing iptables modules?

 Kernel modules aren't loaded per-container but globally for the whole host.

 It's not recommended (and usually blocked by either dropping the
 capability or by having apparmor prevent it) to load modules from within
 a container. Instead you should make sure all your kernel modules are
 loaded from the host before you start your containers.

 I suspect the difference between your server and your colleague's is
 that he has some init scripts or something else calling iptables before
 he starts his containers which will load any modules required by his
 container.

 --
 Stéphane Graber
 Ubuntu developer
 http://www.ubuntu.com

 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users





 --

 John Baker

 Network Administrator

 Marlboro College

 Phone: 451-7551 Cell: 490-0066

 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users




-- 
John Baker
Network Administrator
Marlboro College
Phone: 451-7551 Cell: 490-0066
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] iptabes kernel modules not loading in containers

2014-01-14 Thread John Baker
Yes, that was it thanks.


On Tue, Jan 14, 2014 at 3:31 PM, Stéphane Graber stgra...@ubuntu.com wrote:

 On Tue, Jan 14, 2014 at 03:00:32PM -0500, John Baker wrote:
  Hi,
 
  I'm using lxc in 12.04.4 LTS and seem to have a chronic issue with the
  iptables module not loading inside a container. I have found that it
 does
  sometimes work and my coworker never seems to have problems with it in
 the
  servers he runs. But it happens all the time on mine and I can't see
  anything at all that we do differently. Sometimes it will start running
  inside a container and then mysteriously have stopped next time I check
 in.
  I can't find any error messages pertaining to it besides the one I get
 when
  I try to load rules or view the set loaded.
 
  The only fix I have been able to come up with is to manually
  copy /lib/modules/kernel ver.-generic/modules.dep and net directory
 from
  the host into the container. Then it seems willing to load iptables
 modules
  consistently but always breaks when the kernel is updated on the host and
  has to be redone.
 
  Any ideas on what I might be missing? Is there a cgroup I should include
  for sharing iptables modules?

 Kernel modules aren't loaded per-container but globally for the whole host.

 It's not recommended (and usually blocked by either dropping the
 capability or by having apparmor prevent it) to load modules from within
 a container. Instead you should make sure all your kernel modules are
 loaded from the host before you start your containers.

 I suspect the difference between your server and your colleague's is
 that he has some init scripts or something else calling iptables before
 he starts his containers which will load any modules required by his
 container.

 --
 Stéphane Graber
 Ubuntu developer
 http://www.ubuntu.com

 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users




-- 
John Baker
Network Administrator
Marlboro College
Phone: 451-7551 Cell: 490-0066
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users