[lxc-users] How to get rid of pesky extra dhcp IP

2018-04-07 Thread jjs - mainphrame
Greetings,

Running lxd-3.0.0 on Ubuntu 18.04 beta.

I've set up a couple of new 16.04 containers and they act as I expect.

I set up an 18.04 container, and a persistent unwanted DHCP IP appears in the
lxc list output:

root@ronnie:~# lxc list
+-----------+---------+------------------------+------+------------+-----------+
|   NAME    |  STATE  |          IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+-----------+---------+------------------------+------+------------+-----------+
| dbserv111 | RUNNING | 192.168.111.221 (eth0) |      | PERSISTENT | 0         |
+-----------+---------+------------------------+------+------------+-----------+
| kangal    | RUNNING | 192.168.111.44 (eth0)  |      | PERSISTENT | 0         |
|           |         | 192.168.111.239 (eth0) |      |            |           |
+-----------+---------+------------------------+------+------------+-----------+
| mg111     | RUNNING | 192.168.111.222 (eth0) |      | PERSISTENT | 0         |
+-----------+---------+------------------------+------+------------+-----------+

However, inside the 18.04 container (kangal), only the static IP is listed:
root@kangal:~# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 192.168.111.44  netmask 255.255.255.0  broadcast 192.168.111.255
inet6 fe80::216:3eff:fef3:857e  prefixlen 64  scopeid 0x20<link>
ether 00:16:3e:f3:85:7e  txqueuelen 1000  (Ethernet)
RX packets 80658  bytes 97973222 (97.9 MB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 56964  bytes 5320056 (5.3 MB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 1000  (Local Loopback)
RX packets 1400  bytes 115581 (115.5 KB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 1400  bytes 115581 (115.5 KB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

root@kangal:~#

However, I can ssh to this DHCP IP and gain access to the box.

Any clues as to how to get rid of this unwanted extra IP?
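
If the 18.04 container uses netplan (the 18.04 default), the stock cloud-image
config may still be running a DHCP client on eth0 alongside the static address.
A sketch of one way to check and disable it, assuming the cloud image's default
file name; note cloud-init may regenerate the file unless its network config is
disabled:

root@kangal:~# cat /etc/netplan/50-cloud-init.yaml    # look for "dhcp4: true"
root@kangal:~# sed -i 's/dhcp4: true/dhcp4: false/' /etc/netplan/50-cloud-init.yaml
root@kangal:~# netplan apply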

Jake

Re: [lxc-users] LXC 3.0.0: Packaging Changes To Be Aware Of

2018-04-07 Thread Mihamina RAKOTOMANDIMBY

On 4/7/18 5:54 PM, Christian Brauner wrote:

> 2. **Important** distrobuilder is the new way of creating machine/system
>    container images
>    The templates have been replaced by a new project called "distrobuilder"
>    [5]. It aims to be a very simple Go project focussed on letting you easily
>    build full system container images by either using the official cloud image
>    if one is provided by the distro or by using the respective distro's
>    recommended tooling (e.g. debootstrap for Debian or pacman for ArchLinux).
>    It aims to be declarative, using the same set of options for all
>    distributions while having extensive validation code to ensure everything
>    that's downloaded is properly validated.
>
>    **Warning: Advertisement** please consider packaging distrobuilder.
>    https://github.com/lxc/distrobuilder
>
>    A more lengthy justification can be found at:
>    https://brauner.github.io/2018/02/27/lxc-removes-legacy-template-build-system.html



Hello,

I'm looking for a tutorial on using an image built with distrobuilder.

After having built the image: how do I start it with lxc-start?
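
A minimal sketch, assuming a distrobuilder definition file named ubuntu.yaml
(the file and container names are illustrative): the build-lxc target writes a
metadata tarball and a rootfs tarball, which the bundled lxc-local template
should be able to consume:

$ distrobuilder build-lxc ubuntu.yaml    # writes meta.tar.xz and rootfs.tar.xz
$ lxc-create -n c1 -t local -- --metadata meta.tar.xz --fstree rootfs.tar.xz
$ lxc-start -n c1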

--
Mihamina RAKOTOMANDIMBY
Tél: +261 32 11 635 03
Calendar: http://mihamina.rktmb.org/p/calendar.html
DevOps, Linux, Jira, Confluence, PHP, Java




[lxc-users] authentication in containers jacked-up!

2018-04-07 Thread Ray Jender
So in Ubuntu 16.04.4 I created 4 LXD containers using LXC. From the host I
created the first container, then did $ lxc copy container1 container2 (and
likewise 1 to 3 and 1 to 4).

It was a challenge for me to make them accessible from the outside world, but
I conquered that.

Now, however, I cannot su or sudo inside the containers. For instance:

 

sudo find / -name testfile -print

sudo: no tty present and no askpass program specified

 

Also:

 

When I do $ lxc exec container1 /bin/bash from the host:

 

I am put in: ray@container1:/root$   // the "/root" does not seem correct?

 

Also:

 

ray@container2:/etc$  visudo

visudo: /etc/sudoers: Permission denied

 

ray@container2:/etc$  su visudo

su: must be run from a terminal

 

ray@container2:/etc$  sudo visudo

sudo: no tty present and no askpass program specified

 

Also when I try to putty into the container, I get the "login as:"  prompt,
but when I enter the user name, I get:

 

PuTTY Fatal Error

Disconnected: No supported authentication methods available (server sent:
publickey)

(same error from WinSCP)

 

Any idea what I am missing?
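
Two hedged guesses, in case they help. sudo's "no tty present" means no pty is
available, which can be worked around by allocating one with script(1); and the
PuTTY/WinSCP error means sshd is only offering publickey authentication, so
password logins are refused no matter the account. Sketches, assuming stock
openssh-server paths:

$ lxc exec container1 -- script -q -c "sudo whoami" /dev/null
$ lxc exec container1 -- sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
$ lxc exec container1 -- service ssh restart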

 

Thanks, appreciate your help on this.


Ray


[lxc-users] Question regarding container affecting the system mounts

2018-04-07 Thread Ronak Desai
Hi,

I came across a problem where, if containers are running, they affect the
unmounting of the system's mount points. I am not using these mount points
as shared partitions with the containers.

For example, I am using an SD card and NAND as external storage devices,
mounted at mount points in the ramfs. I then launch containers, and when I
try to unmount the NAND partition, my umount call succeeds but I don't see
the UBIFS hooks being called, and because of that my UBI detach fails. When
I stop the container, the kernel performs the unmount (I see my debug prints
inside UBIFS for that partition). It seems like the calls are buffered/queued
because of the container.

If I mount the NAND partition after the container is up and then unmount it,
it unmounts and detaches without issue. It seems like there is an issue with
mount namespaces.

I am using the 4.1.8 kernel. Please let me know if you need any additional
details from my end.
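
One hedged thing to check (the mount point path is illustrative): a container
started after the mount holds its own copy of the mount in its namespace, so
the superblock stays busy until that copy also goes away. If the host tree is
shared rather than private, a later host-side umount propagates into the
containers' namespaces:

# how does the NAND mount propagate?
findmnt -o TARGET,PROPAGATION /mnt/nand

# before starting containers, make the tree shared so host-side
# umounts propagate into namespaces cloned later:
mount --make-rshared /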

Thanks in advance !

-- 
Ronak A Desai
Sr. Software Engineer
Airborne Information Solutions / RC Linux Platform Software
MS 131-100, C Ave NE, Cedar Rapids, IA, 52498, USA
ronak.de...@rockwellcollins.com
https://www.rockwellcollins.com/

[lxc-users] LXC 3.0.0: Packaging Changes To Be Aware Of

2018-04-07 Thread Christian Brauner
Hey everyone,

LX{C,FS,D} upstream here. :)

I'm sorry to ping you all at once in this mail and I seriously hope I only
added actual package maintainers for LXC based projects in their respective
distros to this mail. If not I'm genuinely sorry to have banged on your door
(or rather inbox) on a Saturday!

A few days ago we released LXC [1] and LXD [2] 3.0.0 which are going to be our
next LTS releases receiving support from upstream for 5 years until 2023.

LXC 3.0.0 not only introduces a lot of changes and improvements on all fronts
but will also likely require changes in packaging. These changes are what I'd
like to inform you about, because we really don't want you all to run into
pointless confusion and problems.

The distros I think should be reached by this mail are:

Alpine
ArchLinux
Debian
Fedora
Gentoo
NixOS
openSUSE
OpenWrt

Please, if any of you know packagers in other distros that are not
derivatives of the above, forward this mail. Don't leave fellow
maintainers hanging. :)

Here is a list of what we consider most likely to affect you as packagers:

1. **Important** the lxc-templates have been moved out of the main LXC tree
   into a separate repository
   https://github.com/lxc/lxc-templates

   This means that without this separate package LXC will now only come with
   the following templates:

   lxc-busybox
   lxc-download
   lxc-local
   lxc-oci
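
   For reference, a hedged example of creating a container with the download
   template under the new split packaging (distribution, release, and
   architecture are illustrative):

   lxc-create -t download -n c1 -- -d ubuntu -r xenial -a amd64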

2. **Important** distrobuilder is the new way of creating machine/system
container images
   The templates have been replaced by a new project called "distrobuilder"
   [5]. It aims to be a very simple Go project focussed on letting you easily
   build full system container images by either using the official cloud image
   if one is provided by the distro or by using the respective distro's
   recommended tooling (e.g. debootstrap for Debian or pacman for ArchLinux).
   It aims to be declarative, using the same set of options for all
   distributions while having extensive validation code to ensure everything
   that's downloaded is properly validated.

   **Warning: Advertisement** please consider packaging distrobuilder.
   https://github.com/lxc/distrobuilder

   A more lengthy justification can be found at:
   
https://brauner.github.io/2018/02/27/lxc-removes-legacy-template-build-system.html

3. The python3 bindings have been moved out of the main LXC tree and are
   maintained in a separate Github repo under the LXC namespace.
   https://github.com/lxc/python3-lxc

   This means that the

   --with-python

   configure flag should be dropped.

   A more lengthy justification can be found at:
   
https://brauner.github.io/2018/02/27/lxc-removes-legacy-template-build-system.html

4. The lua bindings have been moved out of the main LXC tree and are
   maintained in a separate Github repo under the LXC namespace.
   https://github.com/lxc/lua-lxc

   This means that the

   --with-lua

   configure flag should be dropped.

   A more lengthy justification can be found at:
   
https://brauner.github.io/2018/02/27/lxc-removes-legacy-template-build-system.html

5. **Important** the pam_cgfs.so pam module has moved from the LXCFS tree into
   the LXC tree
   https://github.com/lxc/lxc/blob/master/src/lxc/pam/pam_cgfs.c

   This means that in order to compile the pam module with LXC you should pass:

   --enable-pam

   and

   --with-pamdir=PAM_PATH

   when compiling LXC.
   In case you don't know what the pam module is for: it is used to allow
   unprivileged cgroup management for fully unprivileged containers. It is
   useful for all container runtimes (e.g. openSUSE is shipping and
   using it). For a slightly deeper look at it I suggest you read [3].
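
   Putting the two flags together, a sketch of an invocation (the pam
   directory is distro-specific; the path below is just an example):

   ./configure --enable-pam --with-pamdir=/lib/x86_64-linux-gnu/security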

6. Removal of legacy cgroup drivers
   This includes the cgmanager driver.

   This means that the

   --with-cgmanager

   configure flag should be dropped. The cgmanager package can likely also be
   dropped unless you maintain a package for our 1.0 stable branch!

   A more lengthy justification can be found at:
   https://brauner.github.io/2018/02/20/lxc-removes-legacy-cgroup-drivers.html

7. All legacy configuration keys have been removed.
   With LXC 2.1.0 we started to print warnings when legacy configuration keys
   were used in the container config and started yelling at people that we
   would remove legacy configuration keys in LXC 3.0.0. This is now reality.
   We ship an upgrade script since LXC 2.1:

   chb@conventiont|~
   > lxc-update-config
   /usr/bin/lxc-update-config -h|--help [-c|--config]
   config: the container configuration to update

   which will automatically replace legacy configuration keys with their new
   counterparts. If the upgrade fails it will have left a *.backup file in the
   same directory where the config file was and it can simply be restored.

   Please make sure your users know about this update script. Fwiw, [4]
   provides a list of all removed legacy configuration keys and their new
   counterparts.
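
   As an illustration, one renamed family: the old lxc.network.* keys became
   indexed lxc.net.[i].* keys, so the script rewrites e.g.

   lxc.network.type = veth      # pre-2.1 key
   lxc.net.0.type = veth        # replacement written by lxc-update-config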

[lxc-users] debugging a failing clone() call

2018-04-07 Thread Andrew Cann
Hello,

The following syscall is failing when called on a Travis-CI build machine.

clone(..,
CLONE_FILES |
CLONE_IO |
CLONE_SIGHAND |
CLONE_VM |
CLONE_SYSVSEM |
CLONE_NEWNET |
CLONE_NEWUTS |
CLONE_NEWUSER,
..
);

This works when I run it on my machine, but inside the Docker container that
Travis creates it fails with EPERM. Can anyone suggest why this might be
happening? The clone(2) manpage lists possible reasons:


EPERM   CLONE_NEWCGROUP, CLONE_NEWIPC, CLONE_NEWNET, CLONE_NEWNS, CLONE_NEWPID,
or CLONE_NEWUTS was specified by an unprivileged process (process
without CAP_SYS_ADMIN).

This shouldn't apply, since I'm using CLONE_NEWUSER.


EPERM   CLONE_PID was specified by a process other than process 0. (This error
occurs only on Linux 2.5.15 and earlier.)

Doesn't apply.


EPERM   CLONE_NEWUSER was specified in flags, but either the effective user ID
or the effective group ID of the caller does not have a mapping in the
parent namespace (see user_namespaces(7)).

Again, this shouldn't apply. The process creating the namespace has a valid
(not-nobody) uid and gid.


EPERM   (since Linux 3.9) CLONE_NEWUSER was specified in flags and the caller
is in a chroot environment (i.e., the caller's root directory does not
match the root directory of the mount namespace in which it resides).

Possibly this one? The Docker container shouldn't be aware that it's running
in a chroot, though. Calling mount inside the container lists:

overlay on / type overlay (rw,relatime,...)

This indicates that it's living inside its own mount namespace with its own
root directory.

So I'm confused. Does anyone have any suggestions for why else this might be
failing, or things I could try to debug it? Is there a way to get more than
just an error code out of Linux? Are there reasons for giving that error code
that aren't listed in the man page?
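
Two possibilities worth ruling out, neither of them in the man page: Docker's
default seccomp profile blocks clone(2) with namespace flags, and a seccomp
denial surfaces as a plain EPERM, indistinguishable from a real permission
failure; and Debian/Ubuntu-patched kernels can gate unprivileged user
namespaces behind a sysctl. Quick checks (the image name is illustrative):

# rerun the test without Docker's default seccomp profile:
docker run --security-opt seccomp=unconfined myimage ./mytest

# Debian/Ubuntu-patched kernels only; the file is absent elsewhere:
cat /proc/sys/kernel/unprivileged_userns_clone    # 0 = disabled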

Any help would be greatly appreciated.
 - Andrew




[lxc-users] Using devices in an unprivileged LXC container

2018-04-07 Thread Avadhut Bhangui
Hello,
I have an Ubuntu system, and I log in to it as the root user. I have two LXC
containers created using the busybox template. One is privileged and the other
is unprivileged.

I want to ensure that when a USB device is connected to my Ubuntu box, I am
able to grant the unprivileged container access to it. What options are
possible and how do we do this?
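
One possible approach, sketched for a hypothetical USB serial adapter that
shows up as /dev/ttyUSB0 with major 188, minor 0 (check yours with ls -l). In
the unprivileged container's config:

lxc.cgroup.devices.allow = c 188:0 rwm
lxc.mount.entry = /dev/ttyUSB0 dev/ttyUSB0 none bind,optional,create=file

Because the container is unprivileged, the host device node must be owned by
a uid/gid that maps into the container (e.g. chown it to the container's
mapped root uid first). For hotplugged devices, lxc-device on the host can
push a node into a running container.
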
Regards,
Avadhut.

[lxc-users] SIGRTMIN+3

2018-04-07 Thread Eric Wolf
One of my containers is shutting down seemingly at random. I'm trying to
figure out why, but so far all I can find in syslog is: systemd[1]:
Received SIGRTMIN+3. That seems to be related to the LXC/LXD stop
command, but I can't find anything that might be sending that signal
from my host, so I'm here looking for help finding the source. I'm not
sure what to look for in my logs, either in the container or on the
host.
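
One hedged way to catch the sender on the host is the kernel's signal
tracepoint; 37 is SIGRTMIN+3 with glibc's SIGRTMIN of 34, so verify the
number with kill -l first:

cd /sys/kernel/debug/tracing
echo 'sig == 37' > events/signal/signal_generate/filter
echo 1 > events/signal/signal_generate/enable
cat trace_pipe    # shows comm and pid of whatever generates the signal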

[lxc-users] Limit network bandwidth to LXC containers

2018-04-07 Thread Angel Lopez
Hi,

I need to limit the network bandwidth available to each LXC container using
cgroup's net_cls.classid feature. Each LXC container would have its own
classid value, in such a way that all packets from a container are tagged
with its classid and afterwards classified into the correct host-configured
traffic class, where the bandwidth limit applies.

To achieve this, I followed these steps:

1. Configure traffic control:

# tc qdisc del dev eno54 root
# tc qdisc add dev eno54 root handle 10: htb
# tc class add dev eno54 parent 10: classid 10:1 htb rate 10mbit
# tc class add dev eno54 parent 10: classid 10:2 htb rate 50mbit
# tc filter add dev eno54 parent 10: protocol ip handle 1: cgroup

The device eno54 is the physical network interface that connects the host
to the network. It's part of the bridge to which the container virtual
network interfaces are added.

# brctl show br0
bridge name bridge id   STP enabled interfaces
br0 8000.00163ee2fda2   no  eno54

2. Set the classid value in container config file.

lxctest1 container config file has: lxc.cgroup.net_cls.classid = 0x00100001
lxctest2 container config file has: lxc.cgroup.net_cls.classid = 0x00100002

3. Start both containers. Check that classid is correct and that they
belong to the bridge.

# lxc-start -n lxctest1
# lxc-start -n lxctest2

# cat /sys/fs/cgroup/net_cls/lxc/lxctest1/net_cls.classid
1048577
# cat /sys/fs/cgroup/net_cls/lxc/lxctest2/net_cls.classid
1048578

# brctl show br0
bridge name bridge id   STP enabled interfaces
br0 8000.00163ee2fda2   no  eno54
                            veth0-lxctest1
                            veth0-lxctest2

4. Start iperf in both containers.

Expected behaviour: iperf running on container lxctest1 being limited to 10
Mbps and iperf running on lxctest2 container being limited to 50 Mbps.
What I get: both iperf running unconstrained at maximum speed.

5. I took the iperf process running in the lxctest1 container and checked
that it was in the tasks file of the cgroup:

# pstree -c -p 37108
lxc-start(37108)───systemd(37118)─┬─agetty(37167)
                                  ├─agetty(37168)
                                  ├─dbus-daemon(37157)
                                  ├─rsyslogd(37156)─┬─{rsyslogd}(37161)
                                  │                 └─{rsyslogd}(37162)
                                  ├─sshd(37336)───sshd(41156)───bash(41167)───iperf3(41523)
                                  ├─systemd-journal(37131)
                                  └─systemd-logind(37153)

# cat /sys/fs/cgroup/net_cls/lxc/lxctest1/tasks
37118
37131
37153
37156
37157
37161
37162
37167
37168
37336
39618
41156
41167
41523

# cat /proc/41523/cgroup
10:memory:/lxc/lxctest1
9:hugetlb:/lxc/lxctest1
8:perf_event:/lxc/lxctest1
7:cpuset:/lxc/lxctest1
6:devices:/lxc/lxctest1
5:net_cls,net_prio:/lxc/lxctest1
4:blkio:/lxc/lxctest1
3:cpu,cpuacct:/lxc/lxctest1
2:freezer:/lxc/lxctest1
1:name=systemd:/user.slice/user-0.slice/session-1288.scope/user.slice/user-0.slice/session-1288.scope

6. I don't know how to check that packets going out of the container are
actually being tagged with the classid value, but the reality is that
packets are not filtered according to this value on the host and do not go
into the correct class, where the bandwidth limit is applied.

7. I'm using Oracle Linux 7 and the standard lxc package delivered in this
distribution. Versions:

# uname -a
Linux exapru-aa.dit.aeat 4.1.12-112.14.15.el7uek.x86_64 #2 SMP Thu Feb 8
09:58:19 PST 2018 x86_64 x86_64 x86_64 GNU/Linux

# cat /etc/oracle-release
Oracle Linux Server release 7.4

# yum info lxc
Loaded plugins: ulninfo
Installed Packages
Name: lxc
Arch: x86_64
Version : 1.1.5
Release : 2.0.9.el7
Size: 725 k
Repo: installed
From repo   : ol7_latest
Summary : Linux Containers userspace tools
URL : http://linuxcontainers.org
License : LGPLv2+
Description : Containers are insulated areas inside a system, which have
            : their own namespace for filesystem, network, PID, IPC, CPU and
            : memory allocation and which can be created using the Control
            : Group and Namespace features included in the Linux kernel.
            :
            : This package provides the lxc-* tools, which can be used to
            : start a single daemon in a container, or to boot an entire
            : "containerized" system, and to manage and debug your containers.


8. What is wrong here? Anything wrong with this LXC version? Anything wrong
with the setup?
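
A hedged explanation: cls_cgroup classifies by the socket that generated the
packet, but when a packet crosses the veth pair out of the container's network
namespace it is scrubbed and loses its socket association, so by the time it
reaches eno54 there is no classid left to match. If that is what's happening
here, classifying on the containers' source addresses instead should work with
the same htb classes (the addresses are illustrative):

# tc filter add dev eno54 parent 10: protocol ip prio 1 u32 \
    match ip src 192.168.1.11/32 flowid 10:1
# tc filter add dev eno54 parent 10: protocol ip prio 1 u32 \
    match ip src 192.168.1.12/32 flowid 10:2

Alternatively, the host-side veth device of each container can be shaped
directly.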

Thanks!

-- 
Angel Lopez
http://futur3.com/
... the geeks shall inherit the Earth