Re: [lxc-users] LAN for LXD containers (with multiple LXD servers)?

2016-09-21 Thread Ruzsinszky Attila
Hi,

Why don't you try OpenVSwitch?
You can set up an SDN with it.
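For example, you could connect the OVS bridges of the LXD hosts with a GRE tunnel. A minimal sketch (bridge name, port name and remote IP are only placeholders):

ovs-vsctl add-br vbr0
ovs-vsctl add-port vbr0 gre0 -- set interface gre0 type=gre options:remote_ip=192.168.52.141

Each container's veth then just gets attached to the local bridge.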

TIA,
Ruzsi

Re: [lxc-users] OpenVSwitch compiling on CentOS 6.8

2016-09-08 Thread Ruzsinszky Attila
Hi,

Thanks for your prompt answer!
My workaround was this: there is a "--without check" parameter.
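That is, the build step becomes something like this (the test suite that was hanging is simply skipped):

rpmbuild -bb --without check rhel/openvswitch.spec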
The rpm was built successfully. :-)

TIA,
Ruzsi

[lxc-users] OpenVSwitch compiling on CentOS 6.8

2016-09-08 Thread Ruzsinszky Attila
Hi,

I don't know what could be the problem.

Host is Ubuntu 16.04 64 bit
ii  lxc 2.0.4-0ubuntu1~ubuntu16.04.2
rattila@fcubi:~$ uname -a
Linux fcubi 4.4.0-34-generic #53-Ubuntu SMP Wed Jul 27 16:06:39 UTC 2016
x86_64 x86_64 x86_64 GNU/Linux

Container:
[rattila@cos64 ~]$ uname -a
Linux cos64 4.4.0-34-generic #53-Ubuntu SMP Wed Jul 27 16:06:39 UTC 2016
x86_64 x86_64 x86_64 GNU/Linux

I wanted to build the OVS 2.5.0 rpm:

[root@cos64 openvswitch-2.5.0]# rpmbuild -bb -D 2.6.32-642
rhel/openvswitch.spec
...
1725: ovn-sbctl - test                          ok
1726: ovn-controller - ovn-bridge-mappings      ok
1727: ovn-controller-vtep - test chassis        skipped (ovn-controller-vtep.at:113)
1728: ovn-controller-vtep - test binding 1      skipped (ovn-controller-vtep.at:180)
1729: ovn-controller-vtep - test binding 2      skipped (ovn-controller-vtep.at:244)
1730: ovn-controller-vtep - test vtep-lswitch   skipped (ovn-controller-vtep.at:283)
1731: ovn-controller-vtep - test vtep-macs 1    skipped (ovn-controller-vtep.at:335)
1732: ovn-controller-vtep - test vtep-macs 2    skipped (ovn-controller-vtep.at:406)

It has been waiting for something for days. :-(

 2700 pts/4    S      0:00 -bash
14824 pts/4    S+     0:00 rpmbuild -bb -D 2.6.32-642 rhel/openvswitch.spec
16544 pts/4    T      0:00 /bin/sh ./tests/testsuite -C tests AUTOTEST_PATH=util
16549 pts/4    T      0:00 /bin/sh ./tests/testsuite -C tests AUTOTEST_PATH=util
16550 pts/4    T      0:00 /bin/sh ./tests/testsuite -C tests AUTOTEST_PATH=util
16551 pts/4    T      0:00 cat
16553 pts/4    T      0:00 /bin/sh ./tests/testsuite -C tests AUTOTEST_PATH=util
16556 pts/4    T      0:00 /usr/bin/perl
21876 ?        Ss     0:00 sshd: rattila [priv]
21878 ?        S      0:00 sshd: rattila@pts/5
21879 pts/5    Ss     0:00 -bash
21895 pts/5    R+     0:00 ps ax
25797 pts/4    S+     0:00 /bin/sh -e /var/tmp/rpm-tmp.0qU6e1
25798 pts/4    S+     0:00 make check TESTSUITEFLAGS=-j2
25799 pts/4    S+     0:00 make check-recursive
25800 pts/4    S+     0:00 /bin/sh -c fail=; \?if (target_option=k; case ${targe
26420 pts/4    S+     0:00 make check-am
26522 pts/4    S+     0:00 make check-local
26523 pts/4    S+     0:00 /bin/sh ./tests/testsuite -C tests AUTOTEST_PATH=util
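One thing I notice: the testsuite shells are in state T, i.e. stopped. If that is really the cause of the hang, resuming them might be worth a try, something like:

kill -CONT 16544 16549 16550 16551 16553 16556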

I don't know whether it is an LXC problem or not.

TIA,
Ruzsi

[lxc-users] LXC priv container and OpenVSwitch

2016-08-29 Thread Ruzsinszky Attila
Hi,

Ubuntu 16.04.

My OVS config:
...
Port "veth-lub4"
tag: 800
Interface "veth-lub4"
error: "could not open network device veth-lub4 (No such
device)"
Port "veth-lub6"
tag: 800
Interface "veth-lub6"
error: "could not open network device veth-lub6 (No such
device)"
Port "veth-lub5"
tag: 800
Interface "veth-lub5"
error: "could not open network device veth-lub5 (No such
device)"
Port "veth-lub7"
tag: 800
Interface "veth-lub7"
error: "could not open network device veth-lub7 (No such
device)"
ovs_version: "2.5.0"

The containers are stopped, so the error message is expected.

I wanted to start lub4:
rattila@fcubi:~$ sudo lxc-start -n lub4
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 346 To get more details, run the container in
foreground mode.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by
setting the --logfile and --logpriority options.

The problem is the pre-configured OVS ports! If I remove the ports above,
restart the containers, and configure OVS again for the given ports,
everything is OK.
Is that the intended solution?

I found that lxc sets up the port automatically, except for the VLAN tagging.
So after the container has started I have to configure the VLAN manually with
ovs-vsctl in order to get an IP address from the DHCP server, which sits on the
tagged VLAN.

It is very inconvenient after a reboot, because OVS has "remembered" the
configuration, and the restarted containers cannot start because of those
pre-configured ports.
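In other words, the manual cycle currently looks something like this (assuming the bridge is vbr0, as in my earlier mails):

sudo ovs-vsctl --if-exists del-port vbr0 veth-lub4
sudo lxc-start -n lub4
sudo ovs-vsctl set port veth-lub4 tag=800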

TIA,
Ruzsi

Re: [lxc-users] Unprivileged container strange behaviour

2016-07-28 Thread Ruzsinszky Attila
Hi,

I found the description of this networking problem:
lxc.network.veth.pair = veth-lub8 is ignored for security reasons.

I found this URL in the topic:
http://blog.scottlowe.org/2014/01/23/automatically-connecting-lxc-to-open-vswitch/

Does that approach work with an unprivileged container?

Ubuntu 16.04 doesn't run dhclient unless networking is brought up at startup,
or it just sits waiting for DHCP.
Because the host-side Ethernet interface has no permanent name, I can't
configure OVS before starting the LXC container, so I have to run dhclient
manually after the login prompt appears. It is very ugly. :-(
Is there a better solution? Only a fixed IP address for the VM?
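For reference, the approach in that post is an lxc.network.script.up hook that attaches the host-side veth to the OVS bridge. A rough sketch, assuming bridge vbr0 and tag 800 as in my setup (the script path is just an example, and the host-side veth name is passed as the fifth argument, if I read the post correctly):

#!/bin/bash
# /etc/lxc/ovsup -- referenced from the container config as:
#   lxc.network.script.up = /etc/lxc/ovsup
# $5 = host-side veth device name handed over by lxc
BRIDGE=vbr0
ovs-vsctl --may-exist add-br "$BRIDGE"
ovs-vsctl --if-exists del-port "$BRIDGE" "$5"
ovs-vsctl add-port "$BRIDGE" "$5" tag=800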

TIA,
Ruzsi

[lxc-users] Unprivileged container strange behaviour

2016-07-27 Thread Ruzsinszky Attila
Hi,

This is my 1st unprivileged container.

Host is Ubuntu 14.04 64 bit.
Container is Ubuntu 16.04 64 bit. It was created by lxc-create.
LXC version is: 2.0.3.

"nincs csatolva" means in English: not mounted.

rattila@fcubi:~$ lxc-start -F -n lub8
umount: /usr/lib/x86_64-linux-gnu/lxc/sys/fs/cgroup/blkio/user/1000.user/c4.session/lxc/lub8 nincs csatolva
umount: /usr/lib/x86_64-linux-gnu/lxc/sys/fs/cgroup/cpuset/user/1000.user/c4.session/lxc/lub8 nincs csatolva
umount: /usr/lib/x86_64-linux-gnu/lxc/sys/fs/cgroup/cpu/user/1000.user/c4.session/lxc/lub8 nincs csatolva
umount: /usr/lib/x86_64-linux-gnu/lxc/sys/fs/cgroup/cpuacct/user/1000.user/c4.session/lxc/lub8 nincs csatolva
umount: /usr/lib/x86_64-linux-gnu/lxc/sys/fs/cgroup/devices/user/1000.user/c4.session/lxc/lub8 nincs csatolva
umount: /usr/lib/x86_64-linux-gnu/lxc/sys/fs/cgroup/freezer/user/rattila/1/lxc/lub8 nincs csatolva
umount: /usr/lib/x86_64-linux-gnu/lxc/sys/fs/cgroup/hugetlb/user/1000.user/c4.session/lxc/lub8 nincs csatolva
umount: /usr/lib/x86_64-linux-gnu/lxc/sys/fs/cgroup/memory/user/rattila/1/lxc/lub8 nincs csatolva
umount: /usr/lib/x86_64-linux-gnu/lxc/sys/fs/cgroup/perf_event/user/1000.user/c4.session/lxc/lub8 nincs csatolva
umount: /usr/lib/x86_64-linux-gnu/lxc/sys/fs/cgroup/systemd/user/1000.user/c4.session/lxc/lub8 nincs csatolva
systemd 229 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN)
Detected virtualization lxc.
Detected architecture x86-64.

Welcome to Ubuntu 16.04.1 LTS!

Set hostname to .
Failed to read AF_UNIX datagram queue length, ignoring: No such file or
directory
Failed to install release agent, ignoring: No such file or directory
Couldn't move remaining userspace processes, ignoring: Invalid argument
[  OK  ] Reached target Encrypted Volumes.
...
[  OK  ] Started Journal Service.
[FAILED] Failed to mount Huge Pages File System.
See 'systemctl status dev-hugepages.mount' for details.
[  OK  ] Started Remount Root and Kernel File Systems.
 Starting Load/Save Random Seed...
...
[FAILED] Failed to start Set console scheme.
See 'systemctl status setvtrgb.service' for details.
[  OK  ] Started getty on tty2-tty6 if dbus and logind are not available.
[FAILED] Failed to start Raise network interfaces.
See 'systemctl status networking.service' for details.
[  OK  ] Reached target Network.
...
(I got the login prompt after 5 minutes, because of the failed network.)

root@lub8:~# init 0
[  OK  ] Stopped target Timers.
[  OK  ] Reached target Unmount All Filesystems.
...
[  OK  ] Stopped Load/Save Random Seed.
[FAILED] Failed unmounting /dev/null.
[  OK  ] Unmounted /dev/zero.
[  OK  ] Reached target Shutdown.

Broadcast message from systemd-journald@lub8 (Thu 2016-07-28 05:23:35 UTC):

systemd[1]: Caught , dumped core as pid 699.


Broadcast message from systemd-journald@lub8 (Thu 2016-07-28 05:23:35 UTC):

systemd[1]: Freezing execution.

... and then it froze, waiting for something.
From the above, "[FAILED] Failed to start Raise network interfaces." is expected,
because I haven't configured OVS for this container.

Switching off lub8 (to break out of the freeze):
rattila@fcubi:~/.local/share/lxc/lub8$ lxc-stop -n lub8
lxc-stop: monitor.c: lxc_monitor_read_fdset: 244 No such file or directory
- client failed to recv (monitord died?) No such file or directory
rattila@fcubi:~/.local/share/lxc/lub8$

Another problem:
From the config file for lub8:

# Network configuration
lxc.network.type = veth
lxc.network.link = vbr0
lxc.network.veth.pair = veth-lub8
lxc.network.flags = up

There is no veth-lub8 interface on the host. Instead there is a generated
Ethernet interface, which gets a new name every time I (re)start lub8:
vethSIIWU5 Link encap:Ethernet  HWaddr fe:db:7d:ba:17:5a
For the OVS configuration I need a permanent interface name!

Can I solve these problems?

TIA,
Ruzsi

Re: [lxc-users] lxc-create using offline mode

2016-07-27 Thread Ruzsinszky Attila
Hi,

I think the mentioned description is wrong.
I checked my cache: the rootfs was unpacked, and there is nothing in it from
meta.tar.xz.

The good news for me is that lxc-create (2.0.3) does work with a Squid proxy +
authentication! ;-)
So I used lxc-create as usual. It was a long process, much longer than copying
rootfs.tar.xz.
I hope it is working ...
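If anyone else is behind an authenticating proxy: the usual way to point the template's downloads at it is to export the proxy variables before running lxc-create (host, port and credentials below are placeholders):

export http_proxy=http://user:password@proxy.example.com:3128
export https_proxy=$http_proxy
lxc-create -n lub7 -t ubuntu -- -r xenial -a amd64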

TIA,
Ruzsi

[lxc-users] lxc-create using offline mode

2016-07-26 Thread Ruzsinszky Attila
Hi,

I found this:
https://lists.linuxcontainers.org/pipermail/lxc-devel/2014-July/009784.html

I followed the description for Ubuntu Xenial amd64.

root@fcubi:~# LANG="C";lxc-create -n lub7 -t ubuntu -- -r xenial -a amd64
Checking cache download in /var/cache/lxc/xenial/rootfs-amd64 ...
Copy /var/cache/lxc/xenial/rootfs-amd64 to /var/lib/lxc/lub7/rootfs ...
Copying rootfs to /var/lib/lxc/lub7/rootfs ...
/usr/share/lxc/templates/lxc-ubuntu: line 95:
/var/lib/lxc/lub7/rootfs/etc/network/interfaces: No such file or directory
lxc-create: lxccontainer.c: create_run_template: 1290 container creation
template for lub7 failed
lxc-create: lxc_create.c: main: 318 Error creating container lub7

lxc-create doesn't work with our authenticating Squid proxy, so I have to
download the rootfs manually.
I can set up a new container by hand (unpacking the rootfs and writing a new
config file), but I'd like to use lxc-create. Is that possible in "offline"
mode?

TIA,
Ruzsi

Re: [lxc-users] LXC networking stop working between containers and real network

2016-07-20 Thread Ruzsinszky Attila
Hi Alex,

Thanks for your information!

I'll test soon what you wrote.
I did a workaround. I forgot the lxcbr0 bridge and my LXC containers were
"connected" directly into my vbr0 in OVS. It was almost perfect without any
scripting except I has to tagging those interface and I did it by hand
(tag=myVLANid).
It is working perfectly.
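The tagging itself is just one ovs-vsctl call per container interface, e.g. (names from my config below):

sudo ovs-vsctl set port veth-lub4 tag=800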

Is that a bug or a feature of Ubuntu's bridge? Or a kernel problem? Under
Fedora 23 everything works, but I find directly connected containers cleaner
than the double bridge (lxcbr0 under vbr0). In theory both setups should work,
so I don't understand exactly why one doesn't.

Here is my LXC container's config:
# Network configuration
lxc.network.type = veth
lxc.network.flags = up
#lxc.network.link = lxcbr0
lxc.network.link = vbr0
lxc.network.veth.pair=veth-lub4
#lxc.network.hwaddr = 00:16:3e:9f:1f:b8

OVS:
Bridge "vbr0"
Port "vbr0"
Interface "vbr0"
type: internal
Port "mgmt0"
tag: 999
Interface "mgmt0"
type: internal
Port "veth-lub4"
tag: 800
Interface "veth-lub4"
Port "gre0"
Interface "gre0"
type: gre
options: {remote_ip="192.168.52.141"}
Port "mgmtlxc0"
tag: 800
Interface "mgmtlxc0"
type: internal
Port "veth-lub5"
tag: 800
Interface "veth-lub5"
Port "veth-lub6"
tag: 800
Interface "veth-lub6"
ovs_version: "2.0.2"

On Fedora 23 the normal config:
# Network configuration
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.hwaddr = 00:16:3e:9f:1f:b8

 Bridge "vbr0"
Port "lxcbr0"
tag: 800
Interface "lxcbr0"
Port "mgmtlxc0"
tag: 800
Interface "mgmtlxc0"
type: internal
Port "vsar2_111"
tag: 100
Interface "vsar2_111"
Port "vlan10"
tag: 10
Interface "vlan10"
type: internal
Port "vsar2_a1"
tag: 999
Interface "vsar2_a1"
Port "mgmt0"
tag: 999
Interface "mgmt0"
type: internal
Port "vsar3_111"
tag: 100
Interface "vsar3_111"
Port "vbr0"
Interface "vbr0"
type: internal
Port "vsar3_a1"
tag: 999
Interface "vsar3_a1"
Port "gre0"
Interface "gre0"
type: gre
options: {remote_ip="192.168.52.141"}
Port "vx0"
Interface "vx0"
type: vxlan
options: {remote_ip="192.168.52.141"}

TIA,
Ruzsi

[lxc-users] LXC networking stop working between containers and real network

2016-07-18 Thread Ruzsinszky Attila
Hi,

There is an Ubuntu 14.04 64 bit up to date host.
LXC version is: 2.0.3 (from backport packages)
OpenvSwitch: 2.0.2.

Container1: Ubuntu 14.04
Container2: Ubuntu 16.04 (both of them were installed from rootfs.tar.xz,
because lxc-create doesn't work with our authenticating Squid proxy)

Both containers work perfectly in "standalone" mode.
I use lxcbr0 as a bridge between the containers. There is a dnsmasq for DHCP
and it works: the containers get IP addresses (from the 10.0.3.0/24 range).
There is an OVS bridge, vbr0, and lxcbr0 is one of its ports on the host. The
real Ethernet interface is eth0, which is connected to the real network. There
is a mgmtlxc0 virtual management interface whose IP is 10.0.3.2/24. I can ping
every machine in the 10.0.3.0/24 range.
The MAC addresses of the containers are different; I checked them.
mgmtlxc0 and lxcbr0 are tagged for VLAN (tag=800 in the OVS config); see the
commands below.
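For reference, the OVS side of this was created with commands roughly like these (names and tag as above):

ovs-vsctl add-port vbr0 lxcbr0 tag=800
ovs-vsctl add-port vbr0 mgmtlxc0 tag=800 -- set interface mgmtlxc0 type=internal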

I want to MASQUERADE the lxc-net to the real network:
Chain POSTROUTING (policy ACCEPT 54626 packets, 5252K bytes)
 pkts bytes target      prot opt in  out  source        destination
  246 20520 MASQUERADE  all  --  *   *    10.0.3.0/24  !10.0.3.0/24
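(For reference, that rule corresponds to something like:

iptables -t nat -A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -j MASQUERADE

which, as far as I know, is essentially what lxc-net itself sets up for lxcbr0.)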

Routing table:
root@fcubi:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         real_router     0.0.0.0         UG    0      0        0 eth0
LXCnet          *               255.255.255.0   U     0      0        0 mgmtlxc0
FCnet           *               255.255.255.0   U     1      0        0 eth0

The problem is:
I try to ping from container1 (lub4) to a host on the real network. It works.
I try to ping from container2 (lub5) to the same host and it does not work!
The DNS resolution is OK, but there is no answer from the real host.

I checked the traffic on eth0 of lub4 and lub5 (inside the containers). I can
see the ICMP echo request packets.
They arrive at the host's lxcbr0 interface. I think that is good.
I checked the host's mgmtlxc0 interface, which is the routing interface at the
IP level. I can see the request packets there too.
ip4_forwarding is enabled (=1).
The next interface is eth0, and there is no traffic from the containers on it!
I filtered for ICMP and saw no requests! So the host "filters out" (or does not
route) my masqueraded ICMP packets.
I don't think it is a MASQ problem, because without MASQUERADING I should still
see the outgoing requests, just with the wrong source IP (10.0.3.x); of course
there would be no answer, because the real host knows nothing about routing
back to the 10.0.3.0 lxcnet. But there are no outgoing packets at all.
I tried removing all iptables rules except MASQ and nothing changed.
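The per-interface checks above boil down to capturing ICMP on each hop, e.g.:

tcpdump -ni lxcbr0 icmp
tcpdump -ni mgmtlxc0 icmp
tcpdump -ni eth0 icmp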

If I ping between lub4 and lub5 it works (virtual to virtual), while pinging
the real network does not.

If I restart the containers one by one and swap the order of the ping test
(lub5 first, lub4 second), the second one will not ping, so it does not depend
on the container's OS version.

I think the problem may be in MASQ or in the routing between mgmtlxc0 and eth0.
netstat-nat doesn't work and I don't know why.
Do you have any clue?

I've got another host, Fedora 23 64 bit (OVS 2.5), with three U14.04
containers, and that one seems to work.

I'll do some more tests, for example making a new U14.04 container, because on
F23 the containers' versions are all the same.
LXD is installed but not used or configured.

TIA,
Ruzsi