Re: [lxc-users] Bonding inside container? Or any other ideas?

2017-11-22 Thread Lai Wei-Hwa
Hi Andrey, 

Are you trying to use bond0 directly as the container's interface? If so, I think 
that's going to cause issues. You need an interface on top of bond0. 

Here is my interfaces file; note that you're going to need some device (in my 
case, the bridge br0) between bond0 and LXC. 


lai@R610-LXD1-Dev-DMZ:~$ cat /etc/network/interfaces 
# This file describes the network interfaces available on your system 
# and how to activate them. For more information, see interfaces(5). 

source /etc/network/interfaces.d/* 

# The loopback network interface 
auto lo 
iface lo inet loopback 

auto eno1 
iface eno1 inet manual 
    bond-master bond0 

auto eno2 
iface eno2 inet manual 
    bond-master bond0 

auto eno3 
iface eno3 inet manual 
    bond-master bond0 

auto eno4 
iface eno4 inet manual 
    bond-master bond0 

auto bond0 
iface bond0 inet manual 
    bond-mode 4 
    bond-slaves none 
    bond-miimon 100 
    bond-lacp-rate 1 
    bond-downdelay 200 
    bond-updelay 200 
    bond-xmit-hash-policy layer2+3 

auto br0 
iface br0 inet static 
    bridge_ports bond0 
    bridge_maxwait 10 
    address 10.1.1.139 
    netmask 255.255.0.0 
    broadcast 10.1.255.255 
    network 10.1.0.0 
    gateway 10.1.1.5 
    dns-nameservers 10.1.1.84 8.8.8.8 
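
If you want to double-check that bond0 actually ended up enslaved to br0, 
either of these should show it (bridge-utils or iproute2, whichever you have 
installed): 

    brctl show br0     # bond0 should be listed under "interfaces" 
    bridge link show   # bond0 should report "master br0" 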


From: "Andrey Repin" <anrdae...@yandex.ru> 
To: "lxc-users" <lxc-users@lists.linuxcontainers.org>, "lxc-users" 
<lxc-users@lists.linuxcontainers.org> 
Sent: Wednesday, November 22, 2017 5:25:11 PM 
Subject: Re: [lxc-users] Bonding inside container? Or any other ideas? 

> 802.3ad (mode 4) requires switch support. Unfortunately, my switch is 
> "managed" but does not support this essential feature. 
> 
> After having a hard time with some of the configurations, I avoid brctl 
> like the plague. 

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Bonding inside container? Or any other ideas?

2017-11-22 Thread Andrey Repin
Greetings, Lai Wei-Hwa!

> I'm not sure I follow. I have multiple servers running Bond Mode 4 (for
> LACP/802.3ad).

802.3ad (mode 4) requires switch support.
Unfortunately, my switch is "managed" but does not support this essential
feature.
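
This is easy to check on the host side, by the way (assuming the bonding
driver's proc interface is available):

    cat /proc/net/bonding/bond0

With a cooperating switch you should see "Bonding Mode: IEEE 802.3ad Dynamic
link aggregation" and a real partner MAC address rather than
00:00:00:00:00:00.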

> I then created a bridge, br0 which becomes the main (only) interface.

After having a hard time with some of the configurations, I avoid brctl like
the plague. It may be the tool for bridging physical interfaces, but for a
single host it is extreme overhead.


-- 
With best regards,
Andrey Repin
Thursday, November 23, 2017 01:20:52

Sorry for my terrible English...

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Bonding inside container? Or any other ideas?

2017-11-21 Thread Lai Wei-Hwa
I'm not sure I follow. I have multiple servers running bond mode 4 (for 
LACP/802.3ad). I then created a bridge, br0, which becomes the main (only) 
interface. I'm using flat networking with no NAT between containers, and edited 
the profiles to use br0. Everything works for me. I can't speak to the other 
bond modes, though. 
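
For what it's worth, I edited the profile YAML, but the one-liner below should 
be equivalent (profile and device names are examples; adjust to your setup): 

    lxc profile device add default eth0 nic nictype=bridged parent=br0 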

Thanks! 
Lai

- Original Message -
From: "Andrey Repin" <anrdae...@yandex.ru>
To: "lxc-users" <lxc-users@lists.linuxcontainers.org>
Sent: Tuesday, November 21, 2017 6:38:55 PM
Subject: [lxc-users] Bonding inside container? Or any other ideas?

> Some time ago I managed to install a second network card in one of my
> servers, and I have been experimenting with bonding on the host. The setup
> is: a host with two cards in one bond0 interface, and a number of
> containers sitting as macvlan devices on top of bond0.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Bonding inside container? Or any other ideas?

2017-11-21 Thread Andrey Repin
Greetings, All!

Some time ago I managed to install a second network card in one of my
servers, and I have been experimenting with bonding on the host.
The setup is: a host with two cards in one bond0 interface, and a number of
containers sitting as macvlan devices on top of bond0.
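
Roughly, each container carries something like this in its config (LXC 2.x key
names; I assume bridge mode so the containers can see each other, and the
hwaddr is a placeholder):

    lxc.network.type = macvlan
    lxc.network.macvlan.mode = bridge
    lxc.network.link = bond0
    lxc.network.flags = up
    lxc.network.hwaddr = 00:16:3e:xx:xx:xx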

I had some success with bond mode 5 (balance-tlb): roughly a 2:1 split in TX
counts with five clients, but all upload traffic is weighted onto one network
card.

An attempt to change the mode to balance-alb (mode 6) immediately broke the
loading of roaming Windows profiles; the issue immediately disappears once I
switch back to mode 5.
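
For the record, I switch modes via sysfs; the mode can only be changed while
the bond is down (and, on some kernels, only with no slaves attached):

    ip link set bond0 down
    echo balance-alb > /sys/class/net/bond0/bonding/mode
    ip link set bond0 up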

I suppose this happens because the bonding balancer creates havoc between the
macvlan MAC addresses and the bond's own MAC addresses, which the network can't
easily resolve, or the Windows clients get picky and refuse to load files from
a source whose MAC keeps changing.

While I could go back to an internal LXC bridge and route requests between it
and bond0 on the host to dissolve the MAC issue, I'd like to see whether a more
direct solution can be found, such as creating a bond inside the container.

Or, if not, is there any other way to use bonding while maintaining broadcast
visibility between the containers and the rest of the network?
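
What I have in mind, roughly (completely untested; assumes a privileged
container, example NIC names, and both physical cards handed over to the
container, so the host itself would lose them):

    # host-side container config: pass both NICs in as phys devices
    lxc.network.type = phys
    lxc.network.link = enp1s0

    lxc.network.type = phys
    lxc.network.link = enp2s0

    # inside the container: build the bond
    ip link add bond0 type bond mode balance-tlb
    ip link set enp1s0 down
    ip link set enp1s0 master bond0
    ip link set enp2s0 down
    ip link set enp2s0 master bond0
    ip link set bond0 up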


-- 
With best regards,
Andrey Repin
Wednesday, November 22, 2017 02:23:22

Sorry for my terrible English...

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users