Re: [lxc-users] Networking

2020-03-25 Thread Saint Michael
I did not explain myself well. Suppose you have a large network of machines
and containers, all with public IPs, not private ones. You constantly bring
up new containers and need to assign new IPs. You either scan the network
each time you need a new IP, or use DHCP to hand you one and then change
that IP to static. My industry only uses public IPs.

On Wed, Mar 25, 2020 at 5:05 PM Andrey Repin  wrote:

> Greetings, Saint Michael!
>
> > It is a common practice to trust the DHCP server to keep track of free
> > IPs in a large network, like a /21, and once the DHCP assigns an IP
> > address, we adopt it as static and flag it as such in the router.
> > Otherwise, you need to scan the whole network every time.
>
> Why scan? You just say that "this IP block is assigned statically" and
> call it
> a day.
>
>
> --
> With best regards,
> Andrey Repin
> Wednesday, March 25, 2020 23:50:46
>
> Sorry for my terrible english...
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Networking

2020-03-25 Thread Andrey Repin
Greetings, Saint Michael!

> It is a common practice to trust the DHCP server to keep track of free IPs
> in a large network, like a /21, and once the DHCP assigns an IP address, we
> adopt it as static and flag it as such in the router.
> Otherwise, you need to scan the whole network every time.

Why scan? You just say that "this IP block is assigned statically" and call it
a day.


-- 
With best regards,
Andrey Repin
Wednesday, March 25, 2020 23:50:46

Sorry for my terrible english...
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Networking

2020-03-25 Thread Saint Michael
It is a common practice to trust the DHCP server to keep track of free IPs
in a large network, like a /21, and once the DHCP assigns an IP address, we
adopt it as static and flag it as such in the router.
Otherwise, you need to scan the whole network every time.



On Wed, Mar 25, 2020 at 3:20 PM Andrey Repin  wrote:

> Greetings, Saint Michael!
>
> > I use L2. Can somebody clarify the advantages/disadvantages of L2, L3,
> > and L3S?
> > I also need to be able to use DHCP inside the container. On first boot I
> > get an IP from DHCP, then set the interface down and turn that IP into a
> > static one.
>
> This seems to be overengineered.
> Why do you need DHCP, if you are going to use a static IP anyway?
> Can't you do it differently?
>
>
> --
> With best regards,
> Andrey Repin
> Wednesday, March 25, 2020 22:09:45
>
> Sorry for my terrible english...
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Networking

2020-03-25 Thread Andrey Repin
Greetings, Saint Michael!

> I use L2. Can somebody clarify the advantages/disadvantages of L2, L3, and L3S?
> I also need to be able to use DHCP inside the container. On first boot I
> get an IP from DHCP, then set the interface down and turn that IP into a
> static one.

This seems to be overengineered.
Why do you need DHCP, if you are going to use a static IP anyway?
Can't you do it differently?


-- 
With best regards,
Andrey Repin
Wednesday, March 25, 2020 22:09:45

Sorry for my terrible english...

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Networking

2020-03-25 Thread Saint Michael
I use L2. Can somebody clarify the advantages/disadvantages of L2, L3, and L3S?
I also need to be able to use DHCP inside the container. On first boot I get
an IP from DHCP, then set the interface down and turn that IP into a static
one.
Anyway, ipvlan should work as simply as the other network options.
Another question for the networking gurus: how do you represent this
configuration with netplan?
---
auto lo eth0 eth1
iface lo inet loopback
allow-hotplug eth0 eth1
iface eth0 inet dhcp
iface eth1 inet static
    address X.XX.X.215
    netmask 255.255.255.0
    mtu 1500
    post-up echo "Setting up $IFACE"
    post-up ip route replace default via X.XX.X.1 dev $IFACE
    post-up ip rule add from X.XX.X.215 table $IFACE
    post-up ip route replace default via X.XX.X.1 dev $IFACE table $IFACE
    post-up ip rule add iif $IFACE table $IFACE
    post-up ip route replace default via 192.168.88.1 dev eth0
    post-up ip route show table $IFACE
given /etc/iproute2/rt_tables:
1   eth0
2   eth1

The purpose is to send out eth1 only the packets going to X.XX.X.0, which is
a public IP network, and everything else via eth0 to 192.168.88.1.
I tried to figure this scheme out with Netplan and I cannot see the light.
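The closest netplan rendering I can sketch (assuming netplan 0.95 or newer
for routing-policy support, keeping the same redacted addresses, and with
table 2 standing in for eth1 as per rt_tables) would be something like:

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
    eth1:
      addresses: [X.XX.X.215/24]
      mtu: 1500
      routes:
        # default route in table 2 (eth1), like "ip route replace ... table eth1"
        - to: 0.0.0.0/0
          via: X.XX.X.1
          table: 2
      routing-policy:
        # like "ip rule add from X.XX.X.215 table eth1"
        - from: X.XX.X.215/32
          table: 2

As far as I can tell there is no netplan equivalent of the "ip rule add iif
$IFACE" rule, so that part would still need a hook outside netplan.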


On Wed, Mar 25, 2020, 5:31 AM Fajar A. Nugraha  wrote:

> On Tue, Mar 24, 2020 at 6:22 PM Saint Michael  wrote:
> >
> > That scheme would not work in my case. I have two interfaces inside the
> > container, and each one talks to a different network, for business
> > reasons. I use policy-based routing to make sure that packets go to the
> > right places. I need the container to hold a full configuration. In my
> > case, I use ifupdown, not netplan, since my containers are for an older
> > version of Debian.
> > It is "not right" that ipvlan does not work out-of-the-box like macvlan
> > or veth. Somebody has to fix it. I cannot use macvlan because VMware only
> > allows multiple MACs if the entire network is set in promiscuous mode,
> > and that kills performance. So basically the only workaround is ipvlan.
> > As I said, if you use type=phys and ipvlan inside the host, it works
> > fine, without altering the container.
>
>
> Apparently this also works, as long as you have the same ip in
> container config and inside the container
>
> Container config:
> # Network configuration
> lxc.net.0.name = eth0
> lxc.net.0.type = ipvlan
> lxc.net.0.ipvlan.mode = l3s
> lxc.net.0.l2proxy = 1
> lxc.net.0.link = eth0
> lxc.net.0.ipv4.address = 10.0.3.222
>
> inside the container -> normal networking config (e.g.
> /etc/netplan/10-lxc.yaml)
> network:
>   version: 2
>   ethernets:
>     eth0:
>       dhcp4: no
>       addresses: [10.0.3.222/24]
>       gateway4: 10.0.3.1
>       nameservers:
>         addresses: [10.0.3.1]
>
> --
> Fajar
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Networking

2020-03-24 Thread Saint Michael
The 6.7 solution from VMware is very expensive in terms of licensing;
nobody is going to upgrade because of that. The 6.5 solution is useless
because the MAC addresses never age, they just accumulate. The real
solution is to use a 5.x kernel plus ipvlan, and I already know it works
fine, with no need to set the whole network in promiscuous mode. It would
be ideal if LXC made ipvlan work for real; meanwhile, just add many ipvlan
interfaces to the host and export each one to a different container as
type=phys.
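A minimal sketch of that workaround, assuming the host NIC is eth0 (the
name ipvl0 is illustrative):

# on the host: one ipvlan interface (L2 mode) per container
ip link add ipvl0 link eth0 type ipvlan mode l2

# in the container config: hand it over as a physical NIC; it moves into
# the container's network namespace while the container runs
lxc.net.0.type = phys
lxc.net.0.link = ipvl0
lxc.net.0.name = eth0
lxc.net.0.flags = up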

On Tue, Mar 24, 2020 at 8:20 AM Michael Honeyman 
wrote:

> I don't often write to this list so apologies as I'm probably messing up
> the thread somehow.
>
> Saint Michael wrote: "... Vmware only allows multiple macs if the entire
> network is set in promiscuous mode..."
>
> Not strictly LXC/LXD related, but VMware has implemented two solutions to
> this problem since 6.5. They first released the MAC-learning dVfilter fling
> which still requires promiscuous mode but removed the flooding behaviour
> (more like a filtered hub than a switch - not sure if this improves the
> performance problem).
>
> There is also the Learnswitch which requires a distributed virtual switch,
> but implements proper MAC flooding and learning, which removes the
> requirement for promiscuous mode. This allows the VM to have multiple MACs
> behind one NIC, just as you'd expect on a physical network. This fling was
> released as a standard feature in 6.7, but as it requires DVS it is
> unfortunately locked behind a license. I haven't seen if the MAC-learning
> dVfilter fling has been ported to vSphere 6.7 yet or not.
>
> Hope that helps,
> Michael.
>

Re: [lxc-users] Networking

2020-03-24 Thread Saint Michael
That scheme would not work in my case. I have two interfaces inside the
container, and each one talks to a different network, for business reasons.
I use policy-based routing to make sure that packets go to the right
places. I need the container to hold a full configuration. In my case, I
use ifupdown, not netplan, since my containers are for an older version of
Debian.
It is "not right" that ipvlan does not work out-of-the-box like macvlan or
veth. Somebody has to fix it. I cannot use macvlan because VMware only
allows multiple MACs if the entire network is set in promiscuous mode, and
that kills performance. So basically the only workaround is ipvlan. As I
said, if you use type=phys and ipvlan inside the host, it works fine,
without altering the container.

On Tue, Mar 24, 2020 at 4:20 AM Fajar A. Nugraha  wrote:

> On Mon, Mar 23, 2020 at 11:48 PM Saint Michael  wrote:
> >
> > It is supported, there is no error, but there is no communication at all
> with the gateway. If you start the same exact network configuration in the
> container with the type=phys, it works fine, ergo, the issue is type=ipvlan.
>
> "exact network configuration" inside the container? I'm pretty sure it
> would fail.
>
> If you read what I wrote earlier:
> "
> set /etc/resolv.conf on the container manually, and disable network
> interface setup inside the container.
> "
>
> This works in my test (using lxc 3.2.1 from
> https://launchpad.net/~ubuntu-lxc/+archive/ubuntu/daily):
> # Network configuration
> lxc.net.0.name = eth0
> lxc.net.0.type = ipvlan
> lxc.net.0.ipvlan.mode = l3s
> lxc.net.0.l2proxy = 1
> lxc.net.0.link = eth0
> lxc.net.0.ipv4.gateway = dev
> lxc.net.0.ipv4.address = 10.0.3.222/32
> lxc.net.0.flags = up
>
>
> While inside the container, setup resolv.conf manually, and disable
> networking setup (e.g. removing everything under /etc/netplan/ on
> ubuntu should work).
>
> Common issue with macvlan/ipvlan of "container not being able to
> contact the host" would still apply.
>
> --
> Fajar
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Networking

2020-03-24 Thread Fajar A. Nugraha
On Mon, Mar 23, 2020 at 11:48 PM Saint Michael  wrote:
>
> It is supported, there is no error, but there is no communication at all with 
> the gateway. If you start the same exact network configuration in the 
> container with the type=phys, it works fine, ergo, the issue is type=ipvlan.

"exact network configuration" inside the container? I'm pretty sure it
would fail.

If you read what I wrote earlier:
"
set /etc/resolv.conf on the container manually, and disable network
interface setup inside the container.
"

This works in my test (using lxc 3.2.1 from
https://launchpad.net/~ubuntu-lxc/+archive/ubuntu/daily):
# Network configuration
lxc.net.0.name = eth0
lxc.net.0.type = ipvlan
lxc.net.0.ipvlan.mode = l3s
lxc.net.0.l2proxy = 1
lxc.net.0.link = eth0
lxc.net.0.ipv4.gateway = dev
lxc.net.0.ipv4.address = 10.0.3.222/32
lxc.net.0.flags = up


While inside the container, setup resolv.conf manually, and disable
networking setup (e.g. removing everything under /etc/netplan/ on
ubuntu should work).

Common issue with macvlan/ipvlan of "container not being able to
contact the host" would still apply.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Networking

2020-03-23 Thread Saint Michael
It is supported, there is no error, but there is no communication at all
with the gateway. If you start the same exact network configuration in the
container with the type=phys, it works fine, ergo, the issue is type=ipvlan.


On Mon, Mar 23, 2020 at 12:37 PM Serge E. Hallyn  wrote:

> Hi,
>
> just to make sure i understand right - you mean it is not supported in
> lxc-user-nic?  And never was, so not a regression?
>
> Or has something regressed?
>
> On Mon, Mar 23, 2020 at 09:15:57AM -0400, Saint Michael wrote:
> > As I said, type=ipvlan does not work on the latest version of LXC from
> > git. BUT there is a workaround: create as many ipvlan interfaces as you
> > need at the host level, which can then be used as type="phys" networking
> > on containers. That works.
> >
> >
> >
> > On Mon, Mar 23, 2020 at 8:26 AM Fajar A. Nugraha  wrote:
> >
> > > On Fri, Mar 20, 2020 at 5:36 PM Saint Michael 
> wrote:
> > > >
> > > > I use plain LXC, not LXD. Is ipvlan supported?
> > >
> > >
> https://linuxcontainers.org/lxc/manpages//man5/lxc.container.conf.5.html
> > >
> > > --
> > > Fajar
> > > ___
> > > lxc-users mailing list
> > > lxc-users@lists.linuxcontainers.org
> > > http://lists.linuxcontainers.org/listinfo/lxc-users
> > >
>
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Networking

2020-03-23 Thread Serge E. Hallyn
Hi,

just to make sure i understand right - you mean it is not supported in
lxc-user-nic?  And never was, so not a regression?

Or has something regressed?

On Mon, Mar 23, 2020 at 09:15:57AM -0400, Saint Michael wrote:
> As I said, type=ipvlan does not work on the latest version of LXC from git.
> BUT there is a workaround: create as many ipvlan interfaces as you need at
> the host level, which can then be used as type="phys" networking on
> containers. That works.
> 
> 
> 
> On Mon, Mar 23, 2020 at 8:26 AM Fajar A. Nugraha  wrote:
> 
> > On Fri, Mar 20, 2020 at 5:36 PM Saint Michael  wrote:
> > >
> > > I use plain LXC, not LXD. Is ipvlan supported?
> >
> > https://linuxcontainers.org/lxc/manpages//man5/lxc.container.conf.5.html
> >
> > --
> > Fajar
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users
> >

> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Networking

2020-03-23 Thread Saint Michael
As I said, type=ipvlan does not work on the latest version of LXC from git.
BUT there is a workaround: create as many ipvlan interfaces as you need at
the host level, which can then be used as type="phys" networking on
containers. That works.



On Mon, Mar 23, 2020 at 8:26 AM Fajar A. Nugraha  wrote:

> On Fri, Mar 20, 2020 at 5:36 PM Saint Michael  wrote:
> >
> > I use plain LXC, not LXD. Is ipvlan supported?
>
> https://linuxcontainers.org/lxc/manpages//man5/lxc.container.conf.5.html
>
> --
> Fajar
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Networking

2020-03-23 Thread Fajar A. Nugraha
On Fri, Mar 20, 2020 at 5:36 PM Saint Michael  wrote:
>
> I use plain LXC, not LXD. Is ipvlan supported?

https://linuxcontainers.org/lxc/manpages//man5/lxc.container.conf.5.html

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Networking

2020-03-20 Thread Saint Michael
I use plain LXC, not LXD. Is ipvlan supported?
Also, my containers have public IPs, on the same network as the host. This
is why I cannot use NAT.




On Fri, Mar 20, 2020 at 12:02 AM Fajar A. Nugraha  wrote:

> On Thu, Mar 19, 2020 at 12:02 AM Saint Michael  wrote:
> >
> > The question is: how do we share the networking from the host to the
> > containers, all of it? Each container will use one IP, but they could
> > see all the IPs on the host. This would solve the issue, since a single
> > network interface with a single MAC address can be associated with
> > hundreds of IP addresses.
>
> If you mean "how can a container have its own IP on the same network
> as the host, while also sharing the host's MAC address", there are
> several ways.
>
> The most obvious one is NAT. You NAT each of the host's IP addresses to
> the corresponding VMs.
>
>
> A new-ish (but somewhat cumbersome) method is to use ipvlan:
> https://lxd.readthedocs.io/en/latest/instances/#nictype-ipvlan
>
> e.g.:
>
> # lxc config show tiny
> ...
> devices:
>   eth0:
> ipv4.address: 10.0.3.101
> name: eth0
> nictype: ipvlan
> parent: eth0
> type: nic
>
> set /etc/resolv.conf on the container manually, and disable network
> interface setup inside the container. You'd end up with something like
> this inside the container:
>
> tiny:~# ip ad li eth0
> 10: eth0@if65:  mtu 1500
> qdisc noqueue state UNKNOWN qlen 1000
> ...
> inet 10.0.3.101/32 brd 255.255.255.255 scope global eth0
> ...
>
> tiny:~# ip r
> default dev eth0
>
>
> Other servers on the network will see the container using the host's MAC
>
> # arp -n 10.0.3.162 <=== the host
> Address  HWtype  HWaddress   Flags Mask  Iface
> 10.0.3.162   ether   00:16:3e:77:1f:92   C     eth0
>
> # arp -n 10.0.3.101 <=== the container
> Address  HWtype  HWaddress   Flags Mask  Iface
> 10.0.3.101   ether   00:16:3e:77:1f:92   C     eth0
>
>
> if you use plain lxc instead of lxd, look for similar configuration.
>
> --
> Fajar
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Networking

2020-03-19 Thread Fajar A. Nugraha
On Thu, Mar 19, 2020 at 12:02 AM Saint Michael  wrote:
>
> The question is: how do we share the networking from the host to the
> containers, all of it? Each container will use one IP, but they could see
> all the IPs on the host. This would solve the issue, since a single network
> interface with a single MAC address can be associated with hundreds of IP
> addresses.

If you mean "how can a container have its own IP on the same network
as the host, while also sharing the host's MAC address", there are
several ways.

The most obvious one is NAT. You NAT each of the host's IP addresses to
the corresponding VMs.


A new-ish (but somewhat cumbersome) method is to use ipvlan:
https://lxd.readthedocs.io/en/latest/instances/#nictype-ipvlan

e.g.:

# lxc config show tiny
...
devices:
  eth0:
    ipv4.address: 10.0.3.101
    name: eth0
    nictype: ipvlan
    parent: eth0
    type: nic

set /etc/resolv.conf on the container manually, and disable network
interface setup inside the container. You'd end up with something like
this inside the container:

tiny:~# ip ad li eth0
10: eth0@if65:  mtu 1500
qdisc noqueue state UNKNOWN qlen 1000
...
inet 10.0.3.101/32 brd 255.255.255.255 scope global eth0
...

tiny:~# ip r
default dev eth0


Other servers on the network will see the container using the host's MAC

# arp -n 10.0.3.162 <=== the host
Address  HWtype  HWaddress   Flags Mask  Iface
10.0.3.162   ether   00:16:3e:77:1f:92   C     eth0

# arp -n 10.0.3.101 <=== the container
Address  HWtype  HWaddress   Flags Mask  Iface
10.0.3.101   ether   00:16:3e:77:1f:92   C     eth0


if you use plain lxc instead of lxd, look for similar configuration.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Networking Issues

2018-05-31 Thread Ray Jender
Ok, I am starting from scratch. It seems the more I google LXD MACVLAN,
the more confused I get. I've seen at least 3 different ways to configure
this and none of them seemed to work for me.

So right now I am sitting at a fresh and updated install of Ubuntu 16.04.4.
I have created a partition for ZFS but have not installed it. This is
another confusing part, because I have seen it as "sudo apt-get install
zfsutils-linux bridge-utils" and without the bridge-utils. Which one is
correct for MACVLAN, if it actually matters?

So, what I need is a simple procedure to configure MACVLAN and one container,
so the container can access the internet and also be accessed from the
internet. Can someone supply me with that?

Eventually I need to have 4 containers, so hopefully once I have one
container up and able to communicate with the internet, the next 3
containers will have no problems.

Thanks, and I owe you a beer if I get this running with your help!

Thanks!

 

Ray

 

From: Ray Jender [mailto:rayjen...@gmail.com] 
Sent: Tuesday, May 22, 2018 11:25 AM
To: lxc-users@lists.linuxcontainers.org
Subject: [lxc-users] Networking Issues

 

So, can anyone assist me with an LXD container network issue?

How do you configure the networking so the containers have access to the
internet, as well as the internet having access to the containers?

Right now I have one container on an Ubuntu 18.04 host. The Ubuntu host is
actually a VirtualBox VM which is hosted on a Windows 7 Pro box. I created
the VM network as bridged.

The VM cannot ping the Windows 7 box, but the Win 7 box can ping the VM.
On the VM console there is no ping response at all.

Obviously I am not a networking kind of guy and can use some help. I
appreciate it.


Ray

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking issue

2016-11-13 Thread Saint Michael
UPDATE
I was right in the first place after all. The network did change, but in
another subnet, not mine. What happens is that LXC networking mode=macvlan
stopped working, or works intermittently. How do I know? Because I started
using mode=phys, thus moving the interface inside the container, and all
the problems went away. The kernel or something started to drop the packets
destined to the same subnet. There is no iptables here, and I also tested
with rp_filter on and off, etc. No difference. Using tcpdump, I can see the
ICMP requests and responses traveling back and forth through the host, but
never reaching their destination.
So LXC networking mode macvlan is not working after the latest Ubuntu host
updates.
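For the record, the mode=phys setup looks roughly like this (a sketch using
the legacy lxc.network.* keys of that LXC release; eth1 as the passed-in
NIC is assumed):

lxc.network.type = phys
lxc.network.link = eth1
lxc.network.name = eth0
lxc.network.flags = up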



On Wed, Nov 9, 2016 at 9:56 AM, Saint Michael  wrote:

> SOLVED
> Many thanks to all. Using your input I concluded that the issue was not in
> LXC or the kernel. In fact, the colo changed the subnets without telling me.
> Yours,
> Federico
>
>
>
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking issue

2016-11-09 Thread Saint Michael
SOLVED
Many thanks to all. Using your input I concluded that the issue was not in
LXC or the kernel. In fact, the colo changed the subnets without telling me.
Yours,
Federico

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking issue

2016-11-09 Thread Saint Michael
I want to confirm that both the LXC host and the container see the packets
going back and forth with
tcpdump -n -i eth1 "(icmp)"

There is no rp_filter
sysctl  -a | grep [.]rp_filter
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.eth0.rp_filter = 0
net.ipv4.conf.eth1.rp_filter = 0
net.ipv4.conf.eth2.rp_filter = 0
net.ipv4.conf.eth3.rp_filter = 0
net.ipv4.conf.eth4.rp_filter = 0
net.ipv4.conf.eth5.rp_filter = 0
net.ipv4.conf.eth6.rp_filter = 0
net.ipv4.conf.eth7.rp_filter = 0
net.ipv4.conf.eth8.rp_filter = 0
net.ipv4.conf.eth9.rp_filter = 0
net.ipv4.conf.lo.rp_filter = 0

But the response from the container never reaches the machine that is trying
to ping the container.

Any idea what can be wrong?
The fact is I did not change anything on my network.





On Wed, Nov 9, 2016 at 9:42 AM, Saint Michael  wrote:

> I don't know how to downgrade the kernel.
> This is Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-45-generic x86_64)
>
> I always use apt-get -y update and apt-get -y dist-upgrade
>
>
>
>
> On Wed, Nov 9, 2016 at 2:22 AM, Janjaap Bos  wrote:
>
>> Downgrade the kernel to verify your guess, as the other feedback you got
>> also points to the kernel. If that solves it, go file a kernel bug.
>>
>> 2016-11-09 7:33 GMT+01:00 Saint Michael :
>>
>>> It was working fine until a week ago.
>>> I have two sites, it happened on both, so the issue is not on my router
>>> or my switch, since they are different sites and we did not upgrade
>>> anything.
>>> Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-45-generic x86_64)
>>> LXC installed from apt-get install lxc1
>>> iptables off in both hosts and containers. I protect my network at the
>>> perimeter.
>>>
>>> All my container networking is defined
>>>
>>> lxc.network.type=macvlan
>>> lxc.network.macvlan.mode=bridge
>>> lxc.network.link=eth1
>>> lxc.network.name = eth0
>>> lxc.network.flags=up
>>> lxc.network.hwaddr = XX:XX:XX:XX:XX:XX
>>> lxc.network.ipv4 = 0.0.0.0/24
>>>
>>> Now suppose I have a machine, not a container, in the same broadcast
>>> domain as the containers, same subnet.
>>> It cannot ping or ssh into a container, which is accessible from outside
>>> my network.
>>> However, from inside the container the packets come and go perfectly,
>>> when the connection is originated by the container.
>>> A container can ping that host I mentioned, but the host cannot ping
>>> back the container.
>>> It all started a few days ago.
>>> Also, from the host, this test works
>>> arping -I eth0 (container IP address)
>>> it shows that we share the same broadcast domain.
>>>
>>> My guess is that the most recent kernel update in the LXC host, is
>>> blocking the communication to the containers, but it allows connections
>>> from the containers or connections from IP addresses not on the same
>>> broadcast domain.
>>> Any idea?
>>>
>>> ___
>>> lxc-users mailing list
>>> lxc-users@lists.linuxcontainers.org
>>> http://lists.linuxcontainers.org/listinfo/lxc-users
>>>
>>
>>
>> ___
>> lxc-users mailing list
>> lxc-users@lists.linuxcontainers.org
>> http://lists.linuxcontainers.org/listinfo/lxc-users
>>
>
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking issue

2016-11-09 Thread Saint Michael
I don't know how to downgrade the kernel.
This is Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-45-generic x86_64)

I always use apt-get -y update and apt-get -y dist-upgrade




On Wed, Nov 9, 2016 at 2:22 AM, Janjaap Bos  wrote:

> Downgrade the kernel to verify your guess, as the other feedback you got
> also points to the kernel. If that solves it, go file a kernel bug.
>
> 2016-11-09 7:33 GMT+01:00 Saint Michael :
>
>> It was working fine until a week ago.
>> I have two sites, it happened on both, so the issue is not on my router
>> or my switch, since they are different sites and we did not upgrade
>> anything.
>> Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-45-generic x86_64)
>> LXC installed from apt-get install lxc1
>> iptables off in both hosts and containers. I protect my network at the
>> perimeter.
>>
>> All my container networking is defined
>>
>> lxc.network.type=macvlan
>> lxc.network.macvlan.mode=bridge
>> lxc.network.link=eth1
>> lxc.network.name = eth0
>> lxc.network.flags=up
>> lxc.network.hwaddr = XX:XX:XX:XX:XX:XX
>> lxc.network.ipv4 = 0.0.0.0/24
>>
>> Now suppose I have a machine, not a container, in the same broadcast
>> domain as the containers, same subnet.
>> It cannot ping or ssh into a container, which is accessible from outside
>> my network.
>> However, from inside the container the packets come and go perfectly,
>> when the connection is originated by the container.
>> A container can ping that host I mentioned, but the host cannot ping back
>> the container.
>> It all started a few days ago.
>> Also, from the host, this test works
>> arping -I eth0 (container IP address)
>> it shows that we share the same broadcast domain.
>>
>> My guess is that the most recent kernel update in the LXC host, is
>> blocking the communication to the containers, but it allows connections
>> from the containers or connections from IP addresses not on the same
>> broadcast domain.
>> Any idea?
>>
>> ___
>> lxc-users mailing list
>> lxc-users@lists.linuxcontainers.org
>> http://lists.linuxcontainers.org/listinfo/lxc-users
>>
>
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking issue

2016-11-09 Thread Fajar A. Nugraha
On Wed, Nov 9, 2016 at 1:33 PM, Saint Michael  wrote:

> It was working fine until a week ago.
> I have two sites, it happened on both, so the issue is not on my router or
> my switch, since they are different sites and we did not upgrade anything.
> Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-45-generic x86_64)
> LXC installed from apt-get install lxc1
> iptables off in both hosts and containers. I protect my network at the
> perimeter.
>
> All my container networking is defined
>
> lxc.network.type=macvlan
>

ah, macvlan :)


> lxc.network.macvlan.mode=bridge
> lxc.network.link=eth1
> lxc.network.name = eth0
> lxc.network.flags=up
> lxc.network.hwaddr = XX:XX:XX:XX:XX:XX
> lxc.network.ipv4 = 0.0.0.0/24
>
> Now suppose I have a machine, not a container, in the same broadcast
> domain as the containers, same subnet.
> It cannot ping or ssh into a container, which is accessible from outside
> my network.
> However, from inside the container the packets come and go perfectly, when
> the connection is originated by the container.
> A container can ping that host I mentioned, but the host cannot ping back
> the container.
> It all started a few days ago.
> Also, from the host, this test works
> arping -I eth0 (container IP address)
> it shows that we share the same broadcast domain.
>
> My guess is that the most recent kernel update in the LXC host, is
> blocking the communication to the containers, but it allows connections
> from the containers or connections from IP addresses not on the same
> broadcast domain.
> Any idea?
>
>
If you still have the old kernel, Janjaap's suggestion is relevant. Try
downgrading your kernel. If downgrading works, file a bug (see
https://wiki.ubuntu.com/Kernel/Bugs)

Another way to check is using generic methods to test network connectivity:
- from both the other machine and the container, ping each other, and then
"arp -n". Verify that the mac listed there is correct, and not (for
example) the hosts's MAC address. arping should also show which MAC address
is replying.
- ping from the other machine, and while it's running, do a tcpdump on all
relevant interfaces (e.g. on container's eth0, on host's eth1, etc),
something like

tcpdump -n -i eth1 "(icmp or arp) and host container_ip_address"

and see where the traffic disappears.

I had problems with macvlan when combined with proxyarp on the same host.
It works fine now with just macvlan on kernel 4.4.0-38-generic.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking issue

2016-11-09 Thread Mateusz Korniak
On Wednesday 09 of November 2016 01:33:55 Saint Michael wrote:
> Now suppose I have a machine, not a container, in the same broadcast domain
> as the containers, same subnet.
> It cannot ping or ssh into a container, which is accessible from outside my
> network.
> However, from inside the container the packets come and go perfectly, when
> the connection is originated by the container.
> A container can ping that host I mentioned, but the host cannot ping back
> the container.

Assuming you have a container on a host and an external machine,
if you can:
machine ~]$ ping container
but not (if I understand correctly):
container ~]$ ping machine

compare the output of (tcpdump -e icmp -n) for both pings on the machine and
the host, to see if they are different.

Also check that they do not get filtered by rp_filter:
sysctl -a | grep [.]rp_filter

-- 
Mateusz Korniak
"(...) mam brata - poważny, domator, liczykrupa, hipokryta, pobożniś,
krótko mówiąc - podpora społeczeństwa."
Nikos Kazantzakis - "Grek Zorba"

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking issue

2016-11-09 Thread Mateusz Korniak
On Wednesday 09 of November 2016 01:33:55 Saint Michael wrote:
> lxc.network.type=macvlan
> lxc.network.hwaddr = XX:XX:XX:XX:XX:XX
> 
> Now suppose I have a machine, not a container, in the same broadcast domain
> as the containers, same subnet.
> It cannot ping or ssh into a container, which is accessible from outside my
> network.
> However, from inside the container the packets come and go perfectly, when
> the connection is originated by the container.
> (...)
> Any idea?

Make sure you do not have lxc.network.hwaddr duplicates (many containers with 
same hwaddr).
Turn off the container and make sure it stops being "accessible from outside
my network".

-- 
Mateusz Korniak
"(...) mam brata - poważny, domator, liczykrupa, hipokryta, pobożniś,
krótko mówiąc - podpora społeczeństwa."
Nikos Kazantzakis - "Grek Zorba"

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking issue

2016-11-08 Thread Janjaap Bos
Downgrade the kernel to verify your guess, as the other feedback you got
also points to the kernel. If that solves it, go file a kernel bug.

2016-11-09 7:33 GMT+01:00 Saint Michael :

> It was working fine until a week ago.
> I have two sites, it happened on both, so the issue is not on my router or
> my switch, since they are different sites and we did not upgrade anything.
> Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-45-generic x86_64)
> LXC installed from apt-get install lxc1
> iptables off in both hosts and containers. I protect my network at the
> perimeter.
>
> All my container networking is defined
>
> lxc.network.type=macvlan
> lxc.network.macvlan.mode=bridge
> lxc.network.link=eth1
> lxc.network.name = eth0
> lxc.network.flags=up
> lxc.network.hwaddr = XX:XX:XX:XX:XX:XX
> lxc.network.ipv4 = 0.0.0.0/24
>
> Now suppose I have a machine, not a container, in the same broadcast
> domain as the containers, same subnet.
> It cannot ping or ssh into a container, which is accessible from outside
> my network.
> However, from inside the container the packets come and go perfectly, when
> the connection is originated by the container.
> A container can ping that host I mentioned, but the host cannot ping back
> the container.
> It all started a few days ago.
> Also, from the host, this test works
> arping -I eth0 (container IP address)
> it shows that we share the same broadcast domain.
>
> My guess is that the most recent kernel update in the LXC host, is
> blocking the communication to the containers, but it allows connections
> from the containers or connections from IP addresses not on the same
> broadcast domain.
> Any idea?
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking issues with LXC containers in EC2

2016-04-27 Thread Peter Steele

On 01/12/2016 07:03 PM, Fajar A. Nugraha wrote:

On Tue, Jan 12, 2016 at 9:29 PM, Peter Steele  wrote:

On 01/12/2016 05:59 AM, Fajar A. Nugraha wrote:

On Tue, Jan 12, 2016 at 8:40 PM, Peter Steele  wrote:

I should have added that I have no issue running our software on a single
EC2 instance with containers running on that instance. We can assign
multiple IPs to the instance itself, as well as to the containers running
under the instance, and the containers can all communicate with each
other
as well as with the host.


can the containers in that setup communicate with systems outside the
host (e.g. access the internet)?

if "no", then you might hit the multiple mac problem

Sadly the answer is no. They cannot even ping another host in the same
VPC...

Looks like multiple mac problem. As in, EC2 only allows one mac from
your interface.

Proxyarp should work:

(1) Make SURE your EC2 instances (I'd call them "host" from now on)
supports multiple IPs (private or elastic/public IPs, depending on
your needs). The easiest way is to add those IPs to your host
interface, make sure that that new IP can be accessed (e.g. ping that
IP from another host), and then remove it.

(2) Enable proxy arp on the host

echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp

It turned out that proxy arp was indeed the solution here, but a few
other parameters had to be set as well. I just needed to run the following
commands on each EC2 instance:


echo 1 > /proc/sys/net/ipv4/conf/br0/forwarding
echo 1 > /proc/sys/net/ipv4/conf/br0/proxy_arp_pvlan
echo 1 > /proc/sys/net/ipv4/conf/br0/proxy_arp
echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/br0/send_redirects

With these settings, my containers and hosts can all talk to each other
just as if they were all residing on the same subnet. An easy solution in
the end.
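To make them persist across reboots, the same settings can go into a sysctl
drop-in (a sketch; the file name is arbitrary):

# /etc/sysctl.d/99-proxyarp.conf
net.ipv4.conf.br0.forwarding = 1
net.ipv4.conf.br0.proxy_arp_pvlan = 1
net.ipv4.conf.br0.proxy_arp = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.br0.send_redirects = 0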


Peter

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking LXD containers

2016-03-26 Thread efersept


I found this article that describes several virtualization networking
techniques in great detail. It is mostly based on the legacy lxc tools but
was fairly easy for me to translate to the LXD tools. Hope it helps other
networking dummies, like myself, who may be watching this mailing list.


http://containerops.org/2013/11/19/lxc-networking/

It really helped me to get a better understanding of what was 
taking place behind the scenes.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking LXD containers

2016-03-23 Thread efersept
After doing some homework on virtualization networking techniques and
studying the contents of /usr/lib/x86_64-linux-gnu/lxc/lxc-net, am I
correct in deducing that the default lxc/lxd bridge (lxcbr0) is a NATed
interface? If I wanted to attach containers to a simple bridged
interface and give them IPs on my network, would it be as simple as
creating the following entry in the container's config after the bridge
was set up on the host?


devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic

Or is there some other configuration that would need to be done with 
lxc/lxd to accomplish this?
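
(For comparison, the same device can presumably be attached imperatively
with the lxd client; a sketch, with an assumed container name c1:

lxc config device add c1 eth0 nic nictype=bridged parent=br0 name=eth0
)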





___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking LXD containers

2016-03-11 Thread Fajar A. Nugraha
On Fri, Mar 11, 2016 at 3:12 PM, Kean Sum Ooi  wrote:
> Hi Steve,
>
> Do you mean LXC containers? On Ubuntu?

@Kean: I think he means lxd, not lxc

@Steve: I assume you use ubuntu host?

Some info in https://help.ubuntu.com/lts/serverguide/lxc.html#lxc-network
still apply. In particular, iptables forwarding is the easiest way to
allow access to a port in your container.

However if you use nested containers, and want outside hosts to reach
all the nested containers, you'd probably need bridge:
https://github.com/lxc/lxd/blob/master/specs/configuration.md#type-nic

The outside container bridges host's eth0 (e.g.
https://help.ubuntu.com/lts/serverguide/network-configuration.html#bridging),
and then on that container you create a bridge which the inside
container uses. Should work.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking LXD containers

2016-03-11 Thread Kean Sum Ooi
Hi Steve,

Do you mean LXC containers? On Ubuntu?

PS:
https://wiki.debian.org/LXC/SimpleBridge
http://askubuntu.com/questions/231666/how-do-i-setup-an-lxc-guest-so-that-it-gets-a-dhcp-address-so-i-can-access-it-on
https://www.flockport.com/lxc-macvlan-networking/

There are at least two ways to do this. Bridging (container is visible
from host) or macvlan (container is not visible from host).

1. Bridging
On the host we bridge to eth0, edit /etc/network/interfaces:
auto br0
iface br0 inet dhcp
  bridge_ports eth0

Restart the host. You should now see br0 with ifconfig.

Next, in the config file for your container (e.g. for privileged mode, by
default it's in /var/lib/lxc//config):

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
# give a dummy hwaddr
lxc.network.hwaddr = 00:16:3e:86:62:10

To get more information about the config file, PS:
$ man lxc.container.conf

Start up your container and it should be bridged to your LAN (so
accessible from other nodes on your LAN).

2. macvlan
On the host create the macvlan to your eth0 network interface.
$ sudo ip link add mvlan0 link eth0 type macvlan mode bridge
$ sudo ifconfig mvlan0 up
The mvlan0 does not need an IPv4 address, as it has an IPv6 address by
default, but if you need to give it an IPv4 address you can try this:
$ sudo dhclient -v mvlan0

You should see mvlan0 with ifconfig.

Next in the config file for your container

lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge
lxc.network.flags = up
lxc.network.link = mvlan0
# dummy hwaddr
lxc.network.hwaddr = 00:16:4e:75:b0:ca
lxc.network.mtu = 1500
# Get mask and broadcast address from "ifconfig eth0"
lxc.network.ipv4 = 192.168.10.50/24 192.168.10.255
# Get gateway from "route -n"
lxc.network.ipv4.gateway = 192.168.10.254

Start up your container and it should be bridged to your LAN (so
accessible from other nodes on your LAN, but now, since it's macvlan, not
from the host).
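
A common workaround if you do need host-to-container traffic with macvlan
(a sketch; the interface name mvlan-host and the .60 address are assumed
free on the example LAN above) is to give the host its own macvlan
interface on the same parent:

$ sudo ip link add mvlan-host link eth0 type macvlan mode bridge
$ sudo ip addr add 192.168.10.60/24 dev mvlan-host
$ sudo ip link set mvlan-host up

The host then reaches the containers via mvlan-host instead of eth0.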

Hope it helps. Thanks

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking issues with LXC containers in EC2

2016-01-12 Thread Fajar A. Nugraha
On Wed, Jan 13, 2016 at 10:03 AM, Fajar A. Nugraha  wrote:
> On Tue, Jan 12, 2016 at 9:29 PM, Peter Steele  wrote:
>> On 01/12/2016 05:59 AM, Fajar A. Nugraha wrote:
>>>
>>> On Tue, Jan 12, 2016 at 8:40 PM, Peter Steele  wrote:

 I should have added that I have no issue running our software on a single
 EC2 instance with containers running on that instance. We can assign
 multiple IPs to the instance itself, as well as to the containers running
 under the instance, and the containers can all communicate with each
 other
 as well as with the host.
>>>
>>>
>>> can the containers in that setup communicate with systems outside the
>>> host (e.g. access the internet)?
>>>
>>> if "no", then you might hit the multiple mac problem
>>
>> Sadly the answer is no. They cannot even ping another host in the same
>> VPC...
>
> Looks like multiple mac problem. As in, EC2 only allows one mac from
> your interface.

>
> (3) See 
> https://www.mail-archive.com/lxc-users@lists.linuxcontainers.org/msg02380.html


Actually my reply on your past thread should be simpler:
https://lists.linuxcontainers.org/pipermail/lxc-users/2015-September/010069.html

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking issues with LXC containers in EC2

2016-01-12 Thread Fajar A. Nugraha
On Tue, Jan 12, 2016 at 9:29 PM, Peter Steele  wrote:
> On 01/12/2016 05:59 AM, Fajar A. Nugraha wrote:
>>
>> On Tue, Jan 12, 2016 at 8:40 PM, Peter Steele  wrote:
>>>
>>> I should have added that I have no issue running our software on a single
>>> EC2 instance with containers running on that instance. We can assign
>>> multiple IPs to the instance itself, as well as to the containers running
>>> under the instance, and the containers can all communicate with each
>>> other
>>> as well as with the host.
>>
>>
>> can the containers in that setup communicate with systems outside the
>> host (e.g. access the internet)?
>>
>> if "no", then you might hit the multiple mac problem
>
> Sadly the answer is no. They cannot even ping another host in the same
> VPC...

Looks like multiple mac problem. As in, EC2 only allows one mac from
your interface.

Proxyarp should work:

(1) Make SURE your EC2 instances (I'd call them "host" from now on)
supports multiple IPs (private or elastic/public IPs, depending on
your needs). The easiest way is to add those IPs to your host
interface, make sure that that new IP can be accessed (e.g. ping that
IP from another host), and then remove it.

(2) Enable proxy arp on the host

echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp

of course, adjust to your environment (e.g. change interface name if
needed). You can also add entries in /etc/sysctl.conf or /etc/sysctl.d
so that this setting will persist on reboot.

(3) See 
https://www.mail-archive.com/lxc-users@lists.linuxcontainers.org/msg02380.html

This should make all outgoing packets use eth0's MAC, and the host
will effectively function as a router.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking issues with LXC containers in EC2

2016-01-12 Thread Fajar A. Nugraha
On Tue, Jan 12, 2016 at 8:40 PM, Peter Steele  wrote:
> I should have added that I have no issue running our software on a single
> EC2 instance with containers running on that instance. We can assign
> multiple IPs to the instance itself, as well as to the containers running
> under the instance, and the containers can all communicate with each other
> as well as with the host.


can the containers in that setup communicate with systems outside the
host (e.g. access the internet)?

if "no", then you might hit the multiple mac problem
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking issues with LXC containers in EC2

2016-01-12 Thread Peter Steele
I should have added that I have no issue running our software on a 
single EC2 instance with containers running on that instance. We can 
assign multiple IPs to the instance itself, as well as to the containers 
running under the instance, and the containers can all communicate with 
each other as well as with the host. The problem occurs when we have 
more than one EC2 instance and need to have the containers in separate 
instances to communicate with each other. You're right though: If no one 
on this list has actually dealt with this issue themselves, the quickest 
answer is probably to talk to AWS directly.


Thanks.

Peter

On 01/11/2016 06:55 PM, Fajar A. Nugraha wrote:

On Tue, Jan 12, 2016 at 6:31 AM, Peter Steele  wrote:

 From what I've read, I understand that Amazon has implemented some
special/restricted behavior for the networking stack of EC2 instances. The
question I have is whether I can accomplish what I've attempted here,
specifically, can I access a LXC container hosted on one EC2 instance
directly from another EC2 instance or from another LXC container hosted on
another EC2 instance?

You might want to ask them first. Looks like it's only available for
VPC setup: 
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI

If they do allow multiple IP address, then the next step is to check
whether they allow multiple MACs (which is what you get when you use
bridge). There's a workaround for this if the ONLY limitation is the
MAC, using proxyarp.






___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking issues with LXC containers in EC2

2016-01-11 Thread Fajar A. Nugraha
On Tue, Jan 12, 2016 at 6:31 AM, Peter Steele  wrote:
> From what I've read, I understand that Amazon has implemented some
> special/restricted behavior for the networking stack of EC2 instances. The
> question I have is whether I can accomplish what I've attempted here,
> specifically, can I access a LXC container hosted on one EC2 instance
> directly from another EC2 instance or from another LXC container hosted on
> another EC2 instance?

You might want to ask them first. Looks like it's only available for
VPC setup: 
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI

If they do allow multiple IP address, then the next step is to check
whether they allow multiple MACs (which is what you get when you use
bridge). There's a workaround for this if the ONLY limitation is the
MAC, using proxyarp.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking not working in unconfined overlayfs container

2015-10-12 Thread Serge Hallyn
Hi,

before I try to reproduce this, can you confirm whether using the
kernel from vivid-proposed fixes it?

Quoting Frederico Araujo (arau...@gmail.com):
> Hi Serge,
> 
> Yes, I downloaded a fresh template for ubuntu and its overlay clones start
> okay, and I'm able to attach and run commands on them. However, eth0 has no
> IP assigned when unconfined.
> 
> I think the problem might be related to changes in systemd (I'm using
> version 219) and overlayfs on vivid. I do see many permission denied
> messages in the boot logs of the container (please see attached an example
> output), but couldn't find much help online.
> 
> lxc-attach -n test -- ifconfig -a
> eth0  Link encap:Ethernet  HWaddr 00:16:3e:23:59:24
>   inet6 addr: fe80::216:3eff:fe23:5924/64 Scope:Link
>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>   RX packets:29 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:1000
>   RX bytes:4285 (4.2 KB)  TX bytes:648 (648.0 B)
> 
> loLink encap:Local Loopback
>   inet addr:127.0.0.1  Mask:255.0.0.0
>   inet6 addr: ::1/128 Scope:Host
>   UP LOOPBACK RUNNING  MTU:65536  Metric:1
>   RX packets:24 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:24 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:0
>   RX bytes:1888 (1.8 KB)  TX bytes:1888 (1.8 KB)
> 
> lxc-attach -n test -- ps -ef
> UIDPID  PPID  C STIME TTY  TIME CMD
> root 1 0  0 15:45 ?00:00:00 /sbin/init
> root   352 1  0 15:45 ?00:00:00
> /lib/systemd/systemd-journald
> root   613 1  0 15:45 ?00:00:00 /usr/sbin/cron -f
> syslog 673 1  0 15:45 ?00:00:00 /usr/sbin/rsyslogd -n
> root   710 1  0 15:45 ?00:00:00 /usr/sbin/sshd -D
> root   760 1  0 15:45 pts/100:00:00 /sbin/agetty --noclear
> --keep-baud pts/1 115200 38400 9600 vt220
> root   770 1  0 15:45 lxc/console 00:00:00 /sbin/agetty --noclear
> --keep-baud console 115200 38400 9600 v
> root   780 1  0 15:45 pts/200:00:00 /sbin/agetty --noclear
> --keep-baud pts/2 115200 38400 9600 vt220
> root   790 1  0 15:45 pts/000:00:00 /sbin/agetty --noclear
> --keep-baud pts/0 115200 38400 9600 vt220
> root   800 1  0 15:45 pts/300:00:00 /sbin/agetty --noclear
> --keep-baud pts/3 115200 38400 9600 vt220
> root   913 0  0 15:50 pts/200:00:00 ps -ef
> 
> Thanks!
> 
> Best,
> Fred
> 
> 
> On Mon, Oct 5, 2015 at 11:49 AM, Serge Hallyn 
> wrote:
> 
> > Quoting Frederico Araujo (arau...@gmail.com):
> > > Hi,
> > >
> > > I've been using LXC for over two years without problems. This week, I
> > > upgraded my Ubuntu from Trusty to Vivid, and I noticed that my overlayfs
> > > containers stopped getting IP assigned. In my machine the error can be
> > > reproduced in this way:
> > >
> > > 1. lxc-create -n base -t ubuntu
> >
> > Do you have this problem if you use the download template?
> >
> > > 2. Edit ubuntu/config to add  lxc.aa_profile = unconfined
> >
> > interesting that it has to be unconfined.
> >
> > if you tail -f /var/log/syslog and then start the container, does
> > the tail -f output show any DENIED messages?
> >
> > > 3. lxc-clone -s -B overlayfs ubuntu tmp
> >
> > Does the 'ubuntu' container start ok?
> >
> > > 4. lxc-start -n tmp -d
> > > 5. lxc-ls -f shows:
> > >
> > > NAME   STATEIPV4IPV6  GROUPS  AUTOSTART
> > > ---
> > > tmpRUNNING  - *(no IP)*   - -   NO
> > > ubuntu STOPPED  -   - -   NO
> >
> > Are you able to lxc-attach -n tmp and look around?  what does 'ps -ef'
> > and 'ifconfig -a' show?
> >
> > > Interestingly, I don't run into this issue when running the container in
> > > confined mode (without lxc.aa_profile = unconfined). I checked past
> > threads
> > > in this list and in launchpad, and noticed that some people had problems
> > > with overlayfs when upgrading to vivid, but it seems that these problems
> > > were fixed in LXC 1.1 release. I'm running on LXC 1.1.2.
> > >
> > > Any thoughts?
> > >
> > > Thanks,
> > > Fred
> >
> > > ___
> > > lxc-users mailing list
> > > lxc-users@lists.linuxcontainers.org
> > > http://lists.linuxcontainers.org/listinfo/lxc-users
> >
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users


> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list

Re: [lxc-users] Networking not working in unconfined overlayfs container

2015-10-05 Thread Frederico Araujo
Hi Serge,

Yes, I downloaded a fresh template for ubuntu and its overlay clones start
okay, and I'm able to attach and run commands on them. However, eth0 has no
IP assigned when unconfined.

I think the problem might be related to changes in systemd (I'm using
version 219) and overlayfs on vivid. I do see many permission denied
messages in the boot logs of the container (please see attached an example
output), but couldn't find much help online.

lxc-attach -n test -- ifconfig -a
eth0  Link encap:Ethernet  HWaddr 00:16:3e:23:59:24
  inet6 addr: fe80::216:3eff:fe23:5924/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:29 errors:0 dropped:0 overruns:0 frame:0
  TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:4285 (4.2 KB)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:65536  Metric:1
  RX packets:24 errors:0 dropped:0 overruns:0 frame:0
  TX packets:24 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:1888 (1.8 KB)  TX bytes:1888 (1.8 KB)

lxc-attach -n test -- ps -ef
UIDPID  PPID  C STIME TTY  TIME CMD
root 1 0  0 15:45 ?00:00:00 /sbin/init
root   352 1  0 15:45 ?00:00:00
/lib/systemd/systemd-journald
root   613 1  0 15:45 ?00:00:00 /usr/sbin/cron -f
syslog 673 1  0 15:45 ?00:00:00 /usr/sbin/rsyslogd -n
root   710 1  0 15:45 ?00:00:00 /usr/sbin/sshd -D
root   760 1  0 15:45 pts/100:00:00 /sbin/agetty --noclear
--keep-baud pts/1 115200 38400 9600 vt220
root   770 1  0 15:45 lxc/console 00:00:00 /sbin/agetty --noclear
--keep-baud console 115200 38400 9600 v
root   780 1  0 15:45 pts/200:00:00 /sbin/agetty --noclear
--keep-baud pts/2 115200 38400 9600 vt220
root   790 1  0 15:45 pts/000:00:00 /sbin/agetty --noclear
--keep-baud pts/0 115200 38400 9600 vt220
root   800 1  0 15:45 pts/300:00:00 /sbin/agetty --noclear
--keep-baud pts/3 115200 38400 9600 vt220
root   913 0  0 15:50 pts/200:00:00 ps -ef

Thanks!

Best,
Fred


On Mon, Oct 5, 2015 at 11:49 AM, Serge Hallyn 
wrote:

> Quoting Frederico Araujo (arau...@gmail.com):
> > Hi,
> >
> > I've been using LXC for over two years without problems. This week, I
> > upgraded my Ubuntu from Trusty to Vivid, and I noticed that my overlayfs
> > containers stopped getting IP assigned. In my machine the error can be
> > reproduced in this way:
> >
> > 1. lxc-create -n base -t ubuntu
>
> Do you have this problem if you use the download template?
>
> > 2. Edit ubuntu/config to add  lxc.aa_profile = unconfined
>
> interesting that it has to be unconfined.
>
> if you tail -f /var/log/syslog and then start the container, does
> the tail -f output show any DENIED messages?
>
> > 3. lxc-clone -s -B overlayfs ubuntu tmp
>
> Does the 'ubuntu' container start ok?
>
> > 4. lxc-start -n tmp -d
> > 5. lxc-ls -f shows:
> >
> > NAME   STATEIPV4IPV6  GROUPS  AUTOSTART
> > ---
> > tmpRUNNING  - *(no IP)*   - -   NO
> > ubuntu STOPPED  -   - -   NO
>
> Are you able to lxc-attach -n tmp and look around?  what does 'ps -ef'
> and 'ifconfig -a' show?
>
> > Interestingly, I don't run into this issue when running the container in
> > confined mode (without lxc.aa_profile = unconfined). I checked past
> threads
> > in this list and in launchpad, and noticed that some people had problems
> > with overlayfs when upgrading to vivid, but it seems that these problems
> > were fixed in LXC 1.1 release. I'm running on LXC 1.1.2.
> >
> > Any thoughts?
> >
> > Thanks,
> > Fred
>
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users


test.log
Description: Binary data
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking not working in unconfined overlayfs container

2015-10-05 Thread Serge Hallyn
Quoting Frederico Araujo (arau...@gmail.com):
> Hi,
> 
> I've been using LXC for over two years without problems. This week, I
> upgraded my Ubuntu from Trusty to Vivid, and I noticed that my overlayfs
> containers stopped getting IP assigned. In my machine the error can be
> reproduced in this way:
> 
> 1. lxc-create -n base -t ubuntu

Do you have this problem if you use the download template?

> 2. Edit ubuntu/config to add  lxc.aa_profile = unconfined

interesting that it has to be unconfined.

if you tail -f /var/log/syslog and then start the container, does
the tail -f output show any DENIED messages?

> 3. lxc-clone -s -B overlayfs ubuntu tmp

Does the 'ubuntu' container start ok?

> 4. lxc-start -n tmp -d
> 5. lxc-ls -f shows:
> 
> NAME   STATEIPV4IPV6  GROUPS  AUTOSTART
> ---
> tmpRUNNING  - *(no IP)*   - -   NO
> ubuntu STOPPED  -   - -   NO

Are you able to lxc-attach -n tmp and look around?  what does 'ps -ef'
and 'ifconfig -a' show?

> Interestingly, I don't run into this issue when running the container in
> confined mode (without lxc.aa_profile = unconfined). I checked past threads
> in this list and in launchpad, and noticed that some people had problems
> with overlayfs when upgrading to vivid, but it seems that these problems
> were fixed in LXC 1.1 release. I'm running on LXC 1.1.2.
> 
> Any thoughts?
> 
> Thanks,
> Fred

> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] networking and permissions questions

2015-04-27 Thread Fajar A. Nugraha
On Tue, Apr 28, 2015 at 6:53 AM, Joe McDonald ideafil...@gmail.com wrote:
 1) Do I need to specify this IP in both the
 config file and the rootfs/etc/network/interfaces file?
 Is there a better way to do this?

IMHO the best way is in the container's interfaces file.
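
For example, a minimal static setup in the container's
rootfs/etc/network/interfaces could look like this (the addresses are
documentation placeholders, not values from this thread):

# /var/lib/lxc/<container>/rootfs/etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.0.2.10       # container IP (placeholder)
    netmask 255.255.255.0
    gateway 192.0.2.1        # LAN gateway (placeholder)

There is then no need to repeat the address in an lxc.network.ipv4 entry.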


 2) why does one container (ubsharedweb) show the same IP address twice?


try lxc-attach to that container, and run "ip addr list" ("ip ad li" for
short). My guess is there's some misconfiguration there, which makes it
assign the same IP to multiple interfaces (e.g. eth0 and eth0:1)
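
A quick way to confirm, assuming the container in question is the
ubsharedweb one named above:

# attach and list every address on every interface
lxc-attach -n ubsharedweb -- ip addr list
# the same inet line showing under both eth0 and an alias such as
# eth0:1 would explain the IP appearing twice in lxc-ls output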


 3) How is user lxcuser able to just take whatever IP's it wants?
 I have: lxcuser veth lxcbr0 100 in /etc/lxc/lxc-usernet

That's the way bridging works. The same way a computer on your LAN can
use whatever IP it wants on that LAN

 So I'm guessing that is how it can do it, but how can I
 constrain lxcuser to only use IP's within a certain range?


Short version: you can't.

Long version:
There's a workaround that I posted some time ago, which in essence does
NOT use bridging, but routing + proxy_arp. However, it currently ONLY
works on privileged containers (since it needs a persistent veth name
on the host side, which is currently not possible for unprivileged
containers)
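
The core of that workaround, as a rough sketch only (vethC1 and the
192.0.2.x addresses are placeholders; the original post has the real
recipe):

# host side: forward, and answer ARP for the container's address
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv4.conf.eth0.proxy_arp=1
# route the container's /32 to its persistent host-side veth
ip route add 192.0.2.50/32 dev vethC1
# the container then uses 192.0.2.50/32 with its default route pointing
# back at the host, so no bridge is involved and no other IP can be taken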

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking LXC and VirtualBox on the same host

2014-09-22 Thread John

On 20/09/14 14:21, J Bc wrote:

route -n


Not sure what you mean, everything's on the same subnet. Also, if it 
were routing then pings wouldn't work either...


My route -n output is this:

Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         10.0.0.138      0.0.0.0         UG        0 0          0 eth0
10.0.0.0        0.0.0.0         255.0.0.0       U         0 0          0 eth0


If I need to configure something then I'd be grateful if someone would 
explain what I'm missing.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking in Ubuntu with 2 ip failover in LXC

2014-08-13 Thread bryn1u85 .
Hey,

I made some changes:
root@ns321124:~# cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

auto br0
iface br0 inet static
bridge_ports eth0
bridge_stp off
bridge_maxwait 0
bridge_fd 0
address 94.23.237.216
netmask 255.255.255.0
network 94.23.237.0
broadcast 94.23.237.255
gateway 94.23.237.254

post-up /sbin/ifconfig br0:0 91.121.239.228 netmask 255.255.255.255 broadcast 91.121.239.228
post-down /sbin/ifconfig br0:0 down


root@ns321124:~# cat /var/lib/lxc/Oksymoron/config
# Template used to create this container:
/usr/share/lxc/templates/lxc-ubuntu
# Parameters passed to the template:
# For additional config options, please look at lxc.container.conf(5)

# Common configuration
lxc.include = /usr/share/lxc/config/ubuntu.common.conf

# Container specific configuration
lxc.rootfs = /var/lib/lxc/Oksymoron/rootfs
lxc.mount = /var/lib/lxc/Oksymoron/fstab
lxc.utsname = Oksymoron
lxc.arch = amd64

# Network configuration
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.ipv4 = 91.121.239.228/32
lxc.network.ipv4.gateway = 91.121.239.254
lxc.network.hwaddr = 00:16:3e:3e:35:9e

root@ns321124:~# ifconfig
br0   Link encap:Ethernet  HWaddr 00:30:48:bd:ee:08
  inet addr:94.23.237.216  Bcast:94.23.237.255  Mask:255.255.255.0
  inet6 addr: fe80::230:48ff:febd:ee08/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:1432 errors:0 dropped:4 overruns:0 frame:0
  TX packets:785 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:109849 (109.8 KB)  TX bytes:149687 (149.6 KB)

br0:0 Link encap:Ethernet  HWaddr 00:30:48:bd:ee:08
  inet addr:91.121.239.228  Bcast:91.121.239.228
 Mask:255.255.255.255
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth0  Link encap:Ethernet  HWaddr 00:30:48:bd:ee:08
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:1507 errors:0 dropped:0 overruns:0 frame:0
  TX packets:826 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:133897 (133.8 KB)  TX bytes:163263 (163.2 KB)
  Interrupt:16 Memory:fbce-fbd0

lo        Link encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:65536  Metric:1
  RX packets:6 errors:0 dropped:0 overruns:0 frame:0
  TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:370 (370.0 B)  TX bytes:370 (370.0 B)

 LXC ###

root@Oksymoron:~# lxc-console -n Oksymoron

root@Oksymoron:~# cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp


root@Oksymoron:~# ifconfig
eth0  Link encap:Ethernet  HWaddr 00:16:3e:3e:35:9e
  inet addr:91.121.239.228  Bcast:255.255.255.255
 Mask:255.255.255.255
  inet6 addr: fe80::216:3eff:fe3e:359e/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:489 errors:0 dropped:0 overruns:0 frame:0
  TX packets:60 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:41460 (41.4 KB)  TX bytes:10332 (10.3 KB)

lo        Link encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:65536  Metric:1
  RX packets:60 errors:0 dropped:0 overruns:0 frame:0
  TX packets:60 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:4966 (4.9 KB)  TX bytes:4966 (4.9 KB)

And failed:


root@Oksymoron:~# apt-get update
0% [Connecting to archive.ubuntu.com] [Connecting to security.ubuntu.com]
0% [Connecting to archive.ubuntu.com] [Connecting to security.ubuntu.com]

I don't know what more I can do.
Please give me some advice.





2014-08-13 10:29 GMT+02:00 Tamas Papp tom...@martos.bme.hu:


 On 08/13/2014 10:28 AM, bryn1u85 . wrote:

 Hey,

 what do you mean by "standard configuration"?


 man interfaces



 t
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking in Ubuntu with 2 ip failover in LXC

2014-08-13 Thread Tamas Papp

Try traceroute, show the route command output and things like that.
BTW your setup looks the same as before.
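
For instance (addresses reused from this thread; just the obvious checks):

# inside the container
ip route show
traceroute -n 8.8.8.8
# on the host, watch whether the container's packets reach br0
tcpdump -ni br0 host 91.121.239.228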

tamas



On 08/13/2014 12:32 PM, bryn1u85 . wrote:

Hey,

I made some changes:
root@ns321124:~# cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

auto br0
iface br0 inet static
bridge_ports eth0
bridge_stp off
bridge_maxwait 0
bridge_fd 0
address 94.23.237.216
netmask 255.255.255.0
network 94.23.237.0
broadcast 94.23.237.255
gateway 94.23.237.254

post-up /sbin/ifconfig br0:0 91.121.239.228 netmask 
255.255.255.255 broadcast 91.121.239.228

post-down /sbin/ifconfig br0:0 down


root@ns321124:~# cat /var/lib/lxc/Oksymoron/config
# Template used to create this container: 
/usr/share/lxc/templates/lxc-ubuntu

# Parameters passed to the template:
# For additional config options, please look at lxc.container.conf(5)

# Common configuration
lxc.include = /usr/share/lxc/config/ubuntu.common.conf

# Container specific configuration
lxc.rootfs = /var/lib/lxc/Oksymoron/rootfs
lxc.mount = /var/lib/lxc/Oksymoron/fstab
lxc.utsname = Oksymoron
lxc.arch = amd64

# Network configuration
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.ipv4 = 91.121.239.228/32
lxc.network.ipv4.gateway = 91.121.239.254
lxc.network.hwaddr = 00:16:3e:3e:35:9e

root@ns321124:~# ifconfig
br0   Link encap:Ethernet  HWaddr 00:30:48:bd:ee:08
  inet addr:94.23.237.216  Bcast:94.23.237.255  Mask:255.255.255.0
  inet6 addr: fe80::230:48ff:febd:ee08/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:1432 errors:0 dropped:4 overruns:0 frame:0
  TX packets:785 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:109849 (109.8 KB)  TX bytes:149687 (149.6 KB)

br0:0 Link encap:Ethernet  HWaddr 00:30:48:bd:ee:08
  inet addr:91.121.239.228  Bcast:91.121.239.228 
 Mask:255.255.255.255

  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth0  Link encap:Ethernet  HWaddr 00:30:48:bd:ee:08
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:1507 errors:0 dropped:0 overruns:0 frame:0
  TX packets:826 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:133897 (133.8 KB)  TX bytes:163263 (163.2 KB)
  Interrupt:16 Memory:fbce-fbd0

lo        Link encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:65536  Metric:1
  RX packets:6 errors:0 dropped:0 overruns:0 frame:0
  TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:370 (370.0 B)  TX bytes:370 (370.0 B)

 LXC ###

root@Oksymoron:~# lxc-console -n Oksymoron

root@Oksymoron:~# cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp


root@Oksymoron:~# ifconfig
eth0  Link encap:Ethernet  HWaddr 00:16:3e:3e:35:9e
  inet addr:91.121.239.228  Bcast:255.255.255.255 
 Mask:255.255.255.255

  inet6 addr: fe80::216:3eff:fe3e:359e/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:489 errors:0 dropped:0 overruns:0 frame:0
  TX packets:60 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:41460 (41.4 KB)  TX bytes:10332 (10.3 KB)

lo        Link encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:65536  Metric:1
  RX packets:60 errors:0 dropped:0 overruns:0 frame:0
  TX packets:60 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:4966 (4.9 KB)  TX bytes:4966 (4.9 KB)

And failed:


root@Oksymoron:~# apt-get update
0% [Connecting to archive.ubuntu.com] [Connecting to security.ubuntu.com]
0% [Connecting to archive.ubuntu.com] [Connecting to security.ubuntu.com]


I don't know what more I can do.
Please give me some advice.





2014-08-13 10:29 GMT+02:00 Tamas Papp tom...@martos.bme.hu:



On 08/13/2014 10:28 AM, bryn1u85 . wrote:

Hey,

what do you mean by "standard configuration"?


man interfaces



t

Re: [lxc-users] Networking in Ubuntu with 2 ip failover in LXC

2014-08-13 Thread bryn1u85 .
From LXC nothing:
root@Oksymoron:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
^C

From Host:
root@ns321124:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         vss-gw-6k.fr.eu 0.0.0.0         UG    0      0        0 br0
10.0.3.0        *               255.255.255.0   U     0      0        0 lxcbr0
94.23.237.0     *               255.255.255.0   U     0      0        0 br0



2014-08-13 12:35 GMT+02:00 Tamas Papp tom...@martos.bme.hu:

 Try traceroute, show the route command output and things like that.
 BTW your setup looks the same as before.

 tamas




 On 08/13/2014 12:32 PM, bryn1u85 . wrote:

 Hey,

  I made some changes:
  root@ns321124:~# cat /etc/network/interfaces
 # This file describes the network interfaces available on your system
 # and how to activate them. For more information, see interfaces(5).

  # The loopback network interface
 auto lo
 iface lo inet loopback

  auto br0
 iface br0 inet static
 bridge_ports eth0
 bridge_stp off
 bridge_maxwait 0
 bridge_fd 0
 address 94.23.237.216
 netmask 255.255.255.0
 network 94.23.237.0
  broadcast 94.23.237.255
 gateway 94.23.237.254

  post-up /sbin/ifconfig br0:0 91.121.239.228 netmask
 255.255.255.255 broadcast 91.121.239.228
 post-down /sbin/ifconfig br0:0 down


  root@ns321124:~# cat /var/lib/lxc/Oksymoron/config
 # Template used to create this container:
 /usr/share/lxc/templates/lxc-ubuntu
 # Parameters passed to the template:
 # For additional config options, please look at lxc.container.conf(5)

  # Common configuration
 lxc.include = /usr/share/lxc/config/ubuntu.common.conf

  # Container specific configuration
 lxc.rootfs = /var/lib/lxc/Oksymoron/rootfs
 lxc.mount = /var/lib/lxc/Oksymoron/fstab
 lxc.utsname = Oksymoron
 lxc.arch = amd64

  # Network configuration
 lxc.network.type = veth
 lxc.network.flags = up
 lxc.network.link = br0
 lxc.network.name = eth0
 lxc.network.ipv4 = 91.121.239.228/32
 lxc.network.ipv4.gateway = 91.121.239.254
 lxc.network.hwaddr = 00:16:3e:3e:35:9e

  root@ns321124:~# ifconfig
 br0   Link encap:Ethernet  HWaddr 00:30:48:bd:ee:08
   inet addr:94.23.237.216  Bcast:94.23.237.255  Mask:255.255.255.0
   inet6 addr: fe80::230:48ff:febd:ee08/64 Scope:Link
   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
   RX packets:1432 errors:0 dropped:4 overruns:0 frame:0
   TX packets:785 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:0
   RX bytes:109849 (109.8 KB)  TX bytes:149687 (149.6 KB)

  br0:0 Link encap:Ethernet  HWaddr 00:30:48:bd:ee:08
   inet addr:91.121.239.228  Bcast:91.121.239.228
  Mask:255.255.255.255
   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

  eth0  Link encap:Ethernet  HWaddr 00:30:48:bd:ee:08
   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
   RX packets:1507 errors:0 dropped:0 overruns:0 frame:0
   TX packets:826 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:1000
   RX bytes:133897 (133.8 KB)  TX bytes:163263 (163.2 KB)
   Interrupt:16 Memory:fbce-fbd0

  lo        Link encap:Local Loopback
   inet addr:127.0.0.1  Mask:255.0.0.0
   inet6 addr: ::1/128 Scope:Host
   UP LOOPBACK RUNNING  MTU:65536  Metric:1
   RX packets:6 errors:0 dropped:0 overruns:0 frame:0
   TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:0
   RX bytes:370 (370.0 B)  TX bytes:370 (370.0 B)

   LXC ###

  root@Oksymoron:~# lxc-console -n Oksymoron

  root@Oksymoron:~# cat /etc/network/interfaces
 # This file describes the network interfaces available on your system
 # and how to activate them. For more information, see interfaces(5).

  # The loopback network interface
 auto lo
 iface lo inet loopback

  auto eth0
 iface eth0 inet dhcp


  root@Oksymoron:~# ifconfig
 eth0  Link encap:Ethernet  HWaddr 00:16:3e:3e:35:9e
   inet addr:91.121.239.228  Bcast:255.255.255.255
  Mask:255.255.255.255
   inet6 addr: fe80::216:3eff:fe3e:359e/64 Scope:Link
   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
   RX packets:489 errors:0 dropped:0 overruns:0 frame:0
   TX packets:60 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:1000
   RX bytes:41460 (41.4 KB)  TX bytes:10332 (10.3 KB)

  lo        Link encap:Local Loopback
   inet addr:127.0.0.1  Mask:255.0.0.0
   inet6 addr: ::1/128 Scope:Host
   UP LOOPBACK RUNNING  MTU:65536  Metric:1
   RX packets:60 errors:0 dropped:0 overruns:0 frame:0
   TX packets:60 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:0
   RX bytes:4966 

Re: [lxc-users] Networking in Ubuntu with 2 ip failover in LXC

2014-08-13 Thread Tamas Papp

Resending..


On 08/13/2014 02:27 PM, Tamas Papp wrote:

host machine:

auto br0
iface br0 inet static
bridge_ports eth0
bridge_stp off
bridge_maxwait 0
bridge_fd 0
address 94.23.237.216
netmask 255.255.255.0
network 94.23.237.0
broadcast 94.23.237.255
gateway 94.23.237.254


Don't setup br0:0


Remove the following entries from CONTAINER/config:

lxc.network.ipv4 = 91.121.239.228/32
lxc.network.ipv4.gateway = 91.121.239.254



guest machine:

auto eth0
iface eth0 inet static
address 91.121.239.228
netmask 255.255.255.0
gateway 94.23.237.254


This should work.


tamas


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking in Ubuntu with 2 ip failover in LXC

2014-08-13 Thread m.byryn1u


On 2014-08-13 16:57, Fajar A. Nugraha wrote:

On Wed, Aug 13, 2014 at 7:34 PM, Tamas Papp tom...@martos.bme.hu wrote:

Resending..


On 08/13/2014 02:27 PM, Tamas Papp wrote:

host machine:

auto br0
iface br0 inet static
 bridge_ports eth0
 bridge_stp off
 bridge_maxwait 0
 bridge_fd 0
 address 94.23.237.216
 netmask 255.255.255.0
 network 94.23.237.0
 broadcast 94.23.237.255
 gateway 94.23.237.254


Don't setup br0:0


Remove the following entries from CONTAINER/config:

lxc.network.ipv4 = 91.121.239.228/32
lxc.network.ipv4.gateway = 91.121.239.254



guest machine:

auto eth0
iface eth0 inet static
 address 91.121.239.228
 netmask 255.255.255.0
 gateway 94.23.237.254


This should work.

No, that won't work.

94.23.237.254 is not part of 91.121.239.0/24.

@Bryn, how is the ROUTER (i.e. 94.23.237.254) setup? Is it configured
to route the additional IP (91.121.239.228) thru host's IP
(94.23.237.216), the way some dedicated server provider does (e.g.
serverloft)?

If yes, then the EASY way would be to put 91.121.239.228 as an alias
in host's interface (I'd just use eth0, no need to use a bridge there)
and setup a static NAT to whatever IP the container has (e.g.
10.0.3.251, connected to host's lxcbr0 bridge)
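
A rough sketch of that alias-plus-NAT idea, reusing the addresses
already mentioned in this thread (91.121.239.228 as the failover IP,
10.0.3.251 as the container's lxcbr0 address); the iptables rules are
an untested illustration:

# host: carry the failover IP as an alias on eth0
ip addr add 91.121.239.228/32 dev eth0
# inbound: anything addressed to the failover IP goes to the container
iptables -t nat -A PREROUTING -d 91.121.239.228 -j DNAT --to-destination 10.0.3.251
# outbound: the container's traffic leaves stamped with the failover IP
iptables -t nat -A POSTROUTING -s 10.0.3.251 -j SNAT --to-source 91.121.239.228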


Hey,

It's a dedicated server at OVH.
I have one IP, 94.23.237.216, and I bought one more, a so-called
failover IP, 91.121.239.228. You say to add it as an alias and set up
static NAT. But what about 2 failover IPs?


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking in Ubuntu with 2 ip failover in LXC

2014-08-13 Thread Fajar A. Nugraha
On Wed, Aug 13, 2014 at 10:08 PM, m.byryn1u m.bry...@gmail.com wrote:

 On 2014-08-13 16:57, Fajar A. Nugraha wrote:

 On Wed, Aug 13, 2014 at 7:34 PM, Tamas Papp tom...@martos.bme.hu wrote:

 Resending..


 On 08/13/2014 02:27 PM, Tamas Papp wrote:

 host machine:

 auto br0
 iface br0 inet static
  bridge_ports eth0
  bridge_stp off
  bridge_maxwait 0
  bridge_fd 0
  address 94.23.237.216
  netmask 255.255.255.0
  network 94.23.237.0
  broadcast 94.23.237.255
  gateway 94.23.237.254


 Don't setup br0:0


 Remove the following entries from CONTAINER/config:

 lxc.network.ipv4 = 91.121.239.228/32
 lxc.network.ipv4.gateway = 91.121.239.254



 guest machine:

 auto eth0
 iface eth0 inet static
  address 91.121.239.228
  netmask 255.255.255.0
  gateway 94.23.237.254


 This should work.

 No, that won't work.

 94.23.237.254 is not part of 91.121.239.0/24.

 @Bryn, how is the ROUTER (i.e. 94.23.237.254) setup? Is it configured
 to route the additional IP (91.121.239.228) thru host's IP
 (94.23.237.216), the way some dedicated server provider does (e.g.
 serverloft)?

 If yes, then the EASY way would be to put 91.121.239.228 as an alias
 in host's interface (I'd just use eth0, no need to use a bridge there)
 and setup a static NAT to whatever IP the container has (e.g.
 10.0.3.251, connected to host's lxcbr0 bridge)

 Hey,

 It's a dedicated server at OVH.

Then ask OVH how to use that IP.

 I have one IP, 94.23.237.216, and I bought one more, a so-called
 failover IP, 91.121.239.228. You say to add it as an alias and set up
 static NAT. But what about 2 failover IPs?


If my guess is right, it's similar to serverloft. They will say
"simply put it as an IP alias/additional IP on your server".

As in, the additional IP is routed to ONE of your server's IPs.
Permanently. It can't be used on another server. Thus, there can be NO
failover.

It's not a standard failover setup where two or more physical servers
each have an IP in the same network segment, and you can have one or
more virtual IP for your services that can fail over to any of the
servers.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking in Ubuntu with 2 ip failover in LXC

2014-08-13 Thread m.byryn1u


On 2014-08-13 17:24, Fajar A. Nugraha wrote:

On Wed, Aug 13, 2014 at 10:08 PM, m.byryn1u m.bry...@gmail.com wrote:

On 2014-08-13 16:57, Fajar A. Nugraha wrote:


On Wed, Aug 13, 2014 at 7:34 PM, Tamas Papp tom...@martos.bme.hu wrote:

Resending..


On 08/13/2014 02:27 PM, Tamas Papp wrote:

host machine:

auto br0
iface br0 inet static
  bridge_ports eth0
  bridge_stp off
  bridge_maxwait 0
  bridge_fd 0
  address 94.23.237.216
  netmask 255.255.255.0
  network 94.23.237.0
  broadcast 94.23.237.255
  gateway 94.23.237.254


Don't setup br0:0


Remove the following entries from CONTAINER/config:

lxc.network.ipv4 = 91.121.239.228/32
lxc.network.ipv4.gateway = 91.121.239.254



guest machine:

auto eth0
iface eth0 inet static
  address 91.121.239.228
  netmask 255.255.255.0
  gateway 94.23.237.254


This should work.

No, that won't work.

94.23.237.254 is not part of 91.121.239.0/24.

@Bryn, how is the ROUTER (i.e. 94.23.237.254) setup? Is it configured
to route the additional IP (91.121.239.228) thru host's IP
(94.23.237.216), the way some dedicated server provider does (e.g.
serverloft)?

If yes, then the EASY way would be to put 91.121.239.228 as an alias
in host's interface (I'd just use eth0, no need to use a bridge there)
and setup a static NAT to whatever IP the container has (e.g.
10.0.3.251, connected to host's lxcbr0 bridge)


Hey,

It's a dedicated server at OVH.

Then ask OVH how to use that IP.


I have one IP, 94.23.237.216, and I bought one more, a so-called
failover IP, 91.121.239.228. You say to add it as an alias and set up
static NAT. But what about 2 failover IPs?


If my guess is right, it's similar to serverloft. They will say
simply put it as an IP alias/additional IP on your server.

As in, the additional IP is routed to ONE of your server's IP.
Permanently. Can't be used on other server. Thus, there can be NO
failover.

It's not a standard failover setup where two or more physical servers
each have an IP in the same network segment, and you can have one or
more virtual IP for your services that can fail over to any of the
servers.

I had a server on FreeBSD 10 before, with 2 jails and 1 failover IP per
jail; it worked well.

OVH says:

post-up /sbin/ifconfig eth0:X IP.FAIL.OVER netmask 255.255.255.255 broadcast IP.FAIL.OVER
post-down /sbin/ifconfig eth0:X down

Or as bridge:


/etc/network/interfaces
auto lo eth0
iface lo inet loopback
iface eth0 inet static
address IP.FAIL.OVER
netmask 255.255.255.255
broadcast IP.FAIL.OVER
post-up route add IP.SERWERA.254 dev eth0
post-up route add default gw IP.SERWERA.254
post-down route del IP.SERWERA.254 dev eth0
post-down route del default gw IP.SERWERA.254

That's why I think the method from 2 posts before should work.
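
Adapted to the container, that second (routed) OVH variant would mean
putting roughly this into the container's /etc/network/interfaces; a
sketch only, filling in the failover IP and host gateway from this
thread:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 91.121.239.228
    netmask 255.255.255.255
    broadcast 91.121.239.228
    # the gateway sits outside the /32, so add a device route to it first
    post-up route add 94.23.237.254 dev eth0
    post-up route add default gw 94.23.237.254
    post-down route del default gw 94.23.237.254
    post-down route del 94.23.237.254 dev eth0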
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Networking in Ubuntu with 2 ip failover in LXC

2014-08-13 Thread m.byryn1u


On 2014-08-13 18:43, Fajar A. Nugraha wrote:

On Wed, Aug 13, 2014 at 10:55 PM, m.byryn1u m.bry...@gmail.com wrote:

94.23.237.254 is not part of 91.121.239.0/24.

@Bryn, how is the ROUTER (i.e. 94.23.237.254) setup? Is it configured
to route the additional IP (91.121.239.228) thru host's IP
(94.23.237.216), the way some dedicated server provider does (e.g.
serverloft)?

If yes, then the EASY way would be to put 91.121.239.228 as an alias
in host's interface (I'd just use eth0, no need to use a bridge there)
and setup a static NAT to whatever IP the container has (e.g.
10.0.3.251, connected to host's lxcbr0 bridge)

Hey,

It's a dedicated server at OVH.

Then ask OVH how to use that IP.

I have one IP, 94.23.237.216, and I bought one more, a so-called
failover IP, 91.121.239.228. You say to add it as an alias and set up
static NAT. But what about 2 failover IPs?

If my guess is right, it's similar to serverloft. They will say
simply put it as an IP alias/additional IP on your server.

As in, the additional IP is routed to ONE of your server's IP.
Permanently. Can't be used on other server. Thus, there can be NO
failover.

It's not a standard failover setup where two or more physical servers
each have an IP in the same network segment, and you can have one or
more virtual IP for your services that can fail over to any of the
servers.

I had a server on FreeBSD 10 before, with 2 jails and 1 failover IP per
jail; it worked well.

To prevent confusion and wasting everyone's time, what do you mean by
"failover IP"?

That simply looks like an additional IP (i.e. you only have one server),
not an IP that can be failed over between two physical servers.



OVH says :

post-up /sbin/ifconfig eth0:X IP.FAIL.OVER netmask 255.255.255.255 broadcast IP.FAIL.OVER
post-down /sbin/ifconfig eth0:X down

Or as bridge:


/etc/network/interfaces
auto lo eth0
iface lo inet loopback
iface eth0 inet static
address IP.FAIL.OVER
netmask 255.255.255.255
broadcast IP.FAIL.OVER
post-up route add IP.SERWERA.254 dev eth0
post-up route add default gw IP.SERWERA.254
post-down route del IP.SERWERA.254 dev eth0
post-down route del default gw IP.SERWERA.254

If that snippet works, you can use it as the container's
/etc/network/interfaces. ONLY put the additional IP there. Do NOT put
the IP address in the host's bridge.

This LXC doesn't work. I don't know what to do. I don't know how to
add a secondary IP to the LXC container.

These are the default settings after installing the dedicated server (Ubuntu 14.04):

root@Host:~# ifconfig
eth0  Link encap:Ethernet  HWaddr 00:30:48:bd:ee:08
  inet addr:94.23.237.216  Bcast:94.23.237.255 Mask:255.255.255.0
  inet6 addr: 2001:41d0:2:70d8::/64 Scope:Global
  inet6 addr: fe80::230:48ff:febd:ee08/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:10138 errors:0 dropped:11 overruns:0 frame:0
  TX packets:4474 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:8616516 (8.6 MB)  TX bytes:1121846 (1.1 MB)
  Interrupt:16 Memory:fbce-fbd0

lo        Link encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:65536  Metric:1
  RX packets:238 errors:0 dropped:0 overruns:0 frame:0
  TX packets:238 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:29523 (29.5 KB)  TX bytes:29523 (29.5 KB)

lxcbr0    Link encap:Ethernet  HWaddr fe:a1:3c:60:a1:a7
  inet addr:10.0.3.1  Bcast:10.0.3.255  Mask:255.255.255.0
  inet6 addr: fe80::f025:12ff:fe20:a50d/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:193 errors:0 dropped:0 overruns:0 frame:0
  TX packets:64 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:13928 (13.9 KB)  TX bytes:8339 (8.3 KB)

vethNW6I9M Link encap:Ethernet  HWaddr fe:a1:3c:60:a1:a7
  inet6 addr: fe80::fca1:3cff:fe60:a1a7/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:62 errors:0 dropped:0 overruns:0 frame:0
  TX packets:23 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:5656 (5.6 KB)  TX bytes:2422 (2.4 KB)

I created the first LXC with lxc-create -t ubuntu -n Oksymoron and it
works fine. Now I want to add the additional IP which I bought, 91.121.239.228.

I tried the quick alias method first:
/sbin/ifconfig eth0:X IP.FAIL.OVER netmask 255.255.255.255 broadcast IP.FAIL.OVER

In my case:
/sbin/ifconfig eth0:0 91.121.239.228 netmask 255.255.255.255 broadcast 91.121.239.228


I tried ping and connecting over SSH, and it works perfectly.

eth0:0    Link encap:Ethernet  HWaddr 00:30:48:bd:ee:08
  inet addr:91.121.239.228  Bcast:91.121.239.228 
Mask:255.255.255.255

  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  Interrupt:16 

Re: [lxc-users] Networking in LXC

2014-06-10 Thread Ajith Adapa
Thanks for the reply @Fajar.

(From Host)
# lxc-attach -n root -- echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin

(Inside container)
# ifconfig
-bash: ifconfig: command not found
# echo $PATH
/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin

As you mentioned there is a difference in PATH. I am using the default
config to create an LXC container. Is it an issue with Fedora, or do we
have to set it manually every time a container is created?
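
One detail that may skew the check above: an unquoted $PATH is expanded
by the host's shell before lxc-attach ever runs, so the first command
can report the host's PATH rather than the container's. A variant that
defers the expansion (the sh -c wrapper is my addition, not something
from the earlier mail):

lxc-attach -n root -- sh -c 'echo $PATH'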



I have a question about binding a physical interface to an LXC container.

As per the instructions provided, we create a file at /run/netns, say
user1, bind-mount the container process's network namespace onto it,
and then attach interfaces to the network namespace user1 using the ip
command.

If I restart the container, it starts with a new process ID. If I then
try to mount the same network namespace user1 onto the new process, I
do not see the physical interfaces inside the container.

Should I delete the network namespace user1 when the container is
stopped and create it again when the container restarts, to make it
work?
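
What I have in mind is roughly this after every restart (user1 is the
name from above; mycontainer and eth1 are placeholders; a sketch, not a
tested recipe):

# container stopped: the old namespace is gone, drop the stale handle
umount /run/netns/user1
rm /run/netns/user1
# container restarted: bind the new init PID's namespace under the same name
PID=$(lxc-info -n mycontainer -p | awk '{print $2}')
mount --bind /proc/$PID/ns/net /run/netns/user1
# a physical interface falls back to the host namespace when the old
# namespace dies, so it has to be moved in again
ip link set eth1 netns user1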

Regards,
Ajith



On Tue, Jun 10, 2014 at 2:48 PM, Fajar A. Nugraha l...@fajar.net wrote:
 On Tue, Jun 10, 2014 at 3:12 PM, Ajith Adapa ajith.ad...@gmail.com wrote:
 Hi,

 First I need to really thank the community for helping me out in
 starting LXC container on fedora 20.

 I have some basic questions regarding networking in LXC.

 1. Is there any tutorial or doc regarding support for various network
 options in lxc container ?

 Probably 
 http://manpages.ubuntu.com/manpages/trusty/man5/lxc.container.conf.5.html
 ?

 IIRC Fedora rawhide has lxc 1.0.3, so if you update to that you should
 have the same manpage. Otherwise you'd still be using lxc-0.9.0 which
 might be missing some features.


 2. When I login into container and try ifconfig command I am getting
 error saying command not found but I am able to run the same command
 using lxc-attach. Any reason why ?


 incorrect PATH? Try

 (from the host) lxc-attach -n CONTAINER_NAME -- echo $PATH
 (inside the container) echo $PATH

 in your case those two should display different output

 3. Is it possible to attach a physical interface to lxc container
 which is in running state ? Currently we need to set the configuration
 in the config file and restart the container.

 There's probably an easier way. The long way would be like this:

 # lxc-start -d -n template

 # lxc-info -n template
 Name:   template
 State:  RUNNING
 PID:8320  <= this is what we need, the PID of a process
 inside the container
 CPU use:0.93 seconds
 BlkIO use:  6.28 MiB
 Memory use: 18.72 MiB
 KMem use:   0 bytes
 Link:   vethDUGP01
  TX bytes:  1.24 KiB
  RX bytes:  84 bytes
  Total bytes:   1.32 KiB

 # mkdir -p /run/netns

 # touch /run/netns/8320 <= this one could be any name you want, which
 would then be used by ip ... netns

 # mount --bind /proc/8320/ns/net /run/netns/8320

 # ip link show dummy1
 8: dummy1: <BROADCAST,NOARP> mtu 1500 qdisc noqueue state DOWN mode
 DEFAULT group default
 link/ether 76:c6:a2:7f:c6:57 brd ff:ff:ff:ff:ff:ff

 # ip link set dummy1 netns 8320

 # ip link show dummy1
 Device dummy1 does not exist.

 # lxc-attach -n template -- ip link show dummy1
 8: dummy1: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode
 DEFAULT group default
 link/ether 76:c6:a2:7f:c6:57 brd ff:ff:ff:ff:ff:ff


 --
 Fajar


 Regards,
 Ajith
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users